\section*{Combining ozone and UV-light for energy efficient removal of \ch{SO_x} in low-temperature flue gas}
\addcontentsline{toc}{section}{Combining ozone and UV-light for energy efficient removal of \ch{SO_x} in low-temperature flue gas}
\noindent
Reference number: 19-422 \\ \\
Marc Rovira, Klas Engvall, and Christophe Duwig \\
\textit{ Department of Chemical Engineering, KTH Royal Institute of Technology, SE-10044 Stockholm, Sweden} \\
\noindent
\textbf{ABSTRACT}
The potential of a combined ozone (\ch{O3}) and ultraviolet (UV) light reactor for gas-phase oxidation of flue gas pollutants has been evaluated in this work. For this, numerical simulations of a continuously stirred tank reactor (CSTR) have been performed for the analysis of sulfur dioxide (\ch{SO2}) removal. Chemical kinetics have been coupled with modeling of the radiation intensity distribution produced by a UV lamp using five different radiation models. A grid convergence study and an optimization of the number of discrete point light sources employed in some of the radiation models were carried out. The effect of residence time and reactor size on the removal of \ch{SO2} has also been analyzed. Overall, results for the abatement of \ch{SO2} in an \ch{O3}-UV reactor under optimized conditions suggest that this approach is suitable for sustainable air cleaning applications.
\subsection*{1. Introduction}
\addcontentsline{toc}{subsection}{1. Introduction}
There is currently a wide consensus that poor air quality leads to adverse health effects in humans. The World Health Organization estimates that 90\% of the world population lives in areas that do not meet their air quality guidelines and attributes more than 3 million premature deaths each year to outdoor airborne emissions \cite{world_health_organization_ambient_2016}. One of the six priority pollutants considered by international air standards is the family of gaseous sulfur oxides (\ch{SO_x}), of which sulfur dioxide (\ch{SO2}) is found in much greater concentrations \cite{curtis_adverse_2006}. Health effects of \ch{SO2} exposure are more pronounced in susceptible populations such as children and the elderly. For children, increased asthma \cite{lee_air_2002, penardmorand_long-term_2005}, rhinitis \cite{penardmorand_long-term_2005}, pneumonia and acute bronchitis \cite{barnett_air_2005} hospitalizations are linked to higher \ch{SO2} levels. The environment is also affected by high \ch{SO2} emissions, mainly through acid rain.
Since the USA passed the first version of the Clean Air Act in 1963, there has been a global effort to reduce \ch{SO2} pollution from human sources, as anthropogenic emissions account for close to 99\% of total emissions \cite{wang_123_2018}. The main industrial sources include vehicle and refinery emissions as well as emissions from the burning of coal \cite{godish_air_2014}. Furthermore, the presence of \ch{SO2} in exhaust gases is proportional to the concentration of sulfur in the fuel employed. Although some industries, such as the automotive industry, have been quick to tackle \ch{SO2} emissions, others like the shipping industry are still large pollutant sources. In 2014, the marine industry was found to be responsible for 13\% of global \ch{SO2} emissions \cite{smith_third_2014}. Although many vessels have embraced low-sulfur operation, upcoming legislation is bound to be even stricter. The regulations in MARPOL Annex VI have been adopted by the International Maritime Organisation (IMO) and define the extent of admissible air pollution from ships \cite{kattner_monitoring_2015}. Taking effect in 2020, the revised MARPOL Annex VI reduces the ``Global Sulfur Cap'' from 3.5\% to 0.50\%, and to 0.1\% in sulfur Emission Control Areas \cite{animah_compliance_2018}. Hence, the shipping industry will require drastic measures to meet these targets. A similar trend is seen in industrial flue gas emissions \cite{gholami_technologies_2020, sun_abatement_2016}.
Apart from employing low-sulfur fuels when possible, most industries remove \ch{SO2} by employing gas cleaning systems. In these solutions \ch{SO2} is treated along with \ch{NO_x}, although most implementations treat each pollutant individually. For flue gas, independent denitration (i.e. selective catalytic reduction (SCR) with ammonia) and desulfurization (i.e. wet flue gas desulfurization, WFGD) systems result in technologies that are expensive to operate and install \cite{zhao_simultaneous_2010, liu_photochemical_2011}. Hence, multi-pollutant removal solutions are quickly becoming a prominent research topic. One such technology, which uses oxidation by ozone (\ch{O3}) to treat exhaust gases, was recently reviewed by Lin \textit{et al.} \cite{lin_flue_2020}. Although this method achieves high denitrification efficiency, its main limitation is that \ch{O3} selectively oxidizes \ch{NO} \cite{sun_simultaneous_2011}. Hence, the direct oxidation of \ch{SO2} by ozone is not significant. Current solutions employ pre-ozonation to increase the oxidation state of \ch{NO}, and thus its solubility, to improve removal in a downstream wet scrubber \cite{lin_flue_2020}. Although this process can achieve high removal efficiencies, the treatment of the residual liquid employed in the scrubbing process increases operational costs and limits its applicability. Hence, although oxidative removal of \ch{NO} by \ch{O3} is feasible, other alternatives for the simultaneous oxidation of \ch{SO2} are required.
Advanced oxidation processes (AOPs) that use hydroxyl radicals (\ch{^.OH}) for water purification and disinfection have been employed for wastewater treatment for over three decades \cite{deng_advanced_2015, chong_recent_2010}. Recently, this knowledge has been applied to the gas-phase removal of exhaust gas pollutants with notable success \cite{xie_simultaneous_2019, liu_simultaneous_2018, hao_establishment_2016}. Here, direct exposure of the primary oxidant, usually hydrogen peroxide (\ch{H2O2}), to ultraviolet (UV) light at a particular wavelength creates \ch{^.OH}. This radical is then able to oxidize and remove multiple pollutants simultaneously and effectively. Benefits of gas-phase AOPs include low operational costs, little sensitivity to temperature and relative humidity conditions, high oxidizing power, and the lack of formation of secondary undesired products \cite{a_adnew_gas-phase_2016, liu_photochemical_2011}.
An alternative to \ch{H2O2} as the primary oxidant for gas-phase AOPs is \ch{O3}. This has the advantage of directly oxidizing \ch{NO} while indirectly oxidizing \ch{SO2} when exposed to UV light. Some studies that dealt with industrial pollution removal in gas-phase AOPs with \ch{O3} include the work of Montecchio \textit{et al.} \cite{montecchio_development_2018} for volatile organic compounds (VOCs) and the work of the research group of M. Johnson \cite{johnson_gas-phase_2014,a_adnew_gas-phase_2016, meusinger_treatment_2017} for reduced sulfur compounds. However, to the best of the authors' knowledge, there are no studies that investigate the potential for radical oxidation of \ch{SO2} with \ch{O3} via UV light in the context of multi-pollutant removal solutions for cooperative denitration and desulfurization. Hence, in the present work numerical simulations of \ch{SO2} abatement by \ch{O3} and UV light in an idealized chemical reactor will be performed. In the following sections, details of the modeling approach will be provided along with the results obtained and a discussion of the conclusions that can be drawn from them.
\subsection*{2. Methods for simulation and analysis}
\addcontentsline{toc}{subsection}{2. Methods for simulation and analysis}
As described by Ducoste and Alpert \cite{ducoste_computational_2015}, simulating a UV-driven AOP system requires modeling of three different physical aspects that are interconnected. These are the modeling of fluid dynamics, chemical kinetics, and UV light distribution. In the following, the implementation of the modeling of each element will be described individually after presenting the reactor configuration that will be modeled.
\subsubsection*{2.1. Reactor configuration and conditions}
\addcontentsline{toc}{subsubsection}{2.1. Reactor configuration and conditions}
The geometry of the modeled reactor is an annular cylinder where the reactor and the lamp have the same length. Dimensions for it have been taken from Montecchio \textit{et al.} \cite{montecchio_development_2018}. Fig. \ref{fig:2d} shows an illustration of the cross-section of the reactor. The effect of reactor geometry has also been studied: as in the work by Montecchio \textit{et al.} \cite{montecchio_development_2018}, three different reactors have been analyzed. All three reactors share the same conditions and differ only in their outer radius. The relevant conditions for the three reactors studied, including the geometry and thermodynamic parameters, are presented in Table \ref{tab:params}.
\begin{figure}[h!]
\centering
\includegraphics[height=2in]{./fig/2dreactor.pdf}
\caption{Sketch of the two-dimensional cross-section of the annular UV reactor employed. Not to scale.}
\label{fig:2d}
\end{figure}
\begin{table}[!htb]
\caption{Parameters and conditions for the simulated CSTR reactor. All conditions are adapted from Montecchio \textit{et al.} \cite{montecchio_development_2018}. }
\centering
\begin{tabular}{p{6.5 cm} c c c}
\hline \hline
Parameter & Symbol & Value & Units \\ \hline
Lamp and reactor length & $L$ & 0.28 & m \\
Lamp sleeve (inner radius) & $r_{in}$ & 0.0115 & m \\
Small reactor radius & $r_{A,out}$ & 0.0425 & m \\
Medium reactor radius & $r_{B,out}$ & 0.0675 & m \\
Big reactor radius & $r_{C,out}$ & 0.0820 & m \\
Low-pressure UV lamp power & $P$ & 17 & W \\
UV lamp efficiency at $\lambda=254$ nm & $\eta_{\lambda=254}$ & 0.33 & - \\
Operating temperature & $T$ & 298 & K \\
Operating pressure & $p$ & 1 & atm \\ \hline \hline
\end{tabular}
\label{tab:params}
\end{table}
\subsubsection*{2.2. Reactor modeling}
\addcontentsline{toc}{subsubsection}{2.2. Reactor modeling}
In the present modeling approach, the fluid dynamics of the reactor were simplified to consider ideal mixing. By doing so, the concentrations of all species are assumed to be homogeneous throughout the reactor, often referred to as a continuously stirred tank reactor (CSTR). This is done to reduce the overall complexity of the implementation, which in turn reduces both the number of unknown variables in the model and the time required to produce results. This is advantageous in order to evaluate the sensitivity of the model to different initial conditions, geometries, chemical kinetics, and UV models. Furthermore, although the mixing is modeled as perfect, this approach provides an upper bound to what can be expected in more complex simulations and experimental studies.
In this work, the CSTR model as implemented in Cantera was employed \cite{goodwin_cantera_2016}. Cantera is an open-source package that aids in solving numerical chemistry problems and can be executed in Python 3 \cite{van_rossum_python_2009}. This software was used to solve the time-dependent equations that govern the evolution of the chemical species and the thermodynamic state of the reactor.
\begin{figure}[h!]
\centering
\includegraphics[height=2.5in]{./fig/cstr.png}
\caption{Illustration of the arrangement simulated for the CSTR reactor in Cantera \cite{goodwin_cantera_2016}.}
\label{fig:cstr}
\end{figure}
A sketch of the reactor network simulated can be seen in Fig. \ref{fig:cstr}. Firstly, an infinite reservoir of the gas mixture is defined. The composition of the mixture tank is set by the user through the initial molar fractions of each of the species that are to react together. This mixture is considered perfectly mixed and is simulated as a perfect gas. In the present case, the simplest simulation included only air (as \ch{N2} and \ch{O2}), \ch{SO2}, \ch{O3} and water vapour (\ch{H2O}). Initial temperature and pressure conditions also have to be defined at this stage and were set to 298 K and 1 atm. Upstream of the tank a mass flow controller object is defined, which is fully described by the user-defined residence time once the volume of the reactor tank is fixed. It is then in the reactor object where the time loop is simulated until the user-defined maximum simulation time. Only here are reactions allowed to occur, and hence only here does the composition change with time. Downstream of the reactor, a pressure valve is simulated by setting a valve coefficient that determines how closely the defined reactor pressure is maintained. Finally, a capture tank object is defined at the end of the network to store the resulting species from the reactions occurring in the stirred reactor.
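Outside of Cantera, the behaviour of this network can be illustrated with the ideal-CSTR species balance, $d[\ch{A}]/dt = ([\ch{A}]_{in} - [\ch{A}])/\tau + r_A$. The following is a minimal pure-Python sketch (not the Cantera implementation used in this work), assuming a single hypothetical pseudo-first-order loss with rate constant $k$:

```python
def cstr_species_balance(c_in, tau, k, t_end, dt=1e-3):
    """Forward-Euler integration of d[A]/dt = (c_in - c)/tau - k*c.

    c_in: inlet concentration (mol/m^3); tau: residence time (s);
    k: assumed pseudo-first-order loss rate (1/s). All values hypothetical.
    """
    c, t = 0.0, 0.0  # reactor initially contains no A
    while t < t_end:
        c += dt * ((c_in - c) / tau - k * c)
        t += dt
    return c

# The steady state approaches c_in / (1 + k * tau)
c_ss = cstr_species_balance(c_in=1.0, tau=10.0, k=0.5, t_end=200.0)
```

At steady state the balance reduces to $[\ch{A}] = [\ch{A}]_{in}/(1 + k\tau)$, which makes the role of the residence time $\tau$ explicit.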
\subsubsection*{2.3. Chemical kinetic modeling}
\addcontentsline{toc}{subsubsection}{2.3. Chemical kinetic modeling}
The information regarding the reactions that occur in the gas mixture in Cantera is stored in a kinetic mechanism. These files are the main chemical input to the model and have to be in the pre-defined Cantera format (CTI). The mechanism file stores the elements and species of the gas as well as the thermodynamic properties of each species. Following this, the chemical equations for the reactions between all defined species are provided. Each reaction is then modeled with a modified Arrhenius equation and thus the values for the pre-exponential factor $A$, the exponential factor $n$, and the activation energy for the reaction $E_a$ must be provided. This allows Cantera to evaluate the reaction rate coefficients and numerically solve the system of ordinary differential equations (ODEs) describing the reaction mechanism.
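As an illustration of this input, the modified Arrhenius expression can be evaluated directly; the sketch below (plain Python, not a Cantera call) uses reaction R7 from Table \ref{tab:mech}, whose activation energy is tabulated as $E_a/\mathrm{R}$ in kelvin:

```python
import math

def rate_coefficient(A, n, Ea_over_R, T):
    """Modified Arrhenius form k(T) = A * T**n * exp(-(E_a/R)/T).

    A carries the order-dependent units of the mechanism table;
    Ea_over_R and T are in kelvin.
    """
    return A * T**n * math.exp(-Ea_over_R / T)

# R7: OH + O3 -> HO2 + O2, evaluated at the 298 K operating temperature
k_R7 = rate_coefficient(A=1.70e-12, n=0, Ea_over_R=940, T=298)
# k_R7 is on the order of 7e-14 cm^3 molecule^-1 s^-1
```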
\begin{table}[!htb]
\caption{Reduced kinetic mechanism based on the mechanism proposed by Meusinger \textit{et al.} \cite{meusinger_treatment_2017}. Here the units of the pre-exponential factor $A$ depend on the order of the reaction: [$\mathrm{s}^{-1}$] for first order reactions, [$\mathrm{cm}^3 \; \mathrm{molecule}^{-1} \; \mathrm{s}^{-1}$] for second order reactions and [$\mathrm{cm}^6 \; \mathrm{molecule}^{-2} \; \mathrm{s}^{-1}$] for third order reactions. R13 represents the photolysis of ozone by UV light. }
\centering
\begin{tabular}{c p{7cm} c c c}
\hline \hline
ID & Reaction & $A$ & $n$ & $E_a/\mathrm{R}$ [K] \\ \hline
R1 & \ch{O + O3 -> 2 O2} & 8.00E-12 & 0 & 2060 \\
R2 & \ch{O(^1D) + H2O -> 2 ^.OH} & 1.63E-10 & 0 & -65 \\
R3 & \ch{O(^1D) + N2 -> O + N2} & 1.79E-11 & 0 & -110 \\
R4 & \ch{O + ^.OH -> O2 + H} & 2.40E-11 & 0 & -110 \\
R5 & \ch{O + HO2 -> ^.OH + O2} & 2.70E-11 & 0 & -224 \\
R6 & \ch{H + O2 + M -> HO2 + M} & 2.17E-29 & -1.1 & 0 \\
R7 & \ch{^.OH + O3 -> HO2 + O2} & 1.70E-12 & 0 & 940 \\
R8 & \ch{^.OH + HO2 -> H2O + O2} & 4.80E-11 & 0 & -250 \\
R9 & \ch{^.OH + SO2 -> HSO3} & 7.07E-11 & -0.7 & 0 \\
R10 & \ch{O + SO2 + M -> SO3 + M} & 4.00E-32 & 0 & 1000 \\
R11 & \ch{SO3 + H2O -> H2SO4} & 1.20E-15 & 0 & 0 \\
R12 & \ch{HSO3 + O2 -> HO2 + SO3} & 1.30E-12 & 0 & 330 \\
R13 & \ch{O3 ->[h$\nu$] O(^1D) + O2 } & $k_{\mathrm{UV}}$ & 0 & 0 \\ \hline \hline
\end{tabular}
\label{tab:mech}
\end{table}
The kinetic mechanism employed in the present work was originally developed by Meusinger \textit{et al.} \cite{meusinger_treatment_2017}. This reduced mechanism retains only the most relevant reactions among the 150 reactions provided, as predicted by their model, for a total of 13 reactions. The reactions and Arrhenius constants for the reduced mechanism are presented in Table \ref{tab:mech}.
Reactions R1-R12 presented in Table \ref{tab:mech} are thermal reactions while R13 is a photolytic reaction. Although the reaction rate of thermal reactions can be modeled by a modified Arrhenius equation, photolysis reactions cannot generally be modeled in the same way. If we consider a general photochemical reaction (\ch{A ->[h$\nu$] B}), it can be decomposed into three steps \cite{aillet_photochemical_2013}. Here, $h$ is the Planck constant ($6.626 \times 10^{-34}\;\mathrm{J\:s\:photon^{-1}}$) and $\nu$ is the frequency of the radiated UV light. Hence, $Q_{\lambda}=h\nu$ is the energy of one photon at a given wavelength. Firstly, an activation step (\ch{A ->[h$\nu$] A^*}) occurs when the initial ground state species absorbs a photon and produces an electronically excited version of the original species. Once this activated species is created it can take two different paths. A deactivation step (\ch{A^* -> A}) occurs when the additional energy from the photon absorption is lost (by either a radiative mechanism such as fluorescence or by a non-radiative mechanism such as heat loss) and the activated species returns to its original ground state. In the context of photo-activated reactions, this is often a negative outcome as it reduces the efficiency of the overall photochemical reaction. Finally, a reaction step (\ch{A^* -> B}) takes place when the excited species is converted to the desired photochemical product.
The reaction rate for the photochemical activation step $r^{\lambda}_{(\ch{A -> A^*})}$ is directly proportional to the photon energy absorbed by the species \cite{aillet_photochemical_2013}. Here the subscript describes which reaction the reaction rate refers to and the superscript $\lambda$ indicates the wavelength at which the photochemical reaction takes place (usually given in $\mathrm{nm}$). The expression for the reaction rate of the photochemical activation step for species \ch{A} at a certain wavelength $\lambda$ is:
\begin{equation}
r^{\lambda}_{(\ch{A -> A^*})} = e_{\lambda,A}^{\prime} = \frac{\lambda}{N_A h c} e_{\lambda,A} \label{eq:1}
\end{equation}
\noindent{where $c$ is the speed of light ($2.998 \times 10^{8}\;\mathrm{m\:s^{-1}}$) and $N_A$ is the Avogadro number ($6.022 \times 10^{23}\;\mathrm{mol^{-1}}$). The photon-based effective energy absorbed by \ch{A} is $e_{\lambda,A}^{\prime}$ and is often referred to as the local volumetric rate of photon absorption or LVRPA, in $\mathrm{{einstein}\:s^{-1}\:m^{-3}}$, which is equivalent to $\mathrm{mol}$-$\mathrm{photon} \mathrm{\:s^{-1} \:m^{-3}}$ \cite{cassano_photoreactor_1995}. This can be converted to the energy-based effective energy absorbed by \ch{A}, $e_{\lambda,A}$, also known as the local volumetric rate of energy absorption or LVREA (in $\mathrm{{W}\:m^{-3}}$), by multiplying $e_{\lambda,A}^{\prime}$ by the energy contained in one mole of photons at a given wavelength (i.e. $Q_{\lambda,\mathrm{mol}}=N_A h \frac{c}{\lambda}$)}.
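The photon-to-energy bookkeeping above is easy to verify numerically. A short sketch with the physical constants quoted in the text, assuming the 254 nm emission line of the low-pressure lamp:

```python
N_A = 6.022e23   # Avogadro number, mol^-1
h = 6.626e-34    # Planck constant, J s photon^-1
c = 2.998e8      # speed of light, m s^-1

def lvrea_to_lvrpa(e_energy, wavelength):
    """Convert the LVREA (W m^-3) to the LVRPA (einstein s^-1 m^-3)."""
    return wavelength / (N_A * h * c) * e_energy

# Energy carried by one mole of photons at 254 nm: Q = N_A h c / lambda
Q_mol = N_A * h * c / 254e-9  # roughly 4.7e5 J per mole of photons
```

Converting an LVREA numerically equal to $Q_{\lambda,\mathrm{mol}}$ returns exactly one einstein per second per cubic metre, confirming that the two conversions are inverses of one another.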
For the deactivation and reaction steps, their respective reaction rates can be formulated by considering both as first order thermal reactions:
\begin{equation}
r_{(\ch{A^* -> A})} = k_{(\ch{A^* -> A})} [\ch{A^*}] \label{eq:2}
\end{equation}
\begin{equation}
r_{(\ch{A^* -> B})} = k_{(\ch{A^* -> B})} [\ch{A^*}] \label{eq:3}
\end{equation}
\noindent{where $k_{(\ch{A^* -> A})}$ and $k_{(\ch{A^* -> B})}$ are the reaction rate constants for the deactivation and reaction step, respectively, and $[\ch{A^*}]$ is the concentration of species \ch{A^*} in $\mathrm{{mol}\:m^{-3}}$. Putting Eqs. \ref{eq:1}-\ref{eq:3} together, the time evolution of the concentration of species \ch{A^*} can be written as:}
\begin{equation}
\dfrac{d [\ch{A^*}]}{dt} = r^{\lambda}_{(\ch{A -> A^*})} - r_{(\ch{A^* -> A})} - r_{(\ch{A^* -> B})}
\end{equation}
\noindent{It can be assumed that the concentration of the photo-activated intermediate species \ch{A^*} reaches a quasi-steady state and so $\dfrac{d [\ch{A^*}]}{dt} \approx 0$ \cite{aillet_photochemical_2013}. Therefore, the quasi-steady state concentration of species \ch{A^*} can be expressed as:}
\begin{equation}
[\ch{A^*}] = \dfrac{e_{\lambda,A}^{\prime}}{k_{(\ch{A^* -> A})} + k_{(\ch{A^* -> B})}}
\end{equation}
\noindent{Therefore, the rate of consumption of \ch{A}, $r_A$, which equals the rate of formation of \ch{B}, $r_B$, can be written for the overall photochemical reaction as:}
\begin{equation}
- r_A = r_B = k_{(\ch{A^* -> B})} [\ch{A^*}] = \dfrac{k_{(\ch{A^* -> B})}}{k_{(\ch{A^* -> A})} + k_{(\ch{A^* -> B})}} e_{\lambda,A}^{\prime} = \phi_{\lambda} e_{\lambda,A}^{\prime} \label{eq:6}
\end{equation}
\noindent{where $\phi_{\lambda}$ is the quantum yield for the overall reaction at a given wavelength.}
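Eq. \ref{eq:6} identifies the quantum yield with the branching ratio between the reactive and deactivating channels of \ch{A^*}. A numeric sketch with hypothetical rate constants:

```python
def quantum_yield(k_reaction, k_deactivation):
    """phi = k_(A*->B) / (k_(A*->A) + k_(A*->B)), from the quasi-steady state of A*."""
    return k_reaction / (k_deactivation + k_reaction)

# Hypothetical channels: reaction nine times faster than deactivation
phi = quantum_yield(k_reaction=9.0, k_deactivation=1.0)  # -> 0.9
```

In the limit of no deactivation the yield tends to one, i.e. every absorbed photon produces a photochemical change.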
Another description of the quantum yield can be obtained by defining the overall photochemical reaction rate in such a way that it satisfies the first and second laws of photochemistry \cite{bolton_rethinking_2015}. The first law, also known as the Grotthus-Draper law, states that only the light absorbed by a species can effectively produce a photochemical change. The second law, known as the Stark-Einstein law, states that each photon can cause photochemical change in at most one single molecule, namely the one which has absorbed it. Therefore, the primary quantum yields of a molecule must sum to one. Hence, the quantum yield itself is defined as the number of molecules of the reactant species \ch{A} consumed per unit time, $dn(\ch{A})/dt$, over the total number of photons absorbed per unit time, $\Phi_p^{abs}$ \cite{t_oppenlander_photochemical_2003}:
\begin{equation}
\phi_{\lambda} = \dfrac{dn(\ch{A})/dt}{\Phi_p^{abs}}
\end{equation}
\noindent{Converting the number of molecules of \ch{A}, $n(\ch{A})$, to the concentration [\ch{A}] by \cite{t_oppenlander_photochemical_2003}:}
\begin{equation}
dn(\ch{A}) = [\ch{A}] N_A dV
\end{equation}
\noindent{where $dV$ is an infinitesimally small volume element that contains $dn$ molecules of \ch{A}. Rearranging, one can write an expression for the consumption of \ch{A} \cite{t_oppenlander_photochemical_2003}:}
\begin{equation}
-r_A = -\dfrac{d [\ch{A}]}{dt} = \phi_{\lambda} \dfrac{\Phi_p^{abs}}{N_A V} = \phi_{\lambda} e_{\lambda,A}^{\prime} = \phi_{\lambda} \frac{\lambda}{N_A h c} e_{\lambda,A} \label{eq:9}
\end{equation}
\noindent{for which we obtain the same expression as Eq. \ref{eq:6}. This is the equation that will allow expressing the photochemical reaction kinetics. However, first, the local volumetric rate of energy absorption $e_{\lambda,A}$ must be obtained. This will be discussed in Section 2.4. Note that as the reactor is assumed to be perfectly mixed, the reaction rate will only depend on time, assuming fixed geometry and constant lamp power output. Hence, although Eq. \ref{eq:9} is also applicable to any individual point in the reactor, for the present work the reaction rates will be defined for the whole volume. For this to be achieved, the LVRPA or LVREA will be required to be volume-averaged quantities, as the remainder of the terms have no spatial dependency. Defining this explicitly, the rate of consumption of \ch{A} for the present CSTR reactor is defined as:}
\begin{equation}
-\dfrac{d [\ch{A}]}{dt} = \phi_{\lambda} \langle e_{\lambda,A}^{\prime} \rangle_V = \phi_{\lambda} \frac{\lambda}{N_A h c} \langle e_{\lambda,A} \rangle_V \label{eq:10}
\end{equation}
\subsubsection*{2.4. UV distribution modeling}
\addcontentsline{toc}{subsubsection}{2.4. UV distribution modeling}
In a general annular photoreactor where we can assume that the characteristic length is much larger than the UV radiation wavelength, the general transport equation for single-wavelength radiation, also known as the radiation transport equation (RTE), reads as follows \cite{modest_radiative_2013, huang_evaluation_2011}:
\begin{equation}
\underbrace{\dfrac{1}{c}\dfrac{\partial I_{\lambda}}{\partial t}\vphantom{\int_{4 \pi}}}_{\text{Transient}} + \underbrace{\dfrac{\partial I_{\lambda}}{\partial s}\vphantom{\int_{4 \pi}}}_{\text{Along $s$}} = \underbrace{j_{\lambda}\vphantom{\int_{4 \pi}}}_{\text{Emission}} - \underbrace{\alpha_{\lambda} I_{\lambda} \vphantom{\int_{4 \pi}}}_{\text{Absorption}} - \underbrace{\sigma_{\lambda,s} I_{\lambda} \vphantom{\int_{4 \pi}}}_{\text{Out-scattering}} + \underbrace{\dfrac{\sigma_{\lambda,s}}{4 \pi} \int_{4 \pi} I_{\lambda}(\textbf{{\^{s}}}_i) \varphi_{\lambda} (\textbf{{\^{s}}}_i, \textbf{{\^{s}}}) d\Omega_i}_{\text{In-scattering}}
\end{equation}
\noindent{where $I_{\lambda}$ is the radiation intensity in $\mathrm{{einstein}\:m^{-2}\:s^{-1}}$ at a given radiation wavelength $\lambda$, $s$ is the radiation ray path and $\alpha_{\lambda}$ is the (Naperian) absorption coefficient of the medium in $\mathrm{m^{-1}}$. A thorough description of the remaining terms can be found in Modest \cite{modest_radiative_2013}. Given that the boundary conditions are steady, the radiation field will instantaneously reach a steady state and the transient term can be neglected \cite{huang_evaluation_2011}. This is valid only when the speed of light is much larger than the ratio between the characteristic length and the characteristic time, which is the case in most engineering applications \cite{modest_radiative_2013}. In UV reactors, temperatures are generally sufficiently low to assume that the emission from black-body radiation is negligible \cite{pareek_light_2008}. Hence, the emission term can be neglected. Finally, as the considered medium (i.e. air or water) does not have a large concentration of solid particles, scattering can generally be assumed to be negligible and the in- and out-scattering terms can be dropped \cite{elyasi_general_2010}. This simplification is also known as the purely absorbing medium \cite{huang_evaluation_2011}. Therefore, the final expression for the RTE when all relevant assumptions are made reads:}
\begin{equation}
\dfrac{\partial I_{\lambda}}{\partial s} + \alpha_{\lambda} I_{\lambda} = 0
\end{equation}
\noindent{which, when integrated is:}
\begin{equation}
I_{\lambda} = I_{\lambda,0} \: \exp \left({-\int_{0}^{l} \alpha_{\lambda} ds}\right) \label{eq:13}
\end{equation}
\noindent{where $I_{\lambda,0}$ is the radiation intensity without absorption and $l$ is the distance from the light radiation source to the point at which the radiation intensity $I_{\lambda}$ is going to be computed. Applying the definition of the absorption coefficient for the medium (i.e. the mixture of species) and the additive property Eq. \ref{eq:13} can be expressed as \cite{t_oppenlander_photochemical_2003}:}
\begin{equation}
I_{\lambda} = I_{\lambda,0} \: \exp \left({-\int_{0}^{l} \sum_{i=1}^{N} \varepsilon_{\lambda,i} [\:i\:] \ln 10 \: ds}\right)
\end{equation}
\noindent{where $i$ is each individual species in the medium, $N$ is the total number of species contained in the medium, $[\:i\:]$ is the concentration of the i-\textit{th} species and $\varepsilon_{\lambda,i}$ is the wavelength dependent molar decadic absorption coefficient in $\mathrm{m^2\:{mol}^{-1}}$ of the i-\textit{th} species. The $\ln 10$ term is required to convert from the Naperian absorption coefficient $\alpha_{\lambda}$ to the molar decadic absorption coefficient $\varepsilon_{\lambda,i}$ \cite{t_oppenlander_photochemical_2003}. Assuming that the molar absorption coefficient is path-independent and the concentration of each species is homogeneous because the medium is perfectly mixed:}
\begin{equation}
I_{\lambda} = I_{\lambda,0} \: \exp \left(- \ln 10 \: l \sum_{i=1}^{N} \varepsilon_{i,\lambda} [\:i\:] \right) \label{eq:15}
\end{equation}
In this form the simplified RTE in Eq. \ref{eq:15} is often referred to as the Beer-Lambert law. This simple equation describes how the different species present in the medium through which a radiation ray is traveling partially absorb its energy, reducing the incoming intensity. Eq. \ref{eq:15} is suited for the description of the radiation intensity at a single point in the annular reactor domain.
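The Beer-Lambert attenuation translates directly into code. A sketch with a hypothetical absorbing mixture (decadic absorption coefficients in $\mathrm{m^2\:mol^{-1}}$, concentrations in $\mathrm{mol\:m^{-3}}$, path length in $\mathrm{m}$; all numbers illustrative, not taken from the reactor model):

```python
import math

def beer_lambert(I0, path_length, species):
    """I = I0 * exp(-ln(10) * l * sum(eps_i * [i])).

    species: iterable of (eps_i, c_i) pairs; all values hypothetical.
    """
    decadic_absorbance = path_length * sum(eps * c for eps, c in species)
    return I0 * math.exp(-math.log(10.0) * decadic_absorbance)

# A single species with eps * c * l = 1 attenuates by one decade
I = beer_lambert(I0=1.0, path_length=0.05, species=[(200.0, 0.1)])  # -> 0.1
```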
The LVRPA for species \ch{A} which undergoes photochemical change can be expressed as \cite{coenen_modeling_2013,cassano_design_2005,cassano_photoreactor_1995}:
\begin{equation}
e_{\lambda,A}^{\prime} = \alpha_{\lambda,A} I_{\lambda}
\end{equation}
\noindent{substituting in Eq. \ref{eq:15} and expressing the Naperian absorption coefficient of species \ch{A} in terms of the molar decadic absorption coefficient and the concentration of species \ch{A} the LVRPA equation reads:}
\begin{equation}
e_{\lambda,A}^{\prime}(\mathbf{x},t) = \ln 10 \: \varepsilon_{A,\lambda} [\ch{A}(t)] I_{\lambda,0}(\mathbf{x}) \: \exp \left(- \ln 10 \: l(\mathbf{x}) \sum_{i=1}^{N} \varepsilon_{i,\lambda} [i(t)] \right) \label{eq:17}
\end{equation}
\noindent{where the dependencies of each term with respect to space (described by a spatial vector $\mathbf{x}$) and time ($t$) have been made explicit. As was the case for Eq. \ref{eq:15}, Eq. \ref{eq:17} describes the LVRPA for a single point in the reactor. To calculate the LVRPA for the whole reactor, the hollow cylinder volume was discretized. This will be discussed in detail in Section 3. For each mesh element, the radiation intensity $I_{\lambda}$ was computed taking the value calculated at the cell center. Then the volume-averaged radiation intensity is computed as:}
\begin{equation}
\langle I_{\lambda} \rangle_V = \dfrac{\sum\limits_{j=1}^{M} V_j I_{j,\lambda}}{\sum\limits_{j=1}^{M} V_j} \label{eq:18}
\end{equation}
\noindent{where $M$ is the total number of cell elements, $V_j$ is the volume of the $j$-th element and $I_{j,\lambda}$ is the radiation intensity of the $j$-th element. Therefore, the equation that describes the volume-averaged LVRPA reads:}
\begin{equation}
\langle e_{\lambda,A}^{\prime} \rangle_V = \ln 10 \: \varepsilon_{A,\lambda} [\ch{A}] \langle I_{\lambda} \rangle_V \label{eq:19}
\end{equation}
\noindent{If Eq. \ref{eq:19} is substituted into Eq. \ref{eq:10} we obtain a pseudo-first order reaction rate equation for the photochemical reaction \ch{A ->[h$\nu$] B}:}
\begin{equation}
-\dfrac{d [\ch{A}]}{dt} = \phi_{\lambda} \ln 10 \: \varepsilon_{A,\lambda} [\ch{A}] \langle I_{\lambda} \rangle_V = k_{\mathrm{UV}} [\ch{A}]
\end{equation}
\noindent{where $k_{\mathrm{UV}}$ is a function of the reactor geometry and its discretization, the UV lamp characteristics, the radiation model employed, and the concentrations of the different radiation-absorbing species (including $\ch{A}$), which are themselves functions of time.}
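The volume averaging of Eq. \ref{eq:18} and the resulting pseudo-first-order coefficient can be sketched over a toy mesh (all numbers hypothetical):

```python
import math

def volume_average(volumes, values):
    """Volume-weighted average over mesh cells, as in Eq. (18)."""
    return sum(v * x for v, x in zip(volumes, values)) / sum(volumes)

def k_uv(phi, eps_A, I_avg):
    """Pseudo-first-order photolysis coefficient k_UV = phi * ln(10) * eps_A * <I>_V."""
    return phi * math.log(10.0) * eps_A * I_avg

# Toy two-cell mesh: volumes V_j and cell-centred intensities I_j
I_avg = volume_average([1.0, 3.0], [4.0, 2.0])  # (1*4 + 3*2) / 4 = 2.5
k = k_uv(phi=0.9, eps_A=300.0, I_avg=I_avg)
```

Note that $k_{\mathrm{UV}}$ must be re-evaluated whenever the concentrations of the absorbing species change, since they attenuate the intensity field that enters the average.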
The only remaining unknown is $I_{\lambda,0}$. This will be calculated using several possible radiation models. Generally, these models can be divided into incidence and emission models. The former assume a radiation distribution inside the reactor while the latter model the type of radiation emitted by the source. Usually emission models are preferred as incidence models require experimental parameters \cite{pareek_light_2008}. Comprehensive reviews of these models can be found elsewhere in the literature \cite{pareek_light_2008, liu_evaluation_2004,alfano_radiation_1986}, so only the models employed in the present work will be described briefly. All models presented in the following are known as line source models. Here, the UV lamp emission is modeled as coming from a one-dimensional line. Other models that simulate the UV lamp as a two-dimensional surface or a three-dimensional volume also exist, but the line source models are simpler and offer reasonable accuracy \cite{pareek_light_2008}. The nomenclature for these models is not consistent throughout the literature; in the present work the terminology employed by Liu \textit{et al.} \cite{liu_evaluation_2004} will be used as their application (i.e. AOPs for water treatment) is closer to the intended application of this study. Note that the following models are usually presented in the literature including absorption (i.e. $I_{\lambda}$ instead of $I_{\lambda,0}$) but will be described in their simpler form as absorption will be treated separately. For all cases, the origin of the coordinate system will be placed at the center of the UV lamp. These models and their equations are:
\begin{itemize}
\item Radial model (RAD): the radial model is the simplest radiation model and was first introduced by Harris and Dranoff \cite{harris_study_1965}. This model assumes that the light source emits UV radiation only in the radial direction and hence can be considered a one-dimensional model. Mathematically, for any point in the domain, this reads:
\begin{equation}
I_{\lambda,0}(r) = \frac{P_{\lambda}^{\prime}}{2 \pi r L}
\end{equation}
\noindent{where $r$ is the radial coordinate (i.e. the radial distance from the lamp center to a specific point inside the reactor), $L$ is the length of the lamp and $P_{\lambda}^{\prime}$ is the useful UV lamp power emitted at a specific wavelength $\lambda$ in $\mathrm{einstein \: s^{-1}}$. Note that $P_{\lambda}^{\prime}$ can be obtained from $P_{\lambda}$, the useful UV lamp power emitted at a specific wavelength $\lambda$ in $\mathrm{W}$, as:}
\begin{equation}
P_{\lambda}^{\prime} = Q_{\lambda,\mathrm{mol}} P_{\lambda} = Q_{\lambda,\mathrm{mol}} P \eta_{\lambda}
\end{equation}
where $P$ is the UV lamp power in $\mathrm{W}$ and $\eta_{\lambda}$ is the UV lamp efficiency at a specific wavelength $\lambda$.
\item Multiple point source summation (MPSS): originally introduced by Jacob and Dranoff \cite{jacobm_light_1970}, the MPSS model divides the lamp into several equispaced point light sources along the lamp centerline. These light sources, unlike in the RAD model, are not limited to emitting light in a single direction, but do so in all available directions. This is often referred to as spherical emission. Since an annular photoreactor has azimuthal symmetry, this model is two-dimensional. The radiation intensity without absorption from a single point light source $k$ to a single point in the domain is:
\begin{equation}
l_{k,\lambda,0}(r,z) = \frac{P_{\lambda}^{\prime}/N_{ls}}{4 \pi \rho^2}
\end{equation}
\noindent{where $z$ is the axial coordinate (i.e. the axial distance from the lamp center to a specific point inside the reactor), $N_{ls}$ is the total number of light sources and $\rho$ can be described as:}
\begin{equation}
\rho = \sqrt{r^2+(z-h)^2}
\end{equation}
\noindent{where $h$ is the axial distance from the lamp center to the point light source. Then the total radiation intensity (without considering absorption) for any point in the domain is:}
\begin{equation}
I_{\lambda,0}(r,z) = \sum\limits_{k=1}^{N_{ls}} l_{k,\lambda,0}(r,z) \label{eq:25}
\end{equation}
The MPSS model introduces the unknown parameter of the total number of light sources with which to discretize the UV lamp. This parameter will be addressed in Section 3.
\item Multiple segment source summation (MSSS): the MSSS model was an extension of the MPSS model introduced by Bolton (as cited by Liu \textit{et al.} \cite{liu_evaluation_2004}). Here, the apparent overprediction of the MPSS model was tackled by assuming that the light emission from the point light sources was diffuse rather than spherical. This amounts to considering the point light sources as finite cylindrical segments. This reads:
\begin{equation}
l_{k,\lambda,0}(r,z) = \frac{P_{\lambda}^{\prime}/N_{ls}}{4 \pi \rho^2} \cos \left( \theta \right)
\end{equation}
where $\theta$ is defined as the angle formed between the radial direction and the direction that connects the point in the domain at which the radiation intensity is being calculated with the $k$-th point light source:
\begin{equation}
\theta = \arctan{\dfrac{|z-h|}{r}}
\end{equation}
Then, the total radiation intensity is computed following Eq. \ref{eq:25}.
\item Line source integration (LSI): the LSI model was first introduced by Jacob and Dranoff \cite{jacobm_light_1970} and later assessed by Blatchley \cite{blatchley_numerical_1997} for collimated-beam AOP reactors. Formally, this method is equivalent to the MPSS with an infinite number of discrete point light sources. Therefore, it is the integral version of the MPSS. Without considering the absorption of the medium, the LSI model has an analytical solution:
\begin{equation}
I_{\lambda,0}(r,z) = \frac{P_{\lambda}^{\prime}}{4 \pi r L} \left( \arctan{\frac{L/2+z}{r}} + \arctan{\frac{L/2-z}{r}} \right)
\end{equation}
It should be noted that when this model is employed all absorption is neglected. This includes the absorption of the photo-activated species, which is therefore not physical. Although some efforts have been made to include absorption terms \textit{a posteriori} (i.e. after integration) \cite{montecchio_development_2018}, experimental fitting is required. This defeats the purpose of \textit{a priori} modeling and hence the LSI model will only be employed as the limiting case of the MPSS model with an infinite number of point light sources.
\item Modified LSI (RAD-LSI): the RAD-LSI model was proposed by Liu \textit{et al.} \cite{liu_evaluation_2004} and incorporates both the RAD and LSI models. This is done because the LSI model is able to correctly predict far-field radiation intensities but not the near-field distribution. This happens because, when approaching the UV lamp, most radiation incident on a point will come from the part of the light source immediately close to it. Hence, close to the lamp the radiation can be thought of as mostly radial and so the RAD model will better predict the near field. Formally this is described as:
\begin{dmath}
I_{\lambda,0}(r,z) = \mathrm{min}\left[\mathrm{RAD},\;\mathrm{LSI} \right] = \mathrm{min}\left[ \frac{P_{\lambda}^{\prime}}{2 \pi r L}, \; \frac{P_{\lambda}^{\prime}}{4 \pi r L} \left( \arctan{\frac{L/2+z}{r}} + \arctan{\frac{L/2-z}{r}} \right) \right]
\end{dmath}
where $\mathrm{min}\left[\mathrm{RAD},\;\mathrm{LSI} \right]$ denotes the minimum of the RAD and LSI model predictions.
\end{itemize}
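For concreteness, the line-source expressions above can be condensed into a short script. The following sketch is a minimal, illustrative implementation (assumed lamp power $P_{\lambda}^{\prime}=1$ and geometry; no absorption) that also allows checking the expected limiting behavior: the MPSS sum approaches the LSI integral for large $N_{ls}$, the diffuse MSSS prediction lies below the spherical MPSS one, and RAD-LSI reduces to RAD near the lamp and to LSI far from it.

```python
# Minimal sketch of the line-source radiation models (no absorption).
# P is the useful lamp power P'_lambda, L the lamp length, (r, z) the point
# where the intensity is evaluated; all numerical values are illustrative.
import math

def rad(P, L, r, z=0.0):
    """RAD model: purely radial emission."""
    return P / (2.0 * math.pi * r * L)

def lsi(P, L, r, z):
    """LSI model: analytical integral of infinitely many point sources."""
    return P / (4.0 * math.pi * r * L) * (
        math.atan((L / 2.0 + z) / r) + math.atan((L / 2.0 - z) / r))

def mpss(P, L, r, z, n_ls):
    """MPSS model: n_ls equispaced point sources with spherical emission."""
    total = 0.0
    for k in range(n_ls):
        h = -L / 2.0 + (k + 0.5) * L / n_ls   # position of the k-th source
        total += (P / n_ls) / (4.0 * math.pi * (r * r + (z - h) ** 2))
    return total

def msss(P, L, r, z, n_ls):
    """MSSS model: diffuse emission, i.e. MPSS weighted by cos(theta)."""
    total = 0.0
    for k in range(n_ls):
        h = -L / 2.0 + (k + 0.5) * L / n_ls
        theta = math.atan(abs(z - h) / r)
        total += (P / n_ls) * math.cos(theta) / (
            4.0 * math.pi * (r * r + (z - h) ** 2))
    return total

def rad_lsi(P, L, r, z):
    """RAD-LSI model: minimum of the RAD and LSI predictions."""
    return min(rad(P, L, r, z), lsi(P, L, r, z))
```

With these definitions one can verify numerically that the MPSS value at an interior point converges to the LSI one as $N_{ls}$ grows, which is the property exploited later in Section 3.2.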
Often when discussing the models presented above in the context of water treatment AOPs, great importance is given to evaluating the effect of additional terms that change the radiation intensity distribution \cite{bolton_calculation_2000, liu_evaluation_2004}. These terms are mainly absorption, reflection, and refraction. For air, Liu \textit{et al.} argue that the latter two terms can be omitted \cite{liu_evaluation_2004}. Nevertheless, for cases with low absorption, neglecting reflection could lead to underestimating the reactor performance. The effect of the absorption of the medium should be evaluated separately because it will depend on the species present and the wavelength at which the reactor will be operating. As presented in Eq. \ref{eq:15}, the Beer-Lambert law describes the reduction in radiation intensity by an exponential term which accounts for the distance traveled by a UV light ray $l$, the concentration of the $i$-th species and how much that species absorbs light. The latter parameter $\varepsilon_{\lambda,i}$ is often reported in the literature indirectly as the (Naperian) absorption cross-section coefficient $\sigma_{\lambda,i}$ in $\mathrm{cm^{2}}$. To obtain the former from the latter, Eq. \ref{eq:30} can be used:
\begin{equation}
\varepsilon_{\lambda,i} = \frac{\sigma_{\lambda,i} N_A}{10^4 \ln{10}} \label{eq:30}
\end{equation}
\noindent{Note that Eq. \ref{eq:30} yields $\varepsilon_{\lambda,i}$ in S.I. units (i.e. $\mathrm{m^2\:{mol}^{-1}}$) but is only valid for $\sigma_{\lambda,i}$ in $\mathrm{cm^{2}}$, as these are the units in which this value is given in the literature.}
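This conversion is straightforward to script. A minimal sketch, using the CODATA value of Avogadro's constant and, as examples, the \ch{O3} and \ch{SO2} cross sections listed in Table \ref{tab:cross}:

```python
# Convert a Naperian absorption cross section sigma [cm^2] into the decadic
# molar absorption coefficient epsilon [m^2 mol^-1] following Eq. (30).
import math

N_A = 6.02214076e23  # Avogadro constant, mol^-1

def epsilon_from_sigma(sigma_cm2):
    return sigma_cm2 * N_A / (1.0e4 * math.log(10.0))

eps_o3 = epsilon_from_sigma(1.132935e-17)   # O3 at 254 nm, ~296 m^2/mol
eps_so2 = epsilon_from_sigma(1.551258e-19)  # SO2 at 254 nm, ~4 m^2/mol
```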
A valuable resource for obtaining accurate and updated values for the absorption cross-section coefficient of different species is the MPI-Mainz UV/VIS Spectral Atlas \cite{keller-rudek_mpi-mainz_2013}. Table \ref{tab:cross} presents the values of the absorption cross-section coefficients for the species in the kinetic mechanism presented in Table \ref{tab:mech}. Specific references are mentioned when more than one option was available. The references employed were selected based on recency and/or if the publication was a review paper. The temperature and wavelength at which these values are evaluated are as close as possible to the operating conditions of the reactor modeled (i.e. 300 K and 254 nm).
\begin{table}[!htb]
\caption{Absorption cross-section coefficients for different species present in the present reactor simulations. All values were obtained from the MPI-Mainz UV/VIS Spectral Atlas \cite{keller-rudek_mpi-mainz_2013} for the species in the gas phase at $\sim300$ K and for UV radiation at a wavelength of 254 nm, unless otherwise stated. }
\centering
\begin{tabular}{c c c}
\hline \hline
Species & Absorption cross section, $\mathrm{cm^{2}}$ & Reference \\ \hline
\ch{O3} & 1.132935E-17 & Hodges \textit{et al.} \cite{hodges_recommendation_2019} \\
\ch{O2} & 1.477865E-24 & Bogumil \textit{et al.} \cite{bogumil_measurements_2003} \\
\ch{N2} & Not reported above 150 nm & Spectral Atlas \cite{keller-rudek_mpi-mainz_2013} \\
\ch{SO2} & 1.551258E-19 & Bogumil \textit{et al.} \cite{bogumil_measurements_2003} \\
\ch{H2O} & Approaches 0 & Ranjan \textit{et al.} \cite{ranjan_photochemistry_2020} \\
\ch{HO2} & 2.63E-19 & Tyndall \textit{et al.} \cite{tyndall_atmospheric_2001} \\
\ch{SO3} & 1.34E-20 & Burkholder \textit{et al.} \cite{burkholder_uv_1997} \\
\ch{H2SO4} & 7.19E-22 (at 248 nm) & Farahani \textit{et al.} \cite{farahani_simulated_2019} \\ \hline \hline
\end{tabular}
\label{tab:cross}
\end{table}
To simplify the computations, the contribution of some species to medium absorption was neglected. Species with very low or negligible absorption cross-section coefficients like \ch{N2}, \ch{H2O}, \ch{O2} or \ch{H2SO4} were not considered. Furthermore, the concentration of others such as \ch{HO2} and \ch{SO3} will be too low to have a meaningful impact. Hence only \ch{O3} and \ch{SO2} are considered as UV radiation-absorbing species for a wavelength of 254 nm.
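To illustrate the consequence of this simplification, the sketch below evaluates the Beer-Lambert attenuation over a short optical path while retaining only \ch{O3} and \ch{SO2} as absorbers. The gas state (ideal gas at 300 K and 1 atm) and the 300 ppm concentrations are assumed example values, not simulation outputs.

```python
# Beer-Lambert attenuation at 254 nm retaining only O3 and SO2 as absorbers.
# Ideal gas at 300 K and 1 atm is assumed; ppm levels are example values.
import math

N_A = 6.02214076e23               # Avogadro constant, mol^-1
R, T, P_ATM = 8.314462618, 300.0, 101325.0
C_TOT = P_ATM / (R * T)           # total molar density, mol m^-3

SIGMA_CM2 = {"O3": 1.132935e-17, "SO2": 1.551258e-19}  # from the table above

def transmittance(ppm, path_m):
    """I/I0 after path_m meters through a mixture given as {species: ppm}."""
    alpha = 0.0                   # Naperian absorption coefficient, m^-1
    for species, x in ppm.items():
        n = x * 1e-6 * C_TOT * N_A              # number density, m^-3
        alpha += SIGMA_CM2[species] * 1e-4 * n  # sigma converted cm^2 -> m^2
    return math.exp(-alpha * path_m)

t = transmittance({"O3": 300.0, "SO2": 300.0}, 0.01)  # 1 cm path
```

For these assumed values the transmittance over 1 cm is about 0.92, i.e. less than 10\% of the beam is absorbed, consistent with the low gas-phase absorption discussed in Section 3.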
\subsection*{3. Results and discussion}
\addcontentsline{toc}{subsection}{3. Results and discussion}
\subsubsection*{3.1. Grid convergence}
\addcontentsline{toc}{subsubsection}{3.1. Grid convergence}
One of the earliest decisions that must be taken to simulate the UV CSTR is how the discretization of the control volume will be performed. As has been presented in Section 2.4, all the radiation models employed are either one-dimensional or two-dimensional. This defines, at most, only two unknowns which are the number of cell elements along the radial coordinate, $n_r$, and along the axial coordinate, $n_z$.
To evaluate which value for the total number of cells is high enough to ensure good accuracy, a simple simulation was performed using the LSI model. The small reactor radius is employed and both coordinates are discretized with the same number of points. Fig. \ref{fig:mesh} shows the convergence of the value of the volume-averaged radiation intensity (VARI) with an increasing number of cells. Both the absolute value of the VARI and the relative rate of change (i.e. the absolute value of the change between one simulation and the next) are presented. As can be seen, above 100 cells there is little change in the VARI. This corresponds to 10 cells in each direction. Nevertheless, to ensure a value below 0.1\% for the relative rate of change of the VARI, the number of cells chosen for the subsequent studies is 225, which corresponds to 15 cells in each direction. When a larger reactor (i.e. larger outer radius) is employed, the number of cells will be increased to keep the ratio between the hollow cylinder radius and the number of cells in the radial direction constant.
\begin{figure}[h!]
\centering
\includegraphics[height=6.1cm]{./fig/conv_vari.png}
\includegraphics[height=6.1cm]{./fig/conv_rate.png}
\caption{Evolution of the volume-averaged radiation intensity (VARI) for different total numbers of cells using the LSI method. \textit{Left}: total value of the VARI. \textit{Right}: relative rate of change of the VARI between successive simulations in \%.}
\label{fig:mesh}
\end{figure}
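The convergence check of this section can be sketched as follows. The reactor dimensions below are assumed for illustration; the VARI is obtained by midpoint quadrature of the analytical LSI expression, weighting each cell by its cylindrical volume.

```python
# Grid-convergence sketch: VARI computed with the analytical LSI model on
# successively refined n x n grids until the relative change is below 0.1%.
import math

def lsi(P, L, r, z):
    return P / (4.0 * math.pi * r * L) * (
        math.atan((L / 2.0 + z) / r) + math.atan((L / 2.0 - z) / r))

def vari(P, L, r_in, r_out, n):
    """Volume-averaged radiation intensity on an n x n midpoint grid."""
    dr, dz = (r_out - r_in) / n, L / n
    num = den = 0.0
    for i in range(n):
        r = r_in + (i + 0.5) * dr
        w = r * dr * dz                       # cylindrical volume weight
        for j in range(n):
            z = -L / 2.0 + (j + 0.5) * dz
            num += lsi(P, L, r, z) * w
            den += w
    return num / den

history, prev = [], None
for n in (5, 10, 15, 20, 30):                 # assumed lamp L = 0.3 m, radii in m
    v = vari(1.0, 0.30, 0.01, 0.05, n)
    if prev is not None:
        history.append(abs(v - prev) / prev)
    prev = v
```

The last entries of `history` fall below the 0.1\% threshold used in the text, mirroring the behavior of Fig. \ref{fig:mesh}.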
\subsubsection*{3.2. Optimization of the number of point light sources}
\addcontentsline{toc}{subsubsection}{3.2. Optimization of the number of point light sources}
Although in the original work by Jacob and Dranoff \cite{jacobm_light_1970} less than $0.1\%$ error was achieved with only 10 light sources, a more recent work by Bolton \cite{bolton_calculation_2000} suggests that less than $1\%$ is usually achieved with close to 1000 discretization points. Current implementations of the MPSS model tend to use the recommendation of Bolton despite the large discrepancy between both values. Unfortunately, a large number of discrete point light sources leads to a considerable increase in the computational cost. Furthermore, if the radiation field is evaluated in each iteration of a time loop inside a reactor discretized with many cell elements, even simple CSTR simulations can be costly.
To tackle this, the defining property of the LSI model was employed. As the LSI model is the limit of the MPSS model without absorption when the number of discrete point light sources ($N_{ls}$) tends to infinity, a simple error equation can be written as:
\begin{equation}
\epsilon = \frac{|I_{\lambda,0}^{\mathrm{MPSS}}-I_{\lambda,0}^{\mathrm{LSI}}|}{I_{\lambda,0}^{\mathrm{LSI}}}
\end{equation}
\noindent{where $\epsilon$ is the error incurred. Hence, for fixed conditions, several values of $I_{\lambda,0}^{\mathrm{MPSS}}$ can be computed until the error falls below a user-defined tolerance. Such a computation has been implemented in the present work. The evolution of the error with the number of light sources can be seen in Fig. \ref{fig:error}. Here, the error incurred for all integer values of $N_{ls}$ from 1 to 1000 is plotted. As can be observed, although a clear trendline is visible, there is considerable variability that appears unpredictable. Hence, although using a very large value of $N_{ls}\sim1000$ can yield a low error for this specific configuration ($\sim 0.1\%$), the same or even lower errors can be achieved with much lower values. It can be seen that close to $N_{ls}=400$ two distinct values produce sharp valleys which yield substantially lower errors. Therefore, finding the lowest possible value of $N_{ls}$ that produces the desired error is a worthwhile time investment.}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{./fig/error.png}
\caption{Evolution of the error for the MPSS model without considering the absorption of the medium (as compared to its integrated form, the LSI model) with the discrete number of light sources employed to model the UV lamp.}
\label{fig:error}
\end{figure}
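The search for the smallest acceptable $N_{ls}$ can be sketched as below, with the same illustrative LSI and MPSS definitions as before evaluated at one representative interior point; the geometry and tolerance are assumptions for illustration.

```python
# Find the smallest number of point light sources N_ls for which the MPSS
# prediction (no absorption) matches the analytical LSI limit within a
# user-defined tolerance at a representative point.
import math

def lsi(P, L, r, z):
    return P / (4.0 * math.pi * r * L) * (
        math.atan((L / 2.0 + z) / r) + math.atan((L / 2.0 - z) / r))

def mpss(P, L, r, z, n_ls):
    total = 0.0
    for k in range(n_ls):
        h = -L / 2.0 + (k + 0.5) * L / n_ls
        total += (P / n_ls) / (4.0 * math.pi * (r * r + (z - h) ** 2))
    return total

def optimal_n_ls(P, L, r, z, tol=0.01, n_max=1000):
    """Smallest N_ls whose relative deviation from the LSI value is < tol."""
    ref = lsi(P, L, r, z)
    for n in range(1, n_max + 1):
        if abs(mpss(P, L, r, z, n) - ref) / ref < tol:
            return n
    return n_max

n_opt = optimal_n_ls(1.0, 0.30, 0.02, 0.05)   # assumed geometry, 1% tolerance
```

Because the error oscillates with $N_{ls}$ (cf. Fig. \ref{fig:error}), the first value below the tolerance can be far smaller than the conservative $N_{ls}\sim1000$ recommendation.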
Considering the large errors that are incurred by some of the main assumptions in the current work (e.g. ideal mixing causing perfectly homogeneous concentrations), the target error for the present computations was fixed to $1\%$. As a rule of thumb, fewer than 50 points were required for all studied conditions in order to produce this error. In particular, for the 225 cell mesh, only 24 points were enough to obtain a 0.95\% error with respect to the same LSI simulation. The speedup achieved by this optimization was also tested. For all mesh sizes studied, the case with the computed optimum value of $N_{ls}$ was around 7 times faster than the case using 1000 points. This optimum value of $N_{ls}$ is also employed when using the MSSS method; however, the value is not computed with the MSSS method itself but rather by using the MPSS method and comparing it with the LSI prediction.
\subsubsection*{3.3. Parametric study}
\addcontentsline{toc}{subsubsection}{3.3. Parametric study}
In the following subsection, the different parameters that make up the CSTR UV simulation as explained above will be evaluated. This will be done by selectively varying some of the different values and aiming to establish an optimum or ideal range within possible trade-offs. Some parameters that were found not to significantly affect the removal rate of \ch{SO2} are not shown here. These include the initial concentration of \ch{SO2}, the relative humidity (i.e. concentration of water vapor), and the gas temperature. All of these were tested within a reasonable range starting from the baseline conditions of the simulation. The data shown for each simulation are obtained for the final simulation time, when results are considered converged in time.
\paragraph*{Radiation model.}
The five different radiation models presented in Section 2.4 have been compared under equal conditions. The reactor employed was the smallest of the three (see Table \ref{tab:params}) and was discretized with 225 mesh elements. For the MPSS and MSSS models, the previously obtained optimum value of $N_{ls}=24$ was employed. A total of 300 parts per million (ppm) of \ch{SO2} and 10,000 ppm of \ch{H2O} were employed. The concentration of \ch{O3} was varied to account for a range of molarity ratios (i.e. \ch{O3}/\ch{SO2}). A deliberately short residence time of 10 s was defined to better assess the differences between models.
\begin{figure}[h!]
\centering
\includegraphics[height=6.1cm]{./fig/models.png}
\includegraphics[height=6.1cm]{./fig/models_zoom.png}
\caption{Evolution of the percentage removal of \ch{SO2} for different molarity ratios and different radiation models. \textit{Left}: Molarity ratio from 0 to 1. \textit{Right}: Zoomed view, molarity ratio from 0.5 to 0.7.}
\label{fig:models}
\end{figure}
Fig. \ref{fig:models} shows the removal of \ch{SO2} by UV-\ch{O3} at different molarity ratios for the five models described. As expected, the LSI model produces the highest \ch{SO2} removal as it is an idealized model without absorption. However, this absorption is low for the present case as only 300 ppm of \ch{SO2} and at most 300 ppm of \ch{O3} are actively absorbing UV radiation at $\lambda=254$ nm. Hence, the MPSS can be observed to perform similarly to the LSI under these conditions. This low light absorption for gas-phase UV-\ch{O3} systems was already reported by Montecchio \textit{et al.} \cite{montecchio_development_2018}. The RAD and RAD-LSI models also yield very similar results owing to the nature of the RAD-LSI formulation. However, the MSSS model provides a considerably lower prediction for the \ch{SO2} removal efficiency. As noted by Bolton (as cited by Liu \textit{et al.} \cite{liu_evaluation_2004}), the MSSS was developed to tackle the overprediction of the MPSS model. The present results are in line with those obtained by Liu \textit{et al.} \cite{liu_evaluation_2004} when using the MSSS in air. Namely, the MSSS model produces the lowest volume-averaged radiation intensity. In their work, the MSSS model was capable of reproducing experimental actinometric data better than the other models. Therefore, this model was chosen for the remaining parametric studies.
Overall, all simulations irrespective of the radiation model employed follow a similar trend. Initially, from molarity ratio 0 to 0.7, a linear response is observed. Increasing the molarity ratio leads to an increase in \ch{SO2} removal. For the models which predict a higher fluence rate the proportionality constant between molarity ratio and \ch{SO2} removal is higher and so the gradient of their trendline is steeper. From molarity ratio 0.7 to 1 the linearity decays progressively into a plateau reaching a maximum removal of 96-98\% depending on the model.
\paragraph*{Residence time.}
The efficiency of a UV reactor will be directly related to the ability of the photochemically active species to absorb UV radiation. Hence, increasing the exposure time of a particular species to UV light will improve photochemical conversion. This exposure can be evaluated by varying the residence time $\tau$ (in seconds) of the present simulations. Following the previous study of the different radiation models, the MSSS model was employed for this analysis. Other parameters and conditions remained the same.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\textwidth]{./fig/res_time_v3.png}
\caption{Evolution of the percentage removal of \ch{SO2} for different molarity ratios and different residence times $\tau$ in s. Note that the line corresponding to $\tau=1000$ s is barely visible as it lies just beneath the line corresponding to $\tau=10000$ s.}
\label{fig:residence}
\end{figure}
Fig. \ref{fig:residence} shows the effect of the residence time on the overall removal efficiency of \ch{SO2}. As can be observed, the residence time has a major impact on the resulting abatement of \ch{SO2}. For $\tau=1$ s, just about 20\% removal is achieved with equal initial \ch{SO2} and \ch{O3} concentrations. Here, the slope is less than one. This implies that for every 1 ppm of input \ch{O3}, less than 1 ppm of \ch{SO2} is removed. Therefore, fast flow through the reactor is discouraged. Nevertheless, it should be noted that these idealized CSTR simulations assume ideal mixing. In reality, getting as close as possible to ideal mixing conditions will have to be done by turbulent mixing. At lower speeds, fluid flows tend to be laminar, which does not promote mixing and should therefore be avoided. Hence, although higher residence times are ideal, these should not be achieved by lowering flow speeds. Therefore, a geometry-specific reactor has to be set up so that both relatively high residence times and turbulent mixing occur simultaneously. Increasing the residence time to $\tau=10$ s, a notable increase in \ch{SO2} removal is achieved. At this time-scale, the slope of the graph approaches unity. Hence, the input concentration of \ch{O3} manages to remove close to that same concentration of \ch{SO2}. Therefore, close to 100\% removal is achieved at a molarity ratio equal to 1. Further increasing the residence time to $\tau=100$ s, the slope approaches a value of 2. This implies that any initial concentration of \ch{O3} manages to remove twice that concentration of \ch{SO2} from the gas phase. The peak close to 100\% removal is achieved in the vicinity of molarity ratio 0.5. This is the natural limit for this reaction, which can be seen by inspecting the original kinetic mechanism for the CSTR UV reactor in Table \ref{tab:mech}. As observed, under ideal conditions one mole of \ch{O3} produces one mole of oxygen singlets \ch{O(^1D)}. 
These singlets then react with water vapor to produce two moles of hydroxyl radicals \ch{^.OH}, each of which finally reacts one-to-one with \ch{SO2}. This will happen if no other scavenging reaction occurs and the hydroxyl radicals only selectively oxidize \ch{SO2}. Under these conditions, one mole of \ch{O3} can oxidize two moles of \ch{SO2}. As seen in Fig. \ref{fig:residence}, this ideal limit can be approached with long residence times. However, increasing the residence time above $\tau=100$ s to $\tau=1000$ s or even $\tau=10000$ s produces diminishing returns. Therefore, the residence time for this reactor should be on the order of magnitude of $\tau=100$ s to achieve a good trade-off between high removal efficiency and reasonable residence times.
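The trends in Fig. \ref{fig:residence} can be rationalized with a lumped steady-state balance. The sketch below assumes, as an upper bound, that each photolyzed \ch{O3} molecule ultimately removes two \ch{SO2} molecules through the two hydroxyl radicals it generates; the effective photolysis frequency $k_{uv}$ is an assumed illustrative value, not a fitted constant.

```python
# Lumped steady-state CSTR balance: O3 is photolysed with frequency k_uv and
# each photolysed O3 removes at most two SO2 (the ideal 2:1 limit above).
def so2_removal_fraction(mr, k_uv, tau):
    """Removal fraction vs. inlet molarity ratio mr = [O3]_in/[SO2]_in."""
    x = k_uv * tau / (1.0 + k_uv * tau)   # fraction of inlet O3 photolysed
    return min(1.0, 2.0 * mr * x)
```

For $k_{uv}\tau \ll 1$ the slope with respect to the molarity ratio is well below 2 (short residence times), while for $k_{uv}\tau \gg 1$ it tends to 2 and complete removal is reached near a molarity ratio of 0.5, with diminishing returns for ever larger $\tau$, in qualitative agreement with the figure.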
\paragraph*{Reactor size.}
Finally, following Montecchio \textit{et al.} \cite{montecchio_development_2018}, the size of the reactor was studied. This was done for three different outer radius sizes as shown in Table \ref{tab:params}. Fig. \ref{fig:radius} shows the results of this parameter study. As can be observed, the impact of the reactor size for the selected dimensions is not very large for molarity ratios below 0.2 or above 0.8. Around a molarity ratio of 0.5, the effect of the reactor size is more visible. Here, the smaller reactor is able to perform better and remove a higher percentage of the initial \ch{SO2}. Increasing the reactor outer radius will in turn increase the reactor volume and decrease the abatement efficiency. Therefore, to improve the removal of \ch{SO2}, the aspect ratio of the reactor must be large, with the axial length being much larger than the radial length.
\begin{figure}[h!]
\centering
\includegraphics[height=6.1cm]{./fig/radius.png}
\includegraphics[height=6.1cm]{./fig/radius_zoom.png}
\caption{Evolution of the percentage removal of \ch{SO2} for different molarity ratios and different reactor sizes. \textit{Left}: Molarity ratio from 0 to 1. \textit{Right}: Zoomed view.}
\label{fig:radius}
\end{figure}
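The volume effect described above can be checked directly with the LSI expression: enlarging the outer radius adds low-intensity volume far from the lamp and lowers the volume-averaged intensity. The radii and lamp length below are assumed for illustration.

```python
# Effect of the outer radius on the volume-averaged radiation intensity,
# evaluated with the analytical LSI model on a midpoint grid.
import math

def lsi(P, L, r, z):
    return P / (4.0 * math.pi * r * L) * (
        math.atan((L / 2.0 + z) / r) + math.atan((L / 2.0 - z) / r))

def vari(P, L, r_in, r_out, n=30):
    dr, dz = (r_out - r_in) / n, L / n
    num = den = 0.0
    for i in range(n):
        r = r_in + (i + 0.5) * dr
        for j in range(n):
            z = -L / 2.0 + (j + 0.5) * dz
            num += lsi(P, L, r, z) * r * dr * dz
            den += r * dr * dz
    return num / den

vari_small = vari(1.0, 0.30, 0.01, 0.03)   # assumed small outer radius
vari_large = vari(1.0, 0.30, 0.01, 0.09)   # assumed large outer radius
```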
\subsubsection*{3.4. Time-dependent results}
\addcontentsline{toc}{subsubsection}{3.4. Time-dependent results}
To conclude the results section, some detailed results with temporal resolution will be shown. Unlike in previous sections, the results shown here are for a single simulation with the optimum or desired conditions obtained from the parametric studies. This includes the MSSS model, the smallest reactor size, and a residence time of $\tau=100$ s. Here, a molarity ratio of 0.5 was chosen. Results with temporal resolution enable an understanding of the time scales of the different reactions.
Fig. \ref{fig:species} shows the time-dependent concentration of six relevant species in the UV reactor. First, \ch{SO2} and \ch{O3} can be seen to follow a similar trend, as the latter is converted to oxygen singlets \ch{O(^1D)} by the UV light while the former is oxidized by the hydroxyl radicals \ch{^.OH}. The time-scale at which both take place appears to be almost identical, which implies that the intermediate steps between \ch{O3} depletion and \ch{SO2} removal happen fast. This hypothesis is further substantiated by observing the evolution of \ch{O(^1D)} and \ch{^.OH}. As can be seen, both remain at negligible concentration values, which implies that all molecules being generated are instantaneously consumed. Something similar happens to \ch{SO3}, which is an intermediate species. After \ch{SO2} reacts with \ch{^.OH} it produces \ch{HSO3}, which reacts with the abundant \ch{O2} to produce \ch{SO3}. However, again, this species is being consumed at least as fast as it is being produced, as evidenced by its constantly low concentration. When \ch{SO3} reacts with the water vapor present in the gas mixture it is converted into \ch{H2SO4}, which can also be seen in Fig. \ref{fig:species} in much higher concentrations.
\begin{figure}[h!]
\centering
\includegraphics[height=6.1cm]{./fig/SO2.png}
\includegraphics[height=6.1cm]{./fig/O3.png} \\
\includegraphics[height=6.1cm]{./fig/O1D.png}
\includegraphics[height=6.1cm]{./fig/OH.png} \\
\includegraphics[height=6.1cm]{./fig/SO3.png}
\includegraphics[height=6.1cm]{./fig/H2SO4.png} \\
\caption{Temporal evolution of the concentration of different important species for the CSTR UV simulation.}
\label{fig:species}
\end{figure}
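The near-zero traces of \ch{O(^1D)}, \ch{^.OH} and \ch{SO3} are characteristic of fast-consumed intermediates. The toy sequential mechanism below (arbitrary, assumed rate constants; explicit Euler integration) illustrates the point: when the consumption rate constant greatly exceeds the production one, the intermediate never accumulates beyond roughly the ratio of the two.

```python
# Toy mechanism A -> B -> C with k2 >> k1: the intermediate B stays at a
# quasi-steady level of about (k1/k2)*[A]. Rate constants are arbitrary.
k1, k2 = 1.0, 1000.0        # s^-1; consumption much faster than production
a, b, c = 1.0, 0.0, 0.0     # initial concentrations, arbitrary units
dt, b_max = 1.0e-4, 0.0
for _ in range(50000):      # integrate 5 s with explicit Euler (k2*dt < 1)
    da = -k1 * a
    db = k1 * a - k2 * b
    dc = k2 * b
    a, b, c = a + da * dt, b + db * dt, c + dc * dt
    b_max = max(b_max, b)
```

Here $b$ never exceeds about $k_1/k_2 = 10^{-3}$, mirroring the flat, near-zero \ch{O(^1D)} and \ch{^.OH} curves in Fig. \ref{fig:species}.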
\subsection*{4. Conclusions and future outlook}
\addcontentsline{toc}{subsection}{4. Conclusions and future outlook}
In the current study, simulations of a CSTR were performed to analyze the abatement of \ch{SO2} by \ch{O3} via UV light. To do so, an annular reactor was discretized and several radiation models were employed to simulate the volume-averaged radiation intensity. The theory employed has been described in detail and equations have been derived. The information regarding radiation distribution was employed to numerically evaluate the reaction rate of a photochemically active step. This step converted \ch{O3} to oxygen singlets, which gave rise to hydroxyl radicals that ultimately oxidize the main pollutant \ch{SO2}. The fundamental concepts of photochemical modeling were presented. The chemical process was simulated using the Cantera \cite{goodwin_cantera_2016} software in Python \cite{van_rossum_python_2009}. For this, a chemical kinetic mechanism was required, which was adapted from Meusinger \textit{et al.} \cite{meusinger_treatment_2017}.
To understand and minimize the error committed in the simulations, a grid convergence study was undertaken. A low number of mesh elements ($\sim 200$) was found to be enough for obtaining low errors. Some radiation models require the discretization of the UV lamp into point light sources or segments. To obtain acceptable errors and minimize the computational cost, a method to obtain the optimum value of the number of discrete light sources to be used was developed. It was shown that for the present configuration, a low number of light sources ($< 50$) was enough to obtain errors smaller than 1\%.
The results presented here include a parametric study on several simulation variables. When the different models were assessed, the LSI and MPSS models were observed to predict similarly high overall \ch{SO2} removal. Next, the RAD and RAD-LSI models presented lower removal due to their mainly radial conceptualization of the direction of the emitted UV light. Finally, the MSSS method produced the lowest removal rate of \ch{SO2}, as was expected from the literature. The residence time was also studied. In this parametric analysis, it was observed to be the most impactful variable. Low residence times ($1 < \tau < 10$ s) yielded poor pollutant removal rates. Very high residence times ($1000 < \tau < 10000$ s) produced much higher \ch{SO2} removal but results were almost identical, showing diminishing returns for such large values. Hence, a moderate residence time of $\tau=100$ s was observed to be a reasonable compromise. The reactor size was also studied. This variable was seen to have a moderate effect on removal rates for the range studied. Nevertheless, it was concluded that reactors that have a low radial length compared to their axial length will perform better. Hence, large aspect ratios are desirable. Finally, time-dependent results were presented. Here, the evolution of some of the key species in the present chemical process was studied. It was shown that some intermediate species such as \ch{O(^1D)}, \ch{^.OH} and \ch{SO3} are consumed at an equal or faster rate than they are produced. All in all, results show that the application of the present methods constitutes a viable strategy for gas-phase pollutant removal systems.
Further developments to the present work include several aspects not directly evaluated here. For instance, a validation case could be set up. Chemical UV reactors are dependent on many aspects such as geometry, chemical species, UV lamp characteristics, and placement. Hence, either detailed experimental results from the literature should be replicated or new experimental work should be undertaken. A more complex version of the chemistry employed here could also be studied, to assess how performance varies with an increasing number of intermediate species and/or scavenging molecules. Similarly, this reactor could be configured in series with a \ch{NO_x}-\ch{O3} system. This would both simulate a more realistic scenario - as \ch{NO_x} and \ch{SO_x} tend to coexist in exhaust gases - and evaluate the selectivity of hydroxyl radical oxidation. Here, the reflection of UV light from the reactor walls has not been addressed. This process would require experimental measurements of the material, roughness, and fouling over time of the wall. Furthermore, the effect of several UV lamps and their placement could be evaluated. Finally, relaxing the idealization of chemical homogeneity from perfect mixing could lead to more realistic simulations. This would entail more complex and time-consuming computational fluid dynamics (CFD) simulations coupled with both the local radiation model and the local chemical kinetic rates. Under this framework, more detailed radiation models could be studied (i.e. the discrete ordinate method \cite{liu_evaluation_2004}) and more complex turbulence modeling such as large-eddy simulations (LES) could be employed.
\subsection*{Acknowledgements}
\addcontentsline{toc}{subsection}{Acknowledgements}
This work has been funded by ÅForsk (grant 19-422) with complementary financing from Formas (Swedish Research Council for Sustainable Development).
\printbibliography
\end{document}
It is a well-known problem that perturbative approximations to low-dimensional condensed-matter systems of interacting fermions lead to expressions that diverge logarithmically at low energies.
The focus of this paper lies on a class of mostly zero- and one-dimensional systems characterized by the simple pattern in which logarithmic divergences arise in particle-particle and particle-hole bubbles of perturbative diagrams for two-particle correlation functions.
This class includes a model for X-ray absorption in metals \cite{Mahan67}, the Kondo model \cite{Abrikosov65}, and the Fermi gas model for one-dimensional conductors \cite{Solyom79}.
Technically, the divergences appear when the integral of a Green-function resolvent with respect to single-particle energy is cut off by the Fermi edge of the level occupancy.
Physically, this phenomenon is reflected by the parameter dependence of a related susceptibility.
This can be given, e.g., by a power law with an exponent that depends on the interaction: an expansion in powers of the interaction then leads to the logarithmic contributions.
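As a schematic illustration (the shorthands $\Delta$ for the distance to the divergence, $\xi_0$ for a high-energy cutoff, and $L$ for the logarithm are introduced only here), expanding such a power law in powers of a dimensionless interaction $g$ produces exactly one additional logarithm per order:

```latex
\begin{equation}
\left( \frac{\xi_0}{\Delta} \right)^{2 g}
= e^{2 g L}
= \sum_{n=0}^{\infty} \frac{(2 g)^n}{n!} \, L^n ,
\qquad L = \ln \frac{\xi_0}{\Delta} .
\end{equation}
```

At order $n$ the leading contribution is $\propto g^n L^n$; terms of the type $g^{n+1} L^n$ are subleading corrections that can be neglected for small $g$ as long as $g L$ is not large.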
The leading-logarithmic parquet approximation provides a means to compute the correlation function in the vicinity of the divergence.
The underlying reasoning is as follows.
The divergent terms resulting from perturbative diagrams of different order and structure depend on different powers of the interaction and of the logarithms.
For small interaction and not too close to the divergence of the logarithms, the diagrams can be grouped into leading contributions and negligible corrections.
Summing the ``leading logarithms'' then yields a controlled approximation in this regime.
For the class of models which we focus on, all leading contributions are contained in the so-called parquet diagrams (with bare lines), which comprise the ladder diagrams in the particle-particle and particle-hole channels and also diagrams that result from crossing the channels.
The strategy to derive the leading-logarithmic approximation from the leading contributions of all parquet diagrams was first implemented for meson scattering \cite{Diatlov57}.
Well-known realizations of this concept for low-dimensional condensed-matter systems include the application to the problem of X-ray absorption in metals \cite{Roulet69}, to the Kondo problem \cite{Abrikosov65}, and to fermions in one dimension with a short-ranged interaction \cite{Bychkov66}.
In the last-mentioned application, the leading-logarithmic approximation seems to wrongly predict a finite-temperature phase transition for attractive interaction.
This is, however, beyond its regime of applicability: at low temperatures the neglected lower-order logarithmic contributions become important \cite{Solyom79}.
Apart from the \emph{leading-logarithmic} version of the parquet approximation, there exists another well-established one \cite{DeDominicis64, Bickers91}, which we refer to as the \emph{full} parquet approximation.
While both versions are based on parquet diagrams, they differ in technical aspects, in the kind of system they are typically applied to, and in the justification for their use.
The technical considerations of this paper concern specifically the leading-logarithmic parquet approximation.
However, in the next paragraph we briefly describe the full parquet approximation since a clear understanding of the differences of both approaches will become necessary to classify our findings and relate them to the multiloop functional renormalization group (multiloop FRG).
In our class of mostly zero- and one-dimensional models, each propagator bubble in one of the relevant channels produces a simple logarithmic divergence.
The situation is more complicated for many two-dimensional models of correlated fermions: depending on the system parameters, on the filling, and on the channel under consideration, the bubbles of two-dimensional models can feature either no divergence or a logarithmic or a squared logarithmic one, see, e.g., Refs.~\cite{Irkhin01, Katanin09}.
It is then more involved to identify the leading contributions. Furthermore, in order to decide on the existence and location of phase transitions, subleading contributions might be relevant.
Nonetheless, summing up all parquet diagrams is a well-known approximation strategy for two-dimensional problems \cite{DeDominicis64, Bickers91}.
The typically applied full parquet approximation takes into account all particle-hole channels, uses propagator lines that are dressed with a self-energy determined self-consistently from a Schwinger-Dyson equation, and evaluates the exact sum of all parquet diagrams.
In these aspects it differs from the leading-logarithmic parquet approximation for systems with simple logarithmic divergences, which takes into account only the leading-logarithmic particle-hole channel, uses bare propagator lines, and evaluates only the leading-logarithmic part of the corresponding parquet diagrams.
While in the leading-logarithmic parquet approximation the totally irreducible vertex is replaced by the bare one, there exist extensions of the full parquet approximation which use more involved approximations for the totally irreducible vertex: in the parquet dynamical vertex approximation \cite{Toschi07}, e.g., it is approximated by the local vertex resulting from dynamical mean-field theory.
Due to the complicated logarithmic structure, the full parquet approximation for two-dimensional systems is usually not known to be controlled.
It is still considered to be beneficial as it includes fluctuations in different channels of pair propagation in an unbiased way, respects the crossing symmetry \cite{Bickers91} and related sum rules \cite{Vilk97}, satisfies one-particle conservation laws \cite{Kugler18c}, and is understood to comply with the Mermin-Wagner theorem \cite{Bickers92}.
Such arguments may also motivate the use of the full parquet approximation for other than two-dimensional models.
For example, Ref.~\cite{Pudleiner19} shows an application of this approximation to a model for a benzene molecule.
When the full parquet approximation is applied to models for which the leading-logarithmic parquet approximation is controlled, both approximations coincide on the leading-logarithmic level.
In this paper we examine the relation between the leading-logarithmic parquet approximation and a specific renormalization group (RG) approximation.
Indeed, scaling arguments and the RG provide another approach to interacting fermionic systems in low dimensions.
Historically, the development of these techniques for zero- and one-dimensional systems was driven by the quest for approximations beyond the parquet-based leading-logarithmic one \cite{Gruner74, Hewson93, Solyom79}.
In early approaches field-theoretical RG techniques were applied to the Kondo
problem \cite{Abrikosov70, Fowler71}, to the weakly interacting one-dimensional Fermi gas \cite{Kimura73, Solyom79} (see as well \cite{Shankar94}), and also to the problem of X-ray absorption in metals \cite{Solyom74}.
In all these applications, the lowest-order approximation reproduced the leading-logarithmic result known from the respective parquet treatments \cite{Abrikosov65, Roulet69, Bychkov66}.
Also Anderson's ``poor man's scaling'' approach to the Kondo problem \cite{Anderson70, Solyom74a} and its application to fermions in one dimension \cite{Solyom79} reproduce the respective leading-logarithmic results.
For a specific two-dimensional model, the equivalence of a one-loop RG and a parquet approximation on the leading-logarithmic level is discussed in Ref.~\cite{Binz03}; the model has a nested Fermi surface but no van Hove singularity such that the bubbles produce simple logarithmic divergences only.
For general two-dimensional systems, an equivalence of one-loop RG and parquet approaches is not expected.
For our class of mostly zero- and one-dimensional models, however, the RG idea leads beyond the leading logarithms.
A cornerstone is the accurate description of the Kondo effect by Wilson's numerical RG \cite{Wilson75}.
But even an RG flow that is constructed just to account for the leading logarithms can lead to predictions beyond the realm of parquet approximations if it is understood as connecting models with identical low-energy properties.
The underlying concept of universality classes shaped today's understanding of one-dimensional interacting fermionic systems as being Luttinger liquids, Luther-Emery
liquids, or Mott insulators \cite{Solyom79, Voit95}.
Recently, the relation between the parquet approximations and the RG, now in form of the FRG \cite{Metzner12, Kopietz10, Platt13}, came again into focus, leading to the construction of the so-called multiloop FRG.
The starting point of that development was an FRG study of X-ray absorption in metals by Lange et al. \cite{Lange15}.
Using Hubbard-Stratonovich transformations and a low-order truncation scheme, Lange et al. reproduced the result of the leading-logarithmic parquet approximation \cite{Roulet69} for the X-ray response function.
Kugler and von Delft scrutinized this FRG approach and came to the conclusion that its success is fortuitous \cite{Kugler18}.
Among other criticism Kugler and von Delft point out that the scheme of Lange et al. only reproduces ladder diagrams.
Such a diagram-wise juxtaposition of FRG and parquet approximations is possible as the FRG flow can be interpreted on the level of individual, flowing diagrams \cite{Jakobs07}.
Kugler and von Delft expanded this idea rigorously and constructed a multiloop extension to a purely-fermionic one-loop FRG which makes it possible to approach the exact sum of all parquet diagrams via iterative one-loop extensions.
They provided schemes to approach the sum of parquet diagrams with either bare lines \cite{Kugler18a} or self-consistently dressed lines \cite{Kugler18b} and for different approximations for the totally irreducible vertex \cite{Kugler18c}.
As the multiloop FRG offers the possibility to compute self-consistent parquet-based approximations by solving flow equations, it is seen as a promising tool for the study of correlated two-dimensional systems.
Recently, it was combined with special approaches to the momentum dependence and high-frequency asymptotics of the two-particle vertex and applied to the two-dimensional Hubbard model \cite{Tagliavini19, Hille20a, Hille20}.
This made it possible to reduce the pseudo-critical temperature of antiferromagnetic ordering compared to one-loop FRG \cite{Tagliavini19}, to achieve numerical convergence to results of the full parquet approximation and of determinant quantum Monte Carlo up to moderate interaction strength \cite{Hille20a}, and to analyze pseudo-gap physics at weak coupling \cite{Hille20}.
Including multiloop corrections could also be beneficial for the RG study of two-dimensional quantum spin systems (in pseudo-fermion representation) because a two-loop extension was already found to attenuate the violation of the Mermin-Wagner theorem \cite{Rueck18}.
Furthermore, the multiloop scheme based on an irreducible vertex from dynamical mean-field theory could provide a viable alternative way to evaluate the parquet dynamical vertex approximation \cite{Rohringer18, Kugler18c}.
Whereas these applications concern the full parquet approximation for two-dimensional systems, Kugler and von Delft motivated and introduced the multiloop scheme in the context of X-ray absorption in metals \cite{Kugler18, Kugler18a}.
The interacting region in the corresponding model is zero-dimensional and the propagator bubbles in the two relevant channels produce simple logarithmic divergences.
Due to its basic structure, this model can in fact be seen as prototypical for the case that the parquet diagrams with bare lines comprise the leading logarithmic contributions.
Correspondingly, Nozi\`eres and collaborators understood their parquet study of this model as a preparation for the analysis of more complicated models with that structure of logarithms like the Kondo model \cite{Roulet69, Nozieres69}.
Formulated from the RG perspective, the model for X-ray absorption in metals is at the core of the class of (mostly zero- and one-dimensional) models for which a reasonably crafted lowest-order, i.e., one-loop, RG approximation is understood to be accurate and equivalent to the parquet approximations on the leading-logarithmic level.
This conventional conception is in surprising contrast to what Kugler and von Delft report from their multiloop FRG study of that model \cite{Kugler18a} -- namely that increasing the number of loops improves the numerical results with respect to the known solution of Nozi\`eres and collaborators \cite{Roulet69, Nozieres69, Nozieres69a}.
In an astounding twist, the multiloop approach of Ref.~\cite{Kugler18a} inverts the direction of the historical development of techniques for such models: while the RG was originally used as a conceptual framework to overcome the restrictions inherent to the leading-logarithmic parquet approximation, Ref.~\cite{Kugler18a} transforms its modern functional formulation into a tool to evaluate the exact sum of the parquet diagrams (in this case with bare lines and without the subleading particle-hole channel).
For the model under consideration, this sum constitutes no controlled improvement compared to the leading-logarithmic parquet approximation.
It differs subleadingly without being exact on a subleading level \cite{Nozieres69}.
Since Ref.~\cite{Kugler18a}, however, reports on improvements compared to one-loop FRG, there emerges the pressing question whether the latter might be deficient: is a one-loop FRG approximation without multiloop extensions really less accurate than the early implementations of RG and poor man's scaling that were already able to reproduce the leading-logarithmic parquet results?
In this paper we establish that for the considered class of systems with simple logarithmic divergences a reasonably crafted one-loop FRG approximation is fully equivalent to the leading-logarithmic parquet approximation.
We do so by constructing, specifically for the problem of X-ray absorption in metals, a one-loop FRG approximation that is in fact identical on a detailed level to the leading-logarithmic approximation procedure which was performed by Roulet et al. within the parquet formalism \cite{Roulet69}.
The only formal difference is that the cutoff is introduced at a different stage of the derivation without influencing the result.
From this viewpoint the two approaches actually fuse into one.
In order to allow for a detailed comparison to the leading-logarithmic parquet approximation of Roulet et al.~\cite{Roulet69}, we devise our FRG approximation in the framework of the real-time zero-temperature formalism, also known as ground-state formalism.
For brevity we refer to this formalism simply as the zero-temperature formalism.
As the FRG for condensed-matter systems was so far used within the Matsubara and the Keldysh formalism \cite{Metzner12}, we first need to transfer the formulation of the method to the zero-temperature formalism.
Our approach to that formalism is inspired by Ref.~\cite{Negele88} but differs in some respects.
In particular, we develop a functional-integral representation of the generating functional that is based on standard coherent states and is therefore easily applicable to the interacting case.
Then we perform the usual steps to derive the flow equations for the one-particle irreducible (1PI) vertex functions.
The paper is organized as follows.
We briefly introduce the model under investigation in Sec.~\ref{sec:model}.
The most important features of a perturbative approach to it are recapped in Sec.~\ref{sec:perturbation_theory}.
In particular, the occurrence of logarithmic divergences is discussed.
In Sec.~\ref{sec:FRG_general_models} we set up the FRG framework within the zero-temperature formalism for a general model.
Some details on deriving the diagrammatic expansion and the flow equations are relocated from this section to the Appendix.
The core of the paper is Sec.~\ref{sec:FRG_approach}, where we construct our one-loop FRG approximation and establish its full equivalence to the leading-logarithmic parquet approximation of Ref.~\cite{Roulet69}.
Finally, Sec.~\ref{sec:conclusion} provides a conclusion and outlook.
\section{Model}
\label{sec:model}
In this section we briefly introduce the model under consideration.
It is essentially taken from Ref.~\cite{Roulet69}, where more details can be found.
The investigated basic model provides a description of the X-ray-absorption singularity in metals.
It comprises two electronic bands: the conduction band and some lower-energy band.
The latter is assumed to be flat as it typically originates from atomic orbitals that are more localized.
As such it can be represented by a single so-called deep state.
The effect of intraband Coulomb interaction, which leads to long-lasting quasi-particle states, is assumed to be already accounted for by effective single-particle parameters.
The interaction that is considered explicitly is an attractive one between the conduction electrons and a hole at the deep state.
The electron spin is neglected.
This physical model is described by the Hamiltonian
\begin{equation}
\label{eq:Hamiltonian}
H = \sum_k \varepsilon_k a_k^\dagger a_k + \varepsilon_d a_d^\dagger a_d - \frac{U}{V} \sum_{kk'} a_{k'}^\dagger a_k a_d a_d^\dagger.
\end{equation}
Here, $a_d^\dagger$ creates an electron in the deep state and $a_k^\dagger$ creates an electron with momentum $k$ and energy $\varepsilon_k$ in the conduction band, which shall have a constant density of states $\rho$ and a bandwidth $2 \xi_0$.
We set the zero of single-particle energy in the middle of the conduction band such that $\varepsilon_k \in [- \xi_0, \xi_0]$.
Then the deep-state eigenenergy is $\varepsilon_d < - \xi_0$.
The interaction amplitude $U > 0$ is assumed to be momentum independent and thus describes a local interaction in real space.
$V$ denotes the volume.
We study the system at vanishing temperature $T=0$ and with a half-filled conduction band, i.e., the Fermi energy is $\varepsilon_F = 0$.
In the resulting ground state, the deep level is occupied as long as the interaction strength is not too large.
As an external perturbation, an X-ray field with frequency $\nu$ is coupled to the system with momentum-independent amplitude $W$,
\begin{equation}
H_X (t) = \frac{W}{\sqrt{V}} \sum_k e^{- \ii \nu t} a_k^\dagger a_d + \text{H.c.}
\end{equation}
$H_X (t)$ is chosen to describe only interband transitions because these are, in conjunction with the sharp Fermi surface and the flat lower band, responsible for the absorption singularity.
Correspondingly, we consider the X-ray frequency to be of the order of $|\varepsilon_d|$.
We use units with $\hbar = 1$.
A physical observable of interest is the X-ray absorption rate $R(\nu)$ or, equivalently, the excitation rate of the deep state.
When $\nu$ approaches the threshold frequency $\nu_c$, the leading behavior of $R(\nu)$ is a power-law divergence $\propto [\xi_0 / (\nu - \nu_c)]^{2 g}$ with $g = \rho U/V$.
This was conjectured by Mahan \cite{Mahan67} based on the terms up to third order of an expansion in powers of the interaction and later confirmed by Nozi\`eres and collaborators \cite{Roulet69,Nozieres69a}.
In linear response and for sufficiently small $|W|^2 / V$, the absorption rate can be accessed with many-body techniques via $R(\nu) = - 2|W|^2 \, \text{Im} \chi (\nu)$, where
\begin{equation}
\label{eq:ph_susceptibility}
\chi (\nu) = - \ii \frac{1}{V} \sum_{kk'} \int_{- \infty}^\infty \dd t \, e^{\ii \nu t} \left\langle \mathcal{T} a_d^\dagger (t) \, a_k (t) \, a_{k'}^\dagger (0) \, a_d (0) \right\rangle
\end{equation}
is a particle-hole susceptibility. Here, $\langle . \rangle$ denotes the ground-state expectation value, $a_{k/d}^{(\dagger)} (t)$ are the ladder operators in the Heisenberg picture with respect to the Hamiltonian (\ref{eq:Hamiltonian}), and $\mathcal{T}$ is the time-ordering operator.
A diagrammatic expansion of $\chi$ results in a power series in $U / V$.
Effectively, however, one obtains an expansion in powers of the dimensionless parameter $g = \rho U / V$ because for every additional interaction vertex in a diagram there is also one more independent momentum summation $\sum_k = \rho \int_{- \xi_0}^{\xi_0} \dd \varepsilon$.
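Written out schematically (with $F$ denoting the remaining integrand of an $n$-th order diagram; this is merely a restatement of the power counting), each factor $U / V$ pairs with one independent momentum sum:

```latex
\begin{equation}
\left( \frac{U}{V} \right)^{\! n} \sum_{k_1 \ldots k_n} F \! \left( \varepsilon_{k_1}, \ldots, \varepsilon_{k_n} \right)
= g^n \int_{- \xi_0}^{\xi_0} \dd \varepsilon_1 \cdots \int_{- \xi_0}^{\xi_0} \dd \varepsilon_n \, F \! \left( \varepsilon_1, \ldots, \varepsilon_n \right) .
\end{equation}
```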
We note that a many-body approach is not necessary to treat this model.
In fact, it has been solved exactly by applying a one-body scattering theory \cite{Nozieres69a}.
This is possible because the particular interaction term in Eq.~(\ref{eq:Hamiltonian}) does not alter the deep-state occupancy and acts just as a single-particle potential for the conduction states when the deep level is empty.
However, if one chooses to treat this model with many-body perturbation theory, one encounters the interesting problem of logarithmic divergences in two distinct channels (see also Sec.~\ref{sec:logarithmic_divergences}).
Being spinless and effectively zero-dimensional, it is probably the most basic model with this important feature.
Therefore, it was repeatedly used as a test bed to refine and compare various many-body approaches \cite{Roulet69,Nozieres69,Solyom74,Lange15,Kugler18a}.
Having an exact solution for comparison was then an additional advantage of this model.
If the system is prepared in a state with empty deep level, the X-ray field can induce the relaxation of an electron from the conduction band to the deep level.
This process is accompanied by X-ray emission.
In Ref.~\cite{Roulet69} the corresponding rate of stimulated X-ray emission is studied in close analogy to the rate of X-ray absorption within the zero-temperature formalism, see also our appendix \ref{sec:generalization_of_zero-temperature_formalism}.
On the leading-logarithmic level, the main part of the calculation turns out to be identical in both cases \cite{Roulet69}.
In this paper we focus on the case of absorption.
By following the arguments of Ref.~\cite{Roulet69}, all our considerations can be straightforwardly adapted to the case of emission.
\section{Perturbation theory within zero-temperature formalism}
\label{sec:perturbation_theory}
In this section we recap the most important features of a perturbative approach to the model.
Following largely Roulet et al.~\cite{Roulet69}, we choose the (real-time) zero-temperature formalism \cite{Negele88} as framework for the
diagrammatic expansion.
Our one-loop FRG approach developed in Sec.~\ref{sec:FRG_approach} is also formulated in the realm of this formalism.
This makes a detailed comparison between the parquet-based approach of Ref.~\cite{Roulet69} and the one-loop FRG approximation possible.
\subsection{Single-particle Green function}
\label{sec:single-particle_Green_function}
We choose to dress the propagator with the first-order contribution of the self-energy, resulting in
\begin{subequations}
\label{eq:propagators}
\begin{eqnarray}
\label{eq:deep-state_propagator} G_d^H (\omega) &=& \frac{1}{\omega - \tilde{\varepsilon}_d - \ii 0^+}\\
G_k^0 (\omega) &=& \frac{1}{\omega - \varepsilon_k + \ii 0^+ \sgn \varepsilon_k}
\end{eqnarray}
\end{subequations}
for the deep state and the conduction states, respectively.
Here, the deep-state Hartree self-energy has renormalized the deep level to $\tilde{\varepsilon}_d = \varepsilon_d + g \xi_0$.
No Fock contributions to the self-energy arise in this model: they would involve a free propagation between the deep state and a conduction state, which is not admitted by the free Hamiltonian.
For the conduction states, a single-particle perturbation $\propto - U / V$ arises when the interaction term in the Hamiltonian (\ref{eq:Hamiltonian}) is brought into the standard form by permuting all creation operators to the left.
It exactly cancels with the conduction-state Hartree self-energy.
This cancellation reflects that electrons in the conduction band do not interact with an electron occupying the deep state but only with a hole at the deep state; in fact, for this reason any (time-dependent) ground-state expectation value involving only conduction-state ladder operators is not affected by the interaction from Eq.~(\ref{eq:Hamiltonian}).
In the deep-state subspace, the Hartree-dressed propagator (\ref{eq:deep-state_propagator}) is analytic in the lower half-plane and thus purely advanced.
The same holds for the full deep-state propagator.
In time representation it takes the form
\begin{equation}
G_d (t) = - \ii \left\langle \mathcal{T} a_d (t) a_d^\dagger \right\rangle = \ii \Theta (-t) \left\langle a_d^\dagger a_d (t) \right\rangle
\end{equation}
so that it is directed backwards in time.
This can be understood as creating and subsequently annihilating a hole at the deep state.
In the following computations of two-particle quantities, we are not going to include additional self-energy contributions so that the propagator (\ref{eq:propagators}) will be used as the full single-particle Green function.
This is in fact correct for the conduction states because loops of two or more deep-state propagators vanish due to $G_d (t) \propto \Theta (-t)$, hence $G_k^0 = G_k$.
But in the case of the deep state, it is an approximation.
This does not influence the shape of the divergence of $\chi (\nu)$ as far as the leading logarithms are concerned \cite{Roulet69}.
However, it influences the threshold frequency which constitutes the position of the divergence and which is in our calculations $\nu_c = - \tilde{\varepsilon}_d = |\tilde{\varepsilon}_d|$.
The further discussion would also be possible after including other real, static contributions to the self-energy.
In that case only the specific value of the renormalized deep level $\tilde{\varepsilon}_d$ would differ.
Anyway, we are going to set $\tilde{\varepsilon}_d = 0$, see Sec.~\ref{sec:setting_renormalized_deep_level_to_zero}, and focus on investigating the shape of $\chi (\nu)$ near threshold.
The bare vertex has an incoming and outgoing leg for the deep state and an incoming and outgoing leg for the conduction states, but it has no actual dependence on the momenta.
Thus, all momentum summations are independent of each other and they can be performed immediately by employing the local conduction-electron propagator
\begin{subequations}
\begin{eqnarray}
\hspace{-1em} G_c (\omega) &=& \frac{1}{V} \sum_k G_k (\omega)\\
&=& \frac{\rho}{V} \left[ \ln \frac{|\xi_0 + \omega|}{|\xi_0 - \omega|} - \ii \pi \sgn (\omega) \Theta (\xi_0 - |\omega|) \right] .
\end{eqnarray}
\end{subequations}
In this sense the model is effectively zero-dimensional.
[In the exact analytic evaluation of diagrams, e.g., for Eq.~(\ref{eq:bare_bubbles}), it can still be helpful to integrate over frequencies before summing over momenta.]
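The closed form of $G_c$ lends itself to a quick numerical check. The following sketch (in Python; units with $\xi_0 = 1$ and $\rho / V = 1$, the infinitesimal $0^+$ replaced by a small finite broadening $\eta$; these choices, as well as the function names, are made only for this illustration) compares a dense Riemann sum over the band with the closed form above:

```python
import numpy as np

# Illustrative choices: half bandwidth xi0 = 1, rho/V = 1, and the
# infinitesimal 0^+ replaced by a small finite broadening eta.
xi0, rho_over_V, eta = 1.0, 1.0, 1e-3

# Dense discretization of the conduction band, edges excluded.
eps = np.linspace(-xi0, xi0, 200001)[1:-1]
deps = eps[1] - eps[0]

def G_c_sum(omega):
    """(1/V) sum_k G_k(omega) evaluated as a Riemann sum over band energies."""
    return rho_over_V * np.sum(deps / (omega - eps + 1j * eta * np.sign(eps)))

def G_c_closed(omega):
    """Closed form: (rho/V) [ ln(|xi0+w|/|xi0-w|) - i pi sgn(w) Theta(xi0-|w|) ]."""
    re = np.log(abs(xi0 + omega) / abs(xi0 - omega))
    im = -np.pi * np.sign(omega) * (abs(omega) < xi0)
    return rho_over_V * (re + 1j * im)
```

Inside the band the sum reproduces both the logarithm and the jump $- \ii \pi \sgn (\omega)$ of the imaginary part; outside the band the imaginary part vanishes as $\eta \to 0$.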
\subsection{1PI two-particle vertex}
The particle-hole susceptibility (\ref{eq:ph_susceptibility}), when expressed in terms of the 1PI two-particle vertex $\gamma$, can be calculated from
\begin{eqnarray}
\label{eq:ph-susceptibility_with_1PI_vertex}
\nonumber \chi (\nu) &=& - \ii \int \frac{\dd \omega}{2 \pi} \, G_d (\omega) G_c (\omega + \nu)\\
&& + \int \frac{\dd \omega \dd \omega'}{(2 \pi)^2} \, G_d (\omega) G_c (\omega + \nu) \bar{\gamma} (\omega, \omega'; \nu)\\
\nonumber && \hphantom{+ \int} \times G_d (\omega') G_c (\omega' + \nu).
\end{eqnarray}
The diagrammatic representation of this formula is shown in Fig.~\ref{fig:ph-susceptibility}.
\begin{figure}
\includegraphics[scale=0.75]{ph-susceptibility.pdf}
\caption{\label{fig:ph-susceptibility}
Diagrammatic representation of Eq.~(\ref{eq:ph-susceptibility_with_1PI_vertex}).
Full lines refer to local conduction-electron propagators $G_c$, fat dashed lines to deep-state propagators $G_d$.
The circle stands for the 1PI two-particle vertex.
The three-leg vertices involving each a full, dashed, and wavy line conserve frequency, but do not contribute any factor.}
\end{figure}
Throughout this paper we draw full lines for local conduction-electron propagators and dashed lines for deep-state propagators.
The 1PI vertex $\gamma_{d k'|d k}$ does not depend on the incoming and outgoing conduction-electron momentum $k$ and $k'$, respectively, because the interaction amplitude does not depend on the momenta, see Eq.~(\ref{eq:Hamiltonian}).
Therefore, we introduce the notation $\gamma_{d c|d c} = V \gamma_{d k'|d k}$.
For the frequency arguments, we employed in Eq.~(\ref{eq:ph-susceptibility_with_1PI_vertex}) the notation
\begin{subequations}
\label{eq:frequency_notation}
\begin{eqnarray}
\nonumber && \gamma_{d c|d c} (\omega_d', \omega_c'| \omega_d, \omega_c)\\
\label{eq:frequency_notation_pp} &=& 2 \pi \delta (\omega_d' + \omega_c' - \omega_d - \omega_c) \hat{\gamma} (\omega_d, \omega_d'; \Omega)\\
\label{eq:frequency_notation_ph} &=& 2 \pi \delta (\omega_d' + \omega_c' - \omega_d - \omega_c) \bar{\gamma} (\omega_d, \omega_d'; X),
\end{eqnarray}
\end{subequations}
see also Fig.~\ref{fig:1PI_vertex}.
\begin{figure}
\includegraphics[scale=0.75]{1PI_vertex.pdf}
\caption{\label{fig:1PI_vertex}
Diagrammatic representation of the 1PI vertex $\gamma_{d c|d c} (\omega_d', \omega_c'| \omega_d, \omega_c)$.
The external legs are meant to be amputated.
Frequency conservation assures $\omega_d + \omega_c = \omega_d' + \omega_c'$.
As independent frequencies we will employ $\omega_d, \omega_d'$, and either the total frequency $\Omega$ or the exchange frequency $X$, but not the conduction-state frequencies $\omega_c, \omega_c'$.}
\end{figure}
Here, either the total frequency $\Omega = \omega_d + \omega_c = \omega'_d + \omega'_c$ or the exchange frequency $X = \omega'_c - \omega_d = \omega_c - \omega'_d$ has been chosen as one of the independent frequencies.
In Eq.~(\ref{eq:frequency_notation_ph}) we have chosen the order of the frequencies $\omega_d, \omega_d'$ in $\bar{\gamma} (\omega_d, \omega_d'; X)$ to match the order of the frequencies in Fig.~\ref{fig:ph-susceptibility}.
\subsection{Setting $\tilde{\varepsilon}_d = 0$}
\label{sec:setting_renormalized_deep_level_to_zero}
In the following calculation of $\chi (\nu)$, we set the renormalized deep level to $\tilde{\varepsilon}_d = 0$.
This is equivalent to measuring the X-ray frequency $\nu$ relative to the threshold frequency $|\tilde{\varepsilon}_d|$.
It is a convenient way to eliminate one of the parameters; the same was done by Roulet et al.~\cite{Roulet69}.
We present the reasoning behind this step by relating it to a Ward identity.
We intend to build on this brief discussion in a future publication, which addresses the same topic in the framework of the Matsubara formalism.
Let us consider a diagram contributing to $\chi (\nu) = \chi (\nu, \tilde{\varepsilon}_d)$ which arises when diagrams for the 1PI vertex and for the full deep-state lines are inserted into Fig.~\ref{fig:ph-susceptibility}.
We may choose the frequencies of the internal lines in accordance with frequency conservation such that the external frequency $\nu$ appears as addend in the frequency argument of every conduction-state propagator, but not of any deep-state propagator.
Subtracting a frequency $\alpha$ from the frequency arguments of all conduction- and deep-state propagators respects frequency conservation and does not alter the value of the diagram.
Then $\nu$ appears only in the conduction-state propagators, always in the form $\nu - \alpha$, and $\tilde{\varepsilon}_d$ appears only in the deep-state propagators, always in the form $\tilde{\varepsilon}_d + \alpha$.
This proves the Ward identity
\begin{equation}
\label{eq:zero-temperature_Ward_identity}
\chi (\nu, \tilde{\varepsilon}_d) = \chi (\nu - \alpha, \tilde{\varepsilon}_d + \alpha),
\end{equation}
which results from frequency conservation (i.e., time-translational invariance) and the conservation of the number of conduction- and deep-state electrons.
Equation (\ref{eq:zero-temperature_Ward_identity}) relates the susceptibilities of two models with respective values $\tilde{\varepsilon}_d$ and $\tilde{\varepsilon}_d + \alpha$ of the renormalized deep level.
These susceptibilities are defined in the state with a filled lower half of the conduction band and an occupied deep state.
For the model with deep-state energy $\tilde{\varepsilon}_d + \alpha$ and for sufficiently large $\alpha$, this is not the ground state of the interacting Hamiltonian.
Nonetheless, the zero-temperature formalism makes it possible to compute expectation values in this state because it is an eigenstate of both the noninteracting and the interacting Hamiltonian, see appendix \ref{sec:generalization_of_zero-temperature_formalism}.
When $\chi$ is determined accordingly, the identity (\ref{eq:zero-temperature_Ward_identity}) holds for all real frequencies $\alpha$ and even on the level of individual diagrams.
As a special case, we obtain $\chi (\nu - \tilde{\varepsilon}_d, \tilde{\varepsilon}_d) = \chi (\nu , 0)$, where $\nu$ is now the deviation of the X-ray frequency from the threshold frequency $|\tilde{\varepsilon}_d|$.
\subsection{Logarithmic divergences}
\label{sec:logarithmic_divergences}
The second-order contributions to the 1PI vertex $\gamma_{d c| d c}$ are shown in Fig.~\ref{fig:bare_bubbles}.
\begin{figure}
\includegraphics[scale=0.75]{bare_bubbles.pdf}
\caption{\label{fig:bare_bubbles}
Second-order contributions to the 1PI vertex $\gamma_{d c| d c}$.
The small, empty circle represents the bare vertex contributing a factor $U$.
The thin dashed lines refer to Hartree-dressed deep-state propagators $G_d^H$.
In each channel the respective natural frequency has been employed.
(a) Particle-hole bubble depending on the exchange frequency $X$.
(b) Particle-particle bubble depending on the total frequency $\Omega$.}
\end{figure}
They are the bare bubbles in the (exchange) particle-hole channel and in the particle-particle channel, which is, strictly speaking, a hole-hole channel, with the exact values
\begin{subequations}
\label{eq:bare_bubbles}
\begin{equation}
\label{eq:ph-bubble}
- g^2 \frac{V}{\rho} \left[ \ln \frac{|X|}{|\xi_0 - X|} - \ii \pi \, \Theta (\xi_0 - X) \Theta (X) \right]
\end{equation}
and
\begin{equation}
g^2 \frac{V}{\rho} \left[ \ln \frac{|\Omega|}{|\xi_0 + \Omega|} - \ii \pi \, \Theta (\xi_0 + \Omega) \Theta (- \Omega) \right] ,
\end{equation}
\end{subequations}
respectively.
As the frequencies at the external legs in Fig.~\ref{fig:bare_bubbles} have been chosen to already obey frequency conservation, there are no factors $2 \pi \delta (\ldots)$ in Eq.~(\ref{eq:bare_bubbles}).
The direct particle-hole bubble does not contribute to $\gamma_{d c| d c}$, but only to $\gamma_{d d| d d}$. The latter vertex is not considered here because its contribution is subleading \cite{Roulet69}.
Importantly, the bubbles (\ref{eq:bare_bubbles}) in both the (exchange) particle-hole and particle-particle channel feature a logarithmic divergence as their natural frequency $X$ or $\Omega$, respectively, approaches zero.
[There are also divergences for $X \to \xi_0$ and $\Omega \to - \xi_0$.
Those, however, turn out to be not important for $\chi (\nu)$ at small $\nu$.]
These diverging logarithms arise via the combination of the real part $\mathcal{P} \, 1 / \omega$ of the deep-state propagator with the discontinuous imaginary part $- \pi \rho \sgn (\omega) \Theta (\xi_0 - |\omega|) / V$ of the local conduction-electron propagator, e.g.,
\begin{eqnarray}
\label{eq:origin_of_logarithm}
\nonumber && \int \frac{\dd \tilde{\omega}}{2 \pi} \, \mathcal{P} \frac{1}{\tilde{\omega}} \, \sgn (\tilde{\omega} + X) \Theta (\xi_0 - |\tilde{\omega} + X|)\\
&=& - \frac{1}{\pi} \ln \frac{|X|}{\xi_0} + O \! \left( \frac{|X|}{\xi_0} \right)^2 .
\end{eqnarray}
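To make the expansion explicit: for $0 < X < \xi_0$, the integrand in Eq.~(\ref{eq:origin_of_logarithm}) has support $\tilde{\omega} \in (- \xi_0 - X, \xi_0 - X)$ and changes sign at $\tilde{\omega} = - X$, so that the integral equals
\begin{eqnarray}
\nonumber && \frac{1}{2 \pi} \left( - \int_{- \xi_0 - X}^{- X} \frac{\dd \tilde{\omega}}{\tilde{\omega}} + \mathcal{P} \! \int_{- X}^{\xi_0 - X} \frac{\dd \tilde{\omega}}{\tilde{\omega}} \right)\\
&=& \frac{1}{2 \pi} \ln \frac{\xi_0^2 - X^2}{X^2} = - \frac{1}{\pi} \ln \frac{X}{\xi_0} + \frac{1}{2 \pi} \ln \left( 1 - \frac{X^2}{\xi_0^2} \right) ,
\end{eqnarray}
which exhibits the stated corrections of order $(X / \xi_0)^2$.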
It is known that such logarithmic divergences appear for all powers of the interaction.
For the 1PI vertex, the \emph{leading} logarithms have the form $g^n \ln^{n-1}$ while subleading contributions contain at least one logarithm less.
Here, the arguments of the logarithms are essentially $|X| / \xi_0$ or $|\Omega| / \xi_0$ depending on the channel.
The arguments can also depend on the other free frequencies $\omega, \omega'$ if the corresponding external deep-state leg is not attached to the same vertex as one of the external conduction-state legs [see, e.g., Fig.~\ref{fig:crossed_bubble} and Eq.~(\ref{eq:crossed_bubble_result})].
\begin{figure}
\includegraphics[scale=0.75]{crossed_bubble.pdf}
\caption{\label{fig:crossed_bubble}
A third-order diagram contributing to $\bar{\gamma} (\omega, \omega'; X)$.
Dashed lines refer to Hartree-dressed deep-state propagators $G_d^H$.
From frequency conservation follows $\Omega_i = \omega_o + \omega + X$.}
\end{figure}
All of the leading logarithms are contained within the parquet diagrams without any self-energy insertions (except for static contributions absorbed into $\tilde{\varepsilon}_d$) \cite{Roulet69}.
These diagrams can be constructed by starting with the bare vertex and successively replacing any vertex with either of the bubbles given in Fig.~\ref{fig:bare_bubbles}.
The leading logarithms appearing in the expansion of the particle-hole susceptibility $\chi (\nu)$, which directly follow from those of $\gamma$ via Eq.~(\ref{eq:ph-susceptibility_with_1PI_vertex}), assume the form $g^n \ln^{n+1} (|\nu| / \xi_0)$.
In comparison to the 1PI vertex, two additional powers of the logarithm arise from the two external bubbles in the right diagram in Fig.~\ref{fig:ph-susceptibility}.
Close to the threshold $\nu = 0$ where $g \ln (|\nu| / \xi_0)$ is not much smaller than one, these terms are significant for arbitrary powers $n$ even though $g$ itself is small.
To approximate the behavior of $\chi (\nu)$ in a reasonable way, it is then necessary to resum the leading logarithms of all orders.
Even closer to the threshold, i.e., as $\nu$ goes to zero, $g \ln (|\nu| / \xi_0)$ increases further until also subleading logarithms become large and must be included.
However, in this paper only the resummation of the leading logarithms is discussed.
As an example consider the zeroth-order contribution to $\chi (\nu)$, which is contained in the first addend on the right-hand side of Eq.~(\ref{eq:ph-susceptibility_with_1PI_vertex}).
It can be obtained from the result of the particle-hole bubble given in Eq.~(\ref{eq:ph-bubble}) with $X = \nu$ by replacing the prefactor of the square brackets with $\rho / V$.
The leading logarithm $g^0 \ln(|\nu| / \xi_0)$ then appears only in the real part.
In fact, the leading logarithms of all orders appear only in the real part of the particle-hole susceptibility \cite{Roulet69}.
If one employs a scheme to capture just the leading logarithms, one therefore has to recover the imaginary part in order to determine the absorption rate.
This can be done as outlined by Roulet et al.~\cite{Roulet69}.
It is important to note that it is not necessary to include the exact value of any given parquet diagram in the resummation scheme in order to obtain a valid leading-logarithmic approximation.
Instead, one can make further approximations as long as they do not influence the leading logarithms.
That the parquet diagrams indeed contain subleading contributions is already obvious from the exact results (\ref{eq:bare_bubbles}) for the bubbles, where, e.g.,
\begin{equation}
\ln \frac{|X|}{|\xi_0 - X|} = \ln \frac{|X|}{\xi_0} - \ln \frac{|\xi_0 - X|}{\xi_0}.
\end{equation}
\subsection{Third-order contribution as key example}
\label{sec:third-order_example}
Roulet et al. worked out how to extract the leading contribution from a given parquet diagram \cite{Roulet69}.
We briefly recap their scheme by applying it to the third-order diagram of $\bar{\gamma} (\omega, \omega'; X)$ shown in Fig.~\ref{fig:crossed_bubble}.
The achieved insights will form the basis for the construction of an RG treatment in Sec.~\ref{sec:FRG_approach}.
The parts of the propagators that do not give rise to the diverging logarithm in a bare bubble are neglected, i.e., only the real part of the deep-state propagator and the imaginary part of the local conduction-electron propagator are retained.
The diagram in Fig.~\ref{fig:crossed_bubble} then translates into
\begin{eqnarray}
\label{eq:crossed_bubble_expression}
\nonumber c \int \frac{\dd \omega_o \, \dd \omega_i}{(2 \pi)^2} \mathcal{P} \frac{1}{\omega_o} \sgn (\omega_o + X) \Theta (\xi_0 - |\omega_o + X|)\\
\times \mathcal{P} \frac{1}{\omega_i} \sgn (\Omega_i - \omega_i) \Theta(\xi_0 - |\Omega_i - \omega_i|)
\end{eqnarray}
with the prefactor $c = \pi^2 g^3 V / \rho$ and with the abbreviation $\Omega_i = \Omega_i (\omega_o, \omega, X) = \omega_o + \omega + X$.
The indices $i$ and $o$ refer to the ``inner'' and ``outer'' bubble of the diagram, respectively.
When (a part of) a diagram can be constructed by replacing a vertex at one end of some bubble with another bubble (or chain of bubbles) of the opposite channel, we call the latter bubble (or chain of bubbles) the inner one and the former bubble the outer one.
Repeating this construction establishes the strict partial order ``being inner to'' among the bubbles of a parquet diagram.
At the end of this subsection, we will conclude that there is a related order among the absolute values of the deep-state frequencies of the bubbles as far as the leading-logarithmic approximation is concerned.
The integral over the frequency of the inner bubble in Eq.~(\ref{eq:crossed_bubble_expression}) yields
\begin{subequations}
\begin{eqnarray}
\nonumber && \int \frac{\dd \omega_i}{2 \pi} \, \mathcal{P} \frac{1}{\omega_i} \sgn(\Omega_i - \omega_i) \Theta(\xi_0 - |\Omega_i - \omega_i|)\\
\label{eq:internal_bubble_short_result} &=& \frac{1}{\pi} \ln \frac{|\Omega_i|}{\sqrt{|\xi_0^2 - \Omega_i^2|}}\\
\label{eq:internal_bubble_break_down} &=& \frac{1}{\pi} \left( \ln \frac{M}{\xi_0} + \ln \frac{|\Omega_i|}{M} + \ln \frac{\xi_0}{\sqrt{|\xi_0^2 - \Omega_i^2|}} \right)
\end{eqnarray}
\end{subequations}
with $M = M(\omega_o, \omega, X) = \max \{ |\omega_o|, |\omega|, |X| \}$.
In the particular case $|X| < |\omega| \ll \xi_0$, this result can be approximated by $[\ln (M / \xi_0)] / \pi$ when inserted into Eq.~(\ref{eq:crossed_bubble_expression}): the other two logarithmic addends contribute only subleadingly, cf. Ref.~\cite{Roulet69}.
The leading-logarithmic approximation to the value of the diagram is hence
\begin{eqnarray}
\label{eq:crossed_bubble_result}
\nonumber && \frac{c}{2 \pi^2} \int_{- \xi_0 - X}^{\xi_0 - X} \dd \omega_o \mathcal{P} \frac{1}{\omega_o} \sgn (\omega_o + X) \ln \frac{\max \{ |\omega_o|, |\omega| \} }{\xi_0}\\
& \approx & - \frac{1}{2} g^3 \frac{V}{\rho} \ln \frac{|\omega|}{\xi_0} \left( \ln \frac{|X|}{|\omega|} + \ln \frac{|X|}{\xi_0} \right)
\end{eqnarray}
for $|X| < |\omega| \ll \xi_0$, a result which is $\propto g^3 \ln^2$ as expected.
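The two logarithms in this result can be traced to the regions $|\omega_o| < |\omega|$ and $|\omega| < |\omega_o| < \xi_0$, respectively. For $0 < X < \omega \ll \xi_0$ and up to subleading terms (in particular, neglecting the shift of the integration boundaries by $X$), one finds
\begin{subequations}
\begin{eqnarray}
\mathcal{P} \! \int_{- \omega}^{\omega} \frac{\dd \omega_o}{\omega_o} \, \sgn (\omega_o + X) &=& 2 \ln \frac{\omega}{X} ,\\
\int_{\omega < |\omega_o| < \xi_0} \frac{\dd \omega_o}{\omega_o} \, \sgn (\omega_o) \ln \frac{|\omega_o|}{\xi_0} &=& - \ln^2 \frac{\omega}{\xi_0} .
\end{eqnarray}
\end{subequations}
Multiplying the first line by $\ln (\omega / \xi_0)$, adding the second, and including the prefactor $c / (2 \pi^2) = g^3 V / (2 \rho)$ reproduces the right-hand side of Eq.~(\ref{eq:crossed_bubble_result}).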
It is illuminating to identify the particular subregion of frequency integration that is responsible for this leading-logarithmic result.
For $|\Omega_i| \le \xi_0 / 2$ the range $\omega_i \in [- |\Omega_i|, |\Omega_i|]$ does not contribute to the value of the integral in Eq.~(\ref{eq:internal_bubble_short_result}); this results from the combination of the principal value and the sign function in the integrand.
Similarly, approximating the integral by $[\ln (M / \xi_0)] / \pi$ means to restrict the range of integration to $M < |\omega_i| < \xi_0$.
The leading contribution actually results from the small frequencies with $M < |\omega_i| \ll \xi_0$.
Larger $|\omega_i|$ are not important, e.g., the range $\xi_0 / 10 < |\omega_i| < \xi_0$ yields only the subleading contribution $- (\ln 10) / \pi$.
Very similarly, the relevant integration range of the frequency $\omega_o$ is $|X| < |\omega_o| \ll \xi_0$.
Two important conclusions can be drawn from these observations.
Firstly, it is indeed sufficient to consider only the case $|X| < |\omega| \ll \xi_0$.
If the diagram in Fig.~\ref{fig:crossed_bubble} is inserted for the 1PI vertex in the representation of $\chi (\nu)$ shown in Fig.~\ref{fig:ph-susceptibility}, $X$ assumes the value of the X-ray frequency $\nu$; the leading-logarithmic behavior near threshold, which we are interested in, emerges then for $|X| = |\nu| \ll \xi_0$.
Furthermore, when the argument used above for $\omega_i$ and $\omega_o$ is applied to the additional $\omega$-integration that appears in the diagram for $\chi (\nu)$, it shows that the relevant frequencies $\omega$ are from the range $|X| < |\omega| \ll \xi_0$ as well.
The same reasoning is possible if the diagram in Fig.~\ref{fig:crossed_bubble} is not directly inserted for the 1PI vertex in Fig.~\ref{fig:ph-susceptibility}, but is used as part of a larger parquet diagram which in turn is inserted for that 1PI vertex.
Secondly, the restriction to $M < |\omega_i|$ with $M = \max \{ |\omega_o|, |\omega| \}$ implies $|\omega_o| < |\omega_i|$.
For the leading logarithmic contribution, it hence suffices to integrate with respect to the deep-state frequency of the inner bubble over only those regions where its absolute value is greater than the one of the deep-state frequency of the outer bubble.
This statement can be generalized to all parquet diagrams and all pairs of bubbles where one is inner to the other; corresponding observations are described in Refs.~\cite{Abrikosov65,Binz03}.
This is the very property of the parquet diagrams that enables our \emph{one-loop} FRG approach to reproduce the leading-logarithmic approximation.
\section{FRG in zero-temperature formalism for a general model}
\label{sec:FRG_general_models}
In this section we develop a formulation of the FRG method \cite{Metzner12, Kopietz10, Platt13} in the framework of the (real-time) zero-temperature formalism, also known as ground-state formalism.
Since we are not aware of an FRG scheme in the literature that is based on this formalism, we present the derivation of FRG flow equations for a general model of interacting fermions.
Readers who are not interested in details on how to establish a zero-temperature FRG can skip this section.
Its central results, which are subsequently used in Sec.~\ref{sec:FRG_approach}, are the flow equations given in Eqs.~(\ref{eq:general_flow_of_self-energy}) and (\ref{eq:general_flow_of_two-particle_vertex}).
The FRG flow equations for a class of correlation functions, e.g., Green functions or 1PI vertex functions, can be derived from the corresponding generating functional.
We will start by deriving a functional-integral representation of a generating functional of Green functions for an interacting system in the ground state.
In Ref.~\cite{Negele88} such a functional-integral representation is presented, but only for the noninteracting case.
There, the derivation is based on a non-standard variant of coherent states -- namely the common eigenstates of the annihilators of single-particle states that are empty in the noninteracting ground state and of the creators of single-particle states that are occupied in the noninteracting ground state.
Compared to standard coherent states, which are the common eigenstates of all annihilators, the role of creators and annihilators has been swapped for levels below the Fermi energy.
As a consequence the noninteracting ground state acquires the role of the vacuum state.
While this approach allows for an elegant functional-integral representation of the generating functional in the noninteracting case, see Ref.~\cite{Negele88}, we found it rather tedious to work with the corresponding representation in the interacting case: treating the coherent-state matrix elements of the interaction turns out to be cumbersome.
We consider this to be a drawback not only regarding the discussion of the FRG flow equations but also regarding the derivation of the diagrammatic perturbation theory within the functional-integral formulation.
In contrast to Ref.~\cite{Negele88}, we use standard coherent states, which turns out to be straightforward.
However, we follow Ref.~\cite{Negele88} in deriving the ground-state expectation value from a damped time evolution instead of using the Gell-Mann and Low theorem.
\subsection{Definition of Green functions and their generating functional}
\label{sec:definition_of_Green_functions_and_their_generating_functional}
We consider a general Hamiltonian for an interacting many-fermion system,
\begin{subequations}
\label{eq:general_Hamiltonian}
\begin{align}
H &= H_0 + H_\text{int}\\
&= \sum_\alpha \varepsilon_\alpha a_\alpha^\dagger a_\alpha + \frac{1}{4} \sum_{\alpha'_1 \alpha'_2 \alpha_1 \alpha_2} \bar{v}_{\alpha'_1 \alpha'_2 \alpha_1 \alpha_2} a_{\alpha'_1}^\dagger a_{\alpha'_2}^\dagger a_{\alpha_2} a_{\alpha_1},
\end{align}
\end{subequations}
where $\alpha = 1, 2, \ldots$ numbers the single-particle eigenstates of $H_0$ such that the eigenenergies are ordered monotonically $\varepsilon_1 \le \varepsilon_2 \le \ldots$
Let the particle number $N$ be fixed and let there be a gap $\varepsilon_N < \varepsilon_{N + 1}$.
Then the ground state of the noninteracting Hamiltonian $H_0$ is nondegenerate and given by $| \Phi_0 \rangle = a_1^\dagger \ldots a_N^\dagger | 0 \rangle$ with $| 0 \rangle$ being the vacuum state.
We choose the zero of single-particle energies to lie between $\varepsilon_N$ and $\varepsilon_{N + 1}$ so that the negative levels $\varepsilon_1, \ldots, \varepsilon_N < 0$ are occupied and the positive levels $\varepsilon_{N + 1}, \dots > 0$ are empty in the noninteracting ground state.
The corresponding occupation numbers are
\begin{equation}
n_\alpha = \left\langle \Phi_0 \middle| a_\alpha^\dagger a_\alpha \middle| \Phi_0 \right\rangle = \begin{cases} 1, & \alpha \le N\\ 0, & \alpha > N. \end{cases}
\end{equation}
The normalized ground state of the interacting Hamiltonian $H$ shall be denoted by $| \Psi_0 \rangle$.
It is assumed to be nondegenerate and not orthogonal to $| \Phi_0 \rangle$.
Note that this scenario applies also to the model of X-ray absorption in metals even though that model involves a continuous conduction band.
An integration over said band is just an approximation for the summation over a rather dense but discrete spectrum of plane-wave states.
In particular, the noninteracting and interacting ground states are nondegenerate and not mutually orthogonal; in fact, they are identical, see also appendix \ref{sec:generalization_of_zero-temperature_formalism}.
The time-ordered multi-particle Green functions are defined as
\begin{eqnarray}
\label{eq:definition_of_Green_functions}
&& G(\alpha_1 t_1, \ldots, \alpha_n t_n| \alpha'_1 t'_1, \ldots, \alpha'_n t'_n)\\
\nonumber &=& (- i)^n \! \left\langle \Psi_0 \middle| \mathcal{T} a_{\alpha_1} \! (t_1) \ldots a_{\alpha_n} \! (t_n) a_{\alpha'_n}^\dagger \! (t'_n) \ldots a_{\alpha'_1}^\dagger \! (t'_1) \middle| \Psi_0 \right\rangle \! .
\end{eqnarray}
Similarly to the discussion in Ref.~\cite{Negele88}, one finds that they can be determined from a damped time evolution.
This can formally be realized via $G = \lim_{\eta \to 0^+} G_\eta$ and
\begin{eqnarray}
\label{eq:Green_functions_from_generating_functional}
&& G_\eta (\alpha_1 t_1, \ldots, \alpha_n t_n| \alpha'_1 t'_1, \ldots, \alpha'_n t'_n)\\
\nonumber &=& (- i)^n \frac{\delta^{2n} \mathcal{G}_\eta [\bar{J}, J]}{\delta \bar{J}_{\alpha_1} \! (t_1) \ldots \delta \bar{J}_{\alpha_n} \! (t_n) \delta J_{\alpha'_n} \! (t'_n) \ldots \delta J_{\alpha'_1} \! (t'_1)} \Bigg|_{\bar{J} = 0 = J}
\end{eqnarray}
with the generating functional
\begin{equation}
\label{eq:generating_functional_definition}
\mathcal{G}_\eta [\bar{J}, J] = \lim_{t_0 \to \infty} \frac{Z_\eta [\bar{J}, J]}{Z_\eta [0, 0]}
\end{equation}
and
\begin{equation}
\label{eq:Z_definition}
Z_\eta [\bar{J}, J] = \big\langle \Phi_0 \big| U^{(\eta)}_{\bar{J}, J} (t_0, - t_0) \big| \Phi_0 \big\rangle.
\end{equation}
In the time evolution operator
\begin{eqnarray}
\label{eq:time_evolution_operator}
&& U^{(\eta)}_{\bar{J}, J} (t_0, - t_0)\\
\nonumber & \! =& \mathcal{T} \! \exp \! \Bigg( \! \! \! - \! \ii \! \int_{- t_0}^{t_0} \! \! \dd t \bigg\{ \! (1 \! - \! \ii \eta) H \! + \! \sum_\alpha \! \big[ \bar{J}_\alpha (t) a_\alpha \! + \! a_\alpha^\dagger J_\alpha (t) \big] \! \bigg\} \! \Bigg),
\end{eqnarray}
source terms with external Grassmann variables $\bar{J}_\alpha$, $J_\alpha$ were added to the Hamiltonian.
When $| \Phi_0 \rangle$ is expanded in eigenstates of the interacting Hamiltonian $H$, the factor $1 - \ii \eta$ with $\eta > 0$ suppresses the contributions from excited states to the Green functions, leaving only the ground-state expectation value as required by the definition (\ref{eq:definition_of_Green_functions}).
In contrast to Ref.~\cite{Negele88}, in which a time contour in the complex plane is used, we formally attribute this factor not to the time integration, but to the Hamiltonian.
This corresponds to the picture that excited states decay due to a non-zero anti-Hermitian part of the Hamiltonian.
\subsection{Discrete integral expression for $Z_\eta [\bar{J}, J]$}
We introduce intermediate time steps $\tau_m = - t_0 + m \Delta$ with $\Delta = 2 t_0 / M$ and $m = 0, 1, \ldots, M$ and insert resolutions of unity into Eq.~(\ref{eq:Z_definition}) in terms of standard fermionic coherent states
\begin{equation}
| \varphi \rangle = \exp \left( - \sum_\alpha \varphi_\alpha a_\alpha^\dagger \right) | 0 \rangle,
\end{equation}
where $\varphi$ stands for the set of Grassmann generators $\{ \varphi_1, \varphi_2, \ldots \}$.
This yields
\begin{eqnarray}
\label{eq:Z_with_resolutions_of_unity}
\nonumber Z_\eta [\bar{J}, J] &=& \int \left( \prod_{m = 0}^M \prod_\alpha \dd \bar{\varphi}^m_\alpha \dd \varphi^m_\alpha e^{- \bar{\varphi}^m_\alpha \varphi^m_\alpha} \right) \big\langle \Phi_0 \big| \varphi^M \big\rangle\\
&& \hphantom{\int} \times \big\langle \varphi^M \big| U^{(\eta)}_{\bar{J}, J} (\tau_M, \tau_{M - 1}) \big| \varphi^{M - 1} \big\rangle\\
\nonumber && \hphantom{\int} \times \ldots \, \big\langle \varphi^1 \big| U^{(\eta)}_{\bar{J}, J} (\tau_1, \tau_0) \big| \varphi^0 \big\rangle \; \big\langle \varphi^0 \big| \Phi_0 \big\rangle,
\end{eqnarray}
where each $\bar{\varphi}^m_\alpha$ is an additional Grassmann generator that is by definition the conjugate of $\varphi^m_\alpha$.
The usage of standard coherent states is an important difference to Ref.~\cite{Negele88}.
It will allow for a straightforward derivation of the functional-integral representation of the generating functional in the interacting case, see Eq.~(\ref{eq:generating_functional_continuous}) below.
The factors $\big\langle \Phi_0 \big| \varphi^M \big\rangle = \varphi^M_N \ldots \varphi^M_1$ and $\big\langle \varphi^0 \big| \Phi_0 \big\rangle = \bar{\varphi}^0_1 \ldots \bar{\varphi}^0_N$ in Eq.~(\ref{eq:Z_with_resolutions_of_unity}) are important for the form of the free propagator, see the integration in Eq.~(\ref{eq:reduction_of_Grassmann_integrals}) and the remark at the end of Sec.~\ref{sec:noninteracting_generating_functional}.
Up to corrections $\propto \Delta^2$, the occurring matrix elements are given by
\begin{subequations}
\label{eq:matrix_elements}
\begin{eqnarray}
\nonumber && \left\langle \varphi^m \middle| U^{(\eta)}_{\bar{J}, J} (\tau_m, \tau_{m - 1}) \middle| \varphi^{m - 1} \right\rangle\\
\nonumber &=& \exp \bigg( \sum_\alpha \bar{\varphi}^m_\alpha \varphi^{m - 1}_\alpha \bigg)\\
\label{eq:intermediate_matrix_elements} && \times \bigg\{ 1 - \ii \Delta \bigg[ (1 - \ii \eta) H(\bar{\varphi}^m, \varphi^{m - 1})\\
\nonumber && \hphantom{\times \bigg\{ 1 - \ii \Delta \bigg[} + \sum_\alpha \left( \bar{J}^{m - 1}_\alpha \varphi^{m - 1}_\alpha + \bar{\varphi}^m_\alpha J^m_\alpha \right) \bigg] \bigg\}\\
\nonumber &=& \exp \bigg\{ \sum_\alpha \bar{\varphi}^m_\alpha e^{- (\ii + \eta) \varepsilon_\alpha \Delta} \varphi^{m - 1}_\alpha\\
&& \hphantom{\exp \bigg\{} - \ii \Delta \bigg[ (1 - \ii \eta) H_\text{int} (\bar{\varphi}^m, \varphi^{m - 1})\\
\nonumber && \hphantom{\exp \bigg\{ - \ii \Delta \bigg[} + \sum_\alpha \left( \bar{J}^{m - 1}_\alpha \varphi^{m - 1}_\alpha + \bar{\varphi}^m_\alpha J^m_\alpha \right) \bigg] \bigg\},
\end{eqnarray}
\end{subequations}
where we used the notation $\bar{J}^m_\alpha = \bar{J}_\alpha (\tau_m)$ and $J^m_\alpha = J_\alpha (\tau_m)$.
The expression for $H(\bar{\varphi}^m, \varphi^{m - 1})$ can be obtained from Eq.~(\ref{eq:general_Hamiltonian}) by replacing all ladder operators with Grassmann generators according to $a_\alpha^\dagger \to \bar{\varphi}^m_\alpha$ and $a_\alpha \to \varphi^{m - 1}_\alpha$.
In particular, we have
\begin{eqnarray}
\nonumber && H_\text{int} (\bar{\varphi}^m, \varphi^{m - 1})\\
&=& \frac{1}{4} \sum_{\alpha'_1 \alpha'_2 \alpha_1 \alpha_2} \bar{v}_{\alpha'_1 \alpha'_2 \alpha_1 \alpha_2} \bar{\varphi}^m_{\alpha'_1} \bar{\varphi}^m_{\alpha'_2} \varphi^{m - 1}_{\alpha_2} \varphi^{m - 1}_{\alpha_1}.
\end{eqnarray}
(If one uses the particular coherent states of Ref.~\cite{Negele88} instead, the expression that results for $H_\text{int}$ is not as simple.)
Since none of the matrix elements (\ref{eq:matrix_elements}) depend on $\bar{\varphi}^0$ or $\varphi^M$, the integrations for $m = 0, M$ in Eq.~(\ref{eq:Z_with_resolutions_of_unity}) reduce to
\begin{subequations}
\label{eq:reduction_of_Grassmann_integrals}
\begin{eqnarray}
\nonumber && \int \! \Bigg( \! \prod_\alpha \dd \bar{\varphi}^0_\alpha \dd \varphi^0_\alpha e^{- \bar{\varphi}^0_\alpha \varphi^0_\alpha} \dd \bar{\varphi}^M_\alpha \dd \varphi^M_\alpha e^{- \bar{\varphi}^M_\alpha \varphi^M_\alpha} \! \Bigg)\\
\nonumber && \hphantom{\int \!} \times \varphi^M_N \! \ldots \varphi^M_1 \bar{\varphi}^0_1 \ldots \bar{\varphi}^0_N f \big( \bar{\varphi}^M, \varphi^0 \big)\\
\nonumber & \! \! =& \int \! \Bigg( \! \prod_{\alpha \le N} \! \dd \bar{\varphi}^M_\alpha \dd \varphi^M_\alpha \! \! \Bigg) \varphi^M_N \! \ldots \varphi^M_1 \Bigg( \! \prod_{\alpha \le N} \! \dd \bar{\varphi}^0_\alpha \dd \varphi^0_\alpha \! \Bigg) \bar{\varphi}^0_1 \ldots \bar{\varphi}^0_N\\
\nonumber && \hphantom{\int \!} \times \! \Bigg[ \prod_{\alpha > N} \! \dd \bar{\varphi}^0_\alpha \dd \varphi^0_\alpha \big( 1 - \bar{\varphi}^0_\alpha \varphi^0_\alpha \big) \dd \bar{\varphi}^M_\alpha \dd \varphi^M_\alpha \big( 1 - \bar{\varphi}^M_\alpha \varphi^M_\alpha \big) \Bigg]\\
&& \hphantom{\int \!} \times f \big( \bar{\varphi}^M, \varphi^0 \big)\\
& \! \! =& (- 1)^N \int \! \Bigg( \! \prod_{\alpha \le N} \! \dd \bar{\varphi}^M_\alpha \dd \varphi^0_\alpha \! \Bigg) f \big( \bar{\varphi}^M, \varphi^0 \big) \bigg|_{\mathcal{B}_>}.
\end{eqnarray}
\end{subequations}
The notation in the last line involving the boundary conditions
\begin{equation}
\mathcal{B}_> = \left\{ \bar{\varphi}^M_{\alpha > N} = 0, \varphi^0_{\alpha > N} = 0 \right\}
\end{equation}
means that in $f \big( \bar{\varphi}^M, \varphi^0 \big)$ the generators $\bar{\varphi}^M_\alpha$, $\varphi^0_\alpha$ with $\alpha > N$ are replaced by zero.
These boundary conditions reflect that the levels with $\alpha > N$ are empty in the state $| \Phi_0 \rangle$.
In total one obtains
\begin{equation}
\label{eq:Z_discrete_Grassmann_integral}
Z_\eta [\bar{J}, J] = \lim_{M \to \infty} \int D_M (\bar{\varphi}, \varphi) e^{\ii S_M (\bar{\varphi}, \varphi; \bar{J}, J)}
\end{equation}
with the Grassmann integration measure
\begin{eqnarray}
&& D_M (\bar{\varphi}, \varphi)\\
\nonumber &=& (-1)^{M N} \Bigg( \! \prod_{\alpha \le N} \prod_{m = 1}^M \! \dd \bar{\varphi}^m_\alpha \dd \varphi^{m - 1}_\alpha \! \Bigg) \Bigg( \! \prod_{\alpha > N} \prod_{m = 1}^{M - 1} \! \dd \bar{\varphi}^m_\alpha \dd \varphi^m_\alpha \! \Bigg).
\end{eqnarray}
The action $S_M (\bar{\varphi}, \varphi; \bar{J}, J)$ is the sum of the free part
\begin{eqnarray}
\label{eq:free_part_of_action_discrete}
\nonumber S_M^0 (\bar{\varphi}, \varphi) &=& \sum_{\alpha', \alpha \le N} \sum_{m' = 1}^M \sum_{m = 0}^{M - 1} \bar{\varphi}^{m'}_{\alpha'} Q^{m' m}_{\alpha' \alpha} \varphi^{m}_\alpha\\
&& + \sum_{\alpha', \alpha > N} \sum_{m', m = 1}^{M - 1} \bar{\varphi}^{m'}_{\alpha'} Q^{m' m}_{\alpha' \alpha} \varphi^m_\alpha
\end{eqnarray}
with
\begin{equation}
\label{eq:inverse_free_propagator}
Q^{m' m}_{\alpha' \alpha} = \ii \delta_{\alpha' \alpha} \left[ \delta_{m' m} - \delta_{m' - 1, m} e^{- (\ii + \eta) \varepsilon_\alpha \Delta} \right],
\end{equation}
the interaction part
\begin{equation}
S_M^\text{int} (\bar{\varphi}, \varphi) = - (1 - \ii \eta) \Delta \sum_{m = 1}^M H_\text{int} (\bar{\varphi}^m, \varphi^{m - 1}) \bigg|_{\mathcal{B}_>},
\end{equation}
and the source part
\begin{eqnarray}
\label{eq:source_part_of_action_discrete}
\nonumber S_M^\text{source} (\bar{\varphi}, \varphi; \bar{J}, J) &=& - \Delta \! \sum_{\alpha \le N} \sum_{m = 1}^M \! \big( \bar{J}^{m - 1}_\alpha \varphi^{m - 1}_\alpha + \bar{\varphi}^m_\alpha J^m_\alpha \big)\\
&& - \Delta \! \sum_{\alpha > N} \sum_{m = 1}^{M - 1} \! \big( \bar{J}^m_\alpha \varphi^m_\alpha + \bar{\varphi}^m_\alpha J^m_\alpha \big) .
\end{eqnarray}
\subsection{Noninteracting generating functional}
\label{sec:noninteracting_generating_functional}
In the noninteracting case, the integral in Eq.~(\ref{eq:Z_discrete_Grassmann_integral}) is of Gaussian form.
We consider $Q$ given by Eq.~(\ref{eq:inverse_free_propagator}) to be a matrix and introduce row vectors $\bar{\varphi}$, $\bar{J}$ and column vectors $\varphi$, $J$.
We point out the peculiar ranges of the discrete-time indices [see Eqs.~(\ref{eq:free_part_of_action_discrete}) and (\ref{eq:source_part_of_action_discrete})]:
In the sector with $\alpha \le N$, the row index $m'$ of $Q$ runs from $1$ to $M$, whereas its column index $m$ runs from $0$ to $M - 1$.
Correspondingly, the indices of $\bar{\varphi}$ and $J$ run from $1$ to $M$, whereas those of $\varphi$ and $\bar{J}$ run from $0$ to $M - 1$.
In the sector with $\alpha > N$, all discrete-time indices simply run from $1$ to $M - 1$.
The Gaussian integral evaluates to
\begin{eqnarray}
\label{eq:Gaussian_integral}
\nonumber && \int D_M (\bar{\varphi}, \varphi) e^{\ii [\bar{\varphi} Q \varphi - \Delta (\bar{J} \varphi + \bar{\varphi} J)]}\\
&=& e^{- (\ii + \eta) 2 t_0 \sum_{\alpha \le N} \varepsilon_\alpha} \; e^{- \ii \Delta^2 \bar{J} Q^{-1} J}.
\end{eqnarray}
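To see how the first exponential arises, consider a single occupied level $\alpha \le N$. The Grassmann integration over the pairs $(\bar{\varphi}^m_\alpha, \varphi^{m - 1}_\alpha)$ yields the determinant of the matrix $- \ii Q$ restricted to that level, which is upper bidiagonal in the natural $0$-based labeling of rows by $m' - 1$ and columns by $m$, with entries $- e^{- (\ii + \eta) \varepsilon_\alpha \Delta}$ on the diagonal and $1$ on the superdiagonal:
\begin{equation}
\det (- \ii Q) \big|_{\alpha} = \left( - e^{- (\ii + \eta) \varepsilon_\alpha \Delta} \right)^{\! M} = (- 1)^M e^{- (\ii + \eta) 2 t_0 \varepsilon_\alpha} .
\end{equation}
The factor $(- 1)^M$ per occupied level is canceled by the sign $(- 1)^{M N}$ of the measure $D_M$. For an empty level $\alpha > N$, the corresponding matrix is lower bidiagonal with ones on the diagonal, so its determinant is simply $1$.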
The result for the noninteracting generating functional is thus
\begin{equation}
\mathcal{G}_\eta^0 [\bar{J}, J] = \lim_{t_0 \to \infty} \lim_{M \to \infty} e^{- \ii \Delta^2 \bar{J} g J}
\end{equation}
with the free propagator
\begin{subequations}
\begin{eqnarray}
g^{m m'}_{\alpha \alpha'} &=& (Q^{-1})^{m m'}_{\alpha \alpha'}\\
\nonumber &=& - \ii \delta_{\alpha \alpha'} e^{- (\ii + \eta) \varepsilon_\alpha (\tau_m - \tau_{m'})}\\
&& \times \begin{cases} 1 - n_\alpha, & M - 1 \ge m \ge m' \ge 1\\ - n_\alpha, & 0 \le m < m' \le M. \end{cases}
\end{eqnarray}
\end{subequations}
The free propagator is purely advanced for $\alpha \le N$ and purely retarded for $\alpha > N$.
The inverse of $Q$ assumes such distinct forms in the two sectors because of the differently restricted ranges of the discrete-time indices.
These in turn are a consequence of the integrations over the Grassmann generators at the boundaries, which were performed in Eq.~(\ref{eq:reduction_of_Grassmann_integrals}).
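As a concrete illustration, consider the sector $\alpha > N$ for $M = 4$, where the discrete-time indices run from $1$ to $3$. With the abbreviation $E = e^{- (\ii + \eta) \varepsilon_\alpha \Delta}$, Eq.~(\ref{eq:inverse_free_propagator}) and its inverse read
\begin{equation}
Q = \ii \begin{pmatrix} 1 & 0 & 0\\ - E & 1 & 0\\ 0 & - E & 1 \end{pmatrix} , \qquad Q^{-1} = - \ii \begin{pmatrix} 1 & 0 & 0\\ E & 1 & 0\\ E^2 & E & 1 \end{pmatrix} ,
\end{equation}
in agreement with $g^{m m'}_{\alpha \alpha} = - \ii E^{m - m'}$ for $m \ge m'$: amplitude is transported only forward in time, i.e., the propagator is purely retarded.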
\subsection{Continuous notation}
\label{sec:continuous_notation}
In the limit $M \to \infty$, the free propagator becomes
\begin{subequations}
\label{eq:free_propagator_continuous}
\begin{eqnarray}
g_{\alpha \alpha'} (t, t') &=& \delta_{\alpha \alpha'} g_\alpha (t - t')\\
\nonumber g_\alpha (t) &=& - \ii e^{- (\ii + \eta) \varepsilon_\alpha t} \big[ (1 - n_\alpha) \Theta (t - 0^+)\\
&& \hphantom{- \ii e^{- (\ii + \eta) \varepsilon_\alpha t} \big[} - n_\alpha \Theta (- t + 0^+) \big] .
\end{eqnarray}
\end{subequations}
Following the usual convention \cite{Negele88}, we have chosen $g_\alpha (0) = g_\alpha (0^-)$.
This choice is advantageous for the diagrammatic expansion: it allows us to drop the infinitesimal differences of the times at each vertex, see Eq.~(\ref{eq:bare_vertex_general}), which matter only if a free propagator connects a vertex with itself.
And if two external ladder operators in Eq.~(\ref{eq:definition_of_Green_functions}) happen to be at equal times and to be paired by Wick's theorem, the choice agrees with the property $\mathcal{T} a_\alpha (t) a_{\alpha'}^\dagger (t) = - a_{\alpha'}^\dagger (t) a_\alpha (t)$ of the time-ordering operator.
Also on the level of the action and of the integral expression (\ref{eq:Z_discrete_Grassmann_integral}), it is possible to go over to a continuous notation.
The details are shown in appendix \ref{sec:details_on_continuum_limit}.
For the generating functional, we obtain the functional-integral representation
\begin{eqnarray}
\label{eq:generating_functional_continuous}
&& \mathcal{G}_\eta [\bar{J}, J]\\
\nonumber &=& \frac{\int D[\bar{\varphi}, \varphi] \exp \big\{ \ii \bar{\varphi} Q \varphi + \ii S^\text{int} [\bar{\varphi}, \varphi] - \ii (\bar{J} \varphi + \bar{\varphi} J) \big\} }{\int D[\bar{\varphi}, \varphi] \exp \big\{ \ii \bar{\varphi} Q \varphi + \ii S^\text{int} [\bar{\varphi}, \varphi] \big\} },
\end{eqnarray}
where $Q$ is now the differential operator given by Eq.~(\ref{eq:inverse_free_propagator_continuous}) and where the interaction part of the action can be written as
\begin{equation}
S^\text{int} [\bar{\varphi}, \varphi] = - \frac{1}{4} \sum_{x'_1 x'_2 x_1 x_2} \bar{v}_{x'_1 x'_2 x_1 x_2} \bar{\varphi}_{x'_1} \bar{\varphi}_{x'_2} \varphi_{x_2} \varphi_{x_1}
\end{equation}
with the bare vertex
\begin{eqnarray}
\label{eq:bare_vertex_general}
&& \bar{v}_{x'_1 x'_2 x_1 x_2}\\
\nonumber &=& \delta (t'_1 - t_1) \delta (t'_2 - t_1) \delta (t_2 - t_1) (1 - \ii \eta) \bar{v}_{\alpha'_1 \alpha'_2 \alpha_1 \alpha_2}.
\end{eqnarray}
In Eq.~(\ref{eq:generating_functional_continuous}) we have employed a matrix notation similar to the one in Sec.~\ref{sec:noninteracting_generating_functional} but with multi-indices of the form $x = (\alpha, t)$ and contractions $\sum_x = \sum_\alpha \int_{- \infty}^\infty \dd t$.
In writing Eq.~(\ref{eq:bare_vertex_general}) we have dropped the infinitesimal shifts of the times at a bare vertex, compare with the time arguments in Eq.~(\ref{eq:interaction_part_of_action_continuous}).
They are made redundant by the particular choice of the equal-time value of the free propagator (\ref{eq:free_propagator_continuous}).
\subsection{Diagrammatic expansion and 1PI flow equations}
Based on the functional-integral representation (\ref{eq:generating_functional_continuous}) of the interacting generating functional, a diagrammatic expansion of the Green functions can be derived in the standard way, see appendix \ref{sec:diagrammatic_expansion}.
As usual, one can choose to work in frequency representation.
Details on the relevant Fourier transforms can be found in appendix \ref{sec:frequency_representation}.
As a next step, we introduce 1PI vertex functions and derive FRG flow equations for them.
This can be done using generating functionals, starting from the one of the Green functions.
The procedure is analogous to the one in Matsubara or Keldysh formalism \cite{Metzner12}, but for definiteness we briefly show it in appendix \ref{sec:derivation_of_flow_equations}.
The flow equation of a general 1PI $n$-particle vertex function is given by Eq.~(\ref{eq:general_flow_of_1PI_vertex_functions}).
As the first two instances ($n = 1, 2$), we obtain the flow equation of the self-energy
\begin{equation}
\label{eq:general_flow_of_self-energy}
\dot{\Sigma}^\lambda_{x'| x} = - \ii \gamma^\lambda_{x' y'| x y} S^\lambda_{y| y'}
\end{equation}
and the one of the 1PI two-particle vertex function
\begin{eqnarray}
\label{eq:general_flow_of_two-particle_vertex}
\nonumber && \dot{\gamma}^\lambda_{x' y'| x y}\\
\nonumber &=& - \ii \gamma^\lambda_{x' y' a'| x y a} S^\lambda_{a| a'}\\
&& + \ii \gamma^\lambda_{x' y'| a b} S^\lambda_{a | a'} G^\lambda_{b| b'} \gamma^\lambda_{a' b'| x y}\\
\nonumber && + \ii \gamma^\lambda_{x' b'| a y} \left( S^\lambda_{a | a'} G^\lambda_{b| b'} + S^\lambda_{b| b'} G^\lambda_{a| a'} \right) \gamma^\lambda_{a' y'| x b}\\
\nonumber && - \ii \gamma^\lambda_{y' b'| y a} \left( S^\lambda_{a| a'} G^\lambda_{b| b'} + S^\lambda_{b| b'} G^\lambda_{a| a'} \right) \gamma^\lambda_{a' x'| b x}.
\end{eqnarray}
Here, the dot above $\Sigma$ and $\gamma$ denotes the derivative with respect to $\lambda$ and $S^\lambda = G^\lambda (g^\lambda)^{-1} \dot{g}^\lambda (g^\lambda)^{-1} G^\lambda$ is the single-scale propagator.
Since the flow equation of the 1PI $n$-particle vertex function contains the 1PI $(n+1)$-particle vertex function, all of the flow equations are coupled.
In Sec.~\ref{sec:flow_equation_for_1PI_two-particle_vertex} below, we truncate this infinite hierarchy by neglecting the 1PI three-particle vertex function in Eq.~(\ref{eq:general_flow_of_two-particle_vertex}).
Due to Dyson's equation $G = 1 / (g^{-1} - \Sigma)$, one is then left with the task of solving a closed set of differential equations for the self-energy and the 1PI two-particle vertex function.
In Sec.~\ref{sec:flow_equation_for_1PI_two-particle_vertex} we also neglect the flow of the self-energy and retain only the flow equation for the two-particle vertex function.
Lastly, we note that the zero-temperature formalism can be used for slightly more general problems than the study of ground-state properties.
In appendix \ref{sec:generalization_of_zero-temperature_formalism} we discuss how it can be adapted to systems in particular excited states.
\section{One-loop FRG approach to the X-ray-absorption singularity in metals}
\label{sec:FRG_approach}
In the following we devise a specific one-loop 1PI FRG approach to the model described in Sec.~\ref{sec:model} that is based on the (real-time) zero-temperature formalism.
The goal is to obtain the correct leading-logarithmic result for the X-ray absorption rate.
To achieve this, we perform approximations analogous to those of Roulet et al. \cite{Roulet69}.
We discuss why a \emph{one-loop} truncation suffices to capture the leading logarithms.
In fact, we show that the parquet-based scheme by Roulet et al. and the following one-loop FRG approach are completely equivalent.
\subsection{Cutoff and initial conditions}
\label{sec:cutoff_and_initial_conditions}
In order to introduce the flow parameter directly into the Hartree-dressed propagator rather than into the free propagator, we first absorb the deep-state Hartree self-energy $\Sigma_d^H = g \xi_0$ into the latter.
We achieve this by formally adding and subtracting a term $\Sigma_d^H a_d^\dagger a_d$ in the Hamiltonian; in terms of the action, this corresponds to adding and subtracting a term $\Sigma_d^H \bar{\varphi}^m_d \varphi^{m-1}_d$ in the square brackets in Eq.~(\ref{eq:intermediate_matrix_elements}).
The added term is then absorbed into the free action such that in the deep-state subspace the Hartree-dressed propagator given by
\begin{equation}
\label{eq:Hartree-dressed_deep-state_propagator}
G_d^H (\omega)^{-1} = G_d^0 (\omega)^{-1} - \Sigma_d^H = \omega - \ii 0^+
\end{equation}
takes on the role of the free propagator.
In Eq.~(\ref{eq:Hartree-dressed_deep-state_propagator}) the renormalized deep level has been set to $\tilde{\varepsilon}_d = 0$, see Sec.~\ref{sec:setting_renormalized_deep_level_to_zero}.
The subtracted term $\Sigma_d^H a_d^\dagger a_d$ is treated as a single-particle perturbation.
It will cancel out, see below.
We choose to employ a sharp frequency cutoff that is inserted only into the real part of the Hartree-dressed deep-state propagator,
\begin{equation}
\label{eq:introduction_of_cutoff}
G_d^{H, \lambda} (\omega) = \Theta (|\omega| - \lambda) \frac{1}{\omega} + \ii \pi \delta (\omega).
\end{equation}
No cutoff is introduced into the conduction-state propagator.
For the initial value of the flow parameter, we will consider the limit $\lambda_\text{ini} \to \infty$.
At the final value $\lambda_\text{fin} = 0$, the original model is recovered.
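Indeed, at $\lambda_\text{fin} = 0$ the Sokhotski--Plemelj identity shows that Eq.~(\ref{eq:introduction_of_cutoff}) reduces to the cutoff-free propagator,
\begin{equation}
G_d^{H, \lambda = 0} (\omega) = \mathcal{P} \frac{1}{\omega} + \ii \pi \delta (\omega) = \frac{1}{\omega - \ii 0^+} = G_d^H (\omega),
\end{equation}
with $\mathcal{P}$ denoting the principal value, while for $\lambda \to \infty$ only the delta-function part survives.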
In the following we determine the initial values of the 1PI vertex functions, starting with the first order of the self-energy.
The deep-state Hartree diagram, which consists of a single conduction-state loop, is not affected by the cutoff.
It exactly cancels with the single-particle perturbation mentioned above: this diagram has already been accounted for in Eq.~(\ref{eq:Hartree-dressed_deep-state_propagator}).
At $\lambda_\text{ini}$ the Hartree contribution to the local conduction-electron self-energy evaluates to
\begin{subequations}
\label{eq:initial_conduction-electron_self-energy}
\begin{eqnarray}
\hspace{-4ex} \Sigma_c^{H, \lambda_\text{ini}} &=& V \Sigma_{k' k}^{H, \lambda_\text{ini}}\\
\label{eq:initial_conduction-electron_self-energy_integral} \hspace{-4ex} &=& - \ii U \int \frac{\dd \omega}{2 \pi} e^{\ii \omega \eta'} \left[ \frac{\Theta (|\omega| - \lambda_\text{ini})}{\omega} + \ii \pi \delta (\omega) \right]\\
\hspace{-4ex} &=& U,
\end{eqnarray}
\end{subequations}
which is the same as without cutoff and which cancels the single-particle perturbation that arises when the interaction term in Eq.~(\ref{eq:Hamiltonian}) is brought into the standard form.
In Eq.~(\ref{eq:initial_conduction-electron_self-energy_integral}) a convergence factor $e^{\ii \omega \eta'}$ with $\eta' \to 0^+$ has been included as part of the Hartree-dressed deep-state propagator, see Eq.~(\ref{eq:frequency_representation_of_free_propagator_diagonal}).
Here, it is important to take the limit $\eta' \to 0^+$ before the limit $\lambda_\text{ini} \to \infty$.
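The role of the order of limits can be made explicit: the delta-function part of Eq.~(\ref{eq:initial_conduction-electron_self-energy_integral}) contributes $U/2$, while the principal-value part evaluates to $(U/2) [1 - (2/\pi) \operatorname{Si} (\lambda_\text{ini} \eta')]$ in terms of the sine integral $\operatorname{Si}$. The following short numerical illustration (our addition, working in units of $U$) shows that the full result $U$ emerges only when $\eta' \to 0^+$ is taken first:

```python
import numpy as np

def Si(x):
    """Sine integral Si(x) = int_0^x dt sin(t)/t (numpy-only approximation)."""
    if x > 50.0:
        # standard large-argument asymptotics, accurate to O(1/x^3)
        return np.pi / 2 - np.cos(x) / x - np.sin(x) / x**2
    # simple Riemann sum; the integrand sin(t)/t is smooth and bounded by 1
    n = 200001
    t = np.linspace(x / n, x, n)
    return float(np.sum(np.sin(t) / t)) * (x / n)

def sigma_c_hartree(lam_ini, eta):
    """Initial conduction-electron Hartree self-energy in units of U:
    1/2 from the delta-function part plus the principal-value part."""
    return 0.5 + 0.5 * (1.0 - (2.0 / np.pi) * Si(lam_ini * eta))

# eta' -> 0+ taken first (lam_ini * eta' << 1): the full result U is recovered
print(sigma_c_hartree(1e6, 1e-12))
# lam_ini -> infinity taken first (lam_ini * eta' >> 1): only U/2 survives
print(sigma_c_hartree(1e12, 1e-3))
```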
Let us now consider an arbitrary 1PI diagram with at least two vertices and conclude that its initial value is negligible.
If it contains a deep-state propagator that connects two different vertices, this propagator is replaced by its delta-function part at $\lambda_\text{ini}$.
The result is negligible in leading logarithmic order according to Roulet et al.~\cite{Roulet69}.
If the diagram does not contain any such deep-state propagator, then all its external legs are deep-state ones.
But a 1PI subdiagram of this form does not enter the parquet diagrams that contain the leading logarithms, see Sec.~\ref{sec:logarithmic_divergences}.
In the leading-logarithmic approximation, the initial conditions are thus fully determined by diagrams with just a single vertex.
Consequently, the initial value of the 1PI two-particle vertex is given by the bare vertex,
\begin{equation}
\hat{\gamma}_{\lambda_\text{ini}} (\omega, \omega'; \Omega) = U = \bar{\gamma}_{\lambda_\text{ini}} (\omega, \omega'; X),
\end{equation}
while the initial values of all other 1PI vertex functions, including the self-energy, vanish.
\subsection{Flow equation for $\gamma$}
\label{sec:flow_equation_for_1PI_two-particle_vertex}
In order to truncate and solve the set of FRG flow equations, we neglect the flow of the self-energy and of the 1PI three-particle vertex so that the values of both remain zero.
This means that we set the right-hand side of Eq.~(\ref{eq:general_flow_of_self-energy}) to zero and neglect the first addend on the right-hand side of Eq.~(\ref{eq:general_flow_of_two-particle_vertex}).
That we can indeed forgo corrections to the self-energy in the leading-logarithmic approximation is evident from the diagrammatic discussion in Ref.~\cite{Roulet69}, see also our Sec.~\ref{sec:single-particle_Green_function}.
Let us clarify why the 1PI three-particle vertex does not affect the leading logarithmic order of the 1PI two-particle vertex via the flow equations.
The argument is based on the properties of individual diagrams.
In the diagrammatic derivation \cite{Jakobs07} of the flow equation for the 1PI two-particle vertex $\gamma_\lambda$, the derivative with respect to the flow parameter acts on each of the contributing diagrams.
In each diagram, according to the product rule, every deep-state propagator needs to be differentiated.
Therefore, $\dd \gamma_\lambda / \dd \lambda$ is represented by a sum of diagrams in each of which the derivative acts on some particular dashed line; examples for this type of diagram are shown in Fig.~\ref{fig:derivative_of_crossed_bubble}.
\begin{figure}
\includegraphics[scale=0.75]{derivative_of_crossed_bubble.pdf}
\caption{\label{fig:derivative_of_crossed_bubble}
The diagrammatic contributions to $\dd \gamma_\lambda / \dd \lambda$ that result from taking the derivative of the diagram in Fig.~\ref{fig:crossed_bubble}.
Again, $\Omega_i$ stands for $\omega_o + \omega + X$.
In each diagram the crossed-out dashed line represents the differentiated deep-state propagator.
(a) A leading contribution. Removing the crossed-out line, which is in the outer bubble, would render the diagram one-particle reducible.
(b) A subleading contribution. Removing the crossed-out line, which is in the inner bubble, would leave the diagram one-particle irreducible.}
\end{figure}
Let us now consider any such diagram that results from differentiating one of the parquet diagrams as those contain all of the important contributions.
Because of Eq.~(\ref{eq:introduction_of_cutoff}) the frequency of the differentiated propagator satisfies $|\omega| = \lambda$.
With regard to the real parts of the deep-state propagators, this frequency has the smallest absolute value of all deep-state frequencies.
As far as the leading logarithms are concerned, the differentiated propagator then has to be in one of the outermost bubbles; this follows from the discussion at the end of Sec.~\ref{sec:third-order_example} about the integration regions that give rise to the leading logarithms.
Removing this propagator would render the diagram one-particle \emph{reducible}.
In contrast all diagrams contributing to $\dd \gamma_\lambda / \dd \lambda$ that stem from the 1PI three-particle vertex, i.e., that represent the first term in Eq.~(\ref{eq:general_flow_of_two-particle_vertex}), would remain one-particle \emph{irreducible} if the respective differentiated propagator was removed.
Consequently, the leading logarithmic contributions cannot originate from the 1PI three-particle vertex so that it can indeed be neglected.
This shows that, for a sharp frequency cutoff in the deep-state propagator, a \emph{one-loop} truncation already captures all important contributions of the parquet diagrams even though it does not account for the exact values of these diagrams.
As an example we illustrate the above argument for the third-order parquet diagram in Fig.~\ref{fig:crossed_bubble}.
Its derivative with respect to $\lambda$ is the sum of the two diagrams shown in Fig.~\ref{fig:derivative_of_crossed_bubble}.
If we removed the respective differentiated propagator, diagram (a) would become one-particle reducible, whereas diagram (b) would remain one-particle irreducible, i.e., the latter stems from the 1PI three-particle vertex.
In diagram (a) the frequencies (of the real parts of the deep-state propagators) satisfy $|\omega_o| = \lambda \le |\omega_i|$ and in diagram (b) they satisfy $|\omega_o| \ge \lambda = |\omega_i|$.
Since we have shown in Sec.~\ref{sec:third-order_example} for the diagram in Fig.~\ref{fig:crossed_bubble} that only the integration region $|\omega_o| < |\omega_i|$ gives rise to the leading logarithms, the contribution to the flow represented by diagram (b) is negligible.
Following Roulet et al.~\cite{Roulet69}, we approximate the propagators by neglecting the real part of the conduction-state propagator and the imaginary part of the deep-state propagator because they do not give rise to the logarithmic divergence in a bubble, cf. Eq.~(\ref{eq:origin_of_logarithm}).
This step can be performed only after the evaluation of Eq.~(\ref{eq:initial_conduction-electron_self-energy}) for the initial conditions, where the imaginary part of the deep-state propagator contributes half of the result.
Within these approximations the local conduction-electron propagator does not depend on the flow parameter and is given by
\begin{equation}
G_c (\omega) = - \ii \pi \frac{\rho}{V} \, \sgn (\omega) \Theta (\xi_0 - |\omega|)
\end{equation}
and the Hartree-dressed single-scale propagator assumes the form
\begin{equation}
\label{eq:single-scale_propagator}
S_d^\lambda (\omega) = \frac{\dd}{\dd \lambda} G^{H, \lambda}_d (\omega) = - \frac{\delta (|\omega| - \lambda)}{\omega}.
\end{equation}
It remains to solve the flow equation for the 1PI two-particle vertex.
While its general form is as stated in Eq.~(\ref{eq:general_flow_of_two-particle_vertex}), it now assumes the closed form
\begin{subequations}
\label{eq:flow_of_full_vertex}
\begin{eqnarray}
\nonumber \! \! && \frac{\dd}{\dd \lambda} \hat{\gamma}_\lambda (\omega, \omega'; \Omega)\\
\! \! &=& \frac{\dd}{\dd \lambda} \bar{\gamma}_\lambda (\omega, \omega'; X)\\
\label{eq:flow_of_full_vertex_rhs}
\! \! &=& - \frac{1}{2} \frac{\rho}{V} \int \dd \tilde{\omega} \, \delta (|\tilde{\omega}| - \lambda) \frac{1}{\tilde{\omega}}\\
\nonumber \! \! && \times \big[ \hat{\gamma}_\lambda (\omega, \tilde{\omega}; \Omega) \hat{\gamma}_\lambda (\tilde{\omega}, \omega'; \Omega) \sgn (\Omega - \tilde{\omega}) \Theta (\xi_0 - |\Omega - \tilde{\omega}|)\\
\nonumber \! \! && + \bar{\gamma}_\lambda (\omega, \tilde{\omega}; X) \bar{\gamma}_\lambda (\tilde{\omega}, \omega'; X) \sgn (\tilde{\omega} + X) \Theta (\xi_0 - |\tilde{\omega} + \! X|) \big],
\end{eqnarray}
\end{subequations}
where the frequency arguments are related via $\Omega - X = \omega + \omega'$.
The diagrammatic representation of this equation is shown in Fig.~\ref{fig:flow_of_full_vertex}.
\begin{figure}
\includegraphics[scale=0.75]{flow_equation.pdf}
\caption{\label{fig:flow_of_full_vertex}
Diagrammatic representation of the flow equation (\ref{eq:flow_of_full_vertex}) for the 1PI two-particle vertex.
The crossed-out dashed line stands for the single-scale propagator $S_d^\lambda$.
The first term represents the particle-particle channel while the second term represents the (exchange) particle-hole channel.
The external frequencies are related via $\omega_c = \Omega - \omega = \omega' + X$ and $\omega'_c = \Omega - \omega' = \omega + X$.}
\end{figure}
In writing Eq.~(\ref{eq:flow_of_full_vertex}) we consider the flow only of $\gamma_{d c| d c}^\lambda$ because it is the sole part of the 1PI vertex needed to calculate the particle-hole susceptibility (\ref{eq:ph-susceptibility_with_1PI_vertex}).
We do not consider the flow of $\gamma_{c c| c c}^\lambda$ because it does not influence the flow of $\gamma_{d c| d c}^\lambda$: the single-scale propagator, which has only deep-state indices, cannot be attached to $\gamma_{c c| c c}^\lambda$.
Nor do we consider the flow of $\gamma_{d d| d d}^\lambda$.
Its flow equation and its contribution to the flow of $\gamma_{d c| d c}^\lambda$ both involve a bubble with two deep-state propagators.
Consequently, its influence is subleading.
This reflects that the parquet diagrams containing the leading logarithms do not comprise 1PI subdiagrams with deep-state external indices only, as mentioned in the discussion of the initial conditions close to the end of Sec.~\ref{sec:cutoff_and_initial_conditions}.
As a result of neglecting $\gamma_{d d| d d}^\lambda$, the direct particle-hole channel is absent in Eq.~(\ref{eq:flow_of_full_vertex}).
We perform a channel decomposition by defining $\dd \hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega) / \dd \lambda$ as the first addend in Eq.~(\ref{eq:flow_of_full_vertex_rhs}) and $\dd \bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X) / \dd \lambda$ as the second addend.
For the choice $\hat{\gamma}^\text{pp}_{\lambda_\text{ini}} (\omega, \omega'; \Omega) = 0 = \bar{\gamma}^\text{ph}_{\lambda_\text{ini}} (\omega, \omega'; X)$, a formal integration of the flow equation leads to the decomposition of the 1PI two-particle vertex
\begin{eqnarray}
\hat{\gamma}_\lambda (\omega, \omega'; \Omega) &=& \bar{\gamma}_\lambda (\omega, \omega'; X)\\
\nonumber &=& U + \hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega) + \bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X),
\end{eqnarray}
where $\Omega - X = \omega + \omega'$.
Equation~(\ref{eq:flow_of_full_vertex}) can then be rewritten in terms of the two coupled flow equations
\begin{subequations}
\label{eq:decomposed_flow_equation}
\begin{eqnarray}
\nonumber && \frac{\dd}{\dd \lambda} \hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega)\\
\label{eq:decomposed_flow_equation_pp} &=& - \frac{1}{2} \frac{\rho}{V} \sum_{\tilde{\omega} = \pm \lambda} \frac{1}{\tilde{\omega}} \sgn (\Omega - \tilde{\omega}) \Theta (\xi_0 - |\Omega - \tilde{\omega}|)\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_\lambda (\omega, \tilde{\omega}; \Omega) + \bar{\gamma}^\text{ph}_\lambda (\omega, \tilde{\omega}; \Omega - \omega - \tilde{\omega}) \right]\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_\lambda (\tilde{\omega}, \omega'; \Omega) + \bar{\gamma}^\text{ph}_\lambda (\tilde{\omega}, \omega'; \Omega - \tilde{\omega} - \omega') \right]
\end{eqnarray}
and
\begin{eqnarray}
\nonumber && \frac{\dd}{\dd \lambda} \bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X)\\
\label{eq:decomposed_flow_equation_ph} &=& - \frac{1}{2} \frac{\rho}{V} \sum_{\tilde{\omega} = \pm \lambda} \frac{1}{\tilde{\omega}} \sgn (\tilde{\omega} + X) \Theta (\xi_0 - |\tilde{\omega} + X|)\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_\lambda (\omega, \tilde{\omega}; \omega + \tilde{\omega} + X) + \bar{\gamma}^\text{ph}_\lambda (\omega, \tilde{\omega}; X) \right]\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_\lambda (\tilde{\omega}, \omega'; \tilde{\omega} + \omega' + X) + \bar{\gamma}^\text{ph}_\lambda (\tilde{\omega}, \omega'; X) \right].
\end{eqnarray}
\end{subequations}
These are the contributions to the flow in the particle-particle and particle-hole channel, respectively.
In the diagrammatic language, Eq.~(\ref{eq:decomposed_flow_equation_pp}) gives rise to diagrams that can be disconnected by cutting two parallel lines, whereas the diagrams resulting from Eq.~(\ref{eq:decomposed_flow_equation_ph}) can be disconnected by cutting two antiparallel lines.
Since a diagram of one type can appear as a subdiagram in diagrams of the other type, the two differential equations are coupled.
For the contribution from each channel, we have employed the notation that features the respective natural frequency, see Eq.~(\ref{eq:frequency_notation}) and (\ref{eq:bare_bubbles}).
We now make an assumption that we will later show to be correct within logarithmic accuracy based on a self-consistency argument:
We assume that the relations
\begin{subequations}
\label{eq:assumed_relations}
\begin{eqnarray}
\hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega) &=& \hat{\gamma}^\text{pp}_\lambda (|\omega|, |\omega'|; \max \{ \lambda, |\Omega| \} )\\
\bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X) &=& \bar{\gamma}^\text{ph}_\lambda (|\omega|, |\omega'|; \max \{ \lambda, |X| \} )
\end{eqnarray}
\end{subequations}
hold, which are trivially satisfied at the start of the flow.
We use these relations to rewrite the vertex functions appearing on the right-hand side of the flow equations (\ref{eq:decomposed_flow_equation}).
In the terms representing the cross feedback, we subsequently approximate the third frequency argument.
For example, in the term $\hat{\gamma}^\text{pp}_\lambda (\omega, \pm \lambda; \omega \pm \lambda + X) = \hat{\gamma}^\text{pp}_\lambda (|\omega|, \lambda; \max \{ \lambda, |\omega \pm \lambda + X| \} )$ appearing in Eq.~(\ref{eq:decomposed_flow_equation_ph}), we approximate
\begin{equation}
\label{eq:max_approximation}
\max \{ \lambda, |\omega \pm \lambda + X| \} \approx \max \{ \lambda, |\omega| \}.
\end{equation}
This step is analogous to neglecting the second addend in Eq.~(\ref{eq:internal_bubble_break_down}).
It can be justified as follows.
For $\lambda \ge |X|$ the approximation (\ref{eq:max_approximation}) is correct within a factor of three.
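To see this, note that for $\lambda \ge |X|$ the triangle inequality yields
\begin{equation}
|\omega \pm \lambda + X| \le |\omega| + 2 \lambda \le 3 \max \{ \lambda, |\omega| \}
\end{equation}
as well as $\max \{ \lambda, |\omega \pm \lambda + X| \} \ge \max \{ \lambda, |\omega| - 2 \lambda \} \ge \max \{ \lambda, |\omega| \} / 3$, so that the two sides of Eq.~(\ref{eq:max_approximation}) indeed differ by at most a factor of three.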
Such a factor is negligible because, based on the considerations of Roulet et al.~\cite{Roulet69}, we expect $\hat{\gamma}^\text{pp}_\lambda$ to be a slowly varying function of its arguments; this expectation will be confirmed by the final result.
For $\lambda < |X|$ the two summands for $\tilde{\omega} = \pm \lambda$ cancel each other, at least to a large extent, because the factor $\sgn (\pm \lambda + X)$ no longer compensates the sign of $1 / (\pm \lambda)$.
Consequently, the final part with $\lambda < |X|$ of the flow in the particle-hole channel does not contribute to building the leading logarithms.
This corresponds to the observation that small frequencies with $|\tilde{\omega}| < |X|$ do not contribute to the logarithmic divergence of the bare particle-hole bubble, see Eq.~(\ref{eq:origin_of_logarithm}).
For the other cross-feedback terms in the flow equations, we apply approximations analogous to Eq.~(\ref{eq:max_approximation}).
The justification is similar.
Next, we replace the step functions $\Theta (\xi_0 - |\Omega \mp \lambda|)$ and $\Theta (\xi_0 - |\pm \lambda + X|)$ occurring in Eq.~(\ref{eq:decomposed_flow_equation}) with $\Theta (\xi_0 - \lambda)$.
When compared with the parquet-based scheme by Roulet et al.~\cite{Roulet69}, this corresponds to neglecting the third addend in Eq.~(\ref{eq:internal_bubble_break_down}) and to replacing the integration boundaries by $\pm \xi_0$ in Eq.~(\ref{eq:origin_of_logarithm}) and (\ref{eq:crossed_bubble_result}).
To motivate this approximation, consider integrating the flow equations by applying $- \int_0^{\lambda_\text{ini}} \dd \lambda \ldots$
The resulting $\lambda$-integrals take on the role of the frequency integral in a bubble.
Provided that $|\Omega| \ll \xi_0$ or $|X| \ll \xi_0$, respectively, the replacement above is wrong only for certain $\lambda \approx \xi_0$, but the leading contribution that builds the logarithm comes from $\lambda \ll \xi_0$.
Indeed, said conditions are satisfied: For the particle-hole susceptibility (\ref{eq:ph-susceptibility_with_1PI_vertex}) near threshold, the values of $\hat{\gamma}^\text{pp}_{\lambda = 0} (\omega, \omega'; \omega + \omega' + \nu)$ and $\bar{\gamma}^\text{ph}_{\lambda = 0} (\omega, \omega'; \nu)$ are important only for $|\omega|, |\omega'|, |\nu| \ll \xi_0$; it follows that the values of $\hat{\gamma}^\text{pp}_\lambda$ and $\bar{\gamma}^\text{ph}_\lambda$ are relevant only with all frequency arguments being small -- even for the cross-feedback terms in Eq.~(\ref{eq:decomposed_flow_equation}), compare the discussion in the paragraphs following Eq.~(\ref{eq:crossed_bubble_result}).
Consequently, the error made by replacing the step functions is negligible.
Due to the factors $\Theta (\xi_0 - \lambda)$, the actual flow now starts at $\lambda = \xi_0$.
This constitutes our last approximation.
Since the vertex functions occurring in the flow equations do not depend on the sign of $\tilde{\omega} = \pm \lambda$ anymore, we can easily perform the sums over $\tilde{\omega}$ in Eq.~(\ref{eq:decomposed_flow_equation}), e.g.,
\begin{equation}
\sum_{\tilde{\omega} = \pm \lambda} \frac{1}{\tilde{\omega}} \sgn (\tilde{\omega} + X) = \frac{2}{\lambda} \Theta (\lambda - |X|).
\end{equation}
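This identity, and in particular the complete cancellation of the two summands for $\lambda < |X|$, can be verified in a few lines of code (our illustration):

```python
def sgn(x):
    """Sign function; the argument x = 0 does not occur in the checks below."""
    return 1.0 if x > 0 else -1.0

def lhs(lam, X):
    """Sum over omega-tilde = +/-lam of sgn(omega-tilde + X)/omega-tilde."""
    return sgn(lam + X) / lam + sgn(-lam + X) / (-lam)

def rhs(lam, X):
    """(2/lam) * Theta(lam - |X|)."""
    return 2.0 / lam if lam > abs(X) else 0.0

# the summands add up for lam > |X| and cancel exactly for lam < |X|
for lam, X in [(1.0, 0.3), (1.0, -0.3), (2.5, 4.0), (0.5, -2.0)]:
    assert lhs(lam, X) == rhs(lam, X)
```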
A formal integration with respect to the flow parameter, starting from $\lambda_\text{ini}$ down to some value $\lambda$, then leads to
\begin{subequations}
\label{eq:integrated_flow}
\begin{eqnarray}
\nonumber && \hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega)\\
\label{eq:integrated_flow_pp} &=& - \frac{\rho}{V} \int_{\max \{ \lambda, |\Omega| \} }^{\xi_0} \frac{\dd \lambda'}{\lambda'}\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_{\lambda'} (|\omega|, \lambda'; \lambda') + \bar{\gamma}^\text{ph}_{\lambda'} (|\omega|, \lambda'; \max \{ \lambda', |\omega| \} ) \right]\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_{\lambda'} (\lambda', |\omega'|; \lambda') + \bar{\gamma}^\text{ph}_{\lambda'} (\lambda', |\omega'|; \max \{ \lambda', |\omega'| \} ) \right]
\end{eqnarray}
and
\begin{eqnarray}
\nonumber && \bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X)\\
&=& \frac{\rho}{V} \int_{\max \{ \lambda, |X| \} }^{\xi_0} \frac{\dd \lambda'}{\lambda'}\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_{\lambda'} (|\omega|, \lambda'; \max \{ \lambda', |\omega| \} ) + \bar{\gamma}^\text{ph}_{\lambda'} (|\omega|, \lambda'; \lambda') \right]\\
\nonumber && \times \left[ U + \hat{\gamma}^\text{pp}_{\lambda'} (\lambda', |\omega'|; \max \{ \lambda', |\omega'| \} ) + \bar{\gamma}^\text{ph}_{\lambda'} (\lambda', |\omega'|; \lambda') \right]
\end{eqnarray}
\end{subequations}
for $\xi_0 > \lambda, |\Omega|, |X|$.
As claimed above, the relations (\ref{eq:assumed_relations}) follow from these flow equations and are thus validated within logarithmic accuracy: On the right-hand side of Eq.~(\ref{eq:integrated_flow_pp}), only the absolute values of $\omega$ and $\omega'$ enter and only $\max \{ \lambda, |\Omega| \}$ appears with no separate dependence on $\lambda$ or $\Omega$; the analogue holds for the particle-hole channel.
For this reason we can from now on even write $\hat{\gamma}^\text{pp}_0$ and $\bar{\gamma}^\text{ph}_0$ instead of $\hat{\gamma}^\text{pp}_{\lambda'}$ and $\bar{\gamma}^\text{ph}_{\lambda'}$ in the integrands in Eq.~(\ref{eq:integrated_flow}).
Furthermore, the flow equations (\ref{eq:integrated_flow}), in conjunction with the vanishing initial conditions, imply that $\hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega)$ does not depend on $\omega^{(\prime)}$ if $|\omega^{(\prime)}| \le |\Omega|$ and the same for $\bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X)$ if $|\omega^{(\prime)}| \le |X|$.
It is therefore reasonable to introduce the shorthand notation
\begin{subequations}
\begin{eqnarray}
\hat{\gamma}^\text{pp}_\lambda (\Omega) &=& \hat{\gamma}^\text{pp}_\lambda (\omega, \omega'; \Omega) \qquad \text{if } |\omega|, |\omega'| \le |\Omega|\\
\bar{\gamma}^\text{ph}_\lambda (X) &=& \bar{\gamma}^\text{ph}_\lambda (\omega, \omega'; X) \qquad \text{if } |\omega|, |\omega'| \le |X|.
\end{eqnarray}
\end{subequations}
With this the integrated flow equations at $\lambda_\text{fin} = 0$ assume the form
\begin{subequations}
\label{eq:final_flow}
\begin{eqnarray}
&& \hat{\gamma}^\text{pp}_0 (\omega, \omega'; \Omega)\\
\nonumber &=& - \frac{\rho}{V} \int_{|\Omega|}^{\xi_0} \frac{\dd \lambda}{\lambda} \left[ U + \hat{\gamma}^\text{pp}_0 (|\omega|, \lambda; \lambda) + \bar{\gamma}^\text{ph}_0 (\max \{ \lambda, |\omega| \} ) \right]\\
\nonumber && \hphantom{- \frac{\rho}{V} \int_{|\Omega|}^{\xi_0}} \times \left[ U + \hat{\gamma}^\text{pp}_0 (\lambda, |\omega'|; \lambda) + \bar{\gamma}^\text{ph}_0 (\max \{ \lambda, |\omega'| \} ) \right]
\end{eqnarray}
and
\begin{eqnarray}
&& \bar{\gamma}^\text{ph}_0 (\omega, \omega'; X)\\
\nonumber &=& \frac{\rho}{V} \int_{|X|}^{\xi_0} \frac{\dd \lambda}{\lambda} \left[ U + \hat{\gamma}^\text{pp}_0 (\max \{ \lambda, |\omega| \} ) + \bar{\gamma}^\text{ph}_0 (|\omega|, \lambda; \lambda) \right]\\
\nonumber && \hphantom{\frac{\rho}{V} \int_{|X|}^{\xi_0}} \times \left[ U + \hat{\gamma}^\text{pp}_0 (\max \{ \lambda, |\omega'| \} ) + \bar{\gamma}^\text{ph}_0 (\lambda, |\omega'|; \lambda) \right].
\end{eqnarray}
\end{subequations}
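As a consistency check, a single iteration of Eq.~(\ref{eq:final_flow}) starting from the bare vertex, i.e., keeping only the $U^2$ terms in the integrands, yields
\begin{equation}
\hat{\gamma}^\text{pp}_0 (\omega, \omega'; \Omega) \approx - \frac{\rho U^2}{V} \ln \frac{\xi_0}{|\Omega|} , \qquad \bar{\gamma}^\text{ph}_0 (\omega, \omega'; X) \approx \frac{\rho U^2}{V} \ln \frac{\xi_0}{|X|} ,
\end{equation}
so that each channel reproduces at second order the logarithmic growth of the corresponding bare bubble, cf.\ the discussion around Eq.~(\ref{eq:origin_of_logarithm}).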
\subsection{Relation to parquet approach of Roulet et al.}
The integrated flow equations (\ref{eq:final_flow}) are identical to Eq.~(A1) and (A2) of Roulet et al.~\cite{Roulet69}.
There are differences only in the notation, in particular the authors of Ref.~\cite{Roulet69} introduced logarithmic variables for all frequencies.
They solved these integral equations without further approximations and used the resulting 1PI vertex to determine $\im \chi (\nu)$.
We can adopt the steps of Roulet et al.\ without changes to determine $\im \chi (\nu) = - \pi (\rho / V) (\xi_0 / \nu)^{2g} \Theta (\nu)$, which provides the shape of the absorption rate near the threshold via $R(\nu) = - 2 |W|^2 \im \chi (\nu)$.
We refrain from repeating these steps here.
We have thus established that the one-loop FRG approach presented in this section leads to the exact same result for the X-ray absorption rate as the parquet-based scheme by Roulet et al.
In particular, this proves that the one-loop FRG approach captures all leading logarithms.
On top of that, we argue that the two approaches not only produce the identical result but are fully equivalent on a detailed level.
In fact, the various approximation steps in the two approaches can be identified with each other.
The first approximation performed by Roulet et al. in Ref.~\cite{Roulet69} is to replace the totally irreducible interaction $R$ by the bare interaction.
This reduces the diagrams under consideration to the parquet diagrams in the particle-particle and exchange particle-hole channel.
The same reduction results in the FRG approach from neglecting the three-particle vertex and $\gamma_{d d| d d}$.
(The role of neglecting $\gamma_{d d| d d}$ is to eliminate the direct particle-hole channel; in the treatment by Roulet et al., this channel is embedded in the irreducible interaction $R$ and then neglected.)
Note that disregarding the three-particle vertex in the FRG approach brings about one additional approximation, namely that the internal frequency integrations in parquet diagrams with crossed channels are performed only partly.
In the approach of Roulet et al., the same restriction in the frequency integrations is a by-product of the logarithmic approximation, see below.
Reference \cite{Roulet69} continues with some approximations which we transferred one-to-one to our FRG approach: neglecting the real part of $G_c$, neglecting the deep-state self-energy except for a static contribution, and neglecting the imaginary part of $G_d$.
The next step is the ``logarithmic approximation'' ten lines above Eq.~(29) of Ref.~\cite{Roulet69}.
It has its direct counterpart in our Eq.~(\ref{eq:max_approximation}).
Furthermore, approximating the upper integration boundary by $\xi_0$ on the right-hand side of Eq.~(29) in Ref.~\cite{Roulet69} corresponds to replacing $\Theta(\xi_0 - |\Omega \mp \lambda|)$ by $\Theta(\xi_0 - \lambda)$ in our Sec.~\ref{sec:flow_equation_for_1PI_two-particle_vertex}.
Finally, the abovementioned restriction in the internal frequency integrations follows in Ref.~\cite{Roulet69} from the logarithmic approximation when Eq.~(29) and (31) of that reference are combined.
The restriction is fully realized in Eq.~(34) of Ref.~\cite{Roulet69}, where the argument of $I_1$ in the first integrand is not greater than the integration variable $t_i$.
When the inner bubble contained in $I_1$ is evaluated according to the second integral, this argument takes on the role of $\beta$ which is the upper bound of the second integral.
The integration variable of the outer bubble, i.e., of the first integral, is therefore greater than the one of the inner bubble.
For the corresponding frequencies of the bubbles follows the reverse relation, and this is precisely the same restriction as the one established by the one-loop FRG.
In the parquet approach of Ref.~\cite{Roulet69}, it remains to bring the equations into a solvable form.
To achieve this Roulet et al. invoke a ``trick by Abrikosov and Sudakov'', see p.~1081 and the appendix of Ref.~\cite{Roulet69}.
This step at the very end of their solution can indeed be identified with the introduction of a sharp frequency cutoff at the very beginning of the FRG treatment: When applying this trick, one considers the general structure of a parquet diagram reducible in a given channel; one identifies among the outermost bubbles the one with the smallest absolute value of the deep-state frequency; to both sides of this bubble, there are full 1PI vertices that are restricted to contain only greater deep-state frequencies; finally, the abovementioned smallest frequency is integrated over.
The concept of a smallest deep-state frequency which is integrated over corresponds precisely to a sharp frequency cutoff in the deep-state propagator and a formal integration of the FRG flow equations.
In this way Fig.~13 of Ref.~\cite{Roulet69} (which shows only one channel, has wrongly directed arrows on the deep-state lines, and does not formally add the kernels $I_1$ to vertex functions $\gamma$ on the left and on the right) anticipates the graphical representation of the FRG flow equation in our Fig.~\ref{fig:flow_of_full_vertex}.
We thus have established the full equivalence of both approaches.
The only difference lies in the order of the steps.
Our FRG approach starts by introducing a cutoff and continues with approximations to the flow equations.
Roulet et al., on the other hand, apply equivalent approximations to the parquet equations and use the cutoff only at the very end to rewrite and solve the resulting equations.
We note that our particular choice of the cutoff is crucial for the equivalence discussed above.
For ill-conceived cutoffs the 1PI three-particle vertex could significantly influence the flow of the 1PI two-particle vertex such that a one-loop truncation would not capture the leading logarithms.
However, we expect a one-loop truncation to be sufficient in the case of any sensible cutoff that regularizes the divergences of the bare bubbles during the entire flow.
\section{Conclusion and outlook}
\label{sec:conclusion}
Historically, the concept of summing up all parquet diagrams with bare lines was developed to construct the leading-logarithmic approximation for models in which bubbles in different channels produce simple logarithmic divergences \cite{Diatlov57, Abrikosov65, Bychkov66, Roulet69}.
In a paradigmatic case, Roulet et al. derived from this approach the leading approximation for the rate of X-ray absorption in metals close to the threshold frequency \cite{Roulet69}.
In the present paper, we have shown that a standard one-loop FRG approximation with sharp frequency cutoff reproduces identically the parquet-based leading-logarithmic approximation of Roulet et al.
There is a detailed correspondence between the approaches; in particular, the ``trick by Abrikosov and Sudakov'' to evaluate the approximate parquet equations corresponds to the introduction of the cutoff in the FRG.
In total, the two approaches can be understood as different viewpoints on the same technical steps.
Extending our one-loop scheme to a multiloop scheme in analogy to Ref.~\cite{Kugler18a} would result in a change on the subleading level without leading to a controlled improvement.
We explained why the parts of the parquet diagrams that are not captured by the one-loop FRG (and not by the treatment of Roulet et al.) are subleading.
The traditional understanding that low-order RG approximations can reproduce the leading-logarithmic parquet result for models with simple logarithmic divergences of the bubbles \cite{Anderson70, Abrikosov70, Fowler71, Solyom74, Solyom74a, Kimura73, Solyom79, Irkhin01, Binz03} is thus reconfirmed also in this case.
For the whole class of these mostly zero- and one-dimensional models, we therefore do not expect the multiloop FRG to be advantageous on the leading-logarithmic level.
There remains at least the benefit that the multiloop scheme provides a leading-logarithmic approximation for any choice of flow parameter, as long as convergence is reached.
This establishes more flexibility compared to our analysis of the one-loop scheme, which deals with a specific cutoff.
While we expect that our analysis can be transferred to other regularizing cutoffs, this should be reexamined in each case.
In fact, the effort to do so would be worthwhile since the leading-logarithmic properties can often be extracted analytically from one-loop RG (or from the parquet equations) as seen in this study and in almost all corresponding references mentioned in the Introduction.
The multiloop scheme, in contrast, provides only a numerical solution.
Consequently, we expect that the multiloop scheme will mainly be found useful to evaluate the full parquet approximation, which is typically used for two-dimensional models.
The resulting approximation benefits from preserving certain sum rules and conservation laws and the Mermin-Wagner theorem.
Results of multiloop investigations of the two-dimensional Hubbard model are promising in this respect \cite{Tagliavini19, Hille20a, Hille20}.
As the multiloop FRG is not restricted to approximate the totally irreducible vertex by the bare one, it might also turn out to be helpful for constructing diagrammatic extensions of the dynamical mean-field theory.
In the present paper, we established a formulation of the FRG within the (real-time) zero-temperature formalism.
This formalism is more restrictive than the Matsubara and Keldysh formalisms because it only provides access to ground-state properties.
Additionally, its application requires that the noninteracting ground state is not orthogonal to the interacting one, as can happen when the two states have different symmetries \cite{Negele88}.
However, it has the advantage that it is based on real times or frequencies and therefore does not require an analytic continuation from the imaginary to the real frequency axis.
Such an analytic continuation is a significant complication for numerical FRG results obtained within Matsubara formalism, see, e.g., Ref.~\cite{Karrasch08}.
Compared to Keldysh formalism, the zero-temperature formalism is easier to work with as it involves only a single time axis instead of a two-branch time contour.
The two branches in Keldysh formalism give rise to different components (say, chronological, lesser, greater, antichronological) of Green functions, whereas there is only a single component in the zero-temperature formalism.
Due to these features, we expect that the zero-temperature FRG developed in this paper can have useful future applications.
Several topics for future research naturally arise from the considerations set forth in this paper.
First, it should be clarified how our observations made within the real-time zero-temperature formalism can be transferred to formulations of the FRG within the Matsubara formalism.
This is important to achieve a more detailed comparability to the works of Kugler and von Delft on the X-ray-absorption problem \cite{Kugler18, Kugler18a}, which use the Matsubara formalism.
In particular, the nature of the improvements achieved by multiloop FRG as reported in Ref.~\cite{Kugler18a} could be clarified: either the one-loop schemes used in that reference are suboptimal in that they miss some leading logarithms or the observed changes due to the multiloop scheme are in the uncontrolled regime.
This question is still open as Ref.~\cite{Kugler18a} disregards the subleading difference between the exact sum of the parquet diagrams and the leading-logarithmic solution of Roulet et al. from Ref.~\cite{Roulet69}.
The transfer of our observations to the Matsubara formalism is important also from a more general perspective due to the widespread use of the Matsubara FRG as a tool to investigate low-dimensional fermionic systems \cite{Metzner12, Kopietz10, Platt13}.
We started investigations of the leading-logarithmic approximation to the X-ray-absorption rate using Matsubara FRG.
They indicate that the central message of this paper can indeed be transferred to the Matsubara case: a reasonably crafted one-loop Matsubara FRG scheme reproduces the leading-logarithmic approximation.
We observe that passing over to continuous Matsubara frequencies at zero temperature and setting $\tilde{\varepsilon}_d$ to zero requires particular care within Matsubara formalism.
We intend to address these points in a future publication.
Another topic for future research is the mechanism by which the one-loop FRG captures the leading logarithms.
The corresponding reasoning in Sec.~\ref{sec:flow_equation_for_1PI_two-particle_vertex} was based on individual diagrams; this allowed us to stress the close analogy to the leading-logarithmic parquet approximation.
We expect, however, that an argument based completely on the structure of the flow equations could be more efficient.
This could help with another task, namely, to construct FRG approximations that treat subleading contributions consistently.
Whether the multiloop FRG with dressed propagators can contribute to achieve the latter goal remains to be clarified as well.
Furthermore, it is desirable to extend the considerations of this paper to nonequilibrium situations which can be described within the Keldysh formalism.
This would allow for interesting applications to model systems for quantum dots and wires.
For example, one could expand on the FRG study of nonequilibrium Kondo physics in Ref.~\cite{Schmidt10}, which does not discuss the question of a consistent treatment of logarithmic divergences.
This could make it possible to address open questions concerning the influence of a magnetic field and to achieve a methodological comparison with the real-time RG approach to nonequilibrium Kondo physics of Ref.~\cite{Reininghaus14}.
\begin{acknowledgments}
We are grateful to Fabian Kugler and Jan von Delft for raising our interest in the topic of this work and for useful discussions.
We are obliged to Andrey Katanin for instructive explanations on the role of logarithmic divergences in two-dimensional systems.
We thank Volker Meden for many stimulating discussions in the broader context of this work and for a critical reading of the manuscript.
This work was supported by the Deutsche Forschungsgemeinschaft via RTG 1995.
\end{acknowledgments}
\section{Introduction}
\setcounter{equation}{0}
Currently, the focus of high energy physics is the LHC experiment.
To interpret the experimental data, we need to evaluate scattering
amplitudes to the high accuracy level required by the data. Thus for most
processes, the one-loop evaluation becomes necessary. In the last ten
years, enormous progress has been made in the computation of
one-loop scattering amplitudes (see, for example, the references
\cite{Bern:2008ef,Binoth:2010ra,AlcarazMaestre:2012vp} and citations
in the papers). However, for some processes at modern colliders,
such as the process $gg\to \gamma\gamma$, which is an important
background for searching the Higgs boson at the LHC, one-loop
amplitudes do not suffice since their leading-order terms begin at
one loop. Thus next-to-leading order corrections require the
computation of two-loop amplitudes
\cite{Berger:1983yi,Aurenche:1985yk,Ellis:1987xu}.
The traditional method for amplitude calculation is through
Feynman diagrams. This method is well organized and has a clear
physical picture. It has also been implemented in many computer
programs. However, as the loop order or the number of
external particles increases, the complexity of the computation grows
dramatically. Thus even with the most powerful computers available,
many interesting processes related to LHC experiments cannot be
handled by the traditional method.
To meet this challenge, many new methods (see the books
\cite{Smirnov:2004ym,Smirnov:2006ry,Smirnov:2012gma}) have been
developed, such as the IBP (integration-by-parts) method
\cite{Chetyrkin:1981qh,Tarasov:1998nx,Bern:2000dn,Anastasiou:2000kg,Anastasiou:2000ue,Glover:2001af,
Anastasiou:2001sv,Laporta:2001dd,Bern:2002tk,Tarasov:2004ks} (for some
new developments, see
\cite{Gluza:2010ws,Kalmykov:2011yy,Schabinger:2011dz}), the differential
equation method
\cite{Kotikov:1990kg,Remiddi:1997ny,Gehrmann:1999as,Argeri:2007up,Henn:2013pwa,Henn:2013woa,
Henn:2013nsa,Argeri:2014qva}, the MB (Mellin--Barnes) method
\cite{Bergere:1973fq,Usyukina:1975yg,Smirnov:1999gc,Tausk:1999vh},
etc. Among these methods, the reduction method
\cite{Passarino:1978jh,Neerven:1984,Bern:1992,Ellis:2007} is one of
the most useful. More explicitly, the reduction of an
amplitude means that any amplitude ${\cal A}$ can be expanded in a set of
bases (or ``master integrals'') as
\bea {\cal A}=\sum_ic_i {\cal A}_i~,~~~\label{1loop-exp} \eea
with rational coefficients $c_i$. With this expansion, the amplitude
calculation can be separated into two parts: (a) the evaluation of
the bases (or master integrals) at a given loop order and (b) the
determination of the coefficients $c_i$ for a particular process.
The former part can be done once and for all, and the results can be
applied to any process. Thus in practical applications, the
latter part, i.e., the determination of the coefficients, becomes the
central focus of all calculations.
The unitarity method is an ideal tool to determine the
coefficients \cite{Bern:1994zx,Bern:1995db,Bern:1997sc,Bern:1996ja,Britto:2004nc,Britto:2004nj,
Bern:2005cq,Britto:2005ha,Brandhuber:2005jw,Bern:2007dw,Forde:2007mi,
Badger:2008cm,Anastasiou:2006jv,Britto:2006fc,Berger:2009zb,Bern:2010qa}.
With the expansion \eref{1loop-exp}, if we perform a unitarity cut on
both sides, we get
\bea \Delta{\cal A}=\sum_ic_i\Delta {\cal
A}_i~.~~~\label{1loop-exp-1} \eea
So if both $\Delta{\cal A}$ and $\Delta {\cal A}_i$ can be evaluated
analytically, and if the different $\Delta {\cal A}_i$ have
distinguishable analytic structures (which we will call the
``signatures'' of the bases under the unitarity cut), we can compare both
sides of \eref{1loop-exp-1} to determine the coefficients $c_i$,
analogous to the fact that if two polynomials in $x$ are equal, so
are their coefficients of each term $x^n$. The unitarity method has
proven very successful in determining coefficients for
one-loop amplitudes (see the reviews \cite{Britto:2010xq,
Dixon:2013uaa}). For some subsets of bases (such as the box topology
at one loop and the double-box topology at planar two loops), a more
efficient method, the so-called ``generalized unitarity method'' (or
``maximal unitarity cut'' or ``leading singularity''), has been
developed
\cite{Britto:2004nc,Buchbinder:2005wp,ArkaniHamed:2009dn,ArkaniHamed:2012nw,Kosower:2011ty,Larsen:2012sx,CaronHuot:2012ab,Johansson:2012zv,
Johansson:2012sf,Sogaard:2013yga,Johansson:2013sda,Sogaard:2013fpa}.
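The coefficient-matching logic described above can be mimicked in a toy numerical setting: if the cuts of the bases are known functions with distinguishable behavior, sampling the cut of the amplitude at a few kinematic points determines the $c_i$ from a linear system. The ``cut bases'' in this sketch are arbitrary placeholder functions, not actual cut integrals:

```python
import numpy as np

# Toy "cut bases" with distinguishable analytic structure;
# placeholder functions, not actual cut integrals.
def cut_basis(i, s):
    return [np.log(s), np.sqrt(s), 1.0 / s][i]

true_c = np.array([3.0, -1.5, 0.25])    # coefficients to be recovered

def cut_amplitude(s):                   # plays the role of Delta A
    return sum(c * cut_basis(i, s) for i, c in enumerate(true_c))

# sample the cut at a few kinematic points and solve the linear system
s_points = [1.3, 2.7, 5.1, 9.4]
M = np.array([[cut_basis(i, s) for i in range(3)] for s in s_points])
rhs = np.array([cut_amplitude(s) for s in s_points])
c_fit, *_ = np.linalg.lstsq(M, rhs, rcond=None)
```

In the actual calculation, of course, the role of the sampled functions is played by the analytically integrated cuts of the master integrals.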
The applicability of the reduction method rests on the validity of
the expansion \eref{1loop-exp}. Thus the determination of the
bases becomes the first issue. From recent studies, it has been realized
that there are two kinds of bases: the integrand bases and the
integral bases. The integrand bases are algebraically independent
rational functions before loop integration is performed. For one-loop,
the integrand bases have been determined by OPP \cite{Ossola:2006us}.
For two-loop or more, the computational algebraic geometry method
has been proposed to determine the integrand bases
\cite{Mastrolia:2011pr,Badger:2012dp,Mastrolia:2012an,Kleiss:2012yv,Badger:2012dv,
Mastrolia:2012wf,Huang:2013kh,Badger:2013gxa,Zhang:2012ce,Feng:2012bm,Mastrolia:2012du}.
In general the number of integrand bases is larger than the number
of integral bases, because after loop integration, some combinations
of integrand bases may vanish. For one-loop amplitudes, the
difference between these two numbers is not very significant. For
example, the number of integral bases is one while the number of
integrand bases is seven for the triangle topology of renormalizable
field theories \cite{Ossola:2006us}. However, for two-loop
amplitudes, the difference can be huge. As we will show later, for
the double-triangle topology there are only a few integral bases,
while the number of integrand bases is about one hundred for
renormalizable field theories \cite{Feng:2012bm}. Thus the
determination of integral bases for two loops and higher becomes
necessary.
Although integrand bases can be determined systematically, the
determination of integral bases is far from being completely solved.
It is our attempt in this paper to find an efficient method to solve
the problem. Notice that in the unitarity method, the action
$\Delta$ in \eref{1loop-exp-1} acts directly on the integrated
results; thus, if the left-hand side $\Delta{\cal A}$ can be
integrated analytically for arbitrary inputs, we can classify the
independent, distinguishable analytic structures in these results.
Each structure should correspond to one integral basis\footnote{It
is possible that two different integral bases have the same analytic
structure for all physical unitarity cuts, but we do not consider
this possibility in the current paper. All claims in this paper hold
up to this ambiguity.}. In this way we can
determine the integral bases.
In this paper, taking the double-box topology and its daughter
topologies as examples, we generalize the unitarity method to two-loop
amplitudes and try to determine the integral bases. Different from
the maximal unitarity method \cite{Kosower:2011ty}, we cut only four
propagators (the propagator with mixed loop momenta is not
touched). Compared with the maximal unitarity cut, where the solutions for
the loop momenta are in general complex, our cut conditions
guarantee the existence of real solutions for the loop momenta, thus
avoiding the effects of spurious integrations.
This paper is organized as follows. In section 2 we review the
one-loop unitarity method and then generalize the scheme to
two-loop. For two-loop, two sub-one-loop phase space integrations
should be evaluated. In section 3, we integrate the first
sub-one-loop integration of triangle topology. The result is used in
section 4, where integration over the second sub-one-loop of
triangle topology is performed. Results obtained in this section
allow us to determine integral bases for the topology ${\cal
A}_{212}$. Results in section 3 are also used in section 5,
where integration over the second sub-one-loop of box topology is
performed, and the result can be used to determine integral bases
for topology ${\cal A}_{213}$. In section 6, we briefly discuss the
integral bases of topology ${\cal A}_{313}$ since results are well
known for this topology. Finally, in section 7, a short conclusion
is given.
Technical details of the calculation are presented in the Appendices. In
Appendix A, some useful formulae for phase space integration are
summarized. In Appendix B, the phase space integration is done for
one-loop bubble, one-loop triangle and one-loop box topologies. In
Appendix C, details of an integration for topology ${\cal A}_{313}$
are discussed.
\section{Setup}
In this section, we present some general discussion of the
calculation done in this paper. Firstly, we review how to perform the
phase space integration in the unitarity method, illustrated by a one-loop
example. Then we set up the framework of the unitarity method for the
two-loop topologies which are the starting point of this paper.
\subsection{Phase space integration}
The unitarity method has been successfully applied to one-loop
amplitudes
\cite{Bern:1994zx,Bern:1995db,Bern:1997sc,Bern:1996ja,Britto:2004nc,Britto:2004nj,
Bern:2005cq,Britto:2005ha,Brandhuber:2005jw,Bern:2007dw,Forde:2007mi,
Badger:2008cm,Anastasiou:2006jv,Britto:2006fc,Berger:2009zb,Bern:2010qa}
. Here we give a brief summary of the general
$(4-2\epsilon)$-dimensional unitarity method \cite{Anastasiou:2006jv,
Britto:2006fc}, which will be used later. Throughout this paper we use
the metric $\eta_{\mu\nu}=(+,-,...,-)$ and the QCD convention for
spinors, i.e., $2k_i\cdot k_j\equiv \Spaa{k_i|k_j}\Spbb{k_j|k_i}$.
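This convention can be verified numerically by constructing null momenta from spinors. The component and sign conventions in the sketch below are one common choice and may differ from those of other references; the point is only that $2k_i\cdot k_j$ equals the product of the two spinor brackets:

```python
import numpy as np

rng = np.random.default_rng(7)

def momentum(la, lt):
    """Null 4-vector p^mu from p_{a adot} = la_a lt_adot,
    using sigma^mu = (1, sigma_x, sigma_y, sigma_z)."""
    p = np.outer(la, lt)
    return np.array([(p[0, 0] + p[1, 1]) / 2,    # p^0
                     (p[0, 1] + p[1, 0]) / 2,    # p^1
                     (p[1, 0] - p[0, 1]) / 2j,   # p^2
                     (p[0, 0] - p[1, 1]) / 2])   # p^3

def mdot(p, q):
    """Minkowski product with metric (+,-,-,-)."""
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

la_i = rng.normal(size=2) + 1j * rng.normal(size=2)
la_j = rng.normal(size=2) + 1j * rng.normal(size=2)
lt_i, lt_j = np.conj(la_i), np.conj(la_j)   # real momenta

angle = la_i[0] * la_j[1] - la_i[1] * la_j[0]    # <k_i k_j>
square = lt_i[0] * lt_j[1] - lt_i[1] * lt_j[0]   # [k_j k_i] in this sign choice

lhs = 2 * mdot(momentum(la_i, lt_i), momentum(la_j, lt_j))
rhs = angle * square
```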
For one-loop, the action $\Delta$ in \eref{1loop-exp-1} is realized
by putting two internal propagators on-shell. More explicitly, let
us consider the following most general input\footnote{The most
general expression for the numerator is $\sum_i \prod_{j}
(\ell\cdot R_{ij})$. For each term $\prod_{j=1}^n (\ell\cdot
R_{ij})$, we can construct $(\ell\cdot \W R_i)^n$ with $\W
R_i=\sum_{j=1}^n y_j R_{ij}$. Thus if we know the result for the
numerator $(\ell\cdot \W R_i)^n$, we can expand it into a
polynomial in the $y_i$ and read out the corresponding result for
$\prod_{j=1}^n (\ell\cdot R_{ij})$.}
with massless internal propagators\footnote{For simplicity we consider
massless propagators, but massive propagators can be dealt with similarly.}
\bea {\cal A}^{(a)}_n &\equiv & \int d^{4-2\eps} \WH \ell {\cal
I}_{n }^{(a)} = \int d^{4-2\eps} \WH \ell { (2\WH\ell\cdot T)^a\over
\WH\ell^2\prod_{i=1}^{n-1} (\WH
\ell-K_i)^2}~,~~~\label{1loop-gen-integrand} \eea
where the internal momentum is in $(4-2\epsilon)$-dimensional space and
all external momenta are in pure 4D space for our regularization
scheme. The unitarity cut with intermediate flowing momentum $K$ is
given by putting $\WH \ell^2$ and $(\WH\ell-K)^2$ on-shell, and we
get the expression
\bea \Delta {\cal A}^{(a)}_n = \int d^{4-2\eps} \WH \ell {
(2\WH\ell\cdot T)^a\delta( \WH\ell^2)\delta((\WH \ell-K)^2)\over
\prod_{i=1}^{n-2} (\WH
\ell-K_i)^2}~.~~~\label{1loop-gen-integrand-cut} \eea
With two delta-functions, the original $(4-2\eps)$-dimensional
integration is reduced to $(2-2\eps)$-dimensional integration. To
carry out the remaining integration, we decompose $\WH\ell$ as
$\WH\ell=\W \ell+\mu$, where $\W\ell$ is the pure 4D part while
$\mu$ is the $(-2\eps)$-dimensional part\cite{Anastasiou:2006jv},
then the measure becomes
\bea \int d^{4-2\eps} \WH \ell\delta( \WH\ell^2)\delta((\WH
\ell-K)^2) (\bullet)=\int d^{-2\epsilon}\mu\int d^4\W \ell\delta(
\W\ell^2-\mu^2) \delta((\W \ell-K)^2-\mu^2)(\bullet)~.~~~ \eea
Next, we split $\W \ell$ into $\W\ell=\ell+zK$ with $\ell^2=0$ to
arrive at
\bea & &\int d^4 \W \ell\delta( \W\ell^2-\mu^2)\delta((\W
\ell-K)^2-\mu^2)(\bullet)\nn &=&\int dz
d^4\ell\delta(\ell^2)(2\ell\cdot K)\delta( z^2K^2+2z\ell\cdot
K-\mu^2) \delta((1-2z)K^2-2\ell\cdot
K)(\bullet)~.~~~\label{lightcone} \eea
Having the form \eref{lightcone}, we can use the following
well-known result of spinor integration\footnote{For one-loop, we can
take either positive light cone or negative light cone, where for
negative light cone, the $t$-integration will be $\int_{-\infty}^0$.
For two-loop, it can happen that if we take positive light cone for
$\ell_1$, then we need to take negative light cone for $\ell_2$.
However, the choice of light cone only gives an overall sign and
does not affect $\la,\W\la$ integration.}\cite{Cachazo:2004kj}.
Define null momentum as $\ell=t \la \W \la$, then
\bea \int d^4\ell\delta^+(\ell^2)(\bullet)=\int_0^{+\infty}tdt
\int\Spaa{\la|d\la}\Spbb{\W\la|d\W\la}(\bullet)~.~~~\label{mea} \eea
Substituting \eref{mea} back into \eref{lightcone}, we can use the
remaining two delta-functions to fix $t$ and $z$ as
\bea z={1-\sqrt{1-u}\over 2}~~,~~t={(1-2z)K^2\over
\Spab{\la|K|\W\la}}~~,~~u\equiv {4\mu^2\over K^2}~.~~~\eea
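The quoted solution for $z$ follows from a short algebra exercise: substituting $2\ell\cdot K=(1-2z)K^2$ from the second delta function into the first gives $z(1-z)K^2=\mu^2$, i.e., $z(1-z)=u/4$. A quick sympy check confirms that the quoted branch is a root and vanishes as $\mu\to 0$:

```python
import sympy as sp

z, u = sp.symbols('z u')

# z^2 K^2 + 2 z (ell.K) - mu^2 = 0 with 2 ell.K = (1 - 2z) K^2
# reduces to z (1 - z) = mu^2 / K^2 = u / 4
roots = sp.solve(sp.Eq(z * (1 - z), u / 4), z)

quoted = (1 - sp.sqrt(1 - u)) / 2   # branch with z -> 0 as mu -> 0
```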
After the above simplification, the integral
\eref{1loop-gen-integrand-cut} is transformed into the following
spinor form
\bea \Delta {\cal A}^{(a)}_{n}&= & \int d^{-2\eps} \mu \int
\Spaa{\la|d\la}\Spbb{\W\la|d\W\la} {(-)^{n-2}[(1-2z) K^2]^{a-n+3}
\Spab{\la|R|\W\la}^a \over
\Spab{\la|K|\W\la}^{a-n+4}\prod_{i=1}^{n-2}\Spab{\la|Q_i|\W\la}}~,~~~
\label{1loop-uni-gen} \eea
where
\bea R \equiv T+ {z (2K\cdot T)\over (1-2z) K^2} K~~,~~Q_i\equiv
K_i+{z (2K\cdot K_i)-K_i^2\over (1-2z) K^2} K~.~~~\label{1loo-R-Q}
\eea
To deal with an integral like $\int
\Spaa{\la|d\la}\Spbb{\W\la|d\W\la}f(\la,\W\la)$ when $f(\la,\W\la)$
is a rational function,
the first step is to find a function $g(\la,\W\la)$ satisfying
\bea \int \Spaa{\la|d\la}\Spbb{\W\la|d\W\la}f(\la,\W\la)= \int
\Spaa{\la|d\la}\Spbb{d\W\la|{\partial \over
\partial\W\la}}g(\la,\W\la)~.~~~ \eea
With $g(\la,\W\la)$, the integration is given algebraically by the
sum of the residues of the holomorphic poles of
$g(\la,\W\la)$ \cite{Britto:2005ha}. In Appendix B, we summarize some
general results of standard one-loop integrations using the above
technique. It is worth mentioning that for two-loop, $f(\la,\W\la)$
might not be a rational function. We will discuss how to deal with this
later.
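The final algebraic step, summing the residues of the holomorphic poles of $g(\la,\W\la)$, can be illustrated in a one-variable scalar analogy with sympy's residue function (the rational function below is an arbitrary toy example, not a spinor integrand):

```python
import sympy as sp

z = sp.symbols('z')

# toy "primitive" g with two simple poles
g = 1 / ((z - 1) * (z + 2))

res_at_1 = sp.residue(g, z, 1)     # residue at z = 1
res_at_m2 = sp.residue(g, z, -2)   # residue at z = -2

# g falls off like 1/z^2, so the residues sum to zero; a contour
# enclosing only z = 1 therefore picks up exactly res_at_1
total = sp.simplify(res_at_1 + res_at_m2)
```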
We also want to remark that in the framework of the
$(4-2\eps)$-dimensional unitarity method, the coefficient of each basis
will be a polynomial in $\mu^2$ (recalling the splitting
$\WH\ell=\W\ell+\mu$). There are two ways to handle this. In the
first, one further integrates $\int d^{-2\eps}\mu
~(\mu^2)^{n}$ to find coefficients depending on $\eps$. In the
second, we simply keep $\mu^2$ but include the dimensionally
shifted scalar bases \cite{Bern:1996ja,Heinrich:2010ax}, such as
\bea {\cal A}^{D=(4-2\epsilon)}[(\mu^2)^r]\equiv\int
d^{-2\epsilon}\mu d^4 \W\ell{(\mu^2)^r\over
(\W\ell^2-\mu^2)\prod_{i=1}^{n-1}((\ell-K_i)^2-\mu^2)}~.~~~\eea
This is equivalent to
\bea {\cal
A}^{D=(4-2\epsilon)}[(\mu^2)^r]=-\epsilon(1-\epsilon)...(r-1-\epsilon)
{\cal A}^{D=(4+2r-2\epsilon)}[1]~.~~~\eea
For one-loop, dimensionally shifted bases are often used. In this
paper we adopt a similar strategy, i.e., keeping the $\mu$-part
and introducing the dimensionally shifted bases.
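The prefactor $-\epsilon(1-\epsilon)\cdots(r-1-\epsilon)$ in the shift relation above equals the Gamma-function ratio $\Gamma(r-\epsilon)/\Gamma(-\epsilon)$; this purely mathematical identity is easy to confirm numerically:

```python
import math

def shift_prefactor(r, eps):
    """Gamma(r - eps) / Gamma(-eps), the factor relating the
    (mu^2)^r insertion to the dimensionally shifted scalar basis."""
    return math.gamma(r - eps) / math.gamma(-eps)

def explicit_product(r, eps):
    """(-eps)(1 - eps) ... (r - 1 - eps)."""
    out = 1.0
    for k in range(r):
        out *= k - eps
    return out
```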
\subsection{Generalizing to two-loop case}
In this subsection, we set up the unitarity method for two-loop
amplitudes, with the particular aim of determining integral
bases.
The first problem is to decide which propagators should be cut.
There are three kinds of propagators: (1) propagators depending on
$\WH \ell_1$ only; (2) propagators depending on $\WH \ell_2$ only;
(3) propagators depending on both $\WH \ell_1$ and $\WH \ell_2$. In
principle, we can cut any propagators, but for simplicity, in this
paper we will cut propagators of the first two kinds. For our
choice, we cut two propagators of the first kind and two propagators
of the second kind. With this arrangement, each loop is treated
exactly as in the familiar one-loop unitarity method.
Next we set up the notation for two-loop integrals. The two internal
momenta are denoted as $\WH \ell_1, \WH \ell_2$ in
$(4-2\eps)$-dimension, while all external momenta are in pure
4-dimension. We use $n_1, n_2, n_{12}$ to denote the number of each
kind of propagators respectively. Then a general integrand with
massless propagators\footnote{In this paper, we consider the
massless case only; the case of massive internal propagators is left
for future work.} can be represented by\footnote{In this
paper, we use ${\cal I}$ for integrand and ${\cal A}$ for integral.}
\bea {\cal I}_{n_1 n_{12} n_2}^{(a,b)}\equiv { (2\WH\ell_1\cdot
T_1)^a(2\WH\ell_2\cdot T_2)^b\over [\WH\ell_1^2\prod_{i=1}^{n_1-1}
(\WH\ell_1-K_{1i})^2][\WH\ell_2^2\prod_{j=1}^{n_2-1}
(\WH\ell_2-K_{2j})^2][(\WH\ell_1+\WH\ell_2)^2\prod_{t=1}^{n_{12}-1}
(\WH\ell_1+\WH\ell_2-K_t)^2]}~.~~~\label{gen-integrand} \eea
The unitarity cut action $\Delta$ is then given by\footnote{We have
neglected some overall factors in the definition of the integration
since they do not matter for our discussion.}
\bea \Delta {\cal A} & = & \int \prod_{i=1}^2 d^{4-2\eps}\WH \ell_i
\left\{{\cal I}_{n_1 n_{12} n_2}^{(a,b)} \prod_{i=1}^2\WH
\ell_i^2(\WH \ell_i-K_{L_i})^2 \right\} \prod_{i=1}^2\delta(\WH
\ell_i^2)\delta((\WH \ell_i-K_{L_i})^2)~.~~~\label{double-uni-cut}
\eea
\EPSFIGURE[ht]{DBox-cut.eps,width=17cm}{The unitarity cut of double
box topology $I_{313}$ as well as its seven daughter topologies. The
dashed red lines indicate cuts. \label{DBox-cut}}
With the above setup, we take a well-studied example \cite{Gluza:2010ws,
Kosower:2011ty,Larsen:2012sx,CaronHuot:2012ab,
Johansson:2012zv,Johansson:2012sf,Sogaard:2013yga,Johansson:2013sda},
i.e., the four-point two-loop double-box (${\cal A}_{313}$) integral,
as the target to apply the unitarity method and determine integral
bases. The integrand is given by
\bea {\cal I}^{(a,b)}_{313} & = & {(2\WH\ell_1 \cdot T_1)^a
(2\WH\ell_2\cdot T_2)^b \over \WH\ell_1^2
(\WH\ell_1-K_1)^2(\WH\ell_1-K_{12})^2 \WH\ell_2^2
(\WH\ell_2-K_4)^2(\WH\ell_2-K_{34})^2
(\WH\ell_1+\WH\ell_2)^2}~,~~~\label{I313-def}\eea
and the four propagators to be cut are
\bean \WH\ell_1^2~~,~~ (\WH\ell_1-K_{12})^2~~,~~ \WH\ell_2^2~~,~~
(\WH\ell_2-K_{34})^2~,~~~\eean
where $K_{12}+K_{34}=0$. With this choice of cuts, in order to
completely understand the results, we also need to consider other
topologies besides double-box. The other contributions come from
those topologies by pinching one or more un-cut propagators of
double-box, as shown in Figure \ref{DBox-cut}. There are three
daughter topologies ${\cal I}_{213},{\cal I}_{312},{\cal I}_{303}$
by pinching one propagator. There are also three daughter topologies
${\cal I}_{212},{\cal I}_{302},{\cal I}_{203}$ by pinching two
propagators. Finally there is only one daughter topology ${\cal
I}_{202}$ by pinching three propagators. Among them, ${\cal
I}_{303}, {\cal I}_{203}, {\cal I}_{302}, {\cal I}_{202}$ are direct
products of two one-loop topologies, so their signatures are well
known (see Appendix \ref{B}). So in fact we need to examine the two
non-trivial topologies ${\cal I}_{212}$ and ${\cal I}_{213}$ (by symmetry
${\cal I}_{312}$ is equivalent to ${\cal I}_{213}$) together with
the mother topology ${\cal I}_{313}$. The integrands of these two
additional topologies are given by
\bea {\cal I}^{(a,b)}_{212} & = & {(2\WH\ell_1 \cdot T_1)^a
(2\WH\ell_2\cdot T_2)^b \over \WH\ell_1^2 (\WH\ell_1-K_{12})^2
\WH\ell_2^2 (\WH\ell_2-K_{34})^2 (\WH\ell_1+\WH\ell_2)^2}~,~~~\nn
{\cal I}^{(a,b)}_{213} & = &{(2\WH\ell_1 \cdot T_1)^a
(2\WH\ell_2\cdot T_2)^b \over \WH\ell_1^2 (\WH\ell_1-K_{12})^2
\WH\ell_2^2 (\WH\ell_2-K_4)^2(\WH\ell_2-K_{34})^2
(\WH\ell_1+\WH\ell_2)^2}~.~~~\label{I312I213I303}\eea
In the following sections, we will study ${\cal I}_{212}$, ${\cal
I}_{213}$ and ${\cal I}_{313}$ one by one. Our basic strategy
will be to integrate one loop momentum, $\W\ell_1$, first while
keeping $\W \ell_2$ arbitrary, and then to analyze the integration of
$\W\ell_2$ based on the previous results.
\section{The $\W \ell_1$-part integration ($n_1=2$)}
In this section, we perform the $\W\ell_1$ integration. Using the
standard method for one-loop amplitudes (reviewed in the previous
section as well as in Appendix \ref{B}), we get (see formula
\eref{gen-integrand})
\bea \Delta{\cal A}_{n_1 1 n_2}^{(a,b)} & = & \int d^{-2\eps} \mu_1
d^{-2\eps} \mu_2 d^4 \W\ell_2 \delta(\W\ell_2^2-\mu_2^2)
\delta( K_{L_2}^2-2K_{L_2}\cdot \W\ell_2){ (2\W\ell_2\cdot T_2)^b
\over \prod_{j=1}^{n_2-2} ((\W\ell_2-K_{2j})^2-\mu_2^2)}\nn
& & \int \Spaa{\la_1|d\la_1}\Spbb{\W\la_1|d\W\la_1}
{ (-)^{n_1-2} ((1-2z_1) K_{L_1}^2)^{a-n_1+2}\over
\Spab{\la_1|K_{L_1}|\W\la_1}^{a-n_1+3}}{
\Spab{\la_1|R_1|\W\la_1}^a\over \Spab{\la_1|W_1|\W\la_1}
\prod_{i=1}^{n_1-2}\Spab{\la_1|Q_{1i}|\W\la_1}}~,~~~\label{L1-delta}\eea
where various quantities are defined as
\bea R_1 & \equiv & T_1+{z_1 2K_{L_1}\cdot T_1\over
(1-2z_1)K_{L_1}^2} K_{L_1}~,~~~\nn
Q_{1i} & \equiv & K_{1i}+ {z_1(2 K_{L_1}\cdot K_{1i})-K_{1i}^2\over
(1-2z_1)K_{L_1}^2} K_{L_1}~,~~~\nn
W_1 & \equiv & \W\ell_2+ {(\W\ell_2^2-\mu_2^2)-2\mu_1\cdot
\mu_2+2z_1 \W\ell_2\cdot K_{L_1}\over (1-2z_1)K_{L_1}^2}
K_{L_1}~,~~~\label{L1-var}\eea
with $z_1={1-\sqrt{1-u_1}\over 2}$ and $u_1={4\mu_1^2\over
K_{L_1}^2}$. Note that here the left cut momentum $K_{L_1}=K_{12}$
is the same as the right cut momentum $K_{L_2}=K_{34}$ up to a sign;
however, we keep them independent so that the formulae can be applied
to more general situations. The quantity $W_1$ comes from the
mixed propagator $(\WH \ell_1+\WH \ell_2)^2$. The non-trivial
topologies ${\cal A}_{3 1 3}$, ${\cal A}_{3 1 2}$, ${\cal
A}_{2 1 3}$ and ${\cal A}_{2 1 2}$ are all covered by the formula
\eref{L1-delta}.
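As a simple algebraic consistency check of this parametrization, note that $z_1$ satisfies $z_1(1-z_1)=u_1/4$ and $1-2z_1=\sqrt{1-u_1}$; these identities are used implicitly whenever the factor $(1-2z_1)K_{L_1}^2$ appears. They can be verified symbolically (a minimal sketch using the sympy library; the symbol names are illustrative):

```python
import sympy as sp

u1 = sp.symbols('u1', positive=True)
z1 = (1 - sp.sqrt(1 - u1)) / 2  # z_1 = (1 - sqrt(1 - u_1))/2

# z_1 (1 - z_1) = u_1 / 4, and 1 - 2 z_1 = sqrt(1 - u_1)
assert sp.simplify(z1 * (1 - z1) - u1 / 4) == 0
assert sp.simplify((1 - 2 * z1) - sp.sqrt(1 - u1)) == 0
```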
Let us apply our general framework to the specific case $n_1=2$. The
general formula \eref{L1-delta} now becomes
\bea
\Delta{\cal A}_{n_1 1 n_2}^{(a,b)}\Big|_{n_1=2} & = & \int d^{-2\eps} \mu_1
d^{-2\eps} \mu_2 d^4 \W\ell_2 \delta(\W\ell_2^2-\mu_2^2)
\delta( K_{L_2}^2-2K_{L_2}\cdot \W\ell_2){ (2\W\ell_2\cdot T_2)^b
\over \prod_{j=1}^{n_2-2} ((\W\ell_2-K_{2j})^2-\mu_2^2)}\nn
& & \int \Spaa{\la_1|d\la_1}\Spbb{\W\la_1|d\W\la_1}
{ ((1-2z_1) K_{L_1}^2)^{a}\over
\Spab{\la_1|K_{L_1}|\W\la_1}^{a+1}}{ \Spab{\la_1|R_1|\W\la_1}^a\over
\Spab{\la_1|W_1|\W\la_1}}~.~~~\label{L1-delta-n1=2-0}\eea
The second line is nothing but the standard one-loop triangle
integration (see Appendix \ref{B}). When $a=0$, the integration gives
the signature of the triangle part. When $a\geq 1$, the integration
can be decomposed into both a triangle part and a bubble part. We
will evaluate the contributions from these two parts separately.
\subsection{The contribution to the triangle part}
{\bf The triangle signature:} Based on our general formula for the
standard one-loop triangle integration \eref{C3-to3}, the signature
of the triangle part is
\bea {\cal S}_{tri}\equiv {1\over
\sqrt{\Delta_{W_1,K_{L_1}}}}\ln\left({ W_1 \cdot K_{L_1}- \sqrt{
(W_1\cdot K_{L_1})^2 -W_1^2 K_{L_1}^2}\over W_1 \cdot K_{L_1}+
\sqrt{ (W_1\cdot K_{L_1})^2 -W_1^2 K_{L_1}^2}}\right)~.~~~ \eea
Imposing the cut conditions for $\W \ell_2$, i.e.,
$\delta(\W\ell_2^2-\mu_2^2)$ and $
\delta( K_{L_2}^2-2K_{L_2}\cdot \W\ell_2)$, we can simplify it to
\bea {\cal S}_{tri}&= &{1\over K_{L_1}^2 \sqrt{1-u_2}} \ln \left( {
(4\mu_1\cdot \mu_2+K_{L_1}^2)+\sqrt{(1-u_1)(1-u_2)} K_{L_1}^2\over
(4\mu_1\cdot \mu_2+K_{L_1}^2)-\sqrt{(1-u_1)(1-u_2)}
K_{L_1}^2}\right)={1\over t_2K_{L_1}^2 } \ln \Big( {s+t_1t_2\over
s-t_1t_2}\Big)~,~~~\label{3to3-sign} \eea
where we have introduced
\bea s= {4\mu_1\cdot \mu_2+K_{L_1}^2\over
K_{L_1}^2}~~,~~t_i=\sqrt{1-u_i}~~,~~u_i={4\mu_i^2\over
K_{L_i}^2}~~,~~i=1,2~.~~~ \label{combi}\eea
One can observe that the signature part does not depend on
$\W\ell_2$. This is an important feature which makes $\Delta{\cal
A}_{2 1 n_2}^{(a,b)}$ easier to treat.
~\\{\bf The coefficient ${\cal C}_{3\to 3}^{(a)}$:} Using
\eref{C3-to3}, the expression is
\bea {\cal C}_{3\to 3}^{(a)}&=&{(-)^a\over
a!\Delta_{W_1,K_{L_1}}^a}{d^a\over d\tau^a}\Big(\tau^2W_1^2+\tau(4
W_1^2(R_1\cdot K_{L_1})-4 (R_1\cdot W_1) (W_1\cdot K_{L_1}))+ R_1^2
\Delta_{W_1,K_{L_1}}\nn & &+ (2 R_1\cdot W_1)^2 K_{L_1}^2 +
(2R_1\cdot K_{L_1})^2 W_1^2 - (2R_1\cdot W_1)(2R_1\cdot K_{L_1})
(2W_1\cdot K_{L_1})\Big)^a\Big|_{\tau\to 0}~.~~~\eea
Again, using the cut conditions $\delta(\W\ell_2^2-\mu_2^2)$ and $
\delta( K_{L_2}^2-2K_{L_2}\cdot \W\ell_2)$, we can make the following replacement
\bean \W\ell_2\to {(1-2z_2) K_{L_2}^2\over
\Spab{\la_2|K_{L_2}|\W\la_2}}\la_2 \W\la_2+z_2 K_{L_2}={(1-2z_2)
K_{L_1}^2\over -\Spab{\la_2|K_{L_1}|\W\la_2}}\la_2 \W\la_2-z_2
K_{L_1}~,~~~\eean
where $z_2={1-t_2\over 2}$. Since all derivatives act on $\tau$
only, such a replacement will not affect the result. Some algebraic
manipulation shows that the coefficients of the different parts are
given by
\bea \tau^2&:&~~{ (s^2-t_1^2t_2^2)K_{L_1}^2\over 4 t_1^2}~,~~~\nn
\tau&:& {-t_2K_{L_1}^2 \over t_1\Spab{\la_2|K_{L_1}|\W\la_2}}\Big(
t_2
(K_{L_1}\cdot T_1)
\Spab{\la_2|K_{L_1}|\W\la_2}+s( -(K_{L_1}\cdot
T_1)\Spab{\la_2|K_{L_1}|\W\la_2}+K_{L_1}^2
\Spab{\la_2|T_1|\W\la_2})\Big)~,~~~\nn \tau^0 &: & {t_2^2
(K_{L_1}^2)^2\over \Spab{\la_2|K_{L_1}|\W\la_2}^2}
K_{L_1}^2\Spab{\la_2|(T_1+y_1 K_{L_1})|\W\la_2}\Spab{\la_2|(T_1+y_2
K_{L_1})|\W\la_2}~,~~~\label{n1=2-tau} \eea
with
\bea y_{1,2}={ -(2T_1\cdot K_{L_1})\pm \sqrt{(2T_1\cdot K_{L_1})^2-4
K_{L_1}^2 T_1^2}\over 2K_{L_1}^2}~.~~~\label{y1y2} \eea
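By construction, $y_1$ and $y_2$ are the two roots of the quadratic $K_{L_1}^2\, y^2 + (2T_1\cdot K_{L_1})\, y + T_1^2 = 0$, so the product $\Spab{\la_2|(T_1+y_1 K_{L_1})|\W\la_2}\Spab{\la_2|(T_1+y_2 K_{L_1})|\W\la_2}$ is just the factorized form of that quadratic. A quick symbolic check (a sketch using sympy; the scalar products are treated as independent symbols):

```python
import sympy as sp

# K2 = K_{L1}^2, TK = T1.K_{L1}, T1sq = T1^2, treated as independent symbols
K2, TK, T1sq = sp.symbols('K2 TK T1sq')
disc = sp.sqrt((2 * TK)**2 - 4 * K2 * T1sq)
y1 = (-2 * TK + disc) / (2 * K2)
y2 = (-2 * TK - disc) / (2 * K2)

# both y_1 and y_2 solve K2*y^2 + 2*TK*y + T1sq = 0
assert sp.simplify(K2 * y1**2 + 2 * TK * y1 + T1sq) == 0
assert sp.simplify(K2 * y2**2 + 2 * TK * y2 + T1sq) == 0
# Vieta: y1*y2 = T1^2 / K_{L1}^2
assert sp.simplify(y1 * y2 - T1sq / K2) == 0
```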
To get a non-zero contribution from ${d^a\over d\tau^a}
(\bullet)\Big|_{\tau\to 0}$, we only need to keep terms with the
power $\tau^a$. This means that terms with $\tau^2$ in \eref{n1=2-tau}
will always appear together with $\tau^0$ terms, therefore we can regroup
\bean \left\{\tau^2{ (s^2-t_1^2t_2^2)K_{L_1}^2\over 4
t_1^2}\right\}+\left\{{t_2^2 (K_{L_1}^2)^2\over
\Spab{\la_2|K_{L_1}|\W\la_2}^2} K_{L_1}^2\Spab{\la_2|(T_1+y_1
K_{L_1})|\W\la_2}\Spab{\la_2|(T_1+y_2 K_{L_1})|\W\la_2}\right\}
\eean
to
\bean \left\{\tau^2{ t_2(s-t_1t_2)(K_{L_1}^2)^2\over 2
t_1}{\Spab{\la_2|(T_1+y_1 K_{L_1})|\W\la_2}\over
\Spab{\la_2|K_{L_1}|\W\la_2}} \right\}+\left\{{
t_2(s+t_1t_2)(K_{L_1}^2)^2\over 2 t_1}{\Spab{\la_2|(T_1+y_2
K_{L_1})|\W\la_2}\over \Spab{\la_2|K_{L_1}|\W\la_2}}\right\}~.~~~
\eean
Thus we can write
\bean {\cal C}_{3\to 3}^{(a)} &=&{(-)^a (K_{L_1}^2)^a\over a!
(t_1t_2K_{L_1}^2)^a} {d^a\over d\tau^a} {\Spab{\la_2|{\cal
F}|\W\la_2}^a\over \Spab{\la_2|K_{L_1}|\W\la_2}^a}\Big |_{\tau\to
0}~,~~~\eean
where ${\cal F}$ is defined as
\bea {\cal F} & = & -\tau\left( t_2
{K_{L_1}\cdot T_1\over K_{L_1}^2} K_{L_1}+ s\Big(T_1- {(K_{L_1}\cdot T_1)\over K_{L_1}^2}
K_{L_1}\Big)\right)\nn & & +\tau^2 { s-t_1t_2\over 2 }\Big(T_1+y_1
K_{L_1}\Big)+ { s+t_1t_2\over 2 }\Big(T_1+y_2
K_{L_1}\Big)~.~~~\label{n1=2-F-def} \eea
Putting all results together, the triangle part becomes
\bea {\cal R}_{3\to 3}^{(a)} & = & \left({(-)^a(K_{L_1}^2)^a\over a!
(t_1t_2K_{L_1}^2)^a} {d^a\over d\tau^a} {\Spab{\la_2|{\cal
F}|\W\la_2}^a\over \Spab{\la_2|K_{L_1}|\W\la_2}^a}\Big|_{\tau\to
0}\right){1\over t_2K_{L_1}^2 } \ln \Big( {s+t_1t_2\over
s-t_1t_2}\Big)~.~~~\label{n1=2-R3-3} \eea
To do the $\W \ell_2$-part integration, it is more convenient to use
the above form before taking the derivative with respect to $\tau$.
\subsection{The contribution to the bubble part}
Again we use the results given in Appendix \ref{B}.

{\bf The ${\cal R}_{3\to 2}[i,m]$ term:} Using \eref{R-3-to-2}, the
typical term reducing the triangle topology to the bubble is
\bea
{\cal R}_{3\to 2}[i,m] & = & { (-)^{m+i} (K_{L_1}^2)^i \over
i!(m+1)\sqrt{\Delta(W_1,K_{L_1})}^{m+2i+2}}{d^i\over d\tau^i}\left\{\left((2
R_1\cdot P_2 -\tau\Spab{P_1|R_1|P_2})^{m+1}\right.
\right.\nn & & \left.
(-x_2 \Spab{P_2|R_1|P_1}-x_1 \tau^2 \Spab{P_1|R_1|P_2}+\tau ( x_2
(2R_1\cdot P_1) +x_1(2R_1\cdot P_2)))^i\right)\nn & & +(-)^m
\left((2 R_1\cdot P_1-\tau\Spab{P_2|R_1|P_1})^{m+1}\right. \nn & &
\left.\left. (-x_2\tau^2\Spab{P_2|R_1|P_1} -x_1 \Spab{P_1|R_1|P_2}
+\tau ( x_2 (2R_1\cdot P_1) +x_1(2R_1\cdot
P_2)))^i\right)\right\}\Big|_{\tau\to 0}~,~~~\label{R1-3-to-2} \eea
where two null momenta $P_1,~P_2$ are constructed as
$P_i=W_1+x_iK_{L_1}$, with
\bean x_1={s+t_1t_2\over 2t_1}~~,~~x_2={s-t_1t_2\over
2t_2}~.~~~\eean
Again, to get a non-zero contribution, $\Spab{P_1|R_1|P_2}$ and
$\Spab{P_2|R_1|P_1}$ should always appear in pairs. With a little
calculation, one can see
\bea \Spab{P_1|R_1|P_2}\Spab{P_2|R_1|P_1}={\cal T}_1{\cal T}_2~~,~~
{\cal T}_i= \left({t_2K_{L_1}^2\over
\Spab{\la_2|K_{L_1}|\W\la_2}}\right)\Spab{\la_2|(T_1+y_i
K_{L_1})|\W\la_2}~,~~~\label{n1=2=rep-T}\eea
where $y_1$ and $y_2$ are defined in \eref{y1y2}. Thus we can make
the following replacements
\bea \Spab{P_1|R_1|P_2}&\to & {\cal T}_1~~,~~
\Spab{P_2|R_1|P_1}\to {\cal T}_2~.~~~\eea
After such replacements we obtain
\bea
& & {\cal R}_{3\to 2}[i,m] = { (-)^{m+i} \over
(m+1)i!K_{L_1}^2 t_2^{i+1} t_1^{m+i+1}}{d^i\over
d\tau^i} \nn
& &\left\{ { \Spab{\la_2|T_1(-t_1-\tau t_1)+K_{L_1}(-\tau t_1 y_1-(1-t_1){K_{L_1}\cdot T_1\over
K_{L_1}^2}) |\W\la_2}^{m+1}\over \Spab{\la_2|K_{L_1}|\W\la_2}^{m+i+1}} \right. \nn
& & \times \Spab{\la_2|T_1 (-x_2 t_1-x_1 \tau^2 t_1-s\tau)+K_{L_1}(
-x_2 t_1 y_2-x_1 t_1 \tau^2 y_1+\tau (s-t_2){K_{L_1}\cdot T_1\over
K_{L_1}^2}) | \W\la_2}^{i}\nn
& & +(-)^m { \Spab{\la_2|T_1(-t_1-\tau t_1)+K_{L_1}(-\tau t_1 y_2+(1+t_1){K_{L_1}\cdot T_1\over
K_{L_1}^2}) |\W\la_2}^{m+1}\over \Spab{\la_2|K_{L_1}|\W\la_2}^{m+i+1}} \nn
& & \left.\times \Spab{\la_2|T_1 (-x_2 t_1\tau^2-x_1
t_1-s\tau)+K_{L_1}( -x_2 t_1\tau^2 y_2-x_1 t_1 y_1+\tau
(s-t_2){K_{L_1}\cdot T_1\over K_{L_1}^2}) |
\W\la_2}^{i}\right\}\Big|_{\tau\to 0}~.~~~ \eea
The above expression has the form
$\Spab{\la_2|\bullet|\W\la_2}^{m+1}\Spab{\la_2|\bullet|\W\la_2}^{i}$.
In order to use the results given in Appendix \ref{B}, we need to
rewrite it by using
\bean
(A)^{m+1}(B)^i= {i!\over (m+1+i)!}{d^{m+1}\over d \tau_1^{m+1}}(\tau_1 A+ B)^{m+1+i}\Big|_{\tau_1\to
0}~.~~~
\eean
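This identity simply uses the fact that the $(m+1)$-th $\tau_1$-derivative of $(\tau_1 A+B)^{m+1+i}$, evaluated at $\tau_1=0$, produces ${(m+1+i)!\over i!}A^{m+1}B^i$. It can be checked symbolically for small values of $m$ and $i$ (a sketch using sympy; the symbols are illustrative):

```python
import sympy as sp
from math import factorial

A, B, tau1 = sp.symbols('A B tau1')

def combine(m, i):
    # i!/(m+1+i)! * d^{m+1}/dtau1^{m+1} (tau1*A + B)^{m+1+i} at tau1 = 0
    d = sp.diff((tau1 * A + B)**(m + 1 + i), tau1, m + 1).subs(tau1, 0)
    return sp.Rational(factorial(i), factorial(m + 1 + i)) * d

# the combination reproduces A^{m+1} B^i for a few small (m, i)
for m in range(3):
    for i in range(3):
        assert sp.simplify(combine(m, i) - A**(m + 1) * B**i) == 0
```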
So finally we have
\bea {\cal R}_{3\to 2}[i,m] &=& { (-)^{m+i} \over
(m+1)(m+1+i)!K_{L_1}^2 t_2^{i+1}t_1^{m+i+1}}\left\{{d^i\over
d\tau^i}{d^{m+1}\over d\tau_1^{m+1}}\right.\nn & & \left.
{\Spab{\la_2|{\cal G}_2|\W\la_2}^{m+i+1}+(-)^m \Spab{\la_2|{\cal
G}_1|\W\la_2}^{m+i+1}\over
\Spab{\la_2|K_{L_1}|\W\la_2}^{m+i+1}}\right\}_{\tau\to 0,\tau_1\to
0}~,~~~\eea
where we have defined
\bea {\cal G}_1 & = & T_1 \left\{ -t_1\tau_1-{(s-t_1 t_2)\over 2}
\tau^2-{(s+t_1 t_2)\over 2}-\tau s-\tau \tau_1 t_1\right\}\nn
& & +K_{L_1}\left\{\tau_1 {K_{L_1}\cdot T_1\over K_{L_1}^2} (1+t_1)-
\tau^2 y_2{(s-t_1 t_2)\over 2}-{(s+t_1 t_2)\over 2} y_1+ \tau
{K_{L_1}\cdot T_1\over K_{L_1}^2}(s-t_2)-\tau \tau_1 t_1
y_2\right\}~,~~~ \nn {\cal G}_2 & = & T_1\left\{ -\tau_1 t_1-\tau
\tau_1 t_1-{(s-t_1 t_2)\over 2}-\tau^2 {(s+t_1 t_2)\over
2}-s\tau\right\}\nn
& & +K_{L_1}\left\{ [-(1-t_1) \tau_1+\tau (s-t_2)]{K_{L_1}\cdot
T_1\over K_{L_1}^2}-\tau \tau_1 t_1 y_1 -{(s-t_1 t_2)\over
2}y_2-\tau^2 {(s+t_1 t_2)\over 2}y_1\right\}~.~~~\label{cal-G1-G2}
\eea
\subsection{The result for $n_1=2$ after $\W \ell_1$-integration}
Collecting results from triangle part and bubble part we obtain
\bea
\Delta{\cal A}_{n_1 1 n_2}^{(a,b)}\Big|_{n_1=2} & = & \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2 \int d^4 \W\ell_2
\delta(\W\ell_2^2-\mu_2^2)
\delta( K_{L_2}^2-2K_{L_2}\cdot \W\ell_2){ (2\W\ell_2\cdot T_2)^b(t_1
K_{L_1}^2)^{a}
\over \prod_{j=1}^{n_2-2} ((\W\ell_2-K_{2j})^2-\mu_2^2)}\nn
& & \left\{{1\over K_{L_1}^2 t_2} \ln \left( { (s+ t_1 t_2)\over (s-t_1 t_2)}\right) {(-)^a (K_{L_1}^2)^a\over a! (t_1 t_2 K_{L_1}^2)^a} {d^a\over
d\tau^a} {\Spab{\la_2|{\cal F}|\W\la_2}^a\over
\Spab{\la_2|K_{L_1}|\W\la_2}^a}\Big|_{\tau\to
0}\right.~~~\label{n1=2-la1}\\
& &\left. +\sum_{i=0}^{a-1} { (-)^{a-1} \over (a-i)a!K_{L_1}^2
t_2^{i+1}t_1^{a}}{d^i\over d\tau^i}{d^{a-i}\over d\tau_1^{a-i}}
{\Spab{\la_2|{\cal G}_2|\W\la_2}^{a}+(-)^{a-1-i} \Spab{\la_2|{\cal
G}_1|\W\la_2}^{a}\over
\Spab{\la_2|K_{L_1}|\W\la_2}^{a}}\Big|_{\tau\to 0,\tau_1\to
0}\right\}~,~~~\nonumber \eea
where $s,t_1,t_2$ are defined in \eref{combi}, ${\cal F}$ in
\eref{n1=2-F-def} and ${\cal G}_1, {\cal G}_2$ in
\eref{cal-G1-G2}\footnote{Do not confuse the $t_2$ here with the
$t_2$-integration part of $\W\ell_2$ as reviewed in \eref{mea}.}.
The trick here is that instead of computing the operations
${d^a\over d\tau^a} (\bullet)\Big|_{\tau\to 0}$ and ${d^i\over
d\tau^i}{d^{a-i}\over d\tau_1^{a-i}} (\bullet)\Big|_{\tau\to
0,\tau_1\to 0}$ directly, we will first perform the $\W \ell_2$-part
integration.
For the $\W\ell_2$-integration, after the $t_2$-integration we are
left with the spinor integration given by\footnote{There is an overall
sign for the $t_2$-integration since momentum conservation forces
$K_{L_1}=-K_{L_2}$, i.e., $\Spab{\la_2|K_{L_2}|\W\la_2}<0$.}
\bea & &\Delta{\cal A}_{n_1 1 n_2}^{(a,b)}\Big|_{n_1=2} = \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2 \int \Spaa{\la_2|d\la_2}
\Spbb{\W\la_2|d\W\la_2}{ (-)^{n_2+1} \Spab{\la_2|R_2|\W\la_2}^{b}
\over \prod_{j=1}^{n_2-2} \Spab{\la_2|Q_{2j}|\W\la_2}
\Spab{\la_2|K_{L_2}|\W\la_2}^{2+b-(n_2-2)}} \nn & &
\left\{{(-)^a t_2^{b-(n_2-2)}(K_{L_1}^2)^{a+b-(n_2-2)}\over a! t_2^a} \ln \left( { (s+ t_1 t_2)\over (s-t_1 t_2)}\right) {d^a\over
d\tau^a} {\Spab{\la_2|{\cal F}|\W\la_2}^a\over
\Spab{\la_2|K_{L_1}|\W\la_2}^a}\Big|_{\tau\to
0}\right.~~~\label{n1=2-la2}\\
& &\left. +\sum_{i=0}^{a-1} { (-)^{a-1} (K_{L_1}^2)^{a+b-(n_2-2)}
t_2^{b-(n_2-2)} \over (a-i)a! t_2^{i}}{d^i\over
d\tau^i}{d^{a-i}\over d\tau_1^{a-i}} {\Spab{\la_2|{\cal
G}_2|\W\la_2}^{a}+(-)^{a-1-i} \Spab{\la_2|{\cal
G}_1|\W\la_2}^{a}\over
\Spab{\la_2|K_{L_1}|\W\la_2}^{a}}\Big|_{\tau\to 0,\tau_1\to
0}\right\}~,~~~\nonumber \eea
where we have defined
\bea R_2 \equiv T_2+{z_2 2K_{L_2}\cdot T_2\over (1-2z_2)K_{L_2}^2}
K_{L_2}~~,~~ Q_{2j} \equiv K_{2j}+ {z_2(2 K_{L_2}\cdot
K_{2j})-K_{2j}^2\over (1-2z_2)K_{L_2}^2} K_{L_2}~.~~~\label{L2-var}
\eea
\section{The integral bases of the ${\cal A}_{212}$ topology}
With the results of the previous section, we can now discuss the
integral bases of the ${\cal A}_{212}$ topology. To do so, we need
to finish the spinor integration given in \eref{n1=2-la2} with
$n_2=2$ and attempt to match the results to the bases. We will see
that there are only (dimensionally shifted) scalar bases.
\subsection{The $\la_2$-integration for the case $n_2=2$}
For the case $n_2=2$ the formula \eref{n1=2-la2} becomes
\bea
\Delta{\cal A}_{2 1 2}^{(a,b)}& = & \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2 \int \Spaa{\la_2|d\la_2}
\Spbb{\W\la_2|d\W\la_2}{ (-)\Spab{\la_2|R_2|\W\la_2}^{b}
\over
\Spab{\la_2|K_{L_2}|\W\la_2}^{2+b}} \nn & &
\left\{{(-)^a (K_{L_1}^2)^{a+b}t_2^{b}\over a! t_2^a} \ln \left( { (s+ t_1 t_2)\over (s-t_1 t_2)}\right) {d^a\over
d\tau^a} {\Spab{\la_2|{\cal F}|\W\la_2}^a\over
\Spab{\la_2|K_{L_1}|\W\la_2}^a}\Big|_{\tau\to
0}\right.~~~\label{n1=2-la2-2}\\
& &\left. +\sum_{i=0}^{a-1} { (-)^{a-1} (K_{L_1}^2)^{a+b} t_2^{b}
\over (a-i)a! t_2^{i}}{d^i\over d\tau^i}{d^{a-i}\over d\tau_1^{a-i}}
{\Spab{\la_2|{\cal G}_2|\W\la_2}^{a}+(-)^{a-1-i} \Spab{\la_2|{\cal
G}_1|\W\la_2}^{a}\over
\Spab{\la_2|K_{L_1}|\W\la_2}^{a}}\Big|_{\tau\to 0,\tau_1\to
0}\right\}~.~~~\nonumber \eea
For our momentum configuration, $K_{L_1}=-K_{L_2}$, thus we can
combine the denominators to get a simpler expression. The terms of
the integrand can be classified into two parts, which we evaluate one
by one.
~\\ {\bf First part:} The first part can be rewritten as
\bea
& & \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2 {(-)^{a+b+1} (K_{L_1}^2)^{a+b}t_2^{b}\over a! t_2^a} \ln \left( { (s+ t_1 t_2)\over (s-t_1 t_2)}\right) {d^a\over
d\tau^a}{ a!\over (a+b)!}{d^b\over
d\W\tau^b}\nn & & \int \Spaa{\la_2|d\la_2}
\Spbb{\W\la_2|d\W\la_2}{ \Spab{\la_2|\W\tau R_2+{\cal F}|\W\la_2}^{a+b}
\over
\Spab{\la_2|K_{L_2}|\W\la_2}^{a+b+2}}\Big|_{\tau\to
0,\W\tau\to 0}~.~~~ \eea
The second line is the standard one-loop bubble integration, thus we
can use the general formulae in Appendix \ref{B}.
~\\ {\bf Second part:} The second part can be rewritten as
\bea & & \int d^{-2\eps} \mu_1 d^{-2\eps} \mu_2 \sum_{i=0}^{a-1} {
(-)^{a+b} (K_{L_1}^2)^{a+b} t_2^{b} \over (a-i)(a+b)!
t_2^{i}}{d^b\over d\W\tau^b}{d^i\over d\tau^i}{d^{a-i}\over
d\tau_1^{a-i}}\nn & & \int \Spaa{\la_2|d\la_2}
\Spbb{\W\la_2|d\W\la_2} {\Spab{\la_2|\W \tau R_2+{\cal
G}_2|\W\la_2}^{a+b}+(-)^{a-1-i} \Spab{\la_2|\W \tau R_2+ {\cal
G}_1|\W\la_2}^{a+b}\over
\Spab{\la_2|K_{L_1}|\W\la_2}^{a+b+2}}\Big|_{\tau\to 0,\tau_1\to
0,\W\tau\to 0}~.~~~ \eea
The second line is again the one-loop bubble integration. After
finishing the integration over the $\la_2$-part, we can take the
derivatives and the limits $\tau\to 0,\tau_1\to 0,\W\tau\to 0$.
\subsection{The result}
Collecting all results together, we get an expression of the form
\bea & & \Delta{\cal A}_{2 1 2}^{(a,b)} = \int d^{-2\eps} \mu_1
d^{-2\eps} \mu_2\left\{ f_{212\to 202}^{(a,b)} {\cal
S}_{202}+f_{212\to 212}^{(a,b)}{\cal S}_{212} \right\}~,~~~
\label{A212-final-0} \eea
where we have defined
\bea {\cal S}_{202}= -t_1t_2~~,~~{\cal S}_{212}={1\over
K_{L_1}^2}\ln{\left({s+t_1t_2\over
s-t_1t_2}\right)}~.~~~\label{212-sgn} \eea
Recall from Appendix \ref{B} that the signature of the one-loop bubble
is $\int d^{-2\eps} \mu (-\sqrt{1-u})$, thus the term ${\cal
S}_{202}$ is the signature of the topology ${\cal A}_{202}$, as the
subscript indicates. For ${\cal S}_{212}$, since the factor
$\ln{\left({s+t_1t_2\over s-t_1t_2}\right)}$ cannot be factorized
into a form where the $\mu_1$-part and the $\mu_2$-part are decoupled,
it cannot belong to the topology ${\cal A}_{n_1 0 n_2}$. So it must be
the signature of the topology ${\cal A}_{212}$.
It is worth mentioning that in the form \eref{A212-final-0}, the
dependence on $a,b$ is completely encoded in the coefficients
$f_{212\to 202}^{(a,b)}$ and $f_{212\to 212}^{(a,b)}$, while the
signatures \eref{212-sgn} are universal. However, this does not mean
that the basis is just given by $a=b=0$. That could be true only if
the coefficients $f_{212\to 202}^{(a,b)}$ and $f_{212\to 212}^{(a,b)}$
satisfy the following two conditions: (1) they are polynomials
of $u_1, u_2$ and $s$; (2) they are rational functions of the external
momentum $K_{L_1}$. More discussion will be given shortly.
Having given the above general discussion, we now list the
coefficients for various $a,b$:
~\\ {\bf Coefficients $f_{212\to 212}$:} Using the expressions given in
Appendix \ref{B}, the analytic results for the first few levels of
$a+b$ are given by ~\\{\bf $\bullet$ $a+b=0,1$:}
\bea f_{212\to 212}^{(0,0)} & = & 1~~,~~f_{212\to 212}^{(1,0)}= {
T_1\cdot K_{L_1}}~~,~~f_{212\to 212}^{(0,1)}= { -T_2\cdot
K_{L_1}}~.~~~ \eea
~\\ {\bf $\bullet$ $a+b=2$:}
\bea f_{212\to 212}^{(1,1)} & = & {1\over 3} \left( s K_{L_1}^2
(T_1\cdot T_2) -(3+s) (K_{L_1}\cdot T_1) (K_{L_1}\cdot T_2)
\right)~,~~~\nn
f_{212\to 212}^{(2,0)} & = &{1\over 3} \left( (3+ (1-u_1))
(K_{L_1}\cdot T_1)^2 -(1-u_1) K_{L_1}^2 T_1^2\right)~,~~~ \nn
f_{212\to 212}^{(0,2)}&= & {1\over 3} \left( (3+ (1-u_2))
(K_{L_1}\cdot T_2)^2 -(1-u_2) K_{L_1}^2 T_2^2\right)~.~~~ \eea
~\\ {\bf $\bullet$ $a+b=3$:}
\bea f_{212\to 212}^{(1,2)} & = & {1\over 3} \left( -2s K_{L_1}^2
(K_{L_1}\cdot T_2) (T_1\cdot T_2) + (K_{L_1}\cdot T_1)( (3+2
s+(1-u_2)) (K_{L_1}\cdot T_2)^2\right.\nn & &\left. -(1-u_2)
K_{L_1}^2 T_2^2)\right)~,~~~\nn
f_{212\to 212}^{(0,3)} & = & -(1+(1-u_2)) (K_{L_1}\cdot
T_2)^3+(1-u_2) K_{L_1}^2 (K_{L_1} \cdot T_2) T_2^2~.~~~ \eea
~\\ {\bf $\bullet$ $a+b=4$:}
\bea & & f_{212\to 212}^{(2,2)} = {1\over 15}\left\{ 2(-s(10+3 s)+
(1-u_2)(1-u_1)) K_{L_1}^2 (K_{L_1}\cdot T_1)(K_{L_1}\cdot
T_2)(T_1\cdot T_2)\right. \nn & & + (K_{L_1}\cdot T_1)^2 ( (2
s(10+s)+5(3+(1-u_1))+(5+(1-u_1)) (1-u_2)) (K_{L_1}\cdot T_2)^2\nn &
& + (s^2-(5+ 2 (1-u_1)) (1-u_2)) K_{L_1}^2 T_2^2) + K_{L_1}^2 ((
s^2-(1-u_1)(5+2 (1-u_2))) (K_{L_1}\cdot T_2)^2 T_1^2\nn & &
\left.+K_{L_1}^2 ((3 s^2-(1-u_1)(1-u_2)) (T_1\cdot T_2)^2- (s^2-2
(1-u_1)(1-u_2)) T_1^2 T_2^2)) \right\}~.~~~\label{f212-212} \eea
~\\ {\bf Coefficients $f_{212\to 202}$:} ~\\ {$\bullet$ $a=0$ or
$b=0$:} From our derivation, it can easily be seen that when $a=0$
or $b=0$, the coefficient must be zero, i.e.,
\bea f_{212\to 202}^{(0,b)}=f_{212\to 202}^{(a,0)}=0~.~~~\eea
~\\ {$\bullet$ Non-zero results:}
\bea f_{212\to 202}^{(1,1)} & = & {2\over 3} \left( T_1\cdot T_2-{
(K_{L_1}\cdot T_1)(K_{L_1}\cdot T_2)\over K_{L_1}^2}\right)~,~~~\nn
f_{212\to 202}^{(1,2)} & = & { 4 (K_{L_1}\cdot T_2) ((K_{L_1}\cdot
T_1)(K_{L_1}\cdot T_2)- K_{L_1}^2(T_1\cdot T_2))\over 3
K_{L_1}^2}~,~~~ \nn
f_{212\to 202}^{(1,3)} & = & {2 ((K_{L_1}\cdot T_1)(K_{L_1}\cdot T_2)- K_{L_1}^2(T_1\cdot T_2))
( (5+(1-u_2)) (K_{L_1}\cdot T_2)^2-(1-u_2) K_{L_1}^2 T_2^2)
\over -5 K_{L_1}^2}~,~~~\nn
f_{212\to 202}^{(2,2)} & = & {2\over 15 K_{L_1}^2} \left\{ -2(10+3
s) K_{L_1}^2 (K_{L_1}\cdot T_1) (K_{L_1}\cdot T_2)(T_1 \cdot T_2)+
(K_{L_1}\cdot T_1)^2( 2(10+s) (K_{L_1}\cdot T_2)^2 \right. \nn & &
\left.+s K_{L_1}^2 T_2^2)+s K_{L_1}^2 ( (K_{L_1}\cdot T_2)^2
T_1^2+K_{L_1}^2 (3 (T_2\cdot T_1)^2- T_1^2 T_2^2)) \right\}~.~~~\eea
\subsection{Classification of integral bases}
Now we need to analyze the above results in order to determine the
integral bases. First, noticing that $f_{212\to 212}^{(a,b)}$ and
$f_{212\to 202}^{(a,b)}$ are polynomials of $T_1, T_2, \mu_1\cdot
\mu_2, \mu_1^2, \mu_2^2$ as well as rational functions of the external
momentum $K_{L_1}$, we can write them more explicitly as
\bea f_{212\to 212}^{(a,b)} & = & \sum_{\kappa_0,\kappa_1,\kappa_2}
f_{212\to 212; \mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)}
T_1^{\mu_1}...T_1^{\mu_a} T_2^{\nu_1}... T_2^{\nu_b}
(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0}~,~~~\nn
f_{212\to 202}^{(a,b)} & = & \sum_{\kappa_0,\kappa_1,\kappa_2}
f_{212\to 202; \mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)}
T_1^{\mu_1}...T_1^{\mu_a} T_2^{\nu_1}... T_2^{\nu_b}
(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0}~,~~~\label{f212-coeff-exp} \eea
where the tensor coefficients $f_{212\to 212;
\mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)}$ are rational functions
of the external momentum $K_{L_1}$ only. Substituting back, we get
\bea & & \Delta{\cal A}_{2 1 2}^{(a,b)}\nn & = &
\sum_{\kappa_0,\kappa_1,\kappa_2}f_{212\to 202;
\mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)} T_1^{\mu_1}...T_1^{\mu_a}
T_2^{\nu_1}... T_2^{\nu_b} \int d^{-2\eps} \mu_1 d^{-2\eps}
\mu_2(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0} {\cal S}_{202}\nn & +&
\sum_{\kappa_0,\kappa_1,\kappa_2}f_{212\to 212;
\mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)} T_1^{\mu_1}...T_1^{\mu_a}
T_2^{\nu_1}... T_2^{\nu_b} \int d^{-2\eps} \mu_1 d^{-2\eps}
\mu_2(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0} {\cal S}_{212}~.~~~ \label{A212-final-1} \eea
The above expansion leads us to define the following {\bf
dimensionally shifted bases}
\bea {\cal B}_{202}^{(0,0)}[\kappa_0,\kappa_1,\kappa_2] & \equiv &
\int d^{4-2\eps}\WH\ell_1\int d^{4-2\eps}\WH\ell_2
{(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0} \over \WH\ell_1^2 (\WH\ell_1-K_{L_1})^2
\WH\ell_2^2 (\WH\ell_2+K_{L_1})^2 }\label{B202-dim-basis} \eea
and
\bea {\cal B}_{212}^{(0,0)}[\kappa_0,\kappa_1,\kappa_2] & \equiv &
\int d^{4-2\eps}\WH\ell_1\int d^{4-2\eps}\WH\ell_2
{(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0} \over \WH\ell_1^2 (\WH\ell_1-K_{L_1})^2
\WH\ell_2^2 (\WH\ell_2+K_{L_1})^2
(\WH\ell_1+\WH\ell_2)^2}~.~~~\label{B212-dim-basis} \eea
An important observation is that in the definition of ${\cal
B}_{202}^{(0,0)}[\kappa_0,\kappa_1,\kappa_2]$, when $\kappa_0\neq
0$, we do have $\mu_1\cdot \mu_2$ in the numerator. Thus although
there is no mixed propagator in the denominator, it contains
information from the mother topology ${\cal A}_{212}$ where
$\WH\ell_1$ and $\WH\ell_2$ are mixed.
With the above definitions, we find the following reduction, hinted
at by the unitarity method\footnote{For some topologies, such as ${\cal
A}_{112}$, which are not detectable by our choice of unitarity
cuts, we cannot find the coefficients.}
\bea
{\cal A}_{2 1 2}^{(a,b)} & \to & \sum_{\kappa_0,\kappa_1,\kappa_2}f_{212\to 202; \mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)} T_1^{\mu_1}...T_1^{\mu_a} T_2^{\nu_1}...
T_2^{\nu_b}{\cal B}_{202}[\kappa_0,\kappa_1,\kappa_2] \nn
& & +\sum_{\kappa_0,\kappa_1,\kappa_2}f_{212\to 212;
\mu_1,...,\mu_{a};\nu_1,...,\nu_b}^{(a,b)} T_1^{\mu_1}...T_1^{\mu_a}
T_2^{\nu_1}... T_2^{\nu_b}{\cal
B}_{212}[\kappa_0,\kappa_1,\kappa_2]~.~~~ \label{A212-final-2} \eea
However, before claiming that the ${\cal
B}_{212}[\kappa_0,\kappa_1,\kappa_2]$ are bases of the topology
${\cal A}_{212}$ studied in this paper, we need to notice that in
general $T_i$ can have four independent choices in 4D, i.e.,
$e_i$, $i=1,2,3,4$, as the momentum basis for Lorentz vectors. So if
the bases have a non-trivial dependence on $T_i$ in the numerator, we
should be careful when identifying the bases. This happens for the
topologies ${\cal A}_{213}$ and ${\cal A}_{313}$. For the current
topology ${\cal A}_{212}$, however, the bases ${\cal
B}_{212}[\kappa_0,\kappa_1,\kappa_2]$ are {\bf scalar bases}, i.e.,
their numerators do not depend on any external momenta $T_i$.
Now we count the number of integral bases. For the pure 4D case, we
can take the limit $\mu_1^2,\mu_2^2,\mu_1\cdot \mu_2\to 0$, thus
there is only one scalar basis, with $\kappa_i=0$, $i=0,1,2$. In
\cite{Feng:2012bm} it was found that for the planar double-triangle
(i.e., the topology ${\cal A}_{212}$), the number of integrand bases
is $111$ under the renormalizability conditions in pure 4D. In
general $(4-2\eps)$ dimensions, if we impose the constraint
$\sum_{i=0,1,2}\kappa_i\leq 3$ (i.e., the sum of the powers of
$\ell_1,\ell_2$ in the numerator is less than or equal to $6$) for
well-behaved quantum field theories, the number of integral bases is
$20$.
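The count of $20$ can be reproduced by enumerating all non-negative integer triples $(\kappa_0,\kappa_1,\kappa_2)$ with $\kappa_0+\kappa_1+\kappa_2\leq 3$ (a minimal sketch in Python):

```python
from itertools import product

# all non-negative (kappa0, kappa1, kappa2) with kappa0+kappa1+kappa2 <= 3
triples = [k for k in product(range(4), repeat=3) if sum(k) <= 3]
print(len(triples))  # -> 20
```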
\section{The integral bases of the ${\cal A}_{213}$ topology}
Encouraged by the results of the previous section, in this section we
determine the integral bases of the ${\cal A}_{213}$ topology. As
will be shown shortly, new features appear.
\subsection{The $\la_2$-integration for the case $n_2=3$}
For $n_2=3$ the general formula \eref{n1=2-la2} becomes (for
simplicity, we will drop ``$\tau_i\to 0$'' from now on)
\bea
\Delta{\cal A}_{2 1 3}^{(a,b)} & = & \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2 \int \Spaa{\la_2|d\la_2}
\Spbb{\W\la_2|d\W\la_2}{ \Spab{\la_2|R_2|\W\la_2}^{b}
\over \Spab{\la_2|Q_{2}|\W\la_2}
\Spab{\la_2|K_{L_2}|\W\la_2}^{b+1}} \nn & &
\left\{{(-)^a t_2^{b-1}(K_{L_1}^2)^{a+b-1}\over a! t_2^a} \ln \left( { s+ t_1 t_2
\over s-t_1 t_2}\right) {d^a\over
d\tau^a} {\Spab{\la_2|{\cal F}|\W\la_2}^a\over
\Spab{\la_2|K_{L_1}|\W\la_2}^a}\right.~~~\label{n2=2-la2-0}\\
& &\left. +\sum_{i=0}^{a-1} { (-)^{a-1} (K_{L_1}^2)^{a+b-1}
t_2^{b-1} \over (a-i)a! t_2^{i}}{d^i\over d\tau^i}{d^{a-i}\over
d\tau_1^{a-i}} {\Spab{\la_2|{\cal G}_2|\W\la_2}^{a}+(-)^{a-1-i}
\Spab{\la_2|{\cal G}_1|\W\la_2}^{a}\over
\Spab{\la_2|K_{L_1}|\W\la_2}^{a}}\right\}~.~~~\nonumber \eea
Again, using $K_{L_1}=-K_{L_2}$, we can simplify the denominator.
There are again two parts that we need to compute.
~\\ {\bf First part:} The first part of the remaining integration can
be rewritten as
\bea
& & \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2{(-)^{a+b+1} t_2^{b-1}(K_{L_1}^2)^{a+b-1}\over (a+b)! t_2^a} \ln \left( { (s+ t_1 t_2)\over (s-t_1 t_2)}\right) {d^a\over
d\tau^a}{d^b\over d\W\tau^b}\nn & & \int \Spaa{\la_2|d\la_2}
\Spbb{\W\la_2|d\W\la_2}{ \Spab{\la_2|\W\tau R_2+{\cal F}|\W\la_2}^{a+b}
\over \Spab{\la_2|Q_{2}|\W\la_2}
\Spab{\la_2|K_{L_1}|\W\la_2}^{a+b+1}}~.~~~
\eea
The second line is the standard one-loop triangle integration. The
one-loop triangle can be reduced to a triangle part and a bubble
part, which can be interpreted as contributions from the topologies
${\cal A}_{213}$ and ${\cal A}_{212}$, respectively.
~\\ {\bf Second part:} The second part can be written as
\bea
& & \int
d^{-2\eps} \mu_1 d^{-2\eps} \mu_2\sum_{i=0}^{a-1} { (-)^{a+b} (K_{L_1}^2)^{a+b-1} t_2^{b-1} \over (a-i)(a+b)!
t_2^{i}}{d^i\over
d\tau^i}{d^{a-i}\over d\tau_1^{a-i}} {d^b \over d\W\tau^b}\nn
& &\int \Spaa{\la_2|d\la_2} \Spbb{\W\la_2|d\W\la_2}
{\Spab{\la_2|\W\tau R_2+{\cal G}_2|\W\la_2}^{a+b}+(-)^{a-1-i}
\Spab{\la_2|\W\tau R_2+ {\cal G}_1|\W\la_2}^{a+b}\over
\Spab{\la_2|Q_2|\W\la_2}\Spab{\la_2|K_{L_1}|\W\la_2}^{a+b+1}}~.~~~
\eea
The second line is again the standard triangle integration, which
contains contributions from the topologies ${\cal A}_{203}$ and
${\cal A}_{202}$.
\subsection{Overview of results}
Collecting all results together, we get an expression of the form
\bea & & \Delta{\cal A}_{2 1 3}^{(a,b)} = \int d^{-2\eps} \mu_1
d^{-2\eps} \mu_2\left\{ f_{213\to 213}^{(a,b)} {\cal
S}_{213}+f_{213\to 212}^{(a,b)}{\cal S}_{212} + f_{213\to
203}^{(a,b)}{\cal S}_{203} + f_{213\to 202}^{(a,b)}{\cal
S}_{202}\right\}~,~~~ \label{A213-final-0} \eea
where ${\cal S}_{202}$ and ${\cal S}_{212}$ have been defined in
\eref{212-sgn} and two new signatures are
\bea {\cal S}_{203}&=&{t_1\over 2\sqrt{(K_4\cdot
K_{L_1})^2-K_{L_1}^2 K_4^2}}\ln \left({K_4^2+K_4\cdot K_{L_1}-t_2
\sqrt{(K_4\cdot K_{L_1})^2-K_{L_1}^2 K_4^2}\over K_4^2+K_4\cdot
K_{L_1}+t_2 \sqrt{(K_4\cdot K_{L_1})^2-K_{L_1}^2 K_4^2}}
\right)~,~~~\nn
{\cal S}_{213}&=&{-\ln\left({s+t_1t_2\over s-t_1t_2}\right)\over
2t_2K_{L_1}^2\sqrt{(K_4\cdot K_{L_1})^2-K_{L_1}^2 K_4^2}}\ln
\left({K_4^2+K_4\cdot K_{L_1}-t_2 \sqrt{(K_4\cdot
K_{L_1})^2-K_{L_1}^2 K_4^2}\over K_4^2+K_4\cdot K_{L_1}+t_2
\sqrt{(K_4\cdot K_{L_1})^2-K_{L_1}^2 K_4^2}}
\right)~.~~~\label{313-sgn} \eea
A few remarks are in order for the expression \eref{A213-final-0}.
Firstly, it is easy to see that the signature ${\cal S}_{203}$ is the
direct product of the signatures of the one-loop bubble and the
one-loop triangle. Secondly, there are two logarithms in the
signature ${\cal S}_{213}$: one depends on both $\mu_1$ and $\mu_2$,
while the other depends only on $\mu_2$. Pictorially, the first
logarithm is related to the mixed propagator $(\ell_1+\ell_2)^2$,
while the second logarithm is related to the right-hand-side
sub-triangle.
Thirdly, all the dependence on $a,b$ is inside the coefficients $f$,
while the signatures are universal. However, unlike in the expression
\eref{A212-final-0}, where the coefficients $f$ are all rational
functions of external momenta and polynomials of $s, u_1, u_2$, here
we find that the coefficients $f$ are in general not polynomials of
$s,u_1, u_2$. In fact, the factor $t_2=\sqrt{1-u_2}$ will appear in
denominators. Such behavior cannot be explained by dimensionally
shifted bases. Instead, we must regard it as the signature of a new
integral basis. Because of this complexity, when talking about {\bf
the signature of a basis for the ${\cal A}_{213}$ topology}, we should
treat all the coefficients together, in the list $\{ f_{213\to
213}^{(a,b)},f_{213\to 212}^{(a,b)},f_{213\to 203}^{(a,b)},f_{213\to
202}^{(a,b)}\}$, as a single object. More explicitly, we will write
the expression \eref{A213-final-0} as
\bea \Delta {\cal A}_{213}^{(a,b)} \equiv \{ f_{213\to
213}^{(a,b)},f_{213\to 212}^{(a,b)}, f_{213\to
203}^{(a,b)},f_{213\to 202}^{(a,b)}\}~.~~~\label{A213-list}\eea
The reduction of $\Delta {\cal A}_{213}^{(a,b)}$ amounts to writing it
as the linear combination $\sum_{i} C_i \{ a_{i1}, a_{i2},
a_{i3},a_{i4} \}$, where $\{ a_{i1}, a_{i2}, a_{i3},a_{i4} \}$ is the
signature of the $i$-th basis. In this notation, we can rewrite the
signatures of the previously discussed bases as
\bea \Delta {\cal A}_{212}^{(0,0)}= \{0,1,0,0\}~~,~~\Delta {\cal
A}_{203}^{(0,0)} =\{0,0,1,0\}~~,~~\Delta{\cal
A}_{202}^{(0,0)}=\{0,0,0,1\}~.~~~\label{sig-list}\eea
With these general remarks in mind, we now present explicit results.
\subsection{The result of $a=0$}
We list the results for $a=0$ with various $b$. Since $a=0$
implies $f_{213\to 203}^{(0,b)}=0$ and $f_{213\to 202}^{(0,b)}=0$,
we will focus on the first two coefficients only.
~\\ {\bf The case $b=0$:} It is easy to see that
\bea \Delta {\cal A}_{213}^{(0,0)}=\{1, 0,0,0\}~.~~~
\label{213-a0b0-delta} \eea
Since it cannot be written as a linear combination of the three bases
in \eref{sig-list}, it must indicate a new integral basis. In other
words, ${\cal A}_{213}^{(0,0)}$ is an integral basis with signature
\eref{213-a0b0-delta}.
~\\ {\bf The case $b=1$:} The result is
\bea \Delta {\cal A}_{213}^{(0,1)} = & & \{{( K_4^2+K_4\cdot
K_{L_1}) (K_4\cdot T_2) K_{L_1}^2- K_4^2( K_4\cdot K_{L_1}+
K_{L_1}^2) (K_{L_1}\cdot T_2)\over K_4^2 K_{L_1}^2- (K_4\cdot
K_{L_1})^2},\nn & & { (K_4\cdot K_{L_1})(T_2\cdot K_{L_1})-K_{L_1}^2
(K_4\cdot T_2) \over K_4^2 K_{L_1}^2- (K_4\cdot
K_{L_1})^2},0,0\}~.~~~ \label{213-a0b1-delta} \eea
Thus, at least for our choice of unitarity cuts, $\Delta{\cal
A}_{213}^{(0,1)}$ can be written as a linear combination of the
signatures $\Delta{\cal A}_{213}^{(0,0)}$ and $\Delta{\cal
A}_{212}^{(0,0)}$, with coefficients that are rational functions of the
external momenta.
~\\ {\bf For other $b$'s:} We have calculated cases $b=2$ and
$b=3$. Again we find that $\Delta {\cal A}_{213}^{(0,b)}$ can be
written as linear combinations of signatures $\Delta{\cal
A}_{213}^{(0,0)}$ and $\Delta{\cal A}_{212}^{(0,0)}$, with
coefficients being rational functions of external momenta and
polynomials of $s, u_1, u_2$. The explicit expressions are too long
to write down here. When $s, u_1, u_2$ appear in the results, we
should include dimensional shifted bases too.
\subsection{The result of $a=1$}
In this case a non-trivial phenomenon appears, and we will show how
to explain it.
~\\ {\bf The case $b=0$:} Calculation yields
\bea f_{213\to 213}^{(1,0)} & = & {s\over t_2^2} f_{213\to
213;s^1}^{(1,0)} + f_{213\to 213;s^0}^{(1,0)}~,~~~\eea
where
\bean f_{213\to 213;s^1}^{(1,0)} & = & {( K_4^2+K_4\cdot K_{L_1})
[(K_4\cdot T_1) K_{L_1}^2-( K_4\cdot K_{L_1})( K_{L_1}\cdot T_1)]
\over (K_4^2 K_{L_1}^2- (K_4\cdot K_{L_1})^2)}~,~~~\nn
f_{213\to 213;s^0}^{(1,0)} & = & K_{L_1}\cdot T_1~.~~~\eean
Although the $s^0$-part can be explained by the signature $\Delta
{\cal A}_{213}^{(0,0)}$, the $s^1$-part with factor ${s\over t_2^2}$
cannot, because of the appearance of $t_2^2=(1-u_2)$ in the
denominator. Thus the factor ${s\over t_2^2}$ indicates a new integral
basis.
Besides $f_{213\to 213}^{(1,0)}$, the other coefficients are given by
\bea f_{213\to 212}^{(1,0)}& = &{ s[- K_{L_1}^2 (K_4\cdot T_1)
+(K_4\cdot K_{L_1})(T_1\cdot K_{L_1})]\over t_2^2(-K_4^2 K_{L_1}^2+
(K_4\cdot K_{L_1})^2)}~,~~~\nn
f_{213\to 203}^{(1,0)} & = & { -2(K_4^2+K_4\cdot K_{L_1})
[ K_{L_1}^2 (K_4\cdot T_1)
-(K_4\cdot K_{L_1})(T_1\cdot K_{L_1})]\over t_2^2 K_{L_1}^2 (K_4^2
K_{L_1}^2- (K_4\cdot K_{L_1})^2)}~,~~~\nn
f_{213\to 202}^{(1,0)}& = &{ 2
[- K_{L_1}^2 (K_4\cdot T_1)
+(K_4\cdot K_{L_1})(T_1\cdot K_{L_1})]\over t_2^2 K_{L_1}^2 (-K_4^2
K_{L_1}^2+ (K_4\cdot K_{L_1})^2)}~.~~~ \eea
Again, because of the factor ${1\over t_2^2}$, they cannot be
explained by the signatures \eref{sig-list}. Thus we have the first
non-trivial example of a signature where all four components are
non-zero
\bea \Delta{\cal A}_{213}^{(1,0)}= \{f_{213\to
213}^{(1,0)},f_{213\to 212}^{(1,0)},f_{213\to 203}^{(1,0)},f_{213\to
202}^{(1,0)}\}~.~~~\label{A213a1b0-sig}\eea
~\\ {\bf The case $b=1$:} All coefficients $\{f_{213\to
213}^{(1,1)},f_{213\to 212}^{(1,1)},f_{213\to 203}^{(1,1)},f_{213\to
202}^{(1,1)}\}$ have ${1\over t_2^2}$ dependence. However, all these
${1\over t_2^2}$ factors can be absorbed into $\Delta {\cal
A}_{213}^{(1,0)}$. More explicitly, we found the following
decomposition
\bea \Delta {\cal A}_{213}^{(1,1)} & = & a_{11\to 00}\Delta {\cal
A}_{213}^{(0,0)} + a_{11\to 10} \Delta {\cal
A}_{213}^{(1,0)}+b_{11\to 00}\Delta {\cal A}_{212}^{(0,0)}+ d_{11\to
00}\Delta {\cal A}_{202}^{(0,0)}~,~~~\label{A213-a1b1-delta} \eea
where
\bean a_{11\to 10} & = & { (1-u_2)K_{L_1}^2 ( (K_4\cdot
K_{L_1})^2-K_4^2 K_{L_1}^2) \Sigma_1+(K_4^2+ K_4\cdot
K_{L_1})\Sigma_2 \over 2 (K_4^2+ K_4\cdot K_{L_1}) ( (K_4\cdot
K_{L_1})^2-K_4^2 K_{L_1}^2)[(K_4\cdot T_1)K_{L_1}^2 -(K_4\cdot
K_{L_1})(T_1\cdot K_{L_1})]}~,~~~\nn
a_{11\to 00} & = & { (K_{L_1}\cdot T_1) [-(K_4\cdot T_2)K_{L_1}^2
(K_4^2+ K_4\cdot K_{L_1})+ K_4^2 (K_4\cdot K_{L_1}+K_{L_1}^2)(
K_{L_1}\cdot T_2)] \over ( (K_4\cdot K_{L_1})^2-K_4^2
K_{L_1}^2)}-(K_{L_1}\cdot T_1)a_{11\to 10}~,~~~\eean
\bean b_{11\to 00} & = & {1\over 2 (K_4\cdot K_{L_1}+K_4^2) (
-(K_4\cdot K_{L_1})^2+K_4^2 K_{L_1}^2)}\left\{ 2 (K_4\cdot
K_{L_1}+K_4^2) K_{L_1}\cdot T_1\right. \nn
& & ( -K_{L_1}^2( K_4\cdot T_2) +(K_4\cdot K_{L_1})( K_{L_1}\cdot
T_2))+ s K_{L_1}^2 ( K_{L_1}\cdot T_1 ( K_4\cdot K_{L_1} (K_4\cdot
T_2) -K_{4}^2 (K_{L_1}\cdot T_2))\nn & & \left.+K_4\cdot T_1 (
-K_{L_1}^2 (K_4\cdot T_2) + K_4\cdot K_{L_1} (K_{L_1}\cdot T_2)+
T_1\cdot T_2 ( -(K_4\cdot K_{L_1})^2+K_4^2
K_{L_1}^2))\right\}~,~~~\nn
d_{11\to 00} & = & {K_{L_1}\cdot T_1 (K_4\cdot K_{L_1}(K_4\cdot T_2)
-K_4^2 (K_{L_1}\cdot T_2))+ K_4\cdot T_1 (- K_{L_1}^2 (K_4\cdot T_2)
+K_4\cdot K_{L_1} (K_{L_1}\cdot T_2)) \over (K_4\cdot K_{L_1}+K_4^2)
( -(K_4\cdot K_{L_1})^2+K_4^2 K_{L_1}^2)}\nn & & + {T_1\cdot T_2
\over (K_4\cdot K_{L_1}+K_4^2)}~,~~~ \eean
with
\bean \Sigma_1&=&(K_{L_1}\cdot T_1)( -(K_4\cdot K_{L_1}) (K_4\cdot
T_2) +K_4^2 (K_{L_1}\cdot T_2))+ K_4\cdot T_1 ( (K_4\cdot
T_2)K_{L_1}^2 \nn & &-(K_4\cdot K_{L_1})(T_2\cdot K_{L_1})) + (
(K_4\cdot K_{L_1})^2-K_4^2 K_{L_1}^2) T_1\cdot T_2~,~~~\nn
\Sigma_2 & = & (K_4^2)^2 K_{L_1}^2( -(K_{L_1}\cdot T_1)(K_{L_1}\cdot T_2)
+K_{L_1}^2 (T_1\cdot T_2)) + (K_4\cdot K_{L_1}) K_{L_1}^2 [ K_4\cdot T_1
(-3 (K_4\cdot T_2) K_{L_1}^2\nn & & + (K_4\cdot K_{L_1}) (K_{L_1}\cdot T_2))
+ K_4\cdot K_{L_1} (3 (K_4\cdot T_2) (K_{L_1}\cdot T_1)
-(K_4\cdot K_{L_1}) (T_1\cdot T_2))]\nn & & + K_4^2( K_4\cdot T_1 K_{L_1}^2
(-3 K_4\cdot T_2 K_{L_1}^2+(3 K_4\cdot K_{L_1} +2 K_{L_1}^2) K_{L_1}\cdot T_2)
+ (K_4\cdot K_{L_1})\nn & & ( (K_{L_1}\cdot T_1)(3 K_{L_1}^2 (K_4\cdot T_2-K_{L_1}\cdot T_2)
- 2 (K_4\cdot K_{L_1}) (K_{L_1}\cdot T_2))\nn
& &+ K_{L_1}^2(-K_4\cdot K_{L_1}+K_{L_1}^2) T_1\cdot T_2)~.~~~ \eean
Since the above four coefficients are rational functions of external
momenta and polynomials of $u_2$, we can claim that ${\cal
A}_{213}^{(1,1)}$ is not a basis, at least for our choice of
unitarity cuts.
There are some details worth remarking on. The coefficient $a_{11\to
00}$ is a polynomial of $T_1$ and $T_2$ of degree one, while the
coefficient $a_{11\to 10}$ is a polynomial of $T_2$ of degree one
but a rational function of $T_1$. More precisely, both the
denominator and the numerator of $a_{11\to 10}$ are polynomials of
$T_1$ of degree one. This goes against intuition, since $T_1$
should not appear in the denominator. However, the subtlety is
resolved once one notices that the first component $f_{213\to
213;s^1}^{(1,0)}$ of $\Delta {\cal A}_{213}^{(1,0)}$ contains
exactly the same factor $ [-(K_4\cdot T_1) K_{L_1}^2+( K_4\cdot
K_{L_1})( K_{L_1}\cdot T_1)]$ in its numerator, so it cancels the
same factor in the denominator of $a_{11\to 10}$.
~\\ {\bf The case $b=2$:} The full expression is too long to write
down, so we present only its general features. Again, although all
coefficients contain the factor ${1\over t_2^2}$, the whole result can
be expanded like \eref{A213-a1b1-delta}, with coefficients that are
rational functions of external momenta and polynomials of $s,u_1,
u_2$. Thus ${\cal A}_{213}^{(1,2)}$ is not a new integral basis.
\subsection{The result of $a=2$}
We encounter a similar phenomenon as in the case $a=1$. To avoid
tedious expressions, we present only the main features.
~\\ {\bf The case $b=0$:} The coefficient $f_{213\to 213}^{(2,0)}$ has the following form
\bea
f_{213\to 213}^{(2,0)}= {s^2\over t_2^4} g_{0;1}+ {s^2\over t_2^2} g_{0;2}+
{s\over t_2^2} g_{0;3}+{1\over t_2^2} g_{0;4}+ g_{0;5}~,~~~
\eea
where the $g_{0;i}$'s are polynomials of $u_1$ and rational functions of
external momenta (similarly for all other coefficients such as $h,i,j$
in this subsection). The $g_{0;1}$-part and the
$g_{0;4}$-part cannot be accounted for by the signatures $\Delta{\cal
A}_{213}^{(0,0)}$ and $\Delta{\cal A}_{213}^{(1,0)}$, so we should
take ${\cal A}_{213}^{(2,0)}$ as a new integral basis. For the other
coefficients, we have
\bea
f_{213\to 212}^{(2,0)} & = & {s^2\over t_2^4} h_{0;1}+ {s\over t_2^2} h_{0;2}+
{1\over t_2^2} h_{0;3}~,~~~ \nn
f_{213\to 203}^{(2,0)} & = & {s\over t_2^4} i_{0;1}+ {s\over t_2^2} i_{0;2}+
{1\over t_2^2} i_{0;3}~,~~~\nn
f_{213\to 202}^{(2,0)}& = & {s\over t_2^4} j_{0;1}+
{1\over t_2^2} j_{0;2}~.~~~
\eea
The signature of the new basis can be represented by
\bea \Delta {\cal A}_{213}^{(2,0)}= \{ f_{213\to 213}^{(2,0)},
f_{213\to 212}^{(2,0)}, f_{213\to 203}^{(2,0)}, f_{213\to
202}^{(2,0)}\}~.~~~\eea
~\\ {\bf The case $b=1$:} The behaviors of the various coefficients are
\bea
f_{213\to 213}^{(2,1)} &= & {s^2\over t_2^4} g_{1;1}+ {s^2( g_{1;2;0}+t_2^2 g_{1;2;1})\over t_2^2} +
{s (g_{1;3;0}+t_2^2 g_{1;3;1})\over t_2^2} +{1\over t_2^2} g_{1;4}+ g_{1;5}~,~~~\nn
f_{213\to 212}^{(2,1)} & = & {s^2 (h_{1;1;0}+t_2^2 h_{1;1;1})\over t_2^4}+ {s\over t_2^2} h_{1;2}+
{1\over t_2^2} (h_{1;3;0}+t_2^2 h_{1;3;1})~,~~~ \nn
f_{213\to 203}^{(2,1)} & = & {s\over t_2^4} i_{1;1}+ {s\over t_2^2} i_{1;2}+
{1\over t_2^2} ( i_{1;3;0}+t_2^2 i_{1;3;1})~,~~~\nn
f_{213\to 202}^{(2,1)}& = & {s(j_{1;1;0}+t_2^2 j_{1;1;1})\over t_2^4} +
{1\over t_2^2} j_{1;2}~,~~~\eea
where the integer $n$ in $g_{1;m;n}$ denotes the power of $t_2^2$, and similarly for $h,~i,~j$.
We found the following expansion
\bea \Delta {\cal A}_{213}^{(2,1)} & = & a_{21\to 20}\Delta {\cal
A}_{213}^{(2,0)} + a_{21\to 10} \Delta {\cal A}_{213}^{(1,0)}+
a_{21\to 00}\Delta {\cal A}_{213}^{(0,0)}+b_{21\to 00}\Delta {\cal
A}_{212}^{(0,0)}\nn & & + c_{21\to 00}\Delta {\cal A}_{203}^{(0,0)}+
d_{21\to 00}\Delta {\cal
A}_{202}^{(0,0)}~,~~~\label{A213-a2b1-delta} \eea
where coefficients are rational functions of external momenta and
polynomials of $s,u_1, u_2$. Thus ${\cal A}_{213}^{(2,1)}$ is not a
new integral basis.
\subsection{Classification of integral bases}
With the above results, we can classify the integral bases of the ${\cal
A}_{213}$ topology. Having shown that coefficients such as $a_{21\to
00}$ are polynomials of $\mu_1\cdot \mu_2, \mu_1^2,
\mu_2^2$ and rational functions of external momenta, we can expand
them, for example
\bea a_{21\to 00} & = & \sum_{\kappa_0,\kappa_1,\kappa_2} a_{21\to
00}^{(a,b)} (\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0}~,~~~ \eea
where the tensor coefficients $a$ are rational functions of external
momenta. This expansion leads us to define the following dimensional
shifted bases
\bea {\cal B}_{213;a}[\kappa_0,\kappa_1,\kappa_2;T_1] & \equiv &
\int d^{4-2\eps}\WH\ell_1\int d^{4-2\eps}\WH\ell_2
{(\mu_1^2)^{\kappa_1} (\mu_2^2)^{\kappa_2} (\mu_1\cdot
\mu_2)^{\kappa_0} (\WH \ell_1\cdot T_1)^a \over \WH\ell_1^2
(\WH\ell_1-K_{L_1})^2 \WH\ell_2^2 (\WH \ell_2-K_4)^2
(\WH\ell_2+K_{L_1})^2(\WH\ell_1+\WH\ell_2)^2}~.~~~\label{B213-dim-basis-0}
\eea
Unlike the scalar basis ${\cal B}_{212}[\kappa_0,\kappa_1,\kappa_2]
$ for the ${\cal A}_{212}$ topology, the basis ${\cal
B}_{213;a}[\kappa_0,\kappa_1,\kappa_2] $ depends on $T_1$
explicitly. Since $T_1$ is a 4-dimensional Lorentz vector, there are
four independent choices, and we need to clarify whether different
choices of $T_1$ give new independent bases.
To discuss this problem we expand $T_1=\sum_{i=1}^4 x_i e_i$. The momentum
bases $e_i$ are constructed as follows. Using $K_4, K_{L_1}$ we
can construct two null momenta $P_{i}= K_4+w_i K_{L_1}$ with $w_i=
{-K_4\cdot K_{L_1}\pm \sqrt{(K_{L_1}\cdot K_4)^2- K_4^2
K_{L_1}^2}\over K_{L_1}^2}$, thus the momentum bases can be taken as
\bea e_1=
K_4~~,~~e_2=K_{L_1}~~,~~e_3=\ket{P_1}\bket{P_2}~,~~e_4=\ket{P_2}\bket{P_1}~.~~~\label{e-basis}
\eea
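As a quick numerical sanity check of this construction, one can verify that $P_i = K_4 + w_i K_{L_1}$ are indeed null for the quoted roots $w_i$. A small sketch with hypothetical (timelike) momentum values, metric signature $(+,-,-,-)$:

```python
import math

def mdot(p, q):
    # Minkowski product with signature (+,-,-,-)
    return p[0]*q[0] - p[1]*q[1] - p[2]*q[2] - p[3]*q[3]

# Hypothetical massive momenta; both timelike, so the square root below is real.
K4  = (3.0, 0.4, 0.2, 0.1)
KL1 = (2.0, 0.1, 0.3, 0.5)

# (K4 + w KL1)^2 = 0  <=>  KL1^2 w^2 + 2 (K4.KL1) w + K4^2 = 0
root = math.sqrt(mdot(K4, KL1)**2 - mdot(K4, K4)*mdot(KL1, KL1))
w1 = (-mdot(K4, KL1) + root) / mdot(KL1, KL1)
w2 = (-mdot(K4, KL1) - root) / mdot(KL1, KL1)
P1 = [K4[mu] + w1*KL1[mu] for mu in range(4)]
P2 = [K4[mu] + w2*KL1[mu] for mu in range(4)]
```

Both $P_1^2$ and $P_2^2$ come out zero to machine precision, as required for the spinor construction of $e_3, e_4$.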
~\\{\bf The case $a=0$:}
For $a=0$, since $T_1$ does not appear, only the {\bf scalar bases}
exist. Thus the independent integral bases are ${\cal
B}_{213;0}[\kappa_0,\kappa_1,\kappa_2]$.
~\\{\bf The case $a=1$:}
We set $T_1=e_i$ for $i=1,2,3,4$ in
the expressions $f_{213\to 213}^{(1,0)}$, $f_{213\to 212}^{(1,0)}$,
$f_{213\to 203}^{(1,0)}$, $f_{213\to 202}^{(1,0)}$, and found that:
\begin{itemize}
\item (1) For $T_1=e_3$ or $T_1=e_4$ we have
\bea \{ f_{213\to 213}^{(1,0)},f_{213\to 212}^{(1,0)},f_{213\to 203}^{(1,0)},f_{213\to 202}^{(1,0)}\}
=\{0,0,0,0\}~.~~~ \eea
It can be shown that $T_1=e_{3,4}$ are spurious and the
integrations are zero.
\item (2) For $T_1=K_{L_1}$, we find
\bea \{ f_{213\to 213}^{(1,0)},f_{213\to 212}^{(1,0)},f_{213\to 203}^{(1,0)},f_{213\to 202}^{(1,0)}\}
=\{K_{L_1}^2,0,0,0\}~.~~~ \eea
It is, in fact, equivalent to the basis ${\cal
B}_{213;0}[\kappa_0,\kappa_1,\kappa_2]$ and does not give a new
integral basis.
\item (3) For $T_1=K_4$, we find
\bea
& & \{ f_{213\to 213}^{(1,0)},f_{213\to 212}^{(1,0)},f_{213\to 203}^{(1,0)},f_{213\to 202}^{(1,0)}\}
\nn
& = &\{{-s(K_4^2+ K_4\cdot K_{L_1})+t_2^2 K_4\cdot K_{L_1}\over
t_2^2 },{s\over t_2^2}, -{2(K_4^2+ K_4\cdot K_{L_1})\over t_2^2
K_{L_1}^2} ,{2\over t_2^2 K_{L_1}^2}\}~,~~~\label{A213-T1=K4} \eea
which is the true new integral basis.
\end{itemize}
~\\{\sl Conclusion:} For $a=1$, the integral basis is given by
${\cal B}_{213;1}[\kappa_0,\kappa_1,\kappa_2;K_4]$.
~\\{\bf The case $a=2$:}
There are ten possible combinations $(\WH \ell_1\cdot e_i)(\WH \ell_1\cdot e_j)$. With the
explicit result we found that
\begin{itemize}
\item(1) For the following six combinations
\bea
(e_i,e_j)=(e_3,e_3)~,~(e_4,e_4)~,~(e_1,e_3)~,~(e_2,e_3)~,~(e_1,e_4)~,~(e_2,e_4)~,~~~
\eea
the coefficients are $\{0,0,0,0\}$.
In fact, integrations for these six cases are zero.
\item(2) For $(e_i,e_j)=(e_2,e_2)$ the list of coefficients is $
\{ 2 (K_{L_1}^2)^2,0,0,0\}$. It is equivalent to the basis ${\cal
B}_{213;0}[\kappa_0,\kappa_1,\kappa_2]$ and therefore does not give a
new integral basis.
\item(3) For $(e_i,e_j)=(e_1,e_2)$ the list of coefficients is
\bea
& &
2K_{L_1}^2\{{-s(K_4^2+ K_4\cdot K_{L_1})+t_2^2 K_4\cdot K_{L_1}\over t_2^2 },{s\over t_2^2},
-{2(K_4^2+ K_4\cdot K_{L_1})\over t_2^2 K_{L_1}^2} ,{2\over t_2^2 K_{L_1}^2}\}~,~~~
\eea
which is proportional to \eref{A213-T1=K4} by a factor $2 K_{L_1}^2$. Therefore it can be reduced to the
basis ${\cal B}_{213;1}[\kappa_0,\kappa_1,\kappa_2]$, and does not give a new integral basis.
\item(4) For $(e_i,e_j)=(e_1,e_1)$ and $(e_i,e_j)=(e_3,e_4)$ the list is non-trivial. However,
it can be checked that
\bea
& & \{ f_{213\to 213}^{(2,0)},f_{213\to 212}^{(2,0)},f_{213\to 203}^{(2,0)},f_{213\to 202}^{(2,0)}\}|_{(e_i,e_j)=(e_1,e_1)}\nn
& = &
\{ f_{213\to 213}^{(2,0)},f_{213\to 212}^{(2,0)},f_{213\to 203}^{(2,0)},f_{213\to 202}^{(2,0)}\}|_{(e_i,e_j)=(e_3,e_4)}\nn
& & + 2 (K_4\cdot K_{L_1})\{ f_{213\to 213}^{(1,0)},f_{213\to 212}^{(1,0)},f_{213\to 203}^{(1,0)},f_{213\to 202}^{(1,0)}\} \nn
& & + ( (t_1^2-1) (K_4\cdot K_{L_1})^2- t_1^2 K_4^2 K_{L_1}^2)\{ f_{213\to 213}^{(0,0)},f_{213\to 212}^{(0,0)},
f_{213\to 203}^{(0,0)},f_{213\to 202}^{(0,0)}\}~.~~~
\eea
Thus we can take either one (but only one of them) as the
integral basis. We choose the combination $(e_i,e_j)=(e_1,e_1)$
to be the new integral basis.
\end{itemize}
~\\{\sl Conclusion:} For $a=2$, the integral basis can be chosen as
${\cal B}_{213;2}[\kappa_0,\kappa_1,\kappa_2;K_4]$.
~\\{\bf For general $a$:} Although we have not done explicit
calculations for $a\geq 3$, we expect that for each $a$ there is a new
integral basis ${\cal B}_{213;a}[\kappa_0,\kappa_1,\kappa_2;K_4]$.
~\\ {\bf The number of integral bases:} To finish this section, let
us count the number of integral bases. For pure 4D, we just need to
set $\mu_1\cdot \mu_2,\mu_1^2, \mu_2^2$ to zero. In this case, the
factor ${1\over t_2^n}\to 1$. In other words, there is only one
integral basis
\bea \int d^{4-2\eps}\WH\ell_1\int d^{4-2\eps}\WH\ell_2 {1 \over
\WH\ell_1^2 (\WH\ell_1-K_{L_1})^2 \WH\ell_2^2 (\WH \ell_2-K_4)^2
(\WH\ell_2+K_{L_1})^2(\WH\ell_1+\WH\ell_2)^2 }~.~~~\eea
It is useful to compare this with the roughly $70$ integrand bases found in
\cite{Feng:2012bm} under renormalizability conditions.
For general $(4-2\eps)$-dimensions, the renormalizability conditions can be
roughly given by $2\kappa_1+a\leq 3$, $\kappa_2\leq 2$. Under these
two conditions, we find $48$ integral bases.
\section{The integral basis of ${\cal A}_{313}$ topology}
In this section we turn to the topology ${\cal A}_{313}$. This
topology has been extensively studied by various methods, such as
IBP method\cite{Gluza:2010ws} and maximum unitarity cut
method\cite{Kosower:2011ty,Larsen:2012sx,CaronHuot:2012ab,
Johansson:2012zv,Johansson:2012sf,Sogaard:2013yga,Johansson:2013sda},
and its integral bases have been determined\cite{Gluza:2010ws}. To
determine the integral bases using our method, we need to integrate
the following expression
\bea \Delta{\cal A}_{3 1 3}^{(a,b)}&=&\int d^{-2\eps}\mu_1
d^{-2\eps}\mu_2\int d^4 \W\ell_2~
\delta(\W\ell_2^2-\mu_2^2)\delta(K_{L_2}^2-2K_{L_2}\cdot \W\ell_2){
(2\W\ell_2\cdot T_2)^b \over((\W\ell_2-K_{4})^2-\mu_2^2)}\nn
& &\int\Spaa{\la_1|d\la_1}\Spbb{\W\la_1|d\W\la_1} { - ((1-2z_1)
K_{L_1}^2)^{a-1}\over \Spab{\la_1|K_{L_1}|\W\la_1}^{a}}{
\Spab{\la_1|R_1|\W\la_1}^a\over \Spab{\la_1|W_1|\W\la_1}
\Spab{\la_1|Q_{1}|\W\la_1}}~~~\label{redo-L1-delta-n1=3}~,~~~\eea
with $K_{L_1}=K_1+K_2$. For the general situation, the integration is
very complicated, and we postpone it to future study. In this paper
we adopt the following simplifications. Firstly, we take all outgoing
momenta to be massless, $K_i^2=0\ (i=1,\ldots,4)$ (unlike the topologies ${\cal A}_{212}$
and ${\cal A}_{213}$, where the $K_i$ can be massive). Secondly, based on
the known results for integral bases, we focus on the specific case
$a=0$ and $T_2=K_1$.
In order to make expressions compact, we define some new parameters
as\footnote{It is worth noticing that $s$ in this section is
different from $s$ in \eref{combi} of section 3.}
\begin{eqnarray}
s\equiv s_{12}~~,~~ m\equiv\frac{s_{14}-s_{13}}{s_{12}}~~,~~
\chi\equiv\frac{s_{14}}{s_{12}}=\frac{m-1}{2}~.~~~
\end{eqnarray}
For a physical unitarity cut, the momentum configuration requires
$s_{12}>0$, $s_{13}<0$ and $s_{14}<0$; together with momentum
conservation $s_{12}+s_{13}+s_{14}=0$, this gives
\begin{eqnarray}
-1<m<1~~,~~ -1<\chi<0~.~~~
\end{eqnarray}
Furthermore, we define the regularization parameters $\gamma_i$ as
\begin{eqnarray}
\gamma\equiv\frac{1+\nu_1\cdot\nu_2}{\sqrt{1-\nu^2_1}\sqrt{1-\nu^2_2}}~~,~~
\gamma_i\equiv\frac{1}{\sqrt{1-\nu^2_i}}~~,~~
i=1,2~,~~~\label{gamma-i}
\end{eqnarray}
where the dimensionless extra-dimensional vector $\nu_i$ is defined
as $\nu_i\equiv2\mu_i/\sqrt{s}~,~i=1,2$.
Under the simplification $a=0$, the integration over the $\la_1$-part is
trivial. Using \eref{Stan-box-box} in Appendix \ref{B}, we get
\bea \Delta{\cal A}_{3 1 3}^{(0,b)} & = & \int d\mu_i
\int\Spaa{\la_2|d\la_2}\Spbb{\W\la_2|d\W\la_2}{\gamma_1\over
s^2}\Big(-{s\over
\gamma_2}\Big)^{b-1}{\Spab{\la_2|R_2|\W\la_2}^b\over
\Spab{\la_2|Q_2|\W\la_2}\Spab{\la_2|K_{L_1}|\W\la_2}^{b+1}} \nn & &
{1\over \sqrt{{\Spab{\la_2|\W K_1|\W\la_2}^2\over
\Spab{\la_2|K_{L_1}|\W\la_2}^2}- {\b^2\over 4} }}
\ln\left({\Spab{\la_2|\W K_1|\W\la_2}\over
\Spab{\la_2|K_{L_1}|\W\la_2}}+\sqrt{{\Spab{\la_2|\W
K_1|\W\la_2}^2\over \Spab{\la_2|K_{L_1}|\W\la_2}^2}- {\b^2\over 4} }
\over {\Spab{\la_2|\W K_1|\W\la_2}\over
\Spab{\la_2|K_{L_1}|\W\la_2}}-\sqrt{{\Spab{\la_2|\W
K_1|\W\la_2}^2\over \Spab{\la_2|K_{L_1}|\W\la_2}^2}- {\b^2\over 4}
}\right)~,~~~\label{A-exp}\eea
where
\bean \b^2=(\gamma^2-1)(\gamma_1^2-1)~.~~~\eean
An important feature is that the signature after $\la_1$-integration
depends on $\ell_2$ explicitly, which is different from the
signature in \eref{3to3-sign}. Because of this, the integration over
$\la_2$ becomes very complicated. One way to overcome this is to use
\bea {1\over b} \log {a+b\over a-b}=\int_0^1 dx\left( {1\over a+x
b}+{1\over a-x b}\right)=\int_0^1 dx {2a\over a^2-x^2 b^2 }
~.~~~\label{Useful-1}\eea
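This identity is straightforward to verify numerically; a minimal sketch (the sample values $a=3$, $b=1$ and the midpoint-rule quadrature are arbitrary choices):

```python
import math

def rhs(a, b, n=200_000):
    # midpoint rule for  int_0^1  2a / (a^2 - x^2 b^2)  dx
    h = 1.0 / n
    return h * sum(2.0*a / (a*a - ((k + 0.5)*h)**2 * b*b) for k in range(n))

a, b = 3.0, 1.0
lhs = math.log((a + b) / (a - b)) / b   # (1/b) log((a+b)/(a-b))
val = rhs(a, b)
```

The two sides agree to well below $10^{-8}$ for these values.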
Thus the logarithmic part in \eref{A-exp} becomes a rational function
of $\ell_2$, and we can use the same strategy as in previous
sections. However, for the current simple situation
we can use another method. After expanding the spinor variables as
\bea
\ket{\la_2}=\ket{k_2}+z\ket{k_1}~~,~~\bket{\W\la_2}=\bket{k_2}+\O
z\bket{k_1}~~,~~\Spaa{\la_2|d\la_2}\Spbb{\W\la_2|d\W\la_2}=-s dz d\O
z~,~~~\eea
the integration becomes an integration over complex plane
\bea \Delta{\cal A}_{3 1 3}^{(0,b)} = \int d\mu_i \int
|dzd\bar{z}|(\bullet)=\int d\mu_i \int _0^{+\infty}r
dr\int_0^{2\pi}d\theta (\bullet)~~,~~z=r e^{i\theta}~.~~~ \eea
~\\ {\bf $\theta$-integration:} The $\theta$-dependent part of
\eref{A-exp} is given by
\bea \int_0^{2\pi} d\theta{
\Big(\Spab{K_2|R_2|K_2}+r^2\Spab{K_1|R_2|K_1}+r
e^{i\theta}\Spab{K_1|R_2|K_2} +r
e^{-i\theta}\Spab{K_2|R_2|K_1}\Big)^b\over (s_{24}-\W t_2 s_{12})+
r^2(s_{14}-\W t_2 s_{12}) +r e^{i\theta} \Spab{K_1|K_4|K_2}+r
e^{-i\theta}\Spab{K_2|K_4|K_1}}~,~~~\eea
with $\W t_2={\gamma_2-1\over 2}$. Setting $x=e^{i\theta}$ the
integral becomes a circle contour integration with radius one
\bea \oint_{|x|=1}dx {\Big(x\Spab{K_2|R_2|K_2}+x
r^2\Spab{K_1|R_2|K_1}+r x^2\Spab{K_1|R_2|K_2}+r
\Spab{K_2|R_2|K_1}\Big)^b \over i x^b \Big(x(s_{24}-\W t_2s_{12})+
xr^2(s_{14}-\W t_2 s_{12})
+rx^2\Spab{K_1|K_4|K_2}+r\Spab{K_2|K_4|K_1}\Big)}~.~~~\label{theta-1}
\eea
There are three poles in total. The first one is $x=0$ when $b\neq
0$ for general $R_2$. The other two are roots of the quadratic
polynomial in denominator
\bea x_{1,2}={-\Big(s_{24}-s_{14}+(r^2+1)(s_{14}-\W t_2s_{12})\Big)
\pm\sqrt{\Delta} \over 2r\Spab{K_1|K_4|K_2}}~,~~~\eea
where
\bea \Delta=\Big(-s_{12}+(r^2+1)(s_{14}-\W t_2
s_{12})\Big)^2+4s_{12}s_{14}(r^2+1)(1+\W t_2)~. \eea
It is easy to check that $|x_1 x_2|=1$. Thus one root is inside the
integration contour and the other is outside. The kinematic
conditions $s_{12}>0, s_{24}<0, s_{14}<0$ ensure that $x_1$ is the
one inside. The residue at the pole $x_1$ is
\bea {1\over i\sqrt{\Delta}}
\Big(\Spab{K_2|R_2|K_2}+r^2\Spab{K_1|R_2|K_1}+r
x\Spab{K_1|R_2|K_2}+r x^{-1}
\Spab{K_2|R_2|K_1}\Big)^b_{x=x_1}~.~~~\eea
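The pole structure here is easy to illustrate numerically. For a quadratic denominator $c_2x^2+c_1x+c_0$ with $|c_0|=|c_2|$ (the analogue of $|x_1x_2|=1$; the complex coefficients below are hypothetical), exactly one root lies inside the unit circle, and the contour integral picks up $2\pi i$ times its residue:

```python
import cmath, math

# Hypothetical coefficients with |c0| = |c2|, hence |x1 x2| = |c0/c2| = 1.
c2, c1, c0 = 1 + 0.5j, 3 + 0j, 1 - 0.5j
disc = cmath.sqrt(c1*c1 - 4*c2*c0)
roots = [(-c1 + s*disc) / (2*c2) for s in (1, -1)]
x_in, x_out = sorted(roots, key=abs)     # inside / outside the unit circle

# residue of 1/(c2 (x - x_in)(x - x_out)) at the inside pole
res = 1 / (c2 * (x_in - x_out))

# trapezoidal rule on |x| = 1 (spectrally accurate for a periodic integrand)
n = 2000
contour = sum(
    (2j*math.pi/n) * x / (c2*x*x + c1*x + c0)
    for x in (cmath.exp(2j*math.pi*k/n) for k in range(n))
)
```

The numerical contour integral reproduces $2\pi i$ times the residue of the inside root, mirroring the residue computation quoted above.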
~\\ {\bf The case $(T_2=K_1)$:} Under our simplification, we set
$T_2=K_1$, thus $\Spab{K_1|R_2|K_2}=0$ and $\Spab{K_2|R_2|K_1}=0$.
Because of this, there is no pole at $x=0$ in \eref{theta-1}. Thus
after the $\theta$-integration, \eref{A-exp} is reduced to
\bea \Delta{\cal A}^{(0,b)}_{313} &=&\int
d\mu_i~~\frac{\gamma_1}{2s^2}(-\frac{s}{\gamma_2})^{b-1}
\int_0^{+\infty}dr^2 {1\over \sqrt{\left({\alpha-1\over 2}+{1\over
(1+r^2)}\right)^2-{\b^2\over 4} }}\nn & &
\left(\ln{\left({\alpha-1\over2}+{1\over 1+r^2}\right)
+\sqrt{\left({\alpha-1\over2}+{1\over 1+r^2}\right)^2-{\b^2\over 4}
}\over \left({\alpha-1\over 2}+{1\over 1+r^2}\right)-
\sqrt{\left({\alpha-1\over 2}+{1\over 1+r^2}\right)^2-{\b^2\over 4}
}} \right) {1\over 1+r^2}\left({\gamma_2-1\over 2}+ {1\over 1+r^2}
\right)^b\nn & & {1\over \sqrt{\Big((r^2+1) (\chi-{\gamma_2-1\over
2})-1\Big)^2 +4(r^2+1) \chi(1+{\gamma_2-1\over 2})}}~,~~~ \eea
in which
\begin{eqnarray}
\qquad\alpha=\gamma\gamma_1~~,~~
\beta=\sqrt{(\gamma^2-1)(\gamma^2_1-1)}~.~~~ \nonumber
\end{eqnarray}
Defining $u= {1-r^2\over 1+r^2}$, we arrive at
\begin{eqnarray}\label{new expression a=0}
\Delta{\cal A}^{(0,b)}_{313} &=&\int
d\mu_i~~\frac{\gamma_1}{2s^2}(-\frac{s}{2\gamma_2})^{b-1}\int^{+1}_{-1}du
\frac{(u+\gamma_2)^b}{\sqrt{(u+m\gamma_2)^2+(1-m^2)(\gamma^2_2-1)}} \nonumber \\
&&\frac{1}{\sqrt{(u+\alpha)^2-\beta^2}}\ln\frac{(u+\alpha)+\sqrt{(u+\alpha)^2
-\beta^2}}{(u+\alpha)-\sqrt{(u+\alpha)^2-\beta^2}}~.~~~\label{A-exp-3}
\end{eqnarray}
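The substitution $u=(1-r^2)/(1+r^2)$ maps $r^2\in(0,\infty)$ to $u\in(-1,1)$ with measure $dr^2=2\,du/(1+u)^2$. A quick numerical check of this change of variables on a sample integrand (hypothetical, chosen so the exact answer is $\int_0^\infty e^{-r^2}\,dr^2=1$):

```python
import math

def u_form(f, n=200_000):
    # midpoint rule for  int_{-1}^{1}  f((1-u)/(1+u)) * 2/(1+u)^2  du
    h = 2.0 / n
    total = 0.0
    for k in range(n):
        u = -1.0 + (k + 0.5)*h
        total += f((1.0 - u)/(1.0 + u)) * 2.0/(1.0 + u)**2
    return total * h

# Sample integrand: int_0^infty exp(-r2) dr2 = 1 exactly.
val = u_form(lambda r2: math.exp(-r2))
```

The quadrature reproduces the exact value $1$ to high accuracy, confirming the measure used in \eref{new expression a=0}.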
An important observation from \eref{A-exp-3} is that ${\cal
D}^{(0,b)}_{313}/(\gamma_1\gamma_2)$ (with ${\cal D}$ defined below)
has the symmetry $\gamma\leftrightarrow \gamma_1$, as well as, for
$b=0$, the symmetry $\gamma_2\leftrightarrow \gamma_1$ implied by the topology.
Since we use the dimensional shifted bases, the $\mu_i$ part is kept
and we will focus on ${\cal D}_{313}^{(0,b)}$ after the
$u$-integration, i.e.,
\bea \Delta{\cal A}_{313}^{(0,b)}\equiv\int d\mu_i~{\cal
D}_{313}^{(0,b)}~.~~~\eea
We found it hard to integrate over $u$ and obtain analytic results.
However, in the general $(4-2\eps)$-dimensional framework, we can
treat $\mu_i^2$ and $\mu_1\cdot \mu_2$ as small parameters and take
a series expansion around $\mu_i^2\to 0$. This is equivalent to taking
the series expansion around $\gamma_i\to 1$. The details of the
calculation can be found in Appendix C. Up to leading order, the
result for $a=0,b=0$ is given by
\begin{eqnarray}
{\cal D}^{(0,0)}_{313}=\frac{1}{s^3\ 2\chi}\Bigg[
&&\ln\Big(\frac{-2\chi}{\gamma-1}\Big)\ln\Big(\frac{-2\chi}{\gamma_1-1}\Big)+\ln\Big(\frac{-2\chi}{\gamma-1}\Big)\ln\Big(\frac{-2\chi}{\gamma_2-1}\Big)
+\ln\Big(\frac{-2\chi}{\gamma_1-1}\Big)\ln\Big(\frac{-2\chi}{\gamma_2-1}\Big) \nonumber \\
&&+2\
\textsf{Li}_2(1+\chi)-\frac{\pi^2}{3}\Bigg]~.~~~\label{b=0-result}
\end{eqnarray}
An important check for the result \eref{b=0-result} is that it has
the $S_3$ permutation symmetry among $\gamma_1, \gamma_2,\gamma$.
The terms $\ln(-\chi)$ and $\textsf{Li}_2(1+\chi)$ do not show up
for topologies ${\cal A}_{212}$ and ${\cal A}_{213}$, thus they
belong to the signature of ${\cal A}_{313}$. For $b=1$, the result
is
\begin{eqnarray}
{\cal D}^{(0,1)}_{313}=\chi\ s{\cal D}^{(0,0)}_{313}+{\cal
D}^{(0,0)}_{312} -\frac{1}{s^2}\ln\Big(\frac{-2\chi}{\gamma-1}\Big)
\ln\Big(\frac{-2\chi}{\gamma_1-1}\Big)~.~~~\label{b=1-result}
\end{eqnarray}
The extra term $-\frac{1}{s^2}\ln\Big(\frac{-2\chi}{\gamma-1}\Big)
\ln\Big(\frac{-2\chi}{\gamma_1-1}\Big)$ in \eref{b=1-result}
indicates that comparing to ${\cal D}^{(0,0)}_{313}$, ${\cal
D}^{(0,1)}_{313}$ should be taken as a new integral basis. For
$b=2$, the result is
\bea {\cal D}^{(0,2)}_{313}&=&\chi\ s{\cal
D}^{(0,1)}_{313}+\frac{2\chi+1}{s}{\cal D}^{(0,0)}_{202}
-\frac{2\chi+1}{2}{\cal D}^{(0,0)}_{212}-\frac{2\chi+1}{2}{\cal
D}^{(0,0)}_{302} -\frac{2\chi}{s}\ln(-\chi)~.~~~\label{b=2-result}
\eea
For this result, there are a few things to discuss. Firstly,
the same coefficient $-\frac{2\chi+1}{2}$ appears for ${\cal
D}^{(0,0)}_{212}$ and ${\cal D}^{(0,0)}_{302}$, which is a
consequence of the symmetry $\gamma\leftrightarrow \gamma_1$ in
\eref{A-exp-3}. Secondly, the appearance of the term $\ln(-\chi)$ is
quite intriguing. There are several possible interpretations:
\begin{itemize}
\item Under the general
$(4-2\eps)$-dimensional framework, ${\cal D}^{(0,2)}_{313}$
could be considered a new integral basis.
\item From the result in \cite{Gluza:2010ws}, ${\cal A}^{(0,2)}_{313}$
can be written as linear combinations of integral bases ${\cal
A}^{(0,0)}_{313}$ and ${\cal A}^{(0,1)}_{313}$. However, the
coefficients depend on $\eps$. Then $\eps \Delta {\cal
A}^{(0,0)}_{313}$ and $\eps \Delta {\cal A}^{(0,1)}_{313}$ could
contribute to finite terms, such as $\ln(-\chi)$, under the
unitarity cut.
\item In fact, $\ln\Big(\frac{-2\chi}{\gamma_i-1}\Big)$ is the result
given by the unitarity cut in the $K_{12}$ channel of the one-loop massless box
$(K_1, K_2, K_3, K_4)$, up to zeroth order in $(\gamma_i-1)$. This
may indicate some connection with the one-loop box diagram.
\end{itemize}
Finally for $b=3$ we found
\begin{eqnarray}
{\cal D}^{(0,3)}_{313}&=&\chi^2\ s^2{\cal D}^{(0,1)}_{313}
+\Big(\frac{5}{2}\chi^2+\chi-\frac{1}{4}\Big){\cal D}^{(0,0)}_{202}
-s\Big(\frac{3}{2}\chi^2+\frac{1}{2}\chi-\frac{1}{4}\Big)\Big({\cal
D}^{(0,0)}_{212}+{\cal D}^{(0,0)}_{302}\Big)\nn & &
-3\chi^2\ln(-\chi)~.~~~\label{b=3-result}
\end{eqnarray}
It is obvious that ${\cal D}^{(0,3)}_{313}$ can be written as a linear
combination of ${\cal D}^{(0,i)}_{313}$, $i=0,1,2$ (as well as lower
topologies), with coefficients that are rational functions of $\chi,s$.
\section{Conclusion}
In this paper we applied the unitarity method to two-loop diagrams
to determine their integral bases. Two propagators in each loop
are cut, while the mixed propagators are left untouched. Integrations over the
reduced phase space have been carried out analytically in spinor form.
Based on these results, analytic structures have been identified
and the integral bases have been determined.
To demonstrate, we applied our method to investigate the double-box
topology and its daughters, with appropriate choice of cut momenta
and kinematic region. For the ${\cal A}_{212}$ topology, we found
that there is only one scalar basis for the pure 4D case, while for
general $(4-2\epsilon)$-dimension, if we use the dimensional shifted
bases, there are 20 scalar bases under renormalizability
conditions. For the ${\cal A}_{213}$ topology, there is also only
one scalar basis for the pure 4D case, but for the
$(4-2\epsilon)$-dimension, scalar bases are not enough even
considering the dimensional shifted bases. We found that there are
48 dimensional-shifted integral bases for renormalizable theories.
For the ${\cal A}_{313}$ topology, it is difficult to get an exact
expression for general $(4-2\epsilon)$-dimension case. Thus we only
considered a specific case ${\cal A}_{313}^{(0,b)}$ with $T_2=K_1$.
We presented results to zeroth order and found three bases for
the general $(4-2\epsilon)$-dimensional case if we do not allow
coefficients to depend on $\eps$.
Based on the method demonstrated in this paper, several
directions can be pursued in the future. Firstly, for the ${\cal
A}_{313}$ topology, the exact result for the specific case $a=0$ is
still missing, and general values of $a$ should also be considered.
Secondly, the topologies discussed in this paper are not the most
general cases: the most general configurations are those in which each
vertex has external momenta attached, as well as massive propagators.
Results for these more general cases are necessary. Thirdly, to
obtain a complete set of integral bases, we need to investigate
the other topologies classified in \cite{Feng:2012bm}. Finally, besides
determining the integral bases, the unitarity method is also
powerful for finding the rational coefficients of the bases in the
reduction. We expect that, once the complete set of integral bases
is obtained, this method will be useful for practical two-loop
calculations\footnote{See also a very interesting new method
\cite{Abreu:2014cla}.}.
The second generation interferometric detectors of gravitational waves were
the result of major upgrades in the design and implementation of the LIGO
detectors in the USA\ and the Virgo detector in Europe, during 2009-2015. The
enthusiastic surge of interest in\ the anticipated detection of gravitational
waves (GW) with advanced LIGO (aLIGO) detectors and the advanced Virgo
detector led to the LIGO-India project proposal in 2011, to build and operate
a third advanced LIGO detector in India. The proposal was the realization of
`now, or never' about gravitational wave science and astronomy, among the
physicists who initiated the proposal \cite{Proposal-DCC,Unni-LI,Current-LI}.
Discussions in small meetings had given birth to the \emph{Indigo
consortium} of the GW research community in India, which soon gained
visibility for its seriousness. We were encouraged, trusted, and supported by the
GW community globally, and in particular by the GWIC (GW International
Committee) and the scientists from the LIGO laboratory. The LIGO project came
to India after an active discussion during 2010--11 on the possibility of
LIGO-Australia, in which India was proposed as a collaborating partner. The
idea of LIGO-Australia was that the interferometer components meant for a
third LIGO detector at the same site as the US-Hanford detector should
instead be used to build the third detector at a site in Australia, which
would enable accurate localization of GW events due to the availability of a
third independent detector at a large baseline distance \cite{LIGO-Aus}. This
was a major decision that prioritized and emphasized GW astronomy, rather
than just the detection of gravitational waves. The entire infrastructure and
the operation of the detector in the network were the responsibility of the
host. LIGO-Australia finally turned out not to be feasible, mainly due to
funding constraints. Then the Indigo consortium
and the LIGO Laboratory explored the possibility of a LIGO detector in India.
A high-level committee that advised the National Planning Commission of the
Government of India on large and long-term national projects in astronomy
recommended the LIGO-India project, after multiple presentations and
evaluations, fully realizing its importance and the unprecedented advantage of
obtaining the full interferometer detector components from the LIGO laboratory
in the USA.
The main infrastructural elements for building the detector are a suitable
site, laboratory buildings and the enormous ultra-high vacuum (UHV) enclosures
of the 4 km $\times$ 4 km Michelson interferometer, for which participating Indian
institutions had the full responsibility. Site selection efforts started
immediately by the Indigo consortium members, even before any assurance of
funding, with timely support from the Inter-University Centre for Astronomy
and Astrophysics (IUCAA), Pune. The identification of a few very good sites,
well isolated from common noise sources, followed the elimination of many
candidates that were considered, based on several visits and preliminary
measurements of seismic noise. When IUCAA, Pune, and the two key
technologically highly endowed institutes under the Department of Atomic
Energy (DAE) -- The Institute for Plasma Research (IPR), Gandhinagar, and the
Raja Ramanna Centre for Advanced Technology (RRCAT), Indore -- agreed to take
key responsibilities for the projects, things came together as a feasible
project. Speedy and cautious response and support from the NSF, USA, in the
form of visits of key persons and reviews of the proposal by special
committees gave a concrete form to the LIGO-India project. Four senior level
visits from the LIGO-Laboratory to the LIGO-India lead-institutions for
technical assessment and discussions were followed by three in-depth reviews
by a NSF panel. All this culminated in a review and the following resolution
by the National Science Board, USA, in August 2012: \textquotedblleft
Resolved, that the National Science Board authorize the Deputy Director at her
discretion to approve the proposed Advanced LIGO Project change in scope,
enabling plans for the relocation of an advanced detector to
India\textquotedblright.
The responsibility for data handling and computing, as well as the important
task of building and coordinating the user community for GW astronomy, was
also taken by IUCAA. With the technical aspects of UHV, seismic isolation and
suspensions, optical assembly, control systems etc. being handled by IPR,
Gandhinagar and RRCAT, Indore, the LIGO-India project had a defined
institutional structure. Regular visits by senior scientists of the LIGO
laboratory and planning meetings helped in charting a road map and organizing
a strong user community within a short period. Hence, three significant
developments had happened by the time the first detection of gravitational
waves was announced in early 2016: 1) the Indigo consortium and
the interest in GW astronomy had grown considerably, ten-fold during
2011-2016, 2) a large number of Indian researchers were already members of the
global LIGO Scientific Collaboration (LSC) and part of the discovery
\cite{Discovery2016,Unni-bonanza}, 3) a suitable site for the LIGO-India
detector was identified in the state of Maharashtra and finalized (Aundha in
Hingoli district, Latitude 19${}^{\circ}$ 36' 50\textquotedblright\ N,
Longitude 77${}^{\circ}$ 01' 50\textquotedblright\ E, about 175 hectares). The
prompt announcement by
the Prime Minister of India of the formal support and approval for the project
then provided the boost and fuel to go ahead. The funding responsibility,
which was estimated then as about Rs. 13 billion (Rs. 1300 Crores, \$270
million), was to be shared between the DAE and the Department of Science \&
Technology (DST). The contribution from NSF and the LIGO laboratory was the
entire set of aLIGO detector components (except the vacuum infrastructure)
including the stabilised laser, to be given in kind. The notional cost of the
contribution of the LIGO laboratory and other international partners in the
LIGO Scientific Collaboration was about \$ 130 million.
\section{The LIGO-India Project: The Science Case}
The strong science case for the LIGO-India (LI) project had been discussed on
many occasions and in publications \cite{Proposal-DCC, Unni-LI,Fair-Sathya}.
The project proposal itself is available at the website of LIGO-India and the
Indigo consortium (gw-indigo.org). An early compilation, leaning on the
contributions from many from the LIGO Scientific Collaboration as well as the
Indigo Consortium, is in my 2013 overview paper, based on a talk in the Astrod
symposium (2012) held in Bengaluru \cite{Unni-LI}. The most important aspect
of LIGO-India is that it is the only planned detector with the design
sensitivity matching the aLIGO detectors, and of identical design and
technological characteristics. Therefore, if implemented, it provides assured
detections with excellent source localization on a single integrated platform.
The other aspects of the additional detector in the network are the
significantly improved network duty cycle, improved sensitivity, and larger
sky coverage. The upgraded version of the European Virgo detector was already
getting commissioned for joint observations with the two LIGO detectors, and
the Japanese detector KAGRA \cite{Kagra-1} was nearing completion, when the
LIGO-India project was approved in 2016. The hope was that a network of all
these five detectors would become operational by 2022-23.
The sensitivity of the individual detectors can be compared when stated in
terms of the equivalent range for the detection of the merger of binary
neutron stars (BNS). The design sensitivity of the aLIGO detectors is about
170 Mpc. With another planned upgrade with a frequency-dependent squeezed
light technology and improved mirror coating, called LIGO A+, the mature
sensitivity is projected to reach beyond 300 Mpc. Advanced LIGO started its
operation (O1) in 2015 with a range above 60 Mpc, and progressively reached a
sensitivity of 130 Mpc. The fourth observing cycle (O4) will start in 2023,
with a sensitivity around 170 Mpc. The full A+ design sensitivity is expected
only after 2028. We may expect that the LIGO detectors and Virgo detector will
be operating near their A+ design sensitivities of about 260-330 Mpc, and the
KAGRA detector has a goal of reaching above 130 Mpc in the late 2020s. The number
of sources in the range goes approximately as the cube of the range (volume).
The scientific relevance of the LIGO-India detector will strongly depend on
its sensitivity in actual operation; \emph{at each stage after commissioning,
it needs to be at a good fraction of the LIGO-US detectors to significantly
contribute to the detection and source localization in the network operation}
\cite{Living-plans}. Due to the cubic dependence of the source rate on the
strain sensitivity, a detector is relevantly useful in a network during the
mature era of GW observations only if its sensitivity is within a factor of
2 to 3 of that of the LIGO A+ detectors. \emph{Thus, the LIGO-India detector
needs to be sensitive at the level of 100-150 Mpc, with more than 50\% duty
cycle, to be scientifically relevant in the GW advanced detectors network that
is expected to remain operational until the third generation 10 km scale
detectors are deployed and operational, perhaps as early as 2035}.
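The factor of 2 to 3 quoted above follows directly from the cube law for the
source count. As a rough illustrative check (using the 300 Mpc A+ range and
the 100-150 Mpc threshold discussed in this section), the fraction of network
events accessible to a detector of BNS range $R$, relative to a detector of
range $R_{0}$, is approximately
\[
\frac{N(R)}{N(R_{0})}\approx\left(\frac{R}{R_{0}}\right)^{3},\qquad
\left(\frac{100}{300}\right)^{3}\approx 3.7\%,\qquad
\left(\frac{150}{300}\right)^{3}=12.5\%.
\]
A detector lagging by a factor of 3 in range thus sees only a few percent of
the event rate of its network partners, which is why a sensitivity below the
100-150 Mpc level would marginalize the detector's role in the network.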
\section{The Implementation of LIGO-India}
LIGO-India interferometer components are identical to the aLIGO components,
already fabricated for the interferometer previously meant as a third LIGO
detector at the Hanford site, with 2 km arm length. They are in storage in the
US at present. These include both the passive and active vibration isolation
systems, all optical components, electronics (except computers and
peripherals), proprietary instruments for monitoring and characterization etc.
Some of the mirrors need refiguring and re-coating in order to be suitable for
the 4 km LIGO-India detector. Some components of control systems, electronics
hardware and the laser source require technology upgrades. In addition, the
entire package of squeezed light injection and detection is to be fabricated
and set up afresh. If a more pertinent plan that follows the upgrade plans of
the LIGO-USA is adopted, to make the LIGO-India detector consistent with the
\textquotedblleft A+ design\textquotedblright\ (a significant upgrade of
detector components), further infrastructure and instrument modifications
would be necessary \cite{Shoemaker2020}. The design and specifications of the
LIGO-India detector, and the technical tasks for its realization, are clearly
known. There are specialist engineering tasks as well as specialist
commissioning tasks. Both require not only the development of specialist
expertise but also a trained work culture. The need to become familiar with a
rigorously specified and controlled work culture arises because this is the
first large-scale science project in India with stringent needs for a quiet
and clean physical environment as well as sustained and disciplined monitoring
by a team of experts. The advanced GW detector is a dynamic and complex
apparatus that
requires refined expertise and familiarity for managing a stable operation.
The trained human power required to assemble and commission the LIGO-India
detector is estimated to be about 70-80 members, and about half of them should
be experts in various key areas of physics, engineering, and technology. A few
experts from the LIGO laboratory are expected to participate in some
specialized tasks, especially during the assembly of vital optical elements
and commissioning, but extended direct participation will be limited because
of the ambitious upgrade plans of the operating ground based detectors in the
next decade. This means that the realisation of the operational LIGO-India
detector is crucially dependent on the amount and quality of the expertise
available and trained within the core Indian institutions involved in the project.
\section{LIGO-India Today}
Where is LIGO-India today? The Prime Minister of India announced the support
and notional approval for the project in February 2016. However, various
aspects of the project, including site selection and essential
characterisation, have been anticipated and attended to from 2012 onwards.
Around the same
time, the KAGRA 3-km underground gravitational wave detector project in Japan
had received full approval, and the construction was in full swing by 2016.
The updated schedule for operation and upgrades of different advanced
detectors, as projected in 2019, is compiled in figure 1
\cite{Living-plans,Somak2018,Shoemaker2020}. The most recent laboratory that
joined the joint detection program is the Japanese GW detector collaboration
KAGRA. While the LIGO detectors and the Virgo detector are keeping to their
upgrade plans within one or two years, KAGRA has not yet managed to get useful
extragalactic sensitivity. One can see that the schedule projected for the
LIGO-India detector by its management team is already lagging several years
behind what was originally envisaged. According to the announced schedule, the
detector is supposed to be commissioned for operation in 2026, to join the
network of advanced detectors, with the design sensitivity of about 300 Mpc by
2027. However, an examination of the ground realities paints a completely
different picture, unfortunately.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
width=3.5359in
]%
{F1-timeline.jpg}%
\caption{The projected observation schedule as of 2019 for the working LIGO
and Virgo detectors and the commissioned KAGRA detector, along with the
announced schedule for the implementation of the LIGO-India detector
\cite{Shoemaker2020}. This has been slightly revised more recently, without a
definite projection for LIGO-India (see fig. 5).}%
\end{center}
\end{figure}
The Nobel prize advanced information document that was released after the 2017
Nobel prizes for the detection of the gravitational waves mentions LIGO-India
(citing my 2013 article on the LIGO-India project and the update in
arXiv:1510.06059) and the Japanese project KAGRA as the future detectors in
the network \cite{Nobel-adv}. KAGRA was scheduled to come online around 2018
and LIGO-India in 2022, as per the original plans. Both projects started
rolling around 2012, though the KAGRA project developed from the large sized
prototypes operated earlier, like the TAMA detector. The project is led by T.
Kajita, who won the Nobel prize in 2015 for the discovery of atmospheric
neutrino oscillations. KAGRA is an advanced detector that incorporated many
new technical features. It uses cryogenic technology for cooling its mirrors
to about 20 K, to suppress thermal noise and thermal lensing. It may benefit
from the very low seismic noise in the underground tunnels in the Kamioka
mountain. The tunnel excavation and the fabrication of the UHV\ beam tubes and
chambers were started in 2012 and completed in 2014. It took another 4 years
to complete the assembly of the interferometer and the tests before the start
of commissioning.
The KAGRA interferometer was locked for operation in 2019 and the team is
still working to join network observations, initially with a fraction of the
sensitivity of the aLIGO and Virgo detectors \cite{Kagra-status}. There are
technical difficulties associated with the new design aspects as well as
operational difficulties in maintaining the external environmental conditions
stable. As another example, after decommissioning the initial LIGO
interferometer, an experienced team took about 5 years to assemble and
commission the aLIGO detector with the new components and vibration isolation
systems, using the same infrastructure. A similar time frame was required as
well to bring the upgraded version of the Virgo detector to the level of
network operation.
\emph{The important point to note here is that the examples of operational
detectors demonstrate that it requires 4-5 years to assemble and commission
the detector with an initial working sensitivity after the completion of the
basic infrastructural elements, like the levelled and stabilized land at the
site, the laboratory buildings built according to stringent specifications,
and the fabrication of the beam tubes and the large UHV chambers}. This would
be the case for even an experienced team of about 40-50 scientists, engineers
and technical specialists. The delay is not mainly in fabricating the large
number of components for the interferometer. It is in the careful assembly and
thorough testing at each stage, because even a small compromise in the
operation of individual sections is not an option. Even after achieving the
stable locked operation of the interferometer, reaching a good fraction of the
design sensitivity can take considerable time (typically a year or more),
adding to unpredictable delays. Thus, \emph{even in an optimistic estimate in
the Indian conditions, the minimum duration required for an expert trained
team to realize an advanced interferometric GW detector here with a
network-worthy sensitivity is 10 years from the preparation of the site} -- 4
years for the infrastructure build-up, 4 years for the assembly, tests and
commissioning, and 2 years for tuning the detector for stable locking and
sufficient sensitivity.
The hard lesson is that the time frame for the assembly and initial operation
of the LIGO-India detector will take about 5 years \emph{after the whole
infrastructure of laboratory buildings and the UHV hardware is ready}, if we
manage to have an experienced and skilled team of 50-70 people in the project
by that time, at various levels.
\begin{figure}
[h]
\begin{center}
\includegraphics[
width=3.6197in
]%
{F2-Site.png}%
\caption{The view of the bare site for LIGO-India at Aundha near Hingoli,
Maharashtra (2016). The lower panel depicts the vision of the site with the
laboratories and the LIGO-India detector, when the construction is completed.
The realistic duration to complete this transformation is another 8-10 years,
in my estimate (see also fig. 4).}%
\label{site}%
\end{center}
\end{figure}
LIGO-India today owns the required land at the chosen site. This is
a commendable achievement, made possible by the coordinated efforts of IUCAA,
DAE, government of Maharashtra and the local administration at the district of
Hingoli. Some studies on the soil structure, seismic properties etc. have
progressed. A 16 months baseline seismic survey of the site has been
completed. However, the work on the land preparation for the laboratory
buildings and the levelled beam tube is still in early stages. The chosen site
near Hingoli is accessible by good roads (450 km from Pune, for example).
Flights to the nearby town of Nanded, which already has a small airport, may
become operational. LIGO-India has received the approval for construction from
the state environment impact assessment authority recently. A LIGO-India
office and a visitor guest house have been already set up at Hingoli.
However, the many tasks of building the infrastructure for an aLIGO detector
are yet to acquire any momentum. The huge task of the fabrication of UHV
chambers and the 8 km of beam tube (1.2 m diameter) requires the evaluation
and finalization of the welding technology and strategy, before the
fabrication can be entrusted to industrial partners. This can be a long
process in India. Though the trusted choice for the welding technology is the
advanced version of argon arc welding, the available technology options have
progressed very positively from the days when the LIGO beam tube was built. I
had personally explored new and efficient choices like the K-TIG key-hole
welding, right when I was planning and budgeting for the UHV systems in the
LIGO-India proposal (in 2011). Laser welding is also a technology that has
matured considerably. Several tests of both the steel material and the
welding options will be required for a judicious choice that ensures long term
UHV reliability. TIG welding is the chosen method for the fabrication of the
UHV elements of the LIGO-India detector. A large UHV chamber with the
specified design, fabricated in India, is undergoing quality tests. But
achieving the required level of ultra-high vacuum is still quite far off. The
advanced detector requires about a dozen such large chambers. The prototyping
and testing of a beam tube section of 10 m length is yet to be done.
The LIGO-India project and the collaboration do not have a director or a
project manager, even 6 years after the approval by the government of India.
There is a formal management structure for monitoring the project and to
coordinate activities with the LIGO laboratory in the US. However, without a
leader, the project has not emerged from its formal confines. Therefore, there
is no close link between the larger GW research community and the project
personnel yet. There is a small funding component for small-scale research on
next generation detectors and technology, disbursed to individual researchers in
universities and research institutes other than the LIGO-India core
institutes. But there is no regular interaction possible yet between these
researchers and the LIGO-India project team. As already mentioned, even a
conservative estimate of the expert and trained human power needed to
implement the LIGO-India project is about 70 to 80 participants, for the
different tasks of the assembly of the detector components and vibration
isolation, elaborate position metrology, UHV assembly, leak tests and
certification, optical assembly and internal suspensions, control systems
tests, interferometer locking and full commissioning. \emph{At present only a
fourth of this task force is available}, leading the preparatory tasks in the
core institutes. The LIGO-India project has not yet started a targeted formal
graduate training program. Therefore, there is no significant resource of
young physicists or engineers trained, working with any of the advanced
interferometer detectors, as of now.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
width=5.4886in
]%
{F3-LI_status.jpg}%
\caption{Left) A view of the acquired site meant for the LIGO-India detector.
A sample layout of the 4 km $\times$ 4 km detector is indicated. Right) The
construction status as of 2022 November, showing only a site office. The
preparation of the detector site and the construction of the laboratory
buildings are yet to start (\textit{figure extracted from Google Earth}).}%
\end{center}
\end{figure}
The infrastructure of laboratory buildings and the UHV hardware installation
can be readied only after the land is levelled and prepared to the stringent
specifications. The work on major elements of the infrastructure -- the UHV
chambers and beam tubes, site preparation, and the laboratory buildings -- is
yet to start in a scheduled manner. The proper fund flow is perhaps waiting
for an updated detailed project report. This summarises the present status of
the physical progress in the LIGO-India project. Based on the present status
(2022), my experience in the scheduling and budgeting of the original
LIGO-India proposal of 2011 enables me to make a realistic decadal assessment
about the feasibility and scientific relevance of the LIGO-India gravitational
wave detector project. \emph{My revised estimate clearly shows that the
network operation of the detector with a useful extragalactic sensitivity
cannot be earlier than 2032, and the total cost of the project up to 2035 will
be more than doubled, to about Rs. 3500 Crores} (Rs. 35 billion, \$ 430
million). \emph{What is most worrisome is the obvious fact that the large
delay in implementation will severely affect and diminish the scientific
relevance of the project}. I will now examine the various factors that go into
this decadal assessment and revised estimates.
\section{LIGO-India Project Under a `Zeno-Effect'}
The LIGO-India project was about to be launched several years ago, with the
arrow well placed in the bow, still a bit unstretched. Can that arrow reach
its destination when launched? First of all, one needs to wait for the
`launcher' leader(s). Then, as Zeno of Elea wondered, the arrow has to first
reach half the distance before it can progress to the rest. But, it has to
cross `half of half the distance' first, etc. Can the arrow ever leave the bow?
Having studied such situations, and watched many small and big projects in
India and abroad, I know that the arrow can reach the target, if only it is
released. However, in a largely sequential project, the time factor becomes
very important due to the paucity of experienced personnel.
\emph{Gravitational wave detectors work as a network, to achieve the ability
for source localisation and to improve sensitivity and reliability of
detection}. This implies that the detectors in the network ideally should have
comparable sensitivities for optimum effectiveness. By the time a delayed
detector reaches a good fraction of its design sensitivity, the already
operational detectors will have progressed to a better sensitivity. Then the
lagging detector will never catch up with the network leaders. Given the
relatively small workforce active in the project, with limited familiarity
with the advanced detector, and with many of the experts who initiated the
project expected to retire halfway through, there is a genuine Zeno-effect
shadow
on the project. This compromises the relevance and science case of a
gravitational wave detector, meant to operate in a network.
What is an optimistic estimate of the time required to operate the LIGO-India
detector at its A+ design detection sensitivity of about 250-300 Mpc for the
merger of binary neutron stars? What is a reliable road map from initial
commissioning to reaching the design sensitivity and how much time is required?
The preparation of the land, its transformation to the specifications and the
building of the main laboratories will take about 3 years. One need not and
will not wait for these tasks to be completed to start on the enormous task of
fabricating the large UHV chambers and 8 km of the 1.2 m diameter beam tubes.
However, the finalization of the UHV process is yet to be done and the
tendering process is long. Assuming that the fabrication will start in early
2023, we can expect the completed and tested UHV components in late 2025,
ready for assembly. The testing and certification of the UHV hardware is a
long process and it cannot be hurried or shortened. The full infrastructure
(chambers and components in their designated positions in the buildings) can
be expected sometime in mid-2026. The assembly and testing of the beam tubes
and their covers etc. can go on in parallel with the interferometer assembly.
The assembly
of the full interferometer will take a minimum of 3 years because of the
complexity and the number of components. Thus, the interferometer could be
ready for locking and commissioning only after mid-2029. Assuming that the
commissioning task will be aided well by some experts from the LIGO
laboratory, a reasonable estimate of locked operation is then 2030. From this
point to achieving a sensitivity at which it can join network operations
(50-100 Mpc for BNS merger) takes between 2-3 years. \emph{Optimistically
assuming a minimum duration for tuning up the sensitivity, the LIGO-India
detector will be network ready only after 2032}. It is clear that the present
projection of 2027 by the LIGO-India management is naive and not feasible. The
optimistic revised date for network operation at extragalactic sensitivity is
2033. However, the projected design sensitivity for an A+ detector (300 Mpc)
can be reached, if at all, only well past 2035.
We have assumed that the presently operating detectors (aVirgo and aLIGO-USA)
will be operating at their design sensitivity of about 250-300 Mpc till 2032.
However, further upgrades are planned for these detectors which need to be
taken into account when we discuss the relevance and science case of
LIGO-India in 2032. Note that even at the sensitivity near 100 Mpc, LIGO-India
will see effectively less than 1 in 25 ($<$4\%) of all events detected by the
LIGO-US and the Virgo detectors (the
events might be detected, but with much less statistical significance compared
to other detectors). Therefore, \emph{though operating as a useful support
instrument for the network, it cannot play a decisive role in discoveries,
unless the sensitivity is increased to the design sensitivity}. However, this
is a long path of tuning and can take many years, by which we enter the era of
next generation detectors. As indicated earlier, in my estimate based on the
progress achieved in presently working advanced detectors, the LIGO-India
detector can hope to reach near its design sensitivity of 250-300 Mpc only in
2035. This should be seen in the context that by 2040, the global detector
network hopes to move up from the present advanced detectors to the next
generation detectors like the Cosmic Explorer and the Einstein Telescope, with
8-10 times the sensitivity of the present advanced detectors.
I am not fully aware of the upgrade plan for the KAGRA detector. But, the
present plan is to be a full partner in GW astronomy mission of the advanced
detectors by joining the next cycle of observation called the `O4' observation
cycle, due in early 2023. This is a crucial time when the additional detector
will help in better localization of the astrophysical sources, even with a
reduced sensitivity, as long as the signal is visible in the detector with a
reasonable signal-to-noise ratio.
The commissioning of an advanced detector in India by 2025-26 was a
scientifically worthy and attractive goal in 2015, when the first indication
of a detection came, confirming the wealth of astronomy in store for the
interferometric GW detectors. But, as viewed in 2022, the completion and
operation of a detector after 2032 at the same level of sensitivity (about 100
Mpc) is no longer a scientifically exciting goal, though its relevance as an
essential instrument for precision astronomy in a multi-detector network
remains intact. This is because LIGO-India is the only planned detector with a
sensitivity matching the two LIGO instruments in the USA. However, the
significant delay does affect its role as an instrument for discovery.
\emph{What is evident from even an optimistic estimate of the operation
schedule for LIGO-India is that the detector will definitely miss the
opportunity to contribute to multimessenger astronomy during the decade,
2022-2032}. This does not exclude a supportive role for LIGO-India in GW
astronomy in the ensuing decade. However, the enthusiasm in the astronomy
community, even within India, about such a detector will be diminished.
It is very clear that the undesirable situation of the LIGO-India detector
operating always at 3-4 times less sensitivity than other detectors in the
network should be avoided. However, I do not see how this unfortunate
situation could be avoided in any reasonable projection of the schedule,
severely affected by the past delays. This needs urgent and serious discussion
and remedial measures, if there are any.
\section{The Cost Factor}
When the LIGO-India proposal was being prepared in 2011, we were very careful
to minimize costs, and went to great lengths to find out optimal ways to keep
the cost within reasonable demands. What was important in this goal was to
identify expertise within the academic community and institutes, instead of
delegating everything to industrial expertise. In any case, the required
expertise was not available in the industrial circles. Also, timely execution
was a crucial factor. Since the major costs are in UHV fabrication and
certification, it is very important to compare and evaluate the cost increase
in steel and welding technologies etc. From 2015 to 2022, the price of
stainless steel has more than doubled. This alone can increase the costs by
\$80 million. When the proposal was submitted in 2011, the projected budget of
about Rs. 1250 crores (\$270 million then) seemed adequate, with the
understanding that a 15-20\% eventual increase and revision of costs should be
expected by the time it is implemented in 2020. Now in 2022, the projected
costs have more than doubled. A reassessment indicates that project
implementation will require about Rs. 3500 crores (about \$430 million now;
the dollar has significantly appreciated against the Indian Rupee, by a
factor of 1.7). At present, the funding released in the past six years is less
than 4\%
of the total revised cost.
The Indian government has other megascience commitments as well, both national
and international. The funding required for the international ITER\ project on
nuclear fusion (located in France), which is in construction phase, will be
over 2 billion dollars ($>$ Rs. 15000 crores, estimated as 9\% of the 25
billion dollar total international
budget), with the expected commissioning around 2035. The INO project on the
atmospheric neutrino observatory (India-based Neutrino Observatory at Theni,
Tamil Nadu, India) was fully approved, and funds were allocated, but it
continues to face legal hurdles sustained by a hostile situation of local
politics and misleading propaganda. The \$300 million project is unlikely to
be realized now because all the members with experience in experimental
particle physics and detector technology have either retired or are about to
retire. The INO graduate training strategy has not resulted in training
adequate expertise among the younger generation to take the project forward.
Thus, there are no experts to lead various aspects of such a project in the
next decade. Also, the science case of the project (if and when it can be
realized) has weakened significantly, due to the delays and the launch of new
projects on neutrino observatories elsewhere. The Hyper-Kamiokande project
(with the projected commissioning after 2025) is aimed at solving the
outstanding problems of mass hierarchy, CP violation in the neutrino sector,
etc. The DUNE project led by Fermilab also has unique capabilities in answering
such outstanding issues. One can gauge the relevance of the INO detector that
will take 8-10 years for completion, by comparing with these other
initiatives. This makes the INO project scientifically much less significant,
even if all the extraneous hurdles were removed today (which is not the
reality). Then there are commitments in the country in optical and radio
astronomy, for international telescope projects outside the country. The
hugely delayed Thirty-Meter Telescope (TMT) project for the large optical
telescope, proposed to be installed at Hawaii, is another major international
commitment that exceeds Rs. 1500 crores. The (current) commitments in other
astronomy projects like the Square Kilometer Array (SKA) are smaller.
LIGO-India is indeed the largest `home' project in basic science and astronomy
ever taken up in India.
Unfortunately, the revised funding needed for LIGO-India conflicts with the
present situation of decreased GDP growth in India and elsewhere. The Covid-19
pandemic is a global problem, but it will, justifiably, seriously affect and
delay the flow of funds into mega-science projects. No substantial quantum of
funds is approved or released yet for LIGO-India because the actual spending
for the large hardware and infrastructure has not yet started. In my
judgement, the cost factor will significantly impact the LIGO-India schedule,
at least in the early part of its execution, and will unpredictably delay the
operation of the detector. \emph{On the other hand, any decision on fully
funding the project now will have to evaluate and reconsider the doubled
cost factor for a project that has reduced scientific significance, when
implemented for operation after 2032}.
\section{A View from the Other Side}
The LIGO-India project is not merely an Indian national project. It is an
important multi-national collaborative effort, of setting up and operating one
of three identical advanced LIGO detectors in India, in the network operation
with the other two detectors in the USA, and also with the Virgo detector in
Europe. An MoU was signed with the National Science Foundation and the LIGO
laboratories (Caltech and MIT). It cannot be that the NSF and LIGO
laboratories are unaware of the steadily diminishing scientific and
astronomical relevance of the delayed LIGO-India project. In fact, the
executive director of the LIGO laboratory, David Reitze, had noted in a
meeting in India (Pune, 2017) that \emph{a serious risk to LIGO-India
scientific program exists if delays begin to accumulate}.
The commitment by the LIGO laboratory for providing all the interferometer
components, some suitably modified, still stands. These items that are stored
in safe packing await shipping to India. However, before the shipping to India
takes place, the infrastructure to receive them needs to be completely ready.
In the absence of timely progress on this front, what are the options at this stage?
Apart from the uncertain internal cost factor involved in the commitment of
providing the interferometer components, there are even changes in the basic
rules of the import of scientific equipment to India. Unless explicitly
exempted, there is an additional burden of at least 18\% cost as Goods and
Services Tax, to be collected as additional customs duty, for equipment
imported by scientific establishments. Of course, this will have to be
provided as additional funding to the LIGO-India project by the Government of
India itself -- sort of a strange situation of giving with one hand and taking
with another. But, the added administrative cost on both sides is not negligible.
Even at a reduced scientific relevance, the only possibility for gainfully
using the components fabricated for a third LIGO interferometer seems to be
the implementation of the LIGO-India project. In my view, there seems to be no
possibility of reviving a plan like LIGO-Australia. However, a proper
evaluation of the projected joint operation of the ground based detectors,
estimating accurately the efficacy of the LIGO-India detector in network
operation is necessary. It is clear that the LIGO-India detector will remain
in a supporting role unless the lag in its projected sensitivity, compared
with the other operational detectors, is closed. I will now examine a path for
regaining a prominent role for the LIGO-India detector.
\section{LIGO-India A+}%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
width=3.3948in
]%
{F4-A+.png}%
\caption{Projected sensitivity of the A+ upgrade in 2026, compared to the
sensitivity of presently operating aLIGO detectors
\cite{Shoemaker2020,McCuller2019}. }%
\label{A+}%
\end{center}
\end{figure}
There is a well directed plan for a significant upgrade of aLIGO detectors to
LIGO A+ detector, to be completed by 2026 \cite{Shoemaker2020,McCuller2019}.
The technology additions envisaged are mature squeezed light quantum
measurement and vastly improved low loss mirror coatings. There are other
upgrade options as well, by adding to the interferometer baseline. Clearly,
LIGO-India should directly be installed and commissioned in its A+ version,
since its assembly phase is definitely after 2025. If this is not done,
LIGO-India will not be very relevant or useful in the network when it is
fully commissioned. Therefore, \emph{the present understanding is that
LIGO-India will be installed in its A+ version directly}. Still, LIGO-India
needs to go through the long and elaborate process of assembly, commissioning,
and step by step tuning of the sensitivity to reach the sensitivity suitable
for the global network operation. Since it is clear that the LIGO-India
detector is behind its announced schedule by at least 5 years, it is now
excluded from consideration for the quantitative estimates for the efficacy
(localization of sources, network duty cycle etc.) in the parameter estimation
from the observations during 2026-2030.
The O5 observation cycle for the LIGO-Virgo detectors is planned during
2026-2029. The LIGO detectors will operate with A+ design during O5. However,
the post-O5 operation of the LIGO detectors envisages a major upgrade called
\textquotedblleft A$^{\#}$\textquotedblright\ (written as A\# in the rest of
this paper) \cite{LIGO-Ahash}. This is expected to operate with two times
better sensitivity than A+. \emph{I am convinced that this situation demands
an immediate redefinition of the plans for LIGO-India, because it will be
ready for observations only after the LIGO-A\# detectors are commissioned in
the USA for their post-O5 operation}. I discuss this vital point next.%
\begin{figure}
[ptb]
\begin{center}
\includegraphics[
width=5.5973in
]%
{F5-Schedule2035.jpg}%
\caption{Revised schedule for the observation cycles with progressively
increasing sensitivity of the LIGO, Virgo, and KAGRA detectors, post-pandemic (LIGO
document G2002127). I have added a realistic estimate for LIGO-India. By the
time the LIGO-India detector is operational, gravitational wave astronomy will
have entered an era of next generation detectors, like the Voyager upgrade in
the LIGO facility. Post 2036, we can expect the beginnings of the 10 km class
Cosmic Explorer and Einstein Telescope.}%
\end{center}
\end{figure}
\section{Beyond A+: Post-O5 and LIGO-A\#}
Further improvements of the LIGO-USA facility beyond 2029, after the
observation cycle O5, have been under discussion for some time, and two
possibilities were studied. The option called \textquotedblleft
A\#\textquotedblright\ involves replacing the main test mass mirrors with much
heavier mirrors and suitable upgraded suspension and vibration isolation
systems. The A+ version has 40 kg mirrors whereas the A\# design is with 100
kg test masses, suspended on the silica fibres at twice the stress levels.
Also, the intra-cavity laser power at the same wavelength (1064 nm) will be
increased from 750 kW to 1500 kW. This requires better reflective coatings of
the mirrors, still under development. The thermal noise in the mirror coating
is expected to be reduced by a factor of 2. Further, the amount of quantum
squeezing of light at the detection section will be improved (from 6 dB in A+)
to 10 dB. These are the main upgrades that will result in the superior
sensitivity of the A\# design.
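A rough translation of these quoted upgrades into amplitude-noise factors, using textbook scalings (shot noise $\propto 1/\sqrt{P}$, squeezing in dB as an amplitude factor $10^{\mathrm{dB}/20}$) rather than the actual aLIGO noise budget:

```python
import math

# Back-of-the-envelope translation of the quoted A# upgrades into
# amplitude-noise improvement factors. These are textbook scalings,
# not the actual aLIGO noise budget (which is frequency dependent).

# Shot-noise amplitude scales as 1/sqrt(laser power): 750 -> 1500 kW
shot = math.sqrt(1500 / 750)            # ~1.41

# Squeezing quoted in dB is an amplitude factor 10^(dB/20);
# the step from 6 dB (A+) to 10 dB (A#) adds 4 dB
squeeze = 10 ** ((10 - 6) / 20)         # ~1.58

# Coating thermal noise "reduced by a factor of 2"
coating = 2.0

print(f"shot-noise gain: {shot:.2f}x")
print(f"squeezing gain:  {squeeze:.2f}x")
print(f"coating gain:    {coating:.2f}x")
```

Each of these factors dominates in a different frequency band, which is why the net broadband improvement is quoted as about a factor of 2 rather than the product of the three numbers.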
The A\# option is at present preferred over the second option called
\textquotedblleft Voyager\textquotedblright, in which the existing optical
elements are to be replaced with entirely new optics made of Silicon. The
suspended Si mirrors will be much heavier, about 200 kg. These will be in a
semi-cryogenic environment of 123 K, at which the low mechanical loss and the
zero thermal expansion of Silicon will result in reduced thermal noise and
thermal lensing. The same UHV infrastructure would be used for this transition
to the Voyager version \cite{Voyager}. The high power stabilized laser will be
replaced with one of longer wavelength, near 2 micron, to match the reflection
characteristics of the Silicon optics. The projected improvement in
sensitivity of these options is by more than a factor of 2, or about 700 Mpc
for BNS merger.
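To put the quoted range in perspective: the surveyed volume, and hence the expected detection rate, grows as the cube of the range. The A+ baseline range used below (about 330 Mpc) is my assumed figure, not one from the text:

```python
# Illustrative scaling: for a given source class, the surveyed volume
# (hence the expected detection rate) grows as the cube of the range.
# The A+ baseline BNS range below is an assumed figure, not from the text.

range_aplus = 330.0                 # Mpc, assumed A+ BNS range
range_next = 700.0                  # Mpc, quoted above for A#/Voyager

factor = range_next / range_aplus   # ~2.1 in range
volume_gain = factor ** 3           # ~9.5 in volume / event rate

print(f"range gain:      {factor:.2f}x")
print(f"event-rate gain: {volume_gain:.1f}x")
```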
There has been no serious discussion so far about the role of LIGO-India in
its A+ version during the era of A\# or Voyager. If the technology tests
succeed and the A\# version of the LIGO-US detectors is implemented in early
2030s, LIGO-India will only have a fraction of the sensitivity of the LIGO-US
detectors, when it becomes eventually operational in 2032. Therefore, a
definite road map to closely participate in the LIGO developments and to
follow its course needs to be framed soon. In other words, the post-O5 plan of
the A\# version (or Voyager) of LIGO-USA detectors makes it imperative that
LIGO-India follows the same course, for operation at a good fraction of the
sensitivity of the leading LIGO detectors. \emph{Aiming to implement the lower
sensitivity A+ version for LIGO-India is certainly not the right course,
according to my estimate and projections, if we desire a frontline stature in
the GW detector network.} However, building the detector with the A\# design
requires certain incremental changes in the main infrastructure (vibration
isolation, UHV chambers, layout etc.) \emph{as well as the additional funding
required for the upgraded elements}.
\section{Can LIGO-India be a Late Yet Significant Success?}
This question has a conditional affirmative answer, which I examine now. What
is definite about the LIGO-India project is that it will miss the O5
observation phase of the LIGO-Virgo-KAGRA detectors, scheduled from late 2026
for a few years. We need to look for a significant participation beyond O5.
The analysis so far, especially regarding the reduced scientific relevance,
was based on the announced projections of the systematic upgrades
and developments beyond LIGO A+ design, which is the one to be implemented in
LIGO-India. However, these projections naturally have some uncertainty arising
from technical difficulties of cutting edge technology, as well as from
external factors like funding, global social and economic situation etc., as
one can understand. A very favourable aspect that guaranteed a special
position for LIGO-India in the detector network was the fact that its design
is identical to the two LIGO detectors in the USA, which operate at a higher
sensitivity than advanced Virgo, given its smaller baseline of 3 km. The
KAGRA detector at its design sensitivity cannot match even Virgo. Thus,
the possibility of a late LIGO-India to be the crucial element in achieving
localization capability of the detector network at the level of a few square
degrees is real and evident, especially if KAGRA fails to achieve the promised
design sensitivity beyond 130 Mpc even by 2032. The fact that LIGO-India will
replicate the successful design elements of the LIGO-USA detectors is its
reliable advantage. However, the duration to reach a decisive sensitivity
cannot be reduced significantly from the estimate that I discussed already.
Given this situation, if LIGO-India is commissioned with an initial
extragalactic sensitivity larger than 10 Mpc, before 2032, and tuned up to a
stable sensitivity of 300 Mpc by 2034, then it will enjoy a position of
prominence in the LIGO-Virgo network for a few years, till about 2040,
with progressive improvement to the design sensitivity beyond 500 Mpc.
However,\emph{ it is mandatory that LIGO-India is replanned with the A\#
design meant for post-O5 era, rather than the less sensitive A+ design, for it
to be prominent and truly relevant}. Such a major shift of focus is what I
propose and advocate, because \emph{I think that the A\# design is the right
option for a delayed LIGO-India}. This is certainly feasible by revamping the
present structure of management and work culture, along with a determined
action on all aspects of human resource development. However, this feat is
crucially dependent on the availability of necessary funds and significantly
augmented expert human resources, after the full approval of a revised DPR and
the project within the next two years. It is a small window of opportunity.
Again, the absence of a leading figure of science and technology to take the
LIGO-India project forward at the required pace is clearly felt. The five
factors -- reliable leadership, expert human power, funding, time factor, and
scientific relevance -- need to be urgently addressed and reviewed carefully
for any realistic strategy of execution.\footnote{I am a member of the LIGO
Scientific Collaboration (LSC) and the LIGO-India Scientific Collaboration
(LISC). This article is based entirely on my personal perception and estimates
about the LIGO-India project.}
\section{Summary}
In my earlier overview paper (2013) on the scope and plans for the LIGO-India
gravitational wave detector, I sketched some key developments on the large
canvas of the megascience project. It was then perceived as transformational
for Indian science, technology and higher education, and rejuvenating for
Indian physics and astronomy. More than a decade after the LIGO-India
proposal, and six years after its notional approval, the actual construction
of the infrastructure and the fabrication of the critical detector elements
are yet to start. This implies a shift in the schedule for implementation by
several years, and the integration in the global network by a decade, to
beyond 2032. Also, there are other hindering factors, like substantial cost
escalation, retirement of current experts etc. The consequences for the
science scope of the project and the definite erosion of the relevance of the
delayed LIGO-India detector in a network operation, in the era of GW and
multimessenger astronomy, are examined. The LIGO-India detector is sure to
miss the seminal decades of multi-messenger astronomy, 2015-2035, when various
kinds of sources are being discovered and many fundamental aspects are being
tested. However, \emph{with timely revamped action, the LIGO-India project
with a A\# design can still be a late yet significant success}, serving a
prominent role in the detector network and precision GW astronomy for a decade
after 2034. My assessment urges an immediate evaluation and restructuring of
the LIGO-India project in the context of the unavoidable stretch in its schedule.
\section*{Acknowledgements}
This article benefited immensely from the suggestions and comments by David
Shoemaker. I thank Martine Armand for many helpful comments.
\section{Introduction}
The Crab Pulsar is a recognized standard for astronomical timing measurements (\cite{lyn97}). As we prepared to demonstrate the performance of the STIS after installation in HST in 1997 February (\cite{kim98}), we found the Crab Pulsar to be uniquely suited for testing the time-tagging capability of the STIS MAMA detectors. The Crab Pulsar is well studied at multiple wavelengths, and is the only periodic rapid variable in the ultraviolet that is bright enough to record, with good signal to noise ratio, the light curve in a few HST orbits. The arrival time and period are monitored regularly at radio wavelengths (\cite{lyn97}). The Crab Pulsar pulse profile has been studied at ultraviolet and visible wavelengths (\cite{per93}) with the High Speed Photometer (HSP) on the Hubble Space Telescope, in the near infrared (\cite{eik96}, \cite{eik97}) and at radio and gamma-ray wavelengths (\cite{lun95}, \cite{tom97}). While giant pulses, as much as 60 to 2000 times the average main pulse, are detected at radio wavelengths (\cite{lun95}), Percival et al. (1993) found no evidence for giant pulses in the near ultraviolet or visible spectral regions.
Spectroscopy of the Crab Pulsar in the ultraviolet has been limited to two observations: by the International Ultraviolet Explorer (IUE) (\cite{ben80}) extending from $2000$ - $3000$~\AA\ with low signal-to-noise, and an unpublished Faint Object Spectrograph (FOS) spectrum extending from $2000$ - $6000$~\AA\ in the HST archives. Neither are sufficiently deep to provide useful information on the interstellar absorption lines and provide only limited information on the $2200$~\AA\ diffuse absorption feature due to interstellar dust.
\section{Observations}
In the ultraviolet, with the NUV MAMA and the Far Ultraviolet (FUV) MAMA, we are able to observe transient phenomena with good spatial resolution ($0\farcs025$ /pixel), good time resolution ($125~\mu$s) and the spectral resolution appropriate to the observation of interest (\cite{kim98}). In the NUV, the following modes are possible: direct imagery with a limited set of filters, long slit spectroscopy at low and moderate dispersions (R=600 and 10,000) and small slit spectroscopy at high dispersions (R=46,000 and 114,000) (\cite{hst96}). This observation is an example of the time-tagging capabilities of STIS in the ultraviolet at low resolution.
STIS was used to observe the Crab Pulsar on 1997 August 7 for three orbits using the G230L grating with the NUV MAMA detector. As we were testing the ability of the time-tag mode versus the on-board accumulate (ACCUM) mode, observations during the first orbit were recorded in ACCUM mode for three 600 s accumulations and observations during the remaining two orbits were recorded in time-tag mode. The observation log is listed in Table 1. To the limit of photon statistics, the spectra recorded in the two modes are identical. As target acquisition had not been adequately demonstrated when the observation was prepared, a conservative $2\arcsec \times 2\arcsec$ aperture was specified. By the time the observation occurred, target acquisition using the CCD camera was routine and transfer to the MAMA modes was well proven. The pulsar was indeed well centered in the slit. Wavelengths are measured to within 1/2 low resolution (lores) pixel or $0.8$~\AA. The NUV MAMA has a higher than intended dark count rate due to phosphorescence in the window; the dark count rate for the entire detector was about 1,700 counts per second. By comparison, the Crab Pulsar count rate was about 70 per second (or two per individual period). However, the pulsar spectrum is dispersed over 1024 elements with a spatial resolution determined by the telescope point spread function in the $1600$ - $3200$~\AA\ region. With the full width at half maximum of $0\farcs05$ and low scatter of the optics in this spectral region, virtually all of the photon events from the pulsar land on an eleven pixel wide spatial slice, which is $1\%$ of the total detector format. The detector dark count for the Crab Pulsar is thus about seventeen counts per second, or one fourth of the signal.
\placetable{tbl-1}
\section{Data Reduction}
As this was a first-time observation with time-tag mode, all data analysis was done using IDL software tools developed for testing of the STIS before launch. We derived the pulse profile in the following manner (\cite{lin97}): a two-dimensional ACCUM image was constructed by summing up the raw time-tagged events (time, X, Y) for each X and Y coordinate of the MAMA detector. This image served as a template to select time-tag events that are associable with the dispersed pulsar spectrum. Each event arrival time was corrected to the solar-system barycenter arrival time. We then selected photon events spatially located within 5 pixels ($0\farcs125$ along the slit) of the pulsar spectrum's peak. We computed the period by maximizing the sum of the squares of the values in the pulse profile divided into 512 time bins. This resulted in a measured period, at the start of observation O45701C0M (Modified Julian Day (MJD) 2450667.70478), of $33.473313$~ms, with a statistical error of less than the last digit. The computed period based upon Jodrell Bank Crab Pulsar timing standards (\cite{lyn97}) is identical.
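The period determination described above is a standard epoch-folding search. A minimal sketch in Python, with synthetic arrival times standing in for the STIS event list (the function names and demonstration parameters are mine, not the original IDL tools):

```python
import numpy as np

# Minimal epoch-folding period search, as described in the text: fold
# the event times at each trial period into 512 phase bins and pick
# the period that maximizes the sum of squares of the binned profile.

def folded_sum_of_squares(times, period, nbins=512):
    phases = (times / period) % 1.0
    profile, _ = np.histogram(phases, bins=nbins, range=(0.0, 1.0))
    return float(np.sum(profile.astype(float) ** 2))

def best_period(times, trial_periods, nbins=512):
    scores = [folded_sum_of_squares(times, p, nbins) for p in trial_periods]
    return trial_periods[int(np.argmax(scores))]

# Synthetic demonstration: events concentrated near one pulse phase
rng = np.random.default_rng(0)
true_p = 33.473313e-3                       # period in seconds
cycles = rng.integers(0, 100_000, size=5000)
phases = rng.normal(0.5, 0.02, size=5000) % 1.0
times = (cycles + phases) * true_p

trials = true_p + np.linspace(-5e-8, 5e-8, 201)
print(f"recovered period: {best_period(times, trials) * 1e3:.6f} ms")
```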
The time-tag observations revealed that the initial two-dimensional image had two problems: the extracted pulse profile had an extended wing for both the primary and interpulse peaks, and the detector background showed a weak pulsar period amplitude. We determined that the flight hardware was recording the X,Y coordinates, then waiting for the next event before assembling the T, X and Y event in buffer memory. The event was recorded as T(n+1),X(n),Y(n) instead of T(n),X(n),Y(n). A simple shift of the time vector by one X,Y pair rectified both the extended wing and the background periodicity. Figure 1 demonstrates the success of this procedure by plotting the STIS pulse profile on the same graph as the HSP pulse profile (\cite{per93}), originally recorded at $21.5~\mu$s resolution in the ultraviolet ($1600-3200$~\AA\ bandpass), but resampled to 512 bins or approximately $62.5~\mu$s each. We checked for period drift across the two observing periods in adjacent orbits by plotting the photon events with phase period as demonstrated by Percival et al. (1993) in their Figure 10. No drift could be detected from the beginning of the first orbit to the end of the last orbit.
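The off-by-one repair amounts to re-pairing each coordinate pair with the time stored one record earlier. A toy illustration (the array names are hypothetical, not the STIS data format):

```python
import numpy as np

# The downlinked records paired the time of event n+1 with the
# coordinates of event n:  record[n] = (T(n+1), X(n), Y(n)).
# The repair re-pairs each (X, Y) with the time one record earlier.

def fix_timetag(t_rec, x_rec, y_rec):
    # event n (n >= 1) takes the time stored in record n-1; the first
    # coordinate pair has no stored time and is dropped
    return t_rec[:-1], x_rec[1:], y_rec[1:]

# toy data: true events are (t, x, y) = (n, 10n, 20n)
t_true, x_true, y_true = np.arange(5.0), 10 * np.arange(5), 20 * np.arange(5)
# hardware stores each coordinate pair with the *next* event's time
t_rec, x_rec, y_rec = t_true[1:], x_true[:-1], y_true[:-1]

t, x, y = fix_timetag(t_rec, x_rec, y_rec)
print(list(zip(t, x, y)))   # each time again matches its own (x, y)
```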
This observation utilized the MAMA detector to provide a data cube representing T(time), X(spectrum) and Y(slit), but other MAMA applications (such as the echelle modes and the direct imaging modes) apply the detector coordinates to provide alternative spectral or spatial information. For the Crab Pulsar observation, sub-cubes can be extracted with respect to time to determine spectral properties of various phases of the main pulse, the interpulse, the bridge, rising edges and falling slopes. Other sub-cubes can be extracted, using the pulsar spatial position along the slit, to study the pulse profile for smaller slices of the spectral region. One unique representation is reproduced in Figure 2 (=Plate X). The X-axis depicts the uncalibrated spectrum, while the Y-axis depicts the pulse profile. The two-dimensional plot shows the variation of the spectrum with phase of the pulse profile. No wavelength-dependent structure is noticeable to the eye.
The spectra were extracted using an eleven pixel wide extraction slit. As one orbit had been expended in the ACCUM mode immediately before the time-tag orbits, we first compared the ACCUM spectrum with the time-tag spectrum. To the limit of statistical variation in number of photon events, we found no difference in the spectral properties. To obtain the best S/N possible, we combined the time-tag spectrum with the ACCUM spectrum. The extracted and summed spectrum (Figure 3a) was flux-calibrated using the on-orbit calibrations of Collins and Bohlin (1998). Absolute fluxes are accurate to within a few percent, based upon repeatability of flux measures of the several reference stars.
\section{Results}
We examined the data cube of the photon events for variations of the pulse profile as a function of wavelength and variations of the spectrum for selected intervals of the pulse profile. In terms of the wavelength, pulse profile space of Figure 2, we collapsed the data into $400$~\AA\ bins between $1600$~\AA\ and $3200$~\AA\ and derived four independent pulse profiles. For all four intervals, the ratio of the main pulse to the interpulse, or P1/P2, is stable to well within $5\%$. We looked at the spectrum of the main pulse, the interpulse, the rising edges and the falling edges, as well as the interval above $80\%$ of the main pulse. In no case do we find changes in the spectrum that exceed those expected from photon statistics, which are at the $5\%$ level. The number of events recorded in the bridge, the portion of the pulse profile between the main pulse and the interpulse, is too few to determine accurately its spectrum. The physical event creating the near ultraviolet pulsar emission is neutral in color between $1600$ and $3200$~\AA.
The net emission passing through the $2 \arcsec \times 2 \arcsec$ aperture has two components: the pulsar, or stellar emission, and the nebular emission. The point-spread function of the telescope is at its best in this spectral region (\cite{sch87}), but still has extended wings. As the NUV MAMA is a two-dimensional detector that can view slits up to $26 \arcsec$ in height, we can determine the dark count rate across the detector by measuring the detected rate outside of the $2\arcsec$ slit height (80 pixels). Five regions, each eleven pixels in height, were selected for analysis: Region A, centered on the pulsar; B1, centered 30 pixels below the pulsar; B2, centered 30 pixels above the pulsar; C1, centered 60 pixels below the pulsar; C2, centered 60 pixels above the pulsar. Here, we can utilize the periodic variability of the pulsar to estimate the nebular contribution. The average dark count will be $C=(C1+C2)/2$. The nebula plus dark count plus wings of the pulsar would be $B=(B1+B2)/2$. The pulsar profile would be $D=A-B$. We can then compute the nebular contribution to be $E=B-C-$$f$D, where $f$ is fractional contribution of the stellar point spread function. We determined the value for $f$ by computing the pulse profile for E. If $f$ is too small, residual bumps would be seen in an extracted pulse profile at the positions of the main pulse and interpulse. Likewise, if $f$ is too large, residual dips would be seen. We determined the value for $f$ to be 0.016, which is consistent with the telescope point spread function measured on standard calibration stars. We measure the nebular continuum to be $\sim 3\%$ of the pulsar continuum in a $2\arcsec \times 0\farcs275$ slice, or about $1.1 \times 10^{-15}$~ergs~cm$^{-2}$~s$^{-1}$~arcsec$ ^{-2}$.
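The decomposition above can be sketched numerically. Here the fraction $f$ is obtained by least-squares removal of the $D$-shaped component from $B-C$, which is equivalent to the bump/dip criterion described in the text; all profile amplitudes below are invented for illustration:

```python
import numpy as np

# Sketch of the background decomposition described above:
#   A = on-pulsar slice, B = near slices, C = far slices
#   D = A - B  (pulse profile),  E = B - C - f*D  (nebular estimate)
# f is found by requiring that E carries no pulse-phase modulation.

rng = np.random.default_rng(1)
nbins = 512
phase = np.arange(nbins) / nbins

# synthetic pulse shape: main pulse plus weaker interpulse
pulse = np.exp(-0.5 * ((phase - 0.25) / 0.02) ** 2) \
    + 0.4 * np.exp(-0.5 * ((phase - 0.65) / 0.03) ** 2)

f_true, dark, nebula = 0.016, 17.0, 0.5
A = dark + nebula + 100 * pulse + rng.normal(0, 0.3, nbins)
B = dark + nebula + f_true * 100 * pulse + rng.normal(0, 0.3, nbins)
C = dark + rng.normal(0, 0.3, nbins)

D = A - B               # for small f, essentially the pulse profile
resid = B - C
dD = D - D.mean()
f = np.dot(resid - resid.mean(), dD) / np.dot(dD, dD)
E = resid - f * D       # nebular estimate, free of pulse modulation
print(f"recovered f = {f:.4f}")   # close to the injected 0.016
```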
At radio wavelengths, the Crab Pulsar has the property of superpulses centered on the main pulse for one random period out of every forty (\cite{lun95}). These superpulses range from 33 to 2000 times the average main pulse. The measured STIS light curve has a raw count rate for the Crab Pulsar of about two events per individual period. Essentially each event is associable with either the peak or the interpulse. We looked at the histogram distribution for the time interval in each individual pulse profile around the main pulse. We found the distribution to be essentially Poissonian in character. Artificially injecting a second event in every fortieth main pulse changed the distribution to be noticeably different from a Poissonian histogram. We also looked at the entire light curve and found no significant evidence for large pulses. Based on this simplistic test, we detect no superpulses that are several times larger than the average main pulse.
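The simplistic test described, comparing the distribution of counts per pulsar period against a Poisson histogram with and without injected events, can be sketched as follows (the event rates here are synthetic, mine):

```python
import numpy as np
from math import exp, factorial

# The counts recorded in each individual pulsar period should follow
# a Poisson histogram for a steady source; injecting an extra event
# into every 40th period distorts it.

rng = np.random.default_rng(2)
nper, mu = 100_000, 2.0                # periods and mean counts/period

def chi2_vs_poisson(counts, mu, kmax=10):
    obs = np.bincount(counts, minlength=kmax + 1)[: kmax + 1]
    expect = np.array([len(counts) * mu**k * exp(-mu) / factorial(k)
                       for k in range(kmax + 1)])
    mask = expect > 5                  # skip sparsely populated bins
    return float(np.sum((obs[mask] - expect[mask]) ** 2 / expect[mask]))

plain = rng.poisson(mu, nper)
chi2_plain = chi2_vs_poisson(plain, mu)

injected = plain.copy()
injected[::40] += 1                    # one extra count every 40th period
chi2_inj = chi2_vs_poisson(injected, mu)

print(f"chi2 (plain):    {chi2_plain:.1f}")
print(f"chi2 (injected): {chi2_inj:.1f}")   # noticeably larger
```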
The spectrum was dereddened using the Savage \& Mathis (1979) standard galactic interstellar extinction curve and increasing values of $E(B-V)$, until the dust absorption feature at $2200$~\AA\ disappeared. A value of $E(B-V)=0.55\pm 0.05$ results in a smooth continuum (Figure 3b). Fitting the dereddened spectrum to a power-law spectrum, we find $ \alpha _{\nu}= -0.3 \pm 0.2 $. This value compares very favorably to the value of $E(B-V)=0.51^{+0.04}_{-0.03}$ obtained for the central nebula from IUE and HUT spectra by Blair et al. (1992). Alternatively adopting $E(B-V)=0.51$ gives $\alpha _{\nu}= -0.1 \pm 0.2 $. Consistent with Percival et al. (1993), we find the standard interstellar extinction curve to apply, contrary to the results of Benvenuti et al. (1980) who argued for a more peaked $2200$~\AA\ bump. This means that a contribution from supernova ejecta to the extinction curve, as suggested by Benvenuti et al. (1980), is not needed. These values for $ \alpha _{\nu} $ are significantly less than the value measured by \cite{hil97} for the only other pulsar observed in the ultraviolet, PSR B0540-69 in the LMC, which has an $\alpha _{\nu} = -1.6 \pm 0.4 $.
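The dereddening procedure, scanning $E(B-V)$ until the $2200$~\AA\ bump vanishes and then fitting a power law, can be illustrated with a crude stand-in extinction curve; the Lorentzian bump shape and all coefficients below are invented for the sketch, not the Savage \& Mathis (1979) curve:

```python
import numpy as np

# Sketch of the dereddening procedure: increase E(B-V) until the
# 2200 A bump vanishes, then fit a power law to the dereddened
# spectrum. k(lambda) below is a crude invented stand-in curve.

wav = np.linspace(1600.0, 3200.0, 200)          # Angstrom

def k_ext(wav):
    bump = 3.0 / (1.0 + ((wav - 2175.0) / 200.0) ** 2)
    return 6.0 + bump                           # A_lambda / E(B-V)

# synthetic "observed" spectrum: F_nu ~ nu^-0.3 reddened by E(B-V)=0.55
nu = 1.0 / wav                                  # proportional to frequency
f_obs = nu ** -0.3 * 10 ** (-0.4 * 0.55 * k_ext(wav))

def bump_strength(ebv):
    # residual bump around 2175 A after dereddening with this ebv
    logf = np.log10(f_obs * 10 ** (0.4 * ebv * k_ext(wav)))
    outside = (wav < 1900) | (wav > 2600)
    base = np.interp(wav, wav[outside], logf[outside])
    return np.max(np.abs(logf - base))

# scan E(B-V) until the bump disappears
grid = np.arange(0.30, 0.80, 0.01)
ebv = grid[np.argmin([bump_strength(e) for e in grid])]

# power-law index from a straight line in log F_nu vs log nu
logf = np.log10(f_obs * 10 ** (0.4 * ebv * k_ext(wav)))
alpha = np.polyfit(np.log10(nu), logf, 1)[0]
print(f"E(B-V) = {ebv:.2f},  alpha_nu = {alpha:.2f}")
```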
The spectrum of the Crab Pulsar includes several weak absorption lines (Table 2). The measured extinction implies that the estimated neutral hydrogen column density along the line of sight is $2.9 \times 10^{21}$ cm$^{-2}$, from the relationship given between $E(B-V)$ and $N(H~I)$ by de Boer et al. (1987), which appears to hold for many different lines of sight (\cite{shu85}). From the equivalent widths of the absorption lines, a naive interpretation would lead to significant depletion ($\sim 10^{3}$). However, the Crab is at 2 kpc, and multiple interstellar clouds are likely intervening with currently unknown properties. We must defer to other studies at higher dispersion to better understand the weak absorption line measures. From the ratio $N(Mg I)/N(Mg II)$, it is unlikely that the relatively small equivalent widths of these absorption lines are due to the population of higher ionization states for these elements.
We estimate that the centroids of the Mg I, Mg II and Fe II lines are within 2 pixels, or 170 km s$^{-1}$, of the laboratory positions. We find no evidence of blue-shifted components up to $-1900$ km s$^{-1}$ that would be expected if absorptions originate in the fast-moving supernova ejecta along line of sight (\cite{dav85}; \cite{nas96}). A fast shell (\cite{che77}) is expected to be too highly ionized to be observed in the spectral region $1600$ to $3200$~\AA\ (\cite{lun86}). Also, there are no detected emission line filaments directly in the line of sight to the pulsar. Therefore, it is most probable that the detected absorbing material is in the intervening interstellar clouds or a pre-existing stellar shell (\cite{nas96} and references therein).
\placetable{tbl-2}
\section{Conclusions}
The STIS UV time-tag mode provides a capability of separating phase-dependent spectra with timing resolution as fine as $125~\mu$s. In this application, the Crab Pulsar is demonstrated to have no major wavelength dependence of features within the pulse profile in the $1600 - 3200$~\AA\ spectral region. The $2200$~\AA\ diffuse feature is consistent with the standard interstellar diffuse extinction; the interstellar line absorptions, low in strength, are consistent with low velocity components indicating that the absorption is not coming from the blue-shifted filaments associated with the Crab Nebula. This does not rule out the possibility that some of the line absorption may arise from a low velocity shell of gas due to the wind of the progenitor star, nor that a fast-moving high-ionization shell, including highly ionized magnesium, surrounds the Crab Nebula. Given the 2 kpc distance, the most likely source of the observed features is the intervening interstellar medium. At high galactic latitudes the gas and dust conditions may vary considerably and significantly influence the observed column densities. Indeed, observations are needed both further into the ultraviolet for the higher excitation absorption lines and at visible wavelengths to determine the intervening cloud structure.
\acknowledgments
We thank the STIS Instrument Development Team and the STIS Servicing Mission Orbital Verification team for their support in obtaining these observations.
\clearpage
\begin{deluxetable}{crrr}
\tablecaption{STIS Observations of the Crab Pulsar \label{tbl-1}}
\tablewidth{0pt}
\tablehead{
\colhead{OBSERVATION} & \colhead{MODE} & \colhead{START TIME} &
\colhead{EXPOSURE TIME} }
\startdata
& &MJD2450667.+ &s\nl
O45701O10 &ACCUM &0.64494 &3 x 600\nl
O45701C0M &TIME-TAG &0.70478 &2400\nl
O45701C2M &TIME-TAG &0.77198 &2400\nl
\enddata
\end{deluxetable}
\begin{deluxetable}{crr}
\tablecaption{Interstellar Lines Identified in Crab Pulsar
Spectrum\label{tbl-2}}
\tablewidth{0pt}
\tablehead{
\colhead{Line Identification} & \colhead{Measured Wavelength} &
\colhead{Equivalent Width}}
\startdata
Fe II 2587 \AA\ &2588.8 \AA\ &$0.27\pm0.20$ \AA\nl
Fe II 2600 \AA\ &2600.9 \AA\ &$0.40\pm0.21$ \AA\nl
Mg II 2796 \AA\ &2795.1 \AA\ &$0.79\pm0.25$ \AA\nl
Mg II 2804 \AA\ &2804.3 \AA\ &$0.55\pm0.27$ \AA\nl
Mg I 2853 \AA\ &2854.4 \AA\ &$0.93\pm0.33$ \AA\nl
\enddata
\end{deluxetable}
\section{INTRODUCTION}
\label{sec:introduction}
ULTRASAT is a space mission carrying a scientific telescope designed for time-domain astronomy.
Under the mission leadership of the Weizmann Institute of Science (WIS)\footnote{\href{https://www.weizmann.ac.il/particle/}{Weizmann Institute of Science}, 234 Herzl Street, Rehovot 7610001 Israel} and the Israel Space Agency (ISA)\footnote{\href{https://www.space.gov.il/en/}{Israel Space Agency}, Derech Menachem Begin 52, Tel Aviv, Israel}, the project responsibilities are shared among science institutes and industry partners. The camera in the telescope focal plane is designed and developed by DESY\footnote{\href{https://www.desy.de}{Deutsches Elektronen-Synchrotron DESY}, Platanenallee 6, 15738 Zeuthen}. The launch to geostationary orbit is planned for the second half of 2024.
The distinguishing feature of ULTRASAT and its Schmidt telescope is the wide field of view (FoV) of $\approx200$ square degrees. It will perform repeated observations of the sky with a $300\,\text{s}$ cadence in the near-ultraviolet waveband (NUV, $220\,\textnormal{-}280\,\text{nm}$).
ULTRASAT's high detection rate of transient events will enable the detection of electromagnetic (EM) counterparts of gravitational-wave sources and tidal disruption events, in addition to supernovae~\cite{Sagiv_2014}.
The central element of the telescope's camera is the detector assembly, equipped with four independent BSI CMOS UV sensor tiles. Each tile provides a photosensitive area of $45\times45\,\text{mm}^2$ consisting of $9.5\times9.5\,\mu\text{m}^2$ pixels. The pixels are realized in a 5T design that offers dual-gain capability, enabling a high-dynamic-range operation mode. In total, the four sensor tiles comprise 89.8 megapixels. The overall design of the camera has passed the preliminary design review, and first models are expected in 2022\cite{Asif_2021}.
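As a quick consistency check, the quoted tile geometry can be folded into the total pixel count (a back-of-the-envelope sketch; all input values are taken from the text above):

```python
# Sketch: check that four 45 mm x 45 mm tiles with 9.5 um pixel pitch
# total roughly 89.8 megapixels, as quoted in the text.

tile_side_mm = 45.0      # photosensitive area per tile (one side)
pixel_pitch_um = 9.5     # pixel pitch
n_tiles = 4

pixels_per_side = tile_side_mm * 1000.0 / pixel_pitch_um   # ~4737 pixels
total_megapixels = n_tiles * pixels_per_side**2 / 1e6

print(f"pixels per tile side: {pixels_per_side:.0f}")
print(f"total: {total_megapixels:.1f} MP")                 # ~89.8 MP
```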
\begin{figure} [ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=6.5cm]{figures/setup/Scout_full.jpg} & \includegraphics[height=6.5cm]{figures/setup/Scout_System_v2.jpg}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:sens_pack_devboard}
The test sensor package is shown (left), highlighting the two active pixel arrays, labeled S1 and S2, and the analog signal processing region on the central BSI CMOS die. The test sensor features a 4T pixel design with fixed gain and $15\,\mu\text{m}$ pitch. The two pixel arrays differ in one respect: an additional N-well inside S2 acts as a guard ring for each pixel to reduce blooming effects. For ULTRASAT, however, no such guard ring is planned. Hence, if not stated otherwise, only pixel array S1 is subject to the measurements reported here. The development board is shown (right) in operational condition with the sensor mounted and placed inside the light-tight laboratory environment. The different components and interfaces of the system are marked in the picture.}
\end{figure}
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6.5cm]{figures/qe/QE_sim.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qe_sim}
Numerical simulations comparing the three different ARC options, labeled Tstd, T1 and T2, show the quantum efficiency of the test sensors for light incidence at an angle of $25$ degrees. The ULTRASAT operational waveband is highlighted in gray over the range of $220\,\text{nm} \leq \lambda \leq 280\,\text{nm}$.}
\end{figure}
In advance of the production of the final ULTRASAT flight sensor, test sensors sharing a comparable pixel design and epitaxial layer were provided by Tower Semiconductor (Fig.~\ref{fig:sens_pack_devboard}). We report on the characterization study of these test sensors, which is carried out in order to inform the decision on the type of anti-reflective coating (ARC, Fig.~\ref{fig:qe_sim}) to be used on the final sensor. Furthermore, this study aims to verify whether the test sensors satisfy the ULTRASAT performance requirement for noise characteristics at the operating temperature of $200\,\text{K}$. The measurements rely on an updated high-precision photometric calibration setup~\cite{Kuesters2020}. The characterization encompasses a comprehensive test of sensor gain, linearity, and dark current, as well as quantum yield and quantum efficiency.
\section{PHOTOMETRIC CALIBRATION}
\label{sec:photometric_calibration}
The photometric calibration setup at DESY Zeuthen provides monochromatic light from $200\,$nm to $1200\,$nm. As a light source, we use a Laser-Driven Light Source (LDLS), which supplies a broadband spectrum with enhanced output in the UV. The light is dispersed by two monochromators working in series with blazed diffraction gratings. The combined bandwidth is $1.7\,$nm or $3.4\,$nm, depending on the chosen gratings. The light flux on the test sensor is estimated during its illumination: a photodiode measures one-half of the setup's light and serves as our working standard (WS), which is calibrated against our two flux-reference photodiodes, or primary standards (PS), calibrated by the Physikalisch-Technische Bundesanstalt (PTB)\footnote{Calibrated from $200\;$nm to $400\;$nm.} and the National Institute of Standards and Technology (NIST)\footnote{Calibrated from $300\;$nm to $1100\;$nm.}. As a wavelength standard, we use low-pressure gas lamps\footnote{The wavelengths of the emission lines are taken from the Atomic Spectra Database\cite{Nist_spec}}. To transfer the calibration from the spectral lines to the LDLS's spectrum, we use an absorption line filter.
\subsection{Laboratory setup}
\label{sub:photometric_setup}
A flat mirror (M1, see Fig.~\ref{fig:setup_scheme}), mounted on a Thorlabs HDR50 rotation stage, reflects the light from the different sources to illuminate the spherical mirror (M2\footnote{Spherical mirror with a focal length of 40\,cm and a diameter of 20\,cm.}). In the case of the LDLS (EQ99-X from Energetiq), two Thorlabs 2-inch off-axis parabolic mirrors transform an F/1 beam into an F/4 beam fitting the monochromator. A laser diode (Thorlabs CPS532-C2 laser diode at 532\,nm with 0.9\,mW) is available for alignment. The low-pressure gas lamps are encapsulated in a PTFE cylinder acting as an integrating sphere with an output slit of $\approx 5\,$mm width. M2 focuses the light onto the entrance slit of the first monochromator.
Before entering the monochromator, the light passes a filter wheel with longpass filters (F)\footnote[1]{No filter: $\lambda$\,$<$\,320\,nm; WG280: 320\,nm\,$<$\,$\lambda$\,$<$\,500\,nm; GG495: 500\,nm\,$<$\,$\lambda$\,$<$\,750\,nm; RG695: 750\,nm\,$<$\,$\lambda$\,$<$\,870\,nm; RG830: 870\,nm\,$<$\,$\lambda$}. These longpass filters remove the light from the broadband input source that would otherwise be transmitted as higher-order light due to the grating's interference. Positioning these filters in front of the monochromator is necessary so that the unavoidable fluorescent emission they produce is itself removed \cite{Reichel_2015}.
We use two Oriel Cornerstone 260 monochromators from Newport, whereby the entrance slit of the second monochromator replaces the exit slit of the first monochromator. In this configuration, the first monochromator preselects the wavelength range, and the second suppresses the out-of-band light from the first. Each grating turret is connected to Heidenhain ERN 480 5000 high-resolution encoders, allowing sub-arcsecond angle measurements to improve the system's wavelength calibration. For a detailed description of the double monochromator and the calculation of the system's wavelength, see \cite{Kuesters2020}. The beam leaving the double monochromator is collimated using a Thorlabs off-axis parabolic mirror (M3). To account for astigmatism from the double monochromator, we use a cylindrical mirror (M4) and redirect the beam into the attenuator setup with a flat mirror (M5).
The beam passes a UV-fused silica window (M6) used as a beam splitter. The reflected beam is focused by a UV-fused silica lens (L1) onto a reference diode to track variations in intensity. Using a system of seven linear motors F1-7, we can insert five reflective neutral density\footnote[2]{Three reflective neutral density filters with a nominal optical density (OD) of one, and two filters with a nominal optical density of two, from Edmund Optics.} filters and two clear UV-fused silica windows into the light beam. The filters are inserted at an angle of $\pm45$\,$^{\circ}$ to reject either $90\%$ or $99\%$ of the beam. The remaining light is split into two beams (W1\footnote[3]{W1, W2: UV fused silica windows $d=2$", used as beam splitter/combiner}), which are recombined at W2 after each passes a shutter (S1 \& S2). The recombined beam is then focused by an off-axis paraboloid (M9) onto a multimode fiber\footnote[4]{Mounted on a motorized 2d stage to enhance coupling efficiency.}. We can perform measurements with beam A (shutter S1), beam B (shutter S2), or both beams using the two shutters. If the test sensor is linear, the sum of the values measured with beam A alone and with beam B alone should equal the value measured with both beams open. A round variable neutral density filter (F8) allows us to continuously decrease the light intensity of beam A by up to two additional optical densities.
The test sensor is measured in a light-tight enclosure. The light from the source enters the enclosure and is transported via a multimode fiber of type FG400AEA (18\,m length) to the final optics held by a five-axis gantry robot (see Fig.~\ref{fig:qe_setup}, left). The robot can change its spatial position along all three Cartesian axes $[0,1070]\,\mathrm{mm}\times[0,680]\,\mathrm{mm}\times[0,580]\,\mathrm{mm}$ and rotate the optics $360\,^\circ$ in the azimuthal axis and within $[70, 180]\,^\circ$ in the polar axis. A reflective neutral density filter is used as a beam splitter (W3) to probe $\approx 50\,\%$ of the fiber output flux with a Hamamatsu S1337-1010BQ diode serving as WS (see Fig.~\ref{fig:qe_setup}, right). The reflected part is focused onto the test sensor. The WS's spectral response is calibrated together with the splitting factor of W3 by comparing it with the response of our PS placed in the test sensor position. The light spot on the test sensor has an FWHM of $(530\pm30)\,\mathrm{\mu m}$ and is always fully contained within the collecting area $A_c$ of the diode or the pixel array of the test sensor. A more detailed description of the setup and its characteristics can be found in K\"usters et al.\cite{Kuesters2020}.
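The flux bookkeeping behind this scheme can be sketched as follows. This is an illustrative calculation only: the function name and the example numbers (photocurrent, responsivity, splitting factor) are assumptions, not calibration data from the setup.

```python
# Sketch: infer the photon rate on the test sensor from the working-standard
# (WS) photocurrent, its calibrated responsivity, and the splitting factor
# of beam splitter W3 (all example values are ASSUMED, not measured).

h = 6.62607015e-34   # Planck constant [J s]
c = 2.99792458e8     # speed of light [m/s]

def photon_rate_on_sensor(i_ws_ampere, responsivity_a_per_w,
                          splitting_factor, wavelength_m):
    """Photon rate [1/s] on the test-sensor branch.

    splitting_factor = (flux to sensor) / (flux to WS), calibrated
    beforehand with the primary standard in the test-sensor position.
    """
    power_on_ws = i_ws_ampere / responsivity_a_per_w      # optical power [W]
    power_on_sensor = power_on_ws * splitting_factor
    return power_on_sensor * wavelength_m / (h * c)       # photons per second

# Example: 10 pA WS current at 250 nm, assumed responsivity 0.05 A/W,
# splitting factor 1.0 -> roughly 2.5e8 photons/s
rate = photon_rate_on_sensor(10e-12, 0.05, 1.0, 250e-9)
print(f"{rate:.3g} photons/s")
```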
\begin{figure}[]
\begin{center}
\includegraphics[width=0.85\textwidth]{figures/setup/Setup_scheme.pdf}
\caption[example]
{\label{fig:setup_scheme}
Schematic overview of the light source section of the calibration setup.
The components are:\newline
- Lamps: LDLS, 532\,nm Laser, low pressure gas lamp Hg\,/\,Ne\,/\,Ar\,/\,Na\,/\,Cd\newline
- M1: Flat mirror, rotatable (mounted on HDR50/M from Thorlabs) to select light sources \newline
- M2: Spherical mirror, $f=0.4$\,m, $d=200$\,mm \newline
- F: Filter wheel with longpass filters.\newline
- Mono1, Mono2: Cornerstone 260 monochromators\newline
- M3: Off-axis parabolic mirror from Thorlabs, $d=1$", ${\mathrm{RFL}}=2$"\newline
- M4: Concave cylindrical mirror, $d=1$", $f=2$"\newline
- M5: Flat folding mirror\newline
- M6: UV fused silica window as a beam splitter to retrieve reference light beam\newline
- A1: Aperture to define beam diameter\newline
- F1-F7: Reflective neutral density filters with ${\mathrm{ OD}}=0,1,2$\newline
- W1/2: UV fused silica windows $d=2$", used as beam splitter / combiner \newline
- S1/2: Shutters in beams A and B \newline
- M7,M8: remote controlled shutters\newline
- F8: Continuously variable reflective neutral density filter ${\mathrm{OD:}}\,0.04-2.0$ mounted to a HDR50 rotation stage\newline
- M9: Off axis parabolic mirror, same as M5, with a multimode fiber in its focus to the gantry robot, type FG400AEA.}
\end{center}
\end{figure}
\begin{figure} [ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=6.5cm]{figures/setup/QE_setup_v3.jpg} & \includegraphics[height=6.5cm]{figures/setup/QE_robot_setup_v2.jpg}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qe_setup}
The picture (left) shows the inside of the light-tight optical laboratory with the most important components highlighted. Through a multimode fiber, the output from the light source enters the setup, where it is mounted on a remote-controlled five-axis gantry robot. The stage carries a collimating mirror and a beam splitter from which light is focused onto a photodiode, operating as a working standard, and the test sensor. The schematic (right) of the setup illustrates the beam path. The gantry robot can move relative to the test sensor. At focus distance ($75\,$mm), the spot's FWHM is about $(530\pm30)\,\mathrm{\mu m}$ which covers approx. $1000$\,pixels. The test sensor can be rotated around its vertical axis for a variation of the incident angle.}
\end{figure}
\subsection{System performance}
\label{sec:performance_results}
The resulting light flux on the test sensor is a product of the spectral power of the LDLS, the throughput of the longpass filter, the spectral efficiency of the gratings, the throughput of the multimode fiber, the combined efficiency of the mirror optics, as well as the additional chosen optical density with the reflective neutral density filters.
We use pairs of blazed diffraction gratings. For wavelengths in the visual (VIS) and infrared (IR) bands ($\lambda > 365\,$nm) the groove density of the gratings is \SI{1200}{\groove\per\milli\meter}, whereas in the ultraviolet (UV) ($\lambda \le 365\,$nm) it is \SI{600}{\groove\per\milli\meter} to increase the light intensity in this regime. As a result, the overall bandwidths are $\approx 1.7\,$nm in the VIS/IR and $\approx 3.4\,$nm in the UV. Moreover, the photocurrent generated at the working standard varies with the spectral range: in the UV it ranges from 1\,pA at 220\,nm to 200\,pA at 365\,nm, and in the VIS/IR bands it varies between 1\,pA and 1\,nA.
\paragraph{Uncertainty on flux scale}
We use a Dual-Channel Picoammeter 6482 from Keithley to transfer the flux calibration from the PS to our WS and to estimate the light flux on the test sensor. Our primary standards are Hamamatsu S1337-1010BQ (ultraviolet, calibrated by PTB) and S2281 (optical and infrared, calibrated by NIST) photodiodes. The calibrations have a maximal uncertainty of $0.3\,\%$ from PTB and $0.15\,\%$ from NIST, respectively. The statistical errors dominate our uncertainties in the photocurrent: the systematic uncertainty of the absolute value is estimated from the manufacturer's calibration to be $0.2\,\%$, whereas the statistical error lies between $0.1\,\%$ at $1\,$nA and $2\,\%$ at $1\,$pA. As the photocurrent generated in the ultraviolet lies between $1\,$pA and $100\,$pA, the statistical uncertainty is the dominating error in the ULTRASAT wavelength regime.
\paragraph{Uncertainty on the wavelength scale}
The monochromators in use have a periodic error in the drive of the grating turret, and we observed stepping losses of their stepper motors. Using a high-resolution encoder mounted to the grating turret, we measure the grating angles, which allows us to predict the wavelength of the light passing the monochromator system\cite{Kuesters2020}. To verify the predicted (model) wavelength, we compare the measured wavelengths of emission lines in the spectra of low-pressure gas lamps to the NIST database\cite{Nist_spec}. To find the correct wavelength in the NIST database, we use a Fourier transform spectrometer on loan from Thorlabs (OSA201C, resolution of $\le$\,20\,pm for wavelengths $\le$\,1\,$\mu$m) to assign the emission lines of our lamps to the NIST database. We then obtain synthetic spectra for the lamps and transfer the NIST wavelengths to our model by fitting the local deviations between the synthetic and predicted wavelengths. These deviations follow a linear relation in the model wavelength.
For each low-pressure lamp, we find a different linear calibration, with a spread of $1\,$\AA, whereby the residuals of each lamp fit are on the order of $\approx0.2\,$\AA. We attribute the difference between the lamps to inhomogeneity in the illumination of the first monochromator's entrance slit, caused by direct light from the lamp\footnote[6]{The PTFE cylinder around the lamp is of insufficient size.}. This linear trend can be calibrated out. However, the calibration with the emission lines is only applicable to light from the PTFE cylinder and cannot be transferred directly to the spectrum of the LDLS. Instead, we measure the spectral transmission of a Holmium Didymium absorption line filter (HoDi filter) from Hellma, type UV45, to obtain the systematic uncertainty of the model wavelength with the LDLS. The filter provides absorption lines between $241\,$nm and $864\,$nm with optical densities between 0.07 and 0.74. It is shipped with a calibration provided by the manufacturer\footnote[7]{Its laboratory is approved by the Deutsche Akkreditierungsstelle (DAkkS).} with an accuracy of 0.2\,nm.
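The linear wavelength-scale correction described above can be sketched as a least-squares fit of the line-center deviations against the model wavelength. The line data below are hypothetical; only the fitting logic reflects the procedure.

```python
# Sketch of the linear wavelength calibration: fit the deviation between
# measured line centers and the model wavelength with a straight line,
# then subtract the fitted trend from subsequent model wavelengths.
# The (wavelength, deviation) pairs below are HYPOTHETICAL.

def fit_linear(x, y):
    """Ordinary least-squares fit y = a + b*x (pure Python)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

model_wl  = [365.0, 435.8, 546.1, 1014.0]   # model wavelengths [nm]
deviation = [0.05, 0.06, 0.08, 0.15]        # line-center deviations [nm]

a, b = fit_linear(model_wl, deviation)

def corrected_wavelength(lambda_model_nm):
    """Remove the fitted linear trend from a model wavelength."""
    return lambda_model_nm - (a + b * lambda_model_nm)

corrected = corrected_wavelength(546.1)     # correction is < 0.1 nm here
```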
To determine the system's stability, we performed repeated scans of three emission lines: the first a blend of two Cd I lines at 325.34622\,nm and 326.19951\,nm, the second an Hg I line at 546.22675\,nm, and the third an Hg I line at 1014.253\,nm. We scanned for changes over $\approx$\,300 hours, or $\approx$\,70 repetitions, with varying laboratory temperature. We find a correlation between the temperature and the fitted line centers within a temperature range from $17^{\circ}\,$C to $24^{\circ}\,$C, with a linear temperature coefficient of $9.7\,$pm\,/\,$^{\circ}$C for all wavelengths.
\par
We have shown that our double monochromator system has a high reproducibility and can be calibrated to high accuracy in the case of a homogeneous illumination of the monochromator entrance slit.\cite{Kuesters2020} Applying the linear calibration with the Hellma filter back to the QE measurements would require prominent signatures in the lamp spectrum. The spectrum of the LDLS shows only weak emission lines in the VIS and strong lines in the IR; a calibration on these would result in large extrapolation errors. We quantified the influence of the inhomogeneous illumination of the entrance slit by comparing filter calibrations with different couplings. The systematic uncertainty in the QE and QY measurements for the test sensors is estimated to be $\le$\,2\,nm, dominating over the contributions from the varying laboratory temperature and the measurement statistics.
To ensure a homogeneous illumination of the monochromator, we will implement a hollow PTFE cylinder of increased size to encapsulate the low-pressure gas lamp and geometrically avoid direct light reaching the monochromator. The new cylinder will also contain a borehole to illuminate the cylinder with a Xe high-pressure arc lamp. In this way, we can transfer the linear calibration of our model wavelength to the absorption lines of the HoDi Filter with the precision of the emission line fits and apply this to the measurements using the LDLS. This will allow us to decrease the wavelength uncertainty for the ULTRASAT flight sensor's measurements and other projects to $\le$\,0.4\,\AA.
\section{SENSOR CHARACTERIZATION}
\label{sec:sensor_characterization}
To assess whether the test sensor design fulfills the project's requirements and to arrive at a recommendation on the ARC options, we perform several tests on the test sensors provided by Tower Semiconductor. The test sensors are characterized in terms of gain, read noise, dark current, and spectral quantum efficiency. Furthermore, we quantify the detector's quantum yield and investigate its non-linearity.
\subsection{Gain, non-linearity and leakage current}
\label{sec:gain_non_linearity_and_leakage_current}
The gain of a CMOS sensor refers to the conversion factor used to translate the voltage generated by the collected charge in each CMOS pixel into analog-to-digital units (ADU). The gain $K$ is expressed in ADU per electron (\si{\adu\per\electron}) and is the constant of proportionality between the mean number of collected electrons $\mu_e$ and the mean digitized signal $\mu_{\text{digi}}$ of the sensor. Hence, the linear relationship is written as $\mu_{\text{digi}} = K \cdot \mu_e$.
Knowledge of the gain plays a crucial role in measuring the absolute quantum efficiency, where the number of photoelectrons detected by the test sensor under investigation is compared to the number of photons estimated from the working standard's photocurrent.
The gain is characteristic of the components of the sensor's analog signal processing circuits and can be determined from direct measurements on these components. The manufacturer reports a conversion factor of $52\,\mu\text{V}/\text{e}$. This corresponds to a gain of $K \hat{=} N_{\text{bit}} / V_{\text{AD}} \cdot 52\,\mu\text{V}/\text{e} = 0.42\,\text{ADU}/\text{e}$, with $N_{\text{bit}} = 2^{14}$ digitization levels (14-bit depth) and a supply voltage range of the analog-to-digital converter of $V_{\text{AD}} = 2\,\text{V}$.
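This conversion can be reproduced numerically; the following minimal check uses only the quantities quoted in the text:

```python
# Sketch: gain in ADU/e from the manufacturer's conversion factor,
# the number of ADC levels, and the ADC voltage range.

conversion_uV_per_e = 52.0    # manufacturer conversion factor [uV/e]
adc_levels = 2 ** 14          # 14-bit ADC
v_range = 2.0                 # ADC voltage range [V]

gain_adu_per_e = adc_levels / v_range * conversion_uV_per_e * 1e-6
print(f"K = {gain_adu_per_e:.3f} ADU/e")   # 0.426, quoted as ~0.42 in the text
```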
We use an alternative method to determine the gain, which exploits the statistical nature of the signal found in sensor images recorded under the exclusion of light. To distinguish sensor images with and without incident light, we refer to light and dark images, respectively. Our procedure is comparable to the photon transfer method (PTM)~\cite{Janesick1987}. In this case, the gain is determined from the analysis of dark images only. More specifically, an exposure time sequence of dark images is conducted; that is, several dark images are recorded at different durations of exposure time, $t_{\text{exp}}$. The timescale of these exposures ranges from one microsecond up to several seconds.
First, we measure pairs of images with the same exposure time, which are considered in order to eliminate fixed-pattern noise. Note that the pairing of images is exclusive for any given pair, such that correlation between pairs is avoided. From the image pair average, the mean signal per pixel $\mu_{\text{digi}}$ is calculated, whereas the signal variance per pixel $\sigma_{\text{digi}}^2$ is estimated from the image pair difference. Second, the mean signal per pixel from all available image pairs is plotted against the exposure time (Fig.~\ref{fig:dc_lin_fit}, left panel), whereby we expect a linear behavior for an ideal sensor. This relationship is characterized by the dark current rate $DCR$ (in ADU per pixel per second) and an offset given by the test sensor's bias level $\mu_{\text{bias}}$; a best-fit estimate is used to determine $\mu_{\text{bias}}$ and $DCR$. Finally, the signal variance per pixel is plotted against the mean signal per pixel (Fig.~\ref{fig:dc_lin_fit}, right panel) for all image pairs. Based on the assumption that the number of collected electrons follows Poisson statistics\cite{Canfield_1998}, the linear relationship has the gain $K$ as its proportionality constant and a y-axis offset set by the test sensor's read noise $\sigma_{\text{read}}^2$.
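A minimal numerical sketch of this mean-variance procedure is given below. It uses synthetic dark-image pairs with assumed parameters (gain, bias, read noise, dark current rate) rather than measured data, and it subtracts a known bias instead of fitting it from the exposure-time sequence; only the pairing and fitting logic follows the procedure described above.

```python
# Synthetic mean-variance (photon-transfer-like) sketch with ASSUMED parameters.
import random

random.seed(1)

K_TRUE = 0.42      # gain [ADU/e]       (assumed)
BIAS = 200.0       # bias level [ADU]   (assumed)
READ_E = 4.7       # read noise [e]     (assumed)
DCR_E = 5000.0     # dark current [e/s] (assumed, exaggerated for speed)

def dark_pixel(t_exp):
    """One dark-image pixel [ADU]: Poisson-like dark charge plus read noise."""
    n_dark = random.gauss(DCR_E * t_exp, (DCR_E * t_exp) ** 0.5)
    return K_TRUE * (n_dark + random.gauss(0.0, READ_E)) + BIAS

def pair_stats(t_exp, n_pix=20000):
    """Mean signal and variance estimated from one exclusive image pair."""
    a = [dark_pixel(t_exp) for _ in range(n_pix)]
    b = [dark_pixel(t_exp) for _ in range(n_pix)]
    mu = (sum(a) + sum(b)) / (2 * n_pix)                       # pair average
    var = sum((x - y) ** 2 for x, y in zip(a, b)) / (2 * (n_pix - 1))
    return mu, var

def fit_line(x, y):
    """Ordinary least-squares fit y = offset + slope*x."""
    n, mx, my = len(x), sum(x) / len(x), sum(y) / len(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return my - slope * mx, slope

pairs = [pair_stats(0.1 * i) for i in range(1, 11)]      # t_exp = 0.1 .. 1.0 s
mean_signal = [m - BIAS for m, _ in pairs]               # bias assumed known here
variance = [v for _, v in pairs]

# var = K * (mu - bias) + K^2 * sigma_read^2  ->  slope recovers K
offset, K_est = fit_line(mean_signal, variance)
print(f"estimated gain K = {K_est:.3f} ADU/e")           # close to K_TRUE
```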
\begin{figure} [ht]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6.5cm]{figures/dc/DC_lin_fit_TJ_SC_204_2020-11-04_dc_scan_voltage_test_000.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:dc_lin_fit}
Exemplary gain measurement results for the test sensor with ID \emph{TS204}, showing the mean signal against exposure time (left) and the signal variance against the bias-subtracted mean signal (right). The residuals are shown as relative deviations from the linear best-fit model in the lower panels. Each marker represents the data corresponding to an individual image pair.}
\end{figure}
Note that significant deviations from linear behavior have been observed in both cases for higher values of exposure time and mean signal. Possible effects that could account for this are discussed towards the end of this section. However, the inference of a model with which corrections, i.e., re-linearization of the measured data~\cite{soman2015non}, could be achieved was found to be outside the scope of this study. In order to minimize the impact of any non-linearities, the processing of all data sets is limited to a maximum of $1\,\text{s}$ and $2000\,\text{ADU}$, respectively. To better account for the statistical fluctuations, and especially the systematics introduced by remaining non-linearities, we used Monte-Carlo-simulated data based on the initial measurement data to estimate the best-fit parameters. The mean value and standard deviation of the resulting distribution of best-fit parameters are taken as the final measurement result.
Throughout this study, we evaluate a total of twelve test sensors. Thus, a central gain value can be calculated from this sample to be used in the analysis of subsequent measurements. Additionally, the analysis procedure yields the test sensor's read noise as a by-product and, similarly, a central value can be calculated. These values are presented in Tab.~\ref{tab:dc_results}. For a discussion of the dark current results, the reader is referred to Section~\ref{sec:dark_current}.
\begin{figure} [ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=6.7cm]{figures/gain/gain.png} &
\includegraphics[height=6.7cm]{figures/gain/rn.png}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:dc_gain_all_sensors}
The measurement results for gain (left) and sensor read noise (right) of all test sensors under investigation are shown. On both diagrams, the individual measurement results are compared in the left panel, whereas in the right panel, multiple repeated measurements are shown for the test sensor with ID \emph{TS223} to illustrate the reproducibility. Moreover, in the left panel, the results are compared for dark images and also light images, where the test sensor is illuminated with monochromatic light ($500\,\text{nm} \leq\lambda\leq 900\,\text{nm}$). The average values are indicated by horizontal lines together with bands representing the systematic and statistical uncertainties (Tab.~\ref{tab:dc_results}). The manufacturer reference of the test sensor's gain at $0.42\,\text{ADU}/\text{e}$ is shown by the horizontal dashed line on the left-hand side.}
\end{figure}
\begin{table}[ht]
\caption{Averaged measurement results of the gain measurement procedure}
\label{tab:dc_results}
\small{
\begin{center}
\begin{tabular}{|l|c|}
\hline
\rule[-1ex]{0pt}{3.5ex} Gain $K\,[\text{ADU/e}]$ & $0.418 \pm 0.001_{\mathrm{stat}}\, \pm 0.006_{\mathrm{sys}}$ \\
\hline
\rule[-1ex]{0pt}{3.5ex} Read noise $\sigma_{\text{read}}\,[\text{e}]$ & $4.70 \pm 0.02_{\mathrm{stat}}\, \pm 0.90_{\mathrm{sys}}$\\
\hline
\end{tabular}
\end{center}}
\end{table}
To determine the non-linearity and count-rate non-linearity, we illuminate the complete pixel array homogeneously with a defocused image of the multimode fiber output. Exposure time sequences are conducted with randomly sorted exposure times for different flux levels using the ND filters of the optical setup. Any variation in the light source's intensity results in stronger normally distributed residuals but does not affect the subsequent linear fit. For each exposure time we take several light and dark images, from which a region of $25\times25$ pixels is used to calculate averaged values. We take the median over all exposures at a fixed exposure time to reject outliers in the signal and background images. A background correction is applied by removing the median background level from the median signal level. We then divide the corrected signal level by the exposure time and obtain an estimate of the rate at which the signal was generated (Fig.~\ref{fig:non_lin}). For an ideal detector, the rate estimate should be constant until the pixels approach saturation or the ADC reaches its voltage range. However, for low rates we find the rate estimates to deviate from a constant well before saturation. We attribute this to transfer gate leakage, which is confirmed by the manufacturer as a feature of the test sensors. However, we can also see that our QE measurements (indicated by stars in Fig.~\ref{fig:non_lin}) are far away from these measurement conditions and are therefore not affected by leakage-induced signal rate non-linearity.
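The rate estimate described above reduces to a few lines of bookkeeping; the frame values below are hypothetical medians, not measured data.

```python
# Sketch of the background-corrected signal rate for one exposure time:
# median of the light frames minus median of the dark frames, divided by
# the exposure time. Input numbers are HYPOTHETICAL.
import statistics

def signal_rate(light_frames_adu, dark_frames_adu, t_exp_s):
    """Background-corrected signal rate [ADU/s] for one exposure time."""
    sig = statistics.median(light_frames_adu)   # median rejects outliers
    bkg = statistics.median(dark_frames_adu)
    return (sig - bkg) / t_exp_s

# An ideal detector yields a constant rate across exposure times; a drop
# at low signal would indicate leakage, as discussed in the text.
rate = signal_rate([1520.0, 1498.0, 1510.0], [205.0, 198.0, 201.0], 0.5)
print(f"{rate:.0f} ADU/s")   # (1510 - 201) / 0.5 = 2618 ADU/s
```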
\begin{figure} [h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6.5cm]{figures/gain/rate_plot.png}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:non_lin}
Measured linearity for a single sensor. We estimate the test sensor's signal rate, summed over all pixels, with varying exposure times at different flux levels and with varying flux at constant exposure time. The signal rate becomes non-linear when some pixels reach their ADC's saturation limit and when the transfer gate leakage current becomes non-negligible. In the latter case, the signal rate follows a power law. We set the transition to the leakage-dominated area (red shaded) at the intersection of the power-law trend with the signal rate in the linear regime at constant flux levels. The typical signal rates during the measurements of quantum efficiency (color-coded stars) and gain (red circles) are included in the diagram.}
\end{figure}
\subsection{Dark current}
\label{sec:dark_current}
Tower Semiconductor manufactured the test sensors received by DESY with the same BSI process that will be used for the final ULTRASAT sensor. Therefore, the test sensor has an epitaxial layer similar to that of the ULTRASAT sensor. Even though the pixel dimensions differ, we expect the test sensor's dark performance to be comparable to that of the ULTRASAT flight sensor. According to the requirement, the dark signal level of the ULTRASAT sensor at the operational temperature ($-70^{\circ}$C) shall be below 0.026\,e/pix/s~\cite{Asif_2021}. It is one of the crucial design parameters of ULTRASAT, directly related to the mission performance. We were therefore particularly interested in the dark signal level at the operating temperature and in the dark current doubling factor\cite{EMVA1288} of the test sensor.
The experimental setup for the investigation of the dark current's temperature dependence consists of a thermal enclosure, a temperature monitoring system, the test sensor, and an evaluation board to read out the test sensor (see Fig.~\ref{fig:sens_pack_devboard}). We use the thermal enclosure to cool the test sensor down to the required temperature and monitor the temperature of the test sensor during the experiment. The dark current at the different temperatures is estimated as described in Section~\ref{sec:gain_non_linearity_and_leakage_current}.
\begin{figure} [h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=7cm]{figures/dc/DC_vs_temp.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:dc_results_dc_vs_temp}
The measured dark current rate of test sensor \emph{TS102}, given in electrons per second and pixel, is shown as a function of temperature. From a power-law best-fit estimate, the dark current doubling factor is determined. Bias voltages are varied to optimize the DCR performance. The measurement with strong DCR optimization (red, circular markers) shows that the test sensor is capable of meeting the ULTRASAT requirement (star marker) if unimpeded by noise contributions other than thermal dark current. However, only the green and yellow measurements represent conditions under which the test sensor is designed to be fully operational.}
\end{figure}
Figure~\ref{fig:dc_results_dc_vs_temp} shows the results of our dark current measurement campaigns. Above $20^{\circ}$C the dark current decreased steeply with temperature. Below $20^{\circ}$C, however, the dark current did not follow this trend: further reduction of the temperature did not reduce the dark current as expected. We suspected the following reasons behind these dark current curves. First, the test sensor is self-heated during operation. Second, there is a source (external or internal) of dark signal in addition to the usual thermal excitation in the conversion layer.\\
To infer the impact of self-heating on the dark current of the test sensor, we used extension cables between the socket and mainboard (Fig.~\ref{fig:sens_pack_devboard}) to insulate the test sensor from all heating elements on the evaluation board. We observed a slight reduction in the dark current, but the issue persisted.
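The power-law best fit used in Fig.~\ref{fig:dc_results_dc_vs_temp} to extract the dark current doubling factor can be sketched as follows. The snippet below is an illustration only, not our analysis code; it assumes the common model $\mathrm{DCR}(T)=\mathrm{DCR}(T_{\mathrm{ref}})\cdot 2^{(T-T_{\mathrm{ref}})/T_d}$ and recovers the doubling temperature $T_d$ from a linear fit in $\log_2$ space, with purely synthetic numbers.

```python
import numpy as np

def doubling_factor(temps_C, dcr_e_per_s):
    """Dark-current doubling temperature T_d from a linear fit of
    log2(DCR) versus temperature, assuming
    DCR(T) = DCR(T_ref) * 2**((T - T_ref) / T_d)."""
    slope, _ = np.polyfit(np.asarray(temps_C, float),
                          np.log2(np.asarray(dcr_e_per_s, float)), 1)
    return 1.0 / slope  # degrees C per doubling of the dark current

# Synthetic example: a DCR that doubles every 6.5 C (illustrative only)
T = np.array([60.0, 50.0, 40.0, 30.0, 20.0])
dcr = 100.0 * 2.0 ** ((T - 60.0) / 6.5)
print(round(doubling_factor(T, dcr), 2))  # recovers 6.5
```

In the real data such a fit is restricted to the high-temperature regime, where thermal excitation dominates over any constant floor.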
\begin{figure}[ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=6.5cm]{figures/dc/Scout_IR_Emission_images.jpg} &
\includegraphics[height=6.5cm]{figures/dc/Scout_DC_600ms_10temp.png}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:dc_ir_emission}
Four images (Left) taken with an IR-sensitive camera show a test sensor under different operating conditions with varied parameters of the voltage supplies for the on-die analog signal processing blocks. The upper left image represents the nominal operating condition. The images are superimposed onto a base image taken under illumination from a $950\,$nm LED. A dark image (Right) captured with a test sensor under nominal operating conditions. A rising signal gradient from top to bottom of the image is clearly visible.
}
\end{figure}
At this point, we studied each dark image separately and observed a gradient in signal level across the image. The lower part of the dark image has a comparatively higher signal count than the upper part (see Fig.~\ref{fig:dc_ir_emission}, right). The non-uniformity of the dark signal suggested a source situated at the bottom of the test sensor pixel array. We rejected the idea of a source external to the test sensor since the freezer is lightproof. We therefore identified IR emission of the periphery electronics on the test sensor's die as the likely source and verified this by applying different bias voltages. For some of the applied bias voltages, we found that the gradient disappeared. This led to the conclusion that the source is internal.
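The top-to-bottom gradient described above can be quantified by collapsing a dark frame into a row-wise mean profile and fitting a linear trend. The following sketch is illustrative only (synthetic frame, invented numbers), not our analysis pipeline:

```python
import numpy as np

def row_profile_slope(dark_image):
    """Mean dark signal per row and its linear top-to-bottom trend.
    A significantly non-zero slope indicates a spatial gradient, e.g.
    from a light source located below the pixel array."""
    profile = dark_image.mean(axis=1)        # average over columns
    rows = np.arange(profile.size)
    slope = np.polyfit(rows, profile, 1)[0]  # ADU per row
    return profile, slope

# Synthetic dark frame: flat offset plus a weak injected gradient
rng = np.random.default_rng(0)
img = 100.0 + 0.05 * np.arange(200)[:, None] + rng.normal(0.0, 1.0, (200, 300))
_, s = row_profile_slope(img)
print(f"fitted gradient: {s:.3f} ADU/row")   # close to the injected 0.05
```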
To verify the IR emission of the readout electronics, we imaged the test sensor with an ASI183MM Pro camera from ZWO\footnote[8]{https://astronomy-imaging-camera.com/product/asi183mm-pro-mono} while the test sensor was operating. Tower Semiconductor provided us with the register values with which we could turn different parts of the test sensor readout electronics on or off. We imaged the test sensor in its nominal operating mode and with some of the subsystems of the readout electronics turned off or kept at a minimal operational condition. Figure~\ref{fig:dc_ir_emission} (left) was created by superimposing the IR emission images onto a white-light image of the test sensor under the different working conditions. Figure~\ref{fig:self_heating} confirmed our suspicion about the IR emission of the readout circuitry. The intensity and the location of the IR emission also differed depending on the mode of operation of the test sensor. Based on these findings, several mitigation methods were implemented in the design of the ULTRASAT sensor, on which we will report in future publications.
\begin{figure} [h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6cm]{figures/dc/DC_Fluke_IR.PNG}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:self_heating}
Infrared image of the test sensor and its readout electronics during operation, taken with a Fluke Ti25 Thermal Imager. The electronics heat up to $\approx 10\,^{\circ}$C above the test sensor itself (red). The sensor is $\approx 7\,^{\circ}$C warmer than the surrounding environment. The intensity and the exact temperature distribution vary depending on the operation parameters.}
\end{figure}
\subsection{Quantum yield}
\label{sec:quantum_yield}
Quantum yield (QY) is the average number of electron-hole pairs generated in the silicon photodetector per photon interacting with the silicon. Studies \cite{Blake2011, Canfield_1998} have shown that high-energy photons with a wavelength below $300\,$nm generate more than one electron-hole pair in silicon. In this lower wavelength regime, the QY can lead to an overestimation of the quantum efficiency (QE). The QE is the photon-to-electron conversion probability in silicon photodetectors, conventionally measured as the ratio of the collected electrons to the incident photons. Thus, one might overestimate the QE below $300\,$nm because of the additionally created electrons, and the QY is the corresponding correction factor. Although previous studies and models are available, we measure the QY in the operational bandwidth of ULTRASAT ($220\,$nm - $280\,$nm) for verification and completeness.
We follow the method by Janesick et al.~\cite{Janesick1987} to estimate the QY. We define the QY as
\begin{equation}
\mathrm{QY(\lambda)} = \frac{J(\lambda)}{K(\lambda > 400\,\mathrm{nm})} \, ,
\label{eq:qy}
\end{equation}
where $J$ is the gain estimated from light images with a wavelength below $400\,$nm and $K$ is the gain estimated from low-energy-photon light images with a wavelength above $400\,$nm, used for normalization.
We generate a full photon transfer curve (PTC) at every $5\,$nm from $220\,$nm to $280\,$nm by illuminating the test sensor completely with the monochromatic light. The data analysis follows the procedure described in Section~\ref{sec:gain_non_linearity_and_leakage_current}. The full width at half maximum of the monochromatic light used is $3.4\,\text{nm}$. For normalization, we estimate the gain $K$ at $500\,$nm, $700\,$nm and $900\,$nm.
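The light gains $J$ and $K$ entering Eq.(\ref{eq:qy}) are obtained from the shot-noise slope of the photon transfer curve (variance versus mean signal). The following sketch illustrates how a quantum yield above unity inflates the apparent UV gain; all numbers are invented for illustration and do not represent measured values:

```python
import numpy as np

def ptc_gain(mean_adu, var_adu2):
    """Slope of the photon transfer curve (shot-noise variance vs. mean
    signal), i.e. the apparent gain in ADU per generated electron."""
    return np.polyfit(mean_adu, var_adu2, 1)[0]

K = 0.8                                    # assumed true gain [ADU/e-]
mean_vis = np.linspace(100.0, 4000.0, 20)  # exposure series, mean signal [ADU]
var_vis = K * mean_vis                     # VIS/IR: QY = 1, var = K * mean
QY_true = 1.3                              # assumed yield below 300 nm
var_uv = K * QY_true * mean_vis            # UV: variance inflated by QY
J = ptc_gain(mean_vis, var_uv)             # apparent UV gain
print(round(J / ptc_gain(mean_vis, var_vis), 2))  # QY estimate per Eq. (qy)
```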
Figure~\ref{fig:qy_results} shows the measured QY for two test sensors in the wavelength range from $220\,$nm to $500\,$nm. We follow Heymes et al. to empirically model the QY~\cite{Heymes2020}. The authors use a theoretical description of the QY based on measured data on the ionisation energy of silicon\cite{kuschnerus1998characterization} to derive an eighth-order polynomial fit between $40\,$nm to $400\,$nm. We use this description to reproduce the QY data on which the fit is based. The comparison to our measured QY shows the two data sets are in good agreement. Thus, we take this polynomial to model the QY and to correct the QE measurements for wavelengths below $400\,\mathrm{nm}$. An overall systematic uncertainty of $2\,\%$ on the QY derived from this model is assumed.\cite{kuschnerus1998characterization}
\begin{figure} [h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6.5cm]{figures/qe/qy_zoom3.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qy_results}
The green and beige markers show test sensor measurements of quantum yield as a function of wavelength. The measured values are normalized to the average measured light gain between $500\,\mathrm{nm}$ and $900\,\mathrm{nm}$ (Fig.~\ref{fig:dc_gain_all_sensors}). The solid line at $QY = 1$ represents the theoretical quantum yield expected for light in the visual (VIS) and infrared (IR) spectrum. For comparison, the black circles show the reproduced quantum yield data based on direct measurements in silicon (see description in text)~\cite{kuschnerus1998characterization}. From this data a best-fit model for quantum yield (eighth-order polynomial)~\cite{Heymes2020} is derived, represented by the dashed line. The model is found to be valid for light below $400\,\text{nm}$ and is used in this study to compute the wavelength-dependent quantum yield correction factor.}
\end{figure}
\subsection{Quantum efficiency}
\label{sec:quantum_efficiency}
The measurement of the spectral quantum efficiency\footnote[9]{We use the definition of the interacting quantum efficiency as it is defined in } is a crucial part of this characterization, as it influences the decision on the anti-reflective coating (ARC) for the final UV sensor of ULTRASAT.
We measure the quantum efficiency $QE(\lambda)$ as a function of the wavelength $\lambda$. To illuminate the test sensor we use monochromatic light with a bandwidth of $3.4\,$nm below $365\,$nm and $1.7\,$nm above\footnote[1]{Diffraction gratings with $600\,\mathrm{grv}\,\mathrm{mm}^{-1}$ lead to an increase in flux compared to $1200\,\mathrm{grv}\,\mathrm{mm}^{-1}$ (Sec.~\ref{sec:performance_results}).}. The QE can be estimated by comparing the flux of incident photons per second, $F_{\gamma}(\lambda)$, with the test sensor's signal response $S$ in photoelectrons per second. The QE can be expressed as:
\begin{equation}
\mathrm{QE}(\lambda) = \frac{S}{\mathrm{QY}(\lambda)\,F_{\gamma}}\,,
\label{eq:spectral_qe}
\end{equation}
with the wavelength-dependent quantum yield $\mathrm{QY}(\lambda)$. To convert ADUs into photoelectrons we use the gain $K$ from Table~\ref{tab:dc_results} and the exposure time $t_{\mathrm{exp}}$: $S[\si{\electron\per\second}] = S[\si{\adu\per\second}] (K[\si{\adu\per\electron}]\,t_{\mathrm{exp}}[\si{\second}])^{-1}$.
For each data point we collect a number of images on the order of $O(10)$ and measure the photocurrent of the working standard simultaneously. We perform two background measurements each time, one before illumination and one after, and subtract the averaged background from the signal. The test sensor is illuminated by a circular beam spot with a FWHM of $(530\pm30)\,\mathrm{\mu m}$, which covers $\approx 1000$\,pixels, or $\approx 10\,\%$ of the test sensor's active area. The setup allows us to vary the angle of incidence (AoI). When the wavelength is varied, the focus is corrected automatically by the gantry robot following the lens equation and the Sellmeier dispersion equation.
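A minimal numerical sketch of Eq.(\ref{eq:spectral_qe}), using per-second rates and purely illustrative input values (none of which are measured quantities), is:

```python
def quantum_efficiency(signal_adu_per_s, gain_adu_per_e, qy, photon_flux_per_s):
    """QE following Eq. (spectral_qe): collected electrons per interacting
    photon, corrected for the quantum yield QY(lambda)."""
    signal_e_per_s = signal_adu_per_s / gain_adu_per_e  # ADU/s -> e-/s
    return signal_e_per_s / (qy * photon_flux_per_s)

# Illustrative numbers only
qe = quantum_efficiency(signal_adu_per_s=4.0e5,
                        gain_adu_per_e=0.8,
                        qy=1.2,                  # QY model value below 300 nm
                        photon_flux_per_s=1.0e6)
print(f"QE = {qe:.1%}")
```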
We measured the QE of test sensors of all three ARC options (Fig.~\ref{fig:qe_sim}) with a resolution of $1\,$nm in $[220\,\mathrm{nm}, 300\,\mathrm{nm}]$ and $2\,$nm in $[300\,\mathrm{nm}, 1100\,\mathrm{nm}]$. In both the UV and IR spectral bands we observe fringing. Some characteristic values of these measurements within the ULTRASAT operational waveband are summarized in Tab.~\ref{tab:qe_results}. To test reproducibility, we performed additional measurements on different test sensors of the same ARC option as well as repeated scans of single sensors between October 2020 and July 2021. The resolution of the reproducibility measurements is typically coarser, with $5\,$nm and $10\,$nm in the UV and visual/IR bands, respectively. Figure~\ref{fig:qe_reference} shows QE measurement results of representative test sensors of all three ARC options together with their corresponding simulations. The measurements are performed with an AoI of $20\,^{\circ}$, whereas for the simulated data an AoI of $\approx 25\,^{\circ}$ is assumed. The left-hand side plot shows the measurement across the full spectral range of the setup from $200\,$nm to $1100\,$nm. The measured QE qualitatively follows the trends of the simulations. As the measurements are performed with a smaller AoI, they are red-shifted with respect to the simulations. While coating options Tstd and T1 do not clearly decline towards the far-UV end of the ULTRASAT operational waveband, T2 rejects the far-UV more effectively and peaks at $\approx 245\,\text{nm}$, near the center of the waveband. At a wavelength of $220\,\mathrm{nm}$ its efficiency is $(19\pm1)\,\%$, and its maximum is $(80\pm2)\,\%$.
\begin{table}[ht]
\caption{Characteristic values from the measured QE of the three high-resolution scans in the ULTRASAT operational waveband (Fig.~\ref{fig:qe_reference})}
\label{tab:qe_results}
\small{
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\rule[-1ex]{0pt}{3.5ex} ARC & Test Sensor ID & $\text{QE}(220\,\text{nm})\,[\%]$ & $\text{QE}(250\,\text{nm})\,[\%]$ & $\text{QE}(280\,\text{nm})\,[\%]$ & Averaged QE $[\%]$ \\
\hline
\rule[-1ex]{0pt}{3.5ex} Tstd & TS207 & $50 \pm 2 $& $56 \pm 2$ & $46 \pm 2$ & $56 \pm 1$\\
\hline
\rule[-1ex]{0pt}{3.5ex} T1 & TS213 & $72 \pm 6 $ & $56 \pm 1$ & $53 \pm 1$ & $58 \pm 2$\\
\hline
\rule[-1ex]{0pt}{3.5ex} T2 & TS211 & $19 \pm 1 $ & $77 \pm 2$ & $48 \pm 1$ & $58 \pm 2$\\
\hline
\end{tabular}
\end{center}}
\end{table}
\begin{figure} [ht]
\begin{center}
\begin{tabular}{cc}
\includegraphics[height=6cm]{figures/qe/QE_reference_scouts_20_full_spie_.pdf} & \includegraphics[height=6cm]{figures/qe/QE_reference_scouts_20_UV_spie_.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qe_reference}
The spectral quantum efficiency for the three ARC options is compared across the entire spectral range (left) and the operational waveband (right) of ULTRASAT.
}
\end{figure}
To study systematic changes due to the storage of the test sensor under ambient air without dedicated cleanliness monitoring, we performed several measurements on one sensor over three months. Figure~\ref{fig:qe_reproducibility} shows three repeated scans of the same test sensor within the wavelength range 220-300\,nm.
\begin{figure} [H]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6.5cm]{figures/qe/QE_repro.png}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qe_reproducibility}
Shown are three QE scans of the same test sensor. The measurements were performed in the wavelength range 220-300\,nm over a period of 3 months. The first two measurements are subject to large wavelength uncertainties, shown by a black error bar in the upper left. The last measurement is calibrated using a Holmium Didymium absorption line filter, leading to 0.2\,nm uncertainties on the measurement.
}
\end{figure}
The uncertainty on the wavelength differs between the three scans. For the first and second scans in May and June, the Holmium Didymium absorption line filter was not available; therefore, the uncertainty is $\approx$\,2\,nm. For the last scan we used the absorption lines to calibrate the measurement to the filter's calibration uncertainty of 0.2\,nm provided by Hellma (compare Section~\ref{sec:performance_results}).
We observe a systematic shift of the QE peak wavelength with time, but due to the lack of a filter during the first and second measurements and the resulting larger uncertainties, we do not consider this shift to be significant.
We also see a change in peak amplitude. There are two potential reasons for this, a true change of the test sensor under ambient air or an artifact due to systematic differences in the wavelength of the QE measurements and the QY measurements.
We will continue with these measurements to infer potential sensor degradation, and starting from July all further measurements will be referenced to the absorption lines in the Holmium Didymium filter. The observed maximal variation of the peak QE within the ULTRASAT operational waveband is $\approx 5\,\%_{\mathrm{abs}}$ for this individual sensor over time. The deviation between different test sensors of the same coating option is of the same order of magnitude.
\begin{figure} [h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=6cm]{figures/qe/aoi_norm_dist.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qe_aoi_dist}
The normalized distributions of angle of incidence at the focal plane of the ULTRASAT telescope are shown color-coded with respect to the corresponding radial distance to the optical axis. The shown data is obtained from numerical simulations of ULTRASAT's telescope optics.
}
\end{figure}
\begin{figure} [h!]
\begin{center}
\begin{tabular}{c}
\includegraphics[height=5cm]{figures/qe/QE_wavelength_interp_SPIE_220_300_wcubic_icubic.pdf}\\ \includegraphics[height=5cm]{figures/qe/QE_aoi_weighted_SPIE_220_300_wcubic_icubic.pdf}
\end{tabular}
\end{center}
\caption[example]
{\label{fig:qe_angular_uv}
AoI comparison of $QE(\lambda)$ in the ULTRASAT operational waveband extended to $300\,\text{nm}$. Shown is the measured data at four different AoIs with cubic spline interpolation (left). The $QE(\lambda)$, weighted by the distribution of angle of incidence (Fig.~\ref{fig:qe_aoi_dist}), are shown color-coded with respect to the radial distance from the telescope's optical axis (right).
}
\end{figure}
Due to the obscuration and the telescope optics, the angle of incidence (AoI) has a non-trivial distribution at the final sensor, which depends on the radial distance from the center of the focal plane. The transmission of the ARC, and therefore the sensor's response, differs with the AoI. To quantify this effect on the QE, we measure the spectral quantum efficiency with varying inclination angles in the region of interest $[0\,^{\circ}, 45\,^{\circ}$] relative to the sensor's normal. The test sensor was mounted on a rotational platform to rotate it with respect to the incident light beam (Fig.~\ref{fig:qe_setup}). Figure~\ref{fig:qe_angular_uv} clearly shows that the measured QE is blue-shifted: for inclined light rays, the phase difference between reflected and transmitted rays in the ARC is reduced with increasing AoI.\cite{quijada2004angle} Furthermore, we weight the spectral QE with the expected AoI distribution as a function of the radial distance from the telescope's optical axis (Fig.~\ref{fig:qe_aoi_dist}). The result is shown on the right-hand side of Figure~\ref{fig:qe_angular_uv}. The AoI-weighted QE curves form a set for which the measurement at an incident angle of $25\,^{\circ}$ can be identified as a representative average. The set varies by at most $\pm 5\,\%_{\mathrm{abs}}$ from this representative, because a decreasing spectral response on one side of the ULTRASAT operational waveband is compensated by an increasing response on the other side and vice versa. Hence, we conclude that the AoI dependence has a negligible effect on the general QE performance, especially when compared to the impact of the ARC.
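The weighting procedure described above reduces to a weighted average of the measured $QE(\lambda,\theta)$ over the simulated AoI distribution at each radial distance. A minimal sketch with invented numbers (four angles, three wavelengths):

```python
import numpy as np

def aoi_weighted_qe(qe_vs_angle, weights):
    """Average QE(lambda, theta) over the angle-of-incidence distribution.
    qe_vs_angle: shape (n_angles, n_wavelengths); weights: shape (n_angles,)."""
    w = np.asarray(weights, float)
    w = w / w.sum()                       # enforce normalization
    return w @ np.asarray(qe_vs_angle)    # weighted sum over angles

# Invented QE values at 0, 15, 30, 45 deg for three wavelengths
qe_meas = np.array([[0.20, 0.78, 0.50],
                    [0.22, 0.79, 0.48],
                    [0.25, 0.77, 0.44],
                    [0.28, 0.72, 0.40]])
w = [0.1, 0.4, 0.4, 0.1]                  # hypothetical AoI distribution
print(aoi_weighted_qe(qe_meas, w))
```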
We estimated the spectral quantum efficiency of the test sensors by comparing their signal response with the calibrated light flux determined by our working standard. The relative uncertainties of these efficiencies are limited by the systematic uncertainties on the gain and quantum yield, which amount to $\approx 2.5\,\%_{\mathrm{rel}}$ in the range from $230\,$nm to $1100\,$nm. Only below $230\,$nm and in an absorption region of the multimode fiber at $\approx 950\,$nm is the relative uncertainty typically between $4\,\%_{\mathrm{rel}}$ and $8\,\%_{\mathrm{rel}}$, limited by the statistical uncertainty due to the sparsely available flux of the light source. We tested all three ARC options, estimated the production reproducibility by testing multiple sensors of the same coating, and varied the angle of inclination to quantify the effect of the telescope's obscuration and optics on the spectral QE. We observe variations in the peak QE of up to $\approx 5\,\%_{\mathrm{abs}}$, both for repeated measurements with single sensors and between sensors with the same ARC. The telescope's overall performance is sensitive to the degree of out-of-band rejection. Of the three ARC options under investigation, option T2 provides the best rejection potential, as it shows a prominent peak-like shape in the middle of the operational band. Its QE, shown in Fig.~\ref{fig:qe_angular_uv}, peaks at $(79\pm1)\,\%_{\mathrm{abs}}$ and declines for decreasing wavelengths to $(20\pm4)\,\%_{\mathrm{abs}}$ at $220\,$nm. Hence, we recommend anti-reflective coating option T2 for the production of the final ULTRASAT UV sensor.
\section{CONCLUSION}
\label{sec:conclusion}
We presented the results of the test sensor characterization for the ULTRASAT camera. We have established the methodical procedure which will be applied in future studies of the final ULTRASAT UV sensor. The two main results are:
\begin{enumerate}
\item The test sensors show a temperature-independent minimal dark current floor at $\approx 7\,$e/s/pix (see Fig.~\ref{fig:dc_results_dc_vs_temp}). We conclude that this is caused by self-heating of the readout electronics and on-die infrared emission (Fig.~\ref{fig:dc_ir_emission}). Although it is not possible to make a definitive statement about the dark current performance of the test sensors at the operating temperature of ULTRASAT's camera, this result led to important design improvements in the final sensors that will be described and verified in upcoming publications.
\item From the estimated spectral quantum efficiency of the different ARC options, we expect T2 to optimize the out-of-band rejection of the telescope due to its peak-like shape in ULTRASAT operational waveband (max. QE $\approx 80\,\%$ at $245\,\mathrm{nm}$, Fig.~\ref{fig:qe_reference}). Hence, we recommend ARC option T2 to be chosen for the production of the final ULTRASAT UV sensor.
\end{enumerate}
The final sensor is expected to be available from production in early fall 2021. The characterization team is working on increasing the precision of the wavelength calibration as well as software integration of the final sensor into the calibration setup.
\acknowledgments
We would like to acknowledge the help and support of the ULTRASAT camera advisory board, composed of Andrei Cacovean, Maria F\"urmetz, Norbert Kappelmann, Olivier Limousin, Harald Michaelis, Achim Peters, Chris Tenzer, Simone del Togno, Nick Waltham and J\"orn Wilms. We also thank Thorlabs for loaning of the OSA201C Fourier transform spectrometer.
\end{document}
\section{Introduction}
In our earlier study \cite{Our1} we derived the following total Hamiltonian $H_T$ of the free gravitational
field
\begin{equation}
H_{T} = H_{c} + g_{00,0} \phi^{00} + 2 g_{0k,0} \phi^{0k} \; \; \; , \; \; \label{eq0}
\end{equation}
where $H_c$ is the `canonical', or `dynamical' part of the total Hamiltonian (or dynamical Hamiltonian, for
short), $\phi^{00}$ and $\phi^{0k}$ are the primary constraints \cite{Dirac1}, while $g_{00,0}$ and $g_{0k,0}$
are the corresponding velocities, i.e. time derivatives (or temporal derivatives) of the corresponding
components of the metric tensor $g_{\alpha\beta}$ \cite{Carm}. The dynamical Hamiltonian $H_c$ in Eq.(\ref{eq0})
takes the form
\begin{eqnarray}
H_{c} &=& \frac{1}{\sqrt{-g} g^{00}} I_{mnpq} \pi^{mn} \pi^{pq} - \frac{1}{g^{00}}
I_{mnpq} \pi^{mn} B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \nonumber \\
&+& \frac14 \sqrt{-g} \Bigl[ \frac{1}{g^{00}} I_{mnpq} B^{((m n) 0 \mid \mu \nu k )}
B^{( p q 0 \mid \alpha \beta l )} - B^{\mu \nu k \alpha \beta l} \Bigr] g_{\mu\nu,k}
g_{\alpha\beta,l} \; \; \; , \; \; \label{eq1}
\end{eqnarray}
where $g_{\mu\nu}$ and $g^{\alpha\beta}$ are the covariant and contravariant components of the metric tensor
\cite{Carm}, respectively. Here and everywhere below in this study the Latin alphabet is used for spatial
components and index ``0'' for a temporal component. The notations $\pi^{mn}$ in Eq.(\ref{eq1}) stand for
momenta conjugate to the spatial components $g_{mn}$ of the metric tensor \cite{Our1}. The definition of
these `spatial' $\pi^{mn}$ (where $m \ge 1$ and $n \ge 1$) and `temporal' momenta $\pi^{m0}$ (or $\pi^{n0}$)
and $\pi^{00}$ can be found in \cite{Our1}. In general, all such momenta are designated as $\pi^{\alpha\beta}$,
while the notation $g_{\rho\nu}$ stands for the components of the metric tensor.
The notation $I_{mnpq}$ used in Eq.(\ref{eq1}) is
\begin{eqnarray}
I_{mnpq} = \frac{1}{d-2} g_{mn} g_{pq} - g_{mp} g_{nq} = I_{pqmn} \; \; \; , \label{I}
\end{eqnarray}
where $d$ ($d \ge 2$) is the total dimension of the space-time continuum. The $I_{mnpq}$ quantity is the
spatial (symmetric) tensor which is the $(d - 1)-$tensor (or $(d-1) \otimes (d-1)$-tensor) in the
$d-$dimensional space-time. These values are different from the actual (or complete) $d-$tensors. The
$d-$tensor $B^{\alpha\beta\gamma \mu\nu\rho}$ in Eq.(\ref{eq1}) is written in the following form \cite{Our1}
\begin{eqnarray}
B^{\alpha\beta\gamma \mu\nu\rho} = g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu}
- g^{\alpha\mu} g^{\beta\nu} g^{\gamma\rho} + 2 g^{\alpha\rho} g^{\beta\nu} g^{\gamma\mu}
- 2 g^{\alpha\beta} g^{\gamma\mu} g^{\nu\rho} \label{B}
\end{eqnarray}
The symmetrized tensor $B^{(\alpha\beta\gamma \mid \mu\nu\rho)}$ is defined as a half of the
following sum:
\begin{eqnarray}
B^{(\alpha \beta \gamma \mid \mu \nu \rho)} = \frac12 ( B^{\alpha \beta \gamma \mu \nu \rho}
+ B^{\alpha \beta \rho \mu \nu \gamma} ) \; \; \; , \; \; \label{BS}
\end{eqnarray}
while the tensor $B^{((\alpha \beta) \gamma \mid \mu \nu \rho)}$ from Eq.(\ref{eq1}) is the
symmetrized sum of the following four terms
\begin{eqnarray}
B^{((\alpha \beta) \gamma \mid \mu \nu \rho)} = \frac12 \Bigl[ B^{(\alpha \beta \gamma \mid \mu
\nu \rho)} + B^{(\beta \alpha \gamma \mid \mu \nu \rho)} \Bigr]
= \frac14 ( B^{\alpha \beta \gamma \mu \nu \rho}
+ B^{\alpha \beta \rho \mu \nu \gamma} + B^{\beta \alpha \gamma \mu \nu \rho}
+ B^{\beta \alpha \rho \mu \nu \gamma} ) \; \; \; . \; \; \label{BS1}
\end{eqnarray}
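The index symmetries expressed by Eqs.(\ref{BS}) and (\ref{BS1}) can be checked numerically for an arbitrary symmetric, invertible metric. The following sketch (an illustration only, not part of the derivation) constructs the $B$-tensor of Eq.(\ref{B}) with \texttt{numpy.einsum} and verifies that the symmetrized tensors are invariant under the corresponding index exchanges:

```python
import numpy as np

rng = np.random.default_rng(1)
d = 4
A = rng.normal(size=(d, d))
g = A @ A.T + d * np.eye(d)      # random symmetric, positive definite metric
gi = np.linalg.inv(g)            # contravariant components g^{ab}

# B^{abc mnr} from Eq. (B)
B = (np.einsum('ab,cr,mn->abcmnr', gi, gi, gi)
     - np.einsum('am,bn,cr->abcmnr', gi, gi, gi)
     + 2.0 * np.einsum('ar,bn,cm->abcmnr', gi, gi, gi)
     - 2.0 * np.einsum('ab,cm,nr->abcmnr', gi, gi, gi))

# Eq. (BS): symmetrization over the index pair (c, r)
BS = 0.5 * (B + np.einsum('abrmnc->abcmnr', B))
assert np.allclose(BS, np.einsum('abrmnc->abcmnr', BS))    # c <-> r invariance

# Eq. (BS1): additional symmetrization over (a, b)
BS1 = 0.5 * (BS + np.einsum('bacmnr->abcmnr', BS))
assert np.allclose(BS1, np.einsum('bacmnr->abcmnr', BS1))  # a <-> b invariance
print("symmetrization identities hold")
```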
All $\frac{d (d + 1)}{2}$ components of the metric tensor $g_{\alpha\beta}$ and momenta $\pi^{\alpha\beta}$
conjugate to these components are the $d (d + 1)$ dynamical (and canonical) variables in the Hamiltonian
approach. The fundamental (classic) Poisson brackets between the $g_{\alpha\beta}$ components of metric
tensor and $\pi^{\alpha\beta}$ components of the `tensor of momentum' are \cite{Myths}
\begin{eqnarray}
\Bigl\{ g_{\alpha\beta}, \pi^{\mu\nu} \Bigr\} = \frac12 ( \delta^{\mu}_{\alpha}
\delta^{\nu}_{\beta} + \delta^{\nu}_{\alpha} \delta^{\mu}_{\beta} ) =
\Delta^{\mu\nu}_{\alpha\beta} = - \Bigl\{\pi^{\mu\nu}, g_{\alpha\beta}
\Bigr\} \; \; \; , \; \label{Pois}
\end{eqnarray}
while the remaining Poisson brackets between these dynamical variables ($\pi^{\mu\nu}$ and
$g_{\alpha\beta}$) are
\begin{eqnarray}
\Bigl\{ g_{\alpha\beta}, g_{\mu\nu} \Bigr\} = 0 \; \; \; and \; \; \;
\Bigl\{ \pi^{\alpha\beta}, \pi^{\mu\nu} \Bigr\} = 0 \; \; \; . \; \; \label{brack2}
\end{eqnarray}
In general, the knowledge of Poisson brackets between all dynamical variables in Hamiltonian system allows one
to determine the actual `trajectories' of the field(s), i.e. all components of the metric tensor
$g_{\alpha\beta}(t, \overline{x})$ as time-dependent functions given in each spatial point $\overline{x}$.
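The structure of the fundamental bracket, Eq.(\ref{Pois}), is encoded in the delta-tensor $\Delta^{\mu\nu}_{\alpha\beta}$, which acts as the projector onto symmetric index pairs: it reproduces any symmetric tensor unchanged and is idempotent. A short numerical illustration (not part of the derivation):

```python
import numpy as np

d = 4
eye = np.eye(d)
# Delta^{mn}_{ab} = 0.5 * (d^m_a d^n_b + d^n_a d^m_b), cf. Eq. (Pois)
Delta = 0.5 * (np.einsum('ma,nb->mnab', eye, eye)
               + np.einsum('na,mb->mnab', eye, eye))

# Acting on a symmetric tensor (e.g. a metric) returns it unchanged ...
A = np.random.default_rng(2).normal(size=(d, d))
g_sym = A + A.T
assert np.allclose(np.einsum('mnab,ab->mn', Delta, g_sym), g_sym)

# ... and Delta is idempotent: a projector onto symmetric pairs
Delta2 = np.einsum('mnab,abcd->mncd', Delta, Delta)
assert np.allclose(Delta2, Delta)
print("Delta is the identity on symmetric index pairs")
```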
In this study the properties of the $d (d + 1)-$canonical (Hamiltonian) variables $\Bigl\{ g_{00}, \ldots,
g_{\alpha\beta}, \ldots, \pi^{00}, \ldots, \pi^{\mu\nu}, \ldots \Bigr\}$, Eqs.(\ref{Pois}) - (\ref{brack2}), are
used to reduce the dynamical $H_c$ and total $H_T$ Hamiltonians to the forms which are more appropriate for the
following analysis and derivation of the Schr\"{o}dinger equation for the free gravitational field. It also appears
that the explicit form of the total Hamiltonian $H_T$ written in these canonical variables allows one to investigate
the propagation of gravitational perturbations, or gravitational waves. In particular, it is shown below that each
elementary gravitational perturbation propagates as a solitary wave with a steep front, similar to the propagation
of a strong, non-linear thermal wave. However, an important difference between a solitary gravitational wave and a
strong thermal wave follows from two facts: (a) all components of the metric tensor $g_{\alpha\beta}$ can change
their values only at the front of the propagating gravitational wave, and (b) in the area behind the front these
components are essentially constant. It follows that the components of the metric tensor will never change again
unless the next gravitational wave changes them. In other words, for propagating gravitational waves we do not face
the problem of exhaustion of the source. It is also interesting to note that harmonic oscillations (i.e. vibrations)
of the gravitational field itself play no role in its propagation (see below).
\section{Transformations of the dynamical Hamiltonian}
The dynamical Hamiltonian $H_c$ given by Eq.(\ref{eq1}) is a quadratic function of the spatial
components of momenta $\pi^{mn}$. Let us transform this Hamiltonian $H_c$ to a slightly different
form which is more appropriate for the purposes of this study. First, note that Eq.(\ref{eq1}) can
be re-written in the form
\begin{eqnarray}
\tilde{H}_c &=& \sqrt{-g} g^{00} H_{c} = I_{mnpq} \pi^{mn} \pi^{pq} - \sqrt{-g}
I_{mnpq} B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \pi^{mn} \nonumber \\
&+& \frac14 (-g) \Bigl[ \frac{1}{g^{00}} I_{mnpq} B^{((m n) 0 \mid \mu \nu k )}
B^{( p q 0 \mid \alpha \beta l )} - g^{00} B^{\mu \nu k \alpha \beta l} \Bigr] g_{\mu\nu,k}
g_{\alpha\beta,l} \label{eq2} \\
&-& \sqrt{-g} I_{mnpq} \Bigl\{\pi^{mn}, B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \Bigr\}
\nonumber
\end{eqnarray}
where the last term is the Poisson bracket of the $\pi^{mn}$ momenta and the product of the $B^{( p q 0 \mid
\mu \nu k )}$ and $g_{\mu\nu,k}$ values. This bracket can be computed analytically with the use of Eq.(\ref{Pois})
and explicit formula for the $B^{( p q 0 \mid \mu \nu k )}$ quantity, Eq.(\ref{BS}). This leads to the result
\begin{eqnarray}
\Bigl\{\pi^{mn}, B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \Bigr\} &=& \frac12 \Bigl\{\pi^{mn},
B^{p q 0 \mu \nu k} \Bigr\} g_{\mu\nu,k} + \frac12 \Bigl\{\pi^{mn}, B^{p q k \mu \nu 0} \Bigr\}
g_{\mu\nu,k} \nonumber \\
&+& B^{( p q 0 \mid \mu \nu k )} \Bigl\{\pi^{mn}, g_{\mu\nu,k} \Bigr\} \label{eqPois}
\end{eqnarray}
where the Poisson brackets in the first and second terms of this equation are determined by Eq.(\ref{B}). This reduces
each of the Poisson brackets from Eq.(\ref{eqPois}) to the sum of the following brackets
\begin{eqnarray}
\Bigl\{ \pi^{mn}, B^{\alpha \beta \gamma \mu \nu \rho} \Bigr\} &=&
\Bigl\{ \pi^{mn}, g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu} \Bigr\}
- \Bigl\{ \pi^{mn}, g^{\alpha\mu} g^{\beta\nu} g^{\gamma\rho} \Bigr\}
+ 2 \Bigl\{ \pi^{mn}, g^{\alpha\rho} g^{\beta\nu} g^{\gamma\mu} \Bigr\} \nonumber \\
&-& 2 \Bigl\{ \pi^{mn}, g^{\alpha\beta} g^{\gamma\mu} g^{\nu\rho} \Bigr\} \label{eq11}
\end{eqnarray}
Note that all terms on the right-hand side of this equation have an identical structure. Therefore, to illustrate our
calculations we can consider just one of these terms. For instance, for the first term in Eq.(\ref{eq11}) one finds
\begin{eqnarray}
\Bigl\{ \pi^{mn}, g^{\alpha\beta} g^{\gamma\rho} g^{\mu\nu} \Bigr\} =
\Bigl\{ \pi^{mn}, g^{\alpha\beta} \Bigr\} g^{\gamma\rho} g^{\mu\nu} +
g^{\alpha\beta} \Bigl\{ \pi^{mn}, g^{\gamma\rho} \Bigr\} g^{\mu\nu} +
g^{\alpha\beta} g^{\gamma\rho} \Bigl\{ \pi^{mn}, g^{\mu\nu} \Bigr\}
\end{eqnarray}
where the Poisson bracket between the momentum $\pi^{mn}$ and $g^{\alpha\beta}$ component of the metric tensor
is calculated with the use of the following formulas
\begin{eqnarray}
 \Bigl\{ \pi^{mn}, g^{\alpha\beta} \Bigr\} = \Bigl\{ \pi^{mn}, g^{\alpha\alpha^{\prime}}
 g^{\beta\beta^{\prime}} g_{\alpha^{\prime}\beta^{\prime}} \Bigr\}
 = - g^{\alpha\alpha^{\prime}} g^{\beta\beta^{\prime}} \Delta^{mn}_{\alpha^{\prime}\beta^{\prime}}
 = -\Delta^{mn;\alpha\beta}
\end{eqnarray}
where the last equality contains the definition of the new delta-function (or delta-tensor) $\Delta^{mn;\alpha\beta}$,
which carries four upper indexes only.
The Poisson bracket in the third term from Eq.(\ref{eqPois}) is
\begin{eqnarray}
\Bigl\{\pi^{mn}, g_{\mu\nu,k} \Bigr\} = - \Bigl(\Delta^{mn}_{\mu\nu}\Bigr)_{,k}
\end{eqnarray}
where the $\Delta^{mn}_{\mu\nu}$-function (or $\delta-$tensor) is defined above (see Eq.(\ref{Pois})).
Note that this Poisson bracket equals zero identically if the delta-tensor $\Delta^{mn}_{\mu\nu}$ is
considered as a constant term. This corresponds to the classical approach. In the quantum approach this term can be
transformed into a form where the derivative upon the spatial components appears in front of the wave function.
Then, by integrating by parts, we can move the spatial derivative from the delta-function to the wave function
$\Psi$. This means that such a term does not vanish identically. By calculating all Poisson brackets in
Eq.(\ref{eqPois}) one finds that the explicit formula for the Poisson bracket in the last term of Eq.(\ref{eq2})
contains a very large number of terms, which is not convenient in actual calculations. To avoid this problem,
below we shall keep the united notation for the Poisson bracket in the last term of Eq.(\ref{eq2}).
Now we note that in the Hamiltonian $H_c$, Eq.(\ref{eq2}), all momenta $\pi^{mn}$ occupy the rightmost
position in each term. This form of $H_c$ has a number of advantages for the quantization of the
classical system with the Hamiltonian $\tilde{H}_c$ given by Eq.(\ref{eq2}). The Hamiltonian $\tilde{H}_c$ can be
represented as the product of the two spatial tensors (or $(d-1)-$tensors) $I_{mnpq}$ and $\tilde{H}^{pqmn}_{c}$,
where the spatial tensor $I_{mnpq}$ is defined in Eq.(\ref{I}) and spatial tensor $\tilde{H}^{pqmn}_{c}$ is
\begin{eqnarray}
\tilde{H}^{pqmn}_{c} &=& \pi^{pq} \pi^{mn} - \sqrt{-g}
B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \pi^{mn} \nonumber \\
&-& \frac{g}{4} \Bigl[ B^{((m n) 0 \mid \mu \nu k )}
B^{( p q 0 \mid \alpha \beta l )} - g^{00} E^{pqmn} B^{\mu \nu k \alpha \beta l} \Bigr] g_{\mu\nu,k}
g_{\alpha\beta,l} \label{eq4} \\
&-& \sqrt{-g} \Bigl\{\pi^{pq}, B^{( m n 0 \mid \mu \nu k )} g_{\mu\nu,k} \Bigr\} \nonumber
\end{eqnarray}
where the spatial tensor $E^{pqmn}$ included in this equation is defined by the relation $I_{mnpq} E^{pqkl} =
\delta^{k}_{m} \delta^{l}_{n}$ (or $\hat{I} \hat{E} = 1$). Components of this spatial tensor are the spatial
components of the complete $E^{\mu\nu\gamma\sigma}$-tensor which is defined by the following equation (see,
e.g., \cite{Our1})
\begin{equation}
E^{\mu\nu\gamma\sigma} = e^{\mu\nu} e^{\gamma\sigma} - e^{\mu\gamma} e^{\nu\sigma} = E^{\gamma\sigma\mu\nu}
\end{equation}
where
\begin{equation}
e^{\mu\nu} = g^{\mu\nu} - \frac{g^{0\mu} g^{0\nu}}{g^{00}} = e^{\nu\mu}
\end{equation}
and, therefore:
\begin{equation}
E^{\mu\nu\gamma\sigma} = g^{\mu\nu} g^{\gamma\sigma} - g^{\mu\gamma} g^{\nu\sigma}
- \frac{1}{g^{00}} \Bigl( g^{0\mu} g^{0\nu} g^{\gamma\sigma} + g^{\mu\nu} g^{0\gamma} g^{0\sigma}
- g^{\mu\gamma} g^{0\nu} g^{0\sigma} - g^{0\mu} g^{0\gamma} g^{\nu\sigma} \Bigr)
\end{equation}
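The equivalence of the compact and expanded forms of $E^{\mu\nu\gamma\sigma}$ is straightforward to confirm symbolically. The sympy sketch below is our own check (treating the contravariant components $g^{\mu\nu}$ as independent symmetric symbols, with $d = 3$ to keep it fast); it verifies the identity together with the symmetry $E^{\mu\nu\gamma\sigma} = E^{\gamma\sigma\mu\nu}$:

```python
import itertools

import sympy as sp

d = 3  # a small dimension keeps the symbolic check fast

# contravariant metric g^{ab} as a generic symmetric symbolic matrix
g = sp.Matrix(d, d, lambda i, j: sp.Symbol(f'g_{min(i, j)}{max(i, j)}'))

def e(m, n):
    # e^{mn} = g^{mn} - g^{0m} g^{0n} / g^{00}
    return g[m, n] - g[0, m] * g[0, n] / g[0, 0]

def E_def(m, n, c, s):
    # compact form: E^{mncs} = e^{mn} e^{cs} - e^{mc} e^{ns}
    return e(m, n) * e(c, s) - e(m, c) * e(n, s)

def E_expanded(m, n, c, s):
    # the fully expanded form quoted in the text
    return (g[m, n] * g[c, s] - g[m, c] * g[n, s]
            - (g[0, m] * g[0, n] * g[c, s]
               + g[m, n] * g[0, c] * g[0, s]
               - g[m, c] * g[0, n] * g[0, s]
               - g[0, m] * g[0, c] * g[n, s]) / g[0, 0])

for idx in itertools.product(range(d), repeat=4):
    assert sp.simplify(E_def(*idx) - E_expanded(*idx)) == 0

# the symmetry E^{mncs} = E^{csmn} also holds
for m, n, c, s in itertools.product(range(d), repeat=4):
    assert sp.simplify(E_def(m, n, c, s) - E_def(c, s, m, n)) == 0
```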
Let us assume that we have performed the quantization of the Hamiltonian, Eq.(\ref{eq4}). The process of
quantization is discussed in detail in the next Section, but here we just want to make a few important comments about the explicit
form of the Schr\"{o}dinger equation. As follows from our analysis above, the arising Schr\"{o}dinger equation is written
in the following `matrix' form with the Hamiltonian (spatial tensor) $\tilde{H}^{pqmn}_{c}$ from Eq.(\ref{eq4})
\begin{eqnarray}
\imath \hbar E^{pqmn} \frac{\partial \Psi}{\partial \tau} = \tilde{H}^{pqmn}_{c} \Psi \label{EqSh2}
\end{eqnarray}
where $E^{pqmn}$ is the spatial tensor defined above. Both these operators ($E^{pqmn}$ and $\tilde{H}^{pqmn}_{c}$) in
Eq.(\ref{EqSh2}) are $g-$dependent (or metric-dependent) spatial tensors. This means that each component of these spatial
tensors is a function of the components of the metric tensor $g_{\mu\nu}$. The time $\tau$ in Eq.(\ref{EqSh2}) is related to
the incident time $t$ by the relation $\tau = \frac{t}{\sqrt{-g} g^{00}}$. The new Hamiltonian (spatial tensor!)
$\tilde{H}^{pqmn}_{c}$ in Eq.(\ref{eq4}) is a quadratic function of the momenta $\pi^{pq}$ and $\pi^{mn}$. Note that all
momenta $\pi^{pq}$ and $\pi^{mn}$ which are included in the Hamiltonian $\tilde{H}^{pqmn}_{c}$ do not have temporal
components. In other words, the $\tilde{H}^{pqmn}_c$ Hamiltonian does not contain any of the $\pi^{00}, \pi^{0m}$ and/or
$\pi^{n0}$ momenta. Now we need to perform the last step of our procedure and transform the classical expression for the
$(d-1)-$tensor (or spatial tensor) $\tilde{H}^{pqmn}_{c}$, Eq.(\ref{eq4}), into the corresponding quantum operator.
\section{Quantization}
The goal of this Section is the quantization of the classical Hamiltonian $\tilde{H}^{pqmn}_{c}$ from Eq.(\ref{eq4}).
The main step in this process is to replace all classical momenta $\pi^{\alpha\beta}$ and `coordinates' $g_{\mu\nu}$
in the classical Hamiltonian $\tilde{H}_c$ by the corresponding quantum operators. The classical Poisson bracket
between each pair of these dynamical variables $\Bigl\{g_{00}, \ldots, g_{\alpha\beta}, \ldots, \pi^{00}, \ldots,
\pi^{\mu\nu}, \ldots \Bigr\}$ is also replaced by the corresponding quantum Poisson bracket which explicitly contains the
reduced Planck constant $\hbar$. Unfortunately, in many cases such a formal replacement of classical quantities by the
corresponding quantum operators may lead to ambiguous answers. To avoid a possible appearance of multiple answers and produce
the correct quantum expression one needs to apply the `correspondence principle' known in Quantum Mechanics since the
mid-1920s (see, e.g., \cite{LLQ}). For the free gravitational field the correspondence principle means that the quantum
Poisson bracket must have the correct limit in the case of very weak gravitational fields, or, in other words, for the
flat space-time. This determines the following expression for the quantum (Q) Poisson bracket
\begin{eqnarray}
\Bigl\{ g_{\alpha\beta}, \pi^{\mu\nu} \Bigr\}_Q = \imath \hbar \Bigl\{ g_{\alpha\beta}, \pi^{\mu\nu} \Bigr\}_C
= \imath \hbar \frac12 ( \delta^{\mu}_{\alpha} \delta^{\nu}_{\beta} + \delta^{\nu}_{\alpha}
\delta^{\mu}_{\beta} ) = \imath \hbar \Delta^{\mu\nu}_{\alpha\beta} \label{PoisQ}
\end{eqnarray}
From here one can write the following explicit formula for the quantum operator of momentum $\pi^{\alpha\beta}$ in the
$g_{\alpha\beta}$-representation (i.e. in the `coordinate' representation)
\begin{equation}
\pi^{\mu\nu} = - \imath \hbar \Bigl[ \frac{\partial}{\partial g_{\mu\nu}} + f_{\mu\nu}(g_{\alpha\beta}) \Bigr]
\label{moment}
\end{equation}
where $f_{\mu\nu}(g_{\alpha\beta})$ is a regular (or analytical) function of all components of the metric tensor. The quantum
operators of momenta, Eq.(\ref{moment}), must also obey the basic relations given by Eq.(\ref{brack2}) which are true for both
the classical and quantum Poisson brackets. This leads to a set of additional conditions for the $f_{\mu\nu}$-functions from
Eq.(\ref{moment})
\begin{equation}
\frac{\partial f_{\mu\nu} }{\partial g_{\alpha\beta} } = \frac{\partial f_{\alpha\beta} }{\partial g_{\mu\nu} }
\label{cond1}
\end{equation}
In general, one can use some freedom to choose different types of the $f_{\mu\nu}$ functions in Eq.(\ref{moment})
to simplify either the definition of momenta $\pi^{\mu\nu}$, or the formula for the quantum Hamiltonian operator
$\tilde{H}^{pqmn}_{c}$ in Eq.(\ref{EqSh2}). In reality, such a freedom is quite restricted, since there are a number
of rules for canonical transformations which can only be used to transform one set of dynamical (Hamiltonian) variables into
another. This is true for arbitrary Hamiltonian systems, including systems with constraints (for more details, see \cite{Our1}
and \cite{Myths}). To avoid discussion of this problem, which is not directly related to our goals in this study, below
we shall assume that the additional function $f_{\mu\nu}(g_{\alpha\beta})$ in Eq.(\ref{moment}) equals zero identically.
It can be shown that such a choice is `natural' for the Hamiltonian formulation of GR originally developed in \cite{Pirani}
and later corrected in \cite{Our1}.
Substitution of the operators of momenta $\pi^{\mu\nu} = - \imath \hbar \frac{\partial }{\partial g_{\mu\nu} }$ in the
classical Hamiltonian tensor $\tilde{H}^{pqmn}_{c}$, Eq.(\ref{eq4}), produces the quantum Hamiltonian operator
$\hat{\tilde{H}}^{pqmn}_{c}$ which is correct at least in the lowest-order approximation upon $\hbar$ \cite{Dirac1},
\cite{Tut}. With this Hamiltonian operator we can write the following Schr\"{o}dinger equation \cite{Schrod}
\begin{eqnarray}
\imath \hbar E^{pqmn} \frac{\partial \Psi}{\partial \tau} &=& \hat{\tilde{H}}^{pqmn}_{c} \Psi =
- \hbar^2 \frac{\partial^2 \Psi}{\partial g_{pq} \partial
g_{mn}} + \imath \hbar \Bigl[\sqrt{-g} B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k}\Bigr]
\frac{\partial \Psi}{\partial g_{mn}} \nonumber \\
&-& \frac{g}{4} \Bigl( \Bigl[ B^{( p q 0 \mid \alpha \beta l )} B^{((m n) 0 \mid \mu \nu k )}
- g^{00} E^{pqmn} B^{\mu \nu k \alpha \beta l} \Bigr] g_{\mu\nu,k}
g_{\alpha\beta,l} \Bigr) \Psi \label{EqSh3} \\
&-& \sqrt{-g} \Bigl\{\pi^{mn}, B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \Bigr\} \Psi \nonumber
\end{eqnarray}
where, in general, the wave function $\Psi = \Psi(\tau, \{ g_{\alpha\beta} \})$ depends upon all components of the metric
tensor $g_{\alpha\beta}$ and time $\tau$. This equation describes time-evolution of the free gravitational field which is
now considered as a `quantum object' and described by the wave function $\Psi$. It should be mentioned that this equation
is only one of a number of conditions (or equations) which must be obeyed by the actual wave function $\Psi$ of the free
gravitational field(s). These additional conditions are the primary constraints and all other constraints which arise during
time-evolution of the primary constraints \cite{Dirac1950}. Formally, such additional conditions for the wave function $\Psi$
follow from the fact that the Schr\"{o}dinger equation must contain the total Hamiltonian $H_T$, rather than the dynamical
Hamiltonian $H_c$. This was well understood and emphasized by Dirac in 1950 \cite{Dirac1950}.
Let us discuss these additional conditions for the wave function $\Psi$ of the free gravitational field. First, consider the
conditions which follow from the primary constraints. As mentioned above the actual Schr\"{o}dinger equation must contain the
total Hamiltonian $H_T$, Eq.(\ref{eq0}). Its replacement with the dynamical Hamiltonian $H_c$ is possible if (and only if) the
following conditions are obeyed for the wave functions $\Psi$
\begin{equation}
\phi^{00} \Psi = 0 \; \; \; \mbox{and} \; \; \; \phi^{0k} \Psi = 0 \label{primary}
\end{equation}
for $k = 1, 2, \ldots, d-1$, where $d$ is the total dimension of the space-time continuum. These conditions are the $d-$primary
constraints written explicitly in \cite{Our1}. The primary constraints arise since the corresponding components of the momenta $\pi^{00}$
and $\pi^{0k}$ cannot be defined from the original singular Lagrangian \cite{Our1}. By replacing the momenta and coordinates in these
primary constraints by the corresponding quantum operators one finds
\begin{equation}
\phi^{0\sigma} = \pi^{0\sigma} - \frac12 \sqrt{-g} B^{((0 \sigma) 0 \mid \mu \nu k)} g_{\mu\nu,k} = - \imath \hbar
\frac{\partial }{\partial g_{0\sigma} } - \frac12 \sqrt{-g} B^{((0 \sigma) 0 \mid \mu \nu k)} g_{\mu\nu,k}
\end{equation}
Therefore, the $d-$primary constraints, Eq.(\ref{primary}), can be written in the form of the following differential equations
for the wave function $\Psi$:
\begin{equation}
\imath \hbar \frac{\partial }{\partial g_{0\sigma} } \Psi = - \frac12 \sqrt{-g} B^{((0 \sigma) 0 \mid \mu \nu k)}
g_{\mu\nu,k} \Psi \label{cond2}
\end{equation}
where $\sigma = 0, 1, \ldots, d - 1$.
As was mentioned by Dirac \cite{Dirac1}, the primary constraints are absolute, i.e. they must hold at arbitrary times. This
means that the equations which govern the time evolution of the primary constraints must be time independent. For a quantum system
with constraints this leads to the following chain of equalities:
\begin{eqnarray}
\chi^{0\sigma} \Psi &=& \phi^{0\sigma}_{,0} \Psi = \{ \phi^{0\sigma}, H_T \} \Psi = 0 \; \; \; , \nonumber \\
(\chi^{0\sigma})_{,0} \Psi &=& (\phi^{0\sigma}_{,0})_{,0} \Psi = \{ \{ \phi^{0\sigma}, H_T \}, H_T \} \Psi = 0 \; \; , \ldots
\label{eq27}
\end{eqnarray}
For actual physical fields the arising chains of related equalities, i.e. constraints, are always finite \cite{Dirac1}, \cite{Tut}.
Furthermore, the values $\chi^{0\sigma} = \phi^{0\sigma}_{,0} = \{ \phi^{0\sigma}, H_T \}$ are called the secondary constraints
\cite{Dirac1950}, while their Poisson brackets with the total Hamiltonian $H_T$ are the tertiary constraints, etc. Analysis of
the constrained structure of the field equations in metric GR indicates clearly that the free gravitational field has $d-$primary
constraints $\phi^{0\sigma}$ and $d-$secondary constraints $\chi^{0\sigma}$ for $\sigma = 0, 1, \ldots, d - 1$. The secondary
constraints $\chi^{0\sigma}$ are independent of each other and of any of the primary constraints. It was shown in \cite{Our1}
that the time-evolution of the secondary constraints $\chi^{0\sigma}$ leads to linear combinations of the same secondary
constraints. The coefficients in such linear combinations depend upon spatial components of the metric tensor, their spatial
derivatives and primary constraints \cite{Our1}. This fact proves the closure of the Dirac procedure for the free gravitational
field in metric GR, since no tertiary constraints have been found. The closed analytical formulas for the secondary constraints
$\chi^{0\sigma}$ have been found in a number of earlier papers (see, e.g., \cite{Our1} and references therein). These formulas
are extremely complicated and here we do not want to repeat them, since they are not crucially important for our analysis below.
It should be mentioned that the Poisson brackets between the primary and secondary constraints of the free gravitational field(s)
are \cite{Our1}
\begin{eqnarray}
\Bigl\{ \chi^{0\sigma}, \phi^{0\gamma} \Bigr\} = - \frac12 g^{\gamma\sigma} \chi^{00} \label{prisec}
\end{eqnarray}
i.e. it is proportional to the secondary constraint $\chi^{00}$. Therefore, if $\Psi$ is the solution of the
Schr\"{o}dinger equation, Eq.(\ref{EqSh3}), then for such wave functions one finds $\chi^{0\sigma} \phi^{0\gamma} \Psi =
\phi^{0\gamma} \chi^{0\sigma} \Psi = 0$. In other words, the operators of the primary and secondary constraints commute with each
other on the solutions $\Psi$ of the Schr\"{o}dinger equation.
Thus, for the wave function $\Psi$ of the free gravitational field, in addition to the Schr\"{o}dinger equation, Eq.(\ref{EqSh3}),
we have $d-$primary and $d-$secondary constraints. In general, for $d-$dimensional space-time we have $\frac{d ( d + 1 )}{2}$
independent components of the metric tensor $g_{\alpha\beta}$. Therefore, the total number of degrees of freedom $f$ of the free
gravitational field in metric GR is $f = \frac{d ( d + 1 )}{2} - 2 d = \frac{d (d - 3)}{2}$. As follows from this formula, actual
unconstrained motion becomes possible for the free gravitational field when $d \ge 4$. In particular, the free gravitational field
in our universe ($d = 4$) has two degrees of freedom. For $d = 3$ we have $f = 0$, i.e. only constrained motion can be found at this
dimension of space-time. Note also that in addition to the primary and secondary constraints, for many Hamiltonian systems one also
finds a number of `conservation laws' which must be obeyed for any real motion.
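The counting just described is simple enough to tabulate. A minimal sketch (our own illustration of the formula $f = \frac{d(d-3)}{2}$):

```python
def gravity_dof(d):
    """Degrees of freedom of the free gravitational field in d-dimensional
    space-time: d(d+1)/2 independent metric components minus d primary and
    d secondary constraints."""
    return d * (d + 1) // 2 - 2 * d

# d = 3 allows only constrained motion; d = 4 (our universe) gives two
# degrees of freedom
table = {d: gravity_dof(d) for d in range(3, 7)}
assert table == {3: 0, 4: 2, 5: 5, 6: 9}
```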
\section{Gravitational waves. Hamiltonian factorization.}
In reality it is very difficult to obtain any closed (analytical) solution of the Schr\"{o}dinger equation, Eqs.(\ref{EqSh3}),
(\ref{primary}) and (\ref{eq27}) for the wave function $\Psi$. The complexity of this problem is absolutely outstanding. However,
there is a group of problems in metric GR which can be investigated directly with the use of the total Hamiltonian $H_T$, or the
corresponding quantum operator $\hat{H}_T$. These problems are closely related to the propagation of gravitational perturbations.
In modern literature the propagation of any gravitational perturbation is always considered as a propagation of harmonic
oscillations, or, in other words, harmonic waves. Moreover, it is emphasized explicitly that the propagation of the gravitational
perturbations, or waves, is very similar to the propagation of the electromagnetic waves in vacuum. In reality, such gravitational
oscillatory waves have never been detected in various experiments extensively conducted since the mid-1950s. Nevertheless, a
large number of people are still trying to observe such waves and measure their basic properties. Note that the first theoretical
prediction of the oscillatory gravitational waves was made by Einstein in his paper published in 1918 \cite{Einst1918}. However, in
\cite{Einst1918} Einstein considered the case of very weak gravitational fields. In this approximation the actual four-dimensional
space-time was essentially replaced by the flat space-time. In \cite{Einst1918} Einstein wrote that his conclusion about
propagating oscillatory gravitational waves is substantially based on the linear approximation used and it can be false in the
actual metric GR. Later, Einstein and his co-workers arrived to a conclusion that the oscillating gravitational waves cannot
exist in the metric GR, but their paper submitted in Phys. Rev. was rejected and manuscript was lost. Probably, some parts of the
original text of that paper were included in \cite{Einst1937}. Other details related with Einstein's opinion about oscillating
gravitational waves in the metric GR can be found on the Web-page \cite{Blog}.
The Hamiltonian approach for metric GR developed in \cite{Our1} allows one to re-consider the problem of propagating gravitational
waves. Let us assume that at some point of the Universe we have a gravitational process, e.g., collision of the two stars which lead
to the formation of the new star. Gravitational fields around this collision area change rapidly. Briefly, we can say that in such a
case we deal with perturbations of the gravitation field(s), or, in other words, with gravitational perturbations. As known from
Astrophysics, typical collisions of actual stars proceed over dozens of years (and even longer) and gravitational waves are generated at each
moment of this process. It is clear that our Hamiltonians (see, e.g., Eq.(\ref{EqSh3})) contain only gravitational fields (or $g_{\alpha\beta}$
components) and we cannot describe, in principle, actual collisions of stars and/or any other process related to the finite-time
redistribution of masses in space. To avoid an unwanted discussion of the phenomena which cannot be analyzed by our methods we have to
define an elementary gravitational perturbation which is considered as an infinitesimal part of the real (i.e. finite) process of
gravitational changes. Everywhere below by an `elementary gravitational perturbation' we mean the process of actual gravitation motion
which is local and takes an infinitely small time $\delta t$. The corresponding, infinitesimal changes in the gravitational fields
$g_{\mu\nu}$ can be described with our Hamiltonian approach. By using the language of differential equations one can say that here we
are trying to determine and investigate the corresponding Green's function(s). We can also introduce a closely related definition of the
solitary gravitational wave as a wave which transfers an elementary gravitational perturbation and produces changes in the gravitational
fields, i.e. in the components of the metric tensor $g_{\alpha\beta}$. Our main goal here is to determine the laws which govern the propagation
of the solitary gravitational wave in space-time, i.e. in the Universe. Also, we want to investigate the internal structure of such a
wave. Thus, below we shall consider only elementary gravitational perturbations and solitary gravitational waves which move these
perturbations from the point of their generation to the rest of the Universe. The actual gravitational perturbation can be represented
as a superposition, i.e. sum and/or integral, of a large number (even infinite number) of elementary perturbations. The same statement
is true for gravitational waves propagating from an actual source of gravity.
Our analysis of the propagating solitary gravitational wave is based on the explicit form of the dynamical Hamiltonian $\tilde{H}_c$,
Eq.(\ref{eq2}), or the corresponding Hamiltonian spatial tensor $\tilde{H}^{pqmn}_{c}$, Eq.(\ref{eq4}). Both these Hamiltonians are the
quadratic expressions upon the momenta $\pi^{mn}$ conjugate to the spatial components $g_{mn}$ of the metric tensor $g_{\alpha\beta}$,
i.e. the situation looks similar to the case of the free electromagnetic field propagating in vacuum (see, e.g., \cite{Heitl}). A
complete similarity with electrodynamics will be observed, if (and only if) we can show that the Hamiltonian $\tilde{H}^{pqmn}_{c}$,
Eq.(\ref{eq4}), is a quadratic function of the spatial components $g_{mn}$ of the metric tensor $g_{\alpha\beta}$. In this case by
applying some standard methods from Matrix Quantum Mechanics (see, e.g., \cite{Green}) one can reduce this Hamiltonian to a sum of
`quadratic' operators which essentially coincides with the Hamiltonian of the harmonic oscillator and/or with the Hamiltonian of the free
electromagnetic field which is used in Quantum Electrodynamics (see, e.g., \cite{Dirac1}, \cite{LLQE}, \cite{GelfFomin}). However, it can
be shown (see below) that the dependence of the Hamiltonian tensor $\tilde{H}^{pqmn}_{c}$ in Eq.(\ref{eq4}) upon the components of the
metric tensor $g_{\alpha\beta}$ is substantially more complex and cannot be represented as a quadratic function (i.e. quadratic polynomial)
of the components of the metric tensor.
To investigate this problem we write the Hamiltonian $\tilde{H}^{pqmn}_{c}$ in the form
\begin{eqnarray}
\tilde{H}^{pqmn}_{c} = \pi^{pq} \pi^{mn} + S^{pqmn} \label{quadr}
\end{eqnarray}
where the spatial tensor $S^{pqmn}$ takes the form
\begin{eqnarray}
S^{pqmn} &=& \imath \hbar \Bigl[\sqrt{-g} B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k}\Bigr]
\frac{\partial}{\partial g_{mn}} \nonumber \\
&-& \frac{g}{4} \Bigl( \Bigl[ B^{( p q 0 \mid \alpha \beta l )} B^{((m n) 0 \mid \mu \nu k )}
- g^{00} E^{pqmn} B^{\mu \nu k \alpha \beta l} \Bigr] g_{\mu\nu,k}
g_{\alpha\beta,l} \Bigr) \label{EqSh5} \\
&-& \sqrt{-g} \Bigl\{ \pi^{mn}, B^{( p q 0 \mid \mu \nu k )} g_{\mu\nu,k} \Bigr\} \nonumber
\end{eqnarray}
Now, we want to show explicitly that this spatial tensor $S^{pqmn}$ is not a quadratic function of the $g_{\alpha\beta}$ components
(or $g_{mn}$ components) and/or it cannot be reduced to such a form. The formula for the $S^{pqmn}$ quantity, Eq.(\ref{EqSh5}), contains
three different terms. First, consider the second term which is a polynomial function of the $g_{\alpha\beta}$ components.
The maximal power of such a polynomial upon $g_{\alpha\beta}$ is $10 = 4 + 3 + 3$ (not 2), where four is the power of the
determinant $g$ of the metric tensor $g_{\alpha\beta}$, while the power of each factor $B$ in this formula equals three. This (second)
term in Eq.(\ref{EqSh5}) also contains the product of the two spatial derivatives $g_{\mu\nu,k} g_{\alpha\beta,l}$. The first and third
terms in Eq.(\ref{EqSh5}) contain the factor $\sqrt{-g}$ which is, in fact, an algebraic function (not a finite polynomial!) of the
$g_{\alpha\beta}$ components. In general, this function is represented as a sum of an infinite number of powers of the $g_{\alpha\beta}$
components. This indicates clearly that we can reduce neither the Hamiltonian $\tilde{H}^{pqmn}_{c}$ from Eq.(\ref{eq4}), nor the
dynamical Hamiltonian $\tilde{H}_c$ from Eq.(\ref{eq2}) to a quadratic form which is needed for similarity with the Hamiltonian of the
free electromagnetic field. By using the regular transformations of the metric tensor we can try to reduce the total power of the
Hamiltonian $\tilde{H}^{pqmn}_{c}$ in Eq.(\ref{eq4}) upon $g_{\alpha\beta}$ components. However, it appears that such a power upon the
spatial components of the metric tensor $g_{mn}$ always exceeds five. This result is of fundamental importance for the metric General
Relativity, since it indicates clearly that the free gravitational fields cannot propagate in space-time as `harmonic vibrations' (or
oscillations).
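As a quick cross-check of the degree counting used above, the determinant of a $4\times 4$ metric is indeed a homogeneous polynomial of degree four in the components $g_{\alpha\beta}$. The following sympy sketch (our own illustration) confirms this:

```python
import sympy as sp

d = 4
# generic symmetric metric g_{ab} with independent symbolic components
g = sp.Matrix(d, d, lambda i, j: sp.Symbol(f'g_{min(i, j)}{max(i, j)}'))

det = sp.expand(g.det())
poly = sp.Poly(det)

# every monomial of det(g) has total degree d = 4 in the metric components,
# so det(g) is a finite polynomial, while sqrt(-g) is not
assert all(sum(monom) == d for monom in poly.monoms())
```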
In the case of very weak gravitational fields one can find a similarity with the free electromagnetic field. Indeed, for very weak
gravitational fields the differences between the corresponding components of the metric tensor and the Minkowski tensor are small and
$\sqrt{-g} \approx 1$. This allows one to determine the limiting forms of these two Hamiltonians ($\tilde{H}^{pqmn}_{c}$ from Eq.(\ref{eq4}) and
$\tilde{H}_c$ from Eq.(\ref{eq2})). The correct Hamiltonian transition to the case of weak gravitational fields is described in
\cite{Linear}. It appears that both these Hamiltonians are now quadratic functions of the new variables $h_{\alpha\beta}$, where
$h_{\alpha\beta} = g_{\alpha\beta} - \eta_{\alpha\beta}$ are the small corrections to the corresponding components of the Minkowski
tensor $\eta_{\alpha\beta} = diag(-, +, +, \ldots, +)$ in the flat space-time. Formally, this means that for very weak gravitational
fields there is a similarity with the electromagnetic fields. This includes similarity in propagation of the free gravitational waves and
electromagnetic waves.
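The weak-field statement made here can be verified symbolically: with $g_{\alpha\beta} = \eta_{\alpha\beta} + \epsilon h_{\alpha\beta}$ one finds $\sqrt{-g} = 1 + \frac{\epsilon}{2} \eta^{\mu\nu} h_{\mu\nu} + O(\epsilon^2)$, so the non-polynomial factor $\sqrt{-g}$ reduces to unity at leading order. A small sympy sketch of this standard expansion (our own check, not taken from \cite{Linear}):

```python
import sympy as sp

d = 4
eps = sp.Symbol('epsilon')
eta = sp.diag(-1, 1, 1, 1)  # Minkowski tensor
# small symmetric correction h_{ab}
h = sp.Matrix(d, d, lambda i, j: sp.Symbol(f'h_{min(i, j)}{max(i, j)}'))

g = eta + eps * h           # weak-field metric
sqrt_mg = sp.sqrt(-g.det())

# zeroth and first orders of sqrt(-g) in epsilon
order0 = sqrt_mg.subs(eps, 0)
order1 = sp.diff(sqrt_mg, eps).subs(eps, 0)

assert sp.simplify(order0 - 1) == 0
assert sp.simplify(order1 - sp.Rational(1, 2) * (eta.inv() * h).trace()) == 0
```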
For arbitrary gravitational fields we have $\sqrt{-g} \ne 1$ and the values $h_{\alpha\beta} = g_{\alpha\beta} - \eta_{\alpha\beta}$ are
not small. Briefly, this means that there is no close analogy with the electromagnetic fields. In particular, the propagation of gravity in
space-time continuum has a number of fundamental differences with the propagation of electromagnetic radiation. It is clear that harmonic
vibrations of the free gravitational field(s) do not play any role in the propagation of the free gravitational fields. Moreover, such
harmonic oscillations of the gravitational field do not exist as actual motions. This is the main result of our analysis. It appears
that the propagation of the gravitational fields is described by the same Schr\"{o}dinger equation, Eq.(\ref{EqSh3}), with the Hamiltonian
$\hat{\tilde{H}}^{pqmn}_{c}$ (spatial tensor) written explicitly in the right-hand side of this equation. By investigating this Hamiltonian
and comparing it with the well known Hamiltonian of the electromagnetic field (see, e.g., \cite{Dirac1} and \cite{Heitl}) one finds a few
similarities and a number of fundamental differences. An obvious similarity is the quadratic dependence of each of these Hamiltonians upon
momenta $\pi^{mn}$ conjugate to the corresponding spatial components of the gravitational and electromagnetic field(s). The main fundamental
difference between these two Hamiltonians follows from the fact that the Hamiltonian in metric GR is a substantially non-linear function of
the components of the metric tensor $g_{\alpha\beta}$, or field components, while the Hamiltonian of the free electromagnetic field is a
simple quadratic function of the corresponding field components. Such differences in the two Hamiltonians lead to fundamentally different
equations of motions for the free gravitational and free electromagnetic fields. In the last case the total Hamiltonian is represented as an
infinite sum of one-dimensional Hamiltonians of harmonic oscillators. This is well known Fourier resolution of the free electromagnetic field
(see, e.g., \cite{Heitl}, \cite{LLE}).
\section{Propagation of the free gravitational fields}
As we have shown above, harmonic vibrations of the free gravitational field(s) do not represent actual motions of the free
gravitational fields. This means that the propagation of the free gravitational field(s) cannot be represented as the propagation of the
harmonic waves, or harmonic oscillations. This fundamental fact is of great importance for the future theoretical development of metric GR.
For instance, it is clear now that all quantization procedures developed earlier for the free gravitational field(s) have no connection with
reality, if they are based on the concept of propagating harmonic waves, or, in other words, on systems of harmonic oscillators. However, from our
astrophysical experience we know that regular gravitational fields cannot be bounded in one spatial area and they always propagate through
the whole Universe. Therefore, the propagation of gravitational waves in the metric GR is real. Moreover, we can define the propagating
gravitational wave which can be represented as a decomposition of solitary waves. The goal of this Section is to analyze such solitary
gravitational waves, their internal structure and propagation.
Here we want to discuss the actual propagation of the free gravitational fields by using the Hamiltonian approach described above. It appears
that the laws which govern the propagation of the free gravitational field can be obtained from the classical total Hamiltonian $H_T$,
Eq.(\ref{eq0}), and/or from the corresponding Schr\"{o}dinger equation with the quantum operator $\hat{H}_T$ which corresponds to this total
Hamiltonian $H_T$. To simplify our analysis here we restrict ourselves to the classical approach only. In \cite{Our1} the following formula
for the classical total Hamiltonian $H_T$ has been derived
\begin{eqnarray}
H_{T} = - 2 g_{0\sigma} \chi^{0\sigma} + g_{00,0} \phi^{00} + 2 g_{0k,0} \phi^{0k} + \Bigl[ 2g_{0m} \phi^{mk}
- \sqrt{-g} E^{mnki} g_{mn,i} + \nonumber \\
\sqrt{-g} g_{\mu\nu,i} \frac{g^{0\mu}}{g^{00}}
\Bigl( g^{\nu k} g^{0i} - g^{\nu i} g^{0k} \Bigr) \Bigr]_{,k} \label{Hamiltt}
\end{eqnarray}
As follows from this equation, the total Hamiltonian $H_{T}$ is the sum of the terms proportional to the primary ($\phi^{00}$ and
$\phi^{0k}$) and secondary ($\chi^{0\sigma}$) constraints. It also contains the total spatial derivatives which are combined in
one `surface' term. This surface term can be represented in a slightly different form with the use of the following spatial vector (or
$(d-1)-$vector) $\overline{G} = (G^1, G^2, \ldots, G^{d-1})$, where
\begin{eqnarray}
G^{k} = 2g_{0m} \phi^{mk} - \sqrt{-g} E^{mnki} g_{mn,i} + \sqrt{-g} g_{\mu\nu,i} \frac{g^{0\mu}}{g^{00}}
\Bigl( g^{\nu k} g^{0i} - g^{\nu i} g^{0k} \Bigr) \label{vector}
\end{eqnarray}
is the $k-$contravariant component of this $(d - 1)-$vector. The vector $\overline{G}$ is the energy flux of the free gravitational
field, i.e. it determines the flow of the gravitational energy (or, gravitational flow, for short) through the closed boundary
$(d-1)-$surface of the volume occupied by the gravitational field. Indeed, by taking the integral from both sides of Eq.(\ref{Hamiltt})
over the whole volume $V$ occupied by the gravitational field and enclosed by the closed surface $S$ one finds
\begin{eqnarray}
E = \int div \overline{G} dV = - \oint (\overline{G} \cdot \overline{n}) dS_{d-1} = - \oint \overline{G} \cdot d\overline{S}_{d-1}
\label{gauss}
\end{eqnarray}
where $\overline{G}$ is the (d-1)-dimensional vector defined in Eq.(\ref{vector}), $\overline{n}$ is the unit vector of the outer normal to
the surface element $dS_{d-1}$ and $d\overline{S}_{d-1} = \overline{n} dS_{d-1}$ is the surface element oriented in the direction
of the outwardly directed normal $\overline{n} = (n_1, n_2, \ldots, n_{d-1})$. To transform the integral in
Eq.(\ref{gauss}) we have applied the Gauss formula for multi-dimensional integrals.
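The Gauss formula applied in Eq.(\ref{gauss}) can be illustrated numerically. The sketch below is a toy example of ours (a simple polynomial field on the unit cube, not the gravitational vector $\overline{G}$ itself): the midpoint-rule volume integral of the divergence matches the outward flux through the boundary.

```python
import numpy as np

N = 100
h = 1.0 / N
c = (np.arange(N) + 0.5) * h                 # midpoint grid on (0, 1)
X, Y, Z = np.meshgrid(c, c, c, indexing='ij')

# toy field G = (x^2, y^2, z^2), so div G = 2 (x + y + z)
volume_integral = np.sum(2.0 * (X + Y + Z)) * h**3

# outward flux: on the face x = 1 the integrand G.n = x^2 = 1, on x = 0 it
# vanishes; the y and z faces behave identically, so three faces contribute 1
flux = 3 * np.sum(np.ones((N, N))) * h**2

assert abs(volume_integral - flux) < 1e-9
```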
The gravitational vector $\overline{G}$, Eq.(\ref{vector}), plays the same role in metric General Relativity as the Poynting vector
(or Umov-Poynting vector) plays in Electrodynamics \cite{LLE}. Note that the left-hand side of the energy conservation law in Electrodynamics
contains the time-derivative of the total field energy, i.e. $\frac{\partial E}{\partial t}$, rather than the total field energy
itself. The same general identity must be correct in the metric GR. To avoid possible contradictions we can transform the expression in
the left-hand side of Eq.(\ref{gauss}) in the following way
\begin{eqnarray}
E &=& \int \frac{\partial E}{\partial t} \delta(t_f - t) dt = E_f = \frac{v_f}{c} \int \oint \Bigl(\frac{\partial w}{\partial t}\Bigr)
\delta(t_f - t) c dt dS_{d-1} \label{gauss1} \\
&=& x c \int \oint \Bigl(\frac{\partial w}{\partial t}\Bigr) \delta(t_f - t) dt dS_{d-1} \nonumber
\end{eqnarray}
where $E_f$ is the energy at the front of the propagating gravitational wave, $w$ is the spatial density of the energy $E$, i.e. the energy
per unit volume, $w = \lim_{V \rightarrow 0} \Bigl( \frac{E}{V} \Bigr)$, while $c$ is the speed of light in vacuum and $v_f$ is the
propagation velocity of the gravitational wave in vacuum. Note that from Eq.(\ref{gauss1}) we have $E = E_f$, i.e. all energy of the
propagating gravitational wave is concentrated in its front. The time $t_f$ in this formula is the time at which the propagating
gravitational wave reaches the boundary surface $S_{d-1}$, and the factor $x = \frac{v_f}{c}$. Very likely, the velocity of the front
propagation $v_f$ equals the speed of light in vacuum exactly, i.e. $v_f = c$, and therefore the factor $x \Bigl(= \frac{v_f}{c}\Bigr)$ in
Eq.(\ref{gauss1}) equals unity. However, such an assumption must be confirmed in a number of independent experiments.
Thus, we have shown that all energy of the propagating gravitational wave is associated only with the front of such a wave. Before and after
the wave-front area the local gravitational energy, i.e. the spatial energy density, is a constant (or zero). This conclusion follows from the fact
that the total Hamiltonian is zero before the wave front and equals the sum of constraints after the wave front. The only non-zero term in
the total Hamiltonian $H_T$ describes the gravitational flow through the surface which has been reached by the propagating gravitational wave. The
concentration of the whole energy of the propagating gravitational wave in its front is a direct consequence of the substantial non-linearity
of the field equations in metric GR. In this sense the propagating gravitational wave is very similar to a strong shock wave propagating
in a compressible gas mixture. This is a brief description of the internal structure of the propagating (solitary) gravitational wave. Such a
structure is very simple, but it is clear that only this structure agrees with the original ideas of GR proposed and developed by Mach and
Einstein.
\section{Conclusions}
We have considered the Hamiltonian formulation of metric General Relativity (or metric GR, for short) for the free gravitational field. By
using the process of quantization and the explicit form of the classical Hamiltonian \cite{Our1} we have derived the Schr\"{o}dinger equation for the
free gravitational field. The explicit forms of this Schr\"{o}dinger equation and of the $2d$ additional differential conditions, which correspond to
the $d$ primary and $d$ secondary constraints, for the wave function $\Psi$ are too complex to admit any closed analytical solution. It
is clear that this problem needs additional investigation.
Nevertheless, the Hamiltonian approach used in this study allows one to make some general predictions about the propagation of free gravitational
fields. First, it is clear that gravitational waves considered as harmonic vibrations of the field itself do not exist in metric GR (or
Einstein GR). However, in the approximation of a weak gravitational field (so-called linearized gravity) one finds motions of the field which
can be considered as `vibrations', or reasonable approximations to vibrations, of the components of some small tensors. It should be mentioned that
this result was derived by Einstein in 1918 \cite{Einst1918} for very weak gravitational field(s), but all such harmonic `vibrations', or
`oscillations', disappear quickly and completely as soon as `linearized' GR is replaced by the real, i.e. non-linear, GR. An approximate criterion
of linearity can be written in the form $\sqrt{- g} \approx 1$. If we consider the cases when $\sqrt{- g}$ cannot be replaced by unity, then
harmonic oscillations of the gravitational field play no role in the propagation of this field. In particular, this includes the actual metric GR,
where the gravitational fields are not very weak.
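For reference, the linearized limit invoked here is the standard textbook construction (a sketch of well-known material, not part of the present derivation): the metric is split into the flat background plus a small perturbation, and in a suitable gauge the vacuum equations reduce to a wave equation,

```latex
% flat background plus a small perturbation h_{\alpha\beta}
g_{\alpha\beta} = \eta_{\alpha\beta} + h_{\alpha\beta}, \qquad
  |h_{\alpha\beta}| \ll 1 ,
% in the harmonic gauge the vacuum field equations linearize to
\Box \, \bar{h}_{\alpha\beta} = 0 , \qquad
  \bar{h}_{\alpha\beta} = h_{\alpha\beta}
    - \tfrac{1}{2}\, \eta_{\alpha\beta}\, h , \qquad h = \eta^{\mu\nu} h_{\mu\nu} ,
% which admits the harmonic (`oscillating') solutions mentioned in the text
\bar{h}_{\alpha\beta} = A_{\alpha\beta} \cos\bigl( k_{\mu} x^{\mu} \bigr) ,
  \qquad k_{\mu} k^{\mu} = 0 .
```

To linear order one also has $\sqrt{-g} \approx 1 + \tfrac{1}{2} h$, which makes explicit why such harmonic solutions survive only while $\sqrt{-g}$ can be replaced by unity.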
Second, the propagation of the free gravitational field(s) in the space-time continuum is substantially different from the propagation of a
light wave in vacuum. The same Hamiltonian approach, applied to the free gravitational fields, leads to the following conclusions about
the propagation of such fields: (1) gravitational fields created by one elementary (sudden and local) gravity perturbation at
the origin begin to propagate to the rest of the Universe as a solitary gravitational wave with a steep front surface; (2) all energy and
momentum of this gravitational wave are concentrated in the front of this wave and transferred by this front; (3) behind the front of
the solitary gravitational wave we have a steady-state gravitational field distribution which corresponds to the final state of the
gravitational source; (4) there is neither energy nor momentum transfer behind the front of the propagating gravitational wave. Very likely,
any solitary gravitational wave is built up from an infinite number of harmonic oscillations during their propagation in a medium which is
substantially non-linear in the spatial components of the metric tensor $g_{\alpha\beta}$. The resulting internal structure of the solitary
gravitational wave is a direct consequence of the very substantial non-linearity of the original field equations.
Here we need to make a few remarks. First, in this study we have analyzed only solitary gravitational waves created by elementary
gravity perturbations. Furthermore, it was assumed that the gravitational wave arising during such a perturbation propagates into an empty
space-time continuum. In this case the speed of such a gravitational wave must equal the speed of light in vacuum; however, such an assumption
must be confirmed by a number of independent experiments. Otherwise, we would need to introduce another fundamental velocity different from $c$. With
respect to the fundamental ideas of metric GR proposed and developed by Mach and Einstein, before the front of the first solitary gravitational wave
propagating in vacuum we have no actual space-time. Such a spatial region can be considered as a continuum free from any gravitational fields. Behind
the front of the propagating gravitational wave we have a steady-state gravitational field distribution which corresponds to the final
distribution of gravity sources.
The main result of this study is the analytical, Hamiltonian-based description of the free gravitational field which is free from internal
contradictions. Dynamical variables in this approach are the components of the metric tensor $g_{\alpha\beta}$ (`coordinates') and the momenta
$\pi^{\mu\nu}$ conjugate to them. Such a choice of variables makes our dynamical system `natural', which means that the Lagrangian of this
system is a quadratic function of the velocities, while its Hamiltonian is also a quadratic function of the space-like momenta $\pi^{mn}$. Based on
the Hamiltonian approach we derived the Schr\"{o}dinger equation and analyzed the structure of a solitary gravitational wave which propagates in vacuum.
The front of such a wave is an infinitely thin surface which moves away (with some finite speed) from the original perturbation area. At the front
of this wave the components of the metric tensor $g_{\alpha\beta}$ change suddenly to their final values. The propagating solitary gravitational
wave has only one advancing front and no reverse front at all (in contrast with the shock waves known from gas dynamics). This means that any
elementary gravitational perturbation changes all components of the metric tensor $g_{\alpha\beta}$ at each spatial point once and forever. These
values can be changed again only by the next solitary gravitational wave. Such a structure of the propagating gravitational perturbation truly reflects
the original ideas of General Relativity proposed and developed by Mach and Einstein. It is also clear that the idea of oscillating gravitational
waves contradicts these ideas.
\begin{center}
{\bf Appendix}
\end{center}
The goal of this Appendix is to show that the internal structure of the propagating gravitational wave described above is natural, i.e. it corresponds
to the nature of the propagating gravitational wave. First, we use the result of Einstein \cite{Einst1918} that very weak gravitational waves
essentially coincide with harmonic oscillation waves, i.e. waves which are emitted by a set of harmonic oscillators. Second, the variables which
are used to describe such harmonic oscillations ($h_{\alpha\beta}$) are simply (linearly) related to the components of the metric tensor
$g_{\alpha\beta}$ (see above). The third fundamental fact is the substantially non-linear dependence of the `gravitational velocities' upon the
gravitational fields themselves. This means that the time (or temporal) derivatives of the components of the metric tensor $g_{\alpha\beta}$
are non-linear functions of these components.
To prove the last statement let us consider the Lagrangian which is the $\Gamma$-$\Gamma$ part of the Einstein-Hilbert (EH) Lagrangian. This Lagrangian is a
quadratic function of the first-order derivatives of the metric tensor (for more details see, e.g., \cite{Carm})
\begin{equation}
L = \sqrt{-g} g^{\alpha\beta} \Bigl( \Gamma_{\alpha\nu}^{\mu} \Gamma_{\beta\mu}^{\nu} -
\Gamma_{\alpha\beta}^{\nu} \Gamma_{\nu\mu}^{\mu} \Bigr) = \frac14 \sqrt{-g} B^{\alpha\beta\gamma\mu\nu\rho} g_{\alpha\beta,\gamma}
g_{\mu \nu,\rho} \label{Aeq1}
\end{equation}
where the coefficients $B^{\alpha\beta\gamma\mu\nu\rho}$ are defined in the main text. The same Lagrangian is re-written in terms of velocities, i.e.
it is reduced to the form which explicitly contains the time derivatives of the metric tensor \cite{Our1}
\begin{eqnarray}
L = \frac14 \sqrt{-g} B^{\alpha\beta0\mu\nu0} g_{\alpha\beta,0} g_{\mu\nu,0} + \frac12 \sqrt{-g} B^{\left( \alpha\beta0\mid\mu\nu k\right) }
g_{\alpha\beta,0}g_{\mu\nu,k} + \frac14 \sqrt{-g} B^{\alpha\beta k\mu\nu l} g_{\alpha\beta,k} g_{\mu\nu,l} \; \; \; , \label{Aeq2}
\end{eqnarray}
Now, we can define the momenta $\pi^{\gamma\sigma}$ conjugate to the metric tensor $g_{\gamma\sigma}$
\begin{equation}
\pi^{\gamma\sigma} = \frac{\delta L}{\delta g_{\gamma\sigma,0}} = \frac12 \sqrt{-g} B^{\left( \left( \gamma\sigma\right) 0\mid\mu\nu0\right) }
g_{\mu\nu,0} + \frac12 \sqrt{-g} B^{\left( \left( \gamma\sigma\right)0\mid\mu\nu k\right) } g_{\mu\nu,k} \; \; \; . \label{Aeq3}
\end{equation}
It can be shown that if both indices $\gamma$ and $\sigma$ are space-like (i.e. $\gamma = m$ and $\sigma = n$), then Eq.(\ref{Aeq3}) is invertible
and one finds the following expression for the velocity $g_{mn,0}$
\begin{equation}
g_{mn,0} = I_{mnpq} \frac{1}{g^{00}} \Bigl( \frac{2}{\sqrt{-g}} \pi^{pq} - B^{((pq) 0\mid\mu\nu k)} g_{\mu\nu,k} \Bigr) \label{Aeq4}
\end{equation}
For non-singular dynamical systems we can always write $\pi^{pq} \approx G^{pqab} g_{ab,0}$. Therefore, from Eq.(\ref{Aeq4}) and the explicit formulas for the
$B^{((pq) 0\mid\mu\nu k)}$ coefficients, Eq.(\ref{B}), one finds that the velocity $g_{mn,0}$ is a non-linear function of the components of the metric tensor
$g_{\alpha\beta}$. This means that all velocities of the space-like components of the metric tensor are non-linear functions of $g_{\alpha\beta}$. Finally,
by combining the three facts mentioned above we arrive at a uniform conclusion about the internal structure of the propagating gravitational wave
(see the main text). Note also that the approach used in this study is based on the method originally developed in \cite{Pirani} and later corrected in
\cite{Our1}. The approach developed by Dirac in \cite{Dirac} produces essentially the same results. The quantization procedure is even simpler in the Dirac
approach. In general, the equivalence of these two Hamiltonian procedures in metric GR was clear from the very beginning, since the dynamical variables in
both formulations are related with each other by a canonical (Hamilton) transformation (for more details see \cite{Our1}, \cite{Myths}).
\section{Acknowledgments}
I am grateful to my friends D.G.C. (Gerry) McKeon, N. Kiriyushcheva and S.V. Kuzmin (all from the University of Western Ontario, London, Ontario, CANADA) for
helpful discussions and inspiration.
\section{Introduction}
Optical interferometry is a well-known experimental technique for making
precision displacement measurements, with examples ranging from Michelson
and Morley's famous aether-drift experiment to the extraordinary sensitivity
of modern gravitational-wave detectors. With careful attention to various
fundamental and technical noise sources, displacement sensitivities of
better than $10^{-19}$ m/Hz$^{1/2}$ have been demonstrated (roughly
$10^{-13}$ wavelengths of light) \cite{LIGO, TNI1, TNI2}, and additional
improvements are expected in the near future.
In the undergraduate laboratory, however, the use of optical interferometry
as a teaching tool has not kept pace with the development of modern
measurement techniques. Interferometer experiments in the teaching lab often
stop with the demonstration of visible fringes followed by displacement
measurement using basic fringe counting. We believe there is a substantial
pedagogical opportunity in this area to explore modern precision measurement
concepts using an instrument that is visual and tactile, relatively easy to
understand, and generally fun to work with.
While there are numerous examples of precision laser interferometry in the
literature \cite{IFO0, IFO1, IFO2, IFO3, IFO4}, these instruments are
somewhat complex in their design and are therefore not optimally suited for
a teaching environment. Below we describe a laser interferometer designed to
demonstrate precision physical measurement techniques in a compact apparatus
with a relatively simple optical layout. Students place some of the optical
components and align the interferometer, thus gaining hands-on experience
with optical and laser hardware. The alignment is straightforward but not
trivial, and the various interferometer signals are directly observable on
the oscilloscope. Some features of the instrument include:\ 1) piezoelectric
control of one mirror's position, allowing precise control of the
interferometer signal; 2) the ability to lock the interferometer at its most
sensitive point; 3) the ability to modulate the mirror position while the
interferometer is locked, thus providing a displacement signal of variable
magnitude and frequency; 4) phase-sensitive detection of the modulated
displacement signal, both using the digital oscilloscope and using basic
analog signal processing.
In working with this experiment, students are guided from micron-scale
measurement precision using direct fringe counting to picometer precision
using a modulated signal and phase-sensitive signal averaging. The end
result is the ability to see displacement modulations below one picometer in
a 10-cm-long interferometer arm, which is like measuring the distance from
New York to Los Angeles with a sensitivity better than the width of a human
hair!
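The numbers behind this analogy are easy to check with a quick numerical sketch (the city-to-city distance and hair width below are rough assumed values):

```python
# Rough numerical check of the sensitivity analogy (assumed values).
arm_length = 0.10          # interferometer arm length, m
sensitivity = 1e-12        # demonstrated displacement sensitivity, m (1 pm)
fractional = sensitivity / arm_length    # dimensionless fractional precision

ny_to_la = 4.0e6           # approximate New York-Los Angeles distance, m
equivalent = fractional * ny_to_la       # same fraction of the NY-LA distance

hair_width = 70e-6         # typical human hair diameter, m
print(f"fractional precision: {fractional:.1e}")
print(f"equivalent over NY-LA: {equivalent * 1e6:.0f} micrometers")
print(f"fraction of a hair width: {equivalent / hair_width:.2f}")
```

With these values the equivalent uncertainty over the NY-LA baseline is a few tens of micrometers, indeed smaller than a hair.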
Once the interferometer performance has been explored, students then
incorporate a magnetically driven oscillating mirror in the optical layout.
Observation and analysis of nanometer-scale motions of the high-Q oscillator
reveal several aspects of its behavior, including: 1) the
near-resonant-frequency response of the oscillator; 2) mass-dependent
frequency shifts; 3) changes in the mechanical Q as damping is added; and 4)
the excitation of the oscillator via higher harmonics using a square-wave
drive signal.
With this apparatus, students learn about optical hardware and lasers,
optical alignment, laser interferometry, piezoelectric transducers,
photodetection, electronic signal processing, signal modulation to avoid
low-frequency noise, signal averaging, and phase-sensitive detection.
Achieving a displacement sensitivity of 1/100th of an atom with a table-top
instrument provides an impressive demonstration of the power of
interferometric measurement and signal-averaging techniques. Further
quantifying the behavior of a mechanical oscillator executing nanoscale
motions shows the effectiveness of laser interferometry as a measurement
tool in experimental science.
\begin{figure}[htb]
\centering
\includegraphics[width=5.0in, keepaspectratio]{IFOSetup}
\caption{The interferometer optical
layout on an aluminum breadboard with mounting holes on a 25.4-mm grid. The
Mirror/PZT consists of a small mirror glued to a piezoelectric stack mounted
to a standard optical mirror mount. Mirrors 1 and 2 are basic steering
mirrors, and the Beamsplitter is a wedge with a 50:50 dielectric coating.}
\label{ifolayout}
\end{figure}
\section{Interferometer Design and Performance}
Figure \ref{ifolayout} shows the overall optical layout of the constructed
interferometer. The 12.7-mm-thick aluminum breadboard (Thorlabs MB1224) is
mounted atop a custom-made steel electronics chassis using pliable rubber
vibration dampers, and the chassis itself rests on pliable rubber feet. We
have found that this two-stage seismic isolation system is adequate for
reducing noise in the interferometer signal arising from benchtop
vibrations, as long as the benchtop is not bumped or otherwise unnecessarily
perturbed.
The Helium-Neon laser (Meredith HNS-2P) produces a 2-mW linearly polarized
(500:1 polarization ratio) 633-nm beam with a diameter of approximately 0.8
mm, and it is mounted in a pair of custom fixed acrylic holders. The
Beamsplitter (Thorlabs BSW10) is a 1-inch-diameter wedged plate beamsplitter
with a broadband dielectric coating giving roughly equal transmitted and
reflected beams. It is mounted in a fixed optical mount (Thorlabs FMP1)
connected to a pedestal post (Thorlabs RS1.5P8E) fastened to the breadboard
using a clamping fork (Thorlabs CF125). Mirrors 1 and 2 (both Thorlabs
BB1-E02) are mounted in standard optical mounts (Thorlabs KM100) on the same
pedestal posts. Using these stout steel pedestal posts is important for
reducing unwanted motions of the optical elements.
The Mirror/PZT consists of a small mirror (12.5-mm diameter, 2-mm thick,
Edmund Optics 83-483, with an enhanced aluminum reflective coating) glued to
one end of a piezoelectric stack transducer (PZT) (Steminc SMPAK155510D10),
with the other end glued to an acrylic disk in a mirror mount. An acrylic
tube surrounds the Mirror/PZT assembly for protection, but the mirror only
contacts the PZT stack. The surface quality of the small mirror is
relatively poor (2-3 waves over one cm) compared with the other mirrors, but
we found it is adequate for this task, and the small mass of the mirror
helps push mechanical resonances of the Mirror/PZT assembly to frequencies
above 700 Hz.
The photodetector includes a Si photodiode (Thorlabs FDS100) with a 3.6 mm x
3.6 mm active area, held in a custom acrylic fixed mount. The custom photodiode
amplifier consists of a pair of operational amplifiers (TL072) that provide
double-pole low-pass filtering of the photodiode signal with a 10-$\mu $sec
time constant. The overall amplifier gain is fixed, giving approximately an
8-volt output signal with the full laser intensity incident on the
photodiode's active area.
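The quoted 10-$\mu$s time constant sets the detection bandwidth; a one-line estimate of the corresponding corner frequency (a sketch, assuming the usual single-pole relation per filter stage):

```python
import math

tau = 10e-6                          # amplifier low-pass time constant, s
f_corner = 1 / (2 * math.pi * tau)   # -3 dB frequency of a single pole, Hz
print(f"corner frequency: {f_corner / 1e3:.1f} kHz")   # ~15.9 kHz
```

This is comfortably above the sub-kHz modulation frequencies used in the experiment, so the amplifier does not limit the measurements described below.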
The optical layout shown in Figure \ref{ifolayout} was designed to provide
enough degrees of freedom to fully align the interferometer, but no more.
The Mirror/PZT pointing determines the degree to which the beam is
misaligned from retroreflecting back into the laser (described below), the
Mirror 2 pointing allows for alignment of the recombining beams, and the
Mirror 1 pointing is used to center the beam on the photodiode. In addition
to reducing the cost of the interferometer and its maintenance, using a
small number of optical elements also reduces the complexity of the set-up,
improving its function as a teaching tool.
Three of the optical elements (Mirror 1, Mirror 2, and the Beamsplitter) can
be repositioned on the breadboard or removed. The other three elements (the
laser, photodiode, and the Mirror/PZT) are fixed on the breadboard, the only
available adjustment being the pointing of the Mirror/PZT. The latter three
elements all need electrical connections, and for these the wiring is sent
down through existing holes in the breadboard and into the electronics
chassis below. The use of fixed wiring (with essentially no accessible
cabling) allows for an especially robust construction that simplifies the
operation and maintenance of the interferometer. At the same time, the three
free elements present students with a realistic experience placing and
aligning laser optics.
Before setting up the interferometer as in Figure \ref{ifolayout}, there are
a number of smaller exercises students can do with this instrument. The
Gaussian laser beam profile can be observed, as well as the divergence of
the laser beam. Using a concave lens (Thorlabs LD1464-A, f = -50 mm)
increases the beam divergence and allows a better look at the beam profile.
Laser speckle can also be observed, as well as diffraction from small bits
of dirt on the optics. Ghost laser beams from the antireflection-coated side
of the beamsplitter are clearly visible, as the wedge in the glass sends
these beams out at different directions from the main beams. Rotating the
beamsplitter 180 degrees results in a different set of ghost beams, and it
is instructive to explain these with a sketch of the two reflecting surfaces
and the resulting intensities of multiply reflected beams.
\begin{figure}[htb]
\centering
\includegraphics[width=3.0in, keepaspectratio]{IFOdiagram2}
\caption{Although the top diagram is
often used to depict a basic Michelson interferometer, in reality this
configuration is impractical. Reflections from the front mirror of the laser
produce multiple interfering interferometers that greatly complicate the
signal seen at the photodetector. In contrast, the lower diagram shows how a
slight misalignment (exaggerated in the diagram) eliminates these unwanted
reflections without the need for additional optical elements. In the
misaligned case, however, complete overlap of the recombined beams is only
possible if the arm lengths of the interferometer are equal.}
\label{ifoalignment}
\end{figure}
\subsection{Interferometer Alignment}
A satisfactory alignment of the interferometer is straightforward and easy
to achieve, but doing so requires an understanding of how real-world optics
can differ from the idealized case that is often presented. As shown in
Figure \ref{ifoalignment}, retroreflecting the laser beams at the ends of
the interferometer arms yields a recombined beam that is sent directly back
toward the laser. This beam typically reflects off the front mirror of the
laser and reenters the interferometer, yielding an optical cacophony of
multiple reflections and unwanted interference effects. Inserting an optical
isolator in the original laser beam would solve this problem, but this is an
especially expensive optical element that is best avoided in the teaching
lab.
The preferred solution to this problem is to misalign the arm mirrors
slightly, as shown in Figure \ref{ifoalignment}. With our components and the
optical layout shown in Figure \ref{ifolayout}, misaligning the Mirror/PZT
by 4.3 mrad is sufficient that the initial reflection from the Mirror/PZT
avoids striking the front mirror of the laser altogether, thus eliminating
unwanted reflections. This misalignment puts a constraint on the lengths of
the two arms, however, as can be seen from the second diagram in Figure \ref{ifoalignment}. If the two arm lengths are identical (as in the diagram),
then identical misalignments of both arm mirrors can yield (in principle)
perfectly recombined beams that are overlapping and collinear beyond the
beamsplitter. If the arm lengths are not identical, however, then perfect
recombination is no longer possible.
The arm length asymmetry constraint can be quantified by measuring the
fringe contrast seen by the detector. If the position $x$ of the Mirror/PZT
is varied over small distances, then the detector voltage can be written
\begin{equation}
V_{\det }=V_{\min }+\frac{1}{2}(V_{\max }-V_{\min })[1+\cos (2kx)]
\label{detectorvoltage}
\end{equation}
where $V_{\min }$ and $V_{\max }$ are the minimum and maximum voltages,
respectively, and $k=2\pi /\lambda $ is the wavenumber of the laser. This
signal is easily observed by sending a triangle wave to the PZT, thus
translating the mirror back and forth, while $V_{\det }$ is observed on the
oscilloscope. We define the interferometer fringe contrast to be
\[
F_{C}=\frac{V_{\max }-V_{\min }}{V_{\max }+V_{\min }}
\]
and a high fringe contrast with $F_{C}\approx 1$ is desirable for obtaining
the best interferometer sensitivity.
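Equation \ref{detectorvoltage} and the fringe-contrast definition can be combined in a short numerical sketch (the two voltage values are illustrative assumptions, not measured data):

```python
import math

lam = 633e-9                  # HeNe wavelength, m
k = 2 * math.pi / lam         # wavenumber

V_min, V_max = 0.1, 7.9       # illustrative detector voltages, V

def V_det(x):
    """Detector voltage vs. mirror displacement x, Eq. (detectorvoltage)."""
    return V_min + 0.5 * (V_max - V_min) * (1 + math.cos(2 * k * x))

F_C = (V_max - V_min) / (V_max + V_min)   # fringe contrast
print(f"fringe contrast: {F_C:.3f}")

# moving the mirror by lambda/4 sweeps from a bright to a dark fringe
print(f"V at x = 0:        {V_det(0):.2f} V")
print(f"V at x = lambda/4: {V_det(lam / 4):.2f} V")
```

Note that a quarter-wavelength of mirror travel spans a full fringe, since the reflected beam accumulates twice the mirror displacement.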
With this background, the interferometer alignment consists of several
steps: 1) Place the beamsplitter so the reflected beam is at a 90-degree
angle from the original laser beam. The beamsplitter coating is designed for
a 90-degree reflection angle, plus it is generally good practice to keep the
beams on a simple rectangular grid as much as possible; 2) With Mirror 2
blocked, adjust the Mirror/PZT pointing so the reflected beam just misses
the front mirror of the laser. This is easily done by observing any multiple
reflections at the photodiode using a white card; 3) Adjust the Mirror 1
pointing so the beam is centered on the photodiode; 4) Unblock Mirror 2 and
adjust its pointing to produce a single recombined beam at the photodiode;
5) Send a triangle wave signal to the PZT, observe $V_{\det }$ with the
oscilloscope, and adjust the Mirror 2 pointing further to obtain a maximum
fringe contrast $F_{C,\max }.$
Figure \ref{contrastplot} shows our measurements of $F_{C,\max }$ as a
function of the Mirror 2 arm length when the Mirror/PZT misalignment was set
to 4.3 mrad and the Mirror/PZT arm length was 110 mm. As expected, the
highest $F_{C,\max }$ was achieved when the arm lengths were equal. With
unequal arm lengths, perfect recombination of the beams is not possible, and
we see that $F_{C,\max }$ drops off quadratically with increasing asymmetry
in the arm lengths.
As another alignment test, we misaligned the Mirror/PZT by 1.3 mrad and
otherwise followed the same alignment procedure described above, giving the
other set of data points shown in Figure \ref{contrastplot}. With this
smaller misalignment, there were multiple unwanted reflections from the
front mirror of the laser, but these extra beams were displaced just enough
to miss the active area of the photodetector. In this case we see a weaker
quadratic dependence of $F_{C,\max }$ on the Mirror 2 position, and about
the same $F_{C,\max }$ when the arm lengths are identical.
We did not examine why $F_{C,\max }$ is below unity for identical arm
lengths, but this is likely caused by the beamsplitter producing unequal
beam intensities, and perhaps by other optical imperfections in our system.
The peak value of about 97\% shows little dependence on polarization angle,
as observed by rotating the laser tube in its mount. Extrapolating the data
in Figure \ref{contrastplot} to zero misalignment suggests that the laser
has an intrinsic coherence length of roughly 15 cm. We did not investigate
the origin of this coherence length, although it appears likely that it
arises in part from the excitation of more than one longitudinal mode in the
laser cavity.
The smaller 1.3-mrad misalignment produces a higher fringe contrast for
unequal arm lengths, but this also requires that students deal with what can
be a confusing array of unwanted reflections. When setting up the
interferometer configuration shown in Figure \ref{ifolayout}, we typically
have students use the larger misalignment of 4.3 mrad, which is set up by
observing and then quickly eliminating the unwanted reflections off the
front mirror of the laser. We then ask students to match the interferometer
arm lengths to an accuracy of a few millimeters, as this can be done quite
easily from direct visual measurement using a plastic ruler.
Once the interferometer is roughly aligned (with the 4.3 mrad misalignment),
it is also instructive to view the optical fringes by eye using a white
card. Placing a negative lens in front of the beamsplitter yields a
bull's-eye pattern of fringes at the photodetector, and this pattern changes
as the Mirror 2 pointing is adjusted. Placing the same lens after the
beamsplitter gives a linear pattern of fringes, and the imperfect best
fringe contrast can be easily seen by attempting (unsuccessfully) to produce
a perfectly dark fringe on the card.
\begin{figure}[htb]
\centering
\includegraphics[width=3.5in,keepaspectratio]{ContrastPlot2}
\caption{The measured fringe contrast
$F_{C,\max }$ as a function of the length of the Mirror 2 arm of the
interferometer. For each data point, Mirror 2 was repositioned and
reclamped, and then the Mirror 2 pointing was adjusted to obtain the maximum
possible fringe contrast. The \textquotedblleft X\textquotedblright\ points
were taken with a Mirror/PZT misalignment of 4.3 mrad (relative to
retroreflection), while the circles were taken with a misalignment of 1.3
mrad. The lines show parabolic fits to the data.}
\label{contrastplot}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=4in,keepaspectratio]{servo}
\caption{The electronics used to scan,
lock, and modulate the interferometer signal. With switch SW in the SCAN
position, a signal input to the Scan IN port is sent essentially directly to
the PZT. With the switch in the LOCK position, a feedback loop locks the
Mirror/PZT so the average photodiode signal (PD) equals the Servo Set Point.
With the interferometer locked, a signal sent to the Mod IN port
additionally modulates the mirror position. A resistor divider is used to
turn off the modulation or reduce its amplitude by a factor of 1, 10, 100,
or 1000.}
\label{servo}
\end{figure}
\subsection{Interferometer Locking}
The interferometer is locked using the electronic servo circuit shown in
Figure \ref{servo}. In short, the photodiode signal $V_{\det }$ is fed back
to the PZT via this circuit to keep the signal at some constant average
value, thus keeping the arm length difference constant to typically much
better than $\lambda /2.$ The total range of the PZT is only about 1 $\mu $m
(with an applied voltage ranging from 0 to 24 volts), but this is sufficient
to keep the interferometer locked for hours at a time provided the system is
stable and undisturbed. Typically the set point is adjusted so the
interferometer is locked at $V_{\det }=(V_{\min }+V_{\max })/2,$ which is
the point where the interferometer sensitivity $dV_{\det }/dx$ is highest.
Note that the detector signal $V_{\det }$ is easily calibrated by measuring
$\Delta V=V_{\max }-V_{\min }$ on the oscilloscope and using Equation \ref{detectorvoltage}, giving the conveniently simple approximation
\[
\left( \frac{dV_{\det }}{dx}\right) _{\max }\approx \frac{\Delta V}{100\ \mathrm{nm}}
\]
which is accurate to better than one percent. Simultaneously measuring
$V_{\det }$ and the voltage $V_{PZT}$ sent to the PZT via the Scan IN port
(see Figure \ref{servo}) quickly gives the absolute PZT response function
$dx/dV_{PZT}$.
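The quoted sub-percent accuracy of this shorthand is easy to verify: the maximum slope of Equation \ref{detectorvoltage} is $\Delta V\,k$, so the relevant length scale is $1/k = \lambda/2\pi$. A short numerical check:

```python
import math

lam = 633e-9            # HeNe wavelength, m
k = 2 * math.pi / lam   # wavenumber

# Max slope of Eq. (detectorvoltage): |dV/dx|_max = DeltaV * k,
# so the calibration length scale is 1/k = lambda / (2*pi)
length_scale = 1 / k    # meters of mirror travel per "DeltaV" of signal
print(f"1/k = {length_scale * 1e9:.2f} nm")   # ~100.7 nm

# relative error of the 100-nm shorthand
err = abs(length_scale - 100e-9) / length_scale
print(f"approximation error: {err * 100:.2f} %")
```

The exact scale is $633/2\pi \approx 100.7$ nm, so rounding to 100 nm is in error by under one percent, consistent with the statement above.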
The PZT can also be modulated with the servo locked using the circuit in
Figure \ref{servo} along with an external modulation signal. Figure \ref{servoon} shows the interferometer response as a function of modulation
frequency in this case, for a fixed input modulation signal amplitude. To
produce these data we locked the interferometer at $V_{\det }=(V_{\min
}+V_{\max })/2$ and provided a constant-amplitude sine-wave signal to the
modulation input port shown in Figure \ref{servo}. The resulting sine-wave
response of $V_{\det }$ was then measured using a digital oscilloscope for
different values of the modulation frequency, with the servo gain at its
minimum and maximum settings (see Figure \ref{servo}).
A straightforward analysis of the servo circuit predicts that the
interferometer response should be given by
\[
\left\vert \delta V_{det}\right\vert =AG_{1}V_{mod}\left[ 1+\frac{AG_{2}}{2\pi \tau \nu }\right] ^{-1/2}
\]
where $A(\nu )=dV_{\det }/dV_{PZT}$ includes the frequency-dependent PZT
response, $\nu $ is the modulation frequency, $V_{mod}$ is the
modulation voltage, and the remaining parameters $(G_{1}=0.11;$ $G_{2}=22$
(high gain), $2$ (low gain); $\tau =RC=0.1$ seconds) can be derived from the
servo circuit elements shown in Figure \ref{servo}. Direct measurements
yielded $A(\nu )\approx 3.15,$ where this number was nearly
frequency-independent below 600 Hz and dropped off substantially above 1
kHz. In addition, a number of mechanical resonances in the Mirror/PZT
housing were also seen above 700 Hz. The theory curves shown in Figure
\ref{servoon} assume a frequency-independent $A(\nu )$ for simplicity.
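A minimal sketch evaluating this expression with the circuit parameters quoted in the text (treating $A$ as frequency-independent, as the theory curves do) reproduces the expected behavior: the servo suppresses the response more strongly at low frequency and at high gain:

```python
import math

# Servo-loop parameters quoted in the text; A is taken as frequency-independent,
# as in the theory curves.
A, G1, tau, V_mod = 3.15, 0.11, 0.1, 1.0

def locked_response(nu_hz, G2):
    """|delta V_det| predicted by the servo-circuit analysis above."""
    return A * G1 * V_mod * (1.0 + A * G2 / (2.0 * math.pi * tau * nu_hz)) ** -0.5

for nu in (10.0, 100.0, 600.0):
    print(f"{nu:5.0f} Hz: low gain {locked_response(nu, 2):.3f}, "
          f"high gain {locked_response(nu, 22):.3f}")
```

At high frequency both curves approach the unsuppressed response $AG_{1}V_{mod}$, as in the measured data.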
From these data we see that at low frequencies the servo compensates for the
modulation input, reducing the interferometer response, and the reduction is
larger when the servo gain is higher. This behavior is well described by the
servo circuit theory. At frequencies above about 700 Hz, the data begin to
deviate substantially from the simple theory. The theory curves in principle
contain no adjustable parameters, but we found that the data were better
matched by including an overall multiplicative factor of 0.94 in the theory.
This six-percent discrepancy was consistent with the overall uncertainties
in the various circuit parameters.
\begin{figure}[htb]
\centering
\includegraphics[width=3.5in, keepaspectratio]{ServoOn2}
\caption{Measurements of the
interferometer response as a function of the PZT modulation frequency, with
the servo locked. The upper and lower data points were obtained with the
servo gain at its lowest and highest settings, respectively, using the servo
control circuit shown in Figure \protect\ref{servo}. The theory curves were
derived from an analysis of the servo control circuit, using parameters that
were measured or derived from circuit elements. To better match the data,
the two theory curves each include an additional multiplicative factor of
0.94, consistent with the estimated overall uncertainty in determining the
circuit parameters.}
\label{servoon}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=3.5in, keepaspectratio]{PSD}
\caption{The electronics used to perform
a phase-sensitive detection and averaging of the modulated interferometer
signal. The input signal from the photodiode amplifier (PD) is first
low-pass filtered and further amplified, plus a negative copy is produced
with a $G=-1$ amplifier. An analog electronic switch chops between these two
signals, driven synchronously with the modulation input, and the result is
amplified and averaged using a low-pass filter with a time constant of 10
seconds.}
\label{psdcircuit}
\end{figure}
\subsection{Phase-Sensitive Detection}
Since the purpose of building an interferometer is typically to measure
small displacement signals, we sought to produce the highest displacement
sensitivity we could easily build in a compact teaching instrument. With the
interferometer locked at its most sensitive point, direct observations of
fluctuations in $V_{\det }$ indicate an ambient displacement noise of
roughly 1 nm RMS over short timescales at the maximum servo gain, and about
4 nm at the minimum servo gain. Long-term drifts are compensated for by the
servo, and these drifts were not investigated further. The short-term noise
is mainly caused by local seismic and acoustic noise. Tapping on the table
or talking around the interferometer clearly increases these noise sources.
To quantify the interferometer sensitivity, we modulated the PZT with a
square wave signal at various amplitudes and frequencies, and we observed
the resulting changes in $V_{\det }.$ The environmental noise sources were
greater at lower frequencies, so we found it optimal to modulate the PZT at
around 600 Hz. This frequency was above much of the environmental noise and
above where the signal was reduced by the servo, but below the mechanical
resonances in the PZT housing.
With a large modulation amplitude, one can observe and measure the response
in $V_{\det }$ directly on the oscilloscope, as the signal/noise ratio is
high for a single modulation cycle. At lower amplitudes, the signal is
better observed by averaging traces using the digital oscilloscope, while
triggering with the synchronous modulation input signal. By averaging 128
traces, for example, one can see signals that are about ten times lower than
is possible without averaging, as expected.
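The $\sqrt{N}$ improvement from trace averaging is easy to reproduce numerically. The following simulation (synthetic data, not measurements from the instrument) averages 128 noisy traces of a small square-wave signal and compares the residual noise with the $\sigma /\sqrt{N}$ prediction:

```python
import math
import random

random.seed(0)
n_traces, n_pts = 128, 200
amp, sigma = 0.05, 1.0        # a signal buried well below the single-trace noise

# Synchronous square-wave "signal" plus Gaussian noise, as seen on the oscilloscope
signal = [amp if i < n_pts // 2 else -amp for i in range(n_pts)]
avg = [0.0] * n_pts
for _ in range(n_traces):
    for i in range(n_pts):
        avg[i] += (signal[i] + random.gauss(0.0, sigma)) / n_traces

residual_rms = math.sqrt(sum((avg[i] - signal[i]) ** 2 for i in range(n_pts)) / n_pts)
print(f"noise after {n_traces} averages: {residual_rms:.3f} "
      f"(sqrt(N) prediction: {sigma / math.sqrt(n_traces):.3f})")
```

With $N=128$ the noise drops by $\sqrt{128}\approx 11$, consistent with the factor of about ten quoted above.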
To carry this process further, we constructed the basic phase-sensitive
detector circuit shown in Figure \ref{psdcircuit}, which is essentially a
simple (and inexpensive) alternative to using a lock-in amplifier. By
integrating for ten seconds, this circuit averages the modulation signal
over about 6000 cycles, thus providing nearly another order-of-magnitude
improvement over signal averaging using the oscilloscope. The output
$V_{PSD}$ from this averaging circuit also provides a convenient voltage
proportional to the interferometer modulation signal that can be used for
additional data analysis. For example, observing the distribution of
fluctuations in $V_{PSD}$ over timescales of minutes to hours gives a
measure of the uncertainty in the displacement measurement being made by the
interferometer.
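The chop-and-average operation of this circuit can be mimicked in software: multiply the noisy record by a $\pm 1$ reference synchronous with the modulation and average. The sketch below (a digital stand-in for the analog circuit, with all values hypothetical) recovers a synchronous signal one hundred times smaller than the single-sample noise:

```python
import math
import random

random.seed(1)
f_mod, f_samp = 600.0, 50_000.0      # modulation and sampling rates (assumed values)
amp, sigma = 0.01, 1.0               # synchronous signal 100x smaller than the noise
n = int(2.0 * f_samp)                # two seconds of data

acc = 0.0
for k in range(n):
    t = k / f_samp
    ref = 1.0 if math.sin(2.0 * math.pi * f_mod * t) >= 0.0 else -1.0
    x = amp * ref + random.gauss(0.0, sigma)    # detector: tiny square wave + noise
    acc += ref * x                              # chop between +/- copies, then average
estimate = acc / n
print(f"recovered amplitude: {estimate:.4f} (true value {amp})")
```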
Our pedagogical goal in including these measurement strategies is to
introduce students to some of the fundamentals of modern signal analysis.
Observing the interferometer signal directly on the oscilloscope is the most
basic measurement technique, but it is also the least sensitive, as the
direct signal is strongly affected by environmental noise. A substantial
first improvement is obtained by modulating the signal at higher
frequencies, thus avoiding the low-frequency noise components. Simple signal
averaging using the digital oscilloscope further increases the signal/noise
ratio, demonstrating a simple form of phase-sensitive detection and
averaging, using the strong modulation input signal to trigger the
oscilloscope. Additional averaging using the circuit in Figure \ref{psdcircuit}
yields an expected additional improvement in sensitivity. Seeing the gains
in sensitivity at each stage in the experiment introduces students to the
concepts of signal modulation, phase-sensitive detection, and signal
averaging, driving home the $\sqrt{N}$ averaging rule.
\subsection{Interferometer Response}
Figure \ref{displacement} shows the measured interferometer response at 600
Hz as a function of the PZT modulation amplitude. When the displacement
amplitude was above 0.1 nm, the modulation signal was strong enough to be
measured using the digital oscilloscope's measure feature while averaging
traces. At low displacement amplitudes, the signal became essentially
unmeasurable using the oscilloscope alone, but still appeared with high
signal-to-noise using the $V_{PSD}$ output. The overlap between these two
methods was used to determine a scaling factor between them. The absolute
measurement accuracy was about 5\% for these data, while the 1$\sigma $
displacement sensitivity at the lowest amplitudes was below 1 picometer.
These data indicate that systematic nonlinearities in the photodiode and the
PZT stack response were together below 10 percent over a range of five
orders of magnitude.
\begin{figure}[htb]
\centering
\includegraphics[width=5in, keepaspectratio]{Displacement}
\caption{The measured mirror displacement
when the piezoelectric transducer was driven with a square wave modulation
at 600 Hz, as a function of the modulation amplitude. The high-amplitude
points (closed diamonds) were measured by observing the photodiode signal
directly on the oscilloscope, while the low-amplitude points (open circles)
were measured using the phase-sensitive averaging circuit shown in Figure
\protect\ref{psdcircuit}. The fit line gives a PZT response of 45 nm/volt.
These data indicate that the combined PZT and photodiode responses are quite
linear over a range of five orders of magnitude in amplitude. At the lowest
modulation amplitudes, the noise in the averaged interferometer signal was
below one picometer for 10-second averaging times.}
\label{displacement}
\end{figure}
\section{Measuring a Simple Harmonic Oscillator}
Once students have constructed, aligned, and characterized the
interferometer, they can then use it to observe the nanoscale motions of a
simple harmonic oscillator. The optical layout for this second stage of the
experiment is shown in Figure \ref{LayoutOsc}, and the mechanical
construction of the oscillator is shown in Figure \ref{oscillator}. Wiring
for the coil runs through a vertical hole in the aluminum plate (below the
coil but not shown in Figure \ref{oscillator}) and then through one of the
holes in the breadboard to the electronics chassis below. For this reason
the oscillator position on the breadboard cannot be changed, but it does not
interfere with the basic interferometer layout shown in Figure
\ref{ifolayout}.
\begin{figure}[htb]
\centering
\includegraphics[width=5.0in, keepaspectratio]{WithOscillator}
\caption{The interferometer optical
layout including the mechanical oscillator shown in detail in Figure \protect
\ref{oscillator}.}
\label{LayoutOsc}
\end{figure}
\begin{figure}[htb]
\centering
\includegraphics[width=2.5in, keepaspectratio]{Oscillator}
\caption{A side view of the magnetically
driven mechanical oscillator shown in Figure \protect\ref{LayoutOsc}. The
main body is constructed from 12.7-mm-thick aluminum plate (alloy 6061), and
the two vertical holes in the base are 76.2 mm apart to match the holes in
the breadboard. Sending an alternating current through the coil applies a
corresponding force to the permanent magnet, driving torsional oscillations
of the mirror arm about its narrow pivot point. Additional weights can be
added to the 8-32 tapped mounting hole to change the resonant frequency of
the oscillator.}
\label{oscillator}
\end{figure}
The oscillator response can be observed by viewing the interferometer signal
together with the coil drive signal on the oscilloscope, and example data
are shown in Figure \ref{oscillatorresponse}. Here the coil was driven with
a sinusoidal signal from a digital function generator with $<1$ mHz absolute
frequency accuracy, and the oscillator response was measured for each point
by averaging 64 traces on the oscilloscope. Once again, using the drive
signal to trigger the oscilloscope ensures a good phase-locked average even
with a small signal amplitude. As shown also in Figure \ref{displacement},
sub-nanometer sensitivity is easily achievable using this simple
signal-averaging method. The results in Figure \ref{oscillatorresponse} show
that this mechanical system is well described by a
simple-harmonic-oscillator model. Inserting a small piece of foam between
the magnet and the coil substantially increases the oscillator damping, and
students can examine this by measuring the oscillator $Q$ with different
amounts of damping.
The tapped mounting hole behind the oscillator mirror (see Figure
\ref{oscillator}) allows additional weights to be added to the oscillator. We
use nylon, aluminum, steel, and brass thumbscrews and nuts to give a series
of weights with roughly equal mass spacings. Students weigh the masses using
an inexpensive digital scale with 0.1 gram accuracy (American Weigh
AWS-100). To achieve satisfactory results, we have found that the weights
need to be well balanced (with one on each side of the oscillator), screwed
in firmly, and no more than about 1.5 cm in total length. If these
conditions are not met, additional mechanical resonances can influence the
oscillator response.
The resonant frequency $\nu _{0}$ of the oscillator can be satisfactorily
measured by finding the frequency of maximum oscillator amplitude while
viewing the signal directly on the oscilloscope, and an accuracy of better
than 1 Hz can be obtained quite quickly with a simple analog signal
generator, using the oscilloscope to measure the drive frequency. The
results shown in Figure \ref{oscillatormass} show that $\nu _{0}^{-2}$ is
proportional to the added mass, which is expected from a
simple-harmonic-oscillator model. Additional parameters describing the
harmonic oscillator characteristics can be extracted from the slope and
intercept of the fit line.
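To make the slope/intercept extraction concrete, the sketch below generates $\nu _{0}^{-2}$ versus added mass from the model $\nu _{0}=(1/2\pi )\sqrt{k/(m_{0}+m)}$ using a hypothetical stiffness $k$ and base mass $m_{0}$ (illustrative values only, not measured ones), then recovers both parameters from the fit line:

```python
import math

# Hypothetical effective stiffness and base mass (illustrative values only)
k, m0 = 2.0e4, 0.020                          # N/m, kg
added = [0.000, 0.002, 0.004, 0.006, 0.008]   # added masses, kg

# nu0 = (1/2 pi) sqrt(k/(m0+m))  ->  1/nu0^2 = (4 pi^2 / k)(m0 + m), linear in m
inv_nu0_sq = [(m0 + m) * (2.0 * math.pi) ** 2 / k for m in added]

slope = (inv_nu0_sq[-1] - inv_nu0_sq[0]) / (added[-1] - added[0])
intercept = inv_nu0_sq[0]
k_fit = (2.0 * math.pi) ** 2 / slope                  # stiffness from the slope
m0_fit = intercept * k_fit / (2.0 * math.pi) ** 2     # base mass from the intercept
print(f"k = {k_fit:.0f} N/m, m0 = {1e3 * m0_fit:.1f} g")
```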
As a final experiment, students can drive the coil with a square wave signal
at different frequencies to observe the resulting motion. The oscillator
shows a resonant behavior when the coil is driven at $\nu _{0},$ $\nu
_{0}/3, $ $\nu _{0}/5,$ etc., and at each of these frequencies the
oscillator response remains at $\nu _{0}.$ Measurements of the peak resonant
amplitude at each frequency show the behavior expected from a Fourier
decomposition of the square wave signal.
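This can be checked against the Fourier coefficients of a unit square wave, $b_{n}=4/(\pi n)$ for odd $n$ and zero for even $n$, so driving at $\nu _{0}/3$ should give one third of the on-resonance amplitude. A small numerical verification by direct integration:

```python
import math

def b_n(n, n_pts=20_000):
    """Fourier sine coefficient of a unit square wave (period 1), midpoint rule."""
    total = 0.0
    for k in range(n_pts):
        t = (k + 0.5) / n_pts
        square = 1.0 if t < 0.5 else -1.0
        total += square * math.sin(2.0 * math.pi * n * t)
    return 2.0 * total / n_pts

for n in (1, 3, 5):
    print(f"drive at nu0/{n}: harmonic amplitude b_{n} = {b_n(n):.4f} "
          f"(analytic 4/(pi n) = {4.0 / (math.pi * n):.4f})")
```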
In summary, we have developed a fairly basic table-top laser interferometer
for use in the undergraduate teaching laboratory. Students first assemble
and align the interferometer, gaining hands-on experience using optical and
laser hardware. The experiment then focuses on a variety of measurement
strategies and signal-averaging techniques, with the goal of using the
interferometer to demonstrate picometer displacement sensitivity over arm
lengths of 10 centimeters. In a second stage of the experiment, students use
the interferometer to quantify the nanoscale motions of a driven harmonic
oscillator system.
This work was supported in part by the California Institute of Technology
and by a generous donation from Dr. Vineer Bhansali. Frank Rice contributed
insightful ideas to several aspects of the interferometer construction and
data analysis.
\section{Introduction}
Recovering a structured signal from a relatively small number of linear measurements has received a great deal of attention during the past few decades \cite{chandrasekaran2012convex, vershynin2015estimation, tropp2015convex, thrampoulidis2015recovering, vaiter2015low}. Typical examples of structured signals include sparse vectors and low-rank matrices. This problem arises in an incredibly wide range of applications throughout signal processing and machine learning, such as medical imaging\cite{otazo2015low}, radar imaging\cite{potter2010sparsity}, communication systems\cite{garcia2017direct}, pattern recognition\cite{wright2010sparse}, collaborative filtering \cite{srebro2010collaborative} and so on. Since this problem is highly ill-posed in general, a popular recovery procedure is to seek the solution with the desired structure that is consistent with the observations, leading to
\begin{equation} \label{StandardSignalRecovery}
\min _{\bm{x}}\|\bm{x}\|_{\mathrm{sig}} ~~~~\text { s.t. }~~ \bm{y}=\bm{A} \bm{x},
\end{equation}
where $\bm{y}$ stands for the measurements, $\bm{A}$ denotes the measurement matrix, and $\norm{\cdot}_{\mathrm{sig}}$ is a suitable norm (or convex function) which promotes the structure of the signal.
However, in many practical applications of interest, we can also acquire other prior knowledge of the desired signal in addition to structural information. For instance, in compressed sensing, besides the sparsity constraint, we might have access to the support information \cite{vaswani2010modified}, the weights of the components of the desired signal \cite{candes2008enhancing,tanaka2010optimal,khajehnejad2011analyzing,scarlett2013compressed,mansour2017recoverya,needell2017weighted}, or a similar copy of the original signal \cite{chen2008prior,weizman2015compressed,mota2017compressed,zhang2017compressed}. While in matrix completion, certain subspace information of the desired low-rank matrix might be available to us \cite{srebro2010collaborative,foygel2011learning,xu2013speedup,chiang2015matrix,eftekhari2018weighted}. In these scenarios, a key question to ask is how to use side information to improve the recovery performance of structured signals.
Recently, the present authors suggested a unified framework to incorporate prior information into structured signals recovery via maximizing the correlation between the desired signal $\bm{x}^\star$ and the prior information $\bm{\phi}$ in \cite{zhang2017compressed,zhang2018recovery}
\begin{equation} \label{OurSignalRecovery}
\min _{\bm{x}}\|\bm{x}\|_{\mathrm{sig}}-\lambda\langle\bm{x}, \bm{\phi}\rangle ~~~~\text { s.t. } \bm{y}=\bm{A} \bm{x},
\end{equation}
where $\bm{\phi}$ is some kind of prior information of the desired signal, $\langle\cdot, \cdot\rangle$ denotes the inner product, and $\lambda \ge 0$ is the tradeoff parameter. The motivation behind this approach is very natural since if $\bm{\phi}$ is similar to $\bm{x}^\star$, then they may be highly correlated. We also theoretically demonstrate that, under sub-Gaussian measurements, this approach \eqref{OurSignalRecovery} can greatly outperform the standard structured signal recovery procedure \eqref{StandardSignalRecovery} when the prior information is reliable.
When this framework is specialized to low-rank matrices, the recovery procedure \eqref{OurSignalRecovery} becomes
\begin{equation}\label{OurMatrixRecovery}
\min_{\bm{X} \in \mathbb{R}^{n \times n}} \norm{\bm{X}}_{*} - \lambda \ip{\bm{\Phi}}{\bm{X}} \quad \text{s.t.}~\bm{y}=\mathcal{A}(\bm{X}),
\end{equation}
where $\norm{\cdot}_{*}$ is the nuclear norm, $\ip{\bm{\Phi}}{\bm{X}} = \tr(\bm{\Phi}^T\bm{X})$ denotes the matrix inner product, and $\mathcal{A}: \bm{X} \to \sum_{j=1}^{m} \ip{\bm{A}^j}{\bm{X}} \bm{e}_j$ is the measurement operator. Here, $\bm{e}_1,\ldots,\bm{e}_m$ denote the standard basis vectors in $\mathbb{R}^{m}$ and $\bm{A}^1,\ldots,\bm{A}^m \in \mathbb{R}^{n \times n}$ are measurement matrices. Although the theory developed for \eqref{OurMatrixRecovery} under sub-Gaussian measurements (i.e., $\{\bm{A}^j\}$ are independent sub-Gaussian matrices) is very informative, it might be far from the type of observations we often encounter in practice. The following are some typical practical applications where $\{\bm{A}^j\}$ are highly structured (i.e., $\bm{A}^j$ has only a single nonzero value of $1$ corresponding to the row and column of the observed element) and some prior information is available:
\begin{itemize}
\item \textbf{Semi-supervised clustering}\cite{yi2013semi,bair2013semi}. Semi-supervised clustering is an important machine learning problem, which is to find a good clustering such that similar items belong to the same cluster based on a relatively small amount of labeled data. One promising approach is to construct a similarity matrix with missing entries by using the labeled data and then complete the partial similarity matrix via matrix completion. In addition, the data attributes can be collected as side information, which represent the similarity among items.
\item \textbf{Collaborative filtering} \cite{koren2010collaborative,rao2015collaborative,xu2016dynamic}. Collaborative filtering is another promising machine learning problem, which is to predict new ratings based on a limited number of ratings for different movies from different users. One popular scheme is to construct a partial rating matrix based on the known ratings and then complete the partial user-item rating matrix by using matrix completion. Moreover, user attributes and item attributes can serve as prior information. Here, user attributes denote the similarities among users while item attributes illustrate the similarities among items.
\item \textbf{Dynamic sensor network localization} \cite{so2007theory,wang2008further,vaghefi2012cooperative}. Dynamic sensor network localization is a key technology in sensor wireless network, which helps the battery-powered system to improve the location accuracy and network efficiency. Due to the limit of resources, only a few sensors know their location. One typical approach to locate sensors is to complete the current incomplete distance matrix via matrix completion. In particular, when the sensor position changes slowly, the previous distance matrix is very similar to the current one, which can be used as prior information.
\end{itemize}
Motivated by the above examples, it is highly desirable to utilize side information to improve the performance of matrix completion.
In this paper, we naturally generalize the recovery procedure \eqref{OurMatrixRecovery} to integrate prior information into matrix completion
\begin{equation}\label{OurMatrixCompletion}
\min_{\bm{X}\in \mathbb{R}^{n \times n}} \norm{\bm{X}}_{*} - \lambda \ip{\bm{\Phi}}{\bm{X}} \quad \text{s.t.}~\bm{Y}=\mathcal{R}_p(\bm{X}),
\end{equation}
where $\bm{Y} \in \mathbb{R}^{n \times n}$ is the matrix of measurements and $\mathcal{R}_p(\cdot)$ denotes the Bernoulli sampling operator which is defined as
\begin{equation} \label{eq:def of R(Z)}
\mathcal{R}_p(\bm{X}) = \sum_{i,j=1}^{n}\frac{\delta_{ij}}{p_{ij}} \ip{\bm{e}_{i}\bm{e}_{j}^T}{\bm{X}} \bm{e}_{i}\bm{e}_{j}^T.
\end{equation}
Here $\{\delta_{ij}\}$ are i.i.d. Bernoulli random variables which take $1$ with probability $p_{ij}$ and $0$ with probability $1-p_{ij}$. It is not hard to see that we can observe $m=\sum_{i,j=1}^{n} p_{ij}$ elements in expectation. We then establish performance guarantees for this approach. Specifically, we show that with suitable side information, this approach \eqref{OurMatrixCompletion} can decrease the sample complexity by a logarithmic factor compared with the standard matrix completion procedure. It is worth pointing out that the extension of our theory from matrix recovery \eqref{OurMatrixRecovery} to matrix completion \eqref{OurMatrixCompletion} is not straightforward and requires totally different analytical techniques.
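As a concrete illustration, the sketch below implements the Bernoulli sampling operator $\mathcal{R}_p$ defined above (matrix size and sampling probabilities are arbitrary choices for the demonstration). The $1/p_{ij}$ rescaling makes the operator unbiased, $\mathbb{E}[\mathcal{R}_p(\bm{X})]=\bm{X}$:

```python
import numpy as np

rng = np.random.default_rng(0)

def R_p(X, P):
    """Bernoulli sampling operator from the text: observe X[i,j] with probability
    P[i,j] and rescale the observed entries by 1/p_ij (unbiased sampling)."""
    delta = rng.random(X.shape) < P
    Y = np.zeros_like(X, dtype=float)
    Y[delta] = X[delta] / P[delta]
    return Y

n, r = 50, 2
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # a rank-r test matrix
P = np.full((n, n), 0.3)                                        # uniform sampling probs
Y = R_p(X, P)
frac = np.mean(Y != 0.0)
print(f"fraction of observed entries: {frac:.3f} (expected {P[0, 0]})")
```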
\begin{table*}
\caption{Summary of different matrix completion approaches with side information. The parameters $\alpha,\beta,$ and $\theta$ are related to the quality of side information.}
\label{table: comparison}
\centering
\begin{tabular}{|c|c|c|c|c|}
\toprule
Approach & Sample complexity & Quality of side information & Dimension of subspace side information & Reference\\
\midrule
MC & $O(rn \log^2 n)$ & None & None & \cite{candes2009exact} \\
IMC & $O(rn \log n)$ & Perfect & $s>r$ & \cite{jain2013provable} \\
Dirty IMC & $O(rn^{3/2} \log n)$ & Imperfect & $s>r$ & \cite{chiang2015matrix}\\
DWMC & $O(\beta rn \log^2 n)$ & Imperfect & $s=r$ & \cite{chen2015completing}\\
WMC & $O(rn\log n \log(\theta n))$ & Imperfect & $s=r$ & \cite{eftekhari2018weighted}\\
Ours & $O(rn\log n \log (\alpha n/\log n))$ & Imperfect & $s=r$ & This paper \\
\bottomrule
\end{tabular}
\end{table*}
\subsection{Related Works}
Matrix completion refers to recovering a low-rank matrix from a small number of its random entries. To complete the matrix, the standard way is to solve the following nuclear norm minimization problem \cite{fazel2002matrix,recht2010guaranteed,candes2009exact}
\begin{equation}\label{ClassicalMatrixCompletion}
\min_{\bm{X}} \norm{\bm{X}}_{*} \quad \text{s.t.}~ \bm{Y} = \mathcal{R}_p(\bm{X}).
\end{equation}
The related performance guarantees for \eqref{ClassicalMatrixCompletion} have been extensively studied in the literature, see e.g. \cite{candes2009exact,gross2011recovering,candes2010matrix,candes2010power, chen2015completing,keshavan2010matrix,keshavan2010matrix2,koltchinskii2011nuclear,chandrasekaran2011rank, jain2013low,chen2015incoherence} and references therein. The theoretical results indicate that $O(rn \log^2 n)$ samples are sufficient to accurately complete the matrix for an incoherent rank-$r$ matrix.
Matrix completion with different kinds of prior subspace information has also been studied recently. For instance, with perfect $s$-dimensional subspace information ($s > r$), the Inductive Matrix Completion (IMC) is suggested in \cite{xu2013speedup,jain2013provable} to obtain a better recovery performance than the standard matrix completion procedure. A follow-up work with imperfect $s$-dimensional subspace information, named Dirty IMC, is proposed in \cite{chiang2015matrix}, where the original matrix is completed by splitting it into a low-rank estimate in the subspace and a low-rank perturbation outside the subspace.
Another line of work takes advantage of $r$-dimensional imperfect subspace information to improve the performance of matrix completion. In \cite{srebro2010collaborative,foygel2011learning,negahban2012restricted,chen2015completing}, the authors propose a diagonal weighted matrix completion (DWMC) method
\begin{equation} \label{DiagonalWeightedMatrixCompletion}
\min_{\bm{X}} \norm{\bm{R} \bm{X} \bm{C}}_* \quad \text{s.t.}~ \mathcal{P}_\Omega(\bm{X}) = \mathcal{P}_\Omega(\bm{X}^\star),
\end{equation}
where $\bm{R}=\diag\{r_1,\ldots,r_n\}$ and $\bm{C}=\diag\{c_1,\ldots,c_n\}$ are the diagonal weighted matrices respectively determined by the leverage scores of prior subspace information ${\widetilde{\mathcal{U}}}_r$ and ${\widetilde{\mathcal{V}}}_r$, $\Omega$ is a random index set with
$$
[\mathcal{P}_\Omega(\bm{X})]_{ij}=
\left\{
{\begin{array}{rl}
X_{ij}, & (i,j) \in \Omega, \\
0, & (i,j) \notin \Omega. \\
\end{array} }
\right.
$$
The theoretical results presented in \cite{chen2015completing} have shown that by choosing proper weights, the approach \eqref{DiagonalWeightedMatrixCompletion} can outperform the standard low-rank matrix completion procedure.
In \cite{eftekhari2018weighted}, Eftekhari et al. propose a weighted matrix completion method (WMC) with the aid of prior subspace information ${\widetilde{\mathcal{U}}}_r$ and ${\widetilde{\mathcal{V}}}_r$
\begin{equation} \label{WeightedMatrixCompletion}
\min_{\bm{X}} \norm{\bm{Q}_{{\widetilde{\mathcal{U}}}_r,\tau} \cdot \bm{X} \cdot \bm{Q}_{{\widetilde{\mathcal{V}}}_r,\rho}}_{*} \quad \text{s.t.}~ \bm{Y} = \mathcal{R}_p(\bm{X}),
\end{equation}
where $\bm{Q}_{{\widetilde{\mathcal{U}}}_r,\tau}$ and $\bm{Q}_{{\widetilde{\mathcal{V}}}_r,\rho}$ are defined
as
$$
\bm{Q}_{{\widetilde{\mathcal{U}}}_r,\tau}=\tau \cdot \mathcal{P}_{{\widetilde{\mathcal{U}}}_r} + \mathcal{P}_{{\widetilde{\mathcal{U}}}_r^\bot} \in \mathbb{R}^{n \times n},
$$
and
$$
\bm{Q}_{{\widetilde{\mathcal{V}}}_r,\rho}=\rho \cdot \mathcal{P}_{{\widetilde{\mathcal{V}}}_r} + \mathcal{P}_{{\widetilde{\mathcal{V}}}_r^\bot} \in \mathbb{R}^{n \times n}.
$$
Here $\tau$ and $\rho$ are some weights and $\mathcal{P}_{{\widetilde{\mathcal{U}}}_r}$ and $\mathcal{P}_{{\widetilde{\mathcal{U}}}_r^\bot}$ denote the orthogonal projections onto
${\widetilde{\mathcal{U}}}_r$ and ${\widetilde{\mathcal{U}}}_r^\bot$, respectively. $\mathcal{P}_{{\widetilde{\mathcal{V}}}_r}$ and $\mathcal{P}_{{\widetilde{\mathcal{V}}}_r^\bot}$ are defined likewise. Their results have shown that with suitable side information, this approach can decrease the sample complexity by a logarithmic factor compared with the standard procedure.
Table \ref{table: comparison} provides a summary of the above methods. It shows that when prior information is reliable, our approach achieves state-of-the-art performance. In addition, as shown in the simulations, the proposed method \eqref{OurMatrixCompletion} also outperforms the others when the prior information is relatively unreliable.
\subsection{Organization}
The paper is organized as follows. We introduce some useful preliminaries in Section \ref{sec: Preliminaries}. Performance guarantees for matrix completion with prior information via maximizing correlation are presented in Section \ref{sec: Performance guarantees}. A practical application where the prior subspace information is available is analyzed in Section \ref{sec: Performance guarantees 2}. Simulations are included in Section \ref{sec: Simulation}, and the conclusion is drawn in Section \ref{sec: Conclusion}. The proofs are postponed to Appendices.
\section{Preliminaries} \label{sec: Preliminaries}
In this section, we provide some helpful notations, definitions and propositions which will be used later.
\subsection{Convex Geometry}
The \emph{subdifferential} of a convex function $g: \mathbb{R}^n \to \mathbb{R}$ at $\bm{x}^\star$ is defined as
\begin{multline*}
\partial g(\bm{x}^\star) = \{\bm{u} \in \mathbb{R}^n: g(\bm{x}^\star + \bm{d}) \geq g(\bm{x}^\star) + \langle \bm{u}, \bm{d} \rangle~ \\ \textrm{ for all}~\bm{d} \in \mathbb{R}^n \}.
\end{multline*}
Let $\bm{X}^\star=\bm{U}_r \bm{\Sigma}_r \bm{V}^T_r$ be the compact SVD of the rank-$r$ matrix $\bm{X}^\star$ with $\bm{U}_r \in \mathbb{R}^{n \times r},\bm{V}_r \in \mathbb{R}^{n \times r}$ and $\bm{\Sigma}_r \in \mathbb{R}^{r \times r}$. The subdifferential of $\norm{\bm{X}^\star}_*$ is given by \cite{watson1992Characterization,lewis2003mathematics}
\begin{multline*}
\partial \norm{\bm{X}^\star}_* =\bm{U}_r \bm{V}_r^T \\+\left\{\bm{W}:\bm{W}^T\bm{U}_r=\bm{0}, \bm{W}\bm{V}_r=\bm{0}, \norm{\bm{W}} \le 1 \right\}.
\end{multline*}
Let the subspace $\mathcal{T}$ be the support of $\bm{X}^\star$ and $\mathcal{T}^{\bot}$ be its orthogonal complement. Then for any matrix $\bm{X} \in \mathbb{R}^{n \times n}$, the orthogonal projection onto $\mathcal{T}$ is
\begin{equation} \label{eq: PT}
\mathcal{P}_{\mathcal{T}}(\bm{X})=\bm{U}_r\bm{U}_r^T \bm{X} + \bm{X} \bm{V}_r \bm{V}_r^T- \bm{U}_r \bm{U}_r^T \bm{X} \bm{V}_r \bm{V}_r^T,
\end{equation}
and the orthogonal projection onto $\mathcal{T}^{\bot}$ is
$$\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{X})=\bm{X}-\mathcal{P}_{\mathcal{T}}(\bm{X}).$$
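Both projections can be implemented directly from these formulas. The sketch below, using an arbitrary random rank-$r$ matrix, also verifies the expected properties: $\mathcal{P}_{\mathcal{T}}$ is idempotent, the two components are orthogonal, and $\bm{X}^\star$ itself lies in $\mathcal{T}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 8, 2
X_star = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))   # rank-r matrix
U, _, Vt = np.linalg.svd(X_star)
Ur, Vr = U[:, :r], Vt[:r, :].T

def P_T(X):
    """Projection onto the support T, following the displayed formula."""
    return Ur @ Ur.T @ X + X @ Vr @ Vr.T - Ur @ Ur.T @ X @ Vr @ Vr.T

def P_T_perp(X):
    return X - P_T(X)

Z = rng.standard_normal((n, n))
print("idempotent:", np.allclose(P_T(P_T(Z)), P_T(Z)))
print("orthogonal pieces:", np.isclose(np.sum(P_T(Z) * P_T_perp(Z)), 0.0))
print("X* lies in T:", np.allclose(P_T(X_star), X_star))
```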
\subsection{Two Useful Definitions}
We then review two definitions which are useful for the analysis of matrix completion.
\begin{definition}[Leverage scores] \label{def: ls} Let the thin SVD for a rank-$r$ matrix $\bm{X} \in \mathbb{R}^{n \times n}$ be $\bm{U}_r \bm{\Sigma}_r \bm{V}^T_r$ and define $\mathcal{U}_r={\rm{span}}\{\bm{U}_r\}$ and $\mathcal{V}_r={\rm{span}}\{\bm{V}_r\}$. Then the leverage scores $\mu_{i}(\mathcal{U}_r)$ with respect to the $i$-th row of $\bm{X}$, and $\nu_j(\mathcal{V}_r)$ with respect to the $j$-th column of $\bm{X}$ are defined as
\begin{align}
\mu_{i}=\mu_{i}(\mathcal{U}_r)&\triangleq\frac{n}{r} \norm{\bm{U}^T_r\bm{e}_{i}}_2^2, ~i=1,2,\ldots,n,\label{def: mu}\\
\nu_j=\nu_j(\mathcal{V}_r)&\triangleq\frac{n}{r} \norm{\bm{V}^T_r\bm{e}_{j}}_2^2, ~j=1,2,\ldots,n.\label{def: nu}
\end{align}
\end{definition}
Then the {\it{coherence parameter}} \cite{candes2009exact} of $\bm{X}$ can be expressed as
$$
\eta(\bm{X})=\max_{i,j}\{\mu_{i}(\mathcal{U}_r),\nu_j(\mathcal{V}_r)\}.
$$
It is not hard to verify that $\eta(\bm{X}) \in [1,\frac{n}{r}]$. Moreover, when $\eta(\bm{X})$ is small, i.e., $\mathcal{U}_r$ and $\mathcal{V}_r$ are spanned by vectors with nearly equal entries in magnitude, we say that $\bm{X}$ is {\it{incoherent}}; when $\eta(\bm{X})$ is large, i.e., $\mathcal{U}_r$ or $\mathcal{V}_r$ contains a ``spiky" basis, we say that $\bm{X}$ is {\it{coherent}}.
For convenience, we define the diagonal matrices $\bm{M}=\diag\{\mu_1,\ldots,\mu_n\} $ and $\bm{N}=\diag\{\nu_1,\ldots,\nu_n\}$.
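The leverage scores and the coherence parameter are straightforward to compute from a thin SVD, as in the sketch below (random test matrix). Note that the $\mu_i$ always sum to $n$, since the columns of $\bm{U}_r$ are orthonormal:

```python
import numpy as np

rng = np.random.default_rng(2)
n, r = 100, 3
X = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))
U, _, Vt = np.linalg.svd(X)
Ur, Vr = U[:, :r], Vt[:r, :].T

mu = (n / r) * np.sum(Ur ** 2, axis=1)   # mu_i = (n/r) ||U_r^T e_i||_2^2
nu = (n / r) * np.sum(Vr ** 2, axis=1)   # nu_j = (n/r) ||V_r^T e_j||_2^2
eta = max(mu.max(), nu.max())            # coherence parameter eta(X)

print(f"eta(X) = {eta:.2f}, which lies in [1, n/r] = [1, {n / r:.1f}]")
print(f"sum of mu_i = {mu.sum():.6f} (equals n = {n})")
```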
The following two norms measure the (weighted) largest entry and largest $\ell_2$ norm of the rows or columns of a matrix, respectively.
\begin{definition}[$\mu(\infty)$ norm and $\mu(\infty,2)$ norm,\cite{chen2015completing}]
For a rank-$r$ matrix $\bm{X}\in\mathbb{R}^{n\times n}$, we set
\begin{align*}\norm{\bm{X}}_{\mu(\infty)} &=\norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{X} \left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}}_{\infty}\\
&=\max_{i,j}\sqrt{\frac{n}{\mu_{i}r}}\cdot|\bm{e}^T_i\bm{X} \bm{e}_j|\cdot\sqrt{\frac{n}{\nu_{j}r}},
\label{eq:mu inf norm}
\end{align*}
where $\norm{\bm{A}}_{\infty}$ returns the largest entry of matrix $\bm{A}$ in
magnitude.
Moreover, for a rank-$r$ matrix $\bm{X}\in\mathbb{R}^{n\times n}$, we define
\begin{align*}
&\norm{\bm{X}}_{\mu(\infty,2)}\\ &=\norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{X}} _{(\infty,2)}\vee \norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}} \bm{X}^{T}} _{(\infty,2)} \\
&=\left(\max_{i}\sqrt{\frac{n}{\mu_{i}r}}\norm{\bm{X}^T \bm{e}_i}_{2}\right)\vee\left(\max_{j}\sqrt{\frac{n}{\nu_{j}r}}\norm{\bm{X} \bm{e}_{j}}_{2}\right),
\end{align*}
where $a \vee b =\max\{a,b\}$ and $\norm{\bm{X}}_{(\infty,2)}$ denotes the largest $\ell_{2}$ norm of the rows of $\bm{X}$.
\end{definition}
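As a quick sanity check on the $\mu(\infty)$ norm (again a hypothetical rank-one example, with strictly nonzero singular vectors), note that for $\bm{X}=\sigma\,\bm{u}\bm{v}^T$ with unit $\bm{u},\bm{v}$ we have $\sqrt{n/(\mu_i r)}=1/|u_i|$, so the leverage-score weighting cancels the entry magnitudes and $\norm{\bm{X}}_{\mu(\infty)}=\sigma$ no matter how spiky the vectors are:

```python
import math

def mu_inf_norm_rank1(sigma, u, v):
    # ||X||_{mu(inf)} for X = sigma * u v^T with unit vectors u, v:
    # here mu_i = n u_i^2 and nu_j = n v_j^2 (r = 1), so each entry
    # |X_ij| is rescaled by sqrt(n/mu_i) * sqrt(n/nu_j) = 1/(|u_i||v_j|).
    n = len(u)
    return max(
        math.sqrt(n / (n * ui * ui)) * abs(sigma * ui * vj)
        * math.sqrt(n / (n * vj * vj))
        for ui in u for vj in v
    )

u = [0.8, 0.6]   # unit vector, deliberately uneven
v = [0.6, 0.8]
print(mu_inf_norm_rank1(2.0, u, v))  # ~2.0: the weighting cancels the spikiness
```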
\subsection{Related Results}
For the convenience of comparison, we introduce the following theoretical result for the standard matrix completion procedure.
\begin{proposition}[Theorem 2, \cite{chen2015completing}] \label{prop: StandardMatrixCompletion} Let $\bm{X}^\star \in \mathbb{R}^{n \times n}$ be a rank-$r$ matrix, and $\bm{Y}= \mathcal{R}_p(\bm{X}^\star) \in \mathbb{R}^{n \times n}$ denote the matrix of measurements. Let $\mu_i$ and $\nu_{j}$ be the leverage scores as defined in Definition \ref{def: ls}. If
\begin{equation*}
1 \ge p_{ij} \gtrsim \frac{\left(\mu_{i}+\nu_{j}\right)r\log^2 n}{n},
\end{equation*}
for all $i,j=1,\ldots,n$, then with high probability, $\bm{X}^\star$ is the unique solution for program \eqref{ClassicalMatrixCompletion}. Here, $f \gtrsim g$ means that $f$ is greater than $g$ up to a universal constant.
\end{proposition}
From the result of Proposition \ref{prop: StandardMatrixCompletion}, we can conclude that for an incoherent rank-$r$ matrix, we need $O(rn \log^2 n)$ samples to accurately complete the matrix.
\section{Performance Guarantees} \label{sec: Performance guarantees}
In this section, we present a theoretical analysis for the proposed procedure \eqref{OurMatrixCompletion}. The results demonstrate that with suitable prior information, the proposed program only requires $O(rn \log n)$ samples to correctly complete an incoherent low-rank matrix, which outperforms the standard matrix completion program \eqref{ClassicalMatrixCompletion} and matches the state-of-the-art performance. The proof of the result is included in Appendices \ref{sec: Proof1} and \ref{sec: Proof2}.
\begin{theorem} \label{thm: main}
Let $\bm{X}^\star=\bm{U}_r \bm{\Sigma}_r \bm{V}^T_r$ be the thin SVD of the rank-$r$ matrix $\bm{X}^\star \in \mathbb{R}^{n \times n}$ with $\bm{U}_r \in \mathbb{R}^{n \times r},\bm{V}_r \in \mathbb{R}^{n \times r}$ and $\bm{\Sigma}_r \in \mathbb{R}^{r \times r}$. Let $\mu_i$ and $\nu_{j}$ be the leverage scores as defined in Definition \ref{def: ls}.
If
\begin{multline*}
1 \ge p_{ij}\gtrsim \max \left\{\log\(\frac{\alpha_1^2 n}{r\log n}\),1\right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \\
\cdot \max\left\{\(2\xi_1+\xi_2\)^2,1\right\}
\end{multline*}
for all $i,j=1,\ldots,n$, and
$$ \alpha_2 < \frac{15}{16},
$$
then with high probability, we can achieve exact recovery of $\bm{X}^\star$ by solving the program \eqref{OurMatrixCompletion}, where
\begin{align*}
\alpha_1&=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_F,\\
\alpha_2&=\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})},\\
\xi_1&=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_{\mu(\infty)},
\end{align*}
and
$$
\xi_2=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_{\mu(\infty,2)}.
$$
\end{theorem}
\begin{remark}[No prior information] If there is no prior information, i.e., $\lambda=0$, then the proposed procedure \eqref{OurMatrixCompletion} reduces to the standard one
\eqref{ClassicalMatrixCompletion}. In this case, simple calculations lead to $\alpha_1=\sqrt{r},\, \alpha_2=0, \, \xi_1 \le 1,$ and $\xi_2=1$. According to Theorem \ref{thm: main}, the bound of sample probability becomes
\begin{equation*}
1 \ge p_{ij}\gtrsim \frac{\left(\mu_{i}+\nu_{j}\right)r \log n \cdot\log\( n/\log n\)}{n}.
\end{equation*}
This result implies that for incoherent matrices, $O(rn\log^2 n)$ samples are needed to complete the matrix correctly, which agrees with the result of the standard matrix completion procedure as shown in Proposition \ref{prop: StandardMatrixCompletion}.
\end{remark}
\begin{remark}[Reliable prior information] \label{rm: optimality} If $\bm{\Phi}$ is approximately equal to $\bm{U}_r\bm{V}^T_r$, i.e., $ \mathcal{P}_{\mathcal{T}}(\bm{\Phi})$ is close to $\bm{U}_r\bm{V}^T_r$ and $\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})}$ is small, then we have $\alpha_1 \to 0$, $\alpha_2 \to 0$, $\xi_1 \to 0$, $\xi_2 \to 0$, and $\log\(\frac{\alpha_1^2 n}{r\log n}\) \lesssim 1$ by setting $\lambda=1$. This means that for incoherent matrices, $m=O(rn\log n)$ samples are sufficient to complete the matrix in this case, which reduces the sample complexity of the standard matrix completion procedure by a logarithmic factor.
\end{remark}
\begin{remark}[Choice of $\bm{\Phi}$] The recovery procedure \eqref{OurMatrixCompletion} provides a general framework for matrix completion with prior information via maximizing correlation. In practice, we need to choose a suitable $\bm{\Phi}$ which encodes the available prior information effectively. Generally speaking, the choice of $\bm{\Phi}$ is problem-specific and usually determined by the prior information at hand. In the subsequent section, we will demonstrate how to choose a suitable $\bm{\Phi}$ when prior subspace information is accessible, and present a theoretical analysis for this application. Another example can be found in \cite{zhang2020}.
\end{remark}
The above main result can be naturally extended to the noisy case.
\begin{corollary}
Let $\bm{X}^\star=\bm{U}_r \bm{\Sigma}_r \bm{V}^T_r$ be the thin SVD of the rank-$r$ matrix $\bm{X}^\star \in \mathbb{R}^{n \times n}$ with $\bm{U}_r \in \mathbb{R}^{n \times r},\bm{V}_r \in \mathbb{R}^{n \times r}$ and $\bm{\Sigma}_r \in \mathbb{R}^{r \times r}$. Consider the noisy observation $\bm{Y}=\bm{X}^\star + \bm{N}$, where the entries of the noise $\bm{N}$ are bounded. Let $\mu_i$ and $\nu_{j}$ be the leverage scores as defined in Definition \ref{def: ls}. Consider the noisy version of matrix completion with prior information via maximizing correlation
\begin{equation}\label{NoisyMatrixCompletion}
\min_{\bm{X}} \norm{\bm{X}}_{*} - \lambda \ip{\bm{\Phi}}{\bm{X}} \quad {\rm{s.t.}}~ \norm{\mathcal{R}_p(\bm{Y}-\bm{X})}_F \le \varepsilon,
\end{equation}
where $\varepsilon$ denotes the upper bound (in terms of the Frobenius norm) of $\mathcal{R}_p(\bm{N})$.
If
\begin{multline*}
1 \ge p_{ij}\gtrsim \max \left\{\log\(\frac{\alpha_1^2 n}{r\log n}\),1\right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \\
\cdot \max\left\{\(2\xi_1+\xi_2\)^2,1\right\}
\end{multline*}
for all $i,j=1,\ldots,n$, and
$$ \alpha_2 < \frac{7}{8},
$$
then with high probability, by solving the program \eqref{NoisyMatrixCompletion}, the solution $\breve{\bm{X}}$ obeys
\begin{equation*}
\norm{\bm{X}^\star-\breve{\bm{X}}}_{F}\le \[2 + 32 \sqrt{1+\frac{2 n}{r \log n}} \cdot (\sqrt{n}+\norm{\lambda \bm{\Phi}}_F) \]\cdot \varepsilon,
\end{equation*}
where
\begin{align*}
\alpha_1&=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_F,\\
\alpha_2&=\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})},\\
\xi_1&=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_{\mu(\infty)},
\end{align*}
and
$$
\xi_2=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_{\mu(\infty,2)}.
$$
\end{corollary}
\section{Case Study: prior subspace information} \label{sec: Performance guarantees 2}
In this section, we study a typical example where the desired low-rank matrix is symmetric and its corresponding prior subspace information is available to us. This model has many potential applications in machine learning such as semi-supervised clustering \cite{yi2013semi,bair2013semi} and link prediction \cite{mishra2007clustering,chen2014clustering}.
Let ${\mathcal{U}}_r={\rm span}(\bm{U}_r)$ denote the $r$-dimensional column space of $\bm{X}^\star$ and ${\widetilde{\mathcal{U}}}_r$ be the corresponding prior subspace information. To leverage the prior subspace information, we modify the matrix completion procedure \eqref{OurMatrixCompletion} as follows
\begin{equation}\label{OurMatrixCompletion2}
\min_{\bm{X}\in \mathbb{R}^{n \times n}} \norm{\bm{X}}_{*} - \lambda \ip{\bm{\Phi}}{\bm{X}} \quad \text{s.t.}~\bm{Y}=\mathcal{R}_p(\bm{X}),
\end{equation}
where $\bm{\Phi}= \widetilde{\bm{U}}_r\widetilde{\bm{U}}_r^T$, the columns of $\widetilde{\bm{U}}_r \in \mathbb{R}^{n \times r}$ constitute the orthonormal bases of the subspace ${\widetilde{\mathcal{U}}}_r $, and $\lambda \in [0,1]$ is the tradeoff parameter. Define the leverage scores of the subspace
$\breve{\mathcal{U}}=\text{\rm{span}}([\bm{U}_r,\widetilde{\bm{U}}_r])$ as follows
\begin{align*}
\breve{\mu}_{i}&\triangleq\mu_{i}(\breve{\mathcal{U}}),~i=1,2,\ldots,n.\label{def: mu_c}
\end{align*}
Then we have the following result. The proof is included in Appendix \ref{Appendix: general case of Thm 2} where an arbitrary low-rank matrix $\bm{X}^\star$ (not limited to symmetric matrices) is considered.
\begin{theorem} \label{thm: main2}
Let $\bm{X}^\star=\bm{U}_r \bm{\Sigma}_r \bm{U}^T_r$ be the thin SVD of the rank-$r$ matrix $\bm{X}^\star \in \mathbb{R}^{n \times n}$ with $\bm{U}_r \in \mathbb{R}^{n \times r}$. Denote the column subspace of $\bm{X}^\star$ by ${\mathcal{U}}_r={\rm span}(\bm{U}_r)$ and its corresponding $r$-dimensional prior subspace information by ${\widetilde{\mathcal{U}}}_r$. Define $\bm{\Gamma}={\rm{diag}}\(\gamma_{1},\ldots,\gamma_{r}\) \in \mathbb{R}^{r \times r}$ whose diagonal entries are the principal angles between ${\mathcal{U}}_r$ and ${\widetilde{\mathcal{U}}}_r$. Let $\mu_i$ and $\breve{\mu}_{i}$ be the leverage scores defined as before.
If
\begin{multline*}
1 \ge p_{ij}\gtrsim \max \left\{\log\(\frac{\alpha_1^2 n}{r\log n}\),1\right\} \cdot \frac{ \mu_{i} r\log n}{n} \\
\cdot \max\left\{\alpha_3^2\beta^2,1\right\}
\end{multline*}
for all $i,j=1,\ldots,n$, and
$$ \alpha_2 < \frac{15}{16},
$$
then with high probability, we can achieve exact recovery of $\bm{X}^\star$ by solving the program \eqref{OurMatrixCompletion2}, where
\begin{align*}
\alpha_1^2&=\lambda^2 \[r-\sum_{i=1}^{r} \sin^4\gamma_i\]-2 \lambda \sum_{i=1}^{r} \cos^2\gamma_i+r,\\
\alpha_2&= \lambda \, \max_{i} \{\sin^2 \gamma_i\},\\
\alpha_3&=\max_i \{1-\lambda \cos^2\gamma_i \}+2\lambda \, \max_i\{\cos\gamma_i \sin \gamma_i\},
\end{align*}
and
$$
\beta= 1 \vee \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}}.
$$
\end{theorem}
\begin{remark}[Choice of $\lambda$]
Clearly, the sample complexity is influenced by parameters $\alpha_1, \alpha_2, \alpha_3$, and $\beta$. However, it is not hard to find that $\alpha_1^2$ is the deciding factor. Thus we can choose
\begin{equation} \label{eq: Optimal_lambda2}
\lambda^\star=\frac{\sum_{i=1}^{r} \cos^2\gamma_i}{r-\sum_{i=1}^{r} \sin^4\gamma_i}
\end{equation}
such that $\alpha_1^2$ achieves its minimum
$$
\alpha_1^2 = r- \frac{\[\sum_{i=1}^{r} \cos^2\gamma_i\]^2}{r-\sum_{i=1}^{r} \sin^4\gamma_i},
$$
which will lead to the optimal sample complexity. In particular, when the prior subspace information is close to the original subspace, the best choice of $\lambda$ is $\lambda^\star \approx 1$.
\end{remark}
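The remark on the choice of $\lambda$ can be checked numerically. The pure-Python sketch below evaluates $\lambda^\star$ from Eq. \eqref{eq: Optimal_lambda2} and verifies that it minimizes $\alpha_1^2$; the sample angles are the principal angles reported for the Wine dataset later in the paper, while the perfect-prior case uses all-zero angles:

```python
import math

def lambda_star(angles):
    # Optimal tradeoff parameter:
    # lambda* = sum_i cos^2(gamma_i) / (r - sum_i sin^4(gamma_i)).
    r = len(angles)
    return sum(math.cos(g) ** 2 for g in angles) / \
           (r - sum(math.sin(g) ** 4 for g in angles))

def alpha1_sq(lam, angles):
    # alpha_1^2 as a function of lambda, as given in Theorem 2:
    # lambda^2 (r - sum sin^4) - 2 lambda sum cos^2 + r.
    r = len(angles)
    s4 = sum(math.sin(g) ** 4 for g in angles)
    c2 = sum(math.cos(g) ** 2 for g in angles)
    return lam ** 2 * (r - s4) - 2 * lam * c2 + r

angles = [0.13, 0.44, 0.65]          # principal angles of the Wine dataset
lam = lambda_star(angles)
print(lam)                           # below, but close to, 1
print(lambda_star([0.0, 0.0, 0.0]))  # perfect prior: lambda* = 1.0
```

Since $\sum_i \sin^2\gamma_i\cos^2\gamma_i \ge 0$, one always has $\lambda^\star \le 1$, with equality exactly for a perfect prior.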
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.7in]{ComparisonsForFiveMethods_Noise001.eps}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{MC_vary_lambda_Noise_0_01.eps}}
\caption{Rate of successful reconstruction vs. sampling probability for matrix completion with prior information. (a) Comparisons for MC, our approach, WMC, and DWMC. (b) Matrix completion via maximizing correlation with different weights $\lambda$.
}
\label{fig: PerformanceComparison_001}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.7in]{ComparisonsForFiveMethods_Noise01.eps}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{MC_vary_lambda_Noise_0_1.eps}}
\caption{Rate of successful reconstruction vs. sampling probability for matrix completion with weaker prior information. (a) Comparisons for MC, our approach, WMC, and DWMC. (b) Matrix completion via maximizing correlation with different weights $\lambda$.}
\label{fig: PerformanceComparison_01}
\end{figure*}
\begin{figure*}[!t]
\centering
\subfloat[]{\includegraphics[width=2.7in]{Comparisons_wine.eps}}
\hfil
\subfloat[]{\includegraphics[width=2.7in]{Comparisons_iris.eps}}
\caption{Relative reconstruction error vs. sampling probability for different datasets: (a) Wine, (b) Iris.}
\label{fig: PerformanceComparison_03}
\end{figure*}
\section{Simulations and Experiments} \label{sec: Simulation}
In this section, we verify the validity of our theoretical results by both synthetic simulations and real-world applications.
\subsection{Synthetic Simulations}
Let $\bm{X}^\star \in \mathbb{R}^{n \times n}$ be the original rank-$r$ matrix. We construct $\bm{X}^\star$ as follows: generate two independent Gaussian matrices $\bm{G}_1, \bm{G}_2 \in \mathbb{R}^{n \times r}$; let $\bm{U}_r\in\mathbb{R}^{n \times r}$ and $\bm{V}_r\in\mathbb{R}^{n \times r}$ be the basis matrices of subspaces $ {\mathcal{U}}_r={\rm span}\{\bm{G}_1\}$ and ${\mathcal{V}}_r={\rm span}\{\bm{G}_2\}$, respectively; then we construct $\bm{X}^\star = \bm{U}_r \bm{\Sigma}_r \bm{V}_{r}^T$, where $\bm{\Sigma}_r=\diag \{\frac{1}{\sqrt{r}},\ldots,\frac{1}{\sqrt{r}}\}$. The prior information is the perturbed matrix $\hat{\bm{X}}=\bm{X}^\star+\sigma \bm{Z} \in \mathbb{R}^{n \times n}$, where the entries of $\bm{Z}$ are i.i.d. standard normal and $\sigma>0$ is an absolute constant. By taking the truncated rank-$r$ SVD of $\hat{\bm{X}}$, we obtain the prior matrix $\bm{\Phi}=\hat{\bm{U}}_r \hat{\bm{V}}_r^T$.
We set $n=32$, $r=4$, and $tol=10^{-3}$ in these synthetic experiments. For a specific sampling probability $p$, we run 50 trials, count the number of successful trials, and calculate the empirical success rate. If the solution $\breve{\bm{X}}$ satisfies
$$
\frac{\|\bm{X}^\star - \breve{\bm{X}}\|_\text{F}}{\norm{\bm{X}^\star}_\text{F}} < tol,
$$
then the trial is successful; otherwise it fails. Letting $p$ increase from $0$ to $1$ with step $1/n$, we obtain the simulation results.
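The success criterion used in each trial can be sketched as follows (a toy $2\times 2$ example with hypothetical recovered matrices, not actual solver output):

```python
import math

TOL = 1e-3

def fro(X):
    # Frobenius norm of a matrix stored as a list of rows.
    return math.sqrt(sum(x * x for row in X for x in row))

def is_success(X_star, X_hat, tol=TOL):
    # A trial succeeds when ||X* - X_hat||_F / ||X*||_F < tol.
    diff = [[a - b for a, b in zip(r1, r2)] for r1, r2 in zip(X_star, X_hat)]
    return fro(diff) / fro(X_star) < tol

X_star = [[1.0, 2.0], [2.0, 4.0]]        # toy rank-one ground truth
X_good = [[1.0, 2.0], [2.0, 4.0000001]]  # essentially exact recovery
X_bad  = [[1.0, 2.0], [2.0, 3.0]]        # relative error 0.2
print(is_success(X_star, X_good), is_success(X_star, X_bad))  # True False
```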
We consider two kinds of prior information: standard prior information ($\sigma=0.01$) and weaker prior information ($\sigma=0.1$). For each kind of prior information, we compare the performance of four methods: the standard matrix completion approach \eqref{ClassicalMatrixCompletion}, the proposed procedure \eqref{OurMatrixCompletion}, the diagonal weighted matrix completion \eqref{DiagonalWeightedMatrixCompletion} and the weighted matrix completion \eqref{WeightedMatrixCompletion}
\footnote{Here, we only compare the methods which work with $r$-dimensional subspace information. It seems unfair to compare with IMC and noisy IMC since they require higher dimensional subspace information ($s > r$).}. For the proposed approach \eqref{OurMatrixCompletion}, $\lambda$ is chosen by following Remark \ref{Choice_of_Lambda_Genaral}. For WMC \eqref{WeightedMatrixCompletion}, we set $w\triangleq\tau=\rho$. As suggested in \cite{eftekhari2018weighted}, the proper choice of $w$ is $w^2=\sqrt{\tan ^{4} \theta+\tan ^{2} \theta}-\tan ^{2} \theta$, where $\theta$ is the largest principal angle. In addition, we run some simulations for \eqref{OurMatrixCompletion} under different weights $\lambda$.
The results of matrix completion under standard prior information ($\sigma=0.01$) are shown in Fig. \ref{fig: PerformanceComparison_001}. Fig. \ref{fig: PerformanceComparison_001}(a) presents the comparisons for the four methods. The results illustrate that the proposed approach achieves the best performance. Although the diagonal weighted matrix completion has a worse performance than the proposed approach and the weighted approach, it performs much better than the standard one. Fig. \ref{fig: PerformanceComparison_001}(b) shows the performance of the proposed approach under different weights $\lambda$. The results indicate that the integration of side information can reduce the sample complexity of the standard matrix completion ($\lambda=0$). Furthermore, with reliable prior information, the larger the parameter $\lambda$, the better the performance. The optimal $\lambda$ calculated by Eq. \eqref{eq: Optimal_lambda} is $\lambda^\star=0.9895$, which is very close to $1$ and coincides with the simulation results.
In Fig. \ref{fig: PerformanceComparison_01}, we repeat the simulations under weaker prior information ($\sigma=0.1$). In Fig. \ref{fig: PerformanceComparison_01}(a), the performance of the proposed, weighted, and diagonal methods deteriorates sharply compared with the plots in Fig. \ref{fig: PerformanceComparison_001}(a). The results show that the proposed method still achieves the best performance. We also see that the proposed method and the weighted method slightly outperform the standard matrix completion, while the diagonal method underperforms the standard one. In Fig. \ref{fig: PerformanceComparison_01}(b), the results for different $\lambda$ almost coincide, showing a slight improvement over the standard matrix completion.
\subsection{Real-world Experiments}
Semi-supervised clustering is an important machine learning problem which can be transformed into a low-rank matrix completion problem. Let $\bm{S}$ denote the similarity matrix to be completed, where $S_{ij}=1$ if items $i$ and $j$ are similar, $0$ if dissimilar, and $?$ if the similarity is unclear. Let $\bm{Z}$ denote the side information (feature matrix). Our goal is to find a good clustering such that similar items belong to the same cluster, i.e., to fill in the unknown similarities by using the side information so as to promote the low-rankness of the similarity matrix.
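The fully observed similarity matrix can be built directly from cluster labels; the following sketch (hypothetical labels, with no entries masked as $?$) illustrates why completing $\bm{S}$ is a low-rank problem:

```python
def similarity_matrix(labels):
    # S_ij = 1 if items i and j carry the same cluster label, 0 otherwise.
    # With k distinct labels, S is (up to a permutation of items) block
    # diagonal with k all-ones blocks, so rank(S) = k -- the low-rank
    # structure that completion exploits when some entries are unknown.
    return [[1 if a == b else 0 for b in labels] for a in labels]

S = similarity_matrix([0, 0, 1, 1, 2])
print(S[0][1], S[0][2], S[4][4])  # 1 0 1
```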
\begin{table}
\caption{Statistics of datasets. $\theta_1,\theta_2,$ and $\theta_3$ denote the principal angles between subspaces.}
\label{table: comparison2}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\toprule
Dataset & \makecell[c]{No. of \\Classes} & \makecell[c]{No. of \\items} & \makecell[c]{No. of \\features} & $\theta_1$ & $\theta_2$& $\theta_3$\\
\midrule
Wine & 3 & 178 & 13 & 0.13 & 0.44 & 0.65 \\
Iris & 3 &150 & 4 & 0.12 & 0.38 & 1.44 \\
\bottomrule
\end{tabular}
\end{table}
The experiments use two real-world datasets, Wine and Iris \cite{Chang2011LIBSVM}. The statistics of the two datasets are shown in Table \ref{table: comparison2}. We compare the performance of three schemes: the standard matrix completion, the proposed method, and the weighted matrix completion. The Augmented Lagrange Multiplier (ALM) method is used to solve the models \cite{lin2010augmented}. The subspace information $\hat{\bm{U}}_r$ is extracted from the feature matrix, where $\hat{\bm{U}}_r$ is the left singular matrix generated from the truncated rank-$r$ SVD of $\bm{Z}$. We set $\hat{\bm{V}}_r=\hat{\bm{U}}_r$ since the similarity matrix $\bm{S}$ is symmetric. The samples are chosen randomly and symmetrically. For each sampling probability, we run $50$ trials and calculate the average relative reconstruction error $\|\bm{X}^\star - \breve{\bm{X}}\|_\text{F}/\norm{\bm{X}^\star}_\text{F}$, where $\breve{\bm{X}}$ denotes the recovered solution.
The results are presented in Fig. \ref{fig: PerformanceComparison_03}. The parameter $\lambda$ is chosen according to Eq. \eqref{eq: Optimal_lambda2} and $w$ is set as before. Both Fig. \ref{fig: PerformanceComparison_03}(a) and Fig. \ref{fig: PerformanceComparison_03}(b) show that the proposed method achieves the best performance and that WMC performs better than the standard MC. The results illustrate that our procedure seems more robust than the weighted method in this application.
\section{Conclusion} \label{sec: Conclusion}
In this paper, we have proposed a strategy to complete a low-rank matrix from a small collection of its random entries with the aid of prior information. We have established performance guarantees for the proposed method. The results illustrate that with reliable side information, the proposed method can reduce the number of measurements required by the standard matrix completion procedure by a logarithmic factor. We have also presented a typical example in which prior subspace information is available. The choice of the tradeoff parameter has also been considered. Numerical experiments have been provided to verify the theoretical results.
In terms of future work, it is worthwhile to study how to choose a suitable matrix $\bm{\Phi}$ in Eq. \eqref{OurMatrixCompletion} based on the available prior information so as to improve the performance of the proposed approach. Besides, it would be interesting to provide some theoretical insight into why the proposed approach is more robust than the weighted algorithm.
\appendices
\section{Proof of Theorem \ref{thm: main}} \label{sec: Proof1}
Before proving Theorem \ref{thm: main}, we require some lemmas, of which Lemma \ref{lm: DualCertificate} is proved in Appendix \ref{sec: Proof2}.
\begin{lemma}[Lemma 9, \cite{chen2015completing}]
\label{lemma:near isometry 1} For probabilities $\{p_{ij}\}\subset(0,1]$, consider
the measurement operator $\mathcal{R}_p(\cdot)$ defined in \eqref{eq:def of R(Z)} and projection operator $\mathcal{P}_{\mathcal{T}}$ defined in \eqref{eq: PT}. Then, except with a probability of at most $n^{-20}$,
$$
\left\Vert \left(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_p\mathcal{P}_{\mathcal{T}}\right)(\cdot)\right\Vert _{F\rightarrow F}\le\frac{1}{2},
$$
provided that
\begin{equation}\label{eq:no of samples}
\frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim p_{ij} \le 1
,
\qquad \forall i,j\in[1:n],
\end{equation}
where $\|\mathcal{A}(\cdot)\|_{F\rightarrow F}=\sup_{\|\bm{X}\|_{F}\le1}\|\mathcal{A}(\bm{X})\|_{F}$
, and $(\mathcal{A} \mathcal{B})(\cdot) = \mathcal{A}(\mathcal{B}(\cdot))$.
\end{lemma}
\begin{lemma}[Lemma 13, \cite{chen2015completing}] \label{lm: chen_lemma13}
If the projection operator $\mathcal{P}_{\mathcal{T}}$ satisfies $
\left\Vert \left(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_p\mathcal{P}_{\mathcal{T}}\right)(\cdot)\right\Vert _{F\rightarrow F}\le\frac{1}{2},
$
then we have
\begin{multline*}
\norm{ \mathcal{P}_{\mathcal{T}}(\bm{Z})}_{F}\le \(\max_{i,j} \sqrt{\frac{2}{p_{ij}}}\)\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_F, \\ \forall\, \bm{Z} \in \{\bm{Z}:\mathcal{R}_p(\bm{Z})=0\}.
\end{multline*}
\end{lemma}
\begin{lemma} \label{lm: DualCertificate}
Let $\bm{X}^\star=\bm{U}_r \bm{\Sigma}_r \bm{V}^T_r$ be the compact SVD of the rank-$r$ matrix $\bm{X}^\star$. Let the subspace $\mathcal{T}$ be the support of $\bm{X}^\star$ and $\mathcal{T}^{\bot}$ be its orthogonal complement. Let $\mu_i$ and $\nu_{j}$ be the leverage scores as defined in Definition \ref{def: ls}. Let $l^{-1}$ be polynomial in $n$.
If
\begin{multline*}
1 \ge p_{ij}\gtrsim \max \left\{\log\(\frac{\alpha_1}{l}\),1\right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \\
\cdot \max\left\{\(2\xi_1+\xi_2\)^2,1\right\}
\end{multline*}
for all $i,j=1,\ldots,n$, and
\begin{equation*}
\alpha_2 < \frac{15}{16},
\end{equation*}
then with high probability, there exists $\bm{Y} \in {\rm range} (\mathcal{R}_p)$ satisfying
$$\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})+\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})} < \frac{1}{32}+\alpha_2,$$
and
$$
\norm{\bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y})}_F \le \frac{l}{32\sqrt{2}},
$$
where
\begin{align*}
\alpha_1&=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_F,~
\alpha_2=\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})},\\
\xi_1&=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_{\mu(\infty)},
\end{align*}
and
$$
\xi_2=\norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})}_{\mu(\infty,2)}.
$$
\end{lemma}
Now we are ready to prove Theorem \ref{thm: main}. Consider any feasible solution ($\bm{X}^\star +\bm{Z}$) to problem \eqref{OurMatrixCompletion} for non-zero matrix $\bm{Z} \in \{\bm{Z}:\mathcal{R}_p(\bm{Z})=0\}$. Let $\bm{W} \in \mathbb{R}^{n \times n}$ be a matrix satisfying $\bm{W} \in \left\{\bm{W}:\bm{W}^T\bm{U}_r=\bm{0}, \bm{W}\bm{V}_r=\bm{0}, \norm{\bm{W}} \le 1 \right\}$ and $\ip{\bm{W}}{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}=\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_*$. Then we have $\bm{W} = \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{W})$ and $\bm{U}_r\bm{V}_r^T+\bm{W} \in \partial \norm{\bm{X}^\star}_*$. According to the definition of subdifferential, for any non-zero matrix $\bm{Z} \in {\rm{ker}}(\mathcal{R}_p)$, we have
\begin{multline} \label{neq: subdiff}
\norm{\bm{X}^\star+\bm{Z}}_{*}-\lambda \ip{\bm{\Phi}}{\bm{X}^\star+\bm{Z}} \\ \ge \norm{\bm{X}^\star}_{*}-\lambda \ip{\bm{\Phi}}{\bm{X}^\star}
+\ip{\bm{U}_r\bm{V}_r^T+\bm{W}-\lambda \bm{\Phi}}{\bm{Z}}
\end{multline}
Let $\bm{Y} \in {\rm{range}}(\mathcal{R}_p)$; then we have $\ip{\bm{Y}}{\bm{Z}}=0$ and
\begin{multline*}
\norm{\bm{X}^\star+\bm{Z}}_{*}-\lambda \ip{\bm{\Phi}}{\bm{X}^\star+\bm{Z}}
\\ \ge \norm{\bm{X}^\star}_{*}-\lambda \ip{\bm{\Phi}}{\bm{X}^\star} +\ip{\bm{U}_r\bm{V}_r^T+\bm{W}-\lambda \bm{\Phi}-\bm{Y}}{\bm{Z}} \nonumber
\end{multline*}
Using H\"older's inequality and the properties of $\bm{W}$ yields
\begin{align*}
&\ip{\bm{U}_r\bm{V}_r^T+\bm{W}-\lambda \bm{\Phi}-\bm{Y}}{\bm{Z}}\\
& =\ip{\bm{U}_r\bm{V}_r^T-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})-\mathcal{P}_{\mathcal{T}}(\bm{Y})}{\mathcal{P}_{\mathcal{T}}(\bm{Z})}\\
&\quad+
\ip{\bm{W}-\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})-\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})}{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}\\
& \ge - \norm{\bm{U}_r\bm{V}_r^T -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})-\mathcal{P}_{\mathcal{T}}(\bm{Y})}_F \norm{\mathcal{P}_{\mathcal{T}}(\bm{Z})}_F\\
&\quad+(1-\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})+\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})})\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_*\\
& \ge - \norm{\bm{U}_r\bm{V}_r^T -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})-\mathcal{P}_{\mathcal{T}}(\bm{Y})}_F \norm{\mathcal{P}_{\mathcal{T}}(\bm{Z})}_F\\
&\quad+(1-\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})+\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})})\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_F,
\end{align*}
where the second inequality follows from $\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_* \ge \norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_F$.
Suppose the assumptions of Lemma \ref{lm: DualCertificate} are satisfied.
Then applying Lemma \ref{lm: DualCertificate} yields
\begin{align} \label{neq: DualResults}
&\ip{\bm{U}_r\bm{V}_r^T+\bm{W}-\lambda \bm{\Phi}-\bm{Y}}{\bm{Z}} \nonumber\\
&\qquad \ge - \frac{l}{32\sqrt{2}} \norm{\mathcal{P}_{\mathcal{T}}(\bm{Z})}_F + \(\frac{31}{32}-\alpha_2\)\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_F \\
&\qquad > - \frac{l}{32\sqrt{2}} \norm{\mathcal{P}_{\mathcal{T}}(\bm{Z})}_F + \frac{1}{32}\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Z})}_F \ge 0,
\end{align}
where the last inequality applies Lemma \ref{lm: chen_lemma13} and the fact that $\min_{i,j} p_{ij} \ge l^2$. Here, we assign
\begin{equation*}
l^2\triangleq \frac{r \log n}{n},
\end{equation*}
and the corresponding bound of probability becomes
\begin{multline*}
1 \ge p_{ij}\gtrsim \max \left\{\log\(\frac{\alpha_1^2 n}{r\log n}\),1\right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \\
\cdot \max\left\{\(2\xi_1+\xi_2\)^2,1\right\}
\end{multline*}
By incorporating \eqref{neq: DualResults} into \eqref{neq: subdiff}, we have
\begin{equation*}
\norm{\bm{X}^\star+\bm{Z}}_{*}-\lambda \ip{\bm{\Phi}}{\bm{X}^\star+\bm{Z}} > \norm{\bm{X}^\star}_{*}-\lambda \ip{\bm{\Phi}}{\bm{X}^\star}
\end{equation*}
for any non-zero matrix $\bm{Z} \in \{\bm{Z}:\mathcal{R}_p(\bm{Z})=0\}$, which completes the proof.
\section{Proof of Lemma \ref{lm: DualCertificate}} \label{sec: Proof2}
In this section, we use the golfing scheme to construct the dual certificate by following \cite{gross2011recovering,chen2015completing,eftekhari2018weighted}. Before proving Lemma \ref{lm: DualCertificate}, let us review some useful lemmas which will be used in the proof.
\begin{lemma} [Lemma 10, \cite{chen2015completing}]
\label{lemma:any Z} Consider a fixed $\bm{X} \in \mathbb{R}^{n \times n}$. For some universal constant $\Delta \ge 1$, if
\begin{equation*}
\frac{\Delta^2 \left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim p_{ij} \le 1,
\qquad \forall \, i,j\in[1:n],
\end{equation*}
holds, then
$$
\left\Vert \left(\mathcal{R}_p-\mathcal{I}\right)(\bm{X})\right\Vert \le \frac{1}{\Delta}\(\|\bm{X}\|_{\mu(\infty)}+\|\bm{X}\|_{\mu(\infty,2)}\),
$$
except with a probability of at most $n^{-20}$. Here, $\mathcal{I}(\cdot)$ is the identity operator.
\end{lemma}
\begin{lemma}[Lemma 11, \cite{chen2015completing}]
\label{lemma:inf two bnd} Consider a fixed matrix $\bm{X}\in \mathcal{T}\subset \mathbb{R}^{n\times n}$ (i.e., $\mathcal{P}_{\mathcal{T}}(\bm{X})=\bm{X}$). Then except with a probability of at most $n^{-20}$, it holds that
\begin{multline*}
\norm{\left(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_p\mathcal{P}_{\mathcal{T}}\right)(\bm{X})} _{\mu(\infty,2)} \\ \le \frac{1}{2}\|\bm{X}\|_{\mu(\infty)}+\frac{1}{2}\|\bm{X}\|_{\mu(\infty,2)},
\end{multline*}
as long as (\ref{eq:no of samples}) holds.
\end{lemma}
\begin{lemma} [Lemma 12, \cite{chen2015completing}]
\label{lemma:inf bound} Consider a fixed matrix $\bm{X}\in T\subset\mathbb{R}^{n\times n}$. Then except with a probability of at most $n^{-20}$, it holds that
$$
\left\Vert \left(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_p\mathcal{P}_{\mathcal{T}}\right)(\bm{X})\right\Vert _{\mu(\infty)}\le\frac{1}{2}\|\bm{X}\|_{\mu(\infty)},
$$
as long as (\ref{eq:no of samples}) holds.
\end{lemma}
Armed with these lemmas, we are ready to prove Lemma \ref{lm: DualCertificate}. In order to measure $\bm{X}^\star$, we use $K$ independent measurement operators $\mathcal{R}_q(\cdot)$ instead of $\mathcal{R}_p(\cdot)$, which means the probabilities $p_{ij}$ and $q_{ij}$ satisfy
\begin{equation}\label{eq: p_q}
(1-q_{ij})^K=1-p_{ij},~i,j=1,\ldots,n,
\end{equation}
for given $K$.
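Eq. \eqref{eq: p_q} simply splits one Bernoulli($p_{ij}$) observation mask into $K$ independent Bernoulli($q_{ij}$) rounds; solving for $q_{ij}$ gives the short sketch below (the values of $p$ and $K$ are hypothetical):

```python
def per_round_probability(p, K):
    # Invert (1 - q)^K = 1 - p: an entry is unobserved overall exactly
    # when it is missed in every one of the K golfing rounds.
    return 1.0 - (1.0 - p) ** (1.0 / K)

q = per_round_probability(0.5, 2)
print(q)  # about 0.2929; two rounds at this q reproduce p = 0.5
```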
Define $\bm{W}_0 = \bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi}) $ and set $\bm{Y}_k = \sum_{j=1}^k \mathcal{R}_{q}(\bm{W}_{j-1})$, $\bm{W}_k = \bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y}_k)$ for $k=1,\ldots, K$. Then for $k=1,\ldots, K$, we have
\begin{align*}
\bm{W}_k&= \bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y}_{k-1}+\mathcal{R}_{q}(\bm{W}_{k-1}))\\
&= \bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y}_{k-1})- \mathcal{P}_{\mathcal{T}}(\mathcal{R}_{q}(\bm{W}_{k-1}))\\
&=\bm{W}_{k-1}-\mathcal{P}_{\mathcal{T}}(\mathcal{R}_{q}(\bm{W}_{k-1}))\\
&=\(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_{q}\mathcal{P}_{\mathcal{T}}\)(\bm{W}_{k-1}).
\end{align*}
According to Lemma \ref{lemma:near isometry 1}, we have
\begin{equation*}
\norm{\bm{W}_k}_F=\norm{\(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_{q}\mathcal{P}_{\mathcal{T}}\)(\bm{W}_{k-1})}_F
\le \frac{1}{2}\norm{\bm{W}_{k-1}}_F,
\end{equation*}
except with a probability of at most $n^{-20}$, as long as
\begin{equation*}
\frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim q_{ij} \le 1
,
\qquad \forall i,j\in[1:n].
\end{equation*}
By iteration, we obtain
\begin{equation*}
\norm{\bm{W}_K}_F \le 2^{-K}\norm{\bm{W}_{0}}_F,
\end{equation*}
except with a probability of at most $K n^{-20}$.
Let $\bm{Y}=\bm{Y}_{K}$; then
\begin{align*}
\norm{\bm{W}_{K}}_F&=\norm{\bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y})}_F \\
&\le 2^{-K}\norm{\bm{W}_{0}}_F,
\end{align*}
except with a probability of at most $K n^{-20}$. Let
$$K = \max\left\{\log\(\frac{32 \sqrt{2}\alpha_1}{l}\),1\right\},$$ where $l^{-1}$ is polynomial in $n$. Then we have
\begin{align*}
\norm{\bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y})}_F &\le
2^{-K}\norm{\bm{W}_{0}}_F \\
&= 2^{-K} \alpha_1 \le \frac{l}{32\sqrt{2}},
\end{align*}
except with a probability of at most
$$K n^{-20}=O(\log(\alpha_1 n)) \cdot n^{-20}=o(n^{-19}).$$
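The choice of $K$ above can be verified numerically. The sketch below uses arbitrary illustrative values of $\alpha_1$ and $l$, takes a ceiling so that $K$ is an integer (the base-2 logarithm matches the $2^{-K}$ decay), and checks that halving $K$ times brings $\|\bm{W}_K\|_F$ below $l/(32\sqrt{2})$:

```python
import math

alpha1, l = 5.0, 1e-3  # illustrative values; l**-1 is polynomial in n
# Ceiling so that K is an integer; base-2 log matches the 2**-K decay.
K = max(math.ceil(math.log2(32 * math.sqrt(2) * alpha1 / l)), 1)
# Halving K times: ||W_K||_F <= 2**-K * alpha1 <= l / (32 * sqrt(2)).
assert 2.0 ** (-K) * alpha1 <= l / (32 * math.sqrt(2))
```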
From the triangle inequality, we have
\begin{equation}
\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})+\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})}
\le \norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})} + \norm{\lambda\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})}.
\end{equation}
From Lemma \ref{lemma:any Z}, except with a probability of at most
$K n^{-20}=o(n^{-19})$, we have
\begin{align*}
\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})}& \le \sum_{j=1}^{K}\norm{ \mathcal{P}_{\mathcal{T}^{\bot}} \mathcal{R}_{q}(\bm{W}_{j-1})}\\
& = \sum_{j=1}^{K}\norm{\mathcal{P}_{\mathcal{T}^{\bot}}\( \mathcal{R}_{q}(\bm{W}_{j-1})-\bm{W}_{j-1}\)}\\
& \le \sum_{j=1}^{K}\norm{\( \mathcal{R}_{q}-\mathcal{I}\)\bm{W}_{j-1}}\\
& \le\frac{1}{\Delta} \sum_{j=1}^{K}\(\|\bm{W}_{j-1}\|_{\mu(\infty)}+\|\bm{W}_{j-1}\|_{\mu(\infty,2)}\),
\end{align*}
as long as
\begin{equation*}
\frac{ \Delta^2 \left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim q_{ij} \le 1,
\qquad \forall i,j\in[1:n].
\end{equation*}
The second line holds since $\bm{W}_{j-1}= \mathcal{P}_\mathcal{T} (\bm{W}_{j-1})$ and the third line holds since $\norm{\mathcal{P}_{\mathcal{T}^{\bot}} (\bm{X})} \le \norm{\bm{X}}$ for any $\bm{X} \in \mathbb{R}^{n \times n}$.
Using Lemma \ref{lemma:inf bound} leads to
\begin{align*}
\norm{\bm{W}_{j-1}}_{\mu(\infty)}&=\norm{\(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_{q}\mathcal{P}_{\mathcal{T}}\)(\bm{W}_{j-2})}_{\mu(\infty)}\\
&\le \frac{1}{2}\norm{\bm{W}_{j-2}}_{\mu(\infty)},
\end{align*}
except with a probability of at most $n^{-20}$, as long as
\begin{equation*}
\frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim q_{ij} \le 1,
\qquad \forall i,j\in[1:n].
\end{equation*}
By iteration, we obtain
$$
\norm{\bm{W}_{j-1}}_{\mu(\infty)} \le \frac{1}{2^{{j-1}}}\norm{\bm{W}_{0}}_{\mu(\infty)}
$$
except with a probability of at most $o(n^{-19})$, since $j \le K$.
By using Lemma \ref{lemma:inf two bnd}, we obtain
\begin{align*}
\|\bm{W}_{j-1}\|_{\mu(\infty,2)} &=
\norm{\left(\mathcal{P}_{\mathcal{T}}-\mathcal{P}_{\mathcal{T}}\mathcal{R}_q\mathcal{P}_{\mathcal{T}}\right)(\bm{W}_{j-2})} _{\mu(\infty,2)} \\ &\le\frac{1}{2}\|\bm{W}_{j-2}\|_{\mu(\infty)}+\frac{1}{2}\|\bm{W}_{j-2}\|_{\mu(\infty,2)}\\
&\le\frac{j-1}{2^{j-1}}\|\bm{W}_{0}\|_{\mu(\infty)}+\frac{1}{2^{j-1}}\|\bm{W}_{0}\|_{\mu(\infty,2)},
\end{align*}
except with a probability of at most $o(n^{-19})$ due to the fact $j \le K$.
It follows that
\begin{align*}
\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})} &\le
\frac{1}{\Delta} \sum_{j=1}^{K}\frac{j}{ 2^{j-1}}\norm{\bm{W}_0}_{\mu(\infty)} \\
&\quad + \frac{1}{\Delta} \sum_{j=1}^{K} \frac{1}{2^{j-1}}\norm{\bm{W}_0}_{\mu(\infty,2)}\\
&< \frac{1}{\Delta} \(4\norm{\bm{W}_0}_{\mu(\infty)} +2\norm{\bm{W}_0}_{\mu(\infty,2)}\),
\end{align*}
where $\sum_{j=1}^{K}j\cdot 2^{-(j-1)} < 4$ and $\sum_{j=1}^{K} 2^{-(j-1)} < 2$ for finite $K$.
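These two series bounds hold for every finite $K$ (the corresponding infinite sums equal exactly 4 and 2, so every partial sum is strictly smaller); a quick numerical check:

```python
K = 50  # any finite K; the partial sums are increasing and bounded by the limits 4 and 2
s1 = sum(j * 2.0 ** (-(j - 1)) for j in range(1, K + 1))
s2 = sum(2.0 ** (-(j - 1)) for j in range(1, K + 1))
assert s1 < 4 and s2 < 2
```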
Then except with a probability of at most $o(n^{-19})$,
\begin{equation*}
\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})}
< \frac{4 \xi_1+2 \xi_2}{\Delta} = \frac{1}{32},
\end{equation*}
provided that
\begin{equation*}
\frac{\Delta^2 \left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim q_{ij} \le 1,
\qquad \forall i,j\in[1:n],
\end{equation*}
where we set $\Delta=128 \xi_1+64 \xi_2$.
So we conclude that if
$$
\max \left\{ \(2\xi_1+\xi_2\)^2, 1 \right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \lesssim q_{ij} \le 1,
$$
we have
\begin{equation*}
\norm{\lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})+\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{Y})} < \frac{1}{32}+\alpha_2
\end{equation*}
and
\begin{equation*}
\norm{\bm{U}_r\bm{V}^T_r -\lambda \mathcal{P}_{\mathcal{T}}(\bm{\Phi})- \mathcal{P}_{\mathcal{T}}(\bm{Y})}_F \le \frac{l}{32\sqrt{2}},
\end{equation*}
except with a probability of at most $o(n^{-19})$.
Finally, according to \eqref{eq: p_q}, if the $\{q_{ij}\}$ are small enough (i.e., if $n$ is large enough), we have
\begin{align*}
p_{ij}&=1-(1-q_{ij})^K \gtrsim K q_{ij}\\
&\gtrsim \max \left\{\log\(\frac{\alpha_1}{l}\),1\right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n}\cdot\\
&\quad \max\left\{\(2\xi_1+\xi_2\)^2,1\right\}.
\end{align*}
\section{Proof of a general case of Theorem \ref{thm: main2}} \label{Appendix: general case of Thm 2}
Here, we consider a non-symmetric target matrix $\bm{X}^\star$.
Let ${\mathcal{U}}_r={\rm span}(\bm{U}_r)$ and ${\mathcal{V}}_r={\rm span}(\bm{V}_r)$ denote the $r$-dimensional column and row spaces of $\bm{X}^\star$, respectively. Assume that the prior subspace information of the $r$-dimensional column and row spaces of $\bm{X}^\star$, denoted by ${\widetilde{\mathcal{U}}}_r$ and ${\widetilde{\mathcal{V}}}_r$ respectively, is available to us. By leveraging the prior subspace information, we modify the matrix completion procedure \eqref{OurMatrixCompletion} as follows
\begin{equation}\label{OurMatrixCompletion2_general}
\min_{\bm{X}} \norm{\bm{X}}_{*} - \lambda \ip{\bm{\Phi}}{\bm{X}} \quad \text{s.t.}~\bm{Y}=\mathcal{R}_p(\bm{X}),
\end{equation}
where $\bm{\Phi}= \widetilde{\bm{U}}_r\widetilde{\bm{V}}_r^T$, $\widetilde{\bm{U}}_r \in \mathbb{R}^{n \times r}$ and $\widetilde{\bm{V}}_r \in \mathbb{R}^{n \times r}$ are the orthonormal bases for subspaces ${\widetilde{\mathcal{U}}}_r $ and ${\widetilde{\mathcal{V}}}_r$ respectively, and $\lambda \in [0,1]$ is a tradeoff parameter.
We first introduce an important result from matrix analysis \cite{golub2012matrix,eftekhari2018weighted}. Throughout the paper,
$\bm{I}_n \in \mathbb{R}^{n \times n}$ denotes the identity matrix and $\bm{0}_n \in \mathbb{R}^{n \times n}$ denotes the all-zero matrix. A simple extension of \cite[Lemma 3]{eftekhari2018weighted} yields the following general result.
\begin{lemma} \label{lem:canonical}
Consider a rank-$r$ matrix $\bm{X} \in \mathbb{R}^{n \times n}$. Let $\bm{U}_r$ and $\widetilde{\bm{U}}_r \in \mathbb{R}^{n \times r}$ be orthonormal bases for the $r$-dimensional subspaces ${\mathcal{U}}_r={\rm{span}}(\bm{X})$ and ${\widetilde{\mathcal{U}}}_r$, respectively, and let the SVD of $\bm{U}_r^T \widetilde{\bm{U}}_r$ be $\bm{L}_L \cos (\bm{\Gamma}) \bm{R}_L^T$, where $\bm{L}_L\in \mathbb{R}^{r \times r}$ and $\bm{R}_L \in \mathbb{R}^{r \times r}$ are orthogonal matrices, and $\bm{\Gamma} = \diag\{\gamma_1,\ldots,\gamma_r\}\in\mathbb{R}^{r\times r}$ is a diagonal matrix containing the principal angles between ${\mathcal{U}}_{r}$ and $\widetilde{\mathcal{U}}_{r}$, with $\pi/2 \ge \gamma_{1}\ge \gamma_{2}\ge\cdots\ge \gamma_{r}\ge 0$. The diagonal matrix $\cos(\bm{\Gamma})$ is defined as
$$
\cos (\bm{\Gamma}) \triangleq
\diag\{\cos \gamma_1, \, \cos \gamma_2, \,\ldots, \, \cos \gamma_r \}
\in\mathbb{R}^{r\times r},
$$
and $\sin(\bm{\Gamma})\in\mathbb{R}^{r\times r}$ is defined likewise.
Then, there exist $\bm{U}_r',\,\widetilde{\bm{U}}_r'\in \mathbb{R}^{n\times r}$, and $\bm{U}_{n-2r}''\in\mathbb{R}^{n\times (n-2r)}$ such that
\begin{align*}
\bm{B}_L& =\left[\begin{array}{ccc}
\bm{U}_{r} & \bm{U}'_{r} & \bm{U}''_{n-2r}\end{array}\right] \underbrace{ \left[
\begin{array}{ccc}
\bm{L}_L & \\
& \bm{L}_L \\
& & \bm{I}_{n-2r}
\end{array}
\right]}_{\triangleq\bm{C}_L} \in\mathbb{R}^{n\times n},
\nonumber\\
\widetilde{\bm{B}}_L &=\left[\begin{array}{ccc}
\widetilde{\bm{U}}_{r} & \widetilde{\bm{U}}'_{r} & \bm{U}''_{n-2r}\end{array}\right]\underbrace{\left[
\begin{array}{ccc}
\bm{R}_L & \\
& \bm{R}_L \\
& & \bm{I}_{n-2r}
\end{array}
\right]}_{\triangleq\bm{D}_L} \in\mathbb{R}^{n\times n},
\end{align*}
are orthonormal bases for $\mathbb{R}^{n}$. Furthermore, we have
\begin{equation*}
\bm{B}_L^T \widetilde{\bm{B}}_L=
\left[
\begin{array}{ccc}
\cos (\bm{\Gamma}) & \sin (\bm{\Gamma})\\
-\sin (\bm{\Gamma}) & \cos (\bm{\Gamma})\\
& & \bm{I}_{n-2r}
\end{array}
\right].
\end{equation*}
For $r$-dimensional subspaces ${\mathcal{V}}_r={\rm{span}}(\bm{X}^T)$ and ${\widetilde{\mathcal{V}}}_r$, let $\bm{V}_r$ and $\widetilde{\bm{V}}_r \in \mathbb{R}^{n \times r}$ be orthonormal bases for ${\mathcal{V}}_r$ and ${\widetilde{\mathcal{V}}}_r$, respectively.
Letting the SVD of $\bm{V}_r^T \widetilde{\bm{V}}_r$ be $\bm{L}_R \cos (\bm{H}) \bm{R}_R^T$ with orthogonal matrices $\bm{L}_R, \bm{R}_R \in \mathbb{R}^{r \times r}$ and diagonal matrix $\bm{H} \in\mathbb{R}^{r\times r}$, we construct in the same way the orthonormal bases
\begin{align*}
\bm{B}_R&=\left[\begin{array}{ccc}
\bm{V}_{r} & \bm{V}'_{r} & \bm{V}''_{n-2r}\end{array}\right]\underbrace{ \left[
\begin{array}{ccc}
\bm{L}_R & \\
& \bm{L}_R \\
& & \bm{I}_{n-2r}
\end{array}
\right]}_{\triangleq\bm{C}_R} \in\mathbb{R}^{n\times n} ,
\nonumber\\
\widetilde{\bm{B}}_R &=\left[\begin{array}{ccc}
\widetilde{\bm{V}}_{r} & \widetilde{\bm{V}}'_{r} & \bm{V}''_{n-2r}\end{array}\right] \underbrace{ \left[
\begin{array}{ccc}
\bm{R}_R & \\
& \bm{R}_R \\
& & \bm{I}_{n-2r}
\end{array}
\right]}_{\triangleq\bm{D}_R} \in\mathbb{R}^{n\times n},
\end{align*}
such that
\begin{equation*}
\bm{B}_R^T \widetilde{\bm{B}}_R=
\left[
\begin{array}{ccc}
\cos(\bm{H}) & \sin(\bm{H})\\
-\sin(\bm{H}) & \cos(\bm{H})\\
& & \bm{I}_{n-2r}
\end{array}
\right].
\end{equation*}
Similarly, the diagonal matrix $\bm{H}= \diag\{\eta_1,\ldots,\eta_r\}\in\mathbb{R}^{r \times r}$ contains the principal angles between $\mathcal{V}_r$ and ${\widetilde{\mathcal{V}}}_r$ in non-increasing order.
\end{lemma}
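The cosines of the principal angles in Lemma \ref{lem:canonical} are simply the singular values of $\bm{U}_r^T\widetilde{\bm{U}}_r$. The NumPy sketch below (with random bases; illustrative only) demonstrates the construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 20, 3
# Random orthonormal bases for two r-dimensional subspaces of R^n.
U, _ = np.linalg.qr(rng.standard_normal((n, r)))
U_tilde, _ = np.linalg.qr(rng.standard_normal((n, r)))

# SVD U^T U_tilde = L_L cos(Gamma) R_L^T; the singular values are cos(gamma_i).
L_L, cos_gamma, R_L_T = np.linalg.svd(U.T @ U_tilde)
assert np.all(cos_gamma >= -1e-12) and np.all(cos_gamma <= 1 + 1e-12)

# When the two subspaces coincide, all principal angles are zero.
cos_same = np.linalg.svd(U.T @ U)[1]
assert np.allclose(cos_same, 1.0)
```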
Define the leverage scores of subspace
$\breve{\mathcal{U}}=\text{\rm{span}}([\bm{U}_r,\widetilde{\bm{U}}_r])$ and $\breve{\mathcal{V}}=\text{\rm{span}}([\bm{V}_r,\widetilde{\bm{V}}_r])$ as follows
\begin{align}
\breve{\mu}_{i}&\triangleq\mu_{i}(\breve{\mathcal{U}}),~i=1,2,\ldots,n,\label{def: mu_c}\\
\breve{\nu}_j&\triangleq\nu_j(\breve{\mathcal{V}}),~j=1,2,\ldots,n.\label{def: nu_c}
\end{align}
Under the assumptions of Lemma \ref{lem:canonical}, we can bound (or evaluate) $\alpha_1, \alpha_2, \xi_1$ and $\xi_2$ using the following notation
\begin{align*}
\bm{A}_{cc}&=\bm{L}_L \cos(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \cos(\bm{H}) \bm{L}_R^T,\\
\bm{A}_{cs}&=\bm{L}_L \cos(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \sin(\bm{H}) \bm{L}_R^T,\\ \bm{A}_{sc}&=\bm{L}_L \sin(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \cos(\bm{H}) \bm{L}_R^T,\\
\bm{A}_{ss}&=\bm{L}_L \sin(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \sin(\bm{H}) \bm{L}_R^T.
\end{align*}
\begin{lemma} \label{lm: key_parameters}
For $\bm{W}_0= \bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)$ and $\mathcal{P}_{\mathcal{T}^{\bot}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)$, it holds that
\begin{align*}
&\norm{\bm{W}_{0}}_F = \alpha_1,~\norm{ \lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)}
=\alpha_2,\\
&\norm{\bm{W}_0}_{\mu(\infty,2)} \le \alpha_3 \beta,~ \norm{\bm{W}_0}_{\mu(\infty)} \le \alpha_3 \beta,
\end{align*}
where
\begin{align*}
\alpha_1^2&= \norm{\bm{I}_r- \lambda \bm{A}_{cc}}_F^2 + \norm{\lambda \bm{A}_{cs}}_F^2 +\norm{\lambda \bm{A}_{sc}}_F^2, \\
\alpha_2 &=\norm{\lambda \bm{A}_{ss}},~\alpha_3= \norm{\bm{I}_r- \lambda \bm{A}_{cc}} + \norm{\lambda \bm{A}_{sc}} + \norm{\lambda \bm{A}_{cs}},
\end{align*}
and
$$
\beta= 1 \vee \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}} \vee \sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}}.
$$
\end{lemma}
The proof of Lemma \ref{lm: key_parameters} is deferred to Appendix \ref{Proofoflemma8}.
\begin{remark} \label{Choice_of_Lambda_Genaral}
In this case, the optimal choice of $\lambda$ is
\begin{equation} \label{eq: Optimal_lambda}
\lambda^\star=\frac{\tr(\bm{A}_{cc})}{\norm{ \bm{A}_{cc}}_F^2 + \norm{ \bm{A}_{cs}}_F^2+ \norm{ \bm{A}_{sc}}_F^2},
\end{equation}
which will achieve the minimum of $\alpha_1^2$
$$
\alpha_1^2 = r-\frac{\tr^2(\bm{A}_{cc})}{\norm{ \bm{A}_{cc}}_F^2 + \norm{ \bm{A}_{cs}}_F^2+ \norm{ \bm{A}_{sc}}_F^2}.
$$
\end{remark}
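Since $\alpha_1^2(\lambda)=r-2\lambda\tr(\bm{A}_{cc})+\lambda^2\left(\|\bm{A}_{cc}\|_F^2+\|\bm{A}_{cs}\|_F^2+\|\bm{A}_{sc}\|_F^2\right)$ is a quadratic in $\lambda$, the stated $\lambda^\star$ is its vertex. The sketch below checks this with random stand-ins for $\bm{A}_{cc},\bm{A}_{cs},\bm{A}_{sc}$ (illustrative; the true matrices are constrained by the principal angles):

```python
import numpy as np

rng = np.random.default_rng(1)
r = 4
# Illustrative stand-ins for A_cc, A_cs, A_sc (any r x r matrices suffice here).
A_cc, A_cs, A_sc = (rng.standard_normal((r, r)) for _ in range(3))

def alpha1_sq(lam):
    return (np.linalg.norm(np.eye(r) - lam * A_cc, 'fro') ** 2
            + np.linalg.norm(lam * A_cs, 'fro') ** 2
            + np.linalg.norm(lam * A_sc, 'fro') ** 2)

denom = sum(np.linalg.norm(A, 'fro') ** 2 for A in (A_cc, A_cs, A_sc))
lam_star = np.trace(A_cc) / denom

# lam_star is the vertex of the quadratic, hence a global minimizer.
eps = 1e-3
assert alpha1_sq(lam_star) <= alpha1_sq(lam_star + eps)
assert alpha1_sq(lam_star) <= alpha1_sq(lam_star - eps)
# Closed form of the minimum: r - tr(A_cc)^2 / denom.
assert abs(alpha1_sq(lam_star) - (r - np.trace(A_cc) ** 2 / denom)) < 1e-9
```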
A direct corollary of Lemma \ref{lm: key_parameters} for symmetric low-rank matrices is as follows.
\begin{lemma} For $\bm{W}_0= \bm{U}_r\bm{U}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{U}}^T_r)$ and $\mathcal{P}_{\mathcal{T}^{\bot}}(\widetilde{\bm{U}}_r\widetilde{\bm{U}}^T_r)$, it holds that
\begin{align*}
&\norm{\bm{W}_{0}}_F = \alpha_1,~\norm{ \lambda \mathcal{P}_{\mathcal{T}^{\bot}}(\widetilde{\bm{U}}_r\widetilde{\bm{U}}^T_r)}
=\alpha_2,\\
&\norm{\bm{W}_0}_{\mu(\infty,2)} \le \alpha_3 \beta,~ \norm{\bm{W}_0}_{\mu(\infty)} \le \alpha_3 \beta,
\end{align*}
where
\begin{align*}
\alpha_1^2&=\lambda^2 \[r-\sum_{i=1}^{r} \sin^4\gamma_i\]-2 \lambda \sum_{i=1}^{r} \cos^2\gamma_i+r,\\
\alpha_2&= \lambda \, \max_{i} \{\sin^2 \gamma_i\},\\
\alpha_3&=\max_i \{1-\lambda \cos^2\gamma_i \}+2\lambda \, \max_i\{\cos\gamma_i \sin \gamma_i\},
\end{align*}
and
$$
\beta= 1 \vee \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}}.
$$
\end{lemma}
\begin{IEEEproof} For a symmetric matrix, we have $\bm{R}_L^T \bm{R}_R=\bm{I}_r$, and then
\begin{align*}
\bm{A}_{cc}&=\bm{L}_L \cos(\bm{\Gamma}) \cos(\bm{\Gamma}) \bm{L}_L^T, \bm{A}_{cs}=\bm{L}_L \cos(\bm{\Gamma}) \sin(\bm{\Gamma}) \bm{L}_L^T,\\
\bm{A}_{sc}&=\bm{L}_L \sin(\bm{\Gamma}) \cos(\bm{\Gamma}) \bm{L}_L^T, \bm{A}_{ss}=\bm{L}_L \sin(\bm{\Gamma}) \sin(\bm{\Gamma}) \bm{L}_L^T.
\end{align*}
By using the orthogonal invariance, we have
\begin{align*}
\alpha_1^2&=\norm{\bm{I}_r- \lambda \cos(\bm{\Gamma}) \cos(\bm{\Gamma})}_F^2+2\norm{\lambda \cos(\bm{\Gamma}) \sin(\bm{\Gamma})}_F^2,\\
\alpha_2& =\norm{\lambda\sin(\bm{\Gamma}) \sin(\bm{\Gamma})},\\
\alpha_3&= \norm{\bm{I}_r- \lambda \cos(\bm{\Gamma}) \cos(\bm{\Gamma})} + 2 \norm{\lambda \cos(\bm{\Gamma}) \sin(\bm{\Gamma})}.
\end{align*}
Incorporating the definition of $\bm{\Gamma}$ completes the proof.
\end{IEEEproof}
Then, by combining Theorem \ref{thm: main} and Lemma \ref{lm: key_parameters}, we obtain the following result.
\begin{theorem} \label{thm: main2-general}
Let $\bm{X}^\star \in \mathbb{R}^{n \times n}$ be a rank-$r$ matrix with thin SVD $\bm{X}^\star=\bm{U}_r \bm{\Sigma}_r \bm{V}^T_r$ for $\bm{U}_r \in \mathbb{R}^{n \times r},\bm{V}_r \in \mathbb{R}^{n \times r}$ and $\bm{\Sigma}_r \in \mathbb{R}^{r \times r}$. Let the column and row subspaces of $\bm{X}^\star$ be ${\mathcal{U}}_r={\rm span}(\bm{U}_r)$ and ${\mathcal{V}}_r={\rm span}(\bm{V}_r)$, respectively. Assume that the $r$-dimensional prior subspace information ${\widetilde{\mathcal{U}}}_r$ about ${\mathcal{U}}_r$ and ${\widetilde{\mathcal{V}}}_r$ about ${\mathcal{V}}_r$ is known beforehand. Let $\bm{\Gamma} \in \mathbb{R}^{r \times r}$ be diagonal whose entries are the principal angles between ${\mathcal{U}}_r$ and ${\widetilde{\mathcal{U}}}_r$ and $\bm{H} \in \mathbb{R}^{r \times r}$ be diagonal whose entries are the principal angles between ${\mathcal{V}}_r$ and ${\widetilde{\mathcal{V}}}_r$. Let $\mu_i,\,\nu_{j},\,\breve{\mu}_{i}$ and $\breve{\nu}_{j}$ be the leverage scores defined as before.
If
\begin{multline*}
1 \ge p_{ij}\gtrsim \max \left\{\log\(\frac{\alpha_1^2 n}{r\log n}\),1\right\} \cdot \frac{\left(\mu_{i}+\nu_{j}\right)r\log n}{n} \\
\cdot \max\left\{\alpha_3^2\beta^2,1\right\}
\end{multline*}
for all $i,j=1,\ldots,n$, and
$$ \alpha_2 < \frac{15}{16},
$$
then with high probability, we can achieve exact recovery of $\bm{X}^\star$ by solving the program \eqref{OurMatrixCompletion2_general}, where
\begin{align*}
\alpha_1^2&= \norm{\bm{I}_r- \lambda \bm{A}_{cc}}_F^2 + \norm{\lambda \bm{A}_{cs}}_F^2 +\norm{\lambda \bm{A}_{sc}}_F^2, \\
\alpha_2 &=\norm{\lambda \bm{A}_{ss}},~\alpha_3= \norm{\bm{I}_r- \lambda \bm{A}_{cc}} + \norm{\lambda \bm{A}_{sc}} + \norm{\lambda \bm{A}_{cs}},
\end{align*}
and
$$
\beta= 1 \vee \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}} \vee \sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}}.
$$
\end{theorem}
\section{Proof of Lemma \ref{lm: key_parameters}} \label{Proofoflemma8}
In this section, we use the principal angles between subspaces to bound $\norm{\bm{W}_{0}}_{F}$, $\norm{\bm{W}_0}_{\mu(\infty)}$ and $\norm{\bm{W}_0}_{\mu(\infty,2)}$, and to evaluate $\norm{\mathcal{P}_{\mathcal{T}^{\bot}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)}$, where $\bm{W}_0 = \bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)$. We first give an alternative expression for $\bm{W}_0$. For convenience, we recall the definitions
\begin{align*}
\bm{A}_{cc}&=\bm{L}_L \cos(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \cos(\bm{H}) \bm{L}_R^T,\\
\bm{A}_{cs}&=\bm{L}_L \cos(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \sin(\bm{H}) \bm{L}_R^T,\\ \bm{A}_{sc}&=\bm{L}_L \sin(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \cos(\bm{H}) \bm{L}_R^T,\\
\bm{A}_{ss}&=\bm{L}_L \sin(\bm{\Gamma}) \bm{R}_L^T \bm{R}_R \sin(\bm{H}) \bm{L}_R^T.
\end{align*}
We know $\bm{U}_r^T\widetilde{\bm{U}}_r=\bm{L}_L \cos (\bm{\Gamma}) \bm{R}_L^T$ and $\bm{V}_r^T \widetilde{\bm{V}}_r =\bm{L}_R \cos (\bm{H}) \bm{R}_R^T$. Besides, Lemma \ref{lem:canonical} immediately implies that
\begin{align*}
{\bm{U}_r}&=\bm{B}_{L}\left[\begin{array}{c}
\bm{L}_L^T\\
\bm{0}_r\\
\bm{0}_{(n-2r)\times r}
\end{array}\right],\quad
\widetilde{\bm{U}}_r=\bm{B}_{L}\left[\begin{array}{c}
~~\cos(\bm{\Gamma}) \bm{R}_L^T\\
-\sin(\bm{\Gamma}) \bm{R}_L^T\\
~~\bm{0}_{(n-2r)\times r}
\end{array}\right],\\
{\bm{V}_r}&=\bm{B}_{R}\left[\begin{array}{c}
\bm{L}_R^T\\
\bm{0}_r\\
\bm{0}_{(n-2r)\times r}
\end{array}\right],\quad
\widetilde{\bm{V}}_r=\bm{B}_{R}\left[\begin{array}{c}
~~\cos(\bm{H}) \bm{R}_R^T\\
-\sin(\bm{H}) \bm{R}_R^T\\
~~\bm{0}_{(n-2r)\times r}
\end{array}\right].
\end{align*}
By incorporating the above expressions, we have
\begin{align*}
\mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r) &= \bm{U}_r\bm{U}_r^T \widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r + \widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r \bm{V}_r \bm{V}_r^T\\
&\qquad \qquad \qquad \qquad- \bm{U}_r \bm{U}_r^T \widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r \bm{V}_r \bm{V}_r^T\\
&=\quad \bm{B}_{L} \bm{C}_L^T\left[\begin{array}{ccc}
\bm{A}_{cc} & -\bm{A}_{cs} & \\
-\bm{A}_{sc} & \bm{0}_r &\\
& &\bm{0}_{n-2r}
\end{array}\right]
\bm{C}_R \bm{B}^T_{R}.
\end{align*}
So we have
\begin{align} \label{eq: W_0}
\bm{W}_0&=\bm{U}_r\bm{V}^T_r-\lambda\mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r) \nonumber\\
&=\bm{B}_{L} \bm{C}_L^T \left[\begin{array}{ccc}
\bm{I}_r- \lambda \bm{A}_{cc} & \lambda \bm{A}_{cs} & \\
\lambda \bm{A}_{sc} & \bm{0}_r &\\
& &\bm{0}_{n-2r}
\end{array}\right]
\bm{C}_R \bm{B}^T_{R}.
\end{align}
1) \textbf{The new expression of $\norm{\bm{W}_{0}}_{F}$}. Expressing $\bm{W}_0$ via the principal angles as in \eqref{eq: W_0} yields
\begin{align*}
&\norm{\bm{W}_{0}}_{F} \\ &= \norm{\bm{U}_r\bm{V}^T_r-\lambda \mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)}_F\\
&=\norm{\bm{B}_{L} \bm{C}_L^T \left[\begin{array}{ccc}
\bm{I}_r- \lambda \bm{A}_{cc} & \lambda \bm{A}_{cs} & \\
\lambda \bm{A}_{sc} & \bm{0}_r &\\
& &\bm{0}_{n-2r}
\end{array}\right]
\bm{C}_R \bm{B}^T_{R}
}_F\\
&=\norm{\left[\begin{array}{ccc}
\bm{I}_r- \lambda \bm{A}_{cc} & \lambda \bm{A}_{cs} & \\
\lambda \bm{A}_{sc} & \bm{0}_r &\\
& &\bm{0}_{n-2r}
\end{array}\right]
}_F \\
&=\norm{\left[\begin{array}{cc}
\bm{I}_r- \lambda \bm{A}_{cc} & \lambda \bm{A}_{cs} \\
\lambda \bm{A}_{sc} & \bm{0}_r
\end{array}\right]}_F,
\end{align*}
where the third equality holds by rotational invariance.
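The rotational (orthogonal) invariance used here, $\|\bm{Q}\bm{A}\bm{P}^T\|_F=\|\bm{A}\|_F$ for orthogonal $\bm{Q},\bm{P}$, can be confirmed numerically (illustrative sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 8
A = rng.standard_normal((n, n))
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal matrix
P, _ = np.linalg.qr(rng.standard_normal((n, n)))  # orthogonal matrix
# Frobenius norm is invariant under left/right orthogonal transformations.
assert np.isclose(np.linalg.norm(Q @ A @ P.T, 'fro'), np.linalg.norm(A, 'fro'))
```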
2) \textbf{The bound of $\norm{\bm{W}_0}_{\mu(\infty)}$}. By using \eqref{eq: W_0}, the definition of $\bm{B}_L$ and $\bm{B}_R$ and the triangle inequality, we have
\begin{align*}
&\norm{\bm{W}_0}_{\mu(\infty)} \\
&\le \norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r \cdot (\bm{I}_r- \lambda \bm{A}_{cc}) \cdot \bm{V}_r^T \left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}
}_\infty\\
&\quad + \norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r' \cdot \lambda \bm{A}_{sc} \cdot \bm{V}_r^T \left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}}_\infty\\
&\quad +\norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r \cdot \lambda \bm{A}_{cs} \cdot {\bm{V}'_r}^T \left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}}_\infty.
\end{align*}
By using
\begin{equation} \label{neq: bound_infty}
\norm{\bm{X}\bm{Y}}_\infty \le \norm{\bm{X}}_{(\infty,2)} \norm{\bm{Y}}_{(\infty,2)}
\end{equation}
and
\begin{equation} \label{neq: bound_infty_2}
\norm{\bm{X}\bm{Y}}_{(\infty,2)} \le \norm{\bm{X}}_{(\infty,2)} \norm{\bm{Y}},
\end{equation}
we have
\begin{align*}
&\norm{\bm{W}_0}_{\mu(\infty)} \\
& \le \norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r}_{(\infty,2)} \norm{\bm{I}_r- \lambda \bm{A}_{cc}} \norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}} \bm{V}_r}_{(\infty,2)}\\
& \quad + \norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r'}_{(\infty,2)} \norm{\lambda \bm{A}_{sc}} \norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}\bm{V}_r}_{(\infty,2)}\\
& \quad +\norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r}_{(\infty,2)} \norm{\lambda \bm{A}_{cs}} \norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}\bm{V}'_r}_{(\infty,2)}.
\end{align*}
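Writing $\|\cdot\|_{(\infty,2)}$ for the largest row $\ell_2$-norm, the two inequalities \eqref{neq: bound_infty} and \eqref{neq: bound_infty_2} follow from Cauchy--Schwarz. The sketch below checks them on random matrices (the first is applied in the form $\|\bm{A}\bm{B}^T\|_\infty \le \|\bm{A}\|_{(\infty,2)}\|\bm{B}\|_{(\infty,2)}$; the convention is an assumption for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
m, k, n = 6, 4, 5
A = rng.standard_normal((m, k))
B = rng.standard_normal((n, k))

def inf_two(X):
    """Largest l2-norm of a row of X."""
    return np.max(np.linalg.norm(X, axis=1))

# |(A B^T)_{ij}| = |<row_i(A), row_j(B)>| <= ||row_i(A)||_2 * ||row_j(B)||_2.
assert np.max(np.abs(A @ B.T)) <= inf_two(A) * inf_two(B) + 1e-12

# Each row of A Y is row_i(A) @ Y, so its norm grows by at most the spectral norm of Y.
Y = rng.standard_normal((k, n))
assert inf_two(A @ Y) <= inf_two(A) * np.linalg.norm(Y, 2) + 1e-12
```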
Then we can obtain
\begin{align*}
\norm{\bm{W}_0}_{\mu(\infty)} &\le \norm{\bm{I}_r- \lambda \bm{A}_{cc}}+ \norm{\lambda \bm{A}_{sc}} \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}}\\
& \qquad \qquad \qquad \qquad
+ \norm{\lambda \bm{A}_{cs}} \sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}}\\
& \le \left(\norm{\bm{I}_r- \lambda \bm{A}_{cc}}+ \norm{\lambda \bm{A}_{sc}} + \norm{\lambda \bm{A}_{cs}}\right) \cdot \\
\qquad \quad \( 1 \vee \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}} \vee \sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}}\),
\end{align*}
where the first inequality applies the following properties
\begin{align} \label{neq: groupbound}
\norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r}_{(\infty,2)}&=1,\,
\norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}} \bm{V}_r}_{(\infty,2)}=1,\nonumber\\
\norm{\left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{U}_r'}_{(\infty,2)}&\le \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}},\nonumber\\
\norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}}\bm{V}_r'}_{(\infty,2)} &\le \sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}},
\end{align}
which are obtained by standard calculation \cite{eftekhari2018weighted}.
3) \textbf{The bound of $\norm{\bm{W}_0}_{\mu(\infty,2)}$}. We recall the definition of $\norm{\bm{W}_0}_{\mu(\infty,2)}$ as follows
\begin{multline*}
\norm{\bm{W}_0}_{\mu(\infty,2)}\\= \norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{W}_0} _{(\infty,2)}\vee \norm{\left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}} \bm{W}_0^T} _{(\infty,2)}.
\end{multline*}
Now we bound $\norm{ \left( r\bm{M}/{n}\right)^{-{1}/{2}} \bm{W}_0} _{(\infty,2)}$ first
\begin{align*}
&\norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{W}_0} _{(\infty,2)}\\
&=\left \Vert \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{B}_{L} \bm{C}_L^T \left[\begin{array}{ccc}
\bm{I}_r- \lambda \bm{A}_{cc} & \lambda \bm{A}_{cs} & \\
\lambda \bm{A}_{sc} & \bm{0}_r &\\
& &\bm{0}_{n-2r}
\end{array}\right]
\right \Vert_{(\infty,2)},
\end{align*}
where the above equality uses the rotational invariance.
Using triangle inequality yields
\begin{align*}
&\norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{W}_0} _{(\infty,2)}\\
&\le \quad\norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}}
\bm{U}_r \cdot (\bm{I}_r- \lambda \bm{A}_{cc})}_{(\infty,2)} \\
&\quad + \norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}}
\bm{U}_r' \cdot \lambda \bm{A}_{sc}}_{(\infty,2)}\\
&\quad + \norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}}
\bm{U}_r \cdot \lambda \bm{A}_{cs}}_{(\infty,2)}.
\end{align*}
By using \eqref{neq: bound_infty_2} and \eqref{neq: groupbound}, we obtain
\begin{align*}
&\norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}} \bm{W}_0} _{(\infty,2)}\\
&\le \quad \norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}}
\bm{U}_r}_{(\infty,2)} \norm{\bm{I}_r- \lambda \bm{A}_{cc}} \\
&\quad+ \norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}}
\bm{U}_r'}_{(\infty,2)} \norm{ \lambda \bm{A}_{sc}}\\
&\quad + \norm{ \left(\frac{r \bm{M}}{n}\right)^{-\frac{1}{2}}
\bm{U}_r}_{(\infty,2)} \norm{ \lambda \bm{A}_{cs}}\\
&\le \quad \norm{\bm{I}_r- \lambda \bm{A}_{cc}} + \norm{ \lambda \bm{A}_{sc}} \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}} + \norm{ \lambda \bm{A}_{cs}}.
\end{align*}
Similarly, we have the other bound
\begin{multline*}
\norm{ \left(\frac{r \bm{N}}{n}\right)^{-\frac{1}{2}} \bm{W}^T_0} _{(\infty,2)} \le
\norm{\bm{I}_r- \lambda \bm{A}_{cc}}\\
+ \norm{ \lambda \bm{A}_{sc}}
+ \norm{ \lambda \bm{A}_{cs}}\sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}}.
\end{multline*}
Therefore, we obtain
\begin{align*}
\norm{\bm{W}_0}_{\mu(\infty,2)} & \le \left(\norm{\bm{I}_r- \lambda \bm{A}_{cc}}+ \norm{\lambda \bm{A}_{sc}} + \norm{\lambda \bm{A}_{cs}}\right) \cdot \\
\qquad \qquad \( 1 \vee \sqrt{2 \max_i\frac{\breve{\mu}_{i}}{\mu_{i}}} \vee \sqrt{2 \max_j\frac{\breve{\nu}_{j}}{\nu_{j}}}\).
\end{align*}
4) \textbf{The new expression of $\norm{\lambda\mathcal{P}_{\mathcal{T}^{\bot}}(\bm{\Phi})}$}. By applying rotational invariance, we obtain
\begin{align*}
&\norm{ \mathcal{P}_{\mathcal{T}^{\bot}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)}
=\norm{\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r-\mathcal{P}_{\mathcal{T}}(\widetilde{\bm{U}}_r\widetilde{\bm{V}}^T_r)}\\
&=\left\Vert\bm{B}_{L} \bm{C}_L^T \left[\begin{array}{ccc}
\bm{A}_{cc} & -\bm{A}_{cs} & \\
-\bm{A}_{sc} & \bm{A}_{ss} &\\
& &\bm{0}_{n-2r}
\end{array}\right] \bm{C}_R\bm{B}^T_{R} \right.
\\
&\quad\left.-
\bm{B}_{L} \bm{C}_L^T \left[\begin{array}{ccc}
\bm{A}_{cc} & -\bm{A}_{cs} & \\
-\bm{A}_{sc} & \bm{0}_r &\\
& &\bm{0}_{n-2r}
\end{array}\right]
\bm{C}_R\bm{B}^T_{R}\right\Vert\\
&= \norm{
\bm{B}_{L} \bm{C}_L^T \left[\begin{array}{ccc}
\bm{0}_r & \bm{0}_r & \\
\bm{0}_r & \bm{A}_{ss} &\\
& &\bm{0}_{n-2r}
\end{array}\right]
\bm{C}_R\bm{B}^T_{R} } \\
&= \norm{\bm{A}_{ss} }.
\end{align*}
\section{Introduction} \label{sec:intro}
The Lennard-Jones (LJ) potential is one of the centerpieces of Molecular Dynamics (MD) simulations, the key computational method for studying atomistic phenomena across Chemistry, Physics, Biology and Mechanics. Despite the widespread use of MD simulations, a usually overlooked fact is that the classic LJ potential involves a century-old and rather ad hoc repulsion exponent. In this study we demonstrate that this parameter needs to be modified in order to enhance the predictive capabilities of MD simulations.
The structure of the LJ potential depends on the inter-atomic distance $(r)$ and consists of two parts: an attractive term proportional to $-r^{-q}$ that models the Van der Waals forces and a repulsive term proportional to $r^{-p}$ that models the Pauli repulsion. While the exponent $q=6$ has a theoretical justification \cite{Jones:1924}, the $p=12$ exponent has no physical justification and was chosen for simplicity, as the repulsive term can then be computed as the square of the attractive term.
In addition, two scaling parameters $\ps$ and $\pe$ control the shape of the potential.
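A generalized LJ 6-$\pp$ potential can be written so that $\pe$ remains the well depth and $\ps$ the zero-crossing distance. The prefactor in the sketch below is the standard Mie normalization, which reduces to the familiar $4\pe$ when $\pp=12$; this is an illustrative form, not necessarily the exact parameterization used in our simulations.

```python
def lj_6p(r, eps, sigma, p, q=6.0):
    """Mie-type 6-p potential; the prefactor C ensures the minimum value is -eps."""
    C = (p / (p - q)) * (p / q) ** (q / (p - q))
    return C * eps * ((sigma / r) ** p - (sigma / r) ** q)

eps, sigma = 1.0, 3.4  # illustrative argon-like values (reduced units)
# For p = 12 the Mie prefactor is the classical 4.
assert abs((12 / 6) * (12 / 6) ** (6 / 6) - 4.0) < 1e-12
# The minimum sits at r_min = sigma * (p/q)**(1/(p-q)) with value -eps.
p = 12.0
r_min = sigma * (p / 6.0) ** (1.0 / (p - 6.0))
assert abs(lj_6p(r_min, eps, sigma, p) + eps) < 1e-10
```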
The $\pe$ and $\ps$ parameters have been the subject of numerous calibration studies~\cite{Barker:1971,Rahman:1964,Rowley:1975,White:1999} and more recently the subject of Bayesian inference techniques~\cite{Cailliez:2011,Angelikopoulos:2012}. Bayesian Uncertainty Quantification (UQ) employs experimental data and provides a probability distribution of the parameters. The parameter uncertainty can then be propagated by the model in order to obtain robust predictions on a quantity of interest~\cite{Angelikopoulos:2012,Hadjidoukas:2015a}. In cases where the data sets correspond to different inputs for the system, e.g. different thermodynamic conditions, the use of Hierarchical Bayesian (HB) methods provides a stable method for UQ~\cite{Wu:2015,Wu:2016}.
Here we employ a HB method to infer the parameters $(\pe,\ps,\pp)$. In particular, we systematically infer the LJ 6-$\pp$ exponents using experimental data from Radial Distribution Functions (RDFs) and data from quantum simulations of argon. In the past, several values of the exponent $\pp$ of the LJ 6-$\pp$ potential, ranging from 10 to 20, have been considered~\cite{Galliero:2006}. The authors of that study calibrated the potential using pressure and viscosity data for various thermodynamic conditions and concluded that the exponent 12 is the best choice.
Here, we perform HB inference for the LJ 6-$\pp$ parameters of argon based on experimental RDFs of liquid argon and saturated argon vapor for six different temperature and pressure pairs. We present a rigorous model selection process for the LJ $6$-$12$ vs LJ $6$-$\pp$ potentials for each of the cases and perform robust posterior predictions for the diffusion coefficient and density. The $6$-$\pp$ potentials inferred with RDF experimental data are compared with those inferred using data from quantum simulations. We conclude that the 6-$\pp$ potential inferred with RDF data is the only potential that can simulate a wide variety of thermodynamic conditions.
Moreover, we find that the most likely values for the exponent are $\pp \approx 6.5 $, strongly differing from the value of $\pp = 12$ that is commonly used. We remark that our results have been obtained for a simple system. However, we consider that they offer significant evidence that the repulsive exponent should be reconsidered when the parameters of the LJ potential are fitted to data.
\section{Results} \label{sec:uq_res}
We first calibrate the parameters of the classical LJ 6-12 potential. This inference is denoted as $B_{12, R}$. Subsequently we include the exponent of the repulsion term into the parameter set (inference $B_{\pp, R}$) and perform model selection for the LJ 6-12 and LJ 6-$\pp$ force fields. Finally, we perform a HB inference for each of the potentials using the methodology from Ref.~\cite{Wu:2016}.
HB inference allows information to flow between the different data sets leading to more robust and accurate predictions for the model parameters.
These inferences are denoted $HB_{12, R}$ and $HB_{\pp, R}$. We use the experimentally measured RDFs from Ref.~\cite{Eisenstein:1942} as calibration data for these inferences. The RDFs are computed for 6 temperature/pressure pairs $(T, P)$. We denote the pairs as follows: $L_1 = (84.4, 0.8)$, $L_2 = (91.8, 1.8)$, $L_3 = (126.7, 18.3)$, $L_4 = (144.1, 37.7)$, $L_5 = (149.3, 46.8)$, $V = (149.3, 43.8)$, where $L$ stands for ``liquid'' and $V$ stands for ``vapor''. The corresponding datasets (RDFs) are denoted as $R_{Li}$ for liquid and $R_V$ for vapor. Finally, we perform the inference with LJ 6-$\pp$ using quantum dimer energy calculations from Ref.~\cite{Halpern:2010} as data and compare the obtained parameter distributions with those computed from the RDF data. The quantum dimer dataset is denoted as $Q$ and the corresponding inference is denoted as $B_{\pp, Q}$.
\subsection{Calibration of LJ 6-12}
We present results of the parameter calibration for $\pe, \ps, \pn$, while $\pp$ is fixed to 12. We use a sufficiently wide uniform prior for each of the parameters and each of the datasets $R_{Li}$, $R_V$ ($\vp \in [0.05, 3] \times [3, 4] \times [10^{-6}, 1]$). We observe that the values obtained in the calibration process are close to those found in the literature. In Fig.~\ref{fig:params} the MPVs of the parameters along with the 5\%-95\% quantiles are presented (light red). Note that results are presented for only four of the six datasets, since the LJ 6-12 potential failed to simulate liquid argon for conditions $L_1$ and $L_2$.
\begin{figure}[b!]
\centering
\includegraphics[width=\textwidth]{par.jpg}
\caption{Posterior parameter values: MPVs along with 5\%-95\% quantiles obtained in $B_{12, R}$ (light red), $HB_{12, R}$ (dark red), $B_{p,R}$ (light blue), $HB_{p,R}$ (dark blue). Horizontal lines for LJ 6-12 indicate the reference values: Ref.~\cite{Barker:1971} (magenta dashed line), Ref.~\cite{White:1999} (purple solid line), Ref.~\cite{Rahman:1964} (cyan solid line), Ref.~\cite{Rowley:1975} (blue dashed line). Horizontal line for LJ 6-$p$ indicates the MPV for $B_{p,Q}$.}
\label{fig:params}
\end{figure}
A large difference in the values of $\pe$ for liquid and vapor is observed, which implies that one cannot perform the simulations using the same parameters for the two phases. We define the uncertainty in a parameter as the ratio of the 5\%-95\% quantile spread to the most probable value (MPV). The uncertainty in $\pe$ varies from 14\% to 20\% depending on the dataset, while the value of $\ps$ is identified more precisely with uncertainty of 2\%-6\%. This difference can be attributed to the type of data used in the inference process: the location of the RDF peak, which gives the most significant contribution to the sum of squared errors (SSE) inside the log-likelihood, is more sensitive to $\ps$. On the other hand, $\pe$ affects the height of the RDF peak which has a smaller effect on the log-likelihood.
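For concreteness, this uncertainty metric can be evaluated directly from the tabulated posterior summaries. The following Python sketch, which is purely illustrative and not part of the inference pipeline, reproduces the $\pe$ uncertainty for dataset $L_3$ from the values reported in Table~\ref{tab:posterior_12}.

```python
def relative_uncertainty(q5, q95, mpv):
    """Uncertainty metric used in the text: the 5%-95% posterior
    quantile spread divided by the most probable value (MPV)."""
    return (q95 - q5) / mpv

# epsilon for dataset L_3 in B_{12,R} (values from the posterior table):
# (0.323 - 0.284) / 0.286 ~ 0.136, i.e. ~14%, matching the quoted range.
u_eps_L3 = relative_uncertainty(0.284, 0.323, 0.286)
```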
Next, we infer the LJ parameters using the HB approach. We select the prior $\pr{\vp_i}[\vhp]$ by using Bayesian model selection (see~\smref{sec:hb_info} for details).
The values of the LJ parameters are presented in Fig.~\ref{fig:params} (dark red). The MPVs and quantiles of the parameters are almost the same as in $B_{12, R}$, which means that for each dataset $\vy_i \in \{R_{L1}, \ldots, R_{L5}, R_V\}$ no information about the parameters can be extracted from the other datasets.
The full set of the MPVs and distribution quantiles for each dataset $R_{Li}$, $R_V$ is given in Table~\ref{tab:posterior_12}, while the full posterior distributions are shown in Fig.~\ref{fig:pdf_12}.
\subsection{Calibration of LJ 6-$\pp$}
\paragraph{Dataset $R$:} Here we include the LJ exponent $\pp$ in the parameter set $\vp$. As in the LJ 6-12 case, we choose a uniform prior with sufficiently wide bounds ($[0.05, 10] \times [3, 4] \times [6.01, 15] \times [10^{-6}, 1]$). Note that with LJ 6-$\pp$ the sampling algorithm required much wider bounds for $\pe$ than in the LJ 6-12 case. As will be seen later, this is due to a strong correlation between $\pe$ and $\pp$. We again observe the non-transferability of the LJ parameters from liquid to vapor simulations: the values of $\ps$ lie in disjoint domains for $L_i$ and $V$ (Fig.~\ref{fig:params}). Being a more flexible potential, LJ 6-$\pp$ can simulate a wider range of thermodynamic conditions, including $L_1$ and $L_2$, which yield LJ parameter values similar to those obtained for the other three liquid conditions. The 95\% quantile of $\pp$, as well as its MPV, lies below 7.5 for four of the six RDF datasets and below 10 for all of them, which is much smaller than the conventional value of 12. This can be explained by the fact that the repulsion energies predicted by the standard LJ 6-12 potential are very high for the liquids. Configurations with such energies occur with probability close to zero, and the MD simulation is not able to sample them. As in the LJ 6-12 case, the parameter $\pe$ exhibits significant variation within each dataset $R_{Li}$, $R_V$ (uncertainty 110\%-216\%, computed in the same way as for LJ 6-12), while $\pp$ and $\ps$ are well determined, with uncertainties of 5\%-30\% and 1\%-6\%, respectively. In addition, $\pe$ differs substantially among the RDF datasets, but always in accordance with $\pp$: the higher the $\pp$, the lower the $\pe$ (see Fig.~\ref{fig:p_eps_all}).
As with the inference of the LJ 6-12 parameters, we proceed by calibrating the parameters using the HB approach. Details on the selection of the prior can be found in~\smref{sec:hb_info}.
The results of the inference are given in Fig.~\ref{fig:params}. We observe that the uncertainty in $\pe$ is significantly reduced for conditions $L_1$, $L_2$, $L_5$ and $V$, indicating that the inference benefited from the information contained in the two remaining datasets $R_{L3}$ and $R_{L4}$, which have narrow posterior distributions of $\pe$ (Fig.~\ref{fig:pdf_free1}, \ref{fig:pdf_free2}). On the other hand, the uncertainty in $\pe$ for $L_3$ and $L_4$ increases, adjusting to the wide ranges in the other four cases. A similar situation can be seen for $\pp$, where narrow distributions for $L_1$, $L_3$, $L_5$, $V$ shift the posterior values for $L_2$ and $L_4$. The RDF is, as noticed before, very sensitive to changes in $\ps$, which controls the location of the LJ potential well; therefore $\ps$ is well determined for each of the datasets $R_{L_i}$, $R_V$ and extracts almost no information from the other ones.
\paragraph{Dataset $Q$:} To investigate whether the repulsion exponent 12 may be a good model in some cases, we perform a calibration using the calculated quantum dimer scans of argon as data. These data describe the behavior of gaseous argon. We infer the LJ 6-$\pp$ parameters by fitting the LJ potential to the binding energy of the quantum dimer (Fig.~\ref{fig:params}). The resulting value of $\pp$ is much closer to the conventional 12 (Table~\ref{tab:posterior_free}), suggesting that for gaseous argon, unlike for the liquid, LJ 6-12 is a reasonable choice.
The full set of MPVs and distribution quantiles of the LJ parameters for $R_{Li}$, $R_V$, $Q$ is given in Table~\ref{tab:posterior_free}, and the full posterior distributions are plotted in Fig.~\ref{fig:pdf_free1}, \ref{fig:pdf_free2}, \ref{fig:pdf_quantum}. The two potentials are compared in Section~\ref{sec:comparison}.
\subsection{Experimental Data vs Quantum Mechanics Simulations: Model Comparison} \label{sec:comparison}
\paragraph{Model selection} We select between the LJ 6-12 and LJ 6-$\pp$ potentials by applying the Bayesian model selection criterion. We observe that LJ 6-$\pp$ is significantly better than LJ 6-12 for $L_3$ and $L_5$ (Table~\ref{tab:model_selection}). Recalling that LJ 6-12 is not able to produce a liquid for $L_1$ and $L_2$, we conclude that LJ 6-$\pp$ is preferred for four of the six RDF datasets. For $L_4$ the two potentials are indistinguishable by Bayesian model selection. The only dataset on which the LJ 6-12 potential produces better results (3 times more probable than LJ 6-$\pp$) is $V$, the vapor case. This brings us to the conclusion that LJ 6-$\pp$ is either much better than or no worse than LJ 6-12 for all the liquid cases considered. For the vapor case, LJ 6-$\pp$ is over-parametrized compared to LJ 6-12.
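The selection itself reduces to comparing the TMCMC log-evidences reported in Table~\ref{tab:model_selection}. A minimal Python sketch, assuming equal prior probabilities for the two models, shows how the Bayes factors in the last column of that table arise:

```python
import math

def bayes_factor(log_evidence_a, log_evidence_b):
    """Bayes factor of model A over model B from log-evidences,
    assuming equal prior model probabilities."""
    return math.exp(log_evidence_a - log_evidence_b)

# L_3 log-evidences: LJ 6-p (2.81) vs LJ 6-12 (-9.72)
bf_L3 = bayes_factor(2.81, -9.72)  # ~2.7e5: LJ 6-p strongly preferred
```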
\paragraph{LJ potentials} Studying the reasons for LJ 6-$\pp$ being more plausible than LJ 6-12, we take a closer look at the inferred shapes of the potentials. We observe a very stable correlation in the $(\pp, \pe)$ subspace (Fig.~\ref{fig:p_eps_all}) for all the datasets used.
\begin{figure}[bt]
\centering
\includegraphics[width=0.45\textwidth]{p_eps.jpg}
\caption{Posterior samples of $B_{\pp, R}$ projected onto $(\pp, \pe)$ subspace: yellow circles correspond to $V$, green circles correspond to $L_i$ in the temperature increasing order from the lightest to the darkest color, purple circles correspond to $Q$.}
\label{fig:p_eps_all}
\end{figure}
This result is expected, as $\pp$ regulates the strength of the repulsion while $\pe$ alters the strength of both repulsion and attraction simultaneously. The difference between the $R_i$ and $Q$ datasets shows up in the region of the subspace that gets populated. The quantum dimer-based calibration prefers high values of $\pp$, which correspond to the tails of the distributions inferred using the RDF data. We performed a calibration with $L_3$ and narrow prior bounds ($\pp \in [12, 14]$) to see whether this is indeed a tail of the full posterior distribution (Fig.~\ref{fig:pdf_126_narrow}). The narrow posterior values are below $3.95$, while the values of the full posterior start from $4.18$, which explains why the tails of the full distributions for $L_i$, $V$ have a negligible number of samples in the region $\pp \in [12, 14]$ preferred by the $Q$-based inference. As the parameters $\pe$ and $\pp$ are highly correlated, one could expect that the inference would be able to recover values of $\pe$ for LJ 6-12 such that the resulting potential is close to the inferred LJ 6-$\pp$. However, the effect that $\pp$ and $\pe$ have on the LJ potential is not entirely the same. As $\pe$ acts as a scaling factor for the whole potential, it is not able to make the potential less deep and at the same time flat enough to avoid switching to the gas phase (compare simulations with MPVs for $L_5$, $V$ in Fig.~\ref{fig:LJ_12_free}). The same reasoning explains the inability of LJ 6-12 to drive $L_1$ and $L_2$ to the liquid phase: the potential is too repulsive, frustrating the liquid packing, and the system behaves either like a gas or like a solid (note that $L_1$ is close to the argon triple point).
The full set of the inferred LJ 6-12 and LJ 6-$\pp$ potentials is given in Fig.~\ref{fig:LJ_12_free}.
\begin{figure*}[bt]
\includegraphics[width=\textwidth]{lj.jpg}
\caption{Posterior LJ potentials comparison: MPVs along with 5\%-95\% quantiles obtained in $HB_{\pp, R}$ (blue), $HB_{12, R}$ (red), $B_{\pp, Q}$ (purple). Black line with dots: quantum dimer calculations.}
\label{fig:LJ_12_free}
\end{figure*}
\paragraph{Robust posterior prediction} The quality of the predictions made for a QoI different from the one used for the inference quantifies the predictive power of the model (see~\smref{sec:prediction}). Making predictions for new QoIs is a challenging problem in MD, as each quantity depends on the LJ parameters in a specific non-linear fashion. We obtain robust predictions of the RDF, density $\rho$ and diffusion coefficient $D$ of argon by propagating the posterior uncertainty of the LJ parameters into these quantities.
We measure the error $\Delta q$ of the prediction of the scalar quantity $q$ as
\begin{equation}
\Delta q = \frac{1}{N} \sum_{k=1}^N \left( \frac{q_k-r_k}{r_k} \right)^2 \cnt
\end{equation}
where $N \leq 6$ is the number of thermodynamic conditions for which the prediction can be made, $q_k$ is the prediction made using the MPV and $r_k$ is the reference value. The reference values for $\rho$ are experimental measurements taken from Ref.~\cite{Eisenstein:1942}. The reference values for $D$ are computed analytically using the equations from Ref.~\cite{Kestin:1984}; the accuracy of the fit for these computations is 0.7\%. The error of the RDF is computed as the mean squared error between the computed and the experimental RDF, averaged over all thermodynamic conditions. The predictions are compared on three different sets of conditions: 1) the conditions which can be simulated using the MPVs obtained in all three inferences $HB_{12, R}$, $HB_{\pp, R}$, and $B_{\pp,Q}$ ($L_4$, $L_5$); 2) the conditions which can be simulated using the MPVs obtained in the inferences $HB_{12, R}$ and $HB_{\pp, R}$ ($L_3-L_5$, $V$); 3) the conditions which can be simulated using the MPVs obtained in the inference $HB_{\pp, R}$ ($L_1-L_5$, $V$).
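The error metric above amounts to a mean squared relative error over the admissible conditions; a minimal Python sketch (with placeholder values, not the paper's predictions) reads:

```python
import numpy as np

def prediction_error(predictions, references):
    """Delta q: mean squared relative error over the N thermodynamic
    conditions for which the prediction can be made."""
    q = np.asarray(predictions, dtype=float)
    r = np.asarray(references, dtype=float)
    return np.mean(((q - r) / r) ** 2)

# Two hypothetical conditions with +/-10% relative error each -> 0.01
err = prediction_error([1.1, 0.9], [1.0, 1.0])
```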
The predictions made using the results of $HB_{\pp, R}$ are the most accurate for all the QoIs considered and all the sets of conditions, except for one case where $HB_{12,R}$ gives a better result (see Table~\ref{tab:fp}). On the other hand, the predictions made using the results of $B_{\pp,Q}$ are the least accurate for all the QoIs. Additionally, the inferences $HB_{12,R}$ and $B_{\pp,Q}$ result in LJ potentials which cannot be used to simulate all the thermodynamic conditions. This brings us to the conclusions that 1) $HB_{\pp,R}$ produces a better LJ model than $HB_{12,R}$, 2) $B_{\pp,Q}$ does not result in a good model for liquid argon or saturated argon vapor.
We note that the values of $D$ differ by an order of magnitude for liquid and vapor which explains the huge deterioration of the predictions on the sets of conditions that include $V$.
The MPVs of $D$, $\rho$ and the RDF, along with the corresponding quantiles, are presented in Fig.~\ref{fig:fp}.
\begin{figure*}[tp]
\centering
\includegraphics[width=\textwidth]{fp.jpg}
\caption{Robust posterior predictions: MPVs along with 5\%-95\% quantiles obtained in $HB_{\pp, R}$ (blue), $HB_{12, R}$ (red), MPV obtained in $B_{\pp, Q}$ (purple). Black line with dots: experimental data for RDF. Grey bars: experimental data for $\rho$, analytically computed values for $D$.}
\label{fig:fp}
\end{figure*}
\section{Discussion} \label{sec:disc}
We examine the classical Lennard-Jones 6-12 potential using hierarchical Bayesian inference with data from experiments and quantum mechanics simulations.
Our results show that the value ($p=12$) of the repulsive exponent needs to be revised and, in the case of argon, replaced by a smaller value ($p \approx 6.5$).
Notably, we find that the calibration of the repulsive exponent is more accurate and robust when using experimental data rather than data from quantum mechanics simulations. The results indicate that parameters inferred from the quantum dimer calculations are not predictive for the liquid and saturated vapor conditions, and that smaller values of the exponent $\pp$ ($\pp \in (6,9)$) in the Lennard-Jones potential provide better predictions for the RDF (Fig.~\ref{fig:fp}), density and diffusion data (Fig.~\ref{fig:fp}) than the conventional $\pp=12$ or the $\pp=12.7$ inferred from $Q$. These new LJ exponents make it possible to simulate a larger variety of thermodynamic conditions, but they cannot be transferred from liquid to gas using this simplified model (Fig.~\ref{fig:params}). We have also examined whether the smaller exponent allows for bigger time steps in MD simulations; however, it appears that the exponent is not a critical factor for the stability of the system. We observed similar execution times for the simulations with the MPVs of LJ 6-12 and LJ 6-$\pp$. At the same time, the use of surrogates resulted in a speed-up of 28\% for the LJ 6-12 case. For the LJ 6-$\pp$ case, the unidentifiable manifold in the parameter space $(\pp, \pe)$ did not allow for an efficient kriging approximation. Our results contradict the conclusion of Ref.~\cite{Galliero:2006}, where LJ potentials with $\pp=10, 12, 14, 16, 18, 20$ were fit to viscosity and pressure data, and the potential with $\pp=12$ showed the best agreement across different thermodynamic conditions. This mismatch can be explained by the fact that different data were used and that the exponents below 10, which appear to be the best according to the results of the current study, were not tested in Ref.~\cite{Galliero:2006}.
The present results suggest that experimental data are more suitable for robust predictions in calibrated MD potentials and suggest that similar studies are necessary across all fields that employ MD simulations.
\section{Methods} \label{sec:methods}
\subsection{Molecular Dynamics} \label{sec:sim}
We perform MD simulations of argon using the LAMMPS package~\cite{lammps}. The argon atoms are modeled as spheres interacting via the LJ 6-$\pp$ potential:
\begin{equation}
V_{LJ}(r; \pe, \ps, \pp) = 4\pe \left( \left( \frac{\ps}{r} \right)^{\pp} - \left( \frac{\ps}{r} \right)^6 \right) \cnt
\end{equation}
where $r$ is the distance between the interacting atoms and $\pp$ is the repulsion exponent, usually taken to be 12. The parameters $\pe$, $\ps$ and $\pp$ are to be chosen according to the available measurements. As the Lennard-Jones interactions decay quickly with distance, an additional computational parameter $r_c$ is usually introduced, defining a cut-off distance beyond which the potential is set to zero. Here, we set $r_c = 2.5\ps$. The thermodynamic state of the system is defined by the temperature and the pressure of the argon atoms. We ensure that the argon is in the liquid/vapor state by checking the self-diffusion coefficient and the density. The simulation starts with energy minimization followed by $5 \times 10^{6}$ steps of 2 fs in an NPT ensemble. The RDF is then computed in a production run consisting of $10^{5}$ NVE integration steps of 2 fs each. The boundary conditions are periodic in each direction, and the domain contains 666 argon atoms. The self-diffusion coefficient is calculated via the mean-squared displacement of the atoms, and the RDF is discretized using 100 bins. The units used in the current work are given in Table~\ref{tab:units}.
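For reference, the truncated LJ 6-$\pp$ pair potential with the cut-off $r_c = 2.5\ps$ used above can be written compactly in Python (an illustrative sketch of the functional form; the production simulations evaluate it inside LAMMPS):

```python
import numpy as np

def lj_6p(r, eps, sigma, p=12.0, rc_factor=2.5):
    """Truncated LJ 6-p pair potential:
    V(r) = 4*eps*((sigma/r)**p - (sigma/r)**6), zero beyond r_c = rc_factor*sigma."""
    r = np.asarray(r, dtype=float)
    sr = sigma / r
    v = 4.0 * eps * (sr ** p - sr ** 6)
    return np.where(r < rc_factor * sigma, v, 0.0)

# Sanity checks for p = 12: V(sigma) = 0, and the well depth at
# r = 2**(1/6)*sigma equals -eps.
```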
\subsection{Bayesian Uncertainty Quantification} \label{sec:uq_th}
This section presents a brief description of the Bayesian inference theory. The details are given in~\smref{sec:uq_th_si}. Here and further in the text small bold letters represent vectors while big bold letters represent matrices. Each random variable $\vec{\xi}$ is assumed to be continuous with a probability density function (PDF) denoted as $\pr{\vec{\xi}}$.
Let $f(\vec{x}; \vp) \in \mathbb{R}^{M}$ denote the output, or a quantity of interest (QoI), of a computational model with input $\vec{x} \in \mathbb{R}^{N_x}$ and parameters $\vp = (\p_1, \ldots, \p_{N_{\p}}) \in \mathbb{R}^{N_{\p}}$. Let also $\vy \in \mathbb{R}^{N_{\y}}$ be a vector of experimental data corresponding to the QoI $f$ and input parameters $\vec{x}$. The experimental data are linked with the computational model through the likelihood function, $\pr{\vy}[\vp, \vec{x}]$. A usual model assumption for the likelihood function involves a Gaussian,
\begin{equation} \label{eq:likelihood}
\pr{\vy}[\vp, \vec{x}] = \mathcal{N}(\vy \,|\, f(\vec{x}; \vp), \C) \cnt
\end{equation}
where $\C$ is a covariance matrix that may be a function of $\vp$. To simplify the notation, the conditioning on $\vec{x}$ is omitted below. Prior information on the parameters $\vp$ is encoded in the probability distribution with PDF $\pr{\vp}[\M]$. We assume $\C = \pn^2 \vec{I}$, where $\vec{I}$ is the identity matrix in $\mathbb{R}^{N_{\y} \times N_{\y}}$ and $\pn \in \mathbb{R}$ is \textit{a priori} unknown. In this work, we infer the parameters of the LJ potential together with the parameter of the covariance matrix: $\vp = (\pe, \ps, \pn)$ or $\vp = (\pe, \ps, \pp, \pn)$, depending on whether the exponent $\pp$ is being inferred.
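Under this assumption the log-likelihood takes a simple closed form in terms of the sum of squared errors; a minimal Python sketch (here $f$ would be, e.g., the simulated RDF evaluated at the experimental abscissae):

```python
import numpy as np

def log_likelihood(y, f_model, sigma_n):
    """Gaussian log-likelihood with covariance C = sigma_n**2 * I:
    log N(y | f, C) = -n/2 * log(2*pi*sigma_n**2) - SSE / (2*sigma_n**2)."""
    y = np.asarray(y, dtype=float)
    f = np.asarray(f_model, dtype=float)
    sse = np.sum((y - f) ** 2)  # sum of squared errors
    return (-0.5 * y.size * np.log(2.0 * np.pi * sigma_n ** 2)
            - sse / (2.0 * sigma_n ** 2))
```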
Bayes' theorem provides a tool for the inference of the parameters $\vp$ conditioned on the observations $\vy$,
\begin{equation}
\pr{\vp}[\vy, \M] = \frac{ \pr{\vy}[\vp, \M] \, \pr{\vp}[\M] }{ \pr{\vy}[\M] } \cnt
\end{equation}
where $\pr{\vy}[\M] = \int \pr{\vy}[\vp, \M] \, \pr{\vp}[\M] d\vp$ is a normalization constant and $\M$ stands for ``model'', which is a set of the assumptions regarding the likelihood and the prior. We remark that the denominator $\pr{\vy}[\M]$, called model evidence, is used for model selection (see~\smref{sec:model:selection}).
In certain cases the data may correspond to different input variables $\vec{x}$ of the model; one example is the set of pressure and temperature pairs used in this work. Let $\vvy = \{\vy_1, \ldots, \vy_N\}$ be the set of all provided data with $\vy_i \in \mathbb{R}^{N_{\y_i}}$, where each $\vy_i$ corresponds to a different input $\vec{x}_i$. In this case one wishes to infer different parameters, $\vp_i \in \mathbb{R}^{N_\p}$, for each dataset $\vy_i$. Here, we assume that the parameters $\vp_i$ depend on hyper-parameters $\vhp \in \mathbb{R}^{N_\hp}$, which encode the variability of $\vp_i$ between the datasets and must also be inferred.
For the sampling of the distributions we use the Transitional Markov Chain Monte Carlo (TMCMC) algorithm~\cite{Ching:2007} (see~\smref{sec:sampling:posterior}).
We perform all the inferences using the open-source library $\Pi$4U~\cite{Hadjidoukas:2015a} on the Brutus cluster of ETH Zurich and the Piz Daint cluster of the Swiss National Supercomputing Center (CSCS). We use 2000 samples per TMCMC stage for LJ 6-12 and 4000 samples per stage for LJ 6-$\pp$. The parallelization is done with MPI and the internal worker threads of the $\Pi$4U library. The task-based parallelism and load-balancing mechanisms of $\Pi$4U provide the flexibility needed for running MD simulations with very different execution times within TMCMC.
In order to reduce the computational cost of the simulations, we apply kriging surrogates following the methodology proposed in Ref.~\cite{Angelikopoulos:2015}. Namely, for each Markov chain leader we build a kriging interpolating surface using the samples from the leader's bounding box, whose size we set to a quarter of the current domain. The surrogate value is rejected if the kriging error is greater than 5\% of the predicted value. In addition, we reject kriging predictions that fall outside the 5\%-95\% quantile range of all the values obtained from MD simulations.
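The surrogate logic can be illustrated with a minimal simple-kriging sketch; the squared-exponential kernel and its hyper-parameters below are illustrative assumptions, not the settings of Ref.~\cite{Angelikopoulos:2015}:

```python
import numpy as np

def kriging_predict(X, y, x_new, length=0.5, noise=1e-8):
    """Simple-kriging prediction with a squared-exponential kernel.
    Returns mean and standard deviation at x_new; following the 5% rule
    above, the surrogate would be rejected when std > 0.05 * |mean|."""
    def k(a, b):
        d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + noise * np.eye(len(X))     # regularized Gram matrix
    kx = k(X, x_new[None, :])[:, 0]
    mean = kx @ np.linalg.solve(K, y)
    var = 1.0 - kx @ np.linalg.solve(K, kx)  # prior variance k(x, x) = 1
    return mean, np.sqrt(max(var, 0.0))
```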
\begin{table*}[hbtp]
\caption{Units used in this work.}
\label{tab:units}
\centering
\begin{tabular}{l l l}
\hline \hline
name & notation & \textit{real} \\
\hline
Temperature & $T$ & K \\
Pressure & $P$ & atm \\
Distance & $r$ & \AA \\
LJ well depth & $\pe$ & kcal/mol \\
LJ well location & $\ps$ & \AA \\
LJ repulsion exponent & $\pp$ & -- \\
RDF model error & $\pn$ & -- \\
Density & $\rho$ & g/cm$^3$ \\
Diffusion coefficient & $D$ & cm$^2$/s \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[hbtp]
\caption{LJ parameters for argon used in literature. The last row shows the data used for fitting. Notations: $T$ (temperature), $P$ (pressure), $\rho$ (density), $B$ (second virial coefficient), $E$ (energy), $TP$ (gas-liquid transition pressure), $L$ (latent heat of evaporation).}
\label{tab:LJ}
\centering
\begin{tabular}{l l l l l}
\hline \hline
& Ref.~\cite{Rahman:1964} & Ref.~\cite{Barker:1971} & Ref.~\cite{Rowley:1975} & Ref.~\cite{White:1999} \\
\hline
$\pe$ & 0.2385 & 0.2824 & 0.2381 & 0.2498 \\
$\ps$ & 3.4000 & 3.3605 & 3.4050 & 3.3450 \\
$T$ & 94.4 & 86.64 - 168.86 & 137.77 & 88 - 127 \\
$\rho$ & 1.374 & 0.435 - 1.479 & 0.156, 0.972 & 0.283 - 3.897 \\
Phase & liquid & gas, liquid, solid & gas + liquid & gas + liquid \\
Data & RDF & $P$, $E$ & $TP$, $\rho$, $L$ & $P$, $B$ \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[hbtp]
\caption{Posterior values of each parameter $\p \in \{\pe, \ps, \pn\}$ of LJ 6-12: MPV $b(\p)$ and 5\%-95\% quantiles $q(\p)$.}
\label{tab:posterior_12}
\centering
\begin{tabular}{l l l l l l l l}
\hline \hline
& & $b(\pe)$ & $q(\pe)$ & $b(\ps)$ & $q(\ps)$ & $b(\pn)$ & $q(\pn)$ \\
\hline
$B_{12, R}$ & $L_3$ & 0.286 & [0.284, 0.323] & 3.305 & [3.250, 3.332] & 0.168 & [0.158, 0.314] \\
& $L_4$ & 0.255 & [0.254, 0.266] & 3.314 & [3.301, 3.353] & 0.089 & [0.080, 0.140] \\
& $L_5$ & 0.263 & [0.255, 0.309] & 3.266 & [3.110, 3.536] & 0.317 & [0.292, 0.586] \\
& $V$ & 0.144 & [0.083, 0.184] & 3.109 & [3.029, 3.213] & 0.147 & [0.119, 0.304] \\
\hline
$HB_{12, R}$ & $L_3$ & 0.283 & [0.283, 0.303] & 3.300 & [3.226, 3.327] & 0.177 & [0.156, 0.310] \\
& $L_4$ & 0.253 & [0.254, 0.267] & 3.333 & [3.301, 3.367] & 0.073 & [0.073, 0.147] \\
& $L_5$ & 0.262 & [0.256, 0.296] & 3.269 & [3.108, 3.523] & 0.337 & [0.301, 0.579] \\
& $V$ & 0.190 & [0.140, 0.242] & 3.075 & [3.001, 3.229] & 0.180 & [0.192, 0.385] \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[hbtp]
\caption{Posterior values of each parameter $\p \in \{\pe, \ps, \pp, \pn\}$ of LJ 6-$\pp$: MPV $b(\p)$ and 5\%-95\% quantiles $q(\p)$.}
\label{tab:posterior_free}
\centering
\begin{tabular}{l l l l l l l l l l}
\hline \hline
& & $b(\pe)$ & $q(\pe)$ & $b(\ps)$ & $q(\ps)$ & $b(\pp)$ & $q(\pp)$ & $b(\pn)$ & $q(\pn)$ \\
\hline
$B_{\pp, R}$ & $L_1$ & 2.286 & [2.193, 4.794] & 3.431 & [3.422, 3.464] & 6.644 & [6.292, 6.647] & 0.157 & [0.185, 0.302] \\
& $L_2$ & 3.134 & [0.712, 4.416] & 3.369 & [3.319, 3.428] & 6.370 & [6.293, 8.313] & 0.222 & [0.215, 0.518] \\
& $L_3$ & 1.250 & [0.914, 2.700] & 3.322 & [3.301, 3.395] & 6.715 & [6.332, 7.068] & 0.098 & [0.080, 0.159] \\
& $L_4$ & 0.337 & [0.322, 1.060] & 3.325 & [3.325, 3.398] & 9.501 & [6.900, 9.842] & 0.057 & [0.066, 0.153] \\
& $L_5$ & 5.928 & [0.794, 7.318] & 3.328 & [3.190, 3.395] & 6.116 & [6.102, 7.126] & 0.164 & [0.168, 0.326] \\
& $V$ & 1.065 & [0.625, 3.611] & 3.117 & [3.037, 3.122] & 6.604 & [6.151, 6.923] & 0.100 & [0.099, 0.190] \\
\hline
$HB_{\pp, R}$ & $L_1$ & 4.561 & [3.890, 4.626] & 3.454 & [3.408, 3.471] & 6.302 & [6.296, 6.366] & 0.422 & [0.450, 0.602] \\
& $L_2$ & 2.081 & [1.333, 3.515] & 3.387 & [3.335, 3.424] & 6.565 & [6.340, 7.051] & 0.211 & [0.178, 0.351] \\
& $L_3$ & 2.506 & [0.941, 4.325] & 3.345 & [3.295, 3.382] & 6.324 & [6.198, 6.996] & 0.093 & [0.081, 0.186] \\
& $L_4$ & 2.588 & [0.892, 3.676] & 3.403 & [3.358, 3.432] & 6.339 & [6.226, 7.063] & 0.082 & [0.075, 0.146] \\
& $L_5$ & 2.055 & [0.992, 4.281] & 3.252 & [3.194, 3.382] & 6.364 & [6.169, 6.837] & 0.183 & [0.159, 0.316] \\
& $V$ & 1.371 & [0.582, 1.896] & 3.129 & [3.082, 3.194] & 6.422 & [6.277, 7.025] & 0.111 & [0.103, 0.233] \\
\hline
$B_{\pp, Q}$ & & 0.252 & [0.239, 0.261] & 3.370 & [3.367, 3.375] & 12.703 & [12.333, 13.309] & 0.006 & [0.006, 0.010] \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[hbtp]
\caption{Log-evidences $E_{12,R}$ ($E_{p,R}$) for $B_{12,R}$ ($B_{p,R}$).}
\label{tab:model_selection}
\centering
\begin{tabular}{l r r l}
\hline \hline
& $E_{12,R}$ & $E_{p,R}$ & $e^{E_{p,R}-E_{12,R}}$ \\
\hline
$L_1$ & -- & -7.05 & -- \\
$L_2$ & -- & -14.8 & -- \\
$L_3$ & -9.72 & 2.81 & 2.74$\times$10$^5$ \\
$L_4$ & 5.10 & 5.18 & 1.09 \\
$L_5$ & -15.8 & -8.76 & 1.18$\times$10$^3$ \\
$V$ & -3.83 & -4.94 & 3.31$\times$10$^{-1}$ \\
\hline \hline
\end{tabular}
\end{table*}
\begin{table*}[hbtp]
\caption{Errors of robust posterior predictions of RDF, density and diffusion coefficient using LJ 6-12 and LJ 6-$\pp$. We denote $S_1 = \{L_4, L_5\}$ (all inferences produce the correct argon phase), $S_2 = \{L_3 - L_5, V\}$ ($B_{\pp,Q}$ produces wrong phase), $S_3 = \{L_1 - L_5, V\}$ ($B_{\pp,Q}$ and $HB_{12,R}$ produce wrong phase).}
\label{tab:fp}
\centering
\begin{tabular}{l c c c c c c c c c}
\hline \hline
\multicolumn{1}{c|}{} & \multicolumn{3}{c|}{$\Delta$ RDF} & \multicolumn{3}{c|}{$\Delta \rho$} & \multicolumn{3}{c}{$\Delta D$} \\
\hline
\multicolumn{1}{c|}{} & $S_1$ & $S_2$ & \multicolumn{1}{c|}{$S_3$} & $S_1$ & $S_2$ & \multicolumn{1}{c|}{$S_3$} & $S_1$ & $S_2$ & $S_3$ \\
\hline
$B_{\pp, Q}$ & 0.087 & -- & -- & 0.118 & -- & -- & 0.136 & -- & -- \\
$HB_{12, R}$ & 0.071 & 0.050 & -- & 0.029 & 0.108 & -- & 0.009 & 1.573 & -- \\
$HB_{\pp, R}$ & 0.016 & 0.024 & 0.027 & 0.011 & 0.068 & 0.049 & 0.043 & 1.230 & 0.898 \\
\hline \hline
\end{tabular}
\end{table*}
\section{Acknowledgments} \label{sec:ack}
We would like to acknowledge helpful discussions with Dr. S. Litvinov, Dr. J. Zavadlav and Dr. E. Cruz-Chu. We would like to acknowledge the computational time at Swiss National Supercomputing Center (CSCS) under the project s659. We gratefully acknowledge support from the European Research Council (ERC) Advanced Investigator Award (No. 2-73985-14).
\section{Author Contributions Statement} \label{sec:contrib}
L.K. ran the simulations, prepared the figures and tables, wrote the Results and Molecular Dynamics sections of the manuscript. G.A. prepared the single process HB code, the Supporting Information and the Bayesian Uncertainty Quantification text. P.A. prepared the LAMMPS script and guided the MD part of the research. P.K. wrote the Abstract, the Introduction and the Discussion sections. P.C. wrote the high performance computing implementation of the HB code and assisted in running the simulations. C.P. guided the Bayesian part of the research. All authors reviewed the manuscript.
\section{Additional Information}
\subsection{Competing financial interests}
The authors declare no competing financial interests.
\subsection{Availability of materials and data}
We use an open-source framework $\Pi$4U available at \verb|http://www.cse-lab.ethz.ch/software/Pi4U|.
The data we used come from Refs.~\cite{Eisenstein:1942,Halpern:2010}.
\input{bbl.tex}
\input{si.tex}
\end{document}
\section{Uncertainty quantification} \label{sec:uq_th_si}
This section provides a detailed description of the UQ theory used in the current work.
\subsection{Sampling the posterior distribution} \label{sec:sampling:posterior}
For the posterior distribution $\pr{\vp}[\vy, \M]$, which is known up to the normalizing constant $\pr{\vy}[\M]$, Markov Chain Monte Carlo (MCMC) methods can be used to efficiently generate samples that quantify the uncertainty in $\vp$~\cite{Hastings:1970,Gilks:2005,Gamerman:2006,Geyer:1992,Peskun:1973}. In our work we use the Transitional Markov Chain Monte Carlo (TMCMC) algorithm~\cite{Ching:2007} with a slight modification of the MCMC proposal covariance, introduced to overcome the low acceptance rates we observed in several runs of TMCMC. Instead of setting the covariance matrix to the scaled sample covariance of the previous generation, we use the landscape around the chain leaders to construct local covariance matrices. The sampling algorithm automatically increases the radius of the neighborhood, starting from 10\% of the domain size, until the local covariance matrix is positive definite.
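A stripped-down TMCMC illustrating the annealing, reweighting, resampling and move steps might look as follows. This is a sketch only: it assumes a standard-normal prior, chooses the tempering increment by bisection on the coefficient of variation of the weights, and uses a single Metropolis step with a global scaled sample covariance, rather than the local covariances of our modified scheme or any of the $\Pi$4U machinery.

```python
import numpy as np

def tmcmc(log_lik, n=2000, d=1, scale=0.2, seed=0):
    """Minimal TMCMC sketch: anneal prior * likelihood**beta from
    beta = 0 to 1, returning posterior samples and the log-evidence."""
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal((n, d))           # samples from the prior
    ll = np.array([log_lik(t) for t in theta])
    beta, log_ev = 0.0, 0.0
    while beta < 1.0:
        lo, hi = beta, 1.0                        # bisect the next beta so the
        for _ in range(30):                       # weight COV stays near 1
            mid = 0.5 * (lo + hi)
            w = np.exp((mid - beta) * (ll - ll.max()))
            lo, hi = (mid, hi) if w.std() / w.mean() < 1.0 else (lo, mid)
        new_beta = 1.0 if hi == 1.0 else 0.5 * (lo + hi)
        w = np.exp((new_beta - beta) * (ll - ll.max()))
        log_ev += np.log(w.mean()) + (new_beta - beta) * ll.max()
        idx = rng.choice(n, size=n, p=w / w.sum())    # resample the leaders
        theta, ll, beta = theta[idx], ll[idx], new_beta
        C = scale * np.atleast_2d(np.cov(theta.T)) + 1e-12 * np.eye(d)
        for k in range(n):                            # one Metropolis move each
            prop = rng.multivariate_normal(theta[k], C)
            llp = log_lik(prop)
            a = beta * (llp - ll[k]) - 0.5 * (prop @ prop - theta[k] @ theta[k])
            if np.log(rng.random()) < a:
                theta[k], ll[k] = prop, llp
    return theta, log_ev
```

With a one-dimensional Gaussian toy likelihood the sampler recovers both the posterior moments and the model evidence used for Bayesian model selection.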
\subsection{Robust posterior prediction} \label{sec:prediction}
The uncertainty in the model parameters can be further propagated to the uncertainty in the QoI $\vec{y}$ produced by $f(\vec{x}; \vp)$. Under the assumption of equation~\eqref{eq:likelihood} the probability of the model prediction conditioned on the parameters $\vp$ is given by $\pr{\vec{y}}[\vp, \M] = \mathcal{N}(\vec{y} \,|\, f(\vec{x}; \vp), \C)$. The probability of the model prediction conditioned on the observations $\vy$ is known as the robust posterior prediction and is given by~\cite{Angelikopoulos:2012,Hadjidoukas:2015a}:
\begin{equation}
\pr{\vec{y}}[\vy, \M] = \int \pr{\vec{y}, \vp}[\vy, \M] \, d\vp
= \int \pr{\vec{y}}[\vp, \M] \, \pr{\vp}[\vy, \M] \, d\vp
\approx \frac{1}{N_s} \sum_{k=1}^{N_s} \pr{\vec{y}}[\vp^{(k)}, \M] \cnt
\end{equation}
where $\vp^{(k)} \sim \pr{\vp}[\vy, \M]$ and $N_s$ is sufficiently large.
When a new QoI $\vec{z}$ produced by $g(\vec{x}; \vp)$ is considered, $\C$ is unknown and only the parametric uncertainty is propagated into $g(\vec{x}; \vp)$. Namely, one should estimate the density of the samples $\{ g(\vec{x}; \vp^{(k)}) \}_{k=1}^{N_s}$, where $\vp^{(k)} \sim \pr{\vp}[\vy, \M]$ and $N_s$ is sufficiently large.
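In practice this amounts to pushing the posterior parameter samples through $g$ and summarizing the resulting sample of QoI values, e.g. with quantiles as in the predictions of the main text. A minimal sketch (the QoI passed in would be an MD observable such as the density or the diffusion coefficient; the linear map in the test below is a placeholder):

```python
import numpy as np

def propagate_posterior(g, posterior_samples, quantiles=(5.0, 50.0, 95.0)):
    """Push posterior parameter samples through a new QoI g and summarize
    the parametric uncertainty with quantiles of the resulting sample."""
    z = np.array([g(p) for p in posterior_samples])
    return np.percentile(z, quantiles)
```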
\subsection{Model selection} \label{sec:model:selection}
The Bayesian framework allows one to select the probabilistic model which best fits the data. The criterion for the model selection comes from the Bayes' theorem, which computes the probability of a probabilistic model $\M$ as
\begin{equation}
\pr{\M}[\vy] = \frac{ \pr{\vy}[\M] \, \pr{\M} }{ \pr{\vy} } \cnt
\end{equation}
where $\pr{\M}$ is the prior probability of the model $\M$, and the evidence $\pr{\vy}[\M]$ of model $\M$ is computed as a by-product of TMCMC.
\subsection{Hierarchical Bayesian models}
In our work we follow the methodology developed in Ref.~\cite{Wu:2016}. We assume that the data come split into $N$ different datasets, $\vvy=\{\vy_1,\ldots, \vy_N\}$, and that the likelihood in the probabilistic model $\M_i$ is $\mathcal{N}(\vy_i \,|\, f(\vec{x}; \vp_i), \C_i)$. We assume that the probability of $\vp_i$ depends on a hyper-parameter $\vhp \in \mathbb{R}^{N_\hp}$ and is given by a PDF $\pr{\vp}[\vhp, \M]$, where $\M$ corresponds to the graph describing the relations between $\vhp$, $\vp_i$ and $\vy_i$; see Fig.~\ref{fig:dag}.
\begin{figure*}[hbtp]
\centering
\includegraphics[width=\textwidth]{dag.jpg}
\caption{Bayesian networks: (a) simple network, (b) HB network, (c) network for the $i$-th dataset.}
\label{fig:dag}
\end{figure*}
Our goal is to obtain samples from the posterior distribution, $\pr{\vp_i}[\vvy, \M]$, where $\vvy = \{\vy_1, \ldots, \vy_N\}$:
\begin{equation} \label{eq:h:lik:a}
\pr{\vp_i}[\vvy, \M] = \int \pr{\vp_i}[\vhp, \vvy, \M] \, \pr{\vhp}[\vvy, \M] \, d\vhp \stp
\end{equation}
The dependency assumptions from Fig.~\ref{fig:dag} allow the simplification $\pr{\vp_i}[\vhp, \vvy, \M] = \pr{\vp_i}[\vhp, \vy_i, \M]$, and equation~\eqref{eq:h:lik:a} can be rewritten using Bayes' theorem:
\begin{equation} \label{eq:h:lik:b}
\pr{\vp_i}[\vvy, \M] =
\int \frac{ \pr{\vy_i}[\vp_i, \vhp, \M] \, \pr{\vp_i}[\vhp, \M] }{ \pr{\vy_i}[\vhp, \M] } \, \pr{\vhp}[\vvy, \M] \, d\vhp \stp
\end{equation}
Since $\pr{\vy_i}[\vp_i, \vhp, \M] = \pr{\vy_i}[\vp_i, \M]$, equation~\eqref{eq:h:lik:b} simplifies to
\begin{equation}
\pr{\vp_i}[\vvy, \M] =
\pr{\vy_i}[\vp_i, \M] \int \frac{ \pr{\vp_i}[\vhp, \M] }{ \pr{\vy_i}[\vhp, \M] } \, \pr{\vhp}[\vvy, \M] \, d\vhp \stp
\end{equation}
Finally, the posterior distribution (\ref{eq:h:lik:a}) can be approximated as
\begin{equation}
\pr{\vp_i}[\vvy, \M] \approx \frac{ \pr{\vy_i}[\vp_i, \M] }{ N_s } \sum_{k=1}^{N_s} \frac{ \pr{\vp_i}[\vhp^{(k)}, \M] }{ \pr{\vy_i}[\vhp^{(k)}, \M] } \cnt
\end{equation}
where $\vhp^{(k)}\sim \pr{\vhp}[\vvy, \M]$ and $N_s$ is sufficiently large. Thus, in order to obtain samples of $\vp_i$, we first have to sample the probability distribution $\pr{\vhp}[\vvy, \M]$, which, according to Bayes' theorem, is equal to
\begin{equation}
\pr{\vhp}[\vvy, \M] = \frac{ \pr{\vvy}[\vhp, \M] \, \pr{\vhp}[\M] }{ \pr{\vvy}[\M] } \cnt
\end{equation}
where $\pr{\vhp}[\M]$ is the prior PDF on $\vhp$ and $\pr{\vvy}[\M]$ is the normalizing constant. Exploiting the dependency assumptions of Fig.~\ref{fig:dag} we see that
\begin{equation}
\pr{\vvy}[\vhp, \M] = \prod_{i=1}^{N} \pr{\vy_i}[\vhp, \M] \cnt
\end{equation}
and the likelihood of $i$-th dataset can be expressed according to the total probability theorem as
\begin{equation} \label{eq:h:lik:c}
\pr{\vy_i}[\vhp, \M] = \int \pr{\vy_i}[\vp_i, \M] \, \pr{\vp_i}[\vhp, \M] \, d\vp_i \stp
\end{equation}
Here we introduce the model $\M_i$ described in Fig.~\ref{fig:dag}. The posterior distribution of this model will be used as an instrumental density for importance sampling. Under the modeling assumption $\pr{\vy_i}[\vp_i, \M] = \pr{\vy_i}[\vp_i, \M_i]$ (see \cite{Wu:2016}) and using Bayes' theorem, equation~\eqref{eq:h:lik:c} is written as
\begin{equation}
\pr{\vy_i}[\vhp, \M] =
\int \frac{ \pr{\vp_i}[\vy_i, \M_i] \, \pr{\vy_i}[\M_i] }{ \pr{\vp_i}[\M_i] } \, \pr{\vp_i}[\vhp, \M] \, d\vp_i \cnt
\end{equation}
or, equivalently, as
\begin{equation}
\pr{\vy_i}[\vhp, \M] =
\pr{\vy_i}[\M_i] \int \frac{ \pr{\vp_i}[\vhp, \M] }{ \pr{\vp_i}[\M_i] } \, \pr{\vp_i}[\vy_i, \M_i] \, d\vp_i \stp
\end{equation}
Finally, equation~\eqref{eq:h:lik:c} can be approximated as
\begin{equation}
\pr{\vy_i}[\vhp, \M] \approx
\frac{ \pr{\vy_i}[\M_i] }{ N_s } \sum_{k=1}^{N_s} \frac{ \pr{\vp_i^{(k)}}[\vhp, \M] }{ \pr{\vp_i^{(k)}}[\M_i] } \cnt
\end{equation}
where $\vp_i^{(k)} \sim \pr{\vp_i}[\vy_i, \M_i]$ and $N_s$ is sufficiently large. Note that in general $N_s$ can be different for each data set $\vy_i$. The advantage of this approach is that the likelihoods $\pr{\vy_i}[\vp_i, \M_i],\, i=1,\ldots,N$, which are the most expensive part of the computations, are not re-evaluated for each $\vhp$.
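To make the importance-sampling approximation above concrete, the following sketch evaluates $\log \pr{\vy_i}[\vhp, \M]$ for a scalar parameter. All inputs are hypothetical stand-ins: in practice \texttt{theta\_post} and \texttt{log\_evidence\_i} would come from the instrumental TMCMC run on $\M_i$, while the uniform and normal densities play the roles of $\pr{\vp_i}[\M_i]$ and $\pr{\vp_i}[\vhp, \M]$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical outputs of the instrumental TMCMC run on dataset y_i:
# samples theta^(k) ~ p(theta | y_i, M_i) and the log-evidence log p(y_i | M_i).
theta_post = rng.normal(3.0, 0.5, size=5000)
log_evidence_i = -12.3

def log_uniform_pdf(theta, lo=0.0, hi=10.0):
    """Instrumental prior p(theta | M_i): uniform on [lo, hi]."""
    inside = (theta >= lo) & (theta <= hi)
    return np.where(inside, -np.log(hi - lo), -np.inf)

def log_normal_pdf(theta, mean, std):
    """Conditional prior p(theta | psi, M), here a normal with psi = (mean, std)."""
    return -0.5 * np.log(2.0 * np.pi) - np.log(std) - 0.5 * ((theta - mean) / std) ** 2

def log_lik_hyper(psi):
    """Estimate log p(y_i | psi, M) as
    log p(y_i | M_i) + log( (1/N_s) sum_k p(theta^(k) | psi, M) / p(theta^(k) | M_i) )."""
    log_w = log_normal_pdf(theta_post, psi[0], psi[1]) - log_uniform_pdf(theta_post)
    m = np.max(log_w)  # log-sum-exp for numerical stability
    return log_evidence_i + m + np.log(np.mean(np.exp(log_w - m)))
```

Note that, as emphasized above, the expensive likelihoods are never re-evaluated: each call to \texttt{log\_lik\_hyper} only reweights the stored posterior samples.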
\section{Information about hyper-parameter models for hierarchical inference} \label{sec:hb_info}
\paragraph{Inference for LJ 6-12} We assume
\begin{equation}
\pr{\vp}[\vhp, \M] = \prod_{j=1}^3 \pr{\p_j}[\vhp, \M]
\end{equation}
and consider the following two models:
\begin{itemize}
\item[1)] uniform: $\pr{\p_j}[\vhp, \M] = \U( \p_j \,|\, \hp_{2j-1}, \hp_{2j-1} + \hp_{2j})$, where $\U(\xi | a, b)$ is the uniform distribution of $\xi$ with parameters $a, b$ and $\M$ is set to $\MU$,
\item[2)] log-normal: $\pr{\p_j}[\vhp, \M] = \LN(\p_j \,|\, \hp_{2j-1}, \hp_{2j})$, where $\LN(\xi | a, b)$ is the log-normal distribution of $\xi$ with parameters $a, b$ and $\M$ is set to $\MLN$.
\end{itemize}
The prior distribution on the hyper-parameters is modeled as independent uniform,
\begin{equation} \label{eq:hp_12}
\pr{\vhp}[\M] = \prod_{j=1}^6 \U( \hp_j \,|\, a_j^{\M}, b_j^{\M}) \cnt
\end{equation}
where $\M \in \{\MU, \MLN\}$ and the constants $a_j^{\M}, b_j^{\M}$ are given in Table~\ref{tab:hb_12}, along with the values of the log-evidences for the two models. According to the Bayesian model selection criterion, the model $\MU$ is an order of magnitude more plausible and will thus be used for the subsequent inference.
\begin{table*}[hbtp]
\begin{minipage}{0.45\textwidth}
\caption{$HB_{12, R}$ inference: lower bound and width for each of the hyper-parameters defined in equation~\eqref{eq:hp_12}, log-evidences for each hyper-prior model.}
\label{tab:hb_12}
\centering
\begin{tabular}{l l l l}
\hline \hline
& $ \M=\MU$ & $\M=\MLN$ \\
\hline
$[a_{1}^{\M}, b_{1}^{\M}]$ & [0.0, 3.0] & [-1.000, 2.0] \\
$[a_{2}^{\M}, b_{2}^{\M}]$ & [0.0, 7.0] & [ 0.001, 2.5] \\
$[a_{3}^{\M}, b_{3}^{\M}]$ & [3.0, 3.4] & [-3.000, 0.5] \\
$[a_{4}^{\M}, b_{4}^{\M}]$ & [0.0, 2.0] & [ 0.001, 4.0] \\
$[a_{5}^{\M}, b_{5}^{\M}]$ & [0.0, 0.2] & [-3.500, 0.5] \\
$[a_{6}^{\M}, b_{6}^{\M}]$ & [0.0, 1.0] & [ 0.001, 2.5] \\
\hline
Log-ev. & -19.1401 & -22.0198 \\
\hline \hline
\end{tabular}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\caption{$HB_{\pp, R}$ inference: lower bound and width for each of the hyper-parameters defined in equation~\eqref{eq:hp_p}, log-evidences for each hyper-prior model.}
\label{tab:hb_free}
\centering
\begin{tabular}{l l l l}
\hline \hline
& $\M=\MU$ & $\M=\MLN$ & $\M=\MTN$ \\
\hline
$[a_{1}^{\M}, b_{1}^{\M}]$ & [0.0, 3.00] & [-1.000, 2.0] & [0.0500, 10.0] \\
$[a_{2}^{\M}, b_{2}^{\M}]$ & [0.0, 7.00] & [ 0.001, 2.5] & [0.0010, 3.30] \\
$[a_{3}^{\M}, b_{3}^{\M}]$ & [3.0, 3.40] & [-3.000, 0.5] & [3.0000, 4.00] \\
$[a_{4}^{\M}, b_{4}^{\M}]$ & [0.0, 2.00] & [ 0.001, 4.0] & [0.0010, 0.30] \\
$[a_{5}^{\M}, b_{5}^{\M}]$ & [6.0, 7.00] & [-2.500, 1.5] & [6.0000, 12.0] \\
$[a_{6}^{\M}, b_{6}^{\M}]$ & [0.0, 10.0] & [ 0.001, 2.5] & [0.0010, 2.00] \\
$[a_{7}^{\M}, b_{7}^{\M}]$ & [0.0, 0.20] & [-3.500, 0.5] & [0.0001, 1.00] \\
$[a_{8}^{\M}, b_{8}^{\M}]$ & [0.0, 1.00] & [ 0.001, 2.5] & [0.0010, 0.30] \\
\hline
Log-ev. & -24.0202 & -27.3889 & -25.3927 \\
\hline \hline
\end{tabular}
\end{minipage}
\end{table*}
\paragraph{Inference for LJ 6-$\pp$} We assume
\begin{equation}
\pr{\vp}[\vhp, \M] = \prod_{j=1}^4 \pr{\p_j}[\vhp, \M]
\end{equation}
and consider the following three models:
\begin{itemize}
\item[1)] uniform: $\pr{\p_j}[\vhp, \M] = \U( \p_j \,|\, \hp_{2j-1}, \hp_{2j-1} + \hp_{2j})$, where $\U(\xi | a, b)$ is the uniform distribution of $\xi$ with parameters $a, b$ and $\M$ is set to $\MU$,
\item[2)] log-normal: $\pr{\p_j}[\vhp, \M] = \LN(\p_j \,|\, \hp_{2j-1}, \hp_{2j})$, where $\LN(\xi | a, b)$ is the log-normal distribution of $\xi$ with parameters $a, b$ and $\M$ is set to $\MLN$,
\item[3)] truncated normal: $\pr{\p_j}[\vhp, \M] = \TN(\p_j \,|\, \hp_{2j-1}, \hp_{2j})$, where $\TN(\xi | a, b)$ is the truncated normal distribution of $\xi$ with parameters $a, b$ and $\M$ is set to $\MTN$.
\end{itemize}
The prior distribution on the hyper-parameters is modeled as independent uniform,
\begin{equation} \label{eq:hp_p}
\pr{\vhp}[\M] = \prod_{j=1}^8 \pr{\hp_j}[\M] = \prod_{j=1}^8 \U( \hp_j \,|\, a_j^{\M}, b_j^{\M}) \cnt
\end{equation}
where $\M \in \{\MU, \MLN, \MTN\}$ and the constants $a_j^{\M}, b_j^{\M}$ are given in Table~\ref{tab:hb_free}, along with the values of the log-evidences for the three models. As can be seen, the uniform model is the most plausible one.
\section{Posterior parameter distribution for LJ 6-12 and LJ 6-$\pp$} \label{sec:ind_post}
This section presents the posterior distributions for LJ 6-12 and LJ 6-$\pp$ obtained in HB TMCMC runs for each thermodynamic condition, as well as the distribution for the quantum dimer-based Bayesian inference. Each plot contains all the TMCMC samples of the last stage and is organized as follows: histograms of the marginal distributions of the parameters are shown on the diagonal; projections of the samples onto all possible 2-d subspaces of the parameter space, colored by the log-likelihood values, are given above the diagonal; and the corresponding densities, constructed via a bivariate kernel estimate, are depicted below the diagonal. A green star shows the parameters from Ref.~\cite{Barker:1971}, a green square indicates the parameters from Ref.~\cite{White:1999}, and a green circle marks the parameters from Ref.~\cite{Rahman:1964,Rowley:1975}.
\begin{figure*}[b!]
\centering
\includegraphics[width=\textwidth]{pdf_free2.jpg}
\caption{LJ parameters distributions obtained in $HB_{\pp, R}$ for $L_3$ (top left), $L_4$ (top right), $L_5$ (bottom left) and $V$ (bottom right).}
\label{fig:pdf_free2}
\end{figure*}
\begin{figure*}[hbtp]
\centering
\includegraphics[width=\textwidth]{pdf_12.jpg}
\caption{LJ parameters distributions obtained in $HB_{12, R}$ for $L_3$ (top left), $L_4$ (top right), $L_5$ (bottom left) and $V$ (bottom right).}
\label{fig:pdf_12}
\end{figure*}
\begin{figure*}[hbtp]
\centering
\includegraphics[width=\textwidth]{pdf_free1.jpg}
\caption{LJ parameters distributions obtained in $HB_{\pp, R}$ for $L_1$ (left) and $L_2$ (right).}
\label{fig:pdf_free1}
\end{figure*}
\begin{figure}[hbtp]
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pdf_quantum.jpg}
\vspace{-0.8cm}
\caption{LJ parameters distributions obtained in $B_{\pp, Q}$.}
\label{fig:pdf_quantum}
\end{minipage}
\begin{minipage}{0.5\textwidth}
\centering
\includegraphics[width=\textwidth]{pdf_free_126_narrow.jpg}
\vspace{-0.8cm}
\caption{LJ parameters distributions obtained in $B_{\pp, R}$ for $L_3$ with the prior for $\pp$ restricted to $[12, 14]$.}
\label{fig:pdf_126_narrow}
\end{minipage}
\end{figure}
\section{Introduction}
Berry, Chen, Zame, Heath and Shepp (1997) initiated the infinite arms bandit problem on Bernoulli rewards.
They showed that, in the case of a uniform prior on the mean of an arm,
a $\sqrt{2n}$ regret lower bound holds for $n$ rewards,
and they provided algorithms based on success runs that achieve regret of no more than $2 \sqrt{n}$.
Bonald and Prouti\`{e}re (2013) provided a two-target stopping-time algorithm that can get arbitrarily close to Berry et al.'s lower bound,
and is also optimal on Bernoulli rewards with general priors.
Wang, Audibert and Munos (2008) considered bounded rewards and showed that their confidence bound algorithm has regret bounds that are $\log n$ times the optimal regret.
Vermorel and Mohri (2005) proposed a POKER algorithm for general reward distributions and priors.
The confidence bound method is arguably the most influential approach for the fixed arm-size multi-armed bandit problem over the past thirty years.
Lai and Robbins (1985) derived the smallest asymptotic regret that a multi-armed bandit algorithm can achieve.
Lai (1987) showed that by constructing an upper confidence bound (UCB) for each arm,
playing the arm with the largest UCB,
this smallest regret is achieved in exponential families.
The UCB approach was subsequently extended to unknown time-horizons and other parametric families in Agrawal (1995),
Auer, Cesa-Bianchi and Fischer (2002),
Burnetas and Katehakis (1996),
Capp\'{e}, Garivier, Maillard, Munos and Stoltz (2013) and Kaufmann, Capp\'{e} and Garivier (2012),
and it has been shown to perform well in practice,
achieving optimality beyond exponential families.
Chan (2019) modified the subsampling approach of Baransi, Maillard and Mannor (2014) and showed that optimality is achieved in exponential families,
despite not applying parametric information in the selection of arms.
The method can be considered to be applying confidence bounds that are computed empirically from subsample information,
which substitutes for the missing parametric information.
A related problem is the study of the multi-armed bandit with irreversible constraints,
initiated by Hu and Wei (1989).
Good performance and optimality have also been achieved by Bayesian approaches to the multi-armed bandit problem,
see Berry and Fristedt (1985), Gittins (1989) and Thompson (1933) for early groundwork on the Bayesian approach,
and Korda, Kaufmann and Munos (2013) for more recent advances.
In this paper we show how the confidence bound method can be extended to the infinite arms bandit problem to achieve optimality.
Adjustments are made to account for the infinite arms that are available,
in particular the specification of a target mean that we desire from our best arm,
and a mechanism to reject weak arms quickly.
For each arm a lower confidence bound of its mean is computed,
using only information on the sample mean and standard deviation of its rewards.
We play an arm as long as its confidence bound is below the target mean.
If it is above,
then a new arm is played in the next trial.
We start with the smallest possible regret that an infinite arms bandit algorithm can achieve,
as the number of rewards goes to infinity.
This is followed by the target mean selection of the confidence bound target (CBT) algorithm to achieve this regret.
The optimal target mean depends only on the prior distribution of the arm means and not on the reward distributions.
That is, the reward distributions need not be specified, and optimality is still achieved.
In the absence of information on the prior,
we show how to adapt via empirical determination of the target mean.
Regret guarantees of empirical CBT,
relative to the baseline lower bound,
are provided.
Numerical studies on Bernoulli rewards and on a URL dataset show that CBT and empirical CBT outperform their competitors.
The layout of the paper is as follows.
In Section 2 we review a number of infinite arms bandit algorithms and describe CBT.
In Section~3 we motivate why a particular choice of the target mean leads to the smallest regret and state the optimality results.
In Section 4 we introduce an empirical version of CBT to tackle unknown priors and explain intuitively why it works.
In Section~5 we perform numerical studies.
In Sections 6--7 we prove the optimality of CBT.
\section{Methodology}
Let $X_{k1}, X_{k2}, \ldots$ be i.i.d. non-negative rewards from an arm or population $\Pi_k$,
$1 \leq k < \infty$,
with mean $\mu_k$.
Let $\mu_1, \mu_2, \ldots$ be i.i.d. with prior density $g$ on $(0,\infty)$.
Let $F_{\mu}$ denote the reward distribution of an arm with mean $\mu$,
and let $E_{\mu}$ ($P_{\mu}$) denote expectation (probability) with respect to $X \stackrel{d}{\sim} F_{\mu}$.
Let $a \wedge b$ denote $\min(a,b)$,
$\lfloor \cdot \rfloor$ ($\lceil \cdot \rceil$) denote the greatest (least) integer function and $a^+$ denote $\max(0,a)$.
Let $a_n \sim b_n$ if $\lim_{n \rightarrow \infty} (a_n/b_n)=1$,
$a_n = o(b_n)$ if $\lim_{n \rightarrow \infty} (a_n/b_n)=0$ and $a_n = O(b_n)$ if $\limsup_{n \rightarrow \infty} |a_n/b_n| < \infty$.
A bandit algorithm is required to select one of the arms to be played at each trial,
with the choice informed from past outcomes.
We measure the effectiveness of a bandit algorithm by its regret
$$R_n = E \Big( \sum_{k=1}^K n_k \mu_k \Big),
$$
where $K$ is the total number of arms played,
$n_k$ the number of rewards from $\Pi_k$ and $n = \sum_{k=1}^K n_k$.
\subsection{Literature review}
Berry et al. (1997) showed that if $F_{\mu}$ is Bernoulli and $g$ is uniform on $(0,1)$,
then a regret lower bound
\begin{equation} \label{liminf}
\liminf_{n \rightarrow \infty} \tfrac{R_n}{\sqrt{n}} \geq \sqrt{2}
\end{equation}
is unavoidable.
They proposed a number of bandit strategies that are described below.
It should be clarified that in our notation success refers to observing a reward of 0,
and failure refers to observing a reward of 1.
\begin{enumerate}
\item $f$-failure strategy.
The same arm is played until $f$ failures are encountered.
When this happens we switch to a new arm.
We do not go back to a previously played arm,
that is the strategy is {\it non-recalling}.
\item $s$-run strategy.
We restrict ourselves to no more than $s$ arms,
following the 1-failure strategy in each,
until a success run of length $s$ is observed in an arm.
When this happens we play the arm for the remaining trials.
If no success runs of length $s$ is observed in all $s$ arms,
then the arm with the highest proportion of successes is played for the remaining trials.
\item Non-recalling $s$-run strategy.
We follow the 1-failure strategy until an arm produces a success run of length $s$.
When this happens we play the arm for the remaining trials.
If no arm produces a success run of length $s$,
then the 1-failure strategy is used for all $n$ trials.
\item $m$-learning strategy.
We follow the 1-failure strategy for the first $m$ trials,
with the arm at trial $m$ played until it yields a failure.
Thereafter we play,
for the remaining trials,
the arm with the highest proportion of successes.
\end{enumerate}
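As an illustration (not part of the original experiments of Berry et al.), the $f$-failure strategy is simple to simulate on Bernoulli arms with means drawn from the uniform prior; the sketch below follows the convention above that a reward of 1 is a failure.

```python
import numpy as np

def f_failure_strategy(n, f=1, seed=0):
    """Simulate the f-failure strategy on Bernoulli arms with means drawn
    from the uniform prior on (0, 1). A reward of 1 counts as a failure
    (the paper's convention); an arm is abandoned at its f-th failure.
    Returns the total regret sum_k n_k * mu_k."""
    rng = np.random.default_rng(seed)
    regret = 0.0
    trials = 0
    while trials < n:
        mu = rng.uniform()           # mean of a fresh arm
        failures = 0
        while failures < f and trials < n:
            x = rng.binomial(1, mu)  # reward: 1 = failure, 0 = success
            regret += mu
            trials += 1
            if x == 1:
                failures += 1
    return regret
```

With $f=1$ this is the 1-failure strategy used as a building block by the other strategies; as noted below, its regret grows like $n/\log n$, much faster than the $O(\sqrt{n})$ regret of the run strategies.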
Berry et al. showed that $R_n \sim n/(\log n)$ for the $f$-failure strategy for any $f \geq 1$,
whereas for the $\sqrt{n}$-run strategy,
the $\log n \sqrt{n}$-learning strategy and the non-recalling $\sqrt{n}$-run strategy,
$$\limsup_{n \rightarrow \infty} \tfrac{R_n}{\sqrt{n}} \leq 2.
$$
Bonald and Prouti\`{e}re (2013) proposed a two-target algorithm that gets arbitrarily close to the lower bound in (\ref{liminf}).
The target values are $s_1 = \lfloor \sqrt[3]{\frac{n}{2}} \rfloor$ and $s_f = \lfloor f \sqrt{\frac{n}{2}} \rfloor$,
where $f \geq 2$ is user-defined.
An arm is discarded if it has fewer than $s_1$ successes when it encounters its first failure,
or $s_f$ successes when it encounters its $f$th failure.
If both targets are met,
then the arm is accepted and played for the remaining trials.
Bonald and Prouti\`{e}re showed that for the uniform prior,
the two-target algorithm satisfies
\begin{equation} \label{lsR}
\limsup_{n \to \infty} \tfrac{R_n}{\sqrt{n}} \leq \sqrt{2} + \tfrac{1}{f \sqrt{2}},
\end{equation}
and they get close to the lower bound of Berry et al. when $f$ is large.
Bonald and Prouti\`{e}re extended their optimality on Bernoulli rewards to non-uniform priors.
Specifically they considered $g(\mu) \sim \alpha \mu^{\beta-1}$ for some $\alpha>0$ and $\beta>0$ as $\mu \rightarrow 0$.
They extended the lower bound of Berry et al. to
\begin{equation} \label{liminf2}
\liminf_{n \rightarrow \infty} (n^{-\frac{\beta}{\beta+1}} R_n) \geq C_0,
\mbox{ where } C_0 = (\tfrac{\beta(\beta+1)}{\alpha})^{\frac{1}{\beta+1}},
\end{equation}
and showed that their two-target algorithm with $s_1 = \lfloor C_0^{\frac{\beta+1}{\beta+2}} \rfloor$ and $s_f = \lfloor f C_0 \rfloor$ satisfies
\begin{equation} \label{limsup2}
\limsup_{f \rightarrow \infty} [\limsup_{n \rightarrow \infty} (n^{-\frac{\beta}{\beta+1}} R_n)] \leq C_0.
\end{equation}
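For intuition, the two-target algorithm can be simulated directly in the Bernoulli, uniform-prior setting; the sketch below is our own illustration of the rule (a reward of 1 again counts as a failure), not code from Bonald and Prouti\`{e}re.

```python
import numpy as np

def two_target(n, f=10, seed=0):
    """Two-target algorithm on Bernoulli arms with uniform-prior means.
    An arm needs s1 successes before its 1st failure and s_f successes
    before its f-th failure; otherwise it is discarded. An arm meeting
    both targets is played for all remaining trials."""
    rng = np.random.default_rng(seed)
    s1 = int((n / 2.0) ** (1.0 / 3.0))   # first target, floor((n/2)^(1/3))
    sf = int(f * np.sqrt(n / 2.0))       # second target, floor(f*sqrt(n/2))
    regret, trials = 0.0, 0
    while trials < n:
        mu = rng.uniform()               # mean of a fresh arm
        succ = fail = 0
        accepted = True
        while trials < n:
            x = rng.binomial(1, mu)      # reward: 1 = failure, 0 = success
            regret += mu
            trials += 1
            if x == 0:
                succ += 1
                continue
            fail += 1
            if (fail == 1 and succ < s1) or (fail == f and succ < sf):
                accepted = False         # missed a target: discard the arm
                break
            if fail == f:
                break                    # both targets met
        if accepted and trials < n:
            regret += (n - trials) * mu  # play the accepted arm to the end
            trials = n
    return regret
```

Here $f$ is the user-defined parameter appearing in the bound (\ref{lsR}).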
Wang, Audibert and Munos (2008) proposed a UCB-F algorithm for rewards taking values in $[0,1]$.
They showed that if
$$P_g(\mu_k \leq \mu) = O(\mu^{\beta}) \mbox{ for some } \beta>0,
$$
then under suitable regularity conditions,
$R_n = O(n^{\frac{\beta}{\beta+1}} \log n)$.
In UCB-F an order $n^{\frac{\beta}{\beta+1}}$ arms are chosen,
and confidence bounds are computed on these arms to determine which arm to play.
UCB-F is different from CBT in that it pre-selects the number of arms,
and it also does not have a mechanism to reject weak arms quickly.
Carpentier and Valko (2014) also considered rewards taking values in [0,1],
but their interest in maximizing the selection of a good arm is different from the aims here and in the papers described above.
\subsection{Confidence bound target}
In CBT we construct a confidence bound for each arm,
and play an arm as long as its confidence bound is under the target mean.
Let
\begin{equation} \label{bncn}
b_n \rightarrow \infty \mbox{ and } c_n \rightarrow \infty \mbox{ with } b_n+c_n=o(n^{\delta}) \mbox{ for all } \delta>0.
\end{equation}
In our numerical studies we select $b_n = c_n = \log \log n$.
For an arm $k$ that has been played $t$ times,
its confidence bound is
\begin{equation} \label{Lkt}
L_{kt} = \max \Big( \frac{\bar X_{kt}}{b_n}, \bar X_{kt} - c_n \frac{\wht \sigma_{kt}}{\sqrt{t}} \Big),
\end{equation}
where $\bar X_{kt} = t^{-1} S_{kt}$,
with $S_{kt} = \sum_{u=1}^t X_{ku}$,
and $\wht \sigma^2_{kt} = t^{-1} \sum_{u=1}^t (X_{ku}-\bar X_{kt})^2$.
Let $\zeta>0$ be the target mean.
We discuss in Section 3 how $\zeta$ should be selected to achieve optimality.
It suffices to mention here that it should be small for large $n$,
more specifically it should decrease at a polynomial rate with respect to $n$.
The algorithm is non-recalling,
an arm is played until its confidence bound goes above $\zeta$ and is not played after that.
\vskip0.5in
\underline{Confidence bound target (CBT)}
\smallskip
For $k=1,2,\ldots:$ Draw $n_k$ rewards from arm $k$, where
$$n_k = \inf \{ t \geq 1: L_{kt} > \zeta \} \wedge \Big( n - \sum_{\ell=1}^{k-1} n_{\ell} \Big).
$$
The total number of arms played is $K = \min \{k: \sum_{\ell=1}^k n_{\ell} = n \}$.
\vskip0.5in
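A compact simulation of this procedure (our own sketch, with $b_n = c_n = \log\log n$ as in the numerical studies of Section 5) is given below; the caller supplies the prior and reward samplers, since CBT itself uses only the observed rewards.

```python
import numpy as np

def cbt(n, zeta, reward_sampler, prior_sampler, seed=0):
    """Non-recalling CBT: play arm k while its lower confidence bound
    L_kt = max(Xbar/b_n, Xbar - c_n*sigma_hat/sqrt(t)) stays below zeta."""
    rng = np.random.default_rng(seed)
    b = c = np.log(np.log(n))
    regret, trials = 0.0, 0
    while trials < n:
        mu = prior_sampler(rng)          # a fresh arm (its mean is hidden from CBT)
        s = s2 = 0.0
        t = 0
        while trials < n:
            x = reward_sampler(rng, mu)
            t += 1
            s += x
            s2 += x * x
            regret += mu
            trials += 1
            xbar = s / t
            sig = np.sqrt(max(s2 / t - xbar ** 2, 0.0))
            if max(xbar / b, xbar - c * sig / np.sqrt(t)) > zeta:
                break                    # bound exceeds the target: reject the arm
    return regret
```

For Bernoulli rewards under the uniform prior one would call it with the optimal target $\zeta_n = \sqrt{2/n}$ of Section 3, e.g. \texttt{cbt(n, (2/n) ** 0.5, lambda rng, mu: float(rng.binomial(1, mu)), lambda rng: rng.uniform())}.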
There are three types of arms that we need to take care of,
and that explains the design of $L_{kt}$.
The first type are arms with means $\mu_k$ significantly larger than $\zeta$.
For these arms we would like to reject them quickly.
The condition that an arm be rejected when $\bar X_{kt}/b_n$ exceeds $\zeta$ is key to achieving this.
The second type are arms with means $\mu_k$ larger than $\zeta$ but not by as much as those of the first type.
For these arms we are unlikely to reject them quickly,
as it is difficult to determine whether $\mu_k$ is larger or less than $\zeta$ based on a small sample.
Rejecting arm $k$ when $\bar X_{kt} - c_n \wht \sigma_{kt}/\sqrt{t}$ exceeds $\zeta$ ensures that arm $k$ is rejected only when it is statistically significant that $\mu_k$ is larger than $\zeta$.
Though there may be a large number of rewards from these arms,
their contributions to the regret are small because their means are small.
The third type of arms are those with means $\mu_k$ smaller than $\zeta$.
For these arms the best strategy (when $\zeta$ is chosen correctly) is to play them for the remaining trials.
Selecting $b_n \rightarrow \infty$ and $c_n \rightarrow \infty$ in (\ref{Lkt}) ensures that the probabilities of rejecting these arms are small.
For the two-target algorithm,
the first target $s_1$ is designed for quick rejection of the first type of arms and the second target $s_f$ is designed for rejection of the second type of arms.
What is different is that, whereas the two-target algorithm checks an arm for rejection only at its first and $f$th failures,
with $f$ chosen large for optimality,
(in the case of Bernoulli rewards) CBT checks for rejection each time a failure occurs.
The frequent monitoring of CBT is a positive feature that results in significantly better numerical performances,
see Section 5.
\section{Optimality}
We state the regret lower bound in Section 3.1 and show that CBT achieves this bound in Section 3.2.
\subsection{Regret lower bound}
In Lemma \ref{lem1} below we motivate the choice of $\zeta$.
Let $\lambda = \int_0^{\infty} E_{\mu}(X|X>0) g(\mu) d \mu$ be the (finite) mean of the first non-zero reward of a random arm.
The value $\lambda$ represents the cost of experimenting with a new arm.
We consider $E_{\mu}(X|X>0)$ instead of $\mu$ because we are able to reject an arm only when there is a non-zero reward.
For Bernoulli rewards,
$\lambda=1$.
Let $p(\zeta) = P_g(\mu_1 \leq \zeta)$ and $v(\zeta) = E_g(\zeta-\mu_1)^+$.
Consider an idealized algorithm which plays $\Pi_k$ until a non-zero reward is observed,
and $\mu_k$ is revealed when that happens.
If $\mu_k > \zeta$,
then $\Pi_k$ is rejected and a new arm is played next.
If $\mu_k \leq \zeta$,
then we end the experimentation stage and play $\Pi_k$ for the remaining trials.
Assuming that the experimentation stage uses $o(n)$ trials and $\zeta$ is small,
the regret of this algorithm is asymptotically
\begin{equation} \label{hmus}
r_n(\zeta) = \tfrac{\lambda}{p(\zeta)} + n E_g(\mu_1|\mu_1 \leq \zeta).
\end{equation}
The first term in the expansion of $r_n(\zeta)$ approximates $E(\sum_{k=1}^{K-1} n_k \mu_k)$ whereas the second term approximates $E(n_K \mu_K)$.
\begin{lem} \label{lem1}
Let $\zeta_n$ be such that $v(\zeta_n) = \frac{\lambda}{n}$.
We have
$$\inf_{\zeta > 0} r_n(\zeta) = r_n(\zeta_n) = n \zeta_n.
$$
\end{lem}
\smallskip
{\sc Proof}.
Since $E_g(\zeta-\mu_1|\mu_1 \leq \zeta) = v(\zeta)/p(\zeta)$,
it follows from (\ref{hmus}) that
\begin{equation} \label{h2}
r_n(\zeta) = \tfrac{\lambda}{p(\zeta)}+n \zeta-\tfrac{nv(\zeta)}{p(\zeta)}.
\end{equation}
It follows from $\tfrac{d}{d \zeta} v(\zeta)= p(\zeta)$ and $\tfrac{d}{d \zeta} p(\zeta) = g(\zeta)$ that
$$\tfrac{d}{d \zeta} r_n(\zeta) = \tfrac{g(\zeta)[nv(\zeta)-\lambda]}{p^2(\zeta)}.
$$
Since $v$ is continuous and strictly increasing when it is positive,
the root to $v(\zeta)=\frac{\lambda}{n}$ exists,
and Lemma \ref{lem1} follows from solving $\tfrac{d}{d \zeta} r_n(\zeta)=0$.
$\wbox$
\smallskip
Consider:
\smallskip
\noindent (A1) There exists $\alpha > 0$ and $\beta>0$ such that $g(\mu) \sim \alpha \mu^{\beta-1}$ as $\mu \to 0$.
\smallskip
\noindent Under (A1),
$p(\zeta) = \int_0^{\zeta} g(\mu) d \mu \sim \frac{\alpha}{\beta} \zeta^{\beta}$ and $v(\zeta) = \int_0^{\zeta} p(\mu) d \mu \sim \frac{\alpha}{\beta (\beta+1)} \zeta^{\beta+1}$
as $\zeta \rightarrow 0$,
hence $v(\zeta_n) \sim \lambda n^{-1}$ for
\begin{equation} \label{mustar}
\zeta_n \sim C n^{-\frac{1}{\beta+1}}, \mbox{ where } C = (\tfrac{\lambda \beta(\beta+1)}{\alpha})^{\frac{1}{\beta+1}}.
\end{equation}
\smallskip
In Lemma \ref{thm1} below we state the regret lower bound.
We assume there that:
\smallskip
\noindent (A2) There exists $a_1>0$ such that $P_{\mu}(X>0) \geq a_1 \min(\mu,1)$ for all $\mu$.
\smallskip
\noindent Without condition (A2) we may play a bad arm a large number of times because its rewards are mostly zeros but are very big when non-zero.
\begin{lem} \label{thm1}
Assume {\rm (A1)} and {\rm (A2)}.
For any infinite arms bandit algorithm,
its regret satisfies
\begin{equation} \label{R2}
R_n \geq [1+o(1)] n \zeta_n (\sim Cn^{\tfrac{\beta}{\beta+1}}).
\end{equation}
\end{lem}
\smallskip
{\sc Example} 1.
Consider $X \stackrel{d}{\sim}$ Bernoulli($\mu$).
Condition (A2) holds with $a_1=1$.
If $g$ is uniform on (0,1),
then (A1) holds with $\alpha=\beta=1$.
Since $\lambda=1$,
by (\ref{mustar}),
$\zeta_n \sim (2/n)^{1/2}$.
Lemma \ref{thm1} says that $R_n \geq [1+o(1)] \sqrt{2n}$,
agreeing with Theorem 3 of Berry et al. (1997).
\smallskip
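The computation in Example 1 can be verified numerically from (\ref{h2}): for the uniform prior, $p(\zeta) = \zeta$, $v(\zeta) = \zeta^2/2$ and $\lambda = 1$, so the root of $v(\zeta) = \lambda/n$ is $\zeta_n = \sqrt{2/n}$ and $r_n(\zeta_n) = n\zeta_n = \sqrt{2n}$. A short check:

```python
import numpy as np

# Uniform prior on (0,1): p(zeta) = zeta, v(zeta) = zeta^2 / 2, lambda = 1.
n, lam = 1_000_000, 1.0
zeta_n = np.sqrt(2 * lam / n)   # root of v(zeta) = lam / n

def r(zeta):
    """r_n(zeta) = lam/p(zeta) + n*zeta - n*v(zeta)/p(zeta), as in (h2)."""
    p, v = zeta, zeta ** 2 / 2
    return lam / p + n * zeta - n * v / p

# the minimum value equals n * zeta_n = sqrt(2n), as in Lemma 1
assert np.isclose(r(zeta_n), n * zeta_n)
assert np.isclose(n * zeta_n, np.sqrt(2 * n))
```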
Bonald and Prouti\`{e}re (2013) showed (\ref{R2}) in their Lemma 3 for Bernoulli rewards under (A1) and that the two-target algorithm gets close to the lower bound when $f$ is large.
It will be shown in Theorem \ref{thm2} that the lower bound in (\ref{R2}) is achieved by CBT for general rewards.
\subsection{Optimality of CBT}
We state the optimality of CBT in Theorem \ref{thm2},
after describing the conditions on discrete rewards under (B1) and continuous rewards under (B2) for which the theorem holds.
Let $M_{\mu}(\theta) = E_{\mu} e^{\theta X}$.
\smallskip
\noindent (B1) The rewards are non-negative integer-valued.
For $0 < \delta \leq 1$,
there exists $\theta_{\delta}>0$ such that for $\mu>0$ and $0 \leq \theta \leq \theta_{\delta}$,
\begin{eqnarray} \label{Mbound}
M_{\mu}(\theta) & \leq & e^{(1+\delta) \theta \mu}, \\ \label{Mmu2}
M_{\mu}(-\theta) & \leq & e^{-(1-\delta) \theta \mu}.
\end{eqnarray}
In addition,
\begin{eqnarray} \label{a2}
P_{\mu}(X>0) & \leq & a_2 \mu \mbox{ for some } a_2>0, \\ \label{E4}
E_{\mu} X^4 & = & O(\mu) \mbox{ as } \mu \rightarrow 0.
\end{eqnarray}
\smallskip
\noindent (B2) The rewards are continuous random variables satisfying
\begin{equation} \label{Prho}
\sup_{\mu>0} P_{\mu}(X \leq \gamma \mu) \rightarrow 0 \mbox{ as } \gamma \rightarrow 0.
\end{equation}
Moreover (\ref{E4}) holds and for $0 < \delta \leq 1$,
there exists $\tau_{\delta}>0$ such that for $0 < \theta \mu \leq \tau_{\delta}$,
\begin{eqnarray} \label{tau1}
M_{\mu}(\theta) & \leq & e^{(1+\delta) \theta \mu}, \\ \label{tau2}
M_{\mu}(-\theta) & \leq & e^{-(1-\delta) \theta \mu}.
\end{eqnarray}
In addition for each $t \geq 1$,
there exists $\xi_t >0$ such that
\begin{equation} \label{sigbound}
\sup_{\mu \leq \xi_t} P_{\mu}(\wht \sigma_t^2 \leq \gamma \mu^2) \rightarrow 0 \mbox{ as } \gamma \rightarrow 0,
\end{equation}
where $\wht \sigma_t^2 = t^{-1} \sum_{u=1}^t (X_u -\bar X_t)^2$ and $\bar X_t = t^{-1} \sum_{u=1}^t X_u$ for i.i.d. $X_u \stackrel{d}{\sim} F_{\mu}$.
\begin{thm} \label{thm2}
Assume {\rm (A1), (A2)} and either {\rm (B1)} or {\rm (B2)}.
For CBT with threshold $\zeta_n$ satisfying {\rm (\ref{mustar})} and $b_n$,
$c_n$ satisfying {\rm (\ref{bncn})},
\begin{equation} \label{R1}
R_n \sim n \zeta_n \mbox{ as } n \rightarrow \infty.
\end{equation}
\end{thm}
Theorem \ref{thm2} says that CBT is optimal as it attains the lower bound given in Lemma \ref{thm1}.
In the examples below we show that the regularity conditions (A2),
(B1) and (B2) are reasonable and checkable.
\smallskip
{\sc Example} 2.
If $X \stackrel{d}{\sim}$ Bernoulli($\mu$) under $P_{\mu}$,
then
$$M_{\mu}(\theta) = 1-\mu+\mu e^{\theta} \leq \exp[\mu(e^{\theta}-1)],
$$
and (\ref{Mbound}),
(\ref{Mmu2}) hold with $\theta_{\delta} >0$ satisfying
\begin{equation} \label{etheta}
e^{\theta_{\delta}}-1 \leq \theta_{\delta}(1+\delta) \mbox{ and } e^{-\theta_{\delta}}-1 \leq -\theta_{\delta}(1-\delta).
\end{equation}
In addition (\ref{a2}) holds with $a_2=1$,
and (\ref{E4}) holds because $E_{\mu}X^4 = \mu$.
Condition (A2) holds with $a_1=1$.
\smallskip
{\sc Example} 3.
Let $F_{\mu}$ be a distribution with support on $0, \ldots, I$ for some positive integer $I>1$ and having mean $\mu$.
Let $p_i = P_{\mu}(X=i)$.
We check that $P_{\mu}(X>0) \geq I^{-1} \mu$ and therefore (A2) holds with $a_1=I^{-1}$.
Let $\theta_{\delta}>0$ be such that
\begin{equation} \label{Id}
e^{i \theta}-1 \leq i \theta(1+\delta) \mbox{ and } e^{-i \theta}-1 \leq -i \theta(1-\delta) \mbox{ for } 0 \leq i \theta \leq I \theta_{\delta}.
\end{equation}
By (\ref{Id}) for $0 \leq \theta \leq \theta_{\delta}$,
\begin{eqnarray*}
M_{\mu}(\theta) & = & \sum_{i=0}^I p_i e^{i \theta} \leq 1+(1+\delta) \mu \theta, \cr
M_{\mu}(-\theta) & = & \sum_{i=0}^I p_i e^{-i \theta} \leq 1-(1-\delta) \mu \theta, \cr
\end{eqnarray*}
and (\ref{Mbound}),
(\ref{Mmu2}) follow from $1+x \leq e^x$.
Moreover (\ref{a2}) holds with $a_2=1$ and (\ref{E4}) holds because $E_{\mu} X^4 = \sum_{i=0}^I p_i i^4 \leq I^3 \mu$.
\smallskip
{\sc Example} 4.
If $X \stackrel{d}{\sim}$ Poisson($\mu$) under $P_{\mu}$,
then
$$M_{\mu}(\theta) = \exp[\mu(e^{\theta}-1)],
$$
and (\ref{Mbound}), (\ref{Mmu2}) again follow from (\ref{etheta}).
Since $P_{\mu}(X>0) = 1-e^{-\mu}$,
(A2) holds with $a_1=1-e^{-1}$,
and (\ref{a2}) holds with $a_2=1$.
In addition (\ref{E4}) holds because
$$E_{\mu} X^4 = \sum_{k=1}^{\infty} \tfrac{k^4 \mu^k e^{-\mu}}{k!} = \mu e^{-\mu} + e^{-\mu} O \Big( \sum_{k=2}^{\infty} \mu^k \Big).
$$
{\sc Example} 5.
Let $Z$ be a continuous non-negative random variable with mean 1,
and with $Ee^{\tau_0 Z} < \infty$ for some $\tau_0 > 0$.
Let $X$ be distributed as $\mu Z$ under $P_{\mu}$.
Condition (A2) holds with $a_1=1$.
We conclude (\ref{Prho}) from
$$\sup_{\mu>0} P_{\mu}(X \leq \gamma \mu) = P(Z \leq \gamma) \rightarrow 0 \mbox{ as } \gamma \rightarrow 0.
$$
Let $0 < \delta \leq 1$.
Since $\lim_{\tau \rightarrow 0} \tau^{-1} \log Ee^{\tau Z} = EZ = 1$,
there exists $\tau_{\delta}>0$ such that for $0 < \tau \leq \tau_{\delta}$,
\begin{equation} \label{EetY}
E e^{\tau Z} \leq e^{(1+\delta) \tau} \mbox{ and } E e^{-\tau Z} \leq e^{-(1-\delta) \tau}.
\end{equation}
Since $M_{\mu}(\theta) = E_{\mu} e^{\theta X} = E e^{\theta \mu Z}$ and $M_{\mu}(-\theta) = Ee^{-\theta \mu Z}$,
we conclude (\ref{tau1}) and (\ref{tau2}) from (\ref{EetY}) with $\tau=\theta \mu$.
We conclude (\ref{E4}) from $E_{\mu} X^4 = \mu^4 EZ^4$,
and (\ref{sigbound}),
for arbitrary $\xi_t>0$,
from
$$P_{\mu}(\wht \sigma_t^2 \leq \gamma \mu^2) = P(\wht \sigma_{tZ}^2 \leq \gamma) \rightarrow 0 \mbox{ as } \gamma \rightarrow 0,
$$
where $\wht \sigma^2_{tZ} = t^{-1} \sum_{u=1}^t (Z_u - \bar Z_t)^2$,
for i.i.d. $Z$ and $Z_u$.
\section{Empirical CBT for unknown priors}
The optimal implementation of CBT,
in particular the computation of the best target mean $\zeta_n$,
assumes knowledge of how $g(\mu)$ behaves for $\mu$ near 0.
For $g$ unknown we rely on Theorem~\ref{thm2} to motivate the empirical implementation of CBT.
What is striking about (\ref{R1}) is that it relates the optimal threshold $\zeta_n$ to $\frac{R_n}{n}$,
and moreover this relation does not depend on either the prior $g$ or the reward distributions.
We suggest therefore,
in an empirical implementation of CBT,
to apply thresholds
\begin{equation} \label{hmustar}
\zeta(m):= \tfrac{S_m'}{n},
\end{equation}
where $S_m'$ is the sum of the first $m$ rewards, over all arms played.
In the beginning with $m$ small,
$\zeta(m)$ underestimates the optimal threshold,
but this will only encourage exploration,
which is the right strategy at the beginning.
As $m$ increases $\zeta(m)$ gets closer to the optimal threshold,
and empirical CBT behaves more like CBT in deciding whether to play an arm further.
Empirical CBT is recalling,
unlike CBT,
as it decides from among all arms which to play further.
\vskip0.5in
\underline{Empirical CBT}
\smallskip
Notation:
When there are $m$ total rewards,
let $n_k(m)$ denote the number of rewards from arm $k$ and let $K_m$ denote the number of arms played.
\smallskip
For $m=0$,
play arm 1.
Hence $K_1=1$, $n_1(1)=1$ and $n_k(1) = 0$ for $k>1$.
\smallskip
For $m=1,\ldots,n-1$:
\begin{enumerate}
\item If $\min_{1 \leq k \leq K_m} L_{kn_k(m)} \leq \zeta(m)$,
then play the arm $k$ minimizing $L_{kn_k(m)}$.
\item If $\min_{1 \leq k \leq K_m} L_{kn_k(m)} > \zeta(m)$,
then play a new arm $K_m+1$.
\end{enumerate}
\vskip0.5in
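The steps above can be sketched in code. This is a minimal illustration, not the paper's implementation: the lower confidence index \verb|lower_index| below is an assumption, chosen to mirror the stopping rules $T_b$ and $T_c$ of Section 7, and rewards are treated as costs (smaller means are better).

```python
import math
import random

def lower_index(rewards, b, c):
    # Lower confidence index for one arm.  This form mirrors the stopping
    # rules T_b and T_c of Section 7; treat it as an assumption of this
    # sketch rather than the paper's exact definition of L_{kt}.
    t = len(rewards)
    mean = sum(rewards) / t
    var = sum((x - mean) ** 2 for x in rewards) / t
    return max(mean - c * math.sqrt(var / t), mean / b)

def empirical_cbt(new_arm, n):
    # new_arm() returns a sampler for a fresh arm; rewards are costs.
    b = c = math.log(math.log(n))   # b_n = c_n = log log n, as in Section 5
    arms = [new_arm()]              # m = 0: play arm 1
    rewards = [[arms[0]()]]
    total = rewards[0][0]
    for m in range(1, n):
        zeta = total / n            # empirical threshold zeta(m) = S'_m / n
        k = min(range(len(arms)), key=lambda i: lower_index(rewards[i], b, c))
        if lower_index(rewards[k], b, c) > zeta:
            arms.append(new_arm())  # step 2: open a new arm
            rewards.append([])
            k = len(arms) - 1
        x = arms[k]()               # step 1 (or first play of the new arm)
        rewards[k].append(x)
        total += x
    return total

# Bernoulli costs with uniform prior on the mean (beta = 1).
random.seed(0)
new_arm = lambda: (lambda mu=random.random(): float(random.random() < mu))
avg_cost = empirical_cbt(new_arm, 1000) / 1000
print(avg_cost)
```

Early on the threshold \verb|total / n| is small, so new arms are opened freely; as rewards accumulate the threshold stabilizes and play concentrates on low-mean arms, as described above.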
Empirical CBT,
unlike CBT,
does not achieve the smallest regret.
This is because when a good arm (that is an arm with mean below optimal target) appears early,
we are not sure whether this is due to good fortune or that the prior is disposed towards arms with small means,
so we experiment with more arms before we are certain and play the good arm for the remaining trials.
Similarly when no good arm appears after some time,
we may conclude that the prior is disposed towards arms with large means,
and play an arm with mean above the optimal target for the remaining trials,
even though it is advantageous to experiment further.
There is a price for not knowing $g$.
Our calculations in Appendix \ref{thm3prf} indicate that under empirical CBT,
the regret
\begin{equation} \label{thm3.1}
R_n \sim I_{\beta} n \zeta_n,
\end{equation}
where $I_{\beta} = (\frac{1}{\beta+1})^{\frac{1}{\beta+1}} (2-\frac{1}{(\beta+1)^2}) \Gamma(2-\frac{1}{\beta+1})$ and $\Gamma(u) = \int_0^{\infty} x^{u-1} e^{-x} dx$.
The calculations are based on an idealized version of empirical CBT.
The constant $I_{\beta}$ increases from 1 (at $\beta=0$) to 2 (at $\beta=\infty$),
so the worst-case inflation is not more than 2.
The increase is quite slow so for reasonable values of $\beta$ it is closer to 1 than 2.
For example $I_1=1.10$, $I_2=1.17$, $I_3=1.24$ and $I_{10}=1.53$.
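The constant $I_{\beta}$ can be evaluated directly from the formula above; a quick numerical check of its range and growth:

```python
import math

def I(beta):
    # I_beta = (1/(beta+1))^{1/(beta+1)} * (2 - 1/(beta+1)^2) * Gamma(2 - 1/(beta+1))
    p = 1.0 / (beta + 1)
    return p ** p * (2.0 - p ** 2) * math.gamma(2.0 - p)

for beta in (1, 2, 3, 10):
    print(beta, round(I(beta), 2))
```

As $\beta \to 0$, $p \to 1$ and $I_{\beta} \to \Gamma(1) = 1$; as $\beta \to \infty$, $p \to 0$ and $I_{\beta} \to 2\Gamma(2) = 2$, confirming the stated endpoints.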
The predictions from (\ref{thm3.1}),
that the inflation of the regret increases with $\beta$,
and that it is not more than 25\% for $\beta=$1, 2 and 3,
are validated by our simulations in Section 5.
\section{Numerical studies}
We study here arms whose rewards have Bernoulli distributions (Example 5) as well as unknown distributions (Example 6).
In our simulations 10,000 datasets are generated for each entry in Tables 1--4,
and standard errors are placed after the $\pm$ sign.
In both CBT and empirical CBT,
we select $b_n = c_n = \log \log n$.
\begin{table}[h!]
\begin{tabular}{ll|rrrr}
\multicolumn{2}{c|}{Algorithm} & \multicolumn{4}{c}{Regret} \cr
& & $n=$100 & $n=$1000 & $n=$10,000 & $n=$100,000 \cr \hline
CBT & $\zeta=\sqrt{2/n}$ & 14.6$\pm$0.1 & 51.5$\pm$0.3 & 162$\pm$1 & 504$\pm$3 \cr
& empirical & 15.6$\pm$0.1 & 54.0$\pm$0.3 & 172$\pm$1 & 531$\pm$3 \cr \hline
Berry et al. & 1-failure & 21.8$\pm$0.1 & 152.0$\pm$0.6 & 1123$\pm$4 & 8955$\pm$28 \cr
& $\sqrt{n}$-run & 19.1$\pm$0.2& 74.7$\pm$0.7 & 260$\pm$3 & 844$\pm$9 \cr
& $\sqrt{n}$-run (non-recall) & 15.4$\pm$0.1 & 57.7$\pm$0.4 & 193$\pm$1 & 618$\pm$4 \cr
& $\log n \sqrt{n}$-learning & 18.7$\pm$0.1 & 84.4$\pm$0.6 & 311$\pm$3 & 1060$\pm$9 \cr \hline
Two-target & $f=3$ & 15.2$\pm$0.1 & 52.7$\pm$0.3 & 167$\pm$1 & 534$\pm$3 \cr
& $f=6$ & 16.3$\pm$0.1 & 55.8$\pm$0.4 & 165$\pm$1 & 511$\pm$3 \cr
& $f=9$ & 17.5$\pm$0.1 & 58.8$\pm$0.4 & 173$\pm$1 & 514$\pm$3 \cr \hline
UCB-F & $K= \lfloor \sqrt{n/2} \rfloor$ & 39.2$\pm$0.1 & 206.4$\pm$0.4 & 1204$\pm$1 & 4432$\pm$15 \cr \hline
Lower bound & $\sqrt{2n}$ & 14.1 \quad \ \ \ & 44.7 \quad \ \ \ & 141 \quad \quad & 447 \quad \quad
\end{tabular}
\caption{The regrets for Bernoulli rewards with uniform prior $(\beta=1)$.}
\end{table}
\begin{table}[h!]
\begin{tabular}{ll|rrrr}
& & \multicolumn{4}{c}{Regret} \cr
\multicolumn{2}{c|}{Algorithm} & $n=100$ & $n=1000$ & $n=$10,000 & $n=$100,000 \cr \hline
CBT & $\zeta = Cn^{-\frac{1}{3}}$ & 24.9$\pm$0.1 & 124.8$\pm$0.5 & 575$\pm$3 & 2567$\pm$12 \cr
& empirical & 25.6$\pm$0.1 & 132.3$\pm$0.6 & 604$\pm$2 & 2816$\pm$11 \cr \hline
Two-target & $f=3$ & 25.0$\pm$0.1 & 132.1$\pm$0.6 & 649$\pm$3 & 3099$\pm$16 \cr
& $f=6$ & 26.0$\pm$0.1 & 131.1$\pm$0.6 & 600$\pm$3 & 2783$\pm$13 \cr
& $f=9$ & 26.7$\pm$0.1 & 136.6$\pm$0.7 & 605$\pm$3 & 2676$\pm$12 \cr \hline
UCB-F & & 43.6$\pm$0.1 & 386.8$\pm$0.3 & 2917$\pm$2 & 16038$\pm$12 \cr
$n^{\frac{1}{\beta+1}}$-run & non-recall & 28.1$\pm$0.1 & 172.5$\pm$0.9 & 903$\pm$5 & 4434$\pm$28 \cr \hline
Lower bound & $Cn^{\frac{\beta}{\beta+1}}$ & 23.0 \quad \ \ \ & 106.7 \quad \ \ \ & 495 \quad \quad & 2300 \quad \quad \cr
\end{tabular}
\caption{The regrets for Bernoulli rewards with $g(\mu) = \frac{\pi}{2} \sin(\pi \mu)$ $(\beta=2)$.}
\end{table}
\begin{table}[h!]
\begin{tabular}{ll|rrrr}
& & \multicolumn{4}{c}{Regret} \cr
\multicolumn{2}{c|}{Algorithm} & $n=100$ & $n=1000$ & $n=$10,000 & $n=$100,000 \cr \hline
CBT & $\zeta = Cn^{-\frac{1}{4}}$ & 43.3$\pm$0.1 & 254.8$\pm$0.8 & 1402$\pm$5 & \ 7658$\pm$28 \cr
& empirical & 43.1$\pm$0.1 & 263.8$\pm$0.8 & 1542$\pm$5 & \ 8860$\pm$28 \cr \hline
Two-target & $f=3$ & 43.2$\pm$0.1 & 276.0$\pm$1.0 & 1697$\pm$7 & 10235$\pm$44 \cr
& $f=6$ & 44.5$\pm$0.1 & 270.1$\pm$1.0 & 1537$\pm$6 & 8828$\pm$34 \cr
& $f=9$ & 45.6$\pm$0.1 & 278.5$\pm$1.1 & 1510$\pm$6 & 8501$\pm$33 \cr \hline
UCB-F & & 63.2$\pm$0.1 & 592.9$\pm$0.3 & 5120$\pm$3 & 34168$\pm$25 \cr
$n^{\frac{1}{\beta+1}}$-run & non-recall & 45.5$\pm$0.2 & 338.2$\pm$1.4 & 2206$\pm$10 & 14697$\pm$73 \cr \hline
Lower bound & $Cn^{\frac{\beta}{\beta+1}}$ & 39.5 \quad \ \ \ & 222.1 \quad \ \ \ & 1249 \quad \quad & 7022 \quad \quad \cr
\end{tabular}
\caption{The regrets for Bernoulli rewards with $g(\mu) = 1-\cos(\pi \mu)$ $(\beta=3)$.}
\end{table}
\smallskip
{\sc Example} 5.
We consider Bernoulli rewards with the following priors:
\begin{enumerate}
\item $g(\mu)=1$,
which satisfies (A1) with $\alpha=\beta=1$,
\item $g(\mu) = \tfrac{\pi}{2} \sin(\pi \mu)$,
which satisfies (A1) with $\alpha=\frac{\pi^2}{2}$ and $\beta=2$,
\item $g(\mu) = 1-\cos(\pi \mu)$,
which satisfies (A1) with $\alpha=\frac{\pi^2}{2}$ and $\beta=3$.
\end{enumerate}
For all three priors,
the two-target algorithm does better with $f=3$ for smaller $n$,
and with $f=6$ or 9 at larger $n$.
CBT is the best performer uniformly over $n$,
and empirical CBT is also competitive against two-target with $f$ fixed.
Even though optimal CBT performs better than empirical CBT,
optimal CBT assumes knowledge of the prior to select the threshold $\zeta$,
which differs with the priors.
On the other hand the same algorithm is used for all three priors when applying empirical CBT,
and in fact the same algorithm is also used on the URL dataset in Example 6,
with no knowledge of the reward distributions.
Hence though empirical CBT is numerically comparable to two-target and weaker than CBT,
it is more desirable as we do not need to know the prior to use it.
For the uniform prior,
the best performing among the algorithms in Berry et al. (1997) is the non-recalling $\sqrt{n}$-run algorithm.
For UCB-F [cf. Wang et al. (2008)],
the selection of $K = \lfloor (\tfrac{\beta}{\alpha})^{\frac{1}{\beta+1}} (\tfrac{n}{\beta+1})^{\frac{\beta}{\beta+1}} \rfloor$ ($\sim \tfrac{1}{p(\zeta_n)}$)
and ``exploration sequence'' ${\cal E}_m = \sqrt{\log m}$ works well.
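As a sanity check, the lower-bound rows of Tables 1--3 follow from $C n^{\frac{\beta}{\beta+1}}$ with $C = (\frac{\lambda \beta(\beta+1)}{\alpha})^{\frac{1}{\beta+1}}$ as in Section 7; for Bernoulli rewards the first non-zero reward is always 1, so $\lambda = 1$. A short computation reproduces the table entries:

```python
import math

# Lower bound C * n^{beta/(beta+1)} with C = (lam*beta*(beta+1)/alpha)^{1/(beta+1)};
# for Bernoulli rewards lambda = 1 (the first non-zero reward is 1).
def lower_bound(n, alpha, beta, lam=1.0):
    C = (lam * beta * (beta + 1) / alpha) ** (1.0 / (beta + 1))
    return C * n ** (beta / (beta + 1.0))

# (beta, alpha) for the three priors of Example 5.
for beta, alpha in [(1, 1.0), (2, math.pi ** 2 / 2), (3, math.pi ** 2 / 2)]:
    row = [round(lower_bound(n, alpha, beta), 1) for n in (100, 1000, 10000, 100000)]
    print(beta, row)
```

For the uniform prior this reduces to $\sqrt{2n}$, matching the last row of Table 1.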
\begin{table}[h!]
\begin{tabular}{ll|ll}
\multicolumn{2}{c|}{Algorithm} & \multicolumn{2}{c}{Regret} \cr
& $\epsilon$ & $n=$130 & $n=$1300 \cr \hline
emp. CBT & & 212$\pm$2 & 123.8$\pm$0.6 \cr
POKER & & 203 & 132 \cr
$\epsilon$-greedy & 0.05 & 733 & 431 \cr
$\epsilon$-first & 0.15 & 725 & 411 \cr
$\epsilon$-decreasing & 1.0 & 738 & 411
\end{tabular}
\caption{The average regret $(R_n/n)$ for URL rewards.}
\end{table}
\smallskip
{\sc Example} 6. We consider here the URL dataset studied in Vermorel and Mohri~(2005),
where a POKER algorithm for dealing with large number of arms is proposed.
We reproduce part of their Table 1 in our Table 4,
together with new simulations on empirical CBT.
The dataset consists of the retrieval latency of 760 university home-pages,
in milliseconds, with a sample size of more than 1300 for each home-page.
The dataset can be downloaded from ``sourceforge.net/projects/bandit''.
In our simulations the rewards for each home-page are randomly permuted in each run.
We see from Table 4 that POKER does better than empirical CBT at $n=130$,
whereas empirical CBT does better at $n=1300$.
The other algorithms are uniformly worse than both POKER and empirical CBT.
The algorithm $\epsilon$-first refers to exploring with the first $\epsilon n$ rewards,
with random selection of the arms to be played.
This is followed by pure exploitation for the remaining $(1-\epsilon) n$ rewards,
on the ``best" arm (with the smallest sample mean).
The algorithm $\epsilon$-greedy refers to selecting,
in each play,
a random arm with probability $\epsilon$,
and the best arm with the remaining $1-\epsilon$ probability.
The algorithm $\epsilon$-decreasing is like $\epsilon$-greedy except that in the $m$th play,
we select a random arm with probability $\min(1,\frac{\epsilon}{m})$,
and the best arm otherwise.
Both $\epsilon$-greedy and $\epsilon$-decreasing are disadvantaged by not making use of information on the total number of rewards.
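The baseline strategies just described can be sketched as follows; the function names and the small Bernoulli test bed are ours (the URL data itself is not reproduced here), and rewards are again treated as costs, matching the regret tables.

```python
import math
import random

def best_arm(sums, cnts):
    # Arm with the smallest sample mean; unplayed arms rank last
    # (rewards are costs throughout).
    return min(range(len(cnts)),
               key=lambda i: sums[i] / cnts[i] if cnts[i] else math.inf)

def eps_first(arms, n, eps):
    # Explore randomly on the first eps*n plays, then commit to the best arm.
    sums, cnts, total = [0.0] * len(arms), [0] * len(arms), 0.0
    for m in range(n):
        k = random.randrange(len(arms)) if m < eps * n else best_arm(sums, cnts)
        x = arms[k]()
        sums[k] += x; cnts[k] += 1; total += x
    return total

def eps_greedy(arms, n, eps, decreasing=False):
    # Random arm with probability eps (or min(1, eps/m) when decreasing),
    # otherwise the current best arm.
    sums, cnts, total = [0.0] * len(arms), [0] * len(arms), 0.0
    for m in range(1, n + 1):
        p = min(1.0, eps / m) if decreasing else eps
        k = random.randrange(len(arms)) if random.random() < p else best_arm(sums, cnts)
        x = arms[k]()
        sums[k] += x; cnts[k] += 1; total += x
    return total

# A small Bernoulli test bed: 50 arms with uniform means.
random.seed(0)
arms = [lambda mu=mu: float(random.random() < mu)
        for mu in [random.random() for _ in range(50)]]
r1 = eps_first(arms, 1300, eps=0.15)
r2 = eps_greedy(arms, 1300, eps=0.05)
print(r1, r2)
```

Neither strategy adapts its exploration to the horizon $n$ beyond the fixed $\epsilon$, which is the disadvantage noted above.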
Vermorel and Mohri also ran simulations on more complicated strategies like LeastTaken, SoftMax, Exp3, GaussMatch and IntEstim,
with average regret ranging from 287--447 for $n=130$ and 189--599 for $n=1300$.
\section{Proof of Lemma \ref{thm1}}
Let the infinite arms bandit problem be labelled as Problem A,
and let $R_A$ be the smallest regret for this problem.
We shall prove Lemma \ref{thm1} by considering two related problems,
Problems B and~C.
\smallskip
{\sc Proof of Lemma} \ref{thm1}.
Let Problem B be like Problem A except that when we observe the first non-zero reward from arm $k$,
its mean $\mu_k$ is revealed.
Let $R_B$ be the smallest regret for Problem B.
Since in Problem B we have access to additional arm-mean information,
$R_A \geq R_B$.
In Problem B the best solution involves an initial experimentation phase in which we play $K$ arms, each until its first non-zero reward.
This is followed by an exploitation phase in which we play the best arm for the remaining $n-M$ trials,
where $M$ is the number of rewards in the experimentation phase.
It is always advantageous to experiment first because no information on arm mean is gained during exploitation.
For continuous rewards $M=K$.
Let $\mu_b(=\mu_{\rm best})=\min_{1 \leq k \leq K} \mu_k$.
In Problem C like in Problem B,
the mean $\mu_k$ of arm $k$ is revealed upon the observation of its first non-zero reward.
The difference is that instead of playing the best arm for an additional $n-M$ trials,
we play it for $n$ additional trials,
for a total of $n+M$ trials.
Let $R_C$ be the smallest regret of Problem C,
the expected value of $\sum_{k=1}^K n_k \mu_k$,
with $\sum_{k=1}^K n_k=n+M$.
We can extend the optimal solution of Problem B to a (possibly non-optimal) solution of Problem~C by simply playing the best arm with mean $\mu_b$ a further $M$ times.
Hence
\begin{equation} \label{RBC}
[R_A+E(M \mu_b) \geq] R_B+E(M \mu_b) \geq R_C.
\end{equation}
Lemma \ref{thm1} follows from Lemmas \ref{lem2} and \ref{lem3} below.
We prove the more technical Lemma \ref{lem3} in Appendix \ref{lem3prf}.
$\wbox$
\begin{lem} \label{lem2}
$R_C = n \zeta_n$ for $\zeta_n$ satisfying $v(\zeta_n)= \tfrac{\lambda}{n}$.
\end{lem}
{\sc Proof}.
Let arm $j$ be the best arm after $k$ arms have been played in the experimentation phase,
that is $\mu_j = \min_{1 \leq i \leq k} \mu_i$.
Let $\phi_*$ be the strategy of trying out a new arm if and only if $n v(\mu_j) > \lambda$,
or equivalently $\mu_j > \zeta_n$.
Since we need on the average $\frac{1}{p(\zeta_n)}$ arms before achieving $\mu_j \leq \zeta_n$,
the regret of $\phi_*$ is
\begin{equation} \label{Rstar}
R_* = \tfrac{\lambda}{p(\zeta_n)}+n E_g(\mu|\mu \leq \zeta_n) = r_n(\zeta_n) = n \zeta_n,
\end{equation}
see (\ref{hmus}) and Lemma \ref{lem1} for the second and third equalities in (\ref{Rstar}).
Hence $R_C \leq n \zeta_n$ and to show Lemma \ref{lem2},
it remains to show that for any strategy $\phi$,
its regret $R_{\phi}$ is not less than $R_*$.
Let $K_*$ be the number of arms of $\phi_*$ and $K$ the number of arms of $\phi$.
Let $A_1 = \{ K < K_* \} (=\{ \min_{1 \leq k \leq K} \mu_k > \zeta_n \})$ and $A_2 = \{ K > K_* \}(=\{ \min_{1 \leq k \leq K_*} \mu_k \leq \zeta_n, K > K_* \})$.
We can express
\begin{equation} \label{RR}
R_{\phi}-R_* = \sum_{\ell=1}^2 \Big\{ \lambda E[(K-K_*) {\bf 1}_{A_{\ell}}]+n E \Big[ \Big( \min_{1 \leq k \leq K} \mu_k - \min_{1 \leq k \leq K_*} \mu_k \Big) {\bf 1}_{A_{\ell}} \Big] \Big\}.
\end{equation}
Under $A_1$,
$\min_{1 \leq k \leq K} \mu_k > \zeta_n$ and therefore by (\ref{Rstar}),
\begin{eqnarray} \label{eq3.1}
& & \lambda E[(K-K_*) {\bf 1}_{A_1}]+ nE \Big[ \Big(\min_{1 \leq k \leq K} \mu_k - \min_{1 \leq k \leq K_*} \mu_k \Big) {\bf 1}_{A_1} \Big] \\ \nonumber
& = & -\tfrac{\lambda P(A_1)}{p(\zeta_n)}+n \Big\{ E \Big[ \Big( \min_{1 \leq k \leq K} \mu_k \Big) {\bf 1}_{A_1} \Big]-P(A_1)E_g(\mu|\mu \leq \zeta_n) \Big\} \\ \nonumber
& \geq & P(A_1) \{ -\tfrac{\lambda}{p(\zeta_n)}+n[\zeta_n-E_g(\mu|\mu \leq \zeta_n)] \} = 0.
\end{eqnarray}
The identity $E[(K_*-K) {\bf 1}_{A_1}] = \tfrac{P(A_1)}{p(\zeta_n)}$ is due to $\min_{1 \leq k \leq K} \mu_k > \zeta_n$ when there are $K$ arms,
and so an additional $\frac{1}{p(\zeta_n)}$ arms on average are required under strategy $\phi_*$,
to get an arm with mean not more than $\zeta_n$.
In view that $(K-K_*) {\bf 1}_{A_2} =\sum_{j=0}^{\infty} {\bf 1}_{\{ K > K_*+j \}}$ and
\begin{eqnarray*}
& & \Big( \min_{1 \leq k \leq K} \mu_k - \min_{1 \leq k \leq K_*} \mu_k \Big) {\bf 1}_{A_2} \cr
& = & \sum_{j=0}^{\infty} \Big( \min_{1 \leq k \leq K_*+j+1} \mu_k - \min_{1 \leq k \leq K_*+j} \mu_k \Big) {\bf 1}_{\{ K > K_*+j \}},
\end{eqnarray*}
we can check that
\begin{eqnarray} \label{eq3.2}
& & \lambda E[(K-K_*) {\bf 1}_{A_2}]+n E \Big[ \Big( \min_{1 \leq k \leq K} \mu_k-\min_{1 \leq k \leq K_*} \mu_k \Big) {\bf 1}_{A_2} \Big] \\ \nonumber
& = & \sum_{j=0}^{\infty} E \Big\{ \Big[ \lambda + n \Big( \min_{1 \leq k \leq K_*+j+1} \mu_k - \min_{1 \leq k \leq K_*+j} \mu_k \Big) \Big] {\bf 1}_{\{ K > K_*+j \}} \Big\} \\ \nonumber
& = & \sum_{j=0}^{\infty} E \Big\{ \Big[ \lambda-n v \Big(\min_{1 \leq k \leq K_*+j} \mu_k \Big) \Big] {\bf 1}_{ \{ K>K_*+j \}} \Big\} \geq 0.
\end{eqnarray}
The inequality in (\ref{eq3.2}) follows from
$$v(\min_{1 \leq k \leq K_*+j} \mu_k) \leq v(\min_{1 \leq k \leq K_*} \mu_k) \leq v(\zeta_n) = \tfrac{\lambda}{n},
$$
as $v$ is monotone increasing.
Lemma \ref{lem2} follows from (\ref{Rstar})--(\ref{eq3.2}).
$\wbox$
\begin{lem} \label{lem3}
$E(M \mu_b) = o(n^{\frac{\beta}{\beta+1}})$.
\end{lem}
\smallskip
Bonald and Prouti\`{e}re (2013) also referred to Problem B in their lower bound for Bernoulli rewards.
What is different in our proof of Lemma \ref{thm1} is a further simplification by considering Problem C,
in which the number of rewards in the exploitation phase is fixed to be $n$.
We show in Lemma \ref{lem2} that under Problem C the optimal regret has a simple expression $n \zeta_n$,
and reduce the proof of Lemma \ref{thm1} to showing Lemma~\ref{lem3}.
\section{Proof of Theorem \ref{thm2}}
We preface the proof of Theorem \ref{thm2} with Lemmas \ref{lem4}--\ref{lem7}.
The lemmas are proved for discrete rewards in Section 7.1 and continuous rewards in Section 7.2.
Consider $X_1, X_2, \ldots$ i.i.d. $F_{\mu}$.
Let $S_t = \sum_{u=1}^t X_u$,
$\bar X_t = \frac{S_t}{t}$ and $\wht \sigma_t^2 = t^{-1} \sum_{u=1}^t (X_u-\bar X_t)^2$.
Let
\begin{eqnarray} \label{T1}
T_b & = & \inf \{ t: S_t > b_n t \zeta_n \}, \\ \label{T2}
T_c & = & \inf \{ t: S_t > t \zeta_n + c_n \wht \sigma_t \sqrt{t} \},
\end{eqnarray}
with $b_n$,
$c_n$ satisfying (\ref{bncn}) and $\zeta_n \sim Cn^{-\frac{1}{\beta+1}}$ for $C=(\frac{\lambda \beta(\beta+1)}{\alpha})^{\frac{1}{\beta+1}}$.
Let
\begin{equation}
\label{dn}
d_n = n^{-\omega} \text{ for some } 0 < \omega < \tfrac{1}{\beta+1}.
\end{equation}
\begin{lem} \label{lem4}
As $n \rightarrow \infty$,
\begin{eqnarray} \label{lem4.1}
\sup_{\mu \geq d_n} [\min(\mu,1) E_{\mu} T_b] & = & O(1), \\ \label{lem4.2}
E_g(T_b \mu {\bf 1}_{\{ \mu \geq d_n \}}) & \leq & \lambda+o(1).
\end{eqnarray}
\end{lem}
\begin{lem} \label{lem5}
Let $\epsilon >0$.
As $n \rightarrow \infty$,
\begin{eqnarray} \label{lem5.1}
\sup_{(1+\epsilon)\zeta_n \leq \mu \leq d_n} [\mu E_{\mu}(T_c \wedge n)] & = & O(c_n^3+\log n), \\ \label{lem5.2}
E_g[(T_c \wedge n) \mu {\bf 1}_{\{ (1+\epsilon)\zeta_n \leq \mu \leq d_n \}}] & \rightarrow & 0.
\end{eqnarray}
\end{lem}
\begin{lem} \label{lem6}
Let $0 < \epsilon < 1$.
As $n \rightarrow \infty$,
$$\sup_{\mu \leq (1-\epsilon)\zeta_n} P_{\mu}(T_b < \infty) \rightarrow 0.
$$
\end{lem}
\begin{lem} \label{lem7}
Let $0 < \epsilon < 1$.
As $n \rightarrow \infty$,
$$\sup_{\mu \leq (1-\epsilon)\zeta_n} P_{\mu}(T_c < \infty) \rightarrow 0.
$$
\end{lem}
The number of times an arm is played has distribution bounded above by $T:= T_b \wedge T_c$.
Lemmas \ref{lem6} and \ref{lem7} say that an arm with mean less than $(1-\epsilon) \zeta_n$ is unlikely to be rejected,
whereas (\ref{lem4.2}) and (\ref{lem5.2}) say that the regret due to sampling from an arm with mean more than $(1+\epsilon) \zeta_n$ is asymptotically bounded by $\lambda$.
The remaining (\ref{lem4.1}) and (\ref{lem5.1}) are technical relations used in the proof of Theorem \ref{thm2}.
\smallskip
{\sc Proof of Theorem} \ref{thm2}.
The number of times arm $k$ is played is $n_k$,
and it is distributed as $T_b \wedge T_c \wedge (n-\sum_{\ell=1}^{k-1} n_{\ell})$.
Let $0 < \epsilon < 1$.
We can express
\begin{equation} \label{Rnz}
R_n - n \zeta_n = z_1+z_2+z_3 = z_1+z_2-|z_3|,
\end{equation}
where $z_i = E[\sum_{k: \mu_k \in D_i} n_k(\mu_k-\zeta_n)]$ for
$$D_1 = [(1+\epsilon) \zeta_n,\infty), \quad D_2 = ((1-\epsilon) \zeta_n,(1+\epsilon) \zeta_n), \quad D_3 = (0,(1-\epsilon) \zeta_n].
$$
It is easy to see that $z_2 \leq \epsilon n\zeta_n$.
We shall show that
\begin{eqnarray} \label{z1}
z_1 & \leq & \tfrac{\lambda+o(1)}{(1-\epsilon)^{\beta} p(\zeta_n)}, \\ \label{z3m}
|z_3| & \geq & [(\tfrac{1-\epsilon}{1+\epsilon})^{\beta}+o(1)][n \epsilon \zeta_n+\tfrac{(1-\epsilon)\lambda}{p(\zeta_n)}].
\end{eqnarray}
We conclude Theorem \ref{thm2} from (\ref{Rnz})--(\ref{z3m}) with $\epsilon \to 0$.
$\wbox$
\smallskip
{\sc Proof of} (\ref{z1}).
Since $T=T_b \wedge T_c$,
by Lemmas~\ref{lem6} and \ref{lem7},
\begin{eqnarray}\label{qn}
q_n & := & \sup_{\mu \leq (1-\epsilon) \zeta_n} P_{\mu}(T < \infty)\\ \nonumber
&\leq & \sup_{\mu \leq (1-\epsilon) \zeta_n}[P_{\mu}(T_b < \infty)+P_{\mu}(T_c < \infty)] \rightarrow 0.
\end{eqnarray}
That is, an arm with mean less than $(1-\epsilon) \zeta_n$ is rejected with negligible probability for large $n$.
Since the total number of played arms $K$ is bounded above by a geometric random variable with mean $\frac{1}{P_g(T=\infty)}$,
by (\ref{qn}) and $p(\zeta) \sim \frac{\alpha}{\beta} \zeta^{\beta}$ as $\zeta \rightarrow 0$,
\begin{equation}
\label{EK}
EK \leq \tfrac{1}{P_g(T=\infty)} \leq \tfrac{1}{(1-q_n)p((1-\epsilon) \zeta_n)} \sim \tfrac{1}{(1-\epsilon)^\beta p(\zeta_n)}.
\end{equation}
By (\ref{lem4.2}) and (\ref{lem5.2}),
\begin{eqnarray*}
& & E_g(n_1 \mu_1 {\bf 1}_{\{ \mu_1 \geq (1+\epsilon) \zeta_n \}}) \cr
& = & E_g(n_1 \mu_1 {\bf 1}_{\{(1+\epsilon) \zeta_n \leq \mu_1 \leq d_n\}})+E_g(n_1 \mu_1 {\bf 1}_{\{ \mu_1 \geq d_n \}}) \cr
& \leq & E_g[(T_c \wedge n) \mu_1 {\bf 1}_{\{(1+\epsilon) \zeta_n \leq \mu_1 \leq d_n\}}]+E_g(T_b \mu_1 {\bf 1}_{\{ \mu_1 \geq d_n \}}) \cr
& \leq & \lambda+o(1),
\end{eqnarray*}
and (\ref{z1}) follows from (\ref{EK}) and $z_1 \leq E_g(n_1 \mu_1 {\bf 1}_{\{ \mu_1 \geq (1+\epsilon) \zeta_n \}}) EK$.
$\wbox$
\smallskip
{\sc Proof of} (\ref{z3m}).
Let $\ell$ be the first arm with mean not more than $(1-\epsilon) \zeta_n$.
We have
\begin{eqnarray} \label{|z3|}
|z_3| & = & E \Big[ \sum_{k: \mu_k \in D_3} n_k(\zeta_n-\mu_k) \Big] \\ \nonumber
& \geq & (E n_{\ell}) \{ \zeta_n - E_g[\mu|\mu \leq (1-\epsilon) \zeta_n] \}.
\end{eqnarray}
Since $v(\zeta_n) \sim \frac{\lambda}{n}$ and $p(\zeta) \sim \frac{\alpha}{\beta} \zeta^{\beta}$,
$v(\zeta) \sim \frac{\alpha}{\beta(\beta+1)} \zeta^{\beta+1}$ as $\zeta \rightarrow 0$,
\begin{eqnarray*}
& & \zeta_n-E_g[\mu|\mu \leq (1-\epsilon) \zeta_n] \cr
& = & \zeta_n-\{ (1-\epsilon) \zeta_n-E_g[(1-\epsilon) \zeta_n-\mu| \mu \leq (1-\epsilon) \zeta_n] \} \cr
& = & \zeta_n-[(1-\epsilon)\zeta_n-\tfrac{v((1-\epsilon) \zeta_n)}{p((1-\epsilon) \zeta_n)}] \cr
& \sim & \epsilon \zeta_n + \tfrac{(1-\epsilon) v(\zeta_n)}{p(\zeta_n)} \sim \epsilon \zeta_n + \tfrac{(1-\epsilon) \lambda}{n p(\zeta_n)},
\end{eqnarray*}
and (\ref{z3m}) thus follows from (\ref{|z3|}) and
\begin{equation} \label{Enl}
E n_{\ell} \geq [(\tfrac{1-\epsilon}{1+\epsilon})^{\beta}+o(1)] n.
\end{equation}
Let $j$ be the first arm with mean not more than $(1+\epsilon) \zeta_n$ and $M = \sum_{i=1}^{j-1} n_i$.
We have
$$E n_{\ell} \geq (1-q_n) E(n-M) P(\ell=j).
$$
Since $q_n \rightarrow 0$ and $P(\ell=j) \rightarrow (\frac{1-\epsilon}{1+\epsilon})^{\beta}$,
to show (\ref{Enl}) it suffices to show that $EM=o(n)$.
Indeed by (\ref{lem4.1}),
(\ref{lem5.1}) and $E_{\mu} n_1 \leq E_{\mu}(T \wedge n)$,
\begin{eqnarray*}
& & \sup_{\mu \geq (1+\epsilon) \zeta_n} [\min(\mu,1) E_{\mu} n_1] \cr
& \leq & \max \Big[ \sup_{(1+\epsilon) \zeta_n \leq \mu \leq d_n} \mu E_\mu(T_c \wedge n), \sup_{\mu \geq d_n} \min(\mu,1) E_{\mu}T_b \Big] = O(c_n^3+\log n).
\end{eqnarray*}
Hence in view that $\frac{1}{p((1+\epsilon)\zeta_n)} = O(n^{\frac{\beta}{\beta+1}})$ and $P_g(\mu_1 > (1+\epsilon) \zeta_n) \rightarrow 1$ as $n \rightarrow \infty$,
\begin{eqnarray*}
EM & \leq & \tfrac{1}{p((1+\epsilon) \zeta_n)} E_g(n_1|\mu_1 > (1+\epsilon) \zeta_n) \cr
& = & O( n^{\frac{\beta}{\beta+1}}) E_g[\tfrac{c_n^3+\log n}{\min(\mu_1,1)} \big| \mu_1 > (1+\epsilon) \zeta_n ] \cr
& = & O( n^{\frac{\beta}{\beta+1}} (c_n^3+\log n)) \int_{(1+\epsilon) \zeta_n}^{\infty} \tfrac{g(\mu)}{\min(\mu,1)} d \mu \cr
& = & O( n^{\frac{\beta}{\beta+1}} (c_n^3+\log n)) \max(n^{\frac{1-\beta}{\beta+1}},\log n) = o(n).
\end{eqnarray*}
The first relation in the line above follows from
$$\int_{(1+\epsilon)\zeta_n}^\infty \tfrac{g(\mu)}{\min(\mu,1)} d \mu = \left\{ \begin{array}{ll} O(1) & \mbox{ if } \beta > 1, \cr
O(\log n) & \mbox{ if } \beta=1, \cr
O(n^{\frac{1-\beta}{\beta+1}}) & \mbox{ if } \beta<1. \quad \mbox{$\wbox$} \end{array} \right.
$$
\subsection{Proofs of Lemmas \ref{lem4}--\ref{lem7} for discrete rewards}
In the case of discrete rewards,
one difficulty is that for $\mu_k$ small,
there are potentially multiple plays on arm $k$ before a non-zero reward is observed.
Condition (A2) is helpful in ensuring that the mean of this non-zero reward is not too large.
\smallskip
{\sc Proof of Lemma} \ref{lem4}.
Recall that
$$T_b = \inf \{ t: S_t > b_n t \zeta_n \},
$$
and that $d_n = n^{-\omega}$ for some $0 < \omega < \frac{1}{\beta+1}$.
We shall show that
\begin{eqnarray} \label{ET}
\sup_{\mu \geq d_n} [\min(\mu,1) E_{\mu} T_b] & = & O(1), \\ \label{limlam}
E_g(T_b \mu {\bf 1}_{\{ \mu \geq d_n \}}) & \leq & \lambda+o(1).
\end{eqnarray}
Let $\theta = 2\omega \log n$.
Since $X_u$ is integer-valued,
it follows from Markov's inequality that
\begin{equation} \label{Stb}
P_{\mu}(S_t \leq b_n t \zeta_n) \leq [e^{\theta b_n \zeta_n} M_{\mu}(-\theta)]^t \leq \{ e^{\theta b_n \zeta_n} [P_{\mu}(X=0)+e^{-\theta}] \}^t.
\end{equation}
By $P_{\mu}(X>0) \geq a_1 d_n$ for $\mu \geq d_n$ [see (A2)],
$\theta b_n \zeta_n = o(d_n)$ [because $\theta$ and $b_n$ are both sub-polynomial in $n$ and $\zeta_n = O(n^{-\frac{1}{\beta+1}})]$ and (\ref{Stb}),
uniformly over $\mu \geq d_n$,
\begin{eqnarray} \label{ET1}
E_{\mu} T_b & = & 1 + \sum_{t=1}^{\infty} P_{\mu}(T_b>t) \\ \nonumber
& \leq & 1+ \sum_{t=1}^{\infty} P_{\mu}(S_t \leq b_n t \zeta_n) \\ \nonumber
& \leq & \{ 1-e^{\theta b_n \zeta_n}[P_{\mu}(X=0)+e^{-\theta}] \}^{-1} \\ \nonumber
& = & \{ 1-[1+o(d_n)][P_{\mu}(X=0)+d_n^2] \}^{-1} \\ \nonumber
& = & [ P_{\mu}(X>0)+o(d_n)]^{-1} \sim [P_{\mu}(X>0)]^{-1}.
\end{eqnarray}
The term inside $\{ \cdot \}$ in (\ref{Stb}) is not more than $[1+o(d_n)](1-a_1 d_n+d_n^2) < 1$ for $n$ large and this justifies the second inequality in (\ref{ET1}).
We conclude (\ref{ET}) from (\ref{ET1}) and (A2).
By (\ref{ET1}),
\begin{eqnarray*}
E_g[T_b \mu {\bf 1}_{\{ \mu \geq d_n \}}] & = & \int_{d_n}^{\infty} E_{\mu}(T_b)\mu g(\mu) d \mu \cr
& \leq & [1+o(1)] \int_{d_n}^{\infty} \tfrac{E_{\mu}(X)}{P_{\mu}(X>0)} g(\mu) d \mu \cr
& = & [1+o(1)] \int_{d_n}^{\infty} E_{\mu}(X|X>0) g(\mu) d \mu \rightarrow \lambda,
\end{eqnarray*}
hence (\ref{limlam}) holds.
$\wbox$
\smallskip
{\sc Proof of Lemma} \ref{lem5}.
Recall that $T_c = \inf \{ t: S_t > t \zeta_n + c_n \wht \sigma_t \sqrt{t} \}$ and let $\epsilon > 0$.
We want to show that
\begin{eqnarray} \label{lem5a}
\sup_{(1+\epsilon)\zeta_n \leq\mu \leq d_n} \mu E_{\mu} (T_c \wedge n) & =& O(c_n^3+ \log n), \\ \label{lem5b}
E_g[(T_c \wedge n) \mu {\bf 1}_{\{(1+\epsilon)\zeta_n \leq \mu \leq d_n \}}] & \rightarrow & 0.
\end{eqnarray}
We first show that there exists $\kappa>0$ such that as $n \rightarrow \infty$,
\begin{equation} \label{show}
\sup_{\mu \leq d_n} \Big[ \mu \sum_{t=1}^n P_{\mu}(\wht \sigma_t^2 \geq \kappa \mu) \Big] =O(\log n).
\end{equation}
Since $X$ is non-negative integer-valued, $X^2 \leq X^4$, hence by (\ref{E4})
there exists $\kappa>0$ such that $\rho_{\mu}:=E_{\mu} X^2 \leq \frac{\kappa \mu}{2}$ for $\mu \leq d_n$ and $n$ large,
therefore by (\ref{E4}) again and Chebyshev's inequality,
\begin{eqnarray*}
P_{\mu}(\wht \sigma_t^2 \geq \kappa \mu) & \leq & P_{\mu} \Big( \sum_{u=1}^t X_u^2 \geq t \kappa \mu \Big) \cr
& \leq & P_{\mu} \Big( \sum_{u=1}^t (X_u^2-\rho_{\mu}) \geq \tfrac{t \kappa \mu}{2} \Big) \cr
& \leq & \tfrac{t {\rm Var}_{\mu}(X^2)}{(t \kappa \mu/2)^2} = O((t \mu)^{-1}),
\end{eqnarray*}
and (\ref{show}) holds.
By (\ref{show}),
uniformly over $(1+\epsilon)\zeta_n \leq \mu \leq d_n$,
\begin{eqnarray} \label{En2}
E_{\mu} (T_c \wedge n) & = & 1 + \sum_{t=1}^{n-1} P_{\mu}(T_c > t) \\ \nonumber
& \leq & 1+ \sum_{t=1}^{n-1} P_{\mu}(S_t \leq t \zeta_n + c_n \wht \sigma_t\sqrt{t}) \\ \nonumber
& \leq & 1+ \sum_{t=1}^{n-1} P_{\mu}(S_t \leq t \zeta_n + c_n \sqrt{\kappa\mu t}) + O(\tfrac{\log n}{\mu}).
\end{eqnarray}
Let $0 < \delta < \frac{1}{2}$ to be further specified.
Uniformly over $t \geq c_n^3 \mu^{-1}$,
$\mu t/(c_n \sqrt{\kappa\mu t}) \rightarrow \infty$ and therefore by (\ref{Mmu2}), $\mu \geq (1+\epsilon)\zeta_n $ and Markov's inequality,
for $n$ large,
\begin{eqnarray} \label{case2}
P_{\mu}(S_t \leq t \zeta_n+c_n \sqrt{\kappa\mu t}) & \leq & P_{\mu}(S_t \leq t(\zeta_n+\delta \mu)) \\ \nonumber
& \leq & e^{\theta_{\delta} t (\zeta_n+\delta \mu)} M_{\mu}^t (-\theta_{\delta}) \\ \nonumber
& \leq & e^{t \theta_{\delta}[\zeta_n-(1-2 \delta) \mu]} \leq e^{-\eta t \theta_{\delta} \mu},
\end{eqnarray}
where $\eta =1-2 \delta-\frac{1}{1+\epsilon} >0$ (for $\delta$ chosen small).
Since $1-e^{-\eta \theta_{\delta} \mu} \sim \eta \theta_{\delta} \mu$ as $\mu \rightarrow 0$,
\begin{equation} \label{cn3}
c_n^3 \mu^{-1} + \sum_{t \geq c_n^3 \mu^{-1}} e^{-\eta t \theta_{\delta} \mu} = O(c_n^3 \mu^{-1}),
\end{equation}
and substituting (\ref{case2}) into (\ref{En2}) gives us (\ref{lem5a}).
By (\ref{lem5a}),
\begin{eqnarray} \nonumber
E_g[(T_c \wedge n) \mu {\bf 1}_{\{(1+\epsilon)\zeta_n \leq \mu \leq d_n \}}] & = & P_g((1+\epsilon)\zeta_n \leq \mu \leq d_n) O(c_n^3+\log n) \\ \nonumber
& = & O(d_n^{\beta}(c_n^3+\log n)),
\end{eqnarray}
and (\ref{lem5b}) holds since $c_n$ is sub-polynomial in $n$. $\wbox$
\medskip
{\sc Proof of Lemma} \ref{lem6}.
We want to show that
\begin{equation} \label{Pgn1}
P_{\mu}(S_t > tb_n \zeta_n \mbox{ for some } t \geq 1) \rightarrow 0
\end{equation}
uniformly over $\mu \leq (1-\epsilon)\zeta_n$.
By (\ref{a2}) and Bonferroni's inequality,
\begin{eqnarray} \label{lem6.1}
& & P_{\mu}(S_t > tb_n \zeta_n \mbox{ for some } t \leq \tfrac{1}{\sqrt{b_n} \zeta_n}) \\ \nonumber
& \leq & P_{\mu}(X_t > 0 \mbox{ for some } t \leq \tfrac{1}{\sqrt{b_n} \zeta_n}) \leq \tfrac{a_2 \mu}{\sqrt{b_n} \zeta_n} \rightarrow 0.
\end{eqnarray}
By (\ref{Mbound}) and a change-of-measure argument,
for $n$ large,
\begin{eqnarray} \label{lem6.2}
& & P_{\mu}(S_t > tb_n \zeta_n \mbox{ for some } t > \tfrac{1}{\sqrt{b_n} \zeta_n}) \\ \nonumber
& \leq & \sup_{t > \frac{1}{\sqrt{b_n} \zeta_n}} [e^{-\theta_1 b_n \zeta_n} M_{\mu}(\theta_1)]^t \leq e^{-\theta_1(b_n \zeta_n-2 \mu)/(\zeta_n \sqrt{b_n})} \rightarrow 0.
\end{eqnarray}
To see the first inequality of (\ref{lem6.2}),
let $f_\mu$ be the density of $X_1$ with respect to some $\sigma$-finite measure, and let $E_\mu^{\theta_1} (P_\mu^{\theta_1})$ denote expectation (probability) with respect to density
\begin{equation*}
f_\mu^{\theta_1}(x) := [M_\mu(\theta_1)]^{-1}e^{\theta_1 x} f_\mu(x).
\end{equation*}
Let $T = \inf\{ t > \tfrac{1}{\sqrt{b_n} \zeta_n} : S_t > tb_n\zeta_n\}$.
It follows from a change of measure that
\begin{eqnarray} \label{com}
P_\mu(T = t) & = & M_\mu^t(\theta_1) E_\mu^{\theta_1}(e^{-\theta_1 S_t}\mathbf{1}_{\lbrace T = t \rbrace}) \\ \nonumber
& \leq & [e^{-\theta_1 b_n\zeta_n} M_\mu(\theta_1)]^t P_\mu^{\theta_1}(T=t),
\end{eqnarray}
and the first inequality of (\ref{lem6.2}) follows from summing (\ref{com}) over $t > \tfrac{1}{\sqrt{b_n} \zeta_n}$.
$\wbox$
\smallskip
{\sc Proof of Lemma} \ref{lem7}.
We want to show that
\begin{equation} \label{lem7.1}
P_{\mu}(S_t > t \zeta_n + c_n \wht \sigma_t \sqrt{t} \mbox{ for some } t \geq 1) \rightarrow 0
\end{equation}
uniformly over $\mu \leq (1-\epsilon)\zeta_n$.
By (\ref{a2}) and Bonferroni's inequality,
\begin{eqnarray} \label{Bonf}
& & P_{\mu}(S_t > t \zeta_n + c_n \wht \sigma_t \sqrt{t} \mbox{ for some } t \leq \tfrac{1}{c_n \mu}) \\ \nonumber
& \leq & P_{\mu}(X_t>0 \mbox{ for some } t \leq \tfrac{1}{c_n \mu}) \leq \tfrac{a_2}{c_n} \rightarrow 0.
\end{eqnarray}
Moreover
\begin{eqnarray} \label{lem7.2}
& & P_{\mu}(S_t > t \zeta_n+c_n \wht \sigma_t \sqrt{t} \mbox{ for some } t > \tfrac{1}{c_n \mu}) \leq {\rm (I)} + {\rm (II)}, \\ \nonumber
\mbox{where (I)} & = & P_{\mu}(S_t > t \zeta_n+c_n (\mu t/2)^{\frac{1}{2}} \mbox{ for some } t > \tfrac{1}{c_n \mu}), \\ \nonumber
{\rm (II)} & = & P_{\mu}(\wht \sigma_t^2 \leq \tfrac{\mu}{2} \mbox{ and } S_t \geq t \zeta_n \mbox{ for some } t > \tfrac{1}{c_n \mu}).
\end{eqnarray}
By (\ref{Bonf}) and (\ref{lem7.2}),
to show (\ref{lem7.1}),
it suffices to show that (I)$\rightarrow 0$ and (II)$\rightarrow 0$.
Let $0 < \delta \leq 1$ be such that $1+\delta < (1-\epsilon)^{-1}$.
Hence $\mu \leq (1-\epsilon)\zeta_n$ implies $\zeta_n \geq (1+\delta) \mu$.
It follows from (\ref{Mbound}) and change-of-measure argument [see (\ref{lem6.2}) and (\ref{com})] that
\begin{eqnarray*}
{\rm (I)} & \leq & \sup_{t>\frac{1}{c_n \mu}} [e^{-\theta_{\delta}[t \zeta_n+c_n(\mu t/2)^{\frac{1}{2}}]} M_{\mu}^t(\theta_{\delta})] \\
& \leq & \exp \{ -\theta_{\delta}[\zeta_n-(1+\delta) \mu]/(c_n \mu)-\theta_{\delta} (c_n/2)^{\frac{1}{2}} \}\\
& \leq & \exp \{-\theta_{\delta} (c_n/2)^{\frac{1}{2}}\} \rightarrow 0.
\end{eqnarray*}
Since $X_u^2 \geq X_u$,
the inequality $S_t \geq t \zeta_n (\geq t \mu)$ implies $\sum_{u=1}^t X_u^2 \geq t \mu$,
and this,
together with $\wht \sigma_t^2 \leq \frac{\mu}{2}$ implies that $\bar X_t^2 \geq \frac{\mu}{2}$.
Hence by (\ref{Mbound}) and a change-of-measure argument,
for $n$ large,
\begin{eqnarray*}
\mbox{(II)} & \leq & P_{\mu}(\bar X_t \geq \sqrt{\tfrac{\mu}{2}} \mbox{ for some } t > \tfrac{1}{c_n \mu}) \cr
& \leq & \sup_{t > \frac{1}{c_n \mu}} [e^{-\theta_1 \sqrt{\mu/2}} M_{\mu}(\theta_1)]^t \cr
& \leq & \exp \{-\theta_1[\sqrt{\tfrac{\mu}{2}}-2 \mu]/(c_n \mu)\} \cr
& \leq & \exp \big\{-\theta_1 \big[ \tfrac{1}{c_n \sqrt{2(1-\epsilon)\zeta_n}}-\tfrac{2}{c_n} \big] \big\} \rightarrow 0 . \quad \mbox{$\wbox$}
\end{eqnarray*}
\subsection{Proofs of Lemmas \ref{lem4}--\ref{lem7} for continuous rewards}
In the case of continuous rewards,
the proofs are simpler because the rewards are non-zero;
in particular, $\lambda = E_g \mu$.
\medskip
{\sc Proof of Lemma} \ref{lem4}.
To show (\ref{lem4.1}) and (\ref{lem4.2}),
it suffices to show that
\begin{equation} \label{42.1}
\sup_{\mu \geq d_n} E_{\mu} T_b \leq 1+o(1).
\end{equation}
Let $\theta > 0$ to be further specified.
By Markov's inequality,
\begin{equation*}
P_{\mu}(S_t \leq b_n t \zeta_n) \leq [e^{\theta b_n \zeta_n}M_{\mu}(-\theta)]^t.
\end{equation*}
Moreover, for any $\gamma >0$,
\begin{equation*}
M_{\mu}(-\theta) \leq P_{\mu}(X \leq \gamma \mu)+e^{-\gamma \theta \mu},
\end{equation*}
hence
\begin{eqnarray} \label{42.3}
E_{\mu} T_b & \leq & 1+\sum_{t=1}^{\infty} P_{\mu}(S_t \leq b_n t \zeta_n) \\ \nonumber
& \leq & \{ 1-e^{\theta b_n \zeta_n}[P_{\mu}(X \leq \gamma \mu)+e^{-\gamma \theta \mu}] \}^{-1}.
\end{eqnarray}
Let $\gamma = \frac{1}{\log n}$ and $\theta = n^{\eta}$ for some $ \omega<\eta <\frac{1}{\beta+1}$.
By (\ref{bncn}),
(\ref{Prho}) and (\ref{dn}),
for $\mu \geq d_n$,
$$e^{\theta b_n \zeta_n} \rightarrow 1,
\quad e^{-\gamma \theta \mu} \rightarrow 0,
\quad P_{\mu}(X \leq \gamma \mu) \rightarrow 0,
$$
and (\ref{42.1}) follows from (\ref{42.3}). $\wbox$
\smallskip
{\sc Proof of Lemma} \ref{lem5}.
By (\ref{E4}), for $\mu$ small,
\begin{eqnarray} \nonumber
\rho_\mu : = E_\mu X^2 & = & E_\mu(X^2 \mathbf{1}_{\lbrace X<1 \rbrace}) + E_\mu(X^2 \mathbf{1}_{\lbrace X \geq 1 \rbrace}) \\ \nonumber
& \leq & E_\mu X + E_\mu X^4 = O(\mu).
\end{eqnarray}
Hence to show (\ref{lem5.1}) and (\ref{lem5.2}),
we proceed as in the proof of Lemma \ref{lem5} for discrete rewards,
applying (\ref{tau2}) in place of (\ref{Mmu2}),
with any fixed $\theta>0$ in place of $\theta_{\delta}$ in (\ref{case2}) and (\ref{cn3}).
$\wbox$
\smallskip
{\sc Proof of Lemma} \ref{lem6}.
It follows from (\ref{tau1}) with $\theta = \frac{\tau_1}{\mu}$ and a change-of-measure argument [see (\ref{lem6.2}) and (\ref{com})] that for $n$ large,
\begin{eqnarray*}
& & P_{\mu}(S_t > tb_n \zeta_n \mbox{ for some } t \geq 1) \\
& \leq & \sup_{t \geq 1} [e^{-\theta b_n \zeta_n} M_{\mu}(\theta)]^t \leq e^{-\theta(b_n \zeta_n-2 \mu)} \rightarrow 0. \quad \mbox{$\wbox$}
\end{eqnarray*}
\smallskip
{\sc Proof of Lemma} \ref{lem7}.
Let $\eta > 0$ and choose $\delta>0$ such that $(1+\delta)(1-\epsilon) < 1$.
It follows from (\ref{tau1}) with $\theta = \frac{\tau_\delta}{\mu}$ and a change-of-measure argument that for $u$ large,
\begin{eqnarray} \label{72.2}
& & P_{\mu}(S_t \geq t \zeta_n+c_n \wht \sigma_t \sqrt{t} \mbox{ for some } t > u) \\ \nonumber
& \leq & P_{\mu}(S_t \geq t \zeta_n \mbox{ for some } t > u) \\ \nonumber
& \leq & \sup_{t>u} [e^{-\theta \zeta_n} M_{\mu}(\theta)]^t \leq e^{-u \theta[\zeta_n-(1+\delta) \mu]} \leq e^{-u \tau_{\delta} [(1-\epsilon)^{-1}-(1+\delta)]} \leq \eta.
\end{eqnarray}
By (\ref{sigbound}),
we can select $\gamma > 0$ such that for $n$ large (so that $\mu \leq (1-\epsilon)\zeta_n \leq \min_{1 \leq t \leq u} \xi_t)$,
\begin{equation} \label{7eta}
\sum_{t=1}^u P_{\mu}(\wht \sigma_t^2 \leq \gamma \mu^2) \leq \eta.
\end{equation}
Let $\theta = \frac{\tau_1}{\mu}$. By (\ref{tau1}), (\ref{7eta}) and Bonferroni's inequality,
\begin{eqnarray} \label{72.1}
& & P_{\mu}(S_t > t \zeta_n + c_n \wht \sigma_t \sqrt{t} \mbox{ for some } t \leq u) \\ \nonumber
&\leq & P_{\mu}(S_t \geq c_n \wht \sigma_t \sqrt{t} \mbox{ for some } t \leq u) \\ \nonumber
& \leq & \eta + \sum_{t=1}^u P_{\mu}(S_t \geq c_n \mu \sqrt{\gamma t}) \\ \nonumber
& \leq & \eta + \sum_{t=1}^u e^{-\theta c_n \mu \sqrt{\gamma t}} M_{\mu}^t(\theta) \\ \nonumber
& \leq & \eta + \sum_{t=1}^u e^{-\tau_1(c_n \sqrt{\gamma t}-2t)} \rightarrow \eta.
\end{eqnarray}
Lemma \ref{lem7} follows from (\ref{72.2}) and (\ref{72.1}) since $\eta$ can be chosen arbitrarily small.
$\wbox$
\begin{appendix}
\section{Derivation of (\ref{thm3.1}) } \label{thm3prf}
The idealized algorithm at the beginning of Section 3.1 captures the essence of how CBT behaves.
The mean $\mu_k$ of arm $k$ is revealed when its first non-zero reward appears.
If $\mu_k > \zeta$ [with optimality when $\zeta=\zeta_n$,
see (\ref{mustar})],
then we stop sampling from arm $k$ and sample the next arm $k+1$.
If $\mu_k \leq \zeta$,
then we exploit arm $k$ a further $n$ times before stopping the algorithm.
For empirical CBT,
when there are $m$ rewards,
we apply threshold $\zeta(m) = \frac{S_m'}{n}$ where $S_m'$ is the sum of the $m$ rewards.
In an idealized version of CBT,
we also reveal the mean $\mu_k$ of arm $k$ when its first non-zero reward appears.
To capture the essence of how threshold $\zeta(m)$ increases with number of rewards $m$,
we assign a fixed cost of $\lambda$ to each arm that we experiment with.
That is, we replace $\zeta(m)$ by $\wht \zeta_k = \frac{k \lambda}{n}$,
where $k$ is the number of arms played.
When $\min_{1 \leq i \leq k} \mu_i \leq \wht \zeta_k$,
we stop experimenting and play the best arm a further $n$ times.
More specifically:
\vskip0.5in
\underline{Idealized empirical CBT}
\begin{enumerate}
\item For $k=1,2,\ldots:$
Draw $n_k$ rewards from arm $k$,
where
$$n_k = \inf \{ t \geq 1: X_{kt} > 0 \}.
$$
\item Stop when there are $K$ arms,
where
$$K = \inf \Big\{ k \geq 1: \min_{1 \leq i \leq k} \mu_i \leq \tfrac{k \lambda}{n} \Big\}.
$$
\item Draw $n$ additional rewards from arm $j$ satisfying $\mu_j = \min_{1 \leq k \leq K} \mu_k$.
\end{enumerate}
\vskip0.5in
\noi We define the regret of this algorithm to be $R_n = \lambda EK + n E(\min_{1 \leq k \leq K} \mu_k)$.
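The idealized procedure above can be simulated in a few lines. The sketch below is our own illustration, not the authors' code; the Uniform$(0,1)$ prior on the means is an illustrative choice corresponding to $\alpha=\beta=1$ in the paper's notation, and the numeric constant is $C\,I_{\beta}$ of Theorem \ref{thmA} evaluated at $\beta=1$.

```python
import random

def idealized_cbt_regret(n, lam, draw_mean, rng):
    """One run of idealized empirical CBT: each arm's mean is revealed at
    its first non-zero reward; we stop at the first k with
    min_{i<=k} mu_i <= k*lam/n, then exploit the best arm n more times."""
    best, k = float("inf"), 0
    while True:
        k += 1
        best = min(best, draw_mean(rng))   # mean mu_k of the new arm k
        if best <= k * lam / n:            # empirical threshold zeta_k = k*lam/n
            return lam * k + n * best      # regret R_n = lam*K + n*min mu

rng = random.Random(0)
n, lam, reps = 10_000, 1.0, 2000
# Uniform(0,1) prior: g(mu) -> 1 as mu -> 0, i.e. alpha = beta = 1
avg = sum(idealized_cbt_regret(n, lam, lambda r: r.random(), rng)
          for _ in range(reps)) / reps
# C*I_beta for beta=1: C = sqrt(2*lam), I_1 ~ 1.0967 (evaluated constant)
theory = (2 * lam) ** 0.5 * 1.0967 * n ** 0.5
print(f"simulated regret ~ {avg:.0f}, Theorem A asymptotic ~ {theory:.0f}")
```

The simulated average should be of the same order $n^{\beta/(\beta+1)} = \sqrt{n}$ as the asymptotic constant, with a finite-$n$ gap.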
\begin{thm} \label{thmA}
For idealized empirical CBT,
its regret $R_n \sim C I_{\beta} n^{\frac{\beta}{\beta+1}}$,
where $C = (\frac{\lambda \beta (\beta+1)}{\alpha})^{\frac{1}{\beta+1}}$ and $I_{\beta} = (\frac{1}{\beta+1})^{\frac{1}{\beta+1}} (2-\frac{1}{(\beta+1)^2}) \Gamma(2-\frac{\beta}{\beta+1})$.
\end{thm}
{\sc Proof}.
We stop experimenting after $K$ arms,
where
\begin{equation} \label{keq}
K = \inf \{ k: \min_{1 \leq j \leq k} \mu_j \leq \wht \zeta_k \}, \quad \wht \zeta_k = \tfrac{k \lambda}{n}.
\end{equation}
Let
$$D_k^1 = \{ \wht \zeta_k-\tfrac{\lambda}{n} < \min_{1 \leq j \leq k-1} \mu_j \leq \wht \zeta_k \},
\quad D_k^2 = \{ \min_{1 \leq j \leq k-1} \mu_j > \wht \zeta_k, \mu_k \leq \wht \zeta_k \}.
$$
We check that $D_k^1$,
$D_k^2$ are disjoint,
and that $D_k^1 \cup D_k^2 = \{ K=k \}$.
Essentially, $D_k^1$ is the event that $K=k$ and the best arm is not arm $k$, while $D_k^2$ is the event that $K=k$ and the best arm is arm $k$.
For any fixed $k \in {\bf Z}^+$,
\begin{eqnarray} \label{Dk1}
P(D_k^1) & = & [1-p(\wht \zeta_k-\tfrac{\lambda}{n})]^{k-1} - [1-p(\wht \zeta_k)]^{k-1} \\ \nonumber
& = & \{ 1-p(\wht \zeta_k)+[1+o(1)] \tfrac{\lambda}{n} g(\wht \zeta_k) \}^{k-1} - [1-p(\wht \zeta_k)]^{k-1} \\ \nonumber
& \sim & \{ [1-p(\wht \zeta_k)]^{k-1} \} \tfrac{k \lambda}{n} g(\wht \zeta_k) \\ \nonumber
& \sim & \exp(-\tfrac{\alpha \lambda^\beta}{\beta}k^{\beta+1}n^{-\beta}) \alpha \lambda^\beta k^\beta n^{-\beta}.
\end{eqnarray}
Moreover
\begin{equation} \label{RC1}
E(R_n|D_k^1) \sim k \lambda + n(\tfrac{k \lambda}{n}) = 2 k \lambda.
\end{equation}
Likewise,
\begin{eqnarray} \label{Dk2}
P(D_k^2) & = & \{ [1-p(\wht \zeta_k)]^{k-1} \} p(\wht \zeta_k) \\ \nonumber
& \sim & \exp(-\tfrac{\alpha \lambda^\beta}{\beta}k^{\beta+1}n^{-\beta}) \tfrac{\alpha \lambda^\beta}{\beta} k^\beta n^{-\beta}, \\ \label{RC2}
E(R_n|D_k^2) & = & k \lambda + n E(\mu|\mu \leq \wht \zeta_k) \\ \nonumber
& = & 2k \lambda -\tfrac{n v(\hat \zeta_k)}{p(\hat \zeta_k)} \sim (2-\tfrac{1}{\beta+1}) k\lambda.
\end{eqnarray}
Combining (\ref{Dk1})--(\ref{RC2}) gives us
\begin{eqnarray} \label{RCp}
R_n & = & \sum_{k=1}^{\infty} [E(R_n|D_k^1)P(D_k^1) + E(R_n|D_k^2)P(D_k^2)] \\ \nonumber
& \sim & \sum_{k=1}^{\infty} \exp(-\tfrac{\alpha \lambda^{\beta}}{\beta} k^{\beta+1}n^{-\beta})
(\tfrac{\alpha \lambda^{\beta+1}}{\beta} k^{\beta+1}n^{-\beta}) (2 \beta+2-\tfrac{1}{\beta+1}).
\end{eqnarray}
It follows from (\ref{RCp}) and a change of variables $x=\tfrac{\alpha \lambda^{\beta}}{\beta} k^{\beta+1}n^{-\beta}$ that
\begin{eqnarray} \nonumber
R_n & \sim & (2 \beta+2-\tfrac{1}{\beta+1}) \int_{0}^{\infty} \exp(-\tfrac{\alpha \lambda^{\beta}}{\beta} k^{\beta+1}n^{-\beta})
(\tfrac{\alpha \lambda^{\beta+1}}{\beta} k^{\beta+1}n^{-\beta}) dk \\ \nonumber
& = & (2 \beta+2-\tfrac{1}{\beta+1}) \int_{0}^{\infty} \tfrac{1}{\beta+1}(\tfrac{\lambda \beta}{\alpha})^{\tfrac{1}{\beta+1}} n^{\tfrac{\beta}{\beta+1}} \exp(-x) x^{\tfrac{1}{\beta+1}} dx\\ \nonumber
& = & (2 - \tfrac{1}{(\beta+1)^2}) (\tfrac{\lambda \beta}{\alpha})^{\tfrac{1}{\beta+1}} \Gamma(2 - \tfrac{\beta}{\beta+1}) n^{\tfrac{\beta}{\beta+1}},
\end{eqnarray}
and Theorem \ref{thmA} holds.
$\wbox$
\section{Proof of Lemma \ref{lem3}} \label{lem3prf}
Let $\wht K = \lfloor n \zeta_n (\log n)^{\beta+2} \rfloor$ for $\zeta_n$ satisfying $n v(\zeta_n)=\lambda$.
Express $E(M \mu_b) = \sum_{i=1}^5 E(M \mu_b {\bf 1}_{D_i})$,
where
\begin{eqnarray*}
D_1 & = & \{ \mu_b \leq \tfrac{\zeta_n}{\log n} \}, \cr
D_2 & = & \{ \mu_b > \tfrac{\zeta_n}{\log n}, K > \wht K \}, \cr
D_3 & = & \{ \tfrac{\zeta_n}{\log n} < \mu_b \leq \zeta_n (\log n)^{\beta+3}, K \leq \wht K \}, \cr
D_4 & = & \{ \mu_b > \zeta_n (\log n)^{\beta+3}, K \leq \wht K, M > \tfrac{n}{2} \}, \cr
D_5 & = & \{ \mu_b > \zeta_n (\log n)^{\beta+3}, K \leq \wht K, M \leq \tfrac{n}{2} \}.
\end{eqnarray*}
It suffices to show that for all $i$,
\begin{equation} \label{ED}
E(M \mu_b {\bf 1}_{D_i}) = o(n^{\frac{\beta}{\beta+1}}).
\end{equation}
Since $\frac{M \zeta_n}{\log n} \leq \frac{n \zeta_n}{\log n} = o(n^{\frac{\beta}{\beta+1}})$,
(\ref{ED}) holds for $i=1$.
Let $\wht \mu_b = \min_{k \leq \hat K} \mu_k$.
Since $M \leq n$,
$\mu_b \leq \mu_1$ and $E(\mu_1)\leq \lambda$,
\begin{eqnarray} \label{EM2}
E(M \mu_b {\bf 1}_{D_2}) & \leq & n E(\mu_1|\mu_1 > \tfrac{\zeta_n}{\log n}) P(D_2) \\ \nonumber
& \leq & [\lambda+o(1)] n P(\wht \mu_b > \tfrac{\zeta_n}{\log n}).
\end{eqnarray}
Substituting
$$P(\wht \mu_b > \tfrac{\zeta_n}{\log n}) = [1-p(\tfrac{\zeta_n}{\log n})]^{\hat K} = \exp \{ -[1+o(1)] \wht K \tfrac{\alpha}{\beta} (\tfrac{\zeta_n}{\log n})^{\beta} \} = O(n^{-1})
$$
into (\ref{EM2}) shows (\ref{ED}) for $i=2$.
Let $M_j$ be the number of plays of $\Pi_j$ to its first non-zero reward (hence $M = \sum_{j=1}^K M_j$).
It follows from condition (A2) that $E_{\mu} M_1 = \frac{1}{P_{\mu}(X_1>0)} \leq \frac{1}{a_1 \min(\mu,1)}$,
and since $\mu_b \leq \zeta_n (\log n)^{\beta+3}$ on $D_3$,
\begin{eqnarray} \label{D3}
E(M \mu_b {\bf 1}_{D_3}) & \leq & E(M_1 {\bf 1}_{\{ \mu_1 > \frac{\zeta_n}{\log n} \}}) \wht K \zeta_n (\log n)^{\beta+3} \\ \nonumber
& \leq & \Big( \int_{\frac{\zeta_n}{\log n}}^{\infty} \tfrac{g(\mu)}{a_1 \min(\mu,1)} d \mu \Big) n \zeta_n^2 (\log n)^{2 \beta+5}.
\end{eqnarray}
Substituting
$$\int_{\frac{\zeta_n}{\log n}}^\infty \tfrac{g(\mu)}{\mu} d \mu = \left\{ \begin{array}{ll} O(1) & \mbox{ if } \beta > 1, \cr
O(\log n) & \mbox{ if } \beta=1, \cr
O((\tfrac{\zeta_n}{\log n})^{\beta-1}) & \mbox{ if } \beta<1, \end{array} \right.
$$
into (\ref{D3}) shows (\ref{ED}) for $i=3$.
If $\mu_j > \zeta_n (\log n)^{\beta+3}$,
then by condition (A2),
$M_j$ is bounded above by a geometric random variable with mean $\nu^{-1}$,
where $\nu = a_1 \zeta_n (\log n)^{\beta+3}$.
Hence for $0 < \theta < \log(\frac{1}{1-\nu})$,
$$E(e^{\theta M_j} {\bf 1}_{\{ \mu_j > \zeta_n (\log n)^{\beta+3} \}})
\leq \sum_{i=1}^{\infty} e^{\theta i} \nu (1-\nu)^{i-1} = \tfrac{\nu e^{\theta}}{1-e^{\theta}(1-\nu)},
$$
implying that
\begin{equation} \label{EtM}
[e^{\frac{\theta n}{2}} P(D_4) \leq] E (e^{\theta M} {\bf 1}_{D_4}) \leq (\tfrac{\nu e^{\theta}}{1-e^{\theta}(1-\nu)})^{\hat K}.
\end{equation}
Consider $e^{\theta}=1+\frac{\nu}{2}$ and check that $e^{\theta}(1-\nu) \leq 1-\frac{\nu}{2}$ [$\Rightarrow \theta < \log(\frac{1}{1-\nu})$].
It follows from (\ref{EtM}) that
\begin{eqnarray*}
P(D_4) & \leq & e^{-\frac{\theta n}{2}} (\tfrac{\nu e^{\theta}}{\nu/2})^{\hat K} = 2^{\hat K} e^{\theta(\hat K-\frac{n}{2})} \cr
& = & \exp[\wht K \log 2+[1+o(1)] \tfrac{\nu}{2}(\wht K-\tfrac{n}{2})] \cr
& = & n^{-\wht K[ \tfrac{a_1}{4}-\tfrac{a_1}{2}\zeta_n (\log n)^{\beta+2}- \tfrac{\log 2}{\log n}]} = O(n^{-1}).
\end{eqnarray*}
Since $M \leq n$, $\mu_b \leq \mu_1$ and $E(\mu_1) \leq \lambda$,
$$E(M \mu_b {\bf 1}_{D_4}) \leq n E[\mu_1|\mu_1 > \zeta_n (\log n)^{\beta+3}] P(D_4) \leq n[\lambda+o(1)] P(D_4),
$$
and (\ref{ED}) holds for $i=4$.
Under $D_5$ for $n$ large,
$$(n-M) v(\mu_b) [> \tfrac{n}{2} v(\zeta_n (\log n)^{\beta+3}) \sim \tfrac{n}{2}v(\zeta_n) (\log n)^{(\beta+3)(\beta+1)}] > \lambda.
$$
The optimal solution of Problem B requires further experimentation since its cost $\lambda$ is less than the reduction in exploitation cost.
In other words $D_5$ is an event of zero probability.
Therefore (\ref{ED}) holds for $i=5$.
\end{appendix}
\section{Introduction}
Coronavirus disease, also known as COVID-19, was discovered in Wuhan, China, in December of 2019~\cite{who}. COVID-19 has many strains and can infect animals and humans. COVID-19 is hard to detect because it shares common symptoms with the cold and the flu. The symptoms also range in seriousness depending on the person's immune system, and can take up to 14 days to appear after exposure. Because of this, the public disregards them as the everyday common flu or cold. COVID-19 is spread through respiratory droplets produced by coughing and sneezing, and through touch~\cite{govca}. The spread of COVID-19 has become so severe that it is shutting down economies. There are over 127 million cases worldwide and over 2.7 million deaths as of March 29, 2021, with numbers rising daily~\cite{[2]asean}. Chest X-rays and CT scans can be conducted quickly and efficiently for detecting COVID-19~\cite{Wang2020}. They allow radiologists, pathologists, and physicians to properly assess the condition of COVID-19-affected patients for additional care, and to drive specific clinical solutions to save lives. The quicker the detection, the quicker the patient will receive treatment and can be put in quarantine to avoid further spread.
This work introduces a new CNN-based solution with a slow feature learning strategy to predict whether a person is affected by COVID-19 from the patient's chest X-ray. The model is solely trained and tested on the COVIDx CXR-2 dataset provided by the \textit{AI against COVID-19: Screening X-ray Images for COVID-19 Infections} competition, which is available at
\href{https://www.kaggle.com/andyczhao/covidx-cxr2}{data set}. It comprises over 16000 ($480 \times 480$) chest X-ray scans taken from 15000 patients from across the world (a minimum of 51 participating countries). We follow the exact training and test set splits, containing positive and negative classes to indicate COVID or non-COVID cases, and evaluate the proposed model on the hold-out set for which the ground truths are not provided. For performance computation, we submit our model's binary label predictions to the evaluation site \href{https://eval.ai/web/challenges/challenge-page/925/leaderboard}{leaderboard} and receive the results.
\begin{comment}
the paper is organized as follows: Section \ref{Section II} presents the related works, Section \ref{Section III} discusses the selected data-set, Section \ref{Section IV} walks through the prepossessing of our selected data-set, Section \ref{Section V} shows sanity tests, Section \ref{Section VI} talks about the proposed model, Section \ref{Section VII} contains the experimental analysis and finally, Section \ref{Section VIII} is our conclusion.
\end{comment}
\section{Literature Review}\label{Section II}
To tackle the surge of COVID-19, the AI community has been actively developing efficient solutions for the automatic detection of COVID-19 patients from various sources, as an alternative or a supplement to the conventional, time-consuming diagnosis procedures. Among them, the vision-based classification models (e.g., on medical images such as X-rays and CT scans) show promising results.
As data scarcity is a longstanding issue in medical machine learning, most researchers adopt a transfer learning (TL) approach. In this direction, Gozes~\textit{et al.}~\cite{Gozes2020} focus on using the well-known UNet~\cite{ronneberger2015u} and ResNet50-2D~\cite{he2016deep} architectures for CT scan-based COVID-19 patient classification, quantification and tracking.
Similarly, Wang \textit{et al.}~\cite{[5]Wang2020} use the ResNet18 architecture, and Ali \textit{et al.}~\cite{[6]narin2021automatic} employ the ResNet50, InceptionV3 and Inception-ResNetV2~\cite{szegedy2017inception} models.
The main aim of these TL techniques is to extract features from a small number of medical images by leveraging a CNN exhaustively trained on large-scale data, and then to train shallow classifiers, like decision trees and SVMs, on the extracted feature sets. Such approaches work relatively well. That being said, they are overly dependent on the pretrained backbone models.
In response to that, there are a few attempts in which researchers carefully architect new deep learning models specifically for the detection of COVID-19 \cite{Wang2020}, \cite{aboutalebi2021covid}. In this line, this work's CNN model is trained from scratch on chest X-rays to classify COVID-19 cases.
\section{Proposed Slow Encoding CNN}
Earlier, in \cite{Akilan2020, Akilan2018tvt, akilan2018}, we introduced a novel encoder-decoder (EnDec) foreground segmentation architecture that performs feature learning twice at every stage of down-sampling. It has two subnets (cf.~Fig.~\ref{fig:sendecmodel}): an encoder (spatial subsampling) and a decoder subnet (which up-samples the lower-dimensional bottleneck feature map of the encoder back to the original dimension of the image). In the encoder subnet, a spatial input ($layer_i$) is transformed through a spatially sub-sampling convolutional (Conv) operation ($layer_{i+1}$), followed by an up-sampling operation using a transpose convolution ($layer_{i+2}$) that generates feature maps exactly matching the spatial dimension of the previous layer's input. The up-sampled feature maps are then aggregated depth-wise with the original input features ($layer_i$). Next, the aggregated feature maps are encoded using a spatially subsampling Conv layer. In this way, an input feature map at each sub-sampling stage is learnt twice before moving completely to the next level of lower spatial dimension. Following that, this work upcycles the model by removing the decoder subnet and refurbishes it by adding dense layers on top with a Sigmoid classifier, targeting COVID-19 patient classification using chest X-ray images. Figure~\ref{fig:sendecmodel} depicts the proposed slow encoding ConvNet model. In total, the model has 9,692,865 trainable parameters.
\begin{figure}[!tb]
\centering
\includegraphics[width=0.55\columnwidth, trim={3.0cm 2.5cm 2.5cm 2.5cm},clip]{sendec-c19.png}
\caption{The proposed slow encoding ConvNet classifier for COVID-19 detection using chest X-rays. All Conv2D layers use ReLU activation. }
\label{fig:sendecmodel}
\end{figure}
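The slow-encoding dataflow described above can be sketched framework-free. The toy stand-ins below are our own illustration, not the trained model: a 2$\times$2 average pool plus a random channel mix replaces the Conv2D layers, and nearest-neighbour upsampling replaces the transpose convolution; only the shapes and the twice-learnt aggregation pattern are faithful to the description.

```python
import numpy as np

def conv_down(x, c_out, rng):
    """Toy stride-2 'convolution': 2x2 average pool, then a random 1x1
    channel mix and ReLU -- a stand-in for a subsampling Conv2D layer."""
    c, h, w = x.shape
    pooled = x.reshape(c, h // 2, 2, w // 2, 2).mean(axis=(2, 4))
    weights = rng.standard_normal((c_out, c))
    return np.maximum(np.tensordot(weights, pooled, axes=1), 0.0)

def upsample(x):
    """Nearest-neighbour upsampling, standing in for Conv2DTranspose."""
    return x.repeat(2, axis=1).repeat(2, axis=2)

def slow_encode_stage(x, c_out, rng):
    down = conv_down(x, c_out, rng)                      # layer_{i+1}: subsample
    up = upsample(down)[:, : x.shape[1], : x.shape[2]]   # layer_{i+2}: back to input size
    merged = np.concatenate([x, up], axis=0)             # depth-wise aggregation with layer_i
    return conv_down(merged, c_out, rng)                 # learn the same scale a second time

rng = np.random.default_rng(0)
x = rng.standard_normal((3, 480, 480))                   # COVIDx CXR-2 input size
y = slow_encode_stage(x, 32, rng)
print(y.shape)  # (32, 240, 240)
```

Each stage halves the spatial resolution once while having processed that resolution twice, which is the core of the slow-encoding idea.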
\section{Data-set}\label{Section III}
\section{Experimental Analysis}\label{Section VII}
\subsection{Training Time Analysis}
\subsubsection{Training History}
The proposed CxSE model was trained for 30 epochs on the given train set of the competition site, using the Adamax optimizer with a learning rate of 0.001. The training history is shown in Fig.~\ref{fig:traininghis}.
\begin{figure}[!htb]
\centering
\includegraphics[width=1.0\columnwidth, trim={0.25cm 0cm 0.25cm 0cm},clip]{traninghist.png}
\caption{The training history of CxSE.}
\label{fig:traininghis}
\end{figure}
\subsubsection{Testing Results}
The proposed model is tested on the given test set of the competition site. The probability scores produced by the model (cf.~Fig.~\ref{fig:testsetscores}) are converted into class labels using a threshold of 0.5 and compared against the ground truths. The model's performance is shown by a confusion matrix in Fig.~\ref{fig:confusionmat}.
Similarly, the model's probability scores on the competition set (cf.~Fig.~\ref{fig:competitionsetscore}) are also converted using the same threshold and uploaded to the competition's \hyperlink{https://eval.ai/web/challenges/challenge-page/925/leaderboard/2424}{evaluation site} to get the model's overall performance, as tabulated in Table~\ref{tab:leaderboardscore}.
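The thresholding step can be sketched as follows. The metric names follow the leaderboard's abbreviations, which we read (an assumption on our part) as per-class sensitivity and positive predictive value; the toy scores below are illustrative, not the model's outputs.

```python
import numpy as np

def class_metrics(y_true, y_prob, thresh=0.5):
    """Threshold sigmoid scores at `thresh` and compute per-class
    sensitivity and positive predictive value from the 2x2 confusion matrix."""
    y_pred = (np.asarray(y_prob) >= thresh).astype(int)
    y_true = np.asarray(y_true)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "SP": tp / (tp + fn),   # sensitivity, positive (COVID) class
        "PP": tp / (tp + fp),   # PPV, positive class
        "SN": tn / (tn + fp),   # sensitivity, negative class
        "PN": tn / (tn + fn),   # PPV, negative class
    }

m = class_metrics([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.3, 0.2, 0.1, 0.6])
print(m)  # each metric equals 2/3 for this toy example
```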
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth, trim={0cm 0cm 0cm 0cm},clip]{sigmoid-score-test.png}
\caption{CxSE's sigmoid probability scores on test set. }
\label{fig:testsetscores}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.6\columnwidth, trim={0cm 0cm 0cm 0cm},clip]{confusion-matrix.png}
\caption{Proposed CxSE's performance on test set. }
\label{fig:confusionmat}
\end{figure}
\begin{figure}[!htb]
\centering
\includegraphics[width=0.9\columnwidth, trim={0cm 0cm 0cm 0cm},clip]{sigmoid-score.png}
\caption{CxSE's sigmoid probability scores on competition set. }
\label{fig:competitionsetscore}
\end{figure}
\begin{table}[!ht]
\centering
\begin{tabular}{|c|c|}
\hline
\textbf{Metrics} & \textbf{Score}\\ \hline \hline
{SP} & 0.67\\\hline
{PP} & 0.98 \\\hline
{SN} & 0.96\\\hline
{PN} & 0.52 \\\hline
{Overall points} & 12.80\\ \hline
\end{tabular}
\caption{CxSE's COIVD-19 classification performance on competition set (cf. \href{{https://eval.ai/web/challenges/challenge-page/925/leaderboard/}}{leaderboard} - Participant team: MVLC)}
\label{tab:leaderboardscore}
\end{table}
\section{Conclusion}\label{Section VIII}
This work has introduced a new feature-learning DCNN aimed at COVID-19 diagnosis using chest X-rays. This first stage of our development shows a promising outcome. Future work is dedicated to improving the model's learning ability through a class-agnostic semi-supervised training approach.
\section*{Acknowledgements}\label{Acknowledgements}
This work acknowledges IEEE SIGHT MTL, VIP Lab, and DarwinAI, the organizers of ``AI Against COVID19 - Screening X-ray images for COVID-19 Infections''.
\bibliographystyle{IEEEtran}
\section{Introduction}
This letter is devoted to the aim of understanding what happens
inside the finite Frenkel-Kontorova (FK)
chain~\cite{kont38I,kont38II,QuBo18MolPhys},
if the two characteristic ratios $a/b$ and $V_o/k$ change.
We select ref.~\onlinecite{mun20} to discuss some general problems in
the understanding of the FK model, with the
special example of that paper.
For brevity we only treat the springlike potential of ref.~\onlinecite{mun20}.
The corresponding model of the potential energy
of a linear FK chain $\textbf{x}=(x_1,...,x_N)$ with
$x_i<x_{i+1} $ for all atoms, and with
atoms of equal mass at points $x_i$ is ~\cite{mun20}
\begin{equation}
U(\textbf{x}) = V_o\, \sum_{i=1}^N\Big(1- \cos\big(\frac{2\pi}{b}\,x_i\big)\Big) +
\frac{k}{2} \sum_{i=1}^{N-1}V_{int} (x_{i+1}-x_i) \ .
\label{pot}
\end{equation}
$V_o$ describes the strength of the substrate, while $k$ describes
the strength of the springlike forces between neighboring atoms.
Used are $V_o=0.02 , k/k_B=783.6$ (without units~\cite{mun20}).
Parameter $b$ is the periodicity of the substrate. It is used with a
variable length around the spring distance of the $V_{int}$ potential, $a=2.4$.
This interatomic springlike potential function is
\begin{equation}
V_{int}(r)= \frac{-1}{1+c_o(r-r_o)^2}+\frac{1}{k} \exp(-c_2(r-r_o')) \ .
\label{spring}
\end{equation}
The following parameters of the function are used~\cite{mun20}
$c_o$=12.9, $r_o$=2.4, $c_2$=263.0, $r_o'$=1.8.
\begin{figure}
\includegraphics[scale=0.875]{Figs/Fig1.pdf}
\caption{The springlike potential function of Eq.(\ref{spring}).}
\label{fig1}
\end{figure}
The graph of function (2) is shown in Fig.\ref{fig1}.
The minimum is found to be at $a=r_{min}=2.4$ units,
in contrast to ref.~\onlinecite{mun20} with 2.44 units.
We used the Mathematica program, version 13.0, for the calculations, as well as for the
Figures.
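The location of the minimum of Eq.\,(\ref{spring}) can be cross-checked numerically. The grid search below is our own sketch (the letter's calculations were done in Mathematica); the grid range and step are illustrative choices.

```python
import numpy as np

# Grid search for the minimum of the springlike potential, Eq. (2),
# checking the claim r_min = 2.4 (vs. 2.44 in the cited work).
c_o, r_o, c_2, r_op, k = 12.9, 2.4, 263.0, 1.8, 783.6

def V_int(r):
    return -1.0 / (1.0 + c_o * (r - r_o) ** 2) + np.exp(-c_2 * (r - r_op)) / k

r = np.linspace(2.0, 3.0, 100_001)        # step 1e-5
r_min = r[np.argmin(V_int(r))]
print(f"r_min = {r_min:.5f}")              # -> 2.40000
```

The exponential term is vanishingly small near $r=2.4$, so the minimum sits at the bottom of the Lorentzian well, consistent with the value stated above.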
The springlike potential is of special interest because it is asymmetric
under stretching versus compression.
However, for the calculation of minima this fact alone does not
play a role.
The boundary conditions for $x_1$ and $x_N$ are free.
Parameter $a=r_{min}$ is the distance between two atoms if the parameter $V_o$
is zero. It is held fixed throughout the letter.
However, in the chain with the substrate potential,
$V_o\ne 0$, usually the average distance, $\tilde{a}$, changes to
\begin{equation}
\tilde{a} = \frac{x_N-x_1}{N-1} \ ,
\end{equation}
because the chain will find a minimum form in the substrate\cite{stoy85},
compare Fig.\,\ref{figX}.
We have to emphasize that $ \tilde{a}$ is the average distance
in a minimum structure.
In our calculations on the relative influence of the two parts of Eq.(1)
we only change the parameters $b, \ V_o$ of the substrate potential.
The spring potential and $k$ are fixed.
\begin{figure}[h]
\includegraphics[scale=0.75]{Figs/Fig2a}
\includegraphics[scale=0.75]{Figs/Fig2b}
\includegraphics[scale=0.75]{Figs/Fig2c}
\includegraphics[scale=0.75]{Figs/Fig2d}
\includegraphics[scale=0.75]{Figs/Fig2e}
\caption{Ground states of FK chains for different $b$ values.
Only the substrate potential is shown;
the fixed springlike force is not shown.
To guide the eye, we moved the atoms up onto the potential line.
Note the different lengths of the substrate troughs.
In (c) and in (e) new kinks emerge.
The outer atoms show impressively the effect of the free boundaries.}
\label{figX}
\end{figure}
\section{COMMENSURATE MINIMA}
Our remark concerns the discussion of commensurate or incommensurate
states of the chain.
Note that this letter is similar to another one~\cite{QuBo22StatMech}.
Thus, the problem may be widespread in the FK community.
We start with the chain at the potential energy (1).
We determine the minimizer
of the chain in the combined potential (1), formed by the substrate and by the
springlike forces.
The definition of a 'commensurate' chain in ref.~\onlinecite{mun20} is useless,
because it is empty if $Q>N$
\begin{equation}
x_{Q+i}= x_i + R\,b
\label{empty}
\end{equation}
for integers $Q,\,R$.
Then $x_{Q+i}$ would lie outside the chain.
For example, this is the case for $N=50$ and $R/Q=52/51$.
Note that cases $Q>N$ occur far before an irrational ratio
of $\tilde{a}/b$ is reached.
Of course, for nearly all cases with $a \ne b$ we will find some atoms at $x_i$
outside the bottom of their corresponding trough.
We have recalculated a part of a curve similar to Fig.\,4, part (a-3) of ref.~\onlinecite{mun20}.
A result is shown in our Fig.\,\ref{figX} by some special chains belonging to the case
$V_o=0.9\,k$.
Calculations are executed for steps $b=a-s\cdot 0.05$ with the step numbers
$s=0,\ldots,10$ in the interval $[a-0.5,a]$ for $b$.
The springlike potential with parameter $a$ is not changed.
In every case, we 'sort' the unchanged chain
into different substrate troughs.
There the structure changes, see Fig.\,\ref{figX}.
(and not 50 like it is done in ref.~\onlinecite{mun20}).
We always start the minimization of the chain with $x_1=0$
and $x_i=a\,(i-1)$ for $i=2,...,N$.
We get steps like in Fig.4 of ref.~\onlinecite{mun20}.
Of course,
only rational numbers are used for the steps.
The step near the line 1 is not constant;
it slowly increases to the right hand side.
We conclude that the statement that there are
``small intervals of zero slope'' \cite{mun20,chai95} is not correct.
The increase comes from a small ascent of the $\approx N/2$ left atoms
up the left walls of their corresponding substrate potential troughs,
and of the $\approx N/2$ right atoms
up the right walls of their corresponding substrate troughs,
which increases $\tilde{a}$.
This implies that this fact has no relation to commensurate
or incommensurate numbers.
We explain the steps in Fig.2.
It is $\tilde{a}=$ 2.39, 2.36, 2.25, 2.38, and 2.38 correspondingly, and
$\tilde{a}/b$ is then 1.02, 1.07, 1.13, 1.22 and 1.25.
Thus we really get consecutive states on the staircase like in Fig.4
of ref.~\onlinecite{mun20}.
The step from part (a) to (b), for a shorter $b$ and
thus a larger $a/b$, shows a climbing of the outer atoms up
their left walls (the $N/2$ left ones) or up
the right-hand walls (the right $N/2$ atoms).
Between (b) and (c) a 'phase' jump happens; see below for a definition of a phase.
Now the chain occupies one more trough of the substrate: not $N=8$ but 9 troughs.
This is possible because the central trough, say with number 5, is now empty.
The chain has a kink.
Part (d) is again a stretching of the chain, but $a/b$ correctly increases.
A next 'phase' jump happens between (d) and (e). Again a new kink emerges.
Now two kinks concern the throughs 3 and 8 of the substrate,
which are bridged by the kinks.
The key is that one or two atoms, $x_5$ in (c), or $x_3$ and $x_8$ in part (e),
jump at the end of the increasing movement over a peak of the site-up potential.
{
Note that the new kinks are not restricted to the center of the chain,
as has been claimed in a former work~\cite{mikh02}.
The pattern of new kinks
}
repeats for higher values of $a/b$ again and again at every step.
So the jumps emerge in Fig.4 of ref.~\onlinecite{mun20}. Thus, the given
jumps are correct.
The pure minima of the FK-chain with $N=8$ atoms disclose
3 'phases' in the region $1\le a/b\le 1.265$.
The minimization with Mathematica leads to the same
phase for certain intervals of $a/b$.
The character of the number $b$ used, rational or irrational, is irrelevant.
What counts is the length of the chain, given by $a$, and the (integer) number of
troughs of the substrate which the chain occupies. Any distinction into
commensurate or incommensurate is nonsense here.
From Cantor's proof that the real numbers are uncountable but the
rational numbers are countable,
it follows that almost all real numbers are
irrational. But the rational numbers are dense, as are the irrational ones.
There is no gap in either kind of numbers
which could indicate the steps in Fig.\,4 of ref.~\onlinecite{mun20}.
The statement of ref.~\onlinecite{mun20} that
``the ratio $a/b$ is highly irrational and quite far from a rational value''
is not correct. There is no smallest distance from an irrational number to
any ``neighboring'' rational one. If $z$ is irrational, and if
such a distance existed, then we could take the
``left'' rational number $z_1$ and the ``right'' rational number $z_2$ and
build the rational number
$
(z_1+z_2)/2
$
which is nearer to the initial number $z$.
We conclude that relating the transition between different phases of the
FK chain to the transition from a rational to an irrational ratio
of $\tilde{a}/b$ is mathematically, and also physically, not correct.
We also wonder how the authors of ref.~\onlinecite{mun20} decided
which of the different results in their Fig.\,4 are
'rational' or 'irrational' steps.
We have to assume that they used a computer for their calculation.
Every numerical calculation on a computer operates with a (restricted)
range of rational numbers. Irrational numbers are not representable.
We cannot imagine how one would use an irrational $b$, and we cannot imagine
how one would know that the result $\tilde{a}$ is 'irrational'.
The adsorption of the chain by the substrate acts such that for
every ratio $a/b$ a rational $\tilde{a}$ emerges (to any given
accuracy of the solution) for a minimum structure, if $N$ is finite.
The latter was our assumption because in physics only finite chains exist.
Every limit $N\rightarrow \infty$ is a mathematical abstraction.
Nowhere in ref.~\onlinecite{mun20}
could we find a treatment of such a limit process.
Thus one can assume that for every constellation of the parameters
$a/b$, $V_o/k$, and $N$, there exists at least one minimizer,
a structure of the FK chain in a minimum of the combined potential (1).
We now explicitly define different 'phases' of the chain.
\section{Definition of a phase}
If an equilibrium structure of the FK chain occupies $L$ troughs of
the substrate then it belongs to phase $L$.
There is a certain set of parameters for different ratios
of $a/b$, $V_o/k$, and $N$, which lead to the same phase.
\\
A single phase transition is correspondingly a change of an
equilibrium structure of the FK-chain to $L+1$ or $L-1$ troughs.
(We have outlined above that the ``commensurate-definition''
by Eq.\,(\ref{empty}) may be empty.)
For a phase transition the whole chain contracts,
or expands, so that one or more atoms of the chain climb
over their current peaks of the substrate potential.
As a result, the chain uses fewer, or more, troughs. This is accompanied
by a jump in the average $\tilde{a}$, see Fig.\,\ref{figX}.
The question revolves around the counting of integers.
Besides the ratio $a/b$, the balance of $k$ and $V_o$ also
plays a role. In all cases, the chains as in Fig.\,\ref{figX}
are regular structures; no kind
of incommensurability emerges at any $a/b$, as is claimed
in ref.~\onlinecite{mun20}.
\\
We now treat the case $N\rightarrow \infty$.
In ref.~\cite{mun20} only the finite case, $N=50$, is treated.
We find that the larger $N$ is, the 'earlier' a small change in
the parameter $b$ leads to a change of the number of occupied troughs.
The reason is that the unperturbed chain ends at $x_N=(N-1)\,a$.
If $b$ is so much smaller that
\[
N \, b < (N-1)\, a
\]
holds, then the minimization leads to a kink, i.e., a step in the 'staircase'.
However, in the condition
\begin{equation}
a > \frac{N}{N-1} \ b
\label{bbb}
\end{equation}
the factor multiplying $b$ converges to 1 with increasing $N$, for every finite
$V_o$ and $k$.
Thus, for larger and larger $N$, i.e., in the limit treatment,
formula (\ref{bbb}) implies that the step length of
the staircase of Fig.\,4 in ref.~\onlinecite{mun20} decreases to zero.
The staircase of Fig.\,4 in ref.~\onlinecite{mun20} degrades to a straight line.
Any kind of ``devil's staircase'' \cite{mun20,chai95} disappears in the limit.
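This counting behavior can be checked numerically. The following minimal sketch is our own construction with illustrative parameter values (it is not taken from ref.~\onlinecite{mun20}): it minimizes the standard FK energy for a finite chain, starting from the unperturbed structure, and counts the number $L$ of occupied troughs.

```python
# Minimal numerical sketch (our construction; function names and parameter
# values are illustrative): minimize the standard FK energy
#   E = sum_i k/2 (x_{i+1} - x_i - a)^2 + V_o sum_i (1 - cos(2 pi x_i / b))
# for a finite chain and count the number L of occupied troughs.
import numpy as np
from scipy.optimize import minimize

def fk_energy(x, a, b, k, V0):
    spring = 0.5 * k * np.sum((np.diff(x) - a) ** 2)
    site = V0 * np.sum(1.0 - np.cos(2.0 * np.pi * x / b))
    return spring + site

def fk_phase(N, a, b, k=1.0, V0=0.1):
    x0 = a * np.arange(N)  # unperturbed chain as the starting structure
    res = minimize(fk_energy, x0, args=(a, b, k, V0), method="BFGS")
    x = np.sort(res.x)
    # troughs of the substrate potential are centered at integer multiples of b
    L = int(np.rint(x[-1] / b)) - int(np.rint(x[0] / b)) + 1
    return L, x
```

For $a=b$ the chain locks in with one atom per trough, $L=N$; scanning $a/b$ and recording $L$ produces steps that are set by integer trough counts, not by the rationality of $a/b$.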
\section{Phase Changes}
The next point concerns the putative Aubry phase transition
demonstrated by Fig.\,6 in ref.~\cite{mun20}.
The first eigenvalue of the second derivatives of the
potential of the chain is the frequency for a
collective movement of the chain.
For an unpinned chain it has to be zero, thus the potential has to be flat.
However, here the first eigenvalue is greater than zero, for the
blue points, as well as for the red points of Fig.\,6 in ref.\cite{mun20}.
Thus the potential is a minimum, in both cases, and the chain is pinned.
Of course, the pinning is small if $V_o$ is small.
The chain can collectively vibrate, however it cannot move.
The step in the higher eigenvalues under the red points concerns inner
vibrations of the chain.
This has nothing to do with pinned or unpinned states.
Unpinned versions of a ground state of
a finite FK chain are not correctly calculated
(if $V_o$ is not zero; and if $V_o$ is
zero then we do not have the FK model at all).
\section{Discussion}
We ask where the fascination with irrationals among workers in the field of the FK model comes from.
The early source may be the papers of
S.~Aubry~\cite{aubry83,peyr83,aubry83fr,aubryDae83,axel87,joha08}.
Our main objection to these works is the use of an infinite chain.
For any 'limit' process which one needs to treat such a chain,
the correct way would be
to study a finite chain with $N$ particles, to determine its equilibrium
structure, and then to increase the number $N$ of atoms in the chain
step by step, more and more, up to a limit.
However, such a treatment is missing in the papers of S.~Aubry.
Note that an actual infinity is not possible, nor is an infinitely long chain.
Such a construct does not exist in reality.
In mathematics one only studies the possibility
of infinite series, for example, and their convergence or divergence.
There are strict rules for the handling of 'infinity'.
In particular, we miss the treatment of the boundary conditions (BCs),
through which the equilibrium distance of the chain's springs, $a=r_{min}$
(see Fig.\,1), comes into play.
Without the BCs this parameter of the chain disappears in many FK studies.
This is questionable.
If one has free BCs
then one cannot start with a fixed `left' BC, because the minimization
will finally result in a probably different BC.
So all the steps of the twist map start in a fog.
The development by S.~Aubry~\cite{aubry92}
and others~\cite{black91,bich06,vano2020}, which uses the twist map,
ignores that the chain will find another minimum structure at the boundary,
in comparison to any twist map result~\cite{QuBo18MolPhys}.
If there are assumed fixed BCs, for example, and one starts 'left' with
the left BC, and one uses all the steps throughout the
twist map then usually the result at the 'right' end of
the chain will not fit the right BC.
(However, nobody can put BCs at infinity,
and nobody can really start at minus infinity at all.)
We note further that some workers restrict their
treatment~\cite{shar84,bahu01,mikh01,garc07,wang2011,yjma14,byl16,babu16,novak17}
(to name a few) to finite chains,
which is quite correct; however, they continue using the erroneous
contrast of rational versus irrational numbers in the finite FK model.
In positive contrast, there is a treatment~\cite{thom22}
which sorts the FK chain in
a 'commensurable' manner into the site-up potential.
\section{Conclusion}
This letter discusses the widespread theory of so-called commensurate (C)
versus incommensurate (IC) phases of the FK model\cite{mun20,chai95}.
We have seen that the C-phases
are not specially ordered phases in the sense that
all atoms are locked-in to the minima of the substrate.
In the putative IC-phases we find no broken regular arrangement of the
atoms of the chain, either.
What makes steps in the average distance $\tilde{a}$ of the chain is the
possibility that the chain contracts or stretches over different periods $L$
of the substrate.
In a 2D or 3D crystal lattice a long-range periodic order
with an irrational ratio of the periodicities\cite{chai95,perv2000} can exist.
Its description as an IC crystal is in order.
However, the $\tilde{a}$ in the FK model is not the description of
a fixed lattice. It is the result of the balance of the four different
parameters, $a,\ b,\ V_o,\ k$, and it is only an average value.
The term IC means 'out of proportion'. However, in the FK chain we have to sort
$N$ atoms into $L$ basins of the substrate, where both $N$ and $L$ are integers.
Two integers always form a proportion;
thus the coined term 'IC' is a wrong term.
One should use the already introduced terms kink and antikink.
The point of $a/b$ where the 'phase' transition occurs,
i.e., where the chain contracts or stretches to
another number of basins of the site-up potential, has nothing to do with
rational or irrational numbers.
As discussed above, one of the origins of the incorrect
'C-IC view' may be the theory of S.\,Aubry.
There is already
an analysis of this inappropriate
theory~\cite{QuBo18MolPhys,QuBo22StatMech}.
\begin{acknowledgments}
We acknowledge the financial support from the Spanish Ministerio de
Economía y Competitividad, Project No. PID2019-109518GB-I00,
and the Spanish Structures of Excellence María de Maeztu program
through Grant MDM-2017-0767.
\end{acknowledgments}
\section*{References}
\vspace*{-0.5cm}
\section{Introduction}
Acceleration strategies in the context of optimization have proved to be powerful. One example, termed \emph{heavy ball acceleration}, was introduced by Polyak~\cite{Polyak:1964}, while perhaps the most influential form of acceleration was introduced by Nesterov~\cite{Nesterov:1983} and is often called \emph{Nesterov acceleration}. Although neither is intuitive in its precise design, it has
recently been shown that both can be obtained as explicit Euler
discretizations of a certain second-order ordinary differential equation (ODE), and correspond to
accelerated variants of gradient descent. This perspective has helped demystify the ``magic'' of acceleration techniques in optimization.
There is an increasing interest in understanding the
connections between optimization
and continuous dynamical systems, especially for
accelerated gradient based methods
\cite{Candes:2016,Wibisono:2016,Krichene:2015,
Attouch:2016,Wilson:2016,Maddison:2018,Maddison:2019,Jordan:2019,Franca:2019}.
More recently, extensions of these connections
to nonsmooth settings using proximal methods have started to be considered
\cite{Attouch:2018,Attouch:2018b,May:2016,Attouch:2018c,
Attouch:2018d,Franca:2018,Franca:2018b}.
Proximal algorithms play an important role in optimization since they can
enjoy improved stability and be applied under weaker assumptions, and in many cases
the associated proximal operators have simple closed form expressions.
The majority of known proximal algorithms fall into the following types:
\begin{itemize}
\item forward-backward splitting \cite{Lions:1979,Passty:1979,Han:1988};
\item forward-backward-forward or Tseng splitting \cite{Tseng:2000};
\item Douglas-Rachford \cite{Douglas:1956,Lions:1979};
\item Davis-Yin three operator splitting \cite{Davis:2017}; and
\item alternating direction method of multipliers
(ADMM)~\cite{Glowinsky:1975,Gabay:1976}.
\end{itemize}
Many more sophisticated methods are extensions or variations of
these themes.
The first three were the only known methods for quite a long
time. Only recently have Davis and Yin \cite{Davis:2017} solved the problem of obtaining
a three operator splitting that cannot be reduced to any of the existing two operator splitting schemes. Such proximal methods are based
on fixed-point iterations of nonexpansive monotone operators.
A different technique based on projections onto separating sets has recently been proposed but will not be considered in this paper; see~\cite{Johnstone:2018} and the references therein.
ADMM dates back to
the 1970s and has gained
popularity due to its effectiveness
in solving large-scale problems with sparse and low rank regularizations~\cite{Boyd:2011}.
We will focus on proximal algorithms
that have previously been introduced from an \emph{operator splitting}
approach.\footnote{One should not confuse operator splitting in
convex analysis with splitting methods for ODEs.}
The literature on operator splitting is huge,
so here we simply mention that these methods have origins in functional analysis
and differential
equations \cite{Browder:1963,Browder:1963b,Browder:1963c}, and
were later explored in convex
analysis and optimization; see \cite{Rockafellar:1976,Combettes:2004,
Combettes:2011,Combettes:2005,Guler:1991,Eckstein:1992}.
(See also \cite{BauschkeBook,Ryu:2016} for an
introduction and historical~account.)
\subsection{Summary of main results}\label{main_results}
We consider algorithms for solving
\begin{equation}\label{optimization}
\min_{x\in\mathbb{R}^n} \{ F(x) \equiv f(x) + g(x) + w(x) \}
\end{equation}
where $f$, $g$ and $w$ are functions from $\mathbb{R}^{n}$ into $\mathbb{R}$
obeying the following condition.
\begin{assumption}\label{assump1}
The functions $f$ and $g$ are proper, closed, and convex. The function $w$ is differentiable.
\end{assumption}
This assumption will be used throughout the paper.
The convexity requirements ensure that the proximal operators
associated to $f$ and $g$ are well-defined, i.e., have a unique solution.
For simplicity, in the discussion below we also assume that $f$ and $g$ are differentiable, however this condition can be relaxed.
Perhaps surprisingly, we show that \emph{all} of the above mentioned (bulleted) proximal
algorithms can be obtained as different discretizations (more precisely first-order integrators) of the simple \emph{gradient flow} introduced by Cauchy~\cite{Cauchy:1847} and given by
\begin{equation}
\label{gradflow}
\dot{x} = -\nabla F(x),
\end{equation}
where $x=x(t) \in \mathbb{R}^n$, $t$ is the time variable, and
$\dot{x} \equiv dx / dt$.
Our approach makes connections between optimization algorithms with
\emph{ODE splitting methods}
\cite{McLachlan:2002,MacNamara:2016}, which is a powerful approach for designing algorithms.
Interestingly, within this approach, ADMM emerges as a
\emph{rebalanced splitting} scheme, which is a recently introduced technique designed to preserve steady states of the underlying ODE \cite{Speth:2013}. We show that the dual variable associated with ADMM is precisely the \emph{balance coefficient} used in this approach.
We show that the other algorithms also preserve
steady states of the gradient flow, but for different reasons, which
in turn sheds light on the connections between ODE splitting ideas and operator
splitting techniques from convex analysis.
The emphasis of this paper is on accelerated algorithms. We show that by employing
similar discretization strategies to the \emph{accelerated gradient flow} given by
\begin{equation}
\label{secode}
\ddot{x} + \eta(t)\dot{x} = -\nabla F(x),
\end{equation}
where $\ddot{x} \equiv d^2 x/dt^2$ and $\eta(t)$ is a damping coefficient chosen to be either
\begin{equation}
\label{nagdamp}
\eta(t) = r/t \qquad \mbox{(decaying damping with $r\ge 3$)}
\end{equation}
or
\begin{equation}
\label{hbdamp}
\eta(t) = r \qquad \mbox{(constant damping $r > 0$),}
\end{equation}
one obtains \emph{accelerated variants} of the respective
algorithms.
The vanishing damping \eqref{nagdamp} is related to the ODE
from which Nesterov's accelerated gradient method may be obtained~\cite{Candes:2016},
while the constant damping \eqref{hbdamp}
is related to the ODE associated with Polyak's heavy ball method~\cite{Polyak:1964}.
We will refer to algorithms arising as discretizations of
\eqref{secode} derived with \eqref{nagdamp} as
accelerated variants with ``decaying damping,''
and those derived with~\eqref{hbdamp} as
accelerated variants with ``constant damping.''
In our analysis we treat both types
in a unified manner. We also note that choices other than
\eqref{nagdamp} and \eqref{hbdamp} are possible and would be automatically covered by our framework.
To the best of our knowledge, the accelerated frameworks that we introduce are new, although some recover known methods as special cases.
We show that they are all \emph{first-order integrators} that \emph{preserve steady states} of the accelerated gradient flow~\eqref{secode}.
We also show how this continuous-to-discrete analysis can be extended to monotone operators using two approaches: \emph{differential inclusions} (nonsmooth dynamical systems) and the combination of ODEs and Yosida regularized operators.
Overall, this paper brings an alternative
perspective on the
design of proximal optimization algorithms
by establishing
tight relationships with continuous dynamical systems and numerical analyses of ODEs.
Since the dynamical systems \eqref{gradflow} and \eqref{secode} play
a central role in the paper, we summarize their convergence rates
in Table~\ref{convergence}.
One expects that a suitable discretization
would, at least up to a small error, preserve the same rates. However,
a discrete analysis is still necessary to formalize such results.
\begin{table}[t]
\renewcommand\arraystretch{1.2}
\setlength{\tabcolsep}{15pt}
\centering
\begin{tabular}{@{}rcc@{}}
& convex & strongly convex \\
\midrule[1.5pt]
gradient flow \eqref{gradflow}
& $\Order\left(t^{-1}\right)$ & $\Order\left(e^{-m t }\right)$ \\
accelerated gradient flow \eqref{nagdamp}
& $\Order\left(t^{-2}\right)$ & $\Order\left( t^{-2 r/3} \right)$ \\
accelerated gradient flow \eqref{hbdamp}
& $\Order\left( t^{-1}\right)$ & $\Order\big( e^{- \sqrt{m} t } \big)$ \\
\bottomrule[1.5pt]
\end{tabular}
\caption{
\label{convergence}
Convergence rates of gradient flow \eqref{gradflow} vs. accelerated
gradient flow \eqref{secode} with decaying damping \eqref{nagdamp} and
constant damping
\eqref{hbdamp}.
Let $F^\star \equiv \inf_{x\in\mathbb{R}^n} F(x)$ and $x^\star$
be the unique minimizer
of an $m$-strongly convex function $F$.
We show rates for
$F(x(t)) - F^\star$
when $F$ is convex,
and for $\| x(t) - x^\star\|^2$ when $F$ is strongly convex.
Note the tradeoff between decaying and constant damping
for convex vs. strongly convex functions. These rates follow as special
cases of the analysis in~\cite{Franca:2018b}.
}
\end{table}
\subsection{Basic building blocks}
Standard discretizations of the derivatives appearing in the first- and
second-order systems \eqref{gradflow} and \eqref{secode}, respectively, are
\begin{align}
\dot{x}(t_k) &= \pm(x_{k\pm1}-x_k)/h + \Order(h),
\label{eulerfirst} \\
\ddot{x}(t_k) &= (x_{k+1}-2x_k + x_{k-1})/h^2 + \Order(h),
\label{eulerfirst2}
\end{align}
where $t_k = kh$ for $k\in\{0,1,2,\dotsc\}$, $x_k$ is an approximation to the true
trajectory $x(t_k)$,
and $h >0$ is the step size.
To discretize \eqref{secode} it will be convenient to define
\begin{equation}
\label{xhat}
\hat{x}_k \equiv x_k + \gamma_k(x_k - x_{k-1})
\quad \text{with} \quad
\gamma_k = \begin{cases} \tfrac{k}{k+r} & \mbox{for decaying damping~\eqref{nagdamp},} \\
1 - r h & \mbox{for constant damping~\eqref{hbdamp}.}
\end{cases}
\end{equation}
Using this notation and the ``minus'' choice in~\eqref{eulerfirst}, one can verify that
\begin{equation}
\label{eulersec}
\ddot{x}(t_k) + \eta(t_k) \dot{x}(t_k) = (x_{k+1}-\hat{x}_k)/h^2 +
\Order(h).
\end{equation}
As usual, we will often keep the leading order term
and neglect $\Order(h)$ terms.
We note that although we consider \eqref{xhat}, other choices of damping $\eta(t)$
in \eqref{secode} are possible, which would lead to the same discretization \eqref{eulersec} but with a different $\gamma_k$ in \eqref{xhat}.
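For instance, for the constant damping choice in \eqref{xhat}, where $\gamma_k = 1 - rh$, the identity \eqref{eulersec} can be verified directly from \eqref{eulerfirst} and \eqref{eulerfirst2}:
\begin{align*}
x_{k+1} - \hat{x}_k
&= x_{k+1} - x_k - (1 - r h)(x_k - x_{k-1}) \\
&= \left( x_{k+1} - 2 x_k + x_{k-1} \right) + r h \left( x_k - x_{k-1} \right) \\
&= h^2 \ddot{x}(t_k) + r h^2\, \dot{x}(t_k) + \Order(h^3),
\end{align*}
and dividing by $h^2$ recovers \eqref{eulersec} with $\eta(t) = r$.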
Since this paper focuses on proximal methods, the \emph{resolvent} operator plays a central role, and from a dynamical systems perspective so do \emph{implicit} discretizations. For example,
consider the implicit Euler discretization of~\eqref{gradflow} given by
\begin{equation}
\label{gd_implicit}
(x_{k+1} - x_{k} ) / h = -\nabla F(x_{k+1}).
\end{equation}
This nonlinear equation in $x_{k+1}$ can be solved using the \emph{resolvent},%
\footnote{The resolvent was introduced by Fredholm in the late 19th century
to study integral equations related to partial differential equations. This name was coined
by Hilbert who used it extensively to develop the theory of linear operators. Usually, the resolvent is defined as
$R(\lambda) \equiv (A - \lambda I)^{-1}$ when studying
the spectral decomposition of $A$. However, in convex analysis the resolvent
is defined as~\eqref{resolvent}, but both are related via
$J_{\lambda A} = \lambda^{-1} R(-\lambda^{-1})$.
}
which for an operator $A$ and
spectral parameter $\lambda \in \mathbb{C}$ is defined as
\begin{equation}
\label{resolvent}
J_{\lambda A} \equiv \left( I + \lambda A \right)^{-1}.
\end{equation}
We are interested in cases where $A$ is a maximal monotone operator (see Section~\ref{nonsmooth})
and will always
use $\lambda > 0$ as a real number related to the
discretization step size.
For instance,
when $A = \nabla F$, $\lambda > 0$, and the \emph{proximal operator} $\prox_{\lambda F}$ of $F$ is well-defined, it follows that the resolvent is equal to%
\footnote{This
holds for nondifferentiable functions as well where $A = \partial F$ is the
subdifferential of $F$. In this case the differential equation \eqref{gradflow}
is replaced by a differential inclusion (see Section~\ref{nonsmooth}).
Thus, although we often denote the proximal operator by $J_{\lambda \nabla F}$ due
to the connection with ODEs, the reader
should keep in mind that this applies to nonsmooth functions as well and the resulting
algorithm does not require differentiability.
Recall also that $\partial F(x) = \{ \nabla F(x)\}$ when $F$ is differentiable.
}
\begin{equation}
\label{prox}
J_{\lambda \nabla F}(v) \equiv
\prox_{\lambda F}(v) \equiv \argmin_x \left(
F(x) + \tfrac{1}{2\lambda}\| x - v\|^2\right).
\end{equation}
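As a concrete instance, for $F(x) = \|x\|_1$ the minimization in \eqref{prox} decouples per coordinate and yields the well-known soft-thresholding operator. A minimal sketch (the function name is ours):

```python
import numpy as np

def prox_l1(v, lam):
    # prox_{lam ||.||_1}(v)_i = sign(v_i) * max(|v_i| - lam, 0), elementwise
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)
```

This closed form is one reason proximal algorithms are effective for sparse regularization.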
It follows from \eqref{gd_implicit}, \eqref{resolvent}, and \eqref{prox} that
\begin{equation}
\label{proximal_point}
x_{k+1}
= J_{h \nabla F}(x_k) = \prox_{h F}(x_k),
\end{equation}
which is the \emph{proximal point algorithm}~\cite{Martinet1,Martinet2} (see \cite{Rockafellar:1976b, Rockafellar:1976c, Guler:1991} for a thorough analysis and generalizations).
When $F$ is convex, the proximal point algorithm
is known to converge when a minimizer
exists. In practice, it is often more stable than gradient descent and allows for a more aggressive
choice of step size, as is common for implicit discretizations.
Although the proximal point algorithm has the cost of computing the operator in~\eqref{prox}, it can be employed even when $F$ is nondifferentiable.
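As a one-dimensional illustration of \eqref{proximal_point} (our own toy example), take $F(x)=\tfrac{1}{2}x^2$, for which $\prox_{hF}(v) = v/(1+h)$ in closed form:

```python
def proximal_point(prox, x0, h, iters):
    # x_{k+1} = prox_{hF}(x_k): implicit Euler on the gradient flow
    x = x0
    for _ in range(iters):
        x = prox(x, h)
    return x

# closed-form prox of F(x) = x^2/2 in one dimension
prox_quad = lambda v, h: v / (1.0 + h)
```

Even with the aggressive step size $h=10$ the iterates contract to the minimizer $x^\star = 0$, whereas explicit gradient descent, $x_{k+1} = (1-h)x_k$ for this objective, diverges for $h>2$.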
The result below follows from~\eqref{proximal_point} for the case of gradient flow, and from~\eqref{xhat}, \eqref{eulersec}, and \eqref{resolvent} for the case of accelerated gradient flow. This lemma is a key building block for the design of the algorithmic frameworks in this paper.
\begin{lemma}
\label{implilemma}
An implicit Euler discretization of the gradient flow \eqref{gradflow} and the accelerated gradient flow
\eqref{secode} with stepsize $h > 0$ yields, respectively, the updates
\begin{equation} \label{impli12}
x_{k+1} =
\begin{cases}
J_{h \nabla F}(x_k) & \text{for the gradient flow \eqref{gradflow},} \\
J_{h^2 \nabla F}(\hat{x}_k) & \text{for the accelerated gradient flow \eqref{secode},}
\end{cases}
\end{equation}
where $\hat{x}_k$ is defined in~\eqref{xhat} based on which type of damping is used.
\end{lemma}
The update~\eqref{impli12} is a proximal point computation associated with either $x_k$ or $\hat{x}_k$, depending on whether acceleration is used.
In the accelerated case, since these updates result from the discretization of
\eqref{secode}, we obtain an intuitive interpretation:
in the case of \eqref{xhat} the parameter $r$ controls the amount of friction (dissipation) in the system, and for the first choice the friction is decaying over
time, while for the second choice the friction is constant. Other choices are also possible.
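The accelerated update in \eqref{impli12} can be sketched analogously (a minimal illustration with our own naming; the decaying-damping $\gamma_k$ of \eqref{xhat} is the default):

```python
def accelerated_proximal_point(prox, x0, h, iters, r=3.0, decaying=True):
    # x_{k+1} = prox_{h^2 F}(xhat_k), with xhat_k = x_k + gamma_k (x_k - x_{k-1})
    x_prev = x = x0
    for k in range(iters):
        gamma = k / (k + r) if decaying else 1.0 - r * h
        xhat = x + gamma * (x - x_prev)
        x_prev, x = x, prox(xhat, h * h)
    return x

# illustrative prox for F(x) = x^2/2: prox_{lam F}(v) = v / (1 + lam)
prox_quad = lambda v, lam: v / (1.0 + lam)
```

On the quadratic example the momentum term produces damped oscillations around the minimizer while the magnitude of the iterates still decays geometrically.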
\subsection{Outline}
In Section~\ref{splitting_methods},
we introduce the balanced and rebalanced splitting approaches
recently proposed in \cite{Speth:2013}. However, we propose
two modifications to this scheme that will enable us to make connections
with existing optimization algorithms as well as propose new ones.
In Section~\ref{proximal_splitting}, we derive accelerated extensions of ADMM,
accelerated extensions of the Davis-Yin method
(forward-backward and Douglas-Rachford follow as special cases),
and accelerated extensions of Tseng's method from an ODE splitting approach.
We also show that all discretizations considered
in this paper are
proper first-order integrators that preserve steady states of the underlying ODE.
In Section~\ref{nonsmooth}, we argue that our
analysis extends to the nonsmooth setting, and to
maximal monotone operators more generally.
Finally, numerical results in Section~\ref{applications} illustrate the speedup achieved by our accelerated variants.
\section{Splitting Methods for ODEs}
\label{splitting_methods}
Assume that solving or simulating the ODE
\begin{equation}
\label{first_order}
\dot{x} = \varphi(x)
\end{equation}
is an intractable problem, i.e., the structure of $\varphi$ makes the problem not amenable to a computationally feasible iterative numerical procedure. We denote the flow map of \eqref{first_order}
by $\Phi_t$.
The idea is then to split the vector
field $\varphi:\mathbb{R}^n \to \mathbb{R}^n$ into
parts, each integrable or amenable to a feasible numerical
approximation.
For simplicity, consider
\begin{equation}
\label{splitphi}
\varphi = \varphi^{(1)} + \varphi^{(2)}
\end{equation}
and suppose that both
\begin{equation}
\label{spliteq}
\dot{x} = \varphi^{(1)}(x),
\qquad
\dot{x} = \varphi^{(2)}(x) ,
\end{equation}
are feasible, either analytically or numerically,
with respective flow maps
$\Phi_t^{(1)}$ and $\Phi_t^{(2)}$. For step size $h$, it can be shown
that the simplest composition \cite{Hairer}
\begin{equation}
\label{composition}
\hat{\Phi}_h = \Phi_h^{(2)}\circ\Phi_h^{(1)}
\end{equation}
provides a first-order approximation in the following sense.
\begin{definition} \label{order_def}
Consider a map $\hat{\Phi}_h : \mathbb{R}^n \to \mathbb{R}^n$, with step size $h > 0$, which approximates the true
flow $\Phi_t$ of the ODE \eqref{first_order} with vector field $\varphi : \mathbb{R}^n \to \mathbb{R}^n$.
Then $\hat{\Phi}_h$ is said to be an integrator of order $p$ if, for any $x \in \mathbb{R}^n$, it holds that
\begin{equation}
\| \Phi_h(x) - \hat{\Phi}_{h}(x) \| = \Order(h^{p+1}) .
\end{equation}
This implies a global error
$\| \Phi_{t_k}(x) - (\hat{\Phi}_h)^{k}(x)\|=\Order(h)$
for a finite interval
$t_k = hk$.
\end{definition}
There are different ways to compose individual flows, each
resulting in a different method.
For instance,
one can use a preprocessor map
$\chi_h:\mathbb{R}^n \to \mathbb{R}^n$ such that
\begin{equation}
\label{preprocessor}
\tilde{\Phi}_h = \chi_h^{-1}\circ \hat{\Phi}_h \circ \chi_h
\end{equation}
is more accurate than $\hat{\Phi}_h$ with little extra cost.
There are many interesting ideas in splitting methods for ODEs,
some quite sophisticated.
We mention these options to highlight that exploration beyond that considered in this paper is possible~\cite{McLachlan:2002,Blanes:2008,Hairer}.
Naturally, more accurate methods are more expensive since they
involve extra computation of individual flows. A good balance between
accuracy and computational cost are methods of order $p=2$.
Here we will focus on
the simple first-order scheme \eqref{composition}, which suffices to make
connections with many optimization methods.
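The order claim of Definition~\ref{order_def} for the composition \eqref{composition} can be checked numerically on a linear system $\dot{x} = -(A+B)x$, where both sub-flows are matrix exponentials (the matrices below are an arbitrary non-commuting choice of ours):

```python
import numpy as np
from scipy.linalg import expm

# arbitrary non-commuting matrices (illustrative choice)
A = np.array([[2.0, 1.0], [0.0, 1.0]])
B = np.array([[1.0, 0.0], [1.0, 3.0]])

def lie_step(x, h):
    # Phi_h^(2) o Phi_h^(1): exact sub-flows composed as in the text
    return expm(-h * B) @ (expm(-h * A) @ x)

def local_error(h, x=np.array([1.0, 1.0])):
    # one-step error against the exact flow of the full system
    return np.linalg.norm(lie_step(x, h) - expm(-h * (A + B)) @ x)
```

Since $A$ and $B$ do not commute, the local error behaves as $\Order(h^2)$: halving $h$ reduces it by roughly a factor of four, consistent with a first-order integrator.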
\subsection{Balanced splitting}
\label{bal_splitting}
In general, splittings such as \eqref{splitphi} do
not preserve steady states of the system \eqref{first_order}.
Recently, an approach designed to preserve steady states
was proposed under the name of
\emph{balanced splitting} \cite{Speth:2013}.
The idea is to introduce a balance coefficient $c = c(t)$ into~\eqref{first_order} by writing $\dot{x} = \varphi(x) + c - c$
and then perform a splitting as above, which results in the pair of ODEs
\begin{equation}
\label{balancedsplit}
\dot{x} = \varphi^{(1)}(x) + c,
\qquad
\dot{x} = \varphi^{(2)}(x) - c.
\end{equation}
We now show how this may be used to preserve steady states of the system~\eqref{first_order}.
First, assume $x_\infty$ is a steady state of the system~\eqref{first_order} so that $x_\infty = \lim_{t\to\infty}x(t)$ satisfies $\varphi^{(1)}(x_\infty) + \varphi^{(2)}(x_\infty) = 0$. If $c_\infty = \lim_{t\to\infty} c(t)$ is found to satisfy
\begin{equation}
\label{cinfty}
c_\infty = \tfrac{1}{2}\big(\varphi^{(2)}(x_\infty) -
\varphi^{(1)}(x_\infty)\big)
\end{equation}
then $x_\infty$ is also a stationary state for both ODEs in~\eqref{balancedsplit} since
\begin{subequations}
\label{steadystate}
\begin{align}
\varphi^{(1)}(x_\infty) + c_\infty
&= \tfrac{1}{2}\big(\varphi^{(1)} + \varphi^{(2)}\big)(x_\infty) = 0, \\
\varphi^{(2)}(x_\infty) - c_\infty
&= \tfrac{1}{2}\big(\varphi^{(2)} + \varphi^{(1)}\big)(x_\infty) = 0.
\end{align}
\end{subequations}
To establish a result for the other direction, now assume that $x_\infty$
is a steady state of both ODEs in~\eqref{balancedsplit}. It follows that
\begin{equation}
\label{cinfty.2}
c_\infty
= \varphi^{(2)}(x_\infty)
= - \varphi^{(1)}(x_\infty)
= \tfrac{1}{2}\big(\varphi^{(2)}(x_\infty) -
\varphi^{(1)}(x_\infty)\big).
\end{equation}
From~\eqref{cinfty.2} and the fact that $x_\infty$ is stationary for both systems in~\eqref{balancedsplit} gives
\begin{subequations}\label{steadystate.all}
\label{steadystate.2}
\begin{align}
0 &= \varphi^{(1)}(x_\infty) + c_\infty
= \tfrac{1}{2}\big(\varphi^{(1)} + \varphi^{(2)}\big)(x_\infty), \\
0 &= \varphi^{(2)}(x_\infty) - c_\infty
= \tfrac{1}{2}\big(\varphi^{(2)} + \varphi^{(1)}\big)(x_\infty),
\end{align}
\end{subequations}
so that both equations in~\eqref{steadystate.all} imply that $x_\infty$ is stationary for~\eqref{first_order}.
Motivated by~\eqref{cinfty.2}, this can be
implemented by computing the updates
$c_k = \tfrac{1}{2}\big(\varphi^{(2)}(x_k) - \varphi^{(1)}(x_k) \big)$
during the numerical method,
together with discretizations of the ODEs in
\eqref{balancedsplit}. Note that this approach requires the explicit
computation of $\varphi^{(i)}$. In optimization, one might have
$\varphi^{(i)} = -\nabla f^{(i)}$, in which case it is not well-defined when
$f^{(i)}$ is nonsmooth.
We address this concern in the next section.
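As a toy illustration of \eqref{balancedsplit} (our own example, not from \cite{Speth:2013}), take the scalar fields $\varphi^{(1)}(x) = -(x-1)$ and $\varphi^{(2)}(x) = -(x-3)$, so that the combined field has steady state $x_\infty = 2$ and, by \eqref{cinfty}, $c_\infty = 1$:

```python
def balanced_split_step(x, h):
    # illustrative scalar fields; the combined steady state is x = 2
    phi1 = lambda y: -(y - 1.0)
    phi2 = lambda y: -(y - 3.0)
    c = 0.5 * (phi2(x) - phi1(x))           # balance coefficient, cf. eq. (cinfty)
    x_half = x + h * (phi1(x) + c)          # Euler step on the first split system
    return x_half + h * (phi2(x_half) - c)  # Euler step on the second split system

x = 0.0
for _ in range(300):
    x = balanced_split_step(x, 0.05)        # x approaches the steady state 2
```

At $x_\infty = 2$ both sub-steps vanish identically, so the steady state of the full field is preserved exactly by the split scheme, in line with \eqref{steadystate}.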
\subsection{Rebalanced splitting}
\label{rebal_splitting_sec}
The \emph{rebalanced splitting} approach was
proposed by \cite{Speth:2013}, and claimed to be
more stable than the balanced splitting of Section~\ref{bal_splitting}.
Importantly,
for our purposes, it allows for the computation of a balance coefficient using only the previous iterates so that, in particular, no evaluation of $\varphi^{(i)}$ is needed.
Let $t_k = kh$ for $k\in\{0,1,2,\dotsc\}$ and step size $h>0$.
We then integrate
$\dot{x} = \varphi^{(1)}(x) + c_k$ with initial condition
$x(t_k) = x_k$ over the interval $[t_k, t_k+h]$ to obtain
$x_{k+1/2}$, and then %
integrate
$\dot{x} = \varphi^{(2)}(x) - c_k$ over the interval $[t_k, t_k+h]$
with initial condition $x(t_k) = x_{k+1/2}$ to obtain
$x_{k+1}$ (note that $c_k$ is kept fixed during this procedure). The resulting integrals are given by
\begin{subequations}
\label{rebal}
\begin{align}
x_{k+1/2} &= x_k +
\int_{t_k}^{t_k + h}\big(\varphi^{(1)}(x(t))+c_k\big)dt,
\label{rebal1}\\
x_{k+1} &= x_{k+1/2} +
\int_{t_k}^{t_k + h}\big(\varphi^{(2)}(x(t))-c_k\big)dt. \label{rebal2}
\end{align}
\end{subequations}
In light of~\eqref{cinfty.2}, two reasonable ways of computing $c_{k+1}$ are
given by the \emph{average} of either $\tfrac{1}{2}(\varphi^{(1)} - \varphi^{(2)})$ or $\varphi^{(2)}$ over the time step, which with~\eqref{rebal} gives, respectively,
\begin{subequations}
\label{crebal}
\begin{align}
c_{k+1}
&= \dfrac{1}{h}\int_{t_k}^{t_k + h}\!\! \dfrac{ \varphi^{(2)}(x(t)) - \varphi^{(1)}(x(t)) }{2} dt
= c_k + \dfrac{1}{h}\left( \dfrac{x_{k+1} + x_k}{2} - x_{k+1/2} \right) , \label{crebal.1} \\
c_{k+1}
&= \dfrac{1}{h}\int_{t_k}^{t_k+h} \!\! \varphi^{(2)}(x(t)) dt
= c_k + \dfrac{1}{h}\left(x_{k+1}-x_{k+1/2} \right). \label{crebal.2}
\end{align}
\end{subequations}
In contrast to the balanced case in Section~\ref{bal_splitting}, neither of these options
needs to compute $\varphi^{(i)}$
to obtain $c_{k+1}$, as shown in~\eqref{crebal}.
Thus, the above approaches are better suited for nonsmooth optimization since they do not require explicit gradient computations.
Both options in \eqref{crebal} are slight variations of the approach proposed in \cite{Speth:2013}. To motivate the potential usefulness of \eqref{crebal.2} compared to \eqref{crebal.1}, let us first remark that
the updates to the balance coefficient play an important role in the
stability of the numerical method \cite{Speth:2013}. For example,
if $\varphi^{(2)}$ is much more ``stiff'' than
$\varphi^{(1)}$, the method may be unstable for large step sizes.
In an optimization context where $\varphi^{(2)}$ may be related to a regularizer (e.g., $\varphi^{(2)}(x) = \partial \| x \|_1$),
it may be desirable to preserve steady states
only through the first system in \eqref{balancedsplit}, which leads to the choice~\eqref{crebal.2}. In Section~\ref{extension_admm}, we show that this type of rebalanced splitting is related to the ADMM algorithm since the dual variable is precisely the balance coefficient in \eqref{crebal.2}.
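A minimal sketch of this scheme (our construction; the scalar fields are an illustrative choice), integrating \eqref{rebal} with single explicit Euler sub-steps and updating the balance coefficient via \eqref{crebal.2}:

```python
def rebalanced_split(x0, h, iters):
    # illustrative scalar fields with combined steady state x = 2
    phi1 = lambda y: -(y - 1.0)
    phi2 = lambda y: -(y - 3.0)
    x, c = x0, 0.0
    for _ in range(iters):
        x_half = x + h * (phi1(x) + c)           # one Euler step on eq. (rebal1)
        x_new = x_half + h * (phi2(x_half) - c)  # one Euler step on eq. (rebal2)
        c = c + (x_new - x_half) / h             # balance update, eq. (crebal.2)
        x = x_new
    return x, c
```

Note that the update of $c$ uses only the iterates, with no explicit evaluation of $\varphi^{(i)}$; the iterates approach $x_\infty = 2$ while $c_k \to \varphi^{(2)}(x_\infty) = 1$.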
\section{Proximal Algorithms from ODE Splittings}
\label{proximal_splitting}
We now use the previous ideas to construct
implicit discretizations of both the gradient flow \eqref{gradflow} and
the accelerated gradient flow \eqref{secode}.
However, our emphasis is on the latter since the analysis is more involved and can be easily
adapted to the former.
In addition to Assumption~\ref{assump1}, throughout this section we assume the following conditions.
\begin{assumption} \label{assump2}
The functions $f$, $g$ and $w$ in the optimization problem \eqref{optimization} are continuously differentiable with Lipschitz continuous gradients.
\end{assumption}
Lipschitz continuity of the gradients ensures uniqueness
of solutions of the ODEs~\cite{Butcher}.
Finally, in continuous-time both the gradient flow \eqref{gradflow} and the accelerated
gradient flow \eqref{secode},
with damping as in \eqref{nagdamp} or \eqref{hbdamp}, asymptotically solve the optimization problem
\eqref{optimization} since these systems are stable and their trajectories tend to lower level
sets of $F$ \cite{Franca:2018b,Franca:2018}. In the following we construct suitable discretizations of these ODEs.
\subsection{Accelerated extensions of ADMM}
\label{extension_admm}
Let us introduce a balance coefficient $c=c(t)$ and write
\eqref{secode} as a first-order system:
\begin{equation} \label{secbal}
\dot{x} = v , \qquad
\dot{v} = \underbrace{-\eta(t) v - \nabla f(x) -\nabla w(x)}_{\varphi^{(1)}} \underbrace{-\nabla g(x)}_{\varphi^{(2)}} +c - c .
\end{equation}
Splitting the second ODE above as indicated, we obtain the two independent systems
\begin{equation}
\label{admm_twosplit_pre1}
\begin{cases}
\dot{x} &= v \\ \dot{v} &= -\eta(t) v - \nabla f(x) - \nabla w(x) + c ,
\end{cases} \qquad
\begin{cases}
\dot{x} &= v \\
\dot{v} &= -\nabla g(x) - c .
\end{cases}
\end{equation}
Note that we are only splitting the second equation in \eqref{secbal}.
It will be convenient to treat each of these respective systems in their equivalent second-order forms:
\begin{equation} \label{admm_twosplit}
\ddot{x} + \eta(t) \dot{x} = -\nabla f(x) - \nabla w(x) + c , \qquad
\ddot{x} = -\nabla g(x) - c.
\end{equation}
We now discretize these systems using the results from Lemma~\ref{implilemma}, although we introduce some intermediary steps that will be justified by our analysis later. To this end,
let us choose a step size parametrized as $h \equiv \sqrt{\lambda}$, and then use \eqref{eulersec} (after dropping the $\Order(h)$ error term) and
a semi-implicit discretization on the first
equation of \eqref{admm_twosplit} to obtain the equation
\begin{equation} \label{admm_disc1}
x_{k+1/2} - \hat{x}_k = -\lambda \nabla f(x_{k+1/2}) -
\lambda \nabla w(\hat{x}_k) + \lambda c_k .
\end{equation}
This can now be solved
with the resolvent \eqref{resolvent} (i.e., in an analogous way as the second relation in \eqref{impli12}) to obtain the equation
\begin{equation} \label{admm_up1}
x_{k+1/2} = J_{\lambda \nabla f}\big( \hat{x}_k - \lambda \nabla w(\hat{x}_k) + \lambda c_k \big).
\end{equation}
For the second ODE in \eqref{admm_twosplit} we use \eqref{eulerfirst2} (after dropping the $\Order(h)$ term)
and an implicit discretization to obtain the equation
\begin{equation} \label{xtilde}
\tilde{x}_{k+1} -2 x_{k+1/2} + \hat{x}_k = -\lambda \nabla g(x_{k+1}) -
\lambda c_k
\end{equation}
where
\begin{equation}\label{xtilde1}
\tilde{x}_{k+1} = x_{k+1} + (x_{k+1/2} - \hat{x}_k).
\end{equation}
Note that the endpoint $\tilde{x}_{k+1}$ is related to the other endpoint $x_{k+1}$ via the momentum term $(x_{k+1/2} - \hat{x}_k)$ based on the first splitting, and together this results
in\footnote{Note that $\tilde{x}_{k+1}$ in \eqref{xtilde1} is a little further away from
$x_{k+1}$, which makes the algorithm ``look ahead'' and implicitly introduces dependency on the curvature of $g$ in the resulting update.}
\begin{equation} \label{admm_disc2}
x_{k+1} - x_{k+1/2} = -\lambda \nabla g(x_{k+1}) - \lambda c_k.
\end{equation}
This implicit equation can again be solved with the resolvent \eqref{resolvent} yielding
\begin{equation} \label{admm_up2}
x_{k+1} = J_{\lambda \nabla g}\big( x_{k+1/2} - \lambda c_k \big).
\end{equation}
For the balance coefficient $c$, we use the update \eqref{crebal.2} based on $\varphi^{(2)} = -\nabla g$ (see \eqref{secbal}). An implicit discretization is equivalent to approximating the integral by its upper limit, which in this case results in
\begin{equation} \label{baladmm1}
c_{k+1} = \dfrac{1}{h}\int_{t_k}^{t_k+h} \varphi^{(2)} (x(t)) dt =
- \nabla g(x_{k+1}) + \Order(h).
\end{equation}
Using \eqref{admm_disc2}, and neglecting $\Order(h)$ terms, we thus obtain
\begin{equation}\label{admm_up3}
c_{k+1} = c_k + \lambda^{-1}\left( x_{k+1}-x_{k+1/2} \right).
\end{equation}
Collecting the updates \eqref{admm_up1}, \eqref{admm_up2}, \eqref{admm_up3}, and
\eqref{xhat} we obtain Algorithm~\ref{agenadmmrebal}.
\begin{algorithm}
\caption{Accelerated extension of ADMM for solving problem~\eqref{optimization}.\label{agenadmmrebal}}
\begin{algorithmic}
\STATE Choose $\lambda > 0$, and initialize $c_0$ and $\hat{x}_0$.
\STATE Choose $r \geq 3$ if decaying damping~\eqref{nagdamp}, or $r > 0$ if constant damping~\eqref{hbdamp}.
\FOR{$k=0,1,\dotsc$}
\STATE $x_{k+1/2} \leftarrow J_{\lambda \nabla f}\left(
\hat{x}_k - \lambda \nabla w(\hat{x}_k) + \lambda c_k\right)$
\STATE $x_{k+1} \leftarrow J_{\lambda \nabla g}( x_{k+1/2} - \lambda c_k)$
\STATE $c_{k+1} \leftarrow c_k + \lambda^{-1}
\left( x_{k+1}-x_{k+1/2}\right)$
\STATE Using $h=\sqrt{\lambda}$ and $r$, compute $\gamma_{k+1}$ and $\hat{x}_{k+1}$ from~\eqref{xhat}.
\ENDFOR
\end{algorithmic}
\end{algorithm}
We would like to stress some important aspects of Algorithm~\ref{agenadmmrebal}.
\begin{itemize}
\item The standard ADMM
\cite{Gabay:1976,Glowinsky:1975}
is recovered from Algorithm~\ref{agenadmmrebal} when
$w=0$ in problem~\eqref{optimization} and no acceleration is used, i.e., when $\gamma_k=0$ so that $\hat{x}_k = x_k$ for all $k$.
Algorithm~\ref{agenadmmrebal} extends ADMM to handle the case $w \neq 0$ in problem~\eqref{optimization} by incorporating $\nabla w$ in the update to $x_{k+1/2}$ in~\eqref{admm_up1}.
\item The dual vector update in ADMM is here represented by the update to the balance coefficient $c_k$, which as described earlier aims to preserve critical points of the underlying ODE. This brings new meaning to the dual vector.
\item Acceleration through the update to $\hat{x}_k$, based on the decaying and constant damping choices in~\eqref{xhat}, has been considered.
However, one is free to consider other damping functions $\eta(t)$ in the dynamical system \eqref{secode} as well. By a suitable discretization, this would lead to a new update to $\gamma_k$ in \eqref{xhat}.
For example, choosing $\eta(t) = r_1/t + r_2$ for constants $\{r_1,r_2\} \subset(0,\infty)$ yields
\begin{equation} \label{damping_combination}
\gamma_k = k/(k+r_1) - r_2 h.
\end{equation}
This observation is valid for every accelerated algorithm derived in this paper.
\item When decaying damping is chosen in \eqref{xhat} and $w=0$ in problem~\eqref{optimization}, Algorithm~\ref{agenadmmrebal} is similar to the Fast ADMM proposed in~\cite{Goldstein:2014}. They differ in that the latter also ``accelerates'' the dual variable $c$ (i.e., the Lagrange multiplier update). Connections between Fast ADMM and continuous dynamical systems were recently considered in \cite{Franca:2018,Franca:2018b} and correspond to system \eqref{secode} with $w=0$. However, in this case the discretization is not a rebalanced splitting.
\item The choice of discretization leading to \eqref{admm_disc2}, which involves relating $\tilde{x}_{k+1}$ to $x_{k+1}$ (recall \eqref{xtilde1}),
is motivated by obtaining updates similar to ADMM.
This choice is formally justified by Theorem~\ref{admm_first_order}, which shows that the discretization
has a local error of $\Order(h^2)$ compared to the continuous trajectory.
\end{itemize}
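As a numerical sanity check, the sketch below runs Algorithm~\ref{agenadmmrebal} on a toy scalar problem for which the resolvents have closed form. The choice of quadratics, the constant-damping parameter $r=1$, and the step size are illustrative assumptions, not values prescribed by the text.

```python
# Sketch of the accelerated ADMM extension (constant damping) on the toy
# scalar problem
#   minimize f(x) + g(x) + w(x),
#   f(x) = (x - 3)^2 / 2,   g(x) = x^2 / 2,   w(x) = x^2 / 2,
# whose unique minimizer solves (x - 3) + x + x = 0, i.e. x* = 1.
# For these quadratics the resolvents are available in closed form:
#   J_{lam f'}(z) = (z + 3 lam) / (1 + lam),  J_{lam g'}(z) = z / (1 + lam).
import math

def accelerated_admm(lam=0.01, r=1.0, steps=5000):
    h = math.sqrt(lam)
    gamma = 1.0 - r * h                           # constant damping: gamma = 1 - r h
    x, x_hat, c = 0.0, 0.0, 0.0
    for _ in range(steps):
        z = x_hat - lam * x_hat + lam * c         # w'(x_hat) = x_hat
        x_half = (z + 3.0 * lam) / (1.0 + lam)    # J_{lam f'}
        x_new = (x_half - lam * c) / (1.0 + lam)  # J_{lam g'}
        c += (x_new - x_half) / lam               # balance / dual update
        x_hat = x_new + gamma * (x_new - x)       # momentum step
        x = x_new
    return x, c

x_star, c_star = accelerated_admm()
```

At the fixed point, the balance coefficient satisfies $c = -g'(x^*) = -1$, consistent with its role of preserving the steady state of the first split system.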
\begin{remark}[non-accelerated algorithms via gradient flow] \label{admm_grad_flow}
Although we focus on accelerated algorithms, similar (and easier) analyses apply to the gradient flow \eqref{gradflow}
which lead to non-accelerated variants of the respective algorithm.
For example, as in~\eqref{secbal}, one can introduce a balance coefficient $c$ and split the system~\eqref{gradflow} into
\begin{equation}\dot{x} = - \nabla f(x) - \nabla w(x) + c, \qquad
\dot{x} = - \nabla g(x) - c.
\end{equation}
Then, as for \eqref{admm_disc1}, a semi-implicit discretization of the first equation yields
\begin{equation}
\label{admmu1}
x_{k+1/2} = J_{\lambda \nabla f}(x_k - \lambda \nabla w(x_k) + \lambda c_k),
\end{equation}
where now $h \equiv \lambda $ in \eqref{eulerfirst}. An implicit discretization of the second equation yields $x_{k+1} - x_{k+1/2} = - \lambda \nabla g(x_{k+1}) - \lambda c_k$, so that $x_{k+1}$ can be computed as
\begin{equation}
\label{admmu2}
x_{k+1} = J_{\lambda \nabla g}(x_{k+1/2} - \lambda c_k).
\end{equation}
The balance coefficient update \eqref{admm_up3} is obtained in an analogous manner as before.
Note that updates \eqref{admmu1} and \eqref{admmu2}, together with \eqref{admm_up3}, are precisely the ADMM algorithm in the particular case $w=0$.
\end{remark}
The next result shows that the above discretization is justified since it yields a first-order integrator for the underlying ODE.
\begin{theorem}\label{admm_first_order}
The following hold true:
\begin{enumerate}
\item[(i)] Algo.~\ref{agenadmmrebal} is a first-order integrator to the accelerated gradient flow \eqref{secode};
\item[(ii)] Algo.~\ref{agenadmmrebal} with $\gamma_k = 0$ for all $k \geq 0$ is a first-order integrator to the gradient flow~\eqref{gradflow}.
\end{enumerate}
\end{theorem}
\begin{proof}
For $f$ satisfying Assumption~\ref{assump1} and
Assumption~\ref{assump2} it holds that $y = J_{ \lambda \nabla f}(x)$ if and only if
$y = x - \lambda \nabla f(y)$ (see \eqref{resolvent}). Thus, Assumption~\ref{assump2} gives
\begin{equation}
\label{proxappro}
y = J_{\lambda \nabla f}(x) = x - \lambda \nabla f(x - \lambda \nabla f(y))
= x - \lambda \nabla f(x) + \Order(\lambda^2),
\end{equation}
where the
$\|\nabla f(y)\|$
that normally appears in the $\Order$ term is suppressed because it is bounded independently of $\lambda$ for all $\lambda$ on a compact set as a consequence of $y = J_{\lambda \nabla f}(x)$ and Assumption~\ref{assump2}. (A similar convention is used in~\eqref{limm} and~\eqref{limmm} below.)
Thus,
this equality and a Taylor expansion on $\nabla f$ in
the first update of Algo.~\ref{agenadmmrebal} give
\begin{equation}
x_{k+1/2} = \hat{x}_k - \lambda \nabla w(\hat{x}_k) + \lambda c_k -
\lambda \nabla f(\hat{x}_k) + \Order(\lambda^2). \label{limm}
\end{equation}
Similarly, the second update of Algo.~\ref{agenadmmrebal} leads to
\begin{equation}
\label{limmm}
\begin{split}
x_{k+1} &= x_{k+1/2} - \lambda c_k -\lambda \nabla g(x_{k+1/2}) + \Order(\lambda^2) \\
&= \hat{x}_k - \lambda \nabla F(\hat{x}_k) + \Order(\lambda^2)
\end{split}
\end{equation}
where to derive the second equality we used~\eqref{limm} and a Taylor expansion of
$\nabla g$.
Recall \eqref{xhat} and note that $\gamma_k = 1- \eta(t) h$ for constant damping $(\eta(t)=r)$, while
\begin{equation}
\gamma_k = \dfrac{k}{k+r} = 1 - \dfrac{r}{k+r} = 1 - \dfrac{r h }{t_k} \left(1+ \dfrac{rh}{t_k}\right)^{-1}
= 1 - \eta(t_k) h + \Order(h^2)
\end{equation}
for decaying damping ($\eta(t) = r/t$).
Thus, in either case, we conclude that
\begin{equation}
\label{xhat_ord}
\begin{split}
\hat{x}_k
&= x_k + h \big( 1 - \eta(t_k)h \big) v_k + \Order(h^3)
= x_k + \Order(h)
\end{split}
\end{equation}
where we have defined the velocity variable
\begin{equation}\label{velocity}
v_k \equiv \left( x_k - x_{k-1} \right)/h,
\end{equation}
which is finite even in the limit $h \to 0$.
Using \eqref{velocity}, both equalities in \eqref{xhat_ord}, \eqref{limmm}, and recalling that
$\lambda \equiv h^2$, we conclude that
\begin{equation} \label{admm_first_final}
\begin{split}
v_{k+1} &= v_k - h \eta(t_k) v_k - h \nabla F(x_k) + \Order(h^2), \\
x_{k+1} &= x_k + h v_{k+1} = x_k + h v_k + \Order(h^2).
\end{split}
\end{equation}
Now, consider the ODE \eqref{secode}, i.e.,
$\dot{x} = v$ and
$\dot{v} = -\eta(t) v - \nabla F(x)$.
Combining this with Taylor expansions we have
\begin{equation}
\label{xtplush}
\begin{split}
v(t+h) &= v(t) + h \dot{v}(t) + \Order(h^2) = v(t) - h \eta(t) v(t) - h \nabla F(x(t)) + \Order(h^2), \\
x(t+h) &= x(t) + h \dot{x}(t) + \Order(h^2) = x(t) + h v(t) + \Order(h^2) .
\end{split}
\end{equation}
Therefore, by comparison with \eqref{admm_first_final} we conclude that
\begin{equation} \label{admm_first_order_final}
v(t_{k+1}) = v_{k+1} + \Order(h^2), \qquad
x(t_{k+1}) = x_{k+1} + \Order(h^2) ,
\end{equation}
i.e., in one step of the algorithm the discrete trajectory agrees with the continuous trajectory up to $\Order(h^2)$. This means that
Definition~\ref{order_def} is satisfied with $p=1$.
The above argument can be adapted to Algo.~\ref{agenadmmrebal} with $\gamma_k=0$ in relation to the gradient flow \eqref{gradflow}. The derivation is simpler and is thus omitted.
\end{proof}
\subsection{Accelerated extensions of Davis-Yin} \label{accel_dy}
We now split the accelerated gradient flow \eqref{secode} as in \eqref{secbal}, but without introducing a balance coefficient, to obtain
\begin{equation}
\varphi^{(1)} = -\eta(t) v - \nabla f(x), \qquad
\varphi^{(2)} = - \nabla g(x) - \nabla w(x).
\end{equation}
Hence, instead of \eqref{admm_twosplit}, we obtain
the following two individual ODEs:
\begin{equation}
\label{ady2flow}
\ddot{x} + \eta(t)\dot{x} = -\nabla f(x), \qquad
\ddot{x} = -\nabla g(x) - \nabla w(x).
\end{equation}
An implicit discretization of the first system is
\begin{equation} \label{ady.disc1}
x_{k+1/4} - \hat{x}_k = -\lambda \nabla f(x_{k+1/4})
\end{equation}
where $h \equiv \sqrt{\lambda}$,
which as a result of Lemma~\ref{implilemma} leads to
\begin{equation} \label{ady.u1}
x_{k+1/4} \equiv \Phi_{h}^{(1)}(\hat{x}_k) = J_{\lambda \nabla f}(\hat{x}_k) .
\end{equation}
Next, to ``inject momentum'' in the direction of $\nabla f$, we define the translation operator
\begin{equation}
\mathcal{T}_h(z) \equiv z - \lambda \nabla f(x_{k+1/4})
\end{equation}
for any vector $z$. Thus, the next point in the discretization is defined to be
\begin{equation} \label{ady.u2}
x_{k+1/2} \equiv \mathcal{T}_{h}(x_{k+1/4}) = x_{k+1/4} - \lambda \nabla f(x_{k+1/4}) = 2x_{k+1/4} - \hat{x}_k
\end{equation}
where we used \eqref{ady.disc1} to obtain the last equality.
Next, we can use \eqref{eulerfirst2} to obtain a semi-implicit discretization of the second system in \eqref{ady2flow} given by
\begin{equation}\label{adyfirst}
x_{k+3/4}-2x_{k+1/4}+\hat{x}_k = - \lambda \nabla g(x_{k+3/4}) - \lambda \nabla w(x_{k+1/4}).
\end{equation}
This allows us to solve the implicit equation \eqref{adyfirst} in the form
\begin{equation} \label{ady.u3}
x_{k+3/4} \equiv \Phi_h^{(2)}(\hat{x}_k) = J_{\lambda \nabla g}\left(x_{k+1/2} - \lambda \nabla w(x_{k+1/4})
\right).
\end{equation}
Finally, we apply the inverse
$\mathcal{T}^{-1}_h(z) \equiv z + \lambda \nabla f(x_{k+1/4} )$ and use~\eqref{ady.disc1} to obtain
\begin{equation} \label{ady.u4}
x_{k+1} \equiv \mathcal{T}^{-1}_{h}(x_{k+3/4})
= x_{k+3/4} + \lambda \nabla f(x_{k+1/4})
= x_{k+3/4}-(x_{k+1/4}-\hat{x}_k).
\end{equation}
The collection of \eqref{ady.u1}, \eqref{ady.u2}, \eqref{ady.u3}, and \eqref{ady.u4} results in Algo.~\ref{ady}, and the entire discretization procedure is illustrated in Fig.~\ref{discpic2}.
\begin{algorithm}[t]
\caption{Accelerated extension of Davis-Yin
for solving problem \eqref{optimization}.
\label{ady}}
\begin{algorithmic}
\STATE Choose $\lambda > 0$, and initialize $\hat{x}_0$.
\STATE Choose $r \ge 3$ if decaying damping \eqref{nagdamp}, or $r > 0$ if constant damping \eqref{hbdamp}.
\FOR{$k=0,1,\dotsc$}
\STATE $x_{k+1/4} \leftarrow J_{\lambda \nabla f}(\hat{x}_k)$
\STATE $x_{k+1/2} \leftarrow 2x_{k+1/4} - \hat{x}_k$
\STATE $x_{k+3/4} \leftarrow J_{\lambda \nabla g}\big(x_{k+1/2} - \lambda \nabla w(x_{k+1/4})\big)$
\STATE $x_{k+1} \leftarrow \hat{x}_k + x_{k+3/4} - x_{k+1/4}$
\STATE Using $h = \sqrt{\lambda}$ and $r$, compute $\gamma_{k+1}$ and $\hat{x}_{k+1}$ from \eqref{xhat}.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{figure}
\centering
\includegraphics[scale=1.2]{davis_yin_discretization.pdf}
\caption{
\label{discpic2}
An illustration of the discretization underlying Algo.~\ref{ady}, consistent with the mapping \eqref{dycompos},
that gives the accelerated Davis-Yin method.
We indicate the points at which each gradient is evaluated, which define the implicit/explicit
discretization.
}
\end{figure}
The following comments concerning Algo.~\ref{ady} are appropriate.
\begin{itemize}
\item Algo.~\ref{ady} reduces to the Davis-Yin method \cite{Davis:2017} when $\gamma_k = 0$ in \eqref{xhat}, so that $\hat{x}_k = x_k$ for all $k$, i.e., when no acceleration is used. In this case, it has been shown that for convex functions the method converges at a rate of
$\Order(1/k)$ in an average or ergodic sense, and that
linear convergence holds when all functions are
strongly convex and satisfy certain regularity conditions~\cite{Davis:2017}.
In the non-accelerated case, the algorithm corresponds to the application of a similar discretization of the gradient flow \eqref{gradflow} with
splitting
$\dot{x} = -\nabla f(x)$ and
$\dot{x} = -\nabla g(x) - \nabla w(x)$ (also see Remark~\ref{admm_grad_flow}).
\item Algo.~\ref{ady} is equivalent to the composition
\begin{equation}
\label{dycompos}
\hat{\Phi}_h =
\mathcal{T}^{-1}_h\circ\Phi_h^{(2)}\circ\mathcal{T}_h\circ\Phi_h^{(1)} .
\end{equation}
Thus, comparing with \eqref{preprocessor} we see that $\mathcal{T}_h$ is actually a preprocessor map.
\item
In Theorem~\ref{dy_first_order} we show that the above procedure yields a first-order integrator. Furthermore, in Theorem~\ref{dy_critical} we show that this discretization preserves critical points of the underlying ODE.
\end{itemize}
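The following sketch runs Algo.~\ref{ady} on the same toy scalar quadratics used earlier (an illustrative assumption, not from the text). It also illustrates the distinction made in Theorem~\ref{dy_critical}: the iterates $x_k$ converge to a limit $x_\infty$ that is not itself the minimizer, while $J_{\lambda \nabla f}(x_\infty)$ is.

```python
# Sketch of the accelerated Davis-Yin method (constant damping) on
#   minimize f(x) + g(x) + w(x),
#   f(x) = (x - 3)^2 / 2,   g(x) = x^2 / 2,   w(x) = x^2 / 2,
# with minimizer x_bar = 1.  The iterates x_k converge to
# x_inf = (1 + lam) * x_bar - 3 lam = 1 - 2 lam, and J_{lam f'}(x_inf) = x_bar.
import math

def accelerated_davis_yin(lam=0.01, r=1.0, steps=5000):
    h = math.sqrt(lam)
    gamma = 1.0 - r * h
    x, x_hat, x_q = 0.0, 0.0, 0.0
    for _ in range(steps):
        x_q = (x_hat + 3.0 * lam) / (1.0 + lam)    # x_{k+1/4} = J_{lam f'}(x_hat)
        x_half = 2.0 * x_q - x_hat                 # momentum injection T_h
        x_3q = (x_half - lam * x_q) / (1.0 + lam)  # J_{lam g'}(x_half - lam w'(x_q))
        x_new = x_hat + x_3q - x_q                 # undo the translation
        x_hat = x_new + gamma * (x_new - x)        # momentum step
        x = x_new
    return x, x_q

x_inf, x_bar = accelerated_davis_yin()
```

Here `x_bar` (the last $x_{k+1/4}$) approaches the minimizer $1$, while the iterates `x_inf` approach $1 - 2\lambda = 0.98$, in agreement with $\bar{x}_\infty = J_{\lambda\nabla f}(x_\infty)$.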
\begin{theorem}\label{dy_first_order}
The following hold true:
\begin{enumerate}
\item[(i)] Algo.~\ref{ady} is a first-order integrator
to the accelerated gradient flow \eqref{secode};
\item[(ii)] Algo.~\ref{ady} with $\gamma_k=0$ for all $k \geq 0$ is a first-order integrator
to the gradient flow \eqref{gradflow}.
\end{enumerate}
\end{theorem}
\begin{proof}
The arguments are very similar to the proof of Theorem~\ref{admm_first_order}.
From \eqref{proxappro} and Taylor expansions, the
first three updates of
Algo.~\ref{ady} yield
\begin{subequations}
\begin{align}
x_{k+1/4} &= \hat{x}_k - \lambda \nabla f(\hat{x}_k) + \Order(\lambda^2), \\
x_{k+1/2} &= \hat{x}_k - 2 \lambda \nabla f(\hat{x}_k) + \Order(\lambda^2), \\
x_{k+3/4} &= \hat{x}_k - 2\lambda \nabla f(\hat{x}_k) -
\lambda \nabla g(\hat{x}_k) - \lambda \nabla w(\hat{x}_k) + \Order(\lambda^2).
\end{align}
\end{subequations}
Hence, the fourth update of Algo.~\ref{ady} becomes
\begin{equation}
x_{k+1} = \hat{x}_k - \lambda \nabla F(\hat{x}_k) + \Order(\lambda^2),
\end{equation}
which is exactly the same as equation~\eqref{limmm}. Therefore, the remaining steps of the proof follow exactly as
in the proof of Theorem~\ref{admm_first_order},
and establish that Algo.~\ref{ady} is an integrator with $p=1$ according to Definition~\ref{order_def}.
The proof related to the gradient flow \eqref{gradflow} (i.e., with $\gamma_k=0$ in Algo.~\ref{ady}) is similar, but easier, and therefore omitted.
\end{proof}
In order to show that Algo.~\ref{ady} preserves critical points of the underlying ODE, we require the following
technical result.
\begin{lemma}
\label{caleylemma}
It holds that
$(\nabla f + \nabla g + \nabla w)(\bar{x}) = 0$
if and only if
$\mathcal{P}(x) = x$
with
\begin{equation}
\label{Top}
\mathcal{P} \equiv \tfrac{1}{2}I +
\tfrac{1}{2}C_{\lambda \nabla g}\circ \left( C_{\lambda \nabla f} -\lambda \nabla w \circ J_{\lambda \nabla f} \right) - \tfrac{1}{2} \lambda \nabla w \circ J_{\lambda \nabla f},
\end{equation}
$\bar{x} = J_{\lambda \nabla f}(x)$, and
$C_{\lambda \nabla f} \equiv 2 J_{\lambda \nabla f} - I$
is the Cayley operator.
\end{lemma}
\begin{proof}
The first equation in the statement of the lemma
is equivalent to
$(I+\lambda \nabla g)(\bar{x}) = (I-\lambda \nabla f -
\lambda \nabla w)(\bar{x})$ since $\lambda > 0$. Using
the resolvent \eqref{resolvent} this yields
\begin{equation}
\label{lem1id}
\bar{x} = J_{\lambda \nabla g}\circ\left( I-\lambda \nabla f -
\lambda \nabla w \right) (\bar{x}).
\end{equation}
Now, making use of the identity
\begin{equation}
C_{\lambda \nabla f}\circ(I+\lambda \nabla f) = \left(
2(I+\lambda \nabla f)^{-1}-I\right)\circ
(I + \lambda \nabla f) = I -\lambda \nabla f
\end{equation}
and substituting $J_{\lambda \nabla g} = \tfrac{1}{2}(C_{\lambda \nabla g} +I)$
into \eqref{lem1id} we obtain
\begin{equation}
\label{cinter}
\bar{x} = \tfrac{1}{2}\left( C_{\lambda \nabla g} + I \right)
\circ \left\{ C_{\lambda \nabla f}\circ(I + \lambda \nabla f)
-\lambda \nabla w\right\}(\bar{x}).
\end{equation}
Since $x \equiv (I + \lambda \nabla f)(\bar{x})$, or equivalently
$\bar{x} = J_{\lambda \nabla f}(x)$, it follows from~\eqref{cinter} that
\begin{equation}
\label{iff1}
J_{\lambda \nabla f} (x) = \tfrac{1}{2} C_{\lambda \nabla g}\circ\left(
C_{\lambda \nabla f} - \lambda \nabla w \circ J_{\lambda \nabla f}\right)(x)
+ \tfrac{1}{2} C_{\lambda \nabla f}(x) -
\tfrac{1}{2}\lambda \nabla w\circ J_{\lambda \nabla f}(x),
\end{equation}
which by the definition of the Cayley operator is equivalent to
\begin{equation}
\tfrac{1}{2} x = \tfrac{1}{2} C_{\lambda \nabla g}\circ\left(
C_{\lambda \nabla f} - \lambda \nabla w \circ J_{\lambda \nabla f}\right)(x)
-
\tfrac{1}{2}\lambda \nabla w\circ J_{\lambda \nabla f}(x).
\end{equation}
Adding $x/2$ to each side of the previous equality yields
$x = \mathcal{P}(x)$, as claimed.
\end{proof}
\begin{theorem}\label{dy_critical}
If the iterate sequence $\{x_k\}$ generated by Algo.~\ref{ady} satisfies $\{x_k\}\to x_\infty$ for some $x_\infty$, then the vector $\bar{x}_\infty \equiv J_{\lambda \nabla f}(x_\infty)$ satisfies
\begin{equation}
\{x_{k+1/4}\} \to \bar{x}_\infty,
\qquad
(\nabla f + \nabla g + \nabla w)(\bar{x}_\infty) = 0,
\end{equation}
i.e., $\bar{x}_\infty$ is a solution of~\eqref{optimization} and
a steady state of the accelerated gradient flow \eqref{secode}. When $\gamma_k=0$ for all $k \geq 0$, $\bar{x}_\infty$ is a steady state
of the gradient flow \eqref{gradflow}.
\end{theorem}
\begin{proof}
It follows from the updates in Algo.~\ref{ady}, the definition of $C_{\lambda \nabla f}$ in the statement of Lemma~\ref{caleylemma}, and the definition of $\mathcal{P}$ in~\eqref{Top} that $x_{k+1} = \hat{\Phi}_h(\hat{x}_k)$ where
\begin{equation}
\label{dycaley}
\begin{split}
\hat{\Phi}_h &=
I + J_{\lambda \nabla g}\circ\left(2 J_{\lambda \nabla f} - I - \lambda \nabla w
\circ J_{\lambda \nabla f}\right) - J_{\lambda \nabla f} \\
&= I + J_{\lambda \nabla g}
\circ\left( C_{\lambda \nabla f} - \lambda \nabla w \circ
J_{\lambda \nabla f} \right) - \tfrac{1}{2}(C_{\lambda \nabla f} + I) \\
&= \tfrac{1}{2}I - \tfrac{1}{2} C_{\lambda \nabla f} +
\tfrac{1}{2}\left(C_{\lambda \nabla g} + I\right)\circ
\left( C_{\lambda \nabla f} -
\lambda \nabla w\circ J_{\lambda \nabla f} \right) \\
&= \tfrac{1}{2}I +\tfrac{1}{2}C_{\lambda \nabla g}\circ\left( C_{\lambda \nabla f}
- \lambda \nabla w \circ J_{\lambda \nabla f} \right) - \tfrac{1}{2} \lambda \nabla w \circ
J_{\lambda \nabla f} \\
&= \mathcal{P},
\end{split}
\end{equation}
i.e., that $x_{k+1} = \mathcal{P}(\hat{x}_k)$. (Note that $\mathcal{P}$ is the operator $\mathcal{T}$ studied in~\cite{Davis:2017} and proved to be an ``averaged operator'', which means that $\mathcal{P}$ is a nonexpansive operator and thus continuous~\cite[Remark 4.34]{BauschkeBook}.) Combining this with~\eqref{xhat} shows that
\begin{equation}\label{P-update-DY}
\hat{x}_{k+1}
= x_{k+1} + \gamma_{k+1}(x_{k+1} - x_k)
= \mathcal{P}(\hat{x}_k) + \gamma_{k+1}(x_{k+1} - x_k).
\end{equation}
By assumption there exists some $x_\infty$
such that $\{x_k\} \to x_\infty$, which combined with~\eqref{xhat} shows that $\{\hat{x}_k\} \to x_\infty$. Combining these facts with~\eqref{P-update-DY} and continuity of $\mathcal{P}$ establishes that
$\mathcal{P}(x_\infty) = x_\infty$.
It now follows from Lemma~\ref{caleylemma} that $\bar{x}_\infty = J_{\lambda \nabla f}(x_\infty)$
satisfies $(\nabla f + \nabla g + \nabla w)(\bar{x}_\infty) = 0$. Moreover, from the update for $x_{k+1/4}$ in Algo.~\ref{ady} and continuity of $J_{\lambda \nabla f}$ we can conclude that
\begin{equation}
\lim_{k\to\infty} x_{k+1/4}
= \lim_{k\to\infty} J_{\lambda \nabla f}(\hat{x}_k)
= J_{\lambda \nabla f}(x_\infty)
= \bar{x}_\infty.
\end{equation}
This completes the proof for this case once we recall that $(\nabla f + \nabla g + \nabla w)(\bar{x}_\infty) = 0$.
The same argument applies when
$\gamma_k = 0$ (i.e., when $\hat{x}_k = x_k$ for all $k \geq 0$), in which case Algo.~\ref{ady} is a discretization of \eqref{gradflow}.
\end{proof}
\subsubsection{Accelerated extensions of Douglas-Rachford}\label{dougrach}
When Algo.~\ref{ady} with $\gamma_k = 0$ for all $k$ is applied to problem \eqref{optimization} with $w=0$, one obtains
the well-known Douglas-Rachford algorithm
\cite{Douglas:1956,Lions:1979}, which
has been extensively studied in the literature
(for recent results see \cite{Moursi:2017} and the references therein).
Therefore, Douglas-Rachford is a discretization of the gradient flow \eqref{gradflow} with $w=0$.
Also, from Algo.~\ref{ady} one obtains accelerated variants of Douglas-Rachford that are discretizations of the accelerated gradient flow \eqref{secode}. For instance, when
decaying damping in \eqref{xhat} is used,
the resulting method was studied in \cite{Patrinos:2014}.
We are not aware of previous work on accelerated variants with constant damping, or other choices such as \eqref{damping_combination}.
\subsubsection{Accelerated extensions of forward-backward} \label{fbsec}
When Algo.~\ref{ady} with $\gamma_k=0$ for all $k$ is applied to problem \eqref{optimization} with $f=0$, one obtains the forward-backward splitting method \cite{Lions:1979,Passty:1979,Han:1988}, i.e.,
$x_{k+1} = J_{\lambda \nabla g}(x_{k}-\lambda \nabla w(x_k))$. This corresponds to a first-order
integrator to the gradient flow \eqref{gradflow}.
Similarly, for nonzero $\gamma_k$, Algo.~\ref{ady} gives accelerated variants of forward-backward, i.e.,
$x_{k+1} = J_{\lambda \nabla g}\big( \hat{x}_k -
\lambda \nabla w(\hat{x}_k) \big)$.
From an ODE perspective, this is not a splitting method but rather a semi-implicit
discretization; the first equation in \eqref{ady2flow}
is absent (also see Fig.~\ref{tsengfig} (left) for an illustration).
In any case, Theorem~\ref{dy_first_order} ensures that the iterates correspond to first-order integrators, and Theorem~\ref{dy_critical} shows that critical points are preserved, where the operator in \eqref{dycaley}
reduces to $J_{\lambda \nabla g}\circ(I-\lambda \nabla w)$.
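The accelerated forward-backward iteration above can be sketched on a toy nonsmooth problem. Strictly speaking, a nonsmooth $g$ falls outside Assumption~\ref{assump2} and is handled through the subdifferential, as in Section~\ref{nonsmooth}; here its resolvent is soft-thresholding. The problem and all parameters below are illustrative assumptions.

```python
# Accelerated forward-backward sketch (constant damping) for
#   minimize w(x) + g(x),  w(x) = (x - 2)^2 / 2 (smooth),  g(x) = |x|,
# whose minimizer is x* = 1 (solve x - 2 + 1 = 0 on x > 0).
# J_{lam dg} is the soft-thresholding operator.
import math

def soft_threshold(z, t):
    # resolvent of lam * d|.| : shrink z toward 0 by t
    return math.copysign(max(abs(z) - t, 0.0), z)

def accelerated_forward_backward(lam=0.1, r=1.0, steps=2000):
    h = math.sqrt(lam)
    gamma = 1.0 - r * h
    x, x_hat = 0.0, 0.0
    for _ in range(steps):
        # proximal-gradient step at the extrapolated point x_hat
        x_new = soft_threshold(x_hat - lam * (x_hat - 2.0), lam)
        x_hat = x_new + gamma * (x_new - x)   # momentum step
        x = x_new
    return x

x_fb = accelerated_forward_backward()
```

With $\gamma_k = 0$ this reduces to the classical forward-backward (proximal gradient, here ISTA-like) iteration on the same problem.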
\subsection{Accelerated extensions of Tseng's splitting}
\label{subsec:tseng}
The final proximal-based method to be considered
is the forward-backward-forward splitting proposed
by Tseng \cite{Tseng:2000}, which consists of a modification (a perturbation) of the forward-backward splitting discussed above.
In order to propose accelerated extensions of Tseng's scheme, we consider the accelerated gradient flow \eqref{secode} with $f=0$ written as
\begin{equation}
\dot{x} = v , \qquad
\dot{v} = \underbrace{-\eta(t) v -\nabla g(x) - \nabla w(x)}_{\varphi^{(1)}}
+ \underbrace{\nabla w(x) - \nabla w(x)}_{\varphi^{(2)}}.
\end{equation}
Splitting this system leads to the two independent ODEs
\begin{equation} \label{tseng_split}
\ddot{x} + \eta(t) \dot{x} = -\nabla g(x)-\nabla w(x), \qquad
\ddot{x} = \nabla w(x)-\nabla w(x).
\end{equation}
Using $h \equiv \sqrt{\lambda}$, \eqref{eulersec}, and
a forward-backward discretization of the first equation
gives
\begin{equation} \label{tseng.u1}
x_{k+1/2} = J_{\lambda \nabla g}\left( \hat{x}_k - \lambda
\nabla w (\hat{x}_k) \right).
\end{equation}
This is the same step as in the forward-backward method.
For the second equation in \eqref{tseng_split} we use \eqref{eulerfirst2} in the form
$\tilde{x}_{k+1} - 2 x_{k+1/2} +
\hat{x}_k = \lambda \nabla w(\hat{x}_k) - \lambda \nabla w(x_{k+1/2})$, where
$\tilde{x}_{k+1}$ is given by \eqref{xtilde1}. This gives
\begin{equation}\label{tseng.u2}
x_{k+1} = x_{k+1/2} - \lambda \big( \nabla w(x_{k+1/2}) - \nabla w(\hat{x}_k) \big).
\end{equation}
By combining \eqref{tseng.u1} and \eqref{tseng.u2} we arrive at Algo.~\ref{atseng} (also see Fig.~\ref{tsengfig}).
\begin{algorithm}[t]
\caption{
Accelerated Tseng's method
for solving problem~\eqref{optimization} when $f = 0$.\label{atseng}
}
\begin{algorithmic}
\STATE Choose $\lambda > 0$, and initialize $\hat{x}_0$.
\STATE Choose $r\ge 3$ if decaying damping \eqref{nagdamp}, or $r > 0$ if constant damping \eqref{hbdamp}.
\FOR {$k=0,1,\dotsc$}
\STATE $x_{k+1/2} \leftarrow J_{\lambda \nabla g}\left(\hat{x}_k-
\lambda \nabla w(\hat{x}_k)\right)$
\STATE $x_{k+1} \leftarrow x_{k+1/2} -\lambda \left( \nabla w(x_{k+1/2})-
\nabla w(\hat{x}_k) \right)$
\STATE Using $h = \sqrt{\lambda}$ and $r$, compute $\gamma_{k+1}$ and $\hat{x}_{k+1}$ from \eqref{xhat}.
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{figure}
\centering
\includegraphics[scale=1.2]{fb_discretization.pdf}\qquad
\includegraphics[scale=1.2]{tseng_discretization.pdf}
\caption{\label{tsengfig}
\emph{Left:} Illustration of the accelerated forward-backward method, which is a semi-implicit Euler
discretization. \emph{Right:}
Illustration of accelerated Tseng splitting, which adds a perturbation to the forward-backward method (see \eqref{tseng.u2}).
}
\end{figure}
The original method proposed in \cite{Tseng:2000} is recovered from Algo.~\ref{atseng} by setting $\gamma_k=0$ for all $k$ (i.e., without acceleration), in which case
the algorithm is a discretization of the gradient flow \eqref{gradflow}.
We believe that the accelerated variants in Algo.~\ref{atseng} have not previously been considered in the literature.
The next result shows that Algo.~\ref{atseng} is a first-order integrator.
\begin{theorem}\label{tsengcritical}
The following hold true:
\begin{enumerate}
\item[(i)] Algo.~\ref{atseng} is a first-order integrator to the
accelerated gradient flow \eqref{secode};
\item[(ii)] Algo.~\ref{atseng} with $\gamma_k=0$ for all $k \geq 0$ is a first-order integrator to the gradient flow \eqref{gradflow}.
\end{enumerate}
\end{theorem}
\begin{proof}
The results may be proved using arguments similar to those used to establish Theorem~\ref{admm_first_order} and Theorem~\ref{dy_first_order}.
\end{proof}
\begin{theorem}
Let $f = 0$.
If $\lambda$ is sufficiently small and the iterate sequence $\{x_k\}$ generated by Algo.~\ref{atseng} satisfies $\{x_k\} \to x_\infty$ for some $x_\infty$, then
\begin{equation}
(\nabla g + \nabla w)(x_\infty) = 0,
\end{equation}
i.e., $x_\infty$ is a solution of problem~\eqref{optimization} and a steady state of the accelerated gradient flow~\eqref{secode}. When $\gamma_k = 0$ for all $k \geq 0$, $x_\infty$ is a steady state of the gradient flow~\eqref{gradflow}.
\end{theorem}
\begin{proof}
From the updates in Algo.~\ref{atseng}, it follows that $x_{k+1} = \hat{\Phi}_h(\hat{x}_k)$ where
\begin{equation} \label{tsengop}
\hat{\Phi}_h
= (I - \lambda \nabla w)\circ
J_{\lambda \nabla g}\circ(I - \lambda \nabla w) + \lambda \nabla w.
\end{equation}
Combining this with~\eqref{xhat} shows that
\begin{equation}\label{xhat-eq-tseng}
\hat{x}_{k+1}
= x_{k+1} + \gamma_{k+1}(x_{k+1} - x_{k})
= \hat{\Phi}_h(\hat{x}_k) + \gamma_{k+1}(x_{k+1} - x_{k}).
\end{equation}
Combining $\{x_k\} \to x_\infty$ with the first equality in~\eqref{xhat-eq-tseng} and~\eqref{xhat} yields $\{\hat{x}_k\} \to x_\infty$. Combining these facts with~\eqref{xhat-eq-tseng} and continuity of $\hat{\Phi}_h$ shows that
\begin{equation}
x_\infty
= \lim_{k\to\infty} \hat{\Phi}_h(\hat{x}_k)
= \hat{\Phi}_h(x_\infty).
\end{equation}
Subtracting $\lambda \nabla w(x_\infty)$ from both sides of the previous equality gives
\begin{equation}
(I-\lambda \nabla w)(x_\infty)
= (I - \lambda \nabla w)\circ
J_{\lambda \nabla g}\circ(I - \lambda \nabla w)(x_\infty).
\end{equation}
Next, applying the operator $(I - \lambda \nabla w)^{-1}$, which exists for $\lambda$ sufficiently small, yields
\begin{equation}
x_\infty
= J_{\lambda \nabla g}\circ(I - \lambda \nabla w)(x_\infty),
\end{equation}
which is itself equivalent to
\begin{equation}
(I+\lambda\nabla g)(x_\infty)
= (I - \lambda \nabla w)(x_\infty).
\end{equation}
Since $\lambda > 0$, the previous equality shows that $(\nabla g + \nabla w)(x_\infty) = 0$, as claimed.
The same argument applies when
$\gamma_k = 0$ (i.e., when $\hat{x}_k = x_k$ for all $k \geq 0$), in which case Algo.~\ref{atseng} is a discretization of \eqref{gradflow}.
\end{proof}
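To illustrate the fixed-point map \eqref{tsengop}, the following minimal numerical sketch (not from the paper; the quadratic instance, step size, and damping schedule are illustrative choices) iterates the accelerated scheme on a problem where the zero of $\nabla g + \nabla w$ is available in closed form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
# Illustrative smooth instance: g(x) = 0.5 x^T Q x with Q positive definite,
# w(x) = 0.5 ||x - c||^2, so the unique zero of grad g + grad w is (Q + I)^{-1} c.
G = rng.standard_normal((n, n))
Q = G @ G.T + np.eye(n)
c = rng.standard_normal(n)

lam = 0.05                                   # step size (smaller than 1/Lip(grad w) = 1)
grad_w = lambda x: x - c
J_g = np.linalg.inv(np.eye(n) + lam * Q)     # resolvent of grad g is linear here

def Phi_hat(x):
    """Tseng map (I - lam grad w) o J_{lam grad g} o (I - lam grad w) + lam grad w."""
    z = J_g @ (x - lam * grad_w(x))
    return z - lam * grad_w(z) + lam * grad_w(x)

# Accelerated iteration: x_{k+1} = Phi_hat(xhat_k), xhat_{k+1} = x_{k+1} + gamma_k (x_{k+1} - x_k).
x = np.zeros(n)
x_prev = x.copy()
for k in range(2000):
    gamma = k / (k + 4)                      # decaying damping (r = 3)
    xhat = x + gamma * (x - x_prev)
    x_prev, x = x, Phi_hat(xhat)

x_star = np.linalg.solve(Q + np.eye(n), c)   # the zero of grad g + grad w
print(np.linalg.norm(x - x_star))
```

As the theorem predicts, the limit of the iterates is a fixed point of $\hat{\Phi}_h$ and hence a zero of $\nabla g + \nabla w$.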
\section{Monotone Operators}
\label{nonsmooth}
As indicated by Assumption~\ref{assump2}, all previously considered operators associated with the resolvent \eqref{resolvent} were single-valued.
Since proximal algorithms can be generalized to the more abstract level of monotone operators, in this section we discuss how our previous analysis applies in this context.
Let us recall some concepts about monotone
operators (see~\cite{BauschkeBook} for more details).
Let $H$ be a real Hilbert space with inner product $\langle \cdot, \cdot \rangle:
H \times H \to \mathbb{R}$. A multi-valued map $A: H \rightrightarrows H$ with
$\dom A \equiv \{ x \in H \, \vert \, Ax \ne \emptyset \}$ is \emph{monotone} if and only if
\begin{equation}
\langle u - v, y - x\rangle \ge 0 \qquad \mbox{ for all
$x,y\in\dom A$, $u \in Ay$, and $v \in Ax$}.
\end{equation}
A monotone operator is said to be \emph{maximal} if
no enlargement of its graph is possible. Every monotone operator
admits a maximal extension. Hence, from now on, every operator $A$ is assumed to be maximal monotone.
The resolvent of $A$ with parameter $\lambda$ is defined
by \eqref{resolvent} and can be shown to be
a single-valued map, i.e., $J_{\lambda A}: H \to H$.
Moreover, $x^\star \in \zer(A) \equiv \{x\in H \, \vert \, 0\in Ax \}$ if and only if
$J_{\lambda A}(x^\star) = x^\star$.
An important concept is
the \emph{Yosida regularization} of
$A$ with parameter $\mu > 0$:
\begin{equation}
\label{yosida}
A_\mu \equiv \mu^{-1} (I - J_{\mu A}).
\end{equation}
This operator is \emph{single-valued}, since $J_{\mu A}$ is single-valued,
and Lipschitz continuous.
Moreover, $0 \in A x^{\star}$ if and only if
$0=A_{\mu} x^\star$, thus $A$ and $A_\mu$ have the same
zeros. It can be shown
that in the limit $\mu \downarrow 0$ one has
$A_{\mu} x \to A_0 x$, where $A_0 x \in A x$ is the element
of minimal norm.
Often, one is faced with the resolvent of the Yosida regularization
$J_{\lambda A_{\mu}} = (I + \lambda A_{\mu})^{-1}$, which
can be expressed in terms of the operator $A$ by using
\begin{equation}
\label{resolvent3}
J_{\lambda A_\mu} = (\mu + \lambda)^{-1}\left( \mu I + \lambda J_{(\mu + \lambda)A} \right).
\end{equation}
Importantly, in the limit $\mu \downarrow 0$
we see that \eqref{resolvent3} recovers the
resolvent $J_{\lambda A}$.
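The identity \eqref{resolvent3} can be checked numerically. In the sketch below (an illustrative scalar example, not from the paper), $A$ is taken to be the subdifferential of the absolute value, so $J_{tA}$ is soft-thresholding; the resolvent of the Yosida regularization is computed both by directly inverting $I + \lambda A_\mu$ via bisection and via the right-hand side of \eqref{resolvent3}:

```python
import numpy as np

def soft(x, t):
    """Resolvent J_{tA} of A = subdifferential of |.| (soft-thresholding)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def yosida(x, mu):
    """Yosida regularization A_mu = (I - J_{mu A}) / mu; note |A_mu| <= 1 here."""
    return (x - soft(x, mu)) / mu

lam, mu = 0.7, 0.3

def J_direct(x):
    """Resolvent of lam*A_mu: solve y + lam*A_mu(y) = x by bisection (monotone in y)."""
    lo, hi = x - lam - 1.0, x + lam + 1.0    # valid bracket since |A_mu| <= 1
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if mid + lam * yosida(mid, mu) < x:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def J_identity(x):
    """Right-hand side of the identity: (mu + lam)^{-1} (mu x + lam J_{(mu+lam)A} x)."""
    return (mu * x + lam * soft(x, mu + lam)) / (mu + lam)

xs = np.linspace(-3.0, 3.0, 13)
err = max(abs(J_direct(float(v)) - J_identity(float(v))) for v in xs)
print(err)   # agreement up to bisection tolerance
```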
\subsection{Differential inclusions}
Consider the following two differential inclusions (we refer to \cite{CellinaBook} for background on nonsmooth dynamical systems):
\begin{align}
\label{firstmon}
\dot{x} & \in -A x - B x - Cx , \\
\label{secmon}
\ddot{x} + \eta(t)\dot{x} & \in -A x - B x - Cx ,
\end{align}
with $\eta(t)$ given by \eqref{nagdamp} or \eqref{hbdamp}, and
under the following assumption.
\begin{assumption} \label{max_mon_assump}
The operators $A,B : H \rightrightarrows H$ are maximal monotone. The operator $C : H \to H$ is maximal monotone and single-valued.
\end{assumption}
Under Assumption~\ref{max_mon_assump}, the differential inclusions \eqref{firstmon} and \eqref{secmon} have a unique solution for any initial condition \cite{CellinaBook}.
The previous discretizations
of the gradient flow \eqref{gradflow} and the accelerated
gradient flow \eqref{secode} extend naturally to \eqref{firstmon}
and \eqref{secmon}, respectively. This is a consequence of the resolvent \eqref{resolvent} being a single-valued map, which we illustrate through an
example.
Consider the procedure of Section~\ref{accel_dy}
that led to Algo.~\ref{ady}. Using a similar procedure, we use a splitting of~\eqref{secmon} to obtain the differential inclusions
\begin{equation}
\ddot{x} + \eta(t) \dot{x} \in -A x , \qquad
\ddot{x} + \eta(t) \dot{x} \in -B x - Cx.
\end{equation}
An implicit discretization of the first inclusion
yields
$(I + \lambda A)(x_{k+1/4}) \ni \hat{x}_k$, while a semi-implicit discretization
of the second inclusion together with the definition \eqref{ady.u2} yield
$(I + \lambda B)(x_{k+3/4}) \ni x_{k+1/2} - \lambda C x_{k+1/4}$.
Since under Assumption~\ref{max_mon_assump} the resolvents of $A$ and $B$ are single-valued, we can invert these relations to obtain
\begin{subequations} \label{ady.mon}
\begin{align}
x_{k+1/4} &= J_{\lambda A}(\hat{x}_k), \\
x_{k+1/2} &= 2 x_{k+1/4} - \hat{x}_k, \\
x_{k+3/4} &= J_{\lambda B}(x_{k+1/2} - \lambda C x_{k+1/4}), \\
x_{k+1} &= x_{k+3/4} - (x_{k+1/4} - \hat{x}_k), \\
\hat{x}_{k+1} &= x_{k+1} + \gamma_k (x_{k+1} - x_{k}),
\end{align}
\end{subequations}
where the second to last update follows from \eqref{ady.u4},
and the last update from \eqref{xhat}.
The algorithm given by updates \eqref{ady.mon} is the ``operator analog'' of Algo.~\ref{ady} that aims to find a vector $x^\star$ satisfying $0 \in (A + B + C)(x^\star)$.
Setting $C = 0$ into \eqref{ady.mon} one obtains the operator analog of the accelerated Douglas-Rachford (see Section~\ref{dougrach}), while setting $A = 0$ one obtains the operator analog of the accelerated forward-backward method (see Section~\ref{fbsec}).
Operator analogs of Algo.~\ref{agenadmmrebal}
and Algo.~\ref{atseng} follow in a similar manner.
One can also remove acceleration by setting $\gamma_k = 0$ for all $k \geq 0$, in which case these algorithms are discretizations of the first-order differential inclusion \eqref{firstmon} (also see Remark~\ref{admm_grad_flow}).
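A minimal numerical sketch of the updates \eqref{ady.mon} follows (illustrative choices, not from the paper): $A$ is the normal cone of a box, $B$ the subdifferential of a scaled $\ell_1$ norm, and $C$ the gradient of a quadratic, so the zero of $A+B+C$ is known componentwise in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
c = 3.0 * rng.standard_normal(n)
alpha, a, b = 0.5, -1.0, 1.0
lam, r = 0.5, 3.0

# Illustrative operators:
#   A = normal cone of the box [a, b]    -> J_{lam A} is entrywise clipping,
#   B = subdifferential of alpha*||.||_1 -> J_{lam B} is soft-thresholding,
#   C = gradient of 0.5*||x - c||^2      (single-valued and monotone).
J_A = lambda x: np.clip(x, a, b)
J_B = lambda x: np.sign(x) * np.maximum(np.abs(x) - lam * alpha, 0.0)
C = lambda x: x - c

x = np.zeros(n)
x_prev = x.copy()
for k in range(5000):
    gamma = k / (k + r + 1)              # decaying damping
    xhat = x + gamma * (x - x_prev)
    x14 = J_A(xhat)
    x12 = 2.0 * x14 - xhat
    x34 = J_B(x12 - lam * C(x14))
    x_prev, x = x, x34 - (x14 - xhat)

sol = J_A(x)                             # zeros are recovered via x_bar = J_{lam A}(x)
# Componentwise closed form: clip the soft-thresholded data to the box.
truth = np.clip(np.sign(c) * np.maximum(np.abs(c) - alpha, 0.0), a, b)
print(np.linalg.norm(sol - truth))
```

Setting `gamma = 0.0` in the loop gives the non-accelerated operator Davis-Yin iteration.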
\begin{remark}
We mention a subtlety regarding the order of accuracy of a discretization such as \eqref{ady.mon} to a differential inclusion such as \eqref{firstmon} or \eqref{secmon}.
In the smooth case we concluded that such a discretization is a first-order integrator; see Definition~\ref{order_def} and Theorems~\ref{admm_first_order}, \ref{dy_first_order}, and \ref{tsengcritical}. An important ingredient was the Taylor
approximation of the resolvent \eqref{proxappro}. However, for a maximal monotone operator $A$
only the following weaker approximation is available \cite[Remark 23.47]{BauschkeBook}:
\begin{equation} \label{taylor_res}
J_{\lambda A} = I - \lambda A_0 + \smallO(\lambda)
\end{equation}
where the action of $A_0 = \lim_{\mu \downarrow 0}A_\mu$ on a vector $x\in \dom(A)$ gives the minimal norm element of the set $Ax$.
Thus, by the same argument used in the proofs of Theorems~\ref{admm_first_order} and \ref{dy_first_order}, but now using \eqref{taylor_res} and assuming that one can expand $A_0(x + \Order(\lambda)) = A_0(x) + \Order(\lambda)$ and similarly for $B_0$ and $C_0$, one concludes that the discrete and continuous trajectories agree up to
$\smallO(h)$. This is in contrast with the $\Order(h^2)$ approximation that we proved for the smooth setting considered in Section~\ref{proximal_splitting}.
\end{remark}
\subsection{Regularized ODEs}
There is an alternative to considering the differential inclusions \eqref{firstmon} and \eqref{secmon}, which is to consider the respective ODEs
\begin{align}
\label{firstreg}
\dot{x} &= -A_\mu x - B_\mu x - Cx , \\
\label{secreg}
\ddot{x} + \eta(t)\dot{x} &= -A_\mu x - B_\mu x - Cx,
\end{align}
with the multivalued operators replaced by their
respective Yosida regularization \eqref{yosida},
which are single-valued and Lipschitz continuous.
Thus, both ODEs admit a unique global solution.
Note, however, that
\begin{equation}
\zer(A + B + C) \ne \zer( A_\mu+ B_\mu + C)
\end{equation}
so that the ODEs \eqref{firstreg} and \eqref{secreg} do
not have steady states that are compatible with zeros
of the operator sum $A+B+C$. Nevertheless, after discretizing these regularized ODEs, one
can take the limit
$\mu \downarrow 0$ to recover iterates aimed at finding
zeros of $A+B+C$. We stress that this procedure will give exactly the same updates as
if one discretizes the original differential inclusions because of the identity \eqref{resolvent3}; let us illustrate with an example.
Consider the discretization procedure of Section~\ref{accel_dy} but applied to
the ODE \eqref{secreg}. Similarly to \eqref{ady.u1}--\eqref{ady.u4}, together with the accelerated
variable $\hat{x}_k$ in \eqref{xhat}, we immediately obtain with the help of~\eqref{resolvent3} the updates
\begin{subequations} \label{ady.reg}
\begin{align}
x_{k+1/4} &= (\mu+\lambda)^{-1}( \mu I + \lambda J_{(\mu+\lambda) A})(\hat{x}_k), \\
x_{k+1/2} &= 2x_{k+1/4} - \hat{x}_k, \\
x_{k+3/4} &= (\mu+\lambda)^{-1}(\mu I + \lambda J_{(\mu + \lambda) B})(x_{k+1/2} - \lambda C x_{k+1/4}), \\
x_{k+1} &= x_{k+3/4} - (x_{k+1/4} - \hat{x}_k), \\
\hat{x}_{k+1} &= x_{k+1} + \gamma_k(x_{k+1}-x_k).
\end{align}
\end{subequations}
Taking the limit $\mu \downarrow 0$ above recovers the updates \eqref{ady.mon}, which, as noted above, constitute a discretization of the differential inclusion \eqref{secmon}.
\begin{remark}It is possible to generalize Lemma~\ref{caleylemma} to general maximal monotone
operators, i.e., $0 \in (A + B +C)(\bar{x})$ if and only if $ \mathcal{P}(x) = x$ where
\begin{equation}
\mathcal{P} \equiv \tfrac{1}{2}I + \tfrac{1}{2}
C_{\lambda B}\circ \left( C_{\lambda A} - \lambda C \circ
J_{\lambda A} \right) - \tfrac{1}{2}\lambda C \circ J_{\lambda A}
\end{equation}
and $\bar{x} = J_{\lambda A}x$. The proof is similar to the one of Lemma~\ref{caleylemma},
although one needs to be careful in replacing equalities by appropriate inclusions
and using the fact that the resolvent of a maximal monotone operator is single-valued.
Therefore, by the same arguments as in Theorem~\ref{dy_critical}, the updates \eqref{ady.mon} preserve
zeros of $A + B + C$.
\end{remark}
\begin{remark}
To place the above concepts within an optimization context,
consider the case where $A = \partial f$ is the subdifferential of
a nonsmooth convex function $f$. Then, the Yosida regularization \eqref{yosida} becomes
\begin{equation}
\nabla f_{\mu}(x) = \mu^{-1}(x - \prox_{\mu f}(x)),
\end{equation}
which is the gradient of the Moreau envelope
$f_\mu(x) \equiv \min_{y}\big( f(y) + \tfrac{1}{2\mu}\| y-x\|^2\big)$.
The Moreau envelope is always differentiable and has the same minimizers
as $f$.
\end{remark}
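The remark can be checked numerically for $f = |\cdot|$, whose Moreau envelope is the Huber function. The sketch below (illustrative, not from the paper) compares the envelope evaluated at the prox point against a brute-force minimization on a grid, and verifies the gradient formula by finite differences:

```python
import numpy as np

mu = 0.5

def prox_abs(x, t):
    """prox_{t|.|}(x): soft-thresholding."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def envelope(x):
    """Moreau envelope f_mu(x), f = |.|, evaluated at the minimizer prox_{mu f}(x)."""
    p = prox_abs(x, mu)
    return abs(p) + (p - x) ** 2 / (2.0 * mu)

def grad_envelope(x):
    """Gradient formula grad f_mu(x) = (x - prox_{mu f}(x)) / mu."""
    return (x - prox_abs(x, mu)) / mu

# Brute-force check of the envelope on a fine grid ...
ys = np.linspace(-5.0, 5.0, 200001)
def envelope_grid(x):
    return np.min(np.abs(ys) + (ys - x) ** 2 / (2.0 * mu))

xs = [-2.0, -0.3, 0.0, 0.7, 1.9]
err_env = max(abs(envelope_grid(v) - envelope(v)) for v in xs)
# ... and a finite-difference check of the gradient formula.
h = 1e-6
err_grad = max(abs((envelope(v + h) - envelope(v - h)) / (2.0 * h) - grad_envelope(v))
               for v in xs)
print(err_env, err_grad)
```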
\section{Numerical Experiments}
\label{applications}
Based on the continuous rates of Table~\ref{convergence}, we expect that the accelerated algorithms that we have introduced will converge faster than their non-accelerated counterparts.
In this section we investigate this numerically.
\subsection{LASSO}
Consider the well-known LASSO regression problem
\begin{equation}\label{lassoprob}
\min_{x\in\mathbb{R}^n} \left\{ F(x) = \tfrac{1}{2}\| A x - b\|^2 + \alpha \| x\|_1 \right\}
\end{equation}
where $A \in \mathbb{R}^{m\times n}$ is a given matrix, $b \in \mathbb{R}^m$
is a given signal, and $\alpha > 0$ is a weighting parameter.
We generate data by sampling the entries of
$A$ i.i.d.~from $\mathcal{N}(0,1)$, where $\mathcal{N}(\mu, \sigma)$ denotes a normal distribution with mean $\mu$ and standard deviation $\sigma$, and then normalizing the columns of $A$ to have unit two norm.
We sample
$x_{\bullet} \in \mathbb{R}^{n}$ with sparsity level $95\%$ (i.e., only $5\%$ of its entries are nonzero), drawing the nonzero entries i.i.d.~from $\mathcal{N}(0,1)$, and
then add noise to obtain the observed signal
$b = A x_{\bullet} + e$, where the entries of $e$ are chosen i.i.d.~from a normal distribution with mean zero and standard deviation $10^{-3}$.
We choose $m=500$ and $n=2500$. The resulting signal-to-noise ratio
is on the order of $250$, and $x_\bullet$ has $125$ nonzero entries.
The parameter $\alpha$ is set as
$\alpha= 0.1 \alpha_{\textnormal{max}}$ where
$\alpha_{\textnormal{max}} = \| A^T b \|_{\infty}$ is the maximum value for $\alpha$ such that \eqref{lassoprob} admits a nontrivial solution.
We evaluate methods by computing the relative error
$|F_k - F^\star| / F^\star$ where $F_k \equiv F(x_k)$ and $F^\star$ is the value of the objective obtained with the default implementation of CVXPY.
We compare four frameworks:
ADMM and Douglas-Rachford~(DR) (these correspond to Algo.~\ref{agenadmmrebal} and Algo.~\ref{ady}, respectively, with $f(x) = \tfrac{1}{2} \|Ax-b\|_2^2$, $g(x) = \alpha\|x\|_1$, and $w(x) = 0$), and forward-backward (FB) splitting and Tseng splitting (these correspond to Algo.~\ref{ady} and Algo.~\ref{atseng}, respectively, with $f(x) = 0$, $g(x) = \alpha \|x\|_1$, $w(x) = \tfrac{1}{2} \|Ax-b\|_2^2$). For each of these four frameworks, we consider three variants: no acceleration
(i.e., setting $\gamma_k = 0$ for all $k$),
acceleration based on decaying damping
(i.e., setting $\gamma_k$ by~\eqref{xhat} with damping coefficient defined in~\eqref{nagdamp}), and
acceleration based on constant damping
(i.e., setting $\gamma_k$ by~\eqref{xhat} with damping coefficient defined in~\eqref{hbdamp}). This results in a total of twelve algorithms that we denote by ADMM, ADMM-decaying, ADMM-constant, DR, DR-decaying, DR-constant, FB, FB-decaying, FB-constant, Tseng, Tseng-decaying, Tseng-constant.
In all cases, we choose a step size of $\lambda = 0.1$ for the proximal operators.
For decaying damping we choose $r=3$,
and for constant damping $r=0.5$.
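The setup above can be sketched in a few lines. The following is an illustrative scaled-down instance (not the paper's code; smaller dimensions, fewer iterations, and a step size set to $1/L$ for this instance rather than the fixed $0.1$ used in the text), comparing forward-backward splitting with and without decaying damping:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 50, 200                              # a scaled-down instance of the setup above
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, axis=0)              # unit-norm columns
x_true = rng.standard_normal(n) * (rng.random(n) < 0.05)   # 95% sparse
b = A @ x_true + 1e-3 * rng.standard_normal(m)
alpha = 0.1 * np.linalg.norm(A.T @ b, np.inf)

F = lambda x: 0.5 * np.linalg.norm(A @ x - b) ** 2 + alpha * np.linalg.norm(x, 1)
soft = lambda x, t: np.sign(x) * np.maximum(np.abs(x) - t, 0.0)
lam = 1.0 / np.linalg.norm(A, 2) ** 2       # step 1/L for this instance

def fb(momentum, iters=500, r=3.0):
    """Forward-backward splitting (Algo. ady with f = 0): prox-gradient on w + g."""
    x = np.zeros(n)
    x_prev = x.copy()
    for k in range(iters):
        gamma = k / (k + r) if momentum else 0.0   # decaying damping, or none
        xhat = x + gamma * (x - x_prev)
        x_prev, x = x, soft(xhat - lam * (A.T @ (A @ xhat - b)), lam * alpha)
    return F(x)

F_plain, F_acc = fb(False), fb(True)
print(F_plain, F_acc)    # the accelerated variant typically attains a lower value
```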
\begin{figure}[t]
\centering
\includegraphics[scale=.45]{lasso1.pdf}\qquad
\includegraphics[scale=.45]{lasso2.pdf}
\vspace{-.5em}
\caption{\label{lasso}
Performance of our twelve tested algorithm variants on problem~\eqref{lassoprob}.
We perform 10 Monte Carlo runs and show the mean and standard deviation of the relative error between $F_k = F(x_k)$ and $F^\star$, where $F^\star$ is the optimal value obtained by CVXPY.
}
\end{figure}
In Fig.~\ref{lasso} we report the mean
and standard deviation (errorbars) across 10 randomly generated instances of problem~\eqref{lassoprob} for various methods.
The figure shows that the accelerated variants of each method improve over the non-accelerated variant. In particular, the constant damping accelerated variant is the fastest in this example.
\subsection{Nonnegative matrix completion}
We now consider a matrix completion problem where the entries of the matrix are
constrained to lie in a specified range. Suppose that for
a low-rank matrix $M \in \mathbb{R}^{n\times m}$,
we only have access to certain entries whose ordered pairs are collected in a set $\Omega$; let the operator $\PP:\mathbb{R}^{n\times m} \to \mathbb{R}^{n\times m}$ denote the projection onto these observable entries.
The observable data matrix is given by $\Mobs = \PP(M)$ where
$[\PP(M)]_{ij} = M_{ij}$ if $(i,j) \in \Omega$
and $[\PP(M)]_{ij} = 0$ otherwise.
The goal is then to estimate the missing entries of $M$.
One popular approach is to solve the
convex optimization problem
$\min \{ \| X \|_{*} \, \vert \, \PP(X) = \PP(M)\}$, where $\| X \|_{*}$
is the nuclear norm of $X$~\cite{Cai:2010}.
We consider a modification of this approach by imposing
constraints of the form $a \le X_{ij} \le b$ for given constants $a$ and $b$.
Specifically, we solve
\begin{equation}
\label{matcomp}
\min_{X\in\mathbb{R}^{n\times m}} \bigg\{ F(X) = \underbrace{\alpha \| X\|_*}_{f(X)} +
\underbrace{\mathbb{I}_{[a,b]}(X)}_{g(X)} +
\underbrace{\tfrac{1}{2} \| \PP(X) - \PP(M) \|_{F}^{2}}_{w(X)} \bigg\}
\end{equation}
where $\| \cdot \|_F$ denotes the Frobenius norm,
$\mathbb{I}_{[a,b]}(X) = 0$ if
$ a \le X_{ij} \le b$ for all $(i,j)$ and
$\mathbb{I}_{[a,b]}(X) = \infty$ otherwise, and $\alpha > 0$ is a weighting parameter such that larger values of $\alpha$ promote lower rank solutions~\cite{Cai:2010} in problem~\eqref{matcomp}.
We generate the low-rank matrix as $M = L_1 L_2^T$ where $\{L_1,L_2\} \subset \mathbb{R}^{100 \times 5}$ with entries chosen i.i.d.~from $\mathcal{N}(3, 1)$.
This ensures $M$ has rank $5$ (with probability one) and that each entry is positive with high probability (each test instance was verified to have positive entries).
We sample $s n^2$ entries of $M$ uniformly at random, with
a sampling ratio $s=0.4$, i.e., $40\%$ of the matrix $M$ is observed in $M_\textnormal{obs}$. We choose
\begin{equation}
\begin{split}
a &= \min\{ [M_{\textnormal{obs}}]_{ij} \, \vert \, (i,j)\in\Omega\}
- \sigma/2 , \qquad \\
b &= \max\{ [M_{\textnormal{obs}}]_{ij} \, \vert \, (i,j)\in\Omega \} + \sigma/2
\end{split}
\end{equation}
where $\sigma$ is the
standard deviation of all entries of $M_\textnormal{obs}$.
We compare two frameworks: Davis-Yin (DY) (see Algo.~\ref{ady}) and ADMM (see Algo.~\ref{agenadmmrebal}). For each of these two frameworks, we consider the same three variants discussed in the previous section: no acceleration, acceleration based on decaying damping, and acceleration based on constant damping. These six algorithms are denoted by DY, DY-decaying, DY-constant, ADMM, ADMM-decaying, and ADMM-constant.
Problem~\eqref{matcomp} can be solved using these algorithms with the proximal operator
$ J_{\tau \partial \| \cdot \|_*}(X) = U D_{\tau }(\Sigma) V^T$,
where $X = U \Sigma V^T$ is the singular value decomposition of $X$ and
$[D_{\tau}(\Sigma)]_{ii} = \max\{ \Sigma_{ii} - \tau, 0\}$; see \cite{Cai:2010} for details.
The proximal operator of $g$ is just the projection
$\big[J_{\lambda \partial \mathbb{I}_{ [a,b] } }(X)\big]_{ij}
= \max\{a, \min(X_{ij}, b)\}$. Finally,
$\nabla w(X) = \PP(X - M)$.
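The three proximal maps listed above can be sketched as follows (illustrative sizes, not the $100\times 100$ instances of the text); the check at the end confirms that singular value thresholding shrinks every singular value by exactly $\tau$, floored at zero:

```python
import numpy as np

rng = np.random.default_rng(2)
n, rank = 30, 5                                  # illustrative sizes
L1 = rng.normal(3.0, 1.0, (n, rank))
L2 = rng.normal(3.0, 1.0, (n, rank))
M = L1 @ L2.T                                    # rank-5 ground-truth matrix
mask = rng.random((n, n)) < 0.4                  # Omega: roughly 40% of entries observed

def svt(X, tau):
    """prox of tau*||.||_*: shrink every singular value by tau (floored at zero)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ (np.maximum(s - tau, 0.0)[:, None] * Vt)

def proj_box(X, a, b):
    """prox of the indicator of {a <= X_ij <= b}: entrywise clipping."""
    return np.clip(X, a, b)

def grad_w(X):
    """Gradient of 0.5*||P(X) - P(M)||_F^2, where P keeps only the observed entries."""
    return mask * (X - M)

X = M + rng.standard_normal((n, n))
tau = 5.0
s_before = np.linalg.svd(X, compute_uv=False)
s_after = np.linalg.svd(svt(X, tau), compute_uv=False)
gap = np.max(np.abs(s_after - np.maximum(s_before - tau, 0.0)))
print(gap)                                       # ~0
```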
In terms of algorithm parameters, we choose
a step size of $\lambda=1$ (for all variants), $r=3$ for
decaying damping,
and $r=0.1$ for constant damping.
To evaluate algorithm performance, we use the relative error measure
\begin{equation}
\| M_k - M \|_F \big/ \| M\|_F
\end{equation}
where $M_k$ is the solution estimate obtained during the $k$th iteration.
The stopping criterion for the optimization algorithms is
$
\| M_{k+1} - M_k \|_F \big / \| M_k\|_F \le 10^{-10},
$
which was satisfied for every problem instance even though it is a relatively tight tolerance.
In Fig.~\ref{mc1} we report the mean
and standard deviation (errorbars) across 10 randomly generated instances of problem~\eqref{matcomp} with $\alpha = 3.5$ for the above algorithm variants.
All methods terminate successfully and recover a matrix with the correct rank of five and
a final relative error of $\approx 5\times 10^{-3}$. The total number of iterations performed by each method is also shown in Fig.~\ref{mc1}.
\begin{figure}[t]
\centering
\includegraphics[trim={0 -11 0 0},scale=0.45]{mc_convergence1}\qquad
\includegraphics[scale=0.45]{mc_iteration1}
\vspace{-.3cm}
\caption{Performance of algorithms on problem~\eqref{matcomp} with $\alpha = 3.5$. We perform $10$ Monte Carlo runs and indicate the mean and standard deviation for the relative error between the ground truth matrix $M$ and the $k$th iterate $M_k$ (left), and the number of iterations needed by the method to reach the termination tolerance (right).}
\label{mc1}
\end{figure}
Motivated by the relatively large final relative error achieved for the single value of $\alpha$ in the previous paragraph, next we consider
an annealing schedule on $\alpha$
that improves the relative error of the computed solutions.
We follow the procedure of \cite{Goldfarb:2011}
as follows.
Given a sequence $\alpha_1 > \alpha_2 > \dotsm > \alpha_{L} = \bar{\alpha} > 0$ for some $\bar{\alpha}$,
we run each algorithm with $\alpha_j$
and then use its solution as a starting point for the solution to the next run
with $\alpha_{j+1}$; all other parameters are kept fixed.
Such an approach has been used in compressed sensing~\cite{Hale:2010} and matrix completion~\cite{Goldfarb:2011}.
Starting with $\alpha_1 = \delta \| \Mobs \|_F$ for some $\delta\in(0,1)$, we
use the schedule
$\alpha_{j+1} = \max\{\delta \alpha_j, \bar{\alpha}\}$ until
reaching $\bar{\alpha}$.
In our tests we choose $\delta = 0.25$ and $\bar{\alpha}=10^{-8}$.
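The schedule just described can be sketched directly (the norm value passed in below is hypothetical):

```python
def alpha_schedule(obs_norm, delta=0.25, alpha_bar=1e-8):
    """Annealing schedule from the text: alpha_1 = delta*||M_obs||_F and
    alpha_{j+1} = max(delta*alpha_j, alpha_bar) until the floor alpha_bar is reached."""
    alphas = [delta * obs_norm]
    while alphas[-1] > alpha_bar:
        alphas.append(max(delta * alphas[-1], alpha_bar))
    return alphas

sched = alpha_schedule(100.0)                 # hypothetical ||M_obs||_F = 100
print(len(sched), sched[0], sched[-1])
```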
We use the same algorithm parameters as those used in creating Fig.~\ref{mc1}, except that for the constant damping variants we now use $r=0.5$ since it performs better.
In Fig.~\ref{mc2} we report the mean and standard deviation (errorbars) across $10$ randomly generated instances of problem~\eqref{matcomp}. All methods successfully reach the termination tolerance, as for the previous test, but now achieve a much better reconstruction accuracy (compare Fig.~\ref{mc1} and Fig.~\ref{mc2}).
The total number of iterations
for each method is also shown in Fig.~\ref{mc2}. In this example the decaying damping variants do not improve over
the non-accelerated method, but the constant damping
variants still provide a speedup.
We believe these findings can be explained by the fact that the accelerated gradient flow with constant damping attains exponential convergence
on strongly convex problems, as opposed to the decaying damping (see Table~\ref{convergence}).
\begin{figure}[t]
\centering
\includegraphics[trim={0 -11 0 0},scale=0.45]{mc_convergence2}\qquad
\includegraphics[scale=0.45]{mc_iteration2}
\vspace{-.3cm}
\caption{Performance of algorithms on problem~\eqref{matcomp} when annealing is used for $\alpha$. We perform $10$ Monte Carlo runs and indicate the mean and standard deviation for the relative error between the ground truth matrix $M$ and the $k$th iterate $M_k$ (left), and the number of iterations needed to reach the termination tolerance (right).}
\label{mc2}
\end{figure}
\section{Final Remarks}
\label{conclusion}
We showed that four types of proximal algorithms, namely forward-backward, Tseng splitting, Douglas-Rachford, and
Davis-Yin,
correspond to different discretizations of the gradient flow \eqref{gradflow}. We also showed
that several accelerated variants of each of these methods
arise from a similar
discretization to the accelerated gradient flow \eqref{secode}. Such algorithms are steady-state-preserving first-order integrators
to the associated ODE. Moreover, we showed that ADMM and its accelerated variants
correspond to a rebalanced splitting, which is a technique recently introduced in the literature \cite{Speth:2013} to obtain discretizations that preserve steady states.
The accelerated frameworks (see Algos.~\ref{agenadmmrebal}, \ref{ady}, and \ref{atseng}), which are new in general, reduce to known methods as special cases. Our frameworks provide different types of acceleration depending on the choice of damping strategy, such as \eqref{xhat} or
\eqref{damping_combination}, although other choices are also possible.
Our derivations provide a new perspective on
the important class of ``operator splitting methods'' by establishing tight
connections with splitting methods for ODEs.
Such an approach endows the gradient flow \eqref{gradflow} and
the accelerated gradient flow \eqref{secode} with a unifying
character for optimization since they capture the leading order behavior
of several known algorithms. However, a complete understanding of a particular algorithm requires a more refined analysis and is an interesting problem.
\subsection*{Acknowledgments}
We would like to thank Patrick Johnstone for discussions.
This work was supported by grants ARO MURI W911NF-17-1-0304 and NSF 1447822.
\section{Introduction}
In~\cite{Gromov2014Dirac}, Gromov proposed a geometric comparison theory for metrics with scalar curvature lower bounds. He speculated that Riemannian polyhedrons should play the role of~triangles in Alexandrov's comparison theory for sectional curvature~\cite{AleksandrovRerestovskiiNikolaev86}. As a first step, he obtained the following theorem for metrics with nonnegative scalar curvature, where the comparison models are Euclidean cubes:
\begin{theo}[\cite{Gromov2014Dirac}]\label{theo.cube.nonrigit}
Let $M=[0,1]^n$ be a cube, $g$ a smooth Riemannian metric. Then $(M,g)$ cannot simultaneously satisfy:
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] the scalar curvature $R(g)\ge 0$;
\item[${\rm 2)}$] each face of~$M$ is weakly strictly mean convex;\footnote{In~this paper, the mean curvature is taken with respect to outer unit normal vector. For instance, the standard sphere $S^{n-1}$ in $\mathbf{R}^n$ has mean curvature $n-1$.}
\item[${\rm 3)}$] the dihedral angles between adjacent faces are all acute.
\end{enumerate}
\end{theo}
Theorem~\ref{theo.cube.nonrigit} also has a rigidity statement: if $n\le 7$, and we assume all dihedral angles are not larger than $\pi/2$ in condition (3), then $(M,g)$ is isometric to an Euclidean rectangular solid (see~\cite{Li2017polyhedron,li2019dihedral}). This is called the dihedral rigidity phenomenon. In~\cite{Gromov2014Dirac,Gromov2018Adozen}, Gromov conjectured that this property is satisfied for all convex polyhedrons in $\mathbf{R}^n$:
\begin{conj}[the dihedral rigidity conjecture]\label{conj.dihedral.rigidity}
Let $M\subset \mathbf{R}^n$ be a convex polyhedron and~$g_0$ be the Euclidean metric. Suppose $g$ is a smooth Riemannian metric on~$M$. Denote its faces by~$F_i$, the mean curvature of~$F_i$ by~$H_i$, and the dihedral angle between two adjacent faces $F_i$, $F_j$ by~$\measuredangle_{ij}$ $(\measuredangle_{ij}(g)$ may be nonconstant$)$. Assume:
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] $R(g)\ge 0$ in $M$;
\item[${\rm 2)}$] $H_i(g)\ge 0$ on each face $F_i$;
\item[${\rm 3)}$] $\measuredangle_{ij}(g)\le \measuredangle_{ij}(g_0)$ on each pair of~adjacent faces $F_i$, $F_j$.
\end{enumerate}
Then $(M,g)$ is isometric to a flat polyhedron in $\mathbf{R}^n$.
\end{conj}
Conjecture~\ref{conj.dihedral.rigidity} and related problems have been studied and extended in recent years (see, e.g.,~\cite{Gromov2018Metric,gromov2018IAS,gromov2019lectures,Li2017polyhedron,li2019dihedral,LiMantoulidis2018positive,miao2019measuring}), leading to a range of~interesting new discoveries and questions on~manifolds with nonnegative scalar curvature. In~this paper, we investigate the analogous polyhedral comparison principle, together with the rigidity phenomenon, for metrics with nega\-tive scalar curvature lower bound.
By scaling, we assume $R(g)\ge -n(n-1)$. Our comparison model is a collection of~polyhedrons in the hyperbolic space, called \textit{parabolic prisms}, which we define now. Let $(\mathbf{H}^n,g_H)$ be the hyperbolic space with sectional curvature $-1$. We~choose the coordinate system $\{x_1,\dots,x_n\}$, $x_j\in \mathbf{R}$, such that $g_H$ takes the form
\[g_H={\rm d}x_1^2+{\rm e}^{2x_1}\big({\rm d}x_2^2+\cdots+{\rm d}x_n^2\big).\]
For any constant~$c$, the coordinate hyperplane $x_1=c$ is umbilical with constant mean curvature $n-1$ with respect to~$\partial_{x_1}$. The induced metric on it is isometrically Euclidean. These hyperplanes are called \textit{horospheres}. For $j\ge 2$, the coordinate hyperplanes $x_j=c$ are totally geodesic, and they intersect each other and the horospheres orthogonally.
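These statements follow directly from the warped-product form of~$g_H$: on the horosphere $x_1=c$, with unit normal $\partial_{x_1}$ and induced metric coefficients $g_{ij}={\rm e}^{2x_1}\delta_{ij}$ for $i,j\ge 2$, the second fundamental form is
\[
h_{ij} = \tfrac{1}{2}\,\partial_{x_1} g_{ij} = {\rm e}^{2x_1}\delta_{ij} = g_{ij},
\qquad\text{so}\qquad
H = g^{ij} h_{ij} = n-1,
\]
i.e., each horosphere is umbilical with constant mean curvature $n-1$, and its induced metric ${\rm e}^{2c}\big({\rm d}x_2^2+\cdots+{\rm d}x_n^2\big)$ is flat, being a constant multiple of the Euclidean metric.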
Denote $\hat{x}=(x_2,\dots,x_n)$. Given a polyhedron~$P\subset \mathbf{R}^{n-1}$, we call the set $\{(x_1,\hat{x})\colon 0\le x_1\le 1,\, \hat{x}\in P\}$ a parabolic prism in $\mathbf{H}^n$. As a special case of~the main theorem of~this paper, Theorem~\ref{theo.main}, we have the following
\begin{theo}\label{theo.parabolic.cube}
Let $n\le 7$, $M=[0,1]^n$ be a parabolic rectangle in $\mathbf{H}^n$, $g_H$ be the hyperbolic metric on~$M$. Denote the face $\partial M\cap \{x_1=1\}$ by~$F_T$, the face $\partial M\cap \{x_1=0\}$ by~$F_B$. Assume~$g$ is a Riemannian metric on~$M$ such that:
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] $R(g)\ge -n(n-1)$ in $M$;
\item[${\rm 2)}$] $H(g)\ge n-1$ on~$F_T$, $H(g)\ge -(n-1)$ on~$F_B$, and~$H(g)\ge 0$ on~$\partial M\setminus (F_T\cup F_B)$;
\item[${\rm 3)}$] the dihedral angles between adjacent faces of~$M$ are everywhere not larger than~$\pi/2$.
\end{enumerate}
Then $(M,g)$ is isometric to a parabolic rectangle in $\mathbf{H}^n$.
\end{theo}
Theorem~\ref{theo.parabolic.cube} addresses a question discussed during the workshop ``Emerging Topics on Scalar Curvature and Convergence'' at IAS in October, 2018. See \cite[Section~6]{gromov2018IAS}.
We~remark that Theorem~\ref{theo.parabolic.cube} holds for any general Riemannian polyhedron~$(M,g)$ with a~proper polyhedral map to the parabolic cube of~nonzero degree. It also holds for more general polyhedral types, as long as the comparison model is a parabolic prism $[0,1]\times P$, and~$P\subset \mathbf{R}^{n-1}$ satisfies Conjecture~\ref{conj.dihedral.rigidity}. By~\cite{Li2017polyhedron,li2019dihedral}, $P$ can be any $3$-dimensional simplex or $n$-dimensional non-obtuse prism. See Theorem~\ref{theo.dihedral.rigidity}.
It has been observed in \cite[Section~5]{li2019dihedral} that Conjecture~\ref{conj.dihedral.rigidity} is a localization of~the positive mass theorem for asymptotically flat manifolds \cite{SchoenYau1979ProofPositiveMass,Witten1981newproof}. Analogously, Theorem~\ref{theo.parabolic.cube} localizes the positive mass theorem for asymptotically hyperbolic manifolds (see, e.g.,~\cite{chruscielHerzlich2003mass,ChruscielNagy2001mass,Wang2001mass}), which, in special cases, can be deduced from the following rigidity result of~scalar curvature, due to Min-Oo~\cite{Minoo1989rigidity} on spin manifolds and to Andersson--Cai--Galloway~\cite{AnderssonCaiGalloway2008rigidity} on all manifolds of~dimension at most $7$ (see also~\cite{chruciel2019hyperbolic,HuangJangMartin2020mass} for more recent developments):
\begin{theo}[\cite{AnderssonCaiGalloway2008rigidity}; see also~\cite{Minoo1989rigidity} for spin manifolds]\label{theo.R.rigidity.hyperbolic}
Suppose $(S^n,g)$, $2\le n\le 7$, has scalar curvature $R(g)\ge -n(n-1)$, and is isometric to $\mathbf{H}^n$ outside a compact set. Then $(S^n,g)$ is isometric to $(\mathbf{H}^n, g_H)$.
\end{theo}
Precisely, suppose Theorem~\ref{theo.parabolic.cube} holds. Given $(S^n,g)$ with $R(g)\ge -n(n-1)$ such that $(S,g)$ is isometric to $(\mathbf{H}^n,g_H)$ outside a compact set $K$, take a sufficiently large $R$ such that the boundary of~the parabolic rectangle $M=[-R,R]^n$ isometrically embeds into $S\setminus K$. Denote the region bounded by~$\partial M$ in $S$ by~$M_1$. Then $M_1$ has a degree one map to $M$, by sending $K$ to an interior point $p\in M$ and~$M_1\setminus K$ to $M\setminus \{p\}$. Thus Theorem~\ref{theo.parabolic.cube} implies that $(M_1,g)$ is isometric to $(M,g_H)$.
Given the connection between the positive mass theorem and the dihedral rigidity conjecture, it would be interesting to see whether a similar comparison principle holds for metrics with positive scalar curvature lower bound. The delicate issue is that the corresponding rigidity phenomenon on a hemisphere is false, due to the counterexamples by Brendle--Marques--Neves~\cite{Brendle2011deformation}. This, together with other related open questions, will be discussed in Section~\ref{section4}.
\section{Notations and the main theorem}
The main objects in this paper are \textit{Riemannian polyhedrons}, which we define as follows.
\begin{defi}A compact Riemannian manifold $(M^n,g)$ with boundary is called a Riemannian polyhedron, if $(M,g)$ can be isometrically embedded into $\mathbf{R}^N$ for some $N\ge n$, and at every $x\in M$, there is a radius $r>0$ and a diffeomorphism $\phi_x\colon B_r\big(x\in \mathbf{R}^N\big)\rightarrow B_1\big(0^N\big)$, such that $\phi_x(B_r\cap M)=P\cap B_1\big(0^N\big)$ for some Euclidean polyhedral cone~$P$ of~dimension~$n$, and~$D\phi_x|_x$ is an isometry. Further, we require that $\phi_x$ is $C^{2,\alpha}$ for some $\alpha\in (0,1)$ independent of~$x$.
\end{defi}
Specially, a compact domain $M$ enclosed by piecewise $C^{2,\alpha}$ hypersurfaces in a smooth Riemannian manifold is a Riemannian polyhedron. Given $x\in M^n$, there is an integer $k\in [0, n]$ such that a neighborhood of~$x$ in $M$ is diffeomorphic to $P_0^{n-k}\times \mathbf{R}^k$, and~$P_0$ is a polyhedral cone in $\mathbf{R}^{n-k}$ without translation symmetry. We~call the union of~all such points the $k$-faces of~$M$. In~particular, the $n$-face is the interior of~$M$, the $(n-1)$-faces are the union of~smooth components of~$\partial M$ (which we called ``faces'' in Theorem~\ref{theo.parabolic.cube}), and the $(n-2)$-faces are the interior of~edges of~$M$.
\begin{defi}Let $P\subset \mathbf{R}^n$ be a flat Euclidean polyhedron, and~$(M^n,g)$ be a Riemannian poly\-hedron. We~say $M$ is over-$P$-polyhedral, if $M$ admits a proper polyhedral map $\phi$ onto $P$ (i.e., $\phi$ maps any $k$-face of~$M$ to a $k$-face of~$P$), such that $\phi$ is of~nonzero degree.
\end{defi}
In~\cite{Li2017polyhedron} and~\cite{li2019dihedral}, Conjecture~\ref{conj.dihedral.rigidity} was proved for Riemannian polyhedrons that are over-$P$-polyhedral, where
\begin{enumerate}\itemsep=0pt
\item[1)] either $n=3$, and~$P\subset\mathbf{R}^3$ is an arbitrary simplex;
\item[2)] or $3\le n\le 7$, and~$P$ is the Cartesian product $P_0^2\times [0,1]^{n-2}$. Here $P_0\subset \mathbf{R}^2$ is a polygon with non-obtuse dihedral angles.
\end{enumerate}
Precisely, we have:
\begin{theo}[\cite{Li2017polyhedron,li2019dihedral}]\label{theo.dihedral.rigidity}
Let $P\subset \mathbf{R}^n$ be as above, $(M^n,g)$ be an over-$P$-polyhedral Riemannian polyhedron, and~$\phi\colon M\rightarrow P$ be the polyhedral map of~nonzero degree. Suppose:
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] $R(g)\ge 0$ in $M$;
\item[${\rm 2)}$] $H(g)\ge 0$ on each face of~$M$;
\item[${\rm 3)}$] $\measuredangle_{ij}(g)|_x\le \measuredangle_{ij}(g_0)|_{\phi(x)}$ for every point~$x$ on the edges of~$M$.
\end{enumerate}
Then $(M,g)$ is isometric to an Euclidean polyhedron.
\end{theo}
We~now state the main theorem of~this paper.
\begin{theo}\label{theo.main}Let $2\le n \le 7$, $P\subset \mathbf{R}^{n-1}$ be an Euclidean polyhedron such that Theorem~{\rm \ref{theo.dihedral.rigidity}} holds for $P$. Denote by~$([0,1]\times P,g_H)$ the parabolic prism in the hyperbolic space with $R(g)=-n(n-1)$. Suppose $(M^n,g)$ is a Riemannian polyhedron that is over-$[0,1]\times P$-polyhedral, and let $\phi\colon M^n\rightarrow [0,1]\times P$ be the polyhedral map of~nonzero degree. Suppose:
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] $R(g)\ge -n(n-1)$ in $M$;
\item[${\rm 2)}$] $H(g)\ge n-1$ on~$\phi^{-1}(\{1\}\times P)$, $H(g)\ge -(n-1)$ on~$\phi^{-1}(\{0\}\times P)$, and~$H(g)\ge 0$ on other faces of~$M$;
\item[${\rm 3)}$] $\measuredangle_{ij}(g)|_x\le \measuredangle_{ij}(g_H)|_{\phi(x)}$ for every point $x$ on the edges of~$M$.
\end{enumerate}
Then $(M,g)$ is isometric to a parabolic prism in the hyperbolic space.
\end{theo}
\begin{rema}
The dimension restriction~$n\le 7$ in Theorems~\ref{theo.dihedral.rigidity} and~\ref{theo.main} is due to regularity of~free boundary area minimizing surfaces and isoperimetric regions. In~light of~the recent progress on positive mass theorem in higher dimensions~\cite{SchoenYau2017positive}, we speculate that the singular analysis may be applicable to a non-rigid variant of~Theorem~\ref{theo.main}, i.e.,~Theorem~\ref{theo.cube.nonrigit}.
\end{rema}
\begin{rema}
The condition that $P=P_0^2\times [0,1]^{n-2}$, where $P_0$ is non-obtuse, is due to the boundary regularity theory developed by Edelen and the author~\cite{EdelenLi2020regularity}. In~general, one can guarantee that free boundary area minimizing surfaces are $C^{2,\alpha}$ regular in a non-obtuse polyhedral domain. See~\cite[Section~9]{EdelenLi2020regularity}.
\end{rema}
\section{Proof of~the main theorem}
The proof of~Theorem~\ref{theo.main} is an adaptation of~the proof of~Theorem~\ref{theo.dihedral.rigidity}. We~will be using lots of~techniques developed in~\cite{li2019dihedral}. Given a Riemannian polyhedron~$(M^n,g)$ as in Theorem~\ref{theo.main}, denote the faces $F_T=\phi^{-1}(\{1\}\times P)$ and~$F_B=\phi^{-1}(\{0\}\times P)$. If there exists a point $x$ on the edge of~$M$ such that $\measuredangle(g)|_x<\measuredangle(g_H)|_{\phi(x)}$, we deform the metric $g$ to $\tilde{g}$ as in \cite[Section~11]{Gromov2018Metric}, such that $\measuredangle(\tilde{g})|_x=\measuredangle(g_H)|_{\phi(x)}$ in a neighborhood of~$x$ and the mean curvature of~the two faces containing $x$ increases. Thus, without loss of~generality, we assume that $\measuredangle(g)|_x=\measuredangle(g_H)|_{\phi(x)}$ for~all points $x$ on the edge.
Consider the relative isoperimetric problem:
\begin{gather}
I=\inf\big\{\mathcal H^{n-1}\big(\partial \Omega\llcorner \mathring{M}\big)-(n-1)\mathcal H^n(\Omega)\colon
\Omega\subset M \text{ is a Caccioppoli set},\nonumber
\\ \phantom{I=\inf\{}
F_B\subset \Omega,\, F_T\cap \Omega=\varnothing\big\}.\label{problem.variation}
\end{gather}
It follows from the standard compactness results that $I$ is achieved by a Caccioppoli set $\Omega$. Denote $\Sigma=\spt\big(\partial \Omega\llcorner \mathring{M}\big)$. Since $F_B$, $F_T$ meet other faces orthogonally and~$H(g)\ge n-1$ on~$F_T$, $H(g)\ge -(n-1)$ on~$F_B$, by the strong maximum principle (see \cite[Section~3.1]{li2019dihedral}), either $\Sigma$ is disjoint from $F_T$ and~$F_B$, or $\Sigma$ coincides with $F_T$ or $F_B$. In~any case, $\Sigma$ is an isoperimetric surface with free boundary on~$\partial M\setminus (F_T\cup F_B)$.
We~remark here that similar variational problems as \eqref{problem.variation} have been considered by Witten--Yau~\cite{WittenYau1999connectedness} and by Andersson--Cai--Galloway~\cite{AnderssonCaiGalloway2008rigidity}, where it is called the BPS brane action.
We~now study the regularity of~$\Sigma$. Since $n\le 7$, the regularity of~$\Sigma$ in the interior and smooth part of~$\partial M$ follows from the classical theory~\cite{GruterJost1986Allard, Simons1968minimal}. For a point $x\in \Sigma$ on a $k$-face of~$M$ with $k\le n-2$, we note that the tangent domain of~$M$ at $x$ is given by~$W^2\times [0,\infty)^{n-k-2}\times \mathbf{R}^{k}$, where~$W^2$ is a wedge region in $\mathbf{R}^2$ with non-obtuse opening angle. Thus, we apply \cite[Section~9]{EdelenLi2020regularity} and \cite[Appendix B]{li2019dihedral}, and conclude:
\begin{prop}
$\Sigma$ is $C^{2,\alpha}$ graphical over its tangent plane everywhere.
\end{prop}
Moreover, since $\Sigma$ is homologous to $F_B$, we conclude that at least one connected component (which we still denote by~$\Sigma$) has a nonzero degree map to $P$, given by~$\Sigma\simeq F_B\xrightarrow{\phi} \{0\}\times P$.
Since $\Omega$ is a minimizer for \eqref{problem.variation}, $\Sigma$ has constant mean curvature $(n-1)$ with respect to the outer unit normal $\nu$ of~$\Omega$, and stability implies that
\begin{gather}
Q(\varphi):=\int_\Sigma |\nabla \varphi|^2 -\frac{1}{2}\big(R_M-R_\Sigma + n(n-1)+|\mathring{A}|^2\big)\varphi^2 {\rm d}\mathcal H^{n-1}\nonumber
\\ \hphantom{Q(\varphi):=}
{} -\int_{\partial \Sigma} \secondfund(\nu,\nu)\varphi^2 {\rm d}\mathcal H^{n-2}\ge 0,
\end{gather}
for all $C^1$ functions~$\varphi$. Here $R_\Sigma$, $\mathring{A}$ are the scalar curvature of~the induced metric and the traceless second fundamental form of~$\Sigma$, respectively, and~$\secondfund$ is the second fundamental form of~$\partial M$.
Let $\varphi>0$ be the principal eigenfunction associated with the quadratic form $Q$. Then $\varphi$ solves the equation
\begin{equation}
\begin{cases}
\Delta_\Sigma \varphi+\dfrac{1}{2}\big(R_M-R_\Sigma + n(n-1)+|\mathring{A}|^2\big)\varphi = -\lambda_1\varphi\,,
\\
\displaystyle\pa{\varphi}{\eta}=\secondfund(\nu,\nu)\varphi\,.
\end{cases}
\end{equation}
Here $\eta$ is the outer conormal vector field of~$\Sigma$, and~$\lambda_1$ is the principal eigenvalue associated with~$Q$. It follows from \cite[Lemma 4.1]{li2019dihedral} that $\varphi\in C^{2,\alpha}(\Sigma)$. Denote $\tilde{g}=\varphi^{\frac{2}{n-2}}g$ on~$\Sigma$. By the very same calculations as in \cite[equations~(4.6) and~(4.7)]{li2019dihedral}, we have
\[R(\tilde{g})=\varphi^{-\frac{n}{n-2}}\left(\big(R_M+n(n-1)+|\mathring{A}|^2+\lambda_1\big)\varphi+\frac{n-1}{n-2}\frac{|\nabla \varphi|^2}{\varphi}\right)\ge 0,\]
and~$H_{\partial \Sigma}(\tilde g)=\varphi^{-\frac{1}{n-2}}\big(H_{\partial \Sigma}(g)+\secondfund(\nu,\nu)\big)=\varphi^{-\frac{1}{n-2}} H_{\partial M}(g)\ge 0$.
Moreover, since $\Sigma$ meets $\partial M$ orthogonally and~$\tilde{g}$ is conformal to $g$, the dihedral angles of~$(\Sigma,\tilde{g})$ are everywhere equal to those of~$P$. Thus, by Theorem~\ref{theo.dihedral.rigidity}, $(\Sigma,\tilde{g})$ is isometric to an Euclidean polyhedron. Tracing the equality, we have
\[
R_M=0,\qquad \mathring A=0,\qquad \lambda_1=0,\qquad \nabla \varphi=0 \qquad \text{on }\Sigma.
\]
Therefore $\varphi$ is a constant function, and hence $\Ric_M(\nu,\nu)=-(n-1)$ on~$\Sigma$ and~$\secondfund(\nu,\nu)=0$ on~$\partial \Sigma$. It follows that $\Sigma$ is totally umbilical and infinitesimally rigid.
Next, we adapt the ideas in~\cite{CarlottoChodoshEichmair2016effective,ChodoshEichmairMoraru2018splitting} to study rigidity. Let $M^-$ be the region enclosed by~$\Sigma$ and~$F_B$. We~follow the same argument as in \cite[Section~4]{li2019dihedral}: by constructing the very same deformed metrics $\{g(t)\}_{t\in [0,\varepsilon)}$, solving the relative isoperimetric problem \eqref{problem.variation} and taking convergence as $t\rightarrow 0$, we obtain another free-boundary isoperimetric hypersurface $\Sigma'$ in $(M,g)$ lying between $\Sigma$ and~$F_B$. Moreover, $\Sigma'$ is also isometrically Euclidean, and is umbilical and infinitesimally rigid. By repeating this argument, we obtain a dense collection of~such hypersurfaces~$\{\Sigma^\rho\}$ in~$M$.
Fix $\Sigma^\rho$, its outer unit normal $\nu$, and~$x_0\in \Sigma^\rho$. For $\rho_j$ sufficiently close to $\rho$, $\Sigma^{\rho_j}$ can be writ\-ten as a normal graph of~a function~$u^j$ over $\Sigma^\rho$. By standard curvature estimates, the function~$u^j/u^j(x_0)$ converges in $C^{2,\alpha}(\Sigma^\rho)$ to a nonzero function~$u$. The Gauss--Codazzi equation implies that, for~any tangential vectors $X,Y$ on~$\Sigma^\rho$,
\[
\big(\nabla_{\Sigma^\rho}^2 u\big)(X,Y)+Rm_M(\nu,X,Y,\nu)u + A_{\Sigma^\rho}(X,Y)u=0.
\]
Taking trace, we have that $\Delta_{\Sigma^\rho}u=0$. Also, since $\Sigma^{\rho_j}$ meets $\partial M$ orthogonally and~$\secondfund(\nu,\nu)=0$ on~$\partial\Sigma^{\rho_j}$, $\pa{u}{\eta}=0$. Thus $u$ is a constant function, and hence $Rm_M(\nu,X,Y,\nu)=-\bangle{X,Y}$. This proves that $M$ has constant sectional curvature $-1$. Theorem~\ref{theo.main} is proved.
\section{Discussions and related questions}\label{section4}
\subsection{Metrics with positive scalar curvature lower bounds}
It is tempting to conjecture that a suitable extension of~Theorems~\ref{theo.dihedral.rigidity} and~\ref{theo.main} holds for metrics with positive scalar curvature lower bounds, where the model space is $S^n$ with a~round metric. Although the precise formulation is unclear, such an extension would likely localize certain scalar curvature rigidity phenomena for spheres. Recall the following theorem by Brendle and Marques~\cite{BrendleMarques2011scalar}:
\begin{theo}[\cite{BrendleMarques2011scalar}]
Let $\Omega=B(\delta)\subset S^n$ be a closed geodesic ball of~radius $\delta$ with $\cos\delta\ge \frac{2}{\sqrt{n+3}}$, and~$\overline{g}$ be the standard metric on~$S^n$. Suppose $g$ is another metric on~$\Omega$ such that:
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] $R(g)\ge R(\overline{g})$ in $\Omega$;
\item[${\rm 2)}$] $H(g)\ge H(\overline{g})$ on~$\partial \Omega$;
\item[${\rm 3)}$] $g$ and~$\overline{g}$ induce the same metric on~$\partial \Omega$.
\end{enumerate}
If $g-\overline{g}$ is sufficiently small in the $C^2$-norm, then $g$ is isometric to $\overline{g}$.
\end{theo}
The lower bound $\frac{2}{\sqrt{n+3}}$ for $\cos\delta$ was improved in a subsequent paper by Cox, Miao and~Tam~\cite{CoxMiaoTam2013remarks}. However, it is known that the analogous statement for $\delta=\frac{\pi}{2}$ does not hold, due to the counterexample by Brendle, Marques and Neves~\cite{Brendle2011deformation}.
The original proof of~Theorem~\ref{theo.cube.nonrigit} by Gromov uses the fact that a cube is the fundamental domain of~$\mathbf{Z}^n$ action on~$\mathbf{R}^n$: assuming a counterexample for Theorem~\ref{theo.cube.nonrigit} exists, through a~sequence of~doubling and smoothing, one obtains a smooth metric on~$T^n$ with positive scalar curvature, contradicting~\cite{GromovLawson1980Spin} and~\cite{SchoenYau1979structure}.
Take the standard embedding $S^n_+\subset\mathbf{R}^{n+1}$. The hemisphere $S^n_+=S^n\cap \{x_{n+1}\ge 0\}$ can be obtained by consecutive doublings of~the spherical simplex\vspace{-.5ex}
\[
\Omega_n:=S^n\cap \{x_j\ge 0,\, j=1,\dots,n+1\}.
\]
We~make the following conjecture concerning dihedral rigidity of~$\Omega_n$.
\begin{conj}\label{conj.spherical.simplex}
Let $(M^n,g)$ be a Riemannian polyhedron which is diffeomorphic to a simplex of~dimension~$n$. Suppose\vspace{-.5ex}
\begin{enumerate}\itemsep=0pt
\item[${\rm 1)}$] $R(g)\ge n(n-1)$ in $M$;
\item[${\rm 2)}$] $\partial M$ is piecewise totally geodesic;
\item[${\rm 3)}$] $\measuredangle_{ij}(g)\le \frac{\pi}{2}$ on the edges of~$M$;
\item[${\rm 4)}$] moreover, each face of~$M$ is globally isometric to the standard spherical simplex $\Omega_{n-1}$.
\end{enumerate}
Then $(M^n,g)$ is isometric to $\Omega_n$.
\end{conj}
When $n=2$, Conjecture~\ref{conj.spherical.simplex} holds by doubling $M$ twice across its boundary, and using a~theorem due to Toponogov~\cite{Toponogov1959evaluation}. On the other hand, the construction in~\cite{Brendle2011deformation} does not seem to give a counterexample to Conjecture~\ref{conj.spherical.simplex}.
\subsection[Weak notions of $R \ge \kappa$]{Weak notions of~$\boldsymbol{R\ge \kappa}$}
One of~Gromov's motivations for studying Conjecture~\ref{conj.dihedral.rigidity} is to define the notion of~``$R\ge 0$'' in~the weak sense. The crucial observation is that conditions (2) and (3) concern $C^0$ properties of~the metric $g$, and are stable under $C^0$ convergence of~metrics (see~\cite{Gromov2014Dirac}). Thus, we may define{\samepage
\begin{gather*}
\text{``}R(g)\ge 0\text{''} \Leftrightarrow \text{there exists no cube }M \nonumber
\\ \hphantom{\text{``}R(g)\ge 0\text{''} \Leftrightarrow{}}
\text{with mean convex faces and everywhere acute dihedral angle.}
\end{gather*}}
And more generally for $\kappa<0$,
\begin{gather}
\text{``}R(g)\ge \kappa \text{''} \Leftrightarrow \text{there exists no cube }M \text{ in the product with }S^2_{-\kappa}\nonumber
\\ \hphantom{\text{``}R(g)\ge \kappa \text{''} \Leftrightarrow{}}
\text{with mean convex faces and everywhere acute dihedral angle.}\label{weak.notion.general}
\end{gather}
Here $S^2_{-\kappa}$ is the space form with scalar curvature $-\kappa$.
Using this observation, Gromov proved the following theorem on the convergence of~metrics with scalar curvature lower bounds:
\begin{theo}[\cite{Gromov2014Dirac}, see also~\cite{bamler2016ricci}]
Let $M^n$ be a smooth manifold, $g$, $g_k$, $k\ge 1$, be smooth Riemannian metrics on~$M$, and~$g_k\rightarrow g$ in $C^0$ as tensors. Suppose $R(g_k)\ge \kappa$ on~$M$. Then $R(g)\ge \kappa$ as well.
\end{theo}
Based on Theorem~\ref{theo.main}, we may define ``$R\ge -\kappa$'' for a positive constant~$\kappa$:
\begin{gather}
\text{``}R(g)\ge -\kappa\text{''} \Leftrightarrow \text{there exists no cube }M \text{ with acute dihedral angles}\nonumber
\\ \hphantom{\text{``}R(g)\ge -\kappa\text{''} \Leftrightarrow{}}
\text{and faces }\{F_j\}, \text{such that } H>-(n-1) \text{ on }F_1,\nonumber
\\ \hphantom{\text{``}R(g)\ge -\kappa\text{''} \Leftrightarrow{}}
H>0 \text{ on other faces, and }H>(n-1) \text{ on the opposite face of~}F_1.\label{weak.notion.R>-kappa}
\end{gather}
Definitions \eqref{weak.notion.general} and \eqref{weak.notion.R>-kappa} should be equivalent for smooth metrics, but we~think that \eqref{weak.notion.R>-kappa} is slightly more natural conceptually, as it also satisfies the dihedral rigidity phenomenon.
Recently, Burkhardt-Guim~\cite{burkhardt-guim2019pointwise} proposed a different possible notion of~``$R>\kappa$'' using Ricci flow. See \cite[Definition~1.2]{burkhardt-guim2019pointwise}. These definitions all share some good properties as a weak notion. For instance, they all agree with $R(g)>\kappa$ in the classical sense for a $C^2$ metric $g$, and they can be localized in a neighborhood of~any point on the manifold. The natural question is:
\begin{quest}\label{question.different.definitions}
Let $M$ be a smooth manifold and $g$ a $C^0$ metric on it. Do the definitions \eqref{weak.notion.R>-kappa} and \cite[Definition~1.2]{burkhardt-guim2019pointwise} agree on~$g$?
\end{quest}
\subsection*{Acknowledgements}
I would like to thank Christina Sormani and Misha Gromov for organizing the excellent workshop ``Emerging Topics on Scalar Curvature and Convergence'' at the Institute for Advanced Study, and everyone in the workshop for valuable discussions. The author is supported by NSF grant DMS-2005287.
\pdfbookmark[1]{References}{ref}
\section{Introduction}
The
discrete cosine transform (DCT)~\cite{britanak2007discrete,rao1990discrete}
is a fundamental building-block
for several image and video processing applications.
In fact,
the DCT closely approximates
the Karhunen-Lo\`eve transform (KLT)~\cite{britanak2007discrete},
which is capable of optimal
data decorrelation
and energy compaction
of first-order stationary Markov
signals~\cite{britanak2007discrete}.
This class of signals is particularly appropriate for
the modeling of
natural images~\cite{britanak2007discrete,cintra2014low}.
Thus,
the DCT finds
applications
in several
contemporary
image and video compression standards,
such as
the JPEG~\cite{penn1992}
and
the H.26x family of codecs~\cite{h261,h263,h2642003}.
Indeed,
several fast algorithms
for computing the exact DCT were
proposed~\cite{Chen1977,arai1988fast,fw1992,hou1987fast,lee1984new,loeffler1991practical,vetterli1984simple,wang1984fast}.
However,
these methods
require
the use of arithmetic
multipliers~\cite{lengwehasatit2004scalable,haweel2001new},
which are time, power, and hardware demanding
arithmetic operations,
when compared to additions or bit-shifting operations~\cite{blahut}.
This fact may jeopardize the application
of the DCT in very low power consumption contexts~\cite{tran,Lin2006}.
To overcome this problem,
in recent years,
several
approximate DCT methods
have been proposed.
Such approximations
do not compute the exact DCT,
but are capable of providing
energy compaction~\cite{bas2008,bayer201216pt}
at a very low computational cost.
In particular,
the 8-point DCT was given a number of approximations:
the signed DCT~\cite{haweel2001new},
the level~1~approximation~\cite{lengwehasatit2004scalable},
the Bouguezel-Ahmad-Swamy~(BAS) transforms~\cite{bas2008,bas2009,bas2010,bas2011,bas2013},
the rounded DCT (RDCT)~\cite{cb2011},
the modified RDCT~\cite{bc2012},
the approximation in~\cite{multibeam2012},
and
the improved DCT approximation introduced in~\cite{Potluri2013}.
These methods
furnish meaningful DCT approximations
using only
addition and bit-shifting operations,
whilst offering sufficient computational
accuracy for image and video processing~\cite{mssp2014}.
Recently,
with the growing need for higher compression rates~\cite{Potluri2013},
the high efficiency video coding~(HEVC)
was proposed~\cite{hevc1,Sullivan2012}.
Unlike several image and video compression standards,
the HEVC employs
4-, 16-, and 32-point integer DCT-based transformations~\cite{hevc1,Potluri2013}.
In contrast to
the 8-point DCT case---where
dozens of approximations
are
available~\cite{bas2008,bouguezel2008multiplication,bas2011,bc2012,Potluri2013,cb2011}---the
16-point DCT approximation methods
are
much less explored in literature.
To the best of our knowledge,
only the following orthogonal methods are available:
the traditional Walsh--Hadamard transform (WHT)~\cite{yarlagadda},
the BAS-2010~\cite{bas2010} and BAS-2013~\cite{bas2013}
approximations,
and
the transformations
proposed in~\cite{mssp2014},~\cite{bayer201216pt}, and~\cite{Jridi2015}.
In this work,
we aim at proposing
a low-complexity orthogonal 16-point DCT approximation
capable of outperforming
all competing methods in terms of
arithmetic complexity
while
exhibiting very close coding performance
when compared to state-of-the-art methods.
For such,
we advance
a transformation matrix which combines
instantiations of a low-complexity 8-point approximation
according to a divide-and-conquer approach.
The remainder of this paper is organized as follows.
Section~\ref{sec:methodology} introduces the new DCT approximation,
a fast algorithm based on matrix factorization,
and
a comprehensive assessment
in terms of
computational complexity
and several performance metrics.
In Section~\ref{sec:imageandvideocompression},
the proposed approximation
is submitted to computational simulations
consisting
of
a JPEG-like scheme for still image compression
and
the embedding of the proposed approximation
into a HEVC standard reference software.
Section~\ref{section-hardware}
assesses the proposed transform
in a hardware realization based on
field-programmable gate array (FPGA).
Conclusions are drawn in Section~\ref{sec:conclusion}.
\section{16-point DCT approximation}
\label{sec:methodology}
\subsection{Definition}
\label{sec:definition}
It is well-known that several fast algorithm structures
compute the $N$-point DCT
through recursive
computations of the $\frac{N}{2}$-point DCT~\cite{mssp2014, britanak2007discrete, loeffler1991practical, rao1990discrete, Jridi2015}.
Following a similar approach to that adopted in~\cite{mssp2014,Jridi2015},
we propose a new 16-point approximate DCT
by
combining
two instantiations of the 8-point DCT approximation
introduced in~\cite{bc2012}
with
tailored
signal changes and permutations.
This procedure is illustrated by the signal-flow graph in Fig.~\ref{f:grafo}.
This particular 8-point DCT approximation,
presented as $\mathbf{T}_8$ in Fig.~\ref{f:grafo},
was selected because
(i)~it presents the lowest computational cost
among the approximations archived in literature
(zero multiplications,
14~additions,
and
zero bit-shifting operations)~\cite{bc2012}
and
(ii)~it offers good energy compaction properties~\cite{Tablada2015}.
\begin{figure}%
\centering
\scalebox{1}{\input{sfg.pstex_t}}
\caption{Signal-flow graph of the fast algorithm for $\mathbf{T}$. The input data $x_i$,
$i = 0,1,\ldots,15$, relate to the output data $X_j$,
$j = 0,1,\ldots,15$, according to $\mathbf{X}=\mathbf{T}\cdot\mathbf{x}$. Dashed arrows represent multiplication by $-1$.}
\label{f:grafo}
\end{figure}
As a result,
the proposed transformation matrix
is given by:
\begin{align*}
\mathbf{T}
=
\left[
\begin{rsmallmatrix}
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & 1\\
1 & 1 & 1 & 1 & 1 & 1 & 1 & 1 & -1 & -1 & -1 & -1 & -1 & -1 & -1 & -1\\
1 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 1\\
1 & 1 & 0 & 0 & 0 & 0 & -1 & -1 & 1 & 1 & 0 & 0 & 0 & 0 & -1 & -1\\
1 & 0 & 0 & -1 & -1 & 0 & 0 & 1 & 1 & 0 & 0 & -1 & -1 & 0 & 0 & 1\\
1 & 1 & -1 & -1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & 1 & 1 & -1 & -1\\
0 & 0 & -1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & -1 & 0 & 0\\
0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1 & 1 & -1 & -1 & 1\\
0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0\\
0 & -1 & 0 & 0 & 0 & 0 & 1 & 0 & 0 & 1 & 0 & 0 & 0 & 0 & -1 & 0\\
0 & 0 & 1 & 1 & -1 & -1 & 0 & 0 & 0 & 0 & 1 & 1 & -1 & -1 & 0 & 0\\
0 & -1 & 1 & 0 & 0 & 1 & -1 & 0 & 0 & -1 & 1 & 0 & 0 & 1 & -1 & 0\\
1 & -1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1\\
0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1 & -1 & 0 & 0 & 0\\
0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0 & -1 & 1 & 0 & 0 & 0 & 0
\end{rsmallmatrix}
\right]
.
\end{align*}
The entries
of the resulting
transformation matrix
are
defined over
$\{0,\pm1\}$,
therefore
it is completely multiplierless.
The above transformation can
be
orthogonalized
according to the procedure described
in~\cite{cintra2011integer,cb2011,cintra2014low}.
Thus
the associated orthogonal DCT approximation
is furnished
by
$\mathbf{\hat{C}} = \mathbf{S} \cdot \mathbf{T}$,
where
$
\mathbf{S} = \sqrt{(\mathbf{T}\cdot\mathbf{T}^\top)^{-1}}
$
and
the superscript~${}^\top$
denotes matrix transposition.
In particular,
we have:
\begin{align*}
\mathbf{S}
=
&
\frac{1}{4}
\cdot
\operatorname{diag}
\left(
1,
1,
2,
\sqrt{2},
\sqrt{2},
1,
2,
2,
1,
2,
2,
\sqrt{2},
\sqrt{2},
2,
2,
2
\right)
.
\end{align*}
In the context of image and video coding,
the diagonal matrix~$\mathbf{S}$
does not contribute to
the computational cost of~$\mathbf{\hat{C}}$.
This is because it can be merged into
the codec quantization steps~\cite{mssp2014,bayer201216pt,bas2011,cb2011}.
Therefore,
the actual computation cost of the
approximation
is
fully confined in the
low-complexity matrix~$\mathbf{T}$.
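The orthogonalization above is easy to check numerically. The sketch below is our own illustration (it assumes NumPy is available, and is not part of the original paper): it transcribes $\mathbf{T}$, builds $\mathbf{S}=\sqrt{(\mathbf{T}\cdot\mathbf{T}^\top)^{-1}}$, and verifies that $\mathbf{\hat{C}}=\mathbf{S}\cdot\mathbf{T}$ is orthogonal.

```python
import numpy as np

# Rows of the proposed low-complexity matrix T, transcribed from the text.
T = np.array([
    [ 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
    [ 1, 1, 1, 1, 1, 1, 1, 1,-1,-1,-1,-1,-1,-1,-1,-1],
    [ 1, 0, 0, 0, 0, 0, 0,-1,-1, 0, 0, 0, 0, 0, 0, 1],
    [ 1, 1, 0, 0, 0, 0,-1,-1, 1, 1, 0, 0, 0, 0,-1,-1],
    [ 1, 0, 0,-1,-1, 0, 0, 1, 1, 0, 0,-1,-1, 0, 0, 1],
    [ 1, 1,-1,-1,-1,-1, 1, 1,-1,-1, 1, 1, 1, 1,-1,-1],
    [ 0, 0,-1, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0,-1, 0, 0],
    [ 0, 0, 0, 0, 0, 0,-1, 1,-1, 1, 0, 0, 0, 0, 0, 0],
    [ 1,-1,-1, 1, 1,-1,-1, 1, 1,-1,-1, 1, 1,-1,-1, 1],
    [ 0, 0,-1, 1, 0, 0, 0, 0, 0, 0, 0, 0,-1, 1, 0, 0],
    [ 0,-1, 0, 0, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0,-1, 0],
    [ 0, 0, 1, 1,-1,-1, 0, 0, 0, 0, 1, 1,-1,-1, 0, 0],
    [ 0,-1, 1, 0, 0, 1,-1, 0, 0,-1, 1, 0, 0, 1,-1, 0],
    [ 1,-1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1,-1],
    [ 0, 0, 0,-1, 1, 0, 0, 0, 0, 0, 0, 1,-1, 0, 0, 0],
    [ 0, 0, 0, 0,-1, 1, 0, 0, 0, 0,-1, 1, 0, 0, 0, 0],
], dtype=float)

# Orthogonalization: S = sqrt((T T^T)^{-1}); T T^T must be diagonal for this.
G = T @ T.T
S = np.diag(1.0 / np.sqrt(np.diag(G)))
C_hat = S @ T  # orthogonal DCT approximation
```

Since $\mathbf{T}\cdot\mathbf{T}^\top$ is diagonal, $\mathbf{S}$ is diagonal as well, which is precisely what allows it to be merged into the quantization step.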
\subsection{Fast algorithm and computational complexity}
\label{sec:evaluation}
The transformation $\mathbf{T}$ requires 112~additions,
if computed directly.
However,
it can be given the following
sparse matrix factorization:
\begin{align*}
\mathbf{T}
=
\mathbf{P}_2
\cdot
\mathbf{M}_4
\cdot
\mathbf{M}_3
\cdot
\mathbf{M}_2
\cdot
\mathbf{P}_1
\cdot
\mathbf{M}_1
,
\end{align*}
where
\begin{align*}
\mathbf{M}_1 &=
\left[
\begin{rsmallmatrix}\\
\mathbf{I}_8 & \overline{\mathbf{I}}_8\\
\overline{\mathbf{I}}_8 & -\mathbf{I}_8\\
\end{rsmallmatrix}
\right]
,
\\
\mathbf{M}_2 &= \operatorname{diag}\left(
\left[
\begin{rsmallmatrix}\\
\mathbf{I}_4 & \overline{\mathbf{I}}_4\\
\overline{\mathbf{I}}_4 & -\mathbf{I}_4\\
\end{rsmallmatrix} \right],
\left[
\begin{rsmallmatrix}\\
\mathbf{I}_4 & \overline{\mathbf{I}}_4\\
\overline{\mathbf{I}}_4 & -\mathbf{I}_4\\
\end{rsmallmatrix} \right]
\right)
,
\\
\mathbf{M}_3 &= \operatorname{diag}\left(
\left[
\begin{rsmallmatrix}\\
\mathbf{I}_2 & \overline{\mathbf{I}}_2\\
\overline{\mathbf{I}}_2 & -\mathbf{I}_2\\
\end{rsmallmatrix} \right],
-\mathbf{I}_4,
\left[
\begin{rsmallmatrix}\\
\mathbf{I}_2 & \overline{\mathbf{I}}_2\\
\overline{\mathbf{I}}_2 & -\mathbf{I}_2\\
\end{rsmallmatrix} \right]
, -\mathbf{I}_4
\right)
,
\\
\mathbf{M}_4 &= \operatorname{diag}\left(
\left[
\begin{rsmallmatrix}\\
1 & 1 & 0\\
1 & -1 & 0\\
0 & 0 & -1\\
\end{rsmallmatrix}
\right],
\mathbf{I}_4,
\left[
\begin{rsmallmatrix}\\
-1 & 0 & 0\\
0 & 1 & 1\\
0 & 1 & -1\\
\end{rsmallmatrix}
\right],
-\mathbf{I}_4,
\left[
\begin{rsmallmatrix}\\
1 & 0\\
0 & -1\\
\end{rsmallmatrix}
\right]
\right)
,
\end{align*}
matrices
$\mathbf{P}_1$ and $\mathbf{P}_2$ correspond
to the permutations
(1)(2)(3)(4)(5)(6)(7)(8)(9)(10 12 16)(11 13 15)(14)
and
(1)(2 9)(3 8 16 15 5 4 12 11 7 6 10 14 13)
in cyclic notation~\cite{Herstein1975},
respectively;
and
$\mathbf{I}_N$ and $\overline{\mathbf{I}}_N$
denote the identity and counter-identity matrices
of order $N$,
respectively.
The above factorization
reduces the computational cost
of~$\mathbf{T}$
to only 44~additions.
Fig.~\ref{f:grafo}
depicts the signal-flow graph
of the fast algorithm for $\mathbf{T}$;
the blocks labeled as
$\mathbf{T}_8$
denote the selected 8-point approximate DCT~\cite{bc2012}.
A computational complexity comparison
of the considered
orthogonal 16-point DCT approximations is summarized in
Table~\ref{tab:complexity}.
For contrast,
we also included
the computational cost of
the Chen DCT fast algorithm~\cite{Chen1977}.
The proposed approximation requires
neither multiplication, nor bit-shifting operations.
Furthermore,
when compared to
the methods in~\cite{mssp2014,Jridi2015},
the WHT or BAS-2013,
and the transformation in~\cite{bayer201216pt},
the proposed approximation
requires
26.67\%,
31.25\%,
and
38.89\%
less arithmetic operations,
respectively.
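The quoted percentages follow directly from the total operation counts in Table~\ref{tab:complexity}; a quick check (our own illustration, plain Python):

```python
def saving(new_cost, old_cost):
    """Relative reduction in total operation count, in percent."""
    return 100.0 * (old_cost - new_cost) / old_cost

# 44 additions (proposed) vs. 60 (transforms in [mssp2014]/[Jridi2015]),
# 64 (WHT, BAS-2013), and 72 (transform in [bayer201216pt]).
print(round(saving(44, 60), 2))  # 26.67
print(round(saving(44, 64), 2))  # 31.25
print(round(saving(44, 72), 2))  # 38.89
```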
\begin{table}%
\centering
\caption{Comparison of computational complexities}
\label{tab:complexity} %
\begin{tabular}{l|c|c|c|c} %
\toprule
Transform & Mult & Add & Shifts & Total \\
\midrule
Chen DCT & 44 & 74 & 0 & 118 \\
WHT & 0 & 64 & 0 & 64 \\
BAS-2010 & 0 & 64 & 8 &72\\
BAS-2013 & 0 & 64 & 0 &64\\
Transform in~\cite{bayer201216pt} & 0 & 72 & 0 & 72 \\
Transform in~\cite{mssp2014} & {0} & {60} & {0} & {60}\\
Transform in~\cite{Jridi2015} &0 & 60 & 0 & 60\\
\textbf{Proposed approx.} & \textbf{0} & \textbf{44} & \textbf{0} &\textbf{44}\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Performance assessment}
We separate similarity and coding performance measures
to assess the
proposed
transformation.
For similarity measures,
we considered
the DCT distortion ($d_2$)~\cite{fong2012},
the total error energy~($\epsilon$)~\cite{cb2011},
and
the mean square error (MSE)~\cite{britanak2007discrete,rao1990discrete}.
For coding performance evaluation,
we selected
the transform coding gain~($C_g$)~\cite{britanak2007discrete}
and
the
transform efficiency~($\eta$)~\cite{britanak2007discrete}.
Table~\ref{tab:performances}
compares
the performance measure values for
the discussed transforms.
The proposed approximation
furnishes performance measures
which are comparable to the average results
of the state-of-the-art approximations.
At the same time,
its computational cost is roughly 30\%~smaller
than the lowest complexity method in literature~\cite{mssp2014,Jridi2015}.
\begin{table}%
\centering
\caption{Coding and similarity performance assessment}
\label{tab:performances}
\begin{tabular}{l|p{.6cm}|p{.8cm}|p{.7cm}|p{.6cm}|p{.6cm}}
\toprule
Transform& $d_2$ & $\epsilon$ & MSE & $C_g$ & $\eta$ \\
\midrule
Chen DCT & 0.000 & 0.000 & 0.000 & 9.455& 88.452 \\
WHT &0.878& 92.563 & 0.428 & 8.194 & 70.646 \\
BAS-2010&0.667 & 64.749 & 0.187 & 8.521 & 73.634 \\
BAS-2013&0.511 & 54.621 & 0.132 & 8.194 & 70.646 \\
Transform in~\cite{bayer201216pt}&0.152 & 8.081 & 0.046 & 7.840 & 65.279 \\
Transform in~\cite{mssp2014}&0.340& 30.323& 0.064 & 8.295& 70.831 \\
Transform in~\cite{Jridi2015} &0.256 & 14.740 & 0.051 & 8.428 & 72.230 \\
\textbf{Proposed approx.} & \textbf{0.493} & \textbf{41.000} & \textbf{0.095}&\textbf{7.857} &\textbf{67.608}\\
\bottomrule
\end{tabular}
\end{table}
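For reference, the two coding figures of merit in the exact DCT row of Table~\ref{tab:performances} can be reproduced numerically. The sketch below is our own illustration; it assumes the usual unit-variance first-order Markov source with correlation coefficient $\rho=0.95$, the standard choice in the cited literature, which is not restated in the text.

```python
import numpy as np

N, rho = 16, 0.95
# Covariance of a unit-variance first-order Markov source: R_ij = rho^|i-j|.
R = rho ** np.abs(np.subtract.outer(np.arange(N), np.arange(N)))

# Orthonormal 16-point DCT-II matrix (the exact DCT).
k, n = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
C = np.sqrt(2.0 / N) * np.cos((2 * n + 1) * k * np.pi / (2 * N))
C[0, :] /= np.sqrt(2.0)

V = C @ R @ C.T        # covariance in the transform domain
var = np.diag(V)

# Transform coding gain (dB): arithmetic over geometric mean of the variances.
Cg = 10.0 * np.log10(var.mean() / np.exp(np.log(var).mean()))
# Transform efficiency (%): share of |V| concentrated on the diagonal.
eta = 100.0 * np.abs(var).sum() / np.abs(V).sum()
```

Replacing $\mathbf{C}$ by any of the orthogonal approximations (with its diagonal factor included) yields the corresponding rows of the table.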
\section{Image and video coding}
\label{sec:imageandvideocompression}
In the following subsections,
we describe two computational experiments
in the context of image and video encoding.
Our goal is to demonstrate in real-life scenarios
that the introduced approximation
is capable of performing
very closely to state-of-the-art
approximations
at a much lower computational cost.
For the still image
experiment,
we employ a fixed-rate encoding scheme
which avoids quantization.
This is done to isolate the role of the transform
in order to emphasize the good properties of energy compaction
of the approximate transforms.
On the other hand,
for the video experiment,
we include
the variable-rate encoding
equipped with the quantization step
as required by the actual HEVC standard.
Thus,
we aim at providing two comprehensive experiments
to highlight the capabilities of the introduced
approximation.
\subsection{Image compression experiments}
\label{sec:imagecompression}
We adopted a JPEG-like procedure
as
detailed in
the methodology presented in~\cite{haweel2001new}
and
reproduced in~\cite{bas2008,bas2010,bas2011,Jridi2015,mssp2014}.
A total of 45 512$\times$512 8-bit grayscale images obtained from a standard public image bank~\cite{uscsipi} was considered.
This set of images was selected to be representative
of the imagery commonly found in real-life applications.
Color images could be treated similarly by processing each channel separately.
Each given input image $\mathbf{A}$
was split into 1024~16$\times$16
disjoint blocks
($\mathbf{A}_k$, $k=1,2,\ldots, 1024$)
which
were submitted to the forward bidimensional (\mbox{2-D})
transformation given by:
$
\mathbf{B}_k
=
\mathbf{\tilde{C}}
\cdot
\mathbf{A}_k
\cdot
\mathbf{\tilde{C}^\top}
$,
where $\mathbf{\tilde{C}}$ is a selected
16-point transformation.
Following the zig-zag sequence~\cite{pao1998},
only the first $1\leq r \leq 150$
elements
of
$\mathbf{B}_k$
were retained,
while the remaining elements were zeroed,
resulting
in~$\tilde{\mathbf{B}}_k$.
The inverse \mbox{2-D} transformation is then applied
according to:
$
\mathbf{\tilde{A}}_k
=
\mathbf{\tilde{C}^\top}
\cdot
\tilde{\mathbf{B}}_k
\cdot
\mathbf{\tilde{C}}
$.
The resulting matrix $\mathbf{\tilde{A}}_k$
is the lossy
reconstruction of $\mathbf{A}_k$.
The correct rearrangement of all blocks
results in the reconstructed image $\mathbf{\tilde A}$.
This procedure was performed for each of the 45~images
in the selected data set.
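To make the pipeline above concrete, here is a minimal Python sketch of the blockwise transform--truncate--invert procedure. Since the entries of the proposed approximation $\mathbf{\tilde{C}}$ are not repeated in this section, the exact orthonormal 16-point DCT matrix is used as a stand-in; any orthogonal 16-point transform can be dropped in its place.

```python
import numpy as np

def dct_matrix(n=16):
    """Orthonormal DCT-II matrix (stand-in for the approximation C-tilde)."""
    k = np.arange(n).reshape(-1, 1)
    m = np.arange(n).reshape(1, -1)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    C[0, :] /= np.sqrt(2.0)
    return C

def zigzag_indices(n=16):
    """Zig-zag scan order over an n x n coefficient block."""
    order = []
    for s in range(2 * n - 1):
        diag = [(i, s - i) for i in range(max(0, s - n + 1), min(s, n - 1) + 1)]
        if s % 2 == 0:
            diag = diag[::-1]  # even diagonals run bottom-left to top-right
        order.extend(diag)
    return order

def compress(A, C, r):
    """Blockwise forward transform, keep first r zig-zag coefficients, invert."""
    n = C.shape[0]
    mask = np.zeros((n, n))
    for (i, j) in zigzag_indices(n)[:r]:
        mask[i, j] = 1.0
    out = np.empty(A.shape)
    for bi in range(0, A.shape[0], n):
        for bj in range(0, A.shape[1], n):
            Ak = A[bi:bi + n, bj:bj + n]
            Bk = C @ Ak @ C.T                                  # forward 2-D transform
            out[bi:bi + n, bj:bj + n] = C.T @ (mask * Bk) @ C  # truncate + invert
    return out

def psnr(A, B, peak=255.0):
    mse = np.mean((A - B) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```

For an orthogonal transform, retaining all $n^2$ coefficients reconstructs each block exactly; decreasing $r$ trades reconstruction quality for fewer retained coefficients.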
To assess the approximation
in a fair manner,
we consider
the ratio
between
performance measures
and
arithmetic cost.
Such a ratio
furnishes
the performance gain
per unit
of arithmetic computation.
Fig.~\ref{fig:averagemeasures} shows
the
average PSNR
and
structural similarity index~(SSIM)~\cite{Wang2004} measurements
per unit of additive cost.
The
proposed approximation
outperforms
all competing DCT approximations
for
any value of $r$ in both metrics.
The introduced 16-point transform
presents the best cost-benefit ratio
among all competing methods.
\begin{figure}%
\centering
\subfigure[PSNR]
{\includegraphics[width=0.46\textwidth]{psnr_and_additions}
\label{f:psnr}}
\\
\subfigure[SSIM]
{\includegraphics[width=0.46\textwidth]{mssim_and_additions}
\label{f:ssim}}
\caption{
Average
\subref{f:psnr} PSNR and
\subref{f:ssim} SSIM measurements
per unit of additive cost
at
various compression ratios.}
\label{fig:averagemeasures}
\end{figure}
Fig.~\ref{f:alllena}
displays
a qualitative and quantitative comparison
considering the standard Lena image.
The PSNR measurements for
the Lena image
were
only 4.75\% and 5.69\% below
the results furnished by the transformations
in~\cite{mssp2014,Jridi2015},
respectively.
Similarly,
considering the SSIM,
the proposed transform performed
only 0.62\%, 6.42\%, and 7.43\% below
the performance offered by the transformations
in~\cite{bayer201216pt}, \cite{mssp2014}, and~\cite{Jridi2015}, respectively.
On the other hand,
the proposed approximate DCT
requires
38.8\%
and
26.6\%
less arithmetic operations
when compared to~\cite{bayer201216pt}
and~\cite{mssp2014,Jridi2015},
respectively.
The proposed approximation
outperformed
the WHT, BAS-2010, and BAS-2013 according to both figures of merit.
Indeed,
the small losses in PSNR and SSIM
compared to
the exact DCT
are not sufficient
to cause
significant image degradation
as perceived by
the human visual system,
as shown in
Fig.~\ref{f:alllena}.
\begin{figure}%
\centering
\subfigure[Original image]
{\includegraphics[width=0.25\textwidth]{lenna}
\label{f:lena}
}\
\subfigure[$\text{PSNR}=28.55~\mathrm{dB}$, $\text{SSIM}=0.7915$]
{\includegraphics[width=0.25\textwidth]{DCT_16_lenna}
\label{f:lenadct}
}\
\subfigure[$\text{PSNR}=21.20~\mathrm{dB}$, $\text{SSIM}=0.2076$]
{\includegraphics[width=0.25\textwidth]{WHT_16_lenna}
\label{f:lenawht}
}\\
\subfigure[$\text{PSNR}=25.27~\mathrm{dB}$, $\text{SSIM}=0.6735$]
{\includegraphics[width=0.25\textwidth]{BAS2010_16_lenna}
\label{f:lenabas10}}\
\subfigure[$\text{PSNR}=25.79~\mathrm{dB}$, $\text{SSIM}=0.6921$]
{\includegraphics[width=0.25\textwidth]{BAS2013_16_lenna}
\label{f:lenabas13}}\
\subfigure[$\text{PSNR}=25.75~\mathrm{dB}$, $\text{SSIM}=0.7067$]
{\includegraphics[width=0.25\textwidth]{BCEM_16_lenna}
\label{f:lenabcem}}\\
\subfigure[$\text{PSNR}=27.13~\mathrm{dB}$, $\text{SSIM}=0.7505$]
{\includegraphics[width=0.25\textwidth]{SBCKMK_16_lenna}
\label{f:lenasbckmk}}\
\subfigure[$\text{PSNR}=27.40~\mathrm{dB}$, $\text{SSIM}=0.7587$]
{\includegraphics[width=0.25\textwidth]{Jridi_16_}
\label{f:lenajridi}}\
\subfigure[$\text{PSNR}=25.84~\mathrm{dB}$, $\text{SSIM}=0.7023$]
{\includegraphics[width=0.25\textwidth]{New_16_lenna}
\label{f:lenanew}}
\caption{%
Original
(a)~Lena image and compressed versions with $r=16$
according to
(b)~the DCT,
(c)~WHT,
(d)~BAS-2010,
(e)~BAS-2013,
(f)~transform in \cite{bayer201216pt},
(g)~transform in \cite{mssp2014},
(h)~transform in \cite{Jridi2015},
and
(i)~proposed 16-point approximation.}
\label{f:alllena}
\end{figure}
\subsection{Video compression experiments}
\label{sec:videocompression}
The proposed approximation
was
embedded into the HM-16.3 HEVC reference software~\cite{refsoft},
i.e.,
it
is
considered as a replacement
for the original integer
transform
in the HEVC standard.
Because
the HEVC standard employs 4-, \mbox{8-,} \mbox{16-,} and 32-point
transformations,
we performed simulations in two scenarios:
(i)~substitution of the 16-point transformation only
and
(ii)~replacement of the 8- and 16-point transformations.
We adopted
the approximation described in~\cite{bc2012}
and the proposed approximation
for the 8- and 16-point substitutions,
respectively.
The original 8- and 16-point transforms
employed in the HEVC standard
require 22~multiplications and 28~additions;
and
86~multiplications and 100~additions,
respectively~\cite{Budagavi2012}.
In contrast,
the selected DCT approximations
are
multiplierless
and
require
50\%
and
56\%
fewer additions,
respectively.
The diagonal matrices associated with the 8- and 16-point approximations are
fully embedded into the quantization step
according to judicious scaling operations of the standard HEVC
quantization tables~\cite{Budagavi2012}.
In both scenarios,
we have considered 11~CIF videos of 300~frames
obtained from a public video database~\cite{videos}.
The default HEVC coding
configuration for \texttt{Main} profile
was
adopted,
which includes both 8-bit depth intra and inter-frame coding modes.
We varied the quantization parameter (QP)
from 5 to 50 in steps of~5.
We adopted the PSNR as figure of merit,
because it is readily available in the reference software.
Measurements were taken for each color channel and frame.
The overall video PSNR value was computed according to~\cite{Ohm2012}.
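As an illustration of the per-channel measurement and combination step, the sketch below computes a weighted YUV PSNR. The 6:1:1 luma/chroma weighting is an assumption on our part, patterned on common HEVC test practice; the actual weighting should be taken from~\cite{Ohm2012}.

```python
import numpy as np

def channel_psnr(ref, dec, peak=255.0):
    """PSNR of a single color channel (arrays of identical shape)."""
    mse = np.mean((ref.astype(float) - dec.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def video_psnr(ref_frames, dec_frames, weights=(6.0, 1.0, 1.0)):
    """Weighted YUV PSNR, averaged over frames.

    ref_frames/dec_frames: sequences of (Y, U, V) channel arrays.
    The 6:1:1 weighting is an assumption; swap in the convention of the
    reference being followed if it differs."""
    w = np.asarray(weights) / np.sum(weights)
    per_frame = []
    for ref, dec in zip(ref_frames, dec_frames):
        p = [channel_psnr(r, d) for r, d in zip(ref, dec)]
        per_frame.append(float(np.dot(w, p)))
    return float(np.mean(per_frame))
```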
Average PSNR measurements
are shown in Fig.~\ref{fig:graphicshevc}.
The proposed approximation
is multiplierless
and
achieved
66\% and 53{.}12\% savings in the number of additions
considering
Scenarios (i) and (ii), respectively.
At the same time,
the resulting
image quality measures
showed
average errors
less than 0{.}28\% and 0{.}71\%,
for
Scenarios~(i) and~(ii), respectively.
Fig.~\ref{fig:hevcframes}
displays
the first frame of the Foreman encoded video
according to
the unmodified codec
and the modified codec in Scenarios~(i)~and~(ii).
The approximate transform
produces
images that are essentially
identical to the ones generated
by the unmodified codec
at a much lower computational complexity.
\begin{figure}%
\centering
{\includegraphics[width=0.42\textwidth]{psnr_hevc}
\label{f:psnrhevc}}
\vspace{-0.3cm}
\caption{Performance of the proposed DCT approximation in HEVC standard for several QP values.}
\label{fig:graphicshevc}
\end{figure}
\begin{figure}%
\centering
\subfigure[HEVC standard]
{\includegraphics[width=0.25\textwidth]{framehevc}
\label{f:foremanhevc}}
\subfigure[Scenario (i)]
{\includegraphics[width=0.25\textwidth]{frame44adds}
\label{f:foreman44adds}}
\subfigure[Scenario (ii)]
{\includegraphics[width=0.25\textwidth]{frame14e44adds}
\label{f:foreman14and44adds}}
\caption{First frame from `Foreman' video in the HEVC experiment with $\text{QP}=35$.
}
\label{fig:hevcframes}
\end{figure}
\section{Hardware implementation}
\label{section-hardware}
In order to evaluate
the hardware resource consumption of
the proposed approximation,
it was modeled and tested
in Matlab Simulink and
then it was physically realized on
an FPGA.
The employed FPGA was a Xilinx Virtex-6 XC6VLX240T
installed on a Xilinx ML605 prototyping board.
The FPGA realization was tested with 10,000 random
16-point input test vectors
using hardware co-simulation.
Test vectors were generated from within the Matlab environment
and routed to the physical FPGA device
using JTAG based hardware co-simulation.
Then the data measured from the FPGA was routed back to Matlab
memory space.
The
associated
FPGA implementation
was evaluated
for hardware complexity and real-time performance using metrics
such as
configurable logic blocks (CLB) and flip-flop (FF) count,
critical path delay ($T_\text{cpd}$) in~ns,
and maximum operating frequency ($F_\text{max}$) in~MHz.
Values were obtained from the Xilinx FPGA synthesis
and place-route tools by
accessing the \texttt{xflow.results} report file.
In addition,
the
dynamic power
($D_p$) in $\mathrm{mW}/\mathrm{GHz}$
and
static power consumption ($Q_p$) in $\mathrm{mW}$
were
estimated using the Xilinx XPower Analyzer.
Using the CLB count as a metric
to estimate the circuit area~($A$)
and
deriving time~($T$) from $T_\text{cpd}$,
we also report
area-time complexity~($AT$)
and
area-time-squared complexity~($AT^2$).
Because the transformation in~\cite{Jridi2015}
possesses a very low arithmetic complexity (cf.~Table~\ref{tab:complexity})
and presents good performance (cf.~Table~\ref{tab:performances}),
it was chosen for a direct comparison with the proposed approximation.
The obtained results are displayed in Table~\ref{FPGAresults}.
The proposed approximation presents
an improvement of 41.28\% and 43.26\%
in area-time and area-time-square measures,
respectively, when compared to~\cite{Jridi2015}.
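Since $AT$ and $AT^2$ follow directly from the CLB and $T_\text{cpd}$ columns, the quoted percentages can be sanity-checked in a few lines (using the rounded entries of Table~\ref{FPGAresults}):

```python
# Area-time metrics from the table entries: A = CLB count, T = T_cpd (ns).
def area_time(clb, t_ns):
    return round(clb * t_ns), round(clb * t_ns ** 2)  # (AT, AT^2)

at_j, at2_j = area_time(499, 3.0)  # transform in [Jridi2015]
at_p, at2_p = area_time(303, 2.9)  # proposed approximation

imp_at = 100.0 * (1.0 - at_p / at_j)
imp_at2 = 100.0 * (1.0 - at2_p / at2_j)
print(at_j, at2_j, at_p, at2_p)              # 1497 4491 879 2548
print(round(imp_at, 2), round(imp_at2, 2))   # 41.28 43.26
```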
\begin{table}
\centering
\caption{Hardware resource and power consumption using Xilinx Virtex-6 XC6VLX240T 1FFG1156 device}
\label{FPGAresults}
\begin{tabular}{lc@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c@{\,\,}c}
\toprule
Method &
CLB &
FF &
$T_\text{cpd}$%
&
$F_{\text{max}}$%
&
$D_p$%
&
$Q_p$%
&
$AT$ &
$AT^2$\\
\midrule
Transform in~\cite{Jridi2015} & 499 & 1588 & 3.0 & 333.33 & 7.4 & 3.500 & 1497 & 4491\\
Proposed approx. & 303 & 936 & 2.9 & 344.83 & 7.9 & 3.509 & 879 & 2548\\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion}\label{sec:conclusion}
This paper introduced
an orthogonal 16-point DCT approximation
which requires only 44 additions for its computation.
To the best of our knowledge,
the proposed transformation
has the \emph{lowest} computational cost
among the meaningful 16-point DCT approximations
archived in the literature.
The introduced method
requires
from 26.67\% to 38.89\%
fewer arithmetic operations
than the best competitors.
In the context of image compression,
the proposed tool
attained
the best performance vs computational cost
ratio
for both PSNR and SSIM metrics.
When embedded into the H.265/HEVC
standard,
resulting video frames
exhibited almost imperceptible degradation,
while demanding no multiplications
and 56~fewer additions
than the standard unmodified codec.
The hardware realization of the proposed transform presented
an improvement of more than 30\% in area-time and area-time-square measures
when compared to the lowest complexity competitor~\cite{Jridi2015}.
Potentially,
the present approach
can be extended to derive 32- and 64-point approximations
by means of the scaled approach
introduced in~\cite{Jridi2015}.
\section*{Acknowledgments}
The authors acknowledge
CAPES, CNPq, FACEPE, and FAPERGS
for their partial support.
{\small
\bibliographystyle{IEEEtran}
\section*{PRELIMINARIES AND RESULTS}
The idea of category base is a generalization of both measure and topology and its main objective is to present measure and Baire category(topology) and also some other aspects of point set classification within a common framework. It was introduced by J.C.Morgan II in the mid seventies of the last century and has developed since then through a series of papers $[1],[2],[3],[4],[5],$ etc.
To start with, we recall some basic definitions and theorems which may be found in the above references and also in the monograph [6].
\begin{definition}
A category base is a pair (X,$\mathcal{C}$) where X is a non-empty set and $\mathcal{C}$ is a family of subsets of X, called regions satisfying the following set of axioms:
\begin{enumerate}
\item Every point of X belongs to some region; i,e., X=$\cup$$\mathcal{C}$.
\item Let A be a region and $\mathcal{D}$ be a non-empty family of disjoint regions having cardinality less than the cardinality of $\mathcal{C}$.\\
i) If A$\cap$($\cup$$\mathcal{D}$) contains a region, then there is a region D$\in$$\mathcal{D}$ such that A$\cap$D contains a region.\\
ii) If A$\cap$($\cup$$\mathcal{D}$) contains no region, then there is a region B$\subseteq$A that is disjoint from every region in $\mathcal{D}$.
\end{enumerate}
\end{definition}
Several examples of category bases can be found in [6].
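Although a systematic treatment of examples is given in [6], it may help to keep in mind the following two guiding examples (our paraphrase; the reader should consult [6] for precise statements and proofs).

```latex
% Two guiding examples (paraphrased from the standard treatment in [6]):
\begin{itemize}
  \item \emph{Topological example:} let X be a topological space and let
    $\mathcal{C}$ be the family of all non-empty open subsets of X. Then the
    singular sets are the nowhere dense sets, the meager sets are the sets of
    first category, and the Baire sets are the sets possessing the Baire
    property.
  \item \emph{Measure-theoretic example:} let X be the real line and let
    $\mathcal{C}$ be the family of all closed sets of positive Lebesgue
    measure. Then the meager sets are the Lebesgue null sets and the Baire
    sets are the Lebesgue measurable sets.
\end{itemize}
```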
\begin{definition}
In a category base (X,$\mathcal{C}$), a set is called singular if every region contains a subregion which is disjoint from the set. Any set which can be expressed as a countable union of singular sets is called meager. Otherwise, it is called abundant.
\end{definition}
A category base in which every countable set is meager is called a point-meager base, and a category base whose regions are all abundant sets is called a Baire base.
\begin{definition}
In a category base (X,$\mathcal{C}$), a set S is called Baire if in every region, there is a subregion in which either S or its complement X$-$S is meager.
\end{definition}
\begin{theorem}
The intersection of two regions either contains a region or is a singular set.
\end{theorem}
\begin{theorem}(The Fundamental Theorem)
Every abundant set in a category base is abundant everywhere in some region. This means that for any abundant set A, there exists a region C in every subregion D of which A is abundant.\\
\end{theorem}
In [6], one can find several examples corresponding to Definition 2 and Definition 3.\\
From Definition 3, it follows that a set is non-Baire if there is a region (which is not necessarily unique) in every subregion of which both the set and its complement are abundant. In this paper, we identify this special region as a test region corresponding to the non-Baire set.\\
Below we establish two alternative characterizations of non-Baire sets in a category base.
\begin{definition}
A set A in a category base (X,$\mathcal{C}$) is said to have the ($\star$)-property if there exists a region C such that for every subregion D of C and every set E which essentially contains D, A$\cap$E is non-Baire.\\
(A set E contains D essentially means that D$-$E is meager [6])
\end{definition}
\begin{theorem}
In any category base, a set is non-Baire if and only if it satisfies the ($\star$)-property.
\begin{proof}
Suppose A is a non-Baire set in a category base (X,$\mathcal{C}$) and let C be a test region for A. Let D be any subregion of C. It suffices to show that D can be considered as a test region for A$\cap$P, where P is a subset of X which essentially contains D. To prove our contention, we choose any subregion E of D and note that in E both A$\cap$P and its complement are abundant. This is so because in E both E$\cap$A$\cap$P, which can be expressed as E$\cap$A$-$A$\cap$(E$-$P), and its complement, which is (E$-$A)$\cup$(A$\cap$(E$-$P)), are abundant; here E, being a subregion of D, is also a subregion of C, and E$-$P$\subseteq$D$-$P is meager.\\
Conversely, if A is a Baire set then every region C contains a subregion D in which either A or its complement is meager. If D$\cap$A is meager, choose E=D$-$A; then E essentially contains D although A$\cap$E=$\phi$ is a Baire set. If D$-$A is meager, choose E=D$\cap$A; then E essentially contains D, but A$\cap$E=D$\cap$A differs from the region D by the meager set D$-$A and is hence Baire.\\
This proves the theorem.
\end{proof}
\end{theorem}
\begin{definition}
In a category base (X,$\mathcal{C}$), a set A is called completely non-Baire in a region D if for every Baire set B such that B$\cap$D is abundant, both A$\cap$B and (D$-$A)$\cap$B are abundant.
\end{definition}
The above definition is somewhat analogous to the notion of a completely I-nonmeasurable set, given in [8].
\begin{theorem}
In any Baire base, a set is non-Baire if and only if it is completely non-Baire in some region.
\begin{proof}
Let A be a non-Baire set in a Baire base (X,$\mathcal{C}$). Then according to the definition, there is a region D in $\mathcal{C}$ in every subregion of which both A and its complement (X$-$A) are abundant. Let B be any Baire set such that B$\cap$D is abundant. Since by Theorem 4, every region is here a Baire set, we may assume that B$\subseteq$D. From the Fundamental theorem, there is a subregion C of D which is essentially contained in B. Consequently, both B$\cap$A and B$\cap$(D$-$A) are abundant.\\
Conversely, suppose A is completely non-Baire in a region D. Since by hypothesis every region is an abundant Baire set, both A and its complement are abundant in every subregion of D. Hence A is non-Baire.
\end{proof}
\end{theorem}
Next we introduce
\begin{definition}
A family $\mathcal{F}$ of non-Baire sets is called a uniform non-Baire family in a category base (X,$\mathcal{C}$) if there is a region in $\mathcal{C}$ which can act as a common test region for all members of $\mathcal{F}$.
\end{definition}
With reference to this definition, we prove the following decomposition theorem.
\begin{theorem}
If (X,$\mathcal{C}$) is a point-meager, Baire base where $\mathcal{C}$ satisfies CCC (countable chain condition) and every region has cardinality $\aleph_1$, there exists a uniform non-Baire family which induces a decomposition of X.
\begin{proof}
Let S be a non-Baire set in (X,$\mathcal{C}$) and T=X$-$S. The existence of S is ensured by Theorem 6, Ch 2, Sec II, [6]. Then there is a region D in every subregion of which both S and T are abundant. Now by Grzegorek's unification of Sierpinski's theorems [1] (see also Theorem 16, Ch 2, Sec II, [6]), both S and T can be decomposed into families \{$S_\alpha\}_{\alpha < \Omega}$ and \{$T_\alpha\}_{\alpha < \Omega}$ ($\Omega$ is the first ordinal of cardinality $\aleph_{1}$) of mutually disjoint sets such that each $S_\alpha$ (resp. $T_\alpha$) is abundant in every region in which S (resp. T) is abundant. Since $X-S\subseteq X-S_\alpha$ (resp. X$-T\subseteq X-T_\alpha$), every S$_\alpha$ and its complement (resp. every T$_\alpha$ and its complement) are abundant in every subregion of D, proving that the sets S$_\alpha$ (resp. T$_\alpha$) are non-Baire for all $\alpha < \Omega$ and that D can act as a common test region for the entire family \{$S_\alpha$, $T_\alpha$ : $\alpha < \Omega$\}. Moreover, this family constitutes a partition of X. Thus we obtain a uniform non-Baire family inducing a decomposition of X.
\end{proof}
\end{theorem}
A generalization of the classical Banach-Mazur game [7] was presented in [3] by Morgan through the introduction of the notion of an $\mathcal{M}$-family of sets, which is defined in the following manner:\\
\begin{definition}
A family $\mathcal{C}$ of subsets of a non-empty set X is called an $\mathcal{M}$-family if it satisfies the following set of axioms :
\begin{enumerate}
\item The intersection of any descending sequence of $\mathcal{C}$-sets is non-empty.
\item Suppose x is a point in X, then\\
i) there is a $\mathcal{C}$-set containing x, i.e., X=$\cup$$\mathcal{C}$.\\
ii) for each $\mathcal{C}$-set A, there is a $\mathcal{C}$-set B$\subseteq$A such that x$\notin$B.
\item Let A be a $\mathcal{C}$-set and $\mathcal{D}$ be a non-empty family of disjoint $\mathcal{C}$-sets having cardinality less than the cardinality of $\mathcal{C}$.\\
i) If A$\cap$($\cup$$\mathcal{D}$) contains a $\mathcal{C}$-set, then there is a set D$\in$$\mathcal{D}$ such that A$\cap$D contains a $\mathcal{C}$-set.\\
ii) If A$\cap$($\cup$$\mathcal{D}$) contains no $\mathcal{C}$-set, then there is a $\mathcal{C}$-set B$\subseteq$A which is disjoint from every set in $\mathcal{D}$.
\end{enumerate}
\end{definition}
Evidently, any $\mathcal{M}$-family is a special case of a category base, so the definitions of meager, abundant and Baire sets are as usual. Moreover, any $\mathcal{M}$-family is point-meager and every member of it is an abundant set, which follows directly from axioms (1) and (2) in the above definition.\\
If $\mathcal{C}$ is a non-empty family of subsets of a non-empty set X and S is a subset of X, then as described in [3], the generalized Banach-Mazur game $\Gamma$(S, $\mathcal{C}$) is played as follows: two players I and II alternately choose sets from $\mathcal{C}$ to define a descending sequence of sets, player I selecting the sets in the sequence with odd indices and player II selecting the sets with even indices. If the intersection of the constructed sequence has at least one point (which obviously exists if $\mathcal{C}$ is an $\mathcal{M}$-family) in S, then player I wins; otherwise player II wins. In the game $\Gamma$(S, $\mathcal{C}$), let us denote player I and player II by the symbols $\langle$S$\rangle$ and $\langle$X$-$S$\rangle$ respectively.\\
Now suppose C$\in$$\mathcal{C}$ such that S$\cap$C $\neq$ $\phi$ $\neq$ (X$-$S)$\cap$C. Then we can describe the game $\Gamma$(S,$\mathcal{C}_{|C}$) in a manner similar as above where $\mathcal{C}_{|C}$=\{E$\in$$\mathcal{C}$ : E$\subseteq$C\}. As a consequence of the Theorem 2 [3], Theorem 11 may be interpreted in terms of the generalized Banach-Mazur game in the following manner :
\begin{theorem}
Let $\mathcal{C}$ be an $\mathcal{M}$-family satisfying CCC (countable chain condition) and every $\mathcal{C}$-set has cardinality $\aleph_{1}$. Then there exist a $\mathcal{C}$-set D and a partition X=$\bigcup\limits_{\alpha < \Omega} {X_\alpha}$ such that under the condition that there exists a sequence $\{h_n\}_{n=1}^{\infty} (h_n :\mathcal{C}\mapsto\mathcal{C})$ satisfying
\begin{enumerate}
\item for every $\mathcal{C}$-set A, h$_n$(A)$\subseteq$A ;
\item for every sequence $\{A_n\}_{n=1}^{\infty}$ of $\mathcal{C}$-sets, if $\{h_n(A_n)\}_{n=1}^{\infty}$ is descending, then $\bigcap\limits_{n=1}^{\infty}h_n(A_n)$ contains only one point;
\end{enumerate}
no player $\langle X_\alpha \rangle$ ($\alpha < \Omega$) can have a winning strategy in the game $\Gamma(X_\alpha,\mathcal{C}_{|D}$).
\end{theorem}
\bibliographystyle{plain}
\section{Introduction}
The AdS/CFT correspondence is the most precise non-perturbative definition of quantum gravity. A central problem is how local bulk physics emerges from CFT data. This question has been studied extensively and is reasonably well-understood at large $N$, for small perturbations around vacuum AdS \cite{Balasubramanian:1998sn, Banks:1998dd}. In this limit, a bulk field $\Phi$ at a point $X$ is defined by integrating a local CFT operator $\mathcal{O}$ over the boundary with an appropriate smearing function $K$ \cite{Hamilton:2006az}:
\begin{equation}
\Phi(X)= \int dt \, d^{d-1}x \; K(X|t,{x}) \mathcal{O}({x}) + O\bigg(\frac{1}{N}\bigg).
\end{equation}
This CFT operator can subsequently be time evolved to a single timeslice using the CFT Hamiltonian, which gives a non-local operator $P$ in the CFT corresponding with the field $\Phi(X)$ in the bulk. This type of operator is called a `precursor' \cite{Polchinski:1999yd, Giddings:2001pt, Freivogel:2002ex}.\\
The study of precursors is fundamental to understanding a concrete realization of holography. There are several unresolved questions one can ask, such as how to construct precursors that correspond to bulk fields behind a black hole horizon.
Here we focus on two particular puzzles that are related to each other.
One puzzle is that, at large $N$, bulk locality requires the precursor to commute with all local CFT operators at a fixed time, while basic properties of quantum field theory demand that only trivial operators can commute with all local operators at a given time \cite{Almheiri:2014lwa}. Another is that a local bulk operator corresponds to many different precursors with different spatial support in the CFT, because the bulk field can be reconstructed from any spatial region of the CFT whose entanglement wedge contains the bulk point.
Both of these apparent paradoxes can be resolved by requiring that different precursors are not equivalent as true CFT operators \cite{Almheiri:2014lwa}. In particular, the difference between two precursors corresponding to the same bulk field seems to have no clear physical meaning, and must act trivially on some class of states. In what follows, we will refer to this perplexing feature as the `precursor ambiguity'. \\
In \cite{Almheiri:2014lwa} and \cite{Mintun:2015qda} some progress was made in giving a guiding principle for constructing the ambiguity between two precursors corresponding to the same bulk field. The former approach recasts the AdS/CFT dictionary in the language of quantum error correction (QEC). From this viewpoint, the ambiguity is an operator which acts trivially in the code subspace of QEC, which in this case is naturally thought of as the space of states dual to low-energy excitations of the bulk. The latter work, on the other hand, proposed that gauge symmetry in the CFT can give a prescription to construct the precursor ambiguity. Moreover, they claimed that the code subspace is the full space of gauge invariant states.\\
In this paper, we start in section \ref{sec:conjecture} by proposing the language of BRST symmetry as a tool for making the precursor ambiguity concrete. In section \ref{sec:BRST}, we show that this approach nicely reduces to an already identified precursor ambiguity in the presence of a global $SO(N)$ symmetry \cite{Mintun:2015qda}. Furthermore, it has the added benefit that it generalizes to arbitrary gauge theories at any $N$. In section \ref{sec:localizebulk} we show in a particular toy model how this precursor ambiguity has the right number of parameters to enable us to localize precursors in the boundary of the entanglement wedge order by order in 1/N.
\section{Proposal: Precursor Ambiguities from BRST}\label{sec:conjecture}
In most of the known examples of holography, the boundary theory has some gauge symmetry. The presence of these `unphysical' degrees of freedom renders the naive path integral for gauge theories divergent. One approach to deal with these problems while covariantly quantizing the gauge theory is the BRST formalism \cite{Becchi:1975nq, Tyutin:1975qk}. The rough idea is to replace the original gauge symmetry with a global symmetry, by enlarging the theory and introducing additional fields. This new rigid symmetry, the BRST symmetry, will still be present after fixing the gauge. Since the generator of the BRST symmetry $Q_{\text{BRST}}$ is nilpotent of order two, we can construct its cohomology which will describe the gauge invariant observables of the original theory. \\
We propose that the natural framework to understand precursor ambiguities is the language of BRST symmetry. In particular, we claim that if $P_1$ and $P_2$ are two precursors in the large $N$-limit corresponding with the same local bulk field $\Phi(X)$, then $P_1 - P_2 = \mathcal{O}$ where
\begin{itemize}
\item $\mathcal{O}$ is BRST exact: $\mathcal{O}=\{Q_{\text{BRST}},\tilde{\mathcal{O}}\}$
\item $\mathcal{O}$ does not contain any (anti-)ghosts.
\end{itemize}
By construction this leaves any correlation function of gauge invariant operators in arbitrary physical states invariant
\begin{equation}
\langle \mathcal{O}_1 \cdots \mathcal{O}_i \cdots \mathcal{O}_n\rangle= \langle \mathcal{O}_1 \cdots (\mathcal{O}_i+\{Q_{\text{BRST}},\tilde{\mathcal{O}}\} ) \cdots \mathcal{O}_n\rangle
\end{equation}
since $[Q_{\text{BRST}},\mathcal{O}_i]=0$ for a gauge invariant operator $\mathcal{O}_i$, and $Q_{\text{BRST}}|\psi\rangle=0$ for a gauge invariant state $|\psi\rangle$. \\
As an example, we will show in section \ref{sec:BRST} that in the case of $N$ free scalars with a global ${\rm SO}(N)$ symmetry, we can reproduce the results of \cite{Mintun:2015qda}. That is, there exists an operator $\tilde{\mathcal{O}}$ such that
\begin{equation}
\{Q_{\text{BRST}},\tilde{\mathcal{O}}\}\sim L^{ij} A^{ij}
\end{equation}
where $L^{ij}$ is the generator of the ${\rm SO}(N)$ symmetry, and $A^{ij}$ is any operator in the adjoint.\\
Note that while the BRST ambiguity is well-defined for any gauge theory and even at finite $N$, the notion of bulk locality only makes sense perturbatively in $1/N$. In order to connect the abstract BRST ambiguity to concrete equivalences between different CFT operators, we need to make use of the large $N$ expansion. Thus the precursor ambiguity we find is valid within states where the number of excitations is small compared to $N$.
\section{BRST Symmetry of $N$ Real Scalars}\label{sec:BRST}
In this section we will apply the BRST formalism to a theory of $N$ real scalars. The Lagrangian for this gauge theory in the covariant gauge is given by
\begin{equation}
\mathcal{L}=-\frac14 (F^a_{\mu \nu})^2 + \frac12 D^\mu \phi^i D_\mu \phi_i + \frac{\xi}{2} (B^a)^2 +B^a \partial^\mu A_\mu^a + \partial^\mu\bar{c}^a (D_\mu^{}c)^a
\end{equation}
where the auxiliary field $B^a$ can be integrated out using $\xi B^a=-\partial^\mu A_\mu^a$.
We take the $\phi^i$ in the fundamental representation of ${\rm SO}(N)$, while the ghost $c^a$, anti-ghost $\bar{c}^a$ and the gauge field $A_\mu^a$ are in the adjoint. The (anti-)ghosts are scalar fermion fields.
The covariant derivatives are given by
\begin{equation}
(D_\mu^{} c)^a = \partial_\mu c^a + g f^{abc} A^b_\mu c^c
\end{equation}
and
\begin{equation}
(D_\mu^{} \phi)^i = \partial_\mu \phi^i - i g A_\mu^a (T^a)_{ij}\phi^j.
\end{equation}
Note that $D_\mu^{} \phi^i$ is real since the matrices $(T^a)_{ij}$ are purely imaginary for ${\rm SO}(N)$.
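This reality property is easy to verify explicitly for $N=3$, where one may take $(T^a)_{jk}=-i\epsilon_{ajk}$. The following sketch (an illustration of ours, not part of the derivation) checks that these generators are purely imaginary, antisymmetric, and close into $[T^a,T^b]=i\epsilon^{abc}T^c$, so that $-igA^a_\mu T^a$ is a real matrix:

```python
import numpy as np
from itertools import permutations

# Levi-Civita tensor eps[a, j, k] in three dimensions
eps = np.zeros((3, 3, 3))
for p in permutations(range(3)):
    # sign of the permutation = determinant of the permutation matrix
    eps[p] = np.sign(np.linalg.det(np.eye(3)[list(p)]))

# SO(3) generators in the fundamental: (T^a)_{jk} = -i * eps_{ajk}
T = -1j * eps

for a in range(3):
    assert np.allclose(T[a].real, 0.0)           # purely imaginary
    assert np.allclose(T[a].imag.T, -T[a].imag)  # antisymmetric
    # hence -i g A^a T^a is a real matrix and D_mu phi stays real

# closure of the algebra: [T^a, T^b] = i eps_{abc} T^c
for a in range(3):
    for b in range(3):
        comm = T[a] @ T[b] - T[b] @ T[a]
        rhs = 1j * np.einsum('c,cjk->jk', eps[a, b, :], T)
        assert np.allclose(comm, rhs)
print("SO(3) generators: purely imaginary, antisymmetric, correct algebra")
```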
The field strength $F$ is given by
\begin{equation}
F^a_{\mu \nu}=\partial_\mu A_\nu^a - \partial_\nu A^a_\mu + g f^{abc}A^b_\mu A^c_\nu.
\end{equation}
This Lagrangian is invariant under the following BRST symmetry:
\begin{align}
\delta_B A^a_\mu&=\epsilon (D_\mu c)^a \nonumber\\
\delta_B \phi^i &= i g \epsilon c^a (T^a)_{ij}\phi^j \nonumber\\
\delta_B c^a &= -\frac12 g \epsilon f^{abc} c^b c^c \\
\delta_B \bar{c}^a &= \epsilon B^a\nonumber\\
\delta_B B^a &=0. \nonumber
\label{eq:BRSTtransfos}
\end{align}
where $\epsilon$ is a constant Grassmann parameter.
\subsection{The BRST Charge}
In order to compute the BRST charge, we start by constructing the Noether current associated to this symmetry
\begin{equation}
J^\mu = \sum_\alpha \frac{\delta \mathcal{L}}{\delta( \partial_\mu \Phi_\alpha)} \delta_B \Phi_\alpha
\end{equation}
where the sum runs over all possible fields in the Lagrangian.
The BRST charge is then defined via
\begin{equation}
Q_{\text{BRST}}=\int d^{d-1}x \; J^0
\end{equation}
and generates the BRST transformations on the fields via
\begin{equation}
\delta_B \Phi_\alpha=\epsilon [\Phi_\alpha,Q_{\text{BRST}}]_\pm.
\end{equation}
Let's start by computing the variations and defining the conjugate momenta
\begin{align}
\frac{\delta \mathcal{L}}{\delta( \partial_\mu \phi^i)}&= D_\mu \phi^i\qquad & \Pi^i &\equiv D_0 \phi^i \qquad &[\phi^i(x), \Pi^j(y)]&= \delta^{ij}\delta^{(d-1)}({x} - {y})\\
\frac{\delta \mathcal{L}}{\delta( \partial_\mu c^a)}&= (\partial_\mu \bar{c})^a \qquad & \pi_c^a &\equiv (\partial_0 \bar{c})^a \qquad &\{c^a(x),\pi_c^b(y)\}&=\delta^{ab}\delta^{(d-1)}({x} - {y})\\
\frac{\delta \mathcal{L}}{\delta( \partial_\mu \bar{c}^a)}&= (D_\mu c)^a \qquad & \pi_{\bar{c}}^a &\equiv (D_0 c)^a \qquad &\{\bar{c}^a(x),\pi_{\bar{c}}^b(y)\}&=\delta^{ab}\delta^{(d-1)}({x} - {y})
\end{align}
and finally for the gauge field
\begin{equation}
\frac{\delta \mathcal{L}}{\delta( \partial_\mu A^a_\nu)}=- F^a_{\mu\nu} + \eta_{\mu \nu}B^a \qquad \Pi^a_\nu\equiv -F^a_{0\nu} + \eta_{0 \nu}B^a
\end{equation}
with commutation relation
\begin{equation}
[A^a_\mu(x),\Pi^b_\nu(y)]= \eta_{\mu \nu}\delta^{ab} \, \delta^{(d-1)}({x} - {y}).
\end{equation}
That gives the following Noether current
\begin{equation}
J^\mu= \left( - F^{a\,\mu \nu} +\eta^{\mu \nu}B^a\right) (D_\nu c)^a+ i g D^\mu \phi^i c^a (T^a)_{ij}\phi^j - \frac12 g (\partial^\mu \bar{c}^a)f^{abc} c^b c^c+ (D^\mu c)^a B^a.
\end{equation}
The BRST charge is then given by
\begin{align}
Q_{\text{BRST}}&=\int d^{d-1}x \; \Pi^a_\nu (D_\nu c)^a + i g \Pi^i c^a (T^a)_{ij}\phi^j - \frac12 g f^{abc} \pi^a_c c^b c^c + B^a\pi_{\bar{c}}^a \\
&=\int d^{d-1}x \; \Pi^a_\nu (\partial_\nu c)^a - g f^{abc} A_\nu^b\Pi^c_\nu c^a+ig \Pi^i c^a (T^a)_{ij}\phi^j - \frac12 g f^{abc} \pi^a_c c^b c^c + B^a\pi_{\bar{c}}^a. \nonumber
\end{align}
We can define the generators of the ${\rm SO}(N)$ symmetry, as the Noether currents associated with the gauge transformations. The current has two contributions, one from the Yang-Mills parts $F^2$ and one from the matter part $(D \phi)^2$:
\begin{equation}
J^a_{\text{matter}}\equiv i\, \Pi^i (T^a)_{ij}\phi^j \qquad J^a_{\text{gauge}}\equiv - f^{abc}A^b_\mu \Pi^c_{\mu}
\end{equation}
\begin{equation}
J^a \equiv \left( J^a_{\text{matter}} + J^a_{\text{gauge}}\right) .
\end{equation}
This finally leads to the BRST charge:
\begin{equation}
Q_{\text{BRST}}=\int dx^{d-1} \; \left( g c^a J^a - \frac12 g f^{abc} \pi^a_c c^b c^c+B^a \pi_{\bar{c}}^a +\Pi^a_\nu (\partial_\nu c)^a \right).
\end{equation}
\subsection{Reduction to a Global ${\rm SO}(N)$ Symmetry}
In order to connect with previous work on precursors \cite{Mintun:2015qda}, we are interested in reducing the ${\rm SO}(N)$ gauge symmetry to a global symmetry. One crude way of accomplishing this is to set the gauge fields $A^a_\mu=0$ (and also $B^a=0$, since $B^a \sim \partial^\mu A^a_\mu$). In this case, the ghosts become quantum mechanical (position independent) and the BRST charge reduces to
\begin{equation}
Q_{\text{BRST}}=\int dx^{d-1} \left( g c^a J^a - \frac12 g f^{abc} \pi^a_c c^b c^c \right) \qquad J^a=i\, \Pi^i (T^a)_{ij}\phi^j
\end{equation}
where the global $SO(N)$ generator is given by $L^a=\int d^{d-1} x \, J^a(x)$. \\
Now consider an operator of the form $\pi_c^a \, \mathcal{O}^a$ and compute the anti-commutator with the BRST charge:
\begin{align}
\{ Q_{\text{BRST}},\pi^d_c \mathcal{O}^d\}&=\int dx^{d-1} \left( g \{ c^a J^a ,\pi^d_c \mathcal{O}^d\} - \frac12 g f^{abc} \{\pi^a_c c^b c^c,\pi^d_c \mathcal{O}^d\} \right)\\
&=g \int dx^{d-1} \mathcal{O}^a J^a =g L^a \mathcal{O}^a
\label{eq:MPRAmbiguity}
\end{align}
where we used that the generator of global ${\rm SO}(N)$ transformations rotates the operator $\mathcal{O}$ as $[J^a , \mathcal{O}^b]=f^{abc} \mathcal{O}^c $.
This expression is BRST exact by construction, and ghost-free. Adding this to a CFT operator will have no effect whatsoever within correlation functions in physical states. It is exactly the precursor ambiguity found in \cite{Mintun:2015qda}.
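To make ``no effect within correlation functions in physical states'' concrete, the following toy check (our own sketch under strong simplifications: two flavors, one bosonic mode per flavor, a truncated Fock space, and a stand-in generator $L^{12}=\alpha^{\dagger 1}\alpha^{2}-\alpha^{\dagger 2}\alpha^{1}$) verifies that the global generator annihilates an ${\rm SO}(2)$-singlet state:

```python
import numpy as np

def osc(dim):
    """Single-mode bosonic annihilation operator truncated to `dim` levels."""
    return np.diag(np.sqrt(np.arange(1, dim)), k=1)

dim = 4                       # occupation numbers 0..3 per mode
a = osc(dim)
I = np.eye(dim)

# Two flavors, one mode each (tensor-product Fock space)
a1, a2 = np.kron(a, I), np.kron(I, a)
vac = np.zeros(dim * dim); vac[0] = 1.0

# SO(2) generator (matter part only; a single mode stands in for the k-integral)
L12 = a1.T @ a2 - a2.T @ a1

# Singlet state ~ alpha^{dagger m} alpha^{dagger m} |0>
singlet = (a1.T @ a1.T + a2.T @ a2.T) @ vac

# The generator annihilates the SO(2)-invariant state...
assert np.allclose(L12 @ singlet, 0.0)
# ...but not a generic (non-singlet) state:
assert not np.allclose(L12 @ (a1.T @ a1.T @ vac), 0.0)
```

Since the ambiguity is of the form $L^a \mathcal{O}^a$, any matrix element between such singlet (physical) states vanishes, which is exactly the statement above.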
\pagebreak
\section{Localizing Precursors in a Holographic Toy Model}\label{sec:localizebulk}
In the previous section, we computed the ambiguous part of the precursors as a BRST exact and ghost-free operator. This ambiguity can be viewed as the redundant, quantum error correcting part of the precursors. Once it has been identified, the physical information contained in the precursors becomes clear. In this section we will study the particular ambiguity \eqref{eq:MPRAmbiguity} in a toy model. We will show that this ambiguity has the structure of an HKLL series, and that it contains enough freedom to localize bulk information in a particular region of the CFT by setting the smearing function to zero in that region.
\subsection{The Model}
The model is a CFT containing $N$ free scalar fields in $1+1$ spacetime dimensions:
\begin{equation}
\mathcal{L} = \sum\limits_{i=1}^N -\frac{1}{2} \, \partial_\mu \, \phi^i \, \partial^\mu \, \phi^i .
\end{equation}
It was first considered by \cite{Mintun:2015qda} and refined in \cite{Freivogel:2016zsb}. There is a $\Delta=2$ primary operator $\mathcal{O}=\partial_\mu \phi^i \, \partial^\mu \phi^i$ which we take to be dual to a massless scalar $\Phi$ in $AdS_{2+1}$. \\
Following \cite{Mintun:2015qda} and \eqref{eq:MPRAmbiguity}, the precursor ambiguity is given by $L^{ij} A^{ij}$ where $A^{ij}$ is any operator in the adjoint of ${\rm SO}(N)$ and $L^{ij}$ is the generator of global ${\rm SO}(N)$ transformations. Note that we only kept the global part of the ${\rm SO}(N)$ transformations by setting $A_\mu^a=B^a=0$ in the full gauge theory discussed in section \ref{sec:BRST}.\\
Expanding the boundary field $\phi$ in terms of left/right-moving creation and annihilation modes, one can compute the generator of global rotations
\begin{equation}
L^{ij} = \int \frac{d k}{2 \, k} \, \bigg( \alpha^{\dagger}{}^{[i}_k \, \alpha_k^{j]} + \tilde{\alpha}^{\dagger}{}^{[i}_k \, \tilde{\alpha}_k^{j]} \bigg)
\end{equation}
where the tilde denotes a right-moving polarization of the creation or annihilation modes and any zero modes are left out.
If there is no confusion about which momentum a given mode carries, we will omit the subscript $k$.
\subsection{Precursor Ambiguity and Bulk Localization Perturbatively in $1/N$}
The bulk field $\Phi$ in global $AdS_3$ can be constructed at large $N$ by smearing quadratic operators of the form $\mathcal{O}\sim {\alpha_{k} \tilde{\alpha}_{k'} }$ over a particular region of the CFT \cite{Hamilton:2006az}:
\begin{equation}
\Phi(X)=\int d^2 x \, K_1(X|x) \, \mathcal{O}(x) + O\bigg(\frac{1}{\sqrt{N}}\bigg)
\label{eq:HKLL}
\end{equation}
where the smearing function $K_1$ obeys the bulk free wave equation
\begin{equation}
\Box_{AdS_3} K_1(X|x)=0.
\end{equation}
This procedure correctly reproduces the bulk two-point function. The precursor can be obtained from \eqref{eq:HKLL} by time evolving the CFT operator to a single timeslice. Extending the HKLL procedure perturbatively in $1/N$ will look schematically as follows \cite{Kabat:2011rz, Heemskerk:2012mn}:
\begin{equation}
\Phi^{}(X)= \int K_1 \mathcal{O} +\frac{1}{\sqrt{N}} \iint K_2 \mathcal{O} \mathcal{O} + O\bigg(\frac{1}{N}\bigg)
\label{eq:interactingHKLL}
\end{equation}
where the expansion parameter is $1/\sqrt{N}$ instead of $1/N$ because we are dealing with a vector-like theory \cite{Klebanov:2002ja}.\\
In \cite{Freivogel:2016zsb} it was shown that, at leading order in $1/N$, the spatial support of the smearing function $K_1$ (and hence the information of the bulk field) can be localized in a particular Rindler wedge of the CFT due to an ambiguity in the smearing function. This freedom can be understood by noting that the term $ \alpha_{k_1}^{\dagger i} \, \tilde{\alpha}_{k_2}^i $ can be added to $\mathcal{O}$ within two-point functions since it annihilates the vacuum in both directions. While this two-parameter family of freedom is enough to localize the bulk field at leading order in $N$, one can see that it will generically be insufficient to set $K_2$ to zero in a particular region, because this requires a four-parameter family of freedom. Since changing the smearing function corresponds to picking a different precursor, we would like to identify the aforementioned freedom in the smearing function with the precursor ambiguity. In what follows, we will explain how the precursor ambiguity $L^{ij} A^{ij}$ has enough freedom to localize bulk information order by order in $1/N$.\\
Start by considering the following quadratic (adjoint) operator
\begin{equation}
A^{ij}_2 \equiv \alpha_{k_1}^{\dagger i} \, \tilde{\alpha}_{k_2}^j .
\end{equation}
A possible ambiguity of the precursor will be given by $L^{ij} \, A^{ij}_2$.
Normal ordering yields
\begin{align}
\begin{split}
\frac{1}{N^\frac32}L^{ij} \, A^{ij}_2 &= \frac{1}{N^\frac32} \int \frac{d k}{2 \, k} \, \bigg( \alpha^{\dagger}{}^{[i}_k \, \alpha_k^{j]} + \tilde{\alpha}^{\dagger}{}^{[i}_k \, \tilde{\alpha}_k^{j]} \bigg) \, \alpha_{k_1}^{\dagger i} \, \tilde{\alpha}_{k_2}^j \\
&= \frac{(1-N)}{N^\frac32} \, \alpha^{\dagger i}_{k_1} \, \tilde{\alpha}^i_{k_2} +\frac{1}{\sqrt{N}} \frac{\alpha^{\dagger}{}^{i}_{k_1} \, L^{ij} \, \tilde{\alpha}^j_{k_2}}{N} \\
&\sim \mathcal{O} + \frac{1}{\sqrt{N}} \mathcal{O}\mathcal{O}
\end{split}
\label{eq:quadraticambiguity}
\end{align}
where $\mathcal{O}$ denotes an operator quadratic in the $\alpha$'s and normalized by $1/\sqrt{N}$ such that it is $O(1)$ in $N$-scaling. Note that the LHS of \eqref{eq:quadraticambiguity}, by construction, is zero in physical states (and hence can be added to the precursor without changing any of its correlation functions). \\
The piece quadratic in the $\alpha$'s in \eqref{eq:quadraticambiguity} is exactly the ambiguity needed to localize the precursor in the CFT to leading order in $N$, as was shown in detail in \cite{Freivogel:2016zsb}. One can now also see that one generically needs a four-parameter ambiguity if we want to be able to set $K_2$ in \eqref{eq:interactingHKLL} to zero in certain regions. Even though the term $\mathcal{O}\mathcal{O}/\sqrt{N}$ in \eqref{eq:quadraticambiguity} has the right structure to fit in the HKLL series, it does not have enough freedom to set $K_2$ to zero (it has only 2 free parameters, while we need 4). It can be done, however, by constructing a new operator which annihilates $SO(N)$-invariant states and is quartic in the $\alpha$'s:
\begin{equation}
A^{ij}_4 \equiv A_2^{ij} - \frac{1}{N} \, A_2^{ij} \, \alpha^{\dagger \, m}_{k_3} \, \alpha^m_{k_4} .
\end{equation}
The ambiguity in the precursor to order $\frac{1}{\sqrt{N}}$ is then given by $L^{ij} \, A^{ij}_4$. Normal ordering yields
\begin{equation}
L^{ij} \, A^{ij}_4 = L^{ij} \, A^{ij}_2 + T_4 + T_6
\end{equation}
where
\begin{equation}
T_4 = \alpha^{\dagger \, i}_{k_1} \, \alpha^{\dagger \, i}_{k_3} \, \tilde{\alpha}^m_{k_2} \, \alpha^m_{k_4} - \alpha^{\dagger \, i}_{k_3} \, \tilde{\alpha}^i_{k_2} \, \alpha^{\dagger \, m}_{k_1} \, \alpha^m_{k_4} + (1-N) \, \alpha^{\dagger \, i}_{k_1} \, \tilde{\alpha}^i_{k_2} \, \alpha^{\dagger \, m}_{k_3} \, \alpha^m_{k_4}
\end{equation}
\begin{align}
\begin{split}
T_6 &= \alpha^{\dagger \, i}_k \, \alpha^{\dagger \, i}_{k_1} \, \alpha^{\dagger \, m}_{k_3} \, \tilde{\alpha}^j_{k_2} \, \alpha^j_k \, \alpha^m_{k_4} - \alpha^{\dagger \, j}_k \, \alpha^{\dagger \, i}_{k_1} \, \alpha^{\dagger \, m}_{k_3} \, \tilde{\alpha}^j_{k_2} \, \alpha^i_k \, \alpha^m_{k_4} \\
&\hspace{3mm}+ \tilde{\alpha}^{\dagger \, i}_k \, \alpha^{\dagger \, i}_{k_1} \, \alpha^{\dagger \, m}_{k_3} \, \tilde{\alpha}^j_{k_2} \, \tilde{\alpha}^j_k \, \alpha^m_{k_4} - \tilde{\alpha}^{\dagger \, j}_k \, \alpha^{\dagger \, i}_{k_1} \, \alpha^{\dagger \, m}_{k_3} \, \tilde{\alpha}^j_{k_2} \, \tilde{\alpha}^i_k \, \alpha^m_{k_4}
\end{split}
\end{align}
and repeated momenta are integrated over appropriately. By $T_4$ we denote the part of $L^{ij} \, A^{ij}_4$ quartic in the $\alpha$'s, and similarly $T_6$ denotes the hexic part.
As before, $T_4$ and $T_6$ scale the same way with respect to $N$ in any gauge invariant state. They also do not contribute to three-point functions of the bulk field. \\
Again we find that all the terms nicely arrange themselves in the right structure of an HKLL series
\begin{equation}
L^{ij} A_4^{ij}\sim \mathcal{O} + \frac{1}{\sqrt{N}} \mathcal{O}\mathcal{O} + \frac{1}{N} \mathcal{O}\mathcal{O}\mathcal{O}
\label{eq:quarticambiguity}
\end{equation}
where $\mathcal{O}$ schematically denotes an operator quadratic in the $\alpha$'s and normalized by $1/\sqrt{N}$ such that it is $O(1)$ in $N$-scaling. The main difference with $L^{ij} A_2^{ij}$ is that the term quartic in the $\alpha$'s now gets a contribution from $T_4$, which does have four independent parameters, and hence has enough freedom to localize the smearing function $K_2$.\\
Doing so also introduces a term of order $\alpha^6$. The connected piece of this will be down in $1/N$ relative to $\alpha^4$. If $T_4$ fixes the ambiguity at order $1/\sqrt{N}$, $T_6$ will contribute towards fixing it at order $1/N$. Thus, by choosing a proper operator $A^{ij}$, we are able to fix the ambiguity in the precursor to any order in $1/N$ perturbatively.\\
We can now summarize how this recursive procedure works to localize bulk information order by order in $N$. When the operator we want to smear $A^{ij}_2$ is quadratic, the ambiguity in the precursor to the quadratic order is given by $(1-N) \, \alpha^{\dagger i}_{k_1} \tilde{\alpha}^j_{k_2}$. These modes are labeled by two different momenta. Since we are working in two spacetime dimensions, they are able to fix all the ambiguity in the precursor up to quadratic level. \\
But fixing the quadratic level, introduces a quartic piece: $\alpha^{\dagger}{}^{i}_{k_1} \, L^{ij} \, \tilde{\alpha}^j_{k_2}$. This piece has insufficient freedom to localize the precursor up to $1/\sqrt{N}$ effects. To fix the ambiguity to the quartic level, one introduces a quartic ambiguity $L^{ij} A^{ij}_4$. This gives a piece $T_4$ which has four independent momenta and hence can now fix any ambiguity in the precursor up to quartic order. However, doing so also introduced a hexic piece $T_6$. This hexic term makes the precursor ambiguous to order six. We can repeat the procedure, smear a different $A^{ij}$ and then fix the ambiguity in the precursor up to order six.\\
Surprisingly, each term at the next order is suppressed by $\frac{1}{\sqrt{N}}$ relative to the current order. Hence, this procedure can be carried out order by order in $\frac{1}{\sqrt{N}}$ and thus fixes all the ambiguity in the interacting HKLL series in this toy model. While it is not explicitly demonstrated in this paper, a similar story should hold when the matter fields are in the adjoint.\\
One should note that, while the quadratic and quartic piece in the ambiguity \eqref{eq:quadraticambiguity} (and similarly for the quartic and hexic piece in the ambiguity \eqref{eq:quarticambiguity}) have the correct `naive' $N$-scaling ($\alpha \sim N^\frac14$) to be arranged in an HKLL series, their real $N$-scaling is the same. This means that neither term in \eqref{eq:quadraticambiguity} or \eqref{eq:quarticambiguity} is smaller compared to the other. For clarity, we will elaborate on this a bit more in the next section \ref{sec:Nscaling}.
\subsection{$N$-Scaling}\label{sec:Nscaling}
Within physical states, both terms on the RHS of \eqref{eq:quadraticambiguity} will be equal and opposite. In particular, they must have the same $N$-scaling (contrary to what was claimed in \cite{Mintun:2015qda}), even though naive $N$-counting would suggest otherwise. In order to explicitly see that both terms have the same $N$-scaling in $SO(N)$-invariant states, we pick the following three states and label the operators as follows:
\begin{center}
\begin{tabular}{|c|c|}
\hline
States & Operators\\
\hline \hline
$| \psi_1' \rangle = \frac{1}{\sqrt{N}} \, \alpha_{k_3}^{\dagger m} \, \alpha_{k_4}^{\dagger m} \, | 0 \rangle $ & $\mathcal{O}_1=\alpha_{k_1}^{\dagger i} \, L^{ij} \, \tilde{\alpha}_{k_2}^{j}/ N^\frac32$ \\
\hline
$| \psi_1'' \rangle = \frac{1}{\sqrt{N}} \, \tilde{\alpha}_{k_3}^{\dagger m} \, \tilde{\alpha}_{k_4}^{\dagger m} \, | 0 \rangle$ & $\mathcal{O}_2= \, \alpha_{k_1}^{\dagger i} \, \tilde{\alpha}_{k_2}^i/\sqrt{N}$ \\
\hline
$| \psi_2 \rangle = \frac{1}{\sqrt{N}} \, \tilde{\alpha}_{k_5}^{\dagger m} \, \alpha_{k_6}^{\dagger m} \, | 0 \rangle$ &\\
\hline
\end{tabular}
\end{center}
In order to assign an $N$-scaling to $\mathcal{O}_2$, one could check its two-point function. However, since this operator has vanishing two-point functions, we investigate the three-point function and find that it goes like $1/\sqrt{N}$. This justifies assigning an $O(1)$ $N$-scaling to $\mathcal{O}_2$. We will estimate the size of $\mathcal{O}_1$ and $\mathcal{O}_2$ in the subspace spanned by the three states above. Let us denote the matrix elements of an arbitrary operator $\mathcal{O}$ in the above subspace as
\begin{center}
$\mathcal{O} = \begin{pmatrix}
\langle \psi_1' | \mathcal{O} | \psi_1' \rangle & \langle \psi_1' | \mathcal{O} | \psi_1'' \rangle & \langle \psi_1' | \mathcal{O} | \psi_2 \rangle\\
\langle \psi_1'' | \mathcal{O} | \psi_1' \rangle & \langle \psi_1'' | \mathcal{O} | \psi_1'' \rangle & \langle \psi_1'' | \mathcal{O} | \psi_2 \rangle \\
\langle \psi_2 | \mathcal{O} | \psi_1' \rangle & \langle \psi_2 | \mathcal{O} | \psi_1'' \rangle & \langle \psi_2 | \mathcal{O} | \psi_2 \rangle
\end{pmatrix}.$
\end{center}
Then we get the following matrix elements for $\mathcal{O}_1$ and $\mathcal{O}_2$
\begin{align}
\mathcal{O}_1 =\frac{1}{\sqrt{N}} \begin{pmatrix}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 1 & 0
\end{pmatrix} \qquad
\mathcal{O}_2 =\frac{1}{\sqrt{N}} \begin{pmatrix}
0 & 0 & 1 \\
0 & 0 & 0 \\
0 & 1 & 0
\end{pmatrix}.
\end{align}
We can see that both pieces in $L^{ij} \, A^{ij}_2$ scale in the same way with respect to $N$, as expected. Naively, one could expect the part quartic in the $\alpha$'s to be down relative to the part quadratic in the $\alpha$'s by a factor of $1/\sqrt{N}$. For these particular operators that does not happen, because the disconnected piece in $\mathcal{O}_1$ enhances its $N$-scaling. \\
Applying similar arguments to \eqref{eq:quarticambiguity}, we conclude $T_6$ must have the same $N$-scaling as $T_4$. Again, the reason why this does not agree with naive $N$-scaling, is due to the contribution from the disconnected piece in $T_6$.
\section{Outlook}
In this paper we have presented preliminary evidence that precursors are related to BRST invariance and hence to
the underlying gauge symmetry of the field theory. There are several interesting follow-up directions to explore.
One could for example study precursors in the toy model in non-trivial states (such as thermal states), but
more importantly, one would like to generalize the construction to a proper gauge theory with local gauge invariance.
Perhaps the simplest example of a field theoretic precursor ambiguity is to consider the field theoretic dual of the bulk
operator one obtains by integrating a bulk field over a symmetric minimal surface. Such operators were studied in \cite{Czech:2016xec,
deBoer:2016pqk}, and to lowest order in the $1/N$ expansion in the field theory for a bulk scalar they are given by
\begin{equation}
Q_{\cal O}(x,y)
=C \int_{D(x,y)} \!\!\!\!\!d^d \xi\, \left( \frac{(y-\xi)^2(\xi-x)^2 }{-(y-x)^2} \right)^{\frac{(\Delta_{\cal O}-d)}{2}}\
\langle {\cal O}(\xi)\rangle\,
\end{equation}
where the integral is over the causal diamond $D(x,y)$ with past and future endpoints $x$ and $y$, and $\Delta_{\cal O}$
is the scaling dimension of the primary operator ${\cal O}$. The constant $C$ is a normalization constant which at
this point is arbitrary. The past light-cone of $y$ and the future light-cone of $x$ intersect at a sphere $B$, which is
the boundary of the bulk minimal surface.\\
If the field theory is defined on $S^{d-1}\times \mathbb R$, then there are two equivalent choices of causal diamonds
for a given symmetric minimal surface. Together, they contain a full Cauchy slice for the field theory.
Hence, there are two inequivalent boundary representations of the same bulk operator,
and the difference between these two is an example of a precursor ambiguity. We would therefore like to conjecture
that there exists an operator $Y$ such that
\begin{align} \label{aux1}
\{Q_{\rm BRST}, Y\} & = \int_{D(x,y)} \!\!\!\!\!d^d \xi\, \left( \frac{(y-\xi)^2(\xi-x)^2 }{-(y-x)^2} \right)^{\frac{(\Delta_{\cal O}-d)}{2}}\
\langle {\cal O}(\xi)\rangle\ \\
&- \int_{\bar{D}(\bar{x},\bar{y})} \!\!\!\!\!d^d \bar{\xi}
\, \left( \frac{(\bar{y}-\bar{\xi})^2(\bar{\xi}-\bar{x})^2 }{-(\bar{y}-\bar{x})^2} \right)^{\frac{(\Delta_{\cal O}-d)}{2}}\
\langle {\cal O}(\bar{\xi})\rangle\ + { O}(1/N)
\end{align}
Here, the second, complementary causal diamond is denoted by $\bar{D}(\bar{x},\bar{y})$ with past and future endpoints $\bar{x}$ and $\bar{y}$.
It would be very interesting to construct an operator $Y$ for which (\ref{aux1}) holds, and we hope to come back to this
in the near future.
\acknowledgments
We thank Vladimir Rosenhaus for helpful discussions. This work is part of the Delta ITP consortium, a program of the Netherlands Organisation for Scientific Research (NWO) that is funded by the Dutch Ministry of Education, Culture and Science (OCW). SFL would like to acknowledge financial support from FOM, which is part of the NWO.
\bibliographystyle{ytphys}
\section{Conclusion}
\label{sec:con}
We introduce a simple but effective loss function, PIoU, to exploit both the angle and IoU for accurate OBB regression. The PIoU loss is derived from IoU metric with a pixel-wise form, which is simple and suitable for both horizontal and oriented bounding box. To demonstrate its effectiveness, we evaluate the PIoU loss on both anchor-based and anchor-free frameworks. The experimental results show that PIoU loss can significantly improve the accuracy of OBB detectors, particularly on objects with high-aspect ratios. We also introduce a new challenging dataset, Retail50K, to explore the limitations of existing OBB detectors as well as to validate their performance after using the PIoU loss. In the future, we will extend PIoU to 3D rotated object detection. Our preliminary results show that PIoU can improve PointPillars~\cite{lang2019pointpillars} on KITTI val dataset~\cite{geiger2013vision} by 0.65, 0.64 and 2.0 AP for car, pedestrian and cyclist in moderate level, respectively.
\section{Experiments}
\label{exp}
\subsection{Experimental Settings}
We evaluate the proposed PIoU loss with anchor-based and anchor-free OBB detectors (RefineDet, CenterNet) under different parameters and backbones. We also compare the proposed method with other state-of-the-art OBB-detection methods on different benchmark datasets (\textit{i.e.} DOTA~\cite{Xia2018DAS}, HRSC2016~\cite{Liu2016SRB}, PASCAL VOC~\cite{Everingham2015TPV}) and the proposed Retail50K dataset. The training and testing tasks are accomplished on a desktop machine with an Intel(R) Core(TM) i7-6850K CPU @ 3.60GHz, 64 GB of installed memory, a GeForce GTX 1080TI GPU (11 GB global memory), and Ubuntu 16.04 LTS. With this machine, the batch size is set to 8 and 1 for training and testing, respectively.
\noindent\textbf{Anchor-based OBB Detector:}
\label{approach:anchorbased}
For anchor-based object detection, we train RefineDet~\cite{refinedet} with its loss updated to use the proposed PIoU method. Since the detector is optimized by classification and regression losses, we can easily replace the regression loss with the PIoU loss $L_{piou}$ while keeping the original Softmax loss $L_{cls}$ for classification. We use ResNet~\cite{He2016DRL} and VGG~\cite{Simonyan2014VDC} as the backbone models. The oriented anchors are generated by rotating the horizontal anchors by \(k\pi/6\) for \(0\leq k<6\). We adopt the data augmentation strategies introduced in \cite{Liu2016SSS} except cropping, while including rotation (i.e. rotating the image by a random angle sampled in \([0,\pi/6]\)). In the training phase, the input image is resized to 512\(\times\)512. We adopt mini-batch training on 2 GPUs with 8 images per GPU. SGD is adopted to optimize the models with momentum set to 0.9 and weight decay set to 0.0005. All evaluated models are trained for 120 epochs with an initial learning rate of 0.001, which is then divided by 10 at 60 epochs and again at 90 epochs. Other experimental settings are the same as those in \cite{refinedet}.
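The oriented-anchor generation described above can be sketched as follows (a minimal illustration, not the authors' implementation; the $(c_x, c_y, w, h, \theta)$ parametrization is an assumption):

```python
import math

def oriented_anchors(cx, cy, w, h, num_angles=6):
    """Rotate a horizontal anchor (cx, cy, w, h) by k*pi/6 for 0 <= k < 6."""
    return [(cx, cy, w, h, k * math.pi / num_angles) for k in range(num_angles)]

anchors = oriented_anchors(cx=100.0, cy=100.0, w=64.0, h=16.0)
```

Each horizontal anchor thus spawns six oriented anchors with angles $0, \pi/6, \dots, 5\pi/6$.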
\noindent\textbf{Anchor-free OBB Detector:}
\label{approach:anchorfree}
To extend anchor-free frameworks for detecting OBB, we modify CenterNet~\cite{Zhou2019OAP} by adding an angle dimension regressed by L1-Loss in its overall training objective as our baseline. To evaluate the proposed loss function, in a similar fashion to the anchor-based approach, we replace the regression loss with the PIoU loss $L_{piou}$ while keeping the classification loss $L_{cls}$ the same. Note that CenterNet uses a heatmap to locate the centers of objects. Thus, we do not back-propagate the gradient of the object's center when computing the PIoU loss. We use DLA~\cite{Yu2018Deep} and ResNet~\cite{He2016DRL} as the backbone models. The data augmentation strategies are the same as those for RefineDet-OBB (described above). In the training phase, the input image is resized to 512\(\times\)512. We adopt mini-batch training on 2 GPUs with 16 images per GPU. ADAM is adopted to optimize the models. All evaluated models are trained for 120 epochs with an initial learning rate of 0.0005, which is then divided by 10 at 60 epochs and again at 90 epochs. Other settings are the same as those in \cite{Zhou2019OAP}.
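The extra angle dimension turns each predicted box into an OBB $(c_x, c_y, w, h, \theta)$. A sketch of recovering the four corner points from this parametrization (our own illustrative helper, not code from either detector):

```python
import numpy as np

def obb_corners(cx, cy, w, h, theta):
    """Corner points of an oriented box, rotated by theta about its center."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    base = np.array([[-w / 2, -h / 2], [ w / 2, -h / 2],
                     [ w / 2,  h / 2], [-w / 2,  h / 2]])
    # rotate each corner about the origin, then translate to the box center
    return base @ R.T + np.array([cx, cy])
```

At $\theta=0$ this reduces to the usual axis-aligned corners, and $\theta=\pi/2$ swaps the roles of $w$ and $h$.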
\subsection{Ablation Study}
\label{exp:evaluation:anchorbased}
In this section, we investigate the impact of our design settings of the proposed method, and conduct several controlled experiments on DOTA~\cite{Xia2018DAS} and PASCAL VOC~\cite{Everingham2015TPV} datasets.
\noindent{\bf Comparison on different parameters:}
In Eq.~\ref{equ:model:kernel}, $k$ is an adjustable factor in our kernel function to control the sensitivity of each pixel. In order to evaluate its influence as well as to find a proper value for the remaining experiments, we conduct a set of experiments by varying $k$ values based on DOTA~\cite{Xia2018DAS} dataset with the proposed anchor-based framework. To simplify discussions, results of $k=5, 10, 15$ are detailed in Table~\ref{tab:Different_hyperparameter} while their distributions can be visualized in Fig.~\ref{fig:pixel_au}. We finally select $k=10$ for the rest of the experiments since it achieves the best accuracy.
\noindent{\bf Comparison for oriented bounding box:} Based on the DOTA~\cite{Xia2018DAS} dataset, we compare the proposed PIoU loss with the commonly used L1, SmoothL1 and L2 losses. For fair comparisons, we fix the backbone to VGGNet~\cite{Simonyan2014VDC} and build the network based on FPN~\cite{Lin2017FPN}. Table~\ref{tab:losses:dota} details the comparisons and we can clearly see that the proposed PIoU loss improves the detection performance by around 3.5\%. HPIoU (Hard PIoU) loss is the simplified PIoU loss using Eq.~\ref{equ:model:optmize}. Its performance is slightly reduced but still comparable to PIoU loss, so HPIoU loss can be a viable option in practice as it has lower computational complexity. We also observe that the proposed PIoU costs 15-20\% more training time than the other three loss functions, which is still acceptable in practice, and that HPIoU trains faster than PIoU. This observation verifies the theoretical analysis and usability of Eq.~\ref{equ:model:optmize}.
\begin{table}[t!]
\small
\centering
\setlength{\tabcolsep}{20pt}
\caption{Comparison between different sensitivity factor $k$ in Eq.~\ref{equ:model:kernel} for PIoU loss on DOTA dataset. RefineDet~\cite{refinedet} is used as the detection model.}
\vspace{-1mm}
\begin{tabular}{cccc}
\hline
$k$ & AP & AP\(_{50}\) & AP\(_{75}\) \\ \hline
5 & 46.88 & 59.03 & 34.73 \\
10 & 54.24 & 67.89 & 40.59 \\
15 & 53.41& 65.97 & 40.84 \\ \hline
\end{tabular}
\vspace{-0.2em}
\label{tab:Different_hyperparameter}
\end{table}
\begin{table}[t!]
\small
\centering
\setlength{\tabcolsep}{12pt}
\caption{Comparison between different losses for oriented bounding box on DOTA dataset. RefineDet~\cite{refinedet} is used as the detection model. HPIoU (Hard PIoU) loss refers to the PIoU loss simplified by Eq.~\ref{equ:model:optmize}. Training time is estimated in hours.}
\vspace{-1mm}
\begin{tabular}{lcccc}
\hline
Loss & AP & AP\(_{50}\) & AP\(_{75}\) & Training Time \\ \hline
L1 Loss & 50.66 & 64.14 & 37.18 & 20 \\
L2 Loss & 49.70 & 62.74 & 36.65 & 20 \\
SmoothL1 Loss & 51.46 & 65.68 & 37.25 & 21.5 \\
\textbf{PIoU Loss} & \textbf{54.24} & \textbf{67.89} & \textbf{40.59} & \textbf{25.7} \\
\textbf{HPIoU Loss} & \textbf{53.37} & \textbf{66.38} & \textbf{40.36} & \textbf{24.8} \\ \hline
\end{tabular}
\vspace{-0.2em}
\label{tab:losses:dota}
\end{table}
\begin{table}[t!]
\small
\centering
\setlength{\tabcolsep}{7pt}
\caption{Comparison between different losses for horizontal bounding box on PASCAL VOC2007 dataset. SSD~\cite{Liu2016SSS} is used as the detection model.}
\vspace{-1mm}
\begin{tabular}{lccccccccccc}
\hline
Loss & AP & AP\(_{50}\) & AP\(_{60}\) & AP\(_{70}\) &AP\(_{80}\) & AP\(_{90}\) \\
\hline
SmoothL1 Loss & 48.8 & 79.8 & 72.9 & 60.6 & 40.3 & 10.2 \\
GIoU Loss~\cite{Rezatofighi2019GIO} & 49.9 & 79.8 & 74.1 & 63.2 & 41.9 & 12.4 \\
\textbf{PIoU Loss} & \textbf{50.3} & \textbf{80.1} & \textbf{74.9} & \textbf{63.0} & \textbf{42.5} & \textbf{12.2} \\
\hline
\end{tabular}
\vspace{-3mm}
\label{tab:losses:voc}
\end{table}
\noindent{\bf Comparison for horizontal bounding box:} Besides, we also compare the PIoU loss with SmoothL1 loss and GIoU loss~\cite{Rezatofighi2019GIO} for horizontal bounding box on PASCAL VOC dataset~\cite{Everingham2015TPV}. In Table \ref{tab:losses:voc}, we observe that the proposed PIoU loss is still better than SmoothL1 loss and GIoU loss for horizontal bounding box regression, particularly at those AP metrics with high IoU threshold. Note that the GIoU loss is designed only for horizontal bounding box while the proposed PIoU loss is more robust and well suited for both horizontal and oriented bounding box. Together with the results in Table \ref{tab:losses:dota}, we observe the strong generalization ability and effectiveness of the proposed PIoU loss.
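For reference, the GIoU metric used in this comparison extends IoU with a penalty based on the smallest enclosing box \cite{Rezatofighi2019GIO}; a minimal axis-aligned sketch:

```python
def giou(box_a, box_b):
    """GIoU for axis-aligned boxes (x1, y1, x2, y2):
    IoU minus the fraction of the smallest enclosing box not covered by the union."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    # smallest enclosing axis-aligned box
    c = (max(ax2, bx2) - min(ax1, bx1)) * (max(ay2, by2) - min(ay1, by1))
    return inter / union - (c - union) / c
```

Unlike plain IoU, GIoU remains informative (negative) for non-overlapping boxes, but it is only defined here for horizontal boxes, which is why PIoU is needed in the oriented case.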
\subsection{Benchmark Results}
\begin{table}[t]
\footnotesize
\centering
\setlength{\tabcolsep}{3pt}
\caption{Detection results on Retail50K dataset. The PIoU loss is evaluated on RefineDet~\cite{refinedet} and CenterNet~\cite{Zhou2019OAP} with different backbone models.}
\vspace{-2mm}
\begin{tabular}{lcccccc}
\hline
Method & Backbone & AP & AP\(_{50}\) & AP\(_{75}\) & Time (ms) & FPS \\ \hline
RefineDet-OBB~\cite{refinedet} & ResNet-50 & 53.96 & 74.15 & 33.77 & 142 & 7 \\
\textbf{RefineDet-OBB+PIoU} & ResNet-50 & \textbf{61.78} & \textbf{80.17} & \textbf{43.39} & \textbf{142} & \textbf{7} \\
RefineDet-OBB~\cite{refinedet} & ResNet-101 & 55.46 & 77.05 & 33.87 & 167 & 6 \\
\textbf{RefineDet-OBB+PIoU} & ResNet-101 & \textbf{63.00} & \textbf{79.08} & \textbf{46.01} & \textbf{167} & \textbf{6} \\ \hline
CenterNet-OBB~\cite{Zhou2019OAP} & ResNet18 & 54.44 & 76.58 & 32.29 & 7 & 140 \\
\textbf{CenterNet-OBB+PIoU} & ResNet18 & \textbf{61.02} & \textbf{87.19} & \textbf{34.85} & \textbf{7} & \textbf{140} \\
CenterNet-OBB~\cite{Zhou2019OAP} & DLA-34 & 56.13 & 78.29 & 33.97 & 18.18 & 55 \\
\textbf{CenterNet-OBB+PIoU} & DLA-34 & \textbf{61.64} & \textbf{88.47} & \textbf{34.80} & \textbf{18.18} & \textbf{55} \\\hline
\end{tabular}
\label{tab:exp:eval:potential}
\end{table}
\begin{table}[t!]
\footnotesize
\centering
\setlength{\tabcolsep}{5pt}
\caption{Detection results on HRSC2016 dataset. \emph{Aug.} indicates data augmentation. \emph{Size} means the image size that used for training and testing.}
\vspace{-2mm}
\begin{tabular}{llcccc}
\hline
Method & Backbone & Size & Aug. & mAP &FPS \\ \hline
R$^2$CNN~\cite{Jiang2017RRR} & ResNet101 & 800 $\times$ 800 & $\times$ & 73.03 & 2\\
RC1 \& RC2~\cite{liu2017high} & VGG-16 & - & - & 75.7 & \(<\)1fps \\
RRPN~\cite{Ma2018AST} & ResNet101 & 800 $\times$ 800 & $\times$ & 79.08 & 3.5\\
R\(^2\)PN~\cite{zhang2018toward} & VGG-16 & - & \(\surd\) & 79.6 & \(<\)1fps \\
RetinaNet-H~\cite{Yang2019RRS} & ResNet101 & 800 $\times$ 800 & $\surd$ & 82.89 & 14\\
RetinaNet-R~\cite{Yang2019RRS} & ResNet101 & 800 $\times$ 800 & $\surd$ & 89.18 & 10\\
RoI-Transformer~\cite{Jian2019LRT} & ResNet101 & 512 $\times$ 800 & $\times$ & 86.20 & - \\ \hline
\multirow{3}{*}{R$^3$Det~\cite{Yang2019RRS}}
& ResNet101 & 300 $\times$ 300 & $\surd$ & 87.14 & 18\\
& ResNet101 & 600 $\times$ 600 & $\surd$ & 88.97 & 15\\
& ResNet101 & 800 \(\times\) 800 & \(\surd\) & 89.26 & 12\\
\hline \hline
CenterNet-OBB~\cite{Zhou2019OAP} & ResNet18 & 512 $\times$ 512 & $\surd$ & 67.73 & 140\\
\textbf{CenterNet-OBB+PIoU} & \textbf{ResNet18} & \textbf{512 $\times$ 512} & $\surd$ & \textbf{78.54} & \textbf{140}\\
CenterNet-OBB~\cite{Zhou2019OAP} & ResNet101 & 512 $\times$ 512 & $\surd$ & 77.43 & 45\\
\textbf{CenterNet-OBB+PIoU} & \textbf{ResNet101} & \textbf{512 $\times$ 512} & $\surd$ & \textbf{80.32} & \textbf{45}\\
CenterNet-OBB~\cite{Zhou2019OAP} & DLA-34 & 512 $\times$ 512 & $\surd$ & 87.98 & 55\\
\textbf{CenterNet-OBB+PIoU} & \textbf{DLA-34} & \textbf{512 $\times$ 512} & $\surd$ & \textbf{89.20} & \textbf{55}\\ \hline
\end{tabular}
\vspace{-1mm}
\label{tab:state-of-the-art:hrsc2016}
\end{table}
\noindent\textbf{Retail50K:}
We evaluate our PIoU loss with two OBB-detectors (\textit{i.e.} the OBB versions of RefineDet~\cite{refinedet} and CenterNet~\cite{Zhou2019OAP}) on the Retail50K dataset. The experimental results are shown in Table \ref{tab:exp:eval:potential}. We observe that both detectors achieve significant improvements with the proposed PIoU loss (\(\sim\) 7\% improvement for RefineDet-OBB and \(\sim\) 6\% improvement for CenterNet-OBB). One reason for such notable improvements is that the proposed PIoU loss is much better suited for oriented objects than the traditional regression losses. Moreover, the improvements from PIoU loss on Retail50K are more pronounced than those on DOTA (\textit{cf.} Table \ref{tab:losses:dota}), which suggests that the proposed PIoU loss is particularly useful for objects with high aspect ratios in complex environments. This verifies the effectiveness of the proposed method.
\noindent\textbf{HRSC2016:}
The HRSC2016 dataset~\cite{Liu2016SRB} contains 1070 images from two scenarios: ships at sea and ships close inshore. We evaluate the proposed PIoU with CenterNet~\cite{Zhou2019OAP} on different backbones and compare them with several state-of-the-art detectors. The experimental results are shown in Table \ref{tab:state-of-the-art:hrsc2016}. It can be seen that CenterNet-OBB+PIoU outperforms all other methods except R\(^3\)Det-800, which is because we use a smaller image size (512\(\times\)512) than R\(^3\)Det-800 (800\(\times\)800). Thus, our detector preserves a competitive detection performance with far better efficiency (55 fps \textit{vs.} 12 fps). This exemplifies the strength of the proposed PIoU loss on OBB detectors.
\noindent\textbf{DOTA:}
The DOTA dataset~\cite{Xia2018DAS} contains 2806 aerial images from different sensors and platforms, annotated via crowd-sourcing. Each image is about 4000\(\times\)4000 pixels and contains objects of different scales, orientations and shapes. Note that images in DOTA are too large to be fed directly to CNN-based detectors. Thus, similar to the strategy in \cite{Xia2018DAS}, we crop a series of 512\(\times\)512 patches from the original image with the stride set to 256. For testing, the detection results are obtained from the DOTA evaluation server. The detailed performance for each category is reported so that deeper observations can be made. We use the same short names, benchmarks and forms as the existing methods in~\cite{Yang2019RRS} to evaluate the effectiveness of the PIoU loss on this dataset. The final results are shown in Table~\ref{tab:state-of-the-art:dota}.
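The cropping strategy described above can be sketched in a few lines. The helper below is illustrative only; the function name and the border-clamping/padding behaviour are our own choices, not code from the DOTA toolkit:

```python
import numpy as np

def crop_patches(image, patch=512, stride=256):
    """Slide a patch x patch window over a large image with the given
    stride, clamping the last window to the image border and padding
    only when the image itself is smaller than one patch."""
    h, w = image.shape[:2]
    patches = []
    for y in range(0, max(h - patch, 0) + stride, stride):
        for x in range(0, max(w - patch, 0) + stride, stride):
            y0, x0 = min(y, max(h - patch, 0)), min(x, max(w - patch, 0))
            win = image[y0:y0 + patch, x0:x0 + patch]
            pad_y, pad_x = patch - win.shape[0], patch - win.shape[1]
            if pad_y or pad_x:  # image smaller than one patch
                win = np.pad(win, ((0, pad_y), (0, pad_x)) + ((0, 0),) * (win.ndim - 2))
            patches.append(((x0, y0), win))
    return patches

patches = crop_patches(np.zeros((1024, 1024, 3)), patch=512, stride=256)
print(len(patches))  # 9: offsets 0, 256 and 512 along each axis
```

Recording the offset of each patch allows per-patch detections to be mapped back to original-image coordinates before evaluation.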
We find that the performance improvements vary among categories. Interestingly, the improvement is more pronounced for categories with high aspect ratios. For example, harbour (HA), ground track field (GTF), soccer-ball field (SBF) and basketball court (BC) all naturally have large aspect ratios, and they clearly benefit from the inclusion of PIoU. Such observations confirm that PIoU can effectively improve the performance of OBB detectors, particularly on objects with high aspect ratios, and again verify the effectiveness of the proposed PIoU loss on OBB detectors. We also find that our baselines are lower than some state-of-the-art results; we conjecture the main reason is that we use a much smaller input size than other methods (512 vs. 1024 on DOTA). However, note that the existing result (89.2 mAP) for HRSC2016 in Table~\ref{tab:state-of-the-art:hrsc2016} already achieves state-of-the-art performance with only a $512\times512$ image size. Thus, the proposed loss function can bring gains even on this strong baseline.
\begin{table*}[t!]
\tiny
\centering
\setlength{\tabcolsep}{0.7pt}
\caption{Detection results on DOTA dataset. We report the detection results for each category to better demonstrate where the performance gains come from.}
\vspace{-1mm}
\begin{tabular}{l|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
Method & Backbone & Size & PL & BD & BR & GTF & SV & LV & SH & TC & BC & ST & SBF & RA & HA & SP & HC &mAP \\ \hline
SSD~\cite{Liu2016SSS} & VGG16 & 512 & 39.8 & 9.1 & 0.6 & 13.2 & 0.3 & 0.4 & 1.1 & 16.2 & 27.6 & 9.2 & 27.2 & 9.1 & 3.0 & 1.1 & 1.0 & 10.6 \\
YOLOV2~\cite{Redmon2016YBF} & DarkNet19 & 416 & 39.6 & 20.3 & 36.6 & 23.4 & 8.9 & 2.1 & 4.8 & 44.3 & 38.4 & 34.7 & 16.0 & 37.6 & 47.2 & 25.5 & 7.5 & 21.4 \\
R-FCN~\cite{Dai2016ROD} & ResNet101 & 800 & 37.8 & 38.2 & 3.6 & 37.3 & 6.7 & 2.6 & 5.6 & 22.9 & 46.9 & 66.0 & 33.4 & 47.2 & 10.6 & 25.2 & 18.0 & 26.8 \\
FR-H~\cite{Ren2015FRT} & ResNet101 & 800 & 47.2 & 61.0 & 9.8 & 51.7 & 14.9 & 12.8 & 6.9 & 56.3 & 60.0 & 57.3 & 47.8 & 48.7 & 8.2 & 37.3 & 23.1 & 32.3 \\
FR-O~\cite{Xia2018DAS} & ResNet101 & 800 & 79.1 & 69.1 & 17.2 & 63.5 & 34.2 & 37.2 & 36.2 & 89.2 & 69.6 & 59.0 & 49. & 52.5 & 46.7 & 44.8 & 46.3 & 52.9 \\
R-DFPN~\cite{yang2018automatic} & ResNet101 & 800 & 80.9 & 65.8 & 33.8 & 58.9 & 55.8 & 50.9 & 54.8 & 90.3 & 66.3 & 68.7 & 48.7 & 51.8 & 55.1 & 51.3 & 35.9 & 57.9 \\
R\(^2\)CNN~\cite{Jiang2017RRR} & ResNet101 & 800 & 80.9 & 65.7 & 35.3 & 67.4 & 59.9 & 50.9 & 55.8 & 90.7 & 66.9 & 72.4 & 55.1 & 52.2 & 55.1 & 53.4 & 48.2 & 60.7 \\
RRPN~\cite{Ma2018AST} & ResNet101 & 800 & 88.5 & 71.2 & 31.7 & 59.3 & 51.9 & 56.2 & 57.3 & 90.8 & 72.8 & 67.4 & 56.7 & 52.8 & 53.1 & 51.9 & 53.6 & 61.0 \\
\hline \hline
RefineDet~\cite{refinedet} & VGG16 & 512 & 80.5 & 26.3 & 33.2 & 28.5 & 63.5 & 75.1 & 78.8 & 90.8 & 61.1 & 65.9 & 12.1 & 23.0 & 50.9 & 50.9 & 22.6 & 50.9 \\
\textbf{RefineDet+PIoU} & VGG16 & 512 & \textbf{80.5} & \textbf{33.3} & \textbf{34.9} & \textbf{28.1} & \textbf{64.9} & \textbf{74.3} & \textbf{78.7} & \textbf{90.9} & \textbf{65.8} & \textbf{66.6} & \textbf{19.5} & \textbf{24.6} & \textbf{51.1} & \textbf{50.8} & \textbf{23.6}& \textbf{52.5} \\
RefineDet~\cite{refinedet}& ResNet101 & 512 & 80.7 & 44.2 & 27.5 & 32.8 & 61.2 & 76.1 & 78.8 & 90.7 & 69.9 & 73.9 & 24.9 & 31.9 & 55.8 & 51.4 & 26.8 & 55.1 \\
\textbf{RefineDet+PIoU} & ResNet101 & 512 & \textbf{80.7} & \textbf{48.8} & \textbf{26.1} & \textbf{38.7} & \textbf{65.2} & \textbf{75.5} & \textbf{78.6} & \textbf{90.8} & \textbf{70.4} & \textbf{75.0} & \textbf{32.0} & \textbf{28.0} & \textbf{54.3} & \textbf{53.7} & \textbf{29.6} & \textbf{56.5}\\
\hline
CenterNet~\cite{Zhou2019OAP} & DLA-34 & 512 & 81.0 & 64.0 & 22.6 & 56.6 & 38.6 & 64.0 & 64.9 & 90.8 & 78.0 & 72.5 & 44.0 & 41.1 & 55.5 & 55.0 & 57.4 & 59.1 \\
\textbf{CenterNet+PIoU} & DLA-34 & 512 & \textbf{80.9} & \textbf{69.7} & \textbf{24.1} & \textbf{60.2} & \textbf{38.3} & \textbf{64.4} & \textbf{64.8} & \textbf{90.9} & \textbf{77.2} & \textbf{70.4} & \textbf{46.5} & \textbf{37.1} & \textbf{57.1} & \textbf{61.9} & \textbf{64.0} & \textbf{60.5} \\
\hline
\end{tabular}
\label{tab:state-of-the-art:dota}
\vspace{-2mm}
\end{table*}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{images/results.pdf}
\vspace{-2mm}
\caption{Samples results using PIoU (red boxes) and SmoothL1 (yellow boxes) losses on Retail50K (first row), HRSC2016 (second row) and DOTA (last row) datasets.}
\label{fig:exp:baseline:ourmethod:vis}
\vspace{-4mm}
\end{figure*}
In order to visually verify these performance improvements, we employ the anchor-based model RefineDet~\cite{refinedet} and conduct two independent experiments using the PIoU and SmoothL1 losses. The experiments are conducted on all three datasets (\textit{i.e.} Retail50K, DOTA~\cite{Xia2018DAS}, HRSC2016~\cite{Liu2016SRB}) and selected visual results are presented in Figure~\ref{fig:exp:baseline:ourmethod:vis}. We observe that the OBB detector with the PIoU loss (red boxes) produces more robust and accurate detections than the one with the SmoothL1 loss (yellow boxes) on all three datasets, particularly on Retail50K, which demonstrates its strength in improving the performance on oriented objects with high aspect ratios. Here, we also evaluate the proposed HPIoU loss with the same configuration as PIoU. In our experiments, the performance of the HPIoU loss is slightly lower than that of the PIoU loss (by 0.87, 1.41 and 0.18 mAP on DOTA, Retail50K and HRSC2016, respectively), but still better than the smooth-L1 loss while training faster than the PIoU loss. Overall, the performance of HPIoU is consistent on all three datasets.
\section{Introduction}
\label{intro}
Object detection is a fundamental task in computer vision and many detectors~\cite{Ren2015FRT,Liu2016SSS,Lin2017FLF,Law2018CDO} using convolutional neural networks have been proposed in recent years. In spite of their state-of-the-art performance, those detectors have inherent limitations on rotated and densely crowded objects. For example, the bounding boxes (BB) of rotated or perspective-transformed objects usually contain a significant amount of background that could mislead the classifiers. When bounding boxes have large overlapping areas, it is difficult to separate densely crowded objects. Because of these limitations, researchers have extended existing detectors with oriented bounding boxes (OBB). In particular, as opposed to a BB, which is denoted by $(c_{x}, c_{y}, w, h)$, an OBB is composed of $(c_{x}, c_{y}, w, h, \theta)$ where $(c_{x}, c_{y})$, $(w, h)$ and $\theta$ are the center point, size and rotation of the OBB, respectively. As a result, OBBs can compactly enclose the target object so that rotated and densely crowded objects can be better detected and classified.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{images/ps.pdf}
\caption{Comparison between PIoU and SmoothL1~\cite{Ren2015FRT} losses. (a) Loss values between IoU and SmoothL1 are totally different while their SmoothL1 loss values are the same. (b) The proposed PIoU loss is consistent and correlated with IoU.}
\label{fig:intro:ps}
\end{figure}
Existing OBB-based approaches are mostly built on anchor-based frameworks by introducing an additional angle dimension optimized by a distance loss~\cite{Liu2017LAR,Li2018MRB,Liao2018RSR,Jian2019LRT,Yang2019RRS,Yang2019STM} on the parameter tuple $(c_{x}, c_{y}, w, h, \theta)$. While OBB has been primarily used for simple rotated target detection in aerial images~\cite{Li2018MRB,Zhu2015ORO,Razakarivony2016VDI,Liu2016SRB,Liu2015FMV,Benedek2012BDM,Xia2018DAS}, the detection performance in more complex and close-up environments is limited. One of the reasons is that the distance loss in those approaches, \emph{e.g.} the SmoothL1 loss~\cite{Ren2015FRT}, mainly focuses on minimizing the angle error rather than the global IoU. As a result, it is insensitive to targets with high aspect ratios. An intuitive explanation is that object parts far from the center $(c_{x}, c_{y})$ are not properly enclosed even though the angle distance may be small. For example, \cite{Liao2018RSR,Jian2019LRT} employ a regression branch to extract rotation-sensitive features so that the angle error of the OBB can be modelled using a transformer. However, as shown in Figure~\ref{fig:intro:ps}(a), the IoUs between the predicted boxes (green) and the ground truth (red) are very different while their losses are the same.
To solve the problem above, we introduce a novel loss function, named \textit{Pixels-IoU (PIoU) Loss}, to increase both the angle and IoU accuracy for OBB regression. In particular, as shown in Figure~\ref{fig:intro:ps}(b), the PIoU loss directly reflects the IoU and its local optimum compared to standard distance loss. The rationale behind this is that the IoU loss normally achieves better performance than the distance loss~\cite{Yu2016UAA,Rezatofighi2019GIO}. However, the IoU calculation between OBBs is more complex than BBs since the shape of intersecting OBBs could be any polygon of less than eight sides. For this reason, the PIoU, a continuous and derivable function, is proposed to jointly correlate the five parameters of OBB for checking the position (inside or outside IoU) and the contribution of each pixel. The PIoU loss can be easily calculated by accumulating the contribution of interior overlapping pixels. To demonstrate its effectiveness, the PIoU loss is evaluated on both anchor-based and anchor-free frameworks in the experiments.
To overcome the limitations of existing OBB-based approaches, we encourage the community to adopt more robust OBB detectors in a shift from conventional aerial imagery to more complex domains. We collected a new benchmark dataset, \emph{Retail50K}, to reflect the challenges of detecting oriented targets with high aspect ratios, heavy occlusions, and complex backgrounds. Experiments show that the proposed frameworks with PIoU loss not only have promising performances on aerial images, but they can also effectively handle new challenges in Retail50K.
The contributions of this work are summarized as follows: (1) We propose a novel loss function, PIoU loss, to improve the performance of oriented object detection in highly challenging conditions such as high aspect ratios and complex backgrounds. (2) We introduce a new dataset, Retail50K, to spur the computer vision community towards innovating and adapting existing OBB detectors to cope with more complex environments. (3) Our experiments demonstrate that the proposed PIoU loss can effectively improve the performances for both anchor-based and anchor-free OBB detectors in different datasets.
\section*{Acknowledgements}
The paper is supported in part by the following grants: China Major Project for New Generation of AI Grant (No.2018AAA0100400), National Natural Science Foundation of China (No. 61971277). The work is also supported by funding from Clobotics under the Joint Research Program of Smart Retail.
\bibliographystyle{splncs04}
\section{Pixels-IoU (PIoU) Loss}
\label{approach}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{images/general_framework.pdf}
\vspace{-2mm}
\caption{Our proposed PIoU is a general concept that is applicable to most OBB-based frameworks. All possible predicted (green) and ground-truth (red) OBB pairs are matched to compute their PIoU. Building on that, the final PIoU loss is calculated using Eq.~\ref{equ:model:finalloss}.}
\vspace{-2mm}
\label{fig:model:general_framework}
\end{figure*}
In this section, we present the PIoU loss in detail.
For a given OBB $\bm{b}$ encoded by $(c_{x}, c_{y}, w, h, \theta)$, an ideal loss function should effectively guide the network to maximize the IoU and thereby the error of $\bm{b}$ can be minimized. Towards this goal, we first explain the IoU method. Generally speaking, an IoU function should accurately compute the area of an OBB as well as its intersection with another box. Since OBB and the intersection area are constructed by pixels in image space, their areas are approximated by the number of interior pixels. Specifically, as shown in Figure~\ref{fig:box_model}, $\bm{t}_{i,j}$ (the purple point) is the intersection point between the mid-vertical line and its perpendicular line to pixel $\bm{p}_{i,j}$ (the green point). As a result, a triangle is constructed by OBB center $\bm{c}$ (the red point), $\bm{p}_{i,j}$ and $\bm{t}_{i,j}$. The length of each triangle side is denoted by $d_{i,j}^w$, $d_{i,j}^h$ and $d_{i,j}$. To judge the relative location (inside or outside) between $\bm{p}_{i,j}$ and $\bm{b}$, we define the binary constraints as follows:
\begin{equation}
\delta(\bm{p}_{i,j}|\bm{b})=\left\{
\begin{aligned}
1, & \quad d_{i,j}^w \leq \frac{w}{2},d_{i,j}^h \leq \frac{h}{2} \\
0, & \quad otherwise
\end{aligned}
\right.
\label{con:inside_pixel}
\end{equation}
where $d_{ij}$ denotes the L2-norm distance between pixel~$(i, j)$ and the OBB center~$(c_x, c_y)$, and $d_{ij}^w$ and $d_{ij}^h$ denote the components of $d_{ij}$ along the box's horizontal and vertical directions, respectively:
\begin{align}
\small
d_{ij}= & \ d(i,j)=\sqrt{(c_x-i)^2+(c_y-j)^2} \\
d_{ij}^w= & \ |d_{ij}\cos\beta| \\
d_{ij}^h= & \ |d_{ij}\sin\beta| \\
\beta= & \left\{
\begin{aligned}
\theta+\arccos\frac{c_x-i}{d_{ij}}, & \quad c_y - j \geq 0 \\
\theta-\arccos\frac{c_x-i}{d_{ij}}, & \quad c_y - j \textless 0\\
\end{aligned}
\right.
\label{eq:distances}
\end{align}
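To make the inside/outside test concrete, the following minimal NumPy sketch implements the binary constraint and the distance decomposition above. It is an illustration with hypothetical helper names, not the authors' implementation:

```python
import numpy as np

def inside_obb(px, py, box):
    """Return True iff pixel (px, py) lies inside the OBB
    box = (cx, cy, w, h, theta), using the triangle decomposition
    d -> (d^w, d^h) described in the text."""
    cx, cy, w, h, theta = box
    d = np.hypot(cx - px, cy - py)
    if d == 0:
        return True  # the box center is always inside
    acos = np.arccos(np.clip((cx - px) / d, -1.0, 1.0))
    beta = theta + acos if cy - py >= 0 else theta - acos
    dw, dh = abs(d * np.cos(beta)), abs(d * np.sin(beta))
    return dw <= w / 2 and dh <= h / 2

# Axis-aligned sanity check: box centered at (10, 10), 8 wide, 4 tall.
box = (10.0, 10.0, 8.0, 4.0, 0.0)
print(inside_obb(13, 11, box))  # True  (|dx| = 3 <= 4, |dy| = 1 <= 2)
print(inside_obb(10, 13, box))  # False (|dy| = 3 > 2)
```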
\begin{figure}[t]
\centering
\subfigure[]{
\label{fig:box_model}
\includegraphics[width=0.35\hsize]{images/inside.pdf}\;}
\hspace{5mm}
\subfigure[]{
\label{fig:pixel_au}
\includegraphics[width=0.45\hsize]{images/contribution_distribution.pdf}}
\vspace{-3mm}
\caption{General idea of the IoU function. (a) Components involved in determining the relative position (inside or outside) between a pixel $\bm{p}$ (green point) and an OBB $\bm{b}$ (red rectangle). Best viewed in color. (b) Distribution of the kernelized pixel contribution $F(\bm{p}_{i,j}|\bm{b})$ with different distances between $\bm{p}_{i,j}$ and box center $\bm{c}$. We see that $F(\bm{p}_{i,j}|\bm{b})$ is continuous and differentiable due to Eq.~\ref{equ:model:kernel}. Moreover, it approximately reflects the value distribution in Eq.~\ref{con:inside_pixel} when the pixels $\bm{p}_{i,j}$ are inside and outside $\bm{b}$.}
\vspace{-3mm}
\label{fig:model:piou}
\end{figure}
Let \(B_{\bm{b},\bm{b}^{\prime}}\) denote the smallest horizontal bounding box that covers both \(\bm{b}\) and \(\bm{b}^{\prime}\). We can then compute the intersection area $S_{\bm{b} \cap \bm{b}^{\prime}}$ and union area $S_{\bm{b} \cup \bm{b}^{\prime}}$ between two OBBs $\bm{b}$ and $\bm{b}^{\prime}$ using the statistics of all pixels in \(B_{\bm{b},\bm{b}^{\prime}}\):
\begin{align}
S_{\bm{b} \cap \bm{b}^{\prime}} = \sum_{\bm{p}_{i,j}\in B_{\bm{b},\bm{b}^{\prime}}}&\delta(\bm{p}_{i,j}|\bm{b})\delta(\bm{p}_{i,j}|\bm{b}^{\prime}) \\
S_{\bm{b} \cup \bm{b}^{\prime}}= \sum_{\bm{p}_{i,j}\in B_{\bm{b},\bm{b}^{\prime}}}\delta(\bm{p}_{i,j}|\bm{b})+&\delta(\bm{p}_{i,j}|\bm{b}^{\prime})-\delta(\bm{p}_{i,j}|\bm{b})\delta(\bm{p}_{i,j}|\bm{b}^{\prime})
\label{equ:model:oldious}
\end{align}
The final IoU of $\bm{b}$ and $\bm{b}^{\prime}$ can be calculated by dividing $S_{\bm{b} \cap \bm{b}^{\prime}}$ by $S_{\bm{b} \cup \bm{b}^{\prime}}$. However, we observe that Eq.~\ref{con:inside_pixel} is not a continuous and differentiable function. As a result, back propagation (BP) cannot utilize an IoU-based loss for training. To solve this problem, we approximate Eq.~\ref{con:inside_pixel} by $F(\bm{p}_{i,j}|\bm{b})$, the product of two kernels:
\begin{equation}
\begin{split}
F(\bm{p}_{i,j}|\bm{b}) = K(d_{i,j}^w, w)K(d_{i,j}^h, h) \\
\end{split}
\label{equ:model:pixelgate}
\end{equation}
Particularly, the kernel function $K(d, s)$ is calculated by:
\begin{equation}
K(d,s)=1-\frac{1}{1+e^{-k(d-s)}}
\label{equ:model:kernel}
\end{equation}
where $k$ is an adjustable factor to control the sensitivity of the target pixel $\bm{p}_{i,j}$. The key idea of Eq.~\ref{equ:model:pixelgate} is to obtain the contribution of pixel $\bm{p}_{i,j}$ using the kernel function in Eq.~\ref{equ:model:kernel}. Since the employed kernel is calculated from the relative position (distance and angle of the triangle in Figure~\ref{fig:box_model}) between $\bm{p}_{i,j}$ and $\bm{b}$, the intersection area $S_{\bm{b} \cap \bm{b}^{\prime}}$ and union area $S_{\bm{b} \cup \bm{b}^{\prime}}$ are inherently sensitive to both OBB rotation and size. In Figure~\ref{fig:pixel_au}, we find that $F(\bm{p}_{i,j}|\bm{b})$ is continuous and differentiable. More importantly, it behaves similarly to Eq.~\ref{con:inside_pixel}: $F(\bm{p}_{i,j}|\bm{b})$ is close to 1.0 when the pixel $\bm{p}_{i,j}$ is inside $\bm{b}$ and close to 0 otherwise. Following Eq.~\ref{equ:model:pixelgate}, the intersection area $S_{\bm{b} \cap \bm{b}^{\prime}}$ and union area $S_{\bm{b} \cup \bm{b}^{\prime}}$ between $\bm{b}$ and $\bm{b}^{\prime}$ are approximated by:
\begin{align}
\label{equ:model:newious}
S_{\bm{b} \cap \bm{b}^{\prime}} \approx \sum_{\bm{p}_{i,j}\in B_{\bm{b},\bm{b}^{\prime}}}&F(\bm{p}_{i,j}|\bm{b})F(\bm{p}_{i,j}|\bm{b}^{\prime}) \\
\label{equ:S_union}
S_{\bm{b} \cup \bm{b}^{\prime}} \approx \sum_{\bm{p}_{i,j}\in B_{\bm{b},\bm{b}^{\prime}}}F(\bm{p}_{i,j}|\bm{b})+&F(\bm{p}_{i,j}|\bm{b}^{\prime})-F(\bm{p}_{i,j}|\bm{b})F(\bm{p}_{i,j}|\bm{b}^{\prime})
\end{align}
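Numerically, the kernel of Eq.~\ref{equ:model:kernel} acts as a soft gate. The snippet below (with a hypothetical sensitivity $k=10$; the paper does not fix $k$ here) illustrates that $K(d,s)$ is close to 1 for $d$ well below $s$, exactly 0.5 at $d=s$, and close to 0 for $d$ well above $s$:

```python
import numpy as np

def kernel(d, s, k=10.0):
    """K(d, s) = 1 - 1/(1 + exp(-k (d - s))): a soft step that is
    ~1 when d < s and ~0 when d > s; k sets the transition sharpness."""
    return 1.0 - 1.0 / (1.0 + np.exp(-k * (d - s)))

print(round(kernel(0.0, 4.0), 3))  # ~1.0 well inside
print(kernel(4.0, 4.0))            # 0.5 exactly at the boundary
print(round(kernel(8.0, 4.0), 3))  # ~0.0 well outside
```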
In practice, to reduce the computational complexity of Eq.~\ref{equ:S_union}, $S_{\bm{b} \cup \bm{b}^{\prime}}$ can be approximated by a simpler form:
\begin{equation}
S_{\bm{b} \cup \bm{b}^{\prime}} = w \times h + w^{\prime} \times h^{\prime} - S_{\bm{b} \cap \bm{b}^{\prime}}
\label{equ:model:optmize}
\end{equation}
where $(w,h)$ and $(w^{\prime},h^{\prime})$ are the size of OBBs $\bm{b}$ and $\bm{b}^{\prime}$, respectively. Our experiment in Section~\ref{exp:evaluation:anchorbased} shows that Eq.~\ref{equ:model:optmize} can effectively reduce the complexity of Eq.~\ref{equ:model:newious} while preserving the overall detection performance. With these terms, our proposed Pixels-IoU $(PIoU)$ is computed as:
\begin{equation}
PIoU(\bm{b},\bm{b}^{\prime}) =\frac{S_{\bm{b} \cap \bm{b}^{\prime}}}{S_{\bm{b} \cup \bm{b}^{\prime}}}
\label{equ:model:piou}
\end{equation}
Let $\bm{b}$ denote the predicted box and $\bm{b}^{\prime}$ the ground-truth box. A pair $(\bm{b},\bm{b}^{\prime})$ is regarded as positive if the predicted box $\bm{b}$ is based on a positive anchor and \(\bm{b}^{\prime}\) is the matched ground-truth box (an anchor is matched with a ground-truth box if the IoU between them is larger than 0.5). We use \(M\) to denote the set of all positive pairs. With the goal of maximizing the PIoU between $\bm{b}$ and $\bm{b}^{\prime}$, the proposed PIoU loss is calculated by:
\begin{equation}
L_{piou}=\frac{-\sum_{(\bm{b},\bm{b}^{\prime})\in M}\ln{PIoU(\bm{b},\bm{b}^{\prime})}}{|M|}
\label{equ:model:finalloss}
\end{equation}
Theoretically, Eq.~\ref{equ:model:finalloss} still works if there is no intersection between $\bm{b}$ and $\bm{b}^{\prime}$. This is because $PIoU(\bm{b},\bm{b}^{\prime}) > 0$ based on Eq.~\ref{equ:model:kernel} and the gradients still exist in this case. Moreover, the proposed PIoU also works for horizontal bounding box regression. Specifically, we can simply set $\theta=0$ in Eq.~\ref{eq:distances} for this purpose. In Section~\ref{exp}, we experimentally validate the usability of PIoU for horizontal bounding box regression.
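Putting the pieces together, a compact (unoptimized) sketch of the PIoU loss for a single box pair might look as follows. It is illustrative only: the loose covering window, the gating at the half-sizes $w/2$ and $h/2$ (chosen to match the binary test of Eq.~\ref{con:inside_pixel}), and the sensitivity $k=10$ are our assumptions, not the authors' released implementation.

```python
import numpy as np

K_SENS = 10.0  # sensitivity factor k of the kernel; a hypothetical setting

def soft_gate(d, s):
    # Kernel K(d, s): ~1 when d < s, ~0 when d > s.
    return 1.0 - 1.0 / (1.0 + np.exp(-K_SENS * (d - s)))

def pixel_weight(xs, ys, box):
    # F(p|b): product of the width and height gates; we gate at the
    # half-sizes w/2 and h/2 to match the binary inside test.
    cx, cy, w, h, theta = box
    dx, dy = cx - xs, cy - ys
    d = np.hypot(dx, dy)
    cos_arg = np.clip(dx / np.maximum(d, 1e-9), -1.0, 1.0)
    beta = np.where(dy >= 0, theta + np.arccos(cos_arg), theta - np.arccos(cos_arg))
    dw, dh = np.abs(d * np.cos(beta)), np.abs(d * np.sin(beta))
    return soft_gate(dw, w / 2.0) * soft_gate(dh, h / 2.0)

def piou_loss(pred, gt):
    """-ln PIoU for a single positive OBB pair; illustrative sketch.
    The covering window is a loose axis-aligned box around both
    centers rather than the exact smallest covering box."""
    r = max(max(b[2], b[3]) for b in (pred, gt))
    xs, ys = np.meshgrid(
        np.arange(min(pred[0], gt[0]) - r, max(pred[0], gt[0]) + r),
        np.arange(min(pred[1], gt[1]) - r, max(pred[1], gt[1]) + r),
    )
    f1, f2 = pixel_weight(xs, ys, pred), pixel_weight(xs, ys, gt)
    inter = np.sum(f1 * f2)
    union = pred[2] * pred[3] + gt[2] * gt[3] - inter  # simplified union
    return -np.log(inter / union)

b = (20.0, 20.0, 16.0, 6.0, 0.3)
rotated = (20.0, 20.0, 16.0, 6.0, 1.0)
# A perfect match yields a much smaller loss than a mis-rotated box.
print(piou_loss(b, b) < piou_loss(b, rotated))  # True
```

In a real detector this computation would be batched over all positive pairs and run on GPU; the loop-free per-pixel formulation above is what makes that vectorization straightforward.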
\section{Related Work}
\subsection{Oriented Object Detectors}
Existing oriented object detectors are mostly extended from generic horizontal bounding box detectors by introducing an additional angle dimension. For instance, \cite{Liu2017LAR} presented a rotation-invariant detector based on the one-stage SSD~\cite{Liu2016SSS}. \cite{Li2018MRB} introduced a rotated detector based on the two-stage Faster RCNN~\cite{Ren2015FRT}. \cite{Jian2019LRT} designed an RoI transformer to learn the transformation from BB to OBB, after which rotation-invariant features are extracted. \cite{He2015OOP} formulated a generative probabilistic model to extract OBB proposals; for each proposal, the location, size and orientation are determined by searching for the local maximum likelihood. Other possible ways of extracting OBBs include fitting detected masks~\cite{Chen2019FVO,He2017MRC} and regressing OBBs with anchor-free models~\cite{Zhou2019OAP}, two recent directions in the literature. While these approaches have promising performance on aerial images, they are not well suited to oriented objects with high aspect ratios and complex environments. For this reason, we hypothesize that a new kind of loss is necessary to obtain improvements under such challenging conditions.
For the purpose of comparative evaluation, we implement both anchor-based and anchor-free frameworks as baselines in our experiments. We later show how these models, when equipped with PIoU Loss, can yield better results in both retail and aerial data.
\subsection{Regression Losses}
For bounding box regression, actively used loss functions are Mean Square Error~\cite{Mood1974ITT} (MSE, L2 loss, the sum of squared differences between target and predicted variables), Mean Absolute Error~\cite{Willmott2005AOT} (MAE, L1 loss, the sum of absolute differences between target and predicted variables), Quantile Loss~\cite{Cannon2011QRN} (an extension of MAE that predicts an interval instead of only point predictions), Huber Loss~\cite{Huber1964REO} (basically absolute error, which becomes quadratic when the error is small) and Log-Cosh Loss~\cite{Muller2004OTC} (the logarithm of the hyperbolic cosine of the prediction error). In practice, the losses in commonly used detectors~\cite{Redmon2016YOL,Liu2016SSS,Ren2015FRT} are extended from the base functions above. However, they cannot be used directly for OBBs since the OBB descriptor involves an additional angle dimension.
Besides the base functions, several works introduce IoU losses for horizontal bounding boxes. For instance, \cite{Yu2016UAA} proposes an IoU loss which regresses the four bounds of a predicted box as a whole unit. \cite{Rezatofighi2019GIO} extends the idea of~\cite{Yu2016UAA} by introducing a Generalized Intersection over Union (GIoU) loss for bounding box regression; the main purpose of GIoU is to handle the case in which two polygons have no intersection. \cite{Smit2018LPI} introduces a novel bounding box regression loss based on a set of IoU upper bounds. However, with oriented bounding boxes those approaches become much more complicated and thus hard to implement, while the proposed PIoU loss is much simpler and suitable for both horizontal and oriented boxes. It should be noted that the proposed PIoU loss is different from~\cite{Zhou2019ILF}, in which the IoU is computed based on axis alignment and polygon intersection; our method is more straightforward, \emph{i.e.} the IoU is calculated directly by accumulating the contributions of interior overlapping pixels. Moreover, the proposed PIoU loss is also different from the Mask Loss in Mask RCNN~\cite{He2017MRC}. The mask loss is calculated as the average binary cross-entropy with a per-pixel sigmoid (also called Sigmoid Cross-Entropy Loss). In contrast, our proposed loss is calculated based on the IoU so as to preserve the intersection and union areas between two boxes; in each area, the contributions of pixels are modeled and accumulated depending on their spatial information. Thus, the PIoU loss is more general and sensitive to OBB overlaps.
\section{Retail50K Dataset}
\label{exp:dataset}
OBB detectors have been actively studied for many years and several datasets with such annotations have been proposed~\cite{Xia2018DAS,Benedek2012BDM,Liu2015FMV,Liu2016SRB,Razakarivony2016VDI,Zhu2015ORO,Li2018MRB,He2015OOP}. As shown in Table~\ref{tab:multidatasetcompare}, most of them only focus on aerial images (Figure~\ref{fig:exp:dataset_compare} (a),(b)) while a few are annotated on top of existing datasets such as MSCOCO~\cite{Lin2014MCC}, PASCAL VOC~\cite{Everingham2015TPV} and ImageNet~\cite{Deng2009IAH}. These datasets are suited to evaluating detection performance on simple backgrounds and low aspect ratios. For example, aerial images are typically gray and texture-less. The statistics in~\cite{Xia2018DAS} show that most datasets of aerial images have a wide range of aspect ratios, but around 90\% of these ratios are distributed between 1:1 and 1:4, and very few images contain OBBs with aspect ratios larger than 1:5. Moreover, the aspect ratios of OBBs in PASCAL VOC are mostly close to square (1:1). As a result, it is hard to assess the capability of detectors on objects with high aspect ratios and complex backgrounds using existing datasets. Motivated by this, we introduce a new dataset, namely Retail50K, to advance the research on detecting rotated objects in complex environments. We intend to make it publicly available to the community (\url{https://github.com/clobotics/piou}).
\begin{table}[t]
\small
\centering
\setlength{\tabcolsep}{5pt}
\caption{Comparison between different datasets with OBB annotations. $\approx$ indicates estimates based on selected annotated samples, as full access was not possible.}
\vspace{-2mm}
\begin{tabular}{l|ccccc}
\hline
Dataset & Scenario & Median Ratio & Images & Instances \\ \hline
SZTAKI~\cite{Benedek2012BDM} & Aerial & $\approx$1:3 & 9 & 665 \\
VEDAI~\cite{Razakarivony2016VDI} & Aerial & 1:3 & 1268 & 2950 \\
UCAS-AOD~\cite{Zhu2015ORO} & Aerial & 1:1.3 & 1510 & 14596 \\
HRSC2016~\cite{Liu2016SRB} & Aerial & 1:5 & 1061 & 2976 \\
Vehicle~\cite{Liu2015FMV} & Aerial & 1:2 & 20 & 14235 \\
DOTA~\cite{Xia2018DAS} & Aerial & 1:2.5 & 2806 & 188282 \\
SHIP~\cite{Li2018MRB} & Aerial & $\approx$1:5 & 640 & - \\
OOP~\cite{He2015OOP} & PASCAL & $\approx$1:1 & 4952 & - \\
\textbf{Proposed} & \textbf{Retail} & \textbf{1:20} & \textbf{47000} & \textbf{48000} \\ \hline
\end{tabular}
\label{tab:multidatasetcompare}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.7\linewidth]{images/dataset_compare.pdf}
\vspace{-3mm}
\caption{Sample images and their annotations of three datasets evaluated in our experiments: (a) DOTA~\cite{Xia2018DAS} (b) HRSC2016~\cite{Liu2016SRB} (c) Retail50K. There are two unique characteristics of Retail50K: (1) Complex backgrounds such as occlusions (by price tags), varied colours and textures. (2) OBB with high aspect ratios.}
\vspace{-1mm}
\label{fig:exp:dataset_compare}
\end{figure}
\begin{figure*}[t]
\centering
\includegraphics[width=1\linewidth]{images/statistics.pdf}
\vspace{-3mm}
\caption{Statistics of different properties of Retail50K dataset.}
\vspace{-3mm}
\label{fig:exp:statistics}
\end{figure*}
Figure~\ref{fig:exp:dataset_compare} (c) illustrates a sample image from Retail50K dataset. Retail50K is a collection of 47,000 images from different supermarkets. Annotations on those images are the layer edges of shelves, fridges and displays. We focus on such retail environments for three reasons: (1) \textbf{Complex background.} Shelves and fridges are tightly filled with many different items
with a wide variety of colours and textures. Moreover, layer edges are normally occluded by price tags and sale tags; based on our statistics, the mean occlusion is around 37.5\%. Making it even more challenging, the appearance of price tags differs across supermarkets. (2) \textbf{High aspect ratio.} Aspect ratio is one of the essential factors for anchor-based models~\cite{Redmon2016YBF}. Bounding boxes in the Retail50K dataset not only have a large variety of orientations, but also a wide range of aspect ratios. In particular, the majority of annotations in Retail50K have high aspect ratios. Therefore, this dataset represents a good combination of challenges that is precisely the type we find in complex retail environments.
(3) \textbf{Useful in practice.} The trained model based on Retail50K can be used for many applications in retail scenarios such as shelf retail tag detection, automatic shelf demarcation, shelf layer and image yaw angle estimation, etc. It is worth noting that although the SKU-110K dataset~\cite{Goldman2019PDI} is also assembled from retail environments such as supermarket shelves, the annotations in this dataset are horizontal bounding boxes (HBB) of shelf products since it mainly focuses on object detection in densely packed scenes. The aspect ratios of its HBBs are distributed between 1:1 and 1:3 and hence it does not cater to the problem that we want to solve.
\noindent{\bf Images and Categories:} Images in Retail50K were collected from 20 supermarket stores in China and USA. Dozens of volunteers acquired data using their personal cellphone cameras. To increase the diversity of data, images were collected in multiple cities from different volunteers. Image quality and view settings were unregulated and so the collected images represent different scales, viewing angles, lighting conditions, noise levels, and other sources of variability. We also recorded the meta data of the original images such as capture time, volunteer name, shop name and MD5~\cite{Xie2013FCA} checksum to filter out duplicated images. Unlike existing datasets that contain multiple categories~\cite{Xia2018DAS,Lin2014MCC,Everingham2015TPV,Deng2009IAH}, there is only one category in Retail50K dataset. For better comparisons across datasets, we also employ DOTA~\cite{Xia2018DAS} (15 categories) and HRSC2016~\cite{Liu2016SRB} (the aspect ratio of objects is between that of Retail50K and DOTA) in our experiments (Figure~\ref{fig:exp:dataset_compare}).
\noindent{\bf Annotation and Properties:} In the Retail50K dataset, bounding box annotations were provided by 5 skilled annotators. To improve their efficiency, a handbook of labelling rules was provided during the training process. Candidate images were grouped into 165 labelling tasks based on their meta-data so that peer reviews could be applied. Finally, considering the complicated backgrounds and various orientations of layer edges, we performed the annotations using arbitrary quadrilateral bounding boxes (AQBB). Briefly, an AQBB is denoted by the vertices of the bounding polygon in clockwise order. Due to its high efficiency and empirical success, AQBB is widely used in many benchmarks such as text detection~\cite{Karatzas2015COR}, object detection in aerial images~\cite{Li2018MRB}, etc. Based on AQBB, we can easily compute the required OBB format, which is denoted by $(c_{x},c_{y},w,h,\theta)$.
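The AQBB-to-OBB conversion mentioned above can be sketched as follows, assuming the annotated quadrilateral is (approximately) a rotated rectangle, as is the case for layer-edge annotations. The helper name and edge convention are our own, not the annotation tool's code:

```python
import numpy as np

def aqbb_to_obb(quad):
    """Convert a clockwise quadrilateral [(x1,y1),...,(x4,y4)] into
    the OBB tuple (cx, cy, w, h, theta)."""
    q = np.asarray(quad, dtype=float)      # shape (4, 2)
    cx, cy = q.mean(axis=0)                # center = mean of vertices
    e_w, e_h = q[1] - q[0], q[2] - q[1]    # two adjacent edges
    w, h = np.linalg.norm(e_w), np.linalg.norm(e_h)
    theta = np.arctan2(e_w[1], e_w[0])     # rotation of the width edge
    return cx, cy, w, h, theta

# A 10 x 2 axis-aligned rectangle.
obb = aqbb_to_obb([(0, 0), (10, 0), (10, 2), (0, 2)])
print(tuple(float(round(v, 2)) for v in obb))  # (5.0, 1.0, 10.0, 2.0, 0.0)
```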
Since images were collected with personal cellphone cameras, the original images have different resolutions; hence they were uniformly resized into $600 \times 800$ before annotation took place. Figure~\ref{fig:exp:statistics} shows some statistics of Retail50K. We see that the dataset contains a wide range of aspect ratios and orientations (Figure~\ref{fig:exp:statistics} (a) and (b)). In particular, Retail50K is more challenging as compared to existing datasets~\cite{Liu2015FMV,Xia2018DAS,Li2018MRB} since it contains rich annotations with extremely high aspect ratios (higher than 1:10). Similar to natural-image datasets such as ImageNet (average 2) and MSCOCO (average 7.7), most images in our dataset contain around 2-6 instances with complex backgrounds (Figure~\ref{fig:exp:statistics} (c)). For experiments, we selected half of the original images as the training set, 1/6 as validation set, and 1/3 as the testing set. |
\section{Introduction}
The purpose of this paper is to study the structure of the bounded derived
category $\Dbcoh(\boldsymbol{E})$ of coherent sheaves on a singular irreducible
projective curve $\boldsymbol{E}$ of arithmetic genus one.
In the smooth case, such structure results are easily obtained from Atiyah's
description \cite{Atiyah} of indecomposable vector bundles over elliptic
curves. However, if $\boldsymbol{E}$ has a node or a cusp, some crucial
properties fail to hold. This is illustrated by the following table.
\begin{center}
\begin{tabular}[t]{p{6cm}|c|c}
&smooth&singular\\ \hline
homological dimension of $\Coh_{\boldsymbol{E}}$
&$1$&$\infty$\\ \hline
Serre duality holds&in general&\multicolumn{1}{p{3cm}}{
with one object being perfect}\\ \hline
torsion free implies locally free&yes&no\\ \hline
indecomposable coherent sheaves are semi-stable&yes&no\\ \hline
any indecomposable complex is isomorphic to a shift of a sheaf&yes&no\\
\hline
\end{tabular}
\end{center}
Despite these difficulties, the main goal of this article is to identify the
features that the smooth and the singular case have in common. A list of
these can be found in Remark \ref{rem:common}.
In Section \ref{sec:background}, we review the smooth case and highlight where
the properties mentioned above are used. Our approach was inspired by
\cite{LenzingMeltzer}.
Atiyah's algorithm to construct indecomposable vector bundles of any slope can
be understood as an application of a sequence of twist functors with spherical
objects. From this point of view, Atiyah shows that any indecomposable object
of $\Dbcoh(\boldsymbol{E})$ is the image of an indecomposable torsion sheaf
under an exact auto-equivalence of $\Dbcoh(\boldsymbol{E})$.
In the case of a singular Weierstra{\ss} curve $\boldsymbol{E}$, as our main
technical tool we use Harder-Narasimhan filtrations in
$\Dbcoh(\boldsymbol{E})$, which were introduced by Bridgeland
\cite{Stability}. Their general properties are studied in Section
\ref{sec:HNF}.
The key result of Section \ref{sec:dercat} is the preservation of stability
under Seidel-Thomas twists \cite{SeidelThomas} with spherical objects. This
allows us to show that, like in the smooth case, any category of semi-stable
sheaves with fixed slope is equivalent to the category of coherent torsion
sheaves on $\boldsymbol{E}$.
In the case of slope zero, this was shown in our previous work
\cite{BurbanKreussler}. For the nodal case, an explicit description of
semi-stable sheaves of degree zero via \'etale coverings was given there as
well.
A combinatorial description of semi-stable sheaves of arbitrary slope over
a nodal cubic curve was found by Mozgovoy \cite{Mozgovoy}.
On the other hand, a classification of all indecomposable objects of
$\Dbcoh(\boldsymbol{E})$ was presented in \cite{BurbanDrozd}. A description of
the Harder-Narasimhan filtrations in terms of this classification is a task for
future work.
However, if the singular point of $\boldsymbol{E}$ is a cusp, the description
of all indecomposable coherent torsion sheaves is a wild problem in the sense
of representation theory, see for example \cite{Drozd72}. Nevertheless,
stable vector bundles on a cuspidal cubic have been classified by Bodnarchuk
and Drozd \cite{Lesya}.
It turns out that semi-stable sheaves of infinite homological dimension are
particularly important, because only such sheaves appear as Harder-Narasimhan
factors of indecomposable objects in $\Dbcoh(\boldsymbol{E})$ which are not
semi-stable (Proposition \ref{prop:extreme}).
The main result (Proposition \ref{prop:spherical}) of Section \ref{sec:dercat}
is the answer to a question of Polishchuk, who asked in \cite{YangBaxter},
Section 1.4, for a description of all spherical objects on $\boldsymbol{E}$.
We also prove that the group of exact auto-equivalences of
$\Dbcoh(\boldsymbol{E})$ acts transitively on the set of spherical objects.
In Section \ref{sec:tstruc} we study $t$-structures on $\Dbcoh(\boldsymbol{E})$
and stability conditions in the sense of \cite{Stability}.
We completely classify all $t$-structures on this category (Theorem
\ref{thm:tstruc}). This allows us to
deduce a description of the group of exact auto-equivalences of
$\Dbcoh(\boldsymbol{E})$ (Corollary \ref{cor:auto}).
As a second application, we calculate Bridgeland's moduli space of
stability conditions on $\boldsymbol{E}$ (Proposition \ref{prop:stabmod}).
The hearts $\mathsf{D}(\theta,\theta+1)$ of the $t$-structures constructed in
Section \ref{sec:tstruc} are finite-dimensional non-Noetherian Abelian
categories of infinite global dimension.
In the case of a smooth elliptic curve, this category is equivalent to the
category of holomorphic vector bundles on a non-commutative torus in the
sense of Polishchuk and Schwarz \cite{PolSchw}, Proposition 3.9.
It is an interesting problem to find such a differential-geometric
interpretation of these Abelian categories in the case of singular
Weierstra{\ss} curves.
Using the technique of Harder-Narasimhan filtrations, we gain new insight into
the classification of indecomposable complexes, which was obtained in
\cite{BurbanDrozd}.
It seems plausible that similar methods can be applied to study the derived
category of representations of certain derived tame associative algebras, such
as gentle algebras, skew-gentle algebras or degenerated tubular algebras, see
for example \cite{BuDro}.
The study of Harder-Narasimhan filtrations in conjunction with the action of
the group of exact auto-equivalences of the derived category should provide new
insight into the combinatorics of indecomposable objects in these derived
categories.
\textbf{Notation.} We fix an algebraically closed field $\boldsymbol{k}$ of
characteristic zero. By $\boldsymbol{E}$ we always denote a Weierstra{\ss}
curve. This is a reduced irreducible curve of arithmetic genus one, isomorphic
to a cubic curve in the projective plane. If not smooth, it has precisely one
singular point $s\in\boldsymbol{E}$, which can be a node or a cusp.
If $x\in\boldsymbol{E}$ is arbitrary, we denote by $\boldsymbol{k}(x)$ the
residue field of $x$ and consider it as a sky-scraper sheaf supported at $x$.
By $\Dbcoh(\boldsymbol{E})$ we denote the derived category of complexes of
$\mathcal{O}_{\boldsymbol{E}}$-modules whose cohomology sheaves are coherent
and which are non-zero only in finitely many degrees.
\textbf{Acknowledgement.} The first-named author would like to thank
Max-Planck-Institut f\"ur Mathematik in Bonn for financial support.
Both authors would like to thank Yuriy Drozd, Daniel Huybrechts, Bernhard
Keller, Rapha\"el Rouquier and Olivier Schiffmann for helpful discussions,
and the referee for his or her constructive comments.
\section{Background: the smooth case}\label{sec:background}
The purpose of this section is to recall well-known results about the structure
of the bounded derived category of coherent sheaves over a smooth elliptic
curve. Proofs of most of these results can be found in \cite{Atiyah},
\cite{Oda}, \cite{LenzingMeltzer} and \cite{Tu}.
The focus of our presentation is on the features and techniques
which are essential in the singular case as well.
At the end of this section we highlight the main differences between the smooth
and the singular case. It becomes clear that the failure of Serre duality is
the main reason why the proofs and even the formulation of some of the main
results do not carry over to the singular case.
The aim of the subsequent sections will then be to overcome these difficulties,
to find correct formulations which generalise to the singular case and to
highlight the common features of the bounded derived category in the smooth and
singular case.
With the exception of subsection \ref{subsec:diff}, throughout this section
$\boldsymbol{E}$ denotes a smooth elliptic curve over $\boldsymbol{k}$.
\subsection{Homological dimension}
For any two coherent sheaves $\mathcal{F}, \mathcal{G}$ on $\boldsymbol{E}$,
Serre duality provides an isomorphism $$\Ext^{\nu}(\mathcal{F},\mathcal{G})
\cong \Ext^{1-\nu}(\mathcal{G},\mathcal{F})^{\ast}.$$
This follows from the usual formulation of Serre duality and the fact that any
coherent sheaf has a finite locally free resolution. As a consequence,
$\Ext^{\nu}(\mathcal{F},\mathcal{G})=0$ for any $\nu \ge 2$, which means that
$\Coh_{\boldsymbol{E}}$ has homological dimension one. This implies that any
object $X\in\Dbcoh(\boldsymbol{E})$ splits into the direct sum of appropriate
shifts of its cohomology sheaves. To see this, start with a complex
$X=(\mathcal{F}^{-1} \stackrel{f}{\longrightarrow} \mathcal{F}^{0})$
and consider the distinguished triangle in $\Dbcoh(\boldsymbol{E})$
$$\ker(f)[1] \rightarrow X \rightarrow \coker(f) \stackrel{\xi}{\rightarrow}
\ker(f)[2].$$
Because $\xi\in\Hom(\coker(f),\ker(f)[2]) = \Ext^{2}(\coker(f), \ker(f)) =0$,
we obtain $X\cong \ker(f)[1] \oplus \coker(f)$. Using the same idea we can
proceed by induction to get the claim.
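The induction can be made explicit with truncation triangles; the following is a sketch, using only the fact established above that $\Coh_{\boldsymbol{E}}$ has homological dimension one.

```latex
% Let X have cohomology sheaves H^k(X) in degrees a <= k <= b.
% The truncation triangle
\[
  \tau_{<b}X \longrightarrow X \longrightarrow H^{b}(X)[-b]
  \stackrel{\xi}{\longrightarrow} (\tau_{<b}X)[1]
\]
% splits: assuming inductively that tau_{<b}X is a direct sum of shifted
% cohomology sheaves, the connecting morphism satisfies
\[
  \xi \in \Hom\bigl(H^{b}(X)[-b],\,(\tau_{<b}X)[1]\bigr)
  \cong \bigoplus_{k<b}\Ext^{\,b-k+1}\bigl(H^{b}(X),H^{k}(X)\bigr) = 0,
\]
% because b-k+1 >= 2, whence X is isomorphic to tau_{<b}X + H^b(X)[-b].
```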
\subsection{Indecomposable sheaves are semi-stable}
It is well-known that any coherent sheaf $\mathcal{F}\in\Coh_{\boldsymbol{E}}$
has a Harder-Narasimhan filtration
$$0\subset \mathcal{F}_{n} \subset \ldots \subset \mathcal{F}_{1} \subset
\mathcal{F}_{0} = \mathcal{F}$$
whose factors $\mathcal{A}_{\nu} := \mathcal{F}_{\nu}/\mathcal{F}_{\nu+1}$ are
semi-stable with decreasing slopes $\mu(\mathcal{A}_{n})>
\mu(\mathcal{A}_{n-1}) > \ldots > \mu(\mathcal{A}_{0})$.
Using the definition of semi-stability, this implies
$\Hom(\mathcal{A}_{\nu+i}, \mathcal{A}_{\nu})
= 0$ for all $\nu\ge 0$ and $i>0$. Therefore,
$\Ext^{1}(\mathcal{A}_{0},\mathcal{F}_{1}) \cong
\Hom(\mathcal{F}_{1}, \mathcal{A}_{0})^{\ast} =0$, and the exact sequence
$0\rightarrow \mathcal{F}_{1} \rightarrow \mathcal{F} \rightarrow
\mathcal{A}_{0} \rightarrow 0$ must split.
In particular, if $\mathcal{F}$ is indecomposable, we have $\mathcal{F}_{1}=0$
and $\mathcal{F}\cong \mathcal{A}_{0}$ and $\mathcal{F}$ is semi-stable.
\subsection{Jordan-H\"older factors}
The full sub-category of $\Coh_{\boldsymbol{E}}$ whose objects are the
semi-stable sheaves of a fixed slope is an Abelian category in which any object
has a Jordan-H\"older filtration with stable factors.
If $\mathcal{F}$ and $\mathcal{G}$ are non-isomorphic stable sheaves which
have the same slope, we have $\Hom(\mathcal{F},\mathcal{G})=0$. Based on this
fact, in the same way as before, we can deduce that an indecomposable
semi-stable sheaf has all its Jordan-H\"older factors isomorphic to each
other.
\subsection{Simple is stable}
It is well-known that any stable sheaf $\mathcal{F}$ is simple,
i.e. $\Hom(\mathcal{F},\mathcal{F}) \cong \boldsymbol{k}$. On a smooth
elliptic curve, the converse is true as well, which equips us with a useful
homological characterisation of stability.
To see that simple implies stable, we suppose for a contradiction that
$\mathcal{F}$ is simple but not stable. This implies the existence of an
epimorphism $\mathcal{F}\rightarrow \mathcal{G}$ with $\mathcal{G}$ stable and
$\mu(\mathcal{F})\ge \mu(\mathcal{G})$. Serre duality implies
$\dim \Ext^{1}(\mathcal{G},\mathcal{F}) = \dim \Hom(\mathcal{F},\mathcal{G}) >
0$, hence, $\chi(\mathcal{G},\mathcal{F}) := \dim
\Hom(\mathcal{G},\mathcal{F}) - \dim \Ext^{1}(\mathcal{G},\mathcal{F}) < \dim
\Hom(\mathcal{G},\mathcal{F})$. Riemann-Roch gives
$\chi(\mathcal{G},\mathcal{F}) = (\mu(\mathcal{F}) -
\mu(\mathcal{G}))\cdot\rk(\mathcal{F})\rk(\mathcal{G}) > 0$, hence
$\Hom(\mathcal{G},\mathcal{F})\ne 0$. But this produces a
non-zero composition $\mathcal{F}\rightarrow \mathcal{G} \rightarrow
\mathcal{F}$ which is not an isomorphism, in contradiction to the assumption
that $\mathcal{F}$ was simple.
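As a numerical illustration of this argument (the ranks and degrees below are our choice, not taken from the text):

```latex
% Suppose F is simple but not stable, with an epimorphism F ->> G, G stable,
% (rk F, deg F) = (2,1) and (rk G, deg G) = (1,0), so mu(F) = 1/2 >= mu(G) = 0.
\[
  \chi(\mathcal{G},\mathcal{F})
  = \rk(\mathcal{G})\deg(\mathcal{F}) - \rk(\mathcal{F})\deg(\mathcal{G})
  = 1\cdot 1 - 2\cdot 0 = 1 > 0,
\]
% hence Hom(G,F) != 0, and a non-zero composition F -> G -> F which is not
% an isomorphism contradicts the simplicity of F.
```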
\subsection{Classification}
Atiyah \cite{Atiyah} gave a description of all stable sheaves with a fixed
slope in the form $\mathcal{E}(r,d)\otimes \mathcal{L}$, where $\mathcal{L}$
is a line bundle of degree zero and $\mathcal{E}(r,d)$ is a particular stable
bundle of the fixed slope. The bundle $\mathcal{E}(r,d)$ depends on the choice
of a base point $p_{0}\in\boldsymbol{E}$ and
its construction reflects the Euclidean algorithm on the pair $(\rk,\deg)$.
We look at this description from a slightly different perspective. We use the
twist functors $T_{\mathcal{O}}$ and $T_{\boldsymbol{k}(p_{0})}$, which were
constructed by Seidel and Thomas \cite{SeidelThomas} (see also \cite{Meltzer}).
They act as equivalences on $\Dbcoh(\boldsymbol{E})$ and, hence, preserve
stability.
A stable sheaf of rank $r$ and degree $d$ is sent by $T_{\mathcal{O}}$ to one
with $(\rk,\deg)$ equal to $(r-d,d)$. If $r<d$ this is a shift of a stable
sheaf. The functor $T_{\boldsymbol{k}(p_{0})}$ sends the pair $(r,d)$ to
$(r,r+d)$ and its inverse sends it to $(r,d-r)$. Therefore, if we follow the
Euclidean algorithm, we find a composition of such functors which provides an
equivalence between the category of stable sheaves with slope $d/r$ and the
category of simple torsion sheaves. Such sheaves are precisely the structure
sheaves of closed points $\boldsymbol{k}(x)$, $x\in\boldsymbol{E}$. They are
considered to be stable with slope $\infty$.
More generally, this procedure provides an equivalence between the category of
semi-stable sheaves of rank $r$ and degree $d$ with the category of torsion
sheaves of length equal to $\gcd(r,d)$. This shows, in particular, that the
Abelian category of semi-stable sheaves with fixed slope is equivalent to the
category of coherent torsion sheaves.
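The reduction described above is the subtractive Euclidean algorithm on the pair $(\rk,\deg)$. A minimal numerical sketch (the function names are ours; only the stated action on $(\rk,\deg)$ is modelled, not the functors themselves):

```python
# Action of the twist functors on (rk, deg), as stated in the text:
#   T_O            : (r, d) -> (r - d, d)   (matrix [[1, -1], [0, 1]])
#   T_{k(p0)}^{-1} : (r, d) -> (r, d - r)   (matrix [[1, 0], [-1, 1]])
from math import gcd

def t_O(rd):
    r, d = rd
    return (r - d, d)

def t_p0_inv(rd):
    r, d = rd
    return (r, d - r)

def reduce_to_torsion(r, d):
    """Run the Euclidean algorithm on (r, d), with r, d > 0, until the rank is 0.

    Returns the list of visited (rk, deg) pairs; the final pair is
    (0, gcd(r, d)), the invariants of a torsion sheaf of length gcd(r, d).
    """
    trail = [(r, d)]
    while trail[-1][0] != 0:
        r, d = trail[-1]
        if d > r:                      # first lower the degree below the rank
            trail.append(t_p0_inv((r, d)))
        else:                          # 0 < d <= r: lower the rank
            trail.append(t_O((r, d)))
    return trail

# Stable sheaves of slope 2/3 correspond to torsion sheaves of length 1:
path = reduce_to_torsion(3, 2)
assert path[-1] == (0, gcd(3, 2))
print(path)  # [(3, 2), (1, 2), (1, 1), (0, 1)]
```

Consistently with the text, the torsion sheaf reached from a semi-stable sheaf of rank $r$ and degree $d$ has length $\gcd(r,d)$.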
\subsection{Auto-equivalences}
By $\Aut(\Dbcoh(\boldsymbol{E}))$ we denote the group of all exact
auto-equivalences of the triangulated category $\Dbcoh(\boldsymbol{E})$. This
group acts on the Grothendieck group $\mathsf{K}(\boldsymbol{E}) \cong
\mathsf{K}(\Dbcoh(\boldsymbol{E}))$.
As the kernel of the Chern character is the radical of the Euler-form
$\langle X,Y \rangle = \dim(\Hom(X,Y)) - \dim(\Hom(X,Y[1]))$
which is invariant under this action, it induces an action on the even
cohomology $H^{2\ast}(\boldsymbol{E}, \mathbb{Z}) \cong \mathbb{Z}^{2}$.
Because $\dim(\Hom(\mathcal{F},\mathcal{G}))>0$ if and only if $\langle
\mathcal{F},\mathcal{G} \rangle >0$, provided $\mathcal{F}\not\cong
\mathcal{G}$ are stable sheaves, the induced action on $\mathbb{Z}^{2}$ is
orientation preserving. So, we obtain a homomorphism of groups $\varphi:
\Aut(\Dbcoh(\boldsymbol{E})) \rightarrow \SL(2,\mathbb{Z})$, which is
surjective because $T_{\mathcal{O}}$ and $T_{\boldsymbol{k}(p_{0})}$ are
mapped to a pair of generators of $\SL(2,\mathbb{Z})$. Explicitly, if
$\mathbb{G}$ is an auto-equivalence, $\varphi(\mathbb{G})$ describes its
action on the pair $(\rk,\deg)$.
To understand $\ker(\varphi)$, we observe that $\varphi(\mathbb{G}) =
\boldsymbol{1}$ implies that $\mathbb{G}$ sends a simple torsion sheaf
$\boldsymbol{k}(x)$ to some $\boldsymbol{k}(y)[2k]$, because indecomposability
is retained. By the same reason, $\mathbb{G}(\mathcal{O})$ is a shifted line
bundle of degree zero.
However, $\Hom(\mathcal{L},\boldsymbol{k}(y)[l]) = 0$, if $\mathcal{L}$ is a
line bundle and $l\ne 0$. Hence, after composing $\mathbb{G}$ with a shift, it
sends all simple torsion sheaves to simple torsion sheaves, without a shift.
Because $\boldsymbol{E}$ is smooth, we can apply a result of Orlov \cite{Orlov}
which says that any auto-equivalence $\mathbb{G}$ is a Fourier-Mukai transform
\cite{Mukai}.
However, any such functor, which sends the sheaves $\boldsymbol{k}(x)$ to
torsion sheaves of length one is of the form
$\mathbb{G}(X)=f^{\ast}(\mathcal{L}\otimes X)$,
where $f:\boldsymbol{E} \rightarrow \boldsymbol{E}$ is an automorphism and
$\mathcal{L}\in\Pic(\boldsymbol{E})$ a line bundle. Hence, $\ker(\varphi)$ is
generated by $\Aut(\boldsymbol{E}), \Pic^{0}(\boldsymbol{E})$ and even
shifts. This gives a complete description of the group
$\Aut(\Dbcoh(\boldsymbol{E}))$. A similar approach was used by Lenzing and
Meltzer to describe the group of exact auto-equivalences of tubular weighted
projective lines
\cite{LenzingMeltzerAuto}.
\subsection{Difficulties in the singular case}\label{subsec:diff}
Let now $\boldsymbol{E}$ be an irreducible but singular curve of arithmetic
genus one. The technical cornerstones of the theory as described in this
section fail to be true in this case. More precisely:
\begin{itemize}
\item the category of coherent sheaves $\Coh_{\boldsymbol{E}}$ has infinite
homological dimension;
\item there exist indecomposable complexes in $\Dbcoh(\boldsymbol{E})$ which
are not just shifted sheaves, see \cite{BurbanDrozd}, section 3;
\item Serre duality fails to be true in general;
\item not all indecomposable vector bundles are semi-stable;
\item there exist indecomposable coherent sheaves which are neither torsion
sheaves nor torsion free sheaves, see \cite{BurbanDrozd}.
\end{itemize}
Most of the trouble is caused by the failure of Serre duality.
The basic example is the following. Suppose, $s\in\boldsymbol{E}$ is a node,
then
$$\Hom(\boldsymbol{k}(s), \boldsymbol{k}(s)) \cong \boldsymbol{k}\quad
\text{ and }\quad
\Ext^{1}(\boldsymbol{k}(s), \boldsymbol{k}(s)) \cong \boldsymbol{k}^{2}.$$
Serre duality is available only if at least one of the two sheaves involved
has finite homological dimension. This might suggest that replacing
$\Dbcoh(\boldsymbol{E})$ by the sub-category of perfect complexes would solve
most of the problems. But see Remark \ref{rem:notperfect}.
In the subsequent sections we overcome these difficulties and point out the
similarities between the smooth and the singular case.
\section{Harder-Narasimhan filtrations}\label{sec:HNF}
Throughout this section, $\boldsymbol{E}$ denotes an irreducible reduced
projective curve over $\boldsymbol{k}$ of arithmetic genus one.
The notion of stability of coherent torsion free sheaves on an irreducible
curve is usually defined with the aid of the slope function
$\mu(\,\cdot\,)=\deg(\,\cdot\,)/\rk(\,\cdot\,)$. Using the phase function instead is
equivalent, but better adapted to the generalisation to derived categories
described below.
By definition, the \emph{phase} $\varphi(\mathcal{F})$ of a non-zero
coherent sheaf $\mathcal{F}$ is the unique number which satisfies $0 <
\varphi(\mathcal{F})\le 1$ and $m(\mathcal{F}) \exp(\pi
i\varphi(\mathcal{F})) = -\deg(\mathcal{F}) + i \rk(\mathcal{F})$, where
$m(\mathcal{F})$ is a positive real number, called the \emph{mass} of the
sheaf $\mathcal{F}$.
In particular, $\varphi(\mathcal{O}) = 1/2$ and all non-zero torsion sheaves
have phase one.
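A few values, read off directly from the defining equation $m(\mathcal{F})\exp(\pi i\varphi(\mathcal{F})) = -\deg(\mathcal{F}) + i\rk(\mathcal{F})$:

```latex
% structure sheaf: rk 1, deg 0
\[
  -\deg(\mathcal{O}) + i\rk(\mathcal{O}) = i = \exp(\pi i/2)
  \;\Rightarrow\; \varphi(\mathcal{O}) = \tfrac{1}{2},\quad m(\mathcal{O}) = 1;
\]
% sky-scraper sheaf: rk 0, deg 1
\[
  -\deg(\boldsymbol{k}(x)) + i\rk(\boldsymbol{k}(x)) = -1 = \exp(\pi i)
  \;\Rightarrow\; \varphi(\boldsymbol{k}(x)) = 1;
\]
% a line bundle L of degree d > 0: the point -d + i lies in the upper left
% quadrant, so 1/2 < phi(L) < 1.
```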
A torsion free coherent sheaf $\mathcal{F}$ is called semi-stable if for any
exact sequence of torsion free coherent sheaves
$$0 \rightarrow \mathcal{E} \rightarrow \mathcal{F}
\rightarrow \mathcal{G} \rightarrow 0$$
the inequality $\varphi(\mathcal{E}) \le \varphi(\mathcal{F})$, or
equivalently, $\varphi(\mathcal{F}) \le \varphi(\mathcal{G})$, holds.
It is well-known \cite{Rudakov} that any torsion free coherent sheaf
$\mathcal{F}$ on a projective variety has a Harder-Narasimhan filtration
$$0 \subset \mathcal{F}_{n} \subset \mathcal{F}_{n-1} \cdots \subset
\mathcal{F}_{1} \subset \mathcal{F}_{0} = \mathcal{F},$$
which is uniquely characterised by the property that all factors
$\mathcal{A}_{i} = \mathcal{F}_{i}/\mathcal{F}_{i+1}$ are semi-stable and
satisfy
$$\varphi(\mathcal{A}_{n}) > \varphi(\mathcal{A}_{n-1}) > \cdots >
\varphi(\mathcal{A}_{0}).$$
Originally, this concept of stability was introduced in the 1960s in order to
construct moduli spaces using geometric invariant theory. It could also be
seen as a method to understand the structure of the category of coherent
sheaves on a projective variety.
Simpson extended the notion of stability to coherent sheaves of pure
dimension.
A very general approach was taken by Rudakov \cite{Rudakov}, who introduced
the notion of stability on Abelian categories. Under some finiteness
assumptions on the category, he shows the existence and uniqueness of a
Harder-Narasimhan filtration for any object of the category in question. As an
application of his work, the usual slope stability extends to the whole
category $\Coh_{\boldsymbol{E}}$ of coherent sheaves on $\boldsymbol{E}$. In
particular, any non-zero coherent sheaf has a Harder-Narasimhan filtration and
any non-zero coherent torsion sheaf on the curve $\boldsymbol{E}$ is
semi-stable.
Inspired by work of Douglas on $\Pi$-Stability for D-branes, see for example
\cite{Douglas}, it was shown by Bridgeland \cite{Stability} how to extend the
concept of stability and Harder-Narasimhan filtration to the derived category
of coherent sheaves, or more generally, to a triangulated category. These new
ideas were merged with the ideas from \cite{Rudakov} in the paper
\cite{GRK}.
We shall follow here the approach of Bridgeland \cite{Stability}. In Section
\ref{sec:tstruc} we give a description of Bridgeland's moduli space of
stability conditions on the derived category of irreducible singular curves
of arithmetic genus one. However, throughout the present chapter we stick to
the classical notion of stability on the category of coherent sheaves and the
stability structure it induces on the triangulated category.
In order to generalise the concept of a Harder-Narasimhan filtration to the
category $\Dbcoh(\boldsymbol{E})$, Bridgeland \cite{Stability} extends the
definition of the phase of a sheaf to shifts of coherent sheaves by:
$$\varphi(\mathcal{F}[n]) := \varphi(\mathcal{F})+n,$$
where $\mathcal{F}\ne 0$ is a coherent sheaf on $\boldsymbol{E}$ and
$n\in\mathbb{Z}$. A complex whose only non-zero entry sits in position $m$
has, according to this definition, phase in the interval $(-m,-m+1]$. If
$\mathcal{F}$ and $\mathcal{F}'$ are non-zero coherent sheaves and $a,b$
integers, we have the implication:
$$\varphi(\mathcal{F}[-a]) > \varphi(\mathcal{F}'[-b])
\quad\Rightarrow\quad a\le b.$$
For any $\varphi\in\mathbb{R}$ we denote by $\mathsf{P}(\varphi)$ the Abelian
category of shifted semi-stable sheaves with phase $\varphi$. Of course,
$0\in\mathsf{P}(\varphi)$ for all $\varphi$.
If $\varphi\in(0,1]$, this is a full Abelian subcategory of
$\Coh_{\boldsymbol{E}}$. For any $\varphi\in\mathbb{R}$ we have
$\mathsf{P}(\varphi+n) = \mathsf{P}(\varphi)[n]$. A non-zero object of
$\Dbcoh(\boldsymbol{E})$ will be called \emph{semi-stable}, if it is an
element of one of the categories $\mathsf{P}(\varphi)$, $\varphi\in\mathbb{R}$.
Bridgeland's stability conditions \cite{Stability} involve so-called
central charges.
In order to define the central charge of the standard stability condition, we
need a definition of degree and rank for arbitrary objects in
$\Dbcoh(\boldsymbol{E})$.
Let $K =\mathcal{O}_{\boldsymbol{E},\eta}$ be the field of rational
functions on the irreducible curve $\boldsymbol{E}$ with generic point
$\eta\in\boldsymbol{E}$. The base change $\eta:\Spec(K)\rightarrow
\boldsymbol{E}$ is flat, so that $\eta^{\ast}(F)$,
taken in the non-derived sense, is correctly defined for any
$F\in\Dbcoh(\boldsymbol{E})$.
We define $\rk(F):=\chi(\eta^{\ast}(F))$, which is the
alternating sum of the dimensions of the cohomology spaces of the complex
$\eta^{\ast}(F)$ which are vector spaces over $K$.
In order to define the degree, we use the functor
$$\boldsymbol{R}\Hom(\mathcal{O}_{\boldsymbol{E}},\,\cdot\,): \Dbcoh(\boldsymbol{E})
\rightarrow \Dbcoh(\boldsymbol{k}),$$
and set $\deg(F):= \chi(\boldsymbol{R}\Hom(\mathcal{O}_{\boldsymbol{E}},F))$.
Here, we denoted by $\Dbcoh(\boldsymbol{k})$ the bounded derived category of
finite dimensional vector spaces over $\boldsymbol{k}$.
For coherent sheaves, these definitions coincide with the usual
definitions of rank and degree. In particular, a torsion sheaf of length $m$
which is supported at a single point of $\boldsymbol{E}$ has rank $0$ and
degree $m$.
These definitions imply that rank and degree are additive on distinguished
triangles in $\Dbcoh(\boldsymbol{E})$. Hence, they induce homomorphisms on the
Grothendieck group $\mathsf{K}(\Dbcoh(\boldsymbol{E}))$ of the triangulated
category $\Dbcoh(\boldsymbol{E})$, which is by definition the quotient of the
free Abelian group generated by the objects of $\Dbcoh(\boldsymbol{E})$ modulo
expressions coming from distinguished triangles.
Recall that $\mathsf{K}_{0}(\Coh(\boldsymbol{E})) \cong
\mathsf{K}(\Dbcoh(\boldsymbol{E}))$,
see \cite{Groth}. We denote this group by $\mathsf{K}(\boldsymbol{E})$.
\begin{lemma}\label{lem:GrothGrp}
If $\boldsymbol{E}$ is an irreducible singular curve of arithmetic genus
one, we have $\mathsf{K}(\boldsymbol{E}) \cong \mathbb{Z}^{2}$ with
generators $[\boldsymbol{k}(x)]$ and $[\mathcal{O}_{\boldsymbol{E}}]$.
\end{lemma}
\begin{proof}
Recall that the Grothendieck-Riemann-Roch Theorem, see \cite{BFM} or
\cite{Fulton}, provides a homomorphism
$$\tau_{\boldsymbol{E}}:\mathsf{K}(\boldsymbol{E}) \rightarrow
A_{\ast}(\boldsymbol{E})\otimes \mathbb{Q},$$
which depends functorially on $\boldsymbol{E}$ with respect to proper direct
images.
Moreover,
$(\tau_{\boldsymbol{E}})_{\mathbb{Q}}:\mathsf{K}(\boldsymbol{E})\otimes
\mathbb{Q} \rightarrow A_{\ast}(\boldsymbol{E})\otimes \mathbb{Q}$
is an isomorphism, see \cite{Fulton}, Cor.\/ 18.3.2.
If $\boldsymbol{E}$ is an irreducible singular projective curve of
arithmetic genus one, we easily see that the Chow group
$A_{\ast}(\boldsymbol{E})$ is isomorphic to $\mathbb{Z}^{2}$.
The two generators are $[x]\in A_{0}(\boldsymbol{E})$ with
$x\in\boldsymbol{E}$ and $[\boldsymbol{E}]\in A_{1}(\boldsymbol{E})$.
Note that $[x]=[y]\in A_{0}(\boldsymbol{E})$ for any two closed points
$x,y\in\boldsymbol{E}$, because the normalisation of $\boldsymbol{E}$ is
$\mathbb{P}^{1}$.
Using \cite{Fulton}, Thm.\/ 18.3 (5), we obtain
$\tau_{\boldsymbol{E}}(\boldsymbol{k}(x))= [x] \in A_{0}(\boldsymbol{E})$
for any $x\in\boldsymbol{E}$. On the other hand, from \cite{Fulton}, Expl.\/
18.3.4 (a), we obtain $\tau_{\boldsymbol{E}}(\mathcal{O}_{\boldsymbol{E}}) =
[\boldsymbol{E}] \in A_{1}(\boldsymbol{E})$.
Therefore, the classes of $\boldsymbol{k}(x)$ and
$\mathcal{O}_{\boldsymbol{E}}$ define a basis of
$\mathsf{K}(\boldsymbol{E})\otimes \mathbb{Q}$.
Since these two classes also generate the group $\mathsf{K}(\boldsymbol{E})$
itself, it must be a free Abelian group.
\end{proof}
The \emph{central charge} of the standard stability structure on
$\Dbcoh(\boldsymbol{E})$ is the homomorphism of Abelian groups
$$
Z: \mathsf{K}(\boldsymbol{E}) \rightarrow \mathbb{Z}\oplus i\mathbb{Z}
\subset \mathbb{C},
$$
which is given by
$$
Z(F) := -\deg(F) + i \rk(F).
$$
If $F$ is a non-zero coherent sheaf, $Z(F)$ is a point on the ray from the
origin through $\exp(\pi i \varphi(F))$ in $\mathbb{C}$. Its distance from the
origin was called the mass of $F$.
Although the phase $\varphi(F)$ is defined for sheaves and their shifts only,
we are able to define the slope $\mu(F)$ for any object in
$\Dbcoh(\boldsymbol{E})$ which is not equal to zero in the Grothendieck
group. Namely, the usual definition $\mu(F):=\deg(F)/\rk(F)$ gives us now a
mapping $$\mu:\mathsf{K}(\boldsymbol{E})\setminus\{0\} \rightarrow
\mathbb{Q} \cup \{\infty\},$$ which extends the usual definition of the slope
of a sheaf.
Because $Z(\mathcal{O}_{\boldsymbol{E}})=i$ and $Z(\boldsymbol{k}(x))=-1$,
Lemma \ref{lem:GrothGrp} implies that $Z$ is injective. Therefore, $\mu$ is
defined for any non-zero element of the Grothendieck group.
For arbitrary objects $X\in\Dbcoh(\boldsymbol{E})$ we have $Z(X[1]) = -Z(X)$,
hence $\mu(X[1]) = \mu(X)$ when defined.
For a shifted sheaf, in contrast to the slope $\mu$, the phase $\varphi$
keeps track of the position of the sheaf in the complex.
As an illustration, we include an example of an indecomposable object in
$\Dbcoh(\boldsymbol{E})$ which has a zero image in the Grothendieck group.
\begin{example}
Let $s\in\boldsymbol{E}$ be the singular point and denote, as usual, by
$\boldsymbol{k}(s)$ the torsion sheaf of length one which is supported at
$s$. This sheaf does not have finite homological dimension. To see this, we
observe first that $\Ext^{k}(\boldsymbol{k}(s), \boldsymbol{k}(s)) \cong
H^{0}(\mathcal{E}xt^{k}(\boldsymbol{k}(s), \boldsymbol{k}(s)))$. Moreover,
as an
$\mathcal{O}_{\boldsymbol{E},s}$-module,
$\boldsymbol{k}(s)$ has an infinite periodic locally free resolution of the
form
$$
\cdots \stackrel{A}{\longrightarrow} \mathcal{O}_{\boldsymbol{E},s}^{2}
\stackrel{B}{\longrightarrow} \mathcal{O}_{\boldsymbol{E},s}^{2}
\stackrel{A}{\longrightarrow}\mathcal{O}_{\boldsymbol{E},s}^{2}
\longrightarrow \mathcal{O}_{\boldsymbol{E},s}
\longrightarrow \boldsymbol{k}(s) \longrightarrow 0
$$
where $AB=BA=f\cdot I_{2}$ is a reduced matrix factorisation of an equation
$f$ of $\boldsymbol{E} \subset \mathbb{P}^{2}$. For example, if $s$ is a
node, so that $\boldsymbol{E}$ is locally given by the polynomial $f = y^{2}
- x^{3} -x^{2}\in\boldsymbol{k}[x,y]$, we can choose
$A=\bigl(\begin{smallmatrix}
y&x^{2}+x\\x&y
\end{smallmatrix}\bigr)$ and
$B=\bigl(\begin{smallmatrix}
y&-x^{2}-x\\-x&y
\end{smallmatrix}\bigr)$ considered modulo $f$. More generally, any
singular Weierstra{\ss} cubic $f$ can be written as $y\cdot y - R\cdot S$
with $y, R,S$ all vanishing at the singular point. The off-diagonal elements
of $A$ and $B$ are then formed by $\pm R,\pm S$. Therefore, all entries of
the matrices $A$ and $B$ are elements of the maximal ideal of the local ring
$\mathcal{O}_{\boldsymbol{E},s}$. Hence, the application of $\Hom(\,\cdot\,,
\boldsymbol{k}(s))$ produces a complex with zero differential, which implies
that $\Ext^{k}(\boldsymbol{k}(s), \boldsymbol{k}(s))$ is
two-dimensional for all $k\ge 1$.
In particular, $\Ext^{2}(\boldsymbol{k}(s), \boldsymbol{k}(s)) \cong
\boldsymbol{k}^{2}$, and we can pick a non-zero element
$w\in\Hom(\boldsymbol{k}(s), \boldsymbol{k}(s)[2])$. There exists a complex
$X\in\Dbcoh(\boldsymbol{E})$ which sits in a distinguished triangle
$$X\rightarrow \boldsymbol{k}(s) \stackrel{w}{\longrightarrow}
\boldsymbol{k}(s)[2] \stackrel{+}{\longrightarrow}.$$
Because the shift by one corresponds to multiplication by $-1$ in the
Grothendieck group, this object $X$ is equal to zero in
$\mathsf{K}(\boldsymbol{E})$. On the other hand, $X$ is
indecomposable. Indeed, if $X$ split, it would have to be $X\cong
\boldsymbol{k}(s) \oplus \boldsymbol{k}(s)[1]$, because the only non-zero
cohomology of $X$ is $H^{-1}(X) \cong \boldsymbol{k}(s)$ and $H^{0}(X) \cong
\boldsymbol{k}(s)$. But, because $\Hom(\boldsymbol{k}(s)[1],
\boldsymbol{k}(s)) = 0$, Lemma \ref{lem:PengXiao}, applied to the
distinguished triangle
\[\begin{CD}
\boldsymbol{k}(s)[1] @>>> X @>>> \boldsymbol{k}(s) @>{+}>{w}>
\end{CD}\]
with $X\cong \boldsymbol{k}(s) \oplus \boldsymbol{k}(s)[1]$, implies $w=0$.
\end{example}
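For the nodal case in the example above, the identity $AB = f\cdot I_{2}$ can be verified by direct multiplication:

```latex
\[
  AB =
  \begin{pmatrix} y & x^{2}+x \\ x & y \end{pmatrix}
  \begin{pmatrix} y & -x^{2}-x \\ -x & y \end{pmatrix}
  =
  \begin{pmatrix}
    y^{2}-x^{3}-x^{2} & 0 \\
    0 & y^{2}-x^{3}-x^{2}
  \end{pmatrix}
  = f\cdot I_{2};
\]
% the off-diagonal entries cancel, y(-x^2-x) + (x^2+x)y = 0 and xy - yx = 0,
% and symmetrically BA = f I_2.
```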
\begin{definition}[\cite{Stability}]
A Harder-Narasimhan filtration (HNF) of an object $X \in
\Dbcoh(\boldsymbol{E})$ is a finite collection of distinguished triangles
\[
\xymatrix@C=.5em
{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] & X
\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
\\
}
\]
with $A_j\in\mathsf{P}(\varphi_j)$ and $A_{j}\ne 0$ for all $j$, such that
$\varphi_{n} > \varphi_{n-1} > \cdots > \varphi_{0}.$
\end{definition}
If all ingredients of a HNF are shifted by one, we obtain a HNF of $X[1]$.
The shifted sheaves $A_{j}$ are called \emph{the semi-stable HN-factors} of
$X$ and we define $\varphi_{+}(X):=\varphi_{n}$ and
$\varphi_{-}(X):=\varphi_{0}$. Later, in Theorem \ref{thm:uniqueHNF}, we show
that the HNF of an object $X$ is unique up to isomorphism. This justifies the
notation. For the moment, we keep in mind that $\varphi_{+}(X)$ and
$\varphi_{-}(X)$ might depend on the HNF and not only on the object $X$.
Before we proceed, we include a few remarks about the notation we use.
Distinguished triangles in a triangulated category are either displayed in the
form
$X\rightarrow Y\rightarrow Z \stackrel{+}{\longrightarrow}
\quad\text{ or as }\quad
\xymatrix@C=.5em{
X \ar[rr] && Y, \ar[dl]\\ & Z \ar[ul]^{+}
}
$
where the arrow which is marked with $+$ is in fact a morphism $Z\rightarrow
X[1]$.
We shall use the octahedron axiom, the axiom (TR4) in Verdier's list, in the
following convenient form: if two morphisms $X\stackrel{f}{\longrightarrow} Y
\stackrel{g}{\longrightarrow} Z$ are given, for any three distinguished
triangles with bases $f, g$ and $g\circ f$ there exists a fourth distinguished
triangle which is indicated below by dashed arrows, such that we obtain the
following commutative diagram:
\begin{center}
\mbox{\begin{xy} 0;<10mm,0mm>:0,
(0,3) *+{Z'} ="top" ,
(-3,0) *+{X} ="left" ,
(3,0) *+{X'} ="right" ,
(-1.5,1.5) *+{Y} ="midleft" ,
(1.5,1.5) *+{Y'} ="midright" ,
{"left";"midright":"right";"midleft",x} *+{Z}="center",
{"left" \ar@{->}^{f} "midleft" \ar@{->}_{g\circ f} "center"},
{"center" \ar@{->} "right" \ar@{->} "midright"},
{"midleft" \ar@{->}^{g} "center" \ar@{->} "top"},
{"top" \ar@{-->} "midright"},
{"midright" \ar@{-->} "right"},
{"right" \ar@{-->}_{+} +(.9,-.9)},
{"midright" \ar@{->}^{+} +"midright"-"center"},
{"right" \ar@{->}^{+} +"center"-"midleft"},
{"top" \ar@{->}^{+} +(.8,.8)}
\end{xy}}
\end{center}
The remainder of this section is devoted to the proofs of the crucial
properties of Harder-Narasimhan filtrations in triangulated categories.
These properties can be found in \cite{Stability, GRK}, where most of
them are either implicit or stated without a detailed proof.
\begin{lemma}\label{lem:connect}
Let
$\xymatrix@C=.4em{U \ar[rr]^{f} && X \ar[dl]\\ & V \ar[ul]^{+}}$
and
$A \longrightarrow V \longrightarrow V' \stackrel{+}{\longrightarrow}$
be distinguished triangles. Then there exists a factorisation
$U\longrightarrow W \stackrel{f'}{\longrightarrow} X$
of $f$ and two distinguished triangles
$$\xymatrix@C=.5em{U \ar[rr] && W \ar[dl]\ar[rr]^{f'} && X\ar[dl]\\
& A \ar[ul]^{+} && V'.\ar[ul]^{+}}$$
\end{lemma}
\begin{proof}
If we apply the octahedron axiom to the composition $A\rightarrow V
\rightarrow U[1]$ we obtain the following commutative diagram, which gives
the claim.
\begin{center}
\mbox{\begin{xy} 0;<10mm,0mm>:0,
(0,3) *+{V'} ="top" ,
(-3,0) *+{A} ="left" ,
(3,0) *+{X[1]} ="right" ,
(-1.5,1.5) *+{V} ="midleft" ,
(1.5,1.5) *+{W[1]} ="midright" ,
{"left";"midright":"right";"midleft",x} *+{U[1]}="center",
{"left" \ar@{->} "midleft" \ar@{->} "center"},
{"center" \ar@{->}_{f[1]} "right" \ar@{->} "midright"},
{"midleft" \ar@{->} "center" \ar@{->} "top"},
{"top" \ar@{-->} "midright"},
{"midright" \ar@{-->}^{f'[1]} "right"},
{"right" \ar@{-->}_{+} +(.9,-.9)},
{"midright" \ar@{->}^{+} +"midright"-"center"},
{"right" \ar@{->}^{+} +"center"-"midleft"},
{"top" \ar@{->}^{+} +(.8,.8)}
\end{xy}}
\end{center}
\end{proof}
\begin{lemma}\label{lem:split}
Let
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}V \ar[rr] \ar[dl]_{\cong}&& F_{n-1}V \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}V \ar[rr] && F_{0}V \ar@{=}[r]\ar[dl] & V
\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
be a HNF of $V\in\Dbcoh(\boldsymbol{E})$ and $F_{k}V \longrightarrow V
\longrightarrow V' \stackrel{+}{\longrightarrow}$ a distinguished triangle
with $1\le k \le n$. Then, $F_{k}V$ has a HNF with HN-factors $A_{n},
A_{n-1}, \ldots, A_{k}$ and $V'$ one with HN-factors $A_{k-1},
A_{k-2}, \ldots, A_{0}$.
\end{lemma}
\begin{proof}
The first statement is clear, because we can cut off the HNF of $V$ at
$F_{k}V$ to obtain a HNF of $F_{k}V$. Let us define objects $F_{i}V'$ by
distinguished triangles $F_{k}V \longrightarrow F_{i}V \longrightarrow F_{i}V'
\stackrel{+}{\longrightarrow}$, where the first arrow is the composition of
the morphisms in the HNF of $V$. Using the octahedron axiom, we obtain for
any $i\le k$ a commutative diagram
\begin{center}
\mbox{\begin{xy} 0;<12mm,0mm>:0,
(0,3) *+{F_{i}V'} ="top" ,
(-3,0) *+{F_{k}V} ="left" ,
(3,0) *+{A_{i-1}} ="right" ,
(-1.5,1.5) *+{F_{i}V} ="midleft" ,
(1.5,1.5) *+{F_{i-1}V'} ="midright" ,
{"left";"midright":"right";"midleft",x} *+{F_{i-1}V}="center",
{"left" \ar@{->} "midleft" \ar@{->} "center"},
{"center" \ar@{->} "right" \ar@{->} "midright"},
{"midleft" \ar@{->} "center" \ar@{->} "top"},
{"top" \ar@{-->} "midright"},
{"midright" \ar@{-->} "right"},
{"right" \ar@{-->}_{+} +(.9,-.9)},
{"midright" \ar@{->}^{+} +"midright"-"center"},
{"right" \ar@{->}^{+} +"center"-"midleft"},
{"top" \ar@{->}^{+} +(.8,.8)}
\end{xy}}
\end{center}
which implies the second claim.
\end{proof}
\begin{remark}\label{rem:split}
The statement of Lemma \ref{lem:split} is true with identical proof if we
relax the assumption of being a HNF by allowing $\varphi(A_{k}) =
\varphi(A_{k-1})$ for the chosen value of $k$.
\end{remark}
\begin{lemma}\label{lem:bounds}
If
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] & X
\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
is a HNF of $X\in\Dbcoh(\boldsymbol{E})$ such that $A_{0}[k]$ is a sheaf,
then $H^{k}(X)\ne 0$. In particular, the following implication is true:
$$X\in \mathsf{D}^{\le m} \quad\Longrightarrow\quad
\forall i\ge0: A_{i}\in \mathsf{D}^{\le m}.$$
\end{lemma}
\begin{proof}
The assumption $A_{0}[k]\in \Coh_{\boldsymbol{E}}$ means
$H^{k}(A_{0})=A_{0}[k]\ne 0$ and $\varphi(A_{0}) \in (-k,-k+1]$. Because for
all $i>0$ we have $\varphi(A_{i}) > \varphi(A_{0})$, we obtain
$\varphi(A_{i})>-k$. This implies $H^{k+1}(A_{i})=0$ for all $i\ge 0$. The
cohomology sequences of the distinguished triangles
$F_{i+1}\longrightarrow F_{i}\longrightarrow A_{i}
\stackrel{+}{\longrightarrow}$ imply $H^{k+1}(F_{i}X)=0$ for all $i>0$ and
an exact sequence $H^{k}(X) \rightarrow H^{k}(A_{0}) \rightarrow
H^{k+1}(F_{1}X)$, hence $H^{k}(X)\ne 0$. The statement about the other
HN-factors $A_{i}$ follows now from $\varphi(A_{i})\ge \varphi(A_{0})$.
\end{proof}
\begin{proposition}
Any non-zero object $X\in\Dbcoh(\boldsymbol{E})$ has a HNF.
\end{proposition}
\begin{proof}
The existence of a HNF for objects of $\Coh_{\boldsymbol{E}}$ is
classically known, see \cite{HarderNarasimhan, Rudakov}. Therefore, we can
proceed by induction on the number of non-zero cohomology sheaves of
$X\in\Dbcoh(\boldsymbol{E})$. If $n$ is the largest integer with
$H^{n}(X)\ne 0$, we have a distinguished triangle
\begin{equation}
\tau^{\le n-1} X \longrightarrow X \longrightarrow H^{n}(X)[-n]
\stackrel{+}{\longrightarrow}.
\end{equation}
By inductive hypothesis, there exists a HNF of $\tau^{\le n-1} X$. From
Lemma \ref{lem:bounds} we conclude that all HN-factors of $\tau^{\le n-1} X$
are in $\mathsf{D}^{\le n-1}$ and so $\varphi_{-}(\tau^{\le n-1} X)>-n+1$.
Because $H^{n}(X)$ is a sheaf, we have $\varphi_{+}(H^{n}(X)[-n])
\in (-n,-n+1]$, hence $\varphi_{-}(\tau^{\le n-1} X) >
\varphi_{+}(H^{n}(X)[-n])$.
We prove now for any distinguished triangle
\begin{equation}
\label{eq:induction}
U \longrightarrow X \longrightarrow V \stackrel{+}{\longrightarrow}
\end{equation}
in which $V[n]$ is a coherent sheaf that the existence of a HNF for $U$ with
$\varphi_{-}(U)> \varphi_{+}(V)$ implies the existence of a HNF of $X$.
Because $V[n]$ is a sheaf, $V$ has a HNF and we proceed by induction on the
number of HN-factors of $V$. Let $A$ be the leftmost object in a HNF of $V$,
i.e.\/ $A\in\mathsf{P}(\varphi_{+}(V))$. By Lemma \ref{lem:connect} applied
to the distinguished triangles (\ref{eq:induction}) and $A \longrightarrow V
\longrightarrow V' \stackrel{+}{\longrightarrow}$, there exist two
distinguished triangles in which $V'[n]$ is a coherent sheaf with a smaller
number of HN-factors than $V$:
$$\xymatrix@C=.5em{U \ar[rr] && W \ar[dl]\ar[rr] && X.\ar[dl]\\
& A \ar[ul]^{+} && V'\ar[ul]^{+}}$$
Because $\varphi_{-}(U) > \varphi(A) =\varphi_{+}(V)$, the left triangle can
be concatenated to the given HNF of $U$ in order to provide a HNF for
$W$. The start of the induction is covered as well: it is the case $V'=0$.
\end{proof}
\begin{lemma}\label{wesPT:ii}
If $X,Y\in \Dbcoh(\boldsymbol{E})$ with $\varphi_{-}(X) > \varphi_{+}(Y)$,
then $$\Hom(X,Y)=0.$$
\end{lemma}
\begin{proof}
If $X,Y$ are semi-stable sheaves, this is well-known and follows easily from
the definition of semi-stability. Because $\Hom(X,Y[k])=0$ if $X,Y$ are
sheaves and $k<0$, the claim follows for $X\in \mathsf{P}(\varphi)$ and $Y\in
\mathsf{P}(\psi)$ with $\varphi>\psi$. Now let $X\in\mathsf{P}(\varphi)$ and
$Y\in \Dbcoh(\boldsymbol{E})$ with $\varphi > \varphi_{+}(Y)$. Let
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{m}Y \ar[rr] \ar[dl]_{\cong}&& F_{m-1}Y \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}Y \ar[rr] && F_{0}Y \ar@{=}[r]\ar[dl] & Y
\\
& B_{m} \ar[lu]^{+} && B_{m-1} \ar[lu]^{+} & & & & & & B_0 \ar[lu]^{+}&
}\]
be a HNF of $Y$. We have $\varphi(B_{j})\le \varphi(B_{m}) =
\varphi_{+}(Y)$, hence $\varphi(X)>\varphi(B_{j})$ and $\Hom(X,B_{j})=0$ for
all $j$. If we apply the functor $\Hom(X,\,\cdot\,)$ to the distinguished
triangles $F_{j+1}Y \longrightarrow F_{j}Y \longrightarrow B_{j}
\stackrel{+}{\longrightarrow}$, we obtain surjections $\Hom(X,F_{j+1}Y)
\twoheadrightarrow \Hom(X,F_{j}Y)$. From $\Hom(X,F_{m}Y)=
\Hom(X,B_{m})=0$, we obtain $\Hom(X,Y)=\Hom(X,F_{0}Y)=0$.
Let now $X,Y$ be arbitrary non-zero objects of $\Dbcoh(\boldsymbol{E})$
which satisfy $\varphi_{-}(X) > \varphi_{+}(Y)$. If
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] & X
\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
is a HNF of $X$, we have $\varphi(A_{i})\ge \varphi(A_{0})=\varphi_{-}(X) >
\varphi_{+}(Y)$. We know already $\Hom(A_{i},Y)=0$ for all $i\ge 0$. If we
apply the functor $\Hom(\,\cdot\,,Y)$ to the distinguished triangles
$F_{i+1}X \longrightarrow F_{i}X \longrightarrow A_{i}
\stackrel{+}{\longrightarrow}$, we obtain injections $\Hom(F_{i}X,Y)
\hookrightarrow \Hom(F_{i+1}X,Y)$. Again, this implies $\Hom(X,Y)=0$.
\end{proof}
\begin{theorem}[\cite{Stability,GRK}]\label{thm:uniqueHNF}
The HNF of any non-zero object $X\in\Dbcoh(\boldsymbol{E})$ is unique up to
unique isomorphism.
\end{theorem}
\begin{proof}
If
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] &
X\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
and
\[\xymatrix@C=.5em{
0\; \ar[rr] && G_{m}X \ar[rr] \ar[dl]_{\cong}&& G_{m-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&G_{1}X \ar[rr] && G_{0}X \ar@{=}[r]\ar[dl] &
X\\
& B_{m} \ar[lu]^{+} && B_{m-1} \ar[lu]^{+} & & & & & & B_0 \ar[lu]^{+}&
}\]
are HNFs of $X$, we have to show that there exist unique isomorphisms of
distinguished triangles for any $k\ge 0$
\[\begin{CD}
F_{k+1}X @>>> F_{k}X @>>> A_{k} @>{+}>>\\
@VV{f_{k+1}}V @VV{f_{k}}V @VV{g_{k}}V \\
G_{k+1}X @>>> G_{k}X @>>> B_{k} @>{+}>>
\end{CD}\]
with $f_{0}=\mathsf{Id}_{X}$. This is obtained by induction on $k\ge0$ from
the following claim: if an isomorphism $f:F\rightarrow G$ and two
distinguished triangles
$F' \longrightarrow F \longrightarrow A \stackrel{+}{\longrightarrow }$ and
$G' \longrightarrow G \longrightarrow B \stackrel{+}{\longrightarrow }$ are
given such that $A\in\mathsf{P}(\varphi), B\in\mathsf{P}(\psi)$ and $F',G'$
have HNFs with $\varphi_{-}(F')>\varphi$ and $\varphi_{-}(G')>\psi$, then
there exist unique isomorphisms $f':F'\rightarrow G'$ and $g:A\rightarrow B$
such that $(f',f,g)$ is a morphism of triangles. In particular,
$\varphi=\psi$.
Without loss of generality, we may assume $\varphi\ge \psi$. This implies
$\varphi_{-}(F'[1]) > \varphi_{-}(F') > \psi$. Lemma \ref{wesPT:ii} implies
therefore $\Hom(F',B) = \Hom(F'[1],B) = 0$. From \cite{Asterisque100},
Proposition 1.1.9, we obtain the existence and uniqueness of the morphisms
$f',g$. It remains to show that they are isomorphisms. If $g$ were zero, the
second morphism in the triangle $G' \longrightarrow G
\stackrel{0}{\longrightarrow} B \stackrel{+}{\longrightarrow }$ would be
zero. Hence, $B$ would be a direct summand of $G'[1]$, which implies
$\Hom(G'[1],B)\ne 0$. This contradicts Lemma \ref{wesPT:ii}, because
$\varphi_{-}(G'[1]) > \varphi_{-}(G') > \psi=\varphi(B)$. Hence, $g\ne 0$ and
Lemma \ref{wesPT:ii} implies $\varphi(A)\le \varphi(B)$, i.e.\/
$\varphi=\psi$. So, the same reasoning as before gives a unique morphism of
distinguished triangles in the other direction. The two compositions are the
respective identity morphisms of $F' \longrightarrow F \longrightarrow A
\stackrel{+}{\longrightarrow }$ and $G' \longrightarrow G \longrightarrow B
\stackrel{+}{\longrightarrow }$, which follows again from the
uniqueness part of \cite{Asterisque100}, Proposition 1.1.9. This proves the
claim.
\end{proof}
We need the following useful lemma.
\begin{lemma}[\cite{PengXiao}, Lemma 2.5]\label{lem:PengXiao}
Let $\mathsf{D}$ be a triangulated category and
\[\begin{CD}
F @>>> G @>>> H_1 \oplus H_2 @>{+}>{(0,w)}>
\end{CD}\]
be a distinguished triangle in $\mathsf{D}$. Then $G\cong H_{1}\oplus G'$
splits and the given triangle is isomorphic to
\[\begin{CD}
F
@>{\bigl(\begin{smallmatrix} 0\\g \end{smallmatrix}\bigr)}>>
H_1 \oplus G'
@>{\bigl(\begin{smallmatrix}1&0\\0&f' \end{smallmatrix}\bigr)}>>
H_1 \oplus H_2
@>{+}>{(0,w)}>
\end{CD}\]
Dually, if
\[\begin{CD}
F @>{\bigl(\begin{smallmatrix} 0\\g \end{smallmatrix}\bigr)}>>
G_{1}\oplus G_{2} @>>> H @>{+}>>
\end{CD}\]
is a distinguished triangle then $H\cong G_{1}\oplus H'$ and the given
triangle is isomorphic to
\[\begin{CD}
F
@>{\bigl(\begin{smallmatrix} 0\\g \end{smallmatrix}\bigr)}>>
G_1 \oplus G_{2}
@>{\bigl(\begin{smallmatrix}1&0\\0&f' \end{smallmatrix}\bigr)}>>
G_1 \oplus H'
@>{+}>{(0,w)}>
\end{CD}\]
\end{lemma}
The results in this section are true for more general triangulated
categories than $\Dbcoh(\boldsymbol{E})$. Without changes, the proofs apply if
we replace $\Dbcoh(\boldsymbol{E})$ by the bounded derived category of an
Abelian category which is equipped with the notion of stability in the sense
of \cite{Rudakov}. In particular, these results hold for polynomial stability
on the triangulated categories $\Dbcoh(X)$ where $X$ is a projective
variety over $\boldsymbol{k}$.
\section{The structure of the bounded derived category of coherent sheaves on
a singular Weiersta{\ss} curve}\label{sec:dercat}
In this section, we prove the main results on which our understanding of
$\Dbcoh(\boldsymbol{E})$ is based. Again, $\boldsymbol{E}$ denotes a
Weierstra{\ss} curve. Our main focus is on the singular case, however all the
results remain true in the smooth case as well.
A special feature of this category is the
non-vanishing result of Proposition \ref{wesPT}. Unlike the smooth case, there
exist indecomposable objects in $\Dbcoh(\boldsymbol{E})$, which are not
semi-stable. Their Harder-Narasimhan factors are characterised in Proposition
\ref{prop:extreme}. We propose to visualise indecomposable objects by their
``shadows''. As an application of our results, we give a complete
characterisation of all spherical objects in $\Dbcoh(\boldsymbol{E})$. As a
consequence, we show that the group of exact auto-equivalences acts
transitively on the set of spherical objects. This answers a question which was
posed by Polishchuk \cite{YangBaxter}.
Let us set up some notation.
For any $\varphi\in(0,1]$ we denote by $\mathsf{P}(\varphi)^{s} \subset
\mathsf{P}(\varphi)$ the full subcategory of stable sheaves with phase
$\varphi$. We extend this definition to all $\varphi\in\mathbb{R}$ by
requiring $\mathsf{P}(\varphi +n)^{s} = \mathsf{P}(\varphi)^{s}[n]$ for all
$n\in\mathbb{Z}$ and all $\varphi\in\mathbb{R}$.
We already know the structure of $\mathsf{P}(1)^{s}$. Because $\mathsf{P}(1)$
is the category of coherent torsion sheaves on $\boldsymbol{E}$, the objects
of $\mathsf{P}(1)^{s}$ are precisely the structure sheaves $\boldsymbol{k}(x)$
of closed points $x\in\boldsymbol{E}$. In order to understand the structure of
all the other categories $\mathsf{P}(\varphi)^{s}$, we use Fourier-Mukai
transforms. Our main technical tool will be the transform $\mathbb{F}$ which
was studied in \cite{BurbanKreussler}. It depends on the choice of a regular
point $p_{0}\in\boldsymbol{E}$.
Let us briefly recall its definition and main properties. It was defined
with the aid of Seidel-Thomas twists \cite{SeidelThomas}, which are functors
$T_{E}: \Dbcoh(\boldsymbol{E}) \rightarrow \Dbcoh(\boldsymbol{E})$
depending on a spherical object $E\in\Dbcoh(\boldsymbol{E})$. On objects, these
functors are characterised by the existence of a distinguished triangle
$$\boldsymbol{R}\Hom(E,F) \otimes E \rightarrow F \rightarrow T_{E}(F)
\stackrel{+}{\longrightarrow}.$$
If $p_{0}\in\boldsymbol{E}$ is a smooth point, the functor
$T_{\boldsymbol{k}(p_{0})}$ is isomorphic to the tensor product with the
locally free sheaf $\mathcal{O}_{\boldsymbol{E}}(p_{0})$, see
\cite{SeidelThomas}, 3.11. We defined
$$\mathbb{F} :=
T_{\boldsymbol{k}(p_{0})}T_{\mathcal{O}}T_{\boldsymbol{k}(p_{0})}.$$
In \cite{SeidelThomas} it was shown that twist functors can be described as
integral transforms and that $\mathbb{F}$ is isomorphic to the functor
$\FM^{\mathcal{P}}$, which is given by
$$\FM^{\mathcal{P}}(\,\cdot\,) :=
\boldsymbol{R}\pi_{2\ast}(\mathcal{P}\dtens \pi_{1}^{\ast}(\,\cdot\,)),$$
where $\mathcal{P}=\mathcal{I}_{\Delta}\otimes
\pi_{1}^{\ast}\mathcal{O}(p_{0}) \otimes
\pi_{2}^{\ast}\mathcal{O}(p_{0})[1]$. Here, $\mathcal{I}_{\Delta} \subset
\mathcal{O}_{\boldsymbol{E}\times\boldsymbol{E}}$ denotes the ideal sheaf of
the diagonal and $\pi_{1}, \pi_{2}$ denote the two projections of
$\boldsymbol{E}\times \boldsymbol{E}$; thus, $\mathcal{P}$ is a shift of a
coherent sheaf on $\boldsymbol{E}\times \boldsymbol{E}$.
In order to understand the effect of $\mathbb{F}$ on rank and degree, we look
at the distinguished triangle
$$\boldsymbol{R}\Hom(\mathcal{O},F) \otimes \mathcal{O} \rightarrow F
\rightarrow T_{\mathcal{O}}(F) \stackrel{+}{\longrightarrow}.$$
The additivity of rank and degree implies $\rk(T_{\mathcal{O}}(F))= \rk(F) -
\deg(F)$ and
$\deg(T_{\mathcal{O}}(F))= \deg(F)$. On the other hand, it is well-known that
$\deg(T_{\boldsymbol{k}(p_{0})}(F)) = \deg(F)+\rk(F)$ and
$\rk(T_{\boldsymbol{k}(p_{0})}(F)) = \rk(F)$.
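Both formulas for $T_{\mathcal{O}}$ follow from additivity in
$\mathsf{K}(\boldsymbol{E})$: by Riemann-Roch on the Weierstra{\ss} curve
$\boldsymbol{E}$, whose arithmetic genus is one, we have
$\chi(\mathcal{O},F) = \deg(F)$, so that
$$[T_{\mathcal{O}}(F)] = [F] - \chi(\mathcal{O},F)\,[\mathcal{O}_{\boldsymbol{E}}]
= [F] - \deg(F)\,[\mathcal{O}_{\boldsymbol{E}}],$$
which has rank $\rk(F)-\deg(F)$ and degree $\deg(F)$.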
So, if we use $[\mathcal{O}_{\boldsymbol{E}}], -[\boldsymbol{k}(p_{0})]$ as a
basis of $\mathsf{K}(\boldsymbol{E})$, which means that we use coordinates
$(\rk,-\deg)$, then the action of $T_{\mathcal{O}}, T_{\boldsymbol{k}(p_{0})}$
and of $\mathbb{F}$ on $\mathsf{K}(\boldsymbol{E})$ is given by the matrices
$$
\begin{pmatrix}
1&1\\0&1
\end{pmatrix},
\begin{pmatrix}
1&0\\-1&1
\end{pmatrix} \quad \;\text{and}\quad
\begin{pmatrix}
0&1\\-1&0
\end{pmatrix}\;\text{respectively.}
$$
In particular, for any object $F\in\Dbcoh(\boldsymbol{E})$ which has a slope,
we have $\mu(T_{\boldsymbol{k}(p_{0})}(F)) = \mu(F)+1$ and
$\mu(\mathbb{F}(F))=-\frac{1}{\mu(F)}$ using the usual conventions in dealing
with $\infty$.
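The matrix of $\mathbb{F}$ is obtained by multiplying the matrices of the
three twist functors:
$$\begin{pmatrix} 1&0\\-1&1 \end{pmatrix}
\begin{pmatrix} 1&1\\0&1 \end{pmatrix}
\begin{pmatrix} 1&0\\-1&1 \end{pmatrix}
= \begin{pmatrix} 1&1\\-1&0 \end{pmatrix}
\begin{pmatrix} 1&0\\-1&1 \end{pmatrix}
= \begin{pmatrix} 0&1\\-1&0 \end{pmatrix}.$$
Applied to the coordinate vector $(\rk(F), -\deg(F))^{t}$, it gives
$\rk(\mathbb{F}(F)) = -\deg(F)$ and $\deg(\mathbb{F}(F)) = \rk(F)$, in
accordance with the slope formula above.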
If $F$ is a sheaf or a shift thereof, we defined the phase
$\varphi(F)$. In order to understand the effect of $\mathbb{F}$ on phases, it
is not sufficient to know its effect on the slope. This is because the slope
determines the phase modulo $2\mathbb{Z}$ only.
However, if $F$ is a coherent sheaf, the description of $\mathbb{F}$ as
$\FM^{\mathcal{P}}$ shows that $\mathbb{F}(F)$ can have non-vanishing
cohomology in degrees $-1$ and $0$ only. If, in addition, $\mathbb{F}(F)$ is a
shifted sheaf, this implies $\varphi(\mathbb{F}(F))\in (0,2]$.
From the formula for the slope it is now clear that $\varphi(\mathbb{F}(F)) =
\varphi(F)+\frac{1}{2}$ for any shifted coherent sheaf $F$.
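This can also be seen on the level of central charges. If one adopts the
convention $Z(F) = -\deg(F) + i\,\rk(F)$, normalised so that $\varphi(F) =
\frac{1}{\pi}\arg Z(F)$ for coherent sheaves (a choice consistent with the
coordinates used above), then the action of $\mathbb{F}$ computed before gives
$$Z(\mathbb{F}(F)) = -\rk(F) - i\deg(F) = e^{i\pi/2}\, Z(F),$$
i.e.\/ a rotation by the angle $\pi/2$, matching the shift of the phase by
$\frac{1}{2}$.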
The following result was first shown in \cite{Nachr}. We give an independent
proof here, which was inspired by \cite{Nachr}, Lemma 3.1.
\begin{theorem}\label{thm:mother}
$\mathbb{F}$ sends semi-stable sheaves to semi-stable sheaves.
\end{theorem}
\begin{proof}
Note that, by definition, a semi-stable sheaf of positive rank is
automatically torsion free. The only sheaf with degree and rank equal to
zero is the zero sheaf. Throughout this proof, we let $\mathcal{F}$ be a
semi-stable sheaf on $\boldsymbol{E}$.
If $\deg(\mathcal{F})=0$ this sheaf is torsion free and the claim
was shown in \cite{BurbanKreussler}, Thm.\/ 2.21, see also \cite{FMmin}.
For the sake of clarity we would like to stress here the fact that
\cite{BurbanKreussler}, Section 2, deals with nodal as well as cuspidal
Weierstra{\ss} curves.
Next, suppose $\deg(\mathcal{F})>0$. If
$\rk(\mathcal{F})=0$, $\mathcal{F}$ is a coherent torsion sheaf. Again, the
claim follows from \cite{BurbanKreussler}, Thm.\/ 2.21 and Thm.\/ 2.18,
where it was shown that $\mathbb{F}\circ\mathbb{F}= i^{\ast}[1]$, for any
Weierstra{\ss} curve. Here, $i:\boldsymbol{E} \rightarrow \boldsymbol{E}$ is
the involution which fixes the singularity and which corresponds to taking
the inverse on the smooth part of $\boldsymbol{E}$ with its group structure
in which $p_{0}$ is the neutral element.
Therefore, we may suppose $\mathcal{F}$ is torsion free. As observed before,
the complex $\mathbb{F}(\mathcal{F})\in\Dbcoh(\boldsymbol{E})$ can have
non-vanishing cohomology in degrees $-1$ and $0$ only. We are going to show
that $\mathbb{F}(\mathcal{F})[-1]$ is a sheaf, which is equivalent to the
vanishing of the cohomology object
$\mathcal{H}^{0}(\mathbb{F}(\mathcal{F}))\in\Coh_{\boldsymbol{E}}$.
Recall from \cite{BurbanKreussler}, Lemma 2.13, that for any smooth point
$x\in\boldsymbol{E}$ the sheaf of degree zero $\mathcal{O}(x-p_{0})$
satisfies
$\mathbb{F}(\mathcal{O}(x-p_{0})) \cong T_{\mathcal{O}}(\mathcal{O}(x))
\cong \boldsymbol{k}(x)$. Moreover, if $s\in\boldsymbol{E}$ denotes the
singular point, $n:\mathbb{P}^{1}\rightarrow \boldsymbol{E}$
the normalisation and
$\widetilde{\mathcal{O}}:=n_{\ast}(\mathcal{O}_{\mathbb{P}^{1}})$, then
$\mathbb{F}(\widetilde{\mathcal{O}}(-p_{0})) \cong
T_{\mathcal{O}}(\widetilde{\mathcal{O}}) \cong \boldsymbol{k}(s)$. The sheaf
$\widetilde{\mathcal{O}}(-p_{0})$ has degree zero on $\boldsymbol{E}$.
Because $\mathbb{F}$ is an equivalence, we obtain isomorphisms
\begin{align*}
\Hom(\mathbb{F}(\mathcal{F}),\boldsymbol{k}(x)) &\cong
\Hom(\mathcal{F}, \mathcal{O}(x-p_{0}))\\
\intertext{and}
\Hom(\mathbb{F}(\mathcal{F}),\boldsymbol{k}(s)) &\cong
\Hom(\mathcal{F}, \widetilde{\mathcal{O}}(-p_{0}))
\end{align*}
where $x\in\boldsymbol{E}$ is an arbitrary smooth point. These vector
spaces vanish as $\mathcal{F}$ was assumed to be semi-stable and of positive
degree.
Because the cohomology of the complex $\mathbb{F}(\mathcal{F})$ vanishes in
positive degrees, there is a canonical
morphism $\mathbb{F}(\mathcal{F})\rightarrow
\mathcal{H}^{0}(\mathbb{F}(\mathcal{F}))$ in $\Dbcoh(\boldsymbol{E})$, which
induces an injection of functors
$\Hom(\mathcal{H}^{0}(\mathbb{F}(\mathcal{F})), \,\cdot\,) \hookrightarrow
$\Hom(\mathbb{F}(\mathcal{F}), \,\cdot\,)$. Therefore, the vanishing obtained
above shows
$$\Hom(\mathcal{H}^{0}(\mathbb{F}(\mathcal{F})), \boldsymbol{k}(y)) = 0$$
for any point $y\in\boldsymbol{E}$. This implies the vanishing of the sheaf
$\mathcal{H}^{0}(\mathbb{F}(\mathcal{F}))$. Hence,
$\widehat{\mathcal{F}}:=\mathbb{F}(\mathcal{F})[-1]$ is a coherent sheaf and
the definition of $\mathbb{F}$ implies that there is an exact sequence of
coherent sheaves
$$0\rightarrow \widehat{\mathcal{F}}(-p_{0}) \rightarrow
H^{0}(\mathcal{F}(p_{0})) \otimes \mathcal{O}_{\boldsymbol{E}} \rightarrow
\mathcal{F}(p_{0}) \rightarrow 0.$$
This sequence implies, in particular, that $\widehat{\mathcal{F}}$ is
torsion free.
Before we proceed to show that $\widehat{\mathcal{F}}$ is semi-stable, we
apply duality to prove that $\mathbb{F}(\mathcal{F})$ is a sheaf if
$\deg(\mathcal{F})<0$. Let us denote the dualising functor by $\mathbb{D}:=
\boldsymbol{R}\mathcal{H}om(\,\cdot\,, \mathcal{O}_{\boldsymbol{E}})$. This
functor satisfies $\mathbb{D}\mathbb{D}\cong \boldsymbol{1}$. In
\cite{BurbanKreusslerRel}, Cor.\/ 3.4, we have shown that there exists an
isomorphism
$$\mathbb{D}\mathbb{F} [-1] \cong i^{\ast} \mathbb{F} \mathbb{D}.$$
Using $\mathbb{D}\circ[1] \cong [-1]\circ \mathbb{D}$, this implies
$$\mathbb{F} \cong \mathbb{D}i^{\ast}[-1]\mathbb{F}\mathbb{D}.$$
Because $\mathcal{F}$ is a torsion free sheaf on a curve, it is
Cohen-Macaulay and since $\boldsymbol{E}$ is Gorenstein, this implies
$\mathcal{E}xt^{i}(\mathcal{F},\mathcal{O}) = 0$ for any $i>0$.
Therefore, we have $\mathbb{D}(\mathcal{F})\cong \mathcal{F}^{\vee}$ and
this is a semi-stable coherent sheaf of positive degree. Thus, $[-1]\circ
\mathbb{F}$ sends $\mathcal{F}^{\vee}$ to a torsion free sheaf, on which
$\mathbb{D}$ is just the usual dual. Now, we see that
$\mathbb{F}(\mathcal{F})$ is a torsion free sheaf if $\mathcal{F}$ was
semi-stable and of negative degree.
It remains to prove that $\mathbb{F}$ preserves semi-stability. If
$\deg(\mathcal{F})=0$ or $\mathcal{F}$ is a torsion sheaf, this was shown
for any Weierstra{\ss} curve in \cite{BurbanKreussler}.
If $\deg(\mathcal{F})\ne 0$ the proof is based upon
$\mathbb{F}\mathbb{F}[-1]\cong i^{\ast}$, see \cite{BurbanKreussler}, Thm.\/
2.18. Suppose $\deg(\mathcal{F})>0$, then
$\mathbb{F}(\widehat{\mathcal{F}})\cong i^{\ast}(\mathcal{F})$ and this is a
coherent sheaf.
If $\widehat{\mathcal{F}}$ were not semi-stable, there would exist a
semi-stable sheaf $\mathcal{G}$ with $\mu(\widehat{\mathcal{F}}) >
\mu(\mathcal{G})$ and a non-zero morphism $\widehat{\mathcal{F}} \rightarrow
\mathcal{G}$. Because $\mu(\widehat{\mathcal{F}}) = -1/\mu(\mathcal{F})<0$,
$\mathbb{F}(\mathcal{G})$ is a coherent sheaf and application of
$\mathbb{F}$ produces a non-zero morphism $i^{\ast}(\mathcal{F}) \cong
\mathbb{F}(\widehat{\mathcal{F}}) \rightarrow
\mathbb{F}(\mathcal{G})$. However, $\mu(i^{\ast}(\mathcal{F})) =
\mu(\mathcal{F}) > -1/\mu(\mathcal{G}) = \mu(\mathbb{F}(\mathcal{G}))$
contradicts semi-stability of $i^{\ast}(\mathcal{F})$. Hence,
$\widehat{\mathcal{F}}$ is semi-stable. The proof in the case
$\deg(\mathcal{F})<0$ starts with a non-zero morphism
$\mathcal{U}\rightarrow \mathbb{F}(\mathcal{F})$ and proceeds similarly.
\end{proof}
It was shown in \cite{BurbanKreussler} that we obtain an action of the group
$\widetilde{\SL}(2,\mathbb{Z})$ on $\Dbcoh(\boldsymbol{E})$ by sending
generators of this group to $T_{\mathcal{O}}$, $T_{\boldsymbol{k}(p_{0})}$ and
the translation functor $[1]$ respectively.
Let us denote $$\mathsf{Q}:=\{\varphi\in\mathbb{R}\mid
\mathsf{P}(\varphi) \text{ contains a non-zero object}\}.$$
The action of a group $G$ on $\mathsf{Q}$ is called \emph{monotone}, if
$\varphi\le\psi$ implies $g\cdot\varphi\le g\cdot\psi$ for every $g\in G$ and
$\varphi,\psi\in \mathsf{Q}$.
\begin{proposition}\label{prop:transit}
The $\widetilde{\SL}(2,\mathbb{Z})$-action on $\Dbcoh(\boldsymbol{E})$
induces a monotone and transitive action on the set $\mathsf{Q}$. All
isotropy groups of this action are isomorphic to $\mathbb{Z}$.
\end{proposition}
\begin{proof}
As seen above, for any $\psi\in\mathsf{Q}$ and $0\ne
A\in\mathsf{P}(\psi)$, we have $\varphi(\mathbb{F}(A)) =
\varphi(A)+\frac{1}{2}$ and $\mu(T_{\boldsymbol{k}(p_{0})}(A)) = \mu(A)+1$.
Therefore, by Theorem \ref{thm:mother} it is clear that we obtain an induced
monotone action of $\widetilde{\SL}(2,\mathbb{Z})$ on $\mathsf{Q}$.
The group $\SL(2,\mathbb{Z})$ acts transitively on the set of all pairs of
co-prime integers which we interpret as primitive vectors of the lattice
$\mathbb{Z}\oplus i\mathbb{Z}\subset\mathbb{C}$. Hence, the action
of $\widetilde{\SL}(2,\mathbb{Z})$ on $\mathsf{Q}$ is transitive as
well. So, all isotropy groups are isomorphic. Finally, it is easy
to see that the isotropy group of $1\in\mathsf{Q}$ is generated by
$T_{\boldsymbol{k}(p_{0})}$.
\end{proof}
As an important consequence we obtain the following clear structure result
for the slices $\mathsf{P}(\varphi)$.
\begin{corollary}\label{cor:equiv}
The category $\mathsf{P}(\varphi)$ of semi-stable objects of phase
$\varphi\in\mathsf{Q}$ is equivalent to the category $\mathsf{P}(1)$ of
torsion sheaves. Any such equivalence restricts to an equivalence between
$\mathsf{P}(\varphi)^{s}$ and $\mathsf{P}(1)^{s}$. Under such an
equivalence, stable vector bundles correspond to structure sheaves of
smooth points. Moreover, if $\varphi\in(0,1)\cap \mathsf{Q}$,
$\mathsf{P}(\varphi)^{s}$ contains a unique torsion free sheaf which is not
locally free. It corresponds to the structure sheaf
$\boldsymbol{k}(s)\in\mathsf{P}(1)^{s}$ of the singular point.
\end{corollary}
Recall that an object $E\in\Dbcoh(\boldsymbol{E})$ is called \emph{perfect},
if it is isomorphic in the derived category to a bounded complex of locally
free sheaves of finite rank. Thus, a sheaf or shift thereof is called perfect,
if it is perfect as an object in $\Dbcoh(\boldsymbol{E})$.
If $\boldsymbol{E}$ is smooth, any object in $\Dbcoh(\boldsymbol{E})$ is
perfect. However, if $s\in\boldsymbol{E}$ is a singular point, the torsion
sheaf $\boldsymbol{k}(s)$ is not perfect.
If $\boldsymbol{E}$ is singular with one singularity $s\in\boldsymbol{E}$, the
category $\mathsf{P}(1)^{s}$ contains precisely one object which is not
perfect, the object $\boldsymbol{k}(s)$.
Hence, by Proposition \ref{prop:transit}, for any $\varphi\in\mathsf{Q}$ there
is precisely one element in $\mathsf{P}(\varphi)^{s}$ which is not perfect. We
shall refer to it as the \emph{extreme} stable element with phase
$\varphi$. So, the sheaf $\boldsymbol{k}(s)$ is the extreme stable element
with phase $1$. The extreme stable element is never locally free. A stable
object is either perfect or extreme.
We shall need the following version of Serre duality, which can be deduced
easily from standard versions:
If $E,F\in\Dbcoh(\boldsymbol{E})$ and at least one of them is perfect, then
there is a bi-functorial isomorphism
\begin{equation}
\label{wesPT:i}\Hom(E,F) \cong \Hom(F,E[1])^{\ast}.
\end{equation}
If neither of the objects is perfect, this is no longer true. For example,
$\Hom(\boldsymbol{k}(s),\boldsymbol{k}(s))\cong \boldsymbol{k}$, but
$\Hom(\boldsymbol{k}(s),\boldsymbol{k}(s)[1]) \cong
\Ext^{1}(\boldsymbol{k}(s),\boldsymbol{k}(s)) \cong \boldsymbol{k}^{2}$.
Any object $X$ in the Abelian category $\mathsf{P}(\varphi)$ has a
Jordan-H\"older filtration (JHF)
$$0\subset F_{n}X \subset \ldots \subset F_{1}X \subset F_{0}X = X$$
with stable JH-factors $J_{i}=F_{i}X/F_{i+1}X \in
\mathsf{P}(\varphi)^{s}$. The graded object $\oplus_{i=0}^{n}J_{i}$ is
determined by $X$. Observe that for any two objects $J\not\cong
J'\in\mathsf{P}(\varphi)^{s}$ we can apply Serre duality because at most one
of them is non-perfect.
\begin{corollary}\label{cor:sheaves}
\begin{enumerate}
\item\label{cor:i} If $\varphi,\psi \in \mathsf{Q}$ with $\varphi -1 < \psi
\le \varphi$ there exists $\Phi\in \widetilde{\SL}(2,\mathbb{Z})$ such that
$\Phi(\varphi)=1$ and $\Phi(\psi)\in(0,1]$.
\item\label{cor:ii} If $A,B\in\mathsf{P}(\varphi)^{s}$, then $A\cong B \iff
\Hom(A,B)\ne 0.$
\item\label{cor:iii} If $0\ne X\in\mathsf{P}(\varphi)$ and $0\ne
Y\in\mathsf{P}(\psi)$ with $\varphi < \psi < \varphi+1$, then $\Hom(X,Y)\ne
0$.
\item \label{cor:iv} If $J\in\mathsf{P}(\varphi)^{s}$ is not a JH-factor of
$X\in \mathsf{P}(\varphi)$, then $\Hom(J,X[i])=0$ for all $i\in\mathbb{Z}$.
\item \label{cor:v} If $X\in\mathsf{P}(\varphi)$ is indecomposable, all its
JH-factors are isomorphic to each other.
\item \label{cor:vi} If $X,Y\in\mathsf{P}(\varphi)$ are non-zero
indecomposable objects, both with the same JH-factor, then $\Hom(X,Y) \ne
0$.
\end{enumerate}
\end{corollary}
\begin{proof}
(\ref{cor:i}) This follows from Proposition \ref{prop:transit} because the
shift functor corresponds to an element in the centre of
$\widetilde{\SL}(2,\mathbb{Z})$ and therefore $\Phi(\mathsf{P}(\varphi)) =
\mathsf{P}(1)$ implies $\Phi(\mathsf{P}(\varphi-1)) = \mathsf{P}(0)$.
(\ref{cor:ii}) The statement is clear in case $\varphi=1$ and follows from
(\ref{cor:i}) in the general case.
(\ref{cor:iii}) Using (\ref{cor:i}) we can assume $\psi=1$, which means
that $Y$ is a coherent torsion sheaf. By Proposition \ref{prop:transit} this
implies $\varphi\in(0,1)$ and $X$ is a torsion free coherent sheaf. If
$Y\in\mathsf{P}(1)^{s}$ the statement is clear, because any torsion free
sheaf has a non-zero morphism to any $Y=\boldsymbol{k}(x)$,
$x\in\boldsymbol{E}$. If $Y\in\mathsf{P}(1)$ is arbitrary, there exists a
point $x\in\boldsymbol{E}$ and a non-zero morphism $\boldsymbol{k}(x)
\rightarrow Y$. The claim follows now from left-exactness of the functor
$\Hom(X,\,\cdot\,)$.
(\ref{cor:iv}) If $J'\in\mathsf{P}(\varphi)^{s}$ is a JH-factor of $X$, we
have $J\not\cong J'$. From (\ref{cor:ii}) and Serre duality together with
Lemma \ref{wesPT:ii} we obtain $\Hom(J,J'[i])=0$ for any
$i\in\mathbb{Z}$. Using the JHF of $X$, the claim now follows.
(\ref{cor:v}) It is easy to prove by induction that any
$X\in\mathsf{P}(\varphi)$ can be split as a finite direct sum $X\cong\oplus
X_{k}$, where each $X_{k}$ has all JH-factors isomorphic to a single element
$J_{k}\in\mathsf{P}(\varphi)^{s}$. This implies the claim.
(\ref{cor:vi}) By (\ref{cor:i}) we may assume
$\varphi(X)=\varphi(Y)=1$. This means that both objects are indecomposable
torsion sheaves with support at the singular point $s\in\boldsymbol{E}$.
Such sheaves always have an epimorphism to and a monomorphism from the
extreme object $\boldsymbol{k}(s)$, hence the claim.
\end{proof}
It is interesting and important to note that an indecomposable semi-stable
object can be perfect even though all its JH-factors are extreme. This is
made explicit in \cite{BurbanKreussler}, Section 4, in the case of the
category $\mathsf{P}(1)$ of coherent torsion sheaves. If $\boldsymbol{E}$ is
nodal, there are two kinds of indecomposable torsion sheaves with support at
the node $s\in\boldsymbol{E}$: the so-called \emph{bands} and
\emph{strings}. The bands are perfect, whereas the strings are not
perfect. Using the action of $\widetilde{\SL}(2,\mathbb{Z})$ this carries over
to all other categories $\mathsf{P}(\varphi)$ with $\varphi\in\mathsf{Q}$.
An object $X\in\mathsf{P}(\varphi)$ will be called \emph{extreme} if it
does not have a direct summand which is perfect. This implies, but is not
equivalent to, the property that all its JH-factors are extreme. An example
can be found below, see Ex.~\ref{ex:extremefactors}.
From the above we deduce that any $X\in\mathsf{P}(\varphi)$ can be split as a
direct sum $X\cong X^{e}\oplus X^{p}$ with $X^{e}$ extreme and $X^{p}$
perfect. All direct summands of the extreme part have the unique
extreme stable element with phase $\varphi$ as their JH-factors. On the
other hand, all the direct summands of $X^{p}$ are perfect and they can have
any object of $\mathsf{P}(\varphi)^{s}$ as JH-factor.
\begin{corollary}
Any coherent sheaf $\mathcal{F}$ with $\End(\mathcal{F}) = \boldsymbol{k}$
is stable.
\end{corollary}
\begin{proof}
The assumption implies that $\mathcal{F}$ is indecomposable.
If $\mathcal{F}$ were not even semi-stable, it would have at least two
HN-factors. Using Corollary \ref{cor:sheaves}, we may assume that
$\varphi_{+}(\mathcal{F})=1$. Thus, $\mathcal{F}$ is a coherent sheaf which
is neither torsion nor torsion free. This implies that there is a
non-invertible endomorphism
$\mathcal{F} \rightarrow \boldsymbol{k}(s) \rightarrow \tors(\mathcal{F})
\rightarrow \mathcal{F}$, in contradiction to the assumption. Hence,
$\mathcal{F}\in\mathsf{P}(\varphi)$ is semi-stable. Let
$\mathcal{J}\in\mathsf{P}(\varphi)$ be its JH-factor.
From Corollary \ref{cor:sheaves} (\ref{cor:vi}) we obtain a non-zero
endomorphism $\mathcal{F}\rightarrow \mathcal{J}\rightarrow \mathcal{F}$,
which can only be an isomorphism, if $\mathcal{F}\cong \mathcal{J}$, so
$\mathcal{F}$ is indeed stable.
\end{proof}
The following method can be used to visualise the structure of the category
$\Dbcoh(\boldsymbol{E})$: the vertical slices in Figure \ref{fig:slices} are
thought to correspond to the categories $\mathsf{P}(t)^{s}$ of stable objects.
\begin{figure}[hbt]
\begin{center}
\setlength{\unitlength}{10mm}
\begin{picture}(11,5)
\multiput(0,4)(0.2,0){56}{\line(1,0){0.1}}
\put(0,1){\line(1,0){11.1}}
\thicklines
\put(1,1){\line(0,1){3}}\put(1,0.8){\makebox(0,0)[t]{$2$}}
\put(4,1){\line(0,1){3}}\put(4,0.8){\makebox(0,0)[t]{$1$}}
\put(7,1){\line(0,1){3}}\put(7,0.8){\makebox(0,0)[t]{$0$}}
\put(10,1){\line(0,1){3}}\put(10,0.8){\makebox(0,0)[t]{$-1$}}
\thinlines
\put(4.9,1){\line(0,1){3}}\put(4.9,0.8){\makebox(0,0)[t]{$t$}}
\put(1.8,2.5){$\Coh_{\boldsymbol{E}}[1]$}
\put(5.3,2.5){$\Coh_{\boldsymbol{E}}$}
\put(7.6,2.5){$\Coh_{\boldsymbol{E}}[-1]$}
\end{picture}
\end{center}
\caption{slices}\label{fig:slices}
\end{figure}
They are non-empty if and only if $t\in\mathsf{Q}$, i.e.\/
$\mathbb{R}\exp(\pi it) \cap \mathbb{Z}^{2} \ne \{(0,0)\}$. A point on such a
slice represents a stable object. The extreme stable objects are those which
lie on the dashed upper horizontal line. The labelling below the picture
reflects the phases of the slices. We have chosen to let it decrease from
left to right in order to have objects with cohomology in negative degrees on
the left and with positive degrees on the right.
By Proposition \ref{prop:transit}, the group $\widetilde{\SL}(2,\mathbb{Z})$
acts on the set of all stable objects, hence it acts on such pictures.
This action sends slices to slices and acts transitively on the set of
slices with phase $t\in\mathsf{Q}$. The dashed line of extreme stable objects
is invariant under this action.
Any indecomposable object $0\ne X\in\Dbcoh(\boldsymbol{E})$ has a
\emph{shadow} in such a picture: it is the set of all stable objects which
occur as JH-factors in the HN-factors of $X$. If this set consists of more
than one point, the shadow is obtained by connecting these points by line
segments.
The following proposition shows that the shadow of an indecomposable object
which consists of more than one point is completely contained in the
extreme line.
\begin{figure}[hbt]
\begin{center}
\setlength{\unitlength}{10mm}
\begin{picture}(11,5)
\multiput(0,4)(0.2,0){56}{\line(1,0){0.1}}
\put(0,1){\line(1,0){11.1}}
\thicklines
\put(1,1){\line(0,1){3}}\put(1,0.8){\makebox(0,0)[t]{$2$}}
\put(4,1){\line(0,1){3}}\put(4,0.8){\makebox(0,0)[t]{$1$}}
\put(7,1){\line(0,1){3}}\put(7,0.8){\makebox(0,0)[t]{$0$}}
\put(10,1){\line(0,1){3}}\put(10,0.8){\makebox(0,0)[t]{$-1$}}
\thinlines
\put(1.8,2.5){$\Coh_{\boldsymbol{E}}[1]$}
\put(5.3,2.5){$\Coh_{\boldsymbol{E}}$}
\put(7.6,2.5){$\Coh_{\boldsymbol{E}}[-1]$}
\put(4,2){\circle*{0.2}}\put(4.2,2){\makebox(0,0)[l]{$X_{1}$}}
\put(8.2,3.4){\circle*{0.2}}\put(8.4,3.4){\makebox(0,0)[l]{$X_{2}$}}
\put(0.3,4){\circle*{0.2}}
\thicklines\put(0.3,4){\line(1,0){1.5}}
\put(1.8,4){\circle*{0.2}}
\thicklines\put(1.8,4){\line(1,0){0.8}}
\put(2.6,4){\circle*{0.2}}\put(1.3,4.2){\makebox(0,0)[b]{$X_{3}$}}
\put(4.3,4){\circle*{0.2}}
\thicklines\put(4.3,4){\line(1,0){1}}
\put(5.3,4){\circle*{0.2}}\put(4.8,4.2){\makebox(0,0)[b]{$X_{4}$}}
\put(6.3,4){\circle*{0.2}}\put(6.3,4.2){\makebox(0,0)[b]{$X_{5}$}}
\end{picture}
\end{center}
\caption{shadows}\label{fig:example}
\end{figure}
Figure \ref{fig:example} shows the shadows of five different
indecomposable objects:
\begin{itemize}
\item $X_{1}\in\Coh_{\boldsymbol{E}}$ an indecomposable torsion sheaf,
\item $X_{2}\in \Coh_{\boldsymbol{E}}[-1]$ the shift of an indecomposable
semi-stable locally free sheaf,
\item $X_{3}$ a genuine complex with three extreme HN-factors, one in
$\Coh_{\boldsymbol{E}}[2]$ and the other two in $\Coh_{\boldsymbol{E}}[1]$,
\item $X_{4}$ an indecomposable torsion free sheaf which is not semi-stable,
\item $X_{5}\in\Coh_{\boldsymbol{E}}$ an indecomposable and semi-stable
torsion free sheaf which could be perfect or not (a band or a string in the
language of representation theory).
\end{itemize}
The shadow of an indecomposable object is a single point if and only if this
object is semi-stable.
\begin{proposition}\label{prop:extreme}
Let $X \in \Dbcoh(\boldsymbol{E})$ be an indecomposable object which is not
semi-stable. Then, all HN-factors of $X$ are extreme.
\end{proposition}
\begin{proof}
Let
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] &
X\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
be a HNF of $X$. If the HN-factor $A_{i}$ were not
extreme, it could be split into a direct sum $A_{i} \cong A_{i}'
\oplus A_{i}''$ with $0\ne A_{i}'$ perfect and $A_{i}',
A_{i}''\in\mathsf{P}(\varphi_{i})$.
Because $\varphi_{-}(F_{i+1}X) > \varphi_{i}=\varphi(A_{i}')$, Lemma
\ref{wesPT:ii} and Serre duality imply
$$\Hom(A_{i}', F_{i+1}X[1]) \cong \Hom(F_{i+1}X, A_{i}')^{\ast} = 0.$$
Hence, we can apply Lemma \ref{lem:PengXiao} to the distinguished triangle
$$F_{i+1}X \rightarrow F_{i}X \rightarrow A_{i}
\stackrel{+}{\longrightarrow}$$
and obtain a decomposition $F_{i}X \cong F_{i}'X \oplus A_{i}'$.
We proceed by descending induction on $j\le i$ to show that there exist
decompositions $F_{j}X \cong F_{j}'X \oplus A_{i}'$. This is
obtained from Lemma \ref{lem:PengXiao} applied to the distinguished triangle
$$F_{j}'X\oplus A_{i}' \rightarrow F_{j-1}X \rightarrow A_{j-1}
\stackrel{+}{\longrightarrow}$$
and using Lemma \ref{wesPT:ii}, Serre duality and
$\varphi(A_{i}') > \varphi(A_{j-1})$ to get
$$\Hom(A_{j-1}, A_{i}'[1]) \cong \Hom(A_{i}', A_{j-1})^{\ast} = 0.$$
We obtain a decomposition $X=F_{0}X \cong F_{0}'X \oplus A_{i}'$ in which
we have $A_{i}'\ne 0$. Because $X$ was assumed to be indecomposable, this
would force $X\cong A_{i}'$, which is impossible because $A_{i}'$ is
semi-stable and $X$ is not. This contradiction shows that all HN-factors
$A_{i}$ are necessarily extreme.
\end{proof}
\begin{corollary}\label{cor:types}
There exist four types of indecomposable objects in the category
$\Coh_{\boldsymbol{E}}$:
\begin{enumerate}
\item \label{type:i} semi-stable with perfect JH-factor;
\item \label{type:ii} semi-stable, perfect but its JH-factor extreme;
\item \label{type:iii} semi-stable and extreme;
\item \label{type:iv} not semi-stable, with all its HN-factors extreme.
\end{enumerate}
\end{corollary}
A similar statement is true for $\Dbcoh(\boldsymbol{E})$. In this case, the
objects of types (\ref{type:i}), (\ref{type:ii}) and (\ref{type:iii}) are
shifts of coherent sheaves, whereas genuine complexes are possible for objects
of type (\ref{type:iv}).
Types (\ref{type:ii}), (\ref{type:iii}) and (\ref{type:iv}) do not occur
in the smooth case.
Examples of type (\ref{type:i}) are simple vector bundles and structure sheaves
$\boldsymbol{k}(x)$ of smooth points $x\in\boldsymbol{E}$. All indecomposable
objects with a shadow not on the extreme line fall into type (\ref{type:i}).
Under the equivalences of Corollary \ref{cor:equiv}, indecomposable semi-stable
locally free sheaves with extreme JH-factor correspond, in the nodal case,
precisely to those torsion sheaves with support at the node $s$, which are
called bands, see \cite{BurbanKreussler}.
Examples of type (\ref{type:iii}) are the stable coherent sheaves which are not
locally free and the structure sheaf $\boldsymbol{k}(s)$ of the singular point
$s\in\boldsymbol{E}$. Moreover, in the nodal case, the torsion sheaves with
support at $s$, which are called strings in \cite{BurbanKreussler}, are of
type (\ref{type:iii}) as well. Examples of objects of type (\ref{type:iv}) are
given below.
\begin{example}
We shall construct torsion free sheaves on nodal $\boldsymbol{E}$ with an
arbitrary finite number of HN-factors. This implies that the number of points
in a shadow of an indecomposable object in $\Dbcoh(\boldsymbol{E})$ is not
bounded.
Recall from \cite{DrozdGreuel} that any indecomposable torsion free sheaf
which is not locally free is isomorphic to a sheaf
$\mathcal{S}(\boldsymbol{d}) = p_{n\ast} \mathcal{L}(\boldsymbol{d})$. We use
here the notation of \cite{BurbanKreussler}, Section 3.5, so that $p_{n}:
\boldsymbol{I_{n}} \rightarrow \boldsymbol{E}$ denotes a certain morphism
from the chain $\boldsymbol{I_{n}}$ of $n$ smooth rational curves to the
nodal curve $\boldsymbol{E}$. If $\boldsymbol{d}=(d_{1},\ldots,d_{n})
\in\mathbb{Z}^{n}$, we denote by $\mathcal{L}(\boldsymbol{d})$ the line
bundle on $\boldsymbol{I_{n}}$ which has degree $d_{\nu}$ on the $\nu$-th
component of $\boldsymbol{I_{n}}$. We know $\rk(\mathcal{S}(\boldsymbol{d}))
= n$ and $\deg(\mathcal{S}(\boldsymbol{d})) = 1+\sum d_{\nu}$. We obtain,
in particular, that for any $\varphi\in\mathsf{Q} \cap (0,1)$ there exist
$n\in\mathbb{Z}$ and $\boldsymbol{d}(\varphi)\in\mathbb{Z}^{n}$ such that
$\mathcal{S}(\boldsymbol{d}(\varphi))$ is the unique extreme element in
$\mathsf{P}(\varphi)^{s}$. On the other hand, if $\boldsymbol{d}'\in
\mathbb{Z}^{n'}, \boldsymbol{d}''\in \mathbb{Z}^{n''}$ and
$\boldsymbol{d} = (\boldsymbol{d}_{+}', \boldsymbol{d}'')\in
\mathbb{Z}^{n'+n''}$, where $\boldsymbol{d}_{+}'$ is obtained from
$\boldsymbol{d}'$ by adding $1$ to the last component, we have an exact
sequence
$$0\rightarrow \mathcal{S}(\boldsymbol{d}') \rightarrow
\mathcal{S}(\boldsymbol{d}) \rightarrow
\mathcal{S}(\boldsymbol{d}'') \rightarrow 0,$$
see, for example, \cite{Mozgovoy}. Hence, if we start with a sequence
$0<\varphi_{0} <\varphi_{1}< \ldots <\varphi_{m} <1$ where
$\varphi_{\nu}\in\mathsf{Q}$ and define
$$\boldsymbol{d}^{(m)} = \boldsymbol{d}(\varphi_{m})\quad\text{ and }\quad
\boldsymbol{d}^{(\nu)} =
(\boldsymbol{d}_{+}^{(\nu+1)},\boldsymbol{d}(\varphi_{\nu})) \text{ for }
m > \nu \ge 0,$$
we obtain an indecomposable torsion free sheaf
$\mathcal{S}(\boldsymbol{d}^{(0)})$ whose HN-factors are the extreme stable
sheaves $\mathcal{S}(\boldsymbol{d}(\varphi_{\nu})) \in
\mathsf{P}(\varphi_{\nu}), 0\le \nu \le m$. The HNF of this sheaf is given by
$$\mathcal{S}(\boldsymbol{d}^{(m)}) \subset
\mathcal{S}(\boldsymbol{d}^{(m-1)}) \subset \ldots \subset
\mathcal{S}(\boldsymbol{d}^{(0)}).$$
The sheaf $\mathcal{S}(\boldsymbol{d}^{(0)})$ is of type (\ref{type:iv}) and
not perfect.
\end{example}
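As a sanity check, using only the rank and degree formulas recalled above,
both invariants are additive in the exact sequence employed in this
construction:
$$\rk\mathcal{S}(\boldsymbol{d}) = n'+n'' =
\rk\mathcal{S}(\boldsymbol{d}') + \rk\mathcal{S}(\boldsymbol{d}'')$$
and
$$\deg\mathcal{S}(\boldsymbol{d}) =
1 + \Bigl(\sum_{\nu}d_{\nu}'+1\Bigr) + \sum_{\nu}d_{\nu}'' =
\deg\mathcal{S}(\boldsymbol{d}') + \deg\mathcal{S}(\boldsymbol{d}''),$$
the extra summand $1$ coming from the shifted last entry of
$\boldsymbol{d}_{+}'$.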
\begin{example}\label{ex:extremefactors}
Suppose $\boldsymbol{E}$ is nodal and let
$\pi:C_{2}\rightarrow\boldsymbol{E}$ be an \'etale morphism of degree
two, where $C_{2}$ denotes a reducible curve which has two components, both
isomorphic to $\mathbb{P}^{1}$ and which intersect transversally at two
distinct points.
By $i_{\nu}:\mathbb{P}^{1}\rightarrow \boldsymbol{E},\;\nu=1,2$ we denote the
morphisms which are induced by the embeddings of the two components of
$C_{2}$.
There is a $\boldsymbol{k}^{\times}$-family of
line bundles on $C_{2}$, whose restriction to one component is
$\mathcal{O}_{\mathbb{P}^{1}}(-2)$ and to the other is
$\mathcal{O}_{\mathbb{P}^{1}}(2)$. The element in $\boldsymbol{k}^{\times}$
corresponds to a gluing parameter over one of the two singularities of
$C_{2}$. If $\mathcal{L}$ denotes one such line bundle,
$\mathcal{E}:=\pi_{\ast}\mathcal{L}$ is an
indecomposable vector bundle of rank two and degree zero on $\boldsymbol{E}$.
Let us fix notation so that $i_{1}^{\ast}\mathcal{E}
\cong \mathcal{O}_{\mathbb{P}^{1}}(-2)$ and $i_{2}^{\ast}\mathcal{E} \cong
\mathcal{O}_{\mathbb{P}^{1}}(2)$. There is an exact sequence of coherent
sheaves on $\boldsymbol{E}$
\begin{equation}\label{eq:nonssvb}
0\rightarrow i_{2\ast} \mathcal{O}_{\mathbb{P}^{1}} \rightarrow
\mathcal{E}
\rightarrow i_{1\ast} \mathcal{O}_{\mathbb{P}^{1}}(-2) \rightarrow 0.
\end{equation}
Because the torsion free sheaves $i_{2\ast} \mathcal{O}_{\mathbb{P}^{1}}$ and
$i_{1\ast} \mathcal{O}_{\mathbb{P}^{1}}(-2)$ have rank one and
$\boldsymbol{E}$ is irreducible, they are stable. Because $\varphi(i_{2\ast}
\mathcal{O}_{\mathbb{P}^{1}}) = 3/4$ and $\varphi(i_{1\ast}
\mathcal{O}_{\mathbb{P}^{1}}(-2)) = 1/4$, Theorem \ref{thm:uniqueHNF}
implies that the HNF of $\mathcal{E}$ is given by the
exact sequence (\ref{eq:nonssvb}). The HN-factors are the two torsion
free sheaves of rank one $i_{2\ast} \mathcal{O}_{\mathbb{P}^{1}}$ and
$i_{1\ast} \mathcal{O}_{\mathbb{P}^{1}}(-2)$, which are not locally
free. These are the extreme stable elements with phases $3/4$ and $1/4$
respectively. Therefore, the indecomposable vector bundle $\mathcal{E}$ is a
perfect object of type (\ref{type:iv}) which satisfies
$\varphi_{-}(\mathcal{E})=1/4$ and $\varphi_{+}(\mathcal{E})=3/4$.
\end{example}
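A remark, not needed for the argument: rank and degree are additive along
the exact sequence (\ref{eq:nonssvb}), so $\rk\mathcal{E} = 1+1 = 2$ and
$$\deg\bigl(i_{2\ast} \mathcal{O}_{\mathbb{P}^{1}}\bigr) +
\deg\bigl(i_{1\ast} \mathcal{O}_{\mathbb{P}^{1}}(-2)\bigr) =
\deg\mathcal{E} = 0,$$
so the two HN-factors are rank-one torsion free sheaves of opposite
degrees. If the central charge is normalised as
$Z(X) = -\deg X + i\rk X$ (we assume this standard normalisation here),
this is reflected in the symmetry of their phases $3/4$ and $1/4$ about
$1/2$.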
\begin{remark}\label{rem:notperfect}
This example shows that the full sub-category of perfect complexes in the
category $\Dbcoh(\boldsymbol{E})$ is not closed under taking
Harder-Narasimhan factors. We interpret this as an indication that the
derived category of perfect complexes is not an appropriate object for
homological mirror symmetry on singular Calabi-Yau varieties.
\end{remark}
\begin{remark}
It seems plausible that methods similar to those of this section could be
applied to study the derived category of representations of certain derived
tame associative algebras. These may include gentle algebras, skew-gentle
algebras and degenerated tubular algebras.
The study of Harder-Narasimhan filtrations in conjunction with the action of
the group of exact auto-equivalences of the derived category may provide new
insight into the combinatorics of indecomposable objects in these derived
categories.
\end{remark}
\begin{proposition}\label{wesPT}
Suppose $X,Y\in \Dbcoh(\boldsymbol{E})$ are non-zero.
\begin{enumerate}
\item \label{wesPT:iii} If $\varphi_{-}(X) <
\varphi_{+}(Y) < \varphi_{-}(X)+1$, then $\Hom(X,Y)\ne 0$.
\item \label{wesPT:iv} If $X$ and $Y$ are indecomposable objects which are
not of type (\ref{type:i}) in Corollary \ref{cor:types} and which satisfy
$\varphi_{-}(X) = \varphi_{+}(Y)$, then $\Hom(X,Y)\ne 0$.
\end{enumerate}
\end{proposition}
\begin{proof}
If $X$ and $Y$ are semi-stable objects, the claim
(\ref{wesPT:iii}) was proved in Corollary \ref{cor:sheaves} (\ref{cor:iii}).
Similarly, (\ref{wesPT:iv}) for two semi-stable objects follows from
Corollary \ref{cor:sheaves} (\ref{cor:vi}), because there is only one
non-perfect object in $\mathsf{P}(\varphi)^{s}$.
For the rest of the proof we treat both cases, (\ref{wesPT:iii}) and
(\ref{wesPT:iv}) simultaneously.
For the proof of (\ref{wesPT:iv}) we keep in mind that, by Proposition
\ref{prop:extreme}, if the object is indecomposable but not semi-stable,
then none of its HN-factors has a perfect summand.
If $X\in\mathsf{P(\varphi)}$ is semi-stable but $Y\in\Dbcoh(\boldsymbol{E})$
is arbitrary, we let
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{m}Y \ar[rr] \ar[dl]_{\cong}&& F_{m-1}Y \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}Y \ar[rr] && F_{0}Y \ar@{=}[r]\ar[dl] &
Y\\
& B_{m} \ar[lu]^{+} && B_{m-1} \ar[lu]^{+} & & & & & & B_0 \ar[lu]^{+}&
}\]
be a HNF of $Y$. As $\varphi(B_{m})=\varphi_{+}(Y)$, we already know
$\Hom(X,B_{m})\ne 0$.
By assumption, we have $\varphi(B_{i}[-1]) = \varphi(B_{i})
-1 \le \varphi_{+}(Y)-1 < \varphi(X)$. Hence, by Lemma \ref{wesPT:ii},
$\Hom(X, B_{i}[-1]) =0$ and the cohomology sequence of the distinguished
triangle $F_{i+1}Y\rightarrow F_{i}Y\rightarrow B_{i}
\stackrel{+}{\rightarrow}$ provides an inclusion $\Hom(X, F_{i+1}Y) \subset
\Hom(X, F_{i}Y)$. This implies $0\ne \Hom(X,B_{m})\subset \Hom(X,Y)$.
Finally, in the general case, we let
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] &
X\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
be a HNF of $X$. As $\varphi(A_{0})=\varphi_{-}(X)$ we have
$\Hom(A_{0},Y)\ne 0$.
Because $\varphi_{-}(F_{1}X[1]) = \varphi_{-}(F_{1}X) +1 =
\varphi(A_{1}) +1 > \varphi_{-}(X)+1 > \varphi_{+}(Y)$, Lemma
\ref{wesPT:ii} implies $\Hom(F_{1}X[1], Y)=0$. The distinguished triangle
$F_{1}X \rightarrow X \rightarrow A_{0} \stackrel{+}{\rightarrow}$ gives us
now an inclusion $0\ne \Hom(A_{0},Y) \subset \Hom(X,Y)$ and so the claim.
\end{proof}
In \cite{YangBaxter}, Polishchuk asked for the classification of all
spherical objects in the bounded derived category of a singular projective
curve of arithmetic genus one. Below, we shall solve this problem for
irreducible curves.
Let $\boldsymbol{E}$ be an irreducible projective curve of arithmetic genus
one over our base field $\boldsymbol{k}$.
Recall that in this case an object $X\in\Dbcoh(\boldsymbol{E})$ is
\emph{spherical} if
$$X \text{ is perfect and }\quad
\Hom(X,X[i]) \cong
\begin{cases}
\boldsymbol{k} & \text{if }\; i \in \{0,1\} \\
0 & \text{if }\; i \not\in \{0,1\}
\end{cases}
$$
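For instance, the structure sheaf $\mathcal{O}_{\boldsymbol{E}}$ is
spherical: it is perfect, and because $\boldsymbol{E}$ has arithmetic genus
one,
$$\Hom(\mathcal{O}_{\boldsymbol{E}},\mathcal{O}_{\boldsymbol{E}}[i]) \cong
H^{i}(\boldsymbol{E},\mathcal{O}_{\boldsymbol{E}}) \cong
\begin{cases}
\boldsymbol{k} & \text{if }\; i \in \{0,1\} \\
0 & \text{otherwise.}
\end{cases}
$$
This particular example is used in the proof of the proposition below.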
\begin{proposition}\label{prop:spherical}
Let $\boldsymbol{E}$ be an irreducible projective curve of arithmetic genus
one and $X\in\Dbcoh(\boldsymbol{E})$. Then the following are equivalent:
\begin{enumerate}
\item\label{spher:i} $X$ is spherical;
\item \label{spher:ii}$\Hom(X,X[i]) \cong
\begin{cases}
\boldsymbol{k} & \text{if }\; i = 0 \\
0 & \text{if }\; i = 2 \;\text{ or }\; i<0;
\end{cases}$
\item\label{spher:iii} $X$ is perfect and stable;
\item\label{spher:iv} there exists $n\in\mathbb{Z}$ such that $X[n]$ is
isomorphic to a simple vector bundle or to a torsion sheaf of length one
which is supported at a smooth point of $\boldsymbol{E}$.
\end{enumerate}
In particular, the group of exact auto-equivalences of
$\Dbcoh(\boldsymbol{E})$ acts transitively on the set of all spherical
objects.
\end{proposition}
\begin{proof}
The implication (\ref{spher:i})$\Rightarrow$(\ref{spher:ii}) is obvious.
Let us prove (\ref{spher:ii})$\Rightarrow$(\ref{spher:iii}).
First, we observe that $\Hom(X,X) \cong \boldsymbol{k}$ implies that
$X$ is indecomposable. Suppose $X$ is not semi-stable. This is equivalent
to $\varphi_{+}(X)>\varphi_{-}(X)$. By Proposition \ref{prop:extreme}
we know that all HN-factors of $X$ are extreme.
Let $M\ge 0$ be the unique integer with $M\le \varphi_{+}(X) -
\varphi_{-}(X) < M+1$.
If $M< \varphi_{+}(X) - \varphi_{-}(X) <M+1$,
Proposition \ref{wesPT} (\ref{wesPT:iii}) implies $\Hom(X,X[-M])\ne
0$. Under the assumption (\ref{spher:ii}), this is possible only if $M=0$.
On the other hand, if $M=\varphi_{+}(X) - \varphi_{-}(X)$, we obtain from
Proposition \ref{wesPT} (\ref{wesPT:iv}) that $\Hom(X,X[-M]) \ne 0$.
Again, this implies $M=0$, which contradicts $\varphi_{+}(X)>\varphi_{-}(X)$.
So, we have $0< \varphi_{+}(X) - \varphi_{-}(X) <1$.
If we apply the functor $\Hom(\,\cdot\,,X)$ to
$F_{1}X\stackrel{u}{\rightarrow} X \rightarrow A_{0}
\stackrel{+}{\longrightarrow}$,
the rightmost distinguished triangle of the HNF of $X$, we obtain the exact
sequence
$$\Hom(F_{1}X[1],X) \rightarrow \Hom(A_{0},X) \rightarrow \Hom(X,X)
\rightarrow \Hom(F_{1}X,X),$$
in which the leftmost term $\Hom(F_{1}X[1],X)=0$ by Lemma \ref{wesPT:ii},
because $\varphi_{-}(F_{1}X[1]) > \varphi_{-}(X)+1>\varphi_{+}(X)$. The third
morphism in this sequence is not the zero map, as it sends $\mathsf{Id}_{X}$
to $u\ne 0$. Because $\Hom(X,X)$ is one dimensional, this is only possible
if $\Hom(A_{0},X)=0$. But Proposition \ref{wesPT} (\ref{wesPT:iii}) and
$\varphi(A_{0})<\varphi_{+}(X)< \varphi(A_{0})+1$ imply $\Hom(A_{0},X)\ne 0$.
This contradiction shows that $X$ must be semi-stable.
We observed earlier that all the
JH-factors of an indecomposable semi-stable object are isomorphic to each
other. Therefore, any indecomposable semi-stable object which is not stable
has a space of endomorphisms of dimension at least two. So, we conclude
$X\in\mathsf{P}(\varphi)^{s}$ for some $\varphi\in\mathbb{R}$.
Because $\Hom(\boldsymbol{k}(s),\boldsymbol{k}(s)[2]) \cong
\Ext^{2}(\boldsymbol{k}(s),\boldsymbol{k}(s))\ne 0$, the transitivity of the
action of $\widetilde{\SL}(2,\mathbb{Z})$ on the set $\mathsf{Q}$ implies
that none of the extreme stable objects satisfies the condition
(\ref{spher:ii}). Hence, $X$ is perfect and stable.
To prove (\ref{spher:iii})$\Rightarrow$(\ref{spher:i}), we observe that
the group of automorphisms of the curve $\boldsymbol{E}$ acts
transitively on the regular locus $\boldsymbol{E}\setminus\{s\}$. Hence, by
Proposition \ref{prop:transit}, the group of auto-equivalences of
$\Dbcoh(\boldsymbol{E})$ acts transitively on the set of all perfect stable
objects. Because, for example, the structure sheaf
$\mathcal{O}_{\boldsymbol{E}}$ is spherical, it is now clear that all
perfect stable objects are indeed spherical and that the group of
exact auto-equivalences of $\Dbcoh(\boldsymbol{E})$ acts transitively on the
set of all spherical objects.
To show the equivalence with (\ref{spher:iv}), it remains to recall that any
perfect coherent torsion free sheaf on $\boldsymbol{E}$ is locally free. This
follows easily from the Auslander-Buchsbaum formula because we are working in
dimension one.
\end{proof}
\section{Description of $t$-structures in the case of a
singular Weierstra\ss{} curve}\label{sec:tstruc}
The main result of this section is a description of all $t$-structures on the
derived category of a singular Weierstra\ss{} curve $\boldsymbol{E}$. This
generalises results of \cite{GRK} and \cite{Pol1}, where the smooth case was
studied. As an application, we obtain a description of the group
$\Aut(\Dbcoh(\boldsymbol{E}))$ of all exact auto-equivalences of
$\Dbcoh(\boldsymbol{E})$. A second application is a description of Bridgeland's
space of stability conditions on $\boldsymbol{E}$.
Recall that a $t$-structure on a triangulated category $\mathsf{D}$ is a pair
of full subcategories $(\mathsf{D}^{\le 0}, \mathsf{D}^{\ge 0})$ such that,
with the notation $\mathsf{D}^{\ge n} := \mathsf{D}^{\ge
0}[-n]$ and $\mathsf{D}^{\le n} := \mathsf{D}^{\le 0}[-n]$ for any $n\in
\mathbb{Z}$, the following holds:
\begin{enumerate}
\item $\mathsf{D}^{\le 0} \subset \mathsf{D}^{\le 1}$ and $\mathsf{D}^{\ge 1}
\subset \mathsf{D}^{\ge 0}$;
\item $\Hom(\mathsf{D}^{\le 0}, \mathsf{D}^{\ge 1}) = 0$;
\item\label{def:tiii} for any object $X \in \mathsf{D}$ there exists a
distinguished triangle
$$A \rightarrow X \rightarrow B \stackrel{+}{\longrightarrow}$$
with $A \in \mathsf{D}^{\le 0}$ and $B \in \mathsf{D}^{\ge 1}.$
\end{enumerate}
If $(\mathsf{D}^{\le 0}, \mathsf{D}^{\ge 0})$ is a $t$-structure, then
$\mathsf{A} = \mathsf{D}^{\le 0} \cap \mathsf{D}^{\ge 0}$ has the structure
of an Abelian category. It is called the \emph{heart} of the $t$-structure.
In this way, $t$-structures on the derived category
$\Dbcoh(\boldsymbol{E})$ lead to interesting Abelian categories embedded into
it. The natural $t$-structure on $\Dbcoh(\boldsymbol{E})$ has $\mathsf{D}^{\le
n}$ equal to the full subcategory formed by all complexes with non-zero
cohomology in degrees less than or equal to $n$ only. Similarly, the full subcategory
$\mathsf{D}^{\ge n}$ consists of all complexes $X$ with $H^{i}(X)=0$ for all
$i<n$. The heart of the natural $t$-structure is the Abelian category
$\Coh_{\boldsymbol{E}}$.
In addition to the natural $t$-structure we also have many interesting
$t$-structures on $\Dbcoh(\boldsymbol{E})$.
In order to describe them, we introduce the following notation. We continue to
work with the notion of stability and the notation introduced in the previous
section.
If $\mathsf{P}\subset\mathsf{P}(\theta)^{s}$ is a subset, we denote by
$\mathsf{D}[\mathsf{P}, \infty)$ the full subcategory of
$\Dbcoh(\boldsymbol{E})$ which is defined as follows:
$X\in\Dbcoh(\boldsymbol{E})$ is in $\mathsf{D}[\mathsf{P},
\infty)$ if and only if $X=0$ or every HN-factor of $X$ which has at least
one JH-factor not contained in $\mathsf{P}$ has phase $\varphi>\theta$.
Similarly, $\mathsf{D}(-\infty,\mathsf{P}]$ denotes the category which is
generated by $\mathsf{P}$ and all $\mathsf{P}(\varphi)$ with
$\varphi<\theta$.
If $\mathsf{P}=\mathsf{P}(\theta)^{s}$ we may abbreviate
$\mathsf{D}[\theta,\infty) = \mathsf{D}[\mathsf{P}, \infty)$ and
$\mathsf{D}(-\infty,\theta] = \mathsf{D}(-\infty,\mathsf{P}]$.
Similarly, if $\mathsf{P}=\emptyset$ we use
the abbreviations $\mathsf{D}(\theta,\infty)$ and $\mathsf{D}(-\infty,\theta)$.
For any open, closed or half-closed interval $I\subset\mathbb{R}$ we define
the full subcategories $\mathsf{D}I$ precisely in the same way. Thus, an
object $0\ne X\in\Dbcoh(\boldsymbol{E})$ is in $\mathsf{D}I$ if and only if
$\varphi_{-}(X)\in I$ and $\varphi_{+}(X)\in I$.
\begin{proposition}\label{prop:texpl}
Let $\theta\in\mathbb{R}$ and $\mathsf{P}(\theta)^{-} \subset
\mathsf{P}(\theta)^{s}$ be arbitrary. Denote by $\mathsf{P}(\theta)^{+} =
\mathsf{P}(\theta)^{s} \setminus \mathsf{P}(\theta)^{-}$ the complement of
$\mathsf{P}(\theta)^{-}$. Then,
$$\mathsf{D}^{\le0} := \mathsf{D}[\mathsf{P}(\theta)^{-}, \infty)$$
defines a $t$-structure on $\Dbcoh(\boldsymbol{E})$ with
$$\mathsf{D}^{\ge1} := \mathsf{D}(-\infty,\mathsf{P}(\theta)^{+}].$$
The heart $\mathsf{A}(\theta,\mathsf{P}(\theta)^{-})$ of it is
the category $\mathsf{D}[\mathsf{P}(\theta)^{-},
\mathsf{P}(\theta)^{+}[1]]$, which consists of those objects
$X\in\Dbcoh(\boldsymbol{E})$ whose HN-factors either have
phase $\varphi\in(\theta,\theta+1)$ or have all their JH-factors in
$\mathsf{P}(\theta)^{-}$ or $\mathsf{P}(\theta)^{+}[1]$.
\end{proposition}
\begin{proof}
The only non-trivial property which deserves a proof is (\ref{def:tiii}) in
the definition of $t$-structure. Given $X\in \Dbcoh(\boldsymbol{E})$, we
have to show that there exists a distinguished triangle $A\rightarrow X
\rightarrow B \stackrel{+}{\rightarrow}$ with $A\in \mathsf{D}^{\le0}$ and
$B\in \mathsf{D}^{\ge1}$. In order to construct it, let
\[\xymatrix@C=.5em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] &
X\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
be the HNF of $X$. Because $\varphi(A_{i+1})>\varphi(A_{i})$ for all $i$,
there exists an integer $k$, $0\le k \le n+1$ such that $\varphi(A_{k})\ge
\theta >\varphi(A_{k-1})$. If $\varphi(A_{k})>\theta$, this implies
$A_{i}\in \mathsf{D}^{\le0}$ if $i\ge k$, and $A_{i}\in \mathsf{D}^{\ge1}$
if $i<k$. In particular, $F_{k}X\in \mathsf{D}^{\le0}$. In this case, we
define $A:=F_{k}X$ and let $A=F_{k}X \rightarrow X$ be the composition of
the morphisms in the HNF.
If, however, $\varphi(A_{k})=\theta$, there is a splitting $A_{k}\cong
A_{k}^{-} \oplus A_{k}^{+}$ such that all JH-factors of $A_{k}^{-}$ (resp.\/
$A_{k}^{+}$) are in $\mathsf{P}(\theta)^{-}$ (resp.\/
$\mathsf{P}(\theta)^{+}$).
Now, we apply Lemma \ref{lem:connect} to the distinguished triangles
$F_{k+1}X \stackrel{f}{\longrightarrow} F_{k}X \longrightarrow A_{k}
\stackrel{+}{\longrightarrow}$
and
$A_{k}^{-} \longrightarrow A_{k} \longrightarrow A_{k}^{+}
\stackrel{+}{\longrightarrow}$, given by the splitting of $A_{k}$,
to obtain a factorisation $F_{k+1}X \rightarrow A \rightarrow F_{k}X$ of
$f$ and two distinguished triangles
$$\xymatrix@C=.5em{F_{k+1}X \ar[rr] && A \ar[dl]\ar[rr] && F_{k}X.\ar[dl]\\
& A_{k}^{-} \ar[ul]^{+} && A_{k}^{+}\ar[ul]^{+}}$$
Part of the given HNF of $X$, together with the left one of these two
triangles, forms a HNF of $A$, whence $A\in \mathsf{D}^{\le0}$.
Again, we let $A\rightarrow X$
be obtained by composition with the morphisms in the HNF of $X$.
In any case, we choose a distinguished triangle $A\rightarrow X \rightarrow B
\stackrel{+}{\rightarrow}$, where $A\rightarrow X$ is the morphism chosen
before. From Lemma \ref{lem:split} or Remark \ref{rem:split} we obtain $B\in
\mathsf{D}^{\ge1}$. This proves the proposition.
\end{proof}
We shall also need the following standard result.
\begin{lemma}\label{lem:tsummands}
Let $(\mathsf{D}^{\le 0}, \mathsf{D}^{\ge 0})$ be a $t$-structure on a
triangulated category. If $X \oplus Y \in \mathsf{D}^{\le 0}$ then
$X \in \mathsf{D}^{\le 0}$ and $Y \in \mathsf{D}^{\le 0}$.
The corresponding statement holds for $\mathsf{D}^{\ge 0}$.
\end{lemma}
\begin{proof}
Let $A \stackrel{f}{\longrightarrow} X \stackrel{g}{\longrightarrow} B
\stackrel{+}{\longrightarrow}$ be a distinguished triangle with
$A\in\mathsf{D}^{\le 0}$ and $B\in\mathsf{D}^{\ge 1}$, which exists due to
the definition of a $t$-structure. If $X\not\in \mathsf{D}^{\le 0}$, we
necessarily have $g\ne 0$ and $B \ne 0$.
Because $\Hom(\mathsf{D}^{\le 0}, \mathsf{D}^{\ge 1}) = 0$, the composition $X
\oplus Y \stackrel{p}{\longrightarrow} X \stackrel{g}{\longrightarrow} B$,
in which $p$ denotes the natural projection, must be zero. If
$i:X\rightarrow X\oplus Y$ denotes the canonical morphism, we
obtain $g=g\circ p\circ i=0$, a contradiction. In the same way it follows
that $Y\in\mathsf{D}^{\le 0}$.
\end{proof}
Recall that an Abelian category is called \emph{Noetherian} if every sequence
of epimorphisms stabilises; that is, for any sequence of epimorphisms
$f_{k}:A_{k}\rightarrow A_{k+1}$ there exists an integer $k_{0}$ such that
$f_{k}$ is an isomorphism for all $k\ge k_{0}$.
\begin{lemma}\label{lem:heart}
The heart $\mathsf{A}(\theta,\mathsf{P}(\theta)^{-})$ of the $t$-structure,
which was described in Proposition \ref{prop:texpl}, is Noetherian if and
only if $\mathsf{P}(\theta) \ne \{0\}$ and $\mathsf{P}(\theta)^{-} =
\emptyset$. In this case, $\mathsf{A}(\theta, \emptyset) = \mathsf{D}(\theta,
\theta +1]$.
\end{lemma}
\begin{proof}
If $\mathsf{P}(\theta) = \{0\}$ then
$\mathsf{A}(\theta,\mathsf{P}(\theta)^{-}) =
\mathsf{D}(\theta,\theta+1)$. This category is not Noetherian.
To prove this, we follow the proof of Polishchuk in the smooth case
\cite{Pol1}, Proposition 3.1.
We are going to show that for any non-zero locally free shifted sheaf $E \in
\mathsf{D}(\theta, \theta +1)$ there exist a locally free shifted sheaf $F$
and an epimorphism $E\twoheadrightarrow F$ in $\mathsf{D}(\theta, \theta +1)$
which is not an isomorphism. This suffices to show that
$\mathsf{D}(\theta, \theta +1)$ is not Noetherian.
By applying an appropriate shift, we may assume $0<\theta<1$. Under this
assumption, for every stable coherent sheaf $G$ we have
\begin{align*}
G\in\mathsf{D}(\theta, \theta +1) &\iff \theta<\varphi(G)\le 1\\
G[1]\in\mathsf{D}(\theta, \theta +1) &\iff 0<\varphi(G)< \theta.
\end{align*}
For any two objects $X,Y\in\Dbcoh(\boldsymbol{E})$ we define the Euler form
to be
$$\langle X,Y \rangle = \rk(X)\deg(Y) - \deg(X)\rk(Y)$$
which is the imaginary part of $\overline{Z(X)}Z(Y)$. If $X$ and $Y$ are
coherent sheaves and one of them is perfect, we have
$$\langle X,Y \rangle = \chi(X,Y) := \dim\Hom(X,Y) - \dim \Ext^{1}(X,Y).$$
This remains true, if we apply arbitrary shifts to the sheaves $X,Y$, where
we understand $\chi(X,Y)=\sum_{\nu} (-1)^{\nu} \dim \Hom(X,Y[\nu]).$
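As a simple illustration of these formulas, consider the structure sheaf and
the skyscraper sheaf of a smooth point $x\in\boldsymbol{E}$. With
$\rk(\boldsymbol{k}(x))=0$ and $\deg(\boldsymbol{k}(x))=1$ we get
$$\langle \mathcal{O}_{\boldsymbol{E}},\boldsymbol{k}(x) \rangle
= \rk(\mathcal{O}_{\boldsymbol{E}})\deg(\boldsymbol{k}(x))
- \deg(\mathcal{O}_{\boldsymbol{E}})\rk(\boldsymbol{k}(x))
= 1\cdot 1 - 0\cdot 0 = 1,$$
which agrees with $\chi(\mathcal{O}_{\boldsymbol{E}},\boldsymbol{k}(x)) =
\dim\Hom(\mathcal{O}_{\boldsymbol{E}},\boldsymbol{k}(x)) -
\dim\Ext^{1}(\mathcal{O}_{\boldsymbol{E}},\boldsymbol{k}(x)) = 1-0 = 1$,
because $\Hom(\mathcal{O}_{\boldsymbol{E}},\boldsymbol{k}(x)) =
H^{0}(\boldsymbol{k}(x)) \cong \boldsymbol{k}$ and
$\Ext^{1}(\mathcal{O}_{\boldsymbol{E}},\boldsymbol{k}(x)) =
H^{1}(\boldsymbol{k}(x)) = 0$ for the torsion sheaf $\boldsymbol{k}(x)$.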
Let $E\in\mathsf{D}(\theta, \theta +1)$ be an arbitrary non-zero locally
free shifted sheaf. We look at the strip in the plane between the lines
$L(0):= \mathbb{R}\exp(i\pi\theta)$ and $L(E):=L(0)+Z(E)$. This strip must
contain lattice points in its interior.
\begin{figure}[hbt]
\begin{center}
\setlength{\unitlength}{10mm}
\begin{picture}(11,6)
\put(1.5,2){\vector(1,0){9.5}}\put(11,1.9){\makebox(0,0)[t]{$-\deg$}}
\put(6,0){\vector(0,1){6}}\put(5.8,6){\makebox(0,0)[r]{$\rk$}}
\put(2,0){\line(2,1){9}}
\put(11,4.3){\makebox(0,0)[t]{$\theta$}}
\put(2.8,0){\makebox(0,0)[l]{$\theta+1$}}
\put(1,1.5){\line(2,1){9}}
\put(6,2){\vector(-1,1){1}}\put(4.8,2.9){\makebox(0,0)[t]{$F$}}
\put(6,2){\vector(2,3){2}}\put(8.2,4.9){\makebox(0,0)[t]{$E$}}
\put(5,3){\vector(3,2){3}}
\put(10.5,4.5){\makebox(0,0)[b]{$L(0)$}}
\put(10.5,6){\makebox(0,0)[t]{$L(E)$}}
\end{picture}
\end{center}
\caption{The strip between the lines $L(0)$ and $L(E)$.}\label{fig:strip}\end{figure}
Therefore, there exists a lattice point $Z_{F}$ in this strip which enjoys
the following properties:
\begin{enumerate}
\item\label{nopoint} the only lattice points in the closed triangle with
  vertices $0, Z(E), Z_{F}$ are its vertices;
\item\label{phase} $\varphi_{F} > \varphi(E)$.
\end{enumerate}
By $\varphi_{F}$ we denote here the unique number which satisfies
$\theta <\varphi_{F} < \theta+1$ and $Z_{F}\in \mathbb{R}\exp(i\pi
\varphi_{F})$.
Because $\SL(2,\mathbb{Z})$ acts transitively on $\mathsf{Q}$, there exists
a stable non-zero locally free shifted sheaf $F\in\mathsf{D}(\theta, \theta
+1)$ with $Z(F)=Z_{F}$ and $\varphi(F)=\varphi_{F}$.
The assumption $\mathsf{P}(\theta)=\{0\}$ implies
$\mathbb{R}\exp(i\pi\theta)\cap\mathbb{Z}^{2} = \{0\}$, hence, $Z(E)$ is the
only lattice point on the line $L(E)$. This implies that $Z(F)$ is not on
the boundary of the strip between $L(0)$ and $L(E)$. In particular,
$Z(E)-Z(F)$ is contained in the same half-plane of $L(0)$ as $Z(E)$ and
$Z(F)$, see Figure \ref{fig:strip}.
Condition (\ref{nopoint}) implies $\langle E,F \rangle = 1$. Because $E$ is
locally free, condition (\ref{phase}) implies
$$\Ext^{1}(E,F) = \Hom(F,E) = 0.$$
Since $E$ is perfect, $\langle E,F \rangle = \chi(E,F) =
\dim\Hom(E,F) - \dim\Ext^{1}(E,F)$, hence $\Hom(E,F)\cong \boldsymbol{k}$.
The evaluation map gives, therefore,
a distinguished triangle
$$\Hom(E,F)\otimes E \rightarrow F \rightarrow T_{E}(F)
\stackrel{+}{\longrightarrow}$$
with $T_{E}(F)\in \Dbcoh(\boldsymbol{E})$.
Setting $C:=T_{E}(F)[-1]$, we obtain a distinguished triangle
\begin{equation}
\label{eq:mutation}
C\rightarrow E\rightarrow F \stackrel{+}{\longrightarrow}
\end{equation}
with $Z(C)=Z(E)-Z(F)$.
Because $E$ is a stable non-zero shifted locally free sheaf, it is spherical
by Proposition \ref{prop:spherical} and so $T_{E}$ is an equivalence. This
implies that $T_{E}(F)$ is spherical and, by Proposition
\ref{prop:spherical} again, $C$ is a stable non-zero shifted locally free
sheaf. All morphisms in the distinguished triangle (\ref{eq:mutation}) are
non-zero because $C, E, F$ are indecomposable, see Lemma \ref{lem:PengXiao}.
Using Proposition \ref{wesPT} (\ref{wesPT:ii}), this implies $\theta-1<\varphi(C)<\theta+1$.
However, we have seen in which half-plane $Z(C)$ is contained, so that we
must have $\theta<\varphi(C)<\theta+1$, which implies
$C\in\mathsf{D}(\theta,\theta+1)$. The distinguished triangle
(\ref{eq:mutation}) and the definition of the structure of Abelian category
on the heart $\mathsf{D}(\theta,\theta+1)$ imply now that the
morphism $E\rightarrow F$ in (\ref{eq:mutation}) is an epimorphism in
$\mathsf{D}(\theta,\theta+1)$. This gives an infinite chain of epimorphisms
which are not isomorphisms, so that the category
$\mathsf{D}(\theta,\theta+1)$ is indeed not Noetherian.
In order to show that $\mathsf{A}(\theta,\mathsf{P}(\theta)^{-})$ is not
Noetherian for $\mathsf{P}(\theta)^{-} \ne \emptyset$ we may assume $\theta
= 0$. If there exists a stable element $\boldsymbol{k}(x) \in
\mathsf{P}(0)^{-}[1]\subset \mathsf{P}(1)$, where $x\in\boldsymbol{E}$ is a
smooth point, we have exact sequences
\begin{equation}
\label{eq:sequence}
0 \rightarrow \mathcal{O}(mx)
\rightarrow \mathcal{O}((m+1)x)
\rightarrow \boldsymbol{k}(x)
\rightarrow 0
\end{equation}
in $\Coh_{\boldsymbol{E}}$ with arbitrary
$m\in\mathbb{Z}$.
Hence the cone of the morphism $\mathcal{O}(mx) \rightarrow
\mathcal{O}((m+1)x)$ is isomorphic to $\boldsymbol{k}(x)[0]$. Because
$\boldsymbol{k}(x)[0]$ is an object of $\mathsf{D}^{\le-1}$, with regard to
the $t$-structure which is defined by $\mathsf{P}({0})^{-}$, we obtain
$\tau_{\ge0}(\boldsymbol{k}(x)[0])=0$, which is the cokernel of
$\mathcal{O}(mx) \rightarrow \mathcal{O}((m+1)x)$
in the Abelian category $\mathsf{A}(0,\mathsf{P}(0)^{-})$, see
\cite{Asterisque100}, 1.3.
Hence, there is an exact sequence
$$0 \rightarrow \boldsymbol{k}(x)[-1] \rightarrow \mathcal{O}(mx)
\rightarrow \mathcal{O}((m+1)x) \rightarrow 0$$
in $\mathsf{A}(0,\mathsf{P}(0)^{-})$
and we obtain an infinite chain of epimorphisms
$$ \mathcal{O}(x) \rightarrow \mathcal{O}(2x) \rightarrow \mathcal{O}(3x)
\rightarrow \cdots$$ in the category $\mathsf{A}(0,\mathsf{P}(0)^{-})$,
which, therefore, is not Noetherian.
If $\mathsf{P}(0)^{-}[1]$ contains $\boldsymbol{k}(s)$ only, where
$s\in\boldsymbol{E}$ is the singular point, we proceed as follows. First,
recall that there exist coherent torsion modules with support at $s$
which have finite injective dimension, see for example
\cite{BurbanKreussler}, Section 4. To describe examples of them, we can
choose a line bundle $\mathcal{L}$ on $\boldsymbol{E}$ and a section
$\sigma\in H^{0}(\mathcal{L})$, such that the cokernel of
$\sigma:\mathcal{O}\rightarrow \mathcal{L}$ is a coherent torsion module
$\mathcal{B}$ of length two with support at $s$. If we embed
$\boldsymbol{E}$ into $\mathbb{P}^{2}$, such a line bundle $\mathcal{L}$ is
obtained as the tensor product of the restriction of
$\mathcal{O}_{\mathbb{P}^{2}}(1)$ with $\mathcal{O}_{\boldsymbol{E}}(-x)$,
where $x\in\boldsymbol{E}$ is a smooth point. The section $\sigma$
corresponds to the line in the plane through $x$ and $s$. By twisting with
$\mathcal{L}^{\otimes m}$ we obtain exact sequences
$$ 0 \rightarrow \mathcal{L}^{\otimes m} \rightarrow \mathcal{L}^{\otimes
(m+1)} \rightarrow \mathcal{B} \rightarrow 0$$ in $\Coh_{\boldsymbol{E}}$.
Because $\mathcal{B}$ is a semi-stable torsion sheaf with support at $s$,
all its JH-factors are isomorphic to $\boldsymbol{k}(s)$ and we conclude as
above.
\end{proof}
\begin{proposition}\label{prop:eitheror}
Let $(\mathsf{D}^{\le0}, \mathsf{D}^{\ge0})$ be a $t$-structure on
$\Dbcoh(\boldsymbol{E})$ and $B$ a semi-stable indecomposable object
in $\Dbcoh(\boldsymbol{E})$. Then either $B\in \mathsf{D}^{\le0}$ or $B\in
\mathsf{D}^{\ge1}$.
\end{proposition}
\begin{proof}
Let $X\stackrel{f}{\rightarrow} B \stackrel{g}{\rightarrow} Y
\stackrel{+}{\longrightarrow}$ be a distinguished triangle with $X\in
\mathsf{D}^{\le0}$ and $Y\in \mathsf{D}^{\ge1}$. Suppose $X\ne 0$ and $Y\ne
0$ in $\Dbcoh(\boldsymbol{E})$. We decompose both objects into
indecomposables $X=\bigoplus X_{i}$ and $Y=\bigoplus Y_{j}$. By Lemma
\ref{lem:tsummands} we have $X_{i}\in \mathsf{D}^{\le0}$ and $Y_{j}\in
\mathsf{D}^{\ge1}$. If one of the components of the morphisms
$Y[-1]\rightarrow X=\bigoplus X_{i}$ or $\bigoplus Y_{j}=Y\rightarrow X[1]$
were zero, by Lemma \ref{lem:PengXiao} we would obtain a direct summand
$X_{i}$ or $Y_{j}$ in $B$. Because $B$ was assumed to be indecomposable,
this implies the claim of the proposition.
For the rest of the proof we suppose that all components of these two
morphisms are non-zero. This implies that $X_{i}$ and $Y_{j}$ are
non-perfect for all $i,j$. Indeed, if $X_{i}$ were perfect, we could
apply Serre duality, Proposition \ref{wesPT} (\ref{wesPT:i}), to obtain
$\Hom(Y,X_{i}[1]) = \Hom(X_{i},Y)^{\ast}$, which is zero because $X_{i}\in \mathsf{D}^{\le0}$
and $Y\in \mathsf{D}^{\ge1}$. The case with perfect $Y_{j}$ can be dealt
with similarly.
Using Lemma \ref{lem:PengXiao} again, it follows that none of the components
of $f:\bigoplus X_{i} \rightarrow B$ or $g:B\rightarrow \bigoplus Y_{j}$ is
zero, because none of the $X_{i}$ could be a direct summand of $Y[-1]$ and
none of the $Y_{j}$ could be a summand of $X[1]$.
Using Proposition \ref{wesPT} (\ref{wesPT:ii}), this implies
$\varphi_{-}(X_{i}) \le \varphi(B) \le \varphi_{+}(Y_{j})$ for all $i,j$.
If there exist $i,j$ such that $\varphi_{-}(X_{i}) -
\varphi_{+}(Y_{j})\not\in \mathbb{Z}$, there exists an integer $k\ge 0$ such
that $\varphi_{-}(X_{i}[k]) < \varphi_{+}(Y_{j}) < \varphi_{-}(X_{i}[k])
+1$. Using Proposition \ref{wesPT} (\ref{wesPT:iii}) this implies
$\Hom(X_{i}[k], Y_{j}) \ne 0$. But, for any
integer $k\ge 0$ we have $X_{i}[k]\in \mathsf{D}^{\le0}$ and because
$Y_{j}\in \mathsf{D}^{\ge1}$, we should have $\Hom(X_{i}[k], Y_{j}) =
0$. This contradiction implies $\varphi_{-}(X_{i}) - \varphi_{+}(Y_{j}) \in
\mathbb{Z}$ for all $i,j$. But, if $k=\varphi_{+}(Y_{j}) -
\varphi_{-}(X_{i})$, we still have $\Hom(X_{i}[k], Y_{j}) \ne 0$, which
follows from Proposition \ref{wesPT} (\ref{wesPT:iv}) because $X_{i}$ and
$Y_{j}$ are not perfect. The conclusion is now that we must have $X=0$ or
$Y=0$, which implies the claim.
\end{proof}
\begin{lemma}\label{lem:inequ}
Let $(\mathsf{D}^{\le0}, \mathsf{D}^{\ge0})$ be a $t$-structure on
$\Dbcoh(\boldsymbol{E})$. If $F\in \mathsf{D}^{\le0}$ and $G\in
\mathsf{D}^{\ge1}$, then $\varphi_{-}(F)\ge \varphi_{+}(G)$.
\end{lemma}
\begin{proof}
Suppose $\varphi_{-}(F)< \varphi_{+}(G)$. It is sufficient to derive a
contradiction for indecomposable objects $F$ and $G$.
Because, for any $k\ge0$, $F[k]\in \mathsf{D}^{\le0}$, we may replace $F$
by $F[k]$ and can assume $0< \varphi_{+}(G) - \varphi_{-}(F)\le 1$. Now,
there exists a stable vector bundle $\mathcal{B}$ on $\boldsymbol{E}$ and an
integer $r$ such that
$$\varphi_{-}(F) < \varphi(\mathcal{B}[r]) < \varphi_{+}(G) \le
\varphi_{-}(F) + 1.$$
By Proposition \ref{prop:eitheror},
$\mathcal{B}[r]$ is in $\mathsf{D}^{\le0}$ or in
$\mathsf{D}^{\ge1}$. But, from Proposition \ref{wesPT} (\ref{wesPT:iii}) we
deduce $\Hom(F, \mathcal{B}[r])\ne 0$ and $\Hom(\mathcal{B}[r],G)\ne 0$. If
$\mathcal{B}[r]\in \mathsf{D}^{\ge1}$, the first inequality contradicts $F\in
\mathsf{D}^{\le0}$ and if $\mathcal{B}[r]\in \mathsf{D}^{\le0}$, the second
one contradicts $G\in \mathsf{D}^{\ge1}$.
\end{proof}
\begin{theorem}\label{thm:tstruc}
Let $(\mathsf{D}^{\le0}, \mathsf{D}^{\ge0})$ be a $t$-structure on
$\Dbcoh(\boldsymbol{E})$. Then there exists a number $\theta\in \mathbb{R}$
and a subset
$\mathsf{P}(\theta)^{-}\subset \mathsf{P}(\theta)^{s}$, such that
$$\mathsf{D}^{\le 0} = \mathsf{D}[\mathsf{P}(\theta)^{-}, \infty)
\quad\text{ and }\quad
\mathsf{D}^{\ge 1} = \mathsf{D}(-\infty,\mathsf{P}(\theta)^{+}].$$
\end{theorem}
\begin{proof}
From Lemma \ref{lem:inequ} we deduce the existence of $\theta \in
\mathbb{R}$ such that $\mathsf{D}(\theta,\infty)\subset\mathsf{D}^{\le 0}$
and $\mathsf{D}(-\infty, \theta)\subset\mathsf{D}^{\ge 1}.$
If we define
$\mathsf{P}(\theta)^{-}=\mathsf{P}(\theta)^{s}\cap \mathsf{D}^{\le 0}$
and
$\mathsf{P}(\theta)^{+}=\mathsf{P}(\theta)^{s}\cap \mathsf{D}^{\ge1}$,
Proposition \ref{prop:eitheror} implies
$\mathsf{P}(\theta)^{s} = \mathsf{P}(\theta)^{-}\cup\mathsf{P}(\theta)^{+}$.
Hence, $\mathsf{D}[\mathsf{P}(\theta)^{-}, \infty)\subset \mathsf{D}^{\le0}$
and $\mathsf{D}(-\infty,\mathsf{P}(\theta)^{+}]\subset \mathsf{D}^{\ge 1}$.
From Proposition \ref{prop:texpl} we know that
$(\mathsf{D}[\mathsf{P}(\theta)^{-}, \infty),
\mathsf{D}(-\infty,\mathsf{P}(\theta)^{+}[1]])$ defines a
$t$-structure. Now, the statement of the theorem follows.
\end{proof}
\begin{remark}
In the case of a smooth elliptic curve Theorem \ref{thm:tstruc} was proved in
\cite{GRK}. If $\theta\not\in\mathsf{Q}$, the heart
$\mathsf{D}(\theta,\theta+1)$ of the corresponding $t$-structure is a
finite-dimensional non-Noetherian Abelian category of infinite global
dimension. In the smooth case, such a heart corresponds to the category of
holomorphic vector bundles on a non-commutative torus in the sense of
Polishchuk and Schwarz \cite{PolSchw}. It is an interesting problem to find a similar
interpretation of these Abelian categories in the case of a singular
Weierstra{\ss} curve $\boldsymbol{E}$.
\end{remark}
To complete this section we give two applications of Theorem
\ref{thm:tstruc}. The first is a description of the group of exact
auto-equivalences of the triangulated category $\Dbcoh(\boldsymbol{E})$. The
second application is a description of Bridgeland's space of all stability
structures on $\Dbcoh(\boldsymbol{E})$. In both cases, $\boldsymbol{E}$ is an
irreducible curve of arithmetic genus one over $\boldsymbol{k}$.
\begin{corollary}\label{cor:auto}
There exists an exact sequence of groups
$$
\boldsymbol{1} \longrightarrow \Aut^0(\Dbcoh(\boldsymbol{E}))
\longrightarrow \Aut(\Dbcoh(\boldsymbol{E}))
\longrightarrow \SL(2,\mathbb{Z}) \longrightarrow \boldsymbol{1}
$$
in which $\Aut^0(\Dbcoh(\boldsymbol{E}))$ is generated by tensor products
with line bundles of degree zero, automorphisms of the curve and the shift
by $2$.
\end{corollary}
\begin{proof}
The homomorphism $\Aut(\Dbcoh(\boldsymbol{E})) \rightarrow \SL(2,\mathbb{Z})$
is defined by describing the action of an auto-equivalence on
$\mathsf{K}(\boldsymbol{E})$ in terms of the coordinate functions $(\deg,
\rk)$.
That this is indeed in $\SL(2,\mathbb{Z})$ follows, for example, because
$\Aut(\Dbcoh(\boldsymbol{E}))$ preserves stability and the Euler form
\begin{align*}
\langle \mathcal{F},\mathcal{G}\rangle &=
\dim\Hom(\mathcal{F},\mathcal{G}) -
\dim\Hom(\mathcal{G},\mathcal{F})\\
&= \rk(\mathcal{F}) \deg(\mathcal{G}) -
\deg(\mathcal{F})\rk(\mathcal{G})
\end{align*}
for stable and perfect sheaves $\mathcal{F},\mathcal{G}$.
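To make explicit why invariance of this form places the image in
$\SL(2,\mathbb{Z})$: if an auto-equivalence acts on the coordinates
$(\deg,\rk)$ of $\mathsf{K}(\boldsymbol{E})\cong\mathbb{Z}^{2}$ by a matrix
$M\in\GL(2,\mathbb{Z})$, then, because every skew-symmetric bilinear form on
a rank-two lattice is a multiple of the determinant form,
$$\langle Mx, My \rangle = \det(M)\,\langle x,y \rangle
\quad\text{for all } x,y\in\mathbb{Z}^{2}.$$
Hence the Euler form is preserved if and only if $\det(M)=1$.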
Clearly, tensor products with line bundles of degree
zero, automorphisms of the curve and the shift by $2$ are contained in the
kernel of this homomorphism. In order to show that the kernel coincides with
$\Aut^0(\Dbcoh(\boldsymbol{E}))$, we let $\mathbb{G}$ be an arbitrary exact
auto-equivalence of $\Dbcoh(\boldsymbol{E})$.
Then, $\mathbb{G}(\Coh_{\boldsymbol{E}})$ is still Noetherian and it is the
heart
of the $t$-structure $(\mathbb{G}(\mathsf{D}^{\le0}),
\mathbb{G}(\mathsf{D}^{\ge0}))$.
From Theorem \ref{thm:tstruc} and Lemma \ref{lem:heart} we know all
Noetherian hearts of $t$-structures. We obtain
$\mathbb{G}(\Coh_{\boldsymbol{E}}) = \mathsf{D}(\theta,\theta+1]$ with
$\mathsf{P}(\theta)\ne \{0\}$.
Now, by Corollary \ref{cor:sheaves} there exists
$\Phi\in\widetilde{\SL}(2,\mathbb{Z})$ which maps
$\mathsf{D}(\theta,\theta+1]$ to $\mathsf{D}(0, 1]=\Coh_{\boldsymbol{E}}$.
This implies that the auto-equivalence $\Phi\circ\mathbb{G}$ induces an
auto-equivalence of the category $\Coh_{\boldsymbol{E}}$.
It is well-known that such an auto-equivalence has the form $f^*(\mathcal{L}
\otimes \,\cdot\,)$, where $f:\boldsymbol{E} \rightarrow \boldsymbol{E}$ is an
isomorphism and $\mathcal{L}$ is a line bundle.
Note that $f^*(\mathcal{L} \otimes \,\cdot\,)$ is sent to the identity in
$\SL(2,\mathbb{Z})$, if and only if $\mathcal{L}$ is of degree zero.
The composition of $\Phi\circ\mathbb{G}$ with the inverse of $f^*(\mathcal{L}
\otimes \,\cdot\,)$ satisfies the assumptions of \cite{BondalOrlov}, Prop.~A.3,
hence is isomorphic to the identity.
Because the kernel of the homomorphism $\widetilde{\SL}(2,\mathbb{Z})
\rightarrow \SL(2,\mathbb{Z})$, which is induced by the action of
$\widetilde{\SL}(2,\mathbb{Z})$ on $\Dbcoh(\boldsymbol{E})$ and the above
homomorphism $\Aut(\Dbcoh(\boldsymbol{E})) \longrightarrow
\SL(2,\mathbb{Z})$, is generated by the element of
$\widetilde{\SL}(2,\mathbb{Z})$ which acts as the shift by $2$, the claim
now follows.
\end{proof}
For our second application, we recall Bridgeland's definition of a stability
condition on a triangulated category \cite{Stability}.
Recall that we set $\mathsf{K}(\boldsymbol{E}) =
\mathsf{K}_{0}(\Coh(\boldsymbol{E})) \cong
\mathsf{K}_{0}(\Dbcoh(\boldsymbol{E}))$.
Following Bridgeland \cite{Stability}, we call a pair $(W,\mathsf{R})$ a
\emph{stability condition} on $\Dbcoh(\boldsymbol{E})$, if
$$W:\mathsf{K}(\boldsymbol{E})\rightarrow\mathbb{C}$$
is a group homomorphism and $\mathsf{R}$ is a compatible slicing of
$\Dbcoh(\boldsymbol{E})$. A \emph{slicing} $\mathsf{R}$ consists of a
collection of full additive subcategories $\mathsf{R}(t) \subset
\Dbcoh(\boldsymbol{E})$, $t\in\mathbb{R}$, such that
\begin{enumerate}
\item $\forall t\in\mathbb{R}\quad \mathsf{R}(t+1) = \mathsf{R}(t)[1]$;
\item If $t_{1}>t_{2}$ and $A_{\nu}\in\mathsf{R}(t_{\nu})$, then
$\Hom(A_{1},A_{2}) =0$;
\item each non-zero object $X\in\Dbcoh(\boldsymbol{E})$ has a HNF
\[\xymatrix@C=.4em{
0\; \ar[rr] && F_{n}X \ar[rr] \ar[dl]_{\cong}&& F_{n-1}X \ar[rr]
\ar[dl]&& \dots \ar[rr] &&F_{1}X \ar[rr] && F_{0}X \ar@{=}[r]\ar[dl] &
X\\
& A_{n} \ar[lu]^{+} && A_{n-1} \ar[lu]^{+} & & & & & & A_0 \ar[lu]^{+}&
}\]
in which $0\ne A_{\nu}\in\mathsf{R}(\varphi_{\nu})$ and
$\varphi_{n}>\varphi_{n-1}> \ldots > \varphi_{1}>\varphi_{0}$.
\end{enumerate}
Compatibility means that for all non-zero $A\in\mathsf{R}(t)$ we have
$$W(A)\in \mathbb{R}_{>0}\exp(i\pi t).$$
By $\varphi^{\mathsf{R}}$ we denote the phase function on
$\mathsf{R}$-semi-stable objects. Similarly, we denote by
$\varphi^{\mathsf{R}}_{+}(X)$ and
$\varphi^{\mathsf{R}}_{-}(X)$ the largest, respectively smallest, phase of an
$\mathsf{R}$-HN factor of $X$.
The standard stability condition, which was studied in the previous section,
will always be denoted by $(Z, \mathsf{P})$. This stability condition has a
slicing which is \emph{locally finite}, see \cite{Stability}, Def.\/ 5.7.
A slicing $\mathsf{R}$ is called locally finite if there exists $\eta>0$
such that for any $t\in\mathbb{R}$ the quasi-Abelian category
$\mathsf{D}^{\mathsf{R}}(t-\eta, t+\eta)$ is of finite length, i.e. Artinian
and Noetherian.
This category consists of those objects $X\in\Dbcoh(\boldsymbol{E})$ which
satisfy $t-\eta<\varphi^{\mathsf{R}}_{-}(X) \le \varphi^{\mathsf{R}}_{+}(X) <
t+\eta$.
In order to obtain a good moduli space of stability conditions, Bridgeland
\cite{Stability} requires the stability conditions to be
\emph{numerical}. This means that the central charge $W$ factors
through the numerical Grothendieck group. This makes sense if for any two
objects $E,F$ of the triangulated category in question, the vector spaces
$\bigoplus_{i} \Hom(E,F[i])$ are finite-dimensional. This condition is not
satisfied for $\Dbcoh(\boldsymbol{E})$, if $\boldsymbol{E}$ is
singular. However, in the case of our interest, we do not need such an extra
condition, because the Grothendieck group $\mathsf{K}(\boldsymbol{E})$ is
sufficiently small. From Lemma \ref{lem:GrothGrp} we know
$\mathsf{K}(\boldsymbol{E}) \cong \mathbb{Z}^{2}$ with generators
$[\mathcal{O}_{\boldsymbol{E}}]$ and $[\boldsymbol{k}(x)]$,
$x\in\boldsymbol{E}$ arbitrary.
Because $Z(\boldsymbol{k}(x))=-1$ and $Z(\mathcal{O}_{\boldsymbol{E}})=i$, it
is now clear that any homomorphism $W:\mathsf{K}(\boldsymbol{E}) \rightarrow
\mathbb{C}$ can be written as $W(E)=w_{1}\deg(E) + w_{2}\rk(E)$ with $w_{1},
w_{2}\in\mathbb{C}$. Equivalently, if we identify $\mathbb{C}$ with
$\mathbb{R}^{2}$, there exists a $2\times 2$-matrix $A$ such that $W=A\circ Z$.
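It may help to record this matrix explicitly. Because
$[F] = \rk(F)[\mathcal{O}_{\boldsymbol{E}}] + \deg(F)[\boldsymbol{k}(x)]$ in
$\mathsf{K}(\boldsymbol{E})$, the standard central charge is
$Z(F) = -\deg(F) + i\rk(F)$. Writing $w_{1}=u_{1}+iv_{1}$ and
$w_{2}=u_{2}+iv_{2}$, the identity $W=A\circ Z$ reads
$$\begin{pmatrix} \operatorname{Re} W(F)\\ \operatorname{Im} W(F)\end{pmatrix}
= \begin{pmatrix} -u_{1} & u_{2}\\ -v_{1} & v_{2}\end{pmatrix}
\begin{pmatrix} -\deg(F)\\ \rk(F)\end{pmatrix},
\qquad\text{so}\quad
A = \begin{pmatrix} -u_{1} & u_{2}\\ -v_{1} & v_{2}\end{pmatrix}.$$
In particular, $A$ is invertible if and only if $w_{1}$ and $w_{2}$ are
linearly independent over $\mathbb{R}$.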
\begin{definition}
By $\Stab(\boldsymbol{E})$ we denote the set of all stability conditions $(W,
\mathsf{R})$ on $\Dbcoh(\boldsymbol{E})$ for which $\mathsf{R}$ is a locally
finite slicing.
\end{definition}
\begin{lemma}\label{lem:notaline}
For any $(W, \mathsf{R}) \in \Stab(\boldsymbol{E})$ there exists a matrix
$A\in\GL(2,\mathbb{R})$, such that $W=A\circ Z$.
\end{lemma}
\begin{proof}
As seen above, there exists a not necessarily invertible matrix $A$ such
that $W=A\circ Z$. If $A$ were not invertible, there would exist a number
$t_{0}\in\mathbb{R}$ such that $W(\mathsf{K}(\boldsymbol{E})) \subset
\mathbb{R}\exp(i\pi t_{0})$. This implies that $\mathsf{R}(t)$ can contain a
non-zero object only if $t-t_{0}\in\mathbb{Z}$. The assumption
that the slicing $\mathsf{R}$ is locally finite implies now that
$\mathsf{R}(t)$ is of finite length for any $t\in\mathbb{R}$. On the other
hand, the heart of the $t$-structure, which is defined by $(W,\mathsf{R})$
is $\mathsf{R}(t_{0})$ up to a shift. However, in Lemma \ref{lem:heart} we
determined all Noetherian hearts of $t$-structures on
$\Dbcoh(\boldsymbol{E})$ and none of them is Artinian. This contradiction
shows that $A$ is invertible.
\end{proof}
\begin{lemma}\label{lem:function}
If $(W,\mathsf{R}) \in \Stab(\boldsymbol{E})$, there exists a unique
strictly increasing function $f:\mathbb{R} \rightarrow \mathbb{R}$ with
$f(t+1) = f(t)+1$ and $\mathsf{R}(t) = \mathsf{P}(f(t))$.
\end{lemma}
\begin{proof}
By definition, $W(\mathsf{R}(t)) \subset \mathbb{R}_{>0} \exp(i\pi
t)$. By Lemma \ref{lem:notaline}, there exists a linear isomorphism $A$ such
that $W=A^{-1}\circ Z$. This implies that there is a function
$f:\mathbb{R}\rightarrow \mathbb{R}$ such that $Z(\mathsf{R}(t))
\subset \mathbb{R}_{>0} \exp(i\pi f(t))$.
On the other hand, $\mathsf{R}(t)$ is the intersection of two hearts of
$t$-structures. By Proposition \ref{prop:texpl} these hearts are of the form
$\mathsf{D}[\mathsf{P}(\theta_{1})^{-}, \mathsf{P}(\theta_{1})^{+}[1]]$ and
$\mathsf{D}[\mathsf{P}(\theta_{2})^{-}, \mathsf{P}(\theta_{2})^{+}[1]]$ with
$\theta_{1}\le \theta_{2}$. These have non-empty intersection only if
$\theta_{2} \le \theta_{1}+1$. Their intersection is contained in
$\mathsf{D}[\theta_{2},\theta_{1}+1]$, see Figure \ref{fig:intersection}.
\begin{figure}[hbt]
\begin{center}
\setlength{\unitlength}{10mm}
\begin{picture}(11,5)
\multiput(0,4)(0.2,0){56}{\line(1,0){0.1}}
\put(0,1){\line(1,0){11.1}}
\thicklines
\put(1.5,2){\line(0,1){2}}\put(1.5,0.8){\makebox(0,0)[t]{$\theta_{2}+1$}}
\put(1.4,3){\makebox(0,0)[r]{$\mathsf{P}(\theta_{2})^{+}[1]$}}
\put(5.5,1){\line(0,1){1}}\put(5.5,0.8){\makebox(0,0)[t]{$\theta_{2}$}}
\put(5.6,1.4){\makebox(0,0)[l]{$\mathsf{P}(\theta_{2})^{-}$}}
\put(4.5,2.3){\line(0,1){1}}\put(4.5,0.8){\makebox(0,0)[t]{$\theta_{1}+1$}}
\put(4.4,3){\makebox(0,0)[r]{$\mathsf{P}(\theta_{1})^{+}[1]$}}
\put(8.5,1){\line(0,1){1.3}}\put(8.5,0.8){\makebox(0,0)[t]{$\theta_{1}$}}
\put(8.5,3.3){\line(0,1){0.7}}
\put(8.6,1.5){\makebox(0,0)[l]{$\mathsf{P}(\theta_{1})^{-}$}}
\thinlines
\multiput(1.5,1)(0,0.2){5}{\line(0,1){0.1}}
\multiput(5.5,2)(0,0.2){10}{\line(0,1){0.1}}
\multiput(4.5,1)(0,0.2){7}{\line(0,1){0.1}}
\multiput(4.5,3.3)(0,0.2){4}{\line(0,1){0.1}}
\multiput(8.5,2.3)(0,0.2){5}{\line(0,1){0.1}}
\put(1.9,4){\line(-2,-3){0.4}}
\put(2.7,4){\line(-2,-3){1.2}}
\multiput(1.5,1)(0.8,0){3}{\line(2,3){2}}
\put(3.9,1){\line(2,3){1.6}}
\put(4.7,1){\line(2,3){0.8}}
\put(8.1,4){\line(2,-3){0.4}}
\put(7.3,4){\line(2,-3){1.2}}
\multiput(8.5,1)(-0.8,0){3}{\line(-2,3){2}}
\put(6.1,1){\line(-2,3){1.6}}
\put(5.3,1){\line(-2,3){0.8}}
\end{picture}
\end{center}
\caption{The intersection of the hearts determined by $\theta_{1}$ and $\theta_{2}$.}\label{fig:intersection}
\end{figure}
Moreover, if $\theta_{2}<\theta_{1}+1$, there exist $\alpha,
\beta\in\mathsf{Q}$ with $\theta_{2}< \alpha < \beta < \theta_{1}+1\le
\theta_{2}+1$. In this case we have two non-trivial subcategories
$\mathsf{P}(\alpha)\subset \mathsf{R}(t)$ and $\mathsf{P}(\beta)\subset
\mathsf{R}(t)$. However, because $0<\beta-\alpha<1$ and
$Z(\mathsf{R}(t)) \subset \mathbb{R}_{>0} \exp(i\pi f(t))$, we cannot have
$Z(\mathsf{P}(\alpha)) \subset \mathbb{R}_{>0} \exp(i\pi\alpha)$ and
$Z(\mathsf{P}(\beta)) \subset \mathbb{R}_{>0} \exp(i\pi\beta)$.
Hence, $\theta_{2}=\theta_{1}+1=f(t)$ and we obtain $\mathsf{R}(t) \subset
\mathsf{P}(f(t))$.
From $\mathsf{R}(t+m)=\mathsf{R}(t)[m]$ we easily obtain
$f(t+m)=f(t)+m$. Moreover, $f(t_{2})=f(t_{1})+m$ with $m\in\mathbb{Z}$
implies $t_{2}-t_{1}\in\mathbb{Z}$, because the image of $W$ is not
contained in a line by Lemma \ref{lem:notaline}.
Next, we show that $f$ is strictly increasing. Suppose $t_{1}<t_{2}$,
$t_{2}-t_{1}\not\in\mathbb{Z}$ and both $\mathsf{R}(t_{i})$ contain non-zero
objects $X_{i}$. For any $m\ge0$ we have $\Hom(X_{2}, X_{1}[-m]) = 0$. If
$f(t_{2}) < f(t_{1})$, we choose $m\ge0$ such that $f(t_{2}) < f(t_{1}) -m <
f(t_{2}) +1$ and obtain $X_{2}\in\mathsf{P}(f(t_{2}))$ and $X_{1}[-m] \in
\mathsf{P}(f(t_{1})-m)$. But this implies, by Corollary \ref{cor:sheaves}
(\ref{cor:iii}), $\Hom(X_{2}, X_{1}[-m]) \ne 0$, a contradiction. Hence, we
have shown that $f$ is strictly increasing with $f(t+1)=f(t)+1$ and
$\mathsf{R}(t)\subset \mathsf{P}(f(t))$. In particular, any $\mathsf{R}$-HNF
is a $\mathsf{P}$-HNF as well. Therefore, all $\mathsf{P}$-semi-stable
objects are $\mathsf{R}$-semi-stable and we obtain $\mathsf{R}(t) =
\mathsf{P}(f(t))$.
\end{proof}
It was shown in \cite{Stability} that the group
$\widetilde{\GL}^{+}(2,\mathbb{R})$ acts naturally on the moduli space of
stability conditions $\Stab(\boldsymbol{E})$.
This group is the universal cover of $\GL^{+}(2, \mathbb{R})$ and has the
following description:
$$\widetilde{\GL}^{+}(2,\mathbb{R}) =
\{(A,f) \mid A\in\GL^{+}(2,\mathbb{R}), f:\mathbb{R}\rightarrow \mathbb{R}
\text{ compatible}\},$$
where compatibility means that $f$ is strictly increasing, satisfies
$f(t+1)=f(t)+1$ and induces the same map on $S^{1}\cong\mathbb{R}/2\mathbb{Z}$
as $A$ does on $S^{1}\cong(\mathbb{R}^{2}\setminus\{0\})/\mathbb{R}_{>0}$.
The action is simply $(A,f)\cdot (W,\mathsf{Q})=(A^{-1}\circ W,\mathsf{Q}\circ
f)$; in essence, it relabels the slices.
The following result generalises \cite{Stability}, Thm.\/ 9.1, to the singular
case.
\begin{proposition}\label{prop:stabmod}
The action of $\widetilde{\GL}^{+}(2,\mathbb{R})$ on $\Stab(\boldsymbol{E})$
is simply transitive.
\end{proposition}
\begin{proof}
If $(W,\mathsf{R})\in\Stab(\boldsymbol{E})$, the two values
$W(\mathcal{O}_{\boldsymbol{E}})$ and $W(\boldsymbol{k}(p_{0}))$ determine a
linear transformation $A^{-1}\in\GL(2, \mathbb{R})$ such that $W=A^{-1}\circ
Z$, see Lemma \ref{lem:notaline}.
By construction, the function $f:\mathbb{R}\rightarrow \mathbb{R}$ of Lemma
\ref{lem:function} induces the same mapping on
$S^{1}\cong\mathbb{R}/2\mathbb{Z}$ as $A^{-1}$ does on
$S^{1}\cong(\mathbb{R}^{2}\setminus\{0\})/\mathbb{R}_{>0}$. Therefore,
$A\in\GL^{+}(2, \mathbb{R})$ and we obtain $(A,f)\in\widetilde{\GL}^{+}(2,
\mathbb{R})$ which satisfies $(W,\mathsf{R}) = (A,f)\cdot
(Z,\mathsf{P})$. Finally, if $(A,f)\cdot (Z,\mathsf{P}) =(Z,\mathsf{P})$ for
some $(A,f)\in\widetilde{\GL}^{+}(2, \mathbb{R})$, we obtain $f(t)=t$ for
all $t\in\mathbb{R}$. This implies easily $A=\boldsymbol{1}$.
\end{proof}
The group $\Aut(\Dbcoh(\boldsymbol{E}))$ acts on $\Stab(\boldsymbol{E})$ by
the rule $$\mathbb{G} \cdot (W, \mathsf{R}) := (\overline{\mathbb{G}}\circ W,
\mathbb{G}(\mathsf{R})).$$
Here, $\overline{\mathbb{G}}\in\SL(2,\mathbb{Z})$ is the image of
$\mathbb{G}\in\Aut(\Dbcoh(\boldsymbol{E}))$ under the homomorphism of Corollary
\ref{cor:auto} and $\mathbb{G}(\mathsf{R})(t):= \mathbb{G}(\mathsf{R}(t))$.
Because automorphisms of $\boldsymbol{E}$ and twists by line
bundles of degree zero act trivially on $\Stab(\boldsymbol{E})$, we obtain
$$\Stab(\boldsymbol{E})/\Aut(\Dbcoh(\boldsymbol{E})) \cong \GL^{+}(2,
\mathbb{R})/\SL(2,\mathbb{Z}),$$
which is a $\mathbb{C}^{\times}$-bundle over the coarse moduli space of
elliptic curves. This result coincides with Bridgeland's result in the smooth
case. The main reason for this coincidence seems to be the irreducibility of
the curve. Example \ref{ex:marginalst} below shows that the situation is
significantly more difficult in the case of reducible degenerations of
elliptic curves.
\begin{remark}\label{rem:common}
Our results show that singular and smooth Weierstra{\ss} curves
$\boldsymbol{E}$ share the following properties:
\begin{enumerate}
\item A coherent sheaf $\mathcal{F}$ is stable if and only if
$\End(\mathcal{F}) \cong \boldsymbol{k}$.
\item Any spherical object is a shift of a stable vector bundle or of a
structure sheaf $\boldsymbol{k}(x)$ of a smooth point $x\in\boldsymbol{E}$.
\item The category of semi-stable sheaves of a fixed slope is
equivalent to the category of coherent torsion sheaves.
Such an equivalence is induced by an auto-equivalence of
$\Dbcoh(\boldsymbol{E})$.
\item There is an exact sequence of groups\\
$\boldsymbol{1} \rightarrow \langle \Aut(\boldsymbol{E}),
\Pic^{0}(\boldsymbol{E}),[2]\rangle \rightarrow
\Aut(\Dbcoh(\boldsymbol{E})) \rightarrow \SL(2,\mathbb{Z}) \rightarrow
\boldsymbol{1}.$
\item $\widetilde{\GL}^{+}(2,\mathbb{R})$ acts transitively on
$\Stab(\boldsymbol{E})$.
\item $\Stab(\boldsymbol{E})/\Aut(\Dbcoh(\boldsymbol{E})) \cong \GL^{+}(2,
\mathbb{R})/\SL(2,\mathbb{Z}).$
\end{enumerate}
\end{remark}
\begin{example}\label{ex:marginalst}
Let $C_{2}$ denote a reducible curve with two components, both
isomorphic to $\mathbb{P}^{1}$, which intersect transversally at two
distinct points. This curve has arithmetic genus one and appears as a
degeneration of a smooth elliptic curve.
On this curve, there exists a line bundle $\mathcal{L}$ which
fails to be stable with respect to some stability conditions. To construct
an explicit example, denote by $\pi:\widetilde{C}_{2}\rightarrow C_{2}$ the
normalisation, so that $\widetilde{C}_{2}$ is the disjoint union of two
copies of $\mathbb{P}^{1}$. There is a $\boldsymbol{k}^{\times}$-family of
line bundles whose pull-back to $\widetilde{C}_{2}$ is
$\mathcal{O}_{\mathbb{P}^{1}}$ on one component and
$\mathcal{O}_{\mathbb{P}^{1}}(2)$ on the other. The element in
$\boldsymbol{k}^{\times}$
corresponds to a gluing parameter over one of the two singularities. Let
$\mathcal{L}$ denote one such line bundle.
If $i_{\nu}:\mathbb{P}^{1}\rightarrow C_{2},\;\nu=1,2$ denote the embeddings
of the two components, we fix notation so that $i_{1}^{\ast}\mathcal{L}
\cong \mathcal{O}_{\mathbb{P}^{1}}$ and $i_{2}^{\ast}\mathcal{L} \cong
\mathcal{O}_{\mathbb{P}^{1}}(2)$. There is an exact sequence of coherent
sheaves on $C_{2}$
\begin{equation}\label{eq:linebundle}
0\rightarrow i_{2\ast} \mathcal{O}_{\mathbb{P}^{1}} \rightarrow \mathcal{L}
\rightarrow i_{1\ast} \mathcal{O}_{\mathbb{P}^{1}} \rightarrow 0.
\end{equation}
Moreover, the only non-trivial quotients of $\mathcal{L}$ are
$\mathcal{L}\twoheadrightarrow i_{1\ast} \mathcal{O}_{\mathbb{P}^{1}}$
and
$\mathcal{L}\twoheadrightarrow i_{2\ast} \mathcal{O}_{\mathbb{P}^{1}}(2)$.
For arbitrary positive real numbers $a,b$ we define a centred
slope-function $W_{a,b}$ on the category $\Coh_{C_{2}}$ by
$$W_{a,b}(F):= -\deg(F) + i(a\cdot \rk(i_{1}^{\ast}F) + b\cdot
\rk(i_{2}^{\ast} F)),$$
where $\deg(F)=h^{0}(F) - h^{1}(F)$. For example,
\begin{align*}
W_{a,b}(i_{1\ast}\mathcal{O}_{\mathbb{P}^{1}}(d)) &= -d-1+ia
\quad\text{ and }\\
W_{a,b}(i_{2\ast}\mathcal{O}_{\mathbb{P}^{1}}(d)) &= -d-1+ib.
\end{align*}
Using the exact sequence (\ref{eq:linebundle}), we obtain
$W_{a,b}(\mathcal{L}) = -2+i(a+b)$.
Using results of \cite{Rudakov}, it is easy to see that $W_{a,b}$ has the
Harder-Narasimhan property in the sense of \cite{Stability}. Hence, by
\cite{Stability}, Prop.\/ 5.3, $W_{a,b}$ defines a stability condition on
$\Dbcoh(C_{2})$. With respect to this stability condition, the line bundle
$\mathcal{L}$ is stable precisely when $2/(a+b) < 1/a$, which is
equivalent to $a<b$. It is semi-stable, but not stable, if $b=a$. If $a>b$,
$\mathcal{L}$ is not even semi-stable.
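This can be checked directly against the two non-trivial quotients listed above. For $z=-d+iy$ with $d,y>0$, the argument of $z$ is increasing in $d/y$, and stability of $\mathcal{L}$ amounts to its phase being strictly smaller than the phase of each non-trivial quotient:
\begin{align*}
\mathcal{L}\twoheadrightarrow i_{1\ast}\mathcal{O}_{\mathbb{P}^{1}}:&\quad \frac{2}{a+b}<\frac{1}{a}\;\Longleftrightarrow\; a<b,\\
\mathcal{L}\twoheadrightarrow i_{2\ast}\mathcal{O}_{\mathbb{P}^{1}}(2):&\quad \frac{2}{a+b}<\frac{3}{b}\;\Longleftrightarrow\; 0<3a+b.
\end{align*}
The second inequality holds for all $a,b>0$, so only the first quotient is responsible for the line $a=b$ of marginal stability.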
\end{example}
This example illustrates an effect which cannot occur on an irreducible curve
of arithmetic genus one. It is an interesting problem to
describe the subset in $\Stab(\boldsymbol{E})$ for which a given line
bundle $\mathcal{L}$ is semi-stable. This is a closed subset, see
\cite{Stability}. Physicists call the boundary of this set the line of
marginal stability, see e.g. \cite{AspinwallDouglas}. The example above
describes the intersection of this set with a two-parameter family of
stability conditions in $\Stab(\boldsymbol{E})$.
\begin{remark}
In the case of an irreducible curve of arithmetic genus one, we have shown
in Proposition \ref{prop:spherical} that $\Aut(\Dbcoh(\boldsymbol{E}))$ acts
transitively on the set of all spherical objects on
$\boldsymbol{E}$. Polishchuk \cite{YangBaxter} conjectured that this is
likewise true in the case of reducible curves with trivial dualising sheaf.
However, on the curve $C_{2}$ there exists a
spherical complex which has non-zero cohomology in two different degrees, see
\cite{BuBu}. This indicates that the reducible case is more difficult and
involves new features.
\end{remark}
\section{Introduction}
\medskip
It is widely known that critical point theory and variational techniques play a major role in a host of problems arising in life sciences. Over the last decades, a strong development of these methods and their applications has been witnessed in connection with the qualitative study of differential equations, for these ones often admit a weak formulation as a critical point equation
\begin{equation}\label{e}
\Phi'(u)=0.
\end{equation}
Here $\Phi'$ is the Fr\'echet derivative of a $C^1$ functional $\Phi$, the so-called {\it energy functional}, defined on a Banach space $X$. In view of the physical motivation behind some of these problems, a special attention has been paid to {\it ground state} solutions of \eqref{e}, i.e. those minimizing $\Phi$ over the set of nontrivial solutions.
The most successful approach to showing the existence of such solutions consists in minimizing $\Phi$ over the set $$\mathcal{N}=\mathcal{N}_\Phi:=\{u \in X \setminus\{0\};\ \Phi'(u)u=0\},$$
which under some mild conditions on $\Phi$ is a $C^1$ manifold, the so-called {\it Nehari manifold}. While a great number of works have applied this procedure (as well as the closely related {\it fibering method} \cite{P}) to a variety of problems in analysis, very few have provided an abstract framework showing what is essential for $c:=\inf_\mathcal{N} \Phi$ to be achieved by a ground state solution of \eqref{e}. These abstract results \cite{BWW, FP, FRQ, I1, SW} are mostly devoted to the situation where $\Phi$ has a local minimum (which is generally assumed to be $\Phi(0)=0$) and displays a uniform mountain-pass geometry, in the sense that for any $u \in X \setminus \{0\}$ the following property holds:\\
\begin{itemize}
\item [(H1)] the map $t \mapsto \Phi(tu)$, defined in $(0,\infty)$, has a unique critical point $t_u$, which is a global maximum point (in particular $\Phi(t_uu)>0$). \\
\end{itemize} This scenario typically arises in superlinear elliptic problems, as the prototype
\begin{equation}
\label{m}
-\Delta u= b(x)|u|^{r-2}u, \quad u \in H_0^1(\Omega),
\end{equation}
where $\Omega \subset \mathbb{R}^N$ is a bounded domain, $r\in (2,2^*)$, and $b$ is a smooth coefficient such that $b(x)>0$ for all $x \in \Omega$.
This problem corresponds to \eqref{e} with
\begin{equation}\label{f}
\Phi(u)=\frac{1}{2}\int_\Omega |\nabla u|^2 -\frac{1}{r}\int_\Omega b(x)|u|^r, \quad u \in H_0^1(\Omega).
\end{equation}
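Indeed, a direct computation gives, for $u,v \in H_0^1(\Omega)$,
$$\Phi'(u)v=\int_\Omega \nabla u\cdot\nabla v-\int_\Omega b(x)|u|^{r-2}uv,$$
so that critical points of $\Phi$ are precisely the weak solutions of the problem above. Moreover, since $b>0$ in $\Omega$, for every $u \neq 0$ the fibering map
$$\varphi_u(t)=\Phi(tu)=\frac{t^2}{2}\int_\Omega|\nabla u|^2-\frac{t^r}{r}\int_\Omega b(x)|u|^r$$
has the unique critical point
$$t_u=\left(\frac{\int_\Omega|\nabla u|^2}{\int_\Omega b(x)|u|^r}\right)^{\frac{1}{r-2}},$$
which is a global maximum point, in accordance with (H1).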
More generally \cite{SW}, one may consider the functional
$$\Phi(u)=\frac{1}{2}\|u\|^2 -I(u), \quad u \in H_0^1(\Omega)$$
where $I(u):=\int_\Omega F(x,u)$, and $F$ is the primitive of a superlinear and subcritical term $f(x,u)$, in which case $\Phi$ is the energy functional of the problem
\begin{equation}
-\Delta u= f(x,u), \quad u \in H_0^1(\Omega).
\end{equation}
In \cite{SW} the authors dealt with the class of functionals $\Phi=I_0-I$, where $I_0$ is $p$-homogeneous for some $p>1$ (i.e. $I_0(tu)=t^pI_0(u)$ for any $t>0$ and $u \in X$) and satisfies
$c_0^{-1}\|u\|^p\leq I_0(u)\leq c_0\|u\|^p$
for any $u \in X$ and some $c_0>0$. This functional is inspired by the $p$-Laplacian problem
\begin{equation}
\label{mp}
-\Delta_p u=\lambda|u|^{p-2}u+ f(x,u), \quad u \in W_0^{1,p}(\Omega),
\end{equation}
where $\lambda<\lambda_1(p)$, the first eigenvalue of $-\Delta_p$ on $X=W_0^{1,p}(\Omega)$, in which case
$I_0(u)=\int_\Omega \left(|\nabla u|^p - \lambda |u|^p\right)$. The existence of a ground state solution for \eqref{e} was then proved under some `$p$-superlinear' type conditions on $I$, see \cite[Theorem 13]{SW} for the details.
The homogeneity condition on $I_0$ was then removed in \cite[Theorem 1.2]{FRQ} to handle problems involving nonhomogeneous operators such as the $(p,q)$-Laplacian operator, as well as Kirchhoff and anisotropic type operators.
The main purpose of this article is to set up a framework for functionals that do not have a uniform geometry on $X$, in the sense that the map $t \mapsto \Phi(tu)$ does not have the same behavior for all $u \neq 0$. This is the case if
(H1) does not hold in the whole of $X \setminus \{0\}$, but rather in some part of it. A simple example is given by \eqref{f} with $b$ vanishing in some part of $\Omega$ or changing sign, in which case (H1) holds only for those $u$ such that $\int_\Omega b(x)|u|^r>0$.
So it is reasonable to deal with functionals satisfying (H1) in an open cone of $X$, i.e. an open set $Y \subset X$ such that $t u \in Y$ for every $t>0$ and $u \in Y$.
Note that the condition
$ \int_\Omega b(x)|u|^r>0$
clearly defines an open cone in $X$.
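Indeed, for every $t>0$,
$$\int_\Omega b(x)|tu|^{r}=t^{r}\int_\Omega b(x)|u|^{r},$$
so the condition is invariant under positive scaling, while openness follows from the continuity of $u \mapsto \int_\Omega b(x)|u|^{r}$ on $H_0^1(\Omega)$, which (for bounded $b$) is a consequence of the Sobolev embedding since $r<2^*$.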
Furthermore, we shall consider other geometries than (H1). More concretely, we aim at functionals such that the map $t \mapsto \Phi(tu)$ has at least a local minimum point whenever $u$ lies in an open cone.
This condition arises in several problems, as we shall see in our applications.
Overall we shall provide conditions ensuring that the infimum of $\Phi$ over $\mathcal{N} \cap Y$ is achieved by a critical point of $\Phi$, which can then be considered as a {\it ground state relative to $Y$}, in the sense that it has the least energy among critical points in $Y$. Some further conditions on $\Phi$ over $X\setminus Y$ shall entail that $c$ is the ground state of $\Phi$.
The applications of our abstract results shall concern mostly two classes of functionals. The first one is given by
\begin{equation}\label{cf}
\Phi=I_0-I
\end{equation}
where $I_0,I \in C^1(X,\mathbb{R})$ are such that $I$ dominates $I_0$ at infinity (i.e. $|I|$ grows faster than $|I_0|$) and one of the following possibilities occurs (in some cases both):
\begin{itemize}
\item $I_0$ is coercive in $Y_1=\{u \in X: I(u)>0\}$ so that $(H1)$ holds in $Y_1$, and $\Phi$ has a mountain-pass structure therein.
\item $I$ is anti-coercive in $Y_2=\{u \in X: I_0(u)<0\}$, i.e. $I(u) \to -\infty$ as $\|u\| \to \infty$, with $u \in Y_2$. It follows that $\Phi$ is coercive therein.
\end{itemize}
The first case holds for instance if $I_0(u)\geq C\|u\|^p$ for some $C>0$, $p>1$, and every $u \in Y_1$, and $\frac{I(tu)}{t^p} \to \infty$ as $t \to \infty$, uniformly for $u$ in weakly compact subsets of $Y_1$. This happens in particular if $I\geq 0$ and $Y_1=\{u \in X: I(u)>0\}$, as we shall see in several problems (cf. Corollaries \ref{c2}, \ref{c3} and \ref{ch}).
As for the second case, it only occurs if $I_0$ and $I$ take negative values. This can be observed in the prototype of {\it indefinite} type problems
$$-\Delta u =\lambda u +b(x)|u|^{r-2}u, \quad u \in H_0^1(\Omega),$$
where $\lambda \in \mathbb{R}$, $2<r<2^*$, and $b \in L^{\infty}(\Omega)$ changes sign. A similar structure arises in the $(p,q)$-Laplacian problem
\begin{equation*}
-\Delta_p u -\Delta_q u = \alpha |u|^{p-2}u+\beta |u|^{q-2}u, \quad u \in W_0^{1,p}(\Omega),
\end{equation*}
where $1<q<p$ and $\alpha,\beta \in \mathbb{R}$. In such case $\Phi=I_0-I$ with
$$I_0(u)=\frac{1}{q}\int_\Omega \left(|\nabla u|^q- \beta|u|^q \right)\quad \mbox{and} \quad I(u)=-\frac{1}{p}\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right).$$
We consider the open cones $$Y_1=\left\{u \in W_0^{1,p}(\Omega): \int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)<0\right\}$$ and $$Y_2=\left\{u \in W_0^{1,p}(\Omega): \int_\Omega \left(|\nabla u|^q- \beta|u|^q \right)<0 \right\},$$
which are nonempty if, and only if, $\alpha >\lambda_1(p)$ and $\beta>\lambda_1(q)$, respectively.
It turns out that there exist $\alpha_*(\beta) \geq \lambda_1(p)$ and $\beta_*(\alpha)\geq \lambda_1(q)$ having the following properties:
\begin{enumerate}
\item For $\beta<\beta_*(\alpha)$ there exists $C>0$ such that $I_0(u)\geq C\|u\|^q \quad \forall u \in \overline{Y}_1$.
\item For $\alpha<\alpha_*(\beta)$ there exists $C>0$ such that
$ I(u)\le -C\|u\|^p\quad \forall u \in \overline{Y}_2$.
\end{enumerate}
Since $q<p$, we see that (H1) holds in $Y_1$ for $\beta<\beta_*(\alpha)$, whereas $\Phi$ is coercive in $Y_2$ for $\alpha<\alpha_*(\beta)$. It follows that
\begin{enumerate}
\item $c_1:=\displaystyle \inf_{\mathcal{N}\cap Y_1}\Phi$, the ground state of $\Phi$ relative to $Y_1$, is positive and achieved for $\alpha >\lambda_1(p)$ and $\beta<\beta_*(\alpha)$,
\item $c_2:=\displaystyle \inf_{\mathcal{N}\cap Y_2}\Phi$, the ground state of $\Phi$ relative to $Y_2$, is negative and achieved for $\alpha<\alpha_*(\beta)$ and $\beta>\lambda_1(q)$.
\end{enumerate}
In addition, it is clear that $\Phi \geq 0$ on $\mathcal{N} \setminus Y_2$, so that $c_2$ is the ground state level whenever it is achieved, whereas $c_1$ is the ground state level for $\alpha >\lambda_1(p)$ and $\beta< \lambda_1(q)$, since in this case $Y_2$ is empty and $\mathcal{N} \subset Y_1$.
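Both geometries can be read off from the fibering map: for $u \in W_0^{1,p}(\Omega)$ and $t>0$,
$$\varphi_u(t)=\Phi(tu)=\frac{t^{q}}{q}\int_\Omega \left(|\nabla u|^{q}-\beta|u|^{q}\right)+\frac{t^{p}}{p}\int_\Omega \left(|\nabla u|^{p}-\alpha|u|^{p}\right).$$
If $u \in Y_1$ and $\beta<\beta_*(\alpha)$, the first term is positive and dominates for small $t$ (recall $q<p$), while the second term is negative and dominates for large $t$, producing the mountain-pass profile of (H1). If $u \in Y_2$ and $\alpha<\alpha_*(\beta)$, the first term is negative and dominates near $t=0$, while the second term is positive and dominates at infinity, so $\varphi_u$ is negative for small $t$ and coercive.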
This kind of structure also arises in a Kirchhoff equation (see section \ref{fe}), as well as in a fourth-order problem associated to a strongly coupled system (see section \ref{sh}).
The second class of functionals to be considered has the form \begin{equation}
\label{fcc1}
\Phi=\frac{1}{\kappa}P +\frac{1}{\upsilon}\Upsilon+\frac{1}{\gamma}\Gamma
\end{equation}
where $P,\Upsilon,\Gamma \in C^1(X,\mathbb{R})$ are weakly lower semicontinuous and $\kappa$-homogeneous, $\upsilon$-homogeneous and $\gamma$-homogeneous, respectively, for some $0<\upsilon<\kappa<\gamma$. In particular, the sets
$$Y_1=\left\{u \in X: \Upsilon(u)<0\right\} \quad
\mbox{and}
\quad Y_2=\left\{u \in X: \Gamma(u)<0\right\},$$
are open cones and we assume that for any $u \in Y=Y_1 \cap Y_2$ the map $t \mapsto \Phi(tu)$ has exactly two critical points (both nondegenerate) $0<t_u<s_u$, which are a local minimum and a local maximum points, respectively. Under these conditions, $c_1:=\displaystyle \inf_{\mathcal{N}\cap Y_1}\Phi$ is the ground state level of $\Phi$, cf. Corollary \ref{cc} below. We shall consider several functionals having this structure, among which we highlight the one given by
$$P(u,v)=\|v\|_q^q, \quad \Upsilon(u,v)=\int_\Omega \left(|\nabla u|^2+|\nabla v|^2-2\lambda uv\right), \quad \mbox{and} \quad \Gamma(u,v)=-\|u\|_r^r,$$
defined in $X:=H_0^1(\Omega)\times H_0^1(\Omega)$,
where $\lambda \in \mathbb{R}$ and $2<q<r<2^*$. In this case $\Phi$ is the energy functional associated to the semilinear gradient system
\begin{equation*}
\left\{
\begin{array}{ll}
-\Delta u = \lambda v+|u|^{r-2}u \
& \mbox{in} \ \ \Omega, \ \ \\
-\Delta v = \lambda u-|v|^{q-2}v \
&\mbox{in} \ \ \Omega, \ \ \\
u = v = 0 \ &\mbox{on} \ \
\partial\Omega,
\end{array}
\right.
\end{equation*}
which is treated with more generality in section \ref{kayeexample1}.
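For this particular system one has $\kappa=q$, $\upsilon=2$ and $\gamma=r$, and the fibering map reads
$$\varphi_{(u,v)}(t)=\frac{t^{q}}{q}P(u,v)+\frac{t^{2}}{2}\Upsilon(u,v)+\frac{t^{r}}{r}\Gamma(u,v),$$
so that
$$\varphi_{(u,v)}'(t)=t\left(t^{q-2}P(u,v)+\Upsilon(u,v)+t^{r-2}\Gamma(u,v)\right).$$
For $(u,v)\in Y$ the expression in brackets is negative both near $t=0$ and as $t\to\infty$, since $\Upsilon(u,v)<0$, $\Gamma(u,v)<0$ and $2<q<r$, while the middle term $t^{q-2}P(u,v)$ is nonnegative. The two-critical-point pattern assumed above thus corresponds to this expression changing sign exactly twice, which, loosely speaking, requires $P(u,v)=\|v\|_q^q$ to be large enough relative to $|\Upsilon(u,v)|$ and $|\Gamma(u,v)|$.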
We stress that our abstract setting is new not only for homogeneous operators such as the $p$-Laplacian, but also for nonhomogeneous ones such as the $(p,q)$-Laplacian, Kirchhoff type operators, etc. For simplicity, we apply our results to elliptic problems with Dirichlet boundary conditions, but other boundary conditions can be handled, as well as problems in unbounded domains. Finally, let us mention that for reasons of space we have limited the bibliography to works dealing with the class of problems considered here via the Nehari manifold method (or the fibering method).
The outline of this article is the following: in section 2 we state and prove our main abstract results. These are organized according to the behavior of the map $t \mapsto \Phi(tu)$ for $u \in Y$, where $Y$ is a nonempty open cone. In section 3 we derive some results for the classes of functionals \eqref{cf} and \eqref{fcc1} and discuss some examples. Finally, some applications of our results to PDEs and systems of PDEs are given in sections 4 and 5, respectively.
\medskip
\subsection*{Notation} Throughout this article, we use the following notation:
\begin{itemize}
\item Unless otherwise stated $\Omega$ denotes a bounded domain of $\mathbb{R}^N$ with $N\geq 1$.
\item Given $r>1$, we denote by $\Vert\cdot\Vert_{r}$ (or $\Vert\cdot\Vert_{r,\Omega}$ in case we need to stress the dependence on $\Omega$) the usual norm in
$L^{r}(\Omega)$, and by $r^*$ the critical Sobolev exponent, i.e. $r^*=\frac{Nr}{N-r}$ if $r<N$ and $r^*=\infty$ if $r \geq N$.
\item Strong and weak convergences are denoted by $\rightarrow$ and
$\rightharpoonup$, respectively.
\item Given $f\in L^{1}(\Omega)$, we set $f^{\pm}:=\max(\pm f,0)$. The integral $\int_{\Omega}f$ is considered
with respect to the Lebesgue measure. Equalities and inequalities involving
$f$ shall be understood holding \textit{a.e.}.
\item $\varphi_u:[0,\infty) \to \mathbb{R}$ is the {\it fibering map} given by $\varphi_u(t)=\Phi(tu)$ for any $u \in X$.
\item $J:X \rightarrow \mathbb{R}$ is given by $J(u)=\Phi'(u)u$, and $\mathcal{N}:=J^{-1}(0) \setminus \{0\}$.
\item $S$ is the unit sphere in $X$.
\item $\partial Y$ and $\overline{Y}$ denote the boundary and the closure of $Y \subset X$ in the weak topology, respectively.
\item Given $u \in X$ and $R>0$ we denote by $B(u,R)$ the open ball of radius $R$ around $u$.
\item Unless otherwise stated, $C>0$ is a constant whose value may vary from one occurrence to another.
\end{itemize}
\medskip
\section{Main results}
\medskip
In this section $Y \subset X \setminus \{0\}$ is assumed to be a nonempty open cone, where $X$ is a reflexive Banach space, and $\Phi \in C^1(X,\mathbb{R})$ with $\Phi(0)=0$. We also assume that $\Phi$ and $J$ are weakly lower semicontinuous on $X$.
We shall consider different situations in accordance with the geometry of the fibering maps $\varphi_u$, for $u\in Y$. Overall we aim at finding conditions on $\Phi$ so that $$c:=\displaystyle \inf_{\mathcal{N}\cap Y}\Phi$$ is achieved and provides the ground state level of $\Phi$.
A basic and general property needed in our minimization procedure is
the following compactness condition:\\
\begin{itemize}
\item [(HY)$_d$] If $(u_n) \subset \mathcal{N}\cap Y$ and $\Phi(u_n)\to d \in \mathbb{R}$, then $(u_n)$ has a subsequence weakly convergent in $Y$. \\
\end{itemize}
Our setting shall include two main situations in accordance with the number of critical points of the map $\varphi_u$ for $u \in Y$. First we assume that $\varphi_u$ has a unique critical point. In this case we do not require $J$ to be $C^1$ (although in most applications it is), so $\mathcal{N}$ is not necessarily a manifold.
\medskip
\subsection{Cones where $\varphi_u$ has a unique critical point}\strut\\
To begin with, we assume that (H1) holds within $Y$:
\begin{theorem}\label{thm2}
Assume $(HY)_c$ and (H1) for every $u \in Y$. Then $c>0$ is achieved by a critical point of $\Phi$. If, in addition,
$J(u)\neq 0$ for every $u \in X \setminus Y$ with $u \neq 0$ then $c$ is the ground state level of $\Phi$.
\end{theorem}
\begin{proof}
We split the proof into two parts. First we proceed as in \cite{FRQ,SW} to show that $c$ is achieved. Given $u \in Y$ we know by (H1) that there exists exactly one $t_u>0$ such that $t_u u\in \mathcal{N} \cap Y$. In particular, this set is nonempty. It is also clear that $c \geq 0$.
Let $(u_n) \subset \mathcal{N} \cap Y$ be
a minimizing sequence for $c$, i.e. $\Phi(u_n) \to c$. By $(HY)_c$ we can assume that $u_n \rightharpoonup u\in Y $. Since $\Phi$ is weakly lower semicontinuous and $t_{u_n}=1$ for every $n$, we have
$c\leq \Phi(t_u u)\leq \liminf \Phi(t_u u_n) \leq \lim \Phi(u_n)=c$,
i.e. $\Phi(t_u u)=c>0$.
Finally, it is clear that $\mathcal{N} \subset Y$ if $J(u)\neq 0$ for every $u \in X \setminus Y$ with $u \neq 0$, so that $c=\inf_{\mathcal{N}}\Phi$ in this case.
Next we proceed as in \cite{LWW} (see also \cite{CW}) to show that $c$ is a critical value of $\Phi$.
Assume that $c$ is achieved by $u_0$ and this one is not a critical point of $\Phi$. Then there exists $v \in X$ such that
$\Phi'(u_0)v <0$.
By continuity we can find $\epsilon, \delta >0$ small such that
\begin{equation}\label{bbs}
\Phi'(t(u_0+ \sigma v))v <0,\;\;\; \mbox{ for } \ \ t \in [1-\epsilon, 1+ \epsilon] \ \ \mbox{and } \ \ \sigma \in [-\delta, \delta].
\end{equation}
Moreover, since $$\Phi'((1-\epsilon)u_0)u_0=\varphi_{u_0}'(1-\epsilon)>0>\varphi_{u_0}'(1+\epsilon)=\Phi'((1+\epsilon)u_0)u_0$$ and $Y$ is open, there exists $\overline{\sigma} \in (0,\delta)$ such that $u_0+\overline{\sigma}v \in Y$ and $\varphi_{u_0+\overline{\sigma}v}'(1-\epsilon)>0>\varphi_{u_0+\overline{\sigma}v}'(1+\epsilon)$.
It follows that $ t_{u_0+\overline{\sigma}v} \in (1-\epsilon, 1+\epsilon)$. Writing $\overline{t}={t}_{u_0+\overline{\sigma}v}$,
from (\ref{bbs}) we have
$$
\Phi(\overline{t}(u_0+ \overline{\sigma} v)) - \Phi(u_0)\leq \Phi(\overline{t}(u_0+ \overline{\sigma} v)) - \Phi(\overline{t}u_0)= \overline{t}\int^{\overline{\sigma}}_{0} \Phi'(\overline{t}(u_0+ \sigma v))v d\sigma <0,
$$
so that
$
\Phi(\overline{t}(u_0+ \overline{\sigma} v)) < \Phi(u_0) = c$,
which is a contradiction. Therefore $\Phi'(u_0)=0$ and the proof is complete.\\
\end{proof}
Next we provide some conditions leading to $(HY)_d$ when $\Phi$ has a mountain-pass geometry in $Y$:
\begin{lem}\label{l1}
Assume (H1) for every $u \in Y$, and the following conditions:
\begin{enumerate}
\item There exists $\sigma>1$ such that $\displaystyle \lim_{t \to \infty} \frac{\Phi(tu)}{t^\sigma}=-\infty$ uniformly for $u$ on weakly compact subsets of $Y$.
\item $\Phi\ge I_0-I$ in $Y$,
where $I$ is weakly continuous and vanishes on $ \partial Y$, and $I_0$ satisfies $\displaystyle \lim_{t \to \infty} I_0(tu)=\infty$ uniformly for $u \in \mathcal{S} \cap Y$.
\end{enumerate}
If $(u_n) \subset \mathcal{N} \cap Y$ and $(\Phi(u_n))$ is bounded then $(u_n)$ is bounded. In particular, $(HY)_d$ holds for any $d \in \mathbb{R}$ if we assume in addition that
\begin{enumerate}
\item [(3)] $J>0$ on $\partial Y \setminus \{0\}$ and in
$Y \cap B(0,R)$, for some $R>0$.
\end{enumerate}
\end{lem}
\begin{proof}
If $(u_n) \subset \mathcal{N} \cap Y$ is unbounded then we may assume that $\|u_n\| \to \infty$ and $v_n \rightharpoonup v$ in $X$, where $v_n=\frac{u_n}{\|u_n\|}$. If $v \in \partial Y$ then, since $t_{u_n}=1$ for every $n$ and $I$ is weakly continuous and vanishes on $ \partial Y$, for every $t>0$ we have
\begin{equation}
\label{b1}\Phi(u_n)\geq \Phi(tv_n)\geq I_0(tv_n)-I(tv_n)\ge I_0(tv_n) - C.
\end{equation}
By (2) we have $I_0(tv_n) \to \infty$ as $t \to \infty$, uniformly in $n$, and
we obtain a contradiction with the boundedness of $(\Phi(u_n))$. Hence $v \in Y$ and consequently
$$\frac{\Phi(u_n)}{\|u_n\|^{\sigma}}= \frac{\Phi(\|u_n\|v_n)}{\|u_n\|^{\sigma}} \rightarrow -\infty,$$
which contradicts the fact that $\Phi(u_n)>0$ for every $n$.
Therefore $(u_n)$ is bounded. Let us assume in addition (3). Up to a subsequence, we have $u_n \rightharpoonup u$ in $X$. Since $(u_n) \subset Y$ we have either $u \in Y$ or $u\in \partial Y$. From $J>0$ in $Y \cap B(0,R)$ we know that $(u_n)$ is bounded away from zero, so that repeating \eqref{b1} we infer that $u \neq 0$. The weak lower semicontinuity of $J$ yields
$J(u) \leq \liminf J(u_n)=0$, and since $J>0$ on $\partial Y \setminus \{0\}$ we deduce that $u \in Y$, so that $(HY)_d$ is satisfied for any $d \in \mathbb{R}$.
\end{proof}
The boundedness of sequences minimizing $\Phi$ over $\mathcal{N} \cap Y$ can sometimes be obtained by other conditions (e.g. the boundedness of $\mathcal{N} \cap Y$ or the coercivity of $\Phi$ over it). In such cases $(HY)_d$ might be easier to establish through the following condition:\\
\begin{itemize}
\item [(HJ)] If $(u_n)\subset Y$, $u_n\rightharpoonup u$ in $X$ and $J(u_n)\to J(u)$ then $u_n \to u$ in $X$.\\
\end{itemize}
Let us note that (HJ) is satisfied if $J(u)=C\|u\|^{\theta}+H(u)$ where $C,\theta>0$ and $H$ is weakly continuous (which is the case in most of our applications).
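Indeed, if $(u_n) \subset Y$, $u_n \rightharpoonup u$ in $X$ and $J(u_n)\to J(u)$, then the weak continuity of $H$ yields
$$C\|u_n\|^{\theta}=J(u_n)-H(u_n)\longrightarrow J(u)-H(u)=C\|u\|^{\theta},$$
so that $\|u_n\|\to\|u\|$; since the spaces $X$ appearing in our applications are uniformly convex, weak convergence together with convergence of the norms gives $u_n \to u$ in $X$ (Radon--Riesz property).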
\begin{lem}\label{l2}
$(HY)_d$ is satisfied under the following conditions:
\begin{enumerate}
\item Any sequence $(u_n) \subset \mathcal{N}\cap Y$ satisfying $\Phi(u_n)\to d$ is bounded.
\item $J>0$ on $\partial Y \setminus \{0\}$ and in
$Y \cap B(0,R)$, for some $R>0$.
\item $(HJ)$ holds for $u=0$.
\end{enumerate}
\end{lem}
\begin{proof}
By our assumptions, we may assume that $u_n \rightharpoonup u$ in $X$ if $(u_n) \subset \mathcal{N}\cap Y$ and $\Phi(u_n)\to d$. Since $J$ is weakly lower semicontinuous, we have $J(u)\leq 0$. Note also that (2) implies that $\|u_n\| \geq R$ for every $n$. If $u \not \in Y$ then $u \in \partial Y$, and (2) yields that $u=0$, so that $J(u_n) \to J(u)$. By (3) we have $u_n \to 0$ in $X$, which is a contradiction.
\end{proof}
Next we consider the following behavior on $Y$:\\
\begin{itemize}
\item [(H2)] $\varphi_u$ has a unique critical point $t_u>0$, which is a global minimizer (and consequently $\Phi(t_uu)=\varphi_u(t_u)<0$) \\
\end{itemize}
\begin{theorem}
\label{tp1}
Assume (H2) for any $u \in Y$, and the following conditions:
\begin{enumerate}
\item $\Phi$ is coercive in $Y$, i.e. $\Phi(u_n) \to \infty$ if $(u_n) \subset Y$ and $\|u_n\| \to \infty$.
\item $\Phi \geq 0$ on $\partial Y$.
\end{enumerate}
Then $c<0$ and $c=\inf_{Y} \Phi$, so that $c$ is a local minimum of $\Phi$. If, in addition, $\Phi \geq 0$ on $\mathcal{N} \setminus Y$, then $c$ is the ground state level of $\Phi$.
\end{theorem}
\begin{proof}
By (1) we know that $\Phi$ is bounded from below on $Y$. Let $(u_n) \subset Y$ be such that
$\Phi(u_n) \to \inf_{\overline{Y}} \Phi=\inf_Y \Phi<0$. From (1), (2) and the weak lower semicontinuity of $\Phi$ we can assume that $u_n \rightharpoonup u \in Y$. So $\Phi(u)=\inf_{Y} \Phi$ and since $Y$ is open, this infimum is a local minimum (and therefore a critical value) of $\Phi$. It follows that $c=\inf_{Y} \Phi$ and $c$ is the ground state level of $\Phi$ if $\Phi \geq 0$ on $\mathcal{N} \setminus Y$.\\
\end{proof}
\begin{rmk}
From the previous proof it is clear that instead of assuming that $\Phi$ is coercive in $Y$ and satisfies (H2) for any $u \in Y$, it suffices to assume $(HY)_c$ and $\inf_Y \Phi<0$. Furthermore, $Y$ may be an open set in general.\\
\end{rmk}
\subsection{Cones where $\varphi_u$ has more than one critical point}\strut\\
We now allow $\varphi_u$ to have more than one critical point. In contrast with the previous subsection, we shall require more regularity on $\varphi_u$ for $u \in Y$. First we assume the following condition in $Y$:\\
\begin{itemize}
\item [(H3)] $\varphi_u \in C^2(0,\infty)$ has a non-degenerate minimum point $t_u>0$ such that $\varphi_u'<0$ in $(0,t_u)$. Moreover $\varphi_u(t_u)<\varphi_u(t)$ for any $t>0$ such that $\varphi_u'(t)=0$. In particular $\Phi(t_u u)=\varphi_u(t_u)<0$. \\
\end{itemize}
It is clear that $t_u$ in (H3) is the first nonzero critical point of $\varphi_u$. (H3) holds in particular if $\varphi_u$ has two critical points, the first one being a local minimum point. This condition arises in several problems, as we shall see in the next sections.
\begin{theorem}\label{tp2} Assume $(HY)_c$ and (H3), (HJ) for all $u\in Y$.
Then $c<0$ and it is achieved.
In addition:
\begin{enumerate}
\item If $J$ is $C^1$ in $Y$ then $c$ is a local minimum of $\Phi$.
\item If $\Phi \ge 0$ on $\mathcal{N}\setminus Y$, then $c$ is the ground state level of $\Phi$.
\end{enumerate}
\end{theorem}
\begin{proof} By (H3) we know that $\mathcal{N}\cap Y$ is nonempty and $c<0$.
Let $(u_n)\subset \mathcal{N}\cap Y$ satisfy $\Phi(u_n)\to c$. By $(HY)_c$ we may assume that $u_n\rightharpoonup u\in Y$, so there exists $t_u>0$ such that $J(t_uu)=0$. We claim that $u_n\to u$ in $X$. Indeed, on the contrary, we conclude from (HJ) that $0=J(t_uu)<\limsup J(t_uu_n)$,
so that for $n$ large enough we may assume that $J(t_uu_n)>0$ i.e. $\varphi'_{u_n}(t_u)>0$. Thus (H3) yields that $t_u>t_{u_n}=1$, which implies that $\Phi(t_uu)<\Phi(u)\le \liminf \Phi(u_n)=c$,
a contradiction. Hence $u_n\to u$ in $X$ and $\varphi_u'(1)=0$. If $t_u \neq 1$ then from (H3) we deduce that $\Phi(t_uu)<\Phi(u)=c$, which is a contradiction. Therefore $t_u=1$ and $\Phi(u)=c<0$.
From now on we assume that $u$ is the minimizer of the last paragraph and $J$ is $C^1$ over $Y$. Let $F:Y\times (0,\infty)\to \mathbb{R}$ be given by $F(v,t):=\varphi_v'(t)=J(tv)/t$. It follows that $F\in C^1$, $F(u,1)=0$ and $F_t(u,1)>0$. By the implicit function theorem, there exists an open ball $B$ containing $u$ and a unique $C^1$ map $\sigma: B \to (0,\infty)$ such that $\sigma(v)=t_v$ and $F(v,\sigma(v))=0$ for any $v \in B$. Moreover, since $F_t(u,1)>0$ and $F_t$ is continuous, making $B$ smaller if necessary, we have $F_t(v,1)>0$ for $v \in B$.
We claim that there exists $R>0$ such that $t_ww\in B$ for any $w\in B(u,R)$. Otherwise, there exists a sequence $R_n\to 0^+$ and $w_n\in B(u,R_n)$ such that $t_{w_n}w_n\notin B$. Since $w_n\to u$, by the previous paragraph we deduce that $t_{w_n}\to 1$ and hence $t_{w_n}w_n\to u$, a contradiction. Making $R>0$ smaller if necessary, we have $B(u,R) \subset B$.
It follows that for any $w\in B(u,R)$ the line segment between $w$ and $t_w w$ lies inside $B$.
Thus, if $t_w\neq 1$ and $w \in B(u,R)$ then there exists $\theta\in(\min\{1,t_w\},\max\{1,t_w\})$ such that
\begin{equation*}
\varphi_w(1)-\varphi_{w}(t_w)=\varphi_w'(t_w)(t_w-1)+\frac{1}{2}\varphi_w''(\theta)(t_w-1)^2=\frac{1}{2}F_t(w,\theta)(t_w-1)^2,
\end{equation*}
and from $F_t(w,\theta)=\theta^{-2}F_t(\theta w,1)>0$, we conclude that $\Phi(t_ww)\le\Phi(w)$, so that
$
\Phi(u)\le \Phi(t_{w}w)\le \Phi(w)$.
Since this inequality holds for each $w\in B(u,R)$
we see that $u$ is a local minimizer of $\Phi$, and consequently a critical point of $\Phi$. Finally, (2) follows from $c<0$.
\end{proof}
\begin{rmk}\label{N0empty}
{\rm It is easily seen that Theorem \ref{tp2} still holds if the condition $\Phi\ge 0$ on $ \mathcal{N}\setminus Y$ is relaxed to $c<\displaystyle \inf_{\mathcal{N}\setminus Y}\Phi$.
}
\end{rmk}
Lastly, we show that the following condition leads to the existence of a critical point (generally not a ground state):\\
\begin{itemize}
\item [(H4)] $\varphi_u \in C^2(0,\infty)$ has a non-degenerate maximum point $s_u>0$ such that $\varphi_u'<0$ in $(s_u,\infty)$ and $\varphi_u(s_u)\geq \varphi_u(t)$ for any $t>0$ such that $\varphi_u'(t)=0$. \\
\end{itemize}
Note that $s_u$ in (H4) is the last critical point of $\varphi_u$ and satisfies $\varphi_u(s_u)\geq \varphi_u(t)$ whenever $\varphi_u'(t)>0$.
We stress that it might happen that $\Phi(s_u u)=\varphi_u(s_u)<0$. In particular, $s_u$ does not need to be a global maximum point of $\varphi_u$.
\begin{theorem}\label{tp3} Assume (H4),(HJ) for all $u\in Y$ and $(HY)_d$ with $d:=\inf\{\Phi (s_uu):\ u\in Y\} $. If $J$ is $C^1$ in $Y$ then $d$ is achieved by a critical point of $\Phi$.
\end{theorem}
\begin{proof} By (H4) we know that $\{s_uu:\ u\in Y\}$ is nonempty.
Let $(u_n)\subset Y$ satisfy $\Phi(s_{u_n}u_n)\to d$ and set $v_n:=s_{u_n}u_n$. By $(HY)_d$ we may assume that $v_n \rightharpoonup v\in Y$, so there exists $s_v>0$ such that $J(s_vv)=0$. We claim that $s_v v$ achieves $d$.
Let us first assume that $v_n \not \to v$ in $X$. From (HJ) we deduce that $0=J(s_vv)<\limsup J(s_vv_n)$,
so that for $n$ large we must have $J(s_vv_n)>0$ i.e. $\varphi'_{v_n}(s_v)>0$. By (H4) we must have $s_v<1$, which implies that $\Phi(s_vv)\leq \liminf \Phi(s_vv_n) \le \liminf\Phi(v_n)=d$, i.e. $\Phi(s_v v)=d$.
Assume now that $v_n\to v$ in $X$, so that $v \in \mathcal{N}$.
Let us prove that $s_v=1$, which implies $\Phi(v)=d$. To this end, note that $F(v,s_v)=\varphi_v'(s_v)=0$ and $F_t(v,s_v)=\varphi_v''(s_v)<0$, where $F$ is as in the proof of Theorem \ref{tp2}. By the implicit function theorem, there exists an open ball $B$ containing $v$ and a unique $C^1$ map $\sigma: B \to (0,\infty)$ such that $\sigma(u)=s_u$ and $F(u,\sigma(u))=0$ for any $u \in B$. Since $v_n\to v$ we have $s_v=\sigma(v)=\lim \sigma(v_n)=\lim s_{v_n}=1$, and the claim is proved.
Furthermore, the previous discussion also shows that the set $\{s_uu:\ u\in Y\}=\{u \in Y: J(u)=0,\ J'(u)u<0\}$ is, in fact, a $C^1$ manifold. Since $J'(u)$ is surjective, we conclude from the Lagrange multipliers rule that $ \Phi'(v)=\alpha J'(v)$ for some $\alpha\in \mathbb{R}$.
From $\Phi'(v)v=0>J'(v)v$, it follows that $\alpha=0$ and the proof is complete.
\end{proof}
\medskip
\section{Some classes of functionals and examples}
\medskip
Let us apply the results of the previous section to two classes of functionals. Recall that $X$ is a reflexive Banach space, $Y \subset X \setminus \{0\}$ is a nonempty open cone, and $c:=\displaystyle \inf_{\mathcal{N}\cap Y}\Phi$.
First we deal with $$\Phi=I_0-I,$$ where $I_0,I \in C^1(X,\mathbb{R})$ and $I_0(0)=I(0)=0$. The following result, which is a consequence of Theorem \ref{thm2} and Lemma \ref{l1}, extends \cite[Theorem 13]{SW} and \cite[Theorem 1.2]{FRQ}, as far as the existence of a ground state is concerned. Let us stress that $I_0$ may be nonhomogeneous (and this will be the case in several applications).
\begin{cor}\label{c1}
Under the above conditions, assume in addition that:
\begin{enumerate}
\item $I_0$ and $u \mapsto I_0'(u)u$ are weakly lower semicontinuous and $I'$ is completely continuous, i.e. $I'(u_n) \to I'(u)$ in $X^*$ if $u_n \rightharpoonup u$ in $X$.
\item There exist $C>0$ and $\eta>1$ such that
$I_0'(u)u\geq C\|u\|^\eta$ for every $u \in \overline{Y}$
and $I'(u)=o(\|u\|^{\eta-1})$ as $u \to 0$ in $\overline{Y}$.
\item $I(u)=I'(u)u=0$ for every $u \in \partial Y$.
\item There exists $\sigma>1$ such that $t \mapsto \frac{I_0'(tu)u}{t^{\sigma-1}}$ and $t \mapsto \frac{I'(tu)u}{t^{\sigma-1}}$ are nonincreasing and increasing in $(0,\infty)$, respectively, for every $u \in Y$. Moreover, $\displaystyle \lim_{t \to \infty} \frac{I_0(tu)}{t^{\sigma}}<\infty=\displaystyle \lim_{t \to \infty} \frac{I(tu)}{t^{\sigma}}$ uniformly for $u$ on weakly compact subsets of $Y$.
\end{enumerate}
Then $c$ is positive and achieved by a critical point of $\Phi$. If, in addition,
$I'(u)u \leq 0<I_0'(u)u$ for $u \in X \setminus Y$ with $u\neq 0$ then $c$ is the ground state level of $\Phi$.
\end{cor}
\begin{proof}
First note that since $I'$ is completely continuous, the maps $I$ and $u \mapsto I'(u)u$ are weakly continuous, so that by (1) we have that $\Phi$ and $J$ are weakly lower semicontinuous.
We also see that (2) implies that $I_0(u)\geq C\eta^{-1}\|u\|^{\eta}$ for any $u \in \overline{Y}$ and $I(u)=o(\|u\|^{\eta})$ as $u \to 0$ in $\overline{Y}$. It follows that for any $u\in Y$ the map $\varphi_u$ is positive near zero. By (4) we have $\displaystyle \lim_{t \to \infty} \frac{\Phi(tu)}{t^\sigma}=-\infty$ uniformly for $u$ on weakly compact subsets of $Y$, and $t \mapsto \Phi'(tu)u$ vanishes at most once, so (H1) holds for $u \in Y$. Note also that (2) and (3) yield that $J>0$ on $\partial Y \setminus \{0\}$ and in
$Y \cap B(0,R)$, for some $R>0$.
From Lemma \ref{l1} we infer that $(HY)_c$ holds. Finally, if $I'(u)u \leq 0$ for $u \in X \setminus Y$ then $J(u)=I_0'(u)u-I'(u)u>0$ for every $u \in X \setminus Y$ with $u \neq 0$. Theorem \ref{thm2} yields then the conclusion.
\end{proof}
\begin{rmk}\label{rc1}
Instead of $I'(u)=o(\|u\|^{\eta-1})$ as $u \to 0$ in $\overline{Y}$, one may require more generally that for some $C,R>0$ we have $J(u)\geq C\|u\|^{\eta}$ for $u \in Y \cap B(0,R)$. Indeed, in such case $\varphi_u$ is still positive near zero and the rest of the proof remains valid. See Corollaries \ref{c2} and \ref{c3} for examples where this condition occurs.
\end{rmk}
Let us consider now the functional
\begin{equation}
\label{fcc}
\Phi(u):=\frac{1}{\kappa}P(u) +\frac{1}{\upsilon}\Upsilon(u)+\frac{1}{\gamma}\Gamma(u)
\end{equation}
defined in $X$.
We assume that $0<\upsilon<\kappa<\gamma$ and $P,\Upsilon,\Gamma \in C^1(X,\mathbb{R})$ are weakly lower semicontinuous and $\kappa$-homogeneous, $\upsilon$-homogeneous and $\gamma$-homogeneous, respectively. Moreover there exists $C>0$ such that $ P(u)\geq C^{-1}\|u\|^{\kappa}$, $|\Upsilon(u)| \leq C\|u\|^\upsilon$ and $|\Gamma(u)|\leq C\|u\|^{\gamma}$ for all $u \in X$.
Note that by homogeneity we have
$J=P+\Upsilon+\Gamma$,
which is assumed to satisfy condition (HJ).
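Explicitly, writing out the fibering map by homogeneity gives
$$\varphi_u(t)=\Phi(tu)=\frac{t^{\kappa}}{\kappa}P(u)+\frac{t^{\upsilon}}{\upsilon}\Upsilon(u)+\frac{t^{\gamma}}{\gamma}\Gamma(u)
\quad \mbox{and} \quad
\varphi_u'(t)=t^{\kappa-1}P(u)+t^{\upsilon-1}\Upsilon(u)+t^{\gamma-1}\Gamma(u),$$
so that indeed $J(u)=\varphi_u'(1)=P(u)+\Upsilon(u)+\Gamma(u)$.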
We deal with the cones $$Y_1=\left\{u \in X: \Upsilon(u)<0\right\} \quad
\mbox{and}
\quad Y_2=\left\{u \in X: \Gamma(u)<0\right\},$$
under the following condition:\\
\begin{enumerate}
\item [(H5)] For any $u \in Y_1 \cap Y_2$ the map $\varphi_u$ has exactly two critical points $0<t_u<s_u$, both non-degenerate, with $t_u$ a local minimum point, and $s_u$ a local maximum point.\\
\end{enumerate}
This condition enables us to apply Theorems \ref{tp2} and \ref{tp3} on $Y_1$ and $Y_2$, respectively, and derive the following result:
\begin{cor}\label{cc}
Under the above conditions, assume that $Y_1$ is non-empty. Then $c:=\inf_{\mathcal{N}\cap Y_1} \Phi<0$ is the ground state level, and it is achieved by a local minimizer of $\Phi$. If, in addition, $Y_2$ is nonempty then $\Phi$ has a second critical point.
\end{cor}
\begin{proof}
First of all, note that by (3) we have
\begin{equation}\label{coercibe}
\Phi(u)=\frac{\gamma-\kappa}{\kappa\gamma}P(u)+\frac{\gamma-\upsilon}{\upsilon\gamma}\Upsilon(u)
\ge C_1\|u\|^\kappa-C_2\|u\|^\upsilon \quad \forall u \in \mathcal{N},
\end{equation}
where $C_1,C_2$ are positive constants. Therefore $\Phi$ is coercive over $\mathcal{N}$.
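For the reader's convenience, let us note that the first equality in \eqref{coercibe} follows by eliminating $\Gamma$ through the constraint: since $J=P+\Upsilon+\Gamma=0$ on $\mathcal{N}$, we have $\Gamma(u)=-P(u)-\Upsilon(u)$, hence
$$\Phi(u)=\frac{1}{\kappa}P(u)+\frac{1}{\upsilon}\Upsilon(u)-\frac{1}{\gamma}\left(P(u)+\Upsilon(u)\right)=\frac{\gamma-\kappa}{\kappa\gamma}P(u)+\frac{\gamma-\upsilon}{\upsilon\gamma}\Upsilon(u), \quad u \in \mathcal{N}.$$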
Let us
show that Theorem \ref{tp2} provides the desired assertions on $c$. We analyze $\varphi_u$ for $u \in Y_1$. If $\Gamma(u)\ge 0$, then
\begin{enumerate}
\item[i)] $\varphi_u$ has a unique critical point $t_u>0$, which is a non-degenerate global minimizer,
\end{enumerate}
whereas if $\Gamma(u)< 0$, then $u \in Y_1 \cap Y_2$, and by (H5)
\begin{enumerate}
\item[ii)] $\varphi_u$ has exactly two critical points (both non-degenerate), the first one being a local minimum point $t_u>0$.
\end{enumerate}
Moreover, in both cases $\varphi_u(t_u)=\Phi(t_u u)<0$, and we deduce that $(H3)$ holds for $u \in Y_1$. To prove $(HY)_{c}$ note that, by \eqref{coercibe} we can find a sequence $(u_n) \subset \mathcal{N}\cap Y_1$ such that $u_n\rightharpoonup u$ in $X$ and $\Phi(u_n) \to c$. Since $c<0$, we conclude that $u\neq 0$, and from the inequality $-(\gamma-1)\varphi_{u_n}'(1)+\varphi_{u_n}''(1)>0$ it follows that
$$P(u)+\frac{\gamma-\upsilon}{\gamma-\kappa}\Upsilon(u)\leq \liminf \left( P(u_n)+\frac{\gamma-\upsilon}{\gamma-\kappa}\Upsilon(u_n)\right)\leq 0.$$
Since $u \neq 0$ we infer that $P(u)>0$, and hence
$\Upsilon(u)<0$, i.e. $u\in Y_1$. Thus $(HY)_c$ holds.
Finally, $J$ is clearly $C^1$ and by \eqref{coercibe} we have $\Phi>0$ in $\mathcal{N} \setminus Y_1$.
Next we show that Theorem \ref{tp3} applies on $Y_2$. Given $u \in Y_2$ we have the following alternatives: if $\Upsilon(u)\ge 0$, then
\begin{enumerate}
\item[i)] $\varphi_u$ has only one critical point $s_u>0$, which is a non-degenerate global maximum point.
\end{enumerate}
On the other hand, if $\Upsilon(u)< 0$, then we have again $u \in Y_1 \cap Y_2$, so that by (H5)
\begin{enumerate}
\item[ii)] $\varphi_u$ has only two critical points (both non-degenerate), the second one being a local maximum point $s_u$.
\end{enumerate}
Thus (H4) holds for any $u \in Y_2$. We set $d:=\inf\{\Phi (s_uu):\ u\in Y_2\}$ and prove $(HY)_{d}$. Once again from \eqref{coercibe}, we see that if $(u_n) \subset \mathcal{N}\cap Y_2$ and $\Phi(s_{u_n}u_n) \to d$ then we can assume that $v_n:=s_{u_n}u_n\rightharpoonup v$ in $X$. From $-(\upsilon-1)\varphi_{v_n}'(1)+\varphi_{v_n}''(1)<0$, it follows that
$$C^{-1}\|v_n\|^\kappa\leq P(v_n)<-\frac{\gamma-\upsilon}{\kappa-\upsilon}\Gamma(v_n)\leq C\|v_n\|^{\gamma}.$$
Hence there exists $C>0$ such that $\|v_n\|\ge C$, and consequently $\Gamma(v)<0$, i.e. $v\in Y_2$.
Therefore $d$ is achieved by a critical point of $\Phi$.
\end{proof}
\begin{rmk}\label{rcc}\strut
\begin{enumerate}
\item An abstract result similar to Corollary \ref{cc} has been proved in \cite{BW}. We point out that in \cite{BW} the functionals $\Upsilon$ and $\Gamma$ are assumed to be weakly continuous.\\
\item From the previous proof we see that Corollary \ref{cc} still holds if the inequality
$ P(u)\geq C^{-1}\|u\|^{\kappa}$ holds for $u \in \mathcal{N}$. More precisely, it must hold along minimizing sequences for $c$ and $d$.
\end{enumerate}
\end{rmk}
Several problems are associated to energy functionals having the structure above.
Let $a,b \in L^{\infty}(\Omega)$ with $a^+,b^+ \not \equiv 0$, $\lambda>0$, and $1<q<p<r<p^*$. The functional given by \eqref{fcc} with
\begin{equation}\label{ecc}
P(u)=\|u\|^p, \quad \Upsilon(u)=\Upsilon_\lambda(u)= -\lambda \int_\Omega a|u|^{q} \quad \mbox{ and } \quad \Gamma(u)=-\int_\Omega b|u|^{r},
\end{equation}
(or $\Upsilon(u)=-\lambda \int_\Omega au$) defined in $X=W_0^{1,p}(\Omega)$,
has been treated by many authors (see e.g. \cite{BW,I,Ta}).
It is easily seen that such $P$, $\Upsilon,\Gamma$ satisfy the conditions of Corollary \ref{cc}.
Note also that for any $\lambda>0$ we have $Y_1=\left\{u \in X: \int_\Omega a|u|^q>0\right\}$.
Set
\begin{equation*}\label{ls}
\lambda^*:= \inf_{u\in Y_1 \cap Y_2} \lambda(u), \quad \mbox{where} \quad \lambda(u):=\frac{r-p}{r-q}\left(\frac{p-q}{r-q}\right)^{\frac{p-q}{r-p}}\frac{\|u\|^{p\frac{r-q}{r-p}}}{(\int_\Omega a|u|^q) \left(\int_\Omega b|u|^r\right)^{\frac{p-q}{r-p}}}
\end{equation*}
is the so-called nonlinear Rayleigh's quotient (see \cite{I1}). We observe that $\lambda(u)$ is $0$-homogeneous and it is obtained as the unique solution with respect to $(t,\lambda)$ of the system $\varphi_u'(t)=\varphi_u''(t)=0$. One can easily show that $\lambda^*$ is achieved and consequently $\lambda^*>0$. It follows that (H5) holds for $0<\lambda<\lambda^*$, and Corollary \ref{cc} provides some of the results in \cite{BW,I,Ta}.
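For the reader's convenience, let us sketch how $\lambda(u)$ arises. Writing $\varphi_u(t)=\frac{t^p}{p}\|u\|^p-\lambda\frac{t^q}{q}\int_\Omega a|u|^q-\frac{t^r}{r}\int_\Omega b|u|^r$, the system $\varphi_u'(t)=\varphi_u''(t)=0$ reduces to
$$t^{r-p}=\frac{(p-q)\|u\|^p}{(r-q)\int_\Omega b|u|^r} \quad \mbox{and} \quad \lambda\int_\Omega a|u|^q=\frac{r-p}{r-q}\,t^{p-q}\|u\|^p,$$
and substituting the first relation into the second yields the displayed expression for $\lambda(u)$.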
Corollary \ref{cc} also applies if $$X = W^{2, \frac{p}{p-1}}(\Omega)\cap W_{0}^{1,\frac{p}{p-1}}(\Omega), \quad P(u)=\|u\|^{\frac{p}{p-1}}=\int_\Omega |\Delta u|^{\frac{p}{p-1}}$$ and $\Upsilon_\lambda$, $\Gamma$ are as in \eqref{ecc}, with now $1<q<\frac{p}{p-1}<r$ and $\frac{1}{p}+\frac{1}{r}>\frac{N-2}{N}$. These results, with $b \equiv 1$, were established in \cite{E1,E2}.
Further applications of Corollary \ref{cc} will be given in section \ref{spq} to deal with
\eqref{fcc}, still defined in $X=W_0^{1,p}(\Omega)$, but now with
$$P(u)=\|u\|^p, \quad \Upsilon(u)=\int_\Omega \left( |\nabla u|^q - \lambda |u|^q\right), \quad \mbox{ and } \quad \Gamma(u)=-\int_\Omega b|u|^{r},$$
or $$P(u)=\|u\|^q, \quad \Upsilon(u)=-\int_\Omega b|u|^{r}, \quad \mbox{ and } \quad \Gamma(u)=\int_\Omega \left( |\nabla u|^p - \lambda |u|^p\right),$$
which correspond to $(p,q)$-Laplacian problems with $1<q<p$.
We shall also apply Corollary \ref{cc} to \eqref{fcc} with $X:=H_0^1(\Omega)\times H_0^1(\Omega)$, which is a Banach space with the norm given by $\|(u,v)\|^2=\int_\Omega \left(|\nabla u|^2+|\nabla v|^2\right)$, and
$$P(u,v)=\|v\|_q^q, \quad \Upsilon(u,v)=\|(u,v)\|^2-2\lambda\int_\Omega uv, \quad \mbox{and} \quad \Gamma(u,v)=-\int_\Omega b|u|^r,$$
where $2<q<r<2^*$. This functional is associated to a gradient system, see subsection \ref{kayeexample1}.
\medskip
\subsection{Further examples}\label{fe}\strut\\
In the next sections we provide several applications of our results. Before that, let us show that some further established results can be derived from Theorems \ref{thm2}-\ref{tp3}.
Given $p \in (1,\infty)$ we denote by
\begin{equation}\label{eigenproblem}
\lambda_1(p):=\inf\left\{\int_\Omega |\nabla u|^p: u\in W_0^{1,p}(\Omega), \int_\Omega |u|^p=1\right\}
\end{equation}
the first eigenvalue of the Dirichlet $p$-Laplacian and by $\phi_1(p)>0$ a first eigenfunction associated to $\lambda_1(p)$. When $p=2$ we simply write $\phi_1$ and $\lambda_1$.
\medskip
\subsection{A $(p,q)$-Laplacian problem}\strut \\\label{pqexample}
First we give a simple example where Theorems \ref{thm2} and \ref{tp1} apply.
Consider the $(p,q)$-Laplacian problem \begin{equation}\label{pex}
-\Delta_p u -\Delta_q u = \alpha |u|^{p-2}u+\beta |u|^{q-2}u, \quad u \in W_0^{1,p}(\Omega),
\end{equation}
with $1<q<p$ and $\alpha,\beta \in \mathbb{R}$. This problem has been recently investigated in \cite{BT,BT2,BT3}. Regarding the use of the Nehari manifold or the fibering method for $(p,q)$-Laplacian problems, we refer to \cite{BT,BT3,CI,FRQ, LY,PRR,RQ}.
Let
$$\Phi(u)=\frac{1}{p}\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)+ \frac{1}{q}\int_\Omega \left(|\nabla u|^q-\beta |u|^q\right)$$
be defined in $X=W_0^{1,p}(\Omega)$. Note that $\Phi$ and $J$ given by
$$J(u)=\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)+ \int_\Omega \left(|\nabla u|^q-\beta |u|^q\right)$$ are weakly lower semicontinuous.
We apply Theorems \ref{thm2} and \ref{tp1} with $$Y_1=\left\{u \in X: \int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)<0\right\} $$
and
$$Y_2=\left\{u \in X: \int_\Omega \left(|\nabla u|^q- \beta|u|^q \right)<0\right\},$$
respectively.
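For later use, observe that, writing $D_p(u):=\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)$ and $D_q(u):=\int_\Omega \left(|\nabla u|^q- \beta|u|^q \right)$, the fibering map reads
$$\varphi_u(t)=\frac{t^p}{p}D_p(u)+\frac{t^q}{q}D_q(u) \quad \mbox{and} \quad \varphi_u'(t)=t^{q-1}\left(D_q(u)+t^{p-q}D_p(u)\right),$$
so that whenever $D_p(u)$ and $D_q(u)$ have opposite signs, $\varphi_u$ has the unique nontrivial critical point $t_u=\left(-D_q(u)/D_p(u)\right)^{\frac{1}{p-q}}$.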
We see that $Y_1$ and $Y_2$ are nonempty if, and only if, $\alpha >\lambda_1(p)$ and $\beta>\lambda_1(q)$, respectively.
We set
$$
\beta_*(\alpha):=\inf_{u\in \overline{Y}_1 \setminus \{0\}}\frac{\int_\Omega |\nabla u|^q}{\int_\Omega |u|^q} \quad \mbox{and} \quad \alpha_*(\beta):=\inf_{u\in \overline{Y}_2 \setminus \{0\}}\frac{\int_\Omega |\nabla u|^p}{\int_\Omega |u|^p}.
$$
We claim that for any $\beta<\beta_*(\alpha)$ there exists $C>0$ such that
\begin{equation}\label{ipq}
\int_\Omega \left(|\nabla u|^q- \beta|u|^q \right) \geq C\|u\|^q \quad \forall u \in \overline{Y}_1.
\end{equation}
Indeed, otherwise there would be a sequence $(u_n) \subset Y_1$ such that
$\|u_n\|=1$ and $\int_\Omega \left(|\nabla u_n|^q- \beta|u_n|^q \right) \to 0$. We can assume that $u_n \rightharpoonup u$ in $X$, so that by weak lower semicontinuity, we have $\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)\leq 0$ and $\int_\Omega \left(|\nabla u|^q- \beta|u|^q \right)\leq 0$. Moreover, from the first inequality it is clear that $u \neq 0$, and the second one implies that $\beta \geq \beta_*(\alpha)$, a contradiction.\\
In a similar way, one can show that for $\alpha<\alpha_*(\beta)$ there exists $C>0$ such that
\begin{equation}\label{ipq1}
\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right) \geq C\|u\|^p \quad \forall u \in \overline{Y}_2.
\end{equation}
\medskip
\noindent {\bf Minimization in $\mathcal{N} \cap Y_1$:}
Let us take $\alpha >\lambda_1(p)$ and $\beta<\beta_*(\alpha)$.
It is straightforward that (H1) holds for all $u \in Y_1$.
By \eqref{ipq} we have
$$\Phi(u)=\frac{p-q}{pq}\int_\Omega \left(|\nabla u|^q-\beta |u|^q\right)\geq C\|u\|^q$$
for any $u \in \mathcal{N} \cap Y_1$,
and
\begin{equation}\label{ij}
J(u) \geq C\|u\|^q+\int_\Omega \left(|\nabla u|^p-\alpha|u|^p \right) \geq C\|u\|^q-C_1\|u\|^p
\end{equation}
for $u \in Y_1$. It follows that $J>0$ on $\partial Y_1 \setminus \{0\}$ and in
$Y_1 \cap B(0,R)$, for some $R>0$. Moreover, the first inequality in \eqref{ij} also shows that (HJ) holds for $u=0$.
From Lemma \ref{l2} we infer that $(HY)_{c_1}$ is satisfied for $c_1:=\inf_{\mathcal{N}\cap Y_1} \Phi$. So Theorem \ref{thm2} yields that $c_1$ is a critical value of $\Phi$. Moreover, if $\beta<\lambda_1(q)$ then $J>0$ in $X \setminus Y_1$, and consequently $c_1>0$ is the ground state level of $\Phi$.\\
\noindent{\bf Minimization in $\mathcal{N} \cap Y_2$:} It is clear that (H2) holds for any $u \in Y_2$. Given $\beta >\lambda_1(q)$ and $\alpha<\alpha_*(\beta)$,
by \eqref{ipq1} we have that $\Phi$ is coercive in $Y_2$. Moreover, if $u \in \partial Y_2$ then $\Phi(u)=\frac{1}{p}\int_\Omega \left(|\nabla u|^p- \alpha|u|^p \right)>0$. Finally, $\mathcal{N} \setminus Y_2$ is empty for $\alpha\leq \lambda_1(p)$, whereas $\Phi>0$ on $\mathcal{N} \setminus Y_2$ for $\alpha>\lambda_1(p)$. By Theorem \ref{tp1}, we infer that $c_2:=\inf_{\mathcal{N}\cap Y_2} \Phi<0$ is the ground state level of $\Phi$.\\
\begin{rmk}\label{pqrem} {\rm Let $\phi_q=\phi_1(q)$, $\phi_p=\phi_1(p)$, and
\begin{equation*}
\alpha_*:=\frac{\int_\Omega |\nabla \phi_q|^p}{\int_\Omega \phi_q^p}, \ \ \ \beta_*:=\frac{\int_\Omega |\nabla \phi_p|^q}{\int_\Omega \phi_p^q}.
\end{equation*}
In \cite[Proposition 7]{BT2} the following properties of $\beta_{*}(\alpha)$ are shown, among others:
$$ \lambda_1(q)<\beta_*(\alpha)<\beta_* \mbox{ for } \alpha\in (\lambda_1(p),\alpha_*) \quad \mbox{and} \quad \beta_*(\alpha)=\lambda_1(q) \mbox{ for } \alpha\ge \alpha_*.$$
In a similar way, $\alpha_*(\beta)$ satisfies
$$\lambda_1(p)<\alpha_*(\beta)<\alpha_*\mbox{ for } \beta\in (\lambda_1(q),\beta_*)\quad \mbox{and} \quad \alpha_*(\beta)=\lambda_1(p)\mbox{ for }\beta\ge \beta_*.$$
It turns out that
\begin{equation}\label{rab}
\alpha_*(\beta_*(\alpha))=\alpha \mbox{ for } \alpha\in (\lambda_1(p),\alpha_*) \mbox{ and } \beta_*(\alpha_*(\beta) )=\beta \mbox{ for }\beta\in (\lambda_1(q),\beta_*)
\end{equation}
i.e. $\alpha_*(\beta)$ and $\beta_*(\alpha)$ yield the same curve.
Indeed, as shown in the proof of \cite[Proposition 7]{BT2}, any minimizer $u_\alpha$ associated to $\beta_*(\alpha)$ satisfies $\int_\Omega \left(|\nabla u_\alpha|^p- \alpha|u_\alpha|^p \right)= 0$ for $\alpha\in (\lambda_1(p),\alpha_*)$. It follows that $\alpha_*(\beta_*(\alpha))\le \alpha$. Moreover, if $\alpha_*(\beta_*(\alpha))< \alpha$ then there exists $u \in X\setminus \{0\}$ such that $\frac{\int_\Omega |\nabla u|^p}{\int_\Omega |u|^p}<\alpha$ and $\int_\Omega \left(|\nabla u|^q- \beta_*(\alpha)|u|^q \right)\leq 0$. So $u$ is not a minimizer associated to $\beta_*(\alpha)$, i.e.
$\frac{\int_\Omega |\nabla u|^q}{\int_\Omega |u|^q}>\beta_*(\alpha)$, which provides a contradiction. Therefore
$\alpha_*(\beta_*(\alpha))=\alpha$ for $\alpha\in (\lambda_1(p),\alpha_*)$, and the second assertion in \eqref{rab} is completely similar.
}
\end{rmk}
Summing up, we have proved the following result, which yields \cite[Proposition 2]{BT} and \cite[Theorem 2.7]{BT2}:
\begin{cor}
Let $1<q<p$.
\begin{enumerate}
\item If $\beta<\lambda_1(q)$ then \eqref{pex} has a ground state solution at a positive level for $\alpha >\lambda_1(p)$.
\item If $\beta>\lambda_1(q)$ then \eqref{pex} has a ground state solution at a negative level for $\alpha <\alpha_*(\beta)$, and a second solution (which is a ground state solution relative to $Y_1$) for $\lambda_1(p)<\alpha<\alpha_*(\beta)$.\\
\end{enumerate}
\end{cor}
\medskip
\subsection{A Kirchhoff problem}\strut \\
We consider the Kirchhoff equation
\begin{equation}\label{pk2}
-\left(a+b\int_{\Omega}|\nabla u|^2 dx\right)\Delta u= \lambda u+\mu |u|^2u, \quad u \in H_0^1(\Omega),
\end{equation}
where $a,b>0$, and $\lambda,\mu \in \mathbb{R}$. Kirchhoff type problems have been extensively investigated, mostly by variational methods. We refer to \cite{CO,CKW,FRQ,LTW,SS} for works on this class of problems via the Nehari manifold method.
Let us set, for $u \in X=H_0^1(\Omega)$,
$$\Phi_1(u):=a\| u\|^2-\lambda \|u\|_2^2\quad \mbox{and} \quad \Phi_2(u):=b\|u\|^4-\mu \|u\|_4^4.$$
The energy functional for \eqref{pk2} is given by $
\Phi(u)=\frac{1}{2}\Phi_1(u)+\frac{1}{4}\Phi_2(u)$,
so that $J=\Phi_1+\Phi_2$.
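Concretely, by homogeneity the fibering map is
$$\varphi_u(t)=\frac{t^2}{2}\Phi_1(u)+\frac{t^4}{4}\Phi_2(u) \quad \mbox{and} \quad \varphi_u'(t)=t\left(\Phi_1(u)+t^2\Phi_2(u)\right),$$
so that $\varphi_u$ has a nontrivial critical point $t_u=\sqrt{-\Phi_1(u)/\Phi_2(u)}$ exactly when $\Phi_1(u)$ and $\Phi_2(u)$ have opposite signs.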
As mentioned in the introduction, this problem has a variational structure similar to the one of the $(p,q)$ Laplacian problem in the previous subsection.
Let $\psi_1$ be the unique positive minimizer achieving (see \cite{D})
$
\mu_1=\inf\left\{\frac{\|u\|^4}{\|u\|_4^4}:u\in X\setminus\{0\}\right\}
$. Set $\lambda^*=a\frac{\|\psi_1\|^2}{\|\psi_1\|_2^2}$ and $\mu^*=b\frac{\|\phi_1\|^4}{\|\phi_1\|_4^4}$,
and note that $\lambda^*>a\lambda_1$ and $\mu^*>b\mu_1$ (see \cite{CO} or \cite[Lemma 3]{SS}).
We shall deal with the cones $$Y_1:=\left\{u\in X: \Phi_2(u)<0 \right\} \quad \mbox{and} \quad Y_2:=\left\{u\in X: \Phi_1(u)<0 \right\}$$ and the extremal parameter (see \cite{I1})
$$
\mu^*(\lambda):=\inf\left\{b\frac{\|u\|^4}{\|u\|_4^4}: u \in \overline{Y}_2\setminus \{0\}\right\}.
$$
By \cite[Proposition 3]{SS} we have $\mu^*(\lambda)\in(b\mu_1,\mu^*)$ for all $\lambda\in(a\lambda_1,\lambda^*)$.
For each $\mu>b\mu_1$, define
$$
\lambda^*(\mu):=\inf\left\{a\frac{\|u\|^2}{\|u\|_2^2}: u \in \overline{Y}_1\setminus\{0\} \right\}.
$$
Similarly to Remark \ref{pqrem}, by using \cite[Theorem 6]{SS} and the ideas of \cite{BT2} one can prove that $\lambda^*(\mu)$ and $\mu^*(\lambda)$ yield the same curve. From now on we assume this fact.\\
\noindent {\bf Minimization in $\mathcal{N} \cap Y_1$:}
Let us prove that Theorem \ref{thm2} can be applied for $(\lambda,\mu)\in A:=[a\lambda_1,\lambda^*)\times(b\mu_1,\mu^*(\lambda))\cup(-\infty,a\lambda_1)\times(b\mu_1,+\infty)$. From \cite[Proposition 4]{SS}, it follows that $Y_1$ is non-empty and one can easily see that (H1) is satisfied on $Y_1$. Now we claim that
\begin{equation}\label{kirkk1}
\Phi_1(u)\ge C\|u\|^2 \quad \forall u\in \overline{Y}_1.
\end{equation}
Otherwise we can find a sequence $(u_n)\subset S\cap \overline{Y}_1$ such that $\Phi_1(u_n)< 1/n$. We can assume that $u_n \rightharpoonup u$, so that $\Phi_1(u)\le 0$ and $\Phi_2(u)\le 0$. Moreover, it is clear that $u \not \equiv 0$, which contradicts $(\lambda,\mu)\in A$, and thus \eqref{kirkk1} holds true.
Next we prove that $(HY)_{c_1}$ is satisfied for $c_1:=\inf_{\mathcal{N}\cap Y_1} \Phi$: indeed note by \eqref{kirkk1} that
\begin{equation*}
J(u)\ge C\|u\|^2+\Phi_2(u)\ge C\|u\|^2-C_1\|u\|^4, \quad \forall u\in \overline{Y}_1.
\end{equation*}
It follows that $J>0$ on $\partial Y_1 \setminus \{0\}$ and in
$Y_1 \cap B(0,R)$, for some $R>0$. Moreover, the first inequality also shows that (HJ) holds with $u=0$.
From Lemma \ref{l2} we infer that $(HY)_{c_1}$ holds, so Theorem \ref{thm2} yields that $c_1$ is a critical value of $\Phi$. Moreover, if $\lambda<a\lambda_1$ then $J>0$ in $X \setminus Y_1$, so that $c_1>0$ is the ground state level of $\Phi$.\\
\noindent {\bf Minimization in $\mathcal{N} \cap Y_2$:}
We claim that Theorem \ref{tp1} can be applied for $(\lambda,\mu)\in B:=(a\lambda_1,\lambda^*)\times[b\mu_1,\mu^*(\lambda))\cup(a\lambda_1,+\infty)\times(-\infty,b\mu_1)$. From \cite[Proposition 4]{SS}, it follows that $Y_2$ is non-empty for all $(\lambda,\mu)\in B$ and (H2) is satisfied therein. Let us prove that there exists a positive constant $C$ such that
\begin{equation}\label{kirkk}
\Phi_2(u)\ge C\|u\|^4,\quad \forall u\in \overline{Y}_2.
\end{equation}
Otherwise we can find a sequence $(u_n)\subset S\cap Y_2$ such that $\Phi_2(u_n)< 1/n$. We can assume that $u_n \rightharpoonup u$, so that $\Phi_1(u)\le 0$ and $\Phi_2(u)\le 0$. Moreover, it is clear that $u \not \equiv 0$, which contradicts $(\lambda,\mu)\in B$, and thus \eqref{kirkk} holds true. In particular, $\Phi$ is coercive in $Y_2$. Still from \eqref{kirkk} we have $\Phi=\frac{1}{4}\Phi_2>0$ on $\partial Y_2$.
Since $\Phi=\frac{1}{4}\Phi_1\ge 0$ on $\mathcal{N}\setminus Y_2$ we conclude from Theorem \ref{tp1} that $c_2:=\inf_{\mathcal{N} \cap Y_2} \Phi=\inf_{Y_2} \Phi=\inf_{\mathcal{N}} \Phi$ is negative and achieved by a local minimizer of $\Phi$. \\
Hence we obtain the following result \cite[Theorem 2]{SS}:
\begin{cor}\strut
\begin{enumerate}
\item Assume either $\lambda<a \lambda_1$ and $\mu >b \mu_1$ or $a \lambda_1\le \lambda<\lambda^*$ and $b \mu_1 < \mu < \mu^*(\lambda)$. Then \eqref{pk2} has a solution which is a ground state relative to $Y_1$. Moreover, this one is a ground state solution at a positive level for $\lambda<a \lambda_1$.
\item Assume either $\lambda>a \lambda_1$ and $\mu <b \mu_1$ or $a \lambda_1< \lambda<\lambda^*$ and $b \mu_1 \le \mu < \mu^*(\lambda)$. Then \eqref{pk2} has a ground state solution, which is a local minimizer of the energy functional, at a negative level.
\end{enumerate}
\end{cor}
\medskip
\medskip
\section{Applications to PDEs}
\medskip
Let us obtain now some new results for $(p,q)$-Laplacian problems and Kirchhoff equations.
Throughout the next sections we assume that $b \in L^{\infty}(\Omega)$, and $g:\Omega \times \mathbb{R} \to \mathbb{R}$ and $f:\mathbb{R} \to \mathbb{R}$ are continuous. We set $G(x,t):=\displaystyle\int_{0}^{t}g(x,s)\ ds$ and $F(t):=\displaystyle\int_{0}^{t}f(s)\ ds$ for $t \in \mathbb{R}$.\\
\subsection{Generalized $(p,q)$-Laplacian problems}\strut\\
\label{spq}
Let $a:[0,\infty)\rightarrow
[0,\infty)$ be a $C^{1}$ function satisfying the following conditions:\\
\begin{itemize}
\item[(A1)] $k_0\left( 1+t^{\frac{q-p}{p}}\right) \leq a(t) \leq k_1\left( 1+t^{\frac{q-p}{p}}\right)$ for every $t>0$, and some constants $k_0,k_1>0$ and $p\geq q>1$.
\item[(A2)] $a$ is non-increasing.
\item[(A3)] $t\mapsto a(t^p)t^p$ and $t \mapsto A(t^p)$ are convex in $(0,\infty)$.\\
\end{itemize}
The problem
\begin{equation}\label{quasi}
-div \left(a(|\nabla u|^p)|\nabla u|^{p-2}\nabla u\right) = g(x,u), \quad u \in W_0^{1,p}(\Omega),
\end{equation}
is a generalized $(p,q)$-Laplacian problem, as the operator on the left-hand side reduces to $-\Delta_p -\Delta_q$ if we choose $a(t)=1+t^{\frac{q-p}{p}}$, which is one of the main prototypes satisfying (A1)-(A3).
Let us set $A(t):=\displaystyle\int_{0}^{t}a(s)\ ds$ for $t \ge 0$.
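For the prototype $a(t)=1+t^{\frac{q-p}{p}}$ these objects are computed explicitly:
$$A(t)=t+\frac{p}{q}\,t^{\frac{q}{p}} \quad \mbox{and} \quad a(|\nabla u|^p)|\nabla u|^{p-2}\nabla u=|\nabla u|^{p-2}\nabla u+|\nabla u|^{q-2}\nabla u,$$
which confirms that in this case \eqref{quasi} reduces to $-\Delta_p u-\Delta_q u=g(x,u)$.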
\begin{cor} \label{c2}
Assume that $g(x,t)=0$ for every $x \in \Omega \setminus \Omega'$, where $\Omega'$ is an open subset of $\Omega$. Moreover, assume that:
\begin{enumerate}
\item $\displaystyle \lim_{|t| \rightarrow 0} \frac{g(x,t)}{|t|^{q-1}}=0$ uniformly for $x \in \Omega'$.
\item There exists $r \in (p,p^*)$ such that $\displaystyle \lim_{|t| \rightarrow \infty} \frac{g(x,t)}{|t|^{r-1}}=0$ uniformly for $x \in \Omega'$.
\item For every $x \in \Omega'$ the map $t \mapsto \frac{g(x,t)}{|t|^{p-1}}$
is increasing on $\mathbb{R} \setminus \{0\}$.
\item $\displaystyle \lim_{|t| \to \infty}\frac{G(x,t)}{|t|^p}=\infty$ uniformly for $x \in \Omega'$.
\end{enumerate}
Then \eqref{quasi} has a positive ground state.
\end{cor}
\begin{proof}
Let $X=W_0^{1,p}(\Omega)$ with $\|u\|=\left(\int_\Omega |\nabla u|^p\right)^{\frac{1}{p}}$, and $\Phi:=I_0-I$, where
\begin{equation}
I_0(u):=\frac{1}{p}\displaystyle\int_{\Omega}A(|\nabla u|^p)\quad \mbox{and} \quad I(u):=\int_\Omega G(x,u).
\end{equation}
for $u \in X$. Note that
$$I_0'(u)u=\int_\Omega a(|\nabla u|^p) |\nabla u|^p\quad \mbox{and}\quad I'(u)v= \int_\Omega g(x,u)v.$$
(A3) and (2) provide the weak lower semicontinuity of $I_0$ and $u \mapsto I_0'(u)u$, and the complete continuity of $I'$, respectively. (A1) implies that $I'_0(u)u \geq k_0\|u\|^p$, whereas (1) and (2) imply that $J(u) \geq C_1\|u\|^p-C_2\|u\|^r$ (see the proof of \cite[Corollary 2.1]{FRQ}).
We set
$$Y:=\{u \in X: u \not \equiv 0 \text{ in } \Omega'\},$$
which is clearly an open cone in $X$.
We also see that $\partial Y=X \setminus Y=\{u \in X: u \equiv 0 \text{ in } \Omega'\}$ so that $I(u)=I'(u)u=0$ for every $u \in \partial Y=X \setminus Y$.
From (A1) we have
$$\frac{I_0(tu)}{t^p}\leq \frac{k_1}{q}t^{q-p} \int_\Omega |\nabla u|^q+\frac{k_1}{p}\|u\|^p \quad \mbox{and}\quad \frac{I(tu)}{t^p}=\int_\Omega \frac{G(x,tu)}{t^p}\to \infty$$
uniformly for $u$ on weakly compact subsets of $Y$.
Finally, by (A2) and (3) the maps
$$t \mapsto \frac{I_0'(tu)u}{t^{p-1}}=\int_\Omega a\left(t^p|\nabla u|^p\right) |\nabla u|^p \quad \mbox{and} \quad t \mapsto \frac{I'(tu)u}{t^{p-1}}=\int_{\Omega'} \frac{g(x,tu)}{t^{p-1}}u$$ are nonincreasing and increasing, respectively, for every $u \in Y$. Corollary \ref{c1} and Remark \ref{rc1} yield the conclusion.\\
\end{proof}
In particular, for $g(x,u)=b(x)f(u)$ we obtain the following result:
\begin{cor}
Assume that $b \geq 0$, and $f$ is such that:
\begin{enumerate}
\item $\displaystyle \lim_{t \rightarrow 0} \frac{f(t)}{t^{q-1}}=0$.
\item $\displaystyle \lim_{t \rightarrow \infty} \frac{f(t)}{t^{r-1}}=0$ for some $r \in (p,p^*)$.
\item $t \mapsto \frac{f(t)}{|t|^{p-1}}$
is increasing on $\mathbb{R} \setminus \{0\}$ and goes to infinity as $t \to \infty$.
\end{enumerate}
Then the problem \begin{equation}
-div \left(a(|\nabla u|^p)|\nabla u|^{p-2}\nabla u\right) = b(x)f(u), \quad u \in W_0^{1,p}(\Omega),
\end{equation} has a positive ground state level.\\
\end{cor}
\begin{cor}\label{c21}
Let $b_1,b_2 \in L^{\infty}(\Omega)$ with $b_1,b_2 \geq 0$ and $b_1b_2 \equiv 0$, and $r_1 \in (p,p^*)$, $r_2 \in (q,p^*)$ with $r_1 \geq r_2$. Then the problem \begin{equation}
-div \left(a(|\nabla u|^p)|\nabla u|^{p-2}\nabla u\right) = b_1(x)|u|^{r_1-2}u - b_2(x)|u|^{r_2-2}u, \quad u \in W_0^{1,p}(\Omega),
\end{equation}
has a positive ground state level.
\end{cor}
\begin{proof}
Let $X=W_0^{1,p}(\Omega)$ and \[
Y=
\begin{cases}
\{u \in X: \int_\Omega b_1|u|^{r_1}>0\} & \mbox{ if } r_1 > r_2,\\
\{u \in X: \int_\Omega (b_1-b_2)|u|^{r}>0\} & \mbox{ if }r_1 = r_2=r.
\end{cases}
\]
If $r_1>r_2$ then we apply Corollary \ref{c1} with
\begin{equation}
I_0(u):=\frac{1}{p}\displaystyle\int_{\Omega}A(|\nabla u|^p)+\int_\Omega b_2|u|^{r_2}\quad \mbox{and} \quad I(u):=\int_\Omega b_1|u|^{r_1}.
\end{equation}
Indeed, since $b_2 \geq 0$ we see that Corollary \ref{c1} applies with $\eta=p$ and $\sigma=r_2$.
If $r_1=r_2=r$ then Corollary \ref{c1} applies with
$$
I_0(u):=\frac{1}{p}\displaystyle\int_{\Omega}A(|\nabla u|^p)\quad \mbox{and} \quad I(u):=\int_\Omega b|u|^{r},
$$
where $b=b_1-b_2$. Since $p<r<p^*$ we see that conditions (2) and (4) of Corollary \ref{c1} hold with $\eta=\sigma=p$.
\end{proof}
\begin{cor}\label{c22}
Let $b^+ \not \equiv 0$ and $r \in (p,p^*)$. Then the problem \begin{equation}
-div \left(a(|\nabla u|^p)|\nabla u|^{p-2}\nabla u\right) = b(x)|u|^{r-2}u, \quad u \in W_0^{1,p}(\Omega),
\end{equation}
has a positive ground state level.
\end{cor}
\begin{proof}
It follows from the previous corollary with $r_1=r_2=r$, $b_1=b^+$ and $b_2=b^-$.\\
\end{proof}
\medskip
Next we deal with the $(p,q)$-Laplacian problem
\begin{equation}\label{pqr}
-\Delta_pu- \Delta_q u=\lambda |u|^{\theta-2}u+b(x)|u|^{r-2}u, \quad u\in W_0^{1,p}(\Omega),
\end{equation}
where $1<q<p$, $\lambda \in \mathbb{R}$, and $1<\theta,r<p^*$.
If $r<q$ then we set
\begin{equation*}
\lambda^*=\inf\left\{\alpha(u): u\in X, \int b|u|^r>0\right\},
\end{equation*}
where
\begin{equation*}
\alpha(u)=\frac{\int |\nabla u|^p}{\int|u|^p}+\frac{q-r}{p-q}\left(\frac{p-q}{p-r}\right)^{\frac{p-r}{q-r}}\frac{\left( \int |\nabla u|^q\right)^{\frac{p-r}{q-r}}}{\int |u|^p\left(\int b|u|^r\right)^{\frac{p-q}{q-r}}}
\end{equation*}
and, if $r>p$,
\begin{equation*}
\lambda^{**}=\inf\left\{\beta(u): u\in X, \int b|u|^r>0 \right\},
\end{equation*}
where
\begin{equation*}
\beta(u)=\frac{\int |\nabla u|^q}{\int|u|^q}+\frac{r-p}{r-q}\left(\frac{p-q}{r-q}\right)^{\frac{p-q}{r-p}}\frac{\left(\int |\nabla u|^p\right)^{\frac{r-q}{r-p}}}{ \int |u|^q\left(\int b|u|^r\right)^{\frac{p-q}{r-p}}}.
\end{equation*}
Both $\alpha(u)$ and $\beta(u)$ can be obtained as the unique solution, with respect to the variable $(t,\lambda)$, of the equations $\varphi_u'(t)=\varphi_u''(t)=0$. They are called nonlinear Rayleigh's quotients and $\lambda^*,\lambda^{**}$ are called extremal parameters (see \cite{I1}).\\
\begin{cor}
Let $b^+ \not \equiv 0$ and $1<r<q<p=\theta$.
\begin{enumerate}
\item If $\lambda\le \lambda_1(p)$ then \eqref{pqr} has a ground state solution at a negative level.
\item If $\lambda_1(p)<\lambda<\lambda^*$ then \eqref{pqr} has a ground state solution at a negative level, and a second solution.
\end{enumerate}
\end{cor}
\begin{proof}
The energy functional is given by
\begin{equation*}
\Phi(u)=\frac{1}{p}\left(\|u\|^p-\lambda \|u\|_p^p\right)+\frac{1}{q}\|\nabla u\|_q^q-\frac{1}{r}\int b|u|^r,
\end{equation*}
so that
$J(u)=\| u\|^p-\lambda \|u\|_p^p+ \|\nabla u\|_q^q-\int_\Omega b|u|^r$.
Let us first show that $\Phi$ is coercive if $\lambda \leq \lambda_1(p)$. This is clear if $\lambda < \lambda_1(p)$. Let $\lambda = \lambda_1(p)$ and $(u_n) \subset X$ with $\|u_n\| \to \infty$. If $\Phi(u_n)$ is bounded from above then we can assume that $v_n:=\frac{u_n}{\|u_n\|} \rightharpoonup v_0$ and, since $p>q>r$, we deduce that
$$\| v_n\|^p-\lambda_1(p) \|v_n\|_p^p \to 0 \quad \mbox{and} \quad \|\nabla v_n\|_q \to 0.$$
The second assertion yields $v_0=0$, whereas the first one shows that $v_n \to 0$ in $X$, a contradiction. Therefore $\Phi$ is coercive and has a negative global minimum.
Let us take now $\lambda_1(p)<\lambda<\lambda^*$.
We apply Corollary \ref{cc} with
$$P(u)=\|\nabla u\|_q^q, \quad \Upsilon(u)=-\int_\Omega b(x)|u|^r, \quad \mbox{and} \quad \Gamma(u)=\| u\|^p-\lambda \|u\|_p^p.$$
Hence $$Y_1=\left\{u\in X: \int_\Omega b(x)|u|^r>0 \right\} \quad \mbox{and} \quad Y_2=\left\{u\in X: \| u\|^p-\lambda \|u\|_p^p<0\right\}.$$
We claim that $P(u)\geq C\|u\|^q$ holds along any minimizing sequence for $c$. To this end, we use \cite[Lemma 9]{T} and Poincar\'e's inequality, which imply the desired inequality in $X_k:=\left\{u\in X: \| u\|^p- k \|u\|_p^p<0\right\}$, for any $k>0$. Since $c<0$ it is enough to show that for a given $\varepsilon>0$ we have $\{u \in \mathcal{N}: \Phi(u) \leq -\varepsilon\} \subset X_k$ for some $k>0$. On the contrary, we find a sequence $(u_n) \subset \mathcal{N}$ such that $\| u_n\|^p-n \|u_n\|_p^p\ge 0$ and $\Phi(u_n) \leq -\varepsilon$ for every $n$. Thus $\|u_n\|_p^p \leq \frac{1}{n}\|u_n\|^p$ and consequently
$$\left(1-\frac{\lambda}{n}\right) \|u_n\|^p\leq \| u_n\|^p-\lambda \|u_n\|_p^p<\int_\Omega b|u_n|^r\leq C\|u_n\|^r.$$
Since $p>r$ we deduce that $(u_n)$ is bounded, so we may assume that $u_n \rightharpoonup u$ in $X$. But $\|u_n\|_p^p \leq \frac{1}{n}\|u_n\|^p$ implies that $u=0$, which contradicts $\Phi(u)\leq \liminf \Phi(u_n) \leq -\varepsilon$.
Therefore the claim is proved and the inequality $P(u)\geq C\|u\|^q$ also holds in $Y_2$. Note also that $J$ satisfies (HJ) since $\Gamma$ does so.
Finally, (H5) holds since $\lambda<\lambda^*$, so Corollary \ref{cc} yields the desired conclusion.
\end{proof}
\begin{cor}
Let $b^+ \not \equiv 0$ and $1<q=\theta<p<r<p^*$.
\begin{enumerate}
\item If $\lambda\le \lambda_1(q)$ then \eqref{pqr} has a ground state solution at a positive level.
\item If $\lambda_1(q)<\lambda<\lambda^{**}$ then \eqref{pqr} has a ground state solution at a negative level, and a second solution.
\end{enumerate}
\end{cor}
\begin{proof}
We consider the functional
\begin{equation*}
\Phi(u)=\frac{1}{p}\|u\|^p+\frac{1}{q}\left(\| \nabla u\|_q^q-\lambda \|u\|_q^q\right)-\frac{1}{r}\int_\Omega b|u|^r,
\end{equation*}
and the open cones $$Y_1=\left\{u \in X: \| \nabla u\|_q^q-\lambda \|u\|_q^q<0\right\}\quad
\text{ and }
\quad Y_2=\left\{u\in X: \int_\Omega b(x)|u|^r>0 \right\}.$$
Note that
$
J(u)=\|u\|^p+\| \nabla u\|_q^q-\lambda \|u\|_q^q-\int_\Omega b|u|^r
$
and
$$\Phi(u)=\frac{r-p}{rp}\|u\|^p +\frac{r-q}{rq}\left(\| \nabla u\|_q^q-\lambda \|u\|_q^q\right) \geq C_1\|u\|^p -C_2\|u\|^q, \ \forall u \in \mathcal{N},$$
i.e. $\Phi$ is coercive on $\mathcal{N}$.\\
For $\lambda \leq \lambda_1(q)$ we apply Corollary \ref{c1} with $Y=Y_2$. Indeed, note that $\Phi=I_0-I$ with
$$I_0(u)=\frac{1}{p}\|u\|^p+\frac{1}{q}\left(\| \nabla u\|_q^q-\lambda \|u\|_q^q\right)\quad
\text{and}
\quad I(u)=\frac{1}{r}\int b|u|^r.$$
Since $\lambda \leq \lambda_1(q)$ we have $I_0'(u)u \geq \|u\|^p$ for all $u \in X$.
Thus conditions (1)-(4) of Corollary \ref{c1} clearly hold. Moreover, $I'(u)u \leq 0$ for $u \in X \setminus Y_2$,
so $c_2:=\displaystyle \inf_{\mathcal{N}\cap Y_2}\Phi$ is the ground state level of $\Phi$.\\
Finally, one may easily see that $$P(u)=\|u\|^p, \quad \Upsilon(u)=\Upsilon_\lambda(u)=\| \nabla u\|_q^q-\lambda \|u\|_q^q \quad \mbox{ and } \quad \Gamma(u)=-\int_\Omega b|u|^{r}$$
satisfy the conditions of Corollary \ref{cc} for any $\lambda_1(q)<\lambda<\lambda^{**}$. Indeed, in this case $Y_1,Y_2$ are nonempty since $\lambda>\lambda_1(q)$ and $b^+ \not \equiv 0$, respectively. Moreover,
(H5) is satisfied since $\lambda<\lambda^{**}$.
Thus $c_1:=\displaystyle \inf_{\mathcal{N}\cap Y_1}\Phi$ is the ground state level of $\Phi$, which has a second critical point that belongs to $Y_2$.
\end{proof}
\medskip
\subsection{Problems involving a $(p,q)$-Laplacian operator with spatial dependence}\strut\\
Let $D,E \subset \Omega$ be two disjoint smooth subdomains with $\overline{D \cup E}=\overline{\Omega}$.
We consider the problem
\begin{equation}\label{mr}
-\chi_D \Delta u - \chi_E \Delta_p u=b(x)|u|^{r-2} u, \quad u \in X,
\end{equation}
where $X=\{u \in H_0^1(\Omega): \nabla u \in L^p(E)\}$ and $\chi_D$, $\chi_E$ denote the characteristic functions of $D$ and $E$.
This problem has been investigated in \cite{MR} with $b \equiv \lambda>0$ and $2<r<p$.
We shall deal here with the case $2<p<r<2^*$.
\begin{cor}
Let $b^+ \not \equiv 0$ and $2<p<r<2^*$. Then \eqref{mr} has a ground state solution at a positive level.
\end{cor}
\begin{proof}
We set $\|u\|=\|\nabla u\|_{2,D}+\|\nabla u\|_{p,E}$ and observe that $X$ equipped with this norm is a reflexive Banach space. Moreover, $\|\cdot \|$ is equivalent to the norm given by $\|\nabla u\|_{2,\Omega}+\|\nabla u\|_{p,E}$, see \cite[Lemma 2.1]{MR}.
The energy functional is given by
$$\Phi(u)=\frac{1}{2}\int_D |\nabla u|^2 +\frac{1}{p}\int_E |\nabla u|^p - \frac{1}{r}\int_\Omega b(x)|u|^r,$$
for $u \in X$.
Let $Y=\left\{u\in X: \int_\Omega b(x)|u|^r>0 \right\}$.
The condition $r>p>2$ clearly implies (H1) for any $u \in Y$.
Note also that if $p<\sigma<r$ then
$$\frac{\Phi(tu)}{t^\sigma} =-\frac{t^{r-\sigma}}{r}\int_\Omega b(x)|u|^r+o(1)\to -\infty$$ as $t \to \infty$, uniformly for $u$ on weakly compact subsets of $Y$.
Moreover,
$\Phi=I_0-I$ with
$I_0(u)=\frac{1}{2}\int_D |\nabla u|^2 +\frac{1}{p}\int_E |\nabla u|^p$ and $I(u)=\frac{1}{r}\int_\Omega b(x)|u|^r$, which is weakly continuous since $r<2^*$. If $u \in S$ then, for $t>0$ large enough, we have
$$I_0(tu)\geq \frac{t^2}{2} \left(\|\nabla u\|_{2,D}^2+\|\nabla u\|_{p,E}^p\right)\geq Ct^2\|u\|^p=Ct^2$$
where we have used the inequality
$ (x^2+y^p)^{\frac{1}{p}}\geq C(x+y)$, which holds for $x,y \in [0,1]$ and some $C>0$.
Finally, it is clear that $J>0$ on $\partial Y$ and since $\|\cdot \|$ is equivalent to $\|\nabla u\|_2+\|\nabla u\|_{p,E}$, we see that $J>0$
on $B(0,R) \setminus \{0\}$ if $R>0$ is small enough. Lemma \ref{l1} and Theorem \ref{thm2} yield the desired conclusion.
\end{proof}
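The elementary inequality invoked at the end of the proof can be checked directly; a short verification (assuming $p>2$, as in the statement):

```latex
% For x, y in [0,1] and p > 2 we have x^2 >= x^p, hence, by the
% convexity of s -> s^p (power mean inequality),
\[
x^2 + y^p \;\ge\; x^p + y^p \;\ge\; 2^{1-p}\,(x+y)^p ,
\]
% so that
\[
\left(x^2 + y^p\right)^{\frac{1}{p}} \;\ge\; 2^{\frac{1-p}{p}}\,(x+y),
\]
% i.e. the inequality holds with C = 2^{(1-p)/p}.
```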
\medskip
\subsection{Kirchhoff problems}\strut\\
Let us deal now with the problem
\begin{equation}\label{pk}
-M\left(\int_{\Omega} |\nabla u|^2\right) \Delta u =g(x,u), \quad u \in H_0^1(\Omega).
\end{equation}
Throughout this subsection $M:[0,\infty)\rightarrow
[0,\infty)$ is an increasing ${C}^{1}$ function such that
$M(0):=m_0>0$ and $t\mapsto\displaystyle\frac{M(t)}{t}$ is decreasing in $(0,\infty)$.
We set $\hat{M}(t):=\displaystyle\int_{0}^{t}M(s)\ ds$ for $t \ge 0$.
\begin{cor}\label{c3}
Assume that $g(x,t)=0$ for every $x \in \Omega \setminus \Omega'$, where $\Omega'$ is an open subset of $\Omega$. Moreover, assume that:
\begin{enumerate}
\item $\displaystyle \lim_{t \rightarrow 0} \frac{g(x,t)}{t}=0$ uniformly for $x \in \Omega'$.
\item $\displaystyle \lim_{t \rightarrow \infty} \frac{G(x,t)}{t^4}=\infty $ uniformly for $x \in \Omega'$.
\item There exist $C>0$ and $r \in (4,6)$ such that $|g(x,t)|\le C (1+|t|^{r-1})$ for any $x \in \Omega'$ and $t \in \mathbb{R}$.
\item For any $x \in \Omega'$ the map $t\mapsto \frac{g(x,t)}{t^{3}}$ is increasing.
\end{enumerate}
Then \eqref{pk} has a positive ground state level.
\end{cor}
\begin{proof}
The energy functional is given by $\Phi=I_0-I$, where
$$I_0(u):=\frac{1}{2}\hat{M}\left(\|u\|^2\right) \quad
\text{and}
\quad I(u):=\int_\Omega G(x,u),$$
for $u \in X=H_0^1(\Omega)$.
Note also that $$I_0'(u)u=M(\|u\|^2)\|u\|^2 \quad
\text{and}
\quad I'(u)u= \int_\Omega g(x,u)u.$$ Since $\hat{M}$ is convex, we see that $I_0$ and $u \mapsto I_0'(u)u$ are weakly lower semicontinuous, whereas (3) implies that $I'$ is strongly continuous. Since $M$ is increasing we have
$I_0'(u)u\geq m_0\|u\|^2$ for every $u \in X$.
Let $Y:=\{u \in X: u \not \equiv 0 \text{ in } \Omega'\}$.
Note that $I(u)=I'(u)u=0$ for every $u \in X \setminus Y$.
Since $\frac{M(t)}{t}$ is decreasing we find that $M(t)\leq C(1+t)$ for some $C>0$ and all $t\geq 0$. It follows that
$\frac{I_0(tu)}{t^4}\leq C_1\|u\|^4+C_2t^{-2}\|u\|^2$, while
$\frac{I(tu)}{t^4}=\int_\Omega \frac{G(x,tu)}{t^4} \to \infty$
as $t \to \infty$, uniformly on weakly compact subsets of $Y$. Since $M$ and $t\mapsto \frac{g(x,t)}{t^{3}}$ are increasing, the maps $t \mapsto \frac{I_0'(tu)u}{t^{\sigma-1}}$ and $t \mapsto \frac{I'(tu)u}{t^{\sigma-1}}$ are increasing.
Finally, arguing as in the proof of \cite[Corollary 2.4]{FRQ} one can show that $\liminf_{u \to 0} \frac{\Phi'(u)u}{\|u\|^2}>0$.
Corollary \ref{c1} and Remark \ref{rc1} yield the desired conclusion.
\end{proof}
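The linear bound $M(t)\leq C(1+t)$ used in the proof follows at once from the hypotheses on $M$; a sketch:

```latex
% M is increasing and t -> M(t)/t is decreasing on (0,\infty), so:
% for t >= 1:      M(t)/t <= M(1)/1, i.e. M(t) <= M(1) t;
% for 0 <= t <= 1: M(t) <= M(1) by monotonicity.
\[
M(t) \;\le\; M(1)\,(1+t) \qquad \text{for all } t \ge 0,
\]
% so one may take C = M(1).
```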
The next results are similar to Corollaries \ref{c21} and \ref{c22}, so we omit their proofs:
\begin{cor}
Let $b_1,b_2 \in L^{\infty}(\Omega)$ with $b_1,b_2 \geq 0$ and $b_1b_2 \equiv 0$, and $r_1 \in (4,6)$, $r_2 \in (2,6)$ with $r_1 \geq r_2$. Then the problem \begin{equation}
-M\left(\int_{\Omega} |\nabla u|^2\right) \Delta u = b_1(x)|u|^{r_1-2}u - b_2(x)|u|^{r_2-2}u, \quad u \in H_0^1(\Omega),
\end{equation}
has a positive ground state level.
\end{cor}
\begin{cor}
Let $b^+ \not \equiv 0$ and $r \in (4,6)$. Then the problem \begin{equation}
-M\left(\int_{\Omega} |\nabla u|^2\right) \Delta u =b(x)|u|^{r-2}u, \quad u \in H_0^1(\Omega),
\end{equation}
has a positive ground state level.
\end{cor}
\medskip
\section{Applications to systems of PDEs}
\medskip
This final section is devoted to the applications of our results to some elliptic systems. In this regard, let us mention that the Nehari manifold and the fibering method have been extensively exploited. We refer to \cite{AH,BM,BW1,SM,Ws} for results on gradient type systems, and to \cite{BMR,E1,E2,S} for strongly coupled systems that can be reduced to a fourth-order equation.
Recall that $b \in L^{\infty}(\Omega)$, $g:\Omega \times \mathbb{R} \to \mathbb{R}$ and $f:\mathbb{R} \to \mathbb{R}$ are continuous, and $G(x,t):=\displaystyle\int_{0}^{t}g(x,s)\ ds$ and $F(t):=\displaystyle\int_{0}^{t}f(s)\ ds$ for $t \in \mathbb{R}$.\\
\subsection{A semilinear gradient system}\label{kayeexample1} \strut\\
Consider the system of equations
\begin{equation}\label{pk3}
\left\{
\begin{array}{ll}
-\Delta u = \lambda v+b(x)|u|^{r-2}u \
& \mbox{in} \ \ \Omega, \ \ \\
-\Delta v = \lambda u-|v|^{q-2}v \
&\mbox{in} \ \ \Omega, \ \ \\
u = v = 0 \ &\mbox{on} \ \
\partial\Omega,
\end{array}
\right.
\end{equation}
where $\lambda \in \mathbb{R}$ and $2<q<r<2^*$.
We deal with $X:=H_0^1(\Omega)\times H_0^1(\Omega)$, endowed with the norm $\|(u,v)\|=\left(\|\nabla u\|_{2}^2+\|\nabla v\|_{2}^2\right)^{\frac{1}{2}}$. We also set $\|(u,v)\|_2=\left(\| u\|_{2}^2+\| v\|_{2}^2\right)^{\frac{1}{2}}$.
Let us set
\begin{equation*}
\lambda^*:=\inf\left\{\lambda(u,v): \int_\Omega b|u|^r>0,\ \int_\Omega uv>0 \right\},
\end{equation*}
where
\begin{equation*}
\lambda(u,v):=\frac{1}{2 \int_\Omega uv} \left(\|(u,v)\|^2+ C_{r,q}\frac{\|v\|_q^{q\frac{r-2}{r-q}}}{\left(\int_\Omega b|u|^r \right)^{\frac{q-2}{r-q}}}\right),
\mbox{ and }
C_{r,q}:=\frac{r-q}{r-2}\left(\frac{q-2}{r-2}\right)^{\frac{q-2}{r-q}}.
\end{equation*}
It is not hard to show that $\lambda^*$ is achieved, and consequently $\lambda^*>\lambda_1$.
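The constant $C_{r,q}$ arises from maximizing the fibering map $t\mapsto t^{q-2}P - t^{r-2}B$ for $P,B>0$; a sketch of the computation (with $P=\|v\|_q^q$ and $B=\int_\Omega b|u|^r$):

```latex
% g(t) = t^{q-2} P - t^{r-2} B, with 2 < q < r and P, B > 0.
% g'(t) = 0 at t_* = ((q-2)P / ((r-2)B))^{1/(r-q)}, where g attains its maximum:
\[
\max_{t>0} g(t)
= t_*^{q-2}\, P \left(1-\frac{q-2}{r-2}\right)
= \frac{r-q}{r-2}\left(\frac{q-2}{r-2}\right)^{\frac{q-2}{r-q}}
\frac{P^{\frac{r-2}{r-q}}}{B^{\frac{q-2}{r-q}}}
= C_{r,q}\,\frac{P^{\frac{r-2}{r-q}}}{B^{\frac{q-2}{r-q}}}.
\]
% With P = \|v\|_q^q this gives the second term in \lambda(u,v).
```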
\begin{cor}\label{ws}\strut
\begin{enumerate}
\item If $\lambda<\lambda_1$ then \eqref{pk3} has a ground state solution (at a positive level).
\item If $\lambda\in(\lambda_1,\lambda^*)$ then \eqref{pk3} has a ground state solution, which is a local minimizer (at a negative level), and a second critical point.
\end{enumerate}
\end{cor}
\begin{proof}
The energy functional associated to \eqref{pk3} is given by
$$
\Phi(u,v)= \frac{1}{2}\|(u,v)\|^2-\lambda\int_\Omega uv
+\frac{1}{q}\|v\|_q^q-\frac{1}{r}\int_\Omega b|u|^r, \quad \mbox{ for } (u,v)\in X.
$$
Thus
$$J(u,v)=\|(u,v)\|^2-2\lambda \int_\Omega uv
+\|v\|_q^q-\int_\Omega b|u|^r.$$
We introduce the cones
$$Y_1=\left\{(u,v)\in X:\|(u,v)\|^2-2\lambda\int_\Omega uv<0\right\}\quad \mbox{and} \quad Y_2=\left\{(u,v)\in X: \int_\Omega b|u|^r> 0\right\}.$$
\begin{enumerate}
\item Let $\lambda<\lambda_1$.
Since $2\int_\Omega uv \leq \|u\|_2^2+\|v\|_2^2$ we see that
there exists $C>0$ such that
\begin{equation}\label{K0}
\|(u,v)\|^2-2\lambda\int_\Omega uv\ge C\|(u,v)\|^2,\ \quad \forall (u,v)\in X.
\end{equation}
Thus Corollary \ref{c1} applies with $Y=Y_2$,
$$I_0(u):= \frac{1}{2}\|(u,v)\|^2-\lambda\int_\Omega uv
+\frac{1}{q}\|v\|_q^q\quad \mbox{and} \quad I(u):=\frac{1}{r}\int_\Omega b|u|^r.$$
\item Let us take now $\lambda_1<\lambda<\lambda^*$. It follows that $Y_1$ is nonempty since $(\varphi_1,\varphi_1) \in Y_1$.
We claim that there exists $C>0$ such that $$\|v\|_q^q \geq C\|(u,v)\|^q \quad \forall (u,v) \in \overline{Y}_1.$$ Otherwise we find a sequence $(u_n,v_n) \subset \overline{Y}_1$ such that
$\|v_n\|_q^q \leq \frac{1}{n}\|(u_n,v_n)\|^q$. We may assume that $(\bar{u}_n,\bar{v}_n):=\frac{(u_n,v_n)}{\|(u_n,v_n)\|} \rightharpoonup (u,v)$ in $X$.
It follows that $\|\bar{v}_n\|_q \to 0$ i.e. $v=0$ and from $(u_n,v_n) \subset \overline{Y}_1$ we infer that
$\|(u_n,v_n)\|^2 \le 2\lambda \int u_nv_n,$
i.e. $1 \le 2\lambda \int \bar{u}_n\bar{v}_n \to 0$, which is a contradiction.
Thus the claim is proved, and since $\lambda<\lambda^*$, we see that (H5) holds. Finally, it is clear that $J$ satisfies (HJ). So Corollary \ref{cc} applies with
$$P(u,v)=\|v\|_q^q, \quad \Upsilon(u,v)=\|(u,v)\|^2-2\lambda\int_\Omega uv, \quad \mbox{and} \quad \Gamma(u,v)=-\int_\Omega b|u|^r.$$
\end{enumerate}
\end{proof}
\medskip
\subsection{A quasilinear gradient system}\strut \\
The previous results on \eqref{pk3} can be easily extended to the quasilinear system
\begin{equation}\label{sp}
\left\{
\begin{array}{ll}
-\Delta_p u = \mu |u|^{p-2}u+\lambda \alpha|u|^{\alpha-2}u|v|^{\beta}+b(x)|u|^{r-2}u \
& \mbox{in} \ \ \Omega, \ \ \\
-\Delta_p v = \delta |v|^{p-2}v +\lambda \beta |u|^{\alpha}|v|^{\beta-2}v-|v|^{q-2}v \
&\mbox{in} \ \ \Omega, \ \ \\
u = v = 0 \ &\mbox{on} \ \
\partial\Omega,
\end{array}
\right.
\end{equation}
where $\mu,\delta \in \mathbb{R}$, $\lambda>0$, $1<p<q<r<p^*$, and $\alpha,\beta>1$ with $\alpha+\beta=p$.
The energy functional is given by
$$
\Phi(u,v)= \frac{1}{p}\int_\Omega\left( |\nabla u|^p-\mu |u|^p+|\nabla v|^p-\delta |v|^p\right)-\lambda\int_\Omega |u|^{\alpha}|v|^{\beta}
+\frac{1}{q}\|v\|_q^q-\frac{1}{r}\int_\Omega b|u|^r,
$$
for $(u,v)\in X=W_0^{1,p}(\Omega) \times W_0^{1,p}(\Omega)$.
We set $\|(u,v)\|=\left(\|\nabla u\|_{p}^p+\|\nabla v\|_{p}^p\right)^{\frac{1}{p}}$ and $\|(u,v)\|_p=\left(\| u\|_{p}^p+\| v\|_{p}^p\right)^{\frac{1}{p}}$.
Let
\begin{equation*}
\lambda_1^*(\mu,\delta)=\inf\left\{\frac{\|\nabla u\|_p^p+\|\nabla v\|_p^p-\mu \|u\|_p^p-\delta\|v\|_p^p}{p\int_\Omega |u|^{\alpha}|v|^{\beta}}: (u,v)\in X, \int_\Omega |u|^{\alpha}|v|^{\beta}>0 \right\},
\end{equation*}
and
\begin{equation*}
\lambda^*(\mu,\delta):=\inf\left\{\lambda(u,v,\mu,\delta): \int_\Omega b|u|^r>0,\ \int_\Omega |u|^{\alpha}|v|^{\beta}>0 \right\},
\end{equation*}
where
$$
\lambda(u,v,\mu,\delta):=\frac{1}{\int_\Omega |u|^{\alpha}|v|^{\beta}} \left(\|\nabla u\|_p^p+\|\nabla v\|_p^p-\mu \|u\|_p^p-\delta\|v\|_p^p+ C_{p,r,q}\frac{\|v\|_q^{q\frac{r-p}{r-q}}}{\left(\int_\Omega b|u|^r \right)^{\frac{q-p}{r-q}}}\right),
$$ and
$C_{p,r,q}:=\frac{r-q}{r-p}\left(\frac{q-p}{r-p}\right)^{\frac{q-p}{r-q}}$.
It is not hard to show that $\lambda_1^*(\mu,\delta)$ and $\lambda^*(\mu,\delta)$ are achieved, so that $\lambda^*(\mu,\delta)>\lambda_1^*(\mu,\delta)$.
In addition, $\frac{\lambda_1(p)-\max(\mu,\delta)}{\max(\alpha,\beta)}\leq \lambda_1^*(\mu,\delta)\leq \frac{1}{p}(2\lambda_1(p)-\mu-\delta)$. Note that if $p=2$, $\mu=\delta$, and $\alpha=\beta=1$ then $\lambda_1^*(\mu,\delta)=\lambda_1-\mu$.
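The upper bound for $\lambda_1^*(\mu,\delta)$ stated above may be obtained, for instance, by testing the quotient with $u=v=\varphi_1$, the first eigenfunction of $-\Delta_p$, normalized so that $\|\varphi_1\|_p=1$:

```latex
% With u = v = \varphi_1 and \alpha + \beta = p:
% numerator:   2\|\nabla\varphi_1\|_p^p - \mu - \delta = 2\lambda_1(p) - \mu - \delta,
% denominator: p \int_\Omega |\varphi_1|^{\alpha+\beta} = p\,\|\varphi_1\|_p^p = p,
% whence
\[
\lambda_1^*(\mu,\delta) \;\le\; \frac{1}{p}\bigl(2\lambda_1(p) - \mu - \delta\bigr).
\]
```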
Let
$$Y_1=\left\{(u,v)\in X:\|\nabla u\|_p^p+\|\nabla v\|_p^p-\mu \|u\|_p^p-\delta\|v\|_p^p-p\lambda\int_\Omega |u|^{\alpha}|v|^{\beta}<0\right\}$$ and $$Y_2=\left\{(u,v)\in X: \int_\Omega b|u|^r> 0\right\}.$$
For $\mu,\delta<\lambda_1(p)$ and $0<\lambda<\lambda_1^*(\mu,\delta)$ there exists $C>0$ such that
\begin{equation}
\|\nabla u\|_p^p+\|\nabla v\|_p^p-\mu \|u\|_p^p-\delta\|v\|_p^p-p\lambda\int_\Omega |u|^{\alpha}|v|^{\beta}\ge C\|(u,v)\|^p
\end{equation}
for all $(u,v)\in X$ such that $\int_\Omega |u|^{\alpha}|v|^{\beta}>0$.
Indeed, otherwise there is a sequence $(u_n,v_n)$ such that $\|(u_n,v_n)\|=1$, $\int |u_n|^{\alpha}|v_n|^{\beta}>0$ and
$$\limsup \left(\|\nabla u_n\|_p^p+\|\nabla v_n\|_p^p-\mu \|u_n\|_p^p-\delta\|v_n\|_p^p-p\lambda\int |u_n|^{\alpha}|v_n|^{\beta}\right)\le 0.$$
We can assume that $(u_n,v_n) \rightharpoonup (u,v)$ in $X$, so that
$\|\nabla u\|_p^p+\|\nabla v\|_p^p-\mu \|u\|_p^p-\delta\|v\|_p^p-p\lambda \int |u|^{\alpha}|v|^{\beta}\le 0$
and $(u,v)\neq (0,0)$. Thus
$$0<\|\nabla u\|_p^p+\|\nabla v\|_p^p-\mu \|u\|_p^p-\delta\|v\|_p^p\le p\lambda \int_\Omega |u|^{\alpha}|v|^{\beta},$$
so that $\int_\Omega |u|^{\alpha}|v|^{\beta}>0$, which implies $\lambda \ge \lambda_1^*(\mu,\delta)$, a contradiction.
Now, if $\lambda_1^*(\mu,\delta)<\lambda<\lambda^*(\mu,\delta)$ then $Y_1$ is nonempty and arguing as in the proof of Corollary \ref{ws}(2) one can show that for $\mu<\lambda_1(p)$
there exists $C>0$ such that $\|v\|_q^q \geq C\|(u,v)\|^q$ for any $(u,v) \in Y_1$.
By Corollary \ref{cc} we obtain the following result:
\begin{cor}\strut
\begin{enumerate}
\item If $\mu,\delta<\lambda_1(p)$ and $\lambda<\lambda_1^*(\mu,\delta)$ then \eqref{sp} has a ground state solution (at a positive level).
\item If $\mu<\lambda_1(p)$ and $\lambda_1^*(\mu,\delta)<\lambda<\lambda^*(\mu,\delta)$ then \eqref{sp} has a ground state solution, which is a local minimizer (at a negative level), and a second critical point.
\end{enumerate}
\end{cor}
\medskip
\subsection{A strongly coupled system and a fourth-order equation}\label{sh}\strut \\
Let us consider the Hamiltonian type system
$$
\left\{
\begin{array}{ll}
-\Delta u = |v|^{p-2}v \
& \mbox{in} \ \ \Omega, \ \ \\
-\Delta v = g(x,u) \
&\mbox{in} \ \ \Omega, \ \ \\
u = v = 0 \ &\mbox{on} \ \
\partial\Omega,
\end{array}
\right.\leqno{(S)}
$$
with $p > 1$. We shall apply our results to the fourth-order equation derived from $(S)$ by the {\it reduction by inversion} procedure, namely
$$
\left\{
\begin{array}{ll}
\Delta \left(|\Delta u|^{\frac{2-p}{p-1}}\Delta u\right) = g(x,u) \
&\mbox{in} \ \ \Omega, \ \ \\
u = \Delta u = 0 \ &\mbox{on} \ \
\partial\Omega.
\end{array}
\right.\leqno{(E)}
$$
The energy functional for $(E)$ is given by
$$
\Phi(u) = \dfrac{p-1}{p} \displaystyle\int_{\Omega} |\Delta u|^{\frac{p}{p-1}} -
\displaystyle \int_{\Omega}G(x,u),
$$
which is a $C^1$ functional on
$X = W^{2, \frac{p}{p-1}}(\Omega)\cap W_{0}^{1,\frac{p}{p-1}}(\Omega)$,
endowed with the norm
$ \|u\| = \left(\int_{\Omega} |\Delta u|^{\frac{p}{p-1}} \right)^{\frac{p-1}{p}}$.
\begin{cor}\label{ch}
Assume that $g:\Omega \times \mathbb{R} \to \mathbb{R}$ is a continuous map such that $g(x,t)\equiv 0$ for every $x \in \Omega \setminus \Omega'$, where $\Omega'$ is an open subset of $\Omega$. Moreover, assume that
\begin{enumerate}
\item $\displaystyle \lim_{t \rightarrow 0} \frac{g(x,t)}{|t|^{\frac{1}{p-1}}}=0$ uniformly for $x \in \Omega'$.
\item For every $x \in \Omega'$ the map $t \mapsto \frac{g(x,t)}{|t|^{\frac{1}{p-1}}}$
is increasing on $\mathbb{R} \setminus \{0\}$.
\item $\displaystyle \lim_{|t| \to \infty}\frac{G(x,t)}{|t|^{\frac{p}{p-1}}}=\infty$ uniformly for $x \in \Omega'$.
\item If $N \geq 3$ and $p > \frac{N}{N -2}$ then there exist $C > 0$ and $q \in ( \frac{p}{p-1}, \sigma)$ such that
$$|g(x,t)| \leq C\left(1+|t|^{q-1}\right) \quad \forall \, t \in \mathbb{R},$$
where $\sigma = \frac{Np}{N(p-1) - 2p}$.
\end{enumerate}
Then $(E)$ has a positive ground state level.
\end{cor}
\begin{proof}
The energy functional is given by $\Phi=I_0-I$, where
$$I_0(u)=\dfrac{p-1}{p} \displaystyle \|u\|^{\frac{p}{p-1}} \quad \mbox{and} \quad I(u)=\displaystyle \int_{\Omega}G(x,u) \quad \mbox{for } u \in X.$$
Let $\sigma= \frac{Np}{N(p-1)-2p}$ if $p>\frac{N}{N-2}$ and $\sigma=\infty$ if $p=\frac{N}{N-2}$.
By (4) and the compact embeddings (see e.g. \cite{S})
\begin{equation}\label{x} \begin{cases} X \subset \mathcal{C}(\overline{\Omega}), & \text{ if }N=2 \text{ and } p>1 \text{ or } N\geq 3 \text{ and } 1<p<\frac{N}{N-2}, \\ X \subset L^r(\Omega), & \text{ if } p \geq \frac{N}{N-2} \text{ and } 1\leq r < \sigma,
\end{cases}
\end{equation}
one may show that $I'$ is strongly continuous. Proceeding as in the proof of Corollary \ref{c2} one may check that Corollary \ref{c1} applies with $Y:=\{u \in X: u \not \equiv 0 \text{ in } \Omega'\}$.
\end{proof}
\begin{cor}
Let $g(x,u)=b(x)|u|^{r-2}u$ with $b^+ \not \equiv 0$ and $r>\frac{p}{p-1}$ such that
$\frac{1}{p}+\frac{1}{r}>\frac{N-2}{N}$.
Then $(E)$ has a positive ground state level.
\end{cor}
\begin{proof}
Arguing as in the previous proof one may show that Corollary \ref{c1} applies with $Y:=\{u \in X: \int_\Omega b(x)|u|^r>0\}$.
\end{proof}
In the next result we deal with $\Lambda_1:=\displaystyle \inf_{u \in X \setminus \{0\}} \frac{\int_{\Omega} |\Delta u|^{\frac{p}{p-1}}}{\int_{\Omega} |u|^{\frac{p}{p-1}}}$, the first eigenvalue of the problem
$$\Delta \left(|\Delta u|^{\frac{2-p}{p-1}}\Delta u\right) =\lambda |u|^{\frac{2-p}{p-1}}u \mbox{ in } \Omega, \quad u=\Delta u=0 \mbox{ on } \partial \Omega.$$
It is known that $\Lambda_1$ is simple \cite{DO}. We denote by $\psi_1$ a positive eigenfunction associated to $\Lambda_1$.
Let us set $$\Lambda^*:=\displaystyle \inf \left\{\frac{\int_{\Omega} |\Delta u|^{\frac{p}{p-1}}}{\int_{\Omega} |u|^{\frac{p}{p-1}}}: u \in X \setminus \{0\}, \int_\Omega b|u|^r \geq 0\right\}.$$
It is straightforward that $\Lambda^*>\Lambda_1$ if $\int_\Omega b\psi_1^r<0$.
\begin{cor}
Let $g(x,u)=\lambda |u|^{\frac{2-p}{p-1}}u+ b(x)|u|^{r-2}u$ with $b^+ \not \equiv 0$ and $r>\frac{p}{p-1}$ such that
$\frac{1}{p}+\frac{1}{r}>\frac{N-2}{N}$.
\begin{enumerate}
\item If $\lambda<\Lambda_1$ then $(E)$ has a ground state solution at a positive level.
\item If $\int_\Omega b\psi_1^r<0$ and $\Lambda_1<\lambda<\Lambda^*$ then $(E)$ has a ground state solution at a negative level and a second solution.
\end{enumerate}
\end{cor}
\begin{proof}
We have now $\Phi=I_0-I$, where
$$I_0(u) := \dfrac{p-1}{p} \displaystyle\int_{\Omega} \left(|\Delta u|^{\frac{p}{p-1}} -\lambda |u|^{\frac{p}{p-1}}\right) \quad \mbox{ and } \quad I(u):=
\displaystyle \int_{\Omega}b(x)|u|^r.$$
Set $Y_1:=\{u \in X: \int_\Omega b(x)|u|^r>0\}$.
One may easily show that for any $\lambda<\Lambda^*$ there exists $C>0$ such that $I_0'(u)u\geq C\|u\|^{\frac{p}{p-1}}$ for any $u \in \overline{Y}_1$. By the compact embedding \eqref{x} we have that $I'$ is strongly continuous. Note also that $I(u)=I'(u)u=0$ for $u \in \partial Y_1$, and since $r>\frac{p}{p-1}$ we have $I'(u)u=o(\|u\|^{\frac{p}{p-1}})$ as $u \to 0$. Finally, condition (4) of Corollary \ref{c1} holds with $\sigma=\frac{p}{p-1}$. Therefore $c_1:=\inf_{\mathcal{N}\cap Y_1} \Phi$ is positive and achieved by a critical point of $\Phi$.
If $\lambda<\Lambda_1$ then $I_0'(u)u\geq C\|u\|^{\frac{p}{p-1}}$ for any $u \in X$ and since $I'(u)u\leq 0$ for any $u \in X \setminus Y_1$, we infer that $c_1$ is the ground state level of $\Phi$.
Now, if $\int_\Omega b\psi_1^r<0$ and $\Lambda_1<\lambda<\Lambda^*$ then
$Y_2:=\{u \in X: I_0'(u)u<0\}$ is nonempty and there exists $C>0$ such that
$I'(u)u\geq C\|u\|^r$ for any $u \in \overline{Y}_2$. It follows that $\Phi$ is coercive in $Y_2$ and $\Phi \ge 0$ on $\partial Y_2$ and on $\mathcal{N} \setminus Y_2$. Thus $c_2:=\inf_{\mathcal{N}\cap Y_2} \Phi<0$ is the ground state level of $\Phi$ for $\Lambda_1<\lambda<\Lambda^*$.
\end{proof}
\medskip
\section*{Introduction}
\label{cint}
The goal of this paper is to try to understand the information contained in the
successive terms of spectral sequences, from the point of view of
\wwb{\infty,1}categories. Although many of the spectral sequences in common use are
stable, and may be described completely in terms of towers of spectra or filtered
chain complexes, we concentrate here on the unstable version, and more specifically
the homotopy spectral sequence of a (co)simplicial object.
The main reason is that in this paper we are only concerned with the differentials
(and thus the successive finite terms) in spectral sequences \ -- \ and these
are more complicated, and thus more illuminating, in the unstable version.
Moreover, in many cases of interest, the differentials in a stable spectral sequence
are determined by those in an associated unstable spectral sequence (e.g., for
the Adams spectral sequence).
In \cite{CELWhitM}, Cirici, Egas Santander, Livernet, and Whitehouse analyze the
spectral sequence of a filtered complex of $R$-modules, for any commutative
ring $R$, in a similar spirit. However, they do this in the context of model categories,
while the setting of \wwb{\infty,1}categories seems more appropriate for our purposes.
Our goal here is to understand two seemingly contradictory phenomena:
on the one hand, in the successive terms of a spectral sequence we discard extraneous
information, as we see from the fact that a map \w{f:x\sb{\bullet}\to y\sb{\bullet}} of simplicial objects
inducing an isomorphism in the \ww{E\sp{r}}-term necessarily induces an isomorphism in
the \ww{E\sp{r+1}}-term, but not conversely. On the other hand, we need
more (and higher order) information to compute \w{d\sp{r+1}} from the given \w{x\sb{\bullet}}
than we do for \w[.]{d\sp{r}} The reason is that as we proceed in the spectral sequence,
we require less data from the original (co)simplicial space, but gain knowledge of the
abutment.
This suggests that we need two types of localizations in order to analyze
the successive terms of the spectral sequence of a (co)simplicial object
in an \wwb{\infty,1}category, which may be described as follows\vspace{3 mm}:
The first part of the paper is a study of Cisinski's quasi-category version
of a (co)fibration category, reviewed in Section \ref{cqccf},
and the left Bousfield localization with respect to a set of maps
(in Section \ref{clbl}) or a set of objects (in Section \ref{clocth}).
In Section \ref{crbl} we construct right Bousfield localizations with respect to
a set of objects in a quasi-category equipped with classes of weak equivalences and
either cofibrations or fibrations.
The second part of the paper applies this theory to spectral sequences:
in Section \ref{csssso} we recall from \cite{BMeadS} the description of the homotopy
spectral sequence of a simplicial object \w{x\sb{\bullet}} in an \wwb{\infty,1}category $X$,
with respect to a given homotopy cogroup object $\hat{\fy}$ in $X$, and show how to
reinterpret the construction in terms of chain complexes in spaces,
yielding a convenient diagrammatic description of the differentials
(see \S \ref{sdiagdesc}).
In Section \ref{cssl} we use this description to show that the
\ww{d\sp{r}}-differential in the spectral sequence depends only on the Postnikov
section \w[.]{\Po{r-1}\Omega\sp{p}\operatorname{Map}\sb{X}(\hat{\fy},x\sb{\bullet})} This dependence can be made
more precise using a certain countable collection \w{\mathcal{H}\sp{r}} of finite segments
of simplicial objects \w{H(n,m,\Sigma\sp{p}\hat{\fy})} of length \w{m\leq r-1} (see
\S \ref{dzmnp}). We define the $r$-\emph{stem} for \w{\lra{x\sb{\bullet},\hat{\fy}}} to be the system
\begin{myeq}\label{eqrstem}
\{\Po{m}\operatorname{Map}(H(n,m,\Sigma\sp{p}\hat{\fy}),\tau\sp{\ast}x\sb{\bullet})\}
\sb{H(n,m,\Sigma\sp{p}\hat{\fy})\in\mathcal{H}\sp{r}}~,
\end{myeq}
\noindent and show:
\begin{thma}\label{nthma}
For each \w[,]{r\geq 2} the \ww{E\sp{r}}-term of the spectral sequence
associated to \w{\lra{x\sb{\bullet},\hat{\fy}}} is determined by its \wwb{r-2}stem.
\end{thma}
\noindent See Theorem \ref{tstem} below\vspace{3 mm}.
We then define a pair of left and right Bousfield localizations on the product
\w{Z\sp{r}} of chain complex segments in \w{\Ss\sb{\ast}} corresponding to \w[,]{\mathcal{H}\sp{r}}
yielding the $r$-th Postnikov localization \w[,]{\mathcal{P}\sp{r}:Z\sp{r}\to Z\sp{r}}
and deduce:
\begin{corb}\label{ncorb}
A \ww{\mathcal{P}\sp{r}}-equivalence in \w{Z\sp{r}} induces an
isomorphism of the \ww{E\sp{r+2}}-terms of the associated
spectral sequences.
\end{corb}
\noindent See Corollary \ref{cstempr}\vspace{3 mm}.
In fact, the spectral sequence of \w{x\sb{\bullet}} depends only on the underlying
restricted simplicial object in \w{s\sb{+}X} (forgetting the degeneracies).
We can define the right Bousfield localization of \w{s\sb{+}X} with respect to
\w[,]{\mathcal{H}\sp{r}} and show:
\begin{corc}\label{ncorc}
The \ww{\mathcal{H}\sp{r}}-equivalences in \w{s\sb{+}X} induce \ww{E\sp{r}}-isomorphisms of
the associated spectral sequences.
\end{corc}
\noindent See Corollary \ref{cerloc}\vspace{3 mm}.
Section \ref{cssrfcs} is devoted to a detailed analysis of the
spectral sequence of a cosimplicial object \w{x\sp{\bullet}} in a quasi-category $X$
(which was only sketched in \cite[\S 9]{BMeadS}), again providing a diagrammatic
description of the differentials (see \S \ref{sdiagdes}). This is used in
Section \ref{ccsloc} to describe the cosimplicial version of $n$-stems, the Postnikov
localization \w[,]{\mathcal{P}\sp{r}} and the right Bousfield localization \w[,]{R\sb{H}}
satisfying analogues of Theorem \textbf{A} and Corollaries \textbf{B} and \textbf{C}
above (namely, Theorem \ref{tcstem} and Corollaries \ref{cctempr}
and \ref{thm8.6} below).
\begin{mysubsection}{Notation and conventions}
\label{snac}
The category of sets is denoted by \w[,]{\mbox{\sf Set}} and that of pointed sets by \w[.]{\Set\sb{\ast}}
Similarly, \w{\mbox{\sf Top}} denotes the category of topological spaces, and \w{\Top\sb{\ast}} that
of pointed spaces.
Let $\Delta$ denote the category of non-empty finite ordered sets and order-preserving
maps, so that a functor \w{F:\Delta\op\to\mathcal{C}} is a \emph{strict simplicial object}
in the category $\mathcal{C}$, and the category of such is denoted by \w[.]{\mathcal{C}\sp{\Delta\op}}
Similarly, a functor \w{G:\Delta\to\mathcal{C}} is a \emph{strict cosimplicial object},
and the category of such is denoted by \w[.]{\mathcal{C}\sp{\Delta}}
However, the category \w{\mbox{\sf Set}\sp{\Delta\op}} of simplicial sets, called \emph{spaces},
is denoted simply by $\mathcal{S}$, and that of pointed simplicial sets by
\w[.]{\Ss\sb{\ast}:=\Set\sb{\ast}\sp{\Delta\op}} We denote the category of small categories by \w[,]{\mbox{\sf Cat}}
with \w{B:\mbox{\sf Cat}\to\mathcal{S}} the ordinary \emph{nerve} functor.
The object \w{0<1\dotsc<n} in $\Delta$ is denoted by \w[,]{[\mathbf{n}]}
and for functors \w{G:\Delta\op\to\mathcal{C}} or \w{H:\Delta\to\mathcal{C}} we write
\w{G\sb{n}} for \w{G([\mathbf{n}])} and \w{H\sp{n}} for \w[.]{H([\mathbf{n}])}
However, we shall use the notation \w{\lra{\mathbf{1}}} to refer to the
single arrow category \w{0\to 1} when it is necessary to
distinguish it from the corresponding object of $\Delta$.
We let \w{\sb{+}\!\Delta} denote the subcategory of $\Delta$ with the same objects but only
monic maps (thus representing \emph{restricted} (co)simplicial objects).
For \w[,]{m < n} we denote by \w{\Delta\sb{m, n}} (respectively, \w[)]{\rDell{m,n}}
the full subcategory of \w{\Delta} (respectively, \w[)]{\sb{+}\!\Delta} consisting of the objects
\w{[\mathbf{k}]} with \w[.]{m \leq k\leq n} We abbreviate \w{\Delta\sb{0,n}} to \w{\Dell{n}}
and \w{\rDell{0,n}} to \w[.]{\rDele{n}}
If $\mathcal{C}$ has the necessary (co)limits, the inclusion \w{i\sb{n}:\Dell{n}\hookrightarrow\Delta}
induces \w{i\sb{n}\sp{\ast}:\mathcal{C}\sp{\Delta\op}\to\mathcal{C}\sp{\Dell{n}\sp{\operatorname{op}}}} with
left adjoint \w[,]{i\sb{n}'} and the $n$-\emph{skeleton} functor is
\w[.]{\sk{n}=i\sb{n}'\circ i\sb{n}\sp{\ast}:\mathcal{C}\sp{\Delta\op}\to\mathcal{C}\sp{\Delta\op}}
Similarly, the $n$-\emph{coskeleton} functor
\w{\csk{n}=i\sb{n}''\circ i\sb{n}\sp{\ast}:\mathcal{C}\sp{\Delta\op}\to\mathcal{C}\sp{\Delta\op}} is
defined using the right adjoint \w{i\sb{n}''} of \w[.]{i\sb{n}\sp{\ast}}
Variants of these functors exist for \w[.]{j\sb{n}:\rDele{n}\hookrightarrow\sb{+}\!\Delta}
Note that for \w[,]{\mathcal{C}=\mbox{\sf Set}} \w{\csk{n+1}A} is a model for the $n$-th Postnikov
section of a fibrant simplicial set $A$ (see \cite[VI, \S 3.4]{GJarS}).
The standard $n$-dimensional simplex in $\mathcal{S}$, represented by \w[,]{[\mathbf{n}]} is denoted by
\w[,]{\Deln{n}} its boundary (obtained by omitting the non-degenerate simplex in
\w[)]{\sigma\sb{n}\in\Deln{n}\sb{n}} by \w[,]{\partial\Deln{n}} and the
\wwb{n,k}horn (obtained by further omitting \w[)]{d\sb{k}\sigma\sb{n}}
by \w{\Lambda\sb{k}\sp{n}} (see \cite[I, \S 1]{GJarS}).
A \emph{quasi-category} is a simplicial set $X$ in which, for each \w[,]{0<k< n}
every diagram of the form
\mydiagram[\label{eqqcat}]{
\Lambda\sb{k}\sp{n} \ar[r] \ar[d] & X \\
\Deln{n} \ar@{.>}[ur] &
}
\noindent admits a lifting as indicated (see \cite{JoyQ,JoyQA}).
If $X$ is a quasi-category, we write \w{sX:= X\sp{B(\Delta\op)}} for the category of
\emph{simplicial objects} in $X$, \w{s\sb{+}X := X\sp{B(\rDel\op)}} for the category
of \emph{restricted simplicial objects} in $X$, and
\w[.]{\spnX{m, n}:=X\sp{B(\rDel\op\sb{m, n})}} The truncation
functors \w{sX\to sX\sb{n}} and \w{s\sb{+}X\to\spnX{n}}
(corresponding to \w{\Dell{n}\hookrightarrow\Delta} and \w[)]{\rDele{n}\hookrightarrow\sb{+}\!\Delta}
will be denoted by \w[.]{\tnk{\ast}{n}}
Dually, we write \w{cX:=X\sp{B(\mathbf{\Delta})}} for the category of \emph{cosimplicial objects}
in $X$, \w{c\sp{+}X:= X\sp{B(\sb{+}\!\Delta)}} for that of
\emph{restricted cosimplicial} objects, and
\w[.]{\cpnX{m,n}:= X\sp{B(\rDell{m,n})}}
The category of simplicial categories (that is, those enriched in $\mathcal{S}$, which we
will usually indicate by $\mathscr{X}$, $\mathscr{Y}$, and so on)
will be denoted by \w[,]{\mbox{\sf sCat}} and that of pointed simplicial categories
(enriched in \w[)]{\Ss\sb{\ast}} by \w[.]{\sCat\sb{\ast}}
In particular, we write \w{\map\sb{\ast}(x,y)} for the
standard function complex in \w{\Ss\sb{\ast}} or \w{\Top\sb{\ast}} (see \cite[I, \S 1.5]{GJarS}).
When we have a simplicial model category with its associated simplicial enrichment, we
denote the former by ${\mathbf C}$, say, and the latter by \w[.]{\mathscr{C}} As for a quasi-category $X$,
we write \w[,]{\operatorname{ho} X} \w[,]{\operatorname{ho}{\mathbf C}} or \w{\operatorname{ho}\mathscr{C}} for the associated homotopy category.
\end{mysubsection}
\begin{remark}\label{rinfc}
The category of simplicial sets admits a model category structure
in which the fibrant objects are quasi-categories and the weak equivalences are
\emph{Joyal equivalences} (see \cite[\S 2.2]{LurieHTT} and \cite{JoyQ,JoyQA}).
Similarly, there is a model category structure on \w{\mbox{\sf sCat}} in which the fibrant
objects are categories enriched in Kan complexes, and the weak equivalences are
Dwyer-Kan equivalences (see \cite{BergM}).
All the definitions and results in this paper could be stated in any of the known
models of \wwb{\infty,1}categories (see, e.g., \cite{BergH}), and could in fact be
presented in a model-independent way, using To{\"{e}}n's axiomatic formulation
(see \cite[\S 4]{ToenA}), for example, as was done in \cite{BMeadS}.
However, in the interests of concreteness we restrict attention here to the above two
models, using when needed the Quillen equivalence
\begin{myeq}\label{eqquileq}
\mathfrak{C}\colon\mbox{\sf sSet}\leftrightarrows\mbox{\sf sCat}\colon\mathfrak{B}
\end{myeq}
\noindent between the Bergner and Joyal model categories
(see \cite[Theorem 2.2.5.1]{LurieHTT}).
The right adjoint $\mathfrak{B}$ is the \emph{homotopy coherent nerve}, while
we can think of \w{\mathfrak{C}(X)} as a strict model for the \wwb{\infty,1}category $X$,
described quite explicitly in \cite{DSpivR}.
\end{remark}
\setcounter{figure}{0}\section{Quasi-Categories with a Class of Fibrations}
\label{cqccf}
In this section, we review Cisinski's notion of a quasi-category equipped with a
class of fibrations and weak equivalences, serving as the $\infty$-category version
of Brown's fibration categories (see \cite{KBrowA}, and compare \cite{BaueA}).
This material is largely taken from \cite{CisiH}.
\begin{defn}[\protect{\cite[Definition 7.4.6]{CisiH}}]
\label{def1.1}
Let $X$ be a quasi-category with a fixed terminal object $e$. A subcollection
\w{\operatorname{Fib} \subseteq X\sb{1}} is called a \emph{class of fibrations} if it satisfies the
following properties:
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item It contains all the identity maps and is closed under composition;
\item Pullbacks of fibrant objects (that is, objects such that the canonical map
\w{x \to e} is in \w[)]{\operatorname{Fib}} exist;
\item The pullback of a fibration between fibrant objects by a map with fibrant source
is a fibration.
\end{enumerate}
\end{defn}
\begin{defn}[\protect{\cite[Definition 7.4.12]{CisiH}}]
\label{def1.2}
A \emph{quasi-category with fibrations and weak equivalences} is a triple
\w{(X, \mathcal{W}, \operatorname{Fib})} consisting of a quasi-category $X$, a class of
fibrations \w{\operatorname{Fib} \subseteq X\sb{1}} as above, and a subcategory of weak equivalences
\w{\mathcal{W} \subseteq X} satisfying the $2$-out-of-$3$ property, such that:
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item Given a pullback diagram
\mydiagram[\label{eqpbd}]{
x \ar[r]\sp{f'} \ar[d] & y \ar[d]\sp{g} \\
z \ar[r]\sb{f} & w
}
in which the objects $y$, $z$, and $w$ are fibrant and $f$ is both a weak equivalence
and a fibration, the map \w{f'} is also a weak equivalence and a fibration.
\item Every map \w{f : x \to y} with fibrant codomain can be factored as a
weak equivalence followed by a fibration.
\end{enumerate}
\end{defn}
\begin{example}\label{exam1.3}
If $X$ is a quasi-category with finite limits, then \w{(X,\mathcal{J}(X), X\sb{1})} has the structure
of a quasi-category with weak equivalences and fibrations, where \w{\mathcal{J}(X)}
is the maximal Kan subcomplex of $X$ (so the weak equivalences are the equivalences,
and every map is a fibration).
\end{example}
\begin{example}\label{exam1.4}
Let $C$ be a category. If \w{(C,\operatorname{Fib},\mathcal{W})} is a category of fibrant objects in the sense
of \cite{KBrowA}, then \w{(B(C), B(\operatorname{Fib}), B(\mathcal{W}))} is a quasi-category with fibrations
and weak equivalences.
\end{example}
A category $I$ is called \emph{cycle-free} if there are no non-identity endomorphisms
in $I$. An object $i$ of a cycle-free category $I$ is said to be of \emph{finite length}
if there is an integer $n$ such that for each string of non-identity morphisms
\w[,]{i\sb{0} \xrightarrow{f\sb{0}} \cdots \xrightarrow{f\sb{m-1}} i\sb{m} = i}
necessarily \w[.]{m \le n} The smallest such $n$ is called the \emph{length} of $i$,
denoted by \w[.]{\ell(i)}
A \emph{directed category} is a cycle-free category $I$ in which each object has
finite length. Such a category is \emph{filtered} by the full subcategories \w{I\sp{(n)}}
consisting of objects of length at most $n$, and we set
\w[.]{\partial I\sp{(n)}:= I\sp{(n)}-\{ x:\ell(x) =n\}}
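A guiding example (standard, and the one relevant below): the restricted simplex
category \w{\sb{+}\!\Delta} is directed, since every non-identity map strictly raises
dimension, so that
$$
\ell([\mathbf{n}])~=~n \qquad\text{and}\qquad (\sb{+}\!\Delta)\sp{(n)}~=~\rDele{n}~.
$$
\noindent In particular, the Reedy notions discussed next apply to restricted
simplicial objects.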
Given an object \w[,]{x\sb{\bullet}\in X\sp{B(I\sp{\operatorname{op}})}} we write
$$
\partial x\sb{i} :=
\underset{j \in \partial(I/i)\sp{\ell(i)}}{\underset{ \longleftarrow }\lim} x\sb{j}~.
$$
Assume given a quasi-category $X$ equipped with a class of fibrations \w{\operatorname{Fib}} and
a directed category $I$. A map \w{f : x\sb{\bullet} \to y\sb{\bullet}}
in \w{X\sp{B(I\sp{\operatorname{op}})}} is a \emph{Reedy fibration} if for each \w[,]{i \in I} the map
\w{x\sb{i} \to y\sb{i} \times\sb{\partial y\sb{i}} \partial x\sb{i}}
is a fibration in $X$.
\begin{thm}[\protect{\cite[Theorem 7.4.20]{CisiH}}]
\label{thm1.5}
Let $X$ be a quasi-category with finite limits equipped with
classes \w{\operatorname{Fib}} of fibrations and $\mathcal{W}$ of weak equivalences, and let $I$ be a
finite directed category. The Reedy fibrations and levelwise weak equivalences
then give \w{X\sp{B(I\sp{\operatorname{op}})}} the structure of a quasi-category with weak
equivalences and fibrations.
\end{thm}
By using slightly stronger hypotheses, we obtain the following generalization:
\begin{corollary}\label{cor1.6}
If $X$ as in Theorem \ref{thm1.5} has countable limits, then the Reedy fibrations
and levelwise weak equivalences give \w{s\sb{+}X} the
structure of a quasi-category with weak equivalences and fibrations.
\end{corollary}
\begin{proof}
The fact that the Reedy fibrations form a class of fibrations follows from
\cite[Proposition 7.4.10]{CisiH}. Similarly, condition (1) of Definition \ref{def1.2} is
\cite[Proposition 7.4.18]{CisiH}. It remains to verify the existence of factorizations:
For each \w[,]{n \in \mathbb N} there is an adjunction
$$
i\sb{n}\sp{\ast}\colon s\sb{+}X\leftrightarrows\spnX{0, n}\colon(i\sb{n})\sb{\ast}
$$
where \w{i\sb{n}\sp{\ast}} is the restriction of presheaves and \w{(i\sb{n})\sb{\ast}}
is the right Kan extension (which exists by \cite[Proposition 6.4.9]{CisiH},
since $X$ has countable limits).
Let \w{f:x\to y} be a map with (Reedy) fibrant codomain. Using the construction
of \cite[Proposition 7.4.19]{CisiH}, we can produce compatible factorizations of
\w{i\sb{n}\sp{\ast}(f)} as a levelwise weak equivalence and a Reedy fibration, denoted by
\w[.]{g\sb{n}\circ h\sb{n}} Then
$$
\lim((i\sb{n})\sb{\ast}(g\sb{n})) \circ \lim((i\sb{n})\sb{\ast}(h\sb{n}))
$$
gives the required factorization.
\end{proof}
\begin{remark}\label{rmk1.7}
Theorem \ref{thm1.5} can be further generalized to the case where $I$ is an arbitrary
directed category by an inductive argument on the length of objects, using the argument of
Corollary \ref{cor1.6}, as long as $X$ has enough limits to guarantee existence of
the right Kan extension (using \cite[Proposition 6.4.9]{CisiH}).
\end{remark}
\begin{prop}\label{pinduce}
Suppose given a quasi-category $Y$, \w{(X,\operatorname{Fib},\mathcal{W})} as in Theorem \ref{thm1.5}, and
a functor of quasi-categories \w{F:Y\to X} such that:
\begin{enumerate}
\renewcommand{\labelenumi}{(\alph{enumi})~}
\item $F$ is essentially surjective;
\item \w{\operatorname{ho} F} is full;
\item $F$ preserves pullbacks and \w[,]{F(e')=e} for \w{e\in X} and \w{e'\in Y}
terminal objects.
\end{enumerate}
Then \w{(Y,F\sp{-1}(\operatorname{Fib}),F\sp{-1}(\mathcal{W}))} has the structure of a quasi-category with
fibrations and weak equivalences.
\end{prop}
\begin{proof}
By the $2$-out-of-$3$ property, \w{F\sp{-1}(\mathcal{W})} contains all identity maps and is
closed under composition. Since $F$ preserves pullbacks, it takes pullbacks of fibrations
to pullbacks of fibrations, and thus satisfies (2) and (3) of Definition \ref{def1.1}.
$F$ also preserves diagrams of the shape found in Definition \ref{def1.2}(1), and
thus satisfies that requirement, too. Finally, to verify Definition \ref{def1.2}(2),
let \w{f:c\to d} be a map in $Y$ with \w{F(d)} fibrant, and factor \w{F(f)} as a weak
equivalence followed by a fibration:
$$
F(c) \xrightarrow{a} d' \xrightarrow{b} F(d)~.
$$
\noindent By hypotheses (a) and (b), we can find a diagram
\mydiagram[\label{eqtwosquare}]{
F(c) \ar[r]\sb{\operatorname{Id}} \ar[d]_{F(a')} & F(c) \ar[d]_{a} \\
F(d'') \ar[r]\sb{w} \ar[d]_{F(b')} & d' \ar[d]_{b} \\
F(d) \ar[r]\sb{\operatorname{Id}} & F(d)~,
}
in which the map $w$ is an equivalence and the bottom square is a pullback
(since both its horizontal morphisms are equivalences). Thus \w{F(b')} is a fibration,
and \w{b' \circ a'} gives the required factorization of $f$.
\end{proof}
\setcounter{figure}{0}\section{Left Bousfield Localization}
\label{clbl}
In this section, we review the theory of localizations of $\infty$-categories
from \cite{CisiH}. Given a locally presentable quasi-category $X$, and a set
of maps $K$ in $X$, we construct the so-called $K$-equivalence structure,
in which the weak equivalences are $K$-equivalences.
\begin{defn}\label{dlocal}
If $\mathcal{W}$ is a subcategory of a quasi-category $X$ satisfying
the $2$-out-of-$3$ property, the \emph{localization} of $X$ at $\mathcal{W}$ is an object
\w{\LW(X)} corepresenting the functor
$$
(-)\sp{X} \times\sb{(-)\sp{\mathcal{W}}} \mathcal{J}((-)\sp{\mathcal{W}})
$$
\noindent (see \S \ref{exam1.3}).
Any functor of the form \w{X\to\LW(X)} is called a \emph{localization functor}.
The image of \w{\operatorname{Id}\sb{\LW(X)}} under
$$
(\LW(X))\sp{\LW(X)}\simeq(\LW(X))\sp{X}\times\sb{(\LW(X))\sp{\mathcal{W}}}\mathcal{J}((\LW(X))\sp{\mathcal{W}})\to
(\LW(X))\sp{X}
$$
is the \emph{localization map} \w[.]{X \to \LW(X)}
This has an evident universal property among all maps of quasi-categories
\w{X\to Y} which take maps in $\mathcal{W}$ to equivalences.
\end{defn}
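Unwinding the corepresentability (this is merely a restatement of
Definition \ref{dlocal}): for every quasi-category $Y$, restriction along the
localization map induces an equivalence
$$
Y\sp{\LW(X)}~\xrightarrow{\ \simeq\ }~Y\sp{X}\times\sb{Y\sp{\mathcal{W}}}\mathcal{J}(Y\sp{\mathcal{W}})~,
$$
\noindent so that functors \w{\LW(X)\to Y} correspond to functors \w{X\to Y} taking
the maps of $\mathcal{W}$ to equivalences in $Y$.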
\begin{thm}[\protect{\cite[Proposition 7.1.4]{CisiH}}]
\w{\LW(X)} exists for all choices of $X$ and $\mathcal{W}$.
\end{thm}
\begin{example}\label{exam2.2}
Suppose that \w{\mathscr{W}\subseteq\mathscr{X}} is an inclusion of (fibrant) simplicial
categories, with underlying $1$-categories \w[.]{W\subseteq X} Then by
\cite[Proposition 1.2.1]{HiniDK}, we have an equivalence of quasi-categories
$$
\mathcal{L}\sb{\mathfrak{B}(\mathscr{W})}(\mathfrak{B}(\mathscr{X}))~\simeq~\mathfrak{B}\LLc\sb{H}(X,W)~,
$$
\noindent where \w{\LLc\sb{H}(X,W)} is the fibrant replacement in the Bergner structure
of the hammock localization in the sense of Dwyer and Kan (\cite{DKanL}).
In particular, suppose that $C$ is the underlying category of a simplicial model
category ${\mathbf C}$, with underlying simplicial category $\mathscr{C}$ and underlying category
of weak equivalences \w[.]{W\subseteq C} Then we have a weak equivalence
$$
\mathfrak{B}(\mathscr{C}) \simeq \mathfrak{B}(\LLc\sb{H}(C,W)) \simeq \mathcal{L}\sb{BW}(BC)
$$
\noindent by the preceding paragraph and \cite[Proposition 4.8]{DKanF}. In particular,
we can interpret this as saying that \w{\mathcal{L}\sb{BW}(BC)} \emph{presents}
the model category ${\mathbf C}$.
\end{example}
\begin{defn}\label{def2.3}
A \emph{left Bousfield localization} of a quasi-category $X$ is a localization
functor \w{X \to Y} with a fully faithful right adjoint. We call a left Bousfield
localization \emph{left exact} if it preserves finite limits.
Dually, a \emph{right Bousfield localization} of a quasi-category $X$ is a localization
functor \w{X \to Y} with a fully faithful left adjoint.
\end{defn}
\begin{remark}\label{rlrbl}
By \cite[Proposition 5.2.7.6]{LurieHTT}, left Bousfield localizations of simplicial
model categories give rise to left Bousfield localizations of quasi-categories.
\end{remark}
\begin{defn}\label{con2.5}
Suppose that $K$ is a small class of maps. Then the \emph{left Bousfield localization}
of $X$ at $K$, \w[,]{X\to\LLc\sb{K}\sp{\operatorname{cocon}}(X)} is the map universal among
cocontinuous maps that take elements of $K$ to equivalences.
We will call any map whose image under left Bousfield localization
\w{X\to\LLc\sb{K}\sp{\operatorname{cocon}}(X)} is an equivalence a
\emph{$K$-equivalence}.
\end{defn}
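In other words (a sketch of the universal property; the notation
\w{\operatorname{Fun}\sp{\operatorname{cocon}}} for the quasi-category of cocontinuous functors is ours):
for every cocomplete quasi-category $Y$, restriction induces an equivalence
$$
\operatorname{Fun}\sp{\operatorname{cocon}}(\LLc\sb{K}\sp{\operatorname{cocon}}(X),Y)~\xrightarrow{\ \simeq\ }~
\operatorname{Fun}\sb{K}\sp{\operatorname{cocon}}(X,Y)~,
$$
\noindent where the right-hand side is the full subcategory of cocontinuous functors
taking each map of $K$ to an equivalence (compare \cite[\S 5.5.4]{LurieHTT}).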
Throughout the rest of the section, we will fix a quasi-category with weak equivalences
and fibrations \w[,]{(X,\operatorname{Fib},\mathcal{W})} a small collection of maps $K$,
and assume the following:
\begin{assume}\label{ass3.1}
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item The images of the domains and codomains of the maps in $K$
under the localization map are compact and connected.
%
\item $X$ and \w{\LW(X)} are locally presentable, and the localization map
\w{X \to \LW(X)} is accessible.
\item $\mathcal{W}$ is \emph{saturated}: that is, a morphism of $X$ is in $\mathcal{W}$
if and only if its image under the localization map is invertible.
\end{enumerate}
\end{assume}
\begin{example}\label{examxxx}
Suppose that ${\mathbf C}$ is an \emph{excellent} simplicial model category (in the sense of
\cite[Definition A.3.2.16]{LurieHTT}) with underlying category $C$, subcategory of
weak equivalences $\mathcal{W}$, and underlying simplicial category $\mathscr{C}$. Many known
examples of model categories (the Kan model structure on simplicial sets,
the Jardine model structure on simplicial presheaves, and so on) are
excellent. We claim that the structure \w{(B(C\sp{f}), \operatorname{Fib}, B(\mathcal{W}\sp{f}))} given by
Example \ref{exam1.4} automatically satisfies Assumptions \ref{ass3.1}(2)-(3).
Note that Assumption \ref{ass3.1}(1) involves choosing a collection of maps between
`homotopy compact and connected objects' such that simplicial \w{\hom} commutes
with filtered (homotopy) colimits and coproducts.
By the discussion of Example \ref{exam2.2}, we can identify the localization map
\w{B(C\sp{f}) \to \mathcal{L}\sb{B(\mathcal{W}\sp{f})}(B(C\sp{f}))} with homotopy coherent nerve of
the inclusion \w{C\sp{f} = C\sp{\circ} \to \mathscr{C}\sp{\circ}} of $C$ as a discrete
simplicial category; note that by \cite[Remark A.3.2.17]{LurieHTT}, every object
in an excellent model category is cofibrant. The subcategory $\mathcal{W}$ is saturated,
since a map in \w{C\sp{f}} is a weak equivalence if and only if it represents
an equivalence in \w[.]{\mathfrak{B}(\mathscr{C}\sp{\circ})}
The quasi-category \w{B(C\sp{f})} presents the trivial model structure on
\w[,]{C\sp{f}} so Assumption (2) is equivalent to showing that for some sufficiently
large regular cardinal $\lambda$, $\lambda$-filtered colimits are homotopy colimits.
But this is true for all combinatorial model categories
(see \cite[Proposition 7.3]{DuggC}), and thus for all excellent model categories.
\end{example}
\begin{defn}\label{def3.3}
An object \w{x \in X} is \emph{$K$-local} if it is in the essential image of the
right adjoint of the localization map \w[.]{\LW(X)\to\LLc\sb{K}\sp{\operatorname{cocon}}(X)}
\end{defn}
\begin{remark}\label{rmk3.4}
By definition, $x$ is \ww{K}-fibrant if it is \ww{K}-local and fibrant.
As in the model category case (see \cite{BarwL}), the $K$-local objects are the
objects $z$ such that each $K$-equivalence \w{f:x\to y} induces a weak equivalence
\w[.]{\operatorname{Map}\sb{\LW(X)}(y,z)\to \operatorname{Map}\sb{\LW(X)}(x,z)}
\end{remark}
\begin{lemma}\label{lemexact}
The map \w{\LW(X)\to\LLc\sb{K}\sp{\operatorname{cocon}}(X)} is left exact (i.e., preserves pullbacks).
\end{lemma}
\begin{proof}
By \cite[Proposition 6.2.1.1]{LurieHTT}, and the preceding paragraph, it suffices
to show that the $K$-equivalences are stable under pullback. But this follows
from the fact that the functor \w{\operatorname{Map}\sb{\LW(X)}(h, -)} preserves limits.
\end{proof}
\begin{lemma}\label{lem3.7}
A $K$-equivalence between $K$-local objects is a weak equivalence.
\end{lemma}
\begin{proof}
Suppose that \w{f : x \to y} is a $K$-equivalence. Then by Definition \ref{con2.5}
the image of $f$ in \w{\LW(X)} is in the essential image of
\w[,]{\mathcal{J}(\LLc\sb{K}\sp{\operatorname{cocon}}(X)) \subseteq\LLc\sb{K}\sp{\operatorname{cocon}}(X) \xrightarrow{\phi} \LW(X)}
where $\phi$ is the right adjoint of localization. In particular it is an equivalence
in \w[,]{\LW(X)} so that $f$ is a weak equivalence by Assumption \ref{ass3.1}(3).
\end{proof}
\begin{construction}\label{con2.8}
By \cite[Proposition 5.5.4.15]{LurieHTT}, the $K$-equivalences are the saturation of
the set $K$. Thus, by a small object argument of sufficient size, we can
construct a fibrant $K$-local replacement \w{x\to\mathscr{M}\sb{K}(x)} of each object $x$.
Consider the localization adjunction
\w[.]{i\sb{\ast}:\LW(X) \leftrightarrows \LLc\sb{K}\sp{\operatorname{cocon}}(X) : i\sp{\ast}}
There is a commutative diagram in \w[:]{\LW(X)}
$$
\xymatrix
{
x \ar[r] \ar[d] & \mathscr{M}\sb{K}(x) \ar[d] \\
i\sp{\ast}i\sb{\ast}(x) \ar[r] & i\sp{\ast}i\sb{\ast}\mathscr{M}\sb{K}(x)
}
$$
\noindent where the vertical maps are the units of the adjunction.
The bottom horizontal and right vertical maps are equivalences in \w{\LW(X)}
by Lemma \ref{lem3.7} above.
\end{construction}
\begin{construction}\label{con3.2}
We endow $X$ with the structure of a quasi-category with fibrations and weak equivalences
(called the \emph{\ww{K}-equivalence structure}), as follows: the \emph{$K$-fibrations}
between fibrant objects of $X$ are defined to be those morphisms of \w{\operatorname{Fib}} for which the
diagram
$$
\xymatrix
{
x \ar[d] \ar[r] & \mathscr{M}\sb{K}(x) \ar[d] \\
y \ar[r] & \mathscr{M}\sb{K}(y)
}
$$
is a pullback in \w[.]{\LLc\sb{K}(X)}
As usual, a map which is both a $K$-equivalence and a $K$-fibration is called
a $K$-\emph{trivial fibration}.
\end{construction}
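Equivalently (merely unwinding the pullback condition, assuming the displayed pullback
exists in \w[)]{\LLc\sb{K}(X)}: a morphism \w{f:x\to y} in \w{\operatorname{Fib}} between fibrant
objects is a $K$-fibration precisely when the comparison map
$$
x~\longrightarrow~y\times\sb{\mathscr{M}\sb{K}(y)}\mathscr{M}\sb{K}(x)
$$
\noindent is an equivalence in \w[.]{\LLc\sb{K}(X)}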
\begin{thm}\label{thm3.5}
The $K$-fibrations and $K$-equivalences
endow $X$ with the structure of a quasi-category with fibrations and weak equivalences.
\end{thm}
To prove this, we shall require some preliminary results:
\begin{lemma}\label{lem3.8}
Suppose we have a pullback diagram
$$
\xymatrix
{
x \times\sb{z} y \ar[r] \ar[d]\sb{g} & y \ar[d]\sp{f} \\
x \ar[r] & z
}
$$
\noindent where $f$ is a $K$-fibration and $z$, $x$, and $y$ are fibrant.
Then $g$ is a $K$-fibration as well.
\end{lemma}
\begin{proof}
Consider the diagram:
$$
\xymatrix@C=1.5em@R=1.5em{
& x \times\sb{z} y \ar@{-}[rr] \ar@{-}[dd]_>>>{g}
& & x \ar@{-}[dd]
\\
\mathscr{M}\sb{K}(x \times\sb{z} y) \ar@{-}[ur]\ar@{-}[rr]\ar@{-}[dd]
& & \mathscr{M}\sb{K}(x) \ar@{-}[ur]\ar@{-}[dd]
\\
& y \ar@{-}[rr]
& & z
\\
\mathscr{M}\sb{K}(y) \ar@{-}[rr]\ar@{-}[ur]
& &\mathscr{M}\sb{K}(z) \ar@{-}[ur]
}
$$
in \w[.]{\LW(X)} Since localization preserves pullbacks along fibrations by
\cite[Theorem 7.5.18]{CisiH}, we can assume that the back face is a pullback in
\w[.]{\LW(X)}
The front face of the cube is equivalent in \w{\LW(X)} to \w{i\sp{\ast}i\sb{\ast}}
applied to the back face by the discussion of \S \ref{con2.8}.
Thus, the front face of the cube is a pullback in \w{\LW(X)} by
Lemma \ref{lemexact}. The images of the back and front faces of the cube in
\w{\LLc\sb{K}(X)} are also pullbacks by another application of
Lemma \ref{lemexact}. By hypothesis, the right face is a pullback in
\w{\LW(X)} so the pasting law for pullbacks in a
quasi-category (\cite[Lemma 4.4.2.1]{LurieHTT}) implies the required result.
\end{proof}
\begin{lemma}\label{lem3.9}
Suppose we have a pullback diagram in $X$
$$
\xymatrix
{
x \times\sb{z} y \ar[r] \ar[d]\sb{h} & y \ar[d]\sp{f} \\
x \ar[r]\sb{g} & z
}
$$
where $f$ is a $K$-fibration, the objects $z$, $x$, and $y$ are fibrant, and $g$ is a
$K$-equivalence. Then $h$ is a $K$-equivalence as well.
\end{lemma}
\begin{proof}
The image of the pullback square in \w{\LW(X)} is again a pullback, by
\cite[Theorem 7.5.18]{CisiH}, and pullbacks preserve $K$-equivalences in
\w{\LW(X)} by Lemma \ref{lemexact}.
\end{proof}
\begin{prop}\label{lem3.10}
Suppose that \w{f : x \to z} is a map between $K$-fibrant objects.
Then we can factor it as a $K$-equivalence followed by a $K$-fibration.
\end{prop}
\begin{proof}
Form the diagram
$$
\xymatrix
{
x \ar[ddr]\sb{\psi} \ar[drr] \ar@{.>}[dr] & & \\
& y'' \ar[d] \ar[r] & \mathscr{M}\sb{K}(x) \ar[d] \\
& y' \ar[r] \ar[d]\sb{\phi} & y \ar[d] \\
& z \ar[r] & \mathscr{M}\sb{K}(z)
}
$$
\noindent in $X$, where the right vertical composite is a factorization of
\w{\mathscr{M}\sb{K}(f)} as an equivalence followed by a fibration,
and both squares are pullbacks.
Note that we can assume that \w{\mathscr{M}\sb{K}(z)} and \w{\mathscr{M}\sb{K}(x)}
are fibrant, so that the above pullbacks are guaranteed to exist.
We claim that \w{\phi \circ \psi} gives the required factorization:
The horizontal map \w{y' \to y} is a $K$-equivalence by Lemma \ref{lem3.9}
above. The objects \w[,]{\mathscr{M}\sb{K}(y)} \w[,]{\mathscr{M}\sb{K}(x)} and \w{y'}
are $K$-local, so \w{\mathscr{M}\sb{K}(x)\to y} is a
weak equivalence, and hence a $K$-equivalence.
By the $2$-out-of-$3$ property, we see that $\psi$ is a $K$-equivalence.
To check that $\phi$ is a $K$-fibration, it suffices to show that
$y$ is weakly equivalent to $\mathscr{M}\sb{K}(y')$. Since $\mathscr{M}\sb{K}(z)$
is $K$-fibrant, we can conclude that $y$ is $K$-fibrant. A
$K$-equivalence between $K$-local objects is a weak equivalence by
Lemma \ref{lem3.7}. Thus $y$ is $K$-local, being weakly equivalent to
\w[.]{\mathscr{M}\sb{K}(x)} It follows from Remark \ref{rmk3.4} that $y$ is
$K$-fibrant. It is also $K$-equivalent, and hence
weakly equivalent, to \w[.]{\mathscr{M}\sb{K}(y')}
\end{proof}
\begin{proof}[Proof of Theorem \protect{\ref{thm3.5}}]
First, we check that the $K$-fibrations form a class of fibrations
in the sense of Definition \ref{def1.1}. Property (3) is just a special case of Lemma \ref{lem3.8}
above. It is immediate from
the definition that the identity map is a $K$-fibration.
Pullbacks of $K$-fibrant objects exist, since they are in particular
pullbacks of fibrant objects. Composition preserves $K$-fibrations by the
pasting law for pullbacks in a quasi-category (\cite[Lemma 4.4.2.1]{LurieHTT}).
Pullbacks of fibrations in $X$ are taken to pullbacks in
\w{\LW(X)} by \cite[Theorem 7.5.18]{CisiH}. If $f$ is a $K$-equivalence and
a $K$-fibration, the induced map \w{\mathscr{M}\sb{K}(x) \rightarrow \mathscr{M}\sb{K}(y)} is an
equivalence in \w[.]{\LW(X)} Thus, $f$ is a pullback of an equivalence in \w[,]{\LW(X)}
and thus itself represents an equivalence in $\LW(X)$. It is also a $K$-fibration by Lemma \ref{lem3.8} above.
\end{proof}
\begin{remark}\label{rmk3.11}
Given a model category ${\mathbf M}$, the fibrations of a left Bousfield localization
\w{\mathcal{L}({\mathbf M})} of ${\mathbf M}$ are somewhat difficult to describe.
The paper \cite{BarwL} gives a nice characterization of fibrations whose target
is in an admissible left exact and right proper subcategory $E$
(see \cite[Definition 4.15]{BarwL}). An example of such a subcategory
is the subcategory of fibrant objects (see \cite[Example 4.17]{BarwL}).
We used this characterization of fibrations between fibrant objects in the
localization as our definition of $K$-fibrations above.
\end{remark}
\setcounter{figure}{0}\section{Localization with respect to a set of objects}
\label{clocth}
In this section, we will show that the localization of a locally presentable
quasi-category with respect to a class of $H$-equivalences (associated to a small
set of objects $H$) is a right Bousfield localization in the sense of
Definition \ref{def2.3}.
This is an application of the Adjoint Functor Theorem of \cite{Raptis}, using
the fact that the mapping spaces of the localization are locally small, by
Lemma \ref{lem2.6} below. The proof of this Lemma is inspired by the
``bounded cofibration arguments'' common in localization theory for model categories
(see the introduction of \cite{JardC} as well as Lemma 4.9 there).
\begin{defn}\label{def2.4}
Let \w{(X,\operatorname{Fib},\mathcal{W})} be a quasi-category with fibrations and weak equivalences.
Let $H$ be a small collection of objects in $X$. We say that a $1$-morphism
\w{f : x \to y} in $X$ is an \emph{$H$-equivalence} if
$$
\operatorname{Map}\sb{\LW(X)}(h, x) \to \operatorname{Map}\sb{\LW(X)}(h, y)
$$
is a weak equivalence for each \w[.]{h \in H} An object $z$ is called \emph{$H$-local}
if and only if, for each $H$-equivalence \w[,]{f : x \to y} the induced map
$$
\operatorname{Map}\sb{\LW(X)}(y, z) \to \operatorname{Map}\sb{\LW(X)}(x, z)
$$
is a weak equivalence.
\end{defn}
Note that every map in $\mathcal{W}$ is automatically an $H$-equivalence, since it is
an equivalence in \w[.]{\LW(X)} By a slight abuse of notation, we will write
\w{\LH(X)} for the localization of $X$ at the $H$-equivalences.
\begin{lemma}\label{lem2.5}
Suppose that $X$ is a locally presentable quasi-category. Then
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item Each object is $\lambda$-compact for some regular cardinal $\lambda$.
\item For each regular cardinal $\lambda$, the set of $\lambda$-compact objects
is essentially small.
\end{enumerate}
\end{lemma}
\begin{proof}
(1)\ By \cite[Proposition 5.4.2.2]{LurieHTT}, we can present any \w{x\in X} as a
$\kappa$-filtered colimit of $\kappa$-compact objects, indexed by a diagram $I$.
Since we can raise the index of accessibility (\cite[Proposition 5.4.2.11]{LurieHTT}),
we can assume that $X$ is a $\lambda$-accessible for \w[.]{\lambda >|I|} Thus, $x$ is a
colimit of a $\lambda$-bounded diagram of $\lambda$-compact objects and is thus
$\lambda$-compact by \cite[Corollary 5.3.4.15]{LurieHTT}.
\noindent(2)\ Using \cite[Proposition 5.4.2.11]{LurieHTT}, choose \w{\lambda' > \lambda}
such that $X$ is \ww{\lambda'}-accessible. By \cite[Proposition 5.4.2.2]{LurieHTT},
the set of \ww{\lambda'}-compact, and hence $\lambda$-compact, objects of $X$ is
essentially small.
\end{proof}
\begin{lemma}\label{lem2.6}
Consider a locally presentable quasi-category $X$ equipped with fibrations \w{\operatorname{Fib}}
and weak equivalences $\mathcal{W}$. Suppose that \w{\LW(X)} is a locally presentable, accessible
localization of $X$. Then for each \w{x\in X} there is a regular cardinal $\lambda$
such that for each $H$-equivalence \w{s : y \to x} there is a $\lambda$-compact object
$z$ in \w{\LW(X)} and a diagram of $H$-equivalences
$$
\xymatrix
{
z \ar[d] \ar[r] &x \\
y \ar[ur]\sb{s} &
}
$$
\end{lemma}
\begin{proof}
Let $S$ be a small set of objects such that each object of $X$ can be written
canonically as a filtered colimit of elements of $S$
(see \cite[Proposition 5.5.1.1]{LurieHTT}). By Lemma \ref{lem2.5} above, we can choose a
regular cardinal $\lambda$ such that
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item $H$ is $\lambda$-bounded, and for each \w{h \in H} the functor
\w{\operatorname{Map}\sb{\LW(X)}(h, -)} commutes with $\lambda$-filtered colimits.
\item For each choice of \w{a,b \in (H \cup S)} and \w[,]{n \in \mathbb N} the set of
$n$-simplices of \w{\operatorname{Map}\sb{\LW(X)}(a,b)} is $\lambda$-bounded.
\item The localization map is $\lambda$-accessible: that is, it preserves
$\lambda$-filtered colimits.
\end{enumerate}
Let us write
$$
y~=~\underset{s \in I}{\operatorname{colim}} \,y\sb{s}
$$
as a $\lambda$-filtered colimit of objects in $S$. For each \w[,]{\lambda' < \lambda}
we define a sub-object \w{z\sb{\lambda'}} of $y$ by transfinite induction.
We start the induction as follows: for each object \w{h \in H} and
\w[,]{\alpha \in \pi\sb{i}\operatorname{Map}\sb{\LW(X)}(h, x)} we choose an object
\w{y\sb{(i, \alpha, h)} := y\sb{s}} \wb{s \in I} such that
$$
\alpha \in\operatorname{Im}(\pi\sb{i}\operatorname{Map}\sb{\LW(X)}(h, y\sb{(i, \alpha, h)})\to
\pi\sb{i}\operatorname{Map}\sb{\LW(X)}(h, x)).
$$
\noindent Let \w{I\sb{0} \subseteq I} be the full subcategory with objects of the form
\w[.]{y\sb{(i, \alpha, h)}} This is guaranteed to have a $\lambda$-bounded set of
morphisms by assumption (2) on $\lambda$. Now put
$$
z\sb{0}~:=~\underset{s \in I\sb{0}}{\operatorname{colim}} \, y\sb{s}.
$$
In the inductive step there are two possibilities\vspace{3 mm}:
\noindent\textbf{Case 1:} for a successor ordinal \w[,]{\lambda''+1=\lambda'} let
$$
z\sb{\lambda''}~:=~\underset{s \in I\sb{\lambda''}}{\operatorname{colim}} \, y\sb{s}
$$
\noindent for \w[.]{I\sb{\lambda''} \subseteq I} Given \w{k\in\mathbb N} and two elements
$\alpha$ and $\beta$ in \w{\pi\sb{k}\operatorname{Map}\sb{X}(h,z\sb{\lambda''})} with the same image
in \w[,]{\pi\sb{k}\operatorname{Map}\sb{\LW(X)}(h, x)} choose for each
\w{i\in I\sb{\lambda''}} a commutative diagram
$$
\xymatrix{
\pi\sb{k}\operatorname{Map}\sb{\LW(X)}(h, y\sb{j}) \ar[r] \ar[d]\sb{\phi} &
\pi\sb{k}\operatorname{Map}\sb{\LW(X)}(h, y\sb{(i, \alpha, \beta)}) \ar[r] &
\pi\sb{k}\operatorname{Map}\sb{\LW(X)}(h, x) \\
\pi\sb{k}\operatorname{Map}\sb{X}(h, z\sb{\lambda''}) & &
}
$$
with \w[,]{j \in I\sb{\lambda''}} \w[,]{\phi(\alpha')=\alpha}
and \w{\phi(\beta') = \beta} for some \w{\alpha'} and \w{\beta'} whose images under the
left horizontal map are the same.
We then let \w{I\sb{\lambda'}} be the full subcategory of $I$ on objects
in \w{I\sb{\lambda''}} and those of the form \w{y\sb{(i, \alpha, \beta)}} above.
The colimit
$$
z\sb{\lambda'}~:=~\underset{s \in I\sb{\lambda'}}{\operatorname{colim}} \, y\sb{s}
$$
is evidently $\lambda$-compact, since the set of elements \w{y\sb{(i, \alpha, \beta)}}
is $\lambda$-bounded\vspace{3 mm}.
\textbf{Case 2:} if \w{\lambda'} is a limit ordinal, put
\w[.]{z\sb{\lambda'}:=\underset{\lambda''<\lambda'}{\operatorname{colim}}\,z\sb{\lambda''}}
At each stage of the induction, we obtain a $\lambda$-compact object, and the object
$$
z~=~\underset{\lambda' < \lambda}{\operatorname{colim}}\,z\sb{\lambda'}
$$
is also $\lambda$-compact. Both of these statements follow from
\cite[Corollary 5.3.4.15]{LurieHTT}. It is easy to check that the canonical map
\w{z \to y} is an $H$-equivalence by the construction of \w{z\sb{\lambda'}}
and the fact that \w{\operatorname{Map}\sb{\LW(X)}(h, -)} commutes with $\lambda$-filtered colimits.
\end{proof}
\begin{thm}\label{thm2.7}
Given $X$ and \w{\LW(X)} as in Lemma \ref{lem2.6}, let $H$ be a small set of
objects of $X$ whose image in \w{\LW(X)} is connected. Then the localization map
\w{\LW(X)\to\LH(X)} is a right Bousfield localization.
\end{thm}
\begin{proof}
By \cite[Proposition 7.11.2]{CisiH}, it suffices to show that the localization map
has a right adjoint. By the version of the adjoint functor theorem from \cite{Raptis},
it suffices to show that this localization map is continuous, that \w{\LH(X)} is
locally small, and that it satisfies the solution set condition.
If we equip \w{\LW(X)} with the class of fibrations and weak
equivalences given by the dual of Example \ref{exam1.3}, then \w{\LH(X)} is exactly
the continuous localization of \w{\LW(X)} (in the sense of \cite[Remark 7.7.10]{CisiH}).
This follows from the fact that both pullbacks and products of $H$-equivalences are
$H$-equivalences.
To show that \w{\LH(X)} is locally small, by
\cite[Proposition 7.10.1 \& Corollary 7.6.13]{CisiH} it suffices
to show that \w{\mathcal{P}(\LH(X))} is locally small. In other words, we want to show that
\w{\pi\sb{0}\operatorname{Map}\sb{\LH(X)}(x, y)} is small for each \w[.]{x, y \in X}
The description of the mapping space obtained from the dual of
\cite[7.2.10(2), Remark 7.2.21]{CisiH} (in terms of the calculus of
fractions), together with Lemma \ref{lem2.5}, imply that each component
of \w{\operatorname{Map}\sb{\LW(X)}(x,y)} contains an object of the form \w[,]{x \leftarrow z \to y}
where $z$ is $\lambda$-compact in \w{\LW(X)} for some cardinal $\lambda$. But the
$\lambda$-compact objects of $X$ are essentially small. Thus the set of such
components is small.
We now verify the solution set condition from \cite{Raptis}. That is, we want to show
that \w{\LH(X)\sb{y/}} has a small, weakly initial set. Every object of \w{\LH(X)\sb{y/}}
admits a morphism from an object \w{y \leftarrow z \rightarrow x} such that $z$ is
$\alpha$-bounded, as noted in the preceding paragraph.
On the other hand, every such morphism factors through \w[,]{y \leftarrow z\to x'}
where \w{x'} is $\alpha$-bounded. The set of all morphisms \w[,]{y \leftarrow z\to x'}
for $z$ and \w{x'} $\alpha$-bounded is essentially small.
Hence the solution set condition holds.
\end{proof}
\setcounter{figure}{0}\section{Right Bousfield Localization}
\label{crbl}
In this section, we construct a version of right Bousfield localization with respect
to a set of objects $H$, both for a quasi-category equipped with cofibrations
(see Definition \ref{dqccw} below) and one equipped with fibrations.
This is not formally dual to the case of left Bousfield localization above,
since we are localizing with respect to a set of \emph{objects}, not morphisms.
\begin{defn} \label{dqccw}
A triple \w{(X, \operatorname{Cof},\mathcal{W})} is called a \emph{quasi-category with
cofibrations and weak equivalences} if \w{(X\sp{\operatorname{op}}, \operatorname{Cof}\sp{\operatorname{op}},\mathcal{W}\sp{\operatorname{op}})}
is a category with fibrations and weak equivalences as in Definition \ref{def1.2}.
\end{defn}
Our first example is obtained by dualizing Corollary \ref{cor1.6}:
\begin{lemma}\label{cor1.8}
Suppose that $X$ has countable colimits. Then \w{c\sp{+}X} (\S \ref{snac}) can be equipped
with the structure of a quasi-category with weak equivalences and cofibrations dual to
that of Theorem \ref{thm1.5}. We call this the \emph{Reedy structure} on \w[.]{c\sp{+}X}
\end{lemma}
In this section we shall assume that $X$ is locally
presentable, and fix a collection $H$ of cofibrant objects in $X$, each of
which is compact and connected as an object of \w[.]{\LW(X)} We wish to endow $X$
with a new structure of a category of cofibrations and weak equivalences, in which the
weak equivalences are the $H$-equivalences; we do so by mimicking Barwick's
construction of right Bousfield localizations in \cite{BarwL}.
\begin{assume}\label{ass6.1}
We assume for simplicity a few additional properties for \w[:]{(X,\operatorname{Cof},\mathcal{W})}
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item \w{X \to \LW(X)} preserves transfinite composites of cofibrations of
cofibrant objects.
\item The structure is cofibrantly generated in that each (trivial) cofibration
can be written as a transfinite composite of pushouts of a small set of
(trivial) cofibrations with cofibrant domain.
\item The set of weak equivalences satisfies (3) of \ref{ass3.1}.
\end{enumerate}
\end{assume}
These are fairly mild assumptions. For instance, if we look at the
structure given by \ref{exam1.4} on the nerve of the cofibrant objects of a model
category, assumption (1) is satisfied by \cite[Proposition 17.9.1]{HirM}.
Properties (2) and (3) hold, for instance, for the structure from \S \ref{examxxx},
since an excellent model category is cofibrantly generated by definition.
For each \w[,]{h \in H} let \w{\Lambda\sp{\bullet}(h)} be a cofibrant replacement for the constant
cosimplicial object on $h$ (in the Reedy structure on \w[),]{c\sp{+}X} and let
$$
I\sb{H}~:=~\{ L\sb{p}\Lambda\sp{\bullet}(h) \to\Lambda\sp{p}(h)\}\sb{h\in H,p\in\mathbb N} \cup J~,
$$
where $J$ is a set of generating trivial cofibrations for \w{(X,\operatorname{Cof},\mathcal{W})}
with cofibrant domains, and \w{L\sb{p}} is the $p$-th latching object (see
\cite[VII, \S 4]{GJarS}).
\begin{defn}\label{def6.2}
A map in $X$ is called an \emph{$H$-cofibration} if it can be written as a transfinite
composite of pushouts of elements of \w[.]{I\sb{H}} We call a map an
\emph{$H$-trivial cofibration} if it is both an $H$-cofibration and an $H$-equivalence.
\end{defn}
\begin{defn}\label{dhcol}
We call an object \w{x \in X} \emph{$H$-colocal} if for each $H$-equivalence
\w{f : y \to z} the induced map \w{\operatorname{Map}\sb{\LW(X)}(x, y)\to\operatorname{Map}\sb{\LW(X)}(x,z)}
is an equivalence.
\end{defn}
\begin{lemma}\label{lem6.4}
$H$-equivalences between $H$-colocal objects in $X$ are weak equivalences.
\end{lemma}
\begin{proof}
The $H$-equivalences between $H$-colocal objects are in the essential image of
\w[,]{\mathcal{J}\LLc\sb{H}(X) \subseteq\LLc\sb{H}(X) \xrightarrow{\phi} \LW(X)} where $\phi$ is
the left adjoint of the localization map, whose existence is guaranteed by
Theorem \ref{thm2.7}. Thus, $H$-equivalences between $H$-colocal objects represent
equivalences in \w[.]{\LW(X)} They are therefore weak equivalences by
Assumption \ref{ass6.1}.
\end{proof}
\begin{lemma}\label{lem6.5}
Suppose that $x$ is $H$-cofibrant. Then a map \w{f : x \to y} in $X$ is an $H$-trivial cofibration
if and only if it is a trivial cofibration.
\end{lemma}
\begin{proof}
If $f$ is a trivial cofibration it is an $H$-cofibration, since
\w[.]{J \subseteq I\sb{H}} It is also an $H$-equivalence, since every weak equivalence
is an $H$-equivalence.
Conversely, suppose that \w{f : x \to y} is an $H$-trivial cofibration.
Then both objects $x$ and $y$ can be written as colimits of transfinite composites of
pushouts of maps in \w[,]{I\sb{H}} and thus transfinite composites of pushouts of
$H$-colocal maps. We claim that $x$ and $y$ are thus $H$-colocal.
Indeed, the functorial mapping space \w{\mathbf{Map}\sb{\LW(X)}(-, y)} sends colimits to limits
(see the discussion in the preceding section). Thus, colimits of $H$-colocal
objects in \w{\LW(X)} are again $H$-colocal. But the localization \w{X \to \LW(X)} preserves pushouts
by cofibrations by the dual of \cite[Theorem 7.5.18]{CisiH} and also preserves
transfinite composites of cofibrations by (1) of \ref{ass6.1}. Thus,
$x$ and $y$ are $H$-colocal.
But $H$-equivalences between $H$-colocal objects are
weak equivalences, so $f$ is a weak equivalence. The elements of \w{I\sb{H}} are
all cofibrations, so every $H$-cofibration is a cofibration; hence $f$ is a trivial cofibration.
\end{proof}
\begin{lemma}\label{lem6.6}
Let \w{f : x \to y} be a map with $H$-cofibrant source. Then we can factor it as
an $H$-cofibration followed by an $H$-equivalence.
\end{lemma}
\begin{proof}
By Lemma \ref{lem2.5}, each object of $H$ is $\lambda$-compact for some cardinal
$\lambda$.
By a small object argument of size \w{\lambda' > \lambda} for some regular
\w[,]{\lambda'} we can factor $f$ as \w[,]{x \xrightarrow{g} z \xrightarrow{g'} y}
where $g$ is an $H$-cofibration, and the map \w{g'} has the right
lifting property with respect to \w[.]{h \otimes\partial\Deln{n} \to h\otimes\Deln{n}}
We now want to show that the map $g'$ induces an $H$-equivalence.
To do this, we will show that we can solve all lifting problems
\mydiagram[\label{eqliftsq}]{
\partial\Deln{n} \otimes h \ar[r]\sp(0.5){q} \ar[d] & z \ar[d]\sp{g'} \\
\Deln{n} \otimes h \ar[r]\sb(0.5){r} \ar@{.>}[ur] & y
}
in \w[.]{\LW(X)} Note that a priori the construction only shows that we can solve
such lifting problems in $X$.
We can write the horizontal maps in \wref{eqliftsq} as composites \w{q'\circ q''} and
\w[,]{r' \circ r''} where \w{q'} and \w{r'} are in the image of the localization
map \w{X\to\LW(X)} (see \cite[\S 7.2]{CisiH}), so it suffices to solve a lifting problem:
$$
\xymatrix
{
\partial\Deln{n} \otimes h \ar[r]_{q''} \ar[d] & w \ar[r]_{q'} & z \ar[d]_{g'} \\
\Deln{n} \otimes h \ar[r]_{r''} & w \ar@{.>}[ur] \ar[r]_{r'} & y.
}
$$
\noindent Without loss of generality, we can assume \w{q'} and \w{r'} have
$H$-cofibrant domains, so we reduce to solving lifting problems involving elements
of \w[,]{I\sb{H}} which we can solve by the construction of \w[.]{g'}
For each \w{h \in H} and \w[,]{\sigma \in \Omega\sp{n}\operatorname{Map}\sb{\LW(X)}(h, y)}
we can find lifts in the diagrams (by the preceding discussion)
$$
\xymatrix
{
h \otimes \partial\Deln{n} \ar[r]\sb{0} \ar[d] & z \ar[d]\sb{g'} \\
h \otimes\Deln{n} \ar@{.>}[ur] \ar[r]\sb{\sigma} & y
}
$$
On the other hand, given \w{\sigma' \in \Omega\sp{n}\operatorname{Map}\sb{\LW(X)}(h, z)}
with a nullhomotopy of \w{g'\circ\sigma'} given by $\gamma$ below, we can find a lift in
the diagram
$$
\xymatrix
{
h \otimes \partial \Deln{n+1} \ar[r]\sb{\sigma'} \ar[d] & z \ar[d]\sp{g'} \\
h \otimes \Deln{n+1} \ar[r]\sb{\gamma} \ar@{.>}[ur] & y.
}
$$
\noindent We conclude that
\w{\pi\sb{0}\Omega\sp{n}\operatorname{Map}\sb{\LW(X)}(h, z)\to\pi\sb{0}\Omega\sp{n}\operatorname{Map}\sb{\LW(X)}(h, y)}
is a bijection.
\end{proof}
\begin{thm}\label{thm6.7}
The $H$-equivalences and $H$-cofibrations give $X$ the structure of a quasi-category
with weak equivalences and cofibrations.
\end{thm}
\begin{proof}
The $H$-cofibrations form a class of cofibrations by a standard argument
(see \cite[Proposition 5.6]{BarwL}). The dual of (2) of \ref{def1.2} is just
Lemma \ref{lem6.6}. The dual of Condition (1) of \ref{def1.2} follows from the fact that
the $H$-trivial cofibrations with $H$-cofibrant source are precisely the trivial
cofibrations, and that pushouts of trivial cofibrations are trivial cofibrations.
\end{proof}
We also have a version of right Bousfield localization with respect to a class of
objects in a quasi-category with fibrations, which is much easier to establish:
\begin{thm}\label{thm4.10}
Suppose that \w{(X,\operatorname{Fib},\mathcal{W})} is a quasi-category with fibrations. Then we can equip $X$
with the structure of a quasi-category with fibrations and weak equivalences in which
the weak equivalences are the $H$-equivalences and the fibrations are those
of \w[.]{\operatorname{Fib}}
\end{thm}
\begin{proof}
Mapping spaces preserve limits, so $H$-equivalences are preserved under pullback.
Thus (1) of \ref{def1.2} holds. Condition (2) is immediate from the factorization axiom for
\w[,]{(X,\mathcal{W},\operatorname{Fib})} since a weak equivalence is in particular an $H$-equivalence.
\end{proof}
\setcounter{figure}{0}\section{The spectral sequence of a simplicial object}
\label{csssso}
Assume given a complete quasi-category with fibrations and weak equivalences
\w{\lra{X,\operatorname{Fib},\mathcal{W}}} as in Definition \ref{def1.2}, satisfying Assumptions \ref{ass3.1}.
Given a homotopy cogroup object $\mathfrak{h}$ in $X$ (or more generally, in any
suitable version of an \wwb{\infty,1}category), we explained in \cite{BMeadS} how to
associate to a simplicial object \w{x\sb{\bullet}} in $X$ its \emph{homotopy spectral sequence},
and provided a homotopy-invariant characterization of the differentials,
independent of the model of $\infty$-categories chosen. For this purpose, we first
require:
\begin{lemma}\label{lem8.2}
There is an adjunction of quasi-categories
$$
\mathfrak{h}\otimes (-):\mathcal{S} \leftrightarrows X :\mathbf{Map}\sb{X}(\mathfrak{h}, -)~,
$$
\noindent where the tensoring $\otimes$ of a locally presentable quasi-category
over simplicial sets is that given in \cite[Section 4.4.4]{LurieHTT}.
\end{lemma}
\begin{proof}
By \cite[Proposition A.3.7.6]{LurieHTT}, \w[,]{X \simeq \mathfrak{B}(\mathscr{A})} where $\mathscr{A}$ is
the underlying simplicial category of a combinatorial simplicial model category ${\mathbf A}$.
There is a Quillen adjunction
$$
\mathfrak{h}\otimes (-):\Ss \leftrightarrows {\mathbf A} :\operatorname{map}\sb{\mathscr{A}}(\mathfrak{h}, -)~,
$$
where $\otimes$ comes from the simplicial structure of ${\mathbf A}$.
By \cite[Proposition 5.2.4.6]{LurieHTT}, this induces an adjunction of quasi-categories.
\end{proof}
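Concretely, the adjunction of Lemma \ref{lem8.2} amounts to a natural equivalence of mapping
spaces
$$
\operatorname{Map}\sb{X}(\mathfrak{h}\otimes K, y)~\simeq~
\operatorname{Map}\sb{\mathcal{S}}(K, \mathbf{Map}\sb{X}(\mathfrak{h}, y))
$$
\noindent for \w{K\in\mathcal{S}} and \w[,]{y \in X} which is just the defining property of the
tensoring of \cite[Section 4.4.4]{LurieHTT}.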
\begin{mysubsection}{The spectral sequence of a simplicial object}
\label{sssso}
We briefly recall the necessary background from \cite{BMeadS}, in the case
where $X$ is a quasi-category (with enough colimits and limits), $\mathfrak{h}$ is a
compact homotopy cogroup object of \w[,]{\LW(X)} and \w{x\sb{\bullet}\in sX} is a simplicial
object in $X$. The associated spectral sequence, originally due to Quillen in
\cite{QuiS} (see also \cite{BFrieH}), has the form:
\begin{myeq}\label{eqssso}
E\sp{1}\sb{n,p}~=~\pi\sb{p}\operatorname{Map}\sb{X}(\mathfrak{h}, x\sb{n})~
~\implies~\pi\sb{p+n}\operatorname{Map}\sb{X}(\mathfrak{h},\|x\sb{\bullet}\|)~,
\end{myeq}
\noindent using the mapping spaces of Lemma \ref{lem8.2}.
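For orientation, we recall the standard description of the \ww{d\sp{1}}-differential for
spectral sequences of this type (see \cite{BFrieH}): it is induced by the alternating sum of
the face maps of \w[:]{x\sb{\bullet}}
$$
d\sp{1}~=~\sum\sb{i=0}\sp{n}\,(-1)\sp{i}\,(d\sb{i})\sb{\ast}~:~
\pi\sb{p}\operatorname{Map}\sb{X}(\mathfrak{h},x\sb{n})~\to~
\pi\sb{p}\operatorname{Map}\sb{X}(\mathfrak{h},x\sb{n-1})~,
$$
\noindent so that for each fixed $p$ the \ww{E\sp{1}}-term is the unnormalized chain complex
of the simplicial group \w[.]{\pi\sb{p}\operatorname{Map}\sb{X}(\mathfrak{h},x\sb{\bullet})}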
The spectral sequence is in fact determined by the restriction of \w{x\sb{\bullet}} to \w{s\sb{+}X}
(see \S \ref{snac}). This is also true of \w[,]{\|x\sb{\bullet}\|} the
\emph{geometric realization} (colimit) of \w{x\sb{\bullet}} (see \cite[Appendix A]{SegCC}).
Therefore, from now on we shall work with restricted simplicial objects.
With some exceptions (see, e.g., \cite[\S 4]{BKanSQ}) the only usable spectral
sequences are those of abelian groups, so it is not a significant restriction
to assume that \w{\mathfrak{h}=\Sigma\hat{\fy}} is in fact a suspension in $X$.
In contrast to the usual approach using exact couples
\w[,]{(E\sp{r},D\sp{r})} for our purposes we work throughout by
representing a class $\gamma$ in \w{E\sp{r}\sb{n,p}} of the
spectral sequence by elements
\w{[f]\in\pi\sb{p}\operatorname{Map}\sb{X}(\hat{\fy},x\sb{n})} in
\w[.]{E\sp{1}\sb{n,p}} Each such \w{[f]} is a homotopy invariant
in $X$, or in the Reedy model structure on \w[,]{s\sb{+}X} but of
course we may have many such representatives for the given
\w[.]{\gamma=\lra{f}} From the general theory we know that the
differential \w{d\sp{r}(\gamma)} vanishes if and only if there
exists a representative \w{[f]} for which \w{\mathfrak{d}\sp{r}[f]=0} in
\w[,]{E\sp{1}\sb{n-r,p+r-1}} where \w{\mathfrak{d}\sp{r}[f]} is defined by
a particular choice of lifts in terms of the original exact couple
\w[.]{(E\sp{1},D\sp{1})}
Specializing to the Bousfield-Friedlander spectral sequence of a simplicial space
(in the version of \cite[\S 8]{DKStovB}), we showed in \cite[Theorem 3.11]{BMeadS}
that if \w{\gamma\in E\sp{r}\sb{n,p}} is represented by a map
\w[,]{f:\Sigma\sp{p}\hat{\fy}\to x\sb{n}} then \w{[f]} survives to \w{E\sp{r}\sb{n,p}}
(for \w[)]{r\geq 2} if and only if we can fit $f$ into a diagram in $X$ of the form:
\myvdiag[\label{diagnk}]{
\Sigma\sp{p}\hat{\fy} \ar@/^1pc/[rr]\sp{d\sb{0}=0}\sb{\vdots}
\ar@/_1pc/[rr]\sb{d\sb{n}=0} \ar[dd]\sb{f} && 0
\ar@/^1pc/[rr]\sp{d\sb{0}=0}\sb{\vdots} \ar@/_1pc/[rr]\sb{d\sb{n-1}=0} \ar[dd] &&
0 \ar[dd] &\cdots\cdots & 0 \ar[dd] \\
&& && && \\
x\sb{n} \ar@/^1pc/[rr]\sp{d\sb{0}}\sb{\vdots} \ar@/_1pc/[rr]\sb{d\sb{n}} && x\sb{n-1}
\ar@/^1pc/[rr]\sp{d\sb{0}}\sb{\vdots} \ar@/_1pc/[rr]\sb{d\sb{n-1}} && x\sb{n-2} &
\cdots \cdots & x\sb{n-r+1}~.
}
\noindent Moreover, the value of the differential \w{\mathfrak{d}\sp{r}([f])} is represented in
\w{E\sp{1}\sb{n-r,p+r-1}} by a map \w{\Sigma\sp{p+r-1}\hat{\fy}\to x\sb{n-r}} constructed by
the universal property of the diagram of the form \wref[,]{diagnk}
(see \cite[Corollary 6.11]{BMeadS}), so the fact that
\w{\mathfrak{d}\sp{r}([f])} is zero in \w{E\sp{r}\sb{n-r,p+r-1}} -- \ in other words, having
\emph{some} nullhomotopic representative in \w[,]{E\sp{1}\sb{n-r,p+r-1}} for some choice
of such an extension \ -- \ is equivalent to \w{\lra{f}} surviving to
\w[.]{E\sp{r+1}\sb{n,p}} See \S \ref{sdiagdesc} below for an alternative construction
of \w[.]{\mathfrak{d}\sp{r}([f])}
\end{mysubsection}
\begin{remark}\label{rerterm}
In order to calculate the usual \ww{E\sp{r+1}\sb{n,p}}-term of a spectral sequence
described as above using the \ww{E\sp{1}}-term alone, we need to know not only which
classes \w{[f]\in E\sp{1}\sb{n,p}} survive to the \ww{E\sp{r+1}}-term, but also which
of them are hit by the \ww{d\sp{m}}-differential \ -- \ or more precisely, are in the
image of \w{\mathfrak{d}\sp{m}} -- \ for some \w[.]{1\leq m\leq r}
This involves an analysis of all possible diagrams of the form \wref{diagnk} for
the given values of \w[,]{(n,r,p)} as well as those for \w{(n+m-1,m,p-m+1)}
with \w[.]{p\leq m\leq r}
However, when \w[,]{n<r} diagrams of the form \wref{diagnk} do not exist, so we must
take into account all diagrams starting in dimension $n$ and terminating in
\w{x\sb{1}} used to calculate \w{\mathfrak{d}\sp{n}} itself in order to know if
\w{[f]\in E\sp{1}\sb{n,p}} survives to the \ww{E\sp{n+1}}-term, and thus by default to
the \ww{E\sp{r+1}}-term.
\end{remark}
\begin{mysubsection}{Chain complexes}
\label{scc}
We now explain how the spectral sequence of a simplicial object may be described in
more traditional terms, using chain complexes:
If $\mathcal{C}$ is a pointed category, let \w{\mbox{\sf Ch}(\mathcal{C})} denote the category of
(non-negatively graded) chain complexes in $\mathcal{C}$: that is, commuting diagrams of the form
\mydiagram[\label{eqchaincx}]{
\dotsc A\sb{n}\ar[r]\sp{\df{n}} \ar[rd] |!{[d];[r]}\hole &
A\sb{n-1} \ar[r]\sp{\df{n-1}} \ar[rd] |!{[d];[r]}\hole &
A\sb{n-2}\dotsc &
\dotsc A\sb{2} \ar[r]^(0.65){\df{2}} \ar[rd] |!{[d];[r]}\hole &
A\sb{1}\ar[r]\sp{\df{1}} & A\sb{0}~,\\
\dotsc \ast \ar[ru] & \ast \ar[ru] & \ast\dotsc & \dotsc \ast\ar[ru] & \ast\ar[ru] &
}
\noindent so \w{\df{i-1}\circ\df{i}=0} for \w[.]{i\geq 2}
For any \w{A\in\mathcal{C}} and \w[,]{n\geq 0} let \w{A\oS{n}} be the chain complex having
$A$ in dimension $n$, and $0$ elsewhere.
We denote by \w{\Chn{n}{k}(\mathcal{C})} the category of \wwb{n,k}\emph{truncated}
chain complexes in $\mathcal{C}$ \ -- \ that is, diagrams \wref{eqchaincx} starting at $n$ and
ending at $k$ \ -- \ with truncation functor \w[.]{\tnk{n}{k}:\mbox{\sf Ch}(\mathcal{C})\to\Chn{n}{k}(\mathcal{C})}
\end{mysubsection}
\begin{remark}\label{radjoints}
If $\mathcal{C}$ has enough (co)limits, \w{\tnk{n}{k}} has a left adjoint
\w{\lnk{n}{k}:\Chn{n}{k}(\mathcal{C})\to\mbox{\sf Ch}(\mathcal{C})} defined by:
$$
\lnk{n}{k}(D\sb{\ast})\sb{i}~=~
\begin{cases}
D\sb{i}& \text{if}~k\leq i\leq n\\
\operatorname{Coker}(\partial\sb{k+1})& \text{if}~i=k-1\\
0&\text{otherwise,}
\end{cases}
$$
\noindent as well as a right adjoint
\w{\rnk{n}{k}:\Chn{n}{k}(\mathcal{C})\to\mbox{\sf Ch}(\mathcal{C})} defined by:
$$
\rnk{n}{k}(C\sb{\ast})\sb{i}~=~
\begin{cases}
C\sb{i}& \text{if}~k\leq i\leq n\\
\operatorname{Ker}(\partial\sb{n}) & \text{if}~i=n+1\\
0&\text{otherwise.}
\end{cases}
$$
\noindent Note that if $\mathcal{C}$ is a model category and \w{D\sb{\ast}} is Reedy cofibrant in
\w{\Chn{n}{k}(\mathcal{C})} (see \cite[\S 15.3]{HirM}), then \w{\operatorname{Coker}(\partial\sb{k+1})} is
the homotopy colimit of the truncated version of \wref[.]{eqchaincx} Similarly,
if \w{C\sb{\ast}} is Reedy fibrant, \w{\operatorname{Ker}(\partial\sb{n})} is the homotopy limit.
For \w[,]{C\sb{\ast}\in\mbox{\sf Ch}(\mathcal{C})} we write \w{\cskn{n}{k}C\sb{\ast}} for
\w[,]{\rnk{n}{k}\tnk{n}{k}C\sb{\ast}} with the unit \w{\enk{n}{k}:C\sb{\ast}\to\cskn{n}{k}C\sb{\ast}}
a fibration when \w{C\sb{\ast}} is fibrant.
\end{remark}
\begin{mysubsection}{Simplicial objects and chain complexes}
\label{ssocc}
If $\mathcal{C}$ is a pointed category with enough limits, the $n$-th \emph{Moore chains} object
of a restricted simplicial object \w{\bX\sb{\bullet}\in\mathcal{C}\sp{\rDel\op}} is defined to be:
\begin{myeq}\label{eqmoor}
C\sb{n}\bX\sb{\bullet}~:=~\cap\sb{i=1}\sp{n}\operatorname{Ker}\{d\sb{i}:X\sb{n}\to X\sb{n-1}\}~,
\end{myeq}
\noindent with differential
\w[.]{\df{n}:=d\sb{0}\rest{C\sb{n}\bX\sb{\bullet}}:C\sb{n}\bX\sb{\bullet}\to C\sb{n-1}\bX\sb{\bullet}}
The $n$-th \emph{Moore cycles} object is thus
\w[,]{Z\sb{n}\bX\sb{\bullet}~:=~\cap\sb{i=0}\sp{n}\operatorname{Ker}\{d\sb{i}:X\sb{n}\to X\sb{n-1}\}}
with \w{v\sb{n}:Z\sb{n}\bX\sb{\bullet}\to C\sb{n}\bX\sb{\bullet}} the inclusion. Note that \w{\df{n}}
factors through \w[.]{\wdf{n}:C\sb{n}\bX\sb{\bullet}\to Z\sb{n-1}\bX\sb{\bullet}}
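For example (a standard illustration), if \w{\bX\sb{\bullet}} is a simplicial abelian group, the
Moore complex begins
$$
C\sb{0}\bX\sb{\bullet}=X\sb{0}~,\hspace*{5 mm} C\sb{1}\bX\sb{\bullet}=\operatorname{Ker}(d\sb{1})~,
\hspace*{5 mm} C\sb{2}\bX\sb{\bullet}=\operatorname{Ker}(d\sb{1})\cap\operatorname{Ker}(d\sb{2})~,
$$
\noindent with each differential the restriction of \w[,]{d\sb{0}} and the classical theorem
of Moore identifies \w{\pi\sb{n}\bX\sb{\bullet}} with \w[.]{H\sb{n}(C\sb{\ast}\bX\sb{\bullet})}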
The Moore chains functor \w{C\sb{\ast}:s\sb{+}\mathcal{C}\to\mbox{\sf Ch}(\mathcal{C})} has a left
adjoint (and right inverse) \w[,]{\mathcal{E}:\mbox{\sf Ch}(\mathcal{C})\to s\sb{+}\mathcal{C}} with
\w[,]{(\mathcal{E} A\sb{\ast})\sb{n}=A\sb{n}} \w[,]{d\sb{0}\sp{n}=\df{n}} and
\w{d\sb{i}\sp{n}=0} for \w[.]{i\geq 1}
If $\mathcal{C}$ has enough colimits, the $n$-th \emph{latching object} for \w{\bX\sb{\bullet}\in\mathcal{C}\sp{\Delta\op}}
is the colimit
\begin{myeq}\label{eqlatch}
L\sb{n}\bX\sb{\bullet}~:=~\colimit{\theta\sp{\operatorname{op}}:[\mathbf{k}]\to[\mathbf{n}]}\,X\sb{k}~,
\end{myeq}
\noindent where $\theta$ ranges over surjections \w{[\mathbf{n}]\to[\mathbf{k}]} in
$\mathbf{\Delta}$. Any iterated degeneracy \w{s\sb{I}:{\mathbf X}\sb{k}\to{\mathbf X}\sb{n}} factors through
the obvious map \w[.]{\sigma\sb{n}:L\sb{n}\bX\sb{\bullet}\to{\mathbf X}\sb{n}}
If \w{\bX\sb{\bullet}} is an abelian group object in \w[,]{\mathcal{C}\sp{\Delta\op}} the
natural map \w{C\sb{n}\bX\sb{\bullet}\to\operatorname{Coker}(\sigma\sb{n})} is an
isomorphism, by \cite[Corollary (1.12)]{DolH}, so if we set
\w[,]{\oX{n}:=C\sb{n}\bX\sb{\bullet}} we have
\begin{myeq}\label{eqsplatch}
{\mathbf X}\sb{n}~\cong~ L\sb{n}\bX\sb{\bullet}\oplus\oX{n}\hspace*{5 mm} \text{for each}\hspace*{5 mm} n\geq 0~,
\end{myeq}
\noindent and thus by induction (starting with \w[):]{\oX{0}={\mathbf X}\sb{0}}
\begin{myeq}\label{eqsplitlatch}
L\sb{n}\bX\sb{\bullet}~:=~
\coprod\sb{0\leq k\leq n-1}~~\coprod\sb{0\leq i\sb{1}<\dotsc<i\sb{n-k}\leq n-1}~
\oX{k}~,
\end{myeq}
\noindent with each summand on the right mapping to the left by
\w[.]{s\sb{i\sb{n-k}}\dotsc s\sb{i\sb{2}}s\sb{i\sb{1}}}
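For instance, in the abelian-group-object case, \wref{eqsplitlatch} reads in low dimensions
$$
L\sb{1}\bX\sb{\bullet}~\cong~\oX{0}\hspace*{5 mm}\text{and}\hspace*{5 mm}
L\sb{2}\bX\sb{\bullet}~\cong~\oX{1}\oplus\oX{1}\oplus\oX{0}~,
$$
\noindent with the copy of \w{\oX{0}} included in \w{{\mathbf X}\sb{1}} by \w[,]{s\sb{0}} the two
copies of \w{\oX{1}} included in \w{{\mathbf X}\sb{2}} by \w{s\sb{0}} and \w[,]{s\sb{1}} and the
copy of \w{\oX{0}} in \w{{\mathbf X}\sb{2}} by \w[.]{s\sb{1}s\sb{0}}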
Note that the inclusion \w{\sb{+}\!\Delta\hookrightarrow\Delta} induces a forgetful functor
\w[,]{\mathcal{U}:\mathcal{C}\sp{\Delta\op}\to\mathcal{C}\sp{\rDel\op}} and its left adjoint
\w{\mathcal{L}:\mathcal{C}\sp{\rDel\op}\to\mathcal{C}\sp{\Delta\op}} is given by
\w[.]{(\mathcal{L}\bX\sb{\bullet})\sb{n}=X\sb{n}\amalg L\sb{n}\bX\sb{\bullet}}
The adjunction \w{C\sb{\ast}\mathcal{U}:\mathcal{C}\sp{\Delta\op}\rightleftharpoons\mbox{\sf Ch}(\mathcal{C}):\mathcal{L}\mathcal{E}} can be viewed as
a version of the Dold-Kan correspondence in an arbitrary category \ -- \ which is not
generally an equivalence, unless $\mathcal{C}$ is abelian.
See \cite[\S 1]{BJTurnHI} for further details.
All this makes sense also in any pointed \wwb{\infty,1}category, such as a
quasi-category $X$ (see Remark \ref{rinfc}). Moreover, if \w{x\sb{\bullet}} in \w{sX}
and $\mathfrak{h}$ in $X$, are as in \S \ref{sssso}, and \w{\bX\sb{\bullet}:=\operatorname{Map}\sb{X}(\mathfrak{h},x\sb{\bullet})} in
\w[,]{s\Ss\sb{\ast}} then \wref{eqsplatch} and \wref{eqsplitlatch} still hold up to weak
equivalence. Moreover, this holds even if $\mathfrak{h}$ is just a cogroup object in
\w[,]{\operatorname{ho} X} in some cases (see \cite{BJTurnHI}).
\end{mysubsection}
\begin{mysubsection}{A diagrammatic description of the differentials}
\label{sdiagdesc}
Given \w{\lra{x\sb{\bullet},\mathfrak{h}=\Sigma\hat{\fy}}} in a quasi-category $X$ as in \S \ref{sssso},
let \w{\bX\sb{\bullet}:=\operatorname{Map}\sb{X}(\hat{\fy},x\sb{\bullet})} be the corresponding homotopy coherent
simplicial space. By \cite{DKSmitH} (see also \cite[IX]{GJarS}), we can replace it
by a strict simplicial space in \w[,]{\Ss\sb{\ast}\sp{\Delta\op}} and further assume that \w{\bX\sb{\bullet}} is
Reedy fibrant.
In this case, as explained in \cite[\S 6]{BMeadS}, in order to obtain a
diagram \wref{diagnk} in $X$, it suffices to find a homotopy coherent diagram
\myvdiag[\label{diagnksp}]{
{\mathbf A} \ar@/^1pc/[rr]\sp{d\sb{0}=0}\sb{\vdots}
\ar@/_1pc/[rr]\sb{d\sb{n}=0} \ar[dd]\sb{\hat{f}} && 0
\ar@/^1pc/[rr]\sp{d\sb{0}=0}\sb{\vdots} \ar@/_1pc/[rr]\sb{d\sb{n-1}=0} \ar[dd] &&
0 \ar[dd] &\cdots\cdots & 0 \ar[dd] \\
&& && && \\
{\mathbf X}\sb{n} \ar@/^1pc/[rr]\sp{d\sb{0}}\sb{\vdots} \ar@/_1pc/[rr]\sb{d\sb{n}} && {\mathbf X}\sb{n-1}
\ar@/^1pc/[rr]\sp{d\sb{0}}\sb{\vdots} \ar@/_1pc/[rr]\sb{d\sb{n-1}} && {\mathbf X}\sb{n-2} &
\cdots \cdots & {\mathbf X}\sb{n-r+1}~.
}
\noindent in \w[.]{\Ss\sb{\ast}} Since \w[,]{\mathfrak{h}=\Sigma\hat{\fy}}
we may take \w{{\mathbf A}=S\sp{p}} for \w[,]{p\geq 1} where $\hat{f}$ corresponds to
\w[.]{[f]\in\pi\sb{p}\operatorname{Map}\sb{X}(\hat{\fy},x\sb{\bullet})}
We extend the usual notation \w{\CsA{n}} for the cone on the $n$-fold suspension of ${\mathbf A}$
by setting \w{\CsA{-1}:={\mathbf A}} and \w[,]{\CsA{-2}:=\ast} with \w{i\sp{n}:\sA{n}\to\CsA{n}}
the inclusion and \w{q\sp{n}:\CsA{n}\to\sA{n+1}} the quotient map.
It was shown in \cite[\S 2.B]{BJTurnHI} that one can extend
\w{\hat{f}:{\mathbf A}\to{\mathbf X}\sb{n}} to a diagram \wref{diagnksp}
as above if and only if we have a solid map of truncated chain complexes
\w{F:\bD\sb{\ast}\to\tnk{n}{k}C\sb{\ast}\bX\sb{\bullet}} in \w{\Chn{n}{k}(\Ss\sb{\ast})} of the form
\mydiagram[\label{eqgrid}]{
{\mathbf D}\sb{n}=\CsA{-1}={\mathbf A} \ar[dd]\sb{\df{n}} \ar[dr]\sp{q\sp{-1}}
\ar[rrr]\sp{F\sb{n}} & & &
C\sb{n}\bX\sb{\bullet} \ar[dl]\sp{\wdf{n}} \ar[dd]\sp{\df{n}}\\
& \sA{0} \ar[dl]\sb{i\sp{0}} \ar[r]\sp{a\sb{n-1}}& Z\sb{n-1}{\bX\sb{\bullet}}\ar[dr]\sp{v\sb{n-1}}\\
{\mathbf D}\sb{n-1}=\CsA{0} \ar@{.>}[d] \ar[rrr]\sp{F\sb{n-1}} & & & C\sb{n-1}\bX\sb{\bullet}\ar@{.>}[d]\\
{\mathbf D}\sb{k}=\CsA{n-k-1} \ar[dd]\sb{\df{k}} \ar[dr]\sp{q\sp{n-k-1}}
\ar[rrr]\sp{F\sb{k}} & & & C\sb{k}{\bX\sb{\bullet}}\ar[dl]\sp{\wdf{k}}
\ar[dd]\sp{\df{k}} \\
& \sA{n-k} \ar[dl]\sb{i\sp{n-k}} \ar[r]\sp{a\sb{k-1}} &
Z\sb{k-1}{\bX\sb{\bullet}} \ar[dr]\sp{v\sb{k-1}}\\
{\mathbf D}\sb{k-1}=\CsA{n-k} \ar@{-->}[rrr]\sp{F\sb{k-1}} & & & C\sb{k-1}\bX\sb{\bullet}
}
\noindent for \w[.]{k=n-r+1} Here \w[,]{j\sb{n}\circ F\sb{n}\simeq\hat{f}} for
\w{j\sb{n}:C\sb{n}\bX\sb{\bullet}\hookrightarrow{\mathbf X}\sb{n}} as above, and \w{\bD\sb{\ast}} is a cofibrant replacement
for \w{{\mathbf A}\oS{n}} in \w{\Chn{n}{k}(\Ss\sb{\ast})} (in the notation of \S \ref{scc}),
which extends to \w{\mbox{\sf Ch}(\Ss\sb{\ast})} in the obvious way. In addition, we have
\w{\lnk{n}{k}\bD\sb{\ast}} ending in \w{\sA{n-k}} in dimension \w[,]{k-1} by Remark
\ref{radjoints}, with $F$ inducing the map \w[,]{c(F):=v\sb{k-1}\circ a\sp{k-1}} by
adjunction. This map represents \w{\mathfrak{d}\sp{r}([f])} in
\w[,]{E\sp{1}\sb{n-r,p+r-1}} and it must be nullhomotopic in order for \w{F\sb{k-1}}
to exist.
\end{mysubsection}
\begin{remark}\label{rerequivs}
Since \w{\bD\sb{\ast}} is cofibrant, \w{C\sb{\ast}\bX\sb{\bullet}} is fibrant, and \w{i\sp{n-k}} is a
cofibration, we have a (homotopy) pullback diagram
\mydiagram[\label{eqhpb}]{
\operatorname{map}\sb{\Chn{n}{k-1}(\mathcal{C})}(\tnk{n}{k-1}\bD\sb{\ast},\tnk{n}{k-1}C\sb{\ast}\bX\sb{\bullet}) \ar[d]
\ar@{->>}[rr]\sp{(\tnk{n}{k})\sb{\ast}} &&
\operatorname{map}\sb{\Chn{n}{k}(\mathcal{C})}(\tnk{n}{k}\bD\sb{\ast},\tnk{n}{k}C\sb{\ast}\bX\sb{\bullet}) \ar[d]\sb{c} \\
\ast\simeq\operatorname{map}\sb{\mathcal{C}}({\mathbf D}\sb{k-1},C\sb{k-1}\bX\sb{\bullet}) \ar@{->>}[rr]\sp{(i\sp{n-k})\sp{\ast}}
&&\operatorname{map}\sb{\mathcal{C}}(\sA{n-k},C\sb{k-1}\bX\sb{\bullet})~,
}
\noindent for \w{c(F):=v\sb{k-1}\circ a\sp{k-1}} as above. In fact, by Remark
\ref{radjoints} we have a natural identification
\begin{myeq}\label{eqadjcha}
\operatorname{map}\sb{\Chn{n}{k}(\mathcal{C})}(\tnk{n}{k}\bD\sb{\ast},\tnk{n}{k}C\sb{\ast}\bX\sb{\bullet})~=~
\operatorname{map}\sb{\mbox{\sf Ch}(\mathcal{C})}(\lnk{n}{k}\bD\sb{\ast},C\sb{\ast}\bX\sb{\bullet})~,
\end{myeq}
\noindent and the map $c$ on the right hand side of \wref{eqadjcha} is induced by
restriction to dimension \w[.]{k-1}
Note that \w{\operatorname{map}\sb{\Chn{n}{k}(\mathcal{C})}(\tnk{n}{k}\bD\sb{\ast},\tnk{n}{k}C\sb{\ast}\bX\sb{\bullet})} splits as
a disjoint union of subspaces corresponding to distinct \w{[f]\in E\sp{1}\sb{n,p}}
which survive to \w{E\sp{r}} (each of which may consist of several connected
components).
Evidently, a map \w{g:x\sb{\bullet}\to y\sb{\bullet}} in \w{sX} or \w{s\sb{+}X} (or the corresponding map
\w{\hat{g}:\bX\sb{\bullet}\to\bY\sb{\bullet}} in \w{\Ss\sb{\ast}\sp{\Delta\op}} or \w[)]{\Ss\sb{\ast}\sp{\rDel\op}} which induces
a weak equivalence on the right vertical arrow of \wref{eqhpb} for all \w{n\geq r}
will induce a weak equivalence
$$
\operatorname{map}\sb{\Chn{n}{k-1}(\mathcal{C})}(\tnk{n}{k-1}\bD\sb{\ast},\tnk{n}{k-1}C\sb{\ast}\bX\sb{\bullet})~\xra{g\sb{\ast}}~
\operatorname{map}\sb{\Chn{n}{k-1}(\mathcal{C})}(\tnk{n}{k-1}\bD\sb{\ast},\tnk{n}{k-1}C\sb{\ast}\bY\sb{\bullet})~,
$$
\noindent and thus an isomorphism in the \ww{E\sp{r+1}}-terms of the spectral sequences
of \w{\lra{x\sb{\bullet},\mathfrak{h}}} and \w[.]{\lra{y\sb{\bullet},\mathfrak{h}}} However, this is far from
being necessary.
\end{remark}
\setcounter{figure}{0}\section{The spectral sequence of a simplicial object and localization}
\label{cssl}
Let \w{\lra{X,\operatorname{Fib},\mathcal{W}}} be a complete quasi-category with fibrations and weak
equivalences as in Definition \ref{def1.2}, satisfying Assumptions \ref{ass3.1}.
By Corollary \ref{cor1.6}, \w{s\sb{+}X} (and its various truncations)
may be equipped with Reedy fibrations and levelwise equivalences to
form a quasi-category with fibrations and weak equivalences.
It turns out that there are two types of localization relevant to the spectral
sequence of a simplicial object \ -- \ both using the fact that the differentials
are determined by mapping out of finite diagrams as in \wref{diagnksp}
or \wref[.]{eqgrid}
\begin{mysubsection}{Postnikov sections and spectral sequences}
\label{spsss}
The first type of localization is based on the oldest known form of localization
in homotopy theory \ -- \ the Postnikov section:
As noted in \S \ref{sssso}, we may assume that \w{\mathfrak{h}=\Sigma\hat{\fy}}
is a suspension in $X$, so that \w{{\mathbf A}=S\sp{p}} in \wref{eqgrid}
for \w[.]{p\geq 0} We may therefore replace the map of truncated chain complexes
\w{F:\bD\sb{\ast}\to\tnk{n}{k}C\sb{\ast}\bX\sb{\bullet}} in \w{\Chn{n}{k}(\Ss\sb{\ast})} in \wref{eqgrid}
under the $p$-fold loop-suspension adjunction in \w{\Ss\sb{\ast}} by
\w[,]{\widetilde{F}:\widetilde{\bD}\sb{\ast}\to\tnk{n}{k}\Omega\sp{p}C\sb{\ast}\bX\sb{\bullet}} where \w[,]{\widetilde{\bD}\sb{\ast}}
a cofibrant replacement of \w{\widetilde{\bA}\oS{n}} for \w[,]{\widetilde{\bA}=\bS{0}} is the $p$-fold
desuspension of \w[.]{\bD\sb{\ast}} Note that all objects in \w{\widetilde{\bD}\sb{\ast}} are of dimension
\w[,]{\leq n-k=r-1} so $\widetilde{F}$ factors through the \wwb{r-1}Postnikov section
of \w[.]{\tnk{n}{k}\Omega\sp{p}C\sb{\ast}\bX\sb{\bullet}} The same is true of the corresponding map
$\widehat{F}$ in the right hand side of \wref[,]{eqadjcha} and thus also for
\w[,]{c(\widetilde{F})} adjoint to \w[.]{c(F)}
Note that the adjunction
\begin{myeq}\label{eqchsimp}
\mathcal{E}\colon\mbox{\sf Ch}(\Ss\sb{\ast})~\leftrightarrows~\Ss\sb{\ast}\sp{\rDel\op}\colon C\sb{\ast}
\end{myeq}
\noindent of \S \ref{ssocc} is homotopy meaningful, by \cite[Lemma 2.7]{StoV},
and allows us to convert $\widetilde{F}$ into a map in \w{\Ss\sb{\ast}\sp{\rDell{k,n}}}
of the form \wref{diagnksp} in which we have a cofibrant replacement for the top
row (since $\mathcal{E}$ preserves Reedy cofibrancy). Using Remark \ref{rerterm}, we deduce:
\end{mysubsection}
\begin{prop}\label{rtrunc}
Given \w{x\sb{\bullet}\in sX} as in \S \ref{sssso} and $\hat{\fy}$ as above, the representation of
\w{\mathfrak{d}\sp{r}([f])} in \w{E\sp{1}\sb{n-r,p+r-1}} associated to the
extension \ -- \ and thus \w{d\sp{r}(\lra{f})\in E\sp{r}\sb{n-r,p+r-1}} itself, as we
run over all possible extensions \ -- \ depends only on
\w[,]{\Po{r-1}\Omega\sp{p}\operatorname{Map}\sb{X}(\hat{\fy},x\sb{\bullet})}
which thus determines \w[,]{E\sp{r+1}\sb{\ast,\ast}} and in particular
allows us to determine whether \w{[f]} survives to \w[.]{E\sp{r+1}}
\end{prop}
This result can be refined using the following:
\begin{defn}\label{dzmnp}
Given $\hat{\fy}$ as above, for any \w{p\geq 0} and
\w{1\leq m \le n} we let \w{H(n,m,\Sigma\sp{p}\hat{\fy})} denote the diagram
\mywdiag[\label{eq1}]{
& \Sigma\sp{p}\hat{\fy} \ar@<3ex>[rr]\sp{d\sb{0} = 0} \ar@<0.5ex>[rr]\sp{d\sb{1} = 0}
\ar@<-2.5ex>[rr]\sb{d\sb{n} = 0}\sp{\vdots} && 0 \ar@<2ex>[rr]\sp{d\sb{0}}
\ar@<-1ex>[rr]\sb{d\sb{n-1}}\sp{\vdots} && 0
\ar@<2ex>[rr]\sp{d\sb{0}} \ar@<-1ex>[rr]\sb{d\sb{n-2}}\sp{\vdots}&& \cdots
\ar@<2ex>[rr]\sp{d\sb{0}} \ar@<-1ex>[rr]\sb{d\sb{n-m+1}}\sp{\vdots} && 0~\\
\text{dimension:} & n && n-1 && n-2 &&\dotsc&& n-m
}
\noindent in \w{\spnX{n-m,n}} (unique up to a contractible space of choices,
by the universal property of $0$), and let
\begin{myeq}\label{eqhr}
\begin{split}
\mathcal{H}\sp{r}(\mathfrak{h})~:=~&
\bigcup\sb{p\geq 1}~\bigcup\sb{n\geq 0}~\left[\bigcup\sb{1\leq m\leq\min\{p,r\}}~
\{H(n+m,m-1,\Sigma\sp{p-m+1}\hat{\fy})\}\right]\\
&~\cup~
\bigcup\sb{n\geq r-1}\,\{H(n,r-1,\Sigma\sp{p}\hat{\fy})\}
~\cup~\bigcup\sb{n<r-1}\,\{H(n,n-1,\Sigma\sp{p}\hat{\fy})\}
\end{split}
\end{myeq}
\noindent with \w{\mathcal{H}(\mathfrak{h}):=\bigcup\sb{r=2}\sp{\infty} \mathcal{H}\sp{r}(\mathfrak{h})} the collection
of all such diagrams.
\end{defn}
\begin{remark}\label{rdoublec}
The reader will note that the list in \wref{eqhr} has repetitions; the reason is that
the first set of objects of the form \w{H(n+m,-,-)} are used
to identify when \w{[f]\in E\sp{1}\sb{n,p}} is in the image of the (earlier)
differentials, while the next two sets, of the form \w[,]{H(n,-,-)} are used to verify
that \w{[f]} is a \ww{d\sp{r}}-cycle. Thus for the first set, we are only
interested in maps in the top right corner of \wref{eqhpb} with non-trivial image
under $c$, while for the second case we want the fiber of $c$ (see Remark
\ref{rerequivs}).
One could use this distinction to further refine the localizations defined below, but
we shall not pursue this idea further here.
\end{remark}
\begin{defn}\label{drstem}
The various inclusions
\begin{myeq}\label{eqrestdiag}
\rDell{m,n}~\hookrightarrow~\rDell{m',n}
\end{myeq}
\noindent induce a partial order on the subset of diagrams in \w{\mathcal{H}(\mathfrak{h})}
with a fixed $p$ and $n$.
For any quasi-category $X$ and \w{a,b\in X} the adjunction
\w{\operatorname{Map}\sb{X}(\Sigma a,b)\simeq\Omega\operatorname{Map}\sb{X}(a,b)} induces natural maps
\begin{myeq}\label{eqstems}
\Po{r}\operatorname{Map}\sb{X}(\Sigma a,b)\xra{\simeq}\Po{r}\Omega\operatorname{Map}\sb{X}(a,b)\to
\Po{r-1}\Omega\operatorname{Map}\sb{X}(a,b)\xra{\simeq}\Omega\Po{r}\operatorname{Map}\sb{X}(a,b)
\end{myeq}
\noindent for each \w[.]{r\geq 1}
Thus given \w{\mathfrak{h}=\Sigma\hat{\fy}\in X} and \w{x\sb{\bullet}\in s\sb{+}X} as in \S \ref{sssso},
for each \w{r\geq 0} we define the \emph{$r$-stem for} \w{\lra{x\sb{\bullet},\mathfrak{h}}} to be the
system consisting of
\begin{myeq}\label{eqrstems}
\Po{m}\operatorname{Map}\sb{\LLc\sb{\mathcal{W}}(\spnX{n-m+1,n})}
(H(n,m,\Sigma\sp{p}\hat{\fy}),\tnk{\ast}{n-m+1,n}x\sb{\bullet})
\end{myeq}
\noindent for all \w[,]{H(n,m,\Sigma\sp{p}\hat{\fy})\in\mathcal{H}\sp{r}(\mathfrak{h})}
under the various maps induced by \wref{eqrestdiag} and \wref[.]{eqstems}
This is a more precise version of the ``spiral $r$-system'' of
\cite[Section 4]{BBlanS}.
\end{defn}
We then deduce from Proposition \ref{rtrunc} and \cite[Theorem 6.8]{BMeadS}
the following refinement of \cite[Theorem 4.13]{BBlanS}:
\begin{thm}\label{tstem}
Given \w{\mathfrak{h}\in X} and \w{x\sb{\bullet}\in sX} as in \S \ref{sssso}, for each \w[,]{r\geq 2}
the \ww{E\sp{r}}-term of the associated spectral sequence is determined
by the \wwb{r-2}stem of \w[.]{\lra{x\sb{\bullet},\mathfrak{h}}}
\end{thm}
\begin{mysubsection}{The Postnikov localization}
\label{sploc}
We can reformulate Theorem \ref{tstem} in terms of a pair of Bousfield localizations,
as follows:
The Postnikov section functor \w{\Po{r}:\Ss\sb{\ast}\to\Ss\sb{\ast}} is a
nullification with respect to \w[,]{\bS{r+1}} so it is
cocontinuous by \cite[Proposition 3.4.4]{HirM}, and continuous by
\cite[Theorem 9.9]{BousTH}. Thus we may think of it as a left
Bousfield localization \w{\mathcal{L}\sp{r}} on the quasi-category $Y$ of
pointed $\infty$-groupoids, with the usual class of fibrations
\w{\operatorname{Fib}} and weak equivalences $\mathcal{W}$ (corresponding to those of the
usual model category structure on \w[),]{\Ss\sb{\ast}} extended objectwise
to each functor category \w[.]{\Chn{n}{k}(Y)}
Similarly, if for each \w[,]{p\geq 0} we denote the $p$-\emph{connected cover} functor
by \w[,]{(-)\lra{p}:\Ss\sb{\ast}\to\Ss\sb{\ast}} we may replace
\w{\Po{r-1}\Omega\sp{p}\operatorname{Map}\sb{X}(\hat{\fy},x\sb{\bullet})} by
\w{\Po{r-1}(\operatorname{Map}\sb{X}(\hat{\fy},x\sb{\bullet})\lra{p})} in Proposition \ref{rtrunc}.
However, \w{(-)\lra{p}} is just the colocalization, or cellularization, with respect to
\w{\bS{p+1}} (see \cite[\S 3.1.7]{HirM}), so we may think of it as a right Bousfield
localization \w{\mathcal{R}\sb{p}} on \w{\lra{Y,\operatorname{Fib},\mathcal{W}}} as above, again extended to each
\w[.]{\Chn{n}{k}(Y)}
Now fix \w[,]{r\geq 2} and consider the quasi-category
\begin{myeq}\label{eqproduct}
Z\sp{r}~:=~\prod\sb{H(n,m,\Sigma\sp{p}\hat{\fy})\in\mathcal{H}\sp{r}(\mathfrak{h})}~\Chn{n}{n-m+1}(Y)\bp{p}
\end{myeq}
\noindent (the weak product of presheaf categories).
We may apply to \w{Z\sp{r}} the combined left and right Bousfield
localization taking the form \w{\mathcal{L}\sp{m}\circ\mathcal{R}\sb{p}} on the factor
\w[.]{\Chn{n}{n-m+1}(Y)\bp{p}} This defines the $r$-th \emph{Postnikov localization}
functor \w[.]{\mathcal{P}\sp{r}:Z\sp{r}\to Z\sp{r}} By Theorem \ref{thm3.5}
(and the corresponding
straightforward analogue for right localizations), \w{Z\sp{r}} has the structure of
a quasi-category with fibrations, in which the class of weak equivalences
\w[,]{\mathcal{W}\sp{r}} called \ww{\mathcal{P}\sp{r}}-\emph{equivalences}, are
Reedy (i.e., degree-wise) weak equivalences of the truncated chain complexes
\w{\Po{m-1}C\sb{\ast}\lra{p}} on the factor \w[.]{C\sb{\ast}\in\Chn{n}{n-m+1}(Y)\bp{p}}
Note that for any quasi-category $X$ as in \S \ref{sssso}, we have a sequence of
functors
\begin{equation*}
\begin{split}
s\sb{+}X~&\xra{\operatorname{Map}\sb{X}(\mathfrak{h},-)}~s\sb{+}\Ss\sb{\ast}~\xra{C\sb{\ast}}~
\mbox{\sf Ch}(Y)~\xra{\tnk{n}{n-m+1}}~\Chn{n}{n-m+1}(Y)\\
&~\xra{\Po{m-1}}~\Chn{n}{n-m+1}(Y)~\xra{(-)\lra{p}}~\Chn{n}{n-m+1}(Y)~
\end{split}
\end{equation*}
\noindent for $m$, $n$, and $p$ as in \wref[,]{eqproduct} which together define
a functor \w[.]{G\sp{r}:s\sb{+}X\to Z\sp{r}} We think of the component of
\w{G\sp{r}(x\sb{\bullet})} in \w{\Chn{n}{n-m+1}(Y)\bp{p}} as providing the
\wwb{m,p,n}\emph{window} for \w{x\sb{\bullet}} (in the sense of \cite[\S 2.2]{BBlanS}).
We see by Proposition \ref{rtrunc} that the spectral sequence for \w{x\sb{\bullet}\in s\sb{+}X}
(with respect to a fixed \w{\mathfrak{h}\in X} as in \S \ref{spsss}) is determined through
the \ww{E\sp{r+2}}-page by \w[,]{G\sp{r}x\sb{\bullet}} and conclude from Theorem \ref{tstem}:
\begin{corollary}\label{cstempr}
The \ww{\mathcal{P}\sp{r}}-equivalences induce isomorphisms of
the associated spectral sequences from the \ww{E\sp{r+2}}-term on.
\end{corollary}
\end{mysubsection}
\begin{remark}\label{rtruncations}
We might try to use \w{G\sp{r}:s\sb{+}X\to Z\sp{r}} to lift the notion of a
\ww{\mathcal{P}\sp{r}}-equivalence to \w{s\sb{+}X} itself, or just to \w[.]{\mbox{\sf Ch}(\Ss\sb{\ast})}
However, because a weak equivalence at all \wwb{m,p,n}-windows is just
a Reedy equivalence of (restricted) simplicial spaces, we would not gain
anything from the corresponding localization of \w[.]{s\sb{+}X}
On the other hand, the discussion in \S \ref{ssocc} allows us to reformulate
the Postnikov localization \w{\mathcal{P}\sp{r}} in terms of simplicial truncations:
more precisely, if we set
\begin{myeq}\label{eqsproduct}
\widehat{Z}\sp{r}~:=~
\prod\sb{H(n,m,\Sigma\sp{p}\hat{\fy})\in\mathcal{H}\sp{r}(\mathfrak{h})}~\spn{n-m+1,n}(Y)\bp{p}~,
\end{myeq}
\noindent the restrictions \w{C\sb{\ast}:\spn{n,k}(Y)\to\Chn{n}{k}(Y)} of the Moore chains
to each factor combine to define a functor \w[.]{F:\widehat{Z}\sp{r}\to Z\sp{r}}
Because each restricted \w{C\sb{\ast}} is right adjoint (and left inverse) to
\w[,]{\tnk{n}{k}\circ\mathcal{E}\circ\rnk{n}{k}} the functor $F$ satisfies assumptions (a)-(c)
of Proposition \ref{pinduce}, so we can use it to lift the \ww{\mathcal{P}\sp{r}}-structure of
a quasi-category with fibrations from \w{Z\sp{r}} to \w[.]{\widehat{Z}\sp{r}}
Of course, we could also have constructed it directly as in \S \ref{sploc}.
\end{remark}
\begin{mysubsection}{The $\mathcal{E}\sp{r}$-localization}
\label{serl}
The second form of localization we need is the following:
Note that since $X$ is locally presentable, for each \w{m \le n} we have a
left Kan extension adjunction
$$
\mbox{\sf LKE}\sb{n, m} :\spn{n,m}(X) \leftrightarrows s\sb{+}X : i\sp{\ast}\sb{n, m}
$$
\noindent by \cite[Proposition 6.4.9]{CisiH}. By Theorem \ref{thm4.10}, we therefore
have a right Bousfield localization of \w{s\sb{+}X} at the family
$$
\mathcal{E}\sp{r}~:=~
\{\mbox{\sf LKE}\sb{n, m}(H(n,m,\Sigma\sp{p}\hat{\fy}))\}\sb{H(n,m,\Sigma\sp{p}\hat{\fy})\in\mathcal{H}\sp{r}(\mathfrak{h})}~.
$$
(see Definition \ref{dzmnp}). Thus \w{s\sb{+}X} has a new structure of a quasi-category with
fibrations and weak equivalences, in which the latter are the
\ww{\mathcal{E}\sp{r}}-\emph{equivalences}.
The left Kan extension along a fully faithful inclusion of quasi-categories is also
fully faithful \cite[Proposition 4.3.2.17]{LurieHTT}, so we may deduce
from \S \ref{sssso} and Remark \ref{rerterm}:
\end{mysubsection}
\begin{corollary}\label{cerloc}
The \ww{\mathcal{E}\sp{r}}-equivalences induce \ww{E\sp{r}}-isomorphisms of
the associated spectral sequences.
\end{corollary}
\setcounter{figure}{0}\section{The Spectral Sequence of a Cosimplicial Object}
\label{cssrfcs}
We now investigate the dual to the spectral sequences considered so far:
namely, the homotopy spectral sequence of a cosimplicial object in an
\wwb{\infty,1}category. As in \cite{BMeadS}, we require a description which allows
us to analyze the differentials in the spectral sequence, when applied to a
representative in the \ww{E\sb{1}}-term \ -- \ much as we did in the simplicial
case in Sections \ref{csssso} and \ref{cssl}.
This was discussed briefly in \cite[Section 9]{BMeadS}, but the treatment there is
not sufficient for our purposes here, which depend on three basic requirements:
\begin{enumerate}
\renewcommand{\labelenumi}{(\arabic{enumi})~}
\item We want the differentials \ -- \ and thus the spectral sequence as a
whole \ -- \ to depend only on the underlying \emph{restricted} cosimplicial object
(forgetting the codegeneracies).
\item It should be possible to recover the $r$-th differential \wb{r\geq 2} from
the \wwb{r-2}Postnikov truncation in the associated simplicial category $\mathscr{X}$.
\item We want a model-independent, and in particular homotopy invariant,
description of the differentials.
\end{enumerate}
With these goals in mind, we now give a more detailed construction from scratch:
\begin{mysubsection}{Cosimplicial objects in quasi-categories}
\label{scoqc}
Suppose that we have a pointed, locally presentable quasi-category $X$, and a
compact, connected (abelian) homotopy cogroup object \w{\mathfrak{h}=\Sigma\hat{\fy}}
as in \S \ref{sssso}.
Given a cosimplicial object \w{x\sp{\bullet}\in cX} (see \S \ref{snac}), we obtain
a (homotopy coherent) pointed cosimplicial space \w{w\sp{\bullet}:=\mathbf{Map}\sb{X}(\hat{\fy},x\sp{\bullet})}
in \w[,]{c\Ss\sb{\ast}:=\Ss\sb{\ast}\sp{B(\mathbf{\Delta})}} using Lemma \ref{lem8.2}. By \cite[Theorem 6.7]{Riehl1},
we can associate to \w{w\sp{\bullet}} a homotopy coherent diagram in Kan complexes.
Using homotopy coherence theory (see \cite[Section 9]{GJarS}), we can replace this in
turn with an equivalent strict cosimplicial space \w[.]{\bW\sp{\bullet}} We then define the
spectral sequence associated to \w{\lra{x\sp{\bullet},\mathfrak{h}}} to be the Bousfield-Kan homotopy
spectral sequence of \w{\bW\sp{\bullet}} (more precisely: a Reedy fibrant replacement thereof),
with
\begin{myeq}\label{eqbkss}
E\sb{2}\sp{n,n+p}~=~\pi\sp{n}\pi\sb{n+p}\bW\sp{\bullet}~\cong~
~\pi\sp{n}\pi\sb{n+p}\mathbf{Map}\sb{X}(\hat{\fy},x\sp{\bullet})~\cong~
\pi\sp{n}[\Sigma\sp{n+p}\hat{\fy},x\sp{\bullet}]
\end{myeq}
\noindent (the indexing has been chosen because this term contributes to
\w[),]{\pi\sb{p}\operatorname{Tot}\bW\sp{\bullet}} with
\w[.]{d\sb{r}:E\sb{r}\sp{n,n+p}\to E\sb{r}\sp{n+r,n+p+r-1}}
See \cite[IX]{BKanH} for further details.
We again take the point of view, explained in \S \ref{sssso}, that the differential
\w{d\sb{r}(\gamma)} for \w{\gamma\in E\sb{r}\sp{n,n+p}} is to be described in
terms of all possible values \w{\mathfrak{d}\sb{r}[f]\in E\sb{1}\sp{n+r,n+p+r-1}} for
the various representatives \w{[f]\in E\sb{1}\sp{n,n+p}} of $\gamma$.
In light of the above, for the remainder of this section we fix a Reedy fibrant
and cofibrant cosimplicial space \w[.]{\bW\sp{\bullet}\in\Ss\sb{\ast}\sp{\Delta}} Because
\w{\pi\sb{k}{\mathbf W}\sp{n}\cong[\Sigma\sp{k}\hat{\fy},x\sp{n}]} and $\mathfrak{h}$ is an abelian cogroup
object in $X$, \w{\pi\sb{k}{\mathbf W}\sp{n}} is an abelian group for each \w{n\geq 0} and
\w[.]{k\geq 1}
\end{mysubsection}
It is possible to develop a full description of the spectral sequence of \w{\bW\sp{\bullet}}
(and indeed of \w[)]{\lra{x\sp{\bullet},\mathfrak{h}}} in terms of the category of cochain complexes
in \w[,]{\Ss\sb{\ast}} as we did in the simplicial case in \S \ref{scc} (see \cite{BBSenH}).
However, in the interests of brevity we describe only the cosimplicial
version of \S \ref{ssocc}:
\begin{mysubsection}{Cosimplicial objects and cochain complexes}
\label{scoch}
If $\mathcal{C}$ is a pointed category, let \w{\coc\!\Ch(\mathcal{C}):=\mbox{\sf Ch}(\mathcal{C}\sp{\operatorname{op}})} denote the category of
(non-negatively graded) cochain complexes in $\mathcal{C}$.
The \emph{$n$-th normalized cochain object} of a cosimplicial object \w{\bW\sp{\bullet}} in \w{c\mathcal{C}}
is defined by
\begin{myeq}\label{eqnormcoch}
N\sp{n}(\bW\sp{\bullet})~:=~\bigcap\sb{i=0}\sp{n-1}\operatorname{Ker}(s\sp{i}:{\mathbf W}\sp{n}\to{\mathbf W}\sp{n-1})~,
\end{myeq}
\noindent and we have
\begin{myeq}\label{eqceone}
E\sb{1}\sp{n,p}~\cong~\pi\sb{p}N\sp{n}(\bW\sp{\bullet})
\end{myeq}
\noindent in our spectral sequence (see \cite[X, 6.3(i)]{BKanH}).
Alternatively, if we denote by \w{D\sp{n}(\bW\sp{\bullet})} the (homotopy) image of
$$
\coprod\sb{i=1}\sp{n}\,{\mathbf W}\sp{n-1}~\xra{\bot\sb{i}\,d\sp{i}}~{\mathbf W}\sp{n}~,
$$
the \emph{$n$-th Moore cochain object} of \w{\bW\sp{\bullet}} is defined to be the (homotopy)
cofiber
\begin{myeq}\label{eqmoorecc}
C\sp{n}(\bW\sp{\bullet})~:=~\operatorname{Coker}(D\sp{n}(\bW\sp{\bullet})\to{\mathbf W}\sp{n})~,
\end{myeq}
\noindent with differential \w{\delta\sp{n}:C\sp{n}(\bW\sp{\bullet})\to C\sp{n+1}(\bW\sp{\bullet})}
induced by \w[.]{d\sp{0}}
Note that the Moore cochain functor \w{C\sp{\ast}:c\sp{+}\mathcal{C}\to\coc\!\Ch(\mathcal{C})} has a right
adjoint (and left inverse) \w[.]{\mathcal{E}:\coc\!\Ch(\mathcal{C})\to c\sp{+}\mathcal{C}}
Likewise, the forgetful functor \w{\mathcal{U}:c\mathcal{C}\to c\sp{+}\mathcal{C}} (induced by
\w[)]{\sb{+}\!\Delta\hookrightarrow\mathbf{\Delta}} has a right adjoint \w{\mathcal{F}:c\sp{+}\mathcal{C}\to c\mathcal{C}} adding codegeneracies
(see \cite[\S 1.8]{BSenH}).
\end{mysubsection}
\begin{lemma}\label{lem7.1}
If a cosimplicial space \w{\bW\sp{\bullet}\in c\Ss\sb{\ast}} consists of Eilenberg-Mac~Lane spaces of type $n$
in each cosimplicial degree, and all coface and codegeneracy maps are homomorphisms,
then \w[.]{{\mathbf W}\sp{n}\simeq N\sp{n}(\bW\sp{\bullet}) \times D\sp{n}(\bW\sp{\bullet})}
\end{lemma}
\begin{proof}
The dual of \cite[Theorem III.2.1]{GJarS} yields an isomorphism
\w[,]{N\sp{n}(\bW\sp{\bullet})\simeq{\mathbf W}\sp{n}/D\sp{n}(\bW\sp{\bullet})} and thus a splitting
of \w[.]{{\mathbf W}\sp{n}\to{\mathbf W}\sp{n}/D\sp{n}(\bW\sp{\bullet})}
\end{proof}
\begin{prop}\label{lem7.2}
For \w{\bW\sp{\bullet}} as in \S \ref{scoqc}, we have a homotopy equivalence
\w[.]{{\mathbf W}\sp{n}\simeq N\sp{n}(\bW\sp{\bullet}) \times D\sp{n}(\bW\sp{\bullet})}
\end{prop}
\begin{proof}
We let \w{\Po{m}:\Ss\sb{\ast}\to\Ss\sb{\ast}} denote the $m$-th Postnikov section functor.
We prove by induction on \w{m\geq 0} that the statement holds for \w[:]{\Po{m}({\mathbf W}\sp{n})}
The case \w{m = 0} follows from Lemma \ref{lem7.1} and the assumption that
\w{\pi\sb{0}{\mathbf W}\sp{n}} is an abelian group. In step $m$, we have a fibre
sequence \w[,]{F\sp{\bullet}\sb{m}\to\Po{m}\bW\sp{\bullet}\to\Po{m-1}\bW\sp{\bullet}} where \w{F\sp{\bullet}\sb{m}} is an
Eilenberg-Mac~Lane space in each cosimplicial degree.
Since (homotopy) colimits commute with homotopy fibre sequences in \w[,]{\Ss\sb{\ast}} the functor
\w{D\sp{n}} commutes with fibre sequences, and we have a comparison of fibre sequences:
\begin{equation*}
\xymatrix@R=15pt@C=20pt{
D\sp{n}F\sp{\bullet}\sb{m} \times N\sp{n}F\sp{\bullet}\sb{m} \ar[d] \ar[r] &
D\sp{n}\Po{m}\bW\sp{\bullet} \times N\sp{n}\Po{m}\bW\sp{\bullet} \ar[d] \ar[r] &
D\sp{n}\Po{m-1}\bW\sp{\bullet} \times N\sp{n}(\Po{m-1}(\bW\sp{\bullet})) \ar[d] \\
F\sb{m}\sp{n} \ar[r] & \Po{m}W\sp{n} \ar[r] & \Po{m-1}W\sp{n}
}
\end{equation*}
The left and right vertical maps are homotopy equivalences, by Lemma \ref{lem7.1}
and by the induction hypothesis, respectively, so the middle one is as well.
Since filtered homotopy limits and finite homotopy (co)limits of spaces commute,
\w{D\sp{n}} and \w{N\sp{n}} commute with filtered homotopy limits. Thus
\begin{equation*}
\begin{split}
\operatorname{holim}\sb{m}\Po{m}{\mathbf W}\sp{n}~&\simeq~\operatorname{holim}\sb{m}\,[N\sp{n}(\Po{m}(\bW\sp{\bullet}))\times D\sp{n}(\Po{m}(\bW\sp{\bullet}))]\\
&\simeq~N\sp{n}(\operatorname{holim}\sb{m}\Po{m}\bW\sp{\bullet})\times D\sp{n}(\operatorname{holim}\sb{m}\Po{m}\bW\sp{\bullet})
~=~N\sp{n}(\bW\sp{\bullet})\times D\sp{n}(\bW\sp{\bullet})~,
\end{split}
\end{equation*}
\noindent which completes the proof.
\end{proof}
From \wref{eqmoorecc} we deduce the following generalization of the dual of
\cite[Corollary (1.12)]{DolH}:
\begin{corollary}\label{cmoorenorm}
The natural map \w{C\sp{\ast}(\bW\sp{\bullet})\to N\sp{\ast}(\bW\sp{\bullet})} is a levelwise weak equivalence.
\end{corollary}
\begin{mysubsection}{The \w{\operatorname{Tot}} tower}
\label{stott}
Let \w{\Del\sp{\bullet}\in\mbox{\sf Set}\sp{\Delta}} denote the cosimplicial space having
\w{\mathbf{\Delta}\sp{n}} in degree $n$, and recall that \w{\operatorname{Tot}(\bW\sp{\bullet}):=\operatorname{map}\sb{c\mathcal{S}}(\Del\sp{\bullet},\bW\sp{\bullet})}
for \w{\Del\sp{\bullet}} as in \S \ref{snac}, where \w{\operatorname{map}\sb{c\mathcal{S}}} is the simplicial enrichment
for the Reedy model structure on \w[.]{c\mathcal{S}}
Similarly, \w[.]{\operatorname{Tot}\sp{n}(\bW\sp{\bullet}):=\operatorname{map}\sb{c\mathcal{S}}(\sk{n}\Del\sp{\bullet},\bW\sp{\bullet})}
Thus a $k$-simplex of \w{\operatorname{Tot}\sp{n}(\bW\sp{\bullet})} is a choice of maps
\begin{equation}\label{eqone}
f\sb{m} :\sk{n}\mathbf{\Delta}\sp{m} \times \Deln{k} \to {\mathbf W}\sp{m}
\end{equation}
for each \w{m\geq 0} such that
$$
f\sb{j}\circ(\sk{n}\phi \times\operatorname{Id}) = {\mathbf W}(\phi) \circ
f\sb{m} :\sk{n}\mathbf{\Delta}\sp{m} \times \Deln{k} \to {\mathbf W}\sp{j}
$$
\noindent for every \w{\phi:[m]\to[j]} in \w[.]{\mathbf{\Delta}}
We use the notations \w{\mathbf{\Delta}\sp{m}} and \w{\Deln{m}} for the $m$-simplex thought
of as a space and as a combinatorial book-keeping device, respectively.
Since \w[,]{\sk{n}(\mathbf{\Delta}\sp{m})=\operatorname{colim}\sb{k\leq n}\mathbf{\Delta}\sp{k}}
such a collection is completely determined by the maps \w{f\sb{m}}
for \w[.]{m\leq n}
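For example (unwinding the definition above \ -- \ we will not need this in what
follows), for \w{n=1} a $k$-simplex of \w{\operatorname{Tot}\sp{1}(\bW\sp{\bullet})} amounts to a pair of maps
\w{f\sb{0}:\mathbf{\Delta}\sp{0}\times\Deln{k}\to{\mathbf W}\sp{0}} and
\w{f\sb{1}:\mathbf{\Delta}\sp{1}\times\Deln{k}\to{\mathbf W}\sp{1}} satisfying
$$
f\sb{1}\circ(d\sp{i}\times\operatorname{Id})~=~{\mathbf W}(d\sp{i})\circ f\sb{0}
\qquad\text{and}\qquad
f\sb{0}\circ(s\sp{0}\times\operatorname{Id})~=~{\mathbf W}(s\sp{0})\circ f\sb{1}
$$
\noindent for \w[,]{i=0,1} with the remaining maps \w{f\sb{m}} \wb{m\geq 2}
determined by these and the compatibility conditions.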
A representative $f$ of a homotopy class \w{[f]\in\pi\sb{k}\operatorname{Tot}\sp{n}\bW\sp{\bullet}} is determined
by a collection of maps as above, whose restriction to
\w{\sk{n}\mathbf{\Delta}\sp{m}\times \partial \Deln{k}} is $0$.
Similarly, a homotopy \w{F:f\sim f'} between two representatives of
\w{[f]\in\pi\sb{k}\operatorname{Tot}\sp{n}\bW\sp{\bullet}} is determined by a collection of compatible maps
$$
F\sb{m} :\sk{n}\mathbf{\Delta}\sp{m}\times\Deln{k}\times[0,1]\to{\mathbf W}\sp{m}
$$
for \w[.]{m \le n} The homotopy groups of \w{\operatorname{Tot}\sp{n}\bW\sp{\bullet}} are thus determined
by the truncation \w{\tnk{h}{\le n}\bW\sp{\bullet}} of \w{\bW\sp{\bullet}} in cosimplicial dimensions
$\leq n$.
We write \w{\bW\sp{\bullet}\bp{n}} for the Reedy fibrant replacement of the left Kan extension of
\w{\tnk{h}{\le n}(\bW\sp{\bullet})} to an object of \w[.]{\mathcal{S}\sp{\Delta}} By the previous remark,
\w{\pi\sb{\ast}\operatorname{Tot}\sp{n}\bW\sp{\bullet}} depends only on this truncation: that is, the natural map
\w{\operatorname{Tot}\sp{n}\bW\sp{\bullet}\bp{n}\to\operatorname{Tot}\sp{n}\bW\sp{\bullet}} is an equivalence.
Thus, we can identify the homotopy spectral sequence of the cosimplicial space \w{\bW\sp{\bullet}}
with the spectral sequence of the tower of fibrations
\begin{myeq}\label{tottower}
\cdots\to\operatorname{Tot}\sp{n}(\bW\sp{\bullet}\bp{n})\to\operatorname{Tot}\sp{n-1}(\bW\sp{\bullet}\bp{n-1})\to\cdots\to
\operatorname{Tot}\sp{0}(\bW\sp{\bullet}\bp{0})~.
\end{myeq}
Now let \w[.]{\oW{n}:=N\sp{n}\bW\sp{\bullet}\bp{n}} We extend the usual notation
\w{P\Omega\sp{n}} (for the composite of the path space functor
with $n$-fold loop space, when \w[)]{n\geq 0} by letting \w{P\Omega\sp{-1}X:=X} and
\w{P\Omega\sp{n}:=\ast} for \w[.]{n <-1} We then set
$$
M\sp{r}\bp{n}\oW{n}:=\prod\sb{0 \le k \le r}
\prod\sb{0 \le i\sb{1} < i\sb{2} < \cdots < i\sb{k} \le r} P\Omega\sp{n+k-r-1}\oW{n}~,
$$
where the codegeneracy map \w{s\sp{t}: M\sp{r}\bp{n}\oW{n}\to M\sp{r-1}\bp{n}\oW{n}} is
given on the factor corresponding to \w{I=(i\sb{1}, \cdots, i\sb{k})}
by projection onto the unique factor \w{J=(j\sb{1}, \cdots, j\sb{k+1})}
such that \w[.]{s\sp{I}\circ s\sp{t}=s\sp{J}}
Thus \w{M\sp{\bullet}\bp{n}\oW{n}} is obtained by applying the functor \w{\mathcal{F}:c\sp{+}\Ss\sb{\ast}\to c\Ss\sb{\ast}}
of \S \ref{scoch} to the restricted cosimplicial object
\w[,]{\bQ\sp{\bullet}\in c\sp{+}\Ss\sb{\ast}} where \w[.]{{\mathbf Q}\sp{m} := P\Omega\sp{n-m-1}\oW{n}}
In particular, the map into \w{M\sp{i}\bp{n}\oW{n}} for \w{i>n} is
determined by the cosimplicial identities, applying an appropriate
iterated codegeneracy to \w{M\sp{i}\bp{n}\oW{n}} until we land in \w[.]{\oW{n}}
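As a consistency check, immediate from the definition: each factor
\w{P\Omega\sp{j}\oW{n}} with \w{j\geq 0} is a path space, hence weakly contractible,
and the exponent \w{n+k-r-1} is non-negative whenever \w[.]{r<n} Thus
\w{M\sp{r}\bp{n}\oW{n}} is weakly contractible for \w[,]{r<n} while for \w{r=n}
only the \w{k=0} factor \w{P\Omega\sp{-1}\oW{n}=\oW{n}} is non-contractible, so
$$
M\sp{n}\bp{n}\oW{n}~\simeq~\oW{n}~.
$$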
We claim that the fibre sequence
$$
N\sp{n}(\bW\sp{\bullet}\bp{n})\to\operatorname{Tot}\sp{n}(\bW\sp{\bullet}\bp{n})\to\operatorname{Tot}\sp{n-1}(\bW\sp{\bullet}\bp{n-1})
$$
can be identified with the fibre sequence
$$
N\sp{n}(\bW\sp{\bullet}\bp{n}) \to\operatorname{Tot}\sp{n}(\bW\sp{\bullet}\bp{n-1} \times M\sp{\bullet}\bp{n}\oW{n})
\to\operatorname{Tot}\sp{n-1}(\bW\sp{\bullet}\bp{n-1})~.
$$
\noindent In fact, we have a natural map
\w[,]{\bW\sp{\bullet}\bp{n-1} \times M\sp{\bullet}\bp{n}\oW{n}\to\bW\sp{\bullet}\bp{n}}
which is an equivalence in cosimplicial dimensions \w[:]{m \leq n}
The case \w{m = n} is Proposition \ref{lem7.2}, and for \w[,]{m < n} this holds because
\w{{\mathbf W}\bp{n-1}\sp{m} = {\mathbf W}\bp{n}\sp{m}} and \w{M\sp{m}\bp{n}\oW{n}}
is weakly contractible. The result now follows from the description of the
homotopy groups of \w{\operatorname{Tot}\sp{n}} in \S \ref{stott}.
By the preceding paragraph, we can thus assume by induction that
\begin{myeq}\label{eq2}
\bW\sp{\bullet}\bp{n} \simeq \bW\sp{\bullet}\bp{n-1} \times M\sp{\bullet}\bp{n}\oW{n}
\end{myeq}
\noindent for all \w[.]{n\geq 1}
\end{mysubsection}
We have the following analogue of \S \ref{sdiagdesc}:
\begin{mysubsection}{A diagrammatic description of the differentials}
\label{sdiagdes}
Starting with \w[,]{F\sp{n-1}=d\sp{0}:C\sp{n-1}\bW\sp{\bullet}\bp{n-1}
\to P\Omega\sp{-1}\oW{n}}
we may define by downward induction a map of complexes
\w{F:C\sp{\ast}\bW\sp{\bullet}\bp{n-1}\to D\sp{\ast}} given by:
\mysdiag[\label{complexes}]{
C\sp{k+1}\bW\sp{\bullet}\bp{n-1}\ar[rrr]\sp{F\sp{k+1}} &&& P\Omega\sp{n-k-3}\oW{n} & =
D\sp{k+1} \\
&&& \Omega\sp{n-k-2}\oW{n} \ar@{^{(}->}[u] & \\
C\sp{k}\bW\sp{\bullet}\bp{n-1} \ar[uu]\sp{\delta\sp{k}} \ar[rrr]\sb{F\sp{k}} &&&
P\Omega\sp{n-k-2}\oW{n} \ar@{->>}[u] &
=~D\sp{k} \ar[uu]\sp{\delta\sp{k}\sb{D}} \\
&&& \Omega\sp{n-k-1}\oW{n} \ar@{^{(}->}[u] \ar@/_{3.9pc}/[uu]\sb{0} & \\
C\sp{k-1}\bW\sp{\bullet}\bp{n-1}
\ar@{-->}[rrru]\sp{a\sp{k-1}} \ar[uu]\sp{\delta\sp{k-1}}
\ar@{.>}[rrr]_(0.5){F\sp{k-1}} &&& P\Omega\sp{n-k-1}\oW{n} \ar@{->>}[u] &
= D\sp{k-1} \ar[uu]\sp{\delta\sp{k-1}\sb{D}}
}
\noindent Note that \w{F\sp{k}} induces the map \w{a\sp{k-1}} in \wref[,]{complexes}
which must be nullhomotopic in order for \w{F\sp{k-1}} to exist.
Using the splitting from \wref{eq2} and Proposition \ref{lem7.2}, as
in \cite[Section 3]{BBSenH} we can show that there is a fibre sequence
\begin{myeq}\label{afibresequence}
\Sigma\Dup{n}\to\bW\sp{\bullet}\bp{n}\to \bW\sp{\bullet}\bp{n-1}\xrightarrow{F\bp{n-1}}\Dup{n}~,
\end{myeq}
\noindent where \w{\Dup{n}} is obtained by applying \w{\mathcal{E}\mathcal{U}} of \S \ref{scoch}
to the complex \w{D\sp{\ast}}
in the right-hand side of diagram \wref{complexes} above, and the map
\w{F\bp{n-1}} is adjoint to the map of complexes given by \wref[.]{complexes}
\end{mysubsection}
\begin{prop}\label{pdiagdiff}
In the situation of \S \ref{scoqc}, an element \w{[f]\in E\sp{n,n+p}\sb{1}}
represented by \wref{eqeoneterm} survives to \w{E\sp{n,n+p}\sb{r+1}}
if and only if \wref{eqeoneterm} extends to a diagram:
\mytdiag[\label{eqerterm}]{
\Deln{n+r-1} \ltimes S\sp{p} \ar[r]\sb<<<<<<<<{g\sb{n+r-1}} &{\mathbf W}\sp{n+r-1}\bp{n+r-1} \\
\Deln{n+r-2} \ltimes S\sp{p} \ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots}
\ar@<-1.5ex>[u]\sb{d\sp{n+r-1}} \ar[r]_>>>>>>>>{g\sb{n+r-2}}
\ar@{}[d]\sb{\vdots} &{\mathbf W}\sp{n+r-2}\bp{n+r-2}
\ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots} \ar@<-1.5ex>[u]\sb{d\sp{n+r-1}}
\ar@{}[d]\sb{\vdots} \\
\Deln{n} \ltimes S\sp{p} \ar[r]\sb{g\sb{n}=f\sb{n}} & {\mathbf W}\sp{n}\bp{n} \\
0\ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots} \ar@<-1.5ex>[u]\sb{d\sp{n}} \ar[r]
&{\mathbf W}\sp{n-1}\bp{n} \ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots} \ar@<-1.5ex>[u]\sb{d\sp{n}}
}
\noindent indexed by \w[.]{\rDell{n-1,n+r-1}\times[\mathbf{1}]}
\end{prop}
\begin{proof}
By induction on $r$, starting with \w[:]{r=1}
Because \w{\bW\sp{\bullet}} is Reedy fibrant, the natural map
\begin{myeq}\label{eqnormis}
\pi\sb{\ast}N\sp{n}(\bW\sp{\bullet})~\to~ N\sp{n}(\pi\sb{\ast}\bW\sp{\bullet})~\cong~C\sp{n}\pi\sb{\ast}(\bW\sp{\bullet})
\end{myeq}
\noindent is an isomorphism, by \cite[X, 6.3(ii)]{BKanH} and Corollary \ref{cmoorenorm}.
By \wref[,]{eqceone} we can thus represent \w{[f]\in E\sp{n,n+p}\sb{1}} by a map
\w{\Del\sp{\bullet}\ltimes S\sp{p} \to\operatorname{Tot}\sp{n}\bW\sp{\bullet}\bp{n}} -- \ in other words, by a sequence
of compatible maps \w{f\sb{m}:\mathbf{\Delta}\sp{m}\ltimes S\sp{p}\to{\mathbf W}\bp{n}\sp{m}}
with \w{f\sb{m} = 0} for \w{m < n} (see \cite[\S 4.1]{BBSenH}). This determines a diagram
of pointed simplicial sets indexed by \w{\Dell{n}\times[\mathbf{1}]} (in the notation of
\S \ref{snac}), whose restriction to \w{\rDele{n}} is depicted by
\mydiagram[\label{eqeoneterm}]{
\Deln{n} \ltimes S\sp{p} \ar[r]\sb(0.55){f\sb{n}} & {\mathbf W}\sp{n}\bp{n}\\
\Deln{n-1} \ltimes S\sp{p} \ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots}
\ar@<-1.5ex>[u]\sb{d\sp{n}} \ar[r]\sb(0.55){0} \ar@{}[d]\sb{\vdots} & {\mathbf W}\sp{n-1}\bp{n}
\ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots} \ar@<-1.5ex>[u]\sb{d\sp{n}} \ar@{}[d]\sb{\vdots} \\
\Deln{0} \ltimes S\sp{p} \ar[r]\sb(0.5){0} & {\mathbf W}\sp{0}\bp{n}~,
}
\noindent which is equivalent to the case \w{r=1} of \wref[,]{eqerterm} since mapping
out of a zero object does not require choices of homotopies.
On the other hand, in such a commutative diagram, \w{f\sb{n}} factors through
\w{M\sp{n}\bp{n}\oW{n}} as in \wref[,]{eq2} and thus the diagram determines a map
\w{\Del\sp{\bullet}\to\operatorname{Tot}\sp{n}(M\sp{\bullet}\bp{n}\oW{n})} by freely adding codegeneracies.
This in turn determines an element of \w[,]{\Omega\sp{n+p}N\sp{n}\bW\sp{\bullet}} since the fibre of
\w{\bW\sp{\bullet}\bp{n}\to\bW\sp{\bullet}\bp{n-1}} can be identified with \w{M\sp{\bullet}\bp{n}\oW{n}} by
\wref[.]{eq2}
Now assume the statement is true for \w[:]{r-1}
a map \w{g:\Del\sp{\bullet}\ltimes S\sp{p}\to\bW\sp{\bullet}\bp{n+r-1}} representing
an element of the \ww{E\sb{r-1}}-term of the spectral sequence survives to
the \ww{E\sb{r}}-term if and only if the composite \w{F\bp{n+r-1} \circ g} in
\mydiagram[\label{eqnullhtpyt}]{
\bW\sp{\bullet}\bp{n+r} \ar[r] & \bW\sp{\bullet}\bp{n+r-1} \ar[rr]\sp{F\bp{n+r-1}} &&
\Dup{n+r} \\
\Del\sp{\bullet} \ltimes S\sp{p} \ar[ur]\sb{g} & &&
}
\noindent is nullhomotopic, where the horizontal maps form the fibre sequence
\wref{afibresequence} (with $n$ replaced by \w[).]{n+r}
By the discussion in \S \ref{sdiagdes}, we see that both \w{F\bp{n+r-1}} and
\w{F\bp{n+r-1}\circ g} are adjoint, respectively, to the right-hand map and the
composite in the diagram
$$
C\sp{\ast}\mathcal{F}(\Del\sp{\bullet} \ltimes S\sp{p}) \to C\sp{\ast}\mathcal{F}\bW\sp{\bullet}\bp{n+r-1}\to D\sp{\ast}~.
$$
They are thus adjoint to the maps $\psi$ and \w{\psi \circ \psi'} in the diagram below,
where the horizontal maps can be seen to be a sequence of fibrations.
$$
\xymatrix
{
\mathcal{F}\bW\sp{\bullet}\bp{n+r} \ar[r] & \mathcal{F}\bW\sp{\bullet}\bp{n+r-1} \ar[r]\sb{\psi} &\mathcal{E} D\sp{\ast} \\
\Del\sp{\bullet} \ltimes S\sp{p} \ar[ur]\sb{\psi'} & &
}
$$
By adjunction, \w{F\bp{n+r-1}\circ g} is thus nullhomotopic if and only if the
composite \w{\psi \circ \psi'} is nullhomotopic, which holds if and only if
a diagram of the form \wref{eqerterm} exists. Hence the result.
\end{proof}
\begin{corollary}\label{cor7.9}
%
Given $X$, $\mathfrak{h}$ and \w[,]{x\sp{\bullet}} with \w{\bW\sp{\bullet}} a Reedy fibrant strictification of
\w[,]{w\sp{\bullet}:=\mathbf{Map}\sb{X}(\hat{\fy},x\sp{\bullet})\in c\Ss\sb{\ast}} as in \S \ref{scoqc}, a class
\begin{myeq}\label{eqfrepres}
[f]\in[\Sigma\sp{n+p}\hat{\fy},x\sp{n}]~=~\pi\sb{n+p}(w\sp{n})~\cong~
\pi\sb{p}(\Omega\sp{n}{\mathbf W}\sp{n})~
\end{myeq}
\noindent in \w{E\sb{1}\sp{n,n+p}} of the associated spectral sequence survives
to \w{E\sb{r}\sp{n,n+p}} if and only if it fits into a map \w{B(\rDele{n+r-1}) \to\Ss\sb{\ast}}
of the form
\mytdiag[\label{eqerrterm}]{
\Deln{n+r-1} \ltimes\Sigma\sp{p}\mathfrak{h}\ar[r]\sb<<<<<<<<{g\sb{n+r-1}} & x\sp{n+r-1} \\
\Deln{n+r-2} \ltimes\Sigma\sp{p}\mathfrak{h} \ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots}
\ar@<-1.5ex>[u]\sb{d\sp{n+r-1}} \ar[r]\sb>>>>>>>>>{g\sb{n+r-2}} \ar@{}[d]\sb{\vdots} &
x\sp{n+r-2}
\ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots} \ar@<-1.5ex>[u]\sb{d\sp{n+r-1}}
\ar@{}[d]\sb{\vdots} \\
\Deln{n} \ltimes\Sigma\sp{p}\mathfrak{h} \ar[r]\sb{g\sb{n}=f\sb{n}} & x\sp{n} \\
\Deln{n-1} \ltimes\Sigma\sp{p}\mathfrak{h}\ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots}
\ar@<-1.5ex>[u]\sb{d\sp{n}}
\ar[r]\sb{0}
& x\sp{n-1} \ar@<1.5ex>[u]\sp{d\sp{0}}\sb{\cdots}
\ar@<-1.5ex>[u]\sb{d\sp{n}}
}
\end{corollary}
\begin{proof}
By \cite[Theorem 6.7]{Riehl1} and the fact that \w{{\mathbf W}\bp{m}\sp{m} ={\mathbf W}\sp{m}}
(by construction), a diagram \w{B(\rDele{n+r-1})\to\Ss\sb{\ast}} as in \wref{eqerrterm}
is equivalent to a homotopy coherent diagram of the form \wref[.]{eqerterm}
But such a diagram is equivalent to a strictly commuting diagram of pointed spaces
by a relative version of the usual Dwyer-Kan homotopy coherence theorem
(see \cite{DKSmitH}). The result then follows from Proposition \ref{pdiagdiff}.
\end{proof}
\begin{mysubsection}{A combinatorial description of the differentials}
\label{scombdes}
By \cite{BKanS}, the unstable Adams spectral sequence for a space ${\mathbf X}$ may be
identified with the homotopy spectral sequence of a certain cosimplicial resolution
\w{\bW\sp{\bullet}} of ${\mathbf X}$, as in \cite[Chapter X]{BKanS}.
In \cite[Theorem 6.4]{BBSenH}, it was shown that applying the \ww{d\sb{r}}-differential
in this spectral sequence to an element of \w{E\sb{r}\sp{n,n+p}} represented by
\w{[f]\in E\sb{1}\sp{n,n+p}} as in Proposition \ref{pdiagdiff} yields a value
of a certain associated higher cohomology operation. This may be described more generally
in our situation as follows:
Assume we have lifted \w{[f]} as in \wref{eqeoneterm} to a map
\w{g\bp{N}:\Del\sp{\bullet}\ltimes S\sp{p}\to\bW\sp{\bullet}\bp{N}} as in \wref[,]{eqerterm} for \w[.]{N=n+r-1}
From \wref{afibresequence} we see that \w{g\bp{N}} can be lifted to \w[,]{g\bp{N+1}}
up to homotopy, if and only if there is a nullhomotopy
\w[,]{H:F\bp{N}\circ g\bp{N}\sim 0} determined according to \S \ref{stott} by
a sequence of maps \w{H\sp{k}} fitting into a diagram
\myudiag[\label{eqnulhtpy}]{
\sms{C\Deln{k}}{S\sp{p}} \ar@/^{1.9pc}/[rrrrrd]\sp{H\sp{k}} &&&&&\\
& \hfsm{\Deln{k}}{S\sp{p}} \ar@{_{(}->}[ul]\sb{\delta\sp{0}} \ar[rr]\sp{g\bp{N}\sp{k}} &&
{\mathbf W}\sp{k}\bp{N} \ar[rr]\sp(0.45){F\sp{k}} && P\Omega\sp{N-k-1}\oW{N+1} \\
\sms{C\Deln{k-1}}{S\sp{p}}
\ar@<2.5ex>[uu]\sp(0.55){Cd\sp{0}}\sp(0.45){=\delta\sp{1}}\sb(0.5){\dotsc}
\ar@<-2.5ex>[uu]\sb(0.55){Cd\sp{k}}\sb(0.45){=\delta\sp{k+1}}
\ar@/^{3.3pc}/[rrrrrd]\sp{H\sp{k-1}} &&&&&\\
& \hfsm{\Deln{k-1}}{S\sp{p}} \ar@{_{(}->}[ul]\sb{\delta\sp{0}}
\ar@<2.5ex>[uu]\sp(0.5){d\sp{0}}\sb(0.5){\dotsc} \ar@<-2.5ex>[uu]\sb(0.5){d\sp{k}}
\ar[rr]\sp(0.55){g\bp{N}\sp{k-1}} &&
{\mathbf W}\sp{k-1}\bp{N} \ar[rr]\sp(0.45){F\sp{k-1}} && P\Omega\sp{N-k}\oW{N+1}
\ar@<3ex>[uu]\sp(0.6){\iota\sb{N-k-1}\circ p}\sb(0.6){=d\sp{0}}
\ar@<-4ex>[uu]\sb(0.4){(j>0)}\sb(0.6){=d\sp{j}}\sp(0.6){0}
}
\noindent as in \cite[5.5]{BBSenH}. The value of the differential \w{\mathfrak{d}\sb{r}([f])}
is the obstruction to the existence of $H$; it is given by a certain map
\w[,]{\Phi:\sms{\partial\mathcal{P}\sp{N+1}\sb{r}}{S\sp{p}}\to\oW{N+1}}
depending only on the given maps \w{F\bp{N}} and \w[.]{g\bp{N}}
Here \w{\mathcal{P}\sp{N+1}\sb{r}} is a certain simplicial complex which is PL-equivalent to
an \wwb{N+1}-ball, so \w{\partial\mathcal{P}\sp{N+1}\sb{r}} is an $N$-sphere
(see \cite[Definition 5.1]{BBSenH}).
The proof of this result in \cite[Theorem 6.4]{BBSenH} used a specific CW construction
for \w[,]{\bW\sp{\bullet}} but in fact it is valid for any pair \w{\lra{x\sp{\bullet},\mathfrak{h}}} in a
quasi-category $X$ as in \S \ref{scoqc}. Moreover, the cell structure of
\w{\mathcal{P}\sp{N+1}\sb{r}} allows us to show that, up to homotopy, $\Phi$ is determined
inductively by the universal property of the colimit of these cells, providing
a model-independent and homotopy invariant description of the differentials in the
situation of Corollary \ref{cor7.9}, just as we did in \cite[Corollary 6.11]{BMeadS} in
the simplicial case.
\end{mysubsection}
\setcounter{figure}{0}\section{The spectral sequence of a cosimplicial object and localization}
\label{ccsloc}
In this section, we obtain analogues of the constructions and results of
Section \ref{cssl} for the spectral sequence of a cosimplicial object \w{x\sp{\bullet}}
in a quasi-category $X$.
First, note that from Corollary \ref{cor7.9} we may deduce the following analogue
of Proposition \ref{rtrunc}:
\begin{prop}\label{rctrunc}
Given \w{\mathfrak{h}\in X} and \w{x\sp{\bullet}\in cX} as in \S \ref{scoqc} and
\w{[f]\in E\sb{1}\sp{n,n+p}} surviving to \w[,]{E\sb{r}} the element \w{\mathfrak{d}\sp{r}([f])}
in \w{E\sb{1}\sp{n+r,n+p+r-1}} associated to diagram \wref{eqnulhtpy}
-- \ and thus \w{d\sp{r}([f])\in E\sb{r}\sp{n-r,p+r-1}} itself
\ -- \ depends only on \w[,]{\Po{r-1}\operatorname{Map}\sb{X}(\Sigma\sp{p+n}\mathfrak{h},x\sb{\bullet})\sb{f\sb{n}}} which
thus determines \w[,]{E\sb{r+1}} and in particular lets us decide
whether \w{[f]} survives to \w[.]{E\sb{r+1}}
\end{prop}
In order to obtain a cosimplicial version of Theorem \ref{tstem}, we need to replace
Definition \ref{dzmnp} by the following:
\begin{defn}\label{dgmnp}
Given \w{\mathfrak{h}\in X} as in \S \ref{sssso}, for any \w{n\geq 0} and \w{m,p\geq 1}
we let \w{G(n,m,\Sigma\sp{p}\hat{\fy})} denote the diagram
\mywdiag[\label{eqgmnp}]{
0 \ar@<2.5ex>[rr]\sp(0.3){d\sp{0}}\sb(0.3){\vdots} \ar@<-2.5ex>[rr]\sb(0.3){d\sp{n}} &&
\Sigma\sp{p}\hat{\fy}\otimes \Deln{n}
\ar@<2.5ex>[rr]\sp(0.5){d\sp{0}}\sb(0.5){\vdots} \ar@<-2.5ex>[rr]\sb(0.5){d\sp{n+1}} &&
\Sigma\sp{p}\hat{\fy}\otimes \Deln{n+1}\dotsc&
\Sigma\sp{p}\hat{\fy}\otimes \Deln{n+m-1}
\ar@<2.5ex>[rr]\sp(0.5){d\sp{0}}\sb(0.5){\vdots} \ar@<-2.5ex>[rr]\sb(0.5){d\sp{n+m}} &&
\Sigma\sp{p}\hat{\fy}\otimes \Deln{n+m}
}
\noindent in \w{\cpnX{n-1,n+m}} (unique up to weak equivalence), where we omit the
leftmost $0$ when \w[.]{n=0}
The notation \w{-\otimes\Deln{k}} serves merely as a placeholder,
to keep track of the cosimplicial identities. Set
\begin{myeq}\label{eqgr}
\mathcal{G}\sb{r}(\mathfrak{h}):=
\bigcup\sb{p\geq 1}~\bigcup\sb{n\geq 0}~\{G(n,r-1,\Sigma\sp{p}\hat{\fy})\}\cup
\bigcup\sb{1\leq m\leq\min\{p,r\}}
\{G(n-m,m-1,\Sigma\sp{p-m+1}\hat{\fy})\}
\end{myeq}
\noindent with \w{\mathcal{G}(\mathfrak{h})=\bigcup\sb{r=2}\sp{\infty}\,\mathcal{G}\sb{r}(\mathfrak{h})} the collection
of all such diagrams, as in \S \ref{dzmnp}. The inclusions
\begin{myeq}\label{eqrestdiags}
\rDell{n,n+m}~\hookrightarrow~\rDell{n,n+m'}
\end{myeq}
\noindent again induce a partial order on the subset of diagrams in \w{\mathcal{G}(\mathfrak{h})}
with a fixed $p$. Note also that
\w{\Sigma G(n,m,\Sigma\sp{p}\mathfrak{h})\simeq G(n,m,\Sigma\sp{p+1}\mathfrak{h})} in
\w[.]{\cpnX{n-1,n+m}}
Thus given \w{\mathfrak{h}=\Sigma\hat{\fy}} and \w{x\sp{\bullet}} as in \S \ref{scoqc},
for each \w{r\geq 0} we define the cosimplicial \emph{$r$-stem} for
\w{\lra{x\sp{\bullet},\mathfrak{h}}} to be the system consisting of
\begin{myeq}\label{eqcstem}
\Po{m}\operatorname{Map}\sb{\cpnX{n-1,n+m}}(G(n,m,\Sigma\sp{p}\mathfrak{h}),\tau\sp{\ast}x\sp{\bullet})
\end{myeq}
\noindent for all \w[,]{G(n,m,\Sigma\sp{p}\mathfrak{h})\in\mathcal{G}\sb{r}(\mathfrak{h})}
under the various maps induced by \wref{eqrestdiags} and \wref[.]{eqstems}
Again, this is a more precise version of the ``spiral $r$-system'' of \cite[\S 5]{BBlanS}.
\end{defn}
\begin{mysubsection}{Cosimplicial stems and differentials}
\label{stail}
If $\mathscr{X}$ is the simplicially enriched category corresponding to
the quasi-category $X$, as in \S \ref{spsss}, then not only the elements \w{[f]} in
\w[,]{E\sb{1}\sp{n,n+p}} but also \w{d\sb{1}([f])} (and thus the class of \w{[f]} in
\w[,]{E\sb{2}\sp{n,n+p}} if it survives) are determined by \w[.]{\Po{0}\mathscr{X}\cong\operatorname{ho} X}
Once we know that \w{[f]} survives to \w{E\sb{r}\sp{n,n+p}} for \w[,]{r\geq 2} we know
from \S \ref{scombdes} that \w{\mathfrak{d}\sb{r}([f])} is determined by
the map \w{F:G(n,n+r-1,\Sigma\sp{p}\mathfrak{h})\to\tau\sp{\ast}x\sp{\bullet}} described by the part of
\wref{eqerrterm} in \w[.]{\cpnX{n,n+r-1}} Moreover, in this case the standard
simplicial enrichment of \w{\Ss\sb{\ast}} (see \cite[\S 1.5]{GJarS}) implies that all the
ingredients needed to describe the value of \w{\mathfrak{d}\sb{r}([f])} are contained in
\w{\operatorname{Map}\sb{\Ss\sb{\ast}}(S\sp{p},{\mathbf W}\bp{n+k}\sp{n+k})\sp{i}} for \w{n\leq i\leq n+r-1}
(and \w[).]{0\leq k\leq r-1} In homotopy-invariant terms, this is determined by
\begin{myeq}\label{eqposstem}
\Po{r-1}\operatorname{Map}\sb{X}(\Sigma\sp{n+p}\hat{\fy},x\sp{n+k})\simeq
\Po{r+n+p-1}\operatorname{Map}\sb{X}(\hat{\fy},x\sp{n+k})\lra{n+p}\hspace*{2 mm} \text{for}\ -1\leq k\leq r-1.
\end{myeq}
\noindent Fitting the spaces \wref{eqposstem} together to compute
\w{\mathfrak{d}\sb{r}([f])} is just taking the homotopy limit over the various coface maps to
calculate the mapping spaces of \wref[.]{eqcstem}
Thus we obtain the following analogue of Theorem \ref{tstem}:
\end{mysubsection}
\begin{thm}\label{tcstem}
Given \w{\mathfrak{h}\in X} and \w{x\sp{\bullet}\in cX} as in \S \ref{scoqc} and \w[,]{r\geq 2}
the \ww{E\sp{r}}-term of the associated spectral sequence is determined
by the \wwb{r-2}stem of \w[.]{\lra{x\sp{\bullet},\mathfrak{h}}}
\end{thm}
This is a more precise version of \cite[Theorem 5.12]{BBlanS}.
\begin{mysubsection}{The Postnikov localization for cosimplicial objects}
\label{spcloc}
We can again reformulate Theorem \ref{tcstem} using the same pair of Bousfield
localizations on the quasi-category $Y$ of pointed $\infty$-groupoids
as in \S \ref{sploc}, except that in this case we use cosimplicial (rather than
cochain) windows, as in Remark \ref{rtruncations}:
If we set
\begin{myeq}\label{eqcproduct}
Z\sb{r}~:=~\prod\sb{G(n,m,\Sigma\sp{p}\hat{\fy})\in\mathcal{G}\sb{r}(\mathfrak{h})}~\cpn{n-1,n-m+1}{Y}\bp{p}~,
\end{myeq}
\noindent as a quasi-category with fibrations (as in \S \ref{sploc}), the composites of
$$
c\sp{+}X~\xra{\operatorname{Map}\sb{X}(\hat{\fy},-)}~c\sp{+}Y~\xra{\tnk{n}{n-m+1}}~\cpn{n-1,n-m+1}{Y}
$$
\noindent combine to define a functor \w[,]{G\sb{r}:Z\sb{r}\to Z\sb{r}}
with the component of \w{G\sb{r}(x\sp{\bullet})} in \w{\cpn{n-1,n-m+1}{Y}\bp{p}} again
providing the \wwb{m,p,n}\emph{cosimplicial window} for \w[.]{x\sp{\bullet}}
Again applying to \w{Z\sb{r}} the combined Bousfield localization
\w{\mathcal{L}\sp{m}\circ\mathcal{R}\sb{p}} of \S \ref{sploc} to each factor defines the $r$-th
\emph{Postnikov localization} functor \w[,]{\mathcal{P}\sp{r}:Z\sb{r}\to Z\sb{r}} where
\w{Z\sb{r}} has the structure of a quasi-category with fibrations and
\ww{\mathcal{P}\sp{r}}-equivalences.
Proposition \ref{rctrunc} again shows that the \ww{E\sp{r+2}}-page
of the spectral sequence for \w{x\sp{\bullet}\in c\sp{+}X} is determined
by \w[,]{G\sb{r}x\sp{\bullet}} and we conclude from Theorem \ref{tcstem} the
following analogue of Corollary \ref{cstempr}:
\begin{corollary}\label{cctempr}
The \ww{\mathcal{P}\sp{r}}-equivalences induce isomorphisms of
the associated spectral sequences from the \ww{E\sb{r+2}}-term on.
\end{corollary}
\end{mysubsection}
\begin{mysubsection}{The $\mathcal{E}\sb{r}$-localization}
\label{scerl}
As in \S \ref{serl}, Assumption \ref{ass6.1}(1) for the Reedy structure on $X$
follows from the same assumption for \w{(X,\operatorname{Cof},\mathcal{W})} in \S \ref{ass6.1},
since cofibrations and weak equivalences in diagram categories are defined levelwise.
For (2), a family of generating (trivial) cofibrations for the Reedy
structure can be produced from a family of generating (trivial) cofibrations for
\w{(X,\operatorname{Cof},\mathcal{W})} by a standard argument.
For (3), we can identify the localization map \w{c\sp{+}X\to \LLc\sb{\Reedy}(c\sp{+}X)} with the map
\w{c\sp{+}X\to c\sp{+}\LW(X)} by \cite[Theorem 7.6.17]{CisiH}.
Thus, the localization map is accessible, since colimits in presheaf categories
can be calculated pointwise by \cite[5.1.2.2]{LurieHTT}.
Therefore, if we let \w{\mathcal{E}\sb{r}} denote the set of left Kan extensions of \w{\mathcal{G}\sb{r}}
(see \S \ref{dgmnp}) from the relevant truncations of \w[,]{c\sp{+}X} as in \S \ref{serl},
Theorem \ref{thm6.7} implies that \w{c\sp{+}X} has the structure of a quasi-category with
cofibrations and weak equivalences, in which the latter are the
\ww{\mathcal{E}\sb{r}}-equivalences \ -- \ that is,
\ww{\mathcal{E}\sb{r}}-local maps \w{x\sp{\bullet}\to y\sp{\bullet}} in \w[.]{c\sp{+}X} We denote by \w{R\sb{\mathcal{E}\sb{r}}} the right Bousfield
localization of \w{c\sp{+}X} with respect to \w[.]{\mathcal{E}\sb{r}}
\end{mysubsection}
We deduce from Corollary \ref{cor7.9}:
\begin{corollary}\label{thm8.6}
An \ww{\mathcal{E}\sb{r}}-equivalence \w{f: x\sp{\bullet} \to y\sp{\bullet}} in \w{c\sp{+}X} induces a bijection of
the associated spectral sequences at the \ww{E\sb{r}}-term.
\end{corollary}
\section{Introduction}
In the \emph{standard model} (SM) of elementary particle physics, the total number of leptons of each flavour is conserved. The discovery of neutrino oscillations \cite{Fukuda:1998mi, Ahmad:2001an, Eguchi:2002dm} revealed that this is a broken symmetry and leptons can change their flavour. \emph{Lepton flavour violation} (LFV) has however never been observed in the charged lepton sector. The exact mechanism and size of LFV being unknown, its study is of large interest, as it is linked to neutrino mass generation, \emph{CP} violation and new physics beyond the SM.
\begin{figure}
\begin{minipage}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.45\textwidth]{m3e_neutrino_osc.eps}\\ \vspace{3mm}
\includegraphics[width=0.45\textwidth]{m3e_susy.eps}\\ \vspace{3mm}
\includegraphics[width=0.45\textwidth]{m3e_tree.eps}\\ \vspace{3mm}
\caption{a): Feynman diagram for the \mte process via neutrino oscillation (indicated by the cross). b): Diagram for lepton flavour violation involving supersymmetric particles. c) Diagram for lepton flavour violation at tree level.}
\label{fig:m3e_neutrino_osc}
\end{minipage}
\hspace{0.02\linewidth}
\begin{minipage}[b]{0.48\linewidth}
\centering
\includegraphics[width=0.8\textwidth]{mu3ekappa_v32.eps}
\caption{Experimental limits and projected limits on the LFV mass scale $\Lambda$ as a function of the parameter $\kappa$ (see equation \ref{eq:kappa}), taken from \cite{Schoning_CHIPP}.}
\label{fig:mu3ekappa}
\end{minipage}
\end{figure}
Charged lepton flavour violating reactions can be induced by neutrino mixing in a loop diagram (see figure \ref{fig:m3e_neutrino_osc} a) which is however strongly suppressed, leading to (unobservable) LFV branching fractions in muon decays of less than $10^{-50}$. In many extensions of the SM, such as grand unified models \cite{Pati:1974yy,Georgi:1974sy,Langacker1981}, supersymmetric models \cite{Haber:1984rc} (see figure \ref{fig:m3e_neutrino_osc} b), left-right symmetric models \cite{Mohapatra:1974hk, Mohapatra:1974gc,Senjanovic:1975rk}, models with an extended Higgs sector \cite{Kakizaki:2003jk} and models where electroweak symmetry is broken dynamically \cite{Hill:2002ap}, an experimentally accessible amount of LFV is predicted. The observation of LFV in the charged lepton sector would be a sign for new physics, possibly at scales far beyond the reach of direct observation at e.g.~the Large Hadron Collider (LHC).
Many experiments have been performed or are in operation to search for LFV in the decays of muons or taus. Most prominent are the search for the radiative muon decay \meg \cite{Brooks:1999pu,Nicolo:2003at,Adam:2009ci,Adam:2011ch}, the decay \mte \cite{Bellgardt:1987du}, the conversion of captured muons to electrons \cite{Kaulard:1998rb} and LFV tau decays \cite{HAYASAKA2011}.
\section{Theory of \mte}
The lepton flavour violating three electron decay of the muon can be mediated either via loops (see figure \ref{fig:m3e_neutrino_osc} a) and b)) or at tree level, via the exchange of a (heavy) new particle (see figure \ref{fig:m3e_neutrino_osc} c)). The effective couplings of the loop diagram are of tensor type (dipole couplings) if the photon is real and receive additional vector-type contributions if the photon is virtual. The four fermion coupling of the tree diagram c) can be described as a contact interaction due to the large mass of the intermediate particle. In order to allow comparisons between \meg and \mte experiments, the following simplified Lagrangian, introducing a common mass scale $\Lambda$, is used:
\begin{equation}
\label{eq:kappa}
\mathcal{L}_{LFV} = \frac{m_\mu}{(\kappa +1) \Lambda^2}\bar{\mu_R}\sigma^{\mu\nu}e_L F_{\mu\nu} + \frac{\kappa}{(\kappa +1) \Lambda^2} (\bar{\mu_L}\gamma^\mu e_L)(\bar{e_L}\gamma_\mu e_L),
\end{equation}
here the left-left vector coupling is chosen as an example of the contact term (right); the parameter $\kappa$ describes the strength of the contact interaction amplitude relative to the loop amplitude. Figure \ref{fig:mu3ekappa} shows limits on the mass scale $\Lambda$ as a function of $\kappa$ for the MEG and SINDRUM experiments, as well as limits expected from the MEG experiment at design sensitivity and for a \mte experiment with sensitivities from $10^{-15}$ to $10^{-17}$. At the same branching fraction, \meg experiments are more sensitive at low values of $\kappa$ (dominance by the loop diagram), whilst \mte experiments constrain $\Lambda$ at high values of $\kappa$.
In the case of a dominant on-shell photon contribution ($\kappa \rightarrow 0$), a quasi model-independent relation between the \meg and \mte decay rates can be derived:
\begin{equation}
\frac{B(\mu \rightarrow eee)}{B(\mu \rightarrow e\gamma)} \approx 0.006.
\end{equation}
In order to set competitive constraints on LFV dipole couplings, a limit on the branching fraction of the decay \mte thus has to be about two orders of magnitude smaller than the best \meg limit. In case of a discovery, an \mte experiment (with a three-body final state) would give access to CP observables and thus allow for a search for CP violation. In \meg experiments this would require a (difficult) measurement of the photon polarization.
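The size of this ratio can be made plausible by the standard dipole-dominance estimate (a sketch, not from this paper, assuming the on-shell photon operator dominates; numerical values rounded):
\begin{equation}
\frac{B(\mu \rightarrow eee)}{B(\mu \rightarrow e\gamma)} \approx
\frac{\alpha}{3\pi}\left(\ln\frac{m_\mu^2}{m_e^2} - \frac{11}{4}\right)
\approx \frac{1}{3\pi\cdot 137}\left(10.7 - 2.75\right) \approx 0.006,
\end{equation}
where the factor $\alpha$ reflects the internal conversion of the photon into an $e^{+}e^{-}$ pair and the logarithm arises from the phase-space integration over the virtual photon.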
\section{Challenges for a \mte experiment}
A \mte experiment aiming for a sensitivity of $10^{-16}$ has to be able to suppress accidental backgrounds and the process $\mu \rightarrow eee \nu \bar{\nu}$ (\emph{internal conversion}) at least at the same level whilst running at a rate in excess of $10^9$ muons on target per second, rates which are achievable with a beam-line upgrade at the Paul Scherrer Institut (PSI) in Switzerland. Accidental background can be controlled by an excellent vertex and timing resolution, whilst the only distinguishing feature of the internal conversion decay is the missing momentum carried away by the neutrinos; thus a very precise track reconstruction is needed.
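The rate requirement can be put into perspective with a back-of-the-envelope estimate (illustrative numbers only; detection efficiency and duty factor are ignored): a 90\% C.L. limit at $10^{-16}$ with zero observed events corresponds to about $2.3$ expected signal events, i.e.\ upwards of $10^{16}$ stopped muons.

```python
# Back-of-the-envelope beam-time estimate (illustrative; perfect
# efficiency and duty factor assumed).  A 90% C.L. upper limit with
# zero observed events corresponds to 2.3 expected signal events.
target_branching = 1e-16
muon_rate = 1e9                     # muons on target per second
needed_muons = 2.3 / target_branching
days = needed_muons / muon_rate / 86400.0
print(f"{needed_muons:.1e} muons -> {days:.0f} days of beam")
```

At $10^9$ muons per second this is of order $10^7$ seconds of live time, i.e.\ roughly a year of realistic running.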
Conventional tracking detectors either have a limited rate capacity and resolution (gaseous detectors) or induce unacceptable amounts of multiple scattering (hybrid silicon pixel sensors). The development of thin, high-voltage active silicon pixel sensors (HV-MAPS) and scintillating fibre trackers opens up new possibilities which we plan to exploit for a novel experiment searching for the LFV decay \mte.
\section{Detector technologies}
Recently, thin active silicon pixel detectors have become available \cite{Turchetta:2001dy, Peric:2007zz, Peric2010504,Peric2010, Winter2010192, DeMasi2011296}, opening up new possibilities for low-material, high-precision tracking detectors. Produced in a CMOS process, these sensors include amplification and read-out logic inside of pixels with pitches from $20$ to $100~\mu\textrm{m}$. The silicon can be thinned to a thickness of $50~\mu\textrm{m}$ without affecting the signal collection efficiency. For the application in the \mte experiment, the high-voltage CMOS monolithic active pixel sensor (MAPS) technology developed at the Institute for Computer Science of the University of Heidelberg (\emph{ZITI}) is particularly interesting \cite{Peric:2007zz, Peric2010504,Peric2010}. Here a process originally developed for driving automotive and industrial devices with high voltage signals is used to create a monolithic pixel sensor, where all the transistors are placed in a deep N-well, which can be reversely biased with regard to the P-substrate with a high voltage, typically more than $50~\textrm{V}$. This creates a relatively large depleted area at the transition. Particles passing through this depleted zone will generate a sizeable amount of charge (around $2200~e$ for a depletion zone depth of $9~\mu\textrm{m}$, leading to a signal-to-noise ratio above $30$), which is separated by the strong electric field, leading to a fast current signal. The technology is well suited for tracking low momentum particles: because the signal is collected close to the chip surface, the chip can be thinned without signal loss. It is also resistant to both non-ionizing and (with a suitable design of the electronics) ionizing radiation. As a standard industrial process is used, the sensors are relatively cheap. The process has reticles of up to $2 \times 2~\textrm{cm}$ size, which can be combined by wire-bonding or stitching to $2 \times 6~\textrm{cm}$ sensors.
Without active cooling, the sensors can be read out at 10 to 40~MHz, leading to up to a hundred electron tracks per readout frame. Precise timing information helping to suppress accidental background should come from a scintillating fibre tracker with a time resolution better than 100~ps.
\section{Detector concept}
\begin{figure}
\centering
\includegraphics[width=1.00\textwidth]{Schematic-2.eps}
\caption{Schematic view of a possible \mte experiment.}
\label{fig:Schematic-2}
\end{figure}
The best geometry for a detector providing high-resolution tracking for momenta from 15 to 53~MeV/$c$ with a large angular coverage is currently under study. Both for resolution in a multiple-scattering environment and for ease of reconstruction, a spectrometer-like arrangement of two double layers separated by free space in a solenoidal magnetic field is preferred. In order to improve the momentum resolution at high momenta, tracks curling back into the detector are reconstructed and fitted. Various detector concepts are currently being studied using a Geant4 \cite{Agostinelli2003250} based simulation. Depending on the exact geometry, the detector will produce a data rate of about 50~Gbit/s, which is more than can practically be stored, but low enough for a triggerless operation mode with a farm of GPU-assisted PCs performing fast track fits.
\section{Conclusion}
New detector technologies (especially HV-MAPS), open the opportunity to build an experiment searching for the LFV decay \mte with a sensitivity of up to $10^{-16}$, and thus the potential of finding physics beyond the standard model or strongly constraining models. A collaboration is currently forming and a letter of intent will be submitted to PSI soon.
\section*{References}
\bibliographystyle{iopart-num}
\section{Introduction and results \label{sec-intro}}
The goal of this paper is to study scaling limits of random matrix products
$${\mathcal T}_{\lambda,n} {\mathcal T}_{\lambda,n-1}\cdots {\mathcal T}_{\lambda,1}$$
with $\lambda\to 0$ where the ${\mathcal T}_{\lambda,n}$ are perturbations of a fixed $d\times d$ matrix of the form
\begin{equation} \label{eq-def-Tt}
{\mathcal T}_{\lambda,n} = {\mathcal T}_0 + \lambda {\mathcal V}_{\lambda,n} + \lambda^2 {\mathcal W}_\lambda\;.
\end{equation}
Here, for every $\lambda$, ${\mathcal V}_{\lambda,n}$ is a family of
independent, identically distributed random matrices
with ${\mathbb E}({\mathcal V}_{\lambda,n})={\bf 0}$ and
${\mathcal W}_{\lambda}$ is a deterministic matrix, both of order one.
In the simplest case, $d=1$, ${\mathcal T}_0=1$, ${\mathcal W}_\lambda=0$, and ${\mathcal V}_{\lambda,n}={\mathcal V}_n$ are independent centered random variables
with variance one. Then, the classical Donsker central limit theorem applied to the logarithm shows that the product (from here on denoted ${\mathcal X}_{\lambda, n}={\mathcal T}_{\lambda,n}{\mathcal T}_{\lambda,n-1}\cdots {\mathcal T}_{\lambda,1}$) satisfies
$$
({\mathcal X}_{\lambda;t/\lambda^2}, t\ge 0)\; \Longrightarrow \;(e^{B(t)-t/2},\; t\ge 0)
$$
as $\lambda \to 0$, where $B(t)$ is a standard Brownian motion. Compared to the simplest case, the general case has extra interesting features.
\begin{itemize}
\item The matrix ${\mathcal T}_0$ can have eigenvalues of different absolute values, so the product can grow exponentially at different rates in different directions.
\item The matrix ${\mathcal T}_0$ can have complex eigenvalues of the same absolute value that act as rotations; this can produce an averaging effect for the added drift and noise terms.
\end{itemize}
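The scalar limit above can be checked with a short Monte Carlo simulation (an illustrative sketch, not part of the paper; the $\pm 1$ step distribution and the parameters are ad hoc choices): with $n=t/\lambda^2$, the sample mean of $\log {\mathcal X}_{\lambda,n}$ should be close to $-t/2$ and its variance close to $t$.

```python
import numpy as np

# Monte Carlo check of the scalar (d = 1) limit: X_{lam,n} is the
# product of (1 + lam * V_k) with centered, variance-one steps V_k.
# With n = t / lam^2, the law of log X_{lam,n} should be close to
# that of B(t) - t/2, i.e. Gaussian with mean -t/2 and variance t.
rng = np.random.default_rng(0)
lam, t = 0.02, 1.0
n = int(t / lam**2)                              # 2500 factors
trials = 5000
V = rng.choice([-1.0, 1.0], size=(trials, n))    # ad hoc +-1 noise
logX = np.log1p(lam * V).sum(axis=1)
print(logX.mean(), logX.var())                   # close to -0.5 and 1.0
```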
The main question that we resolve in this paper is the following.
\begin{question}
If the matrix ${\mathcal T}_0$ has an eigenvalue of large absolute value, can one still understand the fine evolution of the product in the directions corresponding to smaller eigenvalues?
\end{question}
Matrix products of
this kind are used in the study of quasi-1-dimensional random Schr\"odinger operators, and the large eigenvalues are related to so-called hyperbolic channels. Indeed, this case is the main motivating example, which we introduce in Section~\ref{sub-RS}.
When ${\mathcal T}_0$ is (a multiple of) a unitary matrix this type of result has been
established in that context, \cite{BR,SS2,VV1,VV2} and the limiting process is described by a
stochastic differential equation (SDE).
In \cite{KVV,VV1} the SDE limit was used to study the limiting eigenvalue statistics of such random
Schr\"odinger operators in a critical scaling $\lambda^2 n=t$
(at band edges one has a different scaling, as mentioned in Appendix~\ref{sub-Jordan}).
We can extend this result and obtain a limit for the rescaled eigenvalue process
in the presence of hyperbolic channels as well (cf. Theorem~\ref{theo-EV}).
In particular, we solve Problem~3 raised in \cite{VV1} and obtain limiting GOE statistics
for the Anderson model on sequences of long boxes (cf. Theorem~\ref{theo-GOE}) with appropriate scalings.
We essentially reduce the proof to a situation where it is left to analyze the same family of SDEs as in \cite{VV1}. Deriving the GOE statistics then
relies on the work of Erd\"os, Schlein, Yau and Yin \cite{ESYY, EYY}, but we do not repeat
these steps that are done in \cite{VV1}. The main result is stated in Section~\ref{sub-RS}, further details are given in Section~\ref{sec-RSO}.
In the considered scaling limit $\lambda^2 n=t$ of Theorem~\ref{theo-EV}, localization effects and Poisson statistics are not seen, and the description through an SDE arises.
We obtain that the hyperbolic channels only shift the eigenvalues but do not affect the local statistics.
In fact, fixing the width and base energy, the local eigenvalue statistics only depends on the number of so-called elliptic channels. This can be seen as a universality statement.
Increasing the number of elliptic channels and choosing appropriate sequences of models, the GOE statistics arises.
As a byproduct of this work we solve a conjecture from \cite{Sa1}, showing that
there is an SDE limit for the reduced transfer matrices in the presence of hyperbolic channels (cf. Remark~\ref{rem-red-T}).
\vspace{.2cm}
Random matrix ensembles such as the Gaussian Orthogonal Ensemble (GOE) were introduced by Wigner \cite{Wi} to model the observed repulsion between eigenvalues in large nuclei.
The local statistics is given by the ${\rm Sine}_1$ kernel, see e.g. \cite{Me}.
This type of repulsion statistics is expected for many randomly disordered systems of the same symmetry class (time reversal symmetry)
that have delocalized eigenfunctions. This is referred to as {\bf universality}.
Most models with rigorously proved universal bulk behavior are themselves ensembles of random matrices, e.g. \cite{DG, ESY, Joh, TV}.
Recently, T. Shcherbina proved universal GUE statistics (Gaussian Unitary Ensemble) for random block band matrix ensembles that
in some sense interpolate between the classical matrix ensembles and Anderson models \cite{Shc}.
The Anderson model was introduced by P. W. Anderson to describe disordered media like randomly doped semi-conductors
\cite{And}. It is given by the Laplacian and a random independent identically distributed potential and has significantly less randomness than the matrix ensemble models.
For large disorder and at the edge of the spectrum, the Anderson model in ${\mathbb Z}^d$ or ${\mathbb R}^d$
localizes \cite{FS, DLS, SW, CKM, AM, Aiz, Klo}
and has Poisson statistics \cite{Mi, Wa, GK}.
For small disorder in the bulk of the spectrum, localization and Poisson statistics appears in one and quasi-one dimensional systems \cite{GMP, KuS, CKM, Lac, KLS} (except if prevented by a symmetry \cite{SS3})
and is expected (but not proved) in 2 dimensions.
Delocalization for the Anderson model was first rigorously proved on regular trees (Bethe lattices) \cite{K1} and has been extended to several
infinite-dimensional tree-like graphs \cite{K1, ASW, FHS, AW, KLW, FHH, KS, Sa2, Sa3, Sha}.
Recently it was shown that there is a transition from localization to delocalization on normalized antitrees at exactly 2-dimensional growth rate \cite{Sa4}.
For 3 and higher dimensions one expects delocalized eigenfunctions (absolutely continuous spectrum) for small disorder and the eigenvalue statistics of large boxes should approximate GOE by universality.
However, proving any of these statements in ${\mathbb Z}^d$ or ${\mathbb R}^d$ for $d\geq 3$ remains a great mathematical challenge.
\vspace{.2cm}
The papers \cite{BR, VV1, VV2} are restricted to the subset of the important cases where all eigenvalues of ${\mathcal T}_0$ have the same absolute value (and there are no Jordan blocks).
The novelty of this work is to handle eigenvalues of different absolute value for ${\mathcal T}_0$, the application to Schr\"odinger operators
comes from applying Theorem~\ref{theo-SDE} to the transfer matrices.
Let us briefly explain why this is not a trivial extension.
If ${\mathcal T}_0$ (or $A{\mathcal T}_0 A^{-1}$ for some matrix $A$) is unitary one simply has to remove the free evolution from the random products.
To illustrate this, let for now ${\mathcal X}_{\lambda,n}={\mathcal T}_{\lambda,n} {\mathcal T}_{\lambda,n-1}\cdots {\mathcal T}_{\lambda,1}$. Then,
${\mathcal T}_0^{-n} {\mathcal X}_{\lambda,n}=({\bf 1}+\lambda {\mathcal T}_0^{-n} {\mathcal V}_{\lambda,n} {\mathcal T}_0^{(n-1)} + \lambda^2 {\mathcal T}_0^{-n} {\mathcal W}_\lambda {\mathcal T}_0^{(n-1)}) ({\mathcal T}_0^{-(n-1)}{\mathcal X}_{\lambda,n-1})$.
The conjugations like ${\mathcal T}_0^{-n} ({\mathcal V}_{\lambda,n}{\mathcal T}_0^{-1}) {\mathcal T}_0^n$ simply lead to an averaging effect over the compact group generated by the unitary ${\mathcal T}_0$ in the limit for the drift and diffusion terms.
Adapting techniques of Stroock and Varadhan \cite{SV} and of Ethier and Kurtz \cite{EK} to this situation (cf. Proposition~23 in \cite{VV2}),
one directly obtains an SDE limit for ${\mathcal T}_0^{-n} {\mathcal X}_{\lambda,n}$ in the scaling $\lambda^2 n= t$.
If ${\mathcal T}_0$ has eigenvalues of different sizes then generically some entries of ${\mathcal T}_0^{-n} {\mathcal W}_\lambda {\mathcal T}_0^{n-1}$ and the variance of some entries of
${\mathcal T}_0^{-n} {\mathcal V}_{\lambda,n}{\mathcal T}_0^{n-1}$ will grow exponentially in $n$. This destroys any hope of a limiting process.
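This growth obstruction is easy to observe numerically (an ad hoc illustration; the matrices below are arbitrary choices): conjugating a fixed $V$ by powers of a unitary ${\mathcal T}_0$ keeps all entries of order one, while for a hyperbolic ${\mathcal T}_0={\rm diag}(2,1/2)$ the lower-left entry grows like $2^{2n-1}$.

```python
import numpy as np

# Conjugating a fixed perturbation V by powers of T_0 (ad hoc example).
# Unitary T_0 (a rotation): the Frobenius norm of T0^{-n} V T0^{n-1}
# stays bounded.  Hyperbolic T_0 = diag(2, 1/2): the lower-left entry
# is multiplied by 2^n * 2^{n-1} = 2^{2n-1}, so the norm explodes.
theta = 0.7
T0_rot = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
T0_hyp = np.diag([2.0, 0.5])
V = np.array([[0.3, -0.1], [0.2, 0.4]])

def conj_norm(T0, n):
    return np.linalg.norm(
        np.linalg.matrix_power(np.linalg.inv(T0), n)
        @ V @ np.linalg.matrix_power(T0, n - 1))

print(conj_norm(T0_rot, 30))   # stays of order one
print(conj_norm(T0_hyp, 30))   # of order 10**17
```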
Instead one may then consider a process $U^{-n} {\mathcal X}_{\lambda,n}$ where $U$ is a unitary just counteracting the fast rotations.
But then one still has different directions growing at different exponential rates
even for the free evolution, and simply projecting to some subspace, $PU^{-n}{\mathcal T}_0^{-n} {\mathcal X}_{\lambda,n}$, does not work either!
The trick lies in finding a projection which cuts off the exponential growth of the free evolution {\bf and} does not screw up the convergence of the
random evolution to some drift and diffusion terms.
The correct way to handle the exponentially growing directions is to choose a Schur complement. The exponentially decreasing directions will tend to zero and not matter, and the directions of size $1$
will lead to a limit. The exponentially growing directions have some non-trivial effect and lead to an additional drift term.
As the Schur complement itself is not a Markov process, it will be better to consider it as part of a quotient of ${\mathcal X}_{\lambda,n}$ modulo a certain subgroup of ${\rm GL}(d,{\mathbb C})$.
Then one still needs several estimates to handle the appearing inverses in the Schur complement and the error terms before one can apply Proposition~\ref{p_turboEK}
which is a modification of Proposition~23 in \cite{VV2}.
\vspace{.2cm}
In the Appendix we will state some further general corollaries or applications of Theorem~\ref{theo-SDE}.
Although we cannot take an SDE limit of the entire
matrix as indicated above, it will be possible to describe the limit of its action on Grassmannians and flag manifolds (cf. Appendix~\ref{sec-corr}).
For this limit the correlations for SDE limits corresponding to different sizes of eigenvalues of ${\mathcal T}_0$ are important.
The limit processes live in certain submanifolds that are stable under the free, non-random
dynamics of ${\mathcal T}_0$. This result is related to the numerical calculations in \cite{RS}, which considered the action of the transfer matrices on so-called Lagrangian planes, or Lagrangian Grassmannians
(a certain invariant subspace of a Grassmannian).
The limiting submanifold corresponds to the `freezing' of some phases related to the hyperbolic channels. In the scaling limit, only a motion of the part corresponding to the so-called elliptic channels can be seen
and it is described by an SDE.
In Appendix~\ref{sub-Jordan} we also study the case of non-diagonalizable Jordan blocks. These can be dealt with by a $\lambda$-dependent basis change
which leads to a different critical scaling. In the Schr\"odinger case such Jordan blocks appear at band-edges and
we give an example for a Jordan block of size $2d$ for general $d$.
\subsection{General SDE limits \label{sub-results}}
Without loss of generality we focus on the eigenvalues with absolute value $1$ and assume that ${\mathcal T}_0$ has no Jordan blocks for eigenvalues of size $1$.
Next, we conjugate the matrices ${\mathcal T}_{\lambda,n}$ to get ${\mathcal T}_0$ in Jordan form. We may write it as a block diagonal matrix of dimension $d_0+d_1+d_2$ of the form
\begin{equation}\label{eq-T_0}
{\mathcal T}_0=
\begin{pmatrix}
\Gamma_0 & & \\ & U & \\ & & \Gamma_2^{-1}
\end{pmatrix}\;
\end{equation}
where $U$ is a unitary, and $\Gamma_0$ and $\Gamma_2$ have spectral radius smaller than $1$.
The block $\Gamma_0$ corresponds to the exponentially decaying directions and the block $\Gamma_2^{-1}$ to the
exponentially growing directions of ${\mathcal T}_0$.
The only way the matrix product ${\mathcal T}_{\lambda,n} \cdots {\mathcal T}_{\lambda,1} $ can have a continuous limiting evolution is if we compensate for the macroscopic rotations
given by $U$ (as in \cite{BR,VV1,VV2}).
Hence define
\begin{equation} \label{eq-def-Xx}
{\mathcal X}_{\lambda,n} := {\mathcal R}^{-n} {\mathcal T}_{\lambda,n}{\mathcal T}_{\lambda,n-1} \cdots {\mathcal T}_{\lambda,1} {\mathcal X}_0
\quad\text{where}\quad
{\mathcal R}=\pmat{{\bf 1}_{d_0} & & \\ & U & \\ & & {\bf 1}_{d_2}}\,
\end{equation}
where ${\mathcal X}_0$ is some initial condition and ${\bf 1}_d$ is the identity matrix of dimension $d$.
In most of the following calculations
we will use a subdivision in blocks of size
$d_0+d_1$ and $d_2$. Let us define the Schur complement $X_{\lambda,n}$
of size $(d_0+d_1) \times (d_0+d_1)$ by
\begin{equation}
X_{\lambda,n} = A_{\lambda,n}-B_{\lambda,n}D_{\lambda,n}^{-1} C_{\lambda,n}\,,\quad
\text{where}\quad
{\mathcal X}_{\lambda,n}=\begin{pmatrix}
A_{\lambda,n} & B_{\lambda,n} \\ C_{\lambda,n} & D_{\lambda,n}
\end{pmatrix}
\label{eq-def-X}\,.
\end{equation}
If ${\mathcal X}_{\lambda,n}$ and $D_{\lambda,n}$ are both invertible, then
\begin{equation} \label{eq-def-X-2}
X_{\lambda,n} = ({\mathcal P}^* {\mathcal X}_{\lambda,n}^{-1} {\mathcal P})^{-1}\quad
\end{equation}
where $\mathcal P$ is the projection to the first $d_0+d_1$ coordinates.
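As a quick numerical sanity check (not part of the argument), the two expressions \eqref{eq-def-X} and \eqref{eq-def-X-2} for the Schur complement can be compared on a random matrix; the block sizes below are an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)
d0, d1, d2 = 2, 3, 2            # illustrative block sizes
d = d0 + d1 + d2

# A random complex matrix playing the role of X_{lambda,n} (a.s. invertible).
X = rng.standard_normal((d, d)) + 1j * rng.standard_normal((d, d))
A, B = X[:d0+d1, :d0+d1], X[:d0+d1, d0+d1:]
C, D = X[d0+d1:, :d0+d1], X[d0+d1:, d0+d1:]

# Schur complement as in (eq-def-X) ...
schur = A - B @ np.linalg.inv(D) @ C

# ... and via the projection formula (eq-def-X-2) with P the injection
# of the first d0+d1 coordinates.
P = np.zeros((d, d0 + d1))
P[:d0+d1, :] = np.eye(d0 + d1)
schur2 = np.linalg.inv(P.conj().T @ np.linalg.inv(X) @ P)

assert np.allclose(schur, schur2)
```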
Note that invertibility of $D_{\lambda,n}$ is required to define $X_{\lambda,n}$.
Therefore, we require the starting value $D_0$ to be invertible, where
\begin{equation*}
{\mathcal X}_0=\begin{pmatrix}
A_0 & B_0 \\ C_0 & D_0
\end{pmatrix}\;,\quad\text{and we define}\quad
X_0=A_0-B_0 D_0^{-1} C_0\,.
\end{equation*}
The first important observation, explained in Section~\ref{sec-evol}, is that the pair
\begin{equation}\label{eq-def-Z}
(X_{\lambda,n}, Z_{\lambda,n})\quad\text{where}\quad
Z_{\lambda,n}=B_{\lambda,n} D_{\lambda,n}^{-1}
\end{equation}
is a Markov process. Therefore, it will be more convenient to study this pair.
\vspace{.2cm}
We need the following assumptions.
\begin{assumptions} \label{assumption}
We assume that for
some constants $\epsilon>0$, $\lambda_0>0$ one has
\begin{align}
\label{eq-cond-Vv-m}
&\sup_{0<\lambda< \lambda_0} {\mathbb E}(\|{\mathcal V}_{\lambda,n}\|^{6+\epsilon}) < \infty\;.
\end{align}
Furthermore we assume that the limits of first and second moments
\begin{align}
\label{eq-cond-Vv-lim}
\lim_{\lambda\to 0} {\mathcal W}_{\lambda}\;,\qquad
\lim_{\lambda\to 0} {\mathbb E}(({\mathcal V}_{\lambda,n})_{i,j} ({\mathcal V}_{\lambda,n})_{k,l}) \;, \qquad
&\lim_{\lambda\to 0} {\mathbb E}(({\mathcal V}^*_{\lambda,n})_{i,j} ({\mathcal V}_{\lambda,n})_{l,k}) \qquad \text{exist.}
\end{align}
\end{assumptions}
\newcommand{\vl}[2]{V_{\lambda,{#1}{#2} }}
\newcommand{\wl}[2]{W_{\lambda,{#1}{#2} }}
\vspace{.2cm}
In order to state the main theorem, we need to subdivide ${\mathcal V}_{\lambda,n},\,{\mathcal W}_\lambda$ in blocks of sizes $d_0,\, d_1,\,d_2$.
We denote the $d_j \times d_k$ blocks by
\begin{equation}\label{eq-def-Vv}
\vl{j}{k}, \qquad \text{and} \qquad \wl{j}{k} \qquad \text{respectively.}
\end{equation}
The covariances of the $d_1\times d_1$ block $\vl11$ will be important.
A useful way to encode covariances of centered matrix-valued random variables $A$ and $B$ is
to consider the matrix-valued linear functions
$M\mapsto {\mathbb E}(A^\top M B)$ and $M\mapsto {\mathbb E}(A^* M B)$. Choosing matrices $M$ with one entry $1$ and all other entries zero one can read off ${\mathbb E}(A_{ij} B_{kl})$ and ${\mathbb E}(\overline{A}_{ij} B_{kl})$ directly.
Let us therefore define
\begin{equation}\label{eq:h}
h(M):=\lim_{\lambda\to 0} {\mathbb E}(\vl11^\top M \vl11)\;,\qquad
\widehat{h}(M):=\lim_{\lambda\to 0} {\mathbb E}(\vl11^* M \vl11)\;.
\end{equation}
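The way such maps encode covariances can be checked on a toy example: for a centered matrix-valued random variable, choosing $M=E_{jk}$ (a single entry $1$) gives $({\mathbb E}(A^\top E_{jk} A))_{il}={\mathbb E}(A_{ji}A_{kl})$ exactly. The following sketch, with an arbitrary two-point distribution, verifies this; it is an illustration, not code from the paper.

```python
import numpy as np

# Centered matrix-valued random variable A taking two values with prob 1/2.
A1 = np.array([[1.0, 2.0], [0.0, -1.0]])
A_vals = [A1, -A1]                        # E(A) = 0

def E(f):                                 # exact expectation over the two values
    return sum(f(A) for A in A_vals) / len(A_vals)

# The linear map M -> E(A^T M A) encodes all covariances E(A_ij A_kl):
# with M = E_jk one has (E(A^T E_jk A))_{il} = E(A_ji A_kl).
i, j, k, l = 0, 1, 0, 1
E_jk = np.zeros((2, 2)); E_jk[j, k] = 1.0

lhs = E(lambda A: A.T @ E_jk @ A)[i, l]
rhs = E(lambda A: A[j, i] * A[k, l])
assert np.isclose(lhs, rhs)
```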
Furthermore, the lowest order drift term of the limit will come from the lowest order expansion of the Schur complement and hence contains some influence from
the exponentially growing directions.
Therefore, let
\begin{equation}\label{eq-def-W}
W:=\lim_{\lambda\to 0} \wl11-{\mathbb E}( \vl12\Gamma_2\vl21)\;.
\end{equation}
By the assumption \eqref{eq-cond-Vv-lim} above these limits exist.
\begin{theo}\label{theo-SDE}
Let the assumptions \eqref{eq-cond-Vv-m} and \eqref{eq-cond-Vv-lim} hold.
Then, for $t>0$ we have convergence in law,
$Z_{\frac1{\sqrt{n}},\lfloor tn \rfloor} \; \Longrightarrow \; {\bf 0}$ and
\begin{equation}\notag
X_{\frac{1}{\sqrt{n}},\lfloor tn \rfloor} \; \Longrightarrow \; X_t=
\pmat{{\bf 0}_{d_0} & \\ & \Lambda_t} X_0 \qtx{for} n\to\infty\;.
\end{equation}
$\Lambda_t$ is a $d_1\times d_1$ matrix valued process and the solution of
\begin{equation}\notag
d\Lambda_{t}=V \Lambda_t \,dt + d{\mathcal B}_t\,\Lambda_t\,,\quad
\Lambda_0={\bf 1}\,.
\end{equation}
and ${\mathcal B}_t$ is a complex matrix Brownian motion (i.e.\ ${\mathcal B}_t$ is Gaussian) with covariances
\begin{equation} \label{eq-variances}
{\mathbb E}({\mathcal B}_t^\top M {\mathcal B}_t)=
g(M) t\,,\quad
{\mathbb E}({\mathcal B}_t^* M {\mathcal B}_t)=\widehat g(M) t
\end{equation}
where
\begin{align} \label{eq-def-V}
V &=\int_{\langle U\rangle}
u W U^* u^* \,d u \\
\label{eq-def-g}
g(M) &= \int_{\langle U\rangle} \overline u \,\overline U \,h( u ^\top M u )\,U^* u ^*\,d u \;\\
\label{eq-def-hg}
\widehat g(M) &=
\int_{\langle U\rangle} u U\, \widehat h( u ^* M u )\, U^* u ^*\,d u \;.
\end{align}
Here, $\langle U\rangle$ denotes the compact abelian group generated by the unitary $U$,
i.e. the closure of the set of all powers of $U$, and $d u $ denotes the Haar measure on $\langle U \rangle$.
\end{theo}
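The Haar averages over $\langle U\rangle$ in \eqref{eq-def-V}--\eqref{eq-def-hg} can be illustrated numerically: for a diagonal $U$ with rationally independent phases, averaging $u\mapsto uMu^*$ over powers of $U$ retains the phase-matched (here: diagonal) part of $M$ and averages out the rest. The phases and the test matrix below are illustrative choices.

```python
import numpy as np

# U unitary with rationally independent phases: the Haar average over <U>
# of u -> u M u* converges to the "phase-matched" part of M (its diagonal).
theta = np.array([1.0, np.sqrt(2.0), np.sqrt(3.0)])
U = np.diag(np.exp(1j * theta))
M = np.arange(9.0).reshape(3, 3) + 1j

N = 20000
avg = np.zeros((3, 3), dtype=complex)
Un = np.eye(3, dtype=complex)
for _ in range(N):                  # Cesaro average over the powers U^n
    avg += Un @ M @ Un.conj().T
    Un = Un @ U
avg /= N

assert np.allclose(np.diag(avg), np.diag(M))              # diagonal is invariant
assert np.max(np.abs(avg - np.diag(np.diag(M)))) < 1e-2   # off-diagonal averages out
```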
\begin{rem} \label{rem-SDE}
\begin{enumerate}[{\rm (i)}]
\item The analogous theorem in the situation $d_2=0$ (no exponentially growing directions)
holds. In this case the matrices $B_{\lambda,n}, C_{\lambda,n}$ and $D_{\lambda,n}$ do not
exist and one simply has $X_{\lambda,n}=A_{\lambda,n}={\mathcal X}_{\lambda,n}$. For this case one can actually simplify
some of the estimates done for the proof, as one does not need to work with the process
$B_{\lambda,n}D_{\lambda,n}^{-1}$ and no inverse is required.
\item In the case where $d_0=0$, i.e.\ no exponentially decaying directions, the Theorem
also works fine. In this case one simply has $X_t=\Lambda_t X_0$.
\item The Theorem does not hold for $t=0$ and indeed it looks contradictory for small $t$.
However, the exponentially decaying directions go to zero exponentially fast so that one obtains
\begin{equation*}
X_{\frac1{\sqrt{n}},\lfloor n^\alpha \rfloor} \,\Longrightarrow\, \pmat{{\bf 0} & \\ & {\bf 1}_{d_1}} \,X_0
\end{equation*}
for sufficiently small $\alpha$ which gives the initial conditions for the limiting process.
\item \label{rem1-rot} When defining the process ${\mathcal X}_{\lambda,n}$
one may want to subtract some of the oscillating terms in the growing and decaying directions as well, i.e.
one may want to replace ${\mathcal R}$ in \eqref{eq-def-Xx} by a unitary of the form
$\hat{\mathcal R}=\smat{U_0&&\\\ &U&\\\ &&U_2}$ written in blocks of sizes $d_0, d_1, d_2$, respectively.
Then let $$\hat{\mathcal X}_{\lambda,n}=\hat{\mathcal R}^{-n}{\mathcal R}^n {\mathcal X}_{\lambda,n}=
\pmat{\hat A_{\lambda,n} & \hat B_{\lambda,n} \\ \hat C_{\lambda,n} & \hat D_{\lambda,n}}$$
and define
the corresponding Schur complement $\hat X_{\lambda,n}=\hat A_{\lambda,n} - \hat B_{\lambda,n}
\hat D_{\lambda,n}^{-1}\hat C_{\lambda,n} $ as well as $\hat Z_{\lambda,n}= \hat B_{\lambda,n} \hat D_{\lambda,n}^{-1}$.
Simple algebra shows that
$$
\hat X_{\lambda,n} = \pmat{U_0^{-n} & \\ & {\bf 1}} X_{\lambda,n}\;,\qquad
\hat Z_{\lambda,n} = \pmat{U_0^{-n} & \\ & {\bf 1}} Z_{\lambda,n} U_2^n\;.
$$
Hence, it is easy to see that for $n\to \infty$,
$$
\hat Z_{\frac1{\sqrt{n}},\lfloor nt \rfloor} \Rightarrow 0\;,\qquad
\hat X_{\frac1{\sqrt{n}},\lfloor nt \rfloor} \Rightarrow X_t
$$
where $X_t$ is the exact same process as in Theorem~\ref{theo-SDE}.
\item When ${\mathcal T}_0$ has eigenvalues of absolute value $c$ different from $1$, and ${\mathcal T}_0$ is diagonalized (or in Jordan form)
so that the corresponding eigenspaces are spanned by coordinate vectors and have no Jordan blocks, then
we can apply Theorem~\ref{theo-SDE} to the products of ${\mathcal T}_{\lambda,n}/c$. Moreover, considering products of direct sums
${\mathcal T}_{\lambda,n}/c \oplus {\mathcal T}_{\lambda,n}/c'$ the application of Theorem~\ref{theo-SDE} gives correlations of these SDEs, cf. Theorem~\ref{theo-SDE2}.
\item The stated SDE limit can be seen as limiting processes of equivalence classes of ${\mathcal R}^{-n} {\mathcal X}_{\lambda,n}$ modulo a certain group ${\mathbb G}$.
This means the following: For a specific subgroup ${\mathbb G}\subset {\rm GL}(d,{\mathbb C})$ we define two matrices to be equivalent, $M\sim M'$, if and only if
$M=M' G$ for some $G\in{\mathbb G}$. This equivalence relation defines the quotient ${\rm Mat}(d,{\mathbb C})/{\mathbb G}$.
One may ask for which subgroups ${\mathbb G}$ of ${\rm GL}(d,{\mathbb C})$ one has some normalization $\widetilde{\mathcal R}$ such that the process
$\widetilde{\mathcal R}^{-n} {\mathcal T}_{1/\sqrt{n}, n} \cdots {\mathcal T}_{1/\sqrt{n},1}\,{\mathcal X}_{0} \;/\; {\mathbb G}$
has a distributional limit.
As we will show in Appendix~\ref{sec-corr}, for invertible and diagonalizable ${\mathcal T}_0$
we get such a limit on the so-called flag manifold (cf. Theorem~\ref{th-flag}) and whenever ${\mathbb G}$ is algebraic and cocompact (cf. Corollary~\ref{cor-flag}).
\item Finally, one might pose the question whether one can also obtain some similar result in the presence of Jordan blocks.
In fact, combining this work with techniques of \cite{SS1} one can obtain a limit for a process obtained with a $\lambda$-dependent basis change.
In terms of Schr\"odinger operators these situations occur on band-edges, cf. Appendix~\ref{sub-Jordan}.
\end{enumerate}
\end{rem}
The proof is structured in the following way. Section~\ref{app} states an abstract theorem for convergence of Markov processes to SDE limits that we will use.
In Section~\ref{sec-evol} we will develop the evolution equations for the process $(X_{\lambda,n}, Z_{\lambda,n})$, together with
some crucial estimates. In Section~\ref{sec-lim} we will then obtain the limiting stochastic differential equations
as in Theorem~\ref{theo-SDE}. The reader interested in the proofs can continue with Section~\ref{app}.
Applications to random Schr\"odinger operators are given in the following Subsection and in Section~\ref{sec-RSO} which also contains the proofs.
\subsection{The GOE limit for random Schr\"odinger operators \label{sub-RS}}
Let $\mathbb Z_{n,d} $ be the adjacency matrix of the $n\times d$ grid, and let $V$ be a diagonal matrix with i.i.d. random entries of the same dimension.
A fundamental question in the theory of random Schr\"odinger operators is how the eigenvalues of
\begin{equation}\label{eq:rsch}
\mathbb Z_{n,d} + \lambda V
\end{equation}
are distributed. Predictions from the physics literature suggest that in certain scaling regimes that correspond to the delocalized regime, random matrix behavior should appear. More precisely,
the random set of eigenvalues, in a window centered at some energy $E$ and scaled properly, should converge to the Sine$_1$ point process. The latter process is the large-$n$ limit of the random set of eigenvalues of the $n\times n$ Gaussian orthogonal ensemble near 0.
A version of such predictions was proved rigorously \cite{VV1} for subsequences of $n_i\gg d_i\to\infty$, $\lambda_i^2 n_i\to 0$ but only near energies $E_i$ tending to zero. In a modified model where the edges in the $d$ direction get weight $r<1$ the proof of \cite{VV1} works for almost all energies in the range $(-2+2r,2-2r)$.
Proving such claims for almost all energies of the original model \eqref{eq:rsch} presented a challenge, the main motivation for the present paper.
For better comparison with \cite{VV1} let us re-introduce the weight $r$.
It is natural to think of operators like \eqref{eq:rsch} as acting on a sequence $\psi=(\psi_1,\ldots,\psi_n)$ of $d$-vectors.
So given the weight $r$, let us define the $nd\times nd$ matrix $H_{\lambda,n,d}$ by
\begin{equation}\label{eq-H-lnd}
(H_{\lambda,n,d} \psi)_k = \psi_{k+1} + \psi_{k-1} + (r{\mathbb Z}_d + \lambda V_k )\psi_k
\end{equation}
with the notational convention that $\psi_0=\psi_{n+1}=0$.
Here, ${\mathbb Z}_d$ is the adjacency matrix of a path graph with $d$ vertices
and the $V_k$ are i.i.d. real diagonal matrices, i.e.,
\begin{equation}\label{eq-GOE-cond1}
\mathbb Z_d=\pmat{{\bf 0} & {\bf 1} & & \\
{\bf 1} & \ddots & \ddots & \\
& \ddots & \ddots & {\bf 1} \\
& & {\bf 1} & {\bf 0}}\;,\qquad
V_1=\pmat{v_1 & & \\
& \ddots & \\
& & v_d}
\end{equation}
with
\begin{equation} \label{eq-GOE-cond2}
{\mathbb E}(v_j)=0,\quad{\mathbb E}(|v_j|^{6+\varepsilon})<\infty,\qquad{\mathbb E}(v_i v_j)=\delta_{ij}.
\end{equation}
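For concreteness, the operator \eqref{eq-H-lnd} can be assembled as an $nd\times nd$ matrix via Kronecker products. The following is an illustrative sketch (parameter values are arbitrary, not from the paper); at $\lambda=0$ the spectrum consists of all sums $2\cos(a\pi/(n+1))+2r\cos(b\pi/(d+1))$, which provides a check.

```python
import numpy as np

# Sketch of H_{lambda,n,d} from (eq-H-lnd); Z_m is the path-graph adjacency matrix.
def path_adjacency(m):
    Z = np.zeros((m, m))
    idx = np.arange(m - 1)
    Z[idx, idx + 1] = Z[idx + 1, idx] = 1.0
    return Z

def H(lam, n, d, r, v):
    # v: length n*d vector of potential values (the diagonals of the V_k)
    return (np.kron(path_adjacency(n), np.eye(d))        # psi_{k+1} + psi_{k-1}
            + np.kron(np.eye(n), r * path_adjacency(d))  # r * Z_d acting on psi_k
            + lam * np.diag(v))                          # lambda * V_k

rng = np.random.default_rng(1)
n, d, r, lam = 6, 4, 1.0, 0.3
Hm = H(lam, n, d, r, rng.standard_normal(n * d))
assert np.allclose(Hm, Hm.T)                 # symmetric operator

# Free spectrum check: eigenvalues of the Kronecker sum are pairwise sums.
ev = np.sort(np.linalg.eigvalsh(H(0.0, n, d, r, np.zeros(n * d))))
pred = np.sort([2*np.cos(a*np.pi/(n+1)) + 2*r*np.cos(b*np.pi/(d+1))
                for a in range(1, n+1) for b in range(1, d+1)])
assert np.allclose(ev, pred)
```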
Then we obtain the following:
\begin{theo}\label{theo-GOE}
For any fixed $r>0$ and almost every energy $E\in (-2-2r,2+2r)$ there exist sequences $n_k \gg d_k \to \infty$,\, $\sigma_k^2:=\lambda_k^2 n_k \to 0,\,$
and normalizing factors $\nu_k\sim d_k n_k / \sigma_k$ such that
the process of eigenvalues of
$$\nu_k \left( H_{\lambda_k,n_k,d_k} - E\right)$$
converges to the ${\rm Sine}_1$ process.
In particular the level statistics corresponds to GOE statistics in this limit.
\end{theo}
Theorem \ref{theo-GOE} resolves Problem~3 posed in {\rm \cite{VV1}}. There one has $r<1$ and $E\in (-2+2r, 2-2r)$ or $r=1$ and a sequence of energies converging to $0$.
(Note that this interval is smaller than the one in Theorem~\ref{theo-GOE} and in fact empty for $r\geq 1$). Theorem~\ref{theo-GOE} applies to the exact Anderson model $r=1$ with any fixed energy in the interior $(-4,4)$ of the spectrum of the discrete two-dimensional Laplacian. It also applies in the case $r>1$. This is because hyperbolic channels can now be handled for the SDE limit.
The exact definition of `elliptic' and `hyperbolic' channels will be given in Section~\ref{sec-RSO}.
Overcoming this difficulty was the main motivation for this work.
Essentially, only the elliptic channels play a role in the eigenvalue process limit. It is thus important to have a sequence along which the number of elliptic channels grows to infinity.
Indeed, one can obtain GOE statistics even for a sequence of energies $E_k$ approaching the edge of the spectrum $|E|=2+2r$. For this, one needs that the sequence $d_k$ grows fast enough, such that the number of elliptic channels at energy $E_k$ grows.
Further details will be given in Section~\ref{sec-RSO} where we will also consider an SDE description for the eigenvalue processes of operators on strips with a fixed width in the critical scaling.
The operators considered are slightly more general than \eqref{eq-H-lnd}.
\section{A limit theorem for Markov processes \label{app}}
The key idea is to use a variation of Proposition~23 in \cite{VV2} to obtain the convergence to the limiting process.
\begin{prop}\label{p_turboEK}
Fix $T>0$, and for each $m\ge 1$ consider a Markov chain
$$
(X^m_n\in {\mathbb R}^d,\, n =1,\ldots, \lfloor mT
\rfloor)
$$
as well as a sequence of ``good'' subsets $G_m$ of ${\mathbb R}^d$.
Let $Y^m_n(x)$ be distributed as the increment
$X^m_{n+1}-x$ given $X^m_n=x\in G_m$. We define
$$
b^m(t,x)= m {\mathbb E}[ Y_{\lfloor mt\rfloor}^m(x)],\qquad
a^m(t,x)=m {\mathbb E}[ Y_{\lfloor mt\rfloor}^m(x) Y_{\lfloor
mt\rfloor}^m(x)^\textup{T}].
$$
Let $d'\le d$, and let $\tilde x$ denote the first $d'$ coordinates
of $x$. These are the coordinates that will be relevant in the limit.
Also let $\tilde b^m$ denote the first $d'$ coordinates of $b^m$ and $\tilde a^m$ be the
upper left $d'\times d'$ sub-matrix of $a^m$.
Furthermore, let $f:{\mathbb Z}_+\to{\mathbb Z}_+$ be a function with $f(m)=o(m)$, i.e.\ $\lim_{m\to\infty} f(m) / m = 0$.
Suppose that as $m\to \infty $ for $x,y\in G_m$ we have
\begin{align}
& |\tilde a^m(t,x)-\tilde a^m(t,y)|+|\tilde b^m(t,x)-\tilde b^m(t,y)| \le
c|\tilde x-\tilde y|+o(1)\label{e lip}\\
&\sup_{x\in G_m,n} {\mathbb E}[|\tilde Y^m_n(x)|^3] \le c\,m^{-3/2}\quad\text{for all}\quad
n\geq f(m) \label{e 3m},
\end{align}
and that there are functions $a,b$ from $[0,T]\times{\mathbb R}^{d'}$ to
${\mathbb R}^{d'\times d'}$ and ${\mathbb R}^{d'}$, respectively, with bounded first and second
derivatives, so that
\begin{eqnarray}
\sup_{x\in G_m, t} \Big|\int_0^t \tilde a^m(s,x)\,ds-\int_0^t a(s,\tilde x)\,ds
\Big| &\to& 0\,, \\
\sup_{x\in G_m,t} \Big|\int_0^t \tilde b^m(s,x)\,ds-\int_0^t b(s,\tilde x)\,ds
\Big|&\to& 0\,. \label{e fia}
\end{eqnarray}
Suppose further that
$$
\tilde X_{f(m)}^m\Longrightarrow X_0
$$
and that ${\mathbb P}(X^m_n \in G_m \mbox{ for all }n\geq f(m))\to 1$.
Then $(\tilde X^m_{\lfloor m t\rfloor}, 0 < t \le T)$ converges in law
to the unique solution of the SDE
$$
dX = b \,dt + a\, dB, \qquad X(0)=X_0.$$
\end{prop}
\begin{proof}
This is essentially Proposition 23 in \cite{VV2}. The first difference is
that the coordinates $d'+1,\ldots, d$ of the $X^m$ do not appear in
the limiting process. A careful examination of the proof of that Proposition shows
that it was not necessary to assume that all coordinates appear in the limit, as long as
the auxiliary coordinates do not influence the variance and drift asymptotics.
The second difference is the introduction of the ``good'' set $G_m$, possibly a proper subset of ${\mathbb R}^d$. Since we assume
that the processes $X^m$ stay in $G_m$ with probability tending to one, we can apply Proposition 23 of \cite{VV2}
to $X^m$ stopped when it leaves this set. Then, the probability that the stopped process is different from the original
tends to zero, completing the proof.
The third difference is the weak convergence of $\tilde X_{f(m)}^m$ instead of $\tilde X^m_0$
and that we have the bound in \eqref{e 3m} only for $n\geq f(m)$.
Note that for the Markov family $\hat X^{m}_l = X^{m}_{\max(l,f(m))}$
all the same conditions apply with $f(m)=0$ and the initial conditions converge weakly.
Moreover, for any fixed $t>0$ and $m$ large enough one has
$\hat X^m_{\lfloor mt \rfloor}=X^m_{\lfloor mt \rfloor}$.
\end{proof}
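To see the normalizations $b^m,a^m$ at work, consider the Euler-type toy chain $X_{n+1}=X_n+b/m+\sigma\xi_n/\sqrt m$ with $\xi_n=\pm1$; then $b^m=b$ exactly and $a^m=\sigma^2+b^2/m$. The following sketch (an illustration, not code from the paper) computes these exactly for the two-point increment distribution.

```python
import numpy as np

# Toy check of the drift/variance normalizations b^m, a^m in the proposition.
b, sigma, m = 0.7, 1.3, 10**4

# Increments Y(x) = b/m + sigma*xi/sqrt(m) with xi = +-1 (each prob 1/2);
# expectations over the two values are exact, no Monte Carlo needed.
Y_vals = [b/m + sigma/np.sqrt(m), b/m - sigma/np.sqrt(m)]
bm = m * sum(Y_vals) / 2             # b^m = m E[Y]     = b
am = m * sum(y*y for y in Y_vals) / 2  # a^m = m E[Y^2] = sigma^2 + b^2/m

assert np.isclose(bm, b)
assert abs(am - sigma**2) < 1e-3     # the error is exactly b^2/m
```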
We will use this proposition with $m=\lambda^{-2}$ or $\lambda=1/\sqrt{m}$. $X^m_n$ will correspond to the pair $(X_{1/\sqrt{m}\,,\,n}\,,\,Z_{1/\sqrt{m},n})$
whereas $\tilde X^m_n$ will only be the part of $X_{1/\sqrt{m}\,,\,n}$ giving the SDE limit in Theorem~\ref{theo-SDE}.
Moreover, many of the estimates will only work with high probability which will be treated by introducing stopping times that will not matter in the limit.
\section{Evolution equation and estimates \label{sec-evol}}
In this section we show the basic estimates needed to establish the conditions of the proposition above.
Recall that ${\mathcal T}_{\lambda,n}={\mathcal T}_0+\lambda {\mathcal V}_{\lambda,n} + \lambda^2 {\mathcal W}_\lambda$ where the disordered part
satisfies the assumptions \eqref{eq-cond-Vv-m} and \eqref{eq-cond-Vv-lim}.
For convenience we define ${\mathcal Y}_{\lambda,n} = {\mathcal V}_{\lambda,n}+\lambda {\mathcal W}_\lambda$, such that
\begin{equation*}
{\mathcal T}_{\lambda,n} = {\mathcal T}_0+\lambda {\mathcal V}_{\lambda,n} + \lambda^2 {\mathcal W}_\lambda={\mathcal T}_0+\lambda{\mathcal Y}_{\lambda,n}\;.
\end{equation*}
Then the assumptions imply that for small $\lambda$
\begin{equation}\label{eq-cond-Yy}
{\mathbb E}(\|{\mathcal Y}_{\lambda,n}\|^{6+\epsilon})={\mathcal O}(1)\;,\qquad
{\mathbb E}({\mathcal Y}_{\lambda,n})=\lambda\, {\mathcal W}_\lambda+o(\lambda)\;.
\end{equation}
For the proof of Theorem~\ref{theo-SDE}
we fix an arbitrary time $T>0$ and obtain the SDE limit up to time $T$.
In principle we could work with estimates that are valid with high probability in order to obtain the limit process.
However, to keep the arguments and estimates simpler, it is easier to work with a cut-off
on the randomness and almost sure estimates. The cut-off bound will approach infinity for $\lambda$ going to zero in a way that it does not affect the limit.
\begin{prop}
Without loss of generality we can assume
\begin{equation}
\label{eq-cond-Vv}
\|{\mathcal Y}_{\lambda,n}\| < K_{\mathcal Y} \lambda^{s-1}\;\;\text{for some $\tfrac23<s<1$ and $K_{\mathcal Y}>0$}\,.
\end{equation}
\end{prop}
\begin{proof}
Assumption \eqref{eq-cond-Vv-m} and Markov's inequality yield
\begin{equation} \label{eq-prob-est}
{\mathbb P}(\|{\mathcal Y}_{\lambda,n}\| \geq C) \leq \frac{{\mathbb E}(\|{\mathcal Y}_{\lambda,n}\|^{6+\epsilon})}{C^{6+\epsilon}}\leq
\frac{k}{C^{6+\epsilon}}\,
\end{equation}
for some fixed $k$, uniformly for small $\lambda$.
Now let $s$ be such that $2/(6+\epsilon)\,<\,1-s\,<\,1/3$ and
define the truncated random variable $\widetilde {\mathcal Y}_{\lambda,n}$ by
\begin{equation*}
\widetilde {\mathcal Y}_{\lambda,n} = \begin{cases}
{\mathcal Y}_{\lambda,n} & \qtx{if} \|{\mathcal Y}_{\lambda,n}\|<K\lambda^{s-1} \\
0 & \qtx{else}
\end{cases}
\end{equation*}
By the choice of $s$, $(1-s)>\frac2{6+\epsilon}>\frac1{5+\epsilon}$ and we obtain
\begin{align*}
{\mathbb E}(\|{\mathcal Y}_{\lambda,n}-\widetilde{\mathcal Y}_{\lambda,n} \|)\,&=\,
\int_{\|{\mathcal Y}_{\lambda,n}\|>K\lambda^{s-1}} \|{\mathcal Y}_{\lambda,n} \|\,d{\mathbb P}
\,\leq\, \int_{K\lambda^{s-1}}^\infty C \,\cdot\,(6+\epsilon)\frac{k}{C^{7+\epsilon}}\,dC \notag\\
&\leq\, \frac{(6+\epsilon) k}{(5+\epsilon)\,K^{5+\epsilon}}\,\lambda^{(5+\epsilon)(1-s)}\,=\,o(\lambda)\;
\end{align*}
and similarly
\begin{equation*}
{\mathbb E}(\|{\mathcal Y}_{\lambda,n}-\widetilde{\mathcal Y}_{\lambda,n} \|^2)\,\leq\,
\frac{(6+\epsilon)\,k}{(4+\epsilon)\,K^{4+\epsilon}}\,\lambda^{(4+\epsilon)(1-s)}\,=\,o(1)
\end{equation*}
for $\lambda\to 0$. Thus, using $\widetilde {\mathcal Y}_{\lambda,n}$ instead of ${\mathcal Y}_{\lambda,n}$ in \eqref{eq-def-V}, \eqref{eq-def-g} and
\eqref{eq-def-hg} does not change the quantities $V$, $g(M)$ and $\hat g(M)$.
Hence, the SDE limits mentioned in Theorem~\ref{theo-SDE} for $\widetilde {\mathcal Y}_{\lambda,n}$ and ${\mathcal Y}_{\lambda,n}$ are the same.
Let us assume that Theorem~\ref{theo-SDE} holds for $\widetilde {\mathcal Y}_{\lambda,n}$ and obtain its validity for
${\mathcal Y}_{\lambda,n}$ by showing that both lead to the same limit SDE.
From \eqref{eq-prob-est}
\begin{equation*}
{\mathbb P}({\mathcal Y}_{\lambda,n} \neq \widetilde {\mathcal Y}_{\lambda,n})\,\leq \frac{k}{K^{6+\epsilon} \lambda^{(6+\epsilon)(s-1)}}\,
\,=\,c\,\lambda^{(6+\epsilon)(1-s)}=c \lambda^{2+\delta}
\end{equation*}
where the last equations define $c>0$ and $\delta>0$.
Hence,
\begin{equation*}
{\mathbb P}\Big(\|{\mathcal Y}_{\lambda,n}\| > K \lambda^{s-1}\;\;\;
\text{for some $n=1,2,\ldots,\lfloor\lambda^{-2}T\rfloor$}\Big)\,\leq\, Tc\lambda^\delta
\end{equation*}
which approaches zero for $\lambda\to 0$.
Therefore, introducing a stopping time $T_\lambda:=\min\{n\,:\, {\mathcal Y}_{\lambda,n}\neq\widetilde{\mathcal Y}_{\lambda,n}\}$
and considering the stopped process $X_{\lambda,n \wedge T_\lambda}$ one obtains the same distributional limit process up to time $T$ (in fact for arbitrary $T$).
But for the stopped processes there is no difference when replacing ${\mathcal Y}_{\lambda,n}$ by $\widetilde {\mathcal Y}_{\lambda,n}$.
\end{proof}
Thus we may assume equation \eqref{eq-cond-Vv} without loss of generality and we will do so from now on.
Moreover, as the spectral radii of $\Gamma_0$ and $\Gamma_2$ are smaller than $1$,
we may, after a basis change, assume:\footnote{Even if $\Gamma_0$ or $\Gamma_2$ is not diagonalizable, one can make the norm smaller than one, as the off-diagonal terms of Jordan blocks can be made arbitrarily small.}
\begin{equation}\label{eq-def-gamma}
\|\Gamma_0\|\leq e^{-\gamma} \;,\quad
\|\Gamma_2\|\leq e^{-\gamma}\;,\quad \text{where $\gamma>0$}.
\end{equation}
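The footnote can be made concrete: conjugating a Jordan block $c\,{\bf 1}+N$ (with $N$ the nilpotent upper shift) by ${\rm diag}(1,\epsilon,\epsilon^2,\ldots)$ rescales the nilpotent part to $\epsilon N$, so the norm drops below $1$ whenever $|c|+\epsilon<1$. A small numerical sketch with illustrative constants:

```python
import numpy as np

# Jordan block J = c*I + N; conjugation by D = diag(1, eps, eps^2, ...)
# gives D^{-1} J D = c*I + eps*N, hence ||D^{-1} J D|| <= |c| + eps.
def jordan(c, k):
    return c * np.eye(k) + np.diag(np.ones(k - 1), 1)

c, k, eps = 0.9, 4, 0.01
J = jordan(c, k)
D = np.diag(eps ** np.arange(k))
Jc = np.linalg.inv(D) @ J @ D        # equals c*I + eps*N up to rounding

assert np.linalg.norm(J, 2) > 1                 # raw block: norm exceeds 1
assert np.linalg.norm(Jc, 2) <= abs(c) + eps < 1  # conjugated: norm below 1
```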
\vspace{.2cm}
Before obtaining the evolution equations, we will first establish that the pair
$(X_{\lambda,n},Z_{\lambda,n})$ is a Markov process.
Let us define the following subgroup of ${\rm GL}(d,{\mathbb C})$.
\begin{equation*}
{\mathbb G}=\left\{ \begin{pmatrix}
{\bf 1} & 0 \\ C & D
\end{pmatrix} \,\in\,{\rm Mat}(d,{\mathbb C}) \;:\;
D \in {\rm GL}(d_2,{\mathbb C})\;
\right\}\;.
\end{equation*}
Now let ${\mathcal X}_1$ and ${\mathcal X}_2$ be equivalent, ${\mathcal X}_1\sim {\mathcal X}_2$, if
${\mathcal X}_1={\mathcal X}_2{\mathcal Q}$ for ${\mathcal Q}\in{\mathbb G}$.
As different representatives differ by multiplication from the right, multiplication from the left defines an action on the
equivalence classes.
Therefore, the evolution of the equivalence classes
$[{\mathcal X}_{\lambda,n}]_\sim$ is a Markov process.
As
\begin{equation} \label{eq-std-form}
\begin{pmatrix}
A & B \\ C & D
\end{pmatrix}
\begin{pmatrix}
{\bf 1} & {\bf 0} \\ -D^{-1} C & D^{-1}
\end{pmatrix}
\,=\,\begin{pmatrix}
A-BD^{-1}C & BD^{-1} \\ {\bf 0} & {\bf 1}
\end{pmatrix}
\end{equation}
we see that the equivalence class $[{\mathcal X}_{\lambda,n}]_\sim$ is determined by
the pair $(X_{\lambda,n}, Z_{\lambda,n})$.
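The normal-form computation \eqref{eq-std-form} behind this identification is easy to verify numerically; the following sketch (random blocks, illustrative sizes) checks that multiplication by the indicated element of ${\mathbb G}$ produces the representative built from $(A-BD^{-1}C,\,BD^{-1})$.

```python
import numpy as np

# Numerical check of (eq-std-form): right multiplication by the group element
# brings a block matrix to the (Schur complement, B D^{-1}) normal form.
rng = np.random.default_rng(2)
p, q = 3, 2                       # block sizes d0+d1 and d2 (illustrative)
M = rng.standard_normal((p + q, p + q))
A, B = M[:p, :p], M[:p, p:]
C, D = M[p:, :p], M[p:, p:]

G = np.block([[np.eye(p), np.zeros((p, q))],
              [-np.linalg.inv(D) @ C, np.linalg.inv(D)]])
res = M @ G

assert np.allclose(res[:p, :p], A - B @ np.linalg.inv(D) @ C)  # Schur complement
assert np.allclose(res[:p, p:], B @ np.linalg.inv(D))          # Z = B D^{-1}
assert np.allclose(res[p:, :p], 0) and np.allclose(res[p:, p:], np.eye(q))
```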
\vspace{.2cm}
Let us further introduce the following commuting matrices of size $d_0+d_1$,
\begin{equation} \label{eq-def-R-S}
R=\pmat{{\bf 1} & {\bf 0} \\ {\bf 0} & U},\quad
S=\pmat{\Gamma_0 & {\bf 0} \\ {\bf 0} & {\bf 1}};\quad R,S \in {\rm Mat}(d_0+d_1,{\mathbb C})\;.
\end{equation}
Note that $R$ is unitary and that
\begin{equation} \label{eq-def-Rr}
{\mathcal R} = \pmat{R & {\bf 0} \\ {\bf 0} & {\bf 1}} \in {\rm U}(d)\;.
\end{equation}
Here ${\mathcal R}$ is as defined in \eqref{eq-def-Xx}.
As $\Gamma_0^{n}$ is exponentially decaying,
we refer to the $d_0$-dimensional subspace corresponding to this matrix block
as the decaying directions of ${\mathcal T}_0^n$. Similarly, the $d_2$-dimensional subspace corresponding
to the entry $\Gamma_2^{-n}$ is referred to as the growing directions.
The evolution of ${\mathcal X}_{\lambda,n}$ is given by
\begin{equation} \label{eq-evol-Xx}
{\mathcal X}_{\lambda,n}= {\mathcal R}^{-n} {\mathcal T}_{\lambda,n} {\mathcal R}^{n-1} \,{\mathcal X}_{\lambda,n-1}\,.
\end{equation}
Therefore, let
\begin{equation*}
{\mathcal R}^{-n} {\mathcal T}_{\lambda,n} {\mathcal R}^{n-1} = \pmat{T_{\lambda,n}^A & T_{\lambda,n}^B \\ T_{\lambda,n}^C & T_{\lambda,n}^D }\,.
\end{equation*}
Here, $A,B,C,D$ are used as indices to indicate that we use the same sub-division of the matrix as we did when defining
$A_{\lambda,n},\,B_{\lambda,n},\,C_{\lambda,n}$ and $D_{\lambda,n}$.
The action on the equivalence class of ${\mathcal X}_{\lambda,n-1} \sim \smat{X_{\lambda,n-1} & Z_{\lambda,n-1} \\ {\bf 0}& {\bf 1}}$ gives
\begin{equation*}
\pmat{T_{\lambda,n}^A & T_{\lambda,n}^B \\ T_{\lambda,n}^C & T_{\lambda,n}^D } \pmat{X_{\lambda,n-1} & Z_{\lambda,n-1} \\ {\bf 0}& {\bf 1}}=
\pmat{T_{\lambda,n}^A X_{\lambda,n-1} & T_{\lambda,n}^A Z_{\lambda,n-1} + T_{\lambda,n}^B \\ T_{\lambda,n}^C X_{\lambda,n-1} & T_{\lambda,n}^C Z_{\lambda,n-1} + T_{\lambda,n}^D}\,.
\end{equation*}
Transforming the matrix on the right hand side into the form as in \eqref{eq-std-form} we can read off
the evolution equations
\begin{equation} \label{eq-evol-Z}
Z_{\lambda,n} = \left(T_{\lambda,n}^A Z_{\lambda,n-1} + T_{\lambda,n}^B \right)\,\left(T_{\lambda,n}^C Z_{\lambda,n-1} + T_{\lambda,n}^D \right)^{-1}\;
\end{equation}
and
\begin{equation}
X_{\lambda,n} = T_{\lambda,n}^A X_{\lambda,n-1} - \left(T_{\lambda,n}^A Z_{\lambda,n-1} + T_{\lambda,n}^B \right) \left(T_{\lambda,n}^C Z_{\lambda,n-1} + T_{\lambda,n}^D \right)^{-1}
T_{\lambda,n}^C X_{\lambda,n-1} \label{eq-exp-X0}\,.
\end{equation}
For more detailed calculations, let
\begin{equation}\label{eq-def-va}
{\mathcal Y}_{\lambda,n}={\mathcal V}_{\lambda,n}+\lambda{\mathcal W}_\lambda=:
\pmat{V_{\lambda,n}^A & V_{\lambda,n}^B \\ V_{\lambda,n}^C & V_{\lambda,n}^D }\;,\;\;\text{then}\quad
{\mathcal R}^{-n} {\mathcal Y}_{\lambda,n} {\mathcal R}^{n-1} =\pmat{R^{-n} V_{\lambda,n}^A R^{n-1} & R^{-n} V_{\lambda,n}^B \\ V_{\lambda,n}^C R^{n-1} & V_{\lambda,n}^D }\,.
\end{equation}
From \eqref{eq-def-Tt}, \eqref{eq-T_0}, \eqref{eq-def-R-S} and \eqref{eq-def-Rr} one finds
\begin{equation}\label{eq-rel-t-v}
T_{\lambda,n}^A=S+\lambda R^{-n} V_{\lambda,n}^A R^{n-1}\;,\;\; T_{\lambda,n}^B=\lambda R^{-n} V_{\lambda,n}^B\;,\;\; T_{\lambda,n}^C=\lambda V_{\lambda,n}^C R^{n-1}\;,\;\; T_{\lambda,n}^D=\Gamma_2^{-1}+\lambda V_{\lambda,n}^D\;.
\end{equation}
We will first consider the Markov process $Z_{\lambda,n}$ and denote the starting point by
$Z_0=B_0D_0^{-1}$.
\begin{prop}\label{prop-Z-small}
For $\lambda$ small enough and some constant $K_Z$ we have the uniform bound
\begin{equation}\label{eq-Z-small}
\|Z_{\lambda,n}\| \leq K_Z(e^{-\gamma n/2}+\lambda^s)\,=\,{\mathcal O}(e^{-\gamma n/2}, \lambda^s)\,
\end{equation}
with $\gamma$ as in \eqref{eq-def-gamma}.
This implies $Z_{\frac1{\sqrt{m}},\lfloor tm\rfloor}\Rightarrow 0$ in law.
\end{prop}
\begin{proof}
Take $\lambda$ small enough, such that $e^\gamma - K_{\mathcal Y}\lambda^{s} (1+\max(\|Z_0\|,1)) > e^{\gamma/2}$
(with $K_{\mathcal Y}$ as in \eqref{eq-cond-Vv}) and
\begin{equation}\label{eq-cond-lb}
\frac{(1+K_{\mathcal Y}\lambda^s)\max(\|Z_0\|,1)+K_{\mathcal Y}\lambda^s}{e^{\gamma}-K_{\mathcal Y}\lambda^s\left (1+\max(\|Z_0\|,1)\right)}
\,\leq\,e^{-\gamma/2}\,\max(\|Z_0\|,1).
\end{equation}
Then, using \eqref{eq-evol-Z}
we find for $\|Z_{\lambda,n-1}\|\leq \max(\|Z_0\|,1)$
\begin{equation*}
\|Z_{\lambda,n}\| \leq \frac{(1+K_{\mathcal Y}\lambda^s)\|Z_{\lambda,n-1}\|+K_{\mathcal Y}\lambda^s}
{e^{\gamma}-K_{\mathcal Y}\lambda^s\|Z_{\lambda,n-1}\|}\,\leq e^{-\gamma/2} \max(\|Z_0\|,1) \,<\, \max(\|Z_0\|,1).
\end{equation*}
Hence, inductively, $\|Z_{\lambda,n}\|<\max(\|Z_0\|,1)$
for $n\geq 1$. Thus, using this bound and \eqref{eq-cond-lb} again leads to
\begin{equation*}
\|Z_{\lambda,n}\| \leq e^{-\gamma/2} \|Z_{\lambda,n-1}\|+K_{\mathcal Y} \lambda^s\,.
\end{equation*}
By induction this yields the bound
\begin{equation*}
\|Z_{\lambda,n}\| \leq e^{-\gamma n/2}\|Z_{0}\|\,+\, \frac{1-e^{-\gamma n/2}}{1-e^{-\gamma/2}} K_{\mathcal Y} \lambda^s
\end{equation*}
proving the proposition.
\end{proof}
\begin{rem}
Note that the estimates show that $T_{\lambda,n}^C Z_{\lambda,n-1}+T_{\lambda,n}^D$ is invertible. Using
$D_{\lambda,n} = T_{\lambda,n}^C B_{\lambda,n-1}+T_{\lambda,n}^D D_{\lambda,n-1}=(T_{\lambda,n}^C Z_{\lambda,n-1}+T_{\lambda,n}^D)D_{\lambda,n-1} $
it follows inductively also that $D_{\lambda,n}$ is invertible.
Hence, $X_{\lambda,n}$ and $Z_{\lambda,n}$ are always well defined for small $\lambda$ under assumption \eqref{eq-cond-Vv}.
In particular, under the assumptions of Theorem~\ref{theo-SDE} they will be well defined up to $n=T\lambda^{-2}$
with probability going to one as $\lambda\to 0$.
\end{rem}
\vspace{.2cm}
Next, let the remainder term
$\widetilde\Xi_{\lambda,n}$ be given by
\begin{equation}\label{eq-def-tl-Xi}
(T_{\lambda,n}^D+T_{\lambda,n}^C Z_{\lambda,n-1})^{-1} %
=\Gamma_2 + \widetilde\Xi_{\lambda,n} ,
\end{equation}
and define
\begin{equation}\label{eq-def-Xi}
\Xi_{\lambda,n} := -T_{\lambda,n}^A Z_{\lambda,n-1} \Gamma_2 V_{\lambda,n}^C R^{n-1}-(T_{\lambda,n}^A Z_{\lambda,n-1}+\lambda R^{-n} V_{\lambda,n}^B)\widetilde\Xi_{\lambda,n} V_{\lambda,n}^C R^{n-1}\,.
\end{equation}
Furthermore let
\begin{equation}\label{eq-def-W_lb}
\vxs_{\lambda,n}^X:= V_{\lambda,n}^A-\lambda V_{\lambda,n}^B \Gamma_2 V_{\lambda,n}^C\,.
\end{equation}
The upper index $X$ indicates that this is the important combination of the random parts of ${\mathcal Y}_{\lambda,n}$ that will contribute to the SDE limit for the process $X_{\lambda,n}$.
As we will establish, the 'remainder' part expressed in the $\Xi_{\lambda,n}$ terms is of too small an order to matter in the limit.
By \eqref{eq-exp-X0} and \eqref{eq-rel-t-v} one obtains
\begin{equation} \label{eq-exp-X}
X_{\lambda,n}=S X_{\lambda,n-1} + \lambda R^{-n} \vxs_{\lambda,n}^X R^{n-1} X_{\lambda,n-1}
+ \lambda\Xi_{\lambda,n} X_{\lambda,n-1}\,.
\end{equation}
The following estimates will be needed to obtain the SDE limit.
\begin{lemma}\label{lem-estimates}
Let ${\mathbb E}_{X,Z}$ denote the conditional expectation given that $X_{\lambda,n-1}=X$
and $Z_{\lambda,n-1}=Z$.
\begin{enumerate}[{\rm (i)}]
\item For small $\lambda$ one has the bounds
\begin{align}
\label{eq-E-Xi} {\mathbb E}_{X,Z}(\Xi_{\lambda,n}) &= {\mathbb E}(\Xi_{\lambda,n}|Z_{\lambda,n-1}=Z) = {\mathcal O}(\lambda^{2s-1}\|Z\|,\lambda^{3s-1})\\
\label{eq-E-Xi2}
{\mathbb E}_{X,Z} (\|\Xi_{\lambda,n}\|^2) &= {\mathcal O}(\|Z\|^2,\lambda^2,\lambda\|Z\|) \\
\label{eq-Xi-bound} \Xi_{\lambda,n} &= {\mathcal O}(\|Z_{\lambda,n-1}\|\lambda^{s-1},\lambda^{3s-1})\,.
\end{align}
\item $\vxs_{\lambda,n}^X$ is independent of $Z_{\lambda,n-1}$ and $X_{\lambda,n-1}$ and
there is a matrix $V_0$ and a constant $K$ such that
\begin{align}
\label{eq-E-W} {\mathbb E}(\vxs_{\lambda,n}^X) &=\lambda V_0 + o(\lambda) \\
\label{eq-W-bound} \vxs_{\lambda,n}^X & = {\mathcal O}(\lambda^{s-1}) \\
\label{eq-W3-bound}{\mathbb E}(\|\vxs_{\lambda,n}^X\|^3) &\leq K ={\mathcal O}(1)\;.
\end{align}
\item
\begin{equation}
\label{eq-XX-bound} {\mathbb E}_{X,Z}\Big(X_{\lambda,n}X^*_{\lambda,n}\Big)
= SXX^*S+\|X\|^2\cdot {\mathcal O}(\lambda^2,\lambda^{2s}\|Z\|).
\end{equation}
Moreover,
there is a function $K(T)$ such that
\begin{equation}
\label{eq-XX-bound1} {\mathbb E}(\|X_{\lambda,n}\|^2) \leq K(T)\quad
\text{for all $n< T \lambda^{-2}$}.
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
Note that \eqref{eq-cond-Vv} implies the uniform bounds
\begin{equation}\label{eq-bd-va}
V_{\lambda,n}^\#={\mathcal O}(\lambda^{s-1}) \qtx{for} \#\in\{A,B,C,D,X\}\,.
\end{equation}
As $T_{\lambda,n}^D=\Gamma_2^{-1}+\lambda V_{\lambda,n}^D$ and $T_{\lambda,n}^A=S+\lambda R^{-n} V_{\lambda,n}^A R^{n-1}$, one finds
\begin{equation}\label{eq-xi-est1}
\widetilde \Xi_{\lambda,n}={\mathcal O}(\lambda^s)\quad\text{and}\quad
(T_{\lambda,n}^A Z+\lambda R^{-n} V_{\lambda,n}^B)\widetilde\Xi_{\lambda,n}V_{\lambda,n}^C R^{n-1} ={\mathcal O}(\lambda^{3s-1},\lambda^{2s-1}\|Z\|)\;.
\end{equation}
Using \eqref{eq-cond-Vv-lim} we see that
${\mathbb E}(T_{\lambda,n}^A Z \Gamma_2 V_{\lambda,n}^C R^{n-1})={\mathcal O}(\lambda\|Z\|)$
which together with \eqref{eq-xi-est1} gives \eqref{eq-E-Xi}.
(Note that $V_{\lambda,n}^A,\,V_{\lambda,n}^B$ and $V_{\lambda,n}^C$ are independent of $Z_{\lambda,n-1}$.)
The moment condition \eqref{eq-cond-Vv-m} also yields
${\mathbb E} (\|T_{\lambda,n}^A Z \Gamma_2 V_{\lambda,n}^C R^{n-1}\|^2) = {\mathcal O}(\|Z\|^2)$.
Combining this with \eqref{eq-xi-est1}, using Cauchy-Schwarz in the form
${\mathbb E}(\|A+B\|^2) \leq \left(\sqrt{{\mathbb E}(\|A\|^2)} + \sqrt{{\mathbb E}(\|B\|^2)}\right)^2$
and using ${\mathcal O}(\lambda^{3s-1},\lambda^{2s-1}\|Z\|) \leq {\mathcal O}(\lambda,\|Z\|)$ we find for some constant $K$ that
${\mathbb E}_{X,Z}(\|\Xi_{\lambda,n}\|^2) \leq K (\|Z\|+\lambda)^2$ giving \eqref{eq-E-Xi2}.
Finally, \eqref{eq-bd-va} yields $\|T_{\lambda,n}^A Z \Gamma_2 V_{\lambda,n}^C R^{n-1}\|={\mathcal O}(\|Z\|\lambda^{s-1})$ which
combined with \eqref{eq-xi-est1} gives \eqref{eq-Xi-bound}.
For (ii), note that equation \eqref{eq-E-W} follows from
\eqref{eq-cond-Vv-lim}, that \eqref{eq-bd-va} yields \eqref{eq-W-bound}, and that
the moment condition \eqref{eq-cond-Vv-m} implies \eqref{eq-W3-bound}.
For part (iii) note that by \eqref{eq-exp-X} one has
\begin{align}
& {\mathbb E}_{X,Z} (X_{\lambda,n} X_{\lambda,n}^*) = SX X^* S + \lambda R^{-n} {\mathbb E}(\vxs_{\lambda,n}^X) R^{n-1} X X^*S
+\lambda \left[R^{-n} {\mathbb E}(\vxs_{\lambda,n}^X) R^{n-1} X X^*S \right]^* \notag \\
& \qquad + \lambda {\mathbb E}_{X,Z}(\Xi_{\lambda,n}) X X^* S + \lambda \left[ {\mathbb E}(\Xi_{\lambda,n}) X X^* S\right]^* \,+\,
{\mathcal O}\left(\lambda^2\|X\|^2 {\mathbb E}_{X,Z}\big((\|\vxs_{\lambda,n}^X \|+\|\Xi_{\lambda,n}\|)^2\big) \right) \notag\,.
\end{align}
Using \eqref{eq-E-Xi}, \eqref{eq-E-W} and ${\mathbb E}_{X,Z}((\|\vxs_{\lambda,n}^X\|+\|\Xi_{\lambda,n}\|)^2)={\mathcal O}(1)$
one finally obtains equation \eqref{eq-XX-bound}. Here, the ${\mathcal O}(1)$ estimate follows from
\eqref{eq-Z-small}, \eqref{eq-Xi-bound}, \eqref{eq-W3-bound} and Cauchy-Schwarz.\\
For \eqref{eq-XX-bound1} note that the Hilbert-Schmidt norm is given by
$\|X\|^2_{HS}=\mbox{\rm Tr}(XX^*)$. Then \eqref{eq-Z-small} and \eqref{eq-XX-bound} imply
that for some constant $K$ one finds
$$
{\mathbb E}\left(\|X_{\lambda,n}\|_{HS}^2\right)\leq \begin{cases}
{\mathbb E}\left(\|X_{\lambda,n-1}\|^2_{HS}\right) (1+K \lambda^{2s}) &
\quad \text{for $n \leq s/\gamma \ln(\lambda^{-2})$} \\
{\mathbb E}\left(\|X_{\lambda,n-1}\|^2_{HS}\right) (1+K \lambda^{2}) &
\quad \text{for $n > s/\gamma \ln(\lambda^{-2})$.}
\end{cases}
$$
By induction, for small $\lambda$ and $n<T\lambda^{-2}$,
$$
{\mathbb E}\left(\|X_{\lambda,n}\|_{HS}^2\right)\leq
(1+K \lambda^{2s})^{s/\gamma \ln(\lambda^{-2})}(1+K \lambda^{2})^{T\lambda^{-2}}\|X_0 \|^2
<e^{K+TK}\,\|X_0\|^2.
$$
As all norms are equivalent, this finishes the proof.
\end{proof}
\section{Proof of Theorem~\ref{theo-SDE}, the limit of $X_{\lambda,n}$ \label{sec-lim}}
We need to split up the $(d_0+d_1)\times(d_0+d_1)$ matrix $X_{\lambda,n}$ into the corresponding blocks.
Therefore, let
\begin{equation*}
P_0=\pmat{{\bf 1} & {\bf 0} }\,\in\, {\rm Mat}(d_0 \times (d_0+d_1))\,,\quad
P_1=\pmat{{\bf 0} & {\bf 1} }\,\in\, {\rm Mat}(d_1 \times (d_0+d_1))
\end{equation*}
Then, using \eqref{eq-def-R-S} one finds
\begin{equation}\label{eq-P-ids}
P_0 S = \Gamma_0 P_0,\quad P_1 S = P_1,\quad
P_0R^n= P_0,\quad P_1 R^n = U^n P_1\,.
\end{equation}
Moreover,
for any $(d_0+d_1)\times (d_0+d_1)$ matrix $M$ one has
\begin{equation}\label{eq-P-ids2}
M=\pmat{P_0 M \\ P_1 M}=\pmat{MP_0^* & MP_1^*}\,.
\end{equation}
\begin{prop} \label{prop-P0X}
There is a function $K(T)$ such that for all $n<T\lambda^{-2}$
one has
$$
{\mathbb E}(\|P_0X_{\lambda,n}\|_{HS}^2) \,\leq\, e^{-2\gamma n} \|P_0X_0\|_{HS}^2+ K(T)\lambda^{2s}
$$
In particular, for any function $f(n)\in{\mathbb N}$ with $\lim_{n\to \infty} f(n)=\infty$ one has
$$
P_0X_{\frac{1}{\sqrt{n}},f(n)}\Longrightarrow
\pmat{{\bf 0}&{\bf 0}}\,.
$$
\end{prop}
\begin{proof}
Multiplying \eqref{eq-XX-bound} by $P_0$ from the left and $P_0^*$ from the right,
taking expectations and using the bound \eqref{eq-XX-bound1} gives
$$
{\mathbb E}(P_0 X_{\lambda,n} X_{\lambda,n}^*P_0^* )=
\Gamma_0 {\mathbb E}(P_0X_{\lambda,n-1} X_{\lambda,n-1}^*P_0^*)\Gamma_0^*+
{\mathcal O}(\lambda^{2s})
$$
which leads to
$$
{\mathbb E}(\|P_0X_{\lambda,n}\|_{HS}^2) \leq e^{-2\gamma} {\mathbb E}(\|P_0 X_{\lambda,n-1}\|_{HS}^2)+
{\mathcal O}(\lambda^{2s})\,,
$$
where the bound for the error term is uniform in $n$ for $n<\lambda^{-2}T$.
Induction yields the stated result.
\end{proof}
Finally, let us consider the part with an interesting limit.
Multiplying \eqref{eq-exp-X} by $P_1$ from the left and using \eqref{eq-P-ids}, \eqref{eq-P-ids2}
one finds
\begin{align}
P_1 X_{\lambda,n} &= P_1 X_{\lambda,n-1}+\lambda {U}^{-n} P_1 \vxs_{\lambda,n}^X
( P_1^* U^{n-1} P_1 X_{\lambda,n-1}+P_0^*P_0 X_{\lambda,n-1}) \notag \\
&\quad + \lambda P_1 \Xi_{\lambda,n} (P_1^* P_1 X_{\lambda,n-1}+P_0^* P_0 X_{\lambda,n-1})
\label{eq-evol-P0X}
\end{align}
We immediately obtain the following estimate.
\begin{prop} \label{prop-P1X}
For $n<\lambda^{-2} T$ one has uniformly
$$
{\mathbb E}(\|P_1X_{\lambda,n}-P_1X_0\|^2) \leq {\mathcal O}(n\lambda^{2s})\,.
$$
This implies for any function $f(n)\in{\mathbb N}$ with
$\lim_{n\to \infty} f(n) n^{-s} = 0$ that
$$
P_1X_{\frac{1}{\sqrt{n}},f(n) }\,\Longrightarrow\,P_1X_0
$$
in law for $n\to \infty$.
\end{prop}
\begin{proof}
Using the estimates of Lemma~\ref{lem-estimates} one finds similarly to \eqref{eq-XX-bound} that
\begin{align*}
{\mathbb E}_{X,Z}((P_1X_{\lambda,n}-P_1X_0) (P_1X_{\lambda,n}-P_1X_0)^*) =
(P_1X-P_1X_0)(P_1X-P_1X_0)^* + \|X\|^2\, {\mathcal O}(\lambda^2,\lambda^{2s} \|Z\|)\,.
\end{align*}
Using \eqref{eq-XX-bound1} and $\|Z_{\lambda,n-1}\|\leq{\mathcal O}(1)$ from \eqref{eq-Z-small} we find therefore
that uniformly for $n<\lambda^{-2} T$
\begin{align*}
&{\mathbb E}((P_1X_{\lambda,n}-P_1X_0) (P_1X_{\lambda,n}-P_1X_0)^*) = \\
&\qquad \qquad {\mathbb E}((P_1X_{\lambda,n-1}-P_1X_0)(P_1X_{\lambda,n-1}-P_1X_0)^*) + {\mathcal O}(\lambda^{2s})
\end{align*}
Taking traces (Hilbert-Schmidt norm) and induction yield the result.
\end{proof}
In order to use Proposition~\ref{p_turboEK} we need to consider stopped processes.
So for any $\lambda$, let $T_K$ be the stopping time when $\|P_1X_{\lambda,n}\|$ is bigger than $K$.
We define the stopped process by
$$
P_1 X^K_{\lambda,n}:=P_1X_{\lambda,T_K\wedge n}\;,\quad
P_0 X^K_{\lambda,n}:= P_0 X_{\lambda,n} \cdot 1_{n\leq T_K}\;,\quad
Z^K_{\lambda,n} := Z_{\lambda,n} \cdot 1_{n\leq T_K}\;
$$
where
$$
1_{n\leq T_K}= 1 \quad \text{for $n\leq T_K$} \quad\text{and}\quad
1_{n\leq T_K}= 0 \quad \text{for $n>T_K$}\;.
$$
As long as $n\leq T_K$, \eqref{eq-exp-X} and Lemma~\ref{lem-estimates}
give $\|P_0 X_{\lambda,n}\|\leq (e^{-\gamma}+{\mathcal O}(\lambda^s)) \|P_0 X_{\lambda,n-1} \|+{\mathcal O}(\lambda^s)$.
An induction similar to the one in Proposition~\ref{prop-Z-small} yields for any finite $K$
\begin{equation}\label{eq-est-P0}
\|P_0 X^K_{\lambda,n}\|<K_P(e^{-\gamma n/2}+\lambda^s)\;,\quad\text{for some constant $K_P = K_P(K)$.}
\end{equation}
For the limit, we will scale $\lambda=1/\sqrt{m}$ and $n=\lfloor tm \rfloor$.
First, define the good set
\begin{equation*}
G_m=G_m(K):=\{(X,Z)\,:\,\|Z\|<2K_Z m^{-s/2}, \|P_0X\|< 2K_P m^{-s/2}\}\,,
\end{equation*}
then by the estimates \eqref{eq-Z-small} and \eqref{eq-est-P0} one has
\begin{equation}\label{eq-in-Gm}
(X^K_{1/\sqrt{m}\,,n},Z^K_{1/\sqrt{m}\,,n}) \in G_m \quad \mbox{for} \quad n>s\,\ln(m)\,/\,\gamma \;.
\end{equation}
For the variances in the SDE limit we need to recognize the connection to the matrix $V$ and the functions
$g(M),\,\hat g(M)$ as defined in
\eqref{eq-def-V}, \eqref{eq-def-g} and \eqref{eq-def-hg}.
Using the notations as introduced in \eqref{eq-def-Vv} combined with \eqref{eq-def-va} and \eqref{eq-def-W_lb} one obtains
\begin{equation}\label{eq-exp-PWP}
P_1 \vxs_{\lambda,n}^X P_1^* = (\vl11+\lambda\wl11)-\lambda(\vl12+\lambda\wl12)\Gamma_2(\vl21+\lambda\wl21)\;.
\end{equation}
Therefore, using the functions $h,\,\widehat{h}$ as defined in \eqref{eq:h} and $W$ as defined in \eqref{eq-def-W}
one finds
\begin{align}
& {\mathbb E}(P_1\vxs_{\lambda,n}^X P_1^*)=\lambda P_1 V_0 P_1^*+o(\lambda)\,=\,\lambda W + o(\lambda)\\
& {\mathbb E}((P_1 \vxs_{\lambda,n}^X P_1^*)^\top M P_1 \vxs_{\lambda,n}^X P_1^*) = h(M)+o(1) \;\label{eq-h-1} \\
& {\mathbb E}((P_1 \vxs_{\lambda,n}^X P_1^*)^* M P_1 \vxs_{\lambda,n}^X P_1^*) = \widehat h(M)+o(1)\;\label{eq-h-2}.
\end{align}
Here the error terms $o(\lambda)$ and $o(1)$ are uniform in the limit $\lambda\to 0$.
Next we have to consider the conditional distribution of the differences $Y_{\lambda,n}=Y_{\lambda,n}(X,Z)$
given that $X_{\lambda,n-1}=X, Z_{\lambda,n-1}=Z$, i.e. for Borel sets of matrices ${\mathcal A}$,
$$
{\mathbb P}(Y_{\lambda,n}(X,Z)\in {\mathcal A}):={\mathbb P}\big( P_1 X_{\lambda,n}-P_1 X \in {\mathcal A}\,\big|\,X_{\lambda,n-1}=X, Z_{\lambda,n-1}=Z\big)\;.
$$
Using \eqref{eq-evol-P0X} one has
\begin{equation} \label{eq-Y}
Y_{\lambda,n} =
\lambda {U}^{-n} P_1 \vxs_{\lambda,n}^X
( P_1^* U^{n-1} P_1 X +P_0^*P_0 X)
+ \lambda P_1 (\Xi_{\lambda,n}|Z_{\lambda,n-1}=Z)\,X
\end{equation}
where $(\Xi_{\lambda,n}|Z_{\lambda,n-1}=Z)$ is a random matrix variable distributed as $\Xi_{\lambda,n}$ conditioned on
$Z_{\lambda,n-1}=Z$; this simply means that in \eqref{eq-def-tl-Xi} and \eqref{eq-def-Xi} one replaces $Z_{\lambda,n-1}$ by $Z$.
\begin{prop}\label{prop-Y}
Assume $(X,Z)\in G_m$, $m=\lambda^{-2}$, thus $P_0X={\mathcal O}(\lambda^s)$ and
$Z={\mathcal O}(\lambda^s)$.
Then one finds for $Y_{\lambda,n} = Y_{\lambda,n}(X,Z)$ the uniform estimates (uniform in $X,Z,n$)
\begin{align}
\label{eq-E-Y} {\mathbb E}(Y_{\lambda,n}) &= \lambda^2 U^{-n} W U^{n-1} P_1 X + o(\lambda^{2})\\
\label{eq-var-Y} {\mathbb E}(Y^\top_{\lambda,n} M Y_{\lambda,n}) &=
\lambda^2 (P_1X)^\top {U^\top}^{n-1} \,h\big({\bar U}^{n} M {U}^{-n}\big)\, U^{n-1} P_1X+ o(\lambda^{2})\\
\label{eq-var-Y2} {\mathbb E}(Y^*_{\lambda,n} M Y_{\lambda,n}) &=
\lambda^2 (P_1X)^* {U^*}^{n-1} \,\widehat h\big( U^{n} M {U}^{-n}\big)\, U^{n-1} P_1X+ o(\lambda^{2})\\
\label{eq-Y3} {\mathbb E}(\|Y_{\lambda,n}\|^3) &\leq \lambda^3 K_Y\|X\|^3 \quad\text{for some uniform $K_Y>0$}.
\end{align}
Note that any covariance of real and imaginary entries of $Y_{\lambda,n}$ can be obtained by
varying $M$ in \eqref{eq-var-Y} and \eqref{eq-var-Y2}.
Moreover, one obtains uniformly for $0<t<T$
\begin{align}
\lim_{m\to\infty} \int_0^t m {\mathbb E}\left(Y_{\frac1{\sqrt{m}},\lfloor sm \rfloor}\right) \,ds
&= t V P_1X \label{eq-av-Y} \\
\lim_{m\to\infty} \int_0^t m {\mathbb E}\left(Y^\top_{\frac1{\sqrt{m}},\lfloor sm \rfloor} M
Y_{\frac1{\sqrt{m}},\lfloor sm \rfloor}\right) \,ds
&= t (P_1X)^\top g(M) P_1 X \label{eq-av-var} \\
\lim_{m\to\infty} \int_0^t m {\mathbb E}\left(Y^*_{\frac1{\sqrt{m}},\lfloor sm \rfloor} M
Y_{\frac1{\sqrt{m}},\lfloor sm \rfloor}\right) \,ds
&= t (P_1 X)^* \widehat g(M) P_1X \label{eq-av-var2}
\end{align}
where $V$, $g$ and $\widehat g$ are as in \eqref{eq-def-V}, \eqref{eq-def-g} and \eqref{eq-def-hg}.
\end{prop}
\begin{proof}
Given $X_{\lambda,n-1}=X,\,P_0X={\mathcal O}(\lambda^s),\,Z_{\lambda,n-1}=Z={\mathcal O}(\lambda^s)$
and using the estimates \eqref{eq-E-Xi} and \eqref{eq-E-W}
equation \eqref{eq-Y} yields
$$
{\mathbb E}(Y_{\lambda,n})=\lambda {\mathbb E}(U^{-n} P_1 \vxs_{\lambda,n}^X P_1^* U^* U^{n} P_1 X) +
{\mathcal O}(\lambda^{3s})=\lambda^2 U^{-n} W U^{n-1} P_1 X\,+ o(\lambda^2)\,
$$
which implies \eqref{eq-E-Y}.
Using \eqref{eq-W-bound}, \eqref{eq-Xi-bound} and $Z_{\lambda,n-1}=Z={\mathcal O}(\lambda^s)$
we get
\begin{equation} \label{eq-exp-Y}
Y_{\lambda,n} = \lambda U^{-n} P_1 \vxs_{\lambda,n}^X P_1^* U^{n-1} P_1 X +{\mathcal O}(\lambda^{2s})\,.
\end{equation}
Together with \eqref{eq-h-1} and \eqref{eq-h-2} this proves \eqref{eq-var-Y} and \eqref{eq-var-Y2}.
Finally, \eqref{eq-Y3} follows from \eqref{eq-exp-Y} and \eqref{eq-cond-Vv-m}.
Letting $u=U^{-n}={U^n}^*$ we have the terms
$u W U^*u^*$, $\overline{u} \overline{U} h(u^\top M u) U^* u^* $ and
$u U \widehat{h}(u^* M u) U^* u^* $ appearing in \eqref{eq-E-Y}, \eqref{eq-var-Y} and \eqref{eq-var-Y2}, respectively.
On the abelian compact group $\langle U\rangle$ generated by the unitary $U$, the functions
\begin{align*}
& u \mapsto u W u^*\,,
\quad u \mapsto \overline{u}\, \overline{U}\, h(u^\top M u)\, U^*\, u^*\;,\quad
u\mapsto u\, U\, \widehat{h}(u^* M u)\, U^*\, u^*
\end{align*}
are polynomials of the eigenvalues of $u$ as $h,\,\widehat{h}$ are linear and
all $u\in\langle U \rangle$ are simultaneously diagonalizable.
For any such polynomial $p(u)$ one finds
\begin{equation}\label{eq-U-average}
\lim_{m\to\infty}\int_0^t p(U^{-\lfloor ms \rfloor})\,ds =
\lim_{m\to\infty} \frac{t}m \sum_{k=1}^m p(U^{-k})\,=\,t\,\int_{\langle U \rangle} p(u)\,du
\end{equation}
uniformly for $t<T$, where $du$ denotes the Haar measure on $\langle U \rangle$.
Applied to \eqref{eq-E-Y}, \eqref{eq-var-Y}, \eqref{eq-var-Y2}
this yields \eqref{eq-av-Y}, \eqref{eq-av-var} and \eqref{eq-av-var2}.
\end{proof}
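To illustrate the averaging identity \eqref{eq-U-average} in the simplest situation, assume for the moment a single elliptic channel, so that $U^{-k}$ is the scalar $e^{-ik\varphi}$ (this example is only illustrative and not part of the argument). For the monomial $p(u)=u$ one then has:

```latex
% Scalar illustration of the averaging identity: p(u)=u and u=U^{-k}=e^{-ik\varphi}.
\begin{equation*}
  \frac{1}{m}\sum_{k=1}^{m} e^{-ik\varphi}
  \;\xrightarrow[m\to\infty]{}\;
  \begin{cases}
    1\,, & \varphi\in 2\pi{\mathbb Z}\,,\\
    0\,, & \text{otherwise}\,,
  \end{cases}
\end{equation*}
% This matches the Haar average \int_{\langle U\rangle} u\,du, which equals 1
% when \langle U\rangle is trivial and 0 for any nontrivial closed subgroup
% of the unit circle (finite cyclic or the full circle).
```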
Let $f(n)\in{\mathbb N}$ with $\lim_{n\to\infty} f(n)=\infty$ and $\lim_{n\to\infty} f(n) n^{-s}=0$,
then by Propositions~\ref{prop-P0X} and \ref{prop-P1X} we find for large enough $K$ that
$X^K_{1/\sqrt{n},f(n)} \Longrightarrow \smat{0 & 0 \\ 0 & {\bf 1}_{d_1}} X_0$, where we used a
subdivision into blocks of sizes $d_0$ and $d_1$.
For the sake of concreteness let us set $f(n)=\lfloor n^\alpha \rfloor$ with some $0<\alpha< s$.
From \eqref{eq-in-Gm} we find for $m\to \infty$,
$$
{\mathbb P}\left((X^K_{1/\sqrt{m}\,,n},Z^K_{1/\sqrt{m}\,,n})\, \in\, G_m \text{ for all } Tm > n > f(m) \right)\,\to\, 1\,.
$$
Together with Proposition~\ref{prop-Y}
we see that the stopped processes $(X^K_{1/{\sqrt{m}},n},Z^K_{1/{\sqrt{m}},n})$ for $n=1,\ldots, mT$ satisfy the conditions of
Proposition~\ref{p_turboEK} with $\tilde X^m_n=P_1X^K_{1/{\sqrt{m}},n}$, the good sets $G_m$ and
$f(m)=\lfloor m^\alpha \rfloor$.
Thus, with Proposition~\ref{prop-P0X} it follows that $X^K_{1/{\sqrt{m}},\lfloor tm \rfloor} \Longrightarrow \pmat{{\bf 0} & \\ & \Lambda_t^K} X_0$, uniformly for
$0<t<T$, where $\Lambda_t^K = \Lambda_{t\wedge T_K}$ denotes the stopped process of
$\Lambda_t$ as described in Theorem~\ref{theo-SDE} with stopping time
$T_K$ when $\left\| P_1 \Lambda_t X_0 \right\| >K$.
As we have this convergence for all such stopping times $T_K$, $\|P_1 \Lambda_t X_0\|$ is almost surely finite
and as the final time $T$ was arbitrary, one obtains
$X_{1/\sqrt{m},\lfloor tm \rfloor} \Longrightarrow \pmat{{\bf 0} & \\ & \Lambda_t} X_0$ for any $t>0$.
Together with Proposition~\ref{prop-Z-small} this finishes the proof of Theorem~\ref{theo-SDE}.
\section{Application to random Schr\"odinger operators \label{sec-RSO}}
The main purpose of this section is the proof of Theorem~\ref{theo-GOE}. However, we will also obtain a description of the limits of eigenvalue processes in a critical scaling.
For this we will consider slightly more general operators than \eqref{eq-H-lnd}.
More precisely, we study the limiting eigenvalue process as $n\to\infty$ with $\lambda^2n$ constant and $d$ fixed, for more general random $nd\times nd$ matrices
given by
\begin{equation}\label{eq-def-H}
(H_{\lambda,n} \psi)_k = \psi_{k+1}+\psi_{k-1}+(A+\lambda V_k)\psi_k\;
\end{equation}
where $\psi=(\psi_1,\ldots,\psi_n)$, $\psi_0=\psi_{n+1}=0$ and $\psi_j\in{\mathbb C}^d$.
Here, $A$ is a general Hermitian matrix, and the $V_k$ are general i.i.d.\ Hermitian matrices with ${\mathbb E}(V_k)={\bf 0}$ and ${\mathbb E}(\|V_k\|^{6+\varepsilon})<\infty$.
We drop the index $d$ from now on, as $d$ will be fixed, and sometimes we may also drop the index $n$.
Moreover, for simplicity, we can assume that $A$ is diagonal; indeed, this can be achieved by the change of basis
$\psi_n \mapsto O^* \psi_n$, where $O$ is unitary and $O^* A O$ is diagonal (this also replaces $V_n$ by $O^*V_n O$).
The eigenvalue equation $H_{\lambda}\psi=E \psi$ is a recursion that can be written in the matrix form as follows.
\begin{equation}\label{eq-def-transfer}
\pmat{\psi_{k+1} \\ \psi_k} = T_{k} \pmat{\psi_k \\ \psi_{k-1}}\,\qquad
\text{where}\quad
T_{k}= T^{E}_{\lambda,k} = \pmat{E{\bf 1} -A-\lambda V_k & -{\bf 1} \\ {\bf 1} & {\bf 0}}\,.
\end{equation}
The $T^E_{\lambda,k}$ are called transfer matrices. Now, $E$ is an eigenvalue of $H_{\lambda,n}$ if there is a nonzero solution $(\psi_1,\psi_n)$ to
$$
\pmat{0\\ \psi_n} = T_n\cdots T_{1} \pmat{\psi_1 \\ 0}\,\qquad
$$
or equivalently, when the determinant of the top left $d\times d$ block of $T_n\cdots T_{1}$ vanishes. So we can study the
eigenvalue equation through the products
\begin{equation*}
T_{[1,k]} = T_k \cdots T_1
\end{equation*}
which are the focus of our next theorems.
\subsection{Elliptic and hyperbolic channels and SDE limits}
The matrices $T_k$ satisfy
$$T^*{\mathcal J} T = {\mathcal J} \qquad \text{where}\qquad {\mathcal J}=\pmat{{\bf 0} & {\bf 1}_d \\ -{\bf 1}_d & {\bf 0}},$$
which is the defining relation for elements of the Hermitian symplectic group ${\rm HSp}(2d)$.
In particular, they are all invertible. The $T_k$ are all perturbations of the noiseless matrix
$$T_*:=T^E_{0,1}\;.$$ This matrix is also block diagonal with $d$ blocks of size 2, and the eigenvalues of $T^E_{0,1}$ are exactly the
$2d$ solutions of the $d$ quadratics
$$
z+z^{-1}= E-a_j, \qquad a_j \text{ is an eigenvalue of } A.
$$
So the solutions are on the real line or on the complex unit circle, depending on
whether $|E-a_j|$ is less or more than two. We call the corresponding generalized eigenspaces of
$T_*=T^E_{0,1}$ elliptic $(<2)$, parabolic $(=2)$ and hyperbolic $(>2)$ channels. Elliptic and hyperbolic channels correspond to
two-dimensional eigenspaces, while parabolic channels correspond to a size 2 Jordan block. Traditionally, this notation refers to the solutions
of the noiseless $(\lambda=0)$ recursion that are supported in these subspaces for every coordinate $\psi_n$.
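For the reader's convenience, here is the standard computation behind this dichotomy (added for illustration; the notation $\varphi_j,\,\eta_j$ is ours):

```latex
% For |E-a_j|<2, write E-a_j=2\cos\varphi_j with \varphi_j\in(0,\pi); then
\begin{equation*}
  z^2-(E-a_j)z+1=0 \qquad\Longrightarrow\qquad z=e^{\pm i\varphi_j}\,,
\end{equation*}
% two solutions on the unit circle: an elliptic channel. For E-a_j>2, write
% E-a_j=2\cosh\eta_j with \eta_j>0; then z=e^{\pm\eta_j}, two real solutions
% with product 1: a hyperbolic channel (for E-a_j<-2 one gets z=-e^{\pm\eta_j}).
```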
Pick an energy $E$, such that there are no parabolic channels and at least one elliptic channel.
Suppose that $A$ is diagonalized so that
$|E-a_j|>2$ for $j=1,\ldots,d_h$ and $|E-a_j|<2$ for $j>d_h$.
Correspondingly, we define the hyperbolic eigenvalues $\gamma_j$ and elliptic eigenvalues $z_j$ of $T_*$ by
\begin{align*}
\gamma_j+\gamma_j^{-1} &= E-a_j\,,\quad &|\gamma_j|<1&\;,&\qtx{for}& j=1,\ldots,d_h \\
z_j+z_j^{-1} &= E-a_{j+d_h}\,,\quad &|z_j|=1&,\,\im(z_j)>0\;,&\qtx{for}& j=1,\ldots,d_e=d-d_h\;.
\end{align*}
Furthermore we define the diagonal matrices
\begin{equation}\label{eq-def-Ga-Z}
\Gamma={\rm diag}(\gamma_1,\ldots,\gamma_{d_h}),\qquad
Z={\rm diag}(z_{1},\ldots,z_{d_e}).
\end{equation}
In order to complete the description of the limiting eigenvalue process, we need to consider a family of limiting SDEs obtained by varying the energy in
the correct scaling. More precisely, define the $2d_e\times 2d_e$ unitary matrix $U$ and
the $2d \times 2d$ matrix ${\mathcal Q}$ by
\begin{equation}\label{eq-def-Qq}
U=\pmat{\bar Z & \\ & Z}\,,\qquad
{\mathcal Q}=\pmat{\Gamma & & & \Gamma^{-1} \\ & \bar Z & Z & \\
{\bf 1}_{d_h} & & & {\bf 1}_{d_h} \\ & {\bf 1}_{d_e} & {\bf 1}_{d_e} & }\,
\end{equation}
so that ${\mathcal Q}$ diagonalizes $T_*$ to a form as in \eqref{eq-T_0} that is used for Theorem~\ref{theo-SDE}
\begin{equation*}
{\mathcal T}_*:={\mathcal Q}^{-1}T_{*}{\mathcal Q}=\pmat{\Gamma & & \\ & U & \\ & & \Gamma^{-1}}\,.
\end{equation*}
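As a quick sanity check (for illustration only), consider a single hyperbolic channel, $d=d_h=1$, where ${\mathcal Q}=\pmat{\gamma & \gamma^{-1}\\ 1 & 1}$ and $\gamma+\gamma^{-1}=E-a$:

```latex
% Eigenvector check for T_* in the case d=d_h=1:
\begin{equation*}
  \pmat{E-a & -1\\ 1 & 0}\pmat{\gamma\\ 1}
  =\pmat{(E-a)\gamma-1\\ \gamma}
  =\pmat{\gamma^2\\ \gamma}
  =\gamma\pmat{\gamma\\ 1}\,,
\end{equation*}
% and analogously (\gamma^{-1},1)^\top is an eigenvector for the eigenvalue
% \gamma^{-1}, so that {\mathcal Q}^{-1}T_*{\mathcal Q}={\rm diag}(\gamma,\gamma^{-1}) here.
```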
Furthermore, let
\begin{equation}\label{eq-def-cjT}
{\mathcal T}_k= {\mathcal T}^{\varepsilon,\sigma}_{\lambda,k}:=
{\mathcal Q}^{-1}\, T^{E+\lambda^2\varepsilon}_{\lambda,k} \,{\mathcal Q}=
{\mathcal T}_*+\lambda \sigma\, {\mathcal V}_{k} + \lambda^2 \varepsilon {\mathcal W}\,,\quad
{\mathcal T}_{[1,k]}={\mathcal T}^{\varepsilon,\sigma}_{\lambda,[1,k]}:={\mathcal T}_{k}\cdots {\mathcal T}_{1}\,
\end{equation}
with
\begin{equation}\label{eq-T-parts}
{\mathcal V}_k={\mathcal Q}^{-1} \pmat{-V_k & \\ & {\bf 0}} {\mathcal Q}\,,\qquad {\mathcal W}= {\mathcal Q}^{-1} \pmat{{\bf 1}_d & \\ & {\bf 0}} {\mathcal Q}\,.
\end{equation}
The parameter $\sigma$ is somewhat redundant; however, it will be useful for replicating the argument of \cite{VV1}, where this scaling parameter was also introduced.
The scaling $\varepsilon \lambda^2\sim \varepsilon/n$ means that a unit interval of $\varepsilon$ should contain an order-one number of eigenvalues.
In order to get limiting SDEs we consider a Schur complement as before, thus define the $2d_e \times 2d_e$ matrices
\begin{equation}\label{eq-def-hat-T}
\widehat {\mathcal T}^{\varepsilon,\sigma}_{\lambda,n} =
\left( {\mathcal P}^*_{\leq 1} \left[ {\mathcal T}^{\varepsilon,\sigma}_{\lambda,[1,n]} {\mathcal X}_0 \right]^{-1} {\mathcal P}_{\leq 1}\right)^{-1} \qtx{with}
{\mathcal P}_{\leq1}=\pmat{{\bf 1}_{d_h+2d_e} \\ {\bf 0}_{d_h\times(d_h+2d_e)}}\,.
\end{equation}
Then by Theorem~\ref{theo-SDE} we obtain the correlated family (parameters $\sigma, \,\varepsilon$) of limiting processes
\begin{equation}\label{eq-L_et}
\pmat{ {\bf 0}_{d_h} & \\ & U^{-\lfloor tn \rfloor} }\,\widehat {\mathcal T}^{\varepsilon,\sigma}_{1/\sqrt{n},\lfloor tn\rfloor}\;\Longrightarrow\;
\pmat{ {\bf 0}_{d_h} \\ & \Lambda^{\varepsilon,\sigma}_{t}}\,\left( {\mathcal P}^*_{\leq 1} {\mathcal X}_0^{-1} {\mathcal P}_{\leq 1}\right)^{-1} \qquad \text{for $n\to\infty$}\;,
\end{equation}
where for fixed $(\varepsilon, \sigma)$, the process $\Lambda^{\varepsilon,\sigma}_t$ satisfies some SDE in $t$.
\begin{remark}\label{rem-red-T}
For $\varepsilon=0$ and $\sigma=1$, up to some conjugation, the matrix $\widehat {\mathcal T}^{0,1}_{\lambda,n}$ corresponds to the reduced transfer matrix as introduced in {\rm \cite{Sa1}} for the scattering of a block described by $H_{\lambda}$
of a finite length $n$ inserted into a cable
described by $H_0$ of infinite length ('$n=\infty$'). Thus we obtain that in the limit $\lambda^2 n ={\rm const.},\,n\to\infty$, the process of the reduced transfer matrix as defined in {\rm \cite{Sa1}} is described by an SDE, proving Conjecture~1 in {\rm \cite{Sa1}}.
\end{remark}
To get to the GOE limit we need to express the limit SDEs more explicitly. Therefore, let us split the potential $V_1$ into the hyperbolic and elliptic parts, i.e. let
\begin{equation} \label{eq-def-Vh}
V_1 = \pmat{ V_h & V_{he} \\ V^*_{he} & V_e}\,\qtx{where}
V_h\,\in\,{\rm Mat}(d_h\times d_h)\;,\quad V_e\,\in\,{\rm Mat}(d_e\times d_e)\;.
\end{equation}
Moreover, define
\begin{equation}\label{eq-def-Sh}
Q = \int_{\langle Z \rangle} \mathbf{z} \,{\mathbb E}(V_{he}^* (\Gamma^{-1}-\Gamma)^{-1} \,V_{he})\,\bar \mathbf{z}\,
d \mathbf{z}\;,\quad
{\mathcal S}=\pmat{(\bar Z-Z)^{-1} & {\bf 0} \\ {\bf 0} & (\bar Z-Z)^{-1}}
\end{equation}
where $d\mathbf{z}$ denotes the Haar measure on the compact abelian group $\langle Z \rangle$ generated by the diagonal, unitary matrix $Z$. As we will see, $Q$ will give rise to a drift term coming from the hyperbolic channels. In fact, this is the only influence of the hyperbolic channels on the limit process.
Moreover, to simplify expressions, we will be interested in one specific case.
\begin{define}
We say that the matrix $Z={\rm diag}(z_1,\ldots,z_{d_e})$ with $|z_j|=1, \,\im(z_j)>0$ is {\bf chaotic}, if all of the following apply for all $i,j,k,l \in \{1,\ldots, d_e\}$,
\begin{align*}
&z_i z_j z_k z_l \neq 1\;, \quad \bar z_i z_j z_k z_l \neq 1\,\\
&\bar z_i \bar z_j z_k z_l \neq 1 \quad \text{unless $\{i,j\}=\{k,l\}$}.
\end{align*}
\end{define}
The following observation corresponds to Lemma~8 in \cite{VV1}.
\begin{lemma}\label{lem-chaotic}
Let the eigenvalues $a_j$, $j=1,\ldots, d$ of $A$ be simple and let $I$ be the interval with fixed hyperbolic and elliptic channels as considered, i.e.
$$
I\,=\,\{E\in{\mathbb R}\,:\,|E-a_j|>2\;\; \text{for $j=1,\ldots, d_h$\;\; and}\;\; |E-a_j|<2\;\;\text{for $j>d_h$}\;\}\,.
$$
Then, for Lebesgue almost all $E\in I$, the matrix $Z$ as defined above is chaotic and moreover, for any diagonal, unitary matrix $Z_*$ there is a sequence $n_k$ such that
$Z^{n_k+1}\to Z_*$.
\end{lemma}
\begin{proof}
By the definitions above, $z_j=e^{i\varphi_j}$ where $\varphi_j = \arccos((E-a_{j+d_h})/2)\,\in\,(0\,,\,\pi)$.
We will show that for almost all $E$, the vector $\varphi=\varphi(E)=(\varphi_{1},\ldots,\varphi_{d_e})$ has no non-zero integer vector orthogonal to it.
It is not difficult to see that $Z$ is chaotic in this case and the orbit $Z^n$ is dense in the torus of diagonal unitary matrices.\\
It is enough to show that for any non-zero integer vector $w$ the set of energies $E\in I$ where $w \cdot \varphi(E) = 0$ is finite.
Clearly, $E\mapsto w\cdot \varphi(E)$ is analytic on $I$ and therefore it either has finitely many zeros or is constant zero.
Taking the derivative with respect to $E$ we get
$$
(w\cdot \varphi(E))'\,=\,\sum_{j=1}^{d_e} \frac{-w_j}{\sqrt{1-\frac14(E-a_{j+d_h})^2}}\,.
$$
As all the values $a_{j+d_h}$ are different, each summand has a singularity at a different value. Hence, this derivative can only be identically zero on $I$ if
$w$ is the zero vector. Thus, for $w\neq 0$, $E\mapsto w\cdot \varphi(E)$ is not the zero function.
\end{proof}
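For illustration (this reduction is not used later), consider the smallest case $d_e=1$, so that $Z=(z)$ with $z=e^{i\varphi}$, $\varphi\in(0,\pi)$. The chaoticity conditions then reduce to $z^4\neq 1$ and $z^2\neq 1$, since the third condition is void when $\{i,j\}=\{k,l\}$ is forced:

```latex
% Chaoticity for d_e=1: z=e^{i\varphi}, \varphi\in(0,\pi).
\begin{equation*}
  z^4\neq 1 \quad\text{and}\quad z^2\neq 1
  \qquad\Longleftrightarrow\qquad \varphi\neq\tfrac{\pi}{2}
  \qquad\Longleftrightarrow\qquad E\neq a_{d_h+1}\,,
\end{equation*}
% since \varphi\in(0,\pi) already excludes z^2=1, and z^4=1 is then only
% possible for \varphi=\pi/2, i.e. E-a_{d_h+1}=2\cos(\pi/2)=0.
```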
\begin{prop}
\label{prop-SDEs}
{\rm (i)}
The family of processes $\Lambda^{\varepsilon,\sigma}_{t}$ as in equation
\eqref{eq-L_et} or Theorem~\ref{theo-EV}
satisfy SDEs of the form
\begin{equation}\label{eq-sigma-SDE}
d\Lambda^{\varepsilon,\sigma}_{t}\,=\,{\mathcal S}
\pmat{\varepsilon {\bf 1} - \sigma^2Q & \\ & -\varepsilon {\bf 1} + \sigma^2Q} \,\Lambda^{\varepsilon,\sigma}_{t}\,dt
\,+\,\sigma\,{\mathcal S} \pmat{d{\mathcal A}_t & d{\mathcal B}_t \\ -d{\mathcal B}_t^* & -d{\mathcal C}_t}\,\Lambda^{\varepsilon,\sigma}_{t}\,
\end{equation}
with $\Lambda^{\varepsilon,\sigma}_{0}={\bf 1}$ and $\sigma,\varepsilon$ fixed,
where ${\mathcal A}_t,\,{\mathcal B}_t,\,{\mathcal C}_t$ are jointly Gaussian complex-valued $d_e\times d_e$ matrix Brownian motions, independent of $\varepsilon$ and $\sigma$, with ${\mathcal A}_t^* = {\mathcal A}_t$, ${\mathcal C}_t^*={\mathcal C}_t$ and certain covariances.
\noindent {\rm (ii)} If $A$ and $V_n$ are real symmetric then we obtain
\begin{equation*}
{\mathcal C}_t = \overline{{\mathcal A}_t}={\mathcal A}_t^\top\qtx{and} {\mathcal B}_t^\top = {\mathcal B}_t\,.
\end{equation*}
\noindent {\rm (iii)} If $Z$ is chaotic then ${\mathcal B}_t$ is independent of ${\mathcal A}_t$ and ${\mathcal C}_t$. Also,
${\mathcal A}_t$ and ${\mathcal C}_t$ have the same distribution.
Moreover, with the subscript $t$ dropped, we have the following:
\begin{align*}
&{\mathbb E}|{\mathcal A}_{ij}|^2={\mathbb E}|{\mathcal B}_{ij}|^2=t\,{\mathbb E}|(V_e)_{ij}|^2 \\
&{\mathbb E}({\mathcal A}_{ii} {\mathcal A}_{kk})=
t\,{\mathbb E}((V_e)_{ii} (V_e)_{kk})\,, \\
&{\mathbb E}({\mathcal A}_{ij} {\mathcal C}_{ij}) = {\mathbb E}({\mathcal A}_{ij}\overline{{\mathcal C}}_{ji})=
{\mathbb E}({\mathcal B}_{ij}\overline{{\mathcal B}}_{ji})=t\,{\mathbb E}((V_e)_{ij})^2
\end{align*}
and whenever $\{i,j\}\neq\{k,l\}$ one finds
\begin{equation*}
{\mathbb E}({\mathcal A}_{ij}{\mathcal A}_{kl})=
{\mathbb E}({\mathcal A}_{ij} {\mathcal C}_{kl})=
{\mathbb E}(\overline{{\mathcal B}}_{ij} {\mathcal B}_{kl})=0
\end{equation*}
and for any $i,j,k,l$,
\begin{equation*}
{\mathbb E}({\mathcal B}_{ij} {\mathcal B}_{kl})=0\,.
\end{equation*}
All other covariances are obtained from ${\mathcal A}_t={\mathcal A}_t^*,\,{\mathcal C}_t={\mathcal C}_t^*$.
\end{prop}
\begin{proof} Let us stick to the case $\sigma=1$.
Note that
\begin{equation}
{\mathcal Q}^{-1}={\mathcal S}_{\Gamma,Z} \pmat{{\bf 1}_{d_h} & {\bf 0} & - \Gamma^{-1} & {\bf 0} \\ {\bf 0} & {\bf 1}_{d_e} & {\bf 0} & -Z \\
{\bf 0} & -{\bf 1}_{d_e} & {\bf 0} & \bar Z \\ -{\bf 1}_{d_h} & {\bf 0} & \Gamma & {\bf 0} }\;
\end{equation}
where
\begin{equation}
{\mathcal S}_{\Gamma,Z}=\pmat{-S_\Gamma & & & \\ & S_Z & & \\ & & S_Z & \\
& & & -S_\Gamma}\,,\quad S_\Gamma=(\Gamma^{-1}-\Gamma)^{-1}\;,\quad
S_Z=(\bar Z-Z)^{-1}
\end{equation}
We chose the sign of $S_\Gamma$ this way so that $S_\Gamma>{\bf 0}$ is a positive diagonal matrix.
With \eqref{eq-T-parts} and \eqref{eq-def-Vh} this leads to
\begin{equation}
{\mathcal V}_1={\mathcal S}_{\Gamma,Z} \pmat{-V_h\Gamma & -V_{he} \bar Z & -V_{he} Z & -V_h \Gamma^{-1} \\
-V_{he}^* \Gamma & -V_e \bar Z & -V_e Z & -V_{he}^* \Gamma^{-1} \\
V_{he}^* \Gamma & V_e \bar Z & V_e Z & V_{he}^* \Gamma^{-1} \\
V_h\Gamma & V_{he} \bar Z & V_{he} Z & V_h \Gamma^{-1}}\;,\quad
{\mathcal W}={\mathcal S}_{\Gamma,Z} \pmat{\Gamma & {\bf 0} & {\bf 0} & \Gamma^{-1} \\
{\bf 0} & \bar Z & Z & {\bf 0} \\ {\bf 0} & -\bar Z & -Z & {\bf 0} \\ -\Gamma & {\bf 0} & {\bf 0} & \Gamma^{-1}}
\end{equation}
In the notation introduced in Section~\ref{sec-intro} and used for Theorem~\ref{theo-SDE}
we have $\Gamma_2=\Gamma$ and
\begin{eqnarray}
\vl11 U^* &=& {\mathcal S} \left(\pmat{-V_e & -V_e \\ V_e & V_e} + \lambda \epsilon \pmat{{\bf 1}_{d_e} & {\bf 1}_{d_e} \\ - {\bf 1}_{d_e} & - {\bf 1}_{d_e} }\right) \\
\vl12\Gamma\vl21 U^* &=& {\mathcal S} \pmat{V_{he}^*S_\Gamma V_{he} & V_{he}^* S_\Gamma V_{he} \\
-V_{he}^*S_\Gamma V_{he} & -V_{he}^* S_\Gamma V_{he}}\,
\end{eqnarray}
with ${\mathcal S}$ as in \eqref{eq-def-Sh}.
In order to calculate the drift term, note that using the definition of $Q$ in \eqref{eq-def-Sh} we obtain
$$
\int_{\langle Z \rangle} \pmat{\mathbf{z} & {\bf 0} \\ {\bf 0} & \bar \mathbf{z}} \,{\mathbb E}\,\pmat{\varepsilon{\bf 1}-V_{he}^* S_\Gamma V_{he} &
\varepsilon{\bf 1}-V_{he}^* S_\Gamma V_{he}
\\ -\varepsilon{\bf 1}+V_{he}^* S_\Gamma V_{he} & -\varepsilon{\bf 1}+V_{he}^* S_\Gamma V_{he}}
\pmat{\bar\mathbf{z} & {\bf 0} \\ {\bf 0} & \mathbf{z}}\,d\mathbf{z}\,=\,
\pmat{\varepsilon{\bf 1}-Q & {\bf 0} \\ {\bf 0} & -\varepsilon{\bf 1}+Q}
$$
where we used that for any $d_e\times d_e$ matrix $M$ one finds
$$
\int_{\langle Z \rangle} \mathbf{z} \,M\,\mathbf{z}\,d\mathbf{z}\,=\,
\lim_{n\to\infty} \frac1n \sum_{k=1}^n Z^k M Z^k =
\left(\lim_{n\to\infty} \frac1n\sum_{k=1}^n (z_i z_j)^k M_{ij}\right)_{ij}\,=\,{\bf 0}
$$
as we have $|z_iz_j|=1$ and $\im(z_i)>0,\,\im(z_j)>0$ implying that
$z_i z_j \neq 1$ for any $i,j \in \{1,\ldots,d_e\}$.
Therefore, application of Theorem~\ref{theo-SDE} gives \eqref{eq-sigma-SDE}
with ${\mathcal A}_t={\mathcal A}_t^*$, ${\mathcal C}_t={\mathcal C}_t^*$.
In order to express the covariances as described by \eqref{eq-variances} in more detail, recall
$Z={\rm diag}(z_1,\ldots,z_{d_e})$, $|z_j|=1$, leading to
\begin{align}
\int_{\langle Z \rangle} \prod_{j=1}^{d_e} \mathbf{z}_{jj}^{n_j} \,d\mathbf{z}\,=\,
\chi\left(\prod_{j=1}^{d_e} z_j^{n_j} \right)
\qtx{with} \chi(z)=\begin{cases}
1 & \qtx{for} z=1 \\
0 & \qtx{else}
\end{cases}\;
\end{align}
where $\mathbf{z}_{jj}$ is the $j$-th diagonal entry of the diagonal matrix $\mathbf{z}\in\langle Z \rangle$,
and $n_j$ are integers.
This leads to the following covariances,
\begin{align*}
{\mathbb E}(({\mathcal A}_t)_{ij} ({\mathcal A}_t)_{kl}) &= {\mathbb E}((\overline{{\mathcal A}_t})_{ji} ({\mathcal A}_t)_{kl}) ={\mathbb E}(({\mathcal C}_t)_{ij} ({\mathcal C}_t)_{kl}) = {\mathbb E}((\overline{{\mathcal C}_t})_{ji} ({\mathcal C}_t)_{kl}) \\
&= t\,{\mathbb E}((V_e)_{ij}(V_e)_{kl}) \,\chi(\bar z_i z_j \bar z_k z_l)\;; \\
{\mathbb E}(({\mathcal B}_t)_{ij} ({\mathcal B}_t)_{kl}) &= t\,{\mathbb E}((V_e)_{ij}(V_e)_{kl})\,\chi(z_iz_jz_kz_l)\;; \\
{\mathbb E}((\overline{{\mathcal B}_t})_{ij} ({\mathcal B}_t)_{kl}) &= t\, {\mathbb E}((\overline{V_e})_{ij}(V_e)_{kl})\,
\chi(\bar z_i \bar z_j z_k z_l)\;.
\end{align*}
The correlations between the Brownian motions are given by
\begin{align*}
&{\mathbb E}(({\mathcal A}_t)_{ij} ({\mathcal C}_t)_{kl}) = {\mathbb E}((\overline{{\mathcal A}_t})_{ji}({\mathcal C}_t)_{kl})
= t \,{\mathbb E}((V_e)_{ij}(V_e)_{kl})\,\chi(z_i \bar z_j \bar z_k z_l)\;;\\
&{\mathbb E}(({\mathcal A}_t)_{ij}({\mathcal B}_t)_{kl}) =
{\mathbb E}((\overline{{\mathcal A}_t})_{ji}({\mathcal B}_t)_{kl}) = t\,{\mathbb E}((V_e)_{ij}(V_e)_{kl})\,\chi(z_i\bar z_j z_k z_l)\;; \\
&{\mathbb E}(({\mathcal C}_t)_{ij}({\mathcal B}_t)_{kl}) =
{\mathbb E}((\overline{{\mathcal C}_t})_{ji}({\mathcal B}_t)_{kl}) = t\,{\mathbb E}((V_e)_{ij}(V_e)_{kl})\,\chi(\bar z_i z_j z_k z_l) \,.
\end{align*}
This shows part (i) for $\sigma=1$. Changing $V_1$ to $\sigma V_1$ immediately gives the general case.
If $V_e$ is almost surely real, which is the case if $O^* V_1 O$ is almost surely real, then one has ${\mathcal C}_t=\overline{{\mathcal A}}_t$ and ${\mathcal B}_t = {\mathcal B}_t^\top$ giving part (ii).
Part (iii) follows from using the chaoticity assumption in the equations for the covariances.
\end{proof}
\subsection{Limiting eigenvalue statistics \label{sub:eigenvalue}}
The convergence to the SDE limit as in \eqref{eq-L_et} should first be interpreted for fixed $\varepsilon$ and $\sigma$.
However, considering direct sums of matrices for finitely many pairs $(\varepsilon, \sigma)$
one obtains
joint convergence to a random field $(\varepsilon,\sigma,t)\mapsto \Lambda_t^{\varepsilon,\sigma}$ in terms of finite dimensional distributions.
For fixed $\sigma,\,t$, the left hand side of \eqref{eq-L_et} is clearly analytic in $\varepsilon\in{\mathbb C}$.
Moreover, all estimates made for the general setup are uniform for $\varepsilon$ varying in compact sets.
Using the bounds \eqref{eq-Z-small} and \eqref{eq-XX-bound1} we can therefore apply \cite[Corollary 15]{VV1} and see that for fixed $\sigma$ and $t$ there is a unique\footnote{unique in the sense of a uniquely induced distribution on the set of analytic functions} version such that $\varepsilon \mapsto \Lambda^{\varepsilon,\sigma}_t$ is analytic.
In particular, using this analytic version, we can define the random set $\{\varepsilon \in {\mathbb C}\;:\; f(\Lambda^{\varepsilon,\sigma}_t) = 0\}$ for fixed $(\sigma, t)$ and an analytic function
$f:{\rm Mat}(d,{\mathbb C})\to {\mathbb C}$. Unless $f(\Lambda^{\varepsilon,\sigma}_t)$ is the zero function in $\varepsilon$, this random set consists of isolated points by analyticity and can be seen as a point process which we may denote by
${\rm zeros}_\varepsilon f(\Lambda^{\varepsilon,\sigma}_t)$.
\begin{theo}\label{theo-EV}
Consider the process ${\mathcal E}_{\sigma,n}$ of eigenvalues of
$n(H_{\frac\sigma{\sqrt{n}},n}-E)$ and let $n_k$ be an increasing sequence such that $Z^{n_k+1}\to Z_*$ for
$k\to\infty$ with $Z$ being the unitary, diagonal $d_e\times d_e$ matrix defined in \eqref{eq-def-Ga-Z}.
Then, ${\mathcal E}_{\sigma,n_k}$ converges to the zero process of the determinant of a $d_e\times d_e$ matrix,
\begin{equation*}
{\mathcal E}_{\sigma,n_k}\,\Longrightarrow\, {\rm zeros}_\varepsilon \det\left( \pmat{\bar Z_* & Z_*}
\Lambda^{\varepsilon,\sigma}_{1} \pmat{{\bf 1}_{d_e} \\ - {\bf 1}_{d_e}}
\right)\;.
\end{equation*}
\end{theo}
\begin{proof} Without loss of generality we restrict to the case $\sigma=1$.
We will not need the precise form of the limiting SDE, but it is important how this SDE is obtained.
Therefore we need to look at the matrix parts giving the Schur complement as in the proof of
Theorem~\ref{theo-SDE}. Hence, using $U$ and ${\mathcal T}^{\varepsilon,1}_{\lambda,[1,n]}$ as above, let
\begin{equation*}
{\mathcal R}=\pmat{{\bf 1}_{d_h} & & \\ & U & \\ & & {\bf 1}_{d_h}}\,,\quad
{\mathcal X}_0=\pmat{{\bf 1}_{d_h} & {\bf 0} & -{\bf 1}_{d_h} \\ {\bf 0} & {\bf 1}_{2d_e} & {\bf 0} \\ {\bf 0} & {\bf 0} & {\bf 1}_{d_h}}
\end{equation*}
and
\begin{equation}\label{eq-def-Xx-e-l-n}
{\mathcal X}^{\varepsilon}_{\lambda,n}={\mathcal R}^{-n} {\mathcal T}^{\varepsilon,1}_{\lambda,[1,n]} {\mathcal X}_0\,.
\end{equation}
Using blocks of sizes $d_h+2d_e$ and $d_h$, let
\begin{equation}\label{eq-def-X-e-l-n}
{\mathcal X}^{\varepsilon}_{\lambda,n} =
\pmat{A^{\varepsilon}_{\lambda,n} & B^{\varepsilon}_{\lambda,n} \\ C^{\varepsilon}_{\lambda,n} & D^{\varepsilon}_{\lambda,n}}\;, \quad
X^{\varepsilon}_{\lambda,n}=A^{\varepsilon}_{\lambda,n}-B^{\varepsilon}_{\lambda,n}
(D^{\varepsilon}_{\lambda,n})^{-1} C^{\varepsilon}_{\lambda,n}\,.
\end{equation}
Then by Theorem~\ref{theo-SDE},
$X^{\varepsilon}_{1/\sqrt{n},\lfloor tn \rfloor} \Longrightarrow \pmat{{\bf 0} & \\ & \Lambda^\varepsilon_{t} }$,
with the process $\Lambda^\varepsilon_t=\Lambda^{\varepsilon,1}_{t}$ as in \eqref{eq-L_et} and Theorem~\ref{theo-EV}.
Let us define
\begin{equation}
\Theta_0:= {\mathcal X}_0^{-1} {\mathcal Q}^{-1} \pmat{{\bf 1}_{d} \\ {\bf 0}}\,\pmat{{\bf 0} & \Gamma^{-1}-\Gamma \\ (\bar Z-Z) & {\bf 0}}\,=\,
\pmat{{\bf 0} & {\bf 0} \\ {\bf 1}_{d_e} & {\bf 0} \\ -{\bf 1}_{d_e} & {\bf 0} \\ {\bf 0} & {\bf 1}_{d_h} }
\end{equation}
as well as
\begin{equation}
M^{\varepsilon}_{\lambda,n}=\pmat{{\bf 1}_{d_e} & & {\bf 0} \\ - \left( D^{\varepsilon}_{\lambda,n}\right)^{-1} C^{\varepsilon}_{\lambda,n}
\smat{{\bf 0} \\ {\bf 1}_{d_e} \\ -{\bf 1}_{d_e}} & & ( D^{\varepsilon}_{\lambda,n})^{-1} } \,\in\,
{\rm GL}(d,{\mathbb C})
\end{equation}
Then,
\begin{equation}\label{eq-Xx-Th-M}
{\mathcal X}^{\varepsilon}_{\lambda,n} \,\Theta_0\, M^{\varepsilon}_{\lambda,n}
\,=\,
\pmat{X^{\varepsilon}_{\lambda,n} \smat{{\bf 0} \\ {\bf 1} \\ -{\bf 1}} & B^{\varepsilon}_{\lambda,n}(D^{\varepsilon}_{\lambda,n})^{-1} \\ {\bf 0} & {\bf 1}_{d_h} }
\end{equation}
Let us also define
\begin{equation}\label{eq-def-Th-n}
\Theta^*_n:=\pmat{{\bf 0} & {\bf 1}_{d_e} \\ \Gamma^{-1} & {\bf 0}}\,\pmat{{\bf 1}_{d} & {\bf 0}}\, {\mathcal Q} {\mathcal R}^n\,=\,
\pmat{{\bf 0} & {\bar Z}^{n+1} & Z^{n+1} & {\bf 0} \\ {\bf 1}_{d_h} & {\bf 0} & {\bf 0} & {\bf 1}_{d_h}}\,.
\end{equation}
An energy $E+\lambda^2\varepsilon$
is an eigenvalue of $H_{\lambda,n}$ precisely if there is a solution to the eigenvalue equation
with $\psi_{0}=0$ and $\psi_{n+1}=0$, i.e. if and only if
\begin{equation}\label{eq-eig-cond}
\det \left(\pmat{{\bf 1}_{d} & {\bf 0}} {\mathcal T}^{E+\lambda^2\varepsilon}_{\lambda,[1,n]}\,\pmat{{\bf 1}_{d} \\ {\bf 0}}\right) = 0\,.
\end{equation}
As ${\mathcal T}^{E+\lambda^2\varepsilon}_{\lambda,[1,n]}={\mathcal Q} {\mathcal R}^n {\mathcal X}^{\varepsilon}_{\lambda,n}{\mathcal X}_0^{-1} {\mathcal Q}^{-1}$, this is equivalent to
\begin{equation}
\det\left(\Theta_n^* {\mathcal X}^{\varepsilon}_{\lambda,n} \Theta_0 M^{\varepsilon}_{\lambda,n}\right)=0\,
\end{equation}
Using Theorem~\ref{theo-SDE}, \eqref{eq-Xx-Th-M} and \eqref{eq-def-Th-n}, we see that along a subsequence $n_k$ of the positive integers where $Z^{n_k+1}$ converges to $Z_*$, we find
for $\lambda_k=1/\sqrt{n_k}$
\begin{equation}\label{eq-TXTM-conv}
\Theta_{n_k}^* {\mathcal X}^\varepsilon_{\lambda_k,n_k} \Theta_0 M^\varepsilon_{\lambda_k,n_k} \,\Longrightarrow\,
\pmat{ \pmat{\bar Z_* & Z_*} \Lambda^\varepsilon_1 \pmat{{\bf 1}_{d_e} \\ -{\bf 1}_{d_e}} & & {\bf 0} \\ {\bf 0} & & {\bf 1}_{d_h}}\;.
\end{equation}
As already established above, there is a unique holomorphic version of the random process $\varepsilon\mapsto \Lambda^{\varepsilon}_1$.
In fact, using uniform boundedness of $\Theta_n$ as well as \eqref{eq-Xx-Th-M} and the bounds \eqref{eq-Z-small}, \eqref{eq-XX-bound1} we find that
${\mathbb E}\| \Theta_{n_k}^* {\mathcal X}^{\varepsilon}_{\lambda_k,n_k} \Theta_0 M^{\varepsilon}_{\lambda_k,n_k} \|$ is uniformly bounded for $\varepsilon \in {\mathbb C}$ varying in a compact set.
Hence, we can apply \cite[Corollary 15]{VV1} to obtain again the existence of a unique analytic version in $\varepsilon$ for the right hand side.
Moreover, we find a realization of all random processes on the same probability space but with local-uniform convergence in $\varepsilon$ in the equation
\eqref{eq-TXTM-conv} (Skorokhod embedding theorem).
As the determinant is a holomorphic function, the same is true for the determinants of these matrices.
As long as the (random) holomorphic determinant of the right hand side is not identically zero the local uniform convergence also implies that the discrete level sets of zeros of the determinants converge in the vague sense,
i.e. the counting measures integrated against continuous, compactly supported functions converge.
It is possible that certain zeros go off to infinity and disappear in the limit.
Hence, it is left to show that $\det(\smat{\bar Z_* & Z_*} \Lambda^\varepsilon_1 \smat{{\bf 1} \\ -{\bf 1}}) $ is (almost surely) not identically zero in $\varepsilon$.
Now, \eqref{eq-sigma-SDE} can be rewritten as
$$
\pmat{{\bf 1} & \\ & -{\bf 1}} {\mathcal S}^{-1}\,d\Lambda^\varepsilon_t\,+\,\pmat{Q \\ & Q}\,\Lambda^\varepsilon_{t}\, dt\,+\,
\pmat{d{\mathcal A}_t & d{\mathcal B}_t \\ d{\mathcal B}^*_t & -d{\mathcal C}_t}\,\Lambda^\varepsilon_{t}\,=\,\varepsilon\,\Lambda^\varepsilon_{t}\,dt
$$
which is the transfer matrix equation (fundamental solution) for the eigenvalue equation ${\mathcal D} \psi = \varepsilon \psi$ where ${\mathcal D}$ is the random operator
$$
{\mathcal D} \psi(t) = \left[ \pmat{{\bf 1} \\ & -{\bf 1}} {\mathcal S}^{-1} \partial_t\,+\,\pmat{Q\\ & Q} + \pmat{d{\mathcal A}_t & d{\mathcal B}_t \\ d{\mathcal B}^*_t & -d{\mathcal C}_t}\,/\,dt \right]\,\psi(t)\,
$$
Using the continuous versions of the Brownian motions leading to measure valued white noise, one can make perfect sense of this random operator ${\mathcal D}$ on $L^2([0,1])\otimes {\mathbb C}^{2d_e}$, by choosing
the random domain of continuous functions $\psi(t)$ such that ${\mathcal D} \psi(t)$ (at first defined as a measure) is a continuous function.
This is a typical procedure for first-order one-dimensional operators with a measure-valued potential.
The zero determinant condition above yields an eigenvector $\psi$ satisfying the boundary conditions $\psi(0)=\smat{{\bf 1} \\ -{\bf 1}} \psi_1$ (i.e. $\pmat{{\bf 1} & {\bf 1}} \psi(0) = 0$) and
$ \pmat{\bar Z_* & Z_*} \psi(1) = 0$. One can check that the operator is symmetric with these boundary conditions. Indeed, using integration by parts one finds for continuous
$\psi(t),\,\varphi(t)$ in the domain with these boundary conditions, that
\begin{align*}
& \int_0^1 ({\mathcal D} \psi(t))^* \varphi (t)\,dt\,-\,\int_0^1 \psi^*(t) {\mathcal D} \varphi (t)\,dt \,=\,- \left[\psi^*(t) \smat{S_Z^{-1} & \\ & -S_Z^{-1}} \varphi(t) \right]_0^1 \\
& \,=\, \psi^*(0) \smat{{\bf 1} \\ {\bf 1}} S_Z^{-1} \smat{{\bf 1} & {\bf 0}} \varphi(0)\,+\, \psi^*(1) \smat{Z_* \\ \bar Z_*} S_Z^{-1} \smat{{\bf 0} & Z_*} \varphi(1)\,=\,0
\end{align*}
In the second line we used the boundary conditions first for $\varphi$ and then for $\psi$.
Hence, the set of eigenvalues $\varepsilon$ of ${\mathcal D}$ with these boundary conditions is a subset of the real line, in fact discrete, and it equals the zero set in $\varepsilon$ of the right hand side
of \eqref{eq-TXTM-conv}.
\end{proof}
\subsection{Limiting GOE statistics}
In this subsection we will prove Theorem~\ref{theo-GOE} by reduction to the work in \cite{VV1}.
Without loss of generality we focus on energies $E$ smaller than $0$ and consider $r=1$.
The more general case needs some more care and notations in the subdivision into elliptic and hyperbolic channels, but the main calculations remain the same.
We need to consider the SDE limit as described above a bit more precisely for this particular Anderson model as in \eqref{eq-def-H} with $A={\mathbb Z}_d$ and $V_n$ as in \eqref{eq-GOE-cond1}.
In Proposition~\ref{prop-SDEs}, especially for the definitions of $V_h$, $V_e$ and $V_{he}$ it was assumed that $A$ is diagonal. So in order to use the calculations above we need to
diagonalize ${\mathbb Z}_d$ and see how this unitary transformation changes $V_n$.
Let $d\geq 2$; then ${\mathbb Z}_d$ is diagonalized by the orthogonal matrix $O$ given by
\begin{equation}
O_{jk}=\sqrt{2/(d+1)}\,\sin(\pi jk /(d+1))\,.
\end{equation}
The corresponding eigenvalue of ${\mathbb Z}_d$ with eigenvector being the $j$-th column vector of $O$
is given by
\begin{equation}
a_j=2 \cos(\pi j / (d+1))\,,\quad j=1,\ldots, d\;.
\end{equation}
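This can be verified directly. Assuming that ${\mathbb Z}_d$ acts as the nearest-neighbour hopping matrix, $({\mathbb Z}_d\psi)_k=\psi_{k-1}+\psi_{k+1}$ with the Dirichlet convention $\psi_0=\psi_{d+1}=0$ (consistent with the eigenvalues $a_j$ above), the sine addition formula gives
\begin{equation*}
O_{k-1,j}+O_{k+1,j}\,=\,\sqrt{2/(d+1)}\left(\sin\big(\tfrac{\pi (k-1)j}{d+1}\big)+\sin\big(\tfrac{\pi (k+1)j}{d+1}\big)\right)\,=\,2\cos\big(\tfrac{\pi j}{d+1}\big)\,O_{kj}\,,
\end{equation*}
where the boundary terms vanish as $O_{0,j}=O_{d+1,j}=0$.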
For $-2<E<0$ there is $d_h < d$ such that
\begin{align}
2\cos(\pi j / (d+1))- E &> 2\qtx{for} j=1,\ldots,d_h \quad \text{and} \\
-2< 2\cos(\pi j / (d+1))- E &< 2 \qtx{for} j=d_h+1,\ldots,d\,.
\end{align}
So we have $d_h$ hyperbolic and $d_e=d-d_h$ elliptic channels and the upper $d_h\times d_h$ block of $O^* {\mathbb Z}_d O$ corresponds to the hyperbolic channels.
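In fact, assuming no channel is parabolic (i.e. $2\cos(\pi j/(d+1))-E\neq 2$ for all $j$), solving the first inequality for $j$ gives the explicit count
\begin{equation*}
d_h\,=\,\left\lfloor \frac{d+1}{\pi}\,\arccos\Big(1+\frac{E}{2}\Big)\right\rfloor\,,
\end{equation*}
so that $d_h/d\to \frac{1}{\pi}\arccos(1+\frac{E}{2})\in(0,\tfrac12)$ and the number of elliptic channels $d_e=d-d_h$ grows linearly in $d$.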
Using \eqref{eq-GOE-cond1} and the notations as in \eqref{eq-def-Vh} we have
\begin{equation}
\pmat{V_h & V_{he} \\ V_{he}^* & V_e} =
O^\top \pmat{v_1 & & {\bf 0} \\ & \ddots & \\ {\bf 0} & & v_d}\,O\;,
\quad {\mathbb E}(v_j)=0\,,\;\; {\mathbb E}(v_j\,v_k) = \delta_{jk}\,.
\end{equation}
Let $E$ be such that $Z$ is chaotic; then by Proposition~\ref{prop-SDEs}~(iii) we need to consider the following covariances
\begin{align}
{\mathbb E}(|(V_e)_{ij}|^2)&={\mathbb E}\,\left|(O^\top V_1 O)_{i+d_h,j+d_h}\right|^2 = \langle |O_{i+d_h}|^2, |O_{j+d_h}|^2\rangle \\
{\mathbb E}((V_e)_{ii}(V_e)_{jj}) &={\mathbb E}\,\left((O^\top V_1 O)_{i+d_h,i+d_h}(O^\top V_1 O)_{j+d_h,j+d_h}\right) =
\langle |O_{i+d_h}|^2, |O_{j+d_h}|^2\rangle\,.
\end{align}
Here, by $|O_i|^2$ we denote the vector $(|O_{k,i}|^2)_{k=1,\ldots,d}$ and $\langle\cdot,\cdot\rangle$
denotes the scalar product.
As stated in \cite{VV1}, one finds
\begin{equation}
(d+1)\; \langle |O_i|^2\,,\,|O_j|^2\,\rangle\, =\,
\begin{cases}
3/2 &\qtx{for} i=j \\
1 &\qtx{for} i\neq j\;.
\end{cases}
\end{equation}
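This identity can be sanity-checked numerically for generic index pairs; the following short script (a hypothetical check with $d=6$ and $d=9$, not part of the proof) evaluates $(d+1)\langle |O_i|^2,|O_j|^2\rangle$ directly from the definition of $O$:

```python
import math

def O_matrix(d):
    # O_{jk} = sqrt(2/(d+1)) * sin(pi*j*k/(d+1)), rows/columns indexed 1..d
    return [[math.sqrt(2.0 / (d + 1)) * math.sin(math.pi * j * k / (d + 1))
             for k in range(1, d + 1)] for j in range(1, d + 1)]

def scaled_overlap(d, i, j):
    # (d+1) * <|O_i|^2, |O_j|^2> for the i-th and j-th columns of O
    O = O_matrix(d)
    return (d + 1) * sum(O[k][i - 1] ** 2 * O[k][j - 1] ** 2 for k in range(d))

print(abs(scaled_overlap(6, 1, 1) - 1.5) < 1e-12)   # True (diagonal case: 3/2)
print(abs(scaled_overlap(6, 1, 2) - 1.0) < 1e-12)   # True (off-diagonal case: 1)
```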
Let us further calculate the drift contribution $Q$ from the hyperbolic channels as introduced above.
Using chaoticity, it is not hard to see from \eqref{eq-def-Sh} that $Q$ is diagonal. Moreover one has
\begin{equation}
Q_{jj}={\mathbb E}((V_{he}^* S_\Gamma V_{he})_{jj})= \sum_{k=1}^{d_h} (S_{\Gamma})_{kk} {\mathbb E} ([(V_{he})_{kj}]^2)=
\sum_{k=1}^{d_h} \frac{\langle |O_k|^2, |O_{j+d_h}|^2\rangle}{\gamma_k^{-1}-\gamma_k}\;.
\end{equation}
Since $k\leq d_h<j+d_h$, each scalar product above equals $1/(d+1)$, so $Q$ is a multiple of the unit matrix; more precisely,
\begin{equation}
Q=q\,{\bf 1}\qtx{with} q = \frac{1}{d+1}\,\sum_{k=1}^{d_h} (\gamma_k^{-1}-\gamma_k)^{-1}\,.
\end{equation}
Note that $|q|<\max_k |\gamma_k^{-1}-\gamma_k|<\max_k |E-a_k| = \|E-A\|=\|E-{\mathbb Z}_d\|\leq |E|+2$ uniformly.
Thus, using Proposition~\ref{prop-SDEs}
we obtain the following SDE limits,
\begin{equation}
d\Lambda^{\varepsilon,\sigma}_{t}\,=\,
{\mathcal S}\,(\varepsilon-\sigma^2q)\pmat{{\bf 1} & {\bf 0} \\ {\bf 0} & -{\bf 1}}\,\Lambda^{\varepsilon,\sigma}_{t}\,dt\,+\,
\sigma {\mathcal S} \pmat{d{\mathcal A}_t & d{\mathcal B}_t \\ -d{\mathcal B}_t^* & -d\overline{{\mathcal A}}_t}\,\Lambda^{\varepsilon,\sigma}_{t}
\end{equation}
where ${\mathcal A}_t$ and ${\mathcal B}_t$ are independent matrix Brownian motions, ${\mathcal A}_t$ is Hermitian, ${\mathcal B}_t$ complex symmetric, i.e.
\begin{equation}
{\mathcal A}_t^* = {\mathcal A}_t\;,\quad {\mathcal B}_t^\top = {\mathcal B}_t\;
\end{equation}
with covariance structure
\begin{equation}
{\mathbb E}(|({\mathcal B}_t)_{ij}|^2)={\mathbb E}(|({\mathcal A}_t)_{ij}|^2)={\mathbb E}(({\mathcal A}_t)_{ii}({\mathcal A}_t)_{jj})=
\begin{cases}
\frac32\,t \,/\, (d+1) &\;\text{for}\; i=j\\
t \,/\, (d+1)&\;\text{for}\; i\neq j
\end{cases}\;.
\end{equation}
All covariances which do not follow from these relations vanish. Except for the additional drift $\sigma^2 q$, which can be seen as a shift in $\varepsilon$, this is exactly the same SDE as appears in \cite{VV1}.
In fact, the matrix ${\mathcal S}$ here corresponds to $iS^2$ as in \cite{VV1} and the process there corresponds to the process above conjugated by $|{\mathcal S}|^{1/2}$.\\
From this point on, the proof obtaining the ${\rm Sine}_1$ kernel and GOE statistics follows precisely the arguments of \cite{VV1}.
First take $E$ as in Lemma~\ref{lem-chaotic} so that $Z$ is chaotic, and take a sequence $n_k$ such that $Z^{n_k+1} \to {\bf 1}$, then, for the point process as in Theorem~\ref{theo-EV} we find
${\mathcal E}_{\sigma,n_k} \Longrightarrow {\mathcal E}_\sigma={\rm zeros}_\varepsilon \det(\smat{{\bf 1} & {\bf 1}} \Lambda^{\varepsilon,\sigma}_1 \smat{{\bf 1} \\ -{\bf 1}})$\,.
Defining $\widehat\Lambda^{\varepsilon,\sigma}_t=\sigma^{-1} (\Lambda^{\varepsilon \sigma,\sigma}_t - {\bf 1})$ we find
$$\sigma^{-1} {\mathcal E}_\sigma \,=\, {\rm zeros}_\varepsilon
\det\left(\pmat{{\bf 1} & {\bf 1}} \widehat \Lambda^{\varepsilon,\sigma}_1 \pmat{{\bf 1} \\ -{\bf 1}}\right)$$
where $\widehat \Lambda^{\varepsilon,\sigma}_0={\bf 0}$ and
$$
d \widehat \Lambda^{\varepsilon,\sigma}_t\,=\,(\varepsilon-\sigma q) {\mathcal S} \pmat{{\bf 1} \\ & -{\bf 1}} (\sigma \widehat \Lambda^{\varepsilon,\sigma}_t + {\bf 1})\,dt\,+\,
{\mathcal S} \pmat{d{\mathcal A}_t & d{\mathcal B}_t \\ -d{\mathcal B}_t^* & -d\overline{{\mathcal A}}_t}\,(\sigma \widehat\Lambda^{\varepsilon,\sigma}_{t} + {\bf 1})
$$
By \cite[Theorem 11.1.4]{SV}, as $\sigma\to 0$ this SDE converges to the solution of the SDE with $\sigma=0$, which is a matrix-valued Brownian motion with drift and is explicitly solvable. Thus, for $\sigma\to 0$ one has $\widehat \Lambda^{\varepsilon,\sigma}_t \Longrightarrow \widehat \Lambda^{\varepsilon}_t$, where $\widehat \Lambda^{\varepsilon}_t$ solves the same SDE with $\sigma=0$; explicitly,
$$
\widehat \Lambda^{\varepsilon,\sigma}_t \;\stackrel{\sigma\to 0}{\Longrightarrow}\; \widehat \Lambda^{\varepsilon}_t
\,=\, \varepsilon t\, {\mathcal S} \pmat{{\bf 1} \\ & -{\bf 1}}\,+\,{\mathcal S}\,\pmat{{\mathcal A}_t & {\mathcal B}_t \\ -{\mathcal B}_t^* & -\overline{{\mathcal A}}_t}\;.
$$
Using analytic versions in $\varepsilon$ one obtains by similar arguments as above that
$$
\sigma^{-1} {\mathcal E}_\sigma\;\stackrel{\sigma\to 0}{\Longrightarrow}\; {\rm zeros}_\varepsilon \det\left(\pmat{{\bf 1} & {\bf 1}} \widehat \Lambda^\varepsilon_1 \pmat{{\bf 1} \\ - {\bf 1}} \right)\,=\,
{\rm spec}\,(\re({\mathcal B}_1-{\mathcal A}_1))
$$
where ${\rm spec}(\cdot)$ denotes the spectrum and $\re(\cdot)$ the entry-wise real part of a matrix. The latter equation is a simple calculation using the relations from above.
Similar to Proposition~9 in \cite{VV1}, the convergence can be realized jointly.
\begin{lemma}\label{lem-conv-RM}
Let $Z$ be chaotic, let $n_k$ be a sequence such that $Z^{n_k+1}\to {\bf 1}$ and let $\sigma_k$ be a sequence with $\sigma_k\to 0$ such that $\frac{1}{\sigma_k} \| Z^{n_k+1} - {\bf 1} \| \to 0$.
Consider the regularized Schur complements $\widehat {\mathcal T}^{\varepsilon,\sigma}_{\lambda,n}$ of the transfer matrices as defined in \eqref{eq-def-hat-T}.
Then define the regularized versions ${\mathcal X}^{\varepsilon,\sigma}_{\lambda,n}$ and the part $X^{\varepsilon,\sigma}_{\lambda,n}$ as in \eqref{eq-def-Xx-e-l-n} and \eqref{eq-def-X-e-l-n} but this time keeping the $\sigma$.
Choose ${\mathcal X}_0$ such that the corresponding Schur complement $X_0$ exists and let $\widehat X_0=\smat{{\bf 0} \\ & {\bf 1}} X_0$, the starting point for the SDE limit. Then, for $t>0$,
we find for $k\to \infty$ that
$$
\frac{1}{\sigma_k} \left( X^{\varepsilon \sigma_k, \sigma_k}_{\frac{1}{\sqrt{n_k}},\lfloor tn_k\rfloor}\;-\; \widehat X_0 \right) \quad \Longrightarrow \quad \pmat{{\bf 0} \\ & \widehat \Lambda^{\varepsilon}_t} \widehat X_0
$$
jointly for $t\in [0,1]$ and $\varepsilon$ varying in any finite subset of ${\mathbb C}$.
Moreover, for the eigenvalue process ${\mathcal E}_{\sigma_k, n_k, d}$ of $(H_{\lambda_k,n_k,d}-E)$ we find
$$
\frac{n_k}{\sigma_k}\; {\mathcal E}_{\sigma_k, n_k, d}\;\Longrightarrow \; {\rm spec}\,\left(\re({\mathcal B}_1-{\mathcal A}_1)\right)
$$
\end{lemma}
\begin{proof}
The proof of the first statement works very similarly to the above, using Proposition~\ref{p_turboEK}.
Therefore we let $\sigma_\lambda \to 0$ for $\lambda \to 0$ with $\sigma_{\lambda_k}=\sigma_k$ for $\lambda_k=1/\sqrt{n_k}$ and consider the process
$$
\widehat X_{\lambda,n}\,=\, \frac{1}{\sigma_\lambda} \left[ X^{\varepsilon \sigma_\lambda, \sigma_\lambda}_{\lambda, n}\,-\,\widehat X_0 \right]\,.
$$
As $\sigma_\lambda \widehat X_{\lambda,n}+\widehat X_0 = X^{\varepsilon \sigma_\lambda, \sigma_\lambda}_{\lambda, n}$, the drift term in each step is of order $\lambda^2 \varepsilon \sigma_\lambda$ and the diffusion term is of order $\lambda \sigma_\lambda$.
Similar to \eqref{eq-exp-X} one obtains therefore an equation of the form
$$
\widehat X_{\lambda,n}\,=\,\pmat{\Gamma \\ & {\bf 1}} \widehat X_{\lambda,n-1}\,+\,
\lambda R^{-n} V^X_{\lambda,n} R^{n-1} (\sigma_\lambda \widehat X_{\lambda,n-1}+\widehat X_0 )\,+\,{\mathcal O}\;,
$$
where the second term of $\vxs_{\lambda,n}^X$ in \eqref{eq-def-W_lb} gets an additional $\sigma_\lambda$ factor and the drift component of
the first term, $V_{\lambda,n}^A$, is proportional to $\varepsilon \lambda$.
As $\sigma_\lambda\to 0$, the estimates on the remainder terms improve, and in the limit the drift and diffusion terms no longer depend on $\widehat X_{\lambda,n}$.
Therefore, we get the Brownian motion with drift,
$\widehat X_{1/\sqrt{n},\lfloor tn \rfloor }\;\Longrightarrow\; \smat{{\bf 0} \\ & \widehat \Lambda^\varepsilon_t} \widehat X_0$\,.
To see the convergence of the eigenvalue processes we need to follow the calculations of Section~\ref{sub:eigenvalue} and use the analytic version with uniform convergence on compact sets in $\varepsilon$. Note that for this case $\widehat X_0=\smat{{\bf 0} \\ & {\bf 1}}$ in blocks of sizes $d_h$ and $2d_e$.
With similar notations as in Section~\ref{sub:eigenvalue} (but keeping the $\sigma$-dependence and the upper $\sigma$-index) we obtain with $\sigma=\sigma_\lambda$ that
$$
\Theta_n^* {\mathcal X}^{\varepsilon \sigma, \sigma}_{\lambda,n} \Theta_0 M^{\varepsilon\sigma,\sigma}_{\lambda,n}\,=\,
\pmat{ \smat{{\bf 0} & \bar Z^{n+1} & Z^{n+1}} (\sigma \widehat X_{\lambda,n}+\widehat X_0) \smat{{\bf 0} \\ {\bf 1} \\ -{\bf 1}} & &
\smat{{\bf 0} & \bar Z^{n+1} & Z^{n+1} } \widehat Z_{\lambda,n}
\\ \smat{{\bf 1} & {\bf 0} & {\bf 0}} \sigma \widehat X_{\lambda,n} \smat{{\bf 0} \\ {\bf 1} \\ -{\bf 1}} & & {\bf 1}_{d_h}}\;
$$
where $\widehat Z_{\lambda,n}=B_{\lambda,n}^{\varepsilon \sigma,\sigma} (D_{\lambda,n}^{\varepsilon \sigma,\sigma})^{-1}$.
Note that by the choice of $\sigma_k$ as above one has
$$\sigma_k^{-1} \pmat{{\bf 0} & \bar Z^{n_k+1} & Z^{n_k+1}} \widehat X_0 \smat{{\bf 0} \\ {\bf 1} \\ -{\bf 1}}\,=\,
\sigma_k^{-1}\,(\bar Z^{n_k+1}-Z^{n_k+1})\,\to\, 0\,.$$
Hence, for $\lambda_k=1/\sqrt{n_k}$ we have
$$
\Theta_n^* {\mathcal X}^{\varepsilon \sigma_k, \sigma_k}_{\lambda_k,n_k} \Theta_0 M^{\varepsilon\sigma_k,\sigma_k}_{\lambda_k,n_k} \pmat{\sigma_k^{-1} {\bf 1} \\ & {\bf 1}}
\;\;\Longrightarrow\;\; \pmat{ \smat{{\bf 1} & {\bf 1}} \widehat \Lambda^\varepsilon_1 \smat{{\bf 1} \\ -{\bf 1}} & {\bf 0} \\ {\bf 0} & {\bf 1}_{d_h} }\,.
$$
Using analytic versions with uniform convergence locally in $\varepsilon$, the zero processes in $\varepsilon$ of the determinants also converge, hence
$\frac{n_k}{\sigma_k}\; {\mathcal E}_{\sigma_k, n_k, d}\;\Longrightarrow \;{\rm spec}\left(\re({\mathcal B}_1-{\mathcal A}_1)\right)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{theo-GOE}]
We still restrict to the case $r=1$. For any energy $E\in (-4,4)$, the number of elliptic channels $d_e=d_e(d)$ for the transfer matrices of $H_{\lambda,n,d}$ will go to $\infty$ as $d\to\infty$.
By Lemma~\ref{lem-chaotic} we find for Lebesgue almost all such energies that the following two things hold: \\
1. For any $d$ there is no parabolic channel (i.e. $|E-a_j|\neq 2$ for all $j$)\\
2. For any $d$ the conditions of Lemma~\ref{lem-chaotic} apply.\\
Take such an energy $E$ and take sequences $n_k=n_k(d)$ and
$\sigma_k=\sigma_k(d)$ satisfying the conditions of Lemma~\ref{lem-conv-RM}.
The matrix $\re({\mathcal B}_1-{\mathcal A}_1)$ is a real, symmetric $d_e\times d_e$ random matrix whose distribution depends only on $d_e$.
As noted in \cite[Section 4]{VV1} its distribution can be written as $(d+1)^{-1/2} (K+b{\bf 1})$ where $b$ is a standard Gaussian random variable and $K$ an independent real symmetric
matrix with mean zero and Gaussian entries such that ${\mathbb E}(K_{ii}^2)=5/4$ and ${\mathbb E}(K_{ij}^2)=1$ for $i\neq j$.
As explained in \cite{VV1}, the bulk eigenvalue process $s(d_e)$ of $\sqrt{d_e} (K+b{\bf 1})$ converges locally to the ${\rm Sine}_1$ process as $d_e\to\infty$ by the methods of \cite{ESYY}.
Thus, for the eigenvalue processes ${\mathcal E}_{\sigma,n,d}$ of $H_{\sigma/\sqrt{n},n,d}-E$ we find
$ \frac{ \sqrt{ d\, d_e}\, n_k(d)}{\sigma_k(d)}\;{\mathcal E}_{\sigma_k,n_k,d}\,\Rightarrow\, s(d_e)$ and $s(d_e)\Rightarrow {\rm Sine}_1$ in the topology of weak convergence.
Thus we find some diagonal sequence $(k_j,d_j)$ such that with $n_j=n_{k_j}(d_j),\,\sigma_j=\sigma_{k_j}(d_j),\,d_{j,e}=d_e(d_j)$ one finds
$ \frac{\sqrt{d_jd_{j,e}}\, n_j}{\sigma_j}\,{\mathcal E}_{\sigma_j,n_j,d_j}\,\Rightarrow\, {\rm Sine}_1$.
\end{proof}
\section{Introduction}
Time evolution of a quantum mechanical system characterized by a discrete energy spectrum allows energy level crossings in certain situations. When two levels cross under the modulation of some external parameter (e.g. a magnetic or electric field) varying in time, the level crossings may or may not convert into avoided crossings. If the symmetry of the quantum mechanical problem
permits cross-talk between the levels, the levels start to repel each other.
The simplest problem in which an avoided level crossing arises
is the Landau-Zener (LZ) problem \cite{landau},\cite{zener}; see \cite{nakamurabook}. The LZ Hamiltonian \cite{landau},\cite{zener}, addressing the time evolution of a two-level system (TLS),
was suggested in 1932 to describe the crossing of molecular terms, aiming to construct a qualitative theory of pre-dissociation.
The same year, Majorana considered
a completely different problem which nevertheless falls
into the same universality class. Namely,
Majorana \cite{majorana} investigated the behaviour of atoms subject to a time-dependent
magnetic field. The pioneering work of Majorana \cite{majorana} anticipated the
revolution in quantum manipulation of few-level, artificially prepared
quantum mechanical systems well before the era of quantum information
processing began (see, for example, \cite{nielsen}). Quantum interference is yet another important phenomenon,
appearing when two levels cross several times under the modulation of an external field \cite{stueckelberg}. In particular, a periodically driven two-level system is characterized by an interference pattern known as St\"uckelberg oscillations; see the review
\cite{shevchenko}.
There are several realizations of TLSs based on the spintronics of
quantum dot artificial atoms \cite{dots1, dots2}, quantum beats engineered with ultra-cold gases
\cite{mark}, \cite{bloch}, and superconducting devices \cite{chargequbit1}; see, e.g., the reviews \cite{makhlin},\cite{shumeiko}. Among superconducting qubits, the quantum devices built with mesoscopic Josephson junctions allow an unprecedented level of control over quantum coherence phenomena \cite{qubit},\cite{martines}.
The charge qubit based on a Cooper-pair box (CPB) was one of the first quantum devices
to provide evidence of quantum interference associated with Landau-Zener-St\"uckelberg-Majorana (LZSM) physics in a non-atomic system. However, a real
CPB can be considered a TLS only under certain approximations.
The experiments of the Helsinki group \cite{hakonen1},\cite{hakonen2} have clearly demonstrated that the interference
pattern of St\"uckelberg oscillations cannot be fully explained by
two-level models. On the one hand, models of quantum interferometers constructed by adding a few extra levels to the two-level system may provide a suitable explanation of the experimental puzzles \cite{demkovosherov}-\cite{ashhab}. On the other hand, the models describing
multi-level interferometers contain additional parameters which
can be used for fine-tuning quantum systems to certain resonance transitions
and therefore inspire new experiments.
In this paper we consider a three-level model for describing the quantum dynamics of the superconducting Cooper-pair box.
The paper is organized as follows: in Section II we introduce the CPB model
and investigate quantum dynamics associated with
Landau-Zener tunneling in three-level system under a linear-in-time sweep.
In Section III we consider a periodically driven three-level system
and discuss in- and off- resonance Rabi oscillations. Concluding remarks are given in
the Section IV.
\section{Landau-Zener tunneling in a Cooper Pair Box}
We consider a superconducting Cooper-pair box -- a small superconducting island coupled both to a massive electrode via a resistive Josephson junction and to an electrostatic gate via a capacitance. The Hamiltonian describing this system is given by:
\begin{equation}\label{Hamil1}
H_{CPB}=E_C(\hat n-n_g)^2+E_J\cos\hat\phi.
\end{equation}
The first term in $H_{CPB}$ represents the charge states:
here $E_C{=}(2e)^2/2C$ is the charging energy of the superconducting island ($C$ is its capacitance), the operator $\hat n$ accounts for the number of Cooper pairs, and the dimensionless gate charge $n_g{=}{-}C_gV_g/2e$ is the external parameter controlling the number of Cooper pairs on the island via the gate voltage $V_g$. The second term in the Hamiltonian (\ref{Hamil1}) describes Josephson tunneling. Here $E_J$ is the Josephson energy and $\hat\phi$ is the phase operator canonically conjugated to
$\hat n$: $\hat n{=}{-}i\partial/\partial \hat \phi$ (we adopt the system of units $\hbar{=}1$). We assume that the superconducting gap $\Delta_S$ of the island is large compared to the charging energy $E_C$ ($\Delta_S{\gg} E_C$), which allows us to ignore tunneling of an odd number of charges onto the island. In this paper, we investigate the charge regime $E_J{\ll} E_C$, when the superconducting CPB operates as an elementary charge qubit \cite{makhlin}, \cite{shumeiko}.
If the Josephson energy is negligibly small, $E_J{\to} 0$, a fixed number of Cooper pairs is trapped on the island, while the ground state energy depends periodically
on the gate voltage $V_g$. Moreover, there are special values of the gate voltage,
namely, $n_g(V_g){=}N{\pm} 1/2$, at which the $N$ and $N{\pm} 1$ charge states become degenerate. Inclusion of a finite Josephson energy lifts the degeneracy and allows us
to approximate the CPB at low energies by a two-level model.
In this paper we go beyond the TLS model by taking into account an additional degeneracy between the $N{-}1$ and $N{+}1$ charge states, occurring under the condition $n_g(V_g){=}N$. The minimal model describing this case accounts for three charge states only, namely, $\{n_1,n_2,n_3\}{\equiv}\{N{-}1,N, N{+}1\}$ Cooper pairs, see Fig.~1. In the regime $E_C{\gg} E_J$ the Hamiltonian is written
in the basis formed by the charge states, parametrized by the number of Cooper pairs on the island. The matrix form of the Hamiltonian in this basis is given by:
\begin{eqnarray}\label{Hamil2}
H=\left(
\begin{array}{ccc}
E_C(n_g-n_1)^2 & \Delta & \Sigma\\
\Delta& E_C(n_g-n_2)^2 & \Delta\\
\Sigma & \Delta & E_C(n_g-n_3)^2
\end{array}\right),\nonumber\\
\end{eqnarray}
where $\Delta{\equiv} E_J$ and $\Sigma$ are the amplitudes for tunneling on the island of one and two Cooper pairs respectively.
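As a quick numerical illustration (ours, not part of the original analysis; the parameter values $E_C{=}1$, $\Delta{=}0.05$ are chosen arbitrarily), the following Python sketch builds the matrix of Eq. (\ref{Hamil2}) in the charge basis and checks that single-pair tunneling opens an avoided crossing of width $\approx 2\Delta$ at $n_g{=}N{+}1/2$, while for $\Delta{=}\Sigma{=}0$ the outer charge states remain exactly degenerate at $n_g{=}N$:

```python
import numpy as np

def h_cpb(n_g, E_C=1.0, Delta=0.05, Sigma=0.0, N=0):
    # Matrix of the three-level CPB Hamiltonian in the charge basis {N-1, N, N+1}.
    charges = np.array([N - 1, N, N + 1], dtype=float)
    H = np.diag(E_C * (n_g - charges) ** 2)
    H[0, 1] = H[1, 0] = H[1, 2] = H[2, 1] = Delta  # single Cooper-pair tunneling
    H[0, 2] = H[2, 0] = Sigma                      # two Cooper-pair tunneling
    return H

# Near n_g = N + 1/2 the N and N+1 charge states anticross with splitting ~ 2*Delta.
E = np.linalg.eigvalsh(h_cpb(0.5))
```

The lowest splitting at the half-integer point is close to $2\Delta$, with a small correction from the third, far-detuned level.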
We start our analysis of the quantum dynamics by considering the case when the gate voltage is swept linearly in time: $n_g(t){=}N{+}\alpha t$. In order to obtain simple analytical results we first restrict our analysis by imposing the condition $\Sigma{=}0$
(absence of direct tunneling of two Cooper pairs). In this case, it is easy to solve the time-dependent Schr\"odinger equation $i\dot \psi =H\cdot \psi$ with the Hamiltonian (\ref{Hamil2}) by using the so-called Kayanuma method \cite{kayanuma}. The idea behind Kayanuma's ansatz is to eliminate all diagonal elements in Eq.(\ref{Hamil2}) by performing a transformation with
a diagonal operator
\begin{eqnarray}\label{oper}
\hat U=e^{-i\theta_t}\left(
\begin{array}{ccc}
e^{-iE_C\left(\alpha t^2+t\right)} & 0 & 0\\
0 & 1 & 0 \\
0 & 0 & e^{-iE_C\left(-\alpha t^2+t\right)}
\end{array}\right),\nonumber\\
\end{eqnarray}
where $\theta_t{=}E_C \alpha^2 t^3/3$.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{fig1.pdf}
\caption{(Color online) The energy diagram for the superconducting Cooper-pair box model. Dashed lines denote
the charging energy given by the diagonal term of the Eq. (\ref{Hamil2})
as a function of the dimensionless gate voltage $n_g(V_g)$ (the diabatic basis
for the Landau-Zener problem). Solid lines show the adiabatic basis obtained by
diagonalization of Eq. (\ref{Hamil2}) for the particular case $\Sigma{=}0$. Dash-dotted red curves form a closed loop and denote the adiabatic and non-adiabatic paths
resulting in quantum interference.}
\end{center}
\label{fig1}
\end{figure}
Transforming the wave function $ \tilde\psi(t){=}\hat U\psi(t){=}\sum_{i=1}^3C_i(t)\vert i\rangle$, where the states $\vert i\rangle$ form the basis of diabatic states of Eq.(\ref{Hamil2}), we rewrite the non-stationary Schr\"odinger equation describing the time evolution of the three-level system as a system of three linear differential equations:
\begin{eqnarray}\label{schr}
&& i\dot C_1(t)= \Delta e^{iE_C\left(\alpha t^2+t\right)} C_2(t), \nonumber \\
&& i\dot C_2(t)= \Delta e^{-iE_C\left(\alpha t^2+t\right)} C_1(t)+ \Delta e^{-iE_C\left(-\alpha t^2+t\right)}C_3(t), \nonumber \\
&&i\dot C_3(t)=\Delta e^{iE_C\left(-\alpha t^2+t\right)} C_2(t).
\end{eqnarray}
To find a solution of this system of coupled linear differential equations, it is
convenient to rewrite it in the form of a linear Volterra integral equation.
For example, it is straightforward to transform the equation for $C_2(t)$ into a
self-contained integral form by excluding $C_1(t)$ and $C_3(t)$
with the help of the first and the third equations in (\ref{schr}):
\begin{eqnarray}\label{amp}
&&C_2(t)=-\Delta^2\int_{-\infty}^t dt_1 \int_{-\infty}^{t_1}dt_2 C_2(t_2)\times\nonumber\\
&&\times\left\{\exp\left[-iE_C (t_1^{+})^2+iE_C (t_2^{+})^2\right]+\right.\nonumber\\
&&+\left.\exp\left[iE_C (t_1^{-})^2-iE_C (t_2^{-})^2\right] \right\},
\end{eqnarray}
where $ t^{\pm}=\sqrt{\alpha}t\pm1/(2\sqrt{\alpha})$. We assume that
the initial condition for Eqs.(\ref{schr}) is given by the state with $N$ Cooper
pairs on the island,
characterized by the occupancy $C_2(-\infty){=}1$ and $C_{1}(-\infty){=}C_{3}(-\infty){=}0$.
The integral equation (\ref{amp}) is solved by iteration. This procedure is legitimate in the non-adiabatic approximation, under the condition $\delta {=}\Delta^2/(\alpha E_C ){\ll} 1$. By exponentiating the result of the first iteration we obtain the probability $P_2(t){=}|C_2(t)|^2$ to find the system in the $N$-charge state:
\begin{eqnarray}\label{probability}
P_{2}(t)\approx\exp\left(-\frac{\pi}{2}\frac{\Delta^2}{\alpha E_C}\left[F\left(\tilde t^{+}\right)+F\left(\tilde t^-\right)\right]\right),
\end{eqnarray}
where the function
\begin{eqnarray}\label{def}
F(z)=\left[\left(\frac{1}{2}+C(z)\right)^2+\left(\frac{1}{2}+S(z)\right)^2\right]
\end{eqnarray}
is expressed in terms of the Fresnel integrals
\begin{equation}\label{fresnel}
S(z)=\sqrt{\frac{2}{\pi}}\int_0^zdt\sin t^2 \;, \quad C(z)=\sqrt{\frac{2}{\pi}}\int_0^zdt\cos t^2.
\end{equation}
In Eq.(\ref{probability}) we denote $\tilde t^{\pm}{=}\sqrt{2E_C/\pi}[\sqrt{\alpha}t{\pm} 1/(2\sqrt{\alpha})]$.
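The limiting behaviour of Eq. (\ref{probability}) is easy to check numerically. In the Python sketch below (our own illustration; the quadrature-based Fresnel integrals and the parameter values $E_C{=}1$, $\Delta{=}0.004$, $\alpha{=}0.00381$, i.e. $\delta{\approx}0.0042$, are assumptions made for the test), $P_2\to 1$ well before the crossings, while after both crossings $S(z),C(z)\to 1/2$, so that $P_2\to e^{-2\pi\delta}$, i.e. each avoided crossing contributes a Landau-Zener factor $e^{-\pi\delta}$:

```python
import numpy as np

def _trap(y, x):
    # simple trapezoidal rule (avoids version-dependent numpy helpers)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def fresnel_SC(z, n=20001):
    # Fresnel integrals S(z), C(z) in the normalization of Eq. (7)
    t = np.linspace(0.0, z, n)
    pref = np.sqrt(2.0 / np.pi)
    return pref * _trap(np.sin(t * t), t), pref * _trap(np.cos(t * t), t)

def p2(t, alpha, E_C, Delta):
    # survival probability of the N-charge state, Eqs. (5)-(7)
    def F(z):
        S, C = fresnel_SC(z)
        return (0.5 + C) ** 2 + (0.5 + S) ** 2
    pref = np.sqrt(2 * E_C / np.pi)
    tp = pref * (np.sqrt(alpha) * t + 0.5 / np.sqrt(alpha))
    tm = pref * (np.sqrt(alpha) * t - 0.5 / np.sqrt(alpha))
    return np.exp(-0.5 * np.pi * Delta ** 2 / (alpha * E_C) * (F(tp) + F(tm)))
```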
In Fig.~2 we plot the probability $P_2$ for two different sets of parameters (see the details in the figure caption): the orange curves represent the analytic solution Eq.(\ref{probability}) with $\Sigma{=}0$, while the black curves correspond to
the numerical solution with $\Sigma{\neq}0$. The step-like behaviour characteristic of the orange curve originates from the interplay between two time scales of the LZ problem \cite{gefen1},\cite{gefen2}: i) the Zener time $t_Z{\sim} (\alpha E_C)^{-1/2}$ associated with the "individual" Landau-Zener transitions at the corresponding avoided crossings (we consider $t_Z$
for the non-adiabatic LZ transition \cite{gefen1}); ii) the dwell time $t_D{\sim} \alpha^{-1}$ given by the time interval between the two consecutive crossings
(see Figs.~1 and 2). Two different regimes correspond to two opposite limiting cases: (i) the two Landau-Zener transitions can be considered as two consecutive (independent) avoided crossings if $t_Z{<}t_D$ (see the upper panel of Fig.~2), and (ii)
the two transitions cannot be separated in time if $t_Z{>}t_D$, in which case the interference between the nearest avoided crossings must be taken into account (see the lower panel of Fig.~2). This interference results in a pronounced super-structure in the time
evolution of the probability $P_2(t)$. The emergence of the two energy scales $E_1$ and $E_2$ with $E_1{-}E_2 {\sim }E_C$ leads to the "beats" pattern characterized by the period
$t_{beats}{\sim }E_C^{-1}$.
It is convenient to consider the "triangle" formed by the three parabolas (see Fig.~1)
as a Mach-Zehnder interferometer. Each avoided crossing point is equivalent
to a "mirror" whose transparency is determined by the LZ probability.
The left avoided crossing therefore splits the state into two parts (the red dash-dotted lines representing the adiabatic and non-adiabatic paths in Fig.~1), while the right crossing can either play the role of yet another splitter (if $\Sigma{=}0$) or
detect the interference between the transmitted (diabatic) and reflected (adiabatic) paths if $\Sigma{\neq}0$.
The "beats" super-structure is
associated with the repopulation of all three states of
{\color{black}the Mach-Zehnder interferometer} due to almost perfect "transmission"
at $n_g{=}N$ (induced tunneling is given by the second order processes
$\propto \Delta^2/E_C$, see \cite{kiselev1} for the details.
The interference pattern changes its character when
the "transmission" at $n_g{=}N$ associated with the tunneling of two Cooper pairs
becomes pronounced (black curves in Fig. 2).
The "finite reflection"
at the "upper mirror" (splitter) $n_g{=}N$ leads to the probability $P_2$ deficit
(see the difference between the orange and black curves at the upper panel of Fig.2) and modifies the step pattern in the regime $t_Z{<}t_D$.
Besides, we emphasize that the probabilities to find the system in the $N{-}1$ and $N{+}1$ states are equally distributed in the absence of the $\Sigma$-terms. The reason for this equipartition is the
equivalence of the two tunneling rates at the two avoided crossing points $t=\pm 1/(2\alpha)$. This effect holds in both regimes $t_Z {\lessgtr} t_D$. Taking into account
a finite $\Sigma$ results in the appearance of an asymmetry between the probabilities $P_1$ and $P_3$. Moreover, this asymmetry becomes even more pronounced in the case
$t_Z{>}t_D$, see the insets in the lower panel of Fig.~2.
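These statements can be verified by direct numerical integration of the Schr\"odinger equation with the Hamiltonian (\ref{Hamil2}). The Python sketch below (our own illustration; the stepping scheme and the parameters $E_C{=}1$, $\Delta{=}0.05$, $\alpha{=}0.05$, i.e. $\delta{=}0.05$ in the regime $t_Z{<}t_D$, are assumptions) propagates the diabatic $N$-charge initial state through both avoided crossings:

```python
import numpy as np

def lz_sweep(alpha=0.05, E_C=1.0, Delta=0.05, Sigma=0.0, dt=0.005):
    # Integrate i d(psi)/dt = H(t) psi with H from Eq. (2) and n_g(t) = alpha*t
    # (we set N = 0); the avoided crossings sit at t = -1/(2 alpha) and +1/(2 alpha).
    charges = np.array([-1.0, 0.0, 1.0])
    psi = np.array([0.0, 1.0, 0.0], dtype=complex)  # start with N Cooper pairs
    for t in np.arange(-40.0, 40.0, dt):
        ng = alpha * (t + 0.5 * dt)                 # midpoint value of the sweep
        H = np.diag(E_C * (ng - charges) ** 2).astype(complex)
        H[0, 1] = H[1, 0] = H[1, 2] = H[2, 1] = Delta
        H[0, 2] = H[2, 0] = Sigma
        w, V = np.linalg.eigh(H)                    # exact step for the frozen H
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    return np.abs(psi) ** 2

P = lz_sweep()  # delta = Delta^2/(alpha*E_C) = 0.05: non-adiabatic regime
```

For these parameters the final $P_2$ is close to the product of two independent LZ factors, $e^{-2\pi\delta}$, and $P_1\approx P_3$ up to corrections of higher order in $\delta$.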
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{fig2.pdf}
\caption{(Color online)
Main frames: Time-dependent Landau-Zener probability $P_2(t){=}|C_2(t)|^2$ obtained from
Eq. (\ref{schr})
as a function of the dimensionless time $\sqrt{\alpha E_C}t$ (see the definitions and detailed explanations in Section II). The insets show the time evolution of the probabilities $P_1(t){=}|C_1(t)|^2$ and $P_3(t){=}|C_3(t)|^2$.
All curves are computed for the non-adiabatic regime $\delta{\ll}1$.
The orange curves correspond to the analytic solution Eq.~(\ref{probability}) with $\Sigma{=}0$. The black curves represent the results of numerical calculations
performed with $\Sigma{\neq}0$. Without any loss of generality we assume that the transparency at each avoided crossing point can be fine-tuned independently. We therefore do not rely on the smallness of $\Sigma$ compared to $\Delta$. The initial condition for all curves reads:
$C_2(-\infty){=}1$ and $C_{1}(-\infty){=}C_{3}(-\infty){=}0$.
Upper panel: $t_Z{<}t_D$, parameters $\delta=0.0042$, $\Delta/E_C{=}0.004$ and $\Sigma/E_C{=}0.024$. Lower panel: $t_Z{>}t_D$, parameters $\delta=0.011$, $\Delta/E_C{=}0.2$ and $\Sigma/E_C{=}0.8$.}
\end{center}
\label{fig2}
\end{figure}
\section{Periodically driven CPB}
In this Section we consider a periodic modulation of the dimensionless gate charge
\begin{eqnarray}\label{drive}
n_g(t)=N+\varepsilon_0+A\cos (\Omega_D t)
\end{eqnarray}
where $\Omega_D$ and $A$ are the frequency and the amplitude of the modulation, respectively, and $\varepsilon_0$ is the charge offset. We investigate the cases of resonant and off-resonant driving
and analyse Rabi oscillations \cite{rabi} in the driven three-level system.
The system is resonantly driven if the frequency of the drive $\Omega_D$
coincides with the energy difference
between two neighbouring states (two levels).
In that case, known as the conventional Rabi problem \cite{rabi}, the probability to occupy each of the two eigenstates oscillates with the Rabi frequency $\Omega_R$, which is
proportional to the amplitude of the drive.
When the two-level system is driven off resonance, the oscillation frequency
exceeds the resonant one, $\Omega_{\rm off}{>}\Omega_R$. We show that off-resonance driving
of the three-level system allows a strong violation of this inequality.
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{fig3.pdf}
\caption{Energy spectrum of the $S{=}1$ model in the presence of the single-ion
anisotropy parameter $D$ (see the main text for the discussion of the mapping between
the three-level CPB model and the $S{=}1$ Hamiltonian).
(a) The single-ion anisotropy parameter
$D{=}0$. The three-fold degeneracy of the $S{=}1$ state is lifted by the static magnetic field $h_0^z=2E_C\varepsilon_0$. The equidistant splitting of the $|{\pm} 1\rangle$ states is described by the linear Zeeman effect. (b) A finite single-ion anisotropy $D{\neq}0$ lifts the degeneracy between the $|0\rangle$ and $|{\pm} 1\rangle$ states.
The states $|{\pm} 1\rangle$ still remain degenerate in the absence of a magnetic field.
(c) A finite synthetic magnetic field $h_0^z{\neq} 0$ eliminates the degeneracy between the $|{\pm}1\rangle$ states. When all degeneracies of the effective $S{=}1$ model are lifted,
there exist three resonance frequencies corresponding to transitions between the three pairs of levels. Conditions for the resonant and off-resonant transitions are discussed in Section III.}
\end{center}
\label{fig3}
\end{figure}
\subsection{Mapping three-level systems to $S{=}1$ models}
To analyse the quantum dynamics of a multi-level CPB, it is convenient
to use the equivalent language of spin-$S$ states representing a
$(2S{+}1)$-level model.
In particular, the diagonal part of the Hamiltonian describing the three-level $S{=}1$
system can always be represented in terms of linear (dipole moment) and quadratic
(quadrupole moment) combinations of $\hat S^z$. The transitions between the
eigenstates of the $\hat S^z$ operator
are accounted for by terms linear in the $\hat S^x$, $\hat S^y$ operators and also by the corresponding
bi-linear combinations (quadrupole moments).
Rewriting the Hamiltonian
(\ref{Hamil2}) in the basis of linear and bi-linear spin $S{=}1$ operators
results in the following spin Hamiltonian:
\begin{eqnarray}\label{eaxis}
{\cal H}=H-H_0(t)=\Delta \hat S^x + h^z(t) \hat S^z + D (\hat S^z)^2
\end{eqnarray}
where
$h^z(t){=}2E_C\varepsilon_0{+}2AE_C\cos(\Omega_D t)$ is a synthetic time-dependent magnetic field, $D{=}E_C$ is an easy-axis anisotropy parameter
(quadrupole interaction) and $H_0(t){=}[h^z(t)]^2/(4E_C)$. Note that Eq. (\ref{eaxis}),
describing a three-level system,
is not linear in the $S$-operators, in contrast to the Hamiltonians
describing the quantum dynamics of a TLS. However, Eq.
(\ref{eaxis}), as well as any three-state
Hermitian Hamiltonian represented by a $3\times 3$
matrix, can be written as a linear form in the basis of the Gell-Mann matrices
(the generators of the SU(3) group) \cite{kiselev1}. The part of the Hamiltonian (\ref{eaxis}) linear in the $S{=}1$ operators, corresponding to the
$D{=}0$ case, falls into the class of the SU(2) symmetry group. The transitions between
the eigenstates of the $\hat S^z$ operator, $\{\vert{-}1\rangle, \vert 0\rangle, \vert{+}1\rangle \}$ (which are equivalent to the $\{N{-}1$, $N$, $N{+}1\}$ charge states of the CPB model), are restricted by the condition $\Delta S^z{=}\pm 1$.
A constant (non-oscillating) magnetic field applied along the $z$ direction, $h^z_0{=}2E_C\varepsilon_0$, lifts the three-fold degeneracy of the $S{=}1$ states (linear Zeeman effect).
Since the $\vert{\pm}1\rangle$ states are equidistant from the $\vert0\rangle$
state, driving with $\Omega_D{=}h^z_0$ gives access to the transitions
$\vert{-}1\rangle{\leftrightarrow}\vert0\rangle$ and $\vert0\rangle{\leftrightarrow}\vert{+}1\rangle$ (see Fig.~3(a)).
A finite quadrupole interaction (single-ion anisotropy) $D{\neq} 0$
lifts the degeneracy between the $\vert0\rangle$ and $\vert{\pm}1 \rangle$ states
(see Fig.~3(b)). A finite synthetic magnetic field $h^z_0$ (i.e., a finite charge offset)
applied along the $z$-direction eliminates the degeneracy of the $\vert{\pm} 1\rangle$ states.
Therefore, a finite $D$-term
explicitly breaks the $SU(2)$ symmetry and relaxes the selection rule, allowing
transitions with $\Delta S^z{=}{\pm}2$.
However, the CPB model Eq. (\ref{Hamil2}) is derived under the condition $E_C{=}D{\gg}\Delta$. Thus, the $SU(2)$-symmetric point is beyond the validity of the CPB model.
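The diagonal part of this mapping can be verified entry by entry. In the Python sketch below (our own check; note that with the conventional normalization of the spin-1 matrices, where the off-diagonal elements of $\hat S^x$ equal $1/\sqrt{2}$, the tunneling term enters with the coupling $\sqrt{2}\,\Delta$, and the values $E_C{=}1$, $\Delta{=}0.05$, $\varepsilon_0{=}0.07$ are arbitrary), the spin Hamiltonian is compared with Eq. (\ref{Hamil2}) at $\Sigma{=}0$, $n_g{=}N{+}\varepsilon_0$, after subtracting $H_0{=}[h^z]^2/(4E_C)$:

```python
import numpy as np

E_C, Delta, eps0 = 1.0, 0.05, 0.07
h_z, D = 2 * E_C * eps0, E_C          # synthetic field and single-ion anisotropy

# Spin-1 operators in the basis {|+1>, |0>, |-1>}, identified here with the
# charge states {N-1, N, N+1} (the sign convention that reproduces the mapping).
Sz = np.diag([1.0, 0.0, -1.0])
Sx = np.array([[0.0, 1, 0], [1, 0, 1], [0, 1, 0]]) / np.sqrt(2)

H_spin = np.sqrt(2) * Delta * Sx + h_z * Sz + D * Sz @ Sz

# CPB Hamiltonian Eq. (2) at Sigma = 0, n_g = N + eps0, minus H0 * identity.
charges = np.array([-1.0, 0.0, 1.0])
H_cpb = np.diag(E_C * (eps0 - charges) ** 2) - h_z ** 2 / (4 * E_C) * np.eye(3)
H_cpb[0, 1] = H_cpb[1, 0] = H_cpb[1, 2] = H_cpb[2, 1] = Delta
```

The two matrices coincide exactly, confirming that the charging energies are reproduced by the Zeeman and quadrupole terms.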
\subsection{Rotating Wave Approximation}
The diagonal elements of Eq. (\ref{Hamil2}) subject to the periodic drive Eq. (\ref{drive}) explicitly depend on time. We perform the first (exact) step
of transforming the Hamiltonian of the model by
rewriting Eq. (\ref{Hamil2}) in a new rotating-frame basis, applying
the transformation:
\begin{eqnarray}\label{oper2}
&&\hat V=\exp\left(-iE_C\left[\frac{2A}{\Omega_D}\sin(\Omega_D t) \cdot\hat S^z+\right.\right.\\
&&\left.\left.\hat I\cdot\left(\varepsilon_0^2t+\frac{2\varepsilon_0A}{\Omega_D}\sin(\Omega_D t)+\frac{A^2}{2\Omega_D}\left(t+\frac{\sin (2\Omega_D t)}{2}\right)\right)\right]\right),\nonumber
\end{eqnarray}
The transformation Eq. (\ref{oper2}) eliminates the time dependence from the diagonal matrix elements of Eq. (\ref{Hamil2}) by transferring it
to the off-diagonal elements of the
Hamiltonian matrix. In Eq.(\ref{oper2}) $\hat I$ denotes the $3\times 3$ unit matrix.
A further simplification of the transformed Hamiltonian is achieved by rewriting the time-dependent off-diagonal elements of the Hamiltonian matrix with the help of the textbook identity for the Bessel functions,
$\exp(ix\sin t)=\sum_m J_m(x)e^{imt}$. As a result, the new Hamiltonian
$\tilde H{=}\hat V^{-1} H \hat V-i\hat V^{-1}\dot{\hat V}$ reads as follows:
\begin{eqnarray}\label{Hamil3}
\tilde H=\sum_{m=-\infty}^{\infty}\left(
\begin{array}{ccc}
E_C (1+2\varepsilon_0) & \Delta_m e^{im\Omega_D t} & 0\\
\Delta_m e^{-im\Omega_D t} & 0 & \Delta_m e^{im\Omega_D t} \\
0 & \Delta_m e^{-im\Omega_D t} & E_C(1-2\varepsilon_0)
\end{array}\right),\nonumber\\
\end{eqnarray}
where $\Delta_m{=}\Delta J_m(2AE_C/\Omega_D)$. The wave functions $\varphi(t)$
written in the rotated basis are connected to the wave functions $\psi(t)$
in the original basis through the relation
$\varphi(t){=}\hat V^{-1}\cdot \psi(t)$. Note that the
Hamiltonian (\ref{Hamil3}) remains explicitly time-dependent after
the transformation Eq. (\ref{oper2}).
The next step is to transform the Hamiltonian (\ref{Hamil3}) to a time-independent form. This can be done by applying a second transformation, to yet another rotating frame.
Unfortunately, as is well known, there is no simple way to eliminate the time dependence from Eq. (\ref{Hamil3}) exactly. However, it can be done approximately, using a reliable ansatz known as the {\it rotating wave approximation} (RWA).
The idea behind the RWA is to consider the solution of the Schr\"odinger equation as a sum of $k$-th harmonics:
\begin{eqnarray}\label{schr2}
\varphi(t)=\sum_k\left(
\begin{array}{ccc}
e^{ik\Omega_D t} & 0 & 0\\
0 & 1 & 0 \\
0 & 0 & e^{-ik\Omega_D t}
\end{array}\right)\tilde\varphi_k(t),
\end{eqnarray}
For each $m$-th harmonic in Eq. (\ref{Hamil3}) there exists a
corresponding $k{=}m$ term in Eq. (\ref{schr2}), such that the off-diagonal matrix elements of the new Hamiltonian are
given by a sum of two terms: one non-oscillating and one fast oscillating.
After neglecting the fast-oscillating terms in Eq.(\ref{Hamil3}) we write the Schr\"odinger equation for the $m$-th harmonic, $\tilde \varphi^{(m)}$, as follows:
\begin{eqnarray}\label{schr3}
i\dot {\tilde \varphi}^{(m)}(t)=\left(
\begin{array}{ccc}
E_C+\delta \omega_m & \Delta_m & 0\\
\Delta_m & 0 & \Delta_m \\
0 & \Delta_m & E_C-\delta\omega_m
\end{array}\right)\cdot \tilde \varphi^{(m)}(t),\nonumber\\
\end{eqnarray}
where $\delta\omega_m{=}m\Omega_D{+}2E_C\varepsilon_0$. While a general solution of Eq.(\ref{schr3}) is cumbersome, below we consider only some cases of special interest.
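One case of special interest, exploited below for the off-resonance Rabi effect, is the point $\delta\omega_m{=}0$. As a numerical aside (ours; $E_C{=}1$, $\Delta_m{=}0.05$ are illustrative values), diagonalizing the matrix of Eq. (\ref{schr3}) at this point shows that the antisymmetric combination of the outer states decouples with eigenvalue exactly $E_C$, while the symmetric combination hybridizes with the middle state, giving eigenvalues $(E_C\pm\sqrt{E_C^2+8\Delta_m^2})/2$; the small splitting $\approx 2\Delta_m^2/E_C$ is the slow frequency that reappears below:

```python
import numpy as np

E_C, Delta_m = 1.0, 0.05
# RWA Hamiltonian of the text at delta_omega_m = 0
H = np.array([[E_C, Delta_m, 0.0],
              [Delta_m, 0.0, Delta_m],
              [0.0, Delta_m, E_C]])
w = np.linalg.eigvalsh(H)                  # sorted eigenvalues
xi = np.sqrt(E_C ** 2 + 8 * Delta_m ** 2)  # (E_C - xi)/2, E_C, (E_C + xi)/2
```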
\subsection{Resonance Rabi oscillations in CPB}
The matrix form of the time-independent Hamiltonian (\ref{schr3}) implies that
only two pairs of levels, namely $(N, N{+}1)$ and $(N, N{-}1)$, can be
fine-tuned to resonance by adjusting $\delta\omega_m$. The resonance between
the $(N{-}1, N{+}1)$ states is typically not accessible due to the absence
(smallness) of the corresponding matrix elements. Indeed, the
probability for two Cooper pairs to tunnel simultaneously in the CPB is intuitively small due to
the smallness of the phase space for such a process.
Therefore, there are only {\color{black} two} resonance Rabi oscillations in the CPB model. If $\delta \omega_m{=} {+}E_C$ (which is equivalent to
$m\Omega_D{=}E_C(1{-}2\varepsilon_0)$), the resonance condition for the transition between the $(N, N{+}1)$ states is satisfied.
This resonance condition assumes that the three-level system is considered away from the resonance $n_g{=}N{\pm}1/2$. It provides an upper bound for the offset charge,
$|\varepsilon_0|{<}(1/2)(1-\Delta/E_C)$.
The states $(N, N{-}1)$ stay off resonance,
being separated by a large energy offset $2E_C$. Under this condition
the transition between $(N, N{-}1)$
can be neglected and the Hamiltonian matrix (\ref{schr3}) is reduced to a
$2{\times} 2$ form \cite{carroll}. The Rabi oscillations in the TLS are
described by the standard textbook equation \cite{rabi} (for simplicity we focus on the single-photon $m{=}1$ resonance): the resonant drive with
$\Omega_D{=}E_C(1{-}2\varepsilon_0)$ results in oscillations with
$\Omega_R{=}2A\cdot\Delta/(1{-}2\varepsilon_0)$ if the amplitude of the drive
$A{\ll} \Omega_D/E_C$ (to obtain the equation for $\Omega_R$ we use the
asymptotic form of the Bessel function $J_1(z{\ll} 1){\approx} z/2$).
If the TLS is driven near the $n_g{=}N{\pm}1/2$ resonance,
we expand the dimensionless gate charge across the resonance as follows:
\begin{eqnarray}\label{drive1}
n_g(t)=N\pm 1/2+\tilde\varepsilon_0+\tilde A\cos (\Omega_D t).
\end{eqnarray}
The resonance condition reads $\Omega_D{=}\Delta$ and $\Omega_R{\propto}\tilde A$
in accordance with the standard theory of the Rabi oscillations.
If $\delta \omega_m{=} {-}E_C$ (which is equivalent to
$m\Omega_D{=}{-}E_C(1{+}2\varepsilon_0)$), the resonance condition for the transition between the $(N, N{-}1)$ states is satisfied and the states $(N, N{+}1)$ stay off resonance.
Analysing the corresponding Rabi oscillations in the TLS under the resonance condition $\Omega_D{=}{-}E_C(1{+}2\varepsilon_0)$ for the single-photon process $m{=}1$, we obtain Rabi oscillations
with the frequency $\Omega_R{=}{-}2A\cdot\Delta/(1{+}2\varepsilon_0)$. The analysis of the multi-photon resonances and of periodic driving near the $n_g{=}N{\pm}1/2$ resonance
can be performed similarly to the analysis of the $(N, N{+}1)$ Rabi oscillations
considered above.
If $\Sigma{\neq}0$, direct tunneling of two Cooper pairs is allowed and
a third Rabi resonance, between the
$(N-1){\leftrightarrow}(N+1)$
states, becomes possible. In that situation
the $N$ state is separated from the $(N{\pm}1)$ states by the large energy gap
$E_C$ and can therefore be neglected. The resonance condition for the Rabi oscillations in the TLS reads
$\Omega_D{=}\Sigma$ and the Rabi frequency is proportional to the amplitude of the corresponding drive.
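These resonance conditions can be checked against the original time-dependent problem. The Python sketch below (our own illustration; the integration scheme and the parameter values $E_C{=}1$, $\Delta{=}0.02$, $\varepsilon_0{=}0.1$, $A{=}0.15$, $\Sigma{=}0$ are assumptions) drives the Hamiltonian (\ref{Hamil2}) with the gate charge of Eq. (\ref{drive}) and records the maximal population transferred to the $N{+}1$ state:

```python
import numpy as np

E_C, Delta, eps0, A = 1.0, 0.02, 0.1, 0.15

def drive(Omega_D, T=600.0, dt=0.02):
    # Integrate i d(psi)/dt = H(t) psi for the driven CPB (Sigma = 0, N = 0),
    # n_g(t) = eps0 + A*cos(Omega_D t), starting in the N-charge state.
    charges = np.array([-1.0, 0.0, 1.0])
    psi = np.array([0.0, 1.0, 0.0], dtype=complex)
    p3_max = 0.0
    for t in np.arange(0.0, T, dt):
        ng = eps0 + A * np.cos(Omega_D * (t + 0.5 * dt))
        H = np.diag(E_C * (ng - charges) ** 2).astype(complex)
        H[0, 1] = H[1, 0] = H[1, 2] = H[2, 1] = Delta
        w, V = np.linalg.eigh(H)
        psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
        p3_max = max(p3_max, float(abs(psi[2]) ** 2))
    return p3_max

# Resonant drive: Omega_D = E_C(1 - 2*eps0) = 0.8, predicted Rabi frequency
# Omega_R ~ 2*A*Delta/(1 - 2*eps0) = 0.0075, so T = 600 covers a half period.
p_res = drive(0.8)
```

A large fraction of the population is transferred at resonance, while a detuned drive (e.g. $\Omega_D{=}0.55$) leaves the $N{+}1$ state essentially empty.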
\subsection{Off-resonance Rabi oscillations in CPB}
As we pointed out in the previous subsection, the matrix
element describing the tunneling of two Cooper pairs is negligible compared to
the Josephson energy. Therefore, without loss of generality, we assume that
$\Sigma{=}0$ and
there is no direct transition between the $N{+}1$ and $N{-}1$ states. However, such a transition arises as a second-order tunneling process.
We refer to the Rabi oscillations associated with the indirect
$(N{+}1){\leftrightarrow}(N{-}1)$ transition as the off-resonance Rabi effect. The degeneracy of the $N{+}1$ and $N{-}1$ levels
(in the absence of direct tunneling) is restored under the condition $\delta\omega_m{=}0$,
or $m\Omega_D{=}{-}2E_C\varepsilon_0$.
The solution of Eq.(\ref{schr3}) is written down in the form
\begin{eqnarray}\label{solu}
\tilde \varphi^{(m)}(t){=}\frac{1}{2}\exp(-iE_C t/2)\cdot\hat M \cdot \tilde \varphi(0)
\end{eqnarray}
where matrix $\hat M$ is given by
\begin{eqnarray}\label{sol}
\hat M{=}\left(
\begin{array}{ccc}
e^{-i\frac{E_Ct}{2}}+\theta_- & -\frac{i4\Delta_m}{\xi}\sin\left(\frac{\xi t}{2}\right) & -e^{-i\frac{E_Ct}{2}}+\theta_-\\
-\frac{i4\Delta_m}{\xi}\sin\left(\frac{\xi t}{2}\right) & 2\theta_+ & -\frac{i4\Delta_m}{\xi}\sin\left(\frac{\xi t}{2}\right) \\
-e^{-i\frac{E_Ct}{2}}+\theta_- & -\frac{i4\Delta_m}{\xi}\sin\left(\frac{\xi t}{2}\right) & e^{-i\frac{E_Ct}{2}}+\theta_-
\end{array}\right).\nonumber\\
\end{eqnarray}
For the parametrization of the matrix $\hat M$ in Eq.(\ref{sol}) we use the shorthand notations $\theta_{\pm}{=}\cos(\xi t/2){\pm }i(E_C/\xi)\sin(\xi t/2)$ and $\xi=\sqrt{E_C^2+8\Delta_m^2}$. In the case of a small driving amplitude, $A{\ll} \Omega_D/E_C$, the Bessel function $J_m(z{\ll} 1)\approx (z/2)^m/m!$ and therefore $\Delta_m{\approx} \Delta (AE_C/\Omega_D)^m/m!$.
The transition probability between the $\vert i \rangle$ (initially occupied)
and $\vert j\rangle$ (initially empty for $j{\neq} i$) states,
$P^{(m)}_{i\rightarrow j}=|\tilde \varphi_j^{(m)}(t)|^2$ for the $m$-photon resonance,
is straightforwardly obtained from Eq.(\ref{solu}) and Eq.(\ref{sol}).
Assuming that either the $N{-} 1$ or the $N{+}1$ charge state
was occupied initially, we find that the time-dependent population difference
(equivalent to the time evolution of the expectation value of the $\hat S^z(t)$ operator)
is given by a slowly varying oscillating function
\begin{eqnarray}\label{osc}
&&P^{(m)}_{1-3}=|\tilde\varphi^{(m)}_1(t)|^2-|\tilde\varphi^{(m)}_3(t)|^2\approx \\&&
\cos\left(\frac{2\Delta_m^2t}{E_C}\right)\left(1-\frac{2\Delta_m^2}{E_C^2}\right)+\frac{2\Delta_m^2}{E_C^2}\cos(E_C t).\nonumber
\end{eqnarray}
If the initial condition in Eq. (\ref{solu}) and Eq. (\ref{sol}) assumes that
the $N$-charge state is occupied while the $N{\pm}1$ states are empty,
the oscillations
in the population difference (the precession of the expectation value of $\hat S^z$)
are absent: $P_{1-3}^{(m)}{=}0$.
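The slowly varying form of Eq. (\ref{osc}) can be compared with the exact evolution. The Python sketch below (our own check; $E_C{=}1$, $\Delta_m{=}0.05$, $\delta\omega_m{=}0$ are illustrative values) propagates the initial state $\vert N{-}1\rangle$ under the time-independent RWA Hamiltonian (\ref{schr3}) and evaluates the population difference:

```python
import numpy as np

E_C, Delta_m = 1.0, 0.05
H = np.array([[E_C, Delta_m, 0.0],
              [Delta_m, 0.0, Delta_m],
              [0.0, Delta_m, E_C]])       # RWA Hamiltonian at delta_omega_m = 0
w, V = np.linalg.eigh(H)
t = np.linspace(0.0, 300.0, 3001)

# exact amplitudes for the initial state |N-1> (the first basis vector)
c0 = V.conj().T @ np.array([1.0, 0.0, 0.0])
amps = V @ (np.exp(-1j * np.outer(w, t)) * c0[:, None])
p13_exact = np.abs(amps[0]) ** 2 - np.abs(amps[2]) ** 2

# slowly varying approximation from the text: slow envelope at 2*Delta_m^2/E_C
# plus a weak fast component at frequency E_C
r = 2 * Delta_m ** 2 / E_C ** 2
p13_approx = np.cos(2 * Delta_m ** 2 * t / E_C) * (1 - r) + r * np.cos(E_C * t)
```

The slow envelope oscillates at $\Omega_R{=}2\Delta_m^2/E_C$, with a small fast component at frequency $E_C$, in agreement with Eq. (\ref{osc}).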
It is convenient to define a Fourier transform of the probability
\begin{eqnarray}\label{four}
P^{(m)}_{1-3}(\omega)=\int_{-\infty}^{+\infty} P^{(m)}_{1-3}(t)
e^{-i\omega t} dt
\end{eqnarray}
For the indirect $(N+1){\leftrightarrow}(N-1)$ transition this function
contains two Lorentzian peaks (in the presence of decoherence):
the main peak at the frequency
$\omega{=}\Omega_R{=}2\Delta_m^2/E_C$ with height
$1-2(\Delta_m/E_C)^2$, and a satellite peak at $\omega{=}E_C$ with
height $2(\Delta_m/E_C)^2$. The Fourier transform of the
total transition probability, obtained by summation over all multi-photon processes, has the characteristic shape of a frequency comb.
\section{Summary and discussions}
The standard investigation of the Cooper-pair box model describing a charge Josephson qubit assumes a projection onto a TLS near the degeneracy points,
where the dimensionless gate charge $n_g$ takes the half-integer values
$n_g{=}N{\pm} 1/2$. The degeneracy is lifted by including the tunneling of one Cooper pair. As a result, a Landau-Zener transition takes place, with a probability controlled by the Josephson energy and the Zener tunneling rate.
In this paper we extended the CPB model by including an additional degeneracy point
between the $N{-}1$ and $N{+}1$ charge states. The minimal model accounting for this degeneracy is formulated in terms of a three-level system. We investigated
the Landau-Zener transition associated with a linear sweep of $n_g$ in the three-level model by solving the Schr\"odinger equation using Kayanuma's method. We have shown that
the LZ probabilities demonstrate a behaviour characterized by either a "step" structure or a "beats" pattern. We have formulated the conditions for the formation of the steps and beats in terms of the parameters of the three-level model. We introduced a mapping between the
three-level model describing the CPB and the models describing the quantum dynamics of an $S{=}1$ system in the presence of single-ion anisotropy (quadrupole interaction).
The analysis of the Rabi oscillations in the periodically driven three-level system is performed in the framework of the rotating wave approximation for two important limiting cases, of resonant and off-resonant drives. It is shown that if the direct transition between a certain pair of levels is allowed by the symmetry, then the resonance Rabi oscillations are well described by the two-level model. In that case the resonance condition assumes driving at a frequency equal to the energy offset. If, however, the direct transition between the two levels is
forbidden by the symmetry (when the corresponding matrix element is zero), the Rabi oscillations nevertheless occur, as a second-order tunneling process, at
an off-resonance frequency which scales quadratically with
the Josephson energy. It is well known that for two-level models any detuning from the resonance increases the frequency of the oscillations. The resonance condition thus gives a lower bound for the Rabi oscillation frequency: it is equal to the amplitude of the drive.
The off-resonance Rabi oscillations in the three-level CPB Hamiltonian are predicted
to be characterized by a much smaller frequency, determined by the second-order
tunneling process. These Rabi oscillations correspond to the precession
of the $S^z$ projection (the population difference between the $N{+} 1$ and $N{-} 1$ states, which share the same, even or odd, charge parity) described by the effective $S{=}1$ Hamiltonian.\\
\section*{Acknowledgements}
We acknowledge fruitful conversations with Pertti Hakonen at
the early stage of the project. We are grateful to Mark Dykman, Yuval Gefen, Sigmund Kohler, Heribert Lorenz, Stefan Ludwig, Valery Pokrovsky and Nikolay Sinitsyn
for many inspiring discussions of the Landau-Zener-St\"uckelberg-Majorana physics.
\section{Introduction}
Modern machine learning models can yield highly accurate predictions in many applications. As these predictions are often used in critical decision making, it is increasingly important to accompany them with an uncertainty quantification of how much the true label may deviate from the prediction. A common approach to quantifying the uncertainty in the data is to learn a \emph{prediction set}---a set-valued analogue of usual (point) predictions---which outputs a \emph{subset} of candidate labels instead of a single predicted label. For example, this could be a prediction interval for regression, or a discrete label set for multi-class classification. A common requirement for learned prediction sets is that it should achieve \emph{valid coverage}, i.e. the set should cover the true label with high probability (such as 90\%) on a new test example~\citep{lawless2005frequentist}. In addition to coverage, the prediction set is often desired to have a good \emph{efficiency}, such as a low length or small cardinality~\citep{lei2018distribution,sadinle2019least}, in order for it to be informative. Note that coverage and efficiency typically come as a trade-off, as it is in general more likely to achieve a better coverage using a larger set.
This paper is concerned with the problem of finding the most efficient prediction set with valid coverage. Our approach builds on conformal prediction~\citep{vovk2005algorithmic}, a powerful framework for generating prediction sets from (trained) base predictors with finite-sample coverage guarantees. Conformal prediction has been used for learning prediction sets in a variety of tasks in regression~\citep{lei2014distribution,lei2018distribution,romano2019conformalized}, classification~\citep{cauchois2020knowing,romano2020classification,angelopoulos2020uncertainty}, structured prediction~\citep{bates2021distribution}, and so on.
However, the conformalization step in conformal prediction by default does not offer the flexibility for optimizing additional efficiency metrics, as the efficiency is already determined by the associated score function and the target coverage level.
As a concrete example, the Conformalized Quantile Regression algorithm learns a single width adjustment parameter that turns a two-sided quantile predictor into a prediction interval of valid coverage~\citep{romano2019conformalized}; however, it does not offer a way of further optimizing its length (cf. Figure~\ref{figure:fig1} Left).
For certain efficiency metrics and prediction tasks, several approaches have been proposed, for example by designing a better score function~\citep{angelopoulos2020uncertainty}, using base predictors of a specific form~\citep{izbicki2019flexible, izbicki2020cd, sadinle2019least}, or selecting a best training hyperparameter~\citep{yang2021finite}.
However, optimizing the efficiency for more general tasks or efficiency metrics still largely requires ``manual'' efforts by the researcher, as it (1) often relies on specific domain knowledge about the task at hand; (2) is often done in conjunction with conformal prediction in multiple rounds of trial-and-error; (3) is often done by reasoning about high-level properties of the efficiency loss and coverage constraints (e.g. what makes the length short), but not by directly optimizing the efficiency-coverage trade-off in a data-dependent way. To the best of our knowledge, there is a lack of a more principled and unified approach for optimizing any efficiency metric subject to valid coverage over any class of prediction sets.
\begin{figure*}[t]
\centering
\vspace{-1em}
\begin{minipage}{0.49\textwidth}
\centering
\label{figure:sim-left}
\includegraphics[width=0.98\textwidth]{Figures/fig1_left.pdf}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centering
\label{figure:sim-right}
\includegraphics[width=0.98\textwidth]{Figures/fig1_right.pdf}
\end{minipage}
\vspace{-1em}
\caption{
\small Comparison of vanilla conformal prediction and our {\tt CP-Gen}~for learning a prediction interval with $90\%$ nominal coverage on a real-world regression task. Left: The function class $\set{C_t}$ is the prediction intervals used by Conformalized Quantile Regression. Right: $\set{C_{\theta, t}}$ is a larger class of intervals used by our conformal quantile finetuning procedure with the same base predictor; here the additional trainable parameter $\theta$ is the last linear layer within the quantile neural network (cf. Section~\ref{section:cqf} and Appendix~\ref{appendix:fig1-details} for more details).
}
\vspace{-1em}
\label{figure:fig1}
\end{figure*}
In this paper, we cast the above task as a constrained empirical risk minimization (ERM) problem of optimizing the efficiency subject to the coverage constraint, over any general function class of prediction sets with potentially multiple learnable parameters. This is motivated by a simple observation that vanilla conformal prediction is already equivalent to solving such a constrained ERM with one learnable parameter (Section~\ref{section:conformal}). Overall, our algorithm can be viewed as an automatic and data-dependent approach for optimizing the efficiency simultaneously with conformal prediction. Our contributions are summarized as follows.
\begin{itemize}[leftmargin=1em]
\item We propose {\tt CP-Gen}~(Conformal Prediction with General Function Class), a generalization of conformal prediction to learning multiple parameters. {\tt CP-Gen}~selects within an arbitrary class of prediction sets by solving the constrained ERM problem of best efficiency subject to valid empirical coverage (Section~\ref{section:multi-conformal-algorithm}), and is a systematic extension of existing algorithms.
\item We show theoretically that {\tt CP-Gen}~achieves approximately valid coverage and near-optimal efficiency within class, whenever the class is low-capacity with respect to both the coverage and the efficiency loss (Section~\ref{section:multi-conformal-theory}, with concrete examples in Appendix~\ref{appendix:examples}). We also provide a practical variant {\tt CP-Gen-Recal}~using data splitting and reconformalization, which achieves exact coverage, as well as good efficiency under additional assumptions (Section~\ref{section:multi-conformal-reconformalize}).
\item To address the issue that {\tt CP-Gen}~and {\tt CP-Gen-Recal}~involve a non-differentiable coverage constraint, we develop a differentiable approximation using surrogate losses and Lagrangians (Section~\ref{section:optim}). This allows us to solve the constrained ERM problem over higher-dimensional continuous parameter spaces via gradient-based optimization, and is more flexible than existing algorithms that require discretization and brute-force search.
\item We empirically demonstrate that {\tt CP-Gen-Recal}~with our gradient-based implementation can learn prediction sets with valid coverage and significantly improved efficiency on three real-data tasks: prediction intervals for regression with improved length, minimum-volume prediction sets for multi-output regression, and label prediction sets for ImageNet (Section~\ref{section:experiment} \& Appendix~\ref{appendix:imagenet}).
\end{itemize}
We illustrate our main insight via the coverage-vs-efficiency trade-off plots in Figure~\ref{figure:fig1}: While vanilla conformal prediction only learns a single parameter (within its conformalization step) by a simple thresholding rule over a coverage-efficiency \emph{curve}, our {\tt CP-Gen}~is able to further improve the efficiency by thresholding a \emph{region} formed by a larger function class.
\subsection{Related work}
\paragraph{Learning prediction sets via conformal prediction}
The framework of conformal prediction for learning prediction sets originated in the early works of~\citep{vovk1999machine,vovk2005algorithmic, shafer2008tutorial}. The main advantage of conformal prediction is that it yields (marginal) coverage guarantees regardless of the data distribution (i.e. distribution-free). More recently, conformal prediction has been applied to a variety of uncertainty quantification tasks, such as prediction intervals for regression~\citep{papadopoulos2008inductive, vovk2012conditional, vovk2015cross, lei2014distribution, vovk2018cross, lei2018distribution, romano2019conformalized,izbicki2019flexible,guan2019conformal,gupta2019nested,kivaranovic2020adaptive, barber2021predictive,foygel2021limits}, label prediction sets for classification problems \citep{lei2013distribution,sadinle2019least,romano2020classification, cauchois2020knowing,cauchois2020robust,angelopoulos2020uncertainty}, and prediction sets for structured output~\citep{bates2021distribution}.
\paragraph{Optimizing efficiency in addition to valid coverage}
The problem of finding a prediction set with (approximate) valid coverage and small size has been considered, e.g. in~\citet{pearce2018high,chen2021learning}~for regression and~\citet{park2019pac} for classification; however, these approaches do not use conformal prediction.~\citet{yang2021finite}~propose to minimize the length of the conformal interval over either a finite class or a linear aggregation of base predictors, and provide coverage and efficiency guarantees. All of the above works formulate this task as a risk minimization problem, yet are restricted to considering either finite classes or specific efficiency loss functions.
Our work is inspired by~\citep{yang2021finite} and generalizes the above works by allowing any function class and efficiency loss, along with providing a differentiable approximate implementation.
The problem of optimizing the efficiency can also be done by utilizing structures of the particular efficiency loss to choose a specific base predictor and an associated prediction set~\citep{lei2014distribution,sadinle2019least,izbicki2019flexible,izbicki2020cd}. By contrast, our approach does not require either the efficiency loss or the base predictor to possess any structure, and is thus complementary.
\paragraph{Other algorithms and theory}
An alternative line of work constructs prediction intervals / prediction sets by aggregating the predictions of multiple base predictors through Bayesian neural networks~\citep{mackay1992bayesian, gal2016dropout, kendall2017uncertainties,malinin2018predictive,maddox2019simple} or ensemble methods~\citep{lakshminarayanan2016simple, ovadia2019can, huang2017snapshot,malinin2019ensemble}. However, these methods do not typically come with (frequentist) coverage guarantees. The recent work of~\citet{hoff2021bayes} studies ways of enhancing Bayes-optimal prediction with frequentist coverage.
Prediction intervals can also be obtained by parameter estimation using a parametric model for the data~\citep{cox1975prediction,bjornstad1990predictive,beran1990calibrating,barndorff1996prediction,hall1999prediction,lawless2005frequentist}; see~\citep{tian2020methods} for a review. However, the coverage of such prediction intervals relies heavily on the parametric model being correct (well-specified), and can even fail in certain high-dimensional regimes where the model is indeed correct~\citep{bai2021understanding}.
\section{Ablation studies}
\subsection{Conformal quantile finetuning}
\label{appendix:cqf-ablations}
We report ablation results for the conformal quantile finetuning problem with nominal coverage level $1-\alpha\in\set{80\%, 95\%}$, and otherwise exactly the same setup as Section~\ref{section:cqf}. The conclusions are qualitatively the same as the $90\%$ version presented in Table~\ref{table:conformal-finetuning}.
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for conformal quantile finetuning} on real-data regression tasks at level $1-\alpha=80\%$. For each method we report the (test) coverage, length, and pinball loss of the corresponding base quantile predictor. All results are averaged over 8 random seeds.}
\vspace{-1em}
\centerline{
\label{table:conformal-finetuning-80}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{3}{c}{{\tt CQR}} & \multicolumn{3}{c}{{\tt QR} + {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
Dataset & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ \\
\midrule
MEPS\_19 & $80.42$ & $0.702$ & $0.154$ & $80.45$ & $\mathbf{0.514}$ & $0.190$ \\
MEPS\_20 & $80.44$ & $0.707$ & $0.161$ & $80.48$ & $\mathbf{0.466}$ & $0.200$ \\
MEPS\_21 & $79.91$ & $0.696$ & $0.151$ & $79.85$ & $\mathbf{0.618}$ & $0.192$ \\
Facebook\_1 & $80.38$ & $0.348$ & $0.072$ & $80.01$ & $\mathbf{0.198}$ & $0.137$ \\
Facebook\_2 & $79.96$ & $0.329$ & $0.063$ & $79.80$ & $\mathbf{0.189}$ & $0.138$ \\
kin8nm & $79.59$ & $0.865$ & $0.119$ & $78.69$ & $\mathbf{0.832}$ & $0.125$ \\
naval & $79.91$ & $2.777$ & $0.311$ & $79.76$ & $\mathbf{2.721}$ & $0.311$ \\
bio & $80.07$ & $1.791$ & $0.222$ & $80.54$ & $\mathbf{1.674}$ & $0.248$ \\
blog\_data & $80.64$ & $0.399$ & $0.082$ & $80.10$ & $\mathbf{0.272}$ & $0.158$ \\
\midrule
Nominal ($1-\alpha$) & $80.00$ & - & - & $80.00$ & - & - \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for conformal quantile finetuning} on real-data regression tasks at level $1-\alpha=95\%$. For each method we report the (test) coverage, length, and pinball loss of the corresponding base quantile predictor. All results are averaged over 8 random seeds.}
\vspace{-1em}
\centerline{
\label{table:conformal-finetuning-95}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{3}{c}{{\tt CQR}} & \multicolumn{3}{c}{{\tt QR} + {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
Dataset & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ \\
\midrule
MEPS\_19 & $94.60$ & $1.674$ & $0.078$ & $95.10$ & $\mathbf{1.292}$ & $0.091$ \\
MEPS\_20 & $94.72$ & $1.650$ & $0.081$ & $94.78$ & $\mathbf{1.261}$ & $0.097$ \\
MEPS\_21 & $94.64$ & $1.633$ & $0.071$ & $94.99$ & $\mathbf{1.351}$ & $0.086$ \\
Facebook\_1 & $94.96$ & $0.797$ & $0.036$ & $95.04$ & $\mathbf{0.601}$ & $0.061$ \\
Facebook\_2 & $95.17$ & $0.700$ & $0.031$ & $94.98$ & $\mathbf{0.560}$ & $0.060$ \\
kin8nm & $95.15$ & $1.602$ & $0.047$ & $94.95$ & $\mathbf{1.557}$ & $0.048$ \\
naval & $94.87$ & $3.308$ & $0.084$ & $94.83$ & $\mathbf{3.265}$ & $0.088$ \\
bio & $95.17$ & $2.698$ & $0.073$ & $95.22$ & $\mathbf{2.587}$ & $0.084$ \\
blog\_data & $95.07$ & $0.862$ & $0.040$ & $95.09$ & $\mathbf{0.744}$ & $0.068$ \\
\midrule
Nominal ($1-\alpha$) & $95.00$ & - & - & $95.00$ & - & - \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\subsection{Multi-output regression}
\label{appendix:multi-output-ablations}
We report ablation results for the multi-output regression problem with nominal coverage level $1-\alpha\in\set{80\%, 95\%}$, and otherwise exactly the same setup as Section~\ref{section:multi-output}. The conclusions are qualitatively the same as the $90\%$ version presented in Table~\ref{table:multi-output}, except for one dataset at level $95\%$.
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for multi-output regression} on next-state prediction tasks, at level $1-\alpha=80\%$. For each method we report the (test) coverage and volume of its learned box-shaped prediction set. All results are averaged over 8 random seeds.}
\vspace{-1em}
\label{table:multi-output-80}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{ {\tt Coord-wise} } & \multicolumn{2}{c}{ {\tt Coord-wise-Recal} } & \multicolumn{2}{c}{{ {\tt CP-Gen-Recal}~(ours)}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $87.82$ & $1.74\times 10^{-6}$ & $80.08$ & $7.83\times 10^{-7}$ & $80.09$ & $\mathbf{7.45\times 10^{-7}}$ \\
Half-Cheetah & $88.28$ & $4.26\times 10^{-7}$ & $79.96$ & $2.42\times 10^{-8}$ & $80.04$ & $\mathbf{1.37\times 10^{-8}}$ \\
Ant & $87.77$ & $1.97\times 10^{-5}$ & $80.12$ & $3.97\times 10^{-7}$ & $80.06$ & $\mathbf{1.75\times 10^{-7}}$ \\
Walker & $90.25$ & $5.88\times 10^{-7}$ & $80.28$ & $3.13\times 10^{-9}$ & $80.28$ & $\mathbf{1.45\times 10^{-9}}$ \\
Swimmer & $91.49$ & $1.33\times 10^{-6}$ & $79.99$ & $6.18\times 10^{-8}$ & $79.97$ & $\mathbf{4.32\times 10^{-9}}$ \\
Hopper & $86.47$ & $3.25\times 10^{-10}$ & $79.87$ & $7.40\times 10^{-11}$ & $79.97$ & $\mathbf{4.41\times 10^{-11}}$ \\
Humanoid & $90.84$ & $2.86\times 10^{-7}$ & $80.05$ & $9.47\times 10^{-13}$ & $80.02$ & $\mathbf{2.41\times 10^{-13}}$ \\
\midrule
Nominal ($1-\alpha$) & $80.00$ & - & $80.00$ & - & $80.00$ & - \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for multi-output regression} on next-state prediction tasks, at level $1-\alpha=95\%$. For each method we report the (test) coverage and volume of its learned box-shaped prediction set. All results are averaged over 8 random seeds.}
\vspace{-1em}
\label{table:multi-output-95}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{ {\tt Coord-wise} } & \multicolumn{2}{c}{ {\tt Coord-wise-Recal} } & \multicolumn{2}{c}{{ {\tt CP-Gen-Recal}~(ours)}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $97.21$ & $1.07\times 10^{-4}$ & $95.10$ & $4.60\times 10^{-5}$ & $95.12$ & $\mathbf{8.61\times 10^{-6}}$ \\
Half-Cheetah & $96.80$ & $2.37\times 10^{-4}$ & $95.03$ & $4.03\times 10^{-5}$ & $95.01$ & $\mathbf{3.29\times 10^{-5}}$ \\
Ant & $96.65$ & $5.30\times 10^{-1}$ & $95.02$ & $4.87\times 10^{-2}$ & $95.09$ & $\mathbf{2.39\times 10^{-2}}$ \\
Walker & $97.01$ & $8.08\times 10^{-4}$ & $94.94$ & $6.21\times 10^{-5}$ & $94.99$ & $\mathbf{4.27\times 10^{-5}}$ \\
Swimmer & $97.74$ & $3.44\times 10^{-4}$ & $94.95$ & $3.77\times 10^{-5}$ & $95.01$ & $\mathbf{5.34\times 10^{-6}}$ \\
Hopper & $96.27$ & $1.76\times 10^{-8}$ & $94.96$ & $\mathbf{8.23\times 10^{-9}}$ & $94.96$ & $1.19\times 10^{-8}$ \\
Humanoid & $97.22$ & $3.58\times 10^{-1}$ & $94.99$ & $7.69\times 10^{-4}$ & $94.91$ & $\mathbf{7.49\times 10^{-4}}$ \\
\midrule
Nominal ($1-\alpha$) & $95.00$ & - & $95.00$ & - & $95.00$ & - \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{Comparison of {\tt CP-Gen}~and {\tt CP-Gen-Recal}}
\label{appendix:cpgen}
We compare the performance of {\tt CP-Gen}~and {\tt CP-Gen-Recal}~on the multi-output regression tasks using the same setup as Section~\ref{section:multi-output}. Recall that the vanilla {\tt CP-Gen}~optimizes both $(\what{\theta}, \what{t})$ on $D_{\cal}$ (we additionally reconformalize $\what{t}$ on the same $D_{\cal}$ to address the potential bias in $\what{t}$ brought by the approximate optimization~\eqref{problem:practical}), whereas our main {\tt CP-Gen-Recal}~algorithm optimizes $\what{\theta}$ on $D_{\cal}$ and reconformalizes $\what{t}_{{\rm recal}}$ on $D_{{\rm recal}}$.
Table~\ref{table:cpgen} reports the results. Observe that, except for the volume on one dataset (Humanoid), there is no significant difference in either the coverage or the volume between the two methods.
In practice we recommend {\tt CP-Gen-Recal}~whenever the exact coverage guarantee is important; yet this result shows that---perhaps because $n_{\cal}=20000$ is large here---the coverage (generalization error) of {\tt CP-Gen}~is also nearly valid, which may be better than what our Proposition~\ref{proposition:main-gen} suggests.
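For concreteness, the only mechanical difference between the two procedures is which data split the final threshold is computed on. The following self-contained sketch illustrates the two conformalization choices; the residual scores are stand-ins drawn from a fixed distribution rather than outputs of an actually learned $\what{\theta}$:

```python
import numpy as np

rng = np.random.default_rng(0)
alpha = 0.1

def conformalize(scores, alpha):
    # Smallest t whose empirical coverage on `scores` is at least 1 - alpha.
    k = int(np.ceil((1 - alpha) * len(scores)))
    return np.sort(scores)[k - 1]

# Stand-in residual scores |y - f_theta(x)| under an already-learned theta_hat;
# in the actual algorithms these come from the respective data splits.
res_cal   = np.abs(rng.normal(size=20000))  # D_cal (CP-Gen reuses it for t)
res_recal = np.abs(rng.normal(size=2000))   # D_recal (fresh split, CP-Gen-Recal)
res_test  = np.abs(rng.normal(size=5000))

t_gen   = conformalize(res_cal, alpha)    # CP-Gen: t from the same data as theta
t_recal = conformalize(res_recal, alpha)  # CP-Gen-Recal: t from a held-out split
cov_gen   = float(np.mean(res_test <= t_gen))
cov_recal = float(np.mean(res_test <= t_recal))
print(round(cov_gen, 3), round(cov_recal, 3))
```

In this idealized setting both thresholds give near-nominal test coverage; the point of {\tt CP-Gen-Recal}~is that its exact guarantee holds even when $\what{\theta}$ overfits $D_{\cal}$.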
\begin{table}[b]
\centering
\small
\caption{\small Comparison of {\tt CP-Gen}~and {\tt CP-Gen-Recal}~on the multi-output regression tasks.}
\label{table:cpgen}
\vspace{-1em}
\centerline{
\begin{tabular}{lcccc}
\toprule
& \multicolumn{2}{c}{{{\tt CP-Gen-Recal}}} & \multicolumn{2}{c}{{{\tt CP-Gen}}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $90.12$ & $2.30\times 10^{-6}$ & $90.09$ & $2.30\times 10^{-6}$ \\
Half-Cheetah & $90.02$ & $9.07\times 10^{-7}$ & $89.96$ & $8.83\times 10^{-7}$ \\
Ant & $90.02$ & $8.25\times 10^{-5}$ & $89.98$ & $8.21\times 10^{-5}$ \\
Walker & $89.94$ & $3.47\times 10^{-7}$ & $89.91$ & $3.30\times 10^{-7}$ \\
Swimmer & $90.13$ & $1.46\times 10^{-7}$ & $89.96$ & $1.29\times 10^{-7}$ \\
Hopper & $89.92$ & $8.25\times 10^{-10}$ & $89.92$ & $8.23\times 10^{-10}$ \\
Humanoid & $89.94$ & $4.95\times 10^{-8}$ & $90.05$ & $7.08\times 10^{-8}$ \\
\midrule
Nominal & $90.00$ & - & $90.00$ & - \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\section{Differentiable optimization}
\label{section:optim}
Our (meta) algorithms {\tt CP-Gen}~and {\tt CP-Gen-Recal}~involve solving the constrained ERM problem~\eqref{problem:multi-conformal-alg}. One feasible case is when $\Theta$ is finite and small, in which case we enumerate all possible $\theta\in\Theta$ and find the optimal $\what{t}$ for each $\theta$ efficiently using quantile computation. However, this optimization is significantly more challenging when the underlying parameter set $\Theta$ is continuous and we wish to jointly optimize over $(\theta, t)$: The coverage loss $\widehat{L}_{{\rm coverage}}(C_{\theta, t})$ is \emph{non-differentiable} and its ``gradient'' is zero almost everywhere as it uses the zero-one loss. This makes the coverage constraint challenging to deal with and not amenable to gradient-based algorithms.
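For the finite-$\Theta$ case, the enumeration can be sketched in a few lines; the data, the base predictor $f(x)=x$, and the three candidate width profiles below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical heteroscedastic regression data: noise grows with |x|.
n = 2000
x = rng.uniform(-2, 2, size=n)
y = x + (0.1 + 0.5 * np.abs(x)) * rng.normal(size=n)
f = x  # stand-in base predictor f(x) = x

alpha = 0.1
k = int(np.ceil((1 - alpha) * n))

# Finite class Theta: each theta is a width profile w_theta(x), giving
# prediction sets C_{theta,t}(x) = [f(x) - t*w_theta(x), f(x) + t*w_theta(x)].
widths = {
    "constant": np.ones(n),
    "abs":      0.1 + np.abs(x),
    "square":   0.1 + x ** 2,
}

lengths = {}
for name, w in widths.items():
    t_hat = np.sort(np.abs(y - f) / w)[k - 1]   # optimal t for this theta
    lengths[name] = 2 * t_hat * w.mean()        # empirical efficiency loss

best = min(lengths, key=lengths.get)
t_best = np.sort(np.abs(y - f) / widths[best])[k - 1]
coverage = float(np.mean(np.abs(y - f) <= t_best * widths[best]))
print(best, {k_: round(v, 3) for k_, v in lengths.items()}, round(coverage, 3))
```

Each candidate $\theta$ needs only one sort of its scores, so the whole search costs $O(|\Theta|\, n\log n)$.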
To address this non-differentiability, we develop a gradient-based practical implementation by approximating the coverage constraint. We first rewrite each individual coverage loss into the form
\begin{align*}
\indic{y\notin C_{\theta, t}(x)} = \indic{s_{\theta, t}(x, y) < 0} = \ell_{{01}}(s_{\theta, t}(x,y)),
\end{align*}
where $\ell_{{01}}(z)\defeq \indic{z < 0}$ is the zero-one loss. (Such a rewriting is possible in most cases by taking $s_{\theta, t}$ as a suitable ``score-like'' function; see Appendix~\ref{appendix:exp-details} for instantiations in our experiments.) Then, inspired by the theory of surrogate losses~\citep{bartlett2006convexity}, we approximate $\ell_{{01}}(z)$ by the hinge loss $\ell_{\hinge}(z) = \brac{1-z}_+$ which is (almost everywhere) differentiable with a non-trivial gradient. We find the hinge loss to perform better empirically than alternatives such as the logistic loss.
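As a quick numerical check of this surrogate step (the score-like function named in the comments is one illustrative choice, not the only one): since $[1-z]_+ \ge \indic{z<0}$ pointwise, constraining the empirical hinge loss below $\alpha$ is, if anything, conservative for coverage.

```python
import numpy as np

# With the interval class C_{theta,t}(x) = [f(x) - t, f(x) + t], one score-like
# choice (an illustrative assumption) is s_{theta,t}(x, y) = t - |y - f(x)|,
# which is negative exactly when y falls outside the interval.
def zero_one(z):
    return (np.asarray(z) < 0).astype(float)

def hinge(z):
    # [1 - z]_+ : differentiable a.e., and upper-bounds the zero-one loss,
    # so enforcing mean hinge <= alpha implies empirical coverage >= 1 - alpha.
    return np.maximum(1.0 - np.asarray(z), 0.0)

z = np.linspace(-2.0, 2.0, 101)
print(float(hinge(z).max()), float(zero_one(z).sum()))
```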
To deal with the (modified) constraint, we turn it into an exact penalty term with penalty parameter $\lambda\ge 0$, and use a standard primal-dual formulation~\citep{bertsekas1997nonlinear} to obtain an unconstrained min-max optimization problem on the Lagrangian:
\begin{align}
\label{problem:practical}
\min_{\theta, t} \max_{\lambda\ge 0} \what{L}_{{\rm eff}}(C_{\theta, t}) + \lambda \brac{ \what{L}_{\hinge}(C_{\theta, t}) - \alpha}_+,
\end{align}
where $\what{L}_{\hinge}(C_{\theta, t})\defeq \frac{1}{n_{\cal}}\sum_{i=1}^{n_\cal} \ell_{\hinge}(s_{\theta, t}(x_i, y_i))$ is the empirical hinge loss on the calibration dataset $D_{\cal}$. Our final practical implementation of~\eqref{problem:multi-conformal-alg} solves the problem~\eqref{problem:practical} by Stochastic Gradient Descent-Ascent (with respect to $(\theta, t)$ and $\lambda$) to yield an approximate solution $(\what{\theta}, \what{t})$. We remark that in our experiments where we use the reconformalized version~{\tt CP-Gen-Recal}, we only keep the $\what{\theta}$ obtained from~\eqref{problem:practical} and perform additional reconformalization to compute $\what{t}_{{\rm recal}}$ to guarantee coverage.
We also emphasize that the approximation in~\eqref{problem:practical} makes the problem differentiable at the cost of deviating from the true constrained ERM problem~\eqref{problem:multi-conformal-alg}, and thus may sacrifice some efficiency. However, our experiments in Section~\ref{section:experiment} show that such an implementation can still improve the efficiency over existing approaches in practice, despite the approximation.
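The following is a minimal numpy sketch of this procedure on synthetic data: gradient descent-ascent on the penalized objective over a two-parameter interval class (a center offset $\theta$ and half-width $t$), followed by the reconformalization step. The data, parametrization, and step sizes are illustrative assumptions, and the outer $[\cdot]_+$ of the exact-penalty form is dropped in the dual update for simplicity:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data where the point predictor f(x) = x is biased by +0.5, so the
# class C_{theta,t}(x) = [f(x) + theta - t, f(x) + theta + t] can shorten the
# interval by learning the center offset theta.
n = 5000
x = rng.normal(size=n)
y = x + 0.5 + 0.3 * rng.normal(size=n)
f = x

alpha = 0.1
theta, t, lam = 0.0, 1.0, 1.0        # center offset, half-width, dual variable
lr, lr_dual = 0.01, 0.05

for _ in range(4000):
    r = np.abs(y - f - theta)                 # residuals under current offset
    margin = 1.0 - (t - r)                    # hinge argument per sample
    active = margin > 0.0                     # samples with nonzero hinge loss
    hinge_loss = np.mean(np.maximum(margin, 0.0))
    # Gradients of the Lagrangian 2t + lam * (hinge_loss - alpha) w.r.t. (theta, t).
    g_t = 2.0 - lam * np.mean(active)
    g_theta = -lam * np.mean(np.where(active, np.sign(y - f - theta), 0.0))
    theta -= lr * g_theta
    t -= lr * g_t
    lam = max(0.0, lam + lr_dual * (hinge_loss - alpha))  # projected dual ascent

# Reconformalize t given the learned theta, as in CP-Gen-Recal (done on the same
# data here for brevity; the actual algorithm uses a fresh split D_recal).
r = np.abs(y - f - theta)
t = np.sort(r)[int(np.ceil((1 - alpha) * n)) - 1]
coverage = float(np.mean(np.abs(y - f - theta) <= t))
print(round(theta, 2), round(t, 2), round(coverage, 3))
```

The offset drifts toward the bias of the predictor, which lets the reconformalized half-width shrink well below its initial value while the coverage constraint is met by construction.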
\section{Proof of Proposition~\ref{proposition:single-conformal}}
\label{appendix:proof-prop1}
Recall that $C_t(x)=[f(x) - t, f(x) + t]$ satisfies $\ell_{{\rm eff}}(C_t; (x, y)) = {\rm length}(C_t(x))=2t$ for any $(x,y)$. Also, we have
\begin{align*}
\indic{y\notin C_t(x)} = \indic{y\notin [f(x) - t, f(x) + t]} = \indic{\abs{y - f(x)} > t},
\end{align*}
or equivalently $\indic{y \in C_t(x)} = \indic{\abs{y - f(x)}\le t}$.
Therefore, problem~\eqref{problem:single-conformal} is equivalent to
\begin{align*}
\minimize_{t\ge 0} & ~~t \\
\subjectto & ~~1 - \what{L}_{{\rm coverage}}(C_t) \defeq \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}} \indic{ |y_i - f(x_i)| \le t} \ge 1-\alpha.
\end{align*}
By definition of quantiles, this problem is solved at
\begin{align*}
\what{t} = \ceil{(1-\alpha) n_{\cal}}\textrm{-th smallest element of}~\set{\abs{y_i - f(x_i)}}_{i=1}^{n_{\cal}},
\end{align*}
which is the desired result.
\qed
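The closed form above is a one-line computation in practice; a sketch with stand-in residuals (the absolute-normal scores are an assumption for illustration):

```python
import numpy as np

def conformal_threshold(residuals, alpha):
    """Smallest t satisfying the empirical coverage constraint of the proposition."""
    n_cal = len(residuals)
    k = int(np.ceil((1 - alpha) * n_cal))
    return np.sort(residuals)[k - 1]

rng = np.random.default_rng(0)
scores = np.abs(rng.normal(size=1000))      # stand-in for |y_i - f(x_i)|
t_hat = conformal_threshold(scores, alpha=0.1)
print(round(float(np.mean(scores <= t_hat)), 3))
```

With distinct scores, exactly $\ceil{(1-\alpha)n_{\cal}}$ of the $n_{\cal}$ residuals fall at or below $\what{t}$, so the empirical coverage is the smallest value not below $1-\alpha$.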
\section{Additional experiments and analyses}
\label{appendix:additional-experiments}
\subsection{Conditional coverage of {\tt CP-Gen-Recal}}
\label{appendix:conditional-coverage}
We analyze the improved length prediction intervals learned by {\tt CP-Gen-Recal}~(Section~\ref{section:cqf}) by evaluating its \emph{conditional} coverage metrics and comparing with the baseline {\tt CQR}~method.
As conditional coverage is hard to reliably estimate from finite data, we consider two proxy metrics proposed in~\citep{feldman2021improving} that assess the dependence between \emph{length} and \emph{indicator of coverage}:
\begin{itemize}[leftmargin=2em]
\item The correlation coefficient (Corr) between the following two random variables: the interval size $L={\rm length}(\what{C}(X))$ and the indicator of coverage $V=\indic{Y\in \what{C}(X)}$. A (population) correlation of 0 is a necessary (but not sufficient) condition for perfect conditional coverage~\citep{feldman2021improving}. Here we measure the absolute correlation, where smaller is better.
\item HSIC: A more sophisticated correlation metric between $L$ and $V$ that takes into account nonlinear correlation structures. A (population) HSIC of 0 is a necessary and sufficient condition for the independence of $L$ and $V$. We estimate HSIC on the finite test data using the method in~\citep{feldman2021improving}.
\end{itemize}
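The Corr metric is straightforward to compute from test-set intervals; a sketch below, where the intervals are hypothetical stand-ins whose half-width varies independently of the label:

```python
import numpy as np

def length_coverage_corr(lower, upper, y):
    """Absolute Pearson correlation between interval length L and coverage V."""
    L = upper - lower
    V = ((y >= lower) & (y <= upper)).astype(float)
    if L.std() == 0.0 or V.std() == 0.0:
        return 0.0  # degenerate case: constant length or constant coverage
    return float(abs(np.corrcoef(L, V)[0, 1]))

rng = np.random.default_rng(0)
y = rng.normal(size=2000)
# Hypothetical intervals: half-width drawn independently of y.
half = 1.645 + 0.2 * rng.random(size=2000)
c = length_coverage_corr(y - half, y + half, y)
print(round(c, 3))
```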
Table~\ref{table:conformal-finetuning-corr} reports the results. Observe that while our {\tt CP-Gen-Recal}~improves the length, it achieves worse (higher) Correlation/HSIC than the baseline {\tt CQR}, which is expected as length and conditional coverage often come as a trade-off.
\begin{table}[h]
\centering
\small
\caption{\small {\bf Conditional coverage results for conformal quantile finetuning} on real-data regression tasks at level $1-\alpha=90\%$. For each method we report the (absolute) correlation coefficient as well as the HSIC metric between length and indicator of coverage. All results are averaged over 8 random seeds.}
\vspace{-1em}
\centerline{
\label{table:conformal-finetuning-corr}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{3}{c}{{\tt CQR}} & \multicolumn{3}{c}{{\tt QR} + {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
Dataset & Corr($\downarrow$) & HSIC($\downarrow$) & Length($\downarrow$) & Corr($\downarrow$) & HSIC($\downarrow$) & Length($\downarrow$) \\
\midrule
MEPS\_19 & $\mathbf{ 0.022 }$ & $\mathbf{ 3.03\times 10^{-5} }$ & $1.167$ & $0.049$ & $1.77\times 10^{-4}$ & $\mathbf{0.890}$ \\
MEPS\_20 & $\mathbf{ 0.032 }$ & $\mathbf{ 3.63\times 10^{-5} }$ & $1.165$ & $0.113$ & $2.66\times 10^{-4}$ & $\mathbf{0.830}$ \\
MEPS\_21 & $\mathbf{ 0.029 }$ & $\mathbf{ 4.72\times 10^{-5} }$ & $1.145$ & $0.068$ & $2.20\times 10^{-4}$ & $\mathbf{0.962}$ \\
Facebook\_1 & $\mathbf{ 0.029 }$ & $\mathbf{ 1.27\times 10^{-5} }$ & $0.555$ & $0.175$ & $7.34\times 10^{-4}$ & $\mathbf{0.384}$ \\
Facebook\_2 & $\mathbf{ 0.024 }$ & $\mathbf{ 1.16\times 10^{-5} }$ & $0.491$ & $0.116$ & $2.68\times 10^{-4}$ & $\mathbf{0.364}$ \\
kin8nm & $\mathbf{ 0.031 }$ & $\mathbf{ 4.85\times 10^{-5} }$ & $1.214$ & $0.084$ & $9.32\times 10^{-5}$ & $\mathbf{1.173}$ \\
naval & $0.091$ & $\mathbf{ 1.05\times 10^{-5} }$ & $3.095$ & $\mathbf{ 0.064 }$ & $2.16\times 10^{-5}$ & $\mathbf{3.077}$ \\
bio & $\mathbf{ 0.026 }$ & $\mathbf{ 4.15\times 10^{-5} }$ & $2.271$ & $0.041$ & $1.09\times 10^{-4}$ & $\mathbf{2.164}$ \\
blog\_data & $\mathbf{ 0.013 }$ & $\mathbf{ 4.60\times 10^{-5} }$ & $0.605$ & $0.141$ & $5.75\times 10^{-4}$ & $\mathbf{0.496}$ \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\subsection{Alternative tweaks for {\tt CQR}~baseline}
\label{appendix:alternative-tweaks}
Here we test two additional tweaked versions of the {\tt CQR}~baseline in the prediction interval experiment of Section~\ref{section:cqf}:
\begin{itemize}[leftmargin=2em]
\item {\tt CQR}-$D_{{\rm train}}\cup D_{\cal}$: Use dataset $D_{{\rm train}}\cup D_{\cal}$ for training the base quantile regressor, then conformalize on $D_{{\rm recal}}$.
\item {\tt CQR-PinballFt}: Train the base quantile regressor on $D_{{\rm train}}$, and finetune the last linear layer on $D_{\cal}$ using the pinball loss (same as training), and conformalize on $D_{{\rm recal}}$.
\end{itemize}
Optimization details about these two methods are described in Section~\ref{appendix:opt-detail-tweaks}.
\paragraph{Result}
Table~\ref{table:conformal-finetuning-tweak} reports the results for these two tweaked baselines, in comparison with our original baseline {\tt CQR}~as well as our proposed {\tt QR} + {\tt CP-Gen-Recal}. Observe that using more training data ({\tt CQR}-$D_{{\rm train}}\cup D_{\cal}$) improves the length slightly on some datasets but not all. In contrast, {\tt CQR-PinballFt}~is unable to improve either the pinball loss or the length over the base {\tt CQR} (observe that {\tt CQR-PinballFt}~uses the same set of training data as {\tt CQR}-$D_{{\rm train}}\cup D_{\cal}$ but uses a less expressive model in the finetuning stage). Overall, on almost all datasets (except for kin8nm), our {\tt CP-Gen-Recal}~still achieves the best length.
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for conformal quantile finetuning} on real-data regression tasks at level $1-\alpha=90\%$. Here we compare our {\tt CP-Gen-Recal}~method with {\bf tweaked versions of the baseline {\tt CQR}~method}. All results are averaged over the same 8 random seeds as in Table~\ref{table:conformal-finetuning}. All (average) coverages are within $(90\pm 0.5)\%$ and omitted here.}
\vspace{-1em}
\centerline{
\label{table:conformal-finetuning-tweak}
\begin{tabular}{lcccccccc}
\toprule
& \multicolumn{2}{c}{{\tt CQR}-$D_{{\rm train}}$} & \multicolumn{2}{c}{{\tt CQR}-$D_{{\rm train}}\cup D_{\cal}$} & \multicolumn{2}{c}{{\tt CQR-PinballFt}} & \multicolumn{2}{c}{{\tt QR} + {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7} \cmidrule(r){8-9}
Dataset & Length & $L_{\rm pinball}^{\rm test}$ & Length & $L_{\rm pinball}^{\rm test}$ & Length & $L_{\rm pinball}^{\rm test}$ & Length & $L_{\rm pinball}^{\rm test}$ \\
\midrule
MEPS\_19 & $1.167$ & $0.112$ & $1.171$ & $0.111$ & $1.192$ & $0.112$ & $\mathbf{0.890}$ & $0.131$ \\
MEPS\_20 & $1.165$ & $0.117$ & $1.179$ & $0.114$ & $1.190$ & $0.117$ & $\mathbf{0.830}$ & $0.141$ \\
MEPS\_21 & $1.145$ & $0.107$ & $1.150$ & $0.106$ & $1.249$ & $0.107$ & $\mathbf{0.962}$ & $0.129$ \\
Facebook\_1 & $0.555$ & $0.052$ & $0.549$ & $0.051$ & $0.578$ & $0.052$ & $\mathbf{0.384}$ & $0.090$ \\
Facebook\_2 & $0.491$ & $0.044$ & $0.472$ & $0.042$ & $0.523$ & $0.044$ & $\mathbf{0.364}$ & $0.092$ \\
kin8nm & $1.214$ & $0.076$ & $\mathbf{1.165}$ & $0.072$ & $1.232$ & $0.075$ & $1.173$ & $0.078$ \\
naval & $3.095$ & $0.164$ & $3.089$ & $0.164$ & $3.096$ & $0.164$ & $\mathbf{3.077}$ & $0.166$ \\
bio & $2.271$ & $0.130$ & $2.240$ & $0.128$ & $2.271$ & $0.130$ & $\mathbf{2.164}$ & $0.148$ \\
blog\_data & $0.605$ & $0.058$ & $0.551$ & $0.056$ & $0.660$ & $0.058$ & $\mathbf{0.496}$ & $0.107$ \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\subsubsection{Optimization details}
\label{appendix:opt-detail-tweaks}
{\tt CQR}-$D_{{\rm train}}\cup D_{\cal}$: Our original {\tt CQR}~baseline used $D_{\cal}$ for monitoring the validation loss and automatically determining early stopping (cf. Section~\ref{appendix:cqf-details}), and $D_{{\rm recal}}$ for conformalization. To optimize on $D_{{\rm train}}\cup D_{\cal}$, we do not use automatic learning rate decay and early stopping, but instead manually pick the number of epochs and a corresponding learning rate decay schedule close to average runs of the {\tt CQR}~method on each dataset. This choice ensures that our new baseline still uses the exact same amount of data for conformalizing ($D_{{\rm recal}}$) and testing ($D_{{\rm test}}$), and has an optimization setup as close as possible to the original {\tt CQR}~baseline.
More concretely, we optimize for 800 epochs for \{MEPS\_19, MEPS\_20, MEPS\_21\}, 1500 epochs for \{Facebook\_1, Facebook\_2, blog\_data\}, 6000 epochs for kin8nm, 350 epochs for naval, and 2500 epochs for bio. For all datasets, the learning rate decays by 10x twice, at 90\% and 95\% of the total epochs.
{\tt CQR-PinballFt}: We finetuned on $D_{\cal}$ with 1000 epochs and batch size 256. The learning rate was chosen within $\{10^{-2}, 10^{-3}\}$; the results are similar for both choices (the length difference is within $0.010$, with no overall winner). We present the results with learning rate $10^{-3}$.
\subsection{Additional {\tt Max-score-Conformal}~baseline for multi-output regression}
\label{appendix:maxscore}
We test one additional baseline method for the multi-output regression experiment in Section~\ref{section:multi-output}:
\begin{itemize}[leftmargin=2em]
\item {\tt Max-score-Conformal}: Here we consider the \emph{hypercube}-shaped predictor
\begin{align}
\label{equation:maxscore}
C_t(x) = \prod_{i=1}^{d_{\rm out}} [\what{f}_i(x) - t, \what{f}_i(x) + t],
\end{align}
and use conformal prediction on $D_{\cal}\cup D_{{\rm recal}}$ to compute a conformalized $\what{t}$ and the final prediction set $C_{\what{t}}(x)$. In other words, we perform standard conformal prediction with score function $\|y - \what{f}(x)\|_\infty$.
\end{itemize}
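Since the prediction set covers $y$ exactly when $\|y - \what{f}(x)\|_\infty \le t$, conformalizing the hypercube predictor reduces to a one-dimensional quantile computation over inf-norm scores. The following is a minimal numpy sketch on synthetic data (all names and data here are illustrative, not from our experiments):

```python
import numpy as np

def max_score_conformal(f_cal, y_cal, alpha=0.1):
    """Conformalize the hypercube predictor C_t(x) = prod_i [f_i(x)-t, f_i(x)+t]
    via the inf-norm scores s_i = ||y_i - f(x_i)||_inf: t is a conformal quantile."""
    scores = np.abs(y_cal - f_cal).max(axis=1)
    k = int(np.ceil((1 - alpha) * (len(scores) + 1)))   # conformal rank
    return np.sort(scores)[k - 1]

# toy usage on synthetic calibration/test data
rng = np.random.default_rng(0)
f_cal = rng.normal(size=(500, 3))                       # base predictions f(x_i)
y_cal = f_cal + rng.normal(scale=0.5, size=(500, 3))    # labels near the predictions
t_hat = max_score_conformal(f_cal, y_cal, alpha=0.1)

f_test = rng.normal(size=(2000, 3))
y_test = f_test + rng.normal(scale=0.5, size=(2000, 3))
covered = float((np.abs(y_test - f_test).max(axis=1) <= t_hat).mean())  # close to 0.9
```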
We remark that both the {\tt Coord-wise-Recal}~and the {\tt Max-score-Conformal}~baseline methods are special instances of {\tt CP-Gen-Recal} with some fixed $\theta_0$: In our parametrization, $u=(\theta, t)$, $\theta$ determines the \emph{shape} (i.e. the relative ratios between the $u_i$'s) whereas $t$ determines the \emph{size}. Therefore, {\tt Max-score-Conformal}~can be thought of as choosing $\theta_0$ to be the all-ones ratio (i.e. hypercube-shaped), whereas {\tt Coord-wise-Recal}~can be thought of as choosing $\theta_0$ from coordinate-wise one-dimensional conformal prediction.
\paragraph{Result}
Table~\ref{table:multi-output-additional} reports the result for {\tt Max-score-Conformal}. Compared with the existing baseline {\tt Coord-wise-Recal}, {\tt Max-score-Conformal}~achieves better volume on the Cartpole dataset but worse volume on almost all other datasets (except for Swimmer, where their volumes are similar). Further, note that {\tt Max-score-Conformal}~achieves significantly higher volumes on certain datasets (Ant, Humanoid). Our inspection shows that this is due to the fact that these two datasets contain a few hard-to-predict state dimensions (and many other easy-to-predict state dimensions). {\tt Coord-wise-Recal}, which builds on {\tt Coord-wise}, adapts to this structure and uses a large length on the hard dimensions only, whereas {\tt Max-score-Conformal}~pays the max conformal score on all dimensions and thus yields an unnecessarily high volume.
We remark that our {\tt CP-Gen-Recal}~still performs significantly better than both baselines.
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for multi-output regression} on next-state prediction tasks, at level $1-\alpha=90\%$. The setting is the same as in Table~\ref{table:multi-output} (with the same 8 random seeds), and here we compare additionally with the {\tt Max-score-Conformal}~baseline method described in~\eqref{equation:maxscore}.}
\vspace{-1em}
\label{table:multi-output-additional}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{ {\tt Coord-wise-Recal} } & \multicolumn{2}{c}{ {\tt Max-score-Conformal} }& \multicolumn{2}{c}{{ {\tt CP-Gen-Recal}~(ours)}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $90.17$ & $5.10\times 10^{-6}$ & $90.10$ & $3.07\times 10^{-6}$ & $90.12$ & $\mathbf{2.30\times 10^{-6}}$ \\
Half-Cheetah & $90.06$ & $1.23\times 10^{-6}$ & $89.96$ & $1.72\times 10^{-4}$ & $90.02$ & $\mathbf{9.07\times 10^{-7}}$ \\
Ant & $89.99$ & $1.70\times 10^{-4}$ & $90.06$ & $3.46\times 10^{2}$ & $90.02$ & $\mathbf{8.25\times 10^{-5}}$ \\
Walker & $90.01$ & $7.33\times 10^{-7}$ & $90.02$ & $1.03\times 10^{-2}$ & $89.94$ & $\mathbf{3.47\times 10^{-7}}$ \\
Swimmer & $89.90$ & $2.22\times 10^{-6}$ & $90.08$ & $2.21\times 10^{-6}$ & $90.13$ & $\mathbf{1.46\times 10^{-7}}$ \\
Hopper & $90.02$ & $1.01\times 10^{-9}$ & $89.96$ & $1.29\times 10^{-8}$ & $89.92$ & $\mathbf{8.25\times 10^{-10}}$ \\
Humanoid & $89.95$ & $8.53\times 10^{-8}$ & $89.98$ & $2.48\times 10^7$ & $89.94$ & $\mathbf{4.95\times 10^{-8}}$ \\
\midrule
Nominal ($1-\alpha$) & $90.00$ & - & $90.00$ & - & $90.00$ & - \\
\bottomrule
\end{tabular}
}
\end{table}
\subsection{100 random seeds and standard deviation}
Here we repeat the experiments in Section~\ref{section:cqf} \&~\ref{section:multi-output} with 100 random seeds and the exact same setups. We report the mean and standard deviations in Table~\ref{table:conformal-finetuning-100} for the prediction intervals experiment (Section~\ref{section:cqf}), Table~\ref{table:multi-output-100-mean} for the mean and Table~\ref{table:multi-output-100-std} for the standard deviation for the multi-output regression experiment (Section~\ref{section:multi-output}).
\begin{table}[h]
{
\centering
\small
\caption{\small {\bf Results for conformal quantile finetuning} on real-data regression tasks at level $1-\alpha=90\%$. For each method we report the (test) coverage, length, and pinball loss of the corresponding base quantile predictor. All results are averaged over 100 random seeds.}
\vspace{-1em}
\centerline{
\label{table:conformal-finetuning-100}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{3}{c}{{\tt CQR}} & \multicolumn{3}{c}{{\tt QR} + {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
Dataset & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ \\
\midrule
MEPS\_19 & $89.92$ $\pm 1.16$ & $1.147$ $\pm 0.057$ & $0.107$ $\pm 0.013$ & $89.95$ $\pm 1.2$ & $\mathbf{0.895 \pm 0.126 }$ & $0.130$ $\pm 0.015$ \\
MEPS\_20 & $89.90$ $\pm 0.98$ & $1.164$ $\pm 0.054$ & $0.109$ $\pm 0.012$ & $89.97$ $\pm 1.1$ & $\mathbf{0.872 \pm 0.113 }$ & $0.131$ $\pm 0.015$ \\
MEPS\_21 & $90.00$ $\pm 0.94$ & $1.162$ $\pm 0.056$ & $0.104$ $\pm 0.011$ & $90.08$ $\pm 1.1$ & $\mathbf{0.910 \pm 0.133 }$ & $0.126$ $\pm 0.013$ \\
Facebook\_1 & $90.12$ $\pm 0.71$ & $0.540$ $\pm 0.040$ & $0.050$ $\pm 0.007$ & $90.07$ $\pm 0.7$ & $\mathbf{0.382 \pm 0.051 }$ & $0.089$ $\pm 0.009$ \\
Facebook\_2 & $90.04$ $\pm 0.50$ & $0.497$ $\pm 0.028$ & $0.044$ $\pm 0.005$ & $90.06$ $\pm 0.5$ & $\mathbf{0.389 \pm 0.075 }$ & $0.091$ $\pm 0.007$ \\
kin8nm & $90.34$ $\pm 1.37$ & $1.238$ $\pm 0.067$ & $0.076$ $\pm 0.004$ & $90.31$ $\pm 1.3$ & $\mathbf{1.216 \pm 0.068 }$ & $0.080$ $\pm 0.004$ \\
naval & $89.99$ $\pm 1.13$ & $3.101$ $\pm 0.015$ & $0.164$ $\pm 0.001$ & $89.95$ $\pm 1.1$ & $\mathbf{3.095 \pm 0.028 }$ & $0.167$ $\pm 0.001$ \\
bio & $90.00$ $\pm 0.69$ & $2.261$ $\pm 0.033$ & $0.130$ $\pm 0.002$ & $89.97$ $\pm 0.5$ & $\mathbf{2.154 \pm 0.031 }$ & $0.148$ $\pm 0.003$ \\
blog\_data & $89.99$ $\pm 0.60$ & $0.593$ $\pm 0.033$ & $0.058$ $\pm 0.005$ & $89.95$ $\pm 0.7$ & $\mathbf{0.460 \pm 0.075 }$ & $0.104$ $\pm 0.006$ \\
\midrule
Nominal ($1-\alpha$) & $90.00$ & - & - & $90.00$ & - & - \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
}
\end{table}
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for multi-output regression (mean)} on next-state prediction tasks, at level $1-\alpha=90\%$. For each method we report the (test) coverage and volume of its learned box-shaped prediction set. All results are averaged over 100 random seeds.}
\vspace{-1em}
\label{table:multi-output-100-mean}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{ {\tt Coord-wise} } & \multicolumn{2}{c}{ {\tt Coord-wise-Recal} } & \multicolumn{2}{c}{{ {\tt CP-Gen-Recal}~(ours)}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $94.30$ & $1.14\times 10^{-5}$ & $90.02$ & $4.80\times 10^{-6}$ & $90.03$ & $\mathbf{2.05\times 10^{-6}}$ \\
Half-Cheetah & $93.84$ & $1.05\times 10^{-5}$ & $90.00$ & $1.22\times 10^{-6}$ & $90.02$ & $\mathbf{9.01\times 10^{-7}}$ \\
Ant & $93.53$ & $3.26\times 10^{-3}$ & $89.94$ & $1.75\times 10^{-4}$ & $89.98$ & $\mathbf{9.22\times 10^{-5}}$ \\
Walker & $94.52$ & $2.82\times 10^{-5}$ & $89.99$ & $7.48\times 10^{-7}$ & $90.00$ & $\mathbf{3.74\times 10^{-7}}$ \\
Swimmer & $95.65$ & $2.82\times 10^{-5}$ & $90.02$ & $2.18\times 10^{-6}$ & $90.01$ & $\mathbf{1.24\times 10^{-7}}$ \\
Hopper & $92.95$ & $2.53\times 10^{-9}$ & $89.98$ & $8.86\times 10^{-10}$ & $89.99$ & $\mathbf{7.04\times 10^{-10}}$ \\
Humanoid & $94.87$ & $7.43\times 10^{-4}$ & $90.06$ & $1.69\times 10^{-7}$ & $90.03$ & $\mathbf{8.95\times 10^{-8}}$ \\
\midrule
Nominal ($1-\alpha$) & $90.00$ & - & $90.00$ & - & $90.00$ & - \\
\bottomrule
\end{tabular}
}
\end{table}
\begin{table}[h]
\centering
\small
\caption{\small {\bf Results for multi-output regression (standard deviation)} on next-state prediction tasks, at level $1-\alpha=90\%$. For each method we report the (test) coverage and volume of its learned box-shaped prediction set. All standard deviations are computed over 100 random seeds.}
\vspace{-1em}
\label{table:multi-output-100-std}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{ {\tt Coord-wise} } & \multicolumn{2}{c}{ {\tt Coord-wise-Recal} } & \multicolumn{2}{c}{{ {\tt CP-Gen-Recal}~(ours)}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $0.35$ & $3.57\times 10^{-6}$ & $0.31$ & $1.33\times 10^{-6}$ & $0.27$ & $4.46\times 10^{-7}$ \\
Half-Cheetah & $0.25$ & $2.46\times 10^{-6}$ & $0.32$ & $2.75\times 10^{-7}$ & $0.31$ & $2.10\times 10^{-7}$ \\
Ant & $0.27$ & $1.50\times 10^{-3}$ & $0.29$ & $8.06\times 10^{-5}$ & $0.29$ & $4.15\times 10^{-5}$ \\
Walker & $0.24$ & $8.81\times 10^{-6}$ & $0.30$ & $2.18\times 10^{-7}$ & $0.29$ & $1.12\times 10^{-7}$ \\
Swimmer & $0.24$ & $6.47\times 10^{-6}$ & $0.29$ & $5.83\times 10^{-7}$ & $0.33$ & $3.36\times 10^{-8}$ \\
Hopper & $0.29$ & $6.83\times 10^{-10}$ & $0.29$ & $2.26\times 10^{-10}$ & $0.33$ & $1.97\times 10^{-10}$ \\
Humanoid & $0.23$ & $1.22\times 10^{-3}$ & $0.30$ & $2.71\times 10^{-7}$ & $0.29$ & $1.47\times 10^{-7}$ \\
\bottomrule
\end{tabular}
}
\end{table}
\section{Conclusion}
This paper proposes Conformal Prediction with General Function Class, a conformal prediction algorithm that optimizes the efficiency metric subject to valid coverage over a general function class of prediction sets. We provide theoretical guarantees for its coverage and efficiency in certain situations, and develop a gradient-based practical implementation which performs well empirically on several large-scale tasks. We believe our work opens up many directions for future work, such as stronger theoretical guarantees via more structured function classes, further improving the gradient-based approximate implementation, or experiments on other uncertainty quantification tasks.
\section{Results for label prediction sets on ImageNet}
\label{appendix:imagenet}
Here we present the ImageNet label prediction set experiment abbreviated in Section~\ref{section:experiment}.
\paragraph{Dataset and model}
We take $K=9$ large-scale neural networks pretrained on the ImageNet training set~\citep{deng2009imagenet}. Our models are \{ResNeXt101, ResNet152, ResNet101, DenseNet161, ResNet18, ResNet50, VGG16, Inception, ShuffleNet\}, similar to~\citep{angelopoulos2020uncertainty}.
We then consider the task of constructing label prediction sets with valid coverage and small cardinality. We train and test our conformal procedures on the following two datasets, neither of which was seen by the pretrained models:
\begin{enumerate}[label=(\arabic*)]
\item ImageNet-Val: The original validation set of ImageNet with $50000$ images. We randomly split (varying with seed) this into $|D_{\cal}|=10000$, $|D_{{\rm recal}}|=10000$, and $|D_{{\rm test}}|=30000$.
\item ImageNet-V2~\citep{recht2019imagenet}: A new validation set following roughly the same collection routine as the original ImageNet images, but believed to exhibit a mild distribution shift and thus to be slightly harder for classifiers pretrained on ImageNet. This dataset contains $10000$ images, which we randomly split (varying with seed) into $|D_{\cal}|=4000$, $|D_{{\rm recal}}|=1000$, and $|D_{{\rm test}}|=5000$.
\end{enumerate}
\paragraph{Methods for learning prediction sets}
Our constructions of the prediction sets are based on the Least Ambiguous Set-Valued Classifier (LAC) method of~\citep{sadinle2019least}, which turns any base predictor $p$, where $p(\cdot|x)$ denotes the predicted distribution over the $L=1000$ labels, into a prediction set $C_t(x)$ via
\begin{align*}
C_t(x) = \set{y\in[L]: p(y|x) > t},
\end{align*}
where $t$ is found by conformal prediction.
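Concretely, since $y \in C_t(x)$ iff $p(y|x) > t$, conformalizing the LAC threshold is a quantile computation over the scores $p(y_i|x_i)$. Below is a minimal numpy sketch on a synthetic, perfectly calibrated classifier (the data and class count are illustrative, not our ImageNet setup):

```python
import numpy as np

def conformalize_lac(probs_cal, y_cal, alpha=0.1):
    """Find the LAC threshold t: C_t(x) = {y : p(y|x) > t} covers y_i iff
    p(y_i|x_i) > t, so the largest valid t is a lower quantile of these scores."""
    n = len(y_cal)
    scores = np.sort(probs_cal[np.arange(n), y_cal])   # p(y_i | x_i), ascending
    m = n - int(np.ceil((1 - alpha) * (n + 1)))        # how many points we may miss
    return scores[m - 1] if m >= 1 else 0.0

# toy usage: a synthetic calibrated classifier over L = 10 labels
rng = np.random.default_rng(0)
L, n_cal, n_test = 10, 500, 2000
probs = rng.dirichlet(np.ones(L), size=n_cal + n_test)
labels = np.array([rng.choice(L, p=p) for p in probs])  # labels drawn from p(.|x)
t_hat = conformalize_lac(probs[:n_cal], labels[:n_cal], alpha=0.1)
cov_test = float((probs[np.arange(n_cal + n_test), labels] > t_hat)[n_cal:].mean())
```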
We consider learning a valid prediction set with smaller set size by finding an optimized \emph{ensemble weight} of the $K$ base predictors using our {\tt CP-Gen-Recal}~algorithm. This means we learn prediction sets of the form
\begin{align*}
C_{\theta, t}(x) = \set{y\in[L]: p_\theta(y|x) \defeq \sum_{k=1}^K \theta_k p_k(y|x) > t},
\end{align*}
where $\set{p_k}_{k\in[K]}$ are the base predictors.
Our {\tt CP-Gen-Recal}~algorithm (and its practical implementation~\eqref{problem:practical}) would solve a primal-dual optimization problem with the efficiency loss and hinge approximate coverage constraint to optimize $(\theta, t)$. However, here the efficiency loss we care about (the cardinality) is non-differentiable. We make a further approximation by considering the $L_q^q$ norm with $q=0.5$ as the surrogate efficiency loss:
\begin{align*}
\ell_{{\rm eff}}(\theta, t; (x_i, y_i)) \defeq \sum_{y'=1}^L \brac{ p_{\theta}(y'|x_i) - t }_+^q,
\end{align*}
with the intuition that the $q\to 0$ limit is exactly the cardinality of $C_{\theta, t}(x_i)$. Our final optimization problem is then
\begin{align*}
\min_{\theta\in\Delta_{K}, t>0} \max_{\lambda>0}\frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}}\sum_{y'=1}^L \brac{ p_{\theta}(y'|x_i) - t }_+^q + \lambda \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \ell_{\hinge}(p_\theta(y_i | x_i) - t).
\end{align*}
We solve this by SGD on $(\theta, t)$ and (ascent on) $\lambda$, with a softmax parameterization for $\theta$ (since the ensemble weight $\theta\in\Delta_K$ is a probability distribution) and a log parametrization for $t>0$. The learning rate is $10^{-2}$ for $(\theta, t)$ and $10^{-4}$ for $\lambda$. We perform this optimization for 500 epochs over $D_{\cal}$ with batch size 256 for ImageNet-Val and 64 for ImageNet-V2.
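To illustrate why the $L_q^q$ surrogate approximates the set cardinality as $q\to 0$, consider the following toy numpy computation (the probability vector and threshold are made up for illustration):

```python
import numpy as np

def surrogate_size(probs, t, q):
    """L_q^q surrogate for |{y : p(y|x) > t}|: sum_y [p(y|x) - t]_+^q."""
    return float(np.sum(np.maximum(probs - t, 0.0) ** q))

probs = np.array([0.70, 0.15, 0.10, 0.04, 0.01])   # one softmax row (made up)
t = 0.05
exact = int(np.sum(probs > t))                      # true prediction-set size: 3
approx = {q: surrogate_size(probs, t, q) for q in (1.0, 0.5, 0.1, 0.01)}
# as q -> 0, each nonzero term [p - t]_+^q tends to 1, recovering the cardinality
```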
After we obtain the iterates $\set{\what{\theta}_j}$ (where $j$ denotes the epoch count), we perform a further iterate selection: we first re-compute $\what{t}(\what{\theta}_j)$ by conformalizing on $D_{\cal}$, and then choose the iterate $j$ with the best average set size, also on $D_{\cal}$, before feeding it into the reconformalization step with $D_{{\rm recal}}$. As $D_{{\rm recal}}$ is only used in the reconformalization step, this method still guarantees valid coverage like the original Algorithm~\ref{algorithm:multi-conformal-reconformalize}.
We compare our above algorithm against two baselines: conformalizing each individual model and reporting the best one, or conformalizing the uniform ensemble (which uses weights $\theta_{\rm unif}=\frac{1}{K}\ones_K$). For these two baselines, for fairness of comparison, we allow them to use the whole $D_{\cal}\cup D_{{\rm recal}}$ as the calibration set, as their construction (apart from pre-training) is not data-dependent.
\begin{table}[t]
\centering
\small
\caption{{\small \textbf{Results for ImageNet Prediction Sets with Conformal Ensembling.}} For each method we report the (test) coverage and set size. Each entry reports the (mean, std) over 8 random seeds.}
\vspace{-1em}
\label{table:imagenet}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{Best conformalized single model} & \multicolumn{2}{c}{Conformalized uniform ensemble} & \multicolumn{2}{c}{Ensemble via {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Size & Coverage(\%) & Size & Coverage(\%) & Size \\
\midrule
ImageNet-Val & $90.10 \pm 0.29$ & $1.70 \pm 0.03$ & $90.13 \pm 0.21$ & $1.62 \pm 0.02$ & $90.11 \pm 0.33$ & $\mathbf{1.51 \pm 0.03}$ \\
ImageNetV2 & $90.01 \pm 0.71$ & $5.00 \pm 0.24$ & $89.93 \pm 0.71$ & $4.66 \pm 0.22$ & $90.18 \pm 0.85$ & $\mathbf{4.39 \pm 0.44}$ \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\paragraph{Results}
Table~\ref{table:imagenet} shows that our algorithm is able to learn label prediction sets with valid coverage and improved set sizes over the baselines. This demonstrates the advantage of our method even in applications where the efficiency loss (here set size) is non-differentiable and needs to be further approximated to allow gradient-based algorithms.
\section{Conformal prediction with general function classes}
\label{section:theory}
\subsection{Algorithm}
\label{section:multi-conformal-algorithm}
Our algorithm, Conformal Prediction with General Function Classes ({\tt CP-Gen}; full description in Algorithm~\ref{algorithm:multi-conformal}), is an extension of the constrained ERM problem~\eqref{problem:single-conformal} into the case of general function classes with multiple learnable parameters. {\tt CP-Gen}~takes in a function class of prediction sets
\begin{align}
\label{equation:function-class}
\mc{C} \defeq \set{C_{\theta, t}(x): \theta\in \Theta, t \in\mc{T}\subset \R},
\end{align}
where (as mentioned) we assume that $\set{C_{\theta, t}}_{t\in\mc{T}}$ is a nested set for each $\theta\in\Theta$. The parameter set $\Theta$ as well as the form of $C_{\theta, t}$ in~\eqref{equation:function-class} can be arbitrary, depending on applications and the available base predictors at hand. Given $\mc{C}$, our algorithm then solves the constrained ERM problem~\eqref{problem:multi-conformal-alg} of finding the smallest interval among $\mc{C}$ subject to valid coverage on dataset $D_{\cal}$.
Compared with vanilla conformal prediction, Algorithm~\ref{algorithm:multi-conformal} allows more general tasks with an arbitrary function class and efficiency loss; for example, this encompasses several recent algorithms such as finite hyperparameter selection and linear aggregation~\citep{yang2021finite,chen2021learning}. We remark that~\eqref{problem:multi-conformal-alg} includes an additional relaxation parameter $\eps_0\ge 0$ for the coverage constraint. This is for analysis (for Proposition~\ref{proposition:main-gen}(b) \&~\ref{proposition:reconformalize}(b)) only; our implementation uses $\eps_0=0$.
\begin{algorithm}[t]
\caption{Conformal Prediction with General Function Class ({\tt CP-Gen})}
\label{algorithm:multi-conformal}
\begin{algorithmic}[1]
\REQUIRE Class of prediction sets $\mc{C}=\set{C_{\theta, t}}_{\theta\in\Theta, t\in\mc{T}}$; target miscoverage level $\alpha\in(0,1)$; $\eps_0\ge 0$.\\
\hspace{1.32em} Efficiency loss $\ell_{{\rm eff}}$; Calibration dataset $D_{\cal}$ with size $n_{\cal}$.
\STATE Solve the following constrained ERM problem on dataset $D_{\cal}$ (with relaxation parameter $\eps_0$):
\begin{equation}
\label{problem:multi-conformal-alg}
\begin{aligned}
(\what{\theta}, \what{t}) \setto
\argmin_{\theta\in\Theta, t\in\mc{T}} & ~~ \what{L}_{{\rm eff}}(C_{\theta, t}) \defeq \frac{1}{n_{\cal}} \sum_{i \in D_{\cal}} \ell_{{\rm eff}}(C_{\theta, t}(x_i), y_i) \\
\subjectto & ~~ \what{L}_{{\rm coverage}}(C_{\theta, t}) \defeq \frac{1}{n_{\cal}} \sum_{i\in D_{\cal}} \indic{y_i \notin C_{\theta, t}(x_i)} \le \alpha + \eps_0.
\end{aligned}
\end{equation}
\ENSURE Prediction set $C_{\what{\theta}, \what{t}}$.
\end{algorithmic}
\end{algorithm}
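For intuition, when $\Theta$ is a small finite set the constrained ERM~\eqref{problem:multi-conformal-alg} can be solved exactly by enumeration: nestedness makes the smallest feasible $t$ for each $\theta$ a quantile of per-example scores. The following is a minimal numpy sketch for symmetric intervals around $K$ candidate point predictors, with $\eps_0=0$ (a toy instance; the data and predictors are synthetic):

```python
import numpy as np

def cp_gen_finite(preds, y_cal, alpha=0.1):
    """CP-Gen over a finite class: Theta indexes K point predictors, with nested sets
    C_{k,t}(x) = [f_k(x) - t, f_k(x) + t]. For each k, find the smallest t meeting the
    empirical coverage constraint, then pick k minimizing the length 2t."""
    n = len(y_cal)
    need = int(np.ceil((1 - alpha) * n))            # points that must be covered
    best_k, best_t = None, np.inf
    for k, f in enumerate(preds):                   # preds: K arrays of predictions
        t = np.sort(np.abs(f - y_cal))[need - 1]    # smallest feasible t for this k
        if t < best_t:
            best_k, best_t = k, t
    return best_k, best_t

# toy usage: predictor 0 is accurate, predictor 1 is noisy
rng = np.random.default_rng(0)
y = rng.normal(size=400)
preds = [y + rng.normal(scale=0.1, size=400), y + rng.normal(scale=1.0, size=400)]
k_hat, t_hat = cp_gen_finite(preds, y, alpha=0.1)   # should select predictor 0
```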
\subsection{Theory}
\label{section:multi-conformal-theory}
An important theoretical question about {\tt CP-Gen}~is whether it achieves coverage and efficiency guarantees on the population (test data). This section showcases that, by standard generalization arguments, {\tt CP-Gen}~achieves approximate validity and near-optimal efficiency whenever the function class is low-capacity in a certain sense. We remark that our experiments use the modified algorithm {\tt CP-Gen-Recal}~(Section~\ref{section:multi-conformal-reconformalize}) which involves a reconformalization step. Here we focus on {\tt CP-Gen}~as we believe its theory could be more informative.
Let $L_{\set{{\rm eff}, {\rm coverage}}}(C_{\theta, t})\defeq \E[\ell_{\set{{\rm eff}, {\rm coverage}}}(C_{\theta, t}; (X, Y))]$ denote the population coverage and efficiency losses for any $(\theta, t)$. We define the following uniform concentration quantities:
\begin{align}
& \eps_{{\rm eff}} \defeq \sup\nolimits_{\theta\in\Theta, t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) }, \label{equation:eps-eff} \\
& \eps_{{\rm coverage}} \defeq \sup\nolimits_{\theta\in\Theta, t\in\mc{T}} \abs{ \what{L}_{{\rm coverage}}(C_{\theta, t}) - L_{{\rm coverage}}(C_{\theta, t}) }. \label{equation:eps-coverage}
\end{align}
The following proposition connects the generalization of {\tt CP-Gen}~to the above uniform concentration quantities by standard arguments (see Appendix~\ref{appendix:proof-main-gen} for the proof; we remark that the proof relies on $D_{\cal}$ being i.i.d., which is slightly stronger than the exchangeability assumption commonly made in the conformal prediction literature).
\begin{proposition}[Generalization of {\tt CP-Gen}]
\label{proposition:main-gen}
The prediction set $C_{\what{\theta}, \what{t}}$ learned by Algorithm~\ref{algorithm:multi-conformal} satisfies
\begin{enumerate}[wide,label=(\alph*),topsep=0pt]
\item (Approximately valid population coverage) We have
\begin{align*}
L_{{\rm coverage}}(C_{\what{\theta}, \what{t}}) \le \alpha + \eps_0 + \eps_{{\rm coverage}},
\end{align*}
i.e. the population coverage of $C_{\what{\theta}, \what{t}}$ is at least $1-\alpha-(\eps_0+\eps_{{\rm coverage}})$.
\item (Near-optimal efficiency) Suppose $\eps_0\ge \eps_{{\rm coverage}}$, then we further have
\begin{align*}
L_{{\rm eff}}(C_{\what{\theta}, \what{t}}) \le \mathop{\inf_{(\theta, t)\in\Theta\times \mc{T}}}_{L_{{\rm coverage}}(C_{\theta, t})\le \alpha} L_{{\rm eff}}(C_{\theta, t}) + 2\eps_{{\rm eff}},
\end{align*}
i.e. $C_{\what{\theta}, \what{t}}$ achieves $2\eps_{{\rm eff}}$-near-optimal efficiency against any prediction set within $\mc{C}$ with at least $(1-\alpha)$ population coverage.
\end{enumerate}
\end{proposition}
\paragraph{Examples of good generalization}
Proposition~\ref{proposition:main-gen} shows that {\tt CP-Gen}~achieves approximate coverage and near-optimal efficiency if the concentration terms $\eps_{\rm eff}$ and $\eps_{\rm coverage}$ are small. In Appendix~\ref{appendix:examples}, we bound these on two example function classes: \emph{Finite Class} (Proposition~\ref{proposition:finite-class}) and \emph{VC/Rademacher Class} (Proposition~\ref{proposition:vc-class}). Both classes admit bounds of the form $\{\eps_{\rm eff},\eps_{\rm coverage}\}\le \sqrt{{\rm Comp}(\mc{C})/n_{\cal}}$ with high probability via standard concentration arguments, where ${\rm Comp}(\mc{C})$ is a certain complexity measure of $\mc{C}$. Combined with Proposition~\ref{proposition:main-gen}, our {\tt CP-Gen}~algorithm with these classes achieves a $1-\alpha-\sqrt{{\rm Comp}(\mc{C})/n_{\cal}}$ approximate coverage guarantee and a $\sqrt{{\rm Comp}(\mc{C})/n_{\cal}}$ near-optimal efficiency guarantee. In particular, our Proposition~\ref{proposition:finite-class} recovers the coverage guarantee for the finite-class selection algorithm of~\citep[Theorem 1]{yang2021finite}, though our efficiency guarantee is more general.
We remark that both examples above contain important applications. The finite class covers, e.g., optimizing a $K$-dimensional hyperparameter via grid search with $(1/\delta)^K$ candidate confidence sets, so that ${\rm Comp}(\mc{C}) = O(\log ((1/\delta)^K)) = O(K\log(1/\delta))$. The VC/Rademacher class contains the important special case of linear classes with $K$ base predictors (we defer the formal statement and proof to Appendix~\ref{appendix:linear-class}). Also, these examples are not necessarily exhaustive. Our take-away is rather that we may expect {\tt CP-Gen}~to generalize well (and thus achieve good coverage and efficiency) more broadly in practice, for instance whenever it learns $K\ll n_{\cal}$ parameters.
\begin{algorithm}[t]
\caption{Conformal Prediction with General Fn. Class and Recalibration ({\tt CP-Gen-Recal})}
\label{algorithm:multi-conformal-reconformalize}
\begin{algorithmic}[1]
\REQUIRE Class of prediction sets $\mc{C}=\set{C_{\theta, t}}_{\theta\in\Theta, t\in\mc{T}}$; target miscoverage level $\alpha\in(0,1)$; $\eps_0\ge 0$. \\
\hspace{1.32em} Efficiency loss $\ell_{{\rm eff}}$;
Calibration datasets $D_{\cal}$, $D_{{\rm recal}}$ with size $n_{\cal}$, $n_{{\rm recal}}$.
\STATE Run Algorithm~\ref{algorithm:multi-conformal} on dataset $D_{\cal}$ (with relaxation parameter $\eps_0$) to obtain $(\what{\theta}, \what{t})$. \label{line:1}
\STATE Keep $\what{\theta}$, and reconformalize $t\in\mc{T}$ on the recalibration dataset $D_{{\rm recal}}$:
\begin{align*}
\what{t}_{{\rm recal}} \setto \inf\set{t\in\mc{T}: y_i \in C_{\what{\theta}, t}(x_i)~\textrm{for at least}~\ceil{(1-\alpha)(n_{{\rm recal}}+1)}~\textrm{examples}~(x_i, y_i)\in D_{{\rm recal}}}.
\end{align*}
\ENSURE Prediction set $C_{\what{\theta}, \what{t}_{{\rm recal}}}$.
\end{algorithmic}
\end{algorithm}
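Step 2 of Algorithm~\ref{algorithm:multi-conformal-reconformalize} is a standard conformal quantile computation once the per-example scores $s_i = \inf\{t\in\mc{T}: y_i \in C_{\what{\theta}, t}(x_i)\}$ are available (nestedness makes coverage monotone in $t$). A minimal numpy sketch with synthetic scores (illustrative only):

```python
import numpy as np

def reconformalize(scores_recal, alpha=0.1):
    """With theta_hat fixed, the smallest t covering at least ceil((1-alpha)(n+1))
    recalibration points is the k-th smallest score, k = ceil((1-alpha)(n+1))."""
    n = len(scores_recal)
    k = int(np.ceil((1 - alpha) * (n + 1)))      # required covered count
    return np.sort(scores_recal)[k - 1]

rng = np.random.default_rng(0)
s = rng.uniform(size=100)                        # synthetic scores on D_recal
t_recal = reconformalize(s, alpha=0.1)
n_covered = int(np.sum(s <= t_recal))            # at least ceil(0.9 * 101) = 91
```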
\subsection{Algorithm with valid coverage via reconformalization}
\label{section:multi-conformal-reconformalize}
Although {\tt CP-Gen}~enjoys theoretical bounds on the coverage and efficiency, a notable drawback is that it does not guarantee \emph{exactly valid} (at least) $1-\alpha$ coverage like usual conformal prediction, and its approximate coverage bound depends on the uniform concentration quantity $\eps_{{\rm coverage}}$ that is not computable from the observed data without structural assumptions on the function class $\mc{C}$. %
To remedy this, we incorporate a simple reconformalization technique on another recalibration dataset $D_{{\rm recal}}$ (e.g. a further data split), which guarantees valid finite-sample coverage by exchangeability. We call this algorithm {\tt CP-Gen-Recal}~and provide the full description in Algorithm~\ref{algorithm:multi-conformal-reconformalize}.
We remark that this reconformalization technique for obtaining guaranteed $1-\alpha$ coverage is widely used in the conformal prediction literature, e.g.~\citep{angelopoulos2020uncertainty}. However, to the best of our knowledge, there is no known analysis for our {\tt CP-Gen-Recal}~algorithm for general function classes, for which we provide a result below (formal statement and proof can be found in Proposition~\ref{proposition:reconformalize} \& Appendix~\ref{appendix:theory-recal}). The proof of the coverage bound is standard as in the conformal prediction literature, while the proof of the efficiency bound builds upon the result for {\tt CP-Gen}~(Proposition~\ref{proposition:main-gen}(b)) and handles additional concentration terms from the reconformalization step.
\begin{proposition}[Coverage and efficiency guarantee for {\tt CP-Gen-Recal}; Informal version]
The prediction set $C_{\what{\theta}, \what{t}_{{\rm recal}}}$ learned by Algorithm~\ref{algorithm:multi-conformal-reconformalize} achieves $(1-\alpha)$ finite-sample coverage: $\P_{D_{{\rm recal}}, (X, Y)}(Y \in C_{\what{\theta}, \what{t}_{{\rm recal}}}) \ge 1-\alpha$. Further, it achieves $O(\eps_{{\rm eff}} + \eps_{{\rm coverage}} + 1/\sqrt{n_{{\rm recal}}})$ near-optimal efficiency under additional regularity assumptions.
\end{proposition}
\section{Proof of Proposition~\ref{proposition:main-gen}}
\label{appendix:proof-main-gen}
\begin{enumerate}[wide,label=(\alph*)]
\item As $C_{\what{\theta}, \what{t}}$ solves problem~\eqref{problem:multi-conformal-alg}, it satisfies the constraint $\what{L}_{{\rm coverage}}(C_{\what{\theta}, \what{t}})\le \alpha + \eps_0$. Therefore,
\begin{align*}
& \quad L_{{\rm coverage}}(C_{\what{\theta}, \what{t}}) = \underbrace{\what{L}_{{\rm coverage}}(C_{\what{\theta}, \what{t}})}_{\le \alpha+\eps_0} + L_{{\rm coverage}}(C_{\what{\theta}, \what{t}}) - \what{L}_{{\rm coverage}}(C_{\what{\theta}, \what{t}}) \\
& \le \alpha + \eps_0 + \sup_{(\theta,t)\in\Theta\times\mc{T}} \abs{ L_{{\rm coverage}}(C_{\theta, t}) - \what{L}_{{\rm coverage}}(C_{\theta, t}) } \\
& = \alpha + \eps_0 + \eps_{{\rm coverage}}.
\end{align*}
\item Suppose $\eps_{{\rm coverage}}\le \eps_0$. Taking any $(\theta, t)\in\Theta\times\mc{T}$ such that $L_{{\rm coverage}}(C_{\theta, t})\le \alpha$, we have
\begin{align*}
\what{L}_{{\rm coverage}}(C_{\theta, t}) \le L_{{\rm coverage}}(C_{\theta, t}) + \eps_{{\rm coverage}} \le \alpha + \eps_{{\rm coverage}} \le \alpha + \eps_0.
\end{align*}
This shows that $(\theta, t)$ lies within the constraint set of problem~\eqref{problem:multi-conformal-alg}. Thus as $(\what{\theta}, \what{t})$ further minimizes the loss $\what{L}_{{\rm eff}}$ within the constraint set, we have $\what{L}_{{\rm eff}}(C_{\what{\theta}, \what{t}}) \le \what{L}_{{\rm eff}}(C_{\theta, t})$. This shows that
\begin{align*}
& \quad L_{{\rm eff}}(C_{\what{\theta}, \what{t}}) - L_{{\rm eff}}(C_{\theta, t}) \\
& \le \underbrace{\what{L}_{{\rm eff}}(C_{\what{\theta}, \what{t}}) - \what{L}_{{\rm eff}}(C_{\theta, t}) }_{\le 0} + 2\sup_{(\theta, t)\in\Theta\times\mc{T}} \abs{\what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t})} \\
& \le 2\eps_{{\rm eff}}.
\end{align*}
As the above holds simultaneously for all $(\theta, t)\in\Theta\times\mc{T}$ with at most $\alpha$ coverage loss, taking the sup over the left-hand side yields
\begin{align*}
L_{{\rm eff}}(C_{\what{\theta}, \what{t}}) - \mathop{\inf_{(\theta, t)\in\Theta\times \mc{T}}}_{L_{{\rm coverage}}(C_{\theta, t})\le \alpha} L_{{\rm eff}}(C_{\theta, t}) \le 2\eps_{{\rm eff}}.
\end{align*}
\end{enumerate}
\qed
\section{Examples of good generalization for {\tt CP-Gen}}
\label{appendix:examples}
We provide two concrete examples where the concentration terms $\eps_{{\rm eff}}$ and $\eps_{{\rm coverage}}$ are small with high probability, in which case Proposition~\ref{proposition:main-gen} guarantees that {\tt CP-Gen}~learns a prediction set with approximate validity and near-optimal efficiency.
\begin{assumption}[Bounded and Lipschitz efficiency loss]
\label{assumption:loss}
The loss function $\ell_{{\rm eff}}$ satisfies
\vspace{-0.5em}
\begin{enumerate}[wide,label=A\arabic*]
\item (Bounded loss). $\abs{ \ell_{{\rm eff}}(C_{\theta, t}; (x, y)) } \le M$ for all $(\theta, t)\in\Theta\times\mc{T}$ and all $(x,y)\in\mc{X}\times\mc{Y}$.~\label{assumption:loss-bound}
\item ($t$-Lipschitzness). $t \mapsto \ell_{{\rm eff}}(C_{\theta, t}; (x, y))$ is ${L_{\mc{T}}}$-Lipschitz for all $(\theta,x,y)\in\Theta\times\mc{X}\times\mc{Y}$.
\label{assumption:loss-lipt}
\end{enumerate}
\end{assumption}
\begin{assumption}[Bounded $\mc{T}$]
\label{assumption:T-bound}
The parameter space $\mc{T}\subset \R$ is bounded: $\sup_{t\in\mc{T}} |t| \le {B_\mc{T}}$.
\end{assumption}
\subsection{Finite class}
\begin{proposition}[Finite class]
\label{proposition:finite-class}
Suppose $\Theta$ is a finite set ($N_\Theta\defeq |\Theta|<\infty$), and suppose Assumptions~\ref{assumption:loss-bound},~\ref{assumption:loss-lipt},~\ref{assumption:T-bound} hold. Then, we have with probability at least $1-\delta$ that
\begin{align*}
\textstyle
\eps_{{\rm coverage}} \le C\sqrt{ \log( N_\Theta/\delta) / n_{\cal} }~~~{\rm and}~~~\eps_{{\rm eff}} \le C \cdot \brac{M\sqrt{\log( N_\Theta/\delta)} + {L_{\mc{T}}}{B_\mc{T}}} / \sqrt{n_{\cal}},
\end{align*}
where $C>0$ is an absolute constant.
\end{proposition}
\begin{proof}
We first bound $\eps_{{\rm coverage}}$. Fix any $\theta\in\Theta$, define
\begin{align}
\label{equation:score}
t_\theta(x, y) \defeq \inf\set{t\in\mc{T} : C_{\theta, t}(x) \ni y}
\end{align}
to be the smallest possible $t\in\mc{T}$ such that $C_{\theta, t}(x)$ contains $y$. Observe that, as $\set{C_{\theta, t}(x)}_{t\in\mc{T}}$ are nested sets, the coverage event can be rewritten as $\indic{y \in C_{\theta, t}(x)}=\indic{t \ge t_\theta(x, y)}$ for any $(x,y)$. Therefore, we have
\begin{align*}
& \quad \sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm coverage}}(C_{\theta, t}) - L_{{\rm coverage}}(C_{\theta, t})} \\
& = \sup_{t\in\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_\cal} \indic{y_i \notin C_{\theta, t}(x_i) } - \P\paren{ Y \notin C_{\theta, t}(X) } } \\
& = \sup_{t\in\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_\cal} \indic{y_i \in C_{\theta, t}(x_i) } - \P\paren{ Y \in C_{\theta, t}(X) } } \\
& = \sup_{t\in\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_\cal} \indic{t_\theta(x_i, y_i) \le t} - \P\paren{ t_\theta(X,Y) \le t} } \\
& = \sup_{t\in\mc{T}} \abs{ \what{F}_\theta(t) - F_\theta(t) },
\end{align*}
where we have defined $F_\theta:\R\to[0,1]$ as the CDF of the random variable $t_\theta(X,Y)$ and similarly $\what{F}_\theta$ as the empirical CDF of the same random variable over the finite dataset $D_{\cal}$. Applying the Dvoretzky-Kiefer-Wolfowitz (DKW) inequality~\citep[Corollary 1]{massart1990tight} yields that
\begin{align*}
\sup_{t\in\mc{T}} \abs{ \what{F}_\theta(t) - F_\theta(t) } \le \sqrt{\frac{\log(2/\delta)}{2n_{\cal}}}
\end{align*}
with probability at least $1-\delta$. Now, taking the union bound with respect to $\theta\in\Theta$ (where for each $\theta$ we plug in tail probability $\delta/2N_\Theta$) we get that with probability at least $1-\delta/2$,
\begin{equation}
\label{equation:finite-coverage-bound}
\begin{aligned}
& \quad \eps_{{\rm coverage}} = \sup_{\theta\in\Theta} \sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm coverage}}(C_{\theta, t}) - L_{{\rm coverage}}(C_{\theta, t})} \\
& = \sup_{\theta\in\Theta} \sup_{t\in\mc{T}} \abs{ \what{F}_\theta(t) - F_\theta(t) } \le \sqrt{\frac{\log(4N_\Theta/\delta)}{2n_{\cal}}} \le C\sqrt{\frac{\log(N_\Theta/\delta)}{n_{\cal}}}.
\end{aligned}
\end{equation}
for some absolute constant $C>0$.
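The DKW step above is easy to sanity-check numerically. Below is a small Monte Carlo sketch (illustrative only; the scores are hypothetically drawn from $N(0,1)$) estimating how often the sup-deviation of the empirical CDF exceeds the DKW threshold $\sqrt{\log(2/\delta)/(2n)}$:

```python
import numpy as np
from math import erf, sqrt

def sup_cdf_deviation(scores, grid):
    """Sup over a fine grid of |empirical CDF - population CDF| for N(0,1) scores."""
    emp = np.searchsorted(np.sort(scores), grid, side="right") / len(scores)
    pop = np.array([0.5 * (1.0 + erf(t / sqrt(2.0))) for t in grid])
    return float(np.max(np.abs(emp - pop)))

rng = np.random.default_rng(0)
n, delta, trials = 2000, 0.05, 200
dkw = sqrt(np.log(2.0 / delta) / (2.0 * n))   # DKW threshold at tail probability delta
grid = np.linspace(-4.0, 4.0, 801)
violations = sum(
    sup_cdf_deviation(rng.standard_normal(n), grid) > dkw for _ in range(trials)
)
violation_rate = violations / trials          # DKW: at most delta, up to Monte Carlo noise
```

The observed violation rate stays near (in fact below) the nominal level $\delta$, consistent with the inequality being sharp up to constants.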
We next bound $\eps_{{\rm eff}}$. Fix any $\theta\in\Theta$. We have by a standard symmetrization argument that
\begin{align*}
& \quad \E\brac{ \sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) } } \\
& = \E\brac{ \sup_{t\in\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \ell_{{\rm eff}}(C_{\theta, t}; (x_i, y_i)) - \E\brac{ \ell_{{\rm eff}}(C_{\theta, t}; (X, Y)) } }} \\
& \le 2\E_{(x_i, y_i), \eps_i}\brac{ \sup_{t\in\mc{T}} \abs{ \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}}\eps_i \ell_{{\rm eff}}(C_{\theta, t}; (x_i, y_i)) } } \\
& \stackrel{(i)}{\le} 2{L_{\mc{T}}} \cdot \E_{\eps_i} \brac{\sup_{t\in\mc{T}} \abs{ \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}}\eps_i \cdot t } }
= 2{L_{\mc{T}}} \cdot \E_{\eps_i} \brac{\abs{ \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}}\eps_i } \cdot \sup_{t\in\mc{T}} |t| } \\
& \stackrel{(ii)}{\le} 2{L_{\mc{T}}}\cdot {B_\mc{T}} \cdot \E_{\eps_i}\brac{ \abs{ \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}}\eps_i } } \stackrel{(iii)}{\le} 2{L_{\mc{T}}}\cdot {B_\mc{T}}/\sqrt{n_{\cal}}.
\end{align*}
Above, (i) used the Lipschitzness Assumption~\ref{assumption:loss-lipt} and the Rademacher contraction inequality~\citep[Exercise 6.7.7]{vershynin2018high}; (ii) used Assumption~\ref{assumption:T-bound}, and (iii) used $\E_{\eps_i}\brac{ \abs{ \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}}\eps_i } } \le \paren{ \E_{\eps_i}\brac{ \paren{ \frac{1}{n_{\cal}} \sum_{i=1}^{n_{\cal}}\eps_i }^2 } }^{1/2} = 1/\sqrt{n_{\cal}}$. (Above $\eps_i\simiid {\rm Unif}(\set{\pm 1})$ are Rademacher variables.)
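Step (iii) bounds $\E|n_{\cal}^{-1}\sum_i \eps_i|$ by $1/\sqrt{n_{\cal}}$ via Cauchy-Schwarz; a quick simulation (illustrative only) confirms the bound, with the exact value known to be $\approx \sqrt{2/(\pi n)}$ for large $n$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 400, 20000
eps = rng.choice([-1.0, 1.0], size=(trials, n))   # Rademacher signs
est = float(np.abs(eps.mean(axis=1)).mean())      # Monte Carlo estimate of E|n^{-1} sum eps_i|
bound = 1.0 / np.sqrt(n)                          # Cauchy-Schwarz bound used in step (iii)
```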
Next, as each loss $|\ell_{{\rm eff}}(C_{\theta, t}; (x, y))|\le M$ by Assumption~\ref{assumption:loss-bound}, the random variable
\begin{align*}
\sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) }
\end{align*}
satisfies the bounded-difference property with constant $M/n_{\cal}$. Therefore, by McDiarmid's inequality, we have with probability at least $1-\delta$ that
\begin{align*}
& \quad \sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) } \\
& \le \E\brac{ \sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) } } + \sqrt{\frac{M^2\log(1/\delta)}{2n_{\cal}}} \\
& \le C\cdot \frac{{L_{\mc{T}}} {B_\mc{T}} + M\sqrt{\log(1/\delta)}}{\sqrt{n_{\cal}}}.
\end{align*}
Finally, by union bound over $\theta\in\Theta$ (where we plug in $\delta/2N_\Theta$ as tail probability into the above), we have with probability at least $1-\delta/2$ that
\begin{equation}
\label{equation:finite-eff-bound}
\begin{aligned}
& \quad \eps_{{\rm eff}} = \sup_{\theta\in\Theta} \sup_{t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) } \\
& \le C\cdot \frac{{L_{\mc{T}}} {B_\mc{T}} + M\sqrt{\log(N_\Theta/\delta)}}{\sqrt{n_{\cal}}}.
\end{aligned}
\end{equation}
Combining~\eqref{equation:finite-coverage-bound} with~\eqref{equation:finite-eff-bound} yields the desired result.
\end{proof}
\subsection{VC/Rademacher class}
Next, for any class $\mc{C}$, let ${\rm VC}(\mc{C})\defeq {\rm VC}(\set{(x, y)\mapsto \indic{y\notin C_{\theta, t}(x)}: \theta\in\Theta, t\in\mc{T}})$ denote its VC dimension with respect to the coverage loss.
\begin{proposition}[VC/Rademacher class]
\label{proposition:vc-class}
We have for some absolute constant $C>0$ that
\begin{enumerate}[wide,label=(\alph*),topsep=0pt]
\item Suppose ${\rm VC}(\mc{C}) = K+1<\infty$, then with probability at least $1-\delta/2$,
\begin{align*}
\eps_{{\rm coverage}} \le C\sqrt{(K+1 + \log(1/\delta))/n_{\cal}}.
\end{align*}
\item Suppose Assumption~\ref{assumption:loss-bound} holds. Then we have with probability at least $1-\delta/2$ that
\begin{align*}
\eps_{{\rm eff}} \le C \brac{ R_{n_{\cal}}^{{\rm eff}}(\mc{C}) + \sqrt{M^2\log(1/\delta) / n_{\cal}} },
\end{align*}
where $R_{n_{\cal}}^{{\rm eff}}(\mc{C})\defeq \E_{(x_i,y_i),\eps_i}\brac{ \sup_{(\theta, t)\in\Theta\times\mc{T}} \abs{\frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}}\eps_i \ell_{{\rm eff}}(C_{\theta, t}; (x_i, y_i))} }$ is the Rademacher complexity of the class $\mc{C}$ with respect to $\ell_{{\rm eff}}$ (above $\eps_i\simiid {\rm Unif}(\set{\pm 1})$).
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[wide,label=(\alph*)]
\item By assumption, the class of Boolean functions $\set{(x,y)\mapsto \indic{y\notin C_{\theta,t}(x)}}_{(\theta,t)\in\Theta\times\mc{T}}$ has VC dimension $K+1<\infty$. Therefore by the standard Rademacher complexity bound for VC classes~\citep[Theorem 8.3.23]{vershynin2018high} and McDiarmid's Inequality, we have with probability at least $1-\delta/2$ that
\begin{align*}
& \quad \eps_{{\rm coverage}} = \sup_{(\theta, t)\in\Theta\times\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_\cal} \indic{y_i \notin C_{\theta, t}(x_i)} - \P\paren{ Y \notin C_{\theta, t}(X) } } \\
& \le C\sqrt{ \frac{K+1}{n_{\cal}} } + \sqrt{\frac{\log(2/\delta)}{2n_{\cal}}} \le C\sqrt{\frac{K+1+\log(1/\delta)}{n_{\cal}}}.
\end{align*}
\item We have by a standard symmetrization argument that (below $\eps_i\simiid {\rm Unif}(\set{\pm 1})$ denote Rademacher variables)
\begin{align*}
& \quad \E\brac{\eps_{{\rm eff}}} = \E\brac{\sup_{\theta\in\Theta, t\in\mc{T}} \abs{ \what{L}_{{\rm eff}}(C_{\theta, t}) - L_{{\rm eff}}(C_{\theta, t}) }} \\
& = \E\brac{ \sup_{\theta\in\Theta, t\in\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \ell_{{\rm eff}}(C_{\theta, t}; (x_i, y_i)) - \E\brac{ \ell_{{\rm eff}}(C_{\theta, t}; (X, Y)) } }} \\
& \le 2 \E_{(x_i, y_i), \eps_i}\brac{ \sup_{\theta\in\Theta, t\in\mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \eps_i \ell_{{\rm eff}}(C_{\theta, t}; (x_i, y_i)) } } = 2R_{n_{\cal}}^{{\rm eff}}(\mc{C}).
\end{align*}
Further by Assumption~\ref{assumption:loss-bound}, the quantity $\eps_{{\rm eff}}$ satisfies $M/n_{\cal}$ bounded-difference, so applying McDiarmid's Inequality gives that with probability at least $1-\delta/2$,
\begin{align*}
\eps_{{\rm eff}} \le \E\brac{\eps_{{\rm eff}}} + \sqrt{\frac{2M^2\log(2/\delta)}{n_{\cal}}} \le C\brac{R_{n_{\cal}}^{{\rm eff}}(\mc{C}) + \sqrt{\frac{M^2\log(1/\delta)}{n_{\cal}}}}.
\end{align*}
\end{enumerate}
\end{proof}
\section{Theoretical guarantee for~{\tt CP-Gen-Recal}}
\label{appendix:theory-recal}
In this section we state and prove the formal theoretical guarantee for the {\tt CP-Gen-Recal}~algorithm (Algorithm~\ref{algorithm:multi-conformal-reconformalize}).
Define the score $t_\theta(X, Y)\defeq \inf\set{t\in\mc{T}: Y \in C_{\theta, t}(X)}$ and let $F_{\theta}(t)\defeq \P(Y \in C_{\theta, t}(X))=\P(t_\theta(X, Y)\le t)$ denote its CDF.
\begin{assumption}[Lower bounded density for score function]
\label{assumption:cdf}
For any $\theta\in\Theta$, $t_\theta(X, Y)$ has a positive density $f_\theta(t)=F'_\theta(t)>0$ on $t\in\mc{T}$. Further, let $t_{\theta,1-\alpha}\defeq \inf\set{t\in\mc{T}: F_\theta(t)\ge 1-\alpha}$ denote its $(1-\alpha)$ quantile; then there exist constants ${\underline{c}_0},{\delta_0}>0$ such that
\begin{align*}
\inf_{t\in [t_{\theta,1-\alpha}-{\delta_0}, t_{\theta,1-\alpha} + {\delta_0}]} f_\theta(t) \ge {\underline{c}_0}.
\end{align*}
\end{assumption}
\begin{proposition}[Valid coverage and near-optimal efficiency for reconformalized algorithm]
\label{proposition:reconformalize}
The following holds for Algorithm~\ref{algorithm:multi-conformal-reconformalize}:
\begin{enumerate}[wide,label=(\alph*)]
\item (Valid coverage) For any possible $\what{\theta}\in\Theta$ learned in Line~\ref{line:1} and the resulting $\what{t}_{{\rm recal}}$, we have
\begin{align*}
\E_{D_{{\rm recal}}}\brac{ L_{{\rm coverage}}(C_{\what{\theta}, \what{t}_{{\rm recal}}})} \le \alpha,~~~\textrm{and thus}~~~\P_{D_{{\rm recal}}, (X,Y)}\paren{ Y \in C_{\what{\theta}, \what{t}_{{\rm recal}}}(X) }\ge 1-\alpha.
\end{align*}
\item (Efficiency) Suppose Assumptions~\ref{assumption:loss-lipt} and~\ref{assumption:cdf} hold, $\max\set{\eps_{{\rm coverage}}+1/n_{\cal}, 2\sqrt{\log(1/\delta)/n_{{\rm recal}}}}\le {\underline{c}_0}{\delta_0}$ (recall the definition of $\eps_{{\rm coverage}}$ in~\eqref{equation:eps-coverage}), and $\eps_{{\rm coverage}}\le \eps_0$. Then for $\delta\le 0.5$, we have with probability at least $1-\delta$ that
\begin{align*}
L_{{\rm eff}}(C_{\what{\theta}, \what{t}_{{\rm recal}}}) \le \mathop{\min_{(\theta, t)\in\Theta\times \mc{T}}}_{L_{{\rm coverage}}(C_{\theta, t})\le \alpha} L_{{\rm eff}}(C_{\theta, t}) + 2\eps_{{\rm eff}} + C {L_{\mc{T}}} \cdot \brac{ \eps_{{\rm coverage}} + \frac{1}{n_{\cal}}+ \sqrt{ \frac{\log(1/\delta)}{n_{{\rm recal}}} } } / {\underline{c}_0}.
\end{align*}
\end{enumerate}
\end{proposition}
\begin{proof}
\begin{enumerate}[wide,label=(\alph*)]
\item As the learned parameter $\what{\theta}$ (and thus the family of nested sets $C_{\what{\theta}, t}$) is independent of the recalibration dataset $D_{{\rm recal}}$, we have that the scores $t_{\what{\theta}}(x,y)$ on dataset $D_{{\rm recal}}$ and a new test point $(X,Y)$ are exchangeable given any $\what{\theta}$. Therefore by~\citep[Proposition 1]{gupta2019nested}, we have for any $\what{\theta}\in\Theta$ that
\begin{align*}
\P_{D_{{\rm recal}}, (X, Y)}\paren{Y \in C_{\what{\theta}, \what{t}_{{\rm recal}}}(X) } \ge 1-\alpha,
\end{align*}
or equivalently $\E_{D_{{\rm recal}}}\brac{ L_{{\rm coverage}}(C_{\what{\theta}, \what{t}_{{\rm recal}}}) } \le \alpha$.
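Part (a) can be illustrated with a short simulation (a hypothetical sketch: the scores $t_{\what{\theta}}(x,y)$ are drawn i.i.d. from $N(0,1)$, standing in for any fixed $\what{\theta}$). Recalibrating via the $\ceil{(1-\alpha)(n_{{\rm recal}}+1)}$-th smallest score yields average test coverage at least $1-\alpha$:

```python
import numpy as np

def recalibrate(scores, alpha):
    """Conformal quantile: the ceil((1-alpha)(n+1))-th smallest calibration score."""
    k = int(np.ceil((1.0 - alpha) * (len(scores) + 1)))
    return np.sort(scores)[k - 1]

rng = np.random.default_rng(2)
alpha, n_recal, trials = 0.1, 200, 500
coverages = []
for _ in range(trials):
    t_hat = recalibrate(rng.standard_normal(n_recal), alpha)   # hypothetical scores
    coverages.append(float((rng.standard_normal(10_000) <= t_hat).mean()))
avg_coverage = float(np.mean(coverages))   # averages to >= 1 - alpha, matching part (a)
```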
\item For any $\theta\in\Theta$, define the score function $t_\theta(x, y)$ the same as in~\eqref{equation:score}, and similarly define the CDF $F_\theta(t)\defeq \P(t_\theta(X,Y)\le t)$ and its empirical counterparts $\what{F}^{\cal}_\theta(t)$ and $\what{F}^{{\rm recal}}_\theta(t)$ as the finite-sample versions on the datasets $D_{\cal}$ and $D_{{\rm recal}}$, respectively.
We first analyze $\what{t}$. By the same derivation as in~\eqref{equation:finite-coverage-bound}, we have
\begin{align*}
\sup_{t\in\mc{T}} \abs{ \what{F}^{\cal}_{\what{\theta}}(t) - F_{\what{\theta}}(t) } \le \sup_{(\theta, t)\in\Theta\times\mc{T}} \abs{\what{F}^{\cal}_\theta(t) - F_\theta(t)} = \eps_{{\rm coverage}}.
\end{align*}
As $(\what{\theta}, \what{t})$ solves the constrained ERM~\eqref{problem:multi-conformal-alg} and by the assumption that $\ell_{{\rm eff}}(C_{\theta, t}; (x, y))$ is monotone in $t$, we have that $\what{t}$ is the minimal value of $t\in\mc{T}$ such that $\what{F}^{\cal}_{\what{\theta}}(t)\ge 1-\alpha$. Therefore (as $|D_{\cal}|=n_{\cal}$ and the scores $\set{t_{\what{\theta}}(x_i, y_i)}_{i\in D_{\cal}}$ are almost surely distinct by Assumption~\ref{assumption:cdf}), we have
\begin{align*}
1-\alpha \le \what{F}^{\cal}_{\what{\theta}}(\what{t}) \le 1-\alpha + 1/n_{\cal}.
\end{align*}
This shows that
\begin{align*}
\abs{F_{\what{\theta}}(\what{t}) - F_{\what{\theta}}(t_{\what{\theta},1-\alpha})} = \abs{F_{\what{\theta}}(\what{t}) - (1-\alpha)} \le \eps_{{\rm coverage}} + 1/n_{\cal},
\end{align*}
where we recall that $t_{\what{\theta},1-\alpha}$ is the $(1-\alpha)$ (population) quantile of $t_{\what{\theta}}(X,Y)$. Note that $F'_{\what{\theta}}(t) = f_{\what{\theta}}(t) \ge {\underline{c}_0}$ on $t\in [t_{\what{\theta},1-\alpha} - {\delta_0}, t_{\what{\theta},1-\alpha} + {\delta_0}]$ by Assumption~\ref{assumption:cdf}. Further, $\eps_{{\rm coverage}}+1/n_{\cal} \le {\underline{c}_0}{\delta_0}$ by assumption. Therefore, by monotonicity of $F_{\what{\theta}}$, we must have $\what{t} \in [t_{\what{\theta}, 1-\alpha} - \delta_0, t_{\what{\theta}, 1-\alpha} + \delta_0]$, and thus
\begin{align}
\label{equation:t-bound-1}
\abs{ \what{t} - t_{\what{\theta},1-\alpha} } \le (\eps_{{\rm coverage}} + 1/n_{\cal})/{\underline{c}_0}.
\end{align}
We next analyze $\what{t}_{{\rm recal}}$. As the dataset $D_{{\rm recal}}$ is independent of $\what{\theta}$, we can apply the DKW Inequality~\citep[Corollary 1]{massart1990tight} to obtain that
\begin{align*}
\sup_{t\in\mc{T}} \abs{\what{F}^{{\rm recal}}_{\what{\theta}}(t) - F_{\what{\theta}}(t) } \le \sqrt{\frac{\log(2/\delta)}{2n_{{\rm recal}}}}
\end{align*}
with probability at least $1-\delta$. Using a similar argument as above, we get (for $\delta\le 0.5$)
\begin{align*}
\abs{F_{\what{\theta}}(\what{t}_{{\rm recal}}) - F_{\what{\theta}}(t_{\what{\theta},1-\alpha})} \le \sqrt{\frac{\log(2/\delta)}{2n_{{\rm recal}}}} + \frac{1}{n_{{\rm recal}}} \le 2\sqrt{\frac{\log(1/\delta)}{n_{{\rm recal}}}}.
\end{align*}
As $2\sqrt{\log(1/\delta)/n_{{\rm recal}}} \le {\underline{c}_0}{\delta_0}$, we can apply the similar argument as above to deduce that
\begin{align}
\label{equation:t-bound-2}
\abs{ \what{t}_{{\rm recal}} - t_{\what{\theta},1-\alpha} } \le 2\sqrt{\log(1/\delta)/n_{{\rm recal}}}/{\underline{c}_0}.
\end{align}
Combining~\eqref{equation:t-bound-1} and~\eqref{equation:t-bound-2} and using the Lipschitzness of the efficiency loss (Assumption~\ref{assumption:loss-lipt}), we get
\begin{align*}
& \quad L_{{\rm eff}}(C_{\what{\theta}, \what{t}_{{\rm recal}}}) - L_{{\rm eff}}(C_{\what{\theta}, \what{t}}) \\
& \le {L_{\mc{T}}} \cdot \abs{\what{t}_{{\rm recal}} - \what{t}} \le {L_{\mc{T}}}\cdot \paren{ \abs{\what{t}_{{\rm recal}} - t_{\what{\theta},1-\alpha} } + \abs{ t_{\what{\theta},1-\alpha} - \what{t} } } \\
& \le C {L_{\mc{T}}} \cdot \brac{ \eps_{{\rm coverage}} + n_{\cal}^{-1} + \sqrt{ \frac{\log(1/\delta)}{n_{{\rm recal}}} } } / {\underline{c}_0}.
\end{align*}
Finally, as we assumed $\eps_{{\rm coverage}}\le \eps_0$, the condition of Proposition~\ref{proposition:main-gen}(b) holds, so we have
\begin{align*}
L_{{\rm eff}}(C_{\what{\theta}, \what{t}}) \le \mathop{\inf_{(\theta, t)\in\Theta\times \mc{T}}}_{L_{{\rm coverage}}(C_{\theta, t})\le \alpha} L_{{\rm eff}}(C_{\theta, t}) + 2\eps_{{\rm eff}}.
\end{align*}
Summing the preceding two bounds, we get
\begin{align*}
L_{{\rm eff}}(C_{\what{\theta}, \what{t}_{{\rm recal}}}) \le \mathop{\inf_{(\theta, t)\in\Theta\times \mc{T}}}_{L_{{\rm coverage}}(C_{\theta, t})\le \alpha} L_{{\rm eff}}(C_{\theta, t}) + 2\eps_{{\rm eff}} + C {L_{\mc{T}}} \cdot \brac{ \eps_{{\rm coverage}} + n_{\cal}^{-1} + \sqrt{ \frac{\log(1/\delta)}{n_{{\rm recal}}} } } / {\underline{c}_0}.
\end{align*}
This is the desired result.
\end{enumerate}
\end{proof}
\subsection{Case study: Linear class}
\label{appendix:linear-class}
In this section, we study prediction intervals with a specific linear structure and show that it satisfies the conditions of the VC/Rademacher class of Proposition~\ref{proposition:vc-class}.
Concretely, suppose we have a regression task ($\mc{Y}=\R$), and the prediction
interval $C_{\theta,t}(x)$ takes a linear form
\begin{align}
\label{equation:linear-interval}
C_{\theta, t}(x) = [\theta^\top \Phi_{{\rm lo}}(x) - t\sigma(x), \theta^\top \Phi_{{\rm hi}}(x) + t\sigma(x)],
\end{align}
where $\theta\in\Theta\subset \R^K$, $\Phi_{{\rm hi}}, \Phi_{{\rm lo}}:\mc{X}\to\R^K$ are feature maps such that $\Phi_{{\rm lo}}(x)_i\le \Phi_{{\rm hi}}(x)_i$ for all $i\in[K]$, and $\sigma:\mc{X}\to \R_{>0}$ is a positive scale function.
For intuition, we can think of $\Phi_{\set{{\rm hi}, {\rm lo}}}$ as pretrained representation functions and $\sigma$ as an (optional) pretrained function for modeling the variability of $y|x$. Note that this encompasses linear ensembling of several existing methods, such as vanilla conformal regression~\citep{lei2018distribution} by taking $\Phi_{{\rm hi}}=\Phi_{{\rm lo}}=\Phi$ where each $\Phi_i:\mc{X}\to \R$ is a base predictor, as well as Conformalized Quantile Regression~\citep{romano2019conformalized} where each $(\Phi_{{\rm lo}, i}, \Phi_{{\rm hi},i})$ is a pair of learned lower and upper quantile functions.
Our goal is to find an optimal linear function of this representation that yields the shortest prediction interval subject to valid coverage.
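To make the class concrete, the following is a small illustrative sketch (not from the paper; the weights and features below are hypothetical toy values) of the interval~\eqref{equation:linear-interval}, which also exhibits the nested-set property in $t$:

```python
import numpy as np

def linear_interval(theta, t, phi_lo, phi_hi, sigma):
    """C_{theta,t}(x) = [theta^T phi_lo(x) - t*sigma(x), theta^T phi_hi(x) + t*sigma(x)]."""
    return phi_lo @ theta - t * sigma, phi_hi @ theta + t * sigma

rng = np.random.default_rng(3)
theta = np.array([0.5, 0.3, 0.2])   # K = 3 ensemble weights (hypothetical)
phi = rng.standard_normal(3)        # shared features: the vanilla-ensembling special case
lo1, hi1 = linear_interval(theta, 0.5, phi, phi, 1.0)
lo2, hi2 = linear_interval(theta, 1.0, phi, phi, 1.0)
# a larger t yields a superset interval, i.e. the family is nested in t
```

Here taking $\Phi_{{\rm hi}}=\Phi_{{\rm lo}}$ recovers the ensembling-of-base-predictors special case mentioned above.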
We assume that both the features and the parameters are bounded:
\begin{assumption}[Bounded features and parameters]
\label{assumption:linear-bound}
We have $\sup_{\theta\in\Theta}\norm{\theta}\le {B_{\Theta}}$, $\sup_{x\in\mc{X}}\max\set{\norm{\Phi_{{\rm lo}}(x)}, \norm{\Phi_{{\rm hi}}(x)}}\le {B_{\Phi}}$, $\sup_{x\in\mc{X}} \sigma(x) \le {B_{\sigma}}$, and $\sup_{t\in\mc{T}} |t| \le {B_\mc{T}}$.
\end{assumption}
The following result shows that Proposition~\ref{proposition:vc-class} is applicable on the linear class.
\begin{corollary}[Coverage and length guarantees for linear class]
For the $(K+1)$-dimensional linear class~\eqref{equation:linear-interval}, suppose Assumption~\ref{assumption:linear-bound} holds, and we take the efficiency loss to be the length of the interval: $\ell_{{\rm eff}}(C; (x, y))\defeq {\rm length}(C(x))$.
Then, we have with probability at least $1-\delta$ (over the calibration dataset $D_{\cal}$) that
\begin{align*}
\eps_{{\rm coverage}} \le C\sqrt{\frac{K+1+\log(1/\delta)}{n_{\cal}}},~~~{\rm and}~~~\eps_{{\rm eff}} \le C\brac{{B_{\Theta}}{B_{\Phi}} + {B_\mc{T}}{B_{\sigma}}} \cdot \sqrt{\frac{\log(1/\delta)}{n_{\cal}}},
\end{align*}
where $C>0$ is an absolute constant.
\end{corollary}
\begin{proof}
We verify the conditions of Proposition~\ref{proposition:vc-class}. First, we have
\begin{align*}
\indic{y\notin C_{\theta, t}(x)} = \indic{ \max\set{y - \theta^\top \Phi_{{\rm hi}}(x), \theta^\top\Phi_{{\rm lo}}(x) - y} > t\sigma(x) }.
\end{align*}
The set within the indicator above is the union of two sets $\set{(x, y): y - \theta^\top\Phi_{{\rm hi}}(x) - t\sigma(x) > 0}$ and $\set{(x, y): \theta^\top\Phi_{{\rm lo}}(x) - y - t\sigma(x) > 0}$. Note that each family of sets (over $(\theta, t)\in\R^K\times \R$) consists of linear halfspaces with feature dimension $K+2$, and thus has VC dimension at most $K+2$. Applying the VC dimension bound for unions of sets~\citep[Theorem 1.1]{van2009note}, we get ${\rm VC}(\mc{C})\le C'(K+2 + K+2) \le C(K+1)$ for some absolute constant $C>0$. Therefore the condition of Proposition~\ref{proposition:vc-class}(a) holds, from which we obtain the desired bound for $\eps_{{\rm coverage}}$.
To bound $\eps_{{\rm eff}}$, we first note that for any $(x, y)\in\mc{X}\times \R$,
\begin{align*}
& \quad \abs{\ell_{{\rm eff}}(C_{\theta, t}; (x, y))} = \abs{{\rm length}(C_{\theta, t}(x))} \\
& \le \abs{\theta^\top (\Phi_{{\rm hi}}(x) - \Phi_{{\rm lo}}(x)) + 2t\sigma(x)} \le \norm{\theta}\norm{\Phi_{{\rm hi}}(x) - \Phi_{{\rm lo}}(x)} + 2\abs{t}\sigma(x) \\
& \le 2{B_{\Theta}}{B_{\Phi}} + 2{B_\mc{T}}{B_{\sigma}} \eqdef M,
\end{align*}
and thus the boundedness assumption (Assumption~\ref{assumption:loss-bound}) holds with $M$ defined above. Next, we have the following bound on the Rademacher complexity
\begin{align*}
& \quad R_{n_{\cal}}^{{\rm eff}}(\mc{C}) = \E\brac{ \sup_{(\theta, t)\in\Theta\times \mc{T}} \abs{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \eps_i \paren{ \theta^\top \paren{\Phi_{{\rm hi}}(x_i) - \Phi_{{\rm lo}}(x_i) } + 2t\sigma(x_i) } } } \\
& \le \E\brac{ \sup_{\theta\in\Theta} \abs{\< \theta, \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \eps_i \paren{\Phi_{{\rm hi}}(x_i) - \Phi_{{\rm lo}}(x_i) } \>} } + \E\brac{ \sup_{t\in\mc{T}} \abs{ 2t \cdot \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \eps_i\sigma(x_i) } } \\
& \le \sup_{\theta\in\Theta} \norm{\theta} \cdot \E\brac{ \norm{\frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \eps_i \paren{\Phi_{{\rm hi}}(x_i) - \Phi_{{\rm lo}}(x_i) }}^2 }^{1/2} + 2\sup_{t\in\mc{T}} |t| \cdot \E\brac{ \paren{ \frac{1}{n_{\cal}}\sum_{i=1}^{n_{\cal}} \eps_i\sigma(x_i) }^2 }^{1/2} \\
& \le {B_{\Theta}} \cdot \E\brac{ \frac{1}{n_{\cal}}\norm{\Phi_{{\rm hi}}(x_1) - \Phi_{{\rm lo}}(x_1)}^2 }^{1/2} + 2{B_\mc{T}} \cdot \E\brac{ \frac{1}{n_{\cal}} \sigma^2(x_1)}^{1/2} \\
& \le C\cdot \frac{{B_{\Theta}}{B_{\Phi}} + {B_\mc{T}}{B_{\sigma}}}{\sqrt{n_{\cal}}}.
\end{align*}
Applying Proposition~\ref{proposition:vc-class}(b), we get $\eps_{{\rm eff}} \le C \cdot \brac{{B_{\Theta}}{B_{\Phi}} + {B_\mc{T}}{B_{\sigma}}}\cdot \sqrt{\log(1/\delta)/n_{\cal}}$ with probability at least $1-\delta$. This is the desired bound for $\eps_{{\rm eff}}$.
\end{proof}
\section{Preliminaries}
\paragraph{Uncertainty quantification via prediction sets}
We consider standard learning problems in which we observe a dataset $D$ of examples $(x_i,y_i)\in\mc{X}\times \mc{Y}$ from some data distribution, and wish to predict the label $y$ from the input $x$. A \emph{prediction set} is a set-valued function $C:\mc{X}\to 2^{\mc{Y}}$ where $C(x)$ is a subset of $\mc{Y}$. Two prevalent examples are regression ($\mc{Y}=\R$) in which we can choose $C(x)\subset \R$ as a \textbf{prediction interval}, and (multi-class) classification ($\mc{Y}=[L]\defeq \set{1,\dots,L}$) in which we can choose $C(x)\subset [L]$ as a (discrete) \textbf{label prediction set}.
\paragraph{Coverage and efficiency}
The (marginal) coverage probability (henceforth \emph{coverage}) of a prediction set $C$ is defined as
\begin{align*}
{\rm Coverage}(C) \defeq \P\paren{Y \in C(X)}
\end{align*}
where $(X,Y)$ is a test example from the same data distribution. We also define the (mis)-coverage loss $L_{{\rm coverage}}(C)\defeq 1 - {\rm Coverage}(C) = \P(Y\notin C(X))$. A learned prediction set is often desired to achieve \emph{valid coverage} in the sense that ${\rm Coverage}(C)\ge 1-\alpha$ for some $\alpha\in(0,1)$. Here $1-\alpha$ is a pre-determined target coverage level; typical choices are $1-\alpha\in\set{90\%, 95\%}$, which correspond to picking $\alpha\in\set{0.1,0.05}$.
In addition to valid coverage, it is often desired that the prediction set has good \emph{efficiency} (such as small size). This is motivated by the fact that valid coverage can be achieved trivially if we do not care about the size, e.g. by always outputting $C=\mc{Y}$, which is not informative. Throughout this paper we will use $\ell_{{\rm eff}}$ to denote the particular efficiency loss we care about, where $\ell_{{\rm eff}}(C; (x, y))$ measures the efficiency loss of $C$ on an example $(x,y)$, such as the length (Lebesgue measure) of prediction intervals, or the size (cardinality) of label prediction sets.
\paragraph{Nested set framework}
We adopt the nested set framework of~\citep{gupta2019nested} for convenience in our presentation and analysis. A family $\set{C_t}_{t\in\mc{T}\subset \R}$ is said to be a (family of) nested sets if $t\le t'$ implies that $C_t(x)\subset C_{t'}(x)$ for all $x\in\mc{X}$. Throughout this paper, our notation $C_t$ or $C_{\theta, t}$ is assumed to denote nested sets with respect to $t$. We assume that our efficiency loss $\ell_{{\rm eff}}$ is non-decreasing w.r.t. its (set-valued) argument, i.e. $\ell_{{\rm eff}}(C; (x, y)) \le \ell_{{\rm eff}}(C'; (x, y))$ if $C\subseteq C'$. Therefore, for nested sets the loss $t\mapsto \ell_{{\rm eff}}(C_t; (x, y))$ is non-decreasing in $t$. As the coverage loss $L_{{\rm coverage}}(C_t)=\P(Y\notin C_t(X))$ (and its empirical version) is instead non-increasing in $t$, the efficiency loss and the coverage loss always come as a trade-off.
\subsection{Conformal prediction}
\label{section:conformal}
Conformal prediction~\citep{vovk2005algorithmic, lei2014distribution} is a powerful technique for learning prediction sets with coverage guarantees. The core of conformal prediction is its \emph{conformalization} step, which turns any base prediction function (or training algorithm) into a prediction set.
We here briefly review conformal prediction using the vanilla (split) conformal regression method of~\citep{lei2018distribution}, and refer the readers to~\citep{angelopoulos2021gentle} for more examples. Given any \emph{base predictor} $f:\mc{X}\to\R$ (potentially learned on a training dataset $D_{{\rm train}}$), conformal prediction outputs a prediction interval
\begin{align}
\label{equation:vanilla-conformal}
C_{\what{t}}(x) \defeq \brac{ f(x) - \what{t}, f(x) + \what{t} },
\end{align}
where $\what{t}\in\R_{\ge 0}$ is chosen as the $(1-\alpha)$-quantile\footnote{Technically~\eqref{equation:vanilla-t} requires the $\ceil{(1-\alpha)(n_{\cal}+1)}$-th smallest element to guarantee valid coverage~\citep{vovk2005algorithmic}; here we choose the nearby $\ceil{(1-\alpha)n_{\cal}}$-th smallest to allow the following insight.} of $|y - f(x)|$ on a calibration dataset $D_{\cal}$ with size $n_{\cal}\defeq |D_{\cal}|$ using the following conformalization step:
\begin{align}
\label{equation:vanilla-t}
\what{t} = \ceil{(1-\alpha)n_{\cal}}\textrm{-th smallest of}~\set{|y_i - f(x_i)|}_{i=1}^{n_{\cal}}.
\end{align}
The main guarantee for the learned interval $C_{\what{t}}$ is that it achieves a $(1-\alpha)$ coverage guarantee of the form $\P_{D_{\cal}, (X, Y)}(Y \in C_{\what{t}}(X)) \ge 1-\alpha$~\citep[Theorem 2.2]{lei2018distribution}. The proof relies on the exchangeability between the scores $\set{|y_i - f(x_i)|}_{i=1}^{n_{\cal}}$ and $|Y - f(X)|$, which allows this guarantee to hold in a distribution-free fashion (i.e. for any data distribution).
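A minimal numpy sketch of~\eqref{equation:vanilla-conformal}-\eqref{equation:vanilla-t} follows; the base predictor $f$ and the data-generating process here are hypothetical placeholders:

```python
import numpy as np

def conformalize(residuals, alpha):
    """Conformalization step: the ceil((1-alpha) n)-th smallest |y_i - f(x_i)|."""
    k = int(np.ceil((1.0 - alpha) * len(residuals)))
    return np.sort(residuals)[k - 1]

rng = np.random.default_rng(4)
n_cal, alpha = 1000, 0.1
f = lambda x: 2.0 * x                              # hypothetical base predictor
x_cal = rng.standard_normal(n_cal)
y_cal = f(x_cal) + rng.standard_normal(n_cal)      # additive noise gives nontrivial residuals
t_hat = conformalize(np.abs(y_cal - f(x_cal)), alpha)

# interval C_{t_hat}(x) = [f(x) - t_hat, f(x) + t_hat]; test coverage concentrates near 1 - alpha
x_test = rng.standard_normal(50_000)
y_test = f(x_test) + rng.standard_normal(50_000)
coverage = float((np.abs(y_test - f(x_test)) <= t_hat).mean())
```

On exchangeable data the test coverage of the resulting interval concentrates near the nominal level $1-\alpha$.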
\paragraph{Conformal prediction as a constrained ERM with one parameter}
We start with a simple re-interpretation: the conformalization step~\eqref{equation:vanilla-t} is equivalent to solving a constrained empirical risk minimization (ERM) problem with a single learnable parameter $t$ (cf. Appendix~\ref{appendix:proof-prop1} for the proof).
\begin{proposition}[Conformal regression as a constrained ERM with one learnable parameter]
\label{proposition:single-conformal}
The parameter $\what{t}\in\R$ defined in~\eqref{equation:vanilla-t} is the solution to the following constrained ERM problem
\begin{equation}
\label{problem:single-conformal}
\begin{aligned}
\minimize_{t\ge 0} & ~~ \what{L}_{{\rm eff}}(C_t) \defeq \frac{1}{n_{\cal}} \sum_{i\in D_{\cal}} \ell_{{\rm eff}}(C_t; (x_i, y_i)) = 2t \\
\subjectto & ~~\what{L}_{{\rm coverage}}(C_t) \defeq \frac{1}{n_{\cal}} \sum_{i\in D_{\cal}} \indic{ y_i \notin C_t(x_i)} \le \alpha.
\end{aligned}
\end{equation}
Above, $\ell_{{\rm eff}}(C; (x, y)) = {\rm length}(C(x))$ is the length of the interval $C(x)$.
\end{proposition}
Though simple, this re-interpretation reveals a limitation of the conformalization step~\eqref{equation:vanilla-t} as well as its analogues in other existing conformal methods: it only learns a single parameter $t$, and thus cannot further optimize the efficiency due to the coverage-efficiency trade-off (cf. Figure~\ref{figure:fig1}). However, the form of the constrained ERM problem~\eqref{problem:single-conformal} suggests that it can be readily extended to more general function classes with more than one learnable parameter, which is the focus of this work.
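The equivalence in Proposition~\ref{proposition:single-conformal} can also be checked numerically on synthetic scores (an illustrative sketch, not from the paper): since the objective $2t$ is increasing in $t$, the solution of the constrained ERM is the smallest feasible $t$, which coincides with the conformal quantile~\eqref{equation:vanilla-t}.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha, n = 0.1, 500
scores = np.abs(rng.standard_normal(n))            # hypothetical scores |y_i - f(x_i)| on D_cal
# conformalization step: the ceil((1 - alpha) n)-th smallest score
t_conf = np.sort(scores)[int(np.ceil((1.0 - alpha) * n)) - 1]
# constrained ERM: objective 2t is increasing in t, so the solution is the
# smallest t (it suffices to search over the scores themselves) whose
# empirical miscoverage, i.e. the fraction of scores strictly above t, is <= alpha
grid = np.sort(scores)
feasible = np.array([(scores > t).mean() <= alpha for t in grid])
t_erm = float(grid[np.argmax(feasible)])           # first feasible grid point
```

With continuous scores the calibration values are almost surely distinct, so the two quantities agree exactly.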
\section{Experiments}
\label{section:experiment}
We empirically test our {\tt CP-Gen-Recal}~algorithm (using the practical implementation~\eqref{problem:practical}) on three representative real-data tasks. The concrete construction of $\set{C_{\theta, t}}$ will be described within each application. Throughout this section we choose $1-\alpha=90\%$ as the nominal coverage level, and use the {\tt CP-Gen-Recal}~algorithm to guarantee coverage in expectation. We provide ablations with $1-\alpha\in\set{80\%, 95\%}$ in Appendix~\ref{appendix:cqf-ablations} and~\ref{appendix:multi-output-ablations} and the {\tt CP-Gen}~algorithm in Appendix~\ref{appendix:cpgen}. Several additional ablations and analyses can be found in Appendix~\ref{appendix:additional-experiments}.
\subsection{Improved prediction intervals via conformal quantile finetuning}
\label{section:cqf}
\begin{table}[t]
\centering
\small
\caption{\small {\bf Results for conformal quantile finetuning} on real-data regression tasks at level $1-\alpha=90\%$. For each method we report the (test) coverage, length, and pinball loss of the corresponding base quantile predictor. All results are averaged over 8 random seeds.}
\vspace{-1em}
\centerline{
\label{table:conformal-finetuning}
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{3}{c}{{\tt CQR}} & \multicolumn{3}{c}{{\tt QR} + {\tt CP-Gen-Recal}~(ours)} \\
\cmidrule(r){2-4} \cmidrule(r){5-7}
Dataset & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ & Coverage(\%) & Length & $L_{\rm pinball}^{\rm test}$ \\
\midrule
MEPS\_19 & $89.98$ & $1.167$ & $0.112$ & $90.09$ & $\mathbf{0.890}$ & $0.131$ \\
MEPS\_20 & $89.72$ & $1.165$ & $0.117$ & $89.99$ & $\mathbf{0.830}$ & $0.141$ \\
MEPS\_21 & $89.81$ & $1.145$ & $0.107$ & $90.22$ & $\mathbf{0.962}$ & $0.129$ \\
Facebook\_1 & $90.12$ & $0.555$ & $0.052$ & $90.34$ & $\mathbf{0.384}$ & $0.090$ \\
Facebook\_2 & $90.13$ & $0.491$ & $0.044$ & $90.02$ & $\mathbf{0.364}$ & $0.092$ \\
kin8nm & $90.03$ & $1.214$ & $0.076$ & $89.31$ & $\mathbf{1.173}$ & $0.078$ \\
naval & $89.70$ & $3.095$ & $0.164$ & $89.71$ & $\mathbf{3.077}$ & $0.166$ \\
bio & $90.26$ & $2.271$ & $0.130$ & $90.20$ & $\mathbf{2.164}$ & $0.148$ \\
blog\_data & $90.19$ & $0.605$ & $0.058$ & $90.01$ & $\mathbf{0.496}$ & $0.107$ \\
\midrule
Nominal ($1-\alpha$) & $90.00$ & - & - & $90.00$ & - & - \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\paragraph{Setup}
We consider regression tasks in which we use quantile regression (pinball loss) to train a base quantile predictor $\what{F}(x) = [\what{f}_{{\rm lo}}(x), \what{f}_{{\rm hi}}(x)] = \what{\theta}_0^\top \what{\Phi}(x)$ on $D_{{\rm train}}$ (with learning rate decay by monitoring validation loss on $D_{\cal}$). Here $\what{\Phi}:\mc{X}\to \R^{d_h}$ is the learned representation function, $\what{\theta}_0\in\R^{d_h\times 2}$ is the last linear layer ($d_h$ denotes the last hidden dimension), and $\what{f}_{{\rm lo}},\what{f}_{{\rm hi}}$ are the learned \{lower, upper\} quantile functions (see Appendix~\ref{appendix:cqf-details} for more details on the training procedure). Given $\what{F}$, we learn a baseline prediction interval of the form $[\what{f}_{{\rm lo}}(x) - t, \what{f}_{{\rm hi}}(x) + t]$ on $D_{{\rm recal}}$ via Conformalized Quantile Regression ({\tt CQR})~\citep{romano2019conformalized}.
We then attempt to improve the length over {\tt CQR}~by \emph{conformal quantile finetuning}: Fix the representation function $\what{\Phi}$ and finetune the linear layer $\theta$ using our {\tt CP-Gen-Recal}~algorithm, so that $C_{\theta, t}(x) = [\theta_{{\rm lo}}^\top \what{\Phi}(x) - t, \theta_{{\rm hi}}^\top\what{\Phi}(x)+t]$ (where $\theta=[\theta_{{\rm lo}}, \theta_{{\rm hi}}]$). We learn a new $\what{\theta}$ on $D_{\cal}$ via~\eqref{problem:practical} (where $\ell_{{\rm eff}}$ is chosen as the length), and then compute $\what{t}_{{\rm recal}}$ on $D_{{\rm recal}}$ as in Algorithm~\ref{algorithm:multi-conformal-reconformalize}.
We perform the above on 9 real-world regression datasets with a 3-layer MLP with width $d_h=64$, similar to~\citep{romano2019conformalized, feldman2021improving}. Additional details about the setup can be found in Appendix~\ref{appendix:cqf-details}. We also test various tweaks of the {\tt CQR}~baseline (results provided in Appendix~\ref{appendix:alternative-tweaks}).
\paragraph{Results}
Table~\ref{table:conformal-finetuning} compares the (test) coverage and length between {\tt CQR}~and the finetuned linear layer via our {\tt CP-Gen-Recal}. While both {\tt CQR}~and {\tt CP-Gen-Recal}~achieve valid 90\% coverage, {\tt CP-Gen-Recal}~systematically improves the length over {\tt CQR}~on all tasks. Table~\ref{table:conformal-finetuning} also reports the pinball loss for both the base $\what{\theta}_0^\top\what{\Phi}(x)$ as well as the finetuned $\what{\theta}^\top\what{\Phi}(x)$ on the test set $D_{{\rm test}}$. Intriguingly, our conformal finetuning makes the pinball loss \emph{worse} while managing to improve the length. This suggests the unique advantage of our constrained ERM objective, as it rules out the simple explanation that the length improvement is merely due to a lower test loss. We remark that while {\tt CP-Gen-Recal}~improves the length over {\tt CQR}, it comes at the cost of worse conditional coverage (analysis presented in Appendix~\ref{appendix:conditional-coverage}).
\subsection{Minimum-volume prediction sets for multi-output regression}
\label{section:multi-output}
\paragraph{Setup}
This task aims to learn a box-shaped prediction set for multi-output regression with a \emph{small volume}. Our learning task is regression with output dimension $d_{{\rm out}}>1$. We first learn a base predictor $\what{f}:\R^d\to\R^{d_{{\rm out}}}$ by minimizing the MSE loss on $D_{{\rm train}}$. We then learn a \emph{box-shaped} prediction set of the form $C_u(x) = \prod_{i=1}^{d_{\rm out}} [\what{f}_i(x) - \what{u}_i, \what{f}_i(x) + \what{u}_i]$ by one of the following methods:
\begin{itemize}[leftmargin=2em, topsep=0pt, itemsep=0pt]
\item ({\tt Coord-wise}): Each $\what{u}_i$ is obtained by vanilla conformalization~\eqref{equation:vanilla-t} over the $i$-th output coordinate on $D_{\cal}\cup D_{{\rm recal}}$. To guarantee $1-\alpha$ coverage, each coordinate is conformalized at level $1-\alpha/d_{{\rm out}}$, motivated by the union bound.
\item ({\tt Coord-wise-Recal}): Perform the above on $D_{\cal}$ to learn $\what{u}_i$, and reconformalize an additional $t\ge 0$ on $D_{{\rm recal}}$ to reshape the prediction set \emph{proportionally}:
\begin{align}
\label{equation:multi-output-set}
\textstyle
C_{\what{u}, t}(x) = \prod_{i=1}^{d_{\rm out}} [\what{f}_i(x) - t\what{u}_i, \what{f}_i(x) + t\what{u}_i].
\end{align}
\item ({\tt CP-Gen-Recal}, ours): Optimize the volume directly over all $u\in\R^{d_{{\rm out}}}$ using~\eqref{problem:practical} on $D_{\cal}$, where $\ell_{{\rm eff}}(C_u; (x, y))= \prod_{i=1}^{d_{{\rm out}}} (2u_i)$ is chosen as the volume loss. We then reconformalize an additional $\what{t}\ge 0$ on $D_{{\rm recal}}$ to reshape the prediction set same as in~\eqref{equation:multi-output-set}. Note that this reconformalization step is equivalent to Algorithm~\ref{algorithm:multi-conformal-reconformalize} with the re-parametrization $\what{u}\mapsto(\what{\theta}, \what{t})$ where $\what{\theta}\in\R_{>0}^{d_{{\rm out}}-1}$ denotes the ratio between $\what{u}_i$, and $\what{t}\in\R_{>0}$ denotes a common scale.
\end{itemize}
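The first two baselines above can be sketched as follows (our own NumPy sketch operating directly on residuals $y - \what{f}(x)$; the synthetic residual scales, function names, and finite-sample quantile handling are illustrative):

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of conformity scores."""
    n = len(scores)
    q = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, q, method="higher")

def coord_wise(residuals, alpha=0.1):
    """Union-bound baseline: conformalize each output coordinate
    at level 1 - alpha/d_out, giving per-coordinate half-widths u_i."""
    d_out = residuals.shape[1]
    return np.array([
        conformal_quantile(np.abs(residuals[:, i]), alpha / d_out)
        for i in range(d_out)
    ])

def recalibrate(u, residuals_recal, alpha=0.1):
    """Proportional reshaping: find t so that the scaled box
    prod_i [f_i - t*u_i, f_i + t*u_i] covers 1 - alpha of D_recal."""
    scores = np.max(np.abs(residuals_recal) / u, axis=1)
    return conformal_quantile(scores, alpha)

rng = np.random.default_rng(0)
scales = np.array([1.0, 2.0, 0.5])
res_cal = rng.normal(size=(2000, 3)) * scales
res_recal = rng.normal(size=(2000, 3)) * scales
u = coord_wise(res_cal)        # conservative half-widths
t = recalibrate(u, res_recal)  # common rescaling factor
```

The recalibration typically shrinks the conservative union-bound box, since the max-over-coordinates score accounts for the joint behavior of the residuals rather than bounding each coordinate separately.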
Our datasets are a collection of \emph{next-state prediction tasks} with multi-dimensional continuous states in offline reinforcement learning (RL), constructed similarly as D4RL~\citep{fu2020d4rl} with some differences. Additional details about the dataset and experimental setup are in Appendix~\ref{appendix:multi-output-details}. We also test an additional {\tt Max-score-Conformal}~baseline (which uses vanilla conformal prediction with score function $\|y - \what{f}(x)\|_\infty$, equivalent to a \emph{hypercube}-shaped predictor) in Appendix~\ref{appendix:maxscore}, which we find also performs worse than our {\tt CP-Gen-Recal}.
\paragraph{Results}
Table~\ref{table:multi-output} reports the (test) coverage and volume of the above three methods. The {\tt Coord-wise}~method achieves valid coverage but is quite conservative (it over-covers); this is expected, since the union bound is worst-case in nature and coordinate-wise conformalization does not utilize the potential correlation between the output coordinates. {\tt Coord-wise-Recal}~achieves approximately 90\% coverage with a much smaller volume. Our {\tt CP-Gen-Recal}~also achieves valid 90\% coverage with a further reduced volume across all tasks. This suggests that optimizing the volume over all possible $u\in\R^{d_{{\rm out}}}$ data-dependently using our {\tt CP-Gen-Recal}~is indeed more flexible than pre-determined conformalization schemes such as {\tt Coord-wise}.
\begin{table}[t]
\centering
\small
\caption{\small {\bf Results for multi-output regression} on next-state prediction tasks, at level $1-\alpha=90\%$. For each method we report the (test) coverage and volume of its learned box-shaped prediction set. All results are averaged over 8 random seeds.}
\vspace{-1em}
\label{table:multi-output}
\centerline{
\begin{tabular}{lcccccc}
\toprule
& \multicolumn{2}{c}{ {\tt Coord-wise} } & \multicolumn{2}{c}{ {\tt Coord-wise-Recal} } & \multicolumn{2}{c}{{ {\tt CP-Gen-Recal}~(ours)}} \\
\cmidrule(r){2-3} \cmidrule(r){4-5} \cmidrule(r){6-7}
Dataset & Coverage(\%) & Volume & Coverage(\%) & Volume & Coverage(\%) & Volume \\
\midrule
Cartpole & $94.28$ & $1.20\times 10^{-5}$ & $90.17$ & $5.10\times 10^{-6}$ & $90.12$ & $\mathbf{2.30\times 10^{-6}}$ \\
Half-Cheetah & $93.90$ & $1.10\times 10^{-5}$ & $90.06$ & $1.23\times 10^{-6}$ & $90.02$ & $\mathbf{9.07\times 10^{-7}}$ \\
Ant & $93.56$ & $3.37\times 10^{-3}$ & $89.99$ & $1.70\times 10^{-4}$ & $90.02$ & $\mathbf{8.25\times 10^{-5}}$ \\
Walker & $94.42$ & $2.59\times 10^{-5}$ & $90.01$ & $7.33\times 10^{-7}$ & $89.94$ & $\mathbf{3.47\times 10^{-7}}$ \\
Swimmer & $95.62$ & $2.80\times 10^{-5}$ & $89.90$ & $2.22\times 10^{-6}$ & $90.13$ & $\mathbf{1.46\times 10^{-7}}$ \\
Hopper & $92.87$ & $2.81\times 10^{-9}$ & $90.02$ & $1.01\times 10^{-9}$ & $89.92$ & $\mathbf{8.25\times 10^{-10}}$ \\
Humanoid & $94.75$ & $4.28\times 10^{-4}$ & $89.95$ & $8.53\times 10^{-8}$ & $89.94$ & $\mathbf{4.95\times 10^{-8}}$ \\
\midrule
Nominal ($1-\alpha$) & $90.00$ & - & $90.00$ & - & $90.00$ & - \\
\bottomrule
\end{tabular}
}
\end{table}
\paragraph{Additional experiment: label prediction sets for ImageNet}
We show that {\tt CP-Gen-Recal}~can learn label prediction sets for ImageNet with valid coverage and improved size over existing approaches, by finding an optimized set of ensemble weights over multiple base neural networks (Table~\ref{table:imagenet}). The full setup and results are presented in Appendix~\ref{appendix:imagenet}.
\section{Additional experimental details}
\label{appendix:exp-details}
\subsection{Conformal quantile finetuning}
\label{appendix:cqf-details}
\paragraph{Datasets}
Our choice of the datasets follows~\citep{feldman2021improving}. We provide information about these datasets in Table~\ref{table:reg-data}.
\begin{table}[h]
\centering
\small
\caption{\small Information about the regression datasets. Here $(n, d)$ denotes the (sample size, feature dim).}
\label{table:reg-data}
\vspace{-1em}
\centerline{
\begin{tabular}{ccc}
\toprule
Dataset & $n$ & $d$ \\
\midrule
MEPS\_19~\citep{meps19_data} & 15785 & 139 \\
MEPS\_20~\citep{meps20_data} & 17541 & 139 \\
MEPS\_21~\citep{meps21_data} & 15656 & 139 \\
Facebook\_1~\citep{facebook_data} & 40948 & 53\\
Facebook\_2~\citep{facebook_data} & 81311 & 53 \\
kin8nm~\citep{kin8nm_data} & 8192 & 8 \\
naval~\citep{naval_data} & 11934 & 17 \\
bio~\citep{bio_data} & 45730 & 9 \\
blog\_data~\citep{blog_data} & 52397 & 280 \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
All datasets are standardized so that inputs and labels have mean
$0$ and standard deviation $1$, and split into (train, cal, recal, test) with size 70\%, 10\%, 10\%, 10\% (varying with the random seed).
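This preprocessing can be sketched as follows (our own NumPy sketch; the exact seed handling and shuffling in the experiments may differ):

```python
import numpy as np

def standardize_and_split(X, y, seed=0):
    """Standardize inputs and labels to mean 0 / std 1, then split
    into (train, cal, recal, test) = (70%, 10%, 10%, 10%)."""
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    y = (y - y.mean()) / (y.std() + 1e-12)
    n = len(y)
    idx = np.random.default_rng(seed).permutation(n)
    # Cut points at 70%, 80%, 90% of the shuffled indices.
    parts = np.split(idx, [int(0.7 * n), int(0.8 * n), int(0.9 * n)])
    return [(X[p], y[p]) for p in parts]

rng = np.random.default_rng(1)
X_raw = 3.0 * rng.normal(size=(100, 5)) + 7.0
y_raw = 2.0 * rng.normal(size=100) - 1.0
train, cal, recal, test = standardize_and_split(X_raw, y_raw)
```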
\paragraph{Base predictor and optimization}
Our network architecture is a 3-layer MLP with width 64 and output dimension 2 (for the lower and upper quantile). We use momentum SGD with initial learning rate $10^{-3}$ and momentum $0.9$, batch-size 1024, and run the optimization for a max of 10000 epochs. A 10x learning rate decay is performed if the validation loss on $D_{\cal}$ has not decreased in 10 epochs, and we stop the learning whenever the learning rate decay happens for 3 times. The loss function used in training $\what{F}=[\what{f}_{{\rm lo}}, \what{f}_{{\rm hi}}]$ is the summed pinball loss of level $\alpha/2$ for $\what{f}_{{\rm lo}}$ and $1-\alpha/2$ for $\what{f}_{{\rm hi}}$, following~\citep{romano2019conformalized}:
\begin{align*}
\ell(\what{F}; (x_i, y_i)) = \ell_{{\rm pinball}}^{\alpha/2}(\what{f}_{{\rm lo}}(x_i) - y_i) + \ell^{1-\alpha/2}_{{\rm pinball}}( \what{f}_{{\rm hi}}(x_i) - y_i),
\end{align*}
where for any $\beta\in(0, 1)$, $\ell_{{\rm pinball}}^\beta$ is the pinball loss at level $\beta$:
\begin{equation*}
\ell_{{\rm pinball}}^\beta(t) = \left\{
\begin{aligned}
& - \beta t & \textrm{if}~t < 0,\\
& (1-\beta) t & \textrm{if}~t \ge 0.
\end{aligned}
\right.
\end{equation*}
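In code, the pinball loss and the summed training loss above are short one-liners (our own NumPy sketch):

```python
import numpy as np

def pinball(t, beta):
    """Pinball loss at level beta: -beta*t if t < 0, (1-beta)*t if t >= 0."""
    t = np.asarray(t, dtype=float)
    return np.where(t < 0, -beta * t, (1 - beta) * t)

def quantile_pair_loss(f_lo, f_hi, y, alpha=0.1):
    """Summed pinball loss at levels alpha/2 and 1 - alpha/2,
    matching the training loss defined above."""
    return np.mean(pinball(f_lo - y, alpha / 2)
                   + pinball(f_hi - y, 1 - alpha / 2))
```

Minimizing the pinball loss at level $\beta$ pushes the prediction towards the $\beta$-quantile of $y$ given $x$, which is why the pair of levels $(\alpha/2, 1-\alpha/2)$ yields a nominal $(1-\alpha)$ interval before conformalization.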
\paragraph{Optimization details for {\tt CP-Gen-Recal}}
For the conformal quantile finetuning procedure with our {\tt CP-Gen-Recal}, we rewrite the miscoverage loss for the quantile-based prediction interval as
\begin{align*}
\indic{y \notin C_{\theta, t}(x)} = \indic{ t - \max\set{ \theta_{{\rm lo}}^\top \what{\Phi}(x) - y, y - \theta_{{\rm hi}}^\top\what{\Phi}(x)} < 0}.
\end{align*}
(In practice our $\theta$ also includes a trainable bias same as the original top linear layer; here we abuse notation slightly to allow easier presentation.) We approximate the right-hand side above with the hinge loss to obtain the formulation~\eqref{problem:practical}. To solve that optimization problem, we use SGD on $(\theta, t)$ with learning rate $0.01$ and (ascent on) $\lambda$ with learning rate $0.1$. The batch-size here is 256 and the number of episodes is 1000. To ensure $t>0$ we use a log parametrization for $t$. Finally, $t_{{\rm recal}}$ is computed by the reconformalization step in Algorithm~\ref{algorithm:multi-conformal-reconformalize} on $D_{{\rm recal}}$.
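A minimal sketch of this hinge relaxation (our own NumPy version; we use the common surrogate $\max(0, 1-m)$ for the indicator $\indic{m<0}$, and the variable names are illustrative):

```python
import numpy as np

def coverage_margin(phi, theta_lo, theta_hi, t, y):
    """Margin m = t - max(theta_lo^T phi - y, y - theta_hi^T phi).
    The label y is covered by C_{theta,t}(x) iff m >= 0."""
    lo = phi @ theta_lo
    hi = phi @ theta_hi
    return t - np.maximum(lo - y, y - hi)

def hinge_miscoverage(phi, theta_lo, theta_hi, t, y):
    """Hinge surrogate for the miscoverage indicator 1{m < 0}:
    max(0, 1 - m) upper-bounds the indicator and is subdifferentiable,
    so it can be plugged into SGD on (theta, t)."""
    m = coverage_margin(phi, theta_lo, theta_hi, t, y)
    return np.maximum(0.0, 1.0 - m).mean()
```

The surrogate is zero for points covered with margin at least one and grows linearly for uncovered points, which gives the primal-dual SGD a useful gradient signal where the raw indicator has none.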
\subsection{Multi-output regression}
\label{appendix:multi-output-details}
\paragraph{Datasets}
We generate offline datasets consisting of (state, action, next\_state) pairs within RL tasks within the OpenAI Gym~\citep{openaigym}. For each task, the data is generated by executing a medium-performing \emph{behavior policy} that is extracted from standard RL training runs. All tasks are continuous state and continuous action. Table~\ref{table:rl-data} summarizes the state and action dimension, along with the reward of the policies used for generating the data. All datasets contain 200K examples.
All datasets are standardized so that inputs and labels have mean
$0$ and standard deviation $1$, and split into (train, cal, recal, test) with size 70\%, 10\%, 10\%, 10\% (varying with the random seed).
\begin{table}[h]
\centering
\small
\caption{\small Information about the next-state prediction datasets. Here $(d_S, d_A)$ denotes the (state, action) dimension of the corresponding RL task. Datasets with a (slim) note only extract a subset of the full state (so that $d_S$ is less than the full state dimension). We also report the mean reward of the behavior policies.}
\label{table:rl-data}
\vspace{-1em}
\centerline{
\begin{tabular}{cccc}
\toprule
RL Task & $d_S$ & $d_A$ & mean reward \\
\midrule
Cartpole & 4 & 1 & 107 \\
Half-Cheetah & 17 & 6 & 8015\\
Ant (slim) & 27 & 8 & 4645 \\
Walker & 17 & 6 & 3170 \\
Swimmer & 8 & 2 & 51 \\
Hopper & 11 & 3 & 2066 \\
Humanoid (slim) & 45 & 17 & 1357 \\
\bottomrule
\end{tabular}
}
\vspace{-1em}
\end{table}
\paragraph{Base predictor and optimization}
Our network architecture is a 3-layer MLP with width 64, input dimension $d_{\rm in}=d_S+d_A$, and output dimension $d_{\rm out}=d_S$. We use momentum SGD with initial learning rate $10^{-3}$, momentum $0.9$, and batch-size 512. We run the optimization for 1000 epochs with a 10x learning rate decay at epoch 500. The loss function for training the network is the standard MSE loss.
\paragraph{Optimization details for {\tt CP-Gen-Recal}}
For the multi-output regression experiments with our {\tt CP-Gen-Recal}, we rewrite the miscoverage loss for the box-shaped prediction set as
\begin{align*}
\indic{y \notin C_{u}(x)} = \indic{ y\notin \prod_{i=1}^{d_{\rm out}} [\what{f}_i(x) - u_i, \what{f}_i(x) + u_i] } = \indic{ 1 - \max_{1\le i\le d_{\rm out}} |y_i - \what{f}_i(x)| / u_i < 0},
\end{align*}
where we recall $u\in\R^{d_{\rm out}}$ is the learnable parameter within the initial optimization stage of {\tt CP-Gen-Recal}~as discussed in Section~\ref{section:multi-output}. We approximate the right-hand side above with the hinge loss to obtain the formulation~\eqref{problem:practical}. To solve that optimization problem, we use SGD on $u$ with learning rate $0.01$ and (ascent on) $\lambda$ with learning rate $0.01$. The batch-size here is 1024 and the number of episodes is 1000. To ensure $u>0$ we use a log parametrization for $u$.
For the reconformalization step, we keep the (relative) ratios of the $\what{u}$ obtained above (as the $\what{\theta}$), and then reconformalize an additional $t_{{\rm recal}}>0$ on $D_{{\rm recal}}$ via the proportional reshaping of~\eqref{equation:multi-output-set}.
\subsection{Details for Figure~\ref{figure:fig1}}
\label{appendix:fig1-details}
Figure~\ref{figure:fig1} is obtained on one run of our conformal quantile finetuning experiment on the MEPS\_19 dataset, and illustrates the coverage-efficiency tradeoff. Both figures there compute the coverage and length on the (unseen) test set $D_{{\rm test}}$, for better illustration. Figure~\ref{figure:fig1} Left plots the family $[\what{f}_{{\rm lo}}(x) - t, \what{f}_{{\rm hi}}(x) + t]$ used by Conformalized Quantile Regression. Figure~\ref{figure:fig1} Right plots the family
\begin{align*}
C_{\theta, t}(x) = [\theta_{{\rm lo}}^\top \what{\Phi}(x) - t, \theta_{{\rm hi}}^\top\what{\Phi}(x) + t].
\end{align*}
The specific function class of $\theta$ shown in the thinner lines is a finite set of linear interpolations of the original $\what{\theta}_0$ obtained in QR and the new $\what{\theta}$ obtained by conformal quantile finetuning, with combination weights within $\set{-0.3, -0.2, \dots, 1.0}$. The shaded region is then obtained by filling in the area.
\section{Introduction}
Gait can be regarded as a unique feature of a person, and it is usually determined by an individual's physical characteristics, \textit{e.g.}, height, weight and limb length, and walking habits, \textit{e.g.}, walking speed and posture combined with characteristic motions.
The study of human gait recognition has become an active research field, especially in the era of the Internet of Things (IoT).
It is appealing that an intelligent house can automatically recognize its owner's gait and offer customized services, such as turning on the lights and running a bath in advance when the owner returns home and walks towards the door, or that a special nursing ward can sense an unknown subject by its gait and alert the medical staff or family members in time.
Compared with the human identification systems based on other biometrics, like fingerprints, foot pressure, face, iris and voice, which need to be captured by physical contact or at a close distance from the devices, gait-based systems have the potential of unobtrusive and passive sensing \cite{boulgouris2005gait}.
Coincidentally, the emerging Wi-Fi-based human sensing techniques have shown us their potentials of Device-free Passive (DfP) sensing, and have inspired researchers to design and propose many DfP human sensing applications, such as fall detection \cite{han2014wifall}, human daily activity classification \cite{wang2015understanding}, keystroke \cite{ali2015keystroke} and sign language recognition \cite{li2016wifinger,ma2018signfi}.
The theoretical underpinning of the Wi-Fi-based DfP sensing systems is the Doppler shift and multipath effect of radio signals.
In Wi-Fi networks, different human activities can induce different Doppler shifts and multipath distortions in Wi-Fi signals, which are depicted and quantized by Channel State Information (CSI).
For walking movement, the torso and limbs of the walker always move at different speeds, which modulates Wi-Fi signals to the propagation paths with different lengths and introduces different frequency components into the CSI measurements.
By extracting very fine-grained and idiosyncratic features from the CSI measurements as human gait representations, several Wi-Fi-based gait recognition systems \cite{wang2016gait,zeng2016wiwho} have been proposed.
Different from the traditional gait recognition systems, which usually rely on video cameras \cite{lee2002gait,lam2011gait}, floor force sensors \cite{orr2000smart}, or wearable devices \cite{sprager2009cumulant,primo2014context}, \textit{etc.} to capture human walking dynamics, Wi-Fi-based systems are unconstrained by light and Line-of-Sight (LoS) conditions, and there is no need to deploy dense specialized sensors or require people to carry or wear some devices.
Besides, the concern about leaking private data, \textit{e.g.}, image data, is naturally eliminated in the Wi-Fi-based gait recognition systems.
The two critical processes of the existing Wi-Fi based gait recognition systems are gait cycle detection and gait feature extraction \cite{chen2017rapid}.
However, CSI measurements obtained from commercial Wi-Fi devices contain much noise, which makes it difficult to detect gait cycles.
Some sophisticated signal processing techniques, like spectrogram enhancement and autocorrelation analysis, are employed to denoise and emphasize the cycle patterns \cite{wang2016gait}.
After getting the data of each cycle, the previous work proposes to generate some experientially hand-crafted features from time-domain and frequency-domain of the cycle-wise data for gait recognition.
The previous methods are basically driven by traditional techniques of signal processing and machine learning, which have limitations in extracting high-quality representation of the data.
In this paper, we propose to adopt a much advanced deep learning framework, namely attention-based Recurrent Neural Network (RNN) encoder-decoder, to implement a cycle-independent human gait and walking direction recognition system with Wi-Fi signals.
The attention-based RNN encoder-decoder architecture was initially proposed for machine translation \cite{bahdanau2014neural}. Owing to the attention scheme, the trained networks can adaptively focus their attention on important words in the source sentence when generating each target word.
This distinguishing characteristic motivates us to create our cycle-independent gait and direction recognition system given the arbitrarily segmented CSI data.
Identifying human gait combined with walking direction can enable much more practical and interesting applications, while existing Wi-Fi-based gait recognition methods cannot cope with these two tasks simultaneously.
With the attention-based RNN encoder-decoder architecture, the proposed model can jointly train the networks for the two objectives.
For capturing more human walking dynamics, two receivers and one transmitter are deployed in different spatial layouts.
In the proposed system, the CSI measurements from the two receivers are first gathered together and refined to form an integrated walking profile.
Then, the RNN encoder reads and encodes the walking profile into a hidden state sequence, which can be regarded as a primary feature representation of the profile.
Subsequently, given a specific recognition task (gait or direction), the decoder computes a corresponding attention vector which is a weighted sum of the hidden states assigned with different attentions, and is finally used to predict the recognition target.
The attention scheme gives the proposed method the ability to align with some critical clips of the input data sequence for the two different tasks.
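The attention step described above can be sketched as follows (our own NumPy sketch using dot-product attention for illustration; the original architecture in \cite{bahdanau2014neural} uses a learned additive alignment model, and all shapes and variable names here are illustrative):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())   # subtract max for numerical stability
    return e / e.sum()

def attention_context(H, q):
    """H: (T, d) encoder hidden states over T time steps;
    q: (d,) task-specific query (one per task, e.g. gait vs direction).
    Returns the attention weights and the weighted context vector
    that the decoder uses to predict the recognition target."""
    scores = H @ q            # alignment score for each time step
    a = softmax(scores)       # attention weights, nonnegative, sum to 1
    return a, a @ H           # context = sum_t a_t * h_t

rng = np.random.default_rng(0)
H = rng.normal(size=(50, 16))           # 50 time slices, 16-dim states
a, ctx = attention_context(H, rng.normal(size=16))
```

Because the weights $a_t$ are computed per task, the same encoder states can be pooled differently for gait and for direction, which is exactly the "alignment with critical clips" property described above.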
The main contributions of this work are summarized as follows:
\begin{itemize}
\item By adopting the attention-based RNN encoder-decoder framework, we propose a cycle-independent human gait and walking direction recognition system while the existing Wi-Fi based gait recognition approaches are not reported to cope with these two tasks simultaneously.
Given a specific recognition task, the proposed system can adaptively align with the clips, which are critical for that task, of the CSI data sequence.
To the best of our knowledge, we are among the first to introduce the attention-based RNN encoder-decoder framework to the Wi-Fi based human gait recognition application scenario.
\item In order to capture more human walking dynamics, we deploy two receivers together with one transmitter in different spatial layouts and splice the spectrograms of CSI measurements received by different receivers to construct integrated walking profiles for recognition purpose.
A profile reversion method is proposed to augment the training data, and our system is trained to cope with the multi-direction gait recognition task, so individuals are not required to walk along a predefined path in a predefined direction as some existing Wi-Fi-based gait recognition systems do.
\item We implement our system on commodity Wi-Fi devices and evaluate its performance by conducting the walking experiment, where 11 subjects are required to walk on 12 different paths in 8 different directions.
The experimental results show that our system can achieve average $F_1$ scores of 89.69\% for gait recognition from a group of 8 randomly selected subjects and 95.06\% for direction recognition from 8 directions, and the average accuracies of these two recognition tasks both exceed 97\%.
\end{itemize}
\section{Related Work}
In the 1970s, Johansson et al. \cite{johansson1973visual} and Cutting et al. \cite{cutting1978biomechanical} conducted similar studies and found that viewers could determine the gender of a walker, or even recognize walkers with whom they were familiar, just by watching video recordings of the walker's prominent joints.
Based on these study foundations, the early human walking activity and gait recognition applications were mainly based on video or image sequences \cite{polana1994detecting,little1998recognizing,nixon1999automatic,lee2002gait,boulgouris2005gait,lam2011gait}.
Polana et al. \cite{polana1994detecting} proposed to recognize human walking activity by analyzing the periodicity of different activities in optical flow.
Besides periodicity, Little et al. \cite{little1998recognizing} also extracted moment features of foreground silhouettes and optical flow for walking identification.
Moreover, Lee et al. \cite{lee2002gait} divided the silhouettes into 7 regions that roughly correspond to different body parts and computed statistics on these regions to construct gait representation and realize human identification and gender classification.
And some other useful methods like Gait Energy Image (GEI) \cite{han2006individual} and Gait Flow Image (GFI) \cite{lam2011gait} were proposed to further enhance the quality of the gait representation created from silhouettes and improve the gait recognition rate.
However, the video-based methods always require subjects to walk in the direction perpendicular to the optical axis of the cameras so as to capture more gait information \cite{boulgouris2005gait}, and they also introduce many other non-negligible drawbacks, such as LoS and lighting requirements and personal privacy concerns.
Besides, some other data sensors were employed for gait recognition.
Orr et al. \cite{orr2000smart} adopted floor force sensors to record individuals' footstep force profiles, based on which the footstep models were built and achieved an overall gait recognition rate of 93\% for identifying a single subject.
Sprager et al. \cite{sprager2009cumulant} and Primo et al. \cite{primo2014context} proposed to use the built-in accelerometers of mobile phones to collect walking dynamics and extract gait features for human recognition.
These sensor-based approaches either rely on specially deployed sensors (\textit{e.g.}, floor sensors) or require people to carry or wear wearable devices, which limits their applicability.
Recently, the emerging Wi-Fi-based (mainly, CSI-based) sensing techniques have been widely applied to produce many DfP applications, such as human activity recognition \cite{han2014wifall, wang2015understanding, ali2015keystroke, li2016wifinger,ma2018signfi}, indoor localization \cite{sen2012you} and gait recognition \cite{zeng2016wiwho, wang2016gait}.
Since we mainly concern the problem of using Wi-Fi CSI to implement gait recognition, here we introduce some previous work on CSI-based gait recognition.
By utilizing the time-domain and frequency-domain gait features extracted from CSI, Zeng et al. \cite{zeng2016wiwho}, Zhang et al. \cite{zhang2016wifi} and Wang et al. \cite{wang2016gait} separately proposed three gait recognition systems called WiWho, WiFi-ID and WifiU.
WiWho mainly focused on the low frequency band 0.3$\sim$2 Hz of CSI, which contains much interference induced by slight body movements and environment changes \cite{xu2018wistep}.
This hinders WiWho from working when the subject is more than 1 meter away from the LoS path between its sender and receiver.
While WiFi-ID and WifiU concentrated on the frequency components of 20$\sim$80 Hz in CSI measurements.
In WiFi-ID, Continuous Wavelet Transformation (CWT) and RelieF feature selection algorithm were applied to extract gait features in different frequency bands, and the Sparse Approximation based Classification (SAC) \cite{wright2009robust} was chosen as the classifier.
In WifiU, Principal Component Analysis (PCA) and spectrogram enhancement techniques were utilized to generate a synthetic CSI spectrogram from which a set of 170 features were derived, and the SVM classifier was finally employed for human identification.
Based on WiWho, Chen et al. \cite{chen2017rapid} introduced an acoustic sensor (\textit{i.e.}, a condenser microphone) as a complementary sensing module to implement a multimodal human identification system, named Rapid, which could guarantee a more robust classification result in comparison to WiWho.
Most of these systems could achieve an average human identification accuracy of around 92\% to 80\% from a group of 2 to 6 subjects.
And one important function of the microphone used in Rapid is to detect the start and end points of each walking step, \textit{i.e.}, gait cycle detection, which is an indispensable part of the vast majority of video-, sensor- and CSI-based gait recognition systems.
However, segmenting gait cycles from CSI measurements is difficult since the variation patterns induced by walking are sometimes buried in the noise \cite{wang2016gait}, and some sophisticated signal processing techniques are needed to carefully refine the data.
After gait cycle detection, most existing methods (like Rapid, WifiU, \textit{etc.}) try to generate some experientially hand-crafted features from the cycle-wise gait data and then train the gait classifiers.
Different from all the aforementioned work, our system introduces the attention-based RNN encoder-decoder architecture to (i) adaptively align with some important time slices of CSI data sequence, which means there is no cycle partitioning needed, (ii) automatically learn to extract effective feature representations for gait recognition.
Another major difference between our system and the above work is that our system is trained to cope with multi-direction gait recognition (12 walking paths and 8 walking directions relative to the system equipments), where individuals don't have to walk along a predefined path in a predefined direction as WiWho or WifiU does.
In what follows, we will specify the design of the proposed system.
\section{Background \& Motivation}
To understand how human walking activity exerts impacts on Wi-Fi signals, we first give a brief overview of CSI and the multipath effect of wireless signals.
And then, we explain the network architecture of the attention-based RNN encoder-decoder, which powers our system to run the core functions.
\subsection{Channel State Information}
As for wireless communication, Channel Frequency Response (CFR) characterizes the small-scale multipath effect and the frequency-selective fading of the wireless channels.
Let $X(f,t)$ and $Y(f,t)$ separately denote the frequency-domain transmitted and received signal vectors, and the relation between $X$ and $Y$ can be modeled as
$Y(f, t) = H(f, t)\times X(f, t) + \mathcal{N}_{\emph{noise}}$,
where $f$ is the carrier frequency, $t$ indicates that the channel is time-varying, $H(f,t)$ is the complex valued CFR and $\mathcal{N}_{\emph{noise}}$ represents the noise.
Furthermore, in Wi-Fi networks, Channel State Information (CSI) is used to monitor the channel properties, and it is the simple version of CFR with discrete subcarrier frequency $f_i$ ($i=1, 2,..., N_S$), where $N_S$ denotes the total number of subcarriers across an Wi-Fi channel.
With the CSI tool \cite{halperin2011tool}, a CSI measurement, which contains 30 matrices of dimension $N_{\emph{Tx}}\times N_{\emph{Rx}}$ (one per subcarrier), can be obtained from each physical frame, where $N_{\emph{Tx}}$ and $N_{\emph{Rx}}$ represent the number of antennas of the transmitter (Tx) and the receiver (Rx), respectively.
We regard the CSI measurements collected from each Tx-Rx antenna pair as a \emph{CSI stream}, thus there are $N_C = N_{\emph{Tx}}\times N_{\emph{Rx}}$ time-series CSI streams.
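As a concrete illustration of how CSI measurements are organized into streams (our own NumPy sketch with assumed antenna counts; the raw data layout produced by the CSI tool may differ):

```python
import numpy as np

N_Tx, N_Rx, N_S = 2, 3, 30   # assumed antenna counts; 30 subcarriers
n_packets = 1000             # frames in one capture

# Complex CSI: one (N_Tx x N_Rx) matrix per subcarrier per packet.
rng = np.random.default_rng(0)
csi = rng.normal(size=(n_packets, N_S, N_Tx, N_Rx)) \
    + 1j * rng.normal(size=(n_packets, N_S, N_Tx, N_Rx))

# One amplitude time series ("CSI stream") per Tx-Rx antenna pair:
# resulting shape (N_C, n_packets, N_S) with N_C = N_Tx * N_Rx = 6.
streams = np.abs(csi).transpose(2, 3, 0, 1).reshape(N_Tx * N_Rx,
                                                    n_packets, N_S)
```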
\begin{figure}[!t]
\centering
\includegraphics[width=0.75\columnwidth]{./figures/multipath_model.pdf}
\caption{Multipath effect of Wi-Fi signals in indoor environment.}
\label{fig_multipath_effect}
\end{figure}
\subsection{Multipath Effect of Wi-Fi Signals}
In Fig. \ref{fig_multipath_effect}, one transmitted signal can directly travel through the Line-of-Sight (LoS) path, or be reflected off the wall and the walking subject and propagate through multiple paths before arriving at the receiver, this phenomenon is called \emph{multipath effect} \cite{rappaport1996wireless}.
The multiple paths can be divided into two categories: static paths (solid lines in Fig. \ref{fig_multipath_effect}) and dynamic paths (caused by walking subject as expressed by dotted lines in Fig. \ref{fig_multipath_effect}).
If there totally exist $N_P$ propagation paths among which $N_D$ paths are dynamically varying, then $H(f_i, t)$ of the $i^{th}$ subcarrier at time $t$ can roughly be expressed as
\begin{equation}\label{eqn_multipath}
\begin{aligned}[b]
H(f_i,t) =& \sum_{n=1}^{N_P} a_n(f_i,t)e^{-j\frac{2\pi d_n(t) }{\lambda_i}}\\
=& H_s(f_i) + \sum_{k=1}^{N_D} a_k(f_i,t)e^{-j\frac{2\pi d_k(t) }{\lambda_i}}\mbox{,}
\end{aligned}
\end{equation}
where $H_s(f_i)$ is the sum of responses of the static paths and can be regarded as a constant \cite{wang2015understanding}, since signals traveling through the static paths have relatively invariable path length and propagation attenuation,
$a_k(f_i,t)$ represents the attenuation of the $k^{th}$ dynamical path, $\frac{d_k(t) }{\lambda_i}$ and $\frac{2\pi d_k(t) }{\lambda_i}$ separately denote the propagation delay and the phase shift when the path length is $d_k(t)$, $\lambda_i$ is the wavelength of subcarrier $i$.
In terms of Fig. \ref{fig_multipath_effect}, at time $t$, the length of path $k$ can be expressed as $d_k(t) = d_k(t_0) + v_kt$, where $v_k$ represents the changing speed of path $k$.
Therefore, $\frac{2\pi d_k(t) }{\lambda_i} = \frac{2\pi v_kt }{\lambda_i} + \frac{2\pi d_k(t_0) }{\lambda_i}$.
Usually, a $\lambda_i$ displacement of the subject can roughly cause a $2\lambda_i$ length change of the dynamical path $k$ (round trip), which introduces $4\pi$ phase change.
According to the principle of superposition of waves, the $4\pi$ phase change finally induces 2 cycles of the amplitude change of CSI values \cite{wang2015understanding}, which reveals an approximate relation between the human moving speed $V$ ($= v_k/2$) and the frequency $F$ of the amplitude variation of CSI values, \emph{i.e.}, $F = 2V / \lambda_i$.
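As a quick numeric check of the relation $F = 2V/\lambda$ (our own sketch; the 5.32 GHz carrier frequency and the 1.3 m/s torso speed are assumptions for illustration, not values from the paper):

```python
# Speed of light and an assumed 5 GHz-band Wi-Fi carrier.
c = 3e8                # m/s
f_c = 5.32e9           # Hz (assumption)
lam = c / f_c          # wavelength, roughly 5.6 cm

v_torso = 1.3          # assumed typical walking speed, m/s
F = 2 * v_torso / lam  # frequency of CSI amplitude variation
```

Under these assumptions $F$ comes out at roughly 46 Hz, consistent with the 20$\sim$80 Hz band that WiFi-ID and WifiU concentrate on.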
In addition, according to the Friis free space propagation model, the transmitting power ($P_t$) and the receiving power ($P_r$) of Wi-Fi signals have the following relation \cite{rappaport1996wireless}:
\begin{equation}\label{eqn_friis}
P_r = P_t G_r G_t \big(\frac{\lambda}{4 \pi d}\big)^2 \mbox{,}
\end{equation}
where $G_r$ and $G_t$ are the gains of the Rx and Tx antennas, respectively, and $d$ is the length of the LoS path.
Besides, the receiving power $P_r^i$ of the $i^{th}$ subcarrier has been shown to be essentially proportional to the CSI power $|H(f_i,t)|^2$ of subcarrier $i$ \cite{yang2013rssi,wu2012fila}, \textit{i.e.}, $P_r^i \propto |H(f_i,t)|^2$.
Thus, combining this with equation (\ref{eqn_friis}), we obtain the relation $|H(f_i,t)|^2 \propto \frac{1}{d^2}$.
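The inverse-square behavior of equation (\ref{eqn_friis}) can be checked with a small sketch; the power and gain values below are placeholders.

```python
# Sketch of the Friis relation P_r = P_t * G_t * G_r * (lambda / (4*pi*d))^2.
# Doubling the distance should quarter the received power (P_r ∝ 1/d^2),
# mirroring |H(f_i, t)|^2 ∝ 1/d^2 noted in the text.
import math

def friis_received_power(p_t, g_t, g_r, wavelength, d):
    return p_t * g_t * g_r * (wavelength / (4.0 * math.pi * d)) ** 2

lam = 3e8 / 5.745e9                  # wavelength on the 5.745 GHz channel
p1 = friis_received_power(0.1, 1.0, 1.0, lam, 1.0)  # d = 1 m
p2 = friis_received_power(0.1, 1.0, 1.0, lam, 2.0)  # d = 2 m
```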
Based on the above analysis, we can draw some useful observations:
\begin{enumerate}
\item Every individual has a unique gait while walking, which means that different people induce Wi-Fi signals to propagate along different paths and produce quite different changing patterns of those paths.
Fortunately, these patterns are imprinted on the CSI measurements and reflected in the changing patterns of the CSI amplitudes, so we can plausibly realize the gait recognition task by digging into the CSI measurements.
\item As shown in Fig. \ref{fig_multipath_effect}, assuming that Tx and Rx are fixed, \textit{i.e.}, the distance of the LoS path is constant.
If there is no moving object in the environment, the power of the received CSI measurements will be relatively steady.
However, when a person walks towards the Tx and the Rx, the lengths of the static paths remain constant while the lengths of the dynamic paths induced by the person become shorter; the receiving power of the dynamic paths therefore grows, so the variance of the CSI power (or CSI amplitude) becomes larger and larger, and vice versa.
Fig. \ref{fig_walking_instance_waveform} displays the variation of CSI amplitude when a person walks in different directions relative to the Tx and the Rx.
Therefore, we can roughly regard $\sigma^2(|H(f_i,t)|^2) \propto \frac{1}{d^2}$; \cite{chen2017rapid} drew a similar conclusion.
Based on this, we can deduce the walking direction of a person by analyzing the variation trend of CSI power or CSI amplitude.
\end{enumerate}
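The second observation, that the variance trend of the CSI power tracks the walking direction, can be sketched on a synthetic signal; the signal below is an assumed toy model (growing oscillation amplitude mimics an approaching subject), not real CSI data.

```python
# Sketch: infer the walking-direction trend from the variance of CSI
# amplitude in sliding windows, per sigma^2(|H|^2) ∝ 1/d^2.
import numpy as np

def windowed_variance(amplitude, win=100):
    """Variance of the signal over consecutive non-overlapping windows."""
    return np.array([amplitude[i:i + win].var()
                     for i in range(0, len(amplitude) - win + 1, win)])

rng = np.random.default_rng(0)
t = np.arange(2000)
# Growing oscillation amplitude mimics a subject approaching the transceivers.
approaching = (0.1 + t / 2000.0) * np.sin(2 * np.pi * 0.05 * t) \
    + rng.normal(0, 0.01, t.size)
var_trend = windowed_variance(approaching)
# An increasing variance trend suggests "walking towards"; decreasing, "away".
is_approaching = var_trend[-1] > var_trend[0]
```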
By now, we have explained the basis and feasibility of walking gait and direction recognition using Wi-Fi signals.
Next, we will introduce the core model employed in the proposed method, namely the attention-based RNN encoder-decoder.
\begin{figure}[!t]
\centering
\subfigure[The variation of CSI amplitude when a person walks away from the Tx and the Rx.]
{
\includegraphics[width=0.45\columnwidth]{./figures/subject1_dd+_pwr_d0_seg_2.pdf}
\label{fig_walk_away_waveform}
}\quad
\subfigure[The variation of CSI amplitude when a person walks towards the Tx and the Rx.]
{
\includegraphics[width=0.45\columnwidth]{./figures/subject1_dd+_pwr_d1_seg_3.pdf}
\label{fig_walk_close_waveform}
}
\caption{The variation of CSI amplitude when a person walks in different directions relative to the Tx and the Rx (in Fig. \ref{fig_multipath_effect}).}
\label{fig_walking_instance_waveform}
\end{figure}
\subsection{Attention-based RNN encoder-decoder}
\begin{figure}[!t]
\centering
\includegraphics[width=0.90\columnwidth]{./figures/standard_RNN_encoder_decoder.pdf}
\caption{The standard architecture of RNN encoder-decoder.}
\label{fig_standard_RNN_encoder_decoder}
\end{figure}
\subsubsection{Standard RNN encoder-decoder}
The time-series CSI measurements are a kind of sequence data whose length can be arbitrary. Given a sequence of CSI inputs ($x^1, \cdots, x^T$), the target of our system is to output another predicted sequence ($y^1, \cdots, y^N$), such as the walking direction and the walker's identity.
Therefore, this process can be formulated as a sequence to sequence learning problem \cite{sutskever2014sequence}, and the RNN encoder-decoder is naturally adopted in this work.
Fig. \ref{fig_standard_RNN_encoder_decoder} illustrates the architecture of a standard RNN encoder-decoder \cite{cho2014learning}, which learns to encode an input sequence into a fixed-length summary vector $c$ and to decode $c$ into an output sequence.
At each time step $t$, the hidden state $h_x^t$ of the RNN encoder is updated by
\begin{equation}\label{eqn_encoder_hidden_state}
h_x^t = f(h_x^{t-1}, x^t) \mbox{,}
\end{equation}
and the summary vector $c$ is generated by
\begin{equation}\label{eqn_summary_vector}
c = q(h_x^1, \cdots, h_x^T) \mbox{,}
\end{equation}
where $f$ is a non-linear activation function, such as a Long Short-Term Memory (LSTM) unit \cite{hochreiter1997long} or a Gated Recurrent Unit (GRU) \cite{cho2014learning}, which can automatically extract high-quality features from the input data, and $q$ can simply pick the last hidden state, \textit{i.e.}, $q(h_x^1, \cdots, h_x^T) = h_x^T$.
The decoder is another RNN, and it is trained to learn the following conditional distribution:
\begin{equation}\label{eqn_output_conditional_distribution}
P(y^n|y^{n-1}, y^{n-2}, \cdots, y^1, c) = g(h_y^n, y^{n-1}, c), n \in [1, N] \mbox{,}
\end{equation}
where
\begin{equation}\label{eqn_decoder_hidden_state}
h_y^n = f(h_y^{n-1}, y^{n-1}, c) \mbox{,}
\end{equation}
where $y^n$ is the $n^{th}$ predicted result, $h_y^n$ is the hidden state at the $n^{th}$ prediction step, $f$ is again a non-linear function, and $g$ is usually a softmax function.
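The encoder recurrence of equations (\ref{eqn_encoder_hidden_state}) and (\ref{eqn_summary_vector}) can be illustrated with a minimal NumPy sketch; the plain tanh cell and random weights below are simplifying assumptions standing in for a trained GRU or LSTM.

```python
# Minimal sketch of the encoder recurrence h_x^t = f(h_x^{t-1}, x^t), with q
# picking the last hidden state as the summary vector c. Weights are random
# placeholders, not trained parameters.
import numpy as np

rng = np.random.default_rng(1)
D_IN, D_H, T = 4, 8, 10
W_xh = rng.normal(0, 0.1, (D_H, D_IN))
W_hh = rng.normal(0, 0.1, (D_H, D_H))

def encode(x_seq):
    h = np.zeros(D_H)
    states = []
    for x_t in x_seq:                        # h^t = f(h^{t-1}, x^t)
        h = np.tanh(W_xh @ x_t + W_hh @ h)
        states.append(h)
    return np.stack(states), states[-1]      # c = q(h^1, ..., h^T) = h^T

hidden_states, c = encode(rng.normal(size=(T, D_IN)))
```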
\textit{With the help of the RNN encoder-decoder model, the proposed system can be jointly trained to maximize the probability of a series of recognition tasks (\textit{e.g.}, direction estimation and human identification) given a CSI sequence}.
However, a major concern is that the standard RNN encoder-decoder model tries to compress all input data into a single fixed-length vector, which is then used to predict all the output labels.
Different outputs probably have different connections to the inputs: for example, predicting the walking direction may require sensing the variation trend of the entire CSI power sequence, while human identification may focus more on some critical clips of the CSI sequence.
To address this concern, the attention-based RNN encoder-decoder is adopted in our method.
\begin{figure}[!t]
\centering
\includegraphics[width=0.9\columnwidth]{./figures/attention_based_RNN_encoder_decoder.pdf}
\caption{The architecture of attention-based RNN encoder-decoder.}
\label{fig_attention_based_RNN_encoder_decoder}
\end{figure}
\subsubsection{RNN encoder-decoder with attention scheme} \label{subsubsec_attn_rnn_en_de}
As Fig. \ref{fig_attention_based_RNN_encoder_decoder} shows, the key difference between attention-based and standard RNN encoder-decoder is that the attention-based model adaptively encodes the input sequence into different summary vectors, which we call attention vectors, for different predictions.
The attention-based RNN encoder-decoder model was first proposed for machine translation \cite{bahdanau2014neural}, where it learns to align and translate simultaneously.
In this model, the conditional probability of equation (\ref{eqn_output_conditional_distribution}) is rewritten as
\begin{equation}\label{eqn_attention_output_conditional_distribution}
P(y^n|y^{n-1}, y^{n-2}, \cdots, y^1, c^n) = g(h_y^n, y^{n-1}, c^n), n \in [1, N] \mbox{,}
\end{equation}
and $h_y^n = f(h_y^{n-1}, y^{n-1}, c^n)$.
The distinct attention vector $c^n$ corresponding to the target $y^n$ is expressed as a weighted sum of all the hidden states of the RNN encoder ($h_x^1, \cdots, h_x^T$):
\begin{equation}\label{eqn_summary_vector_c_n}
c^n = \sum_{t = 1}^{T}\omega^{n,t} h_x^t \mbox{,}
\end{equation}
where $\omega^{n,t}$ is the attention weight, and it is computed by
\begin{equation}\label{eqn_attention_weight}
\omega^{n,t} = softmax(attn(h_y^n,h_x^t)) \mbox{,}
\end{equation}
where the function $attn(\cdot)$ scores how well the input data at each time step matches the current output \cite{bahdanau2014neural}, which enables the model to adaptively align with the important parts of the inputs when predicting a certain target.
It is worth noting that the initial hidden state $h_y^0$ of the RNN decoder is set to $h_x^T$ in the specific implementation of \cite{bahdanau2014neural}.
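Equations (\ref{eqn_summary_vector_c_n}) and (\ref{eqn_attention_weight}) can be sketched as follows; the dot-product score used here is an assumption standing in for the learned alignment network $attn(\cdot)$ of \cite{bahdanau2014neural}.

```python
# Sketch of the attention vector c^n as a softmax-weighted sum of encoder
# hidden states. A simple dot-product score stands in for the learned
# alignment network attn(.).
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attention_vector(h_y, encoder_states):
    scores = encoder_states @ h_y            # attn(h_y^n, h_x^t), one per t
    w = softmax(scores)                      # attention weights omega^{n,t}
    return w, w @ encoder_states             # c^n = sum_t omega^{n,t} h_x^t

rng = np.random.default_rng(2)
H = rng.normal(size=(10, 8))                 # T = 10 encoder states, dim 8
w, c_n = attention_vector(rng.normal(size=8), H)
```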
\section{System Design}\label{sec_system_design}
The proposed system consists of four essential modules, which are \textit{CSI Collection}, \textit{Raw CSI Processing}, \textit{Walking Profile Generation} and \textit{Gait and Direction Recognition}.
In what follows, the detailed processing procedures of each module will be explained.
\subsection{CSI Collection} \label{subsec_csi_collection}
Since the 2.4 GHz Wi-Fi band is narrower and more crowded than the 5 GHz band, the latter is a much better choice for less inter-channel interference and more reliable communication.
Therefore, our system is set to run on the 5 GHz Wi-Fi band.
A laptop equipped with an Intel 5300 wireless card and 2 omni-directional antennas serves as the transmitter.
In order to capture more walking dynamics, two laptops equipped with Intel 5300 wireless cards (each with 3 omni-directional antennas) are deployed as the receivers.
For concentrating on different body parts of a walker, the receivers are placed at different heights, where one is at a height of 0.5 m and the other is 1.0 m above the ground level, and the transmitter is placed at a height of 0.75 m.
The transmitter continuously sends 802.11n data packets to the receivers, to which the CSI measurements of correctly received packets are reported with the tool released in \cite{halperin2011tool}.
Fig. \ref{fig_csi_amplitude_variance} illustrates the amplitude variance of CSI received by the 2 receivers when a subject performs two different movements (in-place walking w/o swing arms and swing arms w/o walking), and we can find that the lower receiver is more sensitive to leg movements while the higher is more sensitive to arm movements.
Considering human activities in traditional indoor environment introduce frequencies of no more than 300 Hz in CSI measurements \cite{wang2015understanding}, in terms of the Nyquist sampling theorem, our system is configured with a sampling rate ($F_s$) of 1000 Hz.
For each receiver, the received data of each CSI stream forms a matrix $\textbf{C}_i\ (i=1, \cdots, N_C)$ with dimensions $N_S \times T$, where $N_C=6\ (2 \times 3)$ and $N_S=30$ denote the number of streams and the number of subcarriers per stream, respectively, and $T$ is the data length.
To eliminate the impact of Carrier Frequency Offset (CFO), we only reserve the CSI amplitude and ignore the CSI phase in our system as \cite{wang2015understanding} suggested.
\subsection{Raw CSI Processing}
\subsubsection{Long Delay Removal}
Channel Impulse Response (CIR), which is the inverse Fourier transformation of CFR, can characterize the propagation delays of the received signals.
Signals with long propagation delays are probably reflected by static or dynamic objects far away from the transceivers; these signals are useless and can distort the CSI amplitudes.
Theoretically, every signal with a certain propagation delay can be separated from CIR, but limited by the bandwidth of Wi-Fi channel (\textit{i.e.}, 20 MHz), the time resolution of CIR is approximately 1$/$20MHz $=$ 50 ns \cite{xie2015precise}.
Therefore, we can only distinguish a series of signal clusters with discrete time delays.
Besides, previous study shows the maximum delay in general indoor environment is less than 500 ns \cite{jin2010indoor}.
Thus, we transform each CSI measurement into time-domain CIR by Inverse Fast Fourier transformation (IFFT) and remove the components whose propagation delays are longer than 500 ns, and then we convert the processed CIR back to CSI by Fast Fourier Transformation (FFT).
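The long-delay removal step can be sketched as below: CSI (CFR) is transformed to CIR via IFFT, taps beyond 500 ns are zeroed, and the result is converted back via FFT. With the 20 MHz bandwidth each CIR tap spans roughly 50 ns, so the first 10 taps cover 0$\sim$500 ns; the 64-point size here is illustrative.

```python
# Sketch of long-delay removal: CFR -> CIR (IFFT), zero taps beyond 500 ns,
# then back to CFR (FFT). Tap spacing assumes a 20 MHz channel (~50 ns/tap).
import numpy as np

def remove_long_delays(csi, tap_ns=50, max_delay_ns=500):
    cir = np.fft.ifft(csi)                   # channel impulse response
    keep = int(max_delay_ns // tap_ns)       # 500 / 50 = 10 taps retained
    cir[keep:] = 0.0                         # drop long-delay components
    return np.fft.fft(cir)                   # back to frequency domain

rng = np.random.default_rng(3)
csi = rng.normal(size=64) + 1j * rng.normal(size=64)
cleaned = remove_long_delays(csi)
```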
\subsubsection{CSI Denoising}
After removing the paths with long delays, the CSI values still contain significant high-frequency noise and low-frequency interferences \cite{xu2018wistep}.
Moreover, the frequency components, \textit{i.e.}, the frequencies of CSI amplitude variation, induced by walking lie approximately within the range of 20$\sim$60 Hz on the 5 GHz Wi-Fi band \cite{wang2016gait,xu2018wistep}.
The proposed system therefore adopts the Butterworth bandpass filter, which guarantees high fidelity of the reserved signals in the passband, to eliminate the high-frequency and low-frequency noise.
The upper and lower cutoff frequencies of the Butterworth filter are empirically set as 90 Hz and 5 Hz, respectively.
The direct current component (0 Hz) of each subcarrier is also filtered by the bandpass filter.
Subsequently, the proposed system introduces weighted moving average to further denoise and smooth the CSI amplitudes.
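The denoising chain can be sketched with SciPy; the filter order and the moving-average weights below are assumptions, while the 5 Hz and 90 Hz cutoffs and the 1000 Hz sampling rate follow the text.

```python
# Sketch of CSI denoising: a Butterworth bandpass (5-90 Hz at Fs = 1000 Hz)
# followed by a simple weighted moving average. Order and weights assumed.
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0
b, a = butter(4, [5.0 / (FS / 2), 90.0 / (FS / 2)], btype="bandpass")

def denoise(amplitude, weights=(1, 2, 3, 2, 1)):
    filtered = filtfilt(b, a, amplitude)     # zero-phase bandpass, removes DC
    w = np.asarray(weights, dtype=float)
    return np.convolve(filtered, w / w.sum(), mode="same")

t = np.arange(0, 2, 1 / FS)
# Toy amplitude: DC offset + 40 Hz "walking" tone + 200 Hz noise tone.
raw = 3.0 + np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)
clean = denoise(raw)
```

The DC offset and the 200 Hz component are suppressed while the 40 Hz component, which lies in the walking band, survives.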
\subsubsection{CSI Refining}
As mentioned above, the time-series CSI amplitudes of the 30 subcarriers within one CSI stream come from one Tx-Rx antenna pair, which means they reflect quite similar multipath propagation of the Wi-Fi signals, and the amplitudes have correlated changing patterns.
However, different CSI subcarriers have slightly different carrier frequencies, which results in some phase shifts and a little different attenuations of CSI amplitudes in each CSI stream.
Directly using all the correlated data may push our system towards some deep-fading and unreliable subcarriers. To ensure better recognition results, the system utilizes Principal Component Analysis (PCA) to automatically discover the correlations between CSI amplitudes in each CSI stream and produce synthesized combinations (principal components) \cite{wang2015understanding}.
Fig. \ref{fig_walking_pca_waveform} displays comparisons between the original CSI amplitudes of the 7\# subcarrier and the first PCA components of all 30 subcarriers for the two walking instances mentioned above.
We can see that the first PCA component better depicts the changing pattern and trend of the CSI variation induced by walking.
Here the first three principal components, which capture the most details of walking movement, are selected as the refined CSI data.
For each receiver, we can obtain 6 groups of refined CSI data sequences in total, since there are 6 CSI streams.
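The refining step can be sketched with a plain covariance/eigendecomposition PCA over one stream's 30 subcarrier series; the synthetic stream below (a shared sinusoid with per-subcarrier scaling plus noise) is an assumption for illustration.

```python
# Sketch of CSI refining: PCA over the 30 subcarrier amplitude series of one
# CSI stream, keeping the first three principal components.
import numpy as np

def refine_stream(amplitudes):
    """amplitudes: (T, 30) array, one column per subcarrier."""
    centered = amplitudes - amplitudes.mean(axis=0)
    cov = centered.T @ centered / (len(centered) - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    top3 = eigvecs[:, ::-1][:, :3]           # three leading components
    return centered @ top3                   # (T, 3) refined data

rng = np.random.default_rng(4)
base = np.sin(2 * np.pi * 0.02 * np.arange(500))[:, None]  # shared dynamics
stream = base @ rng.uniform(0.5, 1.5, (1, 30)) + rng.normal(0, 0.05, (500, 30))
refined = refine_stream(stream)
```

The first component recovers the shared walking-induced dynamics, as Fig. \ref{fig_walking_pca_waveform} illustrates for real data.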
\begin{figure}
\centering
\subfigure[Amplitude variance of in-place walking w/o swing arms.]
{
\includegraphics[width=0.75\columnwidth]{./figures/var_walking_wo_arm.pdf}
\label{fig_var_walking_wo_arm}
}\\
\subfigure[Amplitude variance of swing arms w/o walking.]
{
\includegraphics[width=0.75\columnwidth]{./figures/var_swing_arm.pdf}
\label{fig_var_swing_arm}
}
\caption{The amplitude variance of CSI received by different receivers.}
\label{fig_csi_amplitude_variance}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[]
{
\includegraphics[width=0.45\columnwidth]{./figures/subject1_dd+_pwr_d0_seg_2.pdf}
\label{fig_pwr_d0_waveform}
}
\subfigure[]
{
\includegraphics[width=0.45\columnwidth]{./figures/subject1_dd+_pca_d0_seg_2.pdf}
\label{fig_pca_d0_waveform}
}\\
\subfigure[]
{
\includegraphics[width=0.45\columnwidth]{./figures/subject1_dd+_pwr_d1_seg_3.pdf}
\label{fig_pwr_d1_waveform}
}
\subfigure[]
{
\includegraphics[width=0.45\columnwidth]{./figures/subject1_dd+_pca_d1_seg_3.pdf}
\label{fig_pca_d1_waveform}
}
\caption{Comparisons between the original CSI amplitudes of the 7\# subcarrier and the first PCA components of all 30 subcarriers for the two walking instances: (i) walking away from and (ii) walking towards the devices. (a) and (b) are separately the original CSI amplitude and the $1^{st}$ PCA component of instance (i); (c) and (d) are separately the original CSI amplitude and the $1^{st}$ PCA component of instance (ii).}
\label{fig_walking_pca_waveform}
\end{figure}
\subsection{Walking Profile Generation}
\subsubsection{Walking Detection}
Compared with other daily activities, such as sitting down and cooking, walking has some special characteristics: (i) it involves the motions of many different body parts, (ii) the moving speeds of different body parts are relatively high, (iii) it can last for a relatively long time.
Since the frequencies induced by walking lie mainly between 20 Hz and 60 Hz, the Energy of Interest (EI), which equals the sum of the (normalized) squared magnitudes of the FFT coefficients in the frequency range 20$\sim$60 Hz, is calculated to detect walking movement against an appropriate threshold, as in \cite{xu2018wistep,zeng2016wiwho}.
The width of the FFT window is set as 256 ($\approx F_s/4$) based on the trade-off between detection accuracy and system response rate.
Whenever a walking movement is detected, the system starts its core functions immediately, namely generating the walking profile and further recognizing walking gait and direction.
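The EI computation over a 256-point window can be sketched as follows; the 0.5 detection threshold and the toy signals are assumptions, not the system's tuned values.

```python
# Sketch of walking detection: Energy of Interest (EI) as the fraction of
# normalized FFT energy in 20-60 Hz over a 256-point window at Fs = 1000 Hz.
import numpy as np

FS, WIN = 1000, 256

def energy_of_interest(window):
    spec = np.abs(np.fft.rfft(window)) ** 2
    spec /= spec.sum() + 1e-12               # normalize total energy to 1
    freqs = np.fft.rfftfreq(WIN, d=1.0 / FS)
    return spec[(freqs >= 20) & (freqs <= 60)].sum()

t = np.arange(WIN) / FS
walking = np.sin(2 * np.pi * 40 * t)         # energy concentrated near 40 Hz
idle = np.random.default_rng(5).normal(0, 1, WIN)  # broadband noise
detected = energy_of_interest(walking) > 0.5 > energy_of_interest(idle)
```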
\subsubsection{Spectrogram Generation}
The refined CSI data can only depict the signal changing patterns in the time domain, where the signal reflections of different body parts are mixed together.
When a subject is walking, the body parts (such as the legs, arms and torso) move at different speeds, and the signals reflected by different body parts carry quite different energy, since different parts have different reflection areas.
Specifically, the swinging leg (especially the lower leg) has the highest moving speed, while the supporting leg and the torso move slowly; the signals reflected from the torso have the strongest energy, whereas the arm reflections have much weaker energy.
By utilizing the Short-Time Fourier Transform (STFT), the system converts the time-domain refined CSI data sequence to the spectrogram of time-frequency domain.
In practice, the CSI data sequences are first segmented into fixed-length chunks, which usually overlap each other so as to reduce artifacts at the boundary.
Then each chunk is transformed by FFT, and the logarithmic magnitudes squared of the FFT yield the final spectrogram.
The spectrogram (of the $1^{st}$ PCA component) of the ``walking away" instance is illustrated in Fig. \ref{fig_walking_away_pca_spectrogram}, where the relatively ``hot"-colored areas (with strong energy) in each chunk mainly represent the torso reflections, and some orange or yellow areas indicate the reflections of the legs or arms.
The time-varying trend of energy is still maintained in the spectrogram, which is a good evidence to judge the walking direction.
Some advanced signal processing techniques used in \cite{wang2016gait}, like spectrogram enhancement, are not applied in our system to further refine the spectrogram; instead, we delegate more power to the RNN encoder-decoder network and let it learn to find the important information in the noisy data.
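The chunked-FFT procedure above can be sketched as follows; the chunk length, step, and Hann window are assumptions, not the paper's exact STFT settings.

```python
# Sketch of spectrogram generation: overlapping fixed-length chunks,
# windowed, FFT'd, and converted to log-magnitude (dB) per chunk.
import numpy as np

def spectrogram(signal, chunk=256, step=64):
    chunks = [signal[i:i + chunk] * np.hanning(chunk)
              for i in range(0, len(signal) - chunk + 1, step)]
    mags = np.abs(np.fft.rfft(np.stack(chunks), axis=1)) ** 2
    return 10.0 * np.log10(mags + 1e-12).T   # (freq_bins, time_chunks)

t = np.arange(0, 2, 1 / 1000.0)
pca_component = np.sin(2 * np.pi * (20 + 20 * t) * t)  # toy speed sweep
spec = spectrogram(pca_component)
```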
\begin{figure}
\centering
\includegraphics[width=0.85\columnwidth]{./figures/walking_away_pca_spectrogram.png}
\caption{The spectrograms of the refined CSI data of a ``walking away" instance.}
\label{fig_walking_away_pca_spectrogram}
\end{figure}
\subsubsection{Profile Splicing}
Recall that the original CSI amplitudes are denoised with a bandpass filter that reserves the frequencies within [5, 90] Hz, and that the frequency components induced by walking movement lie mainly in the range of 20$\sim$60 Hz.
The proposed method therefore keeps 64 of the 85 FFT points in the spectrogram, corresponding to a frequency range of roughly 10$\sim$70 Hz, and stacks the data of the three PCA components to form the primary \textit{walking profile} with dimensions $192 \times T$ for each CSI stream, where $T$ is the length of the spectrogram (\textit{i.e.}, the number of chunks).
To enhance the primary profile, the proposed system generates dynamical features (also known as delta features) by applying discrete time derivatives to the primary profile. Dynamical features have proven to perform excellently in many Automatic Speech Recognition (ASR) systems \cite{furui1986speaker,zheng2001comparison}, since they can reveal underlying connections and characteristics of adjacent speech frames.
By concatenating the first-order derivative, namely the first-order difference, of the primary profile, the proposed method gets a $384 \times T$ dimensional walking profile for each CSI stream of a certain receiver.
In addition, the proposed system splices the walking profiles corresponding to the same Tx-Rx antenna pair of the lower and the higher receivers, to add further walking dynamics into the profile.
Finally, a ``rich" and integrated walking profile with dimensions $768 \times T$ is constructed, and 6 integrated walking profiles in total can be produced in each processing cycle.
\subsubsection{Profile Reversion and Standardization}
From Fig. \ref{fig_walking_pca_waveform}, we can find that the CSI amplitudes have relatively symmetrical time-varying patterns when the subject walks in reverse directions, which inspires us to reverse the data sequences of walking profiles along the time dimension to augment the instance data of the opposite walking movements.
Usually, the performance of neural networks such as RNNs improves with more training data \cite{mikolov2012statistical}; in the proposed system, the profile reversion operation doubles the data available for training and recognition, and it is expected to boost the performance of our system.
Before feeding data to the neural networks, it is necessary to standardize the data in advance, \textit{i.e.}, putting all the variables on the relatively same scale (with zero mean and unit variance), which can help to speed up the convergence and improve the performance of the networks \cite{shanker1996effect}.
The statistical standardization method is applied in the system: specifically, the proposed system calculates the global mean $\mu_{wp}$ and standard deviation $\sigma_{wp}$ of an integrated walking profile, subtracts $\mu_{wp}$ from each variable of the profile, and then divides the difference by $\sigma_{wp}$.
With all the preparation work done, we can now build our attention-based RNN encoder-decoder networks.
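The reversion and standardization operations can be sketched directly:

```python
# Sketch of profile reversion (time-axis flip, augmenting the opposite
# walking direction) and global statistical standardization.
import numpy as np

def reverse_profile(profile):
    return profile[:, ::-1]                  # flip along the time dimension

def standardize(profile):
    mu, sigma = profile.mean(), profile.std()
    return (profile - mu) / (sigma + 1e-12)  # zero mean, unit variance

rng = np.random.default_rng(7)
wp = rng.normal(3.0, 2.0, size=(768, 50))    # placeholder integrated profile
wp_std = standardize(wp)
wp_rev = reverse_profile(wp_std)
```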
\subsection{Model Customization for Gait and Direction Recognition}
The existing attention-based RNN encoder-decoder neural networks are specialized for Natural Language Processing (NLP) applications, especially for the machine translation task \cite{bahdanau2014neural}, and the trained networks can adaptively concentrate on important words in the source sentence when generating the target word.
This distinguishing characteristic motivates us to create a cycle-independent gait and direction recognition system given arbitrarily segmented CSI data.
However, there are two main differences between the tasks of machine translation and ours.
Firstly, machine translation is a single task learning problem, while our system aims to deal with two different tasks (direction and gait recognitions), namely multitask learning \cite{caruana1997multitask}.
Secondly, in machine translation there exist, apart from the source sentence, statistical relations among the target words, which are essentially described by a statistical language model.
Therefore, at the decoder side, each predicted target word is subsequently fed as input to predict the next word, as illustrated in Fig. \ref{fig_attention_based_RNN_encoder_decoder}.
In our system, however, there is no explicit relation between human gait and walking direction, so in order to jointly train the networks for our objectives, we need to customize our own model.
\begin{itemize}
\item \textit{\textbf{Encoder}}: In the system, the input of the RNN encoder is the CSI data sequence (the integrated $768 \times T$ dimensional walking profile), which is denoted as $\mathbf{x} = \{x^1, \cdots, x^T\}$, and each $x^t\ (t \in [1, T])$ is a vector with dimensions $768 \times 1$.
The output of the RNN decoder is denoted as $\mathbf{y} = \{y^1, y^2\}$, where $y^1 \in \mathbb{R}^{1 \times n_d}$ and $y^2 \in \mathbb{R}^{1 \times n_g}$, and $n_d$ and $n_g$ are the number of walking directions and the number of subjects, respectively.
Moreover, we can define more variables for many other tasks such as gender classification.
In this work, we mainly focus on gait and direction recognition.
Considering that human gait and walking direction have no explicit relation with each other, the conditional probability of equation (\ref{eqn_attention_output_conditional_distribution}) needs to be rewritten as
\begin{equation}\label{eqn_model_output_conditional_distribution}
P(y^n| c^n) = g(h_y^n, c^n), n \in [1, 2] \mbox{,}
\end{equation}
and the computation of $h_y^n$ is likewise decoupled from the previously predicted target, namely $h_y^n = f(h_y^{n-1}, c^n)$.
As suggested in \cite{bahdanau2014neural}, a bidirectional RNN (BiRNN) framework is utilized to create our RNN encoder.
A BiRNN presents each input sequence forwards and backwards to two separate recurrent hidden layers, which are connected to the same output layer, and it is reported to perform better than a unidirectional RNN \cite{schuster1997bidirectional,graves2012supervised}.
In BiRNN, the hidden state $h_x^t$ is expressed as the concatenation of the forward hidden state $\overrightarrow{h_x^t}$ and the backward one $\overleftarrow{h_x^t}$, \textit{i.e.}, $h_x^t = [\overrightarrow{h_x^t}; \overleftarrow{h_x^t}]$.
\item \textit{\textbf{Decoder}}: As for the decoder, the computation of the attention weight $\omega^{n,t}$ and the attention vector $c^n$ does not depend on the predicted targets, which implies that the proposed method can directly and faithfully implement the computation of attention weights and attention vectors as described in \cite{bahdanau2014neural}, without any major modification.
The proposed method is expected to learn and compute valuable attention weights which enable the system to automatically align with some critical clips of the input data sequence for the two different tasks.
Due to the lack of future context, a unidirectional RNN which has the same number of hidden layers as the encoder is employed in the proposed system.
Thus, at the end of the encoding stage, the bidirectional hidden state $h_x^T$ of the encoder's last input needs to be transformed to match the size of the unidirectional initial hidden state $h_y^0$ of the decoder; in this work, the transformation simply adds $\overrightarrow{h_x^T}$ and $\overleftarrow{h_x^T}$.
For the networks with multiple hidden layers, the transformation can be executed on each layer of the networks accordingly.
Followed by the hidden layer(s), two full connection layers are adopted for the two different tasks.
\end{itemize}
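The decoder-initialization step described above, merging the BiRNN's final forward and backward states per layer, can be sketched as follows; the layer count and hidden size match the implementation reported later, but the additive merge shown is the only operation the text specifies.

```python
# Sketch of decoder initialization: the encoder's final forward and backward
# hidden states are added per layer to form the unidirectional decoder's
# initial state h_y^0.
import numpy as np

N_LAYERS, D_H = 3, 1024

def init_decoder_state(h_fwd, h_bwd):
    """h_fwd/h_bwd: (n_layers, d_h) final states of the BiRNN directions."""
    return h_fwd + h_bwd                     # additive merge, per layer

rng = np.random.default_rng(8)
h_y0 = init_decoder_state(rng.normal(size=(N_LAYERS, D_H)),
                          rng.normal(size=(N_LAYERS, D_H)))
```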
\section{Implementation \& Evaluation}
\subsection{Experiment Setup}
In order to conveniently collect more CSI data of walking in different directions, we choose a spacious laboratory of size 9.5m$\times$7.8m as the CSI data collection environment, which is shown in Fig. \ref{fig_experiment_environment}.
The transmitter and the receivers are placed abreast at the marked positions and the distance between each receiver and the transmitter is 1.0 m.
As we mentioned in subsection \ref{subsec_csi_collection}, for concentrating on different body parts of a walker, the two receivers are separately placed at heights of 0.5 m and 1.0 m above the ground level, and the transmitter is placed at a height of 0.75 m.
The devices all run on the 149\# Wi-Fi channel (central frequency 5.745 GHz), which is free of the dynamic frequency selection (DFS) and transmit power control (TPC) constraints of the Federal Communications Commission (FCC).
By using the CSI tool \cite{halperin2011tool}, the transmitter and the receivers are configured to work in monitor mode, where the data packets injected by the transmitter can simultaneously be captured by the two receivers.
Besides, the clocks of the two receivers are synchronized by the Network Time Protocol (NTP) so as to obtain CSI measurements with synchronized timestamps.
A deep learning server, which has two high-performance NVIDIA GeForce GTX 1080Ti Graphics Processing Units (GPUs), is connected to the receivers by cables.
The server is specialized for data processing and model training.
A 6.0m$\times$6.0m grid-layout area, 1.5 m away from the transmitter, is planned for the walking experiment, and the size of each grid is 1.0m$\times$1.0m.
As shown in Fig. \ref{fig_experiment_environment}, 12 specific paths (marked with solid lines) are assigned for subjects to walk on, and the labels of 8 walking directions are annotated on the bottom left of the layout plan.
The detailed experiment process is described in the next subsection.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{./figures/scenario.pdf}
\caption{Experiment environment.}
\label{fig_experiment_environment}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=0.65\columnwidth]{./figures/walking_time_distribution.pdf}
\caption{Time distribution of subjects walking on different paths.}
\label{fig_time_distribution}
\end{figure}
\subsection{Experiment Process}
\subsubsection{Dataset Creation}
In this work, 11 subjects (8 male and 3 female graduate students) are invited to carry out the walking experiment.
Out of privacy concerns, we do not record private information such as the age, height and weight of each subject.
Each time, a subject is asked to walk between the two endpoints of one straight path in his or her natural way, for example between D and D' of path DD' in Fig. \ref{fig_experiment_environment}.
Meanwhile, each receiver sends its CSI measurements companied by timestamps to the server, and the server stores the CSI data and the timestamps for further processing.
When the subject arrives at an endpoint, he or she turns around and walks back to the other endpoint.
For each path, the subject is required to walk for 5 minutes without a break (an alarm clock is provided as a reminder) before turning to another path.
Therefore, we obtain 60 minutes of data involving 12 walking paths and 8 different directions from each subject.
During the experiment, two video cameras are used to record the whole experimental process, as in \cite{brajdic2013walk}, and the CSI data corresponding to walking (excluding turning and breaks) are manually labeled by our recorders.
We promise that all the video recordings are used only for the experiment and will be deleted to protect the subjects' privacy as soon as the paper is finished.
A CSI data sequence covering a single-trip walk along a path is regarded as one CSI walking instance, and we collect 10,626 walking instances in total.
Since the shortest and longest of the 12 paths are about 5.66 m and 8.49 m long, respectively, the time distribution of all subjects walking on the different paths, illustrated in Fig. \ref{fig_time_distribution}, ranges from a minimum single-trip walking time of 4.135 s to a maximum of 10.626 s.
For each subject, we randomly select 20\% of the walking instances corresponding to a specific path and direction for validation and testing, and the remaining instances are for training.
Given the 1000 Hz sampling rate of the system, a 4000-point sliding-window segmentation with a step size of 500 points is performed on each labeled CSI walking instance to obtain multiple slices used for training, validation and testing.
Note that thanks to the recurrent structure of RNN, the data with arbitrary lengths can be handled by our model.
By conducting profile splicing and reversion, we obtain 172,088 integrated walking profiles of all 11 subjects for the training set.
Moreover, we bisect the slices reserved for validation and testing to generate our validation and testing sets, each of which contains 14,858 profiles of all subjects.
We randomly select the data of 8 subjects to train and evaluate our model.
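The 4000-point sliding-window segmentation with a 500-point step can be sketched as follows; the 6-second instance length used here is an illustrative value within the reported 4.135$\sim$10.626 s range.

```python
# Sketch of the sliding-window segmentation: 4000-point windows with a
# 500-point step over one labeled walking instance (768 x T profile).
import numpy as np

def segment(instance, win=4000, step=500):
    return [instance[:, i:i + win]
            for i in range(0, instance.shape[1] - win + 1, step)]

rng = np.random.default_rng(9)
instance = rng.normal(size=(768, 6000))      # ~6 s of data at 1000 Hz
slices = segment(instance)
```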
\subsubsection{Model Training}
The training of our attention-based RNN encoder-decoder model is performed in PyTorch using one GTX 1080Ti GPU.
Given the 11 GB memory size and the 11.3 TFLOPs (in single precision) processing power of an GTX 1080Ti GPU, it enables us to train a complex and relatively deep RNN encoder-decoder model.
As a reference, a PyTorch implementation of the attention-based RNN encoder-decoder for machine translation \cite{bahdanau2014neural} is available on GitHub\footnote{https://github.com/spro/practical-pytorch}.
In our specific implementation, both the encoder and the decoder use GRU architectures, which are reported to have a simpler structure but better performance than LSTM \cite{jozefowicz2015empirical}, and the number of hidden layers is set to 3 in both the encoder and the decoder.
The encoder and decoder of the proposed system have 1024 hidden units each, and the encoder has 256 input units while the decoder has no input layer.
We use a minibatch stochastic gradient descent (SGD) algorithm to train the encoder and decoder, where each SGD update direction is computed using a minibatch of 64 instances.
The learning rate is decreased from 1e-4 to 1e-6 according to the number of training epochs completed, and the total number of training epochs is set to 32.
The validation set is employed to check if the error is within a reasonable range.
To prevent our model from overfitting, effective techniques such as dropout \cite{pham2014dropout} and training with noise \cite{an1996effects} are introduced in the system.
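The text does not pin down the exact decay rule for the learning rate; one plausible realization, sketched here in Python under the assumption of geometric decay between the stated endpoints, is:

```python
def lr_schedule(epoch, total_epochs=32, lr_start=1e-4, lr_end=1e-6):
    """Geometric learning-rate decay from lr_start to lr_end.

    The paper only states that the rate is decreased from 1e-4 to 1e-6
    over 32 epochs; the geometric interpolation here is our assumption.
    """
    frac = epoch / (total_epochs - 1)  # 0.0 at epoch 0, 1.0 at the last epoch
    return lr_start * (lr_end / lr_start) ** frac

rates = [lr_schedule(e) for e in range(32)]
```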
\subsubsection{Evaluation Metrics}
To evaluate the performance of the proposed system, two major metrics are employed, namely $Accuracy$ and the $F_1$ score. $Accuracy$ serves as the primary measure of the system's gait and direction recognition ability, since it only takes the true positives of each class into consideration; the $F_1$ score, the harmonic mean of precision and recall, is used to evaluate the comprehensive performance of the system.
Besides, the confusion matrices are also posted to illustrate the detailed recognition results.
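Both metrics can be computed directly from a confusion matrix. The Python sketch below assumes per-class accuracy is the true-positive rate of each class (our reading of the description above), while the per-class $F_1$ follows the usual precision/recall formula:

```python
def per_class_metrics(cm):
    """Per-class accuracy (read here as recall) and F1 from a confusion matrix.

    `cm[i][j]` counts instances of true class i predicted as class j.
    Treating per-class accuracy as TP / (row total) is our reading of the
    text ("only takes true positives of each class into consideration").
    """
    n = len(cm)
    accs, f1s = [], []
    for i in range(n):
        tp = cm[i][i]
        row = sum(cm[i])                       # instances of true class i
        col = sum(cm[j][i] for j in range(n))  # instances predicted as class i
        recall = tp / row if row else 0.0
        precision = tp / col if col else 0.0
        accs.append(recall)
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return accs, f1s

# Toy 2-class example: 8/10 and 9/10 instances classified correctly.
accs, f1s = per_class_metrics([[8, 2], [1, 9]])
```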
\begin{figure}[!t]
\centering
\subfigure[Human gait recognition.]
{
\includegraphics[width=0.45\columnwidth]{./figures/gait_cm.png}
\label{fig_model_dir_cm}
}\quad
\subfigure[Walking direction recognition.]
{
\includegraphics[width=0.45\columnwidth]{./figures/dir_cm.png}
\label{fig_model_gait_cm}
}
\caption{Confusion matrices of gait and direction recognition results.}
\label{fig_model_confusion_matrces}
\end{figure}
\begin{figure}[!t]
\centering
\subfigure[Human gait recognition.]
{
\includegraphics[width=0.65\columnwidth]{./figures/gait_acc_f1.pdf}
\label{fig_model_gait_result}
}\quad
\subfigure[Walking direction recognition.]
{
\includegraphics[width=0.65\columnwidth]{./figures/dir_acc_f1.pdf}
\label{fig_model_dir_result}
}
\caption{The detailed recognition accuracies and $F_1$ scores of human gait and walking direction.}
\label{fig_model_dir_gait_results}
\end{figure}
\subsection{Experiment Results}
The performance of the proposed system is evaluated on the testing dataset, and each instance in the test set has two labels, \textit{i.e.}, subject label (from ``S1'' to ``S8'') and direction label (from ``D0'' to ``D7'').
The confusion matrices of the human gait and walking direction recognition results are shown in Fig. \ref{fig_model_confusion_matrces}, where the diagonal entries represent the numbers of true positives of the different classes; intuitively, we can see that most of the instances are correctly predicted by the system.
Fig. \ref{fig_model_dir_gait_results}, which is derived from the confusion matrices, illustrates the gait and direction recognition accuracies and $F_1$ scores for specific classes.
The proposed system achieves relatively high accuracies (all above 95\%) and high $F_1$ scores for all the classes on the different tasks, especially for the direction recognition task.
Concretely, the average gait recognition accuracy and $F_1$ score over the 8 selected subjects are 97.68\% and 89.69\% respectively, while the average accuracy and $F_1$ score for direction recognition are 98.75\% and 95.06\%, respectively.
These results demonstrate that the attention-based RNN encoder-decoder architecture can achieve promising results on the human gait and walking direction recognition tasks.
\begin{figure}[!t]
\centering
\subfigure[Instance of ``S8'' and ``D1''.]
{
\includegraphics[width=0.45\columnwidth]{./figures/attention_D1_S8.jpg}
\label{fig_attention_weight_D1_S10}
}\quad
\subfigure[Instance of ``S4'' and ``D4''.]
{
\includegraphics[width=0.45\columnwidth]{./figures/attention_D4_S4.jpg}
\label{fig_attention_weight_D4_S4}
}
\caption{Visualization of attention weights for 2 test instances.}
\label{fig_attention_weight}
\end{figure}
\subsection{Attention Visualization}
Given a certain test instance, the proposed system first encodes the instance into a particular attention vector, which is the weighted sum of all the hidden states of the system's encoder; the system then decodes the attention vector and outputs the prediction for a specific task.
The attention weights computed by the equation \ref{eqn_attention_weight} score how well the walking profile at each time step and the current prediction match.
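Concretely, the normalization of alignment scores into attention weights is a softmax; a minimal Python sketch (the scores below are illustrative, not taken from the trained model) is:

```python
import math

def attention_weights(scores):
    """Softmax-normalize alignment scores into attention weights.

    `scores[t]` measures how well the walking profile at time step t
    matches the current prediction; the weights sum to 1, and the
    weighted sum of encoder hidden states gives the attention vector.
    """
    m = max(scores)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Illustrative scores: higher-scoring time steps get larger weights.
weights = attention_weights([0.1, 2.0, 0.3, 1.5])
```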
Fig. \ref{fig_attention_weight} visualizes the attention weights (in the middle of each subfigure, where the upper row and the lower row correspond to direction and gait, respectively) of 2 test instances in grayscale (where 0 is black and 1 is white); the top and bottom parts of each subfigure are the spectrograms of the highly-placed Rx (HighRx) and the lowly-placed Rx (LowRx), respectively.
We observe that (i) for different recognition tasks, the computed weights are different, which means the proposed system can automatically focus its attention on different time steps of the spectrograms when coping with different tasks; and (ii) the large weights (the brighter entries) basically align with high-energy clips in the spectrograms, and these clips usually contain the most critical and apparent features of human walking dynamics.
Based on these observations, we conclude that even without partitioning the CSI data sequence into cycle-wise time slices, the proposed system can still learn to adaptively align with the important parts of the sequence and realize cycle-independent gait and walking direction recognition.
\section{Limitations}
Although we have demonstrated the feasibility and shown promising results of jointly recognizing human walking gait and direction with the attention-based RNN encoder-decoder framework in Wi-Fi networks, there are still some limitations.
We only evaluate the performance of our system on a group of 8 subjects, and some external influence factors, such as footwear, floor surface and room layout, as well as internal factors, such as the length of the walking profile, have not been taken into consideration.
We plan to explore the impacts of the subject group size and these specific factors in our future work.
Limited by the bandwidth and synchronization problems of Wi-Fi networks, if multiple people walk at the same time, the signals reflected off the different people are mixed together at the receiver side.
Like many existing Wi-Fi-based systems, we have not yet found an effective method to separate the mixed Wi-Fi signals of multiple moving individuals, so for now the proposed system cannot be applied in scenarios where multiple individuals walk simultaneously.
\section{Conclusions}
By adopting the attention-based RNN encoder-decoder framework, we proposed a new cycle-independent human gait and walking direction recognition system which was jointly trained for walking gait and direction recognition purposes.
For capturing more human walking dynamics, two receivers and one transmitter were deployed in different spatial layouts.
The CSI measurements from the two receivers were first gathered together and refined to form an integrated walking profile.
Then, the RNN encoder read and encoded the walking profile into primary feature vectors, based on which the decoder computed different attention vectors for different recognition tasks.
With the attention scheme, the proposed system could learn to adaptively align with different critical clips of the CSI data sequence for walking gait and direction recognitions.
We implemented our system on commodity Wi-Fi devices in an indoor environment, and the experimental results demonstrated that the proposed system could achieve average $F_1$ scores of 89.69\% for gait recognition from a group of 8 subjects and 95.06\% for direction recognition from 8 directions; moreover, the average accuracies of these two recognition tasks both exceeded 97\%.
Our system is expected to enable more practical and interesting applications.
\section{Introduction}
Let $\D$ stand for the disk $\{z\in\C:|z|<1\}$, $\T$ for its boundary, and $m$ for the normalized Lebesgue measure on $\T$. For $0<p\le\infty$, we denote by $H^p$ the classical Hardy $p$-space of holomorphic functions on $\D$ (see \cite[Chapter II]{G}), viewed also as a subspace of $L^p(\T,m)$.
\par Now suppose $\th$ is an {\it inner function} on $\D$, that is, $\th\in H^\infty$ and $|\th|=1$ almost everywhere on $\T$. The associated {\it model subspace} $K^p_\th$ is then defined, this time for $1\le p\le\infty$, by putting
\begin{equation}\label{eqn:defmodsub}
K^p_\th:=H^p\cap\th\ov{H^p_0},
\end{equation}
where $H^p_0:=zH^p$ and the bar denotes complex conjugation. In other words, $K^p_\th$ is formed by those $f\in H^p$ for which the function
$$\widetilde f:=\ov z\ov f\th$$
(living a.e. on $\T$) is in $H^p$. Moreover, the map $f\mapsto\widetilde f$ leaves $K^p_\th$ invariant.
\par We may also rephrase the above definition by saying that $K^p_\th$ is the kernel (in $H^p$) of the Toeplitz operator with symbol $\ov\th$. For further -- and deeper -- operator theoretic connections, we refer to \cite{N}. Finally, we mention the direct sum decomposition formula
\begin{equation}\label{eqn:dirsumdec}
H^p=K^p_\th\oplus\th H^p,\qquad1<p<\infty
\end{equation}
(with orthogonality for $p=2$), which is a consequence of the M. Riesz projection theorem; cf. \cite[Chapter III]{G}.
\par From now on, the role of $\th$ will be played by an {\it interpolating Blaschke product}, i.e., by a function of the form $$B(z)=B_{\{a_k\}}(z):=\prod_k\f{|a_k|}{a_k}\f{a_k-z}{1-\ov a_kz},$$
with $\{a_k\}\subset\D$ and $\sum_k(1-|a_k|)<\infty$, satisfying
$$\inf_k|B'(a_k)|\,(1-|a_k|)>0.$$
The zero sequences $\{a_k\}$ of such products are precisely the {\it interpolating sequences} for $H^\infty$ (see \cite[Chapter VII]{G}), meaning that
\begin{equation}\label{eqn:trcarl}
H^\infty\big|_{\{a_k\}}=\ell^\infty.
\end{equation}
\par Here and throughout, we use the notation $X\big|_{\{a_k\}}$ (where $X$ is a given function space on $\D$) for the set of all sequences $\{w_k\}\subset\C$ such that the interpolation problem
\begin{equation}\label{eqn:interprob}
f(a_k)=w_k\qquad (k=1,2,\dots)
\end{equation}
has a solution $f\in X$. Equivalently, $X\big|_{\{a_k\}}$ consists of the traces $f\big|_{\{a_k\}}=\{f(a_k)\}$ that arise as $f$ ranges over $X$.
\par Now, for $0<p<\infty$, the appropriate $H^p$ analogue of \eqref{eqn:trcarl} is known to be
\begin{equation}\label{eqn:shashi}
H^p\big|_{\{a_k\}}=\ell^p_1(\{a_k\}),
\end{equation}
where $\ell^p_1(\{a_k\})$ stands for the set of sequences $\{w_k\}\subset\C$ with
\begin{equation}\label{eqn:carlpsum}
\sum_k|w_k|^p\,(1-|a_k|)<\infty.
\end{equation}
Indeed, a theorem of Shapiro and Shields (see \cite{SS}) tells us that, for $1\le p<\infty$, \eqref{eqn:shashi} holds if and only if $\{a_k\}$ is an interpolating sequence for $H^\infty$; the case $0<p<1$ is, in fact, no different. Letting $B=B_{\{a_k\}}$ be the associated (interpolating) Blaschke product, we may then combine \eqref{eqn:shashi} with the observation that
$$H^p\big|_{\{a_k\}}=K^p_B\big|_{\{a_k\}},\qquad1<p<\infty$$
(which follows upon applying \eqref{eqn:dirsumdec} with $\th=B$), to deduce that
\begin{equation}\label{eqn:kpblp}
K^p_B\big|_{\{a_k\}}=\ell^p_1(\{a_k\}),\qquad1<p<\infty.
\end{equation}
\par The trace space $K^p_B\big|_{\{a_k\}}$ with $1<p<\infty$ is thereby nicely characterized, but the endpoints $p=1$ and $p=\infty$ present a harder problem. The latter case was recently settled by the author in \cite{DARX}. There, it was shown that a sequence $\{w_k\}\in\ell^\infty$ belongs to $K^\infty_B\big|_{\{a_k\}}$ if and only if the numbers
\begin{equation}\label{eqn:defwtilde}
\widetilde w_k:=\sum_j\f{w_j}{B'(a_j)\cdot(1-a_j\ov a_k)}\qquad(k=1,2,\dots)
\end{equation}
satisfy $\{\widetilde w_k\}\in\ell^\infty$. We mention in passing that a similar method, coupled with \cite[Theorem 5.2]{DSpb}, yields the identity
$$K_{*B}\big|_{\{a_k\}}=\left\{\{w_k\}\in\ell^2_1(\{a_k\}):\,
\{\widetilde w_k\}\in\ell^\infty\right\},$$
where $K_{*B}:=K^2_B\cap\bmo(\T)$.
\par It is the other extreme (i.e., the case $p=1$) that puzzles us. We feel that the numbers $\widetilde w_k$, defined as in \eqref{eqn:defwtilde}, must again play a crucial role in the solution, and we now proceed to discuss this in detail.
\section{Problems and discussion}
\medskip\noindent\textbf{Problem 1.} Given an interpolating Blaschke product $B$ with zeros $\{a_k\}$, characterize the set $K^1_B\big|_{\{a_k\}}$.
\medskip An equivalent formulation is as follows.
\medskip\noindent\textbf{Problem 1$'$.} For $B=B_{\{a_k\}}$ as above, describe the functions from $K^1_B+BH^1$ in terms of their values at the $a_k$'s.
\medskip The equivalence between the two versions is due to the obvious fact that, given a function $f\in H^1$, we have $f\in K^1_B+BH^1$ if and only if $f\big|_{\{a_k\}}=g\big|_{\{a_k\}}$ for some $g\in K^1_B$.
\smallskip It should be noted that, whenever $B=B_{\{a_k\}}$ is an {\it infinite} interpolating Blaschke product, the trace space $K^1_B\big|_{\{a_k\}}$ is strictly smaller than $H^1\big|_{\{a_k\}}=\ell^1_1(\{a_k\})$, a result that can be deduced from \cite[Theorem 3.8]{Steg}. On the other hand, it was shown by Vinogradov in \cite{V} that
$$K^1_{B^2}\big|_{\{a_k\}}=\ell^1_1(\{a_k\})$$
for each interpolating Blaschke product $B=B_{\{a_k\}}$. The subspace $K^1_{B^2}$ is, however, essentially larger than $K^1_B$.
\par In light of our solution to the $K^\infty_B$ counterpart of Problem 1, as described in Section 1 above, the following conjecture appears to be plausible.
\medskip\noindent\textbf{Conjecture 1.} Let $B=B_{\{a_k\}}$ be an interpolating Blaschke product. In order that a sequence of complex numbers $\{w_k\}$ belong to $K^1_B\big|_{\{a_k\}}$, it is necessary and sufficient that
\begin{equation}\label{eqn:carlsumw}
\sum_k|w_k|\,(1-|a_k|)<\infty
\end{equation}
and
\begin{equation}\label{eqn:carlsumwtilde}
\sum_k|\widetilde w_k|\,(1-|a_k|)<\infty
\end{equation}
(i.e., that both $\{w_k\}$ and $\{\widetilde w_k\}$ be in $\ell^1_1(\{a_k\})$).
\medskip As a matter of fact, the necessity of \eqref{eqn:carlsumw} and \eqref{eqn:carlsumwtilde} is fairly easy to verify. First of all, letting $\de_k$ denote the unit point mass at $a_k$, we know that the measure $\sum_k(1-|a_k|)\,\de_k$ is Carleson (see \cite[Chapter VII]{G}), and so
\begin{equation}\label{eqn:carlsumfak}
\sum_k|F(a_k)|\,(1-|a_k|)<\infty
\end{equation}
for every $F\in H^1$.
\par Now suppose that we can solve the interpolation problem \eqref{eqn:interprob} with a function $f\in K^1_B$. An application of \eqref{eqn:carlsumfak} with $F=f$ then yields \eqref{eqn:carlsumw}. Furthermore, putting $g=\widetilde f(:=\ov z\ov fB)$ a.e. on $\T$, we know that $g\in K^1_B(\subset H^1)$; in particular, $g$ extends to $\D$ by the Cauchy integral formula. Therefore,
\begin{equation}\label{eqn:gcauchy}
\begin{aligned}
\ov{g(a_k)}&=\int_{\T}\f{\ov{g(\ze)}}{1-\ze\ov{a_k}}\,dm(\ze)\\
&=\int_{\T}\f{\ze f(\ze)\ov{B(\ze)}}{1-\ze\ov{a_k}}\,dm(\ze)\\
&=\f1{2\pi i}\int_{\T}\f{f(\ze)}{B(\ze)\cdot(1-\ze\ov{a_k})}\,d\ze,
\end{aligned}
\end{equation}
and computing the last integral by residues we find that
\begin{equation}\label{eqn:bargak}
\ov{g(a_k)}=\sum_j\f{w_j}{B'(a_j)\cdot(1-a_j\ov a_k)}=\widetilde w_k,
\end{equation}
for each $k\in\N$. Finally, we apply \eqref{eqn:carlsumfak} with $F=g$ and combine this with \eqref{eqn:bargak} to arrive at \eqref{eqn:carlsumwtilde}.
\par Thus, conditions \eqref{eqn:carlsumw} and \eqref{eqn:carlsumwtilde} are indeed necessary in order that $\{w_k\}$ be in $K^1_B\big|_{\{a_k\}}$. It is the sufficiency of the two conditions that presents an open problem.
\par In contrast to the case $1<p<\infty$, where \eqref{eqn:carlpsum} remains valid upon replacing $\{w_k\}$ by $\{\widetilde w_k\}$, no such thing is true for $p=1$. In other words, \eqref{eqn:carlsumw} no longer implies \eqref{eqn:carlsumwtilde}, so the two conditions together are actually stronger than \eqref{eqn:carlsumw} alone. Consider, as an example, the \lq\lq radial zeros" situation where the interpolating sequence $\{a_k\}$ satisfies
\begin{equation}\label{eqn:radialcase}
0\le a_1<a_2<\dots<1
\end{equation}
(in which case the quantities $1-a_k$ must decrease exponentially, see \cite[Chapter VII]{G}). Further, let $\{\ga_k\}$ be a sequence of nonnegative numbers such that $\sum_k\ga_k<\infty$ but $\sum_k k\ga_k=\infty$, and let $w_k=B'(a_k)\cdot\ga_k$. Then
$$(1-|a_k|)\cdot|w_k|=(1-|a_k|)\cdot|B'(a_k)|\cdot\ga_k\le\ga_k,\qquad k\in\N$$
(since $B$ is a unit-norm $H^\infty$ function), and \eqref{eqn:carlsumw} follows. On the other hand,
$$\widetilde w_k=\sum_j\f{\ga_j}{1-a_ja_k}\ge\sum_{j=k}^\infty\f{\ga_j}{1-a_ja_k}
\ge\f1{2(1-a_k)}\sum_{j=k}^\infty\ga_j,$$
because for $j\ge k$ we have $a_j\ge a_k$, and hence
$$1-a_ja_k\le1-a_k^2\le2(1-a_k).$$
Consequently,
$$\sum_k|\widetilde w_k|\,(1-|a_k|)=\sum_k\widetilde w_k\,(1-a_k)
\ge\f12\sum_k\sum_{j=k}^\infty\ga_j=\f12\sum_j j\ga_j=\infty,$$
which means that \eqref{eqn:carlsumwtilde} breaks down.
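\par For concreteness, an admissible choice of $\{\ga_k\}$ in the above construction is $\ga_k=k^{-2}$, since then
$$\sum_k\ga_k=\f{\pi^2}{6}<\infty\qquad\mbox{while}\qquad\sum_k k\ga_k=\sum_k\f1k=\infty,$$
so the resulting sequence $w_k=B'(a_k)\cdot\ga_k$ satisfies \eqref{eqn:carlsumw} but not \eqref{eqn:carlsumwtilde}.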
\par Our next problem involves the notion of an ideal sequence space, which we now recall. Suppose $\mathcal S$ is a vector space consisting of sequences of complex numbers. We say that $\mathcal S$ is {\it ideal} if, whenever $\{u_j\}\in\mathcal S$ and $\{v_j\}(\subset\C)$ is a sequence satisfying $|v_j|\le|u_j|$ for all $j$, it follows that $\{v_j\}\in\mathcal S$. Roughly speaking, this property means that the sequences belonging to $\mathcal S$ admit a nice description in terms of a certain \lq\lq size condition" on their components.
\par When $1<p<\infty$, identity \eqref{eqn:kpblp} tells us that the trace space $K^p_B\big|_{\{a_k\}}$ coincides with $\ell^p_1(\{a_k\})$ and is therefore ideal. However, assuming that Conjecture 1 is true and arguing by analogy with the $p=\infty$ case, as studied in \cite{DARX}, we strongly believe that $K^1_B\big|_{\{a_k\}}$ is no longer ideal in general. We would even go on to conjecture that this last trace space is never ideal, as soon as $B=B_{\{a_k\}}$ is an infinite Blaschke product. The following problem arises then naturally.
\medskip\noindent\textbf{Problem 2.} Given an interpolating Blaschke product $B$ with zeros $\{a_k\}$, determine the maximal (largest possible) ideal sequence space contained in $K^1_B\big|_{\{a_k\}}$.
\medskip In particular, the calculations above pertaining to the special case \eqref{eqn:radialcase} show that, in this \lq\lq radial zeros" situation, the maximal ideal subspace of $K^1_B\big|_{\{a_k\}}$ must be
$$\mathcal M:=\left\{\{w_k\}:\,\sum_kk|w_k|\,(1-a_k)<\infty\right\},$$
provided that the validity of Conjecture 1 is established. Strictly speaking, by saying that $\mathcal M$ is {\it maximal} we mean that $\mathcal M\subset K^1_B\big|_{\{a_k\}}$, while every ideal sequence space $\mathcal S$ with $\mathcal S\subset K^1_B\big|_{\{a_k\}}$ is actually contained in $\mathcal M$.
\par Our last problem (posed in a somewhat vaguer form) deals with the map, to be denoted by $\mathcal T_{\{a_k\}}$, that takes $\{w_k\}$ to $\{\widetilde w_k\}$; here $\widetilde w_k$ is again defined by \eqref{eqn:defwtilde}, with $B=B_{\{a_k\}}$.
\medskip\noindent\textbf{Problem 3.} Given an interpolating Blaschke product $B$ with zeros $\{a_k\}$, study the action of the linear operator
$$\mathcal T_{\{a_k\}}:\,\{w_k\}\mapsto\{\widetilde w_k\}$$
on various (natural) sequence spaces.
\medskip We know from \eqref{eqn:bargak} that, whenever the $w_k$'s satisfy \eqref{eqn:interprob} for some $f\in K^1_B$, the conjugates of the $\widetilde w_k$'s play a similar role for the \lq\lq partner" $\widetilde f:=\ov z\ov fB$ of $f$, so that
$$\widetilde f(a_k)=\ov{\widetilde w_k}\qquad(k=1,2,\dots).$$
Recalling \eqref{eqn:kpblp}, we deduce that $\mathcal T_{\{a_k\}}$ maps $\ell^p_1(\{a_k\})$ with $1<p<\infty$ isomorphically onto itself. Its inverse, $\mathcal T^{-1}_{\{a_k\}}$, is then given by
$$\mathcal T^{-1}_{\{a_k\}}:\,\{\widetilde w_k\}\mapsto
\left\{\sum_j\f{\widetilde w_j}{\ov{B'(a_j)}\cdot(1-\ov a_ja_k)}\right\}=\{w_k\},$$
which follows upon interchanging the roles of $f$ and $g(=\widetilde f)$ in \eqref{eqn:gcauchy} and \eqref{eqn:bargak}. It might be interesting to determine further sequence spaces that are preserved by $\mathcal T_{\{a_k\}}$. Anyhow, this $\mathcal T_{\{a_k\}}$-invariance property can no longer be guaranteed for the endpoint spaces $\ell^1_1(\{a_k\})$ and $\ell^\infty$. In fact, we have seen in \cite{DARX} that the set of those $\ell^\infty$ sequences that are mapped by $\mathcal T_{\{a_k\}}$ into $\ell^\infty$ coincides with the trace space $K^\infty_B\big|_{\{a_k\}}$ (which is, typically, smaller than $\ell^\infty$); and Conjecture 1 says that at the other extreme, when $p=1$, the situation should be similar.
\par Furthermore, the suggested analysis of the $\mathcal T_{\{a_k\}}$ operator could perhaps be extended, at least in part, to the case of a generic (non-interpolating) Blaschke sequence $\{a_k\}$.
\par Finally, going back to Problem 1, we remark that the range $0<p<1$ is also worth looking at, in addition to the $p=1$ case. It seems reasonable, though, to define the corresponding $K^p_B$ space as the closure of $K^\infty_B$ in $H^p$, rather than use the original definition \eqref{eqn:defmodsub} for $p<1$. The problem would then consist in describing the trace space $K^p_B\big|_{\{a_k\}}$ for an interpolating Blaschke product $B=B_{\{a_k\}}$. More generally, one might wish to characterize $K^p_\th\big|_{\{\la_k\}}$, say with $0<p\le1$ and $\th$ inner, for an arbitrary interpolating sequence $\{\la_k\}$ in $\D$. This time, however, the chances for a neat solution appear to be slim. Indeed, even the \lq\lq good" case $1<p<\infty$ is far from being completely understood; see, e.g., \cite{DPAMS, HNP} for a discussion of such interpolation problems in $K^2_\th$.
\section*{Introduction}
Fix a congruence subgroup $G$ which is either of the form $\Gamma_0(N)$ or $\Gamma_1(N)$ for some $N$, and $A$ a subring of $\mathbb{C}$. A modular form on $G$ has a $q$-expansion:
\[ f(z) = \sum_{i \geq 0} a_i q^i \]
where $q = e^{2i\pi z}$ and $a_i \in \mathbb{C}$ for all $i\geq 0$. If $A$ is a subring of $\mathbb{C}$, and if $f$ is a modular form of level $G$ and weight $k$ such that:
\[ f(z) = \sum_{i \geq 0} a_i q^i \in A[[q]]\]
then we say that $f$ has coefficients in $A$. The set of modular forms on $G$ and coefficients in $A$ is an $A$-module, which we denote by $M_k(G,A)$. In the case where $A = \mathbb{C}$, we write simply $M_k(G) = M_k(G,\mathbb{C})$. When $G = SL_2(\mathbb{Z})$ is the full modular group, we say $f$ is of level $1$.\\
The set $\{f \in M_k(G,A): k = 0 \mbox{ or } k \geq 2\}$ generates a graded $A$-module which we denote by $M(G,A)$, and which can be written as the direct sum:
\[M(G,A) = M_0(G,A) \oplus \left( \bigoplus_{k \geq 2} M_k(G,A)\right)\]
where $M_0(G,A) = A$. When $A = \mathbb{C}$, we write $M(G) = M(G,\mathbb{C})$. When there is no ambiguity, we may drop the congruence group and the ring of coefficients from the notation.\\
If $A = \mathbb{F}_p$, then by $M_k(G,\mathbb{F}_p)$ we mean the formal mod $p$ reductions of the $q$-expansions of modular forms in $M_k(G,\mathbb{Z})$. Thinking of modular forms as $q$-expansions, we write:
\[M_k(G,\mathbb{F}_p) = M_k(G,\mathbb{Z})\otimes \mathbb{F}_p. \]
We then define $M(G,\mathbb{F}_p)$ similarly as:
\[M(G,\mathbb{F}_p) = M_0(G,\mathbb{F}_p) \oplus \left( \bigoplus_{k \geq 2} M_k(G,\mathbb{F}_p)\right).\]
When $G = \Gamma_1(N)$ with $N \geq 5$, and $k \geq 2$, we have a more conceptual interpretation of mod $p$ modular forms; see Section 1 for further clarification. The reason we omit modular forms of weight $1$ is to avoid the difficulty arising from the existence of modular forms mod $p$ of weight $1$ which do not lift to characteristic 0, and because in practice it is more difficult to do computations with weight 1 modular forms. \\
For a congruence group $G$ and a subring $A$ of $\mathbb{C}$, the graded $A$-algebra $M(G,A)$ is finitely generated (see \cite{DR}, Th\'eor\`eme 3.4). Thus there exists an integer $n \geq 2$ such that the smallest graded $A$-subalgebra of $M(G,A)$ containing:
\[M_0(G,A) \oplus \left( \bigoplus_{k = 2}^{n} M_k(G,A)\right)\]
is the whole algebra $M(G,A)$. For any such $n$, we say that $M(G,A)$ is generated in weight at most $n$, and the smallest such $n$ is called the generating weight of $M(G,A)$. \\
In \cite{rustom}, we studied the graded algebras of modular forms of various levels and over subrings $A$ of $\mathbb{C}$. Our main theorem in \cite{rustom} was that for $N \geq 5$, the algebra $M(\Gamma_1(N),\mathbb{Z}[\frac{1}{N}])$ is generated in weight at most $3$. The key idea was to use a result that first appeared in Mumford's paper \cite{mumford} concerning invertible sheaves on algebraic curves, and apply it to modular curves over finite fields $\mathbb{F}_p$ for all $p \nmid N$, to conclude the result over $\mathbb{Z}[\frac{1}{N}]$. To deal with mod $p$ modular forms, we appealed to the existence of a fine moduli scheme classifying elliptic curves with $\Gamma_1(N)$ structure when $N \geq 5$. For levels $\Gamma_0(N)$ (for $N$ satisfying some congruence conditions) and over $\mathbb{C}$ (equivalently, over $\mathbb{Q}$), we proved a similar result, where the generating weight is now $6$.\\
\begin{comment}It recently came to the attention of the author that such questions (over $\mathbb{C}$) had already been studied by Wagreich (\cite{wagreich1} and \cite{wagreich2}) for graded algebras of automorphic forms over finitely generated Fuchsian groups of the first kind. In these articles, Wagreich gives a precise description of the number of generators needed in each weight. The description depends only on the signature of the Fuchsian group involved, that is, a knowledge of the genus, the number of cusps, and the number and orders of elliptic points. Wagreich's results give an affirmative answer to our Conjecture 1 in \cite{rustom}, mainly that $M(\Gamma_0(N),\mathbb{C})$ is generated in weight at most $6$. In addition, when the algebra can be generated by at most 4 generators, Wagreich gives bounds on the degrees of the generators of the ideal of relations.\\
\end{comment}
This article is divided into three parts. In the first part, we deal with the ideal of relations. Suppose that $\{g_1,\cdots, g_r\}$ is a minimal set of generators of $M(G,A)$. Then one can define a homomorphism of graded $A$-algebras:
\[\Phi : A[x_1,\cdots,x_r] \mapsto M(G,A)\]
given by $\Phi(x_i) = g_i$ for $1 \leq i \leq r$, where $A[x_1,\cdots,x_r]$ is a weighted polynomial ring, each $x_i$ receiving the weight of $g_i$. The ideal $I = \ker \Phi$ is finitely generated, and it is the ideal of relations. We say that the ideal of relations of $M(G,A)$ (always with respect to a minimal set of generators) is generated in degree at most $n$ if one can pick generators $\{r_1,\cdots,r_m\}$ of $I$ each lying in degree at most $m$. Again using the tools in \cite{mumford}, we provide bounds on the degree of generators of the ideal of the relations that exist between elements of a minimal set of generators of the algebra. More precisely, we prove:\\
\textbf{Theorem \ref{relthm}. } \textit{Choosing a minimal set of generators for $M = M(\Gamma,\mathbb{Q})$, the ideal of relations is generated:
\begin{itemize}
\item in degree at most $6$ when $\Gamma = \Gamma_1(N)$ for $N \geq 5$, or
\item in degree at most $10$ when $\Gamma = \Gamma_0(N)$ for $N$ satisfying:
\[N \equiv 0 \pmod{4} \mbox{ or } N \equiv 0 \pmod{p}, \mbox{ } p \equiv 3 \pmod{4} \]
and:
\[N \equiv 0 \pmod{9} \mbox{ or } N \equiv 0 \pmod{p}, \mbox{ } p \equiv 5 \pmod{6}. \]
\end{itemize}}
The main result of this section concerns the ideal of relations for modular forms of level $\Gamma_1(N)$ and with coefficients in $\mathbb{Z}[\frac{1}{N}]$, and we prove it using the existence of the fine moduli scheme. Precisely, we prove:\\
\textbf{Theorem \ref{relthmZZ}. } \textit{Let $N \geq 5$. Choosing a minimal set of generators for $M = M(\Gamma_1(N),\mathbb{Z}[\frac{1}{N}])$, the ideal of relations is generated in degree at most $6$. }\\\\
In the second part, we turn our attention to modular forms of level with coefficients in $\mathbb{Z}$. Here we choose to deal with the easiest case, the modular forms of level $\Gamma_0(p)$, although we expect that our method generalizes to other congruence subgroups. First we look at what the generating weight can be, and we prove that, when restricting to coefficients in $\mathbb{Z}$, there is no longer a uniform upper bound on the generating weight independently of the level. More explicitly, we prove:\\
\textbf{Theorem \ref{genZ}. }\textit{Let $N \geq 5$ and let $p \geq 5$ be a prime which divides $N$ exactly once. Then any set of generators for $M(\Gamma_0(N),\mathbb{Z})$ contains a form of weight $p-1$. In particular, the generating weight of $M(\Gamma_0(N),\mathbb{Z})$ is at least $p-1$.}\\\\
Next, we proceed to identify a set of generators for $M(\Gamma_0(p), \mathbb{Z})$. This set consists of the $T$-form $T \in M_{p-1}(\Gamma_0(p),\mathbb{Z})$ appearing in \cite{rustom}, given by:
\[ T(z) := \left(\frac{\eta(pz)^p}{\eta(z)}\right)^2, \]
and the subset $S$ of those modular forms with coefficients in $\mathbb{Z}$ whose $p$-adic valuation at the cusp $0$ are not too negative. We recall here that the $T$-form played a fundamental role in the algorithm we developed in \cite{rustom} based on the work of Scholl in \cite{scholl}. In particular, for $f = \sum_{n\geq 0} a_n q^n$, we define the $p$-adic valuation of $f$ as:
\[ v_p(f) = \inf\{v_p(a_n) : n \geq 0\}, \] and we show:\\
\textbf{Theorem \ref{gensZZ}. } \textit{
Let $S = \{f \in M(\Gamma_0(p),\mathbb{Z}) : v_p(\tilde{f})\geq 0 \}$. Then $M(\Gamma_0(p), \mathbb{Z})$ is generated by $T$ and the set $S$.} \\\\
In order to prove that these modular forms generate $M(\Gamma_0(p), \mathbb{Z})$, we first prove a generalization of a result of Serre appearing in \cite{serre} concerning congruence relations between modular forms on $\Gamma_0(p)$ and on $SL_2(\mathbb{Z})$. In that paper, Serre proves that for a modular form $f$ of level $p$ and of weight $k$, there is a modular form $g$ of level $1$ and of weight $k' \geq k$ such that $f \equiv g \pmod{p}$. Now for $f \in M_k(\Gamma_0(N),\mathbb{Z})$, and an odd prime $p$ dividing $N$ exactly once, set $\tilde{f} = p^{k/2} f|W_p^{N}$, where $f|W_p^N$ is the image of $f$ under the Atkin-Lehner involution associated to $p$ (see Section \ref{lowerbound}). Our generalization is formulated as follows:\\
\textbf{Theorem \ref{congthm}. }\textit{Let $p \geq 5$, $f \in M_k(\Gamma_0(p),\mathbb{Z})$ with $v_p(f) = 0$ and $v_p(\tilde{f}) = k + a$. Then there exists $g \in M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Z})$ such that $f \equiv g \pmod{p}$.} \\\\
Note that whenever $f$ satisfies the conditions of this theorem, we have $k \geq a(p-1)$ by the bounds on $v_p(\tilde{f})$ given in Proposition 3.20 of \cite{DR}. Serre's result covers the case of the theorem where $a \leq 0$, so that the weight we get for the level $1$ modular form $g$ is $k -a(p-1)\geq k$, and he proves this using an elementary trace argument. Our generalization shows that even if $a > 0$, such a congruence holds, and one can actually pick the level 1 form $g$ in weight $k' = k -a(p-1) < k$. Serre's argument does not generalize in an obvious way, and to prove this generalization, we resort to the intersection theory on the Deligne-Mumford stacks classifying elliptic curves with $\Gamma_0(p)$ structure as studied in \cite{DR}. We provide a brief summary of the main notions of this intersection theory. Finally we state two conjectures (Conjectures \ref{G0conj1} and \ref{G0conj2}) concerning the generating weight of the subalgebra generated by $S$.\\
The results of this paper along with those in \cite{rustom} allow us to write down an explicit algorithm that fully determines the structure of the algebra $M(G,A)$ with a minimal set of generators and relations (when $A$ is a PID). The last part of this paper comprises the results of computations carried out using the algorithm and contains the structure of this algebra for various $G$ and $A$. It also contains the results of computations that support Conjecture \ref{G0conj2}.\\\\
\textbf{Acknowledgements. } The author wishes to thank Jim Stankewicz for helpful discussions and clarifications he offered regarding the paper of Deligne and Rapoport (\cite{DR}) and their theory of the moduli stacks.
\subsection*{Notation, definitions and conventions}
Here we add further definitions, notation, and conventions to the ones given above in the introduction. The standard Eisenstein series of level 1 and weight $k$, normalized so that the constant term in the $q$-expansion is $1$, is denoted by $E_k$. The unique cusp form of level 1 and weight $12$ (normalized to have a leading term $q$ in the $q$-expansion) is denoted by $\Delta$. The Dedekind eta function $\eta$ is defined by the product:
\[ \eta(z) = e^{\frac{2i\pi z}{24}} \prod_{n=1}^{\infty}(1-q^n), \qquad q = e^{2i\pi z},\]
and is a modular form of weight $\frac{1}{2}$. The modular forms $\eta$ and $\Delta$ are related by:
\[ \eta(z)^{24}= \Delta(z). \]
We recall the definition of a $T$-form, first defined in \cite{scholl}, which played a key role in our work appearing in \cite{rustom}.
\begin{defn}\label{deftform}For a congruence group $G$ and a ring $A \subset \mathbb{C}$, a modular form $T \in M_k(G,A)$ is called a $T$-form if it satisfies the following conditions:
\begin{itemize}
\item $T$ only vanishes at the cusp $\infty$, and
\item the $q$-expansion of $T$ lies in $A[[q]]$ and the $q$-expansion of $T^{-1}$ lies in $A((q))$.
\end{itemize}
\end{defn}
If $f \in M_k(G)$, and $\gamma \in GL_2(\mathbb{Q})$, where:
\[\gamma = \begin{pmatrix} a & b \\ c & d \end{pmatrix}, \] we define the action of $\gamma$ on $z \in \mathbb{C}$ by:
\[\gamma \cdot z = \frac{az + b}{cz + d},\] and the operator $-|_k \gamma : f \mapsto f|_k\gamma$ by:
\[(f|_k\gamma)(z) = (\det \gamma)^{k/2} \left( cz + d \right)^{-k} f(\gamma \cdot z). \]
Note that for a fixed level $G$, the collection of operators $\{-|_k\gamma : k \in \mathbb{N}\}$ defines a graded operator on $M(G)$, which we denote simply by $-|\gamma : f \mapsto f|\gamma$.
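For example, for the matrix
\[ \gamma = \begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix}, \]
which we will use in Section \ref{lowerbound}, we have $\det \gamma = p$, so that $(f|_k\gamma)(z) = p^{k/2} f(pz)$.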
\section{Relations}
Let $X$ be a smooth, geometrically connected algebraic curve of genus $g$ over a perfect field $k$. Let $\mathcal{L}$, $\mathcal{M}$, and $\mathcal{N}$ be three invertible sheaves on $X$. We have the following exact sequence:
\[0 \rightarrow R(\mathcal{L},\mathcal{M}) \rightarrow H^0(X,\mathcal{L})\otimes H^0(X,\mathcal{M}) \xrightarrow{\mu} H^0(X,\mathcal{L}\otimes \mathcal{M}) \rightarrow S(\mathcal{L},\mathcal{M}) \rightarrow 0 \]
where $\mu$ is the natural multiplication map: $\sum f_i \otimes g_i \mapsto \sum f_i g_i$, and $R(\mathcal{L},\mathcal{M})$ and $S(\mathcal{L},\mathcal{M})$ are respectively its kernel and cokernel. In \cite{rustom}, we studied the generators of graded algebras of modular forms using the first of the following results, which first appeared in \cite{mumford}.
\begin{lem}\label{mumlema}
Let $\mathcal{L}$, $\mathcal{M}$, and $\mathcal{N}$ be as above.
\begin{enumerate}
\item If $\deg \mathcal{L} \geq 2g+1$ and $\deg \mathcal{M} \geq 2g$, then $\mu$ is surjective.
\item The natural map:
\[ R(\mathcal{L},\mathcal{M})\otimes H^0(X,\mathcal{N}) \rightarrow R(\mathcal{L}\otimes \mathcal{N}, \mathcal{M}) \]
mapping $(\sum f_i\otimes g_i)\otimes h \mapsto \sum (f_i h)\otimes g_i$ is surjective if $\deg \mathcal{L} \geq 3g+1$, and $\min\{\deg \mathcal{M},\deg \mathcal{N}\} \geq 2g+2$.
\end{enumerate}
\end{lem}
We recall the following fact proven in \cite{rustom}.
\begin{cor} \label{gencorQ} $M(\Gamma,\mathbb{Q})$ is generated:
\begin{enumerate}
\item in weight at most $3$, when $\Gamma = \Gamma_1(N)$ for $N \geq 5$, or
\item in weight at most $6$, when $\Gamma = \Gamma_0(N)$ for $N$ satisfying the following congruence conditions:
\[N \equiv 0 \pmod{4} \mbox{ or } N \equiv 0 \pmod{p}, \mbox{ } p \equiv 3 \pmod{4} \]
and:
\[N \equiv 0 \pmod{9} \mbox{ or } N \equiv 0 \pmod{p}, \mbox{ } p \equiv 5 \pmod{6} \]
\end{enumerate}
\end{cor}
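For example, $N = 20$ satisfies both conditions of part (2): we have $20 \equiv 0 \pmod{4}$, and $20 \equiv 0 \pmod{5}$ with $5 \equiv 5 \pmod{6}$. On the other hand, $N = 12$ satisfies the first condition but not the second.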
We make the following additional observations.
\begin{rem}\label{remweights}\indent
\begin{enumerate}
\item In \cite{rustom}, we used a weaker version of Lemma \ref{mumlema}, resulting in the upper bound $6$ on the weight in part (2) of Corollary \ref{gencorQ}. In fact, Lemma \ref{mumlema} allows us to show, using the same proof as in \cite{rustom}, that the bound in part (2) of Corollary \ref{gencorQ} can actually be taken to be $4$.
\item It follows from the first part of Corollary \ref{gencorQ} that a minimal set of generators of $M(\Gamma_1(N),\mathbb{Q})$ must be the union of bases of the spaces $M_2(\Gamma_1(N),\mathbb{Q})$ and $M_3(\Gamma_1(N),\mathbb{Q})$, since no weight $2$ forms can give rise to forms in weight $3$.
\end{enumerate}
\end{rem}
Consider the graded $\mathbb{Q}$-algebra $M = M(\Gamma, \mathbb{Q})$ and pick a minimal set of generators $\{g_1,\cdots,g_n\}$ for it. This provides us with a map of graded algebras:
\[ \Phi: A \rightarrow M \]
\[ x_i \mapsto g_i \]
where $A = \mathbb{Q}[x_1,\cdots,x_n]$ is the polynomial algebra where each $x_i$ is given the weight of $g_i$. We denote by $A_k$ the $\mathbb{Q}$-vector space spanned by degree $k$ polynomials. We wish to examine generators of the homogeneous ideal $\ker \Phi$, which is the ideal of relations. In \cite{wagreich1} and \cite{wagreich2}, Wagreich describes the generators of the ideal of relations of $M$ when $M$ is generated by at most 4 generators. Our result is valid regardless of the number of generators involved.
\begin{thm} \label{relthm}
Choosing a minimal set of generators for $M = M(\Gamma,\mathbb{Q})$, the ideal of relations is generated:
\begin{enumerate}
\item in degree at most $6$ when $\Gamma = \Gamma_1(N)$ for $N \geq 5$,
\item in degree at most $10$ when $\Gamma = \Gamma_0(N)$ for $N$ satisfying:
\[N \equiv 0 \pmod{4} \mbox{ or } N \equiv 0 \pmod{p}, \mbox{ } p \equiv 3 \pmod{4} \]
and:
\[N \equiv 0 \pmod{9} \mbox{ or } N \equiv 0 \pmod{p}, \mbox{ } p \equiv 5 \pmod{6} \]
\end{enumerate}
\end{thm}
\begin{proof}
For (1), as per Remark \ref{remweights}, if we choose a minimal set of generators, then there are no relations in degrees $2$ or $3$. Let $P \in A$ be a homogeneous polynomial of degree $k \geq 7$, representing a relation in $M$ in weight $k \geq 7$. As explained in \cite{rustom}, we have a line bundle $\mathcal{L}$ on the modular curve $X = X_1(N)$ of genus $g$ such that:
\[ M_k = H^0(X, \mathcal{L}^{\otimes k}) \]
and $\deg \mathcal{L} \geq g+1$ (when $N \geq 5$). \\
Let $a = 5$ if $k = 7$, and $a = 6$ otherwise. The polynomial $P$ is the sum of homogeneous monomials, each of degree $k$. Since $k \geq 7$, each of these monomials is divisible by a monomial of degree $a$. Let $\{v_1,\cdots,v_n\}$ be a basis of $A_{k-a}$. Thus we can write:
\[ P = \sum_{i = 1}^n Q_i v_i \] where for all $i$, $Q_i$ is a homogeneous polynomial of degree $a$. \\
For some $m$, and possibly after a reordering of the $v_i$'s, the set $\{\Phi(v_1),\cdots,\Phi(v_m)\}$ is a basis of $M_{k-a}$. This means that for every $j$ such that $m+1 \leq j \leq n$ there are constants $\alpha_{1,j},\cdots,\alpha_{m,j}$ such that:
\[ \Phi(v_j) = \sum_{i=1}^{m} \alpha_{i,j} \Phi(v_i).\]
Hence letting
\[G_j := v_j - \sum_{i=1}^{m} \alpha_{i,j} v_i\]
for $m+1 \leq j \leq n$, we have $\Phi(G_j) = 0$, i.e., the $G_j$'s are relations in weight $k-a$. Now if for $1 \leq i \leq m$ we set:
\[Q_i' := Q_i + \sum_{j=m+1}^n \alpha_{i,j} Q_j, \]
we can rewrite $P$ as:
\[ P = \sum_{i=1}^m Q_i' v_i + \sum_{j=m+1}^{n} G_j Q_j, \]
where, since $\Phi(P) = 0$ and $\Phi(\sum_{j=m+1}^{n} G_j Q_j) = 0$, we have $\Phi(\sum_{i=1}^m Q_i' v_i) = 0$.
We see then that $\sum_{i=1}^m Q_i' v_i$ must be represented in $R(\mathcal{L}^{\otimes a},\mathcal{L}^{\otimes(k-a)})$, that is:
\[\sum_{i=1}^m \Phi(Q_i')\otimes \Phi(v_i) \in R(\mathcal{L}^{\otimes a},\mathcal{L}^{\otimes(k-a)}).\] We have then the following diagram:
\[\begin{tikzpicture}[xscale=3,yscale=-1.5]
\node (Z) at (0,0) {$0$};
\node (R) at (0.65,0) {$R(\mathcal{L}^{\otimes a},\mathcal{L}^{\otimes(k-a)})$};
\node (M) at (2.1,0) {$H^0(X,\mathcal{L}^{\otimes a})\otimes H^0(X,\mathcal{L}^{\otimes(k-a)})$};
\node (N) at (3.4,0) {$H^0(X,\mathcal{L}^{\otimes k}) $};
\node (R2) at (0.65,1) {$R(\mathcal{L}^{\otimes 3},\mathcal{L}^{\otimes (k-a) })\otimes H^0(X,\mathcal{L}^{\otimes (a-3)})$};
\draw [->] (Z) -- (R);
\draw [->] (R) -- (M);
\draw [->] (M) -- (N);
\draw [->] (R2) -- node [left]{$\epsilon$} (R);
\end{tikzpicture}\]
By Lemma \ref{mumlema}, the map $\epsilon$ is surjective. Thus we can find polynomials $H_{s}$ in degree $a-3$, and polynomials $F_{i,s}$ in degree $3$ (so that each $\sum_{i=1}^m F_{i,s} v_i$ has degree $k-(a-3) \leq k-2$), satisfying:
\[\Phi(\sum_{i=1}^m F_{i,s} v_i) = 0\]
and such that:
\[ \sum_{i=1}^m \Phi(\sum_s H_{s} F_{i,s}) \otimes \Phi(v_i) = \sum_{i=1}^m \Phi(Q_i')\otimes \Phi(v_i), \] so that:
\[ \Phi(\sum_s H_{s} F_{i,s}) = \Phi(Q_i'), \] hence
\[ Q_i' = \sum_s H_{s} F_{i,s} + W_i \]
where for all $i$, $\Phi(W_i) = 0$, i.e. $W_i$ is a relation in weight $a$. Putting all the above together, we get:
\[ P = \sum_{i=1}^m (\sum_s H_{s} F_{i,s})v_i + \sum_{i=1}^{m} W_i v_i + \sum_{j=m+1}^{n} G_j Q_j \]
so $P$ can be written in terms of relations of degrees $k-(a-3)$, $a$, and $k-a$, which are all $\leq k-2$. \\
For (2), suppose that $N$ satisfies the given congruence conditions. When the genus $g$ of the modular curve $X = X_0(N)$ is 0, the statement follows from the results of \cite{TS11}. So suppose that the modular curve $X = X_0(N)$ is of genus $g \geq 1$. We have a line bundle $\mathcal{L}$ on $X$ such that for even $k$:
\[ M_k = H^0(X,\mathcal{L}^{\otimes k/2}) \]
(since the conditions on $N$ remove any elliptic points and irregular cusps), and $\deg \mathcal{L} \geq 2g$. We can then repeat the same argument as above since for even $k \geq 12$, we have the following diagram:
\[\begin{tikzpicture}[xscale=3,yscale=-1.5]
\node (Z) at (-0.35,0) {$0$};
\node (R) at (0.40,0) {$R(\mathcal{L}^{\otimes 4},\mathcal{L}^{\otimes(k/2-4)})$};
\node (M) at (2,0) {$H^0(X,\mathcal{L}^{\otimes 4})\otimes H^0(X,\mathcal{L}^{\otimes(k/2-4)})$};
\node (N) at (3.4,0) {$H^0(X,\mathcal{L}^{\otimes k/2}) $};
\node (R2) at (0.40,1) {$R(\mathcal{L}^{\otimes 2},\mathcal{L}^{\otimes (k/2-4) })\otimes H^0(X,\mathcal{L}^{\otimes 2})$};
\draw [->] (Z) -- (R);
\draw [->] (R) -- (M);
\draw [->] (M) -- (N);
\draw [->] (R2) -- node [left]{$\epsilon$} (R);
\end{tikzpicture}\]
where the map $\epsilon$ is surjective by Lemma \ref{mumlema}. Thus any relation of degree $\geq 12$ can be written as a combination of relations of degrees $k-4$, $8$, and $k-8$.
\end{proof}
We now turn to the case of modular forms of level $\Gamma_1(N)$ over $\mathbb{Z}[\frac{1}{N}]$. First we consider the situation in positive characteristic. Note that while in characteristic 0 any relation must be a homogeneous polynomial in the generators (see \cite{miyake}, Lemma 2.1.1), in positive characteristic one might have non-homogeneous relations (for example, the famous $E_{p-1} \equiv 1 \pmod{p}$). Here we restrict our attention to homogeneous relations. Recall that (cf. \cite{G90}) when $N \geq 5$, the functor classifying generalized elliptic curves over $\mathbb{Z}[\frac{1}{N}]$ with a choice of a point of exact order $N$ is representable by a fine moduli $\mathbb{Z}[\frac{1}{N}]$-scheme $X_1(N)$. On this scheme there is an invertible sheaf, which we denote by $\omega$. The modular forms of level $\Gamma_1(N)$ over $\mathbb{Z}[\frac{1}{N}]$ are just global sections of tensor powers of this invertible sheaf:
\[ M_k(\Gamma_1(N), \mathbb{Z}[\frac{1}{N}]) = H^0 ( X_1(N), \omega^{\otimes k}). \]
When $p \nmid N$, the scheme $X_1(N)$ admits a good reduction
\[X_1(N)_{\mathbb{F}_p} := X_1(N) \otimes \mathbb{F}_p,\]
and the invertible sheaf $\omega$ pulls back to an invertible sheaf $\omega_{\mathbb{F}_p}$, so by base change theorems, we have the following identification (for $k \geq 2$):
\[ M_k(\Gamma_1(N),\mathbb{F}_p) = H^0(X_1(N)_{\mathbb{F}_p}, \omega_{\mathbb{F}_p}^{\otimes k}). \]
\begin{lem}\label{modprels}
Let $N \geq 5$, and $p \nmid N$ be a prime. Choosing a minimal set of generators of $M(\Gamma_1(N),\mathbb{F}_p)$, the homogeneous relations are generated in degree at most $6$.
\end{lem}
\begin{proof}
As $\mathbb{F}_p$ is perfect, and $\deg \omega_{\mathbb{F}_p} = \deg \omega$ (this is shown in the proof of Proposition 1 in \cite{rustom}), the argument in Theorem \ref{relthm} goes through unchanged.
\end{proof}
As above, suppose we have a minimal set of generators $\{g_1,\cdots,g_n\}$ for $M = M(\Gamma_1(N),\mathbb{Z}[\frac{1}{N}])$, and let $A = \mathbb{Z}[\frac{1}{N}][x_1,\cdots,x_n]$ be the weighted polynomial algebra where each $x_i$ is given the weight of $g_i$. This provides us with a map of graded algebras:
\[ \Phi: A \rightarrow M \]
\[ x_i \mapsto g_i, \]
and again the homogeneous ideal of relations $\ker \Phi$ is finitely generated. So for every weight $k$, there is a short exact sequence:
\[\begin{tikzpicture}[xscale=3,yscale=-1.5]
\node (Z1) at (0,0) {$0$};
\node (R) at (0.65,0) {$\ker(\Phi)_k$};
\node (A) at (1.3,0) {$A_k$};
\node (M) at (2,0) {$M_k$};
\node (Z2) at (2.65,0) {$0$};
\draw [->] (Z1) -- (R);
\draw [->] (R) -- (A);
\draw [->] (A) -- node[above]{$\Phi$} (M);
\draw [->] (M) -- (Z2);
\end{tikzpicture}\]
where the subscript $k$ indicates the weight $k$ homogeneous submodule. For $p\nmid N$, the map $\Phi$ induces a morphism $\bar{\Phi}: A_k \otimes \mathbb{F}_p \rightarrow M_k \otimes \mathbb{F}_p$. Since $M_k$ is torsion-free, hence flat, over $\mathbb{Z}[\frac{1}{N}]$, tensoring with $\mathbb{F}_p$ preserves the exactness of this sequence, and we have a commutative diagram with exact rows:
\[\begin{tikzpicture}[xscale=3,yscale=-1.5]
\node (Z1) at (0,0) {$0$};
\node (R) at (0.8,0) {$\ker(\Phi)_k $};
\node (A) at (1.6,0) {$A_k$};
\node (M) at (2.4,0) {$M_k$};
\node (Z2) at (3.2,0) {$0$};
\node (Z1p) at (0,1) {$0$};
\node (Rp) at (0.8,1) {$\ker(\Phi)_k \otimes \mathbb{F}_p$};
\node (Ap) at (1.6,1) {$A_k \otimes \mathbb{F}_p$};
\node (Mp) at (2.4,1) {$M_k \otimes \mathbb{F}_p$};
\node (Z2p) at (3.2,1) {$0$};
\draw [->] (Z1) -- (R);
\draw [->] (R) -- (A);
\draw [->>] (A) -- node[above]{$\Phi$}(M);
\draw [->] (M) -- (Z2);
\draw [->] (Z1p) -- (Rp);
\draw [->] (Rp) -- (Ap);
\draw [->>] (Ap) -- node[above]{$\bar{\Phi}$}(Mp);
\draw [->] (Mp) -- (Z2p);
\draw [->] (R) -- node[right]{$\alpha$}(Rp);
\draw [->>] (A) -- node[right]{$\beta$}(Ap);
\draw [->>] (M) -- node[right]{$\gamma$}(Mp);
\end{tikzpicture}\]
As the map $\alpha$ in the above diagram is surjective, this shows that any mod $p$ homogeneous relation of degree $k$ between the generators must be the reduction mod $p$ of a relation in characteristic $0$. That is, we have shown the following lemma:
\begin{lem}\label{relred} The mod $p$ relations of degree $k$ between the generators are precisely the elements of $\ker(\Phi)_k \otimes \mathbb{F}_p$, the reductions mod $p$ of relations of degree $k$ in characteristic 0.
\end{lem}
We now proceed to prove:
\begin{thm}\label{relthmZZ}
Let $N \geq 5$. Choosing a minimal set of generators for $M = M(\Gamma_1(N),\mathbb{Z}[\frac{1}{N}])$, the ideal of relations is generated in degree at most $6$.
\end{thm}
\begin{proof}
We argue as in the proof of Theorem 1 in \cite{rustom}. Let $R_1,\cdots,R_m$ be the relations that generate $\ker\Phi$ up to degree $6$. By Lemma \ref{relred} and Lemma \ref{modprels}, the reductions $\bar{R}_1,\cdots,\bar{R}_m$ generate the homogeneous relations mod $p$. Let $B_0$ be a relation in degree $ k \geq 7$. Suppose for the sake of contradiction that $B_0$ is not in the submodule of $\ker\Phi$ generated by $R_1,\cdots,R_m$. As in the proof of Theorem \ref{relthm}, we can assume $R_1,\cdots,R_m$ have weights in $\{k-(a-3), a, k-a\}$, where $a = 5$ if $k = 7$, and $a = 6$ otherwise. The reduction mod $p$ of $B_0$ can be written as:
\[ \bar{B}_0 = \sum_{i=1}^m \bar{F}^{(0)}_i \bar{R_i} \]
where $\bar{F}^{(0)}_i$ have weights $\geq 2$, so that the mod $p$ modular forms represented by these polynomials have lifts to characteristic $0$. Let $F^{(0)}_i \in \mathbb{Z}[\frac{1}{N}][x_1,\cdots,x_n]$ denote a lift of $\bar{F}^{(0)}_i$. By our assumption,
\[ B_0 - \sum_{i=1}^m F^{(0)}_i R_i \not = 0.\]
Since the reduction mod $p$ of this difference vanishes, there must be some nonzero polynomial $B_1$ such that:
\[ B_0 - \sum_{i=1}^m F^{(0)}_i R_i = pB_1.\]
Since $B_0$ and $R_i$ are relations of degree $k$, it follows that $B_1$ is itself a relation of degree $k$, and we can repeat the process with $B_1$:
\[B_1 - \sum_{i=1}^m F^{(1)}_i R_i = pB_2\] for some $B_2, F^{(1)}_i \in \mathbb{Z}[\frac{1}{N}][x_1,\cdots,x_n]$. Iterating this, we have:
\[ B_0 = \sum_{i=1}^m F_i R_i \]
where $F_i \in \mathbb{Z}_{p}[x_1,\cdots,x_n]$. This holds for each $p \nmid N$, so by Lemma 4 of \cite{rustom}, we conclude that for each $i$, $F_i \in \mathbb{Z}[\frac{1}{N}][x_1,\cdots, x_n]$, whence a contradiction.
\end{proof}
\section{Modular forms with coefficients in $\mathbb{Z}$}
\subsection{The lower bound}\label{lowerbound}
A classical fact is that $M(SL_2(\mathbb{Z}),\mathbb{C}) = \mathbb{C}[E_4,E_6]$, and one can easily show the stronger statement that $M(SL_2(\mathbb{Z}),\mathbb{Z}[\frac{1}{6}]) = \mathbb{Z}[\frac{1}{6}][E_4, E_6]$, see Theorem 6.3 in \cite{kilford}. If one wishes to restrict consideration to modular forms with coefficients in $\mathbb{Z}$, then we find that $M(SL_2(\mathbb{Z}),\mathbb{Z})$ is generated by $E_4, E_6$ and $\Delta$, and in fact (see \cite{delignetate}):
\[M(SL_2(\mathbb{Z}),\mathbb{Z}) \cong \mathbb{Z}[E_4,E_6, \Delta]/(E_4^3 - E_6^2 - 1728\Delta). \]
We see that the generating weight increases from $6$ to $12$ when we pass to the ring $\mathbb{Z}$. The purpose of this section is to show that the increase in the generating weight when passing to $\mathbb{Z}$ is a general phenomenon.\\
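As a quick check on this relation, comparing coefficients of $q$ in the expansions $E_4 = 1 + 240q + \cdots$, $E_6 = 1 - 504q + \cdots$, and $\Delta = q - 24q^2 + \cdots$ gives
\[ E_4^3 - E_6^2 = (1 + 720q + \cdots) - (1 - 1008q + \cdots) = 1728q + \cdots, \]
in agreement with $1728\Delta = 1728q + O(q^2)$.\\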
Consider a level $N \geq 1$ and an odd prime $p$ dividing $N$ exactly once. We can then define the Atkin-Lehner involution:
\[ W^N_p = \begin{pmatrix} p & a \\ N & bp \end{pmatrix} \]
where $a$ and $b$ are any integers such that $\det W_p^N = p^2 b - Na = p$.
First we need the following lemma, due to Kilbourn (see \cite{kilbourn}). This lemma generalizes a result obtained in prime level in \cite{DR}.
\begin{lem}\label{killema}
Let $N \geq 1$ and let $p$ be an odd prime dividing $N$ exactly. Then for all $k \leq p-3$ and all $f \in M_k(\Gamma_0(N),\mathbb{Q})$:
\[ |v_p(f|W^N_p) - v_p(f)| \leq k/2. \]
\end{lem}
For convenience, we will make the following definition.
\begin{defn}\label{definv}
Let $N$ and $p$ be as in Lemma \ref{killema}, $k \geq 0$ and $f \in M_k(\Gamma_0(N),\mathbb{Q})$. Then we define the following operator:
\[ \tilde{f} := \tilde{\omega}(f) := p^{k/2} f|W^N_p. \]
\end{defn}
Then we have a corollary of Lemma \ref{killema}:
\begin{cor}\label{kilcor}
Let $N$ and $p$ be as in Lemma \ref{killema}, $0 \leq k \leq p-3$, and $f \in M_k(\Gamma_0(N),\mathbb{Q})$. Then:
\[ v_p(\tilde{f}) \geq v_p(f). \]
In particular, if $v_p(f) = 0$, then $v_p(\tilde{f}) \geq 0$.
\end{cor}
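\begin{proof}
By Definition \ref{definv} we have $v_p(\tilde{f}) = \frac{k}{2} + v_p(f|W^N_p)$, and Lemma \ref{killema} gives $v_p(f|W^N_p) \geq v_p(f) - \frac{k}{2}$, so $v_p(\tilde{f}) \geq v_p(f)$.
\end{proof}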
We will prove the following:
\begin{thm}\label{genZ} Let $N \geq 5$ and let $p \geq 5$ be a prime which divides $N$ exactly once. Then any set of generators for $M(\Gamma_0(N),\mathbb{Z})$ contains a form of weight $p-1$. In particular, the generating weight of $M(\Gamma_0(N),\mathbb{Z})$ is at least $p-1$.
\end{thm}
\begin{proof}
The idea of the proof is to produce a modular form in weight $p-1$ that cannot be written as a polynomial with $\mathbb{Z}$ coefficients in modular forms with $\mathbb{Z}$ coefficients in weights $< p-1$. Recall (see \cite{rustom}) the $T$-form $T$, given by:
\[ T(z) := \left(\frac{\eta(pz)^p}{\eta(z)}\right)^2 \in M_{p-1}(\Gamma_0(p)) \subset M_{p-1}(\Gamma_0(N))\]
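Since $p^2 \equiv 1 \pmod{24}$ for $p \geq 5$, the $q$-expansion of $T$ is
\[ T = q^{\frac{p^2-1}{12}} \prod_{n=1}^{\infty} \frac{(1-q^{pn})^{2p}}{(1-q^n)^2}, \]
so in particular $T$ vanishes at the cusp $\infty$ to order $\frac{p^2-1}{12}$.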
We recall also that both $T$ and $T^{-1}$ have $q$-expansion coefficients in $\mathbb{Z}$. It is also obvious that $v_p(T) = 0$. The truth of Theorem \ref{genZ} then clearly follows from the following lemma:
\begin{lem}\label{vptlem}
The form $T$ is not a polynomial with $\mathbb{Z}$ coefficients in modular forms with $\mathbb{Z}$ coefficients in weights $<p-1$.
\end{lem}
\begin{proof}
We will prove the lemma by showing that $T$ violates the inequality in Lemma \ref{killema}, and that every modular form which is a polynomial in forms of weight $< p -1$ must satisfy the inequality.\\
We define the following matrices:
\[ M_p = \begin{pmatrix} 1 & a \\ \frac{N}{p} & pb \end{pmatrix}, \]
\[ M_p' = \begin{pmatrix} p & a \\ \frac{N}{p} & b \end{pmatrix} \]
\[ \gamma = \begin{pmatrix} p & 0 \\ 0 & 1 \end{pmatrix}. \]
We note that $W^N_p = M_p \gamma$, and that $\gamma W^N_p = p M_p'$, so $\gamma W^N_p$ acts on modular forms via the $-|_k$ operator in the same way as $M_p'$ does. We wish to compute $v_p(\tilde{T})$ where $\tilde{T} = p^{\frac{p-1}{2}}T|W_p^N$. We have:
\[\tilde{T}(z) = p^{p-1} (Nz + pb)^{-(p-1)} \left(\frac {\eta(M_p' \cdot z)^p}{\eta(W^N_p \cdot z)}\right)^2. \]
By applying the appropriate transformation formulae for the $\eta$ function (see for instance \cite{koehler}), we have:
\[\eta(M_p' \cdot z)^2 = p \nu_\eta(M_p')^2 (Nz + pb) \eta(z)^2, \]
\[\eta(W^N_p \cdot z)^2 = \nu_\eta (M_p)^2 (Nz + pb) \eta(p\cdot z)^2, \]
where $\nu_\eta(-)$ is the eta multiplier. After writing out the multipliers explicitly, we find that:
\[ \tilde{T}(z) = \epsilon p^{-1} \left(\frac{\eta(z)^p}{\eta(pz)}\right)^2 \]
where:
\[ \epsilon = \begin{cases} e^{\left(\frac{2Np - 6p - 2N/p + 6}{24}\right)} & \mbox{ if } N/p \in 2\mathbb{Z} \\
e^{\left(\frac{2Np - 6N + 4N/p}{24}\right)} & \mbox{ otherwise }\end{cases}, \]
and $e(z) = e^{2i\pi z}$. It is easy to show that $\epsilon = \pm 1$, and hence that $v_p(\tilde{T}) = -1$. All we need to show is that
\[ Np - 3p - N/p + 3 \equiv 0 \pmod{6} \mbox{ if } \frac{N}{p} \equiv 0 \pmod{2} \]
and that
\[ Np - 3N + 2N/p \equiv 0 \pmod{6} \mbox{ if } \frac{N}{p} \equiv 1 \pmod{2}. \]
Indeed, we have $p \equiv 1 \pmod{2}$ and $p \equiv \pm 1 \pmod{3}$, so if $\frac{N}{p} \equiv 0 \pmod{2}$, then:
\[\begin{cases} Np - 3p - N/p + 3 \equiv 0 \pmod{2} & \mbox{, and} \\
Np - 3p - N/p + 3 \equiv 0 \pmod{3}.
\end{cases}\] Similarly, if $\frac{N}{p} \equiv 1 \pmod{2}$, then:
\[\begin{cases} Np - 3N + 2N/p \equiv 0 \pmod{2} & \mbox{, and} \\
Np - 3N + 2N/p \equiv 0 \pmod{3}.
\end{cases}\]
To finish the proof, note that the operator $\tilde{\omega}$ of Definition \ref{definv} defines an operator on the graded algebra of modular forms:
\[ \widetilde{fg} = \tilde{f} \cdot \tilde{g}, \]
\[ \widetilde{f+g} = \tilde{f}+ \tilde{g}. \]
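Indeed, if $f$ and $g$ have weights $k_1$ and $k_2$ respectively, then
\[ \widetilde{fg} = p^{\frac{k_1+k_2}{2}}\, (fg)|W^N_p = \left( p^{k_1/2}\, f|W^N_p \right)\left( p^{k_2/2}\, g|W^N_p \right) = \tilde{f} \cdot \tilde{g}, \]
while additivity holds for $f$ and $g$ of the same weight.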
Suppose now that
\[T = \sum c_{i_1,\cdots,i_n} g_1^{i_1} \cdots g_n^{i_n}\] where $c_{i_1,\cdots,i_n} \in \mathbb{Z}$ and $g_i \in M(\Gamma_0(N),\mathbb{Z})$ are modular forms in weights $\leq p-3$. Then:
\[\tilde{T} = \sum c_{i_1,\cdots,i_n} (\tilde{g_1})^{i_1} \cdots (\tilde{g_n})^{i_n},\]
which would force $v_p(\tilde{T}) \geq 0$ by Corollary \ref{kilcor}, but that contradicts the above computation of $v_p(\tilde{T})$.
\end{proof}
We have now established Lemma \ref{vptlem}, and Theorem \ref{genZ} follows immediately.
\end{proof}
In \cite{rustom}, we raised the question of the lowest weight in which one could find a $T$-form (see Definition \ref{deftform}) for $\Gamma_0(p)$, and we proved that the lowest weight is either $p-1$ or $\frac{p-1}{2}$. As a corollary to Lemma \ref{vptlem}, we have the following:
\begin{cor}
Let $p \geq 5$. The lowest weight in which one can find a $T$-form for $\Gamma_0(p)$ is $p-1$.
\end{cor}
\begin{proof}
Let $T$ be the $T$-form in weight $p-1$ defined above. Suppose there exists a $T$-form $T'$ of lower weight. By the defining properties of $T$-forms, it follows that $T'$ divides $T$ in the algebra of modular forms, and that $\frac{T}{T'} = T''$ is a $T$-form in weight $< p-1$. Then $T = T' T''$, but this contradicts Lemma \ref{vptlem}.
\end{proof}
\subsection{Intersection theory on $\mathcal{M}_{\Gamma_0(p)}$}\label{intheory}
For the convenience of the reader, we summarize here the main definitions regarding the intersection theory on the stacks $\mathcal{M}_{\Gamma_0(p)}$ studied in \cite{DR}. For our purpose, we only need the theory developed in \cite{DR}. However, the appendix by Brian Conrad in \cite{BDP} contains a more explicit and more general treatment of the intersection theory on such stacks.\\
The stack $\mathcal{M}_{\Gamma_0(p)}$ is the moduli stack classifying elliptic curves (over $\mathbb{Z}$) with a choice of a subgroup of order $p$. These stacks are Deligne-Mumford (DM); the main property of DM stacks that we need is that they admit finite \'etale covers by schemes. Thus one can define sheaves on them as sheaves on the \'etale site, in particular, these stacks are locally ringed, and one can use the intersection theory of schemes to define intersection concepts on the stacks. For the definition and basic properties of these stacks, see \cite{DM}.\\
The stack $\mathcal{M}_{\Gamma_0(p)}$ is not representable, i.e.\ it is not a scheme, since every pair $(E,C)$ consisting of an elliptic curve $E$ and a subgroup $C$ of order $p$ admits at least one non-trivial automorphism (the involution corresponding to $-1 \in \Gamma_0(p)$). It is regular (\cite{DR}, Th\'eor\`eme V.1.16), of dimension 2 (of relative dimension $1$) over $\mbox{Spec}(\mathbb{Z})$.\\
Let $\mathcal{M}$ be such a stack, and let $\mathcal{L}$ be an invertible sheaf on $\mathcal{M}$. As in \cite{DR}, VI.4.3, the degree of $\mathcal{L}$ is defined as follows. Suppose that $\mathcal{L}$ has a rational section $f$. Pick a geometric fiber (for example, say it is $\mathcal{M}\otimes k$ where $k$ is an algebraically closed field), and at each closed geometric point $x$ of this fiber, define:
\[ \deg_x (f) = \begin{cases} \dim_k \widetilde{O_x}/(f) & \mbox{if } f \mbox{ is regular at } x \\ - \dim_k \widetilde{O_x}/(f^{-1}) & \mbox{ otherwise}\end{cases}\]
where $\widetilde{O_x}$ is the henselian local ring of the fiber at $x$. Then the degree of $\mathcal{L}$ is defined by:
\[\deg \mathcal{L} = \sum_x \frac{\deg_x(f)}{|Aut(x)|} \]
where $Aut(x)$ is the automorphism group of the elliptic curve represented by the point $x$. This degree is independent of the choice of the fiber.
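For example, for the sheaf $\omega$ on $\mathcal{M}_1 \otimes \bar{\mathbb{F}}_p$ one finds $\deg \omega = \frac{1}{24}$ (\cite{DR}, VI.4.4.1); the non-integral value is accounted for by the automorphism factors $|Aut(x)|$ in the sum above.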
\\
A reduced irreducible closed substack of codimension 1 is Cartier (Lemma B.2.2.8 in \cite{BDP}). A Cartier divisor is effective if the ideal sheaf associated to the corresponding closed substack is invertible. If $D$ is an effective Cartier divisor, there is an invertible sheaf $\mathcal{O}(D)$ associated to it that has a canonical regular global section $s_D$. By regularity of the stack, the henselian local ring at every codimension 1 point is a DVR, thus to every effective Cartier divisor one can associate an effective Weil divisor (i.e. a finite formal integral combination of closed reduced irreducible substacks of codimension 1, where all the coefficients are non-negative). For an invertible sheaf $\mathcal{L}$ on $\mathcal{M}$, and a global section $s$ of $\mathcal{L}$ that is non-zero on every connected component of $\mathcal{M}$, we can associate a Weil divisor $div(s)$ such that there is an isomorphism of sheaves $\mathcal{O}(div(s)) \cong \mathcal{L}$. \\
Thus we can identify the concepts of an effective Cartier divisor and an effective Weil divisor. Given an invertible sheaf $\mathcal{L}$ with a global section $s$ which is non-zero on every connected component, its divisor $D = div(s)$ can be written as the sum of horizontal and vertical divisors. If $N$ is an irreducible component of a geometric fiber, seen as a vertical divisor (by giving it the reduced structure), then one can define the intersection number:
\[ (D,N) := \deg \mathcal{O}(D)|_N. \]
The degree of $\mathcal{L}$ can equally be defined as the intersection number of the divisor of a global section with a geometric fibral divisor.\\
The stack classifying generalized elliptic curves (without a choice of a structure) is denoted $\mathcal{M}_1$. It is shown in \cite{DR} that the reduction mod $p$, $\mathcal{M}_{\Gamma_0(p)} \otimes \mathbb{F}_p$, consists of two copies of $\mathcal{M}_1 \otimes \mathbb{F}_p$ glued together at the supersingular points. \\
Similar to the case of the moduli schemes, on each of the stacks $\mathcal{M}_{\Gamma_0(p)}$ and $\mathcal{M}_1$, there is an invertible sheaf $\omega$ such that the modular forms (of levels $p$ and $1$ respectively) of weight $k$ can be seen as global sections of $\omega^{\otimes k}$.
\subsection{Congruences of level $1$ and level $p$ modular forms}
For this section, fix a prime $p \geq 5$. We look at congruence relations between modular forms on $SL_2(\mathbb{Z})$ and on $\Gamma_0(p)$, that is, congruences between their formal $q$-expansions at infinity. In \cite{serre}, Serre proves that every modular form in $M(\Gamma_0(p),\mathbb{Z})$ is $p$-adically of level $1$, that is, if $f \in M_k(\Gamma_0(p),\mathbb{Z})$ for some $k$, then for every integer $i > 0$, there exists an integer $k_i$ and a modular form $f_i \in M_{k_i}(SL_2(\mathbb{Z}),\mathbb{Z})$ such that $f \equiv f_i \pmod{p^i}$. In particular one has the following result. Let $v = v_p(\tilde{f})$, and let:
\[
E_{p-1}^* = E_{p-1} - \tilde{E}_{p-1}.
\]
The form $E_{p-1}^*$ has the following properties: $E_{p-1}^* \equiv 1 \pmod{p}$ and $v_p(\widetilde{E_{p-1}^*})=p$. If $v \leq k$, this implies that $tr(f(E_{p-1}^*)^{k-v}) \in M_{kp - v(p-1)}(SL_2(\mathbb{Z}))$ is $p$-integral and is congruent to $f$ modulo $p$. Here, $tr$ is the trace operator sending modular forms of level $p$ to modular forms of level $1$; for the definition and properties, see \cite{serre}.\\
When $v > k$, if the above congruence still holds, then we expect to see $f$ mod $p$ in weight $kp - v(p-1) < k$. Since this weight is less than $k$, Serre's trace argument apparently no longer applies. The aim of this section is to show that a similar congruence relation still holds even when the ``expected weight'' for $f$ is less than $k$. That is, we have:
\begin{thm}\label{congthm}
Let $p \geq 5$, $f \in M_k(\Gamma_0(p),\mathbb{Z})$ with $v_p(f) = 0$ and $v_p(\tilde{f}) = k + a$. Then there exists $g \in M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Z})$ such that $f \equiv g \pmod{p}$.
\end{thm}
\begin{proof}
The case where $a\leq 0$ is covered by Serre's argument in \cite{serre}. We deal here with the case where $a > 0$. The proof relies on Deligne and Rapoport's study of the stack $\mathcal{M}_{\Gamma_0(p)}$ in \cite{DR}. The intersection theory on such stacks is summarized in Section \ref{intheory}.\\
The modular form $f$ can be seen as a global section $f \in H^0 (\mathcal{M}_{\Gamma_0(p)}, \omega^{\otimes k})$. Let $N_1$ and $N_2$ be the two irreducible components of $\mathcal{M}_{\Gamma_0(p)}\otimes \mathbb{F}_p$ containing respectively the (reductions of the) cusps $\infty$ and $0$. Then $f$ does not vanish at the generic point of $N_1$, and it vanishes to order $a$ at the generic point of $N_2$. Thus the divisor of $f$ can be written as:
\[ div (f) = D + aN_2\]
where, without loss of generality after multiplying $f$ by a constant of $p$-adic valuation $0$, we can assume\footnote{While $f$ might have some poles along certain vertical (i.e.\ fibral) divisors, it cannot have a pole along a horizontal divisor. This is because any horizontal divisor would meet the generic fiber, and so if $f$ has a pole along a horizontal divisor, then $f$ would have a pole when considered as a modular form over $\mathbb{C}$, which contradicts the holomorphy of $f$ as a complex function of a complex variable. The vertical components of the divisor of poles correspond to the primes appearing in the denominators of the $q$-expansion of $f$. As these denominators are bounded, one can find a constant $K \in \mathbb{Z}$, $K \not \equiv 0 \pmod{p}$, such that $Kf$ has no primes in the denominators of its $q$-expansion except possibly $p$. Multiplying by such a constant obviously preserves the $p$-adic valuation of $f$ at both cusps.} that $D$ is an effective horizontal Cartier divisor on $\mathcal{M}_{\Gamma_0(p)}$. As in \cite{DR}, VII.3.19, we can calculate the intersection number $(D,N_1)$ as follows: on $N_1 \cong \mathcal{M}_1 \otimes \mathbb{F}_p$, the degree of $\omega$ is $\frac{1}{24}$ (\cite{DR}, VI.4.4.1), hence:
\[ (div(f), N_1) = \frac{k}{24}. \]
The components $N_1$ and $N_2$ intersect transversally at the supersingular points. It follows from \cite{DR} (Th\'eor\`eme V.1.16 and Th\'eor\`eme VI.4.9.1) that:
\[ (N_2, N_1) = \frac{p-1}{24} \]
This gives then that:
\[ (D,N_1) = \frac{k - a(p-1)}{24}. \]
Now $D$ is an effective Cartier divisor, so it corresponds to an invertible sheaf $\mathcal{O}(D)$ together with a regular global section $s_D$. We then have:
\[ (D,N_1) = \deg_{\bar{\mathbb{F}}_p} (\mathcal{O}(D)|_{N_1}) = \sum_{x} \frac{\deg_x(s_D)}{|Aut(x)|} \]
where the sum is over the closed geometric points of the component $N_1$. Since $s_D$ is regular, $\deg_x(s_D) \geq 0$ for each $x$, and $(D,N_1) \geq 0$. Since $p \geq 5$, it follows (see \cite{silverman}, Theorem 10.1) that for each $x$, $|Aut(x)| \leq 6$. In particular we have that if $(D,N_1) > 0$, then $(D,N_1) \geq \frac{1}{6}$.\\
First, if $k-a(p-1) = 2$, then $(D,N_1) = \frac{1}{12} < \frac{1}{6}$, which is impossible. So we must have that either $k-a(p-1)=0$ or $k-a(p-1) > 2$. Denote by $M^{a}_k$ the subset of $M_k(\Gamma_0(p),\mathbb{Z})$ consisting of modular forms $h$ such that $v_p(h)=0$ and $v_p(\tilde{h}) = k + a$. Define the mapping:
\[ \phi: M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Z}) \rightarrow M_k(\Gamma_0(p),\mathbb{Z}) \]
\[g \mapsto (E_{p-1}^*)^a g. \]
Recall here the convention that $M_0(SL_2(\mathbb{Z}),\mathbb{Z}) = \mathbb{Z}$. It is easy to check that the image under $\phi$ of $V = M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Z})$ lies in $M^{a}_k$. Recall also that $V$ has a Victor Miller basis, which is the unique integral basis consisting of forms $f_0,\cdots, f_{d-1}$, where $d = \dim_{\mathbb{Q}} M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Q})$, such that $f_i = q^i + O(q^d)$ for $0 \leq i \leq d-1$ (see for instance Proposition 6.2 in \cite{kilford}). We also adopt the convention that the Victor Miller basis of $M_0(SL_2(\mathbb{Z}),\mathbb{Z})$ is the set $\{1\}$.
\\Assume that $f$ mod $p$ is not in $ \phi(V)\otimes \mathbb{F}_p$. Then, subtracting from $f$ a suitable linear combination of the images in $ \phi(V)\otimes \mathbb{F}_p$ of elements of the Victor Miller basis of $V$, we may assume that $f$ has a mod $p$ vanishing order at infinity $v_{\infty,p} (f) \geq d$, where $d = \dim_{\mathbb{Q}} M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Q})$; then so does the section $s_D$ of $\mathcal{O}(D)$. As the cusp infinity has only an automorphism of order $2$, we have:
\[ \sum_{x \not = \infty} \frac{\deg_x(s_D)}{|Aut(x)|} + \frac{v_{\infty,p}(s_D)}{2} = (D,N_1). \]
If $k-a(p-1) = 0$, then $(D,N_1) = 0$, and $d = \dim_{\mathbb{Q}} M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Q}) = 1$. As $\deg_x(s_D) \geq 0$ for each $x$, this means that $\deg_x(s_D) = 0$ for all $x$, so in particular, $v_{\infty,p}(s_D) = 0$, but this contradicts the inequality $v_{\infty,p} (f) \geq d$. So assume that $k - a(p-1) > 2$. We have:
\[ \sum_{x \not = \infty} \frac{\deg_x(s_D)}{|Aut(x)|} + \frac{v_{\infty,p}(s_D) - d}{2} = (D,N_1) - \frac{d}{2}. \]
Consider the form $f_{d-1} \in V$, which is the element of the Victor Miller basis of $V$ with highest vanishing order at infinity, this vanishing order being $v_{\infty}(f_{d-1}) = d-1$. This form vanishes nowhere other than at infinity and possibly at the elliptic points of orders 2 and 3. Let $v_2(f_{d-1})$ and $v_3(f_{d-1})$ denote respectively the vanishing orders of $f_{d-1}$ at the elliptic points of orders 2 and 3, and recall that $v_2(f_{d-1}) \leq 1$ and $v_3(f_{d-1}) \leq 2$. The valence formula for level 1 modular forms (see for example Proposition 3.2 in \cite{kilford}) then gives:
\[ v_{\infty}(f_{d-1}) + \frac{1}{2}v_2(f_{d-1}) + \frac{1}{3} v_3(f_{d-1}) = \frac{k-a(p-1)}{12}, \] and therefore:
\[ \frac{k-a(p-1)}{12} - (d - 1) \leq \frac{1}{2} + \frac{2}{3} = \frac{7}{6}.\]
Thus using the above calculation for $(D,N_1)$, we find that:
\[ \sum_{x \not = \infty} \frac{\deg_x(s_D)}{|Aut(x)|} + \frac{v_{\infty,p}(s_D) - d}{2} \leq \frac{1}{12},\] which forces:
\[ \sum_{x \not = \infty} \frac{\deg_x(s_D)}{|Aut(x)|} + \frac{v_{\infty,p}(s_D) - d}{2} = 0 \] and hence:
\[ (D,N_1) = \frac{d}{2} \]
which in turn gives:
\[ d = \frac{k-a(p-1)}{12}. \]
This however is contradicted by the dimension formula for modular forms in level 1, which says that if $k-a(p-1) \equiv 0 \pmod{12}$, then $d = 1 + \frac{k-a(p-1)}{12}$.
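As a concrete illustration, take $k-a(p-1)=12$: the equality above would force
\[ d = \frac{12}{12} = 1, \]
whereas $M_{12}(SL_2(\mathbb{Z}),\mathbb{Q})$ is spanned by $E_4^3$ and $\Delta$, so that $d = 2$.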
\end{proof}
\begin{rem}
A weaker version of Theorem \ref{congthm} can be proven by elementary methods, by adapting the argument due to Kilbourn (see \cite{kilbourn}) by which he proves Lemma \ref{killema}. This is a sketch of the argument: if $f$ satisfies the hypothesis of Theorem \ref{congthm} with $a \geq 1$, then $h = tr(f) \equiv f \pmod{p^2}$. Let $g = \frac{f - h}{p^{v_p(f-h)}}$. Let $\cdot|V : M_k(SL_2(\mathbb{Z})) \rightarrow M_k(\Gamma_0(p),\mathbb{Z})$ denote the operator defined by $V(\sum a_n q^n) = \sum a_n q^{pn}$. Then one can show that $h|V \equiv p^{m - k} \tilde{g} \pmod{p}$. On the other hand, $v_p (p^{m - k} \tilde{\tilde{g}}) = v_p(p^{m-k}p^k g)\geq a+1 \geq 2$, so $p^{m - k} \tilde{g}$ is congruent to some modular form of level 1 in weight $kp - (a+1)(p-1)$. If $w_p$ denotes the mod $p$ filtration of level $1$ modular forms, defined by:
\[ w_p(F) = \inf\{k : F \equiv G \pmod{p} \mbox{ for some } G \in M_k(SL_2(\mathbb{Z}),\mathbb{Z})\}, \]
we know that $w_p(h|V) = pw_p(h)$ so this forces $w_p(h) \leq k - (p-1)$. It might be possible to find a variation of this argument that would give an alternative and elementary proof of Theorem \ref{congthm}. The author was however unable to find such an argument.
\end{rem}
A simple corollary of this theorem concerns the $T$-form defined above.
\begin{cor}\label{ptcor}
$p\tilde{T} \equiv 1 \pmod{p}$.
\end{cor}
\begin{proof}
This can be proven directly from the computation of $\tilde{T}$ as in the previous section, but this follows easily from Theorem \ref{congthm}, by noting that $v_p(p\tilde{T}) = 0$, $v_p(\widetilde{p\tilde{T}}) = p$ and that $p\tilde{T}$ is of weight $p-1$. The theorem then implies that $p\tilde{T}$ is congruent mod $p$ to a modular form of weight $0$, that is, a constant, and this constant is found to be $1$ by examining the first coefficient of the $q$-expansion.
\end{proof}
\subsection{Generators of $M(\Gamma_0(p), \mathbb{Z})$}
In this section we identify a set of generators for $M(\Gamma_0(p),\mathbb{Z})$. Let $T$ denote the $T$-form defined earlier:
\[ T(z) := \left(\frac{\eta(pz)^p}{\eta(z)}\right)^2 \in M_{p-1}(\Gamma_0(p),\mathbb{Z}).\]
Let $S$ denote the subset of $M(\Gamma_0(p), \mathbb{Z})$ consisting of modular forms $f$ satisfying $v_p(\tilde{f})\geq 0$. We prove the following:
\begin{thm}\label{gensZZ}
$M(\Gamma_0(p), \mathbb{Z})$ is generated by $T$ and $S$.
\end{thm}
\begin{proof}
Let $f \in M_k(\Gamma_0(p), \mathbb{Z})$, $f \not \in S$. Put $a = -v_p(\tilde{f}) > 0$. We argue by induction on $a$. Let $g = p^a \tilde{f}$. Then $v_p(g)=0$ and $v_p(\tilde{g}) = a + v_p(\tilde{\tilde{f}}) = a + v_p(p^k f) \geq k + a$, so by Theorem \ref{congthm}, there exists $h \in M_{k-a(p-1)}(SL_2(\mathbb{Z}),\mathbb{Z})$ such that:
\[ h \equiv g \pmod{p},\]
and by Corollary \ref{ptcor}, this can be rewritten as:
\[ (p\tilde{T})^a h\equiv g \pmod{p} \]
where now $(p\tilde{T})^a h$ and $g$ have the same weight. Thus there exists $u \in M_k(\Gamma_0(p),\mathbb{Z})$ such that:
\[ (p\tilde{T})^a h + pu = g.\]
Recall that $\widetilde{p\tilde{T}} = p^p T$, and that $\tilde{h} = p^{k-a(p-1)} h|V$ since $h$ is of level 1 and of weight $k-a(p-1)$. So applying the $\tilde{w}$ operator again, we get:
\[ p^{k+a} T^a h|V + p\tilde{u} = p^{k+a} f \]
and hence $p\tilde{u} = p^{k+a} v$ for some $v \in M_k(\Gamma_0(p),\mathbb{Z})$. Now we have:
\[ f = T^a h|V + v. \]
An easy calculation now shows that $v_p(\tilde{v}) \geq 1-a = v_p(f)$ (since $v_p(u) \geq 0$). By induction it then follows that we have the following decomposition of $f$:
\[ f = T^a f_a|V + T^{a-1} f_{a-1}|V + \cdots + T f_1|V + f_0 \]
where for each $1 \leq i \leq a$, $f_i \in M_{k - i(p-1)}(SL_2(\mathbb{Z}),\mathbb{Z})$, and hence $v_p(\tilde{f}_i|V) = v_p(f_i) \geq 0$, and $v_p(\tilde{f_0})\geq 0$, which proves the theorem.
\end{proof}
Numerical evidence points to the following conjecture:
\begin{conj}\label{G0conj1}
The $\mathbb{Z}$-subalgebra of $M(\Gamma_0(p),\mathbb{Z})$ generated by $S = \{f \in M(\Gamma_0(p),\mathbb{Z}) : v_p(\tilde{f}) \geq 0\}$ is generated in weight at most $6$.
\end{conj}
Conjecture \ref{G0conj1} and Theorem \ref{gensZZ} together imply the following:
\begin{conj}\label{G0conj2}
The weights of the modular forms appearing in a minimal set of generators for $M(\Gamma_0(p),\mathbb{Z})$ are in the set $\{2,4,6,p-1\}$, and there is only one generator of weight $p-1$ (which can be chosen to be the $T$-form $T$).
\end{conj}
In Section \ref{conj2data}, we present the computational data supporting Conjecture \ref{G0conj2}.
\section{Computational data}
\subsection{Generators and relations of $M(\Gamma_1(N), \mathbb{Z}[\frac{1}{N}])$}
We use a modification of Algorithm 1 in \cite{rustom} to calculate the structure of the algebra $M(\Gamma_1(N), \mathbb{Z}[\frac{1}{N}])$ for $5 \leq N \leq 22$. For each $N$, the minimal set of generators picked is the union of integral bases of $M_2(\Gamma_1(N),\mathbb{Z})$ and $M_3(\Gamma_1(N),\mathbb{Z})$. The algorithm operates as follows:
\begin{algo}\label{algo1}\indent
\begin{enumerate}
\item $GENERATORS = \{g_1,\cdots, g_r\}$ integral basis for $M_2(\Gamma_1(N),\mathbb{Z}) \bigcup$ integral basis for $M_3(\Gamma_1(N),\mathbb{Z}) $.
\item $RELATIONS = \{ \}$.
\item for each $k \in \{4,5,6\}$:
\begin{enumerate} \label{thecheck}
\item $B$ = integral basis of $M_k(\Gamma_1(N), \mathbb{Z})$.
\item $M = \begin{pmatrix} m_1 \\ \vdots \\ m_s \end{pmatrix}$, the monomials of weight $k$ in the elements of $GENERATORS$.
\item Express elements of $M$ as an integral linear combination of elements of $B$, obtaining an integral matrix $A$ such that $M = AB$.
\item Calculate $D$, the Smith Normal Form of the matrix $A$, as well as the transformation matrices $U$ and $V$, such that $D = UAV$.
\item For every diagonal entry $D_{ii}$ of $D$, check if $D_{ii}$ is invertible in $\mathbb{Z}[\frac{1}{N}]$:
\begin{itemize}
\item if $D_{ii}$ is not invertible in $\mathbb{Z}[\frac{1}{N}]$, then the $i$th row of $UAB$ is a relation. Add it to $RELATIONS$.
\end{itemize}
\item For every row $D_j$ of $D$:
\begin{itemize}
\item if $D_j$ is a zero row, then the $j$th row of $UAB$ is a relation. Add it to $RELATIONS$.
\end{itemize}
\end{enumerate}
\item Represent each generator $g_i$ as a variable $x_i$. Find the ideal $I$ of $\mathbb{Z}[\frac{1}{N}][x_1,\cdots,x_r]$ generated by $RELATIONS$. This is the ideal of relations. Output a Gr\"{o}bner basis for $I$.
\end{enumerate}
\end{algo}
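Step (e) of the algorithm reduces to elementary arithmetic: a nonzero integer $D_{ii}$ is a unit in $\mathbb{Z}[\frac{1}{N}]$ exactly when every prime factor of $D_{ii}$ divides $N$. A minimal Python sketch of this check (the function name is ours, not part of \cite{rustom}):

```python
from math import gcd

def is_unit_in_Z_inv_N(d: int, N: int) -> bool:
    """Return True iff the integer d is invertible in Z[1/N].

    d is a unit exactly when d != 0 and every prime factor of |d|
    divides N: repeatedly strip factors shared with N and check
    whether anything other than a unit survives.
    """
    if d == 0:
        return False
    d = abs(d)
    g = gcd(d, N)
    while g > 1:
        d //= g
        g = gcd(d, N)
    return d == 1

# Against N = 6: 4 = 2^2 and 6 = 2*3 are units in Z[1/6],
# while 5 and 10 = 2*5 are not.
```

Diagonal entries failing this test (and zero rows of $D$) correspond to the rows of $UAB$ recorded as relations.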
In the following table we provide the number of relations needed to generate the ideal of relations of $M(\Gamma_1(N), \mathbb{Z}[\frac{1}{N}])$, detailing the total number of generators of $M(\Gamma_1(N),\mathbb{Z}[\frac{1}{N}])$, the number of generators in weights 2 and 3, the total number of relations, and the number of relations in each degree.
\[\begin{tabular}{ c | c | c |c | c | c | c | c }
\hline
N & generators & weight 2& weight 3 & relations & degree 4 & degree 5 & degree 6 \\
\hline
5 & 7 & 3 & 4 & 17 & 1 & 6 & 10 \\
6 & 7 & 3 & 4 & 17 & 1 & 6 & 10 \\
7 & 12 & 5 & 7 & 58 & 6 & 24 & 28 \\
8 & 12 & 5 & 7 & 58 & 6 & 24 & 28 \\
9 & 17 & 7 & 10 & 124 & 15 & 54 & 55 \\
10 & 17 & 7 & 10 & 124 & 15 & 54 & 55 \\
11 & 25 & 10 & 15 & 281 & 35 & 125 & 121 \\
12 & 22 & 9 & 13 & 215 & 28 & 96 & 91 \\
13 & 33 & 13 & 20 & 502 & 64 & 226 & 212 \\
14 & 30 & 12 & 18 & 412 & 54 & 186 & 172 \\
15 & 40 & 16 & 24 & 749 & 104 & 344 & 301 \\
16 & 38 & 15 & 23 & 673 & 89 & 306 & 278 \\
17 & 52 & 20 & 32 & 1281 & 166 & 584 & 531 \\
18 & 43 & 17 & 26 & 869 & 118 & 398 & 353 \\
19 & 63 & 24 & 39 & 1902 & 246 & 867 & 789 \\
20 & 56 & 22 & 34 & 1495 & 207 & 690 & 598 \\
21 & 72 & 28 & 44 & 2497 & 346 & 1156 & 995 \\
22 & 65 & 25 & 40 & 2027 & 270 & 930 & 827 \\
\hline
\end{tabular}\]
\subsection{Generators of $M(\Gamma_0(p),\mathbb{Z})$}\label{conj2data}
Recall that the set $S$ is defined as the set of modular forms $f \in M(\Gamma_0(p),\mathbb{Z})$ such that $v_p(\tilde{f})\geq 0$. Recall also the $T$-form $T(z) := \left(\frac{\eta(pz)^p}{\eta(z)}\right)^2$. We will provide computational evidence for Conjecture \ref{G0conj2}.\\
We use Algorithm 1 in \cite{rustom} to calculate the degrees of generators in a minimal set of generators for $M(\Gamma_0(p),\mathbb{Z})$. The following table details the results.
\[\begin{tabular}{ c | c }
\hline
$p$ & weights of generators\\
\hline
5 & 2, 4, 4 \\
7 & 2, 4, 4, 6, 6 \\
11 & 2, 2, 4, 6, 10 \\
13 & 2, 4, 4, 4, 4, 6, 6, 12 \\
17 & 2, 2, 4, 4, 4, 6, 16 \\
19 & 2, 2, 4, 4, 4, 6, 6, 18 \\
23 & 2, 2, 2, 4, 4, 6, 22 \\
29 & 2, 2, 2, 4, 4, 4, 4, 6, 28 \\
31 & 2, 2, 2, 4, 4, 4, 4, 6, 6, 30 \\
\hline
\end{tabular}\]
In each level $p$ above, the form in weight $p-1$ can be chosen to be the $T$-form.
\bibliographystyle{amsalpha}
\section{Introduction}
In recent years there has been an increased demand from citizens, corporations, and governments to access e-government services. This requires trustworthy e-governance solutions that provide citizens and businesses with safe and transparent public services, without compromising their security or privacy. Failure to satisfy these requirements can have severe economic and social consequences. Along with security and privacy, creating an infrastructure that users can rely on is a key consideration when establishing e-government \cite{leiding2021machine}, along with providing a strong linkage between digital identity and socio-economic inclusion \cite{addo2021advancing}.
Coursey \emph{et al.} predict that \emph{e-government will fundamentally transform the relationship between governments and citizens} \cite{coursey2008models}. This transformation could lead to improved democracy, including increased citizen participation, transparency, trust in government, and social inclusion \cite{draheim2020narratives}. However, approaches to e-governance can fail without a citizen focus, increased accountability, citizen empowerment, co-production and good governance \cite{draheim2020narratives}. Draheim \emph{et al.} \cite{draheim2020narratives} also outline important political topics which must run alongside any development of e-governance applications, including those related to the digital divide, anti-corruption, the loss of privacy, social change, and increasing control from government agencies.
Valdavida \emph{et al.} \cite{valdavida2021public} outline a number of key recommendations in building a trustworthy approach to e-governance, including:
\begin{itemize}
\item \textbf{Bridge with the existing world.} This should allow for seamless integration with existing systems, and break-down the barriers to the exchange of digital assets from heterogeneous systems \cite{valdavida2021public}.
\item \textbf{Law and technology.} This requires that legal and technology specialists work together to develop an integrated system.
\item \textbf{Decentralisation, transparency and accountability.} This moves away from reliance on centralised services, and aims to increase transparency, privacy and accountability \cite{valdavida2021public}.
\item \textbf{Value proposition.} This requires a strong focus on useful use cases which are important in the lives of citizens.
\item \textbf{Governance.} This should provide a well-defined foundation for the system based on roles, responsibilities, and decision-making processes and where the governance structures are well-defined, and clearly articulated at every stage of the development \cite{valdavida2021public}.
\end{itemize}
GLASS \footnote{\label{myfootnote}GLASS is funded within the EU Horizon 2020 research and innovation programme under grant agreement 959879} \cite{lo2022glass} provides a user-centric approach in which digital credentials are referenced within digital wallets owned and controlled by individuals. This eliminates the need for a third party to maintain trust: citizens have complete control over their identities and associated data. GLASS employs a trusted framework to enable the requester (often defined as the \emph{relying party}) to validate the issuer's verifiable credentials and digital signature. In the EU, these transfers must comply with the General Data Protection Regulation (GDPR) \cite{gobeo2022gdpr} and with eIDAS, the legal framework for electronic Identification, Authentication and trust Services. As a result, the GLASS model can carry out procedures that work across borders in a user-friendly manner, supporting governments, citizens, organisations, and public services. The EBSI infrastructure provides a foundational element of this, offering trusted identity-checking services for citizens and trusted organisations, as well as a distributed infrastructure that can reduce administrative burdens and connect public administrations across all EU member states.
In this paper, we investigate EBSI and GLASS and examine their approaches to handling identity, security and privacy. In addition, we assess the viability of integrating the GLASS and EBSI models. Overall, the paper focuses on a distributed, secure, and scalable e-Government architecture that integrates the GLASS \cite{glass_definition, GLASS-home} and EBSI (European Blockchain Services Infrastructure) projects \cite{EBSI-ec-dbb-wiki-home}. Both projects use permissioned blockchains and distributed ledger methods to ensure security and privacy, and can address some of the weaknesses of current e-government systems, including interoperability between government departments. For improved resilience, GLASS can optionally use IPFS (InterPlanetary File System).
The rest of the paper is structured as follows. Section \ref{related_work} briefly describes the related work, while Section \ref{data_sharing} presents fundamental data-sharing use cases. Section \ref{ipfs_usecases} presents IPFS use cases, and Section \ref{eu_eid} summarises the European approaches to electronic identities. Section \ref{eu_blockchain_part} then details the European Blockchain Partnership, while Section \ref{sec:Background} provides the necessary background on the GLASS project. On this basis, Section \ref{glass_ebsi} explains the feasibility of bringing GLASS together with EBSI. Finally, Section \ref{sec:Conclusion and future work} draws conclusions and gives pointers for future work.
\section{Related work}
\label{related_work}
Lykidis \emph{et al.} \cite{lykidis2021use} define a wide range of ongoing blockchain-based e-Government applications, including authentication, e-Voting, land property services, e-Delivery services, human resources management, and government contracting. Most of these integrate some form of identity management, but the approaches taken vary widely. Rathee and Singh \cite{RATHEE2021} provide a literature review of blockchain-related identity management. The two main approaches to identity management can be summarised as:
\begin{itemize}
\item{using one or more appointed trusted authorities providing assurance over the identity-proofing of subjects,}
\item{allowing subjects to create and manage self-asserted identities, referred to as self-sovereign identities (SSI).}
\end{itemize}
\subsection{Country-wide adoption}
In blockchain-based applications, SSI is commonly used. Traditional applications favour the appointed trusted authority approach, such as that adopted within PKI (Public Key Infrastructure). A range of projects have resulted in country-wide infrastructures, such as the collaboration of the State Information Center, China Mobile, China UnionPay, and other businesses in creating the Blockchain Service Network (BSN) \cite{tang2022blockchain}. Singanamalla \emph{et al.} \cite{singanamalla2022telechain} define Telechain, which integrates a range of telecommunication providers in India in order to comply with regulations. Estonia, too, is well known for its advancement of the X-Road \cite{paide2018systematic}, which includes the Guardtime blockchain solution (the KSI blockchain \cite{buldas2013keyless}) and focuses on integrating Estonian state registries \cite{draheim2020blockchains}. As part of its advancement of a digital economy, Belarus developed the cryptohub HTP (High Technologies Park) \cite{bazhenova2018digitalization} to define key cryptographic operators, cryptocurrency exchanges, and so on. Akpan \emph{et al.} \cite{akpan2022governance} analysed the link between e-governance and governance in 15 West African countries, and found a significant positive correlation between World Governance Indicators (WGI) and the E-government Development Index (EGDI), with the best advancement coming when there is strong integration with established institutions and existing structures of governance.
Within the EU, blockchain methods are increasingly being considered for enhanced e-governance, but challenges still exist in aligning identity linkages that use PKI (Public Key Infrastructure) and Qualified Digital Certificates issued by Qualified Trust Service Providers. Turkanovic \emph{et al.} \cite{turkanovic2020signing} propose an architecture using the CEF (Connecting Europe Facility) \cite{vettorazzi2018establishing} building blocks of EBSI, eSignature, and an eID compliant with eIDAS.
\subsection{Self-sovereign identity}
The method of creating and controlling one's own identity is known as SSI (Self-sovereign Identity). With this, a key pair is typically used: transactions are digitally signed with the private key and then verified with the corresponding public key. The private key can be stored in a citizen wallet, where it cannot be accessed by any other entity.
Two major global initiatives which aim to harmonize the usage of verifiable credentials and wallets are the Open Identity Exchange (OIX) and the Trust over IP Foundation (ToIP) \cite{ToIP}. ToIP focuses on decentralized digital identity projects, and issues global compatibility guidelines for Hyperledger Aries and Indy, and for verifiable credentials \cite{Dizme2020}.
The basic infrastructure involves the user (the holder), the issuer of a verifiable credential, and the relying party. The user is in full control of gathering the verifiable credential and then passing it on to the relying party. Overall, a trust framework allows the relying party to check the trustworthiness of the verifiable credentials that are passed; this includes verifying the issuer's digital signature against the issuer's trusted public key.
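This flow can be sketched as follows. It is a toy illustration only: HMAC over a shared demo key stands in for the issuer's digital signature (real trust frameworks use asymmetric signatures verified against the issuer's public key, so no secret would be shared with the relying party), and all names are illustrative:

```python
import hashlib
import hmac
import json

# Toy trust registry: the relying party accepts credentials only from
# issuers listed here.  In production this would hold the issuer's
# *public* key, and verification would not require any shared secret.
TRUSTED_ISSUER_KEYS = {"issuer:example-university": b"demo-secret"}

def issue_credential(issuer: str, subject: str, claim: dict) -> dict:
    """Issuer signs the credential body (HMAC stands in for a signature)."""
    body = {"issuer": issuer, "subject": subject, "claim": claim}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(TRUSTED_ISSUER_KEYS[issuer], payload, hashlib.sha256).hexdigest()
    return {**body, "signature": sig}

def relying_party_accepts(credential: dict) -> bool:
    """Relying party checks the issuer against the trust registry and
    verifies the signature over the credential body."""
    key = TRUSTED_ISSUER_KEYS.get(credential["issuer"])
    if key is None:  # issuer is outside the trust framework
        return False
    body = {k: credential[k] for k in ("issuer", "subject", "claim")}
    payload = json.dumps(body, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])
```

Tampering with any field of the credential, or presenting a credential from an issuer outside the trust registry, causes verification to fail.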
\subsection{Appointed trusted authorities}
Traditional digital applications often use one or more appointed trusted authorities providing assurance over the identity-proofing of subjects. In the case of natural persons, this takes the form of a centralised or decentralised identity register; in the case of legal persons, these are typically business registers. Anand and Brass \cite{AnandNishant2021Rifd} provide an overview of eID systems and their governance models.
\subsection{Sharing credentials}
Liang \emph{et al.} \cite{liang2017integrating} used Hyperledger Fabric to share consent information.
With this, a user could share healthcare information with insurance companies in order to get an insurance quote. On a data-sharing event, an event record is created as a tuple of \{recordhash, owner, receiver, time, location, expiry-date, signature\}, and then submitted to the blockchain network in order to convert health records into a transaction. Every action on the health record is then recorded and is thus accountable. The project uses the Hyperledger Fabric membership service component and the channel scheme:
\begin{itemize}
\item \textbf{Membership service provider (CA)}. The CA focuses on membership enrollment. Each participating node is then issued with enrollment certificates and transaction certificates for the network and creates access control lists during channel establishment.
\item \textbf{Chaincode}. This is the code responsible for the deployment, invocation and querying of transactions, and isolates each of the participating parties in the network.
\end{itemize}
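The event record described above can be sketched as follows (field names follow the tuple of Liang \emph{et al.}; the derived transaction id and the abstract signature string are our illustrative simplifications):

```python
import hashlib
import json
import time

def make_event_record(record_bytes: bytes, owner: str, receiver: str,
                      location: str, expiry_date: str, signature: str) -> dict:
    """Build the (recordhash, owner, receiver, time, location,
    expiry-date, signature) tuple and derive the transaction id
    submitted to the blockchain network."""
    record = {
        "recordhash": hashlib.sha256(record_bytes).hexdigest(),
        "owner": owner,
        "receiver": receiver,
        "time": int(time.time()),
        "location": location,
        "expiry-date": expiry_date,
        "signature": signature,  # owner's signature, kept abstract here
    }
    # Hash the whole record to obtain an identifier for the transaction.
    record["txid"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    return record
```

Because every sharing event is hashed and submitted as a transaction, each action on the health record leaves an accountable trace.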
The License accoUntability and CompliancE (LUCE) data sharing platform is built on the Ethereum platform \cite{havelange2019luce}. It allows citizens to rectify and erase data in accordance with their General Data Protection Regulation (GDPR) rights. LUCE tracks data in terms of licensing terms and has three core roles: Data Providers, Data Requesters, and Supervisory Authorities.
With LUCE, we have states of:
\begin{itemize}
\item Sharing document: publish.
\item Accessing dataset: query, request, accessing terms, accepting licensing terms, and download token.
\item Monitoring compliance: access token, access data, reports compliance, access token, replication and checking compliance.
\item GDPR compliance: rights to access; rights to erase and right to rectification; and supervisory authority.
\end{itemize}
Jaiman \emph{et al.} \cite{jaiman2020consent} created a blockchain-based data-sharing consent model for health data, and where smart contracts represent a citizen's consent over their health data. It uses two ontologies defined within the Ethereum blockchain:
\begin{itemize}
\item Data Use Ontology (DUO). This defines citizen consent.
\item Automatable Discovery and Access Matrix (ADA-M). This defines the formatting of queries from data requesters.
\end{itemize}
\subsection{Right to be forgotten}\label{sect:5.1}
Data storage is often centralised. One method that can be used to create distributed file storage is IPFS (InterPlanetary File System). This involves hashing the content of files to a CID (Content Identifier) and supports splitting files across multiple nodes, which increases the resilience and availability of the stored files. Unfortunately, it is often difficult to delete content from IPFS, and files are frequently stored with little in the way of encryption. Politou \emph{et al.} \cite{politou2020delegated} identify the Right to be Forgotten (RtbF) as a requirement of GDPR, and thus a requirement for data to be erased. Unfortunately, enforcing this across the entire IPFS network is not feasible. They therefore implemented an anonymous protocol for delegated content erasure requests within IPFS, where erasure is only allowed by the original content provider or their delegates.
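A simplified sketch of content addressing illustrates why erasure is difficult: identical content always resolves to the identical identifier, so any node still holding the blocks can keep serving them. (Real IPFS CIDs are multibase-encoded multihashes over a Merkle DAG of roughly 256 KiB chunks, not the bare SHA-256 digests used here.)

```python
import hashlib

CHUNK_SIZE = 4  # tiny for illustration; real IPFS chunks are ~256 KiB

def chunk_ids(content: bytes) -> list:
    """Split content into chunks and hash each one, mirroring how IPFS
    addresses blocks before distributing them across nodes."""
    chunks = [content[i:i + CHUNK_SIZE]
              for i in range(0, len(content), CHUNK_SIZE)]
    return [hashlib.sha256(c).hexdigest() for c in chunks]

def content_id(content: bytes) -> str:
    """Derive a simplified content identifier from the chunk hashes.
    The mapping is deterministic: the same bytes always yield the same
    identifier, which is why 'deleting' content network-wide is hard."""
    root = hashlib.sha256("".join(chunk_ids(content)).encode()).hexdigest()
    return "cid-sketch:" + root
```

Any node that re-adds the same bytes reproduces the same identifier, so removal requests must reach every holder of the blocks, which motivates the delegated-erasure protocol of Politou \emph{et al.}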
\subsection{Dual-chain approaches}
Wang \emph{et al.} \cite{wang2018data} define a data-sharing method based on a dual-blockchain approach, where one chain stores the original data (the data blockchain) and the other stores the transactions related to the data (the trading blockchain). This allows the transactions performed on the data to be traced. Within this method, the data is broken into $n$ data blocks. Within a transaction, the data owner signs a hash of the previous transaction, and provides the data block number, permission level, and the public key of the next data user.
The access level defines three main levels:
\begin{itemize}
\item Level 1. Ability to use data, along with granting full rights to usage and distribution.
\item Level 2. Ability to use data, and grant data usage rights.
\item Level 3. Ability to use data.
\end{itemize}
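The hash-linked structure of the trading blockchain can be sketched as follows (an illustrative simplification: the owner's signature is abstracted away, and only the hash linkage and permission levels of Wang \emph{et al.} are modelled):

```python
import hashlib
import json

# Rights attached to each permission level, following the three levels above.
GRANTS = {1: {"use", "grant-use", "grant-distribution"},
          2: {"use", "grant-use"},
          3: {"use"}}

def transfer(prev_tx, block_no: int, level: int, next_user_pubkey: str) -> dict:
    """Append one transfer to the trading chain: the record commits to
    the hash of the previous transaction, the data block number, the
    permission level, and the next user's public key."""
    prev_hash = (hashlib.sha256(
        json.dumps(prev_tx, sort_keys=True).encode()).hexdigest()
        if prev_tx else "0" * 64)
    return {"prev": prev_hash, "block": block_no, "level": level,
            "rights": sorted(GRANTS[level]), "next_user": next_user_pubkey}

def verify_chain(txs: list) -> bool:
    """Check that each transaction commits to the hash of its predecessor,
    which is what makes the usage history of a data block traceable."""
    for prev, cur in zip(txs, txs[1:]):
        expected = hashlib.sha256(
            json.dumps(prev, sort_keys=True).encode()).hexdigest()
        if cur["prev"] != expected:
            return False
    return True
```

Altering any earlier transfer invalidates every later link, so the provenance of a data block cannot be silently rewritten.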
\subsection{File sharing}
Khatal \emph{et al.} \cite{khatal2021fileshare} define FileShare, a secure decentralized application framework for sharing files. It uses a decentralized application (DApp) built on Ethereum to register users and provide provenance. A smart contract then governs, manages, and provides traceability of the latest version of encrypted files.
\section{Data sharing use cases}
\label{data_sharing}
In order to fully develop distributed e-governance applications, it is important to investigate other application areas for their usage of data sharing and citizen integration.
\subsection{Healthcare sharing}
Electronic Health Records (EHR) often contain highly sensitive healthcare data which are periodically distributed among healthcare providers, pharmacies and patients for clinical diagnosis and treatment \cite{dubovitskaya2017secure}. Furthermore, critical medical information must be regularly updated and shared, and where proper consent is provided by the patient. Along with this, we need strong availability, fast access, and the appropriate encryption on these records \cite{dubovitskaya2017secure,papadopoulos2021privacy,abramson2020distributed}.
Siyal \emph{et al.} \cite{siyal2019applications} define a number of opportunities for blockchains within healthcare, including those related to transparency, reduced transaction time, improved security, cost efficiencies and irreversible transactions. Alongside these, they see the key challenges as interoperability, scalability, storage, social acceptance, and the requirement for standardization.
Omran \emph{et al.} \cite{omran2022sharded} define a sharded blockchain-based system for sharing diagnostic tests around pandemics, which focuses on privacy-enhanced methods for healthcare data sharing.
This includes the usage of ring signatures, which have unique random identifiers and are unlinkable (but trusted). The authors propose that their method could be used to share healthcare data for research, and report that 2,407,462 records can be shared within 11 minutes, compared to 63 days without sharding.
\subsubsection{MedChain}
Information sharing within healthcare provides strong use cases around citizen permissions and in supporting interoperability. Shen \emph{et al.} \cite{shen2019medchain} implemented MedChain for sharing healthcare data within a blockchain infrastructure.
The general architecture uses a super-peer approach at the top level.
\subsubsection{Tangle-based}
Zheng \emph{et al.} \cite{zheng2019accelerating} define a health data-sharing system using the IOTA Tangle. It uses Masked Authenticated Messaging (MAM), where IOTA users create a channel - identified by an address - to which they publish messages. Users subscribe to this channel and receive messages whenever the channel owner publishes them. Recently, IOTA has also defined a number of use cases for the integration of the IOTA Tangle and EBSI. These include digital management of educational credentials, trusted digital audit trails and document traceability, SME financing, data sharing among authorities, intellectual property rights management, and digital product passports \cite{IOTAEBSI}.
\subsection{Distributed Data Sharing Platforms}
\subsubsection{Distributed Health Care}
Theodouli \emph{et al.} \cite{theodouli2018design} defined a layered blockchain-based architecture:
Layer 1: Web/Cloud Platforms; Layer 2: Cloud Middleware; and Layer 3: Blockchain Network. The system is built on a foundation of smart contracts, using a Registry Contract (RC) which stores the registry of users (medical research centres and patients).
Each user has a unique smart contract address - defined as the Patient Data Contract (PDC). Each ID is unique for every patient and does not reveal their identity. The PDC has hashed patient healthcare data and a URL for the link to the patient's healthcare data. A Permissions Contract (PC) is then used to map the PDC address and the requesting entity (medical research centre), and then links to the patient approval for access.
There are currently several approaches regarding EHR management and how blockchain technology can be utilized to improve it \cite{holbl2018systematic,mayer2020electronic}. Yüksel \emph{et al.} \cite{yuksel2017research} discuss the distinction of EHR storage between distributed and cloud design, and propose that a centralized model could be established by applying the relevant decentralized settings and techniques of the Cloud architectures. Overall, their cloud architecture refers to the structured storage and allocation of massive volumes of medical records among remote third-party service providers. Within the Cloud, healthcare organizations and individuals are able to access the data by utilizing relevant security measures based on identity management, access control, policy integration and compliance management, thus accomplishing common requirements such as scalability, availability and cost-effectiveness \cite{abbas2014review}.
The MedRec software \cite{azaria2016medrec} is a permissionless blockchain implementation, built on the Ethereum blockchain network, which demonstrates auditable and decentralized EHR management. It introduces both questionable transaction scalability and the likelihood of linkability between a transaction and an individual, which could compromise data and user anonymity. MedRec has inspired other authors to build upon it and try to mitigate its issues \cite{yang2017blockchain}.
The MediBchain system \cite{al2017medibchain} establishes a permissioned peer-to-peer approach and is designed on the Ethereum network. It uses a cloud-based server infrastructure to achieve acceptable scalability. Although data held by the participating parties remains in encrypted form, linkability remains an issue. Another concern is the transaction cost, which is not measurable. One solution built on the Ethereum network focuses on the pharmaceutical supply chain: \textit{Modum.io} \cite{bocek2017blockchains}. In this work, Bocek \emph{et al.} \cite{bocek2017blockchains} developed an architecture where IoT devices collect temperature data, and mobile devices retrieve this data and connect to an HTTP server. The HTTP server acts as a blockchain node and stores the data in PostgreSQL databases, using smart contracts. Their real-world scenario looks promising and could be adapted to an EHR use case, although it is not decentralized at each stage: if the HTTP server is hijacked by attackers, the collected data is vulnerable and exposed. Consequently, its usage in an EHR scheme is not advisable, considering the sensitive nature of EHR data.
\subsubsection{Permissioned approaches}
Stamatellis \emph{et al.} \cite{stamatellis2020privacy} utilized Hyperledger Fabric to incorporate its private data collection key feature. In their work, the authors created a privacy-preserving healthcare scenario in which various participating entities can store and share data securely. The Idemix suite is used to generate the participants' credentials to further enhance their privacy and anonymity. Idemix is a Zero-Knowledge Proof (ZKP) cryptographic protocol that ensures the anonymity and unlinkability of the credentials \cite{stamatellis2020privacy}.
Ichikawa \emph{et al.} \cite{ichikawa2017tamper} presented solutions that utilize Hyperledger Fabric in order to store healthcare records collected from mobile devices. The authors built their model on the older v0.5 version of Hyperledger Fabric, which does not have the private data collection feature; the v1.4 version (the version that PREHEALTH \cite{stamatellis2020privacy} is built upon) has it. Moreover, their model does not support the Idemix suite to create the necessary privacy guarantees. Overall, their system is able to store data in an immutable ledger, but without any privacy protection for end-users. It should be noted that incorporating the private data collection feature into their system is not possible without a complete re-design of their architecture. Likewise, Liang \emph{et al.} \cite{liang2017integrating} utilized Hyperledger Fabric to simulate a real-world scenario with different participating entities. In their system, the represented entities include users, wearable devices, healthcare providers, insurance companies, blockchain networks and cloud databases. However, later versions of Hyperledger Fabric introduced new challenges that their work would need to address in case of an architecture re-design and revision to a newer version that incorporates the private data collection feature. MeDShare \cite{xia2017medshare} has many concepts similar to their work; however, the backbone blockchain framework is not explicitly selected. Moreover, the authors focus more on discussing the fundamental building blocks of blockchain technology, such as data blocks and smart contracts, than on a proposed solution.
A decentralized privacy-preserving blockchain solution built on a permissionless blockchain network, and which combines cloud storage and medical data acquired from IoT devices, is examined in \cite{dwivedi2019decentralized} in the context of developing a novel hybrid EHR management method. The key advantages of the proposed solution include the utilization of lightweight symmetric and asymmetric key encryption (also known as public key encryption) in order to achieve effective anonymity and client authorization; however, performance efficiency and GDPR compliance have not been examined.
\subsubsection{Other implementations}
Sharma, Chen and Sheth \cite{sharma2018toward} examine kHealth, a practical privacy-preserving cloud-based system that handles health data acquired from the Internet of Things (IoT) devices. Their system aims to build personalized health predictive models by applying efficient, but computationally expensive privacy guarantees, as it employs various homomorphic encryption and differential privacy techniques. Meanwhile, it should be noted that scalability optimizations often undermine privacy-protection mechanisms.
Dubovitskaya \emph{et al.} \cite{dubovitskaya2015cloud} introduced a scalable and privacy-preserving e-Health cloud system where medical data is encrypted through public-key cryptography over both local and cloud-stored databases, and efficiently distributed by access control policies specified by the patient. A limitation of this implementation is the possible misconduct of trusted cloud server providers; these could violate anonymity by deducing private information from the user's IP address. They could, also, associate a pseudonym with a patient by launching an inference attack.
Casino \emph{et al.} \cite{casino2019systematic} addressed how blockchain technology could enhance several applications within the healthcare industry, such as the management of EHR, drug counterfeiting, and user-oriented medical research.
Ming and Zhang \cite{ming2018efficient} proposed an efficient Privacy-Preserving Access Control (PPAC) scheme for EHR cloud-based systems and which utilizes the cuckoo filter and a novel Attribute-Based Signcryption (ABSC) mechanism, in order to achieve anonymity and computational efficiency. They provided extensive privacy guarantees and comparative performance evaluation results. However, compliance with GDPR has not been investigated.
Roehrs \emph{et al.} \cite{roehrs2017omniphr} distributed Personal Health Record (PHR) information into data blocks. From a logical point of view, the data storage seems centralized, but, in fact, it is decentralized among the participating devices. The authors noted that their proposed protocol \textit{openPHR} is feasible, extensible, and elastic, and can be adopted in practice by many organizations. Their architecture is presented in detail, but the practicality of their work is questionable. Additionally, the authors mentioned that security and privacy are still lacking in their approach. It should be noted that a PHR is controlled by the patient, in contrast to an EHR, which is controlled by a healthcare institution. However, both EHRs and PHRs are electronically stored and distributed and thus may be evaluated in terms of performance and scalability metrics, privacy-preserving features, and GDPR compliance.
\section{IPFS use cases}\label{ipfs_usecases}
IPFS (InterPlanetary File System) implements a distributed infrastructure using P2P (peer-to-peer) methods, with no centralised server. As with torrent networks, it is defined as being censorship-resistant \cite{henningsen2020mapping}. Benet \cite{benet2014ipfs} outlines that IPFS can be likened to the Web with content-addressed hyperlinks, or to a single BitTorrent swarm exchanging objects within one Git repository.
IPFS breaks files up into blocks and uses a Merkle DAG (Directed Acyclic Graph) and a distributed hash table. Within a traditional blockchain infrastructure, we sequentially store transactions, and it can take some time to create a consensus through the building of blocks. With a DAG, each transaction becomes a block, which speeds up the consensus mechanism. Sergio Demian Lerner \cite{lerner2015dagcoin} outlined that in a DAG there are no fixed blocks, and that each transaction brings with it its own proof of work. Within this, he defined the usage of a fast cache for the most recent transactions, where older transactions cannot be used as a reference.
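The content-addressing principle behind such a Merkle DAG can be sketched in a few lines, assuming SHA-256 as the hash function and a toy block size; this illustrates only the idea, not IPFS's actual chunking or CID format.

```python
import hashlib

BLOCK_SIZE = 4  # tiny block size, for illustration only

def block_cid(data: bytes) -> str:
    # Each block is identified by the hash of its own contents.
    return hashlib.sha256(data).hexdigest()

def merkle_root(file_bytes: bytes) -> str:
    # Split the file into blocks and hash each one.
    blocks = [file_bytes[i:i + BLOCK_SIZE]
              for i in range(0, len(file_bytes), BLOCK_SIZE)]
    cids = [block_cid(b) for b in blocks]
    # A parent DAG node links its children by their CIDs; its own CID is
    # the hash of those links, so any change below changes the root.
    return hashlib.sha256("".join(cids).encode()).hexdigest()

root = merkle_root(b"hello ipfs")
assert root == merkle_root(b"hello ipfs")   # content-addressed: deterministic
assert root != merkle_root(b"hello ipfs!")  # any edit yields a new root CID
```

Because the root identifier is derived from the data itself, two nodes holding the same file necessarily agree on its address without any coordination.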
\subsection{Architecture}
Chen \emph{et al.} \cite{chen2017improved} define four core layers for storage (Layer 4), routing (Layer 3), virtual chain (Layer 2), and blockchain (Layer 1). Within the blockchain layer, it is possible to build a new blockchain or use Bitcoin's blockchain. A significant and prominent distributed database technology that elaborates on blockchain technology is Blockstack \cite{ali2017blockstack}. Blockstack operates by default using the Gaia distributed database \cite{sood2022decentralised}, which is able to store its data decentralized in the users' web browsers instead of a centralized web server, thus enhancing privacy. Blockstack recently released Stacks 2.1, which is built on top of the Bitcoin blockchain, in order to utilise smart contracts and decentralised applications \cite{ali2020stacks}.
For the virtual chain layer, transactions are processed and verified, and then sent to the blockchain layer to be stored. Each transaction must be signed by the private key of the sender, and is verified with the corresponding public key. Typical transactions are one for a node to bind its IP address to its associated account (such as defined by its public key), and one to declare the files that it is associated with. Files can either be \textbf{long-term immutable} or \textbf{occasionally mutable}, and are broken into blocks to be bartered in the BitSwap protocol. Each of the blocks is then identified with a \textbf{content identifier (CID)}. With the \textbf{BitSwap protocol}, nodes distribute \textit{want-lists} to their peers, containing the list of CIDs for blocks that they want to receive. Each node remembers which blocks its peers want. Whenever a node receives a block, it checks its list to see if one of its connected peers wants the received block. The BitSwap protocol thus involves a node having two lists: the blocks it wants, and the blocks it has. Nodes barter between themselves, whereas within a BitTorrent exchange the data is exchanged in a single torrent.
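The want-list barter described above can be sketched as follows; the class and function names are illustrative, not the actual BitSwap implementation, and block transfer is simulated in memory.

```python
import hashlib

def cid(data: bytes) -> str:
    # A block's content identifier (CID) is derived from its hash.
    return hashlib.sha256(data).hexdigest()

class Node:
    def __init__(self, name, blocks=None):
        self.name = name
        self.have = dict(blocks or {})  # cid -> block data held locally
        self.want = set()               # the node's want-list of CIDs

    def receive(self, block_cid, data):
        self.have[block_cid] = data
        self.want.discard(block_cid)

def exchange(a, b):
    # Each peer checks the other's want-list against the blocks it has,
    # and sends over any matches - the nodes barter block for block.
    for peer, other in ((a, b), (b, a)):
        for block_cid in list(other.want):
            if block_cid in peer.have:
                other.receive(block_cid, peer.have[block_cid])

blob = b"block-1"
alice = Node("alice", {cid(blob): blob})
bob = Node("bob")
bob.want.add(cid(blob))
exchange(alice, bob)
assert bob.have[cid(blob)] == blob and not bob.want
```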
The routing layer extracts the information from Layer 2 and maps the routing address of an account to the associated files or blocks under that account. The storage layer (Layer 4) is where the data itself is stored (for both mutable and immutable storage). In \cite{chen2017improved}, the authors make improvements to IPFS by adding a zig-zag file storage structure in order to provide a triple replication scheme (for frequently used data) and an erasure-code storage scheme (for infrequently used data). The authors define that the method can differentiate between hot data and cold data. Hot data is stored near the computation, where there is fast access to the data, whereas cold data can be stored within cloud storage.
\subsection{IoT integration}
Muralidharan \emph{et al.} \cite{muralidharan2019interplanetary} implemented an IoT network using IPFS, where nodes and data are addressed using unique cryptographic hashes. Overall routing is implemented using Distributed Hash Tables (DHTs), which can be used to find and publish data for peers.
These DHTs can store data within the network, and can thus reduce the latency in accessing data. A Merkle DAG along with Git versioning keeps the infrastructure up-to-date. For security, IPFS uses secure file sharing and encrypted communication.
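A Kademlia-style DHT of this kind assigns each key to the node whose ID is closest by XOR distance to the key's hash. The following is a minimal in-memory sketch under that assumption; the names and ID sizes are illustrative, not IPFS's actual routing implementation.

```python
import hashlib

def node_id(name: str) -> int:
    # Both node names and keys are mapped into the same ID space.
    return int.from_bytes(hashlib.sha256(name.encode()).digest()[:8], "big")

class MiniDHT:
    def __init__(self, node_names):
        self.nodes = {node_id(n): {} for n in node_names}

    def _closest(self, key_hash: int) -> int:
        # Kademlia assigns a key to the node whose ID has the smallest
        # XOR distance to the key's hash.
        return min(self.nodes, key=lambda nid: nid ^ key_hash)

    def put(self, key: str, value: str):
        self.nodes[self._closest(node_id(key))][key] = value

    def get(self, key: str):
        return self.nodes[self._closest(node_id(key))].get(key)

dht = MiniDHT(["node-1", "node-2", "node-3"])
dht.put("QmExampleHash", "block data")
assert dht.get("QmExampleHash") == "block data"
```

Because lookup recomputes the same closest node as storage, any peer can locate a value knowing only the key, without a central index.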
Nizamuddin \emph{et al.} \cite{nizamuddin2019decentralized} implemented an IPFS infrastructure using Ethereum. They have documented their Solidity smart contracts on GitHub \cite{gitNizamuddin} and tested them under Remix, an Ethereum IDE (Integrated Development Environment) which allows for the development of smart contracts \cite{remix2021}.
Their system defines Developers (D) and Approvers (A). These are identified by Ethereum addresses, and the creator of a document creates one smart contract for each document. Developers are then responsible for uploading the document (off-chain) to the IPFS file system. The system requires new developers/approvers to be approved by a two-thirds majority of existing approvers.
\subsection{Performance}
Henningsen \emph{et al.} \cite{henningsen2020mapping} measured the performance of IPFS and its Kademlia-based distributed hash table (DHT). They observed an average of 44,474 nodes, of which 52.19\% resided behind a NAT. They found that the infrastructure was robust and resistant against Sybil attacks, but identified weaknesses related to performance and the privacy of queries.
Naz \emph{et al.} \cite{naz2019secure} implement a data-sharing model with a number of entities:
\begin{itemize}
\item \textbf{Owner}. This relates to the entity that is sharing the data, such as a government entity.
\item \textbf{Customer}. This relates to an entity which can download files from an IPFS server using reconstructed hashes.
\item \textbf{Workers}. These help customers to decrypt the content, authenticate new customers through signatures, and query smart contracts for customer data requests.
\item \textbf{Arbitrator}. This entity resolves disputes between buyers and sellers for the requested content.
\end{itemize}
With Naz's model \cite{naz2019secure}, an owner creates metadata for a file they wish to share, such as its filename, its file type, its file size, and its description. This information, and a complete copy of the file data, are then added to the IPFS.
Once loaded onto the IPFS, the owner receives the hashes of the data back and then contacts trusted worker nodes. These worker nodes have their key pairs stored within smart contracts and are responsible for decrypting content. The file hashes are split into shares using the Shamir Secret Sharing (SSS) method and encrypted using random keys. These shares are then stored, along with security information, on a blockchain. It is important to encrypt these hashes, as an adversary could rebuild the file based on the hashes. Only valid customers who have paid for access can then rebuild the files. A secret $S$ is split into shares $\{S_1, \ldots, S_n\}$, where there are $n$ shares with a threshold of $k$; overall, $k$ shares are required to rebuild the secret. These shares are stored and encrypted in a smart contract and can only be decrypted and rebuilt by verified workers (who are found by their public key by the owner).
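The $(k, n)$-threshold splitting can be sketched with a textbook Shamir implementation over a prime field. The field size and function names below are illustrative, not those of \cite{naz2019secure}; a production scheme would use a vetted library.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime field, large enough for a toy secret

def make_shares(secret: int, n: int, k: int):
    # Random polynomial of degree k-1 with the secret as constant term.
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    # Lagrange interpolation at x = 0 recovers the constant term (the secret).
    total = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        total = (total + yi * num * pow(den, -1, PRIME)) % PRIME
    return total

shares = make_shares(123456789, n=5, k=3)
assert recover(shares[:3]) == 123456789   # any k shares suffice
assert recover(shares[2:5]) == 123456789
```

Fewer than $k$ shares reveal nothing about the secret, which is why storing the shares (rather than the raw file hashes) on-chain is safe against an adversary who reads the ledger.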
Ali \emph{et al.} \cite{ali2017iot} used a side-chain method to preserve network privacy, where a validator node runs the side chain.
Within the network, each IoT device has public and private keys, which it uses to encrypt data for the validator. The validator then adds data onto a side chain. A smart contract provides communication between the device and the validator. It also stores the public key and the hash of the IPFS entry storing a device's data, as well as the public key and access rights of requesters from the consortium.
Kumar \emph{et al.} \cite{kumar2019implementation} outlined a way to implement IPFS networks in order to reduce the transaction size of blocks in the blockchain, and to provide content-addressed access to transactions. With this, miners store transactions on the IPFS distributed storage. They then get back the IPFS hash of each transaction and store it in a block on the blockchain.
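This off-chain pattern, storing the full transaction in content-addressed storage and keeping only its hash on-chain, can be sketched as follows; the in-memory \texttt{ipfs} dictionary and function names are stand-ins, not the authors' implementation.

```python
import hashlib
import json

ipfs = {}  # stand-in for IPFS: a content-addressed store

def ipfs_add(obj) -> str:
    # Store the full transaction off-chain; its hash is the content address.
    data = json.dumps(obj, sort_keys=True).encode()
    h = hashlib.sha256(data).hexdigest()
    ipfs[h] = data
    return h

def make_block(transactions, prev_hash: str) -> dict:
    # The block records only IPFS hashes, keeping the on-chain size small.
    tx_hashes = [ipfs_add(tx) for tx in transactions]
    header = json.dumps({"prev": prev_hash, "txs": tx_hashes}, sort_keys=True)
    return {"hash": hashlib.sha256(header.encode()).hexdigest(),
            "prev": prev_hash, "txs": tx_hashes}

block = make_block([{"from": "a", "to": "b", "amount": 5}], "00" * 32)
# Any transaction can be recovered from IPFS via its content address.
assert json.loads(ipfs[block["txs"][0]]) == {"from": "a", "to": "b", "amount": 5}
```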
\subsection{Applications}
Sun \emph{et al.} \cite{sun2020blockchain} used ciphertext-policy attribute-based encryption (CP-ABE) and IPFS for the secure storage and sharing of electronic medical records. CP-ABE controls the access to encrypted files, and IPFS then stores these in a distributed form.
The system is made up of: blockchain, IPFS, the medical system, and the data user (DU). In this case, CP-ABE is used to generate an encryption key based on a policy which is made up of a number of attributes. The workflow is \cite{sun2020blockchain}:
\begin{itemize}
\item Initially a public-private key pair is created for Bob (the patient) based on his attributes. Bob then goes to a hospital and doctor Alice diagnoses a condition.
\item Alice encrypts the diagnosis ($R$) with Bob's public key, giving ciphertext $CT$, and signs the diagnosis ($CT' = (CT, sig_R)$). She then uploads it to IPFS and generates an index for keywords. IPFS returns a hash address (HASHID) for the file ($h$).
\item On receipt of the hash address, Alice encrypts $h$ with a random number and hashes the medical records and their index with SHA-256. The hash value ($h_R$) and the encrypted hash ($h'$) are then stored on the blockchain by broadcasting the transaction. $h_R$ is the hash value of the record, and $h'$ is the encrypted hash address.
\item A transaction ID (TRANSID) is then returned from the blockchain.
\end{itemize}
To recall the record:
\begin{itemize}
\item Bob sends an access request with keywords to the hospital. If he has the rights of access, a search token is returned ($ST_w = (ID, h, \gamma)$).
\item Bob verifies the hash address $h$ contained in the search token $ST_w$, and downloads the encrypted medical record ($CT'$) from IPFS using the hash address $h$.
\item Bob decrypts the ciphertext with his private key and obtains the file.
\end{itemize}
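Setting the CP-ABE encryption itself aside, the hash-anchoring part of this store-and-retrieve workflow can be sketched with standard-library hashing. The in-memory \texttt{ipfs} and \texttt{blockchain} stores and the function names are illustrative stand-ins, not the authors' implementation; in this toy the IPFS address and the on-chain record hash coincide because both are SHA-256 of the same bytes, whereas in the scheme above the record is encrypted and the address $h$ is additionally encrypted to $h'$.

```python
import hashlib

ipfs = {}        # stand-in content-addressed store: hash -> bytes
blockchain = []  # stand-in ledger: list of broadcast transactions

def upload_record(record: bytes) -> int:
    # IPFS returns a hash address h for the uploaded (normally encrypted) file.
    h = hashlib.sha256(record).hexdigest()
    ipfs[h] = record
    # Store the record hash h_R (and, in the real scheme, the encrypted
    # address h') on the blockchain by broadcasting a transaction.
    blockchain.append({"h_R": h, "h": h})
    return len(blockchain) - 1  # the TRANSID returned from the blockchain

def fetch_record(trans_id: int) -> bytes:
    tx = blockchain[trans_id]
    record = ipfs[tx["h"]]  # download via the hash address
    # Verify integrity of the retrieved record against the on-chain hash.
    assert hashlib.sha256(record).hexdigest() == tx["h_R"]
    return record

tid = upload_record(b"diagnosis: ...")
assert fetch_record(tid) == b"diagnosis: ..."
```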
The advantage of this scheme is that Bob must be authorized to gain access to private data. The weaknesses of the system include its inability to revoke access attributes and to handle expired users \cite{sun2020blockchain}.
Taborda \emph{et al.} \cite{taborda2020decentralized} created a Web platform to store hotel information in an image repository. This uses IPFS and blockchain in order to improve security and access control. Hao \emph{et al.} \cite{hao2018safe} define a data storage system which uses IPFS to store video, images, and real-time monitoring data reported from sensors for agricultural product tracking.
Nizamuddin \cite{nizamuddin2018ipfs} defines an IPFS/smart contract solution to prove the originality and authenticity of published work and digital content. They use Ethereum smart contracts in order to govern, manage, and provide traceability and visibility of the history of digital content from its original version to the current version. In the work, they create a use case of an online book publication.
Vishwavidyapeetham \emph{et al.} \cite{vishwavidyapeetham2018blockchain} apply IPFS to making notes within research projects. Their system uses a traditional encryption method to create secure documents for record keeping and Ethereum smart contracts to track the encrypted files.
Patsakis \cite{patsakis2019hydras} defines Resource Identifier Generation Algorithms, which extend Domain Generation Algorithms (DGAs), often used by cybercriminals for botnet management and communication. Their system extends beyond DNS to use IPFS. Overall, it hosts malicious content and explores ways that a botmaster deploys and controls bots.
Karapapas \emph{et al.} \cite{karapapas2020ransomware} define that decentralized systems provide opportunities for cybercriminals to perform illegal activities. Overall, Nizamuddin \emph{et al.} \cite{nizamuddin2019decentralized} use IPFS and define a number of drivers for a business model using cryptocurrencies. With its integrated version control system, their infrastructure enables tamper-proof version control. Dwivedi \cite{dwivedi2022smart} defines the usage of smart contracts and an IPFS infrastructure to improve data sharing (using data contracts) and device authentication within a fog computing/IIoT (Industrial IoT) infrastructure.
\section{European e-ID}
\label{eu_eid}
Different countries have different approaches to identity and consequently also to electronic identity. On a global scale, some well-known implementations are the ICAO electronic passport system \cite{pasupathinathan2008line} which is used by most countries worldwide, and the Indian Aadhaar system \cite{khera2017impact}. Depending on the political and cultural situation in a particular country, the degree of trust in such a system varies. Some countries do not have a national identity scheme. In the UK, a previous national ID scheme was cancelled \cite{GOV.UK2011} on the back of a single blog post \cite{Saunders2006}.
\subsection{EU electronic identity}
\subsubsection{eIDAS}\label{subsubsec:eidas}
Within the European Commission, DG Connect (the Directorate General for Communications Networks, Content \& Technology) took the initiative for the \textbf{e}lectronic \textbf{ID}entification, \textbf{A}uthentication and trust \textbf{S}ervices (eIDAS) regulation, covering electronic identification, authentication and trust services. The main legal document relates to electronic identification and trust services in the eIDAS Regulation \cite{eidasreg}. The implementation of eIDAS is technically supported by ETSI (European Telecommunications Standards Institute) and CEN, including the development of standards. Along with this, there are Implementing Decisions and Implementing Regulations for electronic identification \cite{eidasregia-ID-2, eidasregia-ID-3, eidasregia-ID-4, eidasregia-ID-1}, as well as for trust services \cite{eidasregia-TS-2, eidasregia-TS-3, eidasregia-TS-1, eidasregia-TS-4}. Additional regulation addresses, amongst other things, the provision of e-Signature products, Points-of-Single-Contact and Trusted Lists. The European Commission thus publishes an entry point to the Trusted Lists in the form of a List of Trusted Lists.
Delos \emph{et al.}\ \cite{seletal2015eidas} describe how every member state is free to organise its trust ecosystem within the European Union. The FutureTrust project \cite{DBLP:conf/openidentity/HuhnleinFSPSHWN16} researched the foundations of trust and trustworthiness and provided Open Source software components and trustworthy services. Regarding identity, member states act in a sovereign way. Each member state organises the identity of its citizens at its discretion. Most member states provide some form of an electronic authentication mechanism. These mechanisms include userid/password schemes, smart cards and mobile apps.
A member state may notify one or more identity management systems of the EU Commission, which (after acceptance by the other Member States) leads to mutual recognition across the member states. For this purpose, a set of minimum identity attributes has been defined \cite{eidasregia-ID-2} for natural and legal persons. Regarding trust services, a member state may set up a supervisory body in order to monitor Trust Service Providers (TSPs), including Qualified Trust Service Providers (QTSPs). While the supervisory body is a public sector body, most TSPs and QTSPs are private enterprises. The supervisory body will call upon the services of a Conformity Assessment Body (CAB) to evaluate TSPs and QTSPs. Such CABs are typically private enterprises, accredited by a National Accreditation Body (NAB).
The relations between these entities can be summarised as follows. Prospective QTSPs must be audited (\emph{conformity assessed}) by a CAB. There are no prescribed standards for this purpose. However, the following applies:
\begin{itemize}
\item A CAB needs to be accredited by a NAB.
\item A CAB must make its conformity assessment scheme public.
\item The European cooperation for Accreditation\footnote{The EA is the body recognised under Regulation 765/2008 \cite{accreditationregEEA} to manage a peer evaluation system across European NABs} (EA) adopted Resolution EA 2014 (34) 22 \cite{EA-resolution-2014-34-22} to use an eIDAS accreditation scheme based on ISO/IEC 17065 \cite{ISO17065:2012} supplemented by ETSI EN 319 403 \cite{etsien319403} as one possible way for CABs to assess conformity with the relevant requirements of the eIDAS Regulation \cite{eidasreg}.
\end{itemize}
Terminology and basic definitions for electronic signatures are specified in eIDAS Article 3 \cite{tsakalakis2017identity}. Three levels of increasing reliability and protection against potential misuse are defined for Trust services, particularly electronic signatures: basic; advanced; and qualified. The EU Commission offers an anchor point from where evaluation and validation of identity and trust services can be initiated. This anchor point is legal, functional, and technical. It is based on the combination of a set of legal acts and the online publication of signed metadata. For trust services, the List of Trusted Lists (LOTL), both in a human and machine-readable format, is publicly available. From this meta-data anchor point, parties such as supervisory bodies can be identified, and each such supervisory body can publish the trusted list for its territory. Within these trusted lists, trust service providers are identified. Qualified TSPs are subject to mandatory supervision and conformity assessment, including bi-annual audits and the use of qualified hardware and software.
Regarding trust services, eIDAS Chapter III defines general provisions, the organisation of supervision, and mutual assistance amongst supervisor bodies. It defines specific requirements for TSPs and QTSPs, such as the bi-annual audit and the security breach notification requirement. Dedicated sections of eIDAS Chapter III define requirements for electronic signatures and seals, as well as electronic time stamps, electronic registered delivery services, and website authentication.
\subsubsection{The envisaged European identity wallet}
In 2021, an updated proposal \cite{EuropeanCommission2021a} to eIDAS was recommended \cite{EuropeanCommission2021b}, which requires all EU member states to provide a European Digital Identity Wallet (EDIW), as prior sections of this work have described. Furthermore, the updates to eIDAS introduce four new Trust Services including \cite{EuropeanCommission2021a, Thales2022}:
\begin{itemize}
\item Electronic Ledgers.
\item Qualified Electronic Archiving service.
\item Management of remote electronic signature and seal creation devices.
\item Qualified electronic attestation of attributes verified against authentic sources.
\end{itemize}
This opens up a realm of possibilities and supports distributed ledger technologies, verifiable credentials and self-sovereign identity for electronic signature use cases.
While many countries are moving towards digital ID schemes, it is perhaps the EU that has the answer to getting citizens on board. With the European Digital ID (EUDI) approach, there is the ambition that every EU citizen has the option to request a governmental e-ID in the form of an electronic wallet. The pilot phase for digital wallets will begin in 2023, and every EU member country will offer a Digital ID Wallet by 2024. A core part of this will be compliance with GDPR and integration with the legal framework of the eIDAS regulation. Its vision is to break down existing siloed digital systems.
Thales \cite{Thales} conducted a survey to understand how citizens perceive an EU-derived wallet, which could store a citizen's ID, driving licence, and other relevant documents that could be used to prove someone's identity. It showed that around 27\% of those surveyed currently use some form of national ID scheme to prove their identity. A significant finding is that privacy and security are major concerns for citizens, with 65\% of those in the survey saying that security was the most important feature of the wallet, followed by convenience and privacy. Another finding was that there were significant differences in attitudes to the wallet in different countries: in France and Italy, the level of likely adoption was surveyed at 85\% and 75\%, respectively. Age, too, plays a factor, with younger people more accepting of the adoption of digital wallets.
\section{European Blockchain Partnership}
\label{eu_blockchain_part}
In 2018, 27 EU Member states, Norway and Liechtenstein signed up to the European Blockchain Partnership (EBP) \cite{queiruga2022self}. This led to the creation of the European Blockchain Services Infrastructure (EBSI). In 2022, Ukraine became the third non-EU member to join EBP \cite{EBP}. Within EBP, there are currently four main use cases: Self-Sovereign Identity, Diploma award \cite{vasileedis}, Document Traceability and Trust Data Sharing. With the European Self-Sovereign Identity Framework (ESSIF), we have a trusted method of identifying citizens, thus allowing them to create their own digital identity. There is thus no need for trusted third-party trust providers for identity checking. ESSIF also aligns with GDPR and eIDAS, and EBSI is a public permissioned blockchain where digital credentials are stored in wallets that citizens own and control \cite{grech2021blockchain}. This means that citizens have full control of their identities and their associated data. The blockchain itself does not store any personal information. Baldacci \emph{et al.} \cite{baldacci2021advancing} define that the core principles of EBSI are:
\begin{itemize}
\item \textbf{Public and permissioned}. The identity of all participating nodes must be governed.
\item \textbf{Decentralized}. Each member should run its own node or set of nodes.
\item \textbf{Scalable}. Support of high-throughput and a high number of nodes.
\item \textbf{Open specifications}. EU Public License and free from patent and IP protection.
\item \textbf{Sustainable}. Energy-efficient consensus mechanism.
\item \textbf{Interoperable}. Should foster interoperability via alignment with the work of standardization bodies such as ISO, CEN or ETSI.
\end{itemize}
In 2020, a number of proponents (DIZME, Findy, Lissi and MeineSichereID) outlined their collaboration within the Trust over IP Foundation \cite{EBSI2020}, with the goal of achieving a European SSI Network. A key focus of their statement relates to the integration of EBSI with the ToIP stack and ESSIF, and thus a move towards a common single market for data across both private and public services.
Overall, the European Commission (EC) has developed a number of blockchain strategies \cite{fulli2021policy}, including the regulation of crypto-assets \cite{sandner2022crypto} and the development of market infrastructures based on distributed ledger technology \cite{priem2022european}. These involve the integration of joint visions through the EBP, and the EC has since invested in EBSI \cite{EBSI}. Along with this, the EU Commission has defined ESSIF (the European Self-Sovereign Identity Framework) \cite{pohn2021eid} as a framework for enhancing interoperability between national SSI schemes \cite{konstantinidis2021identity}.
\subsection{EBSI}
\subsubsection{The EBSI project}
Turkanovic \emph{et al.} \cite{turkanovic2020signing} describe the usage of EBSI, which aims to integrate with public services across the EU. It involves EU member states running both Hyperledger Besu \cite{dalla2021your} and Fabric clients. EBSI's ledger protocol is described as being \emph{pluggable}, and thus it is assumed that either Besu or Fabric can be used \cite{EBSI}. For consensus, each member state has an equal vote on the verification process, and each state runs at least one consensus node. Figure \ref{fig:cocco} outlines the SSI model used by EBSI.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/futureinternet-14-00140-g001.png}
\captionsetup{justification=centering}
\caption{SSI model for EBSI \cite{cocco2022system}}
\label{fig:cocco}
\end{center}
\end{figure*}
Figure \ref{fig:ebsiarchitecture} outlines the architecture for EBSI, in which a customer (such as Alice, the signer for a public service based on the blockchain) can sign for an academic qualification, create a verifiable signature, and link to a cross-border service for an eIDAS-compliant signature \cite{CEFDigital2022}. This then links to the EBSI blockchain infrastructure. With EBSI, each member state has at least one node running the ledger, and reading the information it contains is public. The writing process can only be done by trusted entities.
Hyperledger Besu integrates with the Ethereum blockchain platform, which uses 42-character hexadecimal addresses to identify accounts (derived from a 128-character public key and a 64-character hexadecimal private key) \cite{Ethereum2022}. With this, there is an owner of the blockchain network who has the right to define the addresses that have permission to read and/or write from the blockchain. With Hyperledger Fabric, nodes are identified with an X.509 certificate, and a Fabric root CA (Certificate Authority) is then defined as a \emph{Root of Trust} for all the permissions.
A key focus for a citizen-focused data ecosystem is the usage of digital wallets and verifiable credentials. In Canada, we see the usage of the Verifiable Organizations Network (VON), which can issue digital licenses, permits, and registrations to legal entities \cite{sedlmeir2021digital}. These credentials then integrate with Hyperledger Aries. In Germany, too, Aries is used to issue eID cards, along with travel documents. For a global scope, the Trust over IP Foundation focuses on improving the compatibility of infrastructures using Hyperledger Aries and Indy for verifiable credentials.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/trust01.png}
\captionsetup{justification=centering}
\caption{System landscape diagram of the architecture reference model \cite{turkanovic2020signing}}
\label{fig:ebsiarchitecture}
\end{center}
\end{figure*}
\subsubsection{EBSI Use Cases}
There are four current use cases for EBSI: identity checking, the awarding of a Diploma, social security checking and document traceability.
At the core of EBSI is ESSIF (the European Self-Sovereign Identity Framework), which supports the onboarding and creation of a citizen wallet (Figure \ref{fig:usecase02}). This should allow for interaction with other public and private organisational infrastructures. One core feature is that ESSIF is compliant with GDPR and supports eIDAS. These are important for legal enforceability and citizen privacy, and thus a move toward a European citizen-derived identity.
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/use02.png}
\captionsetup{justification=centering}
\caption{EBSI ESSIF use case \cite{EBSIusecase}}
\label{fig:usecase02}
\end{center}
\end{figure*}
The awarding of a Diploma involves an abstraction of the key roles involved in an academic award: the Accreditation Body; the Educational Organisation; the Student; and the Trusted Accreditation Registry Administrator.
A common identity check that is used when moving between countries is a social security check. This EBSI use case integrates the creation and checking of a PDA1 document\footnote{A PDA1 document is used to prove one's social security status and is issued by the country of origin.}, and for it to be signed by a trusted entity, and then verified in another EU member state. Within document tracing, EBSI focuses on defining ways that allow for trusted audit trails and compliance checks for documents. This involves both off-chain storage of the documents, with on-chain verification.
To support document traceability, EBSI adds a storage layer to its infrastructure layer \cite{baldacci2021advancing}, where documents are kept off-chain and hosted by a trusted organisation (subject to terms and conditions defined by EBSI). As of EBSI v2.0, only distributed storage has been implemented as an off-chain solution. This approach involves using nodes on the EBSI network as data stores via Cassandra DB \cite{EBSI2022}. Although only one data store has been implemented, it should be noted that EBSI's API can support other storage types in the form of flat files, relational databases, key/value stores, and big data stores \cite{EBSI2022A}.
\subsubsection{Trusted health care data sharing with EBSI}
Bittins \emph{et al.} \cite{bittins2021healthcare} outline how EBSI could be used for the sharing of healthcare data across the EU, and thus provide both provenance and the integration of SSI.
With this, a trust relationship is established through XCA (Cross-Community Access). EBSI, through an eIDAS bridge, then defines the permissions and the required verifiable credentials for access to the medical data. It uses XDS (Cross-Enterprise Document Sharing) and SAML (Security Assertion Markup Language) to integrate with existing legacy systems, in order to authenticate and authorize, and also to support patient-informed consent.
\subsection{EBSI Compliance and Security Measures}
A core strength of EBSI is its integration with a range of regulations and legislation, including the NIS Directive \cite{directive2016directive}; The NIS Implementing Regulation; The eIDAS Regulation; and the GDPR Regulation. As EBSI delivers an infrastructure, the legislation listed above may be applied directly to this infrastructure, or it may be applicable when EBSI is used in combination with one or more applications. The latter is particularly the case for the eIDAS Regulation because EBSI – at least for version 2.0 and prior versions – does not offer identity or trust services as defined by the Regulation. Nevertheless, applications that do offer identity or trust services may make use of EBSI.
Furthermore, EBSI must comply with the Commission's internal standards and policies. As a consequence, the use of appropriate security is mandated \cite{EBSIusecase}. Within the context of the security framework described above, a selection of key security measures is listed in Table \ref{table:1}.
\begin{table*}[]
\begin{center}
\caption{A selection of EBSI V2 Security Measures}
\begin{tabular}{p{4cm} | p{7cm}}
Security Measure ID & Security Measure Description \\
\hline
EBSI\_V2\_SMID\_001 & End user identification/authentication based on EU Login and EBSI wallet (covering users)\\
EBSI\_V2\_SMID\_002 & DevOps users identification/authentication based on remote SSH login\\
EBSI\_V2\_SMID\_003 & Application components identification/authentication based on JSON Web Token (JWT) and component key pairs (covering components and enterprises)\\
EBSI\_V2\_SMID\_004 & EBSI API acts as an access mechanism for blockchain and distributed storage services\\
EBSI\_V2\_SMID\_005 & Registry of trusted dapps in a smart contract\\
EBSI\_V2\_SMID\_006 & Registry of trusted issuers (governments and universities) in a smart contract\\
EBSI\_V2\_SMID\_007 & Registry of trusted accreditation organisations in a smart contract\\
EBSI\_V2\_SMID\_008 & Registry of trusted registration authorities in a smart contract\\
EBSI\_V2\_SMID\_009 & Registry of trusted schemes in a smart contract\\
EBSI\_V2\_SMID\_010 & Protection of all smart contracts by an ABAC security policy and a multi-sig smart contract\\
EBSI\_V2\_SMID\_011 & All EBSI front-end components have a TLS certificate\\
EBSI\_V2\_SMID\_012 & All EBSI front-end components are protected by a proxy server\\
EBSI\_V2\_SMID\_013 & All EBSI components are protected by a firewall\\
EBSI\_V2\_SMID\_014 & All EBSI components run on a hardened Operating System\\
EBSI\_V2\_SMID\_015 & For those EBSI components that are cloud-based, the cloud provider's security measures apply \\
\end{tabular}
\label{table:1}
\end{center}
\end{table*}
\section{The GLASS project}\label{sec:Background}
The overall structure of GLASS uses a distributed domain approach (Figure~\ref{jain1}) and is created using Hyperledger Fabric. In the example, we have three distinct sovereign nations that have joined a single channel to expedite the governance infrastructure, and each sovereign nation has two departments that are accountable for endorsing, validating, and committing citizens' data. Each trusted department is then responsible for signing identity documents with its associated private key, proven with its public key on the EBSI ledger.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/distributed_example.png}
\caption{An overview of a distributed data-sharing platform based on permissioned blockchain.}
\label{jain1}
\end{center}
\end{figure*}
\subsection{Decentralised Distributed Ledger and Chaincode}
Blockchain is a decentralised, trustless, tamper-proof, distributed ledger over peer-to-peer networks~\cite{ZibinZheng, Papadopoulos2020}. Blockchain and smart contract technology provide a mechanism for developing efficient access control methods to support secure identification, authentication, and user authorisation. In addition, the immutable audit trail of the blockchain protects it against integrity issues. The smart contract feature, in turn, facilitates user authentication and authorisation under programmable policies in a fully distributed environment~\cite{MarkoHolbl}.
In a permissioned blockchain (such as Hyperledger Fabric) the smart contract interconnects with the ledger, and shapes the core segments of the blockchain system~\cite{chaincode}.
With Hyperledger Fabric, the chaincode is a piece of code that runs on peers within an organisation. It enables the creation and querying of transactions on the shared ledger, and updates the current state of the ledger. This chaincode can be utilised to generate business contracts, define assets, or oversee decentralised applications.
\subsection{Interplanetary File System}
IPFS \cite{benet2014ipfs, ipfs} is a content-sharing protocol that enables nodes to store and transfer digital content in a distributed manner. IPFS uses a peer-to-peer protocol that adopts a content-based addressing approach to resource sharing. It follows a similar approach to BitTorrent, where its distributed nature defines how files can be shared across the network. IPFS can be used to generate a permanent and distributed Web-based system (either public or private).
The IPFS process involves generating a cryptographic hash that can be used as the address, as opposed to the URL approach of existing Web-based access. IPFS thus does not follow a centralised approach; rather, the peers on the network distribute the data. Moreover, when a peer downloads content, it becomes a distributor of that content. Digital content such as directory folders, images, and data stores can be represented in IPFS. IPFS breaks resources down and stores them in a distributed manner. Each block of data is content-addressed using a CID.
\subsection{Content Identifier}
A content identifier, or CID, is a label used to point to resources in IPFS~\cite{Multiformats2020}. It does not indicate where the content is stored (unlike URLs), but forms a kind of address based on the content itself. CIDs have a fixed length, regardless of the size of their underlying content. They are essentially derived from the SHA-256 cryptographic hash of the contents; hence, any minor change to the data will produce a different CID. An example of a CID for the string-based content of '\emph{hello world}' would be: \url{QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o}
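The content-addressing idea can be sketched in a few lines of Python. This is a simplification: it uses the raw SHA-256 digest in hex, whereas a real CIDv0 (such as the Qm-prefixed value above) additionally wraps the digest in a multihash and base58-encodes it.

```python
import hashlib

def content_address(data: bytes) -> str:
    """Return a simplified content identifier: the SHA-256 digest in hex.

    Real IPFS CIDs wrap this digest in a multihash and encode it in
    base58 (CIDv0) or base32 (CIDv1); that packaging is omitted here.
    """
    return hashlib.sha256(data).hexdigest()

# The address depends only on the content, not on where it is stored.
cid_a = content_address(b"hello world")
cid_b = content_address(b"hello world")
cid_c = content_address(b"hello world!")  # one extra character

print(cid_a == cid_b)  # identical content -> identical address
print(cid_a == cid_c)  # any change -> a completely different address
```

Because the address is a fixed-length hash, two nodes holding the same bytes always derive the same CID, which is what makes location-independent retrieval possible.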
\subsubsection{Distributed Hash Table}
A Distributed Hash Table (DHT) is a decentralised key-value lookup table~\cite{benet2014ipfs}. It functions as a distributed database that can store and retrieve information based on the keys that the nodes hold~\cite{Distributed_hash_table}. The main feature of the DHT is that it maps a CID to the location of content in IPFS. Moreover, nodes are permitted to join or leave the network and organise themselves to balance and store data without any central entity being involved in the network. The DHT algorithm adopted by IPFS is Kademlia~\cite{maymounkov2002kademlia}.
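As a minimal sketch of the Kademlia idea, the Python fragment below hashes node names and a CID into the same identifier space and routes the CID to the XOR-closest peer. The peer names and the three-node network are purely illustrative:

```python
import hashlib

def node_id(name: str) -> int:
    """Derive a 256-bit identifier for a node (or a CID) by hashing."""
    return int.from_bytes(hashlib.sha256(name.encode()).digest(), "big")

def xor_distance(a: int, b: int) -> int:
    """Kademlia measures closeness as the XOR of two identifiers."""
    return a ^ b

# A CID is stored on (and looked up from) the peers whose identifiers
# are XOR-closest to it; no central index is needed.
peers = {name: node_id(name) for name in ["peer-A", "peer-B", "peer-C"]}
cid = node_id("QmT78zSuBmuS4z925WZfrqQ1qHaJ56DQaTfyMUF7F8ff5o")

closest = min(peers, key=lambda name: xor_distance(peers[name], cid))
print(f"content is routed towards {closest}")
```

Because every node applies the same distance metric, any peer can independently determine where a given CID should live, which is how the table remains consistent as nodes join and leave.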
\subsubsection{Distributed Infrastructure of Trust}
A core part of the adoption of the GLASS architecture is the trust infrastructure based on country-wide domains. Figure~\ref{fig:trip02} outlines this structure in terms of the mapping of country-specific digital signers and their rights. In the example in Figure~\ref{fig:trip02}, we see a top-level domain structured into the country domain of [DE]. Trusted organisational units, such as the Department of Justice [DE.DE\_Dept\_Justice], can then map onto this, and each trusted organisational unit would have its own signing keys. The associated public key would then be stored within Hyperledger along with the unit's common identity (ID) and its claim rights.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/trip02.png}
\caption{Overview of the trust infrastructure.}
\label{fig:trip02}
\end{center}
\end{figure*}
\subsubsection{Triplets Design Overview}
The triplets stored in Hyperledger Fabric data collections form the core metadata which allows us to decrypt data distributed on IPFS (or via other distribution mechanisms in future work).
Figure~\ref{fig:trip01} outlines the integration of the triplets on the ledger. Each credential file that the citizen (Alice) stores is encrypted with a unique 256-bit encryption key using AES-GCM. This encryption key is then encrypted with the private key of Alice (which is either stored in her digital wallet or within the Hyperledger infrastructure). This encrypted key is then stored along with the CID of the credential file and the location of the file (URI: Uniform Resource Identifier). This location can either point to an IPFS store or to a URL (Uniform Resource Locator).
In order to support trust within each domain, the public key of the trusted credential signer is stored within each country domain (Figure \ref{fig:trust03}). These are marked as trusted signers for given credentials, such as AC for Academic Credential. These signers are only trusted for the credential types defined for them in the trust policy. Each credential is then associated with a credential schema, which is used by the credential signer for core and optional attributes. The signer's public key maps to the structure defined in Figure \ref{fig:trip02}. The trust infrastructure focuses on storing the URI for the encrypted credential, but will not have any access to the contents of the file, as only the citizen can rediscover the encryption key using the private key stored in their wallet.
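The triplet design above can be sketched as follows. Since AES-GCM is not available in the Python standard library, an insecure SHA-256-based XOR keystream stands in for the real cipher, and the CID is simplified to a plain SHA-256 hex digest; both substitutions are for illustration only:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    """Illustrative stand-in cipher: XOR with a SHA-256-derived keystream.
    The real design uses AES-GCM; this is NOT secure, only self-contained."""
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

def make_triplet(credential: bytes, wallet_key: bytes):
    """Build the (CID, encrypted key, URI) record kept in the data collection."""
    file_key = secrets.token_bytes(32)                # unique 256-bit key per file
    ciphertext = keystream_xor(file_key, credential)  # this blob goes to IPFS
    cid = hashlib.sha256(ciphertext).hexdigest()      # simplified content address
    triplet = {
        "cid": cid,
        "key": keystream_xor(wallet_key, file_key),   # file key wrapped by wallet key
        "uri": "ipfs://" + cid,                       # where the ciphertext lives
    }
    return triplet, ciphertext

wallet_key = secrets.token_bytes(32)
triplet, blob = make_triplet(b"Academic Credential: BSc", wallet_key)

# Only the holder of the wallet key can unwrap the file key and decrypt.
recovered_key = keystream_xor(wallet_key, triplet["key"])
print(keystream_xor(recovered_key, blob))   # b'Academic Credential: BSc'
```

Note that the ledger record holds only the wrapped key, the CID, and the URI; the credential itself never touches the chain, matching the off-chain storage model described above.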
\begin{figure*}
\begin{center}
\includegraphics[width=0.9\linewidth]{figures/trip01.png}
\caption{Overview of triplets.}
\label{fig:trip01}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/trust03.png}
\caption{Trust infrastructure within ledger.}
\label{fig:trust03}
\end{center}
\end{figure*}
Since the focus of this work is the integration of Hyperledger Fabric with IPFS, it may be noted that the CID and URI will store the same content. As the IPFS protocol uses the CID to both identify and locate resources, this is as expected for IPFS. However, by keeping the URI as a separate field, we may choose to distribute our content through other mechanisms in future work, such as Dropbox and SharePoint (as examples). In such scenarios, the CID of a resource will remain the same but the URI will differ depending on where the resource is located. The encryption key will also remain the same.
Such an architecture allows for an encryption key to be hosted within an external domain.
\subsection{Resource Distribution}
The InterPlanetary File System is a peer-to-peer content-sharing protocol widely used on the internet. This protocol is used as our primary resource distribution mechanism in the developed prototype. All the resources distributed on the IPFS network, within our scenario, are encrypted. However, for greater security, a private instance of IPFS has been used as the testing ground in the scope of this codebase. A private IPFS functions the same as the public instance of IPFS. However, only nodes that possess a shared private key (referred to as the swarm key) can participate in the private IPFS network. The use of a private IPFS network helps prevent accidental disclosure of confidential or sensitive information.
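As a small illustration, the swarm key is simply a 256-bit pre-shared secret distributed out-of-band to every member node. The sketch below generates one following the go-ipfs swarm.key file convention (a protocol header, the encoding marker, and the key in hex); the exact format line is stated as commonly documented and should be checked against the IPFS tooling in use:

```python
import secrets

def generate_swarm_key() -> str:
    """Generate a pre-shared key file for a private IPFS network.

    The layout follows the go-ipfs swarm.key convention:
    a protocol header, the encoding marker, and a 256-bit key in hex.
    """
    psk = secrets.token_bytes(32).hex()
    return f"/key/swarm/psk/1.0.0/\n/base16/\n{psk}\n"

print(generate_swarm_key())
```

Every node wishing to join the private network places this file in its IPFS repository; nodes without the matching key are rejected at the transport layer, which is what keeps the network closed.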
\section{GLASS e-Governance data sharing model over the EBSI infrastructure}\label{glass_ebsi}
This section discusses the fundamentals of the key building parts of the architecture reference model and exhibits their potential applications. The architecture reference model is assessed by creating a generic model of the GLASS data sharing model and services over the EBSI Infrastructure, as well as a use-case scenario model. Figure \ref{fig:glassover} illustrates a high-level design of the reference model for better understanding. Before discussing the architecture, we will first present an overview of the GLASS and EBSI components and essential services.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/GLASS_EBSI_1.png}
\captionsetup{justification=centering}
\caption{GLASS and EBSI layers}
\label{fig:glassover}
\end{center}
\end{figure*}
\subsection{EBSI}
\subsubsection{Business Application}
Since the GLASS-related business applications are not currently a part of EBSI, this layer's primary focus is to facilitate the integration of public and private sector applications with EBSI, either through the core service APIs or by hosting a node and becoming part of the network. Users call the EBSI business interface, which allows applications to leverage the EBSI-exposed API interfaces to make a transaction to a smart contract in the EBSI ledger.
\subsubsection{Use cases}
The use cases are applications that demonstrate EBSI's capacity to provide public services across international borders. This set of use cases is geared toward streamlining administrative procedures and verifying the integrity of digital information across industries. As a result, efficiency improves and public confidence is boosted. Notarisation of documents, diploma administration, European Self-Sovereign Identity, and trusted data exchange were the first four use cases implemented in EBSI Version 1. SME finance, asylum, and the European Social Security Platform are the additional three use cases that were chosen afterwards.
\subsubsection{Core services}
The core services functional area provides features that can be used in any use case. This layer offers the interfaces for the many EBSI services, both on-chain and off-chain, and contains the application enablers. These microservices are organised into five distinct categories within the core services layer; they include the Integration API; trusted registries; API security management; and the wallet library. In addition to these services, it also provides access to features related to digital identity. EBSI provides a business interface that calls the EBSI-exposed API interfaces to request a transaction to a smart contract.
\subsubsection{Chain and storage}
The supported distributed ledger and storage protocols are part of the chain and storage layer. The EBSI V2 Ledger API allows users to interact with Hyperledger Besu (read and write) and Fabric (read-only) ledgers \cite{li2020survey}. Smart contracts are used to control and enable the execution of trusted transactions, which are recorded on the blockchain. Smart contracts are also used to manage the Trusted Registries.
\subsubsection{EBSI infrastructure}
This layer is responsible for providing generic capabilities and connectivity to blockchain networks. These capabilities include network, compute, and deployment capabilities. It also represents all the infrastructure components necessary to set up an EBSI node. The ledger checks that proposed transactions are valid, approves them and stores them on the chain of blocks. The infrastructure is made up of a decentralised network of nodes that are currently hosted by the participating member states; at present, there are 25 participating nodes.
\subsection{GLASS}
The trust infrastructure based on country-wide domains is a key component of the GLASS architecture's adoption, since it enables a distributed infrastructure of trust between EU member states. In a manner analogous to EBSI, GLASS aims to provide an e-Governance framework that EU member states can employ. It examines the main technologies of a distributed ledger system that can provide governments, citizens, and businesses across the EU with efficient, transparent, cross-border functional, and user-friendly solutions to support public services.
\subsubsection{Business application}
The main goal of this layer is to facilitate and build links between organisations and individuals who can legally use the GLASS services. GLASS provides a user wallet, a key management application that gives the user a graphical interface for storing, managing, and securing digital keys. These keys can be used to sign transactions, statements, credentials, documents, or claims. As a result, users are able to connect with third parties in a reliable manner and form relationships with them.
\subsubsection{Use cases}
The core use cases in GLASS involve: moving to another Member State; a short-term visit to another country; and getting a job abroad \cite{glassusecase}.
\subsubsection{Core services}
This layer is concerned with establishing connections between a GLASS portal, a Hyperledger Fabric ledger, and a private IPFS network instance in order to establish a trusted e-governance data-sharing model. To achieve this, GLASS makes use of a number of identity trust capabilities and APIs that permit blockchain-based services to reach a superior standard of identity assurance and trust. Credential signing, credential sharing, ledger transactions, and encryption are all part of the core services, as are APIs for issuing verifiable credentials associated with electronic identification (eID).
\subsubsection{GLASS portal}
GLASS develops an e-government portal that incorporates distributed ledger technologies into a private instance of an IPFS network. This is to record and store an encrypted verifiable credential on a Hyperledger Fabric ledger with access restrictions. The GLASS portal aids in sharing the encrypted resource over the (private) IPFS cluster, in the course of which a CID for the encrypted resource is created. Since IPFS uses the CID to identify (hash) and locate resources, it also functions as a URI. Along with delivering the encrypted resource on IPFS, the GLASS portal communicates with Hyperledger Fabric to store the CID, URI, and encryption key used to encrypt the resource. The user is then given the result of the status check: if the steps were completed without error, the user is given a CID and URI that correspond to their encrypted resource, letting them, or others with permission, find it in the future.
\subsubsection{Chain and storage}
GLASS can use IPFS, an Internet-wide protocol for peer-to-peer content exchange, as a storage option; this involves the integration of Hyperledger Fabric with IPFS. IPFS is used to support the storing of verifiable credentials in any location. In terms of security, the design makes use of a URI (Uniform Resource Identifier), a CID (Content Identifier), and a protected encryption key. These are then recorded in a secure Hyperledger Fabric record within the EBSI architecture. Both the URI and CID will refer to the same data; however, keeping the URI as a separate field increases adaptability and makes it possible for the content to be stored and shared across several platforms, including Dropbox and SharePoint. In these situations, the CID of a resource will remain the same even if its location changes, but its URI will differ depending on where the resource is located. In addition, the encryption key will not be altered. Such an architecture makes it possible for an encryption key to be stored in an independent domain.
Hyperledger Fabric uses an internal piece of code called \emph{chaincode} that trusted peers execute within an organisation. It allows new transactions to be added to the distributed ledger, makes it possible to query those transactions, and updates the ledger's current state. This chaincode can be used to create legally binding contracts, define assets, or manage distributed applications. We develop a GLASS-ipfs chaincode in Hyperledger Fabric so that users can create GLASS resources and read the GLASS resource key. Our system's chaincode comprises numerous functions; for instance, there are functions that allow a user to insert a new triplet and to read an existing triplet to obtain the encryption key of a resource.
\subsubsection{GLASS ledger}
Our permissioned blockchain, Hyperledger Fabric, is utilised for storing the triplets (CID, key, and URI) for each resource that is encrypted and distributed on IPFS. These triplets are stored in a hashed form. A data collection is used to store the triplets, while the ledger itself is used to record read/write operations for the purposes of auditing and record-keeping.
Because encryption keys are sensitive, access control is essential; the GLASS project has created two generic organisations, org1.org and org2.org, to show how access policies work in Hyperledger Fabric. By default, org1.org has full access to generate new triplets and read existing triplet values. On the other hand, org2.org only has the ability to generate new triplets; it does not have access to any encryption keys. To achieve this, the Hyperledger Fabric environment is set up with two data collections: one that is public and can be read by both organisations, and another that is private and can only be read by org1.org. The first collection holds the CID and URI of the GLASS resources, while the encryption key is kept in the second collection. Combined, these two data collections implement the proposed triplet concept.
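The two-collection access policy described above can be modelled in a few lines of Python. The collection names, policy sets, and function signatures are illustrative only and do not reflect the actual GLASS chaincode:

```python
# A minimal model of the two Fabric data collections described above; the
# names, policy, and API are illustrative, not the actual GLASS chaincode.
shared_collection = {}    # CID -> URI; readable by both organisations
private_collection = {}   # CID -> encryption key; readable by org1.org only

WRITERS = {"org1.org", "org2.org"}   # both may create triplets
KEY_READERS = {"org1.org"}           # only org1.org may read keys

def insert_triplet(org: str, cid: str, uri: str, key: str) -> None:
    """Both organisations may generate new triplets."""
    if org not in WRITERS:
        raise PermissionError(f"{org} may not write triplets")
    shared_collection[cid] = uri
    private_collection[cid] = key

def read_key(org: str, cid: str) -> str:
    """Only members of the key-reader policy may recover encryption keys."""
    if org not in KEY_READERS:
        raise PermissionError(f"{org} may not read encryption keys")
    return private_collection[cid]

insert_triplet("org2.org", "QmExampleCid", "ipfs://QmExampleCid", "file-key")
print(read_key("org1.org", "QmExampleCid"))   # file-key
try:
    read_key("org2.org", "QmExampleCid")      # denied by policy
except PermissionError as err:
    print(err)
```

In the real deployment the same separation is enforced declaratively through Fabric's private data collection policies rather than in application code, but the effect is the one sketched here: both organisations can write, while key material stays readable only within org1.org.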
\subsection{The GLASS over EBSI model}
This section describes the proposed reference model architecture, which is made up of numerous identity trust components that enable blockchain-based services to attain a high level of identity assurance. The architecture reference model can take advantage of the EBSI service by integrating its eSignature and eID properties, which are capable of EU cross-border identification based on the eIDAS network, since the infrastructure spans all EU Member States. In addition, the reference model fosters confidence by encrypting and storing user credentials in IPFS, which all EU member states can subsequently verify, ensuring that a robust association is created between the user's digital identity and their entity in the physical world. As an example, the architecture reference model considers a student from a university in a European country who can use the GLASS e-governance data sharing and trust model to request a qualification in the form of verifiable credentials signed by the university. The student can then present this verifiable qualification in the form of a verifiable presentation that an employer can verify within the ecosystem (Figure \ref{fig:student01}) against the required credentials (Figure \ref{fig:student02}). Figure \ref{fig:overeb} illustrates a high-level design of the GLASS reference model for better understanding.
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/student_use.png}
\captionsetup{justification=centering}
\caption{GLASS example with academic credential presentation}
\label{fig:student01}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/student_use2.png}
\captionsetup{justification=centering}
\caption{Example credentials}
\label{fig:student02}
\end{center}
\end{figure*}
\begin{figure*}
\begin{center}
\includegraphics[width=0.8\linewidth]{figures/GLASS_EBSI_4.png}
\captionsetup{justification=centering}
\caption{GLASS over EBSI model}
\label{fig:overeb}
\end{center}
\end{figure*}
EBSI differs from GLASS in that it does not include the user's wallet as part of its design, whereas GLASS provides one. However, EBSI provides all the APIs required to communicate with a blockchain ledger and conduct verified transactions, and therefore permits any wallet capable of generating an EBSI-compliant DID. GLASS, on the other hand, offers wallet functionality for both citizens and organisations. Consequently, the GLASS wallet can be used to identify users and associate their DIDs within the EBSI ledger. (1) In Figure \ref{fig:overeb}, the two primary identifiers of this model are illustrated. The first is for the citizen, while the second is for the organisation. These are also referred to as natural and legal persons: a legal person can be any organisation, such as a university, tax authority, or any other government body, whereas a natural person is simply an individual citizen.
Client wallets can generate and store a pair of public and private keys on the device using the EBSI wallet API. After that, the onboarding process can begin to record and anchor the information in the EBSI ledger. Authorised authorities or issuers are registered in the EBSI ledger using the trusted issuer registry API. Individuals are recorded in the EBSI ledger via a specific API. Similarly, the trusted apps registry API adds authorised verifiers to the EBSI ledger.
In reference to Figure \ref{fig:overeb}:
\begin{itemize}
\item (2) A student can then request a certificate from a university, which can be presented to an employer in another country.
\item (3) The university issues the student certificate signed using the university's private key. Using our architecture, a legal organisation with the necessary permissions, in this case the university, can use the GLASS portal to securely encrypt and distribute resources via the private IPFS network.
\item (4) The portal performs automatic encryption and distribution of GLASS resources across the private IPFS network before resource access. The security is handled by means of Advanced Encryption Standard-256 Cipher Block Chaining (AES-256 CBC). The plaintext file is deleted from the server once encryption is complete.
\item (5) In the proposed model, the IPFS protocol is the primary means of sharing resources. All resources shared on the IPFS network are encrypted in our design, and, for additional security, testing was carried out on a private IPFS instance. A private IPFS network works in the same way as the public one, but nodes may only join if they hold a shared private key known as the ``swarm key''. Using a private IPFS network prevents sensitive or private information from being disclosed accidentally.
\item (6) The portal receives the CID and URI in return.
\item (7) When an encrypted resource is distributed over IPFS, the GLASS portal simultaneously records and maintains the metadata triplet (CID, encryption key, and URI) in the Hyperledger Fabric data collections.
\item (8) This activity is recorded in the ledger of the permissioned blockchain so that it can be audited afterwards. Role-based security allows the GLASS portal to read and retrieve encryption keys from the Hyperledger Fabric data collection and decryption of GLASS resources if the user has the required permissions.
\item (9) In the verification phase, the student will send the signed version of the signed credentials to an employer in any other country within the ecosystem.
\item (10) Finally, the verifier queries the EBSI ledger to verify the issuer’s DIDs by accessing the repository of trusted signers’ public keys and DIDs. In this case, the DID can be resolved by anyone in the ecosystem.
\end{itemize}
\section{Conclusion}\label{sec:Conclusion and future work}
At its core, enhanced e-governance could bring greater efficiency, quality, and effectiveness to government services \cite{draheim2020narratives}. However, several political narratives can stall its development, including the digital divide, anti-corruption, the loss of privacy, social change, and increasing control by government agencies \cite{draheim2020narratives}.
GLASS \cite{chrysoulas2021glass} focuses on developing an e-Governance framework that could be adopted by the European Union's member states, while EBSI focuses on the provision of trusted identity data. Key features of EBSI are the provision of GDPR compliance and eIDAS 2 signatures. The two infrastructures could thus provide an integrated approach to enhanced e-Governance services across the EU and further support the freedom of movement of EU citizens.
\section{Introduction}
\label{introduction}
Early \emph{graph mode} or \emph{define-and-run}~\cite{DBLP:journals/corr/abs-1908-00213} deep learning frameworks like Caffe~\cite{DBLP:journals/corr/JiaSDKLGGD14}, Theano~\cite{DBLP:journals/corr/Al-RfouAAa16}, and TensorFlow~\cite{45381} defined APIs in which the user constructed a graph-based intermediate representation (IR) of the desired computation. Program transformations like program differentiation, device/host partitioning and placement, quantization, device lowering, and performance optimization could be applied directly to this IR. One way to think of these frameworks is as simple embedded programming languages that are meta-programmed from a host language, predominantly Python~\cite{innes_2017}.
However, these frameworks require the user to exit the host language and enter a domain-specific language and runtime, which often has inferior user experience compared to the host language. For instance, debugging requires different tools from the typical debugging toolkits such as Python's \ic{pdb} library.
More recent \textit{eager mode} or \textit{define-by-run}~\cite{DBLP:journals/corr/abs-1908-00213} frameworks such as Autograd~\cite{maclaurinautograd}, Chainer~\cite{DBLP:journals/corr/abs-1908-00213}, PyTorch~\cite{DBLP:journals/corr/abs-1912-01703} and TensorFlow Eager~\cite{DBLP:journals/corr/abs-1903-01855} eschew explicit graph-building APIs in favor of programming in the host language directly. The primary program transformation used in deep learning frameworks, program differentiation, is reformulated from an ahead-of-time transformation to a just-in-time transformation, in the form of auto-differentiation.
Most training and inference can be done using eager mode with auto-differentiation. However, there are still transformations\textemdash such as program quantization or operator fusion\textemdash that are easier to write given the additional program structure provided by an IR. To bridge this gap, an eager-mode framework needs a way of capturing program structure from user programs to enable these transformations.
Some program capture systems are built to capture a freestanding representation of the whole program for the purposes of serialization or export. For instance, TorchScript~\cite{torchscript} includes mutable state, control-flow, and complex data types for the purposes of faithfully modeling the semantics of the original Python program. Modeling Python in full generality comes at the cost of complexity in program capture techniques and difficulty of writing transforms on the highly-complex IR.
In contrast, it is possible to decouple the requirements of faithfully modeling Python from the requirements needed for transforms such as quantization or fusion. Transforms are often formulated as modifications to a high-level directed acyclic graph (DAG) organization of the code, with implementation details hidden within high-level blocks (such as Convolution or Batch Normalization). Thus, simplifications can be made to both the program capture mechanism and the IR it produces, focusing on the high-level DAG structure of the majority of neural network computation.
For this use case, we present \ic{torch.fx}, a high-productivity library for capturing and transforming PyTorch programs. \ic{torch.fx} explicitly trades generality of supported programs for simplicity of program capture and representation. \ic{torch.fx} focuses on the DAG representation of deep learning programs and provides customization interfaces to adapt programs into this representation. In doing so, \ic{torch.fx} is able to provide a program transform interface that supports the majority of deep learning programs while providing simple and easy-to-use APIs for implementing transforms.
We present the following contributions:
\begin{enumerate}
\item A practical analysis of the features of program capture and transformation that are important for deep learning programs.
\item A Python-only program capture library that implements these features and can be customized to capture different levels of program detail.
\item A simple 6 instruction IR for representing captured programs that focuses on ease of understanding and ease of doing static analysis.
\item A code generation system for returning transformed code back to the host language's ecosystem.
\item Case studies in how \ic{torch.fx} has been used in practice to develop features for performance optimization, program analysis, device lowering, and more.
\end{enumerate}
\section{Background}
\label{background}
\label{eagerdifferentiable}
When capturing and transforming programs, both eager and graph-mode frameworks must make choices about \emph{capturing program structure}, \emph{program specialization} and the \emph{design of the intermediate representation} in which programs are kept. The combination of these choices determines the space of programs that are representable in the framework, the ease of writing transformations, and the performance of resulting transformed programs. In general, supporting more programs at high performance requires a more complicated capture framework and IR and subsequently makes transformations harder to write.
\subsection{Capturing Program Structure}
There are several ways to capture program structure from Python programs. The simplest way is to \emph{trace} the execution of a model given some example inputs and record the operations that occur, which is the approach used by PyTorch's \ic{jit.trace}~\cite{torchscript}. A slightly more complicated variant of this approach is to perform tracing with abstract values rather than example inputs (\emph{symbolic tracing}). MXNet's Gluon~\cite{chen2015mxnet}, and TensorFlow's \ic{tf.function}~\cite{DBLP:journals/corr/abs-1810-08061} implement this approach. In addition to the user not having to provide example inputs, this approach surfaces locations where Python control flow depends on the input values, rather than collecting a trace specialized to the control decisions imparted by the example inputs.
During tracing, operations are only recorded for tensors and a small number of other data structures such as lists of tensors. This means that tracing can only record a representation for a subset of the Python program. Although tracing's visibility into the program is limited, this is often sufficient for deep learning computations, which are most often flat sequences of tensor operations\textemdash termed \textit{basic block} programs in Section \ref{ir_design}.
By overriding the execution behavior of standard Python code, some tracing systems can capture more program structure, such as control flow, at the cost of additional complexity. For instance, \ic{tf.function} augments symbolic tracing with a Lightweight Modular Staging~\cite{rompf2010lightweight} system that uses Python AST transforms to convert imperative control flow constructs into higher-order Python functions, which can then be traced.
An alternative way to capture program structure is to have users write models directly in an embedded programming language within Python. The simplest of these techniques is to provide a graph-building API similar to TensorFlow, which lets users build programs (graphs) by calling Python functions. It is awkward to represent control flow in these APIs, so PyTorch's TorchScript~\cite{torchscript} instead extracts programs directly from the Python source using a traditional lexer-parser-compiler toolchain. TorchScript can inspect the source syntax in full fidelity and can understand language constructs such as structured control flow, collection types (e.g. {\tt tuple}, {\tt list}, {\tt dict}) and user-defined types. As opposed to tracing, which can fail silently, embedded language approaches can report unsupported constructs as part of compilation. On the other hand, embedded language compilation is significantly more complicated to implement, since it requires a full language stack. Even then, in practice these systems will not support the full Python language, so users still need to make their program conform to the supported subset (albeit a larger subset than supported by tracing systems).
Systems such as Zygote.jl~\cite{DBLP:journals/corr/abs-1810-07951} and TPU integration~\cite{DBLP:journals/corr/abs-1810-09868} in the Julia ecosystem~\cite{bezanson2017julia} as well as Swift for TensorFlow~\cite{DBLP:journals/corr/abs-2102-13243} provide program transformation interfaces by way of integration into non-Python host languages. The main drawback of such native host language integrations in Swift and Julia is that they require the user to exit the Python ecosystem. Python has considerable momentum and extensive libraries in the numeric/scientific computing (and particularly deep learning) space, and many users prefer to stay in the Python ecosystem. While other languages may provide objectively better experiences in some respects, adoption has been slow.
\subsection{Specializing Programs}
\label{program_specialization}
A Python expression such as \ic{a + b} is very abstract. There are no constraints on the types of \ic{a} or \ic{b}. Even if both are Tensors, the number of dimensions and the size of the dimensions might vary. When ML frameworks capture programs, they often simultaneously \emph{specialize} these expressions such that they are only valid for specific types or tensor shapes. The more a program is specialized, the fewer inputs it will work on, so approaches vary in the \emph{degree} of specialization, the \emph{timing} of when specialization is done (ahead of time, just-in-time), and the \emph{safety} of the specialized result.
For example, PyTorch's TorchScript \ic{torch.jit.trace}~\cite{torchscript} specializes to the shape of the example inputs. \ic{jit.trace} capture is unintrusive\textemdash that is\textemdash it records the operations that occur during an actual execution run of the program. One implication of this is the presence of tensor metadata such as the \ic{ndim} or \ic{shape} attributes, which can escape the traced region and be used in control decisions within the Python program. This may cause the traced representation to be \emph{shape specialized}\textemdash that is\textemdash it is only valid for the value shapes used at trace time and may fail for other shapes.
To avoid the problem of specialization failing for some inputs, systems such as DyNet~\cite{neubig2017dynet} and LazyTensor~\cite{suhan2021lazytensor} perform tracing just-in-time, and thus can capture specialized program representations for every invocation. At runtime, these systems defer execution of tensor operations, instead accumulating a program trace. When a value must be materialized, the system will apply transformations to the collected program representation (e.g. automatic batching or native code lowering) and execute the code, returning the values requested. However, this process adds additional cost, since the program is captured on every invocation. LazyTensor uses a caching system to reduce this cost: optimized artifacts are stored in a cache keyed by a hash of the collected IR. On further invocations of the same IR, the optimized artifact can be called directly.
The performance of JIT specialization can also be improved by proving that re-capturing the program is unneeded for some inputs. For instance, JAX's \ic{jit} combinator~\cite{frostig2018compiling} uses pure, functional Python programs as input. This enforces referential transparency on non-Tensor computation like shape expressions. When some transform requires specialization, such as conversion to XLA~\cite{XLA} with static shapes, the system can look at the shapes of the inputs to determine if a new capture is required. A disadvantage of JIT specialization is that it is more complicated to reason about code execution. For instance, {\tt print} or {\tt pdb} statements in traced code will only be executed \emph{on runs where re-tracing occurs}. Re-tracing and re-transformation can also cause hard-to-predict performance bubbles as execution of the system stalls to re-specialize.
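The shape-keyed re-capture avoidance described above can be sketched in a few lines of plain Python. This is an illustrative toy, not JAX's implementation: \ic{Arr}, \ic{jit_by_shape}, and \ic{capture_sum} are hypothetical names, and the \ic{capture} callable stands in for a real trace-and-compile step.

```python
from typing import Callable, Dict, Tuple

class Arr:
    """Stand-in for a tensor: just a shape and a payload."""
    def __init__(self, shape, total):
        self.shape, self.total = shape, total

def jit_by_shape(capture: Callable) -> Callable:
    """Re-run program capture only when input shapes change; otherwise
    reuse the cached, shape-specialized artifact (JAX-jit style)."""
    cache: Dict[Tuple, Callable] = {}
    def wrapper(*arrays):
        key = tuple(a.shape for a in arrays)
        if key not in cache:                  # slow path: re-capture
            cache[key] = capture(*arrays)
        return cache[key](*arrays)            # fast path: cached artifact
    wrapper.cache = cache
    return wrapper

# Toy "capture" step: records how often specialization actually runs.
captures = []
def capture_sum(*arrays):
    captures.append(tuple(a.shape for a in arrays))
    return lambda *xs: sum(x.total for x in xs)

jitted = jit_by_shape(capture_sum)
print(jitted(Arr((2, 3), 5), Arr((2, 3), 7)))  # 12; first capture
print(jitted(Arr((2, 3), 1), Arr((2, 3), 2)))  # 3; cache hit, no re-capture
print(jitted(Arr((4,), 9)))                    # 9; new shape -> re-capture
print(len(captures))                           # 2
```

Under this scheme the stall described above occurs exactly on cache misses, which is why re-specialization can cause hard-to-predict performance bubbles.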
\subsection{Intermediate Representation Design}
\label{ir_design}
ML frameworks vary in the format of their IRs, with richer IRs capturing more programs and being more expressive at the cost of additional complexity to write transformations or run the code efficiently.
\paragraph{Language} Many frameworks define their IR in a cross-language way. For example, Caffe and TensorFlow use the Protocol Buffers format~\cite{protobuf} to represent computational graphs. PyTorch's JIT and MXNet use C++ data structures for their IR with additional bindings into Python. Such native representations can have better runtime performance and may be easier to serialize. On the other hand, these representations can impose a learning curve above that required for programming Python.
\paragraph{Control flow}
Most neural networks are expressible as flat sequences of tensor operations without control flow such as if-statements or loops\textemdash a definition we refer to as a \emph{basic block} program. \emph{Basic block} programs are often represented as a directed acyclic graph (DAG) data structure. Multilayer Perceptrons (MLPs), Convolutional Neural Networks (CNNs) such as ResNet~\cite{DBLP:journals/corr/HeZRS15} and personalization/recommendation models \cite{DBLP:journals/corr/abs-1906-00091} are easily expressed this way. Similarly, Transformer networks~\cite{DBLP:journals/corr/VaswaniSPUJGKP17} can also be expressed in this way, barring the loop needed for sequence generation on the decoder portion of the network.
Recurrent Neural Networks (RNNs) such as the Elman RNN~\cite{elman1990finding}, LSTM~\cite{hochreiter1997long}, and Gated Recurrent Unit (GRU)~\cite{DBLP:journals/corr/ChoMGBSB14} are not immediately expressible in this way, as the recurrent network computation is applied repeatedly across elements of a sequence with (typically) dynamic length. RNN structures can be represented in an imperative language as a loop with tensor computation applied in the loop body and tensor values carried across loop iterations. However, in practice, these RNN structures are typically provided as wholesale tensor operations. Thus, an entire RNN application over a sequence appears in code as a call to an RNN function or module. Therefore, these network architectures often also appear as \textit{basic block} programs.
Nevertheless, many frameworks support capturing and representing control flow in their IR. TorchScript built first-class control flow support into all of its components in anticipation of workloads becoming more complex, particularly in sequence processing domains. JAX uses higher-order functions such as \ic{jax.lax.scan} to allow functional-style control flow~\cite{frostig2018compiling}. MLIR represents control flow with basic blocks that end in tail calls~\cite{DBLP:journals/corr/abs-2002-11054}. In addition to adding complexity to the IR, more general control flow also makes transforms such as common sub-expression elimination more complicated to implement.
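The scan-style formulation can be illustrated with a pure-Python sketch of the combinator's semantics (a carry threaded through the loop, per-step outputs stacked). This is a toy with hypothetical names, not \ic{jax.lax.scan} itself; \ic{cell} is an Elman-RNN-shaped step reduced to scalar arithmetic.

```python
def scan(f, init, xs):
    """Pure-Python sketch of a jax.lax.scan-style combinator:
    f(carry, x) -> (new_carry, y). The loop lives inside this one
    higher-order op, so the captured program can remain a flat DAG."""
    carry, ys = init, []
    for x in xs:
        carry, y = f(carry, x)
        ys.append(y)
    return carry, ys

# Toy recurrent step: the carry plays the role of the hidden state.
def cell(h, x):
    h_new = 0.5 * h + x
    return h_new, h_new

final, outputs = scan(cell, 0.0, [1.0, 2.0, 3.0])
print(final)    # 4.25
print(outputs)  # [1.0, 2.5, 4.25]
```

Because the recurrence is packaged as a single functional op, a transform never has to reason about an explicit loop in the IR.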
\paragraph{State} Deep learning models contain state in the form of the trainable model weights used in different layers. Apart from these parameters, most networks operate as pure functions of their inputs. ML frameworks take different approaches to handling how this state is mutated.
PyTorch allows values to be mutated and tensors can be views of each other. For example, the slicing syntax \ic{x[i]} (where \ic{x} is a Tensor value) does not produce a new Tensor value, but rather returns a view aliasing the subset of tensor \ic{x} indexed by \ic{i}. Views can also be mutated. For example, the expression \ic{x[i] = y} will write the value of \ic{y} into the portion of \ic{x} indexed by \ic{i}.
Since PyTorch supports these aliasing and mutation semantics, modifications to programs must be done in the context of an analysis that proves that the modification is safe~\cite{andersen1994program}. TorchScript implemented such alias analysis for the purpose of reasoning about the safety of transforms over the TorchScript IR. However, this comes at a high cost: all operations in the program must be annotated with information specifying their aliasing and mutation behavior. In practice, many functions (opaque calls or ones that have not been annotated with relaxed semantics) are treated with a conservative assumption that the callee mutates global memory, causing the operation to act as a barrier and hindering optimization. Needing to reason about aliasing and mutability complicates pass authoring, adds additional maintenance burden to the framework, and can limit optimization opportunities, but enables the user to apply the full generality of the PyTorch tensor language.
JAX's functional approach moves the burden of tracking this state outside of the framework. Instead the model must be turned into a pure function where the parameters are passed as inputs. Typically, this is done with wrapper libraries such as Haiku~\cite{haiku2020github} or Flax~\cite{flax2020github}. Any transforms that have to modify both state and code, such as folding batch norm scaling to a weight tensor, are made more complicated because these components no longer live together in the same framework.
\section{Design Principles}
Many of the different designs for program capture and transformation used in existing frameworks favor the ability to represent more deep learning programs at the cost of the complexity of their implementation. When captured programs are the \emph{only} way to run a program, the ability to capture a program in full fidelity is crucial. But PyTorch is primarily used as an \emph{eager execution} framework and program capture is only used for some specific transforms; it does not need to work for an entire program. Furthermore, most PyTorch programmers who want to transform models are machine learning practitioners who prefer to work in Python and may have less knowledge of compiler design.
By designing for \emph{typical} deep learning models rather than the long tail, it is possible to create a framework that is much easier to use and simpler to implement. This philosophy is captured by \ic{torch.fx}'s design principles:
\begin{itemize}
\item Prefer making program capture and transformation easy for typical models at the cost of working for all possible programs. Avoid complexity to support long-tail, esoteric use cases.
\item Work with tools and concepts that ML practitioners are already familiar with such as Python data structures and the publicly documented operators in PyTorch.
\item Make the process of program capture highly configurable so users can implement their own solutions for long-tail uses. Allowing users to make one-off configurations is simpler than handling the general case.
\end{itemize}
\section{torch.fx Overview}
\label{fx_overview}
In the spirit of simplicity, \ic{torch.fx} \emph{captures programs} via symbolic tracing, \emph{represents them} using a simple 6-instruction python-based IR, and \emph{re-generates Python code} from the IR to execute it. To avoid the complexities of re-capture for JIT specialization, \ic{torch.fx} makes no attempt to specialize programs itself, instead relying on the transforms to decide what specializations they want to perform during capture. The process of symbolic tracing can be configured by users to work for more esoteric uses.
\begin{figure}[!t]
\begin{lstlisting}
import torch
from torch.fx import symbolic_trace, GraphModule
def my_func(x):
return torch.relu(x).neg()
# Program capture via symbolic tracing
traced : GraphModule = symbolic_trace(my_func)
for n in traced.graph.nodes:
print(f'{n.name} = {n.op} target={n.target} args={n.args}')
"""
x = placeholder target=x args=()
relu = call_function target=<built-in method relu ...> args=(x,)
neg = call_method target=neg args=(relu,)
output = output target=output args=(neg,)
"""
print(traced.code)
"""
def forward(self, x):
relu = torch.relu(x); x = None
neg = relu.neg(); relu = None
return neg
"""
\end{lstlisting}
\caption{\ic{torch.fx} captures programs using symbolic tracing into a simple IR and generates Python code from that IR.}
\label{fx_components_code}
\end{figure}
Figure \ref{fx_components_code} shows an example of capturing code with \ic{torch.fx}. \ic{symbolic_trace} takes a function or \ic{torch.nn.Module} and captures its structure in a \ic{Graph} object. That \ic{Graph} object is combined with module parameters in a \ic{GraphModule}, which is a subclass of \ic{torch.nn.Module} whose \ic{forward} method runs the captured \ic{Graph}. We can print the \ic{Node}s of this \ic{Graph} to see the IR that was captured. \ic{placeholder} nodes represent inputs and a single \ic{output} node represents the result of the \ic{Graph}. \ic{call_function} nodes have a reference directly to the Python function they would call. \ic{call_method} nodes directly invoke a method on their first argument. The \ic{Graph} is reconstituted into Python code (\ic{traced.code}) for invocation.
\begin{figure}[!t]
\begin{lstlisting}
from torch.fx import Graph
def replace_activation(g: Graph, old, new):
for n in g.nodes:
if n.op == 'call_function' and n.target == old:
# create IR to call new activate
with g.inserting_after(n):
new_n = g.call_function(new, n.args)
n.replace_all_uses_with(new_n)
g.erase_node(n)
# or for this simplified case: `n.target = new`
replace_activation(traced.graph, torch.relu,
torch.nn.functional.gelu)
traced.recompile()
\end{lstlisting}
\caption{\label{transform_example} Transforms, like this one that replaces activation functions, are written directly in Python.}
\end{figure}
Figure \ref{transform_example} shows an example transform using \ic{torch.fx}. The transform finds all instances of one activation and replaces them with another. We use it to replace \ic{relu} with \ic{gelu} in our example.
\subsection{Program Capture}
\label{program_capture_symtrace}
\ic{torch.fx}'s symbolic tracing mechanism uses a \ic{Proxy} data structure to record operations on values flowing through the program. \ic{Proxy} is a duck-typed Python class that records attribute accesses and method calls on it, acting as an abstract value that stands in for the concrete program values. \ic{Proxy} uses the \ic{__torch_function__} protocol~\cite{torch_function} to intercept and record the dispatch of PyTorch operators, which are free functions. Finally, \ic{torch.fx} overrides PyTorch's \ic{Module} abstraction to record calls to \ic{Module}s using proxied values. The process of symbolic tracing is configurable via a \ic{Tracer} class whose methods can be overridden to control what values are kept as \ic{Proxy}s and which are partially evaluated during the trace.
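The recording mechanism can be illustrated with a deliberately simplified, pure-Python sketch. It is not \ic{torch.fx}'s implementation (which hooks \ic{__torch_function__} and PyTorch \ic{Module}s); the \ic{Node}, \ic{Proxy}, and \ic{symbolic_trace} definitions below are toy stand-ins that only record \ic{+} and method calls.

```python
class Node:
    def __init__(self, op, target, args):
        self.op, self.target, self.args = op, target, args

class Proxy:
    """Duck-typed abstract value: records operations into a shared
    node list instead of computing anything."""
    def __init__(self, node, graph):
        self.node, self.graph = node, graph
    def _record(self, op, target, args):
        n = Node(op, target, args)
        self.graph.append(n)
        return Proxy(n, self.graph)
    def __add__(self, other):
        return self._record('call_function', 'add', (self.node, other))
    def __getattr__(self, name):
        # Unknown attribute access yields a callable that records a
        # method call on this proxy's node.
        return lambda *args: self._record('call_method', name, (self.node,) + args)

def symbolic_trace(fn):
    graph = []
    placeholder = Node('placeholder', 'x', ())
    graph.append(placeholder)
    out = fn(Proxy(placeholder, graph))
    graph.append(Node('output', 'output', (out.node,)))
    return graph

graph = symbolic_trace(lambda x: (x + 1).neg())
print([(n.op, n.target) for n in graph])
# [('placeholder', 'x'), ('call_function', 'add'),
#  ('call_method', 'neg'), ('output', 'output')]
```

Even this toy shows why no example inputs are needed: the proxy never holds a concrete value, only the structure of what was done to it.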
\subsection{Intermediate Representation}
\label{ir_what}
\ic{torch.fx} represents programs in a DAG-based IR, which is amenable to the \emph{basic block} programs common in deep learning. Programs are represented as a \ic{Graph} object, which contains a linear series of \ic{Node} objects representing operations. \ic{Node}s have a string opcode, describing what type of operation the \ic{Node} represents (the semantics of the opcodes can be found in Appendix \ref{node_semantics}). Nodes have an associated \ic{target}, which is the call target for call nodes (\ic{call_module}, \ic{call_function}, and \ic{call_method}). Finally, \ic{Node}s have \ic{args} and \ic{kwargs}, which together represent the arguments to the target in the Python calling convention as witnessed during tracing\footnote{No normalization is applied to args or kwargs; they are preserved as the user wrote them. This facilitates backward compatibility of the generated code.} (the semantics for \ic{args} and \ic{kwargs} for each opcode can be found in Appendix \ref{args_kwargs_semantics}). Data dependencies between \ic{Node}s are represented as references to other \ic{Node}s within \ic{args} and \ic{kwargs}.
To simplify the IR, \ic{torch.fx}'s IR does not have primitive operations that model the construction or mutation of data structures. Nevertheless, \ic{args} and \ic{kwargs} support immediate values: Python built-in types such as \ic{int} and \ic{float} and recursive collection types like \ic{tuple} and \ic{list} can appear as \ic{Node} arguments without separate object construction \ic{Node}s. Because \ic{Node}s support immediate values, the IR is clean and \ic{Node}s are approximately 1-to-1 with Tensor operations.
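The role of immediate values in \ic{args} and \ic{kwargs} can be illustrated with a toy version of the data structure. The sketch below is not \ic{torch.fx}'s \ic{Node} class, and \ic{input_nodes} is a hypothetical helper; it shows how a static analysis recovers data dependencies by walking nested collections while skipping immediates.

```python
from dataclasses import dataclass, field
from typing import Any, Dict, Tuple

@dataclass(eq=False)  # identity semantics: two Nodes are equal only if same object
class Node:
    op: str
    target: Any
    args: Tuple = ()
    kwargs: Dict = field(default_factory=dict)

def input_nodes(node):
    """Walk args/kwargs, which may hold immediate values and nested
    collections, and collect the Node references (data dependencies)."""
    found = []
    def visit(a):
        if isinstance(a, Node):
            found.append(a)
        elif isinstance(a, (list, tuple)):
            for item in a:
                visit(item)
        elif isinstance(a, dict):
            for item in a.values():
                visit(item)
    for a in node.args:
        visit(a)
    for a in node.kwargs.values():
        visit(a)
    return found

x = Node('placeholder', 'x')
y = Node('placeholder', 'y')
# Immediate values (0, 3.0) and a nested list of Nodes appear directly
# in args/kwargs, with no separate construction nodes.
cat = Node('call_function', 'cat', args=([x, y],), kwargs={'dim': 0})
mul = Node('call_function', 'mul', args=(cat, 3.0))
print([n.target for n in input_nodes(cat)])  # ['x', 'y']
print([n.target for n in input_nodes(mul)])  # ['cat']
```

Because immediates live inline, the node count stays roughly 1-to-1 with tensor operations, which keeps analyses like the one above short.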
\ic{torch.fx} stores the state of the program in the \ic{GraphModule} class. \ic{GraphModule} is the container for transformed programs, exposing the transformed, generated code as well as providing the familiar parameter management APIs of \ic{nn.Module}. \ic{GraphModule} can be used anywhere a normal \ic{nn.Module} can be used, providing interoperability between transformed code and the rest of the PyTorch ecosystem.
\ic{torch.fx}'s IR provides two opcodes for accessing state in the \ic{Module} hierarchy: \ic{call_module}, which invokes a sub-\ic{Module}'s forward method, and \ic{get_attr}, which fetches a parameter from the \ic{Module}. Transformed code can interact with the \ic{Module} hierarchy in much the same way normal PyTorch code can via these opcodes. In addition, transformations can manipulate the mutable state in the \ic{Module} hierarchy simultaneously with transformations over code. This provides a natural separation between the mutable parameters and the functional \ic{Graph} that interacts with them via \ic{call_module} \ic{Node}s, while still keeping them together in a single object for doing transformations that work on both.
\subsection{Source-to-Source Transformation}
\label{codegen_what}
The final stage in the \ic{torch.fx} transformation pipeline is code generation. Rather than exiting the Python ecosystem and entering a bespoke runtime, \ic{torch.fx} generates valid Python source code from the transformed IR. This transformed code is then loaded into Python, producing a callable Python object, and installed as a \ic{forward} method on the \ic{GraphModule} instance. Using code generation allows the results of \ic{torch.fx} transforms to be installed in models and still used in further transforms. For instance, in Figure \ref{code_generation_example} we take the result of tracing our original program and install it as the activation in a new module. Then, we symbolically trace the result for further transformation.
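The code-generation step can be illustrated with a toy pipeline: emit Python source from a small node list, then \ic{exec} it back into a callable, much as \ic{GraphModule} installs its generated \ic{forward}. The node format and the \ic{codegen} helper below are simplified stand-ins, not \ic{torch.fx}'s generator.

```python
import operator

def codegen(graph):
    """Emit Python source from a toy node list. Each entry is
    (name, op, target, args); 'placeholder' names become parameters."""
    params, lines = [], []
    for name, op, target, args in graph:
        if op == 'placeholder':
            params.append(name)
        elif op == 'call_function':
            lines.append(f"    {name} = {target}({', '.join(args)})")
        elif op == 'output':
            lines.append(f"    return {args[0]}")
    return f"def forward({', '.join(params)}):\n" + "\n".join(lines)

graph = [
    ('x',    'placeholder',   'x',            ()),
    ('relu', 'call_function', 'max',          ('x', '0')),
    ('neg',  'call_function', 'operator.neg', ('relu',)),
    ('out',  'output',        'output',       ('neg',)),
]
src = codegen(graph)
print(src)
# Load the generated source back into Python, as GraphModule does.
namespace = {'operator': operator}
exec(src, namespace)
print(namespace['forward'](-5))  # max(-5, 0) = 0, negated -> 0
print(namespace['forward'](3))   # max(3, 0) = 3, negated -> -3
```

The essential property is that the output is ordinary Python source: it can be read, stepped through in \ic{pdb}, or fed to another round of capture.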
\begin{figure}[!t]
\begin{lstlisting}
class SampleModule(torch.nn.Module):
def forward(self, x):
return self.act(x + math.pi)
sm = SampleModule()
sm.act = traced # from previous figure
traced : GraphModule = symbolic_trace(sm)
print(traced.code)
"""
def forward(self, x):
add = x + 3.141592653589793; x = None
gelu = torch.nn.functional.gelu(add); add = None
neg = gelu.neg(); gelu = None
return neg
"""
\end{lstlisting}
\caption{\label{code_generation_example} \ic{torch.fx} generates Python code as its output, so it can be reused in further capture and transform steps.}
\end{figure}
\section{Design Decisions}
\label{designdecisions}
\ic{torch.fx} mixes and extends approaches from previous work to deliver an easy to use, simple to implement, and configurable library. We highlight a few of these decisions here.
\subsection{Symbolic Tracing}
\ic{torch.fx} uses symbolic tracing with \ic{Proxy} objects rather than embedded language techniques because they are easier to implement directly in Python using its flexible object model. The implementation is simple enough that users can read and step through the source when tracing behaves unexpectedly.
Tracing also helps eliminate control flow in a model not dependent on inputs such as the loop over sequential modules in a \ic{torch.nn.Sequential}. PyTorch models are written pervasively with these abstractions, with many users also using third party libraries that contain their own model implementations, so it is important to be able to trace through these abstractions to get to the actual operators running.
Symbolic tracing works well for common models at the cost of not being able to capture long-tail models that actually contain input-dependent control flow. We make up for this limitation by making the tracing process customizable to work around one-off issues.
\subsection{Configurable Program Capture}
\label{configurable_program_capture}
\ic{torch.fx}'s symbolic tracing is customizable. A \ic{Tracer} class controls the behavior of \ic{fx.symbolic_trace}. Its methods can be overridden to change the tracing process's behavior.
The \ic{is_leaf_module} method can be overridden to specify which PyTorch \ic{Module} instances should be treated as opaque calls during tracing. By default, \ic{torch.fx} keeps PyTorch built-in \ic{Module}s such as \ic{nn.Conv2d} intact while tracing through user-defined \ic{Module}s, since this creates a trace of standard, understandable primitives. Customizing this behavior can block out portions of a model that contain unsupported language features or modify the level of representation used for transformations.
\ic{create_proxy} is a method that can be overridden to customize the behavior of creating a \ic{Node} in the \ic{Graph} and the associated runtime \ic{Proxy} value. This can be used to, for example, install custom metadata onto \ic{Node}s for the purpose of transformation or to support custom data structures as traceable values. A custom \ic{Tracer} could, for instance, specialize the sizes and shapes of \ic{Tensor}s and use these values to capture a program that would otherwise not be traceable without specialization.
\subsection{AoT Capture without Specialization}
\label{aot}
While ahead-of-time tracing limits the space of programs that can be captured (e.g. arbitrary control flow is not supported), it provides a more predictable and more observable capture, transformation, and code generation process that fits into the PyTorch developer experience and works well in practice.
Unlike example-based tracing, symbolic tracing cannot incidentally specialize program flow because the information needed to make data-dependent control flow decisions is not present at trace time. Common \ic{Tensor} attributes used in control decisions such as \ic{shape} and \ic{ndim} are returned as \ic{Proxy} values during symbolic tracing. Operations on these values can then be recorded. On the other hand, when these \ic{Proxy} objects are used in a context where untraceable operations (such as a cast to Python built-in types like \ic{int} or \ic{bool}) occur on them, the user receives an error message describing the problem and a stack trace indicating the location of the issue.
\subsection{Python-based IR and Transforms}
\label{codegen}
Rather than use a cross-language format such as protocol buffers, \ic{torch.fx} IR is entirely represented and implemented in Python. Users can call, read, or override it easily. There is no need to understand Protocol Buffers or C++ (or set up either of their build environments), which present barriers to ML engineers familiar with working primarily in Python. Transforms are written in Python as well.
Furthermore, the \emph{result} of transformations is also Python code. This makes it easy to inspect for correctness, debug with \ic{pdb}, feed to libraries, and pass on to further transforms. Transformed code is encapsulated in a \ic{GraphModule} that can be used in PyTorch just like any other \ic{nn.Module}. For instance, a user can TorchScript compile the model for deployment or use it in PyTorch's \ic{DistributedDataParallel} library. Users can also save the generated code as a source file via the experimental \ic{GraphModule.to_folder} API.
Code generation further integrates \ic{torch.fx} into the Python ecosystem rather than sequestering transformed code into a bespoke and harder-to-use runtime.
\subsection{No Control Flow Inside IR}
\label{controlflow}
With Transformers~\cite{DBLP:journals/corr/VaswaniSPUJGKP17} increasingly replacing sequential recursive neural networks with larger scalable attention modules, the use of host language control flow in deep learning is becoming more rare. Many models can be expressed without it, and even for programs with some control flow (e.g. a beam search decoder), there are large blocks of the model without control flow (the encoder and the step of the decoder).
However, the presence of control flow in an IR adds significant complexity regardless of whether a particular model uses it. Most analyses on the IR must be expressed as fix-point data-flow~\cite{kildall1972global} over the program rather than simple forward propagation. The author must define a lattice, transfer function, and join function for the analyzed property in the program and prove monotonicity and finiteness thereof. While familiar to compiler writers, we have found that writers of ML transforms often introduce bugs in transforms such as having join functions that are not monotonic or failing to iterate until converged. In contrast, for a \emph{basic block} IR, only a transfer function is needed.
An example of the complexity of fix-point analysis can be found in shape propagation: shapes can be trivially propagated forward through a basic block program (barring a few operations with value-dependent output shapes). However, when control flow is added, shape propagation does not satisfy the finiteness property\textemdash a value carried across a loop iteration can take on an infinite number of shapes, as shown in Figure~\ref{lcd_shape_analysis}. The analysis will typically reach a ``dynamic'' value in such situations. Shape analysis would then provide under-specified data, which would hinder further transformations that require concrete shape information, such as ASIC lowering.
\begin{figure}[!ht]
\begin{lstlisting}
def loop_shapes(x, itr):
# x is an input tensor of size [1, N]
for _ in range(itr):
x = torch.cat((x, x), dim=0)
# Depending on the number of loop iterations, x may have an
# arbitrary leading dimension i.e. x \in [*dynamic*, N]
return x
\end{lstlisting}
\caption{\label{lcd_shape_analysis} A demonstration of dynamic shapes due to loop-carried dependencies}
\end{figure}
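To make the contrast concrete, the following sketch (with an invented mini-IR; this is an illustration, not the \ic{torch.fx} shape propagator) propagates shapes through a basic-block program in a single forward pass; only a transfer function per operation is needed, with no lattice, join, or fixed-point iteration:

```python
# Forward shape propagation over a hypothetical basic-block IR.
# Each node is (output_name, op, input_names); one pass suffices.
def propagate_shapes(nodes, input_shapes):
    shapes = dict(input_shapes)
    for out, op, args in nodes:
        if op == "add":            # elementwise: shapes must match
            a, b = shapes[args[0]], shapes[args[1]]
            assert a == b
            shapes[out] = a
        elif op == "matmul":       # (m, k) @ (k, n) -> (m, n)
            (m, k1), (k2, n) = shapes[args[0]], shapes[args[1]]
            assert k1 == k2
            shapes[out] = (m, n)
        elif op == "cat_dim0":     # concatenate along the leading dim
            dims = [shapes[a] for a in args]
            shapes[out] = (sum(d[0] for d in dims),) + dims[0][1:]
        else:
            raise NotImplementedError(op)
    return shapes

program = [
    ("h", "matmul", ["x", "w"]),
    ("y", "add", ["h", "b"]),
    ("z", "cat_dim0", ["y", "y"]),
]
shapes = propagate_shapes(program, {"x": (1, 4), "w": (4, 8), "b": (1, 8)})
# shapes["z"] == (2, 8)
```

The same `cat_dim0` rule inside the loop of Figure~\ref{lcd_shape_analysis} would produce a new shape on every iteration, which is exactly the finiteness failure described above.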
Furthermore, some transformations proposed in the ML community are not well defined in the presence of control flow, such as the quantization transform described in Section \ref{quant}.
The fact that the IR does not contain control flow itself does not prevent transforms from working on sub-graphs of basic blocks within a larger model; we leave the details of how this composition works to the writer of the transform or the user applying the transform.
\subsection{Functional Graphs but Stateful Modules}
As described in Section~\ref{ir_design}, aliasing and mutability semantics in a language can necessitate complex analyses to prove that a program transformation is legal. \ic{torch.fx} omits such analysis, instead defining mutating operations as undefined behavior with the option to raise errors when it is captured during tracing.
Avoiding mutability in the IR simplifies analysis and transformation of deep learning programs greatly. Most models do not suffer from this restriction since most mutation is localized to the parameters of the model.
\ic{torch.fx} still preserves the hierarchical \ic{nn.Module} structure from PyTorch and can represent module calls and attribute fetches from this structure. Modules like \ic{torch.nn.Conv2d} are well understood by users, have well-documented arguments, and hide the stateful use of parameters within the module, so preserving these objects makes writing transformations easier. For instance, a \ic{torch.nn.BatchNorm} module will actually contain mutable state, but that state is well understood by ML practitioners.
\section{Case Studies and Evaluation}
\label{case_studies}
\ic{torch.fx} has been used by PyTorch users both in the open-source ecosystem as well as a critical component of the deep learning stack at a major software company. We study the complexity of \ic{torch.fx}'s IR and various use cases of \ic{torch.fx}, including \emph{performance optimization}, \emph{program analysis}, and \emph{device and runtime export}.
\subsection{IR Complexity}
One of the goals of \ic{torch.fx} is to simplify the IR produced for ML models and make it easier for ML practitioners to understand. We can compare \ic{torch.fx} IR to the IR produced by the two TorchScript~\cite{torchscript} front-ends (\ic{jit.trace} and \ic{jit.script}), since all start from the same input programs. Figure~\ref{ts_ir_complex} shows some example IR from the start of a ResNet model. The IR produced by TorchScript is very rich, including tensor operations, scalar operations, control flow, data structures, hierarchical module structure, and aliasing and mutability semantics. Support for these features makes it much more verbose for simple models, resulting in 2614 operations from \ic{jit.script} and 860 from \ic{jit.trace}. The same ResNet model consists of 445 operations in \ic{torch.fx} IR. Most of the reduction comes from eliminating control flow irrelevant to the captured trace. But \ic{torch.fx} IR also benefits from inlining simple constants and data structures, so is almost half the size of the IR captured with \ic{torch.jit.trace}, which similarly eliminates control flow.
The complex IR from the TorchScript front-ends induces complexity in the program transform authoring process, requiring more care to write transforms correctly and leading to longer and less maintainable transform code. \ic{torch.fx} addresses this by greatly simplifying its captured representation, facilitating transforms that are easier to write and maintain.
\begin{figure*}[t]
\begin{subfigure}[b]{.5\textwidth}
\begin{lstlisting}
graph
...
\end{lstlisting}
\caption{{\scriptsize TorchScript IR}}
\label{example_export}
\end{subfigure}
\begin{subfigure}[b]{.5\textwidth}
\begin{lstlisting}
def forward(self, x : torch.Tensor) -> torch.Tensor:
conv1_weight = self.conv1.weight
conv2d = torch.conv2d(x, conv1_weight, None,
(2, 2), (3, 3), (1, 1), 1)
...
\end{lstlisting}
\caption{{\scriptsize \ic{torch.fx} IR}}
\label{importing_api}
\end{subfigure}
\caption{\ic{torch.fx} traces through non-varying control flow and can embed constants as arguments in its \ic{Node}s. This substantially simplifies the IR for typical models.
For a canonical ResNet50 model, \ic{torch.fx} IR contains 445 operations compared to 2614 for \ic{torch.jit.script} and 860 for \ic{torch.jit.trace}.}
\label{ts_ir_complex}
\end{figure*}
\label{eval_fx_ir_complexity}
\subsection{Performance Optimization}
\label{perfoptimization}
PyTorch's tensor language provides good performance in many cases, but architectural details of the underlying hardware create opportunities for further optimization. We investigate techniques by which \ic{torch.fx} enables runtime performance improvements.
\subsubsection{Quantization}
\label{quant}
Quantization~\cite{DBLP:journals/corr/abs-1712-05877} is a technique used to increase the efficiency of neural network computation by reducing the size of Tensor data elements. Smaller data elements require less memory bandwidth, less storage, and can often be processed faster by modern processors. Neural network computation has relaxed sensitivity to numerical perturbations, so quantization is a canonical performance optimization.
Performing Post-Training Quantization or Quantization-Aware Training requires access not only to parameter values but also to the activation values that flow through the program~\cite{DBLP:journals/corr/abs-1806-08342}. For instance, quantization-aware training needs to measure the distribution of floating point values in the output of a tensor addition operation to calculate a scale and bias value under quantized numerics. Such introspection is generally not available in PyTorch eager mode. However, \ic{torch.fx} provides a lightweight way to capture such a program representation.
The Post-Training Quantization procedure entails the following stages:
\begin{enumerate}
\item A preparation phase, which instruments the program with ``observer'' objects that record statistical information about the floating-point values contained in Tensor values at various points in the program.
\item A calibration phase, where the user feeds batches of data through the network to populate the observers.
\item A conversion phase, where the collected statistics are used to down-cast weight values and convert operations in the model to quantized operations with embedded scale and zero-point information.
\end{enumerate}
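The arithmetic behind these phases can be sketched with a simplified min/max observer and affine int8 quantization (plain numpy; this is an illustration of the scale/zero-point calculation, not the actual PyTorch quantization implementation):

```python
import numpy as np

class MinMaxObserver:
    """Phases (1)-(2): record the min/max of values seen during calibration."""
    def __init__(self):
        self.lo, self.hi = np.inf, -np.inf

    def observe(self, x):
        self.lo = min(self.lo, float(x.min()))
        self.hi = max(self.hi, float(x.max()))

    def qparams(self, qmin=-128, qmax=127):
        """Phase (3): turn the statistics into a scale and zero point."""
        scale = (self.hi - self.lo) / (qmax - qmin)
        zero_point = int(round(qmin - self.lo / scale))
        return scale, zero_point

def quantize(x, scale, zp, qmin=-128, qmax=127):
    return np.clip(np.round(x / scale) + zp, qmin, qmax).astype(np.int8)

def dequantize(q, scale, zp):
    return (q.astype(np.float32) - zp) * scale

obs = MinMaxObserver()
x = np.linspace(-1.0, 3.0, 1000).astype(np.float32)
obs.observe(x)                  # calibration pass over a batch
scale, zp = obs.qparams()
x_hat = dequantize(quantize(x, scale, zp), scale, zp)
# The round-trip error is bounded by half a quantization step, scale / 2.
```

In the real flow the observers are spliced into the captured graph at the outputs of operations, which is exactly the kind of non-local rewrite \ic{torch.fx} makes easy.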
Quantization makes use of \ic{torch.fx}'s graph and \ic{GraphModule} representation to simultaneously modify the program code and weight values. The process for Quantization-Aware Training is analogous to phases (1) and (2) in the above but with ``fake quantize'' observers that snap floating point values to the corresponding values under quantized numerics.
We evaluate the performance of a DeepRecommender~\cite{kuchaiev2017training} model with Post-Training Quantization applied on a server-class Intel Xeon Gold 6138 CPU @ 2.00GHz using FBGEMM~\cite{DBLP:journals/corr/abs-2101-05615} quantized operations. Figure~\ref{quant_exp_results_inline} shows that \ic{torch.fx}-enabled quantization confers up to a 3.3x runtime performance improvement compared to the floating point model, with low variance highlighting the predictable performance characteristics of ahead-of-time transformation. Numeric data for the experiment can be found in Appendix \ref{quant_data}.
\begin{figure}[!ht]
\resizebox{\linewidth}{!}{
\includegraphics[]{chart_assets/quantization_chart_deeprecommender.pdf}
}
\caption{\label{quant_exp_results_inline} Normalized inference runtime (lower is better) for \ic{torch.fx}-based quantization.}
\end{figure}
Not only does \ic{torch.fx}-based quantization provide the expected performance increases, but the tool's development saw an order-of-magnitude productivity increase compared to an implementation on the TorchScript platform. By reducing the amount of complexity in the representation, exposing transformation APIs in Python, and embedding into the native PyTorch ecosystem, \ic{torch.fx} provides a high-productivity environment for semantics-changing transforms like quantization.
\subsubsection{Fusion Optimizations}
Operator Fusion is a class of optimization that merges patterns of tensor operations together into a single compute kernel. Fusion can save operator dispatch cost, memory bandwidth cost, and memory space cost.
One example of operator fusion is \textit{Convolution-BatchNorm fusion}. During inference, a Convolution-BatchNorm operator sequence can be merged by applying the batch normalization weights to the convolution weights~\cite{markus_2018}.
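The algebra of this fusion can be checked with a short numpy sketch; for brevity a linear layer stands in for the convolution here, since the per-output-channel folding of the batch normalization statistics is identical:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=(4, 3))            # linear weight, 4 output channels
b = rng.normal(size=4)                 # linear bias
gamma, beta = rng.normal(size=4), rng.normal(size=4)  # BN affine params
mean = rng.normal(size=4)              # BN running mean
var, eps = rng.uniform(0.5, 2.0, size=4), 1e-5        # BN running var

def linear_bn(x):
    y = x @ w.T + b
    return gamma * (y - mean) / np.sqrt(var + eps) + beta

# Fold BN into the weights: scale each output channel, adjust the bias.
s = gamma / np.sqrt(var + eps)
w_fused = w * s[:, None]
b_fused = (b - mean) * s + beta

def fused(x):
    return x @ w_fused.T + b_fused

x = rng.normal(size=(5, 3))
# fused(x) matches linear_bn(x) up to floating-point error.
```

The transform itself then only needs to locate Linear/Conv followed by BatchNorm in the graph, rewrite the weights as above, and delete the BatchNorm node.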
We evaluate this transformation on a PyTorch ResNet50 model on an NVIDIA Tesla V100-SXM2 16GB with CUDA version 11.0 and an Intel Xeon Gold 6138 CPU @ 2.00GHz. Figure \ref{conv_bn_fusion_results} shows approximately a 6\% latency reduction for the GPU case, a 40\% latency reduction on CPU with default intra-op parallelism, and a smaller 18\% latency reduction with intra-op parallelism disabled (i.e. \ic{OMP_NUM_THREADS=1}). Numeric results for this experiment can be found in Appendix \ref{fusion_data}.
\ic{torch.fx} provides the necessary non-local program context and state modification facilities needed for this transformation with its ahead-of-time, graph-based nature~\cite{he_2021}. The whole transformation and test harness amount to fewer than 150 lines of Python, demonstrating the power of \ic{torch.fx}'s APIs in enabling concise, fast-to-develop program transformations over PyTorch code.
\begin{figure}[!ht]
\includegraphics[]{chart_assets/fusion_chart.pdf}
\caption{\label{conv_bn_fusion_results} Normalized inference runtime (lower is better) with \ic{torch.fx}-based Convolution/Batch-Norm fusion.}
\end{figure}
\subsubsection{Program Scheduling and Partitioning}
Software pipelining involves scheduling operations such that the usage of parallel compute resources is maximized. An example of this is the overlapping of operations that occur synchronously on the CPU with operations that occur asynchronously on the GPU. Another example is overlapping operations that occur on the local host with operations that occur on a remote host via a remote procedure call (RPC) mechanism. \ic{torch.fx} provides the non-local analysis and transformation facilities needed for such scheduling optimizations and is used for this purpose at a major software company.
\subsection{Program Analysis}
\ic{torch.fx} has been applied in various ways for program analysis.
\ic{torch.fx} has been used to implement a framework for simulation of deep learning inference at scale on various hardware devices at a major software company. \ic{torch.fx} enables the estimation of FLOPs, memory bandwidth usage, and data value sizes of the workload, allowing for estimation of the program runtime and memory consumption. This system allows for rapid development of deep learning systems, enabling quick iteration in simulation rather than on real devices.
\ic{torch.fx} has also been used for various forms of shape analysis. The canonical \ic{fx.passes.shape_prop} package provides a naïve implementation of shape analysis by interpreting the graph and recording the observed shapes. Additional systems, including shape propagation via symbolic expressions and shape propagation via gradual typing semantics, are in development. \ic{torch.fx} provides a representation on which such analyses can be done, opening opportunities for type system and inference innovations to be applied to PyTorch models.
Finally, \ic{torch.fx} provides an \ic{fx.graph_drawer} package, which gives the user the ability to visualize \ic{torch.fx} graphs with Graphviz~\cite{10.1007/3-540-45848-4_57}. This provides a commonly-requested way of understanding a deep learning program via a visual representation of its DAG.
\subsection{Device and Runtime Export/Compilation}
PyTorch is primarily designed for modern GPUs, which provide a great deal of flexibility and dynamism and thus are very amenable to PyTorch's \emph{eager mode} execution model. However, GPUs can still benefit from ahead-of-time compilation of model code through toolkits like NVIDIA's TensorRT~\cite{tensorrt}.
More specialized processors (such as the TPU~\cite{jouppi2017datacenter}) promise higher performance, better power efficiency, and reduced cost via specialized functional units, specialized number formats, and new memory architectures. These processors often require static analyses and optimizations including operator scheduling, code generation, memory planning/scheduling, and architecture-aware quantization. Similarly to the optimizations in \ref{perfoptimization}, such analyses typically require greater program context than the per-operator kernel launches provided by PyTorch during eager mode execution. \ic{torch.fx} provides a pathway for such compiler stacks to integrate with PyTorch by providing a program representation extracted ahead-of-time. \ic{torch.fx} is used at a major software company for ASIC lowering.
We evaluate lowering a PyTorch ResNet50 model and a LearningToPaint model \cite{DBLP:journals/corr/abs-1903-04411} to NVIDIA TensorRT on an NVIDIA Tesla V100-SXM2 16GB GPU with CUDA version 11.0 using an experimental \ic{torch.fx}-to-TensorRT lowering system. Figure \ref{trt_results} shows that TensorRT provides a predictable 3.7x runtime speed-up across 30 trials compared to baseline PyTorch for ResNet50 and a 1.54x speed-up for LearningToPaint. Numerical data for this experiment is available in Appendix \ref{trt_data}.
In addition to providing the platform for runtime speed-up through TensorRT, \ic{torch.fx} also provided high developer productivity for this component. The project was quickly developed using \ic{torch.fx}'s Python APIs as well as TensorRT's Python APIs, creating a translation layer between the two. The project was also able to quickly build components such as automatic splitting of the model based on TensorRT's supported operators and automatically scheduling unsupported operations in non-optimized blocks. Finally, the ultimate user API is very easy to use, inspect, and debug, as it conforms to Python coding practices.
\begin{figure}[!ht]
\includegraphics[]{chart_assets/trt_chart.pdf}
\caption{\label{trt_results} Normalized inference runtime (lower is better) with \ic{torch.fx}-based TensorRT lowering}
\end{figure}
\section{Conclusion}
\label{conclusion}
We presented \ic{torch.fx}, a Python-only system for capturing and transforming PyTorch programs. We analyzed the factors that complicated related systems\textemdash including control flow, mutability, and data model\textemdash and showed how \ic{torch.fx} avoids complexity by focusing on common use cases and customizability. We investigated various use cases of \ic{torch.fx} across optimization, analysis, and device lowering, and showed how these results are enabled by \ic{torch.fx}'s API design.
\section{Acknowledgements}
We would like to acknowledge all of the contributors to the core \ic{torch.fx} framework, including Alban Desmaison, Alex Beloi, Alexander Soare, Allen (Congcong) Chen, Andrew Millspaugh, Ansley Ussery, Aravind Kalaiah, Bradley Davis, Brandon Lin, David Esiobu, Dmytro Dzhulgakov, Eli Uriegas, Erjia Guan, Garret Catron, Harut Movsisyan, Horace He, Hui Guo, James Reed, Jason Ansel, Jay Leverett, Jerry Cai, Jerry Zhang, Jordan Fix, Kefei Lu, Lu Fang, Malay Bag, Meghan Lele, Mehdi Mirzazadeh, Michael Benayoun, Michael Suo, Mike Ruberry, Mikhail Zolotukhin, Natalia Gimelshein, Nikita Shulga, Oleg Khabinov, Onyiee, Patrick Hu, Patrick Spencer, Peter Bell, Philip Meier, Richard Zou, Sam Estep, Shirong Wu, Shiyan Deng, Thomas Wang, Vasiliy Kuznetsov, Yinghai Lu, Zachary DeVito, and Zeina Migeed.
We would like to acknowledge the contributors to the FX Graph Mode Quantization project used in evaluations, including Adnios, Alban Desmaison, Angela Yi, Bradley Davis, Charles David Hernandez, Emily Shen, Erjia Guan, Horace He, James Reed, Jerry Zhang, Raghuraman Krishnamoorthi, Mike Ruberry, Mikhail Zolotukhin, Philip Meier, Rong Rong (AI Infra), Sam Estep, Supriya Rao, Vasiliy Kuznetsov, Xiang Gao, Zachary DeVito, and Zafar Takhirov.
We would like to acknowledge the contributors to the fx2trt project used in evaluations, including Alex Beloi, Aravind Kalaiah, Bangsheng Tang, Eli Uriegas, Emad El-Haraty, Ivan Kobzarev, Jack Montgomery, James Reed, Jerry Zhang, Jordan Fix, Kefei Lu, Linbin Yu, Marat Subkhankulov, Mike Ruberry, Mor Tzur, Nikita Shulga, Philip Meier, Protonu Basu, Rui Zhu, Samuel Salas, Shirong Wu, Shiyan Deng, Yinghai Lu, and Zhengxu Chen.
Finally, we'd like to acknowledge all the discussions and feedback from many users inside and outside Facebook.
\section{Introduction}
A commonly encountered class of signals consists of one or more pure sinusoidal tones plus noise. For such signals, the Fourier transform (FT) is often the analysis method of first choice because it concentrates the power of each tone into a delta function located at the tone frequency, and also displays the correlation properties of the noise. Unfortunately it is not possible to compute integrals of continuous functions which extend to infinity. In actual computations therefore one must calculate an approximation to the Fourier transform which will necessarily involve a finite number $N$ of samples of the function. The input signal is notionally multiplied by a uniform window extending over the interval $[-T/2,T/2]$ (which is sometimes called a top hat window because of its shape), then sampled within that interval at constant intervals $\Delta t = T/N$. The delta function for each tone becomes convolved with the Fourier transform of the top hat window, which is a sinc function.
If the Fourier transform of the windowed and sampled input is also sampled at $N$ intervals separated by $\Delta f = 1/T$, the result is known as the Discrete Fourier Transform or DFT \citep{bracewell_2000}. If the sinusoids contained in the original signal happen to have periods $T_j$ which are integer fractions of $T$, then the DFT will concentrate all the power of such tones into a single bin of the output. In effect the samples occur at the zeros of the sinc function. In the general case, however, non-periodicity of the sinusoids at the window edges is to be expected: this will generate Fourier components which were not there in the original signal. The effect is to redistribute power out of the bin corresponding to the true frequency of the tone into adjacent bins. This `leakage' can make it difficult to find other tones close to and/or fainter than the strongest.
A classic way to ameliorate this problem (nothing will make it entirely disappear) is to choose a different window function. A window which tapers gradually toward zero as $|t| \to T/2$ is usually found to cause much less leakage than one which drops abruptly to zero at the boundaries. A variety of functions having differing properties have been proposed over the years. Commonly, although not always, a window function will be real-valued, symmetrical about $t=0$, and everywhere nonnegative, the value decreasing monotonically from a maximum at $t=0$ to a minimum at $|t|=T/2$. Only windows having these properties are discussed in the present paper. Such a window has a Fourier transform which typically is reminiscent of a sinc function, namely with a relatively broad central positive-valued lobe surrounded by oscillations, known as sidelobes, which decrease steadily in amplitude with increasing frequency $f$. Often too (as with the sinc function) this amplitude decreases in proportion to $f$ raised to some negative power.
\citet{harris_1978} defined several criteria for assessing the performance of windows and tabulated the respective values for many examples. In the present paper three only of these criteria will be considered: the equivalent noise bandwidth or ENBW, the amplitude of the largest sidelobe, and the far-field rate of decrease of sidelobe amplitude (sometimes known as the roll-off). These quantities are described in more detail in section \ref{ss_criteria}.
The present paper describes a new window function which includes a parameter which allows one to vary the ENBW over its entire possible range. For convenience, and for a reason which is made clear in section \ref{ss_hyperbolic}, I refer to it as the hyperbolic window. The new window is compared here with a selection of others which excel in the properties of present interest, as described above. The hyperbolic window is not the only one of variable width, and the comparison makes it plain that one can often find an alternative which has a better value of either roll-off or sidelobe amplitude. However, as is shown in the present paper, one can still achieve roll-off rates of up to 60 decibels per octave with the hyperbolic window over a wide range of values of ENBW; and superior sidelobe levels can be attained in the near-field regime $Tf < \sim 100$, particularly in the range of ENBW values below 1.7, for which few well-performing windows are available. For this reason, and because of its flexibility, the hyperbolic window is suggested for applications in which sensitivity and near-field dynamic range need to be jointly optimized.
\section{Theory}
\subsection{Generic windows} \label{ss_windows}
A general symmetric window function $w(|t|)$ is uniquely defined by its values in the interval $[0, T/2]$. Its Fourier transform $W(f)$ is thus clearly
\begin{displaymath}
W(f) = 2 \int_0^{T/2} dt \ w(t) \; \cos(2 \pi f t).
\end{displaymath}
Throughout the present paper I assume that $w$ has been normalized such that $W(0)=1$.
In order to do $N$-point DFT calculations with a window, one has to sample $w(t)$ at $1+N/2$ locations, the samples being defined as
\begin{displaymath}
w_j = w(j T / N) \textrm{ for } j \in \{0, \ \dots, \ N/2\}.
\end{displaymath}
(I assume throughout that $N$ is even.) Conventionally then one stores the symmetric samples corresponding to $t < 0$ in the upper part of the vector according to the prescription
\begin{displaymath}
w_j = w_{N-j} \textrm{ for } j \in \{N/2+1, \ \dots, \ N-1\}.
\end{displaymath}
Note that $w_0$ and $w_{N/2}$ are centres of symmetry and thus each occurs only once in the sequence.
The Fourier transform of the continuous function $w$ can be more closely approximated via the DFT, i.e. with a finer frequency spacing, if the technique of zero-padding is used \citep{bracewell_2000}. Suppose one wished to sample the FT with the finer spacing $\delta f = \Delta f / a = 1/(aT)$. (Note that $a$ must be $>1$ and that $M = aN$ must be an integer.) The procedure is to construct a new vector $w^\prime_j$ of length $M = aN$ for which
\begin{displaymath}
w^\prime_j = w_j \textrm{ for } j \in \{0, \ \dots, \ N/2\}
\end{displaymath}
and
\begin{displaymath}
w^\prime_j = w_{j-M+N} \textrm{ for } j \in \{M+1-N/2, \ \dots, \ M-1\},
\end{displaymath}
the remainder being filled with zeros.
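As a numerical check of this construction (a numpy sketch with arbitrary example sizes), every $a$-th sample of the padded DFT coincides with the original $N$-point DFT, while the intermediate bins supply the finer sampling of $W(f)$:

```python
import numpy as np

# Zero-padding construction with example sizes N = 8, a = 4.
N, a = 8, 4
M = a * N

j = np.arange(N)
w = np.cos(np.pi * j / N) ** 2       # Hann samples; satisfies w_j = w_{N-j}

wp = np.zeros(M)
wp[: N // 2 + 1] = w[: N // 2 + 1]        # w'_j = w_j for j = 0 .. N/2
wp[M + 1 - N // 2 :] = w[N // 2 + 1 :]    # w'_j = w_{j-M+N} in the upper block

W_fine = np.fft.fft(wp)     # FT sampled with spacing Delta f / a
W_coarse = np.fft.fft(w)    # original N-point DFT
# Every a-th sample of the fine grid reproduces the coarse DFT:
# W_fine[::a] == W_coarse (up to rounding).
```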
\subsection{Assessment criteria} \label{ss_criteria}
The three criteria listed by \citet{harris_1978} used in the present paper are the equivalent noise bandwidth (ENBW), the maximum sidelobe power, and the roll-off rate. The ENBW is defined as
\begin{equation} \label{equ_enbw}
ENBW = N \frac{\sum_{j=0}^{N-1} w^2_j}{\left( \sum_{j=0}^{N-1} w_j \right)^2}.
\end{equation}
This gives the ratio between the gain for incoherent sources versus coherent ones, and thus is the reciprocal of sensitivity, which can be described as the detectability of a pure tone (i.e. coherent) among noise (incoherent). The smallest possible value of ENBW (corresponding to the best sensitivity) is unity, which can be attained only by the top-hat window. As can be seen from table 1 of \citeauthor{harris_1978}, ENBW correlates well with measures of the width of the central lobe of the window FT $W(f)$, but the exact proportionality varies from window to window.
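The ENBW formula is straightforward to evaluate numerically. As a sketch (assuming numpy), the top hat attains the minimum value of unity while the periodically sampled Hann window gives exactly 1.5:

```python
import numpy as np

def enbw(w):
    # ENBW = N * sum(w_j^2) / (sum(w_j))^2
    return len(w) * np.sum(w ** 2) / np.sum(w) ** 2

N = 1024
j = np.arange(N)
hann = np.cos(np.pi * j / N) ** 2   # Hann samples, peak stored at j = 0
top_hat = np.ones(N)

# enbw(top_hat) == 1.0 (the minimum); enbw(hann) == 1.5
```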
The `sidelobe height' is defined here as the maximum value of $W^2(f)$ for $f > f_0$, where $f_0$ is the frequency of the first zero-crossing of $W$. Note that the FTs of some windows do not cross zero at all, or at least not until very large $f$; thus a sidelobe height cannot meaningfully be defined for them. In the present paper, following conventional practice, sidelobe heights are presented in decibels.
The FT of any function $w$ which is only non-zero within a finite interval, and analytic within that interval, is found to decrease in amplitude asymptotically as frequency raised to the power $-(n+1)$, the value of $n$ being the order of the lowest derivative of $w$ (counting $w$ itself as order zero) which is discontinuous, either within the interval or at its boundaries. On a logarithmic scale, a decrease in FT envelope amplitude proportional to frequency raised to the power $-(n+1)$ is equivalent to about $6 \times (n+1)$ decibels per octave. I find it simpler just to quote $-(n+1)$: hence all roll-off values quoted in the present paper follow this scheme. The top hat window for example has, in these terms, a roll-off of $-1$.
In computational practice the maximum frequency obtained is limited to $N/(2T)$, $N$ being as before the number of samples of $w$. This may not be large enough to allow $W$ to reach its asymptotic value. It should also be pointed out that the finite precision of computation places an ultimate limit on dynamic range, so for purposes of optimizing this over the widest range of accessible frequencies, one needs to look farther than the theoretical asymptotic roll-off, and also ask how soon the asymptotic behaviour becomes manifest.
Both sidelobe height and roll-off may be used as proxies for dynamic range, since they both bear on the detectability of secondary tones in the presence of the strongest.
\subsection{The hyperbolic window} \label{ss_hyperbolic}
The hyperbolic window is defined for $|t|<T/2$ as
\begin{displaymath}
w(t) = \cos^{2 \alpha} \left[ \frac{\pi z(s,t)}{2} \right]
\end{displaymath}
where
\begin{displaymath}
z(s,t) = \frac{\tau (1 - s)}{1 - s (2 \tau - 1)}
\end{displaymath}
for
\begin{displaymath}
\tau = 2 |t| / T.
\end{displaymath}
The variable width is achieved via the warp factor $s$, for which the useful range of variation is -1 to 1. As $s \to -1$, $w$ becomes infinitely narrow; at $s=0$, $z=\tau$, thus $w$ becomes identical to a simple cosine window, raised to the power $2 \alpha$; as $s \to 1$, $w$ approaches the top hat. Note e.g. that for $s=0$ and $\alpha=1$, the hyperbolic and Hann windows \citep{harris_1978} are identical.
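A minimal sketch (assuming numpy; the function name is mine) of sampling the hyperbolic window in the storage convention of section \ref{ss_windows}:

```python
import numpy as np

def hyperbolic_window(N, s, alpha):
    # Sample w on j = 0 .. N/2 (i.e. t = j T / N), then mirror into the
    # conventional length-N layout with w_j = w_{N-j}.
    j = np.arange(N // 2 + 1)
    tau = 2.0 * j / N                                   # tau = 2|t|/T
    z = tau * (1.0 - s) / (1.0 - s * (2.0 * tau - 1.0))
    half = np.cos(0.5 * np.pi * z) ** (2.0 * alpha)
    w = np.empty(N)
    w[: N // 2 + 1] = half
    w[N // 2 + 1 :] = half[1 : N // 2][::-1]            # mirror the samples
    return w

# Sanity check: s = 0, alpha = 1 reproduces the Hann window cos^2(pi tau / 2).
w = hyperbolic_window(64, s=0.0, alpha=1.0)
```

Note that $s = 1$ is excluded (the expression for $z$ becomes $0/0$ at $\tau = 1$); the top hat is only approached in the limit.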
No closed-form expression is known for the general Fourier transform.
The name `hyperbolic' was chosen because $z$ can be rearranged to the form
\begin{displaymath}
z(s,t) = A(s) + \frac{B(s)}{C(s) + |t|}.
\end{displaymath}
The Fourier transform of the hyperbolic window undergoes a marked change as the warp factor $s$ departs from the `Hann' value of zero. For infinitesimal values of $s$ it is easy to show via Taylor expansion that the difference $\Delta w$ between the hyperbolic and Hann windows is (for integer $\alpha$) equal to
\setlength{\arraycolsep}{0.0em}
\begin{eqnarray} \label{equ_perturbation}
\lefteqn{\Delta w = 2^{(2-2\alpha)} \pi \alpha s \tau ( 1 - \tau ) \times {}}\nonumber\\
& & {\times}\: \sum_{k=0}^{\alpha-1} \binom{2\alpha}{k} \frac{\alpha-k}{\alpha} \sin [(\alpha - k) \pi \tau ]
\end{eqnarray}
\setlength{\arraycolsep}{5pt}
for $0 \le \tau \le 1$. Since this begins to have discontinuities at derivative order 3, one expects it to have a roll-off of -4. In fact this is what is observed (See for example Fig. \ref{fig_f}). However, the Fourier transform of equation \ref{equ_perturbation} appears to consist of a smooth or DC component of this roll-off value plus an oscillatory component which, particularly at high values of $\alpha$, exhibits a much steeper roll-off. The upshot is that the total FT for the hyperbolic window tends to resemble that of the Hann window raised to the same $\alpha$ exponent, plus a DC component at the much shallower roll-off of $-4$. Clearly the shallow-slope DC component is going to dominate the total at high frequencies; at low, it may or may not remain the dominant component, depending on the values of $s$ and $\alpha$.
\section{Comparisons} \label{s_comparisons}
\subsection{The contenders} \label{ss_contenders}
The highest sidelobe and roll-off of the hyperbolic window, for several sample values of $s$ and $\alpha$, are compared here against matching values for a selection of other window types, both variable and fixed. These are described in the following lists.
Two variable-width windows were included:
\begin{itemize}
\item The Tukey window \citep{harris_1978, tukey_1967} is a simple modification of the Hann window to allow ENBW to be varied from the Hann value to unity while retaining the $-3$ roll-off of the Hann. One could also choose to raise the Tukey to some arbitrary power, but in fact this does not improve the roll-off, while at the same time worsening the sidelobes. The lack of improvement results because the 2nd-order and higher derivatives remain discontinuous at the boundary between the constant part and the cosine part of this window. The FT of the Tukey looks rather uneven and exhibits large beats as the window width approaches either end of its range, due to the fact that it comprises two different functions spliced together.
\item The Planck window \citep{mckechan_2010} has continuity of derivatives of all orders, everywhere. Because of that it has an asymptotic roll-off which increases without bound. Beats also occur in the FT of this window, for the same reason as for the Tukey window.
\end{itemize}
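The continuously variable bandwidth of the Tukey window is easy to check numerically. The following sketch is an illustration only (it uses a hand-rolled periodic Tukey and the standard discrete ENBW estimate $N\sum_n w_n^2/(\sum_n w_n)^2$; the taper fraction is called $r$ here to avoid a clash with the exponent $\alpha$), confirming that the ENBW runs from the top-hat value of 1 at $r=0$ to the Hann value of 1.5 at $r=1$:

```python
import numpy as np

def tukey(N, r):
    """Periodic Tukey (cosine-tapered) window; r = taper fraction."""
    x = np.arange(N) / N
    w = np.ones(N)
    lo = x < r / 2
    hi = x >= 1 - r / 2
    w[lo] = 0.5 * (1 + np.cos(np.pi * (2 * x[lo] / r - 1)))
    w[hi] = 0.5 * (1 + np.cos(np.pi * (2 * (1 - x[hi]) / r - 1)))
    return w

def enbw(w):
    """Discrete equivalent noise bandwidth: N * sum(w^2) / (sum w)^2."""
    return len(w) * np.sum(w**2) / np.sum(w)**2

N = 4096
print(enbw(tukey(N, 0.0)))   # 1.0 exactly (top hat)
print(enbw(tukey(N, 0.5)))   # intermediate value
print(enbw(tukey(N, 1.0)))   # approximately 1.5 (periodic Hann)
```

Intermediate taper fractions give intermediate bandwidths, so $r$ plays the same role for the Tukey that the warp factor $s$ plays for the hyperbolic window.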
The fixed-width windows are:
\begin{itemize}
\item The top hat window is included because it forms the end point for the transformation of all three variable-width windows.
\item Hann \citep{harris_1978}: one of the simplest, yet it has reasonable values of all three criteria considered here. It is also a shape reachable by both the hyperbolic and Tukey windows. Squared and cubed Hann windows are also compared here.
\item De la Vall\'{e}e-Poussin \citep{parzen_1961}: included again because of its good compromise of values.
\item Bohman \citep{bohman_1960}: properties similar to the De la Vall\'{e}e-Poussin, and to the squared Hann.
\item Kaiser \citep{kaiser_1980}: included because it has excellent sidelobe suppression at moderate width, although it rolls off only as $1/f$. It is a variable-width formulation but, following \citet{nuttall_1981}, only the window normalized by $I_0(3\pi)$ is considered here.
\item Nuttall \citep{nuttall_1981}: Nuttall explored several windows consisting of sums of 3 or 4 cosine terms, optimized mostly for sidelobe level, but also at respectable roll-off values. Three are included for comparison here: namely the 3-term window corresponding to his Fig. 9, which is labeled here `Nuttall 3'; and 4-term windows `Nuttall 4a' and `Nuttall 4b', corresponding to his Figs. 11 and 12 respectively.
\end{itemize}
Many more fixed-width windows have been described, but most of them have poor roll-off, or too-large ENBW, so do not commend themselves for high dynamic range, high sensitivity work. In any case I do not pretend to make here a comprehensive survey of windows.
Note also that, except for the Hann, I have not thoroughly explored the changes in the fixed-width windows obtainable by raising them to higher powers. The few trials which have been made hold out little promise. The Nuttall windows for example are carefully balanced to yield optimum sidelobe levels; raising them to a higher exponent destroys this balance, with resulting rapid deterioration of the sidelobe performance. The same is observed when the hyperbolic transform is tried with these windows.
\subsection{Results} \label{ss_graphs}
For all the results quoted in the present section, window functions were sampled at $N=4096$ or $2^{12}$ points. Since a zero pad multiple of 16 was used, the discrete Fourier transform operated on $2^{16}$ points in total.
The ENBW is easily calculated from equation \ref{equ_enbw}. The highest sidelobe level is best estimated via a DFT of a sufficiently zero-padded version of the sampled window. Roll-off slopes were found to be the most difficult to estimate in practice. For the present paper, these were estimated from log-log scaled, squared FT curves in an iterative process in which an initial guess at the slope of the FT envelope was progressively refined. This slope was estimated over a range of values of frequency around $150/T$.
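As a concrete instance of this recipe, the sketch below applies it to the plain Hann window, whose reference values (ENBW of 1.5, highest sidelobe near $-31.5$ decibels) are well known; the zero-pad factor of 16 matches the one quoted above, while skipping to the first null at $Tf=2$ to step past the main lobe is an assumption specific to the Hann shape:

```python
import numpy as np

N, pad = 4096, 16
n = np.arange(N)
w = 0.5 * (1.0 - np.cos(2.0 * np.pi * n / N))       # periodic Hann window

# ENBW via the discrete estimate N * sum(w^2) / (sum w)^2
enbw = N * np.sum(w**2) / np.sum(w)**2              # approximately 1.5

# highest sidelobe: power spectrum of the zero-padded window,
# normalized to the f = 0 peak
P = np.abs(np.fft.rfft(w, n=N * pad))**2
P /= P[0]
# the Hann main lobe ends at its first null, T f = 2, i.e. padded bin 2*pad
sidelobe_db = 10.0 * np.log10(P[2 * pad + 1:].max())   # near -31.5 dB
```

For a window without a known null location, one would instead search outward from $f=0$ for the first local minimum of the padded spectrum before taking the maximum of the remainder.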
For the machine used to perform the calculations, floating-point precision was reported as $2.2\times10^{-16}$. This is consistent with the observed noise floor in the Fourier transforms, which lies at about $-340$ decibels relative to the peak power. Only the Planck and hyperbolic FTs routinely reach this noise floor before $Tf = N/2$.
The warp parameter $s$ of the hyperbolic window is monotonically related to the reciprocal of ENBW. Fig. \ref{fig_g} displays this relation for three values of $\alpha$.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_g}
\caption{Variation of the equivalent noise bandwidth (ENBW) with `warp factor' $s$ for the hyperbolic window. Three curves are provided for different values of the exponent $\alpha$. Note that the reciprocal ENBW is in fact what is plotted, so as to display its entire range.}
\label{fig_g}
\end{figure}
In section \ref{ss_hyperbolic} it is described how the FT of the hyperbolic window often exhibits a DC component of roll-off about $-4$ plus a significantly steeper oscillatory component. An example of the full hyperbolic window FT is shown in Fig. \ref{fig_j}, compared to a Planck window of the same value of ENBW. The separation between the DC and oscillatory components is clearly visible at frequencies greater than about $Tf \sim 30$. At frequencies below this value, the total FT is oscillatory - it passes through zero - even though this is not readily apparent from Fig. \ref{fig_j}, because of the plotting algorithm.
The frequency at which the DC component ceases to dominate the whole tends to move toward lower values as the ENBW increases. Past a certain value of ENBW, which depends on the value of the exponent $\alpha$, the DC component is dominant over the entire spectrum; in other words, the FT ceases to be oscillatory at any frequency. Shortly beyond this critical value of ENBW, local maxima are no longer found in the near-field transform. These critical points can be observed in Fig. \ref{fig_d} as the places beyond which sidelobe height as defined in section \ref{ss_criteria} ceases to be meaningful.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_j}
\caption{The Fourier power spectra of the hyperbolic (dotted and solid black lines) and Planck (solid red line) windows are shown. The value of ENBW is 1.46 for both. The Planck window was calculated with an $\epsilon$ value of 0.4; for the hyperbolic window, $\alpha=4$ and $s=0.606$. The dotted curve plots the full FT of the hyperbolic window, whereas the solid black line traces the oscillatory residual. Note also the noise floor at about $-340$ decibels due to the finite computational precision.}
\label{fig_j}
\end{figure}
In practice it is easy to estimate and remove the DC component via interpolation of a non-padded DFT of the window. Effectively then, as far as detecting weak secondary tones is concerned, the limiting factor is the oscillatory `residual' component of the FT, rather than its total amplitude. This is demonstrated in Fig. \ref{fig_k}, which shows a zoomed-in portion of the spectra in Fig. \ref{fig_j}. Here also a secondary tone at frequency $100/T$, with amplitude a factor $10^{-10}$ smaller than the primary, has been added before Fourier transforming. The secondary peak can be seen in both window FTs, but whereas one would probably not care to go much fainter using the Planck window, there is ample room at this frequency to detect a 5-fold fainter tone in the hyperbolic residuals.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_k}
\caption{The same two window FTs as in Fig. \ref{fig_j} are plotted here again over a smaller range of frequencies, and with a secondary pure tone added at $Tf=100$ and of amplitude $10^{-10}$ relative to the $f=0$ value. The hyperbolic window is shown on the left, the Planck window on the right. In the LH panel, the dotted line shows the full FT, the solid line shows the oscillatory residual. The vertical range is the same in both panels but the full-FT plot for the hyperbolic has been offset vertically.}
\label{fig_k}
\end{figure}
At parameter settings for which the onset of DC dominance falls below the roll-off estimation frequency of about $150/T$, the roll-off of the hyperbolic window was estimated from the residual part of the FT; otherwise from the total FT.
Fig. \ref{fig_i} shows the estimated roll-off for the hyperbolic window at a number of values of exponent $\alpha$, with ENBW held at the Hann value of 1.5. It can be seen that the decrease is broadly in line with that obtained from raising the Hann window to the same power. Note though that raising the Hann window to exponents greater than unity also increases its ENBW beyond 1.5. The increase in roll-off steepness for the hyperbolic window can however be obtained \emph{without} sacrifice in ENBW, and thus sensitivity.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_i}
\caption{Roll-off of the hyperbolic window as a function of the exponent $\alpha$. For each value of $\alpha$, the warp factor $s$ is varied so as to maintain the ENBW at the Hann-window value of 1.5. The roll-off was estimated at frequency $f$ of $150/T$. An open triangle means the roll-off was estimated from the total curve; an open circle indicates the roll-off refers to the oscillatory residual. The difference between these is described in section \ref{ss_hyperbolic}. The solid line shows the value for the Hann window raised to the same exponent.}
\label{fig_i}
\end{figure}
Fig. \ref{fig_d} compares the maximum sidelobe height for all the windows except the Nuttall 4b, which at $-93.3$ decibels lies off the bottom of the graph. One would assess the hyperbolic window as a relatively poor performer, except that its variable character allows exploration of areas of the parameter space not covered by any of the windows which offer better sidelobe levels at a small number of fixed (and relatively large) values of ENBW. Note also that compared to the other variable-width windows, the $\alpha=1$ hyperbolic is always better than the Tukey, at the same value ($-3$) of roll-off; and the $\alpha=3$ hyperbolic is always better than the Planck, at values of roll-off which, if not superior to the Planck, are certainly respectable (being steeper than for any of the fixed windows), and which, as is argued below, offer about the same practical performance.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_d}
\caption{Plotted here for various windows is the ratio in decibels between the power of the highest sidelobe and the maximum power anywhere (which for all the present windows occurs at a frequency of zero). Circles indicate fixed-width windows. The dot-dashed line shows the Tukey window over its full permitted range of ENBW, the dashed likewise for the Planck window. The solid lines show the response of the hyperbolic window at three values of the exponent $\alpha$. Note that none of the hyperbolic-window curves extends over the full range of ENBW values accessible to the window. As explained in section \ref{ss_graphs}, this is because the Fourier transform of this window ceases to exhibit local maxima at a certain maximum value of ENBW (e.g. at ENBW $\sim 1.58$ for $\alpha=1$).}
\label{fig_d}
\end{figure}
Fig. \ref{fig_f} compares roll-off for the $\alpha=3$ hyperbolic window against the other contenders. Although superior to most of the other windows, including the Tukey, which is stuck at a roll-off of $-3$ even when raised to a power, the best performer is clearly the Planck window, for which the present methods estimate a steepest roll-off value of about $-13$ at an ENBW just short of 1.4. This number is actually not very meaningful, since the continuity of the Planck window at all orders of derivative means that its asymptotic roll-off is infinite. Indeed in plots of its FT (see Fig. \ref{fig_j} for example) the log-log slope is seen to increase monotonically with increasing $f$, unlike all other windows examined in the present study.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_f}
\caption{Plotted here is the roll-off slope of the hyperbolic window at $\alpha=3$, contrasted with that of the fixed-width windows (open circles), the Tukey (dot-dashed line) and the Planck (dashed line). The dotted line gives the roll-off calculated from the total curve; the solid line shows that calculated from the oscillatory residual. The difference between these is described in section \ref{ss_hyperbolic}. Present, but hard to see, is a discontinuity in the `total curve' line at the Hann-cubed value. As explained in section \ref{ss_hyperbolic}, this is due to the rapid growth of a shallow roll-off perturbation to the Hann-like FT as the warp factor moves away from zero.}
\label{fig_f}
\end{figure}
The slow decrease in the steepness of the roll-off for the hyperbolic window seen in Fig. \ref{fig_f} at ENBW less than about 1.2 is also an artifact of the empirical $Tf \sim 150$ estimation point. The hyperbolic window in FT exhibits a pedestal which becomes ever broader as ENBW is decreased. The estimates of roll-off presented here thus begin to decrease as the width of the pedestal approaches the $Tf = 150$ value.
\begin{figure}[!t]
\centering
\includegraphics[width=2.5in]{fig_h}
\caption{This Fig. shows the approximate frequency at which the Fourier transform of the Planck window of the given ENBW first becomes smaller than the transform of the hyperbolic window of the same ENBW. As with Fig. \ref{fig_i}, triangles and circles indicate respectively where the total or residual curves of the hyperbolic FT were considered. Open symbols indicate $\alpha=3$; filled ones, $\alpha=4$.}
\label{fig_h}
\end{figure}
In fact the Planck window FT also exhibits a pedestal which is generally seen to be wider than that of the hyperbolic window at the same ENBW. The respective FTs do not cross until typically $Tf \sim 100$. Fig. \ref{fig_h} shows the (approximate) frequency at which the Planck FT first dips below the hyperbolic FT, for two different values of $\alpha$. For all frequencies below this value, the hyperbolic FT is of lower amplitude than the Planck, by as much as 30 decibels. Broadly speaking we find that, for a given value of ENBW, an exponent $\alpha$ can be found at which the hyperbolic is a better performer than the Planck up to frequencies of about $100/T$; beyond that point, the Planck is usually better. With the double-precision arithmetic on the machine used to perform the calculations, the noise floor of about $-340$ decibels is reached by the hyperbolic window FT at $Tf$ values of order 1000.
\section{Conclusions}
The present paper recaps the basic considerations behind the use of window functions to assist discrete Fourier analysis of signals comprising one or more periodic components among noise. Of the many possible ways to assess the performance of windows, attention is concentrated here on those criteria which bear directly on either sensitivity or dynamic range. Three quantitative criteria were selected, namely the equivalent noise bandwidth or ENBW, the amplitude of the highest sidelobe of the window Fourier transform, and the high-frequency asymptotic rate of decrease of the transform (roll-off).
Although a low value of ENBW is associated not only with optimum sensitivity but also with a narrower central maximum in the window FT, there is no discussion of resolution here. Coverage of all the different criteria is beyond the scope of the present paper. Maximizing the resolving power is in any case more complicated than simply choosing the narrowest central peak: high sidelobes will play a role as well.
A new variable-width window, named the hyperbolic window, is described in this paper. This window, at representative values of its width and power parameters $s$ and $\alpha$, is compared against a selection of windows from the existing literature. Since, by choice of the appropriate value of $s$, the ENBW of the hyperbolic window can be varied continuously between its minimum and maximum possible values of respectively unity and infinity, the most important comparisons display either sidelobe height or roll-off for the panel of windows as smooth functions of ENBW. The effect of $\alpha$ on the performance of the hyperbolic window is indicated where practical by plotting curves at a few values of $\alpha$ on the same graph. Note that, since best sensitivity is obtained for low values of ENBW, attention has been concentrated on this range.
Unlike the Tukey or Planck windows, the width of the hyperbolic window may be decreased without limit: in other words it can access all values of ENBW up to infinity. However since there seems nothing to be gained by going to large ENBW, this regime is little explored in the present paper.
Because the Planck window is smooth at all orders of derivative, the steepness of its FT roll-off increases with frequency without bound. In practice however, although this gives it superior dynamic range over a range of frequencies, its wide pedestal sets a lower boundary to this range at $Tf \sim 100$, and the finite numerical precision of computation sets an upper one at $Tf \sim 1000$.
Another advantage the hyperbolic window has over both the Tukey and Planck windows is that it is not constructed by splicing two different functions together. This gives it a much more regular-appearing Fourier transform. Both the Tukey and Planck windows in contrast exhibit relatively cluttered spectra, which may marginally decrease the detectability of weak additional tones.
A feature of the hyperbolic window is that, for significant ranges of $s$ and $\alpha$, its Fourier transform ceases at some frequency to be oscillatory, i.e. to pass through zero. (Indeed for some values of $s$ and $\alpha$ there are no zero-crossings at all.) In such cases it is convenient to divide the transform into a DC offset curve plus an oscillatory residual. Usually the oscillatory part has a much steeper roll-off than the smooth part. It is shown that, as far as detecting secondary tones goes, it is the oscillatory not the smooth part which is the practical limiting factor.
The comparison with other windows shows that, for any ENBW and at any frequency less than about $100/T$, the hyperbolic window is better either in sidelobe height or roll-off than any of its competitors - although not necessarily in both. It never approaches the sidelobe performance of the Nuttall windows at the same ENBW values, although for $\alpha > 2$ it has better roll-off. Nevertheless, it offers a respectable 40 decibels of sidelobe suppression at an ENBW of about 1.5, whereas for the top-performing Nuttall windows ENBW is greater than 2.
\bibliographystyle{plainnat}
\section{Evolving the Equations of General Relativistic
Magnetohydrodynamics on Adaptively-Refined Grids}
We have recently extended our adaptive mesh refinement (AMR) numerical
relativity code to solve the ideal magnetohydrodynamics (MHD)
equations, enabling us to simulate MHD effects in dynamical spacetimes
with AMR\cite{etie2011a,etie2011b}. The subtlety in evolving these
equations is enforcing the divergence-free constraint $\ve{\nabla}\cdot
\ve{B}=0$. If we were to evolve the induction equation in the most
obvious and straightforward way, numerical errors would lead to
violation of this divergence-free constraint, resulting in the
production of spurious magnetic monopoles. There are
several known solutions to this problem in unigrid simulations, but
few when AMR is used. The one we chose was to evolve the vector
potential, $\mathcal{A}^{\mu}$. In this case, the magnetic induction
and divergence-free equations in curved spacetime become:
\begin{eqnarray}
B^i &=& \epsilon^{ijk} \partial_j A_k, \\
\partial_t A_i &=& \epsilon_{ijk} v^j B^k - \partial_i (\alpha \Phi -
\beta^j A_j)
\label{A_evol_eqn}
\\
\partial_j \tilde{B}^j &=& \partial_i( \tilde\epsilon^{ijk} \partial_j A_k)
\equiv 0, \label{divB}
\end{eqnarray}
where $B^i=\tilde B^i/\sqrt{\gamma}$ is the magnetic field measured by a normal observer,
$A_{\mu}=\mathcal{A}_{\mu}-\Phi n_{\mu}$ is the projection of
the four-vector potential $\mathcal{A}_{\mu}$ onto a 3-dimensional spacelike
hypersurface, $\Phi$ the scalar potential, $n^{\mu}$ is the normal vector to the hypersurface,
$\tilde\epsilon^{ijk}=\epsilon^{ijk}/\sqrt{\gamma}$,
$\epsilon^{ijk}=n_{\mu}\epsilon^{\mu ijk}$ is the 3-dimensional
Levi-Civita tensor associated with the 3-metric $\gamma_{ij}$,
and $\gamma$ the 3-metric determinant.
By construction, we {\it guarantee} in Eq.~\ref{divB} that
the divergence of the magnetic field is zero, since the divergence of
a curl is zero. This property is guaranteed, no matter what
interpolation scheme we choose when interpolating between different
adaptively refined grids.
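The discrete version of this identity can be checked directly. The following sketch is an illustration only, on a single uniform periodic grid in flat space (the simulations themselves use curved spacetime and AMR): it builds $B^i$ as the centered-difference curl of a random vector potential and confirms that the divergence, taken with the same difference operators, vanishes to roundoff because those operators commute.

```python
import numpy as np

rng = np.random.default_rng(0)
N, h = 32, 1.0 / 32
A = rng.standard_normal((3, N, N, N))   # random periodic A_i, flat space

def d(f, axis):
    """Centered finite difference with periodic boundaries."""
    return (np.roll(f, -1, axis) - np.roll(f, 1, axis)) / (2 * h)

# B^i = eps^{ijk} d_j A_k (flat-space curl); axes 0,1,2 = x,y,z
Bx = d(A[2], 1) - d(A[1], 2)
By = d(A[0], 2) - d(A[2], 0)
Bz = d(A[1], 0) - d(A[0], 1)

# d_i B^i cancels term by term, since d_x d_y = d_y d_x and so on
div_B = d(Bx, 0) + d(By, 1) + d(Bz, 2)
print(np.max(np.abs(div_B)))            # roundoff-level residual
```

The same cancellation underlies Eq.~\ref{divB}; the nontrivial point for AMR is that it survives whatever interpolation is applied to $A_i$ at refinement boundaries, because $B^i$ is always recomputed as a curl of the interpolated potential.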
When evolving the vector potential, an electromagnetic (EM)
gauge choice must be made. In choosing an EM gauge, there is a
subtlety. We have written our numerical prescription so that the
resulting magnetic fields are completely invariant to the
EM gauge choice inside uniform-resolution grids.
However, when we adaptively add subgrids at higher resolution using
AMR, {\it interpolation at mesh refinement boundaries turns EM gauge
modes into physical modes}, thereby affecting the magnetic fields.
Thus, if we are not careful in our gauge choice, the
gauge-dependent magnetic fields induced on these refinement boundaries
may poison our simulation.
Our first attempt at a gauge condition was $\partial_i (\alpha
\Phi-\beta^j A_j)=0$, as it greatly simplifies the right-hand side
of Eq.~\ref{A_evol_eqn}. However, we later found that this gauge
choice introduces a zero-speed gauge mode\cite{etie2011a}. With this
zero-speed mode, if the path of magnetized matter crosses an AMR
refinement boundary, interpolation on this boundary leads to the
creation of weak, spurious magnetic fields in black hole--neutron star
(BHNS) simulations that grow stronger with time until the simulation
crashes.
So we switched from our original choice to the Lorenz gauge
$\nabla_{\mu} \mathcal{A}^{\mu}=0$, in which the EM gauge modes
propagate away, thereby drastically reducing the appearance of
spurious magnetic fields at refinement boundaries. The simulations
presented in the next section were the first to use this gauge for
full GRMHD with AMR.
\section{Magnetized Black Hole---Neutron Star Binary Mergers}
As a neutron star (NS) is tidally disrupted by a black hole (BH)
companion at the end of a BHNS binary inspiral, its magnetic fields
will be stretched, wound, and amplified. If sufficiently strong, these
magnetic fields may impact the gravitational waveforms, merger
evolution and mass of the remnant disk. Formation of highly-collimated
magnetic field lines in the disk+spinning BH remnant may launch
relativistic jets, providing the central engine for a short-hard GRB (sGRB). We
explore this scenario through fully general relativistic,
magnetohydrodynamic (GRMHD) BHNS simulations from inspiral through
merger and disk formation\cite{etie2011b}. In particular, we attempt
to answer the following two questions:
\begin{enumerate}
\item How do NS magnetic fields affect BHNS binary waveforms and the resulting BH+disk system?
\item Do we produce an sGRB progenitor?
\end{enumerate}
To answer these questions, we perform simulations in
which the BH is initially spinning with spin parameter 0.75, aligned
with the orbital angular momentum. Though {\it surface} NS magnetic field
strengths have been inferred by observation, very little is known
about NS {\it interior} magnetic fields. So for this preliminary
investigation we seed only the NS interior with initially poloidal
magnetic fields. The initial data are shown in the left panel of
Fig.~\ref{figbasicstory}. Keeping this field configuration fixed, we
vary the initial magnetic field strength, choosing magnetic fields
with average magnetic to gas pressure $P_{\rm B}/P_{\rm gas}$ of 0,
$5\times10^{-5}$, and $5\times10^{-3}$. Note that the case with the
strongest magnetic fields has field strengths of order $10^{17}$G at
the core of the NS (assuming the NS has a rest mass of
1.4$M_{\odot}$). We choose to seed the NS with
magnetic fields sufficiently weak to avoid disturbing the NS equilibrium during
inspiral, but sufficiently strong to influence the final outcome.
To address the first question, we find that magnetic fields have no
significant impact on the gravitational waveforms or
residual disk masses, regardless of initial strength.
Magnetic fields retain their poloidal structure
during the final orbit before merger. But in terms of magnetic field
structure, there is a large difference between pre- and post-disruption:
magnetic fields that were almost purely poloidal initially
become almost purely toroidal due to the rapid winding of
the matter around the BH as it forms a disk.
One of the ingredients in sGRB models is the collimation
of magnetic fields perpendicular to the disk. The right frame of
Fig.~\ref{figbasicstory} demonstrates the lack of magnetic
field collimation in the $P_{\rm B}/P_{\rm gas}=5\times10^{-3}$ case.
\begin{figure*}
\epsfxsize=2.4in
\leavevmode
\epsffile{B4-0-Density_InitialData.eps}
\epsfxsize=2.4in
\leavevmode
\epsffile{B4-0-Density.eps}
\caption{3D density and magnetic field snapshots. Left: initial data, NS on
the right (from highest to lowest rest-mass density, the colors are:
yellow, orange, red, and cyan), BH apparent horizon (AH) on the
left. Right: final disk density profile with magnetic field
lines, about $33$ms $(1.4 M_{\odot}/M_0)$ after disk
formation ($t=2072M$), where $M_0$ is the initial rest mass of the NS and $M$
the ADM mass of the system.
}
\label{figbasicstory}
\end{figure*}
We stopped our simulation about 30 ms after tidal disruption. At that point we had a thick disk orbiting a spinning BH, but there was no strong evidence of magnetic field collimation. But what about relativistic outflows,
another key ingredient for sGRB central engines? After 30 ms of disk evolution,
we find no outflows: low-density matter is still rapidly accreting onto the BH
poles and no sign of fluid velocity reversal is observed.
\begin{figure*}
\begin{center}
\vspace{-4mm}
\epsfxsize=3.2in
\leavevmode
\epsffile{fig3f.eps}
\caption{3D snapshot, corresponding to the case in which we seed a
remnant disk from an unmagnetized BHNS simulation with purely
poloidal magnetic fields. This is a snapshot taken when we terminate the
simulation, viewing from above the disk
plane. Magnetic field streamlines emerging just above
and below the BH poles are shown in white, and those in the disk are shown
in yellow.}
\label{meridionaldiskB5}
\end{center}
\end{figure*}
In our latest
work\cite{etie2012}, we demonstrated that when the
remnant disk from an unmagnetized BHNS simulation is seeded with
large-scale poloidal fields, we observe spectacular collimated
magnetic fields and relativistic outflows, as shown in
Fig.~\ref{meridionaldiskB5}. However, such large-scale poloidal fields may be difficult
to generate in a fully self-consistent BHNS simulation, as the
magnetic fields must follow the NS fluid as it wraps around the spinning
BH during tidal disruption and disk formation, generating strong
toroidal fields. GRMHD simulations performed by other groups indicate that BH
accretion disks lacking large-scale poloidal fields may not be capable
of generating sustained jets\cite{GRMHD_Jets_Req_Strong_Pol_fields}.
This result, combined with our findings, makes BHNS mergers
less likely candidates for sGRB central engines.
In spite of this, we found in this same work\cite{etie2012} that
inserting {\it tilted} magnetic fields into the NS breaks the
initial equatorial symmetry of the problem and
encourages poloidal fluid motion, resulting in roughly ten times stronger poloidal
magnetic fields in the remnant disk. Even with these stronger poloidal
magnetic fields, no magnetic collimation or relativistic outflows were
observed. We anticipate that large-scale poloidal fields might be
produced in BHNS simulations with highly-spinning BHs that are
misaligned with the orbital angular momentum. Such a system might
explain quasi-periodic signals in observed sGRBs\cite{stone2012}.
\bibliographystyle{ws-procs975x65}
\section{Introduction}
The idea that quantum mechanics can be well defined even if the notion of time is absent has been proposed \cite{Rovelli:1989jn,Rovelli:1988qp} and developed in a number of different strategies \cite{Rovelli:1990jm,Rovelli:1991ni,Reisenberger:2001pk,Marolf:2002ve,Hartle:1992as}. The motivation for formulating quantum mechanics in a \emph{timeless} description comes from research on quantum gravity: in the quantum theory of general relativity, the spacetime background is not fixed and it is generally not possible to make sense of quantum variables ``at a moment of time''. This is closely related to the ``problem of time'' in quantum gravity~\cite{Isham:1992ms}.
In particular, a comprehensive formulation for relativistic (timeless) quantum mechanics and its probabilistic interpretation are presented in Chapter 5 of \cite{Rovelli:book}. The formulation is based on the canonical (Hilbert spaces and self-adjoint operators) formalism, and we wonder whether it also admits the covariant (sum-over-histories) formalism. In conventional nonrelativistic (with time) quantum mechanics, the transition amplitudes are the matrix elements of the unitary evolution generated by the Hamiltonian and can be reformulated as the sum over histories, called the path integral (see \cite{Greiner:book} for a detailed derivation). In relativistic quantum mechanics, however, the concept of time evolution is not well defined at the fundamental level; therefore, conceptual issues and technical subtleties arise when one tries to derive the timeless path integral from the canonical formalism.
The aim of this paper is to rigorously derive the timeless path integral for relativistic quantum mechanics, starting from the canonical formulation in \cite{Rovelli:book}. It turns out that the transition amplitude can be reformulated as the sum, or functional integral, over all possible paths in the constraint surface $\Sigma$ specified by the (relativistic) Hamiltonian constraint $H(q^a,p_a)=0$ for the configuration variables $q^a$ and their conjugate momenta $p_a$, and that each path contributes with a phase identical to the classical action divided by $\hbar$. Unlike the conventional path integral in which every path is parameterized by the time variable $t$, the timeless path integral is completely independent of the parametrization for paths, manifesting the timeless feature. Furthermore, for the special case that the Hamiltonian constraint is a quadratic polynomial in $p_a$, the timeless path integral over $\Sigma$ reduces to the timeless Feynman's path integral over the (relativistic) configuration space.
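Schematically, writing $W(q_f,q_i)$ for the transition amplitude between configurations $q_i$ and $q_f$ and suppressing the measure and boundary details worked out in \secref{sec:timeless path integral}, the statement above reads
\begin{displaymath}
W(q_f,q_i) \;\sim \int_{\textrm{paths in }\Sigma} \mathcal{D}[q,p]\; \exp\!\left(\frac{i}{\hbar}\int p_a\, dq^a\right),
\end{displaymath}
where $\int p_a\,dq^a$ is the reparametrization-invariant classical action evaluated along each path lying in the constraint surface $\Sigma$.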
The timeless path integral for relativistic quantum mechanics is appealing both conceptually and technically. Conceptually, the timeless path integral offers an alternative interpretation of relativistic quantum fluctuations and is more intuitive than the canonical formalism in many respects. It can give a new point of view on how the conventional quantum mechanics with time emerges within a certain approximation and thus may help to resolve the problem of time. Technically, the timeless path integral provides tractable tools to compute (at least numerically or approximately) the transition amplitudes, which otherwise remain formal in the canonical formalism. For example, the semiclassical approximation for the timeless path integral can be developed \textit{\`{a} la} the Wentzel-Kramers-Brillouin (WKB) method.
In the research of loop quantum gravity (LQG), the sum-over-histories formulation is an active research area that goes under the name ``spin foam models'' (SFMs) (see \cite{Rovelli:book} and references therein for LQG and SFMs). In particular, over the past years, SFMs in relation to the kinematics of LQG have been clearly established \cite{Engle:2007uq,Freidel:2007py,Engle:2007wy,Kaminski:2009fm}. However, the Hamiltonian dynamics of LQG is far from fully understood, and although well motivated, SFMs have not been systematically derived from any well-established theories of canonical quantum gravity. Meanwhile, loop quantum cosmology (LQC) has recently been cast in a sum-over-histories formulation, providing strong support for the general paradigm underlying SFMs \cite{Ashtekar:2009dn,Ashtekar:2010ve}. In this paper, the timeless path integral is systematically derived from the canonical formalism of relativistic quantum mechanics, and we hope it will shed new light on the issues of the interplay between LQG/LQC and SFMs.
This paper is organized as follows. We begin with a review on the classical theory of relativistic mechanics in \secref{sec:classical theory} and then a review on the quantum theory of relativistic mechanics in \secref{sec:quantum theory}. The main topic is presented in \secref{sec:timeless path integral}, where the timeless path integral is derived and investigated in detail. Finally, conclusions and outlooks are summarized and discussed in \secref{sec:discussion}.
\section{Classical theory of relativistic mechanics}\label{sec:classical theory}
The conventional formulation of classical mechanics treats the time $t$ on a special footing and therefore is not broad enough for general-relativistic systems, which treat time on an equal footing with other variables. To include general-relativistic systems, we need a more general formulation with a new conceptual scheme. A timeless formulation for relativistic classical mechanics is proposed for this purpose and described in detail in Chapter 3 of \cite{Rovelli:book}, excerpts from which are presented in this section with some new materials added to give a review and define notations.
\subsection{Hamiltonian formalism}\label{sec:Hamiltonian formalism}
Let $\mathcal{C}$ be the \emph{relativistic configuration space} coordinated by $q^a$ for $a=1,2,\cdots,d$ with $q^a$ being the \emph{partial observables} and $d$ being the dimension of $\mathcal{C}$. In nonrelativistic mechanics, one of the partial observables can be singled out and treated specially as the time $t$, i.e. $q^a=(t,q^i)$, but this separation is generally not possible for general-relativistic systems. An observation yields a complete set of $q^a$, which is called an \emph{event}. In nonrelativistic mechanics, an observation is a reading of the time $t$ together with other readings $q^i$.
Consider the cotangent space $\Omega=T^*\mathcal{C}$ coordinated by $q^a$ and their momenta $p_a$. The space $\Omega$ carries a natural one-form $\tilde{\theta}=p_a\,dq^a$. Once the kinematics (i.e. the space $\mathcal{C}$ of the partial observables $q^a$) is known, the dynamics is fully determined by giving a \emph{constraint surface} $\Sigma$ in the space $\Omega$. The constraint surface $\Sigma$ is specified by $H=0$ with a function $H:\Omega\rightarrow\mathbb{R}^k$. Denote by $\tilde{\gamma}$ an unparameterized curve in $\Omega$ (observables and momenta) and by $\gamma$ its projection to $\mathcal{C}$ (observables only). The physical motion is determined by the function $H$ via the following
\begin{quote}
\textbf{Variational principle.} A curve $\gamma$ in $\mathcal{C}$ is a \emph{physical motion} connecting the events $q_1^a$ and $q_2^a$, if $\tilde{\gamma}$ extremizes the action
\be\label{eqn:cl action}
S[\tilde{\gamma}]=\int_{\tilde{\gamma}}p_a\,dq^a
\ee
in the class of the curves $\tilde{\gamma}$ which satisfy
\be\label{eqn:cl constraint}
H(q^a,p_a)=0,
\ee
(i.e. $\tilde{\gamma}\in\Sigma$) and
whose projection $\gamma$ to $\mathcal{C}$ connects $q_1^a$ and $q_2^a$.
\end{quote}
If $k=1$, $H$ is a scalar function and called the \emph{Hamiltonian constraint}. If $k>1$, there is gauge invariance and $H$ is called the \emph{relativistic Hamiltonian}. The pair $(\mathcal{C},H)$ describes a relativistic dynamical system. All (relativistic and nonrelativistic) Hamiltonian systems can be formulated in this timeless formalism.
By parameterizing the curve $\tilde{\gamma}$ with a parameter $\tau$, the action \eqnref{eqn:cl action} reads as
\be\label{eqn:cl action with N}
S[q^a,p_a,N_i]=\int d\tau\left( p_a(\tau)\,\frac{d q^a(\tau)}{d\tau}
-N_i(\tau)H^i(q^a,p_a)\right),
\ee
where the constraint \eqnref{eqn:cl constraint} has been implemented with the Lagrange multipliers $N_i(\tau)$. Varying this action with respect to $N_i(\tau)$, $q^a(\tau)$ and $p_a(\tau)$ yields the constraint equation(s) \eqnref{eqn:cl constraint} together with the Hamilton equations:
\begin{subequations}\label{eqn:Hamilton eqs}
\ba
\frac{dq^a(\tau)}{d\tau}&=&N_j(\tau)\frac{\partial H^j(q^a,p_a)}{\partial p_a},
\\
\frac{dp_a(\tau)}{d\tau}&=&-N_j(\tau)\frac{\partial H^j(q^a,p_a)}{\partial q^a}.
\ea
\end{subequations}
For $k>1$, a motion is determined by a $k$-dimensional surface in $\mathcal{C}$ and different choices of the $k$ arbitrary functions $N_j(\tau)$ determine different curves and parametrizations on the single surface that defines a motion. For $k=1$, a motion is a 1-dimensional curve in $\mathcal{C}$ and different choices of $N(\tau)$ correspond to different parametrizations for the same curve. Different solutions of $q^a(\tau)$ and $p_a(\tau)$ for different choices of $N_j(\tau)$ are gauge-equivalent representations of the same motion and different choices of $N_j(\tau)$ have no physical significance.
Along the solution curve, the rate of change of $H^i$ with respect to $\tau$ is given by
\ba
\frac{dH^i}{d\tau}
&=&\frac{dq^a}{d\tau}\frac{\partial H^i}{\partial q^a}
+\frac{dp_a}{d\tau}\frac{\partial H^i}{\partial p_a}
=N_j\frac{\partial H^j}{\partial p_a}\frac{\partial H^i}{\partial q^a}
-N_j\frac{\partial H^j}{\partial q^a}\frac{\partial H^i}{\partial p_a}\nn\\
&\equiv&N_j\,\lbrace H^i,H^j\rbrace.
\ea
To be consistent, the physical motion should remain on the constraint surface $\Sigma$. That is, $dH^i/d\tau$ has to vanish along the curve. Therefore, we must have the condition
\be\label{eqn:first class}
\left.\lbrace H^i,H^j\rbrace\right\vert_\Sigma=0,
\qquad\text{abbreviated as}\quad
\lbrace H^i,H^j\rbrace\approx0
\ee
for all $i$ and $j$. A function $F(q^a,p_a)$ defined in a neighborhood of $\Sigma$ is called \emph{weakly zero} if $\left.F\right\vert_\Sigma=0$ (abbreviated as $F\approx0$) and called \emph{strongly zero} if
\be
\left.F\right\vert_\Sigma=0
\quad\text{and}\quad
\left.\left(\frac{\partial F}{\partial q^a},
\frac{\partial F}{\partial p_a}\right)\right\vert_\Sigma=0,
\qquad\text{abbreviated as}\quad
F\simeq0.
\ee
It can be proven that $F\approx0$ implies $F\simeq f_iH^i$ for some functions $f_i(q^a,p_a)$. Consequently, we have
\be\label{eqn:first class 2}
\lbrace H^i,H^j\rbrace\simeq {f^{ij}}_k(q^a,p_a)\,H^k.
\ee
The condition \eqnref{eqn:first class} ensures all constraints $H^i$ to be \emph{first class}. (See \cite{Wipf:1993xg} for more about constrained systems and the concept of first class constraints.)
\subsection{Nonrelativistic mechanics as a special case}\label{sec:nonrelativistic mechanics}
The conventional nonrelativistic mechanics can also be formulated in the timeless framework as a special case. For the nonrelativistic systems, the relativistic configuration space has the structure $\mathcal{C}=\mathbb{R}\times\mathcal{C}_0$, where $\mathcal{C}_0$ is the conventional nonrelativistic configuration space; i.e., $q^a=(t,q^i)$ as one of the partial observables is identified as the time $t$. Correspondingly, the momenta read as $p_a=(p_t,p_i)$ with $p_t$ being the conjugate momentum of $t$ and $p_i$ being the conjugate momenta of $q^i$. The Hamiltonian constraint is given by
\be\label{eqn:nonrel H}
H(t,q^i,p_t,p_i)=p_t+H_0(q^i,p_i,t),
\ee
where $H_0(q^i,p_i,t)$ is the conventional nonrelativistic Hamiltonian function. Given the Hamiltonian constraint in the form of \eqnref{eqn:nonrel H}, the Hamilton equations \eqnref{eqn:Hamilton eqs} lead to
\begin{subequations}
\ba
\frac{dt}{d\tau}=N(\tau),
\qquad
\frac{dp_t}{d\tau}=-N(\tau)\frac{\partial H_0}{\partial t},
\\
\frac{dq^i}{d\tau}=N(\tau)\frac{\partial H_0}{\partial p_i},
\qquad
\frac{dp_i}{d\tau}=-N(\tau)\frac{\partial H_0}{\partial q^i},
\ea
\end{subequations}
which read as
\be
\frac{dp_t}{dt}=-\frac{\partial H_0}{\partial t}
\ee
and
\be\label{eqn:nonrel Hamilton eqs}
\frac{dq^i}{dt}=\frac{\partial H_0}{\partial p_i}, \qquad
\frac{dp_i}{dt}=-\frac{\partial H_0}{\partial q^i},
\ee
if, in particular, we use $t$ to parameterize the curve of solutions.
Furthermore, the constraint \eqnref{eqn:cl constraint} dictates $p_t=-H_0$. Thus, the momentum $p_t$ is the negative of energy and it is a constant of motion if $H_0=H_0(q^i,p_i)$ has no explicit dependence on $t$. The equations in \eqnref{eqn:nonrel Hamilton eqs} are precisely the conventional Hamilton equations for nonrelativistic mechanics.
The Hamilton equations in \eqnref{eqn:nonrel Hamilton eqs} form a system of first-order ordinary differential equations. Given the initial condition $q^i(t_0)=q_0^i$ and $p_i(t_0)=p_{i0}$ at the time $t_0$, the existence and uniqueness theorem for ordinary differential equations states that there exists a solution of \eqnref{eqn:nonrel Hamilton eqs} given by $q^i=q^i(t)$ and $p_i=p_i(t)$ for $t\in\mathbb{R}$, and furthermore the solution is unique.\footnote{In order to apply the existence and uniqueness theorem, we assume $\partial H_0/\partial q^i$, $\partial H_0/\partial p_i$, $\partial^2 H_0/\partial {q^i}^2$, $\partial^2 H_0/\partial {p_i}^2$ and $\partial^2 H_0/\partial q^i \partial p_j$ all continuous.} As a consequence, $q^i$ and $p_i$ evolve as functions of $t$, and a physical motion is an \emph{open} curve in $\mathcal{C}=\mathbb{R}\times\mathcal{C}_0$, along which the observable $t$ is \emph{monotonic}.
A dynamical system in which a particular partial observable can be singled out as $t$ such that the Hamiltonian is separated as in the form of \eqnref{eqn:nonrel H} is called \emph{deparametrizable}. For deparametrizable systems, the change of $t$ is in accord with the ordinary notion of time, which does not turn around but grows monotonically along the physical motion. Generically, however, relativistic systems might be non-deparametrizable --- no preferred observable can serve as the time such that other variables are described as functions of time along the physical motion. The classical theory predicts the physical motion as an unparameterized curve, which gives \emph{correlations} between physical variables, not the way physical variables evolve with respect to a preferred time variable. In the next subsection, we will introduce the timeless double pendulum as an example to illustrate the timeless feature.
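As an illustrative numerical sketch (not part of the formalism; the lapse $N(\tau)$ and initial data below are arbitrary choices), one can integrate the parametrized Hamilton equations for the deparametrizable system $H=p_t+H_0$ with $H_0=(p^2+q^2)/2$ and confirm that $t$ grows monotonically along the motion while the constraint $p_t+H_0=0$ is preserved:

```python
import math

# Illustrative check: parametrized Hamilton equations for the deparametrizable
# system H = p_t + H_0 with H_0 = (p^2 + q^2)/2. A non-constant lapse N(tau)
# is an arbitrary gauge choice; t should remain monotonic and the constraint
# p_t + H_0 = 0 should be preserved along the flow.

def lapse(tau):
    return 1.5 + math.sin(tau)      # arbitrary positive gauge choice

def rk4_step(state, tau, h):
    def f(s, tau):
        t, q, p, pt = s
        N = lapse(tau)
        # dt/dtau = N, dq/dtau = N dH0/dp, dp/dtau = -N dH0/dq, dp_t/dtau = 0
        return (N, N * p, -N * q, 0.0)
    k1 = f(state, tau)
    k2 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k1)), tau + 0.5 * h)
    k3 = f(tuple(s + 0.5 * h * k for s, k in zip(state, k2)), tau + 0.5 * h)
    k4 = f(tuple(s + h * k for s, k in zip(state, k3)), tau + h)
    return tuple(s + h * (a + 2 * b + 2 * c + d) / 6
                 for s, a, b, c, d in zip(state, k1, k2, k3, k4))

# Initial data on the constraint surface: p_t = -H_0(q, p).
q0, p0 = 1.0, 0.0
state = (0.0, q0, p0, -(p0**2 + q0**2) / 2)
h, times = 1e-3, [state[0]]
for i in range(5000):
    state = rk4_step(state, i * h, h)
    times.append(state[0])

t, q, p, pt = state
constraint = pt + (p**2 + q**2) / 2
monotonic = all(b > a for a, b in zip(times, times[1:]))
print(monotonic, abs(constraint) < 1e-9)
```

Since the lapse is everywhere positive, $t$ increases strictly with $\tau$; different choices of $N(\tau)$ merely reparameterize the same curve.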
\subsection{Example: Timeless double pendulum}\label{sec:timeless double pendulum}
Let us now introduce a genuinely timeless system as a simple model to illustrate the mechanics without time. This model was first introduced in \cite{Rovelli:1990jm,Rovelli:1991ni} and used repeatedly as an example in \cite{Rovelli:book}.
Consider a mechanical system with two partial observables, $a$ and $b$, whose dynamics is specified by the relativistic Hamiltonian
\be
H(a,b,p_a,p_b)=\frac{1}{2}\left(p_a^2+p_b^2+a^2+b^2-2E\right)
\ee
with a given constant $E$. The relativistic configuration space is $\mathcal{C}=\mathbb{R}^2$ coordinated by $a$ and $b$, and the cotangent space $\Omega=T^*\mathcal{C}$ is coordinated by $(a,b,p_a,p_b)$. The constraint surface $\Sigma$ is specified by $H=0$; it is a 3-dimensional sphere of radius $\sqrt{2E}$ in $\Omega$.
In the $N(\tau)=1$ gauge, the Hamilton equations \eqnref{eqn:Hamilton eqs} give
\be
\frac{da}{d\tau}=p_a,
\qquad
\frac{db}{d\tau}=p_b,
\qquad
\frac{dp_a}{d\tau}=-a,
\qquad
\frac{dp_b}{d\tau}=-b,
\ee
and the Hamiltonian constraint \eqnref{eqn:cl constraint} gives
\be
a^2+b^2+p_a^2+p_b^2=2E.
\ee
The general solution is given by
\be
a(\tau)=A_a\sin(\tau), \qquad b(\tau)=A_b\sin(\tau+\beta),
\ee
where $A_a=\sqrt{2E}\sin\alpha$ and $A_b=\sqrt{2E}\cos\alpha$, and $\alpha$ and $\beta$ are constants.
Therefore, physical motions are closed curves (ellipses) in $\mathcal{C}=\mathbb{R}^2$. (Choosing different gauges for $N$ yields the same curve with different parametrizations.) This system is non-deparametrizable and does not admit a conventional Hamiltonian formulation, because, as discussed in \secref{sec:nonrelativistic mechanics}, physical motions in $\mathcal{C}=\mathbb{R}\times\mathcal{C}_0$ for a nonrelativistic system are monotonic in $t$ and thus cannot be closed curves.
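The closedness of the motion and the exactness of the constraint can be spot-checked numerically; the following sketch (with arbitrarily chosen values of $E$, $\alpha$ and $\beta$) verifies that the general solution satisfies $a^2+b^2+p_a^2+p_b^2=2E$ identically and returns to itself after $\tau\rightarrow\tau+2\pi$:

```python
import math

# Illustrative check of the double-pendulum solution (arbitrary E, alpha, beta):
#   a(tau) = A_a sin(tau),       b(tau)  = A_b sin(tau + beta),
#   p_a    = A_a cos(tau),       p_b     = A_b cos(tau + beta),
# against the constraint a^2 + b^2 + p_a^2 + p_b^2 = 2E.
E, alpha, beta = 2.0, 0.7, 1.1
A_a = math.sqrt(2 * E) * math.sin(alpha)
A_b = math.sqrt(2 * E) * math.cos(alpha)

def point(tau):
    a, b = A_a * math.sin(tau), A_b * math.sin(tau + beta)
    pa, pb = A_a * math.cos(tau), A_b * math.cos(tau + beta)
    return a, b, pa, pb

# The constraint holds for every tau, since A_a^2 + A_b^2 = 2E...
max_violation = max(abs(sum(x * x for x in point(0.01 * k)) - 2 * E)
                    for k in range(1000))
# ...and the motion is a closed curve: it returns after tau -> tau + 2*pi.
a0, b0, _, _ = point(0.3)
a1, b1, _, _ = point(0.3 + 2 * math.pi)
print(max_violation < 1e-12, abs(a1 - a0) < 1e-12 and abs(b1 - b0) < 1e-12)
```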
\subsection{Lagrangian formalism}
Consider the special case that the relativistic Hamiltonian is given in the form:\footnote{In this subsection, the repeated index $a$ is not summed unless $\sum_a$ is explicitly used.}
\be\label{eqn:H special form}
H(q^a,p_a)=\sum_a\alpha_ap_a^2 + \sum_a\beta_ap_aq^a + \sum_a\gamma_ap_a +V(q^a),
\ee
where $\alpha_a$, $\beta_a$ and $\gamma_a$ are constant coefficients, and $V(q^a)$ is the potential which depends only on $q^a$. This form is quite generic and many examples of interest belong to this category such as the relativistic particle (free or subject to an external potential), the timeless double pendulum (harmonic or anharmonic) and the nonrelativistic system as described by \eqnref{eqn:nonrel H} with $H_0=\sum_ip_i^2/2m_i+V(q^i,t)$. The Hamilton equations \eqnref{eqn:Hamilton eqs} yield
\begin{subequations}\label{eqn:Hamilton eqs special case}
\ba
\label{eqn:Hamilton eqs special case a}
\frac{dq^a}{d\tau}&=&N\left(2\alpha_ap_a+\beta_aq^a+\gamma_a\right),\\
\frac{\;dp_a}{d\tau}&=&-N\left(\beta_ap_a+\frac{\partial V}{\partial q^a}\right).
\ea
\end{subequations}
Equation \eqnref{eqn:Hamilton eqs special case a} gives the relation between the momenta $p_a$ and the ``velocities'' $\dot{q}^a:={dq^a}/{d\tau}$, through which
the inverse Legendre transform recasts the action \eqnref{eqn:cl action with N} in terms of the Lagrangian function:
\ba\label{eqn:cl Lagrangian}
&&S[q^a,\dot{q}^a,N;\tau]=
\int d\tau\, L(q^a,\dot{q}^a,N)\nn\\
&=&\int d\tau
\left(\sum_a\frac{N}{4\alpha_a}\left[\frac{\dot{q}^a}{N}-\beta_aq^a-\gamma_a\right]^2
-NV(q^a)\right).
\ea
Variation with respect to $N$ yields
\ba\label{eqn:var to N on L}
\frac{\delta S}{\delta N}&\equiv&\frac{\partial L}{\partial N}=0
\quad\Rightarrow\nn\\
0&=&\sum_a\frac{1}{4\alpha_a}\left[\frac{\dot{q}^a}{N}-\beta_aq^a-\gamma_a\right]^2
-\sum_a\frac{\dot{q}^a}{2\alpha_aN} \left[\frac{\dot{q}^a}{N}-\beta_aq^a-\gamma_a\right]
-V\nn\\
&=&-\left(\sum_a\alpha_ap_a^2 + \sum_a\beta_ap_aq^a + \sum_a\gamma_ap_a +V\right)
=-H,
\ea
which is precisely the Hamiltonian constraint \eqnref{eqn:cl constraint}. On the other hand, variation with respect to $q^a$ gives the equation of motion as a second-order differential equation:
\ba\label{eqn:var to q on L}
&&\frac{\delta S}{\delta q^a}
\equiv\frac{\partial L}{\partial q^a}
-\frac{d}{d\tau}\frac{\partial L}{\partial\dot{q}^a}=0\nn\\
&\Rightarrow&\quad
\frac{d}{Nd\tau}\left(\frac{dq^a}{Nd\tau}\right)
=\beta_a^2q^a+\beta_a\gamma_a-2\alpha_a\frac{\partial V}{\partial q^a},
\ea
which is equivalent to \eqnref{eqn:Hamilton eqs special case}.
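The inverse Legendre transform can also be spot-checked numerically at a single phase-space point. In the following sketch (one degree of freedom, arbitrary sample values), the Lagrangian integrand of \eqnref{eqn:cl Lagrangian} is compared with $p_a\dot{q}^a-NH$ of \eqnref{eqn:cl action with N} after eliminating $p_a$ via \eqnref{eqn:Hamilton eqs special case a}:

```python
# Illustrative one-point check of the inverse Legendre transform for
# H = alpha p^2 + beta p q + gamma p + V(q) (one degree of freedom).
# All numerical values below are arbitrary sample choices.
alpha, beta, gamma = 0.8, -0.3, 0.5
q, qdot, N, V = 1.2, 2.0, 1.7, 0.9   # V stands for the value V(q)

# Momentum from qdot = N (2 alpha p + beta q + gamma):
p = (qdot / N - beta * q - gamma) / (2 * alpha)

# Hamiltonian form of the integrand: p qdot - N H(q, p)
H = alpha * p**2 + beta * p * q + gamma * p + V
hamiltonian_form = p * qdot - N * H

# Lagrangian form: (N / 4 alpha) [qdot/N - beta q - gamma]^2 - N V
lagrangian_form = (N / (4 * alpha)) * (qdot / N - beta * q - gamma)**2 - N * V

print(abs(hamiltonian_form - lagrangian_form) < 1e-12)
```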
\section{Quantum theory of relativistic mechanics}\label{sec:quantum theory}
The timeless formulation for relativistic classical mechanics is reviewed in \secref{sec:classical theory}. Based on the Hamiltonian framework of the classical theory, the quantum theory of relativistic mechanics can be formulated in the canonical formalism. Unlike the conventional quantum theory, relativistic quantum mechanics does not describe evolution in time, but correlations between observables. The timeless formulation for relativistic quantum mechanics is described in detail in Chapter 5 of \cite{Rovelli:book}, excerpts from which are presented in \secref{sec:general scheme} and \secref{sec:timeless double pendulum qm} to give a review. Issues on the physical Hilbert space are detailed in \secref{sec:physical Hilbert space} and the physical interpretation of quantum measurements and collapse is discussed in \secref{sec:measurements and collapse}.
\subsection{General scheme}\label{sec:general scheme}
Let $\mathcal{C}$ be the relativistic configuration space for the classical theory as described in the \secref{sec:Hamiltonian formalism}. The corresponding quantum theory can be formulated timelessly in the following scheme:
\begin{description}
\item[Kinematical states.] Let $\mathcal{S}\subset\mathcal{K}\subset\mathcal{S}'$ be the Gelfand triple defined over $\mathcal{C}$ with the measure $d^dq^a\equiv dq^1dq^2\cdots dq^d$.\footnote{That is, $\mathcal{S}$ is the space of the smooth functions $f(q^a)$ on $\mathcal{C}$ with fast decrease, $\mathcal{K}=L^2[\mathcal{C},d^dq^a]$ is a Hilbert space, and $\mathcal{S}'$ is formed by the tempered distributions over $\mathcal{C}$.} The kinematical states of a system are represented by vectors $\ket{\psi}\in\mathcal{K}$, and $\mathcal{K}$ is called the \emph{kinematical Hilbert space}.
\item[Partial observables.] A partial observable is represented by a self-adjoint operator in $\mathcal{K}$. The simultaneous eigenstates $\ket{s}$ of a complete set of commuting partial observables are called \emph{quantum events}. In particular, $\hat{q}^a$ and $\hat{p}_a$ are partial observables acting respectively as multiplicative and differential operators on $\psi(q^a)$; i.e., $\hat{q}^a\psi(q^a)=q^a\psi(q^a)$ and $\hat{p}_a\psi(q^a)=-i\hbar\,\partial\psi(q^a)/\partial q^a$. Their eigenstates $\ket{q^a}$ (defined as $\hat{q}^a\ket{q^a}=q^a\ket{q^a}$) and $\ket{p_a}$ (defined as $\hat{p}_a\ket{p_a}=p_a\ket{p_a}$) are both quantum events.
\item[Dynamics.] Dynamics is defined by a self-adjoint operator $\hat{H}$ in $\mathcal{K}$, called \emph{relativistic Hamiltonian}. The operator from $\mathcal{S}$ to $\mathcal{S}'$ schematically defined as
\be\label{eqn:projector}
\hat{P}=\int d\tau\, e^{-i\tau\hat{H}}
\ee
is called the ``projector''.\footnote{The integration range depends on the system: It is over a compact space if the spectrum of $\hat{H}$ is discrete and over a noncompact space if the spectrum is continuous. The operator $\hat{P}$ is a projector in the precise sense only if zero is a part of the discrete spectrum of $\hat{H}$.} The matrix elements
\be\label{eqn:W s s'}
W(s,s'):=\opelem{s}{\hat{P}}{s'}
\ee
are called \emph{transition amplitudes}, which encode the entire physics of the dynamics.
\item[Physical states.] A \emph{physical state} is a solution of the quantum Hamiltonian constraint equation:
\be\label{eqn:qm constraint}
\hat{H}\ket{\psi}=0,
\ee
which is the quantum counterpart of \eqnref{eqn:cl constraint}. Given an arbitrary kinematical state $\ket{\psi_\alpha}\in\mathcal{S}$, we can associate an element $\obra{\Psi_{\psi_\alpha}}\in\mathcal{S}'$, defined by its (linear) action on arbitrary states $\ket{\psi_\beta}\in\mathcal{S}$ as
\be
\action{\Psi_{\psi_\alpha}}{\psi_\beta}=\int d\tau\, \inner{e^{i\tau\hat{H}}\psi_\alpha}{\psi_\beta}
\equiv {\opelem{\psi_\alpha}{\hat{P}}{\psi_\beta}},
\ee
such that $\obra{\Psi_{\psi_\alpha}}$ is a physical state, namely, a solution to \eqnref{eqn:qm constraint}. The solution space is endowed with the Hermitian inner product:
\be\label{eqn:physical inner product}
\oinner{\Psi_{\psi_\alpha}}{\Psi_{\psi_\beta}}
:=\action{\Psi_{\psi_\alpha}}{\psi_\beta},
\ee
which is called the \emph{physical inner product}. The Cauchy completion of the solution space with respect to the physical inner product $\oinner{\cdot}{\cdot}$ is called the \emph{physical Hilbert space} and denoted as $\mathcal{H}$.
\item[Measurements and collapse.] If the measurement corresponding to a partial observable $\hat{A}$ is performed, the outcome takes the value of one of the eigenvalues of $\hat{A}$ if the spectrum of $\hat{A}$ is discrete, or in a small spectral region (with uncertainty) if the spectrum is continuous. Measuring a complete set of partial observables $\hat{A}_i$ \emph{simultaneously} is called a \emph{complete measurement} at an ``instance'',\footnote{In the timeless language, a complete measurement is said to be conducted at some ``instance'', not at some ``instant''.} the outcome of which gives rise to a kinematical state $\ket{\psi_\alpha}$ (which is a simultaneous eigenstate of $\hat{A}_i$ if the spectra of $\hat{A}_i$ are discrete). The physical state is said to be \emph{collapsed} to $\oket{\Psi_{\psi_\alpha}}$ by the complete measurement.
\item[Prediction in terms of probability.] If at one instance a complete measurement yields $\ket{\psi_\alpha}$, the probability that at another instance another complete measurement yields $\ket{\psi_\beta}$ is given by
\be
\mathcal{P}_{\beta\alpha}=
\left\vert
\frac{\oinner{\Psi_{\psi_\beta}}{\Psi_{\psi_\alpha}}}
{\sqrt{\oinner{\Psi_{\psi_\beta}}{\Psi_{\psi_\beta}}}\
\sqrt{\rule{0mm}{3.6mm}\oinner{\Psi_{\psi_\alpha}}{\Psi_{\psi_\alpha}}}}
\right\vert^2
=\left\vert
\frac{W[\psi_\beta,\psi_\alpha]}
{\sqrt{\rule{0mm}{3.5mm}W[\psi_\beta,\psi_\beta]}\ \sqrt{W[\psi_\alpha,\psi_\alpha]}}
\right\vert^2,
\ee
where
\be
W[\psi_\beta,\psi_\alpha]:=\opelem{\psi_\beta}{\hat{P}}{\psi_\alpha}
=\int ds \int ds'\ \overline{\psi_\beta(s)}\ W(s,s') \, \psi_\alpha(s').
\ee
In particular, if the quantum events $s$ make up a discrete spectrum, the probability of the quantum event $s$ given the quantum event $s'$ is
\be\label{eqn:P s s'}
\mathcal{P}_{ss'}
=\left\vert
\frac{W(s,s')}{\sqrt{W(s,s)}\ \sqrt{W(s',s')}}
\right\vert^2.
\ee
If the spectrum is continuous, the probability of a quantum event in a small spectral region $R$ given a quantum event in a small spectral region $R'$ is
\be\label{eqn:P R R'}
\mathcal{P}_{RR'}
=\left\vert
\frac{W(R,R')}{\sqrt{W(R,R)}\ \sqrt{W(R',R')}}
\right\vert^2,
\ee
where
\be\label{eqn:W R R'}
W(R,R'):=\int_R ds \int_{R'} ds'\ W(s,s').
\ee
\end{description}
It should be noted that, unlike the classical theory, the relativistic quantum mechanics described above is \emph{not} equivalent to the conventional quantum theory, even if the system is deparametrizable. In conventional quantum mechanics, the time $t$ is treated as a parameter and not quantized as an operator. Thus, the measurement of $t$ is presumed to have zero uncertainty ($\Delta t=0$). In relativistic quantum mechanics, $t$ is on the same footing as other observables $q^i$ and the measurement of $t$ will yield nonzero $\Delta t$. For a simple harmonic oscillator governed by the relativistic Hamiltonian $H=p_t+H_0=p_t+{p_\alpha^2}/{2m}+{m\omega^2\alpha^2}/{2}$, it was shown in \cite{Reisenberger:2001pk,Marolf:2002ve} that, if $\Delta t\ll m\Delta\alpha^2/\hbar$, we can ignore the temporal resolution $\Delta t$ and idealize the measurement of $t$ as \emph{instantaneous}, and the conventional nonrelativistic quantum theory is recovered as a good approximation of the relativistic quantum mechanics.
In the following, we will first take the timeless double pendulum as an example to see how the above scheme is carried out, and then elaborate on some intricate issues.
\subsection{Example: Timeless double pendulum}\label{sec:timeless double pendulum qm}
Take the timeless double pendulum introduced in \secref{sec:timeless double pendulum} as an example. The kinematical Hilbert space is $\mathcal{K}=L^2(\mathbb{R}^2,dadb)$, and the quantum Hamiltonian equation reads as
\be\label{eqn:qm constraint for double pendulum}
\hat{H}\psi(a,b)=
\frac{1}{2}\left(
-\hbar^2\frac{\partial^2}{\partial a^2}-\hbar^2\frac{\partial^2}{\partial b^2}
+a^2+b^2-2E
\right)\psi(a,b)=0.
\ee
Since $\hat{H}=\hat{H}_a+\hat{H}_b-E$, where $\hat{H}_a$ (resp. $\hat{H}_b$) is the nonrelativistic Hamiltonian for a simple harmonic oscillator in the variable $a$ (resp. $b$), this equation can be easily solved by using the basis that diagonalizes $\hat{H}_a$ and $\hat{H}_b$. Let
\be
\psi_n(a)\equiv\inner{a}{n}=\frac{1}{\sqrt{n!}}\,H_n(a)\,e^{-a^2/2\hbar}
\ee
be the $n$th eigenfunction (up to normalization) for the harmonic oscillator with eigenvalue $E_n=\hbar(n+1/2)$, where $H_n(a)$ is the $n$th
Hermite polynomial. Clearly, the function
\be
\psi_{n_a,n_b}(a,b):=\psi_{n_a}(a)\,\psi_{n_b}(b)\equiv\inner{a,b}{n_a,n_b}
\ee
solves \eqnref{eqn:qm constraint for double pendulum} if
\be
\hbar\left(n_a+n_b+1\right)=E,
\ee
which implies that the quantum theory exists only if $E=\hbar(N+1)$ with $N\in\mathbb{Z}^+\cup\{0\}$.
Consequently, for a given $N$, the general solution of \eqnref{eqn:qm constraint for double pendulum} is given by
\be
\Psi(a,b)=\sum_{n=0}^{N} c_n\, \psi_n(a)\, \psi_{N-n}(b),
\ee
and thus the physical Hilbert space $\mathcal{H}$ is an $(N+1)$-dimensional proper subspace of $\mathcal{K}$ spanned by an orthonormal basis $\{\ket{n,N-n}\}_{n=0,\cdots,N}$.
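As a numerical cross-check (an illustrative sketch in units with $\hbar=1$), one can verify by finite differences that $\psi_n(a)\propto H_n(a)\,e^{-a^2/2}$ satisfies the oscillator eigenvalue equation $-\psi_n''+a^2\psi_n=(2n+1)\psi_n$, i.e. $\hat{H}_a\psi_n=E_n\psi_n$:

```python
import math

# Illustrative finite-difference check (hbar = 1): psi_n = H_n(a) exp(-a^2/2)
# should satisfy  -psi'' + a^2 psi = (2n + 1) psi.

def hermite(n, x):
    # Physicists' Hermite polynomials via the recurrence
    # H_{k+1} = 2x H_k - 2k H_{k-1}.
    h0, h1 = 1.0, 2.0 * x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, 2.0 * x * h1 - 2.0 * k * h0
    return h1

def psi(n, a):
    return hermite(n, a) * math.exp(-a * a / 2)

h = 1e-4
ok = True
for n in range(4):
    for a in (-1.3, 0.4, 2.1):
        # central second difference approximates psi''
        lap = (psi(n, a + h) - 2 * psi(n, a) + psi(n, a - h)) / h**2
        residual = -lap + a * a * psi(n, a) - (2 * n + 1) * psi(n, a)
        ok = ok and abs(residual) < 1e-4
print(ok)
```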
The projector $\hat{P}:\mathcal{S}\rightarrow\mathcal{H}$ is a true projector as $\mathcal{H}$ is a proper subspace of $\mathcal{K}$ for the case that the spectrum of $\hat{H}$ is discrete. Obviously, $\hat{P}$ is given by
\be
\hat{P}=\sum_{n=0}^N \ket{n,N-n}\bra{n,N-n},
\ee
which can be obtained (up to an irrelevant overall factor) from \eqnref{eqn:projector}:
\ba
\int_0^{2\pi/\hbar}d\tau\, e^{-i\tau\hat{H}}
&\propto& \frac{1}{2\pi}\int_0^{2\pi}d\tau\sum_{n_a,n_b}\ket{n_a,n_b}
\,e^{-i\tau(n_a+n_b+1-E/\hbar)}\bra{n_a,n_b}\nn\\
&=&\sum_{n_a,n_b}\delta_{n_a+n_b+1,E/\hbar}\ket{n_a,n_b}\bra{n_a,n_b}
=\hat{P}.
\ea
Here, the integration range is so chosen because $\exp(-i\tau\hat{H})$ is periodic in $\tau$ with period $2\pi/\hbar$ if $E=\hbar(N+1)$.
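The Kronecker delta produced by the $\tau$ integration can be illustrated numerically: for integer $m$, $\frac{1}{2\pi}\int_0^{2\pi}e^{-i\tau m}\,d\tau=\delta_{m,0}$. A minimal sketch:

```python
import cmath
import math

# Illustrative check: (1/2pi) \int_0^{2pi} e^{-i tau m} d tau = delta_{m,0}
# for integer m, evaluated with the midpoint rule (spectrally exact here).
def averaged_phase(m, steps=2000):
    dtau = 2 * math.pi / steps
    total = sum(cmath.exp(-1j * m * (k + 0.5) * dtau) for k in range(steps))
    return total * dtau / (2 * math.pi)

values = {m: averaged_phase(m) for m in range(-3, 4)}
print(all(abs(values[m] - (1.0 if m == 0 else 0.0)) < 1e-12 for m in values))
```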
The transition amplitudes are given by
\ba
W(a,b,a',b')&:=&\opelem{a,b}{\hat{P}}{a',b'}
=\sum_{n=0}^N \inner{a,b}{n,N-n}\inner{n,N-n}{a',b'}\nn\\
&=&\sum_{n=0}^N \frac{e^{-(a^2+b^2+a'^2+b'^2)/2\hbar}}{{n!(N-n)!}}\,
H_n(a)\,H_{N-n}(b)\,H_n(a')\,H_{N-n}(b')\,,
\ea
which determines, via \eqnref{eqn:P R R'}, the probability density of measuring $(a,b)$, given $(a',b')$ measured at another instance. Furthermore, the transition amplitude between the quantum events $(n_a,n_b)$ and $(n'_a,n'_b)$ is
\be
W[\psi_{n_a,n_b},\psi_{n'_a,n'_b}]:=\opelem{n_a,n_b}{\hat{P}}{n'_a,n'_b}
=\delta_{N,n_a+n_b}\delta_{n_a,n'_a}\delta_{n_b,n'_b}.
\ee
\subsection{More on the physical Hilbert space}\label{sec:physical Hilbert space}
The operator $\hat{P}\!: \mathcal{S} \rightarrow \mathcal{S}'$ maps an arbitrary element of $\mathcal{S}$ to its dual space $\mathcal{S}'$.
If zero is in the continuous spectrum of $\hat{H}$, $\hat{P}$ maps $\mathcal{S}$ to the larger space $\mathcal{S}'$ and thus is \emph{not} really a projector. In this case, the physical state $\obra{\Psi_{\psi_\alpha}}$ mapped from $\ket{\psi_\alpha}$ is a tempered distribution. $\hat{P}$ becomes a true projector only if zero is a part of the discrete spectrum of $\hat{H}$, as in the timeless double pendulum.
The construction in \eqnref{eqn:projector} is a special case of the \emph{group averaging} procedure \cite{Marolf:1995cn,Marolf:2000iq}, the idea of which is to average over all states along the gauge flow (generated by the constraint operator) to yield the physical solution which satisfies the constraint equation. In this special case, let $\ket{E}$ be the eigenstate of $\hat{H}$ with eigenvalue $E$; then schematically we have
\ba
\hat{H}\oket{\Psi_{\psi_\alpha}}&=&\int d\tau\, \hat{H} e^{-i\tau\hat{H}}\ket{\psi_\alpha}
=\int d\tau\! \int dE\, \hat{H} e^{-i\tau\hat{H}} \ket{E}\inner{E}{\psi_\alpha}\nn\\
&=&\int d\tau\! \int dE\, E\, e^{-i\tau E} \ket{E}\inner{E}{\psi_\alpha}
=\int dE\, \delta(E)\, E\ket{E}\inner{E}{\psi_\alpha}=0,
\ea
thus showing that $\hat{P}$ maps an arbitrary kinematical state $\ket{\psi_\alpha}$ to a physical state which satisfies the constraint equation \eqnref{eqn:qm constraint}. Furthermore, it can be easily shown that $\action{\Psi_{\psi_\alpha}}{\psi_\beta} =\action{\Psi_{\psi_\alpha}}{\psi_\beta'}$ if $\obra{\Psi_{\psi_\beta}}=\obra{\Psi_{\psi_\beta'}}$, and therefore the physical inner product in \eqnref{eqn:physical inner product} is well defined.
If there are multiple constraints, we have to solve the multiple constraint equations simultaneously:
\be
\hat{H}^i\ket{\psi}=0,
\qquad \text{for}\ i=1,\cdots,k.
\ee
In the simplest case that $[\hat{H}^i,\hat{H}^j]=0$ for all $i,j$, the projector can be easily constructed via
\be\label{eqn:projector multiple constraints}
\hat{P}=\int d\tau_1\cdots\int d\tau_k\, e^{-i\tau_i\hat{H}^i}
\ee
as a direct extension of \eqnref{eqn:projector}. In general, however, $\hat{H}^i$ do not commute, as classically the Poisson brackets $\{H^i,H^j\}$ vanish only \emph{weakly} [see \eqnref{eqn:first class} and \eqnref{eqn:first class 2}].
In the case that $\hat{H}^i$ do not commute but form a closed Lie algebra, i.e.,
\be
[\hat{H}^i,\hat{H}^j]={f^{ij}}_k\,\hat{H}^k
\ee
with ${f^{ij}}_k$ being constants, the exponentials of $\hat{H}^i$ form a Lie group $G$ and the physical state can be obtained by group averaging:
\be\label{eqn:group averaging}
\oket{\Psi_{\psi_\alpha}}=\int_G d\mu(\hat{U})\,\hat{U}\ket{\psi_\alpha},
\ee
where $d\mu$ is the Haar measure. It follows
\ba
\hat{U}'\oket{\Psi_{\psi_\alpha}}
&=&\int_G d\mu(\hat{U})\,\hat{U}'\hat{U}\ket{\psi_\alpha}
=\int_G d\mu(\hat{U}'^{-1}\hat{U}'')\,\hat{U}''\ket{\psi_\alpha}\nn\\
&=&\int_G d\mu(\hat{U}'')\,\hat{U}''\ket{\psi_\alpha}=\oket{\Psi_{\psi_\alpha}}
\ea
for any $\hat{U}'\in G$. The fact that $\oket{\Psi_{\psi_\alpha}}$ is invariant under any $\hat{U}'\in G$ implies that it is annihilated by the generators of $G$, namely, $\hat{H}^i\oket{\Psi_{\psi_\alpha}}=0$.
Furthermore, the physical inner product in \eqnref{eqn:physical inner product} is again well defined. (See \cite{Marolf:2000iq} for more details and subtleties.) The averaging in \eqnref{eqn:projector multiple constraints} is indeed a special case of \eqnref{eqn:group averaging}.
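The mechanism of group averaging can be illustrated with a finite-group toy example (hypothetical, not drawn from the references): averaging the rotations $R_z(2\pi k/n)$ over the cyclic subgroup $\mathbb{Z}_n\subset SO(3)$ yields the projector onto the invariant subspace (the $z$-axis), which is idempotent and invariant under every group element, in analogy with \eqnref{eqn:group averaging}:

```python
import math

# Toy illustration of group averaging: average rotations about the z-axis
# over the cyclic subgroup Z_n of SO(3). The average P should be the
# projector onto the rotation-invariant subspace (the z-axis): P^2 = P
# and R P = P for every group element R.

def rot_z(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

n = 8
group = [rot_z(2 * math.pi * k / n) for k in range(n)]
P = [[sum(R[i][j] for R in group) / n for j in range(3)] for i in range(3)]

def close(A, B, tol=1e-12):
    return all(abs(A[i][j] - B[i][j]) < tol for i in range(3) for j in range(3))

idempotent = close(matmul(P, P), P)
invariant = all(close(matmul(R, P), P) for R in group)
print(idempotent, invariant)
# P projects onto span{e_z}: numerically P equals diag(0, 0, 1) up to rounding.
```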
Generically, ${f^{ij}}_k$ are functions of $q^a$ and $p_a$ in \eqnref{eqn:first class 2}, and, correspondingly, $\hat{H}^i$ do not form a closed Lie algebra in the kinematical space $\mathcal{K}$. In this case, it is much more difficult to obtain the physical solutions and to construct the quantum theory which is free of quantum anomalies (see \cite{Thiemann:1996ay} for the issues of anomalies).
\subsection{Remarks on measurements and collapse}\label{sec:measurements and collapse}
Imagine that a quantum system is measured by Alice and Bob at two different instances, yielding two outcomes corresponding to $\ket{\psi_\alpha}$ and $\ket{\psi_\beta}$, respectively. From the perspective of Alice, the physical state is collapsed to $\oket{\Psi_{\psi_\alpha}}$ by her measurement and Bob's measurement affirms her prediction. Bob, on the other hand, regards the physical state to be collapsed to $\oket{\Psi_{\psi_\beta}}$ by his measurement and predicts what Alice can measure. The striking puzzle arises: Who, Alice or Bob, causes the physical state to collapse in the first place?
In the timeless framework, it turns out to be an invalid question to ask who collapses the physical state \emph{first}, since we cannot make any sense of time. The seeming puzzle is analogous to the Einstein-Podolsky-Rosen (EPR) paradox, in which a pair of entangled particles are measured separately by Alice and Bob. In the context of special relativity, if the two measurements are conducted at two spacetime events which are spacelike separated, the time-ordering of the two events can flip under a Lorentz boost and thus has no physical significance. Alice and Bob can both claim that the entangled state is collapsed by her/his measurement and thus have different knowledge about what the physical state should be, yet the predictions by Alice and Bob are consistent with each other. In our case, the measurement at an instance is analogous to the measurement on a single particle of the EPR pair; the kinematical state is analogous to the (local) state of a single particle; and the physical state is analogous to the (global) entangled state of the EPR pair. A complete knowledge (usually from measurement) about the local state will collapse the global state at once through the entanglement, which is analogous to the dynamics (or say, the transition amplitudes) in our case. Consistency also holds in our case as $\action{\Psi_{\psi_\alpha}}{\psi_\beta}= \overline{\action{\Psi_{\psi_\beta}}{\psi_\alpha}}$\,. (See \cite{Laudisa} for more on the EPR paradox in the relational interpretation of quantum mechanics and also Section 5.6 of \cite{Rovelli:book} for more on the philosophical issues.)
As a side remark, exploiting further the close analogy between the EPR pair and the timeless formalism of relativistic quantum mechanics, one might be able to conceive an analog of Bell's inequality, which would help to elaborate on the interpretational and conceptual issues of relativistic quantum mechanics at the level of thought experiments.
\section{Timeless path integral}\label{sec:timeless path integral}
The canonical formalism for relativistic quantum mechanics is described in \secref{sec:quantum theory}. All information of the quantum dynamics is encoded by the transition amplitudes \eqnref{eqn:W s s'}. In particular, by choosing $\ket{s}=\ket{q^a}$ and $\ket{s'}=\ket{q'^a}$, all physics can be obtained from the following transition amplitudes
\ba
W(q^a,q'^a)=\opelem{q^a}{\hat{P}}{q'^a}
\sim\int d\tau \, \opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}.
\ea
From now on, we will use the notation $\sim$ to denote the equality up to an overall constant factor which has no physical significance, as any overall constant is canceled out in the numerator and denominator in \eqnref{eqn:P s s'}.
As a special case of group averaging, the integration range of $\tau$ is taken to be a compact interval if $\exp(-i\tau\hat{H})$ forms a compact Lie group $U(1)$ (timeless double pendulum is an example) and it is taken to be $(-\infty,\infty)$ if the group of $\exp(-i\tau\hat{H})$ is noncompact. For the case that $\exp(-i\tau\hat{H})$ gives $U(1)$, we can unwrap $U(1)$ to its covering space $\mathbb{R}$ and correspondingly integrate $\tau$ over $(-\infty,\infty)$. The unwrapping only gives rise to an overall multiplicative factor (which is divergent if not properly regulated). Therefore, in any case, up to an irrelevant overall factor, transition amplitudes can be computed by
\be\label{eqn:W q q'}
W(q^a,q'^a)
\sim \int_{-\infty}^\infty d\tau \, \opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a},
\ee
where $\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}$ can be thought of as the transition amplitude for a kinematical state $\ket{q'^a}$ to ``evolve'' to the state $\ket{q^a}$ by the ``parameter time'' $\tau$. Equation \eqnref{eqn:W q q'} sums over $\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}$ for all possible values of $\tau$, suggesting that $W(q^a,q'^a)$ is intrinsically timeless, as the parameter time $\tau$ has no essential physical significance.
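To make the unwrapping of $U(1)$ mentioned above explicit, suppose (as assumed in the $U(1)$ case) that $\exp(-i\tau\hat{H})$ is periodic with some period $T$, i.e. $e^{-iT\hat{H}}=\hat{1}$. Then
\be
\int_{-\infty}^{\infty}d\tau\, e^{-i\tau\hat{H}}
=\sum_{n\in\mathbb{Z}}\int_{nT}^{(n+1)T}d\tau\, e^{-i\tau\hat{H}}
=\Big(\sum_{n\in\mathbb{Z}}1\Big)\int_{0}^{T}d\tau\, e^{-i\tau\hat{H}},
\ee
since $e^{-i(\tau+nT)\hat{H}}=e^{-i\tau\hat{H}}$; the divergent factor $\sum_{n\in\mathbb{Z}}1$ is precisely the overall multiplicative constant that has to be tamed by regularization.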
Rigorously, the integration should be regularized via
\be\label{eqn:regularization}
W(q^a,q'^a)\sim
\lim_{M\rightarrow\infty}
\frac{\int_{-M}^M d\tau \opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}}
{\int_{-M}^M d\tau},
\ee
where a cut-off $M$ is introduced to regulate the integral and to keep the irrelevant overall factor finite. As we will see, the variable $\tau$ corresponds to the parametrization of curves in the path integral and integrating over all $\tau$ indicates that the parametrization of curves has no physical significance. The above regularization scheme is physically well justified, as it cuts off the curves in the path integral which are ``too wild'' (noncompact curves), with the endpoints $q'^a$ and $q^a$ fixed.
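As a concrete illustration of how the regularization works, suppose (for this example only) that $\hat{H}$ has a discrete spectrum with $\hat{H}\ket{E}=E\ket{E}$. Acting on an eigenstate, the regulated average gives
\be
\lim_{M\rightarrow\infty}
\frac{\int_{-M}^{M}d\tau\, e^{-i\tau E}}{\int_{-M}^{M}d\tau}
=\lim_{M\rightarrow\infty}\frac{\sin(ME)}{ME}
=\left\{
\begin{array}{ll}
1, & E=0,\\[1mm]
0, & E\neq0,
\end{array}
\right.
\ee
so the regulated operator indeed acts as a (finite) projector onto the kernel of $\hat{H}$; for a continuous spectrum the same limit yields a delta function in $E$ up to normalization.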
In the following, starting from \eqnref{eqn:W q q'}, we will first derive the timeless path integral for the case of a single Hamiltonian constraint and then investigate it in more detail. In the end, we will study the path integral with multiple relativistic constraints.
\subsection{General structure}\label{sec:path integral general structure}
For a given $\tau$, let us introduce a \emph{parametrization sequence}: $\tau_0=0,\, \tau_1,\,\tau_2,\cdots\!,\tau_{N-1},\,\tau_N=\tau$ with $\tau_i\in\mathbb{R}$, and define $\Delta\tau_n:=\tau_n-\tau_{n-1}$. The conditions on the endpoints ($\tau_0=0$ and $\tau_N=\tau$) correspond to $\sum_{n=1}^N\Delta\tau_n=\tau$. The \emph{mesh} of the parametrization sequence is defined to be $\max_{n=1,\cdots,N}\{\left|\Delta\tau_n\right|\}$. The parametrization sequence is said to be fine enough if its mesh is smaller than a given small number $\epsilon$.\footnote{\label{foot:parametrization}Let $X$ be a topological space and $s\in[0,1]$. A continuous map $\gamma:\,s\mapsto\gamma(s)\in X$ is called a \emph{path} with an initial point $\gamma(0)=x_0$ and an end point $\gamma(1)=x_1$. The image of $\gamma$ is called a \emph{curve}, which can be reparameterized with respect to a new variable $\tau$ as $\gamma:\,\tau\mapsto\gamma(\tau)$ by introducing an arbitrary continuous function $\tau:\,s\mapsto\tau(s)\in\mathbb{R}$. The parametrization sequence $\tau_0=0,\, \tau_1,\,\tau_2,\cdots\!,\tau_{N-1},\,\tau_N=\tau$ can be viewed as a discrete approximation for the reparametrization function $\tau(s)$ with $\tau(s=0)=0$ and $\tau(s=1)=\tau$ if we identify $\tau_n=\tau(n/N)$. For the case that $\tau(s)$ is injective, the parametrization sequence is ordered (i.e. $0=\tau_0<\tau_1<\cdots<\tau_{N-1}<\tau_N=\tau$ and $\Delta\tau_n>0$ if $\tau>0$) and called a \emph{partition} of the interval $[0,\tau]$, which is used to define the Riemann integral as the continuous limit: $\int_0^\tau f(\tau')d\tau'=\lim_{\mathrm{mesh}\rightarrow0}\sum_{n=1}^{N}f(\tau_{n-1})\Delta\tau_n$. In the timeless formulation of relativistic mechanics, a dynamical solution is an \emph{unparameterized} curve in $\Omega$ and its parametrization has no physical significance. In order to exploit the timeless feature, we should keep the parametrizations generic and not restrict ourselves to injective ones.
Correspondingly, the partition is generalized to a parametrization sequence and the Riemann integral is generalized to the Riemann-Stieltjes integral as $\int_0^1 f(s)d\tau(s)=\lim_{\mathrm{mesh}\rightarrow0}\sum_{n=1}^{N}f((n-1)/N)\Delta\tau_n$, which is well defined even if $\tau(s)$ is not injective.}
As $\tau$ is fixed now, identifying $q^a=q_N^a$ and $q'^a=q_0^a$, and using $\sum_{n=1}^N\Delta\tau_n=\tau$, we can rewrite $\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}$ as
\ba\label{eqn:kernel}
&&\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}
\equiv \opelem{q_N^a}{e^{-i\Delta\tau_N\hat{H}}\, e^{-i\Delta\tau_{N-1}\hat{H}} \cdots e^{-i\Delta\tau_1\hat{H}}}{q_0^a}\nn\\
&=&
\left(\prod_{n=1}^{N-1}\int d^dq_n^a\right)
\opelem{q_N^a}{e^{-i\Delta\tau_N \hat{H}}}{q_{N-1}^a}
\opelem{q_{N-1}^a}{e^{-i\Delta\tau_{N-1} \hat{H}}}{q_{N-2}^a}
\cdots
\opelem{q_1^a}{e^{-i\Delta\tau_1 \hat{H}}}{q_0^a},
\ea
where we have inserted $N-1$ times the completeness relation
\be
\int d^dq^a \, \ket{q^a}\bra{q^a}
:= \int dq^1\cdots dq^d \, \ket{q^1,\cdots,q^d}\bra{q^1,\cdots,q^d}\,.
\ee
For a given arbitrary small number $\epsilon$, by increasing $N$, we can always make the parameter sequence fine enough such that $\mathrm{mesh}\{\tau_i\}<|\tau|/N<\epsilon$.\footnote{More rigorously, for a given $\epsilon$, the large number $N$ should be chosen to satisfy $\mathrm{mesh}\{\tau_i\}<|\tau|/N<M/N<\epsilon$, where $M$ is the cut-off regulator defined in \eqnref{eqn:regularization}, so that the $\mathcal{O}(\epsilon^2)$ term in \eqnref{eqn:approximation} can be dropped for any value of $|\tau|$. In the end, we have to integrate \eqnref{eqn:discrete path integral for kernel} over all possible values of $\tau$ to obtain $W(q^a,q'^a)$, and the regularization is essential to keep the $\mathcal{O}(\epsilon^2)$ terms under control for arbitrary values of $\tau$.}
Consequently, we can approximate each $\opelem{q_n^a}{e^{-i\Delta\tau_n \hat{H}}}{q_{n-1}^a}$ to the first order in $\epsilon$ as
\be\label{eqn:step evolution}
\opelem{q_{n+1}^a}{e^{-i\Delta\tau_{n+1} \hat{H}}}{q_{n}^a}
=\opelem{q_{n+1}^a}{1-i\Delta\tau_{n+1}\hat{H}(\hat{q}^a,\hat{p}_a)}{q_{n}^a}
+\mathcal{O}(\epsilon^2).
\ee
For the generic case that the Hamiltonian operator $\hat{H}$ is a polynomial of $\hat{q}^a$ and $\hat{p}_a$ and is \emph{Weyl ordered}, with the use of the completeness relation for the momenta
\be
\int \frac{d^dp_a}{(2\pi\hbar)^d} \, \ket{p_a}\bra{p_a}
:= \int \frac{dp_1\cdots dp_d}{(2\pi\hbar)^d}
\, \ket{p_1,\cdots,p_d}\bra{p_1,\cdots,p_d},
\ee
it can be shown that
\be\label{eqn:q H q}
\opelem{q^a}{\hat{H}(\hat{q}^a,\hat{p}_a)}{q'^a}
=\int\frac{d^dp_a}{(2\pi\hbar)^d}\,
\exp\left[\frac{i}{\hbar}\,p_a(q^a-q'^a)\right]
H\left(\frac{q^a+q'^a}{2},p_a\right).
\ee
(See Exercise~11.2 in \cite{Greiner:book} for the proof.)
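As a simple consistency check of \eqnref{eqn:q H q} in one dimension ($d=1$), take the Weyl-ordered operator $\hat{H}=\frac{1}{2}(\hat{q}\hat{p}+\hat{p}\hat{q})$, whose classical counterpart is $H(q,p)=qp$. Directly,
\be
\opelem{q}{\tfrac{1}{2}(\hat{q}\hat{p}+\hat{p}\hat{q})}{q'}
=\frac{q+q'}{2}\,\opelem{q}{\hat{p}}{q'}
=\int\frac{dp}{2\pi\hbar}\,
\exp\left[\frac{i}{\hbar}\,p(q-q')\right]
\frac{q+q'}{2}\,p,
\ee
in agreement with \eqnref{eqn:q H q} evaluated with $H\left(\frac{q+q'}{2},p\right)=\frac{q+q'}{2}\,p$.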
Applying \eqnref{eqn:q H q} to \eqnref{eqn:step evolution}, we have
\ba\label{eqn:approximation}
\opelem{q_{n+1}^a}{e^{-i\Delta\tau_{n+1} \hat{H}}}{q_{n}^a}
&=&\int\frac{d^dp_{na}}{(2\pi\hbar)^d}\
e^{ip_{na}(q_{n+1}^a-q_n^a)/\hbar}
\left[1-i\Delta\tau_{n+1}H\!\left((q_{n+1}^a+q_n^a)/2,p_{na}\right)\right]+\mathcal{O}(\epsilon^2)\nn\\
&=&\int\frac{d^dp_{na}}{(2\pi\hbar)^d}\
e^{ip_{na}\Delta q_n^a/\hbar}\
e^{-i\Delta\tau_{n+1}H(\bar{q}_n^a,\,p_{na})}+\mathcal{O}(\epsilon^2),
\ea
where we define $\bar{q}_n^a:=(q_{n+1}^a+q_n^a)/2$ and $\Delta q_n^a:=q_{n+1}^a-q_n^a$.
Making the parametrization sequence finer and finer (by decreasing $\epsilon$ or equivalently by increasing $N$) and at the end going to the limit $\epsilon\rightarrow0$ or $N\rightarrow\infty$, we can cast \eqnref{eqn:kernel} as
\ba\label{eqn:discrete path integral for kernel}
\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}
&=&\lim_{N\rightarrow\infty}
\left(\prod_{n=1}^{N-1}\int d^dq_n^a\right)
\left(\prod_{n=0}^{N-1}\int\! \frac{d^dp_{na}}{(2\pi\hbar)^d}\right)
\exp\left(\frac{i}{\hbar}\sum_{n=0}^{N-1}p_{na}\Delta q_n^a\right)\nn\\
&&\quad\times
\exp\left(-i\sum_{n=0}^{N-1}\Delta\tau_{n+1}H(\bar{q}_n^a,p_{na})\right).
\ea
In the limit $N\rightarrow\infty$, the points $q_n$ and $p_n$ can be viewed as the sampled points of a continuous curve in $\Omega=T^*\mathcal{C}$ given by $\tilde{\gamma}(\tau')=(q^a(\tau'),p_a(\tau'))$, which is parameterized by $\tau'$ and whose endpoints projected to $\mathcal{C}$ are fixed by $q^a(\tau'=0)=q'^a$ and $q^a(\tau'=\tau)=q^a$. That is, $q_n$ and $p_n$ are the sampled points of $\tilde{\gamma}$ as $q_n^a=q^a(\tau_n)$ and $p_{na}=p_a(\tau_n)$. In the treatment of functional integrals, it is customary to introduce special notations for path integrals:
\begin{subequations}
\ba
\prod_{n=1}^{N-1}\int d^dq_n^a \quad &\rightarrow& \quad \int\mathcal{D}q^a,\\
\prod_{n=0}^{N-1}\int\! \frac{d^dp_{na}}{(2\pi\hbar)^d} \quad &\rightarrow& \quad \int\mathcal{D}p_a.
\ea
\end{subequations}
Meanwhile, in the continuous limit ($N\rightarrow\infty$), the finite sums appearing in the exponents in \eqnref{eqn:discrete path integral for kernel} also converge to the integrals:
\be\label{eqn:int p dq}
\frac{i}{\hbar}\sum_{n=0}^{N-1}p_{na}\Delta q_n^a
\quad\rightarrow\quad
\frac{i}{\hbar}\int_{\tilde{\gamma}} p_adq^a
\equiv\frac{i}{\hbar}\int_{\tilde{\gamma}} \left(p_a\frac{dq^a}{d\tau'}\right) d\tau'
\ee
and
\be\label{eqn:int H dtau}
-i\sum_{n=0}^{N-1}\Delta\tau_{n+1}H(\bar{q}_n^a,\,p_{na})
\quad\rightarrow\quad
-i\int_{\tilde{\gamma}} H(q^a(\tau'),p_a(\tau'))\,d\tau'.
\ee
Note that the continuous limit above is defined via the Riemann-Stieltjes integral as an extension of the Riemann integral (see \footref{foot:parametrization}).
With the new notations, \eqnref{eqn:discrete path integral for kernel} can be written in a concise form:
\be\label{eqn:continuous path integral for kernel}
\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}=
\int\mathcal{D}q^a \int\mathcal{D}p_a\
\exp\left(\frac{i}{\hbar}\int_{\tilde{\gamma}}p_adq^a\right)
\exp\left(-i\int_{\tilde{\gamma}} H(q^a(\tau'),p_a(\tau'))\,d\tau'\right).
\ee
It is remarkable to note that (up to the factor $i/\hbar$) the continuous limit in \eqnref{eqn:int p dq} is simply the line integral of the one-form $\tilde{\theta}=p_adq^a$ over the curve $\tilde{\gamma}$, identical to \eqnref{eqn:cl action}, and is independent of the parametrization. On the other hand, the integral in \eqnref{eqn:int H dtau} depends on the parametrization. Thus, to compute $W(q^a,q'^a)$ in \eqnref{eqn:W q q'}, the integration over $\tau$ only hits the second exponential in \eqnref{eqn:continuous path integral for kernel} and the first exponential simply factors out. The integration of the second exponential over $\tau$ yields
\ba\label{eqn:delta function}
&&\int_{-\infty}^\infty d\tau \exp\left(-i\int_{\tilde{\gamma}} H(q^a(\tau'),p_a(\tau'))\,d\tau'\right)\nn\\
&=&\int_{-\infty}^\infty d\tau \exp\left(-i\tau\int_{\tilde{\gamma}} H(q^a(\bar{\tau}),p_a(\bar{\tau}))\,d\bar{\tau}\right)
=\delta\left(\int_{\tilde{\gamma}} H(q^a(\bar{\tau}),p_a(\bar{\tau}))\,d\bar{\tau}\right),
\ea
where we have rescaled the parametrization $\tau'$ to $\bar{\tau}=\tau'/\tau$ so that the endpoints now read as $q'^a=q^a(\bar{\tau}=0)$ and $q^a=q^a(\bar{\tau}=1)$.\footnote{\label{foot:removing regulator}In the expression of \eqnref{eqn:delta function}, we have removed the cut-off regulator (i.e. the limit $M\rightarrow\infty$ has been taken). More rigorously, we have removed the regulator \emph{before} the limit $N\rightarrow\infty$ is taken. The Dirac delta function in \eqnref{eqn:delta function} would have been a \emph{nascent delta function} if the regulator had not been removed.} The appearance of the Dirac delta function indicates that only the paths which satisfy $\int_{\tilde{\gamma}}H(\bar{\tau})d\bar{\tau}=0$ will contribute to the path integral for $W(q^a,q'^a)$. The condition $\int_{\tilde{\gamma}}H(\bar{\tau})d\bar{\tau}=0$ is, however, still not geometrical, since $\bar{\tau}$ can be further reparameterized to $\bar{\tau}'=\bar{\tau}'(\bar{\tau})$ to yield $\int_{\tilde{\gamma}}H(\bar{\tau}')d\bar{\tau}'\neq0$ even with the initial and final values fixed, i.e., $\bar{\tau}'(\bar{\tau}=0)=0$ and $\bar{\tau}'(\bar{\tau}=1)=1$. On the other hand, $W(q^a,q'^a)$ cast in \eqnref{eqn:W q q'} has no dependence on the parametrization whatsoever, which implies that, in the continuous limit, the contribution of a path $\tilde{\gamma}$ satisfying the condition $\int_{\tilde{\gamma}}H(\bar{\tau})d\bar{\tau}=0$ for a specific (rescaled) parametrization $\bar{\tau}$ is somehow exactly canceled by that of another path satisfying the same condition. In the end, only the paths restricted to the constraint surface (i.e., $\tilde{\gamma}\in\Sigma$, or equivalently $H(\tau')=0$ for all $\tau'$ along the path) contribute to the path integral for $W(q^a,q'^a)$. The constraint $\tilde{\gamma}\in\Sigma$ is now geometrical.
How the aforementioned cancelation takes place is obscure. To elucidate this point, we exploit the fact that $W(q^a,q'^a)$ is independent of the parametrization and play the trick by averaging over all possible parametrizations. That is, up to an overall factor of no physical significance, we can recast $W(q^a,q'^a)$ by summing over different parametrizations as follows:
\ba\label{eqn:path integral 1}
&&W(q^a,q'^a)
\sim \int d\tau \int \left[\mathcal{D}\Delta\tau\right]_{\sum\!\Delta\tau_n=\tau}
\opelem{q^a}{e^{-i\tau \hat{H}}}{q'^a}\\
&\sim& \int d\tau \int\left[\mathcal{D}\Delta\tau\right]_{\sum\!\Delta\tau_n=\tau} \int\mathcal{D}q^a \int\mathcal{D}p_a\
\exp\left(\frac{i}{\hbar}\int_{\tilde{\gamma}}p_adq^a\right)
\exp\left(-i\sum_{n=0}^{N-1}\Delta\tau_{n+1}H(\bar{q}_n^a,p_{na})\right),\nn
\ea
where the notation $\int\left[\mathcal{D}\Delta\tau\right]_{\sum\!\Delta\tau_n=\tau}$ is a shorthand for
\be
\underbrace{\int_{-\tau/N}^{\tau/N} d\Delta\tau_1 \int_{-\tau/N}^{\tau/N} d\Delta\tau_2 \cdots \int_{-\tau/N}^{\tau/N} d\Delta\tau_N}_{\sum_{n=1}^{N}\Delta\tau_n=\tau}
\quad\rightarrow\quad
\int\left[\mathcal{D}\Delta\tau\right]_{\sum\!\Delta\tau_n=\tau},
\ee
which sums over all fine enough (namely, $\mathrm{mesh}\{\tau_i\}<|\tau|/N$) parametrization sequences for a given $\tau$. It is easy to show that
\be
\int_{-\infty}^\infty d\tau
\underbrace{\int_{-\tau/N}^{\tau/N} d\Delta\tau_1 \int_{-\tau/N}^{\tau/N} d\Delta\tau_2 \cdots \int_{-\tau/N}^{\tau/N} d\Delta\tau_N}_{\sum_{n=1}^{N}\Delta\tau_n=\tau}
=\prod_{n=0}^{N-1}\int_{-\infty}^{\infty} d\Delta\tau_{n+1},
\ee
when the cut-off regulator $M$ is removed (also see \footref{foot:removing regulator}).
Consequently, for a given arbitrary parametrization $\tau'$, renaming the varying $\Delta\tau_n$ as $\Delta\tau_n=\hbar^{-1} N_n\Delta\tau'_n$, we can rewrite \eqnref{eqn:path integral 1} as
\ba\label{eqn:path integral 2}
&&W(q^a,q'^a)\\
&\sim& \int\mathcal{D}q^a \int\mathcal{D}p_a \int\mathcal{D}N\
\exp\left(\frac{i}{\hbar}\int_{\tilde{\gamma}}p_adq^a\right)
\exp\left(-\frac{i}{\hbar}\sum_{n=0}^{N-1}
\Delta\tau'_{n+1}N_{n+1}H(\bar{q}_n^a,p_{na})\right),\nn
\ea
where we introduce the notation
\be
\prod_{n=0}^{N-1}\int_{-\infty}^\infty dN_{n+1} \quad \rightarrow \quad \int\mathcal{D}N.
\ee
Again, in the continuous limit, the finite sum converges to the Riemann-Stieltjes integral:
\be
-\frac{i}{\hbar}\sum_{n=0}^{N-1}\Delta\tau'_{n+1}N_{n+1}H(\bar{q}_n^a,p_{na})
\quad\rightarrow\quad
-\frac{i}{\hbar}\int_{\tilde{\gamma}}
N(\tau')H(q^a(\tau'),p_a(\tau'))d\tau',
\ee
and \eqnref{eqn:path integral 2} can be neatly written as the \emph{path integral}:
\ba\label{eqn:path integral 3}
W(q^a,q'^a)
&\sim& \int\mathcal{D}q^a \int\mathcal{D}p_a \int\mathcal{D}N\
\exp\left[\frac{i}{\hbar}
\left(\int_{\tilde{\gamma}}p_adq^a
-\int_{\tilde{\gamma}}N(\tau')H d\tau'\right)
\right]\nn\\
&\equiv& \int\mathcal{D}q^a \int\mathcal{D}p_a \int\mathcal{D}N\
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}}\left(p_a\frac{dq^a}{d\tau'}
-N(\tau')H\right)d\tau'\right].
\ea
Integration over $N$ can be carried out to obtain the delta functional:
\be
\int\mathcal{D}N \exp\left(-\frac{i}{\hbar}\int N(\tau')H d\tau'\right)\sim\delta[H]
\equiv \lim_{N\rightarrow\infty}\prod_{n=0}^{N-1} \delta(H(\bar{q}^a_n,p_{na})),
\ee
and thus the path integral \eqnref{eqn:path integral 3} can be written in an alternative form as
\be\label{eqn:path integral 4}
W(q^a,q'^a)\sim
\int\mathcal{D}q^a \int\mathcal{D}p_a\ \delta[H]
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}}p_adq^a\right],
\ee
where insertion of the delta functional $\delta[H]$ confines the path to be in the constraint surface (i.e. $\tilde{\gamma}\in\Sigma$). Note that the phase in the exponent in \eqnref{eqn:path integral 4} is identical to the classical action defined in \eqnref{eqn:cl action} (divided by $\hbar$) and that in \eqnref{eqn:path integral 3} is identical to the classical action in \eqnref{eqn:cl action with N} with $k=1$. Therefore, each path in $\Sigma$ contributes with a phase, which is the classical action divided by $\hbar$.
The path integral formalism is intuitively appealing. It gives us an intuitive picture of the transition amplitudes: $W(q^a,q'^a)$ is described as the sum, with the weight $\exp(iS/\hbar)$ (where $S$ is the classical action of $\tilde{\gamma}$), over all arbitrary paths $\tilde{\gamma}$ which are restricted to $\Sigma$ and whose projection $\gamma$ to $\mathcal{C}$ connects $q'^a$ and $q^a$. None of the $q^a$ is restricted to be monotonic along the paths, and in this sense the formulation is called the \emph{timeless} path integral. The parametrization of the paths has no physical significance, as can be seen in the expression of \eqnref{eqn:path integral 4}, which is completely geometrical and independent of parametrizations. On the other hand, the continuum notation of \eqnref{eqn:path integral 3} is really a schematic for the discretized version:
\begin{subequations}
\ba
\label{eqn:path integral 5 a}
W(q^a,q'^a) &\sim& \lim_{N\rightarrow\infty}
\prod_{n=1}^{N-1}\int d^dq_n^a
\prod_{n=0}^{N-1}\int\! \frac{d^dp_{na}}{(2\pi\hbar)^d}
\prod_{n=0}^{N-1}\int dN_{n+1}\nn\\
&&\qquad\qquad\times
\exp\left(\frac{i}{\hbar}\sum_{n=0}^{N-1}p_{na}\Delta q_n^a\right)
\exp\left(-\frac{i}{\hbar}\sum_{n=0}^{N-1}
\Delta\tau'_{n+1}N_{n+1}H(\bar{q}_n^a,p_{na})\right)\\
\label{eqn:path integral 5 b}
&\sim&
\lim_{N\rightarrow\infty}
\prod_{n=1}^{N-1}\int d^dq_n^a
\prod_{n=0}^{N-1}\int\! \frac{d^dp_{na}}{(2\pi\hbar)^d}
\prod_{n=0}^{N-1}\int dN_{n+1}\nn\\
&&\qquad\qquad\times
\exp\left(\frac{i}{\hbar}\sum_{n=0}^{N-1}p_{na}\Delta q_n^a\right)
\exp\left(-\frac{i}{\hbar}\sum_{n=0}^{N-1}
N_{n+1}H(\bar{q}_n^a,p_{na})\right),
\ea
\end{subequations}
where $\Delta\tau'_n$ in \eqnref{eqn:path integral 5 a} is absorbed into $N_n$ in \eqnref{eqn:path integral 5 b} and this only results in an irrelevant overall factor. The expression of \eqnref{eqn:path integral 5 b} is explicitly independent of parametrizations.\footnote{Perhaps, more appropriately, the ``timeless path integral'' should be renamed the ``timeless `curve' integral'', as in the rigorous terminology, a \emph{curve} is defined as the unparameterized image of a \emph{path}, which is specified by a parameter. However, we keep the name ``path integral'' to conform to the conventional nomenclature.}
The contributing paths in the path integral can be very ``wild'' --- not necessarily smooth or even continuous. This calls into question whether the path integral can achieve convergence. We do not attempt to present a rigorous derivation here but refer to \cite{Simon:book} for the legitimacy issues and subtleties of the path integral.
Each path in $\Sigma$ contributes with a different phase, and the contributions from the paths far away from the stationary solution essentially cancel one another through destructive interference. As a result, most contributions come from the paths close to the stationary solution. The stationary solution can be obtained by taking the functional variation of \eqnref{eqn:path integral 3} with respect to $N$, $q^a$ and $p_a$, which yields the classical Hamiltonian constraint \eqnref{eqn:cl constraint} and the Hamilton equations \eqnref{eqn:Hamilton eqs}. Therefore, the stationary solution is the classical solution, and we thus have the approximation
\be\label{eqn:W approx}
W(q^a,q'^a)\approx \sum_i \,e^{\frac{i}{\hbar}S[\tilde{\gamma}_i]},
\ee
where $\tilde{\gamma}_i$ are the classical solutions which connect $q'^a$ and $q^a$ and $S$ is the action.\footnote{Generally, there could be multiple classical solutions connecting $q'^a$ and $q^a$ (as in the case of the timeless double pendulum), if the system is not deparametrizable.} Based on the path integral formalism, a semiclassical theory can be developed in the vicinity of the classical solutions \textit{\`{a} la} the WKB method.
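The approximation \eqnref{eqn:W approx} is the functional analogue of the ordinary stationary phase estimate, which in a single variable reads
\be
\int_{-\infty}^{\infty}dx\, e^{iS(x)/\hbar}
\approx \sum_i \sqrt{\frac{2\pi i\hbar}{S''(x_i)}}\; e^{iS(x_i)/\hbar},
\qquad S'(x_i)=0,
\ee
in the limit $\hbar\rightarrow0$; in \eqnref{eqn:W approx} the analogous fluctuation prefactors have been absorbed into the overall normalization.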
\subsection{Deparametrizable systems as a special case}\label{sec:deparametrizable systems}
If the Hamiltonian happens to be deparametrizable, the classical Hamiltonian is in the form of \eqnref{eqn:nonrel H}, and the path integral \eqnref{eqn:path integral 4} reads as
\begin{subequations}\label{eqn:path integral deparametrizable}
\ba
W(q^a,q'^a)&\sim&
\int\mathcal{D}t\int\mathcal{D}q^i \int\mathcal{D}p_t\int\mathcal{D}p_i\ \delta[p_t+H_0]
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}}\left(p_tdt+p_idq^i\right)\right]\nn\\
&=&\int\mathcal{D}t\int\mathcal{D}q^i \int\mathcal{D}p_i\,
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}}\left(p_i\frac{dq^i}{dt}-H_0\right)dt\right]\\
&\equiv&
\lim_{N\rightarrow\infty}
\left(\prod_{n=1}^{N-1}\int dt_n\right)
\left(\prod_{n=1}^{N-1}\int d^{d-1}q_n^i\right)
\left(\prod_{n=0}^{N-1}\int\! \frac{d^{d-1}p_{ni}}{(2\pi\hbar)^{d-1}}\right)\nn\\
&&\quad\times\exp\left[\frac{i}{\hbar}\sum_{n=0}^{N-1}
\left(p_{ni}\frac{\Delta q_n^i}{\Delta t_n}-H_0(\bar{q}^i_n,p_{ni})\right)\Delta t_n
\right].
\ea
\end{subequations}
On the other hand, in the conventional nonrelativistic quantum mechanics, the transition amplitude is given by the conventional path integral (see Chapter~11 of \cite{Greiner:book}):
\begin{subequations}\label{eqn:conventional path integral}
\ba
&&G(q^i,t;q'^i,t'):=\opelem{q^i}{e^{-i\hat{H}_0(t-t')}}{q'^i}\nn\\
&=&\int\mathcal{D}q^i \int\mathcal{D}p_i\,
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}_0}\left(p_i\frac{dq^i}{dt}-H_0\right)dt\right]\\
&\equiv&
\lim_{N\rightarrow\infty}
\left(\prod_{n=1}^{N-1}\int d^{d-1}q_n^i\right)
\left(\prod_{n=0}^{N-1}\int\! \frac{d^{d-1}p_{ni}}{(2\pi\hbar)^{d-1}}\right)\nn\\
&&\quad\times\exp\left[\frac{i}{\hbar}\sum_{n=0}^{N-1}
\left(p_{ni}\frac{\Delta q_n^i}{\Delta t_n}-H_0(\bar{q}^i_n,p_{ni})\right)\Delta t_n
\right],
\ea
\end{subequations}
where the path $\tilde{\gamma}_0$ is in the cotangent space $T^*\mathcal{C}_0$ and its projection $\gamma_0$ on $\mathcal{C}_0$ has the endpoints fixed at $q^i(t')=q'^i$ and $q^i(t)=q^i$.
Equation \eqnref{eqn:path integral deparametrizable} looks almost the same as \eqnref{eqn:conventional path integral} except for the functional integral $\int\mathcal{D}t$. In \eqnref{eqn:path integral deparametrizable}, all paths given by $q^a(\tau)=(t(\tau),q^i(\tau))$ (parameterized by an arbitrary parameter $\tau$) are summed and the quantum fluctuation in $t$ is also included; by contrast, in \eqnref{eqn:conventional path integral}, all paths are given by $q^i(t)$, as $t$ is treated as a parameter and exhibits no fluctuation at all. In other words, \eqnref{eqn:conventional path integral} sums over only the paths which are monotonic in $t$, whereas \eqnref{eqn:path integral deparametrizable} sums over all possible paths whether $t$ is monotonic or not. The difference is profound and shows that relativistic quantum mechanics is \emph{not} equivalent to conventional quantum mechanics even if the system is deparametrizable, as already commented at the end of \secref{sec:general scheme}.
However, for most systems, we have the good approximation \eqnref{eqn:W approx} and only the paths in the vicinity of the classical solution are important. Meanwhile, as discussed in \secref{sec:nonrelativistic mechanics}, the classical solution for a deparametrizable system is always monotonic in $t$. Therefore, in \eqnref{eqn:path integral deparametrizable}, it is a good approximation to sum over only the paths which do not deviate too much from the classical solution and are thus monotonic in $t$. In this approximation, \eqnref{eqn:path integral deparametrizable} reduces to the conventional path integral \eqnref{eqn:conventional path integral}, as $\int\mathcal{D}t$ factors out as an irrelevant overall factor. Therefore, the conventional path integral, although not equivalent to the timeless path integral, is a good approximation for it. Further research is needed to investigate when the approximation remains good and when it fails; this is closely related to the question of when the fluctuation (uncertainty) of $t$ can be ignored and thus its measurement can be idealized as instantaneous (see \cite{Reisenberger:2001pk,Marolf:2002ve} for the case of a simple harmonic oscillator).
\subsection{Timeless Feynman's path integral}
Consider the special case that the classical Hamiltonian is given in the form of \eqnref{eqn:H special form} and the Hamiltonian operator is Weyl ordered. As the Hamiltonian is a quadratic polynomial in $p_a$, the path integral over $\mathcal{D}p_a$ in \eqnref{eqn:path integral 3} can be integrated out. That is, in the expression:\footnote{In this subsection, the repeated index $a$ is not summed unless $\sum_a$ is explicitly used.}
\ba
&&W(q^a,q'^a)\\
&\sim& \int\mathcal{D}q^a \int\mathcal{D}N
\prod_{n=0}^{N-1}\int\! \frac{d^dp_n}{(2\pi\hbar)^d}\
\exp\left[\frac{i}{\hbar}\sum_{n=0}^{N-1}
\left(\sum_a p_{na}\Delta q_n^a-\Delta\tau'_{n+1}N_{n+1}H(\bar{q}_n^a,p_{na})\right)
\right],\nn
\ea
the integration over each $p_{na}$ can be explicitly carried out:
\ba
&&\int_{-\infty}^\infty dp_{na}
\exp\left(\frac{i}{\hbar}
\left[
p_{na}\Delta q_n^a-\Delta\tau'_{n+1}N_{n+1}
\left(\alpha_ap_{na}^2 + \beta_ap_{na}\bar{q}_n^a+
\gamma_ap_{na}\right)
\right]\right)\nn\\
&\propto&
\frac{1}{\sqrt{\rule{0mm}{3.5mm}N_{n+1}}}\,
\exp \left(\frac{i}{\hbar}\frac{\Delta\tau'_{n+1}N_{n+1}}{4\alpha_a}
\left[\frac{\Delta q_n^a}{\Delta\tau'_{n+1}N_{n+1}}
-\beta_a\bar{q}_n^a-\gamma_a\right]^2 \right)
\ea
by the Gaussian integral $\int_{-\infty}^\infty dx\, e^{-\alpha x^2+\beta x}=(\pi/\alpha)^{1/2}e^{\beta^2/4\alpha}$.
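For completeness, the result above follows from completing the square. Writing $A:=\Delta\tau'_{n+1}N_{n+1}$ as a shorthand for this step only, the Gaussian integral applies with $\alpha=\frac{i}{\hbar}A\alpha_a$ and $\beta=\frac{i}{\hbar}\left[\Delta q_n^a-A\left(\beta_a\bar{q}_n^a+\gamma_a\right)\right]$, whence
\be
\frac{\beta^2}{4\alpha}
=\frac{i}{\hbar}\,
\frac{\left[\Delta q_n^a-A\left(\beta_a\bar{q}_n^a+\gamma_a\right)\right]^2}{4A\alpha_a}
=\frac{i}{\hbar}\,\frac{A}{4\alpha_a}
\left[\frac{\Delta q_n^a}{A}-\beta_a\bar{q}_n^a-\gamma_a\right]^2,
\ee
while the prefactor $(\pi/\alpha)^{1/2}\propto N_{n+1}^{-1/2}$ up to factors independent of $N_{n+1}$.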
Noting that $dN_{n+1}/\sqrt{\rule{0mm}{3.5mm}N_{n+1}}=2\,d\sqrt{\rule{0mm}{3.5mm}N_{n+1}}$\, and introducing the shorthand notation:
\be
\prod_{n=0}^{N-1}\int_{-\infty}^\infty d\sqrt{N_{n+1}} \quad \rightarrow \quad \int\mathcal{D}\sqrt{N},
\ee
we then have
\ba
W(q^a,q'^a)&\sim& \int\mathcal{D}q^a \int\mathcal{D}\sqrt{N}\\
&&\times
\exp\left[\frac{i}{\hbar}\sum_{n=0}^{N-1}
\left(\sum_a \frac{N_{n+1}}{4\alpha_a}
\left[\frac{\Delta q_n^a}{\Delta\tau'_{n+1}N_{n+1}}
-\beta_a\bar{q}_n^a-\gamma_a\right]^2-N_{n+1}V(\bar{q}_n^a)\right)\Delta\tau'_{n+1}
\right],\nn
\ea
which written in the continuous form reads as
\be\label{eqn:Feynman's path integral}
W(q^a,q'^a)
\sim \int\mathcal{D}q^a \int\mathcal{D}\sqrt{N}\,
\exp\left[\frac{i}{\hbar}\int_\gamma d\tau'
\left(\sum_a\frac{N}{4\alpha_a}\left[\frac{\dot{q}^a}{N}-\beta_aq^a-\gamma_a\right]^2
-NV(q^a)\right)\right],
\ee
where the ``velocity'' $\dot{q}^a:=dq^a/d\tau'$ is the continuous limit of $\Delta q_n^a/\Delta\tau'_{n+1}$.
Therefore, in the special case that the Hamiltonian is a quadratic polynomial in $p_a$, the transition amplitude admits a path integral formalism over the configuration space, whereby the functional integration over $N$ is modified to $\int\mathcal{D}\sqrt{N}$. This is called the \emph{configuration space path integral} or \emph{Feynman's path integral}. The configuration space path integral \eqnref{eqn:Feynman's path integral} sums over all arbitrary paths $\gamma\in\mathcal{C}$ whose endpoints are fixed at $q'^a$ and $q^a$, and each path contributes with a phase, which is the action given by the Lagrangian function in \eqnref{eqn:cl Lagrangian} (divided by $\hbar$). The functional variations of \eqnref{eqn:Feynman's path integral} with respect to $\sqrt{N}$ and $q^a$ yield the classical Hamiltonian constraint and equation of motion as in \eqnref{eqn:var to N on L} and \eqnref{eqn:var to q on L}.\footnote{Note that $\delta W/\delta\sqrt{N}=2\sqrt{N}\,\delta W/\delta N$.} This shows again that the stationary solution is the classical solution and thus \eqnref{eqn:W approx} is a good approximation.
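As a familiar illustration, consider a single degree of freedom with $\alpha=\frac{1}{2m}$ and $\beta=\gamma=0$, i.e. $H=\frac{p^2}{2m}+V(q)$. The configuration space path integral \eqnref{eqn:Feynman's path integral} then takes the form
\be
W(q,q')
\sim \int\mathcal{D}q \int\mathcal{D}\sqrt{N}\,
\exp\left[\frac{i}{\hbar}\int_\gamma d\tau'
\left(\frac{m\dot{q}^2}{2N}-NV(q)\right)\right],
\ee
and the variation with respect to $\sqrt{N}$ enforces the Hamiltonian constraint $\frac{m\dot{q}^2}{2N^2}+V(q)=0$, as expected from $p=m\dot{q}/N$.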
\subsection{Path integral with multiple constraints}
If there are multiple constraints and the constraint operators $\hat{H}^i$ commute, the projector is given by \eqnref{eqn:projector multiple constraints} and \eqnref{eqn:W q q'} can be directly generalized as\footnote{In this subsection, the repeated index $i$ is not summed unless $\sum_i$ is explicitly used.}
\be\label{eqn:W q q' with multiple H}
W(q^a,q'^a)
\sim \int_{-\infty}^\infty d\tau^1 \cdots \int_{-\infty}^\infty d\tau^k\
\opelem{q^a}{e^{-i\sum_{i=1}^k \tau^i \hat{H}^i}}{q'^a}.
\ee
If each $\hat{H}^i$ is a polynomial of $\hat{q}^a$ and $\hat{p}_a$ and Weyl ordered, the linear sum $\hat{H}'=\sum_i\tau^i\hat{H}^i$ is also a polynomial and Weyl ordered. Thus, by replacing $\tau$ with $1$ and $\hat{H}$ with $\hat{H}'$ in \eqnref{eqn:continuous path integral for kernel}, it can be shown
\ba
\opelem{q^a}{e^{-i\sum_i\tau^i \hat{H}^i}}{q'^a}&=&
\int\mathcal{D}q^a \int\mathcal{D}p_a\
\exp\left(\frac{i}{\hbar}\int_{\tilde{\gamma}}p_adq^a\right)\nn\\
&&\qquad\quad\times
\exp\left(-i \sum_i \int_{\tilde{\gamma}} \tau^iH^i(q^a(\bar{\tau}),p_a(\bar{\tau}))\,d\bar{\tau}\right),
\ea
where $\bar{\tau}$ is a parameter for the curve $\tilde{\gamma}$ with $q'^a=q^a(\bar{\tau}=0)$ and $q^a=q^a(\bar{\tau}=1)$. Redefining $\tau^i\Delta\bar{\tau}_n$ as $\Delta\tau^i_n$, we then have
\ba\label{eqn:kernel with multiple H}
\opelem{q^a}{e^{-i\sum_i\tau^i \hat{H}^i}}{q'^a}&=&
\int\mathcal{D}q^a \int\mathcal{D}p_a\
\exp\left(\frac{i}{\hbar}\int_{\tilde{\gamma}}p_adq^a\right)\nn\\
&&\qquad\quad\times
\exp\left(-i \sum_i \int_{\tilde{\gamma}} H^i(q^a(\tau^i),p_a(\tau^i))\,d\tau^i\right).
\ea
As in the case with a single constraint, the first exponential in \eqnref{eqn:kernel with multiple H} is independent of parametrizations of the curve $\tilde{\gamma}$, and for the second exponential we can play the same trick of summing over different parametrizations to get rid of the seeming dependence on parametrizations. Following the same steps as in \secref{sec:path integral general structure}, for each $i$, we have
\ba
&&\int d\tau^i \int \left[\mathcal{D}\Delta\tau^i\right]_{\sum\!\Delta\tau^i_n=\tau^i}
\opelem{q^a}{e^{-i\tau^i \hat{H}^i}}{q'^a}\\
&=&
\int\mathcal{D}q^a \int\mathcal{D}p_a \int\mathcal{D}N_i\
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}}\left(p_a\frac{dq^a}{d\tau'}
-N_i(\tau')H^i\right)d\tau'\right]
\ea
for a given arbitrary parametrization $\tau'$. After summing over $\left[\mathcal{D}\Delta\tau^i\right]_{\sum\!\Delta\tau^i_n=\tau^i}$ for each $i$, \eqnref{eqn:W q q' with multiple H} yields
\begin{subequations}\label{eqn:path integral with multiple H}
\ba
\label{eqn:path integral with multiple H 1}
W(q^a,q'^a)&\sim&
\int\mathcal{D}q^a \int\mathcal{D}p_a\, \prod_{i=1}^k\int\mathcal{D}N_i\,
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}}\left(p_a\frac{dq^a}{d\tau'}
-\sum_{i=1}^k N_iH^i\right)d\tau'\right]\\
\label{eqn:path integral with multiple H 2}
&\sim&
\int\mathcal{D}q^a \int\mathcal{D}p_a\, \prod_{i=1}^k\delta[H^i]\,
\exp\left[\frac{i}{\hbar}
\int_{\tilde{\gamma}} p_a dq^a \right],
\ea
\end{subequations}
which is the direct generalization of \eqnref{eqn:path integral 3} and \eqnref{eqn:path integral 4}. In the path integral, each path in $\Sigma$ contributes with a phase, which is the classical action given in \eqnref{eqn:cl action with N} divided by $\hbar$. Functional variation of \eqnref{eqn:path integral with multiple H 1} with respect to $N_i$, $q^a$ and $p_a$ again yields the classical equations \eqnref{eqn:cl constraint} and \eqnref{eqn:Hamilton eqs}.
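As a sketch (our notation, assuming the action $S=\int_{\tilde{\gamma}}\bigl(p_a\,dq^a/d\tau'-\sum_i N_iH^i\bigr)\,d\tau'$ read off from the exponent above), the three stationarity conditions are:

```latex
% Stationarity of S = \int ( p_a \dot{q}^a - \sum_i N_i H^i ) d\tau'
\frac{\delta S}{\delta N_i} = 0
  \;\Longrightarrow\; H^i(q^a,p_a) = 0 ,
\qquad
\frac{\delta S}{\delta p_a} = 0
  \;\Longrightarrow\; \frac{dq^a}{d\tau'} = \sum_i N_i\,\frac{\partial H^i}{\partial p_a} ,
\qquad
\frac{\delta S}{\delta q^a} = 0
  \;\Longrightarrow\; \frac{dp_a}{d\tau'} = -\sum_i N_i\,\frac{\partial H^i}{\partial q^a} .
```

The first condition reproduces the constraints and the latter two are the Hamilton equations with the multipliers $N_i$.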
\section{Summary and discussion}\label{sec:discussion}
Starting from the canonical formulation in \cite{Rovelli:book}, the timeless path integral for relativistic quantum mechanics is rigorously derived. Given in \eqnref{eqn:path integral 4}, the transition amplitude is formulated as the path integral over all possible paths in the constraint surface $\Sigma$ (through the confinement by the delta functional $\delta[H]$), and each path contributes with a phase identical to the classical action $\int_{\tilde{\gamma}}p_adq^a$ divided by $\hbar$. The alternative expression is given in \eqnref{eqn:path integral 3}, which is the functional integral over all possible paths in the cotangent space $\Omega=T^*\mathcal{C}$ as well as over the Lagrange multiplier $N$. The timeless path integral manifests the timeless feature of relativistic quantum mechanics, as the parametrization for paths has no physical significance. For the special case that the Hamiltonian constraint $H(q^a,p_a)$ is a quadratic polynomial in $p_a$, the transition amplitude admits the timeless Feynman's path integral over the paths in the configuration space $\mathcal{C}$, as given in \eqnref{eqn:Feynman's path integral}.
The formulation of timeless path integral is intuitively appealing and advantageous in many respects as it generalizes the action principle of relativistic classical mechanics by replacing the classical notion of a single trajectory with a sum over all possible paths. In particular, it is easy to argue that the classical solution contributes most to the transition amplitude and thus \eqnref{eqn:W approx} is a good approximation for generic cases since the stationary solution is identical to the classical one. In the vicinity of the classical trajectory, the semiclassical approximation \textit{\`{a} la} the WKB method can be developed. Furthermore, timeless path integral offers a new perspective to see how the conventional quantum mechanics emerges from relativistic quantum mechanics within a certain approximation (as discussed in \secref{sec:deparametrizable systems}) and may provide new insight into the problem of time.
The formulation of timeless path integral can be directly extended to dynamical systems with multiple constraints as given in \eqnref{eqn:path integral with multiple H}, if the constraint operators $\hat{H}^i$ commute. For the case that $\hat{H}^i$ do not commute but form a closed Lie algebra, the projector is no longer given by \eqnref{eqn:projector multiple constraints} but we have to invoke \eqnref{eqn:group averaging} to obtain the physical state, which leads to
\be\label{eqn:W q q' with multiple noncommuting H}
W(q^a,q'^a)
\sim \int d\mu(\vec{\theta})\,
\opelem{q^a}{e^{-i\vec{\theta}\cdot\hat{\vec{H}}}}{q'^a},
\ee
where $\theta^i$ are coordinates of the Lie group $G$ generated by $\hat{H}^i$.
Starting from \eqnref{eqn:W q q' with multiple noncommuting H} and following similar techniques to those used in this paper, one should be able to obtain the timeless path integral, but the measure of the functional integral $\prod_{i=1}^k\int\mathcal{D}N_i$ appearing in \eqnref{eqn:path integral with multiple H 1} would have to be nontrivially modified, as $\theta^i$ play the same role as $\tau^i$ in \eqnref{eqn:W q q' with multiple H} but now the nontrivial Haar measure $d\mu$ is involved and the nontrivial topology of $G$ has to be taken into account. For the case that $\hat{H}^i$ do not form a closed Lie algebra, it is not clear how to construct a quantum theory free of quantum anomalies even in the canonical formalism. The timeless path integral may instead provide a better conceptual framework from which to start constructing the quantum theory.
Throughout this paper, we have focused on simple mechanical systems, not field theories. In Section 3.3 of \cite{Rovelli:book}, the canonical treatment of classical field theories which maintains a clear meaning in a general-relativistic context is presented as a direct generalization of the timeless formulation for relativistic classical mechanics (see also \cite{Gotay:1997eg} and references therein), and the corresponding quantum field theory is formulated in Section 5.3 of \cite{Rovelli:book}. The timeless path integral for relativistic quantum mechanics derived in this paper should be extendable to the quantum field theory described in \cite{Rovelli:book}. We leave this for future research.
Furthermore, as the timeless path integral is systematically derived from the well-controlled canonical formulation of relativistic quantum mechanics, we expect it to provide new insight into the issues of the connection between LQG/LQC and SFMs. Extending the timeless path integral to field theories will be particularly helpful in this respect.
\begin{acknowledgements}
The author would like to thank Biao Huang for helpful discussions. This work was supported in part by the Grant No. 10675019 from the NSFC and the Grants No. 20080440017 and No. 200902062 from the China Postdoctoral Science Foundation.
\end{acknowledgements}
\section{Introduction}
The 22.2\,GHz radio emission from luminous extragalactic H$_2$O masers originates in dense ($10^7$\,$<$\,$n({\rm H_2})$\,$<$\,$10^{11}$\,cm$^{-3}$) and warm ($T$\,$>$\,300\,K) gas clouds within a few parsecs from the nuclear engines of their parent galaxies. These masers trace circumnuclear accretion disks (``disk-masers'', e.\,g. UGC3789; \citealt{reid09}), the inner parts of relativistic jets (``jet-masers'', e.\,g. Mrk~348; \citealt{peck03}) or nuclear outflows (Circinus; \citealt{greenhill03}), that are associated with active galactic nuclei (AGN). In contrast to optical and ultraviolet radiation, the radio photons can penetrate the enormous column densities of gas and dust that often obscure the line of sight to the nucleus. This, together with the high brightness temperature and small size of the maser spots, makes H$_2$O emission a suitable tool to investigate the geometry, kinematics, and excitation conditions of the gas in the immediate vicinity of supermassive black holes. Very Long Baseline Interferometry (VLBI) studies of water maser sources, complemented by single-dish monitoring, are a unique instrument to map accretion disks and to estimate the enclosed masses (e.\,g. \citealt{braatz2010}; \citealt{kuo2011}), as well as to determine the shock speeds and densities of radio jets \citep{peck03}.
To date, most such studies have targeted radio quiet AGN in the local Universe. Indeed, the majority of the known extragalactic water masers have been found in Seyfert 2 or LINER galaxies at $z<0.06$. However, the discovery of a water maser in a type 2 quasar at $z=0.66$ \citep{barvainis05} demonstrated that H$_2$O masers can also be detected at higher redshifts. The discovery of water masers at cosmological distances ($z>1.5$) would allow us to study the parsec-scale environment around powerful radio sources, to investigate the physical conditions of the molecular gas in the inner parsecs of quasars, and to measure their black-hole masses not only in the local but also in the early universe. We have recently performed a survey of gravitationally lensed quasars with the Effelsberg radio telescope to find water masers at cosmological redshifts (\citealt{impellizzeri08}; \citealt{mckean2010}). By observing gravitational lens systems we use the lens as a `cosmic telescope' to probe a luminosity regime that is otherwise not reachable with current instrumentation.
Our first confirmed high redshift water maser was found toward the lensed quasar MG~J0414+0534 at $z=2.64$, which is by far the most distant object known to host water maser emission \citep{impellizzeri08}. The previously reported (unlensed) H$_2$O apparent isotropic luminosity of $\sim$10,000\,L$_{\odot}$ places the maser in MG~J0414+0534 among the most luminous water masers ever detected and suggests that the emission is associated with the AGN of the quasar. Although the characteristics of the spectrum seem to favour an association with the radio jet rather than with an accretion disk, the origin of the H$_2$O emission could not be conclusively determined from the Effelsberg and EVLA data alone.
In this paper we report the results from 15 months of monitoring of the redshifted 6\,GHz radio continuum and line emission in MG~J0414+0534 with the 300-m Arecibo telescope. We monitored the line with a large bandwidth to potentially detect additional maser components and to constrain a possible velocity drift of the lines. Furthermore, we monitored the line to reveal possible variations in the maser flux density, to determine whether a correlation exists between the maser and continuum flux densities, and whether there is a time lag between them. Throughout the paper we adopt a cosmology with $\Omega_{\rm M} =0.3$, $\Omega_{\rm \Lambda} =0.7$ and $H_0 = 70$\,km\,s$^{-1}$\,Mpc$^{-1}$.
\section{Observations and data reduction}\label{sect:obs}
The water maser line from the gravitationally lensed quasar MG~J0414+0534 was monitored with the Arecibo telescope between October 2008 and January 2010, at $\sim$6 week intervals, for a total of 11 epochs (see Table~\ref{table:obs}).
We observed the 6$_{16}$--5$_{23}$ transition of ortho-H$_2$O (rest frequency 22.235\,GHz) using the C-high receiver when available and the standard C-band receiver otherwise. Both receivers provide dual linear polarization. For most of the observations (8 out of 11), we employed the Wideband Arecibo Pulsar Processor (WAPP) spectrometer in single board mode, which provides up to four independently tunable sub-correlators. We used two of the four WAPP sub-correlators, each with a 100\,MHz bandwidth centered at the redshifted frequency of the line (6.110\,GHz at $z$\,=\,2.639) and with a single polarization. In three epochs, August 2009, November 2009, and January 2010, we used the WAPP in dual board mode. This mode provides eight independent sub-correlators, each of 100\,MHz bandwidth, which can be centered at different frequencies within an instantaneous interval of 1\,GHz. We utilized the sub-correlators to simultaneously observe the water maser line and other redshifted molecular transitions including four ammonia inversion lines, NH$_3$ (1, 1), (2, 2), (3, 3), and (6, 6), and five excited OH and CH$_3$OH transitions (see Table~\ref{table:trans}). With nine-level quantization and one polarization per sub-correlator, both configurations provided 2048 spectral channels per sub-correlator and yielded a channel spacing of 48.8\,kHz (equivalent to 2.4\,km\,s$^{-1}$).
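The quoted numbers of the spectral set-up can be cross-checked directly; the short sketch below is our arithmetic, with the rest frequency, redshift, bandwidth, and channel count taken from the text:

```python
# Sanity check of the Arecibo spectral set-up (values from the text).
C_KM_S = 299792.458                # speed of light [km/s]

nu_rest = 22235.0                  # H2O 6_16-5_23 rest frequency [MHz]
z = 2.639
nu_obs = nu_rest / (1.0 + z)       # redshifted sky frequency [MHz] -> ~6110 MHz

chan_mhz = 100.0 / 2048.0          # 100 MHz band over 2048 channels -> ~48.8 kHz
dv = C_KM_S * chan_mhz / nu_obs    # channel spacing in velocity -> ~2.4 km/s
```

The 6.110\,GHz band centre, the 48.8\,kHz channel spacing, and the 2.4\,km\,s$^{-1}$ velocity resolution quoted above all follow from these three inputs.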
Since MG~J0414+0534 has quite strong continuum emission (0.71$\pm$0.02\,Jy, on average, the error being the standard deviation of the mean), we observed in double position switching mode \citep{ghosh02} to avoid problems related to residual standing waves in the baseline of the spectrum. A standard ON/OFF position-switched observation of 300s was performed on MG~J0414+0534, followed by a 40s ON/OFF observation on the strong continuum source 3C~120 (5.5$\pm$0.2\,Jy), which was used as a bandpass calibrator. The half power beam width (HPBW) was $\sim$0.7{\arcmin}$\times$\,0.9{\arcmin} and the pointing was found to be accurate within 10{\arcsec} in all observations. In order to obtain a precise flux calibration of our spectra, we also performed WAPP cross maps of the non-variable continuum source 3C~93, of the bandpass calibrator 3C~120, and of MG~J0414+0534.
The data reduction was performed with the standard Arecibo Observatory (AO) IDL analysis package written by Phil Perillat using special routines developed by the AO staff. The individual ON/OFF scans on MG~J0414+0534 were processed to yield (ON-OFF)/OFF spectra, and these were divided by similar spectra for 3C~120 to obtain bandpass corrected spectra of MG~J0414+0534. The flux density of 3C~93, calculated using the K{\"u}hr's coefficients \citep{kuehr81}, was used to convert the resulting ratio spectra to Jy. The uncertainty of this flux calibration procedure is dominated by the error on the flux density determined for 3C~93 and is estimated to be 7\%.
For each epoch, individual scans were inspected for quality and radio frequency interference (RFI) and co-added to produce a final spectrum. A polynomial baseline (typically of order 7--8) was then fitted to this spectrum and subtracted from it. Finally, we averaged the two polarizations. Due to a technical problem in one of the polarization channels of the June 2009 dataset, only a single polarization spectrum is reported for this epoch. The r.m.s. sensitivities reached in individual epochs ranged from 0.2 to 0.6 mJy per 2.4 km\,s$^{-1}$ wide channel (see Table~\ref{table:obs}). We measured the continuum flux density of MG~J0414+0534 from the calibrated cross maps.
\begin{table*}
\caption{Observational details}
\label{table:obs}
\centering
\begin{tabular}{c l r c c c l}
\hline\hline
Epoch & Date & Day No. & Receiver & On-source int. time & R.m.s. & Comments \\
& & & & (minutes) & (mJy per 2.4\,km\,s$^{-1}$ chan) & \\
\hline
1 & 2008 Oct 14-15 & 0 & C-high & 50 & 0.3 & \\
2 & 2008 Nov 21-22 & 38 & C-high & 55 & 0.4 & \\
3 & 2009 Jan 1-2 & 79 & C-high & 50 & 0.4 & \\
4 & 2009 Feb 14-19 & 123 & C & 195 & 0.2 & \\
5 & 2009 Apr 4-5 & 172 & C & 65 & 0.4 & \\
6 & 2009 May 16-17 & 214 & C & 65 & 0.5 & \\
7 & 2009 Jun 27-28 & 256 & C & 65 & 0.6 & Single pol. spectrum \\
8 & 2009 Aug 8-9 & 298 & C-high & 65 & 0.5 & dual board set up \\
9 & 2009 Sep 28-30 & 349 & C & 65 & 0.4 & \\
10 & 2009 Nov 12-13 & 394 & C-high & 55 & 0.4 & dual board set up \\
11 & 2010 Jan 11-12 & 454 & C-high & 45 & 0.4 & dual board set up \\
\hline
\end{tabular}
\end{table*}
\begin{table}
\caption{The list of molecular transitions that were targeted.}
\label{table:trans}
\centering
\begin{tabular}{c c l @{}c}
\hline\hline
Band & Frequency & Transitions & Rest frequency \\
& (GHz) & & (GHz) \\
\hline
1 & 6.110 & H$_2$O $6_{16}-5_{23}$ & 22.235 \\
2 & 6.515 & NH$_3$ (1, 1) & 23.694 \\
& & NH$_3$ (2, 2) & 23.722 \\
3 & 6.567 & OH $^{2}\Pi_{3/2} J=9/2, F=4-4$ & 23.817 \\
& & OH $^{2}\Pi_{3/2} J=9/2, F=5-5$ & 23.826 \\
& & NH$_3$ (3, 3) & 23.870 \\
4 & 6.878 & CH$_3$OH $4_{2}-4_{1}$ & 24.933 \\
& & CH$_3$OH $6_{2}-6_{1}$ & 25.018 \\
& & NH$_3$ (6, 6) & 25.056 \\
& & CH$_3$OH $7_{2}-7_{1}$ & 25.124 \\
\hline
\end{tabular}
\end{table}
\section{Results}\label{sect:res}
In the following, the quoted line velocities are defined w.r.t. the optical redshift of MG~J0414+0534, $z$\,=\,2.639 \citep{lawrence95}, using the optical velocity definition in the heliocentric frame. Isotropic line luminosities and upper limits have been calculated using:
\begin{equation}\label{eq:lum}
\frac{L_{\rm line}}{\rm L_{\odot}}=\frac{1}{m}\frac{0.001}{1+z}\frac{\nu_{\rm line}}{\rm [GHz]}\frac{\int{S\,dv}}{\rm [Jy\,km\,s^{-1}]}\frac{D_{\rm L}^2}{\rm [Mpc^2]},
\end{equation}
where $m$ is the lensing magnification, $z$ is the redshift of the background source, $\nu_{\rm line}$ is the rest frequency of the transition, $\int{S\,dv}$ is the integrated flux density, and $D_{\rm L}$ is the luminosity distance. The lensing magnification for MG~J0414+0534 is estimated to be $\sim$35 \citep{trotter00}. This value for the magnification is used under the assumption that the line emission is coincident with the radio continuum. If the line emission is not associated with the continuum, then the lensing magnification could be larger or smaller than 35. The luminosity distance of MG~J0414+0534 is 21,790\,Mpc.
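Eq.~\ref{eq:lum} can be evaluated directly with the values quoted here ($m=35$, $z=2.639$, $D_{\rm L}=21{,}790$\,Mpc). The sketch below is our arithmetic; the integrated flux densities are those reported in Section~\ref{sect:satellite} (for the satellite line we approximate the integral as peak\,$\times$\,FWHM\,$\times$\,1.0645, a Gaussian-shape assumption), and the results reproduce the quoted luminosities to within rounding:

```python
def line_luminosity(int_flux_jy_kms, nu_rest_ghz=22.235,
                    m=35.0, z=2.639, d_l_mpc=21790.0):
    """Unlensed isotropic line luminosity in L_sun, following Eq. (1)."""
    return (1.0/m) * (0.001/(1.0+z)) * nu_rest_ghz * int_flux_jy_kms * d_l_mpc**2

# Main line, Oct 2008: 0.30 Jy km/s -> ~2.5e4 L_sun (quoted as ~26,000 L_sun)
L_main = line_luminosity(0.30)

# Satellite line: 0.6 mJy peak, 100 km/s FWHM (assumed Gaussian) -> ~5e3 L_sun
L_sat = line_luminosity(0.6e-3 * 100.0 * 1.0645)
```
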
The errors on the quantities derived from the continuum and the maser line emission have been calculated in the following way. The error on the continuum flux density was determined by using the calibration uncertainty. The errors on the integrated and peak line flux densities, and the line widths of the Gaussian profiles were determined by considering both the statistical uncertainties associated with the Gaussian fits and the uncertainties from the absolute flux calibration. Finally, we deduced the error on the flux densities and velocities of the line peak (i.\,e. the maximum of the H$_2$O spectrum) using the r.m.s of a single channel and the channel separation, respectively.
\subsection{The tentative satellite line}\label{sect:satellite}
Our first Arecibo spectrum of MG~J0414+0534, taken in October 2008 (see Fig.~\ref{fig:oct08_spec}), clearly confirms the presence of the water maser line that was detected in the discovery spectra obtained with Effelsberg and the EVLA \citep{impellizzeri08}. In addition, it shows a weak satellite emission feature, detected with a signal-to-noise ratio (SNR) of three, that is displaced by about +800\,km\,s$^{-1}$ from the main line. We fit simple Gaussian profiles to the maser features shown in Fig.~\ref{fig:oct08_spec} and find that the main line has a central velocity of $-$278$\pm$5\,km\,s$^{-1}$ with a full width at half maximum (FWHM) of 174$\pm$5\,km\,s$^{-1}$. From the integrated flux density (0.30$\pm$0.03\,Jy\,km\,s$^{-1}$), using Eq.~\ref{eq:lum}, we derive for the main line an intrinsic (i.\,e. unlensed) H$_2$O isotropic luminosity of $\sim$26,000\,L$_{\odot}$, making the maser in MG~J0414+0534 the most luminous currently known.
The satellite line at +470$\pm$10\,km\,s$^{-1}$ has a FWHM of 100$\pm$10\,km\,s$^{-1}$ and is five times less luminous ($L_{\rm H_2O} \sim$5000\,L$_{\odot}$). This feature could not be identified in the Effelsberg spectrum. Its peak flux density (0.6$\pm$0.2\,mJy) is comparable with the r.m.s. noise level of the data (0.6\,mJy per 3.8\,km\,s$^{-1}$ channel; \citealt{impellizzeri08}). Smoothing the Effelsberg spectrum down to a channel width of 54\,km\,s$^{-1}$ (r.m.s. $\sim$\,0.2\,mJy) still shows no significant emission around +470\,km\,s$^{-1}$. The velocity of the satellite line was not covered by the bandwidth of our discovery EVLA spectrum. Surprisingly, this emission line feature was not detected again after October 2008 (see Fig.~\ref{fig:spectra}). In February 2009 we performed deeper observations aimed at confirming the presence of this feature. No emission line other than the main one at about $-$300\,km\,s$^{-1}$ was detected above a 3$\sigma$ noise level of 0.3\,mJy per 19.2\,km\,s$^{-1}$ channel. However, a weak feature is seen in the spectrum at a velocity of about +490\,km\,s$^{-1}$ (see Fig.~\ref{fig:fit4}, lower panel). The satellite line also remains undetected in the spectrum produced by averaging all of the epochs with the same weights (Fig.~\ref{fig:fit4}, upper panel). Nonetheless, we note that the range between 200 and 500\,km\,s$^{-1}$ looks spiky and that, interestingly, one of these spikes is at the position of the satellite line. Averaging the spectra using different weights (e.\,g. 1/r.m.s.$^2$ or the integration time) does not change the shape of the resulting spectrum. This may indicate that many weak lines are present in the range 200--500\,km\,s$^{-1}$ and that in October 2008 we saw one of these lines flaring.
\begin{figure}
\centering
\includegraphics[angle=90,width=9cm]{fig1_castangia2011.eps}
\caption{Water maser spectrum observed towards MG~J0414+0534 in October 2008 (black histogram). The fitted Gaussian profiles are overlaid (green line). Channel width is 19.2\,km\,s$^{-1}$. The root-mean-square (r.m.s.) noise level of the spectrum is 0.2\,mJy per channel. The velocity scale is relative to redshift 2.639 \citep{lawrence95} using the optical velocity definition in the heliocentric frame. The red cross marks the systemic velocity and the associated uncertainty (see Section~\ref{sect:red}). The blue and the black crosses indicate the peaks of the CO emission \citep{barvainis98} and the \ion{H}{i} absorption components \citep{moore99}, respectively, with their errors.}
\label{fig:oct08_spec}
\end{figure}
\subsection{Structure of the main line}\label{sect:fit4}
The high SNR of the February 2009 spectrum ($\sim$13; see Fig.~\ref{fig:fit4}, lower panel) reveals that the main line has a complex profile that is likely the result of the blending of many components. When we fit the line profile with multiple Gaussians, the best fit is obtained using four components. Due to the lower SNR of the spectra, it is impossible to perform the same analysis for the other epochs. However, the four Gaussian components describe the average profile of the main line well (Fig.~\ref{fig:fit4}, upper panel), implying that they must be present in most of the epochs. In order to inspect the variability of the individual velocity features, we produced a spectrum by averaging with equal weights the last three epochs of the monitoring (September and November 2009 and January 2010). The resulting spectrum (Fig.~\ref{fig:fit4}, middle panel) has an r.m.s. comparable with that of the February 2009 observation. The mean time separation between the two spectra is 276 days. Table~\ref{table:gauss} summarizes the properties of the Gaussian profiles fitted to these spectra (central velocity, FWHM, and integrated flux density) and the intrinsic, i.\,e. unlensed, isotropic H$_2$O luminosity. Comparing the Gaussian peak velocities, we find that the velocities of components I and II did not change, while the velocities of components III and IV have marginally increased, by +15$\pm$3\,km\,s$^{-1}$ and +10$\pm$3\,km\,s$^{-1}$, respectively. However, these weaker features can be identified in only two of the eleven spectra from individual epochs. It is therefore possible that the change in the peak velocities of these components is due to a change in the line profile rather than to an actual motion of the gas.
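A multi-Gaussian decomposition of this kind can be sketched as follows. The toy example below is our construction (assuming `numpy`/`scipy` and an invented noise level): it builds a synthetic spectrum from the epoch-4 components of Table~\ref{table:gauss} and re-fits it:

```python
import numpy as np
from scipy.optimize import curve_fit

FWHM2SIG = 1.0 / (2.0*np.sqrt(2.0*np.log(2.0)))  # sigma = FWHM * this
GAUSS_AREA = np.sqrt(2.0*np.pi) * FWHM2SIG       # integral = peak*FWHM*this (~1.064)

def multi_gauss(v, *p):
    """Sum of Gaussians; p packs (peak, centre, FWHM) triples."""
    model = np.zeros_like(v, dtype=float)
    for peak, v0, fwhm in zip(p[0::3], p[1::3], p[2::3]):
        model += peak * np.exp(-0.5*((v - v0)/(fwhm*FWHM2SIG))**2)
    return model

# Epoch-4 components of Table 3: (centre km/s, FWHM km/s, int. flux mJy km/s)
comps = [(-350.0, 31.0, 23.0), (-285.0, 43.0, 60.0),
         (-280.0, 161.0, 173.0), (-167.0, 63.0, 43.0)]
p_true = []
for v0, fwhm, sdv in comps:
    p_true += [sdv/(GAUSS_AREA*fwhm), v0, fwhm]   # peak in mJy from integral

v = np.arange(-700.0, 300.0, 2.4)                 # 2.4 km/s channels
rng = np.random.default_rng(7)
spec = multi_gauss(v, *p_true) + rng.normal(0.0, 0.02, v.size)  # toy noise

popt, pcov = curve_fit(multi_gauss, v, spec, p0=p_true)
total = sum(pk*fw*GAUSS_AREA for pk, fw in zip(popt[0::3], popt[2::3]))
```

With real data the strongly blended components II and III are the least well constrained, which is why the total integrated flux is the more robust quantity to compare between epochs.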
\begin{table}
\caption{Parameters of the Gaussian profiles fitted to the water maser line in the spectra of February 2009 (epoch~4) and the average of the last three epochs (epochs 9, 10, and 11; see also Fig.~\ref{fig:fit4}).}
\label{table:gauss}
\centering
\begin{tabular}{c c c c c r}
\hline\hline
Comp. & Epoch & Velocity & FWHM & Int. flux density & Lum. \\
& & (km\,s$^{-1}$) & (km\,s$^{-1}$) & (mJy\,km\,s$^{-1}$) & (L$_{\odot}$) \\
\hline
I & 4 & -350$\pm$2 & 31$\pm$2 & 23$\pm$12 & 2000 \\
& 9, 10, 11 & -351$\pm$2 & 21$\pm$2 & 17$\pm$5 & 1500 \\
II & 4 & -285$\pm$2 & 43$\pm$2 & 60$\pm$12 & 5100 \\
& 9, 10, 11 & -290$\pm$2 & 45$\pm$2 & 65$\pm$5 & 5600 \\
III & 4 & -280$\pm$2 & 161$\pm$2 & 173$\pm$12 & 14800 \\
& 9, 10, 11 & -265$\pm$2 & 154$\pm$2 & 184$\pm$5 & 15800 \\
IV & 4 & -167$\pm$2 & 63$\pm$2 & 43$\pm$12 & 3700 \\
& 9, 10, 11 & -157$\pm$2 & 63$\pm$2 & 51$\pm$5 & 4400 \\
\hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\includegraphics[angle=-90,width=9cm]{fig2_castangia2011.ps}
\caption{{\it Lower panel}: Water maser spectrum of MG~J0414+0534 observed in February 2009. {\it Middle panel}: Average of the last three epochs (September and November 2009 and January 2010) obtained using equal weights. {\it Upper panel}: Final spectrum produced by averaging all the epochs with the same weight. Individual Gaussian profiles fitted to the spectra are overlaid in blue, while the resulting profile is displayed in green. The red cross marks the systemic velocity and the associated uncertainty (see Section~\ref{sect:red}). The blue and the black crosses indicate the peaks of the CO emission \citep{barvainis98} and the \ion{H}{i} absorption components \citep{moore99}, respectively, with their errors. Channel spacing is 2.4\,km\,s$^{-1}$. The r.m.s. noise level is 0.2\,mJy per channel in the spectra of the lower and middle panels and 0.1\,mJy per channel in the upper panel.}
\label{fig:fit4}
\end{figure}
\subsection{Monitoring}
The results of our continuum and line monitoring are displayed in Figs.~\ref{fig:spectra} and~\ref{fig:monitoring}. Fig.~\ref{fig:spectra} shows the sequence of spectra observed towards MG~J0414+0534 from July 2007 to January 2010. In addition to the Arecibo spectra, we also show the combined Effelsberg and EVLA spectrum. This spectrum is the combination of 14 hours of observations with the Effelsberg radio telescope between July and September 2007, and 12 hours of observations with the EVLA in October 2007 (for details see \citealt{impellizzeri08}). Although the line profile does change slightly from epoch to epoch, the overall appearance of the main H$_2$O emission feature remains stable during the period of the Arecibo observations. A significant change in the line profile seems instead to have occurred between the Effelsberg and EVLA observations and the first epoch of the Arecibo monitoring campaign. The line appears to be much broader in the first Arecibo spectrum w.r.t. the previous observations. This is confirmed by a comparison between the Gaussian fit parameters of the lines. Fitting a single Gaussian profile to the combined Effelsberg and EVLA spectrum, we obtain a FWHM of 78$\pm$4\,km\,s$^{-1}$, which is about half the line width measured in the Arecibo spectrum of October 2008 (see Section~\ref{sect:satellite}). This line broadening is responsible for the larger intrinsic isotropic luminosity we measure (26,000\,L$_{\odot}$) w.r.t. that reported by \citet{impellizzeri08} (10,000\,L$_{\odot}$). The line velocity is also different. Correcting for the small shift in the reference frequency used in the Arecibo observations (6110.0\,MHz) w.r.t. that used at Effelsberg and the EVLA (6110.2\,MHz), the peak of the Gaussian in the combined spectrum is at $-$312$\pm$4\,km\,s$^{-1}$. Hence, the line is redshifted by 34$\pm$6\,km\,s$^{-1}$ in the Arecibo spectrum.
Differences in the baseline calibration of the datasets, though possibly accounting in part for the different line widths, are not sufficient to explain these discrepancies. The most plausible interpretation is that there was a real change in the line profile. Looking at the spectra in Fig.~\ref{fig:spectra} it seems that the most redshifted component seen in Fig.~\ref{fig:fit4} (component 4 in Table~\ref{table:gauss}) was not present at the time of the Effelsberg and EVLA observations.
In Fig.~\ref{fig:monitoring} (left panels), we plot the 6\,GHz continuum flux density of MG~J0414+0534 together with the peak flux density and the peak velocity of the line as a function of time. In the right panels, instead, the continuum flux density is displayed together with the integrated flux density and the Gaussian peak velocity of the line as a function of time. The integrated flux density and the Gaussian peak velocities have been derived by fitting a single Gaussian profile to the broad maser feature. Absolute deviations of the continuum flux from the mean are, on average, comparable with the flux calibration uncertainty (7\%; see Section~\ref{sect:obs}). The 6\,GHz continuum flux density of MG~J0414+0534 thus remained nearly constant for the duration of the whole monitoring period, with an average flux density of 0.71$\pm$0.02\,Jy. The line peak flux density is also surprisingly stable throughout the period of the observations, with small fluctuations not exceeding the limits of uncertainty (between 10\% and 50\%). The integrated flux density, instead, shows significant variations from epoch to epoch that, however, do not follow a definite trend. The behaviour of the integrated flux density reflects the variation of the width of the Gaussian profile, whose FWHM fluctuates between $\sim$100 and $\sim$240\,km\,s$^{-1}$ during the monitoring period. This variation is likely the result of flux variability among individual velocity components (see Section~\ref{sect:fit4}).
We fit a linear function to the line and Gaussian peak velocities. In both cases, the $\chi^2$ values calculated from the fits are quite high, indicating that, most likely, if there is a systematic acceleration, it is not constant. Nevertheless, assuming that a straight line is the correct model for the data, we can calculate the accelerations using a least absolute deviation method, which is less sensitive to outlying data w.r.t. $\chi^2$ minimization. The best fit lines and the mean absolute deviations are shown in Fig.~\ref{fig:monitoring} (lower panels). We find that the line peak velocity is constant within the limit of the uncertainty associated with the peak identification (i.\,e. the channel width, 2.4\,km\,s$^{-1}$). The line velocity derived from Gaussian fits, instead, is increasing by $\sim$12\,km\,s$^{-1}$\,yr$^{-1}$. However, since the Gaussian fit is sensitive to the whole profile, this trend may be due to fluctuations in the relative intensities of the individual velocity components rather than to a real acceleration of the masing clouds. Furthermore, drifting maser lines, like those observed in edge-on accretion disks, typically have line widths of 1-4\,km\,s$^{-1}$ (e.\,g. NGC~4258; \citealt{humphreys08}). Velocity drifts of broad (FWHM\,$\sim$100\,km\,s$^{-1}$) maser features have never been observed so far. Thus, we treat this result with caution and do not use it in our discussion.
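A least-absolute-deviation straight-line fit of the kind described here can be sketched as follows. The toy data are ours (invented noise level and outlier); the epoch spacing comes from Table~\ref{table:obs} and the $\sim$12\,km\,s$^{-1}$\,yr$^{-1}$ drift from the text, and `numpy`/`scipy` are assumed available:

```python
import numpy as np
from scipy.optimize import minimize

def lad_line(t, v):
    """Fit v = a + b*t by minimizing the sum of absolute deviations (L1)."""
    cost = lambda p: np.abs(v - (p[0] + p[1]*t)).sum()
    b0, a0 = np.polyfit(t, v, 1)          # ordinary least-squares start point
    return minimize(cost, [a0, b0], method="Nelder-Mead").x  # (a, b)

# Epochs of Table 1 in years, a 12 km/s/yr drift, and one outlying epoch
t = np.array([0, 38, 79, 123, 172, 214, 256, 298, 349, 394, 454]) / 365.25
rng = np.random.default_rng(0)
v = -310.0 + 12.0*t + rng.normal(0.0, 1.0, t.size)
v[6] += 25.0                              # single discrepant measurement

a, b = lad_line(t, v)                     # b: fitted drift [km/s/yr]
mad = np.abs(v - (a + b*t)).mean()        # mean absolute deviation
```

Unlike a $\chi^2$ fit, the L1 cost leaves the recovered slope essentially unaffected by the single outlying epoch, which is the property exploited above.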
\begin{figure*}
\centering
\includegraphics[scale=0.8]{fig3_castangia2011.ps}
\hspace{0.5cm}
\includegraphics[scale=0.8]{fig4_castangia2011.ps}
\caption{Water maser spectra observed towards MG~J0414+0534 between July 2007 and January 2010. The first spectrum (bottom left corner) is the combined Effelsberg and EVLA spectrum (channel spacing 38.4\,km\,s$^{-1}$) obtained observing 14+12\,hrs on-source between July and October 2007 (for details, see \citealt{impellizzeri08}). The other spectra have been taken with Arecibo. The spectra shown here have been smoothed down to a resolution of 19.2\,km\,s$^{-1}$. The blue and red vertical lines indicate the peak velocities of the main and satellite maser features, respectively, as measured in the October 2008 spectrum.}
\label{fig:spectra}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=90,width=9.1cm]{fig5_castangia2011.eps}
\includegraphics[angle=90,width=9.1cm]{fig6_castangia2011.eps}
\caption{{\it Left panels}: Continuum and line peak flux density vs. time (top) and peak velocity vs. time (bottom) for the spectra with a channel spacing of 2.4\,km\,s$^{-1}$. The peak flux density of the line has been multiplied by a factor of 100 to allow the comparison with the continuum flux density within the same plot. The error bars represent the r.m.s. noise levels per channel of the spectra and the width of the channels, respectively. {\it Right panels}: Continuum and integrated flux density vs. time (top) and peak velocity vs. time (bottom) derived from Gaussian fitting. The error bars indicate the formal errors obtained from the fits. Best fit lines and mean absolute deviations are shown in green. Day 0 is 2008 October 14-15.}
\label{fig:monitoring}
\end{figure*}
\subsection{Upper limits on the other molecular transitions}
On three occasions, August 2009, November 2009, and January 2010, we took advantage of the WAPP dual board mode to search for molecular emission lines from NH$_3$, OH, and CH$_3$OH (see Table~\ref{table:trans}) towards MG~J0414+0534. No emission line was detected, either in the individual spectra or in the average spectrum, above a 5\,$\sigma$ noise level of 1.5 and 2.0\,mJy per 2.4\,km\,s$^{-1}$ channel for bands 2 and 3, respectively. Using Eq.~\ref{eq:lum} and considering rectangular lines of width 2.4\,km\,s$^{-1}$, this yields upper limits on the isotropic luminosities of the ammonia inversion lines (1, 1), (2, 2), and (3, 3) of $\sim$330\,L$_{\odot}$ (for the first two transitions) and $\sim$440\,L$_{\odot}$. The luminosity of the two OH lines must be $<$\,440\,L$_{\odot}$. Unfortunately, the frequency band centered at 6.9\,GHz, where the NH$_3$ (6, 6) and the excited CH$_3$OH transition frequencies fall, is severely affected by RFI. Interference is present in about 60\% of the band, making any line identification impossible.
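The quoted limits follow from Eq.~\ref{eq:lum} applied to a rectangular 2.4\,km\,s$^{-1}$ line at the 5$\sigma$ noise levels above. The sketch below is our arithmetic (the same magnification $m=35$ is assumed as for the H$_2$O line; small offsets from the quoted round numbers are rounding):

```python
def line_luminosity(int_flux_jy_kms, nu_rest_ghz,
                    m=35.0, z=2.639, d_l_mpc=21790.0):
    """Unlensed isotropic line luminosity in L_sun, following Eq. (1)."""
    return (1.0/m) * (0.001/(1.0+z)) * nu_rest_ghz * int_flux_jy_kms * d_l_mpc**2

dv = 2.4                                         # rectangular line width [km/s]
L_nh3_11 = line_luminosity(1.5e-3*dv, 23.694)    # band 2, 5-sigma -> ~330 L_sun
L_oh = line_luminosity(2.0e-3*dv, 23.826)        # band 3, 5-sigma -> ~440 L_sun
```
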
\section{Discussion}\label{sect:disc}
We have monitored the radio continuum and maser emission in MG~J0414+0534 for $\sim$15 months at $\sim$6 week intervals and found that both are surprisingly stable. The continuum and the line peak flux density were found to be constant throughout the periods of observations. The integrated flux density instead displays significant changes from epoch to epoch that are likely the result of changes in the individual velocity components. From the analysis of the 11 epochs of the monitoring, we can place an upper limit on the velocity drift of the most prominent line peak (component II in Table~\ref{table:gauss}) of 2\,km\,s$^{-1}$\,yr$^{-1}$. We tentatively detected a weaker satellite line at +470\,km\,s$^{-1}$ in October 2008 that, however, was not confirmed by the spectra of the other epochs, nor by our most sensitive spectrum obtained by averaging all the epochs. In the next sections we examine the possible scenarios for the origin of the maser in the light of these results.
\subsection{The redshift of MG~J0414+0534}\label{sect:red}
For the discussion presented here it is of fundamental importance to assess the true redshift of MG~J0414+0534 and the accuracy of the corresponding systemic velocity. \citet{lawrence95} derived a redshift of 2.639$\pm$0.002 from the broad H$\alpha$ emission line identified in the infrared spectrum of the quasar. CO (3--2) emission was detected toward MG~J0414+0534 by \citet{barvainis98} and is centered at the H$_{\alpha}$ redshift, while \ion{H}{i} is seen in absorption, blueshifted by $\sim$200\,km\,s$^{-1}$ w.r.t. the H$_{\alpha}$ and CO emission lines \citep{moore99}. The \ion{H}{i} line consists of two absorption components, one at $z=2.6376\pm0.0002$ and one at $z=2.6353\pm0.0001$, with the most prominent and blueshifted one approximately coincident with the peak of the H$_2$O emission (Figs.~\ref{fig:oct08_spec} and \ref{fig:fit4}).
The discrepancy between the redshift of the H$_{\alpha}$ emission and of the \ion{H}{i} absorption centroid is not surprising. In fact, previous studies on various types of galaxies indicate that systemic velocities derived from optical emission lines can be biased by motions of the emitting gas and obscuration of the back side of the galaxy \citep{mirabel84,morganti01}. More remarkable is the difference between the redshift of the CO and the \ion{H}{i} lines, given that CO traces the large scale galaxy structure and should be free of outflow/infall problems. According to \citet{moore99}, in the case of MG~J0414+0534, this offset might be due to i) the \ion{H}{i} absorption occurring against an extended jet component and not towards the nucleus or ii) the \ion{H}{i} absorbing the active nucleus while the CO emission has a different spatial distribution and is affected differently by gravitational lensing. In the following, we assume that the optical/CO redshift, $z=2.639$, is the most reliable redshift for MG~J0414+0534. For the sake of completeness, we will also discuss the possibility that the redshift of MG~J0414+0534 is the one derived from \ion{H}{i} absorption. The uncertainty in the optical redshift determination corresponds to a large uncertainty in the definition of the systemic velocity ($\pm$165\,km\,s$^{-1}$). Accordingly, the main maser line is mostly blueshifted w.r.t. the systemic velocity of MG~J0414+0534, although part of the emission may possibly be considered systemic (see Fig.~\ref{fig:fit4} and Table~\ref{table:gauss}).
\subsection{Origin of the H$_2$O emission}
\subsubsection{Jet-maser scenario}
Our initial hypothesis, based on the absence of systemic and redshifted components in the Effelsberg and EVLA spectra and on the wide line profile, was that the emission is associated with the prominent relativistic jets of the quasar \citep{impellizzeri08}. Part of our results are indeed consistent with this interpretation. First of all, even when the maser line profile is resolved into multiple velocity components, individual emission features have line widths between 30 and 160\,km\,s$^{-1}$ that resemble those of known H$_2$O masers associated with radio jets (e.\,g. Mrk~348; \citealt{peck03}). Our non-detection of a radial acceleration of the main maser peak is also compatible with the jet-maser scenario.
Adopting the hypothesis that the maser in MG~J0414+0534 is associated with the jet(s), the H$_2$O emission may arise from the shocked region at the interface between the relativistic jet material and an encroaching molecular cloud, as is believed to be the case for the masers in Mrk~348 \citep{peck03} and part of the emission in NGC~1068 \citep{galli01}. Alternatively, it could also be the result of the amplification of the radio continuum of the jet by foreground molecular clouds along the line of sight (as in NGC~1052; \citealt{sawada08}). In this framework, the maser and continuum intensities in MG~J0414+0534 would then be expected to show a similar behaviour to that in the aforementioned cases. For Mrk~348, strong variations of both maser and continuum flux densities are reported, with a close temporal correlation between them \citep{peck03}. The peak flux density of the jet-maser component in NGC~1068 is also variable, although the variability is not outstanding \citep{galli01}. For the third jet-maser case, that of NGC~1052, the variability is mainly caused by changes in the line profile \citep{braatz03}. The extreme stability of the main line peak and continuum flux density in MG~J0414+0534 resulting from our study seems to exclude a jet-maser scenario similar to that in Mrk~348 and NGC~1068, while the reported significant variations in the line profile of our target may hint at similarities with the case of NGC~1052. We note, however, that the number of sources in which the maser emission is confidently associated with the jet(s) is very low and that more of these masers should be studied in detail in order to investigate the properties of this kind of source.
The possibility that the \ion{H}{i} absorption occurs against a jet component and not against the core (see Section~\ref{sect:red}) is interesting and might favour the jet-maser scenario. Indeed, the most blueshifted \ion{H}{i} component is displaced by only (29$\pm$8)\,km\,s$^{-1}$ from the peak of the H$_2$O emission suggesting that the same gas structure that is absorbing the continuum radiation from the jet may host the molecular clouds that produce the maser emission.
\subsubsection{Disk-maser scenario}
The presence of highly red- and blueshifted emission features, symmetrically bracketing the systemic velocity, and emission lines close to the systemic velocity (typically referred to as a `triple-peak profile') is a spectroscopic signature of masers that are associated with edge-on accretion disks (e.\,g. NGC~4258; \citealt{n4258miyo}). Within this framework, the tentative detection of the redshifted line at +470\,km\,s$^{-1}$ in October 2008 may be seen as an element in favour of the accretion disk scenario. According to the standard model, we expect the high-velocity emission to arise from near the mid-line of the disk, defined as the diameter of the disk perpendicular to the line of sight, while the maser emission at the systemic velocity should originate on the near side of the disk. Therefore, the predicted radial accelerations of the high-velocity features are much smaller than those of the lines near the systemic velocity. The velocity drifts measured for the high-velocity maser lines in NGC~4258, for example, are in the range -0.7 to +0.7\,km\,s$^{-1}$\,yr$^{-1}$ \citep{humphreys08}. Our upper limit on the radial acceleration of the blueshifted maser emission in MG~J0414+0534, 2\,km\,s$^{-1}$\,yr$^{-1}$, cannot rule out such accelerations.
If the main maser line and the satellite line at +470\,km\,s$^{-1}$ can be considered as the blueshifted and redshifted lines from the tangentially seen part of an edge-on accretion disk in Keplerian rotation, then the radius at which emission originates is given by $R= GM_{\rm BH}V_{\rm R}^{-2}$, where $G$ is the gravitational constant, $M_{\rm BH}$ is the black hole mass, and $V_{\rm R}$ is the rotational velocity at radius $R$. From the difference between the line of sight velocities of the main and satellite maser lines ($V_{\rm obs}$), we obtain $V_{\rm R} = V_{\rm obs} \cdot \sin (i)^{-1}$ $\sim 370 \cdot \sin (i)^{-1}$\,km\,s$^{-1}$. Adopting the black hole mass of $M_{\rm BH}=10^{9.0}$\,M$_{\odot}$ calculated by \citet{pooley07} for MG~J0414+0534, and assuming an edge-on orientation ($i$ = 90{\degr}) for the accretion disk\footnote{Since accretion disks that provide enough amplification paths for maser action have inclinations that differ less than 10{\degr} from an edge-on orientation (see e.\,g., \citealt{kuo2011}), the values for the rotation velocity, and hence, the radius and gas density of the disk, should not be very different from the one calculated assuming $i$ = 90{\degr}.}, we get a radius of $R \sim$ 30\,pc. This value is fairly large compared to the radii at which maser emission is found in the accretion disks of nearby radio quiet AGN (typically, 0.1 to 1 pc). We should keep in mind however, that MG~J0414+0534 is a radio loud quasar, while known disk-maser hosts are mainly radio quiet Seyfert or LINER galaxies with a mass of the nuclear engine that is two orders of magnitude lower ($\sim 10^7$\,M$_{\odot}$; \citealt{kuo2011}).
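The radius quoted above follows directly from the Keplerian relation $R = GM_{\rm BH}V_{\rm R}^{-2}$. As a sketch, the arithmetic can be reproduced with a few lines of Python (constants and the edge-on inclination follow the assumptions stated in the text; the function name is ours):

```python
import math

G = 6.674e-11        # gravitational constant [m^3 kg^-1 s^-2]
M_SUN = 1.989e30     # solar mass [kg]
PC = 3.086e16        # parsec [m]

def keplerian_radius_pc(m_bh_msun, v_rot_kms):
    """Radius R = G * M_BH / V_R^2 for circular Keplerian rotation, in pc."""
    v = v_rot_kms * 1e3                       # km/s -> m/s
    return G * (m_bh_msun * M_SUN) / v**2 / PC

# M_BH = 10^9 M_sun (Pooley et al. 2007); V_R ~ 370 km/s for i = 90 deg
r_pc = keplerian_radius_pc(1e9, 370.0)
print(f"R ~ {r_pc:.0f} pc")   # ~30 pc, consistent with the value in the text
```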
In order to understand if the physical conditions of the gas at 30\,pc from a $10^9$\,M$_{\odot}$ black hole are suitable to provide water maser emission, we calculate the density required for a spherical clump at a radius $R$ from the central engine, rotating at a velocity $V_{\rm R}$, to be stable against tidal disruption (see \citealt{tarchi07} and references therein). For MG~J0414+0534 we obtain that the density of H$_2$ molecules at a radius of 30\,pc from the nuclear engine would need to be $\ga 2 \times 10^5$cm$^{-3}$. Such a density is far from the density at which the H$_2$O level populations thermalize (e.\,g. \citealt{kylafis91}) and, hence, the conditions of the gas do not preclude the production of water maser emission. Therefore, the identification of the tentative line at about +470\,km\,s$^{-1}$ as the redshifted feature of the characteristic disk-maser spectrum is physically plausible, thus making the disk-maser picture a viable option.
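The order of magnitude of this stability density can be checked with a rough estimate. The sketch below assumes the simple criterion $\rho \ga M_{\rm BH}/R^{3}$; the exact prefactor depends on the disruption condition adopted in \citet{tarchi07}, so only the order of magnitude is meaningful here (function name and constants are ours):

```python
M_SUN = 1.989e30   # solar mass [kg]
PC = 3.086e16      # parsec [m]
M_H2 = 3.35e-27    # mass of one H2 molecule [kg]

def stability_density_cm3(m_bh_msun, r_pc):
    """Rough tidal-stability number density, n(H2) ~ (M_BH / R^3) / m_H2,
    in cm^-3. Order-of-magnitude estimate only (prefactor omitted)."""
    rho = (m_bh_msun * M_SUN) / (r_pc * PC)**3   # mean density [kg m^-3]
    return rho / M_H2 / 1e6                      # m^-3 -> cm^-3

n30 = stability_density_cm3(1e9, 30.0)  # text quotes >~ 2e5 cm^-3 at 30 pc
n7 = stability_density_cm3(1e9, 7.0)    # text quotes >~ 1e7 cm^-3 at 7 pc
```

Both cases agree with the quoted thresholds to within a factor of a few, as expected for an order-of-magnitude criterion.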
If we assume that the atomic gas is absorbing the radio continuum emission from the core (see Section~\ref{sect:red}) and that MG~J0414+0534 is at the redshift of the \ion{H}{i} absorption centroid (2.6365; \citealt{moore99}), the value of the maser disk radius calculated above changes substantially. In fact, in this case the main line lies at the systemic velocity of the quasar and the inferred rotational velocity and radius are $\sim$750\,km\,s$^{-1}$ and $\sim$7\,pc, respectively. For such a disk to be stable the density of H$_2$ molecules at a radius of 7\,pc from the nuclear engine would need to be $\ga 10^7$cm$^{-3}$, a density that is still compatible with the production of H$_2$O maser emission in the 6$_{16}$--5$_{23}$ transition (e.\,g. \citealt{kylafis91}). In the hypothesis that the maser emission originates on the near side of the disk, the velocity drift is given by $V^2_{\rm R} R^{-1}$. Assuming that the radius at which the systemic and high-velocity lines arise is the same, we obtain a velocity drift of $\sim$0.8\,km\,s$^{-1}$\,yr$^{-1}$. A longer monitoring period (at least 4 or 5 years) and/or a higher spectral resolution would be necessary to detect such a small velocity drift and test this hypothesis.
Therefore, although the type~1 optical spectrum and the relatively low column density derived from X-ray observations (N$_{\rm H}$\,$\sim 5 \times 10^{22}$\,cm$^{-2}$; \citealt{chartas02}) indicate that the disk might not be favourably oriented to produce detectable water maser emission, we cannot exclude this possibility on the basis of our single-dish data alone.
\section{Conclusions}\label{sect:con}
The redshifted 6\,GHz radio continuum and H$_2$O maser emission in the type~1 quasar MG~J0414+0534 at $z=2.639$ have been monitored with the 300-m Arecibo telescope for $\sim$15 months, in order to help shed light on the origin of the most distant water maser found to date.
We have confirmed the H$_2$O detection reported by \citet{impellizzeri08} at high signal-to-noise levels and have found that the line profile can be resolved into a complex of features with line widths between 30 and 160\,km\,s$^{-1}$. A redshifted line was tentatively detected in October 2008 at a velocity of +470\,km\,s$^{-1}$. The total intrinsic (i.\,e. unlensed) H$_2$O isotropic luminosity is $\sim$30,000\,L$_{\odot}$ making the maser in MG~J0414+0534 the most luminous ever discovered. The overall appearance of the main maser feature, as well as the flux density of the most prominent peak, are surprisingly stable throughout the period of the observations, although the integrated flux density shows significant variations on monthly time scales, possibly hinting at changes in the individual velocity components. The continuum flux density is also quite stable from epoch to epoch. The velocity of the strongest line peak is constant within the uncertainty, thus providing an upper limit on the velocity drift of 2\,km\,s$^{-1}$\,yr$^{-1}$.
The large line widths of the individual velocity components of the H$_2$O maser feature and the lack of an evident triple-peak profile favour an association of the maser with the relativistic jet(s) of the quasar. The type~1 nature of the AGN in MG~J0414+0534 further reinforces this interpretation. However, the remarkable stability of the continuum and the line emission is partly in contrast with this picture. Furthermore, the tentative detection of the redshifted feature in the October 2008 spectrum is compatible with the disk-maser hypothesis.
While providing useful clues to determine the nature of the maser in MG~J0414+0534, our single-dish data alone are presently insufficient to confidently exclude either one of the two scenarios, jet vs. accretion disk. VLBI observations and longer time-scale single-dish monitoring will be essential to unveil the origin of the H$_2$O maser in this intriguing object.
\begin{acknowledgements}
P.C. and V.I. wish to thank the operators at the 300--m telescope for their hospitality during the observing runs. We are indebted to C. Salter and T. Ghosh for their invaluable assistance during observing preparation and data reduction and to H. Hernandez for the careful scheduling of this long monitoring. We are also grateful to the anonymous referee for his/her useful suggestions. P.C. wishes to thank A. Tarchi for critically reading the manuscript. O.W. is funded by the Emmy-Noether-Programme of the `Deutsche Forschungsgemeinschaft', reference Wu 588/1-1.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}\label{sec:intro}
The morphology of star forming galaxies at $1\leq z \leq3$ is often dominated by giant ($\sim$1 kpc) and massive ($\gtrsim$10$^{8-9}\ \ensuremath{{\rm M}_\odot}$) clumps which contribute a significant fraction ($\sim$20-30\%) of the total system's star formation. These clumpy structures are consistently observed via tracers of young stars, e.g. in the rest-UV and -optical (\citealt{cowie1995,elmegreen2004,elmegreen2005,elmegreen2007,guo2012}) and in ionized gas observations \citep{genzel2008}. This irregular morphology was initially attributed to galaxy merger activity and some work examining $z>1$ samples has indicated gravitational interactions can drive clumpiness in galaxies (e.g. \citealt{puech2010,calabro2019}). However, resolved kinematic studies (e.g. \citealt{genzel2006,fs2006,fs2008,fs2009,shapiro2008,epinat2012}) have shown that a large fraction of systems ($>60\%$) do not appear to be mergers. Instead, such systems are often identified as orderly rotators, with clumps thought to form in situ via Toomre instabilities in a gas-rich ($f_{gas}\sim$50-60\%) disk \citep{tacconi2020}.
There is a long standing concept, motivated largely by simulations \citep{noguchi1999,elmegreen2008,immeli2004a,immeli2004b,dekel2009} that these observed clumps could contribute significantly to the formation of bulges in disk galaxies. The processes which allow for this migration (e.g. decreasing angular momentum due to dynamical friction) require clump in-spiral timescales of order a few 10$^{8}$ yr. However, the degree to which clumps can contribute to bulge growth via this mechanism depends on their viability, e.g. whether or not they are disrupted by their own stellar feedback within a few orbital times. Observations of normal $1<z<3$ star forming galaxies find a range of clump lifetimes (100 - 650 Myr; \citealt{zanella2019,fs2020}), suggesting that some clumps are sufficiently long-lived to migrate. This is typically the case for higher mass clumps with $M_{\star} > 10^{9}\ensuremath{{\rm M}_\odot}$.
If clumps survive long enough to in-spiral, one should expect a radial gradient within the galaxy, in particular with age. Evidence for a radial gradient of clump properties is seen in observations (such as in SINS and CANDELS galaxies; e.g. \citealt{fs2011,fs2020,guo2012,ginzburg2021}) and these observations appear consistent with simulations \citep{bournaud2014,mandelker2017,dekel2021} where clumps closer to the galaxy nucleus appear redder, older, denser, and more massive. Other studies at high-$z$ (e.g. \citealt{guo2018,fs2011,zanella2019}) report little or only slight evidence of a trend between age and galactocentric radius, although small number statistics may influence some of these findings. However, there may be some ambiguity in using radial trends as evidence for long-lived clumps as radial trends are also predicted as a result of inside-out disk formation \citep{murray2010,genel2012}.
More recent theoretical work suggests that though clumps may play a role in bulge formation, significant contributions from large gaseous inflows within these disks is likely very important for bulge formation as well \citep{db2014,zolotov2015}. Moreover, galaxies at $z=1-3$ likely experience multiple periods of instability, and one set of clumps need not explain the full mass of bulges at $z=0$ \citep{tacchella2016}.
As the star formation rate densities of massive clumps are observed to be very high, stellar feedback effects are predicted to play a crucial role in determining the fate of these clumps and how they contribute to their host galaxy (i.e. in building bulges). Consequently, the stellar masses of clumps are both a strong indicator of their viability and an important clue into the dominant forms of feedback at play. Observations of clumps in $1<z<3$ galaxies suggest they are characteristically massive, upwards of $10^{8-9} \ensuremath{{\rm M}_\odot}$ \citep{elmegreen2007,guo2012,soto2017}. The true sizes and masses of these clumps remain uncertain, however, due to the inherent resolution limits imposed on observations at high-$z$. Work probing different spatial scales (for example $\sim 1$ kpc as in \citealt{fs2011} to a few hundred pc in \citealt{livermore2012,livermore2015}) consistently finds that derived clump properties are resolution-dependent and vary with spatial scale. This is supported by investigations of the resolution dependence of clump mass using cosmological simulations (e.g. \citealt{huertas2020,meng2020}). \citet{dz2017} examined clumps in both lensed and unlensed star-forming galaxies at $1.1 < z < 3.6$ and find masses ranging $10^{7} - 10^{9}\ensuremath{{\rm M}_\odot}$. They also show that the limited resolution available in observations of non-lensed systems works alongside clump clustering and projection effects to artificially enhance derived mass estimates. This is also supported by some studies at lower redshift: \citet{larson2020} study clumpy star formation in $z<0.1$ luminous infrared galaxies (LIRGs) and find clumps are overall much less massive ($\sim 10^{5}\ensuremath{{\rm M}_\odot}$).
Results from numerical work indicate that the details of feedback models have a significant impact on the resulting stellar masses of clumps \citep{tamburello2015,mayer2016,mandelker2017,dekel2021}. Studies where feedback effects are primarily modeled via radiation pressure and supernovae (e.g. \citealt{bournaud2007,bournaud2014}) routinely find long-lived clumps which migrate inward and contribute to bulge growth. \cite{mandelker2017} find that including radiation pressure increases the surface density and mass thresholds for clumps remaining undisrupted for a few free-fall times. Alternatively, \cite{hopkins2012} uses a feedback recipe that includes radiation pressure and high photon trapping fractions. These combined effects produce star forming clumps which are short-lived ($10-100$ Myr) and rapidly disrupted, transforming $\sim$10\% of their gas into stars \citep{hopkins2012}. Indeed, Hopkins et al. predict that the distribution of older stellar populations within clumpy star forming galaxies should be smooth in comparison to that of the gas and young stars. Using this feedback prescription \cite{oklopcic2017} perform a case study of giant clumps in a massive, $z=2$ disk galaxy and, similarly, find no evidence in their simulations for net inward migration of clumps and predict by $z=1$ a smooth, stellar morphology. Using the NIHAO simulation, \cite{buck2017} find a similar lack of long-lived clumps and suggest that clumps in high-z galaxies are merely the consequence of variable extinction. The similarity between these simulations is that both incorporate early radiative feedback with high photon trapping coefficients in the dust \citep{hopkins2012,stinson2013}.
Because this gas-rich, turbulent mode of star formation occurs primarily in distant galaxies the effects of redshift must be considered. Distance affects observations in three ways: (1) generating native limits to resolution with current instrumentation, (2) creating practical limits to sensitivity, and (3) shifting spectral features to longer wavelengths, making them impractical to observe with current and near-future instruments. The first two effects, resolution and sensitivity, are mitigated by lensed galaxies \citep{jones2010,livermore2012,livermore2015,cava2018,dz2017}. However, in the vast majority of lensed systems the magnification occurs in only one direction, which complicates structural analysis. Furthermore, observations of lensed galaxies can still be impacted by redshift-related effects. For example, resolved measurements of galaxies at $z>2$ cannot be undertaken at $\lambda > 500$nm using current facilities, which significantly challenges efforts to measure the stellar masses of clumps, even in lensed systems.
The optimal wavelength for constraining the masses of clumps is the near infrared (IR). Mass-to-light ratios estimated from near-IR observations are less sensitive to degeneracies in extinction, age and metallicity than are rest-frame visible wavelength observations \citep{bdj2001}. High-resolution imaging of very clumpy star-forming galaxies at low redshifts would provide a robust (and highly complementary) viewpoint on the physics of the high-redshift population. \cite{green2014} presented the DYnamics of Newly-Assembled Massive Objects (DYNAMO) Survey, which is comprised of 95 nearby ($z\sim 0.06-0.08$ \& $0.12-0.16$) galaxies which closely resemble high-$z$ clumpy systems in terms of their kinematic and star formation properties. The DYNAMO sample has been the subject of a number of follow-on investigations (for example, \citealt{green2014,bassett2014,bassett2017,fisher2014,fisher2017a,fisher2017b,white2017,oliva2018,fisher2019}) which test the similarity of these local galaxies to their potential high-redshift counterparts. The most notable exploration of this theme is presented in \cite{fisher2017a}, who used high-resolution H$\alpha$ maps (from \textit{Hubble Space Telescope}; hereafter, \textit{HST}) to confirm that DYNAMO galaxies exhibit the same clumpy morphology observed at high-redshift. As is the case with massive galaxies at high-redshift, a large fraction of DYNAMO systems appear kinematically to be turbulent disks ($\sigma_{gas} \sim$20-100 km/s; \citealt{green2014,bassett2014,bekiaris2016}) with high molecular gas fractions ($f_{gas} \sim$0.2 - 0.6; \citealt{fisher2014,fisher2019,white2017}).
In this paper, we build on the work of \cite{fisher2017b} by investigating the existence and properties of stellar clumps in G04-1, a galaxy from DYNAMO, using adaptive optics (AO)-enabled NIRC2 $K_{P}$-band observations.\footnote{In this manuscript, the term ``$K$-band'' refers to observations taken with the NIRC2 $K_{P}$ filter ($\rm \Delta\lambda = 1.948 - 2.299 \mu m$).} Due to its wealth of previous observations and its clear classification as a highly star forming, clumpy, turbulent, gas-rich disk, G04-1 makes an ideal candidate for probing the nature of stellar clumps.
This paper is structured as follows: in \S\ref{sec:2}, we provide an overview of these targets and describe the new NIRC2 and \textit{HST} observations. In \S\ref{sec:3}, we describe the methods we have used for identifying stellar clumps and calculating clump properties (such as mass and color) in our imaging data. In \S\ref{sec:4}, we present our results and discuss them in context. \S\ref{sec:5} summarizes our findings.
Throughout this paper, we assume a cosmology where $H_{0}$ = 67 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{M} = 0.31$, and $\Omega_{\Lambda} = 0.69$.
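Under this cosmology, the angular scale at G04-1's redshift ($z = 0.1298$; \S2.1) can be computed with a short flat-$\Lambda$CDM sketch (pure Python; the function name and integration step are ours). The resulting $\sim$2.4 kpc/arcsec is what makes a 10-pixel (0.4") box correspond to roughly 1 kpc at the NIRC2 plate scale:

```python
import math

C_KMS = 299792.458   # speed of light [km/s]
H0 = 67.0            # Hubble constant [km/s/Mpc]
OM, OL = 0.31, 0.69  # flat LCDM parameters, as adopted in the text

def comoving_distance_mpc(z, n=1000):
    """D_C = (c/H0) * int_0^z dz'/E(z'), via Simpson's rule (n even)."""
    E = lambda zz: math.sqrt(OM * (1 + zz)**3 + OL)
    h = z / n
    s = 1.0 / E(0.0) + 1.0 / E(z)
    for i in range(1, n):
        s += (4 if i % 2 else 2) / E(i * h)
    return (C_KMS / H0) * s * h / 3

z = 0.1298
d_a = comoving_distance_mpc(z) / (1 + z)            # angular-diameter distance [Mpc]
kpc_per_arcsec = d_a * 1e3 * math.pi / (180 * 3600)  # Mpc -> kpc, arcsec -> rad
print(f"{kpc_per_arcsec:.2f} kpc/arcsec")            # ~2.4
```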
\section{Observations}\label{sec:2}
\subsection{Known properties of G04-1}
Galaxy G04-1 is a member of the greater DYNAMO sample \citep[originally presented in][hereafter referred to as DYNAMO-I]{green2014}. DYNAMO is an H$\alpha$ IFU survey of local ($z\sim 0.07$ and $z\sim 0.12$) galaxies which have been selected from the Sloan Digital Sky Survey (SDSS, \citealt{york2000,blanton2017}) to be H$\alpha$-luminous (in the top 1\% of H$\alpha$ emitters in the local universe, based on fiber luminosity; $\overline{SFR}\sim$ 11 $\ensuremath{{\rm M}_\odot}$ yr$^{-1}$). Located at $z\sim 0.1298$, G04-1 has an integrated stellar mass of 6.47$\times 10^{10}$ $\ensuremath{{\rm M}_\odot}$ and H$\alpha$-derived SFR of about 15 $\ensuremath{{\rm M}_\odot}$yr$^{-1}$ (DYNAMO-I). As is the case for G04-1, a large fraction ($\sim$84\%) of DYNAMO galaxies appear disk-like and about half are located on the Tully-Fisher relation \citep{green2014}. \cite{fisher2017a} produce a surface brightness profile from a high-resolution \textit{HST} continuum map for G04-1 and find that the system is well-fit by an exponential disk + bulge model. Based on the data presented in DYNAMO-I and on follow-up kinematic fitting by \cite{bekiaris2016}, G04-1 is classified as a turbulent rotating disk ($V_{\rm circ}=264 \pm 6 \ensuremath{\,{\rm km\,s}^{-1}}$ and $\sigma_{gas} = 34 \ensuremath{\,{\rm km\,s}^{-1}}$). Recently, \cite{oliva2018} have published high-resolution (100 - 400 pc scale) AO-assisted kinematic maps of DYNAMO galaxies imaged in P$\alpha$ with Keck-OSIRIS. For G04-1, these authors find an integrated velocity dispersion of $\sigma \sim 51 \ensuremath{\,{\rm km\,s}^{-1}}$. In addition, these authors observe multiple high-dispersion ($\rm \sigma_{max}\sim 71 \ensuremath{\,{\rm km\,s}^{-1}}$) peaks in the P$\alpha$ map. 
G04-1 is also observed to be gas-rich; CO line fluxes reported by \cite{fisher2014} (using the Plateau de Bure Interferometer observations of the CO[1-0] transition) estimate a baryonic gas mass fraction of $\sim$31\% and a depletion time of about 1.9 Gyr \citep{white2017}. Like many of the systems commonly observed at high-$z$, G04-1 is morphologically clumpy. Using 100pc resolution \textit{HST} $H\alpha$ maps, \cite{fisher2017a} identify 13 massive star forming clump regions within the galaxy.
\subsection{NIRC2 K-band imaging}
G04-1 was observed with the NIRC2 imaging camera on Keck II using \textsc{widecam4} ($\sim$0.04"/pix scale) with laser guide star adaptive optics correction and with the $K_{P}$ filter ($\rm \lambda_{c} = 2.124\ \mu m$). Observations took place on 21 October 2016 as part of the 2016B observing cycle (program W140N2L) for 1.75 hours.
Observations were reduced using fairly standard methods. First, all raw frames were corrected for known bad pixels using the Keck-NIRC2 bad pixel map. Dark current was removed from science and flat frames by subtracting off a scaled master dark frame. Both lamp-on and lamp-off flat frames (8 each) were recorded on the night of observation, and these frames were median-combined separately (with sigma-clipping) and then subtracted (on-off) to construct a flat field frame. This was divided through all science frames to flat-correct the data.
Keck/NIRC2 images show interference fringes, which are easily detected in flat-corrected frames. We modeled this pattern by first selecting all co-temporal science frames within which all sources were masked. In most cases this corresponded with the three frames associated with a single dither pattern. These masked frames were median-combined and scaled (to an appropriate background level) to produce frames which were subtracted off the individual images to perform removal of sky and fringes in a single step. The reduced science frames were individually corrected for distortion effects using IRAF's \textsc{drizzle} task in combination with X \& Y \textsc{wide} camera distortion solutions provided by H. Fu.\footnote{https://www2.keck.hawaii.edu/inst/nirc2/dewarp.html} Image registration and sub-pixel offsets between the dithered frames were calculated with IRAF's \textsc{xregister} task. Finally, these offset values were used with \textsc{drizzle} once again to produce a final co-added image.
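The per-pixel, sigma-clipped median combination used above for the flats (and for the sky+fringe frames) can be sketched as follows. This is a minimal pure-Python illustration with hypothetical function names; the actual reduction used IRAF tasks, and frame I/O, dark subtraction, and drizzling are omitted:

```python
import statistics

def clipped_median(values, nsig=3.0, max_iter=3):
    """Median of `values` after iterative nsig sigma-clipping."""
    vals = list(values)
    for _ in range(max_iter):
        med = statistics.median(vals)
        sig = statistics.pstdev(vals)
        kept = [v for v in vals if abs(v - med) <= nsig * sig] if sig > 0 else vals
        if len(kept) in (0, len(vals)):
            break
        vals = kept
    return statistics.median(vals)

def median_combine(frames, nsig=3.0):
    """Combine a stack of 2-D frames (lists of rows) pixel by pixel."""
    ny, nx = len(frames[0]), len(frames[0][0])
    return [[clipped_median([f[j][i] for f in frames], nsig)
             for i in range(nx)] for j in range(ny)]

def make_flat(on_frames, off_frames):
    """Flat field: (combined lamp-on) - (combined lamp-off), normalized
    to unit median; each science frame is then divided by this flat."""
    on, off = median_combine(on_frames), median_combine(off_frames)
    diff = [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(on, off)]
    norm = statistics.median([v for row in diff for v in row])
    return [[v / norm for v in row] for row in diff]
```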
Our G04-1 frames contained a ghost image of the Keck primary mirror positioned in the lower-right quadrant of the array. In some cases ($<30\%$ of frames) this image (which appeared static over the course of the night's observations) overlapped partially with the dithered position of the galaxy. Where possible (i.e. where the source is in a different quadrant) the region with this feature was left unmasked so as to remove it during the sky subtraction process described above.
\begin{figure*}[htb!]
\begin{center}
\includegraphics[scale=0.59]{thatfigure_with_propscalebars.png}
\end{center}
\caption{Multi-band Imaging of Stellar Clumps in G04-1. In the center panel, we show the reduced, co-added (n=21) image of G04-1 from Keck NIRC2 observations in K$_{\rm P}$-band. White boxes within the center panel denote individual clump regions with labels corresponding to the Clump ID numbers provided in Tables \ref{tab:phot} \& \ref{tab:prop}. Surrounding this center panel, we include enlarged maps of each clump region in K-band (with the disk light removed) and in our \textit{HST} $H\alpha$ (from \citealt{fisher2017a}), F336W, and F467M data sets. Due to the complex structure near the center of G04-1, clumps 13, 14, \& 15 are shown in a single region labeled the ``nuclear ring''. The on-sky region depicted by white boxes in the center panel matches that of each of the corresponding clump's enlarged panels. For reference: cutouts for clumps 1, 2, 3, 5, 6, 7, 8, 9, \& 12 represent on-sky sizes of $\sim 1$ kpc across. For clump 4, this cutout represents a size of $\sim 1.4$ kpc and for clumps 10 \& 11, about $\sim 1.5$ kpc. Finally, the width of the white box enclosing the nuclear ring (R) region is $3.7$ kpc in size.}
\label{fig:clump_apertures}
\end{figure*}
\subsection{HST F336W and F467M Observations}\label{sec:HST}
We obtained observations (Proposal ID \#15069) of G04-1 using the Wide Field Camera on the Advanced Camera for Surveys (WFC/ACS) on the \textit{Hubble Space Telescope}. Observations were performed using the F336W ($U$) wide and F467M (Stromgen $b$) medium filters on 2018-02-26 for a total integration time of 0.5 and 0.2 hrs, respectively. For further description of the broad-band \textit{HST} images we refer the reader to \cite{lenkic2021}. Charge transfer effects in WFC images were mitigated both by placing the target close to the readout edge of the image, and also by using a post-flash to raise the image background to 12 e-/pix. All images were reduced using the standard \textit{HST} pipeline and combined using \textsc{drizzle}.
\section{Properties of Stellar Clumps in G04-1}\label{sec:3}
\subsection{Identification of Stellar Clumps}
Clumps were identified in our galaxy maps as follows. First, we constructed a model of our AO point-spread function (PSF) by combining imaging of all point sources from a single observing night (described in more detail in \S3.3). We then convolved the co-added K-band image with a Gaussian kernel with a width of 10$\times$ the full-width-at-half-maximum (FWHM) of the PSF (determined by a double Moffat function fit; details on this derivation are provided in \S 3.3). This corresponds to a kernel size of $\sim$1.2 arcseconds. We then divided the un-convolved image by this convolved image to produce a ratio map from which we identified stellar clumps.
Next, all ratio map pixels above a given threshold (defined as 4$\times$ the standard deviation of the intraclump regions) were identified as peak-value pixels. We then imposed the constraint that all peak-value pixels represent local peaks in the flux map. Here, we defined the local regions surrounding the peak values as boxes 10$\times$10 pixels ($\sim$1 kpc $\times$ 1 kpc at $z\sim$0.1; motivated by the typical clump sizes observed in G04-1 by \citealt{fisher2017b}) in size. If higher-value pixels were found within this local region, the location of the flux peak was shifted accordingly. It is emphasized that this technique assumes a minimum separation between candidate clumps (the box size). Once local peaks were identified, we determined the centers of the clump candidates by fitting a 2D Gaussian plus a constant to a cutout region of the map surrounding each local peak pixel. We then imposed a final constraint that all clump candidates be at least as large as the FWHM of our combined AO PSF (about 3$\times$3 pixels in area). Within G04-1, this process identified 15 stellar clumps. These clumps are shown in the center panel of Fig. \ref{fig:clump_apertures}, where individual clump regions are indicated by white boxes. In this figure, we also include maps of each of the identified clumps in the \textit{HST} H$\alpha$ (from \citealt{fisher2017a}), F336W, and F467M data sets.
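The smoothing, ratio-map, and local-maximum steps described above can be sketched in a few lines of Python. This is a simplified illustration rather than the exact pipeline: the function name `find_clumps` and its defaults are our own, and the global standard deviation of the ratio map stands in for the intraclump standard deviation used in the text; the centroiding and PSF-size screening steps are omitted.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_clumps(image, psf_fwhm_pix=3.0, kernel_factor=10.0,
                nsigma=4.0, box=10):
    """Sketch of the clump-finding procedure.

    Smooth the image with a Gaussian kernel ~10x the PSF FWHM, divide
    the un-smoothed image by the smoothed one to form a ratio map, and
    keep pixels that exceed the ratio threshold *and* are local flux
    maxima within a `box`-sized neighbourhood.
    """
    sigma_pix = kernel_factor * psf_fwhm_pix / 2.355  # FWHM -> Gaussian sigma
    smooth = gaussian_filter(image, sigma_pix)
    ratio = image / smooth
    # Simplification: global std of the ratio map as the noise measure.
    threshold = 1.0 + nsigma * np.std(ratio)
    is_local_max = image == maximum_filter(image, size=box)
    candidates = np.argwhere((ratio > threshold) & is_local_max)
    return [tuple(rc) for rc in candidates]
```

In the full procedure, the candidates returned here would additionally be centroided with a 2D Gaussian-plus-constant fit and screened against the AO PSF FWHM.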
The number of detected clumps is inherently connected to the detection threshold. Decreasing the detection threshold to 3$\sigma$ would increase the number of potential clumps in G04-1 from 15 to $\sim$30. Adopting a stricter definition with a 5$\sigma$ threshold decreases the number of clumps to $\sim$10; at this threshold, however, our algorithm fails to identify fainter and smaller clumps (such as IDs 1 and 9) which are otherwise easily identified by eye. We acknowledge that this illustrates a systematic uncertainty associated with the choice of threshold.
We recognize that the clumps we report here in G04-1 do not likely represent a full statistical sample of all star clusters in the galaxy. G04-1 is a galaxy with diverse forms of structure, including multiple clumps (as identified in this paper), spiral arms, and a bright nuclear ring. Our aim in this paper is simply to determine whether a rotating disk galaxy with large, observed clumps of star formation likewise has larger structures (i.e. ``clumps'') in maps of starlight and, if so, where these structures are located and what their corresponding masses are. We acknowledge that constructing a catalog of all observable structures within the galaxy, spanning a wide range of spatial scales, is beyond the scope of this work.
In summary, in the present paper we define stellar ``clumps'' to be regions within the galaxy disk which (1) represent local flux peaks, (2) are entirely comprised of pixels above a given threshold level (4$\times$ the standard deviation of the intraclump regions), and (3) are at least 3$\times$3 pixels in size (the FWHM of the AO PSF). A list of the photometric properties of all regions meeting these criteria for G04-1 is given in Table \ref{tab:phot}.
\begin{figure*}[htb!]
\begin{center}
\includegraphics[scale=0.66]{psf_profile.pdf}
\end{center}
\caption{Exploring AO PSF effects on clump photometry. In panel (a), the disk-subtracted 1D brightness profile of the combined AO point-spread function (black diamonds; see \S 3.3) is shown in comparison with the brightness profile of a typical clump (in green; Clump ID = 2) and the double Moffat best fit to the data (in blue). Panels (b) \& (c) illustrate the effect of convolving the observed point-spread function with a simulated stellar clump. Dashed white circles show the regions defining the photometric aperture and background annulus. These regions have been used to determine the fraction of clump light lost to the broad wing component of the AO PSF when performing photometry ($\sim$30\%).}
\label{fig:AOPSF}
\end{figure*}
\subsection{Determination of Clump Fluxes}
We calculated the flux of clumps in two ways. First, we simply integrated the flux of all pixels within a defined aperture centered on each clump (apertures were placed at the locations determined in the previous section). In most cases (11 of 15), the clumps identified in our K-band map could be directly associated with clumps observed in H$\alpha$ by \cite{fisher2017a}. In these cases, we utilized the apertures described in that work in our flux calculations. In the remaining four cases (Clump IDs: 4, 7, 11, \& 15) we identified NIR clumps with no obvious H$\alpha$ counterpart. For these clumps, appropriate apertures were estimated using the observed sizes in the ratio map.
We note that while Clump ID = 7 is well-detected in K-band imaging, it is unique in that it has no observable counterpart (i.e. a feature with similar morphology and position) in any of the \textit{HST} data sets presented in this paper. Suspecting that this object is not actually a clump in G04-1 but rather the serendipitous detection of a background infrared source, we performed a sky coordinate search using the online search tool \textsc{Vizier} \citep{vizier}, but were unable to find a known object with which we could directly associate this emission. Unable to rule out that this feature is indeed a stellar clump (without significant $\rm H\alpha$ emission) in G04-1, we include it in the analysis and discussion sections which follow.
Next, a local disk background subtraction was performed to remove light contributed by the diffuse disk component. The local disk background value was determined from a region surrounding each clump (an annular aperture of area equal to that of the clump aperture). Nearby clump pixels that fell within this annular region were omitted from the background estimate. This local background component (defined as the mean background value multiplied by the area of the clump aperture) was then subtracted from the flux of the clump, $F_{K_{P}}$, to obtain a disk-subtracted flux estimate ($F_{K_{P},diskcorr}$). These values are listed in Table \ref{tab:phot}. While disk subtraction is typically performed via bulge-disk decomposition methods, attempts (using both the \textsc{GalFit} and \textsc{ProFit} software packages; \citealt{galfit,profit}) to fit a bulge and disk component to the K-band image of G04-1 left substantial structure within the galaxy (i.e. the ring, spiral arms, and bright peaks in flux associated with clumps) which was difficult to model with tolerable residuals. We therefore chose to adopt a more clump-centric approach to the removal of the background component.
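The clump-centric background subtraction described above can be sketched as follows. This is a minimal illustration, not the exact measurement code: the function name and arguments are our own, and we assume the annulus radii are chosen so that the annulus area roughly matches the aperture area, as in the text; `mask` stands in for the exclusion of neighbouring clump pixels.

```python
import numpy as np

def disk_subtracted_flux(image, cx, cy, r_ap, r_in, r_out, mask=None):
    """Clump flux with the local disk component removed (a sketch).

    Sums the pixels in a circular aperture of radius r_ap centred on
    (cx, cy), estimates the per-pixel disk level as the mean within an
    annulus [r_in, r_out] (excluding any pixels flagged in `mask` as
    belonging to neighbouring clumps), and subtracts the mean
    background multiplied by the aperture area.
    """
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - cx, yy - cy)
    in_aperture = r <= r_ap
    in_annulus = (r >= r_in) & (r <= r_out)
    if mask is not None:
        in_annulus &= ~mask  # drop neighbouring-clump pixels
    background_per_pix = image[in_annulus].mean()
    return image[in_aperture].sum() - background_per_pix * in_aperture.sum()
```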
\begin{figure*}[htb!]
\begin{center}
\includegraphics[scale=0.47]{magcorr.pdf}
\end{center}
\caption{An assessment of \textit{Keck} NIRC2 $K_{P}$-band and \textit{HST} F336W and F467M filter AB magnitudes, with and without local disk subtraction, for clumps in G04-1 (unity-slope reference lines shown in red). The near 1-to-1 relationship between disk-corrected and non-disk-corrected magnitudes indicates that the effect of correcting for disk light is independent of clump position in the galaxy.}
\label{fig:Kp_corr}
\end{figure*}
G04-1's guide star was used to derive a zero-point value for converting counts/s to flux. All images of the guide star were reduced using the methods described above, aligned by centroiding, and combined to produce a final image. The star (2MASS J04122098-0555104) has an entry in the Two Micron All-Sky Survey Point-source Catalog (hereafter 2MASS; \citealt{skrutskie2006}). We derived a $K_{P}$ magnitude for the guide star using its $K_{S}$ magnitude (from 2MASS) in combination with the flux ratio of the bandwidths between the $K_{P}$ and $K_{S}$ filters. This, in combination with our imaging of the star, was used to produce a conversion value between counts/s and flux.
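The calibration step amounts to tying a measured count rate to a known reference magnitude. A minimal sketch, assuming the standard AB magnitude definition ($m_{AB} = -2.5\log_{10} f_\nu - 48.6$, with $f_\nu$ in erg s$^{-1}$ cm$^{-2}$ Hz$^{-1}$); the $K_S \rightarrow K_P$ bandwidth correction described above is applied to the reference magnitude beforehand and is omitted here, and the function name is our own:

```python
def counts_to_flux_zeropoint(star_counts_per_s, star_mag_ab):
    """Conversion factor between instrumental count rate and flux
    density (erg/s/cm^2/Hz per count/s), from a reference star with a
    known AB magnitude and a measured count rate."""
    f_nu = 10.0 ** (-0.4 * (star_mag_ab + 48.6))  # AB definition
    return f_nu / star_counts_per_s
```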
\subsection{AO-PSF Effects on Photometry}
As described in $\S$2.2, our observations of G04-1 incorporate laser guide star AO correction. The shape of the AO PSF is known to vary both in time and with observing conditions, and is generally modeled as a near diffraction-limited core with a broad, seeing-limited halo/wing component. Except in cases with very high Strehl ratios, a significant fraction of the light is shifted into the wings. The multi-component PSF of AO systems may therefore act to systematically reduce the measured flux of clumps. Assuming standard performance of the Keck AO system, we expect a Strehl ratio of 0.25 in the $K_{P}$ filter, based on our tip-tilt star magnitude (R$\sim$17 mag and 2MASS $K_{S}=15.7$ mag) and distance to our target ($r_{\rm offset} \sim 28.4\arcsec$). The Strehl ratio associated with this set of observations was independently estimated by comparing imaging of point sources observed on the same night with a model of the diffraction-limited PSF. We find Strehl ratios very similar (0.26, on average) to the value supplied by Keck, and therefore adopt 0.25 in all of our flux-magnitude calculations. Here, we explore some of the recovery biases associated with performing photometry on our AO-enabled $K_{P}$-band maps.
In order to increase the signal-to-noise of the faint component, we constructed a model of the AO PSF through a median combination of all of the point sources observed (scaled by exposure time) on 2016 October 21. These 21 frames correspond to a combined integration time of 1.4 min. In Fig. \ref{fig:AOPSF}a, we provide a plot of the azimuthally-averaged 1D surface brightness profile of the combined source (shown as black diamonds). We find that this AO PSF model is well fit by a double Moffat function with best-fit parameters (amplitude, core width, power index) of (0.91, 0.06$\arcsec$, 1.56) and (0.08, 0.13$\arcsec$, 1.21) for the core and halo components, respectively (shown as a blue solid line). Note: the centering of our double Moffat profile was fixed to the source origin. For comparison, we also include in Fig. \ref{fig:AOPSF}a a scaled 1D profile of a characteristic clump (ID: 2) in G04-1. The FWHM corresponding to this double Moffat profile is $\sim 0.1 \arcsec$, which is consistent with that expected in K-band at this observing site given the Strehl ratio assessed above.
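The double Moffat fit above can be sketched with `scipy.optimize.curve_fit`. The function names are our own, and the initial guesses simply echo the best-fit values quoted in the text; this is an illustration of the fitting step, not the exact fitting code.

```python
import numpy as np
from scipy.optimize import curve_fit

def moffat(r, amp, width, beta):
    """1D Moffat profile: amp * (1 + (r/width)^2)^(-beta)."""
    return amp * (1.0 + (r / width) ** 2) ** (-beta)

def double_moffat(r, a1, w1, b1, a2, w2, b2):
    """Narrow (AO core) plus broad (seeing halo) Moffat components."""
    return moffat(r, a1, w1, b1) + moffat(r, a2, w2, b2)

def fit_double_moffat(r, sb):
    """Fit a radial surface-brightness profile (r in arcsec, sb
    normalized to the peak) with a double Moffat, starting from
    guesses near the values quoted in the text."""
    p0 = (0.9, 0.06, 1.5, 0.1, 0.13, 1.2)
    popt, _ = curve_fit(double_moffat, r, sb, p0=p0, maxfev=10000)
    return popt
```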
Due to image construction effects (i.e. the \textsc{Drizzle} algorithm), the width of the empirical PSF of the final, co-added \textit{Keck}-NIRC2 image is likely slightly underestimated by this point-source FWHM value, although we expect the difference between these two resolution metrics to be small ($<10\%$). One method of inferring the spatial resolution of this image would be to assess features observed within the galaxy which appear smaller than clumps. The dominance of star light in the disk, however, makes the search for small structures in the galaxy prohibitive. Nevertheless, in the field of the final image of G04-1 we do observe one small source for which we estimate a FWHM of $\sim0.15$ arcsec. This provides additional confidence in the FWHM estimate we utilize here to explore the effects of flux loss on our K-band clump photometry. We note that our \textit{HST} F336W ($U$) and F467M ($b$) data sets are also processed by \textsc{Drizzle}, and PSF modeling using \textsc{TinyTim} predicts FWHM values in these bands of $\sim0.1$ arcsec.\footnote{\textsc{TinyTim} is a PSF modeling software package developed by J. Krist and R. Hook. Access to this software is available through the Space Telescope Science Institute (STScI) HST instrumentation website.} Within our final F336W image, we observe a number of point sources in the field surrounding the galaxy. From averaging the on-sky sizes of a few of these sources, we estimate a spatial resolution for our \textit{HST} data of $\sim 0.09$ arcsec, slightly better than that of our K-band maps.
We are interested primarily in estimating the fraction of the light originating from stellar clumps that is shifted beyond our flux apertures (i.e. the ``light loss'') due to the broad wing component of the AO PSF. We perform a simple simulation of the effects of the PSF on a stellar clump, which we show in Fig. \ref{fig:AOPSF}. Here, we model our clump with a 2D Gaussian function whose FWHM is set by an average, circularized clump radius ($\sim0.13\arcsec$ or $\sim$300 pc) from our sample in Table \ref{tab:phot} (Fig. \ref{fig:AOPSF}b). We note that many of the clump sizes are consistent with sizes from \textit{HST} H$\alpha$ imaging, which does not suffer from the broad wing component of the AO PSF. We then convolved this 2D clump with the previously described double Moffat profile to mimic the smearing effect of our empirical PSF (see Fig. \ref{fig:AOPSF}c). To assess the fraction of light lost, we use a fixed aperture size of $r=3\sigma$ (where $\sigma$ is defined by the double Moffat FWHM) and annulus radii of $r_{in}=4\sigma$ and $r_{out}=8\sigma$ (see Fig. \ref{fig:AOPSF}b \& c) for performing flux and background photometry.
After convolution with our AO PSF model, approximately 66\% of the clump light remains within the 3$\sigma$ flux aperture, corresponding to a flux/mass correction factor of 1.33. However, in reality these clumps are embedded within the galaxy disk and the broad PSF wings also feed stellar light from the surrounding disk back into our clump apertures. This is likely to reduce flux lost from clumps. Since we have not included an underlying disk component in our simulation, this correction factor of 1.33 represents an upper limit for the fraction of light lost due to PSF effects.
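The light-loss experiment can be reproduced schematically: build a unit-flux 2D Gaussian clump, convolve it with a normalized PSF image, and measure the flux fraction retained inside the aperture. The sketch below is illustrative only — the array size, pixel scale, and any example PSF supplied to it are assumptions, not our actual PSF model.

```python
import numpy as np
from scipy.signal import fftconvolve

def aperture_light_fraction(clump_fwhm_pix, psf, ap_radius_pix, n=128):
    """Fraction of a unit-flux Gaussian clump's light retained inside
    a circular aperture after convolution with a normalized PSF image."""
    yy, xx = np.mgrid[0:n, 0:n] - n // 2
    sigma = clump_fwhm_pix / 2.355  # FWHM -> Gaussian sigma
    clump = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    clump /= clump.sum()  # normalize to unit total flux
    blurred = fftconvolve(clump, psf / psf.sum(), mode="same")
    r = np.hypot(xx, yy)
    return blurred[r <= ap_radius_pix].sum()
```

Comparing the result for a delta-function PSF against a broad PSF isolates the flux pushed beyond the aperture by the PSF wings.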
In our simulation, we find that the per-pixel flux contribution to the local background estimate (determined within the aforementioned annulus) is of order $\sim$1\% of the amplitude (i.e. peak pixel value) of the clump. Again, we present this as a rough estimate; clump clustering and bright structures in G04-1 require estimates of the local disk background to be determined from a range of annulus radii. Moreover, the morphology of stellar clumps is observed in our maps to deviate from simple Gaussians. Nevertheless, we note that clump peak pixel values range between 2-5$\times$ higher than their corresponding local disk values, suggesting that this light represents a minor contribution to our background estimates.
We highlight that the above consideration of the impact of AO correction on photometry measurements of clumps within G04-1 is only possible because of the galaxy's location at relatively low redshift. The scale length of star-forming clumps in G04-1 is small compared with the scale length of the disk, which allows us to more cleanly separate the clumps from the underlying disk of the galaxy. This underscores another unique advantage of studying high-$z$ processes via targets in the DYNAMO sample.
\subsection{Clump Stellar Masses}
It is commonly assumed that stellar mass density varies in lock-step with near-infrared surface flux density, but we have used our in-hand visible-wavelength \textit{HST} photometry to refine our mass estimates using population synthesis models. Integrated and clump-scale mass-to-light ratios for G04-1 were estimated using the stellar population synthesis code \textsc{galaxev}, which comprises a library of evolutionary models computed using the isochrone synthesis codes of \cite{bc2003}. We used spectral energy distribution (SED) models from the \cite{bc2003} code base (incorporating their 2011 update), assuming a Chabrier initial mass function \citep{chabrier2003}. As our goal was to derive mass-to-light ratios for isolated clump regions, we modeled the star formation history of clumps as a simple stellar population (i.e. a delta-burst). Galaxies in DYNAMO (including G04-1) are observed to have metallicities which are slightly sub-solar (determined from the [NII]/H$\alpha$ ratio \citep{pp2004} from SDSS spectra), and a BC03 SED model consistent with this was chosen from the BaSeL 3.1 spectral library \citep{bc2003}.
Clump colors vary to a small degree across the galaxy disk, and we have used the \textit{HST} data described in \S\ref{sec:HST} to construct ($U-b$) maps (using F336W and F467M). Using the apertures defined in calculating $K_{P}$-band magnitudes on these maps, we calculated a ($U-b$) color index (with and without disk-subtracted magnitudes) for each of the clumps in G04-1. The standard deviation of the clump colors was 0.36 (in AB mag units). Using the BC03 software, we then modeled the evolution of the $K_{P}$ mass-to-light ratio as a function of observed ($U-b$) color to derive clump-specific mass-to-light ratios. The stellar mass of each clump is then defined as simply the product of the NIR flux and the BC03-derived mass-to-light ratio of the clump. Derived ($U-b$) clump colors, mass-to-light ratios, and masses are all listed in Tables \ref{tab:phot} \& \ref{tab:prop}.
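Schematically, the mass estimate described above amounts to the following (a sketch: here $d_L$ denotes the luminosity distance to G04-1, $F_{K_P,\,\mathrm{diskcorr}}$ the disk-subtracted band flux from Table \ref{tab:phot}, and $\Upsilon_K$ the BC03-derived, color-dependent $K_P$ mass-to-light ratio in solar units):

```latex
\begin{equation*}
  L_{K} \;=\; 4\pi d_{L}^{2}\, F_{K_{P},\,\mathrm{diskcorr}},
  \qquad
  M_{\star} \;=\; \Upsilon_{K}\left(\frac{L_{K}}{L_{K,\odot}}\right)\ensuremath{{\rm M}_\odot} .
\end{equation*}
```

The only color-sensitive ingredient is $\Upsilon_K$, which is why the small clump-to-clump color scatter translates into a small scatter in the derived masses.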
The simple stellar population's observed magnitudes in our selected filters are evaluated at equally spaced time steps ($\Delta \log({\rm Age}) = 0.05$ dex, beginning at $10^{5}$ yr) and thus we are able to track the color evolution of the burst with time. This provides a reasonable metric for estimating the ages of the clumps, using simply their individual $U-b$ colors, which we include on the right-hand axis of Fig. \ref{fig:mass_vs_dist} (top panel). We note that a particular age of a clump should be interpreted carefully in terms of the clump lifetime. \citet{bournaud2014} outlines the difficulties associated with accurately measuring the lifetime of a clump based solely on photometric ages. Clumps can have complex star formation histories, and their simulations show that if clumps are long lived, they will continually rejuvenate (i.e. experience subsequent bursts) and young stars dominate the age measurement. However, \citet{bassett2014} reports (based on absorption line analysis) an age range of $60-500$ Myr for the entire galaxy of G04-1, which is consistent with the clump ages we derive here using stellar population synthesis modeling.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.56]{U-b_vs_b-K.pdf}
\end{center}
\caption{In this figure, the observed ($U-b$; F336W $-$ F467M) and ($b-K$) colors of clumps in G04-1 are compared to those of two example synthetic stellar populations (generated using \textsc{galaxev} and stellar models from \citealt{bc2003}). Here, the positions of stellar clumps are shown as black stars. The evolutionary tracks of two star formation histories, a simple delta-burst (SSP) and an exponentially-declining SFR (Exp; with an e-folding timescale of $\tau=500$ Myr), are provided for reference. SSP and Exp tracks in red and blue, respectively, include colors with extinction applied, assuming a \cite{calzetti2000} extinction law. Their counterparts plotted in grey do not account for effects from extinction.}
\label{fig:clumpcolors}
\end{figure}
As described above, we derive mass-to-light ratios for clumps via a method which assumes that 1) the observed colors reflect those of a single burst of star formation and 2) the clumps' extinction values are similar to that of the disk (e.g. in using disk-subtracted values). Note: the assumption that clump extinction is similar to the disk is supported by observations of the $H\alpha$-to-$P\alpha$ flux ratio in G04-1, which is found to be roughly constant \citep{bassett2017}. Given the observed stability of the K-band mass-to-light ratio, the uncertainties associated with estimating stellar masses via ($U - b$) color are not expected to be significant (e.g. a small variation in clump color is observed across the galaxy). However, as magnitude information is available for clumps in three bands ($U$, $b$, and $K_{P}$), we can compare the observed ($U-b$) and ($b-K$) colors with those generated from stellar population synthesis modeling. This comparison with two distinct star formation histories (a ``delta-burst'' SSP and an exponentially-declining SFR) is shown in Fig. \ref{fig:clumpcolors}. In this comparison, we utilize colors calculated from disk-subtracted magnitudes to minimize K-band reddening from the underlying stellar disk in which the clumps are embedded. We incorporate the effects of extinction and reddening via the \cite{calzetti2000} dust extinction law, assuming $A_{H\alpha}\sim1-1.6$ (derived for G04-1 by \citealt{bassett2017}) and $A_{V}\sim 1$. Tracks which account for extinction are shown for the SSP and Exponential star formation histories in red and blue, respectively. We find that four clumps exhibit both ($U-b$) and ($b-K$) colors which map well to those generated by BC03 SSP modeling. Five clumps are within a magnitude in color of either the SSP or Exp BC03 tracks. Three clumps are vertically offset by $>$1 mag from both tracks, appearing much redder in ($b-K$) color than predicted by BC03.
These results are challenging to interpret, as it remains unclear how well these clump populations are represented by the simple star formation histories presented here. Indeed, the star formation histories of clumps in high-$z$ turbulent disk galaxies are very poorly constrained. Results from simulation work (e.g. \citealt{bournaud2014}) suggest that some clumps may undergo multiple bursts of star formation over the course of their lifetime. While actively star-forming clumps are dominated by young stars, they may retain a redder, more evolved stellar population, which might explain their appearing significantly redder. We note, however, that attempts to fine-tune our models for different star formation histories (a SSP and exponential SFRs with timescales of $100< \tau < 500$ Myr) result in an overall range in the average K-band mass-to-light ratio of only 0.1 - 0.15. Further fine-tuning is likely beyond the scope of this paper. Nonetheless, this shows that the choice of model contributes only a small addition to the overall systematic uncertainty associated with stellar mass. This underscores the clear advantage of estimating clump stellar masses using K-band photometry.
\section{Results \& Discussion}\label{sec:4}
Figure \ref{fig:Mstar_hist} shows a histogram of the full range of clump masses, both before and after background flux subtraction is performed. On average, subtraction of the local disk component reduces clump masses by around 50\%. The average stellar mass of the clumps before (after) background subtraction for the 15 identified clumps in G04-1 is $(5.69\pm 1.8)\times 10^{7}\ensuremath{{\rm M}_\odot}$ ($(2.06\pm 0.7)\times 10^{7}\ensuremath{{\rm M}_\odot}$). The highest and lowest mass clumps (Clump IDs 14 \& 9) correspond to masses (before disk subtraction) of 27.0 and 0.36 $\times 10^{7}\ensuremath{{\rm M}_\odot}$, respectively. If we incorporate the correction for light loss due to the wings of the AO PSF (see \S 3.3), the maximum mass for clumps within G04-1 may be as high as 3.6 $\times 10^{8} \ensuremath{{\rm M}_\odot}$. We note that the fractional drop in mass from the background subtraction is quite consistent across clumps (i.e. uncorrelated with clump brightness or position in the disk), as shown in Fig. \ref{fig:Kp_corr}, where we directly assess the effect of disk subtraction on calculated magnitudes for clumps in the $K_{P}$, F336W ($U$), and F467M ($b$) datasets.
\begin{deluxetable*}{ccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{G04-1 Clump Photometry\label{tab:phot}}
\tablewidth{0pt}
\tablehead{
\\
\colhead{Clump ID} &
\colhead{$K_{P}^{\dagger}$} &
\colhead{$K_{P, diskcorr}^{\dagger}$} &
\colhead{$F_{K_{P}}$} &
\colhead{$F_{K_{P, diskcorr}}$} &
\colhead{$M_{\star}$} &
\colhead{$M_{\star, diskcorr}$}
\\
\colhead{} &
\colhead{(AB mag)} &
\colhead{(AB mag)} &
\colhead{($10^{-14}$\ erg/s/cm$^{2}$)} &
\colhead{($10^{-14}$\ erg/s/cm$^{2}$)} &
\colhead{($10^{7}$\ $\ensuremath{{\rm M}_\odot}$)} &
\colhead{($10^{7}$\ $\ensuremath{{\rm M}_\odot}$)}
\\
}
\startdata
1 & 21.08 & 22.77 & 2.01 $\pm$ 0.41 & 0.43 $\pm$ 0.10 & 2.59 $\pm$ 0.79 & 0.55 $\pm$ 0.18\\
2 & 21.19 & 22.38 & 1.81 $\pm$ 0.37 & 0.61 $\pm$ 0.13 & 0.85 $\pm$ 0.50 & 0.28 $\pm$ 0.19\\
3 & 20.52 & 21.82 & 3.38 $\pm$ 0.68 & 1.02 $\pm$ 0.21 & 1.98 $\pm$ 1.07 & 0.60 $\pm$ 0.30\\
4 & 19.70 & 21.26 & 7.22 $\pm$ 1.45 & 1.88 $\pm$ 0.39 & 12.7 $\pm$ 3.31 & 3.31 $\pm$ 0.89\\
5 & 19.52 & 21.14 & 8.47 $\pm$ 1.70 & 1.91 $\pm$ 0.40 & 14.9 $\pm$ 3.88 & 3.37 $\pm$ 0.89\\
6 & 21.53 & 23.11 & 1.33 $\pm$ 0.27 & 0.31 $\pm$ 0.07 & 1.56 $\pm$ 0.50 & 0.37 $\pm$ 0.12\\
7 & 20.72 & 22.07 & 2.80 $\pm$ 0.57 & 0.81 $\pm$ 0.19 & 3.28 $\pm$ 1.06 & 0.95 $\pm$ 0.33\\
8 & 23.00 & 24.16 & 0.34 $\pm$ 0.07 & 0.12 $\pm$ 0.04 & 0.36 $\pm$ 0.13 & 0.13 $\pm$ 0.05\\
9 & 22.57 & 23.48 & 0.51 $\pm$ 0.11 & 0.22 $\pm$ 0.05 & 0.36 $\pm$ 0.17 & 0.16 $\pm$ 0.08\\
10 & 20.85 & 21.47 & 2.50 $\pm$ 0.51 & 1.41 $\pm$ 0.30 & 2.64 $\pm$ 0.91 & 1.49 $\pm$ 0.52\\
11 & 20.74 & 21.65 & 2.75 $\pm$ 0.55 & 1.19 $\pm$ 0.25 & 2.90 $\pm$ 1.00 & 1.26 $\pm$ 0.44\\
12 & 20.99 & 22.33 & 2.19 $\pm$ 0.44 & 0.64 $\pm$ 0.13 & 3.35 $\pm$ 0.93 & 0.97 $\pm$ 0.27\\
13 & 19.22 & 20.28 & 11.2 $\pm$ 2.24 & 4.19 $\pm$ 0.84 & 5.25 $\pm$ 3.45 & 1.97 $\pm$ 1.29\\
14 & 18.58 & 19.38 & 20.1 $\pm$ 4.02 & 9.69 $\pm$ 1.94 & 26.6 $\pm$ 7.95 & 12.9 $\pm$ 3.83\\
15 & 20.21 & 21.09 & 4.50 $\pm$ 0.90 & 2.00 $\pm$ 0.40 & 5.96 $\pm$ 1.78 & 2.65 $\pm$ 0.79\\
\enddata
\tablenotetext{\dagger}{Errors on clump AB magnitudes are approximately 0.24 mag.}
\end{deluxetable*}
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.56]{Mstar_hist.pdf}
\end{center}
\caption{The distribution of clump stellar masses in G04-1. The red and blue histograms represent stellar masses determined for the clumps described in Table \ref{tab:phot} both with and without local disk background subtraction.}
\label{fig:Mstar_hist}
\end{figure}
\begin{deluxetable*}{ccccccccccc}
\tabletypesize{\scriptsize}
\tablecaption{Observed Properties of Stellar Clumps in G04-1\label{tab:prop}}
\tablewidth{0pt}
\tablehead{
\\
\colhead{Clump ID} &
\colhead{HST ID} &
\colhead{SFR$^{\star}$} &
\colhead{$\sigma^{\star}$} &
\colhead{F336W - F467M$^{\dagger}$} &
\colhead{$K_{P}$ MLR} &
\colhead{$R_{GC}$}
\\
\colhead{} &
\colhead{ } &
\colhead{($\ensuremath{{\rm M}_\odot}$/yr)} &
\colhead{($\ensuremath{{\rm km\,s}^{-1}}$)}&
\colhead{(AB mag)} &
\colhead{} &
\colhead{(kpc)}
\\
}
\startdata
1 & 12 & 0.85$\pm$0.15 & 51 & 0.68 & 0.11 & 3.61\\
2 & 13 & 0.62$\pm$0.14 & 51 & 0.30 & 0.04 & 3.35\\
3 & 14 & 0.93$\pm$0.22 & 51 & 0.41 & 0.05 & 2.81\\
4 & - & - & 51 & 1.15 & 0.15 & 2.47\\
5 & 6 & 1.36$\pm$0.22 & 48.24$\pm$19.8 & 1.15 & 0.15 & 1.90\\
6 & 4 & 0.29$\pm$0.07 & 51 & 0.65 & 0.10 & 3.18\\
7 & - & - & 51 & 0.65 & 0.10 & 4.60\\
8 & 1 & 0.19$\pm$0.04 & 51 & 0.58 & 0.09 & 5.46\\
9 & 2 & 0.28$\pm$0.05 & 51 & 0.47 & 0.06 & 4.90\\
10 & 3 & 0.43$\pm$0.12 & 51 & 0.55 & 0.09 & 4.06\\
11 & - & - & 51 & 0.58 & 0.09 & 2.93\\
12 & 10 & 0.39$\pm$0.10 & 51 & 0.74 & 0.13 & 2.36\\
13 & 5 & 1.36$\pm$0.31 & 55.7$\pm$27.62 & 0.30 & 0.04 & 1.05\\
14 & 9 & 2.1$\pm$0.39 & 41$\pm$14 & 0.88 & 0.11 & 0.79\\
15 & - & - & 51 & 0.87 & 0.11 & 1.16\\
\enddata
\tablenotetext{\dagger}{\textit{HST} F336W-F467M AB color values represent non-disk subtracted colors and have associated errors of on average 0.6 mag.}
\tablenotetext{\star}{Values for clump star formation rates and velocity dispersions have been taken from Fisher et al. (2017b) and Oliva-Altamirano et al. (2018), respectively.}
\end{deluxetable*}
As described in \S1, predictions from simulations for stellar clump masses in turbulent, clumpy disk galaxies depend quite strongly on the detailed feedback prescription assumed by the model. Observations of galaxy clumps (such as those presented here) are therefore useful for testing these models and for providing insight into the dominant forms of feedback. Strong radiative feedback models predict that clumps of gas are disrupted so quickly that the stellar morphology in these galaxies should be relatively smooth \citep{hopkins2012,oklopcic2017}. Indeed, simulation work by \cite{buck2017} incorporating more moderate feedback models finds little evidence for clumps of stars after 200 Myr. Studies where feedback effects are modeled as radiation pressure and supernovae, however, predict massive clumps ($>10^{8}\ensuremath{{\rm M}_\odot}$). We find that only a small fraction of the clump masses (3 of 15; before disk subtraction) observed in G04-1 are consistent with this mass regime. While the present paper provides data for only a single galaxy, at least in this case the majority of clumps (both before and after disk subtraction) observed in G04-1 appear to fall in the range $10^{6-8}\ensuremath{{\rm M}_\odot}$, fairly near the middle of the mass spectrum observed in high-redshift observations of lensed galaxies and in simulations.
\cite{lenkic2021} recently studied the internal color gradients of clumps in DYNAMO galaxies. They find that the color gradients of clumps are more commonly consistent with changes in age, rather than extinction. This differs from the interpretation of \cite{buck2017}, in which clumps in turbulent galaxies are the result of regions of lower extinction. They also find that DYNAMO clump age gradients are consistent with an old clump of stars with a young center. This is consistent with our results, in which old stellar clumps co-exist with high SFR surface densities. Together, these results give two independent lines of evidence that DYNAMO galaxies contain long-lived clumps of stars.
The total mass of the clumps (after local disk subtraction) is approximately 3.1$\times 10^{8}\ensuremath{{\rm M}_\odot}$. Using high resolution \textit{HST} continuum data, \citet{fisher2017a} fit a surface brightness profile for G04-1 and estimate a bulge-to-total ratio of about 11\%. Given a total stellar mass of 6.47$\times 10^{10}\ensuremath{{\rm M}_\odot}$, this corresponds to a current bulge mass of about 7.1$\times 10^{9}\ensuremath{{\rm M}_\odot}$. If we assume that all of the clumps observed in G04-1 survive sufficiently long to migrate inward, then the total contribution of these clumps to the mass of the bulge would be about 5\%. Assuming there is no further mass growth of the galaxy, this potential contribution increases to 8.54$\times 10^{8}\ensuremath{{\rm M}_\odot}$, or about 12\%, when considering non-disk-subtracted mass estimates. Whether or not the current observations lend support to the notion of bulge building from clump infall clearly depends on the duty cycle of the process, because a single episode of bulge growth from this process would only add incrementally to the present bulge.
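For reference, the bulge contributions quoted above follow directly from the clump and bulge masses:

```latex
\begin{equation*}
  \frac{M_{\rm clumps,\,diskcorr}}{M_{\rm bulge}}
  \;=\; \frac{3.1\times10^{8}\,\ensuremath{{\rm M}_\odot}}{7.1\times10^{9}\,\ensuremath{{\rm M}_\odot}}
  \;\approx\; 5\%,
  \qquad
  \frac{M_{\rm clumps}}{M_{\rm bulge}}
  \;=\; \frac{8.54\times10^{8}\,\ensuremath{{\rm M}_\odot}}{7.1\times10^{9}\,\ensuremath{{\rm M}_\odot}}
  \;\approx\; 12\%,
\end{equation*}
```

where the first ratio uses the disk-subtracted clump masses and the second the non-disk-subtracted estimates.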
\subsection{Placing Clump Masses and Sizes in Context}
A number of recent observational studies have shown that the star forming clumps observed in $1<z<3$ galaxies may not be as massive or large as initially predicted, due to observational challenges inherent to mapping light in galaxies at high redshift. For example, studies examining systems with strong gravitational lensing have been able to explore clumps at spatial resolutions below 100 pc (e.g. \citealt{livermore2012,livermore2015,wuyts2014}) and consistently find smaller clump sizes ranging from 50 pc to 1 kpc. To explore the effect of resolution on derived clump properties, \citet{cava2018} evaluated multiple images of a single galaxy with different lensing magnifications (consistent with effective resolutions ranging from 30 to 300 pc) and found that at coarse resolution clump sizes and masses were systematically overestimated. Their target galaxy, the Cosmic Snake, has a total mass and SFR similar to those of our target. They find that, when observed at the finest spatial resolution, clumps in the Cosmic Snake galaxy have masses ranging $\sim10^{7}-10^{8.5}$~M$_{\odot}$, a range quite comparable to ours. Using machine learning methods to identify clumps in VELA zoom-in simulations, \citet{huertas2020} find that observational effects significantly impact clump properties, leading to a factor of 10 over-estimation in stellar mass. While it is becoming increasingly clear that observational constraints and resolution limits have led to significant ambiguity in the true masses and sizes of clumps, it is difficult to ascertain to what degree these biases have impacted the values we report here for clumps in G04-1. In this section, we aim to contextualize our work by comparing our findings with similar studies spatially resolving star formation in galaxies.
Using a combination of \textit{HST} broad and narrow band imaging, \citet{bastian2005} explore star cluster populations in M51. M51 can be taken as a typical star-forming spiral disk, and is therefore useful for comparison to the extreme star formation in G04-1. They measure masses in complexes of star formation with sizes from $\sim100$~pc to a few hundred parsecs, comparable to the sizes in G04-1. The key difference is that the stellar masses associated with the star forming complexes in M51 are $3-30\ \times\ 10^{4}\ \ensuremath{{\rm M}_\odot}$, multiple orders of magnitude lower than what we observe in G04-1.
Luminous and ultraluminous infrared galaxies (U/LIRGs) are a class of local ($z<0.1$) dusty systems with high IR luminosities ($L_{[8-1000\mu m]}>10^{11}\ensuremath{{\rm L}_\odot}$) and associated SFRs which are comparable to our target and to galaxies at $z\sim 1-3$ (of order $\rm 10^{2-3}\ \ensuremath{{\rm M}_\odot}\ yr^{-1}$) \citep{malek2017,larson2020}. Looking at resolved star formation (via Pa $\alpha$ and Pa $\beta$ line emission) in 48 galaxies from the Great Observatories All-Sky LIRG Survey (GOALS) with \textit{HST}, \citet{larson2020} find that while the typical sizes of clumps in LIRGs are comparable to that observed in DYNAMO ($\sim100-900$ pc), they appear generally less massive ($M_{*,median}\sim 5 \times 10^{5}\ \ensuremath{{\rm M}_\odot}$) and exhibit individual SFRs which are roughly an order of magnitude lower ($\sim0.03\ \ensuremath{{\rm M}_\odot} \rm yr^{-1}$).
\citet{messa2019} examined star forming clumps in 14 galaxies from the Lyman-Alpha Reference Sample (LARS; $z=0.03-0.2$) via UV/optical data from \textit{HST}. They find that for LARS galaxies (which are selected to be similar to high-$z$ galaxies based on their $\rm H\alpha$ and UV fluxes) clump sizes range from $20-600$ pc with a median clump size of about $d\sim60$ pc. While these reported clump sizes are smaller than that seen in DYNAMO ($<$R$_{clump}>\ \sim$500 pc; \citealt{fisher2017a}) it remains unclear whether this indicates a significant difference in clump sizes or if it is due to the clustering and resolution effects discussed above. For example, the smallest clump sizes reported by \citet{messa2019} are observed in the lowest redshift LARS galaxies where the best resolution of their data ($\sim 10$ pc) is roughly 10$\times$ better than in the \textit{HST} imaging presented in \S2.3 for G04-1.
Global system dynamics are closely linked with theoretical formation pathways for star forming clumps in galaxies and thus also provide a critical point of comparison \citep[discussion in][]{fisher2017b}. Like other DYNAMO galaxies, G04-1 has been shown to have a well ordered rotation field measured both in ionized gas at AO-enabled high spatial resolution \citep{oliva2018} and in stellar kinematics \citep{bassett2014}. Moreover, \cite{fisher2017a} shows that the starlight profile is consistent with an exponential model.
While some fraction of ULIRGs are likely spirals, the primary dynamical driving mechanism of the bulk of U/LIRGs is widely considered to be merging (e.g. Larson et al. 2016). This has been shown to be the case for the LIRG galaxies in GOALS, where the sample is dominated by interacting systems \citep{larson2020}. We highlight this important kinematic distinction as it presents an important caveat when comparing the observed properties of clumps in DYNAMO with other local high SFR samples.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.52]{combodist_3panel.pdf}
\end{center}
\caption{The relationship between clump \textit{HST} F336W - F467M color (not disk-corrected; top panel, solid red circles), mass (middle panel; green circles), disk-subtracted mass (bottom panel; blue circles) and galactocentric radius. Green points represent mass values uncorrected for the disk contribution, while dark blue points represent mass estimates for which we have subtracted off the local disk background. Radial distance values (horizontal axis) are calculated as the offset between the individual clump aperture centroid and the galaxy center. The Pearson's correlation coefficients for these three quantities indicate strong negative correlations: we find $r$-values of -0.65, -0.64, and -0.61 for the clump colors, uncorrected masses, and disk-corrected masses, respectively. The vertical grey dashed line represents the outer edge of the ring region in G04-1 at $R_{GC}\sim2.2$ kpc.}
\label{fig:mass_vs_dist}
\end{figure}
\subsection{Radial Gradients in Clump Properties}
In Fig. \ref{fig:mass_vs_dist} we show the radial gradients in both clump stellar masses and clump colors. Clumps inside $R<2$ kpc have average disk-corrected masses of $5.2\times 10^{7}\ensuremath{{\rm M}_\odot}$ and average non-disk-corrected F336W-F467M colors of $\sim$0.8, while clumps outside of this radius appear smaller and bluer, with average masses of $9.1\times 10^{6}\ensuremath{{\rm M}_\odot}$ and F336W-F467M colors of $\sim$0.6. The trend with mass is roughly consistent with a log-linear decline in mass with radius. The observable properties of clumps (e.g. their number, mass, star formation rate, and age) are fundamental for testing feedback models, and in the case of G04-1 there is clear evidence for a clump mass gradient across the galaxy. We have computed Pearson's correlation coefficients for the three derived quantities in Fig. \ref{fig:mass_vs_dist} and find $r$-values of -0.65, -0.64, and -0.61 for the clump colors, uncorrected masses, and disk-corrected masses, respectively, indicating that all three quantities exhibit strong negative gradients across the galaxy disk. As shown in the bottom panel of Fig. \ref{fig:mass_vs_dist}, clumps closer to the galaxy's nucleus are significantly (more than a factor of ten) more massive than those at the outskirts. We note that galaxy G04-1 is host to various structural features, including multiple spiral arms and a prominent nuclear ring, which are clearly observed in the top row and central panels in Fig. \ref{fig:clump_apertures}. This ring structure appears somewhat asymmetric and varies radially in width due to the existence of bright knots of ongoing star formation and (in some part) the slight inclination of the galaxy.
We have estimated the radial outer edge of this nuclear ring region to be located at a distance of roughly $R_{GC}\sim 2.2$ kpc (the average value of four measurements made at cardinal points in the \textit{HST} $\rm H\alpha$ ratio map image) from the galaxy centre. For spatial reference, we include a vertical dashed line denoting the outer edge of this ring region in the panels of Fig. \ref{fig:mass_vs_dist}.
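The negative radial correlations quantified above use the standard Pearson coefficient. The sketch below illustrates the computation with hypothetical (radius, mass) pairs; the measured values are those reported in the text and shown in Fig. \ref{fig:mass_vs_dist}:

```python
import numpy as np

# Hypothetical clump galactocentric radii [kpc] and disk-corrected
# masses [Msun], chosen only to illustrate a log-linear decline.
r_gc = np.array([0.8, 1.2, 1.9, 2.5, 3.1, 3.8, 4.4, 5.0])
mass = np.array([2.7e8, 1.1e8, 6.0e7, 4.0e7, 2.2e7, 1.5e7, 9.0e6, 4.0e6])

# Pearson's r between radius and log mass.
r_val = np.corrcoef(r_gc, np.log10(mass))[0, 1]
```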
This clump mass gradient in G04-1 could have multiple origins. In particular, it could be due to (\textit{i}) the inward migration of clumps while gradually forming more stars, (\textit{ii}) an inside-out growth of the galaxy disk, or (\textit{iii}) the Jeans mass being larger at smaller radii. From spatially-resolved kinematic maps (using Keck-OSIRIS observations of the Pa-$\alpha$ line; see \citealt{oliva2018}) it is known that the gas velocity dispersion in G04-1 declines slightly with radius. While this would argue for a higher Jeans mass near the galaxy centre, the declining surface density and age gradient make this scenario (\textit{iii}) unlikely. From Fig. \ref{fig:mass_vs_dist}, it is observed that from $r=5$ kpc to $r=1$ kpc in the disk, clump age increases by about 150 Myr while the overall clump mass increases by about $\sim 10^{8}\ \ensuremath{{\rm M}_\odot}$. \citet{lenkic2021} perform multi-band stellar population modelling of the spectral energy distributions of clumps in G04-1 and find a similar range in clump ages ($80 - 300$ Myr). If this variation is entirely due to star formation (scenario \textit{i}), this would imply that the mean SFR of clumps during migration should be $\sim 0.7\ \ensuremath{{\rm M}_\odot}\ \rm yr^{-1}$. This value is surprisingly consistent with the average clump SFR for this galaxy reported by \cite{fisher2017a} using H$\alpha$-based measurements ($<$SFR$\rm_{clump}>$ $\sim 0.83\ \ensuremath{{\rm M}_\odot}\ yr^{-1}$). This also suggests that an upper limit to the mass contributed by clumps in G04-1 to the bulge (assuming all clumps complete migration) is of order $10^{9}\ \ensuremath{{\rm M}_\odot}$, or an addition of roughly 14\% to the bulge mass. The SFR of clumps is likely a more reliable proxy for mass growth than mass flow rate, as more mass is likely to be lost via feedback than formed into stars.
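The implied migration-phase star formation rate follows from a simple division of the quoted mass growth by the quoted age span:

```python
# Values from the text.
delta_mass = 1.0e8  # clump mass growth from r=5 kpc to r=1 kpc [Msun]
delta_age = 150e6   # corresponding increase in clump age [yr]

# Mean SFR required during migration [Msun/yr], ~0.7.
sfr_migration = delta_mass / delta_age
```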
Notably, the 150 Myr age-span observed for clumps in G04-1 is well-matched to the minimum age predicted for clumps which are able to survive long enough to complete in-spiral to the galaxy center \citep{guo2012,guo2017,fs2011,shibuya2016,soto2017}. However, we qualify this simple model by noting that it assumes that 1) clumps originate in the outskirts of the disk, 2) clumps are isolated from other clumps (i.e. they don't merge) and 3) all clumps within the galaxy survive long enough to complete their migration inward. It remains unclear to what degree these assumptions are reasonable.
We find no clear evidence of a radial dependence on the number density of clumps; Fig. \ref{fig:clump_apertures} illustrates that stellar clumps are indeed evenly distributed across the disk. Additionally, Fig. \ref{fig:mass_vs_dist} (top panel) shows that stellar clumps near the nuclear ring appear (albeit to a small degree) consistently more red, suggesting that these stellar populations may be slightly older and more evolved. \cite{fisher2017a} report star formation rates (SFR) for these clumps (\S2, and listed in Table \ref{tab:prop}) and find higher SFRs for clumps located near the inner ring (on average 1.6 $\ensuremath{{\rm M}_\odot}$/yr) when compared to the arm (0.5 $\ensuremath{{\rm M}_\odot}$/yr).
Our measurements appear to be consistent with observations of clump properties in high-redshift galaxies, as well as with numerical investigations. In their simulations, \cite{mandelker2014,mandelker2017} and \cite{dekel2021} find significant gradients in clump properties across the disk. More specifically, clumps closer to the galaxy center tend to be both more massive and comprised of older stellar populations (i.e. longer-lived).
Fig. \ref{fig:mass_vs_dist} is consistent with the investigation of clump properties at high redshift by \cite{guo2018}, who examined UV-bright clumps in CANDELS galaxies and found significant gradients in both stellar mass (with inner clumps on average being more massive than outer clumps by 1-2 orders of magnitude) and color. They also found that inner clumps appear redder (in $U-V$) than those observed in the outskirts of the disks. \cite{fs2011} performed deep \textit{HST} (NIC2/F160W and ACS/F814W) imaging of clumps in six $z\sim 2$ star forming galaxies and also found central clumps to be more massive and older. Positive color and stellar mass gradients were similarly observed by \cite{cava2018} in imaging of clumps within the `Cosmic Snake', a lensed system. Similar mass-radius relations also appear at low redshift: observations of star clusters in local galaxies, such as $z<0.1$ star forming spirals \citep{sun2016} and the major merger Arp 299 \citep{randriamanakoto2019}, find that cluster mass increases with decreasing galactocentric radius. Radial trends are often taken as an observational basis for clump migration. If the clumps in G04-1 are long-lived, however, and have survived long enough for in-spiral to establish these gradients, then the masses we observe strongly argue against very strong feedback effects (see \S1).
It is emphasized that the clump mass results shown in Fig. \ref{fig:mass_vs_dist} are quite robust; these radial trends are certainly not due to uncertainties in mass-to-light calculations, since our observations use K-band imaging where M/L is very stable. Moreover, the high spatial resolution in two dimensions allows for an accurate disk subtraction and, therefore, radial trends cannot be due to background subtraction effects.
\begin{figure}[htb!]
\begin{center}
\includegraphics[scale=0.55]{SFR_vs_Mstar.pdf}
\end{center}
\caption{The resolved star forming main-sequence for G04-1. Here we show the relationship between star formation rate (taken from \citealt{fisher2017a}) and stellar mass (from this work) for $\sim$400 pc sized regions in G04-1. Galaxy regions directly associated with clumps (identified in our NIRC2 K-band maps) are plotted in red or blue, depending on whether they are positioned in the nuclear ring or disk regions of the galaxy. The power-law fit to the data presented in this paper (described in \S 4.2) is shown as a dashed cyan line. For comparison, we include similar relations derived from observations at high-z \citep{wuyts2013} and low-z \citep{cd2016}.}
\label{fig:SFR_vs_Mstar}
\end{figure}
\subsection{Resolved $\Sigma_{SFR}$ versus $\Sigma_{star}$}
Observations at both high and low redshift have found a tight ($\sim$0.3 dex) and roughly linear relationship (the so-called ``star forming main-sequence") between the star formation rate and stellar mass surface densities in actively star forming galaxies \citep{daddi2007,noeske2007,elbaz2011}. In Fig. \ref{fig:SFR_vs_Mstar}, we plot the star forming main-sequence in terms of spatially-resolved quantities for G04-1. To estimate the SFR and stellar mass surface densities ($\Sigma_{SFR}\ \&\ \Sigma_{M_{star}}$, respectively) both the $K_{P}$-band (this work) and the H$\alpha$ maps (presented in \cite{fisher2017a}) were re-gridded to a common scale of $\sim$400 pc/pixel. New fluxes were calculated for each pixel in each of the re-gridded maps and used to estimate corresponding SFRs and stellar mass surface density values. A galaxy-averaged K-band mass-to-light ratio of 0.1 (corresponding to the integrated F336W - F467M color for the galaxy) was used to calculate stellar masses. As a reference, all pixels associated with the stellar clump regions are plotted as red or blue points (depending on whether they correspond to ring or disk/arm clumps; see Fig. \ref{fig:SFR_vs_Mstar}). All non-clump region pixels (defined as where $<$30\% of the contributing flux originates from clumps) are shown as black circles. We note that the two non-clump data points in Fig. \ref{fig:SFR_vs_Mstar} with high (ring-like) $\Sigma_{SFR}$ values correspond to regions near the centre of G04-1.
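A minimal stand-in for the regridding step described above is a block average of each high-resolution map onto a coarser grid; the sketch below is illustrative only (the actual common scale of $\sim$400 pc/pixel depends on the instrument pixel scales and the galaxy's angular diameter distance):

```python
import numpy as np

def block_average(img, n):
    """Downsample a 2D map by averaging n x n pixel blocks,
    a simple stand-in for regridding to a common coarse scale."""
    ny = (img.shape[0] // n) * n  # trim to a multiple of the block size
    nx = (img.shape[1] // n) * n
    return img[:ny, :nx].reshape(ny // n, n, nx // n, n).mean(axis=(1, 3))
```

Averaging (rather than summing) preserves surface-brightness units, which is what matters when converting each coarse pixel to a surface density.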
After flagging the clump regions as red and blue in Fig. \ref{fig:SFR_vs_Mstar}, we note the following: (1) the clump regions within the galaxy define the upper end of SFR surface density for this galaxy, and (2) the figure provides support, at larger spatial scales, for the radial trends in clump properties described in \S4.1. Large-sample resolved studies of galaxies (see \citealt{wuyts2013,hemmati2014,cd2016,magdis2016}) find that this spatially-resolved relation is observed across redshifts and wavelengths, but the derived slopes and zero-points vary. A power-law fit to all of the data points in Fig. \ref{fig:SFR_vs_Mstar} results in a slope that is remarkably near unity:
\begin{equation}
\rm log(\Sigma_{SFR}) = (-8.58\pm0.21)\ +\ (1.041\pm0.03)\ log(\Sigma_{M_{star}}).
\end{equation}
\noindent A separate fit of only the data points associated with clumps results in a flatter relation:
\begin{equation}
\rm log(\Sigma_{SFR}) = (-3.69\pm3.81)\ +\ (0.44\pm0.49)\ log(\Sigma_{M_{star}}).
\end{equation}
\noindent We note that when computing these relations, we omitted a number of very low signal-to-noise data points at large radii in both the H$\alpha$ and $K_{P}$ maps, because they were sufficiently near the sky background that their errors were likely dominated by systematics from sky subtraction.
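The power-law fits above are ordinary least-squares fits in log-log space. The sketch below illustrates the procedure on synthetic pixel values drawn around the all-pixel relation (slope and intercept from Eq. 1; the 0.3 dex scatter and the surface-density range are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical pixel values spanning an assumed surface-density range.
log_sigma_mstar = rng.uniform(7.0, 9.5, 500)
log_sigma_sfr = (-8.58 + 1.041 * log_sigma_mstar
                 + rng.normal(0.0, 0.3, 500))  # 0.3 dex scatter

# Ordinary least-squares power-law fit in log-log space.
slope, intercept = np.polyfit(log_sigma_mstar, log_sigma_sfr, 1)
```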
\cite{cd2016} use integral field spectroscopy (IFS) observations of 306 local galaxies from the CALIFA survey ($0.005<z<0.03$) to derive this relation and find $\rm log(\Sigma_{SFR}) = -7.95 + 0.72\ log(\Sigma_{M_{*}})$. This is significantly less steep than that observed at high-$z$: \cite{wuyts2013} use kpc-scale multi-wavelength broad-band imaging (from CANDELS) and H$\alpha$ surface brightness profiles (from 3D-HST) for 473 star forming galaxies ($0.7<z<1.5$) and find $\rm log(\Sigma_{SFR}) = -8.4 + 0.95\ log(\Sigma_{M_{*}})$. For reference, we plot both of these observed relations in Fig. \ref{fig:SFR_vs_Mstar}.
Nearly all regions within G04-1 directly overlap with the star forming main-sequence relation derived from observations of high-$z$ galaxies. Indeed, the slope and intercept values are remarkably similar. In terms of its integrated properties, G04-1 lies offset from the star forming main-sequence. However, its location at $z\sim 0.1298$ results in some ambiguity as to the reason for this offset. For example, one wonders whether G04-1 is a normal, local star forming galaxy which simply hosts high-SFR clumps (i.e. a scenario where the clumps were completely externally formed and then accreted). From this figure we infer that this is likely not to be the case. Instead, Fig. \ref{fig:SFR_vs_Mstar} suggests that all regions (both the clump and intra-clump regions) within the galaxy are experiencing an enhanced mode of star formation, more like what is routinely observed in galaxies at high-$z$.
\subsection{Stellar Clumps are Co-located with H$\alpha$}
Where possible, we calculated the spatial offsets between the locations of the NIR clumps discussed in the present paper and their counterparts in the \cite{fisher2017a} H$\alpha$ maps. On average, clumps in K-band have centers (determined via centroiding; see \S3) which are displaced from their corresponding H$\alpha$ centers (from \citealt{fisher2017a}) by about 2.6 pixels ($\sim$0.1$\arcsec$). This close alignment of clumps can be visualized in the enlarged clump multi-band panels provided in Fig. \ref{fig:clump_apertures}. As this average offset is comparable to the width of the AO PSF of our observations ($\sim$3 pix), we cannot determine whether it is real or an artifact of the PSF. Clumps in the ring region of G04-1 appear more offset from their H$\alpha$ counterparts than those in the galaxy's arms. However, as stated in \S3, a number of the apertures (12, 13, \& 14) taken from \cite{fisher2017b} for clumps in the ring region of the galaxy required transformation (rotation and aperture-size modification) in order to adequately encompass the K-band clump flux. Indeed, we identify fewer clumps in the ring of G04-1 than \cite{fisher2017a}. These differences are a likely consequence of the significant amount of additional light contributed by the central disk component in the K-band imaging, which may wash out the concentrated light from individual clumps within the ring and blend structure. These effects offer a reasonable explanation for the greater offsets of the ring clumps.
In general, the stellar clumps appear well-aligned with the active star forming regions observed in the H$\alpha$ map, implying that these more evolved stellar populations maintain a link with regions of recent star formation. If stellar clumps are long-lived structures, this would suggest they do not undergo a single burst and then shut off, but instead continue to experience star formation. However, we observe a number of clumps (IDs: 4, 7, 11, \& 15) for which we do not observe an obvious H$\alpha$ component. This is quite interesting because, as seen in Fig. \ref{fig:SFR_vs_Mstar}, all of the regions in G04-1 associated with clumps exhibit high observed star formation rate surface densities, which would imply that all stellar clumps should be H$\alpha$-bright. We have identified two possible scenarios which may explain this. First, the light from these HII regions may be obscured by a molecular cloud situated along our line-of-sight. While this would certainly be a very specific situation, it is statistically possible, as typical HII regions are of order a few tens of parsecs and only a few would be required to cover something of order the scale of a clump. Alternatively, these clumps may indeed have turned off in terms of their star formation and are now simply wandering through the gas-rich disk. This second scenario is quite interesting, as numerical simulations (e.g. \citealt{bournaud2014}) suggest that clumps massive enough to survive as long-lived structures should eventually re-accrete gas from the disk and re-ignite star formation. Certainly, more work teasing out the details of these stellar populations is required to determine which of these scenarios is more likely.
\section{Summary}\label{sec:5}
In this paper, we present a case-study of stellar clumps in a gas-rich, clumpy turbulent disk galaxy from the DYNAMO sample.
\begin{itemize}
\item We present new K-band imaging of G04-1 using Keck-NIRC2 and \textit{Hubble Space Telescope} WFC/ACS observations using the F336W ($U$) and F467M (Stromgen $b$) filters.
\item We identify 15 clumps in K-band light of G04-1 that are evenly distributed in mass, ranging from 0.36 to $27.0\times10^{7}\ \ensuremath{{\rm M}_\odot}$ (Clump IDs 14 \& 9, respectively). Subtraction of the local disk component from clump light results in a drop in clump mass estimates of around 50\%, corresponding to a median disk-corrected clump mass of $\sim 1.5\times 10^{7}\ \ensuremath{{\rm M}_\odot}$.
\item We find evidence of radial trends in clump stellar properties. Clumps closer to the galaxy nucleus are observed to be more massive and appear consistently more red, suggesting that these stellar populations may be more evolved. We do not find evidence of a radial dependence on the number density of clumps.
\item We investigate the relationship between the star formation rate and stellar mass surface densities using high-resolution maps in $K_{P}$ (from this paper) and H$\alpha$ (from \textit{HST}; presented in \citealt{fisher2017b}). A power-law fit to the data results in slope and intercept values (1.041 $\pm$ 0.03 \& -8.58 $\pm$ 0.21, respectively) similar to those derived from populations of high-$z$ galaxies. Indeed, nearly all regions in G04-1 appear to be undergoing an enhanced mode of star formation.
\end{itemize}
\section*{Acknowledgements}
Some of the data presented herein were obtained at the W. M. Keck Observatory, which is operated as a scientific partnership among the California Institute of Technology, the University of California and the National Aeronautics and Space Administration. The Observatory was made possible by the generous financial support of the W. M. Keck Foundation.
The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain.
The above work is also partly based on observations made with the NASA/ESA \textit{Hubble Space Telescope}, obtained from the Data Archive at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS 5-26555.
This publication makes use of data products from the Two Micron All Sky Survey (2MASS), which is a joint project of the University of Massachusetts and the Infrared Processing and Analysis Center/California Institute of Technology, funded by the National Aeronautics and Space Administration and the National Science Foundation.
This research made use of \textsc{Astropy}, a community-developed core Python package for Astronomy (Astropy Collaboration, 2013).
HAW and RGA thank NSERC and the Dunlap Institute for Astronomy and Astrophysics for financial support. The Dunlap Institute is funded through an endowment established by the David Dunlap family and the University of Toronto.
DBF acknowledges support from Australian Research Council (ARC) Future Fellowship FT170100376 and ARC Discovery Program grant DP160102235. ADB acknowledges partial support from AST1412419.
KG acknowledges support from Australian Research Council (ARC) Discovery Program (DP) grant DP130101460. Support for this project is provided in part by the Victorian Department of State Development, Business and Innovation through the Victorian International Research Scholarship (VIRS).
\bibliographystyle{apj}
Planet formation and evolution are heavily dependent on the circumstellar environment. The circumstellar material can dictate formation, composition, orbital parameters, and migration of planets. While much has been learned in recent years about circumstellar disks, especially with ALMA \citep{AndrewsDiskSubstructuresHigh2018}, the evolution from protoplanetary disk to debris disk is not well understood. This time period is crucial for the final stages of the growth of terrestrial planets and the early evolution of their atmospheres \citep[e.g.][]{KenyonTerrestrialPlanetFormation2006, OlsonNebularatmospheremagma2019}. Additionally, ALMA typically images only the outer regions of disks; it is also of interest to understand the inner few AU of a system where many exoplanets are found and where the systems' habitable zones are.
Protoplanetary disks consist of gas, dust, and eventually planetesimals. All three components play a crucial role in the formation and evolution of planets. Measurements of micron-sized dust are relatively easy, as dust can produce detectable amounts of infrared (IR) emission, even through the debris disk stage. However, the primordial gas, which is mostly molecular hydrogen, is assumed to be 99\% of the protoplanetary disk mass \citep[see review by][]{WilliamsProtoplanetaryDisksTheir2011} and controls much of the disk dynamics, such as altering orbits of planetesimals and planets \citep[e.g.,][]{WeidenschillingAerodynamicssolidbodies1977, GoldreichEccentricityEvolutionPlanets2003,YoudinStreamingInstabilitiesProtoplanetary2005, BaruteauPlanetDiskInteractionsEarly2014} and potentially producing rings and spirals \citep{LyraFormationsharpeccentric2013, GonzalezSelfinduceddusttraps2017}. Lower gas fractions and optically thin gas are expected in debris disks \citep{WyattEvolutionDebrisDisks2008}, although the precise gas fraction is poorly constrained \citep{MatthewsObservationsModelingTheory2014} and possibly varies significantly between disks. But even small amounts of optically thin gas can still have a large effect on disk dynamics \citep[e.g.][]{TakeuchiDustMigrationMorphology2001, LyraFormationsharpeccentric2013}. Thus, to understand the evolution of the circumstellar environment, we must understand how the hydrogen evolves.
However, H$_2$ is notoriously hard to detect. Its only allowed electric dipole transitions are in the ultraviolet (UV). In most circumstellar environments, those transitions require excited H$_2$, which can occur in circumstellar disks with warm gas \citep{NomuraMolecularhydrogenemission2005, AdamkovicsFUVIrradiatedDisk2016}. Warm gas is common in protoplanetary disks, but is less likely to be found in debris disks because of the generally large distance of the gas from the central star. Chromospheric and transition region lines, such as Ly$\alpha$, pump the H$_2$ molecule from an excited level in the ground electronic state to the first (Lyman) or second (Werner) electronic levels. Because of extremely high oscillator strengths, the excited molecule immediately decays back to the ground electronic level in a fluorescent cascade, emitting photons. The set of emission lines produced by transitions from a single excited electronic state to the multiple allowed ground electronic states is called a progression. Within a given progression, H$_2$ line fluxes are proportional to their branching ratios \citep{WoodAnalysisH2Emission2002, HerczegFarUltravioletSpectraTW2004}. Because of this, far UV spectra are a powerful way to characterize the warm H$_2$ gas. Emission from these lines is a probe of gas temperature. But as all these transitions are in the UV, they require data from space-based observatories, thus limiting the number of observations currently available. There are magnetic quadrupole transitions in the IR that have been detected in protoplanetary disks \citep[e.g][]{WeintraubDetectionQuiescentMolecular2000, BaryDetectionsRovibrationalH22003}; unfortunately, they are weak and require much larger amounts of warm H$_2$ than debris disks typically have in order to detect them \citep[e.g.][]{BitnerTEXESSurveyH22008, Carmonasearchmidinfraredmolecular2008}.
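Because fluxes within a progression scale with branching ratios, a measured total progression flux can be apportioned line by line. The sketch below uses hypothetical line labels and branching-ratio values purely for illustration; real values come from H$_2$ molecular data:

```python
# Hypothetical branching ratios for three lines of one progression.
branching = {"line_a": 0.25, "line_b": 0.15, "line_c": 0.05}
F_progression = 1.0e-14  # total progression flux [erg s^-1 cm^-2], hypothetical

# Each line carries a share of the total flux equal to its
# branching ratio divided by the sum over the progression.
norm = sum(branching.values())
fluxes = {line: F_progression * b / norm for line, b in branching.items()}
```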
To try to get around these issues, other molecules, most notably IR and millimeter transitions of HD and more commonly CO, have been used to trace the H$_2$ \citep[e.g.][]{TrapmanFarinfraredHDemission2017}. However, neither is a perfect tracer, and both rely on an assumed ratio to H$_2$. For example, disk mass estimates have often used the ISM CO/H$_2$ of $\sim$10$^{-4}$, consistent with the value found by \citet{FranceCOH2Abundance2014} based on CO and H$_2$ observations in the UV, but other recent studies have shown that CO appears depleted in protoplanetary disks \citep{FavreSignificantlyLowCO2013, SchwarzUnlockingCODepletion2019, McClureCarbondepletionobserved2019}. Furthermore, the differences in chemistry and mass between the molecular species mean that neither HD nor, especially, CO traces H$_2$ perfectly \citep{MolyarovaGasMassTracers2017, AikawaMultiplePathsDeuterium2018}.
Molecular hydrogen emission has been detected in every protoplanetary and transition disk that has far UV spectral observations \citep[e.g.][]{ValentiIUEAtlasPre2000, ArdilaObservationsTauriStars2002, HerczegOriginsFluorescentH22006, InglebyFarUltravioletH2Emission2009, FranceHubbleSpaceTelescope2012, YangFarultravioletAtlasLowresolution2012, France1600EmissionBump2017}. Debris disks are not defined by their gas content --- they are instead defined by secondary dust produced from planetesimal collisions, which observationally translates into a fractional luminosity $f=L_{disk}/L_*$ less than 10$^{-2}$ --- but all evidence indicates that compared with protoplanetary disks, they have a smaller gas-to-dust ratio and less gas in total \citep[e.g.][]{ChenDustGasPictoris2007}. Other gas species, like CO, have been detected in debris disks \citep[e.g.][]{RobergeDebrisDisksNearby2008, MoorMolecularGasYoung2011, DentMolecularGasClumps2014, HiguchiDetectionSubmillimeterwaveEmission2017}, but the only previous potential detection of H$_2$ in what is clearly a debris disk is from AU Mic \citep{FranceLowMassH2Component2007}. This is not unexpected. While comets in our own Solar System produce CO, they do not produce H$_2$ \citep{MummaChemicalCompositionComets2011}. Thus, it is likely that secondary H$_2$ is not produced in the same manner as secondary CO. In several cases, there are arguments for the reclassification of systems based on the discovery of H$_2$, such as RECX 11 \citep{InglebyEvolutionXrayFarultraviolet2011}, HD 98800 B (TWA 4 B) \citep{YangFarultravioletAtlasLowresolution2012, RibasLonglivedProtoplanetaryDisks2018}, and potentially DoAr 21 \citep{BaryDetectionsRovibrationalH22003, JensenNoTransitionDisk2009}. But exactly when and on what timescale the H$_2$ dissipates is not known.
Since even small amounts of H$_2$ gas can have a significant impact on planetary systems at ages $\sim$10 Myr, we have begun a program to examine UV spectra of young stars that show no evidence of near-infrared (NIR) excess. One specific way that gas can impact a system is by limiting the IR flux from dust produced by planetesimal collisions. \citet{KenyonRockyPlanetFormation2016} show that there is a discrepancy between the incidence rate of dust expected to be produced by terrestrial planet formation (2 to 3\% of young systems) and the incidence rate of close-in terrestrial planets (20\% of mature systems). Gas, however, could sweep away that dust via gas drag, making it harder to detect. Thus, it is critical to understand the evolution of H$_2$ in the terrestrial planet forming regions.
\section{Target and Observations}
TWA 7 is an M dwarf that is part of the $\sim$7-10 Myr TW Hya Association \citep{WebbDiscoverySevenTauri1999}. Recent spectral classifications assign an M2 or M3 spectral type \citep{ManaraXshooterspectroscopyyoung2013, HerczegOpticalSpectroscopicStudy2014}; we adopt M2.5. The star is surrounded by a debris disk that was first detected due to its IR excess at 24 and 70 $\mu$m by \citet{LowExploringTerrestrialPlanet2005} with the Spitzer Space Telescope. However, the lack of near IR excess \citep{WeinbergerSearchWarmCircumstellar2004} and typical accretion signatures \citep{JayawardhanaAccretionDisksYoung2006} strongly imply that it is a ``cool'' debris disk, making it one of the few known M stars with a debris disk \citep{TheissenWarmDustCool2014}. The dust in the disk has since been detected in the FIR at 450 and 850 $\mu$m using the James Clerk Maxwell Telescope \citep{MatthewsMassTemperatureTWA2007} and at 70, 160, 250, and 350 $\mu$m using the Herschel Space Observatory \citep{CiezaHerschelDIGITSurvey2013}. No [O I] was detected by Herschel at 63 $\mu$m \citep{Riviere-MarichalarGasdustTW2013}, but CO has been recently detected using ALMA in the J=3-2 transition \citep{MatraUbiquityStellarLuminosity2019}. The disk has been imaged in the IR with SPHERE showing spiral arms near 25 AU \citep{OlofssonResolvingfaintstructures2018}. \citet{YangFarultravioletAtlasLowresolution2012} and \citet{FranceHubbleSpaceTelescope2012} both failed to detect H$_2$ around TWA 7 in UV spectra. \citet{YangFarultravioletAtlasLowresolution2012} used a less sensitive prism spectrum; \citet{FranceHubbleSpaceTelescope2012} looked at 12 H$_2$ features separately, as opposed to detecting the combined H$_2$ emission from many features as we do in this paper. (See Section \ref{cc_sec}.)
We can put some constraints on the expected H$_2$ based on dust and CO measurements. For TWA 7, the total dust mass in the disk, M$_d$, is 2$\times$10$^{-2}$ M$_\oplus$ \citep{BayoSubmillimetrenoncontaminateddetection2019}, while the mass of CO in the disk, M$_{CO}$, is 0.8-80$\times$10$^{-6}$ M$_\oplus$ \citep{MatraUbiquityStellarLuminosity2019}. Based on these estimates, if TWA 7 has an ISM value for the CO/H$_2$ ratio of $\sim$10$^{-4}$, then we can expect M$_{H_2}$ to be on the order of M$_d$. If it has a lower CO/H$_2$ of $\sim$10$^{-6}$, as TW Hya has \citep{FavreSignificantlyLowCO2013}, we can expect M$_{H_2}$ to be 100$\times$ larger than M$_d$, consistent with the ISM gas-to-dust ratio \citep{SpitzerPhysicalprocessesinterstellar1978}. Models that have explored gas-to-dust ratios between 0.01 and 100 indicate that gas can significantly influence the disk dynamics \citep{YoudinStreamingInstabilitiesProtoplanetary2005, LyraFormationsharpeccentric2013, GonzalezSelfinduceddusttraps2017}, so in either case, H$_2$ could play an important if not dominant role in TWA 7's disk dynamics. Given the presence of this distant reservoir of gas, we explore here the possibility that an (as yet unseen) reservoir of gas is also present at smaller disk radii, in the terrestrial planet region of the disk.
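As a sanity check, the order-of-magnitude arithmetic above can be reproduced directly; the snippet below deliberately treats CO/H$_2$ as a bulk ratio, the implicit simplification in these estimates, and the quoted masses are taken from the text.

```python
# Order-of-magnitude check of the H2 mass ranges implied above,
# treating CO/H2 as a bulk ratio (a deliberate simplification).
M_d = 2e-2                          # dust mass of TWA 7's disk, Earth masses
M_CO_lo, M_CO_hi = 0.8e-6, 80e-6    # CO mass range, Earth masses

for co_h2, label in [(1e-4, "ISM-like"), (1e-6, "TW Hya-like")]:
    lo, hi = M_CO_lo / co_h2, M_CO_hi / co_h2
    print(f"{label} CO/H2 = {co_h2:.0e}: M_H2 = {lo:.1e} - {hi:.1e} M_earth")
```

With the ISM-like ratio, the implied H$_2$ mass range brackets the dust mass itself; with the TW Hya-like ratio, it brackets roughly 100 times the dust mass, matching the statements above.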
For this work, we use archival HST-Cosmic Origins Spectrograph (COS) observations of TWA 7 from May 2011 (PID 11616, PI: G. Herczeg). The data were acquired with the far UV medium resolution modes of COS: G130M and G160M. These spectra have a spatial resolution of 1'' and a wavelength uncertainty $\sim$15 km/s \citep{cosmic2020}. The observations are at a range of central wavelengths that allow us to get a contiguous spectrum that spans from 1133 to 1795 \AA\ (Figure \ref{spectrum}). In addition to TWA 7, we also analyze spectra of classical T Tauri stars (CTTS) and main sequence M dwarf stars for comparison purposes (Table \ref{compstars}) taken between December 2009 and August 2015. The CTTS were chosen from the stars analyzed by \citet{FranceHubbleSpaceTelescope2012} that had extinction values measured by both \citet{HerczegOpticalSpectroscopicStudy2014} and \citet{FurlanSpitzerInfraredSpectrograph2011}. The main sequence M dwarfs were from \citet{KruczekH2FluorescenceDwarf2017}, chosen because they had H$_2$ detected from the stellar photosphere and COS spectra that covered a comparable wavelength range. One of the six M dwarfs --- GJ 581 --- has a cold, faint debris disk \citep{LestradeDEBRISdiskplanet2012}, but it is much older (2-8 Gyr) and less active \citep{SchoferCARMENESsearchexoplanets2019} than TWA 7 or the CTTS. Its disk is also significantly less luminous than that of TWA 7 \citep{ChoquetFirstimagesdebris2016}. The remaining five M dwarfs have no detected disks. All spectra were observed with COS in a similar manner. Spectra were reduced by the CALCOS pipeline. Multiple observations were then co-added into one spectrum as described by \citet{DanforthEmpiricallyEstimatedFarUV2016}. The TWA 7 spectrum we analyzed is plotted in Figure \ref{spectrum}.
\begin{figure}
\centering
\includegraphics[width=6.4in]{spectrum_labled.png}
\caption{The spectrum of TWA 7 used for analysis with the most prominent stellar features labeled. Note that the Ly$\alpha$ profile is largely geocoronal airglow emission and was thus not used. \label{spectrum}}
\end{figure}
We also used archival HST-STIS spectra of TW Hya, reduced with the STIS pipeline. For each observation, we combined the orders to create a single spectrum. We then co-added the observations in a similar manner to the way we co-added the observations from COS.
\begin{deluxetable}{llrrrl}
\tabletypesize{\small}
\tablewidth{0pt}
\tablecaption{Stellar Properties}
\tablehead{
\colhead{Object} & \colhead{PID/PI} & \colhead{Distance} & \colhead{RV} & \colhead{A$_V^a$} & \colhead{A$_V^b$}
\vspace{-5pt}
\\
\colhead{} & \colhead{} & \colhead{(pc)} & \colhead{(km s$^{-1}$)} & \colhead{(mag)} & \colhead{(mag)}
}
\startdata
\textbf{TWA 7} & \textbf{11616/Herczeg} & \textbf{34.0} & \textbf{11.4} & \nodata & \textbf{0.00}$^c$ \\
\cutinhead{Classical T Tauri Stars}
AA Tau & 11616/Herczeg & 136.7 & 17.0 & 1.9 & 0.40 \\
BP Tau & 12036/Green & 128.6 & 15.2 & 1.0 & 0.45 \\
DE Tau & 11616/Herczeg & 126.9 & 15.4 & 0.9 & 0.35 \\
DM Tau & 11616/Herczeg & 144.5 & 18.6 & 0.0 & 0.10 \\
DR Tau & 11616/Herczeg & 194.6 & 21.1 & 1.4 & 0.45 \\
GM Aur & 11616/Herczeg & 159.0 & 15.2 & 0.6 & 0.30 \\
HN Tau & 11616/Herczeg & 136.1 & 4.6 & 1.0 & 1.15 \\
LkCa 15 & 11616/Herczeg & 158.2 & 17.7 & 1.0 & 0.30 \\
SU Aur & 11616/Herczeg & 157.7 & 14.3 & 0.9 & 0.65 \\
UX Tau & 11616/Herczeg & 139.4 & 15.5 & 0.5 & 0.00$^c$ \\
\cutinhead{Main Sequence M Stars with H$_2$}
GJ 176 & 13650/France & 9.5 & 26.2 & \nodata & \nodata \\
GJ 832 & 12464/France & 5.0 & 13.2 & \nodata & \nodata \\
GJ 667 C & 13650/France & 7.2 & 6.4 & \nodata & \nodata \\
GJ 436 & 13650/France & 9.8 & 9.6 & \nodata & \nodata \\
GJ 581 & 13650/France & 6.3 & -9.4 & \nodata & \nodata \\
GJ 876 & 12464/France & 4.7 & -1.6 & \nodata & \nodata \\
\cutinhead{STIS Spectra}
TW Hya & 11608/Calvet & 60.1 & 13.4 & \nodata & 0.00
\enddata
\tablecomments{Properties of stars analyzed in this paper. \\
$^a$ A$_V$ from \citet{FurlanSpitzerInfraredSpectrograph2011}\\
$^b$ A$_V$ from \citet{HerczegOpticalSpectroscopicStudy2014} with uncertainties of 0.15 mag.\\
$^c$ The measured value was negative. Since this is unphysical, we adopted an extinction of 0.0 mag. \\
Distances are from \citet{Bailer-JonesEstimatingDistanceParallaxes2018} based on Gaia DR2 \citep{CollaborationGaiaDataRelease2018}. RVs of the T Tauri stars are from \citet{NguyenCloseCompanionsYoung2012}, from Gaia DR2 for the M stars, and from \citet{TorresSearchassociationscontaining2006} for TW Hya and TWA 7.
Based on extinction measurements from stars in the Local Bubble \citep{LeroyPolarimetricInvestigationInterstellar1993}, we assume these Main Sequence M stars have no extinction.
\label{compstars}}
\end{deluxetable}
\section{Analysis and Results}
\subsection{Methods for Cross-Correlation}\label{cc_sec}
In protoplanetary disks and nearby M dwarfs, the strength of the H$_2$ lines makes them clearly detectable above the noise; however, this is not the case for systems with smaller amounts of H$_2$ flux. Instead, we take advantage of the many weak H$_2$ lines in the system and use a cross-correlation function (CCF), a technique that has been used previously with IR data to study gas in protoplanetary disks \citep{HartmannFurtherevidencedisk1987}. The CCF allows us to combine the signal from multiple lines into one signal by calculating how well the spectrum correlates with that of a model template \citep{Tonrysurveygalaxyredshifts1979}. Our full template was created using the procedure from \citet{McJunkinEmpiricallyEstimatedFarUV2016} for a temperature of 2500 K and a column density of log$N$(H$_2$)=19 cm$^{-2}$. As temperature and density have little effect on the relative strengths of these lines in protoplanetary disks \citep{FranceHubbleSpaceTelescope2012}, we used the same template for all the stars. Ly$\alpha$ also has a significant impact on H$_2$ line strengths, but its profile is contaminated by self-absorption, ISM absorption, and geocoronal airglow, and thus cannot be used for all of our targets. As a result, we do not consider the shape of the Ly$\alpha$ profile when defining our template. We use a single FWHM of 0.047 \AA\ for the lines in the template, chosen solely because it is the width that maximizes the CCF for TWA 7. Templates for individual progressions were created by picking the lines from the full template based on \citet{AbgrallTableLymanBand1993} (Figure \ref{template}). Although there are many H$_2$ lines, focusing only on the strongest lines gives the clearest signal. We chose the minimum H$_2$ line strength in the template that maximized the CCF detection for TWA 7 for each progression.
The minimum line strength is dependent on how strong the lines in that progression are: progressions with weaker fluxes require smaller minimum line strengths. While we analyzed 12 different progressions (Table \ref{pros}), chosen because all 12 were detected by \citet{FranceHubbleSpaceTelescope2012} in protoplanetary disks, our analysis focused on the progressions that typically produce the most H$_2$ flux in CTTS --- [1,4], [1,7], [0,1], and [0,2]. Each of these progressions is excited by Ly$\alpha$ and can decay to multiple lower states, resulting in a set of H$_2$ emission lines throughout the UV. The total summed flux in an individual progression is a function of having enough Ly$\alpha$ photons to pump the H$_2$ molecule to the excited state (Figure \ref{cartoon}), the filling factor of H$_2$ around the Ly$\alpha$, the column density in the excited rovibrational level of the X electronic state, and the oscillator strength of the pump transition \citep{HerczegOriginsFluorescentH22006}.
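As a concrete illustration of the cross-correlation step described above, the sketch below builds a toy template of Gaussian emission lines, injects it into noisy data at a known pixel shift, and recovers that shift from the peak of the unnormalized CCF; the line positions, widths, and noise level are all invented for illustration, not the measured values.

```python
import numpy as np

rng = np.random.default_rng(0)

def unnormalized_ccf(flux, template):
    """Unnormalized CCF of a spectrum segment against a template segment.
    No division by segment length, so CCFs of different segments can
    later be summed linearly."""
    f = flux - flux.mean()
    t = template - template.mean()
    lags = np.arange(-len(t) + 1, len(flux))
    return lags, np.correlate(f, t, mode="full")

# Toy template: two emission lines on a flat continuum.
x = np.arange(400)
template = (np.exp(-0.5 * ((x - 150) / 3.0) ** 2)
            + 0.7 * np.exp(-0.5 * ((x - 260) / 3.0) ** 2))

# "Observed" segment: the same lines shifted by 5 pixels, plus noise.
spectrum = np.roll(template, 5) + rng.normal(0.0, 0.05, x.size)

lags, ccf = unnormalized_ccf(spectrum, template)
print("CCF peak at lag:", lags[np.argmax(ccf)])  # should recover ~5 pixels
```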
\begin{deluxetable}{lcrcclc}
\tabletypesize{\small}
\tablewidth{0pt}
\tablecaption{Progressions Analyzed}
\tablehead{
\colhead{[\hbox{$v^\prime$},\hbox{$J^\prime$}]} & \colhead{$\lambda_{pump}$} & \colhead{velocity} & \colhead{TW Hya H$_2$ Flux} & \colhead{oscillator strength} & \colhead{[\hbox{$v^{\prime\prime}$},\hbox{$J^{\prime\prime}$}]} & \colhead{E$^{\prime\prime}$}
\vspace{-5pt}
\\
\colhead{} & \colhead{(\AA)} & \colhead{(km s$^{-1}$)} & \colhead{(10$^{-15}$ erg cm$^{-2}$ s$^{-1}$)} & \colhead{($\times$ 10$^{-3}$)} & \colhead{} & \colhead{(eV)}
}
\startdata
$[3,13]$ & 1213.356 & -571 &\ \ 4.7 & 20.6 & [1,14] & 1.79 \\
$[4,13]$ & 1213.677 & -491 &\ \ 2.4 & \quad 9.33 & [2,12] & 1.93 \\
$[3,16]$ & 1214.465 & -297 & 14.9 & 23.6 & [1,15] & 1.94 \\
$[4,4]$ & 1214.781 & -219 &\ \ 8.9 & \quad 9.90 & [3,5] & 1.65 \\
$[1,7]$ & 1215.726 & 14 & 16.2 & 34.8 & [2,6] & 1.27 \\
$[1,4]$ & 1216.070 & 99 & 36.0 & 28.9 & [2,5] & 1.20 \\
$[3,0]$ & 1217.038 & 338 & \ \ 3.5 & \quad 1.28 &[3,1] & 1.50 \\
$[0,1]$ & 1217.205 & 379 & 37.9 & 44.0 & [2,0] & 1.00 \\
$[0,2]$ & 1217.643 & 487 & 33.4 & 28.9 & [2,1] & 1.02 \\
$[2,12]$ & 1217.904 & 551 & 18.4 & 19.2 & [1,13] & 1.64\\
$[2,15]$ & 1218.521 & 704 & \ \ 3.1 & 18.0 & [1,14] & 1.79 \\
$[0,3]$ & 1219.089 & 844 & \ \ 2.1 & 25.5 &[2,2] & 1.04 \\
\enddata
\tablecomments{Velocity is from Ly$\alpha$ center. TW Hya H$_2$ Flux as measured by \citet{HerczegOriginsFluorescentH22006}. Oscillator strengths of the pumping transitions calculated by \citet{HerczegOriginsFluorescentH22006} based on \citet{AbgrallTableLymanBand1993}. \\
$[$v$^{\prime\prime}$,J$^{\prime\prime}$] and E$^{\prime\prime}$ are the lower level in the electronic ground state for the pumping transition and the corresponding energy for that state.\\
Each of these progressions is pumped by Ly$\alpha$ flux and can decay to multiple lower states, resulting in a set of H$_2$ emission lines throughout the UV.
\label{pros}}
\end{deluxetable}
\begin{figure}
\centering
\includegraphics[width=6.4in]{progressiontemplates.png}
\caption{Templates of the most prominent progressions used for cross-correlation. A cutoff for a minimum line strength of 5$\times$10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$ is also shown. \label{template}}
\end{figure}
Since we want to be sure that we are only cross-correlating continuum and H$_2$ emission (plus the associated noise) and not emission from hot gas lines from the chromosphere or transition region, we masked out FUV lines commonly seen in lower mass stars from \citet{HerczegFarUltravioletSpectrumTW2002}, \citet{BrandtDor94Hubble2001}, and \citet{AyresFarUltravioletUpsDowns2015}. As these lines have different widths in different stars depending on numerous properties, we erred on the side of masking the wavelength regions covered by the broadest of these features to minimize the chance of a false positive from a line that was not H$_2$.
\begin{figure}
\centering
\includegraphics[width=6.4in]{ly_alpha_cartoon.png}
\caption{Schematic Ly$\alpha$ profiles. The solid purple line corresponds to a broader profile, as seen in CTTS, while the dashed pink line is indicative of Main Sequence stars with a narrower profile. The pumping wavelengths of some prominent progressions are marked. The strongest progressions have not only substantial Ly$\alpha$ flux but also larger oscillator strengths and smaller lower state energies (Table \ref{pros}). \label{cartoon}}
\end{figure}
Cross-correlating the entire spectrum with the entire masked template returns a tentative detection. However, this involves cross-correlating a significant amount of noise, which can weaken the detection. Therefore we created segments of spectrum for each H$_2$ feature, $\sim$1 \AA\ ($\sim$200 km/s) wide and centered on the wavelengths of expected H$_2$ lines, which is wide enough to get the entire line profile without adding too much continuum flux or noise. (We cannot be certain as to whether the photons detected outside of lines are from the star itself, as M dwarfs have very little stellar continuum flux in these regions, so we will refer to the region outside of lines as ``continuum/noise.'') To calculate the final CCF, we explored two procedures to verify any findings. With both methods, if the flux for every line is emitted at a similar relative velocity (within $\sim$10 km s$^{-1}$), the CCF's signal will grow stronger. In the first method, we created one long spectrum by putting all the individual segments end-to-end. We did the same for the corresponding template segments. We then cross-correlated this pieced-together spectrum with the same regions from the template. In the second method, we cross-correlated each segment of spectrum individually with its corresponding template segment and added the cross-correlation functions. Because of this, we chose to use a CCF that has not been normalized for length, which is usually the last step of calculating the CCF. Unnormalized CCFs work equally well for both of our methods; normalized CCFs of different lengths cannot be added linearly, because longer CCFs should be weighted more.
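The second method (summing the unnormalized per-segment CCFs) can be sketched as follows; the segment length, line strength, and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def unnormalized_ccf(a, b):
    """Unnormalized CCF: no per-length normalization, so per-segment
    CCFs can be summed linearly."""
    a = a - a.mean()
    b = b - b.mean()
    lags = np.arange(-len(b) + 1, len(a))
    return lags, np.correlate(a, b, mode="full")

# Three toy ~200-pixel segments, each containing one weak line at the
# same relative velocity (pixel position), buried in noise.
n, center, width = 200, 100, 3.0
x = np.arange(n)
line = np.exp(-0.5 * ((x - center) / width) ** 2)

segments = [0.5 * line + rng.normal(0.0, 0.2, n) for _ in range(3)]

# Cross-correlate each segment with its template and sum the CCFs.
total = np.zeros(2 * n - 1)
for seg in segments:
    lags, ccf = unnormalized_ccf(seg, line)
    total += ccf

print("combined CCF peak at lag:", lags[np.argmax(total)])
```

Because each segment carries the line at the same relative velocity, the per-segment signals add coherently near zero lag while the noise adds incoherently.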
\subsection{\texorpdfstring{H$_2$ Detection and Verification}{H2 Detection and Verification}}
\begin{figure}
\centering
\includegraphics[width=6.4in]{ccfs_3_wlines.png}
\caption{Cross-correlation functions of TWA 7 for the most prominent H$_2$ progressions. \label{ccf}}
\end{figure}
We detect peaks near the stellar radial velocity in the CCFs of the spectra of TWA 7 (Figure \ref{ccf}) using a minimum H$_2$ template line strength cutoff of 5$\times$10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. The peaks are detected with both methods --- segmented spectrum and added CCF --- for calculating the CCF. Although the peak is strongest when all of the most prominent progressions are included, we also see significant ($>$3$\sigma$) detections when some individual progressions are analyzed. While the peak for [0,1]+[0,2] is slightly off-center from the systemic velocity of 11.4 km/s, we attribute this to the uncertainties in the wavelength calibration that can lead to shifts in the resulting velocities by up to 15 km/s, as described in \citet{LinskyUltravioletSpectroscopyRapidly2012}.
The strength of the cross-correlation function is dependent on the S/N in the H$_2$ lines. Since we are trying to measure the height and significance of the CCF peak, we need to understand the noise properties of the observed spectrum. This is made more difficult because so few FUV photons reach us. For example, \citet{LoydMUSCLESTreasurySurvey2016} looked for FUV continuum in our M dwarf sample, obtaining a significant detection for only 3 of 6 targets. As a result, noise in our spectrum cannot be approximated as Gaussian, as it can when there are hundreds or thousands of counts. Typical continuum/noise regions in our TWA 7 spectrum have flux distributions like that in Figure \ref{kde}, where we show a histogram of flux levels found in the continuum/noise of TWA 7.
There are several potential issues in modeling this noise. The first is that there are undoubtedly unidentified lines that we do not mask, as possibly seen in the increase around 0.3$\times10^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. However, since other unidentified lines could possibly overlap with the H$_2$ lines, we choose not to remove this peak from the distribution. Another issue is that because of the low flux level, when the detector background gets subtracted, we end up measuring ``negative'' flux in some wavelength bins. To deal with this, we estimate the noise in two separate ways: with a scaled Poisson distribution and using the actual distribution fit with a kernel density estimator (KDE) \citep{RosenblattRemarksNonparametricEstimates1956, ParzenEstimationProbabilityDensity1962}, as shown in Figure \ref{kde}. The scaled Poisson was determined by calculating the skew of the distribution of continuum/noise, $\gamma_1$. The mean of the Poisson distribution $\lambda$ is then $\gamma_1^{-2}$. We then convert from counts to flux using a constant scaling factor determined by the mean of the distribution. The KDE was calculated with a Gaussian kernel using a bandwidth (equivalent to the sigma parameter) of 10$^{-17}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$. We randomly draw our noise from these distributions. These two noise models cover the range of possibilities of the underlying true noise, so a robust detection will only be evident if it occurs using both noise models.
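Both noise models can be sketched as below; the stand-in ``observed'' fluxes and the KDE bandwidth are illustrative rather than the measured values. The scaled-Poisson step uses the fact that a Poisson distribution with mean $\lambda$ has skew $\lambda^{-1/2}$.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Stand-in for measured continuum/noise fluxes: low-count data that have
# been background-subtracted, so some bins come out negative.
observed = rng.poisson(4.0, 5000) * 0.05 - 0.1

# --- Model 1: scaled Poisson determined from the sample skew ---
g1 = stats.skew(observed)
lam = g1 ** -2                  # skew of Poisson(lam) is lam**-0.5
scale = observed.mean() / lam   # constant counts-to-flux conversion
poisson_draws = rng.poisson(lam, 10000) * scale

# --- Model 2: Gaussian kernel density estimate of the distribution ---
kde = stats.gaussian_kde(observed, bw_method=0.1)  # bandwidth is illustrative
kde_draws = kde.resample(10000, seed=3)[0]

print(f"fitted Poisson mean: {lam:.2f}")
```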
\begin{figure}
\centering
\includegraphics[width=3.4in]{hist_kde.png}
\caption{Typical continuum/noise distributions, which we model in two ways: by creating a KDE of the distribution and by scaling a discrete Poisson distribution to the data. Both were normalized so that the total area is equal to 1. \label{kde}}
\end{figure}
To determine the significance of the detection, we used our noise models to create spectra containing only noise. We then cross-correlated these noise spectra with the template in the same ways we did for the TWA 7 spectrum and recorded the CCF maximum within 15 km s$^{-1}$ of the systemic velocity. We chose this range because it is the range we used to look for a detection, as COS has a velocity precision of 15 km s$^{-1}$. This procedure was repeated multiple times (see Table \ref{pros_sig}) for each type of noise to produce the distributions shown in Figure \ref{ccf_max_noise}. We then compared these CCF maxima to TWA 7's CCF maximum within 15 km s$^{-1}$ of the systemic velocity. The fraction of times the noise's CCF maximum was equal to or larger than TWA 7's CCF maximum is taken to be the probability of a false positive. The significance ($\sigma$) values we report are the equivalent probabilities for a Gaussian distribution.
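The false-positive estimate described here can be sketched as a small Monte Carlo; the template, noise level, and trial count below are illustrative stand-ins for the real templates and noise models.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

def ccf_max_near_zero(flux, template, window=3):
    """Maximum of the unnormalized CCF within +/- `window` pixels of
    zero lag (a stand-in for +/-15 km/s of the systemic velocity)."""
    f = flux - flux.mean()
    t = template - template.mean()
    ccf = np.correlate(f, t, mode="full")
    zero = len(t) - 1
    return ccf[zero - window: zero + window + 1].max()

n = 300
x = np.arange(n)
template = np.exp(-0.5 * ((x - n // 2) / 3.0) ** 2)

# "Observed" segment containing a weak real line plus noise.
observed = 0.4 * template + rng.normal(0.0, 0.15, n)
obs_max = ccf_max_near_zero(observed, template)

# Many pure-noise realizations drawn from the noise model.
trials = 2000
noise_maxima = np.array([ccf_max_near_zero(rng.normal(0.0, 0.15, n), template)
                         for _ in range(trials)])

false_pos = np.mean(noise_maxima >= obs_max)
# Gaussian-equivalent significance; clamped at the Monte Carlo floor.
sigma = stats.norm.isf(max(false_pos, 1.0 / trials))
print(f"false-positive fraction: {false_pos:.1e} -> {sigma:.1f} sigma")
```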
\begin{figure}
\centering
\includegraphics[width=6.4in]{just_noise.png}
\caption{Distribution of CCF heights for simulations of noise cross-correlated with the [0,1] and [0,2] progressions of the H$_2$ template. Based on 1,200,000 simulations for each set, we detect H$_2$ at a level of 4.5$\sigma$ for Poisson noise with the added CCFs, 4.6$\sigma$ for KDE sampled noise with the added CCFs, 3.9$\sigma$ for Poisson noise with the segmented spectrum CCF, and 4.0$\sigma$ for KDE sampled noise with the segmented spectrum CCF. \label{ccf_max_noise}}
\end{figure}
We did an initial trial of 32,000 simulations with each method to see if we could detect each progression individually. We obtain significant detections for [0,1], [0,2], [1,4], and [1,7], with significant being defined as $>$3$\sigma$ detections for all four methods; we also get a marginal detection ($>$3$\sigma$ for some but not all methods) for [0,3] (Table \ref{pros_sig}). We then investigated the detected progressions further. Using a line strength cutoff of 5$\times$10$^{-15}$ erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$, as shown in Figure \ref{template}, we detect H$_2$ at a significance $>$5$\sigma$ for all of our noise models and CCF types for the combination of the [1,4], [1,7], [0,1], and [0,2] progressions based on 3,500,000 simulations of each. For just the progressions on the wing --- [0,1] combined with [0,2] --- we did 1,200,000 simulations for each measurement. We detect H$_2$ at a level of 4.5$\sigma$ for Poisson noise with the added CCFs, 4.6$\sigma$ for KDE noise with the added CCFs, 3.9$\sigma$ for Poisson noise with the segmented spectrum CCF, and 4.0$\sigma$ for KDE sampled noise with the segmented spectrum CCF. The segmented spectrum CCF produces similar distributions for both types of noise, as shown on the right of Figure \ref{ccf_max_noise}, because it is more robust to slight differences in noise models.
\begin{deluxetable}{lrrrrrrc}
\tabletypesize{\small}
\tablewidth{0pt}
\tablecaption{Detection Significance of H$_2$ in Progressions}
\tablehead{
\colhead{} & \multicolumn{2}{c}{\underline{Added CCF}} & \multicolumn{2}{c}{\underline{Segmented Spectrum CCF}} & \colhead{} & \colhead{Minimum line strength} & \colhead{Lines}
\vspace{-5pt}
\\
\colhead{Progression} & \colhead{KDE} & \colhead{Poisson} & \colhead{ \hspace{7pt} KDE} & \colhead{ \hspace{17pt} Poisson} & \colhead{Simulations} & \colhead{erg s$^{-1}$ cm$^{-2}$ \AA$^{-1}$}
&\colhead{ Included}}
\startdata
$[3,13]$ & 2.2 & 2.2 & \hspace{2pt} 1.6 & 1.5 \quad \hspace{2pt} & 32000 & 0.1$\times$10$^{-15}$ & 9 \\
$[4,13]$ & 0.7 & 0.7 & \hspace{2pt} 0.8 & 0.8 \quad \hspace{2pt} & 32000 & 1.7$\times$10$^{-15}$ & 2 \\
$[3,16]$ & 2.4 & 2.4 & \hspace{2pt} 1.8 & 1.8 \quad \hspace{2pt} & 32000 & 1.2$\times$10$^{-15}$ & 9 \\
$[4,4]$ & 1.2 & 1.2 & \hspace{2pt} 2.0 & 2.0 \quad \hspace{2pt} & 32000 & 1.3$\times$10$^{-15}$ & 6 \\
$[1,7]$ & $>$4.0 & $>$4.0 & \hspace{2pt} $>$4.0 & $>$4.0 \quad \hspace{2pt} & 32000 & 7.5$\times$10$^{-15}$ & 2 \\
$[1,4]$ & $>$4.0 & $>$4.0 & \hspace{2pt} $>$4.0 & $>$4.0 \quad \hspace{2pt} & 32000 & 3.0$\times$10$^{-15}$ & 12 \\
$[3,0]$ & 0.7 & 0.7 & \hspace{2pt} 0.0 & 0.0 \quad \hspace{2pt} & 32000 & 3.5$\times$10$^{-15}$ & 2 \\
$[0,1]$ & 3.5 & 3.4 & \hspace{2pt} 3.2 & 3.1 \quad \hspace{2pt} & 32000 & 9.0$\times$10$^{-15}$ & 2\\
$[0,2]$ & $>$4.0 & $>$4.0 & \hspace{2pt} 3.3 & 3.3 \quad \hspace{2pt} & 32000 & 5.0$\times$10$^{-15}$ &2 \\
$[2,12]$ & 2.2 & 2.3 & \hspace{2pt} 2.6 & 2.6 \quad \hspace{2pt} & 32000 & 2.0$\times$10$^{-15}$ &2 \\
$[2,15]$ & 2.8 & 2.8 & \hspace{2pt} 2.7 & 2.7 \quad \hspace{2pt} & 32000 & 3.0$\times$10$^{-15}$ & 4 \\
$[0,3]$ & 3.3 & 3.1 & \hspace{2pt} 2.9 & 2.8 \quad \hspace{2pt} & 32000 & 3.0$\times$10$^{-15}$ & 5 \\
\hline
$[0,1]+[0,2]$ & 4.6 & 4.5 & \hspace{2pt} 4.0 & 3.9 \quad \hspace{2pt} & 1200000 & 5.0$\times$10$^{-15}$ & 6 \\
$[1,4]+[1,7]+[0,1]+[0,2]$ & $>$5.0 & $>$5.0 & \hspace{2pt} $>$5.0 & $>$5.0 \quad \hspace{2pt} & 3500000 & 5.0$\times$10$^{-15}$ & 20 \\
\enddata
\tablecomments{Significance of detection in each individual progression for our four different methods, as well as for the combination of [0,1] and [0,2] and the combination of [1,4], [1,7], [0,1], and [0,2].
\label{pros_sig}}
\end{deluxetable}
There are two unclassified background objects in the sky near TWA 7. However, given that we confirmed TWA 7 was centered on the 2.5"-diameter COS-PSA aperture, neither source is expected to fall within the aperture: they lie 2.5" \citep{Neuhauserpossibilitygroundbaseddirect2000} and 6" \citep{BayoSubmillimetrenoncontaminateddetection2019} from TWA 7. Furthermore, as we required that the CCF peak at the radial velocity of the object fall within the error of the spectrograph, we consider these background sources an extremely unlikely origin for the H$_2$.
\subsection{\texorpdfstring{Determining the Origin of the H$_2$}{Determining the Origin of the H2}}\label{origin}
Simply detecting H$_2$ does not indicate that the H$_2$ is circumstellar in origin, because some M stars are known to show H$_2$ emission pumped by Ly$\alpha$ \citep{KruczekH2FluorescenceDwarf2017}. Active M stars, like TWA 7 \citep{YangMagneticPropertiesYoung2008}, have strong chromospheric Ly$\alpha$ emission, which can pump H$_2$ in star spots or in their lower chromospheres. Since TWA 7's debris disk is nearly face on at an inclination of 13$^\circ$ \citep{OlofssonResolvingfaintstructures2018}, and the resolution of COS is 15 km/s, we cannot use velocity information to differentiate between circumstellar and stellar H$_2$. Instead, we looked at the flux ratios between different progressions. The [1,4] and [1,7] progressions are both pumped by emission from the center of the Ly$\alpha$ line profile (Figure \ref{cartoon}). Other progressions are pumped from the wings of the profile, so strong emission in these lines is only possible with a broader Ly$\alpha$ line indicative of active accretion. The two most prominent examples are [0,1] and [0,2], which are pumped at velocities 379 and 487 km/s from line center. These progressions should only be bright if the Ly$\alpha$ profile is especially wide, as shown in the purple profile in Figure \ref{cartoon}, but they will be much fainter if the star's Ly$\alpha$ profile is more narrow, similar to the pink curve. Stars that are accreting have much broader Ly$\alpha$ profiles than active main sequence stars \citep{SchindhelmLyaDominanceClassical2012, YoungbloodMUSCLESTreasurySurvey2016} and are therefore expected to produce more emission in the [0,1] and [0,2] progressions relative to the [1,4] and [1,7] progressions in comparison to non-accretors.
Using the segmented spectrum, we took the ratio of the CCF maximum of each non-central progression to the CCF maximum of [1,4]+[1,7] for all the stars with previously detected H$_2$ in our sample. Dividing by the height of [1,4]+[1,7] for each system acts as a normalization factor to deal with spectra with different S/N or different line widths due to rotational broadening. Since this ratio can be affected by extinction, with lines at shorter wavelengths appearing fainter than they are intrinsically, we have to de-redden the spectrum first, which we do based on the extinction laws from \citet{CardelliRelationshipInfraredOptical1989}. We examine three different sets of ratios: ratios with spectra uncorrected for extinction, ratios with spectra corrected by the extinction values found by \citet{FurlanSpitzerInfraredSpectrograph2011}, and ratios with spectra corrected by extinction values from \citet{HerczegOpticalSpectroscopicStudy2014}. We assume the main sequence M stars and TWA 7 have no extinction based on extinction measurements from stars in the Local Bubble \citep{LeroyPolarimetricInvestigationInterstellar1993}.
To estimate the 1$\sigma$ limits (the gray areas in Figures \ref{ratios} and \ref{ratios_app}), we used a procedure similar to the one used to calculate the significance of detections for TWA 7. We sampled the noise from each spectrum, calculating a KDE as we did for TWA 7, and created spectra of pure noise to cross-correlate with the template. We then took the maximum of each CCF within 15 km/s of the RV of that star. The gray regions represent the inner 68\% of ratios calculated based on those maxima. These 1$\sigma$ regions are biased towards positive numbers, because we chose the maximum CCF value, which is biased toward positive values even in a normally distributed, random noise sample.
\begin{figure}
\centering
\includegraphics[width=6.4in]{ratios_2.png}
\caption{The CCF height ratios of [0,1] and [0,2] to that of [1,4]+[1,7] for our selection of T Tauri stars (open symbols), M stars (filled symbols), and TWA 7. The gray areas are the 1$\sigma$ regions for a null result if that progression had no flux and just noise. They are greater than zero because we select the maximum of the CCF within 15 km s$^{-1}$, which would typically be greater than zero even for random noise. TWA 7's ratios match better with the T Tauri stars, suggesting that its H$_2$ is also circumstellar. The ratios for the other progressions are shown in the appendix in Figure \ref{ratios_app}. \label{ratios}}
\end{figure}
The [0,1] and [0,2] ratios differentiate the samples most clearly regardless of extinction correction. Based on these ratios, TWA 7's H$_2$ appears to be more similar to that from the CTTS (Figure \ref{ratios}). However, other progressions are not as clear. To analyze all of the ratios, we created a Support Vector Machine classifier \citep{Platt99probabilisticoutputs} with a 4th order polynomial kernel using the data from the M dwarfs and CTTS --- excluding TWA 7 --- to determine where the H$_2$ was coming from. We then applied this classification scheme to the TWA 7 data. Based on the observed ratios, this test categorizes TWA 7's H$_2$ as similar to CTTS' 99.2\% of the time for the set uncorrected for extinction, 98.2\% of the time for the set corrected using the extinction values from \citet{HerczegOpticalSpectroscopicStudy2014}, and 98.3\% of the time for the set corrected using the extinction values from \citet{FurlanSpitzerInfraredSpectrograph2011}. This implies that TWA 7's H$_2$ is being pumped not only from the core, but also from the wings of the Ly$\alpha$ profile, as with CTTS. We expand upon this in Section \ref{sec_disc}.
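A minimal sketch of this classification step is below. The ratio values for the two training samples and for TWA 7 are invented stand-ins (the measured ratios are in Figure \ref{ratios}); only the choice of a 4th-order polynomial kernel with Platt-scaled class probabilities follows the text.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(5)

# Invented stand-ins for the CCF-height ratios (wing progressions
# normalized by [1,4]+[1,7]): CTTS show larger wing ratios than
# main-sequence M dwarfs.
ctts = rng.normal([0.6, 0.5], 0.15, size=(10, 2))
mdwarfs = rng.normal([0.1, 0.1], 0.08, size=(6, 2))

X = np.vstack([ctts, mdwarfs])
y = np.array([1] * len(ctts) + [0] * len(mdwarfs))  # 1 = CTTS-like

# 4th-order polynomial kernel; probability=True enables Platt scaling.
clf = SVC(kernel="poly", degree=4, coef0=1.0, probability=True,
          random_state=0).fit(X, y)

twa7 = np.array([[0.55, 0.45]])  # hypothetical ratio pair for TWA 7
p_ctts = clf.predict_proba(twa7)[0, list(clf.classes_).index(1)]
print(f"P(CTTS-like) = {p_ctts:.2f}")
```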
\subsection{\texorpdfstring{Estimating the Amount of Circumstellar H$_2$}{Estimating the Amount of Circumstellar H2}}\label{mass_est}
To estimate the amount of warm H$_2$ in TWA 7, we compared its H$_2$ emission with that of a transition disk system, TW Hya. We chose TW Hya because in comparison with TWA 7, it has a similar age \citep{WebbDiscoverySevenTauri1999}, inclination \citep{PontoppidanSpectroastrometricImagingMolecular2008}, and a relatively similar spectral type \citep{HerczegOpticalSpectroscopicStudy2014}. While we do not think that the line profile of TW Hya would be identical to that of TWA 7, as TW Hya is accreting enough to be measured by conventional methods, it is the best match from the data available. Our goal was to find a constant scale factor that is the ratio between the H$_2$ line strengths in TWA 7 and those in TW Hya. We used a least squares fit to calculate this scale factor with the only other free parameter being the radial velocity difference. We coadded the 19 brightest H$_2$ features from the most prominent progressions --- [1,4], [1,7], [0,1], and [0,2] --- and compared the line flux from the coadded profile to the coadded profile from TW Hya. We also estimated the uncertainty of this ratio from the noise in the spectrum in comparison to the flux. We found a ratio between the coadded profiles of (6.9$\pm$0.7)$\times$10$^{-4}$, as shown in Figure \ref{coadd}. Adjusting for differences in distance, TWA 7 has (2.2$\pm$0.2)$\times$10$^{-4}$ of TW Hya's H$_2$ line strength, and, as a result of its similar inclination and line widths, its H$_2$ luminosity is assumed to be less than TW Hya's by the same factor. \citet{FranceHubbleSpaceTelescope2012} measure TW Hya's H$_2$ luminosity as (16.2$\pm$2.0)$\times$10$^{29}$ erg s$^{-1}$. This gives us an H$_2$ luminosity of (3.6$\pm$0.6)$\times$10$^{26}$ erg s$^{-1}$ for TWA 7.
Comparing the star-spot flux values measured by \citet{KruczekH2FluorescenceDwarf2017} to our value for TWA 7's flux, we expect that, even if star spots contribute to this value, circumstellar gas accounts for more than 50\% of the total H$_2$ luminosity.
\begin{figure}
\centering
\includegraphics[width=3.8in]{coadded_flux.png}
\caption{The co-added H$_2$ line profile of TWA 7 compared to the co-added line profile of TW Hya, scaled by the best fit ratio of (6.9$\pm$0.7)$\times$10$^{-4}$.\label{coadd}}
\end{figure}
From this scaling factor, we can also put a lower limit on the mass of warm H$_2$ assuming the gas is all circumstellar. The flux observed in a specific H$_2$ line, $F_{obs}$, is a function of the Einstein A value for that emitting transition, $A_{Bl}$, the distance to TWA 7, $d$, the frequency of the emitting transition, $\nu_{Bl}$, and the number of H$_2$ molecules that have been pumped to the required electronic excited state, $N_B$:
\begin{equation}\label{eq_fluor}
F_{obs} = N_B\frac{A_{Bl}h\nu_{Bl}}{4\pi d^2}
\end{equation}
where $l$ is a lower energy state than $B$.
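Equation \ref{eq_fluor} inverts directly for $N_B$ given a measured line flux. A minimal numerical sketch, in which the flux, Einstein $A$ value, wavelength, and distance are illustrative placeholders rather than the measured quantities:

```python
import math

H = 6.626e-27          # erg s (Planck constant, cgs)
C = 2.998e10           # cm/s (speed of light)
PC = 3.086e18          # cm per parsec

def n_excited(flux_obs, A_ul, wavelength_cm, d_cm):
    """Invert F_obs = N_B * A * h * nu / (4 pi d^2) for N_B."""
    nu = C / wavelength_cm
    return flux_obs * 4.0 * math.pi * d_cm**2 / (A_ul * H * nu)

# Placeholder inputs for illustration only.
flux = 1e-16           # erg s^-1 cm^-2, an example line flux
A_ul = 1.6e8           # s^-1, a typical Lyman-band Einstein A value
lam = 1446e-8          # cm, an example fluorescent H2 line wavelength
d = 34.0 * PC          # an assumed distance of order TWA 7's
N_B = n_excited(flux, A_ul, lam, d)
```

As expected from the linearity of Equation \ref{eq_fluor}, doubling the observed flux doubles the inferred number of excited molecules.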
$N_B$ depends on the number of H$_2$ molecules in the lower state of the pumping transition, $N_X$, and the rate at which those molecules get excited. This rate depends on the oscillator strength $f$, the Ly$\alpha$ flux of TWA 7, and the optical depth of the warm H$_2$. Since we do not know the Ly$\alpha$ flux, the H$_2$ filling factor, or the optical depth, we instead choose to estimate an upper limit for the excitation rate, which turns into a lower limit for $N_X$, as described by Equation \ref{eq_pump}:
\begin{equation}\label{eq_pump}
N_X \frac{4\pi J_\nu}{h\nu_{p}}\frac{\pi e^2}{m_ec}f\leq N_B\sum A_{Bl}
\end{equation}
where $A_{Bl}$ are all the relevant Einstein A values for the upper state, J$_\nu$ is the Ly$\alpha$ flux at the pumping wavelength, and $\nu_p$ is the frequency of the pumping transition. We do not consider dissociation, as the probability of dissociation for [1,4], [1,7], [0,1], and [0,2] is predicted to be negligible \citep{AbgrallTotaltransitionprobability2000}. This $N_X$ is dependent on the total number of warm H$_2$ molecules, $N_{H_2}$, and the temperature, which gets factored into the Boltzmann Equation, $q(T)$:
\begin{equation}\label{eq_part}
q(T)N_{H_2} =N_X.
\end{equation}
We calculate a $q(T)$ based on the assumption that the H$_2$ is being thermally excited \citep{AdamkovicsFUVIrradiatedDisk2016}.
While Ly$\alpha$ flux varies in time, and the HST FUV observation of TWA 7's Ly$\alpha$ flux is contaminated by ISM absorption and geocoronal emission, we estimate that TWA 7's Ly$\alpha$ is less than 0.03 of TW Hya's based on comparison of the spectra at velocities $>$400 km/s \citep{HerczegFarUltravioletSpectraTW2004}. We estimate the flux observed for a given transition using our scaling factor found above and the flux observed for TW Hya by \citet{HerczegFarUltravioletSpectrumTW2002}. All pumping transition properties are described in Table \ref{pros}, while the Einstein A values are from \citet{AbgrallTableLymanBand1993}. We calculate a separate N$_{H_2}$ for each line flux measured for TW Hya, which then converts into a line flux for TWA 7; we then average these values together to get our final result. For a gas temperature of 1500 K, we get a rough estimate for the minimum amount of warm H$_2$ of $\sim$9.9$\times$10$^{-11}$ M$_\oplus$. If spread out in a ring with a radius of 0.3 AU --- a radius at which H$_2$ is commonly seen \citep{FranceHubbleSpaceTelescope2012} --- this corresponds to a minimum column density of $\sim$2.8$\times$10$^{15}$ cm$^{-2}$. This is consistent with the upper limit on H$_2$ column density reported by \citet{InglebyFarUltravioletH2Emission2009} of 3.0$\times$10$^{17}$ cm$^{-2}$ using a less sensitive prism spectrum of TWA 7. Based on the spread of line fluxes, we adopt a range of 10$^{15}$ to 3.0$\times$10$^{17}$ cm$^{-2}$ for the vertical column density of H$_2$ in TWA 7.
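Equations \ref{eq_pump} and \ref{eq_part} combine into a lower limit on the total warm H$_2$: an upper limit on the Ly$\alpha$ pumping rate per molecule turns the observed $N_B$ into a lower limit on $N_{H_2}$. A sketch of this chain, with all input values as illustrative placeholders (cgs units):

```python
import math

# cgs constants: Planck, speed of light, electron mass and charge.
H, C, ME, E = 6.626e-27, 2.998e10, 9.109e-28, 4.803e-10

def n_h2_lower_limit(N_B, A_sum, J_nu, nu_pump, f_osc, qT):
    """Lower limit on total warm H2: the pumping rate per molecule
    (4 pi J_nu / h nu_p) * (pi e^2 / m_e c) * f bounds N_X from below
    via Eq. (2), and q(T) * N_H2 = N_X (Eq. 3) converts N_X to N_H2."""
    pump_rate = (4.0 * math.pi * J_nu / (H * nu_pump)) \
                * (math.pi * E**2 / (ME * C)) * f_osc
    return N_B * A_sum / (pump_rate * qT)

# All inputs below are placeholders, not the measured values.
N_low = n_h2_lower_limit(N_B=1e33, A_sum=1.8e9, J_nu=1e-4,
                         nu_pump=2.47e15, f_osc=0.03, qT=1e-2)
```

The limit scales inversely with the assumed Ly$\alpha$ intensity, which is why overestimating the pumping flux yields a conservative (low) bound on the gas.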
\section{Discussion}\label{sec_disc}
The H$_2$ progression ratios from TWA 7 (Figure \ref{ratios}) more closely resemble those from CTTS than those from M stars. However, these ratios do not guarantee that the H$_2$ is circumstellar. TWA 7 is much closer in age to the CTTS and is thus likely to have higher chromospheric activity than an average M star. Chromospheric activity produces Ly$\alpha$ emission, which can then excite the H$_2$ in star spots on mid M type stars. We suggest this is not the primary source of H$_2$ emission in TWA 7, as chromospheric activity affects the core of the Ly$\alpha$ profile significantly more than the wings \citep{LemaireHydrogenLyaLyv2015}. So while there is some Ly$\alpha$ emission in the wings from all of these stars, the amount of flux induced solely from chromospheric activity is likely not enough to excite the outer H$_2$ progressions. \citet{YoungbloodFUMESIILya2021} looked at how Ly$\alpha$ varied with stellar parameters, showing how increased Ly$\alpha$ is correlated with higher chromospheric activity and lower gravity, both of which are correlated with youth. However, the profiles from \citet{YoungbloodFUMESIILya2021} show that the ratio of the flux between the peak and the wings can remain constant with varying chromospheric activity and gravity, even if the overall flux changes. Thus the most likely explanation is that TWA 7 is still weakly accreting circumstellar gas from an inner disk.
Accretion rates for weakly accreting stars are notoriously hard to measure accurately. There are cases, like MY Lup, that have FUV accretion signatures but lack optical ones \citep{AlcalaHSTspectrareveal2019}. Previously, TWA 7 was considered a standard, non-accreting WTTS. It shows no accretion signatures in the optical. The hot FUV lines seen in TWA 7's HST-COS spectrum, such as C IV or N V, have profiles that do not look like those of CTTS \citep{ArdilaHotGasLines2013}. It also lacks the NUV flux and Ca II] $\lambda$2325 emission of known accreting stars \citep{InglebyNearultravioletExcessSlowly2011,InglebyAccretionRatesTauri2013}. However, most of the accreting gas is expected to be hydrogen in the ground state \citep{MuzerolleDetectionDiskAccretion2000}, so Ly$\alpha$ should be more sensitive to small accretion rates than any other line. There is a similar system in TWA, TWA 4 B, a K5 star with circumstellar H$_2$ FUV emission discovered by \citet{YangFarultravioletAtlasLowresolution2012} despite not showing obvious accretion signatures. Given that TWA is close in age to when the typical protoplanetary disk is predicted to evolve into a debris disk, these systems could represent a short-lived phase of disk evolution with residual gas that does not accrete at the high levels detectable in optical spectra. FUV spectra of more stars in TWA would allow us to further investigate the gas evolution at this crucial age.
Assuming the H$_2$ we observe is indeed circumstellar, the next question concerns its origin. One possibility is that the H$_2$ originates from the inward migration, sublimation, and the subsequent photodissociation of H$_2$O ices in comet-like nuclei. The H$_2$O photodissociates into H, OH, and O, and the newly available H atoms can then reform into H$_2$. If true, there should also be some oxygen gas species in the inner disk. \citet{Riviere-MarichalarGasdustTW2013} give an upper limit on the oxygen mass of 2.3$\times$10$^{-5}$ M$_\oplus$ from Herschel data. This upper limit is more than the oxygen that would accompany the H$_2$ we detect if the H$_2$ originates from dissociated H$_2$O and assuming the warm H$_2$ is confined to the inner few AU of the disk, making this a potentially viable source of the observed H$_2$. Future observations could better constrain the oxygen mass in the inner disk and allow us to determine whether H$_2$O ice evaporation is a possible origin of the circumstellar H$_2$ around TWA 7. Additionally, detection of dust from these comets could lend support to this theory \citep{PearceGastrappinghot2020}.
Another possibility is that the H$_2$ we see is residual protoplanetary disk gas. Regardless of its origin, an H$_2$ formation pathway is needed to balance ongoing UV photodissociation of H$_2$. Molecular hydrogen forms most efficiently on grain surfaces, as in the ISM, but it can also form via gas phase reactions (e.g., through H + H$^-$ $\rightarrow$ H$_2$ + e$^-$) when there is less dust surface area available \citep{BrudererSurvivalmoleculargas2013}.
To explore the possibility of grain surface formation of H$_2$ in the case of TWA 7, we can estimate the upper limit on the surface area of warm grains by looking at its spectral energy distribution (SED). Although the W3 band from WISE \citep{WrightWidefieldInfraredSurvey2010} shows no excess IR emission from dust \citep{OlofssonResolvingfaintstructures2018, BayoSubmillimetrenoncontaminateddetection2019}, we can put a limit on the amount of warm dust by assuming the dust can generate the equivalent of the 1$\sigma$ uncertainty for the W3 flux. Under that assumption, we compute the SED using the model described by \citet{Isellashapeinnerrim2005} to put an upper limit of $\le$5.1$\times$10$^{-8}$ M$_\oplus$ on the amount of warm ($\sim$1000 K) silicate particles between 1 $\mu$m and 1 mm. We chose a lower particle size limit of 1 $\mu$m, because in more evolved systems, particles smaller than that near the star can get blown away by stellar winds. Based on this estimate, grains with radii between 1 $\mu$m and 1 mm could make up a significant surface area, up to 10$^{23}$ cm$^2$. If the grains are spread out evenly over the inner 0.3 au, the mass column density is 2$\times$10$^{-5}$ g cm$^{-2}$.
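The order of magnitude of the grain surface area can be bracketed with a single-size spherical-grain sketch; the grain density and the two bounding sizes are assumptions, and only the dust-mass upper limit is taken from the SED fit above:

```python
M_EARTH = 5.97e27      # g

def grain_surface_area(dust_mass_g, grain_radius_cm, density=3.0):
    """Total surface area of a mass of identical spheres: A = 3 M / (rho a),
    since N = M / (4/3 pi rho a^3) and each sphere has area 4 pi a^2."""
    return 3.0 * dust_mass_g / (density * grain_radius_cm)

m_dust = 5.1e-8 * M_EARTH                        # quoted upper limit
area_small = grain_surface_area(m_dust, 1e-4)    # all mass in 1 micron grains
area_large = grain_surface_area(m_dust, 1e-1)    # all mass in 1 mm grains
```

The two single-size extremes span roughly 10$^{21}$--10$^{24}$ cm$^2$; a realistic size distribution lies in between, consistent with the $\sim$10$^{23}$ cm$^2$ figure quoted above.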
With the above constraint on the possible dust content of the inner disk of TWA 7, we can use the results of \citet{BrudererSurvivalmoleculargas2013} to estimate the H$_2$ reservoir that can be sustained in the inner disk. In modeling the inner regions of transition disks, \citet{BrudererSurvivalmoleculargas2013} considered two physical models: a dusty inner disk and a very dust-poor inner disk with dust column densities of $\Sigma_d$=3$\times$10$^{-4}$ g cm$^{-2}$ and 3$\times$10$^{-9}$ g cm$^{-2}$ respectively at 0.3 au. The upper limit on the dust surface density of TWA 7 we find above is an order of magnitude below the surface density of the dusty inner disk model but many orders of magnitude above the surface density of the dust-poor disk model. Thus the dust-poor disk model provides a relatively conservative estimate of the H$_2$ density allowed for TWA 7.
The other relevant parameter in the \citet{BrudererSurvivalmoleculargas2013} model is the gas surface density. Figure 6 of \citet{BrudererSurvivalmoleculargas2013} shows the results for a case in which the inner disk is very dust poor and has a gas column density of 0.3 g cm$^{-2}$ at 0.3\,au. The H$_2$ fraction in the disk atmosphere is $\sim$3$\times$10$^{-6}$ relative to hydrogen, or an H$_2$ column density of N$_{\rm H_2}$=3$\times$10$^{18}$ cm$^{-2}$. \citet{BrudererSurvivalmoleculargas2013} do not show the temperature of the H$_2$, although much of it is likely to be warm, as the disk is dust poor, and dust is a coolant for the gas through gas-grain collisions. If 0.1\% of the total H$_2$ column is warm ($\sim$1500 K), this scenario predicts a warm H$_2$ mass similar to that inferred for TWA 7.
Note that this result is obtained despite using a model with a dust density several orders of magnitude below our dust upper limit. Thus, it seems plausible that even a dust-poor inner disk can sustain a warm H$_2$ column density in the range we estimate for TWA 7 in Section \ref{mass_est}. While the models from \citet{BrudererSurvivalmoleculargas2013} were not tuned specifically to TWA 7's parameters --- the model assumes a hotter 10 L$_\odot$ star and a given polycyclic aromatic hydrocarbons (PAH) abundance --- these two factors should impact the H$_2$ production in opposite ways: the higher UV flux of the more massive star enhances photodestruction of H$_2$, while the PAH abundance enhances H$_2$ production. We therefore believe it is plausible that the H$_2$ we detect is sustained via some combination of gas phase reactions in the circumstellar environment of TWA 7. Future observations between 3 and 12 microns with telescopes like JWST could detect PAHs in the disk and lend further support to this possibility \citep{SeokPolycyclicAromaticHydrocarbons2017}.
Although we do not have the requisite measurements to conclusively determine why there is warm H$_2$ in the circumstellar environment of TWA 7, warm gas in a region without detectable warm dust is, regardless of its origin, not unique to this star. Primordial warm H$_2$ is detected inside the inner edge of the dust disk in transitional disk systems \citep{FranceHubbleSpaceTelescope2012, ArulananthamUVtoNIRStudyMolecular2018}. Warm CO has also been detected in these regions \citep{PontoppidanSpectroastrometricImagingMolecular2008, SalykCORovibrationalEmission2011}. Clearly, warm gas can outlast detectable amounts of warm dust. Thus, the physics resulting in warm gas in the cavities of transitional disks could also be the cause of the H$_2$ we detect in TWA 7.
\section{Conclusions}
We have detected molecular hydrogen from four progressions ([1,4], [1,7], [0,1], and [0,2]) in TWA 7, a known debris disk system. The ratios between CCF peaks of the detected H$_2$ progressions (Figure \ref{ratios}) resemble those from CTTS. This suggests that the H$_2$ in TWA 7 is circumstellar, as it is for CTTS. This is highly unexpected, because H$_2$ is not typically detected in debris disk systems. This star joins a small group of systems that have H$_2$ but are not accreting by typical diagnostic standards. Assuming the H$_2$ is circumstellar, we have estimated a column density of 10$^{15}$ to 3.0$\times$10$^{17}$ cm$^{-2}$. While we cannot determine the origin of the gas conclusively, it is likely to be generated from residual protoplanetary disk gas.
\acknowledgements
Based on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555. Support for program number (GO-15310) was provided through a grant from the STScI under NASA contract NAS5-26555. GJH is supported by general grant 11773002 awarded by the National Science Foundation of China. L.F. would like to thank Andrea Isella for help with the S.E.D. modeling. This research has made use of the VizieR catalogue access tool, CDS, Strasbourg, France. The original description of the VizieR service was published by \citet{WengerSIMBADastronomicaldatabase2000}. This research has made use of the SIMBAD database, operated at CDS, Strasbourg, France. This research has made use of the NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This work has made use of data from the European Space Agency (ESA) mission {\it Gaia} (\url{https://www.cosmos.esa.int/gaia}), processed by the {\it Gaia} Data Processing and Analysis Consortium (DPAC, \url{https://www.cosmos.esa.int/web/gaia/dpac/consortium}). Funding for the DPAC has been provided by national institutions, in particular the institutions participating in the {\it Gaia} Multilateral Agreement.
\facility{HST (COS, STIS)}
\software{SpecTres \citep{CarnallSpectResFastSpectral2017}, NumPy \citep{oliphant2006guide, van2011numpy}, Scikit-learn \citep{scikit-learn}, Pandas \citep{reback2020pandas}, Scipy \citep{VirtanenSciPyFundamentalAlgorithms2019}, Matplotlib \citep{Hunter:2007} }
\restartappendixnumbering
\section{Introduction}
\label{sec:intro}
Motion Estimation/Compensation and video coding have wide range of applications in various areas of image/video processing, including restoration \cite{Foroosh_2000,Foroosh_Chellappa_1999,Foroosh_etal_1996,Cao_etal_2015,berthod1994reconstruction,shekarforoush19953d,lorette1997super,shekarforoush1998multi,shekarforoush1996super,shekarforoush1995sub,shekarforoush1999conditioning,shekarforoush1998adaptive,berthod1994refining,shekarforoush1998denoising,bhutta2006blind,jain2008super,shekarforoush2000noise,shekarforoush1999super,shekarforoush1998blind,amiot2015fluorosocopic},
content/context analysis \cite{Tariq_etal_2017_2,Tariq_etal_2017,tariq2013exploiting,tariq2014scene,Cakmakci_etal_2008,Cakmakci_etal_2008_2,Zhang_etal_2015,Lotfian_Foroosh_2017,Morley_Foroosh2017,Ali-Foroosh2016,Ali-Foroosh2015,Einsele_Foroosh_2015,ali2016character,Cakmakci_etal2008,damkjer2014mesh}, surveillance \cite{Junejo_etal_2007,Junejo_Foroosh_2008,Sun_etal_2012,junejo2007trajectory,sun2011motion,Ashraf_etal2012,sun2014feature,Junejo_Foroosh2007-1,Junejo_Foroosh2007-2,Junejo_Foroosh2007-3,Junejo_Foroosh2006-1,Junejo_Foroosh2006-2,ashraf2012motion,ashraf2015motion,sun2014should}, action recognition \cite{Shen_Foroosh_2009,Ashraf_etal_2014,Ashraf_etal_2013,Sun_etal_2015,shen2008view,sun2011action,ashraf2014view,shen2008action,shen2008view-2,ashraf2013view,ashraf2010view,boyraz122014action,Shen_Foroosh_FR2008,Shen_Foroosh_pose2008,ashraf2012human}, self-localization \cite{Junejo_etal_2010,Junejo_Foroosh_2010,Junejo_Foroosh_solar2008,Junejo_Foroosh_GPS2008,junejo2006calibrating,junejo2008gps}, tracking \cite{Shu_etal_2016,Milikan_etal_2017,Millikan_etal2015,shekarforoush2000multi,millikan2015initialized}, scene modeling \cite{Junejo_etal_2013,bhutta2011selective,junejo1dynamic,ashraf2007near}, and video post-production \cite{Cao_etal_2005,Cao_etal_2009,shen2006video,balci2006real,xiao20063d,moore2008learning,alnasser2006image,Alnasser_Foroosh_rend2006,fu2004expression,balci2006image,xiao2006new,cao2006synthesizing}.
These are but a few of the many applications.\\
Reliable motion estimation/compensation can substantially reduce the residual energy in the coding of video data. Motion estimation methods are either global \cite{Foroosh_etal_2002,Foroosh_2005,Balci_Foroosh_2006,Balci_Foroosh_2006_2,Alnasser_Foroosh_2008,Atalay_Foroosh_2017,Atalay_Foroosh_2017-2,shekarforoush1996subpixel,foroosh2004sub,shekarforoush1995subpixel,balci2005inferring,balci2005estimating,foroosh2003motion,Balci_Foroosh_phase2005,Foroosh_Balci_2004,balci2006subpixel,balci2006alignment}, or local \cite{foroosh2001closed,shekarforoush2000multifractal,foroosh2004adaptive,foroosh2003adaptive} in their nature in terms of treating the transformation relating two images. There is also a separate but related body of work on camera motion quantification, which requires online or offline calibration of camera \cite{Cao_Foroosh_2007,Cao_Foroosh_2006,Cao_etal_2006,Junejo_etal_2011,cao2004camera,cao2004simple,caometrology,junejo2006dissecting,junejo2007robust,cao2006self,foroosh2005self,junejo2006robust,Junejo_Foroosh_calib2008,Junejo_Foroosh_PTZ2008,Junejo_Foroosh_SolCalib2008,Ashraf_Foroosh_2008,Junejo_Foroosh_Givens2008,Lu_Foroosh2006,Balci_Foroosh_metro2005,Cao_Foroosh_calib2004,Cao_Foroosh_calib2004,cao2006camera}.
While these methods and their variations have been proposed in the past for motion compensation in different applications, space-time subband/wavelet coding \cite{ohm2005advances} is by far the method of choice for coding and compressing images and videos due to its superior performance. Its effectiveness, however, can be significantly improved with motion compensation, which is the focus of the method proposed in this paper.
\section{Related Work}
\label{sec:related}
Still image coding \cite{andreopoulos2005complete} and video coding \cite{XuLW15} are important topics of research in coding and compression of multimedia data. On the other hand, scalable video coding \cite{van2002bottom,park2000motion} is an emerging trend in numerous multimedia applications with heterogeneous networks, due to its ability to adapt to different resolution and quality requirements. Recently, a large body of research has focused on wavelet-based methods \cite{secker2001motion,andreopoulos2005complete,liu2007fast,chen2014adaptive}, where motion compensated temporal filtering (MCTF) is shown to play an essential role in both scalable video coding and still image coding. MCTF is performed either directly on input images, or on their transforms. Thus, MCTF methods can be categorized into two groups depending on the order of temporal and spatial transforms. MCTF techniques which perform temporal decomposition before a spatial transform include those of Secker and Taubman \cite{secker2001motion}, and Pesquet-Popescu and Bottreau \cite{pesquet2001three}, who used a lifting formulation of three-dimensional temporal wavelet decomposition for motion compensated video compression. Kim \textit{et al.} \cite{kim2000low} proposed a 3-D extension of set partitioning in hierarchical trees (3D-SPIHT) in a low bit-rate embedded video coding scheme. More recently, Xiong \textit{et al.} \cite{xiong2008scale} extended the spatiotemporal subband transform to in-scale motion compensation to exploit the temporal and cross-resolution correlations simultaneously, by predicting low-pass subbands from the next lower resolution and high-pass subbands from neighboring frames in the same resolution layer. Furthermore, Chen and Liu \cite{chen2014adaptive} used an adaptive Lagrange multiplier selection model in rate-distortion optimization (RDO) for motion estimation.
In order to achieve more accurate motion data, Esche \textit{et al.} \cite{esche2013adpative} proposed an interpolation method for per-pixel motion information using block-based motion data, and R{\"u}fenacht \textit{et al.} \cite{rufenacht2014hierarchical} anchored motion fields at reference frames instead of target frames to resolve folding ambiguities in the vicinity of motion discontinuities.
Although the methods cited above have good performance, they suffer from drifting and operational mismatch problems \cite{xiong2008scale}. Therefore, performing the spatial transform before temporal decomposition was introduced to overcome these drawbacks. However, since the complete DWT is shift variant, several methods were proposed to enable in-band ME/MC (i.e. directly in the wavelet domain) through redundancy. Van der Auwera \textit{et al.} \cite{van2002bottom} used a bottom-up prediction algorithm for a bottom-up overcomplete discrete wavelet transform (ODWT). Park and Kim \cite{park2000motion} proposed a low-band-shift method, constructing the wavelet tree by shifting the low-band subband at each level by one pixel in the horizontal, vertical, and diagonal directions and performing downsampling. Andreopoulos \textit{et al.} \cite{andreopoulos2005complete} defined a complete to overcomplete discrete wavelet transform (CODWT), which avoids the inverse DWT generally used to obtain the ODWT. More recently, Liu and Ngan \cite{liu2007fast} used partial distortion search and anisotropic double cross search algorithms with the MCTF method in \cite{andreopoulos2005complete} for fast motion estimation. Amiot \textit{et al.} \cite{amiot2015fluorosocopic} performed MCTF for denoising, using dual-tree complex wavelet (DT-CW) coefficients.
All MCTF methods summarized above perform motion estimation/motion compensation either in the temporal domain before the DWT, or in the wavelet domain with the help of redundancy (e.g. ODWT, DT-CW, etc.), due to the fact that the complete DWT is shift-variant and motion estimation directly on DWT subbands is a challenging task. However, redundancy in these methods leads to high computational complexity \cite{liu2007fast}. Inspired by the fact that shift variance preserves the perfect reconstruction and nonredundancy properties of wavelets and breaks the coupling between spatial subbands, and that wavelet codecs always operate on complete DWT subbands \cite{andreopoulos2005complete}, we propose a novel in-band ME/MC method, which avoids the need for shift invariance and operates directly on the original DWT coefficients of the input sequences. Since Haar wavelets are widely utilized in MCTF methods due to the coding efficiency of their short kernel filters \cite{andreopoulos2005complete}, our method is built on Haar subbands. For accurate ME/MC, we define the exact relationships between the DWT subbands of input video sequences, which allows us to avoid upsampling, the inverse DWT, redundancy, and interpolation for subpixel accuracy.
The rest of the paper is organized as follows. We introduce the problem and our proposed solution in Section \ref{sec:formulation}. We define the derived exact inter-subband relationships in Section \ref{sec:method}, demonstrate the experimental results in Section \ref{sec:results}, and finally conclude our paper in Section \ref{sec:conclusion}.
\section{Motion Compensated Temporal Filtering}
\label{sec:formulation}
In this section, we explain our proposed method for in-band motion compensated temporal filtering, operating directly on DWT subbands.
\begin{figure}[t]
\centerline{\includegraphics[width=12cm]{mctf.png}}
\centering
\caption{A block diagram of the proposed in-band Motion Compensated Temporal Filtering model.}\medskip \label{fig:MCTF}
\end{figure}
The wavelet transform provides localization both in time and frequency; therefore, it is straightforward to use wavelets in MCTF. In order to perform ME/MC in MCTF, wavelet subbands of the transformed signal need to be predicted. However, due to decimation and expansion operations of DWT, direct band-to-band estimation is generally not practical \cite{park2000motion}. The proposed method overcomes this challenge by revealing the relationships between subbands of reference and target frames.
The proposed in-band MCTF method is demonstrated in Fig. \ref{fig:MCTF}. Given a video sequence, first, DWT is performed on each frame for spatial decomposition, then a temporal decomposition is performed by splitting video frames into groups. ME/MC ($\euscr{P}$ in Fig. \ref{fig:MCTF}) is performed by block matching, using reference frames ($DWT(I_{2t})$) to predict the target frames ($DWT(I_{2t+1})$). Employing the found motion vectors (MV), reference frames are mapped onto the target frames to generate error frames, $C$ in Fig. \ref{fig:MCTF}, which are then quantized ($\euscr{Q}$), encoded/decoded by a wavelet codec, together with the MVs.
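The pipeline of Fig. \ref{fig:MCTF} can be sketched end-to-end as follows. This is a minimal structural sketch only: the one-level Haar transform, the uniform scalar quantizer, and the zero-motion placeholder for $\euscr{P}$ are simplifying assumptions (the paper's $\euscr{P}$ is an in-band block-matching search, and $\euscr{Q}$/entropy coding are codec-dependent):

```python
import numpy as np

def haar2(img):
    """One-level orthonormal 2D Haar DWT: returns (A, a, b, c) subbands."""
    p00, p01 = img[0::2, 0::2], img[0::2, 1::2]
    p10, p11 = img[1::2, 0::2], img[1::2, 1::2]
    return ((p00 + p01 + p10 + p11) / 2.0,
            (p00 - p01 + p10 - p11) / 2.0,   # horizontal detail
            (p00 + p01 - p10 - p11) / 2.0,   # vertical detail
            (p00 - p01 - p10 + p11) / 2.0)   # diagonal detail

def quantize(x, step=4.0):
    """Uniform scalar quantizer standing in for Q."""
    return np.round(x / step) * step

def mctf_error_frames(frames, predict):
    """Split frames into (reference, target) pairs, predict the target's
    subbands from the reference's via P, and return quantized residuals C."""
    errors = []
    for ref, tgt in zip(frames[0::2], frames[1::2]):
        pred = predict(haar2(ref))                 # P: in-band ME/MC
        errors.append([quantize(t - p) for t, p in zip(haar2(tgt), pred)])
    return errors

# Zero-motion placeholder predictor: copies the reference subbands.
rng = np.random.default_rng(0)
frames = [rng.normal(size=(8, 8)) for _ in range(4)]
errs = mctf_error_frames(frames, predict=lambda bands: bands)
```

With identical reference and target frames the residuals vanish, which is the degenerate case the motion search generalizes.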
We employ Haar wavelet decomposition in the spatial transform due to the benefits mentioned earlier. Since the method in Section \ref{sec:method} is accurate for any arbitrary subpixel translation defined as a multiple of $2^{-k}$, where $k$ is the decomposition level, our method does not need interpolation for subpixel accuracy. A block matching method with unidirectional full search is used for the ME/MC steps, which is a common approach in MCTF. Our cost function is based on mean square error minimization using all subbands, as follows:
\begin{equation}
(dx,dy) = \operatorname{arg\,min}_{x,y} \{( A - \hat{A} )^{2} + ( a - \hat{a} )^{2} + ( b - \hat{b} )^{2} + ( c - \hat{c} )^{2}\},
\end{equation}
\noindent where $A, a, b, c$ denote the original target frame wavelet subbands, and $\hat{A}, \hat{a}, \hat{b}, \hat{c}$ are the estimated subbands for the same target image, using the method described in Section \ref{sec:method} and a reference frame.
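A minimal full-search sketch of this cost minimization, operating jointly on the four subbands; the block size, search range, and synthetic subband data below are illustrative placeholders:

```python
import numpy as np

def block_match(ref_bands, tgt_bands, search):
    """Full-search block matching directly on the four DWT subbands
    (A, a, b, c): minimize the squared error summed over all subbands."""
    by, bx = tgt_bands[0].shape
    best_cost, best_mv = np.inf, (0, 0)
    for dy in range(2 * search + 1):
        for dx in range(2 * search + 1):
            cost = sum(np.sum((t - r[dy:dy + by, dx:dx + bx]) ** 2)
                       for r, t in zip(ref_bands, tgt_bands))
            if cost < best_cost:
                best_cost, best_mv = cost, (dy - search, dx - search)
    return best_mv

# Synthetic check: target subbands are the reference subbands displaced
# by a known (dy, dx); the search should recover that displacement.
rng = np.random.default_rng(1)
search, by, bx = 3, 8, 8
ref_bands = [rng.normal(size=(by + 2 * search, bx + 2 * search))
             for _ in range(4)]
true_dy, true_dx = 2, -1
tgt_bands = [r[search + true_dy:search + true_dy + by,
               search + true_dx:search + true_dx + bx] for r in ref_bands]
mv = block_match(ref_bands, tgt_bands, search)
```

On this synthetic input the exhaustive search attains zero cost at the true displacement, so `mv` equals `(2, -1)`.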
\section{Inter-subband Relationship}
\label{sec:method}
In-band (wavelet domain) shift method along with the related notation are provided in this section.
\subsection{Notation} \label{term}
Here, we provide the notation used throughout the paper, in Table \ref{termtable}, for a better understanding of the proposed method and to prevent any confusion.
\begin{center}
\begin{table}[h]
\centering
\caption{Notation}
\begin{tabular}{l p{0.7\linewidth}}
$I_t$ & Input video frame at time $t$\\
${\bf A}, {\bf a}, {\bf b}, {\bf c}$ & Haar wavelet transform approximation, horizontal, vertical, and diagonal subbands of input image, respectively \\
${\bf F}, {\bf K}, {\bf L}$ & Coefficient matrices to be multiplied by approximation, horizontal, vertical, and diagonal DWT subbands \\
$h$ & Number of hypothetically added levels in case of non-integer shifts\\
$s$ & Integer shift amount after the hypothetically added levels ($h$)\\
\end{tabular} \label{termtable}
\end{table}
\end{center}
Bold letters in the following sections denote matrices and vectors. The subscripts $x$ and $y$ indicate the horizontal and vertical translation directions, respectively. Finally, the subscript $k$ indicates the $k$th video frame, where $k = 1, 2, \ldots$
\subsection{In-band Shifts}
Our goal for the MCTF method described in Section \ref{sec:formulation} is to achieve ME/MC in the wavelet domain using DWT subbands, given a video frame sequence. For this purpose, wavelet subbands of the transformed signal should be predicted using only DWT subbands of the reference frame. Therefore, we derive the relationship between the subbands of the transformed and reference images, which can be described by an in-band shift (in the wavelet domain) of the reference image subbands. Below, we derive the mathematical expressions which demonstrate these relationships.
Let $A$, $a$, $b$, and $c$ be the first level approximation, horizontal, vertical, and diagonal detail coefficients (subbands), respectively, of a $2D$ reference frame at time $t$, $I_t$, of size $2m\times2n$, where $m$ and $n$ are positive integers. Since the decimation operator in the wavelet transform reduces the size of the input frame by half in each direction for each subband, we require the frame sizes to be divisible by 2. Now, the $1st$ level subbands of the translated frame in any direction (i.e. horizontal, vertical, diagonal) can be expressed in matrix form using the $1st$ level Haar transform subbands of the reference frame as in the following equations:
\begin{eqnarray}\label{firsteq}
\textbf{A}_s &=& \textbf{F}_y \textbf{A} \textbf{F}_x + \textbf{F}_y \textbf{a} \textbf{K}_1 + \textbf{L}_1 \textbf{b} \textbf{F}_x + \textbf{L}_1 \textbf{c} \textbf{K}_1 \nonumber\\
\textbf{a}_s &=& - \textbf{F}_y \textbf{A} \textbf{K}_1 + \textbf{F}_y \textbf{a} \textbf{K}_2 - \textbf{L}_1 \textbf{b} \textbf{K}_1 + \textbf{L}_1 \textbf{c} \textbf{K}_2 \nonumber\\
\textbf{b}_s &=& - \textbf{L}_1 \textbf{A} \textbf{F}_x - \textbf{L}_1 \textbf{a} \textbf{K}_1 + \textbf{L}_2 \textbf{b} \textbf{F}_x + \textbf{L}_2 \textbf{c} \textbf{K}_1 \nonumber\\
\textbf{c}_s &=& \textbf{L}_1 \textbf{A} \textbf{K}_1 - \textbf{L}_1 \textbf{a} \textbf{K}_2 - \textbf{L}_2 \textbf{b} \textbf{K}_1 + \textbf{L}_2 \textbf{c} \textbf{K}_2
\end{eqnarray}
As already mentioned in Section \ref{term}, $\textbf{F}$, $\textbf{K}$, and $\textbf{L}$ stand for coefficient matrices to be multiplied by the lowpass and highpass subbands of the reference frame, where subscripts $x$ and $y$ indicate \textit{horizontal} and \textit{vertical} shifts. $\textbf{A}_s, \textbf{a}_s, \textbf{b}_s, \textbf{c}_s$ are the translated frame subbands in any direction. The low/high-pass subbands of both reference and transformed frames are of size $m \times n$; $\textbf{F}_y$ and $\textbf{L}_{1,2}$ are $m \times m$, whereas $\textbf{F}_x$ and $\textbf{K}_{1,2}$ are $n \times n$.
By examining the translational shifts between subbands of two input frames in the Haar domain, we realize that horizontal translation reduces $\textbf{L}$ to zero and $\textbf{F}_y$ to the identity matrix. This could be understood by examining the coefficient matrices defined later in this section (namely, Eq. (\ref{coefmat})), by setting the related vertical components to zero (specifically, $s_y$ and $h_y$). Likewise, vertical translation depends solely on approximation and vertical detail coefficients, in which case $\textbf{K}$ is reduced to zero and $\textbf{F}_x$ is equal to the identity matrix.
Here, we first define the matrices for subpixel shift amounts. The algorithm to reach any shift amount using the subpixel relationship will be described later in this section.
For subpixel translation, contrary to the customary model of approximating a subpixel shift by upsampling an image followed by an integer shift, our method models subpixel shift directly based on the original coefficients of the reference frame, without upsampling and the need for interpolation. To this end, we resort to the following observations:
\textbf{(1)} Upsampling an image $I$ is equivalent to adding levels to the bottom of its wavelet transform and setting the detail coefficients to zero, while the approximation coefficients remain the same, as demonstrated in Fig. \ref{fig:upsample} for upsampling by $2^1$ as an example, where gray subbands show added zeros.
\textbf{(2)} Shifting the upsampled image by an amount of $s$ is equivalent to shifting the original image by an amount of $s/2^h$, where $h$ is the number of added levels (e.g. $h=1$ in Fig. \ref{fig:upsample}).
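These two observations can be checked numerically with a minimal one-level orthonormal 2D Haar transform (a NumPy sketch of our own, not the paper's implementation; the function names and the random test image are illustrative, and circular shifts are used in place of explicit boundary handling):

```python
import numpy as np

def haar2(img):
    """One level of the orthonormal 2D Haar DWT (image size must be even)."""
    a, b = img[0::2, 0::2], img[0::2, 1::2]
    c, d = img[1::2, 0::2], img[1::2, 1::2]
    cA = (a + b + c + d) / 2          # approximation
    cH = (a + b - c - d) / 2          # detail across rows
    cV = (a - b + c - d) / 2          # detail across columns
    cD = (a - b - c + d) / 2          # diagonal detail
    return cA, cH, cV, cD

def ihaar2(cA, cH, cV, cD):
    """Inverse of haar2: synthesize one level back up."""
    m, n = cA.shape
    img = np.empty((2 * m, 2 * n))
    img[0::2, 0::2] = (cA + cH + cV + cD) / 2
    img[0::2, 1::2] = (cA + cH - cV - cD) / 2
    img[1::2, 0::2] = (cA - cH + cV - cD) / 2
    img[1::2, 1::2] = (cA - cH - cV + cD) / 2
    return img

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))

# Observation (1): synthesizing one extra level with zero details is an
# upsampling of the original image (up to the Haar normalization factor).
zero = np.zeros_like(img)
up = ihaar2(img, zero, zero, zero)
assert np.allclose(up, 0.5 * np.kron(img, np.ones((2, 2))))

# Observation (2), in reverse: a circular shift of the image by 2 pixels
# shifts every subband of the original level by exactly 1 coefficient.
shifted = np.roll(img, 2, axis=1)
for sb, sb_s in zip(haar2(img), haar2(shifted)):
    assert np.allclose(sb_s, np.roll(sb, 1, axis=1))
```

The same perfect-reconstruction pair is what the coefficient matrices below act on, so only original-level subbands ever need to be stored.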
\begin{figure}[t]
\centering
\centerline{\includegraphics[width=6.5cm]{upsample.png}}
\caption{Upsampling illustration.}\medskip \label{fig:upsample}
\end{figure}
These observations allow us to perform an in-band shift of the reference subbands by a subpixel amount, without upsampling or interpolation, which both saves memory and reduces the computational cost. The transformed signals can therefore be found by using the original-level subbands of the reference image with the help of a hypothetically added level ($h$) and an integer shift value ($s$) at the added level.
Now, the aforementioned coefficient matrices, $\textbf{F}_x$, $\textbf{K}_{1}$, and $\textbf{K}_{2}$, can be defined in lower bidiagonal Toeplitz matrix form as follows.
\scriptsize
\begin{eqnarray}
\resizebox{\linewidth}{!}{%
$\textbf{F}_x = \dfrac{1}{2^{h_x+1}}
\begin{bmatrix}
2^{h_x+1} - \abs{s_x} & & & \\
\abs{s_x} & 2^{h_x+1} - \abs{s_x} & & \\
 & \ddots & \ddots & \\
 & & \abs{s_x} & 2^{h_x+1} - \abs{s_x} \\
\end{bmatrix} \nonumber$
}
\end{eqnarray}
\begin{eqnarray}
\textbf{K}_1 = \dfrac{1}{2^{h_x+1}}
\begin{bmatrix}
-s_x & & & \\
s_x & -s_x & & \\
 & \ddots & \ddots & \\
 & & s_x & -s_x \\
\end{bmatrix} \nonumber
\end{eqnarray}
\begin{eqnarray}
\resizebox{\linewidth}{!}{%
$\textbf{K}_2 = \dfrac{1}{2^{h_x+1}}
\begin{bmatrix}
2^{h_x+1} - 3\abs{s_x} & & & \\
- \abs{s_x} & 2^{h_x+1} - 3\abs{s_x} & & \\
 & \ddots & \ddots & \\
 & & -\abs{s_x} & 2^{h_x+1} - 3\abs{s_x} \\
\end{bmatrix}$ \label{coefmat}
}
\end{eqnarray}
\normalsize
\noindent where $s_{x}$ and $h_{x}$ denote the integer shift amount at the hypothetically added level and the number of added levels for the $x$ direction, respectively.
$\textbf{F}_y$, $\textbf{L}_1$, and $\textbf{L}_2$ matrices are defined in a similar manner by upper bidiagonal Toeplitz matrices, using $y$ direction values for $s$ and $h$.
As mentioned earlier, $\textbf{F}_x$ and $\textbf{K}_{1,2}$ are $n \times n$, while $\textbf{F}_y$ and $\textbf{L}_{1,2}$ are $m \times m$. Sizes of these matrices also indicate that in-band shift of subbands is performed using only the original level Haar coefficients (which are of size $m \times n$) without upsampling. When the shift amount is negative, diagonals of the coefficient matrices interchange. The matrices are adapted for boundary condition by adding one more column/row at the end, for the MCTF method proposed in Section \ref{sec:formulation}, where subband sizes are also adjusted to be $(m+1)\times(n+1)$.
The relationship defined above for subpixel shifts can be used to produce any shift amount, based on the fact that wavelet subbands are periodically shift-invariant. Table \ref{shifts} demonstrates the calculation of any shift using subpixels, where $\%$ stands for modulo, and $\floor{s}$ and $\ceil{s}$ are the greatest integer not exceeding and the smallest integer not less than the shift amount $s$, respectively. Using a circular shift of the subbands by the amount given for each case, and setting the shift to the new shift amount in Table \ref{shifts}, we can realize any fractional or integer shift using subpixels.
\begin{center}
\begin{table}[h]
\centering
\caption{Arbitrary shifts defined by circular shift and subpixel amount}
\begin{tabular}{lll}
\toprule
Shift amount & Circular shift & New shift amount\\
\midrule
$s\%2 = 0$ & $s/2$ & $0$\\
$s\%2 = 1$ & $\floor{s/2}$ & $1$ \\
$\ceil{s}\%2 = 0$ & $\ceil{s}/2$ & $s-\ceil{s}$ \\
$\floor{s}\%2 = 0$ & $\floor{s}/2$ & $s-\floor{s}$ \\
\bottomrule
\end{tabular} \label{shifts}
\end{table}
\end{center}
If the shift amount (or the new shift amount in Table \ref{shifts}) is not divisible by $2$, then in order to reach an integer value at the $(N+h)$th level, the shift value at the original level is rounded to the closest fractional value that becomes an integer when multiplied by $2^h$.
\section{Experimental Results}
\label{sec:results}
In this section, we demonstrate the results obtained with our method compared to the methods which perform in-band MCTF for video coding. We report our results on CIF video sequence examples with resolutions $352\times240$ and $352\times288$. We set our block size to $8\times8$ or $16\times16$ depending on the resolution of the sequences (in order to have integer number of blocks in subbands) and the required accuracy. Even though our MCTF method is based on 1-level DWT, we perform $2$ more spatial decomposition levels after ME/MC steps before encoding, since compared methods use $3$ spatial decomposition levels in total. Motion vectors and error frames are encoded using context-adaptive variable-length coding (CAVLC) and
global thresholding with Huffman coding methods, respectively.
\begin{figure}[h]
\centering
\includegraphics[width=8cm]{football.jpg}
\caption{Rate-distortion comparison for the Football sequence.} \label{fig:football}
\end{figure}
Fig. \ref{fig:football} shows the comparison of our method with respect to two conventional in-band methods, which are direct wavelet subband matching (band-to-band) and wavelet-block low-band-shift (LBS) \cite{park2000motion}, for the CIF video sequence ``Football''. The graph demonstrates rate-distortion curves for a predicted frame of the Football sequence, where the shown bitrates are for the error frame only (same as in the compared methods), and the accuracy for our method is set to $1/4$ pixel. As seen in this figure, our method improves PSNR compared to conventional in-band methods by $0.1$--$1$ dB in general.
\begin{figure}[h]
\begin{tabular}{cc}
\includegraphics[width=7.7cm]{ours.jpg} &
\includegraphics[width=7.7cm]{ourslow.jpg}
\end{tabular}
\caption{PSNR performance of proposed method.} \label{fig:ours}
\end{figure}
\begin{figure}[h]
\centering
\begin{tabular}{cc}
\includegraphics[width=8.3cm, height = 6cm]{zeroone.jpg} &
\includegraphics[width=8.3cm, height = 6cm]{zerozerotwo.jpg}
\end{tabular}
\caption{Residual images for predicted frames of Foreman for $0.1$ bpp on the left and $0.02$ bpp on the right.} \label{fig:residual}
\end{figure}
We demonstrate our results for several video sequences at different bitrates in Fig. \ref{fig:ours}, where bitrates include the luminance component only for the reference frame, the error frame, and MVs. The graph on the left shows the results with $1/2$ pixel accuracy using $16\times16$ blocks, and the one on the right uses $1/4$ pixel accuracy with $8\times8$ blocks. We also show the residual images for a predicted frame of the Foreman sequence in Fig. \ref{fig:residual}, for $0.1$ and $0.02$ bpp, respectively. The examples show how our method reduces the residual signal energy even at very low bitrates by providing more accurate reconstruction (prediction).
\section{Conclusion}
\label{sec:conclusion}
We propose a novel method for wavelet-based (in-band) ME/MC for MCTF in video coding, where the DWT is applied before temporal decomposition, and the ME/MC steps are performed directly on DWT subbands. We avoid the need for the shift-invariance property of the non-redundant DWT (required by conventional methods for ME/MC) by deriving the exact relationships between the DWT subbands of the reference and transformed video frames.
Our method avoids upsampling, inverse DWT (IDWT), and the calculation of a redundant DWT while achieving high accuracy even at very low bitrates. Experimental results demonstrate the accuracy of the presented method for ME/MC, confirming that our model effectively improves video coding quality by reducing the residual energy in the error frames. The proposed ME/MC scheme can also be adapted for several image/video processing applications such as denoising or scalable video coding.
\bibliographystyle{plain}
\section{Introduction}
Cataclysmic variables (CVs) are compact binaries composed of a Roche-lobe-filling Sun-like star transferring gas to a white dwarf (WD) via an accretion disk (if the WD is non-magnetic) or via a magnetically channeled accretion column (if the WD is magnetic). When a critical mass of hydrogen-rich gas accumulates on the white dwarf, an explosive thermonuclear runaway is triggered, observed as a classical nova (CN).
Following the nova explosion, the stellar remnant enters a phase of
non-explosive evolution during quiescence. This phase is not well
understood. Do these post-novae evolve into dwarf novae as their
accretion rates drop? When does nuclear burning and hence the soft
X-ray production stop following the nova explosion? The time scale
for this to occur is predicted to be in the range of 1 year to $>$ 10
years, possibly as long as 300 years depending upon the mass of the
WD and the amount of H left on the WD after the nova explosion, and
is inversely dependent on the white dwarf mass.
What are the accretion
rates of old novae as a function of time since the nova explosion?
Is there enhanced mass transfer due to irradiation of the secondary
donor star (causing the donor to bloat and/or by driving a wind
off of the donor star) by the hot white dwarf and/or hot accretion
disk? Does the mass of the white dwarf grow following a nova outburst?
An essential key to answering a number of these questions lies in the far-ultraviolet
wavelength domain, where the Planckian peak in the energy distribution
of a cataclysmic variable accretion disk or accreting white dwarf occurs. Specifically, a reliable determination of the rate of accretion onto each post-nova white dwarf, and a comparison of the accretion rates of the systems at different times since their last eruption, could bear directly on the accretion-related questions mentioned above.
The old novae RR Pic, V533 Her and DI Lac have good quality far
ultraviolet spectra on the MAST archive obtained with the Far
Ultraviolet Spectroscopic Explorer (FUSE), and International Ultraviolet
Explorer (IUE).
In addition, archival FUSE spectra enables our model fitting to
extend down to the Lyman Limit which provides additional insights
and constraints on the nature of the hot emitting component.
Moreover, RR Pic offers a number of advantages for a synthetic spectral
analysis. RR Pic itself has an accurate trigonometric parallax and
very good FUSE spectra. Our analysis
utilizes the new Hubble FGS parallax distance of 400 pc which removes
one critical free parameter in our model fitting. The physical and
orbital parameters from the literature for RR Pic, V533 Her and
DI Lac are summarized in Table 1.
\clearpage
\begin{deluxetable}{ccccccccc}
\tablewidth{0pc}
\tablecaption{
Physical and Orbital Properties of Old Novae
}
\tablehead{
System & Nova & $P_{orb}$ & i & d & V & E(B-V) & $M_{\rm wd}$ & Speed Class \\
Name & Year & hr & deg & pc & & & $M_{\odot}$ &
}
\startdata
RR Pic & 1925$^a$ & 3.481$^b$ & 65$^{cf}$ & 380-490$^c$ & 12.3$^d$ & 0.00$^a$ & 0.95$^e$ & Slow$^c$ \\
V533 Her & 1963$^a$ & 3.53$^f$ & 62$^g$ & 560-1250$^i$ $^j$ & & 0.03$^a$ & 0.95$^g$ & Slow$^d$ \\
DI Lac & 1910$^a$ & 13.6$^m$ & $<$18$^i$ & & & 0.26$^a$ & &
\enddata
\tablenotetext{
a}{Selvelli, P. \& Gilmozzi, R. 2013, A\&A, 560, 49
}
\tablenotetext{
b}{Vogt, N. 1975, A\&A, 41, 15.
}
\tablenotetext{
c}{Schmidtobreick, L., Tappert, C. \& Saviane, I. 2008, MNRAS, 389, 1348
}
\tablenotetext{
d}{Warner, B. 1985, Mon. Not. R. ast. Soc., 219.
}
\tablenotetext{
e}{Haefner, R., Metz, K. 1982, A\&A 109, 171.
}
\tablenotetext{
f}{McQuillin et al. 2012, MNRAS, 419, 330
}
\tablenotetext{
g}{Rodriguez-Gil and Martinez-Pais 2002 }
\tablenotetext{
h}{Kraft, R. 1964, ApJ, 139, 457
}
\tablenotetext{
i}{Slavin et al.1995, MNRAS, 276, 353
}
\tablenotetext{
j}{Gill, C. D. \& O'Brien, T. J., 2000, MNRAS, 314, 175
}
\end{deluxetable}
\clearpage
In section 2, we present a log of the FUSE and IUE spectroscopic
observations and describe the spectral line features in the FUSE
wavelength range. In section 3, we present the results of a search
for line and continuum variations as a function of orbital phase.
In section 4, we describe our synthetic spectral fitting codes,
model grids and fitting procedure, while in section 5, we present
our model fitting results. Finally, in section 6, we summarize our
conclusions.
\section{\bf Far Ultraviolet Spectroscopic Observations}
The observing log of far ultraviolet spectroscopic observations
of RR Pic, V533 Her and DI Lac is presented in Table 2. All spectra were downloaded from the MAST archive.
All of the FUSE spectra of the three systems were acquired through the LWRS aperture in TIME-TAG mode. Each exposure was made up of multiple subexposures. As pointed out by Godon et al. (2012), the FUSE reduction
requires a careful post-processing of the different FUSE spectral segments
before they are co-added into one final spectrum.
Extensive details on the acquisition and the processing of the FUSE data
are given in Godon et al. (2012) and will not be repeated here.
The FUSE spectra for the three systems along with their respective line identifications are displayed for RR Pic in Fig. 1, V533 Her in Fig.2 and DI Lac in Fig.3. The absorption line and emission line identifications for the FUSE (as well as for IUE) spectra of RR Pic, V533 Her and DI Lac are comprehensively tabulated by Selvelli \& Gilmozzi (2013).
In the FUSE wavelength range, RR Pic has a rich absorption line spectrum
as noted by Selvelli and Gilmozzi (2013) who tabulate equivalent
widths (EW) and Full Widths at Half-Maximum (FWHM).
Limiting our focus to the strongest absorption features
(EW $>$ 0.75\AA), they are Lyman Epsilon (938), S\,{\sc vi} (934 \& 945),
Lyman Delta (950), S\,{\sc iii} (1012 \& 1016), S\,{\sc iv} (1063, 1073),
P\,{\sc v} (1118), blended P\,{\sc v} (1128.38) +
Si\,{\sc iv} (1128.3) and C\,{\sc iii} (1175).
For V533 Her, all of the emission lines in its FUSE spectrum are sharp and
of geocoronal origin, with FWHM of a fraction of an Angstrom.
However, V533 Her, like RR Pic, exhibits a
rich absorption line spectrum. The strongest features (EW $>$ 0.75\AA)
are due to S\,{\sc iv} (1063), Si\,{\sc ii} (1108), P\,{\sc v} (1118),
Si\,{\sc iv} (1123), P\,{\sc v} (1128), Si\,{\sc iv} (1128), and
C\,{\sc iii} (1175).
The strongest among these features are the P\,{\sc v} lines.
If the abundances derived from these phosphorus lines are suprasolar, then
their overabundance points to explosive CNO burning and hence is
associated with the 1963 and earlier nova explosions of V533 Her
(cf. Sion \& Sparks 2015).
\begin{deluxetable}{cccccc}
\tablecaption{Observation Log}
\tablehead{
System & Telescope & Data ID & Obs. Date & Time & Exp.Time \\
Name & & & YYYY-MM-DD & hh:mm:ss & sec
}
\startdata
RR Pic & FUSE & D9131601001 & 2003-10-29 & 13:35:41 & 1792 \\
& FUSE & D9131601002 & 2003-10-29 & 14:40:16 & 3916 \\
& FUSE & D9131601003 & 2003-10-29 & 16:24:04 & 3733 \\
& FUSE & D9131601004 & 2003-10-29 & 18:07:39 & 4185 \\
& IUE & SWP05775 & 1979-07-11 & 23:15:54 & 1200 \\
DI Lac & FUSE & D9131301001 & 2003-10-20 & 09:38:42 & 2572 \\
& FUSE & D9131301002 & 2003-10-20 & 11:02:47 & 3473 \\
& IUE & SWP29325 & 1986-09-28 & 18:09:18 & 16799 \\
V533 Her & FUSE & D9131701001 & 2003-07-01 & 03:34:36 & 548 \\
& FUSE & D9131701002 & 2003-07-01 & 04:19:26 & 1390 \\
& FUSE & D9131701003 & 2003-07-01 & 05:15:22 & 496 \\
& FUSE & D9131701004 & 2003-07-01 & 05:59:29 & 1687 \\
& IUE & SWP44805 & 1992-05-29 & 07:52:12 & 18000 \\
\enddata
\end{deluxetable}
\clearpage
\clearpage
For DI Lac, unlike RR Pic and V533 Her, the FUSE spectrum does not clearly
reveal strong absorption features. The spectrum is rather noisy and, like
V533 Her, it is strongly affected by ISM molecular hydrogen absorption.
The IUE spectra of the three old novae are displayed together on a log-log
scale in Fig.4.
The clear signature of wind outflow is seen in the IUE spectra of
DI Lac and V533 Her, where the C\,{\sc iv} (1550) resonance doublet is
present with P Cygni structure. The presence of this feature, with its blue-shifted absorption, along with the blue-shifts of other absorption features, points to a wind from the disk and/or boundary layer.
We will return to the description of the IUE spectra of the three systems
in the results section.
\section{Phase Dependent FUV Line and Continuum Variations}
In order to shed light on the origin of the absorption lines in the FUSE spectra of the three systems, we examined the subexposures that were obtained over each individual FUSE orbit. For RR Pic, there were four such exposures, for DI Lac two exposures and for V533 Her, four individual exposures.
The details (timing) of each spectrum are listed in Table 3.
For RR Pic, each subexposure is essentially one orbit of the FUSE spacecraft.
The average spacecraft orbital period is 90 minutes. The first exposure started
on Oct. 29 at 13:35:41, and the last exposure started on Oct. 29 at 18:07:39,
and lasted for approximately 4000s. Hence, RR Pic was observed with FUSE for
approximately 6 hours, compared with its orbital period of 3.481 hours.
The spectra obtained in orbit 1 and 3 are almost identical to each other and
have a higher flux level than the spectra obtained in orbit 2 and 4, which are
themselves almost identical to each other.
We have co-added exposures 1 and 3 together, as well as 2 and 4.
In Fig.5, we have overplotted the two resulting co-added spectra.
The absorption line features are slightly blue-shifted for exposures 1 and 3 and are
slightly red-shifted for exposures 2 and 4. This is consistent with the occulting
material coming from the L1 stream over-shooting the disk, because during that phase,
the WD is moving away from the observer, which explains the red-shift observed
in exposures 2 and 4 with less flux (in a manner similar to the FUSE observations
of EM Cyg, Godon et al. 2009). However, the material absorbing the flux does
not change the depth of the lines but reduces the overall flux (luminosity)
of the spectrum. Thus, the absorption lines do not form from the material
overshooting the disk. In view of this line behavior, the possibility remains
that the absorption is associated with the accreting white dwarf itself.
We note also that the IUE spectra of RR Pic exhibit the same drop of
flux at the orbital phase when the emission lines are slightly red-shifted
compared to when the lines are slightly blue-shifted. The IUE SWP spectrum
of RR Pic that
we retrieved from the archive was obtained when the flux was maximum,
i.e. at the orbital phase where the emission lines are blue-shifted.
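For reference, the exposure-time-weighted co-addition used to combine subexposures can be sketched as follows (an illustration with toy fluxes on a common wavelength grid; the real FUSE post-processing of the individual spectral segments, cf. Godon et al. 2012, is considerably more involved; the exposure times are those of RR Pic orbits 1 and 3 from Table 2, and the function name is ours):

```python
import numpy as np

def coadd(spectra, exptimes):
    """Exposure-time-weighted mean of spectra that have already been
    resampled onto a common wavelength grid (rows = subexposures)."""
    w = np.asarray(exptimes, dtype=float)
    f = np.asarray(spectra, dtype=float)
    return (w[:, None] * f).sum(axis=0) / w.sum()

wave = np.linspace(905.0, 1187.0, 5)   # FUSE range, Angstrom (toy grid)
orbit1 = np.array([1.0, 1.1, 1.2, 1.1, 1.0])
orbit3 = np.array([1.0, 1.3, 1.2, 1.1, 1.2])
combined = coadd([orbit1, orbit3], [1792.0, 3733.0])

# The weighted mean stays between the two input fluxes at every pixel.
assert combined.shape == wave.shape
assert np.all((combined >= np.minimum(orbit1, orbit3)) &
              (combined <= np.maximum(orbit1, orbit3)))
```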
The total exposure time for V533 Her is 1.145 hrs, compared with its orbital period of 3.53 hrs. Our search for variations in the line and continuum flux proved more difficult than expected due to the noisier subexposures.
For the fainter DI Lac, we have only two FUSE subexposures to examine.
The total FUSE exposure time was 6045 s or 1.679 hrs, compared with its orbital period of 13.6 hrs. Hence, this amounts to a little over 12\% of its orbit. From our examination of the two subexposures, there is little to suggest any variation in the continuum level and absorption lines during the FUSE observation.
\section{\bf Synthetic Spectral Analysis}
We adopted model accretion disks from the solar composition
optically thick, steady state disk
model (``the standard disk model'')
grid of Wade and Hubeny (1998). In these accretion disk models, the
outermost disk radius, $R_{\rm out}$, is chosen so that
$T_{\rm eff}$ ($R_{\rm out}$) is near 10,000K since disk annuli beyond this
point, which are cooler zones with larger radii, would provide only a very
small contribution to the mid and far UV disk flux, particularly in the
FUSE and SWP FUV band pass. For the disk models, unless otherwise specified,
we selected every combination of $\dot{M}$, inclination and white dwarf
mass to fit the data:
the inclination angle i = 18, 41, 60, 75 and 81
degrees, $M_{\rm wd}$ = 0.80, 1.03, 1.21 $M_{\odot}$ and
$\log(\dot{M})$ ($M_{\odot}$/yr) = -8.0, -8.5, -9.0, -9.5, -10.0, -10.5.
For the WD models, we used TLUSTY Version 203 (Hubeny 1988) and
Synspec48 (Hubeny and Lanz 1995) to construct a grid of solar-composition
WD stellar photospheres, with temperatures
from 12,000K to 60,000K in steps of 1,000K to 5,000K, and with
effective surface stellar gravity, Log(g),
corresponding to the white dwarf mass of the accretion disk model.
We adopt a projected standard stellar rotation rate $V_{\rm rot}
\sin(i) $ of 200 km/s.
We carried out synthetic spectral fitting with disks
and photospheres, and a combination of both to model the FUSE spectra,
the IUE spectra, and, when possible,
the combination of the FUSE + IUE spectra to attempt to consistently
fit a broader wavelength baseline.
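The common step in all of these fits is scaling a model to the observed flux, from which a distance follows. A minimal sketch of that step (assuming the model gives the flux at the stellar surface, so that the best-fit scale factor is $(R/d)^2$; the function and the illustrative numbers are ours, not the actual fitting pipeline):

```python
import numpy as np

KM_PER_PC = 3.086e13

def scale_fit(f_obs, f_model, r_km):
    """Linear least-squares scale S (= (R/d)^2), chi^2, and distance (pc)."""
    S = np.sum(f_obs * f_model) / np.sum(f_model ** 2)
    chi2 = np.sum((f_obs - S * f_model) ** 2)
    d_pc = r_km / np.sqrt(S) / KM_PER_PC
    return S, chi2, d_pc

# Illustrative: a flat model surface flux diluted by (R/d)^2
f_model = np.ones(100)
f_obs = 1e-24 * f_model
S, chi2, d_pc = scale_fit(f_obs, f_model, r_km=6000.0)
assert np.isclose(S, 1e-24) and chi2 < 1e-40
assert abs(d_pc - 194.4) < 0.5   # 6000 km / 1e-12 -> ~194 pc
```

This is also why the derived distance scales linearly with the assumed WD radius: halving $R$ halves $d$ at fixed scale factor.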
\section{\bf Synthetic Spectral Fitting Results}
\subsection {RR Pic}
The IUE spectrum of RR Pic is consistent with a $\sim$15,000 K component,
and our synthetic spectral fits yield a low WD surface temperature and/or
a low mass accretion rate, giving an extremely short distance ($< 20$ pc).
The IUE spectrum of RR Pic is displayed in
Fig.4 together with standard disk models for comparison.
{\it The slope of the continuum of
the IUE spectrum of RR Pic is much more shallow than that of
a standard disk model,} and could indicate the presence of
a non-standard disk.
On the other hand, however,
the FUSE spectrum of RR Pic reveals a rising FUV continuum
(toward shorter wavelengths) and strong absorption features suggesting
either a hot photosphere and/or a hot accretion disk.
Therefore, we only model the FUSE spectrum of RR Pic.
For the spectral fits (going down to the Lyman limit)
we have co-added the spectra from orbit 1 with that of orbit 3,
when the source is least likely veiled by material flowing over the
disk's rim. The FUV slope of the FUSE continuum points to the presence
of a hot component. If we assume that the
FUSE flux is due to a hot white dwarf, then we obtain a very high
temperature of the order of $T_{\rm eff}$ = 70,000K to 80,000K
with an inflated radius
of the order of $\sim 0.03 R_{\odot}$ ($\approx$20,000km) to account
for the distance.
Such a model is shown in Fig.6.
For $M_{\rm wd} = 1 M_{\odot}$ with a non-inflated radius,
the scale-factor-derived
distance is reduced to $\sim$100 pc, which is well below the parallax distance.
For accretion disk model fits to the
FUSE spectrum, with i = 60 degrees, $M_{\rm wd} = 0.8 M_{\odot}$ and a high accretion
rate of $10^{-8} M_{\odot}$/yr,
we obtain a distance of 427pc, which is within the
range of RR Pic's parallax distance.
This model has a flux level that is too low in the shorter wavelengths of FUSE.
If we increase the WD mass to $1.03 M_{\odot}$, then the fit in the
shorter wavelengths is slightly improved and a distance of 506 pc is obtained.
This accretion disk fit is shown in Fig.7.
In order to fit the absorption lines with the disk model,
the inclination of the disk has to be set to 3deg,
which is almost a face-on disk, while the inclination of RR Pic is
known to be 65deg.
Since the absorption lines do not arise
in the disk (see Fig.7), they may arise from the white
dwarf photosphere. If true, this would imply that we could be seeing
significant flux from the post-nova hot white dwarf photosphere itself.
However, unless the WD is very hot with an inflated radius,
the WD contributes too little flux for this to be a viable possibility:
the addition of a WD (with a non-inflated radius)
to the disk model does not improve the fit.
\subsection{V533 Her}
The IUE spectra of V533 Her also reveal a relatively shallow slope of the
continuum consistent with a cold component, namely, either a low mass
accretion rate disk, or a low temperature WD, or a combination of both.
The IUE spectrum of V533 Her is also displayed in Fig.4
in comparison to standard accretion disk models.
The resulting distance we obtain from the low mass accretion
disk model fits, however, is far too close
and we are forced to reject these solutions.
The slope of the IUE spectrum, on the overall,
agrees with that of a $\sim$15,000 K component.
While this could be due to a non-standard disk, the FUSE spectrum
of V533 Her has a slope consistent with a hot FUV component with a
temperature 40,000K $\pm$5,000 K, the exact value depends on the
assumed WD surface gravity $\log(g)$.
For a $1 M_{\odot}$ model, a hot white dwarf fit
yields a distance that is too short (300-400pc), about 2-3 times smaller than
the accepted value. Since the distance obtained from the model
scales like the WD radius, a possible solution would be a hot
WD with a radius inflated to 2-3 times its expected value.
In Fig.8 we present a solar abundances 40,000K WD fit with $\log(g)=8.6$,
corresponding to a $1 M_{\odot}$ WD with a radius of 6,000km,
giving a distance of 316pc.
Turning to accretion disk models, a disk around a
$1 M_{\odot}$ WD with an accretion rate of $10^{-9} M_{\odot}$/yr
and i = 60 degrees produces a good continuum fit and a reasonable
distance of 832 pc.
This disk model is shown in Fig.9. Here too, the disk does not fit any
of the absorption lines.
The disk+WD fit to the FUSE spectrum of V533 Her produces the
best fit when each component contributes about half of the flux.
This is achieved for $\dot{M}=10^{-9.5}M_{\odot}$/yr and
$T_{\rm wd}=60,000$ K, giving a distance of 640pc and where the
WD contributes 51\% of the flux and the disk contributes the
remaining 49\%.
This model is shown in Fig.10.
Note, however, that while the model disk fits match the FUSE slope
better, disk models clearly do not fit any of the absorption features.
This is important since a hot white dwarf model does fit the absorption
lines in the FUSE spectrum quite well. Nevertheless, we can virtually
rule out the white dwarf because it contributes too little FUV flux,
unless it is very hot with an inflated radius two to four times larger
than the expected 6,000km.
\subsection{DI Lac}
The distance we obtain from the model fit scales with the radius of the WD,
which itself depends on the WD mass. Since both the WD mass
and the distance to the system are unknown, and the spectra are of low
quality, there is a degeneracy in the modeling.
Therefore, we assume a standard WD mass
of $M_{\rm wd} = 0.8M_{\odot}$, $\log(g)=8.4$, and an
inclination of 18deg in the disk models.
The IUE spectrum of DI Lac presents a moderately shallow continuum slope
consistent with a component with a temperature of about 20,000K.
A WD with T=20,000K gives an unrealistic distance of barely 40pc
(that would scale to less than 100pc assuming a low WD mass),
while the disk model agreeing with the slope has a mass accretion rate of
$10^{-10}M_{\odot}$/yr and gives a distance of 140pc
(see Fig.4).
The FUSE spectrum of DI Lac is consistent with a component
with a temperature of about 30-35,000K. A $0.8M_{\odot}$ WD with
a temperature of 35,000K gives a distance of $\sim 150$pc, while
a 30,000K WD gives a distance of $\sim 110$pc.
An accretion disk model best fit to the FUSE spectrum
has a mass accretion rate of
$10^{-10}M_{\odot}$/yr (giving a distance of 140pc)
to $3 \times 10^{-10}M_{\odot}$/yr (and a distance of 300pc).
Since both the FUSE and IUE spectra of DI Lac are of rather low quality
but are consistent with each other as they
give similar results, we decide to model the combined FUSE+IUE
spectrum. The best fit model to the combined FUSE+IUE spectrum of
DI Lac is a disk + WD model, where a 30,000K WD helps fit the absorption
lines in the FUSE range of the spectrum, while a disk with
$\dot{M} = 10^{-10}M_{\odot}$/yr provides the lower flux needed to
fit the IUE range of the spectrum.
The distance obtained from this model is d = 175pc.
This combination FUSE + IUE fit is displayed in Fig.11 showing the
FUSE range, and in Fig.12 showing the IUE range.
The dotted line is the contribution of the white dwarf and the dashed
line is the flux of the model accretion disk.
\section{Summary}
It is apparent from our study of the hot components in the old novae
addressed in this paper that the wavelength coverage extending down
to the Lyman limit, provided by the FUSE spacecraft, is essential to uncovering
the nature of the hot component, be it a white dwarf or an accretion
disk. The FUSE coverage reveals that for RR Pic, the hot component
is more likely represented by a bright accretion disk with the corresponding
accretion rate of $10^{-8}M_{\odot}$/yr. An accretion rate this high
is supported by optical observations of the accretion rates of old novae
(e.g. Warner 1995 and references therein). This best-fitting accretion
disk solution has a white dwarf of $\sim 1 M_{\odot}$, a disk
inclination of 60 degrees and yielded a distance to RR Pic of 506 pc
which agrees well with the parallax distance.
We find that a hot WD ($T\sim 70-80,000$K) also fits the FUSE spectrum
of RR Pic, but implies a WD radius inflated to about 20,000km to
agree with the observed distance.
Likewise for V533 Her,
the FUSE spectra provide fitting solutions with an accretion disk
around a $1 M_{\odot}$ WD and a disk inclination of 60 degrees
yielding an accretion rate of $10^{-9}M_{\odot}$/yr at a distance of
832 pc. But as in the case of RR Pic, the strong absorption features
are not reproduced by the accretion disk.
Here too we cannot rule out a hot WD with an inflated radius
as the source of the FUV: a 40,000K WD fits the FUSE spectrum
and some of the absorption lines, assuming a WD radius of $\sim 12,000$km
to 24,000km.
A combined WD+disk model also yields a reasonable fit with a
60,000K WD providing 51\% of the flux and a $10^{-9.5}M_{\odot}$/yr
disk providing the remaining 49\%, giving a distance of 640pc.
For DI Lac, we opt to combine the FUSE spectrum with the
matching IUE spectrum, which together provide self-consistent results.
We find that the best fit is obtained with a combined disk+WD model,
where the WD has a temperature of 30,000K.
Our derived accretion rate of $10^{-10}M_{\odot}$/yr is
lower by about an order of magnitude than both RR Pic and V533 Her.
Although Nova Lac 1910 is considerably earlier than Nova Her 1963,
Nova Pic 1925 is almost as old as Nova Lac 1910 but has an accretion
rate two orders of magnitude higher.
The existing IUE spectra of RR Pic and V533 Her both exhibit a continuum
slope much more shallow than that of the standard disk model
(see Fig.4)
and in disagreement with the FUSE spectra revealing a hot component
that can be matched with an accretion disk.
This is a possible indication that the standard disk model does not apply here,
and it also prevented the use of the IUE spectra in our spectral analysis.
This work is supported by NASA ADP grants NNX13AF12G and NNX13AF11G
to Villanova University. One of us, LJ, was supported by a Villanova
Undergraduate Research Fellowship (VURF). This paper is based on
observations made with the NASA-CNES-CSA Far Ultraviolet Spectroscopic
Explorer (FUSE). FUSE was operated for NASA by the Johns Hopkins
University under NASA contract NAS5-32985.
\section*{Acknowledgements}
The author is indebted to Hans Gerd Evertz
for helpful discussions. Part of this work was performed at the
Institut f\"ur Theoretische Physik at TU-Graz under FWF grant number
P21240-N16.
\section{Introduction}
Radiation transfer is, in essence, a difficult problem (e.g., Rutily
\& Chevallier 2006), as well as a question of very broad relevance in
astrophysics. It relies indeed on \emph{complex} non-linear
light--matter interactions (see e.g., Hubeny \& Mihalas 2014, Rutten
2003).
At the very heart of the problem lies the issue of how photons may
scatter on the {\em moving} massive particles constituting the
atmosphere under study. The usual literature classifies these processes
as either ``complete'' redistribution in frequency (CRD) or
``partial'' redistribution in frequency (PRD; see e.g., \S10 of Hubeny
\& Mihalas 2014). The vast majority of astrophysical problems are
solved, still, within the frame of CRD, for which further
simplifications are the equality of emission and absorption profiles
-- the latter being also usually known {\em a priori} --, which also
leads to the independence of the so-called source function
vs. frequency.
Besides, and more generally, while non-equilibrium distributions of
photons i.e., potential departures from the Planck law, have been
routinely considered since the late 60's, a very limited number of
studies tried to push further the description of the physical problem,
by questioning {\em to what extent the most often assumed
Maxwell--Boltzmann velocity distribution of the massive particles
onto which photons scatter may remain valid} (see e.g., Oxenius \&
Simonneau 1994, and references therein).
Non-maxwellian velocity distribution functions (hereafter vdf) have
been studied (e.g., Scudder 1992 and further citations) or evidenced
in \emph{natural} plasma (see e.g., Jeffrey et al. 2017 for a recent
study about solar flares). Such departures from Maxwell--Boltzmann
vdf's have also been considered in the radiative modelling of spectral
lines formed in neutral planetary exospheres (e.g., Chaufray \&
Leblanc 2013), where these authors introduced so-called $\kappa$ vdf's
into their photon scattering physical model.
However, such non-maxwellian vdf's are still known {\em ab initio}
before solving the radiation transfer problem. The more general issue
of computing {\em self-consistent} non-equilibrium distributions for
both photons and massive particles -- whose associated problem we
coin ``full non--LTE radiation transfer'' -- remains a quite open
question in astrophysics, although a few studies have already been
conducted in the past (see e.g., Borsenberger et al. 1986, 1987).
Hereafter, we provide a first numerical tool that will allow us
to go further in this direction, enabling further computations of {\em
generalized} redistribution functions. Moreover the numerical scheme
we evaluated may also be of more general interest, for other topics of
numerical (astro)physics.
\section{Redistribution in frequency}
As an illustrative but important example, we shall focus here on the
case of coherent scattering in the {\em atomic} frame of reference,
for a spectral line of central frequency $\nu_0$. We shall also
assume that only ``natural'' broadening is at play for the upper
energy level of, typically, a resonance line with an infinitely sharp
lower level.
Therefore, we shall consider an elementary {\em frequency}
redistribution function $r(\xi',\xi)$ such that:
\begin{equation}
r(\xi',\xi) = \varphi(\xi') \delta(\xi'-\xi) \, ,
\end{equation}
where $\xi'$ and $\xi$ are, respectively, the incoming and the
outgoing frequencies of a photon, and $\delta$ is the usual Dirac
distribution, together with:
\begin{equation}
\varphi(\xi') = \left( \frac{\Gamma}{\pi} \right) {\frac{1}{(\xi' -
\nu_0)^2 + {\Gamma}^{2}}} \, .
\end{equation}
The latter is a {\em Lorentzian} profile, with damping parameter $\Gamma$,
resulting from the ``natural width'' of the upper atomic state of the
transition at $\nu_0$.
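As an elementary numerical sanity check (a sketch only; the values of $\Gamma$ and $\nu_0$ below are arbitrary illustrative choices), one may verify that the Lorentzian profile of Eq.\,(2) integrates to unity over frequency:

```python
import numpy as np

# Natural (Lorentzian) broadening profile of Eq. (2);
# gamma_ and nu0 are arbitrary illustrative values.
def phi(xi, nu0=0.0, gamma_=1e-2):
    return (gamma_ / np.pi) / ((xi - nu0) ** 2 + gamma_ ** 2)

# The profile is normalized: analytically its integral over the real
# line is (1/pi) * [arctan((xi - nu0)/gamma_)] evaluated at +/- infinity = 1.
xi = np.linspace(-100.0, 100.0, 2_000_001)
print(np.trapz(phi(xi), xi))   # close to 1
```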
If we assume that the {\em angular} redistribution associated with the
scattering event is {\em isotropic}, such a case of radiation damping
and coherence in the atom's frame refers to the standard case ``II-A''
in the nomenclature of Hummer (1962; see also Hubeny \& Mihalas 2014).
Once the elementary scattering process has been defined in the atomic
frame of reference, we have to consider, for a further practical
implementation into a radiative transfer problem, the collective
effects induced by the agitation of the pool of massive particles
populating the atmosphere. It is precisely in this ``jump'' to the
observer's frame of reference, because of Doppler shifts such as:
\begin{equation}
\nu = \xi + \frac{\nu_0}{c} {\vec{n} . \vec{v}} \, ,
\end{equation}
where $\nu$ is the observed frequency, $\vec{n}$ may be either the
incoming or the outgoing direction of a photon, and $\vec{v}$ the
velocity of the massive particle onto which the scattering takes
place, that some assumption has to be made about the vdf of the
massive atoms (or molecules) present in an atmosphere, under given
physical conditions.
Detailed derivations of $R_{II-A}$ can be found in the classical
literature about redistribution functions, from Henyey
(1940)\cite{typos} to Hummer (1962). Standard redistribution
functions were first derived assuming that the vdf of the atoms
scattering light is a Maxwell--Boltzmann distribution. More generally,
any macroscopic redistribution function in the observer's frame
suitable for implementation into the numerical radiative transfer
problem will result from a further integration over each velocity
component $u_i$ (hereafter normalized to the most probable velocity
$v_{\rm th.}=\sqrt{2kT/m}$) characterizing the motion of the
scattering atoms, thereby accounting for the changes of frequency due
to the associated Doppler shifts expressed by Eq.\,(3). The latter
phenomenon is usually referred to as Doppler, or {\em thermal},
broadening.
\section{The numerical problem}
\begin{figure}[]
\includegraphics[width=9. cm,angle=0]{PPA_Fig01.png}
\caption{The failure of a standard Gauss--Hermite quadrature of order
$k=70$ (green), as compared to the almost superimposed results
from, respectively, the method using the Faddeeva complex function
(dark) and our alternative double adaptive Gaussian quadrature
scheme, for a normalized Voigt profile with $a=10^{-2}$.}
\label{Fig1}
\end{figure}
We aim at generalizing computations of redistribution functions in
order to be able to compute vdf's self-consistently with the radiation
field. Therefore we need a \emph{robust} numerical approach to repeatedly
perform numerical integrations like:
\begin{widetext}
\begin{equation}
H_{1}(x',x,\gamma) = \displaystyle \int_{-\infty}^{+\infty} { {
{f(u_1)du_1} \over { [ (\frac{x+x'}{2}) \sec(\gamma/2) - u_1
]^2 + [ \frac{a}{{\Delta \nu_{D}}} \sec(\gamma/2)]^2 } } }
\, ,
\end{equation}
\end{widetext}
where $x'$ and $x$ are the usual incoming and outgoing {\em
reduced}\cite{redfreq} frequencies in the observer's frame, ${\Delta
\nu_{D}}$ the Doppler width defined as $(\nu_0/c)v_{\rm th.}$, and
$\gamma$ the diffusion angle between incoming and outgoing directions
in the plane defined by $u_1$ and $u_2$. For the Maxwell--Boltzmann
case, we should indeed use:
\begin{equation}
f(u_1)=\frac{1}{\sqrt{\pi}} e^{-u_1^2} \, ,
\end{equation}
but we shall need to consider $f(u_1)$ to be \emph{non}-analytic and, at
first, (slightly) departing from the standard maxwellian vdf. Indeed,
physical conditions leading to small departures from a Gaussian vdf
have already been identified and discussed by Oxenius (1986), and they
would correspond to a non--LTE gas of moderate optical thickness. Note
also that, for a preliminary study, we shall assume a self-consistent
vdf solution of the problem that may still be decomposed as
$f_1(u_1)f_2(u_2)f_3(u_3)$.
However, before exploring potential departures from gaussianity, we
need to adopt a robust enough numerical strategy in order to
numerically evaluate integrals such as Eq.\,(4), a task which is
notoriously difficult even with Maxwell--Boltzmann vdf's. It is very
easy to verify that, for instance, a standard Gauss--Hermite (GH)
quadrature, even at high order $k$, fails at properly computing a
somewhat simpler expression like the Voigt\cite{voigt} function given
in Eq.\,(13). We display in Fig.\,(1) the comparison between a GH
integration and the new numerical scheme presented hereafter.
\section{Adaptive Gaussian Quadrature}
We shall start following the scheme proposed by Liu \& Pierce (1994),
which is based on the classical Gauss--Hermite (GH) quadrature. The
latter is indeed suitable for integrations of the kind:
\begin{equation}
I=\displaystyle \int_{-\infty}^{+\infty} { f(y){ {e^{-y^2}dy} } } \, .
\end{equation}
Then the GH quadrature is such that:
\begin{equation}
\displaystyle \int_{-\infty}^{+\infty} { f(y){ {e^{-y^2}dy} } }
\simeq \sum_{i=1}^{k} {w_i f(y_i)}
\end{equation}
where the nodes $y_i$ are the zeros of the $k$-th order Hermite
polynomial, and $w_i$ the corresponding weights. Tabulated values of
both nodes and weights can be found very easily, and they are also
available for various programming language. We shall use {\tt numpy}'s
(Oliphant 2006) function {\tt polynomial.hermite.hermgauss}, and a GH
of order $k=70$ for all results presented hereafter.
The main drawback of such a standard quadrature is that the function $f$
is scanned at the very nodes $y_i$, {\em irrespective of} the
range where it may have its most significant variations.
However, Liu \& Pierce (1994) proposed that, given a function $g$ to
be integrated, one may define:
\begin{equation}
h(y)= { {g(y)} \over {{\cal N}(y;\hat{\mu},\hat{\sigma})} } \, ,
\end{equation}
where ${\cal N}$ is the usual Gaussian function:
\begin{equation}
{\cal N}(y;\hat{\mu},\hat{\sigma}) = \frac{1}{\hat{\sigma} \sqrt{2\pi}}
e^{-\frac{1}{2} (\frac{y-\hat{\mu}}{\hat{\sigma}})^2 } \, ,
\end{equation}
so that one can write:
\begin{equation}
\displaystyle \int_{-\infty}^{+\infty} { g(y) dy } = \displaystyle
\int_{-\infty}^{+\infty} { h(y){{\cal N}(y;\hat{\mu},\hat{\sigma})}
dy } \, ,
\end{equation}
and, finally:
\begin{equation}
\displaystyle \int_{-\infty}^{+\infty} { g(y) dy } \simeq
\sum_{i=1}^{k} {{{w_i} \over {\sqrt{\pi}}} h(\hat{\mu}+ \sqrt{2}
\hat{\sigma} y_i)} \, .
\end{equation}
This adaptive Gaussian quadrature scheme (AGQ) allows one to use the
original nodes and weights of the GH quadrature, but somewhat {\em
  zooms in} on these domains where the function $g$ has its most
significant variations.
The choice of $\hat{\mu}$ and $\hat{\sigma}$ is of importance. Liu \&
Pierce (1994) suggested to adopt $\hat{\mu}$ to be the mode of $g$,
and $\hat{\sigma}=1/\sqrt{j}$, where:
\begin{figure}[]
\includegraphics[width=9. cm,angle=0]{PPA_Fig02.png}
\caption{Example of distribution of nodes for an initial
Gauss-Hermite quadrature of order $k=70$ (dots), and for our
adaptive Gaussian quadrature centered at the Lorentzian peak
(crosses).}
\label{Fig2}
\end{figure}
\begin{equation}
j = - \frac{\partial^2}{\partial y^2} \log g(\hat{\mu}) \, .
\end{equation}
We shall come back to this choice in the following section, and show
that a somewhat larger $\hat{\sigma}$ value is more suitable for the
special case of the Voigt profile.
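A minimal Python sketch of the quadrature of Eq.\,(11), using {\tt numpy}'s {\tt hermgauss} mentioned above, may read as follows. The Lorentzian test case, and the choice $3\hat{\sigma}$ with $\hat{\sigma}=1/\sqrt{j}=\Gamma/\sqrt{2}$ (Eq.\,(12) applied to a pure Lorentzian), are illustrative assumptions:

```python
import numpy as np

def agq(g, mu, sigma, k=70):
    """Adaptive Gaussian quadrature of Liu & Pierce (1994), Eq. (11):
    integrate g over the real line with Gauss--Hermite nodes recentred
    at mu and rescaled by sigma."""
    y, w = np.polynomial.hermite.hermgauss(k)
    t = mu + np.sqrt(2.0) * sigma * y                 # shifted/scaled nodes
    gauss = (np.exp(-0.5 * ((t - mu) / sigma) ** 2)
             / (sigma * np.sqrt(2.0 * np.pi)))        # N(y; mu, sigma), Eq. (9)
    return np.sum(w * g(t) / gauss) / np.sqrt(np.pi)  # h = g / N, Eqs. (8), (11)

# Sharp Lorentzian of unit area, peaked at y0: a standard GH quadrature
# misses the peak, while AGQ centred on it (with 3*sigma_hat, where
# sigma_hat = 1/sqrt(j) = Gamma/sqrt(2) for a pure Lorentzian) recovers
# most of the integral.
Gamma, y0 = 1e-2, 1.7
lorentz = lambda y: (Gamma / np.pi) / ((y - y0) ** 2 + Gamma ** 2)
print(agq(lorentz, mu=y0, sigma=3.0 * Gamma / np.sqrt(2.0)))   # close to 1
```

When $g$ is exactly the Gaussian ${\cal N}(y;\hat{\mu},\hat{\sigma})$, the ratio $h$ is constant and the quadrature is exact, which provides a convenient unit test.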
\section{AGQ tests with the Voigt function}
Let us consider the {\em normalized} Voigt function
hereafter defined as:
\begin{equation}
H(a,u) = \frac{1}{\sqrt{\pi}} \left( \frac{a}{\pi} \right)
\int_{-\infty}^{+\infty} { e^{-y^2} dy \over {(u-y)^2 + a^2} } \,
,
\end{equation}
and which satisfies:
\begin{equation}
\int_{-\infty}^{+\infty} {H(a,u) du} = 1 \, .
\end{equation}
Note that several authors use $H$ for the Voigt profile normalized to
$\sqrt{\pi}$, and $U$ for the profile normalized to unity, which we
denote $H$ here (see e.g., Hubeny \& Mihalas 2014, their \S 8). We
shall also use $h(y; a,u)$ for the integrand of Eq.\,(13).
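For reference, the Faddeeva-function benchmark used in Figs.\,(1) and (5), based on {\tt scipy.special.wofz}, can be sketched as follows; the identity $H(a,u)=\mathrm{Re}\,[w(u+ia)]/\sqrt{\pi}$, with $w$ the Faddeeva function, is the standard one for this unit-area normalization:

```python
import numpy as np
from scipy.special import wofz

def voigt_H(a, u):
    """Voigt profile normalized to unit area, Eqs. (13)-(14):
    H(a, u) = Re[w(u + i*a)] / sqrt(pi), with w the Faddeeva function."""
    return np.real(wofz(u + 1j * a)) / np.sqrt(np.pi)

# Unit normalization (Eq. 14), checked by brute-force trapezoidal integration
u = np.linspace(-200.0, 200.0, 400_001)
print(np.trapz(voigt_H(1e-2, u), u))   # close to 1
```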
For this numerical integration, three main regimes should be
considered, depending on the values of $u$ i.e., according to the
respective amplitudes of the Gaussian and the Lorentzian components of
the integrand. For the line core region such that $\lvert u \rvert <
2$, we use a slightly modified AGQ, for which we use a value of
$\hat{\sigma}$ larger than the one suggested in the original article
of Liu \& Pierce (1994). We display in Fig.\,(2) the new quadrature
nodes, marked with crosses, centered at the Lorentzian peak located at
$y \approx 1.7$, and using $3\hat{\sigma}$ instead of the value
suggested in the original prescription of Liu \& Pierce (1994). The
nodes of the standard GH quadrature (at the \emph{same} order) are
displayed as dots. They extend too far away and clearly ``miss'' the
large-amplitude Lorentzian peak, and therefore the dominant
contribution to the integral.
Second, we perform a {\em double} AGQ scheme for the near wing regions
such that $2 < \lvert u \rvert < 4$, and for which two discernible
peaks of \emph{comparable} amplitudes result from, respectively, the
Lorentzian and the Gaussian components of the convolution (we shall
hereafter refer to $u_2$ and $u_4$ for these two boundary values). In
such a case, we use both the centering and integration range controls
provided by the original AGQ for evaluating \emph{separately} the
contribution from each component of the integrand. For the Lorentzian
component we therefore do as when $\lvert u \rvert < 2$, but {\em we
add} to this part of the integral the contribution of the nearby
Gaussian peak using {\em another} AGQ centered at 0, and of specific
$\hat{\sigma}_G$ obviously adapted to the width of the known Gaussian
component of the integrand (also, overlap with the nearby Lorentzian
component should be avoided). The two distinct sets of nodes, based
on the same original GH quadrature nodes, at same order, are displayed
by crosses of different colors in Fig.\,(3).
\begin{figure}[]
\includegraphics[width=9. cm,angle=0]{PPA_Fig03.png}
\caption{Respective distributions of nodes, marked by crosses of
different colors, from an initial Gauss-Hermite quadrature of
order $k=70$ when the Gaussian and Lorentzian peaks are of
comparable amplitude, here for $u$ around 3.}
\label{Fig3}
\end{figure}
Finally, for the far wing, where $\lvert u \rvert > u_4$ and the
Lorentzian peak fades out, the \emph{usual} Gauss--Hermite quadrature is
satisfactory.
\begin{figure}[]
\includegraphics[width=9. cm,angle=0]{PPA_Fig04.png}
\caption{Voigt profiles $H(a,u)$ computed with our double AGQ scheme
for, respectively, $a=0.001$, $10^{-4}$, $10^{-5}$ and $10^{-6}$
(and decreasing wing values). Small discontinuities are still
noticeable at the transitions values about 2 and 4. This should
however not impair any standard scattering integral computation.}
\label{Fig4}
\end{figure}
Results using our double AGQ quadrature scheme are displayed in
Fig.\,(4), for different values of $a$ ranging from $10^{-3}$ down to
$10^{-6}$, the regimes most likely expected for our next computations.
Maximum relative errors, computed against the Faddeeva-function method
({\tt scipy.special.wofz} in Python) used as a benchmark, are at
most a few percent, as displayed in Fig.\,(5); note also that the
latter was obtained using a $u_4$ value of 4.25, instead of the
fiducial value of 4, indicating also in what direction a further
fine-tuning could be worked out, if necessary, by considering $u_2$
and $u_4$ as slowly varying functions of $a$. Sometimes we can still
notice small discontinuities at the changes of regimes, at $u_2$ and
$u_4$. We believe however that, should our procedure be used for Voigt
profile computations and radiative modelling, such small and very
local discontinuities will not impair further computations of these
scattering integrals entering the equations of the statistical
equilibrium.
This new numerical scheme is particularly efficient for {\em small}
values of $a$, typically lower than 0.01, where other schemes may fail
(see for instance the discussion in Schreier 2018 about the
implementation of {\tt Voigt1D} in the {\tt astropy} package in
Python, using the McLean et al. 1994 method). But first of all, it is
certainly suitable for our next applications of such a numerical
integration scheme, and for physical conditions leading to very sharp
Lorentzian peaks. We could also test the sensitivity of our scheme to
the \emph{order} of the initial Gauss--Hermite quadrature. For
instance, for $a<0.01$ we could go down to orders 40 to 50 without any
significant loss of accuracy.
\begin{figure}[]
\includegraphics[width=9. cm,angle=0]{PPA_Fig05.png}
\caption{Relative error between our computations with the double AGQ
method, for $a=10^{-4}$, and a reference computation using the
Faddeeva complex function.}
\label{Fig5}
\end{figure}
For larger values of $a$, typically more than 0.1, we noticed that
{\em no} intermediate scheme between the original Liu \& Pierce (1994)
at line core, and the Gauss--Hermite in the wings appears
necessary. However the transition value between the two regimes should
be adapted to the value of $a$, in a 2 to 4 range.
\section{Conclusion}
We have tested a suitable numerical strategy for our first step
towards ``fully non-LTE'' radiative transfer calculations, and the
computation of generalized frequency redistribution functions. We
modified the original strategy of Liu \& Pierce (1994), but also
applied it to a \emph{non}-unimodal distribution.
Our numerical scheme does not pretend to compete with the numerical
methods implemented for the {\em very} accurate computation of the
Voigt function (see e.g., Schreier 2018 and references therein), since
our aim is elsewhere, i.e., to explore departures from Gaussian
vdf's. It nevertheless provides very good results as compared to
reference computations, such as the one using the Faddeeva complex
function. Relative errors of at most a few percent are systematically
reported in the near-wing region, and we believe that further
fine-tuning could be achieved to reach an even better accuracy.
This is, however, not the scope of our study, which aims at computing
generalized redistribution functions, after self-consistent
computations of the respective distributions of both massive particles
and photons under various physical conditions. In that respect, our
main concern is indeed a proper ``capture'' of the expected {\em very}
sharp, and therefore very-large-amplitude, Lorentzian peaks. We
believe that the principle of our numerical integration scheme should
remain valid for the easier-to-track contribution from the velocity
distribution function, even for computed perturbations from a Gaussian
shape.
As a final remark, we are also aware that computations with
\emph{non}-Gaussian functions convolved with a Lorentzian may also be
doable, using a Fourier transform based method (e.g., Mendenhall
2007).
\begin{acknowledgments}
Fr\'ed\'eric Paletou is grateful to his radiative transfer {\it
  sensei}, Dr. L.H. ``Larry'' Auer, with whom he started discussing
  these issues a long time ago.
\end{acknowledgments}
\section{Introduction}
According to conventional wisdom, explicit regularization should be added to the least-squares objective when the Hilbert space $\cH$ is high- or infinite-dimensional \citep{Golub_1979,wahba1990spline,smola1998learning,shawe2004kernel,evgeniou2000regularization, de2005model, alvarez2012kernels}:
\begin{align}
\min_{f\in\cH} \frac{1}{n}\sum_{i=1}^n (f(x_i)-y_i)^2 + \lambda \norm{f}_{\cH}^2.
\end{align}
The regularization term is introduced to avoid ``overfitting'' since kernels provide enough flexibility to fit training data exactly (i.e. interpolate it). From the theoretical point of view, the regularization parameter $\lambda$ is a knob for balancing bias and variance, and should be chosen judiciously. Yet, as noted by a number of researchers in the last few years,\footnote{In particular, we thank M. Belkin, B. Recht, L. Rosasco, and N. Srebro for highlighting this phenomenon.} the best out-of-sample performance, empirically, is often attained by setting the regularization parameter to \emph{zero} and finding the minimum-norm solution among those that interpolate the training data. The mechanism for good out-of-sample performance of this interpolation method has been largely unclear \citep{zhang2016understanding,belkin2018understand}.
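For concreteness, the minimizer of the regularized objective above is given by the standard representer-theorem formula $\widehat{f}_\lambda(x)=K(x,X)\,(K(X,X)+n\lambda I)^{-1}Y$ (not spelled out in the text); a minimal sketch follows, where the exponential inner-product kernel and the Gaussian data are arbitrary illustrative choices:

```python
import numpy as np

def krr_fit(K, X, Y, lam):
    """Kernel Ridge Regression for the objective above:
    f_hat(x) = K(x, X) (K(X, X) + n * lam * I)^{-1} Y.
    Setting lam = 0 gives the minimum-norm interpolant
    (when K(X, X) is invertible)."""
    n = X.shape[0]
    alpha = np.linalg.solve(K(X, X) + n * lam * np.eye(n), Y)
    return lambda x: K(np.atleast_2d(x), X) @ alpha

# Illustrative inner-product kernel h(<x, x'>/d) with h = exp
# (an arbitrary smooth choice for the example).
kernel = lambda A, B: np.exp(A @ B.T / A.shape[1])

rng = np.random.default_rng(0)
X = rng.standard_normal((30, 40))
Y = rng.standard_normal(30)
f0 = krr_fit(kernel, X, Y, lam=0.0)   # interpolates: f0(x_i) = y_i
f1 = krr_fit(kernel, X, Y, lam=1.0)   # shrinks the fitted values
```

The knob $\lambda$ trades training fit against the RKHS norm of the solution; at $\lambda=0$ the training data are fitted exactly.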
As a concrete motivating example, consider the prediction performance of Kernel Ridge Regression for various values\footnote{We take $\lambda \in \{0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28\}$.} of the regularization parameter $\lambda$ on subsets of the MNIST dataset. For virtually all pairs of digits, the best out-of-sample mean squared error is achieved at $\lambda=0$. Contrary to the standard bias-variance-tradeoffs picture we have in mind, the test error is monotonically decreasing as we decrease $\lambda$ (see Figure~\ref{fig:mnist-intro} and further details in Section~\ref{sec:experiments}).
\begin{figure}[h]
\centering
\includegraphics[width=0.6\textwidth]{mnist-intro.pdf}
\caption{Test performance of Kernel Ridge Regression on pairs of MNIST digits for various values of regularization parameter $\lambda$, normalized by variance of $y$ in the test set (for visualization purposes).}
\label{fig:mnist-intro}
\end{figure}
We isolate what appears to be a new phenomenon of \emph{implicit regularization} for interpolated minimum-norm solutions in Kernel ``Ridgeless'' Regression. This regularization is due to the curvature of the kernel function and ``kicks in'' only for high-dimensional data and for ``favorable'' data geometry. We provide out-of-sample statistical guarantees in terms of spectral decay of the empirical kernel matrix and the empirical covariance matrix, under additional technical assumptions.
Our analysis rests on the recent work in random matrix theory. In particular, we use a suitable adaptation of the argument of \citep{el2010spectrum} who showed that high-dimensional random kernel matrices can be approximated in spectral norm by linear kernel matrices plus a scaled identity. While the message of \citep{el2010spectrum} is often taken as ``kernels do not help in high dimensions,'' we show that such a random matrix analysis helps in explaining the good performance of interpolation in Kernel ``Ridgeless'' Regression.
\subsection{Literature Review}
Grace Wahba \citep{wahba1990spline} pioneered the study of nonparametric regression in reproducing kernel Hilbert spaces (RKHS) from the computational and statistical perspectives. One of the key aspects in that work is the role of the decay of eigenvalues of the kernel (at the population level) in rates of convergence. The analysis relies on explicit regularization (ridge parameter $\lambda$) for the bias-variance trade-off. The parameter is either chosen to reflect the knowledge of the spectral decay at the population level \citep{de2005model} (typically unknown to statistician), or by the means of cross-validation \citep{Golub_1979}. Interestingly, the explicit formula of Kernel Ridge Regression has been introduced as ``kriging'' in the literature before, and was widely used in Bayesian statistics \citep{cressie1990origins, wahba1990spline}.
In the learning theory community, Kernel Ridge Regression is known as a special case of Support Vector Regression \citep{vapnik1998statistical, shawe2004kernel, vovk2013kernel}. Notions like metric entropy \citep{cucker2002best} or ``effective dimension'' \citep{caponnetto2007optimal} were employed to analyze the guarantees on the excess loss of Kernel Ridge Regression, even when the model is misspecified. We refer the reader to \cite{gyorfi2006distribution} for more details. Again, the analysis leans crucially on explicit regularization, as given by a careful choice of $\lambda$, for the trade-off between model complexity and approximation, mostly in the fixed-dimension, large-sample-size setting. However, to the best of our knowledge, the literature remains relatively quiet about what happens to minimum-norm interpolation rules, i.e., $\lambda = 0$. As pointed out by \citep{belkin2018understand, belkin2018overfitting}, the existing bounds in nonparametric statistics and learning theory do not apply to the interpolated solution in either the regression or the classification setting. In this paper, we aim to answer when and why interpolation in RKHS works, as a starting point for explaining the good empirical performance of interpolation using kernels in practice \citep{zhang2016understanding, belkin2018understand}.
\section{Preliminaries}
\subsection{Problem Formulation}
Suppose we observe $n$ i.i.d. pairs $(x_i,y_i)$, $1\leq i\leq n$, where $x_i$ are the covariates with values in a compact domain $\Omega \subset \mathbb{R}^d$ and $y_i\in \mathbb{R}$ are the responses (or, labels). Suppose the $n$ pairs are drawn from an unknown probability distribution $\mu(x,y)$. We are interested in estimating the conditional expectation function
$f_*(x) = \mathbf{E}(\bd{y}|\bd{x} = x)$, which is assumed to lie in a Reproducing Kernel Hilbert Space (RKHS) $\cH$. Suppose the RKHS is endowed with the norm $\| \cdot \|_{\cH}$ and corresponding positive definite kernel $K(\cdot, \cdot): \Omega \times \Omega \rightarrow \mathbb{R}$. The interpolation estimator studied in this paper is defined as
\begin{align}
\label{eq:interpolation}
\widehat{f} = \operatornamewithlimits{arg\,min}_{f \in \cH} \| f \|_{\cH}, ~~ \text{s.t.}~~ f(x_i) = y_i,~\forall i \enspace.
\end{align}
Let $X \in \mathbb{R}^{n \times d}$ be the matrix with rows $x_1,\ldots,x_n$ and let $Y$ be the vector of values $y_1,\ldots,y_n$. Slightly abusing the notation, we let $K(X, X) = [K(x_i, x_j)]_{ij} \in \mathbb{R}^{n \times n}$ be the kernel matrix. Extending this definition, for $x \in \Omega$ we denote by $K(x, X) \in \mathbb{R}^{1 \times n}$ the matrix of values $[K(x,x_1),\ldots,K(x,x_n)]$. When $K(X,X)$ is invertible, solution to \eqref{eq:interpolation} can be written in the closed form:
\begin{align}
\label{eq:interpolation_closedform}
\widehat{f}(x) &= K(x, X) K(X, X)^{-1} Y.
\end{align}
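A direct numerical transcription of this closed form may look as follows (a sketch only; the exponential inner-product kernel and the random data are illustrative choices):

```python
import numpy as np

def interpolant(K, X, Y):
    """Minimum-norm interpolant of Eq. (3): f_hat(x) = K(x, X) K(X, X)^{-1} Y,
    assuming the kernel matrix K(X, X) is full rank."""
    alpha = np.linalg.solve(K(X, X), Y)   # solve instead of an explicit inverse
    return lambda x: K(np.atleast_2d(x), X) @ alpha

# Example inner-product kernel h(<x, x'>/d) with h = exp (illustrative choice)
kernel = lambda A, B: np.exp(A @ B.T / A.shape[1])

rng = np.random.default_rng(1)
X, Y = rng.standard_normal((30, 40)), rng.standard_normal(30)
f_hat = interpolant(kernel, X, Y)
print(np.allclose(f_hat(X), Y))   # True: the training data are interpolated
```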
In this paper we study the case when $K(X,X)$ is full rank, taking \eqref{eq:interpolation_closedform} as the starting point. For this interpolating estimator, we provide high-probability (with respect to a draw of $X$) upper bounds on the integrated squared risk of the form
\begin{align}
\label{eq:target_estimation}
\mathbf{E}(\widehat{f}(\bd{x})-f_*(\bd{x}))^2 \leq \phi_{n,d}(X,f^*).
\end{align}
Here the expectation is over $\bd{x}\sim \mu$ and $Y|X$, and $\phi_{n,d}$ is a data-dependent upper bound. We remark that upper bounds of the form \eqref{eq:target_estimation} also imply prediction loss bounds for excess square loss with respect to the class $\cH$, as $\mathbf{E} (\widehat{f}(\bd{x}) - f_*(\bd{x}))^2 = \mathbf{E} (\widehat{f}(\bd{x}) - \bd{y})^2 - \mathbf{E} (f_*(\bd{x}) - \bd{y})^2$.
\subsection{Notation and Background on RKHS}
For an operator $A$, its adjoint is denoted by $A^*$. For real matrices, the adjoint is the transpose. For any $x \in \Omega$, let $K_x: \mathbb{R} \rightarrow \cH$ be such that
\begin{align}
\label{eq:kx}
f(x) = \langle K_x, f \rangle_{\cH} = K_x^* f.
\end{align}
It follows that for any $x, z \in \Omega$
\begin{align}
K(x, z) = \langle K_x, K_{z} \rangle_{\cH} = K_x^* K_{z}.
\end{align}
Let us introduce the integral operator $\mathcal{T}_\mu: L^2_\mu \rightarrow L^2_\mu$ with respect to the marginal measure $\mu(x)$:
\begin{align}
\mathcal{T}_\mu f(z) = \int K(z, x) f(x) d\mu(x),
\end{align}
and denote the set of eigenfunctions of this integral operator by $e(x) = \{e_1(x), e_2(x), \ldots, e_p(x)\}$, where $p$ could be $\infty$. We have that
\begin{align}
\mathcal{T}_\mu e_i = t_i e_i,~~\text{and}~~ \int e_i(x) e_j(x) d\mu(x) = \delta_{ij} \enspace.
\end{align}
Denote $T = {\rm diag}(t_1,\ldots, t_p)$ as the collection of non-negative eigenvalues. Adopting the spectral notation,
\begin{align*}
K(x, z) = e(x)^* T e(z).
\end{align*}
Via this spectral characterization, the interpolation estimator \eqref{eq:interpolation} takes the following form
\begin{align}
\widehat{f}(x) = e(x)^* T e(X) \left[ e(X)^* T e(X) \right]^{-1} Y \enspace.
\end{align}
Extending the definition of $K_x$, it is natural to define the operator $K_X: \mathbb{R}^n \rightarrow \cH$. Denote the sample version of the kernel operator to be
\begin{align}
\widehat{\mathcal{T}} := \frac{1}{n} K_{X} K_{X}^*
\end{align}
and the associated eigenvalues to be $\lambda_j(\widehat{\mathcal{T}})$, indexed by $j$. The eigenvalues are the same as those of $\frac{1}{n} K(X, X)$. It is sometimes convenient to express $\widehat{\mathcal{T}}$ as the linear operator under the basis of eigenfunctions, in the following matrix sense
$$\widehat{T} = T^{1/2} \left( \frac{1}{n} e(X) e(X)^* \right) T^{1/2}.$$
We write $\mathbf{E}_{\mu}[\cdot]$ to denote the expectation with respect to the marginal $\bd{x}\sim\mu$. Furthermore, we denote by
$$\norm{g}^2_{L^2_\mu} = \int g^2 d\mu(x) = \mathbf{E}_{\mu} g^2(\bd{x})$$
the squared $L^2$ norm with respect to the marginal distribution. The expectation $\mathbf{E}_{Y|X}[\cdot]$ denotes the expectation over $y_1,\ldots,y_n$ conditionally on $x_1,\ldots,x_n$.
\section{Main Result}
\label{sec:main}
We impose the following assumptions:
\begin{enumerate}[label={\bf (A.\arabic*)}]
\item High dimensionality: there exist universal constants $c, C \in (0, \infty)$ such that $c \leq d/n \leq C$. Denote by $\Sigma_d = \mathbf{E}_\mu[x_i x_i^*]$ the covariance matrix, and assume that the operator norm $\| \Sigma_d \|_{\rm op} \leq 1$.
\item $(8+m)$-moments: $z_i := \Sigma_d^{-1/2} x_i \in \mathbb{R}^d$, $i=1,\ldots,n$, are i.i.d. random vectors. Furthermore, the entries $z_i(k), 1\leq k\leq d$ are i.i.d. from a distribution with $\mathbf{E} z_i(k) = 0, {\rm Var}(z_i(k)) = 1$ and $|z_i(k)| \leq C \cdot d^{\frac{2}{8+m}}$, for some $m>0$.
\item Noise condition: there exists a $\sigma>0$ such that $\mathbf{E}[ (f_*(\bd{x})-\bd{y})^2|\bd{x} = x]\leq \sigma^2$ for all $x \in \Omega$.
\item Non-linear kernel: for any $x \in \Omega$, $K(x, x) \leq M$. Furthermore, we consider the inner-product kernels of the form
\begin{align}
\label{eq:inner_prod_kernel}
K(x, x') = h\left(\frac{1}{d} \langle x, x'\rangle \right)
\end{align}
for a non-linear smooth function $h(\cdot): \mathbb{R} \rightarrow \mathbb{R}$ in a neighborhood of $0$.
\end{enumerate}
While we state the main theorem for inner product kernels, the results follow under suitable modifications\footnote{We refer the readers to \cite{el2010spectrum} for explicit extensions to RBF kernels.} for Radial Basis Function (RBF) kernels of the form
\begin{align}
K(x, x')= h\left(\frac{1}{d} \| x -x' \|^2 \right).
\end{align}
We postpone the discussion of the assumptions until after the statement of the main theorem.
Let us first define the following quantities related to curvature of $h$:
\begin{align}
\label{eq:alpha-beta-gamma}
\alpha &:= h(0) + h''(0) \frac{\mathrm{Tr}(\Sigma_d^2)}{d^2}, \quad \beta := h'(0), \nonumber\\
\gamma &:= h\left(\frac{\mathrm{Tr}(\Sigma_d)}{d} \right) - h(0) - h'(0) \frac{\mathrm{Tr}(\Sigma_d)}{d} .
\end{align}
\begin{theorem}
\label{thm:interpolation}
Define
\begin{align}
\label{eq:data-dependent-error-formula}
\phi_{n,d}(X, f_*) = \bd{V} + \bd{B} &:= \frac{8\sigma^2 \|\Sigma_d\|_{\rm op}}{d}\sum_j \frac{\lambda_j\left(\frac{X X^* }{d} + \frac{\alpha}{\beta} 1 1^* \right)}{ \left[ \frac{\gamma}{\beta} + \lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 1 1^* \right) \right]^2} \nonumber \\
& \quad \quad \quad + \| f_* \|_{\cH}^2 \inf_{0\leq k \leq n} \left\{ \frac{1}{n} \sum_{j > k} \lambda_j(K_X K_{X}^*) + 2M \sqrt{\frac{k}{n}} \right\} \enspace.
\end{align}
Under the assumptions (A.1)-(A.4) and for $d$ large enough, with probability at least $1-2\delta-d^{-2}$ (with respect to a draw of design matrix $X$), the interpolation estimator \eqref{eq:interpolation_closedform} satisfies
\begin{align}
\mathbf{E}_{Y|X} \| \widehat{f} - f_*\|^2_{L^2_\mu} \leq \phi_{n,d}(X, f_*) + \epsilon(n, d).
\end{align}
Here the remainder term $\epsilon(n, d) = O(d^{-\frac{m}{8+m}} \log^{4.1} d) + O(n^{-\frac{1}{2}} \log^{0.5} (n/\delta))$.
\end{theorem}
A few remarks are in order. First, the upper bound is \emph{data-dependent} and can serve as a certificate (assuming that an upper bound on $\sigma^2, \| f_* \|_{\cH}^2$ can be guessed) that interpolation will succeed. The bound also suggests the regimes when the interpolation method should work. The two terms in the estimate of Theorem~\ref{thm:interpolation} represent upper bounds on the variance and bias of the interpolation estimator, respectively. Unlike the explicit regularization analysis (e.g. \citep{caponnetto2007optimal}), the two terms are not controlled by a tunable parameter $\lambda$. Rather, the choice of the non-linear kernel $K$ itself leads to an \textit{implicit control} of the two terms through curvature of the kernel function, favorable properties of the data, and high dimensionality. We remark that for the linear kernel ($h(a)=a$), we have $\gamma=0$, and the bound on the variance term can become very large in the presence of small eigenvalues. In contrast, curvature of $h$ introduces regularization through a non-zero value of $\gamma$. We also remark that the bound ``kicks in'' in the high-dimensional regime: the error term decays with both $d$ and $n$.
We refer to the favorable structure of eigenvalues of the data covariance matrix as \textit{favorable geometric properties} of the data. The first term (variance) is small when the data matrix enjoys certain decay of the eigenvalues, thanks to the implicit regularization $\gamma$. The second term (bias) is small when the eigenvalues of the kernel matrix decay fast or the kernel matrix is effectively low rank. Note that the quantities $\alpha, \beta$ are constants, and $\gamma$ scales with $(\mathrm{Tr}(\Sigma_d)/d)^2$. We will provide a detailed discussion on the trade-off between the bias and variance terms for concrete examples in Section~\ref{sec:data-dependent-bound}.
We left the upper bound of Theorem~\ref{thm:interpolation} in a data-dependent form for two reasons. First, an explicit dependence on the data tells us whether interpolation can be statistically sound on the given dataset. Second, for general spectral decay, current random matrix theory falls short of characterizing the spectral density non-asymptotically except for special cases \citep{bose2003limiting, el2010spectrum}.
\paragraph{Discussion of the assumptions}
\begin{itemize}
\item The assumption in (A.1) that $c\leq d/n \leq C$ emphasizes that we work in a high-dimensional regime where $d$ scales on the order of $n$. This assumption is used in the proof of \citep{el2010spectrum}, and the particular dependence on $c,C$ can be traced in that work if desired. Rather than doing so, we ``folded'' these constants into a mild additional power of $\log d$. The same goes for the assumption on the scaling of the trace of the population covariance matrix.
\item The assumption in (A.2) that $Z_i(k)$ are i.i.d. across $k=1,\ldots,d$ is a strong assumption that is required to ensure the favorable high-dimensional effect. Relaxing this assumption is left for future work.
\item The existence of $(8+m)$-moments for $|z_i(k)|$ is enough to ensure $|z_i(k)| \leq C \cdot d^{\frac{2}{8+m}}$ for $1\leq i\leq n, 1\leq k\leq d$ almost surely (see Lemma 2.2 in \cite{yin1988limit}). Note that the assumption of existence of $(8+m)$-moments in (A.2) is relatively weak. In particular, for bounded or subgaussian variables, $m=\infty$ and the error term $\epsilon(n,d)$ scales as $d^{-1} + n^{-1/2}$, up to log factors. See Lemma~\ref{lem:1byn-gaussian} for an explicit calculation in the Gaussian case.
\item Finally, as already mentioned, the main result is stated for the inner product kernel, but can be extended to the RBF kernel using an adaptation of the analysis in \citep{el2010spectrum}.
\end{itemize}
\section{Behavior of the Data-dependent Bound}
\label{sec:data-dependent-bound}
In this section, we estimate, both numerically and theoretically, the non-asymptotic data-dependent upper bound in Theorem~\ref{thm:interpolation} in several regimes. To illustrate the various trade-offs, we divide the discussion into two main regimes: $n>d$ and $n<d$. Without loss of generality, we take as an illustration the non-linearity $h(t) = \exp(2t)$ and $K(x, x') = \exp(2\langle x, x'\rangle/d)$, with the implicit regularization ${\bf r}:= \gamma/\beta \asymp \left(\mathrm{Tr}(\Sigma_d)/d \right)^2$. In our discussion, we take both $n$ and $d$ large enough so that the residuals in Theorem~\ref{thm:interpolation} are negligible. The main theoretical results in this section, Corollaries~\ref{coro:general-n-d} and \ref{coro:general-d-n}, are direct consequences of the data-dependent bound in Theorem~\ref{thm:interpolation}.
\paragraph{Case $n>d$}
We can further bound the variance and the bias, with the choice $k=0$, as
\begin{align}
\label{eq:V_n_d}
{\bf V} &\precsim \frac{1}{d} \sum_j \frac{\lambda_j\left(\frac{X X^* }{d} \right)}{ \left[ {\bf r} + \lambda_j\left(\frac{X X^*}{d} \right) \right]^2} = \frac{1}{n} \sum_{j=1}^d \frac{\lambda_j\left(\frac{X X^* }{n} \right)}{ \left[ \frac{d}{n} {\bf r} + \lambda_j\left(\frac{X X^*}{n} \right) \right]^2}, \\
\label{eq:B_n_d}
{\bf B} &\precsim \frac{1}{n} \sum_{j=1}^n \lambda_j(K_X K_{X}^*) \asymp {\bf r} + \frac{1}{d} \sum_{j=1}^d \lambda_j\left(\frac{X X^* }{n}\right).
\end{align}
We first illustrate numerically the bias-variance trade-off by varying the geometric properties of the data in terms of the population spectral decay of $\bd{x}$. We shall parametrize the eigenvalues of the covariance, for $0<\kappa<\infty$, as
\begin{align*}
\lambda_j(\Sigma_d) = \left(1 - ((j-1)/d)^{\kappa}\right)^{1/\kappa}, \quad 1\leq j \leq d.
\end{align*}
The parameter $\kappa$ controls the approximate ``low-rankness'' of the data: the closer $\kappa$ is to $0$, the faster the spectrum of the data decays. This is illustrated in the top row of Figure~\ref{fig:n_m_d} on page \pageref{fig:n_m_d}. By letting $\kappa \rightarrow 0$, ${\bf r}$ can be made arbitrarily small, as
\begin{align*}
\frac{\mathrm{Tr}(\Sigma_d)}{d} \asymp \int_0^1 (1 - t^{\kappa})^{1/\kappa} dt = \frac{\Gamma(1+1/\kappa)^2}{\Gamma(1+2/\kappa)} \in [0,1] .
\end{align*}
We will focus on three cases, $\kappa \in \{e^{-1}, e^{0}, e^{1}\}$, for the decay parameter, and values $d = 100$, $n \in \{500, 2000\}$. The data-dependent upper bounds on $\bd{V}$ and $\bd{B}$ are summarized in Table~\ref{tab:simuls_n_d}. More detailed plots are postponed to Figure~\ref{fig:n_m_d} (in this figure, we plot the ordered eigenvalues and the spectral density for both the population and empirical covariances). Table~\ref{tab:simuls_n_d} shows that as $\kappa$ increases (a slower spectral decay), the implicit regularization parameter becomes larger, resulting in a decreasing variance and an increasing bias.
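The implicit-regularization column of such a table can be recomputed directly from the spectral parametrization above. The following sketch (our own illustrative code, with ${\bf r} = (\mathrm{Tr}(\Sigma_d)/d)^2$ as in this section) confirms that ${\bf r}$ grows with $\kappa$:

```python
import math

def implicit_reg(kappa, d):
    """r = (Tr(Sigma_d)/d)^2 for lambda_j = (1 - ((j-1)/d)^kappa)^(1/kappa)."""
    trace_over_d = sum((1.0 - ((j - 1) / d) ** kappa) ** (1.0 / kappa)
                       for j in range(1, d + 1)) / d
    return trace_over_d ** 2

d = 200
rs = [implicit_reg(k, d) for k in (math.exp(-1), 1.0, math.exp(1))]
# rs is increasing: slower spectral decay (larger kappa) means
# more implicit regularization, hence smaller variance but larger bias.
```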
We also perform simulations to demonstrate the trade-off between bias and variance in the generalization error. The result is shown in Figure~\ref{fig:gen_error_n_d}.
For each choice of $(n,d)$ pair, we vary the spectral decay of the kernel by changing gradually $\kappa \in [e^{-2}, e^{2}]$, and plot the generalization error on the log scale.
We postpone the experiment details to Section~\ref{sec:synthetic}, but point out important phenomena observed in Figures~\ref{fig:gen_error_n_d}-\ref{fig:gen_error_d_n}: (1) an extremely fast spectral decay (small $\kappa$) generates insufficient implicit regularization, which hurts the generalization performance through a large variance term; (2) a very slow spectral decay (large $\kappa$) results in a large bias, which can also hurt the generalization performance; (3) certain favorable spectral decays achieve the best trade-off, resulting in the best generalization error.
\begin{table}[ht]
\centering
\caption{}\label{tab:simuls_n_d}
\small
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lcrrrrr}
\multicolumn{7}{c}{\normalsize Case $n > d$: variance bound $\bf V$ \eqref{eq:V_n_d}, bias bound $\bf B$ \eqref{eq:B_n_d}} \\
\hline
\hline
& & \multicolumn{2}{c}{$n/d=5$} & & \multicolumn{2}{c}{$n/d=20$} \\
\cline{3-4} \cline{6-7}
Spectral Decay & Implicit Reg & $\bf V$ & $\bf B$ && $\bf V$ & $\bf B$ \\
\hline
$\kappa=e^{-1}$ & 0.005418 & 14.2864 & 0.07898 && 9.4980 & 0.07891 \\
$\kappa=e^{0}$ & 0.2525 & 0.4496 & 0.7535 && 0.1748 & 0.7538 \\
$\kappa=e^{1}$ & 0.7501 & 0.1868 & 1.6167 && 0.05835 & 1.6165 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{error_N_m_D}
\caption{Generalization error as a function of varying spectral decay. Here $d = 200$, $n = 400, 1000, 2000, 4000$. }
\label{fig:gen_error_n_d}
\end{figure}
We now theoretically demonstrate scalings within the $n > d$ regime when both $\bf V$ and $\bf B$ vanish. For simplicity, we consider Gaussian $\bd{x}$.
\begin{corollary}[General spectral decay: $n>d$]
\label{coro:general-n-d}
Consider general eigenvalue decay with $\| \Sigma_d \|_{\rm op}\leq 1$. Then with high probability,
\begin{align*}
{\bf V} \precsim \frac{\mathrm{Tr}(\Sigma_d^{-1})}{n}, \quad {\bf B} \precsim {\bf r} + \frac{\mathrm{Tr}(\Sigma_d)}{d}.
\end{align*}
\end{corollary}
To illustrate the behavior of the estimates in Corollary~\ref{coro:general-n-d}, consider the following assumptions on the population covariance matrix:
\begin{exmp}[Low rank]
\label{exmp:low-rank}
Let $\Sigma_d = {\rm diag}(1, \ldots, 1, 0, \ldots, 0)$ with $\epsilon d$ ones, $\epsilon\in(0,1)$. In this case
${\bf r} = \epsilon^2$, and $\lambda_j(XX^*/n) \geq (1 - \sqrt{\epsilon d/n})^2$ with high probability by standard results in random matrix theory. Then
\begin{align*}
{\bf V} \precsim \frac{\epsilon d}{n} \frac{ (1 - \sqrt{\epsilon d/n})^2}{ \left( \epsilon^2 d/n + (1 - \sqrt{\epsilon d/n})^2 \right)^2} \asymp \frac{d}{n}\epsilon , \quad {\bf B} \precsim \epsilon^2 + \epsilon.
\end{align*}
Therefore, as $\epsilon \rightarrow 0$, both terms vanish for $n > d$.
\end{exmp}
\begin{exmp}[Approx. low rank]
\label{exmp:approx-low-rank}
Let $\Sigma_d = {\rm diag}(1, \epsilon, \ldots, \epsilon)$ for small $\epsilon>0$. In this case,
${\bf r} = \epsilon^2$ and $\lambda_j(XX^*/n) \geq \epsilon (1 - \sqrt{d/n})^2$ with high probability. Then
\begin{align*}
{\bf V} \precsim \frac{d}{n} \frac{\epsilon (1 - \sqrt{d/n})^2}{ \left( \epsilon^2 d/n + \epsilon (1 - \sqrt{d/n})^2 \right)^2} \asymp \frac{d}{n} \frac{1}{\epsilon}, \quad {\bf B} \precsim \epsilon^2 + \epsilon.
\end{align*}
For instance, for $\epsilon \asymp (d/n)^{1/2}$, both terms vanish for $n \gg d$.
\end{exmp}
\begin{exmp}[Nonparametric slow decay]
\label{exmp:nonparam}
Consider $\lambda_j(\Sigma_d) = j^{-\alpha}$ for $0<\alpha<1$. Then ${\bf r} \asymp d^{-2\alpha}$. One can bound w.h.p. (see \eqref{pf:example-nonparam})
\begin{align*}
{\bf V}
&\asymp \frac{1}{n} \int_0^d t^{\alpha} dt \asymp \frac{d^{\alpha+1}}{n}, \quad {\bf B} \precsim d^{-2\alpha} + d^{-\alpha}.
\end{align*}
Balancing the two terms, one obtains a nonparametric upper bound of order $n^{-\frac{\alpha}{2\alpha+1}}$.
A similar analysis can be carried out for $\alpha\geq 1$.
\end{exmp}
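To make the rate in Example~\ref{exmp:nonparam} explicit, balancing the two bounds amounts to
\begin{align*}
\frac{d^{\alpha+1}}{n} \asymp d^{-\alpha} \iff d \asymp n^{\frac{1}{2\alpha+1}}, \quad\text{whence}\quad {\bf V} + {\bf B} \precsim d^{-\alpha} \asymp n^{-\frac{\alpha}{2\alpha+1}}.
\end{align*}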
\paragraph{Case $d>n$}
In this case, we can further bound the variance and the bias, with the choice $k=0$, as
\begin{align}
\label{eq:V_d_n}
{\bf V} &\precsim \frac{1}{d} \sum_{j=1}^n \frac{\lambda_j\left(\frac{X X^* }{d} \right)}{ \left[ {\bf r} + \lambda_j\left(\frac{X X^*}{d} \right) \right]^2}, \\
\label{eq:B_d_n}
{\bf B} &\precsim \frac{1}{n} \sum_{j=1}^n \lambda_j(K_X K_{X}^*) \asymp {\bf r} + \frac{1}{n} \sum_{j=1}^n \lambda_j\left(\frac{X X^* }{d}\right).
\end{align}
We first numerically illustrate the trade-off between the variance and the bias upper bounds. We consider three cases $\kappa \in \{ e^{-1}, e^{0}, e^{1} \}$, and $d = 2000$, $n \in \{400, 100\}$. As before, we find a trade-off between $\bd{V}$ and $\bd{B}$ with varying $\kappa$; the results are summarized in Table~\ref{tab:simuls_d_n}. Additionally, Figure~\ref{fig:d_m_n} provides a plot of the ordered eigenvalues, as well as spectral density for both the population and empirical covariances. As one can see, for a general eigenvalue decay, the spectral density of the population and the empirical covariance can be quite distinct. We again plot the generalization error in Figure~\ref{fig:gen_error_d_n} as a function of $\kappa$.
\begin{table}[ht]
\centering
\caption{}\label{tab:simuls_d_n}
\small
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{lcrrrrr}
\multicolumn{7}{c}{\normalsize Case $d > n$: variance bound $\bf V$ \eqref{eq:V_d_n}, bias bound $\bf B$ \eqref{eq:B_d_n}} \\
\hline
\hline
& & \multicolumn{2}{c}{$d/n=5$} & & \multicolumn{2}{c}{$d/n=20$} \\
\cline{3-4} \cline{6-7}
Spectral Decay & Implicit Reg & $\bf V$ & $\bf B$ && $\bf V$ & $\bf B$ \\
\hline
$\kappa=e^{-1}$ & 0.005028 & 3.9801 & 0.07603 && 0.7073 & 0.07591 \\
$\kappa=e^{0}$ & 0.2503 & 0.1746 & 0.7513 && 0.04438 & 0.7502 \\
$\kappa=e^{1}$ & 0.7466 & 0.06329 & 1.6106 && 0.01646 & 1.6102 \\
\hline
\hline
\end{tabular}
\end{table}
\begin{figure}[ht]
\centering
\includegraphics[width=0.5\textwidth]{error_D_m_N}
\caption{Generalization error as a function of varying spectral decay. Here $n=200$, $d = 400, 1000, 2000, 4000$.}
\label{fig:gen_error_d_n}
\end{figure}
We now theoretically showcase an example in the $d \gg n$ regime where both $\bf V$ and $\bf B$ vanish. Again consider $\bd{x}$ being Gaussian for simplicity.
\begin{corollary}[General spectral decay: $d > n$]
\label{coro:general-d-n}
With high probability, it holds that
\begin{align*}
{\bf V} \precsim \frac{n}{d} \frac{1}{4 {\bf r}}, \quad
{\bf B} \precsim {\bf r} + \frac{\mathrm{Tr}(\Sigma_d)}{d}.
\end{align*}
The variance bound follows from the fact that $\frac{t}{({\bf r} + t)^2} \leq \frac{1}{4{\bf r}}$ for all $t \geq 0$.
\end{corollary}
\begin{exmp}[Favorable spectral decay for $d\gg n$]
\label{exmp:favor-decay}
Recall $\mathrm{Tr}(\Sigma_d)/d = {\bf r}^{1/2}$. With the choice ${\bf r} = (n/d)^{2/3}$, both terms vanish for $d \gg n$ as
\begin{align*}
{\bf V} &\precsim \frac{n}{d} \frac{1}{4 {\bf r}}, \quad {\bf B} \precsim {\bf r}^{1/2}.
\end{align*}
In this case, the spectrum satisfies $\mathrm{Tr}(\Sigma_d)/d = O((n/d)^{1/3})$.
\end{exmp}
\section{Proofs}
To prove Theorem~\ref{thm:interpolation}, we decompose the mean square error into the bias and variance terms (Lemma~\ref{lem:decomposition}), and provide data-dependent bound for each (Sections \ref{sec:variance} and \ref{sec:bias}).
\subsection{Bias-Variance Decomposition}
The following is a standard bias-variance decomposition for an estimator. We remark that it is an equality, and both terms have to be small to ensure the desired convergence.
\begin{lemma}
\label{lem:decomposition}
The following decomposition for the interpolation estimator \eqref{eq:interpolation_closedform} holds
\begin{align}
\mathbf{E}_{Y|X} \| \widehat{f} - f_*\|^2_{L^2_\mu} = \bd{V} + \bd{B} ,
\end{align}
where
\begin{align}
\bd{V}&:= \int \mathbf{E}_{Y|X} \left| K_x^* K_{X} (K_{X}^* K_{X})^{-1} (Y - \mathbf{E}[Y|X]) \right|^2 d\mu(x), \label{eq:variance}\\
\bd{B}&:= \int \left| K_x^* \left[ K_{X} (K_{X}^* K_{X})^{-1} K_{X}^* - I \right] f_* \right|^2 d\mu(x).
\label{eq:bias}
\end{align}
\end{lemma}
\begin{proof}[Proof of Lemma~\ref{lem:decomposition}]
Recall the closed form solution of the interpolation estimator:
\begin{align*}
\widehat{f}(x) &= K_x^* K_{X} (K_{X}^* K_{X})^{-1} Y = K(x, X) K(X, X)^{-1} Y.
\end{align*}
Define $E = Y - \mathbf{E}[Y|X] = Y - f_*(X)$. Since $\mathbf{E}_{Y|X} E = 0$, we have
\begin{align*}
\widehat{f}(x) - f_*(x) &= K_x^* K_{X} (K_{X}^* K_{X})^{-1} E + K_x^* \left[ K_{X} (K_{X}^* K_{X})^{-1} K_{X}^* - I \right] f_* \\
\mathbf{E}_{Y|X} (\widehat{f}(x) - f_*(x))^2 &= \mathbf{E}_{Y|X} \left(K_x^* K_{X} (K_{X}^* K_{X})^{-1} E \right)^2 + \left| K_x^* \left[ K_{X} (K_{X}^* K_{X})^{-1} K_{X}^* - I \right] f_* \right|^2.
\end{align*}
Using Fubini's Theorem,
\begin{align*}
&\mathbf{E}_{Y|X} \| \widehat{f} - f_*\|^2_{L^2_\mu} = \int \mathbf{E}_{Y|X} (\widehat{f}(x) - f_*(x))^2 d\mu(x) \\
& = \int \mathbf{E}_{Y|X} \left| K_x^* K_{X} (K_{X}^* K_{X})^{-1} E \right|^2 d\mu(x) + \int \left| K_x^* \left[ K_{X} (K_{X}^* K_{X})^{-1} K_{X}^* - I \right] f_* \right|^2 d\mu(x).
\end{align*}
\end{proof}
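Since the decomposition is an exact identity, it can be checked numerically. The sketch below (illustrative code, not from the paper) approximates the test measure $\mu$ by a fixed sample, computes ${\bf B}$ and ${\bf V}$ in closed form for the interpolation estimator with the inner-product kernel $h(t)=\exp(2t)$, and compares against a Monte-Carlo average over noise draws; the target $f_*$ and all sizes are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, m, sigma = 20, 50, 100, 0.5
X = rng.standard_normal((n, d))
Xt = rng.standard_normal((m, d))            # fixed sample approximating mu
f_star = lambda Z: np.sin(Z[:, 0])          # arbitrary smooth target

ker = lambda A, B: np.exp(2.0 * A @ B.T / d)    # inner-product kernel, h(t)=exp(2t)
W = ker(Xt, X) @ np.linalg.inv(ker(X, X))       # prediction weights K_x^* K_X (K_X^* K_X)^{-1}

bias = np.mean((W @ f_star(X) - f_star(Xt)) ** 2)    # B from the lemma
var = sigma**2 * np.mean(np.sum(W**2, axis=1))       # V, since Cov(E) = sigma^2 I

# Monte-Carlo risk over fresh noise draws; should match bias + var.
mc = np.mean([np.mean((W @ (f_star(X) + sigma * rng.standard_normal(n))
                       - f_star(Xt)) ** 2) for _ in range(3000)])
```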
\subsection{Variance}
\label{sec:variance}
In this section, we provide upper estimates on the variance part $\bd{V}$ in \eqref{eq:variance}.
\begin{theorem}[Variance]
\label{thm:variance}
Let $\delta\in (0,1)$. Under the assumptions (A.1)-(A.4), with probability at least $1 - \delta - d^{-2}$ with respect to a draw of $X$,
\begin{align}
\label{eq:var-formula}
\bd{V} \leq \frac{8\sigma^2 \| \Sigma_d \|}{d} \sum_j \frac{\lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 11^* \right)}{ \left[ \frac{\gamma}{\beta} + \lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 11^*\right) \right]^2} + \frac{8\sigma^2}{\gamma^2} d^{-(4\theta - 1)} \log^{4.1} d,
\end{align}
for $\theta = \frac{1}{2} - \frac{2}{8+m}$ and for $d$ large enough.
\end{theorem}
\begin{remark}
Let us discuss the first term in Eq.~\eqref{eq:var-formula} and its role in implicit regularization induced by the curvature of the kernel, eigenvalue decay, and high dimensionality. In practice, the data matrix $X$ is typically centered, so $1^*X = 0$. Therefore the first term is effectively
\begin{align*}
\sum_{j} f_r\left( \lambda_j\left(\frac{X X^*}{d} \right) \right), ~\text{where}~ f_r(t) := \frac{t}{(r+t)^2} \leq \frac{1}{4r}.
\end{align*}
This formula explains the effect of implicit regularization, and captures the ``effective rank'' of the training data $X$. We would like to emphasize that this measure of complexity is distinct from the classical notion of effective rank for regularized kernel regression \citep{caponnetto2007optimal}, where the ``effective rank'' takes the form $\sum_j g_r(t_j)$ with $g_r(t) = t/(r+t)$, and $t_j$ the $j$-th eigenvalue of the population integral operator $\mathcal{T}$.
\end{remark}
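For completeness, the elementary bound $f_r(t) \leq \frac{1}{4r}$ used above follows from
\begin{align*}
f_r'(t) = \frac{(r+t)^2 - 2t(r+t)}{(r+t)^4} = \frac{r-t}{(r+t)^3},
\end{align*}
so $f_r$ is maximized over $t \geq 0$ at $t = r$, where $f_r(r) = \frac{r}{(2r)^2} = \frac{1}{4r}$.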
\begin{proof}[Proof of Theorem~\ref{thm:variance}]
From the definition of $\bd{V}$ and $\mathbf{E}[Y|X] = f_*(X)$,
\begin{align*}
\bd{V} &= \int \mathbf{E}_{Y|X} \mathrm{Tr}\left( K_x^* K_{X} (K_{X}^* K_{X})^{-1} (Y - f_*(X)) (Y - f_*(X))^* (K_{X}^* K_{X})^{-1} K_{X}^* K_x \right) d\mu(x) \\
& \leq \int \| (K_{X}^* K_{X})^{-1} K_{X}^* K_x \|^2 \| \mathbf{E}_{Y|X} \left[(Y - f_*(X)) (Y - f_*(X))^* \right]\| d\mu(x).
\end{align*}
Due to the fact that $\mathbf{E}_{Y|X}\left[ (Y_i - f_*(X_i))(Y_j -f_*(X_j)) \right] = 0$ for $i \neq j$, and $\mathbf{E}_{Y|X}\left[ (Y_i - f_*(X_i))^2 \right] \leq \sigma^2$, we have that $\| \mathbf{E}_{Y|X} \left[(Y - f_*(X)) (Y - f_*(X))^* \right]\| \leq \sigma^2$ and thus
\begin{align*}
\bd{V} &\leq \sigma^2 \int \| (K_{X}^* K_{X})^{-1} K_{X}^* K_x \|^2 d\mu(x) = \sigma^2 \mathbf{E}_\mu \| K(X, X)^{-1} K(X, \bd{x}) \|^2.
\end{align*}
Let us introduce two quantities for the ease of derivation. For $\alpha, \beta, \gamma$ defined in \eqref{eq:alpha-beta-gamma}, let
\begin{align}
K^{\rm lin}(X, X) &:= \gamma I + \alpha 11^T + \beta \frac{X X^*}{d} \in \mathbb{R}^{n \times n}, \\
K^{\rm lin}(X, x) &:= \beta \frac{X x^*}{d} \in \mathbb{R}^{n \times 1},
\end{align}
and $K^{\rm lin}(x, X)$ being the transpose of $K^{\rm lin}(X, x)$.
By Proposition \ref{prop:Karoui-result}, with probability at least $1-\delta-d^{-2}$, for $\theta = \frac{1}{2} - \frac{2}{8+m}$ the following holds
\begin{align*}
\left\| K(X, X) - K^{\rm lin}(X, X) \right\| \leq d^{-\theta} (\delta^{-1/2} + \log^{0.51} d).
\end{align*}
As a direct consequence, one can see that
\begin{align}
\left\| K(X, X)^{-1} \right\| &\leq \frac{1}{\gamma - d^{-\theta} (\delta^{-1/2} + \log^{0.51} d)} \leq \frac{2}{\gamma}, \label{eq:nbyn1}\\
\left\| K(X, X)^{-1} K^{\rm lin}(X, X) \right\| &\leq 1 + \| K(X, X)^{-1} \| \cdot \| K(X, X) - K^{\rm lin}(X, X) \| \nonumber \\
& \leq \frac{\gamma}{\gamma - d^{-\theta} (\delta^{-1/2} + \log^{0.51} d)} \leq 2, \label{eq:nbyn2}
\end{align}
provided $d$ is large enough, in the sense that
$$
d^{-\theta} (\delta^{-1/2} + \log^{0.51} d) \leq \gamma/2.
$$
By Lemma~\ref{lemma:1byn} (for Gaussian case, Lemma~\ref{lem:1byn-gaussian}),
\begin{align}
\label{eq:1byn}
\mathbf{E}_\mu \left\| K(\bd{x}, X) - K^{\rm lin}(\bd{x}, X) \right\|^2 \leq d^{-(4 \theta - 1)} \log^{4.1} d.
\end{align}
Let us proceed with the bound
\begin{align*}
\bd{V} & \leq \sigma^2 \mathbf{E}_\mu \| K(X, X)^{-1} K(X, \bd{x}) \|^2 \\
& \leq 2\sigma^2 \mathbf{E}_\mu \| K(X, X)^{-1} K^{\rm lin}(X, \bd{x}) \|^2 + 2 \sigma^2 \left\| K(X, X)^{-1} \right\|^2 \cdot \mathbf{E}_\mu \|K(X, \bd{x}) - K^{\rm lin}(X, \bd{x})\|^2 \\
& \leq 2\sigma^2 \left\| K(X, X)^{-1} K^{\rm lin}(X, X) \right\|^2 \mathbf{E}_\mu \| K^{\rm lin}(X, X)^{-1} K^{\rm lin}(X, \bd{x}) \|^2 + \frac{8\sigma^2}{\gamma^2} d^{-(4\theta - 1)} \log^{4.1} d \\
& \leq 8\sigma^2 \mathbf{E}_\mu \| K^{\rm lin}(X, X)^{-1} K^{\rm lin}(X, \bd{x}) \|^2 + \frac{8\sigma^2}{\gamma^2} d^{-(4\theta - 1)} \log^{4.1} d
\end{align*}
where the third inequality relies on \eqref{eq:1byn} and \eqref{eq:nbyn1}, and the fourth inequality follows from \eqref{eq:nbyn2}.
One can further show that
\begin{align*}
&\mathbf{E}_\mu \| K^{\rm lin}(X, X)^{-1} K^{\rm lin}(X, \bd{x}) \|^2 \\
&= \mathbf{E}_{\mu} \mathrm{Tr}\left( \left[\gamma I + \alpha 1 1^* + \beta \frac{XX^*}{d} \right]^{-1} \beta \frac{X \bd{x}}{d} \beta \frac{\bd{x}^{*} X^{*}}{d} \left[\gamma I + \alpha 1 1^* + \beta \frac{XX^*}{d} \right]^{-1} \right) \\
&= \mathrm{Tr}\left( \left[\gamma I + \alpha 1 1^* + \beta \frac{XX^*}{d} \right]^{-1} \beta^2 \frac{X \Sigma_d X^{*}}{d^2} \left[\gamma I + \alpha 1 1^* + \beta \frac{XX^*}{d} \right]^{-1} \right)\\
& \leq \frac{1}{d} \| \Sigma_d \| \mathrm{Tr}\left( \left[\gamma I + \alpha 1 1^* + \beta \frac{X^* X}{d} \right]^{-1} \beta^2 \frac{X^*X}{d} \left[\gamma I + \alpha 1 1^* + \beta \frac{X^* X}{d} \right]^{-1} \right) \\
& \leq \frac{1}{d} \| \Sigma_d \| \mathrm{Tr}\left( \left[\gamma I + \alpha 1 1^* + \beta \frac{X^* X}{d} \right]^{-1} \left[\beta^2 \frac{X^*X}{d} + \alpha \beta 1 1^* \right] \left[\gamma I + \alpha 1 1^* + \beta \frac{X^* X}{d} \right]^{-1} \right) \\
& = \frac{1}{d} \| \Sigma_d \| \sum_j \frac{\lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 11^* \right)}{ \left[ \frac{\gamma}{\beta} + \lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 11^*\right) \right]^2}.
\end{align*}
We conclude that with probability at least $1-\delta-d^{-2}$,
\begin{align}
\bd{V} &\leq 8\sigma^2 \mathbf{E}_\mu \| K^{\rm lin}(X, X)^{-1} K^{\rm lin}(X, \bd{x}) \|^2 + \frac{8\sigma^2}{\gamma^2} d^{-(4\theta - 1)} \log^{4.1} d \\
& \leq \frac{8\sigma^2 \| \Sigma_d \|}{d} \sum_j \frac{\lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 11^* \right)}{ \left[ \frac{\gamma}{\beta} + \lambda_j\left(\frac{X X^*}{d} + \frac{\alpha}{\beta} 11^*\right) \right]^2} + \frac{8\sigma^2}{\gamma^2} d^{-(4\theta - 1)}\log^{4.1} d
\end{align}
for $d$ large enough.
\end{proof}
\subsection{Bias}
\label{sec:bias}
\begin{theorem}[Bias]
\label{thm:bias}
Let $\delta\in(0,1)$. The bias, under the only assumptions that $K(x, x)\leq M$ for $x \in \Omega$, and $X_i$'s are i.i.d. random vectors, is upper bounded as
\begin{align}
\bd{B} \leq \| f_* \|_{\cH}^2 \cdot \inf_{0\leq k \leq n} \left\{ \frac{1}{n} \sum_{j > k} \lambda_j(K(X, X)) + 2 \sqrt{\frac{k}{n}} \sqrt{ \frac{\sum_{i=1}^n K(x_i, x_i)^2}{n}} \right\} + 3M \sqrt{\frac{\log 2n/\delta}{2n}},
\end{align}
with probability at least $1 - \delta$.
\end{theorem}
\begin{proof}[Proof of Theorem~\ref{thm:bias}]
In this proof, when there is no confusion, we write $f(x) = \sum_{i=1}^p e_i(x) f_i$, where the $f_i$ denote the coefficients of $f$ in the basis $\{e_i(x)\}$. Adopting this notation, we can write $f(x) = e(x)^* f$, where $f = [f_1, f_2, \ldots, f_p]^T$ denotes a (possibly infinite-dimensional) coefficient vector.
For the bias, it is easier to work in the frequency domain using the spectral decomposition. Recalling the spectral characterization in the preliminary section,
\begin{align*}
\bd{B} &= \int \left| e^*(x) T^{1/2} \left[ T^{1/2} e(X) (e(X)^* T e(X))^{-1} e(X)^* T^{1/2} - I \right] T^{-1/2} f_* \right|^2 d\mu(x) \\
&\leq \int \left\| \left[ T^{1/2} e(X) (e(X)^* T e(X))^{-1} e(X)^* T^{1/2} - I \right] T^{1/2} e(x) \right\|^2 d\mu(x) \cdot \| T^{-1/2} f_* \|^2 \\
&= \| f_* \|_{\cH}^2 \int \left\| \left[ T^{1/2} e(X) (e(X)^* T e(X))^{-1} e(X)^* T^{1/2} - I \right] T^{1/2} e(x) \right\|^2 d\mu(x).
\end{align*}
Here we use the fact that $T^{-1/2} f_* = \sum_i t_i^{-1/2} f_{*,i} e_i$ and $\| T^{-1/2} f_* \|^2 = \sum_i f_{*,i}^2/t_i = \| f_* \|_{\cH}^2$.
Next, recall the empirical kernel operator with its spectral decomposition $\widehat{T} = \widehat{U} \widehat{\Lambda} \widehat{U}^*$, with
$\widehat{\Lambda}_{jj} = \frac{1}{n} \lambda_j\left( K(X, X) \right)$. Denote by $\widehat{U}_{k}$ the top $k$ columns of $\widehat{U}$, and by $P_{\widehat{U}_k}^\perp$ the projection onto the orthogonal complement of the span of $\widehat{U}_{k}$. By observing that $T^{1/2} e(X) (e(X)^* T e(X))^{-1} e(X)^* T^{1/2}$ is a projection matrix, it is clear that for all $k\leq n$,
\begin{align}
\bd{B} &\leq \| f_* \|_{\cH}^2 \int \left\| P^\perp_{\widehat{U}} \left(T^{1/2} e(x) \right) \right\|^2 d\mu(x) \leq \| f_* \|_{\cH}^2 \int \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(x) \right) \right\|^2 d\mu(x).
\end{align}
We continue the study of the last quantity using techniques inspired by \cite{shawe2004kernel}. Denote the function $g$ indexed by any rank-$k$ projection $U_k$ as
\begin{align}
g_{U_k}(x) := \left\| P_{U_k} \left(T^{1/2} e(x) \right) \right\|^2 = \mathrm{Tr}\left(e^*(x) T^{1/2} U_k U_k^T T^{1/2} e(x) \right).
\end{align}
Clearly, $\| U_k U_k^T \|_F = \sqrt{k}$.
Define the function class
\begin{align*}
\cG_k := \{ g_{U_k}(x): U_k^T U_k = I_k \}.
\end{align*}
It is clear that $g_{\widehat{U}_k} \in \cG_k$. Observe that $g_{\widehat{U}_k}$ is a random function that depends on the data $X$, and we will bound the bias term using the empirical process theory. It is straightforward to verify that
\begin{align*}
\mathbf{E}_{\bd{x} \sim \mu} \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(\bd{x}) \right) \right\|^2 &= \int \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(x) \right) \right\|^2 d\mu(x), \\
\widehat{\mathbf{E}}_n \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(\bd{x}) \right) \right\|^2 &= \frac{1}{n} \sum_{i=1}^n \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(x_i) \right) \right\|^2 \\
&= \mathrm{Tr}\left( P^\perp_{\widehat{U}_k} \widehat{T} P^\perp_{\widehat{U}_k} \right) = \sum_{j > k} \widehat{\Lambda}_{jj} = \frac{1}{n} \sum_{j > k} \lambda_j(K(X, X)).
\end{align*}
Using symmetrization Lemma~\ref{lem:symmetrization} with $M = \sup_{x\in \Omega} K(x, x)$, with probability at least $1-2\delta$,
\begin{align*}
& \int \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(x) \right) \right\|^2 d\mu(x) - \frac{1}{n} \sum_{j > k} \lambda_j(K(X, X)) \\
= &\mathbf{E}_\mu \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(\bd{x}) \right) \right\|^2 - \widehat{\mathbf{E}}_n \left\| P^\perp_{\widehat{U}_k} \left(T^{1/2} e(\bd{x}) \right) \right\|^2 \\
\leq & \sup_{U_k: U_k^T U_k = I_k} \left( \mathbf{E} - \widehat{\mathbf{E}}_n \right) \left\| P^\perp_{U_k} \left(T^{1/2} e(\bd{x}) \right) \right\|^2 \\
\leq & 2\mathbf{E}_\epsilon \sup_{U_k: U_k^T U_k = I_k} \frac{1}{n} \sum_{i=1}^n \epsilon_i \left( \left\| T^{1/2} e(x_i) \right\|^2 - \left\| P_{U_k} \left(T^{1/2} e(x_i) \right) \right\|^2 \right) + 3 M
\sqrt{\frac{\log 1/\delta}{2n}}
\end{align*}
by the Pythagorean theorem. Since $\epsilon_i$'s are symmetric and zero-mean and $\left\| T^{1/2} e(x_i) \right\|^2$ does not depend on $U_k$, the last expression is equal to
\begin{align*}
& 2\mathbf{E}_\epsilon \sup_{g \in \cG_k} \frac{1}{n} \sum_{i=1}^n \epsilon_i g(x_i) + 3 M
\sqrt{\frac{\log 1/\delta}{2n}}.
\end{align*}
We further bound the Rademacher complexity of the set $\cG_k$
\begin{align*}
&\mathbf{E}_\epsilon \sup_{g \in \cG_k} \frac{1}{n}\sum_{i=1}^n \epsilon_i g(x_i) = \mathbf{E}_\epsilon \sup_{U_k} \frac{1}{n}\sum_{i=1}^n \epsilon_i g_{U_k}(x_i) \\
& = \mathbf{E}_\epsilon \frac{1}{n} \sup_{U_k} \left\langle U_k U_k^T, \sum_{i=1}^n \epsilon_i T^{1/2} e(x_i) e^*(x_i) T^{1/2} \right\rangle \\
& \leq \frac{\sqrt{k}}{n} \mathbf{E}_\epsilon \left\| \sum_{i=1}^n \epsilon_i T^{1/2} e(x_i) e^*(x_i) T^{1/2} \right\|_F
\end{align*}
by the Cauchy-Schwarz inequality and the fact that $\| U_k U_k^T\|_F \leq \sqrt{k}$. The last expression can be further bounded, using Jensen's inequality and the independence of the $\epsilon_i$'s, as
\begin{align*}
\frac{\sqrt{k}}{n} \left\{ \mathbf{E}_\epsilon \left\| \sum_{i=1}^n \epsilon_i T^{1/2} e(x_i) e^*(x_i) T^{1/2} \right\|_F^2 \right\}^{1/2} & = \frac{\sqrt{k}}{n} \left\{ \sum_{i=1}^n \left\| T^{1/2} e(x_i) e^*(x_i) T^{1/2} \right\|_F^2 \right\}^{1/2} \\
& = \sqrt{\frac{k}{n}} \sqrt{ \frac{\sum_{i=1}^n K(x_i, x_i)^2}{n}}.
\end{align*}
Therefore, for all $k\leq n$, with probability at least $1 - 2n\delta$,
\begin{align*}
\bd{B} \leq \| f_* \|_{\cH}^2 \cdot \inf_{0\leq k \leq n} \left\{ \frac{1}{n} \sum_{j > k} \lambda_j(K(X, X)) + 2 \sqrt{\frac{k}{n}} \sqrt{ \frac{\sum_{i=1}^n K(x_i, x_i)^2}{n}} + 3M \sqrt{\frac{\log 1/\delta}{2n}} \right\}.
\end{align*}
\end{proof}
\begin{remark}
Let us compare the bounds obtained in this paper to those one can obtain for classification with a margin. For classification, Thm. 21 in
\cite{bartlett2002rademacher} shows that the misclassification error is upper bounded with probability at least $1 - \delta$ as
\begin{align*}
\mathbf{E} {\bf 1}(\bd{y} \widehat{f}(\bd{x})<0) \leq \mathbf{E} \phi_\gamma(\bd{y} \widehat{f}(\bd{x})) \leq \widehat{\mathbf{E}}_n \phi_\gamma(\bd{y} \widehat{f}(\bd{x})) + \frac{C_\delta}{\gamma \sqrt{n}} \sqrt{\frac{\sum_{i=1}^n K(x_i, x_i)}{n}}
\end{align*}
where $\phi_{\gamma}(t) := \max(0, 1-t/\gamma) \wedge 1$ is the margin loss surrogate for the indicator loss ${\bf 1}(t<0)$. By tuning the margin $\gamma$, one obtains a family of upper bounds.
Now consider the noiseless regression scenario (i.e. $\sigma=0$ in (A.1)). In this case, the variance contribution to the risk is zero, and
\begin{align*}
\mathbf{E}_{Y|X} \| \widehat{f} -\bd{y} \|_{L^2_\mu}^2 &= \mathbf{E}_{Y|X} \| \widehat{f} - f_*\|^2_{L^2_\mu} = \mathbf{E} [ P_n^{\perp} f_* ]^2 \leq \mathbf{E} [P_k^{\perp}f_*]^2 \\
& \leq \widehat{\mathbf{E}}_n [P_k^{\perp}f_*]^2 + C'_\delta \sqrt{\frac{k}{n}} \sqrt{ \frac{\sum_{i=1}^n K(x_i, x_i)^2}{n}}
\end{align*}
where $P_k$ is the best rank-$k$ projection (based on $X$) and $P_k^\perp$ denotes the projection onto its orthogonal complement. By tuning the parameter $k$ (similar to $1/\gamma$ in classification), one can balance the RHS to obtain the optimal trade-off.
However, classification is easier than regression in the following sense: $\widehat{f}$ can have a non-vanishing bias in estimating $f_*$, but as long as the bias stays below the empirical margin level, it has no effect on the margin loss $\phi_{\gamma}(\cdot)$. In fact, for classification, under certain conditions, one can prove exponential convergence for the generalization error \citep{koltchinskii2005exponential}.
\end{remark}
\section{Experiments}
\label{sec:experiments}
\subsection{MNIST}
In this section we provide full details of the experiments on MNIST \citep{lecun2010mnist}. Our first experiment considers the following problem: for each pair of distinct digits $(i, j)$, $i, j \in \{0,1,\ldots,9\}$, label one digit as $1$ and the other as $-1$, then fit kernel ridge regression with the Gaussian kernel $k(x, x') = \exp(-\|x - x' \|^2/d)$, where $d=784$ is the dimension analyzed in our theory (also the default choice in the Scikit-learn package \citep{scikit-learn}). For each of the $\binom{10}{2} = 45$ pairs of digits, we chose $\lambda = 0$ (no regularization, the interpolation estimator), $\lambda = 0.1$, and $\lambda = 1$. We evaluated the performance on the \emph{out-of-sample} test dataset, with the error metric
\begin{align}
\label{eq:normalized_mse}
\frac{\sum_{i} (\widehat{f}(x_i) - y_i)^2}{\sum_{i} (\bar{y} - y_i)^2}.
\end{align}
Remarkably, among all 45 experiments, no-regularization performs the best. We refer to the table in Section~\ref{sec:mnist} for a complete list of numerical results. For each experiment, the sample size is roughly $n\approx 10000$.
The second experiment performs the same task on a finer grid of regularization parameters $\lambda \in \{0, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32, 0.64, 1.28\}$. Again, in all but one pair, the interpolation estimator performs the best in out-of-sample prediction. We refer to Figure~\ref{fig:mnisit-finer-grid} for details.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\textwidth]{mnist-intro-full.pdf}
\caption{Test error, normalized as in \eqref{eq:normalized_mse}. The y-axis is on the log scale.}
\label{fig:mnisit-finer-grid}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.45\textwidth]{mnist-spectral.pdf}
\includegraphics[width=0.45\textwidth]{mnist-spectral-cov.pdf}
\caption{Spectral decay. The y-axis is on the log scale.}
\label{fig:mnisit-spectral}
\end{figure}
To conclude this experiment, we plot the eigenvalue decay of the empirical kernel matrix and the sample covariance matrix for the 5 experiments shown in the introduction. The two plots are shown in Figure~\ref{fig:mnisit-spectral}. Both plots exhibit a fast decay of eigenvalues, supporting the theoretical finding that interpolation performs well on a test set in such situations.
On the other hand, it is easy to construct examples where the eigenvalues do not decay and interpolation performs poorly. This is the case, for instance, if $X_i$ are i.i.d. from spherical Gaussian. One can show that in the high-dimensional regime, the variance term itself (and not just the upper bound on it) is large. Since the bias-variance decomposition is an equality, it is not possible to establish good $L^2_\mu$ convergence.
\subsection{A Synthetic Example}
\label{sec:synthetic}
In this section we provide the details of the synthetic experiments mentioned in Section~\ref{sec:data-dependent-bound} for Tables~\ref{tab:simuls_n_d}-\ref{tab:simuls_d_n} and Figures~\ref{fig:gen_error_n_d}-\ref{fig:gen_error_d_n}. We choose the RBF kernel as the non-linearity, with $h(t)=\exp(-t)$. Again, we consider a family of eigenvalue decays for the covariance matrix parametrized by $\kappa$, with small $\kappa$ corresponding to fast spectral decay
\begin{align*}
\lambda_j(\Sigma_{d, \kappa}) = \left(1 - ((j-1)/d)^{\kappa}\right)^{1/\kappa}, 1\leq j \leq d.
\end{align*}
We set a target non-linear function $f_*$ in the RKHS with kernel $K(x, x') = h\left(\| x - x' \|^2/d \right)$ as
\begin{align*}
f_*(x) = \sum_{l=1}^{100} K(x, \theta_l), ~~\theta_l \stackrel{i.i.d.}{\sim} N(0, I_d).
\end{align*}
For each parameter triplet $(n, d, \kappa)$, we generate data in the following way
\begin{align*}
x_i \sim N(0, \Sigma_{d, \kappa}), ~~ y_i = f_*(x_i) + \epsilon_i
\end{align*}
for $1\leq i \leq n$ where $\epsilon_i \sim N(0, \sigma^2)$ is independent noise, with $\sigma = 0.1$ (Figures~\ref{fig:gen_error_n_d}-\ref{fig:gen_error_d_n}) and $\sigma = 0.5$ (Figures~\ref{fig:gen_error_high_noise}).
Figures~\ref{fig:n_m_d}-\ref{fig:d_m_n} contrast the population and empirical eigenvalues for various parameter triplets $(n, d, \kappa)$.
We now explain Figures~\ref{fig:gen_error_n_d}-\ref{fig:gen_error_d_n}, which illustrate the true generalization error in this synthetic example, by varying the spectral decay $\kappa$, for a particular case of high dimensionality ratio $d/n$. Here we plot the \textit{out-of-sample} test error for the interpolated min-norm estimator $\widehat{f}$ on fresh new test data $(x_t, y_t)$ from the same data generating process, with the error metric
\begin{align*}
\text{error} = \frac{\sum_t (\widehat{f}(x_t) - f_*(x_t) )^2}{\sum_t ( y_t - \bar{y} )^2}.
\end{align*}
The error plots are shown in Figure~\ref{fig:gen_error_n_d} (for $n>d$) and \ref{fig:gen_error_d_n} (for $d>n$), and in Figure~\ref{fig:gen_error_high_noise} for the high noise case. On the x-axis we plot $\log(\kappa)$, and on the y-axis $\log(\text{error})$. Each curve shows the generalization error behavior (and the bias-variance trade-off) as we vary the spectral decay from fast to slow (as $\kappa$ increases) for a particular choice of the $d/n$ or $n/d$ ratio. Clearly, for a given high dimensionality ratio $d/n$, there is a ``sweet spot'' of $\kappa$ (a favorable geometric structure) at which the trade-off is optimized.
\begin{figure}[ht]
\centering
\includegraphics[width=0.32\textwidth]{N_m_D_eig_kappa_0_36}
\includegraphics[width=0.32\textwidth]{N_m_D_eig_kappa_1_00}
\includegraphics[width=0.32\textwidth]{N_m_D_eig_kappa_2_71}
\includegraphics[width=0.32\textwidth]{N_m_D_hist_kappa_0_36}
\includegraphics[width=0.32\textwidth]{N_m_D_hist_kappa_1_00}
\includegraphics[width=0.32\textwidth]{N_m_D_hist_kappa_2_71}
\caption{Varying spectral decay: case $n > d$. Columns from left to right: $\kappa = e^{-1}, e^{0}, e^{1}$. Rows from top to bottom: ordered eigenvalues, and the histogram of eigenvalues. Here we plot the population eigenvalues for $\Sigma_d$, and the empirical eigenvalues for $X^*X/n$. In this simulation, $d = 100$, $n = 500, 2000$. }
\label{fig:n_m_d}
\end{figure}
\begin{figure}[ht]
\centering
\includegraphics[width=0.32\textwidth]{D_m_N_eig_kappa_0_36}
\includegraphics[width=0.32\textwidth]{D_m_N_eig_kappa_1_00}
\includegraphics[width=0.32\textwidth]{D_m_N_eig_kappa_2_71}
\includegraphics[width=0.32\textwidth]{D_m_N_hist_kappa_0_36}
\includegraphics[width=0.32\textwidth]{D_m_N_hist_kappa_1_00}
\includegraphics[width=0.32\textwidth]{D_m_N_hist_kappa_2_71}
\caption{Varying spectral decay: case $d > n$. Columns from left to right: $\kappa = e^{-1}, e^{0}, e^{1}$. Rows from top to bottom: ordered eigenvalues, and the histogram of eigenvalues. Here we plot the population eigenvalues for $\Sigma_d$, and the empirical eigenvalues for $XX^*/d$. In this simulation, $d = 2000$, $n = 400, 100$. }
\label{fig:d_m_n}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[width=0.45\textwidth]{error_N_m_D_high_noise}
\includegraphics[width=0.45\textwidth]{error_D_m_N_high_noise}
\caption{Varying spectral decay: generalization error for high noise case. Left: $d = 200$, $n = 4000, 2000, 1000, 240$. Right: $n = 200$, $d = 4000, 2000, 1000, 240$.}
\label{fig:gen_error_high_noise}
\end{figure}
\section{Further Discussion}
This paper is motivated by the work of \cite{belkin2018understand} and \cite{zhang2016understanding}, who, among others, observed the good out-of-sample performance of interpolating rules. This paper continues the line of work in \citep{belkin2018overfitting,belkin2018does,belkin2018approximation} on understanding the theoretical mechanisms behind the good out-of-sample performance of interpolation. We leave further investigation of the connection between kernel ridgeless regression and two-layer neural networks as future work \citep{dou2019training}.
From an algorithmic point of view, the minimum-norm interpolating solution can be found either by inverting the kernel matrix, or by performing gradient descent on the least-squares objective (starting from $0$). Our analysis can then be viewed in the light of recent work on implicit regularization of optimization procedures \citep{yao2007early, neyshabur2014search,gunasekar2017implicit,li2017algorithmic}.
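The second route, gradient descent on the least-squares objective started from $0$, converges to the minimum-norm interpolating solution because the iterates never leave the row space of the data. A small NumPy sketch of this fact in the linear case (the sizes and seed are arbitrary; the same argument applies in the kernel case through the feature map):

```python
import numpy as np

rng = np.random.default_rng(2)

# Underdetermined least squares: n = 5 equations, p = 20 unknowns.
n, p = 5, 20
A = rng.standard_normal((n, p))
b = rng.standard_normal(n)

# Gradient descent on ||A w - b||^2 / 2, started at 0: every iterate is a
# linear combination of the rows of A, so the limit is the minimum-norm
# interpolating solution pinv(A) @ b.
w = np.zeros(p)
step = 0.9 / np.linalg.norm(A, 2) ** 2       # stable step size
for _ in range(20000):
    w -= step * A.T @ (A @ w - b)

w_min_norm = np.linalg.pinv(A) @ b
```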
The paper also highlights a novel type of implicit regularization. In addition, once we parametrize the geometric properties --- the spectral decay --- we recover the familiar picture of the bias-variance trade-off, controlled by the implicit regularization that adapts to the favorable geometric property of the data.
Moreover, if one explicitly parametrizes the choice of the kernel by, say, the bandwidth, we are likely to see the familiar picture of the bias-variance trade-off, despite the fact that the estimator is always interpolating.
Whether one can achieve optimal rates of estimation (under appropriate assumptions) for the right choice of the bandwidth appears to be an interesting and difficult statistical question. Another open question is whether one can characterize situations when the interpolating minimum-norm solution is dominating the regularized solution in terms of expected performance.
\section{Introduction and main results}
\setcounter{equation}{0}
In this paper, we are concerned with the following elliptic equations
\begin{equation}\label{1.1}
\begin{cases}
-\Delta u= Q(x)u^{2^*-1 }+\varepsilon
u^{s},~ &{\text{in}~\Omega},\\[1mm]
u>0,~ &{\text{in}~\Omega},\\[1mm]
u=0, &{\text{on}~\partial \Omega},
\end{cases}
\end{equation}
where $N\geq 3$, $s\in [1,2^*-1)$ with $2^*=\frac{2N}{N-2}$, $\varepsilon>0$ is a small parameter, $\Omega$ is a smooth and bounded domain in $\R^N$.
Let $0\leq Q(x)\in C(\bar\Omega)\bigcap C^2(\Omega)$ satisfy the following condition.
\vskip 0.2cm
\noindent \textbf{Condition (Q).} ~There exist $k$ different points $a_j = (a_j^1,\cdots, a_j^N ) \in \Omega$ with $j=1,\cdots,k$ such that
\begin{equation}\label{4-24-1}
Q(a_j)>0,~\nabla Q(a_j)=0, ~ \Delta Q(a_j)<0~\mbox{and}~\det\Big(\frac{\partial^2 Q(a_j)}{\partial x^i\partial x^l} \Big)_{1\leq i,l\leq N} \neq 0,~\mbox{for}~j=1,\cdots,k.
\end{equation}
\vskip 0.2cm
For the case that $Q(x)$ is a positive constant, a well-known result of Brezis and Nirenberg \cite{BN1983} establishes the existence of a positive solution of \eqref{1.1} when $s=1$ and
$\varepsilon \in (0,\lambda_1)$, where $\lambda_1$ denotes the first eigenvalue of $-\Delta$ with
zero Dirichlet boundary condition on $\partial\Omega$.
It is also well known from \cite{Poh1965} that \eqref{1.1} has no solution when $\Omega$ is star-shaped and $\varepsilon=0$. Since then, much attention has been paid to the limiting behavior of the solutions $u_\varepsilon$
of \eqref{1.1} as $\varepsilon\rightarrow 0$.
For more details on this aspect, one can refer to \cite{Gla1993,Han,MP2002,Rey1990}.
In this case, the positive solutions concentrate at critical points of the Robin function or the Kirchhoff-Routh function. We also point out that the study of the
critical points of the Robin function or the Kirchhoff-Routh function is interesting in its own right; one can refer to the very recent work \cite{Bartsch1} and the references
therein for further results.
When $Q(x)$ is not a positive constant, a very early result is \cite{Esc}, where
the existence of solutions to problem \eqref{1.1} was established for $s=1$ and
$\varepsilon \in (0,\lambda_1)$.
A natural question concerns the properties of concentrated solutions of problem \eqref{1.1} when $Q(x)$ is not a positive constant. In other words, if a concentrated solution of problem \eqref{1.1} exists, where are the concentration points located? And under what conditions on $Q(x)$ can one find a concentrated solution of problem \eqref{1.1}?
A partial answer to these questions was given by Cao and Zhong \cite{CZ1997}. Their results show that the critical points of $Q(x)$
play a more important role than the critical points of the Robin function. To state their results, we denote by $$
S=\inf \Big\{\displaystyle\int_{\Omega}|\nabla u|^2~~\big| ~~u\in H^1_0(\Omega),~\displaystyle\int_{\Omega}|u|^{2^*}=1\Big\}$$
the best Sobolev constant and by $\delta_x$ the Dirac mass at $x$.
Cao and Zhong's main results in \cite{CZ1997} read as follows.
\vskip 0.3cm
\noindent \textbf{Theorem A.}~\,~ Suppose that $N\geq 4$, $ s\in(1,\frac{N+2}{N-2})$ and $Q(x)$ satisfies Condition (Q). Then there exists an $\varepsilon_0>0$ such that for $\varepsilon\in (0,\varepsilon_0]$ and $i=1,\cdots,k$, problem \eqref{1.1} has a solution $u_\varepsilon$ satisfying (in the sense of measure)
\begin{equation*}
\big|\nabla u_\varepsilon\big|^2\rightharpoonup Q(a_i)^{-(N-2)/2}S^{N/2}\delta_{a_i}~~\mbox{as}~\varepsilon\rightarrow 0~~\mbox{and}~~
\big| u_\varepsilon\big|^{2^*}\rightharpoonup Q(a_i)^{-N/2}S^{N/2}\delta_{a_i}~~\mbox{as}~\varepsilon\rightarrow 0.
\end{equation*}
\vskip 0.2cm
Next, Cao and Zhong proposed the following two questions (Remark 1.7 and Remark 1.8 in \cite{CZ1997}):
\vskip 0.2cm
\noindent\emph{\textup{(i)}
\textbf{It is interesting to know the existence of single-peak solutions for small $\varepsilon$ and $s=1$.}}
\vskip 0.2cm
\noindent\emph{\textup{(ii)}
\textbf{The question of solutions which concentrate at several points at the same time is still open.}}
\vskip 0.2cm
\noindent For further results on the concentrated solutions of \eqref{1.1}, one can also refer to \cite{CN1995,CY1999}. In this paper, we give some affirmative answers to the above questions by finite dimensional reduction and local Pohozaev identities. We would also like to point out that
local Pohozaev identities have been widely used recently in the study of concentrated solutions of nonlinear elliptic equations (see \cite{Cao1,Deng,GMPY20,PWY2018}).
It is well-known that the equation $-\Delta u= u^{\frac{N+2}{N-2}} ~~\mbox{in}~\R^N$ has a family of solutions
\begin{equation*}
U_{x,\lambda}(y)=\big(N(N-2)\big)^{\frac{N-2}{4}}\frac{\lambda^{(N-2)/2}}{(1+\lambda^2|y-x|^2)^{(N-2)/2}},
\end{equation*}
where $x\in\R^N$ and $\lambda\in \R^+$.
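As a quick numerical sanity check (not part of the original argument), one can verify that $U_{x,\lambda}$ solves $-\Delta u = u^{\frac{N+2}{N-2}}$, here for $N = 4$, $x = 0$ and an arbitrary $\lambda$, using the radial form of the Laplacian and finite differences:

```python
import numpy as np

N, lam = 4, 2.0    # check the N = 4 bubble with an arbitrary lambda

def U(r):
    """U_{0,lambda}(r) = (N(N-2))^{(N-2)/4} lam^{(N-2)/2} / (1 + lam^2 r^2)^{(N-2)/2}."""
    return (N * (N - 2)) ** ((N - 2) / 4) * lam ** ((N - 2) / 2) \
        / (1 + lam ** 2 * r ** 2) ** ((N - 2) / 2)

def radial_laplacian(f, r, h=1e-4):
    """Delta u = u'' + (N-1)/r u' for a radial function, via central differences."""
    u2 = (f(r + h) - 2 * f(r) + f(r - h)) / h ** 2
    u1 = (f(r + h) - f(r - h)) / (2 * h)
    return u2 + (N - 1) / r * u1

r = np.linspace(0.3, 2.0, 20)
lhs = -radial_laplacian(U, r)
rhs = U(r) ** ((N + 2) / (N - 2))    # u^3 when N = 4
```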
Now for any given $f\in H^1(\Omega)$, let $P$ denote the projection from
$H^1(\Omega)$ onto $H^{1}_0(\Omega)$, i.e., $u=Pf$ is the solution of
\begin{equation*}
\begin{cases}
\Delta u=\Delta f, &{\text{in}~\Omega}, \\
u=0, &{\text{on}~\partial\Omega}.
\end{cases}
\end{equation*}
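In one dimension, with $\Omega=(0,1)$, this projection has a closed form that makes the definition concrete: $f - Pf$ is harmonic with the boundary values of $f$, and harmonic functions on an interval are affine, so $Pf = f - \big(f(0)(1-x)+f(1)x\big)$. A small numerical illustration (the grid and the choice of $f$ are arbitrary):

```python
import numpy as np

# u = Pf solves u'' = f'' in (0, 1) with u(0) = u(1) = 0; in 1D this is
# f minus the affine interpolant of its boundary values.
x = np.linspace(0.0, 1.0, 401)
f = np.cos(3 * x) + x ** 2                 # any smooth f

Pf = f - (f[0] * (1 - x) + f[-1] * x)      # closed form of the projection in 1D

# the second differences of f and Pf agree (they differ by an affine function)
d2f, d2Pf = np.diff(f, 2), np.diff(Pf, 2)
```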
First, we give the structure of the concentrated solutions of problem \eqref{1.1}.
\begin{Thm}\label{prop1}
Let $N\geq 4$, $s\in [1,2^*-1)$ and $m=1,\cdots,k$,
suppose that $u_{\varepsilon}(x)$ is a solution of \eqref{1.1} satisfying
\begin{equation}\label{4-6-1}
\big|\nabla u_\varepsilon\big|^2\rightharpoonup \sum^m_{j=1}Q(a_j)^{-(N-2)/2}S^{N/2}\delta_{a_j}~~\mbox{as}~\varepsilon\rightarrow 0~~\mbox{and}~~
\big| u_\varepsilon\big|^{2^*}\rightharpoonup \sum^m_{j=1} Q(a_j)^{-N/2}S^{N/2}\delta_{a_j}~~\mbox{as}~\varepsilon\rightarrow 0,
\end{equation}then
$u_{\varepsilon}(x)$ can be written as
\begin{equation}\label{luo--2}
u_{\varepsilon}= \sum^m_{j=1} Q(a_j)^{-(N-2)/4} PU_{x_{\varepsilon,j}, \lambda_{\varepsilon,j}}+w_{\varepsilon},
\end{equation}
satisfying
$\lambda_{\varepsilon,j}:=\big(u_\varepsilon(x_{\varepsilon,j})\big)^{\frac{2}{N-2}}$, $
x_{\varepsilon,j} \rightarrow a_j,~\lambda_{\varepsilon,j}\rightarrow +\infty,~ \|w_{\varepsilon}\|=o(1)$.
\end{Thm}
The next result concerns the non-existence of concentrated solutions of \eqref{1.1} when $s=1$.
\begin{Thm}\label{th1-1}
Suppose that
$N\geq 5$, $s=1$ and $Q(x)$ satisfies Condition (Q). Then for any $m=1,\cdots,k$, problem \eqref{1.1} has no solutions $u_\varepsilon$ satisfying
\eqref{4-6-1}.
\end{Thm}
Here our idea to prove Theorem \ref{prop1} and Theorem \ref{th1-1} is as follows.
If $u_{\varepsilon}(x)$ is a solution of \eqref{1.1}, then we have the following \emph{local Pohozaev identities}:
\emph{\begin{equation}\label{clp-1}
\frac{1}{2^*}\int_{\Omega'} \frac{\partial Q(x)}{\partial x^i} u_\varepsilon^{2^*}{\mathrm d}x =
\int_{\partial \Omega'}\Big(
\frac{\partial u_\varepsilon}{\partial \nu}\frac{\partial u_\varepsilon}{\partial x^i}
+\big(\frac{1}{2^*} Q(x) u_\varepsilon^{2^*} -\frac{1}{2} |\nabla u_\varepsilon|^2 + \frac{\varepsilon}{{s+1}} u_\varepsilon^{s+1}\big)\nu^i \Big){\mathrm d}\sigma,
\end{equation}
and
\begin{equation}\label{clp-10}
\begin{split}
\frac{1}{2^*}&\int_{\Omega'}\Big((x-x_{\varepsilon,j})\cdot\nabla Q(x)\Big) u^{2^*}_\varepsilon {\mathrm d}x +(1-\frac{N}{2}+\frac{N}{1+s})\varepsilon\int_{ \Omega'} u^{s+1}_\varepsilon {\mathrm d}x
\\=&
\int_{\partial \Omega'} \Big[\Big(\frac{Q(x)}{2^*}u^{2^*}_\varepsilon+\frac{\varepsilon}{{s+1}
} u_{\varepsilon}^{s+1}-\frac{1}{2}
|\nabla u_{\varepsilon}|^2\Big) \big((x-x_{\varepsilon,j})\cdot\nu\big)
+
\Big((x-x_{\varepsilon,j})\cdot\nabla u_{\varepsilon}
+\frac{N-2}{2}u_{\varepsilon}\Big)\frac{\partial u_{\varepsilon}}{\partial\nu} \Big]{\mathrm d}\sigma,
\end{split}
\end{equation}
where $\Omega'\subset\subset \Omega$ and $\nu(x)=\big(\nu^{1}(x),\cdots,\nu^N(x)\big)$ is the outward unit normal of $\partial \Omega'$.}
Then, using Pohozaev identity \eqref{clp-1} and blow-up analysis, we determine the structure of the concentrated solutions (Theorem \ref{prop1}). Next, with the help of Pohozaev identity \eqref{clp-10}, we prove Theorem \ref{th1-1} by contradiction when $N\geq 5$ and $s=1$.
\vskip 0.2cm
Next, we establish the existence of multi-peak solutions of \eqref{1.1} under suitable conditions.
\begin{Thm}\label{th1.2}
Suppose that $N=4$, $s=1$ or $N\geq 4$, $s\in (1,2^*-1)$, and $Q(x)$ satisfies Condition (Q). Then for any $m=1,\cdots,k$, problem \eqref{1.1} has a solution $u_\varepsilon$ satisfying \eqref{4-6-1}.
\end{Thm}
\noindent \emph{\textbf{Remark A. }
Theorem \ref{th1-1} and Theorem \ref{th1.2} show that
problem \eqref{1.1} has no solutions satisfying \eqref{4-6-1} when
$N\geq 5$ and $s=1$, while problem \eqref{1.1} has a concentrated solution satisfying \eqref{4-6-1} when
$N=4$ and $s=1$, which
answers question (i) above (Remark 1.7 in \cite{CZ1997}).
Taking $m\geq 2$ in Theorem \ref{th1.2}, we find that
problem \eqref{1.1} has a solution concentrating at several points when $N=4$, $s=1$ or $N\geq 4$, $s>1$,
which answers question (ii) above (Remark 1.8 in \cite{CZ1997}). In a word, our results show that
the concentration behavior of the solutions of problem \eqref{1.1} is delicate both for $s=1$ and for $s>1$.}
\vskip 0.2cm
To prove Theorem \ref{th1.2}, the standard method is finite dimensional reduction. We define
\begin{equation*}
\begin{split}
{\mathbb D}_{\varepsilon}=\Big\{(x_\varepsilon,\lambda_\varepsilon)| &~ x_\varepsilon=(x_{\varepsilon,1},\cdots,x_{\varepsilon,m}), ~~\lambda_\varepsilon=
(\lambda_{\varepsilon,1},\cdots,\lambda_{\varepsilon,m}), \\& ~|x_{\varepsilon,j}-a_j|=o(1),~ \frac{\lambda_{\varepsilon,j}}{\lambda_{\varepsilon,l}}\leq C~\mbox{and}~\lambda_{\varepsilon,j}\rightarrow +\infty, ~j,l=1,\cdots,m\Big\}.
\end{split}\end{equation*}
And for $(x_\varepsilon,\lambda_\varepsilon)\in {\mathbb D}_{\varepsilon}$, by Proposition \ref{Prop-Luo1} below which is standard, we find
that
$$u_\varepsilon=\displaystyle\sum^m_{j=1}\big(Q(a_j)\big)^{-\frac{N-2}{4}}PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}+ v_{\varepsilon}$$ solves
\begin{equation}\label{luou}
-\Delta u_\varepsilon= Q(x) u_\varepsilon^{\frac{N+2}{N-2}}+\varepsilon u^{s}_{\varepsilon}+\sum^m_{j=1}\sum^N_{i=0}c_{\varepsilon,i,j}
\varphi_{ij}(x),
\end{equation}
with $\varphi_{0j}(x)=\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}$ and $\varphi_{ij}(x)=\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial x^{i}}$ for $j=1,\cdots,m$ and $i=1,\cdots,N$.
The next step is to choose suitable $(x_{\e},\lambda_{\e})$ such that $c_{\varepsilon,i,j}=0$ for $i=0,\cdots,N$ and $j=1,\cdots,m$. To do this, we use the following claim.
\vskip 0.2cm
\noindent \textbf{Claim 1:} \emph{ Suppose that $(x_\varepsilon,\lambda_\varepsilon)\in {\mathbb D}_{\varepsilon}$ satisfies
\begin{equation}\label{a22-1-8}
\int_{\Omega}\Big(\Delta u_\varepsilon+Q(x) u_\varepsilon^{\frac{N+2}{N-2}}+\varepsilon u^{s}_{\varepsilon} \Big)\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}=0,
\end{equation}
and
\begin{equation}\label{a2-13-1}
\int_{\Omega}\Big(\Delta u_\varepsilon+Q(x) u_\varepsilon^{\frac{N+2}{N-2}}+\varepsilon u^{s}_{\varepsilon} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}} =0,
\end{equation}
then $c_{\varepsilon,i,j}=0$ for $i=0,\cdots,N$ and $j=1,\cdots,m$.}
Here we point out that the above modified finite dimensional reduction was used to find
concentrated solutions of the scalar curvature problem in \cite{PWY2018}.
\vskip 0.2cm
Finally, the local uniqueness of the above multi-peak solutions can be stated as follows.
\begin{Thm}\label{th1.3}
Suppose that $N\geq 5$, $s\in (1,2^*-1)$ and $Q(x)$ satisfies Condition (Q). For any $m=1,\cdots,k$, if $u^{(1)}_\varepsilon$ and $u^{(2)}_\varepsilon$ are the solutions of
problem \eqref{1.1} satisfying \eqref{4-6-1} and
\begin{equation}\label{lp111}
\frac{1}{C}\leq \frac{\lambda^{(1)}_{\varepsilon,j}}{\lambda^{(2)}_{\varepsilon,j}}\leq C, ~\mbox{for all}~
j=1,\cdots,m~\mbox{and some constant }~C>0,
\end{equation}
where $\lambda^{(l)}_{\varepsilon,j}$ ($l=1,2$ and $j=1,\cdots,m$) denotes the height of the $j$-th bubble of the solution $u^{(l)}_\varepsilon$ as in \eqref{luo--2},
then $u^{(1)}_\varepsilon\equiv u^{(2)}_\varepsilon$ for $\varepsilon>0$ small.
\end{Thm}
Inspired by \cite{Cao1,Deng}, our idea to prove Theorem \ref{th1.3} is as follows. Let $u^{(1)}_{\varepsilon}(x)$, $u^{(2)}_{\varepsilon}(x)$ be two different positive solutions concentrating at $\{a_1,\cdots, a_m\}$.
Set
\begin{flalign}\label{3.1}
\xi_{\varepsilon}(x)=\frac{u_{\varepsilon}^{(1)}(x)-u_{\varepsilon}^{(2)}(x)}
{\|u_{\varepsilon}^{(1)}-u_{\varepsilon}^{(2)}\|_{L^{\infty}(\Omega)}}.
\end{flalign}
Then we prove $
\xi_{\varepsilon}(x)=o(1)$ for $x\in \Omega$,
which is incompatible with the fact $\|\xi_{\varepsilon}\|_{L^{\infty}(\Omega)}=1$.
For the estimates near the critical points, the following local Pohozaev identities play a crucial role.
\vskip 0.2cm
\noindent \emph{If $u_{\varepsilon}^{(1)}(x)$ and $u_{\varepsilon}^{(2)}(x)$ are solutions of \eqref{1.1}, then taking $u_\varepsilon=u_{\varepsilon}^{(l)}$ with $l=1,2$ in \eqref{clp-1} and \eqref{clp-10} with $\Omega'=B_d(x^{(1)}_{\e,j})$, and subtracting them on both sides respectively, we can obtain
\begin{equation}\label{dclp-1}
\begin{split}
\int_{B_d(x^{(1)}_{\e,j})} &\frac{\partial Q(x)}{\partial x^i} D_{1,\varepsilon}\xi_\e {\mathrm d}x \\=&
\int_{\partial B_d(x^{(1)}_{\e,j})}\Big( \big( Q(x) D_{1,\varepsilon}(x)+ \e D_{2,\varepsilon}(x)\big)\xi_\e -\frac{1}{2} \big(\nabla u^{(1)}_\varepsilon
+\nabla u^{(2)}_\varepsilon\big)\cdot \nabla \xi_\e \Big)\nu^i{\mathrm d}\sigma\\&
+\int_{\partial B_d(x^{(1)}_{\e,j})}\Big(
\frac{\partial u^{(1)}_\varepsilon}{\partial \nu}\frac{\partial \xi_\e}{\partial x^i}
+\frac{\partial \xi_\e }{\partial \nu}\frac{\partial u^{(2)}_\varepsilon}{\partial x^i}\Big){\mathrm d}\sigma,
\end{split}\end{equation}
and
\begin{equation}\label{dclp-10}
\begin{split}
\int_{B_d(x^{(1)}_{\e,j})} & \Big((x-x^{(1)}_{\varepsilon,j}) \cdot \nabla Q(x)\Big) D_{1,\varepsilon} \xi_\e {\mathrm d}x +\Big(\big(1+s\big)\big(1-\frac{N}{2}\big)+N\Big)\varepsilon\int_{B_d(x^{(1)}_{\e,j})} D_{2,\varepsilon}\xi_\e {\mathrm d}x
\\=&
\int_{\partial B_d(x^{(1)}_{\e,j})}
\left(
\Big( ( x-x^{(1)}_{\varepsilon,j}) \cdot \nabla u^{(1)}_{\varepsilon}
+\frac{N-2}{2}
u^{(1)}_{\varepsilon} \Big)\frac{\partial \xi_{\varepsilon}}{\partial\nu}
+ \Big((x-x^{(1)}_{\varepsilon,j}) \cdot \nabla \xi_{\varepsilon}
+\frac{N-2}{2}
\xi_{\varepsilon} \Big)\frac{\partial u^{(1)}_{\varepsilon}}{\partial\nu}
\right) {\mathrm d}\sigma
\\&
+ \int_{\partial B_d(x^{(1)}_{\e,j})}\Big( \big( Q(x) D_{1,\varepsilon}(x)+ \e D_{2,\varepsilon}(x)\big)\xi_\e -\frac{1}{2} \big(\nabla u^{(1)}_\varepsilon
+\nabla u^{(2)}_\varepsilon\big) \cdot\nabla \xi_\e \Big) \Big( ( x-x^{(1)}_{\varepsilon,j} ) \cdot \nu\Big)
{\mathrm d}\sigma,
\end{split}
\end{equation}
where
\begin{equation*}
D_{1,\varepsilon}(x)= \int_{0}^1
\Big(tu_{\varepsilon}^{(1)}(x)+(1-t)u_{\varepsilon}^{(2)}(x) \Big)
^{\frac{N+2}{N-2}}{\mathrm d}t~\mbox{and}~ D_{2,\varepsilon}(x)= \int_{0}^1
\Big(tu_{\varepsilon}^{(1)}(x)+(1-t)u_{\varepsilon}^{(2)}(x)\Big)
^{s}{\mathrm d}t.
\end{equation*}
}Here we would like to point out that the local Pohozaev identity \eqref{dclp-10} involves two volume integral terms. Hence, to compute these two integrals precisely, we need to exploit certain symmetries carefully, based on some observations.
The paper is organized as follows. In Sections 2--5, we prove Theorems \ref{prop1}--\ref{th1.3}, respectively.
Finally, we give the proofs of various local Pohozaev identities and some known facts in the Appendix.
Throughout this paper, we use $C$ to denote various generic positive constants independent of $\varepsilon$,
$\|\cdot\|$ to denote the norm of the Sobolev space $H^1_0(\Omega)$, and $\langle\cdot,\cdot\rangle$ the corresponding inner product. We also use $D$ to denote the partial derivative of any function $h(y,x)$ with respect to $x$.
\section{The structure of the solutions (Proof of Theorem \ref{prop1})}
\setcounter{equation}{0}
\begin{Prop}
If $u_{\varepsilon}(x)$ is a solution of \eqref{1.1} with \eqref{4-6-1},
then $|\nabla Q(a_j)|=0$ for $j=1,\cdots,m$.
\end{Prop}
\begin{proof}
We prove this by using the local Pohozaev identity \eqref{clp-1}.
From \eqref{4-6-1}, we find
\begin{equation}\label{7-3-01}
\int_{B_d(x_{\varepsilon,j})} \frac{\partial Q(x)}{\partial x^i} u_\varepsilon^{2^*}{\mathrm d}x\rightarrow
\frac{\partial Q(a_j)}{\partial x^i}Q(a_j)^{-N/2}S^{N/2} ~~ \mbox{as} ~~ \varepsilon\rightarrow 0.
\end{equation}
And using \eqref{4-6-1} again, we also know
\begin{equation}\label{7-3-02}
\int_{\partial B_d(x_{\varepsilon,j} ) }\Big(
\frac{\partial u_\varepsilon}{\partial \nu}\frac{\partial u_\varepsilon}{\partial x^i}
+\big(\frac{1}{2^*} Q(x) u_\varepsilon^{2^*} -\frac{1}{2} |\nabla u_\varepsilon|^2 + \frac{\varepsilon}{{s+1}} u_\varepsilon^{s+1}\big)\nu^i\Big){\mathrm d}\sigma \rightarrow 0~~ \mbox{as}~~\varepsilon\rightarrow 0.
\end{equation}
Then \eqref{clp-1}, \eqref{7-3-01} and \eqref{7-3-02} give us that $|\nabla Q(a_j)|=0$ for $j=1,\cdots,m$.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{prop1}}]
Since $u_{\varepsilon}(x)$ is a solution of \eqref{1.1} satisfying
\eqref{4-6-1}, we find that $u_{\varepsilon}(x)$ blows up at $a_1$. Then there exists $x_{1,\varepsilon}\in \Omega$ satisfying
$$x_{1,\varepsilon}\rightarrow a_1~\mbox{and}~u_\varepsilon(x_{1,\varepsilon})\rightarrow +\infty.$$
Set $\lambda_{1,\varepsilon}=\big(u_\varepsilon(x_{1,\varepsilon})\big)^{\frac{2}{N-2}}$ and let $v_{\varepsilon}=\lambda_{1,\varepsilon}^{-(N-2)/2}
u_{\varepsilon}\big(\frac{x}{\lambda_{1,\varepsilon}}+x_{1,\varepsilon}\big)$,
then
then
\begin{equation*}
-\Delta v_{\varepsilon} =Q\big(\frac{x}{\lambda_{1,\varepsilon}}+x_{1,\varepsilon}\big) v_{\varepsilon}^{2^*-1}
+\frac{ \lambda_{1,\varepsilon}^{ \frac{N-2}2 (s-1) } \varepsilon}{ \lambda_{1,\varepsilon}^2 }v^s_{\varepsilon},
~~ \mbox{in} ~~ \R^N.
\end{equation*}
For any fixed small $d$, we have $\displaystyle\max_{B_{d\lambda_{1,\varepsilon}}
(0)}v_{\varepsilon}=1$,
which implies that
\begin{equation*}
u_{\varepsilon}= Q(a_1)^{-(N-2)/4} PU_{x_{1,\varepsilon}, \lambda_{1,\varepsilon}}+w_{1,\varepsilon}, ~~ \mbox{with}~~ \displaystyle\int_{B_d(x_{1,\varepsilon})}\big[|\nabla w_{1,\varepsilon}|^2+w^2_{1,\varepsilon}\big]=o(1),
\end{equation*}
and
\begin{equation*}
\big|\nabla w_{1,\varepsilon}\big|^2\rightharpoonup \sum^m_{j=2}Q(a_j)^{-(N-2)/2}S^{N/2}\delta_{a_j} ~~ ~~ \mbox{and} ~~
\big|w_{1,\varepsilon}\big|^{2^*}\rightharpoonup \sum^m_{j=2} Q(a_j)^{-N/2}S^{N/2}\delta_{a_j}.
\end{equation*}
Repeating the above step, we can find
\begin{equation*}
w_{1,\varepsilon}= Q(a_2)^{-(N-2)/4} PU_{x_{2,\varepsilon}, \lambda_{2,\varepsilon}}+w_{2,\varepsilon}, ~~ \mbox{with} ~~\displaystyle\int_{B_d(x_{2,\varepsilon})}\big[|\nabla w_{2,\varepsilon}|^2+w^2_{2,\varepsilon}\big]=o(1).
\end{equation*}
Then, after finitely many steps, we obtain
\begin{equation}\label{4-24-2}
u_{\varepsilon}= \sum^m_{j=1} Q(a_j)^{-(N-2)/4} PU_{x_{\varepsilon,j}, \lambda_{\varepsilon,j}}+w_{\varepsilon}, ~~ \mbox{with} ~~ \big\|w_{\varepsilon}\big\|=o(1).
\end{equation}
\end{proof}
\begin{Rem}Now, for any $x\in\Omega$ and $\lambda\in \R^+$, we define
\begin{equation*}
\begin{split}
{\mathbb E}_{x,\lambda}=\left\{v\in H^1_0(\Omega)\big|~~ \Big\langle \frac{\partial PU_{x,\lambda}}{\partial \lambda},v \Big\rangle=\Big\langle \frac{\partial PU_{x,\lambda}}{\partial x^i},v\Big\rangle=0,~\mbox{for}~i=1,\cdots,N\right\}.
\end{split}
\end{equation*}
From \eqref{4-24-2}, we know
\begin{equation*}
\Big\langle \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda},w_{\varepsilon}\Big\rangle=\Big\langle \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial x^i},w_{\varepsilon}\Big\rangle=o(1).
\end{equation*}
Hence we can move $x_{\varepsilon,j}$ a bit (still denoted by $x_{\varepsilon,j}$), so that the above error term $w_{\varepsilon}\in \displaystyle\bigcap^m_{j=1}{\mathbb E}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}$.
\end{Rem}
\section{Non-existence (Proof of Theorem \ref{th1-1})}
\setcounter{equation}{0}
\subsection{Computations concerning $PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}$}
\begin{Lem}\label{Lemma3.1}
For any small $d>0$, it holds
\begin{equation}\label{3-13-07}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}&
\Big( (x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big) PU^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}{\mathrm d}x=
\frac{A \Delta Q(a_j)}{\lambda_{\varepsilon,j}^2}
+o\Big(\frac{1}{\lambda_{\varepsilon,j}^2} \Big)+
O\Big(\frac{1}{\lambda_{\varepsilon,j}^{N-2}} \Big),
\end{split}
\end{equation}
where $A=\frac{1}{N}\displaystyle\int_{\R^N}\frac{|y|^2}{(1+|y|^2)^N}{\mathrm d}y $. And
\begin{equation}\label{3-13-08}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s+1}{\mathrm d}x
=&
\begin{cases}
\frac{B}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}}
+o\Big(\frac{1}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}} \Big),&~\mbox{for}~N+s\neq 5,\\[6mm]
\omega_4\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{2}}
+O\Big(\frac{1}{\lambda_{\varepsilon,j}^{2}} \Big),&~\mbox{for}~N+s=5,
\end{cases}
\end{split}
\end{equation}
where $B= \displaystyle\int_{\R^N}\frac{1}{(1+|y|^2)^{\frac{N-2}{2}(s+1)}}{\mathrm d}y $ and
$\omega_4$ is the measure of the unit sphere in $\R^4$.
\end{Lem}
\begin{proof}[\underline{\textbf{Proof of \eqref{3-13-07}}}]
First, by \eqref{4-24-1}, \eqref{4-24-11} and \eqref{4-24-12}, we compute
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}&
\Big((x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)PU^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} {\mathrm d}x\\=&
\int_{B_d(x_{\varepsilon,j})}\Big((x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big) U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} {\mathrm d}x
\\&+
O\Big(\int_{B_d(x_{\varepsilon,j})}\big(U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*-1}
\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}+
\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*}\big)\cdot
| x-x_{\varepsilon,j}|{\mathrm d}x \Big)\\=&
\int_{B_d(x_{\varepsilon,j})}\Big((x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big) U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} {\mathrm d}x
+
O\Big(\frac{1}{\lambda_{\varepsilon,j}^{N-2}} \Big).
\end{split}
\end{equation*}
Also, by a direct scaling argument, we find
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}&\Big((x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big) U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} {\mathrm d}x
\\=&
\int_{B_d(x_{\varepsilon,j})}
\Big((x-x_{\varepsilon,j}) \cdot \big ( \nabla Q(x) - \nabla Q(x_{\varepsilon,j} )\big) \Big)U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*} {\mathrm d}x\\=&
\frac{\Delta Q(x_{\varepsilon,j})}{N}\int_{B_d(x_{\varepsilon,j})}U_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}^{2^*}|x-x_{\varepsilon,j}|^2{\mathrm d}x
+O\Big(\int_{B_d(x_{\varepsilon,j})}U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*}
|x-x_{\varepsilon,j}|^3{\mathrm d}x \Big),
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}&U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*}|x-x_{\varepsilon,j}|^2{\mathrm d}x
\\=&
\frac{1}{\lambda_{\varepsilon,j}^2}\int_{\R^N}\frac{|y|^2}{(1+|y|^2)^N}{\mathrm d}y
-\frac{1}{\lambda_{\varepsilon,j}^2}\int_{\R^N\setminus
B_{d\lambda_{\varepsilon,j}}(0)}\frac{|y|^2}{(1+|y|^2)^N}{\mathrm d}y
\\=&
\frac{1}{\lambda_{\varepsilon,j}^2}
\int_{\R^N}\frac{|y|^2}{(1+|y|^2)^N}{\mathrm d}y +o\Big(\frac{1}{\lambda_{\varepsilon,j}^2} \Big).
\end{split}
\end{equation*}
Similarly it holds
\begin{equation*}
\int_{B_d(x_{\varepsilon,j})}U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*}|x-x_{\varepsilon,j}|^3{\mathrm d}x
=O\Big(\frac{1}{\lambda_{\varepsilon,j}^3} \Big).
\end{equation*}
The above calculations yield that
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}&
\Big( (x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*} {\mathrm d}x
\\=&\frac{\Delta Q(a_j)}{N\lambda_{\varepsilon,j}^2}\int_{\R^N}\frac{|y|^2}{(1+|y|^2)^N}{\mathrm d}y
+o\Big(\frac{1}{\lambda_{\varepsilon,j}^2} \Big)+
O\Big(\frac{1}{\lambda_{\varepsilon,j}^{N-2}} \Big).
\end{split}\end{equation*}
Hence \eqref{3-13-07} follows by the above estimates.
\end{proof}
\begin{proof}[\underline{\textbf{Proof of \eqref{3-13-08}}}]
First, by \eqref{4-24-12}, we know
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s+1}{\mathrm d}x
=&
\int_{B_d(x_{\varepsilon,j})}U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s+1}{\mathrm d}x
+
O\Big(\int_{B_d(x_{\varepsilon,j})}\big(U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^s
|\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}|+
|\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}|^{s+1}\big){\mathrm d}x \Big).
\end{split}
\end{equation*}
Also, by the scaling transform, we have
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s+1}{\mathrm d}x
=&
\frac{1}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}}
\int_{B_{d\lambda_{\varepsilon,j}}(0)}\frac{1}{(1+|y|^2)^{\frac{N-2}{2}(s+1)}}{\mathrm d}y
\\=&
\begin{cases}
\frac{B}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}}
+o\Big(\frac{1}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}} \Big),&~\mbox{for}~N+s\neq 5,\\[6mm]
\omega_4\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{2}}
+O\Big(\frac{1}{\lambda_{\varepsilon,j}^{2}} \Big),&~\mbox{for}~N+s=5.
\end{cases}
\end{split}
\end{equation*}
Next, by \eqref{4-24-11} and H\"older's inequality, it follows that
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^s
\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}{\mathrm d}x
=&O\left( \lambda_{\varepsilon,j}^{ \frac {N-2} {2}(s-1)}\int_{B_d(x_{\varepsilon,j})}\Big(\frac 1 {1+\lambda_{\varepsilon,j}^2|x-x_{\varepsilon,j}|^2}\Big)^{\frac{(N-2)s}{2}}{\mathrm d}x\right)\\ =& O\Big(\frac{1}{\lambda_{\varepsilon,j}^{(N-2)s-\frac{N-2}{2}(s-1)}}\Big),
\end{split}
\end{equation*}
and
\begin{equation*}
\int_{B_d(x_{\varepsilon,j})}\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s+1}{\mathrm d}x
=
O\Big(\frac{1}{\lambda_{\varepsilon,j}^{\frac{N-2}{2}(s+1)}} \Big).
\end{equation*}
Hence \eqref{3-13-08} holds by the above estimates.
\end{proof}
\begin{Lem}
It holds
\begin{equation}\label{4-24-14}
\left(\int_\Omega \Big(\sum^m_{j=1} \big|Q(x)-Q(a_j)\big|PU^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)^{\frac{2N}{N+2}}{\mathrm d}x\right)^{\frac{N+2}{2N}}=O\Big(\sum^m_{j=1}\big( \frac{1}{\lambda_{\varepsilon,j}^2}+|x_{\e,j}-a_j|^2\big) \Big),
\end{equation}
and
\begin{equation}\label{4-24-15}
\begin{split}
\displaystyle\Big(\int_\Omega & \Big(\big(\sum^m_{j=1}Q(a_j)^{-\frac{N-2}4}
PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\big)^{2^*-1}-\sum^m_{j=1}Q(a_j)^{-\frac{N+2}4} PU^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{\frac{2N}{N+2}}{\mathrm d}x \Big)^{\frac{N+2}{2N}}\\=&
\begin{cases}
O\Big(\displaystyle\sum^m_{j=1} \frac{\big(\log \lambda_{\varepsilon,j}\big)^{\frac{N+2}{N}}}{\lambda_{\varepsilon,j}^{\frac{N+2}{2}}} \Big),~&\mbox{if}~N\geq 6,\\
O\Big(\displaystyle\sum^m_{j=1}\frac{1}{\lambda_{\varepsilon,j}^{N-2}}\Big),~&\mbox{if}~N<6.
\end{cases}
\end{split}
\end{equation}
\end{Lem}
\begin{proof}
First, from \eqref{4-24-1} and \eqref{4-24-11}, we know that
\begin{equation*}
\begin{split}
\mbox{LHS of \eqref{4-24-14}}=&O\left(\sum^m_{j=1}\int_{B_\delta(x_{\varepsilon,j})}\Big(\big|x-a_j\big|^2 U^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{\frac{2N}{N+2}}{\mathrm d}x\right)^{\frac{N+2}{2N}}\\=&O\left(\sum^m_{j=1}\int_{B_\delta(0)}
\Big(|x_{\varepsilon,j}-a_j|^{\frac{4N}{N+2} }+|z|^{\frac{4N}{N+2}}\Big)\Big(\frac{\lambda_{\varepsilon,j}}{1+\lambda_{\varepsilon,j}^2|z|^2}\Big)^{N} {\mathrm d}z\right)^{\frac{N+2}{2N}}\\=
&O\Big(\sum^m_{j=1}\big( \frac{1}{\lambda_{\varepsilon,j}^2}+|x_{\e,j}-a_j|^2\big)\Big).
\end{split}
\end{equation*}
Next, from \eqref{3-14-11}, we know that when $N\ge 6$,
\begin{equation*}
\begin{split}
\mbox{LHS of \eqref{4-24-15}}=
&O\Bigg( \sum^m_{j=1} \sum^m_{i\ne j}\Big(\int_\Omega \big(PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} ^{\frac {2^*-1}{2}}PU_{x_{\varepsilon,i},\lambda_{\varepsilon,i}}^{\frac {2^*-1}{2}}\big)^{\frac{2N}{N+2}} {\mathrm d}x \Big)^{\frac{N+2}{2N}}\Bigg)\\
=& O\Bigg(\sum^m_{j=1}\sum^m_{i\ne j}\Big(\int_{B_\delta(x_{\varepsilon,j})}\big(\frac{\lambda_{\varepsilon,j}}{1+\lambda_{\varepsilon,j}^2 |x-x_{\varepsilon,j}|^2}\big)^{\frac{N}{2}}\big(\frac{\lambda_{\varepsilon,i}}{1+\lambda_{\varepsilon,i}^2 |x-x_{\varepsilon,i}|^2}\big)^{\frac{N}{2}}{\mathrm d}x\Big)^{\frac{N+2}{2N}}\Bigg)
\\=&
O\Big(\displaystyle\sum^m_{j=1} \sum^m_{i\neq j}\frac{\big(\log \lambda_{\varepsilon,j}\big)^{\frac{N+2}{2N}}}{\lambda_{\varepsilon,i}^{\frac{N+2}{4}}
\lambda_{\varepsilon,j}^{\frac{N+2}{4}}}\Big)
=O\Big(\displaystyle\sum^m_{j=1} \frac{\big(\log \lambda_{\varepsilon,j}\big)^{\frac{N+2}{N}}}{\lambda_{\varepsilon,j}^{\frac{N+2}{2}}}\Big).
\end{split}
\end{equation*}
Similarly, when $N<6$, we have
\begin{equation*}
\begin{split}
\mbox{LHS of \eqref{4-24-15}}=&O\Bigg(\sum^m_{j=1}\sum^m_{i\ne j}\Big(\int_\Omega \big(PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} ^{2^*-2 } PU_{x_{\varepsilon,i},\lambda_{\varepsilon,i }}+ PU_{x_{\varepsilon,i},\lambda_{\varepsilon,i}}^{2^*-2} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\big )^{\frac{2N}{N+2}}{\mathrm d}x \Big)^{\frac{N+2}{2N}}\Bigg) \\
=&
O\Bigg(\sum^m_{j =1}\sum^m_{i\ne j}\frac{1}{\lambda_{\varepsilon,i}^{\frac{N-2}{2}}}\Big(\int_{B_\delta (x_{\varepsilon,j})}\Big(\frac{\lambda_{\varepsilon,j}}{1+\lambda_{\varepsilon,j}^2
|x-x_{\varepsilon,j}|^2}\Big)^{\frac{4N}{N+2}}{\mathrm d}x\Big)^{\frac{N+2}{2N}}\Bigg)
\\&
+ O\Bigg(\sum^m_{j=1}\sum^m_{i\ne j}\frac{1}{\lambda_{\varepsilon,i}^2}\Big(\int_{B_\delta(x_{\varepsilon,j})}\Big(\frac{
\lambda_{\varepsilon,j}}{1+\lambda_{\varepsilon,j}^2|x-x_{\varepsilon,j}|^2}\Big)^{\frac{N(N-2)}{N+2}} {\mathrm d}x\Big)^{\frac{N+2}{2N}}\Bigg)
\\
=&
O\Bigg(\sum^m_{j=1}\sum^m_{i\ne j}\frac{1}{\lambda_{\varepsilon,i}^{\frac{N-2}{2}}}\frac{1}{\lambda_{\varepsilon,j}^{\frac{N-2}{2}}} \Bigg)+O\Bigg(\sum^m_{j=1}\sum^m_{i\ne j}\frac{1}{\lambda_{\varepsilon,i}^2} \frac{1}{\lambda_{\varepsilon,j}^{\frac{N-2}{2}}}\Bigg)=O\Big(\displaystyle\sum^m_{j=1} \frac{1}{\lambda_{\varepsilon,j}^{N-2}}\Big).
\end{split}
\end{equation*}
Hence \eqref{4-24-15} follows from the above estimates.
\end{proof}
\subsection{Non-existence}
\begin{Prop}
Let $u_{\varepsilon}(x)$ be a solution of \eqref{1.1} with \eqref{4-6-1},
then for small $d>0$, it holds
\begin{equation}\label{ab4-18-12}
u_{\varepsilon}(x)=O\Big(\sum^m_{j=1}\frac{1}{\lambda^{(N-2)/2}_{\varepsilon,j}}\Big),
~\mbox{in}~C^1\Big(\Omega\backslash\bigcup^m_{j=1}B_d(x_{\varepsilon,j}) \Big).
\end{equation}
\end{Prop}
\begin{proof}By potential theory,
we have
\begin{align}\label{4-24-29}
u_{\varepsilon}(x)=\int_\Omega G(x,y)\big( Q(y)u_{\varepsilon}^{2^*-1}(y)+\varepsilon u_{\varepsilon}^{s}(y)\big){\mathrm d}y
=O\Big(\int_\Omega \frac{1}{|x-y|^{N-2}}\big(u_{\varepsilon}^{2^*-1}+\varepsilon u_{\varepsilon}^{s}\big){\mathrm d}y\Big).
\end{align}
From \eqref{4-6-1}, we have $u_{\varepsilon}(x)=o(1)$ uniformly for $x\in \Omega\backslash\bigcup^m_{j=1}B_d(x_{\varepsilon,j})$.
Moreover, from the proof of Theorem \ref{prop1} (blow-up analysis), we know that
$$u_\varepsilon(x)\leq C \sum^m_{j=1}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}~\mbox{in}~\bigcup^m_{j=1}B_d(x_{\varepsilon,j}).$$
So for $ x\in \Omega\backslash\bigcup^m_{j=1}B_d(x_{\varepsilon,j})$, it holds
\begin{equation*}
\begin{split}
\int_\Omega & \frac{1}{|x-y|^{N-2}} u^{2^*-1}_{\varepsilon} {\mathrm d}y\\=&
\int_{\Omega\backslash\bigcup^m_{j=1}B_d(x_{\varepsilon,j})} \frac{1}{|x-y|^{N-2}} u^{2^*-1}_{\varepsilon} {\mathrm d}y+\underbrace{\int_{\bigcup^m_{j=1}B_d(x_{\varepsilon,j})} \frac{1}{|x-y|^{N-2}} u^{2^*-1}_{\varepsilon} {\mathrm d}y}_{:=I_1}\\ \leq &
\max_{\Omega\backslash\displaystyle\bigcup^m_{j=1}B_d(x_{\varepsilon,j})}|u_{\varepsilon}(x)|^{2^*-1}\int_{\Omega} \frac{1}{|x-y|^{N-2}} {\mathrm d}y+I_1
\leq C
\max_{\Omega\backslash\displaystyle\bigcup^m_{j=1}B_d(x_{\varepsilon,j})}|u_{\varepsilon}(x)|^{2^*-1} +I_1.
\end{split}
\end{equation*}
Next, we find
\begin{equation*}
\begin{split}
I_1=&\int_{\bigcup^m_{j=1}B_{\frac{d}{2}}(x_{\varepsilon,j})} \frac{1}{|x-y|^{N-2}} u^{2^*-1}_{\varepsilon} {\mathrm d}y+\int_{\bigcup^m_{j=1}\big(B_d(x_{\varepsilon,j})\backslash B_{\frac{d}{2}}(x_{\varepsilon,j})\big)} \frac{1}{|x-y|^{N-2}} u^{2^*-1}_{\varepsilon} {\mathrm d}y\\=&
O\Big(\sum^m_{j=1}\int_{\Omega}PU^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}{\mathrm d}y\Big)
+ O\Big(\sum^m_{j=1} \frac{1}{\lambda^{\frac{N+2}{2}}_{\varepsilon,j}}\Big)=
O\Big(\sum^m_{j=1} \frac{1}{\lambda^{\frac{N-2}{2}}_{\varepsilon,j}}\Big).
\end{split}
\end{equation*}
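Here we used that $0\leq PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\leq U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}$ together with the scaling transform:
\begin{equation*}
\int_{\Omega}PU^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}{\mathrm d}y
\leq \int_{\R^N}U^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}{\mathrm d}y
=\frac{1}{\lambda_{\varepsilon,j}^{\frac{N-2}{2}}}\int_{\R^N}U^{2^*-1}_{0,1}(y){\mathrm d}y
=O\Big(\frac{1}{\lambda_{\varepsilon,j}^{\frac{N-2}{2}}}\Big),
\end{equation*}
since $U_{0,1}^{2^*-1}$ decays like $|y|^{-(N+2)}$ and is therefore integrable on $\R^N$.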
These estimates imply that
\begin{equation}\label{4-24-30}
\begin{split}
\int_\Omega & \frac{1}{|x-y|^{N-2}} u^{2^*-1}_{\varepsilon} {\mathrm d}y
=O\Big(
\max_{\Omega\backslash\displaystyle\bigcup^m_{j=1}B_d(x_{\varepsilon,j})}|u_{\varepsilon}(x)|^{2^*-1} +\sum^m_{j=1} \frac{1}{\lambda^{\frac{N-2}{2}}_{\varepsilon,j}}\Big).
\end{split}
\end{equation}
Similarly, we find
\begin{equation}\label{4-24-31}
\begin{split}
\int_\Omega \frac{1}{|x-y|^{N-2}} u^{s}_{\varepsilon} {\mathrm d}y
=&O\Big(
\max_{\Omega\backslash\displaystyle\bigcup^m_{j=1}B_d(x_{\varepsilon,j})}|u_{\varepsilon}(x)|^{s} +\sum^m_{j=1}\int_{\Omega}PU^{s}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}{\mathrm d}y\Big)\\=&O\Big(
\max_{\Omega\backslash\displaystyle\bigcup^m_{j=1}B_d(x_{\varepsilon,j})}|u_{\varepsilon}(x)|^{s} +\sum^m_{j=1} \eta(\lambda_{\varepsilon,j})\Big),
\end{split}
\end{equation}
where
\begin{equation}\label{ll1}
\eta\big({\lambda}\big)=
\begin{cases}
O\Big(\frac{1}{{\lambda}^{ N- \frac{ (N-2)s}{2}}} \Big),&~\mbox{for}~(N-2)s>N,\\[3mm]
O\Big(\frac{\log {\lambda}}{{\lambda}^{N-\frac{(N-2)s}{2}}} \Big),&~\mbox{for}~(N-2)s=N,\\[3mm]
O\Big(\frac{1}{{\lambda}^{ \frac{(N-2)s}{2}}} \Big),
&~\mbox{for}~(N-2)s<N.
\end{cases}
\end{equation}
Also we find
\begin{equation}\label{4-24-32}
\eta(\lambda_{\varepsilon,j})=O\Big(\frac{1}{\lambda^{\frac{N-2}{2}}_{\varepsilon,j}}\Big),~\mbox{for}~
s\in \big[1,\frac{N+2}{N-2}\big).
\end{equation}
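Indeed, for $s\in\big[1,\frac{N+2}{N-2}\big)$, in each case of \eqref{ll1} the power of $\lambda$ is at least $\frac{N-2}{2}$:
\begin{equation*}
N-\frac{(N-2)s}{2}> N-\frac{N+2}{2}=\frac{N-2}{2}~~\mbox{for}~(N-2)s\geq N,
\quad\mbox{and}\quad
\frac{(N-2)s}{2}\geq \frac{N-2}{2}~~\mbox{for}~s\geq 1,
\end{equation*}
while in the logarithmic case the exponent $N-\frac{(N-2)s}{2}=\frac{N}{2}$ is strictly larger than $\frac{N-2}{2}$, so the logarithm is absorbed.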
Hence \eqref{4-24-29}, \eqref{4-24-30}, \eqref{4-24-31} and \eqref{4-24-32} give us that
\begin{equation*}
u_{\varepsilon}(x)=O\Big(\sum^m_{j=1}\frac{1}{\lambda^{(N-2)/2}_{\varepsilon,j}}\Big),
~\mbox{in}~ \Omega\backslash\bigcup^m_{j=1}B_d(x_{\varepsilon,j}).
\end{equation*}
On the other hand, for $x\in \Omega\backslash\displaystyle\bigcup^m_{j=1}B_{d}(x_{\varepsilon,j})$, we have
\begin{equation*}
\begin{split}
\frac{\partial u_\varepsilon(x)}{\partial x^i}=& \int_{\Omega}D_{x^i}G(y,x)\big( Q(y)u_{\varepsilon}^{2^*-1}(y)+\varepsilon u_{\varepsilon}^{s}(y)\big){\mathrm d}y
=O\Big(\int_\Omega \frac{1}{|x-y|^{N-1}}\big(u_{\varepsilon}^{2^*-1}+\varepsilon u_{\varepsilon}^{s}\big){\mathrm d}y\Big).
\end{split}
\end{equation*}
Repeating the above estimates, we complete the proof of \eqref{ab4-18-12}.
\end{proof}
\begin{Prop}
Let $u_{\varepsilon}(x)$ be a solution of \eqref{1.1} with \eqref{luo--2},
then it holds
\begin{equation}\label{luo1}
|x_{\e,j}-a_j|=o\Big(\sum^m_{l=1} \frac{1}{\lambda_{\varepsilon,l}} \Big),~\mbox{for}~j=1,\cdots,m.
\end{equation}
\end{Prop}
\begin{proof} We use the identity \eqref{clp-1} with $ \Omega' =B_d(x_{\e,j})$.
First, we have
\begin{equation}\label{4-24-23}
\begin{split}
\mbox{LHS of}~\eqref{clp-1}=& \frac{1}{2^*}\int_{B_d(x_{\e,j})} \frac{\partial Q(x)}{\partial x^i} u_\varepsilon^{2^*}{\mathrm d}x \\=&
\frac{1}{2^*}\sum^{N}_{l=1}\int_{B_d(x_{\e,j})} \frac{\partial^2 Q(a_j)}{\partial x^i\partial x^l} \big(x^l-{a_j^l}\big)u_\varepsilon^{2^*}{\mathrm d}x+O\Big(\int_{B_d(x_{\e,j})} |x-a_j|^2u_\varepsilon^{2^*}{\mathrm d}x \Big)\\=&
\frac{ S^{\frac N2} }{2^*}\big(Q(a_j)\big)^{-\frac{N}{2}}\sum^{N}_{l=1} \frac{\partial^2 Q(a_j)}{\partial x^i\partial x^l} \big({x_{\e,j}^l} -{a_j^l}\big)+o \Big(\sum^m_{j=1}\big(|x_{\e,j}-a_j|+ \frac{1}{\lambda_{\varepsilon,j}}\big) \Big).
\end{split}\end{equation}
On the other hand, using estimate \eqref{ab4-18-12}, we can get
\begin{equation}\label{4-24-24}
\begin{split}
\mbox{RHS of}~\eqref{clp-1}=& O\Big(\sum^m_{j=1} \frac{1}{\lambda^{N-2}_{\varepsilon,j}} \Big).
\end{split}\end{equation}
Then we find \eqref{luo1} by \eqref{4-24-23} and \eqref{4-24-24}.
\end{proof}
Let ${\bf Q}_\e$ be a quadratic form on ${\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}:=\displaystyle\bigcap^m_{j=1}{\mathbb E}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}$ given by,
for any $u,v \in {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}$,
\begin{equation}\label{Qe}
\Big\langle{\bf Q}_\varepsilon u,v\Big\rangle_{{\e}}=\Big\langle u,v \Big\rangle-(2^*-1)\int_{\Omega} Q(x)\Big(\sum^m_{j=1} Q(a_j)^{-(N-2)/4}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{2^*-2}uv\,{\mathrm d}x.
\end{equation}
\begin{Prop}\label{Prop-Luo2}
For any $\varepsilon>0$ sufficiently small, there exists a constant $\rho>0$ such that
\begin{equation}\label{3-13-01}
\Big\langle{\bf Q}_\varepsilon v_\varepsilon,v_\varepsilon\Big\rangle_{\e} \geq \rho \|v_\varepsilon \|^2,
\end{equation}
where $v_\varepsilon\in {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon},~
|x_{\varepsilon,j}-a_j|=o(1)$ and
$\lambda_{\varepsilon,j}\rightarrow +\infty$, for $j=1,\cdots,m$.
\end{Prop}
\begin{proof} We prove it by contradiction. Suppose that there exist sequences $\e_n\to 0$, $|x_{\e_n,j}-a_j|=o(1)$ and $\lambda_{\varepsilon_n,j}\rightarrow +\infty$ such that \[\Big\langle {\bf Q}_{\e_n}v_{\e_n},v_{\e_n} \Big\rangle_{\e_n}\le\frac 1n\|v_{\e_n}\|^2.\]
Without loss of generality, we assume $\|v_{\e_n}\|=1$. Then for $\psi\in H^1_0(\Omega)$, we have
\begin{equation} \label{4191018}
\int_\Omega\nabla v_{\e_n}\nabla\psi-(2^*-1)\int_{\Omega}Q(x)\Big(\sum^m_{j=1}Q(a_j)^{-(N-2)/4} PU_{x_{\varepsilon_n,j},\lambda_{\varepsilon_n,j}}\Big)^{2^*-2}v_{\e_n}\psi=o(1).
\end{equation}
Now we define $\bar v_{\e_n}=\lambda_{\e_n,1}^{-(N-2)/2}v_{{\e_n}}\big(\frac{x}{\lambda_{\e_n,1}}+x_{\e_n,1}\big)$,
then for some universal constant $C$, it holds
$\displaystyle\int_{\R^N}|\nabla\bar v_{\e_n}|^2\le C$.
So we have
\begin{equation*}
\bar{v}_{\e_n}\rightharpoonup\bar{v}~~\text{weakly in}~ H^1_{loc}(\mathbb{R}^N)~~\mbox{and}~
\bar{v}_{\e_n}\rightarrow\bar{v}~~\text{strongly in}~ L^2_{loc}(\mathbb{R}^N).
\end{equation*}
From the above and \eqref{4191018}, we can get $-\Delta \bar{v}-(2^*-1)U_{0,1}^{2^*-2}\bar{v}=0$ in $\R^N$.
So we can conclude that
\begin{equation*}
\bar{v}=c_0\frac{\partial U_{0,\lambda}}{\partial\lambda}\Big|_{\lambda=1}+\sum_{i=1}^Nc_i\frac{\partial U_{x,1}}{\partial x^i}\Big|_{x=0}.
\end{equation*}
Moreover, since $v_{\e_n}\in {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}$, the orthogonality conditions pass to the limit and force $c_0=c_1=\cdots=c_N=0$, that is, $\bar{v}=0$.
Also we have
\begin{equation} \label{1054}
\begin{split}
&\int_{\Omega}Q(x)\Big(\sum^m_{j=1}Q(a_j)^{-(N-2)/4}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)^{2^*-2}v_{\e_n}^2\\
=&O\left(\int_{\bigcup^m_{j=1}B_d(x_{\e,j})}\Big(PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{2^*-2} v_{\e_n}^2+\int_{\Omega\setminus\bigcup^m_{j=1}B_d(x_{\e,j})}\Big(\sum^m_{j=1}PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}\Big)^{2^*-2}v_{\e_n}^2\right)\\=&
O\Big(\sum^m_{j=1}\frac{1}{\lambda_{\varepsilon,j}^{N-2}}\int_{\R^N}\frac{|v|}{(1+|x|^2)^2}\Big)
+
O\Big(\sum^m_{j=1}\frac{\|v_{\varepsilon_n}\|^2}{\lambda_{\varepsilon,j}^{2}} \Big)=
o(1).
\end{split}
\end{equation}
Combining \eqref{4191018} and \eqref{1054}, we conclude $\|v_{\e_n}\|=o(1)$, which yields a contradiction.
\end{proof}
Let $u_\varepsilon=\displaystyle\sum^m_{j=1} Q(a_j)^{-(N-2)/4} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}+ w_{\varepsilon}$ be a solution of \eqref{1.1}, then
\begin{equation}\label{2-16-1}
{\bf Q}_\varepsilon w_\varepsilon={\bf f}_\varepsilon+{\bf R}_\varepsilon(w_\varepsilon),
\end{equation}
where
\begin{equation}\label{fe}
{\bf f}_\varepsilon=Q(x) \Big(\sum^m_{j=1}Q(a_j)^{-\frac{N-2}{4}} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{2^*-1}-\sum^m_{j=1}Q(a_j)^{-\frac{N-2}{4}} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{2^*-1},
\end{equation}
and
\begin{equation}\label{Re}
\begin{split}
{\bf R}_\varepsilon(w_\varepsilon)=&Q(x)\Big(\big(\sum^m_{j=1}Q(a_j)^{-\frac{N-2}{4}} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}+w_\varepsilon\big)^{2^*-1}-
\big(\sum^m_{j=1}Q(a_j)^{-\frac{N-2}{4}}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\big)^{2^*-1} \Big)\\&-(2^*-1)Q(x)\Big(\sum^m_{j=1}Q(a_j)^{-\frac{N-2}{4}} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{2^*-2}
w_\varepsilon
+\varepsilon
\Big(\sum^m_{j=1}Q(a_j)^{-\frac{N-2}{4}}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}+w_\varepsilon \Big)^{s}.
\end{split}\end{equation}
\begin{Lem}\label{lem1}
For $v\in H^1_0(\Omega)$, it holds
\begin{equation}\label{3-13-02}
\int_\Omega {\bf f}_\e v{\mathrm d}x =O\Big(\sum^m_{j=1}\big( \frac{1}{\lambda_{\varepsilon,j}^2}+|x_{\e,j}-a_j|^2\big) \Big) \|v \|.
\end{equation}
\end{Lem}
\begin{proof}
First, by H\"older's inequality, we have
\begin{equation*}
\int_\Omega {\bf f}_\e v{\mathrm d}x =O\Big( \int_\Omega \big({\bf f}_\e\big)^{\frac{2N}{N+2}}{\mathrm d}x \Big)^{\frac{N+2}{2N}}\|v \|.
\end{equation*}
Next, we compute
\begin{equation*}
\begin{split}
\Big(\int_\Omega \big({\bf f}_\e\big)^{\frac{2N}{N+2}}{\mathrm d}x \Big)^{\frac{N+2}{2N}}=&
O\left(\Big(\int_\Omega \Big(\sum^m_{j=1} \big|Q(x)-Q(a_j)\big|PU^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)^{\frac{2N}{N+2}}{\mathrm d}x\Big)^{\frac{N+2}{2N}}\right)
\\&
+O\Bigg(\Big(\int_\Omega\Big(\big(\sum^m_{j=1} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\big)^{2^*-1}
-\sum^m_{j=1}PU^{2^*-1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)^{\frac{2N}{N+2}}{\mathrm d}x\Big)^{\frac{N+2}{2N}} \Bigg).
\end{split}\end{equation*}
Hence, using \eqref{4-24-14} and the above estimates, we get \eqref{3-13-02}.
\end{proof}
\begin{Lem}
For $v\in H^1_0(\Omega)$, it holds
\begin{equation}\label{dd3-13-02}
\int_\Omega {\bf R}_\e(w_\e) v{\mathrm d}x =\Big(o\big( \|w_\e\|\big)+O\big(\varepsilon\sum^m_{j=1}\eta_1(\lambda_{\varepsilon,j})\big)\Big) \|v \|.
\end{equation}
\end{Lem}
\begin{proof}
First, direct calculations yield that
\begin{equation}\label{3-13-03}
\begin{split}
\int_\Omega \Big(\sum^m_{j=1}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{s}v{\mathrm d}x=& O\left(\Big(\int_\Omega \Big(\sum^m_{j=1}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)^{\frac{2Ns}{N+2}}{\mathrm d}x\Big)^{\frac{N+2}{2N}}\|v \|\right)\\=&O\Big(\sum^m_{j=1}\eta_1({\lambda}_{\varepsilon,j})\Big)\|v\|,
\end{split}
\end{equation}
where
\begin{equation*}
\eta_1(\lambda)=
\begin{cases}
\frac{1}{\lambda^{(N+2)/2-(N-2)s/2}}, & \mbox{if}~s>\frac{N+2}{2(N-2)},\\[2mm]
\frac{(\log \lambda)^{(N+2)/(2N)}}{\lambda^{(N+2)/2-(N-2)s/2}}, & \mbox{if}~s=\frac{N+2}{2(N-2)},\\[2mm]
\frac{1}{\lambda^{ (N-2)s/2}}, & \mbox{if}~s<\frac{N+2}{2(N-2)}.
\end{cases}
\end{equation*}
Since ${\bf R}_\e(w_\e)$ collects the terms of higher order in $w_\e$ together with the term of order $\varepsilon$, \eqref{dd3-13-02} can be deduced from
H\"older's inequality, \eqref{4-24-15}, \eqref{3-13-03} and \eqref{3-14-11}.
\end{proof}
\begin{Prop}
Let $u_{\varepsilon}(x)$ be a solution of \eqref{1.1} with \eqref{4-6-1},
then the error term $w_{\varepsilon}$ satisfies
\begin{equation}\label{a4-18-12}
\|w_{\varepsilon}\|=O\Big(\varepsilon \sum^m_{j=1}\eta_1(\lambda_{\varepsilon,j})\Big)
+O\Big(\sum^m_{j=1}\frac{1}{\lambda_{\varepsilon,j}^2}\Big).
\end{equation}
\end{Prop}
\begin{proof}
The estimate \eqref{a4-18-12} can be deduced by \eqref{luo1}, \eqref{3-13-01}, \eqref{2-16-1}, \eqref{3-13-02} and \eqref{dd3-13-02}.
\end{proof}
\begin{Prop}
For any fixed small $d>0$, it holds
\begin{equation}\label{2020-01-02-1}
\begin{split} \int_{B_d(x_{\varepsilon,j})}& \Big(( x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)u_\varepsilon^{2^*}
{\mathrm d}x \\
=&A \big(Q(a_j)\big)^{-\frac{N}{2}} \frac{\Delta Q(a_j) }{\lambda_{\varepsilon,j}^2}
+o\Big(\sum^m_{j=1}\frac{1}{\lambda_{\varepsilon,j}^2} \Big)+
O\Big(\sum^m_{j=1}\frac{1}{\lambda_{\varepsilon,j}^{N-2}}+\varepsilon \sum^m_{j=1}
\frac{\eta_1(\lambda_{\varepsilon,j})}{\lambda_{\varepsilon,j}} \Big),
\end{split}
\end{equation}
where $A$ is the constant in Lemma \ref{Lemma3.1}.
\end{Prop}
\begin{proof}
First, using \eqref{4-24-12}, we compute
\begin{equation*}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}&\Big(( x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)u_\varepsilon^{2^*}
{\mathrm d}x
\\ =& \int_{B_d(x_{\varepsilon,j})}
\Big(( x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)
\Big(\big(Q(a_j)\big)^{-\frac{N-2}{4}}PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}\Big)^{2^*}{\mathrm d}x
\\ &+O\left(\int_{B_d(x_{\varepsilon,j})} \Big( \big(Q(a_j)\big)^{-\frac{N-2}{4}}PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}\Big)^{2^*-1}|w_{\varepsilon}|\cdot |x-x_{\varepsilon,j}|{\mathrm d}x +
\int_{B_d(x_{\varepsilon,j})}|w_{\varepsilon}|^{2^*}{\mathrm d}x\right).
\end{split}
\end{equation*}
Also, by H\"older's inequality, we find
\begin{equation*}\begin{split}
\int_{B_d(x_{\varepsilon,j})} & \Big( \big(Q(a_j)\big)^{-\frac{N-2}{4}}PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}} \Big)^{2^*-1}
|w_{\varepsilon}|\cdot |x-x_{\varepsilon,j}| {\mathrm d}x
\\=&
O\Big(\big(\int_{B_d(x_{\varepsilon,j})}PU^{2^*}_{x_{\varepsilon,j},{\lambda_{\varepsilon,j}}}|x-x_{\varepsilon,j}|
^{\frac{2^*}{2^*-1}}{\mathrm d}x\big)^{\frac{2^*-1}{2^*}}\|w_{\varepsilon}\|_{L^{2^*}} \Big)
=
O\Big(\frac{1}{\lambda_{\varepsilon,j}}\|w_{\varepsilon}\| \Big).
\end{split}
\end{equation*}
Then from above estimates, it holds
\begin{equation*}
\begin{split}
&\int_{B_d(x_{\varepsilon,j})} \Big(( x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)u_\varepsilon^{2^*}
{\mathrm d}x \\
=& \int_{B_d(x_{\varepsilon,j})}
\Big(( x-x_{\varepsilon,j}) \cdot \nabla Q(x)\Big)
\Big(\big(Q(a_j)\big)^{-\frac{N-2}{4}} PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}\Big)^{ 2^*}{\mathrm d}x
+ O\Big(\frac{\|w_{\varepsilon}\|}{\lambda_{\varepsilon,j}}+\|w_{\varepsilon}\|^{2^*}\Big),
\end{split}
\end{equation*}
which, together with \eqref{3-13-07} and \eqref{a4-18-12}, gives \eqref{2020-01-02-1}.
\end{proof}
\begin{Lem}
It holds
\begin{equation}\label{2020-01-02-2}
\begin{split}
\displaystyle\int_{B_d(x_{\varepsilon,j})}u_\varepsilon^{s+1}{\mathrm d}x
=& \begin{cases}
\frac{1}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}}
\Big( \big(Q(a_j)\big)^{-\frac{(N-2)(s+1)}{4}} B
+o(1) \Big),&~\mbox{for}~N+s\neq 5,\\[8mm]
\frac{\omega_4}{Q(a_j)} \frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{2}}
+O(\frac{1}{\lambda_{\varepsilon,j}^{2}}),&~\mbox{for}~N+s=5,
\end{cases}
\end{split} \end{equation}
where $B$ and $\omega_4$ are the constants in Lemma \ref{Lemma3.1}.
\end{Lem}
\begin{proof}
First, using \eqref{4-24-12}, we have
\begin{equation}\label{Luo3.34}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}u_\varepsilon^{s+1}{\mathrm d}x
=&
\int_{B_d(x_{\varepsilon,j})} \Big( \big(Q(a_j)\big)^{-\frac{N-2}{4}} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)^{s+1} {\mathrm d}x
\\ &~~ +O\left(
\int_{B_d(x_{\varepsilon,j})}\Big(\big(Q(a_j)\big)^{-\frac{(N-2)s}{4}} PU^{s}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} |w_{\varepsilon}|+
|w_{\varepsilon}|^{s+1}
\Big){\mathrm d}x \right).
\end{split}
\end{equation}
Furthermore, by H\"older's inequality and \eqref{a4-18-12}, we find
\begin{equation}\label{Luo3.35}
\begin{split}
\int_{B_d(x_{\varepsilon,j})}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s}|w_{\varepsilon}|
{\mathrm d}x
=&
O\Big(\big(\int_{B_d(x_{\varepsilon,j})}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{2Ns}{N+2}}
{\mathrm d}x\big)^{\frac{N+2}{2N}}\|w_{\varepsilon}\| \Big)
=O\Big(\eta_1 \big(\lambda_{\varepsilon,j}\big)\|w_{\varepsilon}\| \Big).
\end{split}
\end{equation}
Similarly, we know
\begin{equation}\label{Luo3.36}
\int_{B_d(x_{\varepsilon,j})}|w_{\varepsilon}|^{s+1}{\mathrm d}x
=O\Big(\|w_{\varepsilon}\|^{s+1} \Big).
\end{equation}
Combining Lemma \ref{Lemma3.1}, Lemma \ref{lem1}, \eqref{3-13-08}, \eqref{Luo3.34}, \eqref{Luo3.35} and \eqref{Luo3.36}, we get
\eqref{2020-01-02-2}.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{th1-1}:}]
If $u_{\varepsilon}(x)$ is a solution of \eqref{1.1} with \eqref{4-6-1}, then
$u_{\varepsilon}(x)$ has the structure \eqref{luo--2}.
Also, the Pohozaev identity \eqref{clp-10} in the domain $\Omega'=B_d(x_{\varepsilon,j})$ yields
\begin{equation}\label{clp-11}
\begin{split}
\frac{1}{2^*}&\int_{ B_d(x_{\varepsilon,j}) }\Big( ( x-x_{\varepsilon,j}
) \cdot \nabla Q(x) \Big)u^{2^*}_\varepsilon {\mathrm d}x +(1-\frac{N}{2}+\frac{N}{1+s})\varepsilon\int_{ B_d(x_{\varepsilon,j}) } u^{s+1}_\varepsilon {\mathrm d}x
\\ =&
\int_{\partial B_d(x_{\varepsilon,j}) } \Big[\Big(\frac{Q(x)}{2^*}u^{2^*}_\varepsilon+\frac{\varepsilon}{{s+1}
} u_{\varepsilon}^{s+1}-\frac{1}{2}
|\nabla u_{\varepsilon}|^2 \Big) \Big( ( x-x_{\varepsilon,j} ) \cdot \nu\Big)
+
\Big( ( x-x_{\varepsilon,j})\cdot \nabla u_{\varepsilon}
+\frac{N-2}{2}
u_{\varepsilon} \Big)\frac{\partial u_{\varepsilon}}{\partial\nu} \Big]{\mathrm d}\sigma.
\end{split}
\end{equation}
Then from \eqref{ab4-18-12}, we can find
\begin{equation} \label{2020-01-02-3}
\mbox{RHS of}~\eqref{clp-11}= O\Big(\sum^m_{j=1}\frac{1}{\lambda_{\varepsilon,j}^{{N-2}}} \Big).
\end{equation}
Take $j_0\in \{1,\cdots,m\}$ such that $\frac{\lambda_{\varepsilon,j_0}}{\lambda_{\varepsilon,j}}\leq C$ for $j\neq j_0$ as $\varepsilon\to 0$, where $C$ is independent of $\varepsilon$.
If $N\geq 5$ and $s=1$, then \eqref{2020-01-02-1} gives
\begin{equation}\label{2020-01-02-4}
\begin{split}
\int_{B_d(x_{\varepsilon,j_0})}&\Big(( x-x_{\varepsilon,j_0 }) \cdot \nabla Q(x) \Big)u_\varepsilon^{2^*}
{\mathrm d}x
= \big(Q(a_{j_0})\big)^{-\frac{N}{2}} \frac{\Delta Q(a_{j_0}) }{\lambda_{\varepsilon,j_0}^2}
\Big( A+o(1)\Big),
\end{split} \end{equation} and from \eqref{2020-01-02-2}, we have
\begin{equation}\label{2020-01-02-5}
\displaystyle\int_{B_d(x_{\varepsilon,j_0})}u_\varepsilon^{s+1}{\mathrm d}x
=\big(Q(a_ {j_0} )\big)^{-\frac{(N-2)(s+1)}{4}} \frac{B+o(1)}{\lambda_{\varepsilon,j_0}^{2}}.
\end{equation} Then combining \eqref{clp-11}, \eqref{2020-01-02-3}, \eqref{2020-01-02-4} and \eqref{2020-01-02-5}, we can obtain
\begin{equation*}
\begin{split}
\Delta Q(a_{j_0})
= -\frac{B}{A} \big(Q(a_ {j_0} )\big)^{\frac{N}{2}-\frac{(N-2)(s+1)}{4}} \varepsilon
+o\big(1\big)=o\big(1\big),
\end{split}
\end{equation*}
which contradicts \eqref{4-24-1}. Hence, if $N\geq 5$, $s=1$ and $Q(x)$ satisfies Condition (Q), problem \eqref{1.1} has no solution $u_\varepsilon$ satisfying \eqref{4-6-1}.
\end{proof}
\section{Existence (Proof of Theorem \ref{th1.2})}
\setcounter{equation}{0}
\subsection{Computations concerning $PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}$}
\begin{Lem}
If $ N\geq 4 $, we have
\begin{equation}\label{3-13-21}
\begin{split}
\sum^m_{l=1} & \int_{\Omega} \big(Q(x)-Q(a_l) \big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial\lambda }
= - \frac{\Delta Q(a_j)(N-2)A}{N\lambda^3_{\varepsilon,j}} \big( 1+ o(1) \big)
+o\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-1}} \Big).
\end{split}
\end{equation}
\end{Lem}
\begin{proof}
First, by scaling transform and \eqref{4-24-11}, we have, for $l=j$,
\begin{equation*}
\begin{split}
\int_{\Omega}&\Big(Q(x)-Q(a_j) \Big) PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j }}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda } \\=&
\frac 1{2^*} \int_{\Omega} \Big(Q(x)-Q(a_j) \Big) \frac{\partial U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda } \\ &
+
O\Big(\frac{1}{\lambda_{\varepsilon,j}}\int_{\Omega} \big|Q(x)-Q(a_j)\big| \big( U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{N+2}{N-2}}
\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}+\varphi^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\big) \Big)\\ =&
\frac{\Delta Q(a_j)}{ 2^* N}\int_{\Omega} \big|x-x_{\e,j}\big|^2\frac{\partial U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda} \big( 1+ o(1) \big)
+
o\Big(\frac{1}{\lambda_{\varepsilon,j}^{N-1}} \Big)\\=&
- \frac{\Delta Q(a_j)(N-2)A}{N\lambda^3_{\varepsilon,j}} \big( 1+ o(1) \big)+ o\Big(\frac{1}{\lambda_{\varepsilon,j}^{N-1}} \Big).
\end{split}
\end{equation*}
On the other hand, for $l\neq j$, we find
\begin{equation*}
\begin{split}
\int_{\Omega}&\big(Q(x)-Q(a_l) \big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\\ =&
\left\{ \int_{B_d(x_{\e,j})\bigcup B_d(x_{\e,l}) } + \int_{
\Big( \Omega\backslash \big(B_d(x_{\e,j}) \bigcup B_d(x_{\e,l})\big) \Big)} \right\} \big(Q(x)-Q(a_l) \big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\\ =&
O\left(\frac{1}{\lambda_{\varepsilon,l}^{\frac{N+2}{2}}}\int_{B_d(x_{\e,j})} \Big| \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\Big|
+\frac{1}{\lambda_{\varepsilon,j}^{\frac{N}{2}}} \int_{B_d(x_{\e,l}) } \Big|Q(x)-Q(a_l)\Big| PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} + \frac{1}{\lambda_{\varepsilon,l}^{\frac{N+2}{2}}\lambda_{\varepsilon,j}^{\frac{N}{2}}} \right)\\ =&
o\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-1}} \Big).
\end{split}
\end{equation*}
Then \eqref{3-13-21} can be deduced from the above estimates.
\end{proof}
\begin{Lem}
It holds
\begin{equation}\label{3-13-31}
\begin{split}
\sum^m_{l=1} \int_{\Omega}&\Big(Q(x)-Q(a_l) \Big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
\\ =&
\Big(2\int_{\R^N} \frac{|x|^2}{(1+|x|^2)^{N+1}}{\mathrm d}x+o(1)\Big)
\sum^N_{l=1}\frac{\partial^2Q(a_j)}{\partial x^i\partial x^l}\big({x_{\e,j}^l} -{a_j^l} \big) +
O\Big(\sum^m_{l=1}\frac{\log \lambda_{\e,l}}{\lambda^{N-1}_{\e,l}} \Big).
\end{split}
\end{equation}
\end{Lem}
\begin{proof}
First, by scaling transform and \eqref{4-24-11}, we have
\begin{equation*}
\begin{split}
\int_{\Omega}&\big(Q(x)-Q(a_j) \big) PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
\\ =&
\int_{\Omega} \big(Q(x)-Q(a_j) \big) \frac{\partial U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
+
O\Big( \lambda_{\varepsilon,j}\int_{\Omega} \big|Q(x)-Q(a_j)\big| \big( U_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{N+2}{N-2}}
\varphi_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}+
\varphi^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\big) \Big)\\ =&
\frac{1}{2}\int_{\Omega} \sum^N_{l,q=1}\frac{\partial^2Q(a_j)}{\partial x^q\partial x^l}\big(x^l-{a_j^l}\big)\big(x^q-{a_{j}^q} \big)\frac{\partial U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}+
O\Big(\int_{\Omega} \big|x -a_{j}\big|^2 \Big|\frac{\partial U^{2^*}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\Big| \Big)+
O\Big(\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{N-1} } \Big)
\\ =&
\Big(2\int_{\R^N} \frac{|x|^2}{(1+|x|^2)^{N+1}}{\mathrm d}x+o(1) \Big)
\sum^N_{l=1}\frac{\partial^2Q(a_j)}{\partial x^i\partial x^l}\Big({x_{\e,j}^l} -{a_j^l} \Big) +
O\Big(\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{N-1} } \Big).
\end{split}
\end{equation*}
And for $l\neq j$, we find
\begin{equation*}
\begin{split}
\int_{\Omega}&\Big(Q(x)-Q(a_l) \Big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\\ =&
\left\{\int_{B_d(x_{\e,j})\bigcup B_d(x_{\e,l})}+\int_{
\Omega\backslash \big(B_d(x_{\e,j}) \bigcup B_d(x_{\e,l})\big) }\right\} \Big(Q(x)-Q(a_l) \Big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\\ =&
O\left(\frac{1}{\lambda_{\varepsilon,l}^{\frac{N+2}{2}}}\int_{B_d(x_{\e,j})} \Big| \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\Big|
+\frac{1}{\lambda_{\varepsilon,j}^{\frac{N}{2}-2}} \int_{B_d(x_{\e,l}) } \Big|Q(x)-Q(a_l)\Big| PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} + \frac{1}{\lambda_{\varepsilon,l}^{\frac{N+2}{2}}\lambda_{\varepsilon,j}^{\frac{N-4}{2}}} \right)\\ =&
O\Big(\sum^m_{l=1}\frac{\log \lambda_{\varepsilon,l}}{\lambda_{\varepsilon,l}^{N-1} } \Big).
\end{split}
\end{equation*}
Then \eqref{3-13-31} follows from the above estimates.
\end{proof}
\begin{Lem}
Suppose $N\ge 4$. Then it holds that
\begin{equation}\label{3-13-51}
\begin{split}
\sum^m_{l=1}\int_{\Omega}
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{s}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}
=& \begin{cases}
- \frac{4-(N-2)(s-1)}{2(s+1)}\Big(B+o(1)\Big)
\frac{1}{\lambda_{\varepsilon,j}^{3-\frac{N-2}{2}(s-1)}},&~\mbox{for}~N+s\neq 5,\\
- \omega_4\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{3}}
+O\Big(\frac{1}{\lambda_{\varepsilon,j}^{3}}\Big),&~\mbox{for}~N+s=5,
\end{cases}
\end{split}
\end{equation}
where $B$ and $\omega_4$ are the constants in Lemma \ref{Lemma3.1}.
Moreover, it holds that
\begin{equation}\label{3-13-41}
\begin{split}
\sum^m_{l=1}\int_{\Omega}
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{s}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
=& O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-2}} \Big). \end{split}
\end{equation}
\end{Lem}
\begin{proof}[\underline{\textbf{Proof of \eqref{3-13-51}}}]
First,
for $l\neq j$, we have
\begin{equation}\label{a3-13-42}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{s}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {\lambda}}
\\=& \left\{\int_{B_d(x_{\e,j})}+\int_{ B_d(x_{\e,l})}+\int_{
\Omega\backslash \big(B_d(x_{\e,j}) \bigcup B_d(x_{\e,l})\big) }\right\} PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{s} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {\lambda}}
\\ =&
O\Big(\int_{B_d(x_{\e,j})}\big(\frac{\lambda_{\varepsilon,l}}{1+\lambda_{\varepsilon,l}^2|y -x_{\varepsilon,l}|^2}\big)^{\frac{(N-2)s}{2}}\frac{\lambda_{\varepsilon,j}^{\frac{N}{2}}|y -x_{\varepsilon,j}|^2}{\big(1+\lambda_{\varepsilon,j}^2|y-x_{\varepsilon,j}|^2\big)^{\frac{N}2}}\Big) \\
&+O\Big(\int_{B_d(x_{\e,l})}\big(\frac{\lambda_{\varepsilon,l}}{1+\lambda_{\varepsilon,l}^2|y-x_{\varepsilon,l} |^2}\big)^{\frac{(N-2)s}{2}}\frac{\lambda_{\varepsilon,j}^{\frac{N}{2}}|y-x_{\varepsilon,j}|^2}{\big(1+ \lambda_{\varepsilon,j}^2|y-x_{\varepsilon,j}|^2\big)^{\frac N2}}\Big)
\\&+O\Big(\int_{ \Omega\backslash \big(B_d(x_{\e,j}) \bigcup B_d(x_{\e,l})\big) }\big(\frac{\lambda_{\varepsilon,l}}{1+\lambda_{\varepsilon,l}^2|y -x_{\varepsilon,l}|^2}\big)^{\frac{(N-2)s}{2}}\frac{\lambda_{\varepsilon,j}^{\frac{N}{2}}|y -x_{\varepsilon,j}|^2}{\big(1+\lambda_{\varepsilon,j}^2|y-x_{\varepsilon,j}|^2\big)^{\frac N2}}\Big)
\\=&
O\Big(\frac{1}{\lambda_{\e,l}^{\frac{(N-2)s}{2}}\lambda_{\e,j}^{\frac{N}{2}}}\Big)
+O\Big(\frac{\eta(\lambda_{\e,l})}{\lambda_{\e,j}^{\frac{N}{2}}}\Big)
= O\Big( \sum^m_{l=1}\frac{1}{ \lambda_{\varepsilon,l}^{N-1}} \Big),
\end{split}
\end{equation}
where $\eta(\lambda)$ is the function in \eqref{ll1}.
On the other hand, we find
\begin{equation}\label{a3-13-43}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {\lambda}} \\=&\frac{1}{s+1} \frac{\partial }{\partial \lambda}
\Big(\frac{1}{\lambda_{\varepsilon,j}^{2-\frac{N-2}{2}(s-1)}} \int_{B_{d\lambda_{\varepsilon,j}}(0)} U^{s+1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}} \Big)
\\=&\begin{cases}
- \frac{4-(N-2)(s-1)}{2(s+1)}\Big(B+o(1)\Big)
\frac{1}{\lambda_{\varepsilon,j}^{3-\frac{N-2}{2}(s-1)}},&~\mbox{for}~N+s\neq 5,\\
-\omega_4\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{3}}
+O\Big(\frac{1}{\lambda_{\varepsilon,j}^{3}}\Big),&~\mbox{for}~N+s=5.
\end{cases}
\end{split}
\end{equation}
Then \eqref{3-13-51} follows from \eqref{a3-13-42} and \eqref{a3-13-43}.
\end{proof}
\begin{proof}[\underline{\textbf{Proof of \eqref{3-13-41}}}] First,
for $l\neq j$, we have
\begin{equation}\label{3-13-42}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{s}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
\\=& \left\{\int_{B_d(x_{\e,j})}+\int_{ B_d(x_{\e,l})}+\int_{
\Omega\backslash \big(B_d(x_{\e,j}) \bigcup B_d(x_{\e,l})\big) }\right\} PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{s} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
\\ =&
O\Big(\int_{B_d(x_{\e,j})}\big(\frac{\lambda_{\varepsilon,l}}{1+\lambda_{\varepsilon,l}^2|y -x_{\varepsilon,l}|^2}\big)^{\frac{(N-2)s}{2}}\frac{\lambda_{\varepsilon,j}^{\frac{N+2}{2}}|y -x_{\varepsilon,j}|}{\big(1+\lambda_{\varepsilon,j}^2|y-x_{\varepsilon,j}|^2\big)^{\frac{N}2}}\Big) \\
&+O\Big(\int_{B_d(x_{\e,l})}\big(\frac{\lambda_{\varepsilon,l}}{1+\lambda_{\varepsilon,l}^2|y-x_{\varepsilon,l} |^2}\big)^{\frac{(N-2)s}{2}}\frac{\lambda_{\varepsilon,j}^{\frac{N+2}{2}}|y-x_{\varepsilon,j}|}{\big(1+ \lambda_{\varepsilon,j}^2|y-x_{\varepsilon,j}|^2\big)^{\frac N2}}\Big)
\\&+O\Big(\int_{ \Omega\backslash \big(B_d(x_{\e,j}) \bigcup B_d(x_{\e,l})\big) }\big(\frac{\lambda_{\varepsilon,l}}{1+\lambda_{\varepsilon,l}^2|y -x_{\varepsilon,l}|^2}\big)^{\frac{(N-2)s}{2}}\frac{\lambda_{\varepsilon,j}^{\frac{N+2}{2}}|y -x_{\varepsilon,j}|}{\big(1+\lambda_{\varepsilon,j}^2|y-x_{\varepsilon,j}|^2\big)^{\frac N2}}\Big)
\\=&
O\Big(\frac{1}{\lambda_{\e,l}^{\frac{(N-2)s}{2}}\lambda_{\e,j}^{\frac{N-2}{2}}}\Big)
+O\Big(\frac{\eta(\lambda_{\e,l})}{\lambda_{\e,j}^{\frac{N-2}{2}}}\Big)
= O\Big( \sum^m_{l=1}\frac{1}{ \lambda_{\varepsilon,l}^{N-2}} \Big),
\end{split}
\end{equation}
where $\eta(\lambda)$ is the function in \eqref{ll1}.
On the other hand, since $U_{0,1}(x)$ is an even function, we find
\begin{equation}\label{3-13-43}
\begin{split}
\int_{\Omega}
& PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}=\frac{1}{s+1}
\int_{\partial \Omega} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s+1} \nu^i
=O\Big( \frac{1}{\lambda_{\varepsilon,j}^{(N-2)(s+1)/2}} \Big).
\end{split}
\end{equation}
Then \eqref{3-13-41} follows from \eqref{3-13-42} and \eqref{3-13-43}.
\end{proof}
\begin{Lem}
Suppose $N\ge 4$. Then we have the following estimates:
\begin{align}\label{4-26-2}
\sum_{l\neq j} \int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\Big|\frac{\partial PU^{2^* -1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\Big|= O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-1}} \Big),
\end{align}
\begin{align}\label{a4-26-2}
\sum_{l\neq j} \int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\Big|\frac{\partial PU^{s}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\Big|=O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-1}} \Big),
\end{align}
\begin{equation}\label{aa4-26-2}
\begin{split}
\sum_{l\neq j} \int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\Big|\frac{\partial PU^{\frac{N+2}{N-2}}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
{x^i}}\Big|= O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-3}} \Big),
\end{split}
\end{equation}
and
\begin{equation}\label{ab4-26-2}
\begin{split}
\sum_{l\neq j} \int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\Big|\frac{\partial PU^{s}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\Big|= O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-3}} \Big).
\end{split}
\end{equation}
\end{Lem}
\begin{proof}
First, we have
\begin{equation}\label{3-14-01}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\Big|\frac{\partial PU^{ 2^* -1}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\Big| \\ =&
O\Big( \int_{\Omega}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\cdot \frac{\lambda_{\e,j}^{\frac{N+4}{2}} |x-x_{\e,j}|^2}{\big(1+\lambda_{\e,j}^2|x-x_{\e,j}|^2\big)^{\frac{N+4}{2}}}
\Big)\\ =&
O\left( \int_{B_d(x_{\e,j})}\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}}
\frac{\lambda_{\e,j}^{\frac{N+4}{2}}|x-x_{\e,j}|^2}{\big(1+\lambda_{\e,j}^2
|x-x_{\e,j}|^2\big)^{\frac{N+4}{2}}}
\right)+O\Big(\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}^{\frac{N+4}{2}}} \Big)\\ &
+O\left(
\frac{1}{ \lambda_{\e,j}^{\frac{N+4}{2}}} \int_{B_d(x_{\e,l})}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\right)
\\
& = O\Big(\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}^{\frac{N}{2}}} \Big) + O\Big(\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}^{\frac{N+4}{2}}} \Big) =
O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-1}} \Big).
\end{split}
\end{equation}
Similarly to \eqref{3-14-01}, we find
\begin{equation*}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}\Big|
\frac{\partial PU^{s}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda }\Big|\\ =&
O\Big( \int_{\Omega}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\cdot \frac{\lambda_{\e,j}^{\frac{(N-2)s}{2}+1}}{\big(1+\lambda_{\e,j}^2|x-x_{\e,j}|^2\big)^{\frac{(N-2)s}{2}+1}}
|x-x_{\e,j}|^2
\Big)\\
=&O\left( \frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}}\eta_2(\lambda_{\e,j})
+
\frac{1}{ \lambda_{\e,j}^{\frac{(N-2)s}{2}+1}\lambda_{\e,l}^{\frac{N-2}{2}}} +\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}} \lambda_{\e,j}^{\frac{(N-2)s}{2}+1}}\right)
\\
=&O\left( \sum^m_{l=1}\frac{\eta(\lambda_{\e,l})}{ \lambda_{\e,l}^{\frac{N}{2}}}
+ \sum^m_{l=1}\frac{1}{\lambda_{\e,l}^{\frac{(N-2)(s+1)}{2}+1}} \right) =
O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-1}} \Big),
\end{split}
\end{equation*}
where $\eta(\lambda)$ is the function in \eqref{ll1}.
Also, we compute that
\begin{equation*}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\Big|\frac{\partial PU^{\frac{N+2}{N-2}}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
{x^i}}\Big| \\ =&
O\Big( \int_{\Omega}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\cdot \frac{\lambda_{\e,j}^{\frac{N+6}{2}}}{\big(1+\lambda_{\e,j}^2|x-x_{\e,j}|^2\big)^{\frac{N+4}{2}}}
|x-x_{\e,j}|
\Big) \\ =&
O\left( \int_{B_d(x_{\e,j})}\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}}
\frac{\lambda_{\e,j}^{\frac{N+6}{2}}|x-x_{\e,j}|}{\big(1+\lambda_{\e,j}^2
|x-x_{\e,j}|^2\big)^{\frac{N+4}{2}}}
\right)
+O\big(\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}^{\frac{N+2}{2}}} \big) \\ &
+ O\left(
\frac{1}{ \lambda_{\e,j}^{\frac{N+2}{2}}} \int_{B_d(x_{\e,l})}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\right) = O\big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-3}} \big).
\end{split}
\end{equation*}
Finally, it follows that
\begin{equation*}
\begin{split}
\int_{\Omega}&
PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
\frac{\partial PU^{s}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\\ =&
O\Big( \int_{\Omega}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\cdot \frac{\lambda_{\e,j}^{\frac{(N-2)s}{2}+2}}{\big(1+\lambda_{\e,j}^2|x-x_{\e,j}|^2\big)^{\frac{(N-2)s}{2}+1}}
|x-x_{\e,j}|
\Big)\\ =&
O\Big( \int_{B_d(x_{\e,j})}\frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}}
\frac{\lambda_{\e,j}^{\frac{(N-2)s}{2}+2}|x-x_{\e,j}|}{\big(1+\lambda_{\e,j}^2
|x-x_{\e,j}|^2\big)^{\frac{(N-2)s}{2}+1}}
\Big)
+O\Big( \frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}^{\frac{(N-2)s}{2}}} \Big)\\ &
+O\Big(
\frac{1}{ \lambda_{\e,j}^{\frac{(N-2)s}{2}}} \int_{B_d(x_{\e,l})}
\frac{\lambda_{\e,l}^{\frac{N-2}{2}}}{\big(1+\lambda_{\e,l}^2|x-x_{\e,l}|^2\big)^{\frac{N-2}{2}}}
\Big)
\\
=& O\Big( \frac{ \lambda_{\e,j}}{ \lambda_{\e,l}^{\frac{N-2}{2} } } \eta(\lambda_{\e,j}) + \frac{1}{ \lambda_{\e,l}^{\frac{N-2}{2}}\lambda_{\e,j}^{\frac{(N-2)s}{2}}} \Big) = O\Big(\sum^m_{l=1}\frac{1}{\lambda_{\varepsilon,l}^{N-3}} \Big),
\end{split}
\end{equation*}
where $\eta(\lambda)$ is the function in \eqref{ll1}.
\end{proof}
\vskip 0.2cm
\subsection{Existence}~
\vskip 0.1cm
Now we recall
\begin{equation*}
\begin{split}
{\mathbb D}_{\varepsilon}=\Big\{(x_\varepsilon,\lambda_\varepsilon)| &~ x_\varepsilon=(x_{\varepsilon,1},\cdots,x_{\varepsilon,m}), ~~\lambda_\varepsilon=
(\lambda_{\varepsilon,1},\cdots,\lambda_{\varepsilon,m}), \\& ~|x_{\varepsilon,j}-a_j|=o(1),~ \frac{\lambda_{\varepsilon,j}}{\lambda_{\varepsilon,l}}\leq C~\mbox{and}~\lambda_{\varepsilon,j}\rightarrow +\infty, ~j,l=1,\cdots,m\Big\},
\end{split}\end{equation*}
and ${\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}:=\displaystyle\bigcap^m_{j=1}{\mathbb E}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}$ with
\begin{equation*}
\begin{split}
{\mathbb E}_{{x_{\varepsilon,j}},{\lambda_{\varepsilon,j}}}=\left\{v\in H^1_0(\Omega)\big|~~ \Big\langle \frac{\partial PU_{{x_{\varepsilon,j }},{\lambda_{\varepsilon,j}} }}{\partial {\lambda} },v \Big\rangle=\Big\langle \frac{\partial PU_{{x_{\varepsilon,j}}, {\lambda_{\varepsilon,j}} }}{\partial {x^i }},v\Big\rangle=0,~\mbox{for}~i=1,\cdots,N\right\}.
\end{split}
\end{equation*}
Now for $(x_\varepsilon,\lambda_\varepsilon)\in {\mathbb D}_{\varepsilon}$, we consider the following equation on ${\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}$:
\begin{equation}\label{aa2-16-1}
{\bf Q}_\varepsilon v_\varepsilon={\bf f}_\varepsilon+{\bf R}_\varepsilon(v_\varepsilon),
\end{equation}
where ${\bf Q}_\varepsilon, {\bf f}_\varepsilon, {\bf R}_\varepsilon$ are defined in \eqref{Qe}, \eqref{fe}, \eqref{Re}.
\begin{Prop}\label{Prop-Luo1}
For $(x_\varepsilon,\lambda_\varepsilon)\in {\mathbb D}_{\varepsilon}$ and
$\varepsilon\in (0,\varepsilon_0)$ with $\varepsilon_0$ a fixed small constant, there exists $v_\varepsilon(x_\varepsilon,\lambda_\varepsilon)\in {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}$ satisfying \eqref{aa2-16-1}.
Furthermore $v_\varepsilon$ satisfies
\begin{equation}\label{a-4-18-12}
\|v_{\varepsilon}\|=O\Big(\varepsilon \eta_1(\widetilde{\lambda}_\varepsilon)+ \frac{1}{\widetilde{\lambda}^2_\varepsilon}+\sum^m_{j=1} |x_{\e,j}-a_j|^2 \Big),
\end{equation}
where $\widetilde{\lambda}_\varepsilon:=\min \big\{\lambda_{\varepsilon,1},\cdots,\lambda_{\varepsilon,m}\big\}$.
\end{Prop}
\begin{proof}
First, from Proposition \ref{Prop-Luo2}, ${\bf Q}_\e$ is invertible on ${\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}$ and $\|{\bf Q}_\e^{-1}\|\leq C$ for some $C>0$ independent of $x_\varepsilon,\lambda_\varepsilon,\varepsilon$.
Furthermore, by the Riesz representation theorem, this equation can be rewritten on $ {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}$ in the operator form
\[ v_\e = {\bf Q}_\e^{-1} \Big[ {\bf f}_\varepsilon+{\bf R}_\varepsilon(v_\varepsilon) \Big] : = {\bf A } ( v_\e) \quad \text{for} ~ v_\e \in {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}. \]
For small fixed $\tau>0$, we define
\[ \mathcal{S} = \Bigg\{ v_\e \in {\mathbb E}_{x_\varepsilon,\lambda_\varepsilon}\Big| ~ \|v_{\varepsilon}\|\leq \varepsilon^{1-\tau} \eta_1(\widetilde{\lambda}_\varepsilon)+ \frac{1}{\widetilde{\lambda}^{2-\tau}_\varepsilon}+\sum^m_{j=1} |x_{\e,j}-a_j|^{2-\tau} \Bigg\}.
\]
Also we have the following estimate on ${\bf R}_\varepsilon(v_\varepsilon)$:
\begin{equation} \label{4.23}
\int_{\Omega} {\bf R}_\varepsilon(v_\varepsilon)v_\varepsilon= O\Big(\|v_\varepsilon\|^{\min\{3,2^*\}}+\|v_\varepsilon\|^{s+1} \Big)
+O\left( \e \int_\Omega\Big(\sum^m_{j=1}PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}
\Big)^{s}v_\e {\mathrm d}x \right).
\end{equation}
Combining \eqref{4.23} and the results of Lemma \ref{lem1}, we can prove that the operator ${\bf A } $ is a contraction mapping from $\mathcal{S}$ to itself.
Then by the contraction mapping theorem, there exists $v_\varepsilon$ satisfying \eqref{aa2-16-1} and \eqref{a-4-18-12}.
\end{proof}
Let $u_\varepsilon=\displaystyle\sum^m_{j=1} Q(a_j) ^{ - \frac {N-2} {4 } } PU_{x_{\varepsilon,j},
\lambda_{\varepsilon,j}}+ v_{\varepsilon}$, then \eqref{2-16-1} gives us
\begin{equation*}
-\Delta u_\varepsilon= Q(x) u_\varepsilon^{\frac{N+2}{N-2}}+\varepsilon u^{s}_{\varepsilon}+\sum^m_{j=1}\sum^N_{i=0}c_{\varepsilon,i,j}
\varphi_{ij}(x),
\end{equation*}
with $\varphi_{0j}(x)=\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}$ and $\varphi_{ij}(x)=\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial x^i}$ for $j=1,\cdots,m$ and $i=1,\cdots,N$.
\begin{Prop}\label{Prop1.2} If $(x_\varepsilon,\lambda_\varepsilon)\in {\mathbb D}_{\varepsilon}$, then \eqref{a22-1-8} is equivalent to
\begin{equation}\label{2020-01-02-11}
\frac{1}{\lambda^3_{\varepsilon,j}}=o\Big(
\frac{ |x_{\varepsilon,j}-a_j|^2 + \varepsilon \eta_1(\widetilde{\lambda}_\varepsilon) }{\widetilde{\lambda}_\e }
\Big) - \begin{cases}
\Big(\frac{N(4-(N-2)(s-1))B}{2A(N-2)\Delta Q(a_j)(s+1)}+o(1)\Big)
\frac{\varepsilon}{\lambda_{\varepsilon,j}^{3-\frac{N-2}{2}(s-1)}},&~\mbox{for}~N+s\neq 5,\\
\Big(\frac{2\omega_4}{A\Delta Q(a_j)}+o(1)\Big)\frac{\varepsilon \log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{3}},&~\mbox{for}~N+s=5.
\end{cases}
\end{equation}
and \eqref{a2-13-1} is equivalent to
\begin{equation}\label{2020-01-02-12}
\sum^N_{l=1}\frac{\partial^2Q(a_j)}{\partial x^i\partial x^l}\Big({x_{\e,j}^l} -{a_j^l} \Big)=
O\Big(\frac{1}{\widetilde{\lambda}_\e^{N-3}} \Big)+o\Big( \sum^m_{q=1}|x_{\e,q}-a_q|\Big).
\end{equation}
\end{Prop}
\begin{proof}
\textbf{Step 1.} Now we compute \eqref{a22-1-8} term by term.
\begin{equation}\label{4-26-5}
\begin{split}
\int_{\Omega}&\Big(\Delta u_\varepsilon+
Q(x) u_\varepsilon^{\frac{N+2}{N-2}} \Big)\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\\ =&
\sum^m_{l=1} Q(a_l) ^{ - \frac {N+2} {4 } } \int_{\Omega}
\Big(Q(x)- Q(a_l) \Big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }+ \int_{\Omega}
\Delta v_\e\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\\ &+
\int_{\Omega}
Q(x) \Big( u_\varepsilon^{\frac{N+2}{N-2}} - \sum^m_{l=1} Q(a_l) ^{ - \frac {N+2} {4 } } PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }.
\end{split}\end{equation}
Also by orthogonality, we know
\begin{equation}\label{4-26-6}
\begin{split}
\int_{\Omega}
\Delta v_\e\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }
=-\int_{\Omega}
\nabla v_\e\frac{\partial \nabla PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }=0.
\end{split}
\end{equation}
And from \eqref{3-14-11}, we have
\begin{equation} \label{4-24-22}
\begin{split}
u_\varepsilon&^{\frac{N+2}{N-2}}= Q(a_j) ^{ - \frac {N+2} {4 } } PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{N+2}{N-2}}
+ \frac{N+2}{N-2} Q(a_j) ^{-1} PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{4}{N-2}} \big(\sum_{l\neq j} Q(a_l) ^{ - \frac {N-2} {4 } } PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
+v_\e \big) \\ &
+ \begin{cases}
O\Big(\big(\displaystyle\sum_{l\neq j} PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}+v_\e \big)^{\frac{N+2}{N-2}} \Big),~&\mbox{for}~N\geq 6, \\[6mm]
O\Big(PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{6-N}{N-2}} \big(\displaystyle\sum_{l\neq j}PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}+v_\e \big)^{2} +\big(\displaystyle\sum_{l\neq j} PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}+v_\e \big)^{\frac{N+2}{N-2}} \Big),~&\mbox{for}~N<6.
\end{cases}
\end{split}\end{equation}
Also by orthogonality, we get
\begin{equation}\label{4-26-1}
\begin{split}
\int_{\Omega}&
Q(x) v_\e
\frac{\partial PU^{\frac{N+2}{N-2}}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }\\=&\frac {N+2}{N-2}
\int_{\Omega}\big(Q(x)-Q(a_j)\big)
v_\e PU^{\frac{ 4}{N-2}}_{x_{\varepsilon,j},\lambda_{\varepsilon,j} }
\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda} \\ =& O\left( \Big(\int_{\Omega} \big(Q(x)-Q(a_j)\big)^{\frac{2N}{N+2} }PU^{\frac{4}{N-2}\frac{2N}{N+2}}_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}
\big( \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda}\big)^\frac{2N}{N+2}\Big)^{\frac {N+2}{2N}}\| v_\e\|\right)\\=
&O\Big(\frac 1 {\lambda_{\varepsilon,j}}\Big(\int_{\Omega}|x-x_{\varepsilon,j}|^{\frac{4N}{N+2}}
PU^{\frac{ 2N}{N-2}} _{x_{\varepsilon,j},\lambda_{\varepsilon,j}}\Big)^{\frac{N+2}{2N}}\|v_\e\|\Big) \\=&
o\Big(\frac{1}{\widetilde{\lambda}_\e^{3}}
\Big)+o\Big(
\frac{ |x_{\varepsilon,j}-a_j|^2 + \varepsilon \eta_1(\widetilde{\lambda}_\varepsilon) }{\widetilde{\lambda}_\e }
\Big).
\end{split}
\end{equation}
So using \eqref{4-26-2}, \eqref{4-24-22} and \eqref{4-26-1}, we get
\begin{equation}\label{4-26-7}
\begin{split}
\int_{\Omega}
Q(x) \left( u_\varepsilon^{\frac{N+2}{N-2}} - \sum^m_{l=1} Q(a_l) ^{ - \frac {N+2} {4 } } PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \right) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda }
=o\Big(\frac{1}{\widetilde{\lambda}_\e^{3}}
\Big)+o\Big(
\frac{ |x_{\varepsilon,j}-a_j|^2 + \e \eta_1(\widetilde{\lambda}_\varepsilon) }{\widetilde{\lambda}_\e }
\Big).
\end{split}\end{equation}
Hence from \eqref{3-13-21}, \eqref{4-26-5}, \eqref{4-26-6} and \eqref{4-26-7}, we find
\begin{equation}\label{3-14-26}
\begin{split}
\int_{\Omega}&\Big(\Delta u_\varepsilon+
Q(x) u_\varepsilon^{\frac{N+2}{N-2}} \Big)\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda } \\ =& - \frac{\Delta Q(a_j)(N-2)A}{N\lambda^3_{\varepsilon,j}} \big( 1+ o(1) \big)+o\Big(\frac{1}{\widetilde{\lambda}_\e^{3}}
\Big)+o\Big(
\frac{ |x_{\varepsilon,j}-a_j|^2 + \e \eta_1(\widetilde{\lambda}_\varepsilon) }{\widetilde{\lambda}_\e }
\Big).
\end{split}\end{equation}
On the other hand,
\begin{equation}\label{4-26-11}
\begin{split}
\int_{\Omega}&u^{s}_{\varepsilon}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}=
\int_{\Omega}
PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}+
\int_{\Omega}
\Big( u_\varepsilon^{s} - PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}.
\end{split}\end{equation}
And we know
\begin{equation}\label{4-26-12}
\begin{split}
u_\varepsilon^{s}-PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s}
=O\left(PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s-1} \Big(\sum_{l\neq j} PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
+v_\e \Big)+
\Big(\displaystyle\sum_{l\neq j} PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}+v_\e \Big)^{s}\right).
\end{split}\end{equation}
Also
we find
\begin{equation}\label{4-26-13}
\begin{split}
\int_{\Omega}\Big|v_{\varepsilon}\frac{\partial PU^s_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}\Big|
+ \int_{\Omega}\Big|v^s_{\varepsilon}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial \lambda}\Big|= O\Big(\frac{\eta_1({\lambda}_{\varepsilon,j})}{\lambda_{\e,j}} \Big) \|v_\e\|.
\end{split}\end{equation}
Then from \eqref{3-13-51}, \eqref{a4-26-2}, \eqref{4-26-11}, \eqref{4-26-12} and \eqref{4-26-13}, we get
\begin{equation}\label{3-14-27}
\begin{split}
\int_{\Omega}
u^{s}_{\varepsilon}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial
\lambda}{\mathrm d}x
= \begin{cases}
- \frac{4-(N-2)(s-1)}{2(s+1)}\Big(B+o(1)\Big)
\frac{1}{\lambda_{\varepsilon,j}^{3-\frac{N-2}{2}(s-1)}},&~\mbox{for}~N+s\neq 5,\\
-\omega_4\frac{\log \lambda_{\varepsilon,j}}{\lambda_{\varepsilon,j}^{3}}
+O\Big(\frac{1}{\lambda_{\varepsilon,j}^{3}}\Big),&~\mbox{for}~N+s=5.
\end{cases} \end{split}
\end{equation}
Hence \eqref{2020-01-02-11} follows from \eqref{3-14-26} and \eqref{3-14-27}.
\vskip 0.2cm
\noindent \textbf{Step 2.}
Now we compute \eqref{a2-13-1} term by term.
\begin{equation}\label{4-26-21}
\begin{split}
\int_{\Omega}&\Big(\Delta u_\varepsilon+
Q(x) u_\varepsilon^{\frac{N+2}{N-2}} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\\ =&
\sum^m_{l=1} Q(a_l) ^{ - \frac {N+2} {4 } } \int_{\Omega}
\Big(Q(x)-Q(a_l) \Big) PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}+ \int_{\Omega}
\Delta v_\e\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\\ &+
\int_{\Omega}
Q(x) \Big( u_\varepsilon^{\frac{N+2}{N-2}} - \sum^m_{l=1} Q(a_l) ^{ - \frac {N+2} {4 } } PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}.
\end{split}\end{equation}
Also by orthogonality, it holds
\begin{equation}\label{4-26-22}
\begin{split}
\int_{\Omega}
\Delta v_\e\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}
=-\int_{\Omega}
\nabla v_\e\frac{\partial \nabla PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}=0.
\end{split}
\end{equation}
From \eqref{aa4-26-2}, \eqref{a-4-18-12} and \eqref{4-24-22}, we know that
\begin{equation}\label{4-26-23}
\begin{split}
\int_{\Omega}
& Q(x) \Big( u_\varepsilon^{\frac{N+2}{N-2}} - \sum^m_{l=1} Q(a_l) ^{ - \frac {N+2} {4 } } PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}^{\frac{N+2}{N-2}} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}} \\ & =
\frac{N+2}{N-2} Q(a_j) ^{-1} \int_{\Omega}
Q(x) PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{\frac{4}{N-2}} \Big(\sum_{l\neq j} Q(a_l) ^{ - \frac {N-2} {4 } } PU_{x_{\varepsilon,l},\lambda_{\varepsilon,l}}
+v_\e \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}} \\ &
=O\Big( \frac{1}{\widetilde{\lambda}_\e^{N-3}} \Big)+
o\Big( \sum^m_{q=1}|x_{\e,q}-a_q|\Big). \end{split}
\end{equation}
Hence by \eqref{3-13-31}, \eqref{4-26-21}, \eqref{4-26-22} and \eqref{4-26-23}, we find
\begin{equation}\label{3-14-28}
\begin{split}
\int_{\Omega}&\Big(\Delta u_\varepsilon+
Q(x) u_\varepsilon^{\frac{N+2}{N-2}} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\\ =&
\Big(2\int_{\R^N} \frac{|x|^2}{(1+|x|^2)^{N+1}}{\mathrm d}x +o(1)\Big)
\sum^N_{l=1}\frac{\partial^2Q(a_j)}{\partial x^i\partial x^l}\Big({x_{\e,j}^l} -{a_j^l} \Big) +
O\Big( \frac{1}{\widetilde{\lambda}_\e^{N-3}} \Big).
\end{split}\end{equation}
On the other hand,
\begin{equation}\label{4-26-31}
\begin{split}
\int_{\Omega}&u^{s}_{\varepsilon}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}=\int_{\Omega}
PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s} \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}+
\int_{\Omega}
\Big( u_\varepsilon^{s} - PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}^{s} \Big) \frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}.
\end{split}\end{equation}
Also
we find
\begin{equation}\label{4-26-32}
\begin{split}
\int_{\Omega}\Big|v_{\varepsilon}\frac{\partial PU^s_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\Big|
+ \int_{\Omega}\Big|v^s_{\varepsilon}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}\Big|= O\Big(\lambda_{\e,j}\eta_1({\lambda}_{\varepsilon,j}) \Big) \|v_\e\|.
\end{split}\end{equation}
Then from \eqref{3-13-41}, \eqref{ab4-26-2}, \eqref{4-26-31} and \eqref{4-26-32}, we get
\begin{equation}\label{3-14-29}
\begin{split}
\int_{\Omega}
u^{s}_{\varepsilon}\frac{\partial PU_{x_{\varepsilon,j},\lambda_{\varepsilon,j}}}{\partial {x^i}}{\mathrm d}x
=O\Big( \frac{1}{\widetilde{\lambda}_\e^{N-3}} \Big).
\end{split}
\end{equation}
Hence \eqref{2020-01-02-12} follows from \eqref{3-14-28} and \eqref{3-14-29}.
\end{proof}
\begin{proof}[\textbf{Proof of Theorem \ref{th1.2}}]
For $j=1,\cdots,m$, let
\begin{equation}\label{3-11-1}
\begin{split}
\lambda_{\varepsilon,j}\in \begin{cases}
\Big[c_1\varepsilon^{-\big(\frac{N-2}{2}(s-1)\big)^{-1}},c_2\varepsilon^{-\big(\frac{N-2}{2}(s-1)\big)^{-1}}\Big],
~&~\mbox{if}~N\geq 4 ~\mbox{and}~s>1,\\[3mm]
\Big[e^{\frac{c_1}{\varepsilon}},e^{\frac{c_2}{\varepsilon}}\Big],
~&~\mbox{if}~N=4 ~\mbox{and}~s=1,
\end{cases}
\end{split}\end{equation}
and
\begin{equation}\label{a3-11-1}
{x_{\varepsilon,j}^i} \in \begin{cases}
\Big[a_{j,i}-\varepsilon^{\frac{1}{2(s-1)}}, a_{j,i}+\varepsilon^{\frac{1}{2(s-1)}}\Big],&~\mbox{if}~N\geq 4~\mbox{and}~s>1,\\[3mm]
\Big[a_{j,i}-e^{-\frac{c_1}{2\varepsilon}}, a_{j,i}+e^{-\frac{c_1}{2\varepsilon}}\Big],
~&~\mbox{if}~N=4 ~\mbox{and}~s=1,
\end{cases} \end{equation}
where $c_1,c_2$ are fixed positive constants with $c_1$ small and $c_2$ large.
Let
$$M:=\Big\{(x_{\varepsilon},\lambda_\varepsilon)\big|~ (x_{\varepsilon},\lambda_\varepsilon) ~\mbox{satisfies}~\eqref{3-11-1}~\mbox{and}~\eqref{a3-11-1}\Big\},$$
and consider vectors $y=(y_1,\cdots,y_m)$ and $\lambda=(\lambda_1,\cdots,
\lambda_m)$. For $i=1,\cdots,N,~j=1,\cdots,m$, we define
\begin{equation*}
F_{m(i-1)+j}(y,\lambda) =\sum^N_{l=1}\frac{\partial^2Q(a_j)}{\partial x^i\partial x^l}\Big({y_{j}^l} -{a_j^l} \Big)+
O\Big(\frac{1}{|\lambda|^{N-3}} \Big)+o\Big( \sum^m_{q=1}|y_{q}-a_q|\Big),
\end{equation*}
and
\begin{equation*}
\begin{split}
F_{mN+j}(y,\lambda)=&\frac{1}{\lambda^3_{j}}+o\Big(
\frac{ |y_{j}-a_j|^2 + \varepsilon \eta_1(|\lambda|)}{|\lambda|}
\Big) \\&+\begin{cases}
\Big(\frac{N(4-(N-2)(s-1))B}{2A(N-2)\Delta Q(a_j)(s+1)}+o(1)\Big)
\frac{\varepsilon}{\lambda_{j}^{3-\frac{N-2}{2}(s-1)}},&~\mbox{for}~N+s\neq 5,\\
\Big(\frac{2\omega_4}{A\Delta Q(a_j)}+o(1)\Big)\frac{\varepsilon \log \lambda_{j}}{\lambda_{j}^{3}},&~\mbox{for}~N+s=5.
\end{cases}\end{split}
\end{equation*}
After an orthogonal transformation, we may view the matrix
$\Big(\frac{\partial^2Q(a_j)}{\partial x^i\partial x^l}\Big)_{1\leq i,l\leq N}$ as a diagonal matrix
$\mathrm{diag}(\mu_1,\cdots,\mu_N)$.
Then for $(y,\lambda)\in M$, $N\geq 4$ and $s>1$, it holds
\begin{equation*}
\Big(F_{m(i-1)+j}(y,\lambda)|_{y^i_j=a_{j,i}-\varepsilon^{\frac{1}{2(s-1)}}} \Big)\cdot \Big(F_{m(i-1)+j}(y,\lambda)|_{y^i_j=a_{j,i}+\varepsilon^{\frac{1}{2(s-1)}}}\Big)=
-\Big(\mu_i^2+o(1)\Big)\varepsilon^{\frac{1}{s-1}}<0,
\end{equation*}
\begin{equation*}
F_{mN+j}(y,\lambda)|_{\lambda_j=c_1\varepsilon^{-\big(\frac{N-2}{2}(s-1)\big)^{-1}}}
=
\frac{1}{\lambda_{j}^{3}}\Big(1+\frac{N(4-(N-2)(s-1))B}{2A(N-2)\Delta Q(a_j)(s+1)}\big(\frac{1}{c_1}\big)^{\frac{N-2}{2}(s-1)}+o(1)\Big)
< 0,
\end{equation*}
and
\begin{equation*}
F_{mN+j}(y,\lambda)|_{\lambda_j=c_2\varepsilon^{-\big(\frac{N-2}{2}(s-1)\big)^{-1}}}
= \frac{1}{\lambda_{j}^{3}}\Big(1+\frac{N(4-(N-2)(s-1))B}{2A(N-2)\Delta Q(a_j)(s+1)}\big(\frac{1}{c_2}\big)^{\frac{N-2}{2}(s-1)}+o(1)\Big)>
0.
\end{equation*}
Then by the well-known Poincar\'e--Miranda theorem (see Lemma \ref{lem-B-1} in the Appendix), the system \eqref{2020-01-02-11} and \eqref{2020-01-02-12} has a solution
$(x_{\varepsilon},\lambda_\varepsilon)\in M$ when $N\geq 4$ and $s>1$. Similarly, there exists
$(x_{\varepsilon},\lambda_\varepsilon)\in M$ solving \eqref{2020-01-02-11} and \eqref{2020-01-02-12} when $N=4$ and $s=1$.
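For the reader's convenience, we recall the rough form of the Poincar\'e--Miranda theorem used here (see Lemma \ref{lem-B-1} for the precise statement): if $F=(F_1,\cdots,F_n):\prod^n_{i=1}[a_i,b_i]\rightarrow \R^n$ is continuous and, for each $i$, $F_i\leq 0$ on the face $\{x_i=a_i\}$ while $F_i\geq 0$ on the opposite face $\{x_i=b_i\}$ (or the reversed signs), then $F$ vanishes at some point of $\prod^n_{i=1}[a_i,b_i]$. The sign changes verified above for $F_{m(i-1)+j}$ and $F_{mN+j}$ on the boundary of $M$ are exactly of this form.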
On the other hand, by the finite-dimensional reduction (Proposition \ref{Prop-Luo1}), we know that there exists $u_\varepsilon$ satisfying
\begin{equation*}
-\Delta u_\varepsilon= Q(x) u_\varepsilon^{\frac{N+2}{N-2}}+\varepsilon
u^{s}_{\varepsilon}+\sum^m_{j=1}\sum^N_{i=0}c_{\varepsilon,i,j}
\varphi_{ij}(x).
\end{equation*}
Also, from
\textbf{Claim 1} on page 3 and Proposition \ref{Prop1.2}, we find
$$c_{\varepsilon,0,j}=c_{\varepsilon,i,j}=0~\mbox{for}~i=1,\cdots,N~ \mbox{and}~j=1,\cdots,m.$$
Hence
$u_\varepsilon$ is a concentrated solution of \eqref{1.1}, which satisfies \eqref{4-6-1}. \end{proof}
\section{Local uniqueness (Proof of Theorem \ref{th1.3})
}
\setcounter{equation}{0}
Let $u^{(1)}_{\varepsilon}(x)$, $u^{(2)}_{\varepsilon}(x)$ be two different solutions of \eqref{1.1} satisfying \eqref{4-6-1}. Under Condition (Q), $N\geq 4$ and $ s\in[1,\frac{N+2}{N-2})$, we find from Theorem \ref{prop1} that
$u^{(l)}_{\varepsilon}(x)$ can be written as
\begin{equation*}
u^{(l)}_{\varepsilon}=\sum^m_{j=1}\big(Q(a_j)\big)^{-\frac{N-2}{4}} PU_{x^{(l)}_{\varepsilon,j}, \lambda^{(l)}_{\varepsilon,j}}+w^{(l)}_{\varepsilon},
\end{equation*}
satisfying, for $j=1,\cdots,m$, $l=1,2$,
$\lambda^{(l)}_{\varepsilon,j}=
\big(u_\varepsilon(x^{(l)}_{\varepsilon,j})\big)^{\frac{2}{N-2}}$,
\begin{equation*}
x^{(l)}_{\varepsilon,j}\rightarrow a_j, ~ \lambda^{(l)}_{\varepsilon,j} \rightarrow +\infty,~ \|w^{(l)}_{\varepsilon}\|=o(1)~\mbox{and}~w^{(l)}_\varepsilon\in \bigcap^m_{j=1}E_{x^{(l)}_{\varepsilon,j},\lambda^{(l)}_{\varepsilon,j}}.
\end{equation*}
Moreover we define $\bar{\lambda}_\varepsilon:=
\min
\Big\{\lambda^{(1)}_{1,\varepsilon},\cdots,
\lambda^{(1)}_{m,\varepsilon},\lambda^{(2)}_{1,\varepsilon},\cdots,
\lambda^{(2)}_{m,\varepsilon}\Big\}$, and from \eqref{lp111}, \eqref{2020-01-02-11} and \eqref{2020-01-02-12}, we find
\begin{equation}\label{4-26-61}
\big|x^{(l)}_{\varepsilon,j}-a_j\big|=o\big(\frac{1}{\bar{\lambda}_\varepsilon}\big), ~ \lambda^{(l)}_{\varepsilon,j}
=
\e^{-\frac{2}{(N-2)(s-1)}}\Big(C_j+o(1) \Big), ~~\mbox{for}~N\geq 5~\mbox{and}~s>1,
\end{equation}
with some positive constants $C_j$.
Now we set
\begin{equation}\label{3.1}
\xi_{\varepsilon}(x)=\frac{u_{\varepsilon}^{(1)}(x)-u_{\varepsilon}^{(2)}(x)}
{\|u_{\varepsilon}^{(1)}-u_{\varepsilon}^{(2)}\|_{L^{\infty}(\Omega)}},
\end{equation}
then $\xi_{\varepsilon}(x)$ satisfies $\|\xi_{\varepsilon}\|_{L^{\infty}(\Omega)}=1$ and
\begin{equation}\label{3.2}
-\Delta \xi_{\varepsilon}(x)=C_{\varepsilon}(x)\xi_{\varepsilon}(x),
\end{equation}
where
\begin{equation*}
C_{\varepsilon}(x)=\Big(2^*-1 \Big)Q(x)\int_{0}^1
\Big(tu_{\varepsilon}^{(1)}(x)+(1-t)u_{\varepsilon}^{(2)}(x) \Big)
^{\frac{4}{N-2}}{\mathrm d} t+\e s \int_{0}^1
\Big(tu_{\varepsilon}^{(1)}(x)+(1-t)u_{\varepsilon}^{(2)}(x) \Big)
^{s-1}{\mathrm d} t.
\end{equation*}
Next, we give some estimates on $\xi_\e$.
\begin{Prop}
For $N\geq 5$ and $\xi_{\varepsilon}(x)$ defined by \eqref{3.1}, we have
\begin{equation}\label{3-3}
\xi_{\varepsilon}(x) =O\Big(\frac{\log \bar{\lambda}_\varepsilon}{\bar{\lambda}^{N-2}_\varepsilon} \Big),~\mbox{in}~ C^1\Big(\Omega\backslash\bigcup_{j=1}^mB_{d}(x_{\varepsilon,j}^{(1)})\Big),
\end{equation}
where $d>0$ is any small fixed constant.
\end{Prop}
\begin{proof}
By potential theory and \eqref{3.2}, we have
\begin{equation}\label{cc6}
\begin{split}
{\xi}_\varepsilon(x)=& \int_{\Omega}G(y,x)
C_{\varepsilon}(y) \xi_{\varepsilon}(y){\mathrm d}y\\=&
O\left(\sum^k_{j=1}\sum^2_{l=1}\int_{\Omega}\frac{1}{|x-y|^{N-2}} \Big( U^{\frac{4}{N-2}}_{x^{(l)}_{\varepsilon,j},\lambda^{(l)}_{\varepsilon,j}}
(y)+\varepsilon U^{s-1}_{x^{(l)}_{\varepsilon,j},\lambda^{(l)}_{\varepsilon,j}}
(y)\Big) {\mathrm d}y \right).
\end{split}
\end{equation}
Next, by \eqref{AB.2}, we know
\begin{equation}\label{acc6}
\int_{\Omega}\frac{1}{|x-y|^{N-2}} U^{\frac{4}{N-2}}_{x^{(l)}_{\varepsilon,j},\lambda^{(l)}_{\varepsilon,j}}
(y) {\mathrm d}y =
O\Big( \frac{1}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{2}}\Big),
\end{equation}
and
\begin{equation}\label{bcc6}
\int_{\Omega}\frac{1}{|x-y|^{N-2}} U^{s-1}_{x^{(l)}_{\varepsilon,j},\lambda^{(l)}_{\varepsilon,j}}
(y) {\mathrm d}y =
O\Big( \frac{1}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{\frac{(N-2)(s-1)}{2}}}\Big).
\end{equation}
Now from \eqref{4-26-61}, \eqref{cc6}, \eqref{acc6} and \eqref{bcc6}, we can compute
\begin{equation}\label{dd}
\begin{split}
{\xi}_\varepsilon(x)=& \int_{\Omega}G(y,x)
C_{\varepsilon}(y) \xi_{\varepsilon}(y){\mathrm d}y\\=&
O\left(\sum^m_{j=1}\sum^2_{l=1}
\frac{1}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{2}} \right)+
O\left(\sum^m_{j=1}\sum^2_{l=1}
\frac{1}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{(N-2)(s-1)}}\right).
\end{split}
\end{equation}
Next, repeating the above process, we obtain
\begin{equation*}
\begin{split}
{\xi}_\varepsilon(x) = &O\Big( \sum^m_{j=1}\sum^2_{l=1}\int_{\Omega}\frac{1}{|x-y|^{N-2}}
\frac{ C_{\varepsilon}(y) }{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{2}}
{\mathrm d}y\Big)\\&
+O\Big( \sum^m_{j=1}\sum^2_{l=1}\int_{\Omega}\frac{1}{|x-y|^{N-2}}
\frac{ C_{\varepsilon}(y)}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{(N-2)(s-1)}}
{\mathrm d}y\Big)\\=&
O\left(\sum^m_{j=1}\sum^2_{l=1}
\frac{1}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{4}} \right)+
O\left(\sum^m_{j=1}\sum^2_{l=1}
\frac{1}{\big(1+\lambda^{(l)}_{\varepsilon,j}|x-x^{(l)}_{\varepsilon,j}|\big)^{2(N-2)(s-1)}}\right).
\end{split}
\end{equation*}
Then we can iterate the above argument a finite number of times to prove
\begin{equation*}
{\xi}_\varepsilon(x) =
O\Big(\sum^m_{j=1}\sum^2_{l=1}\frac{\log \bar{\lambda}_\varepsilon}{\big(1+\lambda^{(l)}_{\varepsilon,j}
|x-x^{(l)}_{\varepsilon,j}|\big)^{N-2}}\Big).
\end{equation*}
Also we know $\frac{\partial
{\xi}_\varepsilon(x)}{\partial x^i}= \displaystyle\int_{\Omega}D_{x^i}G(y,x)
C_{\varepsilon}(y) \xi_{\varepsilon}(y){\mathrm d}y$.
Hence, repeating the above process, we can complete the proof of \eqref{3-3}.
\end{proof}
\begin{Prop}\label{prop3-2}
Let $\xi_{\varepsilon,j}(x)=\xi_{\varepsilon}(\frac{x}{\lambda_{\varepsilon,j}^{(1)}}+x_{\varepsilon,j}^{(1)})$. Then by taking
a subsequence if necessary, we have
\begin{equation}\label{cc7}
\xi_{\varepsilon,j}(x)\rightarrow \sum_{i=0}^N a_{j,i}\psi_{i}(x),~\mbox{uniformly in}~C^1\big(B_R(0)\big) ~\mbox{for any}~R>0,
\end{equation}
where $a_{j,i}$, $i=0,1,\cdots,N$ are some constants and $$\psi_{0}(x)=\frac{\partial U_{0,\lambda}(x)}{\partial\lambda}\big|_{\lambda=1},~~\psi_{i}(x)=\frac{\partial U_{x,1}(x)}{\partial x^i}\big|_{x=0},~i=1,\cdots,N.
$$
\end{Prop}
\begin{proof}
Since $\xi_{\varepsilon,j}(x)$ is bounded, by the regularity theory in \cite{Gilbarg}, we find
$$\xi_{\varepsilon,j}(x)\in {C^{1,\alpha}\big(B_r(0)\big)}~ \mbox{and}~ \|\xi_{\varepsilon,j}\|_{C^{1,\alpha}\big(B_r(0)\big)}\leq C,$$
for any fixed large $r$ and $\alpha \in (0,1)$ if $\varepsilon$ is small, where the constants $r$ and $C$ are independent of $\varepsilon$ and $j$.
So we may assume that $\xi_{\varepsilon,j}(x)\rightarrow\xi_{j}(x)$ in $C^1\big(B_r(0)\big)$. By direct calculations, we know
\begin{small}
\begin{equation}\label{tian41}
\begin{split}
-\Delta\xi_{\varepsilon,j}(x)=&-\frac{1}{(\lambda^{(1)}_{\varepsilon,j})^{2}}\Delta \xi_{\varepsilon}\big(\frac{x}{\lambda^{(1)}_{\varepsilon,j}}+x_{\varepsilon,j}^{(1)}\big)=
\frac{1}{(\lambda^{(1)}_{\varepsilon,j})^{2}}C_{\varepsilon}(\frac{x}{\lambda^{(1)}_{\varepsilon,j}}
+x_{\varepsilon,j}^{(1)})\xi_{\varepsilon,j}(x).
\end{split}
\end{equation}
\end{small}
Now, we estimate $\frac{1}{(\lambda^{(1)}_{\varepsilon,j})^{2}}C_{\varepsilon}(\frac{x}{\lambda^{(1)}_{\varepsilon,j}}
+x_{\varepsilon,j}^{(1)})$. By \eqref{4-26-61}, we have
\begin{equation*}
\begin{split}
U&_{x_{\varepsilon,j}^{(1)},\lambda_{\varepsilon,j}^{(1)}}(x)
-
U_{x_{\varepsilon,j}^{(2)},\lambda_{\varepsilon,j}^{(2)}}(x)
\\=&
O\Big(\big|x_{\varepsilon,j}^{(1)}-x_{\varepsilon,j}^{(2)}\big|\cdot \big(\nabla_y U_{y,\lambda_{\varepsilon,j}^{(1)}}(x)|_{y=x_{\varepsilon,j}^{(1)}}\big)+
\big|\lambda_{\varepsilon,j}^{(1)}-\lambda_{\varepsilon,j}^{(2)}\big|\cdot\big( \nabla_\lambda U_{x_{\varepsilon,j}^{(1)},\lambda}(x) |_{\lambda=\lambda_{\varepsilon,j}^{(1)}}\big)
\Big)\\=&
O\Big(\lambda_{\varepsilon,j}^{(1)}\big|x_{\varepsilon,j}^{(1)}-x_{\varepsilon,j}^{(2)}\big| +
(\lambda_{\varepsilon,j}^{(1)})^{-1}|\lambda_{\varepsilon,j}^{(1)}-\lambda_{\varepsilon,j}^{(2)}| \Big) U_{x_{\varepsilon,j}^{(1)},\lambda_{\varepsilon,j}^{(1)}}(x)
=o\big(1\big)U_{x_{\varepsilon,j}^{(1)},\lambda_{\varepsilon,j}^{(1)}}(x),
\end{split}
\end{equation*}
which means
\begin{equation*}
u_{\varepsilon}^{(1)}(x)-u_{\varepsilon}^{(2)}(x)=
o\big(1\big)\Big(\sum_{j=1}^mU_{x_{\varepsilon,j}^{(1)},\lambda_{\varepsilon,j}^{(1)}}(x)\Big)
+O\Big(\sum^2_{l=1}|w_{\varepsilon}^{(l)}(x)|\Big).
\end{equation*}
Then for a small fixed $d>0$, we find
\begin{equation*}
\begin{split}\frac{1}{(\lambda^{(1)}_{\varepsilon,j})^{2}}C_{\varepsilon}
(\frac{x}{\lambda^{(1)}_{\varepsilon,j}}
+x_{\varepsilon,j}^{(1)}) \rightarrow \frac{N+2}{N-2} U^{\frac{4}{N-2}}_{0,1}(x),~\mbox{for}~\frac{x}{\lambda^{(1)}_{\varepsilon,j}}
+x_{\varepsilon,j}^{(1)}\in B_{d}(0).
\end{split}
\end{equation*}
Letting $\varepsilon\rightarrow 0$ in \eqref{tian41} and using the elliptic regularity theory, we find that $\xi_{j}(x)$ satisfies
\begin{equation*}
-\Delta\xi_{j}(x)=\Big(\frac{N+2}{N-2}\Big)U_{0,1}^{\frac{4}{N-2}}(x)\xi_{j}(x),~~\textrm{in~}\R^N,
\end{equation*}
which gives $
\xi_{j}(x)=\displaystyle\sum_{i=0}^Na_{j,i}\psi_i(x)$.
\end{proof}
\begin{Prop}
For $N\geq 5$, it holds
\begin{equation}\label{luo--13}
a_{j,i}=0,~\mbox{for}~j=1,\cdots,m~\mbox{and}~i=1,\cdots,N,
\end{equation}
where $a_{j,i}$ are the constants in \eqref{cc7}.
\end{Prop}
\begin{proof}
First, using \eqref{3-3} and \eqref{cc7}, we compute that
\begin{equation}\label{dclp-2}
\begin{split}
\mbox{LHS of \eqref{dclp-1}}=\big(Q(a_j)\big)^{-\frac{N+2}{4}}\left(
\sum^N_{l=1}\Big(\int_{\R^N}x^jU_{0,1}^{\frac{N+2}{N-2}}(x)\psi_j(x){\mathrm d}x \Big)\frac{\partial^2Q(a_j)}{\partial x^i\partial x_l}{a_j^l}+o(1)\right)\frac{1}{\big(\lambda^{(1)}_{j,\e}\big)^{\frac{N}{2}}}.
\end{split}\end{equation}
From \eqref{ab4-18-12} and \eqref{3-3}, it holds
\begin{equation}\label{dclp-3}
\begin{split}
\mbox{RHS of \eqref{dclp-1}}= O\Big(\frac{\log \bar \lambda_{\e}}{\bar \lambda_{\e}^{\frac{3(N-2)}{2}}} \Big).
\end{split}\end{equation}
Hence we find \eqref{luo--13} by \eqref{dclp-2} and \eqref{dclp-3}.
\end{proof}
\begin{Prop}
For $N\geq 5$, it holds
\begin{equation}\label{luo-13}
a_{j,0}=0,~\mbox{for}~j=1,\cdots,m,
\end{equation}
where $a_{j,0}$ are the constants in Proposition \ref{prop3-2}.
\end{Prop}
\begin{proof}
First, using \eqref{3-3} and \eqref{cc7}, we compute that
\begin{equation*}
\begin{split}
\int_{B_d(x^{(1)}_{\e,j})}& ( x-x_{\varepsilon,j}) \cdot \nabla Q(x) D_{1,\varepsilon}\xi_\e {\mathrm d}x
\\ =&\big(Q(a_j)\big)^{-\frac{N+2}{4}}\frac{\Delta Q(a_j)}{N}
\frac{a_{j,0}}{\big(\lambda^{(1)}_{j,\e}\big)^{\frac{N+2}{2}}}\int_{\R^N}|x|^2U^{\frac{N+2}{N-2}}_{0,1}(x)
\frac{\partial U_{0,\lambda}}{\partial \lambda}\big|_{\lambda=1}{\mathrm d}x
+o\Big(
\frac{1}{\bar \lambda_{\e}^{\frac{N+2}{2}}} \Big)
\\ =&-\big(Q(a_j)\big)^{-\frac{N+2}{4}}\frac{(N-2)A}{N}
\frac{a_{j,0}}{\big(\lambda^{(1)}_{\e,j}\big)^{\frac{N+2}{2}}}\Delta Q(a_j)
+o\Big(
\frac{1}{\bar \lambda_{\e}^{\frac{N+2}{2}}} \Big).
\end{split}\end{equation*}
Here we use the fact
$$\int_{\R^N}|x|^2U^{\frac{N+2}{N-2}}_{0,1}(x)
\frac{\partial U_{0,\lambda}}{\partial \lambda}\big|_{\lambda=1}{\mathrm d}x=
\frac{1}{2^*}
\frac{\partial }{\partial \lambda}\Big( \int_{\R^N}|x|^2U^{2^*}_{0,\lambda}(x)
{\mathrm d}x \Big) \Big|_{\lambda=1} =-(N-2)A.
$$
Similarly,
\begin{equation*}
\begin{split}
\int_{B_d(x^{(1)}_{\e,j})}& D_{2,\varepsilon}\xi_\e {\mathrm d}x
=\big(Q(a_j)\big)^{-\frac{(N-2)s}{4}}
\frac{a_{j,0}}{\big(\lambda^{(1)}_{j,\e}\big)^{N-\frac{(N-2)s}{2}}}\int_{\R^N} U^{s}_{0,1}(x)
\frac{\partial U_{0,\lambda}}{\partial \lambda}\big|_{\lambda=1}{\mathrm d}x
+o\Big(
\frac{1}{\bar \lambda_{\e}^{N-\frac{(N-2)s}{2}}}\Big).
\end{split}
\end{equation*}
Also we know
\begin{equation*} \int_{\R^N} U^{s}_{0,1}(x)
\frac{\partial U_{0,\lambda}}{\partial \lambda}\big|_{\lambda=1}{\mathrm d}x=
\frac{1}{s+1}
\frac{\partial }{\partial \lambda}\Big( \int_{\R^N} U^{s+1}_{0,\lambda}(x)
{\mathrm d}x \Big) \Big|_{\lambda=1} =-\Big(\frac{N}{s+1}+1-\frac{N}{2} \Big)B.
\end{equation*}
Next from \eqref{clp-10}, \eqref{2020-01-02-1} and \eqref{2020-01-02-2}, we find
\begin{equation*}
\begin{split}
\frac{1}{2^*}&\big(Q(a_j)\big)^{-\frac{N+2}{4}}\frac{(A+o(1))\Delta Q(a_j)}{\big(\lambda^{(1)}_{\e,j}\big)^{2}}=
-(1-\frac{N}{2}+\frac{N}{1+s})\big(Q(a_j)\big)^{-\frac{(N-2)s}{4}}
\frac{\varepsilon \big(B+o(1)\big)}{\big(\lambda^{(1)}_{j,\e}\big)^{2-\frac{(N-2)(s-1)}{2}}}.
\end{split}\end{equation*}
Hence we find
\begin{equation}\label{luo-131}
\mbox{LHS of \eqref{dclp-10}}=
\frac{(N-2)^2(1-s)}{4N}\big(Q(a_j)\big)^{-\frac{N+2}{4}}
\frac{Aa_{j,0}}{\big(\lambda^{(1)}_{\e,j}\big)^{\frac{N+2}{2}}}\Delta Q(a_j)
+o\Big(\frac{1}{\bar \lambda_{\e}^{\frac{N+2}{2}}} \Big).
\end{equation}
On the other hand, from \eqref{ab4-18-12} and \eqref{3-3}, we have
\begin{equation}\label{luo-132}
\begin{split}
\mbox{RHS of \eqref{dclp-10}}= O\Big(\frac{\log \bar \lambda_{\e}}{\bar \lambda_{\e}^{\frac{3(N-2)}{2}}} \Big).
\end{split}\end{equation}
So we deduce \eqref{luo-13} by \eqref{luo-131} and \eqref{luo-132}.
\end{proof}
\begin{Rem}
Here we point out that we cannot obtain \eqref{luo-13} when $N=4$, because in this case, similarly to \eqref{luo-131}, the main term of the LHS of \eqref{dclp-10} is $0$. In fact,
\begin{equation*}
\mbox{LHS of \eqref{dclp-10}}=O\Big(\frac{1}{\bar \lambda_{\e}^{3}} \Big),~\mbox{and}~
\mbox{RHS of \eqref{dclp-10}}=O\Big( \frac{\log \lambda^{(1)}_{\e,j}}{\big(\lambda^{(1)}_{\e,j}\big)^{3}}\Big),
\end{equation*}
which is not enough to get \eqref{luo-13}.
\end{Rem}
\begin{proof}[\textbf{Proof of Theorem \ref{th1.3}:}]
First, from \eqref{dd}, it holds,
\begin{equation*}
|\xi_{\varepsilon}(x)|=O\Big(\frac{1}{R^2} \Big)+
O\Big(\frac{1}{R^{(N-2)(s-1)}} \Big),~\mbox{for}~
x\in\Omega\backslash\bigcup_{j=1}^mB_{R(\lambda_{\varepsilon,j}^{(1)})^{-1}}(x_{\varepsilon,j}^{(1)}),
\end{equation*}
which implies that for any fixed $\gamma\in (0,1)$ and small $\varepsilon$, there exists $R_1>0$,
\begin{equation}\label{tian92}
|\xi_{\varepsilon}(x)|\leq \gamma,~ x\in\Omega\backslash\bigcup_{j=1}^mB_{R_1(\lambda_{\varepsilon,j}^{(1)})^{-1}}(x_{\varepsilon,j}^{(1)}).
\end{equation}
Also for the above fixed $R_1$, from \eqref{luo--13} and \eqref{luo-13}, we have
\begin{equation*}
\xi_{\varepsilon,j}(x)=o(1)~\mbox{in}~ B_{R_1}(0),~j=1,\cdots,m.
\end{equation*}
Since $\xi_{\varepsilon,j}(x)=\xi_{\varepsilon}(
\frac{x}{\lambda_{\varepsilon,j}^{(1)}}+x_{\varepsilon,j}^{(1)})$, it follows that
\begin{equation}\label{tian91}
\xi_{\varepsilon}(x)=o(1),~x\in \bigcup_{j=1}^m B_{R_1(\lambda_{\varepsilon,j}^{(1)})^{-1}}(x_{\varepsilon,j}^{(1)}).
\end{equation}
Hence for any fixed $\gamma\in (0,1)$ and small $\varepsilon$, \eqref{tian92} and \eqref{tian91} imply
$|\xi_{\varepsilon}(x)|\leq \gamma$ for all $x\in \Omega$,
which is in contradiction with $\|\xi_{\varepsilon}\|_{L^{\infty}(\Omega)}=1$. As a result, $u^{(1)}_{\varepsilon}(x)\equiv u^{(2)}_{\varepsilon}(x)$ for small $\varepsilon$.
\end{proof}
\section{Introduction}
During the last decade, outstanding results have been obtained on the
redshift distribution of faint galaxies up to
$I=22.5$ (see O. Le F\`evre, this conference) or $I=23$ (Cowie et al. 1996).
With the coming of 10 meter
class telescopes equipped with wide field multi-object
spectrographs these surveys will be extended to thousands of galaxies.
The leading goals of these
programme is to explore the evolution of clustering of galaxies,
their physical and stellar evolution with redshift up to
$B=24-25$ or $I=22.5-23.5$ as well as to study very high redshift galaxies.
Galaxies with magnitudes $B>25$ are also important for models of
galaxy formation, since we do not know whether they are all at large
redshift or whether a significant fraction of them are faint nearby dwarf
galaxies. Furthermore, weak lensing inversion uses
the grid of faint distant sources with
magnitudes between $B=25$ and $B=28$, for which the redshift distribution
is unknown. While this information
is not important for nearby lenses, it is crucial for
those having redshifts larger than $z=0.5$ and can be a major source of
uncertainty in the mass
determination (see Luppino \& Kaiser 1996). In view of the
new surveys designed to study the large-scale mass distribution using
weak lensing, knowledge of the redshift distribution of
the faintest galaxies is thus essential.
Unfortunately, while we do expect considerable information in the magnitude
range $B<25$,
beyond this limit even 10-meter class telescopes are still too small, and
redshifts of a complete sample of $B>25$ galaxies cannot
be secured in a reasonable amount of observing time. The possibility of using
photometric
redshifts was proposed as early as the beginning of the eighties
and has been studied in great detail. Observations
as well as reliability tests are still underway (Pell\'o,
private communication). However, they are based on
theoretical evolution scenarios of galaxies whose predictions about
faint distant galaxies have not yet been confirmed. Furthermore, there is no
hope of calibrating the photometric redshifts of the faint samples
with spectroscopic data.
The most attractive alternative is the use of the deviation, magnification
and distortion effects induced by gravitational lensing on extended objects.
In this review, I discuss the recent advances in the redshift
distribution of $B>25$ galaxies by using spectroscopic samples of
arc(let)s, the lensing inversion and finally the magnification bias.
Though much work in these fields is still underway, these are promising
new approaches that must be tested jointly with photometric
redshifts in order to cross-check their reliability and the consistency
of their predictions.
\begin{figure}
\rule{5cm}{0.2mm}\hfill\rule{5cm}{0.2mm}
\hskip 1.0 cm
\psfig{figure=mellier.HX37_a2390_panel.eps,height=6. cm}
\caption{Redshift of faint arclets in lensing-clusters. The left panel is a
deep HST image of A2390. Arclets are visible, some of them
showing multiple images with image parity changes. Arclets 21 and 22 have been observed by B\'ezecourt and Soucail (1996).
As expected, they show the
[OII]$\lambda$3727 emission line, from which the redshifts are easily
measured ($z=0.643$ and $z=0.790$, respectively; courtesy J.
B\'ezecourt).
\label{fig:radish}}
\end{figure}
\section{Spectroscopic surveys of arc(let)s}
Spectroscopic redshifts of arc(let)s are indispensable
to calculate the angular distances $D_{\rm d},
D_{\rm ds}$ and $D_{\rm s}$ and to get the absolute scaling of the projected
mass density. The redshifts of a large number of arc(let)s
in each
individual lensing cluster provide the positions of many critical lines
and allow the mass distribution to be probed locally. In practice, the
redshifts of arc(let)s check the lens modelling obtained
from giant arcs and can be used to refine it. It
is also possible to obtain information on the cosmological parameters if
enough redshifts are available to constrain both the deflector and the geometry of
the Universe.
More recently, with the development of the lensing inversion technique
(see next section), the need for spectroscopic confirmation of its
predictions has led to intensive observations of arclets.
Spectroscopic surveys of the ``brightest''
arclets in many clusters are now underway, and
first results have been obtained in A2390 (B\'ezecourt \& Soucail 1996)
and A2218 (Ebbels et al. 1996; Ebbels, this conference).
Since most of these objects are very faint, only arclets showing bright spots
of star-forming regions on HST images are selected, in order to detect
an emission line and to optimize the chance of obtaining a reliable redshift
(see figure 1).
About 20 redshifts of arcs and 20 redshifts of arclets have been
measured. However, the use of this sample to recover the redshift
distribution of
$B>25$ galaxies is difficult because the sample is biased. First,
only arclets with star-forming emission lines are
selected. Second, beyond $B=25$ the
magnification bias favours the observation of blue galaxies rather
than red ones.
So, even if the spectroscopy of arclets is
crucial for the lens modelling, and eventually for obtaining the spectral
energy distribution of high-redshift galaxies, the redshift
distributions obtained from these methods are still questionable.
\begin{figure}
\rule{5cm}{0.2mm}\hfill\rule{5cm}{0.2mm}
\psfig{figure=mellier.HX37_a370_panel2.eps,height=14.7 cm}
\caption{The top
left panel shows a B image of A370 with the ROSAT isocontours and the
positions of arclets superimposed (dark segments). They are consistent
with the lens modelling (bottom left), and we can
expect good redshift predictions for arclets from the reconstructed
shear map (top right). Kneib et
al. inverted the arclets in A370 and found the magnitude-redshift
diagram plotted in the bottom right panel. However, the ROSAT
isophotes and the arclet positions show some discrepancies with the
modelling on the eastern side (left), and in this region the predicted
redshifts could be wrong.
\label{fig:radish}}
\end{figure}
\section{Redshifts of arclets from lensing inversion}
When it is possible to recover the lensing potential with a good
accuracy, the lensing equation can be inverted to
reconstruct the lensed images back to the source plane. This is basically
the procedure of the lensing inversion, which searches
for each arclet for the source plane in which its
distortion is minimal, assuming that this plane corresponds to its most
{\sl probable redshift}. The obvious interest of this
method is that it does not depend on the magnitude of the arclet but only on
its position and shape in the image plane. Potentially, it provides
redshifts of galaxies up to $B=27$.
The lensing inversion was developed by the Toulouse/Paris group
and was first applied to A370 (Kneib et al. 1994), based on the lens
modelling of the giant arc. Though the (intrinsic) magnitude-redshift relation found
for these arclets shows good continuity with the faint spectroscopic
surveys, there are still some uncertainties.
In fact, as shown in figure 2, the X-ray isophotes and
the arclet positions do not follow the expectations of the lens
modelling in the eastern region. This is an indication that,
while the modelling is excellent in the cluster center, the mass
distribution does not have a simple geometry beyond the giant arc.
Furthermore, the lensing inversion is also sensitive to the accuracy of
the shape measurements of each arclet, and for such faint objects the errors can
be large.
\begin{figure}
\rule{5cm}{0.2mm}\hfill\rule{5cm}{0.2mm}
\psfig{figure=mellier.HX37_cl0024_depletion_panel3.eps,height=13.7 cm}
\caption{Depletion by a singular isothermal sphere
as it would be observed on the sky
and radial density of galaxies (top left).
For a given redshift,
the minimum of the depletion is sharp and its radial position is
equivalent to a redshift (top right). The minimum increases with the
redshift of the sources, but the depletion curves tighten and converge
towards the curve corresponding to sources at infinity. In a realistic
case, the redshift distribution is broad and the individual curves must
be added. In this case, instead of
the single-peaked depletion, we expect a more pronounced minimum between
two radii (i.e. two redshifts; top left). The middle panels show the
depletion curves observed in $B$ and $I$ in Cl0024.
Since the mass distribution of this lens is well known, one can recover the
redshift of the sources for the $B$ and $I$ populations (bottom panels;
note that this is a fraction of galaxies: the width of the boxes indicates
the redshift range, not a total number of galaxies).
\label{fig:radish}}
\end{figure}
There are two solutions to solve these issues: first, it is highly
preferable to use HST images instead of ground based images. The results
obtained by Kneib et al. (1996) on A2218 show the efficiency of the
lensing inversion when applied on excellent images. Second, it is
important to have lensing-clusters with simple geometry. In this
respect, though A370 and A2218 are rather well modelled, they are not
the simplest and clusters like MS0440 or
MS2137-23 are better candidates.
\section{The distribution of faint galaxies from the
magnification bias }
The projected number density of galaxies seen through a lens results from
the competition between the gravitational magnification, which increases
the detection of individual objects, and the deviation
of the light beam, which increases the area and thus decreases the apparent number
density. Therefore the amplitude of the magnification bias
depends on the slope of the galaxy counts as a function of magnitude and
on the magnification factor of the lens (Broadhurst et al. 1995):
when the slope is higher than $0.4$ the number density increases,
whereas below $0.4$ it decreases and the radial distribution shows a
typical depletion curve (see figure 3).
When the slope is lower than $0.3$, a sharp decrease of the number of galaxies
is expected
close to the critical radius of the lens corresponding to the redshift
of the background sources. For a broad redshift distribution,
the result can be a shallower depletion between the smallest and
the largest critical lines,
which depends on the redshift distribution of the galaxies
(figure 3). Therefore, the analysis of the shape of the depletion
curves provides a new way
to sort out their redshift
distribution. Like the lensing inversion, this is a statistical
method which can also
infer the redshifts of very faint sources (up to $B=28$), but it does not need
any information on the shapes of arclets. However, a good
lens modelling is still necessary.
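The competition described above can be written compactly with the standard magnification-bias relation of Broadhurst et al. (1995): if the cumulative counts have slope $\alpha = {\rm d}\log N/{\rm d}m$, a magnification $\mu$ multiplies the observed number density by $\mu^{2.5\alpha-1}$. The following minimal sketch (illustrative numbers only; the function name is ours, not from the paper) shows why $\alpha=0.4$ is the neutral slope:

```python
# Magnification bias (Broadhurst et al. 1995):
# n_obs / n_0 = mu**(2.5*alpha - 1), where alpha = dlogN/dm is the
# count slope and mu the magnification factor of the lens.

def density_ratio(mu, alpha):
    """Ratio of lensed to unlensed projected galaxy number density."""
    return mu ** (2.5 * alpha - 1.0)

mu = 5.0  # strong magnification near a critical line
for alpha in (0.2, 0.3, 0.4, 0.5):
    print(f"alpha = {alpha:.1f}: n_obs/n_0 = {density_ratio(mu, alpha):.2f}")
# alpha = 0.4 leaves the density unchanged; flatter counts give a depletion.
```

With a count slope near 0.2, as in the faintest magnitude bins discussed below, a magnification of 5 suppresses the apparent counts by roughly a factor of two, which is the origin of the depletion curves.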
This method was first used by Fort et al. (1996) in the cluster
Cl0024+1654 to study the
faint distant galaxy population in the extreme magnitude ranges
$B=26.5-28$ and $I=25-26.5$. For these selected magnitude bins they
found from their CFHT blank fields
that the count slope was near 0.2, well suited to the study of
the effect. After analysis of the shape of the depletion
curve (figure 4), $60\% \pm 10\%$ of the
$B$-selected galaxies were found
between $z=0.9$ and $z=1.1$, while most of the remaining $40\% \pm 10\%$
appear to be broadly distributed around a redshift of
$z=3$. The
$I$-selected population presents a similar distribution with two maxima,
but
spread over a larger redshift range, with about 20\% above $z > 4$
(figure 3).
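The mapping between the radius of the depletion minimum and a source redshift can be illustrated with a singular isothermal sphere, whose critical radius is $\theta_E = 4\pi(\sigma/c)^2\,D_{ds}/D_s$. The sketch below is a rough illustration, not the modelling of the paper: it assumes an Einstein-de Sitter distance relation and round numbers for a Cl0024-like lens ($\sigma = 1300$ km/s, $z_d = 0.39$), and shows the critical radius moving outward as the source redshift increases:

```python
import math

C_KMS = 299792.458  # speed of light in km/s

def f(z):
    # Comoving-distance factor in an Einstein-de Sitter universe:
    # D_C(z) = (2c/H0) * (1 - 1/sqrt(1+z)), so only 1/sqrt(1+z) matters
    # in the distance ratio below (H0 cancels).
    return 1.0 / math.sqrt(1.0 + z)

def dds_over_ds(z_lens, z_src):
    """Distance ratio D_ds/D_s in a flat Einstein-de Sitter cosmology."""
    return (f(z_lens) - f(z_src)) / (1.0 - f(z_src))

def einstein_radius_arcsec(sigma_kms, z_lens, z_src):
    """SIS critical radius theta_E = 4*pi*(sigma/c)^2 * D_ds/D_s, in arcsec."""
    theta_rad = 4.0 * math.pi * (sigma_kms / C_KMS) ** 2 * dds_over_ds(z_lens, z_src)
    return math.degrees(theta_rad) * 3600.0

sigma, z_l = 1300.0, 0.39  # illustrative cluster values
for z_s in (0.7, 1.0, 2.0, 3.0):
    print(f"z_s = {z_s}: theta_E = {einstein_radius_arcsec(sigma, z_l, z_s):.1f} arcsec")
```

Each radius in the observed depletion curve thus corresponds to a source redshift, which is why the shape of the depletion between the two extreme critical radii constrains the redshift distribution.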
This very first attempt must be pursued on many lensing clusters in order to
provide significant results on the redshift distribution of the faintest
distant galaxies. Though it is a very promising approach, it also needs to be
applied to clusters with simple geometry. Furthermore, the detection
procedure demands ultra-deep exposures with subarcsecond seeing.
\section{Conclusions }
The redshift distribution of galaxies beyond $B=25$ is a crucial
scientific question for galaxy evolution and weak lensing inversions.
I have discussed three innovative approaches which can go as faint
as $B=28$. They must be considered jointly with photometric redshifts,
which will need confirmation from other observations. But whatever
the method, how and when will we be sure from spectroscopic data
that these redshifts are correct? This
key issue may be solved with ultra-deep CCD spectroscopic exposures with
the VLTs. This is a major challenge for the coming years, one that it will be
possible to meet through the systematic use of gravitational telescopes.
\section*{Acknowledgments}
I thank B. Fort, R. Ellis, R. Pell\'o, P. Schneider and L. Van Waerbeke for
stimulating discussions on lensing and on distances of faint galaxies.
\begin{figure}
\rule{5cm}{0.2mm}\hfill\rule{5cm}{0.2mm}
\psfig{figure=mellier.HX37.zarcletspectro.eps,height=9.0 cm}
\caption{A magnitude-redshift diagram showing the positions of
the redshift surveys (dark symbols on the left), the arc(let)
spectroscopic surveys (large circles; those concerning A2218
have been kindly provided by Pell\'o prior to publication), the predictions
of lensing inversions for A370 and A2218 (small circles),
of the weak lensing studies by Bonnet et al. and Smail et al. (triangles),
and finally of the depletion curves in Cl0024 (large boxes). The
spectroscopic redshifts of Cowie et al. (1996) with Keck would lie between
$B=22.5$ and $B=24.5$. We see the potential interest of gravitational
lensing, which provides redshifts up to $B=28$. The straight line at
bottom right is the redshift of A370, which is a limit of the lensing
inversion in this cluster.
\label{fig:radish}}
\end{figure}
\section*{References}
\section* {Introduction}
The first edition of Newton's
{\it Mathematical Principles of Natural Philosophy},
commonly known as the {\it Principia}, appeared in 1687, transforming
our understanding of celestial mechanics and gravitational theory. In his magisterial book,
Newton gave a physical description and mathematical proof
for the origin of Kepler's three laws for planetary motion. These laws were empirical laws
based on the careful observations of the Danish astronomer
Tycho Brahe, which summarized most of the astronomical knowledge about planetary motion that
was known at the time.
To accomplish this feat, Newton assumed the validity of the principle of inertia
which he formulated in the {\it Principia} as the first law of motion\footnote{\normalsize The principle of inertia was formulated
in the form:
\begin{quote}
Every body perseveres in
its state of being at rest or of moving uniformly straight forward
except insofar as it is compelled
to change its state by forces impressed (Cohen 1997, 416).
\end{quote}
}, and he
introduced two fundamental concepts: that the main attractive gravitational
force acting on a planet is a central force directed towards the
sun, and that the magnitude of this force is proportional to the planet's acceleration\footnote{\normalsize These concepts had been formulated
also by Robert Hooke, who
discussed his ideas with Newton in a lengthy correspondence in 1679 (Turnbull 1960, 297-314) (Nauenberg 1994) (Nauenberg 2005).}.
From Kepler's third law\footnote {\normalsize The square of the period of the planet
is proportional to the cube of its distance from the sun.}, and the approximation that
planets move with uniform motion in a circular orbit, Newton then deduced that
this gravitational force
varied inversely with the square of the radial distance of a planet from the sun.
In addition, Newton proposed that the magnitude of the gravitational force between any two
bodies was proportional to the product of their {\it inertial} masses, and by applying Kepler's third law to
the motion of the moon and the satellites of Jupiter and
Saturn, he
determined the masses of these planets and the mass of the earth relative to the mass of the Sun
\footnote{\normalsize This result was found among some early papers by Newton (Herivel 1965, 196).
Newton and, independently, Christiaan Huygens, had deduced that a body moving
with uniform velocity $v$ in a
circular orbit of radius $r$, has a radial acceleration $a$ towards the center of the orbit given by $a=v^2/r$.
Since $v=2\pi r/T$, where $T$ is the period, he obtained $a=4\pi^2 r/T^2$. Substituting for $T$ Kepler's third law
for planetary motion,
$T^2=c r^3$, where $c$ is a constant, gives $a=(4\pi^2/c) (1/r^2)$. In this way Newton found that the acceleration $a$
depends inversely on the square of the radial distance $r$ between a planet and the sun.
In the {\it Principia}, Newton proposed that the same relations apply also to the motion of the
satellites around a planet.
According to his principle of universal gravitation, $a$ is
proportional to the mass $M$ of the center of force, and therefore
$a=GM/r^2$, where $G$ is a universal constant, now called Newton's constant.
Hence $M=4\pi^2/Gc$, where $c$ is Kepler's constant, and by determining the
value of $c$ for the satellites of Jupiter, Saturn and the earth,
Newton obtained the ratio of the mass of each of these planets
relative to the mass of the sun, given in Prop. 8, Book 3 of the {\it Principia}.
}.
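The computation sketched in this footnote is easy to reproduce with modern data. The following sketch (modern constants and orbital elements, not Newton's own numbers) applies $M = 4\pi^2 r^3/(GT^2)$ to the Earth's orbit around the Sun and to Io's orbit around Jupiter, recovering the solar mass and a Sun-to-Jupiter mass ratio close to the $1/1067$ that Newton reported in Prop. 8:

```python
import math

G = 6.674e-11  # Newton's gravitational constant, m^3 kg^-1 s^-2

def central_mass(radius_m, period_s):
    """Mass of the central body from Kepler's third law: M = 4*pi^2 r^3 / (G T^2)."""
    return 4.0 * math.pi ** 2 * radius_m ** 3 / (G * period_s ** 2)

# Earth around the Sun: r = 1 AU, T = 1 year.
m_sun = central_mass(1.496e11, 3.156e7)
# Io around Jupiter: r = 4.217e8 m, T = 1.769 days.
m_jup = central_mass(4.217e8, 1.769 * 86400.0)

print(f"M_sun ~ {m_sun:.3e} kg")                     # about 2e30 kg
print(f"M_sun / M_jupiter ~ {m_sun / m_jup:.0f}")    # about 1050
```

Newton had no value of $G$, which is why he could only quote the masses of Jupiter, Saturn and the earth as ratios to the mass of the Sun; the ratio above depends only on the two Kepler constants, just as in his Prop. 8.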
In his essay {\it La cause de la pesanteur}, the great Dutch scientist Christiaan Huygens remarked
that he was very pleased to read how Newton, by
\begin{quote}
supposing the distance from the earth to the sun to be known, had been able to compute
the gravity that the inhabitants of Jupiter and Saturn would feel compared with what we feel
here on earth, and what its measure would be on the Sun (Cohen 1997, 219).
\end{quote}
The importance of these developments was appreciated not only by astronomers and
mathematicians who read the {\it Principia}, but also by philosophers and by the educated public. The
French philosopher, Fran\c{c}ois Marie Voltaire encapsulated this recognition with a succinct comment,
\begin{quote}
Avant Kepler tous les hommes etoient aveugles, Kepler fut
borgne, et Newton a eu deux yeux
\footnote {Before Kepler all men were blind; Kepler was one-eyed, and Newton had
two eyes (Feingold 2004, 99)} (Besterman 1968, 83)
\end{quote}
and shortly after Newton's death the English poet Alexander Pope wrote
\begin{quote}
Nature, and Nature's Laws lay hid by night\\
God said, let Newton be! and all was light.
\end{quote}
As the reputation of the {\it Principia} grew, even people who had little or no mathematical ability
attempted to understand its content. The English philosopher John Locke, who was in exile in Holland,
went to see Huygens
who assured him of the validity of Newton's propositions. He was able to follow its conclusions, and later
befriended Newton, referring to him ``as the incomparable Mr. Newton'' in the preface of his essay
{\it Concerning Human Understanding} (Locke 2001, 13). While in exile in England, Voltaire became acquainted with Newton's work,
and after his return to France
he wrote the {\it Elemens de la Philosophie de Neuton} which popularized Newton's ideas in France.
In this enterprise he was fortunate to have the collaboration of a gifted companion, Gabrielle Le Tonnelier de
Breteuil, better known as the Marquise du Ch\^{a}telet, who translated the { \it Principia} into French.
Francesco Algarotti, who was in communication with Voltaire, published his {\it Newtonianismo per le dame} which became fashionable in Italy (Feingold 2004).
Initially, there was considerable reluctance to accept
Newton's general principles, particularly because an action at a distance
was generally regarded as due to occult forces, in contrast to contact
forces. According to Descartes, gravitational forces were due to vortices of unobserved
celestial dust, and this explanation had been accepted by most Continental
astronomers. At the end of Book 2 of the {\it Principia}, Newton gave a proof that Cartesian
vortices were incompatible with Kepler's second and third laws for planetary motion, but his proof
was based on somewhat arbitrary assumptions about the frictional properties of these vortices, and
in an essay, `Nouvelles pens\'{e}es sur le syst\`{e}me de M. Descartes',
the Swiss mathematician Johann Bernoulli gave several objections to this proof (Aiton 1995, 17).
In his {\it Discours sur
les diff\'{e}rentes figures des astres}, Pierre-Louis Moreau de Maupertuis openly defended Newton's views,
pointing out its predictive power, and remarked that Cartesian impulsion
was no more intelligible than Newtonian attraction (Aiton 1995, 19), but that universal gravitation
was ``metaphysically viable and mathematically superior''
to vortices as an explanation of celestial mechanics (Feingold 2004, 98). In a remarkable tour de force,
Newton had applied his gravitational theory to determine the shape of the earth, and he
found that the centrifugal force due to the daily rotation about its axis deformed the earth
into an oblate spheroid flattened at the poles\footnote{\normalsize Assuming that the earth can be regarded as a rotating fluid
held together by its own attractive gravitational
forces, in Prop. 19, Book 3, Newton gave a proof that the shape of the earth is an oblate
spheroid corresponding to an ellipse of revolution about its short axis.
He calculated the ellipticity $\epsilon=a/b-1$, where $a$ and $b$ are the major and
minor axes of the ellipse,
by the requirement that two channels of water to the center of the earth, one
from the pole and another from the equator
would be in pressure equilibrium at the center. Remarkably,
in his calculation Newton also took into account the variation
of the gravitational force inside the earth due to the shape distortion which he discussed
in Prop. 91, Cor. 3, Book 1 (for a modern discussion see (Whiteside 1974, 225-226) and (Chandrasekhar
1995, 313-317)).
Newton obtained
for the ellipticity, $\epsilon=(5.04/4) \delta $, where $\delta $, the ratio
of centrifugal acceleration to the acceleration of gravity $g$, is $\delta=4\pi^2 r_e /(g T^2)$,
$r_e$ is the mean radius of the earth and $T$ is the period of rotation (one sidereal day).
This gives $\delta=1/289$, and Newton found that $\epsilon=1/229$ and
announced that the distance to the center of the earth at the equator exceeds the
value at the poles by $\epsilon r_e$= 17 miles. The present observed value is 13 miles,
because the actual density
of the earth is not homogeneous.
A similar calculation was carried out
by Huygens who included, however, only the effect of the centrifugal forces, because he
did not accept Newton's principle of universal gravitation. Hence, Huygens obtained
$\epsilon=(1/2)\delta = 1/578$. Newton's result was first derived by
Clairaut, who showed that the relation $\epsilon=(5/4)\delta$ is correct to first order
in $\epsilon$ (Todhunter 1962, 204).
}.
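The numbers quoted in this footnote can be checked directly (a sketch; the value $r_e \approx 3900$ miles used for the mean radius is an assumption chosen only to match the units of Newton's 17-mile figure):

```latex
\[
  \delta = \frac{4\pi^{2} r_e}{g\,T^{2}} \approx \frac{1}{289},
  \qquad
  \epsilon_{\mathrm{Newton}} = \frac{5.04}{4}\,\delta \approx \frac{1}{229},
  \qquad
  \epsilon_{\mathrm{Huygens}} = \frac{1}{2}\,\delta = \frac{1}{578},
\]
\[
  \epsilon_{\mathrm{Newton}}\, r_e \approx \frac{3900~\text{miles}}{229} \approx 17~\text{miles}.
\]
```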
This prediction was contrary to the observations of the Italian astronomer Gian Domenico Cassini, who had joined
the French Academy of Sciences, and his son Jacques Cassini. They had
obtained faulty geodetic measurements, indicating
that the earth is a prolate spheroid \footnote{\normalsize For example, Jacques Cassini found that the length of a degree of longitude in the parallel of St. Malo, France,
is 36,670 {\it toises}, but on the supposition of a spherical earth it should be 37,707
{\it toises} (Todhunter 1962, 111) (the length of a {\it toise} can be obtained
from Newton's remark in Prop. 19, Book 3, that 367,196 {\it London} feet, the mean
radius of the earth obtained by a Mr. Norwood, is equal to 57,300 {\it Parisian} toises).
}.
To resolve this conflict, Maupertuis together with the French mathematician Alexis-Claude
Clairaut led a scientific expedition commissioned by the French Academy of Sciences that left for Lapland on April 20, 1736
to measure the length of a degree of the meridian at that latitude, in order to
compare it with the corresponding length at the latitude of Paris. Maupertuis
became famous for confirming Newton's prediction, and Voltaire called him the
``aplatisseur du monde et de Cassini", remarking sarcastically that
\begin{quote}
Vous avez confirm\'{e} dans des lieux pleins d'ennui\\
Ce que Newton connut sans sortir de chez lui
\footnote{\normalsize
You have confirmed in these difficult locations
what Mr. Newton knew without leaving his home. (Florian 1934, 664)
}
\end{quote}
Another expedition, headed by La Condamine, Bouguer and Godin, also members of the French Academy
of Sciences, went about a year earlier to Peru to measure a
corresponding arc of the meridian near the
equator. But they ran into considerable difficulties and delays due to
personal animosities between the leaders of the expedition, and only
ten years later were they able to
report their results, which were consistent with the conclusions of Maupertuis' expedition (Todhunter 1962).
Subsequently, the problem of evaluating theoretically the shape of the earth became the subject of intense efforts
by Continental mathematicians who studied Newton's {\it Principia}, and its difficulty
spurred major advances in mathematical physics.
In his {\it Lettres philosophiques}, Voltaire
reported these controversies with his characteristic wit,
\begin{quote}
For your Cartesians everything is moved by an impulsion you don't really understand, while for Mr. Newton
it is by gravitation, the cause of which is hardly better known. In Paris you see the earth shaped
like a melon, in London it is flattened on two sides (Voltaire 1734)
\end{quote}
The methods of Descartes and Newton were neatly contrasted
by Bernard le Bovier de Fontenelle, the secretary of the French Academy of Sciences, who wrote in his {\it Eloge de Newton},
\begin{quote}
Descartes proceeds from what he clearly understands to find the cause of what he sees, whereas
Newton proceeds from what he sees to find the cause, whether it be clear or obscure (Fontenelle 1728)
\end{quote}
Newton, however, left open the question of the origin of the gravitational force. During a correspondence
with the Reverend Richard Bentley, he made his reservations clear,
\begin{quote}
It is unconceivable that inanimate brute matter should (without the mediation of something else
which is not material) operate and affect other matter without mutual contact; as it must if gravitation
in the sense of Epicurus be essential and inherent in it ... That gravity ... may act at a distance through
a vacuum ... is to me so great an absurdity that I believe no man who has in philosophical matters any
competent faculty of thinking can ever fall into it. Gravity must be caused by an agent acting
constantly according to certain laws, but whether this agent is material or immaterial is a question
I have left to the consideration of my readers (Westfall 1995, 505).
\end{quote}
This view led to accusations by some readers of the {\it Principia}
that Newton had left the physics out of his mathematics.
Kepler had shown that the planets travel along elliptical orbits with the sun located at one of the foci
\footnote{\normalsize This subtlety was not always appreciated by the general public. For example,
the Bank of England issued
a two pound note, now retracted,
showing incorrectly a figure from Newton's {\it Principia},
with the sun at the center of the ellipse.},
moving with a non-uniform velocity satisfying his second law or area law\footnote{\normalsize
In the {\it Principia}, Book I, Prop. 1, Newton gave the following
formulation of the area law:
\begin{quote}
The areas which bodies made to move in orbits describe by radii drawn from an unmoving center
of forces lie in unmoving planes and are proportional to the times (Cohen 1999).
\end{quote}}.
To account for such an orbit, Newton had to extend
the rigorous geometrical methods developed by Greek mathematicians
to encompass the limit of ratios and sums of { \it vanishing} quantities (Nauenberg 1998). In
the { \it Principia} such quantities were represented by lines and arcs of curves of arbitrarily
small length, a procedure that had been introduced by Apollonius, and applied
by Ptolemy for calculations in his geocentric model of celestial motion, and
by Archimedes for calculations of
lengths and areas encompassed by curves. In the 17th century, this procedure was developed further
by several mathematicians including in particular Ren\'{e} Descartes, whose work Newton had
studied carefully \footnote{\normalsize Newton studied Frans van Schooten's second edition (1659) of his
translation of Descartes {\it Geometrie} from French into Latin, with appended tracts
by Hudde, Heurat and de Witt. This translation was crucial to Newton's education
because he could not read French.} (Whiteside 1967).
Since motion occurs with the passage of time, it was necessary
for Newton to express time as a geometrical variable, but this was a major stumbling block (Nauenberg 1994a). It was only
after a lengthy correspondence with Robert Hooke (Turnbull 1960, 297-314) (Nauenberg 1994b), that Newton
was able to give a proof of the validity of Kepler's area law for any central force (Nauenberg 2003). Newton recalled that
\begin{quote}
In the year 1679 in answer to a letter from Dr. Hook ... I found now that
whatsoever was the law of the forces which kept the Planets in their Orbs,
the area described by a radius from them to the Sun would be proportional to
the times in which they were described. And by the help of these two propositions
I found that their Orbs would be such ellipses as Kepler had described... (Lohne 1960)
\end{quote}
Thus, Newton was able to geometrize the passage of time by the
change of an area - a concept without which writing the
{\it Principia} would not have been possible. He
emphasized its importance by starting the {\it Principia} with a mathematical proof of the
generalization of Kepler's second law described in Prop. 1, Book 1.
(Brackenridge 1995) (Nauenberg 2003).
The style of the {\it Principia}
followed the mathematical format of the {\it Horologium Oscillatorium} by Christiaan Huygens, who was the
most prominent scientist on the Continent during the later part of the 17th century (Huygens 1673)
(Nauenberg 1998).
In 1673, when Newton received a copy of Huygens' book from Henry Oldenburg, he promptly
responded that
\begin{quote}
I have viewed it with great satisfaction finding it full of very subtile and useful speculations very
worthy of the Author. I am glad, we are to expect another discourse of the { \it Vis Centrifuga} [centrifugal
force] which speculation may prove of good use in natural Philosophy and Astronomy, as well
as Mechanicks. (Huygens 1897, 325)
\end{quote}
In the preface of his biography, {\it A view of Sir Isaac Newton's Philosophy},
Henry Pemberton, who was the editor of the third edition of the {\it Principia}, wrote that
\begin{quote}
Sir Isaac Newton has several times particularly recommend to me Huygens' style and manner.
He thought him the most elegant of any mathematical writer of modern times, and the
most just imitator of the ancients. Of their taste, and form of demonstration Sir Isaac always
professed himself a great admirer. I have heard him even censure himself for not following them
yet more closely than he did; and speak with regret of his mistake at the beginning of his mathematical
studies, in applying himself to the works of Des Cartes and other algebraic writers, before he had
considered the elements of Euclide with that attention, which so excellent a writer deserves (Pemberton 1728).
\end{quote}
In turn Huygens greatly admired Newton's work, and in the summer of 1689 he came to
England to meet Newton and discuss with him the current theories of gravitation.
Like Leibniz, Huygens did not accept Newton's concept of an action at a distance, which was
regarded as an occult force by followers of Descartes' vortex theory of gravitation, but he accepted
the inverse square dependence on distance of the gravitational force.
In 1689 the British mathematician David Gregory visited Newton in Cambridge, and reported that
\begin{quote}
I saw a manuscript [written]
before the year 1669 (the year when its author Mr. Newton was made Lucasian Professor of Mathematics) where all the foundations of his philosophy are laid: namely the gravity of the Moon
to the Earth, and of the planets to the Sun. And in fact all these even then are subject
to calculation (Herivel 1965, 192).
\end{quote}
This manuscript, which Newton never published, revealed that already sixteen years before
writing the {\it Principia},
Newton had carried out the ``moon test", later described in Book 3, Prop. 4, where he compared the gravitational
force of the earth on the moon to the force of gravity on an object on the surface of the earth (Herivel 1965, 192-198)(see Appendix).
In order to make this
comparison, however, Newton had to assume that the gravitational force varied inversely with the square of the distance from the center of the earth at any distance above its surface. Previously, he had deduced the inverse square dependence of this force from planetary motion (see footnote 2), where the distance between the planets and the sun is very large compared to their sizes, and
it was reasonable to treat these celestial bodies as point masses. But to assume that this radial dependence was still valid for much shorter distances, and in particular down to the surface of a
planet had to be justified. Apparently it was only after Newton already had started writing the {\it Principia},
that he was able to provide such a justification, by assuming that the gravitational
attractive force due to a finite-size body can be compounded by adding the contribution of each
of its elements. In Prop. 71, Book 1 of the {\it Principia} he gave a remarkable proof
that the gravitational force of a
spherical distribution of mass acts at any distance from its surface as if the total mass is concentrated
at its centre (Chandrasekhar 1995, 269-272). Furthermore, in Prop. 91, Book 1, he considered also the force acting along the axis of any solid
of revolution, and in Cor. 2 he applied the result to evaluate the special case of an oblate ellipsoid, which he needed to
determine the ellipticity of the earth due to its daily rotation (see footnote 6).
In Book 3 of the {\it Principia}, Newton applied his mathematical theory of orbital
dynamics to planetary and lunar motion and to the motion of comets in order
to provide evidence for the universal law of gravitation - that the attractive gravitational force between two
bodies is proportional
to the product of their masses and inversely proportional to the square of the distance
between them \footnote{\normalsize In `Rules of Reasoning in Philosophy', {\it Principia}, Book 3, Rule 3,
Newton concluded:
\begin{quote}
Lastly, if it universally appears, by experiments and astronomical observations, that all
bodies about the earth gravitate towards the earth, and that in proportion to the quantity
of matter [mass] which they severally contain; that the moon likewise, according to the
quantity of its matter, gravitates towards the earth; that on the other hand, our sea gravitates
towards the moon; and all the planets one towards another; and the comets in like manner
towards the sun; we must in consequence of this rule, universally allow that all
bodies whatsoever are endowed with a principle of mutual gravitation.
\end{quote}
Compare Newton's formulation of universal gravitation
with the earlier one of Robert Hooke, who wrote,
\begin{quote}
That all Celestial Bodies whatsoever, have an attraction or gravitating power towards their
own Centers, whereby they attract not only their own parts, and keep them from flying from them, as
we may observe the Earth to do, but that they do also attract all the other Celestial Bodies that are
within the sphere of their activity; and consequently that not only the Sun and Moon have an influence
upon the body and motion of the Earth, and the Earth upon them, but that Mercury, also Venus, Mars
Saturn and Jupiter by their attractive powers, have a considerable influence upon its motion as in the
same manner the corresponding attractive power of the Earth hath a considerable influence upon every
one of their motions also (Hooke 1674) (Nauenberg 1994a).
\end{quote}} .
He persuaded the Royal Astronomer, John Flamsteed, to provide him
with the best available observational data at the time for the periods and major axis of the planets,
and for the Jovian and Saturnian satellites. Then he showed
that these observations were in good agreement with Kepler's third law, Book 3, Phenomenon 2 - 4, which
for circular motion he had considered some 20 years earlier, as formulated in Cor. 6 of Prop. 4, Book 1 (see Appendix):
\begin{quote}
If the
periodic times are as the three half powers of the radii, the centripetal force will
be inversely as the squares of the radii.
\end{quote}
In Prop. 15, Book 3, he extended this proof to elliptical motion, applying Cor. 1 of Prop. 45,
Book I, to show that the near immobility of the aphelia of the planets, Book 3, Prop. 14, implied that the
gravitational force between the planets and the sun satisfied the inverse square law.
This was Newton's best proof for the inverse square law, because
he had shown that the smallest deviation from this law would give rise to
a precession of the planetary aphelia which over the centuries would have accumulated to give an
observable effect.
Newton was aware, however, that astronomical observations had shown
that there were deviations from Kepler's laws in the motion of the planets and
the moon. In the preface to the {\it Principia}, he wrote:
\begin{quote}
But after I began to work on the inequalities of the motion
of the moon, and ...the motions of several bodies with
respect to one another ...I thought that publication should be put
to another time so that I might investigate these other things...
\end{quote}
Remarkably, a large part of these investigations apparently took place during the time
that Newton was composing his book, when he developed
methods to calculate the perturbation of the solar gravitational force on the lunar motion
around the earth,
and the effects due to the interplanetary
gravitational forces on the motion of the planets around the sun. In Prop. 45
he presented his simplest perturbation approximation for the lunar orbit
by assuming that it was a Keplerian elliptic orbit, but with its major axis rotating
uniformly. In characteristic fashion, first he solved the problem of
obtaining the exact law of force which would give rise to such an orbit,
and found that this force was a linear combination of inverse square and inverse
cube forces. Then he determined the rotation rate of the lunar apse by
considering the effect of the
component of the solar gravitational force along the earth-moon radial distance averaged
over a period. But this approximation gave a precession of the major axis of the lunar ellipse of only half the
observed rate.
In the first two editions of the {\it Principia} Newton was somewhat ambivalent about this large discrepancy,
and only in the third edition, which appeared in 1726, did he add a remark,
in Corollary 2 of Prop. 45, that ``the apse of the Moon is about twice as swift''
as the value that he had calculated.
This discrepancy became one of the first major challenges for the mathematicians and
astronomers who studied the {\it Principia}, and it took another 20 years before a
solution to this problem was first found by Clairaut and the French mathematician
Jean le Rond d'Alembert.
\section*{The Reception of the {\it Principia} by Mathematicians in Europe}
When Continental mathematicians and astronomers,
primarily from Holland, Germany, Switzerland, and France, first read the {\it Principia},
they had some difficulties understanding Newton's novel
mathematical concepts, with its combination of geometrical quantities in the tradition
of Greek mathematics and his concept of limits of ratios and sums of infinitesimals - quantities
which become vanishingly small (Nauenberg 2010). After introducing three ``Laws of Motion'', Newton
presented ten mathematical ``Lemmas'' on his geometrical differential method of ``first and last
ratios''. These lemmas constitute the basis for his calculus, and he referred
to them in the proof of his propositions. Except for Lemma 2 in Book 2 of the {\it Principia}, Newton did not explain his analytic
differential calculus
in much detail, and
European mathematicians, who already had been introduced to an equivalent calculus \footnote{\normalsize
In 1696 the Marquis de l' Hospital published {\it Analyse des infiniment petits}, based on lectures about
the calculus of Leibniz,
given to him by Johann Bernoulli who he had hired as his private tutor in mathematics.}
by
the German philosopher and mathematician, Gottfried Wilhelm Leibniz, first
had to translate Newton's mathematical language into Leibniz's language before
they could make further progress. Indeed, Leibniz was the first to express Newton's
formalism for orbital motion in the form of a differential equation based on his calculus (Nauenberg 2010).
Leibniz claimed to have achieved his results having only read a review of Newton's {\it Principia},
but in 1969 E.A. Fellmann (1973) obtained a copy of Newton's work which contained abundant
marginalia by Leibniz, indicating that he had carefully studied the text before undertaking his
own work. Moreover, recently discovered manuscripts show Leibniz's preparatory work for his
1689 essay {\it Tentamen de motuum coelestium causis}, based on a reading of the {\it Principia}
(Aiton 1995, 10) (Bertoloni Meli 1991). But Leibniz obtained a differential
equation of motion for celestial objects that was remarkably original \footnote {\normalsize Applying Prop. 1 in Book 1 of the {\it Principia}, Leibniz derived an expression for the
second order differential $ddr$ for the radial distance. This led him to a genuine discovery which is not
found in the {\it Principia}: that this differential
is proportional to an effective centrifugal force minus the central attractive force $f(r)$. In modern notation Leibniz's result
corresponds to the equation $ d^2r/dt^2= h^2/r^3 - f(r)$, where $h=r^2 d\theta/dt $ is a constant corresponding
to the angular momentum. For the case that the orbit is an ellipse he found
that $f(r)=\mu/r^2$, where $\mu$ is the strength of the gravitational interaction (Aiton 1960) (Aiton 1995) (Bertoloni Meli 1991) (Guicciardini 1999). Leibniz, however, assumed without justification that $h=\mu=$ latus rectum of the ellipse,
and he incorrectly attributed Kepler's area law to a property of celestial vortices, which leads to a physically inconsistent interpretation
of his equation.}.
Another mathematician who applied Leibniz's version of the differential calculus was Jacob Hermann, a member
of a group around Jacob and Johann Bernoulli, two of Europe's leading mathematicians
who had formed a school in Basel (Guicciardini 1999). Expressing Prop. 1 and Prop. 2 in Book 1 of Newton's
{\it Principia} in the language of this calculus, he obtained a differential equation for the motion
of a body under the action of a central force. Then he gave a proof that conic sections were the only solutions for
the case that the central force varied inversely with the square of the distance from the center of force (Hermann 1710)
(Nauenberg 2010).
This was an important result, because in the first edition of the
{\it Principia}, in Cor. 1 to Prop. 13, Newton had asserted, without proof, that conic sections were the
{\it unique} solutions to orbital motion under inverse square forces. Johann Bernoulli
criticized Hermann's solution for being incomplete (Hermann had left out a constant of the motion), and then
derived the elliptic orbit by solving, via a suitable transformation of variables,
the general integral
for orbital motion in a central force field given in Prop. 41, Book 1 of the
{\it Principia}, for the special case of an inverse square force (Bernoulli 1710) (Nauenberg 2010).
Remarkably, Newton did not include
this fundamental solution in the {\it Principia}, giving rise to a gap that has caused considerable
confusion in the literature that remains up to the present time. Instead, Newton gave as
an example the orbit for an inverse cube force \footnote {\normalsize Prop. 41, Cor. 3} \footnote{
In Prop. 11-13 Newton gave a proof that
if the orbit for a central force is a conic section, then the force varies inversely
as the square of the radial distance. Johann Bernoulli criticized the incompleteness of
Cor. 1 of Prop. 13, Book 1, where Newton claimed to give a proof
of the solution of the {\it inverse} problem: given the gravitational force, to show that
the resulting orbit is a conic section\footnote {\normalsize
In the 1980's, Bernoulli's criticism of Cor. 1 to Prop. 11-13 was
revived by Robert Weinstock,
in `Dismantling a centuries-old myth: Newton's Principia and inverse
square orbits', {\it American Journal of Physics} 50 (1982) 610-617.
Weinstock's arguments are dismantled in (Nauenberg 1994c).} }.
Bernoulli also communicated to Newton an error
he had found in Prop. 10 of Book 2. In both cases Newton made corrections in the
next edition of the Principia (1713) without, however, acknowledging Bernoulli's important
contributions (Guicciardini 1999). Some British mathematicians like
David Gregory were able to contact Newton, and get help from him to overcome obstacles
in understanding the Principia, but this appears not to have been possible for
Continental mathematicians.
After Leibniz, the first Continental mathematician who undertook the reformulation of Newton's mathematical
concepts into the language of Leibniz's calculus was Pierre Varignon (Aiton 1960, 1955) (Bertoloni Meli 1991) (Guicciardini 1999).
Varignon introduced an alternative expression for a central force in terms of the
curvature of the orbit\footnote{\normalsize Varignon (1701) called the radius of curvature `le rayon de D\'{e}velopp\'{e}', and
obtained his expression for the central force by recognizing
that a small arc of the orbit corresponds to that of a circle with this radius, called the {\it osculating} circle
by Leibniz but originating in the work of Huygens (Kline 1972) (Nauenberg 1996)
}, without being aware that Newton's earliest understanding of non-circular orbital motion was also based on curvature (Nauenberg 1994a).
In a cryptic note written in his 1664 Waste book, Newton remarked that
\begin{quote}
If the body b moved in an ellipse, then its force in each point (if its motion in that point
be given) [can] be found by a tangent circle of equal crookedness with that point of the
ellipse (Herivel 1965, 130)
\end{quote}
Here the word ``crookedness" refers to curvature which is measured locally by the
inverse radius $\rho$ of the
tangent or osculating circle (as it was named later by Leibniz) at any point on an ellipse. Curvature
was also a mathematical concept that had been introduced earlier by Huygens
in his {\it Horologium Oscillatorium} (Huygens 1673) (Nauenberg 1996).
Evidently, Newton was aware that
the central force or acceleration $a$ for non-uniform orbital motion
can be obtained from a generalization
of the relation for uniform circular motion, $ a_c=v^2/\rho$, where $v$ is the velocity,
which he, and independently Huygens, had obtained earlier (see Appendix). Then
$a=a_c/\cos\alpha$, where $\alpha$ is the angle between the direction of the central force
and that of the radius of curvature. The problem, however, was that the velocity $v$,
which varies along the orbit for non-circular motion because the central force then
has a tangential component, had to be known (Brackenridge 1995) (Brackenridge 2002).
But 15 years later, Newton found a proof for the area law,
which implies that for any central force, the area swept by the radial line per unit time,
$(1/2)vr\cos\alpha$, is a constant (proportional
to the conserved angular momentum), where $r$ is the radial distance. By solving this expression
for $v$ and substituting, Newton had an
explicit expression for the central
acceleration, $a \propto 1/(\rho r^2 \cos^3\alpha)$. Indeed, for conic sections
the quantity $\rho \cos^3\alpha$ is a constant (the semi-latus rectum of an ellipse), which provided Newton with a succinct
proof that for such orbits the force depends inversely on the square of the
radial distance (Nauenberg 1994a). This relation was also found by
Abraham DeMoivre\footnote{\normalsize \begin{quote} After having found this theorem, I showed it to M. Newton and I was proud to believe that it would have appeared
new to him, but M. Newton had arrived at it before me; he showed this theorem to me among his papers that he is
preparing for a second edition of his {\it Principia Mathematica } ...
\end{quote}}
(Guicciardini 1999, 226), and applied by John Keill and Roger Cotes who were members of the school of
British mathematicians \footnote{\normalsize For a brief history of this important development see
(Whiteside 1974, 548-549)}.
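The curvature measure of force described above can be assembled into a compact derivation (a sketch in modern notation; $A$ denotes the constant areal velocity):

```latex
% Curvature measure of a central force:
\[
  a = \frac{a_c}{\cos\alpha} = \frac{v^{2}}{\rho\cos\alpha},
  \qquad
  \tfrac{1}{2}\,v\,r\cos\alpha = A \;\;(\text{area law})
  \;\Longrightarrow\;
  v = \frac{2A}{r\cos\alpha},
\]
\[
  a = \frac{4A^{2}}{\rho\, r^{2}\cos^{3}\alpha} .
\]
```

For a conic section with the force directed to a focus, $\rho\cos^{3}\alpha$ equals the (constant) semi-latus rectum, and hence $a \propto 1/r^{2}$.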
In the first edition of the {\it Principia},
however, the curvature expression for the force does not appear explicitly, although Newton applied
it in a few instances without any explanation \footnote {\normalsize
Prop. 15, Book 2, and Prop. 26-29, Book 3.}, while in the second edition curvature is discussed
in a new Lemma, Lemma 11, and the curvature measure for force is derived
as corollaries to Prop. 6. Subsequently, Newton applied it to obtain
``another solution'' to the fundamental problem formulated in Prop. 11, Book 1,
\begin{quote}
Let a body revolve in an ellipse,
it is required to find the law
of the centripetal [central] force tending towards a focus of the ellipse.
\end{quote}
According to Newton's recollections, as told to DeMoivre in 1727,
\begin{quote}
In 1684 Dr. Halley came to visit him at Cambridge and after they had been
some time together, the Dr. asked him what he thought the Curve would
be that would be described by the Planets supposing the force of attraction
towards the Sun to be reciprocal to the square of their distance from it. Sir
Isaac replied immediately that it would be an {\it Ellipsis}, the Dr. struck with
joy and amazement asked him how he knew it, why said he, I have calculated
it, whereupon Dr. Halley asked him for his calculation without delay. Sir Isaac
looked among his papers but could not find it, but he promised him to renew it,
and then to send it to him. (Westfall 1995).
\end{quote}
However, the solution which Newton eventually sent to Edmund Halley in a manuscript entitled
{\it De Motu} (Whiteside 1974), and that three years later he presented in the same form
in Prop. 11 of the {\it Principia},
treated instead the inverse to the problem posed by Halley, namely given that the orbit is an ellipse, to prove that the
central force obeys the inverse square law, or as Newton formulated in 1687,
\begin{quote}
Let a body revolve in an ellipse; it is required to find the law of the centripetal force tending
towards a focus of the ellipse (Cohen 1999, 462)
\end{quote}
Relations with Bernoulli and his school were further aggravated when the notorious
priority dispute on the invention of the calculus erupted between Newton and Leibniz
in 1711. By the 1740's, serious reservations arose regarding the general
validity of the inverse square law for gravitational force because of the failure
of Newton's approximation of the solar perturbation to account for the rate of
precession of the lunar apside. One of the first to question the validity of this law
on this ground was the great mathematician Leonhard Euler.
He remarked that,
\begin{quote}
having first supposed that the force acting on the Moon from
both the Earth and the Sun are perfectly proportional reciprocally to the squares
of the distances, I have always found the motion of the apogee to be almost
two times slower than the observations make it: and although several small terms that
I have been obliged to neglect in the calculation may be able to accelerate the motion of the
apogee, I have ascertained after several investigations that they would be far from sufficient
to make up for this lack, and that it is absolutely necessary that the forces by which the
Moon is at present solicited are a little different from the ones I supposed. (Waff 1995, 37).
\end{quote}
He concluded that
\begin{quote}
all these reasons joined together appear therefore to prove
invincibly that the centripetal forces in the Heavens do not follow exactly the law
established by Newton (Waff 1995, 37).
\end{quote}
Clairaut had reached similar conclusions
and was delighted to find that he was in agreement with Euler. He had also found
\begin{quote}
that the period of the apogee [i.e. the time it takes for the lunar apogee to return
to the same point in the heavens] that follows from the attraction reciprocally proportional
to the squares of the distances, would be about 19 years, instead of a little less than
9 years which it is in fact (Waff 1995, 39)
\end{quote}
a result that Newton had mentioned earlier in Prop. 45, Book 1.
To account for this discrepancy, Clairaut proposed that an additional force was also in effect
which varied with distance inversely as the fourth power, possibly due to Cartesian
vortices. Actually, suggestions for possible corrections to the inverse square
gravitational law had been considered by Newton in Query 31 of his {\it Opticks}, but
he did not want to publicize them. Another mathematician,
Jean le Rond d'Alembert, arrived at the same discrepancy for the motion of the
lunar apogee, but in contrast to Euler and Clairaut, he
did not question the mathematical
form of Newton's gravitational law because of its successes in describing
other inequalities of the lunar motion. Ultimately, the French Academy of Sciences proposed
a prize for the solution of this problem, and in 1749 Clairaut finally obtained a solution
without altering the inverse square force, by considering higher order contributions to the solar perturbation,
followed by d'Alembert with a more careful analysis which gave the same result.
\footnote {\normalsize The title that Clairaut chose for his winning essay was `Theory of the Moon Deduced
from the Single Principle of Attraction Reciprocally Proportional to the Squares of
the Distances'} (Waff 1995). Previously, a similar solution had been obtained by Newton,
but it contained some errors (Nauenberg 2001a), and in a Scholium to Prop. 35, Book 3, inserted only in the first edition
of the {\it Principia}, he declared that
\begin{quote}
...These computations, however, excessively complicated and clogged with approximations as they
are, and insufficiently accurate, we have not seen fit to set out.
\end{quote}
The details of Newton's computations remained unknown until 1872 when they were found among his papers
in the Portsmouth Collection (Whiteside 1974, 508-538), (Nauenberg 2000), (Nauenberg 2001a).
The importance of Clairaut's result can hardly be overestimated. In admiration Euler
declared in a letter to Clairaut that
\begin{quote}
. . . the more I consider this happy discovery,
the more important it seems to me. For it is very certain that it is only since this
discovery that one can regard the law of attraction reciprocally proportional to
the squares of the distance as solidly established, and on this depends the entire
theory of astronomy (Waff 1995, 46)
\end{quote}
In 1748 the French Academy of Sciences chose for its prize contest a theory that would explain the
inequalities in the motion of Jupiter and Saturn due to their mutual gravitational
interaction, which Newton had considered only semi-quantitatively in Prop. 13, Book 3 \footnote{\normalsize
From the action of Jupiter upon Saturn ``...arises a perturbation of the orbit of Saturn at every conjunction of
this planet with Jupiter, so sensible, that astronomers have been at a loss concerning it'' (Cohen 1999, 818).}.
This problem was much more difficult than the lunar case, and Euler was the first
to deal with it (Euler 1769), and now he declared that
\begin{quote}
... because Clairaut has made the important discovery that the movement of the apogee of the Moon
is perfectly in accord with the Newtonian hypotheses ..., there no longer remains the least doubt about
its proportions... One can now maintain boldly that the two planets Jupiter and Saturn attract each other
mutually in the inverse ratio of the squares of their distance, and that all the irregularities that can be discovered
in their movement are infallibly caused by their mutual action... and if the calculations that one claims to have
drawn from the theory are not found to be in good agreement with the observations, one will always be
justified in doubting the correctness of the calculations, rather than the truth of the theory (Waff 1995, 46)
\end{quote}
After missing an expected lunar eclipse, Tycho Brahe had discovered a bi-monthly variation in the lunar speed,
and Newton was able to account for this variation as an effect of the solar gravitational force.
In Prop. 28, Book 3, Newton introduced a novel frame of reference where
the earth is fixed at the center of a rotating frame with the period of one year. In this
frame the sun stands still when
the eccentricity of the earth-sun orbit is neglected. Then taking into account the solar gravitational force,
Newton found an approximate periodic orbit of the moon which accounted for the periodicity of the
variation discovered by Brahe. In Prop. 29, Book 3, appropriately
entitled `To find the variation of the moon',
he calculated the amplitude of this variation, and found it in very good agreement with Brahe's observation.
In his review of Newton's work on lunar theory the great French mathematician and
astronomer, Pierre-Simon Laplace, singled out this result,
and remarked admiringly at Newton's insightful approximations,
\begin{quote}
Such hypotheses in calculations ... are permitted to inventors during
such difficult researches \footnote{\normalsize
Ces hypoth\`{e}ses de calcul... sont permises aux inventeurs
dans des recherches aussi difficiles (Laplace 1825, 391)
}
\end{quote}
\section*{Reception of Newton's gravitational theory for planetary and lunar motion}
Inspired by Newton's work,
Leonhard Euler introduced his rotating frame to calculate the solar perturbation to the lunar motion (Euler 1772). Likewise, in 1836
this frame was
considered also by Carl Gustav Jacobi, who gave a proof for the
existence of a constant of the motion in what became known as the
restricted three body problem. Later,
the American astronomer George Hill also obtained periodic solutions in this rotating frame
(Hill 1878), and his work was extended by
Henri Poincar\'{e}, leading him eventually to his profound discovery
of {\it chaotic} orbital motion in Newtonian dynamics (Poincar\'{e} 1892), (Barrow-Green 1991), (Nauenberg 2003b).
In twenty-two corollaries to Proposition 66, Book 1, Newton described entirely in prose his perturbation methods,
but his detailed calculations remained unpublished (Nauenberg 2000), (Nauenberg 2001a).
Here Newton considered gravitational perturbations to the elliptical motion of a
planet around the sun or the moon around the earth as a sequence of
impulses, equally spaced in time, which instantaneously alters the velocity of the celestial body in its orbit without, however, changing its position when these impulses occur. In Prop. 17, Book 1,
Newton had shown how the orbital parameters - the eccentricity, and the magnitude and direction
of the principal axis of the ellipse - can be determined given the velocity and position at a
given time (initial conditions). Hence, these impulses
lead to periodic changes in the orbital parameters which are determined by the discontinuous change
in velocity after the impulse has taken place. In corollaries 3 and 4 of Proposition 17, Newton gave a succinct description of his method of variation of orbital parameters. These corollaries were
added to later drafts of the {\it Principia} \footnote {\normalsize In the initial revisions of the early manuscript for the {\it Principia}, Prop. 17
contained only Corollaries 1 and 2 (Whiteside 1974, 160--161)},
indicating that Newton had developed this method
during the period when he was writing his book.
In the limit that the time interval between impulses
is made vanishingly small, Newton's perturbation method corresponds to the method
of variation of parameters developed much later by Euler
\footnote{ \normalsize Starting with the equations
of motion as second order differential
equations in polar coordinates, Euler assumes that the solution for the
orbit is described by an ellipse with time varying orbital parameters
$p$,$e$ and $\omega$, where $p$ is the semilatus rectum of the ellipse,
$e$ is the eccentricity, and $\omega$ is the angle of the major axis.
Then he obtained first order differential equations for $e$ and $\omega$
by imposing two constraints: that $p=h^2/\mu$,
where $h$ is the angular momentum, and that
$E=\mu (e^2-1)/2p$, where $E$ is the time-varying Kepler energy
of the orbit. In modern notation $\mu = GM $ where $M$ is
the sum of the mass of the earth, and the moon and $G$ is
Newton's gravitational constant (Euler 1769). It can be readily shown
that Euler's constraints lead to the same definition of the ellipse
described geometrically by Newton in the Portsmouth manuscript (Nauenberg 2000), (Nauenberg 2001a).},
Joseph Louis Lagrange and Pierre-Simon Laplace \footnote {\normalsize Laplace
obtained the differential equations for the time
dependence of the orbital parameters
by evaluating the time derivate of the
vector $\vec {f}=\vec {v} \times \vec {h} - \mu
\vec {r}/r$, where $\vec {f}$ is a vector
along the major axis of the ellipse with
magnitude $f=\mu e$. The construction of this
vector was first given in geometrical form
by Newton in Book 1, Prop. 17, and in
analytic form by Jacob Hermann and Johann Bernoulli (Bernoulli 1710), (Nauenberg 2010).
Laplace's derivation (Laplace 1822, 357-390) of the
variation of orbital parameter is in effect
the analytic equivalent of Newton's geometrical
approach in the Portsmouth manuscript (Nauenberg 2000), (Nauenberg 2001a)}.
This method is now usually credited to them.
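The two constructions described in the preceding footnotes can be illustrated numerically: for a hypothetical planar orbital state (with $\mu = 1$ and arbitrary test values), the eccentricity obtained from Euler's constraints $p=h^2/\mu$ and $E=\mu (e^2-1)/2p$ agrees with the magnitude $\mu e$ of the axis vector $\vec{f}=\vec{v}\times\vec{h}-\mu \vec{r}/r$. A minimal sketch:

```python
import math

# Check (with illustrative values, mu = 1, planar orbit) that Euler's two
# constraints from the footnote -- p = h^2/mu and E = mu*(e^2 - 1)/(2p) --
# give the same eccentricity as the axis vector f = v x h - mu*r_hat,
# constructed geometrically by Newton (Prop. 17) and analytically by
# Hermann and Bernoulli, whose magnitude is mu*e.

mu = 1.0
x, y = 1.0, 0.0          # position (arbitrary test state)
vx, vy = 0.0, 1.2        # velocity

r = math.hypot(x, y)
h = x * vy - y * vx                  # angular momentum (z-component)
p = h**2 / mu                        # Euler's first constraint: semi-latus rectum
E = (vx**2 + vy**2) / 2 - mu / r     # Kepler energy of the osculating orbit
e = math.sqrt(1 + 2 * E * p / mu)    # eccentricity from E = mu*(e^2 - 1)/(2p)

# Laplace (Runge-Lenz) vector in the plane: f = v x h - mu * r_vec / r
fx = vy * h - mu * x / r
fy = -vx * h - mu * y / r

assert abs(math.hypot(fx, fy) - mu * e) < 1e-12
print(e)   # both constructions give the same eccentricity
```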
Unpublished manuscripts in the Portsmouth collection of Newton's papers, first examined
in 1872 by a syndicate
appointed by the University of Cambridge (Brackenridge 1999) reveal that Newton had
intended to include a more detailed description of
his perturbation methods in the {\it Principia}, but neither the propositions and
lemmas in these manuscripts nor the
resulting equations, which in effect are non-linear coupled differential equations for the
orbital parameters, appeared in any of its three editions (Nauenberg 2000), (Nauenberg 2001a).
But some of his results for the inequalities of the lunar motion
appeared in a lengthy Scholium after Prop. 35, Book 3, which includes numerical results
obtained by approximate solutions to his equations.
In this Scholium, for example, Newton stated that
\begin{quote}
By the same theory of gravity, the moon's apogee goes forwards at the
greatest rate when it is either in conjunction with or in opposition to
the sun, but in its quadratures with the sun it goes backwards; and
the eccentricity comes, in the former case to its greatest quantity;
in the latter to its least by Cor. 7,8 and 9 , Prop. 66, Book 1.
And those inequalities by the Corollaries we have named, are very great,
and generate the principle which I call the semiannual equation of
the apogee; and this semiannual equation in its greatest quantity
comes to about $12^\circ18'$, as nearly as I could determine from the
phenomena \footnote {\normalsize In Cor. 7 and 8, Prop. 66, Newton gave a qualitative
explanation for this motion
of the moon's apogee due to the perturbation of the sun,
stating it was based on results given in Book 1, Prop. 45, Cor. 1.
However, these results were obtained for the case of
radial forces only, and are therefore strictly
not applicable to the solar perturbation which is
not a purely radial force with respect to the earth as a center,
and which depends also on the angle $\psi$ .
According to the differential equation for the motion
of the lunar apogee which appears in the Portsmouth manuscript,
this rate depends
on the relative angle between the moon's apogee $\omega$ and
the longitude $\theta$ of the sun, where $\omega -\theta = \psi -\phi$. It reaches a maximum
value when $\omega - \theta= n\pi$, where $n$ is an integer,
and a minimum when $n$ is an odd integer divided by 2, in
accordance with Cor. 8. In fact, substituting Newton's
numerical value $\beta =11/2$,
one finds that the maximum rate of advance is $21.57'$, and of retardation
$14.83'$. This is in reasonable agreement with the values $23'$ and
$16$ $1/3'$ given in the original
(1687) Scholium to Prop. 35
corresponding to $\beta \approx 6$.
In Cor. 9 Newton gave a qualitative
argument for the variability of the eccentricity, but
there is no evidence that he obtained this quantitative result
from his ``theory of
gravity''. According to his theory the
maximum variability of the apogee is $15m/8=8^\circ 2'$
instead of $12^\circ 18'$ as quoted in the Scholium to Prop. 35.
Although the lunar model of Horrocks was probably the
inspiration for his Portsmouth method, in the end Newton
was able to account partially for this model from his dynamical
principles.} \footnote{\normalsize These anomalies in the orbit of the moon around the earth had been a major challenge to astronomers
since Antiquity. Already by the second century B.C., Hipparchus had found that the moon's motion
at quadrature deviated in longitude by over two and a half degree from the predictions of the Greek
model of epicyclic motion, although this model accounted for the moon's position at conjunction and
opposition from the sun. Subsequently, Ptolemy proposed the first mechanism to account for this
anomaly, known as the {\it evection}, but his mechanism also predicted a near doubling of the apparent
size of the moon during its orbit which is not observed. Nevertheless, Ptolemy's lunar model was not
challenged until the 15th century, when the Arab astronomer Ibn al-Shatir developed an alternative
mechanism for the lunar motion which was later adopted by Copernicus. Their model accounted
for the evection without introducing the large unobserved variations of the lunar size in Ptolemy's
model. In the 17th century alternative models were developed by Tycho Brahe and Kepler, who
incorporated his law of areas for planetary motion into his lunar model. In 1640, Jeremiah Horrocks
refined Kepler's model further predicting correctly the inequalities in the distance of the moon
from the earth. These are some of the additional inequalities that Newton was also able to demonstrate
to be caused by the gravitational force of the sun acting on the moon (Nauenberg 2001b)}.
\end{quote}
Newton's lunar work was received
with immense admiration by those who were able to understand the profound mathematical
innovations in his theory. An early reviewer of the second edition of the {\it Principia} stated
that
\begin{quote}
the computations made of the lunar motions from their own causes, by using the
theory of gravity, the phenomena being in accord, proves the divine force of intellect and the
outstanding sagacity of the discoverer
\end{quote}
Laplace asserted that the sections of the {\it Principia} dealing with the motion of the moon
are one of the most profound parts of this admirable work \footnote{\normalsize Parmi les in\'{e}galit\'{e}s du mouvement de la Lune en longitude,
Newton n'a d\'{e}velopp\'{e} que la {\it variation}. La m\'{e}thode qu'il a suivie me
para\^{i}t \^{e}tre une des choses les plus remarquables de l'Ouvrage des {\it Principes} (Laplace 1825, 409)},
and the British Astronomer Royal, George
Airy, declared ``that it was the most valuable chapter that has ever been written on physical
science'' (Cohen 1972). The French mathematician and astronomer Fran\c{c}ois F\'{e}lix Tisserand
in his {\it Trait\'{e} de M\'{e}canique C\'{e}leste} (Tisserand 1894)
carefully reviewed Newton's lunar theory as it appeared
in the {\it Principia},
and also compared some of Newton's
results in the Portsmouth manuscript with the results of the variation of parameters
perturbation theory of Euler, Laplace and Lagrange. For
an arbitrary perturbing force, Tisserand found that
Newton's equation for the rotation of the major axis of the ellipse
was correct to lowest order in the eccentricity of the
orbit, while his application to the lunar case differed
only in the numerical value of one parameter,
which Newton gave as $11/2$, instead of the correct value of $5$ (Nauenberg 2001a).
In particular, Tisserand concluded that
\begin{quote}
Newton derives entirely correctly that the average annual movement of the
apogee is $38^\circ51'51''$, while the one given in the astronomical tables is
$40^\circ41'5''$ \footnote{\normalsize Newton d\'{e}duit, tout \`{a} fait correctement ...que le mouvement
moyen annuel de l'apog\'{e}e est de $38^\circ51'51''$, tandis que
celui qui est donn\'{e} dans les Tables astronomiques est de
$40^\circ41'5''$ (Tisserand 1894, 45)}
\end{quote}
D'Alembert, however, doubted whether some of Newton's derivations were really sound, and complained that
\begin{quote}
there are some that M. Newton said to have calculated with the theory of gravitation,
but without letting us know the road that he took to obtain them. Those are the
ones like $11'49''$ that depend on the equation of the sun's center \footnote{\normalsize Il en est quelques-unes que M. Newton dit avoir calcul\'{e}es par la Th\'{e}orie de la gravitation,
mais sans nous apprendre le chemin qu'il a pris pour y parvenir. Telles sont celles de $11'49''$ qui
d\'{e}pendent de l'\'{e}quation du centre du soleil.}.
\end{quote}
Here, d'Alembert was referring to Newton's calculation, in the Scholium mentioned previously, of the annual
equation of the mean motion of the moon which depends on the earth's eccentricity $\epsilon$ in
its orbit around the sun. Newton had taken $\epsilon$ equal to 16 7/8 divided by 1000, and
D'Alembert may have been aware that the amplitude of this perturbation is $3\epsilon m= 13'$,
where $m$ is the ratio of the lunar sidereal period to a period of one year.
Hence, although in this Scholium Newton had stated that his results had been obtained by ``his theory of gravity", it appears
that he adjusted some of the perturbation amplitudes to fit the observational data.
For the next two centuries after the publication of the {\it Principia}, Newton's approach to what became known as
the {\it three body problem}\footnote {\normalsize
Given the initial conditions (positions and velocities) for three bodies moving under the action
of their mutual gravitational attraction, to determine their motion at all times in the future.} in dynamical astronomy
stimulated the work of mathematicians and
astronomers, and this problem remains a challenge up to the present time \footnote{\normalsize
For example, in 1772 Lagrange discovered an exact solution of the three body problem where each of
the celestial bodies move in elliptic orbits with the same period, and with a common focus located at their center of mass.
The stability of these orbits, however, was not examined fully until much later, first by Routh (1875)
for the special case of circular orbits, and later for elliptic orbits by Danby (1964). These studies were restricted
to linear instabilities, and a non-linear instability analysis has been undertaken only recently (Nauenberg 2002).}.
By the late 1700's Lagrange and Laplace had written major
treatises on analytic mechanics (Lagrange 1811), and celestial mechanics (Laplace 1878) containing the mathematical progress that had
been made. There is an often repeated tale \footnote{\normalsize
Se non \`{e} vero, \`{e} ben trovato. } that Napoleon once asked Laplace why God did not appear
in his work, and that Laplace famously responded
``I didn't need that hypothesis'', but in print he declared that
\begin{quote}
These phenomena and some others similarly
explained, lead us to believe that everything depends on these laws [the primordial laws of nature] by relations more
or less hidden, but of which it is wiser to admit ignorance, rather than to substitute imaginary causes solely in order
to quiet our uneasiness about the origin of the things that interest us (Morando 1995, 144)\footnote{\normalsize
Ces ph\'{e}nom\`{e}nes et quelques autres semblablement expliqu\'{e}s autorisent \`{a} penser que tous
d\'{e}pendent de ces lois, par des rapports plus ou moins cach\'{e}s, mais dont il est plus sage d'avouer l'ignorance que
d'y substituer des causes imagin\'{e}es par le seul besoin de calmer notre inqui\'{e}tude sur l'origine des choses qui nous
int\'{e}ressent (Laplace 1835, 478).}.
\end{quote}
Newton claimed God needed to interfere from time to time in order to maintain the
stability of the solar system, but Laplace asserted that he had been able to give a
mathematical proof of this stability. Later, however, this proof was shown to be
flawed \footnote {\normalsize Laplace's proof that secular variations of the mean solar distances of the
planets do not occur were based on perturbation expansions up to third order in the eccentricities,
but these expansions were shown not to be convergent.}
by the work of Henri Poincar\'{e} (1892).
The overall impact of Newton's {\it Principia} in astronomy was best summarized by
Laplace's conclusion,
\begin{quote}
This admirable work contains the germs of all the great discoveries that have been made since,
about { \it the system of the world}: the history of its development by the followers of that great
geometer will be at the same time the most useful comment on his work, as well as
the best guide to arrive at new discoveries \footnote{\normalsize
Cet admirable Ouvrage contient les germes de toutes les grandes d\'{e}couvertes qui ont
\'{e}t\'{e} faites depuis sur le syst\`{e}me du monde: l'histoire de leur d\'{e}veloppement par
les successeurs de ce grand g\'{e}om\`{e}tre serait \`{a} la fois le plus utile commentaire de son
Ouvrage, et le meilleur guide pour arriver \`{a} de nouvelles d\'{e}couvertes.}
\end{quote}
\subsection*{Acknowledgements}
I would like to thank Niccolo Guicciardini for many valuable comments.
For the Introduction, I am particularly indebted to
M. Feingold's account in \\ {\it The Newtonian Moment, Isaac Newton
and the making of modern culture} (Feingold, 2004).
\subsection*{Appendix: Newton's moon test}
Newton's assumption that the inverse square law for gravitational forces applies at the surface
of the earth requires the relation $a_m/g=(r_e/r_m)^2$, where $a_m$ is the radial
acceleration of the moon towards the earth, $g$ is the gravitational
acceleration at the surface of the earth, $r_m$
is the radius of the moon's orbit, and $r_e$ is the radius of the earth. Since Newton had found that
$a_m=4\pi^2 r_m/T_m^2$, he tested the inverse square law by calculating the ratio
$(g/ 2)/d_m$, where $g/2$ is the distance a body falls in one second on the surface of the earth, and
$d_m=a_m/2=2\pi^2 r_m/T_m^2$ is the corresponding distance that
the moon `` descends towards the earth
in one second".
Pendulum experiments had established that $g=32$ feet/ $sec^2$, but
to obtain $d_m$, Newton first had to calculate
$d_e=2\pi^2 r_e /T^2_e$, which is the corresponding distance of fall for a body on the surface
of the earth co-rotating with the earth's period $T_e$ of one day.
Taking for the earth's radius
$r_e=3500$ miles, and assuming that a mile is 5000 feet, he obtained
$d_e=5/9$ inches, and $(g/2)/d_e= 345.6$ which he rounded to 350. Huygens had
carried out a similar calculation, but taking a different value of the earth's radius, $r_e=3711$ miles,
and $g=27.33$ feet/$sec^2$, he obtained
for this ratio the value 265 (Huygens 1929), while the correct value is $290$. This result
answered the long standing question why, if
the earth was spinning about its axis once a day, objects on its surface do not fly off:
\begin{quote}
The force
of gravity is many times greater than what would prevent the rotation of the earth from causing
bodies to recede from it and rise into the air (Herivel 1965, 196)
\end{quote}
Since $d_e/d_m=(r_e/r_m)(T_m/T_e)^2$, where
$T_m/T_e=27.3216$, and $r_m/r_e \approx 60$, which was already measured by
Greek astronomers, one obtains $d_e/d_m=12.44$
(Newton rounded it to 12.5). Hence, $(g/2)/d_m= 345.6\times 12.5=4320$, which differs appreciably
from the expected value $(r_m/r_e)^2 \approx 3600$.
Newton's only comment about this discrepancy was that the force of gravity at the surface of the Earth
\begin{quote}
is 4000 and more times greater than the endeavor of the Moon to recede from the Earth,
\end{quote}
but he must have been gravely disappointed with this result.
The reason for the failure of Newton's early {\it moon test} is that in his calculations
he had used an incorrect value for the
radius of the earth based on a value of about 61 English miles per degree of latitude,
and also that he had assumed that a mile corresponds to 5000 feet instead of the
correct value 5280 (in this manuscript Newton stated that ``$1/30$ of a mile is $500/3$ feet'').
Apparently he
did not become aware of his errors until 1683, when he substituted in his relation a much
better value for the earth's radius $r_e$ obtained in 1669 by Picard
from his measurement for a degree of latitude of
69.2 English miles (see Prop. 19, Book 3). This measurement gives $r_e=3965$ miles, close to the modern value.
In this case $(g/2)/d_m=4320 (61/69.2)(5000/5280)= 3606$,
in excellent agreement with the result predicted by Newton's theory.
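The arithmetic of the moon test can be reproduced in a few lines; the sketch below uses only the values quoted in this appendix:

```python
# Reconstruction of the arithmetic of Newton's moon test, using the values
# quoted in this appendix (g in ft/s^2, distances in feet and inches).

g = 32.0                   # ft/s^2, from pendulum experiments
Tm_over_Te = 27.3216       # lunar sidereal period in days
rm_over_re = 60.0          # moon's orbital distance in earth radii

# d_e/d_m = (r_e/r_m)*(T_m/T_e)^2
de_over_dm = Tm_over_Te**2 / rm_over_re     # about 12.44; Newton rounded it to 12.5

de_in = 5 / 9                               # Newton's early value of d_e, in inches
ratio_de = (g / 2) * 12 / de_in             # (g/2)/d_e = 345.6, rounded by Newton to 350

# Early, flawed result (earth's radius from 3500 miles at 5000 ft/mile):
ratio_early = ratio_de * 12.5               # (g/2)/d_m = 4320, far from (r_m/r_e)^2 = 3600

# Correction using Picard's measurement (69.2 miles/degree, 5280 ft/mile):
ratio_picard = ratio_early * (61 / 69.2) * (5000 / 5280)

print(round(de_over_dm, 2), ratio_early, round(ratio_picard))
```

With Picard's radius the ratio falls to about 3606, in agreement with the expected $(r_m/r_e)^2 \approx 3600$.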
\begin{list} {}{\itemsep .3in }
\item Aiton, E. J. (1960) `The Celestial Mechanics of Leibniz', {\it Annals of Science} 15: 65-82
\item Aiton, E. J. (1995) The vortex theory in competition with Newtonian celestial dynamics,
{\it Planetary Astronomy from the Renaissance to the rise of Astrophysics} edited by
R. Taton and C. Wilson, Cambridge: Cambridge University Press.
\item Barrow-Green, June (1991) {\it Poincar\'{e} and the Three Body Problem} American
Mathematical Society, History of Mathematics vol. 11.
\item Bernoulli, Johann (1710) Extrait de la R\'{e}sponse de M. Bernoulli \`{a} M. Hermann, dat\'{e}e de
Basle le 7 Octobre 1710, {\it M\'{e}moires de l'Acad\'{e}mie des Sciences}: 521-533.
\item Bertoloni Meli, Domenico (1991) {\it Equivalence and priority: Newton versus Leibniz} Oxford:Clarendon Press
\item Besterman, T. (1968) {\it Voltaire Notebooks}, ed. T. Besterman, in {\it The Complete Works of Voltaire} vol. 81 : Toronto: 83
\item Brackenridge, J. Bruce (1995) {\it The Key to Newton's Principia}, Berkeley: University of California Press.
\item Brackenridge, J. Bruce and Nauenberg, Michael (2002) Curvature in Newton's Dynamics,
in I.B. Cohen and G. E. Smith (eds) {\it The
Cambridge Companion to Newton}. Cambridge: Cambridge University Press :
85-137.
\item Cajori, Florian (1934) {\it Principia} vol. 2 , Motte's translation revised
by F. Cajori, Berkeley: University of California Press: 664.
\item Chandrasekhar, S (1995) {\it Newton's Principia for the Common reader}, Oxford: Clarendon Press
\item Cohen, I. Bernard (1999) { \it The Principia, Mathematical Principles of Natural
Philosophy}. A new translation by I.Bernard Cohen
and Anne Whitman assisted by Julia Budenz, preceded by a guide to Newton's Principia by I. Bernard Cohen.
Los Angeles, Berkeley, London:
University of California Press.
\item Cohen, I. Bernard (1972) Historical Introduction to Isaac Newton's { \it Theory of the Moon's Motion},
Dawson (1972).
\item Danby, J. M. A (1964) Stability of Triangular Points in the Elliptic Restricted Problem
of Three Particles, {\it The Astronomical Journal} 69: 165-172
\item Euler, Leonhard (1769) { \it Opera Omnia} Series secunda,Opera
Mechanica et Astronomica, vol. 23, ed. L. Courvoisier
and J.O. Fleckenstein, Basel, MCMLXIX: 286-289.
\item Euler, Leonhard (1772) `Theoria Motuum Lunae, Nova Methodo Pertractata' { \it Opera Omnia} Series secunda,Opera
Mechanica et Astronomica, vol. 22, ed. L. Courvoisier, Lausanne, MCMLVIII.
\item Euler, Leonhard (1983) Une Th\`{e}orie de Saturne et de Jupiter, par laquelle on puisse expliquer le in\`{e}galit\`{e}s
que ces deux Plan\`{e}tes paroissert se causer mutuellement, principalement vers les temps de leur conjoction,
quoted by O. Volk, Eulers Beitr\"{a}ge zur Theorie der Bewegungen der Himmelsk\"{o}rper , in \\
{\it Leonhard Euler, 1707-1783, Beritr\"{a}ge zu Leben und Werke }, Basel:Birkh\"{a}user, pp. 345-361.
\item Feingold, Mordechai (2004) {\it The Newtonian Moment,
Isaac Newton and
the making of modern culture}, New York, Oxford: Oxford University Press.
\item Fontenelle, Bernard Le Bouvier (1728) {\it The elogium of Sir Isaac Newton.} Printed for J. Tonson, London
\item Guicciardini, Niccol\`{o} (1999) { \it Reading the Principia,
The debate on Newton's mathematical
methods for natural philosophy from 1687 to 1736}, Cambridge : Cambridge University Press.
\item Herivel, J. (1965) {\it The Background to Newton's Principia, A study of Newton's
dynamical researches in the years 1664-1684 } Oxford: Clarendon Press: 192-198.
\item Hermann, Jacob (1710) Extrait d'une lettre de M. Herman \`{a} M. Bernoulli, dat\'{e}e de Pado\"{u}e le 12, Juilliet 1710
{\it M\'{e}moires de l'Acad\'{e}mie des Sciences}: 519-521
\item Hill, George W. (1878) `Researches in the Lunar Theory', {\it American Journal of Mathematics}
1 (1878): 5-26, 129-147, 245-260. Reproduced in {\it Collected Mathematical Works
of G. W. Hill}, vol. 1, Washington: Carnegie Institute (1905): 284-335.
\item Hooke, Robert (1674) An Attempt to Prove the Motion of the Earth from Observations, reprinted in
Gunther, R. T. { \it Early Science in
Oxford} vol. 3 , Oxford: Printed for the Author (1931) 1-28
\item Huygens, Christiaan (1673) C. Huygens, {\it Horologium Oscillatorum}, (Paris 1673).
English translation by Richard J. Blackwell,
{ \it C. Huygens, The Pendulum Clock} Ames, 1986
\item Huygens, Christiaan (1690) {\it Trait\`{e} de la Lumiere, avec un Discourse sur la Cause de la Pesanteur}, Leyden.
\item Huygens, Christiaan (1897) {\it Ouvres Compl\'{e}tes de Christiaan Huygens} , vol. 7, The Hague, Martinus Nijhoff,
p. 325
\item Huygens, Christiaan (1929) ` De Vi Centrifuga, Appendice I. 1659 ' \\
{\it Ouvres Compl\'{e}tes de Christiaan Huygens} , vol. 16, The Hague, Martinus Nijhoff,
p. 304.
\item Kline, Morris (1972) {\it Mathematical Thought from Ancient to Modern Times} Oxford : Oxford University Press :554-557
\item Lagrange, J. L (1811) { \it M\'{e}canique Analytique} Paris: Coursier, Paris
\item Laplace, Pierre-Simon (1796) {\it Exposition du Syst\`{e}me du Monde} Paris: Bachelier
\item Laplace, Pierre Simon (1822) { \it A treatise of Celestial Mechanics}, translated from the French
and elucidated with explanatory notes by the Rev. Henry H. Harte, Dublin : Printed for R. Milliken
\item Laplace, Pierre Simon (1825) {\it Trait\'{e} de M\'{e}canique C\`{e}leste} vol. 5, Paris: Bachelier Libraire
\item Laplace, PIerre Simon (1835) {\it Trait\'{e} de M\'{e}canique C\`{e}leste}, vol. 6, Paris: Bachelier Libraire
\item Laplace, PIerre Simon (1878) {\it Trait\'{e} de M\'{e}canique C\`{e}leste}, 5 volumes, Paris: Gauthier-Villars.
\item Leibniz, Gottfried Wilhelm (1973) { \it Marginalia in Newtoni Principia Mathematica} edited by E. A. Fellmann, Paris: Vrin
\item Locke, John (2001) {\it An essay in human understanding}. Ontario: Batoche Books
\item Lohne, J (1960) Hooke versus Newton, {\it Centaurus} 7: 6-52.
\item Morando, Bruno (1995) Laplace, in Rene Taton and Curtis Wilson (eds)\\
{\it The General History of
Astronomy} 2B,\\ {\it Planetary Astronomy
from the Renaissance to the rise of Astrophysics,}
Cambridge: Cambridge University Press
\item Nauenberg, Michael (1994a) Newton's Early Computational Method for Dynamics,
{\it Archive for History of Exact Sciences } 46 :221-252
\item Nauenberg, Michael (1994b) Hooke, Orbital Motion, and Newton's Principia,
{\it American Journal of Physics}, 62: 331- 350.
\item Nauenberg, Michael (1994c) Newton's
Principia and Inverse Square Orbits,
{ \it The College Mathematics Journal} 25: 212-222.
\item Nauenberg, Michael (1996) Huygens and Newton on Curvature and its applications
to dynamics {\it De zeventiende eeuw, Cultur in de Nederlanden in interdisciplinair
perspectief}, jaargang 12 : 215-234.
\item Nauenberg, Michael (1998) `The Mathematical Principles underlying the Principia
revisited',
{\it Journal for the History of Astronomy} 29: 301-307.
\item Nauenberg, Michael (2000) {\it Newton's Portsmouth perturbation Method
and its application to lunar motion}, in R.H Dalitz and M. Nauenberg (eds)
{\it The Foundations of Newtonian Scholarship}
Singapore: World Scientific
\item Nauenberg, Michael (2001a) ` Newton's perturbation methods for the three-body
problem and their application to Lunar motion', in J.Z. Buchwald and I. B. Cohen
(eds) {\it Isaac's Newton Natural Philosophy},
Cambridge: MIT Press.
\item Nauenberg, Michael (2001b) `Essay review of N. Kollerstrom's
Newton's Forgotten Lunar Theory' {\it Journal for the History of Astronomy} 32:
162-168.
\item Nauenberg, Michael (2002) ` Stability and eccentricity of periodic orbits for two planets in 1:1 resonance'
{\it The Astronomical Journal} 124:2332-2338
\item Nauenberg, Michael (2003a) ` Kepler's area law in the Principia: filling in some details in
Newton's proof of Proposition 1 ', {\it Historia Mathematica} 30: 441-456.
\item Nauenberg, Michael (2003b) `Orbites p\'{e}riodiques du probl\'{e}me des trois
corps: les origines, les contributions de Hill et Poincar\'{e}. et quelques
d\'{e}veloppements r\'{e}cents' , in E. Charpentier, E. GHys and A. Lesne (eds)
{\it L'h\'{e}ritage scientifique de Poincar\'{e}}, Paris:
Belin, pp. 128-157.
\item Nauenberg, Michael (2005) `Curvature in Orbital Dynamics'. {\it American Journal
of Physics} 73 : 340-348.
\item Nauenberg, Michael (2005)
`Robert Hooke's Seminal Contributions to Orbital Dynamics',
{\it Physics
in Perspective 7}: 4-34, reproduced in M. Cooper and M. Hunter (eds)
{\it Robert Hooke,
Tercentennial Studies}, London:
Ashgate ( 2006) pp. 3-32.
\item Nauenberg, Michael (2010) ` The early application of the calculus to inverse square problem',
{\it Archive for the History of Exact Science} (currently on line in electronic version,
to appear in May issue )
\item Newton, Isaac (1702) { \it Theory of the Moon's Motion}, with a
bibliographical and historical introduction by I. B. Cohen, Dawson (1972) :41.
\item Pemberton, H. (1728) {\it A view of Sir Isaac Newton's Philosophy} London:
Palmer.
\item Poincare, Henri (1892)) {\it Les M\'{e}thodes Nouvelles de la Mechanique C\'{e}leste},
Paris:Gauthier-Villars.
\item Routh, E. J. (1875) ` On Lagrange's Three Particle with a Supplement on the Stability of Steady Motion'
{\it Proceedings of the London Mathematical Society} 6: 86-96. Reprinted in
{\it Stability of Motion} edited by A. T Fuller, New York: Halsted Press.
\item Tisserand, Felix (1894) `Th\'{e}orie de la Lune de Newton'
{\it Trait\'{e}
de M\'{e}canique C\'{e}leste}, tome III,
Chapitre III, Paris:
Gauthier-Villars et Fils, pp. 27-45
\item Todhunter, I (1962) { \it A historty of the mathematical theories of attraction and the
figure of the earth}, New York : Dover.
\item Turnbull, H. W (1960) {\it The correspondence of Isaac Newton } vol.2, 1676-1678, edited by
H.W. Turnbull, Cambridge Univ. Press, pp. 297-314.
\item Turnbull , H. W. (1961) {\it The Correspondence of Isaac Newton} vol. 3 , 1688-1694, edited by H. W. Turnbull,
Cambridge: Cambridge University Press, p. 236.
\item Varignon, Pierre (1701) `Autre Regles Generale des Force Centrales' { \it M\'{e}moires de l'Academie des Sciences}
\item Voltaire, Francois-Marie (1734) {\it Letters on England}; translated by L. Tancock, Harmondsworth:
Penguin Books, 1984, pp. 71-72
\item Waff, Craig. B. (1976) `Isaac Newton,
the motion of the lunar apogee, and the
establishment of the inverse square law',
{\it Vistas in Astronomy} 20
(1976): 99-103; ``Newton and
the Motion of the Moon: An
Essay Review' {\it Centaurus} 21 (1977): 64-75.
\item Waff, Craig. B. (1995) 'Clairaut and the motion of the
lunar apse: the inverse-square law undergoes a test', in Rene Taton and
Curtis Wilson (eds)
{\it The General History of
Astronomy} 2B, {\it Planetary Astronomy
from the Renaissance to the rise of astrophysics,}
Cambridge: Cambridge University Press, pp. 35-46.
\item Westfall, R. S (1995)
{\it Never at Rest}, Cambridge : Cambridge University Press, p. 505
\item Whiteside, Derek (1967) { \it The Mathematical Papers of Isaac Newton}, vol. 1, 1664-1666,
edited by D.T. Whiteside, Cambridge: Cambridge University Press.
\item Whiteside, Derek (1974) {\it The Mathematical Papers of Isaac Newton} 1684-1691, vol. 6,
edited by D. T. Whiteside,
Cambridge: Cambridge University Press, pp. 30-91.
\end{list}
\end{document}
\section{Introduction}
One of the most important current questions in the field of star formation concerns the effect that environment and, especially, feedback may have on the star-formation process, in particular the stellar initial mass function (IMF) and the star-formation rate (SFR) or efficiency (SFE). Stars appear to form in two main modes. Spontaneous star formation is the predicted result of the naturally turbulent molecular-cloud environment (see e.g. \citealt{2004RvMP...76..125M}; \citealt{2004ASPC..322..299K}; \citealt{2006ApJ...648.1052H}; \citealt{2002ApJ...576..870P}) and is expected to produce a low background SFR. Triggered star formation, on the other hand, is an increase in SFR or SFE due to the effects of a mechanical interaction on molecular cloud gas, usually caused by the winds, radiation or expanding \textsc{Hii} regions associated with massive stars (see e.g. \citealt{1998ASPC..148..150E}; \citealt{2005A&A...433..565D}; \citealt{2011MNRAS.418.2121G}).
There are two main ways in which triggering might increase the SFE locally. The first is by creating new star-forming structures (i.e.\ dense cores), in addition to those forming spontaneously in the turbulent molecular cloud gas. This mode is most closely described by the {\em collect-and-collapse} mechanism (\citealt{1977ApJ...214..725E}, \citealt{1994MNRAS.268..291W}) in which an expanding dense shell, driven into a cloud by winds or thermal expansion, becomes gravitationally unstable and fragments to form dense, star-forming clumps. The second mode works by increasing the probability that pre-existing dense ``cores" will collapse to form stars.
This would usually require an increase in the ambient pressure, either from the passage of a (shock) wave through clumpy cloud gas, or when a core is overtaken by an ionisation front (IF). The latter mechanism is described by the radiatively-driven implosion (RDI) model (\citealp{1980SSRv...27..275K}; \citealp{1989ApJ...346..735B}; \citealp{1989ApJ...342L..87S}; \citealp{1990ApJ...354..529B})
\begin{figure}
\centering
\includegraphics[height=8.4cm]{Figure1.png}
\caption[small]{MSX image of the W3/4 region at 8$\,\mu$m. The black contours trace the $^{12}$CO J=1$\to$0 outline of the W3 GMC, as observed by the FCRAO. The white straight line marks the approximate boundary between the HDL and LDL regions. The yellow crosses mark the positions of the O stars in the IC 1805 cluster. Several regions of interest within the cloud are labeled.}
\label{fig:msxw4w3}
\end{figure}
A third potential type of mechanism for influencing the SFE would include any that affect the efficiency with which dense cores convert mass into stars. This is the same as saying that the accretion history of already bound cores is affected by a change in the local environment. This might be caused by variations in local density and/or effective signal speed, which would alter the accretion rate.
The W3 Giant Molecular Cloud (GMC) offers a potential `laboratory' for constraining the mechanism by which feedback triggers new star formation and for quantifying increases in star-formation efficiency above the spontaneous background rate. Both modes of star formation appear to exist within W3, each dominating different parts of the cloud. Thought to be a prime example of triggered, sequential star formation (\citealp{1978ApJ...226L..39L}; \citealp{2005ApJ...620L..43O}), it stands on the western side of the W4 chimney/superbubble whose expansion is driven by the winds of the IC 1805 OB association (Figure \ref{fig:msxw4w3}). This expansion is compressing the eastern side of the W3 GMC and has created a high density layer (HDL: \citealt{1978ApJ...226L..39L}) within which there are several prominent star-forming regions (i.e.\ W3\,Main, W3\,(OH) and AFGL\,333). The rest of the cloud seems so far to have been largely unaffected by this interaction and the star formation within it should be mainly spontaneous. One notable exception is the KR\,140 \textsc{Hii} region, located in the far south-west corner of the cloud, this may be an example of the spontaneous formation of an isolated massive star which is now triggering new star formation in a surrounding shell (\citealt{2008MNRAS.385..995K}).
\citet{2007MNRAS.379..663M} surveyed two thirds of the W3 cloud, including all the HDL and the southern half of the remaining cloud, in the 850-$\umu$m continuum and detected 316 dense cores with masses above 13\msolar. Dividing the GMC crudely into the two zones, they found that a significantly greater fraction of the total gas mass is contained in dense, potentially star-forming structures in the HDL (25--37\%, depending on assumptions about the clump mass function, or CMF) compared to the diffuse cloud (5--13\%), but detected no difference in the CMF between the two sections of the cloud. These results were interpreted as clear evidence of a collect-and-collapse type mechanism at work. However, this result was derived assuming a single excitation temperature (30\,K) everywhere in the molecular gas traced by CO J=1$\to$0. If the gas temperature were significantly higher in the HDL than in the remaining cloud, then the contrast in gas mass ratio between the two regions may be lower than this analysis suggests.
This paper presents new CO J=3$\to$2 emission line maps of the W3 GMC and an analysis of the physical excitation of the cloud molecular gas, in particular the distribution of excitation temperatures, using matching CO J=1$\to$0 data. The fraction of gas mass in dense clumps is then estimated as a function of position using the existing 850-$\umu$m continuum results and is compared to the Mach number of the turbulence in the CO-traced gas. The paper is structured as follows: In Section \ref{sec:Obs} we detail the data reduction procedure for the CO J=3$\to$2 data and describe the CO J=1$\to$0 and SCUBA datasets. In Section \ref{sec:Analysis} we describe our analysis and discuss the results. Finally, in Section \ref{sec:conc}, we present the conclusions of this study.
\section[]{Observations and Data Reduction.}
\label{sec:Obs}
\subsection{HARP Data}
\begin{figure*}
\includegraphics[width=\textwidth]{Figure2.png}
\caption[small]{T$_{R}^{*}$ $^{12}$CO J=3$\to$2 emission from the W3 GMC, integrated in the range $-65<V_{\rm LSR}<-25$\,\kms, showing the total area surveyed in this transition. Several star-forming regions have been identified on the map. The grid denotes Galactic coordinates.}
\label{fig:12co}
\end{figure*}
The W3 GMC was mapped with the HARP array receiver on the James Clerk Maxwell Telescope (JCMT) on Mauna Kea, Hawaii. HARP is a 16-element focal-plane array that operates in the 325--375\,GHz regime \citep{2009MNRAS.399.1026B}. The 16 receptors have $\sim$14\,arcsec beams separated by 30 arcseconds, giving the instrument a 2-arcminute footprint. HARP is combined with the ACSIS digital autocorrelation spectrometer backend.
Observations were made over three consecutive years (2006-2008) in $^{12}$CO, $^{13}$CO and C$^{18}$O J=3$\to$2 at 345.796, 330.540 and 329.278 GHz, respectively. All observations were taken in good weather with the sky opacity at 225\,GHz in the range $\tau_{225}<$0.08 (JCMT weather band categories 1 \& 2), using a bandwidth of 250\,MHz, giving a basic spectral resolution of 26\,m\,s$^{-1}$.
As the W3 GMC spans about 1$^{\rm o}$ on the sky, we split the cloud into 13 separate tiles of $20 \times 20$ arcminutes, each requiring about one hour to map. All tiles were observed using continuous raster scanning and pointing observations were made between each tile. We used a sample spacing of 7.5 arcseconds and the raster scan spacing was half an array footprint. Scans were aligned at a position angle of 70$^{\rm o}$ to the Declination axis to match the known geometry of the cloud. We also observed small raster scan maps of CRL 618, CRL 2688 and W3 (OH) to calibrate the science maps. The calibration factors applied in the segments of the maps were between 1.12 - 1.5 for $^{12}$CO, 1.7 - 3.04 for $^{13}$CO and 1.02 - 1.3 for C$^{18}$O. The System Temperature varied between 233\,K and 283\,K for $^{12}$CO with a median value of 242\,K; 289\,K to 902\,K for $^{13}$CO with a median of 359\,K; and 298\,K to 340\,K for C$^{18}$O, with a median value of 324\,K. The mean pointing error was 2.43$\pm$0.33 arcseconds for all the observations.
The observing procedure differed slightly over the years as new observing modes became available. In particular, the great majority of the $^{12}$CO and $^{13}$CO maps have been scanned only along one direction, as the ``basket weave" mode of orthogonal scanning was not available at the time.
The C$^{18}$O map and later parts of the $^{13}$CO data were made using this mode. The velocity range of the cubes is $-120$\,km\,s$^{-1}$ to $+30$\,km\,s$^{-1}$.
The raw data cubes were filtered for spikes and, in $^{12}$CO, were binned by a factor of nine in the spectral axis to achieve a rms noise level of $\sim$0.7\,K in a 0.23-km\,s$^{-1}$ wide channel. The $^{13}$CO and C$^{18}$O J=3$\to$2 maps were binned by a factor of 15 to obtain rms noise of $\sim$0.4\,K in a 0.39-km\,s$^{-1}$ wide channel. The maps were spatially regridded with pixel resolution of 7.7 arcseconds and the spectral baselines were removed.
\subsection{FCRAO Data}
\begin{figure*}
\centering
\includegraphics[width=6.3in]{Figure3.png}
\caption{Integrated $T_R$ (K\,km\,s$^{-1}$) emission line maps of the W3 GMC in the three CO isotopologues. {\bf Top Left:} $^{12}$CO J=3$\to$2; {\bf Top Right:} $^{12}$CO J=1$\to$0; {\bf Middle Left:} $^{13}$CO J=3$\to$2; {\bf Middle Right:} $^{13}$CO J=1$\to$0; {\bf Bottom Left:} C$^{18}$O J=3$\to$2; {\bf Bottom Right:} C$^{18}$O J=1$\to$0.}
\label{fig:coiso}
\end{figure*}
The $^{12}$CO J=1$\to$0 observations at 115.271 GHz were made on 1999 May 7--14 and 2000 April 24--28 at the Five Colleges Radio Observatory (FCRAO) 14-m telescope using the 16-element SEQUOIA array receiver and FAAS backend with 256 channels and 80-MHz bandwidth giving 0.8-km\,s$^{-1}$ resolution. System temperatures were in the range 600\,K to 800\,K producing rms noise of $T_A^{\star} \simeq 0.1$\,K. The rms pointing correction was 5 arcseconds or less. These data and their reduction are more fully described in \citet{BrethertonPhD2003}.
The $^{13}$CO and C$^{18}$O J=1$\to$0 lines at 110.201 and 109.722 GHz, respectively, were observed simultaneously using the expanded 32-element SEQUOIA array on the FCRAO 14-m on 2004 March 16--23 in ``On-The-Fly" continuous raster mode. The target region was covered with 32 individual submaps which are fully Nyquist sampled on to a 25$''$ grid with a spatial resolution of 49$''$. The spectrometer was used with 50-MHz bandwidth centred on $V_{\rm LSR}= -40$\,km\,s$^{-1}$, which results in a velocity resolution of 133\,m\,s$^{-1}$. System temperatures were in the range $50-80$\,K for the duration of the observations. A fuller description of these data can be found in \citet{AllsoppPhD2011}.
All antenna temperatures have been converted to the $T_{\rm R}^{\star}$ scale using $T_{\rm R}^{\star} = T_{\rm A}^{\star} / \eta_{\rm fss}$ \citep{1981ApJ...250..341K}, where $\eta_{\rm fss}$ is the forward scattering and spillover efficiency, taken as 0.77 for HARP on JCMT \citep{2009MNRAS.399.1026B} and 0.70 for SEQUOIA on FCRAO \citep{1998ApJS..115..241H}.
\section[]{Results and analysis}
\label{sec:Analysis}
\subsection{The W3 GMC}
Emission from the W3 cloud was found in the range $-60 < V_{\mathrm{LSR}} < -30$\, \kms. The velocity structure and dynamical state of the cloud will be discussed in detail elsewhere. Figure \ref{fig:12co} shows the $^{12}$CO J=3$\to$2 emission integrated between --65\,\kms\ and --25\,\kms. These data cover the whole GMC, while the $^{13}$CO and C$^{18}$O data in this transition cover a more limited area, slightly vignetting the northern and southern edges of the cloud. The integrated emission in these lines and in the three J=1$\to 0$ transitions are shown in Figure \ref{fig:coiso}.
Figure \ref{fig:12co} shows that, in addition to the warm dense gas around the active star-forming regions, $^{12}$CO J=3$\to$2 also traces most of the diffuse, extended emission seen in lower-level transitions (Figure \ref{fig:coiso}, top right panel). This is somewhat surprising, since the critical density of the J=3$\to$2 transition is expected to be between $5 \times 10^4$\,cm$^{-3}$ at $T=40$\,K and $4\times10^5$\,cm$^{-3}$ at 10\,K \citep{1985MNRAS.214..271F}, slightly higher than that of CS J=1$\to$0, and the J=3 energy level is $E/ k=32.8$\,K above ground. The transition should thus trace the relatively warm, high-density gas associated with recent star formation. This may be explained by photon trapping caused by high optical depths, which may reduce the effective critical density.
Following \citet{1978ApJ...226L..39L} we identify the eastern edge of the cloud, adjacent to the bubble blown by the IC\,1805 OB cluster, as the high density layer. The W3 North, W3 Main, IC\,1795, W3\,(OH) and AFGL\,333 regions are located here. For the purposes of our analysis, and for consistency with \citet{2007MNRAS.379..663M}, this is separated from the rest of the cloud (which we call the low density layer, or LDL, and which includes the HB3, KR\,140 and trilobite regions; see Figure \ref{fig:msxw4w3}) by a line defined as $b = -1.2089 \times l + 162.7235$, in Galactic coordinates. This division is somewhat arbitrary and based on the visible extent of the intense star formation in the eastern portion of W3. While the distinction between triggered and non-triggered cloud regions is not so clear cut, it is likely that the feedback effects from the W4 \textsc{Hii} region decrease in strength with distance from the IC\,1805 OB cluster. Given that the average integrated intensity in the HDL is three times higher than that of the LDL (23\,K\,\kms\ compared to 7\,K\,\kms), we assume, in this paper, that the expansion of the W4 \textsc{Hii} region has not yet significantly affected the LDL. Star formation within the HDL, as defined by the line above, is therefore assumed to be triggered, whereas the LDL is assumed to be dominated by spontaneous star formation.
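As a concrete illustration, the per-pixel HDL/LDL assignment reduces to a test against this boundary line. The sketch below is ours (the function name is invented, and the slope sign is taken such that the boundary actually passes through the cloud at the coordinates of the named regions):

```python
def in_hdl(l_deg, b_deg):
    """True if Galactic (l, b), in degrees, falls in the high density layer.

    Boundary: b = -1.2089 * l + 162.7235 (the slope sign is chosen so the
    line crosses the cloud).  The HDL is the side of the line containing
    W3 Main, W3 (OH) and AFGL 333.
    """
    return b_deg > -1.2089 * l_deg + 162.7235
```

With this convention, W3 Main ($133.7095^{\rm o}$, $1.2500^{\rm o}$), W3\,(OH) ($133.9515^{\rm o}$, $1.0600^{\rm o}$) and AFGL\,333 ($134.2030^{\rm o}$, $0.7630^{\rm o}$) all fall in the HDL, while positions towards HB3 do not.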
The three brightest and best-known star forming regions in the cloud (W3 Main, W3 (OH) and AFGL 333) are easily identified in Figure \ref{fig:12co}, running from north to south along the eastern edge of the cloud. W3 Main ($l=133.7095^{\rm o}$, $b=1.2500^{\rm o}$) is the most prominent of these and the brightest source in the region at many wavelengths. OH and H$_{2}$O masers have been detected towards sources IRS 4 and 5 (\citealp{1987ApJS...65..193G}; \citealp{1984ApJ...285L..79C}) and it is the richest \textsc{Hii} region cluster known within 2\,kpc from the Sun \citep{2008hsf1.book..264M}. We find a mean integrated intensity of $\int T_{\rm R}^{*} {\rm d}V$=307.5\,K$\,$km\,s$^{-1}$ with a peak of 700$\,$K$\,$km\,s$^{-1}$ in the central area and $\sim$55\,K\,km\,s$^{-1}$ in the more diffuse surrounding emission. The cloud associated with W3 Main is seen to have rather complex physical structure, possibly shaped by the cavity situated immediately to the south which has been created by the winds from the young IC\,1795 stellar cluster \citep{2008hsf1.book..264M}.
On the other side of this cavity lies W3\,(OH) ($l=133.9515^{\rm o}$, $b=1.0600^{\rm o}$), another active high-mass star-forming region and also the host of OH and H$_{2}$O masers that point towards two centres of high mass star formation separated by 0.07\,pc (\citealp{1981MNRAS.195..213N}; \citealp{1977ApJ...215L.121F}). The molecular cloud associated with W3\,(OH) is clearly elongated north-south, which is consistent with it being part of the large-scale, compressed shell produced by the winds of the nearby IC\,1805 OB association responsible for the W4 \textsc{Hii} shell. Its brightest regions have a mean integrated $^{12}$CO J=3$\to 2$ intensity of $\int T_{\rm R}^{\star} \mathrm{d} \upsilon=167$\,K\,km\,s$^{-1}$ with a peak of 300\,K\,\kms, while the surrounding structure has a mean of $\sim$80\,K\,\kms.
South of W3\,(OH) lies the third active star-forming region in the HDL, AFGL\,333 ($l=134.2030^{\rm o}$, $b=0.7630^{\rm o}$). This cloud has a less well-defined central peak than either W3\,Main or W3\,(OH) and has a mean integrated intensity of $49$\,K\,km\,s$^{-1}$.
The rest of the W3 GMC contains less intense emission and less active star formation. On the south-eastern corner of the GMC, the cloud associated with the KR\,140 \textsc{Hii} bubble \citep{2008MNRAS.385..995K} is easily identifiable.
Between KR\,140 and AFGL\,333 is a region we term Loops. The CO emission in this area appears diffuse and generally has low integrated intensity ($\int T_R^{\star} \mathrm{d} \upsilon=15$\,K\,\kms). In the 850-$\umu$m continuum, it appears as a long, fine, looped filament \citep{2007MNRAS.379..663M} and in Spitzer data is revealed to contain a string of infrared sources (Polychroni et al., in preparation). Above KR\,140 there is a region we call Trilobite. It has a mean integrated intensity of 14.4\,K\,\kms. Here, again, \citet{2007MNRAS.379..663M} find a number of dense cores at 850\,$\umu$m, indicative of star formation. While this region seems rather cut off from the rest of the cloud there is a bridge of diffuse material that connects it to the KR\,140 bubble.
\begin{figure*}
\includegraphics[width=\textwidth]{Figure4.png}
\caption[small]{The $^{12}$CO, $^{13}$CO and C$^{18}$O J=3$\to$2 emission lines extracted from the cubes at $l=133.7095^{\rm o}$, $b=1.2149^{\rm o}$ (W3 Main), $l=133.9514^{\rm o}$, $b=1.0599^{\rm o}$ (W3\,(OH)), $l=134.2028^{\rm o}$, $b=0.7628^{\rm o}$ (AFGL\,333) and $l=133.4300^{\rm o}$, $b=1.4233^{\rm o}$ (KR\,140).}
\label{fig:isotopes}
\end{figure*}
In Figure \ref{fig:isotopes} we plot the emission lines of the three isotopes in J=3$\to2$ from individual pixels in the four main regions of the cloud (W3 Main, W3\,(OH), AFGL\,333 and KR\,140). It is clear from the spectra that the $^{12}$CO J=3$\to$2 emission line is optically thick and self-absorbed practically throughout the cloud. $^{13}$CO J=3$\to$2 also tends to be optically thick and self-absorbed, but only in the brightest regions like W3 Main or W3\,(OH). On the other hand C$^{18}$O J=3$\to$2 is very weak and we only observe it towards the brightest regions of the cloud (W3 Main, W3\,(OH) and AFGL\,333).
Throughout this study we assume that the CO tracing the molecular gas of the W3 GMC is in a state of local thermodynamic equilibrium (LTE), at least in the rotation levels $J\le3$. Where the optical depth is high, as is the case for $^{12}$CO emission along most lines of sight, the effective critical density will be reduced due to photon trapping. This means that, in reality, there may be somewhat different critical densities for different isotopologues, and the safety of the LTE assumption may depend on the rarity of the CO species. We ignore this in the following analysis but discuss it further below.
\begin{table}
\centering
\caption[The channel widths of $^{12}$CO, $^{13}$CO, C$^{18}$O J=1$\to$0 and J=3$\to$2 spectra.]{The channel widths of $^{12}$CO, $^{13}$CO, C$^{18}$O J=1$\to$0 and J=3$\to$2 spectra.}
\begin{tabular}{|c|c|c|}
\hline
Molecule & Transition & Channel Width (km\,s$^{-1}$) \\\hline
$^{12}$CO & 1$\to$0 & 0.8126 \\
$^{13}$CO &1$\to$0 & 0.1328 \\
C$^{18}$O &1$\to$0 & 0.1333 \\
$^{12}$CO &3$\to$2 & 0.2381 \\
$^{13}$CO &3$\to$2 & 0.8301 \\
C$^{18}$O &3$\to$2 & 0.3348 \\\hline
\end{tabular}
\label{tab:velchan}
\end{table}
The 3D cubes of all emission lines were collapsed along the velocity axis between --65\,\kms\ and --25\,\kms\ and multiplied by the velocity channel width (Table \ref{tab:velchan}) to produce integrated-intensity maps for each species and transition. The J=3$\to$2 data were re-gridded to match the J=1$\to$0 maps so that there was a one-to-one pixel correspondence.
\subsection{Optical Depth}
The measured radiation temperature of a source is given, in terms of the excitation temperature $T_x$ and optical depth $\tau$, by the solution to the equation of radiation transfer, in the absence of a background source:
\begin{equation}
\label{eqn:radtrans}
T_R=J\ (T_{x})\ \left(1-e^{-\tau}\right)
\end{equation}
where, in LTE,
\begin{equation}
\label{eqn:J}
J(T_{x})=\frac{h\nu}{k}\left(\frac{1}{e^{h\nu/kT_x}-1}-\frac{1}{e^{h\nu/kT_{\mathrm{bg}}}-1}\right),
\end{equation}
$\nu$ is the frequency and $T_{\mathrm{bg}}$ is the temperature of the cosmic microwave background (2.73\,K). Hence, if the same $T_x$ is assumed, the ratio of the line brightness temperatures in the same transition from two different isotopic species is given by
\begin{equation}
\label{eq:od_1}
\frac{T_{\rm R,1}(j\to i)} {T_{\rm R,2}(j\to i)}= \frac{1-e^{-\tau}}{1-e^{-\tau/X}},
\end{equation}
where $\tau$ is the optical depth of the more abundant species and $X$ is the abundance ratio.
We adopt a value of $X=77$ for the $^{12}$CO/$^{13}$CO abundance ratio \citep{2002A&A...390.1001S}. In this case, the results are not very sensitive to the choice of the value of $X$, particularly where the $^{12}$CO optical depth is high. Assuming $\tau(^{12}\mbox{CO}) \gg 1$, the numerator of equation \ref{eq:od_1} becomes approximately equal to unity, providing a first estimate for an iterative solution. We use this first estimate and the Newton-Raphson iterative method to solve equation \ref{eq:od_1} to calculate the velocity-averaged optical depth per pixel in $^{12}$CO and $^{13}$CO J=3$\to$2 and J=1$\to$0. We find that the range of $\tau(^{12}\mbox{CO})$ in both transitions is between 5 and 90 (see Figure \ref{fig:tau_distr}). At high temperatures we expect that $\tau_{32}(^{12}\mbox{CO})/\tau_{10}(^{12}\mbox{CO}) = 9$. However, globally, we find that this ratio is much lower, consistent with low gas temperatures.
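The iterative solution of equation \ref{eq:od_1} can be sketched as follows (an illustrative sketch, not the exact code used in the reduction; the function name is ours):

```python
import math

def tau12(ratio, X=77.0, tol=1e-8, max_iter=50):
    """Optical depth of the abundant species from the 12CO/13CO line
    brightness ratio, assuming a common T_x and abundance ratio X.

    Solves (1 - exp(-tau)) / (1 - exp(-tau/X)) = ratio by Newton-Raphson,
    starting from the optically thick first estimate, for which the
    numerator is taken as unity: tau0 = -X ln(1 - 1/ratio).
    """
    if not 1.0 < ratio < X:
        raise ValueError("a solution exists only for 1 < ratio < X")
    tau = -X * math.log(1.0 - 1.0 / ratio)   # first estimate (tau >> 1)
    for _ in range(max_iter):
        et, etx = math.exp(-tau), math.exp(-tau / X)
        f = (1.0 - et) / (1.0 - etx) - ratio
        # derivative of (1 - et) / (1 - etx) with respect to tau
        df = (et * (1.0 - etx) - (1.0 - et) * etx / X) / (1.0 - etx) ** 2
        step = f / df
        tau -= step
        if abs(step) < tol:
            break
    return tau
```

For example, an observed $^{12}$CO/$^{13}$CO brightness ratio of $\sim$8 corresponds to $\tau(^{12}\mbox{CO})\simeq10$; larger ratios give smaller optical depths.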
$^{13}$CO emission is found to be optically thin across the cloud in both transitions, with the exception of a few pixels in J=3$\to$2 in the brightest regions (e.g.\ the central parts of W3 Main).
\begin{figure}
\centering
\includegraphics[height=6.cm]{Figure5.png}
\caption[small]{The optical depth distributions for $^{12}$CO J=3$\to$2 (red) and J=1$\to$0 (blue). For the $^{12}$CO J=3$\to$2 line we measure the mean=17, median=14 and the mode=11.2. For $^{12}$CO J=1$\to$0 we measure the mean=25, median=22 and the mode=10.8. }
\label{fig:tau_distr}
\end{figure}
\subsection{Excitation Temperature}
\label{subsec:Tx}
The excitation temperature $T_x$ parameterises the relative energy-level populations according to the Boltzmann distribution. Under the LTE assumption, $T_x$ is equal to the thermodynamic temperature of the gas. Where optical depths are determined for two transitions of the same species, the ratio of the two can be used to derive $T_x$, i.e.\ for the $^{12}$CO J=3$\to$2 and J=1$\to$0 transitions,
\begin{equation}
\label{eq:tau_rat}
\frac{\tau_{32}(^{12}\mbox{CO})}{\tau_{10}(^{12}\mbox{CO})}=3\ e^{-16.60/T_x} \ \frac{1-e^{-16.60/T_x}}{1-e^{-5.53/T_x}}.
\end{equation}
This recipe is obtained directly from equation \ref{eq:5}, assuming $\tau_{ji}$ is measured over the same velocity interval and distributed similarly over that interval for both transitions $j\to i$. Equation \ref{eq:tau_rat} cannot be solved analytically. Instead, we used it to compile a look-up table of optical depth ratio values as a function of $T_x$ in the range 3 to 34\,K (Figure \ref{fig:txmodels}), with a resolution of 0.5\,K.
\begin{figure}
\centering
\includegraphics[width=8.4cm]{Figure6.png}
\caption[small]{The solutions of the excitation temperature, T$_x$, as given from the $^{12}$CO J=3$\to$2 and J=1$\to$0 opacity ratios (solid line) and brightness temperature ratios (dashed line).}
\label{fig:txmodels}
\end{figure}
For those pixels in which complete optical depth information was not available, usually due to the inadequate detection of $^{13}$CO emission, we estimated $T_x$ from the ratio of line brightness temperatures, using a low optical depth approximation, as follows.
Assuming $\tau \ll 1$, the ratio of the observed $^{12}$CO J=3$\to$2 and J=1$\to$0 line strengths is given by
\begin{equation}
\frac{T_{R,32}}{T_{R,10}}=3 \ \frac{\tau_{32}(^{12}CO)}{\tau_{10}(^{12}CO)} \ \frac{ \left( e^{16.60/T_x} - 1 \right)^{-1} - 2.29\times10^{-3}}{\left(e^{5.53/T_x}-1\right)^{-1}-0.152},
\label{eq:TR-rat}
\end{equation}
in which $\tau_{32}(^{12}\mbox{CO})/\tau_{10}(^{12}\mbox{CO})$ is given by equation \ref{eq:tau_rat}. A look-up table was again used to obtain $T_x$ estimates from the observed line brightness ratios. Figure \ref{fig:Tx} shows the distribution of derived excitation temperatures. The spatial resolution of the map is that of the J=1$\to$0 CO data, namely 44$''$. The temperature resolution of the map is 1\,K and is less than the uncertainties in the data. Where $T_x > 10$\,K this error is less than 10\%, while in regions of lower $T_x$ the error is around 20\%.
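Both look-up-table inversions, for the opacity ratio of equation \ref{eq:tau_rat} and the brightness ratio of equation \ref{eq:TR-rat}, follow the same pattern and can be sketched as below (an illustrative sketch with invented function names; the grid step is a free parameter):

```python
import math

def tau_ratio_32_10(tx):
    """12CO tau(3-2)/tau(1-0) as a function of T_x (eq. tau_rat)."""
    return (3.0 * math.exp(-16.60 / tx)
            * (1.0 - math.exp(-16.60 / tx)) / (1.0 - math.exp(-5.53 / tx)))

def tr_ratio_32_10(tx):
    """Low-optical-depth 12CO (3-2)/(1-0) brightness ratio (eq. TR-rat)."""
    # expm1(x) = exp(x) - 1, numerically stable for small arguments
    num = 1.0 / math.expm1(16.60 / tx) - 2.29e-3
    den = 1.0 / math.expm1(5.53 / tx) - 0.152
    return 3.0 * tau_ratio_32_10(tx) * num / den

def invert(model, observed, t_min=3.0, t_max=34.0, step=0.5):
    """Look-up-table inversion: the grid T_x whose model ratio is closest."""
    grid = [t_min + i * step for i in range(int((t_max - t_min) / step) + 1)]
    return min(grid, key=lambda t: abs(model(t) - observed))
```

Both ratios are monotonic in $T_x$ over the tabulated range, so the nearest-grid-point inversion is unambiguous; in the high-temperature limit $\tau_{32}/\tau_{10}\to9$, as noted above.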
To estimate the error on the excitation temperature we calculated the average spectral noise per pixel for each transition. These noise maps were added to the integrated intensity maps and the calculation of $\tau$ and $T_x$ repeated. The error estimates quoted above are the average difference between the results in the nominal maps and those with added noise.
The majority of the CO-traced gas in the LDL has excitation temperatures around 8 -- 10\,K with small excursions to slightly higher values near star-forming regions (e.g. the trilobite and the KR 140 bubble). The three main star-forming regions in the HDL all show significantly enhanced values of $T_x$ in the region of 15 -- 30\,K. One feature to note is that, while the $T_x$ distributions in W3\,Main and W3\,(OH) tend to peak centrally, those in AFGL\,333 peak near the eastern edge of the cloud facing the W4 \textsc{Hii} region (Figure \ref{fig:Tx}).
\begin{figure}
\includegraphics[height=6.7cm]{Figure7.png}
\caption{The distribution of CO excitation temperature in the W3 GMC, estimated from the optical depth and integrated intensity ratios of $^{12}$CO J=3$\to$2 and J=1$\to$0. The axes are Galactic coordinates.}
\label{fig:Tx}
\end{figure}
\subsection{Gas Mass Distribution}
\label{subsec:MCFE}
Having determined $T_x$ and $\tau$, the column density $N$ is obtained from equation \ref{eq:4}. For the $^{12}$CO J=3$\to$2 transition, and $\upsilon$ in \kms, the relationship is
\begin{equation}
\label{eq:N32}
\frac{ N_{\mbox{\rm co}}}{\mathrm{m}^{-2}}= \frac{ 7.67 \times 10^{17} \left(T_x+0.922\right) e^{16.60/T_x} }{\left(1-e^{-16.60/T_x}\right) } \int \tau(\upsilon)d\upsilon.
\end{equation}
\noindent
This is converted to molecular hydrogen column density using an abundance ratio of $[^{12}\mbox{CO}]/[\mbox{H}_2]=9.8 \times 10^{-5}$ \citep{1982ApJ...262..590F} and to mass per pixel (Figure \ref{fig:mass}) assuming a distance to the cloud of 2\,kpc (\citealp{2006Sci...311...54X}; \citealp{2006ApJ...645..337H}). Integrating over the map, we find that the W3 GMC has a total mass of $(4.4\pm0.4) \times 10^{5}\,\msolar$, consistent with previous estimates of the cloud's mass. The error on the mass was estimated in the same way as for $T_x$, as described above in Section \ref{subsec:Tx}.
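The chain of conversions, from the integrated optical depth to a per-pixel gas mass, can be sketched as follows. This is a hedged illustration rather than the paper's pipeline: the constants are those quoted in the text and in the caption of Figure \ref{fig:mass} (0.43\,pc per pixel), and the function names are ours.

```python
import numpy as np

M_H = 1.6735575e-27   # kg, mass of a hydrogen atom
PC = 3.0857e16        # m, one parsec
M_SUN = 1.989e30      # kg, solar mass

def n_co(T_x, int_tau_dv):
    """12CO column density (m^-2) from equation (N32); int_tau_dv in km/s."""
    return (7.67e17 * (T_x + 0.922) * np.exp(16.60 / T_x)
            / (1.0 - np.exp(-16.60 / T_x)) * int_tau_dv)

def mass_per_pixel(T_x, int_tau_dv, pixel_pc=0.43,
                   co_abundance=9.8e-5, mu=2.8):
    """Gas mass (Msun) in one square pixel, using the conversions in the text."""
    n_h2 = n_co(T_x, int_tau_dv) / co_abundance   # [12CO]/[H2] = 9.8e-5
    area = (pixel_pc * PC) ** 2                   # 0.43 pc/pixel (Figure 8)
    return n_h2 * mu * M_H * area / M_SUN         # mu = 2.8 per H2, incl. He
```

The mass scales linearly with the integrated optical depth, so the relative errors quoted for $\tau$ propagate directly into the per-pixel masses.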
\begin{figure}
\includegraphics[width=8.4cm]{Figure8.png}
\caption{The distribution of molecular gas mass (in $\msolar$ units) per pixel (0.43\,pc/pixel) across the W3 GMC. }
\label{fig:mass}
\end{figure}
The accuracy of the absolute values in Figure \ref{fig:mass} is limited by the uncertainty in the value of [$^{12}$CO]/[H$_{2}$]. The value we use is derived from extinction measurements of the nearby Taurus and $\rho$ Ophiuchi molecular clouds \citep{1982ApJ...262..590F}. Direct measurements of this abundance (e.g. \citealp{1994ApJ...428L..69L}; \citealp{1985ApJ...298..316W}) can differ by factors of 3 to 5, which makes other sources of error, propagated from the $\tau$ and $T_x$ calculations, insignificant. However, this is a systematic error, and does not affect comparisons made within the map. In terms of random error, the most well-behaved regions are those with the highest optical depth ($\tau>$\,20), where uncertainties are of the order $\pm$10$\msolar$ per pixel. In regions with lower excitation temperatures ($T_x<$\,10), for which the uncertainties in the optical depth produce higher relative errors, the uncertainty on the mass per pixel is, on average, $\pm$20 $\msolar$. This translates to an error of as low as 4\% in pixels with mass higher than $\sim$ 250\,$\msolar$ and as high as 30\% in pixels with mass lower than $\sim$ 60$\msolar$.
The spatial distribution of mass shown in Figure \ref{fig:mass} closely follows that of the $^{12}$CO J=3$\to$2 emission in Figure \ref{fig:12co}, as well as of the 850-$\umu$m continuum in the HDL and southern part of the cloud \citep{2007MNRAS.379..663M}.
\section{Discussion}
\label{sec:discussion}
The CO J=3$\to$2 emission line data are similar in mapping extent to the existing FCRAO CO J=1$\to$0 data as well as the SCUBA continuum observations of the cloud \citep{2007MNRAS.379..663M}. The data contain a very large amount of detailed information on the physical state of the W3 GMC. This paper concentrates on the distribution of gas temperatures inferred from LTE excitation temperatures and the results of subsequent calculations of the distribution of mass in the cloud. The velocity structure and dynamics of the cloud will be the subject of a subsequent paper.
\subsection{Gas temperatures}
\subsubsection{$T_x$ method}
Our excitation temperature results are obtained by assuming that LTE conditions apply throughout the GMC, i.e.\ that rotational levels $J\le3$ are thermalised, populated according to the Boltzmann distribution which is dependent only on temperature. There are several caveats to this assumption. The first is that the critical density of the 3$\to$2 transition is quite high, as mentioned above, and it is quite likely that the mean density in the majority of the gas comprising W3 is less than this. Where this is the case, the energy-level populations will be determined by the collision rate, and so by both the temperature and density of the gas. $T_x$ will then underestimate the kinetic temperature $T$ of the gas and the observed line radiation temperature $T_{\rm R}$ will be less than predicted by LTE. The effect on predicted column densities is complex. Whether or not $N$ is under- or over-estimated depends at least partly on the gas temperature. At low $T$, an underestimate of $T$ may cause an overestimate of $N$ and at high $T$ the reverse may be true. At temperatures around 30\,K small errors in $T$ will have little effect. A second point to note is that the effective critical density of a transition may be lower where the optical depth is high and photon trapping becomes significant. This is likely to apply to the $^{12}$CO transitions across most of the cloud and to $^{13}$CO along high column density lines of sight and in the line centres. This effect may undermine the assumption of equal $T_x$ for the same transition of different isotopic species used to calculate optical depths.
These issues notwithstanding, our method of deriving $T_x$ from the ratio of the optical depths of two different transitions of the same species, or from the line radiation temperature ratio where optical depths are low, has some advantages. Firstly, the value of $\tau_{32}/\tau_{10}$ depends on the populations of all four energy levels and so the derived value of $T_x$ represents the distribution of energies relatively well (being equivalent to a fit of the Boltzmann distribution) and should give a fairly robust estimate of column density, even if $T_x$ underestimates the real kinetic temperature of the gas. Secondly, equation \ref{eqn:radtrans} contains, implicitly, the filling factor ($\eta_{\rm \,ff}$) of the emission within the telescope beam, i.e.\ the fraction of the beam area filled by the emitting gas. Although $\eta_{\rm \,ff}$ may not be quite the same for different transitions, it will be accounted for, to first order, by using $T_{\rm R}$ and $\tau$ ratios.
In other studies, $T_x$ has often been estimated from the brightness temperature of a single transition, using equation \ref{eqn:radtrans} (e.g.\ \citealt{2010MNRAS.401..204B}). This is the best method in the absence of data in other transitions, but the LTE assumption then models only the relative populations of the upper and lower levels of the one transition used (J=3 and J=2 in this case) and does not account for $\eta_{\rm \,ff}$, which may be quite small.
\subsubsection{$T_x$ results}
The large-scale distribution of $T_x$ revealed in Figure \ref{fig:Tx} contains few surprises. In general, we see higher temperatures ($>$20\,K) near regions of active star formation and cooler gas ($\le$10\,K) elsewhere. We also see two large-scale temperature gradients, one running east-west across the whole cloud and the other along a north-south axis through the HDL. Along the first axis, temperatures range from $\sim$20-30\,K in the HDL down to $\sim$4-9\,K in the central and western regions of the GMC. It is well known (e.g. \citealp{2007IAUS..237..481U}) that in regions of high density the gas is in thermal equilibrium with the radiatively heated dust, whereas at lower densities molecular cooling dominates and the gas cools down through molecular transitions. Therefore, this gradient is likely to be the result of lower densities to the west of the HDL as well as a gradient in the radiation field intensity. The second gradient ranges from $\sim$30\,K in W3 Main to $\sim$10\,K in AFGL 333. Since gas temperatures may be an indicator of the evolutionary stage of star formation within a cloud, this north-south trend may imply an age sequence. Such an age gradient has also been suggested by \citet{2005JKAS...38..257S} who observed all three regions in atomic carbon.
In addition to this, there are some detailed differences in the $T_x$ distribution within the three bright HDL regions. In W3 Main, $T_x$ peaks clearly in the middle of the associated cloud, coincident with the brightest IR and submm sources. The embedded YSOs in W3 Main therefore appear to be the dominant heating source in the cloud. The surrounding molecular gas may, in fact, be in the process of being dispersed by these centrally formed objects. W3\,(OH) has a lower average excitation temperature of about 20\,K. $T_x$ is also centrally peaked in this source, although less clearly than in W3 Main, and there are also high temperatures along its western edge, indicating that external heating may be important in this cloud. Finally, AFGL\,333 exhibits the lowest mean excitation temperature ($\sim$10\,K) of these three regions and the $T_x$ distribution clearly peaks at the eastern edge, which is exposed to the radiation from the IC\,1805 cluster (Figure \ref{fig:msxw4w3}). Assuming the embedded YSOs become more dominant heating sources with time, these internal $T_x$ distributions appear to support the idea of an age gradient from north to south along the HDL.
\subsection{Mass Distribution}
Obtaining the distribution of $T_x$ has allowed us to derive the mass distribution of the cloud with much more accuracy than previous studies which assume a single temperature. Our new estimate of the total mass of the GMC is $(4.4\pm0.4) \times 10^{5}\,\msolar$, consistent with that of
\citet{2007MNRAS.379..663M} who obtained $(3.8 \pm 1.1) \times 10^{5}\,\msolar$ from $^{13}$CO J=1$\to$0 data, assuming $T_x = 30$\,K everywhere. This agreement is despite most of the cloud having $T_x < 30$\,K which, because we are below $E/k = 33$\,K, should produce higher mass estimates. We find that the mass is almost equally divided between the HDL region and the remainder of the cloud ($2.23 \times 10^{5}\,\msolar$ and $2.19 \times 10^{5}\,\msolar$, respectively), even though the latter covers almost twice as much projected area as the HDL.
\subsubsection{Clump Formation Efficiency}
We use the existing SCUBA observations of the cloud \citep{2007MNRAS.379..663M} along with the masses derived above to calculate the fraction of gas in dense, potentially star-forming structures as a function of position in the cloud, i.e.\ the clump formation efficiency (CFE). The CFE is a time-integrated quantity and can be written as:
\begin{equation}
\mbox{CFE}=\frac{1}{M_{\rm cloud}} \int_{t=0}^{t=\mbox{\tiny now}} \dot{M}(t) \,{\rm d}t,
\end{equation}
where $\dot{M}$ is the rate of formation of dense-core mass from the available gas of the cloud. Therefore a high CFE can be the result of either a high average dense-core formation rate or of a long integration time. We have calculated the CFE for the area of W3 surveyed at 850$\umu$m by \citet{2007MNRAS.379..663M}. Sub-millimetre flux densities are converted to gas mass using the standard formula,
\begin{equation}
M=\frac{S_{\nu}D^{2}}{\kappa_{\nu}B_{\nu}(T_{d})}
\end{equation}
where $S_{\nu}$ is the integrated flux density at 850\,$\umu$m, $D$ is the distance to the cloud, $\kappa_{\nu}$ is the mass absorption coefficient and B$_{\nu}(T)$ is the Planck function. $T_d$ is the dust temperature, for which we assume a constant value of 20\,K, representative of the dust temperatures of dense cores derived from infrared SEDs (e.g. \citealp{2010A&A...518L..97E}; \citealp{2010A&A...520L...8P}). We assume a constant value of $\kappa_{\nu}$=0.01\,cm$^{2}$g$^{-1}$ \citep{2001ApJ...556..215M} and so a constant gas-to-dust ratio throughout the cloud. The SCUBA map is then regridded to match the CO-traced mass data. The division of these two maps gives the distribution of the CFE across the cloud (Figure \ref{fig:cfe}).
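The flux-to-mass conversion can be checked numerically. The sketch below is ours, not the paper's pipeline; it uses the stated values $T_d=20$\,K, $\kappa_\nu = 0.01$\,cm$^2$\,g$^{-1}$ and $D=2$\,kpc, with flux densities in Jy:

```python
import numpy as np

H = 6.62607e-34    # J s, Planck constant
K_B = 1.38065e-23  # J/K, Boltzmann constant
C = 2.99792e8      # m/s, speed of light
PC = 3.0857e16     # m
M_SUN = 1.989e30   # kg
JY = 1e-26         # W m^-2 Hz^-1, one jansky

def planck(nu, T):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu**3 / C**2 / (np.exp(H * nu / (K_B * T)) - 1.0)

def dust_mass(S_jy, D_kpc=2.0, wavelength=850e-6, T_d=20.0, kappa_cgs=0.01):
    """Gas mass (Msun) from M = S_nu D^2 / (kappa_nu B_nu(T_d))."""
    nu = C / wavelength
    kappa = kappa_cgs * 0.1        # cm^2/g -> m^2/kg
    D = D_kpc * 1e3 * PC           # kpc -> m
    return S_jy * JY * D**2 / (kappa * planck(nu, T_d)) / M_SUN
```

Dividing the resulting sub-millimetre mass map by the CO-traced mass map, pixel by pixel, then yields the CFE distribution of Figure \ref{fig:cfe}.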
\begin{figure}
\includegraphics[height=7.2cm]{Figure9.png}
\caption{The clump formation efficiency (\%) as a function of position across the W3 GMC. The CFE is given by the ratio of the sub-millimetre mass and the CO-traced molecular gas mass, calculated where the 850\,$\mu$m continuum data was available.}
\label{fig:cfe}
\end{figure}
As was found for $T_x$, there are two large-scale CFE gradients across the GMC. One runs north-south along the HDL, where W3 Main (CFE$\,\sim$23\%) and W3\,(OH) ($\sim$20\%) exhibit higher efficiency in gas-to-clump conversion than AFGL\,333 ($\sim$3\%). In comparison, the spontaneous star-forming regions in the western section of W3 have very low values of CFE, generally below 1\%, creating a second, east-west gradient. The most obvious interpretation is that the high CFE values present in the eastern regions of the cloud are the direct result of the interaction with the adjacent expanding W4 \textsc{Hii} superbubble. This is in agreement with the results of \citet{2007MNRAS.379..663M}, who found that 26\% of the gas has been converted into dense clumps with $M \ge 13$\,M$_{\odot}$ in the HDL, compared to 5\% in the western regions of the cloud. They also found that the Core Mass Function (CMF) of the cloud does not change significantly between the triggered and spontaneous star-forming regions. There is clear evidence, therefore, that triggering due to the W4 expansion results in the creation of new dense structure (clumps/cores) across the affected regions that is able to form new stars. The common CMF of the two regions suggests that the triggering does not affect the star-formation process any further than that, i.e.\ triggering allows the formation of more massive stars, but it does not alter the shape of the canonical IMF.
This is in agreement with the collect-and-collapse model \citep{1994A&A...290..421W}, where a trigger, such as the winds from massive stars, can create a shock that propagates through the surrounding medium, sweeping and compressing the gas adjacent to the forming bubble, creating new dense structure that can become gravitationally unstable along its surface on long timescales.
The higher efficiencies measured in W3 Main, compared to the W3\,(OH) and AFGL\,333 regions, can also be interpreted as due to different timescales. Assuming that the triggering from the W4 bubble affects the three regions in a similar way, i.e. it increases the star formation rate to a similar degree, then the difference of a factor of three in the clump formation efficiency between W3 Main and AFGL\,333 star-forming regions may be simply due to W3 Main being older than AFGL\,333. This is consistent with the results of several studies that indicate multiple generations of star formation in and around W3 Main (e.g. \citealp{2008ApJ...673..354F}; \citealp{2011ApJ...743...39R}).
\subsection{Mach Number}
The turbulent fragmentation model of star formation neatly provides a mechanism for the simultaneous support of molecular clouds against gravity on large scales and the formation of dense, star-forming cores in the collisions between turbulent flows on small scales, thus naturally explaining the low star-formation efficiency (SFE) usually found \citep{2007ARA&A..45..565M}. A relation should therefore exist between turbulence and the CFE in a given cloud. \citet{2009arXiv0907.0248P} use the \citet{2005IAUS..227..276M} star-formation rate (SFR) model, extended to a magnetised medium, to study the relationship between the SFR, the virial parameter, $\alpha_{vir}$, and the sonic rms Mach number. Higher Mach numbers mean stronger shocks, which should produce thicker, denser compressed regions as well as additional support against gravity on large scales. The relationship, therefore, is not a simple one. However, the models predict a weak negative correlation between the star formation rate and the Mach number for turbulence-dominated star-forming regions. This implies that there should be more large-scale support against gravity where the turbulence is stronger, as expected.
In order to investigate this prediction, we have measured the velocity width, $\sigma_{co},$ of the J=3$\to$2 emission. Unfortunately, the emission line widths are very much dependent on whether the line is self-absorbed and/or optically thick. Thus, in this analysis we cannot use the $^{12}$CO transition since it is largely both self-absorbed and optically thick. $^{13}$CO also suffers from optical depth effects towards the densest and brightest regions of the cloud and we, therefore, have to be very careful when using it to derive the velocity widths. Generally, C$^{18}$O is the best choice as it is optically thin, however, it is a weak line and we only detect it in the HDL regions of the cloud. For the $^{13}$CO J=3$\to$2 we find that the line widths vary between 0.6 and 4.0\,km/s while for the C$^{18}$O J=3$\to$2 emission line they vary between 0.5 and 3.0\,km/s.
The total velocity dispersion in the molecular gas can be obtained from $\sigma_{co}$ by deconvolving the thermal velocity dispersion of the CO molecules, estimated from the sound speed $\sqrt{3kT/m_{\rm co}}$, and convolving the thermal velocity dispersion of the mean molecular gas, i.e.\ the sound speed $c_s = \sqrt{3kT/\mu m_{\rm H}}$. Here $\mu$ is the mean molecular weight = 2.8, assuming solar neighbourhood abundances. The gas temperature $T$ can be estimated using the CO excitation temperature $T_x$ obtained above (where $T_x = T_{kin}$ assuming LTE). This results in $c_s$=0.2-1.3\,km\,s$^{-1}$ through the cloud which is systematically lower than the line widths calculated above for $^{13}$CO and C$^{18}$O, confirming the presence of supersonic flows. If the velocity distribution is assumed to be Gaussian, we can express the total three-dimensional velocity dispersion in the CO-traced gas as
\begin{equation}
\label{eq:M2}
\sigma^{2}_{3D}(\mathrm{total})=
3\left[\sigma^{2}_{\rm co}-\frac{kT}{m_{\rm H}} \left( \frac{1}{\mu_{\rm co}}-\frac{1}{\mu} \right) \right]
\end{equation}
where $\mu_{\rm co} = 29$ and 30 for $^{13}$CO and C$^{18}$O, respectively.
\noindent
The Mach number, $\mathcal{M}$, is given by:
\begin{equation}
\mathcal{M}^2=\frac{\sigma^2_{3D}(\mathrm{total})}{c^2_s}.
\end{equation}
Using equation \ref{eq:M2}, this can be written
\begin{equation}
\mathcal{M}^{2}= 3.4\times10^{-4} \, \frac{\sigma^{2}_{co}}{T} + x
\end{equation}
where $\sigma_{co}$ is in m\,s$^{-1}$, $T$ is in K and $x = 1-\mu/\mu_{\rm co} = 0.903$ for $^{13}$CO and 0.907 for C$^{18}$O.
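For concreteness, the Mach number calculation can be sketched directly from equation \ref{eq:M2} and the sound speed definition above. This is a minimal illustration with assumed example values; the physical constants are standard:

```python
import numpy as np

K_B = 1.38065e-23   # J/K, Boltzmann constant
M_H = 1.6726e-27    # kg, hydrogen mass

def mach_number(sigma_co_kms, T, mu_co=29.0, mu=2.8):
    """Sonic 3D Mach number from an observed CO velocity dispersion (km/s)
    and gas temperature T (K): deconvolve the CO thermal dispersion,
    convolve the mean-gas thermal dispersion, divide by c_s."""
    sigma = sigma_co_kms * 1e3                                   # -> m/s
    sigma2_3d = 3.0 * (sigma**2
                       - (K_B * T / M_H) * (1.0 / mu_co - 1.0 / mu))
    cs2 = 3.0 * K_B * T / (mu * M_H)                             # c_s^2
    return np.sqrt(sigma2_3d / cs2)
```

For example, a $^{13}$CO line width of 2\,\kms\ at $T = 15$\,K gives $\mathcal{M}\approx9.6$, well into the supersonic regime found across the cloud.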
\begin{figure}
\includegraphics[height=5.3cm]{Figure10.png}
\caption{The $\mathcal{M}$ normalised distributions for the HDL region as derived from $^{13}$CO (red line) and C$^{18}$O J=3$\to$2 (green dashed-dot line) and for the LDL region as derived from $^{13}$CO J=3$\to$2 (blue dashed line).}
\label{fig:mdist}
\end{figure}
Figure \ref{fig:mdist} shows the range of $\mathcal{M}$ values in the LDL as derived from $^{13}$CO (blue dashed line) and in the HDL from $^{13}$CO (red line) and C$^{18}$O (green dash-dot line). From the $^{13}$CO it is clear that, while there are higher values of $\mathcal{M}$ in the HDL region, the distribution does not differ significantly from that of the LDL. This is due to the effects of optical depth, as well as higher noise in the more diffuse gas that the $^{13}$CO traces in both the HDL and LDL regions. The C$^{18}$O-derived Mach number distribution, on the other hand, does not suffer from optical depth effects and is less affected by noise, as it comes from the brightest regions. It should therefore be more indicative of the real $\mathcal{M}$ distribution in the densest regions.
The first panel of figure \ref{fig:mach2p} shows $\mathcal{M}$ plotted against CFE for the LDL region from $^{13}$CO J=3$\to$2 (blue stars) and the HDL region from both $^{13}$CO (red crosses) and C$^{18}$O J=3$\to$2 (green triangles). Note that we have included the $\mathcal{M}$ calculated from all the pixels in all three maps where there is signal, i.e.\ there are two velocity dispersion estimates for regions that have signal in both the $^{13}$CO and C$^{18}$O maps. Comparisons between the HDL and LDL regions are only possible in $^{13}$CO, as we have practically no detections in the LDL in C$^{18}$O. To determine whether there is a correlation between $\mathcal{M}$ and CFE we used the Spearman rank correlation test for all the subsets. For the $^{13}$CO data in the HDL we find that, despite the spread, there is a positive correlation significant at the 3$\sigma$ level ($\rho$=0.25, t=6.7, N=675). In the LDL region, on the other hand, we find no correlation between $\mathcal{M}$ and the CFE ($\rho$=0.066, t=1.2, N=240). The spread in the $^{13}$CO J=3$\to$2 derived $\mathcal{M}$ is caused by the more diffuse material that surrounds the densest regions in W3 Main, W3\,(OH) and the AFGL\,333 triggered regions. To minimise this effect, and also to negate the effect introduced by the $^{13}$CO lines being optically thick in the densest regions, we use the C$^{18}$O line, as it only traces the most dense regions and is also optically thin. We find a tighter correlation between $\mathcal{M}$ and the CFE, significant at the 3$\sigma$ level ($\rho$=0.47, t=8.5, N=261).
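The significance values quoted for the Spearman tests are consistent with the usual $t = \rho\sqrt{(N-2)/(1-\rho^2)}$ statistic applied to rank correlations; a minimal sketch (ours, assuming no tied ranks) is:

```python
import numpy as np

def ranks(a):
    """Rank values 1..N (no tie handling; adequate for continuous map data)."""
    r = np.empty(len(a))
    r[np.argsort(a)] = np.arange(1, len(a) + 1)
    return r

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    return np.corrcoef(ranks(x), ranks(y))[0, 1]

def t_statistic(rho, n):
    """t-statistic for the significance of a Spearman rho over n points."""
    return rho * np.sqrt((n - 2) / (1.0 - rho**2))

# The HDL 13CO case quoted in the text: rho = 0.25, N = 675
t_hdl = t_statistic(0.25, 675)
```

With the quoted $\rho=0.25$ and N=675 this gives $t\simeq6.7$, matching the value reported for the HDL $^{13}$CO subset.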
We therefore find no evidence in the current data to support the prediction of \citet{2009arXiv0907.0248P}. It is possible that the lack of correlation between $\mathcal{M}$ and CFE in the western regions of the cloud, which should be dominated by spontaneous star formation, could be due to the small range in CFE values found there. The significant positive correlation found in the HDL can have two interpretations. It could be the result of the expansion shock produced by the W4 shell interacting with the gas in these regions, increasing the turbulent velocities and the efficiency with which the cloud has formed dense structure, in a way that is inconsistent with the predictions of turbulent (spontaneous) star formation. A more likely interpretation is that the increased degree of turbulence is an effect of the higher CFE present in these regions, coupled with their age, i.e.\ it is due to feedback from the on-going star formation injecting momentum or kinetic energy, which results in higher measured $\mathcal{M}$.
In the bottom panel of figure \ref{fig:mach2p} we again plot $\mathcal{M}$, as derived from the C$^{18}$O J=3$\to$2 emission line against CFE per pixel, but separate the data into the three main HDL star-forming regions, W3 Main (red crosses), W3\,(OH) (blue stars) and AFGL\,333 (green triangles), revealing that the relationship between these quantities is different for each region. Spearman rank correlation tests on the three subsamples produce the results $\rho$=0.47, 0.51 and 0.65 with t= 5.6, 4.06, 7.4, and N=115, 63, 77 data points respectively, which are all significant at a level $>3 \sigma$.
A least-square fit to each of the three regions gives different results for each of them with gradient and intercept values of m=0.11$\pm$0.02 and $\beta$=0.55$\pm$0.01 for W3 Main, m=0.17$\pm$0.03 and $\beta$=0.56$\pm$0.01 for W3\,(OH) and m=0.26$\pm$0.03 and $\beta$=0.57$\pm$0.01 for AFGL\,333. We note, further, that while there is only a small difference of 1-2 $\sigma$ between the slopes of W3 Main/W3\,(OH) and of W3\,(OH)/AFGL\,333, there is a larger difference of 3-4 $\sigma$ between the slopes of W3 Main and AFGL\,333.
This change in the steepness of the slopes in the above correlations for the three different regions can be related to the evolutionary stage and state of each. If the observed turbulence were simply due to the mechanical interaction with the W4 expansion, and the CFE were determined by the turbulence, then all three regions should have more or less the same correlation between $\mathcal{M}$ and the CFE. The change in the correlation between the three regions can be interpreted as a result of the on-going star formation affecting the cloud. Feedback processes, like outflows or stellar winds from forming stars, have been acting on these regions for different lengths of time. In W3 Main the slope of the correlation is flatter than in W3\,(OH), which again is less steep than that found in AFGL\,333. Since W3 Main is thought to be more evolved than W3\,(OH) and AFGL\,333 it follows that the flattening of the slope is likely to be because of CFE values produced by a longer star-formation timescale.
\begin{figure}
\includegraphics[height=11.5cm]{Figure11.png}
\caption{{\bf Top:} $\mathcal{M}$ per pixel plotted against the Clump Formation Efficiency in the HDL (red crosses for the $^{13}$CO derived $\mathcal{M}$ and green triangles for the C$^{18}$O derived $\mathcal{M}$) and LDL (blue stars for the $^{13}$CO derived $\mathcal{M}$). {\bf Bottom:} $\mathcal{M}$ derived from the C$^{18}$O J=3$\to$2 emission line plotted against the CFE across the three HDL regions. The fitted lines are the linear least-square fits to the three regions. The gradients and intercepts for the three regions are: m=0.11$\pm$0.02 and $\beta$=0.55$\pm$0.01 for W3 Main, m=0.17$\pm$0.03 and $\beta$=0.56$\pm$0.01 for W3\,(OH) and m=0.26$\pm$0.03 and $\beta$=0.57$\pm$0.01 for AFGL\,333. The dashed lines indicate the 1-$\sigma$ confidence interval in the fit.}
\label{fig:mach2p}
\end{figure}
\section{Conclusions}
\label{sec:conc}
We have presented new CO J=3$\to$2 maps of the W3 giant molecular cloud, obtained using HARP on the JCMT. In conjunction with FCRAO CO J=1$\to$0 data, we have used these maps to derive the gas properties as a function of position within the W3 GMC.
We have used the ratio of the optical depths of the two transitions of $^{12}$CO (where $\tau\,>1$) and the ratio of the brightness temperatures T$_{b}$ (where $\tau\,<1$) to derive the distribution of excitation temperature in the CO-traced molecular gas (Figure \ref{fig:Tx}). We find high excitation temperatures (T$_{x}>12\,K$) in the eastern HDL region, where there is active star formation. In the remainder of the GMC the temperature rarely rises above 10\,K. We see a temperature gradient along the HDL, where star formation has been triggered by compression due to expansion of the nearby W4 H{\sc ii} region. We associate this with an age gradient in which W3 Main is the most evolved of the main star-forming regions, followed by W3\,(OH) and AFGL\,333.
Using the excitation temperature map, we have obtained an accurate determination of the distribution of gas mass in the W3 GMC (Figure \ref{fig:mass}). We find that the cloud contains $(4.4 \pm 0.4) \times 10^{5}\,\msolar$, half of which is located in the HDL region. This value is in agreement with previous estimates ($(3.8 \pm 1.1) \times 10^{5}\,\msolar$; \citealp{2007MNRAS.379..663M}).
We used existing sub-millimetre continuum observations of the cloud \citep{2007MNRAS.379..663M} to measure the so-called clump-formation efficiency (CFE) i.e.\ the fraction of molecular gas in the form of dense, potentially star-forming structures (clumps), as a function of position in the cloud. We find that, in the regions affected by the expanding W4 \textsc{Hii} superbubble, the CFE has values of 3-25\%, much higher than that of the rest of the cloud that remains apparently unaffected, where values are less than $\sim$1\%. We conclude that the triggering mechanism that has created the actively star-forming HDL in the W3 GMC primarily works by creating new dense structures, in agreement with the collect-and-collapse model \citep{1994A&A...290..421W}, rather than by forcing the collapse of existing structures in the gas.
We have used the widths of the $^{13}$CO and C$^{18}$O J=3$\to$2 emission lines to derive the sonic rms Mach number across the GMC. We find that there is a positive correlation between the Mach number and the CFE, but only in the gas of the HDL. This correlation is opposite to that expected from models of turbulence-driven star formation and is probably due to feedback from the recent star formation injecting momentum into the nearby gas. The slope of this correlation is different (Figure \ref{fig:mach2p}) in each of the three main star-forming regions in this part of the cloud (i.e.\ W3 Main, W3\,(OH) and AFGL\,333) and we interpret this as another indicator of the differing evolutionary stages of these three regions.
\section{Acknowledgements}
We would like to thank the reviewer, Chris Davis, for his comments, which helped to improve this paper. We would also like to acknowledge Eugenio Schisano and Diego Turrini for useful discussions relating to this work, and Tim Jennes for his help with the JCMT HARP data reduction. DP wishes to acknowledge an STFC PhD studentship for this work. The James Clerk Maxwell Telescope is operated by The Joint Astronomy Centre on behalf of the Science and Technology Facilities Council of the United Kingdom, the Netherlands Organisation for Scientific Research, and the National Research Council of Canada. The data were obtained under Program IDs m06bu21, m07bu17 and m08bu24. FCRAO was supported by NSF Grant AST 08-38222. This research has made use of NASA's Astrophysics Data System.
\section{Introduction}
The behavior of nuclear matter at extreme temperatures is now being studied with the
highest collision energies ever achieved using the Large Hadron Collider (LHC) at CERN.
The ultrarelativistic heavy ion collisions being studied there will eventually
have a center of mass energy of 5.5 TeV per nucleon, which is 27.5 times higher than the
200 GeV per nucleon energy achieved at the Relativistic Heavy Ion Collider
(RHIC) at Brookhaven National Laboratory. At
RHIC, observations indicated that initial temperatures on the order of twice the critical
temperature for the quark-gluon plasma phase transition were generated. This corresponds
to $T_0 \sim 360$ MeV. Assuming that the initial temperature scales with the
fourth root of the collision energy as predicted by dimensional analysis, one predicts
that initial temperatures on the order of $T_0 \sim 4.6\,T_c \sim 830$ MeV
will be generated at the LHC. At such high temperatures, one expects to generate
a quark-gluon plasma in which the formation of quark bound
states is suppressed in favor of a state of matter consisting of a deconfined plasma of quarks
and gluons.
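The quoted LHC estimate follows from a single line of arithmetic under the stated fourth-root scaling; as a check (with the inputs taken from the text):

```python
# Estimate the LHC initial temperature from the RHIC value, assuming
# T_0 scales with the fourth root of the per-nucleon collision energy.
T0_rhic = 360.0                     # MeV, inferred at sqrt(s_NN) = 200 GeV
ratio = (5500.0 / 200.0) ** 0.25    # 27.5**(1/4), about 2.29
T0_lhc = T0_rhic * ratio            # about 824 MeV, i.e. the quoted ~830 MeV
```
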
Suppression of quark bound states follows from the fact that in the
quark-gluon plasma one expects color charge to be Debye screened \cite{Shuryak:1980tp,Gross:1980br}.
This effect led to early proposals to use heavy quarkonium production to measure the temperature
of the quark-gluon plasma. Heavy quarkonium has received the most theoretical
attention since heavy quark states are dominated by short rather than
long distance physics at low temperatures and can be treated using heavy quark
effective theory. Based on such effective theories of Quantum Chromodynamics (QCD) with weak coupling
at short distances, non-relativistic quarkonium states can be reliably described. Their binding
energies are much smaller than the quark mass $m_Q\gg\Lambda_{\rm
QCD}$ ($Q=c,b$), and their sizes are much larger than $1/m_Q$. At zero
temperature, since the velocity of the quarks in the bound state is
small, $v\ll c$, quarkonium can be understood in terms of
non-relativistic potential models \cite{Lucha:1991vn} such as the
Cornell potential \cite{Eichten:1979ms}. Such potential models can
be derived directly from QCD as an effective field theory
(potential non-relativistic QCD - pNRQCD) by integrating out modes
above the scales $m_Q$ and then $m_Q v$, respectively
\cite{Brambilla:2004jw}.
As mentioned above, at high temperature the deconfined phase of QCD exhibits
screening of static color-electric fields. It is expected that this
screening leads to the dissociation of quarkonium states, which can
serve as a signal for the formation of a deconfined quark-gluon plasma
in heavy ion collisions~\cite{Matsui:1986dk}. Inspired by the success at
zero temperature, potential model descriptions have also been applied to
understand quarkonium properties at finite temperature. The pioneering
paper of Matsui and Satz \cite{Matsui:1986dk} was followed by the work of
Karsch, Mehr, and Satz \cite{Karsch:1987pv}, which presented the first
quantitative calculation of quarkonium properties at high temperature.
In recent work, more involved calculations
of quarkonium spectral functions and meson current correlators
obtained from potential models have been performed
\cite{Mocsy:2004bv,Wong:2004zr,Mocsy:2005qw,Cabrera:2006wh,%
Mocsy:2007jz,Alberico:2007rg,Mocsy:2007yj,Mocsy:2008eg}.
The results have been compared to first-principle QCD calculations
performed numerically on lattices \cite{Umeda:2002vr,Asakawa:2003re,%
Datta:2003ww,Aarts:2006nr,Hatsuda:2006zz,Jakovac:2006sf,Umeda:2007hy,Aarts:2007pk,%
Aarts:2010ek} which rely on the maximum entropy method
\cite{Nakahara:1999vy,Asakawa:2000tr,Asakawa:2002xj}.
A summary and review of the current understanding of potential
models is presented in \cite{Mocsy:2008eg}, and different aspects of quarkonium
in collider experiments can be found in \cite{Abreu:2007kv,Rapp:2008tf}.
In recent years, the imaginary part of the potential due to Landau
damping has been calculated \cite{Laine:2006ns,Laine:2007gj,Beraudo:2007ky}.
Also, the derivation of potential models from QCD via effective field theory methods has
been extended to finite temperature~\cite{Brambilla:2008cx}. All of the aforementioned
calculations, however, were performed with the assumption of an
isotropic thermal medium.
In the last few years there has been an interest in the effect of plasma momentum-space
anisotropies on quarkonium binding energies for both ground and excited
states \cite{Dumitru:2007hy,Dumitru:2009ni,Burnier:2009yu,Dumitru:2009fy,%
Noronha:2009ia,Philipsen:2009wg}. The interest stems from the fact that
at early times a viscous quark-gluon plasma can have large momentum-space anisotropies
\cite{Israel:1976tn,Israel:1979wp,Baym:1984np,%
Muronga:2001zk,Muronga:2003ta,Martinez:2009mf,Florkowski:2010cf,Martinez:2010sc,Martinez:2010sd}.
Depending on the magnitude of the shear viscosity, these momentum-space
anisotropies can persist for a long time ($\sim$ 1 - 10 fm/c). The first paper to consider the quarkonium
potential in a momentum-space anisotropic plasma \cite{Dumitru:2007hy} considered
only the real part of the potential; however, two recent works have extended the calculation to
include the imaginary part of the potential \cite{Burnier:2009yu,Dumitru:2009fy}.
In this paper we use the imaginary part of the potential derived in \cite{Dumitru:2009fy}
and include it in a phenomenological model of the heavy quarkonium potential.
We then numerically solve the three-dimensional Schr\"odinger equation to find
the real and imaginary parts of the binding energies and full quantum wavefunctions
of the charmonium and bottomonium ground states, as well as the first excited state of
bottomonium.
We present data as a function of the temperature and are able to identify the
full effect of the isotropic and anisotropic potentials on these states.
We compare our results with a recent analytic estimate of the imaginary part of the binding energy by
Dumitru \cite{Dumitru:2010id}.
We show that, for an isotropic plasma,
the imaginary part of the binding energy is approximately linear in the temperature for temperatures near the
phase transition in agreement with Ref.~\cite{Dumitru:2010id}.
However, in the case of the $J/\psi$, we find a significantly smaller slope for the
imaginary part of the binding energy as a function of temperature than predicted
by Ref.~\cite{Dumitru:2010id}. The discrepancy most likely arises from the fact that
Ref.~\cite{Dumitru:2010id} assumed Coulombic wavefunctions.
The potential used here includes modifications at both intermediate and long ranges, which causes
the numerical wavefunctions to not be well approximated by Coulombic wavefunctions. In addition, our
wavefunctions are complex with the imaginary part
growing in magnitude as the temperature is increased. This effect was ignored by
the assumption of Coulombic wavefunctions in Ref.~\cite{Dumitru:2010id}.
We find that when the states
are small and dominated by the screened Coulomb potential, the imaginary
part of the binding energy increases approximately linearly with the temperature; however, as the size of the
bound state increases, the scale set by the string tension dominates and the imaginary part
of the binding energy increases more slowly with increasing temperature.
The structure of this paper is as follows: In Sec.~\ref{sec:potmodel}, we review
the potential introduced in Ref.~\cite{Dumitru:2009ni} and extend it to include
the imaginary part of the potential derived in Ref.~\cite{Dumitru:2009fy}.
In Sec.~\ref{sec:nummethod}, we
review the numerical method that we use to solve the three-dimensional
Schr\"odinger equation. In Sec.~\ref{sec:results}, we present our numerical
results for the real and imaginary parts of the binding energies of the charmonium
and bottomonium ground states and first excited state of bottomonium. In Sec.~\ref{sec:conc}, we
state our conclusions and give an outlook for future work. Finally, in an appendix
we present numerical benchmarks and tests of the code used here in order to
demonstrate its convergence and applicability to the problem at hand.
\section{Setup and Model Potential}
\label{sec:potmodel}
In this section we specify the potential we use in this work.
We consider the general case of a quark-gluon plasma which is anisotropic in
momentum space. In the limit that the plasma is assumed to be isotropic, the real
part of the potential used here reduces to the model originally introduced by Karsch,
Mehr, and Satz (KMS) \cite{Karsch:1987pv} with an additional entropy contribution \cite{Dumitru:2009ni}
and the imaginary part reduces to the result originally obtained by Laine et al.~\cite{Laine:2006ns}.
To begin the discussion we first introduce
our ansatz for the one-particle distribution function subject to a momentum-space
anisotropy.
\subsection{The anisotropic plasma}
\label{subsec:aniso}
The phase-space distribution of gluons in the local rest frame is assumed to be given by the
following ansatz~\cite{Dumitru:2007hy,Romatschke:2003ms,
Mrowczynski:2004kv,Romatschke:2004jh,Schenke:2006fz}
\begin{equation}
f({\bf x},{\bf p}) = f_{\rm iso}\left(\sqrt{{\bf p}^2+\xi({\bf p}\cdot{\bf
n})^2 } / p_{\rm hard} \right) , \label{eq:f_aniso}
\end{equation}
where $p_{\rm hard}$ is a scale which specifies the typical momentum
of the particles in the plasma and can be identified with the temperature
in the limit that $\xi=0$.
Thus, $f({\bf x},{\bf p})$ is obtained from an isotropic distribution $f_{\rm
iso}(|{\bf{p}}|)$ by removing particles with a large momentum
component along ${\bf{n}}$, the direction of anisotropy. In this paper, we will
restrict our consideration to a plasma that is close to equilibrium. This is
motivated by the fact that in a heavy-ion collision, quarkonium states
are expected to form when the temperature has dropped to (1-2)~$T_c$.
At such temperatures the plasma may have partly equilibrated/isotropized.
Additionally, this means that we can
assume that the function $f_{\rm iso}(|{\bf{p}}|)$ is a thermal
distribution function.
The parameter $\xi$ determines the degree of anisotropy,
\begin{equation}
\xi = \frac{1}{2} \frac{\langle {\bf p}_\perp^2\rangle}
{\langle p_z^2\rangle} -1~,
\end{equation}
where $p_z\equiv \bf{p\cdot n}$ and ${\bf p}_\perp\equiv {\bf{p-n
(p\cdot n)}}$ denote the particle momentum along and perpendicular to
the direction ${\bf n}$ of anisotropy, respectively. If $\xi$ is small,
then it is also related to the shear viscosity of the plasma. For
example, for one-dimensional boost-invariant expansion governed
by Navier-Stokes evolution
\cite{Asakawa:2006jn,Martinez:2009mf,Martinez:2010sc,Martinez:2010sd}
one finds
\begin{equation} \label{eq:xi_eta}
\xi = \frac{10}{T\tau} \frac{\eta}{s}~,
\end{equation}
where $T$ is the temperature, $\tau$ is the proper time (and $1/\tau$ is
the Hubble expansion rate), and $\eta/s$ is
the ratio of shear viscosity to entropy density. In an expanding
system, non-vanishing viscosity (finite momentum relaxation
rate) implies an anisotropy of the particle momenta which
increases with the expansion rate $1/\tau$. For $\eta/s\simeq
0.1$ -- 0.2 and $\tau T\simeq1$ -- 3 one finds that $\xi\simeq1$.
In general, one can relate $\xi$ to the longitudinal and transverse
pressures in the plasma and it is possible to derive dynamical differential
equations which govern its time evolution similar to viscous hydrodynamics
\cite{Martinez:2010sc,Martinez:2010sd}.
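Both statements above can be checked numerically. The sketch below samples momenta from the anisotropic ansatz (\ref{eq:f_aniso}) with a Gaussian $f_{\rm iso}$ (a choice made purely for convenience; any isotropic $f_{\rm iso}$ works) and verifies that $\frac{1}{2}\langle {\bf p}_\perp^2\rangle/\langle p_z^2\rangle - 1$ recovers the input $\xi$; it also evaluates Eq.~(\ref{eq:xi_eta}) over the quoted parameter ranges:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = 1.0

# Sampling trick: if p' is isotropic, then p with p_z = p'_z / sqrt(1 + xi)
# is distributed according to f_iso(sqrt(p^2 + xi * p_z^2)), i.e. the ansatz.
pp = rng.standard_normal((500_000, 3))   # isotropic reference momenta
pz = pp[:, 2] / np.sqrt(1.0 + xi)        # squeeze along the anisotropy axis n
pperp2 = pp[:, 0]**2 + pp[:, 1]**2

xi_est = 0.5 * pperp2.mean() / (pz**2).mean() - 1.0   # should recover xi

# Navier-Stokes estimate xi = 10 (eta/s) / (tau T) for the quoted ranges
xi_ns = [10.0 * eta_s / tauT for eta_s in (0.1, 0.2) for tauT in (1.0, 3.0)]
print(xi_est, min(xi_ns), max(xi_ns))    # xi_est ~ 1; xi_ns spans ~ 1/3 to 2
```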
We point out that in this paper we restrict ourselves to solving the
time-independent Schr\"odinger equation, i.e.\ we assume that the
plasma is at a constant hard momentum scale $p_{\rm hard}$ and anisotropy $\xi$. This
approximation is useful if the time scale associated with the bound
state, $\sim 1/|E_{\text{bind}}|$, is short compared to the time
scales over which $p_{\rm hard}$ and $\xi$ vary. Indeed, for sufficiently large
quark mass $m_Q$ this condition should be satisfied.
\subsection{The model potential}
Lacking knowledge of the exact heavy-quark potential at finite
temperature, different phenomenological potentials and
lattice-QCD based potentials have been used to
study quarkonium binding energies in the quark-gluon plasma.
To start, we decompose the potential into real
and imaginary parts, $V = V_{\rm R} + i V_{\rm I}$. The model for the real part of the potential
we use was obtained in Ref.~\cite{Dumitru:2009ni}. The analytic calculation of the
imaginary part was performed in Refs.~\cite{Laine:2006ns,Laine:2007qy,Dumitru:2009fy}. The real part is
given by
\begin{equation}
\label{repot}
V_{\rm R}({\bf r}) = -\frac{\alpha}{r} \left(1+\mu \, r\right) \exp\left( -\mu
\, r \right) + \frac{2\sigma}{\mu}\left[1-\exp\left( -\mu
\, r \right)\right]
- \sigma \,r\, \exp(-\mu\,r)- \frac{0.8 \, \sigma}{m_Q^2\, r}~,
\end{equation}
where
\begin{equation}
\frac{\mu}{m_D} \equiv 1-\xi \frac{3+\cos 2\theta}{16}~,
\end{equation}
with $m_D^2 = (1.4)^2 \cdot N_c (1+N_f/6) \, 4 \pi \alpha_s \, p_{\rm hard}^2/3$ being the square of the isotropic leading-order Debye mass, adjusted
by a factor of $(1.4)^2$ to take into account higher-order corrections \cite{Kaczmarek:2004gv}. The coupling $\alpha$
folds in a factor of $C_F = (N_c^2 - 1)/(2 N_c)$, i.e. $\alpha \equiv C_F \alpha_s$, where $\alpha_s = g_s^2/(4\pi)$
is the canonically defined strong coupling constant. We have taken $N_c=3$ and
assumed $N_f=2$ which is appropriate for the temperature range considered herein.
The first term in (\ref{repot}) is a screened Coulomb potential with an entropy addition. The
second and third terms are a screened linear potential associated with confinement in the low temperature
limit.
The last term in (\ref{repot}) is a relativistic correction which is critical for obtaining accurate
binding energies in the low temperature limit.
For the string tension, we fix $\sigma = 0.223$ GeV and for the strong coupling constant we fix
$\alpha = 0.385$.\footnote{Since $\alpha_s$ runs logarithmically and therefore has small
variation in the temperature ranges shown, we will ignore the running of the coupling here.
Incorporating this effect would be straightforward, however, a model of the behavior of $\alpha_s$
at large scales would be required in order to fit zero temperature properties of the states considered here.}
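For concreteness, the parameters above can be combined numerically. The following sketch, restricted to the isotropic case $\xi=0$ (so that $\mu = m_D$), evaluates the Debye mass and the real part of the potential (\ref{repot}), checking two statements made later in the text: $m_D \sim 3\,p_{\rm hard}$ and $\lim_{r\to\infty} V_{\rm R} = 2\sigma/\mu$:

```python
import math

Nc, Nf = 3, 2
alpha = 0.385                    # C_F * alpha_s
alpha_s = alpha / (4.0 / 3.0)    # C_F = (Nc^2 - 1) / (2 Nc) = 4/3
sigma = 0.223                    # GeV^2, string tension

def m_debye(p_hard):
    """Isotropic LO Debye mass with the (1.4)^2 rescaling, in GeV."""
    return 1.4 * math.sqrt(Nc * (1 + Nf / 6.0) * 4 * math.pi * alpha_s / 3.0) * p_hard

def V_R(r, mu, mQ):
    """Real part of the model potential; r in GeV^-1, result in GeV."""
    return (-alpha / r * (1 + mu * r) * math.exp(-mu * r)
            + 2 * sigma / mu * (1 - math.exp(-mu * r))
            - sigma * r * math.exp(-mu * r)
            - 0.8 * sigma / (mQ**2 * r))

p_hard = 2 * 0.192               # 2 Tc with Tc = 192 MeV
mD = m_debye(p_hard)
print(mD / p_hard)               # ~ 3, so mD ~ 1.2 GeV at 2 Tc
print(V_R(100.0, mD, 4.7), 2 * sigma / mD)   # large-r plateau vs 2 sigma / mu
```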
The imaginary part is given by \cite{Dumitru:2009fy}
\begin{equation}
V_{\rm I}({\bf r}) = -\alpha T \biggl[ \phi(\hat{r}) - \xi \left(\psi_1(\hat{r},
\theta)+\psi_2(\hat{r}, \theta)\right)\biggr] ,
\label{impot}
\end{equation}
where $\hat r = m_D r$ and
\begin{eqnarray}
\phi(\hat{r}) &=& 2\int_0^{\infty}dz \frac{z}{(z^2+1)^2} \left[1-\frac{\sin(z\, \hat{r})}{z\, \hat{r}}\right]~, \\
\psi_1(\hat{r}, \theta) &=& \int_0^{\infty} dz
\frac{z}{(z^2+1)^2}\left(1-\frac{3}{2}
\left[\sin^2\theta\frac{\sin(z\, \hat{r})}{z\, \hat{r}}
+(1-3\cos^2\theta)G(\hat{r}, z)\right]\right), \\
\psi_2(\hat{r}, \theta) &=&- \int_0^{\infty} dz
\frac{\frac{4}{3}z}{(z^2+1)^3}\left(1-3 \left[
\left(\frac{2}{3}-\cos^2\theta \right) \frac
{\sin(z\, \hat{r})}{z\, \hat{r}}+(1-3\cos^2\theta)
G(\hat{r},z)\right]\right),\;\;\;\;
\label{funcs}
\end{eqnarray}
with $\theta$ being the angle from the beam direction and
\begin{equation}
G(\hat{r}, z)= \frac{\hat{r} z\cos(\hat{r} z)- \sin(\hat{r} z)
}{(\hat{r} z)^3}~.
\label{gdef}
\end{equation}
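In practice, the integrals above must be evaluated numerically. As an illustration, the sketch below computes $\phi(\hat r)$ by straightforward quadrature, truncating the $z$ integral at a large cutoff (sufficient here since the non-oscillatory part of the integrand falls off as $1/z^3$), and checks the limits $\phi(0)=0$ and $\phi(\hat r\to\infty)\to 1$; the latter reproduces $V_{\rm I}(r\to\infty) = -\alpha T$ in the isotropic case:

```python
import numpy as np

def phi(rhat, zmax=200.0, n=400_001):
    """phi(rhat) = 2 * int_0^inf dz z/(z^2+1)^2 [1 - sin(z rhat)/(z rhat)],
    evaluated with the trapezoid rule on [0, zmax]."""
    z = np.linspace(1e-12, zmax, n)
    # np.sinc(x) = sin(pi x)/(pi x), so sinc(y/pi) = sin(y)/y
    f = z / (z**2 + 1)**2 * (1.0 - np.sinc(z * rhat / np.pi))
    dz = z[1] - z[0]
    return 2.0 * dz * (0.5 * f[0] + f[1:-1].sum() + 0.5 * f[-1])

vals = [phi(r) for r in (0.01, 1.0, 5.0, 20.0)]
print(vals)   # small at small rhat, approaching 1 from below at large rhat
```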
The short range part of $V_{\rm R}$ is based on a
leading order hard-loop perturbation theory calculation presented in Ref.~\cite{Dumitru:2007hy}.
$V_{\rm I}$ is also obtained from a leading order perturbative calculation \cite{Dumitru:2009fy}.
Since these are leading order results, one may wonder about higher order corrections.
One expects that the leading order pQCD calculation would receive large corrections
at low temperatures $(T < 10\,T_c)$ since the running coupling becomes large ($g_s > 1$).
For the coupling used above, $\alpha_s = 0.29$, one finds $g_s = \sqrt{4 \pi \alpha_s} = 1.9$.
This means that the normal scale hierarchy, $g_s T < T$, implicit in the hard-loop resummation
becomes inverted.\footnote{We note that for temperatures
$T > 2\,T_c$ NNLO perturbative calculations of QCD thermodynamics based on hard-thermal-loop
resummation of QCD agree quite well with available lattice data even though $g_s$ is large
\cite{Andersen:2010wu,Andersen:2011sf,Andersen:2010ct,Andersen:2009tc}.}
We therefore need to
supplement the leading order pQCD calculation with a non-perturbative contribution.
For the real part we do this by including a long-range screened linear contribution that is modified
to include an entropy contribution \cite{Dumitru:2007hy}. In the isotropic limit the resulting
form of the real part potential is in good agreement with lattice data for the heavy quark potential
\cite{Kaczmarek:2004gv}. For the imaginary part we currently do not
have non-perturbative input from lattice calculations with which to constrain the long range
part; however, we note that calculations of the real and imaginary parts of the potential
using the AdS/CFT correspondence to calculate the corresponding potential in
the large 't Hooft coupling limit of ${\cal N}=4$ Supersymmetric Yang-Mills yield similar results
to those obtained using perturbative QCD \cite{Noronha:2009ia,Noronha:2009da}.
For more information about the relevant scales and limitations of the current approach
we refer the reader to Sec.~III of Ref.~\cite{Dumitru:2007hy}.
Regarding the relevant length scales, we note that the short range part of the potential is appropriate
for describing wavefunctions which have $1/\langle r \rangle > {\cal O} (m_D)$, while the long range
part is relevant if $1/\langle r \rangle < {\cal O} (m_D)$. Using the form of the real potential
listed above, one finds that the distance scale at which medium effects become large is
roughly given by $r > r_{\rm med} \sim T_c/(2 T)$ fm
corresponding to $r_{\rm med} \sim 0.25$ fm at 2 $T_c$ \cite{Dumitru:2007hy}. Numerically, the isotropic Debye mass
used herein is $m_D \sim 3 p_{\rm hard}$, corresponding to $m_D \sim 1.2$ GeV at $p_{\rm hard} = 2 T_c$. As shown in
Fig.~4 of Ref.~\cite{Dumitru:2007hy}, using
the real part of the potential listed above, the RMS radius of the $J/\psi$ state is approximately
0.8 fm at 2 $T_c$, corresponding to $1/\langle r \rangle \sim$ 250 MeV, which makes the screening
of the long range part of the potential crucially important for fixing the binding energy in this case. For the
case of the $\Upsilon$, one sees also from Fig.~4 of Ref.~\cite{Dumitru:2007hy} that the
RMS radius of the $\Upsilon$ is approximately 0.25 fm, corresponding to $1/\langle r \rangle \sim $
800 MeV. We note importantly that for the $\Upsilon$, due to its relatively small size, the bulk of the medium effect
comes from the temperature dependence of $\lim_{r \rightarrow \infty} V \equiv V_\infty$
(see Fig.~3 of Ref.~\cite{Dumitru:2007hy}). In closing, one finds that for both the $J/\psi$ and
the $\Upsilon$, correct modeling of both the short and long range parts of the potential is
critical for obtaining the temperature dependence of these states. As mentioned above,
here we extend the results in \cite{Dumitru:2007hy} to include the imaginary part of the potential.
We note that one finds that RMS radii of the states are only weakly affected by inclusion of the
imaginary part of the potential, allowing us to use the estimates above as a rough guide for
understanding the relevant scales.
\subsection{Analytic estimate in isotropic case}
\label{subsec:est}
In a recent paper \cite{Dumitru:2010id}, Dumitru made an estimate of the effect of
the imaginary part of the potential on the imaginary part of the binding energy of
a quarkonium state. For this estimate Dumitru assumed a Coulomb wavefunction
for the quarkonium state and computed the expectation value of the imaginary part
of the potential exactly in the case of an isotropic plasma. The result obtained was
\begin{equation}
\label{eq:GammaXi0}
\Gamma(\xi=0) = \frac{T}{\alpha} \frac{m_D^2}{m_Q^2}
\frac{1-(2-\kappa^2)^2 + 4\log\frac{1}{\kappa} }{(1-\kappa^2)^3} ~~~,~~~
\kappa = \frac{1}{\alpha} \frac{m_D}{m_Q}~.
\end{equation}
When plotted in the temperature range between $T_c$ and $3 T_c$ the result above
is approximately linear for both the $J/\psi$ and $\Upsilon$ \cite{Dumitru:2010id}.
For charmonium with $m_Q = 1.3\;{\rm GeV}$ and using the values given for $\alpha$
and $m_D$ in the previous subsection, we obtain a slope consistent with
$\Gamma \propto (0.08\;{\rm GeV})\,T/T_c$ at $T=0.3$ GeV. Similarly,
for bottomonium with $m_Q = 4.7\;{\rm GeV}$ we obtain a slope consistent with $\Gamma \propto
(0.05\;{\rm GeV})\,T/T_c$. We note these here for later comparison with numerical
results presented in the results section.
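Eq.~(\ref{eq:GammaXi0}) is straightforward to evaluate. The sketch below reproduces the quoted slopes by a finite difference around $T = 0.3$ GeV, using the Debye mass prescription of the previous subsection with $p_{\rm hard} = T$ (appropriate in the isotropic limit) and $T_c = 192$ MeV; the precise numbers depend mildly on where the finite difference is taken:

```python
import math

alpha = 0.385
alpha_s = alpha / (4.0 / 3.0)
Tc = 0.192                                  # GeV

def m_debye(T):
    # isotropic LO Debye mass with the (1.4)^2 rescaling (Nc = 3, Nf = 2)
    return 1.4 * math.sqrt(3 * (1 + 2 / 6.0) * 4 * math.pi * alpha_s / 3.0) * T

def gamma_iso(T, mQ):
    """Dumitru's Coulombic-wavefunction estimate of Im E_bind at xi = 0."""
    mD = m_debye(T)
    k = mD / (alpha * mQ)                   # kappa = (1/alpha) (mD/mQ)
    num = 1 - (2 - k**2)**2 + 4 * math.log(1 / k)
    return T / alpha * (mD / mQ)**2 * num / (1 - k**2)**3

def slope(mQ, T=0.3, dT=0.01):
    # d Gamma / d(T/Tc) by central difference around T
    return (gamma_iso(T + dT, mQ) - gamma_iso(T - dT, mQ)) / (2 * dT) * Tc

print(slope(1.3), slope(4.7))   # ~ 0.08 GeV (J/psi), ~ 0.05-0.06 GeV (Upsilon)
```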
\section{Numerical Method}
\label{sec:nummethod}
To determine the wavefunctions of bound quarkonium states, we solve
the Schr\"odinger equation
\begin{eqnarray}
\hat{H} \phi_\upsilon({\bf x}) &=& E_\upsilon \, \phi_\upsilon({\bf
x}) ~, \nonumber \\
\hat{H} &=& -\frac{\nabla^2}{2 m_R} + V({\bf x}) + m_1 + m_2~,
\label{3dSchrodingerEQ}
\end{eqnarray}
on a three-dimensional lattice in coordinate space with the potential
given by $V = V_{\rm R} + i V_{\rm I}$ where the real and imaginary
parts are specified in Eqs.~(\ref{repot}) and (\ref{impot}), respectively.
Here, $m_1$ and $m_2$ are the masses
of the two heavy quarks and $m_R$ is the reduced mass, $m_R = m_1
m_2/(m_1+m_2)$. The index $\upsilon$ on the eigenfunctions,
$\phi_\upsilon$, and energies, $E_\upsilon$, represents a list of all
relevant quantum numbers, such as\ $n$, $l$, and $m$ for a radial Coulomb
potential. Due to the anisotropic screening scale, the wavefunctions
are no longer radially symmetric if $\xi \neq 0$. Since we consider
only small anisotropies we nevertheless label the states as $1S$
(ground state) and $1P$ (first excited state), respectively.
To find solutions to Eq.~(\ref{3dSchrodingerEQ}), we use the finite
difference time domain method (FDTD)~\cite{Sudiarta:2007,Strickland:2009ft}. In this
method we start with the time-dependent Schr\"odinger equation
\begin{equation}
i \frac{\partial}{\partial t} \psi({\bf x},t) = \hat H \psi({\bf x},t) \, ,
\label{3dSchrodingerEQminkowski}
\end{equation}
which can be solved by expanding in terms of the eigenfunctions,
$\phi_\upsilon$:
\begin{equation} \psi({\bf x},t) = \sum_\upsilon c_\upsilon \phi_\upsilon({\bf x})
e^{- i E_\upsilon t}~.
\label{eigenfunctionExpansionMinkowski}
\end{equation}
If one is only interested in the lowest energy states (ground state
and first few excited states) an efficient way to proceed is to
transform~(\ref{3dSchrodingerEQminkowski})
and~(\ref{eigenfunctionExpansionMinkowski}) to Euclidean time using a
Wick rotation, $\tau \equiv i t$:
\begin{equation} \frac{\partial}{\partial \tau} \psi({\bf x},\tau) = - \hat H
\psi({\bf x},\tau) \, ,
\label{3dSchrodingerEQeuclidean}
\end{equation}
and
\begin{equation} \psi({\bf x},\tau) = \sum_\upsilon c_\upsilon \phi_\upsilon({\bf
x}) e^{- E_\upsilon \tau} ~.
\label{eigenfunctionExpansionEuclidean}
\end{equation}
For details of the discretizations used, we refer the reader
to Refs.~\cite{Strickland:2009ft,Sudiarta:2007}.
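To illustrate the method, the sketch below applies the imaginary-time evolution (\ref{3dSchrodingerEQeuclidean}) to a one-dimensional harmonic oscillator with $\hbar = m = \omega = 1$, a toy stand-in rather than the full 3d quarkonium solver: explicit Euler stepping with $\Delta\tau \propto a^2$, a random initial wavefunction, and the energy extracted from the expectation value $\langle\psi|\hat H|\psi\rangle/\langle\psi|\psi\rangle$:

```python
import numpy as np

def imag_time_ground_state(V, L=10.0, n=128, tau_max=15.0, seed=0):
    """Wick-rotated Schrodinger evolution d psi/d tau = -H psi on a 1D grid.
    Excited states damp as exp(-E_v tau), leaving the ground state."""
    x = np.linspace(-L / 2, L / 2, n)
    a = x[1] - x[0]
    dtau = a**2 / 8.0                       # same scaling as in the text
    v = V(x)
    rng = np.random.default_rng(seed)
    psi = rng.standard_normal(n)            # random start overlaps all states

    def H(psi):
        lap = np.zeros_like(psi)
        lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / a**2
        return -0.5 * lap + v * psi         # hbar = m = 1

    for _ in range(int(tau_max / dtau)):
        psi = psi - dtau * H(psi)           # explicit Euler step in tau
        psi /= np.sqrt(np.sum(psi**2) * a)  # keep the norm from decaying

    E0 = np.sum(psi * H(psi)) * a           # <psi|H|psi> with <psi|psi> = 1
    return E0, x, psi

E0, x, psi = imag_time_ground_state(lambda x: 0.5 * x**2)
print(E0)    # exact harmonic-oscillator ground-state energy is 1/2
```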
\subsection{Finding the ground state}
By definition, the ground state is the state with the lowest energy
eigenvalue, $E_0$. Therefore, at late imaginary time the sum
over eigenfunctions (\ref{eigenfunctionExpansionEuclidean}) is
dominated by the ground state eigenfunction
\begin{equation} \lim_{\tau \rightarrow \infty} \psi({\bf x},\tau) \rightarrow c_0
\phi_0({\bf x}) e^{- E_0 \tau}~.
\label{groundstateEuclideanLateTime}
\end{equation}
Due to this, one can obtain the ground state wavefunction,
$\phi_0$, and energy, $E_0$, by solving
Eq.~(\ref{3dSchrodingerEQeuclidean}) starting from a random
three-dimensional wavefunction, $\psi_{\text{initial}}({\bf x},0)$,
and evolving forward in imaginary time. This initial wavefunction
should have a nonzero overlap with all eigenfunctions of the
Hamiltonian; however, due to the damping of higher-energy
eigenfunctions at sufficiently late imaginary times we are left with
only the ground state, $\phi_0({\bf x})$. Once the ground state
wavefunction (or any other wavefunction) is found, we can
compute its energy eigenvalue via
\begin{eqnarray}
E_\upsilon(\tau\to\infty) = \frac{\langle \phi_\upsilon | \hat{H} |
\phi_\upsilon \rangle}{\langle \phi_\upsilon | \phi_\upsilon
\rangle} = \frac{\int d^3{\bf x} \, \phi_\upsilon^*
\, \hat{H} \, \phi_\upsilon }{\int d^3{\bf x} \, \phi_\upsilon^*
\phi_\upsilon} \; .
\label{bsenergy}
\end{eqnarray}
To obtain the binding energy of a state,
$E_{\upsilon,\text{bind}}$, we subtract the quark masses and
the real part of the potential at infinity
\begin{equation}
E_{\upsilon,\text{bind}} \equiv E_\upsilon - m_1 - m_2 -
\frac{\langle \phi_\upsilon | {\rm Re}[V(\theta,|{\bf r}|\to\infty)] | \phi_\upsilon
\rangle}{\langle \phi_\upsilon | \phi_\upsilon \rangle} \; .
\label{bsbindingenergy}
\end{equation}
For the isotropic KMS potential the last term is independent of the
quantum numbers $\upsilon$ and equal to $2\sigma/m_D$. In the
anisotropic case, however, this is no longer true since the operator
$V_\infty(\theta)$ carries angular dependence, as discussed
above. Its expectation value is, of course, independent of $\theta$ but
does depend on the anisotropy parameter $\xi$.
\subsection{Finding the excited states}
The basic method for finding excited states is to first evolve the
initially random wavefunction to large imaginary times, find the
ground state wavefunction, $\phi_0$, and then project this state out
from the initial wavefunction and re-evolve the partial-differential
equation in imaginary time. However, there are (at least) two more
efficient ways to accomplish this. The first is to record snapshots of
the 3d wavefunction at a specified interval $\tau_{\text{snapshot}}$
during a single evolution in $\tau$. After having obtained the ground
state wavefunction, one can go back and extract the excited
states by projecting out the ground state wavefunction from the
recorded snapshots of $\psi({\bf x},\tau)$.
An alternative way to select different excited states is to impose a
symmetry condition on the initially random wavefunction which cannot
be broken by the Hamiltonian evolution. For example, one can select
the first excited state of the (anisotropic) potential by
anti-symmetrizing the initial wavefunction around either the $x$, $y$,
or $z$ axes. In the anisotropic case this trick can be used to
separate the different polarizations of the first excited state of the
quarkonium system and to determine their energy eigenvalues with high
precision. This high precision allows one to more accurately
determine the splitting between polarization states which are
otherwise degenerate in the isotropic Debye-Coulomb potential.
Whichever method is used, once the wavefunction of an excited state
has been determined one can again use the general
formulas~(\ref{bsenergy}) and~(\ref{bsbindingenergy}) to determine
the excited state binding energy. For code benchmarks and tests see
App.~\ref{app:bench}.
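The symmetry trick can be demonstrated in the same one-dimensional toy setting as before (a harmonic oscillator with $\hbar=m=\omega=1$, not the quarkonium potential): antisymmetrizing the random initial wavefunction about $x=0$ removes all even-parity states, and the imaginary-time evolution, which commutes with parity for an even potential, then converges to the first excited state:

```python
import numpy as np

L, n = 10.0, 128
x = np.linspace(-L / 2, L / 2, n)
a = x[1] - x[0]
dtau = a**2 / 8.0
v = 0.5 * x**2                              # harmonic oscillator, toy stand-in

def H(psi):
    lap = np.zeros_like(psi)
    lap[1:-1] = (psi[2:] - 2 * psi[1:-1] + psi[:-2]) / a**2
    return -0.5 * lap + v * psi

rng = np.random.default_rng(0)
psi = rng.standard_normal(n)
psi = psi - psi[::-1]                       # odd under x -> -x: no ground-state overlap

for _ in range(int(15.0 / dtau)):
    psi = psi - dtau * H(psi)
    psi = 0.5 * (psi - psi[::-1])           # re-antisymmetrize against round-off leakage
    psi /= np.sqrt(np.sum(psi**2) * a)

E1 = np.sum(psi * H(psi)) * a
print(E1)    # exact first-excited-state energy is 3/2
```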
\begin{figure}[t]
\vspace{1mm}
\includegraphics[width=16cm]{charmonium.eps}
\caption{Real and imaginary parts of the charmonium ground state ($J/\psi$) binding energy as a function of $p_{\rm hard}$.
Both isotropic $\xi=0$ and anisotropic $\xi=1$ cases are shown. The left panel shows the full temperature range and the right
panel focuses on the region where the real and imaginary parts become comparable.
See text for parameters such
as lattice size, lattice spacing, etc.}
\label{fig:charmonium}
\end{figure}
\begin{figure}[t]
\vspace{1mm}
\includegraphics[width=11cm]{bottomonium.eps}
\caption{Real and imaginary parts of the bottomonium ground state ($\Upsilon$) binding energy as a function of $p_{\rm hard}$.
Both isotropic $\xi=0$ and anisotropic $\xi=1$ cases are shown. See text for parameters such
as lattice size, lattice spacing, etc.}
\label{fig:bottomonium}
\end{figure}
\begin{figure}[t]
\vspace{1mm}
\includegraphics[width=11cm]{chib.eps}
\caption{Real and imaginary parts of $\chi_b$ binding energy as a function of $p_{\rm hard}$.
Both isotropic $\xi=0$ and anisotropic $\xi=1$ cases are shown. See text for parameters such
as lattice size, lattice spacing, etc.}
\label{fig:chib}
\end{figure}
\section{Results and Discussion}
\label{sec:results}
In this section we present results for isotropic ($\xi=0$) and anisotropic ($\xi=1$) binding
energies for charmonium ($J/\psi$), bottomonium ($\Upsilon$), and the first excited state of
bottomonium ($\chi_b$) as a function of the hard momentum scale $p_{\rm hard}$.
We will first assume that $p_{\rm hard}$ is held constant and vary the anisotropy parameter.
Note that increasing $\xi$ results in a
decrease in the number density since $n \propto p_{\rm hard}^3/\sqrt{1+\xi}$ \cite{Dumitru:2007hy}.
This reduced density results in less
Debye screening and thus a more strongly bound state. We therefore expect that states
with large anisotropy will have increased binding energies compared to the isotropic states.
One could imagine holding another thermodynamic property such as the number density or energy
density constant as one changes the anisotropy parameter. We will return to this issue at the end of
this section and show that the results in these cases can be obtained from a simple rescaling of the results
presented below. In all plots shown, we assume $T_c = 192$ MeV and fix the imaginary-time step
in the numerical algorithm to be $\Delta\tau = a^2/8$ where $a$ is the spatial lattice spacing.
\subsection{Results as a function of the hard momentum scale}
\label{sec:fixedphard}
In Fig.~\ref{fig:charmonium} we plot the binding energy of the charmonium ground state ($J/\psi$)
as a function of $p_{\rm hard}$. For this figure, we used a lattice
size of $256^3$ with lattice dimension of $L = 25.6$ GeV$^{-1}$ and a lattice spacing of
$a = 0.1$ GeV$^{-1}$. For the charmonium mass, we used $m_c = 1.3$ GeV.
In Fig.~\ref{fig:charmonium} we show both the real part (black
line with filled circles) and the imaginary part (red line with filled squares) of the
isotropic ground state binding energy. Comparing these two curves, we see that the imaginary part of the binding energy
becomes comparable to the real part at $p_{\rm hard} \sim 1.63\,T_c$. In contrast, in the anisotropic case ($\xi=1$)
we find that the intersection between the imaginary (blue line with open triangles)
and real parts (green line with open diamonds) occurs at
$p_{\rm hard} \sim 1.88\,T_c$. In the range between 1 and 3 $T_c$
we obtain a slope of $4.9\times10^{-2}$ GeV for the imaginary part of the binding energy when
$\xi=0$ and $6.4\times10^{-2}$ GeV when $\xi=1$. In the isotropic case Dumitru's
perturbative calculation \cite{Dumitru:2010id} gives a slope of $8\times10^{-2}$ GeV.
Our method does not rely on an expansion around Coulombic wavefunctions, since we solve
for the full wavefunctions numerically, so one should not
be surprised to see some quantitative differences.
In Fig.~\ref{fig:bottomonium} we plot the binding energy of the
bottomonium ground state ($\Upsilon$) as a function of $p_{\rm hard}$.
For this figure, we used a lattice
size of $256^3$ with lattice dimension of $L = 25.6$ GeV$^{-1}$ and a lattice spacing of
$a = 0.1$ GeV$^{-1}$. For the bottomonium mass, we used $m_b = 4.7$ GeV.
In Fig.~\ref{fig:bottomonium} we show both the real part (black
line with filled circles) and the imaginary part (red line with filled squares) of
the isotropic ground state binding energy. When $\xi=0$, we see that the imaginary part of the binding energy
becomes comparable to the real part
at $p_{\rm hard} \sim 2.8\,T_c$. In the anisotropic case ($\xi=1$)
we find that the intersection between the imaginary (blue line with open triangles)
and real parts (green line with open diamonds) occurs at approximately
$3.5\,T_c$. For $\xi=0$, in the range between 1 and 4 $T_c$
we obtain a slope of $2.8\times10^{-2}$ GeV for the imaginary part of the binding energy.
In the anisotropic case ($\xi=1$) we find a slope of $4.2\times10^{-2}$ GeV. We
can once again compare to the analytic result of Dumitru \cite{Dumitru:2010id}
which gives an isotropic slope of $5\times10^{-2}$ GeV for the $\Upsilon$. Once
again, the numbers are roughly in agreement.
In Fig.~\ref{fig:chib} we plot the binding energy of the first p-wave excited state of
bottomonium ($\chi_b$) as a function of $p_{\rm hard}$. For this figure we used a lattice
size of $256^3$ with lattice dimension of $L =38.4$ GeV$^{-1}$ and a lattice spacing of
$a = 0.15$ GeV$^{-1}$. For the bottomonium mass, we used $m_b = 4.7$ GeV.
As was the case with the bottomonium ground state we see an increase in the real
part of the binding energy with increasing anisotropy.
Most importantly, we find that there is an approximately 60 MeV
splitting between the $L_z = 0$ and $L_z = \pm 1$ states, with the $L_z = \pm 1$
states having the lower binding energy. We would therefore expect fewer
$L_z = \pm 1$ states of the $\chi_b$ to be produced in an anisotropic plasma. Determining
precisely how many fewer would be produced requires knowledge of the time evolution of the
momentum scale $p_{\rm hard}$ and anisotropy $\xi$.
\subsection{Fixing number density or energy density}
\begin{figure}[t]
\vspace{1mm}
\includegraphics[width=11cm]{charmonium-nd.eps}
\caption{Real and imaginary parts of the charmonium ground state ($J/\psi$) binding energy
as a function of temperature assuming fixed number density.
Both isotropic $\xi=0$ and anisotropic $\xi=1$ cases are shown.
See text for parameters such
as lattice size, lattice spacing, etc.}
\label{fig:charmonium-nd}
\end{figure}
\begin{figure}[t]
\vspace{1mm}
\includegraphics[width=11cm]{bottomonium-nd.eps}
\caption{Real and imaginary parts of bottomonium ground state ($\Upsilon$) binding energy as a function of temperature
assuming fixed number density.
Both isotropic $\xi=0$ and anisotropic $\xi=1$ cases are shown. See text for parameters such
as lattice size, lattice spacing, etc.}
\label{fig:bottomonium-nd}
\end{figure}
\begin{figure}[t]
\vspace{1mm}
\includegraphics[width=11cm]{chib-nd.eps}
\caption{Real and imaginary parts of the $\chi_b$ binding energy as a function of temperature
assuming fixed number density.
Both isotropic $\xi=0$ and anisotropic $\xi=1$ cases are shown. See text for parameters such
as lattice size, lattice spacing, etc.}
\label{fig:chib-nd}
\end{figure}
As mentioned at the beginning of this section, when one is working in a non-equilibrium setting
it is necessary to specify which quantities are held fixed. In equilibrium, it is sufficient
to specify the temperature. The temperature then uniquely determines the number density, energy
density, etc. In the previous subsection we presented results obtained when one
holds the hard momentum scale $p_{\rm hard}$ fixed while varying the anisotropy
parameter $\xi$. Doing so, however, results in different number densities and energy
densities for different anisotropies ($\xi$). Here we discuss how to fix either the number
or energy density by adjusting $p_{\rm hard}$ appropriately. We first demonstrate this in the case
of the number density and show that for small anisotropy the scalings required
to fix the number density or energy density are practically identical. We then present results
for the binding energies of the states we are interested in for the case of fixed number
density, since in this paper we concentrate on anisotropies which are small enough
that the difference between the cases of fixed number density and fixed energy density
is numerically very small.
The number density as a function of $\xi$ and $p_{\rm hard}$
can be calculated for an arbitrary isotropic distribution $f_{\rm iso}$
\cite{Romatschke:2004jh}
\begin{equation}
n(\xi,p_{\rm hard}) = \frac{n_{\rm iso}(p_{\rm hard})}{\sqrt{1+\xi}} \, ,
\label{eq:number}
\end{equation}
where $n_{\rm iso}$ is the number density associated with the isotropic
distribution function $f_{\rm iso}$ via
\begin{equation}
n_{\rm iso}(p_{\rm hard}) = \int \frac{d^3p}{(2\pi)^3} f_{\rm iso}(|{\bf p}|,p_{\rm hard}) \, .
\end{equation}
Since $n_{\rm iso}$ only contains one dimensionful scale, by dimensional analysis we have
$n_{\rm iso} \propto p_{\rm hard}^3$. In order to keep the number density
(\ref{eq:number}) fixed as one changes $\xi$, one can adjust $p_{\rm hard}$ by requiring
\begin{equation}
p_{\rm hard} = (1+\xi)^{1/6} \, T \;\;\;\;\; {\rm[fixed\;number\;density]} \; ,
\label{eq:scalen}
\end{equation}
where $T$ is the corresponding isotropic scale (temperature) which gives the target
number density when $\xi=0$, i.e. $n_{\rm iso}(T)$.
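As a small numerical illustration of (\ref{eq:number}) and (\ref{eq:scalen}) (a sketch, not part of the derivation): if one assumes the standard ``squashed'' anisotropic distribution $f({\bf p})=f_{\rm iso}\big(\sqrt{{\bf p}^2+\xi p_z^2}\big)$ of \cite{Romatschke:2004jh} with, for concreteness, a Boltzmann $f_{\rm iso}$, the radial integral factorizes and $n(\xi,p_{\rm hard})/n_{\rm iso}(p_{\rm hard})$ reduces to an angular integral that can be evaluated by simple quadrature (helper names are ours):

```python
import math

def n_angular_ratio(xi, m=2000):
    # after the (factorizing) radial integral, n(xi)/n_iso is the integral
    # of (1 + xi c^2)^(-3/2) over c = cos(theta) in [0, 1] (Simpson rule)
    g = lambda c: (1.0 + xi * c * c) ** -1.5
    h = 1.0 / m
    s = g(0.0) + g(1.0)
    s += 4.0 * sum(g((2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2.0 * sum(g(2 * k * h) for k in range(1, m // 2))
    return s * h / 3.0

xi = 1.0
# Eq. (number): n(xi, p_hard) = n_iso(p_hard) / sqrt(1 + xi)
assert abs(n_angular_ratio(xi) - 1.0 / math.sqrt(1.0 + xi)) < 1e-8
# Eq. (scalen): p_hard -> (1 + xi)^(1/6) T rescales n_iso by (1 + xi)^(1/2),
# which exactly cancels the anisotropy suppression of the number density
assert abs((1.0 + xi) ** 0.5 * n_angular_ratio(xi) - 1.0) < 1e-8
```

Note that the $1/\sqrt{1+\xi}$ in (\ref{eq:number}) is independent of the choice of $f_{\rm iso}$; the Boltzmann form enters only to factorize the radial integral.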
Similarly, the energy density as a function of $\xi$ and $p_{\rm hard}$
can be calculated for an arbitrary isotropic distribution $f_{\rm iso}$ \cite{Martinez:2008di}
\begin{equation}
{\cal E}(\xi,p_{\rm hard}) = {\cal R}(\xi) {\cal E}_{\rm iso}(p_{\rm hard}) \, ,
\label{eq:energy}
\end{equation}
where ${\cal E}_{\rm iso}$ is the energy density associated with the isotropic
distribution function $f_{\rm iso}$ and
\begin{equation}
{\cal R}(\xi) = \frac{1}{2}\left(\frac{1}{1+\xi}
+\frac{\arctan\sqrt{\xi}}{\sqrt{\xi}} \right) \, .
\end{equation}
Since ${\cal E}_{\rm iso}$ only contains one dimensionful scale, by dimensional analysis
we have ${\cal E}_{\rm iso} \propto p_{\rm hard}^4$ and we can fix the energy density
to the corresponding isotropic energy density with scale $T$ by requiring
\begin{equation}
p_{\rm hard} =T/ [{\cal R}(\xi)]^{1/4} \;\;\;\;\; {\rm[fixed\;energy\;density]} \; .
\label{eq:scalee}
\end{equation}
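The factor ${\cal R}(\xi)$ in (\ref{eq:energy}) can be checked in the same spirit (a sketch; we assume the squashed distribution $f({\bf p})=f_{\rm iso}\big(\sqrt{{\bf p}^2+\xi p_z^2}\big)$ of \cite{Romatschke:2004jh} with a Boltzmann $f_{\rm iso}$, so that after the radial integral ${\cal E}(\xi)/{\cal E}_{\rm iso}$ becomes the angular integral of $(1+\xi c^2)^{-2}$; helper names are ours):

```python
import math

def calR(xi):
    # the function R(xi) defined below Eq. (energy)
    return 0.5 * (1.0 / (1.0 + xi) + math.atan(math.sqrt(xi)) / math.sqrt(xi))

def e_angular_ratio(xi, m=2000):
    # after the radial integral, E(xi)/E_iso is the integral of
    # (1 + xi c^2)^(-2) over c = cos(theta) in [0, 1] (Simpson rule)
    g = lambda c: (1.0 + xi * c * c) ** -2.0
    h = 1.0 / m
    s = g(0.0) + g(1.0)
    s += 4.0 * sum(g((2 * k - 1) * h) for k in range(1, m // 2 + 1))
    s += 2.0 * sum(g(2 * k * h) for k in range(1, m // 2))
    return s * h / 3.0

for xi in (0.25, 1.0, 4.0):
    assert abs(e_angular_ratio(xi) - calR(xi)) < 1e-8
```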
The scalings for fixed number density (\ref{eq:scalen}) and fixed energy density (\ref{eq:scalee})
are different; however, in the limit of small anisotropies the scalings are very close.
Expanding to quadratic order, one finds
\begin{subequations}
\begin{align}
\label{eq:nsxi}
\frac{p_{\rm hard}}{T} &= 1+\frac{1}{6}\xi - \frac{5}{72} \xi^2 + {\cal O}(\xi^3) &{\rm[fixed\;number\;density]} \; , \\
\label{eq:esxi}
\frac{p_{\rm hard}}{T} &= 1+\frac{1}{6}\xi - \frac{29}{360} \xi^2 + {\cal O}(\xi^3) &{\rm[fixed\;energy\;density]} \; ,
\end{align}
\end{subequations}
which agree at linear order and differ by 7.4\% in the quadratic coefficient. One finds that, when
including all orders in the expansion, the right hand sides of (\ref{eq:nsxi}) and (\ref{eq:esxi})
differ by only 0.25\% at $\xi=1$. Therefore, for the range of anisotropies considered here, the two scalings are
functionally equivalent. We will therefore only present results for fixed number density with the
understanding that the fixed energy density results are indistinguishable by the human eye.
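The quadratic coefficients and the sub-percent agreement of the two scalings at $\xi=1$ can be checked directly from (\ref{eq:scalen}) and (\ref{eq:scalee}); a minimal numerical sketch (helper names are ours; the second-difference stencil is one-sided because ${\cal R}(\xi)$ is a $0/0$ expression at $\xi=0$):

```python
import math

def scale_n(xi):
    # Eq. (scalen): p_hard / T at fixed number density
    return (1.0 + xi) ** (1.0 / 6.0)

def scale_e(xi):
    # Eq. (scalee): p_hard / T at fixed energy density
    R = 0.5 * (1.0 / (1.0 + xi) + math.atan(math.sqrt(xi)) / math.sqrt(xi))
    return R ** -0.25

def quad_coeff(f, h=1e-3):
    # coefficient of xi^2 about xi = 0 from a one-sided O(h^2) second
    # difference, evaluated at xi = h to stay away from the singular point
    d2 = (2 * f(h) - 5 * f(2 * h) + 4 * f(3 * h) - f(4 * h)) / h ** 2
    return 0.5 * d2

assert abs(quad_coeff(scale_n) - (-5.0 / 72.0)) < 1e-3
assert abs(quad_coeff(scale_e) - (-29.0 / 360.0)) < 1e-3
# at xi = 1 the two scalings agree to better than a percent in total
spread = scale_n(1.0) / scale_e(1.0) - 1.0
assert 0.0 < spread < 0.01
```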
In Figs.~\ref{fig:charmonium-nd}, \ref{fig:bottomonium-nd}, and \ref{fig:chib-nd}, we show
the binding energies which result from the fixed number density rescaling of the horizontal axes of
Figs.~\ref{fig:charmonium}, \ref{fig:bottomonium}, and \ref{fig:chib}. As can be seen from
Figs.~\ref{fig:charmonium-nd}, \ref{fig:bottomonium-nd}, and \ref{fig:chib-nd}, requiring
fixed number/energy density weakens the effect of anisotropies on the ground state binding
energies. In the case of the ground states of charmonium and bottomonium shown in
Figs.~\ref{fig:charmonium-nd} and \ref{fig:bottomonium-nd} we find that the splitting
between the $\xi=0$ and $\xi=1$ cases at the critical temperature is approximately 50 MeV
in both cases.
Finally, we emphasize that in the case of the first excited states of bottomonium shown in
Fig.~\ref{fig:chib-nd} the splitting between the $L_z=0$ and $L_z=\pm1$ states is unaffected
by the rescaling since we have $\xi=1$ for both states. Therefore, one has a relatively clean observable
that is sensitive to plasma anisotropies regardless of the quantity which is assumed to be
held fixed.
\section{Conclusions}
\label{sec:conc}
In this paper we have presented first results on the effect of including both the real and imaginary parts
of the heavy quarkonium potential on the binding energies of the charmonium ground state ($J/\psi$),
the bottomonium ground state ($\Upsilon$), and the first $p$-wave excited state of bottomonium ($\chi_b$).
We did this by numerically solving the
three-dimensional Schr\"odinger equation for the complex potential given by Eqs.~(\ref{repot})
and (\ref{impot}). This enabled us to extract both the real and imaginary parts of the binding
energies for the states. Using our model potential, we investigated both isotropic and weakly
anisotropic plasmas. We found that there can be a sizable effect of momentum-space
anisotropy on both the real and imaginary parts of the quarkonium binding energy. One can
estimate the disassociation temperature of the states by determining the temperature at which
the real and imaginary parts of the binding energy become the same. Using this criterion,
in the isotropic case we estimate the $J/\psi$, $\Upsilon$ and $\chi_b$ to have disassociation
temperatures of 1.6 $T_c$, 2.8 $T_c$, and 1.5 $T_c$, respectively. We note, however,
that even prior to these disassociation temperatures the states will be suppressed
due to the exponential decay of the states with a rate related to the imaginary part
of the binding energy. We plan to investigate the phenomenological impact of our results
on the time evolution of quarkonium decay in a future publication.
In the case of a plasma with a finite momentum-space anisotropy, we presented results
for both fixed hard momentum scale and fixed number density. Our results demonstrate that
the corresponding anisotropic states have a higher binding energy in accordance with previous results that
employed only the real part of the quarkonium potential used herein \cite{Dumitru:2009ni}.
We showed that, for small anisotropy, fixing the number density and fixing the energy
density give results which agree to within a fraction of a percent.
We demonstrated that fixing the number density reduces the effect of anisotropy compared
to the case of fixing the hard momentum scale, but does not completely remove the effect
of momentum-space anisotropy on the binding energies. Finally, we emphasized the importance
of the finite-anisotropy splitting between the $\chi_b$ states with $L_z =0$ and $L_z=\pm1$.
This splitting is independent of whether one fixes the hard momentum scale, number density, or
energy density. Therefore, this splitting represents a possible observable which could be used
to determine the time-averaged plasma anisotropy parameter.
Looking forward, to fully assess the phenomenological impact of plasma momentum-space
anisotropies on quarkonium states requires the convolution of the results presented here
with the space-time evolution
of the hard momentum scale and anisotropy parameter. A method for determining the
dynamical evolution of these parameters has recently been determined \cite{Martinez:2010sc,%
Martinez:2010sd}. In addition, since these works show that $\xi$ can become large, it
will be necessary to investigate the effect of large anisotropies on quarkonium binding
energies. The calculations necessary to address these questions are currently underway.
\section*{Acknowledgments}
We thank A.~Dumitru for discussions. K. McCarty was supported during the summer of 2010
by the Cormack Fund. M. Strickland was supported in part by the Helmholtz
International Center for FAIR Landesoffensive zur Entwicklung Wissenschaftlich-\"Okonomischer
Exzellenz program.
\section*{Note added}
In arXiv versions 1-3 there was a mistake in the final results for the imaginary part
of the binding energies. The mistake stems from the fact that we had
subtracted the full complex-valued potential at infinity $V_\infty$; however,
formally only the real part of $V_\infty$ should be subtracted since the imaginary part of
$V_\infty$ is related to heavy quark damping in the plasma which is physically relevant. As a
consequence all imaginary parts of the binding energies are changed and we have updated
all figures. While the results are qualitatively similar, the key change is that the imaginary
part of the binding energy now has a stronger dependence on the anisotropy parameter,
$\xi$, in most cases.
\section{Introduction}
In this paper let $(X,\omega)$ be a pair of a rational surface $X$ and an ample line bundle $\omega$.
We consider the moduli spaces
$M^X_\omega(c_1,d)$ of $\omega$-semistable torsion-free coherent sheaves of rank $2$ on $X$ with Chern classes $c_1\in H^2(X,{\mathbb Z})$ and $c_2$ such that $d=4c_2-c_1^2$.
Associated to a line bundle $L$ on $X$ there is a determinant bundle $\mu(L)\in\operatorname{Pic}(M^X_\omega(c_1,d))$. If $L$ is ample, then $\mu(L)$ is nef and big
on $M^X_\omega(c_1,d)$, and a suitable power induces the map from $M^X_\omega(c_1,d)$ to the corresponding Uhlenbeck compactification.
If one considers instead of a rational surface $X$ a curve $C$, the spaces of sections of the corresponding determinant bundles are the spaces of conformal blocks, and
their dimensions are given by the celebrated Verlinde formula. In \cite{Zag} many reformulations of this formula are given. In particular
\cite[Thm.~1.(vi)]{Zag} expresses the generating function on a fixed curve as a rational function.
In this paper we study the generating functions of the holomorphic Euler characteristics $\chi(M^X_\omega(c_1,d),\mu(L))$, and show that they are given as rational functions.
Let
$$\chi_{c_1}^{X,\omega}(L):=\sum_{d>0}\chi(M^X_\omega(c_1,d),\mu(L))\Lambda^d$$
(In case $c_1=0$ the coefficient of $\Lambda^4$ is slightly different; furthermore, in case $\omega$ lies on a wall (see below), instead of $\chi(M^X_\omega(c_1,d),\mu(L))$
we use the average over the chambers adjacent to $\omega$.) We can view the spaces of sections $H^0(M^X_\omega(c_1,d),\mu(L))$ as analogues of the spaces of conformal blocks. In most of the cases we will consider (see \propref{highvan} below), the higher cohomology groups of the determinant bundle
$\mu(L)$ vanish. Thus our formulas for the $\chi_{c_1}^{X,\omega}(L)$ are analogues of the Verlinde formula for rational surfaces.
\begin{Notation}
For two Laurent series $P(\Lambda)=\sum_{n} a_n\Lambda^n,Q(\Lambda)=\sum_{n} b_n\Lambda^n\in {\mathbb Q}[\Lambda^{-1}][[\Lambda]]$ we write $P(\Lambda)\equiv Q(\Lambda)$ if
there is an $n_0\in {\mathbb Z}$ with $a_n=b_n$ for all $n\ge n_0$.
\end{Notation}
\begin{Theorem}
\label{rationalal}
Let $X$ be ${\mathbb P}^2$, ${\mathbb P}^1\times {\mathbb P}^1$ or a blowup of ${\mathbb P}^2$ in $n$ points.
Let $c_1\in H^2(X,{\mathbb Z})$, $L\in Pic(X)$.
There is a polynomial $P^{X}_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Q}[\Lambda^{\pm 4}]$ and $l^{X}_{c_1,L}\in {\mathbb Z}_{\ge 0}$, such that
$$\chi_{c_1}^{X,\omega}(L)\equiv\frac{P^X_{c_1,L}(\Lambda)}{(1-\Lambda^4)^{l^X_{c_1,L}}}.$$
Here $\omega$ is an ample line bundle on $X$ with $\<\omega,K_X\><0$. In case $X$ is the blowup of ${\mathbb P}^2$ in $n$ points we assume furthermore that $\omega=H-a_1E_1-\ldots-a_nE_n$, with $|a_i|<\frac{1}{\sqrt{n}}$ for all $i$.
Note that $P^{X}_{c_1,L}(\Lambda)$, $l^{X}_{c_1,L}$ are independent of $\omega$ (subject to the conditions above).
In particular for any other ample line bundle $\omega'$ on $X$ satisfying the conditions for $\omega$ above, we have
$\chi_{c_1}^{X,\omega'}(L)-\chi_{c_1}^{X,\omega}(L)\in {\mathbb Z}[\Lambda]$.
\end{Theorem}
We will see that there is an algorithm for determining the generating functions $\chi_{c_1}^{X,\omega}(L)$ of \thmref{rationalal}.
Let now $H$ be the hyperplane bundle on ${\mathbb P}^2$.
We apply the algorithm above to determine the generating functions of the
$\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))$ and the $\chi(M^{{\mathbb P}^2}_H(H,d),\mu(nH))$ for $n\le 11$. These were determined before (and strange duality proven) for $c_1=0$ and $n=1,2$ in \cite{Abe}, and for all $c_1$ for $n=1,2,3$ in \cite{GY}.
We get the following result.
Put
{\small \begin{align*}&p_1(t)=p_2(t)=1,\ p_3(t)=1+t^2,p_4(t)=1+6t^2+t^3,
p_5(t)=1+21t^2+20t^3+21t^4+t^6, \\
&p_6(t)=1+56t^2+147t^3+378t^4+266t^5+148t^6+27t^7+t^8,\\
&p_7(t)=1+126t^2+690t^3+3435t^4+7182t^5+9900t^6 +7182t^7+3435t^8+690t^9+126t^{10}+t^{12},\\
&p_8(t)=1+252t^2+2475t^3+21165t^4+91608t^5+261768t^6+462384t^7+
549120t^8+417065t^9\\
&\ +210333t^{10}+66168t^{11}
+13222t^{12}+1515t^{13}+75t^{14}+t^{15},\\
&p_9(t)=1 + 462t^2 + 7392t^3 + 100359t^4 + 764484t^5+ 3918420t^6 + 13349556t^7 + 31750136t^8 \\
&\ + 52917800t^9 + 62818236t^{10}
+ 52917800t^{11}+ 31750136t^{12} + 13349556t^{13} + 3918420t^{14}\\
&\ + 764484t^{15}+ 100359t^{16}+7392t^{17}+ 462t^{18}+t^{20},\\
&p_{10}(t)=1+ 792t^2 + 19305t^3 + 393018t^4 + 4788696t^5 + 39997980t^6 + 231274614t^7 + 961535355t^8 \\
&\ + 2922381518t^9 + 6600312300t^{10} + 11171504661t^{11} + 14267039676t^{12} + 13775826120t^{13} \\
&\ + 10059442536t^{14} + 5532629189t^{15} +2277448635t^{16}+693594726t^{17}
+ 154033780t^{18} +24383106t^{19}\\
&\ + 2669778t^{20}+192588t^{21}
+ 8196t^{22}+ 165t^{23}+t^{24}.\\
&p_{11}(t)= 1 + 1287t^2 + 45474t^3 + 1328901t^4 + 24287340t^5 + 309119723t^6
+ 2795330694t^7 \\
&\ + 18571137585t^8 + 92530378876t^9 + 351841388847t^{10} + 1033686093846t^{11}+ 2369046974245t^{12}\\
&\ + 4264149851544t^{13} + 6056384937603t^{14} + 6805690336900t^{15}+ 6056384937603t^{16}+ 4264149851544t^{17} \\
&\ + 2369046974245t^{18} + 1033686093846t^{19} + 351841388847t^{20} + 92530378876t^{21} + 18571137585t^{22} \\
&\ + 2795330694t^{23}+ 309119723t^{24}+ 24287340t^{25}+ 1328901t^{26}+ 45474t^{27}+ 1287t^{28}+t^{30}.
\end{align*}}
For $1\le n\le 11$ we put $P_n(\Lambda):=p_n(\Lambda^4)$, $Q_n(\Lambda)=\Lambda^{n^2-1}P_n(\frac{1}{\Lambda})$.
It is easy to see that for $n$ odd, $P_n$ is symmetric, i.e. $P_n(\Lambda)=Q_n(\Lambda)$.
\begin{Theorem} \label{mainp2} For $1\le n\le 11$ we have
\begin{enumerate}
\item
$$1+\binom{n+2}{2}\Lambda^4+\sum_{d>4}\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))\Lambda^d=\frac{P_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$
\item if $n$ is even, then
$$\sum_{d>0}\chi(M^{{\mathbb P}^2}_H(H,d),\mu(nH))\Lambda^d=\frac{Q_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$
\end{enumerate}
\end{Theorem}
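Individual Euler characteristics can be read off from the closed form in \thmref{mainp2} by expanding the right hand side as a power series; a small sketch for $n=3$ (the helper is ours):

```python
from math import comb

def series_coeffs(p, m, terms):
    # power-series coefficients of p(t) / (1 - t)^m,
    # using (1 - t)^(-m) = sum_j binom(m - 1 + j, j) t^j
    return [sum(p[k] * comb(m - 1 + j - k, j - k)
                for k in range(min(len(p), j + 1)))
            for j in range(terms)]

# n = 3: p_3(t) = 1 + t^2 and binom(n + 2, 2) = 10; the coefficient of t^j
# is the coefficient of Lambda^{4j} in the generating function
coeffs = series_coeffs([1, 0, 1], 10, 5)
assert coeffs[:2] == [1, 10]   # the 1 + binom(n+2,2) Lambda^4 terms
assert coeffs[2] == 56         # part (1) then gives chi(M(0,8), mu(3H)) = 56
```

Running the same expansion with the other polynomials $p_n$ listed above recovers all coefficients $\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))$ for $d>4$.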
We see that for $n\le 11$, the generating functions $\chi^{{\mathbb P}^2,H}_H(nH)$, $\chi^{{\mathbb P}^2,H}_0(nH)$
have a number of interesting features, which we conjecture to hold for all $n>0$.
\begin{Conjecture}\label{p2con}
For all $n>0$ there are polynomials $p_n(t)\in {\mathbb Z}[t]$ such that the following holds. We put $P_n(\Lambda)=p_n(\Lambda^4)$, $Q_n(\Lambda)=\Lambda^{n^2-1}P_n(\frac{1}{\Lambda})$.
\begin{enumerate}
\item $$1+\binom{n+2}{2}\Lambda^4+\sum_{d>4}\chi(M^{{\mathbb P}^2}_H(0,d),\mu(nH))\Lambda^d=\frac{P_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$
\item If $n$ is odd, then
$P_n(\Lambda)=Q_n(\Lambda)$, if $n$ is even then
$$\sum_{d>0}\chi(M^{{\mathbb P}^2}_H(H,d),\mu(nH))\Lambda^d=\frac{Q_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}.$$
\item $p_n(1)=2^{\binom{n-1}{2}}$.
\item For $i$ odd and $i\le n-3$ we have
$$\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-(n^2-1)x/2}P_n(e^x)\big]=\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-(n^2-1)x/2}Q_n(e^x)\big]=0.$$
\item The degree of $p_{n}(t)$ is the largest integer strictly smaller than $n^2/4$.
\end{enumerate}
\end{Conjecture}
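For the polynomials listed above, parts (2), (3), (4) and (5) of \conref{p2con} are elementary polynomial identities and can be verified mechanically; a sketch for $n\le 6$ (the coefficient lists are transcribed from the display above):

```python
from math import comb

# coefficient lists of p_n(t) for n = 1..6, transcribed from the display above
pn = {
    1: [1],
    2: [1],
    3: [1, 0, 1],
    4: [1, 0, 6, 1],
    5: [1, 0, 21, 20, 21, 0, 1],
    6: [1, 0, 56, 147, 378, 266, 148, 27, 1],
}

for n, c in pn.items():
    assert sum(c) == 2 ** comb(n - 1, 2)     # part (3): p_n(1) = 2^binom(n-1,2)
    assert len(c) - 1 == (n * n - 1) // 4    # part (5): deg p_n
    if n % 2 == 1:                           # part (2): P_n = Q_n for n odd
        assert c == c[::-1]
    # part (4): Coeff_{x^i}[e^{-(n^2-1)x/2} P_n(e^x)] = 0 for odd i <= n - 3;
    # doubling the exponents 4k - (n^2-1)/2 keeps everything integral
    # (the Q_n statement follows by x -> -x, so checking P_n suffices)
    e = [8 * k - (n * n - 1) for k in range(len(c))]
    for i in range(1, n - 2, 2):
        assert sum(a * x ** i for a, x in zip(c, e)) == 0
```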
On ${\mathbb P}^1\times {\mathbb P}^1$ we get the following results. Let $F$ and $G$ be the classes of the fibres of the projections to the two factors. Let
{\small \begin{align*}
q^0_1&:=1,\ q^0_{2}:=1+t^2,\ q^0_3=1+10t^2+4t^3+t^4,q^0_4:=1+46t^2 + 104t^3 + 210t^4 + 104t^5 + 46t^6 + t^8,\\ q^0_5&:=1 + 146t^2 + 940t^3 + 5107t^4 + 12372t^5 + 19284t^6+ 16280t^7 + 8547t^8 + 2452t^9 + 386t^{10} + 20t^{11} + t^{12},\\
q^0_6&:=1 + 371t^2 + 5152t^3 + 58556t^4 + 361376t^5 + 1469392t^6 + 3859616t^7 + 6878976t^8 + 8287552t^9 \\&+ 6878976t^{10} + 3859616t^{11} + 1469392t^{12} + 361376t^{13} + 58556t^{14} + 5152t^{15} + 371t^{16} + t^{18},\\
q^0_7&=1+812t^2+ 20840t^3 + 431370t^4+5335368t^5+ 44794932t^6+ 259164216t^7+ 1070840447t^8\\
&+ 3214402272t^9+ 7125238944t^{10}+ 11769293328t^{11}
+ 14581659884t^{12} + 13577211024t^{13}\\
&+9496341984t^{14} + 4966846032t^{15} + 1928398719t^{16} + 548923040t^{17}+ 112654644t^{18}+ 16232904t^{19}\\
&+ 1584906t^{20}+ 97448t^{21}+ 3564t^{22} + 56t^{23}+t^{24},\\
q^{F+G}_{2}&=t^{\frac{1}{2}}(1+t),q^{F+G}_{4}=t^{\frac{1}{2}}(1 + 10t + 84t^2 + 161t^3 + 161t^4+ 84t^5+ 10t^6 + t^7),\\
q^{F+G}_{6}&=t^{\frac{1}{2}}(1+ 35t+ 1296t^2+ 18670t^3 + 154966t^4 + 770266t^5 + 2504382t^6+ 5405972t^7 + 7921628t^8 \\&+ 7921628t^{9} + 5405972t^{10}+ 2504382t^{11} + 770266t^{12}+ 154966t^{13} + 18670t^{14} + 1296t^{15}+ 35t^{16} + t^{17}),\\
q^F_{2}&:=2t,\ q^F_4:=3t + 43t^2 + 105t^3 + 210t^4 + 105t^5 + 43t^6 + 3t^7,\\
q^F_{6}&:=4t + 274t^2 + 5520t^3 + 57022t^4 + 366052t^5 + 1460922t^6 + 3873184t^7 + 6855798t^8 + 8316880t^9 \\&+ 6855798t^{10}+ 3873184t^{11} + 1460922t^{12} + 366052t^{13} + 57022t^{14} + 5520t^{15} + 274t^{16} + 4t^{17}.\\
\end{align*}}
For $n$ odd put $q^{F+G}_n(t)=t^{n^2/2}q^0_n(t^{-1})$.
Then we get
\begin{Theorem}\label{P11gen}
\begin{enumerate}
\item $\displaystyle{\sum_{d>4} \chi(M^{{\mathbb P}^1\times {\mathbb P}^1}_{F+G}(0,d),\mu(nF+nG))\Lambda^d=\frac{q^0_n(\Lambda^4)}{(1-\Lambda^4)^{(n+1)^2}}-1-(n^2+2n+1)\Lambda^4}$ for $1\le n\le 7$.
\item $\displaystyle{\sum_{d>0} \chi(M^{{\mathbb P}^1\times {\mathbb P}^1}_{F+G}(F+G,d),\mu(nF+nG))\Lambda^d=\frac{q^{F+G}_n(\Lambda^4)}{(1-\Lambda^4)^{(n+1)^2}}-\Lambda^2}$ for $1\le n\le 7$.
\item $\displaystyle{\sum_{d>0} \chi(M^{{\mathbb P}^1\times {\mathbb P}^1}_{F+G}(F,d),\mu(nF+nG))\Lambda^d=\frac{q^F_n(\Lambda^4)}{(1-\Lambda^4)^{(n+1)^2}}}$ for $n=2,4,6$.
\end{enumerate}
\end{Theorem}
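As for ${\mathbb P}^2$, the correction terms in \thmref{P11gen} can be cross-checked by expanding the rational function; a sketch for $n=3$ in part (1):

```python
from math import comb

# series coefficients of q^0_3(t) / (1 - t)^16, with q^0_3 = 1 + 10t^2 + 4t^3 + t^4
q = [1, 0, 10, 4, 1]
coeffs = [sum(q[k] * comb(15 + j - k, j - k) for k in range(min(len(q), j + 1)))
          for j in range(4)]

# j = 0, 1 reproduce exactly the subtracted 1 + (n^2 + 2n + 1) Lambda^4
assert coeffs[:2] == [1, 16]
# so the first surviving coefficient is chi(M(0,8), mu(3F + 3G)) = 146
assert coeffs[2] == 146
```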
\begin{Remark}
\begin{enumerate}
\item For $n$ even and $c_1=0$, $F$, $F+G$ we have $q^{c_1}_n(t)=t^{n^2/2}q^{c_1}_n(t^{-1})$.
\item For all $1\le n\le 7$ we have $q^0_n(1)=q^{F+G}_n(1)=2^{(n-1)^2}$, and for $n$ even also $q^F_n(1)=2^{(n-1)^2}$.
\item For all $1\le n \le 7$ and all $i$ odd with $i\le n-2$ we have $\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-n^2x/4}q^0_{n}(e^x)\big]=\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-n^2x/4}q^{F+G}_{n}(e^x)\big]=0$.
\end{enumerate}
\end{Remark}
The results on ${\mathbb P}^2$, ${\mathbb P}^1\times {\mathbb P}^1$ as well as the computations for other rational surfaces lead to a general conjecture.
For a line bundle $L$ on a rational surface $X$ we denote $\chi(L)=L(L-K_X)/2+1$ the holomorphic Euler characteristic and
$g(L)=L(L+K_X)/2+1$ the genus of a smooth curve in the linear system $|L|$.
\begin{Conjecture}\label{ratconj}
Let $X$ be a rational surface and let $\omega$ be ample on $X$ with $\<\omega, K_X\><0$. Let $L$ be a sufficiently ample line bundle on $X$.
Then we have the following.
\begin{enumerate}
\item There is a polynomial $P^X_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Z}_{\ge 0}[\Lambda^{\pm 4}]$, such that
$$\sum_{d\ge 0} \chi(M^{X}_\omega(c_1,d),\mu(L))\Lambda^d\equiv \frac{P^X_{c_1,L}(\Lambda)}{(1-\Lambda^4)^{\chi(L)}}.$$
\item We have $P^X_{c_1,L}(1)=2^{g(L)}$.
\item We have the ``duality''
$$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2}P^X_{L+K_X-c_1,L}(\frac{1}{\Lambda}).$$
\item If $i$ is odd, and $L$ is sufficiently ample with respect to $i$, then
$$\mathop{\text{\rm Coeff}}_{x^i}\big[e^{-\frac{1}{2}(L^2+8-K_X^2)x}P^X_{c_1,L}(e^x)\big]=0.$$
In the case of $({\mathbb P}^2,dH)$ and $({\mathbb P}^1\times {\mathbb P}^1,dF+dG)$ sufficiently ample with respect to $i$ means that $L+K_X$ is $i$-very ample.
\end{enumerate}
\end{Conjecture}
\begin{Remark} \label{unique}
The polynomial $P^X_{c_1,L}(\Lambda)$ is not well defined. We can write $P^X_{c_1,L}(\Lambda)=\Lambda^{-c_1^2}p^X_{c_1,L}(\Lambda^4)$, and the polynomial
$p^X_{c_1,L}(t)$ is well defined only up to adding a Laurent polynomial in $t$ divisible by $(1-t)^{\chi(L)}$. On the other hand, if $L$ is sufficiently ample with respect to $c_1,X$, we conjecture that we can choose $p^X_{c_1,L}(t)$ with
$\deg(p^X_{c_1,L}(t))<\chi(L)$ (i.e. the difference in degree of the highest order and lowest order term in $p^X_{c_1,L}(t)$ is smaller than $\chi(L)$). Assuming this, $p^X_{c_1,L}(t)$ and thus $P^X_{c_1,L}(\Lambda)$ are uniquely determined.
\end{Remark}
\begin{Remark}\label{chipol}
Part (1) of \conref{ratconj} requires a condition of sufficient ampleness (see \thmref{rpoly}). On the other hand it appears that a modified version of the conjecture holds in larger generality, i.e.
$\chi^{X,\omega}_{c_1}(L)\equiv \frac{P^X_{c_1,L}(\Lambda)}{(1-\Lambda^4)^{\chi(L)}}$
with $P^X_{c_1,L}(\Lambda)\in \Lambda^{-c_1^2}{\mathbb Q}[\Lambda^{\pm 4}]$, and
\begin{enumerate}
\item $P^X_{c_1,L}(1)=2^{g(L)}$,
\item
$\chi^{X,\omega}_{c_1}(L)\equiv (-1)^{\chi(L)}\Lambda^{-(L-K_X)^2+4} \cdot \chi^{X,\omega}_{L+K_X-c_1}(L)|_{\Lambda=\frac{1}{\Lambda}}.$
\end{enumerate}
\end{Remark}
{\bf Approach.}
This paper is built on \cite{GY}, and both papers are built on \cite{GNY}. In \cite{GNY} the wallcrossing terms for the $K$-theoretic Donaldson invariants are
determined in terms of modular forms, based on the solution of the Nekrasov conjecture for the $K$-theoretic partition function (see \cite{Nek}, \cite{NO}, \cite{NY1},\cite{NY2},\cite{NY3}).
Both \cite{GY} and this paper sum up the wallcrossing terms to get closed formulas for the generating functions.
The main new inputs are the systematic use of the generating functions of the ``$K$-theoretic Donaldson invariants
with point class'' $\chi^{X,\omega}_{c_1}(L,P^r)$, and the blowup formulas.
We introduce in an ad hoc way $\chi^{X,\omega}_{c_1}(L,P^r):=\frac{1}{\Lambda^r}\chi^{\widehat X,\omega}_{c_1+E}(L-E)$, where $\widehat X$ is the blowup of $X$ in $r$ general points and $E$ is the sum of the exceptional divisors (but note that these are invariants on $X$, depending on an ample class $\omega$ on $X$). These invariants satisfy a wallcrossing formula which is very similar to that of the standard $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L)$.
We prove blowup formulas that compute all the generating functions of $K$-theoretic Donaldson invariants on any blowup of $X$ in terms of the $\chi^{X,\omega}_{c_1}(L,P^r)$.
On the other hand we also prove blowdown formulas, which compute all the generating functions of the $K$-theoretic Donaldson invariants with point class $\chi^{X,\omega}_{c_1}(M,P^r)$ in terms of a {\it very small part} of those on the blowup $\widehat X$.
Then, generalizing the methods of \cite{GY}, we compute this small part in the case that $\widehat X$ is the blowup of ${\mathbb P}^2$ in a point. Thus, using the blowdown formulas, we determine the generating functions of the $K$-theoretic Donaldson invariants with point class of ${\mathbb P}^2$, and thus, by using the blowup formula again, of all blowups of ${\mathbb P}^2$. Finally, as the blowup of ${\mathbb P}^1\times {\mathbb P}^1$ in a point is equal to the blowup of
${\mathbb P}^2$ in two points, we apply the blowdown formulas again to determine generating functions for ${\mathbb P}^1\times{\mathbb P}^1$.
These methods give an algorithm which in principle computes all the generating functions mentioned above. The algorithm proves the rationality of the
generating functions, and is carried out for many $X$ and $L$ to obtain the explicit generating functions $\chi^{X,\omega}_{c_1}(L)$.
\section{Background material}\label{sec:background}
Throughout this paper, $X$ will be a simply connected nonsingular projective rational surface over ${\mathbb C}$.
Usually $X$ will be ${\mathbb P}^2$, ${\mathbb P}^1\times {\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many points.
We fix some notation to be used throughout the paper.
\begin{Notation} \label{R}
\begin{enumerate}
\item For classes $\alpha,\beta\in H^2(X,{\mathbb Q})$, we denote by $\<\alpha,\beta\>$ their intersection product.
For $\beta\in H^2(X)$ we also write $\beta^2$ instead of $\<\beta,\beta\>$.
\item For a line bundle $L$ on $X$ we denote its first Chern class by the same letter.
\item If $\widehat X$ is the blowup of $X$ in a point or in a finite set of points, and $L\in Pic(X)$, we denote its pullback to $\widehat X$ by the same letter.
The same holds for classes $\alpha\in H^2(X,{\mathbb R})$.
\item We denote $
{\mathcal R}:={\mathbb Q}[[q^2\Lambda^2,q^4]]$.
\item Let ${\mathbb Q}[t_1,\ldots,t_k]_n$ be the set of polynomials in $t_1,\ldots,t_k$ of degree $n$ and ${\mathbb Q}[t_1,\ldots,t_k]_{\le n}$ the polynomials in $t_1,\ldots,t_k$
of degree at most $n$.
\item Let $\omega$ be an ample divisor on $X$.
For $r\ge 0$, $c_1\in \operatorname{Pic}(X)$, $c_2\in H^4(X,{\mathbb Z})$ let
$M_\omega^X(r,c_1,c_2)$ be the moduli space of $\omega$-semistable rank $r$ sheaves on $X$ with $c_1(E)=c_1$, $c_2(E)=c_2$.
Let $M^X_\omega(r, c_1,c_2)_s$ be the open
subset of stable sheaves.
We will write $M^X_\omega(c_1,d)$ with $d:=4c_2-c_1^2$ instead of $M^X_\omega(2,c_1,c_2)$.
\end{enumerate}
\end{Notation}
\subsection{Determinant line bundles}\label{sec:detbun}
We briefly review the theory of determinant line bundles on the moduli space
\cite{DN}, \cite{LP1}, \cite{LP2}; for more details we refer to \cite[Chap.~8]{HL}. We mostly follow \cite[Sec.~1.1,1.2]{GNY}.
For a Noetherian scheme $Y$ we denote by $K(Y)$ and $K^0(Y)$ the Grothendieck groups of coherent sheaves and locally free sheaves on $Y$ respectively.
If $Y$ is nonsingular and quasiprojective, then $K(Y)=K^0(Y)$.
If we want to distinguish a sheaf ${\mathcal F}$ and its class in $K(Y)$, we
denote the latter by $[{\mathcal F}]$. The product $[{\mathcal F}].[{\mathcal G}]:=\sum_i (-1)^i[\underline{Tor}_i({\mathcal F},{\mathcal G})]$ makes
$K^0(Y)$ into a commutative ring and $K(Y)$ into a $K^0(Y)$ module.
For a proper morphism $f\colon Y_1\to Y_2$ we have the pushforward
homomorphism
\(f_!\colon K(Y_1)\to K(Y_2); [{\mathcal F}] \mapsto\sum_i (-1)^i [R^if_*{\mathcal F}].\)
For any morphism $f\colon Y_1\to Y_2$ we have the pullback
homomorphism \( f^*\colon K^0(Y_2)\to K^0(Y_1) \)
given by
\( [{\mathcal F}] \mapsto[f^*{\mathcal F}] \) for a locally free sheaf ${\mathcal F}$ on $Y_2$.
Let ${\mathcal E}$ be a flat family of coherent sheaves of class $c$ on $X$ parametrized by a scheme $S$; then $[{\mathcal E}]\in K^0(X\times S)$.
Let $p:X\times S\to S$, $q:X\times S\to X$ be the projections.
Define $\lambda_{\mathcal E}:K(X)\to \operatorname{Pic}(S)$ as the composition of the following homomorphisms:
\begin{equation}\label{dlb}
\xymatrix@C=0.3cm{
K(X)=K^0(X) \ar[rr]^{~~q^{*}} && K^0(X\times S) \ar[rr]^{.[{\mathcal E}]} && K^0(X\times
S) \ar[rr]^{~~~p_{!}} && K^0(S)\ar[rr]^{det^{-1}} &&
\operatorname{Pic}(S).}\end{equation}
By \cite[Prop.~2.1.10]{HL} we have $p_{!}([{\mathcal F}])\in K^0(S)$ for any $S$-flat sheaf ${\mathcal F}$.
We have the following facts.
\begin{enumerate}
\item $\lambda_{\mathcal E}$ is a homomorphism, i.e. $\lambda_{\mathcal E}(v_1+v_2)=\lambda_{\mathcal E}(v_1)\otimes \lambda_{{\mathcal E}}(v_2)$.
\item If $\mu\in \operatorname{Pic}(S)$ is a line bundle, then $\lambda_{{\mathcal E}\otimes p^*\mu}(v)=
\lambda_{{\mathcal E}}(v)\otimes \mu^{\chi(c\otimes v)}$.
\item $\lambda_{\mathcal E}$ is compatible with base change: if
$\phi:S'\to S$ is a morphism, then $\lambda_{\phi^*{\mathcal E}}(v)=\phi^*\lambda_{{\mathcal E}}(v)$.
\end{enumerate}
By property (2) above, $\lambda_{\mathcal E}(v)$ is independent of twisting the family by line bundles pulled back from the base precisely when $\chi(c\otimes v)=0$; this motivates the following definitions.
Define $K_c:=c^\perp=\big\{v\in K(X)\bigm|
\chi(v\otimes c)=0\big\}$,~
and $K_{c,\omega}:=c^\perp\cap\{1,h,h^2\}^{\perp\perp}$, where $h=[{\mathcal O}_\omega]$. Then we have a well-defined morphism $\lambda\colon K_c\to \operatorname{Pic}(M_\omega^X(c)^s)$, and $\lambda\colon K_{c,\omega}\to \operatorname{Pic}(M_\omega^X(c))$ satisfying the following properties:
\begin{enumerate}
\item The $\lambda$ commute with the inclusions $K_{c,\omega}\subset K_c$ and $\operatorname{Pic}(M_\omega^X(c))\subset \operatorname{Pic}(M_\omega^X(c)^s)$.
\item If ${\mathcal E}$ is a flat family of semistable sheaves on $X$ of class $c$ parametrized by $S$, then we have
$\phi_{{\mathcal E}}^*(\lambda(v))=\lambda_{{\mathcal E}}(v)$ for all $v\in K_{c,\omega}$ with $\phi_{\mathcal E}:S\rightarrow M^X_\omega(c)$ the classifying morphism.
\item If ${\mathcal E}$ is a flat family of stable sheaves, the statement of (2) holds with $K_{c,\omega}$, $M^X_\omega(c)$ replaced by $K_{c}$, $M^X_\omega(c)^s$.
\end{enumerate}
Since $X$ is a simply connected surface, both the moduli space $M^X_\omega(c)$ and the determinant line bundle $\lambda(c^*)$ only depend on the images of $c$ and $c^*$ in $K(X)_{num}$. Here $K(X)_{num}$ is the Grothendieck group modulo numerical equivalence. We say that $u,v\in K(X)$ are numerically equivalent if $u-v$ is in the radical of the bilinear form $(u,v)\mapsto \chi(X,u\otimes v)\equiv \chi(u\otimes v)$.
We call $H$ {\it general} with respect to
$c$ if all the strictly semistable sheaves in $M_H^X(c)$
are strictly semistable with respect to all ample divisors on $X$ in a neighbourhood of $H$.
Often $\lambda\colon K_{c,\omega}\to \operatorname{Pic}(M_\omega^X(c))$ can be extended. For instance let $c=(2,c_1,c_2)$, then $\lambda(v(L))$ is well-defined over $M^X_\omega(c)$ if $\<L,\xi\>=0$ for all $\xi$ a class of type $(c_1,d)$ (see \secref{walls}) with $\<\omega,\xi\>=0$. This can be seen from the construction of $\lambda(v(L))$ (e.g. see the proof of Theorem 8.1.5 in \cite{HL}).
\subsection{Walls}\label{walls}
Denote by ${\mathcal C}\subset H^2(X,{\mathbb R})$ the ample cone of $X$.
Then ${\mathcal C}$ has a chamber structure:
For a class $\xi\in H^2(X,{\mathbb Z})\setminus \{0\}$ let $W^\xi:=\big\{ x\in {\mathcal C}\bigm| \< x,\xi\>=0\big\}$.
Assume $W^\xi\ne \emptyset $. Let $c_1\in \operatorname{Pic}(X)$, $d\in {\mathbb Z}$ congruent to $-c_1^2$ modulo 4.
Then we call $\xi$ a {\it class of type} $(c_1,d)$ and call
$W^\xi$ a {\it wall of type} $(c_1,d)$ if the following conditions hold
\begin{enumerate}
\item
$\xi+c_1$ is divisible by $2$ in $H^2(X,{\mathbb Z})$,
\item $d+\xi^2\ge 0$.
\end{enumerate}
We call $\xi$ a {\it class of type} $(c_1)$, if $\xi+c_1$ is divisible by $2$ in $H^2(X,{\mathbb Z})$.
We say that $\omega\in {\mathcal C}$ lies on the wall $W^\xi$ if $\omega\in W^\xi$.
The {\it chambers of type} $(c_1,d)$ are the connected components of the complement
of the walls of type $(c_1,d)$ in ${\mathcal C}$.
Then $M_\omega^X(c_1,d)$ depends only on the chamber of type $(c_1,d)$ of $\omega$.
Let $c\in K(X)$ be the class of ${\mathcal F}\in M_\omega^X(c_1,d)$. It is easy to see
that $\omega$ is general with respect to $c$ if and only if $\omega$ does not lie on a wall of
type $(c_1,d)$.
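For instance, if $X={\mathbb P}^2$, then $H^2(X,{\mathbb Z})={\mathbb Z} H$ with $H$ the hyperplane class, and every nonzero class $\xi=nH$ satisfies $\<x,\xi\>=n\<x,H\>\ne 0$ for all ample $x$. Thus $W^\xi\cap {\mathcal C}=\emptyset$ for all $\xi\ne 0$, so there are no walls of any type $(c_1,d)$, and every ample divisor is general with respect to every $c$. More generally, if $W^\xi$ meets ${\mathcal C}$ for some $\xi\ne 0$, then $\xi^2<0$ by the Hodge index theorem, so walls can only occur for classes of negative square.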
\subsection{$K$-theoretic Donaldson invariants}\label{backDong}
We write $M^X_\omega(c_1,d)$ for $M^X_\omega(2,c_1,c_2)$ with $d=4c_2-c_1^2$.
Let $v\in K_c$, where $c$ is the class of a coherent rank $2$ sheaf
with Chern classes $c_1,c_2$.
Let $L$ be a line bundle on $X$ and assume that $\<L,c_1\>$ is even.
Then for $c$ the class of a rank $2$ coherent sheaf with Chern classes
$c_1,c_2$, we put
\begin{equation}\label{eq:uL} v(L):=(1-L^{-1})+\<\frac{L}{2},L+K_X+c_1\>[{\mathcal O}_x]\in K_c.\end{equation}
Note that $v(L)$ is independent of $c_2$.
Assume that $\omega$ is general with respect to $(2,c_1,c_2)$. Then we denote
$\mu(L):=
\lambda(v(L))\in \operatorname{Pic}(M^X_\omega(c_1,d))$.
The {\it $K$-theoretic Donaldson invariant\/} of $X$, with respect to $L,c_1,d,\omega$ is
$\chi(M^X_\omega(c_1,d),\mathcal O(\mu(L)))$.
We recall the following blowup relation for the $K$-theoretic Donaldson invariants from \cite[Sec.1.4]{GNY}.
Let $(X,\omega)$ be a polarized rational surface. Let $\widehat X$ be the
blowup of $X$ in a point and $E$ the exceptional divisor.
In the
following we always denote a class in $H^*(X,{\mathbb Z})$ and its pullback by
the same letter.
Let $Q$ be an open subset of a suitable quot-scheme
such that $M^X_\omega(c_1,d)=Q/GL(N)$. Assume that $Q$ is smooth \textup(e.g.\ $\langle -K_X,\omega\rangle>0$\textup).
We choose $\epsilon>0$ sufficiently small so that $\omega-\epsilon E$ is ample on $\widehat X$ and there is no class $\xi$ of type $(c_1,d)$ or of type $(c_1+E,d+1)$ on $\widehat X$ with $\<\xi, \omega\><0<\<\xi, (\omega-\epsilon E)\>.$
In case $c_1=0$ assume $d>4$.
\begin{Lemma} \label{blowsimple}
We have
\begin{align*}
\chi({M}^{\widehat X}_{\omega-\epsilon E}(c_1,d),\mu(L))&
=\chi(M^X_\omega(c_1,d),\mu(L)), \\
\chi({M}^{\widehat X}_{\omega-\epsilon E}(c_1+E,d+1),\mu(L))&
=\chi(M^X_\omega(c_1,d),\mu(L))
\end{align*}
for any line bundle $L$ on $X$ such that $\<L, c_1\>$ is even and
$\<L,\xi\>=0$ for $\xi$ any class of
type $(c_1,d)$ on $\widehat X$ with $\<\omega,\xi\>=0$.
\end{Lemma}
Following \cite{GY}, we introduce the generating function of the $K$-theoretic Donaldson invariants.
\begin{Definition} \label{KdonGen} Let $c_1\in H^2(X,{\mathbb Z})$. Let $\omega$ be ample on $X$ not on a wall of type $(c_1)$.
\begin{enumerate}
\item
If $c_1\not \in 2 H^2(X,{\mathbb Z})$, let
\begin{equation}\label{eq:Kdon}
\begin{split}
\chi_{c_1}^{X,\omega}(L)&:=\sum_{d>0}
\chi(M^X_\omega(c_1,d),\mathcal O(\mu(L)))\Lambda^d,
\end{split}
\end{equation}
\item In case $c_1=0$ let
$\widehat X$ be the blowup of $X$ in a point. Let $E$ be the exceptional divisor. Let $\epsilon>0$ be sufficiently small so that there is no class
$\xi$ of type $(E,d+1)$ on $\widehat X$ with $\<\xi, \omega\> <0 <\<\xi, (\omega-\epsilon E)\>$.
We put
\begin{equation}\label{eq:Kdon0}
\begin{split}
\chi_{0}^{X,\omega}(L)&:=\sum_{d>4}
\chi(M^X_\omega(0,d),\mathcal O(\mu(L)))\Lambda^d+\Big(\chi(M^{\widehat X}_{\omega-\epsilon E}(E,5) ,\mu(L))+LK_X-\frac{K_X^2+L^2}{2}-1\Big)\Lambda^4.
\end{split}
\end{equation}
\end{enumerate}
\end{Definition}
\begin{Remark}\label{rem:canonical}(\cite[Rem.~1.9]{GNY})
If $H$ is a general polarization, then
$\mu(2K_X)$ is a line bundle on $M^X_H(c)$
which coincides with the dualizing sheaf
on the locus of stable sheaves $M_H^X(c)^s$.
If $\dim (M_H^X(c) \setminus M_H^X(c)^s) \leq \dim M_H^X(c)-2$,
then $\omega_{M_H^X(c)}=\mu(2K_X)$.
\end{Remark}
Under rather general assumptions the higher cohomology of $\mu(L)$ vanishes.
The following follows from \cite[Prop.2.9]{GY} and its proof, which is based on \cite[Sec.1.4]{GNY}.
\begin{Proposition} \label{highvan}
Fix $c_1,d$. Let $\omega$ be an ample line bundle on $X$ which is general with respect to $c_1,d$, and satisfies $\<-K_X,\omega\>>0$.
Let $L$ be a nef line bundle on $X$ such that $L-2K_X$ is ample.
If $c_1$ is not divisible by $2$ in $H^2(X,{\mathbb Z})$ or $d>8$, we have
$H^i(M_\omega^X(c_1,d),\mu(L))=0$ for all $i>0$, in particular
$$\dim H^0(M_\omega^X(c_1,d),\mu(L))=\chi(M_\omega^X(c_1,d),\mu(L)).$$
\end{Proposition}
\section{Strange duality}\label{strd}
\subsection{Review of strange duality}
We briefly review the strange duality conjecture for surfaces from \cite{LPst}.
The strange duality conjecture was formulated for $X$ a smooth curve in the 1990s (see \cite{Bea} and \cite{Donagi}) and in this case was proved around 2007 (see \cite{Bel1}, \cite{MO1}). For $X$ a surface, there is a formulation for some special cases due to Le Potier (see \cite{LPst} or \cite{Da2}).
Let $c,c^*\in K(X)_{num}$ with $c\in K_{c^*}$.
Let $H$ be an ample line bundle on $X$ which is
both $c$-general and $c^*$-general.
Write ${\mathcal D}_{c,c^*}:=\lambda(c^*)\in \operatorname{Pic}(M^X_H(c))$,
${\mathcal D}_{c^*,c}:=\lambda(c)\in \operatorname{Pic}(M^X_H(c^*))$.
Assume that all $H$-semistable sheaves ${\mathcal F}$ on $X$ of class $c$ and
all $H$-semistable sheaves ${\mathcal G}$ on $X$ of class $c^*$ satisfy
\begin{enumerate}
\item $\underline{Tor}_i({\mathcal F},{\mathcal G})=0$ for all $i\ge 1$,
\item $H^2(X,{\mathcal F}\otimes {\mathcal G})=0$.
\end{enumerate}
Both conditions are automatically satisfied if
$c$ is not of dimension $0$ and $c^*$ is of dimension $1$
(see \cite[p.9]{LPst}).
Put ${\mathcal D}:={\mathcal D}_{c,c^*}\boxtimes {\mathcal D}_{c^*,c}\in \operatorname{Pic}(M^X_H(c)\times M^X_H(c^*))$.
In \cite[Prop.~9]{LPst} a canonical section $\sigma_{c,c^*}$ of ${\mathcal D}$ is constructed, whose zero set is supported on
$$\mathscr{D}:=\big\{([{\mathcal F}],[{\mathcal G}])\in M^X_H(c)\times M^X_H(c^*)\bigm| H^0(X,{\mathcal F}\otimes {\mathcal G})\ne 0\big\}.$$
The element $\sigma_{c,c^*}$ of $H^0(M^X_H(c),{\mathcal D}_{c,c^*})\otimes H^0(M^X_H(c^*),{\mathcal D}_{c^*,c})$ gives a linear map
\begin{equation}
\label{SDmap}
SD_{c,c^*}:H^0(M^X_H(c),{\mathcal D}_{c,c^*})^\vee \to H^0(M^X_H(c^*),{\mathcal D}_{c^*,c}),
\end{equation}
called the {\it strange duality map}.
Le Potier's strange duality conjecture is then the following.
\begin{Conjecture}\label{sdcon} Under the above assumptions $SD_{c,c^*}$ is an isomorphism.
\end{Conjecture}
It seems natural to believe that, under more general assumptions than those of \conref{sdcon}, we have the {\it numerical version of strange duality}
$\chi(M^X_H(c),{\mathcal D}_{c,c^*}) = \chi(M^X_H(c^*),{\mathcal D}_{c^*,c})$.
\subsection{Interpretation of the main results and conjectures in view of strange duality}
In this subsection let
$c=(2,c_1,c_2)$ and $c^*=(0,L,\chi=\<\frac{L}2,c_1\>)$, so that ${\mathcal D}_{c,c^*}=\mu(L)$. The moduli space $M^X_H(c^*)$ is a moduli space of pure dimension $1$ sheaves.
It has a natural projection $\pi:=\pi^{L,c_1}:M^X_H(c^*)\to |L|$, whose fibre over a smooth curve $C$ in $|L|$ is the Jacobian of line bundles of degree $\< \frac{L}{2},c_1+K_X+L\>$ on $C$.
In particular $c^*$ is independent of $c_2$.
In case $c_1=0$ the fibre of $\pi^{L,0}$ over the class of a nonsingular curve $C$ is the Jacobian $J_{g(C)-1}(C)$ of degree $g(C)-1=\frac{1}{2}\deg(K_C)$ line bundles on $C$.
In this case we write $\Theta:=\lambda([{\mathcal O}_X])\in \operatorname{Pic}(M^X_H(c^*))$. The divisor of its restriction to a fibre $J_{g(C)-1}(C)$ is the classical theta divisor of
degree $g(C)-1$ line bundles on $C$ with a section.
Now let $c_1$ again be arbitrary and let ${\mathcal O}_X(c_1)$ be the line bundle with first Chern class $c_1$; we denote $\Theta_{2,c_1}:=\lambda([{\mathcal O}_X\oplus {\mathcal O}_X(c_1)])\in\operatorname{Pic}(M^X_H(c^*))$.
We also denote $\eta:=\lambda({\mathcal O}_{x})\in \operatorname{Pic}(M^X_H(c^*))$, for $x$ a general point of $X$. It is standard that
$\eta=\pi^*({\mathcal O}_{|L|}(1))$, with ${\mathcal O}_{|L|}(1)$ the hyperplane bundle on $|L|$.
Thus we see that ${\mathcal D}_{c^*,c}=\lambda(c)=\Theta_{2,c_1}\otimes \pi^*({\mathcal O}_{|L|}(c_2))$; in particular
in case $c_1=0$ we have ${\mathcal D}_{c^*,c}=\lambda(c)=\Theta^{\otimes 2}\otimes \pi^*({\mathcal O}_{|L|}(c_2))$.
We use Le Potier's strange duality conjecture and the results and conjectures from the introduction to make conjectures about the pushforwards
$\pi^{L,c_1}_*(\Theta_{2,c_1})$, $\pi^{L,c_1}_!(\Theta_{2,c_1})$.
For a Laurent polynomial $f(t):=\sum_{n} a_n t^n\in {\mathbb Z}[t^{-1},t]$ we put $f({\mathcal O}_{|L|}(-1)):=\bigoplus_n {\mathcal O}_{|L|}(-n)^{\oplus a_n}.$
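For instance, $f(t)=t^{-1}+2+3t$ gives $f({\mathcal O}_{|L|}(-1))={\mathcal O}_{|L|}(1)\oplus{\mathcal O}_{|L|}^{\oplus 2}\oplus{\mathcal O}_{|L|}(-1)^{\oplus 3}$, a vector bundle of rank $f(1)=6$ on $|L|$.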
\begin{Conjecture}\label{splitconj} \begin{enumerate}
\item If $L$ is sufficiently ample on $X$, then, defining $p^{X,c_1}_L$ as in \remref{unique}, we have
$\pi^{L,c_1}_{*}(\Theta_{2,c_1})=p^{X,c_1}_L({\mathcal O}_{|L|}(-1))$ and $R^i\pi^{L,c_1}_*(\Theta_{2,c_1})=0$ for $i>0$.
In particular $\pi^{L,c_1}_{*}(\Theta_{2,c_1})$ splits as a direct sum of line bundles on
$|L|$. (Note that this implies that $p^{X,c_1}_L$ is a polynomial with nonnegative coefficients, as conjectured in \conref{ratconj}(1)).
\item In particular in the case $X={\mathbb P}^2$, and $d>0$, we get, with the polynomials $p_d(t)$ from \conref{p2con}, that
$$\pi^{dH,0}_{*}(\Theta^{2})=p_d({\mathcal O}_{|dH|}(-1)), \ \pi^{2dH,H}_{*}(\Theta_{2,H})=p_{2d}({\mathcal O}_{|2dH|}(1))\otimes{\mathcal O}_{|2dH|}(-d^2).$$
\item Under more general assumptions on $L$ on $X$, we expect that there is a choice of $P^{X,c_1}_L(\Lambda)=\Lambda^{-c_1^2}p^{X,c_1}_L(\Lambda^4)$,
such that
$\pi^{L,c_1}_{!}(\Theta_{2,c_1})=p^{X,c_1}_L({\mathcal O}_{|L|}(-1))$.
\end{enumerate}
\end{Conjecture}
\begin{Remark}
\begin{enumerate}
\item Assuming part (2) of \conref{splitconj}, \thmref{mainp2} determines $\pi^{dH,0}_{*}(\Theta^2)$, $\pi^{dH,H}_{*}(\Theta_{2,H})$ as direct sums of line bundles for $d\le 11$.
\item For $X={\mathbb P}^1\times{\mathbb P}^1$, assuming part (1) of \conref{splitconj}, \thmref{P11gen} gives, with the notation from there, for $d\le 7$ that
\begin{align*}
\pi^{d(F+G),0}_{*}(\Theta^2)&=q^0_d({\mathcal O}_{|d(F+G)|}(-1)),\\
\pi^{d(F+G),F}_{*}(\Theta_{2,F})&=q^F_d({\mathcal O}_{|d(F+G)|}(-1)),\\
\pi^{d(F+G),F+G}_{*}(\Theta_{2,F+G})&=(t^{1/2}q^{F+G}_d(t))\big|_{t={\mathcal O}_{|d(F+G)|}(-1)}.
\end{align*}
\item
In \cite{GY} some further generating functions for the $K$-theoretic Donaldson invariants of $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widetilde {\mathbb P}^2$ are computed. From
the results there we expect
\begin{align*}
\pi^{nF+2G,0}_*(\Theta^{2})&=({\mathcal O}_{|nF+2G|}\oplus {\mathcal O}_{|nF+2G|}(-1))^{\otimes n}_{ev},\\
\pi^{nF+2G,F}_*(\Theta_{2,F})&=({\mathcal O}_{|nF+2G|}\oplus {\mathcal O}_{|nF+2G|}(-1))^{\otimes n}_{odd},
\end{align*}
where $(\bullet)_{ev}$ and $(\bullet)_{odd}$ denotes respectively the part consisting only of even powers ${\mathcal O}(-2d)$ or odd powers ${\mathcal O}(-2d-1)$.
In particular this would give
$$\pi^{nF+2G,0}_*(\Theta^{2})\oplus
\pi^{nF+2G,F}_*(\Theta_{2,F})=({\mathcal O}_{|nF+2G|}\oplus {\mathcal O}_{|nF+2G|}(-1))^{\otimes n}.$$
\end{enumerate}
\end{Remark}
\begin{Remark} We briefly motivate the above conjectures.
Assuming strange duality \conref{sdcon}, we have, using also the projection formula,
\begin{align*}
H^0(M_\omega^X(c_1,d),\mu(L))^\vee&=H^0(M^X_\omega(c^*), \lambda(c))=H^0(|L|, \pi^{L,c_1}_*(\lambda(c)))\\
&=H^0(|L|, \pi^{L,c_1}_*(\Theta_{2,c_1})\otimes {\mathcal O}_{|L|}(c_2)),
\end{align*}
and similarly, assuming the numerical version of strange duality above,
$$\chi(M_\omega^X(c_1,d),\mu(L))=\chi(M^X_\omega(c^*),\lambda(c))=\chi(|L|,\pi^{L,c_1}_!(\lambda(c)))=\chi(|L|,\pi^{L,c_1}_!(\Theta_{2,c_1})\otimes {\mathcal O}_{|L|}(c_2)).$$
We assume $H^i(X,L)=0$ for $i>0$; thus $\dim(|L|)=\chi(L)-1$.
Then for $0\le l\le \dim(|L|)$ we have
$$\sum_{n\ge 0} \chi(|L|,{\mathcal O}_{|L|}(-l+n))t^n=\frac{t^l}{(1-t)^{\chi(L)}}.$$
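Indeed, writing $N:=\dim(|L|)=\chi(L)-1$, we have $\chi(|L|,{\mathcal O}_{|L|}(m))=\binom{N+m}{N}$, which vanishes for $-N\le m\le -1$; thus the terms with $n<l$ vanish, and
$$\sum_{n\ge l}\binom{N+n-l}{N}t^n=t^l\sum_{k\ge 0}\binom{N+k}{N}t^k=\frac{t^l}{(1-t)^{N+1}}=\frac{t^l}{(1-t)^{\chi(L)}}.$$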
Thus, assuming the numerical part of the strange duality conjecture and part (3) of \conref{splitconj}, we would get
\begin{align*}
\chi^{X,\omega}_{c_1}(L)&\equiv\Lambda^{-c_1^2} \sum_{n\ge 0}\chi\big(|L|,\pi^{L,c_1}_!(\Theta_{2,c_1})\otimes {\mathcal O}_{|L|}(n)\big)\Lambda^{4n}\\
&\equiv\Lambda^{-c_1^2} \sum_{n\ge 0}\chi\big(|L|, p^{X,c_1}_L({\mathcal O}_{|L|}(-1))\otimes {\mathcal O}_{|L|}(n)\big)\Lambda^{4n}\\
&=\Lambda^{-c_1^2}\frac{p^{X,c_1}_L(\Lambda^4)}{(1-\Lambda^4)^{\chi(L)}}=\frac{P^{X,c_1}_L(\Lambda)}{(1-\Lambda^4)^{\chi(L)}}
\end{align*}
Assuming the strange duality conjecture and part (1) of \conref{splitconj}, we would get the same statement with the left hand side replaced by
$\sum_{n\ge 0} \dim H^0(M^X_H(2,c_1,n), \mu(L)) \Lambda^{4n-c_1^2}$. In other words \conref{splitconj} explains the generating functions of \thmref{mainp2}, \thmref{P11gen} and \conref{ratconj}(1).
\end{Remark}
\begin{Remark}
Assuming \conref{splitconj} and the strange duality conjecture, we see that $\mathop{\text{\rm rk}}(\pi_!(\Theta_{2,c_1}))=p^{X,c_1}_L(1)$.
As mentioned above, the fibre of $\pi^{L,c_1}:M^X_H(c^*)\to |L|$ over the point corresponding to a smooth curve $C$ in $|L|$ is the Jacobian $J_d(C)$ of line bundles of degree $d=\< \frac{L}{2},c_1+K_X+L\>$ on $C$, and
we see that $\Theta_{2,c_1}$ restricts to a polarisation of type $(2,\ldots,2)$ on $J_d(C)$. Thus by the Riemann--Roch theorem for abelian varieties (a polarisation of type $(d_1,\ldots,d_g)$ on a $g$-dimensional abelian variety has Euler characteristic $d_1\cdots d_g$) we have $\chi(J_d(C),\Theta_{2,c_1}|_{J_d(C)})=2^{g(C)}$.
Thus \conref{splitconj} implies that $\pi_!(\Theta_{2,c_1})$ has rank $2^{g(C)}$, therefore, assuming the strange duality conjecture, it implies $p^{X,c_1}_L(1)=2^{g(C)}$, as predicted in
\conref{ratconj} and seen e.g. in \thmref{mainp2} and \thmref{P11gen} for many $L$ in the case of ${\mathbb P}^2$ and ${\mathbb P}^1\times{\mathbb P}^1$.
\end{Remark}
\begin{Remark} Let $L$ again be sufficiently ample on $X$.
Assuming the strange duality conjecture \conref{sdcon} and part (1) of \conref{splitconj} we get that part (3) of \conref{ratconj}
gives the conjectural duality
$$\pi^{L,c_1}_*(\Theta_{2,c_1})=(\pi^{L,L+K_X-c_1}_*(\Theta_{2,L+K_X-c_1}))^\vee\otimes {\mathcal O}_{|L|}(-\<L,L+K_X\>/2-\<c_1,c_1-K_X\>/2+\<L,c_1\>/2-2).$$
In particular in case $c_1=0$,
$$\pi^{L,0}_*(\Theta^{\otimes 2})=(\pi^{L,L+K_X}_*(\Theta_{2,L+K_X}))^\vee\otimes {\mathcal O}_{|L|}(-\<L,L+K_X\>/2-2).$$
In the case of $X={\mathbb P}^2$ we should have for $d>0$ that
\begin{align*}
\pi^{2dH,0}_*(\Theta^{\otimes 2})&=(\pi^{2dH,H}_*(\Theta_{2,H}))^\vee\otimes {\mathcal O}_{|2dH|}(-d^2),\\
\pi^{(2d+1)H,0}_*(\Theta^{\otimes 2})&=(\pi^{(2d+1)H,0}_*(\Theta^{\otimes 2}))^\vee\otimes {\mathcal O}_{|(2d+1)H|}(-d(d+1)).
\end{align*}
Similarly we conjecture for $X={\mathbb P}^1\times{\mathbb P}^1$ e.g. that for $d>0$
$$\pi^{2d(F+G),0}_*(\Theta^{\otimes 2})=(\pi^{2d(F+G),0}_*(\Theta^{\otimes 2}))^\vee\otimes {\mathcal O}_{|2d(F+G)|}(-2d^2).$$
\end{Remark}
\section{Wallcrossing formula}
\subsection{Theta functions and modular forms}\label{thetamod}
We start by reviewing results and notations from \cite{GNY}, \cite[Sec.~3.1]{GY}.
For $\tau\in {\mathcal H}=\big\{\tau\in {\mathbb C}\bigm| \Im(\tau)>0\big\}$ put $q=e^{\pi i\tau/4}$ and for $h\in {\mathbb C}$ put $y=e^{h/2}$. Note that the notation is not standard.
Recall the $4$ Jacobi theta functions:
\begin{equation}
\begin{split}\label{theta}
\theta_1(h)&:=\sum_{n\in {\mathbb Z}} i^{2n-1} q^{(2n+1)^2} y^{2n+1}=-iq(y-y^{-1})\prod_{n>0}(1-q^{8n})(1-q^{8n}y^2)(1-q^{8n}y^{-2}),\\
\theta_2(h)&:=\sum_{n\in {\mathbb Z}} q^{(2n+1)^2} y^{2n+1}=q(y+y^{-1})\prod_{n>0}(1-q^{8n})(1+q^{8n}y^2)(1+q^{8n}y^{-2}),\\
\theta_3(h)&:=\sum_{n\in {\mathbb Z}} q^{(2n)^2} y^{2n},\qquad
\theta_4(h):=\sum_{n\in {\mathbb Z}} i^{2n}q^{(2n)^2} y^{2n}.
\end{split}
\end{equation}
We usually do not write the argument $\tau$. The conventions are essentially the same as in
\cite{WW} and in \cite{Ak}, where the $\theta_i$ for $i\le 3$ are denoted $\vartheta_i$ and $\theta_4$ is denoted $\vartheta_0$.
Denote
\begin{equation}\label{thetatilde}
\begin{split}\theta_i&:=\theta_i(0), \quad
\widetilde\theta_i(h):=\frac{\theta_i(h)}{\theta_i}, \quad i=2,3,4;\qquad \widetilde\theta_1(h):=\frac{\theta_1(h)}{\theta_4},\\
u&:=-\frac{\theta_2^2}{\theta_3^2}-\frac{\theta_3^2}{\theta_2^2}=-\frac{1}{4}q^{-2} - 5q^2 + \frac{31}{2}q^6 - 54q^{10}+O(q^{14}),
\end{split}
\end{equation}
and two Jacobi functions, i.e. Jacobi forms of weight $0$ and index $0$,
$\Lambda:=\frac{\theta_1(h)}{\theta_4(h)}$, $M:=2\frac{\widetilde \theta_2(h)\widetilde \theta_3(h)}{\widetilde \theta_4(h)^2}$, which satisfy the relation
\begin{equation}\label{MuL}
M=2\sqrt{1+u\Lambda^2+\Lambda^4},\end{equation}
and the formulas
\begin{equation}\label{dLdh}\frac{\partial\Lambda}{\partial h}=\frac{\theta_2\theta_3}{4i}M,
\quad
h=\frac{2i}{\theta_2\theta_3}\int_{0}^\Lambda\frac{dx}{\sqrt{1+ux^2+x^4}}.
\end{equation}
In \cite[Sec.~3.1]{GY} it is shown that $h\in iq^{-1}\Lambda{\mathcal R}.$
A function $F(\tau,h)$ can via formula \eqref{dLdh} also be viewed as a function
of $\tau$ and $\Lambda$.
In this case, viewing $\tau$ and $\Lambda$ as the independent variables we define
$F' :=\frac{4}{\pi i} \frac{\partial F}{\partial \tau}=q\frac{\partial F}{\partial q},
\quad F^*:=\Lambda\frac{\partial F}{\partial \Lambda},$ and get
\begin{equation}
\label{hstar}
h^*=\frac{4i\Lambda}{\theta_2\theta_3 M}, \quad
u'=\frac{2\theta_4^8}{\theta_2^2\theta_3^2}.
\end{equation}
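As a consistency check, the formula for $u'$ in \eqref{hstar} can be verified to low order in $q$: from the development of $u$ in \eqref{thetatilde} one computes $u'=q\frac{\partial u}{\partial q}=\frac{1}{2}q^{-2}-10q^2+93q^6+O(q^{10})$, while $\theta_2=2q+2q^9+\cdots$, $\theta_3=1+2q^4+\cdots$, $\theta_4=1-2q^4+\cdots$ give
$$\frac{2\theta_4^8}{\theta_2^2\theta_3^2}=\frac{1}{2}q^{-2}(1-16q^4+112q^8+\cdots)(1-4q^4+10q^8+\cdots)=\frac{1}{2}q^{-2}-10q^2+93q^6+O(q^{10}).$$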
For future use we record the following standard formulas for the behaviour of the theta functions under translation.
\begin{align}\label{T4trans}
\theta_4(h+2\pi i)&=\theta_4(h),\quad \theta_4(h+ 2\pi i \tau)=-q^{-4}y^{-2}\theta_4(h),\quad\theta_4(h+ \pi i \tau)= i q^{-1}y^{-1}\theta_1(h)\\
\label{T1trans}
\theta_1(h+2\pi i)&=-\theta_1(h),\quad \theta_1(h+ 2\pi i\tau)=-q^{-4}y^{-2}\theta_1(h),\quad \theta_1(h+ \pi i\tau)= i q^{-1}y^{-1}\theta_4(h),\\
\label{T2trans}\theta_2(h+\pi i \tau)&=q^{-1}y^{-1}\theta_3(h),\quad
\theta_3(h+\pi i \tau)=q^{-1}y^{-1}\theta_2(h).\end{align}
(see e.g. \cite[Table VIII, p.~202]{Ak}).
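For instance, the last formula in \eqref{T4trans} follows directly from the definitions \eqref{theta}: replacing $h$ by $h+\pi i\tau$ multiplies $y^{2n}$ by $e^{\pi i n\tau}=q^{4n}$, so
$$\theta_4(h+\pi i \tau)=\sum_{n\in{\mathbb Z}}i^{2n}q^{4n^2+4n}y^{2n}=q^{-1}y^{-1}\sum_{n\in{\mathbb Z}}i^{2n}q^{(2n+1)^2}y^{2n+1}=iq^{-1}y^{-1}\theta_1(h),$$
since $\theta_1(h)=-i\sum_{n\in{\mathbb Z}}i^{2n}q^{(2n+1)^2}y^{2n+1}$.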
\begin{Lemma}\label{thetaadd} Let $a$, $b\in {\mathbb Z}$. Then
\begin{enumerate}
\item $\theta_4(h)=(-1)^bq^{4b^2}y^{2b}\theta_4(h+2\pi i b\tau)$, $\quad \theta_4(h+2\pi i a)=\theta_4(h)$,
\item $\theta_1(h)=(-1)^bq^{4b^2}y^{2b}\theta_1(h+2\pi i b\tau)$,$\quad \theta_1(h+2\pi i a)=(-1)^a\theta_1(h)$,
\item $\theta_4(h)=e^{\pi i (b-\frac{1}{2})}q^{(2b+1)^2}y^{2b+1}\theta_1(h+2\pi i (b+\frac{1}{2})\tau)$,
\item $\theta_1(h)=e^{\pi i (b-\frac{1}{2})}q^{(2b+1)^2}y^{2b+1}\theta_4(h+2\pi i (b+\frac{1}{2})\tau)$.
\end{enumerate}
\end{Lemma}
\begin{proof}
All these formulas follow by straightforward induction from \eqref{T4trans} and \eqref{T1trans}.
As an illustration we check (1) and (3). The formula $\theta_4(h+ 2\pi i \tau)=-q^{-4}y^{-2}\theta_4(h)$ gives by induction
\begin{align*}\theta_4(h+2\pi i b\tau)&=-q^{-4}e^{-(h+2\pi i(b-1)\tau)}\theta_4(h+2\pi i (b-1)\tau)\\
&=
-q^{-8b+4}y^{-2}(-1)^{-(b-1)}q^{-4(b-1)^2}y^{-(2b-2)}\theta_4(h)=(-1)^{-b}q^{-(2b)^2}y^{-2b}\theta_4(h),\end{align*}
and (1) follows.
Similarly
\begin{align*}\theta_4(h+2\pi i (b+1/2)\tau)&=iq^{-1}e^{-h/2-\pi i b\tau}\theta_1(h+2\pi ib\tau)=iq^{-4b-1}y^{-1}(-1)^{-b}q^{-(2b)^2}y^{-2b}\theta_1(h)\\
&=e^{-\pi i (b-\frac{1}{2})}q^{-(2b+1)^2}y^{-(2b+1)}\theta_1(h),\end{align*} and (3) follows.
\end{proof}
\subsection{Wallcrossing formula}\label{wallcro}
Now we review the wallcrossing formula from \cite{GNY}, \cite{GY}, and generalize it slightly.
Let $\sigma(X)$ be the signature of $X$.
\begin{Definition}\label{wallcrossterm}
Let $r\ge 0$, let $\xi\in H^2(X,{\mathbb Z})$ with $\xi^2< 0$. Let $L$ be a line bundle on $X$. We put
$$\Delta_\xi^X(L,P^r):=2 i^{\<\xi, K_X\>} \Lambda^2 q^{-\xi^2}
y^{\<\xi,(L-K_X)\>}\widetilde\theta_4(h)^{(L-K_X)^2}\theta_4^{\sigma(X)}u'h^*M^r,$$
and put $\Delta_\xi^X(L):=\Delta_\xi^X(L,P^0)$.
By the results of the previous section it can be developed as a power series
$$\Delta_\xi^X(L,P^r)=\sum_{d\ge 0} f_d(\tau)\Lambda^d\in {\mathbb C}((q))[[\Lambda]],$$
whose coefficients $f_d(\tau)$ are Laurent series in $q$.
If $\<\xi,L\>\equiv r \mod 2$,
the {\it wallcrossing term} is defined as
$$\delta_{\xi}^X(L,P^r):=\sum_{d\ge 0} \delta_{\xi,d}^X(L,P^r)\Lambda^d\in {\mathbb Q}[[\Lambda]], $$
with
$$\delta_{\xi,d}^X(L,P^r)=\mathop{\text{\rm Coeff}}_{q^0}[f_d(\tau)].$$
Again we write $\delta_{\xi,d}^X(L):=\delta_{\xi,d}^X(L,P^0)$ and $\delta_{\xi}^X(L):=\delta_{\xi}^X(L,P^0)$.
The wallcrossing terms $\delta_{\xi}^X(L)$ were already introduced in \cite{GNY} and used in
\cite{GY}. As we will recall in a moment, they compute the change of the K-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L)$, when $\omega$ crosses a wall.
Later we will introduce K-theoretic Donaldson invariants with point class $\chi^{X,\omega}_{c_1}(L,P^r)$, whose wallcrossing is computed by
$\delta_{\xi}^X(L,P^r)$. Intuitively we want to think of $r$ as the power of a $K$-theoretic point class $\mathcal P$.
\end{Definition}
\begin{Remark}\label{delb}
\begin{enumerate}
\item $\delta_{\xi,d}^X(L,P^r)=0$ unless $d\equiv -\xi^2\mod 4$.
\item
In the definition of $\delta_\xi^X(L,P^r)$ we can replace
$\Delta_{\xi}^X(L,P^r)$ by
\begin{equation}
\begin{split}\label{Delbar}
&\overline \Delta_{\xi}^X(L,P^r):= \frac{1}{2}(\Delta_{\xi}^X(L,P^r)-\Delta_{-\xi}^X(L,P^r))\\
&\ =M^{r} i^{\<\xi, K_X\>} \Lambda^2 q^{-\xi^2}
\big(y^{\<\xi,(L-K_X)\>}-(-1)^{\xi^2}y^{-\<\xi,(L-K_X)\>}\big)\widetilde\theta_4(h)^{(L-K_X)^2}\theta_4^{\sigma(X)}u'h^*.
\end{split}
\end{equation}
\end{enumerate}
\end{Remark}
\begin{proof}
(1) As $h\in {\mathbb C}[[q^{-1}\Lambda,q^4]]$, we also have $h^*, y,\widetilde \theta_4(h),M\in {\mathbb C}[[q^{-1}\Lambda,q^4]]$. Finally $u,u'\in q^{-2}{\mathbb Q}[[q^4]]$.
It follows that $\Delta_{\xi}^X(L,P^r)\in q^{-\xi^2}{\mathbb C}[[q^{-1}\Lambda,q^4]]
$. Writing $\Delta_{\xi}^X(L,P^r)=\sum_{d} f_{d,r}(\tau)\Lambda^d$, we see that $\mathop{\text{\rm Coeff}}_{q^0}[f_{d,r}(\tau)]=0$ unless $d\equiv -\xi^2\mod 4$.
(2) Note that $\widetilde \theta_4(h)$ is even in $\Lambda$ and $h^*$ is odd in
$\Lambda$, thus $\overline \Delta_{\xi}^X(L,P^r)= \sum_{d\equiv -\xi^2 (2) } f_{d,r}(\tau)\Lambda^d$, and the claim follows by (1).
\end{proof}
The main result of \cite{GNY} is the following (see also \cite{GY}).
\begin{Theorem}\label{wallcr}
Let $H_1$, $H_2$ be ample divisors on $X$, assume that $\<H_1,K_X\><0$, $\<H_2,K_X\><0$, and that $H_1$, $H_2$ do not lie on a wall
of type $(c_1,d)$. Then
\begin{align*}
\chi(M^X_{H_1}(c_1,d),\mu(L))-\chi(M^X_{H_2}(c_1,d),\mu(L))&=\sum_{\xi}\delta^X_{\xi,d}(L),
\end{align*}
where $\xi$ runs through all classes of type $(c_1,d)$
with $\<\xi, H_1\>>0 >\<\xi, H_2\>$.
\end{Theorem}
Note that the condition $\<H_1,K_X\><0$, $\<H_2,K_X\><0$ implies that all the classes of type $(c_1,d)$
with $\<\xi, H_1\>>0 >\<\xi ,H_2\>$ are good in the sense of \cite{GNY}, so the wallcrossing formula there applies.
Let $c_1\in H^2(X,{\mathbb Z})$. Let $H_1,H_2$ be ample on $X$, assume they do not lie on a wall of type $(c_1)$. Then it follows that
$$\chi^{X,H_1}_{c_1}(L)-\chi^{X,H_2}_{c_1}(L)=\sum_{\xi}\delta^X_\xi(L),$$
where $\xi$ runs through all classes in $c_1+2H^2(X,{\mathbb Z})$ with $\<\xi ,H_1\> >0>\<\xi, H_2\>$.
\subsection{Polynomiality and vanishing of the wallcrossing}
By definition the wallcrossing terms
$\delta_\xi^X(L,P^r)$ are power series in $\Lambda$. We now show that
they are always polynomials, modifying the proof of \cite[Thm.~3.19]{GY}.
We have seen above that $h\in iq^{-1}\Lambda{\mathbb Q}[[q^{-2}\Lambda^2,q^4]]$, and thus $y=e^{h/2}\in {\mathbb Q}[[iq^{-1}\Lambda,q^4]]$.
\begin{Lemma} \label{qpow}
(\cite[Lem.~3.18]{GY})
\begin{enumerate}
\item $y-y^{-1}=2\sinh(h/2)\in iq^{-1}\Lambda{\mathcal R}$, $\frac{1}{\sinh(h/2)}\in iq\Lambda^{-1}{\mathcal R}$.
\item For all integers $n$ we have
\begin{align*}
\sinh((2n+1)h/2)&\in i{\mathbb Q}[q^{-1}\Lambda]_{|2n+1|}{\mathcal R},\quad
\cosh(nh)\in {\mathbb Q}[q^{-2}\Lambda^2]_{|n|} {\mathcal R},\\
\sinh(nh)h^*&\in {\mathbb Q}[q^{-2}\Lambda^2]_{|n|}{\mathcal R},
\quad \cosh((2n+1)h/2)h^*\in i {\mathbb Q}[q^{-1}\Lambda]_{|2n+1|} {\mathcal R}.
\end{align*}
\item $\widetilde \theta_4(h)\in {\mathcal R}
$, with $\widetilde \theta_4(h)=1+q^2\Lambda^2+O(q^4)$.
\end{enumerate}
\end{Lemma}
\begin{Lemma}\label{vanwall}
Let $r\in {\mathbb Z}_{\ge 0}$, let $\xi\in H^2(X,{\mathbb Z})$, and $L$ a line bundle on $X$ with $\xi^2<0$ and $\<\xi,L\>\equiv r\mod 2$.
\begin{enumerate}
\item
$\delta_{\xi,d}^X(L,P^r)=0$ unless $-\xi^2\le d\le \xi^2+2|\<\xi,L-K_X\>|+2r+4$.
In particular $\delta_{\xi}^X(L,P^r)\in {\mathbb Q}[\Lambda]$.
\item $\delta_\xi^X(L,P^r)=0$ unless $-\xi^2\le |\<\xi,L-K_X\>|+r+2$. (Recall that
by definition $\xi^2<0$).
\end{enumerate}
\end{Lemma}
\begin{proof}
Assume first that $r=2l$ is even.
Let $N:=\<\xi,L-K_X\>.$ Then it is shown in the proof of \cite[Thm.~3.19]{GY} that
$\overline \Delta_\xi^X(L)\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+2}{\mathcal R}.$
On the other hand we note that $M^2=4(1+u\Lambda^2+\Lambda^4)\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathcal R}.$
Putting this together we get
$$\Delta_\xi^X(L,P^{r})\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+r+2}{\mathcal R}.$$
Now assume that $r=2l+1$ is odd. If $N$ is even, then by the condition that $\<L,\xi\>$ is odd, we get $(-1)^{\xi^2}=-(-1)^{N}$, and therefore
$$\overline \Delta_\xi^X(L,P^r)=q^{-\xi^2}M^ri^{\<\xi,K_X\>}\Lambda^2\cosh(Nh/2)h^*
\widetilde \theta_4^{(L-K_X)^2}\theta_4^{\sigma(X)} u'.$$
By \eqref{hstar} we get $h^*M=\frac{4i\Lambda}{\theta_2\theta_3}\in i\Lambda q^{-1}{\mathbb Q}[q^4]$.
Thus by Lemma \ref{qpow} we get $\cosh(Nh/2)h^*M\in i{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+1}{\mathcal R}.$
Using also that $\<\xi, K_X\>\equiv \xi^2\equiv 1\mod 2$, that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathcal R}$ and $\Lambda^2u'\in q^{-2}\Lambda^2{\mathcal R}$, we get
again $\overline \Delta_\xi^X(L,P^r)\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+r+2}{\mathcal R}.$
Finally, if $N$ is odd, a similar argument shows that
$$\overline \Delta_\xi^X(L,P^r)=q^{-\xi^2}M^ri^{\<\xi,K_X\>}\Lambda^2\sinh(Nh/2)h^*
\widetilde \theta_4^{(L-K_X)^2}\theta_4^{\sigma(X)} u'\in q^{-\xi^2}{\mathbb Q}[q^{-1}\Lambda]_{\le |N|+r+2}{\mathcal R}.$$
Therefore we have in all cases that
$\delta_{\xi,d}^X(L,P^r)= 0$ unless $-\xi^2-\min(d,2|N|+2r+4-d)\le 0$, i.e.
unless
$-\xi^2\le d\le \xi^2+2|N|+2r+4$.
In particular $\delta_{\xi}^X(L,P^r)=0$ unless
$-\xi^2\le\xi^2+2|N|+2r+4$, i.e. unless $-\xi^2\le |N|+r+2$.
\end{proof}
\begin{Remark}\label{c1d}
We note that this implies that for $\xi$ a class of type $(c_1)$,
$\delta_{\xi,d}^X(L)=0$ for all $L$ unless $\xi$ is a class of type $(c_1,d)$.
\end{Remark}
\section{Indefinite theta functions, vanishing, invariants with point class}
We want to study the $K$-theoretic Donaldson invariants
for polarizations on the boundary of the ample cone.
Let $F\in H^2(X,{\mathbb Z})$ be the class of an effective divisor with $F^2=0$ and such that $F$ is nef, i.e.
$\<F,C\>\ge 0$ for every effective curve $C$ in $X$. Then $F$ is a limit of ample classes.
Let $c_1\in H^2(X,{\mathbb Z})$ such that $\<c_1,F\>$ is odd.
Fix $d\in {\mathbb Z}$ with $d\equiv -c_1^2 \mod 4$. Let $\omega$ be ample on $X$. Then for $n>0$ sufficiently large $nF+ \omega$ is ample on $X$ and there is no wall $\xi$ of type $(c_1,d)$ with
$\<\xi, (nF+ \omega)\>>0> \<\xi, F\>$. Let $L\in \operatorname{Pic}(X)$ and $r\in{\mathbb Z}_{\ge 0}$ with $\<c_1,L\>$ even.
Thus we define for $n$ sufficiently large
\begin{align*}M_F^X(c_1,d)&:=M_{nF+\omega}^X(c_1,d), \\ \chi(M_F^X(c_1,d),\mu(L))&:=\chi(M_{nF+\omega}^X(c_1,d),\mu(L)),\\
\chi^{X,F}_{c_1}(L)&:=\sum_{d\ge 0} \chi(M_F^X(c_1,d),\mu(L))\Lambda^d.
\end{align*}
We use the following standard fact.
\begin{Remark}\label{vanbound}
Let $X$ be a simply connected algebraic surface, and let
$\pi:X\to {\mathbb P}^1$ be a morphism whose general fibre is isomorphic to
${\mathbb P}^1$. Let $F\in H^2(X,{\mathbb Z})$ be the class of a fibre. Then $F$ is nef. Assume that
$\<c_1,F\>$ is odd.
Then $M_F^X(c_1,d)=\emptyset$ for all $d$.
Thus $\chi(M_F^X(c_1,d),\mu(L))=0$ for all $d\ge 0$.
Thus if $\omega$ ample on $X$ and does not lie on a wall of type $(c_1)$, then
$$\chi^{X,\omega}_{c_1}(L)=\sum_{\<\omega,\xi\>>0>\<\xi,F\>} \delta_{\xi}^X(L),$$
where the sum is over all classes $\xi$ of type $(c_1)$ with
$\<\omega,\xi\>>0>\<\xi,F\>$.
\end{Remark}
\subsection{Theta functions for indefinite lattices}
We briefly review a few facts about theta functions for indefinite lattices of type $(r-1,1)$
introduced in \cite{GZ}. More can be found in \cite{GZ}, \cite{GY}. For us a {\it lattice} is a free ${\mathbb Z}$-module $\Gamma$ together with a quadratic form
$Q:\Gamma\to \frac{1}{2}{\mathbb Z}$, such that the associated bilinear form
$x\cdot y:=Q(x+y)-Q(x)-Q(y)$ is nondegenerate and ${\mathbb Z}$-valued.
We denote the extension of the quadratic and bilinear form to
$\Gamma_{\mathbb R}:=\Gamma\otimes_{{\mathbb Z}} {\mathbb R}$ and $\Gamma_{\mathbb C}:=\Gamma\otimes_{{\mathbb Z}} {\mathbb C}$ by the same letters.
We will consider the case that $\Gamma$ is $H^2(X,{\mathbb Z})$ for a rational surface $X$ with the {\it negative} of the intersection form. Thus for $\alpha,\beta\in H^2(X,{\mathbb Z})$ we have $Q(\alpha)=-\frac{\alpha^2}{2}$, $\alpha\cdot \beta=-\<\alpha,\beta\>$.
Now let $\Gamma$ be a lattice of rank $r$. Denote by $M_\Gamma$ the set of meromorphic
maps $f:\Gamma_{\mathbb C}\times {\mathcal H}\to {\mathbb C}$.
For $A=\left(\begin{matrix} a&b\\c&d\end{matrix}\right)\in Sl(2,{\mathbb Z})$,
we define a map
$|_kA:M_\Gamma\to M_\Gamma$ by
$$f|_{k}A(x,\tau):=(c\tau+d)^{-k}\exp\left(-2\pi i\frac{cQ(x)}{c\tau+d}\right)
f\left(\frac{x}{c\tau+d},\frac{a\tau+b}{c\tau+d}\right).$$
Then $|_kA$ defines an action of $Sl(2,{\mathbb Z})$ on $M_\Gamma$.
We denote
\begin{align*}S_\Gamma&:=\big\{ f\in \Gamma\bigm| f \hbox{ primitive}, Q(f)=0,\ f\cdot h <0\big\},\
C_\Gamma:=\big\{ m\in \Gamma_{\mathbb R}\bigm| Q(m)<0, \ m\cdot h<0\big\}.
\end{align*}
For $f\in S_\Gamma$ put
$D(f):=\big\{(\tau,x)\in {\mathcal H}\times \Gamma_{\mathbb C}\bigm| 0< \Im(f\cdot x)<\Im(\tau)/2\big\},$
and for $h\in C_\Gamma$ put $D(h)={\mathcal H}\times \Gamma_{\mathbb C}$.
For $t\in {\mathbb R}$ denote $$\mu(t):=\begin{cases} 1& t\ge 0, \\0 & t<0.\end{cases}$$
Let $c,b\in \Gamma$. Let $f,g\in S_\Gamma\cup C_\Gamma$.
Then for $(\tau,x)\in D(f)\cap D(g)$
define
$$\Theta^{f,g}_{\Gamma,c,b}(\tau,x):=\sum_{\xi\in \Gamma+c/2}(\mu(\xi\cdot f)-\mu(\xi\cdot g)) e^{2\pi i \tau Q(\xi)}e^{2\pi i \xi\cdot(x+b/2)}.$$
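Note that by the definition of $\mu$ the weight in this sum takes only the values $0$ and $\pm 1$; explicitly,
$$\mu(\xi\cdot f)-\mu(\xi\cdot g)=\begin{cases} 1,& \xi\cdot f\ge 0>\xi\cdot g,\\ -1,& \xi\cdot g\ge 0>\xi\cdot f,\\ 0,&\hbox{otherwise,}\end{cases}$$
so only the lattice vectors $\xi\in \Gamma+c/2$ separated by $f$ and $g$ in this sense contribute to $\Theta^{f,g}_{\Gamma,c,b}$.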
Let $T:=\left(\begin{matrix} 1 & 1\\0&1\end{matrix}\right)$,
$S:=\left(\begin{matrix} 0 & -1\\1&0\end{matrix}\right)\in Sl(2,{\mathbb Z})$.
\begin{Theorem}\label{thetaprop}
\begin{enumerate}
\item For $f,g\in S_\Gamma$
the function $\Theta^{f,g}_{\Gamma,c,b}(\tau,x)$ has a meromorphic continuation to
${\mathcal H}\times \Gamma_{\mathbb C}$.
\item For
$|\Im(f\cdot x)/\Im(\tau)|<1/2$ and $|\Im(g\cdot x)/\Im(\tau)|<1/2$ it has a Fourier development
\begin{align*}
&\Theta_{\Gamma,c,b}^{f,g}(x,\tau)=\frac{1}{1-e^{2\pi i f\cdot (x+b/2)}}
\sum_{\substack{\xi\cdot f=0\\ f\cdot g\le\xi\cdot g<0}}e^{2\pi i \tau Q(\xi)}e^{2\pi i \xi \cdot (x+b/2)}\\ &
-\frac{1}{1-e^{2\pi i g\cdot (x+b/2)}}\sum_{\substack{\xi\cdot g=0\\ f\cdot g \le \xi \cdot f<0 }}e^{2\pi i \tau Q(\xi)}e^{2\pi i \xi \cdot(x+b/2)}+
\sum_{\xi f>0>\xi g} e^{2\pi i \tau Q(\xi)}\big(e^{2\pi i \xi \cdot(x+b/2)}-
e^{-2\pi i \xi \cdot(x+b/2)}\big),
\end{align*}
where the sums are always over $\xi\in \Gamma+c/2$.
\item
\begin{equation*}
\label{thetajacobi}
\begin{split}
(\Theta_{\Gamma,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1S&=(-1)^{-b\cdot c/2} \Theta_{\Gamma,b,c}^{f,g}\theta_{3}^{\sigma(\Gamma)},\\
(\Theta_{\Gamma,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1T&=(-1)^{3Q(c)/2-c\cdot w/2} \Theta_{\Gamma,c,b-c+w}^{f,g}\theta_{4}^{\sigma(\Gamma)},\\
(\Theta_{\Gamma,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1T^2&=(-1)^{-Q(c)}
\Theta_{\Gamma,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)},\\
(\Theta_{\Gamma,c,b}^{f,g}\theta_{3}^{\sigma(\Gamma)})|_1T^{-1}S&=
(-1)^{-Q(c)/2-c\cdot b/2}\Theta_{\Gamma,w-c+b,c}^{f,g}\theta_{2}^{\sigma(\Gamma)},
\end{split}\end{equation*}
where $w$ is a characteristic element of $\Gamma$.
\end{enumerate}
\end{Theorem}
\begin{Remark}
For $f,\ g,\ h\in C_\Gamma\cup S_\Gamma$ we have the cocycle condition
$\Theta^{f,g}_{\Gamma,c,b}(\tau,x)+\Theta^{g,h}_{\Gamma,c,b}(\tau,x)=\Theta^{f,h}_{\Gamma,c,b}(\tau,x)$, which holds wherever all three terms are defined.
\end{Remark}
In the following let $X$ be a rational algebraic surface.
We can express the difference of the $K$-theoretic Donaldson invariants for two different polarisations in terms of these indefinite theta functions. Here we take $\Gamma$ to be $H^2(X,{\mathbb Z})$ with the {\it negative} of the intersection form, and we choose $K_X$ as the characteristic element in \thmref{thetaprop}(3).
\begin{Definition}
Let $F,G\in S_\Gamma\cup C_\Gamma$, let $c_1\in H^2(X,{\mathbb Z})$.
We put
\begin{align*}
\Psi^{F,G}_{X,c_1}(L;\Lambda,\tau)&:=
\Theta^{F,G}_{X,c_1,K_X}\Big(\frac{(L-K_X)h}{2\pi i},\tau\Big) \Lambda^2
\widetilde\theta_4(h)^{(L-K_X)^2}\theta_4^{\sigma(X)}u'h^*.
\end{align*}
\end{Definition}
\begin{Lemma}\label{thetawall}
Let $H_1,H_2$ be ample on $X$ with $\<H_1,K_X\><0$ and $\<H_2,K_X\><0$, and assume that they do not lie on a wall of type $(c_1)$.
Then
\begin{enumerate}
\item $$\Psi^{H_2,H_1}_{X,c_1}(L;\Lambda,\tau)M^r=\sum_\xi \overline \Delta^X_\xi(L,P^r),$$ where $\xi$ runs through all classes on $X$ of type $(c_1)$ with
$\<H_2,\xi\> >0> \<H_1, \xi \>$.
\item
$\chi^{X,H_2}_{c_1}(L)-\chi^{X,H_1}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0} \big[\Psi^{H_2,H_1}_{X,c_1}(L;\Lambda,\tau)\big].
$
\end{enumerate}
\end{Lemma}
\begin{proof} (2) is proven in \cite[Cor.~4.6]{GY}, where the assumption is made that $-K_X$ is ample, but the proof only uses $\<H_1,K_X\><0$ and $\<H_2,K_X\><0$, because
this condition is sufficient for \thmref{wallcr}. The argument of \cite[Cor.~4.6]{GY} actually shows (1) in case $r=0$, but as $\overline \Delta^X_\xi(L,P^r)=\overline \Delta^X_\xi(L,P^0)M^r$, the case of
general $r$ follows immediately.
\end{proof}
Following \cite{GY} we use \lemref{thetawall} to extend the generating function $\chi^{X,\omega}_{c_1}(L)$ to
$\omega\in S_X\cup C_X$.
\begin{Definition}\label{chiext}
Let $\eta$ be ample on $X$ with $\<\eta, K_X\><0$, and not on a wall of type $(c_1)$. Let $\omega\in S_X\cup C_X$. We put
$$\chi^{X,\omega}_{c_1}(L):=\chi^{X,\eta}_{c_1}(L)+\mathop{\text{\rm Coeff}}_{q^0} \big[\Psi^{\omega,\eta}_{X,c_1}(L;\Lambda,\tau)\big].$$
By the cocycle condition the definition of $\chi^{X,\omega}_{c_1}(L)$ is independent of the choice of $\eta$.
Furthermore, by \lemref{thetawall} this coincides with the previous definition in case $\omega$ is also ample, $\<\omega,K_X\><0$ and $\omega$ does not lie on a wall of type $(c_1)$. However, if $\<\omega,K_X\>\ge 0$, it may well happen that the coefficient of $\Lambda^d$ of $\chi^{X,\omega}_{c_1}(L)$ is different
from $\chi(M^X_\omega(c_1,d),\mu(L))$.
\end{Definition}
\begin{Remark}\label{difftheta}
Now let $H_1,H_2\in S_X\cup C_X$.
By the cocycle condition, we have
\begin{align*}
\chi^{X,H_2}_{c_1}(L)-\chi^{X,H_1}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0} \big[\Psi^{H_2,H_1}_{X,c_1}(L;\Lambda,\tau)\big].
\end{align*}
\end{Remark}
\begin{Proposition} \label{blowgen}
Let $X$ be a rational surface. Let $\omega\in C_X\cup S_X$. Let $c_1\in H^2(X,{\mathbb Z})$. Let $\widehat X$ be the blowup of $X$ in a general point, and $E$ the exceptional divisor.
Let $L\in \operatorname{Pic}(X)$ with $\<L, c_1\>$ even. Then
\begin{enumerate}
\item $\chi^{\widehat X,\omega}_{c_1}(L)=\chi^{X,\omega}_{c_1}(L)$,
\item $\chi^{\widehat X,\omega}_{c_1+E}(L)=\Lambda \chi^{X,\omega}_{c_1}(L)$.
\end{enumerate}
\end{Proposition}
\begin{proof}This is \cite[Prop.~4.9]{GY}, where the additional assumption is made that $-K_{\widehat X}$ is ample. The proof works without this assumption with very minor modifications.
In the original proof the result is first proven for an $H_0\in C_X$ which does not lie on any wall of type $(c_1)$. We now have to assume in addition that $\<H_0,K_X\><0$.
The rest of the proof is unchanged.
\end{proof}
In \cite[Thm.~4.21]{GY} it is shown that if $X$ is a rational surface with $-K_X$ ample, then $\chi^{X,F}_{c_1}(L)=\chi^{X,G}_{c_1}(L)$ for all $F,G\in S_X$.
A modification of this proof shows the following.
\begin{Proposition}\label{basic}
Let $X$ be ${\mathbb P}^1\times{\mathbb P}^1$ or a blowup of ${\mathbb P}^2$ in finitely many points. Let $L\in \operatorname{Pic}(X)$, let $c_1\in H^2(X,{\mathbb Z})$ with $\<c_1,L\>$ even. Let $F,G\in S_X$.
Assume that for all $W\in K_X+2H^2(X,{\mathbb Z})$ with $\<F,W\>\le 0\le \<G,W\>$, we have
$W^2<K_X^2$. Then
$\chi^{X,F}_{c_1}(L)=\chi^{X,G}_{c_1}(L)$.
\end{Proposition}
\begin{proof}
We know that $\chi^{X,F}_{c_1}(L)-\chi^{X,G}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{F,G}_{X,c_1}(L,\Lambda,\tau)\big]$, and
in the proof of \cite[Thm.~4.21]{GY} it is shown that
$$\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{F,G}_{X,c_1}(L,\Lambda,\tau)\big]=-\frac{1}{4}\mathop{\text{\rm Coeff}}_{q^0}\big[\tau^{-2}\Psi^{F,G}_{X,c_1}(L,\Lambda,S\tau)\big]
-\frac{1}{4}i^{c_1^2+3}\mathop{\text{\rm Coeff}}_{q^0}\big[\tau^{-2}\Psi^{F,G}_{X,c_1}(L,i\Lambda,S\tau)\big].$$
Therefore it is enough to show that
$\mathop{\text{\rm Coeff}}_{q^0}\big[\tau^{-2}\Psi^{F,G}_{X,c_1}(L,\Lambda,S\tau)\big]=0$.
Furthermore in the proof of \cite[Thm.~4.21]{GY} we have seen that the three functions
$\widetilde u:=-\frac{\theta_3^4+\theta_4^4}{\theta_3^2\theta_4^2}$,
$$
\widetilde h=-\frac{2}{\theta_{4}\theta_{3}}
\sum_{\substack{n\ge 0\\ n\ge k\ge0}}
\binom{-\frac{1}{2}}{n}\binom{n}{k}\frac{\widetilde u^k\Lambda^{4n-2k+1}}{4n-2k+1},\quad
\widetilde G(\Lambda,\tau)=
\frac{(-1)^{\<c_1,K_X\>/2-\sigma(X)/4}\Lambda^3}{\theta_3^3\theta_4^3(1+\widetilde{u}\Lambda^2+\Lambda^4)}
\widetilde\theta_{2}(\widetilde h)^{(L-K_X)^2}
$$
are regular at $q=0$, and furthermore that we can write
$$\tau^{-2}\Psi^{F,G}_{X,c_1}(L,\Lambda,S\tau)=\Theta_{X,K_X,c_1}^{F,G}\left(\frac{(L-K_X)\widetilde h}{2\pi i }, \tau\right)\theta_{2}^{K_X^2}
\widetilde G(\Lambda,\tau).$$
(Note that $\sigma(X)+8=K_X^2$, to compare with the formulas in the proof of \cite[Thm.~4.19]{GY}.)
As $\theta_{2}^{K_X^2}$ starts with $q^{K_X^2}$, specializing the formula of \thmref{thetaprop}(2) to the case
$c=K_X$, $b=c_1$, $F=f$, $G=g$, we see that all the summands in $\Theta_{X,K_X,c_1}^{F,G}\left(\frac{(L-K_X)\widetilde h}{2\pi i }, \tau\right)$
are of the form $q^{-W^2}J_W(\Lambda,\tau)$, where $J_W(\Lambda,\tau)$ is regular at $q=0$ and
$W\in K_X+2H^2(X,{\mathbb Z})$ with $\<F,W\>\le 0\le \<G,W\>$.
The claim follows.
\end{proof}
\begin{Corollary}\label{strucdiff}
Let $X={\mathbb P}^1\times{\mathbb P}^1$, or let $X$ be the blowup of ${\mathbb P}^2$ in finitely many general points $p_1,\ldots,p_n$ with exceptional divisors $E_1,\ldots,E_n$. In case $X={\mathbb P}^1\times{\mathbb P}^1$ let
$F$ be the class of a fibre of the projection to one of the two factors; otherwise let $F=H-E_i$ for some $i\in \{1,\ldots,n\}$.
Let $c_1\in H^2(X,{\mathbb Z})$ and let $L$ be a line bundle on $X$ with $\<L,c_1\>$ even.
Then
\begin{enumerate}
\item $\chi^{X,F}_{c_1}(L)=0.$
\item Thus for all $\omega\in S_X\cup C_X$ we have
$$\chi^{X,\omega}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L;\Lambda,\tau)\big].$$
\end{enumerate}
\end{Corollary}
\begin{proof}
(1) Let $\widehat X$ be the blowup of $X$ in a general point with exceptional divisor $E$. Then $\widehat X$ is the blowup of ${\mathbb P}^2$ in $n+1$ general points (with $n=1$ in case $X={\mathbb P}^1\times{\mathbb P}^1$). We denote by $E_1,\ldots,E_{n+1}$ the exceptional divisors; then we can assume that $F=H-E_1$. We put $G=H-E_{n+1}$. If $\<c_1, H\>$ is even, we put $\widehat c_1=c_1+E_{n+1}$, and
if $\<c_1, H\>$ is odd, we put $\widehat c_1=c_1$. Thus $\<\widehat c_1, G\>$ is odd and therefore by \remref{vanbound} we get
$\chi^{\widehat X, G}_{\widehat c_1}(L)=0$.
By \propref{blowgen} we have $\chi^{X,F}_{c_1}(L)=\chi^{\widehat X,F}_{\widehat c_1}(L)$ or
$\chi^{X,F}_{c_1}(L)=\frac{1}{\Lambda}\chi^{\widehat X,F}_{\widehat c_1}(L)$.
Therefore it is enough to show that $\chi^{\widehat X,F}_{\widehat c_1}(L)=\chi^{\widehat X,G}_{\widehat c_1}(L)$.
So by \propref{basic} we need to show that for all $W\in K_{\widehat X}+2H^2(\widehat X,{\mathbb Z})$ with $\<F,W\>\le 0\le \<G,W\>$, we have
$W^2<K_{\widehat X}^2$.
Let $W=kH+a_1E_1+\ldots +a_{n+1}E_{n+1}\in K_{\widehat X}+2H^2(\widehat X,{\mathbb Z})$ with $\<F,W\>\le 0\le \<G,W\>$.
Then $k,a_1,\ldots,a_{n+1}$ are odd integers, the condition
$\<F,W\>\le 0$ gives that $a_1\le -k$, and the condition $\<G,W\>\ge 0$ gives that $a_{n+1}\ge -k$.
So either $k<0$ and $|a_{n+1}|\ge |k|$, or $k>0$ and $|a_1|\ge |k|$.
As all the $a_i$ are odd, this gives
$$W^2=k^2-a_1^2-\ldots-a_{n+1}^2\le -n<8-n=K_{\widehat X}^2.$$
\end{proof}
\subsection{Invariants with point class}
We can now define $K$-theoretic Donaldson invariants with powers of the point class.
\begin{Corollary}
Let $X$ be the blowup of ${\mathbb P}^2$ in general points $p_1,\ldots,p_r$, with exceptional divisors $E_1,\ldots, E_r$.
Let $\overline X$ be the blowup of ${\mathbb P}^2$ in general points $q_1,\ldots,q_r$, with exceptional divisors $\overline E_1,\ldots, \overline E_r$.
For a class $M=dH+a_1E_1+\ldots+a_rE_r\in H^2(X,{\mathbb R})$ let $\overline M:=dH+a_1\overline E_1+\ldots+a_r\overline E_r\in H^2(\overline X,{\mathbb R})$.
Then for all $L\in \operatorname{Pic}(X)$, $c_1\in H^2(X,{\mathbb Z})$ with $\<L,c_1\>$ even, $\omega\in C_X\cup S_X$, we have
$\chi^{X,\omega}_{c_1}(L)=\chi^{\overline X,\overline \omega}_{\overline c_1}(\overline L)$.
\end{Corollary}
\begin{proof}
Let $F=H-E_1\in S_X$, then $\overline F=H-\overline E_1\in S_{\overline X}$, and thus
$\chi^{X,F}_{c_1}(L)=0=\chi^{\overline X,\overline F}_{\overline c_1}(\overline L)$.
The map sending $E_i$ to $\overline E_i$ for all $i$ is an isomorphism of lattices, thus
$\Psi^{\omega,F}_{X,c_1}(L;\Lambda,\tau)=\Psi^{\overline \omega,\overline F}_{\overline X,\overline c_1}(\overline L;\Lambda,\tau)$.
Thus we get by \corref{strucdiff} that
$$\chi^{X,\omega}_{c_1}(L)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L;\Lambda,\tau)\big]=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\overline \omega,\overline F}_{\overline X,\overline c_1}(\overline L;\Lambda,\tau)\big]
=\chi^{\overline X,\overline \omega}_{\overline c_1}(\overline L).$$
\end{proof}
\begin{Definition}
Let $X$ be ${\mathbb P}^1\times{\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many general points. Let $\omega\in S_X\cup C_X$, $c_1\in H^2(X,{\mathbb Z})$, $L\in \operatorname{Pic}(X)$.
Let $X_r$ be the blowup of $X$ in $r$ general points, with exceptional divisors $E_1,\ldots,E_r$. Write $E:=E_1+\ldots+E_r$.
We put
$$\chi^{X,\omega}_{c_1}(L,P^r):=\Lambda^{-r}\chi^{X_r,\omega}_{c_1+E}(L-E),
\quad
\chi^{X,\omega}_{c_1,d}(L,P^r):=\mathop{\text{\rm Coeff}}_{\Lambda^d}\big[\chi^{X,\omega}_{c_1}(L,P^r)\big].$$
We call the $\chi^{X,\omega}_{c_1,d}(L,P^r),\ \chi^{X,\omega}_{c_1}(L,P^r)$ the $K$-theoretic Donaldson invariants with point class.
More generally, if $F(\Lambda,P)=\sum_{i,j} a_{i,j} \Lambda^iP^j\in {\mathbb Q}[\Lambda,P]$ is a polynomial, we put
$$\chi^{X,\omega}_{c_1}(L,F(\Lambda,P)):=\sum_{i,j}a_{i,j} \Lambda^i \chi^{X,\omega}_{c_1}(L,P^j).$$
\end{Definition}
\begin{Remark}
There should be a $K$-theory class ${\mathcal P}$ on $M^X_\omega(c_1,d)$, such that
$\chi^{X,\omega}_{c_1,d}(L,P^r)=\chi(M^X_\omega(c_1,d),\mu(L)\otimes {\mathcal P}^r)$.
By the definition $\chi^{X,\omega}_{c_1}(L,P^r)=\Lambda^{-r}\chi^{X_r,\omega}_{c_1+E}(L-E)$, the sheaf ${\mathcal P}$ would encode local information at the blown-up points.
We could view $\mathcal P$ as a $K$-theoretic analogue of the point class in Donaldson theory.
This is our motivation for the name of $\chi^{X,\omega}_{c_1}(L,P^r)$.
For the moment we do not attempt to give a definition of this class ${\mathcal P}$.
There are already speculations about possible definitions of $K$-theoretic Donaldson invariants with powers of the point class in \cite[Sect.~1.3]{GNY}, and the introduction
of $\chi^{X,\omega}_{c_1}(L,P^r)$ is motivated by that, but for the moment we do not
try to make a connection to the approach in \cite{GNY}.
\end{Remark}
\section{Blowup polynomials, blowup formulas and blowdown formulas}
In \cite[Section 4.6]{GY} the blowup polynomials
$R_n({\mathfrak \lambda},x)$, $S_n({\mathfrak \lambda},x)$ are introduced. They play a central role in our approach. In this section we will first show that they express all the
$K$-theoretic Donaldson invariants of the blowup $\widehat X$ of a surface $X$ in terms of the $K$-theoretic Donaldson invariants of $X$.
On the other hand we will use them to show that a small part of the $K$-theoretic Donaldson invariants of the blowup $\widehat X$ determine {\it all} the
$K$-theoretic Donaldson invariants of $X$ (and thus by the above all the $K$-theoretic Donaldson invariants of any blowup of $X$, including $\widehat X$).
Finally (as already in \cite{GY} in some cases), in the next section, we will use the blowup polynomials to construct recursion relations for many $K$-theoretic Donaldson invariants of rational ruled surfaces, enough to apply the above-mentioned results and determine all $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ and thus of any blowup of ${\mathbb P}^2$.
\subsection{Blowup Polynomials and blowup formulas}
\begin{Definition}\label{blowpol}
Define for all $n\in {\mathbb Z}$ rational functions $R_n$, $S_n\in {\mathbb Q}({\mathfrak \lambda},x)$ by
$R_0=R_1=1,$
$S_1={\mathfrak \lambda},\ S_2={\mathfrak \lambda} x,$
the recursion relations
\begin{align}
\label{recur} R_{n+1}&=\frac{R_{n}^2-{\mathfrak \lambda}^2 S_n^2}{R_{n-1}},\quad n\ge 1;\qquad
S_{n+1}=\frac{S_{n}^2-{\mathfrak \lambda}^2 R_{n}^2}{S_{n-1}},\quad n\ge 2.
\end{align}
and $R_{-n}=R_{n}$, $S_{-n}=-S_{n}$.
We will prove later that the $R_n$,
$S_n$
are indeed polynomials in ${\mathfrak \lambda},x$.
\end{Definition}
The definition gives
\begin{align*}
R_1&=1,\ R_2=(1-{\mathfrak \lambda}^4), R_3=-{\mathfrak \lambda}^4 x^2 + (1-{\mathfrak \lambda}^4)^2,\ R_4=-{\mathfrak \lambda}^4x^4+(1-{\mathfrak \lambda}^4)^4,\\
R_5&= -{\mathfrak \lambda}^4x^2\big(x^4 +(2-{\mathfrak \lambda}^4)(1-{\mathfrak \lambda}^4)^2x^2 +3(1-{\mathfrak \lambda}^4)^3\big)+(1-{\mathfrak \lambda}^4)^6,\\
S_1&={\mathfrak \lambda},\ S_2={\mathfrak \lambda} x,\ S_3={\mathfrak \lambda}(x^2-(1-{\mathfrak \lambda}^4)^2),\ S_4={\mathfrak \lambda} x\big((1-{\mathfrak \lambda}^8)x^2-2(1-{\mathfrak \lambda}^4)^3\big).
\end{align*}
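For instance, the case $n=2$ of the recursion relations \eqref{recur} recovers $R_3$ and $S_3$ from the initial values:
\begin{align*}
R_3&=\frac{R_2^2-{\mathfrak \lambda}^2S_2^2}{R_1}=(1-{\mathfrak \lambda}^4)^2-{\mathfrak \lambda}^4x^2,\\
S_3&=\frac{S_2^2-{\mathfrak \lambda}^2R_2^2}{S_1}=\frac{{\mathfrak \lambda}^2x^2-{\mathfrak \lambda}^2(1-{\mathfrak \lambda}^4)^2}{{\mathfrak \lambda}}={\mathfrak \lambda}\big(x^2-(1-{\mathfrak \lambda}^4)^2\big),
\end{align*}
in agreement with the list above.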
\begin{Proposition}\label{rnpropo}(\cite[Prop.~4.7]{GY})
\begin{equation}\label{ThRn}
\frac{\widetilde \theta_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}=R_n(\Lambda,M),\qquad
\frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(h)^{n^2}}=S_n(\Lambda,M).
\end{equation}
\end{Proposition}
In the following \propref{blowpsi} and \corref{blowp} let $X={\mathbb P}^1\times{\mathbb P}^1$, or let $X$ be the blowup of ${\mathbb P}^2$ in finitely many general points $p_1,\ldots,p_n$ with exceptional divisors $E_1,\ldots,E_n$. In case $X={\mathbb P}^1\times{\mathbb P}^1$ let
$F$ be the class of a fibre of the projection to one of the two factors; otherwise let $F=H-E_i$ for some $i\in \{1,\ldots,n\}$.
\begin{Proposition}\label{blowpsi}
Let $c_1\in H^2(X,{\mathbb Z})$ and let $L$ be a line bundle on $X$ with $\<c_1, L\>$ even.
Let $\widehat X$ be the blowup of $X$ in a point, and let $E$ be the exceptional divisor.
Let $\omega\in C_X\cup S_X$. Then
\begin{enumerate}
\item $\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)R_{n}(\Lambda, M)\big]$.
\item $\chi^{\widehat X,\omega}_{c_1+E}(L-(n-1)E)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)S_{n}(\Lambda, M)\big]$.
\end{enumerate}
\end{Proposition}
\begin{proof}\label{highblow} In \cite[Prop.~4.34]{GY} it is proven for $X={\mathbb P}^1\times {\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in at most 7 points, and any $F,\omega\in C_X\cup S_X$ that
\begin{align*}\Psi^{\omega,F}_{\widehat X,c_1}(L-(n-1)E;\Lambda,\tau)&=\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)R_{n}(\Lambda, M)\\
\Psi^{\omega,F}_{\widehat X,c_1+E}(L-(n-1)E;\Lambda,\tau)&=\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)S_{n}(\Lambda, M).
\end{align*}
But the proof works without modification also for $X$ the blowup of ${\mathbb P}^2$ in finitely many points.
The result follows by \corref{strucdiff}.\end{proof}
We now see that the wall-crossing for the $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L,P^r)$ with point class is given by the
wallcrossing terms $\delta^{X}_{\xi}(L,P^r)$.
\begin{Corollary} \label{blowp}
\begin{enumerate}
\item Let $r\ge 0$ and let $L$ be a line bundle on $X$ with $\<L,c_1\>\equiv r\mod 2$.
Then $$\chi^{X,\omega}_{c_1}(L,P^r)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)M^r\big].$$
\item Let $H_1,H_2\in C_X$, not on a wall of type $(c_1)$. Then
\begin{align*}
\chi^{X,H_2}_{c_1}(L,P^r)-\chi^{X,H_1}_{c_1}(L,P^r)&=\sum_{\xi} \delta^X_{\xi}(L,P^r),\\
\chi^{X,H_2}_{c_1,d}(L,P^r)-\chi^{X,H_1}_{c_1,d}(L,P^r)&=\sum_{\xi} \delta^X_{\xi,d}(L,P^r),
\end{align*}
where in the first (resp.~second) sum $\xi$ runs through all classes of type $(c_1)$ (resp.~$(c_1,d)$) with $\<H_1,\xi\><0<\<H_2,\xi\>$.
\end{enumerate}
\end{Corollary}
\begin{proof} (1)
Let $\widetilde X$ be the blowup of $X$ in $r$ general points and let $\overline E=\overline E_1+\ldots+\overline E_r$ be the sum of the
exceptional divisors. Then by definition and iteration of \propref{blowpsi} we have
$$\chi^{X,\omega}_{c_1}(L,P^r)=\frac{1}{\Lambda^r}\chi^{\widetilde X,\omega}_{c_1+\overline E}(L-\overline E)=\frac{1}{\Lambda^r}
\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,F}_{X,c_1}(L,\Lambda,\tau)S_{2}(\Lambda, M)^r\big].$$
The claim follows because $S_{2}(\Lambda, M)=\Lambda M$.
(2) By definition $\mathop{\text{\rm Coeff}}_{q^0}\big[\overline \Delta^X_{\xi}(L,P^r)\big]=\delta^X_{\xi}(L,P^r)$; therefore (2) follows from (1) by \lemref{thetawall}, using also \remref{c1d}.
\end{proof}
With this we get a general blowup formula.
\begin{Theorem}\label{nblow}
Let $X$ be ${\mathbb P}^2$, ${\mathbb P}^1\times{\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many general points.
Let $c_1\in H^2(X,{\mathbb Z})$ and let $L$ be a line bundle on $X$ and let $r\in {\mathbb Z}_{\ge 0}$ with $\<c_1, L\>\equiv r\mod 2$.
Let $\omega\in C_X\cup S_X$.
Let $\widehat X$ be the blowup of $X$ in a general point with exceptional divisor $E$.
Then
\begin{enumerate}
\item $\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r)=\chi^{X,\omega}_{c_1}(L,P^r \cdot R_n(\Lambda,P))$,
\item $\chi^{\widehat X,\omega}_{c_1+E}(L-(n-1)E,P^r)=\chi^{X,\omega}_{c_1}(L,P^r \cdot S_n(\Lambda,P))$.
\end{enumerate}
\end{Theorem}
\begin{proof}
If $X={\mathbb P}^2$, then we apply \propref{blowgen} to reduce to the case that $X$ is the blowup of ${\mathbb P}^2$ in a point.
Thus, by \corref{strucdiff} and the definition of $\chi^{X,G}_{c_1}(L,P^s)$, we can assume that there is a $G\in S_X$ with $\chi^{X,G}_{c_1}(L,P^s)=0$ for all $s\ge 0$.
(1) Let $\widetilde X$ be the blowup of $X$ in $r$ general points, with exceptional divisors $F_1,\ldots,F_r$ and put
$F:=F_1+\ldots+F_r$, and let $\overline X$ be the blowup of $\widetilde X$ in a point with exceptional divisor $E$.
Then by definition
$$\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r)=\chi^{\overline X,\omega}_{c_1+F}(L-F-(n-1)E).$$
We get by \corref{strucdiff} that
$$\chi^{\overline X,\omega}_{c_1+F}(L-F-(n-1)E)=\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,G}_{\widetilde X,c_1+F}(L-F,\Lambda,\tau)R_{n}(\Lambda, M)\big].$$
On the other hand, by \corref{blowp} we get
$$\mathop{\text{\rm Coeff}}_{q^0}\big[\Psi^{\omega,G}_{\widetilde X,c_1+F}(L-F,\Lambda,\tau)R_{n}(\Lambda, M)\big]=\chi^{\widetilde X,\omega}_{c_1+F}(L-F,R_{n}(\Lambda, P))=\chi^{X,\omega}_{c_1}(L,P^r\cdot R_n(\Lambda,P)).$$
The proof of (2) is similar.
\end{proof}
\subsection{Further properties of the blowup polynomials}
\begin{Proposition}\label{blowpolprop}
\begin{enumerate}
\item For all $n\in {\mathbb Z}$, we have $R_n
\in {\mathbb Z}[{\mathfrak \lambda}^4,x^2]$, $S_{2n+1}\in {\mathfrak \lambda}{\mathbb Z}[{\mathfrak \lambda}^4,x^2]$, $S_{2n}\in {\mathfrak \lambda} x{\mathbb Z}[{\mathfrak \lambda}^4,x^2]$.
\item $R_n(\lambda,-x)=R_n(\lambda,x)$ and $S_n(\lambda,-x)=(-1)^{n-1} S_n(\lambda,x)$.
\item The $R_n$, $S_n$ satisfy the symmetries
\begin{align*}
R_{2n}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)&=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n)^2}}R_{2n}({\mathfrak \lambda},x),\quad S_{2n}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)=\frac{(-1)^{n-1}}{{\mathfrak \lambda}^{(2n)^2}}S_{2n}({\mathfrak \lambda},x),\\
R_{2n+1}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)&=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}}
S_{2n+1}({\mathfrak \lambda},x),\quad S_{2n+1}\Big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\Big)=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}}
R_{2n+1}({\mathfrak \lambda},x).
\end{align*}
\item For all $k,n\in {\mathbb Z}$, we have the relations
\begin{equation}\label{r2nRn}
\begin{split}
R_{2n}&=R_n^4-S_n^4, \quad S_{2n}=\frac{1}{{\mathfrak \lambda}}R_nS_n(S_{n+1}R_{n-1}-R_{n+1}S_{n-1}).
\end{split}
\end{equation}
\end{enumerate}
\end{Proposition}
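For example, for $n=2$ the relations \eqref{r2nRn} can be checked directly against the explicit polynomials above:
\begin{align*}
\frac{1}{{\mathfrak \lambda}}R_2S_2(S_{3}R_{1}-R_{3}S_{1})&=\frac{1}{{\mathfrak \lambda}}(1-{\mathfrak \lambda}^4)\,{\mathfrak \lambda} x\cdot {\mathfrak \lambda}\big((1+{\mathfrak \lambda}^4)x^2-2(1-{\mathfrak \lambda}^4)^2\big)\\
&={\mathfrak \lambda} x\big((1-{\mathfrak \lambda}^8)x^2-2(1-{\mathfrak \lambda}^4)^3\big)=S_4,
\end{align*}
and similarly $R_2^4-S_2^4=(1-{\mathfrak \lambda}^4)^4-{\mathfrak \lambda}^4x^4=R_4$.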
\begin{proof}
We write
$$\widetilde R_n(h):=\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}=R_n(\Lambda,M),\quad \widetilde S_n(h):= \frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(h)^{n^2}}=S_n(\Lambda,M),$$
where we have used \eqref{ThRn}.
It is easy to see that $\Lambda$ and $M$ are algebraically independent, i.e. there exists no polynomial $f\in {\mathbb Q}[\lambda,x]\setminus \{0\}$, such that $f(\Lambda,M)=0$ as
a function on ${\mathcal H}\times {\mathbb C}$. For this, note that by $M^2=4(1+u\Lambda^2+\Lambda^4)$, the algebraic independence of $\Lambda$ and $M$ is equivalent to that of $\Lambda$ and $u$.
But this is clear, because as a Laurent series in $q,y$, $u$ is a Laurent series in $q$ alone, starting with $-\frac{1}{4}q^{-2}$, while $\Lambda$ depends on $y$ in a nontrivial way.
As $M$ and $\Lambda$ are algebraically independent, $R_n$, $S_n$ are the unique rational functions satisfying \eqref{ThRn}.
Now we will show (4).
For any $k\in {\mathbb Z}$ we also have
\begin{equation}\label{RSkn}
\begin{split}
\widetilde R_{kn}(h)&= \frac{{\widetilde \theta}_4(knh)}{{\widetilde \theta}_4(h)^{k^2 n^2}}=\frac{{\widetilde \theta}_4(knh)}{{\widetilde \theta}_4(nh)^{k^2}}\Big(\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}\Big)^{k^2}=
\widetilde R_k(nh) \widetilde R_n(h)^{k^2},\\
\widetilde S_{kn}(h)&=\frac{{\widetilde \theta}_1(knh)}{{\widetilde \theta}_4(h)^{k^2n^2}}=\frac{{\widetilde \theta}_1(knh)}{{\widetilde \theta}_4(nh)^{k^2}}\Big(\frac{{\widetilde \theta}_4(nh)}{{\widetilde \theta}_4(h)^{n^2}}\Big)^{k^2}=\widetilde S_k(nh)\widetilde R_n(h)^{k^2}.
\end{split}
\end{equation}
Thus, using $\widetilde R_2(h)=1-\Lambda^4$, we find in particular
$$\widetilde R_{2n}(h)=\widetilde R_2(nh) \widetilde R_n(h)^{4}=
\left(1-\left(\frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(nh)}\right)^4\right)\widetilde R_n(h)^4=
\widetilde R_n(h)^4-\widetilde S_n(h)^4;$$
i.e., using the algebraic independence of $\Lambda$ and $M$, $R_{2n}=R_n^4- S_n^4$.
In the same way we have
$$\widetilde S_{2n}(h)=\widetilde S_2(nh)\widetilde R_n(h)^{4}=\Lambda(nh)M(nh)\widetilde R_n(h)^{4}.$$
By definition $\Lambda(nh)=\frac{{\widetilde \theta}_1(nh)}{{\widetilde \theta}_4(nh)}=\frac{\widetilde S_n(h)}{\widetilde R_n(h)}$.
Now take the difference of the two formulas (see \cite[\S 2.1 Ex.~3]{WW})
$$\theta_1(y\pm z)\theta_4(y\mp z)\theta_2\theta_3=\theta_1(y)\theta_4(y)\theta_2(z)\theta_3(z)\pm
\theta_2(y)\theta_3(y)\theta_1(z)\theta_4(z)$$ with $y=nh$, $z=h$
to get
$$\theta_1((n+1)h)\theta_4((n-1)h)\theta_2\theta_3-\theta_1((n-1)h)\theta_4((n+1)h)\theta_2\theta_3=2\theta_2(nh)\theta_3(nh)\theta_1(h)\theta_4(h).$$
This gives
\begin{align*}
M(nh)&=2\frac{\theta_2(nh)\theta_3(nh)}{\theta_2\theta_3\theta_4(nh)^2}=
\frac{\theta_1((n+1)h)\theta_4((n-1)h)-\theta_1((n-1)h)\theta_4((n+1)h)}{\theta_1(h)\theta_4(h)
\theta_4(nh)^2}\\&=\frac{1}{\Lambda}\frac{\widetilde S_{n+1}(h)\widetilde R_{n-1}(h)-\widetilde S_{n-1}(h)\widetilde R_{n+1}(h)}{\widetilde R_n(h)^2}.
\end{align*}
Thus
$S_{2n}=\frac{1}{\lambda}S_nR_n(S_{n+1}R_{n-1}-S_{n-1}R_{n+1}).$
This shows (4).
(1) Next we will show that
$R_n\in {\mathbb Z}[\lambda^4,x]$ and $S_n \in \lambda{\mathbb Z}[\lambda^4,x]$ for all $n\in {\mathbb Z}$.
By symmetry it is enough to show this if $n$ is a nonnegative integer.
We know that this is true for $0\le n\le 4$.
Now assume that $m\ge 2$, and that we know
the statement for all $0\le n\le 2m$.
Therefore $R_{m+1}\in {\mathbb Z}[\lambda^4,x], S_{m+1}\in \lambda{\mathbb Z}[\lambda^4,x]$, and the formulas \eqref{r2nRn} give that
$R_{2m+2}\in {\mathbb Z}[\lambda^4,x]$, $S_{2m+2}\in \lambda{\mathbb Z}[\lambda^4,x]$.
The relations \eqref{recur} say that
\begin{align*}
R_{2m+2}R_{2m}&=R_{2m+1}^2-{\mathfrak \lambda}^2S_{2m+1}^2,\quad
S_{2m+2}S_{2m}=S_{2m+1}^2-{\mathfrak \lambda}^2 R_{2m+1}^2,
\end{align*}
and thus
\begin{equation}
\label{r2n1}
\begin{split}
(1-{\mathfrak \lambda}^4)R_{2m+1}^2&=R_{2m+2}R_{2m}+{\mathfrak \lambda}^2 S_{2m+2}S_{2m},\\
(1-{\mathfrak \lambda}^4)S_{2m+1}^2&=S_{2m+2}S_{2m}+{\mathfrak \lambda}^2R_{2m+2}R_{2m}.
\end{split}
\end{equation}
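Explicitly, the first identity in \eqref{r2n1} is obtained by adding ${\mathfrak \lambda}^2$ times the second of the preceding relations to the first:
\begin{align*}
R_{2m+2}R_{2m}+{\mathfrak \lambda}^2S_{2m+2}S_{2m}&=\big(R_{2m+1}^2-{\mathfrak \lambda}^2S_{2m+1}^2\big)+{\mathfrak \lambda}^2\big(S_{2m+1}^2-{\mathfrak \lambda}^2R_{2m+1}^2\big)=(1-{\mathfrak \lambda}^4)R_{2m+1}^2,
\end{align*}
and the second identity follows in the same way.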
Thus we get $(1-\lambda^4)R_{2m+1}^2\in {\mathbb Z}[\lambda^4,x]$ and
$(1-\lambda^4)S_{2m+1}^2\in \lambda^2{\mathbb Z}[\lambda^4,x]$. Therefore, as $1-{\mathfrak \lambda}^4$ is squarefree in ${\mathbb Q}[\lambda,x]$, we also have
$R_{2m+1}^2\in {\mathbb Z}[\lambda^4,x]$ and
$S_{2m+1}^2\in \lambda^2{\mathbb Z}[\lambda^4,x]$. As we already know that $R_{2m+1},\ S_{2m+1}\in {\mathbb Q}(\lambda,x)$, this gives
$R_{2m+1}\in{\mathbb Z}[\lambda^4,x]$ and $S_{2m+1}\in \lambda{\mathbb Z}[\lambda^4,x]$. So $R_{2m+1}$, $R_{2m+2}\in {\mathbb Z}[\lambda^4,x]$ and $S_{2m+1}$, $S_{2m+2}\in \lambda{\mathbb Z}[\lambda^4,x]$.
Thus by induction on $m$, we get $R_n\in {\mathbb Z}[\lambda^4,x]$, $S_n\in \lambda{\mathbb Z}[\lambda^4,x]$.
(2) For $n=0,1,2$ we see immediately that the $R_n$ are even in $x$ and the
$S_n$ have parity $(-1)^{n-1}$ in $x$. On the other hand the
recursion formulas \eqref{recur} show that $R_{n+1}$ has the same parity in $x$ as $R_{n-1}$, and $S_{n+1}$ the same parity as $S_{n-1}$.
This also shows that $R_n\in {\mathbb Z}[\lambda^4,x^2]$, $S_{2n}\in \lambda x{\mathbb Z}[\lambda^4,x^2]$, $S_{2n+1}\in \lambda {\mathbb Z}[\lambda^4,x^2]$.
(3)
The formulas \eqref{T4trans}, \eqref{T1trans} and \eqref{T2trans} imply
\begin{equation}\label{Lambdatau}
\Lambda(h+\pi i \tau)=\frac{\theta_4(h)}{\theta_1(h)}=\frac{1}{\Lambda},\quad
M(h+\pi i \tau)=-2\frac{{\widetilde \theta}_2(h){\widetilde \theta}_3(h)}{{\widetilde \theta}_1(h)^2}=-\frac{M}{\Lambda^2}.
\end{equation}
Part (1) of \lemref{thetaadd} shows
$$
\theta_4(2nh+2\pi i n\tau)=(-1)^n q^{-4n^2}(y^{2n})^{-2n}\theta_4(2nh).
$$
Thus, using \eqref{T4trans} again, we get
$$
\widetilde R_{2n}(h+\pi i \tau)=(-1)^n\frac{{\widetilde \theta}_4(2nh)}{{\widetilde \theta}_1(h)^{4n^2}}=(-1)^n \frac{\widetilde R_{2n}(h)}{\Lambda^{(2n)^2}}.
$$
As $\Lambda^4$ and $M$ are algebraically independent, \eqref{Lambdatau} and the last formula imply that
\begin{equation}\label{R2ntau}R_{2n}\Big(\frac{1}{\lambda},\frac{x}{\lambda^2}\Big)=R_{2n}\Big(\frac{1}{\lambda},-\frac{x}{\lambda^2}\Big)=(-1)^n\frac{1}{{\mathfrak \lambda}^{(2n)^2}}R_{2n}({\mathfrak \lambda},x).\end{equation}
The same argument using part (2) of \lemref{thetaadd} and \eqref{T1trans} shows
$
\widetilde S_{2n}(h+\pi i \tau)=(-1)^n \frac{\widetilde S_{2n}(h)}{\Lambda^{(2n)^2}},
$
and thus
\begin{equation}\label{S2ntau}S_{2n}\Big(\frac{1}{\lambda},\frac{x}{\lambda^2}\Big)=-S_{2n}\Big(\frac{1}{\lambda},-\frac{x}{\lambda^2}\Big)=(-1)^{n-1}\frac{1}{{\mathfrak \lambda}^{(2n)^2}}S_{2n}({\mathfrak \lambda},x).\end{equation}
Similarly using parts (3) and (4) of \lemref{thetaadd} we get by the same arguments
\begin{align*}
\widetilde R_{2n+1}(h+\pi i \tau)=\frac{{\widetilde \theta}_4((2n+1)h+\pi i (2n+1)\tau)}{{\widetilde \theta}_4(h+\pi i \tau)^{(2n+1)^2}}=
(-1)^n \frac{{\widetilde \theta}_1((2n+1)h)}{{\widetilde \theta}_1(h)^{(2n+1)^2}}=(-1)^n \frac{\widetilde S_{2n+1}(h)}{\Lambda^{(2n+1)^2}},
\end{align*}
and thus
$$R_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\big)=R_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},-\frac{x}{{\mathfrak \lambda}^2}\big)=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}} S_{2n+1}.$$
The same argument shows
$S_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},\frac{x}{{\mathfrak \lambda}^2}\big)=S_{2n+1}\big(\frac{1}{{\mathfrak \lambda}},-\frac{x}{{\mathfrak \lambda}^2}\big)=\frac{(-1)^n}{{\mathfrak \lambda}^{(2n+1)^2}} R_{2n+1}.$
\end{proof}
\begin{comment}
We also introduce a normalized version of the blowup polynomials. \begin{LG} explain what they are good for\end{LG}
\begin{Definition}
\begin{enumerate}
\item For all $n$ and $i=0,1$ we define
$$r_n(\lambda,x):=\frac{R_n(\lambda,(1-\lambda^4)x)}{(1-\lambda^4)^{[n^2/4]}},\quad s_n(t,x):=\frac{S_n(\lambda,(1-\lambda^4)x)}{(1-\lambda^4)^{[n^2/4]}}.$$
\end{enumerate}
\begin{LG} clarify what is $[n]$.\end{LG}
It is easy to see that the $r_n$, $s_n$ satisfy
$r_0=r_1$, $
s_1=\lambda,\ s_2=\lambda x,$ and the recursion relations
\begin{align}\label{recurr}
r_{n+1}&=\frac{r_{n}^2-\lambda^2 s_n^2}{(1-\lambda^4)^{\epsilon} r_{n-1}},\
n\ge 1,\quad
s_{n+1}=\frac{s_{n}^2-\lambda^2r_n^2}{(1-\lambda^4)^{\epsilon} s_{n-1}},\ n\ge 2,
\end{align}
where ${\epsilon}=0$ if $n$ is even and ${\epsilon}=1$ if $n$ is odd.
We get
\begin{equation*}
\begin{split}
r_2&=1,\ r_3=-\lambda^4 x^2+1,\ r_4=-\lambda^4 x^4 + 1,\ r_5=-\lambda^4x^6 + (\lambda^8 + 2\lambda^4)x^4 - 3\lambda^4x^2 + 1,\\
r_6&=(-t^3 - t^2 - t)x^8 + (4t^2 + 4t)x^6 - 6tx^4 + 1,\\
s_3&=\lambda(x^2-1),\ s_4=\lambda x \big((\lambda^4 + 1)x^2 - 2\big),\ s_5=\lambda\big(-\lambda^8x^6 + (2\lambda^4 + 1)x^4 - 3x^2 + 1\big),\\
s_6&=\lambda x(-\lambda^8 x^8 + (\lambda^8 + 4\lambda^4 + 1)x^4 - (4\lambda^4 + 4)x^2 + 3\big).
\end{split}
\end{equation*}
\end{Definition}
\begin{Proposition}\label{rnprop}
\begin{enumerate}
\item For all $n\in {\mathbb Z}$,
we have $r_n\in {\mathbb Z}[\lambda^4,x^2]$, $s_{2n} \in \lambda x{\mathbb Z}[\lambda^4,x^2]$, $s_{2n+1} \in \lambda {\mathbb Z}[\lambda^4,x^2]$,
\item
$r_{2n}(\lambda,x)= R_n(\lambda x,(1+\lambda^4)x^2-2), \quad s_{2n}(\lambda,x)=S_n(\lambda x,(1+\lambda^4)x^2-2)
,$
\item The $r_n$, $s_n$ satisfy the symmetries
\begin{align*} r_{2n}\big(\frac{1}{\lambda},x\lambda^2\big)&=r_{2n}(\lambda,x),
\quad s_{2n}\big(\frac{1}{\lambda},x\lambda^2\big)=s_{2n}(\lambda,x),\\
(-1)^n r_{2n+1}\big(\frac{1}{\lambda},x\lambda^2\big)&=s_{2n+1}(\lambda,x), \quad
(-1)^n s_{2n+1}\big(\frac{1}{\lambda},x\lambda^2\big)=r_{2n+1}(\lambda,x).
\end{align*}
\end{enumerate}
\end{Proposition}
\begin{proof}
(2) Using $R_2=(1-t)$, we get by the definition of
$\widetilde R_{2n}$ that
\begin{align*}
\frac{\widetilde R_{2n}}{(1-\Lambda^4)^{n^2}}&=
\frac{\widetilde R_{2n}}{\widetilde R_2^{n^2}}=\frac{{\widetilde \theta}_4(2nh)}{{\widetilde \theta}_4(h)^{4n^2}} \cdot \frac{{\widetilde \theta}_4(h)^{4n^2}}{{\widetilde \theta}_4(2h)^{n^2}}=\frac{{\widetilde \theta}_4(2nh)}
{{\widetilde \theta}_4(2h)^{n^2}}=R_n(\Lambda^4(2h),M(2h)).
\end{align*}
By definition we have
\begin{align*}
\Lambda(2h)&=\frac{\theta_1(2h)}{\theta_4(2h)}=
\frac{\theta_1(2h)}{\theta_1(h){\widetilde \theta}_4(h)^3}\cdot\frac{{\widetilde \theta}_4(h)^4}{{\widetilde \theta}_4(2h)}\cdot \frac{\theta_1(h)}{\theta_4(h)}=\frac{\Lambda\widetilde S_2}{\widetilde R_2}=\frac{\Lambda M}{1-\Lambda^4},\\
M(2h)&=\frac{\theta_1(4h)}{\theta_1(2h){\widetilde \theta}_4(2h)^3}=
\frac{{\widetilde \theta}_1(4h)}{{\widetilde \theta}_4(h)^{16}}\cdot
\frac{{\widetilde \theta}_4(h)^4}{{\widetilde \theta}_1(2h)}\cdot
\Big(\frac{{\widetilde \theta}_4(h)^4}{{\widetilde \theta}_4(2h)}\Big)^3=\frac{\widetilde S_4}{\widetilde S_2\widetilde R_2^3}=\frac{1+\Lambda^4}{(1-\Lambda^4)^2}M^2-2.
\end{align*}
Thus we get
\begin{align*}\frac{R_{2n}(t,x)}{(1-t)^{n^2}}&=R_n\Big(\frac{\lambda x}{(1-\lambda^4)},\frac{\lambda^4+1}{(1-\lambda^4)^2}x^2-2\Big),\\
r_{2n}(\lambda,x)&=\frac{R_{2n}(\lambda,(1-\lambda^4)x)}{(1-\lambda^4)^{n^2}}=R_n(\lambda x,(\lambda^4+1)x^2-2).
\end{align*}
Similarly we have
\begin{align*}
\frac{\widetilde S_{2n}}{(1-\Lambda^4)^{n^2}}&=
\frac{\widetilde S_{2n}}{\widetilde R_2^{n^2}}=\frac{{\widetilde \theta}_1(2nh)}{{\widetilde \theta}_4(h)^{4n^2}} \cdot \frac{{\widetilde \theta}_4(h)^{4n^2}}{{\widetilde \theta}_4(2h)^{n^2}}
=\frac{{\widetilde \theta}_1(2nh)}
{{\widetilde \theta}_4(2h)^{n^2}}=S_n(\Lambda^4(2h),M(2h))
\end{align*}
In the same way as for $r_{2n}$ this gives
\begin{align*}
\frac{S_{2n}(\lambda,x)}{(1-\lambda^4)^{n^2}}&=S_n\Big(\frac{\lambda x}{(1-\lambda^4)},\frac{\lambda^4+1}{(1-\lambda^4)^2}x^2-2\Big),\quad
s_{2n}(\lambda,x)=S_n(\lambda x,(\lambda^4+1)x^2-2).
\end{align*}
(1) Next we will show that
$r_n,s_n\in {\mathbb Z}[\lambda,x]$, for all $n\in {\mathbb Z}$. Then (1) follows by the definition of $r_n$, $s_n$.
By symmetry it is enough to show this if $n$ is a positive integer.
We know that this is true for $n\le 4$.
Now assume that $m\ge 2$, and that we know
the statement for all $n\le 2m$.
As $R_{m+1},S_{m+1}\in {\mathbb Z}[\lambda,x]$, part (2) gives that
$r_{2m+2},s_{2m+2}\in {\mathbb Z}[\lambda,x]$.
The relations
$$(1-\lambda^4)r_{2m+2}r_{2m}=r_{2m+1}^2-\lambda^2s_{2m+1}^2, \
(1-\lambda^4)s_{2m+2}s_{2m}=s_{2m+1}^2-\lambda^2 r_{2m+1}^2,$$ give
\begin{align}
\label{r2n1}r_{2m+1}^2&=r_{2m+2}r_{2m}+\lambda^2 s_{2m+2}s_{2m},\quad
s_{2m+1}^2=s_{2m+2}s_{2m}+\lambda^2 r_{2m+2}r_{2m}.
\end{align}
Thus we get $r_{2m+1}^2\in {\mathbb Z}[\lambda,x]$, $s_{2m+1}^2\in{\mathbb Z}[\lambda,x]$. As we already know that
$r_{2m+1},\ s_{2m+1}\in {\mathbb Q}(\lambda,x)$, it follows that they are in ${\mathbb Z}[\lambda,x]$.
(3) By part (3) of \propref{blowpolprop} we get
$$r_{2n}\Big(\frac{1}{\lambda},x\lambda^2\Big)=\frac{(-\lambda^4)^{n^2}}{(1-t)^{n^2}}R_{2n}\Big(\frac{1}{\lambda},x(1-\lambda^4)/\lambda^2\Big)
=\frac{1}{(1-t)^{n^2}}R_{2n}(t,x(1-t))
=r_{2n}(t,x).$$
Similarly we get from part (3) of \propref{blowpolprop}
\begin{align*}
s_{2n}\Big(\frac{1}{\lambda},x\lambda^2\Big)&=s_{2n}(\lambda,x),\\
(-1)^n \lambda r_{2n+1}\Big(\frac{1}{\lambda},x\lambda^2\Big)&=
s_{2n+1}(\lambda,x),\ s_{2n+1}\Big(\frac{1}{\lambda},x\lambda^2\Big)=(-1)^n\lambda
r_{2n+1}(\lambda, x).
\end{align*}
\end{proof}
\end{comment}
\subsection{Blowdown formulas}
Let $\widehat X$ be the blowup of a rational surface $X$ in a point.
As mentioned at the beginning of this section, the blowup polynomials determine a blowup formula which computes the $K$-theoretic Donaldson invariants of $\widehat X$ in terms of those of $X$. We will also need a blowdown
formula which determines all the $K$-theoretic Donaldson invariants of $X$ in
terms of a small part of those of $\widehat X$. In order to prove the blowdown formula, we will need that, for $n,m$ relatively prime integers,
the polynomials $R_n$, $R_{m}$ and $S_n$, $S_{m}$ are, as polynomials in $x$, relatively prime in a suitable sense.
\begin{Proposition}\label{blowdownpol} Let $n,m\in {\mathbb Z}$ be relatively prime.
\begin{enumerate}
\item There exists a minimal integer $M^0_{n,m}\in {\mathbb Z}_{\ge 0}$ and unique polynomials
$h^0_{n,m},\ l^0_{n,m}\in {\mathbb Q}[\lambda^4,x^2]$, such that $(1-\lambda^4)^{M^0_{n,m}}=h^0_{n,m} R_n+l^0_{n,m} R_m$.
\item
There exists a minimal integer $M^1_{n,m}\in {\mathbb Z}_{\ge 0}$ and unique polynomials
$h^1_{n,m},\ l^1_{n,m}\in {\mathbb Q}[\lambda^4,x]$, such that $\lambda(1-\lambda^4)^{M^1_{n,m}}=h^1_{n,m} S_n+l^1_{n,m} S_m$.
\end{enumerate}
\end{Proposition}
\begin{proof}
For all $l\in {\mathbb Z}$ we write
$$\overline S_{2l}:=x\frac{S_{2l}}{\lambda}=\frac{S_2S_{2l}}{S_1^2}, \quad \overline S_{2l+1}:=\frac{S_{2l+1}}{\lambda}=\frac{S_{2l+1}}{S_1}\in {\mathbb Z}[\lambda^4,x^2].$$
Let $I_{n,m}=\<R_n,R_{m}\>\subset {\mathbb Z}[\lambda^4,x^2]$ be the ideal generated by $R_n, R_{m}\in {\mathbb Z}[\lambda^4,x^2]$, and let
$J_{n,m}=\<\overline S_n,\overline S_{m}\>\subset {\mathbb Z}[\lambda^4,x^2]$ be the ideal generated by $\overline S_n,\overline S_{m}\in {\mathbb Z}[\lambda^4,x^2]$.
Then the Proposition follows immediately from the following.
\begin{Claim}[1]
There are $M^0_{n,m} , M^1_{n,m}\in {\mathbb Z}_{\ge 0}$ with $(1-\lambda^4)^{M^0_{n,m}}\in I_{n,m}$ and $(1-\lambda^4)^{M^1_{n,m}}\in J_{n,m}$.
\end{Claim}
Let
\begin{align*}
V_{n,m}&:=\big\{(\alpha^4,\beta^2)\in {\mathbb C}^2\bigm| R_{n}(\alpha,\beta)=R_{m}(\alpha,\beta)=0 \big\}, \\
W_{n,m}&:=\big\{(\alpha^4,\beta^2)\in {\mathbb C}^2\bigm| \overline S_{n}(\alpha,\beta)=\overline S_{m}(\alpha,\beta)=0 \big\}.
\end{align*}
Then by the Nullstellensatz the Claim (1) follows immediately from the following.
\begin{Claim}[2]
$V_{n,m}, W_{n,m}\subset \{(1,0)\}$.
\end{Claim}
\noindent{\it Proof of Claim (2):}
The idea of the proof is as follows:
For each $(\alpha,\beta)\in {\mathbb C}^2$ with $(\alpha^4,\beta^2)\in {\mathbb C}^2\setminus \{(1,0)\}$ we want to show that
\begin{enumerate}
\item $R_n(\alpha,\beta)$ or $R_{m}(\alpha,\beta)$ is nonzero,
\item $\overline S_n(\alpha,\beta)$ or $\overline S_{m}(\alpha,\beta)$ is nonzero.
\end{enumerate}
Recall that we have $\widetilde R_n=R_n(\Lambda,M)$, and we put $\widehat S_n:=\overline S_n(\Lambda,M)$, so that
$\widehat S_{2n}=\frac{M \widetilde S_{2n}}{\Lambda}$ and $\widehat S_{2n+1}=\frac{\widetilde S_{2n+1}}{\Lambda}$.
We denote
\begin{align*}
\Lambda|_S(h,\tau)&:=\Lambda(\frac{h}{\tau},-\frac{1}{\tau}), \quad M|_S(h,\tau):=M(\frac{h}{\tau},-\frac{1}{\tau}),\\
\widetilde R_m|_S(h,\tau)&:=\widetilde R_m(\frac{h}{\tau},-\frac{1}{\tau}), \quad \widehat S_m|_S(h,\tau):=\widehat S_m(\frac{h}{\tau},-\frac{1}{\tau})
\end{align*}
the application of the operator $S:(h,\tau)\mapsto (\frac{h}{\tau},-\frac{1}{\tau})$ to the Jacobi functions $\Lambda$, $M$, $\widetilde R_m$, $\widehat S_m$.
Obviously we have
$$R_m(\Lambda|_S,M|_S)= \widetilde R_m|_S, \quad \overline S_m(\Lambda|_S,M|_S)= \widehat S_m|_S.$$
We denote $Z(f)\subset {\mathbb C}$ the zero set of a meromorphic function $f:{\mathbb C}\to{\mathbb C}$.
Therefore Claim (2) will follow once we prove the following facts:
\begin{enumerate}
\item Every $(\alpha, \beta)\in {\mathbb C}^2\setminus \{(1,0)\}$ can be written as
$(\Lambda(h,\tau)^4,M(h,\tau)^2)$ for some $h\in {\mathbb C}$, $\tau\in {\mathcal H}\cup\{\infty\}$, or as $(\Lambda^4|_S(h,\infty), M^2|_S(h,\infty))$ for some $h\in {\mathbb C}$.
Here, by $\Lambda(h,\infty)$, $M(h,\infty)$, $\Lambda|_S(h,\infty)$, $M|_S(h,\infty)$ we mean the coefficient of $q^0$ of the $q$-development of
$\Lambda$, $M$, $\Lambda|_S$, $M|_S$ respectively (asserting also that these developments are power series in $q$).
\item For all $\tau\in {\mathcal H}\cup \{\infty\}$ we have
\begin{align*}
Z(\widetilde R_n(\bullet,\tau))\cap Z(\widetilde R_{m}(\bullet,\tau))&:= \big\{h\in {\mathbb C} \bigm|\widetilde R_n(h,\tau)=\widetilde R_{m}(h,\tau)=0\big\}=\emptyset,\\
Z(\widetilde R_n|_S(\bullet,\infty))\cap Z(\widetilde R_{m}|_S(\bullet,\infty))&:= \big\{h\in {\mathbb C} \bigm|\widetilde R_n|_S(h,\infty)=\widetilde R_{m}|_S(h,\infty)=0\big\}=\emptyset.
\end{align*}
\item For all $\tau\in {\mathcal H}\cup \{\infty\}$ we have
\begin{align*}Z(\widehat S_n(\bullet,\tau))\cap Z(\widehat S_{m}(\bullet,\tau))&:= \big\{h\in {\mathbb C} \bigm|\widehat S_n(h,\tau)=\widehat S_{m}(h,\tau)=0\big\}=\emptyset,\\
Z(\widehat S_n|_S(\bullet,\infty))\cap Z(\widehat S_{m}|_S(\bullet,\infty))&:= \big\{h\in {\mathbb C} \bigm|\widehat S_n|_S(h,\infty)=\widehat S_{m}|_S(h,\infty)=0\big\}=\emptyset.
\end{align*}
\end{enumerate}
(1) For any fixed $\tau\in {\mathcal H}$ the range of the elliptic function $\Lambda=\Lambda(\bullet,\tau)$ is ${\mathbb C}\cup \{\infty\}$.
$u$ is a Hauptmodul for $\Gamma^0(4)$, which takes the values
$-2,2,\infty$ at the cusps $0,2,\infty$ respectively. Therefore the range of $u$ as a function on ${\mathcal H}$ is
${\mathbb C}\setminus \{-2,2\}$.
By the equation
$M^2=4(1+u\Lambda^2+\Lambda^4)$, we get therefore that the range of $(\Lambda^4, M^2)$ on ${\mathcal H}\times {\mathbb C}$ contains
the set
$$I_1:={\mathbb C}^2\setminus \{(c^2,4(1+c)^2)\ |\ c\in {\mathbb C}\}.$$
Now we look at $\tau=\infty$, i.e. $q=0$.
By the $q$-developments \eqref{theta}, we see that
\begin{equation}
\label{thetaq0}
\begin{split}
{\widetilde \theta}_1(h,\tau)&=O(q),\quad
{\widetilde \theta}_4(h,\tau)=1+O(q),
\quad {\widetilde \theta}_3(h,\tau)=1+O(q),\\
{\widetilde \theta}_2(h,\tau)&=\cosh(h/2)+O(q),\quad \frac{{\widetilde \theta}_1(h,\tau)}{{\widetilde \theta}_2(h,\tau)}=-i\tanh(h/2)+O(q).
\end{split}
\end{equation}
Therefore we get from the definitions
$$\Lambda(h,\tau)=O(q), \quad M(h,\tau)=2\frac{{\widetilde \theta}_2(h,\tau){\widetilde \theta}_3(h,\tau)}{{\widetilde \theta}_4(h,\tau)^2}=2\cosh(h/2)+O(q).$$
As $\cosh:{\mathbb C}\to {\mathbb C}$ is surjective, we see that the range of
$(\Lambda^4, M^2)$ on $ {\mathbb C}\times \{\infty\} $ is $I_2:=\{0\}\times {\mathbb C}$.
From the definitions and \eqref{thetaq0} we obtain
\begin{align*}
\Lambda^4|_S(h,\tau)&=\frac{{\widetilde \theta}_1(h,\tau)^4}{{\widetilde \theta}_2(h,\tau)^4}=\tanh(h/2)^4+O(q),\\
M^2|_S(h,\tau)&=4\frac{{\widetilde \theta}_3(h,\tau)^2{\widetilde \theta}_4(h,\tau)^2}{{\widetilde \theta}_2(h,\tau)^4}=\frac{4}{\cosh(h/2)^4}+O(q)=
4(1-\tanh(h/2)^2)^2+O(q).\end{align*}
It is an easy exercise that the range of $\tanh:{\mathbb C}\to {\mathbb C}$ is ${\mathbb C}\setminus\{\pm 1\}$.
Thus the range of $( \Lambda^4|_S,M^2|_S)$ on ${\mathbb C}\times \{\infty\}$ is
$$I_3=\big \{(c^2,4(1-c)^2)\bigm| c\in {\mathbb C}\setminus \{1\}\big\}.$$
As $I_1\cup I_2\cup I_3={\mathbb C}^2\setminus \{(1,0)\}$, (1) follows.
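The elementary facts used in the last two steps, namely $1/\cosh^2=1-\tanh^2$ (which identifies the two expressions for $M^2|_S$ at $q=0$) and the stated ranges of $\cosh$ and $\tanh$ on ${\mathbb C}$, are easy to confirm numerically. A throwaway check (not part of the paper):

```python
import cmath

h = 0.7 + 1.3j  # arbitrary sample point
z = h / 2
# 1/cosh^2 = 1 - tanh^2, so the two formulas for M^2|_S at q = 0 agree
lhs = 4 / cmath.cosh(z)**4
rhs = 4 * (1 - cmath.tanh(z)**2)**2
assert abs(lhs - rhs) < 1e-12

# cosh: C -> C is surjective: solve cosh(z) = w via acosh
for w in (0.3 - 2j, -5 + 1j, 0):
    assert abs(cmath.cosh(cmath.acosh(w)) - w) < 1e-9

# tanh omits exactly +-1: solve tanh(z) = w via atanh for any w != +-1
for w in (0.3 - 2j, -5 + 1j, 2):
    assert abs(cmath.tanh(cmath.atanh(w)) - w) < 1e-9
```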
(2) First let $\tau$ in ${\mathcal H}$. It is standard that $\theta_1(h)$ and $\theta_4(h)$ are holomorphic in $h$ on ${\mathbb C}$ and
$Z(\theta_1(h))=2\pi i ({\mathbb Z}+{\mathbb Z}\tau)$, $Z(\theta_4(h))=2\pi i ({\mathbb Z}+({\mathbb Z}+\frac{1}{2})\tau)$.
Thus by $\widetilde R_n=\frac{{\widetilde \theta}_4(nz)}{{\widetilde \theta}_4(z)^{n^2}}$,
we see that
\begin{align*}
Z(\widetilde R_n(\bullet,\tau))&=\Big\{2\pi i\big(\frac{a}{n}+\frac{b}{2n}\tau\big)\bigm| a,b\in {\mathbb Z},\ b \hbox{ odd},
(a,b)\not \equiv (0,0) \hbox{ mod }n
\Big\}.
\end{align*}
Assume that $2\pi i \big(\frac{a}{n}+\frac{b}{2n}\tau \big)=2\pi i \big(\frac{a'}{m}+\frac{b'}{2m}\tau \big)\in Z(\widetilde R_n(\bullet,\tau))\cap Z(\widetilde R_{m}(\bullet, \tau))$.
As $n$ and $m$ are relatively prime, we see that there are $a'',b''\in {\mathbb Z}$ such that
$$\frac{b}{2n}=\frac{b'}{2m}=\frac{b''}{2}, \quad \frac{a}{n}=\frac{a'}{m}=a''.$$
Thus $a$ and $b$ are both divisible by $n$, and thus $2\pi i \big(\frac{a}{n}+\frac{b}{2n}\tau\big)\not\in Z(\widetilde R_n(\bullet,\tau))$, a contradiction.
Now let $\tau=\infty$.
Then $\widetilde R_n(h,\infty)=\frac{{\widetilde \theta}_4(nh,\infty)}{{\widetilde \theta}_4(h,\infty)^{n^2}}=1$. Thus $Z(\widetilde R_n(\bullet,\infty))=\emptyset$.
Finally we consider $\widetilde R_n|_S(h,\infty)$.
We have
\begin{align*}
\widetilde R_n|_S(h,\infty)&=\frac{{\widetilde \theta}_2(nh,\infty)}{{\widetilde \theta}_{2}(h,\infty)^{n^2}}=
\frac{\cosh(n h/2)}{\cosh( h/2)^{n^2}}.
\end{align*}
This gives
\begin{align*}
Z(\widetilde R_n|_S(\bullet,\infty))&=\Big\{2\pi i \frac{b}{2n}\Bigm| b\in {\mathbb Z} \hbox{ odd, } n\not|b\Big\},
\end{align*}
and again it is clear that $Z(\widetilde R_n|_S(\bullet,\infty))\cap Z(\widetilde R_m|_S(\bullet,\infty))=\emptyset$.
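As a throwaway numerical illustration (not part of the paper) that $Z(\widetilde R_n|_S(\bullet,\infty))$ and $Z(\widetilde R_m|_S(\bullet,\infty))$ are disjoint for coprime $n,m$, one can sample points where $\cosh(nh/2)$ vanishes (but $\cosh(h/2)$ does not) and check that the corresponding function for $m$ does not vanish there:

```python
import cmath

def Rn_S_infty(n, h):
    """R~_n|_S(h, infinity) = cosh(n h / 2) / cosh(h / 2)^(n^2)."""
    return cmath.cosh(n * h / 2) / cmath.cosh(h / 2)**(n * n)

n, m = 3, 5  # a coprime pair
# points with cosh(n h / 2) = 0 and cosh(h / 2) != 0
zeros_n = [cmath.pi * 1j * b / n for b in (1, 5, 7)]  # b odd, not divisible by n
for h in zeros_n:
    assert abs(Rn_S_infty(n, h)) < 1e-10   # vanishes for n = 3 ...
    assert abs(Rn_S_infty(m, h)) > 1e-3    # ... but not for m = 5
```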
(3) We note that
\begin{align*}
\widehat S_{2l+1}&=\frac{\widetilde S_{2l+1}}{\widetilde S_1}=\frac{\theta_{1}((2l+1)h)}{\theta_1(h){\widetilde \theta}_4(h)^{4l^2+4l}},\\
\widehat S_{2l}&=\frac{\widetilde S_2\widetilde S_{2l}}{\widetilde S_1^2}=\frac{\theta_{1}(2lh)\theta_1(2h)}{\theta_1(h)^2{\widetilde \theta}_4(h)^{4l^2+2}}.
\end{align*}
Let $\tau\in {\mathcal H}$, then this gives
\begin{align*}
Z(\widehat S_{2l+1}(\bullet, \tau))&=\big\{2\pi i (\frac{a}{2l+1}+\frac{b}{2l+1}\tau)\bigm| a,b\in {\mathbb Z}, \ (a,b)\not \equiv (0,0)\mod 2l+1\big\},\\
Z(\widehat S_{2l}(\bullet, \tau))&=\big\{2\pi i (\frac{a}{2l}+\frac{b}{2l}\tau)\bigm| a,b\in {\mathbb Z}, \ (a,b)\not \equiv (0,0)\mod 2l\big\}.
\end{align*}
Thus we see immediately that $Z(\widehat S_{n}(\bullet, \tau))\cap Z(\widehat S_{m}(\bullet, \tau))=\emptyset,$ if $n$ and $m$ are relatively prime.
Now let $\tau=\infty$. Then $$\widehat S_{2l+1}(h, \infty)=\frac{\sinh((2l+1)h/2)}{\sinh(h/2)},\quad \widehat S_{2l}(h ,\infty)=\frac{\sinh(lh)\sinh(h)}{\sinh(h/2)^2},$$
so it is easy to see that $Z(\widehat S_{n}(\bullet,\infty))\cap Z(\widehat S_{m}(\bullet, \infty))=\emptyset,$ if $n$ and $m$ are relatively prime.
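The disjointness of these zero sets at $\tau=\infty$ can also be illustrated numerically with the closed $\sinh$-expressions just displayed. A throwaway check (not part of the paper), for the coprime pair $3$, $2$:

```python
import cmath

def S_odd(l, h):   # \widehat S_{2l+1}(h, infinity) = sinh((2l+1)h/2) / sinh(h/2)
    return cmath.sinh((2 * l + 1) * h / 2) / cmath.sinh(h / 2)

def S_even(l, h):  # \widehat S_{2l}(h, infinity) = sinh(l h) sinh(h) / sinh(h/2)^2
    return cmath.sinh(l * h) * cmath.sinh(h) / cmath.sinh(h / 2)**2

# at a zero of \widehat S_3 the function \widehat S_2 is nonzero, and conversely
h = 2 * cmath.pi * 1j / 3          # zero of \widehat S_3 = sinh(3h/2)/sinh(h/2)
assert abs(S_odd(1, h)) < 1e-10
assert abs(S_even(1, h)) > 1e-3    # \widehat S_2(h) = 4 cosh(h/2)^2 = 1 here

h = cmath.pi * 1j                  # zero of \widehat S_2
assert abs(S_even(1, h)) < 1e-10
assert abs(S_odd(1, h)) > 1e-3     # \widehat S_3(pi i) = sin(3 pi/2)/sin(pi/2) = -1
```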
Finally $$\widehat S_{2l+1}|_S(h,\tau)=\frac{\theta_1((2l+1)h)}{\theta_1(h){\widetilde \theta}_2(h)^{4l^2+4l}},\quad
\widehat S_{2l}|_S(h,\tau)=\frac{\theta_1(2lh) \theta_1(2h)}{\theta_1(h)^2{\widetilde \theta}_2(h)^{4l^2+2}}.$$
Thus we get $$\widehat S_{2l+1}|_S(h,\infty)=\frac{\sinh((2l+1)h/2)}{\sinh(h/2)\cosh(h/2)^{4l^2+4l}},\
\widehat S_{2l}|_S(h,\infty)=\frac{\sinh(lh)\sinh(h)}{\sinh(h/2)^2\cosh(h/2)^{4l^2+2}},$$
and again it is evident that for $n$ and $m$ relatively prime
$Z(\widehat S_{n}|_S(\bullet,\infty))\cap Z(\widehat S_{m}|_S(\bullet, \infty))=\emptyset.$
\end{proof}
\begin{Corollary}\label{blowdownmn}
Let $n,m\in {\mathbb Z}$ be relatively prime.
Let $X$ be ${\mathbb P}^2$, ${\mathbb P}^1\times{\mathbb P}^1$ or the blowup of ${\mathbb P}^2$ in finitely many general points.
Let $c_1\in H^2(X,{\mathbb Z})$, let $L$ be a line bundle on $X$, let $r\in {\mathbb Z}_{\ge 0}$ with $\<c_1,L\>\equiv r\mod 2$.
Let $\widehat X$ be the blowup of $X$ in a point. Let $\omega\in S_X$.
Using the notations of \propref{blowdownpol}, we have
\begin{align*}
\chi^{X,\omega}_{c_1}(L,P^r)&=\frac{1}{(1-\Lambda^4)^{M^0_{n,m}}}\big(\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r\cdot h^0_{n,m}(\Lambda,P))\\
&\qquad+
\chi^{\widehat X,\omega}_{c_1}(L-(m-1)E,P^r\cdot l^0_{n,m}(\Lambda,P))\big)\\
&=\frac{1}{\Lambda(1-\Lambda^4)^{M^1_{n,m}}}\big(\chi^{\widehat X,\omega}_{c_1+E}(L-(n-1)E,P^r\cdot h^1_{n,m}(\Lambda,P))\\
&\qquad+
\chi^{\widehat X,\omega}_{c_1+E}(L-(m-1)E,P^r\cdot l^1_{n,m}(\Lambda,P))\big).
\end{align*}
\end{Corollary}
\begin{proof}
(1) By \thmref{nblow} we have
\begin{align*}
\big(&\chi^{\widehat X,\omega}_{c_1}(L-(n-1)E,P^r\cdot h^0_{n,m}(\Lambda,P))+
\chi^{\widehat X,\omega}_{c_1}(L-(m-1)E,P^r\cdot l^0_{n,m}(\Lambda,P))\big)\\&
=
\chi^{X,\omega}_{c_1}\big(L,P^r\cdot \big(R_n(\Lambda,P)h^0_{n,m}(\Lambda,P)+R_{m}(\Lambda,P)l^0_{n,m}(\Lambda,P)\big)\big)=
(1-\Lambda^4)^{M^0_{n,m}} \chi^{X,\omega}_{c_1}(L,P^r),
\end{align*}
where in the last step we use \propref{blowdownpol}.
The proof of (2) is similar.
\end{proof}
\section{Recursion formulas for rational ruled surfaces}
\subsection{The limit of the invariant at the boundary point}
For $X={\mathbb P}^1\times {\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$ the blowup of ${\mathbb P}^2$ in a point,
we denote the line bundles on $X$ in a uniform way.
\begin{Notation}
Let $X={\mathbb P}^1\times {\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$. In the case $X={\mathbb P}^1\times {\mathbb P}^1$ we denote by $F$ the class of the fibre of the projection to the first factor, and by $G$ the class of the fibre of the projection to the second factor. In the case $X=\widehat {\mathbb P}^2$, let $H$ be the pullback of the hyperplane class on ${\mathbb P}^2$ and $E$ the class of the exceptional divisor. Then $F:=H-E$ is the fibre of the ruling of $X$. We put $G:=\frac{1}{2}(H+E)$. Note that $G$ is not an integral cohomology class. In fact, while $H^2({\mathbb P}^1\times{\mathbb P}^1,{\mathbb Z})={\mathbb Z} F\oplus {\mathbb Z} G$, we have
$$H^2(\widehat {\mathbb P}^2,{\mathbb Z})={\mathbb Z} H\oplus {\mathbb Z} E=\big\{aF+bG\bigm| a\in {\mathbb Z},b\in 2{\mathbb Z} \hbox{ or } a\in {\mathbb Z}+\frac{1}{2}, b\in 2{\mathbb Z}+1\big\}.$$
On the other hand we note that both on $X={\mathbb P}^1\times{\mathbb P}^1$ and $\widehat {\mathbb P}^2$ we have
$F^2=G^2=0$, $\<F,G\>=1$, and $-K_X=2F+2G$.
\end{Notation}
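The intersection numbers asserted here follow by bilinearity from $H^2=1$, $E^2=-1$, $\<H,E\>=0$ on $\widehat{\mathbb P}^2$, using $K_{\widehat{\mathbb P}^2}=-3H+E$. A throwaway check (not part of the paper):

```python
from fractions import Fraction

# intersection form on H^2 of the blowup of P^2 in a point, in the basis (H, E)
Q = [[1, 0],
     [0, -1]]

def pair(a, b):
    return sum(Fraction(a[i]) * Q[i][j] * Fraction(b[j])
               for i in range(2) for j in range(2))

F = (1, -1)                           # F = H - E
G = (Fraction(1, 2), Fraction(1, 2))  # G = (H + E)/2
# K_X = -3H + E, so -K_X = 3H - E; the claim is -K_X = 2F + 2G

assert pair(F, F) == 0 and pair(G, G) == 0
assert pair(F, G) == 1
assert tuple(2 * f + 2 * g for f, g in zip(F, G)) == (3, -1)  # -K_X = 2F + 2G
```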
We want to define and study the limit of the $K$-theoretic Donaldson invariants $\chi^{X,\omega}_{c_1}(L,P^r)$ as the ample class $\omega$ tends to $F$.
For $c_1=F$ or $c_1=0$ this will be different from our previous definition of $\chi^{X,F}_{c_1}(L,P^r)$.
\begin{Definition}
Let $r\in {\mathbb Z}_{\ge 0}$, let $L\in \operatorname{Pic}(X)$ with $\<c_1,L\>+r$ even. Fix $d\in {\mathbb Z}$ with $d\equiv -c_1^2 \mod 4$. For $n_{d,r}>0$ sufficiently large, $n_{d,r}F+ G$ is ample on $X$,
and
there is no wall $\xi$ of type $(c_1,d)$ with
$\<\xi ,(n_{d,r}F+ G)\>>0> \<\xi, F\>$.
For all $\omega\in S_X\cup C_X$, we define $\chi^{X,\omega}_{c_1,d}(L,P^r):=\mathop{\text{\rm Coeff}}_{\Lambda^d}\big[\chi^{X,\omega}_{c_1}(L,P^r)\big],$ and put
\begin{align*}
\chi^{X,F_+}_{c_1,d}(L,P^r)&:=\chi^{X,n_{d,r}F+ G}_{c_1,d}(L,P^r),\quad
\chi^{X,F_+}_{c_1}(L,P^r):=\sum_{d\ge 0} \chi^{X,F_+}_{c_1,d}(L,P^r)\Lambda^d.
\end{align*}
\end{Definition}
Now we give a formula for $\chi^{X,F_+}_{0}(nF+mG,P^r)$ and $\chi^{X,F_+}_{F}(nF+mG,P^r)$. The result and the proof are similar to \cite[Prop.~5.3]{GY}.
The rest of this section will be mostly devoted to giving an explicit
evaluation of this formula for $m\le 2$.
\begin{Proposition}\label{Fplus}
Let $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$.
\begin{enumerate}
\item
Let $nF+mG$ be a line bundle on $X$ with $m$ even. Then
$$\chi^{X,F_+}_{F}(nF+mG,P^r)=\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2\sinh((m/2+1)h)}\Lambda^2{\widetilde \theta}_4(h)^{2(n+2)(m+2)}u'h^*M^r\right].$$
\item
Let $nF+mG$ be a line bundle on $X$ (note that we might have $n\in \frac{1}{2}{\mathbb Z}$). Then
$$\chi^{X,F_+}_{0}(nF+mG,P^r)=-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2}\left(\coth((m/2+1)h)\right)\Lambda^2{\widetilde \theta}_4(h)^{2(n+2)(m+2)}u'h^*M^r\right].$$
\end{enumerate}
\end{Proposition}
\begin{proof}
We denote $\Gamma_X=H^2(X,{\mathbb Z})$ with inner product the negative of the intersection form.
Let $c_1=0$ or $c_1=F$, fix $d$, and let $s\in {\mathbb Z}_{\ge 0}$ be sufficiently large so that there is no class $\xi$ of type $(c_1,d)$ with $\<\xi, F\><0<\<\xi, (G+sF)\>$.
Write $L:=nF+mG$.
By \corref{blowp} we get
\begin{align*}
&\chi^{X,sF+G}_{F,d}(L,P^r)=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Psi^{G+sF,F}_{X,F}(L;\Lambda,\tau)M^r\right]\\
\quad&=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Theta^{G+sF,F}_{\Gamma_X,F,K_X}(\frac{1}{2\pi i}(L-K_X)h,\tau)\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]\\
\quad&=
\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\<\frac{F}{2},(L-K_X)\>h}}{1-e^{-\<F,(L-K_X)\>h}}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]+\sum_{\<F,\xi\><0<\<(G+sF),\xi\>}\delta^X_\xi(L,P^r).
\end{align*}
Here the second sum is over the classes of type $(F,d)$. By our assumption on $s$ the second sum is empty, so we get
\begin{align*}
\chi^{X,F_+}_F(L,P^r)&=\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\<\frac{F}{2},(L-K_X)\>h}}{1-e^{-\<F,(L-K_X)\>h}}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]\\&=
\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*}{2\sinh(\<\frac{F}{2},(L-K_X)\>h)}M^r\right].\end{align*}
In the case $c_1=0$ the argument is very similar. By definition and \thmref{vanbound} we have
\begin{align*}
&\chi^{X,sF+G}_{0,d}(L,P^r)=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Psi^{sF+G,F}_{X,0}(L;\Lambda,\tau)M^r\right]\\
\quad&=\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\Theta^{sF+G,F}_{\Gamma_X,0,K_X}(\frac{1}{2\pi i}(L-K_X)h,\tau)\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right]\\
\quad&=
-\mathop{\text{\rm Coeff}}_{\Lambda^d}\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\<F,L-K_X\>h}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*}{1-e^{-\<F,(L-K_X)\>h}}M^r\right]+\sum_{\<F,\xi\><0<\<(G+sF),\xi\>}\delta^X_\xi(L,P^r).
\end{align*}
The second sum is again over the walls of type $(0,d)$, and thus it is $0$.
Thus we get
\begin{align*}
\chi^{X,F_+}_{0}(L,P^r)&=-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{e^{-\<F,L-K_X\>h}\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*}{1-e^{-\<F,(L-K_X)\>h}}M^r\right]\\
&=
-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2}\left(\coth(\<F,(L-K_X)/2\>h)-1\right)\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r\right].
\end{align*}
Note that by \remref{delb} we have $\mathop{\text{\rm Coeff}}_{q^0}[\Lambda^2{\widetilde \theta}_4(h)^{(L-K_X)^2}u'h^*M^r]=0$, so the summand $-1$ inside the bracket does not contribute, and the formula of (2) follows.
\end{proof}
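The elementary rewriting used in both computations of this proof, $\frac{e^{-ah}}{1-e^{-ah}}=\frac12\big(\coth(ah/2)-1\big)$ and $\frac{e^{-ah/2}}{1-e^{-ah}}=\frac{1}{2\sinh(ah/2)}$ with $a=\<F,L-K_X\>$, can be confirmed numerically. A throwaway check (not part of the paper):

```python
import cmath

a, h = 2.0, 0.4 + 0.9j   # sample values; a plays the role of <F, L - K_X>
x = a * h

# e^{-x} / (1 - e^{-x}) = (coth(x/2) - 1) / 2
lhs1 = cmath.exp(-x) / (1 - cmath.exp(-x))
rhs1 = (1 / cmath.tanh(x / 2) - 1) / 2
assert abs(lhs1 - rhs1) < 1e-12

# e^{-x/2} / (1 - e^{-x}) = 1 / (2 sinh(x/2))
lhs2 = cmath.exp(-x / 2) / (1 - cmath.exp(-x))
rhs2 = 1 / (2 * cmath.sinh(x / 2))
assert abs(lhs2 - rhs2) < 1e-12
```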
\begin{Remark}\label{Gplus}
In the case of ${\mathbb P}^1\times{\mathbb P}^1$, we can in the same way define $\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{c_1,d}(L,P^r):=\chi^{{\mathbb P}^1\times{\mathbb P}^1,G+n_dF}_{c_1,d}(L,P^r)$ for $n_d$ sufficiently large with respect to $d$, and
$$\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{c_1}(nF+mG,P^r):=\sum_{d\ge 0} \chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{c_1,d}(nF+mG,P^r)\Lambda^d.$$
Then we see immediately that
$\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{F}(nF+mG,P^r)=0$, and we get by symmetry from \propref{Fplus} that
$$\chi^{{\mathbb P}^1\times{\mathbb P}^1,G_+}_{0}(nF+mG,P^r)=-\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2}\left(\coth((n/2+1)h)\right)\Lambda^2{\widetilde \theta}_4(h)^{2(n+2)(m+2)}u'h^*M^r\right].$$
\end{Remark}
\subsection{Recursion formulas from theta constant identities}
We now use the blowup polynomials to show recursion formulas in $n$ and $r$ for the $K$-theoretical Donaldson invariants $\chi^{X,F_+}_{0}(nF+mG,P^r)$, $\chi^{X,F_+}_{F}(nF+mG,P^r)$ for $0\le m\le 2$.
We use the fact that the $\widetilde S_n$ vanish at the division points $h\in \frac{2\pi i}{n} {\mathbb Z}$, together with other vanishing results proven in \cite{GY}.
We consider expressions relating the left hand sides of the formulas of \propref{Fplus} for $\chi^{X,F_+}_{0}(nF+mG,P^r)$, $\chi^{X,F_+}_{F}(nF+mG,P^r)$ for successive values of $n$.
We will show that these
are almost holomorphic in $q$, i.e. that they have only finitely many monomials $\Lambda^d q^s$ with nonzero coefficients and $s\le 0$.
This will then give recursion formulas for $\chi^{X,F_+}_{0}(nF+mG,P^r)$, $\chi^{X,F_+}_{F}(nF+mG,P^r)$.
We will frequently use the following
\begin{Notation}
\begin{enumerate}
\item
For a power series $f=\sum_{n\ge 0} f_n(y)q^n \in {\mathbb C}[y^{\pm 1}][[q]]$, and a polynomial $g\in {\mathbb C}[y^{\pm 1}]$ we say that $g$ {\it divides} $f$, if $g$ divides $f_n$ for all $n$.
\item
For a Laurent series $h=\sum_{n} a_nq^n\in {\mathbb C}((q))$ the {\it principal part} is ${\mathcal P}[h]:=\sum_{n\le 0} a_nq^n$. Note that this contains the coefficient of $q^0$. This is because
we think of $q$ as $e^{\pi i \tau/4}$, with $\tau$ in ${\mathcal H}$, and then $\frac{dq}{q}=\frac{\pi i}{4} d\tau$.
For a series $h=\sum_{n\ge 0} h_n(q)\Lambda^n\in {\mathbb C}((q))[[\Lambda]]$, the {\it principal part} is ${\mathcal P}[h]:=\sum_{n\ge 0} {\mathcal P}[h_n]\Lambda^n\in {\mathbb C}((q))[[\Lambda]]$.
We recall the previous notation $\mathop{\text{\rm Coeff}}_{q^0}[h]:=\sum_{n\ge 0} \mathop{\text{\rm Coeff}}_{q^0}[h_n]\Lambda^n$.
\item We write ${\mathbb Q}[[y^{\pm 2} q^4,q^4]]^\times$ for the set of power series in $y^{\pm 2} q^4,q^4$ whose constant term is $1$.
\end{enumerate}
\end{Notation}
\begin{Remark} By \eqref{thetatilde},\eqref{MuL} we have
${\mathcal P}[M^2]=4-q^{-2}\Lambda^2+4\Lambda^4$, and thus obviously
\begin{align*}{\mathcal P}[M^2-(1-\Lambda^4)^2]&= 3 -q^{-2}\Lambda^2+6\Lambda^4 -\Lambda^8,\\
{\mathcal P}[M^2(1+\Lambda^4)-2(1-\Lambda^4)^2]&=2-q^{-2}\Lambda^2+12\Lambda^4-q^{-2}\Lambda^6+2\Lambda^8.
\end{align*}
\end{Remark}
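Since $\Lambda$ and $q$ are independent variables here, multiplying by a polynomial in $\Lambda$ alone never lowers the $q$-degree, so both formulas of the Remark follow from ${\mathcal P}[M^2]$ by direct expansion. A throwaway check with a minimal Laurent-polynomial type (hypothetical code, not part of the paper):

```python
# Laurent polynomials in q and Lambda, stored as {(i, j): c} for c * q^i * Lam^j.
def mul(a, b):
    out = {}
    for (i, j), c in a.items():
        for (k, l), d in b.items():
            out[(i + k, j + l)] = out.get((i + k, j + l), 0) + c * d
    return out

def sub(a, b):
    return {k: a.get(k, 0) - b.get(k, 0) for k in set(a) | set(b)}

def clean(a):
    return {k: v for k, v in a.items() if v}

PM2 = {(0, 0): 4, (-2, 2): -1, (0, 4): 4}   # P[M^2] = 4 - q^{-2} Lam^2 + 4 Lam^4
L4 = {(0, 0): 1, (0, 4): -1}                # 1 - Lam^4
sq = mul(L4, L4)                            # (1 - Lam^4)^2

# P[M^2 - (1-Lam^4)^2] = 3 - q^{-2} Lam^2 + 6 Lam^4 - Lam^8
assert clean(sub(PM2, sq)) == {(0, 0): 3, (-2, 2): -1, (0, 4): 6, (0, 8): -1}

# P[M^2 (1+Lam^4) - 2(1-Lam^4)^2] = 2 - q^{-2} Lam^2 + 12 Lam^4 - q^{-2} Lam^6 + 2 Lam^8
lhs = sub(mul(PM2, {(0, 0): 1, (0, 4): 1}), mul({(0, 0): 2}, sq))
assert clean(lhs) == {(0, 0): 2, (-2, 2): -1, (0, 4): 12, (-2, 6): -1, (0, 8): 2}
```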
\begin{Lemma}\label{theth1} For all $r\in {\mathbb Z}_{>0}$ we have
\begin{align*}
\tag{1} &g^r_1:={\mathcal P}\Big[\frac{1}{2\sinh(h)} M^{2r}u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r},\\
\tag{2}&g^r_2:={\mathcal P}\Big[-\frac{1}{2}\coth(h)M^{2r}u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r+1},\\
\tag{3}&{\mathcal P}\Big[\frac{1}{2\sinh(3h/2)}M\big({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\big)u'h^*\Lambda^2\Big]=\Lambda^4,\\
\tag{4}& g^r_3:={\mathcal P}\Big[\frac{1}{2\sinh(3h/2)}M^{2r-1}(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r},\\
\tag{5}&g^r_4:={\mathcal P}\Big[-\frac{1}{2}\coth(3h/2)M^{2r-2}(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4)) u'h^*\Lambda^2\Big]\in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r+1},\\
\tag{6}&g^r_5:={\mathcal P}\Big[-\frac{1}{2}\tanh(h)M^{2r-2}\big({\widetilde \theta}_4(h)^8(M^2-(1-\Lambda^4)^2)-1\big) u'h^*\Lambda^2\Big] \in {\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le r+2}.
\end{align*}
\end{Lemma}
\begin{pf}
(1) We know ${\widetilde \theta}_4(h)\in {\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$, ${\widetilde \theta}_1(h)\in iq(y-y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$,
and from the product formula of \eqref{theta} we see that even ${\widetilde \theta}_1(2h)\in iq(y^2-y^{-2}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$.
By \defref{blowpol}, \propref{rnpropo} we have
$\frac{{\widetilde \theta}_1(2h)}{{\widetilde \theta}_4(h)^4}=\Lambda M$, thus we get that
$\Lambda^{2} M^{2}\in q^{2}(y^2-y^{-2})^{2}{\mathbb Q}[[y^{\pm 2}q^4,q^4]].$
As $u'\in q^{-2}{\mathbb Q}[[q^4]]$, we get that
$$f(y,q):=\sum_{n\ge 0} f_n(y)q^{4n}:=\frac{1}{\sinh(h)}\Lambda^{2} M^{2}u'\in (y^2-y^{-2}){\mathbb Q}[[y^{\pm 2}q^4,q^4]].$$
Thus $f_n(y)$ is a Laurent polynomial in $y^2$ of degree at most $n+1$, and we see from the definitions that it is antisymmetric under $y\to y^{-1}$.
Therefore $f_n(y)$ can be written as a linear combination of $\sinh(lh)$ for $l=1,\ldots, n+1$.
Thus we get by \lemref{qpow} that $f_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$, and thus the principal part of
$q^{4n} f_n(y)h^*$ vanishes unless $4n\le 2n+2$, i.e. $n\le 1$.
Therefore the principal part of $f(y,q)h^*$ is a polynomial in $q^{-2}\Lambda^2,\Lambda^2q^2$, and $q^4$ and thus (as the power of $q$ must be nonpositive)
a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$, and we see that its degree is at most $1$.
By \eqref{MuL}, we have that $M^2=4+4u\Lambda^2+4\Lambda^4$. Using that $u\in q^{-2}{\mathbb Q}[[q^4]]$ we get that
$M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$. Therefore by the above
$$M^{2r-2}f_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+r}{\mathbb Q}[[q^2\Lambda^2,q^4]].$$
The same argument as above shows that the principal part of $M^{2r-2}f(y,q)h^*$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $r$.
(2) In (1) we have seen that $$M^{2r-2}f_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+r}{\mathbb Q}[[q^2\Lambda^2,q^4]].$$
We have $\coth(h)\Lambda^{2} M^{2r}u'h^*=\cosh(h)M^{2r-2}f(y,q)h^*$, and by
\lemref{qpow} we have that $$\cosh(h)M^{2r-2}f_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+r+1}{\mathbb Q}[[q^2\Lambda^2,q^4]].$$
The same argument as in (1) shows that the principal part of $\cosh(h)M^{2r-2}f(y,q)h^*$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $r+1$.
(3) By \cite[Prop.~5.10(5)]{GY} and its proof, we have that ${\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\in {\mathbb Q}[y^{\pm 1}][[q]]$ is divisible by $y^3-y^{-3}$.
Thus also $M({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1)\in {\mathbb Q}[y^{\pm 2}][[q]]$ is divisible by $y^3-y^{-3}$.
We note that $\Lambda\in iq(y-y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, thus $1-\Lambda^4\in {\mathbb Q}[y^{\pm 2}]_{\le 1}{\mathbb Q}[[y^{\pm 2}q^4,q^4]]$.
We already know ${\widetilde \theta}_4(h)\in {\mathbb Q}[[y^{\pm 2}q^4,q^4]]$,
$M\in (y+y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$. Thus $M({\widetilde \theta}_4^3(1-\Lambda^4)-1)\in (y^3-y^{-3}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$.
Therefore, writing
$$f:=\sum_{n\ge 0} f_n(y) q^{4n}:=\frac{1}{2\sinh(3h/2)}M({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1),$$
$f_n(y)$ is a Laurent polynomial in $y^2$ of degree at most $n$, and we see from the definitions that it is antisymmetric under $y\to y^{-1}$.
Thus by \lemref{qpow} we get $f_n(y)h^*\in{\mathbb Q}[q^{-2}\Lambda^2]_{\le n}{\mathbb Q}[[q^2\Lambda^2, q^4]]$. Therefore $fh^*\in {\mathbb Q}[[q^2\Lambda^2, q^4]]$ and $f h^*u'\Lambda^2\in
{\mathbb Q}[q^{-2}\Lambda^2,\Lambda^4]_{\le 1}{\mathbb Q}[[q^2\Lambda^2, q^4]]$. Computation of the first few coefficients gives ${\mathcal P}[f h^*u'\Lambda^2]=\Lambda^4$.
(4) As ${\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\in {\mathbb Q}[y^{\pm 1}][[q]]$ is divisible by $y^3-y^{-3}$, the same is true for $\Lambda M^2({\widetilde \theta}_4^3(1-\Lambda^4)-1)$.
On the other hand $\Lambda(M^2-(1-\Lambda^4)^2)\in i{\mathbb Q}[y^{\pm 1}][[q]]$, and by $\Lambda(M^2-(1-\Lambda^4)^2)=\widetilde S_3=\frac{{\widetilde \theta}_1(3h)}{{\widetilde \theta}_4(h)^9}$
(see \defref{blowpol}, \propref{rnpropo}), we see that
$\Lambda(M^2-(1-\Lambda^4)^2)$ vanishes for $3h\in 2\pi i {\mathbb Z}$, i.e. when $y^3=y^{-3}$. Thus it is also divisible by $y^3-y^{-3}$. Therefore also
$$\Lambda (M^2{\widetilde \theta}_4(h)^3(1-\Lambda^4)-(1-\Lambda^4)^2)=\Lambda M^2({\widetilde \theta}_4^3(1-\Lambda^4)-1)+\Lambda(M^2-(1-\Lambda^4)^2)\in i{\mathbb Q}[y^{\pm 1}][[q]]$$
is divisible by $y^3-y^{-3}$. We note that $(1-\Lambda^4)=\widetilde R_2=\frac{{\widetilde \theta}_4(2h)}{{\widetilde \theta}_4(h)^4}\in {\mathbb Q}[y^{\pm1}][[q]]$ does not vanish at
any $h$ with $3h\in 2\pi i {\mathbb Z}$. It follows that also the power series $\Lambda (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))$ is divisible by $y^3-y^{-3}$.
Finally we note that $\Lambda\in iq(y-y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, thus $1-\Lambda^4\in {\mathbb Q}[y^{\pm 2}]_{\le 1}{\mathbb Q}[[y^{\pm 2}q^4,q^4]]$.
Thus $$\Lambda (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))\in iq(y-y^{-1}){\mathbb Q}[y^{\pm 2}]_{\le 1}{\mathbb Q}[[y^{\pm2}q^4,q^4]],$$ and therefore, as it is divisible by $y^3-y^{-3}$, we can write
$\frac{1}{\sinh(3h/2)}M \Lambda (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))=q \sum_{n\ge 0} \overline f_n(y)q^{4n}$, where $\overline f_n(y)$ is an odd Laurent polynomial in $y$ of degree $2n+1$, symmetric under $y\to y^{-1}$. Thus by \lemref{qpow} we get $\overline f_n(y)h^*\in \Lambda{\mathbb Q}[q^{-2}\Lambda^2]_{\le n}{\mathbb Q}[[q^2\Lambda^2, q^4]]$, and thus
$\overline f_n(y)h^*u'\Lambda\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+1}{\mathbb Q}[[q^2\Lambda^2, q^4]]$.
It follows as before that the principal part of
$\frac{1}{2\sinh(3h/2)}M (M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))u'h^*\Lambda^2$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $1$.
Using the fact that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$ in the same way as in the proof of (1) we see that
the principal part of $\frac{1}{2\sinh(3h/2)} M^{2r-1}(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4))u'\Lambda^2h^*$ is a polynomial of degree at most $r$ in $q^{-2}\Lambda^2$ and $\Lambda^4$.
(5) We see that the left hand side of (5) is obtained from the left hand side of (4) by multiplying by $\cosh(3h/2)/M$.
As by the above $M\in (y+y^{-1}){\mathbb Q}[[y^{\pm 2}q^4,q^4]]^\times$, we see that
$$2\cosh(3h/2)/M=(y^3+y^{-3})/M\in (y^2-1+y^{-2}){\mathbb Q}[[y^{\pm 2}q^4,q^4]] \subset
{\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]],$$
where the inclusion on the right follows again by \lemref{qpow}. Therefore (5) follows from (4).
(6) We note that by \defref{blowpol} and \propref{rnpropo} we have
$$
s_1:=(1+\Lambda^4)M^2-2(1-\Lambda^4)^2=\frac{S_4(\Lambda,M)}{S_2(\Lambda,M)R_2(\Lambda,M)}=\frac{{\widetilde \theta}_1(4h)}{{\widetilde \theta}_1(2h) {\widetilde \theta}_4(2h){\widetilde \theta}_4(h)^8}.$$
Again $s_1$ is in ${\mathbb Q}[y^{\pm 1}][[q]]$. As ${\widetilde \theta}_4(h)$ has no zeros on $2\pi i {\mathbb R}$ and ${\widetilde \theta}_1(h)$ vanishes precisely for $h\in 2\pi i {\mathbb Z}$, we find that $s_1$ vanishes
if $y^4=y^{-4}$, but not $y^2=y^{-2}$. Thus the coefficient of every power of $q$ of $s_1$ is divisible by $y^2+y^{-2}$.
In \cite{GY} Proposition 5.10(6) and its proof it is shown that
$$s_2:={\widetilde \theta}_4(h)^8(1-\Lambda^4)^3-(1+\Lambda^4)\in (y^2+y^{-2}){\mathbb Q}[y^{\pm 1}][[q]].$$
Thus also
$$M^2{\widetilde \theta}_4(h)^8(1-\Lambda^4)^3-2(1-\Lambda^4)^2=M^2s_2+s_1\in (y^2+y^{-2}){\mathbb Q}[y^{\pm 1}][[q]].$$
As $\widetilde R_2=(1-\Lambda^4)\in {\mathbb Q}[y^{\pm 1}][[q]]^\times$ does not vanish for $h\in i{\mathbb R}$, we get that
$s_3:=M^2{\widetilde \theta}_4(h)^8(1-\Lambda^4)-2\in(y^2+y^{-2}) {\mathbb Q}[y^{\pm 1}][[q]].$
Therefore also
$$\frac{1}{2}(s_3+{\widetilde \theta}_4(h)^8s_1)=M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1 \in (y^2+y^{-2}){\mathbb Q}[y^{\pm 1}][[q]].$$
On the other hand we know
$M^2\in (y+y^{-1})^2 {\mathbb Q}[[y^{\pm 2}q^4,q^4]]$, ${\widetilde \theta}_4(h)\in {\mathbb Q}[[y^{\pm 2}q^4,q^4]]$ and $(1-\Lambda^4)^2\in {\mathbb Q}[y^{\pm 2}]_{\le 2}{\mathbb Q}[[y^{\pm 2}q^4,q^4]]$.
Thus
$$l:=\tanh(h)\big(M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1\big)\in {\mathbb Q}[y^{\pm 2}]_{\le 2}{\mathbb Q}[[y^{\pm 2}q^4,q^4]].$$
Thus we can write $l=\sum_{n\ge 0} l_n(y)q^{4n}$ where $l_n(y)$ is a Laurent polynomial in $y^2$ of degree at most $n+2$, symmetric under $y\to y^{-1}$. Thus by \lemref{qpow} we get $ l_n(y)h^*\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+2}{\mathbb Q}[[q^2\Lambda^2, q^4]]$, and thus
$l_n(y)h^*u'\Lambda^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le n+3}{\mathbb Q}[[q^2\Lambda^2, q^4]]$.
It follows as before that the principal part of
$\tanh(h)\big(M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1\big)h^*u'\Lambda^2$ is a polynomial in $q^{-2}\Lambda^2$ and $\Lambda^4$ of degree at most $3$.
Using again the fact that $M^2\in {\mathbb Q}[q^{-2}\Lambda^2]_{\le 1}{\mathbb Q}[[q^2\Lambda^2,q^4]]$, we see that
the principal part of $\tanh(h)M^{2r-2}\big(M^2{\widetilde \theta}_4(h)^8-{\widetilde \theta}_4(h)^8(1-\Lambda^4)^2-1\big)h^*u'\Lambda^2$ is a polynomial of degree at most $r+2$ in $q^{-2}\Lambda^2$ and $\Lambda^4$.
\end{pf}
\begin{Remark}
The principal parts above can be easily computed by calculations with the lower order terms of the power series, using the formulas given in \secref{thetamod}.
We see for instance:
\begin{align*}
g_1^1&=q^{-2}\Lambda^2-4\Lambda^4,\quad
g_2^1=-q^{-2}\Lambda^2 +\Big(\frac{1}{2}q^{-4} - 8\Big) \Lambda^4-q^{-2}\Lambda^6-\Lambda^8,\\
g_3^1&=q^{-2}\Lambda^2 -5\Lambda^4,
\quad g_3^2=4q^{-2}\Lambda^2 - (q^{-4} + 20)\Lambda^4
+ 9q^{-2}\Lambda^6 -23\Lambda^8,\\
g_4^1&=-\frac{1}{2}q^{-2}\Lambda^2 + (\frac{1}{2}q^{-4}- 11 )\Lambda^4-\frac{1}{2}\Lambda^8,\\
g_5^1&=(\frac{1}{2}q^{-4} - 12)\Lambda^4 + 2q^{-2}\Lambda^6 + 4\Lambda^8 -\frac{1}{2}q^{-2} \Lambda^{10} +5\Lambda^{12}.
\end{align*}
\end{Remark}
We apply \lemref{theth1} to compute the limit of the $K$-theoretic Donaldson invariants at $F$.
\begin{Proposition}\label{p11r} For $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$ and all $n\in {\mathbb Z}$ we have
\begin{enumerate}
\item
$\displaystyle{
1+\chi^{X,F_+}_{F}(nF)=\frac{1}{(1-\Lambda^4)^{n+1}}}.$
\item
For all $r>0$ there is a polynomial $h^0_{r}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$ with
$\chi^{X,F_+}_{F}(nF,P^{2r})=h^0_r(n,\Lambda^4).$
\item
$\displaystyle{1+(2n+5)\Lambda^4+\chi^{X,F_+}_{0}(nF)=\frac{1}{(1-\Lambda^4)^{n+1}}}$.
\item
For all $r>0$ there is a polynomial $\overline h^0_{r}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+1}$ with
$\chi^{X,F_+}_{0}(nF,P^{2r})=\overline h^0_r(n,\Lambda^4).$
\end{enumerate}
\end{Proposition}
\begin{proof}
(1) and (3) are proven in \cite[Prop.~5.14]{GY}.
(2) Let $r>0$. By \propref{Fplus} we have
\begin{align*}
\chi^{X,F_+}_{F}(nF,P^{2r})&=\mathop{\text{\rm Coeff}}_{q^0}\Big[\frac{1}{2\sinh(h)}\Lambda^2{\widetilde \theta}_4(h)^{4n+8}u'h^*M^{2r}\Big]=\mathop{\text{\rm Coeff}}_{q^0}\Big[g_{1}^r {\widetilde \theta}_4(h)^{4n+8}\Big],
\end{align*}
where the last step uses \lemref{theth1}(1) and the fact that ${\widetilde \theta}_4(h)\in {\mathbb Q}[[q^2\Lambda^2,q^4]]^\times$,
and thus ${\widetilde \theta}_4(h)^{4n+8}\in {\mathbb Q}[[nq^2\Lambda^2,nq^4,q^2\Lambda^2,q^4]]^\times$.
As $g_{1}^r(q^{-2}\Lambda^2,\Lambda^4)$ is a polynomial of degree at most $r$, we see that
$\mathop{\text{\rm Coeff}}_{q^0}\big[g_{1}^r {\widetilde \theta}_4(h)^{4n+8}\big]$ is a polynomial of degree at most $r$ in $\Lambda^4$, $n\Lambda^4$.
(4) Let $r>0$. By \propref{Fplus} and \lemref{theth1}(2) we have
\begin{align*}
\chi^{X,F_+}_{0}(nF,P^{2r})&=\mathop{\text{\rm Coeff}}_{q^0}\Big[-\frac{1}{2}\coth(h)\Lambda^2{\widetilde \theta}_4(h)^{4n+8}u'h^*M^{2r}\Big]=\mathop{\text{\rm Coeff}}_{q^0}\Big[g_{2}^r{\widetilde \theta}_4(h)^{4n+8}\Big].
\end{align*}
As $g_{2}^r$ is a polynomial of degree at most $r+1$, we see as in (2) that
$\mathop{\text{\rm Coeff}}_{q^0}\big[g_{2}^r{\widetilde \theta}_4(h)^{4n+8}\big]$ is a polynomial of degree at most $r+1$ in $\Lambda^4$, $n\Lambda^4$.
\end{proof}
\begin{Remark}
We list the first few polynomials $h^0_r$, $\overline h^0_r$.
\begin{align*} &h^0_1=(4n + 4)\Lambda^4, \quad h^0_2=(16n + 16)\Lambda^4 - (8n^2 + 6n -3)\Lambda^8, \\ &h^0_3=(64n + 64)\Lambda^4 + (-64n^2 + 24n + 100)\Lambda^8 + (\hbox{$\frac{32}{3}$}n^3 - 8n^2 - \hbox{$\frac{68}{3}$}n)\Lambda^{12},\\
&\overline h^0_1=-(4n + 16)\Lambda^4 + (4n^2 + 15n + 13)\Lambda^8,\\ &\overline h^0_2=-(16n + 64)\Lambda^4 + (24n^2 + 78n + 18)\Lambda^8 - (\hbox{$\frac{16}{3}$}n^3 + 20n^2 + \hbox{$\frac{50}{3}$}n -2)\Lambda^{12}.
\end{align*}
\end{Remark}
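Part (1) of \propref{p11r} has a simple coefficientwise meaning: the coefficient of $\Lambda^{4k}$ in $(1-\Lambda^4)^{-(n+1)}$ is $\binom{n+k}{k}$. This elementary fact can be spot-checked by a short computation (Python sketch, for illustration only; it is not part of the PARI code used later):

```python
from math import comb

def geom_coeffs(n, K):
    """Coefficients of Lambda^{4k}, k = 0..K, of 1/(1 - Lambda^4)^(n+1):
    multiplying a series by 1/(1-t) takes prefix sums, done n+1 times."""
    out = [1] + [0] * K          # the constant series 1
    for _ in range(n + 1):
        out = [sum(out[: k + 1]) for k in range(K + 1)]
    return out

for n in range(4):
    assert geom_coeffs(n, 6) == [comb(n + k, k) for k in range(7)]
```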
\begin{Proposition}\label{p11GM} For $X={\mathbb P}^1\times{\mathbb P}^1$ and $n\in {\mathbb Z}$, and for $X=\widehat {\mathbb P}^2$ and $n\in {\mathbb Z}+\frac{1}{2}$, and all $r\in {\mathbb Z}_{\ge 0}$ we have
the following.
\begin{enumerate}
\item
$\displaystyle{\chi^{X,F_+}_{F}(nF+G,P^{2r+1})=\frac{1}{(1-\Lambda^4)^{2n+1-2r}}+h^1_r(n,\Lambda^4)},$
where $h^1_r(n,\Lambda^4)\in{\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$.
\item
$\displaystyle{
\chi^{X,F_+}_{0}(nF+G,P^{2r})=\frac{1}{(1-\Lambda^4)^{2n+2-2r}}+\overline h^1_r(n,\Lambda^4)},$
where $\overline h^1_r(n,\Lambda^4)\in{\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+1}$.
\end{enumerate}
\end{Proposition}
\begin{pf}
(1) First we deal with the case $r=0$. We do this by ascending and descending induction on $n$.
Let $n=-1$. By \corref{blowp} we know that $\chi^{X,G}_{F}(-F+G,P)=0$
and $$\chi^{X,F+}_{F}(-F+G,P)=\sum_{\xi}\delta_\xi^X(-F+G,P),$$
where $\xi$ runs through all classes of type $F$ with $G\xi<0<F\xi$,
i.e. through all $\xi=2mG-(2n-1)F$ with $n,m\in {\mathbb Z}_{>0}$. By \lemref{vanwall} we have $\delta_{2mG-(2n-1)F}^X(-F+G,P)=0$ unless
$|6n-3-2m|+3\ge 8nm-4m$, and we check easily that this can only happen for $n=m=1$. Then computing with the lowest order terms of the formula of \defref{wallcrossterm} gives
$\delta_{2G-F}^X(-F+G,P)=-\Lambda^4$. Thus $\chi^{X,F_+}_{F}(-F+G,P)=-\Lambda^4=(1-\Lambda^4)-1$. This shows the case $n=-1$.
Now let $n\in \frac{1}{2}{\mathbb Z}$ be general, then we have by \propref{Fplus} that
\begin{align*}(1-\Lambda^4)\chi^{X,F_+}_{F}&((n+1/2)F+G,P)-\chi^{X,F_+}_{F}(nF+G,P)\\
&=\mathop{\text{\rm Coeff}}_{q^0}\left[\frac{1}{2\sinh(3h/2)}{\widetilde \theta}_4(h)^{6(n+2)}\big({\widetilde \theta}_4(h)^3(1-\Lambda^4)-1\big)Mu'h^*\Lambda^2\right]
\\&=\mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{6(n+2)}\Lambda^4\big]=
\Lambda^4,\end{align*}
where in the last line we have used \lemref{theth1}(3) and the fact that ${\widetilde \theta}_4(h)\in {\mathbb Q}[[q^2\Lambda^2,q^4]]^\times$.
Thus
$$(1-\Lambda^4)\big(1+\chi^{X,F_+}_{F}((n+1/2)F+G,P)\big)-(1+\chi^{X,F_+}_{F}(nF+G,P))=(1-\Lambda^4)-1+\Lambda^4=0$$
and using the result for $n=-1$, $r=0$, the result for $r=0$ follows by ascending and descending induction over $n\in \frac{1}{2}{\mathbb Z}$.
Let $r>0$, $n\in \frac{1}{2}{\mathbb Z}$. By \propref{Fplus} we have
\begin{align*}\chi^{X,F_+}_{F}\big(nF+G,&P^{2r+1}\big)-(1-\Lambda^4) \chi^{X,F_+}_{F}\big((n-1/2)F+G,P^{2r-1}\big)\\
&=\mathop{\text{\rm Coeff}}_{q^0}\Big[\frac{1}{2\sinh(3h/2)}{\widetilde \theta}_4(h)^{6n+9}M^{2r-1}\big(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4)\big)u'h^*\Lambda^2\Big]
\\&= \mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{6n+9}g^r_3\big],\end{align*}
where the last line is by \lemref{theth1}(4).
As ${\widetilde \theta}_4(h)^{6n+9}\in {\mathbb Q}[[nq^2\Lambda^2,nq^4,q^2\Lambda^2,q^4]]^\times$, and $g_3^r$ is a polynomial in $q^{-2}\Lambda^2$, $\Lambda^4$ of degree at most
$r$, we find, as in the proof of \propref{p11r}, that
$$h'_r(n,\Lambda^4):=\chi^{X,F_+}_{F}(nF+G,P^{2r+1})-(1-\Lambda^4) \chi^{X,F_+}_{F}\big((n-1/2)F+G,P^{2r-1}\big)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}.$$
Assume now by induction on $r$ that
$$\chi^{X,F_+}_{F}\big((n-1/2)F+G,P^{2r-1}\big)=\frac{1}{(1-\Lambda^4)^{2n-2r+2}}+h^1_{r-1}\big((n-1/2),\Lambda^4\big)$$
with $h^1_{r-1}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r-1}.$
Then
\begin{align*}\chi^{X,F_+}_{F}\big(nF+G,P^{2r+1}\big)-\frac{1}{(1-\Lambda^4)^{2n-2r+1}}&=(1-\Lambda^4) h^1_{r-1}\big((n-1/2),\Lambda^4\big)+h'_r(n,\Lambda^4).
\end{align*}
Thus we put
$$h^1_r(n,\Lambda^4):=(1-\Lambda^4) h^1_{r-1}\big((n-1/2),\Lambda^4\big)+h'_r(n,\Lambda^4).$$ As $h'_r(n,\Lambda^4)$ has degree at most $r$ in $\Lambda^4$, $n\Lambda^4$, the claim follows.
(2) The case $r=0$ is proven in \cite[Prop.~5.16]{GY}, with $\overline h^1_0(n,\Lambda^4)=-1-(3n+7)\Lambda^4$.
For $r>0$ we prove the result by induction. Let $r>0$, then
we have by \propref{Fplus} and \lemref{theth1}(5)
\begin{align*}\chi^{X,F_+}_{0}\big(nF+G,P^{2r}\big)&-(1-\Lambda^4)\chi^{X,F_+}_{0}\big((n-1/2)F+G,P^{2r-2}\big)\\&=\mathop{\text{\rm Coeff}}_{q^0}\Big[-\frac{1}{2}\coth(3h/2){\widetilde \theta}_4(h)^{6n+9}M^{2r-2}\big(M^2{\widetilde \theta}_4(h)^3-(1-\Lambda^4)\big)u'h^*\Lambda^2\Big]
\\&= \mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{6n+9}g^{r}_4\big]=:l'_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+1}.\end{align*}
Assume now that
$$\chi^{X,F_+}_{0}\big((n-1/2)F+G,P^{2r-2}\big)=\frac{1}{(1-\Lambda^4)^{2n-2r+3}}+\overline h^1_{r-1}(n-1/2,\Lambda^4),$$
with $\overline h^1_{r-1}(n-1/2,\Lambda^4)\in{\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$.
Then
\begin{align*}\chi^{X,F_+}_{0}(nF+G,P^{2r})-\frac{1}{(1-\Lambda^4)^{2n-2r+2}}
&=(1-\Lambda^4) \overline h^1_{r-1}((n-1/2),\Lambda^4)+l'_r(n,\Lambda^4).
\end{align*}
The result follows by induction on $r$.
\end{pf}
\begin{Remark}
We list the $h^1_r(n,\Lambda^4),\ \overline h^1_r(n,\Lambda^4)$ for small values of $r$.
\begin{align*}
&h^1_0=-1,\quad h^1_1=-1+(6n + 5)\Lambda^4, \quad h^1_2=-1+(30n + 19)\Lambda^4 - (18n^2 + 15n - 2)\Lambda^8,\\ & h^1_3=-1+(126n + 69)\Lambda^4+ (-162n^2 + 9n + 114)\Lambda^8 + (36n^3 - 43n - 7)\Lambda^{12},\\
&\overline h^1_0=-1-(3n + 7)\Lambda^4, \quad \overline h^1_1=-1-(6n + 20)\Lambda^4 + (9n^2 + \hbox{$\frac{69}{2}$}n + 32)\Lambda^8,\\&
\overline h^1_2=-1-(18n +78)\Lambda^4 + (54n^2 + 189n + 120)\Lambda^8 - (18n^3 + 81n^2 + 109n +40)\Lambda^{12}.
\end{align*}
\end{Remark}
\begin{Proposition}\label{p112GM}
Let $X={\mathbb P}^1\times{\mathbb P}^1$ or $X=\widehat {\mathbb P}^2$.
\begin{enumerate}
\item
For all $n\in {\mathbb Z}$
$$\chi^{X,F_+}_{F}(nF+2G)=
\frac{1}{2}\frac{(1+\Lambda^4)^n-(1-\Lambda^4)^n}{(1-\Lambda^4)^{3n+3}}.$$
\item For all $n\in {\mathbb Z}$ and all $r>0$ we have
$$\chi^{X,F_+}_{F}(nF+2G,P^{2r})=\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+h^2_r(n,\Lambda^4),$$
where $h^2_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
\item
$$\chi^{X,F_+}_{0}(nF+2G)=
\frac{1}{2}\frac{(1+\Lambda^4)^n+(1-\Lambda^4)^n}{(1-\Lambda^4)^{3n+3}}-1-(4n+9)\Lambda^4.$$
\item For all $n\in {\mathbb Z}$ and all $r>0$ we have
$$\chi^{X,F_+}_{0}(nF+2G,P^{2r})=\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+\overline h^2_r(n,\Lambda^4),$$
where $\overline h^2_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
\end{enumerate}
\end{Proposition}
\begin{pf}
(1) and (3) were proven in \cite[Prop.~5.17]{GY}.
(2) We will first show by induction on $r$ that
\begin{equation}\label{p12req}
-\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\tanh(h){\widetilde \theta}_4(h)^{8(n+2)}u'h^*\Lambda^2M^{2r}\big]=2^r\frac{(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+s'_r(n,\Lambda^4)
\end{equation}
for polynomials $s'_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
For $r=0$ this is shown in the proof of \cite[Prop.~5.17]{GY} with $s_0'=-1-(4n+9)\Lambda^4$.
Fix $r>0$, and assume that \eqref{p12req} holds for $r-1$ and for all $n\in {\mathbb Z}$.
By \lemref{theth1}(6) we have
\begin{align*}
-\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\tanh(h)&\big(M^{2r}{\widetilde \theta}_4(h)^{8(n+2)}-(1-\Lambda^4)^2M^{2r-2}{\widetilde \theta}_4(h)^{8(n+2)}-{\widetilde \theta}_4(h)^{8(n+1)}M^{2r-2}\big)u'h^*\Lambda^2\big]\\
&=\mathop{\text{\rm Coeff}}_{q^0}\big[{\widetilde \theta}_4(h)^{8(n+1)}g_5^{r}\big]=:s''_r(n,\Lambda^4).\end{align*}
Again, as ${\widetilde \theta}_4(h)\in {\mathbb Q}[[q^2\Lambda^2,q^4]]^\times$, and $g_5^{r}$ has degree at most $r+2$ in $q^{-2}\Lambda^2,\Lambda^4$, we see that
$s''_r(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+2}$.
Thus we get by induction on $r$
\begin{align*}
-\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[&\tanh(h)M^{2r}{\widetilde \theta}_4(h)^{8(n+2)}u'h^*\Lambda^2\big]= \frac{2^{r-1}(1+\Lambda^4)^{n-r+1}}{(1-\Lambda^4)^{3n+3-2r}}+(1-\Lambda^4)^2s'_{r-1}(n,\Lambda^4)
\\&+
\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+2-2r}}+s'_{r-1}(n-1,\Lambda^4)+s''_r(n,\Lambda^4)=\frac{2^{r}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+s'_{r}(n,\Lambda^4)\end{align*}
with $$s'_{r}(n,\Lambda^4)=(1-\Lambda^4)^2s'_{r-1}(n,\Lambda^4)+s'_{r-1}(n-1,\Lambda^4)+s''_r(n,\Lambda^4).$$ As $s'_{r-1}\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r}$, $s''_r\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r+2},$ we get
$s'_{r}(n,\Lambda^4)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
Now we show (2): We note that
$\frac{1}{2\sinh(2h)}=\frac{1}{4}\big(\coth(h)-\tanh(h)\big)$.
Therefore we get by \propref{Fplus}
\begin{align*}
\chi^{X,F_+}_{F}(nF+2G,P^{2r})&=\frac{1}{4}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(\coth(h)-\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\
=&-\frac{1}{2}\chi^{X,F_+}_0((2n+2) F,P^{2r})+\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(-\frac{1}{2}\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\
=&-\frac{1}{2}\overline h^0_r(2n+2,\Lambda^4)+\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+\frac{1}{2}s_r'(n,\Lambda^4).
\end{align*}
where in the last line we have used \propref{p11r} and \eqref{p12req}.
The claim follows with $h^2_r(n,\Lambda^4)=\frac{1}{2}\big(s_r'(n,\Lambda^4)-\overline h^0_r(2n+2, \Lambda^4)\big)\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le 2r+2}$.
Finally we show (4):
$-\frac{1}{2}\coth(2h)=\frac{1}{4}\big(-\coth(h)-\tanh(h)\big)$ and
\propref{Fplus} give
\begin{align*}
\chi^{X,F_+}_{0}(nF+2G,P^{2r})&=\frac{1}{4}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(-\coth(h)-\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\
=&\frac{1}{2}\chi^{X,F_+}_0((2n+2)F,P^{2r})+\frac{1}{2}\mathop{\text{\rm Coeff}}_{q^0}\big[\big(-\frac{1}{2}\tanh(h)\big){\widetilde \theta}_4(h)^{8(n+2)}M^{2r}u'h^*\Lambda^2\big]\\
=&\frac{1}{2}\overline h^0_r(2n+2,\Lambda^4)+\frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}+\frac{1}{2}s_r'(n,\Lambda^4).
\end{align*}
The claim follows with $\overline h^2_r=\frac{1}{2}\big(s_r'(n,\Lambda^4)+\overline h^0_r(2n+2, \Lambda^4)\big)$.
\end{pf}
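The two elementary hyperbolic identities used in the proof, $\frac{1}{2\sinh(2h)}=\frac{1}{4}\big(\coth(h)-\tanh(h)\big)$ and $-\frac{1}{2}\coth(2h)=\frac{1}{4}\big(-\coth(h)-\tanh(h)\big)$, can be verified numerically; a short Python sketch (illustration only):

```python
from math import sinh, cosh, tanh, isclose

def coth(x):
    return cosh(x) / sinh(x)

# 1/(2 sinh 2h) = (coth h - tanh h)/4  and  -coth(2h)/2 = (-coth h - tanh h)/4
for h in (0.3, 1.0, 2.5):
    assert isclose(1 / (2 * sinh(2 * h)), (coth(h) - tanh(h)) / 4, rel_tol=1e-12)
    assert isclose(-coth(2 * h) / 2, (-coth(h) - tanh(h)) / 4, rel_tol=1e-12)
```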
\begin{Remark} Again we can readily compute the first few of the $h^2_r$, $\overline h^2_r$.
\begin{align*} &h^2_1=-1,\quad h^2_2=-2 + (8n + 6)\Lambda^4,\quad h^2_3=-4 + (48n + 24)\Lambda^4 - (32n^2 + 28n)\Lambda^8,\\ &h^2_4=-8 + (224n + 72)\Lambda^4 + (-320n^2 - 40n + 128)\Lambda^8 + (\hbox{$\frac{256}{3}$}n^3 + 32n^2 - \hbox{$\frac{184}{3}$}n - 16)\Lambda^{12},\\
&\overline h^2_1= -1 -(8n + 24)\Lambda^4 + (16n^2 + 62n + 59)\Lambda^8,\\
&\overline h^2_2=-2 -(24n +90)\Lambda^4 + (96n^2 + 348n + 270)\Lambda^8 -(\hbox{$\frac{128}{3}$}n^3 + 208n^2 + \hbox{$\frac{964}{3}$}n +154)\Lambda^{12}.
\end{align*}
It appears that one has $h^2_r\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r-1}$ and $\overline h^2_r\in {\mathbb Q}[\Lambda^4,n\Lambda^4]_{\le r}$.
\end{Remark}
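The cancellation driving the induction step in the proof of \propref{p112GM}(2) is just $(1+\Lambda^4)+(1-\Lambda^4)=2$; the resulting identity of rational functions can be spot-checked with exact arithmetic, $x$ standing in for $\Lambda^4$ (Python sketch, illustration only):

```python
from fractions import Fraction

def lhs(n, r, x):
    # first summand: (1-x)^2 * 2^(r-1)(1+x)^(n-r+1)/(1-x)^(3n+5-2r), cancelled
    a = Fraction(2)**(r - 1) * (1 + x)**(n - r + 1) / (1 - x)**(3*n + 3 - 2*r)
    # second summand: the induction hypothesis with n replaced by n-1
    b = Fraction(2)**(r - 1) * (1 + x)**(n - r) / (1 - x)**(3*n + 2 - 2*r)
    return a + b

def rhs(n, r, x):
    return Fraction(2)**r * (1 + x)**(n - r) / (1 - x)**(3*n + 3 - 2*r)

x = Fraction(1, 7)
for n in range(3, 6):
    for r in range(1, n):
        assert lhs(n, r, x) == rhs(n, r, x)
```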
\begin{Corollary}\label{P2P12GH} Let $X=\widehat {\mathbb P}^2$ or $X={\mathbb P}^1\times {\mathbb P}^1$, let $\omega\in H^2(X,{\mathbb R})$ be a class with $\<\omega^2\>>0$.
\begin{enumerate}
\item For $n\in {\mathbb Z}_{\ge 0}$ we have
\begin{align*}
\chi_{0}^{X,\omega}(nF)&\equiv \chi_{F}^{X,\omega}(nF)\equiv \frac{1}{(1-\Lambda^4)^{n+1}}, \quad
\chi_{0}^{X,\omega}(nF,P^{2r})\equiv \chi_{F}^{X,\omega}(nF,P^{2r})\equiv 0 \hbox{ for }r>0.
\end{align*}
\item For $n\in {\mathbb Z}_{\ge 0}$ if $X={\mathbb P}^1\times {\mathbb P}^1$ and $n\in {\mathbb Z}_{\ge 0}+\frac{1}{2}$ if $X=\widehat {\mathbb P}^2$, we have
\begin{align*}
\chi_{0}^{X,\omega}(nF+G,P^{2r})&\equiv \frac{1}{(1-\Lambda^4)^{2n+2-2r}},\quad
\chi_{F}^{X,\omega}(nF+G,P^{2r+1})\equiv \frac{1}{(1-\Lambda^4)^{2n+1-2r}}.
\end{align*}
\item For $n\in {\mathbb Z}_{\ge 0}$ we have
\begin{align*}
\chi_{0}^{X,\omega}(nF+2G)&\equiv \frac{(1+\Lambda^4)^{n}+(1-\Lambda^4)^n}{2(1-\Lambda^4)^{3n+3}},\quad
\chi_{F}^{X,\omega}(nF+2G)\equiv \frac{(1+\Lambda^4)^{n}-(1-\Lambda^4)^n}{2(1-\Lambda^4)^{3n+3}}\\
\chi_{0}^{X,\omega}(nF+2G,P^{2r})&\equiv \chi_{F}^{X,\omega}(nF+2G,P^{2r}) \equiv \frac{2^{r-1}(1+\Lambda^4)^{n-r}}{(1-\Lambda^4)^{3n+3-2r}}\hbox{ for }1\le r \le n.
\end{align*}
\end{enumerate}
\end{Corollary}
\begin{proof}
Write $\omega=uF+vG$, with $u,v\in {\mathbb R}_{>0}$, write $w=v/u$.
By \propref{p11r}, \propref{p11GM}, \propref{p112GM} and \lemref{vanwall} it is sufficient to prove that for $L=nF+mG$, with $0\le m\le 2$, under the assumptions of the Corollary, there are only finitely many classes $\xi$ of type $(0)$ or type $(F)$ with $\<\omega,\xi\>\ge 0>\<F,\xi\>$, such
that $\delta_\xi^X(nF+mG,P^s)\ne 0$, i.e. such that
$-\xi^2\le |\<\xi,(L-K_X)\>| +s+2$.
These walls are of the form
$\xi=aF-bG$ with $a\in {\mathbb Z}_{>0}$ and $b\in 2{\mathbb Z}_{>0}$, and $aw\ge b$, and the condition becomes
\begin{equation}
\label{abwall}
2ab\le |b(n+2)-a(m+2)| +s+2.
\end{equation}
Let $\xi=(aF-bG)$ be such a wall with $\delta_\xi^X(nF+mG,P^s)\ne 0$.
If $a(m+2)\le b(n+2)$, then \eqref{abwall} becomes
$$2ab\le b(n+2)-a(m+2) +s+2 \le b(n+2)+s,$$ therefore $(2a-n-2)b\le s$. Therefore $2a-n-2\le \frac{s}{b}\le \frac{s}{2}$. Therefore $a$ is bounded, and by the condition $b\le aw$ also $b$ is bounded, so there
are only finitely many possibilities for $a,b$.
Now assume $a(m+2)\ge b(n+2)$.
Then as $b\ge 2$, \eqref{abwall} gives
$$4a\le 2ab\le a(m+2)-2(n+2)+s+2,$$ i.e. $(2-m)a\le -2n+s-2$.
If $m=0,1$, then $a\le \frac{-2n+s-2}{2-m}$, thus $a$ is bounded, and by $a(m+2)\ge b(n+2)$ also $b$ is bounded.
If $m=2$, the inequality becomes $2n\le s-2$, so if $2n\ge s$ there are no walls with $\delta_\xi^X(nF+mG,P^s)\ne 0$. Thus the claim follows.
\end{proof}
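The finiteness argument in the proof can be made concrete: enumerating the pairs $(a,b)$ (with $b$ even positive and $aw\ge b$) satisfying \eqref{abwall} up to a generous cap recovers only pairs within the bounds derived above. A Python sketch (the cap and the sample values $n,m,s,w$ are arbitrary choices for illustration):

```python
def walls(n, m, s, w, cap=200):
    """Pairs (a, b), b even positive, a*w >= b, with 2ab <= |b(n+2)-a(m+2)| + s + 2."""
    out = []
    for a in range(1, cap + 1):
        for b in range(2, int(a * w) + 1, 2):
            if 2 * a * b <= abs(b * (n + 2) - a * (m + 2)) + s + 2:
                out.append((a, b))
    return out

# For m = 0 the proof bounds a by max((n+2)/2 + s/4, s/2 - n - 1):
n, m, s, w = 2, 0, 4, 2
sol = walls(n, m, s, w)
bound = max((n + 2) / 2 + s / 4, s / 2 - n - 1)
assert all(a <= bound for a, b in sol)
```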
\begin{Remark}\label{nonwalls}
As we will use this later in \secref{CompP2}, we explicitly state the bounds obtained in the above proof in the case of $X=\widehat {\mathbb P}^2$, $\omega=H$
(i.e. $w=2$ in the notation above). Fix $n\in {\mathbb Z}_{\ge 0}$, $s\in {\mathbb Z}_{\ge 0}$.
Let $\xi=aF-bG$ be a class of type $(0)$ or $(F)$ with $\<\xi, \omega\>\ge 0>\<\xi,F\>$.
\begin{enumerate}
\item If $\delta_\xi^X(nH-nE,P^s)=\delta_\xi^{X}(nF,P^s)\ne 0$, then
\begin{enumerate}
\item either $2a\le (n+2) b$ and $0<a\le\frac{n+2}{2}+\frac{s}{4}$ and $0<b\le 2a$,
\item or $0<(n+2)b\le 2a$ and $0<a\le \frac{s}{2}-n-1$.
\end{enumerate}
\item If $\delta_\xi^X(nH-(n-1)E,P^s)=\delta_\xi^{X}((n-1/2)F+G,P^s)\ne 0$, then
\begin{enumerate}
\item either $3a\le (n+\frac{3}{2}) b$ and $0<a\le \frac{n+3/2}{2}+\frac{s}{4}$ and $0<b\le 2a$,
\item or $0<(n+3/2)b\le 3a$ and $0<a\le s-2n-1$.
\end{enumerate}
\end{enumerate}
\end{Remark}
\begin{Remark}
Note that the results of \corref{P2P12GH} are compatible with \conref{ratconj}.
This is particularly remarkable for part (3) of \corref{P2P12GH}, which can only be proven for $r\le n$, while its correctness for $r>n$ would contradict \conref{ratconj}.
The fact that the formulas hold without restriction for $\chi^{X,F_+}_{0}(nF+2G,P^{2r})$, $\chi^{X,F_+}_{F}(nF+2G,P^{2r})$ is not in contradiction to \conref{ratconj},
because it is only claimed for $\chi^{Y,\omega}_{c_1}(L,P^r)$ with $\omega$ an ample class on $Y$.
\end{Remark}
\section{Computation of the invariants of the plane}
We now want to use the results obtained so far to give an algorithm to compute the generating functions
$\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ of the $K$-theoretic Donaldson invariants of the projective plane. We use this algorithm to prove that these generating functions
are always rational functions of a very special kind.
Then we will use this algorithm to explicitly compute $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ for not too large values of $n$ and $r$.
First we explicitly carry out the algorithm by hand when $r=0$ and $n\le 5$ in an elementary but tedious computation.
Finally we implemented the algorithm as a PARI program, which in principle can prove a formula for $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ for any $n$ and $r$.
The computations have been carried out for $r=0$ and $n\le 11$, and for $r\le 16$ and $n\le 8$.
\subsection{The strategy}\label{strategy}
\corref{blowdownmn} says in particular the following.
\begin{Remark}\label{npol}
\begin{enumerate}
\item For all $n\in {\mathbb Z}_{>0}$ there exist unique polynomials $f_{n},g_n\in {\mathbb Q}[x,\lambda^4]$ and an integer $N_n$, such that
$f_n S_n+g_n S_{n+1}=\lambda(1-\lambda^4)^{N_n}$.
\item For all $n\in {\mathbb Z}_{>0}$ there exist unique polynomials $ h_{n}, l_n\in {\mathbb Q}[x^2,\lambda^4]$ and an integer $M_n$, such that
$h_n R_n+ l_n R_{n+1}=(1-\lambda^4)^{M_n}$.
\end{enumerate}
\end{Remark}
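The Bézout combinations in the Remark above come from the extended Euclidean algorithm in ${\mathbb Q}(\lambda)[x]$, after clearing denominators. The mechanism can be illustrated with SymPy on toy polynomials (the stand-ins below are hypothetical; the actual $S_n$, $R_n$ are handled by the PARI program mentioned above):

```python
from sympy import symbols, gcdex, expand

x = symbols('x')

# Hypothetical stand-ins for S_k, S_{k+1}; the real polynomials
# have coefficients in Q[lambda^4].
S1 = x**3 - 2*x + 1
S2 = x**2 + x + 1

s, t, h = gcdex(S1, S2, x)   # s*S1 + t*S2 = h = gcd(S1, S2)
assert expand(s * S1 + t * S2 - h) == 0
assert h == 1                # these stand-ins are coprime, so the gcd is 1
```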
Using these polynomials, we can determine the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ in terms of those of $\widehat {\mathbb P}^2$.
\begin{Corollary}\label{blowdownform}
For all $n,k\in {\mathbb Z}$, $r\in {\mathbb Z}_{\ge 0}$ we have
\begin{align*}
\tag{H}\chi^{{\mathbb P}^2,H}_H(nH,P^r)&=\frac{1}{\Lambda(1-\Lambda^4)^{N_k}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{F}\Big((n+1-k)G+\frac{n+k-1}{2}F,P^r\cdot f_k(P,\Lambda)\Big)\\&\qquad\qquad +
\chi^{\widehat {\mathbb P}^2,H}_{F}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot g_k(P,\Lambda)\Big)\Big),\\
\tag{0}\chi^{{\mathbb P}^2,H}_0(nH,P^r)&=\frac{1}{(1-\Lambda^4)^{M_k}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n+1-k)G+\frac{n+k-1}{2}F,P^r\cdot h_k(P,\Lambda)\Big)\\&\qquad\qquad +
\chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot l_k(P,\Lambda)\Big)\Big).
\end{align*}
\end{Corollary}
\begin{proof}
Note that $(n-k)G+\frac{n+k}{2}F=nH-kE$. Therefore we get by
\thmref{nblow} that
\begin{align*}
\chi^{\widehat {\mathbb P}^2,H}_{F}&\Big((n+1-k)G+\frac{n+k-1}{2}F,P^r\cdot f_k(P,\Lambda)\Big)
+
\chi^{\widehat {\mathbb P}^2,H}_{F}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot g_k(P,\Lambda)\Big)\\
&=\chi^{{\mathbb P}^2,H}_H\big(nH,P^r \cdot\big(f_k(P,\Lambda) S_{k}(P,\Lambda)+g_k(P,\Lambda) S_{k+1}(P,\Lambda)\big)\big),
\end{align*}
and the result follows by $f_k(P,\Lambda) S_{k}(P,\Lambda)+g_k(P,\Lambda) S_{k+1}(P,\Lambda)=\Lambda(1-\Lambda^4)^{N_k}$.
In the same way
\begin{align*}\chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n+1-k)G&+\frac{n+k-1}{2}F,P^r\cdot h_k(P,\Lambda)\Big)
+
\chi^{\widehat {\mathbb P}^2,H}_{0}\Big((n-k)G+\frac{n+k}{2}F,P^r\cdot l_k(P,\Lambda)\Big)\\
&=\chi^{{\mathbb P}^2,H}_0\big(nH,P^r \cdot\big(h_k(P,\Lambda) R_{k}(P,\Lambda)+l_k(P,\Lambda) R_{k+1}(P,\Lambda)\big)\big),
\end{align*}
and $h_k(P,\Lambda) R_{k}(P,\Lambda)+l_k(P,\Lambda) R_{k+1}(P,\Lambda)=(1-\Lambda^4)^{M_k}$.
\end{proof}
Using \corref{P2P12GH}, we can exploit this in two different ways to compute the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$.
\begin{enumerate}
\item We apply parts (1) and (2) of \corref{P2P12GH} to compute the $\chi_0^{\widehat {\mathbb P}^2,H}(nF,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(nF,P^s)$,
$\chi_0^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, and then apply \corref{blowdownform} with $k=n$.
Parts (1) and (2) of \corref{P2P12GH} apply for all values of $n$ and $s$, so this method can always be used.
We will apply this in \secref{P2rat} to prove the rationality of the generating functions of the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ and of blowups of ${\mathbb P}^2$, and then in \secref{CompP2} to
compute the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ using a PARI program.
\item
We apply parts (2) and (3) of \corref{P2P12GH} to compute the
$\chi_0^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(G+(n-\frac{1}{2})F,P^s)$, $\chi_0^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$, and then apply \corref{blowdownform} with $k=n-1$. This requires less computation than the first approach. However,
as part (3) of \corref{P2P12GH} holds for $\chi_0^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$, $\chi_F^{\widehat {\mathbb P}^2,H}(2G+(n-1)F,P^s)$ only when $s\le 2n-2$, this method only allows to compute
$\chi_0^{{\mathbb P}^2,H}(nH,P^r)$ when
$$r+\max\big(\deg_x h_{n-1}(x,\lambda),\deg_x l_{n-1}(x,\lambda)\big)\le 2n-2,
$$
and the same way it only allows to compute
$\chi_H^{{\mathbb P}^2,H}(nH,P^r)$ when
$$r+\max\big(\deg_x f_{n-1}(x,\lambda),\deg_x g_{n-1}(x,\lambda)\big)\le 2n-2.$$
As the degree of $S_n$ and $R_n$ in $x$ grows faster than $2n$,
this requires that $n$ and $r$ are both relatively small.
We will use this to compute $\chi_0^{{\mathbb P}^2,H}(nH)$, $\chi_H^{{\mathbb P}^2,H}(nH)$ by hand for $n=4,5$.
\end{enumerate}
\subsection{Rationality of the generating function}\label{P2rat}
We now use the above algorithm to prove a structural result about the $K$-theoretic Donaldson invariants of ${\mathbb P}^2$ and the blowups of ${\mathbb P}^2$.
\begin{Theorem}\label{P2rat1}
\begin{enumerate}
\item For all $n\in {\mathbb Z}$, $r\in {\mathbb Z}_{\ge 0}$ with $n+r$ even, there exists an integer $d^1_{n,r}$ and a polynomial $p^1_{n,r}\in {\mathbb Q}[\Lambda^4]$, such that
$$\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)\equiv \frac{p^1_{n,r}}{\Lambda (1-\Lambda^4)^{d^1_{n,r}}}.$$
Furthermore we can choose $p^1_{n,0}\in {\mathbb Z}[\Lambda^4]$.
\item For all $n\in {\mathbb Z}$, $r\in 2{\mathbb Z}_{\ge 0}$ there exists an integer $d^0_{n,r}$ and a polynomial $p^0_{n,r}\in {\mathbb Q}[\Lambda^4]$, such that
$$\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)\equiv \frac{p^0_{n,r}}{(1-\Lambda^4)^{d^0_{n,r}}}.$$
Furthermore we can choose $p^0_{n,0}\in {\mathbb Z}[\Lambda^4]$.
\end{enumerate}
\end{Theorem}
\begin{proof}
By \corref{P2P12GH} there exist for all $L=nF$, $L=(n-1/2)F+G$ with $n\in {\mathbb Z}$ and all $r\in {\mathbb Z}_{\ge 0}$ integers
$e^F_{n,r}$, $e^0_{n,r}$ and polynomials $q^F_{n,r}, q^0_{n,r}\in {\mathbb Q}[\Lambda^4]$, so that
$$\chi^{\widehat {\mathbb P}^2,H}_F(L,P^r)=\frac{q^F_{n,r}}{(1-\Lambda^4)^{e^F_{n,r}}}, \quad
\chi^{\widehat {\mathbb P}^2,H}_0(L,P^r)=\frac{q^0_{n,r}}{(1-\Lambda^4)^{e^0_{n,r}}}.$$
Thus part (H) of \corref{blowdownform} (with $k=n$) gives
$\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)= \frac{p^1_{n,r}}{\Lambda (1-\Lambda^4)^{d^1_{n,r}}},$ for suitable
$d^1_{n,r}\in {\mathbb Z}_{\ge 0}$, $p^1_{n,r}\in {\mathbb Q}[\Lambda^4]$ and similarly
part (0)
of \corref{blowdownform} (with $k=n$) gives
$\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)= \frac{p^0_{n,r}}{ (1-\Lambda^4)^{d^0_{n,r}}},$ for suitable
$d^0_{n,r}\in {\mathbb Z}_{\ge 0}$, $p^0_{n,r}\in {\mathbb Q}[\Lambda^4]$.
Finally we want to see that $p^1_{n,0}\in {\mathbb Z}[\Lambda^4]$, and that we can choose $p^0_{n,0}$ so that it is in ${\mathbb Z}[\Lambda^4]$.
By definition we have
$$\chi^{{\mathbb P}^2,H}_{H}(nH)=\sum_{k>0} \chi(M^{X}_{H}(H,4k-1),\mu(nH))\Lambda^{4k-1}\in \Lambda^3{\mathbb Z}[[\Lambda^4]].$$
Writing $p^1_{n,0}=\sum_{k\ge 0} a_k \Lambda^{4k}$ we see from the formula $\chi^{{\mathbb P}^2,H}_{H}(nH)=\frac{p^1_{n,0}}{\Lambda (1-\Lambda^4)^{d^1_{n,0}}}$
that $a_0=0$, and inductively that $$a_k=\chi(M^{X}_{H}(H,4k-1),\mu(nH))-\sum_{i=1}^{k}a_{k-i} \binom{d^{1}_{n,0}+i-1}{i}\in {\mathbb Z}.$$
For $k$ large enough we have that the coefficient of $\Lambda^{4k}$ of $\chi^{{\mathbb P}^2,H}_{0}(nH)$ is $\chi(M^{X}_{H}(0,4k),\mu(nH))$. Thus, adding a polynomial
$h\in {\mathbb Q}[\Lambda^4]$ to $p^0_{n,0}$, we can assume that $\frac{p^0_{n,0}}{ (1-\Lambda^4)^{d^0_{n,0}}}\in {\mathbb Z}[[\Lambda^4]]$. One concludes in the same way as for $p^1_{n,0}$.
\end{proof}
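The integrality argument at the end of the proof is a simple recursion: the numerator coefficients $a_k$ are determined by the (integral) Euler characteristics. Schematically (Python sketch with a made-up example series, for illustration only):

```python
from math import comb

def numerator_coeffs(series, d):
    """Given integer coefficients c_k of p(t)/(1-t)^d, recover those of p."""
    a = []
    for k, c in enumerate(series):
        a.append(c - sum(a[k - i] * comb(d + i - 1, i) for i in range(1, k + 1)))
    return a

# example: p = 1 + 3t - t^2, d = 2; expand p/(1-t)^2 and recover p
p = [1, 3, -1, 0, 0, 0]
series = [sum(p[j] * comb(2 + (k - j) - 1, k - j) for j in range(k + 1)) for k in range(6)]
assert numerator_coeffs(series, 2) == p
```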
Indeed a more careful argument will show that we can choose $p^0_{n,r}, \ p^1_{n,r}\in {\mathbb Z}[\Lambda^4]$ for all $r$.
We now use this result and the blowup formulas to describe the generating functions of the $K$-theoretic Donaldson invariants of blowups of ${\mathbb P}^2$ in finitely many points in an open subset of the ample cone
as rational functions.
\begin{Lemma}\label{blowindep}
Let $X$ be the blowup of ${\mathbb P}^2$ in finitely many points, $p_1,\ldots,p_n$, and denote by $E_1,\ldots,E_n$ the exceptional divisors. Fix $c_1\in H^2(X,{\mathbb Z})$, and an $r\ge 0$. Let $L$ be a line bundle on $X$. Let $\omega=H-\alpha_1 E_1-\ldots -\alpha_n E_n$ with $|\alpha_i|<\frac{1}{\sqrt{n}}$, for all $i$, and $\<\omega, K_X\><0$.
Then $\chi^{X,\omega}_{c_1}(L,P^r)\equiv\chi^{X,H}_{c_1}(L,P^r)$.
\end{Lemma}
\begin{proof}
We put $\epsilon:=\max(|\alpha_i|)_{i=1}^n$, and $\delta:=\frac{1}{n}-\epsilon^2>0$.
Let $L=dH-m_1E_1-\ldots -m_nE_n$, with $d,m_1,\ldots,m_n\in {\mathbb Z}$, and let $r\ge 0$.
We want to show that there are only finitely many classes $\xi$ of type $(c_1)$ on $X$ with
$\<H,\xi\>\ge 0 \ge \<\omega,\xi\>$ and $\delta^X_\xi(L,P^r)\ne 0$. As by \lemref{vanwall} each $\delta^X_\xi(L,P^r)$ is a polynomial in $\Lambda$, this gives $\chi^{X,\omega}_{c_1}(L,P^r)\equiv \chi^{X,H}_{c_1}(L,P^r)$.
We write $\xi=aH-b_1E_1-\ldots -b_n E_n$ and $b:=|b_1|+\ldots+|b_n|$; then we get $a\ge 0$ and
$$0\ge \<\omega,\xi\>=a-\alpha_1b_1-\ldots-\alpha_nb_n\ge a-b\epsilon,$$
i.e. $a\le b\epsilon$.
Assume $\delta_\xi^X(L,P^r)\ne 0$, then by \lemref{vanwall} $-\xi^2\le |\<\xi, (L-K_X)\>|+r+2$.
We have
\begin{equation}\label{ineq}-\xi^2=-a^2+b_1^2+\ldots+b_n^2\ge-\epsilon^2b^2+\frac{b^2}{n}=\delta b^2,
\end{equation} where we have used the easy inequality $b_1^2+\ldots+b_n^2\ge \frac{b^2}{n}$
and our definition $\frac{1}{n}-\epsilon^2=\delta>0$.
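For completeness, the inequality $b_1^2+\ldots+b_n^2\ge \frac{b^2}{n}$ is the Cauchy--Schwarz inequality applied to the vectors $(1,\ldots,1)$ and $(|b_1|,\ldots,|b_n|)$:
$$b^2=\Big(\sum_{i=1}^n 1\cdot |b_i|\Big)^2\le n\sum_{i=1}^n b_i^2.$$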
On the other hand, putting $m:=\max|m_i+1|_{i=1}^n$ we get
\begin{align*}
|\<\xi, (L-K_X)\>|+r+2&=|a(d+3)-(m_1+1) b_1-\ldots-(m_n+1)b_n|+r+2\\&\le a|d+3|+|m_1+1| |b_1|+\ldots+|m_n+1| |b_n|+r+2\\&\le
\epsilon b|d+3|+mb+r+2= (m+|d+3|\epsilon)b+r+2.
\end{align*}
Putting this together with \eqref{ineq}, and using $\epsilon\le 1$, we get
\begin{equation}
\label{blowbound}
\delta (|b_1|+\ldots+|b_n|)\le \max|m_i+1|_{i=1}^n +|d+3|+\frac{r+2}{|b_1|+\ldots+|b_n|}.
\end{equation}
Thus $b=|b_1|+\ldots+|b_n|$ is bounded and $a\le b\epsilon$ is bounded, and therefore there are only finitely many choices for $\xi$.
\end{proof}
The following theorem contains \thmref{rationalal} as a special case.
\begin{Theorem}\label{blowrat}
Let $X$ be the blowup of ${\mathbb P}^2$ in finitely many points. With the assumptions and notations of \lemref{blowindep}, there exist an integer
$d^{c_1}_{L,r}\in{\mathbb Z}_{\ge 0}$ and a polynomial $p^{c_1}_{L,r}\in {\mathbb Q}[\Lambda^{\pm 4}]$, such that
$$\chi^{X,\omega}_{c_1}(L,P^r)\equiv\frac{p^{c_1}_{L,r}}{\Lambda^{c_1^2}(1-\Lambda^4)^{d^{c_1}_{L,r}}}.$$
\end{Theorem}
\begin{proof}
We write $c_1=kH+l_1E_1+\ldots +l_nE_n$. By renumbering the $E_i$ we can assume that $l_i$ is odd for $1\le i\le s$ and $l_i$ is even for $s+1\le i\le n$.
Write $L=dH-m_1E_1-\ldots -m_nE_n$, with $d,m_1,\ldots,m_n\in {\mathbb Z}$.
By \lemref{blowindep}, it is enough to show the claim for $\omega=H$.
By repeatedly applying \thmref{nblow}, we get
$$\chi^{X,H}_{c_1}(L,P^r)=\chi^{{\mathbb P}^2,H}_{kH}\Big(dH,P^r\cdot \Big(\prod_{i=1}^s S_{m_i+1}(P,\Lambda)\Big)\cdot \Big(\prod_{i=s+1}^n R_{m_i+1}(P,\Lambda)\Big)\Big).$$
Put $\kappa=0$ if $k$ is even, and $\kappa=1$ if $k$ is odd.
We know that $\chi^{{\mathbb P}^2,H}_{kH}(dH,P^r)$ depends only on $\kappa$, and by \thmref{P2rat1} we have
$$\chi^{{\mathbb P}^2,H}_{kH}(dH,P^r)=\frac{p^{\kappa}_{d,r}}{\Lambda^\kappa (1-\Lambda^4)^{d^\kappa_{d,r}}}.$$
We know that $R_n(P,\Lambda)\in {\mathbb Z}[P,\Lambda^4]$, $S_n(P,\Lambda)\in \Lambda{\mathbb Z}[P,\Lambda^4]$.
Therefore we can write $\chi^{X,H}_{c_1}(L,P^r)=\frac{ p}{\Lambda^{\kappa-s}(1-\Lambda^4)^N}$ for a suitable polynomial $p\in {\mathbb Q}[\Lambda^{\pm 4}]$ and a nonnegative integer $N$.
Note that $c_1^2=k^2-l_1^2-\ldots -l_n^2\equiv \kappa-s\mod 4$. Let $w:=\frac{1}{4}(c_1^2-(\kappa-s))$. Then
$$\chi^{X,H}_{c_1}(L,P^r)=\frac{ p}{\Lambda^{\kappa-s}(1-\Lambda^4)^N}=\frac{\Lambda^{4w}p}{\Lambda^{c_1^2}(1-\Lambda^4)^N},$$ and
the claim follows.
\end{proof}
\subsection{Explicit computations for small $n$}\label{explicitp2}
We compute $\chi^{{\mathbb P}^2,H}_{0}(nH)$, $\chi^{{\mathbb P}^2,H}_{H}(nH)$ for small values of $n$, using the blowup formulas and the strategy outlined in \secref{strategy}.
In the next subsection we do the same computations for larger $n$ using a computer program written in Pari.
These invariants have been computed before (see \cite{Abe},\cite{GY}) for $1\le n \le 3$.
\begin{Proposition}
\begin{enumerate}
\item
$\displaystyle{\chi^{{\mathbb P}^2,H}_{H}(4H)=\frac{\Lambda^3+6\Lambda^7+\Lambda^{15}}{(1-\Lambda^4)^{15}}.}$
\item
$\displaystyle{\chi^{{\mathbb P}^2,H}_{0}(4H)=\frac{1+6\Lambda^8+\Lambda^{12}}{(1-\Lambda^4)^{15}}-1-51/2\Lambda^4.}$
\end{enumerate}
\end{Proposition}
\begin{proof}
(1) We have $$S_3(x,\lambda)=\lambda \left(x^2-(1-\lambda^4)^2\right), \quad S_4(x,\lambda)=\lambda x\big((1-\lambda^8)x^2-2(1-\lambda^4)^3\big). $$
Using division with rest as polynomials in $x$, we write
$\lambda(1-\lambda^4)^6$ as a linear combination of $S_3(x,\lambda)$ and $S_4(x,\lambda)$:
$$\lambda(1-\lambda^4)^6=\left((1-\lambda^8)x^2-(1-\lambda^4)^4\right)S_3(x,\lambda)-x S_4(x,\lambda).$$
Thus we get by \corref{blowdownform} that
\begin{equation}\label{4HB}
\begin{split}
\chi^{{\mathbb P}^2,H}_{H}(4H)&=\frac{1}{\Lambda(1-\Lambda^4)^6}\big((1-\Lambda^8)\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E,P^2)-(1-\Lambda^4)^4\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E)\\&\qquad\qquad-
\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-3E,P^1)\big).
\end{split}
\end{equation}
By \propref{p11GM} we have
$$\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-3E,P^1)=\frac{1}{(1-\Lambda^4)^8}-1,$$ and $\xi=H-3E$ is the only class of type $(F)$ on $\widehat {\mathbb P}^2$ with
$\<\xi, H\>\ge0 >\<\xi, F\>$ and $\delta_\xi^{\widehat{\mathbb P}^2}((4H-3E),P^1)\ne 0$. In fact $\delta_{H-3E}^{\widehat{\mathbb P}^2}((4H-3E),P^1)=\Lambda^8.$
Thus $$\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-3E,P^1)=\frac{1}{(1-\Lambda^4)^8}-1+\Lambda^8.$$
By \propref{p112GM} we have that
$$\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E)=\frac{3\Lambda^4+\Lambda^{12}}{(1-\Lambda^4)^{12}},\quad \chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E,P^2)=\frac{(1+\Lambda^4)^2}{(1-\Lambda^4)^{10}}-1.$$
Furthermore there is no class $\xi$ of type $(F)$ on
$\widehat {\mathbb P}^2$ with
$\<\xi, H\> \ge 0 >\<\xi, F\>$ and $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E)\ne 0$ or $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E,P^2)\ne 0$.
Thus $\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E)=\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E)$ and $\chi^{\widehat {\mathbb P}^2,H}_{F}(4H-2E,P^2)=\chi^{\widehat {\mathbb P}^2,F_+}_{F}(4H-2E,P^2)$.
Putting these values into \eqref{4HB} yields $\chi^{{\mathbb P}^2,H}_{H}(4H)=\frac{\Lambda^3+6\Lambda^7+\Lambda^{15}}{(1-\Lambda^4)^{15}}$.
(2) For $R_3(x,\lambda)=-\lambda^4x^2+(1-\lambda^4)^2$, $R_4(x,\lambda)=-\lambda^4 x^4+(1-\lambda^4)^4,$ we get
$$(1-\lambda^4)^5=\left(\lambda^4x^2+(1-\lambda^4)^2\right)R_3(x,\lambda)-\lambda^4R_4(x,\lambda).$$
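As a consistency check, with $R_3(x,\lambda)=(1-\lambda^4)^2-\lambda^4x^2$ and $R_4(x,\lambda)=(1-\lambda^4)^4-\lambda^4x^4$ this identity can be verified by direct expansion:
\begin{align*}
\big(\lambda^4x^2+(1-\lambda^4)^2\big)R_3(x,\lambda)&=(1-\lambda^4)^4-\lambda^8x^4,\\
-\lambda^4R_4(x,\lambda)&=\lambda^8x^4-\lambda^4(1-\lambda^4)^4,
\end{align*}
and the sum of the two right-hand sides is $(1-\lambda^4)^4(1-\lambda^4)=(1-\lambda^4)^5$.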
Thus \corref{blowdownform} gives
\begin{equation}\label{40B}
\chi^{{\mathbb P}^2,H}_{0}(4H)=\frac{1}{(1-\Lambda^4)^5}\left(\Lambda^4\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E,P^2)+(1-\Lambda^4)^2\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E)
-\Lambda^4\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-3E)\right).
\end{equation}
By \propref{p11GM} we have
$\chi^{\widehat {\mathbb P}^2,F_+}_{0}(4H-3E)=\frac{1}{(1-\Lambda^4)^9}-1-35/2\Lambda^4$. Furthermore there are no classes $\xi$ of type $(0)$ with
$\<\xi, H\> >0 >\<\xi ,F\>$ and $\delta_\xi^{\widehat{\mathbb P}^2}((4H-3E))\ne 0$, and the only classes of type $(0)$ with $\<\xi, H\> =0 >\<\xi, F\>$
are $-2E$ and $-4E$ with
$$\frac{1}{2}\delta^{\widehat {\mathbb P}^2}_{-2E}(4H-3E)=-2\Lambda^4 + 291\Lambda^8 - 3531\Lambda^{12} + 16215/2\Lambda^{16}, \quad
\frac{1}{2}\delta^{\widehat {\mathbb P}^2}_{-4E}(4H-3E)=7\Lambda^{16} - 51/2\Lambda^{20},$$ giving
$$\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-3E)=\frac{1}{(1-\Lambda^4)^9}-1-39/2\Lambda^4 + 291\Lambda^8 - 3531\Lambda^{12} + 16229/2\Lambda^{16}- 51/2\Lambda^{20}.$$
By \propref{p112GM} we have
$\chi^{\widehat {\mathbb P}^2,F_+}_{0}(4H-2E)=\frac{1+3\Lambda^8}{(1-\Lambda^4)^{12}}-1-21\Lambda^4$. Furthermore the only class $\xi$ of type $(0)$ with
$\<\xi, H\> \ge 0 >\<\xi, F\>$ and $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E)\ne 0$ is $-2E$, with $\frac{1}{2}\delta^{\widehat {\mathbb P}^2}_{-2E}(4H-2E)=-3/2\Lambda^4 + 108\Lambda^8 - 1225/2\Lambda^{12}$, giving
$$\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E)=\frac{1+3\Lambda^8}{(1-\Lambda^4)^{12}}-1-45/2\Lambda^4 + 108\Lambda^8 - 1225/2\Lambda^{12}.$$
By \propref{p112GM} we have
$\chi^{\widehat {\mathbb P}^2,F_+}_{0}(4H-2E,P^2)=\frac{(1+\Lambda^4)^2}{(1-\Lambda^4)^{10}}-1-48\Lambda^4+389\Lambda^8$, and the classes $\xi$ of type $(0)$ with $\<\xi, H\> \ge 0 >\<\xi, F\>$
and $\delta_\xi^{\widehat{\mathbb P}^2}(4H-2E,P^2)\ne 0$ are $-2E$ and $-4E$ with $\frac{1}{2}\delta^{\widehat{\mathbb P}^2}_{-2E}(4H-2E,P^2)=-6\Lambda^4 + 508\Lambda^8 - 4614\Lambda^{12} +
8600\Lambda^{16}
$ and $\frac{1}{2}\delta^{\widehat{\mathbb P}^2}_{-4E}(4H-2E,P^2)=1/2\Lambda^{16}$, giving
$$\chi^{\widehat {\mathbb P}^2,H}_{0}(4H-2E,P^2)=\frac{(1+\Lambda^4)^2}{(1-\Lambda^4)^{10}}-1-54\Lambda^4+ 897\Lambda^8 - 4614\Lambda^{12} + 17201/2\Lambda^{16} .$$
Putting this into \eqref{40B} gives
$\chi^{{\mathbb P}^2,H}_{0}(4H)=\frac{ 1 + 6\Lambda^8 + \Lambda^{12}}{(1-\Lambda^4)^{15}}-1-51/2\Lambda^4$.
\end{proof}
\begin{Proposition}
$\displaystyle{\chi^{{\mathbb P}^2,H}_{0}(5H)=\frac{1+21\Lambda^8+20\Lambda^{12}+21\Lambda^{16}+\Lambda^{24}}{(1-\Lambda^4)^{21}}-1-33\Lambda^4.}$
\end{Proposition}
\begin{proof}
We use \propref{p112GM} to compute $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-3E,P^r)$ for $r=0,2,4$, and \propref{p11GM} to compute $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-4E,P^r)$ for $r=0,2$.
The only classes of type $(0)$ with $\<H ,\xi\> \ge 0 >\<F,\xi\>$ and $\delta^{\widehat {\mathbb P}^2}_\xi(5H-3E,P^r)\ne 0$ for $r=0,2,4$ or $\delta^{\widehat {\mathbb P}^2}_\xi(5H-4E,P^r)\ne 0$ for $r=0,2$
are $-2E$ and $-4E$. Adding their wallcrossing terms to the $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-3E,P^r)$, $\chi^{\widehat {\mathbb P}^2,F_+}_{0}(5H-4E,P^r)$ we get
\begin{align*}\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E)&=\frac{1+6\Lambda^8+\Lambda^{16}}{(1-\Lambda^4)^{15}} -1 - 27\Lambda^4 + 366\Lambda^8 - 6066\Lambda^{12} + 18917\Lambda^{16} - 33\Lambda^{20},\\
\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E,P^2)&=\frac{(1+\Lambda^4)^3}{(1-\Lambda^4)^{13}}-1 - 64\Lambda^4 + 2163\Lambda^8 - 32806\Lambda^{12} + 172163\Lambda^{16}\\&\qquad - 242616\Lambda^{20} + 1007\Lambda^{24},\\
\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E,P^4)&=\frac{2(1+\Lambda^4)^2}{(1-\Lambda^4)^{11}} -2 - 218\Lambda^4 + 10110\Lambda^8 - 170462\Lambda^{12} + 1121538\Lambda^{16} \\&\qquad- 2798450\Lambda^{20} + 2249462\Lambda^{24} - 18786\Lambda^{28},\\
\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-4E)&=\frac{1}{(1-\Lambda^4)^{11}} -1 - 23\Lambda^4 + 786\Lambda^8 - 20234\Lambda^{12} + 124671\Lambda^{16} - 201885\Lambda^{20}\\&\qquad + 18372\Lambda^{24} - 21840\Lambda^{28},\\
\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-4E,P^2)&=\frac{1}{(1-\Lambda^4)^9} -1 - 57\Lambda^4 + 3691\Lambda^8 - 95035\Lambda^{12} + 741175\Lambda^{16} - 2043587\Lambda^{20} \\&\qquad+ 1906119\Lambda^{24} - 414993\Lambda^{28} + 295880\Lambda^{32}.
\end{align*}
We compute
$$R_5(x,\lambda)=-\lambda^4x^6+\lambda^4(1-\lambda^4)^2(2+\lambda^4)x^4-3\lambda^4(1-\lambda^4)^4x^2+(1-\lambda^4)^6.$$
Using again division with rest, we get
\begin{align*}
(1-\lambda^4)^{11}&=\big((\lambda^4+3\lambda^8)x^4+(\lambda^4-8\lambda^8+10\lambda^{12}-3\lambda^{20})x^2 +(1+4\lambda^8-\lambda^{12})(1-\lambda^4)^4\big)R_4(x,\lambda)\\
&\qquad
-\big((\lambda^4+3\lambda^8)x^2+ (3+\lambda^4)(1-\lambda^4)^2\big)R_5(x,\lambda).
\end{align*}
Thus again we get
$(1-\Lambda^4)^{11}\chi^{{\mathbb P}^2,H}_{0}(5H)$ as the result of replacing $\lambda$ by $\Lambda$ and $x^rR_4(x,\lambda)$ by $\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-3E,P^r)$, $x^rR_5(x,\lambda)$
by $\chi^{\widehat {\mathbb P}^2,H}_{0}(5H-4E,P^r)$.
This gives after some computation that
$\chi^{{\mathbb P}^2,H}_{0}(5H)=\frac{1 + 21\Lambda^8 + 20\Lambda^{12} + 21\Lambda^{16} + \Lambda^{24}}{(1-\Lambda^4)^{21}}-1-33\Lambda^4.$
\end{proof}
\subsection{Computer computations for larger $n$}
\label{CompP2}
We outline the computations of the PARI program to compute the $\chi^{{\mathbb P}^2,H}_0(nH,P^r)$,
$\chi^{{\mathbb P}^2,H}_H(nH,P^r)$.
We have carried out these computations for $r=0$ and $n\le 11$, and $r\le 16$ and $n\le 8$.
To have effective bounds on the number of terms needed to compute we use the following remark.
\begin{Remark} \label{qlevel} For all $l\in \frac{1}{2}{\mathbb Z}$ we have
$$
\frac{1}{\sinh(lh)},\ \coth(lh)\in q\Lambda^{-1}{\mathbb C}[[q^{-1}\Lambda,q^4]],\quad h,\ \exp(lh),\ {\widetilde \theta}_4(h),\ \Lambda^2u',\ h^*,\ M\in {\mathbb C}[[q^{-1}\Lambda,q^4]].$$
Therefore we have the following.
\begin{enumerate}
\item
By \propref{Fplus}, for $X={\mathbb P}^1\times {\mathbb P}^1$ or $\widehat {\mathbb P}^2$, to compute $\chi^{X,F_+}_F(nF+mG,P^r)$ modulo $\Lambda^{k+1}$, it is enough to evaluate the
formulas of \propref{Fplus} modulo $q^{k+1}$ and modulo $\Lambda^{k+1}$.
\item
By \defref{wallcrossterm} for any rational surface $X$ and any class $\xi\in H^2(X,{\mathbb Z})$ with $\xi^2<0$ and any line bundle $L\in \operatorname{Pic}(X)$, to compute $\delta^X_{\xi}(L,P^r)$
modulo $\Lambda^{k+1}$
it is enough to evaluate the formulas of \defref{wallcrossterm} modulo $q^{k+1}$ and modulo $\Lambda^{k+1}$.
\end{enumerate}
\end{Remark}
{\bf Step 1.} As mentioned above we will use \corref{blowdownform} with $k=n$.
The polynomials $f_n,g_n$, $h_n,l_n$, and the integers $N_n$, $M_n$ of \corref{blowdownform} are computed by the program as follows.
Apply the Euclidean algorithm in ${\mathbb Q}(\lambda)[x^2]$ (i.e. repeated division with rest) to $S_n$, $S_{n+1}$, to find $\overline f_n$, $\overline g_n\in {\mathbb Q}(\lambda)[x]$ with
$\overline f_n S_n+\overline g_n S_{n+1}=1$. Choose the minimal $N_n\in {\mathbb Z}_{\ge 0}$, so that
$$f_n:=\lambda(1-\lambda^4)^{N_n} \overline f_n, \
g_n:=\lambda(1-\lambda^4)^{N_n} \overline g_n\in {\mathbb Q}[x,\lambda^4].$$
These exist by \propref{blowdownpol}.
Similarly $h_n$, $l_n$, $M_n$ are computed as follows.
Apply the Euclidean algorithm in ${\mathbb Q}(\lambda)[x^2]$ to $R_n$, $R_{n+1}$, to find $\overline h_n$, $\overline l_n\in {\mathbb Q}(\lambda)[x]$ with $\overline h_nR_n+\overline l_nR_{n+1}=1$, and then again multiply with the minimal power
$(1-\lambda^4)^{M_n}$ to obtain
$$h_n:=(1-\lambda^4)^{M_n} \overline h_n,\ l_n:=(1-\lambda^4)^{M_n} \overline l_n\in {\mathbb Q}[x^2,\lambda^4].$$
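The Euclidean step above (finding $\overline f_n$, $\overline g_n$ with $\overline f_n S_n+\overline g_n S_{n+1}=1$) is the standard extended Euclidean algorithm for polynomials with exact rational coefficients. As an illustration only (the actual computation was done in PARI; all function names below are ours), a minimal sketch over ${\mathbb Q}$:

```python
from fractions import Fraction

# Polynomials over Q are coefficient lists [c0, c1, ...] (lowest degree first).

def trim(p):
    while p and p[-1] == 0:
        p.pop()
    return p

def add(p, q):
    r = [Fraction(0)] * max(len(p), len(q))
    for i, c in enumerate(p):
        r[i] += c
    for i, c in enumerate(q):
        r[i] += c
    return trim(r)

def mul(p, q):
    if not p or not q:
        return []
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return trim(r)

def neg(p):
    return [-c for c in p]

def divmod_poly(f, g):
    # division with rest: f = q*g + r with deg r < deg g
    f = f[:]
    q = [Fraction(0)] * max(1, len(f) - len(g) + 1)
    while f and len(f) >= len(g):
        c = f[-1] / g[-1]
        d = len(f) - len(g)
        q[d] = c
        for i, b in enumerate(g):
            f[i + d] -= c * b
        trim(f)
    return trim(q), f

def gcdex(f, g):
    # extended Euclid: returns (u, v, d) with u*f + v*g = d = gcd(f, g)
    r0, r1 = f[:], g[:]
    u0, u1 = [Fraction(1)], []
    v0, v1 = [], [Fraction(1)]
    while r1:
        q, r = divmod_poly(r0, r1)
        r0, r1 = r1, r
        u0, u1 = u1, add(u0, neg(mul(q, u1)))
        v0, v1 = v1, add(v0, neg(mul(q, v1)))
    return u0, v0, r0
```

For instance, for $f=x^2$ and $g=x^2-1$ this yields $u=1$, $v=-1$, $d=1$, i.e. $x^2-(x^2-1)=1$. In the application the base field is ${\mathbb Q}(\lambda)$ rather than ${\mathbb Q}$, and one afterwards clears denominators by multiplying with the minimal power of $(1-\lambda^4)$.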
{\bf Step 2.} Use \propref{p11r} to compute $\chi^{X,F_+}_F(nF,P^{2s})$ for $2s\le \deg_x(g_n)+r$ and $\chi^{X,F_+}_0(nF,P^{2s})$ for $2s\le\deg_x(l_n)+r$.
For $s=0$, the formula is explicitly given in \propref{p11r}. For $s>0$ we know by \propref{p11r} that $\chi^{X,F_+}_F(nF,P^{2s})$ is a polynomial in $\Lambda^4$ of degree at most $s$ and $\chi^{X,F_+}_0(nF,P^{2s})$ a polynomial of degree at most $s+1$ in $\Lambda^4$.
So, using \remref{qlevel}, the computation is done by evaluating the formula of \propref{Fplus} as a power series in $\Lambda,q$ modulo $\Lambda^{4s+1}$ and
$q^{4s+1}$ or
$\Lambda^{4s+5}$, $q^{4s+5}$ respectively.
As all the power series in the formula are completely explicit, this is a straightforward evaluation.
In the same way we use \propref{p11GM} to compute
$\chi^{X,F_+}_F(G+(n-\frac{1}{2})F,P^{2s+1})$ for $2s+1\le \deg_x(f_n)+r$ and $\chi^{X,F_+}_0(G+(n-\frac{1}{2})F,P^{2s})$ for $2s\le\deg_x(h_n)+r$.
By \propref{p11GM}
$$\chi^{X,F_+}_F(G+(n-\frac{1}{2})F,P^{2s+1})-\frac{1}{(1-\Lambda^4)^{2n-2s}}, \quad
\chi^{X,F_+}_0(G+(n-\frac{1}{2})F,P^{2s})-\frac{1}{(1-\Lambda^4)^{2n+1-2s}}$$ are both polynomials of degree at most $s+1$ in $\Lambda^4$, so, using also \remref{qlevel},
again they are computed by
evaluating the formula of \propref{Fplus} as a power series in $\Lambda,q$ modulo $\Lambda^{4s+5}$ and $q^{4s+5}$.
Again this is a straightforward evaluation.
{\bf Step 3.}
By the proof of \corref{P2P12GH} there are finitely many classes $\xi=aF-bG$ of type $(0)$ or $F$ on $\widehat {\mathbb P}^2$ with $\<\xi ,H\> \ge 0>\<\xi ,F\>$ and
$\delta_\xi^{\widehat {\mathbb P}^2}(nF,P^s)\ne 0$ or $\delta_\xi^{\widehat {\mathbb P}^2}(G+(n-\frac{1}{2})F,P^s)\ne 0$. In \remref{nonwalls} effective bounds for $a$ and $b$ are given in terms of $n$ and $s$, which leave only finitely many possibilities. For all $\xi=aF-bG$, so that $(a,b)$ satisfies these bounds, it is first checked whether indeed the criterion
$-\xi^2\le |\<\xi ,L-K_{\widehat {\mathbb P}^2}\>|+s+2$ for the non-vanishing of $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ for $L=nF$ or $L=(n-1/2)F+G$ is satisfied.
If yes, $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ is computed by evaluating the formula of \defref{wallcrossterm}.
By \lemref{vanwall} we have that $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ is a polynomial in $\Lambda$ of degree at most
$$a(\xi,L,X,s):=\xi^2+2|\<\xi,L-K_{\widehat {\mathbb P}^2}\>|+2s+4,$$
so to determine $\delta_\xi^{\widehat {\mathbb P}^2}(L,P^s)$ we only need to compute it modulo
$\Lambda^{a(\xi,L,X,s)+1}$ and thus by \remref{qlevel} we only need to evaluate the formula of \defref{wallcrossterm} modulo $\Lambda^{a(\xi,L,X,s)+1}$ and
$q^{a(\xi,L,X,s)+1}$, so this is again a straightforward evaluation.
Then for $c_1=0,F$ and $L=nF$, $(n-1/2)F+G$, we compute
$$\chi^{\widehat {\mathbb P}^2, H}_{c_1}(L,P^s):=\chi^{\widehat {\mathbb P}^2, F_+}_{c_1}(L,P^s)+\frac{1}{2}\sum_{\<\xi, H\>=0>\<\xi,F\>}\delta^X_\xi(L,P^s)+
\sum_{\<\xi, H\>>0>\<\xi,F\>} \delta^X_\xi(L,P^s),$$ where
the sums are over all $\xi$ of type $(c_1)$ with $\delta^{\widehat {\mathbb P}^2}_\xi(L,P^s)\ne 0$.
{\bf Step 4.}
Finally apply \corref{blowdownform} to compute
\begin{align*}
\chi^{{\mathbb P}^2,H}_H(nH,P^r)&=\frac{1}{\Lambda(1-\Lambda^4)^{N_n}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{F}\big(G+(n-1/2)F,P^r\cdot f_n(P,\Lambda)\big)
+
\chi^{\widehat {\mathbb P}^2,H}_{F}\big(nF,P^r\cdot g_n(P,\Lambda)\big)\Big),\\
\chi^{{\mathbb P}^2,H}_0(nH,P^r)&=\frac{1}{(1-\Lambda^4)^{M_n}}\Big(\chi^{\widehat {\mathbb P}^2,H}_{0}\big(G+(n-1/2)F,P^r\cdot h_n(P,\Lambda)\big)
+
\chi^{\widehat {\mathbb P}^2,H}_{0}\big(nF,P^r\cdot l_n(P,\Lambda)\big)\Big).
\end{align*}
At this point all the terms on the right hand side have already been computed.
We have carried out this computation for the following cases.
\begin{enumerate}
\item For $\chi^{{\mathbb P}^2,H}_H(nH,P^r)$ with $n\equiv r\mod 2$, in the cases $r\le 1$, $n\le 10$ and $r\le 16$, $n\le 8$.
\item For $\chi^{{\mathbb P}^2,H}_0(nH,P^r)$ with $r$ even, in the cases $r=0$, $n\le 11$ and $r\le 15$, $n\le 8$.
\end{enumerate}
For the case $r=0$ we obtain, with the notations of the introduction:
\begin{Proposition}\label{propp2}
With the notations of \thmref{mainp2} we have for $1\le n\le 11$
\begin{enumerate}
\item $\displaystyle{\chi^{{\mathbb P}^2,H}_0(nH)=\frac{P_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}-1-\frac{1}{2}(n^2+6n+11)\Lambda^4}$,
\item If $n$ is even, then $\displaystyle{\chi^{{\mathbb P}^2,H}_H(nH)=\frac{Q_n(\Lambda)}{(1-\Lambda^4)^{\binom{n+2}{2}}}}$.
\end{enumerate}
\end{Proposition}
\thmref{mainp2} now follows directly from \propref{propp2} and \propref{highvan}.
We also list the results for $ \chi^{{\mathbb P}^2,H}_{H}(nH,P^1)$.
We put
{\small \begin{align*}
&q_1=2-t,\ q_3=2,\ q_5=2 + 20t + 20t^2 + 20t^3 + 2t^4,\\
&q_7=2 + 80t + 770t^2 + 3080t^3 + 7580t^4 + 9744t^5
+ 7580t^6 + 3080t^7 + 770t^8 + 80t^9 + 2t^{10},\\
&q_9=2 + 207t + 6192t^2 + 85887t^3 + 701568t^4 + 3707406t^5+ 13050156t^6 + 31611681t^7 \\&
+ 53322786t^8 + 63463686t^9 + 53322786t^{10} + 31611681t^{11} + 13050156t^{12}
+ 3707406t^{13}\\
&+ 701568t^{14}+ 85887t^{15}+ 6192t^{16} + 207t^{17}+2t^{18}
\end{align*}}
\begin{Proposition}
For odd $n$ with $1\le n\le 9$ we have
$\displaystyle{
\chi^{{\mathbb P}^2,H}_{H}(nH,P^1)=\frac{\Lambda^{3}q_{n}(\Lambda^4)}{(1-\Lambda^4)^{\binom{n+2}{2}-1}}}.$
\end{Proposition}
We list also in the form of tables part of the results obtained for $\chi^{{\mathbb P}^2,H}_{c_1}(nH,P^r)$, with $c_1=0, H$ for $r>0$. Here to simplify the expressions we
only write down the results up to adding a Laurent polynomial in $\Lambda$.
We define polynomials $p_{d,r}$ by the following tables.
$$\begin{tabular}{r|c|c|c}
&d=3&5&7\\
\hline
r=1&$2t$& $2t+20t^2+20t^3+20t^4+2t^5$&$\genfrac{}{}{0pt}{}{2t + 80t^2 + 770t^3 + 3080t^4 + 7580t^5 + 9744t^6}{ + 7580t^7+ 3080t^8 + 770t^9 + 80t^{10}+ 2t^{11}}$\\
\hline
2&$1+t$&$1+6t + 25t^2 +25t^3+6t^4 +t^5$&$\genfrac{}{}{0pt}{}{1 + 15t + 239t^2 + 1549t^3 + 5274t^4+ 9306t^5 + 9306t^6 }{+ 5274t^7+ 1549t^8+ 239t^9 + 15t^{10} + t^{11}}$\\
\hline
3&$1+t$&$8t + 24t^2 + 24t^3 + 8t^4$&$
\genfrac{}{}{0pt}{}{8t + 219t^2 + 1485t^3 + 5159t^4 + 9513t^5 + 9513t^6}{ + 5159t^7 + 1485t^8 + 219t^9 + 8t^{10}}$\\
\hline
4&2&$2+14t+32t^{2}+14t^3+2t^4$&$\genfrac{}{}{0pt}{}{2+ 44t + 546t^2 + 2936t^3 + 7676t^4 + 10360t^5 + 7676t^6 }{+ 2936t^7+ 546t^8 + 44t^9 + 2t^{10}}$\\
\hline
5&2&$1+ 16t + 30t^2 +16t^3+t^4$&$
\genfrac{}{}{0pt}{}{32t+510t^2 + 2820t^3 + 7682t^4 + 10680t^5 }{+ 7682t^6 + 2820t^7+ 510t^8 + 32t^9}$\\
\hline
6&$t^{-1}+1$&$5+27t+27t^2+5t^3$&$\genfrac{}{}{0pt}{}{5 + 120t + 1209t^2 + 5075t^3 + 9975t^4 + 9975t^5 + 5075t^6 }{+ 1209t^7 + 120t^8 + 5t^9}$\\
\hline
7&$3-t$&$4 + 28t + 28t^2 + 4t^3$&$\genfrac{}{}{0pt}{}{1 + 99t + 1134t^2 + 4954t^3 + 10196t^4+ 10196t^5 }{ + 4954t^6 + 1134t^7+ 99t^8 + t^9}$\\
\hline
8&&$14 + 36t + 14t^2$&$\genfrac{}{}{0pt}{}{14 + 318t + 2508t^2 + 7874t^3 + 11340t^4 + 7874t^5}{+ 2508t^6 + 318t^7 + 14t^8}$\\
\hline
9&& $12 + 40t + 12t^2$&$\genfrac{}{}{0pt}{}{6 + 276t + 2376t^2 + 7884t^3 + 11684t^4 }{+ 7884t^5 + 2376t^6 + 276t^7 + 6t^8}$\\
\hline
10&&$t^{-1} + 31 + 31t + t^2$&$ \genfrac{}{}{0pt}{}{42 + 810t + 4742t^2 + 10790t^3 + 10790t^4 + 4742t^5}{ + 810t^6 + 42t^7}$\\
\hline
11&&$32+32t$&$\genfrac{}{}{0pt}{}{25 + 719t + 4605t^2 + 11035t^3 + 11035t^4}{ + 4605t^5 + 719t^6 + 25t^7}$\\
\hline
12&&$6t^{-1}+52+6t$&$ \genfrac{}{}{0pt}{}{132 + 1920t + 8028t^2 + 12608t^3 + 8028t^4 + 1920t^5}{ + 132t^6}$\\
\hline
13&&$-t^{-2} +8t^{-1}+50+8t-t^2$&$\genfrac{}{}{0pt}{}{90 + 1756t + 8038t^2 + 13000t^3 + 8038t^4}{ + 1756t^5 + 90t^6}$\\
\hline
14&&$22t^{-1}+57-21t+7t^2-t^3$&$ \genfrac{}{}{0pt}{}{t^{-1} + 407 + 4149t + 11827t^2 + 11827t^3 + 4149t^4}{ + 407t^5 + t^6}$\\
\hline
15&&$-4t^{-2} +36t^{-1} + 36 - 4t$&$\genfrac{}{}{0pt}{}{300 + 3964t + 12120t^2 + 12120t^3 + 3964t^4}{ + 300t^5}$\\
\end{tabular}$$
$$\begin{tabular}{l | c | c | c }
&d=2&4&6\\
\hline
r=2&$1$&$1+3t+4t^2$&$1+10t+89t^2+272t^3+371t^4+210t^5+67t^6+4t^7$\\
4&$t^{-1}$&$2+5t+t^2$ &$2+27t + 168t^2 + 370t^3+ 318t^4 + 123t^5 + 16t^6$\\
6&&5+3t &$5+ 66t+ 287t^2 + 404t^3+ 219t^4 + 42t^5 + t^6$\\
8&&$t^{-1}+7$&$14+149t + 408t^2 + 350t^3 + 98t^4 + 5t^5$\\
10&&$4t^{-1}+5-t$&$42 + 288t + 468t^2+ 208t^3 + 18t^4$\\
12&&$9t^{-1}-1$&$t^{-1} + 116 + 462t + 388t^2 + 57t^3$\\
14&&&$8t^{-1}+280+568t+168t^2$
\end{tabular}$$
\begin{Theorem}\label{rpoly}
With the polynomials $p_{d,r}$ given above, we have
\begin{enumerate}
\item If $r$ is even, then
$\displaystyle{\chi^{{\mathbb P}^2,H}_{0}(dH,P^r)\equiv \frac{p_{d,r}(\Lambda^4)}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}.}$
\item If $d$ and $r$ are both odd, then
$\displaystyle{\chi^{{\mathbb P}^2,H}_{H}(dH,P^r)\equiv \frac{ \Lambda^{-1} p_{d,r}(\Lambda^{4})}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}=
\frac{ \Lambda^{d^2-2r} p_{d,r}(\Lambda^{-4})}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}.}$
\item If $d$ and $r$ are both even, then
$\displaystyle{\chi^{{\mathbb P}^2,H}_{H}(dH,P^r)\equiv \frac{ \Lambda^{d^2-2r-1} p_{d,r}(\Lambda^{-4})}{(1-\Lambda^4)^{\binom{d+2}{2}-r}}.}$
\end{enumerate}
\end{Theorem}
\subsection{Invariants of blowups of the plane}
We want to apply the above results to compute $K$-theoretic Donaldson invariants of blowups of ${\mathbb P}^2$ in a finite number of points.
\begin{Remark} Let $X_r$ be the blowup of ${\mathbb P}^2$ in $r$ general points and let $E:=E_1+\ldots+ E_r$ be the sum of the exceptional divisors.
By definition we have for $c_1=0,H$ that
$\chi^{X_r,H}_{c_1+E}(nH-E)=\Lambda^r\chi^{{\mathbb P}^2,H}_{c_1}(nH,P^r)$.
By \lemref{blowindep} we have therefore $\chi^{X_r,\omega}_{c_1+E}(nH-E)\equiv\Lambda^r\chi^{{\mathbb P}^2,H}_{c_1}(nH,P^r)$ for all classes $\omega=H-\sum_{i=1}^r a_i E_i$ on $X_r$ with $\<\omega, K_{X_r}\><0$ and $0\le a_i< \frac{1}{\sqrt r}$ for all $i$. Therefore the formulas of \thmref{rpoly} also give the $\chi^{X_r,\omega}_{c_1+E}(nH-E)$.
\end{Remark}
By \thmref{nblow}, and using \lemref{blowindep}, we can, from the $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$, readily compute the generating functions of $K$-theoretic Donaldson invariants
$\chi^{X,\omega}_{c_1}(L)$ for any blowup $X$ of ${\mathbb P}^2$ in finitely many points, for any $c_1,L\in \operatorname{Pic}(X)$, and for any $\omega$ close to $H$, up to addition of a Laurent polynomial.
In particular we can readily apply this computation to the tables of the $\chi^{{\mathbb P}^2,H}_{0}(nH,P^r)$, $\chi^{{\mathbb P}^2,H}_{H}(nH,P^r)$ of \thmref{rpoly} above.
We will only write down the result in one simple case.
Let $X_s$ be the blowup of ${\mathbb P}^2$ in $s$ points, let again $E=\sum_{i=1}^s E_i$ be the sum of the exceptional divisors, let $L:=dH-2E$, and consider the cases
$c_1=0$, $c_1=E$, $c_1=H$ and $c_1=K_{X_s}$.
We define polynomials $q_{d,s}$ by the following table,
$$\begin{tabular}{l | c | c|c |c }
&d=3&4&5&6\\
\hline
s=1&1&$1+3t^2$&$1+15t^2+10t^3+6t^4$&$1+46t^2+104t^3+210t^4 + 105t^5 + 43t^6 + 3t^7$\\
s=2&&$1+t^2$&$1+10t^2+4t^3+t^4$&$1+37t^2+ 70t^3 + 105t^4 + 34t^5 + 9t^6$\\
s=3&&$1$&$1+6t^2+t^3$&$1+ 29t^2 + 44t^3 + 45t^4 + 8t^5 + t^6$\\
s=4&&&$1+3t^2$&$1 + 22t^2 + 25t^3 + 15t^4 + t^5$\\
s=5&&&$1+t^2$&$1+ 16t^2 + 12t^3 + 3t^4$\\
s=6&&&$1$&$1+11t^2+4t^3$\\
s=7&&&&$1+7t^2$\\
\end{tabular}$$
and polynomials $r_{d,s}$ by the following table.
$$\begin{tabular}{l | c | c }
&d=4&6\\
\hline
s=1&$1+3t$&$1+24t+105t^2+161t^3+168t^4+43t^5+10t^6$\\
s=2&$1+t$&$1+21t+71t^2+90t^3+63t^4+9t^5+t^6$\\
s=3&$1$&$1+ 18t+ 45t^2+ 45t^3 + 18t^4+ t^5$\\
s=4&&$1+15t+26t^2+19t^3+3t^4$\\
s=5&&$1+12t +13t^2 +6t^3$\\
s=6&&$1+9t+5t^2+t^3$\\
s=7&&$1+6t+t^2$\\
s=8&&$1+3t$\\
\end{tabular}$$
\begin{Proposition} Let $X_s$ be the blowup of ${\mathbb P}^2$ in $s$ general points with exceptional divisors $E_1,\ldots,E_s$, and write $E=\sum_{i=1}^sE_i$.
With the $q_{d,s}$ and $r_{d,s}$ given by the above tables we get
\begin{align*}
\chi^{X,H}_{0}(dH-2E)&\equiv \frac{q_{d,s}(\Lambda^4)}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}\\
\chi^{X,H}_{K_{X_s}}(dH-2E)&\equiv \frac{\Lambda^{d^2-1-3s}q_{d,s}(\frac{1}{\Lambda^4})}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}, \quad d \hbox{ even}\\
\chi^{X,H}_{H}(dH-2E)&\equiv \frac{\Lambda^{3}r_{d,s}(\Lambda^4)}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}, \quad d \hbox{ even}\\
\chi^{X,H}_{E}(dH-2E)&\equiv \begin{cases}
\frac{\Lambda^{d^2-1-3s} q_{d,s}(\frac{1}{\Lambda^4})}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}& d \hbox{ odd}\\
\frac{\Lambda^{d^2-4-3s} r_{d,s}(\frac{1}{\Lambda^4})}{(1-\Lambda^4)^{\binom{d+2}{2}-3s}}& d \hbox{ even}
\end{cases}
\end{align*}
The same formulas also apply with $\chi^{X,H}_{c_1}(dH-2E)$ replaced by
$\chi^{X,\omega}_{c_1}(dH-2E)$, with $\omega=H-a_1E_1-\ldots -a_sE_s$ and $0\le a_i< \frac{1}{\sqrt{s}}$ for all $i$.
\end{Proposition}
\begin{proof}
Recall that $R_3=-\lambda^4 x^2 + (1-\lambda^4)^2$, $S_3=\lambda(x^2-(1-\lambda^4)^2)$. Noting that $K_{X_s}\equiv H+E\mod 2H^2(X,{\mathbb Z})$, we get by
\thmref{nblow} that
\begin{align*}
\chi^{X_s,H}_0(dH-2E)&=\chi^{{\mathbb P}^2,H}_{0}\big(dH,\big(-\Lambda^4 P^2 + (1-\Lambda^4)^2\big)^s\big),\\
\chi^{X_s,H}_H(dH-2E)&=\chi^{{\mathbb P}^2,H}_{H}\big(dH,\big(-\Lambda^4 P^2 + (1-\Lambda^4)^2\big)^s\big),\\
\chi^{X_s,H}_E(dH-2E)&=\Lambda^s\chi^{{\mathbb P}^2,H}_{0}\big(dH,\big(P^2 - (1-\Lambda^4)^2\big)^s\big),\\
\chi^{X_s,H}_{K_{X_s}}(dH-2E)&=\Lambda^s\chi^{{\mathbb P}^2,H}_{H}\big(dH,\big(P^2 - (1-\Lambda^4)^2\big)^s\big).
\end{align*}
Now we just put the values of the tables of \thmref{rpoly} into these formulas.
\end{proof}
\begin{comment}
\begin{Corollary} Let $r,k\ge s\ge 0$. We denote $m:=\max(r,k)$. Let $X$ be the blowup of ${\mathbb P}^2$ in $m+1$ general points. Let $E_0,\ldots,E_{m}$ be the exceptional divisors.
\begin{align*}&\chi^{X,F+}_{E_{s+1}+\ldots +E_{r}}\big(nH-(n-2)E_0-3(E_1+\ldots E_s)-2(E_{s+1}+\ldots+E_k)\big)\\
&\qquad \equiv \frac{1}{2}\Lambda^{r-s} \frac{(1+\Lambda^4)^{n-1-2s-k}+(1+\Lambda^4)^{n-1-2s-k}}{(1-\Lambda^4)^{3(n-2s-k)}},\\
&\chi^{X,F+}_{H+E_0+E_{s+1}+\ldots +E_{r}}\big(nH-(n-2)E_0-3(E_1+\ldots E_s)-2(E_{s+1}+\ldots+E_k)\big)\\
&\qquad \equiv \frac{1}{2}\Lambda^{r-s} \frac{(1+\Lambda^4)^{n-1-2s-k}-(1+\Lambda^4)^{n-1-2s-k}}{(1-\Lambda^4)^{3(n-2s-k)}}.
\end{align*}
\end{Corollary}
\begin{proof} Let $\widehat {\mathbb P}^2$ be the blowup of ${\mathbb P}^2$ in a point and $E_0$ the exceptional divisor. As before we write $F=H-E_0$, $G=\frac{1}{2}(H+E_0)$. Then
$nH-(n-2)E_0=(n-1)F+2G$. We apply \propref{p112GM} and the blowup formulas: Using $S_1=\lambda$, $S_3=\lambda(x^2-(1-\Lambda^4)^2)$,
$R_3=(1-\lambda^4)^2-\lambda^4x^2$, $R_4=(1-\lambda^4)^4-\lambda^4x^4$, we get
\begin{align*}
\chi^{X,F+}_{E_{s+1}+\ldots +E_{r}}\big(nH-(n-2)E_0-3(E_1+\ldots E_s)-2(E_{s+1}+\ldots+E_k)\big)=\chi^{\widehat {\mathbb P}^2,F_+}
and \propref{p112GM}, the result follows by induction from the trivial identities
\begin{align*}
2(1+\lambda^4)^{n-1}-(1+\lambda^4)^{n}&=(1-\lambda^4)(1+\lambda^4)^{n-1}\\
(1+\lambda^4)^{n+1}-2\lambda^4(1+\lambda^4)^{n}&=(1-\lambda^4)(1+\lambda^4)^{n}\\
(1+\lambda^4)^{n+2}-4\lambda^4(1+\lambda^4)^{n}&=(1-\lambda^4)^2(1+\lambda^4)^{n}.
\end{align*}
\end{proof}
\end{comment}
\subsection{Symmetries from Cremona transforms}
\begin{Remark}
Conjecture \ref{ratconj} often predicts a symmetry for the polynomials $P^X_{c_1,L}(\Lambda)$.
Assume Conjecture \ref{ratconj} holds. Then we have the following.
\begin{enumerate}
\item If $c_1\equiv L+K_X-c_1\mod 2H^2(X,{\mathbb Z})$, then
$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2}P^X_{c_1,L}(\frac{1}{\Lambda})$.
\item More generally let $X$ be the blowup of ${\mathbb P}^2$ in $n$ points, with exceptional divisors $E_1,\ldots E_n$, $L=dH-a_1E_1-\ldots -a_nE_n$. If
$\sigma$ is a permutation of $\{1,\ldots, n\}$, we write $\sigma(L):=dH-a_{\sigma(1)}E_1-\ldots -a_{\sigma(n)}E_n$.
Then $\chi^{X,H}_{c_1}(L)=\chi^{X,H}_{\sigma(c_1)}(\sigma(L))$.
\end{enumerate}
Thus, if there is a $\sigma$ with $L=\sigma(L)$ and $\sigma(c_1)\equiv L+K_X-c_1\mod 2H^2(X,{\mathbb Z})$, then
$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2}P^X_{c_1,L}(\frac{1}{\Lambda})$.
\end{Remark}
Other symmetries come from the Cremona transform of the plane, which we briefly review.
Let $p_1,p_2,p_3$ be three general points in ${\mathbb P}^2$. For $k=1,2,3$ let $L_k$ be the line through $p_i,p_j$, where $\{i,j,k\}=\{1,2,3\}$.
Let $X$ be the blowup of ${\mathbb P}^2$ in $p_1,p_2,p_3$, with exceptional divisors $E_1,E_2,E_3$, and let $\overline E_1,\overline E_2,\overline E_3$ be the strict transforms of the lines
$L_1,L_2,L_3$. The $\overline E_i$ are disjoint $(-1)$ curves which can be blown down to obtain another projective plane $\overline {\mathbb P}^2$. Let $H$ (resp. $\overline H$) be the pullback of
the hyperplane class from ${\mathbb P}^2$ (resp. $\overline {\mathbb P}^2$) to $X$. Then $H^2(X,{\mathbb Z})$ has two different bases $H,E_1,E_2,E_3$ and $\overline H,\overline E_1,\overline E_2,\overline E_3$,
which are related
by the formula
$$dH-a_1E_1-a_2E_2-a_3E_3=(2d-a_1-a_2-a_3)\overline H-(d-a_2-a_3)\overline E_1-(d-a_1-a_3)\overline E_2-(d-a_1-a_2)\overline E_3.$$
Note that this description is symmetric under exchanging the role of $H,E_1,E_2,E_3$ and $\overline H, \overline E_1,\overline E_2,\overline E_3$.
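Indeed, writing $d'=2d-a_1-a_2-a_3$ and $a_k'=d-a_i-a_j$ for $\{i,j,k\}=\{1,2,3\}$, a direct computation shows that applying the substitution twice returns the original class:
\begin{align*}
2d'-a_1'-a_2'-a_3'&=2(2d-a_1-a_2-a_3)-\big(3d-2(a_1+a_2+a_3)\big)=d,\\
d'-a_2'-a_3'&=(2d-a_1-a_2-a_3)-(d-a_1-a_3)-(d-a_1-a_2)=a_1,
\end{align*}
and similarly for $a_2$, $a_3$.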
Let $c_1\in H^2(X,{\mathbb Z})$. If $\<c_1,K_X\>$ is even, then it is easy to see that $\overline c_1\equiv c_1\mod 2H^2(X,{\mathbb Z})$, but if $\<c_1,K_X\>$ is odd, then $\overline c_1\equiv K_X-c_1\mod 2H^2(X,{\mathbb Z})$.
For a class $L=dH-a_1E_1-a_2E_2-a_3E_3\in H^2(X,{\mathbb Z})$ we denote $\overline L=d\overline H-a_1\overline E_1-a_2\overline E_2-a_3\overline E_3.$
Then it is clear from the definition that
$\chi^{X,H}_{c_1}(L)=\chi^{X,\overline H}_{\overline{c}_1}(\overline L)$, and by \lemref{blowindep} we get $\chi^{X,\overline H}_{\overline{c}_1}(\overline L)\equiv \chi^{X,H}_{\overline c_1}(\overline L).$
If $\sigma$ is a permutation of $\{1,2,3\}$ and we denote $\sigma(L):=dH-a_{\sigma(1)}E_1-a_{\sigma(2)}E_2-a_{\sigma(3)}E_3$,
and if $\sigma(L)=L$, then it is clear that $\chi^{X,H}_{c_1}(L)=\chi^{X,H}_{\sigma(c_1)}(L)$.
Now assume $d=a_1+a_2+a_3$. Then $\overline L=L$, so that
$\chi^{X,H}_{c_1}(L)\equiv\chi^{X,H}_{{\overline c_1}}(L).$
Assume now $\<c_1,K_X\>$ is odd.
Assuming also \conref{ratconj}, the polynomials $P^X_{c_1,L} \in \Lambda^{-c_1^2}{\mathbb Z}[\Lambda^4]$ mentioned there satisfy
\begin{enumerate}
\item
$P^X_{c_1,L}(\Lambda)=P^X_{K_X-c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2} P^X_{L+K_X-c_1,L}(\frac{1}{\Lambda})=\Lambda^{L^2+8-K_X^2} P^X_{L- c_1,L}(\frac{1}{\Lambda}).$
\item
Thus if there is a permutation $\sigma$ of $\{1,2,3\}$ with $\sigma(L)=L$ and $c_1\equiv L-\sigma(c_1)\mod 2 H^2(X,{\mathbb Z})$, or with
$\sigma(L)=L$ and $c_1\equiv L+K_X-\sigma(c_1)\mod 2 H^2(X,{\mathbb Z})$, then we have the symmetries
$$P^X_{c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2} P^X_{c_1,L}(\frac{1}{\Lambda})=P^X_{K_X-c_1,L}(\Lambda)=\Lambda^{L^2+8-K_X^2} P^X_{K_X-c_1,L}(\frac{1}{\Lambda}).$$
\end{enumerate}
We check these predictions in a number of cases.
Let
$L_5=5H-2E_1-2E_2-E_3$, $L_6=6H-2E_1-2E_2-2E_3$, $L_7=7H-3E_1-2E_2-2E_3$.
We find
\begin{align*}
&\chi^{X,H}_{K_X+E_2}(L_5)\equiv\chi^{X,H}_{E_2}(L_5)\equiv\frac{5\Lambda^5+6\Lambda^9+5\Lambda^{13}}{(1-\Lambda^4)^{14}},\\
&\chi^{X,H}_{H}(L_6)\equiv\chi^{X,H}_{E_1+E_2+E_3}(L_6)\equiv\frac{\Lambda^3+18\Lambda^7+45\Lambda^{11}+45\Lambda^{15}+18\Lambda^{19}+\Lambda^{23}}{(1-\Lambda^4)^{19}},\\
&\chi^{X,H}_{K_X-E_3}(L_6)\equiv\chi^{X,H}_{E_3}(L_6)\equiv\frac{8\Lambda^5+26\Lambda^9+60\Lambda^{13}+26\Lambda^{17}+8\Lambda^{21}}{(1-\Lambda^4)^{19}},\\
&\chi^{X,H}_{K_X-E_3}(L_7)\equiv\chi^{X,H}_{E_3}(L_7)\equiv\frac{11\Lambda^5+61\Lambda^9+265\Lambda^{13}+350\Lambda^{17}+265\Lambda^{21}+61\Lambda^{25}+11\Lambda^{29}}{(1-\Lambda^4)^{24}}.
\end{align*}
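The paper's computations are carried out in Pari; the following small Python check (ours, for illustration only) verifies that the numerator coefficient lists displayed above are palindromic, as forced by the predicted functional equation $P(\Lambda)=\Lambda^{L^2+8-K_X^2}P(1/\Lambda)$.

```python
# Sanity check (ours): the predicted symmetry of the rational generating
# functions forces each numerator's coefficient list to be palindromic.
numerators = {
    "chi_{E_2}(L_5)": [5, 6, 5],
    "chi_{H}(L_6)": [1, 18, 45, 45, 18, 1],
    "chi_{E_3}(L_6)": [8, 26, 60, 26, 8],
    "chi_{E_3}(L_7)": [11, 61, 265, 350, 265, 61, 11],
}
for name, coeffs in numerators.items():
    # a palindromic list equals its own reversal
    assert coeffs == coeffs[::-1], name
print("all numerators palindromic")
```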
\section{The invariants of ${\mathbb P}^1\times{\mathbb P}^1$}
In this section we will use the results of the previous section to compute the $K$-theoretic Donaldson invariants of ${\mathbb P}^1\times {\mathbb P}^1$.
\subsection{A structural result}
First we will show, analogously to \thmref{blowrat}, that all the generating functions $\chi^{{\mathbb P}^1\times{\mathbb P}^1,\omega}_{c_1}(L,P^r)$ are rational functions.
\begin{Lemma}
Let $c_1\in H^2({\mathbb P}^1\times{\mathbb P}^1,{\mathbb Z})$. Let $L$ be a line bundle on ${\mathbb P}^1\times {\mathbb P}^1$ with $\<L,c_1\>+r$ even.
Let $\omega$ be an ample class on ${\mathbb P}^1\times{\mathbb P}^1$. Then
$\chi^{{\mathbb P}^1\times {\mathbb P}^1,\omega}_{c_1}(L,P^r)\equiv \chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(L,P^{r})$.
\end{Lemma}
\begin{proof}
We write $L=nF+mG$. By exchanging the role of $F$ and $G$ if necessary we can write $\omega=G+\alpha F$, with $1\le \alpha$.
We have to show that there are only finitely many classes $\xi$ of type $(c_1)$ with $\<\xi,(F+G)\>\le 0\le \<\xi,\omega\>$ and $\delta^X_\xi(L,P^r)\ne 0$.
Such a class is of the form $\xi=aG-bF$, with $a,b\in {\mathbb Z}_{>0}$ satisfying
$$a\le b,\quad \alpha a\ge b, \quad 2ab\le |a(n+2)-b(m+2)|+r+2.$$
This gives $$2ab\le a|n+2|+b|m+2|+r+2\le b(|n+2|+|m+2|)+r+2.$$ Thus we get $a\le \frac{|n+2|+|m+2|}{2}+\frac{r}{2}+1$. Therefore $a$ is bounded, and by $\alpha a\ge b$ also $b$ is bounded.
Therefore there are only finitely many possible classes $\xi$.
\end{proof}
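As an illustration (ours, not part of the proof), one can enumerate the finitely many candidate classes $\xi=aG-bF$ for sample values of $n$, $m$, $r$, $\alpha$, scanning $a$ up to the cap $a\le(|n+2|+|m+2|)/2+r/2+1$, which dominates the bound obtained in the proof:

```python
# Enumerate candidate wall classes xi = a*G - b*F on P^1 x P^1 satisfying
# a <= b, b <= alpha*a and 2ab <= |a(n+2) - b(m+2)| + r + 2, with a, b > 0.
def walls(n, m, r, alpha):
    a_max = (abs(n + 2) + abs(m + 2)) / 2 + r / 2 + 1
    found = []
    for a in range(1, int(a_max) + 1):
        for b in range(a, int(alpha * a) + 1):  # a <= b <= alpha * a
            if 2 * a * b <= abs(a * (n + 2) - b * (m + 2)) + r + 2:
                found.append((a, b))
    return found

print(len(walls(n=3, m=2, r=0, alpha=2.0)))  # prints 2
```

So for $L=3F+2G$, $r=0$, $\omega=G+2F$ only the classes $G-F$ and $G-2F$ survive the inequalities, consistent with the finiteness claim.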
We use the fact that the blowup $\widetilde {\mathbb P}^2$ of ${\mathbb P}^2$ in two different points is also the blowup of ${\mathbb P}^1\times {\mathbb P}^1$ in a point.
We can identify the classes as follows.
Let $H$ be the hyperplane class on ${\mathbb P}^2$ and let $E_1$, $E_2$ be the exceptional divisors of the double blowup of ${\mathbb P}^2$.
Let $F$, $G$ be the fibres of the two different projections of ${\mathbb P}^1\times {\mathbb P}^1$ to its factors, and let $E$ be the exceptional divisor of the blowup of ${\mathbb P}^1\times {\mathbb P}^1$.
Then on $\widetilde {\mathbb P}^2$ we have the identifications
\begin{align*}
F&=H-E_1, \quad G=H-E_2, \quad E=H-E_1-E_2,\\
H&=F+G-E, \quad E_1=G-E, \quad E_2=F-E.
\end{align*}
\begin{Theorem}\label{P11rat}
Let $c_1\in \{0,F,G,F+G\}$. Let $L$ be a line bundle on ${\mathbb P}^1\times {\mathbb P}^1$ with $\<L,c_1\>$ even. Let $r\in {\mathbb Z}_{\ge 0}$ with $\<L,c_1\>+r$ even.
There exist a polynomial $p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}(t)$ and an integer $N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}$ such that for all ample classes
$\omega$ on ${\mathbb P}^1\times {\mathbb P}^1$, we have
$$\chi^{{\mathbb P}^1\times {\mathbb P}^1,\omega}_{c_1}(L,P^r)\equiv \frac{p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}(\Lambda^4)}{\Lambda^{c_1^2}(1-\Lambda^4)^{N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,L,r}}}.$$
\end{Theorem}
\begin{proof}
Note that on $\widetilde {\mathbb P}^2$ we have
$F+G=2H-E_1-E_2$. We write $L=nF+mG$, with $n,m\in {\mathbb Z}$. Then on $\widetilde {\mathbb P}^2$ we have
$L=(n+m)H-nE_1-mE_2$.
By \thmref{nblow} we have therefore
\begin{align*}
\chi^{\widetilde {\mathbb P}^2,H}_{0}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{0}\big((n+m)H, P^r \cdot R_{n+1}(P,\Lambda)R_{m+1}(P,\Lambda)\big),\\
\chi^{\widetilde {\mathbb P}^2,H}_{F}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{H}\big((n+m)H, P^r \cdot S_{n+1}(P,\Lambda)R_{m+1}(P,\Lambda)\big),\\
\chi^{\widetilde {\mathbb P}^2,H}_{G}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{H}\big((n+m)H, P^r \cdot R_{n+1}(P,\Lambda)S_{m+1}(P,\Lambda)\big),\\
\chi^{\widetilde {\mathbb P}^2,H}_{F+G}(nF+mG,P^r)&=\chi^{{\mathbb P}^2,H}_{0}\big((n+m)H, P^r \cdot S_{n+1}(P,\Lambda)S_{m+1}(P,\Lambda)\big).
\end{align*}
As $R_{n}(P,\Lambda)\in {\mathbb Z}[P^2,\Lambda^4]$, $S_{n}(P,\Lambda)\in \Lambda{\mathbb Z}[P,\Lambda^4]$, we see by \thmref{P2rat1} that
for $c_1=0,F,G,F+G$ we can write
$$
\chi^{\widetilde {\mathbb P}^2,H}_{c_1}(nF+mG,P^r)\equiv \frac{p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}(\Lambda^4)}{\Lambda^{c_1^2}(1-\Lambda^4)^{N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}}},$$
with $p^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}\in {\mathbb Q}[t]$ and $N^{{\mathbb P}^1\times {\mathbb P}^1}_{c_1,nF+mG,r}\in{\mathbb Z}_{\ge 0}.$
As on $\widetilde {\mathbb P}^2$ we have $F+G=2H-E_1-E_2$, we get by \thmref{blowrat} that again for $c_1=0,F,G,F+G$ we have
$\chi^{\widetilde {\mathbb P}^2,F+G}_{c_1}(nF+mG,P^r)\equiv\chi^{\widetilde {\mathbb P}^2,H}_{c_1}(nF+mG,P^r)$. Finally by the blowdown formula
\thmref{nblow} we have
$\chi^{\widetilde {\mathbb P}^2,F+G}_{c_1}(nF+mG,P^r)=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(nF+mG,P^r).$
\end{proof}
\subsection{Computations for $L=d(F+G)$}
We will compute $\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(d(F+G))$ for $d\le 7$ and $c_1=0$, $F$, $G$, $F+G$. Obviously by symmetry
$\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{G}(d(F+G))=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G))$, and furthermore we have just seen that
$\chi^{{\mathbb P}^1\times {\mathbb P}^1,\omega}_{c_1}(d(F+G))\equiv\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(d(F+G))$ for any ample class $\omega$ on ${\mathbb P}^1\times{\mathbb P}^1$.
We use a different strategy than in the proof of \thmref{P11rat}, which is computationally more tractable, and allows us to compute
$\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{c_1}(d(F+G))$ for $d\le 7$, using only the $\chi^{{\mathbb P}^2,H}_H(nH,P^r)$, for $1\le n\le 8$, $0\le r\le 16$ already computed.
{\bf Step 1.}
Using $F=H-E_1$, $G=H-E_2$, $E=H-E_1-E_2$ (and thus $d(F+G)-dE=dH$), together with \thmref{nblow}, we have
\begin{align*}
\chi^{\widetilde {\mathbb P}^2,H}_E(d(F+G)-dE,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H-E_1-E_2}(dH,P^r)=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}(dH,P^r),\\
\chi^{\widetilde {\mathbb P}^2,H}_E(d(F+G)-(d-1)E,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H-E_1-E_2}((d+1)H-E_1-E_2,P^{r})\\&=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+2}),\\
\chi^{\widetilde {\mathbb P}^2,H}_F(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^2,H}_{H-E_1}(dH,P^r)=\Lambda\chi^{{\mathbb P}^2,H}_{H}(dH,P^r),\\
\chi^{\widetilde {\mathbb P}^2,H}_F(d(F+G)-(d-1)E,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H-E_1}((d+1)H-E_1-E_2,P^{r})\\&=\Lambda(1-\Lambda^4)\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+1}),\\
\chi^{\widetilde {\mathbb P}^2,H}_{F+G-E}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^2,H}_{H}(dH,P^r),\\
\chi^{\widetilde {\mathbb P}^2,H}_{F+G-E}(d(F+G)-(d-1)E,P^r)&=\chi^{\widetilde {\mathbb P}^2,H}_{H}((d+1)H-E_1-E_2,P^{r})\\&=(1-\Lambda^4)^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r}),
\end{align*}
where we have used that $S_1(P,\Lambda)=\Lambda$, $S_2(P,\Lambda)=P\Lambda$ and $R_2(P,\Lambda)=(1-\Lambda^4)$.
The $\chi^{{\mathbb P}^2,H}_{H}(nH,P^{s})$ have been computed for $n\le 8$ and $s\le 16$. In the tables above they are only listed for $n\le 7$, and only up to adding a Laurent polynomial in
$\Lambda$ so as to give them a particularly simple form, but they have been computed precisely.
Thus in the range $d\le 7$ and $r\le 14$, all the invariants on the left hand side of the formulas of Step 1 have been computed.
{\bf Step 2.} For $d\le 7$ we compute
\begin{align*}
\tag{1}\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-dE,P^r)&=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}(dH,P^r)+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-(d-1)E,P^r)&=\Lambda^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+2})\\
&\quad+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r),\\
\tag{2}\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-dE,P^r)&=\Lambda\chi^{{\mathbb P}^2,H}_{H}(dH,P^{r})+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-(d-1)E,P^r)&=\Lambda(1-\Lambda^4)\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r+1})\\
&\quad+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r),\\
\tag{3}\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^2,H}_{H}(dH,P^{r})+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-(d-1)E,P^r)&=(1-\Lambda^4)^2\chi^{{\mathbb P}^2,H}_{H}((d+1)H,P^{r})\\
&\quad+\sum_{\xi} \delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r).\\
\end{align*}
Here the sums are over all classes $\xi\in H^2(\widetilde {\mathbb P}^2,{\mathbb Z})$ with $\<H,\xi\>\le 0\le\<(2H-E_1-E_2),\xi\>$ (but at least one of the inequalities is strict) and $\delta_\xi^{\widetilde {\mathbb P}^2}\ne 0$, and the summand
is $\delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r)$, $\delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r)$, if both inequalities are strict and
$\frac{1}{2}\delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r)$, $\frac{1}{2}\delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r)$ if one of them is an equality.
(Note that we can exclude the $\xi$ with $\<\xi,H\>=0=\<\xi,2H-E_1-E_2\>$, because with $\xi$ also $-\xi$ will fulfil this property and
$\delta_\xi^{\widetilde {\mathbb P}^2}(L,P^r)=-\delta_{-\xi}^{\widetilde {\mathbb P}^2}(L,P^r)$.)
In (1) these are classes of type $(E=H-E_1-E_2)$, in (2) of type $(F=H-E_1)$ and in (3) of type $(F+G-E=H)$.
By \lemref{blowindep} there are finitely many such classes.
In fact in the notations of the proof of \lemref{blowindep} we have
$$n=2, \ \epsilon=\frac{1}{2}, \ \delta=\frac{1}{4}, |m_1+1|=|m_2+1|=1.$$
Thus if $\xi=aH-b_1E_1-b_2E_2$ is such a class, then we get by \eqref{blowbound} that
$|b_1|+|b_2|\le 4(|d+3| +r+4)$ and $0<a\le \frac{1}{2}(|b_1|+|b_2|)$.
For all $\xi$ satisfying these bounds, it is first checked whether the criterion of \lemref{vanwall}(2) for the non-vanishing of the wallcrossing terms $\delta_\xi^{\widetilde {\mathbb P}^2}(dH,P^r)$, $\delta_\xi^{\widetilde {\mathbb P}^2}((d+1)H-E_1-E_2,P^r)$ is indeed fulfilled; if so, we compute the wallcrossing term. We again use that by \lemref{vanwall}
$\delta_{\xi,d}^X(L,P^r)=0$ unless $d\le a_{\xi,L,X}:=\xi^2+2|\<\xi,L-K_X\>|+2r+4$.
Thus, also using \remref{qlevel}, it is enough to evaluate the formula of \defref{wallcrossterm} modulo $q^{a_{\xi,L,X}}$ and $\Lambda^{a_{\xi,L,X}}$.
This is again a finite evaluation.
{\bf Step 3.}
By \thmref{nblow} we have
\begin{align*}
\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-dE,P^r)&=\chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{0}(d(F+G),S_{d+1}(P,\Lambda) P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-(d-1)E,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{0}(d(F+G),S_{d}(P,\Lambda) P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G),R_{d+1}(P,\Lambda)P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-(d-1)E,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G),R_{d}(P,\Lambda)P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-dE,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F+G}(d(F+G),S_{d+1}(P,\Lambda)P^r),\\
\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-(d-1)E,P^r)&=\chi^{{\mathbb P}^1\times {\mathbb P}^1,F+G}_{F+G}(d(F+G),S_{d}(P,\Lambda)P^r).
\end{align*}
By \remref{npol} there exist polynomials $f_d\in {\mathbb Q}[x,\Lambda^4]$, $g_d\in {\mathbb Q}[x,\Lambda^4]$ with
$f_d S_d(x,\lambda)+g_d S_{d+1}(x,\lambda)=\lambda(1-\lambda^4)^{N_d}$,
and
$h_d\in {\mathbb Q}[x,\Lambda^4]$, $l_d\in {\mathbb Q}[x,\Lambda^4]$ with
$h_d R_d(x,\lambda)+l_d R_{d+1}(x,\lambda)=(1-\lambda^4)^{M_d}$.
For $d\le 7$ we see that $f_d$, $h_d$ are polynomials in $x$ of degree at most $14$, and $g_d$, $l_d$ are polynomials in $x$ of degree at most $11$.
Thus we get
\begin{align*}
\chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{0}(d(F+G))&=\frac{1}{\Lambda(1-\Lambda^4)^{N_d}}\Big(\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-(d-1)E,f_d(P,\Lambda))\\
&\qquad+\chi^{\widetilde {\mathbb P}^2,F+G}_{E}(d(F+G)-dE,g_d(P,\Lambda))\Big),\\
\chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{F}(d(F+G))&=\frac{1}{(1-\Lambda^4)^{M_d}}\Big(\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-(d-1)E,h_d(P,\Lambda))\\
&\qquad+\chi^{\widetilde {\mathbb P}^2,F+G}_{F}(d(F+G)-dE,l_d(P,\Lambda))\Big),\\
\chi^{ {\mathbb P}^1\times {\mathbb P}^1,F+G}_{F+G}(d(F+G))&=\frac{1}{\Lambda(1-\Lambda^4)^{N_d}}\Big(\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-(d-1)E,f_d(P,\Lambda))\\
&\qquad+\chi^{\widetilde {\mathbb P}^2,F+G}_{F+G-E}(d(F+G)-dE,g_d(P,\Lambda))\Big).
\end{align*}
All these computations are carried out with a Pari program. Finally we arrive at the following result.
\begin{Theorem}
With the notation of \thmref{P11gen}.
\begin{enumerate}
\item $\displaystyle{\chi^{{\mathbb P}^1\times{\mathbb P}^1,F+G}_{0}(dF+dG)=\frac{p_d(\Lambda^4)}{(1-\Lambda^4)^{(d+1)^2}}-1-(d^2+4d+5)\Lambda^4}$
for $1\le d\le 7$.
\item $\displaystyle{\chi^{{\mathbb P}^1\times{\mathbb P}^1,F+G}_{F+G}(dF+dG)=\frac{q_d(\Lambda^4)}{(1-\Lambda^4)^{(d+1)^2}}-\Lambda^2}$ for $1\le d\le 7$.
\item $\displaystyle{\chi^{{\mathbb P}^1\times{\mathbb P}^1,F+G}_{F}(dF+dG)=\frac{r_d(\Lambda^4)}{(1-\Lambda^4)^{(d+1)^2}}}$ for $d=2,4,6$.
\end{enumerate}
\end{Theorem}
\thmref{P11gen} follows from this and \propref{highvan}.
\section{Introduction}\label{sec_intro}
Let $\Omega \subset \mathbb{R}^3$ be a bounded smooth domain occupied by a fluid and a rigid body. Let the rigid body $\mathcal{S}(t)$ be a regular, bounded domain moving inside $\Omega$. The motion of the rigid body is governed by the balance equations for linear and angular momentum. We assume that the fluid domain $\mathcal{F}(t)=\Omega \setminus \overline{\mathcal{S}(t)}$ is filled with a viscous isentropic compressible fluid. We also assume the slip boundary conditions at the interface of the interaction of the fluid and the rigid body as well as at $\partial \Omega$.
The evolution of this fluid-structure system can be described by the following equations
\begin{align}
\frac{\partial {\rho}_{\mc{F}}}{\partial t} + \operatorname{div}({\rho_{\mc{F}}} u_{\mc{F}}) =0, \quad &\forall \ (t,x)\in (0,T)\times\mc{F}(t),\label{mass:comfluid}\\
\frac{\partial ({\rho_{\mc{F}}} u_{\mc{F}})}{\partial t}+ \operatorname{div}({\rho_{\mc{F}}} u_{\mc{F}}\otimes u_{\mc{F}})- \operatorname{div} \mathbb{T}(u_{\mc{F}})+\nabla p_{\mc{F}} =\rho_{\mc{F}}g_{\mc{F}},\quad &\forall \ (t,x)\in (0,T)\times \mc{F}(t),\label{momentum:comfluid}\\
mh''(t)= -\int\limits_{\partial \mc{S}(t)} \left(\mathbb{T}(u_{\mc{F}}\right) - p_{\mc{F}}\mathbb{I}) \nu\, d\Gamma + \int\limits_{\mc{S}(t)} \rho_{\mc{S}}g_{\mc{S}}\, dx,\quad &\forall \ t\in (0,T), \label{linear momentumcomp:body}\\
(J\omega)'(t) = -\int\limits_{\partial \mc{S}(t)} (x-h(t)) \times (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\, d\Gamma + \int\limits_{\mc{S}(t)} (x-h(t)) \times \rho_{\mc{S}}g_{\mc{S}}\, dx,\quad &\forall \ t\in (0,T), \label{angular momentumcomp:body}
\end{align}
the boundary conditions
\begin{align}
u_{\mc{F}}\cdot \nu = u_{\mc{S}} \cdot \nu, \quad &\forall \ (t,x) \in (0,T)\times \partial \mc{S}(t), \label{boundarycomp-1}\\ (\mathbb{T}(u_{\mc{F}}) \nu)\times \nu = -\alpha (u_{\mc{F}}-u_{\mc{S}})\times \nu, \quad &\forall \ (t,x) \in (0,T)\times\partial \mc{S}(t), \label{boundarycomp-2}\\
u_{\mc{F}}\cdot \nu = 0, \quad &\forall \ (t,x) \in (0,T)\times \partial \Omega, \label{boundarycomp-3}\\ (\mathbb{T}(u_{\mc{F}}) \nu)\times \nu = -\alpha (u_{\mc{F}}\times \nu), \quad &\forall \ (t,x) \in (0,T)\times \partial \Omega, \label{boundarycomp-4}
\end{align}
and the initial conditions
\begin{align}
{\rho_{\mc{F}}}(0,x)=\rho_{\mc{F}_{0}}(x),\quad ({\rho_{\mc{F}}}u_{\mc{F}})(0,x)=q_{\mc{F}_0}(x), & \quad \forall \ x\in \mc{F}_0,\label{initial cond}\\
h(0)=0,\quad h'(0)=\ell_{0},\quad \omega(0)=\omega_{0}.\label{initial cond:comp}
\end{align}
The fluid occupies, at $t=0$, the domain $\mc{F}_0=\Omega \setminus \mc{S}_0$, where the initial position of the rigid body is $\mc{S}_0$. In equations \eqref{linear momentumcomp:body}--\eqref{boundarycomp-4}, $\nu(t , x )$ is the unit normal to $\partial\mc{S}(t)$ at the point $x \in \partial\mc{S}(t)$, directed to the interior of the body.
In \eqref{momentum:comfluid} and \eqref{linear momentumcomp:body}--\eqref{angular momentumcomp:body}, $g_{\mc{F}}$ and $g_{\mc{S}}$ are the densities of the volume forces on the fluid and on the rigid body, respectively. Moreover, $\alpha >0$ is a coefficient of friction.
Here, the notation $u \otimes v$ is the tensor product for two vectors $u,v \in \mathbb{R}^3$ and it is defined as
$u \otimes v=(u_{i}v_{j})_{1\leq i,j \leq 3}$. In the above equations, $\rho_{\mc{F}}$ and $u_{\mc{F}}$ represent respectively the mass density and the velocity of the fluid, and the pressure of the fluid is denoted by $p_{\mc{F}}$.
We assume that the flow is in the barotropic regime and we focus on the isentropic case where the relation between $p_{\mc{F}}$ and $\rho_{\mc{F}}$ is given by the constitutive law:
\begin{equation} \label{const-law}
p_{\mc{F}}= a_{\mc{F}}\rho_{\mc{F}}^{\gamma},
\end{equation} with $a_{\mc{F}}>0$ and the adiabatic constant $\gamma > \frac{3}{2}$, which is a necessary assumption for the existence of a weak solution of compressible fluids (see for example \cite{EF70}).
As is common, we set $$\mathbb{T}(u_{\mc{F}})= 2\mu_{\mc{F}} \mathbb{D}(u_{\mc{F}}) + \lambda_{\mc{F}}\operatorname{div}u_{\mc{F}}\mathbb{I},$$ where $\mathbb{D}(u_{\mc{F}})=\frac{1}{2}\left(\nabla u_{\mc{F}} + \nabla u_{\mc{F}}^{\top}\right)$ denotes the symmetric part of the velocity gradient, $\nabla u_{\mc{F}}^{\top}$ is the transpose of the matrix $\nabla u_{\mc{F}}$, and $\lambda_{\mc{F}},\mu_{\mc{F}}$ are the viscosity coefficients satisfying
\begin{equation*}
\mu_{\mc{F}} > 0, \quad 3\lambda_{\mc{F}} + 2\mu_{\mc{F}} \geq 0.
\end{equation*}
The Eulerian velocity $u_{\mc{S}}(t,x)$ at each point $x\in \mc{S}(t)$ of the rigid body is given by
\begin{equation}\label{Svel1}
u_{\mc{S}}(t,x)= h'(t)+ \omega(t) \times (x-h(t)),
\end{equation}
where $h(t)$ is the centre of mass and $h'(t)$, $\omega(t)$ are the linear and angular velocities of the rigid body. We remark that $\mc{S}(t)$ is uniquely defined by $h(t)$, $\omega(t)$ and $\mc{S}_0$. Similarly, the knowledge of $\mc{S}(t)$ and $\mc{S}_0$ yields $h(t)$ and $\omega(t)$.
The initial velocity of the rigid body is given by
\begin{equation}\label{Svel2}
u_{\mc{S}}(0,x)= u_{\mc{S}_0}:= \ell_0+ \omega_0 \times x,\quad x\in \mc{S}_0.
\end{equation}
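A quick numerical check (ours, not in the paper) confirms that a rigid velocity field of the form \eqref{Svel1} carries no deformation, i.e. its symmetric gradient $\mathbb{D}(u_{\mc{S}})$ vanishes identically:

```python
import numpy as np

# Rigid field u_S(x) = ell + omega x (x - h); its Jacobian is the skew
# matrix of omega, so the symmetric part D(u_S) is zero.
rng = np.random.default_rng(0)
ell, omega, h = rng.normal(size=3), rng.normal(size=3), rng.normal(size=3)

def u_S(x):
    return ell + np.cross(omega, x - h)

# central finite differences (exact here, since u_S is affine in x)
x0, eps = rng.normal(size=3), 1e-5
grad = np.empty((3, 3))
for j in range(3):
    e = np.zeros(3); e[j] = eps
    grad[:, j] = (u_S(x0 + e) - u_S(x0 - e)) / (2 * eps)

D = 0.5 * (grad + grad.T)
assert np.allclose(D, 0.0, atol=1e-9)
```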
Here the mass density of the body $\rho_{\mc{S}}$ satisfies the following continuity equation
\begin{equation}\label{eq:vrBeq}
\frac{\partial \rho_{\mc{S}}}{\partial t} + u_\mc{S}\cdot\nabla \rho_{\mc{S}} = 0, \quad \forall \ (t,x)\in (0,T)\times\mc{S}(t),\quad \rho_{\mc{S}}(0,x)=\rho_{\mc{S}_0}(x),\quad \forall \ x\in \mc{S}_0.
\end{equation}
Moreover, $m$ is the mass and $J(t)$ is the moment of inertia matrix of the solid. We express $h(t)$, $m$ and $J(t)$ in the following way:
\begin{align}
m &= \int\limits_{\mc{S}(t)} \rho_{\mc{S}} \ dx, \label{def:m} \\
h(t) &= \frac{1}{m} \int\limits_{\mc{S}(t)} \rho_{\mc{S}} \ x \ dx, \\
J(t) &= \int\limits_{\mc{S}(t)} \rho_{\mc{S}} \big[ |x-h(t)|^2\mathbb{I} - (x-h(t)) \otimes (x-h(t)) \big] \ dx. \label{def:J}
\end{align}
In the remainder of this introduction, we present the weak formulation of the system, discuss our main result regarding the existence of weak solutions and put it in a larger perspective.
\subsection{Weak formulation}\label{S2}
We derive a weak formulation with the help of multiplication by appropriate test functions and integration by parts by taking care of the boundary conditions. Due to the presence of the Navier-slip boundary condition, the test functions will be discontinuous across the fluid-solid interface. We introduce the set of rigid velocity fields:
\begin{equation} \label{defR}
\mc{R}=\left\{ \zeta : \Omega \to \mathbb{R}^3 \mid \mbox{There exist }V, r, a \in \mathbb{R}^3 \mbox{ such that }\zeta(x)=V+ r \times \left(x-a\right)\mbox{ for any } x\in\Omega\right\}.
\end{equation}
For any $T>0$, we define the test function space $V_{T}$ as follows:
\begin{equation}\label{def:test}
V_{T}=
\left\{\!\begin{aligned}
&\phi \in C([0,T]; L^2(\Omega))\mbox{ such that there exist }\phi_{\mc{F}}\in \mc{D}([0,T); \mc{D}(\overline{\Omega})),\, \phi_{\mc{S}}\in \mc{D}([0,T); \mc{R})\\ &\mbox{satisfying }\phi(t,\cdot)=\phi_{\mc{F}}(t,\cdot)\mbox{ on }\mc{F}(t),\quad \phi(t,\cdot)=\phi_{\mc{S}}(t,\cdot)\mbox{ on }\mc{S}(t)\mbox{ with }\\ &\phi_{\mc{F}}(t,\cdot)\cdot \nu = \phi_{\mc{S}}(t,\cdot)\cdot \nu \mbox{ on }\partial\mc{S}(t),\ \phi_{\mc{F}}(t,\cdot)\cdot \nu=0 \mbox{ on }\partial\Omega\mbox{ for all }t\in [0,T]
\end{aligned}\right\},
\end{equation}
where $\mc{D}$ denotes the set of
all infinitely differentiable functions with compact support.
We multiply equation \eqref{momentum:comfluid} by a test function $\phi\in V_T$ and integrate over $\mc{F}(t)$ to obtain
\begin{multline}\label{express1}
\frac{d}{dt} \int\limits_{\mc{F}(t)} \rho_{\mc{F}}u_{\mc{F}}\cdot \phi_{\mc{F}} - \int\limits_{\mc{F}(t)} \rho_{\mc{F}}u_{\mc{F}}\cdot \frac{\partial}{\partial t}\phi_{\mc{F}} - \int\limits_{\mc{F}(t)} (\rho_{\mc{F}}u_{\mc{F}} \otimes u_{\mc{F}}) : \nabla \phi_{\mc{F}} + \int\limits_{\mc{F}(t)} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) : \mathbb{D}(\phi_{\mc{F}}) \\
= \int\limits_{\partial \Omega} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\cdot \phi_{\mc{F}} + \int\limits_{\partial \mc{S}(t)} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\cdot \phi_{\mc{F}} +\int\limits_{\mc{F}(t)}\rho_{\mc{F}}g_{\mc{F}}\cdot \phi_{\mc{F}}.
\end{multline}
We use the identity $(A\times B)\cdot (C\times D)=(A\cdot C)(B\cdot D)-(B\cdot C)(A\cdot D)$ to have
\begin{equation*}
\mathbb{T}(u_{\mc{F}}) \nu\cdot \phi_{\mc{F}}= [\mathbb{T}(u_{\mc{F}}) \nu \cdot \nu](\phi_{\mc{F}}\cdot \nu) + [\mathbb{T}(u_{\mc{F}}) \nu \times \nu]\cdot (\phi_{\mc{F}}\times \nu),
\end{equation*}
\begin{equation*}
\mathbb{T}(u_{\mc{F}}) \nu\cdot \phi_{\mc{S}}= [\mathbb{T}(u_{\mc{F}}) \nu \cdot \nu](\phi_{\mc{S}}\cdot \nu) + [\mathbb{T}(u_{\mc{F}}) \nu \times \nu]\cdot (\phi_{\mc{S}}\times \nu).
\end{equation*}
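A numerical check (ours) of the decomposition used in the two displays above: for any unit vector $\nu$, the Lagrange identity gives $v\cdot \phi = (v\cdot\nu)(\phi\cdot\nu) + (v\times\nu)\cdot(\phi\times\nu)$, which with $v=\mathbb{T}(u_{\mc{F}})\nu$ is exactly the splitting of the stress into normal and tangential contributions:

```python
import numpy as np

# Verify v . phi = (v.nu)(phi.nu) + (v x nu).(phi x nu) for |nu| = 1,
# a consequence of (A x B).(C x D) = (A.C)(B.D) - (B.C)(A.D).
rng = np.random.default_rng(1)
for _ in range(100):
    v, phi = rng.normal(size=3), rng.normal(size=3)
    nu = rng.normal(size=3)
    nu /= np.linalg.norm(nu)                   # normalize to a unit vector
    lhs = v @ phi
    rhs = (v @ nu) * (phi @ nu) + np.cross(v, nu) @ np.cross(phi, nu)
    assert abs(lhs - rhs) < 1e-10
```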
Now by using the definition of $V_T$ and the boundary conditions \eqref{boundarycomp-1}--\eqref{boundarycomp-4}, we get
\begin{equation}\label{express2}
\int\limits_{\partial \Omega} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\cdot \phi_{\mc{F}} = -\alpha \int\limits_{\partial \Omega} (u_{\mc{F}}\times \nu)\cdot (\phi_{\mc{F}}\times \nu),
\end{equation}
\begin{equation}\label{express3}
\int\limits_{\partial \mc{S}(t)} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\cdot \phi_{\mc{F}} = -\alpha \int\limits_{\partial \mc{S}(t)} [(u_{\mc{F}}-u_{\mc{S}})\times \nu]\cdot [(\phi_{\mc{F}}-\phi_{\mc{S}})\times \nu] + \int\limits_{\partial \mc{S}(t)} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\cdot \phi_{\mc{S}}.
\end{equation}
Using the rigid body equations \eqref{linear momentumcomp:body}--\eqref{angular momentumcomp:body} and some calculations, we obtain
\begin{equation}\label{express4}
\int\limits_{\partial \mc{S}(t)} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) \nu\cdot \phi_{\mc{S}} = -\frac{d}{dt}\int\limits_{\mc{S}(t)} \rho_{\mc{S}}u_{\mc{S}}\cdot \phi_{\mc{S}} + \int\limits_{\mc{S}(t)} \rho_{\mc{S}}u_{\mc{S}}\cdot \frac{\partial}{\partial t} \phi_{\mc{S}} + \int\limits_{\mc{S}(t)} \rho_{\mc{S}}g_{\mc{S}}\cdot \phi_{\mc{S}}.
\end{equation}
Thus by combining the above relations \eqref{express1}--\eqref{express4} and then integrating from $0$ to $T$, we have
\begin{multline}\label{weak-momentum}
- \int\limits_0^T\int\limits_{\mc{F}(t)} \rho_{\mc{F}}u_{\mc{F}}\cdot \frac{\partial}{\partial t}\phi_{\mc{F}} - \int\limits_0^T\int\limits_{\mc{S}(t)} \rho_{\mc{S}}u_{\mc{S}}\cdot \frac{\partial}{\partial t}\phi_{\mc{S}} - \int\limits_0^T\int\limits_{\mc{F}(t)} (\rho_{\mc{F}}u_{\mc{F}} \otimes u_{\mc{F}}) : \nabla \phi_{\mc{F}} + \int\limits_0^T\int\limits_{\mc{F}(t)} (\mathbb{T}(u_{\mc{F}}) - p_{\mc{F}}\mathbb{I}) : \mathbb{D}(\phi_{\mc{F}}) \\
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (u_{\mc{F}}\times \nu)\cdot (\phi_{\mc{F}}\times \nu) + \alpha \int\limits_0^T\int\limits_{\partial \mc{S}(t)} [(u_{\mc{F}}-u_{\mc{S}})\times \nu]\cdot [(\phi_{\mc{F}}-\phi_{\mc{S}})\times \nu] \\
= \int\limits_0^T\int\limits_{\mc{F}(t)}\rho_{\mc{F}}g_{\mc{F}}\cdot \phi_{\mc{F}} + \int\limits_0^T\int\limits_{\mc{S}(t)} \rho_{\mc{S}}g_{\mc{S}}\cdot \phi_{\mc{S}} + \int\limits_{\mc{F}(0)} (\rho_{\mc{F}}u_{\mc{F}}\cdot \phi_{\mc{F}})(0) + \int\limits_{\mc{S}(0)} (\rho_{\mc{S}}u_{\mc{S}}\cdot \phi_{\mc{S}})(0).
\end{multline}
\begin{remark}
We stress that in the definition of the set $U_T$ (in Definition~\ref{weaksolution-main} below)
the function $u_{\mc{F}}$ on $\Omega$ is a regular extension of the velocity field $u_{\mc{F}}$ from $\mc{F}(t)$ to $\Omega$, see \eqref{Eu1}--\eqref{Eu2}. Correspondingly, $u_{\mc{S}}\in \mc{R}$ denotes a rigid extension from $\mc{S}(t)$ to $\Omega$ as in \eqref{Svel1}. Moreover, by the density $\rho _{\mc{F}}$ in \eqref{NO2}, we mean an extended fluid density $\rho _{\mc{F}}$
from $\mc{F}(t)$ to $\Omega$ by zero, see \eqref{Erho}--\eqref{15:21}. Correspondingly, $\rho_{\mc{S}}$ refers to an extended solid density from $\mc{S}(t)$ to $\Omega$ by zero.
\end{remark}
\begin{remark}\label{u-initial}
In \eqref{NO2}, the initial fluid density $\rho _{\mc{F}_0}$ on $\Omega$ represents a zero extension of $\rho _{\mc{F}_0}$ (defined in \eqref{initial cond})
from $\mc{F}_0$ to $\Omega$. Correspondingly, $\rho_{\mc{S}_0}$ in equation \eqref{NO5} stands for an extended initial solid density (defined in \eqref{eq:vrBeq}) from $\mc{S}_0$ to $\Omega$ by zero. Similarly, $q_{\mc{F}_0}$ refers to an extended initial momentum from $\mc{F}_0$ to $\Omega$ by zero, and $u_{\mc{S}_0}\in \mc{R}$ denotes a rigid extension from $\mc{S}_0$ to $\Omega$ as in \eqref{Svel2}.
\end{remark}
\begin{defin}\label{weaksolution-main}
Let $T> 0$, and let $\Omega$ and $\mc{S}_0 \Subset \Omega$ be two regular bounded domains of $\mathbb{R}^3$. A triplet $(\mc{S},\rho,u)$ is a finite energy weak solution to system \eqref{mass:comfluid}--\eqref{initial cond:comp} if the following holds:
\begin{itemize}
\item $\mc{S}(t) \Subset \Omega$ is a bounded domain of $\mathbb{R}^3$ for all $t\in [0,T)$ such that
\begin{equation}\label{NO1}
\chi_{\mc{S}}(t,x) = \mathds{1}_{\mc{S}(t)}(x) \in L^{\infty}((0,T) \times \Omega).
\end{equation}
\item $u$ belongs to the following space
\begin{equation*}
U_{T}=
\left\{\!\begin{aligned}
&u \in L^{2}(0,T; L^2(\Omega)) \mbox{ such that there exist }u_{\mc{F}}\in L^2(0,T; H^1(\Omega)),\, u_{\mc{S}}\in L^{2}(0,T; \mc{R})\\ &\mbox{satisfying }u(t,\cdot)=u_{\mc{F}}(t,\cdot)\mbox{ on }\mc{F}(t),\quad u(t,\cdot)=u_{\mc{S}}(t,\cdot)\mbox{ on }\mc{S}(t)\mbox{ with }\\ &u_{\mc{F}}(t,\cdot)\cdot \nu = u_{\mc{S}}(t,\cdot)\cdot \nu \mbox{ on }\partial\mc{S}(t),\ u_{\mc{F}}\cdot \nu=0 \mbox{ on }\partial\Omega\mbox{ for a.e.\ }t\in [0,T]
\end{aligned}\right\}.
\end{equation*}
\item $\rho \geq 0$, $\rho \in L^{\infty}(0,T; L^{\gamma}(\Omega))$ with $\gamma>3/2$, $\rho|u|^2 \in L^{\infty}(0,T; L^1(\Omega))$, where
\begin{equation*}
\rho= (1-\mathds{1}_{\mc{S}})\rho_{\mc{F}} + \mathds{1}_{\mc{S}}\rho_{\mc{S}},\quad u= (1-\mathds{1}_{\mc{S}})u_{\mc{F}} + \mathds{1}_{\mc{S}}u_{\mc{S}}.
\end{equation*}
\item The continuity equation is satisfied in the weak sense, i.e.\
\begin{equation}\label{NO2}
\frac{\partial {\rho_{\mc{F}}}}{\partial t} + \operatorname{div}({\rho}_{\mc{F}} u_{\mc{F}}) =0 \mbox{ in }\, \mc{D}'([0,T)\times {\Omega}),\quad \rho_{\mc{F}}(0,x)=\rho_{\mc{F}_0}(x),\ x\in \Omega.
\end{equation}
Also, a renormalized continuity equation holds in a weak sense, i.e.\
\begin{equation}\label{NO3}
\partial_t b(\rho_{\mc{F}}) + \operatorname{div}(b(\rho_{\mc{F}})u_{\mc{F}}) + (b'(\rho_{\mc{F}})\rho_{\mc{F}}-b(\rho_{\mc{F}}))\operatorname{div}u_{\mc{F}}=0 \mbox{ in }\, \mc{D}'([0,T)\times {\Omega}) ,
\end{equation}
for any $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfying
\begin{equation}\label{eq:b}
|b'(z)|\leq cz^{-\kappa_0},\, z\in (0,1],\ \kappa_0 <1, \qquad |b'(z)|\leq cz^{\kappa_1},\, z\geq 1,\ -1<\kappa_1 <\infty.
\end{equation}
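Condition \eqref{eq:b} allows, for instance, $b(z)=z^{\theta}$ with $\theta\in(0,1)$: then $|b'(z)|=\theta z^{\theta-1}\leq \theta z^{-\kappa_0}$ on $(0,1]$ with $\kappa_0=1-\theta<1$, and $|b'(z)|\leq \theta z^{\kappa_1}$ for $z\geq 1$ with $\kappa_1=\theta-1\in(-1,0)$. Another standard admissible choice is
\begin{equation*}
b(z)=z\log z,\qquad b(0):=0,
\end{equation*}
for which $b'(z)=\log z + 1$ satisfies \eqref{eq:b} with any $\kappa_0\in(0,1)$ and any $\kappa_1>0$.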
\item The transport of $\mc{S}$ by the rigid vector field $u_{\mc{S}}$ holds (in the weak sense)
\begin{equation}\label{NO4}
\frac{\partial {\chi}_{\mc{S}}}{\partial t} + \operatorname{div}(u_{\mc{S}}\chi_{\mc{S}}) =0 \, \mbox{ in }(0,T)\times {\Omega},\quad \chi_{\mc{S}}(0,x)=\mathds{1}_{\mc{S}_0}(x),\ x\in \Omega.
\end{equation}
\item The density $\rho_{\mc{S}}$ of the rigid body $\mc{S}$ satisfies (in the weak sense)
\begin{equation}\label{NO5}
\frac{\partial {\rho}_{\mc{S}}}{\partial t} + \operatorname{div}(u_{\mc{S}}\rho_{\mc{S}}) =0 \, \mbox{ in }(0,T)\times {\Omega},\quad \rho_{\mc{S}}(0,x)=\rho_{\mc{S}_0}(x),\ x\in \Omega.
\end{equation}
\item Balance of linear momentum holds in a weak sense, i.e.\ for all $\phi \in V_{T}$ the relation \eqref{weak-momentum} holds.
\item The following energy inequality holds for almost every $t\in (0,T)$:
\begin{multline}\label{energy}
\int\limits_{\mc{F}(t)}\frac{1}{2} \rho_{\mc{F}}|u_{\mc{F}}(t,\cdot)|^2 + \int\limits_{\mc{S}(t)} \frac{1}{2} \rho_{\mc{S}}|u_{\mc{S}}(t,\cdot)|^2 + \int\limits_{\mc{F}(t)} \frac{a_{\mc{F}}}{\gamma-1}\rho_{\mc{F}}^{\gamma}+ \int\limits_0^t\int\limits_{\mc{F}(\tau)} \Big(2\mu_{\mc{F}} |\mathbb{D}(u_{\mc{F}})|^2 + \lambda_{\mc{F}} |\operatorname{div} u_{\mc{F}}|^2\Big) \\
+ \alpha \int\limits_0^t\int\limits_{\partial \Omega} |u_{\mc{F}}\times \nu|^2
+ \alpha \int\limits_0^t\int\limits_{\partial \mc{S}(\tau)} |(u_{\mc{F}}-u_{\mc{S}})\times \nu|^2\\ \leq \int\limits_0^t\int\limits_{\mc{F}(\tau)}\rho_{\mc{F}}g_{\mc{F}}\cdot u_{\mc{F}} + \int\limits_0^t\int\limits_{\mc{S}(\tau)} \rho_{\mc{S}}g_{\mc{S}}\cdot u_{\mc{S}} + \int\limits_{\mc{F}_0}\frac{1}{2} \frac{|q_{\mc{F}_0}|^2}{\rho_{\mc{F}_0}}
+ \int\limits_{\mc{S}_0} \frac{1}{2}\rho_{\mc{S}_0}|u_{\mc{S}_0}|^2 + \int\limits_{\mc{F}_0} \frac{a_{\mc{F}}}{\gamma-1}\rho_{\mc{F}_0}^{\gamma}.
\end{multline}
\end{itemize}
\end{defin}
\begin{remark}\label{16:37}
We note that our continuity equation \eqref{NO2} is different from the corresponding one in \cite{F4}. We have to work with $u_{\mc{F}}$ instead of $u$ because of the Navier boundary condition. The reason is that we need the $H^{1}(\Omega)$ regularity of the velocity in order to achieve the validity of the continuity equation in $\Omega$. Observe that $u \in L^{2}(0,T; L^2(\Omega))$ but the extended fluid velocity has better regularity, in particular, $u_{\mc{F}}\in L^2(0,T; H^1(\Omega))$, see \eqref{Eu1}--\eqref{Eu2}.
\end{remark}
\begin{remark}
In the weak formulation \eqref{weak-momentum}, we need to distinguish between the fluid velocity $u_{\mc{F}}$ and the solid velocity $u_{\mc{S}}$. Due to the discontinuities in the tangential components of $u$ and $\phi$, neither $\partial_t \phi$ nor $\mb{D}(u)$, $\mb{D}(\phi)$ belong to $L^2(\Omega)$. This is why it is not possible to write \eqref{weak-momentum} in a global and condensed form (i.e.\ with integrals over $\Omega$).
\end{remark}
\begin{remark}
Let us mention that throughout the paper we assume $C^{2+\kappa}$, $\kappa >0$, regularity of the domains $\Omega$ and $\mc{S}_0$. However, we expect that this regularity assumption can be relaxed to less regular domains, as in the work of Kuku\v cka \cite{kuku}.
\end{remark}
\subsection{Discussion and main result}
The mathematical analysis of systems describing the motion of a rigid body in a viscous \textit{incompressible} fluid is nowadays well developed. The proof of existence of weak solutions until a first collision can be found in several papers, see \cite{CST,DEES1,GLSE,HOST,SER3}. Later, the possibility of collisions in the case of a weak solution was included, see \cite{F3,SST}. Moreover, it was shown that under Dirichlet boundary conditions collisions cannot occur, which is paradoxical with respect to real situations; for details see \cite{HES, HIL, HT}. Neustupa and Penel showed that under a prescribed motion of the rigid body and under Navier type boundary conditions collisions can occur \cite{NP}. After that, G\'erard-Varet and Hillairet showed that, to construct collisions, one needs to assume lower regularity of the domain or different boundary conditions, see e.g.\ \cite{GH,MR3272367,GHC}. In the case of very high viscosity, under the assumption that the rigid bodies do not touch each other or the boundary at the initial time, it was shown that collisions cannot occur in finite time, see \cite{FHN}.
For an introduction we refer to the problem of a fluid coupled with a rigid body in the work by Galdi, see \cite{G2}.
Let us also mention results on strong solutions, see e.g.\ \cite{GGH13,T, Wa}.
A few results are available on the motion of a rigid structure in a \textit{compressible} fluid with Dirichlet boundary conditions. The existence of strong solutions in the $L^2$ framework for small data up to a collision was shown in \cite{BG,roy2019stabilization}. The existence of strong solutions in the $L^p$ setting based on $\mathcal{R}$-bounded operators was applied in the barotropic case \cite{HiMu} and in the full system \cite{HMTT}.
The existence of a weak solution, also up to a collision but without smallness assumptions, was shown in \cite{DEES2}. Generalization of this result allowing collisions was given in \cite{F4}.
The weak-strong uniqueness of a compressible fluid with a rigid body can be found in \cite{KrNePi2}. Existence of weak solutions in the case of Navier boundary conditions is not available; we explore it in this article.
For many years, the \emph{no-slip boundary condition} has been the most widely used, given its success in reproducing the standard velocity profiles for incompressible/compressible viscous fluids. Although the no-slip hypothesis seems to be in good agreement with experiments, it leads to certain rather surprising conclusions. As already mentioned, the most striking one is the absence of collisions of rigid objects immersed in a linearly viscous fluid \cite{HES,HIL}. The so-called \emph{Navier boundary conditions}, which allow for slip, offer more freedom and are likely to provide a physically acceptable solution at least to some of the paradoxical phenomena resulting from the no-slip boundary condition, see, e.g.\ Moffat \cite{MOF}. Mathematically, the behavior of the tangential component $[{u}]_{tan}$ is a delicate issue.
\smallskip
The main result of our paper (Theorem~\ref{exist:upto collision}) asserts local-in-time existence of a weak solution for the system involving the motion of a rigid body in a compressible fluid in the case of Navier boundary conditions at the interface and at the outer boundary. It is the first result in the context of rigid body-compressible fluid interaction with Navier type boundary conditions. Let us mention that the main difficulty arising in our problem is the jump of the velocity across the interface between the rigid body and the compressible fluid. This difficulty cannot be resolved by the approach introduced in the work of Desjardins and Esteban \cite{DEES2} or Feireisl \cite{F4}, since they consider a velocity field that is continuous across the interface. Moreover, the approaches of G\'erard-Varet and Hillairet \cite{MR3272367} and of Chemetov and Ne\v casov\' a \cite{CN} cannot be used directly, as they are set in the incompressible framework.
Our weak solutions must allow for a jump of the velocity field across the interface.
Our idea is to introduce a new approximate scheme which combines the theory of compressible fluids introduced by P. L. Lions \cite{LI4} and then developed by Feireisl \cite{EF70} to get the strong convergence of the density (renormalized continuity equation, effective viscous flux, artificial pressure), together with the ideas of G\'erard-Varet, Hillairet \cite{MR3272367} and Chemetov, Ne\v casov\' a \cite{CN} concerning a penalization of the jump. We remark that such difficulties do not arise in the existence theory for weak solutions of compressible fluids without a rigid body, neither for Dirichlet nor for Navier type boundary conditions.
Let us mention the main issues that arise in the analysis of our system and the methodology that we adopt to deal with them:
\begin{itemize}
\item It is not possible to define a uniform velocity field as in \cite{DEES2, F4}
due to the presence of a discontinuity through the interface of interaction. This is the reason why we introduce the regularized fluid velocity $u_{\mc{F}}$ and the solid velocity $u_{\mc{S}}$ and why we treat them separately.
\item We introduce approximate problems and recover the original problem as a limit of the approximate ones. In fact, we consider several levels of approximations; in each level we ensure that our solution and the test function do not show a jump across the interface so that we can use several available techniques of compressible fluids (without body). In the limit, however, the discontinuity at the interface is recovered. The particular construction of the test functions is a delicate and crucial issue in our proof, which we outline in \cref{approx:test}.
\item Recovering the velocity fields for the solid and fluid parts separately is also a challenging issue. We introduce a penalization in such a way that, in the last stage of the limiting process, this term allows us to recover the rigid velocity of the solid, see \eqref{12:04}--\eqref{re:solidvel}.
The introduction of an appropriate extension operator helps us to recover the fluid velocity, see \eqref{ext:fluid}--\eqref{re:fluidvel}.
\item Since we consider the compressible case, our penalization with parameter $\delta>0$, see \eqref{approx3}, is different from the penalization for the incompressible fluid in \cite{MR3272367}.
\item Due to the Navier-slip boundary condition, no $H^1$ bound on the velocity on the whole domain is possible. We can only obtain the $H^1$ regularity of the extended velocities of the fluid and solid parts separately. We have introduced an artificial viscosity that vanishes asymptotically on the solid part so that we can capture the $H^1$ regularity for the fluid part (see step 1 of the proof of \cref{exist:upto collision} in Section \ref{S4}).
\item We have already mentioned that the main difference with \cite{MR3272367} is that we consider a compressible fluid, whereas they considered an incompressible one. We have encountered several issues that are present due to the compressible nature of the fluid (vanishing viscosity in the continuity equation, recovering the renormalized continuity equation, identification of the pressure). One important point is that passing to the limit as $\delta$ tends to zero in the transport equation for the rigid body is not obvious, because our velocity field does not have the regularity $L^{\infty}(0,T;L^2(\Omega))$ as in the incompressible case, see e.g.\ \cite{MR3272367}, but only $L^2(0,T;L^2(\Omega))$ (here we have $\sqrt{\rho}u \in L^{\infty}(0,T;L^2(\Omega))$ only). To handle this problem, we apply \cref{sequential2} at the $\delta$-level, see \cref{S4}.
\end{itemize}
Next we present the main result of our paper.
\begin{theorem}\label{exist:upto collision}
Let $\Omega$ and $\mc{S}_0 \Subset \Omega$ be two regular bounded domains of $\mathbb{R}^3$. Assume that for some $\sigma > 0$
\begin{equation*}
\operatorname{dist}(\mc{S}_0,\partial\Omega) > 2\sigma.
\end{equation*}
Let $g_{\mc{F}}$, $g_{\mc{S}} \in L^{\infty}((0,T)\times \Omega)$ and the pressure $p_{\mc{F}}$ be determined by \eqref{const-law} with $\gamma >3/2$. Assume that the initial data (defined in the sense of \cref{u-initial}) satisfy
\begin{align} \label{init}
\rho_{\mc{F}_{0}} \in L^{\gamma}(\Omega),\quad \rho_{\mc{F}_{0}} \geq 0 &\mbox{ a.e. in }\Omega,\quad \rho_{\mc{S}_0}\in L^{\infty}(\Omega),\quad \rho_{\mc{S}_0}>0\mbox{ a.e. in }\mc{S}_0,\\
\label{init1}
q_{\mc{F}_{0}} \in L^{\frac{2\gamma}{\gamma+1}}(\Omega), \quad q_{\mc{F}_{0}}\mathds{1}_{\{\rho_{\mc{F}_0}=0\}}=0 &\mbox{ a.e. in }\Omega,\quad \dfrac{|q_{\mc{F}_{0}}|^2}{\rho_{\mc{F}_0}}\mathds{1}_{\{\rho_{\mc{F}_0}>0\}}\in L^1(\Omega),\\
\label{init2}
u_{\mc{S}_0}= \ell_0+ \omega_0 \times x& \quad\forall\ x \in \Omega \mbox{ with }\ell_0,\ \omega_0 \in \mb{R}^3.
\end{align}
Then there exists $T > 0$ (depending only on $\rho_{\mc{F}_0}$, $\rho_{\mc{S}_0}$, $q_{\mc{F}_0}$, $u_{\mc{S}_0}$, $g_{\mc{F}}$, $g_{\mc{S}}$, $\operatorname{dist}(\mc{S}_0,\partial\Omega)$) such that a finite energy weak solution to \eqref{mass:comfluid}--\eqref{initial cond:comp} exists on $[0,T)$. Moreover,
\begin{equation*}
\mc{S}(t) \Subset \Omega,\quad\operatorname{dist}(\mc{S}(t),\partial\Omega) \geq \frac{3\sigma}{2},\quad \forall \ t\in [0,T].
\end{equation*}
\end{theorem}
The outline of the paper is as follows. We introduce three levels of approximation schemes in \cref{F1351}. In \cref{S5}, we describe some results on the transport equation, which are needed in all the levels of approximation. The existence results of approximate solutions have been proved in \cref{S3}. \cref{sec:Galerkin} and \cref{14:14} are dedicated to the construction and convergence analysis of the Faedo-Galerkin scheme associated to the finite dimensional approximation level. We discuss the limiting system associated to the vanishing viscosity in \cref{14:18}. \cref{S4} is devoted to the main part: we derive the limit as the parameter $\delta$ tends to zero.
\section{Approximate Solutions}\label{F1351}
In this section, we present the approximate problems by combining the penalization method, introduced in \cite{MR3272367}, and the approximation scheme developed in \cite{MR1867887}
along with a careful treatment of the boundary terms of the rigid body to solve the original problem \eqref{mass:comfluid}--\eqref{initial cond:comp}. There are three levels of approximations with the parameters $N,\varepsilon,\delta$.
Let us briefly explain these approximations:
\begin{itemize}
\item The parameter $N$ is connected with solving the momentum equation using the Faedo-Galerkin approximation.
\item The parameter $\varepsilon > 0$ is connected with a new diffusion term $\varepsilon \Delta \rho $ in the continuity equation together with a term $\varepsilon \nabla \rho \nabla u$ in the momentum equation.
\item The parameter $\delta > 0$ is connected with the approximation in the viscosities (see \eqref{approx-viscosity}) together with a penalization of the boundary of the rigid body to get smoothness through the interface (see \eqref{approx3}) and together with the artificial pressure containing the approximate coefficient, see \eqref{approx-press}.
\end{itemize}
We first state the existence results for the different levels of the approximation scheme; their proofs are given later on. We start with the $\delta$-level of approximation via an artificial pressure.
We are going to consider the following approximate problem: Let $\delta>0$. Find a triplet $(\mc{S}^{\delta}, \rho^{\delta}, u^{\delta})$ such that
\begin{itemize}
\item $\mc{S}^{\delta}(t) \Subset \Omega$ is a bounded, regular domain for all $t \in [0,T]$ with
\begin{equation}\label{approx1}
\chi^{\delta}_{\mc{S}}(t,x)= \mathds{1}_{\mc{S}^{\delta}(t)}(x) \in L^{\infty}((0,T)\times \Omega) \cap C([0,T];L^p(\Omega)), \, \forall \, 1 \leq p < \infty.
\end{equation}
\item The velocity field $u^{\delta} \in L^2(0,T; H^1(\Omega))$, and the density function $\rho^{\delta} \in L^{\infty}(0,T; L^{\beta}(\Omega))$, $\rho^{\delta}\geq 0$ satisfy
\begin{equation}\label{approx2}
\frac{\partial {\rho}^{\delta}}{\partial t} + \operatorname{div}({\rho}^{\delta} u^{\delta}) =0 \mbox{ in } \mc{D}'([0,T)\times {\Omega}).
\end{equation}
\item
For all $\phi \in H^1(0,T; L^{2}(\Omega)) \cap L^r(0,T; W^{1,{r}}(\Omega))$, where $r=\max\left\{\beta+1, \frac{\beta+\theta}{\theta}\right\}$, $\beta \geq \max\{8,\gamma\}$ and $\theta=\frac{2}{3}\gamma -1$ with $\phi\cdot\nu=0$ on $\partial\Omega$ and $\phi|_{t=T}=0$, the following holds:
\begin{multline}\label{approx3}
- \int\limits_0^T\int\limits_{\Omega} \rho^{\delta} \left(u^{\delta}\cdot \frac{\partial}{\partial t}\phi + u^{\delta} \otimes u^{\delta} : \nabla \phi\right) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\delta}\mathbb{D}(u^{\delta}):\mathbb{D}(\phi) + \lambda^{\delta}\operatorname{div}u^{\delta}\mathbb{I} : \mathbb{D}(\phi)- p^{\delta}(\rho^{\delta})\mathbb{I} : \mathbb{D}(\phi)\Big) \\
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (u^{\delta} \times \nu)\cdot (\phi \times \nu) + \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} [(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\times \nu]\cdot [(\phi-P^{\delta}_{\mc{S}}\phi)\times \nu]
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\delta}_{\mc{S}}(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\cdot (\phi-P^{\delta}_{\mc{S}}\phi)\\ = \int\limits_0^T\int\limits_{\Omega}\rho^{\delta} g^{\delta} \cdot \phi
+ \int\limits_{\Omega} (\rho^{\delta} u^{\delta} \cdot \phi)(0),
\end{multline}
where $P^{\delta}_{\mc{S}}$ is defined in \eqref{approx:projection} below.
\item ${\chi}^{\delta}_{\mc{S}}(t,x)$ satisfies (in the weak sense)
\begin{equation}\label{approx4}
\frac{\partial {\chi}^{\delta}_{\mc{S}}}{\partial t} + P^{\delta}_{\mc{S}}u^{\delta} \cdot \nabla \chi^{\delta}_{\mc{S}} =0\, \mbox{ in }(0,T)\times {\Omega},\quad \chi^{\delta}_{\mc{S}}|_{t=0}= \mathds{1}_{\mc{S}_0}\, \mbox{ in } {\Omega}.
\end{equation}
\item $\rho^{\delta}{\chi}^{\delta}_{\mc{S}}(t,x)$ satisfies (in the weak sense)
\begin{equation}\label{approx5}
\frac{\partial }{\partial t}(\rho^{\delta}{\chi}^{\delta}_{\mc{S}}) + P^{\delta}_{\mc{S}}u^{\delta} \cdot \nabla (\rho^{\delta}{\chi}^{\delta}_{\mc{S}})=0\, \mbox{ in }(0,T)\times {\Omega},\quad (\rho^{\delta}{\chi}^{\delta}_{\mc{S}})|_{t=0}= \rho_0^{\delta}\mathds{1}_{\mc{S}_0}\, \mbox{ in } {\Omega}.
\end{equation}
\item Initial data are given by
\begin{equation}\label{approx:initial}
\rho^{\delta}(0,x)=\rho_0^{\delta}(x),\quad \rho^{\delta} u^{\delta}(0,x) = q_0^{\delta}(x),\quad x\in \Omega .
\end{equation}
\end{itemize}
Above we have used the following quantities:
\begin{itemize}
\item The density of the volume force is defined as
\begin{equation*}
g^{\delta}=(1-\chi^{\delta}_{\mc{S}})g_{\mc{F}} + \chi^{\delta}_{\mc{S}}g_{\mc{S}}.
\end{equation*}
\item The artificial pressure is given by
\begin{equation}\label{approx-press}
p^{\delta}(\rho)= a^{\delta}\rho^{\gamma} + {\delta} \rho^{\beta},\quad\mbox{ with }\quad a^{\delta} = a_{\mc{F}} (1-\chi^{\delta}_{\mc{S}}),
\end{equation}
where $a_{\mc{F}}>0$ and the exponents $\gamma$ and $\beta$ satisfy $\gamma > 3/2, \ \beta \geq \max\{8,\gamma\}$.
\item The viscosity coefficients are given by
\begin{equation}\label{approx-viscosity}
\mu^{\delta} = (1-\chi^{\delta}_{\mc{S}})\mu_{\mc{F}} + {\delta}^2\chi^{\delta}_{\mc{S}},\quad \lambda^{\delta} = (1-\chi^{\delta}_{\mc{S}})\lambda_{\mc{F}} + {\delta}^2\chi^{\delta}_{\mc{S}}\quad\mbox{ so that }\quad\mu^{\delta} >0,\ 2\mu^{\delta}+ 3\lambda^{\delta} \geq 0.
\end{equation}
\item The orthogonal projection to rigid fields, $P^{\delta}_{\mc{S}}:L^2(\Omega)\rightarrow L^2(\mc{S}^{\delta}(t))$, is such that, for all $t\in [0,T]$ and $u\in L^2(\Omega)$, we have $P^{\delta}_{\mc{S}}u \in \mc{R}$ and it is given by
\begin{equation}\label{approx:projection}
P^{\delta}_{\mc{S}}u(t,x)= \frac{1}{m^{\delta}} \int\limits_{\Omega} \rho^{\delta}\chi_{\mc{S}}^{\delta} u + \left((J^{\delta})^{-1} \int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}((y-h^{\delta}(t)) \times u)\ dy \right)\times (x-h^{\delta}(t)), \quad \forall x\in \Omega,
\end{equation}
where $h^{\delta}$, $m^{\delta}$ and $J^{\delta}$ are defined as
\begin{align*}
h^{\delta}(t) = \frac{1}{m^{\delta}} \int\limits_{\mathbb{R}^3} \rho^{\delta}\chi_{\mc{S}}^{\delta} x \ dx,\quad m^{\delta} = \int\limits_{\mathbb{R}^3} \rho^{\delta}\chi_{\mc{S}}^{\delta} \ dx, \\
J^{\delta}(t) = \int\limits_{\mathbb{R}^3} \rho^{\delta}\chi_{\mc{S}}^{\delta} \left[ |x-h^{\delta}(t)|^2\mathbb{I} - (x-h^{\delta}(t)) \otimes (x-h^{\delta}(t))\right] \ dx.
\end{align*}
\end{itemize}
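The terminology ``orthogonal projection'' in \eqref{approx:projection} can be justified by a direct computation. Writing $P^{\delta}_{\mc{S}}u = \ell^u + \omega^u\times(x-h^{\delta}(t))$ with
\begin{equation*}
m^{\delta}\ell^u = \int\limits_{\Omega} \rho^{\delta}\chi^{\delta}_{\mc{S}}u, \qquad J^{\delta}\omega^u = \int\limits_{\Omega} \rho^{\delta}\chi^{\delta}_{\mc{S}}\,(y-h^{\delta}(t))\times u\ dy,
\end{equation*}
one checks that, for every rigid field $v=\ell_v+\omega_v\times(x-h^{\delta}(t))\in\mc{R}$,
\begin{equation*}
\int\limits_{\Omega} \rho^{\delta}\chi^{\delta}_{\mc{S}}\,(u-P^{\delta}_{\mc{S}}u)\cdot v = \ell_v\cdot\Big(\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}u - m^{\delta}\ell^u\Big) + \omega_v\cdot\Big(\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}\,(x-h^{\delta}(t))\times u - J^{\delta}\omega^u\Big) = 0,
\end{equation*}
where we used $\int_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}(x-h^{\delta}(t))=0$ (by the definition of $h^{\delta}$) and the definition of $J^{\delta}$. Hence $P^{\delta}_{\mc{S}}u$ is indeed the orthogonal projection of $u$ onto $\mc{R}$ with respect to the inner product $(u,v)\mapsto \int_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}\,u\cdot v$.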
\begin{remark}
The penalization which we apply in our case is different from that in \cite{F4}. We do not use the high viscosity limit; instead, our scheme contains an $L^2$ penalization (see \eqref{approx3}), which is necessary because of the discontinuity of the velocity field across the fluid-structure interface. Moreover, we consider a penalization of the viscosity coefficients \eqref{approx-viscosity} together with the additional regularity of the pressure, see \eqref{approx-press}. This approach is completely new.
\end{remark}
A weak solution of problem \eqref{mass:comfluid}--\eqref{initial cond:comp} in the sense of \cref{weaksolution-main} will be obtained as a weak limit of the solution $(\mc{S}^{\delta},\rho^{\delta},u^{\delta})$ of system \eqref{approx1}--\eqref{approx:initial} as $\delta \rightarrow 0$. The existence result of the approximate system reads:
\begin{proposition}\label{thm:approxn-delta}
Let $\Omega$ and $\mc{S}_0 \Subset \Omega$ be two regular bounded domains of $\mathbb{R}^3$. Assume that for some $\sigma>0$
\begin{equation*}
\operatorname{dist}(\mc{S}_0,\partial\Omega) > 2\sigma.
\end{equation*}
Let $g^{\delta}=(1-\chi^{\delta}_{\mc{S}})g_{\mc{F}} + \chi^{\delta}_{\mc{S}}g_{\mc{S}} \in L^{\infty}((0,T)\times \Omega)$ and
\begin{equation}\label{cc:dbg}
{\delta} >0,\ \gamma > 3/2, \ \beta \geq \max\{8,\gamma\}.
\end{equation}
Further, let the pressure $p^{\delta}$ be determined by \eqref{approx-press} and the viscosity coefficients $\mu^{\delta}$, $\lambda^{\delta}$ be given by \eqref{approx-viscosity}. Assume that the initial conditions satisfy
\begin{align}
\rho_{0}^{\delta} &\in L^{\beta}(\Omega), \quad \rho_0^{\delta}\geq 0 \mbox{ a.e. in }\Omega,\quad \rho_0^{\delta}\mathds{1}_{\mc{S}_0}\in L^{\infty}(\Omega),\quad \rho_0^{\delta}\mathds{1}_{\mc{S}_0}>0\mbox{ a.e. in }\mc{S}_0,\label{rhonot}\\
&q_0^{\delta} \in L^{\frac{2\beta}{\beta+1}}(\Omega), \quad q_0^{\delta}\mathds{1}_{\{\rho_0^{\delta}=0\}}=0 \mbox{ a.e. in }\Omega,\quad \dfrac{|q_0^{\delta}|^2}{\rho_0^{\delta}}\mathds{1}_{\{\rho_0^{\delta}>0\}}\in L^1(\Omega)\label{qnot}.
\end{align}
Let the initial energy
$$
E^{\delta}[\rho_0^{\delta},q_0^{\delta}] = \int\limits_{\Omega}\Bigg(\frac{1}{2} \frac{|q_0^{\delta}|^2}{\rho_0^{\delta}}\mathds{1}_{\{\rho_0^{\delta}>0\}} + \frac{a^{\delta}(0)}{\gamma-1}(\rho_0^{\delta})^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^{\delta})^{\beta} \Bigg):=E^{\delta}_0$$
be uniformly bounded with respect to $\delta$. Then there exists $T > 0$ (depending only on $E^{\delta}_0$, $g_{\mc{F}}$, $g_{\mc{S}}$, $\operatorname{dist}(\mc{S}_0,\partial\Omega)$) such that system \eqref{approx1}--\eqref{approx:initial} admits a finite energy weak solution $(\mc{S}^{\delta},\rho^{\delta},u^{\delta})$, which satisfies the following energy inequality:
\begin{multline}\label{10:45}
E^{\delta}[\rho ^{\delta}, q^{\delta}] + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\delta}|\mathbb{D}(u^{\delta})|^2 + \lambda^{\delta}|\operatorname{div}u^{\delta}|^2\Big) + \alpha \int\limits_0^T\int\limits_{\partial \Omega} |u^{\delta} \times \nu|^2
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} |(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\times \nu|^2 \\+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\delta}_{\mc{S}}|u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}|^2 \leq \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}
g^{\delta} \cdot u^{\delta}
+ E_0^{\delta}.
\end{multline}
Moreover,
\begin{equation*}
\operatorname{dist}(\mc{S}^{\delta}(t),\partial\Omega) \geq {2\sigma},\quad \forall \ t\in [0,T],
\end{equation*}
and the solution satisfies the following properties:
\begin{enumerate}
\item For $\theta=\frac{2}{3}\gamma-1$, $s=\gamma+\theta$,
\begin{equation}\label{rho:improved}
\|(a^{\delta})^{1/s}\rho^{\delta}\|_{L^{s}((0,T)\times\Omega)} + \delta^{\frac{1}{\beta+\theta}}\|\rho^{\delta}\|_{L^{\beta+\theta}((0,T)\times\Omega)} \leq c.
\end{equation}
\item The couple $(\rho^{\delta},u^{\delta})$ satisfies the identity
\begin{equation}\label{rho:renorm1}
\partial_t b(\rho^{\delta}) + \operatorname{div}(b(\rho^{\delta})u^{\delta})+[b'(\rho^{\delta})\rho^{\delta} - b(\rho^{\delta})]\operatorname{div}u^{\delta}=0,
\end{equation}
a.e.\ in $(0,T)\times\Omega$ for any $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfying \eqref{eq:b}.
\end{enumerate}
\end{proposition}
In order to prove \cref{thm:approxn-delta}, we consider a problem with another level of approximation: the $\varepsilon$-level approximation is obtained via the continuity equation with dissipation accompanied by the artificial pressure in the momentum equation.
We want to find a triplet $(\mc{S}^{\varepsilon}, \rho^{\varepsilon}, u^{\varepsilon})$ such that we can obtain a weak solution $(\mc{S}^{\delta},\rho^{\delta},u^{\delta})$ of the system \eqref{approx1}--\eqref{approx:initial} as a weak limit of the sequence $(\mc{S}^{\varepsilon},\rho^{\varepsilon},u^{\varepsilon})$ as $\varepsilon \rightarrow 0$. For $\varepsilon >0$, the triplet is supposed to satisfy:
\begin{itemize}
\item $\mc{S}^{\varepsilon}(t) \Subset \Omega$ is a bounded, regular domain for all $t \in [0,T]$ with
\begin{equation}\label{varepsilon:approx1}
\chi^{\varepsilon}_{\mc{S}}(t,x)= \mathds{1}_{\mc{S}^{\varepsilon}(t)}(x) \in L^{\infty}((0,T)\times \Omega) \cap C([0,T];L^p(\Omega)), \, \forall \, 1 \leq p < \infty.
\end{equation}
\item The velocity field $u^{\varepsilon} \in L^2(0,T; H^1(\Omega))$ and the density function $\rho^{\varepsilon} \in L^{\infty}(0,T; L^{\beta}(\Omega)) \cap L^2(0,T;H^1(\Omega))$, $\rho^{\varepsilon}\geq 0$ satisfy
\begin{equation}\label{varepsilon:approx2}
\frac{\partial {\rho}^{\varepsilon}}{\partial t} + \operatorname{div}({\rho}^{\varepsilon} u^{\varepsilon}) =\varepsilon \Delta\rho^{\varepsilon} \mbox{ in }\, (0,T)\times \Omega, \quad \frac{\partial \rho^{\varepsilon}}{\partial \nu}=0 \mbox{ on }\, \partial\Omega.
\end{equation}
\item For all $\phi \in H^1(0,T; L^{2}(\Omega)) \cap L^{\beta+1}(0,T; W^{1,{\beta+1}}(\Omega))$ with $\phi\cdot \nu=0$ on $\partial\Omega$, $\phi|_{t=T}=0$, where $\beta \geq \max\{8,\gamma\}$, the following holds:
\begin{multline}\label{varepsilon:approx3}
- \int\limits_0^T\int\limits_{\Omega} \rho^{\varepsilon} \left(u^{\varepsilon}\cdot \frac{\partial}{\partial t}\phi + u^{\varepsilon} \otimes u^{\varepsilon} : \nabla \phi\right) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{{\varepsilon}}\mathbb{D}(u^{\varepsilon}):\mathbb{D}(\phi) + \lambda^{{\varepsilon}}\operatorname{div}u^{\varepsilon}\mathbb{I} : \mathbb{D}(\phi) - p^{\varepsilon}(\rho^{\varepsilon})\mathbb{I} : \mathbb{D}(\phi)\Big) \\
+\int\limits_0^T\int\limits_{\Omega} \varepsilon \nabla u^{\varepsilon} \nabla \rho^{\varepsilon} \cdot \phi
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (u^{\varepsilon} \times \nu)\cdot (\phi \times \nu) + \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\varepsilon}(t)} [(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\times \nu]\cdot [(\phi-P^{\varepsilon}_{\mc{S}}\phi)\times \nu] \\
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\cdot (\phi-P^{\varepsilon}_{\mc{S}}\phi) = \int\limits_0^T\int\limits_{\Omega}\rho^{\varepsilon} g^{\varepsilon} \cdot \phi
+ \int\limits_{\Omega} (\rho^{\varepsilon} u^{\varepsilon} \cdot \phi)(0).
\end{multline}
\item ${\chi}^{\varepsilon}_{\mc{S}}(t,x)$ satisfies (in the weak sense)
\begin{equation}\label{varepsilon:approx4}
\frac{\partial {\chi}^{\varepsilon}_{\mc{S}}}{\partial t} + P^{\varepsilon}_{\mc{S}}u^{\varepsilon} \cdot \nabla \chi^{\varepsilon}_{\mc{S}} =0\mbox{ in }(0,T)\times {\Omega},\quad \chi^{\varepsilon}_{\mc{S}}|_{t=0}= \mathds{1}_{\mc{S}_0}\mbox{ in } {\Omega}.
\end{equation}
\item $\rho^{\varepsilon}{\chi}^{\varepsilon}_{\mc{S}}(t,x)$ satisfies (in the weak sense)
\begin{equation}\label{varepsilon:approx5}
\frac{\partial }{\partial t}(\rho^{\varepsilon}{\chi}^{\varepsilon}_{\mc{S}}) + P^{\varepsilon}_{\mc{S}}u^{\varepsilon} \cdot \nabla (\rho^{\varepsilon}{\chi}^{\varepsilon}_{\mc{S}})=0\mbox{ in }(0,T)\times {\Omega},\quad (\rho^{\varepsilon}{\chi}^{\varepsilon}_{\mc{S}})|_{t=0}= \rho^{\varepsilon}_0\mathds{1}_{\mc{S}_0}\mbox{ in }{\Omega}.
\end{equation}
\item The initial data are given by
\begin{equation}\label{varepsilon:initial}
\rho^{\varepsilon}(0,x)=\rho_0^{\varepsilon}(x), \quad \rho^{\varepsilon} u^{\varepsilon}(0,x) = q_0^{\varepsilon}(x)\quad\mbox{in }\Omega,\quad \frac {\partial \rho_0^{\varepsilon }}{\partial \nu}\big |_{\partial \Omega} =0.
\end{equation}
\end{itemize}
Above we have used the following quantities:
\begin{itemize}
\item The density of the volume force is defined as
\begin{equation}\label{gepsilon}
g^{\varepsilon}=(1-\chi^{\varepsilon}_{\mc{S}})g_{\mc{F}} + \chi^{\varepsilon}_{\mc{S}}g_{\mc{S}}.
\end{equation}
\item The artificial pressure is given by
\begin{equation}\label{p1}
p^{\varepsilon}(\rho)= a^{\varepsilon}\rho^{\gamma} + {\delta} \rho^{\beta},\quad\mbox{ with }\quad a^{\varepsilon} = a_{\mc{F}} (1-\chi^{\varepsilon}_{\mc{S}}),
\end{equation}
where $a_{\mc{F}},{\delta} >0$, and the exponents $\gamma$ and $\beta$ satisfy $\gamma > 3/2, \ \beta \geq \max\{8,\gamma\}$.
\item The viscosity coefficients are given by
\begin{equation}\label{vis1}
\mu^{\varepsilon} = (1-\chi^{\varepsilon}_{\mc{S}})\mu_{\mc{F}} + {\delta}^2\chi^{\varepsilon}_{\mc{S}},\quad \lambda^{\varepsilon} = (1-\chi^{\varepsilon}_{\mc{S}})\lambda_{\mc{F}} + {\delta}^2\chi^{\varepsilon}_{\mc{S}}\quad\mbox{ so that }\quad\mu^{\varepsilon} >0,\ 2\mu^{\varepsilon}+3\lambda^{\varepsilon} \geq 0.
\end{equation}
\item $P^{\varepsilon}_{\mc{S}}:L^2(\Omega)\rightarrow L^2(\mc{S}^{\varepsilon}(t))$ is the orthogonal projection to rigid fields; it is defined as in \eqref{approx:projection} with $\chi^{\delta}_{\mc{S}}$ replaced by $\chi^{\varepsilon}_{\mc{S}}$.
\end{itemize}
\begin{remark}
Above, the triplet $\left(\mc{S}^{\varepsilon}, \rho^{\varepsilon}, u^{\varepsilon}\right)$ should actually be denoted by $\left(\mc{S}^{\delta,\varepsilon}, \rho^{\delta,\varepsilon}, u^{\delta,\varepsilon}\right)$. The dependence on $\delta$ is due to the penalization term $\Big(\tfrac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\cdot (\phi-P^{\varepsilon}_{\mc{S}}\phi)\Big)$ in \eqref{varepsilon:approx3} and to the viscosity coefficients $\mu^{\varepsilon}$, $\lambda^{\varepsilon}$ in \eqref{vis1}. To simplify the notation, we omit $\delta$ here.
\end{remark}
In \cref{14:14} we will prove the following existence result for the approximate system \eqref{varepsilon:approx1}--\eqref{varepsilon:initial}:
\begin{proposition}\label{thm:approxn}
Let $\Omega$ and $\mc{S}_0 \Subset \Omega$ be two regular bounded domains of $\mathbb{R}^3$. Assume that for some $\sigma >0$,
\begin{equation*}
\operatorname{dist}(\mc{S}_0,\partial\Omega) > 2\sigma.
\end{equation*}
Let $g^{\varepsilon}=(1-\chi^{\varepsilon}_{\mc{S}})g_{\mc{F}} + \chi^{\varepsilon}_{\mc{S}}g_{\mc{S}} \in L^{\infty}((0,T)\times \Omega)$ and $\beta$, $\delta$, $\gamma$ be given as in \eqref{cc:dbg}. Further, let the pressure $p^{\varepsilon}$ be determined by \eqref{p1} and the viscosity coefficients $\mu^{\varepsilon}$, $\lambda^{\varepsilon}$ be given by \eqref{vis1}. The initial conditions satisfy, for some $\underline{\rho}$, $\overline{\rho}>0$,
\begin{equation}\label{inteps}
0<\underline{\rho}\leq \rho_0^{\varepsilon} \leq \overline{\rho}\ \mbox{ in }\ \Omega, \quad \rho_0^{\varepsilon} \in W^{1,\infty}(\Omega),\quad q_0^{\varepsilon}\in L^2(\Omega).
\end{equation}
Let the initial energy $$E^{\varepsilon}[\rho _0 ^{\varepsilon},q_0^{\varepsilon}] =\int\limits_{\Omega}\Bigg(\frac{1}{2} \frac{|q_0^{\varepsilon}|^2}{\rho^{\varepsilon}_0}\mathds{1}_{\{\rho^{\varepsilon}_0>0\}} + \frac{a^{\varepsilon}(0)}{\gamma-1}(\rho_0^{\varepsilon})^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^{\varepsilon})^{\beta} \Bigg):= E_0^{\varepsilon}$$ be uniformly bounded with respect to $\delta$ and $ \varepsilon$.
Then there exists $T > 0$ (depending only on $E^{\varepsilon}_0$, $g_{\mc{F}}$, $g_{\mc{S}}$, $\operatorname{dist}(\mc{S}_0,\partial\Omega)$) such that system \eqref{varepsilon:approx1}--\eqref{varepsilon:initial} admits a weak solution $(\mc{S}^{\varepsilon},\rho^{\varepsilon},u^{\varepsilon})$, which satisfies the following energy inequality:
\begin{multline}\label{energy-varepsilon}
E^{\varepsilon}[\rho ^{\varepsilon},q^{\varepsilon}]+ \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\varepsilon}|\mathbb{D}(u^{\varepsilon})|^2 + \lambda^{\varepsilon}|\operatorname{div}u^{\varepsilon}|^2\Big) + \delta\varepsilon \beta\int\limits_0^T\int\limits_{\Omega} (\rho^{\varepsilon})^{\beta-2}|\nabla \rho^{\varepsilon}|^2
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} |u^{\varepsilon} \times \nu|^2 \\
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\varepsilon}(t)} |(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\times \nu|^2
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}|u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon}|^2 \leq \int\limits_0^T\int\limits_{\Omega}\rho^{\varepsilon}
g^{\varepsilon} \cdot u^{\varepsilon}
+ E_0^{\varepsilon}.
\end{multline}
Moreover,
\begin{equation*}
\operatorname{dist}(\mc{S}^{\varepsilon}(t),\partial\Omega) \geq {2\sigma},\quad \forall \ t\in [0,T],
\end{equation*}
and the solution satisfies
\begin{equation*}
\partial_t\rho^{\varepsilon},\ \Delta \rho^{\varepsilon}\in {L^{\frac{5\beta-3}{4\beta}}((0,T)\times\Omega)},
\end{equation*}
\begin{equation}\label{est:indofepsilon}
\sqrt{\varepsilon} \|\nabla \rho^{\varepsilon}\|_{L^2((0,T)\times\Omega)} + \|\rho^{\varepsilon}\|_{L^{\beta+1}((0,T)\times\Omega)} + \|(a^{\varepsilon})^{\frac{1}{\gamma+1}}\rho^{\varepsilon}\|_{L^{\gamma+1}((0,T)\times\Omega)} \leq c,
\end{equation}
where $c$ is a positive constant depending on $\delta$ but which is independent of $\varepsilon$.
\end{proposition}
To solve the problem \eqref{varepsilon:approx1}--\eqref{varepsilon:initial}, we need yet another level of approximation. The $N$-level approximation is obtained via a Faedo-Galerkin approximation scheme.
Suppose that $\{e_k\}_{k\geq 1} \subset \mc{D}(\overline{\Omega})$ with $e_k\cdot\nu=0$ on $\partial\Omega$ is a basis of $ L^2(\Omega)$. We set
\begin{equation*}
X_N = \mbox{ span}(e_1,\ldots,e_N).
\end{equation*}
$X_N$ is a finite dimensional space with scalar product induced by the scalar product in $L^2(\Omega)$. As $X_N$ is finite dimensional, norms on $X_N$ induced by $W^{k,p}$ norms, $k\in \mathbb{N},\ 1\leq p\leq \infty$ are equivalent. We also assume that
\begin{equation*}
\bigcup_{N}X_N \mbox{ is dense in }\left\{v\in W^{1,p}(\Omega) \mid v\cdot \nu=0\mbox{ on }\partial\Omega\right\},\mbox{ for any }1\leq p < \infty.
\end{equation*}
Such a choice of $X_N$ has been constructed in \cite[Theorem 11.19, page 460]{MR3729430}.
The task is to find a triplet $(\mc{S}^N, \rho^N, u^N)$ satisfying:
\begin{itemize}
\item $\mc{S}^N(t) \Subset \Omega$ is a bounded, regular domain for all $t \in [0,T]$ with
\begin{equation}\label{galerkin-approx1}
\chi^N_{\mc{S}}(t,x)= \mathds{1}_{\mc{S}^N(t)}(x) \in L^{\infty}((0,T)\times \Omega) \cap C([0,T];L^p(\Omega)), \, \forall \, 1 \leq p < \infty.
\end{equation}
\item The velocity field $u^N (t,\cdot)=\sum\limits_{k=1}^N \alpha_k(t)e_k$ with $(\alpha_1,\alpha_2,\ldots,\alpha_N)\in C([0,T])^N$ and the density function $\rho^{N} \in L^2(0,T;H^{2}(\Omega)) \cap H^{1}(0,T;L^{2}(\Omega))$, $\rho^{N} > 0$, satisfy
\begin{equation}\label{galerkin-approx2}
\frac{\partial {\rho}^N}{\partial t} + \operatorname{div}({\rho}^N u^N) =\varepsilon \Delta\rho^N \mbox{ in }\, (0,T)\times \Omega, \quad \frac{\partial \rho^N}{\partial \nu}=0 \mbox{ on }\, \partial\Omega.
\end{equation}
\item For all $\phi \in \mc{D}([0,T); X_N)$ with $\phi\cdot \nu=0$ on $\partial\Omega$, the following holds:
\begin{multline}\label{galerkin-approx3}
- \int\limits_0^T\int\limits_{\Omega} \rho^N \left(u^N\cdot \frac{\partial}{\partial t}\phi + u^N \otimes u^N : \nabla \phi\right) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^N\mathbb{D}(u^N):\mathbb{D}(\phi) + \lambda^N\operatorname{div}u^N\mathbb{I} : \mathbb{D}(\phi) - p^{N}(\rho^N)\mathbb{I}:\mathbb{D}(\phi)\Big) \\
+\int\limits_0^T\int\limits_{\Omega} \varepsilon \nabla u^N \nabla \rho^N \cdot \phi
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (u^N \times \nu)\cdot (\phi \times \nu) + \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^N(t)} [(u^N-P^N_{\mc{S}}u^N)\times \nu]\cdot [(\phi-P^N_{\mc{S}}\phi)\times \nu] \\
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^N_{\mc{S}}(u^N-P^N_{\mc{S}}u^N)\cdot (\phi-P^N_{\mc{S}}\phi) = \int\limits_0^T\int\limits_{\Omega} \rho^N g^N \cdot \phi
+ \int\limits_{\Omega} (\rho^N u^N \cdot \phi)(0).
\end{multline}
\item ${\chi}^N_{\mc{S}}(t,x)$ satisfies (in the weak sense)
\begin{equation}\label{galerkin-approx4}
\frac{\partial {\chi}^N_{\mc{S}}}{\partial t} + P^N_{\mc{S}}u^N \cdot \nabla \chi^N_{\mc{S}} =0\mbox{ in }(0,T)\times {\Omega},\quad \chi^N_{\mc{S}}|_{t=0}= \mathds{1}_{\mc{S}_0}\mbox{ in } {\Omega}.
\end{equation}
\item $\rho^{N}{\chi}^{N}_{\mc{S}}(t,x)$ satisfies (in the weak sense)
\begin{equation}\label{N:approx5}
\frac{\partial }{\partial t}(\rho^{N}{\chi}^{N}_{\mc{S}}) + P^{N}_{\mc{S}}u^{N} \cdot \nabla (\rho^{N}{\chi}^{N}_{\mc{S}})=0\mbox{ in }(0,T)\times {\Omega},\quad (\rho^{N}{\chi}^{N}_{\mc{S}})|_{t=0}= \rho_{0}^N\mathds{1}_{\mc{S}_0}\mbox{ in } {\Omega}.
\end{equation}
\item The initial data are given by
\begin{equation}\label{galerkin-initial}
\rho^N(0)=\rho_0^N, \quad u^N(0) = u_0^N \quad\mbox{ in }\Omega,\quad \frac {\partial \rho_0^{N }}{\partial \nu}\Big |_{\partial \Omega} =0.
\end{equation}
\end{itemize}
Above we have used the following quantities:
\begin{itemize}
\item The density of the volume force is defined as \begin{equation}\label{gN}
g^N=(1-\chi^N_{\mc{S}})g_{\mc{F}} + \chi^N_{\mc{S}}g_{\mc{S}}.
\end{equation}
\item The artificial pressure is given by
\begin{equation}\label{p2}
p^{N}(\rho)= a^{N}\rho^{\gamma} + {\delta} \rho^{\beta},\quad\mbox{ with }\quad a^{N} = a_{\mc{F}} (1-\chi^{N}_{\mc{S}}),
\end{equation}
where $a_{\mc{F}},{\delta} >0$ and the exponents $\gamma$ and $\beta$ satisfy $\gamma > 3/2, \ \beta \geq \max\{8,\gamma\}$.
\item The viscosity coefficients are given by
\begin{equation}\label{vis2}
\mu^N = (1-\chi^N_{\mc{S}})\mu_{\mc{F}} + {\delta}^2\chi^N_{\mc{S}},\quad \lambda^N = (1-\chi^N_{\mc{S}})\lambda_{\mc{F}} + {\delta}^2\chi^N_{\mc{S}}\quad\mbox{ so that }\quad\mu^N >0,\ 2\mu^N+3\lambda^N \geq 0.
\end{equation}
\item $P^{N}_{\mc{S}}:L^2(\Omega)\rightarrow L^2(\mc{S}^{N}(t))$ is the orthogonal projection to rigid fields; it is defined as in \eqref{approx:projection} with $\chi^{\delta}_{\mc{S}}$ replaced by $\chi^{N}_{\mc{S}}$.
\end{itemize}
\begin{remark}
Actually, the triplet $\left(\mc{S}^N, \rho^N, u^N\right)$ above should be denoted by $\left(\mc{S}^{\delta,\varepsilon,N}, \rho^{\delta,\varepsilon,N}, u^{\delta,\varepsilon,N}\right)$. The dependence on $\delta$ and $\varepsilon$ is due to the penalization term $\left(\tfrac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^N_{\mc{S}}(u^N-P^N_{\mc{S}}u^N)\cdot (\phi-P^N_{\mc{S}}\phi)\right)$, to the viscosity coefficients $\mu^{N}$, $\lambda^{N}$, and to the artificial dissipative term $\varepsilon\Delta\rho^{N}$. To simplify the notation, we omit $\delta$ and $\varepsilon$ here.
\end{remark}
A weak solution $(\mc{S}^{\varepsilon},\rho^{\varepsilon},u^{\varepsilon})$ to the system \eqref{varepsilon:approx1}--\eqref{varepsilon:initial} is obtained by passing to the limit in $(\mc{S}^N, \rho^N, u^N)$ as $N\rightarrow \infty$.
The existence result for the approximate solution of the Faedo-Galerkin scheme reads:
\begin{proposition}\label{fa}
Let $\Omega$ and $\mc{S}_0 \Subset \Omega$ be two regular bounded domains of $\mathbb{R}^3$. Assume that for some $\sigma>0$,
\begin{equation*}
\operatorname{dist}(\mc{S}_0,\partial\Omega) > 2\sigma.
\end{equation*}
Let $g^N=(1-\chi^{N}_{\mc{S}})g_{\mc{F}} + \chi^{N}_{\mc{S}}g_{\mc{S}} \in L^{\infty}((0,T)\times \Omega)$ and $\beta$, $\delta$, $\gamma$ be given by \eqref{cc:dbg}. Further, let the pressure $p^{N}$ be determined by \eqref{p2} and the viscosity coefficients $\mu^N$, $\lambda^N$ be given by \eqref{vis2}.
The initial conditions are assumed to satisfy
\begin{equation}\label{initialcond}
0<\underline{\rho}\leq \rho_0^N \leq \overline{\rho}\ \mbox{ in }\ \Omega, \quad \rho_0^N \in W^{1,\infty}(\Omega),\quad u_0^N\in X_N.
\end{equation}
Let the initial energy $$
E^N[\rho_0^N,q_0^N] =\int\limits_{\Omega} \left( \frac{1}{2}\frac{|q_0^N|^2}{\rho_0^N}\mathds{1}_{\{\rho_0^N>0\}} + \frac{a^N(0)}{\gamma-1}(\rho_0^N)^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^N)^{\beta} \right):= E_0^N$$
be uniformly bounded with respect to $N,\varepsilon,\delta$.
Then there exists $T>0$ (depending only on $E^{N}_0$, $g_{\mc{F}}$, $g_{\mc{S}}$, $\overline{\rho}$, $\underline{\rho}$, $\operatorname{dist}(\mc{S}_0,\partial\Omega)$) such that the problem \eqref{galerkin-approx1}--\eqref{galerkin-initial} admits a solution $(\mc{S}^N,\rho^N,u^N)$ and it satisfies the energy inequality:
\begin{multline*}
E^N[\rho ^N, q^N] + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^N|\mathbb{D}(u^N)|^2 + \lambda^N |\operatorname{div}u^N|^2\Big) + \delta\varepsilon \beta\int\limits_0^T\int\limits_{\Omega} (\rho^N)^{\beta-2}|\nabla \rho^N|^2
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} |u^N \times \nu|^2 \\
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^N(t)} |(u^N-P^N_{\mc{S}}u^N)\times \nu|^2
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^N_{\mc{S}}|u^N-P^N_{\mc{S}}u^N|^2 \leq \int\limits_0^T\int\limits_{\Omega}\rho^N g^N \cdot u^N
+ E^N_0.
\end{multline*}
Moreover,
\begin{equation*}
\operatorname{dist}(\mc{S}^{N}(t),\partial\Omega) \geq {2\sigma},\quad \forall \ t\in [0,T].
\end{equation*}
\end{proposition}
We prove the above proposition in \cref{sec:Galerkin}.
\section{Isometric propagators and the motion of the body} \label{S5}
In this section, we state and prove some results regarding the transport equation that we use in our analysis. We mainly concentrate on the following equation:
\begin{equation}\label{transport1}
\frac{\partial {\chi}_{\mc{S}}}{\partial t} + \operatorname{div}(P_{\mc{S}}u\chi_{\mc{S}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad {\chi}_{\mc{S}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
where $P_{\mc{S}}u \in \mc{R}$. Note that here $\mc{R}$ refers to the set of rigid fields on $\mathbb{R}^3$ in the spirit of \eqref{defR}. The projection $P_{\mc{S}}u$ is given by
\begin{equation}\label{projection:P}
P_{\mc{S}}u (t,x)= \frac{1}{m} \int\limits_{\Omega} \rho\chi_{\mc{S}} u + \left(J^{-1} \int\limits_{\Omega}\rho\chi_{\mc{S}}((y-h(t)) \times u)\ dy \right)\times (x-h(t)),\quad \forall \ (t,x)\in (0,T)\times\mathbb{R}^3.
\end{equation}
In \cite[Proposition 3.1]{MR3272367}, the existence of a solution to \eqref{transport1} and the characterization of the transport of the rigid body were established for a constant density $\rho$ in the expression \eqref{projection:P} of $P_{\mc{S}}u$. Here we deal with the case where $\rho$ evolves in time. We start with some existence results when the velocity field and the density satisfy certain regularity assumptions.
\begin{proposition}\label{reg:chiS}
Let $u \in C([0,T];\mc{D}(\overline{\Omega}))$ and $\rho \in L^2(0,T;H^2(\Omega)) \cap C([0,T];H^1(\Omega))$. Then the following holds true:
\begin{enumerate}
\item There is a unique solution $\chi_{\mc{S}} \in L^{\infty}((0,T)\times \mathbb{R}^3) \cap C([0,T];L^p(\mathbb{R}^3))$ $\forall \ 1\leq p<\infty$ to \eqref{transport1}. More precisely, $$\chi_{\mc{S}}(t,x)= \mathds{1}_{\mc{S}(t)}(x),\quad\forall \ t\in[0,T],\ \forall \ x\in \mathbb{R}^3.$$ Moreover, $\mc{S}(t)=\eta_{t,0}(\mc{S}_0)$ for the isometric propagator $\eta_{t,s}$ associated to $P_{\mc{S}}u$:
\begin{equation*}
(t,s)\mapsto \eta_{t,s} \in C^1([0,T]^2; C^{\infty}_{loc}(\mb{R}^3)),
\end{equation*}
where
$\eta_{t,s}$ is defined by
\begin{equation}\label{ODE-propagator}
\frac{\partial \eta_{t,s}}{\partial t}(y)=P_{\mc{S}}u (t,\eta_{t,s}(y)),\quad \forall\ (t,s,y)\in (0,T)^2 \times \mb{R}^3, \quad \eta_{s,s}(y)=y,\quad \forall\ y\in \mb{R}^3.
\end{equation}
\item
Let $\rho_{0}\mathds{1}_{\mc{S}_0} \in L^{\infty}(\mathbb{R}^3)$. Then there is a unique solution $\rho\chi_{\mc{S}} \in L^{\infty}((0,T)\times \mathbb{R}^3) \cap C([0,T];L^p(\mathbb{R}^3))$, $\forall \ 1\leq p<\infty$ to the following equation:
\begin{equation}\label{re:transport1}
\frac{\partial (\rho {\chi}_{\mc{S}})}{\partial t} + \operatorname{div}((\rho\chi_{\mc{S}})P_{\mc{S}}u) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \rho{\chi}_{\mc{S}}|_{t=0}=\rho_{0}\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation}
\end{enumerate}
\end{proposition}
\begin{proof}
Following \cite[Proposition 3.1]{MR3272367}, we observe that proving the existence of a solution to \eqref{transport1} is equivalent to establishing the well-posedness of the ordinary differential equation
\begin{equation}\label{ODE-cauchy}
\frac{d}{dt}\eta_{t,0}=U_{\mc{S}}(t,\eta_{t,0}),\quad \eta_{0,0}=\mathbb{I},
\end{equation}
where $U_{\mc{S}}\in \mc{R}$ is given by
\begin{equation*}
U_{\mc{S}}(t,\eta_{t,0}(x))= \frac{1}{m} \int\limits_{\mc{S}_0} \rho(t,\eta_{t,0}(y))\mathds{1}_{\Omega}(\eta_{t,0}(y))\, u(t,\eta_{t,0}(y))\ dy + \left(J^{-1} \int\limits_{\mc{S}_0}\rho(t,\eta_{t,0}(y))\mathds{1}_{\Omega}(\eta_{t,0}(y))((\eta_{t,0}(y)-h(t)) \times u(t,\eta_{t,0}(y)))\ dy \right)\times (\eta_{t,0}(x)-h(t)).
\end{equation*}
According to the Cauchy-Lipschitz theorem, equation \eqref{ODE-cauchy} admits a unique $C^1$ solution if $U_{\mc{S}}$ is continuous in $(t,\eta)$ and uniformly Lipschitz in $\eta$. Thus, it is enough to establish the following result, analogous to \cite[Lemma 3.2]{MR3272367}: Let $u \in C([0,T];\mc{D}(\overline{\Omega}))$ and $\rho \in L^2(0,T;H^2(\Omega)) \cap C([0,T];H^1(\Omega))$. Then the function
\begin{equation*}
\mc{M}:[0,T]\times \operatorname{Isom}(\mathbb{R}^3)\rightarrow \mathbb{R}^3,\quad \mc{M}(t,\eta)=\int\limits_{\mc{S}_0} \rho(t,\eta(y))\mathds{1}_{\Omega}(\eta(y))\, u(t,\eta(y))\ dy
\end{equation*}
is continuous in $(t,\eta)$ and uniformly Lipschitz in $\eta$ over $[0,T]$.
Observe that the continuity in the $t$-variable is obvious. Moreover, for two isometries $\eta_1$ and $\eta_2$, we have
\begin{align*}
\mc{M}(t,\eta_1)-&\mc{M}(t,\eta_2)\\=& \int\limits_{\mc{S}_0} \rho(t,\eta_1(y))\mathds{1}_{\Omega}(\eta_1(y)) (u(t,\eta_1(y))-u(t,\eta_2(y))) + \int\limits_{\mc{S}_0} \rho(t,\eta_1(y))(\mathds{1}_{\Omega}(\eta_1(y))-\mathds{1}_{\Omega}(\eta_2(y))) u(t,\eta_2(y)) \\&+ \int\limits_{\mc{S}_0} (\rho(t,\eta_1(y))-\rho(t,\eta_2(y)))\mathds{1}_{\Omega}(\eta_2(y)) u(t,\eta_2(y)):=M_1 +M_2 + M_3.
\end{align*}
As $\rho \in L^2(0,T;H^2(\Omega)) \cap C([0,T];H^1(\Omega))$, the estimates of the terms $M_1$ and $M_2$ are similar to \cite[Lemma 3.2]{MR3272367}. The term $M_3$ can be estimated in the following way:
\begin{equation*}
|M_3|\leq C\|\rho\|_{L^{\infty}(0,T;H^1(\Omega))}\|u\|_{L^{\infty}(0,T;L^2(\Omega))}\|\eta_1 -\eta_2\|_{\infty}.
\end{equation*}
This finishes the proof of the first part of \cref{reg:chiS}. The proof of the second part is similar, and we omit it here.
\end{proof}
Next we prove analogues of \cite[Propositions 3.3 and 3.4]{MR3272367} on strong and weak sequential continuity, which are essential to establish the existence result for the Galerkin approximation scheme in \cref{sec:Galerkin}. The result obtained in the next proposition is used to establish the continuity of the fixed-point map in the proof of existence for the Galerkin approximation.
\begin{proposition}\label{sequential1}
Let $\rho^N_0 \in W^{1,\infty}(\Omega)$ and let $\rho^k \in L^2(0,T;H^{2}(\Omega)) \cap C([0,T]; H^{1}(\Omega)) \cap H^{1}(0,T;L^{2}(\Omega))$ be the solution to
\begin{equation}\label{eq:rhoN}
\frac{\partial {\rho^k}}{\partial t} + \operatorname{div}({\rho}^k u^k) = \varepsilon\Delta\rho^k \mbox{ in }\, (0,T)\times \Omega, \quad \frac{\partial \rho^k}{\partial \nu}=0 \mbox{ on }\, \partial\Omega, \quad\rho^k(0,x)=\rho_0^N(x)\quad\mbox{in }\Omega,\quad\frac {\partial \rho_0^{N}}{\partial \nu}\big |_{\partial \Omega} =0.
\end{equation}
Assume moreover that
\begin{equation*}
u^k \rightarrow u \mbox{ strongly in }C([0,T];\mc{D}(\overline{\Omega})),\quad \chi_{\mc{S}}^k \mbox{ is bounded in }L^{\infty}((0,T)\times \mb{R}^3)\mbox{ satisfying }
\end{equation*}
\begin{equation}\label{n:transport}
\frac{\partial {\chi}^k_{\mc{S}}}{\partial t} + \operatorname{div}(P^k_{\mc{S}}u^k\chi^k_{\mc{S}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad {\chi}^k_{\mc{S}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
and let $\{\rho^{k}\chi_{\mc{S}}^{k}\}$ be a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying
\begin{equation}\label{N:rhotrans}
\frac{\partial}{\partial t}(\rho^{k}{\chi}^{k}_{\mc{S}}) + \operatorname{div}(P^{k}_{\mc{S}}u^{k}(\rho^{k}\chi^{k}_{\mc{S}})) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \rho^{k}{\chi}^{k}_{\mc{S}}|_{t=0}=\rho^N_0\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
where $P^{k}_{\mc{S}}:L^2(\Omega)\rightarrow L^2(\mc{S}^{k}(t))$ is the orthogonal projection to rigid fields with $\mc{S}^{k}(t) \Subset \Omega$ being a bounded, regular domain for all $t \in [0,T]$.
Then
\begin{align*}
& \chi_{\mc{S}}^k \rightarrow \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and}\mbox{ strongly} \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)), \ \forall \ 1\leq p<\infty,\\
&\rho^k\chi_{\mc{S}}^k \rightarrow \rho \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and}\mbox{ strongly} \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)), \ \forall \ 1\leq p<\infty,
\end{align*}
where $\chi_{\mc{S}}$ and $\rho\chi_{\mc{S}}$ are satisfying \eqref{transport1} and \eqref{re:transport1} with initial data $\mathds{1}_{\mc{S}_0}$ and $\rho^{N}_{0}\mathds{1}_{\mc{S}_0}$, respectively. Moreover,
\begin{align*}
& P_{\mc{S}}^k u^k \rightarrow P_{\mc{S}} u \mbox{ strongly} \mbox{ in }C([0,T]; C^{\infty}_{loc}(\mb{R}^3)), \\
&\eta_{t,s}^k \rightarrow \eta_{t,s} \mbox{ strongly} \mbox{ in }C^{1}([0,T]^2; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
\end{proposition}
\begin{proof}
As $\{u^{k}\}$ converges strongly in $C([0,T];\mc{D}(\overline{\Omega}))$
and $\{ \rho^{k} \chi_{\mc{S}}^{k}\} \mbox{ is bounded in }L^{\infty}((0,T)\times \mb{R}^3)$,
we obtain that $P^{k}_{\mc{S}}u^{k}$ is bounded in $L^2(0,T;\mc{R})$. Thus, up to a subsequence,
\begin{equation}\label{PN:weak}
P^{k}_{\mc{S}}u^{k} \rightarrow \overline{u_{\mc{S}}} \mbox{ weakly in }L^2(0,T;\mc{R}).
\end{equation}
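The $L^2(0,T;\mc{R})$ bound invoked here can be read off directly from the formula for the projection. As a sketch, assuming (as is guaranteed by the transport of the body density along an isometric flow) that the mass $m^{k}$ stays bounded away from zero and $(J^{k})^{-1}$ stays bounded, one has, for a constant $C(\Omega)$ depending only on $\Omega$,
\begin{equation*}
\|P^{k}_{\mc{S}}u^{k}(t,\cdot)\|_{L^{\infty}(\Omega)} \leq C(\Omega)\left(\frac{1}{m^{k}} + \big\|(J^{k})^{-1}\big\|\right)\|\rho^{k}\chi^{k}_{\mc{S}}\|_{L^{\infty}}\,\|u^{k}(t,\cdot)\|_{L^{2}(\Omega)},
\end{equation*}
so the stated bounds on $\{u^{k}\}$ and $\{\rho^{k}\chi^{k}_{\mc{S}}\}$ yield the claim after integrating in time.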
Here, obviously $P^{k}_{\mc{S}}u^{k} \in L^1(0,T; L^{\infty}_{loc}(\mb{R}^3))$, $\operatorname{div}(P^{k}_{\mc{S}}u^{k})=0$ and $\overline{u_{\mc{S}}} \in L^{1}(0,T;W^{1,1}_{loc}(\mb{R}^3))$ satisfies
\begin{equation*}
\frac{\overline{u_{\mc{S}}}}{1+|x|} \in L^1(0,T;L^1(\mb{R}^3)).
\end{equation*}
Moreover, $\{\chi_{\mc{S}}^{k}\}$ is bounded in $L^{\infty}((0,T)\times \mb{R}^3)$ and $\chi_{\mc{S}}^{k}$ satisfies \eqref{n:transport}, while $\{\rho^{k}\chi_{\mc{S}}^{k}\}$ is bounded in $L^{\infty}((0,T)\times\mathbb{R}^3)$ and $\rho^{k}\chi_{\mc{S}}^{k}$ satisfies \eqref{N:rhotrans}. As we have verified all the required conditions, we can apply \cite[Theorem II.4, Page 521]{DiPerna1989} to obtain
$$\chi_{\mc{S}}^{k} \mbox{ converges weakly-}*\mbox{ in }L^{\infty}((0,T)\times \mb{R}^3),\mbox{ strongly in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),$$
$$\rho^{k}\chi_{\mc{S}}^{k} \mbox{ converges weakly-}*\mbox{ in }L^{\infty}((0,T)\times \mb{R}^3),\mbox{ strongly in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty).$$
Let the limit of $\chi_{\mc{S}}^{k}$ be denoted by ${\chi_{\mc{S}}}$; it satisfies
\begin{equation*}
\frac{\partial {\chi_{\mc{S}}}}{\partial t} + \operatorname{div}(\overline{u_{\mc{S}}}\ {\chi_{\mc{S}}}) =0 \mbox{ in }(0,T)\times \mathbb{R}^3,\quad {\chi_{\mc{S}}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation*}
Let the weak limit of $\rho^{k}\chi_{\mc{S}}^{k}$ be denoted by $\overline{\rho\chi_{\mc{S}}}$; it satisfies
\begin{equation*}
\frac{\partial (\overline{\rho\chi_{\mc{S}}})}{\partial t} + \operatorname{div}(\overline{u_{\mc{S}}}\ \overline{\rho\chi_{\mc{S}}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \overline{\rho\chi_{\mc{S}}}|_{t=0}=\rho^N_0\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation*}
We follow an analysis similar to that for the fluid case, explained in \cite[Section 7.8.1, Page 362]{MR2084891}, to conclude that
\begin{equation*}
\rho^k \rightarrow \rho\mbox{ strongly in } L^p((0,T)\times \Omega), \quad\forall \ 1\leq p< \frac{4}{3}\beta\mbox{ with } \ \beta \geq \max\{8,\gamma\},\ \gamma > 3/2.
\end{equation*}
The strong convergences of $\rho^{k}$ and $\chi_{\mc{S}}^{k}$ help us to identify the limit:
\begin{equation*}
\overline{\rho\chi_{\mc{S}}}= \rho\chi_{\mc{S}}.
\end{equation*}
Using the convergences of $\rho^{k}\chi_{\mc{S}}^{k}$ and $u^{k}$ in the equation
\begin{equation*}
P_{\mc{S}}^{k}u^{k}(t,x)= \frac{1}{m^{k}} \int\limits_{\Omega} \rho^{k}\chi_{\mc{S}}^{k} u^{k} + \left((J^k)^{-1} \int\limits_{\Omega}\rho^{k}\chi_{\mc{S}}^{k}((y-h^{k}(t)) \times u^{k})\ dy \right)\times (x-h^{k}(t)),
\end{equation*}
and the convergence in \eqref{PN:weak}, we conclude that
\begin{equation*}
\overline{u_{\mc{S}}}={P_{\mc{S}}}u.
\end{equation*}
The convergence of the isometric propagator $\eta_{t,s}^{k}$ follows from the convergence of $P_{\mc{S}}^{k} u^{k}$ and equation \eqref{ODE-propagator}.
\end{proof}
We need the next result on weak sequential continuity to analyze the limit of the Faedo-Galerkin approximation as $N\rightarrow\infty$ in \cref{14:14}. The proof is similar to that of \cref{sequential1}, and we omit it here.
\begin{proposition}\label{sequential11}
Let us assume that $\rho^N_0 \in W^{1,\infty}(\Omega)$ with $\rho^N_0 \rightarrow \rho_0$ in $W^{1,\infty}(\Omega)$,
$\rho^N$ satisfies \eqref{eq:rhoN} (with $k$ replaced by $N$) and
\begin{equation*}
\rho^N \rightarrow \rho\mbox{ strongly in }L^p((0,T)\times \Omega),\ 1\leq p< \frac{4}{3}\beta\mbox{ with } \ \beta \geq \max\{8,\gamma\},\ \gamma > 3/2.
\end{equation*}
Let $\{u^N,\chi_{\mc{S}}^N\}$ be a bounded sequence in $L^{\infty}(0,T; L^2(\Omega)) \times L^{\infty}((0,T)\times \mb{R}^3)$ satisfying
\eqref{n:transport}. Let $\{\rho^{N}\chi_{\mc{S}}^{N}\}$ be a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying
\eqref{N:rhotrans}. Then, up to a subsequence, we have
\begin{align*}
& u^N \rightarrow u \mbox{ weakly-}* \mbox{ in }L^{\infty}(0,T; L^{2}(\Omega)),\\
& \chi_{\mc{S}}^N \rightarrow \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and}\mbox{ strongly} \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)), \ \forall\ 1\leq p<\infty,\\
&\rho^N\chi_{\mc{S}}^N \rightarrow \rho\chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and}\mbox{ strongly} \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)), \ \forall \ 1\leq p<\infty,
\end{align*}
where $\chi_{\mc{S}}$ and $\rho\chi_{\mc{S}}$ satisfy \eqref{transport1} and \eqref{re:transport1}, respectively. Moreover,
\begin{align*}
& P_{\mc{S}}^N u^N \rightarrow P_{\mc{S}} u \mbox{ weakly-}* \mbox{ in }L^{\infty}(0,T; C^{\infty}_{loc}(\mb{R}^3)),\\
&\eta_{t,s}^N \rightarrow \eta_{t,s} \mbox{ weakly-}* \mbox{ in }W^{1,\infty}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
\end{proposition}
At the level of the Galerkin approximation, we have boundedness of $\sqrt{\rho^N}u^N$ in $L^{\infty}(0,T;L^2(\Omega))$ and $\rho^N$ is strictly positive, so $u^N$ is bounded in $L^{\infty}(0,T; L^2(\Omega))$ as well. Hence we can use \cref{sequential11} in the convergence analysis of the Galerkin scheme. At the $\varepsilon$-level for the compressible fluid, we have boundedness of $\sqrt{\rho^{\varepsilon}}u^{\varepsilon}$ in $L^{\infty}(0,T;L^2(\Omega))$, but $\rho^{\varepsilon}$ is only non-negative. On the other hand, we establish boundedness of $u^{\varepsilon}$ in $L^{2}(0,T;H^1(\Omega))$. We therefore need the following result for the convergence analysis of the vanishing viscosity limit in \cref{14:18}.
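The first of these observations is elementary. As a sketch (here $\underline{\rho}_T>0$ denotes a positive lower bound for $\rho^N$ on $[0,T]\times\Omega$, which is available at the Galerkin level thanks to \eqref{initialcond} and the smoothness of $u^N$):
\begin{equation*}
\|u^N(t,\cdot)\|_{L^2(\Omega)}^2 \leq \frac{1}{\underline{\rho}_T}\int\limits_{\Omega}\rho^N(t,\cdot)\, |u^N(t,\cdot)|^2 \leq \frac{1}{\underline{\rho}_T}\,\big\|\sqrt{\rho^N}u^N\big\|^2_{L^{\infty}(0,T;L^2(\Omega))},\quad \mbox{for a.e. } t\in(0,T).
\end{equation*}
No such bound is available at the $\varepsilon$-level, where $\rho^{\varepsilon}$ may vanish.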
\begin{proposition}\label{sequential-varepsilon}
Let $\rho^{\varepsilon}_0 \in W^{1,\infty}(\Omega)$ with $\rho^{\varepsilon}_0 \rightarrow \rho_0$ in $L^{\beta}(\Omega)$,
$\rho^{\varepsilon}$ satisfies
\begin{equation*}
\frac{\partial {\rho^{\varepsilon}}}{\partial t} + \operatorname{div}({\rho}^{\varepsilon} u^{\varepsilon}) = \varepsilon\Delta\rho^{\varepsilon} \mbox{ in }\, (0,T)\times \Omega, \quad \frac{\partial \rho^{\varepsilon}}{\partial \nu}=0 \mbox{ on }\, \partial\Omega, \quad\rho^{\varepsilon}(0,x)=\rho_0^{\varepsilon}(x)\mbox{ in }\ \Omega,\quad\frac {\partial \rho_0^{\varepsilon}}{\partial \nu}\big |_{\partial \Omega} =0,
\end{equation*}
and
\begin{equation}\label{epsilon-rhoweak}
\rho^{\varepsilon}\rightarrow \rho\mbox{ weakly in }L^{\beta+1}((0,T)\times \Omega),\mbox{ with } \ \beta \geq \max\{8,\gamma\},\ \gamma > 3/2.
\end{equation}
Let $\{u^{\varepsilon},\chi_{\mc{S}}^{\varepsilon}\}$ be a bounded sequence in $L^{2}(0,T; H^1(\Omega)) \times L^{\infty}((0,T)\times \mb{R}^3)$ satisfying
\begin{equation}\label{epsilon:transport}
\frac{\partial {\chi}^{\varepsilon}_{\mc{S}}}{\partial t} + \operatorname{div}(P^{\varepsilon}_{\mc{S}}u^{\varepsilon}\chi^{\varepsilon}_{\mc{S}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad {\chi}^{\varepsilon}_{\mc{S}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
and let $\{\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}\}$ be a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying
\begin{equation}\label{epsilon:rhotrans}
\frac{\partial}{\partial t}(\rho^{\varepsilon}{\chi}^{\varepsilon}_{\mc{S}}) + \operatorname{div}(P^{\varepsilon}_{\mc{S}}u^{\varepsilon}(\rho^{\varepsilon}\chi^{\varepsilon}_{\mc{S}})) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \rho^{\varepsilon}{\chi}^{\varepsilon}_{\mc{S}}|_{t=0}=\rho^{\varepsilon}_0\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
where $P^{\varepsilon}_{\mc{S}}:L^2(\Omega)\rightarrow L^2(\mc{S}^{\varepsilon}(t))$ is the orthogonal projection onto rigid fields with $\mc{S}^{\varepsilon}(t) \Subset \Omega$ being a bounded, regular domain for all $t \in [0,T]$. Then up to a subsequence, we have
\begin{align*}
& u^{\varepsilon} \rightarrow u \mbox{ weakly } \mbox{ in }L^{2}(0,T; H^{1}(\Omega)),\\
& \chi_{\mc{S}}^{\varepsilon} \rightarrow \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),\\
&\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon} \rightarrow \rho\chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),
\end{align*}
with $\chi_{\mc{S}}$ and $\rho\chi_{\mc{S}}$ satisfying \eqref{transport1} and \eqref{re:transport1} respectively. Moreover,
\begin{align*}
& P_{\mc{S}}^{\varepsilon} u^{\varepsilon} \rightarrow P_{\mc{S}} u \mbox{ weakly } \mbox{ in }L^{2}(0,T; C^{\infty}_{loc}(\mb{R}^3)),\\
&\eta_{t,s}^{\varepsilon} \rightarrow \eta_{t,s} \mbox{ weakly } \mbox{ in }H^{1}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
\end{proposition}
\begin{proof}
As $\{u^{\varepsilon}\}$ is a bounded sequence in $L^2(0,T;H^1(\Omega))$
and $\{ \rho^{\varepsilon} \chi_{\mc{S}}^{\varepsilon}\} \mbox{ is bounded in }L^{\infty}((0,T)\times \mb{R}^3)$,
we obtain that $\{P^{\varepsilon}_{\mc{S}}u^{\varepsilon}\}$ is bounded in $L^2(0,T;\mc{R})$. Thus, up to a subsequence,
\begin{equation}\label{Pepsilon:weak}
P^{\varepsilon}_{\mc{S}}u^{\varepsilon} \rightarrow \overline{u_{\mc{S}}} \mbox{ weakly in }L^2(0,T;\mc{R}).
\end{equation}
Here, obviously $P^{\varepsilon}_{\mc{S}}u^{\varepsilon} \in L^1(0,T; L^{\infty}_{loc}(\mb{R}^3))$, $\operatorname{div}(P^{\varepsilon}_{\mc{S}}u^{\varepsilon})=0$ and $\overline{u_{\mc{S}}} \in L^{1}(0,T;W^{1,1}_{loc}(\mb{R}^3))$ satisfies
\begin{equation*}
\frac{\overline{u_{\mc{S}}}}{1+|x|} \in L^1(0,T;L^1(\mb{R}^3)).
\end{equation*}
Moreover, $\{\chi_{\mc{S}}^{\varepsilon}\}$ is bounded in $L^{\infty}((0,T)\times \mb{R}^3)$ and $\chi_{\mc{S}}^{\varepsilon}$ satisfies \eqref{epsilon:transport}, while $\{\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}\}$ is bounded in $L^{\infty}((0,T)\times\mathbb{R}^3)$ and $\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}$ satisfies \eqref{epsilon:rhotrans}. As we have verified all the required conditions, we can apply \cite[Theorem II.4, Page 521]{DiPerna1989} to obtain
$$\chi_{\mc{S}}^{\varepsilon} \mbox{ converges weakly-}*\mbox{ in }L^{\infty}((0,T)\times \mb{R}^3),\mbox{ strongly in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),$$
$$\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon} \mbox{ converges weakly-}*\mbox{ in }L^{\infty}((0,T)\times \mb{R}^3),\mbox{ strongly in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty).$$
Let the limit of $\chi_{\mc{S}}^{\varepsilon}$ be denoted by ${\chi_{\mc{S}}}$; it satisfies
\begin{equation*}
\frac{\partial {\chi_{\mc{S}}}}{\partial t} + \operatorname{div}(\overline{u_{\mc{S}}}\ {\chi_{\mc{S}}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad {\chi_{\mc{S}}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation*}
Let the limit of $\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}$ be denoted by $\overline{\rho\chi_{\mc{S}}}$; it satisfies
\begin{equation*}
\frac{\partial (\overline{\rho\chi_{\mc{S}}})}{\partial t} + \operatorname{div}(\overline{u_{\mc{S}}}\ \overline{\rho\chi_{\mc{S}}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \overline{\rho\chi_{\mc{S}}}|_{t=0}=\rho_0\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation*}
The weak convergence of $\rho^{\varepsilon}$ and strong convergence of $\chi_{\mc{S}}^{\varepsilon}$ help us to identify the limit:
\begin{equation*}
\overline{\rho\chi_{\mc{S}}}= \rho\chi_{\mc{S}}.
\end{equation*}
Using the convergences of $\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}$ and $u^{\varepsilon}$ in the equation
\begin{equation*}
P_{\mc{S}}^{\varepsilon}u^{\varepsilon}(t,x)= \frac{1}{m^{\varepsilon}} \int\limits_{\Omega} \rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon} u^{\varepsilon} + \left((J^{\varepsilon})^{-1} \int\limits_{\Omega}\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}((y-h^{\varepsilon}(t)) \times u^{\varepsilon})\ dy \right)\times (x-h^{\varepsilon}(t)),
\end{equation*}
and the convergence in \eqref{Pepsilon:weak}, we conclude that
\begin{equation*}
\overline{u_{\mc{S}}}={P_{\mc{S}}}u.
\end{equation*}
The convergence of the isometric propagators $\eta_{t,s}^{\varepsilon}$ follows from the convergence of $P_{\mc{S}}^{\varepsilon} u^{\varepsilon}$ and equation \eqref{ODE-propagator}.
\end{proof}
For the sequence $u^{\delta}$, we can only expect boundedness in $L^{2}(0,T;L^2(\Omega))$, not in $L^{2}(0,T;H^1(\Omega))$.
For this reason we need a different sequential continuity result, which we use in \cref{S4}.
\begin{proposition}\label{sequential2}
Let $\rho^{\delta}_0 \in L^{\beta}(\Omega)$ with $\rho^{\delta}_0 \rightarrow \rho_0$ in $L^{\gamma}(\Omega)$, let
$\rho^{\delta}$ satisfy
\begin{equation*}
\frac{\partial {\rho^{\delta}}}{\partial t} + \operatorname{div}({\rho}^{\delta} u^{\delta}) = 0 \mbox{ in }\, (0,T)\times \Omega, \quad\rho^{\delta}(0,x)=\rho_0^{\delta}(x)\mbox{ in }\ \Omega, \end{equation*}
and
\begin{equation}\label{delta-rhoweak}
\rho^{\delta}\rightarrow \rho\mbox{ weakly in }L^{\gamma+\theta}((0,T)\times \Omega),\mbox{ with }\gamma>3/2,\, \theta=\frac{2}{3}\gamma-1.
\end{equation}
Let $\{u^{\delta},\chi_{\mc{S}}^{\delta}\}$ be a bounded sequence in $L^{2}(0,T; L^2(\Omega)) \times L^{\infty}((0,T)\times \mb{R}^3)$ satisfying
\begin{equation}\label{delta:transport}
\frac{\partial {\chi}^{\delta}_{\mc{S}}}{\partial t} + \operatorname{div}(P^{\delta}_{\mc{S}}u^{\delta}\chi^{\delta}_{\mc{S}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad {\chi}^{\delta}_{\mc{S}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
and let $\{\rho^{\delta}\chi_{\mc{S}}^{\delta}\}$ be a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying
\begin{equation}\label{delta:rhotrans}
\frac{\partial}{\partial t}(\rho^{\delta}{\chi}^{\delta}_{\mc{S}}) + \operatorname{div}(P^{\delta}_{\mc{S}}u^{\delta}(\rho^{\delta}\chi^{\delta}_{\mc{S}})) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \rho^{\delta}{\chi}^{\delta}_{\mc{S}}|_{t=0}=\rho^{\delta}_0\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3,
\end{equation}
where $P^{\delta}_{\mc{S}}:L^2(\Omega)\rightarrow L^2(\mc{S}^{\delta}(t))$ is the orthogonal projection onto rigid fields with $\mc{S}^{\delta}(t) \Subset \Omega$ being a bounded, regular domain for all $t \in [0,T]$. Then, up to a subsequence, we have
\begin{align*}
& u^{\delta} \rightarrow u \mbox{ weakly } \mbox{ in }L^{2}(0,T; L^{2}(\Omega)),\\
& \chi_{\mc{S}}^{\delta} \rightarrow \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),\\
& \rho^{\delta}\chi_{\mc{S}}^{\delta} \rightarrow \rho\chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),
\end{align*}
with $\chi_{\mc{S}}$ and $\rho\chi_{\mc{S}}$ satisfying \eqref{transport1} and \eqref{re:transport1} respectively. Moreover,
\begin{align*}
& P_{\mc{S}}^{\delta} u^{\delta} \rightarrow P_{\mc{S}} u \mbox{ weakly } \mbox{ in }L^{2}(0,T; C^{\infty}_{loc}(\mb{R}^3)),\\
& \eta_{t,s}^{\delta} \rightarrow \eta_{t,s} \mbox{ weakly } \mbox{ in }H^{1}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
\end{proposition}
\begin{proof}
As $\{u^{\delta}\}$ is a bounded sequence in $L^2(0,T;L^2(\Omega))$ and $\{ \rho^{\delta} \chi_{\mc{S}}^{\delta}\} \mbox{ is bounded in }L^{\infty}((0,T)\times \mb{R}^3)$, we obtain that $\{P^{\delta}_{\mc{S}}u^{\delta}\}$ is bounded in $L^2(0,T;\mc{R})$. Thus, up to a subsequence,
\begin{equation}\label{P:weak}
P^{\delta}_{\mc{S}}u^{\delta} \rightarrow \overline{u_{\mc{S}}} \mbox{ weakly in }L^2(0,T;\mc{R}).
\end{equation}
Here, obviously $P^{\delta}_{\mc{S}}u^{\delta} \in L^1(0,T; L^{\infty}_{loc}(\mb{R}^3))$, $\operatorname{div}(P^{\delta}_{\mc{S}}u^{\delta})=0$ and $\overline{u_{\mc{S}}} \in L^{1}(0,T;W^{1,1}_{loc}(\mb{R}^3))$ satisfies
\begin{equation*}
\frac{\overline{u_{\mc{S}}}}{1+|x|} \in L^1(0,T;L^1(\mb{R}^3)).
\end{equation*}
Moreover, $\{\chi_{\mc{S}}^{\delta}\} \mbox{ is bounded in }L^{\infty}((0,T)\times \mb{R}^3)$, $\chi_{\mc{S}}^{\delta}$ satisfies \eqref{delta:transport} and $\{\rho^{\delta}\chi_{\mc{S}}^{\delta}\}$ is bounded in $L^{\infty}((0,T)\times\mathbb{R}^3)$, $\rho^{\delta}\chi_{\mc{S}}^{\delta}$ satisfies \eqref{delta:rhotrans}. Now we can apply \cite[Theorem II.4, Page 521]{DiPerna1989} to obtain
$$\chi_{\mc{S}}^{\delta} \mbox{ converges weakly-}*\mbox{ in }L^{\infty}((0,T)\times \mb{R}^3),\mbox{ and strongly in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),$$
$$\rho^{\delta}\chi_{\mc{S}}^{\delta} \mbox{ converges weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty).$$
Let the weak limit of $\chi_{\mc{S}}^{\delta}$ be denoted by ${\chi_{\mc{S}}}$. Then it satisfies
\begin{equation*}
\frac{\partial {\chi_{\mc{S}}}}{\partial t} + \operatorname{div}(\overline{u_{\mc{S}}}\ {\chi_{\mc{S}}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad {\chi_{\mc{S}}}|_{t=0}=\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation*}
Let the limit of $\rho^{\delta}\chi_{\mc{S}}^{\delta}$ be denoted by $\overline{\rho\chi_{\mc{S}}}$; it satisfies
\begin{equation*}
\frac{\partial (\overline{\rho\chi_{\mc{S}}})}{\partial t} + \operatorname{div}(\overline{u_{\mc{S}}}\ \overline{\rho\chi_{\mc{S}}}) =0 \mbox{ in }(0,T)\times\mathbb{R}^3,\quad \overline{\rho\chi_{\mc{S}}}|_{t=0}=\rho_0\mathds{1}_{\mc{S}_0}\mbox{ in }\mathbb{R}^3.
\end{equation*}
The weak convergence of $\rho^{\delta}$ to $\rho$ in \eqref{delta-rhoweak} and the strong convergence of $\chi_{\mc{S}}^{\delta}$ to $\chi_{\mc{S}}$ help us to identify the limit:
\begin{equation*}
\overline{\rho\chi_{\mc{S}}}= \rho\chi_{\mc{S}}.
\end{equation*}
Using the convergences of $\rho^{\delta}\chi_{\mc{S}}^{\delta}$ and $u^{\delta}$ in the equation
\begin{equation*}
P_{\mc{S}}^{\delta}u^{\delta}(t,x)= \frac{1}{m^{\delta}} \int\limits_{\Omega} \rho^{\delta}\chi_{\mc{S}}^{\delta} u^{\delta} + \left((J^{\delta})^{-1} \int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}((y-h^{\delta}(t)) \times u^{\delta})\ dy \right)\times (x-h^{\delta}(t)),
\end{equation*}
and the convergence in \eqref{P:weak}, we conclude that
\begin{equation*}
\overline{u_{\mc{S}}}={P_{\mc{S}}}u.
\end{equation*}
The convergence of the isometric propagator $\eta_{t,s}^{\delta}$ follows from the convergence of $P_{\mc{S}}^{\delta} u^{\delta}$ and equation \eqref{ODE-propagator}.
\end{proof}
\section{Existence proofs of approximate solutions}\label{S3}
In this section, we present the proofs of the existence results of the three approximation levels. We start with the $N$-level approximation in \cref{sec:Galerkin} and the limit as $N\to\infty$ in \cref{14:14}, which yields existence at the $\varepsilon$-level. The convergence of $\varepsilon\to 0$, considered in \cref{14:18}, then shows existence of solutions at the $\delta$-level. The final limit problem as $\delta\to 0$ is the topic of \cref{S4}.
\subsection{Existence of the Faedo-Galerkin approximation}\label{sec:Galerkin}
In this subsection, we construct a solution $(\mc{S}^N,\rho^N,u^{N})$ to the problem \eqref{galerkin-approx1}--\eqref{galerkin-initial}.
First we recall a known maximal regularity result for the parabolic problem \eqref{galerkin-approx2}:
\begin{proposition}\cite[Proposition 7.39, Page 345]{MR2084891}\label{parabolic}
Suppose that $\Omega$ is a regular bounded domain and assume $\rho_0 \in W^{1,\infty}(\Omega)$, $\underline{\rho} \leq \rho_0 \leq \overline{\rho}$, $u \in L^{\infty}(0,T;W^{1,\infty}(\Omega))$. Then the parabolic problem \eqref{galerkin-approx2} admits a unique solution in the solution space
\begin{equation*}
\rho \in L^2(0,T;H^{2}(\Omega)) \cap C([0,T]; H^{1}(\Omega)) \cap H^{1}(0,T;L^{2}(\Omega))
\end{equation*} and it satisfies
\begin{equation} \label{bounds-on-rho}
\underline{\rho}\exp \left(-\int\limits_0^{\tau} \|\operatorname{div}u(s)\|_{L^{\infty}(\Omega)}\ ds\right)\leq \rho(\tau,x)\leq \overline{\rho}\exp \left(\int\limits_0^{\tau} \|\operatorname{div}u(s)\|_{L^{\infty}(\Omega)}\ ds\right)
\end{equation}
for any $\tau \in [0,T]$.
\end{proposition}
\begin{proof}[Proof of \cref{fa}]
The idea is to view our Galerkin approximation as a fixed point problem and then apply Schauder's fixed point theorem to it. We set
\begin{equation*}
B_{R,T}=\{u\in C([0,T]; X_N),\ \|u\|_{L^{\infty}(0,T;L^2(\Omega))}\leq R\},
\end{equation*}
for $R$ and $T$ positive which will be fixed in Step 3.
\underline{Step 1: Continuity equation and transport of the body.}
Given $u \in B_{R,T}$, let $\rho$ be the solution to
\begin{equation}\label{eq:rho}
\frac{\partial {\rho}}{\partial t} + \operatorname{div}({\rho} u) =\varepsilon \Delta\rho \mbox{ in }\, (0,T)\times \Omega, \quad \frac{\partial \rho}{\partial \nu}=0 \mbox{ on }\, \partial\Omega, \quad\rho(0)=\rho_0^N,\quad 0<\underline{\rho}\leq \rho_0^N \leq \overline{\rho},
\end{equation}
and let ${\chi}_{\mc{S}}$ satisfy
\begin{equation}\label{eq:chinew}
\frac{\partial {\chi}_{\mc{S}}}{\partial t} + P_{\mc{S}}u \cdot \nabla \chi_{\mc{S}} =0,\quad \chi_{\mc{S}}|_{t=0}= \mathds{1}_{\mc{S}_0},
\end{equation}
and
\begin{equation}\label{eq:rhochinew}
\frac{\partial }{\partial t}(\rho{\chi}_{\mc{S}}) + P_{\mc{S}}u \cdot \nabla (\rho{\chi}_{\mc{S}})=0,\quad (\rho{\chi}_{\mc{S}})|_{t=0}= \rho_0^{N}\mathds{1}_{\mc{S}_0},
\end{equation}
where $P_{\mc{S}}u \in \mc{R}$ and it is given by
\eqref{projection:P}.
Since $\rho_0^N \in W^{1,\infty}(\Omega)$ and $u\in B_{R,T}$ in \eqref{eq:rho}, we can apply \cref{parabolic} to conclude that $\rho >0$ and
\begin{equation*}
\rho \in L^2(0,T;H^{2}(\Omega)) \cap C([0,T]; H^{1}(\Omega)) \cap H^{1}(0,T;L^{2}(\Omega)).
\end{equation*}
Moreover, by \cref{reg:chiS} we obtain
\begin{align*}
&\chi_{\mc{S}} \in L^{\infty}((0,T)\times \Omega) \cap C([0,T];L^p(\Omega)), \, \forall \, 1 \leq p < \infty,\\
&\rho\chi_{\mc{S}} \in L^{\infty}((0,T)\times \Omega) \cap C([0,T];L^p(\Omega)), \, \forall \, 1 \leq p < \infty.
\end{align*}
Consequently, we define
\begin{align*}
& \mu = (1-\chi_{\mc{S}})\mu_{\mc{F}} + \delta^2\chi_{\mc{S}},\quad \lambda = (1-\chi_{\mc{S}})\lambda_{\mc{F}} + \delta^2\chi_{\mc{S}}\mbox{ so that }\mu >0,\ 2\mu+3\lambda \geq 0, \\
& g=(1-\chi_{\mc{S}})g_{\mc{F}} + \chi_{\mc{S}}g_{\mc{S}},\quad
p(\rho)= a\rho^{\gamma} + {\delta} \rho^{\beta}\quad
\mbox{ with } \quad
a = a_{\mc{F}} (1-\chi_{\mc{S}}).
\end{align*}
\underline{Step 2: Momentum equation.} Given $u\in B_{R,T}$, let us consider the following equation satisfied by $\widetilde{u}: [0,T]\mapsto X_N$:
\begin{multline}\label{tilde-momentum}
- \int\limits_0^T\int\limits_{\Omega} \rho \Big(\widetilde{u}'(t)\cdot e_j + (u \cdot \nabla e_j)\cdot \widetilde{u} \Big) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu\mathbb{D}(\widetilde{u}):\mathbb{D}(e_j) + \lambda\operatorname{div}\widetilde{u}\mathbb{I} : \mathbb{D}(e_j) - p(\rho)\mathbb{I}:\mathbb{D}(e_j)\Big) \\
+\int\limits_0^T\int\limits_{\Omega} \varepsilon \nabla e_j \nabla \rho \cdot \widetilde{u}
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (\widetilde{u} \times \nu)\cdot (e_j \times \nu) + \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^N(t)} [(\widetilde{u}-P_{\mc{S}}\widetilde{u})\times \nu]\cdot [(e_j-P_{\mc{S}}e_j)\times \nu] \\
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi_{\mc{S}}(\widetilde{u}-P_{\mc{S}}\widetilde{u})\cdot (e_j-P_{\mc{S}}e_j) = \int\limits_0^T\int\limits_{\Omega}\rho g \cdot e_j,
\end{multline}
where $\rho$, $\chi_{\mc{S}}$ are defined as in Step 1. We can write
\begin{equation*}
\widetilde{u}(t,\cdot)= \sum\limits_{i=1}^N g_{i}(t) e_i, \quad \widetilde{u}(0)=u_{0}^N= \sum\limits_{i=1}^N \left(\int\limits_{\Omega} u_{0} \cdot e_i\right)e_i.
\end{equation*}
Thus, we can identify the function $\widetilde{u}$ with its coefficients $\{g_{i}\}$, which satisfy the system of ordinary differential equations
\begin{equation}\label{tildeu-ODE}
\sum\limits_{i=1}^N a_{i,j}g'_{i}(t) + \sum\limits_{i=1}^N b_{i,j}g_{i}(t) = f_j(t),\quad g_{i}(0)= \int\limits_{\Omega} u_{0}^N \cdot e_i,
\end{equation}
where $a_{i,j}$, $b_{i,j}$ and $f_j$
are given by
\begin{align*}
a_{i,j}(t) &= \int\limits_{\Omega} \rho\, e_i \cdot e_j, \\
b_{i,j}(t) &= \int\limits_{\Omega} \rho (u\cdot \nabla e_j)\cdot e_i + \int\limits_{\Omega} \Big(2\mu\mathbb{D}(e_i):\mathbb{D}(e_j) + \lambda\operatorname{div}e_i\mathbb{I} : \mathbb{D}(e_j) \Big) + \int\limits_{\Omega} \varepsilon \nabla e_j \nabla \rho \cdot e_i \\
&+ \alpha \int\limits_{\partial \Omega} (e_i \times \nu)\cdot (e_j \times \nu) + \alpha \int\limits_{\partial \mc{S}(t)} [(e_i-P_{\mc{S}}e_i)\times \nu]\cdot [(e_j-P_{\mc{S}}e_j)\times \nu] + \frac{1}{\delta}\int\limits_{\Omega} \chi_{\mc{S}}(e_i-P_{\mc{S}}e_i)\cdot (e_j-P_{\mc{S}}e_j),\\
f_j(t) &= \int\limits_{\Omega} \rho g\cdot e_j + \int\limits_{\Omega}p(\rho)\mathbb{I}:\mathbb{D}(e_j).
\end{align*}
Observe that the positive lower bound of $\rho$ in \cref{parabolic} guarantees the invertibility of the matrix $(a_{i,j}(t))_{1\leq i,j\leq N}$. We use the regularity of $\rho$ (\cref{parabolic}), of $\chi_{\mc{S}}$ and of the propagator associated to $P_{\mc{S}}u$ (\cref{reg:chiS}) to conclude the continuity of $(a_{i,j}(t))_{1\leq i,j\leq N}$, $(b_{i,j}(t))_{1\leq i,j\leq N}$, $(f_{i}(t))_{1\leq i \leq N}$. The existence and uniqueness theorem for ordinary differential equations gives that system \eqref{tildeu-ODE} has a unique solution defined on $[0,T]$
and therefore equation \eqref{tilde-momentum} has a unique solution
\begin{equation*}
\widetilde{u} \in C([0,T]; X_N).
\end{equation*}
\underline{Step 3: Well-definedness of $\mc{N}$.}
Let us define a map
\begin{align*}
\mc{N}: B_{R,T} &\rightarrow C([0,T],X_N) \\
u &\mapsto \widetilde{u},
\end{align*}
where $\widetilde{u}$ satisfies \eqref{tilde-momentum}.
Since we know the existence of $\widetilde{u} \in C([0,T]; X_N)$ to the problem \eqref{tilde-momentum}, we have that $\mc{N}$ is well-defined from $B_{R,T}$ to $C([0,T]; X_N)$. Now we establish the fact that $\mc{N}$ maps $B_{R,T}$ to itself
for suitable $R$ and $T$.
We fix
\begin{equation*}
0< \sigma < \frac{1}{2}\operatorname{dist}(\mc{S}_0,\partial \Omega).
\end{equation*}
Given $u\in B_{R,T}$, we want to estimate $\|\widetilde{u}\|_{L^{\infty}(0,T;L^2(\Omega))}$.
We have the following identities and estimates, obtained via integration by parts:
\begin{align}\label{id:1}
\int\limits_0^t\int\limits_{\Omega} \rho \widetilde{u}'\cdot \widetilde{u} &=-\frac{1}{2}\int\limits_0^t\int\limits_{\Omega}\frac{\partial \rho}{\partial t}|\widetilde{u}|^2 + \frac{1}{2}\int\limits_{\Omega}(\rho|\widetilde{u}|^2)(t)-\frac{1}{2}\int\limits_{\Omega}\rho_0^N|u_0^N|^2, \\
\label{id:2}
\int\limits_0^T\int\limits_{\Omega} \rho (u\cdot\nabla)\widetilde{u}\cdot \widetilde{u} &= - \frac{1}{2}\int\limits_0^T\int\limits_{\Omega} \operatorname{div}(\rho u)|\widetilde{u}|^2,\\
\begin{split}\label{id:3}
\int\limits_{\Omega} \nabla (\rho^{\gamma})\cdot \widetilde{u} &= \frac{\gamma}{\gamma-1} \int\limits_{\Omega} \nabla (\rho^{\gamma-1})\cdot \rho \widetilde{u} = -\frac{\gamma}{\gamma-1}\int\limits_{\Omega}\rho^{\gamma-1} \operatorname{div}(\rho \widetilde{u})=\frac{1}{\gamma-1} \frac{d}{dt}\int\limits_{\Omega} \rho^{\gamma} - \frac{\varepsilon\gamma}{\gamma-1}\int\limits_{\Omega} \rho^{\gamma-1}\Delta\rho \\
&= \frac{1}{\gamma-1} \frac{d}{dt}\int\limits_{\Omega} \rho^{\gamma} + \varepsilon \gamma\int\limits_{\Omega} \rho^{\gamma-2}|\nabla \rho|^2 \geq \frac{1}{\gamma-1} \frac{d}{dt}\int\limits_{\Omega} \rho^{\gamma}.
\end{split}
\end{align}
Similarly,
\begin{equation}\label{nid:4}
\int\limits_{\Omega} \nabla (\rho^{\beta})\cdot \widetilde{u} = \frac{1}{\beta-1} \frac{d}{dt}\int\limits_{\Omega} \rho^{\beta} + \varepsilon \beta\int\limits_{\Omega} \rho^{\beta-2}|\nabla \rho|^2.
\end{equation}
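For completeness, the intermediate steps behind \eqref{nid:4}, mirroring the computation for \eqref{id:3} with $\gamma$ replaced by $\beta$, read:
\begin{equation*}
\int\limits_{\Omega} \nabla (\rho^{\beta})\cdot \widetilde{u} = -\frac{\beta}{\beta-1}\int\limits_{\Omega}\rho^{\beta-1} \operatorname{div}(\rho \widetilde{u}) = \frac{1}{\beta-1} \frac{d}{dt}\int\limits_{\Omega} \rho^{\beta} - \frac{\varepsilon\beta}{\beta-1}\int\limits_{\Omega} \rho^{\beta-1}\Delta\rho = \frac{1}{\beta-1} \frac{d}{dt}\int\limits_{\Omega} \rho^{\beta} + \varepsilon \beta\int\limits_{\Omega} \rho^{\beta-2}|\nabla \rho|^2.
\end{equation*}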
We multiply equation \eqref{tilde-momentum} by $g_{j}$, add these equations for $j=1,2,...,N$, use the relations \eqref{id:1}--\eqref{nid:4} and the continuity equation \eqref{eq:rho} to obtain the following energy estimate:
\begin{multline}\label{energy:tildeu}
\int\limits_{\Omega}\Big(\frac{1}{2} \rho |\widetilde{u}|^2 + \frac{a}{\gamma-1}\rho^{\gamma} + \frac{\delta}{\beta-1}\rho^{\beta}\Big) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu|\mathbb{D}(\widetilde{u})|^2 + \lambda |\operatorname{div}\widetilde{u}|^2\Big) + \delta\varepsilon \beta\int\limits_0^T\int\limits_{\Omega} \rho^{\beta-2}|\nabla \rho|^2
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} |\widetilde{u} \times \nu|^2 \\
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}(t)} |(\widetilde{u}-P_{\mc{S}}\widetilde{u})\times \nu|^2
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi_{\mc{S}}|\widetilde{u}-P_{\mc{S}}\widetilde{u}|^2 \leq \int\limits_0^T\int\limits_{\Omega}\rho g \cdot \widetilde{u}
+ \int\limits_{\Omega} \Bigg( \frac{1}{2}\frac{|q_0^N|^2}{\rho_0^N}\mathds{1}_{\{\rho_0^N>0\}} + \frac{a}{\gamma-1}(\rho_0^N)^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^N)^{\beta} \Bigg)\\
\leq \sqrt{\overline{\rho}}T\left(\frac{1}{2\widetilde{\varepsilon}}\|g\|^2_{L^{\infty}(0,T;L^2(\Omega))} + \frac{\widetilde{\varepsilon}}{2}\|\sqrt{\rho}\widetilde{u}\|^2_{L^{\infty}(0,T;L^2(\Omega))}\right) + \int\limits_{\Omega} \Bigg( \frac{1}{2}\frac{|q_0^N|^2}{\rho_0^N}\mathds{1}_{\{\rho_0^N>0\}} + \frac{a}{\gamma-1}(\rho_0^N)^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^N)^{\beta} \Bigg).
\end{multline}
An appropriate choice of $\widetilde{\varepsilon}$ in \eqref{energy:tildeu} gives us
\begin{equation*}
\|\widetilde{u}\|^2_{L^{\infty}(0,T;L^2(\Omega))} \leq \frac{4{\overline{\rho}}}{\underline{\rho}}T^2\|g\|^2_{L^{\infty}(0,T;L^2(\Omega))} + \frac{4}{\underline{\rho}}E_0^N,
\end{equation*}
where $\overline{\rho}$ and $\underline{\rho}$ are the upper and lower bounds of $\rho$. In order to get $\|\widetilde{u}\|_{L^{\infty}(0,T;L^2(\Omega))} \leq R$,
we need
\begin{equation}\label{choice-R}
R^2 \geq \frac{4{\overline{\rho}}}{\underline{\rho}}T^2\|g\|^2_{L^{\infty}(0,T;L^2(\Omega))} + \frac{4}{\underline{\rho}}E_0^N.
\end{equation}
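For concreteness, one admissible choice of $\widetilde{\varepsilon}$ in \eqref{energy:tildeu} is $\widetilde{\varepsilon} = \big(2\sqrt{\overline{\rho}}\,T\big)^{-1}$: the term $\sqrt{\overline{\rho}}\,T\,\frac{\widetilde{\varepsilon}}{2}\|\sqrt{\rho}\widetilde{u}\|^2_{L^{\infty}(0,T;L^2(\Omega))}$ then equals $\frac{1}{4}\|\sqrt{\rho}\widetilde{u}\|^2_{L^{\infty}(0,T;L^2(\Omega))}$ and can be absorbed into the left-hand side, leaving
\begin{equation*}
\frac{1}{4}\,\underline{\rho}\,\|\widetilde{u}\|^2_{L^{\infty}(0,T;L^2(\Omega))} \leq \frac{1}{4}\|\sqrt{\rho}\widetilde{u}\|^2_{L^{\infty}(0,T;L^2(\Omega))} \leq \overline{\rho}\,T^2\|g\|^2_{L^{\infty}(0,T;L^2(\Omega))} + E_0^N.
\end{equation*}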
We also need to verify that for $T$ small enough and for any $u\in B_{R,T}$,
\begin{equation}\label{no-collision}
\inf_{u\in B_{R,T}} \operatorname{dist}(\mc{S}(t),\partial \Omega) \geq 2\sigma> 0
\end{equation}
holds. We follow \cite[Proposition 4.6, Step 2]{MR3272367} and write $\mc{S}(t)=\eta_{t,0}(\mc{S}_0)$ with the isometric propagator $\eta_{t,s}$ associated to the rigid field $P_{\mc{S}}u=h'(t) + \omega(t)\times (y-h(t))$. Then, proving \eqref{no-collision} is equivalent to establishing the following bound:
\begin{equation}\label{equivalent-T}
\sup_{t\in [0,T]}|\partial_t \eta_{t,0}(y)| < \frac{ \operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma}{T},\quad y\in \mc{S}_0.
\end{equation}
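To see why \eqref{equivalent-T} implies \eqref{no-collision}, note that the distance function to $\partial \Omega$ is $1$-Lipschitz, so for $y\in \mc{S}_0$ and $t\in [0,T]$,
\begin{equation*}
\operatorname{dist}(\eta_{t,0}(y),\partial \Omega) \geq \operatorname{dist}(y,\partial \Omega) - |\eta_{t,0}(y)-y| \geq \operatorname{dist}(\mc{S}_0,\partial \Omega) - \int\limits_0^t |\partial_s \eta_{s,0}(y)|\, ds > \operatorname{dist}(\mc{S}_0,\partial \Omega) - T\cdot \frac{\operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma}{T} = 2\sigma.
\end{equation*}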
We have
\begin{equation*}
|\partial_t \eta_{t,0}(y)|=|P_{\mc{S}}u(t, \eta_{t,0}(y))| \leq |h'(t)| + |\omega(t)||\eta_{t,0}(y)-h(t)| = |h'(t)| + |\omega(t)||y-h(0)|,
\end{equation*}
since $\eta_{t,0}$ is an isometry mapping $h(0)$ to $h(t)$.
Furthermore, if $\overline{\rho}$ is the upper bound of $\rho$ and $m=\int_{\mc{S}(t)}\rho$ denotes the (conserved) mass of the body, then for $u\in B_{R,T}$
\begin{equation}\label{18:37}
m|h'(t)|^2 + J(t)\omega(t)\cdot \omega(t)= \int\limits_{\mc{S}(t)} \rho|P_{\mc{S}}u(t,\cdot)|^2 \leq \int\limits_{\Omega} \rho|u(t,\cdot)|^2 \leq \overline{\rho}R^2
\end{equation}
for any $R$ and $t\in (0,T)$. As $J(t)$ is obtained from $J(0)$ by conjugation with a rotation matrix, the two matrices have the same eigenvalues and we have
\begin{equation*}
\lambda_0|\omega(t)|^2 \leq J(t)\omega(t)\cdot \omega(t),
\end{equation*}
where $\lambda_0$ is the smallest eigenvalue of $J(0)$. Observe that for $t\in [0,T],\, y\in \mc{S}_0$,
\begin{align}
\begin{split}\label{18:39}
|h'(t)| + |\omega(t)||y-h(0)|&\leq \sqrt{2}\big(|h'(t)|^2 + |\omega(t)|^2|y-h(0)|^2\big)^{1/2} \leq \sqrt{2}\max\{1,|y-h(0)|\}\big(|h'(t)|^2 + |\omega(t)|^2\big)^{1/2}
\\ &\leq C_0\left(m|h'(t)|^2 + J(t)\omega(t)\cdot \omega(t)\right)^{1/2},
\end{split}
\end{align}
where $C_0=\sqrt{2}\,\frac{\max\{1,\sup_{y\in \mc{S}_0}|y-h(0)|\}}{\min\{1,m,\lambda_0\}^{1/2}}$.
Thus, with the help of \eqref{18:37}--\eqref{18:39} and the choice of $R$ in \eqref{choice-R}, we can conclude that any
\begin{equation}\label{choice-T}
T < \frac{ \operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma}{C_0\, \overline{\rho}^{1/2}\big[\frac{4{\overline{\rho}}}{\underline{\rho}}T^2\|g\|^2_{L^{\infty}(0,T;L^2(\Omega))} + \frac{4}{\underline{\rho}}E_0^N\big]^{1/2}}
\end{equation}
satisfies the relation \eqref{no-collision}.
Thus, we choose $T$ satisfying \eqref{choice-T} and fix it. Then we choose $R$ as in \eqref{choice-R} to conclude that $\mc{N}$ maps $B_{R,T}$ to itself.
\underline{Step 4: Continuity of $\mc{N}$.} We show that if a sequence $\{u^k\} \subset B_{R,T}$ is such that $u^k \rightarrow u$ in $B_{R,T}$, then $\mc{N}(u^k) \rightarrow \mc{N}(u)$ in $B_{R,T}$. As $\mbox{span}(e_1,e_2,...,e_N)$ is a finite dimensional subspace of $\mc{D}(\overline{\Omega})$, we have $u^k \rightarrow u$ in $C([0,T];\mc{D}(\overline{\Omega}))$. Given $\{u^k\} \subset B_{R,T}$, we have that $\rho^k \in L^2(0,T;H^{2}(\Omega)) \cap C([0,T]; H^{1}(\Omega)) \cap H^{1}(0,T;L^{2}(\Omega))$ is the solution to
\eqref{eq:rho}, $\chi_{\mc{S}}^k \mbox{ is bounded in }L^{\infty}((0,T)\times \mb{R}^3)\mbox{ satisfying }$ \eqref{eq:chinew}
and $\{\rho^{k}\chi_{\mc{S}}^{k}\}$ is a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying
\eqref{eq:rhochinew}.
We apply \cref{sequential1} to obtain
\begin{align*}
& \chi_{\mc{S}}^k \rightarrow \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)), \, \forall \, 1 \leq p < \infty,\\
& P_{\mc{S}}^k u^k \rightarrow P_{\mc{S}} u \mbox{ strongly } \mbox{ in }C([0,T]; C^{\infty}_{loc}(\mb{R}^3)),\\
& \eta_{t,s}^k \rightarrow \eta_{t,s} \mbox{ strongly } \mbox{ in }C^{1}([0,T]^2; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
We use the continuity argument as in Step 2 to conclude
\begin{equation*}
a^k_{i,j}\rightarrow a_{i,j}, \quad b^k_{i,j}\rightarrow b_{i,j}, \quad f^k_j \rightarrow f_j \mbox{ strongly in }C([0,T]),
\end{equation*}
and so we obtain
\begin{equation*}
\mc{N}(u^k)=\widetilde{u}^k \rightarrow \widetilde{u}=\mc{N}(u)\mbox{ strongly in }C([0,T]; X_N).
\end{equation*}
\underline{Step 5: Compactness of $\mc{N}$.} If $\widetilde{u}(t)=\sum\limits_{i=1}^N g_i(t) e_i$, we can view \eqref{tildeu-ODE} as
\begin{equation*}
A(t)G'(t) + B(t)G(t) = F(t),
\end{equation*}
where $A(t)=(a_{i,j}(t))_{1\leq i,j\leq N},\quad B(t)=(b_{i,j}(t))_{1\leq i,j\leq N},\quad F(t)=(f_{i}(t))_{1\leq i \leq N},\quad G(t)=(g_i(t))_{1\leq i\leq N}$. We deduce
\begin{equation*}
|g'_i(t)| \leq R|A^{-1}(t)||B(t)| + |A^{-1}(t)||F(t)|.
\end{equation*}
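This estimate follows by inverting the system (a sketch; we use that the basis $\{e_i\}$ is orthonormal in $L^2(\Omega)$, as in the construction of $u_0^N$, so that $|G(t)|=\|\widetilde{u}(t)\|_{L^2(\Omega)}\leq R$):
\begin{equation*}
G'(t) = A^{-1}(t)\big(F(t) - B(t)G(t)\big), \qquad |G'(t)| \leq |A^{-1}(t)||B(t)||G(t)| + |A^{-1}(t)||F(t)| \leq R\,|A^{-1}(t)||B(t)| + |A^{-1}(t)||F(t)|.
\end{equation*}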
Thus, we have
\begin{equation*}
\sup_{t\in [0,T]} \Big(|g_i(t)| + |g'_i(t)|\Big) \leq C.
\end{equation*}
This also implies
\begin{equation*}
\sup_{u\in B_{R,T}} \|\mc{N}(u)\|_{C^1([0,T]; X_N)} \leq C.
\end{equation*}
The $C^1([0,T]; X_N)$-boundedness of $\mc{N}(u)$ allows us to apply the Arzel\`a--Ascoli theorem to obtain compactness of $\mc{N}$ in $B_{R,T}$.
Now we are in a position to apply Schauder's fixed point theorem to
$\mc{N}$ to conclude the existence of a fixed point $u^N \in B_{R,T}$. Then we define $\rho^N$ satisfying the continuity equation \eqref{galerkin-approx2} on $(0,T)\times \Omega$, and $\chi_{\mc{S}}^N=\mathds{1}_{\mc{S}^N}$ is the corresponding solution to the transport equation \eqref{galerkin-approx4} on $(0,T)\times \mathbb{R}^3$. It only remains to justify the momentum equation \eqref{galerkin-approx3}. We multiply equation \eqref{tilde-momentum} by $\psi\in \mc{D}([0,T))$ to obtain:
\begin{multline}\label{22:49}
- \int\limits_0^T\int\limits_{\Omega} \rho^N \Big((u^N)'(t)\cdot \psi(t)e_j + (u^N \cdot \nabla (\psi(t)e_j))\cdot {u}^N \Big)+\int\limits_0^T\int\limits_{\Omega} \varepsilon \nabla (\psi(t)e_j) \nabla \rho^N \cdot u^N
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (u^N \times \nu)\cdot (\psi(t)e_j \times \nu)\\ + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^N\mathbb{D}({u}^N):\mathbb{D}(\psi(t)e_j) + \lambda^N\operatorname{div}{u}^N\mathbb{I} : \mathbb{D}(\psi(t)e_j) - p^{N}(\rho^N)\mathbb{I}:\mathbb{D}(\psi (t)e_j)\Big) \\
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^N(t)} [({u}^N-P^N_{\mc{S}}{u}^N)\times \nu]\cdot [(\psi(t)e_j-P^N_{\mc{S}}\psi(t)e_j)\times \nu]
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi_{\mc{S}}({u}^N-P^N_{\mc{S}}{u}^N)\cdot (\psi(t)e_j-P^N_{\mc{S}}\psi(t)e_j)\\ = \int\limits_0^T\int\limits_{\Omega}\rho^N g^N \cdot \psi(t)e_j.
\end{multline}
We have the following identities via integration by parts:
\begin{equation}\label{id:4}
\int\limits_0^T\int\limits_{\Omega} \rho^N (u^N)'(t)\cdot \psi(t)e_j = -\int\limits_0^T\int\limits_{\Omega} (\rho^N)' u^N\cdot \psi(t)e_j - \int\limits_0^T\int\limits_{\Omega} \rho^N u^N\cdot \psi'(t)e_j - \int\limits_{\Omega}(\rho^N u^N\cdot \psi e_j)(0),
\end{equation}
\begin{equation}\label{id:5}
\int\limits_{\Omega} \rho^N (u^N \cdot \nabla (\psi(t)e_j))\cdot {u}^N = -\int\limits_{\Omega} \operatorname{div}(\rho^N u^N) (\psi(t)e_j \cdot {u}^N) - \int\limits_{\Omega}{ \rho^N (u^N \cdot \nabla ) {u}^N\cdot \psi(t)e_j.}
\end{equation}
Thus we can use the relations \eqref{id:4}--\eqref{id:5} and continuity equation \eqref{galerkin-approx2} in the identity \eqref{22:49} to obtain equation \eqref{galerkin-approx3} for all $\phi \in \mc{D}([0,T); X_N)$.
\end{proof}
\subsection{Convergence of the Faedo-Galerkin scheme and the limiting system}\label{14:14}
In \cref{fa}, we have already constructed a solution $(\mc{S}^N,\rho^N,u^{N})$ to the problem \eqref{galerkin-approx1}--\eqref{galerkin-initial}. In this section, we establish \cref{thm:approxn} by passing to the limit in \eqref{galerkin-approx1}--\eqref{galerkin-initial} as $N\rightarrow\infty$ to recover the solution of \eqref{varepsilon:approx1}--\eqref{varepsilon:initial}, i.e.\ of the $\varepsilon$-level approximation.
\begin{proof} [Proof of \cref{thm:approxn}]
If we multiply \eqref{galerkin-approx3} by $u^N$, then as in \eqref{energy:tildeu}, we derive
\begin{multline}\label{energy:uN}
E^N[\rho ^N, q^N] + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^N|\mathbb{D}(u^N)|^2 + \lambda^N |\operatorname{div}u^N|^2\Big) + \delta\varepsilon \beta\int\limits_0^T\int\limits_{\Omega} (\rho^N)^{\beta-2}|\nabla \rho^N|^2
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} |u^N \times \nu|^2
\\+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^N(t)} |(u^N-P^N_{\mc{S}}u^N)\times \nu|^2
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^N_{\mc{S}}|u^N-P^N_{\mc{S}}u^N|^2 \leq \int\limits_0^T\int\limits_{\Omega}\rho^N g^N \cdot u^N
+ E^N_0,
\end{multline}
where
$$E^N[\rho ^N, q^N] = \int\limits_{\Omega}\Big(\frac{1}{2} \rho^N |u^N|^2 + \frac{a^N}{\gamma-1}(\rho^N)^{\gamma} + \frac{\delta}{\beta-1}(\rho^N)^{\beta}\Big).$$
Following the idea of the footnote in \cite[Page 368]{MR2084891}, the initial data $(\rho_0^N, u_0^N)$ is constructed in such a way that
\begin{equation*}
\rho_0^N \rightarrow \rho_0^{\varepsilon} \mbox{ in }W^{1,\infty}(\Omega),\quad \rho_0^N u_0^N \rightarrow q_0^{\varepsilon} \mbox{ in }L^{2}(\Omega)
\end{equation*}
and
\begin{equation} \label{lim}
\int\limits_{\Omega}\Bigg( \frac{1}{2}{\rho_0^N}|u_0^N|^2\mathds{1}_{\{\rho_0^N>0\}} + \frac{a^N}{\gamma-1}(\rho_0^N)^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^N)^{\beta} \Bigg) \rightarrow \int\limits_{\Omega}\Bigg(\frac{1}{2} \frac{|q_0^{\varepsilon}|^2}{\rho_0^{\varepsilon}}\mathds{1}_{\{\rho_0^{\varepsilon}>0\}} + \frac{a^{\varepsilon}}{\gamma-1}(\rho_0^{\varepsilon})^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^{\varepsilon})^{\beta} \Bigg)\mbox{ as }N\rightarrow \infty.
\end{equation}
Precisely, we approximate $q_0^{\varepsilon}$ by a sequence $q_0^N$ satisfying \eqref{initialcond} and such that
\eqref{lim} is valid. It is sufficient to take
$ u_{0}^N = P_N(\frac{q_0^{\varepsilon}}{\rho_0^{\varepsilon}})$, where $P_N$ denotes the orthogonal projection of $L^2(\Omega) \mbox { onto } X_N$. \cref{fa} remains valid with these new initial data. Therefore we can apply the arguments explained below to obtain \cref{thm:approxn}.
The construction of $\rho^N$ and \eqref{bounds-on-rho} imply that $\rho^N >0$. Thus the energy estimate \eqref {energy:uN} yields that up to a subsequence
\begin{enumerate}
\item $u^N\rightarrow u^{\varepsilon}$ weakly-$*$ in $L^{\infty}(0,T;L^2(\Omega))$ and weakly in $L^2(0,T;H^1(\Omega))$,
\item $\rho^N \rightarrow \rho^{\varepsilon}$ weakly-$*$ in $L^{\infty}(0,T; L^{\beta}(\Omega))$,
\item $\nabla\rho^N \rightarrow \nabla\rho^{\varepsilon}$ weakly in $L^{2}((0,T)\times\Omega)$.
\end{enumerate}
We follow an analysis similar to the fluid case explained in \cite[Section 7.8.1, Page 362]{MR2084891} to conclude that
\begin{itemize}
\item $\rho^N \rightarrow \rho^{\varepsilon}$ in $C([0,T]; L^{\beta}_{weak}(\Omega))$ and $\rho^N \rightarrow \rho^{\varepsilon}$ strongly in $L^p((0,T)\times \Omega)$, $\forall \ 1\leq p< \frac{4}{3}\beta$,
\item $\rho^N u^N \rightarrow \rho^{\varepsilon} u^{\varepsilon}$ weakly in $L^2(0,T; L^{\frac{6\beta}{\beta+6}}(\Omega))$ and weakly-$*$ in $L^{\infty}(0,T; L^{\frac{2\beta}{\beta+1}}(\Omega))$.
\end{itemize}
We also know that $\chi_{\mc{S}}^N$ is a bounded sequence in $ L^{\infty}((0,T)\times \mb{R}^3)$ satisfying
\eqref{galerkin-approx4} and $\{\rho^{N}\chi_{\mc{S}}^{N}\}$ is a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying
\eqref{N:approx5}. We use \cref{sequential1} to conclude
\begin{align}\label{xi}
\chi_{\mc{S}}^N \rightarrow \chi_{\mc{S}}^{\varepsilon} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) &\mbox{ and }\mbox{ strongly } \mbox{ in } C([0,T]; L^p_{loc}(\mb{R}^3)),\ \forall \ 1 \leq p <\infty,
\end{align}
with $\chi_{\mc{S}}^{\varepsilon}$ satisfying \eqref{varepsilon:approx4} along with \eqref{varepsilon:approx1}. Thus, we have recovered the transport equation for the body \eqref{varepsilon:approx4}. From \eqref{xi} and the definitions of $g^N$ and $g^{\varepsilon}$ in \eqref{gN} and \eqref{gepsilon}, it follows that
\begin{equation}\label{g}
g^N \rightarrow g^{\varepsilon} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ \forall \ 1 \leq p <\infty.
\end{equation}
These convergence results make it possible to pass to the limit $N\rightarrow \infty$ in \eqref{galerkin-approx2} to achieve \eqref{varepsilon:approx2}. Now we concentrate on the limit of the momentum equation \eqref{galerkin-approx3}. The four most difficult terms are:
\begin{align*}
A^N(t,e_k)&= \int\limits_{\partial \mc{S}^N(t)} [(u^N-P^N_{\mc{S}}u^N)\times \nu]\cdot [(e_k-P^N_{\mc{S}}e_k)\times \nu],\quad
B^N(t,e_k)= \int\limits_{\Omega} \rho^N u^N \otimes u^N : \nabla e_k,\\ C^N(t,e_k)&= \int\limits_{\Omega} \varepsilon \nabla u^N \nabla \rho^N \cdot e_k,\quad
D^N(t,e_k)= \int\limits_{\Omega} (\rho^N)^{\beta}\mathbb{I}: \mathbb{D}(e_k),\quad 1\leq k\leq N.
\end{align*}
To analyze the term $A^N(t,e_k)$, we do a change of variables to rewrite it in a fixed domain and use the convergence results from \cref{sequential1} for the projection and the isometric propagator:
\begin{align*}
&P_{\mc{S}}^N u^N \rightarrow P_{\mc{S}}^{\varepsilon} u^{\varepsilon} \mbox{ weakly-}* \mbox{ in }L^{\infty}(0,T; C^{\infty}_{loc}(\mb{R}^3)),\\
&\eta_{t,s}^N \rightarrow \eta_{t,s}^{\varepsilon} \mbox{ weakly-}* \mbox{ in }W^{1,\infty}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
We follow an analysis similar to that in \cite[Pages 2047--2048]{MR3272367} to conclude that $A^N$ converges weakly in $L^1(0,T)$ to
\begin{equation*}
A(t,e_k)= \int\limits_{\partial \mc{S}^{\varepsilon}(t)} [(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\times \nu]\cdot [(e_k-P^{\varepsilon}_{\mc{S}}e_{k})\times \nu].
\end{equation*}
We proceed as explained in the fluid case \cite[Section 7.8.2, Pages 363--365]{MR2084891} to analyze the limiting process for the other terms $B^N(t,e_k)$, $C^N(t,e_k)$, $D^N(t,e_k)$. The limit of $B^N(t,e_k)$ follows from the fact \cite[Equation (7.8.22), Page 364]{MR2084891} that
\begin{equation}\label{conv:convective}
\rho^N u^N \otimes u^N \rightarrow \rho^{\varepsilon} u^{\varepsilon} \otimes u^{\varepsilon} \mbox{ weakly in }L^2(0,T; L^{\frac{6\beta}{4\beta +3}}(\Omega)).
\end{equation}
To get the limit of $C^N(t,e_k)$, we use \cite[Equation (7.8.26), Page 365]{MR2084891}:
\begin{equation*}
\varepsilon\nabla u^N \nabla \rho^N \rightarrow \varepsilon\nabla u^{\varepsilon} \nabla \rho^{\varepsilon} \mbox{ weakly in }L^2(0,T; L^{\frac{5\beta-3}{4\beta}}(\Omega)),
\end{equation*}
and the limit of $D^N(t,e_k)$ is obtained by using \cite[Equation (7.8.8), Page 362]{MR2084891}:
\begin{equation}\label{conv:rho}
\rho^N \rightarrow \rho^{\varepsilon} \mbox{ strongly in }L^p((0,T)\times\Omega),\quad 1\leq p < \frac{4}{3}\beta.
\end{equation}
Thus, using the above convergence results for $B^N$, $C^N$, $D^N$ and the fact that
\begin{equation*}
\bigcup_{N}X_N\mbox{ is dense in }\left\{v\in W^{1,p}(\Omega) \mid v\cdot \nu=0\mbox{ on }\partial\Omega\right\}\mbox{ for any }p\in [1,\infty),
\end{equation*}
we conclude the following weak convergences in $L^1(0,T)$:
\begin{equation*}
B^N(t,\phi^N)\rightarrow B(t,\phi^{\varepsilon})= \int\limits_{\Omega} \rho^{\varepsilon} u^{\varepsilon} \otimes u^{\varepsilon} : \nabla \phi^{\varepsilon},
\end{equation*}
\begin{equation*}
C^N(t,\phi^N)\rightarrow C(t,\phi^{\varepsilon})=\int\limits_{\Omega} \varepsilon \nabla u^{\varepsilon} \nabla \rho^{\varepsilon} \cdot \phi^{\varepsilon},
\end{equation*}
\begin{equation*}
D^N(t,\phi^N)\rightarrow D(t,\phi^{\varepsilon})= \int\limits_{\Omega} (\rho^{\varepsilon})^{\beta}\mathbb{I}: \mathbb{D}(\phi^{\varepsilon}).
\end{equation*}
Thus we have achieved \eqref{varepsilon:approx2} as a limit of equation \eqref{galerkin-approx3} as $N\rightarrow \infty$. Hence, we have established the existence of a solution $(\mc{S}^{\varepsilon},\rho^{\varepsilon},u^{\varepsilon})$ to system \eqref{varepsilon:approx1}--\eqref{varepsilon:initial}. Now we establish energy inequality \eqref{energy-varepsilon} and estimates independent of $\varepsilon$:
\begin{itemize}
\item Notice that the solution $(\rho^N,u^N)$ of the Galerkin scheme satisfies \eqref{energy:uN} uniformly in $N$. The convergence of $\rho^N|u^N|^2$ in \eqref{conv:convective} and $\rho^N$ in \eqref{conv:rho} ensures that, up to the extraction of a subsequence,
\begin{equation*}
\int\limits_{\Omega}\Big(\frac{1}{2} \rho^N |u^N|^2 + \frac{a^N}{\gamma-1}(\rho^N)^{\gamma} + \frac{\delta}{\beta-1}(\rho^N)^{\beta}\Big) \rightarrow \int\limits_{\Omega}\Big(\frac{1}{2} \rho^{\varepsilon} |u^{\varepsilon}|^2 + \frac{a^{\varepsilon}}{\gamma-1}(\rho^{\varepsilon})^{\gamma} + \frac{\delta}{\beta-1}(\rho^{\varepsilon})^{\beta}\Big) \mbox{ as }N\rightarrow\infty.
\end{equation*}
\item Due to the weak lower semicontinuity of convex functionals, the weak convergence of $u^N$ in $L^2(0,T;H^1(\Omega))$, the strong convergence of $\chi_{\mc{S}}^N$ in $C([0,T];L^p(\Omega))$ and the strong convergence of $P_{\mc{S}}^N$ in $C([0,T]; C^{\infty}_{loc}(\mb{R}^3))$, we obtain
\begin{equation}\label{N1}
\int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\varepsilon}|\mathbb{D}(u^{\varepsilon})|^2 + \lambda^{\varepsilon}|\operatorname{div}u^{\varepsilon}|^2\Big) \leq\liminf_{N\rightarrow \infty}\int\limits_0^T\int\limits_{\Omega} \Big(2\mu^N|\mathbb{D}(u^N)|^2 + \lambda^N |\operatorname{div}u^N|^2\Big),
\end{equation}
\begin{equation}\label{N2}
\int\limits_0^T\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}|u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon}|^2\leq\liminf_{N\rightarrow \infty}\int\limits_0^T\int\limits_{\Omega} \chi^N_{\mc{S}}|u^N-P^N_{\mc{S}}u^N|^2.
\end{equation}
\item Using the fact that $\nabla\rho^N\rightarrow\nabla\rho^{\varepsilon}$ strongly in $L^2((0,T)\times\Omega)$ (by \cite[Equation (7.8.25), Page 365]{MR2084891}), the strong convergence of $\rho^N$ in \eqref{conv:rho} and Fatou's lemma, we have
\begin{equation}\label{N3}
\int\limits_0^T\int\limits_{\Omega} (\rho^{\varepsilon})^{\beta-2}|\nabla \rho^{\varepsilon}|^2 \leq \liminf_{N\rightarrow \infty}\int\limits_0^T\int\limits_{\Omega} (\rho^N)^{\beta-2}|\nabla \rho^N|^2.
\end{equation}
\item For passing to the limit in the boundary terms, we follow the idea of \cite{MR3272367}. Define the extended velocities $U^N$, $U_{\mc{S}}^N$ on the whole of $\mathbb{R}^3$, associated with $u^N$ and $P^N_{\mc{S}}u^N$ respectively. According to \cite[Lemma A.2]{MR3272367}, we have the weak convergence of $U^N$, $U_{\mc{S}}^N$ to $U^{\varepsilon}$, $U_{\mc{S}}^{\varepsilon}$ in $L^2(0,T;H^1_{loc}(\mathbb{R}^3))$. These facts, along with the lower semicontinuity of the $L^2$-norm, yield
\begin{align}
\int\limits_0^T\int\limits_{\partial \mc{S}^{\varepsilon}(t)} |(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\times \nu|^2 &=\int\limits_0^T\int\limits_{\partial\mc{S}_0} |(U^{\varepsilon}-U_{\mc{S}}^{\varepsilon})\times \nu|^2\notag\\ &\leq \liminf_{N\rightarrow\infty} \int\limits_0^T\int\limits_{\partial\mc{S}_0} |(U^{N}-U_{\mc{S}}^{N})\times \nu|^2 \leq \liminf_{N\rightarrow\infty} \int\limits_0^T\int\limits_{\partial \mc{S}^N(t)} |(u^N-P^N_{\mc{S}}u^N)\times \nu|^2\label{N4}.
\end{align}
Similar arguments also help us to obtain
\begin{equation}\label{N5}
\int\limits_0^T\int\limits_{\partial \Omega} |u^{\varepsilon} \times \nu|^2 \leq \liminf_{N\rightarrow\infty}\int\limits_0^T\int\limits_{\partial \Omega} |u^N \times \nu|^2.
\end{equation}
\item Regarding the term on the right-hand side of \eqref{energy:uN}, the weak convergence of $u^N$ in $L^2(0,T;H^1(\Omega))$, the strong convergence of $\rho^N$ in \eqref{conv:rho} and the strong convergence of $g^N$ in \eqref{g} yield
\begin{equation}\label{N6}
\int\limits_0^T\int\limits_{\Omega} \rho^N g^N \cdot u^N \rightarrow\int\limits_0^T\int\limits_{\Omega} \rho^{\varepsilon} g^{\varepsilon} \cdot u^{\varepsilon}, \;
\mbox{ as }N\rightarrow\infty.
\end{equation}
\end{itemize}
Thus, we have established energy inequality \eqref{energy-varepsilon}:
\begin{multline}\label{re:epsilon-energy}
E^{\varepsilon}[\rho ^{\varepsilon},q^{\varepsilon}]+ \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\varepsilon}|\mathbb{D}(u^{\varepsilon})|^2 + \lambda^{\varepsilon}|\operatorname{div}u^{\varepsilon}|^2\Big) + \delta\varepsilon \beta\int\limits_0^T\int\limits_{\Omega} (\rho^{\varepsilon})^{\beta-2}|\nabla \rho^{\varepsilon}|^2 \\
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} |u^{\varepsilon} \times \nu|^2
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\varepsilon}(t)} |(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\times \nu|^2
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}|u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon}|^2 \leq \int\limits_0^T\int\limits_{\Omega}\rho^{\varepsilon}
g^{\varepsilon} \cdot u^{\varepsilon}
+ E^{\varepsilon}_0,
\end{multline}
where
$$E^{\varepsilon}[\rho^{\varepsilon} ,q^{\varepsilon}] =\int\limits_{\Omega}\left(\frac{1}{2}\frac{|q^{\varepsilon}|^2}{\rho^{\varepsilon}} + \frac{a^{\varepsilon}}{\gamma-1}(\rho^{\varepsilon})^{\gamma} + \frac{\delta}{\beta-1}(\rho^{\varepsilon})^{\beta}\right).$$
We obtain as in \cite[Equation (7.8.14), Page 363]{MR2084891}:
\begin{equation*}
\partial_t\rho^{\varepsilon},\ \Delta \rho^{\varepsilon}\in {L^{\frac{5\beta-3}{4\beta}}((0,T)\times\Omega)}.
\end{equation*}
Regarding the $\sqrt{\varepsilon} \|\nabla \rho^{\varepsilon}\|_{L^2((0,T)\times\Omega)}$ estimate in \eqref{est:indofepsilon}, we have to multiply \eqref{varepsilon:approx2} by $\rho^{\varepsilon}$ and integrate by parts to obtain
\begin{equation*}
\frac{1}{2}\int\limits_{\Omega} |\rho^{\varepsilon}(t)|^2 + \varepsilon\int\limits_0^T\int\limits_{\Omega} |\nabla\rho^{\varepsilon}|^2 = \frac{1}{2}\int\limits_{\Omega} |\rho_0^{\varepsilon}|^2 - \frac{1}{2}\int\limits_0^T\int\limits_{\Omega} |\rho^{\varepsilon}|^2\operatorname{div} u^{\varepsilon} \leq \frac{1}{2}\int\limits_{\Omega} |\rho_0^{\varepsilon}|^2 + \sqrt{T}\,\|\rho^{\varepsilon}\|^2_{L^{\infty}(0,T;L^4(\Omega))}\|\operatorname{div}u^{\varepsilon}\|_{L^2(0,T;L^2(\Omega))}.
\end{equation*}
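Since $\beta\geq 8$, the right-hand side of this inequality is controlled uniformly in $\varepsilon$ by the energy estimate \eqref{energy-varepsilon}: $\rho^{\varepsilon}$ is bounded in $L^{\infty}(0,T;L^{\beta}(\Omega))\hookrightarrow L^{\infty}(0,T;L^{4}(\Omega))$, $\operatorname{div}u^{\varepsilon}$ is bounded in $L^{2}((0,T)\times\Omega)$, and $\rho_0^{\varepsilon}$ is bounded in $L^{2}(\Omega)$ by its construction. Consequently,
\begin{equation*}
\sqrt{\varepsilon}\,\|\nabla \rho^{\varepsilon}\|_{L^{2}((0,T)\times\Omega)} \leq C,
\end{equation*}
with $C$ independent of $\varepsilon$, which is the claimed estimate.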
Now, the pressure estimates $\|\rho^{\varepsilon}\|_{L^{\beta+1}((0,T)\times\Omega)}$ and $\|\rho^{\varepsilon}\|_{L^{\gamma+1}((0,T)\times\Omega)}$ in \eqref{est:indofepsilon} can be derived by means of the test function $\phi(t,x) = \psi(t)\Phi(t,x)$ with $\Phi(t,x)=\mc{B}[ \rho^{\varepsilon}-\overline{m}]$ in \eqref{varepsilon:approx3}, where
\begin{equation*}
\psi \in \mc{D}(0,T),\quad \overline{m}=|\Omega|^{-1}\int\limits_{\Omega} \rho^{\varepsilon},
\end{equation*}
and $\mc{B}$ is the Bogovskii operator related to $\Omega$ (for details about $\mc{B}$, see \cite[Section 3.3, Page 165]{MR2084891}). After taking this special test function and integrating by parts, we obtain
\begin{multline}\label{bogovski:mom}
\int\limits_0^T \psi\int\limits_{\Omega}\Big(a^{\varepsilon}(\rho^{\varepsilon})^{\gamma} + {\delta} (\rho^{\varepsilon})^{\beta}\Big) \rho^{\varepsilon}= \int\limits_0^T \psi\int\limits_{\Omega}\Big(a^{\varepsilon}(\rho^{\varepsilon})^{\gamma} + {\delta} (\rho^{\varepsilon})^{\beta}\Big) \overline{m} + \int\limits_0^T 2\psi\int\limits_{\Omega} \mu^{{\varepsilon}}\mathbb{D}(u^{\varepsilon}):\mathbb{D}(\Phi) + \int\limits_0^T \psi\int\limits_{\Omega}\lambda^{\varepsilon}\rho^{\varepsilon}\operatorname{div}u^{\varepsilon}\\- \overline{m}\int\limits_0^T \psi\int\limits_{\Omega}\lambda^{\varepsilon}\operatorname{div}u^{\varepsilon} + \int\limits_0^T \psi\int\limits_{\Omega} \varepsilon \nabla u^{\varepsilon} \nabla \rho^{\varepsilon} \cdot \Phi + \alpha \int\limits_0^T \psi\int\limits_{\partial \mc{S}^{\varepsilon}(t)} [(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\times \nu]\cdot [(\Phi-P^{\varepsilon}_{\mc{S}}\Phi)\times \nu] \\
+ \frac{1}{\delta}\int\limits_0^T \psi\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\cdot (\Phi-P^{\varepsilon}_{\mc{S}}\Phi) + \int\limits_0^T \psi\int\limits_{\Omega} \rho^{\varepsilon} g^{\varepsilon} \cdot \Phi.
\end{multline}
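Let us recall, for the reader's convenience, the two properties of the Bogovskii operator from \cite[Section 3.3]{MR2084891} that drive this computation: for $f\in L^p(\Omega)$ with $\int_{\Omega} f=0$,
\begin{equation*}
\operatorname{div}\mc{B}[f]=f \mbox{ in }\Omega \quad\mbox{ and }\quad \|\mc{B}[f]\|_{W^{1,p}_0(\Omega)}\leq c(p,\Omega)\,\|f\|_{L^{p}(\Omega)},\quad 1<p<\infty.
\end{equation*}
In particular, $\operatorname{div}\Phi = \rho^{\varepsilon}-\overline{m}$, which is what produces the extra power of the density on the left-hand side of \eqref{bogovski:mom}.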
We see that all the terms can be estimated as in \cite[Section 7.8.4, Pages 366--368]{MR2084891} except the penalization term. Using H\"{o}lder's inequality and the bounds from the energy estimate \eqref{energy-varepsilon}, the penalization term can be dealt with in the following way:
\begin{equation}\label{bogovski:extra}
\int\limits_0^T \psi\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})\cdot (\Phi-P^{\varepsilon}_{\mc{S}}\Phi) \leq |\psi|_{C[0,T]} \left(\int\limits_0^T\int\limits_{\Omega} \chi^{\varepsilon}_{\mc{S}}|(u^{\varepsilon}-P^{\varepsilon}_{\mc{S}}u^{\varepsilon})|^2\right)^{1/2}\|\Phi\|_{L^2((0,T)\times\Omega)}\leq C |\psi|_{C[0,T]},
\end{equation}
where in the last inequality we have used $\|\Phi\|_{L^2(\Omega)}\leq c\|\rho^{\varepsilon}\|_{L^2(\Omega)}$ and the energy inequality \eqref{energy-varepsilon}. Thus, we have an improved regularity of the density and we have established the required estimates of \eqref{est:indofepsilon}.
It remains to check the following fact: there exists $T$ small enough such that if $\operatorname{dist}(\mc{S}_0,\partial \Omega) > 2\sigma$, then
\begin{equation}\label{epsilon-collision}
\operatorname{dist}(\mc{S}^{\varepsilon}(t),\partial \Omega) \geq 2\sigma> 0 \quad \forall \ t\in [0,T].
\end{equation}
For this, it suffices to establish the following bound:
\begin{equation}\label{equivalent-Tag}
\sup_{t\in [0,T]}|\partial_t \eta^{\varepsilon}_{t,0}(t,y)| < \frac{ \operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma}{T},\quad y\in \mc{S}_0.
\end{equation}
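To see why \eqref{equivalent-Tag} is sufficient, note that $\mc{S}^{\varepsilon}(t)=\eta^{\varepsilon}_{t,0}(\mc{S}_0)$ with $\eta^{\varepsilon}_{0,0}=\operatorname{id}$, so that for every $y\in \mc{S}_0$ and $t\in[0,T]$,
\begin{equation*}
|\eta^{\varepsilon}_{t,0}(t,y)-y| \leq \int\limits_0^t |\partial_s \eta^{\varepsilon}_{s,0}(s,y)|\, ds \leq T\sup_{s\in [0,T]}|\partial_s \eta^{\varepsilon}_{s,0}(s,y)| < \operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma,
\end{equation*}
and hence $\operatorname{dist}(\mc{S}^{\varepsilon}(t),\partial \Omega) \geq \operatorname{dist}(\mc{S}_0,\partial\Omega) - \big(\operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma\big) = 2\sigma$.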
We show as in Step 3 of the proof of \cref{fa} that (see \eqref{no-collision}--\eqref{18:39}):
\begin{equation}\label{00:32}
|\partial_t \eta^{\varepsilon}_{t,0}(t,y)| \leq |(h{^\varepsilon})'(t)| + |\omega^{\varepsilon}(t)||y-h^{\varepsilon}(t)|\leq C_0\left(\int\limits_{\Omega} \rho^{\varepsilon} |u^{\varepsilon}(t)|^2\right)^{1/2},
\end{equation}
where $C_0=\sqrt{2}\,\frac{\max\{1,|y-h^{\varepsilon}(t)|\}}{\min\{1,\lambda_0\}^{1/2}}$. Moreover, the energy estimate \eqref{re:epsilon-energy} yields
\begin{multline*}
\frac{d}{dt}E^{\varepsilon}[\rho ^{\varepsilon},q^{\varepsilon}]+ \int\limits_{\Omega} \Big(2\mu^{\varepsilon}|\mathbb{D}(u^{\varepsilon})|^2 + \lambda^{\varepsilon}|\operatorname{div}u^{\varepsilon}|^2\Big) \leq \int\limits_{\Omega}\rho^{\varepsilon}
g^{\varepsilon} \cdot u^{\varepsilon}\\
\leq E^{\varepsilon}[\rho ^{\varepsilon},q^{\varepsilon}] + \frac{1}{2\gamma_1}\left(\frac{\gamma -1}{2\gamma}\right)^{\gamma_1/\gamma}\|g^{\varepsilon}\|^{2\gamma_1}_{L^{\frac{2\gamma}{\gamma -1}}(\Omega)},
\end{multline*}
with $\gamma_1=1-\frac{1}{\gamma}$, which implies
\begin{equation}\label{00:33}
E^{\varepsilon}[\rho ^{\varepsilon},q^{\varepsilon}] \leq e^{{T}}E^{\varepsilon}_0 + C{T} \|g^{\varepsilon}\|^{2\gamma_1}_{L^{\infty}((0,T)\times\Omega)}.
\end{equation}
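For completeness, the Gr\"onwall step behind \eqref{00:33} reads as follows: setting $G(t)=\frac{1}{2\gamma_1}\left(\frac{\gamma -1}{2\gamma}\right)^{\gamma_1/\gamma}\|g^{\varepsilon}(t)\|^{2\gamma_1}_{L^{\frac{2\gamma}{\gamma -1}}(\Omega)}$ and dropping the nonnegative dissipation, the differential inequality gives
\begin{equation*}
E^{\varepsilon}[\rho^{\varepsilon},q^{\varepsilon}](t) \leq e^{t}E^{\varepsilon}_0 + \int\limits_0^t e^{t-s}G(s)\, ds \leq e^{T}E^{\varepsilon}_0 + C{T}\|g^{\varepsilon}\|^{2\gamma_1}_{L^{\infty}((0,T)\times\Omega)},
\end{equation*}
where the constant $C$ absorbs $e^{T}$ and the embedding constant of $L^{\infty}(\Omega)$ into $L^{\frac{2\gamma}{\gamma-1}}(\Omega)$.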
Thus, with the help of \eqref{equivalent-Tag} and \eqref{00:32}--\eqref{00:33}, we can conclude that for any $T$ satisfying
\begin{equation*}
T < \frac{ \operatorname{dist}(\mc{S}_0,\partial\Omega) - 2\sigma}{C_0 \left[e^{{T}}E^{\varepsilon}_0 + C{T} \|g^{\varepsilon}\|^{2\gamma_1}_{L^{\infty}((0,T)\times\Omega)}\right]^{1/2}},
\end{equation*}
the relation \eqref{epsilon-collision} holds.
This completes the proof of \cref{thm:approxn}.
\end{proof}
\subsection{Vanishing dissipation in the continuity equation and the limiting system}\label{14:18}
In this section, we prove \cref{thm:approxn-delta} by taking $\varepsilon\rightarrow 0$ in the system \eqref{varepsilon:approx1}--\eqref{varepsilon:initial}. In order to do so, we have to deal with the problem of identifying the pressure corresponding to the limiting density.
First of all, following the idea of the footnote in \cite[Page 381]{MR2084891}, the initial data $(\rho_0^{\varepsilon}, q_0^{\varepsilon})$ is constructed in such a way that
\begin{equation*}
\rho_0^{\varepsilon}>0,\quad \rho_0^{\varepsilon} \in W^{1,\infty}(\Omega),\quad \rho_0^{\varepsilon} \rightarrow \rho_0^{\delta} \mbox{ in }L^{\beta}(\Omega),\quad q_0^{\varepsilon} \rightarrow q_0^{\delta} \mbox{ in }L^{\frac{2\beta}{\beta + 1}}(\Omega)
\end{equation*}
and
\begin{equation*}
\int\limits_{\Omega}\Bigg( \frac{|q_0^{\varepsilon}|^2}{\rho_0^{\varepsilon}}\mathds{1}_{\{\rho_0^{\varepsilon}>0\}} + \frac{a}{\gamma-1}(\rho_0^{\varepsilon})^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^{\varepsilon})^{\beta} \Bigg) \rightarrow \int\limits_{\Omega}\Bigg( \frac{|q_0^{\delta}|^2}{\rho_0^{\delta}}\mathds{1}_{\{\rho_0^{\delta}>0\}} + \frac{a}{\gamma-1}(\rho_0^{\delta})^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^{\delta})^{\beta} \Bigg)\mbox{ as }{\varepsilon}\rightarrow 0.
\end{equation*}
More precisely, let $(\rho^{\delta}_0,q^{\delta}_0)$ satisfy \eqref{rhonot}--\eqref{qnot}; then, following \cite[Section 7.10.7, Page 392]{MR2084891}, we can find $\rho^{\varepsilon}_{0} \in W^{1,\infty}(\Omega)$, $\rho^{\varepsilon}_0 > 0$, by defining
\begin{equation*}
\rho^{\varepsilon}_{0}= \mc{K}_{\varepsilon}(\rho^{\delta}_{0}) + \varepsilon,
\end{equation*}
where $\mc{K}_{\varepsilon}$ is the standard regularizing operator in the space variable.
Then our initial density satisfies
\begin{equation*}
\rho^{\varepsilon}_{0} \to \rho^{\delta}_{0} \mbox{ strongly in } L^{\beta}(\Omega).
\end{equation*}
We define
\begin{align*}
\overline{{q}^{\varepsilon}_0}= \begin{cases} q_{0}^{\delta}\sqrt{\frac{\rho^{\varepsilon}_{0}}{\rho_{0}^{\delta}}} &\mbox { if } \rho^{\delta}_{0} >0,\\
0 &\mbox { if } \rho^{\delta}_{0} =0.
\end{cases}
\end{align*}
From \eqref{qnot}, we know that
\begin{equation*}
\frac{|\overline{{q}^{\varepsilon}_0}|}{\sqrt{\rho^{\varepsilon}_{0}}} \in { L^2(\Omega)}.
\end{equation*}
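Indeed, directly from the definition of $\overline{q^{\varepsilon}_0}$,
\begin{equation*}
\frac{|\overline{q^{\varepsilon}_0}|^2}{\rho^{\varepsilon}_{0}} = \frac{|q^{\delta}_0|^2}{\rho^{\delta}_{0}}\,\mathds{1}_{\{\rho^{\delta}_{0}>0\}} \quad \mbox{ in }\Omega,
\end{equation*}
so the claim is a restatement of the integrability granted by \eqref{qnot}.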
Due to a density argument, there exists
$h^{\varepsilon} \in W^{1,\infty}({\Omega})$ such that
\begin{equation*}
\left\|\frac{\overline{q^{\varepsilon}_0}}{\sqrt{\rho^{\varepsilon}_{0}}} -h^{\varepsilon} \right\|_{L^2(\Omega)}< \varepsilon.
\end{equation*}
Now, we set $ q^{\varepsilon}_0= h^{\varepsilon}\sqrt{\rho^{\varepsilon}_{0}}$, which implies that
\begin{equation*}
q^{\varepsilon}_0 \to q_{0}^{\delta} \mbox { in } L^{\frac{2\beta}{\beta +1}}(\Omega),
\end{equation*}
and
\begin{equation*}
E^{\varepsilon}_0 \to E^{\delta}_0.
\end{equation*}
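A sketch of the verification: by H\"older's inequality with $\frac{\beta+1}{2\beta}=\frac{1}{2}+\frac{1}{2\beta}$ and the defining property of $h^{\varepsilon}$,
\begin{equation*}
\left\|q^{\varepsilon}_0-\overline{q^{\varepsilon}_0}\right\|_{L^{\frac{2\beta}{\beta+1}}(\Omega)} = \left\|\Big(h^{\varepsilon}-\frac{\overline{q^{\varepsilon}_0}}{\sqrt{\rho^{\varepsilon}_{0}}}\Big)\sqrt{\rho^{\varepsilon}_{0}}\right\|_{L^{\frac{2\beta}{\beta+1}}(\Omega)} \leq \varepsilon\,\|\rho^{\varepsilon}_{0}\|^{1/2}_{L^{\beta}(\Omega)},
\end{equation*}
while $\overline{q^{\varepsilon}_0}\rightarrow q^{\delta}_0$ in $L^{\frac{2\beta}{\beta+1}}(\Omega)$ by the strong convergence of $\rho^{\varepsilon}_0$ and dominated convergence; the convergence of the initial energies then follows in the same manner.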
\begin{proof} [Proof of \cref{thm:approxn-delta}] The estimates \eqref{energy-varepsilon} and \eqref{est:indofepsilon} help us to conclude that, up to an extraction of a subsequence, we have
\begin{align}
& u^{\varepsilon}\rightarrow u^{\delta}\mbox{ weakly in }L^2(0,T; H^1(\Omega)),\label{conv1}\\
&\rho^{\varepsilon}\rightarrow \rho^{\delta}\mbox{ weakly in }L^{\beta+1}((0,T)\times \Omega),\mbox{ weakly-}*\mbox{ in } L^{\infty}(0,T;L^{\beta}(\Omega)),\label{conv2}\\
& (\rho^{\varepsilon})^{\gamma}\rightarrow \overline{ (\rho^{\delta})^{\gamma}}\mbox{ weakly in }L^{\frac{\beta+1}{\gamma}}((0,T)\times\Omega),\label{conv3}\\
& (\rho^{\varepsilon})^{\beta}\rightarrow \overline{ (\rho^{\delta})^{\beta}}\mbox{ weakly in } L^{\frac{\beta+1}{\beta}}((0,T)\times\Omega),\label{conv4}\\
& \varepsilon\nabla\rho^{\varepsilon}\rightarrow 0 \mbox{ strongly in }L^2((0,T)\times \Omega)\label{conv5}
\end{align}
as $\varepsilon\to 0$. Below, we also denote by $\left(\rho^{\delta},u^{\delta}, \overline{ (\rho^{\delta})^{\gamma}},\overline{ (\rho^{\delta})^{\beta}}\right)$ the extensions of the corresponding quantities to $(0,T)\times \mb{R}^3$.
\underline{Step 1: Limit of the transport equation.}
We obtain from \cref{thm:approxn} that $\rho^{\varepsilon}$ satisfies \eqref{varepsilon:approx2}, $\{u^{\varepsilon},\chi_{\mc{S}}^{\varepsilon}\}$ is a bounded sequence in $L^{2}(0,T; H^1(\Omega)) \times L^{\infty}((0,T)\times \mb{R}^3)$ satisfying \eqref{varepsilon:approx4}
and $\{\rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon}\}$ is a bounded sequence in $L^{\infty}((0,T)\times\mathbb{R}^3)$ satisfying \eqref{varepsilon:approx5}. Thus, we can use \cref{sequential-varepsilon} to conclude that up to a subsequence:
\begin{align}\label{13:02}
&\chi_{\mc{S}}^{\varepsilon} \rightarrow \chi_{\mc{S}}^{\delta} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1 \leq p < \infty), \\
\label{18:11}
& \rho^{\varepsilon}\chi_{\mc{S}}^{\varepsilon} \rightarrow \rho^{\delta}\chi_{\mc{S}}^{\delta} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1 \leq p < \infty),
\end{align}
with $\chi_{\mc{S}}^{\delta}$ and $\rho^{\delta}\chi_{\mc{S}}^{\delta}$ satisfying \eqref{approx4} and \eqref{approx5} respectively. Moreover,
\begin{equation}\label{13:01}
P_{\mc{S}}^{\varepsilon} u^{\varepsilon} \rightarrow P_{\mc{S}}^{\delta} u^{\delta} \mbox{ weakly } \mbox{in }L^{2}(0,T; C^{\infty}_{loc}(\mb{R}^3)).
\end{equation}
Hence, we have recovered the regularity of $\chi_{\mc{S}}^{\delta}$ in \eqref{approx1} and the transport equations \eqref{approx4} and \eqref{approx5} as $\varepsilon\rightarrow 0$.
\underline{Step 2: Limit of the continuity and the momentum equation.}
We follow the ideas of \cite[Auxiliary lemma 7.49]{MR2084891} to conclude: if $\rho^{\delta}, u^{\delta}, \overline{ (\rho^{\delta})^{\gamma}}, \overline{ (\rho^{\delta})^{\beta}}$ are defined by \eqref{conv1}--\eqref{conv4}, we have
\begin{itemize}
\item $(\rho^{\delta},u^{\delta})$ satisfies:
\begin{equation}\label{rho:delta}
\frac{\partial {\rho}^{\delta}}{\partial t} + \operatorname{div}({\rho}^{\delta} u^{\delta}) =0 \mbox{ in }\mc{D}'([0,T)\times \mb{R}^3).
\end{equation}
\item For all $\phi \in H^1(0,T; L^{2}(\Omega)) \cap L^r(0,T; W^{1,{r}}(\Omega))$, where $r=\max\left\{\beta+1, \frac{\beta+\theta}{\theta}\right\}$, $\beta \geq \max\{8,\gamma\}$ and $\theta=\frac{2}{3}\gamma -1$ with $\phi\cdot\nu=0$ on $\partial\Omega$ and $\phi|_{t=T}=0$, the following holds:
\begin{multline}\label{mom:delta}
- \int\limits_0^T\int\limits_{\Omega} \rho^{\delta} \left(u^{\delta}\cdot \frac{\partial}{\partial t}\phi + u^{\delta} \otimes u^{\delta} : \nabla \phi\right) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\delta}\mathbb{D}(u^{\delta}):\mathbb{D}(\phi) + \lambda^{\delta}\operatorname{div}u^{\delta}\mathbb{I} : \mathbb{D}(\phi) - \left(a^{\delta}\overline{ (\rho^{\delta})^{\gamma}}+\delta \overline{ (\rho^{\delta})^{\beta}}\right)\mathbb{I}: \mathbb{D}(\phi)\Big) \\
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} (u^{\delta} \times \nu)\cdot (\phi \times \nu) + \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} \left[(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\times \nu\right]\cdot \left[(\phi-P^{\delta}_{\mc{S}}\phi)\times \nu\right] \\
+ \frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\delta}_{\mc{S}}(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\cdot (\phi-P^{\delta}_{\mc{S}}\phi) = \int\limits_0^T\int\limits_{\Omega}\rho^{\delta} g^{\delta} \cdot \phi
+ \int\limits_{\Omega} (\rho^{\delta} u^{\delta} \cdot \phi)(0).
\end{multline}
\item The couple $(\rho^{\delta},u^{\delta})$ satisfies the identity
\begin{equation}\label{renorm:delta}
\partial_t b(\rho^{\delta}) + \operatorname{div}(b(\rho^{\delta})u^{\delta})+[b'(\rho^{\delta})\rho^{\delta} - b(\rho^{\delta})]\operatorname{div}u^{\delta}=0 \mbox{ in }\mc{D}'([0,T)\times \mb{R}^3),
\end{equation}
with any $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfying
\eqref{eq:b}.
\item $\rho^{\delta} \in C([0,T];L^p(\Omega))$, $1\leq p< \beta$.
\end{itemize}
We outline the main steps of the proof of the result mentioned above. We prove \eqref{rho:delta} by passing to the limit $\varepsilon\rightarrow 0$ in equation \eqref{varepsilon:approx2} with the help of the convergence of the density in \eqref{conv2} and \eqref{conv5} and the convergence of the momentum \cite[Section 7.9.1, Page 370]{MR2084891}
\begin{align}\label{product1}
\rho^{\varepsilon}u^{\varepsilon}\rightarrow \rho^\delta u^\delta \mbox{ weakly-}* \mbox{ in }L^{\infty}(0,T;L^{\frac{2\beta}{\beta+1}}(\Omega)),\mbox{ weakly in }L^2(0,T;L^{\frac{6\beta}{\beta+6}}(\Omega)).
\end{align}
We obtain identity \eqref{mom:delta}, corresponding to the momentum equation, by passing to the limit in \eqref{varepsilon:approx3}. To pass to the limit, we use the convergences of the density and the velocity \eqref{conv1}--\eqref{conv4} and of the transport part \eqref{13:02}--\eqref{13:01} along with the convergence of the product of the density and the velocity \eqref{product1} and the convergence of the following terms \cite[Section 7.9.1, page 371]{MR2084891}:
\begin{align*}
&\rho^{\varepsilon}u^{\varepsilon}_i u^{\varepsilon}_j \rightarrow \rho^\delta u^\delta_i u^\delta_j\mbox{ weakly in }L^2(0,T;L^{\frac{6\beta}{4\beta + 3}}(\Omega)), \quad i,j=1,2,3,\\
&\varepsilon (\nabla \rho^{\varepsilon}\cdot \nabla)u^{\varepsilon}\rightarrow 0 \mbox{ weakly in }L^{\frac{5\beta-3}{4\beta}}((0,T)\times\Omega).
\end{align*}
Since we have already established the continuity equation \eqref{rho:delta} and the function $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfies \eqref{eq:b}, the renormalized continuity equation \eqref{renorm:delta} follows from the application of \cite[Lemma 6.9, Page 307]{MR2084891}. Moreover, the regularity of the density $\rho^{\delta} \in C([0,T];L^p(\Omega))$, $1\leq p< \beta$, follows from \cite[Lemma 6.15, Page 310]{MR2084891} via an appropriate choice of the renormalization function $b$ in \eqref{renorm:delta} and with the help of the regularities $\rho^\delta\in L^{\infty}(0,T; L^{\beta}_{loc}(\mb{R}^3)) \cap C([0,T];L^{\beta}_{loc}(\Omega))$ and $u^\delta\in L^2(0,T; H^1_{loc}(\mb{R}^3))$. Hence we have established the continuity equation \eqref{approx2} and the renormalized one \eqref{rho:renorm1}.
\underline{Step 3: Limit of the pressure term.}
In this step, our aim is to identify the term $\left(\overline{ (\rho^{\delta})^{\gamma}}+\delta \overline{ (\rho^{\delta})^{\beta}}\right)$ by showing that $\overline{ (\rho^{\delta})^{\gamma}}=(\rho^{\delta})^{\gamma}$ and $\overline{ (\rho^{\delta})^{\beta}}=(\rho^{\delta})^{\beta}$. To prove this, we need some compactness of $\rho^{\varepsilon}$, which is not available. However, the quantity $(\rho^{\varepsilon})^{\gamma}+\delta (\rho^{\varepsilon})^{\beta}-(2\mu+\lambda)\rho^{\varepsilon}\mathrm{div}u^{\varepsilon}$, called the ``effective viscous flux'', possesses a convergence property that helps us to identify the limit of our required quantity. We have the following weak and weak-$*$ convergences from the boundedness of the corresponding norms \cite[Section 7.9.2, Page 373]{MR2084891}:
\begin{align}
\rho^{\varepsilon}\mathrm{div}u^{\varepsilon}\rightarrow \overline{\rho^{\delta}\mathrm{div}u^{\delta}}&\mbox{ weakly in }L^2(0,T;L^{\frac{2\beta}{2+\beta}}(\Omega)),\label{conv6}\\
(\rho^{\varepsilon})^{\gamma+1}\rightarrow \overline{ (\rho^{\delta})^{\gamma+1}}&\mbox{ weakly-}*\mbox{ in } [C((0,T)\times\Omega)]',\label{conv7}\\
(\rho^{\varepsilon})^{\beta+1}\rightarrow \overline{ (\rho^{\delta})^{\beta+1}}&\mbox{ weakly-}*\mbox{ in } [C((0,T)\times\Omega)]'\label{conv8}.
\end{align}
We apply the following result regarding the ``effective viscous flux'' from \cite[Lemma 7.50, Page 373]{MR2084891}:
Let $u^{\delta}$, $\rho^{\delta}$, $\overline{ (\rho^{\delta})^{\gamma}}$, $\overline{ (\rho^{\delta})^{\beta}}$, $\overline{ (\rho^{\delta})^{\gamma+1}}$, $\overline{ (\rho^{\delta})^{\beta+1}}$, $\overline{\rho^{\delta}\mathrm{div}u^{\delta}}$ be defined in \eqref{conv1}--\eqref{conv4}, \eqref{conv6}--\eqref{conv8}. Then we have
\begin{align}
&\overline{ (\rho^{\delta})^{\gamma+1}}\in L^{\frac{\beta+1}{\gamma+1}}((0,T)\times\Omega),\quad \overline{ (\rho^{\delta})^{\beta+1}} \in L^1((0,T)\times \Omega),\label{effective1}\\
&\overline{ (\rho^{\delta})^{\gamma+1}} + \delta\overline{ (\rho^{\delta})^{\beta+1}} - (2\mu+\lambda)\overline{\rho^{\delta}\mathrm{div}u^{\delta}}=\overline{ (\rho^{\delta})^{\gamma}}\rho^{\delta} + \delta\overline{ (\rho^{\delta})^{\beta}}\rho^{\delta} - (2\mu+\lambda)\rho^{\delta}\mathrm{div}u^{\delta}\mbox{ a.e. in }(0,T)\times\Omega \label{effective2}.
\end{align}
Using the above relations \eqref{effective1}--\eqref{effective2} and an appropriate choice of the renormalization function in \eqref{renorm:delta}, we deduce the strong convergence of the density as in \cite[Lemma 7.51, Page 375]{MR2084891}: Let $\rho^{\delta}$, $\overline{ (\rho^{\delta})^{\gamma}}$, $\overline{ (\rho^{\delta})^{\beta}}$, $\overline{ (\rho^{\delta})^{\gamma+1}}$, $\overline{ (\rho^{\delta})^{\beta+1}}$ be defined in \eqref{conv2}--\eqref{conv4} and \eqref{conv7}--\eqref{conv8}. Then we have
\begin{equation*}
\overline{ (\rho^{\delta})^{\gamma}}= (\rho^{\delta})^{\gamma},\quad \overline{ (\rho^{\delta})^{\beta}}= (\rho^{\delta})^{\beta} \mbox{ a.e. in }(0,T)\times\Omega.
\end{equation*}
In particular,
\begin{equation}\label{strong:rhoepsilon}
\rho^{\varepsilon}\rightarrow \rho^{\delta}\mbox{ strongly in }L^p((0,T)\times\Omega),\ 1\leq p < \beta+1.
\end{equation}
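The full range of exponents in \eqref{strong:rhoepsilon} follows by interpolation: once strong convergence is obtained in $L^{1}((0,T)\times\Omega)$ (a consequence of the identification of the pressure and the strict convexity of $s\mapsto s^{\gamma}$), the uniform $L^{\beta+1}$ bound from \eqref{conv2} upgrades it via
\begin{equation*}
\|\rho^{\varepsilon}-\rho^{\delta}\|_{L^{p}((0,T)\times\Omega)} \leq \|\rho^{\varepsilon}-\rho^{\delta}\|^{\theta}_{L^{1}((0,T)\times\Omega)}\,\|\rho^{\varepsilon}-\rho^{\delta}\|^{1-\theta}_{L^{\beta+1}((0,T)\times\Omega)},\quad \frac{1}{p}=\theta+\frac{1-\theta}{\beta+1},
\end{equation*}
for every $1\leq p<\beta+1$.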
Thus, we have identified the pressure term in equation \eqref{mom:delta}. Hence, we have recovered the momentum equation \eqref{approx3} and we have proved the existence of a weak solution $(\mc{S}^{\delta},\rho^{\delta},u^{\delta})$ to system \eqref{approx1}--\eqref{approx:initial}. It remains to prove the energy inequality \eqref{10:45} and the improved regularity for the density \eqref{rho:improved}.
\underline{Step 4: Energy inequality and improved regularity of the density.} Due to the convergences
\begin{align*}
&\rho^{\varepsilon}u^{\varepsilon}_i u^{\varepsilon}_j \rightarrow \rho^\delta u^{\delta}_i u^{\delta}_j \mbox{ weakly in }L^2(0,T;L^{\frac{6\beta}{4\beta + 3}}(\Omega)), \quad i,j=1,2,3,\\
&\rho^{\varepsilon}\rightarrow \rho^{\delta}\mbox{ strongly in }L^p((0,T)\times\Omega),\ 1\leq p < \beta+1,
\end{align*}
we have
\begin{align*}
&\int\limits_{\Omega} \rho^{\varepsilon}|u^{\varepsilon}|^2 \rightarrow \int\limits_{\Omega} \rho^{\delta}|u^{\delta}|^2 \mbox{ weakly in }L^2(0,T),\\
&\int\limits_{\Omega} \left((\rho^{\varepsilon})^{\gamma} +\delta(\rho^{\varepsilon})^{\beta}\right) \rightarrow \int\limits_{\Omega} \left((\rho^{\delta})^{\gamma} +\delta(\rho^{\delta})^{\beta}\right) \mbox{ weakly in }L^{\frac{\beta+1}{\beta}}(0,T).
\end{align*}
In particular,
\begin{equation*}
\int\limits_{\Omega}\Big( \frac{1}{2}\rho^{\varepsilon} |u^{\varepsilon}|^2 + \frac{a^{\varepsilon}}{\gamma-1}(\rho^{\varepsilon})^{\gamma} + \frac{\delta}{\beta-1}(\rho^{\varepsilon})^{\beta}\Big) \rightarrow \int\limits_{\Omega}\Big( \frac{1}{2}\rho^{\delta} |u^{\delta}|^2 + \frac{a^{\delta}}{\gamma-1}(\rho^{\delta})^{\gamma} + \frac{\delta}{\beta-1}(\rho^{\delta})^{\beta}\Big) \mbox{ as }\varepsilon\rightarrow 0.
\end{equation*}
Due to the weak lower semicontinuity of the corresponding $L^2$ norms, the weak convergence of $u^{\varepsilon}$ in $L^2(0,T;H^1(\Omega))$, the strong convergence of $\rho^{\varepsilon}$ in $L^p((0,T)\times\Omega),\ 1\leq p < \beta+1$, the strong convergence of
$\chi_{\mc{S}}^{\varepsilon}$ in $C([0,T];L^p(\Omega))$ and the strong convergence of
$P_{\mc{S}}^{\varepsilon}$ in $C([0,T]; C^{\infty}_{loc}(\mb{R}^3))$, we follow the idea explained in \eqref{N1}--\eqref{N6} to pass to the limit as $\varepsilon \rightarrow 0$ in the other terms of inequality \eqref{energy-varepsilon} to establish the energy inequality \eqref{10:45}.
To establish the regularity \eqref{rho:improved}, we use an appropriate test function of the type $$\mc{B}\left((\rho^{\delta})^{\theta} - |\Omega|^{-1}\int\limits_{\Omega}(\rho^{\delta})^{\theta}\right)$$
in the momentum equation \eqref{approx3}, where $\mc{B}$ is the Bogovskii operator. The detailed proof follows the lines of \cite[Section 7.9.5, Pages 376--381]{MR2084891}, and the extra terms can be treated as already explained in \eqref{bogovski:mom}--\eqref{bogovski:extra}. Moreover, we follow the same idea as in the proof of \cref{thm:approxn} (precisely, the calculations in \eqref{epsilon-collision}--\eqref{00:33}) to conclude that there exists $T$ small enough such that if $\operatorname{dist}(\mc{S}_0,\partial \Omega) > 2\sigma$, then
\begin{equation}\label{delta-collision}
\operatorname{dist}(\mc{S}^{\delta}(t),\partial \Omega) \geq 2\sigma> 0 \quad \forall \ t\in [0,T].
\end{equation}
This settles the proof of \cref{thm:approxn-delta}.
\end{proof}
\section{Proof of the main result}\label{S4}
We have already established the existence of a weak solution $(\mc{S}^{\delta},\rho^{\delta},u^{\delta})$ to system \eqref{approx1}--\eqref{approx:initial} in \cref{thm:approxn-delta}. In this section, we analyze the limiting behaviour of the solution as $\delta\rightarrow 0$ and recover a weak solution to system \eqref{mass:comfluid}--\eqref{initial cond:comp}, i.e., we prove \cref{exist:upto collision}.
\begin{proof} [Proof of \cref{exist:upto collision}]
\underline{Step 0: Initial data.}
We consider initial data $\rho_{\mc{F}_0}$, $q_{\mc{F}_0}$, $\rho_{\mc{S}_0}$, $q_{\mc{S}_0}$ satisfying the conditions \eqref{init}--\eqref{init2}. In this step we present the construction of the approximate initial data $(\rho^{\delta}_0,q^{\delta}_0)$ satisfying \eqref{rhonot}--\eqref{qnot} so that, in the limit $\delta\rightarrow 0$, we can recover the initial data $\rho_{\mc{F}_0}$ and $q_{\mc{F}_0}$ on $\mc{F}_0$.
We set
\begin{equation*}
\rho_0 = \rho_{\mc{F}_0}(1-\mathds{1}_{\mc{S}_0}) + \rho_{\mc{S}_0}\mathds{1}_{\mc{S}_0},
\end{equation*}
\begin{equation*}
q_0 = q_{\mc{F}_0}(1-\mathds{1}_{\mc{S}_0}) + \rho_{\mc{S}_0}u_{\mc{S}_0}\mathds{1}_{\mc{S}_0}.
\end{equation*}
Similarly as in \cite[Section 7.10.7, Page 392]{MR2084891}, we can find $\rho^{\delta}_{0} \in L^{\beta}(\Omega)$ by defining
\begin{equation}\label{init-apprx1}
\rho^{\delta}_{0}= \mc{K}_{\delta}(\rho_{0}) + \delta,
\end{equation}
where $\mc{K}_{\delta}$ is the standard regularizing operator in the space variable.
Then our initial density satisfies
\begin{equation}\label{ini-rho}
\rho^{\delta}_{0} \to \rho_{0} \mbox{ strongly in } L^{\gamma}(\Omega).
\end{equation}
We define
\begin{align} \label{init-apprx2}
\overline{{q}^{\delta}_0}= \begin{cases} q_{0}\sqrt{\frac{\rho^{\delta}_{0}}{\rho_{0}}} &\mbox { if } \rho _{0} >0,\\
0 \quad \quad \quad \quad \quad &\mbox { if } \rho _{0} =0.
\end{cases}
\end{align}
From \eqref{init1}, we know that
\begin{equation*}
\frac{|\overline{{q}^{\delta}_0}|}{\sqrt{\rho^{\delta}_{0}}} \in { L^2(\Omega)}.
\end{equation*}
Due to a density argument, there exists
$h^{\delta} \in W^{1,\infty}({\Omega})$ such that
\begin{equation*}
\left\|\frac{\overline{q^{\delta}_0}}{\sqrt{\rho^{\delta}_{0}}} -h^{\delta} \right\|_{L^2(\Omega)}< \delta.
\end{equation*}
Now, we set $ q^{\delta}_0= h^{\delta}\sqrt{\rho^{\delta}_{0}}$, which implies that
\begin{equation*}
q^{\delta}_0 \to q_{0} \mbox { in } L^{\frac{2\gamma}{\gamma +1}}(\Omega)
\end{equation*}
and
\begin{equation*}
E^{\delta} [\rho ^{\delta}_0, q^{\delta}_0] \to E[\rho_0,q_0].
\end{equation*}
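For the reader's convenience, here is a sketch of why $q^{\delta}_0\to q_0$ in $L^{\frac{2\gamma}{\gamma+1}}(\Omega)$: writing $g:=\frac{q_0}{\sqrt{\rho_0}}$ (set to $0$ where $\rho_0=0$) and using H\"older's inequality with $\frac{\gamma+1}{2\gamma}=\frac{1}{2}+\frac{1}{2\gamma}$,

```latex
\begin{equation*}
\|q^{\delta}_0 - q_0\|_{L^{\frac{2\gamma}{\gamma+1}}(\Omega)}
\leq \|h^{\delta}-g\|_{L^2(\Omega)}\,\big\|\sqrt{\rho^{\delta}_{0}}\big\|_{L^{2\gamma}(\Omega)}
+ \|g\|_{L^2(\Omega)}\,\big\|\sqrt{\rho^{\delta}_{0}}-\sqrt{\rho_{0}}\big\|_{L^{2\gamma}(\Omega)},
\end{equation*}
```

where the first factor tends to $0$ by the construction of $h^{\delta}$, and the last one tends to $0$ by \eqref{ini-rho} together with the elementary inequality $|\sqrt{a}-\sqrt{b}|^2\leq |a-b|$.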
Next, we consider the sequence of approximate solutions $(\mc{S}^{\delta},\rho^{\delta},u^{\delta})$ of system \eqref{approx1}--\eqref{approx:initial} (\cref{thm:approxn-delta}). Since the energy $E^{\delta}[\rho ^{\delta}_0,q^{\delta}_0]$ is uniformly bounded with respect to $\delta$, we have
from inequality \eqref{10:45} that
\begin{multline}\label{again-10:45}
\|\sqrt{\rho^{\delta}} u^{\delta}\|_{L^{\infty}(0,T;L^2(\Omega))}^2 + \|\rho^{\delta}\|_{L^{\infty}(0,T;L^{\gamma}(\Omega))}^2 + \|\sqrt{2\mu^{\delta}}\mathbb{D}(u^{\delta})\|_{L^2((0,T)\times\Omega)}^2 + \|\sqrt{\lambda^{\delta}}\operatorname{div}u^{\delta}\|_{L^2((0,T)\times\Omega)}^2 \\
+ \frac{1}{\delta}\|\sqrt{ \chi^{\delta}_{\mc{S}}}\left(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}\right)\|^2_{L^2((0,T)\times \Omega)} \leq C,
\end{multline}
with $C$ independent of $\delta$.
\underline{Step 1: Recovery of the transport equation for the body.}
Since $\{u^{\delta},\chi_{\mc{S}}^{\delta}\}$ is a bounded sequence in $L^{2}(0,T; L^2(\Omega)) \times L^{\infty}((0,T)\times \mb{R}^3)$ satisfying
\eqref{approx4}, we can apply \cref{sequential2} to conclude that: up to a subsequence, we have
\begin{align}
& u^{\delta} \rightarrow u \mbox{ weakly } \mbox{ in }L^{2}(0,T; L^{2}(\Omega)),\notag\\
& \chi_{\mc{S}}^{\delta} \rightarrow \chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) \mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),\label{19:30}
\end{align}
with
\begin{equation*}
\chi_{\mc{S}}(t,x)=\mathds{1}_{\mc{S}(t)}(x),\quad \mc{S}(t)=\eta_{t,0}(\mc{S}_0),
\end{equation*}
where $\eta_{t,s}\in H^{1}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3))$ is the isometric propagator. Moreover,
\begin{align}
& P_{\mc{S}}^{\delta} u^{\delta} \rightarrow P_{\mc{S}} u \mbox{ weakly } \mbox{ in }L^{2}(0,T; C^{\infty}_{loc}(\mb{R}^3)),\label{prop1}\\
& \eta_{t,s}^{\delta} \rightarrow \eta_{t,s} \mbox{ weakly } \mbox{ in }H^{1}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3)).\notag
\end{align}
Also, we obtain that $\chi_{\mc{S}}$ satisfies
\begin{equation*}
\frac{\partial {\chi}_{\mc{S}}}{\partial t} + \operatorname{div}(P_{\mc{S}}u\,\chi_{\mc{S}}) =0 \, \mbox{ in }(0,T)\times{\Omega},\quad \chi_{\mc{S}}(t,x)=\mathds{1}_{\mc{S}(t)}(x).
\end{equation*}
Now we set
\begin{equation}\label{uS}
u_{\mc{S}}=P_{\mc{S}}u
\end{equation}
to recover the transport equation \eqref{NO4}. Note that we have already recovered the regularity of $\chi_{\mc{S}}$ in \eqref{NO1}.
Observe that the fifth term of inequality \eqref{again-10:45} yields
\begin{equation}\label{12:04}
\sqrt{ \chi^{\delta}_{\mc{S}}}\left(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}\right) \rightarrow 0 \mbox{ strongly in } L^2((0,T)\times \Omega).
\end{equation}
The strong convergence of $\chi_{\mc{S}}^{\delta}$ and the weak convergences of $u^{\delta}$ and $P^{\delta}_{\mc{S}}u^{\delta}$ imply that
\begin{equation}\label{re:solidvel}
\chi_{\mc{S}}\left(u-u_{\mc{S}}\right)=0.
\end{equation}
To analyze the behaviour of the velocity field in the fluid part, we introduce the following continuous extension operator:
\begin{equation}\label{Eu1}
\mc{E}_u^{\delta}(t): \left\{ u\in H^{1}(\mc{F}^{\delta}(t)),\ u\cdot \nu=0\mbox{ on }\partial\Omega\right\} \rightarrow H^{1}(\Omega).
\end{equation}
Let us set
\begin{equation}\label{Eu2}
u^{\delta}_{\mc{F}}(t,\cdot)=\mc{E}_u^{\delta}(t)\left[u^{\delta}(t,\cdot)|_{\mc{F}^{\delta}}\right].
\end{equation}
We have
\begin{equation}\label{ext:fluid}
\{u^{\delta}_{\mc{F}}\} \mbox{ is bounded in }L^2(0,T;H^1(\Omega)),\quad u^{\delta}_{\mc{F}}=u^{\delta} \mbox{ on } \mc{F}^{\delta},\mbox{ i.e.\ } (1-\chi_{\mc{S}}^{\delta})(u^{\delta}-u^{\delta}_{\mc{F}})=0.
\end{equation}
Thus, the strong convergence of $\chi_{\mc{S}}^{\delta}$ and the weak convergence of $u^{\delta}_{\mc{F}}\rightarrow u_{\mc{F}}$ in $L^2(0,T;H^1(\Omega))$ yield that
\begin{equation}\label{re:fluidvel}
(1-\chi_{\mc{S}})\left(u-u_{\mc{F}}\right)=0.
\end{equation}
By combining the relations \eqref{re:solidvel}--\eqref{re:fluidvel}, we conclude that the limit $u$ of $u^{\delta}$ satisfies $u\in L^{2}(0,T; L^2(\Omega))$ and there exists $u_{\mc{F}}\in L^2(0,T; H^1(\Omega))$, $u_{\mc{S}}\in L^{2}(0,T; \mc{R})$ such that $u(t,\cdot)=u_{\mc{F}}(t,\cdot)$ on $\mc{F}(t)$ and $u(t,\cdot)=u_{\mc{S}}(t,\cdot)$ on $\mc{S}(t)$.
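Equivalently, combining \eqref{re:solidvel} and \eqref{re:fluidvel}, the limit velocity admits the decomposition

```latex
\begin{equation*}
u(t,\cdot) = (1-\chi_{\mc{S}}(t,\cdot))\,u_{\mc{F}}(t,\cdot) + \chi_{\mc{S}}(t,\cdot)\,u_{\mc{S}}(t,\cdot) \quad \mbox{ for a.e.\ } t\in (0,T).
\end{equation*}
```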
\underline{Step 2: Recovery of the continuity equations.}
We recall that $\rho^{\delta}{\chi}^{\delta}_{\mc{S}}(t,x)$ satisfies \eqref{approx5}, i.e.
\begin{equation*}
\frac{\partial }{\partial t}(\rho^{\delta}{\chi}^{\delta}_{\mc{S}}) + P^{\delta}_{\mc{S}}u^{\delta} \cdot \nabla (\rho^{\delta}{\chi}^{\delta}_{\mc{S}})=0,\quad (\rho^{\delta}{\chi}^{\delta}_{\mc{S}})|_{t=0}=\rho_0^{\delta}\mathds{1}_{\mc{S}_0}.
\end{equation*}
We proceed as in \cref{sequential2} to conclude that
\begin{align}\label{19:31}
\rho^{\delta}\chi_{\mc{S}}^{\delta} \rightarrow \rho\chi_{\mc{S}} \mbox{ weakly-}* \mbox{ in }L^{\infty}((0,T)\times \mb{R}^3) &\mbox{ and }\mbox{ strongly } \mbox{ in }C([0,T]; L^p_{loc}(\mb{R}^3)) \ (1\leq p<\infty),
\end{align}
and $\rho\chi_{\mc{S}}$ satisfies
\begin{equation*}
\frac{\partial }{\partial t}(\rho{\chi}_{\mc{S}}) + P_{\mc{S}}u \cdot \nabla (\rho{\chi}_{\mc{S}})=0,\quad (\rho{\chi}_{\mc{S}})|_{t=0}= \rho_{\mc{S}_0}\mathds{1}_{\mc{S}_0}.
\end{equation*}
We set
\begin{equation}\label{rhoS}
\rho_{\mc{S}}= \rho\chi_{\mc{S}}
\end{equation}
and use the definition of $u_{\mc{S}}$ in \eqref{uS} to conclude that
$\rho_{\mc{S}}$ satisfies:
\begin{equation*}
\frac{\partial {\rho}_{\mc{S}}}{\partial t} + \operatorname{div}(u_{\mc{S}}\rho_{\mc{S}}) =0 \, \mbox{ in }(0,T)\times{\Omega},\quad \rho_{\mc{S}}(0,x)=\rho_{\mc{S}_0}(x)\mathds{1}_{\mc{S}_0}\mbox{ in }\Omega.
\end{equation*}
Thus, we recover the equation of continuity \eqref{NO5} for the density of the rigid body.
We introduce the following extension operator:
\begin{equation*}
\mc{E}_{\rho}^{\delta}(t): \left\{ \rho\in L^{\gamma+\theta}(\mc{F}^{\delta}(t))\right\} \rightarrow L^{\gamma+\theta}(\Omega),
\end{equation*}
given by
\begin{equation}\label{Erho}
\mc{E}_{\rho}^{\delta}(t)\left[\rho^{\delta}(t,\cdot)|_{\mc{F}^{\delta}}\right]=
\begin{cases}
\rho^{\delta}(t,\cdot)|_{\mc{F}^{\delta}}&\mbox{ in }\mc{F}^{\delta}(t),\\
0 &\mbox{ in }\Omega\setminus \mc{F}^{\delta}(t).
\end{cases}
\end{equation}
Let us set
\begin{equation}\label{15:21}
\rho^{\delta}_{\mc{F}}(t,\cdot)=\mc{E}_{\rho}^{\delta}(t)\left[\rho^{\delta}(t,\cdot)|_{\mc{F}^{\delta}}\right].
\end{equation}
From estimates \eqref{10:45}, \eqref{rho:improved}, \eqref{ext:fluid} and the definition of $\rho^{\delta}_{\mc{F}}$ in \eqref{15:21}, we obtain that
\begin{align}
u^{\delta}_{\mc{F}}\rightarrow u_{\mc{F}}&\mbox{ weakly in }L^2(0,T; H^1(\Omega)),\label{ag:conv1}\\
\rho^{\delta}_{\mc{F}}\rightarrow \rho_{\mc{F}}\mbox{ weakly in }L^{\gamma+\theta}((0,T)&\times \Omega),\ \theta=\frac{2}{3}\gamma-1\mbox{ and weakly-}*\mbox{ in } L^{\infty}(0,T;L^{\beta}(\Omega)),\label{ag:conv2}\\
(\rho^{\delta}_{\mc{F}})^{\gamma}\rightarrow \overline{ \rho^{\gamma}_{\mc{F}}}&\mbox{ weakly in }L^{\frac{\gamma+\theta}{\gamma}}((0,T)\times\Omega),\label{ag:conv3}\\
\delta(\rho^{\delta}_{\mc{F}})^{\beta}\rightarrow 0&\mbox{ weakly in }L^{\frac{\beta+\theta}{\beta}}((0,T)\times \Omega)\label{ag:conv4}.
\end{align}
Next, we follow the ideas of \cite[Auxiliary lemma 7.53, Page 384]{MR2084891} to assert the following: if $u_{\mc{F}}, \rho_{\mc{F}}, \overline{ \rho_{\mc{F}}^{\gamma}}$ are defined by \eqref{ag:conv1}--\eqref{ag:conv3}, we have
\begin{itemize}
\item $(\rho_{\mc{F}},u_{\mc{F}})$ satisfies:
\begin{equation}\label{agrho:delta}
\frac{\partial {\rho_{\mc{F}}}}{\partial t} + \operatorname{div}({\rho}_{\mc{F}} u_{\mc{F}}) =0 \mbox{ in }\mc{D}'([0,T)\times \mb{R}^3).
\end{equation}
\item The couple $(\rho_{\mc{F}},u_{\mc{F}})$ satisfies the identity
\begin{equation}\label{agrenorm:delta}
\partial_t \overline{b(\rho_{\mc{F}})} + \operatorname{div}(\overline{b(\rho_{\mc{F}})}u_{\mc{F}})+\overline{[b'(\rho_{\mc{F}})\rho_{\mc{F}} - b(\rho_{\mc{F}})]\operatorname{div}u_{\mc{F}}}=0 \mbox{ in }\mc{D}'([0,T)\times \mb{R}^3),
\end{equation}
for any $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfying
\eqref{eq:b} and the weak limits $\overline{b(\rho_{\mc{F}})}$ and $\overline{[b'(\rho_{\mc{F}})\rho_{\mc{F}} - b(\rho_{\mc{F}})]\operatorname{div}u_{\mc{F}}}$ being defined in the following sense:
\begin{align*}
& b(\rho^{\delta}_{\mc{F}}) \rightarrow \overline{b(\rho_{\mc{F}})} \mbox{ weakly-}*\mbox{ in }L^{\infty}(0,T; L^{\frac{\gamma}{1+\kappa_1}}(\mb{R}^3)),\\
& [b'(\rho^{\delta}_{\mc{F}})\rho^{\delta}_{\mc{F}} - b(\rho^{\delta}_{\mc{F}})]\operatorname{div}u^{\delta}_{\mc{F}} \rightarrow \overline{[b'(\rho_{\mc{F}})\rho_{\mc{F}} - b(\rho_{\mc{F}})]\operatorname{div}u_{\mc{F}}} \mbox{ weakly in }L^2(0,T; L^{\frac{2\gamma}{2+2\kappa_1+\gamma}}(\mb{R}^3)).
\end{align*}
\end{itemize}
We outline the main idea of the proof of the asserted result. We derive \eqref{agrho:delta} by letting $\delta\rightarrow 0$ in equation \eqref{approx2} with the help of the convergence of the density in \eqref{ag:conv2} and the convergence of the momentum \cite[Section 7.10.1, page 383]{MR2084891}
\begin{align}\label{ag:product1}
\rho^{\delta}_{\mc{F}}u^{\delta}_{\mc{F}}\rightarrow \rho_{\mc{F}} u_{\mc{F}} \mbox{ weakly-}* \mbox{ in }L^{\infty}(0,T;L^{\frac{2\gamma}{\gamma+1}}(\mb{R}^3)),\mbox{ weakly in }L^2(0,T;L^{\frac{6\gamma}{\gamma+6}}(\mb{R}^3)).
\end{align}
Recall that when passing to the limit $\varepsilon\rightarrow 0$, we did have $\rho_{\mc{F}}^{\delta} \in L^2((0,T)\times \Omega)$. In the present step, however, we no longer know that $\rho_{\mc{F}} \in L^2((0,T)\times \Omega)$, so it is not straightforward to obtain the renormalized continuity equation. Observe that this difficulty is not present in the case $\gamma\geq \frac{9}{5}$: there, $\rho_{\mc{F}}\in L^{\gamma+\theta}((0,T)\times\Omega)\subset L^2((0,T)\times\Omega)$, since $\gamma+\theta=\frac{5}{3}\gamma-1\geq 2$ for $\gamma\geq \frac{9}{5}$.
We use equation \eqref{rho:renorm1} and \cite[Lemma 1.7]{MR2084891} to establish that $\{b(\rho^{\delta}_{\mc{F}})\}$ is uniformly continuous in $W^{-1,s}(\Omega)$ with $s=\min\left\{\frac{6\gamma}{6\kappa_1+6+\gamma},2\right\}$, where the function $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfies
\eqref{eq:b}. We apply \cite[Lemma 6.2, Lemma 6.4]{MR2084891} to get
\begin{align*}
b(\rho^{\delta}_{\mc{F}}) \rightarrow \overline{b(\rho_{\mc{F}})} &\mbox{ in }C([0,T]; L^{\frac{\gamma}{1+\kappa_1}}(\Omega)),\\
b(\rho^{\delta}_{\mc{F}}) \rightarrow \overline{b(\rho_{\mc{F}})}& \mbox{ strongly in }L^p(0,T; W^{-1,2}(\Omega)), \quad 1\leq p < \infty.
\end{align*}
The above mentioned limits together with \eqref{ag:conv1} help us to conclude
\begin{equation*}
b(\rho^{\delta}_{\mc{F}})u^{\delta}_{\mc{F}}\rightarrow \overline{b(\rho_{\mc{F}})}u_{\mc{F}} \mbox{ weakly in }L^2\left(0,T; L^{\frac{6\gamma}{6\kappa_1+6+\gamma}}(\Omega)\right).
\end{equation*}
Eventually, we obtain \eqref{agrenorm:delta} by taking the limit $\delta\rightarrow 0$ in \eqref{rho:renorm1}.
\underline{Step 3: Recovery of the renormalized continuity equation.}
The method of an effective viscous flux with an appropriate choice of functions \cite[Lemma 7.55, page 386]{MR2084891} helps us to establish boundedness of oscillations of the density sequence and we have an estimate for the amplitude of oscillations \cite[Lemma 7.56, page 386]{MR2084891}:
\begin{equation*}
\limsup_{\delta\rightarrow 0} \int\limits_{0}^T\int\limits_{\Omega} \left|T_k(\rho^{\delta}_{\mc{F}})-T_k(\rho_{\mc{F}})\right|^{\gamma+1} \leq \int\limits_{0}^T\int\limits_{\Omega} \left[\overline{\rho_{\mc{F}}^{\gamma}T_k(\rho_{\mc{F}})} - \overline{\rho^{\gamma}_{\mc{F}}}\,\overline{T_k(\rho_{\mc{F}})}\right],
\end{equation*}
where $T_k(\rho_{\mc{F}})=\min\{\rho_{\mc{F}},k\}$, $k >0$, are cut-off operators and $\overline{T_k(\rho_{\mc{F}})}$, $\overline{\rho_{\mc{F}}^{\gamma}T_k(\rho_{\mc{F}})}$ stand for the weak limits of $T_k(\rho_{\mc{F}}^{\delta})$, $(\rho_{\mc{F}}^{\delta})^{\gamma}T_k(\rho_{\mc{F}}^{\delta})$. This result allows us to estimate the quantities
\begin{align*}
& \sup_{\delta> 0} \|\rho_{\mc{F}}^{\delta}\mathds{1}_{\{\rho_{\mc{F}}^{\delta}\geq k\}}\|_{L^p((0,T)\times\Omega)}, \quad \sup_{\delta> 0} \|T_k(\rho_{\mc{F}}^{\delta})-\rho_{\mc{F}}^{\delta}\|_{L^p((0,T)\times\Omega)},\\ & \|\overline{T_k(\rho_{\mc{F}})}-\rho_{\mc{F}}\|_{L^p((0,T)\times\Omega)},\quad \|T_k(\rho_{\mc{F}})-\rho_{\mc{F}}\|_{L^p((0,T)\times\Omega)} \mbox{ with }k>0, \ 1\leq p< \gamma+\theta.
\end{align*}
Using the above estimate and taking the renormalized function $b=T_k$ in \eqref{agrenorm:delta}, after several computations we obtain \cite[Lemma 7.57, page 388]{MR2084891}: Let $b\in C([0,\infty)) \cap C^1((0,\infty))$ satisfy \eqref{eq:b} with $\kappa_1+1\leq \frac{\gamma+\theta}{2}$ and let $u_{\mc{F}}$, $\rho_{\mc{F}}$ be defined by \eqref{ag:conv1}--\eqref{ag:conv2}. Then we obtain the renormalized continuity equation \eqref{NO3}:
\begin{equation*}
\partial_t b(\rho_{\mc{F}}) + \operatorname{div}(b(\rho_{\mc{F}})u_{\mc{F}}) + (b'(\rho_{\mc{F}})\rho_{\mc{F}}-b(\rho_{\mc{F}}))\operatorname{div}u_{\mc{F}}=0 \mbox{ in } \mc{D}'([0,T)\times {\Omega}).
\end{equation*}
So far, we have recovered the transport equation of the body \eqref{NO4}, the continuity equation \eqref{NO2} and the renormalized one \eqref{NO3}. It remains to prove the momentum equation \eqref{weak-momentum} and establish the energy inequality \eqref{energy}.
\underline{Step 4: Recovery of the momentum equation.}
Notice that the test functions in the weak formulation of momentum equation \eqref{weak-momentum} belong to the space $V_T$ (the space is defined in \eqref{def:test}), which is a space of discontinuous functions. Precisely,
\begin{equation*}
\phi=(1-\chi_{\mc{S}})\phi_{\mc{F}} + \chi_{\mc{S}}\phi_{\mc{S}}\mbox{ with }\phi_{\mc{F}}\in \mc{D}([0,T);\mc{D}(\overline{\Omega})),\quad \phi_{\mc{S}}\in \mc{D}([0,T);\mc{R}),
\end{equation*}
satisfying
\begin{equation*}
\phi_{\mc{F}}\cdot \nu=0 \mbox{ on }\partial\Omega,\quad \phi_{\mc{F}}\cdot \nu= \phi_{\mc{S}}\cdot \nu\mbox{ on }\partial\mc{S}(t).
\end{equation*}
On the other hand, the test functions in the momentum equation \eqref{approx3} of the $\delta$-approximation require $L^p(0,T; W^{1,p}(\Omega))$-regularity. Hence we approximate this discontinuous test function by a sequence of test functions belonging to $L^p(0,T; W^{1,p}(\Omega))$. The idea is to construct an approximation of $\phi$ without jumps at the interface, with solid part $\phi^{\delta}_{\mc{S}}$ satisfying
\begin{equation}\label{cond1-good}
\phi^{\delta}_{\mc{S}}(t,x)=\phi_{\mc{F}}(t,x) \quad \forall\ t\in (0,T),\ x\in \partial\mc{S}^{\delta}(t),
\end{equation}
and
\begin{equation}\label{cond2-good}
\phi^{\delta}_{\mc{S}}(t,\cdot)\approx \phi_{\mc{S}}(t,\cdot)\mbox{ in }\mc{S}^{\delta}(t)\mbox{ away from a }\delta^{\vartheta}\mbox{ neighborhood of }\partial\mc{S}^{\delta}(t)\mbox{ with }\vartheta >0.
\end{equation}
In the spirit of \cite[Proposition 5.1]{MR3272367}, we first state the precise result regarding this construction and then continue the proof of \cref{exist:upto collision}.
\begin{proposition}\label{approx:test}
Let $\phi \in V_T$ and $\vartheta >0$. Then there exists a sequence
$$\phi^{\delta} \in H^1(0,T; L^{2}(\Omega)) \cap L^r(0,T; W^{1,{r}}(\Omega)),\mbox{ where } r=\max\left\{\beta+1, \frac{\beta+\theta}{\theta}\right\},\ \beta \geq \max\{8,\gamma\}\mbox{ and }\theta=\frac{2}{3}\gamma -1 $$
of the form
\begin{equation}\label{form:phi}
\phi^{\delta}=(1-\chi^{\delta}_{\mc{S}})\phi_{\mc{F}} + \chi^{\delta}_{\mc{S}}\phi^{\delta}_{\mc{S}}
\end{equation}
that satisfies for all $p\in [1,\infty)$:
\begin{enumerate}
\item $\|\chi^{\delta}_{\mc{S}}(\phi^{\delta}_{\mc{S}}-\phi_{\mc{S}})\|_{L^p((0,T)\times \Omega)}=\mc{O}(\delta^{\vartheta/p})$,
\item $\phi^{\delta} \rightarrow \phi$ strongly in $L^p((0,T)\times \Omega)$,
\item $\|\phi^{\delta}\|_{L^p(0,T;W^{1,p}(\Omega))}=\mc{O}(\delta^{-\vartheta(1-1/p)})$,
\item $\|\chi^{\delta}_{\mc{S}}(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\|_{L^2(0,T;L^p(\Omega))}=\mc{O}(\delta^{\vartheta/p})$,
\item $(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)\phi^{\delta}\rightarrow (\partial_t + P_{\mc{S}}u\cdot \nabla)\phi$ weakly in $L^2(0,T;L^p(\Omega))$.
\end{enumerate}
\end{proposition}
We give the proof of \cref{approx:test} at the end of this section. Next we continue the proof of \cref{exist:upto collision}.
\underline{Step 4.1: Linear terms of the momentum equation.}
We use $\phi^{\delta}$ (constructed in \cref{approx:test}) as the test function in \eqref{approx3}. Then we take the limit $\delta\rightarrow 0$ in \eqref{approx3} to recover equation \eqref{weak-momentum}. Let us analyze the passage to the limit in the linear terms. To begin with, we recall the following convergences of the velocities of the fluid part and the solid part, cf.\ \eqref{ag:conv1} and \eqref{prop1}:
\begin{align*}
& (1-\chi_{\mc{S}}^{\delta})u^{\delta}_{\mc{F}}= (1-\chi^{\delta}_{\mc{S}})u^{\delta},\quad u^{\delta}_{\mc{F}}\rightarrow u_{\mc{F}}\mbox{ weakly in }L^2(0,T;H^1(\Omega)),\\
& u^{\delta}_{\mc{S}}=P^{\delta}_{\mc{S}}u^{\delta},\ u_{\mc{S}}=P_{\mc{S}}u,\quad u^{\delta}_{\mc{S}}\rightarrow u_{\mc{S}}\mbox{ weakly in }L^2(0,T; C^{\infty}_{loc}(\mb{R}^3)).
\end{align*}
Let us start with the diffusion term $2\mu^{\delta}\mathbb{D}(u^{\delta}):\mathbb{D}(\phi^{\delta}) + \lambda^{\delta}\operatorname{div}u^{\delta}\mathbb{I} : \mathbb{D}(\phi^{\delta})$ in \eqref{approx3}. We write
\begin{align*}
\int\limits_0^T\int\limits_{\Omega} 2\mu^{\delta}\mathbb{D}(u^{\delta}):\mathbb{D}(\phi^{\delta}) &=
\int\limits_0^T\int\limits_{\Omega} \Big(2\mu_{\mc{F}}(1-\chi^{\delta}_{\mc{S}})\mb{D}(u_{\mc{F}}^{\delta}) + \delta^2 \chi_{\mc{S}}^{\delta}\mb{D}(u^{\delta})\Big): \mb{D}(\phi^{\delta})\\
&=\int\limits_0^T\int\limits_{\Omega} 2\mu_{\mc{F}}(1-\chi^{\delta}_{\mc{S}})\mb{D}(u_{\mc{F}}^{\delta}):\mb{D}(\phi_{\mc{F}}) + \delta^2\int\limits_0^T\int\limits_{\Omega} \chi_{\mc{S}}^{\delta}\mb{D}(u^{\delta}): \mb{D}(\phi^{\delta}).
\end{align*}
The strong convergence of $\chi^{\delta}_{\mc{S}}$ to $\chi_{\mc{S}}$ and the weak convergence of $u_{\mc{F}}^{\delta}$ to $u_{\mc{F}}$ imply that
\begin{equation*}
\int\limits_0^T\int\limits_{\Omega} 2\mu_{\mc{F}}(1-\chi^{\delta}_{\mc{S}})\mb{D}(u_{\mc{F}}^{\delta}):\mb{D}(\phi_{\mc{F}}) \rightarrow \int\limits_0^T\int\limits_{\Omega} 2\mu_{\mc{F}}(1-\chi_{\mc{S}})\mb{D}(u_{\mc{F}}):\mb{D}(\phi_{\mc{F}}).
\end{equation*}
We know from \eqref{again-10:45}, the definition of $\mu^{\delta}$ in \eqref{approx-viscosity} and \cref{approx:test} (with $p=2$) that
\begin{equation*}
\|\delta\chi^{\delta}_{\mc{S}}\mathbb{D}(u^{\delta})\|_{L^2((0,T)\times\Omega)} \leq C,\quad \|\phi^{\delta}\|_{L^2(0,T;H^1(\Omega))}=\mc{O}({\delta}^{-\vartheta/2}).
\end{equation*}
These estimates yield that
\begin{equation} \label{alpha2}
\left|\delta^2\int\limits_0^T\int\limits_{\Omega} \chi_{\mc{S}}^{\delta}\mb{D}(u^{\delta}): \mb{D}(\phi^{\delta})\right|\leq \delta\|\delta\chi^{\delta}_{\mc{S}}\mathbb{D}(u^{\delta})\|_{L^2((0,T)\times\Omega)}\|\mb{D}(\phi^{\delta})\|_{L^2(0,T;L^2(\Omega))}\leq C\delta^{1-\vartheta/2}.
\end{equation}
Choosing $\vartheta<2$ and letting $\delta\rightarrow 0$, we obtain
\begin{equation*}
\delta^2\int\limits_0^T\int\limits_{\Omega} \chi_{\mc{S}}^{\delta}\mb{D}(u^{\delta}): \mb{D}(\phi^{\delta}) \rightarrow 0.
\end{equation*}
Hence,
\begin{equation*}
\int\limits_0^T\int\limits_{\Omega} \Big(2\mu^{\delta}\mathbb{D}(u^{\delta}):\mathbb{D}(\phi^{\delta}) + \lambda^{\delta}\operatorname{div}u^{\delta}\mathbb{I} : \mathbb{D}(\phi^{\delta})\Big)\rightarrow \int\limits_0^T\int\limits_{\mc{F}(t)} \Big(2\mu_{\mc{F}}\mb{D}(u_{\mc{F}}) + \lambda_{\mc{F}}\operatorname{div}u_{\mc{F}}\mathbb{I}\Big):\mb{D}(\phi_{\mc{F}})
\end{equation*}
as $\delta\rightarrow 0$. Next we consider the boundary term on $\partial\Omega$ in \eqref{approx3}. The weak convergence of $u_{\mc{F}}^{\delta}$ to $u_{\mc{F}}$ in $L^2(0,T;H^1(\Omega))$ yields
\begin{equation*}
\int\limits_0^T\int\limits_{\partial \Omega} (u^{\delta} \times \nu)\cdot (\phi^{\delta} \times \nu)=\int\limits_0^T\int\limits_{\partial \Omega} (u_{\mc{F}}^{\delta}\times \nu)\cdot (\phi_{\mc{F}}\times \nu)
\rightarrow\int\limits_0^T\int\limits_{\partial \Omega} (u_{\mc{F}}\times \nu)\cdot (\phi_{\mc{F}}\times \nu) \mbox{ as }\delta\rightarrow 0.
\end{equation*}
To deal with the boundary term on $\partial \mc{S}^{\delta}(t)$ we do a change of variables such that this term becomes an integral on the fixed boundary $\partial \mc{S}_0$. Then we pass to the limit as $\delta\rightarrow 0$ and afterwards transform back to the moving domain. Next, we introduce the notation $r^{\delta}_{\mc{S}}=P^{\delta}_{\mc{S}}\phi^{\delta}$ to write the following:
\begin{align*}
\int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} [(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\times \nu]\cdot [(\phi^{\delta}-P^{\delta}_{\mc{S}}\phi^{\delta})\times \nu]=& \int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} [(u^{\delta}_{\mc{F}}-u^{\delta}_{\mc{S}})\times \nu]\cdot [(\phi^{\delta}_{\mc{F}}-r^{\delta}_{\mc{S}})\times \nu]\\=& \int\limits_0^T\int\limits_{\partial \mc{S}_0} [(U^{\delta}_{\mc{F}}-U^{\delta}_{\mc{S}})\times \nu]\cdot [(\Phi^{\delta}_{\mc{F}}-R^{\delta}_{\mc{S}})\times \nu],
\end{align*}
where we denote by capital letters the corresponding velocity fields and test functions in the fixed domain. By \cref{approx:test} we have that $\phi^{\delta} \rightarrow \phi$ strongly in $L^2(0,T;L^6(\Omega))$. Hence we obtain, as in \cref{sequential2}, that
\begin{equation*}
r^{\delta}_{\mc{S}} \rightarrow r_{\mc{S}}=P_{\mc{S}}\phi \mbox{ strongly in }
L^2(0,T; C^{\infty}_{loc}(\mb{R}^3)).
\end{equation*}
Now using \cite[Lemma A.2]{MR3272367}, we obtain the convergence in the fixed domain
\begin{equation*}
R^{\delta}_{\mc{S}} \rightarrow R_{\mc{S}} \mbox{ strongly in }
L^2(0,T; H^{1/2}(\partial\mc{S}_0)).
\end{equation*}
Similarly, the convergences of $u^{\delta}_{\mc{F}}$ and $u^{\delta}_{\mc{S}}$ with \cite[Lemma A.2]{MR3272367} imply
\begin{equation*}
U^{\delta}_{\mc{F}}\rightarrow U_{\mc{F}},\ U^{\delta}_{\mc{S}}\rightarrow U_{\mc{S}}\quad\mbox{weakly in }L^2(0,T;H^1(\Omega)).
\end{equation*}
Combining these convergence results and transforming back to the moving domain, we obtain
\begin{multline*}
\int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} [(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\times \nu]\cdot [(\phi^{\delta}-P^{\delta}_{\mc{S}}\phi^{\delta})\times \nu]= \int\limits_0^T\int\limits_{\partial \mc{S}_0} [(U^{\delta}_{\mc{F}}-U^{\delta}_{\mc{S}})\times \nu]\cdot [(\Phi^{\delta}_{\mc{F}}-R^{\delta}_{\mc{S}})\times \nu]\\ \rightarrow \int\limits_0^T\int\limits_{\partial \mc{S}_0} [(U_{\mc{F}}-U_{\mc{S}})\times \nu]\cdot [(\Phi_{\mc{F}}-R_{\mc{S}})\times \nu]= \int\limits_0^T\int\limits_{\partial \mc{S}(t)} [(u_{\mc{F}}-u_{\mc{S}})\times \nu]\cdot [(\phi_{\mc{F}}-\phi_{\mc{S}})\times \nu].
\end{multline*}
The penalization term can be estimated in the following way:
\begin{align}
\left|\frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\delta}_{\mc{S}}(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\cdot (\phi^{\delta}-P^{\delta}_{\mc{S}}\phi^{\delta})\right|=&\left|\frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\delta}_{\mc{S}}(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\cdot ((\phi^{\delta}-\phi_{\mc{S}})-P^{\delta}_{\mc{S}}(\phi^{\delta}-\phi_{\mc{S}}))\right|\notag\\=\left|\frac{1}{\delta}\int\limits_0^T\int\limits_{\Omega} \chi^{\delta}_{\mc{S}}(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\cdot (\phi^{\delta}_{\mc{S}}-\phi_{\mc{S}})\right| &\leq \delta^{-1/2}\frac{1}{\delta^{1/2}}\left\|\sqrt{ \chi^{\delta}_{\mc{S}}}\left(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}\right)\right\|_{L^2((0,T)\times \Omega)}\left\|\sqrt{\chi^{\delta}_{\mc{S}}}(\phi^{\delta}_{\mc{S}}-\phi_{\mc{S}})\right\|_{L^2((0,T)\times\Omega)}\notag\\&
\leq C\delta^{-1/2+\vartheta/2},\label{12:27}
\end{align}
where we have used the estimates obtained from \eqref{again-10:45} and \cref{approx:test}. By choosing $\vartheta>1$ and taking $\delta\rightarrow 0$, the penalization term vanishes. Note that we also need $\vartheta <2$ in view of \eqref{alpha2}.
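To summarize the bookkeeping on $\vartheta$: the two rates obtained in \eqref{alpha2} and \eqref{12:27} impose

```latex
\begin{equation*}
C\delta^{\,1-\vartheta/2}\rightarrow 0 \ \Longleftrightarrow\ \vartheta<2,
\qquad
C\delta^{\,-1/2+\vartheta/2}\rightarrow 0 \ \Longleftrightarrow\ \vartheta>1,
\end{equation*}
```

so any fixed choice $\vartheta\in(1,2)$, for instance $\vartheta=\frac{3}{2}$, makes both the artificial viscosity contribution and the penalization term vanish as $\delta\rightarrow 0$.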
\underline{Step 4.2: Nonlinear terms of the momentum equation.}
In this step, we analyze the following terms:
\begin{multline}\label{convection}
\int\limits_0^T\int\limits_{\Omega} \rho^{\delta} \left(u^{\delta}\cdot \frac{\partial}{\partial t}\phi^{\delta} + u^{\delta} \otimes u^{\delta} : \nabla \phi^{\delta}\right) = \int\limits_0^T\int\limits_{\Omega} \rho^{\delta}_{\mc{F}}(1-\chi_{\mc{S}}^{\delta}) u^{\delta}_{\mc{F}}\cdot \frac{\partial}{\partial t}\phi_{\mc{F}} + \int\limits_0^T\int\limits_{\Omega} \rho^{\delta}_{\mc{F}}(1-\chi_{\mc{S}}^{\delta}) u^{\delta}_{\mc{F}} \otimes u^{\delta}_{\mc{F}} : \nabla \phi_{\mc{F}} \\+ \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)\phi^{\delta}\cdot u^{\delta}.
\end{multline}
The strong convergence of $\chi^{\delta}_{\mc{S}}$ to $\chi_{\mc{S}}$
and the weak convergence of $\rho^{\delta}_{\mc{F}}u^{\delta}_{\mc{F}}$ to $\rho_{\mc{F}}u_{\mc{F}}$ (see \eqref{ag:product1}) imply
\begin{equation}\label{A1}
\int\limits_0^T\int\limits_{\Omega} \rho^{\delta}_{\mc{F}}(1-\chi_{\mc{S}}^{\delta}) u^{\delta}_{\mc{F}}\cdot \frac{\partial}{\partial t}\phi_{\mc{F}} \rightarrow \int\limits_0^T\int\limits_{\Omega} \rho_{\mc{F}}(1-\chi_{\mc{S}}) u_{\mc{F}}\cdot \frac{\partial}{\partial t}\phi_{\mc{F}} \quad\mbox{ as }\delta\rightarrow 0.
\end{equation}
We use the convergence result for the convective term from \cite[Section 7.10.1, page 384]{MR2084891}
\begin{equation*}
\rho^{\delta}_{\mc{F}}(u^{\delta}_{\mc{F}})_i (u^{\delta}_{\mc{F}})_j \rightarrow \rho_{\mc{F}} (u_{\mc{F}})_i (u_{\mc{F}})_j\mbox{ weakly in }L^2(0,T;L^{\frac{6\gamma}{4\gamma + 3}}(\Omega)), \quad \forall i,j\in \{1,2,3\},
\end{equation*}
to pass to the limit in the second term of the right-hand side of \eqref{convection}:
\begin{equation}\label{A2}
\int\limits_0^T\int\limits_{\Omega} \rho^{\delta}_{\mc{F}}(1-\chi_{\mc{S}}^{\delta}) u^{\delta}_{\mc{F}} \otimes u^{\delta}_{\mc{F}} : \nabla \phi_{\mc{F}} \rightarrow \int\limits_0^T\int\limits_{\Omega} \rho_{\mc{F}}(1-\chi_{\mc{S}}) u_{\mc{F}} \otimes u_{\mc{F}} : \nabla \phi_{\mc{F}}.
\end{equation}
Next we consider the third term on the right-hand side of \eqref{convection}:
\begin{multline*}
\int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)\phi^{\delta}\cdot u^{\delta}=\int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\cdot u^{\delta} + \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}\partial_t\phi_{\mc{S}}\cdot u^{\delta}\\ + \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}(u^{\delta}_{\mc{S}}\cdot\nabla)\phi_{\mc{S}}\cdot u^{\delta}=: T_1^{\delta} + T_2^{\delta}+ T_3^{\delta}.
\end{multline*}
We write
\begin{equation*}
T_1^{\delta} = \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\cdot (u^{\delta}-P_{\mc{S}}^{\delta}u^{\delta}) + \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\cdot P_{\mc{S}}^{\delta}u^{\delta}.
\end{equation*}
We estimate these terms in the following way:
\begin{multline*}
\left|\int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\cdot (u^{\delta}-P_{\mc{S}}^{\delta}u^{\delta})\right| \\ \leq \|\rho^{\delta}\chi_{\mc{S}}^{\delta}\|_{L^{\infty}((0,T)\times \Omega)}\|\chi^{\delta}_{\mc{S}}(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\|_{L^2(0,T;L^6(\Omega))}\frac{1}{\delta^{1/2}}\left\|\sqrt{ \chi^{\delta}_{\mc{S}}}\left(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}\right)\right\|_{L^2((0,T)\times \Omega)}\delta^{1/2},
\end{multline*}
\begin{multline*}
\left|\int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}(\partial_t+u^{\delta}_{\mc{S}}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\cdot P_{\mc{S}}^{\delta}u^{\delta}\right|\\ \leq \|\rho^{\delta}\chi_{\mc{S}}^{\delta}\|_{L^{\infty}((0,T)\times \Omega)}\|\chi^{\delta}_{\mc{S}}(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\|_{L^2(0,T;L^6(\Omega))}\|P^{\delta}_{\mc{S}}u^{\delta}\|_{L^2((0,T)\times \Omega)},
\end{multline*}
where we have used $\rho^{\delta}\chi_{\mc{S}}^{\delta} \in L^{\infty}((0,T)\times \Omega)$ as it is a solution to \eqref{approx5}.
Moreover, by \cref{approx:test} (with $p=6$), we know that for $\vartheta > 0$,
$$\|\chi^{\delta}_{\mc{S}}(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\|_{L^2(0,T;L^6(\Omega))}=\mc{O}(\delta^{\vartheta/6}).$$
Hence,
\begin{equation}\label{A3}
T_1^{\delta} \rightarrow 0\mbox{ as } \delta\rightarrow 0.
\end{equation}
Observe that
\begin{equation*}
T_2^{\delta}= \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}\partial_t\phi_{\mc{S}}\cdot (u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}) + \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi_{\mc{S}}^{\delta}\partial_t\phi_{\mc{S}}\cdot P^{\delta}_{\mc{S}}u^{\delta}.
\end{equation*}
Now we use the following convergences:
\begin{itemize}
\item the strong convergence of $\sqrt{ \chi^{\delta}_{\mc{S}}}\left(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta}\right) \rightarrow 0$ in $ L^2((0,T)\times \Omega)$ (see the fifth term of inequality \eqref{again-10:45}),
\item the strong convergence of $\chi^{\delta}_{\mc{S}}$ to $\chi_{\mc{S}}$ (see the convergence in \eqref{19:30}),
\item the weak convergence of $\rho^{\delta}\chi_{\mc{S}}^{\delta}P^{\delta}_{\mc{S}}u^{\delta}\mbox{ to }\rho \chi_{\mc{S}}P_{\mc{S}}u$ (see the convergences in \eqref{19:31} and \eqref{prop1}),
\end{itemize}
to deduce
\begin{equation*}
T_2^{\delta} \rightarrow \int\limits_0^T\int\limits_{\mc{S}(t)}\rho \chi_{\mc{S}}\partial_t \phi_{\mc{S}}\cdot P_{\mc{S}}u\quad \mbox{ as }\quad\delta\rightarrow 0.
\end{equation*}
Recall the definition of $u_{\mc{S}}$ in \eqref{uS} and the definition of $\rho_{\mc{S}}$ in \eqref{rhoS}
to conclude
\begin{equation}\label{A4}
T_2^{\delta} \rightarrow \int\limits_0^T\int\limits_{\mc{S}(t)}\rho_{\mc{S}}\partial_t \phi_{\mc{S}}\cdot u_{\mc{S}}\quad \mbox{ as }\quad\delta\rightarrow 0.
\end{equation}
Notice that
\begin{equation}\label{A5}
T_3^{\delta}= \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}(u^{\delta}_{\mc{S}}\cdot\nabla)\phi_{\mc{S}}\cdot u^{\delta} = \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}(u^{\delta}_{\mc{S}}\otimes u^{\delta}_{\mc{S}}): \nabla\phi_{\mc{S}}= \int\limits_0^T\int\limits_{\Omega}\rho^{\delta}\chi^{\delta}_{\mc{S}}(u^{\delta}_{\mc{S}}\otimes u^{\delta}_{\mc{S}}): \mb{D}(\phi_{\mc{S}})=0.
\end{equation}
Eventually, combining the results \eqref{A1}--\eqref{A5}, we obtain
\begin{equation*}
\int\limits_0^T\int\limits_{\Omega} \rho^{\delta} \left(u^{\delta}\cdot \frac{\partial}{\partial t}\phi + u^{\delta} \otimes u^{\delta} : \nabla \phi\right) \rightarrow \int\limits_0^T\int\limits_{\mc{F}(t)} \rho_{\mc{F}}u_{\mc{F}}\cdot \frac{\partial}{\partial t}\phi_{\mc{F}} + \int\limits_0^T\int\limits_{\mc{S}(t)} \rho_{\mc{S}}u_{\mc{S}}\cdot \frac{\partial}{\partial t}\phi_{\mc{S}} + \int\limits_0^T\int\limits_{\mc{F}(t)} (\rho_{\mc{F}}u_{\mc{F}} \otimes u_{\mc{F}}) : \nabla \phi_{\mc{F}}.
\end{equation*}
\underline{Step 4.3: Pressure term of the momentum equation.}
We use the definition of $\phi^{\delta}$
\begin{equation*}
\phi^{\delta}=(1-\chi^{\delta}_{\mc{S}})\phi_{\mc{F}} + \chi^{\delta}_{\mc{S}}\phi^{\delta}_{\mc{S}},
\end{equation*}
to write
\begin{equation*}
\int\limits_0^T\int\limits_{\Omega} \left(a^{\delta}(\rho^{\delta})^{\gamma} + {\delta} (\rho^{\delta})^{\beta} \right)\mathbb{I} : \mathbb{D}(\phi^{\delta}) = \int\limits_0^T\int\limits_{\Omega} \left[a_{\mc{F}} (1-\chi^{\delta}_{\mc{S}})(\rho^{\delta}_{\mc{F}})^{\gamma}+ {\delta}(1-\chi^{\delta}_{\mc{S}}) (\rho_{\mc{F}}^{\delta})^{\beta}\right]\mathbb{I} : \mathbb{D}(\phi_{\mc{F}}),
\end{equation*}
where we have used the fact that $\operatorname{div}\phi_{\mc{S}}^{\delta}=0$.
Due to the strong convergence of $\chi^{\delta}_{\mc{S}}$ to $\chi_{\mc{S}}$ and the weak convergence of $(\rho_{\mc{F}}^{\delta})^{\gamma}$, $(\rho_{\mc{F}}^{\delta})^{\beta}$ in \eqref{ag:conv3}, \eqref{ag:conv4} we obtain \begin{equation*}
\int\limits_0^T\int\limits_{\Omega} a_{\mc{F}} (1-\chi^{\delta}_{\mc{S}})(\rho^{\delta}_{\mc{F}})^{\gamma}\mathbb{I} : \mathbb{D}(\phi_{\mc{F}}) \rightarrow \int\limits_0^T\int\limits_{\Omega} a_{\mc{F}} (1-\chi_{\mc{S}})\overline{(\rho_{\mc{F}})^{\gamma}}\mathbb{I} : \mathbb{D}(\phi_{\mc{F}}) \mbox{ as }\delta \to 0,
\end{equation*}
and
\begin{equation*}
\int\limits_0^T\int\limits_{\Omega} {\delta}(1-\chi^{\delta}_{\mc{S}}) (\rho_{\mc{F}}^{\delta})^{\beta}\mathbb{I} : \mathbb{D}(\phi_{\mc{F}}) \rightarrow 0 \mbox{ as }\delta \to 0.
\end{equation*}
In order to establish \eqref{weak-momentum}, it only remains to show that $\overline{\rho^{\gamma}_{\mc{F}}}=\rho^{\gamma}_{\mc{F}}$. This is equivalent to establishing a strong convergence result for the sequence $\rho^{\delta}_{\mc{F}}$. We follow the idea of \cite[Lemma 7.60, page 391]{MR2084891} to prove the following: let $\{\rho^{\delta}_{\mc{F}}\}$ be the sequence above and let $\rho_{\mc{F}}$ be its weak limit from \eqref{ag:conv2}. Then, at least for a chosen subsequence,
\begin{equation*}
\rho^{\delta}_{\mc{F}}\rightarrow \rho_{\mc{F}} \mbox{ in }L^p((0,T)\times\Omega),\quad 1\leq p < \gamma+\theta.
\end{equation*}
This immediately yields $\overline{\rho^{\gamma}_{\mc{F}}}=\rho^{\gamma}_{\mc{F}}$. Thus, we have recovered the weak form of the momentum equation.
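For the reader's convenience, let us indicate why this strong convergence identifies the limit of the pressure (a standard argument, sketched here under the improved integrability of the density with $\theta>0$ underlying \eqref{ag:conv3}). Since $\gamma < \gamma+\theta$, the strong convergence above applies with $p=\gamma$:
\begin{equation*}
\rho^{\delta}_{\mc{F}}\rightarrow \rho_{\mc{F}} \mbox{ in }L^{\gamma}((0,T)\times\Omega) \quad\Rightarrow\quad \rho^{\delta}_{\mc{F}}\rightarrow \rho_{\mc{F}} \mbox{ a.e. in }(0,T)\times\Omega \mbox{ (up to a subsequence)},
\end{equation*}
while $\{(\rho^{\delta}_{\mc{F}})^{\gamma}\}$ is bounded in $L^{(\gamma+\theta)/\gamma}((0,T)\times\Omega)$ with exponent $(\gamma+\theta)/\gamma>1$, hence equi-integrable. Vitali's convergence theorem then yields
\begin{equation*}
(\rho^{\delta}_{\mc{F}})^{\gamma}\rightarrow \rho^{\gamma}_{\mc{F}} \mbox{ in }L^{1}((0,T)\times\Omega),
\end{equation*}
so the weak limit $\overline{\rho^{\gamma}_{\mc{F}}}$ indeed coincides with $\rho^{\gamma}_{\mc{F}}$.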
\underline{Step 5: Recovery of the energy estimate.} We derive from \eqref{10:45} that
\begin{multline*}
\int\limits_{\Omega}\Big( \rho^{\delta} |u^{\delta}|^2 + \frac{a_{\mc{F}}}{\gamma-1}(1-\chi^{\delta}_{\mc{S}})(\rho^{\delta}_{\mc{F}})^{\gamma} + \frac{\delta}{\beta-1}(\rho^{\delta}_{\mc{F}})^{\beta}\Big) + \int\limits_0^T\int\limits_{\Omega} \Big(2\mu_{\mc{F}}(1-\chi^{\delta}_{\mc{S}})|\mb{D}(u_{\mc{F}}^{\delta})|^2 + \lambda_{\mc{F}}(1-\chi^{\delta}_{\mc{S}})|\operatorname{div}u^{\delta}_{\mc{F}}|^2\Big)
+ \alpha \int\limits_0^T\int\limits_{\partial \Omega} |u^{\delta} \times \nu|^2 \\
+ \alpha \int\limits_0^T\int\limits_{\partial \mc{S}^{\delta}(t)} |(u^{\delta}-P^{\delta}_{\mc{S}}u^{\delta})\times \nu|^2
\leq \int\limits_0^T\int\limits_{\Omega}\rho^{\delta} g^{\delta} \cdot u^{\delta}
+ \int\limits_{\Omega}\Bigg( \frac{|q_0^{\delta}|^2}{\rho_0^{\delta}}\mathds{1}_{\{\rho_0^{\delta}>0\}} + \frac{a}{\gamma-1}(\rho_0^{\delta})^{\gamma} + \frac{\delta}{\beta-1}(\rho_0^{\delta})^{\beta} \Bigg).
\end{multline*}
To see the limiting behaviour of the above inequality as $\delta$ tends to zero, we observe that the passage to the limit $\delta\to 0$ is similar to the limit passages $\varepsilon\rightarrow 0$ and $N\rightarrow \infty$. Hence we obtain the energy inequality \eqref{energy}.
\underline{Step 6: Rigid body is away from boundary.} It remains to check that there exists $T$ small enough such that if $\operatorname{dist}(\mc{S}_0,\partial \Omega) > 2\sigma$, then
\begin{equation}\label{final-collision}
\operatorname{dist}(\mc{S}(t),\partial \Omega) \geq \frac{3\sigma}{2}> 0 \quad \forall \ t\in [0,T].
\end{equation}
Let us introduce the following notation:
\begin{equation*}
(\mc{U})_{\sigma} = \left\{x\in \mathbb{R}^3\mid \operatorname{dist}(x,\mc{U})<\sigma\right\},
\end{equation*}
for an open set $\mc{U}$ and $\sigma > 0$. We recall the following result \cite[Lemma 5.4]{MR3272367}:
Let $\sigma > 0$. There exists $\delta_0 >0$ such that for all $0< \delta \leq \delta_0$,
\begin{equation}\label{cond:col1}
\mc{S}^{\delta}(t) \subset (\mc{S}(t))_{\sigma/4} \subset (\mc{S}^{\delta}(t))_{\sigma/2},\quad \forall\ t\in [0,T].
\end{equation}
Note that condition \eqref{cond:col1} and the relation \eqref{delta-collision}, i.e., $\operatorname{dist}(\mc{S}^{\delta}(t),\partial \Omega) \geq 2\sigma> 0$ for all $t\in [0,T]$
imply our required estimate \eqref{final-collision}. Thus, we conclude the proof of \cref{exist:upto collision}.
\end{proof}
It remains to prove \cref{approx:test}.
The main difference between \cref{approx:test} and \cite[Proposition 5.1]{MR3272367} is the time regularity of the approximate test functions.
Since here we only have weak convergence of $u^{\delta}$ in $L^2(0,T;L^2(\Omega))$, according to \cref{sequential2} we have convergence of $\eta^{\delta}_{t,s}$ in $H^{1}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3))$. In \cite[Proposition 5.1]{MR3272367}, they have weak convergence of $u^{\delta}$ in $L^{\infty}(0,T;L^2(\Omega))$, which yields higher time regularity of the propagator $\eta^{\delta}_{t,s}$ in $W^{1,\infty}((0,T)^2; C^{\infty}_{loc}(\mb{R}^3))$. Still, the construction of the approximate function is essentially similar, which is why we skip the details and only present the main ideas of the proof here.
\begin{proof}[Proof of \cref{approx:test}]
The proof relies on the construction of the approximation $\phi^{\delta}_{\mc{S}}$ of $\phi_{\mc{S}}$ so that the jumps of the test functions at the interface are avoided and \eqref{cond1-good}--\eqref{cond2-good} hold.
The idea is to write the test functions in Lagrangian coordinates through the isometric propagator $\eta^{\delta}_{t,s}$ so that we can work on the fixed domain. Let $\Phi_{\mc{F}}$, $\Phi_{\mc{S}}$ and $\Phi^{\delta}_{\mc{S}}$ be the transformed quantities in the fixed domain related to $\phi_{\mc{F}}$, $\phi_{\mc{S}}$ and $\phi^{\delta}_{\mc{S}}$ respectively:
\begin{equation}\label{chofv}
\phi_{\mc{S}}(t,\eta^{\delta}_{t,0}(y)) = J_{\eta_{t,0}^{\delta}}\Big|_{y} (\Phi_{\mc{S}}(t,y)),\quad \phi_{\mc{F}}(t,\eta^{\delta}_{t,0}(y)) = J_{\eta_{t,0}^{\delta}}\Big|_{y} \Phi_{\mc{F}}(t,y)\quad\mbox{ and }\quad\phi^{\delta}_{\mc{S}}(t,\eta^{\delta}_{t,0}(y)) = J_{\eta_{t,0}^{\delta}}\Big|_{y} \Phi^{\delta}_{\mc{S}}(t,y),
\end{equation}
where $J_{\eta_{t,0}^{\delta}}$ is the Jacobian matrix of $\eta_{t,0}^{\delta}$. Note that if we define
\begin{equation*}
\Phi^{\delta}(t,y)=(1-\chi^{\delta}_{\mc{S}})\Phi_{\mc{F}} + \chi^{\delta}_{\mc{S}}\Phi^{\delta}_{\mc{S}},
\end{equation*}
then the definition of $\phi^{\delta}$ in \eqref{form:phi} gives
\begin{equation}\label{chofv1}
\phi^{\delta}(t,\eta^{\delta}_{t,0}(y)) = J_{\eta_{t,0}^{\delta}}\Big|_{y} (\Phi^{\delta}(t,y)).
\end{equation}
Thus, the construction of the approximation $\phi^{\delta}_{\mc{S}}$ satisfying \eqref{cond1-good}--\eqref{cond2-good} is equivalent to building the approximation $\Phi^{\delta}_{\mc{S}}$ so that there is no jump for the function $\Phi^{\delta}$ at the interface and the following holds:
\begin{equation*}
\Phi^{\delta}_{\mc{S}}(t,x)=\Phi_{\mc{F}}(t,x) \quad \forall\ t\in (0,T),\ x\in \partial\mc{S}_0,
\end{equation*}
and
\begin{equation*}
\Phi^{\delta}_{\mc{S}}(t,\cdot)\approx \Phi_{\mc{S}}(t,\cdot)\mbox{ in }\mc{S}_0\mbox{ away from a }\delta^{\vartheta}\mbox{ neighborhood of }\partial\mc{S}_0\mbox{ with }\vartheta >0.
\end{equation*}
Explicitly, we set (for details, see \cite[Pages 2055-2058]{MR3272367}):
\begin{equation}\label{decomposePhi}
\Phi^{\delta}_{\mc{S}} = \Phi^{\delta}_{\mc{S},1} + \Phi^{\delta}_{\mc{S},2},
\end{equation}
with
\begin{equation}\label{phis1}
\Phi^{\delta}_{\mc{S},1}= \Phi_{\mc{S}} + \chi (\delta^{-\vartheta}z) \left[(\Phi_{\mc{F}}-\Phi_{\mc{S}}) - ((\Phi_{\mc{F}}-\Phi_{\mc{S}})\cdot e_z) e_z\right],
\end{equation}
where $\chi : \mathbb{R} \rightarrow [0,1]$ is a smooth truncation function which is equal to 1 in a neighborhood of 0 and $z$ is a coordinate transverse to the boundary $\partial \mc{S}_0 = \{z=0\}$. Moreover, to make $\Phi^{\delta}_{\mc{S}}$ divergence-free in $\mc{S}_0$, we need to take $\Phi^{\delta}_{\mc{S},2}$ such that
\begin{equation*}
\operatorname{div}\Phi^{\delta}_{\mc{S},2}=-\operatorname{div}\Phi^{\delta}_{\mc{S},1}\quad\mbox{in }\mc{S}_0,\quad \Phi^{\delta}_{\mc{S},2}=0\quad\mbox{on }\partial\mc{S}_0.
\end{equation*}
Observe that the explicit form \eqref{phis1} of $\Phi^{\delta}_{\mc{S},1}$ yields
\begin{equation}\label{divphis1}
\operatorname{div}\Phi^{\delta}_{\mc{S},2}=-\operatorname{div}\Phi^{\delta}_{\mc{S},1} = -\chi(\delta^{-\vartheta}z)\operatorname{div}\left[(\Phi_{\mc{F}}-\Phi_{\mc{S}}) - ((\Phi_{\mc{F}}-\Phi_{\mc{S}})\cdot e_z) e_z\right].
\end{equation}
Thus, the expressions \eqref{phis1}--\eqref{divphis1} give us: for all $p<\infty$,
\begin{align}\label{Phi11}
\|\Phi^{\delta}_{\mc{S},1}-\Phi_{\mc{S}}\|_{H^1(0,T; L^p(\mc{S}_0))} &\leq C\delta^{\vartheta/p},
\\
\label{Phi12}
\|\Phi^{\delta}_{\mc{S},1}-\Phi_{\mc{S}}\|_{H^1(0,T; W^{1,p}(\mc{S}_0))} &\leq C\delta^{-\vartheta(1-1/p)},
\end{align}
and
\begin{equation}\label{Phi2}
\|\Phi^{\delta}_{\mc{S},2}\|_{H^1(0,T; W^{1,p}(\mc{S}_0))}\leq C\|\chi({\delta}^{-\vartheta}z)\operatorname{div}\left[(\Phi_{\mc{F}}-\Phi_{\mc{S}}) - ((\Phi_{\mc{F}}-\Phi_{\mc{S}})\cdot e_z) e_z\right]\|_{H^1(0,T; L^{p}(\mc{S}_0))} \leq C\delta^{\vartheta/p}.
\end{equation}
Using the decomposition \eqref{decomposePhi} of $\Phi_{\mc{S}}^{\delta}$ and the estimates \eqref{Phi11}--\eqref{Phi12}, \eqref{Phi2}, we obtain
\begin{align*}
\|\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}}\|_{H^1(0,T; L^p(\mc{S}_0))} &\leq C\delta^{\vartheta/p},\\
\|\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}}\|_{H^1(0,T; W^{1,p}(\mc{S}_0))} &\leq C\delta^{-\vartheta(1-1/p)}.
\end{align*}
Furthermore, we combine the above estimates with the uniform bound of the propagator $\eta^{\delta}_{t,0}$ in $H^1(0,T; C^{\infty}(\Omega))$ to obtain
\begin{align}\label{est:Phi}
\left\|J_{\eta_{t,0}^{\delta}}|_{y}(\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}})\right\|_{H^1(0,T; L^p(\mc{S}_0))} &\leq C\delta^{\vartheta/p},\\
\label{est:dPhi}
\left\|J_{\eta_{t,0}^{\delta}}|_{y}(\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}})\right\|_{H^1(0,T; W^{1,p}(\mc{S}_0))} &\leq C\delta^{-\vartheta(1-1/p)}.
\end{align}
Observe that due to the change of variables \eqref{chofv} and estimate \eqref{est:Phi}:
\begin{equation}\label{est1}
\|\chi^{\delta}_{\mc{S}}(\phi^{\delta}_{\mc{S}}-\phi_{\mc{S}})\|_{L^p((0,T)\times \Omega)}\leq C\|J_{\eta_{t,0}^{\delta}}|_{y}(\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}})\|_{L^p((0,T)\times \mc{S}_0)}\leq C\delta^{\vartheta/p}.
\end{equation}
Since
\begin{equation*}
\|\phi^{\delta}-\phi\|_{L^p((0,T)\times \Omega)}\leq \|(\chi^{\delta}_{\mc{S}}-\chi_{\mc{S}})\phi_{\mc{F}}\|_{L^p((0,T)\times \Omega)} + \|\chi^{\delta}_{\mc{S}}(\phi_{\mc{S}}^{\delta}-\phi_{\mc{S}})\|_{L^p((0,T)\times \Omega)} + \|(\chi^{\delta}_{\mc{S}}-\chi_{\mc{S}})\phi_{\mc{S}}\|_{L^p((0,T)\times \Omega)},
\end{equation*}
using the strong convergence of $\chi^{\delta}_{\mc{S}}$ and the estimate \eqref{est1}, we conclude that
\begin{equation*}
\phi^{\delta} \rightarrow \phi\mbox{ strongly in }L^p((0,T)\times \Omega).
\end{equation*}
We use estimate \eqref{Phi12} and the relation \eqref{chofv1} to obtain
\begin{equation*}
\|\phi^{\delta}\|_{L^p(0,T;W^{1,p}(\Omega))}\leq C\delta^{-\vartheta(1-1/p)}.
\end{equation*}
Moreover, the change of variables \eqref{chofv} and estimate \eqref{est:Phi} give
\begin{align}\label{timechi}
\begin{split} \|\chi^{\delta}_{\mc{S}}(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)(\phi^{\delta}-\phi_{\mc{S}})\|_{L^2(0,T;L^p(\Omega))} &\leq C\left\|\frac{d}{dt}\left(J_{\eta_{t,0}^{\delta}}\Big|_{y}(\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}})\right)\right\|_{L^2(0,T;L^p(\mc{S}_0))}\\
& \leq C\left\|J_{\eta_{t,0}^{\delta}}|_{y}(\Phi^{\delta}_{\mc{S}}-\Phi_{\mc{S}})\right\|_{H^1(0,T;L^p(\mc{S}_0))}\leq C\delta^{\vartheta/p}.
\end{split}
\end{align}
The above estimate \eqref{timechi}, the strong convergence of $\chi_{\mc{S}}^{\delta}$ to $\chi_{\mc{S}}$ in $C(0,T;L^p(\Omega))$ and the weak convergence of $P^{\delta}_{\mc{S}}u^{\delta}$ to $P_{\mc{S}} u$ in $L^{2}(0,T; C^{\infty}_{loc}(\mb{R}^3))$ give us
\begin{equation*}
(\partial_t + P^{\delta}_{\mc{S}}u^{\delta}\cdot\nabla)\phi^{\delta}\rightarrow (\partial_t + P_{\mc{S}}u\cdot \nabla)\phi \mbox{ weakly in } L^2(0,T;L^p(\Omega)),
\end{equation*}
where
\begin{equation*}
\phi^{\delta}=(1-\chi^{\delta}_{\mc{S}})\phi_{\mc{F}} + \chi^{\delta}_{\mc{S}}\phi^{\delta}_{\mc{S}}\quad\mbox{ and }\quad \phi=(1-\chi_{\mc{S}})\phi_{\mc{F}} + \chi_{\mc{S}}\phi_{\mc{S}}.
\end{equation*}
\end{proof}
\section*{Acknowledgment}
{\it \v S. N. and A. R. have been supported by the Czech Science Foundation (GA\v CR) project GA19-04243S. The Institute of Mathematics, CAS is supported by RVO:67985840.}
\section*{Compliance with Ethical Standards}
\section*{Conflict of interest}
The authors declare that there are no conflicts of interest.
\section{Why is classical physics (in)deterministic?}
It is generally accepted that classical physics (i.e., Newton's mechanics and Maxwell's electrodynamics) is \emph{deterministic}. Restating a famous argument due to Laplace (known as Laplace's demon), determinism is usually assumed to be the ``view that a sufficient knowledge of the laws of nature and appropriate boundary conditions will enable a superior intelligence to predict the future states of the physical world and to retrodict its past states with infinite precision'' \cite{determinism}.
Yet, stimulated by the development of statistical physics (which is taken to introduce indeterminacy as merely epistemic), one could find notable exceptions to this deterministic view, remarkably in the works of preeminent physicists the likes of L. Boltzmann, F. Exner, E. Schr{\"o}dinger and M. Born, who --admittedly from different standpoints-- all argued for genuine indeterminism in classical physics (see \cite{delsanto} and references therein). These doubts about classical determinism were fostered in the second half of the twentieth century, after the theory of chaotic systems was systematically developed and its implications fully understood \cite{ornstein, prigogine}.
However, it is only in recent years that new life has been breathed into the critique of determinism in classical physics, showing that the advocacy of determinism leads to severe conceptual difficulties based on information-theoretic arguments \cite{dowek, gisin1, blundell}, and that determinism might even be incompatible with the derivation of the second law of thermodynamics \cite{drossel}. In fact, the hypothetical ``superior intelligence'' (demon), supposedly able to perfectly predict the future, is required to have complete information about the state of the universe and then use it to compute the subsequent evolution. Recent developments of information theory and its application to physics (which mainly blossomed within the frameworks of quantum information and quantum thermodynamics) led to the conclusion that the abstract, mathematically well-formalized concept of information acquires a meaningful value in the natural sciences only if the information is embodied in a physical system (encoding), allowing it to be manipulated (computation) and transmitted (communication). As such, these processes are subject to the same limitations imposed by the laws of physics (\emph{Landauer's principle} \cite{landauer}). In the face of this, Laplace's demon, and hence determinism, leads to two categories of problems: the problem of \emph{infinities} and the problem of \emph{infinitesimals}.\footnote{Similar problems have been recently discussed in a more general context and without resort to information-theoretic arguments in Ref. \cite{ellisinf}.}
The former of these problems is directly related to the memory capability of the physical systems that are supposed to encode the information of the whole Universe, then to be manipulated to compute the subsequent evolution. About this, Blundell concludes: ``If such a demon were (even hypothetically) to be constructed in our physical world, it would be subject to physical constraints which would include a limit on the number of atoms it could contain, bounded from above by the number of particles in the observable Universe. [...] Hence there is insufficient physical resource in the entire Universe to allow for the operation of a Laplacian demon able to analyze even a relatively limited macroscopic physical system'' \cite{blundell}.\footnote{The problem with (physical) infinities is connected to the so-called \emph{Hilbert's Hotel paradox}, proposed by D. Hilbert in 1924 \cite{hilbert}. This paradox illustrates the possibility for a hypothetical hotel with (countably) infinite rooms, all of which already occupied, to allocate (countably) infinitely many more guests.}
The problem of infinitesimals, instead, is related to the question of whether it would be possible to know, even in principle, the necessary boundary conditions with infinite precision.\footnote{In what follows, boundary and initial conditions will be used interchangeably, for they are conceptually the same in our discussion. In fact, they both serve as necessary inputs to the (differential) dynamical equations in order to give predictions. Thus, if either of the initial or the boundary conditions are not determined with infinite precision, they affect the subsequent dynamics in the same way.} Moreover, do these infinite-precision conditions (i.e., with infinite predetermined digits) exist at all? As Drossel pointed out, this is related to the problem of determinism in so far as the ``idea of a deterministic time evolution represented by a trajectory in phase space can only be upheld within the framework of classical mechanics if a point in phase space has infinite precision'' \cite{drossel}. To address the problem of infinitesimals, and in doing so challenging determinism both at the epistemic and ontic levels, one can again use an argument that relates information and physics, namely the fact that finite volumes can contain only a finite amount of information (\emph{Bekenstein bound}, see \cite{gisin1}).
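For concreteness, one commonly quoted form of the Bekenstein bound (a standard formulation, recalled here for illustration rather than taken from \cite{gisin1}) limits the information $I$, in bits, that can be stored in a spherical region of radius $R$ containing total energy $E$:
\begin{equation*}
I \leq \frac{2\pi R E}{\hbar c \ln 2}.
\end{equation*}
Any finite region of space with finite energy can therefore carry only finitely many bits, and hence cannot encode the infinitely many predetermined digits that the \emph{principle of infinite precision} attributes to even a single real-valued physical quantity.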
In this respect, one should realize that classical physics is not inherently deterministic just because its formalism is a set of deterministic functions (differential equations), but rather its alleged deterministic character is based on the metaphysical, unwarranted assumption of ``infinite precision''. Such a hidden assumption can be formulated as a principle --tacitly assumed in classical physics-- which consists of two different aspects:
\textbf{\emph{Principle of infinite precision}}:\\
1. (Ontological) -- there exists an actual value of every physical quantity, with its infinite determined digits (in any arbitrary numerical base).\\
2. (Epistemological) -- although it might not be possible to know all the digits of a physical quantity (through measurements), it is possible to know an arbitrarily large number of digits.
In this paper, we further develop the argument put forward in Ref. \cite{gisin1}, wherein a concrete possibility to replace the \emph{principle of infinite precision} has been outlined. According to this view, the limits of this principle rely on the faulty assumption of granting a physical significance to mathematical real numbers. We would like to stress that such an assumption cannot in any way be justified at the operational level, as already stressed by Born as early as 1955: ``Statements like `a quantity x has a completely definite value' (expressed by a real number and represented by a point in the mathematical continuum) seem to me to have no physical meaning'' \cite{born}. Relaxing the assumption of the physical significance of mathematical real numbers allows one to regard classical physics as a fundamentally indeterministic theory, contrary to its standard formulation. The latter can be considered, in this view, a deterministic completion in terms of (tacitly) posited \emph{hidden variables}. This situation resembles (without being completely analogous to) the contraposition between the standard formulation of quantum mechanics --which considers indeterminism an irrefutable part of quantum theory-- and Bohm's \cite{bohm} or Gudder's \cite{gudder} hidden variable models, which provide a deterministic description of quantum phenomena by adding in principle inaccessible supplementary variables.
Before further discussing, in what follows, the arguments for alternative interpretations of classical physics without real numbers --and the issues that this can cause-- some general remarks on indeterminism seem due. It appears, in fact, that a common misconception concerning physical indeterminism is that --in the eyes of some physicists and philosophers-- it is taken to imply that any kind of regularity or predictive power looks unwarranted --the supporter of determinism would ask: how can you explain the incredible predictive success of our laws of physics without causal determinism? Yet, an indeterministic description of the world does not (at least not necessarily) entail a ``lawless indeterminism'', namely a complete disorder devoid of \emph{any} laws or regularities (quantum mechanics with its probabilistic predictions provides a prime example of these indeterministic regularities). We would like to define indeterminism through the sufficient condition of the existence of \emph{some} events that are not fully determined by their past states; in the words of K. Popper, ``indeterminism merely asserts that there exists at least one event (or perhaps, one kind of events [...]) which is not predetermined'' \cite{popper50}. Such a remark is important because, historically, classical mechanics allowed one to predict with tremendous precision the motion, for instance, of the planets in the Solar System, and this led Laplace to formulate his ideas on determinism, which became the standard view among physicists and remained for a long time unchallenged. In fact, thinking that physics is deterministic seems completely legitimate insofar as certain physical systems exhibit extremely stable dynamics --e.g., a harmonic oscillator (pendulum), the (Newtonian) gravitational two-body problem, and in general any integrable system. However, this justification of determinism can be challenged on the basis of two considerations.
On the one hand, as already remarked, the existence of \emph{some} very stable systems (for all practical purposes treated as deterministic) does not undermine the possibility of indeterminism in the natural world. On the other hand, in the last century a systematic study of chaotic systems --which are not integrable-- has been carried out, giving us good reasons to doubt determinism. Indeed, chaotic systems are not stable under perturbations, meaning that an arbitrarily small change in the initial conditions leads to a significantly different future behavior, thus making the principle of infinite precision even more operationally unjustified, and therefore representing a concrete challenge to determinism.\footnote{Sometimes a distinction is made between strong and weak determinism. The former can be intuitively defined as ``\emph{similar} initial conditions lead to \emph{similar} trajectories'' and it is fulfilled by any integrable system. On the other hand, weak determinism can be defined as ``\emph{identical} conditions lead to \emph{identical} trajectories''. This holds for classical chaotic systems; however, it is not empirically testable, for it would require the knowledge of the initial conditions with infinite precision.}
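This sensitivity can be made quantitative through the standard exponential-divergence picture of chaos (our notation, introduced here only for illustration): denoting by $\lambda>0$ the largest Lyapunov exponent of the system, an initial uncertainty $\delta_0$ on the state grows as
\begin{equation*}
\delta(t)\approx \delta_0\, e^{\lambda t},
\end{equation*}
so that the time over which predictions remain within a given tolerance $\Delta$ scales only logarithmically with the initial precision:
\begin{equation*}
t_{\mathrm{pred}}\approx \frac{1}{\lambda}\ln\frac{\Delta}{\delta_0}.
\end{equation*}
Gaining one additional decimal digit of precision on the initial conditions thus extends the prediction horizon by only a fixed increment $\ln(10)/\lambda$, which is why no finite improvement of measurements can ever substitute for the infinite precision that the deterministic reading requires.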
Incidentally, it is interesting to stress that, in the context of quantum physics, Bell's inequalities \cite{bell} have given us good reasons to believe, if not directly in indeterminism --which indeed cannot be confirmed or disproved within the domain of science (see further)-- that having at least one not predetermined event in the history of the Universe could have tremendous consequences on the subsequent evolution. In fact, an experimental violation of Bell's inequalities guarantees that if the inputs (measurement settings) were independent of the physical state shared by two distant parties, then the outcomes would be genuinely random (i.e., they cannot have predetermined values). Yet, the amount of random numbers generated in a Bell test can be greater than the number of the corresponding inputs (in terms of bits of information): Bell tests performed on quantum entangled states can be thought of as ``machines'' to increase the amount of randomness in the Universe (see also \cite{gisin2010}). Surprisingly enough, recent results \cite{renner, putz} showed that it is not even necessary to have a single genuinely random bit from the outset: it is sufficient to introduce an arbitrarily small amount of initial randomness (i.e., of measurement independence) to generate virtually unbounded randomness. Hence, if one single event in the past history of the Universe was not fully causally determined beforehand, there is an operationally well-defined procedure that allows one to arbitrarily multiply the amount of indeterministic events in the future. Namely, it would be enough to use the randomness of the one indeterminate event as input in a Bell test to extract more randomness (through the violation of a Bell inequality). The random outputs can then be used as new inputs for further Bell experiments, and the process can be repeated arbitrarily many times \cite{amplification}.
However, the initial arbitrarily small amount of randomness (or of indeterministic events) cannot be demonstrated by physics and its justification can only come from metaphysical arguments (see \cite{suppes}).
\section{Forms of indeterminism in classical and quantum physics}
Before proposing some possible models of indeterministic classical physics, in this section we briefly discuss some general features of deterministic and indeterministic theories. In doing so, we aim at clarifying possible similarities between classical and quantum physics. Both classical and quantum mechanics are, in fact, formalized by a set of differential equations (laws of motion) that govern the dynamics of systems, together with appropriate initial conditions (IC) that fix the free parameters of these equations. Thus, if one aims at eliminating determinism as an unfounded interpretational element, there seem to be different possibilities, involving either the laws of physics or the characterization of the IC:
\textbf{1.} \emph{The laws of motion are fundamentally stochastic}. In this case, however, we cannot speak of an interpretation of the theory; an actual modification of the formalism is required. In fact, in this case not only chaotic systems but also integrable ones would exhibit noisy outcomes, leading to experimentally inequivalent predictions. This case is the analogue of spontaneous collapse theories in quantum mechanics \cite{grw, gisincollapse, belljumps, cls}, which modify Schr{\"o}dinger's equation with additional non-unitary terms.
\textbf{2.} \emph{The IC cannot, in principle, be fully known}. Without any ontological commitment, one can take seriously the epistemological statement of the \emph{principle of infinite precision} and push it to the extreme, namely asserting that there are in-principle limits to the possibility of knowing (or measuring) certain quantities. This is what is entailed by the standard interpretation of the Heisenberg uncertainty principle, usually stated as: there is in quantum physics a fundamental limit to the precision with which canonically conjugate variables \emph{can be known}. Such an uncertainty can be introduced in classical physics as well, and would be characterized by a new natural constant $\epsilon$ (e.g., the standard deviation of a Gaussian function centered on the considered point). This would set an epistemic (yet fundamental) limit to the precision with which physical variables could be determined, as a classical analogue of Heisenberg's uncertainty relations in quantum theory. This viewpoint appears similar to the one proposed in Ref. \cite{drossel}. Notice, however, that since this approach is agnostic with respect to the underlying ontology, it is fully compatible with a realist position that takes this uncertainty to be an ontological indeterminacy.
\footnote{As a supporting argument for this fundamental epistemic limit, one can even think in terms of theoretical reduction (i.e., when a theory is supervened by a more fundamental one, of which it represents an approximation). In fact, determining positions in classical physics with higher and higher precision means accessing digits that are relevant at the microscopic scale, and thus Heisenberg's indeterminacy relations need to be applied (leading to the identification $\epsilon=\hbar$). However, it ought to be remarked that in certain classical chaotic systems the digits that are to become relevant are not necessarily the ones in the microscopic domain, but could be those which will become relevant only after a longer time (such as in weather forecasts). So more general arguments than theoretical reduction would be desirable.}
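As a merely illustrative formalization of such a fundamental uncertainty (ours, not spelled out in the references above), the state of knowledge of a variable $x$ could be described not by a point but by a Gaussian distribution of fixed width $\epsilon$ centered on the measured value $x_0$:
\begin{equation*}
p(x)=\frac{1}{\sqrt{2\pi}\,\epsilon}\,\exp\left(-\frac{(x-x_0)^{2}}{2\epsilon^{2}}\right),
\end{equation*}
with the new constant $\epsilon$ setting the scale below which no sharper determination of $x$ is available, whatever the quality of the measurement.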
\textbf{3.} \emph{The IC \emph{are} not fully determined}. One can think that the fundamental limit of precision in determining physical quantities is not merely epistemic, but is actually an objective, ontological indeterminacy that depends on the system and its interactions at a certain time. This view is the one that will be pursued in what follows, when we propose ways to remove real numbers $\R$ from the domain of physics. Although this case does not seem to be the analogue of any specific interpretation of quantum theory, it clearly goes in the direction of realist interpretations. It ought to be stressed, however, that if the initial conditions are not fully determined, this makes even (quantum) Bohmian mechanics indeterministic.
\section{Indeterministic classical physics without real numbers}
In Ref. \cite{gisin1}, a model of indeterministic classical mechanics has been sketched, which, while leaving the dynamical equations unchanged, proposes a critical revision of the assumptions on the initial condition. In this view, the standard interpretation of classical mechanics has always tacitly assumed as hidden variables the predetermined values taken by physical quantities in the domain of mathematical real numbers, $\R $. The physical existence of an infinite amount of predetermined digits could lead to unphysical situations, such as the aforementioned infinite information density, as explained in detail in Refs. \cite{dowek} and \cite{gisin1}.
In this section, we discuss some possible solutions to eliminate these unwanted features, namely possible ways of carrying out the relaxation of the postulate according to which physical quantities take values in the real numbers. These solutions are however intended to be merely different interpretations of classical mechanics, i.e. they ought to be empirically indistinguishable from the standard predictions (in the same way as interpretations of quantum mechanics are) \cite{baumann}.
\subsection{1. ``Truncated real numbers''.} A first possibility is to consider physical variables as taking values in a set of ``truncated real numbers''. This, as already noted by Born, would ensure empirical indistinguishability from standard classical physics: ``a statement like $x = \pi$ cm would have a physical meaning only if one could distinguish between it and $x = \pi_n$ cm for every $n$, where $\pi_n$ is the approximation of $\pi$ by the first $n$ decimals. This, however, is impossible; and even if we suppose that the accuracy of measurement will be increased in the future, $n$ can always be chosen so large that no experimental distinction is possible'' \cite{born}. If one, however, wishes to attribute an ontological value to such an interpretation, one has to identify $n$ with a new universal constant that, no matter how large, sets a limit to the length of physically significant numbers, ultimately bounding even the lifetime of the Universe, if time too is to be considered a physical quantity. Another problematic issue is that $n$ would depend on the units in which one expresses the considered physical variables. This leads us to consider a second possible solution.
\subsection{2. Rational numbers.} Another possibility is to consider that physical quantities take values in the rational numbers, $\Q$. Even if this sounds somewhat strange, one can argue that, in practice, physical measurements are in fact only described by rational numbers. For instance, a measurement of length is obtained by comparing a rod that has been carefully divided into equal parts (i.e., a ruler) with the object to be measured and determining the best fit within its (rational) divisions. And even probabilities are obtained as limits of frequencies of events' occurrences (i.e., ratios of counts). However, while rational numbers do eliminate the unwanted infinite information density, they do not seem to remove determinism, in so far as \emph{all} the digits are fully predetermined. Moreover, the use of rational numbers leads to what may be named ``Pythagorean no-go theorems''. Indeed, positing a physics based on rational numbers would rule out the possibility of constructing a physical object with the shape of a perfect square with unit edge or a perfect circle with unit diameter. In fact, by elementary mathematical theorems, their diagonal and circumference would measure $\sqrt2$ and $\pi$, respectively, and would hence be physically unacceptable.
Additionally, if one plugs into the equations of motion initial conditions and times that both take values in the rational numbers, the solutions are in general not rational numbers.
These problematic issues lead to consider yet another possible solution.
\subsection{3. ``Computable real numbers''.}
A further alternative is to restrict the domain of physically meaningful numbers from the (mathematical) real numbers to the proper subset of ``computable real numbers''; that is, to keep all real numbers except the irrational, uncomputable ones. In fact, even irrational, computable real numbers can encode at most the same amount of non-trivial information (in bits) as the length of the shortest algorithm used to output their bits (i.e., their \emph{Kolmogorov complexity}). In this model, uncomputable real numbers are instead substituted by genuinely random numbers, thus introducing fundamental randomness also in classical physics. These numbers, together with chaotic systems, lay the foundations of an alternative indeterministic classical physics, which removes the paradox of infinite information density. However, this proposal could be considered an \emph{ad hoc} solution, since it maintains a field of mathematical numbers as physically significant, but removes ``by hand'' those that are problematic (which admittedly are almost all of them).
\subsection{4. ``Finite information quantities'' (FIQs).}
Developing further the proposal in Ref. \cite{gisin1}, we put forward an alternative class of random numbers which are for all practical purposes (in terms of empirical predictions) equivalent to real numbers, but which actually have zero overlap with them (they are not a mathematical number field, nor a proper subset thereof). We refer to them as ``finite-information quantities'' (FIQs). In order to illustrate this possible solution to the problems with the principle of infinite precision, let us consider again the standard interpretation in greater formal detail. A physical quantity $\gamma$ (which may be the scalar parameter time, a universal constant, as well as a one-dimensional component of the position or of the momentum, etc.) is assumed to take values in the domain of real numbers, i.e., $\gamma \in \R$. Without loss of generality, but as a matter of simplicity, let us consider $\gamma$ to be between 0 and 1, and its digits (bits) to be expressed in binary base:
\begin{equation*}
\gamma=0.\gamma_1\gamma_2\cdots \gamma_j \cdots,
\end{equation*}
where each $\gamma_j\in\{0,1\}$, $\forall j\in \N^+$. This means that, since $\gamma \in \R$, its infinitely many bits are \emph{all} given at once, and each of them takes the value either 0 or 1.
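To make this notation concrete, here is a short illustrative sketch (ours, not part of the original argument) that generates the first $n$ binary digits of a number in $[0,1)$:

```python
def binary_digits(x, n):
    """Return the first n binary digits g_1 ... g_n of x in [0, 1),
    so that x = 0.g_1 g_2 ... in base 2."""
    digits = []
    for _ in range(n):
        x *= 2
        bit = int(x)        # the integer part is the next digit
        digits.append(bit)
        x -= bit
    return digits

# 0.625 = 1/2 + 1/8, i.e. 0.101 in base 2
# binary_digits(0.625, 3) -> [1, 0, 1]
```

For a generic real number the expansion never terminates, which is precisely the source of the infinite information content discussed above.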
In an indeterministic world, however, not all the digits should be determined at all times; yet we require this model to give the same empirical predictions as the standard one. We therefore require a physical quantity to have the first (most significant) $N$ digits fully determined --and to be the same as those that give the standard deterministic predictions-- at time $t$, and we write $\gamma(N(t))$, whereas the following infinite digits are not yet determined. This reads:
\begin{equation*}
\gamma \left(N(t)\right)=0.\gamma_1\gamma_2\cdots \gamma_{N(t)} ?_{N(t)+1}\cdots ?_k\cdots,
\end{equation*}
where each $\gamma_j\in\{0,1\}$, $\forall j\leq N(t)$, and the symbol $?_k$ here means that the $k$th digit is a not yet actualized binary digit (see further).
Despite the element of randomness introduced, the transition between the actualized values and the random values still to be realized does not need to be a sharp one. In fact, one can conceive an objective property that quantifies the (possibly unbalanced) disposition of every digit to take one of the two possible values, 0 or 1. This property is reminiscent of Popper's propensities \cite{popper},\footnote{\label{note}While propensities were for Popper an interpretation of mathematical probabilities proper, we are not here necessarily requiring them to satisfy Kolmogorov's axioms, as discussed in Ref. \cite{gisin2}.} and it can be seen as the element of objective reality of this alternative interpretation:
\textbf{Definition - \textit{propensities}}\\
There exist (in the sense of being ontologically real) physical properties that we call \emph{propensities} $q_j\in [0,1] \cap \Q$, for each digit $j$ of a physical quantity $\gamma(N(t))$. A propensity quantifies the tendency or disposition of the $j$th binary digit to take the value 1.
The interpretation of propensities can be understood starting from the limit cases. If the propensities are 0 or 1 the meaning is straightforward. For example, $q_j=1$ means that the $j$th digit will take value 1 with certainty. On the opposite extreme, if a bit has an associated propensity of 1/2, it means that the bit is totally random. Namely, if one were to measure the value of this bit, there would be an intrinsic property that makes it take the value 0 or 1 with equal likelihood (we do not use ``probability'' to avoid formal issues, see footnote \ref{note}). All the intermediate cases can then be constructed. For instance, a propensity $q_k=0.3$ means that there is an objective tendency of the $k$th digit to take the value 1, quantified by 0.3, and thus the complementary propensity of taking the value 0 would be 0.7 (how this actualization occurs is an open issue, as we discuss in the next section). We would like to stress that while we assume propensities to be an (ontic) objective property, at the operational level they lead to the measured (epistemic) frequencies; yet they supervene frequencies insofar as propensities can describe single-time events.
We posit that propensities take values in the domain of rational numbers, such that they contain only a finite amount of information. Hence, postulating them as an element of reality does not lead to the same information paradoxes as real numbers. It also follows from the definition that the propensities $q_j$ for the first $N(t)$ digits of a quantity $\gamma(N(t))$ at time $t$ are all either 0 or 1, i.e., $q_j\in\{0,1\}$, $\forall j\in[1,N(t)]$.
As a function of time, propensities must undergo a dynamical evolution. We envision more than one way to evolve propensities in time, although we do not propose an explicit model to describe this. On the one hand, one can think of a dynamical process similar to spontaneous collapse models of quantum mechanics. Admittedly, spontaneous collapse models require modifying the fundamental dynamical equation of quantum physics, the Schr{\"o}dinger equation; hence these models are not merely interpretations, but different testable theories. Nevertheless, for propensities this is not necessarily the case, because they are a postulated element of reality which is, however, not observable. Thus, even if it would be desirable to have an explicit form for the equations governing the dynamics of propensities, the measured values of physical (observable) quantities would evolve in the usual way. We therefore maintain that our proposed new interpretation is indeed an interpretation and not a different testable theory.
On the other hand, intuitionistic mathematics could be the tool to solve the issue of the evolution of propensities (see ``choice sequences'' below). In fact, one can start from an infinite sequence of completely random bits (or digits), and then the number representing a physical quantity evolves according to a law (a function of these random bits). However, although this law would describe the evolution of propensities, it is different from a standard physical law, for it is rather a different way to construct mathematical numbers --which in turn describe physical quantities-- in time.
Making use of propensities, we can now refine our definition of physical quantities:
\textbf{Definition - \textit{FIQs}}\\
A \emph{finite-information quantity} (FIQ) is an ordered list of propensities $\{ q_1, q_2, \cdots , q_j, \cdots \}$, that satisfies:\\
1. (necessary condition): The information content is finite, i.e. $\sum_j I_j < \infty$, where $I_j=1-H(q_j)$ is the information content of the propensity, and $H$ is the binary entropy function of its argument. This ensures that the information content of FIQs is bounded from above;\\
2. (sufficient condition): After a certain threshold, all the bits are completely random, i.e. $\exists M(t) \in \N$ such that $q_j = \frac{1}{2}, \ \ \forall j>M(t)$.\\
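As a purely numerical illustration of the necessary condition (a sketch of ours, with an arbitrary example list of propensities), note that each fully determined bit contributes $I_j = 1$ bit, while the completely random tail with $q_j = \frac{1}{2}$ contributes nothing, however long it is:

```python
import math

def binary_entropy(q):
    """Binary (Shannon) entropy H(q) in bits, with H(0) = H(1) = 0."""
    if q in (0.0, 1.0):
        return 0.0
    return -q * math.log2(q) - (1 - q) * math.log2(1 - q)

def information_content(propensities):
    """Total information sum_j I_j, with I_j = 1 - H(q_j)."""
    return sum(1.0 - binary_entropy(q) for q in propensities)

# A FIQ-like list: three determined bits, two partially biased bits,
# and a (truncated) tail of fully random bits with q = 1/2.
fiq = [1.0, 0.0, 1.0, 0.3, 0.9] + [0.5] * 1000
total = information_content(fiq)
```

The total stays finite (here about $3.65$ bits) no matter how many fully random digits are appended, in contrast with a generic real number.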
It ought to be stressed that this view grants a prior fundamentality to the potential property of becoming actual (a list of propensities, a FIQ), more than to the already actualized number (a list of determined bits). In fact, the analogue of a \emph{pure state} in this alternative interpretation of classical physics would be the collection of all the FIQs associated with the dynamical variables (i.e., the list of the propensities of each digit). Namely, this represents the maximal piece of information regarding a physical system. Yet, even having access to this knowledge (which is admittedly not possible, due to the fact that propensities are not measurable) would lead to in-principle unpredictable, different evolutions. Thus, two systems that are identical at a certain instant of time (in the sense that they are in the same pure state, i.e., the propensities associated to their variables are all the same) will have, in general, different observable behaviors at later times. However, the merit of this view is that the bits are realized univocally and irreversibly as time passes, while the information content of a FIQ is always bounded, contrary to that of a real number. A physical quantity $\gamma$ reads in this interpretation as follows:
\begin{equation*}
\gamma \left(N(t), M(t)\right)=0.\underbrace{\gamma_1\gamma_2\cdots \gamma_{N(t)}}_{\textrm{determined }\gamma_j\in \{0, 1\}} \overbrace{?_{N(t)+1}\cdots ?_{M(t)}}^{?_k\textrm{, with } q_k\in(0, 1)}\underbrace{?_{M(t)+1}\cdots}_{?_l\textrm{, with } q_l=\frac{1}{2}}.
\end{equation*}
Notice that none of the FIQs is a mathematical number; rather, they capture the tendency (propensity) of each bit of a physical quantity to take the value 0 or 1 at the following instant in time. This admittedly leads to problematic issues, such as the problem of how and when the actualization of the digits from their propensities takes place: it thus introduces the analogue of the quantum measurement problem also in classical physics (see further).
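Although no explicit actualization mechanism is proposed here, the following toy sketch (entirely ours, with an arbitrary choice of propensities and a spontaneous, one-digit-per-step rate) may help visualize how bits could become determined as time passes:

```python
import random

def actualize_next_digit(bits, propensities, rng):
    """Actualize the first not-yet-determined digit: draw 0 or 1
    according to its propensity, then freeze that propensity."""
    n = len(bits)                      # digits determined so far, N(t)
    q = propensities[n]
    bit = 1 if rng.random() < q else 0
    bits.append(bit)
    propensities[n] = float(bit)       # propensity collapses to 0 or 1

rng = random.Random(2019)
propensities = [1.0, 0.0, 0.7, 0.5, 0.5]  # first two digits already determined
bits = [1, 0]                              # gamma = 0.10?_3 ?_4 ?_5 (binary)
for _ in range(3):                         # time passes, N(t) grows
    actualize_next_digit(bits, propensities, rng)
```

After the three steps all five propensities have collapsed to 0 or 1, i.e., $N(t)$ has grown from 2 to 5; nothing in this toy model should be read as a commitment to a specific dynamics for the propensities.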
Moreover, FIQs partly even out the fundamental differences between classical and quantum physics, making both of them indeterministic (and making so even the Bohmian interpretation).\footnote{There is yet another deterministic interpretation of quantum mechanics, the so-called \emph{many-worlds interpretation}, which grants physical reality to the wave function of the Universe, which always evolves unitarily. If FIQs are introduced in that interpretation, then the realization of \emph{all} the values of the bits would actually take place, each of them being real in a different ``world''.} Table \ref{table} compares some possible combinations of deterministic and indeterministic interpretations of quantum and classical physics.
\begin{table*}[]
\centering
\begin{tabular}{c|c|c|}
\cline{2-3}
& Classical & Quantum \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\rotatebox[origin=c]{90}{Indeterministic}}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}IC $\in$ FIQs\\ Newton's equation\end{tabular}} & \begin{tabular}[c]{@{}c@{}}$|\psi\rangle \in \mathcal{L}^2(\R^N)$ \\Measurement postulate\end{tabular} \\ \cline{3-3}
\multicolumn{1}{|c|}{} & & \begin{tabular}[c]{@{}c@{}}IC (position) $\in$ FIQs and $|\psi\rangle \in \mathcal{L}^2(\R^N)$\\ Bohm's guidance equation\\ admitted\end{tabular} \\ \hline
\multicolumn{1}{|c|}{\multirow{2}{*}{\rotatebox[origin=c]{90}{Deterministic}}} & \multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}IC $\in \R$\\ Newton's equation\end{tabular}} & \begin{tabular}[c]{@{}c@{}}IC (position) $\in \R$ and $|\psi\rangle \in \mathcal{L}^2(\R^N)$\\ Bohm's guidance equation\\ Schr{\"o}dinger's equation\end{tabular} \\ \cline{3-3}
\multicolumn{1}{|c|}{} & & Many worlds interpretation (?) \\ \hline
\end{tabular}
\caption{\small{A table comparing deterministic and indeterministic interpretations of classical and quantum physics. Note that the substitution of FIQs in the place of real numbers makes not only classical physics indeterministic, but also Bohm's interpretation of quantum physics (which is usually taken to restore determinism).}}
\label{table}
\end{table*}
\subsection{5. ``Choice sequences''.}
Finite-information quantities are not numbers in the usual sense, because their digits are not all given at once. On the contrary, the ``bits'' of FIQs evolve as time passes: they start from the value $\frac{1}{2}$ and evolve until they acquire a bit value of either 0 or 1. In a nutshell, FIQs are processes that develop in time. Interestingly, in intuitionistic mathematics the continuum is filled by ``choice sequences'', as Brouwer, the father of intuitionism, and his followers named them \cite{Brouwer1948}. This is not the place to present intuitionistic mathematics (see, e.g., \cite{IndeterminateNumbersPosy}), but let us emphasize that this alternative to classical (Platonistic) mathematics allows one to formalize ``dynamical numbers'' that closely resemble our FIQs \cite{Troelstra}. Interestingly, using the language of intuitionistic mathematics makes it much easier to talk of indeterminism \cite{NGHiddenReals}.
\section{Strong emergence}
The argument for determinism seems to rely, to a certain extent, on the tacit assumption of reductionism in its stronger form of microphysicalism, i.e., the view that every entity and phenomenon is ultimately reducible to fundamental interactions between elementary building blocks of physics (e.g., particles). In fact, in a completely deterministic picture, every particular phenomenon can be traced back to the interactions between its primitive components, along a (finite) chain of causally predetermined events. In this way any form of strong emergence seems to be ruled out, becoming only apparent (i.e., a weak or epistemic emergence). On the other hand, admitting genuine randomness in the universe allows, in our opinion, the possibility of strong emergence.\footnote{For a definition of strong emergence, see, for instance, Ref. \cite{chalmers}: ``We can say that a high-level phenomenon is strongly emergent with respect to a low-level domain when the high-level phenomenon arises from the low-level domain, but truths concerning that phenomenon are not deducible even in principle from truths in the low-level domain''.}\\
As a concrete example, consider the kinetic theory of gases. If one starts from a molecular description of the ideal gas, from the perspective of standard, deterministic classical mechanics, the stochasticity is only epistemic (i.e. only an apparent effect due to the lack of complete information regarding positions and momenta of every single molecule). Thus, the deterministic behavior of the law of the ideal gas is not expected to be a strong emergent feature, but solely a retrieving at the macroscopic scale of the fundamental determinism of the microscopic components.
In the perspective of the alternative indeterministic interpretation (based on FIQs), instead, the deterministic law of the ideal gas, ruling the behavior at the macroscopic level, emerges as a novel and not reducible feature, from fundamental randomness.\footnote{It is true that also the law of the ideal gas would not be \emph{perfectly} deterministic in a FIQ-based physics, however its stability makes it almost deterministic for all practical purposes, whereas at the microscopic level, chaotic behaviors multiply the fundamental uncertainty of the single molecules.}
Notice that the historical debate on the apparent incompatibility between Poincar\'e's recurrence theorem and Boltzmann's kinetic theory of gases does not arise in the framework of FIQs. Poincar\'e's recurrence theorem, in fact, states that continuous-state systems (i.e., systems whose state variables change continuously in time) return to an arbitrarily small neighborhood of the initial state in phase space. However, Poincar\'e's theorem relies on the fact that the initial state is perfectly determined in phase space (i.e., it is a mathematical point identified by a set of coordinates which take values in the real numbers). Thus, in a FIQ-based alternative physics the theorem simply cannot be derived. In fact, the FIQ interpretation features genuinely irreversible physical processes.
Similarly, Drossel has recently pointed out that, in a physics where it is impossible to determine points of phase space with infinite precision, ``the time evolution of thermodynamics is undetermined by classical mechanics [...]. Thus, the second law of thermodynamics is an emergent law in the strong sense; it is not contained in the microscopic laws of classical mechanics'' \cite{drossel}.
At this point one should ask oneself whether there are examples of emergence that possibly go beyond some form of the law of large numbers. Admittedly, we are unsure about this. Clearly, in all indeterministic physical theories, the law of large numbers will play an important role and lead to some stability and hence to some form of determinism at the larger scale (or higher-level description). It seems that this question is closely related to possible top-down causation, the topic of the next section.
\section{Top-down causation}
The idea of strong emergence, including emergent determinism, is related to the concept of ``top-down causation'' \cite{topdown, topdown2}. In this view, microphysicalism is not necessarily rejected ontologically (i.e., it admits that complex structures are hierarchical modular compositions of simpler ones), but the claim that the behavior of macroscopic events is fully determined by the interactions of the microscopic entities is revised. Top-down causation maintains that the interactions between microscopic entities do not causally supervene the macroscopic phenomena, but rather posits a mutual interaction where the macroscopic (strongly emergent) laws also impose constraints on the behavior of their constituents. Note that top-down causation requires indeterminism (at least at the lower level of the constituents) to be in principle conceivable \cite{drosselnew}; this was already remarked by Popper, when stating: ``{[A] higher level may exert a dominant influence upon a lower level}. For it seems that, were the universe \emph{per impossibile} a perfect determinist clockwork, there would be no heat production and no layers and therefore no such dominating influence would occur. This suggests that the emergence of hierarchical levels or layers, and of an interaction between them, depends upon a fundamental indeterminism of the physical universe. Each level is open to causal influences coming from lower and from higher levels'' \cite{poppernew}.
Concerning indeterministic interpretations of physical theories, top-down causation could help to understand how the determination of dynamical variables (i.e., the actualization of their values) occurs, namely the reason why --and under what circumstances-- a single definite value is realized among all the possible ones. In the FIQ-based indeterministic interpretation of classical physics introduced here, this translates into understanding how the bits of physical variables become fully determined, namely how their propensities become either 0 or 1. We envision two possible mechanisms that could explain the actualization of the variables:
\textit{1. The actualization happens spontaneously as time passes}. This view is compatible with reductionism and does not necessarily require any effects of top-down causation. Note that this mechanism resembles, in the context of quantum mechanics, objective collapse models such as ``continuous spontaneous localization'' (CSL) \cite{gisincollapse, cls}.
\textit{2. The actualization happens when a higher level requires it}. This means that when a higher level of description (e.g., the macroscopic measurement apparatus) requires some physical quantity pertaining to the lower-level description to acquire a determined value, then the lower level must get determined. In quantum mechanics a similar explanation is provided by the Copenhagen interpretation and, more explicitly, by the model in Ref. \cite{topdown2}.
In fact, these mechanisms are strongly related to what has been discussed at length in the context of quantum theory, namely the long-standing ``quantum measurement problem''. This comprises the problem of ``explaining why a certain outcome --as opposed to its alternatives-- occurs in a particular run of an experiment'' \cite{brukner}. In fact, some of the most commonly accepted interpretations of quantum mechanics (e.g., the Copenhagen interpretation) uphold the view that it is the act of measurement that imposes on microscopic (quantum) objects the actualization of one determined value, out of the possible ones.
Note that, although it has already been remarked in the literature that every indeterministic theory has to deal with a ``measurement problem'' (see, e.g., \cite{brukner}), there has hardly been any consideration of this issue in the context of indeterministic theories other than quantum mechanics. In the next subsection we will discuss what we call, by analogy, the ``classical measurement problem''. We will then draw a connection with top-down causation, and show how this could help to shed light on this problem.
\subsection{The ``classical measurement problem''}
It is a well-corroborated experimental fact that if a quantity is measured twice with the same instrument, a certain number of digits remains unchanged, and it is essential to scientific investigation that such knowledge is intersubjectively available (up to the digit corresponding to the measurement accuracy). Moreover, if a more accurate measurement instrument is utilized, we expect not only the previous digits to remain unchanged, but also some new digits to be determined, which then become intersubjectively available. How can this stability of the measured digits be reconciled with a fundamental uncertainty in the determination of a physical quantity? How does potentiality become actuality?
In the proposed FIQ-based indeterministic interpretation of classical physics, too, one has to carefully define how the digits of physical quantities realize themselves from the propensity of taking that (or another) possible value. Consider for example the chaotic systems analyzed in Ref. \cite{gisin1} (a simplified version of the \emph{baker's map}). One can then think of a ``faster'' dynamics that, at every time step, shifts the bits toward the more significant positions not by one digit but by, say, 1000 digits (or any other arbitrarily large finite number). This clearly entails that the rate of change of propensities depends on the dynamical system under consideration, and cannot be thought of as a universal constant of ``spontaneous'' actualization. A possible solution is to introduce a model of measurement that makes the digits become actual (and therefore stable) up to the corresponding precision. This clearly resembles the solution to the quantum measurement problem provided by \emph{objective collapse models}, such as CSL \cite{gisincollapse, cls} or GRW \cite{grw, belljumps}. The latter model, indeed, posits a modification of the standard Schr{\"o}dinger equation which accounts for a spontaneous random ``collapse'' of the wave function, occurring at a certain natural rate. Under certain assumptions, this model leads to a mechanism in which the rate of spontaneous collapse increases linearly with the number of components of a system (thus, during a measurement, the wave function of a microscopic system in contact with a macroscopic apparatus collapses extremely fast). An analogous solution can in principle be proposed for the rate of actualization of the propensities that define the FIQs. However, this seems to mean that the dynamical equations need to be modified (in the same fashion as the GRW model modifies the Schr{\"o}dinger equation), thus leading to a different formalism and not only an interpretation.
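The digit-shift dynamics invoked here can be sketched as follows (our illustration of the simplified baker's map of Ref. \cite{gisin1}, $x \mapsto 2x \bmod 1$, acting on binary expansions): each step discards the leading digit, so that one formerly insignificant (and possibly not yet determined) digit moves up by one position.

```python
def shift_step(digits):
    """One step of the dyadic shift x -> 2x mod 1 acting on the binary
    expansion: drop the leading digit; all the others move up."""
    return digits[1:]

# Ten determined digits followed by ten not-yet-actualized ones ('?').
state = list("1011001010") + ["?"] * 10
for _ in range(10):
    state = shift_step(state)
# after ten steps every initially determined digit has been consumed,
# and only undetermined digits occupy the significant positions
```

A ``faster'' dynamics would simply drop, say, 1000 digits per step, which is why the rate at which undetermined digits become relevant is system-dependent.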
Coming back to top-down causation, this could explain why the determined digits remain stable every time one performs a measurement. In fact, the act of measurement can be regarded as a direct action performed at the higher level which imposes on the lower level to become determinate. This is very similar to what is taken to be the solution to the quantum measurement problem within the Copenhagen interpretation, wherein the higher level is the macroscopic measurement apparatus, whereas the lower level is the measured microscopic system. However, this kind of solution lacks a clear definition of what is to be considered a measurement and of how to identify higher and lower levels of description.
As a matter of fact, the ``classical measurement problem'' introduced here remains so far unresolved, as does the quantum measurement problem and, more generally, the problem of the actualization of physical variables in any indeterministic theory. Yet, it is desirable that the measurement problem should find room in the debate on the foundations of physics, in discussions more general than those centered on quantum mechanics only.
\section{Conclusions}
We have discussed arguments --primarily based on the modern application of information theory to the foundations of physics-- against the standard view that classical physics is necessarily deterministic. We have also discussed concrete perspectives for reinterpreting classical physics in an indeterministic fashion, and we have compared our indeterministic proposals with some interpretations of quantum physics. It seems clear that the empirical results of both classical and quantum mechanics can fit in either a deterministic or an indeterministic framework. Furthermore, there are compelling arguments (see, e.g., \cite{gisin1, suppes, wendl}) to support the view that the same conclusion can be reached for any given physical theory --a trivial way to make an indeterministic theory fully determined is to ``complete'' the theory with the results of every possible experiment that can be performed.
In conclusion, although the problem of determinism versus indeterminism is in our opinion central to science, the hope to resolve this problem within science itself has faded, and this is ultimately to be decided on the basis of metaphysical arguments.
\subsection{Acknowledgments}
We would like to thank the participants in the Workshop ``Experiencing Reality Directly: Philosophy and Science'', held in Jerusalem on May 20--22, 2019, for many discussions that provided interesting inputs. We also thank George Ellis, Veronica Baumann, Arne Hansen and Stefan Wolf for useful comments. FDS acknowledges financial support through a DOC Fellowship of the Austrian Academy of Sciences.
As a rule, group-theoretical methods simplify the description of complicated physical systems. In this respect Lie algebras are of special interest. Very elegant examples of their applications were found in quantum mechanics within the concept of spectrum-generating, or dynamical, (super)symmetry algebras \ci{r1}.
Not long ago a wide attention was drawn to the deformations
of the Lie algebras known nowadays under the name of
quantum algebras, or, quantum groups \ci{r2}.
Physical models where a coupling
constant is related to the deformation parameter $q$ and where
Hamiltonian commutes with the generators of the quantum algebra
$su_q(2)$ were found on the linear lattice \ci{r3}. Thus,
a kind of equivalence between perturbing the interaction of
``particles'' and deforming the symmetry algebra
governing the dynamics was demonstrated.
Biedenharn and Macfarlane introduced $q$-deformed harmonic oscillator
as a building block of the quantum algebras \ci{r4,r5}.
Many mathematical applications
of the $q$-oscillators appeared since that time \ci{r6,r7}
(an overview of the algebraic aspects of the $q$-analysis can be
found in Ref.\ci{r8}). Physical models of $q$-oscillators can be divided
into the three classes. The first class, inspired by Ref.\ci{r3},
is related to
the lattice systems \ci{r7,r9}. In the second class dynamical quantities
are defined on ``quantum planes'' -- spaces with non-commutative
coordinates \ci{r10}. Although Schr\" odinger equation in this approach
looks similar to the standard one, an explicit representation of it in
terms of the normal calculus results in the non-local finite-difference
equation. The parameter $q$, responsible for the non-commutativity of the
quantum space coordinates, serves as a non-local scale
on continuous manifolds and, therefore, the basic physical principles
are drastically changed in this type of deformation.
We shall not pursue here the routes of these two groups of models.
The third -- dynamical symmetry realization class -- is purely
phenomenological: one deforms
already known spectra by choosing the Hamiltonian as some combination of
the quantum algebra generators \ci{r11}, or, as an anticommutator of the
formal $q$-oscillator creation and annihilation operators \ci{r4,r9}.
This application, in fact, does not have a straightforward
physical meaning because of the non-uniqueness of the deformation
procedure. Even within the standard physical concepts exact knowledge
of the spectrum is not enough for the reconstruction of the potential.
With a given potential possessing an infinite
number of bound states one can associate another potential with
infinitely many independent parameters and the same spectrum \ci{r12}.
Therefore the physics behind such deformations is not completely
fixed. One should precisely describe what kind of
interaction between the excitations leads to
peculiar change of the spectrum. Some analysis
of inverse problems can be found in Ref.\ci{r13}, where
simulations of periodic potentials with a prescribed
band structure were performed, and in Ref.\ci{r14}, where the reconstruction of
an artificial symmetric non-oscillating potential generating a
prescribed discrete spectrum in the WKB approximation was considered.
$q$-Analogs of the harmonic oscillators were also used for the
description of small violations of the
statistics of identical particles \ci{r15,r16}.
Recently the author proposed a new approach to the problem of the
quantum algebra symmetries in physical models \ci{r17}, namely,
to take exactly solvable Schr\" odinger potentials and deform
their shape (e.g., by
changing the Taylor series expansion coefficients) in such a way that
the problem remains exactly solvable but the spectrum acquires
complicated functional character. This idea was stimulated by the
Shabat's one-dimensional reflectionless potential showing peculiar
self-similar behavior and describing a system of
infinitely many solitons \ci{r18}.
The latter was identified in Ref.\ci{r17} as a representative of a general
two parameter potential unifying via the $q$-deformation conformally
invariant harmonic and Coulomb, Rosen-Morse, and P\"oschl-Teller
potentials. The hidden quantum algebraic symmetry was claimed
to be responsible for the exponential character of the spectrum. In
comparison with the third group of models discussed above, the present
approach is a direct one -- the physical interaction is fixed first,
and the question of the quantum algebra behind the prescribed
rule of $q$-deformation is secondary.
In this paper we extend further the results of Ref.\ci{r17} and propose
general deformation of the supersymmetric (SUSY) quantum mechanics
\ci{r19}. We define
the $q$-SUSY algebra and provide an explicit realization of it on the
Hilbert space of square integrable functions. The
degeneracies of standard SUSY models are lifted.
The set of self-similar potentials naturally appears as a particular
example of a $q$-SUSY system obeying the dynamical symmetry algebra
$su_q(1,1)$. In particular, the raising and lowering operators
entering the definition of the supercharges are shown to generate
$q$-oscillator algebra of Biedenharn and Macfarlane.
The whole construction is based on ordinary commutative analysis and has
many physical applications.
\section{SUSY quantum mechanics}
The simplest $N=2$ SUSY quantum mechanics is fixed by the following
algebraic relations between the Hamiltonian of a system $H$ and
supercharges $Q^{\dag} ,\, Q$ \ci{r19}
\begin{equation}
\{Q,Q^{\dag}\}=H,\quad Q^2=(Q^{\dag})^2=0, \quad [H,Q]=[H,Q^{\dag}]=0.
\lab{e1}
\end{equation}
All operators are supposed to be well defined on the relevant Hilbert
space. Then, independently on the explicit realizations the spectrum
is two-fold degenerate and the ground state energy is semipositive,
$E_{vac}\geq 0$.
Let us consider a particle moving in one-dimensional space. Below,
the coordinate $x$ is tacitly assumed to cover the whole line,
$x\in {\it R}$, unless it is explicitly stated that it belongs to some cut.
Standard representation of the algebra \re{e1} contains one free
superpotential $W(x)$ \ci{r20}:
\begin{equation}
Q=\left(\matrix{0&0\cr A&0\cr}\right), \quad
Q^{\dag}=\left(\matrix{0&A^{\dag}\cr 0&0\cr}\right), \quad
A=(p+iW(x))/\sqrt2,\quad [x,p]=i,
\lab{e2}
\end{equation}
\begin{equation}
H=\left(\matrix{H_+&0\cr 0&H_-\cr}\right)=
\left(\matrix{A^{\dag} A&0\cr 0&A A^{\dag}\cr}\right)
=\fac12(p^2+W^2(x)+W^\prime(x)\sigma_3),
\lab{e3}
\end{equation}
$$W^\prime(x)\equiv {d\over dx} W(x), \qquad
\sigma_3=\left(\matrix{1&0\cr 0&-1\cr}\right).$$
It describes a particle with two-dimensional internal space the basis
vectors of which can be identified with the spin ``up'' and ``down'' states.
The subhamiltonians $H_\pm$ are isospectral as a result of the
intertwining relations
\begin{equation}
H_+ A^{\dag}=A^{\dag} H_-,\qquad A H_+=H_- A.
\lab{e4}
\end{equation}
The only possible difference concerns the lowest level. Note that the
choice $W(x)=-x$ corresponds to the harmonic oscillator problem and
then $A^{\dag},\, A$ coincide with the bosonic creation and
annihilation operators $a^{\dag},\,a$ which satisfy the algebra
\begin{equation}
[a,a^{\dag}]=1,\qquad [N,a^{\dag}]=a^{\dag},\qquad [N,a]=-a,
\lab{e5}
\end{equation}
where $N$ is the number operator, $N=a^{\dag} a$. This, and another
particular choice, $W(x)=\lambda/x$, correspond to the conformally
invariant dynamics \ci{r21}.
\section{$q$-Deformed SUSY quantum mechanics}
Now we shall introduce the tools needed for the quantum algebraic
deformation of the above construction. Let $T_q$ be the smooth $q$-scaling
operator defined on continuous functions
\begin{equation}
T_q f(x)=f(qx),
\lab{e6}
\end{equation}
where $q$ is a real non-negative parameter. Evident properties of this
operator are listed below
$$ T_q f(x)g(x)=[T_q f(x)][T_q g(x)],\qquad
T_q {d\over dx}=q^{-1}{d\over dx} T_q, $$
\begin{equation}
T_q T_p=T_{qp},\qquad T^{-1}_q=T_{q^{-1}},\qquad T_1=1.
\lab{e7}
\end{equation}
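As an aside (our own illustrative check, not part of the original text), the multiplicativity, group law, and inverse in \re{e7} can be spot-checked numerically; the helper name \texttt{T} below is our notation for $T_s$ acting on a function:

```python
from math import exp, isclose

def T(f, s):
    """q-scaling operator: (T_s f)(x) = f(s * x). Helper name is ours."""
    return lambda x: f(s * x)

f = lambda x: x**2 + 1
g = lambda x: exp(-x)
q, x0 = 0.7, 1.3

# multiplicativity: T_q(f g) = (T_q f)(T_q g)
assert isclose(T(lambda x: f(x) * g(x), q)(x0), T(f, q)(x0) * T(g, q)(x0))

# group law and inverse: T_q T_p = T_{qp},  T_q^{-1} = T_{1/q}
p = 1.9
assert isclose(T(T(f, q), p)(x0), T(f, q * p)(x0))
assert isclose(T(T(f, q), 1 / q)(x0), f(x0))
```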
On the Hilbert space of square integrable functions ${\cal L}_2$ one has
\begin{equation}
\int_{-\infty}^{\infty} \phi^*(x)\psi(qx)dx=
q^{-1}\int_{-\infty}^{\infty} \phi^*(q^{-1}x)\psi(x)dx,
\lab{e8}
\end{equation}
from which the hermitian conjugate of $T_q$ can be found
\begin{equation}
T_q^{\dag}=q^{-1} T_q^{-1},\qquad \quad (T_q^{\dag})^{\dag}=T_q.
\lab{e9}
\end{equation}
As a result, $\sqrt{\, q}\, T_q$ is a unitary operator.
Because we take wave functions to be infinitely differentiable,
an explicit realization of $T_q$ is provided by the operator
\begin{equation}
T_q=e^{\ln q\, x\,d/dx}=q^{x\,d/dx}.
\lab{e10}
\end{equation}
Expanding \re{e10} into a formal series and using integration by parts,
one can prove relations \re{e9} on a finite coordinate cut as well, because
the wave functions vanish on the boundaries.
Let us define the $q$-deformed factorization operators
\begin{equation}
A^{\dag}={1\over \sqrt2}\, (p-iW(x))\,T_q, \qquad
A={q^{-1}\over \sqrt2}\, T_q^{-1} (p+iW(x)),
\lab{e11}
\end{equation}
where $W(x)$ is an arbitrary function and for convenience we use
the same notation as in the undeformed case \re{e3}. $A$ and $A^{\dag}$
are hermitian conjugates of each other on the ${\cal L}_2$. Now one has
\begin{eqnarray}
A^{\dag} A&=&\fac12 q^{-1} (p^2+W^2(x)+W^\prime(x))\equiv q^{-1} H_+,
\lab{e12} \\
A\, A^{\dag}&=&\fac12q^{-1} T_q^{-1}(p^2+W^2(x)- W^\prime(x))T_q
\nonumber \\
&=&\fac12q\,(p^2+q^{-2}W^2(q^{-1}x) - q^{-1}W^\prime (q^{-1}x))
\equiv q H_-.
\lab{e13}
\end{eqnarray}
We define $q$-deformed SUSY Hamiltonian and supercharges to be
\begin{equation}
H=\left(\matrix{H_+&0\cr 0&H_-\cr}\right)
=\left(\matrix{qA^{\dag} A&0\cr 0&q^{-1}A A^{\dag}\cr}\right),\qquad
Q=\left(\matrix{0&0\cr A&0\cr}\right),\quad
Q^{\dag}=\left(\matrix{0&A^{\dag}\cr 0&0\cr}\right).
\lab{e14}
\end{equation}
These operators satisfy the following $q$-deformed version
of the $N=2$ SUSY algebra
\begin{equation}
\{Q^{\dag},Q\}_q= H, \quad \{Q,Q\}_q=\{Q^{\dag},Q^{\dag}\}_q=0,\quad
[H,Q]_q=[Q^{\dag}, H]_q=0,
\lab{e15}
\end{equation}
where we introduced $q$-brackets
\begin{equation}
[X,Y]_q\equiv qXY-q^{-1}YX,\qquad [Y,X]_q=-[X,Y]_{q^{-1}},
\lab{e16}
\end{equation}
\begin{equation}
\{X,Y\}_q\equiv qXY+q^{-1}YX,\qquad \{Y,X\}_q=\{X,Y\}_{q^{-1}}.
\lab{e17}
\end{equation}
Note that the supercharges are not conserved because they do not commute
with the Hamiltonian (in this respect our algebra differs in principle
from the formal construction of Ref.\ci{r22}). An interesting
property of the algebra \re{e15} is that it shares with \re{e1}
the semipositiveness of the ground state energy which follows from the
observation that $Q^{\dag},\, Q$ and the operator $q^{-\sigma_3} H$
satisfy ordinary SUSY algebra \re{e1}. Evidently,
in the limit $q\to 1$ one recovers conventional SUSY quantum mechanics.
For the subhamiltonians $H_\pm$ the intertwining relations look as
follows
\begin{equation}
H_+ A^{\dag}=q^2 A^{\dag} H_-,\qquad A H_+=q^2 H_- A.
\lab{e18}
\end{equation}
Hence, $H_\pm$ are not isospectral but rather $q$-isospectral,
i.e. the spectrum of $H_+$ can be obtained from the spectrum of
$H_-$ just by the $q^2$-factor scaling:
\begin{equation}
H_+\, \psi^{(+)}=E^{(+)}\psi^{(+)}, \qquad
H_-\, \psi^{(-)}=E^{(-)}\psi^{(-)},
\nonumber
\end{equation}
\begin{equation}
E^{(+)}=q^2\, E^{(-)}, \qquad
\psi^{(+)}\propto A^{\dag} \psi^{(-)}, \quad
\psi^{(-)}\propto A\, \psi^{(+)}.
\lab{e18a}
\end{equation}
A possible exception concerns only the lowest level, in the same spirit
as in the undeformed SUSY quantum mechanics. If $A^{\dag}, A$
do not have zero modes then there is a one-to-one correspondence between
the spectra. We call this situation spontaneously broken $q$-SUSY
because then $E_{vac}>0$. If $A$ (or $A^{\dag}$) has a zero mode
then $q$-SUSY is exact, $E_{vac}=0$, and $H_-$ (or $H_+$) has one level
less than its superpartner $H_+$ (or $H_-$).
As the simplest physical example let us consider the case $W(x)=-x$. The
Hamiltonian takes the form
\begin{equation}
4H=2p^2+(1+q^{-4})x^2 +q^{-2}-1+((1-q^{-4})x^2-1-q^{-2})\sigma_3,
\lab{e19}
\end{equation}
and describes a spin-1/2 particle in the harmonic well and
related magnetic field along the third axis.
The physical meaning of the deformation parameter $q$ is analogous
to that in the XXZ-model \ci{r3} -- it is a specific interaction constant
in the standard physical sense. This model has exact $q$-SUSY
and if $q^2$ is equal to a rational number
the spectrum exhibits accidental degeneracies.
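As a quick numerical illustration (our own check, not from the original text), one can diagonalize $H_\pm$ for $W(x)=-x$ on a grid and observe the $q^2$-scaled pairing of levels with an unpaired zero mode; the grid parameters and helper names are illustrative choices:

```python
import numpy as np

def levels(V, x, k=4):
    """Lowest k eigenvalues of H = p^2/2 + V(x) by second-order
    finite differences with hard walls (illustrative sketch only)."""
    h = x[1] - x[0]
    n = len(x)
    H = (np.diag(np.full(n, 1.0 / h**2))
         - np.diag(np.full(n - 1, 0.5 / h**2), 1)
         - np.diag(np.full(n - 1, 0.5 / h**2), -1)
         + np.diag(V(x)))
    return np.linalg.eigvalsh(H)[:k]

q = 0.8
x = np.linspace(-10, 10, 900)
Ep = levels(lambda x: 0.5 * (x**2 - 1), x)                 # H_+ for W = -x
Em = levels(lambda x: 0.5 * (x**2 / q**4 + 1 / q**2), x)   # H_- for W = -x
# exact q-SUSY: E^(+)_0 = 0 and E^(+)_{n+1} = q^2 E^(-)_n
```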
\section{General deformation of the SUSY quantum mechanics}
The $q$-deformation of SUSY quantum mechanics described above is by
no means unique. If one chooses $T_q$ in the formulae \re{e11} to be not the
$q$-scaling operator but, instead, the shift operator
\begin{equation}
T_q f(x)=f(x+q), \qquad T_q=e^{q\, d/dx},
\lab{e20}
\end{equation}
then SUSY algebra will not be deformed at all. The superpartner
Hamiltonians will be isospectral and the presence of the
$T_q$-operator results in the very simple deformation of the
standard superpartner potential $U_-(x)\to U_-(x-q)$ (kinetic
term is invariant). Evidently
such deformation does not change the spectrum of $U_-(x)$ and
that is why SUSY algebra remains intact but physically this
creates new SUSY quantum mechanical models. The crucial
point in generating them was the use of essentially
infinite-order differential operators as the intertwining operators.
The most general choice of $T_q$ would be the shift operator
in an arbitrarily chosen change-of-variable function $z=z(x)$
\begin{equation}
T_q f(z(x))= f(z(x)+q), \qquad
T_q=e^{q\,d/dz(x)},\quad {d\over dz}={1\over z^\prime(x)}\,
{d\over dx}.
\lab{e21}
\end{equation}
The choices $z=\ln x$ and $z=x$ were already discussed above.
In general, the operator $T_q$ will not preserve the form of
the kinetic term in the Hamiltonian $H_-$. Physically such a change
would correspond to a transition from particle motion
on flat space to curved-space dynamics.
Application of the described construction to the spherically symmetric
potentials is straightforward. The general higher-dimensional SUSY
models are more complicated but can also be ``deformed''.
It is evident that all quantum mechanical problems discussed within
the SUSY approach can be considered in the suggested fashion.
We leave a detailed discussion of the approach and its physical
applications for future publications.
\section{$q$-Deformed conformal quantum mechanics}
Explicit form of the $su(1,1)$ dynamical symmetry generators can be
read off from the harmonic oscillator \re{e5} realization
\begin{equation}
K_+=\fac12 (a^{\dag})^2,\qquad K_-=\fac12 a^2,\qquad
K_0=\fac12 (N+\fac12),
\lab{e22}
\end{equation}
\begin{equation}
[K_0, K_\pm]=\pm \,K_\pm,\qquad [K_+,K_-]=-2K_0.
\lab{e23}
\end{equation}
Let us show that the potentials considered in Refs.\ci{r18,r17}
obey the quantum conformal symmetry algebra $su_q(1,1)$.
First, we shall derive those potentials within another physical
situation with the help of $q$-SUSY. Let us consider the Hamiltonian
of a spin-1/2 particle in the external potential $\fac12 U(x)$ and
the magnetic field $\fac12 B(x)$ along the third axis
\begin{equation}
H=\fac12 (p^2 + U(x) + B(x) \sigma_3)
\lab{e24}
\end{equation}
and impose two conditions: we take the magnetic field to be homogeneous
\begin{equation}
B=-\beta^2 q^{-2} = const
\lab{e25}
\end{equation}
and require the presence of $q$-SUSY \re{e15}. Equating \re{e24}
and \re{e14} we arrive at the potential
\begin{equation}
U(x)=W^2(x)+W^{\prime}(x) + \beta^2 q^{-2},
\lab{e26}
\end{equation}
where $W(x)$ satisfies the following mixed finite-difference and
differential equation
\begin{equation}
W^\prime(x)-W^2(x)+qW^\prime (qx)+q^2 W^2(qx)+2\beta^2 =0.
\lab{e27}
\end{equation}
This is the self-similarity condition \ci{r18,r17}, which bootstraps
the potential at different points (in Ref.\ci{r17}
$\beta^2= \gamma^2 (1+q^2)/2$ parametrization was used).
A smooth solution of \re{e27} for symmetric potentials
$U(-x)=U(x)$ is given by the following power series
\begin{equation}
W(x)=\sum_{i=1}^{\infty} c_i\, x^{2i-1}, \qquad
c_i={1-q^{2i}\over 1+q^{2i}}{1\over 2i-1}\sum_{m=1}^{i-1}c_{i-m}c_m, \quad
c_1=-\, {2\beta^2\over 1+q^2}.
\lab{e27a}
\end{equation}
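For concreteness, the recursion \re{e27a} is easy to iterate numerically; the following sketch (our illustration, with hypothetical helper names) checks that the $q\to 1$ limit reproduces the harmonic superpotential $W(x)=-\beta^2 x$:

```python
def superpotential_coeffs(beta2, q, n_terms):
    """Taylor coefficients c_1..c_{n_terms} of W(x) = sum_i c_i x^(2i-1),
    iterated from the self-similarity recursion; function name is ours."""
    q2 = q * q
    c = [0.0, -2.0 * beta2 / (1.0 + q2)]          # c[1] = c_1
    for i in range(2, n_terms + 1):
        s = sum(c[i - m] * c[m] for m in range(1, i))
        c.append((1.0 - q2**i) / (1.0 + q2**i) / (2 * i - 1) * s)
    return c[1:]

# q -> 1 limit: c_1 = -beta^2 and all higher coefficients vanish,
# i.e. the harmonic superpotential W(x) = -beta^2 x is recovered
harm = superpotential_coeffs(beta2=1.0, q=1.0, n_terms=5)
sol = superpotential_coeffs(beta2=1.0, q=0.5, n_terms=5)   # solitonic regime 0<q<1
```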
In different limits of the parameters several well known
exactly solvable problems arise: 1. Rosen-Morse -- at $q\to 0;\;$
2. P\"oschl-Teller -- at $\beta\propto q\to \infty ;\;$ 3. Harmonic well --
at $q\to 1;\;$ 4. Coulomb potential -- at $q\to 0$ and $\beta = 0$.
Note that at $q>1$ the range of the coordinate should be
restricted to a finite cut. The soliton solution of Shabat \ci{r17}
corresponds to the range $0<q<1$ at fixed $\beta$.
Within this physical situation the potential \re{e26}
describes a $q$-deformation of the Landau-level problem.
We already know that the spectra of $H_\pm$ subhamiltonians are
related via the $q^2$-scaling
\begin{equation}
E^{(+)}_{n+1}=q^2 E^{(-)}_n,
\lab{e27b}
\end{equation}
where $n$ labels the levels from below in both spectra.
Because $q$-SUSY is exact in this model the lowest level of $H_-$
corresponds to the first excited state of $H_+$. But at the restriction
\re{e25} the spectra differ only by a constant,
\begin{equation}
E^{(+)}_n=E^{(-)}_n -\beta^2 q^{-2}.
\lab{e27c}
\end{equation}
Conditions \re{e27b} and \re{e27c} give us the spectrum
\begin{equation}
E_{n,m}=\beta^2 \, {q^{-2m}-q^{2n}\over 1-q^2},
\qquad m=0,1;\; n=0,1,\dots,\infty .
\lab{e28}
\end{equation}
At $q<1$ there are two finite accumulation points, i.e., \re{e28}
approximates a two-band spectrum. At $q>1$ the energy eigenvalues
grow exponentially to infinity.
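A direct tabulation of \re{e28} (our own illustrative check, with hypothetical helper names) makes the two accumulation points $\beta^2 q^{-2m}/(1-q^2)$ explicit for $q<1$:

```python
def E(beta2, q, n, m):
    """Two-band spectrum E_{n,m} = beta^2 (q^{-2m} - q^{2n}) / (1 - q^2)."""
    return beta2 * (q**(-2 * m) - q**(2 * n)) / (1.0 - q * q)

beta2, q = 1.0, 0.5
# for q < 1 each band (m = 0, 1) accumulates at beta^2 q^{-2m} / (1 - q^2):
acc = [beta2 * q**(-2 * m) / (1.0 - q * q) for m in (0, 1)]
band0 = [E(beta2, q, n, 0) for n in range(25)]
band1 = [E(beta2, q, n, 1) for n in range(25)]
```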
The derivation of the dynamical symmetry algebra is not more difficult.
To find it we rewrite relations \re{e12}, \re{e13} for the
superpotential \re{e27a}
\begin{equation}
A^{\dag}A=q^{-1} H+{\beta^2 q^{-1}\over 1-q^2}, \qquad
A\, A^{\dag}=q\, H+{\beta^2 q^{-1}\over 1-q^2},
\lab{e29}
\end{equation}
where $H$ is the Hamiltonian with a purely exponential spectrum
\begin{equation}
H=\fac12 (p^2+W^2(x)+W^{\prime}(x))- {\beta^2 \over 1-q^2},\qquad
E_n=-{\beta^2 \over 1-q^2}\, q^{2n}.
\lab{e29a}
\end{equation}
Evidently,
\begin{equation}
AA^{\dag}-q^2 A^{\dag}A=\beta^2 q^{-1}.
\lab{e30}
\end{equation}
Normalization of the r.h.s. of \re{e30} to unity results in the
algebra used in Ref.\ci{r16} for the description of small violations
of Bose statistics.
The shifted Hamiltonian \re{e29a} and the operators $A^{\dag},\, A$ $q$-commute
$$[A^{\dag},H]_q=[H,A]_q=0,$$
or,
\begin{equation}
H\, A^{\dag}=q^2A^{\dag} H,\qquad A\, H=q^2H\, A.
\lab{e30a}
\end{equation}
These are typical braid-type commutation relations.
The energy eigenfunctions $| n\rangle $ can be uniquely determined
from the action of the ladder operators
\begin{equation}
A^{\dag}|n\rangle =\beta q^{-1/2}\sqrt{\,{1-q^{2(n+1)}\over 1-q^2}}\,
|n+1\rangle ,\qquad
A\, |n\rangle =\beta q^{-1/2}\sqrt{\, {1-q^{2n}\over 1-q^2}}\,
|n-1\rangle .
\lab{e30b}
\end{equation}
It is convenient to introduce the formal number operator
\begin{equation}
N={\ln [(q^2-1) H/\beta^2]\over \ln q^2},\qquad
N\, |n\rangle =n |n\rangle,
\lab{e31}
\end{equation}
which is well defined only on the eigenstates of the Hamiltonian.
Now one can check that the operators
\begin{equation}
a_q={q\over \beta}\,A\, q^{-N/2},\qquad
a^{\dag}_q={q\over \beta}\, q^{-N/2} A^{\dag}
\lab{e32}
\end{equation}
satisfy the $q$-deformed harmonic oscillator algebra
of Biedenharn and Macfarlane \ci{r4,r5}
\begin{equation}
a_q a^{\dag}_q - q a^{\dag}_q a_q=q^{-N},\quad
[N,a^{\dag}_q]=a^{\dag}_q,\quad [N,a_q]=-a_q.
\lab{e33}
\end{equation}
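The relations \re{e33} can be checked numerically in a truncated Fock-space matrix representation; this sketch (ours, not from the text) uses the standard $q$-number $[n]_q=(q^n-q^{-n})/(q-q^{-1})$, and the relation holds exactly except on the top state, where the truncation intrudes:

```python
import numpy as np

def q_number(n, q):
    """[n]_q = (q^n - q^{-n}) / (q - q^{-1})."""
    return (q**n - q**(-n)) / (q - 1.0 / q)

q, dim = 0.7, 8
a = np.zeros((dim, dim))
for n in range(1, dim):
    a[n - 1, n] = np.sqrt(q_number(n, q))   # a|n> = sqrt([n]_q)|n-1>
ad = a.T.copy()
N = np.diag(np.arange(dim, dtype=float))

# Biedenharn-Macfarlane relation: a a^dag - q a^dag a = q^{-N}
lhs = a @ ad - q * (ad @ a)
rhs = np.diag(q ** (-np.arange(dim, dtype=float)))

# ladder relations [N, a^dag] = a^dag, [N, a] = -a hold on the whole space
comm_ad = N @ ad - ad @ N
comm_a = N @ a - a @ N
```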
Substituting $a^{\dag}_q$ and $a_q$ into the definitions \re{e22} and
renormalizing $2K_{\pm}/(q+q^{-1})\to K_{\pm}$ we get commutation
relations of the quantum algebra $su_q(1,1)$
\begin{equation}
[K_0,K_\pm]=\pm\, K_\pm,\qquad [K_+, K_-]=-\,
{\mu^{2K_0}-\mu^{-2K_0}\over \mu - \mu^{-1}},\quad \mu=q^2.
\lab{e34}
\end{equation}
Therefore the dynamical symmetry algebra of the model is $su_q(1,1)$.
Let us compare our deformed (super)conformal quantum mechanics with the
construction of Ref.\ci{r23}. Kalnins, Levine, and Miller
called a conformal symmetry generator any differential operator
$L(t)$ which maps solutions of the
time-dependent Schr\"odinger equation to solutions, i.e. which
satisfies the relation
\begin{equation}
i\, {\partial \over \partial t}\, L-[H,L]=
R\, (i\, {\partial \over \partial t}-H),
\lab{e35}
\end{equation}
where $R$ is some operator. On the shell of Schr\" odinger equation
solutions $L(t)$ is conserved, and all higher
powers of the space derivative entering the definition of $L(t)$
can be replaced by powers of $\partial /\partial t$ and
a term linear in $\partial /\partial x$. Any analytical
function of $\partial /\partial t$ is in turn replaced by a
function of the energy when applied to stationary states.
This trick allows one
to simulate any infinite-order differential operator by one
linear in the space derivative, and to prove that a solution with energy
$E$ can always be mapped to a not-necessarily normalizable solution with
energy $E+f(E)$, where $f(E)$ is an arbitrary analytical function.
"On-shell" raising and lowering operators always can be
found if one knows the basis solutions of the
Schr\"odinger equation but sometimes it is easier
to find symmetry generators and use them in search of the spectrum.
In our construction we have ``off-shell'' symmetry generators, which
satisfy quantum algebraic relations in the operator sense.
In this respect our results are complementary to those of
Ref.\ci{r23}. Indeed, the time-dependent factorization operators
\begin{equation}
{\cal A}^{\dag}(t)=e^{i(q^{-2}-1)tH} A^{\dag}, \qquad
{\cal A}(t)=e^{i(q^2-1)tH} A
\lab{e36}
\end{equation}
satisfy \re{e35} with $R\equiv 0$ and so are genuinely conserved
quantities. On stationary states the exponential prefactors
in \re{e36} coincide with time-shift operators. One can reduce
the operators ${\cal A}^{\dag}(t),\, {\cal A}(t)$ to the ``on-shell'' form
linear in $\partial /\partial x$, and then they should presumably
coincide with the ladder operators of Ref.\ci{r23} corresponding
to the Hamiltonian \re{e29a}.
\section{Conclusions}
To conclude, in this paper we have suggested a deformation
of the SUSY quantum mechanics. The main feature of the construction
is that the superpartner Hamiltonians satisfy non-trivial braid-type
intertwining relations which remove degeneracies of the original SUSY
spectra. Deformed SUSY algebra preserves semipositiveness of the
vacuum energy. A peculiar set of $q$-SUSY potentials arising within
the Landau-level-like problem obeys the $q$-deformed dynamical
conformal symmetry algebra $su_q(1,1)$. The corresponding raising and
lowering operators satisfy the $q$-deformed oscillator algebra
of Biedenharn and Macfarlane. A more general type of potential
deformation, applicable in a space of any dimension, is outlined.
It is clear that $q$-scaling is a particular example of the possible
transformations of the spectra.
In general one should be able to describe analytically the map
of a given potential with spectrum $E_n$ to a potential
with the spectrum $f(E_n)$ for any analytical function $f(E)$.
The problem of an arbitrary non-linear deformation of Lie algebras
was treated in Ref.\ci{r24} using symbols of operators
without a well-defined
coordinate representation on the ordinary Hilbert space ${\cal L}_2$.
Certainly, the method of Ref.\ci{r23} should be
helpful in the analysis of this interesting problem, and the
model with exponential spectrum given in Sect.~5 shows that sometimes
one can even find a well-defined ``off-shell'' spectrum generating algebra.
The Hopf algebra structure of the quantum groups was not mentioned
because its physical meaning within the standard quantum mechanical
context is unknown to the author. Perhaps many-identical-body
problems shall elucidate this point. Another speculative conjecture
is that the results of this paper and of Refs.\ci{r17,r18} may
be useful in seeking $q$-deformations of non-linear
integrable evolution equations, like KdV, {\it sine}-Gordon, etc.
We end with the remark that the presented type of $q$-deformation can also
be developed for parasupersymmetric quantum mechanics \ci{r25},
where internal spaces of higher (odd and even) dimension are involved.
\bigskip\bigskip
\noindent{\Large{\bf Acknowledgments}}
\bigskip
The author is indebted to W.Miller and A.Shabat for useful
discussions at the initial stage of this work, and to J.LeTourneux and
L.Vinet for the encouragement to pursue the subject.
This research is supported by the NSERC of Canada.
\newpage
\begin{thebibliography}{33}
\bi{r1} Dynamical Groups and Spectrum Generating Algebras,
Eds. A.Bohm, Y.Ne'eman, and A.Barut (World Scientific, 1988).
\bi{r2} V.G.Drinfeld, Quantum Groups, {\it in:} Proc. of the Intern.
Congress of Mathematicians (Berkeley, 1986) vol.1, p.798; \\
M.Jimbo, Lett.Math.Phys. {\bf 10} (1985) 63; {\bf 11}
(1986) 247; \\ N.Yu.Reshetikhin, L.A.Takhtajan, and
L.D.Faddeev, Algebra i Analiz, {\bf 1} (1989) 178.
\bi{r3} N.Pasquier and H.Saleur, Nucl.Phys. {\bf B330} (1990) 523.
\bi{r4} L.C.Biedenharn, J.Phys. {\bf A22} (1989) L873.
\bi{r5} A.J.Macfarlane, J.Phys. {\bf A22} (1989) 4581.
\bi{r6} C.-P. Sen and H.-C.Fu, J.Phys. {\bf A22} (1989) L983; \\
T.Hayashi, Comm.Math.Phys. {\bf 127} (1990) 129; \\
M.Chaichian and P.Kulish, Phys.Lett. {\bf B234} (1990) 72; \\
P.P.Kulish and E.V.Damaskinsky, J.Phys. {\bf A23} (1990) L415; \\
R.Floreanini, V.P.Spiridonov, and L.Vinet, Comm.Math.Phys.
{\bf 137} (1991) 149; \\
D.B.Fairlie and C.Zachos, Quantized Planes and Multiparameter
Deformations of Heisenberg and $GL(N)$ Algebras, preprint
ANL-HEP-CP-91-28, 1991; \\
R.Floreanini, D.Leites, and L.Vinet, On the Defining Relations
of the Quantum Superalgebras, preprint UdeM-LPN-TH53, 1991.
\bi{r7} N.M.Atakishiev and S.K.Suslov, Sov.J.Theor.Math.Phys.
{\bf 85} (1990) 1055.
\bi{r8} R.Floreanini and L.Vinet, Representations of Quantum Algebras
and $q$-Special Functions, {\it in:} Proc. of the ${\it II^{nd}}$
Intern. Wigner Symposium (Springer-Verlag, 1991); preprint
UdeM-LPN-TH69, 1991.
\bi{r9} E.G.Floratos and T.N.Tomaras, Phys.Lett. {\bf B251} (1990) 163.
\bi{r10}J.Wess and B.Zumino, Nucl.Phys. (Proc.Suppl.) {\bf B18}
(1990) 302; \\ B.Zumino, Mod.Phys.Lett. {\bf A6} (1991) 1225; \\
U.Carow-Watamura, M.Schlieker, and S.Watamura, Z.Phys.
{\bf C49} (1991) 439; \\
J.A.Minanhan, Mod.Phys.Lett. {\bf A5} (1990) 2635; \\
L.Baulieu and E.G.Floratos, Phys.Lett. {\bf B258} (1991) 171.
\bi{r11} P.P.Raychev, R.P.Roussev, and Y.F.Smirnov, J.Phys. {\bf G16}
(1990) L137; \\
M.Chaichian, D.Ellinas, and P.Kulish, Phys.Rev.Lett. {\bf 65}
(1990) 980; \\
R.M.Mir-Kasimov, The Relativistic Oscillator as the Realization
of the Quantum Group of Dynamical Symmetry, {\it in:} Proc.
of the Intern. Seminar "Quarks'90", 14-19 May 1990, Telavi,
USSR. Eds. V.A.Matveev et al (World Scientific, Singapore) p.133; \\
E.G.Floratos, J.Phys. {\bf A24} (1991) 4739.
\bi{r12} M.M.Nieto, Phys.Lett. {\bf B145} (1984) 208; \\
M.Luban and D.L.Pursey, Phys.Rev. {\bf D33} (1986) 431.
\bi{r13} J.L.Rosner, Ann.Phys.(N.Y.) {\bf 200} (1990) 101.
\bi{r14} D.Bonatsos, C.Daskaloyannis, and K.Kokkotas,
J.Phys. {\bf A24} (1991) L795.
\bi{r15} A.Yu.Ignatiev and V.A.Kuzmin, Yad.Fiz. {\bf 46} (1987) 786.
\bi{r16} O.W.Greenberg, Phys.Rev.Lett. {\bf 64} (1990) 705; \\
R.Mohapatra, Phys.Lett. {\bf B242} (1990) 407; \\
V.P.Spiridonov, Dynamical Parasupersymmetry in Quantum
Systems, {\it in:} Proc. of the Intern. Seminar "Quarks'90",
14-19 May 1990, Telavi, USSR. Eds. V.A.Matveev et al (World
Scientific, Singapore) p.232
\bi{r17} V.Spiridonov, Exactly Solvable $q$-Deformed Potentials with
Exponentially Small or Large Bound Energies, preprint
UdeM-LPN-TH75, 1991.
\bi{r18} A.Shabat, The Infinite-Dimensional Dressing Dynamical System;
Inverse Problems, to be published.
\bi{r19} L.E.Gendenstein and I.V.Krive, Sov.Phys.Usp. {\bf 28} (1985) 645.
\bi{r20} E.Witten, Nucl.Phys. {\bf B188} (1981) 513.
\bi{r21} V.DeAlfaro, S.Fubini, and G.Furlan, Nuovo Cim. {\bf A34} (1976) 569.
\bi{r22} M.Chaichian, P.Kulish, and J.Lukierski, Phys.Lett.
{\bf B262} (1991) 43.
\bi{r23} E.G.Kalnins, R.D.Levine, and W.Miller, Jr., Conformal Symmetries
and Generalized Recurrences for Heat and Schr\"odinger
Equations in One Spatial Dimension, {\it in:}
Mechanics, Analysis and Geometry: 200 Years after Lagrange,
Ed. M.Francaviglia (Elsevier Science Publishers B.V., 1991) p. 237
\bi{r24} A.P.Polychronakos, Mod.Phys.Lett. {\bf A5} (1990) 2325; \\
M.Ro{\v c}ek, Phys.Lett. {\bf B255} (1991) 554; \\
K.Odaka, T.Kishi, and S.Kamefuchi, J.Phys. {\bf A24} (1991) L591; \\
C.Daskaloyannis, J.Phys. {\bf A24} (1991) L789; \\
C.Daskaloyannis and K.Ypsilantis, A Deformed Oscillator with
Coulomb Energy Spectrum, preprint THES-TP-91/09, 1991.
\bi{r25} V.A.Rubakov and V.P.Spiridonov, Mod. Phys. Lett. {\bf A3}
(1988) 1337; \\ V.Spiridonov, Parasupersymmetry in Quantum
Systems, {\it in:} Proc. of the $\DGM$ Intern. Conf. on the Diff.
Geometry Methods in Theor. Physics (World Scientific, 1991);
preprint UdeM-LPN-TH58, 1991.
\end{thebibliography}
\end{document}
\section{Introduction}\label{sec:intro}
\subsection{Significance of Binary Supermassive Black Holes}
It is now well established that most, if not all, large galaxies in the local universe have supermassive black holes (SMBHs; $> 10^8\,\ensuremath{M_{\sun}}$) in their nuclei \citep[e.g.,][]{kormendy95,ff05}. Further, the merger of galaxies is considered to be an integral aspect of galaxy assembly and evolution, with large galaxies in the local universe potentially having undergone multiple mergers during the course of their evolution. A robust prediction of the galaxy merging process is that the merger product will contain two SMBHs, which will sink to the center of the product via dynamical friction on time scales of order $10^8$ yr. Models of this merger process not only predict the formation of a binary\footnote{We distinguish between binary SMBHs and dual SMBHs or dual active galactic nuclei (AGNs). The former are systems for which their mutual gravitational interactions are dominant, while the latter are galaxies or merger products that contain two SMBHs, but for which the separation between the two SMBHs is so large that their motions are dominated by the gravitational potential of the host. } \hbox{SMBH} \citep[e.g.,][]{begelman80,milosavljevic01,yu02,DEGN}, but also claim success in being able to replicate a variety of other aspects of galaxy or quasar properties, including the $M_{\mathrm{BH}}$-$\sigma$ relation, the quasar luminosity function, the central brightness of galaxies, and the bending or apparent precession of radio jets \citep[e.g.,][]{kauffmann00,volonteri03,wyithe03,liufk04,hopkins08, Kormendy2009a,shen09,KormendyHo2013}.
The kinematics of the two SMBHs at the center of the merger product begins to be dominated by their mutual gravitational interaction, rather than by the gravitational potential of the host, when their separation is \citep{begelman80,volonteri03}
\begin{equation}
r_b = \frac{G(m_1 + m_2)}{2\sigma^2}
\sim 10\,\mathrm{pc}\,\frac{m_1 +
m_2}{10^8\,\ensuremath{M_{\sun}}}\left(\frac{\sigma}{150\,\mathrm{km}\,\mathrm{s}^{-1}}\right)^{-2},
\label{eqn:binary}
\end{equation}
for SMBHs of mass~$m_1$ and~$m_2$ located within a merger product with a central velocity dispersion~$\sigma$. Further dynamical friction by stars in the central region, and possibly gas interactions, cause the binary to harden. For a time, it was thought that the binary separation would cease to shrink at a separation of order 1 pc as stellar interactions become less effective \citep[the ``last parsec problem,''][]{begelman80,Quinlan1996,milosavljevic01,yu02}. Considerable recent attention has focused on interactions between the SMBH binary and the surrounding population of stars, particularly in light of the fact that the stellar population is likely to have an asymmetric spatial or velocity distribution as a result of the merger process itself \citep[e.g.,][]{yu02,berczik06,Mayer2007,Lodato2009,Sesana2010,khan11,Preto2011}. While the results are not yet conclusive, it appears plausible that such interactions would cause the binary to continue to harden to sub-milliparsec separations within the age of the universe, at which point its separation will shrink inexorably due to the emission of gravitational waves (GWs). Current or future pulsar timing arrays should be able to detect the GW emission from the ensemble of individual binary SMBHs at frequencies of order $10^{-9}$~Hz \citep{hobbs10,Arzoumanian2016,Babak2016}, while the last moments of in-spiral should produce GWs with frequencies of order $10^{-6}$~Hz, which would be detectable by future space interferometers \citep{Amaro-Seoane2013,Danzmann2017}. Determining the cosmological density of galaxies containing a pair of SMBHs therefore provides constraints on the rate at which galaxies undergo mergers and the late stages of the merger process \citep[e.g.,][]{yu11,Steinborn2015}, as well as being crucial to predicting the amplitudes and rates of GW signals that pulsar timing arrays and future space interferometers will detect \citep[e.g.,][]{Sesana2017}.
\subsection{Observational Evidence for Binary and Dual Supermassive Black Holes and Uncertainties}
Despite the theoretically appealing nature of this scenario, however, direct observational evidence for dual and binary SMBHs remains scarce. To date, only a single potential pc-scale binary SMBH is known,\footnote{There is one candidate sub-pc binary SMBH in OJ 287, which shows a $\sim$12 year quasi-periodic light curve, and is interpreted with a binary SMBH$+$accretion disk model \citep{sillanpaa88,valtonen08,Valtonen2016}. In addition, there is one candidate reported by \cite{boroson09}, which was later suggested to be an unusual disk emitter and not a binary SMBH \citep{chornock10}. There are $\sim$150 milli-pc binary SMBH candidates proposed based on quasi-periodic quasar light curves \citep[e.g.,][]{Graham2015a,Graham2015,Liutt2015,DOrazio2015a,DOrazio2016,Charisi2016,Zheng2015}, although the quasi-periodicity could also be due to single BH accretion disk instability and/or radio jet precession \citep[e.g.,][]{Kudryavtseva2011} or false periodicities caused by stochastic variability \citep[e.g.,][]{Vaughan2016}.} \objectname{B2~0402$+$379} \citep{maness04,rodriguez06}, with a (projected) separation of approximately 7 pc; one candidate sub-pc binary SMBH has recently been identified in NGC 7674 from direct imaging \citep[with a projected separation of $0.35$ pc;][]{Kharb2017} using very long baseline interferometry (VLBI). The fraction of low-redshift AGN pairs on $\sim$5--100 kpc scales is a few percent \citep[e.g.,][]{Liu2011a,Liu2012}, and the fraction of intermediate-redshift binary quasars on tens to hundreds of kpc scales is $\sim 0.1$\% \citep[e.g.,][]{hennawi06,hennawi09,myers08,shen10c}. 
Until recently, there had been only a handful of unambiguous cases of kpc-scale dual AGNs in which both SMBHs were detected in the radio \citep[e.g., \protect\objectname{3C~75},][]{owen85}, optical \citep[e.g., \protect\objectname{LBQS 0103-2753},][]{junkkarinen01}, or X-rays \citep[e.g., \protect\objectname{NGC~6240}, \protect\objectname{Mrk~463}, and \protect\objectname{Mrk~739};][]{komossa03,bianchi08,Koss2011}.
The advent of large and uniform spectroscopic surveys, such as the Sloan Digital Sky Survey \citep[SDSS;][]{York2000}, has enabled the identification of large numbers of galaxies with spectroscopic signatures potentially characteristic of dual AGNs on $\sim$kpc and sub-kpc scales \citep[e.g.,][]{wang09,Liu2010b,Smith2010,Ge2012,Barrows2013,Comerford2013,LyuLiu2016,Yuan2016}, as well as binary SMBHs on sub-pc scales \citep[e.g.,][]{tsalmantza11,eracleous11,Ju2013,shen13,Liu2014,Runnoe2015,Runnoe2017,Wang2017}. In particular, one such signature is the presence of two spectral-line components associated with AGNs, such as \mbox{[\ion{O}{3}]}, separated in velocity by a few hundred km s$^{-1}$, which signals orbital motion on galactic scales
\citep[e.g.,][]{zhou04,gerke07,comerford08,xu09,barrows12}, analogous to a double-lined spectroscopic binary star.
Higher angular resolution follow-up observations of candidates from systematic surveys have dramatically increased the number of known dual AGNs with kpc-scale separations \citep[e.g.,][]{Liu2010a,Liu2017a,Fu2011a,Fu2012,mcgurk11,Shen2011,comerford11b}. However, there are considerable ambiguities associated with identifying dual SMBHs from spectral-line observations alone. Outflows associated with jets and rotating disks may also produce double-peaked narrow emission lines in AGNs \citep{axon98,xu09,crenshaw09,rosario10,Smith2010,smith11,comerford11a,fischer11,Shen2011}, and recent work has suggested that the majority ($\gtrsim 50\%$) of double-peaked narrow emission-line AGNs are likely due to complex narrow-line kinematics around single AGNs \citep{Shen2011,Fu2012,Nevin2016}.
\subsection{This Work: High-resolution Imaging with VLBA}
Considering the sizes of the narrow-line regions (NLRs) responsible for the \mbox{[\ion{O}{3}]}\ lines and the evolutionary stages of a dual \hbox{SMBH}, the physical separation between dual AGNs with two distinct NLRs can be as small as $\sim$30 pc \citep[e.g.,][depending on AGN luminosity]{schmitt03,greene11}, corresponding to an angular separation of order 10 mas at a typical redshift $z \sim 0.1$; actual separations could be smaller when projected on the sky. Dual AGNs with such small separations would show double-peaked narrow emission lines, given typical orbital velocities of the individual NLRs of a few hundred km s$^{-1}$ and typical velocity dispersions of $\lesssim$ a few hundred km s$^{-1}$ for the NLR gas clouds, provided that the two NLRs are not yet fully merged. Only with a full assessment of all the candidate dual AGNs at various separations will we be able to put robust constraints on aspects of merger scenarios, such as the merger fraction, the dynamics of merging SMBH pairs, the significance of mergers in triggering AGNs, and the separations of the merging components at which AGNs are triggered.
VLBI techniques routinely produce images with milliarcsecond resolutions, equivalent to a linear separation of order 10 pc (at $z \sim 0.1$), which is far higher than can be obtained at other wavelengths even with adaptive optics in the optical/near-infrared. Further, VLBI observations have traditionally been sensitive to high brightness temperature structures in the inner regions and nuclei of galaxies, such as AGNs and jets. Thus, the combination of spectroscopic surveys and VLBI imaging offers a powerful means of searching for sub-kpc dual SMBHs that is less limited by spatial resolution than other approaches. VLBI observations are also sensitive to radio jets on scales of tens of parsecs; the gas outflows such jets drive may be responsible for the double-peaked narrow emission lines in AGNs.
In essence, VLBI observations can probe a new spatial regime in the parameter space of dual SMBHs or jets.
This paper presents Very Long Baseline Array \citep[VLBA;][]{Napier1994} observations of a subset of the double-peaked narrow emission-line AGNs identified by \cite{Liu2010b} from the SDSS DR7 \citep{SDSSDR7}. The objective was to assess the radio detection rate and the fraction of objects that have compact and binary radio components in their cores. In~\S\ref{sec:observe}, we summarize our VLBA observations; in~\S\ref{sec:sources}, we discuss the six galaxies from which we detect radio emission; and in~\S\ref{sec:discuss}, we discuss what our results imply about sub-kpc dual SMBHs and implications for future observations. Throughout, we assume a flat cosmology with a Hubble constant of~70~km~s${}^{-1}$~Mpc${}^{-1}$, a matter density $\Omega_m = 0.3$, and a vacuum energy density $\Omega_{\mathrm{vac}} = 0.7$.
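The angular-to-linear scale conversions used throughout this paper (e.g., $\sim$30 pc corresponding to $\sim$10 mas at $z \sim 0.1$) follow from the angular diameter distance under the cosmology assumed above. A minimal sketch of the conversion, assuming only the stated flat $\Lambda$CDM parameters (the function names are illustrative, not from the paper):

```python
from math import sqrt

# Flat Lambda-CDM cosmology as assumed in the text.
H0 = 70.0           # Hubble constant, km/s/Mpc
OMEGA_M = 0.3       # matter density
OMEGA_VAC = 0.7     # vacuum energy density
C_KMS = 299792.458  # speed of light, km/s

def comoving_distance_mpc(z, n=1000):
    """Line-of-sight comoving distance via trapezoidal integration of c dz / H(z)."""
    dz = z / n
    integral = 0.0
    for i in range(n + 1):
        zi = i * dz
        e = sqrt(OMEGA_M * (1.0 + zi)**3 + OMEGA_VAC)  # E(z) = H(z)/H0
        weight = 0.5 if i in (0, n) else 1.0
        integral += weight / e * dz
    return (C_KMS / H0) * integral

def kpc_per_arcsec(z):
    """Proper transverse scale: angular diameter distance times 1 arcsec in radians."""
    d_a_mpc = comoving_distance_mpc(z) / (1.0 + z)  # flat universe: D_A = D_C / (1+z)
    arcsec_rad = 1.0 / 206264.806
    return d_a_mpc * 1e3 * arcsec_rad  # kpc per arcsec

if __name__ == "__main__":
    scale = kpc_per_arcsec(0.15)  # ~2.6 kpc/arcsec at the sample's median redshift
    print(f"2 arcsec -> {2.0 * scale:.1f} kpc")
    print(f"1.1 mas  -> {1.1e-3 * scale * 1e3:.1f} pc")
```

At the median sample redshift of $z \sim 0.15$ this gives roughly 2.6 kpc per arcsec, so the 2$''$ search region and $\sim$1.1 mas beam discussed in \S\ref{sec:observe} correspond to approximately 5 kpc and a few pc, respectively, consistent with the values quoted in the text to within rounding.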
\section{Target Selection, Observations, and Data Reduction and Analysis}\label{sec:observe}
As an initial search for close pairs (both physical and in projection) and to demonstrate the feasibility of a larger effort, we observed 13 type-2 Seyfert AGNs with double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ emission lines with the VLBA (Program BL170, PI: Liu). The observations were at~3.6~cm (8.4~GHz) with a total bandwidth of~32~MHz. This observational wavelength was motivated by two factors: (1) the angular resolution of the VLBA at~3.6~cm is approximately 1 mas, more than sufficient to resolve any sub-kpc dual AGNs given their expected separations (\S\ref{sec:intro}); and (2) the VLBA is at its most sensitive at this wavelength.
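The quoted $\sim$1 mas resolution follows from the diffraction limit $\theta \approx \lambda / B_{\mathrm{max}}$. A rough check, where the maximum baseline length (Mauna Kea to St. Croix, $\approx$8611 km) is the standard published VLBA figure rather than a value taken from this paper:

```python
from math import pi

C = 299792458.0   # speed of light, m/s
FREQ = 8.4e9      # observing frequency, Hz (3.6 cm band)
B_MAX = 8.611e6   # m; longest VLBA baseline (standard published value, an assumption here)

wavelength = C / FREQ                  # ~0.036 m
theta_rad = wavelength / B_MAX         # diffraction-limited fringe spacing
theta_mas = theta_rad * (180.0 / pi) * 3600.0 * 1000.0  # radians -> milliarcseconds

print(f"lambda = {wavelength * 100:.2f} cm, theta ~ {theta_mas:.2f} mas")
```

This yields just under 1 mas; the restoring beams in Tables~\ref{tab:detect} and~\ref{tab:undetect} are slightly larger, as expected once the actual $uv$ coverage and weighting are taken into account.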
\subsection{Target Selection}
\begin{figure}
\begin{center}
\includegraphics[width=0.48\textwidth]{plot_vlba_target_flux.eps}
\caption{[O~{\tiny III}] emission-line flux versus FIRST integrated flux at 20 cm for our VLBA targets (red filled circles).
Also shown for comparison are the other FIRST-detected sources (open circles) in the parent sample of double-peaked, narrow-line AGNs from \cite{Liu2010b}.
Typical measurement uncertainties are $\sim2.6\times10^{-16}$ erg s$^{-1}$ cm$^{-2}$ for [O~{\tiny III}] emission-line flux and $\sim$0.15 mJy for FIRST integrated flux at 20 cm.
Targets that were detected by our VLBA observations are marked with blue open diamonds.
The VLBA-detected target \protect\objectname{SDSS~J135251.22$+$654113.2} is not shown here, because it was not covered by the FIRST survey.}
\label{fig:target}
\end{center}
\end{figure}
Our target sample was drawn from the 167 double-peaked emission-line AGNs identified by \cite{Liu2010b}. We first selected galaxies that either have counterparts in the Faint Images of the Radio Sky at Twenty Centimeters survey \citep[\hbox{FIRST};][]{becker95} or have existing observations within the VLA Archive from which the radio structure could be assessed. Of the radio-bright objects in our parent sample (77 of the 167 objects are detected by FIRST), we selected 12 objects that have 1.4 GHz flux densities above 7 mJy (from among a total of 23) and that appear unresolved\footnote{Had we observed sources that were resolved by FIRST and not detected anything, one possibility would be that there is no milliarcsecond (mas) structure. Our preference for unresolved FIRST sources is largely to ensure a likely high detection rate.} in the core to \hbox{FIRST} (angular resolution $\sim$5''). The VLA archival observations, where they exist, were used to confirm that the sources remain compact on sub-arcsec scales, to estimate their flux densities at higher frequencies to ensure adequate signal-to-noise ratios (S/N) for imaging, and to improve the positional information for the new VLBA observations. We supplemented the sample with one additional object (\protect\objectname{SDSS~J135251.22$+$654113.2}) from the \cite{Liu2010b} sample that was not covered by the FIRST survey, but which had existing radio observations within the VLA Archive and satisfied our selection criteria. The median redshift of our target sample is $\sim$0.15. Figure \ref{fig:target} shows the [O~{\tiny III}] emission-line flux versus FIRST integrated flux at 20 cm for our targets as compared with the other radio-bright objects in our parent sample. Our VLBA targets are selected to have high radio fluxes at 20 cm, but their [O~{\tiny III}] emission-line fluxes sample the full range of the parent sample.
\begin{deluxetable*}{lcccccccc}
\tablecaption{VLBA 3.6 cm (8.4 GHz) Detected Galaxies\label{tab:detect}}
\tablewidth{\textwidth}
\tabletypesize{\footnotesize}
\tablehead{%
\colhead{} &
\colhead{} &
\colhead{$S_{{\rm FIRST}}$} &
\colhead{$\sigma_{{\rm FIRST}}$} &
\colhead{$\theta_{\mathrm{maj}} \times \theta_{\mathrm{min}}$} &
\colhead{$\sigma_I$} &
\colhead{$I$} &
\colhead{$S$} &
\colhead{$L_\nu$} \\
\colhead{SDSS Name} &
\colhead{Redshift} &
\colhead{(mJy)} &
\colhead{(\mbox{mJy~beam${}^{-1}$})} &
\colhead{(mas $\times$ mas)} &
\colhead{(\mbox{mJy~beam${}^{-1}$})} &
\colhead{(\mbox{mJy~beam${}^{-1}$})} &
\colhead{(mJy)} &
\colhead{($10^{23}$~W~Hz${}^{-1}$)}
\\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)} &
\colhead{(8)} &
\colhead{(9)}
}
\startdata
\objectname{J091201.68$+$532036.6} & 0.1017 & 135.65 & 0.16 & 1.92 $\times$ 1.07 & 0.15 & 2.58 & 43.0
& 8.3 \\
\objectname{J113721.36$+$612001.2} & 0.1112 & 7.17 & 0.19 & 1.70 $\times$ 0.80 & 0.15 & 2.60 & 5.5 & 1.5 \\
\objectname{J124358.36$-$005845.4} & 0.4092 & 40.58 & 0.15 & 1.94 $\times$ 0.87 & 0.16 & 21.87 &
23.27 & 130 \\
\objectname{J135251.22$+$654113.2} & 0.2064 & N/A & N/A & 1.56 $\times$ 0.78 & 0.14 & 21.84 &
28.48 & 33 \\
\objectname{J231051.95$-$090011.9} & 0.0733 & 46.22 & 0.15 & 2.35 $\times$ 0.90 & 0.18 & 3.15 & 3.95
& 0.48 \\
\objectname{J233313.17$+$004911.8} & 0.1699 & 316.33 & 0.10 & 2.16 $\times$ 0.94 & 0.24 & 4.56 & 5.22
& 3.9 \\
\enddata
\tablecomments{
Column (1): SDSS designation with J2000 coordinates.
Column (2): SDSS spectroscopic redshift.
Column (3): FIRST integrated flux density at 20 cm; N/A means source is not covered by the FIRST survey.
Column (4): RMS noise at 20 cm in the FIRST survey map; N/A means source is not covered by the FIRST survey.
Column (5): major and minor axes of the \textsc{clean} restoring beam.
Column (6): 1-$\sigma$ noise level determined within a region 2$''$ square centered on the source.
Column (7): peak brightness.
Column (8): integrated flux density.
Column (9): implied radio luminosity density.}
\end{deluxetable*}
\begin{deluxetable*}{lcccccc}
\tablecaption{VLBA 3.6 cm (8.4 GHz) Undetected Galaxies\label{tab:undetect}}
\tablewidth{\textwidth}
\tablehead{%
\colhead{} &
\colhead{} &
\colhead{$S_{{\rm FIRST}}$} &
\colhead{$\sigma_{{\rm FIRST}}$} &
\colhead{$\theta_{\mathrm{maj}} \times \theta_{\mathrm{min}}$} &
\colhead{$\sigma_I$} &
\colhead{$L_\nu$} \\
\colhead{SDSS Name} &
\colhead{Redshift} &
\colhead{(mJy)} &
\colhead{(\mbox{mJy~beam${}^{-1}$})} &
\colhead{(mas $\times$ mas)} &
\colhead{(\mbox{mJy~beam${}^{-1}$})} &
\colhead{($10^{21}$~W~Hz${}^{-1}$)}\\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)} &
\colhead{(5)} &
\colhead{(6)} &
\colhead{(7)}
}
\startdata
\objectname{J000911.58$-$003654.7} & 0.0733 &39.95 & 0.15 & 1.64 $\times$ 0.73 & 0.21 & $<$7.7 \\
\objectname{J073849.75$+$315611.9} & 0.2973 & 29.37 & 0.15 & 1.54 $\times$ 0.63 & 0.18 & $<$140 \\
\objectname{J080337.32$+$392633.1} & 0.0655 & 12.92 & 0.14 & 1.53 $\times$ 0.63 & 0.17 & $<$4.9 \\
\objectname{J085841.76$+$104122.1} & 0.1480 & 12.72 & 0.13 & 1.67 $\times$ 0.69 & 0.17 & $<$28 \\
\objectname{J110851.04$+$065901.4} & 0.1816 & 9.84 & 0.13 & 1.70 $\times$ 0.67 & 0.21 & $<$54 \\
\objectname{J135646.11$+$102609.1} & 0.1231 & 59.58 & 0.13 & 1.54 $\times$ 0.70 & 0.20 & $<$22 \\
\objectname{J171544.05$+$600835.7} & 0.1569 & 13.66 & 0.14 & 1.55 $\times$ 0.79 & 0.19 & $<$36 \\
\enddata
\tablecomments{
Column (1): SDSS designation with J2000 coordinates.
Column (2): SDSS spectroscopic redshift.
Column (3): FIRST integrated flux density at 20 cm.
Column (4): RMS noise at 20 cm in the FIRST survey map.
Column (5): major and minor axes of the \textsc{clean} restoring beam.
Column (6): 1-$\sigma$ noise level determined within a region 2$''$ square centered on the source location.
Column (7): 3$\sigma$ upper limit on the radio luminosity density, at the galaxy's redshift.}
\end{deluxetable*}
\subsection{VLBA Observations}
The VLBA observations were conducted in four sessions, each lasting approximately 6 hr, in the interval between 2010 March 30 and June 5. Target flux densities at 8.4 GHz ranged from 1.5 to 135 mJy, with a median flux density of 40 mJy. Anticipating that some fraction of the flux density measured by FIRST might be resolved out by the VLBA observations, we used phase referencing for all of the observations, cycling between the target source (3~minutes) and a phase reference calibrator (2~minutes). Typical on-source integration times were~40~minutes, implying expected thermal noise levels of approximately 0.15 mJy beam$^{-1}$. The phase reference calibrator was selected from the VLBA calibrator database. In addition, each session included short scans of a fringe-finding calibrator (either \objectname{4C~39.25} or \objectname{3C~454.3}) and a source for checking the amplitude response (\objectname{J1310$+$3220}, \objectname{DA~193}, or \objectname{OJ~287}).
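The expected thermal noise quoted above follows from the standard image-sensitivity relation for a homogeneous array, $\sigma \approx \mathrm{SEFD} / \sqrt{n_{\mathrm{pol}}\,N(N-1)\,\Delta\nu\,\tau}$. A rough sketch, where the per-antenna system equivalent flux density (SEFD) is an assumed representative VLBA 4 cm value, not a figure given in this paper:

```python
from math import sqrt

# Representative numbers; the SEFD is an assumed typical VLBA 4 cm value,
# not a figure quoted in this paper.
SEFD = 330.0       # Jy, per-antenna system equivalent flux density at 8.4 GHz (assumed)
N_ANT = 10         # full VLBA
BANDWIDTH = 32e6   # Hz, total bandwidth of these observations
T_ON = 40 * 60.0   # s, on-source integration time
N_POL = 2          # dual polarization

# Idealized point-source image sensitivity, ignoring correlator and
# quantization efficiency losses (which raise the noise by tens of percent).
sigma_jy = SEFD / sqrt(N_POL * N_ANT * (N_ANT - 1) * BANDWIDTH * T_ON)
print(f"thermal noise ~ {sigma_jy * 1e3:.2f} mJy/beam")
```

This idealized estimate is $\sim$0.09 mJy beam$^{-1}$; folding in realistic efficiency factors brings it toward the $\sim$0.15 mJy beam$^{-1}$ expected noise quoted above and the measured $\sigma_I$ values in Tables~\ref{tab:detect} and~\ref{tab:undetect}.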
\subsection{Data Reduction and Analysis}
We adopted standard data reduction procedures using the Astronomical Image Processing System\footnote{http://www.aips.nrao.edu/index.shtml} (version 31DEC10). Specifically, we performed amplitude calibration of the visibility data using a priori system temperature measurements for the antennas, applied a parallactic-angle correction for the rotation of the antenna feed orientation during the observation, determined the spectral bandpass response, and calculated residual delays and fringe rates (fringe fitting). A bandpass response function was determined because, even though the observations were intended as continuum observations, the data were acquired in a spectral mode. These calibration steps were performed on the calibrators, most notably the phase reference calibrators, and then interpolated onto the target sources. The full VLBA was used for these observations. Nominally, the array has 10 antennas, although, because of data editing, the number of antennas used in a given scan varied between nine and~10.
Each target source was then imaged. In all cases, a region approximately 2\arcsec\ $\times$ 2\arcsec\ was searched, corresponding to a linear distance of approximately 5~kpc at the median redshift of~0.15. Typical resolutions obtained were approximately 1.1 mas, corresponding to an equivalent linear distance of 2.7 pc, again at the median redshift. Tables~\ref{tab:detect} and~\ref{tab:undetect} summarize the characteristics of the images, and derived quantities from the images, for the detected and undetected galaxies, respectively. For all galaxies, we report the SDSS spectroscopic redshift~$z$; the major and minor axes of the \textsc{clean} restoring beam, $\theta_{\mathrm{maj}}$ and~$\theta_{\mathrm{min}}$; and the noise level in the image~$\sigma_I$. For the detected galaxies, we report the peak brightness~$I$ and the integrated flux density~$S$. We also report the implied (spectral) luminosity~$L_\nu$ at the redshift of the detected galaxies and the (3$\sigma$) upper limit on~$L_\nu$ based on the image noise level for the undetected galaxies. All of the detected sources were found within 0\farcs05 of their nominal positions (determined either from optical images or from lower-resolution radio observations). For the sources that were not detected, the stated image noise levels were determined within the searched region.
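The implied spectral luminosity densities follow from the measured flux densities as $L_\nu \approx 4\pi D_L^2 S_\nu$ under the assumed cosmology. A minimal sketch of the conversion (whether the tabulated values include a $(1+z)$ k-correction factor is not stated, so this is an order-of-magnitude check only, and the function names are illustrative):

```python
from math import sqrt, pi

# Assumed cosmology from the text; no k-correction applied.
H0, OM, OV, C_KMS = 70.0, 0.3, 0.7, 299792.458
MPC_M = 3.0857e22  # meters per Mpc

def lum_dist_m(z, n=1000):
    """Luminosity distance in meters for a flat Lambda-CDM cosmology
    (trapezoidal integration of the comoving distance)."""
    dz = z / n
    integral = sum((0.5 if i in (0, n) else 1.0) * dz /
                   sqrt(OM * (1.0 + i * dz)**3 + OV) for i in range(n + 1))
    return (C_KMS / H0) * integral * (1.0 + z) * MPC_M

def l_nu(z, s_mjy):
    """Spectral luminosity density in W/Hz from a flux density in mJy."""
    s_si = s_mjy * 1e-29  # 1 mJy = 1e-29 W m^-2 Hz^-1
    return 4.0 * pi * lum_dist_m(z)**2 * s_si

# J091201.68+532036.6: z = 0.1017, S = 43.0 mJy -> ~1e24 W/Hz,
# the same order as the tabulated 8.3e23 W/Hz (the residual difference
# plausibly reflects a k-correction convention).
print(f"{l_nu(0.1017, 43.0):.2e} W/Hz")
```

The same conversion applied to the 3$\sigma$ brightness limits of the undetected galaxies reproduces the order of the upper limits in Table~\ref{tab:undetect}.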
\section{Individual Galaxies}\label{sec:sources}
In this section, we present the images of the detected galaxies and discuss our results in the context of other observations of these galaxies in the literature. The images we present are much smaller than the full 2\arcsec\ $\times$ 2\arcsec\ region initially searched for emission because, in all cases, we found the emission to be not only compact but also relatively close to the center of the region searched.
\begin{figure*}
\begin{center}
\includegraphics[width=0.54\textwidth]{J0912.eps}
\includegraphics[width=0.6\textwidth]{0554-284-52000.eps}
\caption{\footnotesize Top: VLBA 8.4 GHz image of \protect\objectname{SDSS~J091201.68$+$532036.6}.
The gray scale is linear over the range $-1.34$ to 3.18 \mbox{mJy~beam${}^{-1}$}. The beam is 1.92~mas
$\times$ 1.07 mas (corresponding to 3.6 pc $\times$ 2.0 pc at the redshift of the galaxy $z=0.1017$), and the noise level is 0.15 \mbox{mJy~beam${}^{-1}$}.
Contours are given with the levels set to be $-3$, 3, 5, 7, and~10 times the noise level in the image.
Bottom: SDSS spectrum (flux density shown in red points connected with black curves and 1$\sigma$ error shown in gray; subtracted for host-galaxy stellar continuum) along with our best fits (model in green and individual velocity components in magenta) for the H$\beta$-[O {\tiny III}] region. The vertical lines are drawn at the systemic redshift from host-galaxy stellar absorption. Labeled on the plot are our best-fit model function (Lorentzian or Gaussian) and the velocity offsets between the double-peaked components measured for [O {\tiny III}]$\lambda\lambda$4959,5007 and for H$\beta$.
}
\label{fig:J091201.68+532036.6}
\end{center}
\end{figure*}
\subsection{\protect\objectname{SDSS~J091201.68$+$532036.6}}\label{sec:J091201.68+532036.6}
This $z=0.1017$ AGN contains double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum (Figure~\ref{fig:J091201.68+532036.6}), with peaks blueshifted and redshifted from the systemic velocity by 193 km s$^{-1}$ and 208 km s$^{-1}$, respectively \citep{Liu2010b}. SDSS images show that the host galaxy has a close companion to the southwest at a projected separation of 3.7 kpc ($2.''0$). It is unclear whether the SDSS fiber spectrum is significantly contaminated by light from the companion. We have an MMT spectrum of the companion, but it shows no emission lines and has insufficient S/N to measure a redshift. Its SDSS photometric redshift \citep{Oyaizu2008} is consistent with that of the AGN.
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\textwidth]{J1137.eps}
\includegraphics[width=0.6\textwidth]{0776-099-52319.eps}
\caption{Top: VLBA 8.4 GHz image of \protect\objectname{SDSS~J113721.36$+$612001.2}. The gray scale is linear over
the range $-0.61$ to 2.67 \mbox{mJy~beam${}^{-1}$}. The beam is 1.70 mas
$\times$ 0.80 mas (corresponding to 3.4 pc $\times$ 1.6 pc at the redshift of the galaxy $z=0.1112$), and the noise level is 0.15 \mbox{mJy~beam${}^{-1}$}.
Contours are given with the levels set to be $-3$, 3, 5, 7, and~10 times the noise level in the image.
Bottom: SDSS spectrum (subtracted for host-galaxy stellar continuum) along with our best spectral fits for the H$\beta$-[O {\tiny III}] region. Figure captions and symbols are the same as those in the bottom panel of Figure \ref{fig:J091201.68+532036.6}.}
\label{fig:J113721.36+612001.2}
\end{center}
\end{figure*}
This galaxy contains a radio source that has been detected in surveys over the frequency range 0.365--8.4 GHz \citep{bwe91,gc91,dbbtw96,Condon1998}. In addition, Chandra X-ray observations show a point source with a flux of $6.8_{-1.4}^{+1.5} \times 10^{-15}$~erg~s${}^{-1}$~cm${}^{-2}$ \citep[0.5--7~\hbox{keV},][]{evans10}. In images from both the FIRST survey and the Cosmic Lens All-Sky Survey \citep[\hbox{CLASS},][]{Myers2003}, the radio source appears unresolved. There is another, also unresolved, radio source approximately 30\arcsec\ to the southeast, but it is unclear whether the two are related.
Figure~\ref{fig:J091201.68+532036.6} shows the image resulting from our VLBA observations. The radio source has three components, approximately aligned north--south. The central component is unresolved, the northern component is clearly diffuse and extended, and the southern component is marginally resolved. The flux density obtained by CLASS is approximately 70 mJy whereas our VLBA observations recover approximately 43 mJy, indicating that there is (substantial) structure on sub-arcsecond scales that is resolved out by the VLBA observations.
In contrast to the other sources in this sample (Figures \ref{fig:J113721.36+612001.2}--\ref{fig:j233313.17+004911.8}, below), SDSS~J091201.68$+$532036.6 shows substantial sub-arcsecond--scale structure, though it also has a compact core component, with a flux density of 3 mJy, comparable to the flux densities measured for compact components of other sources in our sample. While this source is unusual in our sample \citep[and that of][]{tingay11}, it is not unprecedented. Multi-Element Radio-Linked Interferometer Network (MERLIN) observations of sources commonly show structure on sub-arcsecond scales \citep[e.g.,][]{Filho2006,Williams2017}, most notably including the double-peaked emission-line source \objectname{3C~316} \citep{An2013}.
Moreover, we note two potential selection effects. First, while the extended emission contributes significantly to the total flux density of the source, its typical surface brightness is a factor of 2--3 lower than that of the compact component or the peak brightnesses in the north and south components. Modest changes in the observations
(e.g., had we observed for only 20 minutes instead of 40 minutes) would have resulted in less of the extended emission being apparent. Second, typical observations of lower luminosity radio sources have sought to address whether there is a compact core component. With such a focus, it is possible that diffuse emission located tens of
milliarcseconds from a core component would not have been noticed. Indeed, a limited sampling of the literature shows a number of VLBI images of lower luminosity radio sources for which the typical image is 10~mas $\times$ 10~mas (cf. Figure~\ref{fig:J091201.68+532036.6}).
\subsection{\protect\objectname{SDSS~J110851.04$+$065901.4}}\label{J110851.04+065901.4}
This $z=0.1816$ AGN contains double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum, with peaks blueshifted and redshifted from the systemic velocity by 95 km s$^{-1}$ and 114 km s$^{-1}$, respectively \citep{Liu2010b}. It represents our only target that hosts a kpc-scale dual AGN detected by Chandra in the X-rays \citep{Liu2013}.
\cite{Liu2010a} resolved a dual AGN in this galaxy using near-IR imaging and optical slit spectroscopy, with an approximate separation of 0\farcs7, corresponding to a linear separation of~2.1~kpc at its redshift \citep{Liu2013}. Both ground-based imaging in the NIR \citep{Liu2010a,Fu2012} and HST $Y$-band imaging \citep{Liu2013} show disturbance in the galaxy surface brightness profile, suggesting tidal interactions. \citet{Liu2013} confirmed its dual AGN nature in the X-rays based on Chandra observations combined with constraints on X-ray contribution from star formation estimated from HST $U$-band imaging. We do not detect a radio source, at a 5$\sigma$ limit of~1.1~\mbox{mJy~beam${}^{-1}$}.
\subsection{\protect\objectname{SDSS~J113721.36$+$612001.2} (\protect\objectname{4C~61.23})}\label{J113721.36+612001.2}
This $z=0.1112$ AGN contains double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum (Figure~\ref{fig:J113721.36+612001.2}), with peaks blueshifted and redshifted from the systemic velocity by 87 km s$^{-1}$ and 214 km s$^{-1}$, respectively \citep{Liu2010b}. The SDSS images show no evidence for tidal disturbance, which is also confirmed by an {\it HST} $Y$-band image (in preparation). This galaxy contains a radio source that has been detected in surveys at least over the frequency range 38 MHz to 5 GHz \citep{gsw67,bwe91,gc91,White1992,hwrw95,dbbtw96,Cohen2007}. \cite{lcfgmmv01} classify it as a Fanaroff--Riley~II radio galaxy. Their VLA image (at~5~GHz) shows it to be symmetric, with the lobes oriented at~135\arcdeg\ east of north.
Figure~\ref{fig:J113721.36+612001.2} shows our VLBA image. The source superficially appears to be a double, with the second component approximately 2~mas to the southeast of the primary component; however, the orientation of the milliarcsecond structure is similar to that of the arcsecond FR~II structure, suggesting that the fainter small-scale component could be a portion of the jet feeding the southeast large-scale radio lobe.
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\textwidth]{J1243.eps}
\includegraphics[width=0.6\textwidth]{0291-098-51928.eps}
\caption{Top: VLBA 8.4 GHz image of \protect\objectname{SDSS~J124358.36$-$005845.4}. The gray
scale is logarithmic, with a maximum at~22.08~\mbox{mJy~beam${}^{-1}$}.
The beam is 1.94 mas $\times$ 0.87 mas (corresponding to 11 pc $\times$ 4.7 pc at the redshift of the galaxy $z=0.4092$), and the noise level is 0.16 \mbox{mJy~beam${}^{-1}$}.
Contours are given with the levels set to be $-3$, 3, 5, 7, and 10 times the noise level in the image.
Bottom: SDSS spectrum (subtracted for host-galaxy stellar continuum) along with our best spectral fits for the H$\beta$-[O {\tiny III}] region. Figure captions and symbols are the same as those in the bottom panel of Figure \ref{fig:J091201.68+532036.6}.
}
\label{fig:J124358.36-005845.4}
\end{center}
\end{figure*}
\subsection{\protect\objectname{SDSS~J124358.36$-$005845.4}}\label{sec:J124358.36-005845.4}
This $z=0.4092$ AGN shows double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum (Figure~\ref{fig:J124358.36-005845.4}), with peaks blueshifted and redshifted from the systemic velocity by 360 km s$^{-1}$ and 161 km s$^{-1}$, respectively \citep{Liu2010b}. The SDSS images show no evidence for tidal disturbance, although both the image quality and sensitivity may be too low to place a strong constraint. This galaxy contains a radio source that has been detected in surveys over the frequency range 1.4--5 GHz \citep{gwbe95,Condon1998}, and the radio source appears unresolved in \hbox{FIRST}. Figure~\ref{fig:J124358.36-005845.4} shows that the radio source has a compact, unresolved component in our VLBA image. The flux density on these scales is approximately 23 mJy at 3.6 cm, whereas the FIRST flux density is approximately 40 mJy (at 20 cm). The spectrum of the radio source is approximately flat, suggesting that nearly 50\% of the flux density is not recovered by the VLBA image.
Within a region of size $\pm 1\arcsec$, there are no other radio sources stronger than 1.15 mJy (7.5$\sigma$). The nearly equatorial declination and limited hour angle coverage contribute to a point spread function (beam) with high secondary peaks, such that a more stringent limit cannot be placed.
\subsection{\protect\objectname{SDSS~J135251.22$+$654113.2}}\label{sec:J135251.22+654113.2}
This $z=0.2064$ AGN shows double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum (Figure~\ref{fig:J135251.22+654113.2}), with peaks blueshifted and redshifted from the systemic velocity by 108 km s$^{-1}$ and 265 km s$^{-1}$, respectively \citep{Liu2010b}. The SDSS images show tentative evidence for tidal disturbance, possibly related to a companion to the southeast of the main galaxy.
This galaxy contains a radio source which has been detected in surveys at least over the frequency range 74 MHz to 8 GHz \citep{hmwb90,bwe91,gc91,White1992,dbbtw96,Condon1998,Cohen2007}. The radio source is unresolved in the CLASS image, with a flux density of 79 mJy.
Figure~\ref{fig:J135251.22+654113.2} shows our VLBA image. There is at least one faint component located approximately 2.4 mas to the east of the primary component; there may also be a fainter component farther to the east. Our VLBA observations recover approximately 30 mJy, indicating that there is likely to be substantial structure on sub-arcsecond scales.
Within a region of size $\pm 1\arcsec$, there are no other apparent radio sources brighter than 0.79~\mbox{mJy~beam${}^{-1}$}\ (6.6$\sigma$). Strictly, this is brighter than the nominal statistical threshold, but the brightest pixels in the residual image appear near the edges of the image and there are (negative) pixels with comparable absolute brightnesses.
\subsection{\protect\objectname{SDSS~J231051.95$-$090011.9}}\label{sec:J231051.95-090011.9}
This $z=0.0944$ AGN shows double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum (Figure~\ref{fig:j231051.95-090011.9}), with peaks blueshifted and redshifted from the systemic velocity by 121 km s$^{-1}$ and 206 km s$^{-1}$, respectively \citep{Liu2010b}. The SDSS images show no evidence for a double stellar core, which is further confirmed by $K_s$-band imaging with Magellan/PANIC \citep{Shen2011} at $0.''6$ resolution and by HST ACS/F606W imaging at $0.''1$ resolution \citep{Fu2012}. However, there is tentative evidence for tidal disturbance, both from the SDSS images and the $K_s$-band images presented by \citet{Shen2011}. There is an apparent companion $\sim4''$ away to the southwest at PA=73$^{\circ}$, which does not contribute to the \mbox{[\ion{O}{3}]}\ emission observed in the SDSS fiber spectrum. \citet{Shen2011} presented a slit spectrum from the Apache Point Observatory 3.5m Dual Imaging Spectrograph at PA=73$^{\circ}$, which suggested that the two velocity components in \mbox{[\ion{O}{3}]}\ were spatially unresolved at $2''$ resolution.
The FIRST image shows this galaxy to be dominated by a compact component, with what may be a faint jet extending to the southeast. Figure~\ref{fig:j231051.95-090011.9} shows our VLBA image to consist of a single component, which is either unresolved or just marginally resolved. The orientation of the milliarcsecond-scale structure may appear consistent with that in the FIRST image, but this orientation is essentially the same as that of the \textsc{clean} beam (which is an elliptical Gaussian with a position angle of~$-9\arcdeg$, east of north).
\begin{figure*}[tb]
\begin{center}
\includegraphics[width=0.6\textwidth]{J1352.eps}
\includegraphics[width=0.6\textwidth]{0497-076-51989.eps}
\caption{Top: VLBA 8.4 GHz image of \protect\objectname{SDSS~J135251.22$+$654113.2}. The gray scale is
logarithmic, with a maximum at 20.29 \mbox{mJy~beam${}^{-1}$}. The beam is 1.56 mas
$\times$ 0.78 mas (corresponding to 5.3 pc $\times$ 2.6 pc at the redshift of the galaxy $z=0.2064$), and the noise level is 0.14 \mbox{mJy~beam${}^{-1}$}.
Contours are given with the levels set to be $-3$, 3, 5, 7, and 10 times the noise level in the image.
Bottom: SDSS spectrum (subtracted for host-galaxy stellar continuum) along with our best spectral fits for the H$\beta$-[O {\tiny III}] region. Figure captions and symbols are the same as those in the bottom panel of Figure \ref{fig:J091201.68+532036.6}.
}
\label{fig:J135251.22+654113.2}
\end{center}
\end{figure*}
\subsection{\protect\objectname{SDSS~J233313.17$+$004911.8} (\protect\objectname{PKS~2330$+$005})}\label{sec:J233313.17+004911.8}
This $z=0.1699$ AGN shows double-peaked \mbox{[\ion{O}{3}]}\,$\lambda\lambda$4959,5007\ lines in its SDSS fiber spectrum (Figure~\ref{fig:j233313.17+004911.8}), with both peaks blueshifted from the systemic velocity by 480 km s$^{-1}$ and 36 km s$^{-1}$, respectively \citep{Liu2010b}. The SDSS images show no evidence for a double stellar core, which is confirmed by $K_s$-band imaging with Magellan/PANIC \citep{Shen2011} at $0.''8$ resolution and by $K_p$-band imaging with Keck II/NIRC2 LGSAO at $0.''1$ resolution \citep{Fu2012}. However, there is tentative evidence for tidal disturbance, both from SDSS images and the $K_s$-band images presented by \citet{Shen2011}. There is an apparent companion $\sim6''$ away to the southeast at PA=117$^{\circ}$, which does not contribute to the \mbox{[\ion{O}{3}]}\ emission observed in the SDSS fiber spectrum. \citet{Shen2011} presented a slit spectrum from the APO 3.5m DIS at PA=117$^{\circ}$, which suggested that the two velocity components in \mbox{[\ion{O}{3}]}\ were spatially unresolved at $1.''5$ resolution.
This galaxy contains a radio source that has been detected in surveys from 74 MHz to 5 GHz \citep{blbhm86,Wright1990,bwe91,gc91,White1992,gwbe95,dbbtw96,Condon1998,Cohen2007}. We are unaware of any previous published observations in which the radio source is resolved. Figure~\ref{fig:j233313.17+004911.8} shows our VLBA image to consist of a single component.
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\textwidth]{J2310.eps}
\includegraphics[width=0.6\textwidth]{0726-465-52226.eps}
\caption{Top: VLBA 8.4 GHz image of \protect\objectname{SDSS~J231051.95$-$090011.9}. The gray
scale is linear over the range $-0.90$ to 3.47 \mbox{mJy~beam${}^{-1}$}.
The beam is 2.35 mas $\times$ 0.90 mas (corresponding to 4.1 pc $\times$ 1.6 pc at the redshift of the galaxy $z=0.0944$), and the noise level is 0.18 \mbox{mJy~beam${}^{-1}$}.
Contours are given with the levels set to be $-3$, 3, 5, 7, and 10 times the noise level in the image.
Bottom: SDSS spectrum (subtracted for host-galaxy stellar continuum) along with our best spectral fits for the H$\beta$-[O {\tiny III}] region. Figure captions and symbols are the same as those in the bottom panel of Figure \ref{fig:J091201.68+532036.6}.
}
\label{fig:j231051.95-090011.9}
\end{center}
\end{figure*}
\section{Results and Discussion}\label{sec:discuss}
\subsection{Detection Rate}
Of our initial sample of~13 SDSS galaxies showing double-peaked \mbox{[\ion{O}{3}]}\ emission lines, and with either FIRST or previous VLA detections, we detected six (46\%) in our VLBA observations to a flux density of $\sim$1 mJy. Of these six, two show likely jet structures on sub-kpc scales, while the other four are unresolved. In the other seven objects without a detectable radio core, most of the radio emission is likely from larger-scale jets or lobes. These sources are all sufficiently radio luminous that the radio emission is unlikely to be dominated by star formation in the host galaxies. The lack of mas-scale structure could also indicate that the central engine has recently shut down, although this explanation would require fine tuning, i.e., the fuel to the central engine shuts down only so recently that we do not see the central engine but we still see some lobe emission. As shown in Figure \ref{fig:target}, the VLBA-detected targets on average have higher integrated fluxes at 20 cm from FIRST than the VLBA non-detected sources. On the other hand, the [O~{\tiny III}] emission-line fluxes for the VLBA-detected and non-detected samples are similar. In no case have we detected an unambiguous sub-kpc dual AGN.
\begin{figure*}
\begin{center}
\includegraphics[width=0.6\textwidth]{J2333.eps}
\includegraphics[width=0.6\textwidth]{0384-411-51821.eps}
\caption{Top: VLBA 8.4 GHz image of \protect\objectname{SDSS~J233313.17$+$004911.8}. The gray
scale is linear over the range $-1.43$ to 4.74 \mbox{mJy~beam${}^{-1}$}.
The beam is 2.16 mas $\times$ 0.94 mas (corresponding to 6.3 pc $\times$ 2.7 pc at the redshift of the galaxy $z=0.1699$), and the noise level is 0.24 \mbox{mJy~beam${}^{-1}$}.
Contours are given with the levels set to be $-3$, 3, 5, 7, and 10 times the noise level in the image.
Bottom: SDSS spectrum (subtracted for host-galaxy stellar continuum) along with our best spectral fits for the H$\beta$-[O {\tiny III}] region. Figure captions and symbols are the same as those in the bottom panel of Figure \ref{fig:J091201.68+532036.6}.
}
\label{fig:j233313.17+004911.8}
\end{center}
\end{figure*}
\subsection{Implications on the Population of Sub-kpc Dual AGNs}
We now consider what our observations imply about the potential population of sub-kpc dual AGNs. The fraction of dual AGNs on kpc scales among double-peaked emission-line AGN is estimated to be of order 10\% \citep[e.g.,][]{Shen2011,Fu2012}. Another $\sim 40$\% of objects in the \citet{Shen2011} sample are ambiguous and some of them could harbor a sub-kpc scale dual AGN that is unresolvable with optical/NIR observations. However, we detected no sub-kpc scale dual AGN out of 13 objects in our VLBA observations, which may hint at a similar fraction ($\lesssim 10$\%) of sub-kpc dual AGNs among these double-peaked \mbox{[\ion{O}{3}]}\ AGN. This would strengthen our conclusion in \citet{Shen2011} that most double-peaked profiles are caused by NLR kinematics in single AGNs rather than by the orbital motion of dual AGNs.
Another possibility is that some of the galaxies that we detect do contain a dual AGN, but that the second SMBH in the system is either radio faint or is of sufficiently low mass as to be undetectable in our radio observations. Accreting black holes display a ``fundamental plane'' relationship between their X-ray luminosity, radio luminosity, and mass \citep{Merloni2003,Falcke04,gkmdmr09}, which allows us to estimate a BH mass given measurements in radio and X-rays. Accordingly, we have searched the Chandra X-ray archives for available observations of these galaxies. Only \objectname{SDSS~J091201.68$+$532036.6} (\S\ref{sec:J091201.68+532036.6}) and \objectname{SDSS~J110851.04$+$065901.4} \citep{Liu2013} have been observed and detected previously in targeted X-ray observations. For the remaining galaxies, we examined the ROSAT All-Sky Survey\footnote{http://www.xray.mpe.mpg.de/cgi-bin/rosat/data-browser} for a possible X-ray counterpart. In no case do we detect a ROSAT X-ray source at the location of these galaxies. We converted the ROSAT X-ray flux limits to rest-frame 2--10~keV flux limits using the online PIMMS tool\footnote{http://cxc.harvard.edu/toolkit/pimms.jsp}.
\begin{deluxetable}{lccc}
\tablecaption{Radio and X-Ray Luminosities, and Estimates on Black Hole Masses\label{tab:bh}}
\tablewidth{0pc}
\tablehead{%
\colhead{} &
\colhead{$L_R$} &
\colhead{$L_X$} &
\colhead{$M_{\mathrm{BH}}$} \\
\colhead{SDSS Name} &
\colhead{($10^{38}$~erg~s${}^{-1}$)} &
\colhead{($10^{40}$~erg~s${}^{-1}$)} &
\colhead{($10^8$~$M_\sun$)} \\
\colhead{(1)} &
\colhead{(2)} &
\colhead{(3)} &
\colhead{(4)}
}
\startdata
\objectname{J091201.68$+$532036.6} & 34.0 & $8.3_{-1.7}^{+1.9}$ & 5.1 $\pm$ 0.3 \\
& $<$23.8 & $8.3_{-1.7}^{+1.9}$ & $<$4.3 \\
\objectname{J113721.36+612001.2} & 10.92 & $<$4.02 & $<$3.5 \\
\objectname{J124358.36-005845.4} & 225.12 & $<$101 & $<$6.9 \\
\objectname{J135251.22+654113.2} & 34.44 & $<$9.05 & $<$5.0 \\
\objectname{J231051.95-090011.9} & 9.24 & $<$6.26 & $<$2.9 \\
\objectname{J233313.17+004911.8} & 45.36 & $<$25.5 & $<$4.4 \\
\\
\objectname{J000911.58-003654.7} & $<$6.47 & $<$3.52 & $<$2.8 \\
\objectname{J073849.75+315611.9} & $<$118 & $<$73.7 & $<$5.4 \\
\objectname{J080337.32+392633.1} & $<$4.12 & $<$3.25 & $<$2.3 \\
\objectname{J085841.76+104122.1} & $<$23.5 & $<$19.2 & $<$3.5 \\
\objectname{J110851.04+065901.4} & $<$45.4 & $<$22.7 & $<$4.6 \\
\objectname{J135646.11+102609.1} & $<$18.5 & $<$12.0 & $<$3.5 \\
\enddata
\tablecomments{
Galaxies are divided into two sets according to whether they are
detected or not by VLBA. The top set corresponds to galaxies detected in our
VLBA observations (see also Table~\ref{tab:detect}), the lower set to the undetected
galaxies (see also Table~\ref{tab:undetect}).
Column (1): SDSS designation with J2000 coordinates.
Column (2): Radio luminosity or 3-$\sigma$ upper limit.
Column (3): X-ray luminosity or 3-$\sigma$ upper limit.
Column (4): Estimated black hole mass or upper limit assuming the fundamental plane relationship between X-ray luminosity, radio luminosity and black hole mass \citep{gkmdmr09}.
}
\end{deluxetable}
Table~\ref{tab:bh} summarizes what the combined radio and X-ray measurements imply about the masses, or upper limits on the masses, of SMBHs in the nuclei of these galaxies. For the galaxy with both radio and X-ray detections (\objectname{SDSS~J091201.68$+$532036.6}), we estimate the masses of a potential dual SMBH system (one being radio bright and one being radio faint, i.e., below our detection limit) making the following assumptions. For the radio factor in the fundamental plane relation, we use either the flux density of the compact component or the 3$\sigma$ upper limit. For the X-ray factor in the fundamental plane relation, we assume that the two putative SMBHs would contribute equally to the X-ray flux. While this approach clearly does not yield a unique solution, it is indicative of the characteristics of a dual SMBH system, if the second black hole is not radio bright.
For the remaining galaxies, we use the (3$\sigma$) upper limit in the radio image and that on the X-ray flux to constrain the mass of an SMBH in the galaxy\footnote{For consistency with the other radio undetected galaxies, we also quote the upper limit on the X-ray flux for the galaxy \objectname{SDSS~J110851.04$+$065901.4}, which is detected by Chandra.}. This approach is also clearly not unique, but it should suffice to provide an estimate of the possible SMBH masses.
Typical 3$\sigma$ upper limits on the mass of a second SMBH in the nuclei of these galaxies are (3--7) $\times 10^8$ $M_\sun$. We stress that these black hole mass limits should be viewed as indicative, particularly given that \cite{gkmdmr09} find that the fundamental plane relation that we have assumed has a scatter of~0.77~dex. Nonetheless, it is possible that these galaxies might still contain a second SMBH, whose presence may become apparent with significantly deeper radio and X-ray observations.
\subsection{Comparison with Previous Work and Remarks on Future Directions}
\cite{tingay11} have conducted a similar search for dual SMBHs with VLBA. While their fraction of detected sources was lower (2 of~12 or 17\%), they also found no dual AGN candidates. Taken together, these observations are consistent with the fraction of (radio-bright) dual SMBHs on sub-kpc to pc-scale separations being similar to that on larger separations, namely 0.1\%, though the sample remains small. As nearly half of the objects in the \cite{Liu2010b} sample have a radio counterpart and new samples of AGNs with double-peaked narrow emission lines are available \citep[e.g.,][]{LyuLiu2016,Yuan2016}, there are ample possibilities for expanding the set of double-peaked line emission galaxies that have been examined for possible dual AGNs on sub-kpc scales.
Having a higher sensitivity than VLBA on larger scales, VLA is a better match in terms of searching for kpc-scale dual AGNs \citep[e.g.,][]{Burke-Spolaor2014,Fu2015,Muller-Sanchez2015}, kpc-scale jet-cloud interactions, or kpc-separation compact radio sources. Nevertheless, VLBA is superior in terms of spatial resolution, and has the potential to resolve sub-kpc projected pairs, and to test the probability of sub-kpc jets as the origin for double-peaked narrow-line profiles, where the NLR emission is unresolved on kpc scales \citep[e.g.,][]{Wrobel2014a}. It would also be interesting to carry out radio follow ups for new large samples of AGNs with double-peaked narrow emission lines at higher redshift \citep[e.g.,][]{LyuLiu2016,Yuan2016} to address their possible redshift and/or luminosity evolution \citep[e.g.,][]{yu11}.
\acknowledgements
We thank S.~Burke-Spolaor for helpful discussions and the anonymous referee for a careful and useful report that improved the paper.
Y.S. acknowledges support from the Alfred P. Sloan Foundation and NSF grant 1715579.
The NANOGrav project receives support from National Science Foundation Physics Frontier Center award number 1430284. The Long Baseline Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. This research has made use of the NASA/IPAC Extragalactic Database (NED) which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration. This research has made use of NASA's Astrophysics Data System. This research has made use of data obtained from the High Energy Astrophysics Science Archive Research Center (HEASARC), provided by NASA's Goddard Space Flight Center. This research has made use of data obtained from the Chandra Source Catalog, provided by the Chandra X-ray Center (CXC) as part of the Chandra Data Archive. Part of this research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration.
Funding for the SDSS and SDSS-II has been provided by the Alfred P. Sloan Foundation, the Participating Institutions, the National Science Foundation, the U.S. Department of Energy, the National Aeronautics and Space Administration, the Japanese Monbukagakusho, the Max Planck Society, and the Higher Education Funding Council for England. The SDSS website is http://www.sdss.org/.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions. The Participating Institutions are the American Museum of Natural History, Astrophysical Institute Potsdam, University of Basel, University of Cambridge, Case Western Reserve University, University of Chicago, Drexel University, Fermilab, the Institute for Advanced Study, the Japan Participation Group, Johns Hopkins University, the Joint Institute for Nuclear Astrophysics, the Kavli Institute for Particle Astrophysics and Cosmology, the Korean Scientist Group, the Chinese Academy of Sciences (LAMOST), Los Alamos National Laboratory, the Max Planck-Institute for Astronomy (MPIA), the Max Planck Institute for Astrophysics (MPA), New Mexico State University, Ohio State University, University of Pittsburgh, University of Portsmouth, Princeton University, the United States Naval Observatory, and the University of Washington.
\textit{Facilities:} \facility{VLBA}, Sloan
\section{Introduction}\label{sec:intro}
In vector similarity search one preprocesses $n$ vectors $x \in \mathbb{R}^d$ such that, given a query $q \in \mathbb{R}^d$, the top (or bottom) $k$ values of $f(x,q)$ can be computed efficiently.
The function $f$ can encode similarity between $x$ and $q$, such as the dot product $\langle x, q \rangle$, cosine similarity $\langle x, q \rangle/(\|x\|\|q\|)$, or Jaccard similarity. In these cases high values of $f$ are sought.
Alternatively, $f$ can encode dissimilarity or distance, in which case low values are searched for. Examples include $f(x,q) = \|x-q\|_p = \left(\sum_i |x_i - q_i|^p\right)^{1/p}$, or Hamming distance. The reader should be aware that all the above problems are highly related and solutions for one often translate to solutions for another.
For example, max cosine similarity, max dot product and min Euclidean distance are all equivalent for unit vectors.
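As a quick sanity check of this equivalence (our own illustration, relying on the identity $\|x-q\|_2^2 = 2 - 2\langle x, q\rangle$ for unit vectors):

```python
import numpy as np

rng = np.random.default_rng(0)

# 1000 random unit vectors in R^16 plus one unit-norm query.
X = rng.normal(size=(1000, 16))
X /= np.linalg.norm(X, axis=1, keepdims=True)
q = rng.normal(size=16)
q /= np.linalg.norm(q)

dots = X @ q                                                   # dot products
cos = dots / (np.linalg.norm(X, axis=1) * np.linalg.norm(q))   # cosine similarity
dists = np.linalg.norm(X - q, axis=1)                          # Euclidean distances

# For unit vectors, ||x - q||^2 = 2 - 2<x, q>, so all three rankings agree.
assert np.allclose(dists**2, 2 - 2 * dots)
assert np.argmax(dots) == np.argmax(cos) == np.argmin(dists)
```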
In recent years, vector similarity search has taken center stage in the deployment of large scale machine learning applications.
These include duplicate detection \cite{wang2006large,wang2010towards,zhou2017recent}, personalization \cite{10.1007/978-3-319-56608-5_54,grbovic2018real}, extreme classification \cite{dean2013fast}, and more.
The demand for vector search was further fueled by proliferation of pre-trained deep learning models which provide semantically rich vector embeddings for text, images, audio, and more \cite{cremonesi2010performance,weston2010large,wu2018starspace}.
The scale of these problems often requires searching through billions of vectors or performing thousands of searches a second.
Performance is therefore paramount which necessitates the use of approximate algorithms.
These algorithms trade some tolerance for inaccuracy in the result set with speed.
Practical considerations also include indexing speed, index size, concurrent read-write support, support for inserts and deletes, and many other factors.
The exact tradeoffs between these factors are both complex and data dependent.
Due to the importance and complexity of this task, significant academic and industrial efforts were invested in approximate vector search algorithms \cite{johnson2019billion,chen2018sptag,malkov2018efficient}.
Algorithms fall generally into a handful of categories.
Hashing based algorithms \cite{NIPS2014_310ce61c,shrivastava2015assymetric,NIPS2015_2823f479}, tree-search based algorithms \cite{muja2014scalable,dasgupta2008random} and graph based algorithms \cite{malkov2018efficient,harwood2016fanng} mostly focus on reducing the dependence on $n$, the number of vectors.
Dimension reduction \cite{charikar2002similarity,vempala2005random,li2019random} and quantization algorithms \cite{wu2017multiscale,guo2016quantization,martinez2018lsq++,guo2020accelerating,ge2013optimized,gong2012iterative,martinez2016revisiting,zhang2014composite,ozan2016competitive}, on the other hand, reduce the dependence on $d$ by compressing the index vectors and computing similarities to these compressed representations quickly.
To achieve the best results for a specific application, usually a combination of the above methods is needed.
Versatile libraries like Faiss \cite{johnson2019billion} along with managed cloud services like Pinecone \cite{pinecone} make these a lot easier for scientists and engineers to use in production applications.
In this paper we focus solely on quantization techniques for approximate max inner product search (MIPS).
Simple quantization can be achieved with clustering: a small representative set of $k$ vectors is chosen and each vector in the data is projected onto that set.
Computing the dot products of the query and centers requires only $O(dk)$ time.
Then, computing the approximate dot product for each point can be done in $O(1)$ time by a simple lookup.
The shortcoming of this approach is that the quantization is very coarse;
it allows for only $k$ possible quantization centers.
More advanced product quantization techniques divide the $d$ coordinates into $m$ contiguous sections and project each set of $d/m$ coordinates onto one of $k$ centers independently.
The advantage of this is that computing the dot product of the query and the centers still requires $O(dk)$ operations but now each point can map to one of $k^m$ centers.
The price of this approach is that computing the approximate dot product requires $m$ lookups and additions instead of a single lookup.
Clearly, increasing the values of $m$ and $k$ improves the quality of the approximation while also increasing search running time.
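The lookup-add mechanics just described can be sketched as follows. This is a minimal illustration under our own naming (train_pq, pq_dots) with plain Lloyd iterations, not the implementation evaluated in this paper:

```python
import numpy as np

def train_pq(X, m, k, iters=10, seed=0):
    """Product quantization training: independent k-means per coordinate section.

    Returns centers of shape (m, k, d/m) and per-point codes of shape (n, m).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    db = d // m
    centers = np.empty((m, k, db))
    codes = np.empty((n, m), dtype=np.int64)
    for j in range(m):
        S = X[:, j * db:(j + 1) * db]
        C = S[rng.choice(n, size=k, replace=False)]  # initialize from the data
        for _ in range(iters):                       # plain Lloyd iterations
            assign = np.argmin(((S[:, None] - C[None]) ** 2).sum(-1), axis=1)
            for c in range(k):
                if np.any(assign == c):
                    C[c] = S[assign == c].mean(axis=0)
        centers[j] = C
        codes[:, j] = assign
    return centers, codes

def pq_dots(q, centers, codes):
    """Approximate <x_i, q> for every point with m table lookups and adds."""
    m, k, db = centers.shape
    # O(dk) work: inner products of each query section with its k centers.
    table = np.stack([centers[j] @ q[j * db:(j + 1) * db] for j in range(m)])
    # O(nm) work: m lookup-adds per point.
    return table[np.arange(m), codes].sum(axis=1)
```

Setting $m = 1$ recovers the coarse clustering scheme above; larger $m$ and $k$ improve the approximation at the cost of application time and index size.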
In order to improve the tradeoff between increasing running time and increasing the number of cluster centers, we propose projective clustering product quantization (PCPQ).
The motivation is to significantly increase the number of cluster centers (and thereby improve vector approximation)
while, at the same time, having negligible increase in application running time.
The exact definition of projective clustering is given in Section \ref{sec:proj_clustering}.
Unlike in standard clustering, each data point is projected to a point along the direction of a center $c$. Specifically, point $i$ projects to $\alpha_i c_j$ where $c_j$ is one of $k$ ``centers'' in $\mathbb{R}^d$ and $\alpha_i \in \mathbb{R}$ is a scalar.
Since $\alpha_i$ is chosen for each point separately, the solution is strictly more general than clustering.
Our experimental results below corroborate that this indeed provides a significant increase in approximation quality.
Unfortunately, it also requires storing an extra floating point for each point and section of coordinates.
This is both inefficient in terms of memory consumption and slow in application time since each lookup-add in PQ is replaced with a lookup-multiply-add in PCPQ.
\begin{table}[!t]
\begin{center}
\begin{tabular}{|l|c|c|c|} \hline
& Centers & Application Time & Index Size \\ \hline \hline
\parbox{6cm}{Clustering} & $k$ & $kd+n$ & $n\log_2(k)$ \\
\parbox{6cm}{Product Quantization (PQ)} & $k^m$ & $kd+2nm$ & $nm\log_2(k)$ \\
\parbox{6cm}{Product Quantization (PQ) with $ks$ centers (for comparison)} & $(ks)^m$ & $skd+2nm$ & $nm\log_2(ks)$ \\
Quantized Projective Clustering (Q-PCPQ) & $(ks)^m$ & $kd+skm+2nm$ & $nm\log_2(ks)$ \\ \hline
\end{tabular}
\end{center}
\caption{Number of centers, application time to compute inner-products of all points with a query and index size (in bits) for: vanilla clustering, clustering with product quantization, and quantized projective clustering.}
\label{table:clustering_comparison}
\vspace{0.5em}
\hrule
\vspace{-0em}
\end{table}%
To address this, we suggest a quantized version of PCPQ (Q-PCPQ). In Q-PCPQ each $\alpha_i$ is chosen from a set of at most $s$ values.
As a result, using a lookup table of size $ks$ one can, again, use only $m$ lookup adds.
Moreover, the number of cluster centers is $(ks)^m$ with Q-PCPQ compared with $k^m$ for PQ.
The increased cost is that the index size is $m\log(ks)$ bits per vector compared to $m\log(k)$ for PQ and the running time requires an additional $skm$ floating point multiplications.
Note that one could trivially increase the number of centers with PQ to $(ks)^m$ by using $k'=ks$ clusters in each section.
However, with $k$ typically taking values like $256$, and $s$ taking values like $16$ and $32$, the term $k' d = ksd$ in the query complexity will dominate the query time. Moreover, clustering vectors into $k' = 256 \times 32 = 8192$ centers is not usually feasible during index creation time, especially since the number of points indexed can be as large as $10^9$ in large datasets. We compare the number of centers, application time and index size for these methods in Table \ref{table:clustering_comparison}.
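To make the Q-PCPQ application cost in Table \ref{table:clustering_comparison} concrete, the sketch below (our own naming; it assumes center and scalar codes were already assigned at indexing time) builds the per-section table of $ks$ entries with $kd$ multiply-adds plus $skm$ multiplications, after which scoring all $n$ points costs the same $m$ lookup-adds per point as plain PQ:

```python
import numpy as np

def qpcpq_table(q, centers, scalars):
    """Query-time table for Q-PCPQ.

    centers: (m, k, d/m) section centers; scalars: (m, s) quantized projections.
    Builds an (m, k, s) table with entry [j, c, t] = scalars[j, t] * <centers[j, c], q_j>.
    Cost: k*d multiply-adds for the inner products plus s*k*m multiplications.
    """
    m, k, db = centers.shape
    dots = np.stack([centers[j] @ q[j * db:(j + 1) * db] for j in range(m)])  # (m, k)
    return dots[:, :, None] * scalars[:, None, :]                             # (m, k, s)

def qpcpq_dots(table, center_codes, scalar_codes):
    """Approximate inner products with m lookup-adds per point, as in plain PQ."""
    m = table.shape[0]
    return table[np.arange(m), center_codes, scalar_codes].sum(axis=1)
```

With $k=256$ and $s=32$ the table has $8192$ entries per section, yet the $kd$ term in the query time is unchanged relative to PQ.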
Moreover, we show that the choice of the optimal centers and scaling coefficients for PCPQ and Q-PCPQ can be made loss-aware.
Following the logic in \cite{guo2020accelerating}, one can weight the loss $(\langle x_i,q \rangle - \langle \tilde{x}_i,q \rangle )^2$ more heavily if $\langle x_i,q \rangle$ is large and $x_i$ is therefore a likely match for $q$.
Taking that into account, one obtains an anisotropic cost function for the optimal cluster centers.
In our experiments we show that the additional flexibility afforded by the scalars in Q-APCPQ make its recall performance better than ScaNN on standard benchmark datasets for MIPS.
\subsection{Related Works}
\paragraph{Rotation matrix.} In state-of-the-art implementations of PQ \cite{wu2017multiscale,guo2020accelerating,ge2013optimized,gong2012iterative,johnson2019billion}, typically, a rotation matrix $R \in \mathbb{R}^{d \times d}$ is applied to the data before clustering and applying PQ. The rotation matrix $R$ is typically learned to make the data more suitable for breaking the $d$ dimensions into sections in PQ. Several methods are used to learn and optimize the matrix $R$, including IPQ \cite{gong2012iterative}, OPQ \cite{ge2013optimized} and MSQ \cite{wu2017multiscale}. Our methods address the clustering and product quantization steps themselves and are oblivious to such pre-processing -- in particular, for an orthogonal rotation matrix $R \in \mathbb{R}^{d \times d}$, we can approximate the inner-product $\iprod{x}{q} = \iprod{Rx}{Rq}$ with $\sum_{i = 1}^m \iprod{\alpha^{(i)}_x c^{(i)}_x}{(Rq)^{(i)}}$, where $\alpha^{(i)}_x, c^{(i)}_x$ denote the scaling and center from the $i$-th section of Q-PCPQ (or Q-APCPQ) applied to the rotated data point $Rx$, and $(Rq)^{(i)}$ denotes the $i$-th section of $Rq$. A natural future direction to study is whether one can jointly learn the centers, scaling and rotation matrix in order to increase the performance of Q-PCPQ and Q-APCPQ. \vspace{-1em}
\paragraph{Optimization methods.} Several works have focused on proposing novel methods of finding the centers and the assignment of points to centers. For instance, LSQ \cite{martinez2016revisiting} and LSQ++ \cite{martinez2018lsq++} use a local-search based method that outperforms the traditional, simpler alternating-minimization based methods. Some methods optimize the quality of the centers by introducing cross-training across sections when finding centers; e.g., CompQ \cite{ozan2016competitive} is a stochastic gradient descent based method that achieves this through joint training across sections, and CQ \cite{zhang2014composite} penalizes an error dependent on centers across sections. Our work introduces a novel clustering method, along with an optimization problem for PQ which we optimize using a simple alternating-minimization based approach. We leave the development of more advanced optimization methods for future work. \vspace{-1em}
\paragraph{Non-PQ Methods.} There are several works that approach MIPS using techniques different from PQ. One popular method is \emph{locality sensitive hashing} (LSH), which has seen a flurry of work \cite{NIPS2014_310ce61c,shrivastava2015assymetric,huang2018accurate,neyshabur2015symmetric} for MIPS tasks. In addition, several recent works \cite{sablayrolles2018spreading,Dong2020Learning,erin2015deep,jain2017subic,klein2019end,sablayrolles2017should} have looked at using a neural network to embed the input points and performing the search in the embedded space. Comparison of these methods against PQ methods, such as ours, is of vital research interest to better understand the performance of MIPS systems, and we leave this to future work.
\subsection{Preliminaries}
Throughout, we let $X \in \mathbb{R}^{n \times d}$ denote the matrix whose rows are the data points $x_1, \dots, x_n$ and define $\bar{d} = d/m$ to be the dimension of each of the $m$ sections of coordinates in PQ. We sometimes write $A \subseteq X$ to denote a subset $A$ of the data points in $X$. Additionally, to define the quality of the approximation from clustering for MIPS, we let $\Qc$ denote a distribution over a set of queries in $\mathbb{R}^d$.
\section{Projective Clustering}\label{sec:proj_clustering}
\begin{figure}
\includegraphics[width=\linewidth]{clustering.png}
\caption{An illustration of $k$-clustering, projective $k$-clustering and their anisotropic counterparts for $k = 2$. While centers in $k$-clustering and anisotropic $k$-clustering are points, for their projective counterparts, they correspond to one-dimensional subspaces. Anisotropic projective clustering has the effect of pushing the centers further away from the origin.}
\label{fig:clustering}
\vspace{0.5em}
\hrule
\vspace{-1em}
\end{figure}
In projective $k$-clustering, our goal is to find $c_1, \dots, c_k \in \mathbb{R}^{\bar{d}}$ and scalars $\alpha_1, \dots, \alpha_n$ which minimize the following objective:
\begin{equation}\label{eqn:pq_loss}
\min_{\substack{c_1, \dots, c_k \in \mathbb{R}^{\bar{d}} \\ \alpha_1, \dots, \alpha_n}} \sum_{i =1}^n \min_{j \in [k]} \Ex_{q \sim \Qc}[ (\iprod{q}{x_i} - \iprod{q}{\alpha_i c_j})^2].
\end{equation}
When the query distribution $\Qc$ is isotropic, and $\alpha_i$ are constrained to the value $1.0$, the above minimization problem reduces to the ubiquitous \emph{$k$-means clustering} problem. When $\alpha_i$ are allowed to take on any real value, minimizing the loss function from \eqref{eqn:pq_loss} reduces to the popular \emph{$k$-projective clustering} problem; see Appendix \ref{appx:proj_clustering_reduction} for proofs.
\begin{definition}\label{def:projective_clustering}
The \emph{$k$-projective clustering problem} for points $X \in \mathbb{R}^{n \times \bar{d}}$ asks for a set of $k$ points $c_1, \dots, c_k \in \mathbb{R}^{\bar{d}}$ such that the following is minimized
\begin{equation}\label{eqn:proj_clustering_loss}
\min_{c_1, \dots, c_k \in \mathbb{R}^{\bar{d}}} \sum_{i = 1}^n \min_{j \in [k]} \left\|x_i - \frac{\iprod{x_i}{c_j}}{\|c_j\|_2^2} \cdot c_j \right \|_2^2.
\end{equation}
In words, the cost incurred by a point $x$ is the cost of projecting it onto the direction (from the choice of $k$ directions given by $c_1, \dots, c_k$) that minimizes the projection cost.
\end{definition}
For a set of centers $c_1, \dots, c_k \in \mathbb{R}^{\bar{d}}$, and for each $j \in [k]$, let $X_j \subseteq X$ denote the set of all points in $X$ that map to center $c_j$. For each $X_j$, denote $u_j, v_j \in \mathbb{R}^{\bar{d}}$ to be the top left and right singular vectors of $X_j$ respectively and let $\sigma_j \in \mathbb{R}^{\geq 0}$ be the top singular value.
By the Eckart-Young-Mirsky Theorem \cite{eckart1936approximation}, we must have that $c_j = v_j$ and $X_jc_j = \sigma_j \cdot u_j$ for all $j \in [k]$ and hence, we can simplify the minimization problem from \eqref{eqn:proj_clustering_loss} as follows:
\begin{align*}
\min_{\{c_j\}_{j=1}^k} \sum_{i = 1}^n \min_{j \in [k]} \left\|x_i - \frac{\iprod{x_i}{c_j}}{\|c_j\|_2^2} \cdot c_j \right \|_2^2 &= \min_{\{c_j\}_{j=1}^k} \sum_{j = 1}^k \sum_{x \in X_j} \left\|x - \frac{\iprod{x}{c_j}}{\|c_j\|_2^2} \cdot c_j \right \|_2^2 \\
&= \min_{\{X_j\}_{j=1}^k} \sum_{j = 1}^k \sum_{x \in X_j} \left\|x - \sigma_j u_{jx} \cdot v_j \right \|_2^2 \\
&= \min_{\{X_j\}_{j=1}^k} \sum_{j = 1}^k \left\|X_j - \sigma_j u_{j} v_j^\top \right \|_F^2 \\
&= \min_{\{X_j\}_{j=1}^k} \sum_{j = 1}^k \left \|X_j\right \|_F^2 - \sigma_j^2 \\
&= \left\|X\right\|_{F}^2 - \max_{\{X_j\}_{j=1}^k} \sum_{j = 1}^k \sigma_j^2
\end{align*}
where the minimization over $\{X_j\}_{j =1}^k$ is a minimization over all partitions of the input $X$ into $k$ parts. It follows then that once the partition $X_1, \dots, X_k$ is computed, the centers are given by $c_j = v_j$ for $j \in [k]$ and the optimal scalar $\alpha_i$ for the $i$-th point is the projection $\alpha_i = \sigma_ju_{ji}$.
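The per-cluster closed form above (direction $v_j$ from the top right singular vector, scalars from the projections) suggests a Lloyd-style alternating procedure for projective $k$-clustering. The following is our own illustrative sketch, not the authors' implementation:

```python
import numpy as np

def projective_kclustering(X, k, iters=20, seed=0):
    """Alternating minimization for projective k-clustering.

    Each center is a direction (unit vector); a point's cost is its squared
    residual after projecting onto the best of the k directions.
    Returns unit directions C (k, d), assignments (n,) and scalars alpha (n,).
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    C = rng.normal(size=(k, d))
    C /= np.linalg.norm(C, axis=1, keepdims=True)
    for _ in range(iters):
        # Assignment step: the residual is ||x||^2 - <x, c>^2 for a unit c,
        # so each point picks the direction with the largest squared projection.
        proj = X @ C.T                       # (n, k)
        assign = np.argmax(proj ** 2, axis=1)
        # Update step: per cluster, the optimal direction is the top right
        # singular vector of the cluster's points (Eckart-Young).
        for j in range(k):
            Xj = X[assign == j]
            if len(Xj) == 0:
                continue
            _, _, Vt = np.linalg.svd(Xj, full_matrices=False)
            C[j] = Vt[0]
    alpha = (X * C[assign]).sum(axis=1)      # optimal scalar = projection length
    return C, assign, alpha
```

Both steps are non-increasing in the objective $\|X\|_F^2 - \sum_j \sigma_j^2$, so, as with $k$-means, the procedure converges to a local optimum.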
The running time of applying the projective clustering approximation is asymptotically identical to that of $k$-means, while the approximation it produces is significantly more accurate. This is unsurprising since projective clustering is more expressive than $k$-means. Moreover, the optimal solution for approximating a collection of vectors by scalar multiples of a single vector is well known: it is their top singular vector rather than their geometric center, which is the optimal solution for $k$-means.
Building on these encouraging results, we proceed to investigate the quantized version of projective clustering.
\subsection{Quantized Projective $k$-Clustering}\label{sec:quant_proj}
Recall that our goal is to quantize the projections $\alpha_1, \dots, \alpha_n$ of the points $x_1, \dots, x_n$ to $s \in \N$ values. Given the centers $c_1, \dots, c_k$, we can write the quantized version of the minimization problem from above as follows.
\begin{definition}
In the \emph{quantized projective $k$-clustering} problem, a partition $\{X_j\}_{j \in [k]}$ of the data $X$ (corresponding to clusters) is given and the goal is to find a set of vectors $\{\bar{u}_j \in \mathbb{R}^{|X_j|}\}_{j \in [k]}$ which contain at most $s$ distinct entries while minimizing the following objective:
\begin{equation}\label{eqn:qpcpq_loss}
\min_{\{\bar{u}_j\}_{j\in[k]}} \sum_{j =1}^k \|X_j - \bar{u}_jv_j^\top\|_F^2
\end{equation}
where $v_j \in \mathbb{R}^{\bar{d}}$ is the top right singular vector of $X_j$.
\end{definition}
It can be shown that in fact, the objective function from \eqref{eqn:qpcpq_loss} is upper bounded by the cost of the optimal clustering plus the cost of doing a one-dimensional $k$-means clustering of the projections $\alpha_1, \dots, \alpha_n$ with $k = s$. We state this specifically in the following fact and give a proof in Appendix \ref{appx:qpcpq}.
\begin{fact}\label{fact:qpcpq_loss_reduction}
The loss in \eqref{eqn:qpcpq_loss} is upper-bounded by:
$$ \min_{\{\bar{u}_j\}_{j\in[k]}} \sum_{j =1}^k \|X_j - \sigma_ju_jv_j^\top\|_F^2 + \|\sigma_ju_j - \bar{u}_j\|_2^2 $$
where $u_j$ and $\sigma_j$ are the left singular vector and top singular value of $X_j$ respectively.
\end{fact}
It is now easy to see that minimizing the loss $\sum_{j = 1}^k\|\sigma_ju_j - \bar{u}_j\|_2^2$ is simply a $k$-means problem in one dimension with $k = s$, hence this can be easily computed after finding the centers and the projections for each section.
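Since the scalar quantization reduces to one-dimensional $k$-means with $k = s$, a few lines of Lloyd iterations suffice. The sketch below is a hypothetical illustration; the quantile initialization is a choice made here for robustness on the toy data, not something prescribed by the text:

```python
import numpy as np

def kmeans_1d(vals, s, iters=50):
    """Lloyd's algorithm in one dimension: quantize `vals` to s levels.
    Quantile initialization keeps this toy example away from bad local optima."""
    levels = np.quantile(vals, np.linspace(0.0, 1.0, s))
    for _ in range(iters):
        # assignment: nearest level for every scalar
        idx = np.argmin(np.abs(vals[:, None] - levels[None, :]), axis=1)
        # update: each level becomes the mean of its assigned scalars
        for l in range(s):
            if np.any(idx == l):
                levels[l] = vals[idx == l].mean()
    return levels, idx

alphas = np.array([0.1, 0.12, 0.5, 0.52, 0.9, 0.95])
levels, idx = kmeans_1d(alphas, s=3)
assert np.max(np.abs(levels[idx] - alphas)) < 0.05
```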
\section{Anisotropic Projective Clustering}
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.8\linewidth]{anisotropic_centers.png}
\caption{The optimal scaling $\alpha$ for anisotropic projective clustering with differing query distributions. The weight function $\id_t : \mathbb{R} \to \{0, 1\}$ essentially restricts the queries in the distribution $\Qc$ on which the loss $\langle q, x-c \rangle^2$ is measured by outputting $0$ on all the queries $q$ for which $\langle q ,x \rangle < t$. When the distribution is isotropic, $x$ is mapped to its projection. When $q = x$, the point $x$ is mapped to a center $c$ such that the inner-product $\langle c, q\rangle = \langle x, q \rangle = \|x\|_2^2$.
}
\label{fig:anisotropic_clustering}
\vspace{0.5em}
\hrule
\vspace{-0em}
\end{center}
\end{figure}
The objective function in \eqref{eqn:pq_loss} takes the expectation over the entire query distribution $\Qc$. However, as observed by \citet{guo2020accelerating}, not all pairs of points and queries are equally important since in the MIPS setting we are interested in preserving the inner-product $\iprod{x}{q}$ only if it is large (and hence a candidate for a nearest neighbor). This observation motivates the work of \citet{guo2020accelerating} that introduces a weight function in the loss given by \eqref{eqn:pq_loss}. Specifically, given a threshold $t > 0$, let $\id_t(x)$ be a weight function that is $1$ when $x \geq t$ and $0$ when $x < t$. Then, they consider the following \emph{score-aware loss} function for clustering:
\begin{equation}\label{eqn:anisotropic_loss}
\min_{\substack{c_1, \dots, c_k \in \mathbb{R}^{\bar{d}} \\ \alpha_1, \dots, \alpha_n}} \Ex_{q \sim \Qc}[ \sum_{i =1}^n \min_{j \in [k]} \id_t(\iprod{x_i}{q}) \cdot (\iprod{q}{x_i} - \iprod{q}{ \alpha_ic_j})^2]
\end{equation}
where $\alpha_i$ is fixed to be $1.0$ and where $t > 0$ is a tuneable input-specific parameter\footnote{When the input is unit norm, it is suggested to set $t = 0.2$ \cite[Section 3.2]{guo2016quantization}.}. The threshold function $\id_t$ defines a cone of vectors (of unit length) around the point $x_i$. It essentially ``selects'' the queries from the support of $\Qc$ for which the inner-product between $x_i$ and the queries should be preserved. Figure \ref{fig:clustering} depicts this intuition, showing that anisotropic (projective) clustering finds centers that have larger norm than in plain vanilla clustering so as to preserve the inner-products with the relevant queries, defined by the threshold function.
As shown in \citet{guo2020accelerating}, the above loss (per point) can be written as a linear combination of the error introduced by the component of the residual $x_i - c_j$ that is parallel to $x_i$ and the component that is orthogonal to $x_i$. Formally, define $r_\parallel(x, c) \eqdef x - \frac{\iprod{x}{c}}{\|x\|_2^2}x$ and $r_\bot(x, c) \eqdef \frac{\iprod{x}{c}}{\|x\|_2^2}x - c$, then, it was shown in \citet[Theorem 3.2]{guo2020accelerating} that the loss in \eqref{eqn:anisotropic_loss} is equivalent to minimizing the loss
$$\min_{\substack{c_1, \dots, c_k \in \mathbb{R}^{\bar{d}}}} \sum_{i = 1}^n \min_{j \in [k]} h_\parallel(\|x_i\|_2) \cdot \| r_\parallel(x, c_j) \|_2^2 + h_\bot(\|x_i\|_2) \cdot \|r_\bot(x, c_j)\|_2^2$$
where $h_\parallel(y) = (\bar{d}-1)\int_0^{t/y} \left(\sin^{\bar{d}-2}\theta - \sin^{\bar{d}} \theta\right) d\theta$ and $h_\bot(y) = \int_0^{t/y} \sin^{\bar{d}}\theta \, d\theta$.
Intuitively, the functions $h_\parallel$ and $h_\bot$ represent how the error from the parallel component of the residual and the orthogonal component of the residual should be weighted depending on the weight-function $\id_t$ in the loss \eqref{eqn:anisotropic_loss}. It can be shown that for any $t \geq 0$ we have that $h_\parallel \geq h_\bot$, i.e. the parallel component of the error is weighted higher. We defer further discussion of $h_\parallel$ and $h_\bot$ to Appendix \ref{appx:hvalues_discussion}.
A natural way to extend the above loss, as in projective $k$-clustering, is to remove the restriction on the values of $\alpha_1, \dots, \alpha_n$ and allow them to take on any real value. We term this the \emph{anisotropic projective $k$-clustering problem}, which we define below.
\begin{definition}\label{def:projective_anisotropic_clustering}
The \emph{anisotropic projective $k$-clustering problem} for a set of points $X \in \mathbb{R}^{n \times \bar{d}}$ aims to find a set of $k$ points $c_1, \dots, c_k \in \mathbb{R}^{\bar{d}}$ and $n$ scalars $\alpha_1, \dots, \alpha_n \in \mathbb{R}$ such that the following is minimized
\begin{equation}\label{eqn:proj-anisotropic-loss}
\min_{\substack{c_1, \dots, c_k \in \mathbb{R}^{\bar{d}} \\ \alpha_1,\ldots,\alpha_n \in \mathbb{R}}}\sum_{i = 1}^n \min_{j \in [k]} \left(h_\parallel(\|x_i\|_2) \cdot \|r_\parallel(x_i, \alpha_ic_j)\|_2^2 + h_\bot(\|x_i\|_2) \cdot \|r_\bot(x_i, \alpha_ic_j)\|_2^2 \right).
\end{equation}
\end{definition}
A simple calculation shows that, for a fixed $c_j$ and $x_i$, one can compute the optimal $\alpha_i \in \mathbb{R}$ that minimizes the loss in \eqref{eqn:proj-anisotropic-loss}. Intuitively, the optimal scaling pushes the center away from the projection in the direction of $c_j$ depending on which queries have weight $1$ in the loss function \eqref{eqn:proj-anisotropic-loss}; Figure \ref{fig:anisotropic_clustering} depicts this intuition.
\begin{fact}\label{fact:opt_alpha_aniso}
For vectors $x, c \in \mathbb{R}^{\bar{d}}$, we have
$$\argmin_{\alpha \in \mathbb{R}} \ h_\parallel \cdot \left\|x - \frac{\iprod{x}{\alpha c}}{\|x\|_2^2}x \right\|_2^2 + h_\bot \cdot \left\|\alpha c - \frac{\iprod{x}{\alpha c}}{\|x\|_2^2}x \right\|_2^2 = \frac{h_\parallel \cdot \iprod{x}{c}}{\frac{(h_\parallel - h_\bot)\iprod{x}{c}^2}{\|x\|_2^2} + h_\bot \cdot \|c\|_2^2}$$
where $h_\parallel$ and $h_\bot$ are abbreviations for $h_\parallel(\|x\|_2)$ and $h_\bot(\|x\|_2)$ respectively.
\end{fact}
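Fact \ref{fact:opt_alpha_aniso} can be sanity-checked numerically. In the sketch below (hypothetical, with arbitrarily chosen constant weights satisfying $h_\parallel \geq h_\bot$), the closed-form $\alpha$ is compared against a dense grid search over candidate scalings:

```python
import numpy as np

def aniso_loss(x, c, alpha, h_par, h_bot):
    """Anisotropic loss of approximating x by alpha * c (Fact's objective)."""
    proj = (x @ (alpha * c)) / (x @ x) * x
    return h_par * np.sum((x - proj) ** 2) + h_bot * np.sum((alpha * c - proj) ** 2)

rng = np.random.default_rng(1)
x, c = rng.standard_normal(5), rng.standard_normal(5)
h_par, h_bot = 2.0, 0.5        # h_par >= h_bot, as discussed in the text

p = x @ c
alpha_star = h_par * p / ((h_par - h_bot) * p**2 / (x @ x) + h_bot * (c @ c))

# The closed form should beat (or tie) every alpha on a dense grid,
# since the loss is a convex quadratic in alpha.
grid = np.linspace(-5, 5, 2001)
grid_best = min(aniso_loss(x, c, a, h_par, h_bot) for a in grid)
assert aniso_loss(x, c, alpha_star, h_par, h_bot) <= grid_best + 1e-9
```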
\subsection{Quantized Anisotropic Projective Clustering}\label{sec:quant_aniso}
The quantized version of anisotropic projective $k$-clustering can be written by modifying the minimization problem from \eqref{eqn:proj-anisotropic-loss}, as was done for projective $k$-clustering. Specifically, given the centers $c_1, \dots, c_k$, the goal is to minimize the following loss:
\begin{equation}
\min_{\substack{\lambda_1,\ldots,\lambda_s}}\sum_{i = 1}^n \min_{\substack{j \in [k], \ l \in [s]}} \left(h_{\parallel}(\|x_i\|_2) \cdot \|r_\parallel(x_i, \lambda_lc_j)\|_2^2 + h_\bot(\|x_i\|_2) \cdot \|r_\bot(x_i, \lambda_lc_j)\|_2^2 \right). \label{eqn:quantized_aniso_proj_loss}
\end{equation}
Using the definition of $r_\parallel$ and $r_\bot$, the term in the summation can be written in the form $w_i\lambda_l^2 + a_i\lambda_l + b_i$ where $w_i, a_i, b_i \in \mathbb{R}$ are constants\footnote{$w_i = (h_\parallel +h_\bot)\iprod{x_i}{c_j}(\iprod{x_i}{c_j}-2) + h_\bot\|c_j\|_2^2$, $a_i = -2\iprod{x_i}{c_j}$ and $b_i = h_\parallel \|x_i\|_2^2$.} that are independent of $\lambda_1, \dots, \lambda_s$ and only dependent on the point $x_i$ and the center it maps to. Hence, the above minimization problem can be simplified as follows:
$$\min_{\lambda_1, \dots, \lambda_s} \sum_{i = 1}^n \min_{l \in [s]} w_i\lambda_l^2 + a_i\lambda_l + b_i.$$
Notice that this is a quadratic loss for $s$ variables in one dimension, for which we can find a fixed point efficiently by alternating minimization (with random initialization), since the minimization problem can be solved exactly for a single scalar.
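A minimal sketch of this alternating minimization follows (hypothetical toy instance; the constants $w_i, a_i, b_i$ are chosen by hand rather than derived from data, and all $w_i$ are assumed positive so each group's quadratic has a unique minimizer):

```python
import numpy as np

def fit_lambdas(w, a, b, s, iters=30, seed=0):
    """Alternating minimization for s scalars under per-point quadratics
    w_i * lam^2 + a_i * lam + b_i (all w_i assumed positive here)."""
    rng = np.random.default_rng(seed)
    lams = rng.standard_normal(s)
    for _ in range(iters):
        # assignment: each point picks the scalar minimizing its quadratic
        cost = w[:, None] * lams[None, :] ** 2 + a[:, None] * lams[None, :] + b[:, None]
        idx = np.argmin(cost, axis=1)
        # update: a group's summed quadratic is minimized at -sum(a)/(2 sum(w))
        for l in range(s):
            mask = idx == l
            if mask.any():
                lams[l] = -a[mask].sum() / (2.0 * w[mask].sum())
    return lams, idx

# Toy instance whose per-point minimizers -a_i/(2 w_i) cluster near -1 and +2.
w = np.ones(6)
a = np.array([2.0, 2.2, 1.8, -4.0, -4.2, -3.8])
b = np.zeros(6)
lams, idx = fit_lambdas(w, a, b, s=2)
assert sorted(np.round(lams, 1).tolist()) == [-1.0, 2.0]
```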
\section{Efficient Algorithms and Application to PQ}
In this section we show how to implement Q-PCPQ and Q-APCPQ in two stages: first we give efficient algorithms in Sections \ref{sec:proj_clustering_alg} and \ref{sec:aniso_proj_clustering_alg} to compute the centers and scalars for projective $k$-clustering (PCPQ) and anisotropic projective $k$-clustering (APCPQ) respectively, and then, in Section \ref{sec:pq_alg}, we state how to create the index for Q-PCPQ and Q-APCPQ by using the methods from Sections \ref{sec:quant_proj} and \ref{sec:quant_aniso} to quantize the scalars we computed and store the mapping of the points to their respective centers and quantized scalars. In the same section, we show how the index for Q-PCPQ and Q-APCPQ is used to compute the inner-product between a query vector and every point.
Our main approach to compute the centers and scalars is to use an alternating-minimization algorithm to minimize the loss function \eqref{eqn:proj_clustering_loss} for PCPQ and \eqref{eqn:proj-anisotropic-loss} for APCPQ. Alternating-minimization has been a long-standing approach on finding the clustering in the literature for product quantization \cite{gong2012iterative,ge2013optimized,wu2017multiscale,guo2020accelerating}.
\subsection{Projective $k$-clustering}\label{sec:proj_clustering_alg}
The projective $k$-clustering problem is an NP-hard problem for $k \geq 2$ even in the 2D-plane, i.e. $\bar{d} = 2$. Nevertheless, several lines of work have proposed fast approximation algorithms -- including algorithms inspired by Lloyd's algorithm for $k$-means that iteratively find $k$ one-dimensional subspaces \cite{agarwal2004k}, a Monte-Carlo sampling method \cite{procopiuc2002monte} and coreset based algorithms \cite{feldman2020turning,statman2020faster} that sample (and reweight) a small subset of the input points in such a way that the cost of clustering the subset of points is approximately the same as clustering the entire input, allowing the use of any black-box algorithm (that is potentially computationally intractable on large datasets) on the much smaller set of points.
The algorithm we propose is similar to Lloyd's algorithm and iterates between an \emph{assignment step} and a \emph{center finding step} -- in each round the algorithm i) maps the points to the closest center by projecting every point to each of the directions (given by the centers) and picks the one that minimizes the squared distance of the point to its projection and ii) for each center, the algorithm considers all the points that mapped to the center in step i) and computes the top-right singular vector of those points to be the new center. It is easy to see from the Eckart-Young-Mirsky Theorem \cite{eckart1936approximation} that the top-right singular vector in fact minimizes the loss from \eqref{eqn:proj_clustering_loss} when $k = 1$.
Formally, we consider the following iterative procedure:
\begin{enumerate}[1.]
\item{(Initialization)} Sample $k$ random points $c_1, \dots, c_k$ from the data $X$.
\item{(Assignment)} For each point $x_i \in X$: let $j^* = \argmin_{j \in [k]} \|x_i - \frac{\iprod{x_i}{c_j}}{\|c_j\|_2^2}c_j\|_2$, set $\alpha_i = \frac{\iprod{x_i}{c_{j^*}}}{\|c_{j^*}\|_2^2}$ and assign $c_{j^*}$ to be the center of $x_i$.
\item{(Center Finding)} For each $j \in [k]$: let $X_j \subseteq X$ denote the subset of the points that were assigned to center $j$ in Step 2, compute the top-right singular vector $v$ of $X_j$, i.e. $v = \argmin_{\|y\|=1}\|X_j - X_jyy^\top\|_F^2$, and set $c_j = v$.
\item{(Termination)} Repeat Step 2 and 3 until convergence or a maximum number of iterations has been reached.
\end{enumerate}
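The procedure can be sketched in a few lines of NumPy (a hypothetical illustration; the initialization below is hand-picked for the toy data rather than sampled as in Step 1):

```python
import numpy as np

def projective_kclustering(X, C0, iters=20):
    """Lloyd-style sketch of the procedure above: assign each point to the
    line (center) with the smallest projection residual, then reset each
    center to the top right singular vector of its cluster."""
    C = C0 / np.linalg.norm(C0, axis=1, keepdims=True)
    assign = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        # squared residual of projecting x onto unit direction c: ||x||^2 - <x,c>^2
        resid = np.sum(X**2, axis=1)[:, None] - (X @ C.T) ** 2
        assign = np.argmin(resid, axis=1)
        for j in range(len(C)):
            pts = X[assign == j]
            if len(pts):
                C[j] = np.linalg.svd(pts, full_matrices=False)[2][0]
    return C, assign

# Toy data: two noisy 1-D subspaces spanned by e1 and e2.
rng = np.random.default_rng(0)
t = np.linspace(1.0, 2.0, 50)
X = np.concatenate([np.outer(t, [1.0, 0.0]), np.outer(t, [0.0, 1.0])])
X += 0.01 * rng.standard_normal(X.shape)
C, assign = projective_kclustering(X, C0=X[[0, 50]])
loss = np.min(np.sum(X**2, 1)[:, None] - (X @ C.T) ** 2, axis=1).sum()
assert loss < 0.1   # both generating directions recovered up to noise
```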
\paragraph{Initialization.}
In practice, as in our experiments, we can hope to find better solutions (centers) by initializing the centers using an off-the-shelf algorithm for another clustering problem, like $k$-means++. Recently, the work of \citet{statman2020faster} proposed a different initialization that provably produces an $O(\log k)$-approximation for the projective $k$-clustering problem. The algorithm first normalizes all the points in $X$ so they have unit norm and then samples $k$ points with a slight variant of the $k$-means++ initialization algorithm -- specifically, the algorithm maintains a set $S$ and, for $k$ iterations, samples a point $x \in X \backslash S$ to add to $S$ with probability proportional to $\min(\min_{s \in S}\|x - s\|_2^2, \min_{s \in S}\|x + s\|_2^2)$. Empirically, on the datasets we use, we found that initializing the centers with $k$-means++ produces centers with similar loss as this algorithm while being able to take advantage of optimized implementations of $k$-means++ leading to significant gains in indexing time.
\subsection{Anisotropic Projective $k$-clustering}\label{sec:aniso_proj_clustering_alg}
We propose an alternating-minimization framework for anisotropic projective $k$-clustering, similar to the algorithm from Section \ref{sec:proj_clustering_alg}. The difference lies in the assignment stage, where we need to compute the optimal $\alpha_i$ for each data point $x_i \in X$ with respect to the anisotropic projective clustering loss from Definition \ref{def:projective_anisotropic_clustering} and in the center finding stage where, for each subset of points in $X$ that are mapped to the same center, we need to compute the center that minimizes the aforementioned loss.
We showed in Fact \ref{fact:opt_alpha_aniso} that computing the optimal $\alpha_i$ for each $x_i$ can be done when the centers are fixed. It then remains to show how to compute the centers in the center finding stage (the $k=1$ case). \citet{guo2020accelerating} show that when $\alpha_i = 1$, there is a closed form solution to finding the center that minimizes the loss from Definition \ref{def:projective_anisotropic_clustering}. A simple adaptation of their proof gives us a closed form solution to the aforementioned loss when $\alpha_i$ is any arbitrary fixed scalar.
\begin{theorem}[Theorem 4.2, \cite{guo2020accelerating}]\label{thm:opt_center_anisotropic_clustering}
For $n$ points $X \in \mathbb{R}^{n \times \bar{d}}$, we have that
$$c^* \eqdef \brackets{\sum_{i=1}^n\frac{\alpha_i^2(h_{\parallel, i} - h_{\bot, i})}{\|x_i\|_2^2}x_ix_i^\top + I \cdot \alpha_i^2h_{\bot, i}}^{-1}\sum_{i = 1}^n \alpha_i h_{\parallel, i}x_i$$
is the solution to the minimization problem
$\argmin_{c \in \mathbb{R}^{\bar{d}}} \sum_{i = 1}^n h_{\parallel, i} \|r_\parallel(x_i, \alpha_i c)\|_2^2 + h_{\bot, i} \|r_\bot(x_i, \alpha_i c)\|_2^2$
and where $h_{\parallel, i}$ and $h_{\bot, i}$ are abbreviations for $h_\parallel(\|x_i\|_2)$ and $h_\bot(\|x_i\|_2)$ respectively.
\end{theorem}
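The closed form can be verified numerically. The sketch below is hypothetical: it uses constant weights $h_{\parallel, i} = h_\parallel$ and $h_{\bot, i} = h_\bot$ for simplicity (in the paper they depend on $\|x_i\|_2$), and checks that perturbing $c^*$ never decreases the loss, as expected for the minimizer of a convex quadratic:

```python
import numpy as np

def aniso_loss_total(X, alphas, c, h_par, h_bot):
    """Sum of per-point anisotropic losses of approximating x_i by alpha_i * c."""
    L = 0.0
    for x, al in zip(X, alphas):
        ac = al * c
        proj = (x @ ac) / (x @ x) * x
        L += h_par * np.sum((x - proj) ** 2) + h_bot * np.sum((ac - proj) ** 2)
    return L

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 4))
alphas = rng.uniform(0.5, 1.5, size=20)
h_par, h_bot = 2.0, 0.5        # constant weights for simplicity (an assumption)

# Closed-form center from the theorem, specialized to constant weights.
A = sum(al**2 * (h_par - h_bot) / (x @ x) * np.outer(x, x) + al**2 * h_bot * np.eye(4)
        for x, al in zip(X, alphas))
rhs = sum(al * h_par * x for x, al in zip(X, alphas))
c_star = np.linalg.solve(A, rhs)

# Any random perturbation of c_star should not decrease the loss.
base = aniso_loss_total(X, alphas, c_star, h_par, h_bot)
for _ in range(20):
    d = 0.1 * rng.standard_normal(4)
    assert aniso_loss_total(X, alphas, c_star + d, h_par, h_bot) >= base - 1e-8
```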
We propose the following iterative procedure for the anisotropic projective $k$-clustering problem.
\begin{enumerate}[1.]
\item{(Initialization)} Sample $k$ random points $c_1, \dots, c_k$ from the data $X$.
\item{(Assignment)} For each point $x_i \in X$: compute $\beta_{i}[j] \eqdef {h_{\parallel, i} \iprod{x_i}{c_j}} ({\frac{(h_{\parallel, i} - h_{\bot, i})\iprod{x_i}{c_j}^2}{\|x_i\|_2^2} + h_{\bot,i} \cdot \|c_j\|_2^2})^{-1}$ for each $j \in [k]$. Compute $j^* = \argmin_{j \in [k]} \|x_i - \beta_{i}[j]c_j\|_2$, then, set $\alpha_i = \beta_{i}[j^*]$ and $c_{j^{*}}$ to be the center for $x_i$.
\item{(Center Finding)} For each $j \in [k]$: let $X_j \subseteq X$ denote the subset of the points that were assigned to center $j$ in Step 2, and set $c_j$ to be
$$\brackets{\sum_{\substack{i \in [n] \text{ s.t.}\\ x_i \in X_j}}\frac{\alpha_i^2(h_{\parallel, i} - h_{\bot,i})}{\|x_i\|_2^2}x_ix_i^\top + I \cdot \alpha_i^2h_{\bot, i}}^{-1}\sum_{\substack{i \in [n] \text{ s.t.}\\ x_i \in X_j}} \alpha_i h_{\parallel, i} \cdot x_i$$
\item{(Termination)} Repeat Step 2 and 3 until convergence or a maximum number of iterations has been reached.
\end{enumerate}
Notice that unlike in our algorithm for projective $k$-clustering, the center finding stage in the above algorithm does not find the optimal scaling and center for each subset (cluster) of points. Instead we solve the anisotropic $k$-clustering problem on the cluster and then compute the optimal scaling using Fact \ref{fact:opt_alpha_aniso}. Computing the optimal scaling and center jointly, like we did for vanilla projective clustering, is an open problem we leave for future work.
\subsection{Product Quantization and Running Time}\label{sec:pq_alg}
Next, we outline how we use the algorithms from Sections \ref{sec:proj_clustering_alg} and \ref{sec:aniso_proj_clustering_alg} to compute the index for Q-PCPQ and Q-APCPQ respectively and how the inner-product of the query vector with each point is computed during query time.
As discussed in Section \ref{sec:intro}, the columns of the input data $X \in \mathbb{R}^{n \times d}$ are split into $m$ contiguous chunks $X^{(1)}, \dots, X^{(m)} \in \mathbb{R}^{n \times d/m}$ where $X^{(j)}$ contains the coordinates $[j \cdot d/m, \dots, (j+1) \cdot d/m - 1]$. We then compute the centers $C^{(1)}, \dots, C^{(m)} \in \mathbb{R}^{k \times d/m}$ and scalars $\{\alpha^{(j)}_i : j \in [m], i \in [n]\}$ independently for each section $j \in [m]$. The centers and scalars for each section are computed using the method in Section \ref{sec:proj_clustering_alg} for Q-PCPQ and in Section \ref{sec:aniso_proj_clustering_alg} for Q-APCPQ.
We store $m$ lookup tables $\{\phi_{j} : [n] \to [k] \ | \ j \in [m] \}$, where look up table $\phi_j$ maps every point to its center in $C^{(j)}$.
In order to quantize the scalars, we follow the procedure from Section \ref{sec:quant_proj} for Q-PCPQ and Section \ref{sec:quant_aniso} for Q-APCPQ\footnote{Recent implementations of product quantization for Euclidean distance \cite{wu2017multiscale} and that of anisotropic $k$-clustering (ScaNN) \cite{guo2020accelerating} unit normalize all the data and store the norm $\|x_i\|_2$ as a scalar $\alpha_i$ for each $x_i$ separately. The scalars are then quantized to $s$ values, and during query time, the inner-product computations for each point are scaled by its respective quantized norm.}. Specifically, the scalars $\{\alpha^{(j)}_i \ | \ j \in[m], i \in [n]\}$ are themselves quantized to $s$ values $\{\lambda_1, \dots, \lambda_s\}$ using the aforementioned procedure and $m$ mappings $\{\gamma_j : [n] \to [s] \ | \ j \in [m]\}$ are created such that mapping $\gamma_j$ maps the scalars $\{\alpha^{(j)}_i \}_{i \in [n]}$ from section $j$ to their respective quantized scalar among $\{\lambda_1, \dots, \lambda_s\}$.
\paragraph{Inner-product computation.} For a query $q \in \mathbb{R}^d$, let $[q_1, \dots, q_m]$ denote the $m$ sections of the coordinates of $q$ each containing $d/m$ coordinates. The inner-products between the data $X$ and $q$ is computed in the following phases:
\begin{enumerate}[i.]
\item for each $j \in [m]$, the $k$-length vector $\eta_j \eqdef C^{(j)}q_j$ is computed, then,
\item for each $j \in [m]$ and $l \in [s]$, the quantity $\eta_j \cdot \lambda_l$ is computed and stored in a lookup table of size $skm$, and finally,
\item to compute the inner-product with $x_i$, the sum \begin{equation}\label{eqn:final_ip_sum}\sum_{j = 1}^m \eta_{j, \phi_j(i)} \cdot \lambda_{\gamma_j(i)}\end{equation} is computed by performing $m$ lookups, one for each term.
\end{enumerate}
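The three phases above can be sketched as follows (hypothetical toy sizes and random codebooks; the point is that $m$ table lookups per point reproduce exactly the inner-product of the quantized reconstruction with $q$):

```python
import numpy as np

rng = np.random.default_rng(4)
n, d, m, k, s = 8, 12, 3, 4, 2           # tiny illustrative sizes
X = rng.standard_normal((n, d))
q = rng.standard_normal(d)

# Hypothetical per-section centers, assignments and quantized scalars.
C = [rng.standard_normal((k, d // m)) for _ in range(m)]
phi = rng.integers(0, k, size=(m, n))    # phi[j][i]: center of point i in section j
lam = np.array([0.7, 1.3])               # s quantized scalar values
gamma = rng.integers(0, s, size=(m, n))  # gamma[j][i]: scalar index of point i

# Phases i + ii: per-section center/query inner-products, scaled by every lambda.
q_sections = q.reshape(m, d // m)
table = np.stack([np.outer(C[j] @ q_sections[j], lam) for j in range(m)])  # m x k x s

# Phase iii: m lookups and adds per point give the approximate inner-product.
approx = np.array([sum(table[j, phi[j, i], gamma[j, i]] for j in range(m))
                   for i in range(n)])

# Same quantity computed directly from the quantized reconstruction of each point.
recon = np.array([np.concatenate([lam[gamma[j, i]] * C[j][phi[j, i]] for j in range(m)])
                  for i in range(n)])
assert np.allclose(approx, recon @ q)
```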
\paragraph{Running time and space complexity.} Computing the inner-products of the centers in each section with $q$ (i.e. stage i.) takes $kd$ multiply-adds, since each section has $d/m$ coordinates and there are $m$ sections with $k$ centers each. Computing the products with the $s$ scalars $\lambda_1, \dots, \lambda_s$ requires $skm$ multiplication operations, $k$ for each scalar and section. And finally computing the inner-product in step iii. requires $m$ lookups and $m$ additions per point, for a total of $nm$ lookups and additions. In total, the complexity of the search is $kd + skm + 2nm$ operations.
Storing the mappings $\phi_1, \dots, \phi_m$ requires $m\log_2(k)$ bits per point and storing the scalar quantization requires $\log_2(s)$ bits per point, leading to a total of $n(m\log_2(k) + \log_2(s))$ bits to store the index. In addition, $kd + s$ floating-point numbers need to be stored, which is a lower order term since $n \gg kd/m$.
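For concreteness, plugging in the 4-bit Glove-100 configuration used in the experiments ($m=25$, $k=16$, $s=8$) gives:

```python
import math

# Bits per point for the 4-bit Glove-100 configuration:
m, k, s = 25, 16, 8
bits_per_point = m * math.log2(k) + math.log2(s)
assert bits_per_point == 103.0   # 25*4 + 3 bits, i.e. about 13 bytes per point
```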
\section{Experiments}
\begin{figure}[t!]
\centering
\begin{subfigure}{0.6\linewidth}
\centering
\includegraphics[width=\linewidth]{ip_error_Glove100-4-bit.pdf}
\caption{Relative error of each method over 1000 queries, sorted by the magnitude of the error with the query.}
\label{fig:ip_error_glove_plot}
\end{subfigure}%
\begin{subfigure}{0.4\linewidth}
\centering
\begin{small}
\begin{tabular}{|l |l |l|}
\hline
Method (4-bit) & Glove-100 & Last.fm \\
\hline
\hline
$k$-means++ & 0.286 & 0.191 \\
Q-PCPQ & 0.166 & 0.124 \\
PCPQ & 0.164 & 0.114 \\
\hline
ScaNN & 0.229 & 0.178 \\
Q-APCPQ & 0.199 & 0.153 \\
APCPQ & 0.197 & 0.155 \\
\hline
\end{tabular}
\end{small}
\caption{Average relative error of inner-product over 1000 queries.}
\label{fig:ip_error_table}
\end{subfigure}
\caption{Comparison of 4-bit $k$-means++, ScaNN, Q-PCPQ and Q-APCPQ (along with the versions without scalar quantization: PCPQ and APCPQ) on the relative error of approximating the top inner-product over 1000 queries on Glove-100 and Last.fm. }
\label{fig:ip_error_glove}
\vspace{0.5em}
\hrule
\vspace{-0em}
\end{figure}
\begin{figure}[t]
\centering
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=1.\linewidth]{Recall1_at_N_Glove100-4-bit.pdf}
\label{fig:recall_glove_plot_4bit}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=1.\linewidth]{Recall1_at_N_Glove100-8-bit.pdf}
\label{fig:recall_glove_plot_8bit}
\end{subfigure}%
\newline
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=1.\linewidth]{Recall1_at_N_Lastfm-4-bit.pdf}
\label{fig:recall_lastfm_plot_4bit}
\end{subfigure}%
\begin{subfigure}{0.5\linewidth}
\centering
\includegraphics[width=1.\linewidth]{Recall1_at_N_Lastfm-8-bit.pdf}
\label{fig:recall_lastfm_plot_8bit}
\end{subfigure}%
\vspace{-1em}
\caption{Comparison of $k$-means++ and ScaNN with Q-PCPQ and Q-APCPQ (and without projective quantization: PCPQ and APCPQ) respectively on Recall1@N on Glove-100 and Last.fm datasets in the 4-bit and 8-bit settings.}
\label{fig:recall_plots}
\vspace{0.5em}
\hrule
\vspace{-0em}
\end{figure}
In this section we compare $k$-clustering against the projective counterparts we propose, measuring their performance for maximum inner-product search. Specifically, we consider $k$-means++ and anisotropic $k$-clustering (ScaNN) \cite{guo2020accelerating} and compare them to quantized projective $k$-clustering (Q-PCPQ) and quantized anisotropic projective $k$-clustering (Q-APCPQ). In addition, we include comparisons with the versions of projective clustering that do not quantize the scalars, which we denote by PCPQ and APCPQ respectively. In fixed-bit-rate settings we analyze the retrieval performance (i.e. recall) and the quality of approximation of maximum inner-product values to show that Q-PCPQ and Q-APCPQ achieve significantly better performance on both measures in standard benchmark datasets for MIPS.
\paragraph{Datasets.} We use Glove-100 introduced in \citet{pennington2014glove} which is a collection of 1.2 million word embeddings in 100-dimensions. Glove-100 is used in \citet{guo2020accelerating} and is also one of the main benchmarks in \citet{aumuller2017ann} for inner-product search; \citet{aumuller2017ann} is a widely-used tool for benchmarking approximate nearest neighbor algorithms. While Glove-100 is meant for cosine similarity search, we unit-normalize all the points at training time, making MIPS equivalent to cosine similarity. Additionally we use the Last.fm dataset, another benchmark dataset from \citet{aumuller2017ann} for MIPS, containing 300,000 points in 65-dimensions created by embedding soundtracks using a matrix factorization based recommendation model for music.
\paragraph{Multiscale quantization.}
Performing PQ on the entire dataset when the dataset has more than $10^6$ data points can lead to poor performance and can be prohibitive for datasets of size $10^7$ or $10^9$. In order to circumvent this, typically in similarity search systems \cite{johnson2019billion,wu2017multiscale,guo2020accelerating} an initial clustering of the dataset is done, followed by a product quantization of the data points in each cluster.
Usually, like in the aforementioned results, the residuals of the data points in each cluster are used for the PQ instead of the data points themselves. Specifically, for data points $X \in \mathbb{R}^{n \times d}$ a parameter $\bar{k} \in \N$ is chosen and $\bar{k}$ cluster centers $c_1, \dots, c_{\bar{k}} \in \mathbb{R}^d$ are computed. Typically $\bar{k}$ is chosen to be around $O(\sqrt{n})$ in order to balance the accuracy-performance tradeoff\footnote{see \texttt{https://github.com/facebookresearch/faiss/wiki/Guidelines-to-choose-an-index}}. Each point in $X$ is then mapped to its closest cluster center (in $\ell_2$-distance) to obtain a clustering $\{\Cc_1, \dots, \Cc_{\bar{k}}\}$ that partitions $X$. Finally, in order to build the index, the PQ method is applied to each cluster $\Cc_1, \dots, \Cc_{\bar{k}}$ separately.
For queries, a parameter $k_{\text{probe}} \ll \bar{k}$ is picked beforehand. On the arrival of a query $q \in \mathbb{R}^d$, points in the top $k_{\text{probe}}$ clusters are queried based on the ordering of $\iprod{q}{c_1}, \dots, \iprod{q}{c_{\bar{k}}}$ to obtain the point(s) with maximum inner-product with $q$.
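A minimal sketch of this query routing (hypothetical sizes and random data; the coarse clustering here is plain nearest-center assignment to random centers rather than a trained $k$-means):

```python
import numpy as np

rng = np.random.default_rng(5)
n, d, kbar, kprobe = 200, 6, 10, 3       # hypothetical sizes; kbar ~ sqrt(n)
X = rng.standard_normal((n, d))

# Coarse clustering: assign each point to its nearest of kbar centers (l2).
centers = rng.standard_normal((kbar, d))
assign = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)

# At query time, only the kprobe clusters scoring highest against q are scanned.
q = rng.standard_normal(d)
probe = np.argsort(centers @ q)[::-1][:kprobe]
cand = np.flatnonzero(np.isin(assign, probe))
if cand.size == 0:                        # unlikely fallback for this toy data
    cand = np.arange(n)
best_cand = int(cand[np.argmax(X[cand] @ q)])

# The probed answer can only score at most as high as a full scan over X.
best_full = int(np.argmax(X @ q))
assert X[best_cand] @ q <= X[best_full] @ q + 1e-12
```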
\paragraph{Parameters.} For each dataset, we compute the initial clustering of $\bar{k}$ clusters using the \texttt{scikit-learn} implementation of $k$-means++ and set $\bar{k}$ so that $n/\bar{k} \approx 1000$. This provides a fair comparison for the retrieval performance and inner-product approximation of the PQ methods across datasets. For each dataset, the same clustering is used across all compared methods and parameters.
We investigate two regimes for $k$, i.e., the number of centers in each section: $k = 16$ and $k = 256$ which are denoted the \emph{$4$-bit} and \emph{$8$-bit} settings respectively. We set the number of sections $m$ to be $m=25$ for Glove-100 and $m=16$ for Last.fm; this leads to $\bar{d} = 4$ coordinates per section. In the projected quantization setting (Q-PCPQ and Q-APCPQ), we set the scalar quantization to be $s = 8$.
For ScaNN and Q-APCPQ (and APCPQ), we set the threshold in the weight function to be $0.2 \times$ (average $\ell_2$-norm of the input vectors). An in-depth discussion of the rationale behind setting this threshold can be found in \citet{guo2020accelerating}.
\paragraph{Setup.} Our implementations for the methods are in Python v3.x and all our experiments were run on \texttt{n1-standard} Google Cloud Engine VMs with 4vCPUs @2.2GHz and 15GB RAM.
\subsection{Approximating maximum inner-product}
We compare the methods on their accuracy of estimating the top-1 inner-product. Specifically, we measure the relative error ${|\iprod{q}{x_i} - {\sum_{j = 1}^m \eta_{j, \phi_j(i)} \cdot \lambda_{\gamma_j(i)}}|}/{|\iprod{q}{x_i}|}$ for the point $x_i$ with the highest inner-product with $q$ over 1000 such queries. Here the summation $\sum_{j = 1}^m \eta_{j, \phi_j(i)} \cdot \lambda_{\gamma_j(i)}$ is the approximate inner-product for query $q$ from \eqref{eqn:final_ip_sum}. We perform a head-to-head comparison of each method in the 4-bit setting: i) $k$-means++ against Q-PCPQ (and PCPQ), and ii) ScaNN against Q-APCPQ (and APCPQ). The results over the 1000 queries are shown in Figure \ref{fig:ip_error_glove_plot} and the average relative error is given in Table \ref{fig:ip_error_table}.
The results show that Q-PCPQ and Q-APCPQ consistently approximate the top inner-product with the query better than $k$-means++ and ScaNN respectively. Specifically, the relative error is $12\%$ and $3\%$ lower for Q-PCPQ and Q-APCPQ respectively compared to their non-projective counterparts on Glove-100. Additionally, the results show that the loss in the approximation of the inner-product from quantizing the projections is negligible; as seen by the average relative errors between PCPQ and Q-PCPQ (and between APCPQ and Q-APCPQ) in Table \ref{fig:ip_error_table}.
While Q-APCPQ does better on average than ScaNN, the average relative error is worse for Q-APCPQ than for Q-PCPQ. This does not align with the rationale of using the score-aware loss from \eqref{eqn:anisotropic_loss}, since the score-aware loss is meant to approximate top inner-products better than its counterpart in \eqref{eqn:pq_loss} that is not score-aware. We suspect that this might be due to the fact that we do not solve the minimization problem given in \eqref{eqn:proj-anisotropic-loss} for the anisotropic projective $k$-clustering optimally when $k=1$.
\subsection{Recall}
\begin{table}[t]
\centering
\begin{tabular}{|l|ll|ll|ll|ll|}
\hline
& \multicolumn{4}{c|}{Glove-100} & \multicolumn{4}{c|}{Last.fm} \\
\hline
\multirow{2}{*}{Method} & \multicolumn{2}{l|}{Recall 1@1} & \multicolumn{2}{l|}{Recall 1@10} & \multicolumn{2}{l|}{Recall 1@1} & \multicolumn{2}{l|}{Recall 1@10} \\
& 4-bit & 8-bit & 4-bit & 8-bit & 4-bit & 8-bit & 4-bit & 8-bit \\
\hline
\hline
$k$-means++ & 0.448 & 0.772 & 0.825 & 0.945 & 0.584 & 0.992 & 0.984 & 1.000 \\
Q-PCPQ & 0.633 & 0.829 & 0.924 & 0.947 & 0.775 & 0.919 & 1.000 & 1.000 \\
PCPQ & 0.634 & 0.840 & 0.925 & 0.947 & 0.757 & 0.994 & 1.000 & 1.000 \\
\hline
ScaNN & 0.416 & 0.741 & 0.820 & 0.944 & 0.594 & 0.989 & 0.978 & 1.000 \\
Q-APCPQ & 0.529 & 0.766 & 0.889 & 0.945 & 0.667 & 0.986 & 1.000 & 1.000 \\
APCPQ & 0.541 & 0.770 & 0.895 & 0.945 & 0.674 & 0.986 & 1.000 & 1.000 \\
\hline
\end{tabular}
\caption{Recall1@1 and Recall1@10 values for $k$-means++, ScaNN, Q-PCPQ (and PCPQ) and Q-APCPQ (and APCPQ) for Glove-100 and Last.fm (averaged over 1000 queries) in the 4-bit and 8-bit regimes.}
\label{table:recall}
\vspace{0.5em}
\hrule
\vspace{-0em}
\end{table}
One of the most important measures of the performance of a MIPS system is its retrieval performance. Specifically, for a set of queries $Q$ and a parameter $N \in \N^+$, let $\pi_N(q)$ be the top-$N$ points based on the approximate inner-products computed by the method under consideration. Then, the Recall1@$N$ for the set of queries $Q$ is the following quantity:
$$\text{Recall1@$N$} = \frac{1}{|Q|}\sum_{q \in Q}\id\left[\max_{x \in \pi_N(q)} \iprod{x}{q} \geq \max_{x \in X} \iprod{x}{q}\right].$$
We compare the performance of each method in 4-bit and 8-bit settings and measure the Recall1@1 and Recall1@10 for each dataset in Table \ref{table:recall}. Figure \ref{fig:recall_plots} plots the Recall1@N for Glove-100 and Last.fm in the 4-bit and 8-bit setting. Especially in the 4-bit setting, we see that Q-PCPQ and Q-APCPQ achieve significant recall gains over their non-projective counterparts ($k$-means++ and ScaNN); achieving approximately $8\%-10\%$ recall gains in the anisotropic setting for Recall1@1 and close to $19\%$ gain for Q-PCPQ in the same setting.
In the 8-bit regime, the number of centers per section is comparable to the number of data points, i.e., $k=256$ for 1000 points. Hence, even vanilla $k$-means++ clustering fits the data very well, achieving as little as $3.16\%$ reconstruction error on Glove-100. In this setting we have $k \approx n$, precluding the need for product quantization in the first place. This could explain why Q-PCPQ and Q-APCPQ show much more modest gains in the 8-bit regime, and even do worse on Recall1@1 on Last.fm. On much larger datasets, however, where the number of centers $k$ per section is much smaller than the number of points being indexed, we expect Q-PCPQ to do much better than $k$-means++, as is consistently shown in the 4-bit regime, where $k$-means++ achieves $30.09\%$ reconstruction error on Glove-100.
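The reconstruction errors quoted above measure how much of the data is lost to quantization; one common convention, sketched below with our own naming (this is an assumption about the exact normalization, not the paper's evaluation code), is the squared residual relative to the squared norm of the data.

```python
import numpy as np

def relative_reconstruction_error(X, centers, assignment):
    """Squared quantization residual as a fraction of the total squared
    norm of the data: ||X - C[a]||_F^2 / ||X||_F^2."""
    residual = X - centers[assignment]   # per-point quantization residual
    return float((residual ** 2).sum() / (X ** 2).sum())
```

An error of $0$ means every point coincides with its assigned center, which is achievable once $k$ approaches $n$, as in the 8-bit regime discussed above.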
\section{Introduction}
Efficient computation of persistent homology has been a central quest in Topological Data Analysis (TDA) since the early days of the field about 20 years ago. Given a filtration (a nested sequence of simplicial complexes), computation of persistent homology involves reduction of a boundary matrix, whose rows and columns are the simplices of the input filtration. Traditionally, there are two complementary lines of research that have been explored to improve the computation of persistent homology. The first approach led to improvement of the persistence algorithm (the boundary matrix reduction algorithm) and of its analysis, to efficient implementations and optimizations, and to a new generation of software~\cite{Gudhi,phat,ripser,eirene,dionysus,ripser++,dory}. The second and complementary approach is to reduce (or simplify) the input filtration to a smaller filtration through various geometric or topological techniques in an exact or approximate way and then compute the persistent homology of the smaller reduced filtration. This research direction has been intensively explored as well~\cite{Mischaikow, PawelSimpleColl, ChazalOudot,Botnan, SheehyRipsComp, KerberSharath, Aruni, Simba}.
Flag complexes and, in particular, the Vietoris-Rips complexes are an important class of simplicial complexes that are extensively used in TDA. Flag complexes are fully characterized by their graph (or 1-skeleton) and can thus be stored in a very compact way. Therefore, they are of great practical importance and are well studied theoretically. Various efficient codes and reduction techniques have been developed for those complexes~\cite{ripser, SheehyRipsComp,ripser++,dory}. However, further progress has been made only recently with the work of Boissonnat and Pritam~\cite{FlagCompStrongColl,FlagCompEdgeColl}. Both works~\cite{FlagCompStrongColl, FlagCompEdgeColl} put forward preprocessing techniques that reduce an input flag filtration (a nested sequence of flag complexes) to a smaller flag filtration using only the $1$-skeleton. The work in~\cite{FlagCompStrongColl} uses a special type of collapse called strong collapse (removal of special vertices, called dominated vertices), introduced by J. Barmak and E. Minian~\cite{StrongHomotopy}. In~\cite{FlagCompEdgeColl} they extend the notion of strong collapse to edge collapse (removal of special edges, called dominated edges) and use it for further filtration simplification, which improves the performance by several orders of magnitude.
In this paper, we revisit the use of edge collapse for efficient computation of persistent homology. We first give a simple and intuitive explanation of the principles underlying the algorithm proposed in~\cite{FlagCompEdgeColl}. We identify that an algorithm to edge collapse a filtration can be deconstructed into three fundamental operations: 1. \textit{Swap} two edges having the same filtration value, 2. \textit{Shift} a dominated edge forward in the filtration, and 3. \textit{Trim} the very last dominated edge. This new approach allows us to propose various extensions, which we list below.
\begin{itemize}
\item \textbf{Backward:} We propose a backward reduction algorithm, which processes the edges of a flag filtration by decreasing filtration values, unlike the algorithm in~\cite{FlagCompEdgeColl}, which processes edges one by one by increasing filtration values, i.e. in the forward direction. Backward processing results (as shown experimentally) in a faster reduction of the edges, since various operations (domination checks, computing the neighbourhood of an edge, etc.) are performed fewer times than in the forward algorithm of~\cite{FlagCompEdgeColl}. However, the forward algorithm of~\cite{FlagCompEdgeColl} has the advantage when the input filtration arrives in a streaming fashion. Once we identify that swapping, shifting and trimming are the most basic operations of the reduction algorithm in~\cite{FlagCompEdgeColl}, it becomes clear that there are possibly several different ways to reduce an input flag filtration using edge collapse. The forward algorithm of~\cite{FlagCompEdgeColl} and the backward algorithm proposed in this article are two natural variants among the many one can think of.
\item \textbf{Parallel:} We propose a divide-and-conquer heuristic to further improve and semi-parallelize our backward reduction algorithm. Our approach is to subdivide the input filtration into two smaller sub-sequences (consisting of consecutive edges), process these sub-sequences in parallel, and then merge their solutions to form the solution of the complete sequence. The two sub-sequences can be further subdivided and processed recursively in parallel.
\item \textbf{Approximate:} With this simplified perspective, a simple tweak to the backward algorithm yields an approximate version of the reduction algorithm. There are two goals behind an approximate version: first, to speed up the algorithm, and second, to obtain a smaller reduced sequence. We perform experiments to show how the approximate version performs with respect to these two goals.
\item \textbf{Zigzag:} Next, we provide a reduction algorithm for a zigzag flag filtration, which is a sequence of flag complexes linked through inclusion maps that may point in both the forward and the backward direction. The theoretical results in~\cite{FlagCompEdgeColl} easily extend to zigzag filtrations. We show that, with the new point of view, there is a simple algorithm for zigzag flag filtrations that incorporates parallelism as well.
\end{itemize}
We note that we do not assume that all the vertices appear at the beginning of the filtration; that is, the filtration values of vertices can be arbitrary as well.
On the theory side, we show that the edge collapse of a flag filtration can be computed in time $O(n_e\, k^3)$, where $n_e$ is the number of input edges and $k$ is the maximal degree of a vertex in the input graph. The algorithm has been implemented and the code is available in the Gudhi library~\cite{Gudhi}.
An outline of this paper is as follows. Section~\ref{sec:prel} recalls some basic ideas and constructions related to simplicial complexes, persistent homology and collapses. We present the new simplified perspective and associated lemmas in Section~\ref{sec:shift_swap_trim}. In Section~\ref{sec:persistence_simplification}, we explain the new backward algorithm for flag filtration simplification. In Section~\ref{sec:parallel} and Section~\ref{sec:approximate}, we discuss the approach to parallel simplification and approximate computation respectively using edge collapse. The simplification algorithm for zigzag flag filtration is discussed in Section~\ref{sec:zigzag_filt}. Experiments are discussed in Section~\ref{sec:experiments}.
\section{Background}
\label{sec:prel}
In this Section, we briefly recall basic notions such as simplicial complexes, flag complexes, persistent homology and edge collapse. For more details on these topics, please refer to~\cite{HerbertHarerbook, Hatcher, Munkres}.
\subparagraph{Simplicial complex and simplicial map.}
An \textbf{abstract simplicial complex} $\textit{K}$ is a collection of subsets of a non-empty finite set $\textit{X},$ such that for every subset $\textit{A}$ in $\textit{K}$, all the subsets of $\textit{A}$ are in $\textit{K}$. We call an \textit{abstract simplicial complex} simply a \textit{simplicial complex} or just a \textit{complex}. An element of $\textit{K}$ is called a \textbf{simplex}. An element of cardinality $k+1$ is called a $k$-simplex and
$k$ is called its \textbf{dimension}.
Given a simplicial complex $K$, we denote its geometric realization as $|K|$.
A simplex is called \textbf{maximal} if it is not a proper subset of any other simplex in $\textit{K}$. A sub-collection $\textit{L}$ of $\textit{K}$ is called a \textbf{subcomplex} if it is a simplicial complex itself.
An inclusion $\psi : K \xhookrightarrow{\sigma} K \cup \sigma$ of a single simplex $\sigma$ is called \textbf{elementary}, otherwise, it's called \textbf{non-elementary}.
An inclusion $\psi : K \hookrightarrow L$ between two complexes $K$ and $L$ induces a continuous map $|\psi| : |K| \rightarrow |L|$ between the underlying geometric realizations.
\subparagraph{Flag complex and neighborhood. } A complex $K$ is a \textbf{flag} or a \textbf{clique} complex if, when a subset of its vertices forms a clique (i.e.\ any pair of vertices is joined by an edge), they span a simplex. It follows that the full structure of $K$ is determined by its 1-skeleton (or graph), which we denote by $G$. For a vertex $v$ in $G$, the \textbf{open neighborhood} $N_G(v)$ of $v$ in $G$ is defined as $N_G(v) := \{u \in G \: | \; [uv] \in E\}$, where $E$ is the set of edges of $G$. The \textbf{closed neighborhood} $N_G[v]$ is $N_G[v] := N_G(v) \cup \{ v\}$.
Similarly we define the closed and open neighborhood of an edge $[xy] \in E$, $N_G[xy]$ and $N_G(xy)$ as $N_G[xy] := N_G[x] \cap N_G[y]$ and $N_G(xy) := N_G(x) \cap N_G(y)$, respectively.
\subparagraph{Persistent homology.}
A \textbf{sequence} of simplicial complexes $\mathcal{F}$ : $\{K_1 \hookrightarrow K_2 \hookrightarrow \cdots \hookrightarrow K_m \}$ connected through inclusion maps is called a \textbf{filtration}.
A filtration is a \textbf{flag filtration} if all the simplicial complexes $K_i$ are flag complexes.
If we compute the homology groups of all the $K_i$, we get the sequence $\mathcal{P}(\mathcal{F})$ : $\{H_p(K_1) \xhookrightarrow{*} H_p(K_2) \xhookrightarrow{*} \cdots \xhookrightarrow{*} H_p(K_m)\}$. Here $H_p()$ denotes the homology group of dimension $p$ with coefficients from a field $\mathbb{F}$ and $\xhookrightarrow{*}$ is the homomorphism induced by the inclusion map. $\mathcal{P}(\mathcal{F})$ is a sequence of vector spaces connected through the homomorphisms and it is called a \textbf{persistence module}. More formally, a \textit{persistence module} $\mathbb{V}$ is a sequence of vector spaces $\{V_1 \xrightarrow{} V_2 \xrightarrow{} V_3 \xrightarrow{} \cdots \xrightarrow{} V_m\}$ connected with homomorphisms $\{\xrightarrow{}\}$ between them. A persistence module arising from a sequence of simplicial complexes captures the evolution of the topology of the sequence.
Any persistence module can be \textit{decomposed} into a collection of intervals of the form $[i,j)$~\cite{structure-pers}.
The multiset of all the intervals $[i, j)$ in this decomposition is called the \textbf{persistence diagram} of the persistence module. An interval of the form $[i,j)$ in the persistence diagram of $\mathcal{P}(\mathcal{F})$ corresponds to a homological feature (a `cycle') which appeared at $i$ and disappeared at $j$. The persistence diagram (PD) completely characterizes the persistence module, that is, there is a bijective correspondence between the PD and the equivalence class of the persistence module \cite{HerbertHarerbook, CarlssonZomorodian}.
Two different persistence modules $\mathbb{V} : \{V_1 \xrightarrow{} V_2 \xrightarrow{} \cdots \xrightarrow{} V_m\}$ and $\mathbb{W} : \{W_1 \xrightarrow{} W_2 \xrightarrow{} \cdots \xrightarrow{} W_m\}$, connected through a set of homomorphisms $\phi_i: V_i \rightarrow W_i$ are \textbf{equivalent} if the $\phi_i$ are isomorphisms and the following diagram commutes ~\cite{HerbertHarerbook, quivers}. Equivalent persistence modules have the same interval decomposition, hence the same diagram.
\begin{center}
\begin{tikzcd}
V_1 \arrow{r}{} \arrow{d}{\phi_1} & V_2 \arrow{r}{} \arrow{d}{\phi_2} &
\cdots & \arrow{r} & V_{m-1} \arrow{r}{} \arrow{d}{\phi_{m-1}} & V_m \arrow{d}{\phi_m} \\
W_1 \arrow{r}{} & W_2 \arrow{r}{} & \cdots & \arrow{r} & W_{m-1} \arrow{r}{} & W_m
\end{tikzcd}
\end{center}
\subparagraph{Edge collapse of a flag complex:}
In a flag complex $K$, we say that an edge $e =[ab]$, connecting vertices $a$ and $b$, is \textbf{dominated} by a vertex $v$ (different from $a$ and $b$) if $N_G[e]\subseteq N_G[v]$.
Removing $e$ and all its cofaces from $K$ defines a smaller flag complex $K^{\prime}$. It has been proven in~\cite{FlagCompEdgeColl} that when $e$ is dominated by a vertex of $K$, the inclusion $K^{\prime} \subset K$ induces an isomorphism between the homology groups of $K^{\prime}$ and $K$. This removal is called an \textbf{edge collapse}.
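To make the definition concrete, the domination test $N_G[e]\subseteq N_G[v]$ can be sketched as follows (our own helper names; the graph is stored as a map from each vertex to its set of neighbors):

```python
def closed_nbhd(G, v):
    """Closed neighborhood N_G[v] of a vertex: v together with its neighbors."""
    return G[v] | {v}

def is_dominated(G, a, b):
    """True if the edge e = [ab] is dominated, i.e. N_G[e] is contained in
    N_G[v] for some vertex v distinct from a and b.  Any such v is
    necessarily a common neighbor of a and b, so those are the only
    candidates we need to test."""
    n_edge = closed_nbhd(G, a) & closed_nbhd(G, b)   # N_G[e]
    return any(n_edge <= closed_nbhd(G, v) for v in n_edge - {a, b})
```

On a triangle, every edge is dominated by the opposite vertex; on a 4-cycle, no edge is dominated, since an edge of the cycle has no common neighbor that could act as a dominator.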
\section{Swapping, shifting and trimming} \label{sec:shift_swap_trim}
In this Section, we show three simple and fundamental operations that preserve the persistence diagram of a flag filtration:
1. Swapping any two edges with the same filtration value, 2. Shifting a dominated edge, and 3. Trimming a dominated edge at the end of the filtration. These operations can be combined to simplify a flag filtration.
Before we proceed, we fix some notation. Let $\{t_1, t_2, \cdots, t_n\}$ be a finite index set where $t_i \in \mathbb{R}$ and $t_i < t_j$ for $i < j$. For convenience, we may consider $t_{n+1} = \infty$. With each $t_i$ (called the \textit{filtration value} or \textit{grade}) we associate a graph $G_{t_i}$
such that $G_{t_i} \hookrightarrow G_{t_{i+1}}$ is an \textit{inclusion}, (not necessarily elementary) of edges. The flag complex of $G_{t_i}$ is denoted as $\overline{G}_{t_i}$ and we consider the associated flag filtration $\mathcal{F} : \overline{G}_{t_1} \hookrightarrow \overline{G}_{t_2} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_n}$. The edges in the set $E := \{e_{1}, e_{2}, \cdots e_{m} \}$ ($m \geq n$) are thus indexed with an order compatible with the filtration values.
\subparagraph{Swapping:} Inserting several edges at the same filtration value can be done in any order. We state this basic observation as the following lemma.
\begin{lemma} [Swapping Lemma] \label{lemma:swap}
Given a flag filtration $\{\overline{G}_{t_1} \hookrightarrow \overline{G}_{t_2} \cdots \hookrightarrow \overline{G}_{t_n}\}$ such that ${G_{t_{i}}} \hookrightarrow {G_{t_{i+1}}}$ is a non-elementary inclusion, the indices of the edges in ${G_{t_{i+1}}} \setminus {G_{t_{i}}}$ can be assigned interchangeably. That is, swapping their order of insertion preserves the persistence diagram.
\end{lemma}
\subparagraph{Shifting:} In a filtration, insertion of a dominated edge does not bring immediate topological change. Therefore, its insertion can be shifted until the next grade and possibly even further.
\begin{lemma} [Shifting Lemma] \label{lemma:shift}
Let $e$ be a dominated edge in $G_{t_i}$ inserted at grade $t_i$. Then, the insertion of $e$ can be shifted by one grade to $t_{i+1}$ without changing the persistence diagram. In other words, the persistence diagrams of the original flag filtration $\mathcal{F} := \{\overline{G}_{t_1} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_i} \hookrightarrow \overline{G}_{t_{i+1}} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_n}\}$ and the shifted filtration $\{\overline{G}_{t_1} \hookrightarrow \cdots \hookrightarrow \red{\overline{{G}_{t_{i}} \setminus e}} \hookrightarrow \overline{G}_{t_{i+1}} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_n}\}$ are equivalent.
\end{lemma}
\begin{proof}
The proof follows from the commutativity of the following diagram, where all maps are induced by inclusions, and the fact that all vertical maps are isomorphisms.
\begin{center}
\begin{tikzcd}
{H_p(\overline{G}_{t_{i-1}})} \arrow[r, hook] \arrow[d, equal] & {H_p(\overline{G}_{t_i})} \arrow[r, hook] \arrow[d, "{|r_i|*}", shift left=1.5ex] & {H_p(\overline{G}_{t_{i+1}})} \arrow[d, equal] \\
{H_p(\overline{G}_{t_{i-1}})} \arrow[r, hook] & {H_p(\overline{{G}_{t_{i}} \setminus e})} \arrow[r, hook] \arrow[u, hook]
& {H_p(\overline{G}_{t_{i+1}})}
\end{tikzcd}
\end{center}
This implies that the persistence diagrams of the sequences $\{\overline{G}_{t_1} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_i} \hookrightarrow \overline{G}_{t_{i+1}} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_n}\}$ and $\{\overline{G}_{t_1} \hookrightarrow \cdots \hookrightarrow \overline{{G}_{t_{i}} \setminus e} \hookrightarrow \overline{G}_{t_{i+1}} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_n}\}$ are equivalent, see~\cite[Theorem 4] {FlagCompEdgeColl} for more details. Here, $|r_i|*$ is the isomorphism between the homology groups induced by the retraction map (on the geometric realizations of the complexes) associated to the edge collapse.
\end{proof}
After an edge is shifted to grade $t_{i+1}$, it can leapfrog the edges inserted at grade $t_{i+1}$ using the swapping lemma (Lemma~\ref{lemma:swap}) and can explore the possibility of being shifted to the next grade.
\subparagraph{Trimming:} If the very last edge in the filtration is dominated, then we can omit its inclusion. This is a special case of the shifting operation (Lemma~\ref{lemma:shift}), assuming that there is a graph $G_{\infty}$ at infinity.
\begin{lemma} [Trimming Lemma]\label{lemma:trimming}
Let $e\notin G_{t_{n-1}}$ be a dominated edge in the graph $G_{t_n}$. Then, the persistence diagrams of the original sequence $\mathcal{F} := \{\overline{G}_{t_1} \hookrightarrow \overline{G}_{t_2} \hookrightarrow \cdots \hookrightarrow \overline{G}_{t_n}\}$ and the trimmed sequence $\{\overline{G}_{t_1} \hookrightarrow \overline{G}_{t_2} \hookrightarrow \cdots \hookrightarrow \overline{{G}_{t_{n}} \setminus e}\}$ are equivalent.
\end{lemma}
Note that when shifting or trimming produces a sequence with identical consecutive graphs $G_{t_i}=G_{t_{i+1}}$, we can just drop index $t_{i+1}$.
\begin{lemma}[Adjacency] \label{lemma:nbd_domination}
Let $e$ be an edge in a graph $G$ and let $e^\prime$ be a new edge with $G^\prime := G \cup e^\prime$.
If $N_{G}(e)=N_{G'}(e)$ and $e$ is dominated in $G$, then $e$ is also dominated in $G'$.
\end{lemma}
This is in particular the case if $e$ and $e'$
are not boundary edges of a common triangle in $\overline{G^{\prime}}$. The above lemma is not strictly necessary, but it is useful to speed up algorithms.
In the next Section, we show that one can cook up an algorithm to edge-collapse a flag filtration using these simple ingredients.
\section{Persistence simplification}\label{sec:persistence_simplification}
In this Section, we describe our new approach to using edge collapse to speed up the persistence computation. As mentioned before, the simplification process will be seen as a combination of the basic operations described in Section~\ref{sec:shift_swap_trim}. This new perspective simplifies the design process and the correctness proof of the algorithm. Along with this, we achieve a significant improvement in run-time efficiency, as shown in Section~\ref{sec:experiments}. We first briefly look at the forward algorithm of~\cite{FlagCompEdgeColl} with this new point of view and then present the new approach, called the \textit{backward algorithm} [Algorithm~\ref{alg:core_flag_filtration}]. Both algorithms take as input a flag filtration $\mathcal{F}$ represented as a sorted array $E$ of edges (pairs of vertices) with their filtration values, and output a similar array $E^c$, sorted in the case of the forward algorithm and unsorted for the backward algorithm, that represents a reduced filtration $\mathcal{F}^c$ with the same persistence diagram as $\mathcal{F}$.
\subparagraph{Forward algorithm.}
In the forward algorithm (the original one from~\cite{FlagCompEdgeColl}), the edges are processed in the order of the filtration in a streaming fashion. If a new edge is dominated, we skip its insertion and consider the next edge. If the next edge is dominated too, its insertion is skipped as well. Intuitively, the sequence of such dominated edges forms a train of dominated edges that we are moving to the right. When a new edge $e$ is non-dominated (called \emph{critical}), we output it, and also check which part of the train of dominated edges is allowed to continue to the right (shifted forward) and which part has to stop right there. For all the previously dominated edges (actually only those that are \textit{adjacent} to $e$), we check if they are still dominated after adding the edge $e$. If an edge $e^{\prime}$ becomes critical, we output it with the same filtration value as $e$, and the following edges now have to cross both $e$ and $e^{\prime}$ to remain in the train. We stop after processing the last edge, and the edges that are still part of the dominated train are discarded (trimmed).
\subparagraph{Backward algorithm.}
The backward algorithm (\cref{alg:core_flag_filtration}) considers edges in order of decreasing filtration value. Each edge $e$ is considered once, delayed (shifted) as much as possible, then never touched again. We always implicitly swap edges so that while $e$ is the edge considered, it is the last one inserted at its current filtration value, and compute its domination status there. If the edge is dominated, we shift it to the next filtration value, and iterate, swapping and checking for domination again at this new filtration value. If there is no next filtration value, we remove the edge (trimming). Once the edge is not dominated, we update its filtration value and output it. As an optimization, instead of moving the edge one grade at a time, we may jump directly to the filtration value of the next adjacent edge, since we know that moving across the other edges will preserve the domination (\autoref{lemma:nbd_domination}).
The main data structure used here is a neighborhood map $N$. For each vertex $u$, it provides a map $N[u]$ from the adjacent vertices $v_i$ to the filtration value $N[u][v_i]$ of the edge $uv_i$. The two main uses of this map are computing the neighborhood of an edge $uv$ at a time $t$ (i.e. in the graph $G_t$) as $N_t[uv]=N_t[u]\cap N_t[v]$ (filtering out the edges of filtration value larger than $t$), and checking if such an edge neighborhood is included in the neighborhood of a vertex $w$ at time $t$. While computing $N_t[uv]$, we also get as a by-product the list of the future neighbors $F_t[uv]$, i.e.\ the elements of $N_\infty[uv]\setminus N_t[uv]$, which we sort by filtration value. These operations can be done efficiently by keeping the maps sorted, or by using hash tables. The information in $N$ is symmetric: any operation we mention on $N[u][v]$ (removal or updating $t$) will also implicitly be done on $N[v][u]$.
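One possible realization of this bookkeeping, using plain dictionaries as hash tables (a sketch with our own names, not the Gudhi implementation):

```python
def edge_nbhd_split(N, u, v, t):
    """Split the common neighbors of u and v into N_t(uv), those already
    present at time t, and F_t(uv), the future ones sorted by the time at
    which they become common neighbors of the edge uv.
    N[u] maps each neighbor w of u to the filtration value of edge uw."""
    current, future = set(), []
    for w in N[u].keys() & N[v].keys():   # common neighbors, endpoints excluded
        tw = max(N[u][w], N[v][w])        # w is a common neighbor from tw on
        if tw <= t:
            current.add(w)
        else:
            future.append((tw, w))
    future.sort()                         # sorted by filtration value
    return current, future
```

Note that the endpoints $u$ and $v$ are excluded automatically, since a vertex is never a key of its own neighbor map.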
In this Section, we denote $t(e)$ the filtration value of $e\in E$, which is stored as $N[u][v]$ if $e=uv$. Note that even though $E$ is sorted, since several edges may have the same filtration value, $G_{t(e)}$ may contain some edges that appear after $e$.
We now explain the precise computation of the reduced sequence of edges $E^c$. See [Algorithm~\ref{alg:core_flag_filtration}] for the pseudo-code. The main \lstinline{for} loop on line 4 (called the backward loop) iterates over the edges in the sequence $E$ by decreasing filtration values, i.e. in the \textit{backward direction}, and checks whether or not the current edge $e$ is dominated in the graph $G_{t(e)}$. If \textit{not}, we insert $e$ in $E^c$ and
keep its original filtration value ${t(e)}$.
Else, $e$ is dominated in $G_{t(e)}$, and we increase $t(e)$ to the smallest value $t'>t(e)$ where $N_{t(e)}[e]\subsetneq N_{t'}[e]$. We can then iterate (\lstinline{goto} on line 12), check if the edge is still dominated at its new filtration value $t'$, etc. When the edge stops being dominated, we insert it in $E^c$ with its new $t(e)$ and update $t(e)$ in the neighborhood map $N$.
If the smallest value $t'>t(e)$ does not actually exist, we remove the edge from the neighborhood map and do \emph{not} insert it in $E^c$.
\begin{algorithm}[h]
\caption{Core flag filtration backward algorithm}
\label{alg:core_flag_filtration}
\begin{algorithmic}[1]
\Procedure{Core-Flag-Filtration}{$E$}
\State {\bf input :} set of edges $E$ sorted by filtration value
\State $E^{c} \gets \emptyset$
\For{ $e \in E$} \Comment {In non-increasing order of $t(e)$}
\State Compute $N_{{t(e)}}(e)$ and $F_{t(e)}(e)$
\For{$w\in N_{{t(e)}}(e)$}\label{line:loop1}
\State Test if $w$ dominates $e$ at $t(e)$
\EndFor
\If{ $e$ is dominated in $G_{t(e)}$}
\If{$F_{t(e)}(e)$ is empty}
\State Remove $N[u][v]$ \Comment{Trimming.}
\State \Goto{line:end-edge-loop} (next edge)
\Else \Comment{Shift and Swap.}
\State $t' \gets$ filtration of the first element of $F_{t(e)}(e)$
\State Move from $F_{t(e)}(e)$ to $N_{{t(e)}}(e)$ the vertices that become neighbors of $e$ at $t'$
\State $N[u][v]=t(e) \gets t'$
\State \Goto{line:loop1}
\EndIf
\Else
\State Insert $\{e,{t(e)}\}$ in $E^{c}$
\State \Goto{line:end-edge-loop} (next edge)
\EndIf
\EndFor\label{line:end-edge-loop}
\State \textbf{return} $E^{c}$ \Comment{$E^{c}$ is the 1-skeleton of the core flag filtration.}
\EndProcedure
\end{algorithmic}
\end{algorithm}
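For concreteness, the pseudo-code above can be rendered as the following compact executable sketch (our own simplified version: it recomputes edge neighborhoods instead of maintaining $F_{t(e)}(e)$ incrementally, trading the stated complexity for brevity):

```python
def backward_collapse(edges):
    """Backward edge collapse of a flag filtration.
    edges: list of ((u, v), t) sorted by filtration value t.
    Returns the reduced edge list E^c (unsorted); surviving edges may
    carry increased filtration values."""
    # Neighborhood map: N[u][w] = filtration value of the edge uw.
    N = {}
    for (u, v), t in edges:
        N.setdefault(u, {})[v] = t
        N.setdefault(v, {})[u] = t

    def nbrs(u, t):                        # open neighborhood N_t(u)
        return {w for w, tw in N[u].items() if tw <= t}

    def dominated(u, v, t):                # is [uv] dominated in G_t?
        ne = nbrs(u, t) & nbrs(v, t)       # open edge neighborhood N_t(uv)
        return any(ne - {w} <= nbrs(w, t) for w in ne)

    Ec = []
    for (u, v), t in reversed(edges):      # non-increasing filtration order
        while True:
            if not dominated(u, v, t):
                N[u][v] = N[v][u] = t      # keep the edge at its (new) value
                Ec.append(((u, v), t))
                break
            future = sorted(max(N[u][w], N[v][w])
                            for w in N[u].keys() & N[v].keys()
                            if max(N[u][w], N[v][w]) > t)
            if not future:                 # trimming: drop the edge
                del N[u][v], N[v][u]
                break
            t = future[0]                  # shift (and swap) to the next grade
    return Ec
```

On a triangle filtration with edges at values $1$, $2$ and $3$, the last edge closes a clique and is dominated with no future neighbors, so it is trimmed and only the first two edges survive.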
\begin{theorem}[Correctness] \label{equivalence_thm}
Let $\mathcal{F}$ be a flag filtration, and $\mathcal{F}^c$ the reduced filtration produced by \cref{alg:core_flag_filtration}.
$\mathcal{F}$ and $\mathcal{F}^c$ have the same persistence diagram.
\end{theorem}
\begin{proof}
The proof is based on the observation that the algorithm inductively performs the elementary operations from \cref{sec:shift_swap_trim}: either it trims the very last edge of the current sequence (Line~11) or shifts and swaps a dominated edge forward to get a new sequence. Then the result follows using \cref{lemma:shift,lemma:swap,lemma:trimming} inductively. The only subtlety is around Line~15, where instead of simply performing one shift to the next filtration value, we perform a whole sequence of operations. We first shift $e$ to the next filtration value $t'$ (and implicitly swap $e$ with the other edges of filtration value $t'$). As long as we have not reached the first element of $F_{t(e)}(e)$, we know that shifting has not changed the neighborhood of $e$ and thus by \cref{lemma:nbd_domination} the fact that $e$ is dominated. We can then safely keep shifting (and swapping) until we reach that first element of $F_{t(e)}(e)$.
\end{proof}
\subparagraph{Complexity.} We write $n_e$ for the total number of edges and $k$ for the maximum degree of a vertex in $G_{t_n}$.
The main loop of the procedure, Line~4 of \cref{alg:core_flag_filtration}, is executed $n_e$ times. Nested within it, we loop (in the form of \lstinline{go to 6}) on the elements of $F_{t(e)}(e)$, of which there are at most $k$. For each of those elements, on Line~6, we iterate on $N_{t(e)}(e)$, which has size at most $k$. Finally, testing if a specific vertex dominates a specific edge amounts to checking if one set is included in another, which takes linear time in $k$ for sorted sets or hash tables. The other operations are comparatively of negligible cost. Sorting $F_{t(e)}(e)$ on Line~5 takes time $k\log k=o(k^2)$. Line~15 may take time $k\log k$ depending on the data structure, $O(k^2)$ in any case. This yields a complexity of $O(n_ek^3)$.
Note that this analysis is pessimistic. If we delay an edge a lot (costly) but end up removing it, future operations become cheaper. Non-dominated edges only cost $O(k^2)$. The edges that have many neighbors usually appear towards the end, and thus have few extra adjacent edges left to cross. After shifting (\lstinline{go to 6}), we can cheaply check if the previous dominator is still valid and in many cases skip the costly ($O(k^2)$) full domination check.
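To make the domination test concrete, here is a minimal Python sketch (our own illustration, not the implementation benchmarked later; the adjacency-set representation and function names are ours). A vertex $v$ dominates an edge $e=\{u,w\}$ exactly when the closed neighborhood of $e$ is contained in that of $v$, and the inclusion test is linear in the maximum degree $k$, as counted above.

```python
# Sketch of the domination test. `adj` maps each vertex to its neighbor set.
# N[v] denotes the closed neighborhood {v} ∪ adj[v], and the closed
# neighborhood of an edge e = {u, w} is N[e] = N[u] ∩ N[w].

def closed_nbhd(adj, v):
    """Closed neighborhood N[v] = {v} ∪ adj[v]."""
    return adj[v] | {v}

def edge_nbhd(adj, u, w):
    """Closed neighborhood N[e] of the edge e = {u, w}."""
    return closed_nbhd(adj, u) & closed_nbhd(adj, w)

def find_dominator(adj, u, w):
    """Return a vertex dominating the edge {u, w}, or None if non-dominated."""
    ne = edge_nbhd(adj, u, w)
    for v in ne - {u, w}:            # any dominator must itself lie in N[e]
        if ne <= closed_nbhd(adj, v):  # set inclusion: linear in the degree
            return v
    return None

# In a triangle the third vertex dominates each edge; in a 4-cycle no edge
# is dominated (its two endpoints have no common neighbor).
triangle = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
square = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(find_dominator(triangle, "a", "b"))  # -> c
print(find_dominator(square, "a", "b"))    # -> None
```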
\subparagraph{Optimality.}
The sequence produced by the backward (or forward) algorithm may still contain dominated edges, and \cref{sec:exp-complete} shows that it can take several runs of the algorithm before we end up with a fully reduced sequence. While each edge in the output was non-dominated at some point in the algorithm, other edges may later swap with this one and make it dominated again. It would be possible to enhance the algorithm so it performs some specific action when swapping an edge with a neighboring edge, the simplest being to mark it so we know this edge is worth checking in the next run, but one run of the backward algorithm already brings most of the benefit, so we did not concentrate our effort on a full reduction.
Swapping, shifting and trimming may produce many different reduced sequences. There is no reason to believe that our greedy approach yields the smallest possible sequence; finding that sequence looks like a hard problem. However, we are mostly interested in finding a small enough sequence, so this is not an issue.
\section{Parallelisation}\label{sec:parallel}
Delaying the insertion of an edge until the next grade, and possibly swapping it, is a very local operation. As such, there is no problem doing several of them in parallel as long as they are in disjoint intervals of filtration values.
We exploit this observation and further optimize our algorithm by parallelizing a significant part of the computation using a divide and conquer approach.
To describe the parallel approach, let us use the same notations $t_i$, $G_{t_i}$, $\mathcal{F}$, $G_{\mathcal{F}}$ and $E$ as in Section~\ref{sec:shift_swap_trim}. To make things simpler, we assume that all edges have distinct filtration values. We subdivide the given input edge set $E := \{e_{1}, e_{2}, \cdots e_{n} \}$ of size $n$ into two smaller halves: the left half $E_l := \{e_{1}, e_{2}, \cdots e_{{n/2}} \}$ and the right half $E_r := \{e_{{n/2 + 1}}, e_{{n/2 +2}}, \cdots e_{{n}}\}$ of roughly the same size. We will describe a version of the algorithm based on the backward algorithm, but the same could be done with the forward algorithm, or they could even be mixed.
We first apply the backward algorithm to $E_l$ normally (\emph{left call}), which produces a reduced $E^c_l$. We also remember the list of all edges that were removed in this procedure: $E^r_l:=E_l\setminus E^c_l$. Independently (in parallel), we apply the backward algorithm to $E$ (\emph{right call}), but stop after processing all the edges of $E_r$ on Line~4 of \cref{alg:core_flag_filtration}. In a final sequential merging step, we resume the right call, processing only the edges of $E^r_l$, as if they all had the same initial filtration value $t_{n/2+1}$. The subdivision can obviously be applied recursively to increase the parallelism.
\begin{lemma}
The parallel algorithm produces exactly the same output as the sequential algorithm, and is thus correct.
\end{lemma}
\begin{proof}
The right call and the sequential algorithm start by handling the edges of $E_r$ in exactly the same way. When we reach the edges of $E_l$, for each edge $e$, there are two cases. Either the sequential algorithm shifts $e$ no further than $t_{n/2}$, in which case the left call does the same. Or the sequential algorithm shifts $e$ further (possibly all the way to removing it), then shifting to $t_{n/2+1}$ is handled by the left call, while the rest of the shift happens in the merging step.
\end{proof}
\section{Approximation} \label{sec:approximate}
Another interesting extension is an approximate version that gives a diagram within bottleneck distance $\epsilon$ of the true diagram (or some other similar criterion). Since the Rips filtration is often used as an approximation of the \v Cech filtration, an additional error is often acceptable.
If an edge is non-dominated for a short range of filtration values and becomes dominated again afterwards, it is tempting to skip the non-dominated region and keep delaying this edge. However, if we are not careful, the errors caused by these skips may add up and result in a diagram that is far from the original. The simplest idea would be to round all filtration values to the nearest multiple of $\epsilon$ before running the backward algorithm (similarly to~\cite{FlagCompStrongColl}). However, we can do a little better.
We describe here one safe approximation algorithm, based on the backward algorithm.
When considering a new edge $e$, instead of checking if it is dominated at its original position $t(e)$, we start checking $\epsilon$ later, at filtration value $t(e)+\epsilon$. If it is dominated, we resume normal processing from there.
However, if the edge is not dominated $\epsilon$ after its original insertion time, we keep it at its original position, so we don't end up uselessly shifting the whole sequence.
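The initial $\epsilon$-delay rule above can be sketched as follows (a hypothetical Python fragment of ours; \lstinline{dominated_at} stands in for the backward algorithm's domination test and is not part of the actual implementation):

```python
def initial_grade(t, eps, dominated_at):
    """Starting grade for an edge originally at filtration value t.

    dominated_at(s) is an assumed oracle telling whether the edge is
    dominated at filtration value s. If the edge is dominated at t + eps,
    normal processing resumes from there (a shift of at most eps);
    otherwise the edge keeps its original grade, so the rest of the
    sequence is not shifted uselessly.
    """
    return t + eps if dominated_at(t + eps) else t

# Toy oracle: the edge is dominated at every grade after 1.5.
dom = lambda s: s > 1.5
print(initial_grade(1.0, 1.0, dom))    # -> 2.0 (delayed, then processed)
print(initial_grade(0.25, 0.25, dom))  # -> 0.25 (kept at its original grade)
```

Since each edge is delayed at most once and only by $\epsilon$, the errors cannot accumulate, which is what the interleaving argument below formalizes.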
\begin{lemma}
The resulting module is $\epsilon$-interleaved\footnote{See \cite{structure-pers} for a definition of interleaving.} with the original one.
\end{lemma}
\begin{proof}
Consider the set $D$ of edges that are delayed by this algorithm, and $C$ the edges that are kept at their original position.
Starting from the original sequence, we can delay all the edges of $D$ by exactly $\epsilon$. The flag filtration defined by this delayed sequence is obviously $(0,\epsilon)$-interleaved with the original.
We now run the regular backward algorithm on this sequence, with the difference that the edges in $C$ are handled as if they were never dominated. The output filtration has the same persistence diagram as the delayed sequence, which is at distance at most $\epsilon$ from the diagram of the original filtration.
The key observation here is that this procedure, where we first delay some edges then run the exact algorithm, produces the same output as the approximation algorithm.
\end{proof}
Many versions of this can be considered, with the goal of enabling more reductions, but one needs to ensure that the $\epsilon$-approximations for two edges cannot combine to make an error larger than $\epsilon$ on the output. In this example, processing the edges from right to left is crucial to guarantee a bounded error: the approximation done with the initial shift from $t(e)$ to $t(e)+\epsilon$ of an edge $e$ cannot affect an already modified persistence interval, since those are after $t(e)+\epsilon$. However, it also limits the optimization opportunities a lot. It could make sense to run the exact simplification algorithm first, and only then make a pass with the approximation algorithm, to avoid ``wasting'' our approximation budget on unnecessary delays, but that would lose the advantage that the approximation algorithm may be faster than the exact one.
\section{Zigzag persistence} \label{sec:zigzag_filt}
The filtrations we have discussed so far are increasing sequences of complexes. There exists a more general type of filtration, called zigzag filtration~\cite{CarlssonZigzag,ClementSteveZigzag} $ \mathcal{Z}: K_1 \hookrightarrow K_2 \hookleftarrow K_3 \hookrightarrow \cdots \hookrightarrow K_n $. Here consecutive complexes are still related by an inclusion, but the direction of this inclusion may be different for every consecutive pair. In other words, as we move from left to right with increasing indices, the complex $K_i$ is obtained by either \textit{inclusion} of simplices or \textit{removal} of simplices from the previous complex $K_{i-1}$. Persistence diagrams can still be defined for these filtrations. Again, in this paper, we are only interested in flag zigzag filtrations, where each complex is a clique complex. For a flag zigzag filtration the underlying graphs are related through inclusions or removals of edges. We show that edge collapse can again be used for simplification of such sequences.
In the case of standard persistence (explained in Section~\ref{sec:persistence_simplification}), the goal of the simplification process was to shift as many dominated edges as possible towards the end of a filtration and then trim them. For a zigzag flag filtration, there are several possible ways to simplify it:
\begin{enumerate}
\item If a dominated edge is included and never removed, then, as usual, we try to \textit{shift} it towards the end and trim it.
\item If an edge is both included and removed while dominated, then we try to \textit{shift} the inclusion until its removal and then annihilate both operations.
\item If an edge is included as non-dominated but later removed as dominated, then we try to shift its removal towards the right, until the end or its re-insertion.
\item A zigzag filtration is symmetric, and a removal is an inclusion from the opposite direction; therefore, we can shift dominated removals towards the beginning and perform operations symmetric to those in 2.
\end{enumerate}
The 3rd method reduces the number of events at the cost of a slightly bigger complex, which may or may not be preferred over a more ``zigzagy'' filtration, so we do not use it in the default algorithm.
With more ways to simplify, the simplification process of a zigzag flag filtration is more delicate than for a usual filtration, and it raises some subtleties. First, can we shift a dominated edge inclusion across an edge removal? We show in \cref{lemma:zigzag_shift} that a dominated edge $e$ can be shifted across an edge removal if $e$ is also dominated after the edge removal. Resolving this first issue leads to a second question: how should inclusions and removals of the same grade be indexed (ordered)? In practice, this situation is not common, and two complexes at consecutive grades are linked through either inclusions or removals.
Therefore, we adopt the following representation for a zigzag flag filtration.
We will use the same notations $t_i$, $G_{t_i}$, $\overline{G}_{t_i}$ and $E$ as in Section~\ref{sec:shift_swap_trim}.
We represent a zigzag filtration in slightly more general way as $\mathcal{Z} : \overline{G}_{t_1} \hookleftarrow \overline{G}_{t'_1} \hookrightarrow \overline{G}_{t_2} \hookleftarrow \cdots \hookrightarrow \overline{G}_{t_{i-1}} \hookleftarrow \overline{G}_{t_{i-1}^{\prime}} \hookrightarrow
\overline{G}_{t_i} \hookleftarrow \overline{G}_{t_{i}^{\prime}} \hookrightarrow
\overline{G}_{t_{i+1}},\cdots \hookrightarrow \overline{G}_{t_n}$. Here ${G}_{t_{i}^{\prime}}$ is an intermediate graph at grade $t_i$.
In a usual zigzag, $\overline{G}_{t'_{i}}$ is equal to either $\overline{G}_{t_i}$ or $\overline{G}_{t_{i+1}}$ depending on the direction of the arrow. Note that the standard zigzag algorithm still applies to this version.
The following lemma provides a sufficient condition to shift and swap an inclusion with removal.
\begin{lemma} [The Zigzag Shifting-Swapping Lemma] \label{lemma:zigzag_shift}
Let $e$ be an edge inserted at $t_i$, $e \in G_{t'_{i}}$ and dominated in both graphs $G_{t_i}$ and $G_{t'_{i}}$.
Then the persistence diagrams of the original zigzag flag filtration
$\{\ \cdots \hookleftarrow\overline{G}_{t'_{i-1}}\xhookrightarrow{e} \overline{G}_{t_i} \hookleftarrow \overline{G}_{t_{i}^{\prime}} \hookrightarrow \overline{G}_{t_{i+1}} \hookleftarrow \cdots\ \}$
and the shifted-swapped sequence
$\{\ \cdots \hookleftarrow\overline{G}_{t'_{i-1}}\hookrightarrow \overline{G_{t_i}\setminus e} \hookleftarrow \overline{G_{t_{i}^{\prime}}\setminus e} \xhookrightarrow{e} \overline{G}_{t_{i+1}} \hookleftarrow \cdots \ \}$
are equivalent. That is, the grade of $e$ can be shifted to $t_{i+1}$.
\end{lemma}
\begin{proof}
The proof follows by a similar argument as in \cref{lemma:shift}. All three squares in the following diagram commute, as all the maps are induced by inclusions. Note that the top left and the bottom right horizontal maps can be induced by the inclusion of more edges than just $e$.
\begin{center}
\begin{tikzcd}
H_p(\overline{G}_{t_{i-1}^{\prime}}) \arrow[r, hook, "{|e|^*}"] & H_p(\overline{G}_{t_i}) \arrow[d, "{|rt|^*}", shift left=1.5ex] & H_p(\overline{G}_{t_{i}^{\prime}}) \arrow[l, hook, blue ] \arrow[r, hook] \arrow[d, "{|rt|^*}", shift left=1.5ex] & H_p(\overline{G}_{t_{i+1}}) \arrow[d, equal] \\
H_p(\overline{G}_{t_{i-1}^{\prime}}) \arrow[u, equal] \arrow[r, hook] &
H_p(\overline{G_{t_{i}} \setminus e}) \arrow[u, hook, "{|e|^*}"]
& H_p(\overline{G_{t_{i}^{\prime}} \setminus e}) \arrow[l, hook, red] \arrow[u, hook, "{|e|^*}"] \arrow[r, hook ,"{|e|^*}"] & H_p(\overline{G}_{t_{i+1}})
\end{tikzcd}
\end{center}
Since the vertical maps are either equalities or isomorphisms induced by the inclusion of the dominated $e$ ($|rt|$ is the corresponding retraction map associated with the collapse), the result follows immediately. That is, the shift of $e$ to the grade $t_{i+1}$ preserves the diagram.
\end{proof}
Note that in the above lemma, the hypothesis that the edge $e$ should be dominated in the graph $G_{t_{i}^\prime}$ is necessary as shown in \cref{fig:zigzag_example}.
\begin{figure}[H]
\centering
\includegraphics[scale=.6]{zigzag_exp.pdf}
\caption{In the top sequence, the green edge $f$ is dominated at grade $3$ and non-dominated at grade $4$. Shifting and swapping the inclusion of $f$ with the removal of the red edge $e$ results in the bottom sequence. This results in two different one dimensional persistence diagrams of the associated flag complexes. For the top sequence it is $\{[1,5]\}$ and for the bottom $\{[1,2], [4,5]\}$. Note that it is standard to use closed intervals in a zigzag persistence diagram.}
\label{fig:zigzag_example}
\end{figure}
If a dominated edge is inserted and removed at the same grade, we can cancel both operations.
\begin{lemma}[The Cancellation Lemma] \label{lemma:zigzag_cancel}
Let $e$ be an edge inserted and removed at $t_i$. If $e$ is dominated in $G_{t_i}$, then the persistence diagrams of the following two sequences
$\{\ \cdots \hookleftarrow\overline{G}_{t'_{i-1}}\xhookrightarrow{e} \overline{G}_{t_i} \xhookleftarrow{e} \overline{G}_{t_{i}^{\prime}} \hookrightarrow \overline{G}_{t_{i+1}} \hookleftarrow \cdots\ \}$ and
$\{\ \cdots \hookleftarrow\overline{G}_{t'_{i-1}} \hookrightarrow \overline{G_{t_i}\setminus e} \hookleftarrow \overline{G}_{t_{i}^{\prime}} \hookrightarrow \overline{G}_{t_{i+1}} \hookleftarrow \cdots\ \}$ are the same.
\end{lemma}
\subparagraph{Algorithm: }
The algorithm to simplify $\mathcal{Z} : \overline{G}_{t_1} \hookleftarrow \cdots
\overline{G}_{t_i} \hookleftarrow \overline{G}_{t_{i}^{\prime}} \hookrightarrow
\overline{G}_{t_{i+1}},\cdots \hookrightarrow \overline{G}_{t_n}$ is again a combination of swapping, shifting and trimming of a dominated edge. For each edge $e$ in $\mathcal{Z}$ there is a list of pairs $\langle t, \mathit{inc}\rangle$ associated with it, where $t$ is a grade and \textit{inc} is a Boolean variable to denote whether $e$ is inserted or removed at $t$.
Below, we provide the main steps of the zigzag simplification algorithm. The algorithm first processes all the edge inclusions in decreasing grade order from $t_n$ to $t_1$ and tries to shift them towards the end. Once the inclusions have been processed, it processes all the removals in increasing grade order from $t_1$ to $t_n$ and tries to shift them towards the beginning. This process can be repeated several times until it converges. We use $t(e)$ to denote the current grade of the edge $e$ being considered by the algorithm.
\begin{algorithm}[H]
\caption{Core zigzag flag filtration algorithm}
\label{alg:core_zigzag_algorithm}
\begin{algorithmic}[1]
\Procedure{Core-Zigzag-Flag-Filtration}{$E$}
\ForAll {edge \textit{inclusions}, backward (from $t_n$ to $t_1$)}
\If {the current edge $e$ is dominated in the graph $G_{t(e)}$}
\If {$t{(e)} == t_n$}
\State trim $e$ (delete the element $\langle t(e), \mathit{inc}\rangle$).
\ElsIf{ $G_{t(e)} \neq G_{{t'(e)}}$} \Comment{the next step is a removal $G_{t(e)} \hookleftarrow G_{{t'(e)}}$}.
\If{$e \notin G_{t'(e)}$}
\State delete the inclusion-removal pair of $e$ at $t(e)$.
\ElsIf{ $e$ is dominated in $G_{t'(e)}$.}
\State set $t(e) = t{(e)}+1$ and go-to step 3. \Comment{$t{(e)}+1$ denotes the next grade.}
\EndIf
\Else \Comment{the next step is an inclusion $G_{t(e)} \hookrightarrow G_{t(e)+1}$.}
\State set $t(e) = t(e)+1$ and go-to step 3.
\EndIf
\EndIf
\EndFor
\State Move forward from $t_1$ to $t_n$ and process edge \textit{removals} symmetric to steps 2-16. \label{alg-zig-rev}
\EndProcedure
\end{algorithmic}
\end{algorithm}
We skip the details of Line~\ref{alg-zig-rev}, which is similar to the previous loop.
Note that an edge can be inserted and removed multiple times, in this case, the algorithm proceeds by pairing an inclusion with its next removal.
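This pairing of events can be illustrated by a small hypothetical Python helper (our own sketch, not part of the implementation; the event-list representation mirrors the per-edge lists of grade/inclusion pairs described above):

```python
def pair_events(events):
    """Pair each inclusion of a single edge with its next removal.

    events: chronologically sorted list of (grade, is_inclusion) pairs
    for one edge. Returns a list of (insertion_grade, removal_grade)
    pairs, with removal_grade set to None when the edge is never
    removed again after that insertion.
    """
    pairs, pending = [], None
    for t, inc in events:
        if inc:
            pending = t          # open a new inclusion
        else:
            pairs.append((pending, t))  # close it with the next removal
            pending = None
    if pending is not None:
        pairs.append((pending, None))   # inserted but never removed
    return pairs

# An edge inserted at grade 1, removed at 3, then re-inserted at 5:
print(pair_events([(1, True), (3, False), (5, True)]))
# -> [(1, 3), (5, None)]
```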
The above algorithm outlines the essential aspects of the computation but is not optimal. As in \cref{alg:core_flag_filtration}, we can use the Adjacency lemma (\cref{lemma:nbd_domination}) to perform fewer domination checks.
In an oscillating Rips complex~\cite{RipsZigzagOudotSheehy}, it is quite common for edges to be transient (they appear and disappear almost immediately). The hope of this simplification process is to identify such edges and get rid of both their insertion and removal.
\subparagraph{Correctness: }As the underlying principle of the zigzag algorithm is the same as that of Algorithm~\ref{alg:core_flag_filtration}, we avoid a detailed discussion. To certify the correctness of the algorithm, we again observe that the above simplification algorithm inductively performs either shifting, swapping or trimming of a dominated edge, operations that are validated by \cref{lemma:swap,lemma:zigzag_shift,lemma:trimming}. Note that \cref{lemma:swap,lemma:trimming} extend naturally to the zigzag case.
We can easily parallelize the zigzag simplification algorithm using the same divide and conquer approach described in Section~\ref{sec:parallel}.
\section{Experiments}
\label{sec:experiments}
In this section we provide various sets of experiments to showcase the efficiency of our new approach. We also benchmark the new approach against current state-of-the-art methods.
\subparagraph{Complete graph.}
\label{sec:exp-complete}
Starting from a complete graph on 700 vertices where all edges appear at the same time, the size of the graph after applying the algorithm several times decreases as 244650 (initial), 5340, 3086, 1307, 788 and finally 699. It stops decreasing after the 5th round since 699 edges is obviously minimal. This example demonstrates that one round of the algorithm is far from producing a fully reduced sequence. However, it removed a large number of edges, which makes subsequent rounds much faster, and may have already reduced the complex enough to compute (persistent) homology.
\subparagraph{Torus: distribution of filtration values.}
We use a dataset with 1307 points on a torus embedded in $\ensuremath{\mathbb{R}}^3$. Figure~\ref{fig:torus} (left) shows the distribution of the edge lengths. Originally, there are 853471 edges and the longest has length $2.6$. We repeatedly apply the backward algorithm until the process converges. In the end, we are left with 65053 edges, and a maximal filtration value of $1.427$.
\begin{figure}[H]
\centering
\includegraphics[width=.5\textwidth]{torus-both}%
\includegraphics[width=.5\textwidth]{torus-after}
\includegraphics[width=.5\textwidth]{torus-diag}
\caption{Filtration value of edges for a torus (top). Orange is for original edges and blue after collapse. Top right: enlarged blue graph. Bottom: persistence diagram.}
\label{fig:torus}
\end{figure}
First, note that some implementations (of which the first one is Eirene~\cite{eirene}) of Rips persistence first check at which filtration value the complex becomes a cone (here around $2$) and ignore longer edges. In our algorithm, this check is performed implicitly and the long edges are dominated by the apex of the cone and thus get removed (we actually manage to go significantly lower than $2$). Still, it remains sensible to avoid those edges when possible.
After collapsing, we notice several regions in the curve. First some short edges are added progressively, until the complex gets the homotopy type of a torus. Then nothing happens for a while, until we have enough edges to kill one of the 1-cycles and fill the cavity, where many edges are inserted at the same time. Then again nothing happens while the complex is equivalent to a circle, until we can kill this last 1-cycle, and the process quickly stops with a contractible complex.
\subparagraph{Benchmark backward vs forward.} We benchmark the new backward algorithm against the forward algorithm. For the forward algorithm, we use the code from Giotto-ph~\cite{giotto-ph}, which is derived from our implementation in Gudhi but faster by a factor $1.5$ to $2$. Our benchmarking considers two aspects: run-time and reduction size (see \cref{fig:speed}). The datasets are: \emph{uniform} for an i.i.d. sample of points in a square, \emph{sparse} for the same, but using a low threshold on the maximal size of edges, \emph{polygon} for a regular polygon, \emph{circle} for an i.i.d. uniform sample of a circle, \emph{dragon} comes from~\cite{NinaPaper} and \emph{O3} from~\cite{ripser} (the first version uses a threshold of $1.4$ on edge lengths).
The backward algorithm comes with an optimization using a \emph{dense} array indexed by vertices. This usually speeds things up nicely, but in cases where the original set of edges is very sparse, this dense array can be an issue, so we also have a version without this array, denoted \emph{sparse}.
\begin{table}[H]
\centering
\begin{tabular}{l|l|l|l|l|l|l|l|}
\cline{4-8}
\multicolumn{3}{c|}{} & \multicolumn{2}{c|}{Forward} & \multicolumn{3}{c|}{Backward} \\
\cline{2-8}
& vertices & before & after & time & after & time dense & time sparse \\
\hline
uniform & 1000 & 499500 & 2897 & 2.4 & 2897 & {\bf 1.7} & 2.4 \\
\hline
sparse & 50000 & 389488 & 125119 & 0.3 & 125119 & 1.9 & {\bf 0.17} \\
\hline
polygon & 300 & 44850 & 44701 & 3.6 & 44701 & {\bf 0.5} & 1 \\
\hline
circle & 300 & 44850 & 41959 & 4.8 & 41959 & {\bf 0.4} & 0.8 \\
\hline
complete & 900 & 404550 & 24540 & 43 & {\bf 5980} & {\bf 0.4} & 0.4 \\
\hline
torus & 1307 & 853471 & 94993 & 31 & 94993 & {\bf 3.2} & 5 \\
\hline
dragon & 2000 & 1999000 & 53522 & 29 & 53522 & {\bf 14} & 20 \\
\hline
O3 (1.4) & 4096 & 4107941 & 13674 & 59 & 13674 & {\bf 37} & 51 \\
\hline
O3 & 1024 & 523776 & 519217 & 200 & 519217 & {\bf 12} & 23 \\
\hline
\end{tabular}
\caption{Run-time and reduction size comparison. Columns \textit{before} and \textit{after} contain the number of edges before and after collapsing, and column \textit{time} contains the run time in seconds of the collapse.}
\label{fig:speed}
\end{table}
Table~\ref{fig:speed} shows a clear advantage for the backward algorithm in cases where few edges can be removed, or when several edges have the same filtration value. Except for \emph{complete} which is a plain complete graph with every edge at the same filtration value, all edges are computed as Euclidean Rips graphs.
When all the input edges have distinct filtration values, both algorithms output exactly the same list of edges. However, this isn't the case anymore when multiple edges have the same filtration value (and in particular if we apply the algorithm several times). The forward algorithm, as presented, relies on the order of the edges and does not take advantage of edges with the same filtration value. The backward algorithm, at its core, checks if an edge is dominated \emph{at a specific filtration value (grade)}. As seen in Table~\ref{fig:speed}, for a complete graph on 900 vertices, the backward algorithm outputs 5 times fewer edges than the forward algorithm.
\subparagraph{Size gains with approximate version.}
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c|c|c|c|c|}
& original & 1 (exact) & 1.01 & 1.1 & 1.5 & 2 & 10 & 100 \\ \hline
uniform & 499500 & 2897 & 2891 & 2859 & 2609 & 2462 & 2356 & 2353 \\ \hline
circle & 44850 & 42007 & 30423 & 20617 & 17552 & 16404 & 14574 & 14342 \\
\it (seconds) & & \it 0.4 & \it 0.33 & \it 0.22 & \it 0.16 & \it 0.14 & \it 0.12 & \it 0.115 \\ \hline
dragon & 1999000 & 53522 & 52738 & 52161 & 45439 & 40564 & 36094 & 35860 \\ \hline
O3 (1.4) & 4107941 & 13674 & 13635 & 13418 & 12682 & 12050 & 11828 & 11823 \\ \hline
\end{tabular}
\caption{Gains with the approximate algorithm, for different interleaving factors.}
\label{tab:approx}
\end{table}
Table~\ref{tab:approx} shows the number of remaining edges when we don't require the output to have the same persistence diagram, but only ask that the modules be multiplicatively $\alpha$-interleaved. Usually, the approximate version gives modest gains over the exact version, for roughly the same running time. However, in some cases that are hard to simplify like the circle, even a small error allows a significant number of collapses.
\subparagraph{Parallelism benchmark.}
\begin{figure}[h]
\centering
\includegraphics[height=8cm]{par-thr}
\caption{Speed gain as a function of the number of threads.}
\label{fig:parallel}
\end{figure}
We wrote a limited\footnote{This implementation assumes that no two edges have the same filtration value.} prototype based on \lstinline{tbb::parallel_reduce} and tested it on an i7-10875H CPU (8 cores, 16 threads) by limiting the number of threads. \autoref{fig:parallel} shows promising results for some datasets, but also that there is room for better parallel algorithms.
\subparagraph{Persistence benchmark.}
In our experience, doing edge collapses before computing persistent homology helps a lot for (homology) dimension 2 or higher. However, it is a terrible idea if we only care about dimension 0, since computing 0-persistence is cheaper than this simplification can ever hope to be. The case of dimension 1 is more mixed, it can help in some cases and hurt in others. By default we would only recommend its use for dimension greater than or equal to 2.
For convenience, the persistence computation is done using the version of Ripser~\cite{ripser} found in giotto-ph~\cite{giotto-ph} with $n\_threads=1$, and with our new backward algorithm. This means that edges appearing after the complex has become a cone are ignored. \autoref{tab:ripser} shows the time it takes to compute persistent homology in dimension up to $k$, either directly, or first collapsing before computing it.
\begin{table}[h]
\centering
\begin{tabular}{c|c|c|c|c|c|}
& dim 1 & collapse \& dim 1 & dim 2 & collapse \& dim 2 & collapse \& dim 3 \\ \hline
torus3D & 6.2 & 3.8 & 75 & 6.4 & 47 \\ \hline
dragon & 3.3 & 9.2 & 148 & 9.7 & 16.3 \\ \hline
\end{tabular}
\caption{Persistent homology computation time in seconds, with or without edge collapse.
}
\label{tab:ripser}
\end{table}
\section{Introduction}
One of the main tasks tackled with deep learning methods is the one of categorization: classification of images as representing specific objects, identification of words for speech recognition, etc. \citep[see][for reviews]{lecun2015deep, schmidhuber2015deep}. The identification of categories is a central topic in cognitive science. In his book ``Categorical Perception: The Groundwork of Cognition'', \citet{Harnad_1987} shows how central and fundamental to cognition categorization is. A well-studied perceptual consequence of categorization in humans and other animals is characterized by a sharp transition in the identification responses and by greater cross-category than within-category discrimination, a phenomenon called categorical perception~\citep[see also][for a review]{Repp_1984}. It originates in the field of speech perception: the seminal work of \citet{Liberman_etal_1957} demonstrated that American English-speaking subjects are better at discriminating between a /ba/ and a /da/ than between two different /ba/ tokens, even when the magnitude of the physical difference between the two stimuli is equal. Authors have subsequently shown that such effects are not specific to English, occurring in any language, but with respect to its own structure, as distinct phonemic systems entail different discrimination abilities \citep{abramson1970discriminability, goto1971auditory}. Categorical perception was also found to be not specific to speech \citep{cross1965identification,burns1978categorical,Bornstein_Korda_1984,Goldstone_1994,Beale_Keil_1995}, nor even to humans \citep{nelson1989categorical, Kluender_etal_1998, caves2018categorical}.\\
Previous computational work has shown that categorical perception also happens in artificial neural networks \citep{anderson1977distinctive, padgett1998simple, tijsseling1997warping, Damper_Harnad_2000}, with internal representations organized into more or less well-separated clusters \citep{mcclelland2003parallel, tong2008fusiform,olah2015visualizing}. A noisy neural classifier, either biological or artificial, that aims at minimizing the probability of misclassifying an incoming stimulus, has to deal with two sources of uncertainty. One is due to the intrinsic overlap between categories (in the relevant case where classifying these stimuli is not an obvious task). The other one stems from the variability of the response of the neurons to a stimulus. Intuitively, one might want to reduce the neural uncertainty in the regions of the input space where the overlap between categories is already a major source of confusion, \textit{i.e.}, the boundary regions between classes\footnote{Throughout this paper, we will interchangeably use the two terms `stimulus' and `input', and, likewise, the two terms `categories' and `classes'.}. In \citet{LBG_JPN_2008}, taking an information theoretic approach, in the context of a biologically motivated neural architecture with a large number of coding cells, we quantitatively showed how these two quantities interact precisely, and how as a consequence category learning induces an expansion of the representation space between categories, \textit{i.e.}, categorical perception. Reducing neuronal noise in the region where the chance of misclassifying a new stimulus is highest, by mistakenly crossing the decision boundary, thus lowers the probability of error. Based on our previous studies of categorical perception \citep{LBG_JPN_2008,LBG_JPN_2012}, here we introduce a framework for the analysis of categorization in deep networks. We analyze how categorization builds up in the course of learning and across layers. 
This analysis reveals the geometry of neural representations in deep layers after learning. We also show that categorical perception is a gradient phenomenon as a function of depth: the deeper, the more pronounced the effect -- from the first hidden layer that might show no specific effect of categorical perception to the last, decisional, layer that exhibits full categorical behavior.\\
So far we mentioned noise as a nuisance that we have to deal with, following the common signal processing tradition. Conversely, a great deal of work has shown that neuronal noise, whether small or large depending on the context, can be desirable, helping a system to learn more efficiently, more robustly, with a better generalization ability. In the field of artificial neural networks, many studies in the 1990s have shown that input noise helps a network to overcome overfitting \citep[see e.g.][]{holmstrom1992using, matsuoka1992noise, bishop1995training, an1996effects}. Authors have also shown that noise can have the effect of revealing the structure of the data, as for learning a rule from examples \citep{seung1992statistical} or in independent component analysis \citep{nadal1994nonlinear}, a fact recently put forward by \citet{schwartz-ziv2017opening} within the framework of the information bottleneck approach. One important ingredient in the \citet{krizhevsky2012imagenet} paper that ignited the recent revival of connectionism in the past decade is the use of multiplicative Bernoulli noise in the hidden activations, a technique called dropout \citep{srivastava2014dropout}. Our study of categorical perception in artificial networks leads us to suggest that a key ingredient making dropout beneficial is that the effect of this noise is not uniform with respect to the input space, but depends on the structure and relation between categories. Importantly, our findings allow us to more generally understand the effect of noise in any given layer as being dependent on the current neural representation, hence as not having the same benefit depending on layer type, layer depth and on the course of learning (initial vs. learned stage).\\
For our analysis of artificial neural networks, we consider two ways of extracting and visualizing the categorical nature of the encoding resulting from learning. The first one consists in mimicking a standard experimental protocol used for the study of categorical perception in humans and other animals. We generate smooth continua interpolating from one category to another, and we look at the structure of the neural activity as one moves along a continuum. We show how, following learning, the representation space is enlarged near the boundary between categories, thus revealing categorical perception: the distance between the neural representations of two items is greater when these items are close to the boundary between the classes than when they are drawn from the same category. We also show that these categorical effects are more marked the deeper the layer of the network. The second one consists in measuring a categoricality index which quantifies, for each layer, how much the neural representation as a whole is specific to the coding of categories. We show that categoricality increases both with learning and with the depth of the layer, paralleling the results found by means of the morphed continua. We also observe that convolutional layers, while containing categorical information, are less categorical than their dense counterparts.\\
The paper is organized as follows. In Section~\ref{sec:motivations}, we review empirical studies on categorical perception, and summarize theoretical results that we obtained on the neural basis of this phenomenon. We explain how these findings motivate the present study of categorization in artificial neural networks. In Section~\ref{sec:results}, we conduct computational experiments that confirm the relevance of the study of categorical perception in the understanding of artificial neural networks. We consider several examples gradually going up in task complexity and network sizes. We first illustrate the emergence of categorical perception in artificial neural networks on one- and two- dimensional toy examples. Moving to the MNIST dataset \citep{lecun1998gradient}, a commonly used large database of handwritten digits, we show in Section~\ref{sec:mnist} how categorical perception emerges in an artificial neural network that learns to classify handwritten digits. In Section~\ref{sec:naturalimage} we extend our analysis to very deep networks trained on natural images, working with the Kaggle Dogs vs. Cats dataset, and with the ImageNet dataset \citep{deng2009imagenet,ILSVRC15}. In Section~\ref{sec:dropout}, we discuss many heuristics practices in the use of dropout, along the lines mentioned above. Finally, in Section~\ref{sec:neuro}, we also suggest that our results might in turn shed light on the psychological and neuroscientific study of categorical perception. We provide technical details in the Materials and Methods section and supplementary information in Appendices.
\section{Insights from cognitive science}
\label{sec:motivations}
\subsection{Experimental results on categorical perception}
\label{sec:CP}
In the psychological and cognitive science literature, a standard way to look at categorical perception is to present different stimuli along a continuum that evenly interpolate between two stimuli drawn from two different categories. Let us consider a few illustrative examples. In their seminal study on speech perception, \citet{Liberman_etal_1957} generated a /ba/--/da/ synthetic speech continuum by evenly varying the second formant transition, which modulates the perception of the place of articulation. Studying categorical perception of music intervals, \citet{burns1978categorical} considered a continuum that interpolates from a minor third to a major third to a perfect fourth. Turning to vision, \citet{Bornstein_Korda_1984} considered a blue to green continuum, while \citet{Goldstone_1994} made use of rectangles that vary either in brightness or in size. In \citet{Beale_Keil_1995}, the authors generated a continuum of morphed faces interpolating between individual exemplars of familiar faces, from Kennedy to Clinton for instance. As a final example, we cite the monkey study by \citet{freedman2001categorical} that makes use of a dog to cat morphed continuum. In these sorts of studies, experimentalists typically measure category membership for all items along the considered continuum, discrimination between neighboring stimuli, as well as reaction times during a categorization task. Of interest for the present paper are the behaviours in identification and discrimination tasks observed in these experiments. Although the physical differences in the stimuli change in a continuous way, the identification changes abruptly in a narrow domain near the category boundary while discrimination is better near the category boundary than well inside a category, which is the hallmark of categorical perception.\\
\subsection{Optimal coding entails categorical perception}
\label{sec:CP_mod}
If the neural representations have been optimized for the identification of categories, as e.g. in speech perception, these behavioral performances are intuitive. Since the goal is to decide to which category the stimulus belongs, far from a category boundary in stimulus space there is no need for a precise identification of the stimulus itself: two nearby stimuli may not be discriminated. On the contrary, in the vicinity of a boundary, the likelihood of a category is strongly affected by any small shift in stimulus space. Thus, one expects the neural code, if optimized in view of the categorization task, to provide a finer representation of the stimuli near a class boundary than within a category.\\
In previous work~\citep{LBG_JPN_2008,LBG_JPN_2012}, we formalized and made quantitative these arguments through modeling of the neural processing (see \ref{sec:appendix_model} for a summary). In empirical studies, authors find stimulus encoding layers where the categorical information is distributed among many cells, no single cell carrying information that fully characterizes a category. On the contrary, in what seems to be the decisional layer, single cells are specific to a single category~\citep[see e.g.][]{kreiman2000category, freedman2001categorical, meyers2008dynamic}. Based on this neuroscience data, we considered a simplified architecture: a coding layer with a distributed representation, and a decision layer. The coding layer represents the last encoding stage before the decision can be taken, and may correspond to a high level in the neural processing. Hence the input to this layer is not the actual stimulus but some projection of it on some relevant dimensions. Taking a Bayesian and information theoretic approach, we showed that efficient coding (targeting optimal performance in classification) leads to categorical perception (in particular better discrimination near the class boundaries). More precisely, we showed that in order to optimize the neural representation in view of identifying the category, one should maximize the mutual information between the categories and the neural code. This in turn, in the limit of a large number of coding cells, implies that one has to minimize a coding cost which, in qualitative terms, can be written:
\begin{equation}
\mathcal{\overline{C}}_{\text{coding}} = \frac{1}{2} \left <\frac{\text{\small categorization uncertainty}}{\text{\small neural sensitivity}} \right>
\end{equation}
where the brackets $<.>$ denote the average over the space of relevant dimensions $x$. In formal terms,
\begin{equation}
\mathcal{\overline{C}}_{\text{coding}} = \frac{1}{2} \int \frac{F_{\text{cat}}(x)}{F_{\text{code}}(x)} \;p(x)\,dx
\label{eq:midiff}
\end{equation}
where $F_{\text{code}}(x)$ and $F_{\text{cat}}(x)$ are Fisher information quantities. The quantity $F_{\text{code}}(x)$ represents the sensitivity of the neural code to a small change in stimulus $x$. Larger Fisher information means greater sensitivity. The quantity $F_{\text{cat}}(x)$ represents the categorical sensitivity, that is, how much the probability that the stimulus belongs to a given category changes for small variation in $x$ values. Each Fisher information quantity defines a metric over the space $x$ (more exactly, over spaces of probabilities indexed by $x$). Along a path in stimulus space, $F_{\text{cat}}$ quantifies the change in categorical specificity of $x$, and $F_{\text{code}}$ how much the neural activity changes locally. Efficient coding with respect to optimal classification is thus obtained by essentially matching the two metrics. Since $F_{\text{cat}}$ is larger near a class boundary, this should also be the case for $F_{\text{code}}(x)$.\\
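To make $F_{\text{cat}}$ concrete, here is a minimal numpy sketch (the two-Gaussian setting and all parameter values are purely illustrative) computing the categorical Fisher information for two equiprobable 1D Gaussian categories; for two classes it reduces to $F_{\text{cat}}(x) = P'(x)^2 / \big(P(x)(1-P(x))\big)$, where $P(x)$ is the posterior of one category:

```python
import numpy as np

def categorical_fisher(x, mu1=-0.5, mu2=0.5, sigma=0.5):
    """F_cat(x) for two equiprobable Gaussian categories
    N(mu1, sigma^2) and N(mu2, sigma^2).
    The posterior of category 1 is logistic in x, so
    F_cat(x) = P'(x)^2 / (P(x) (1 - P(x))) = a^2 P(x) (1 - P(x))."""
    a = (mu2 - mu1) / sigma**2
    p = 1.0 / (1.0 + np.exp(a * (x - 0.5 * (mu1 + mu2))))  # P(category 1 | x)
    dp = -a * p * (1.0 - p)                                # dP/dx
    return dp**2 / (p * (1.0 - p))

xs = np.linspace(-1.5, 1.5, 301)
f_cat = categorical_fisher(xs)
# F_cat peaks at the category boundary (x = 0) and vanishes deep
# inside each category.
print("boundary at x =", xs[np.argmax(f_cat)])
```

Matching $F_{\text{code}}$ to this profile thus requires the neural sensitivity to peak at the class boundary as well.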
The Fisher information $F_{\text{code}}(x)$ is directly related to the discriminability $d'$ measured in psychophysical experiments \citep[see e.g.][]{green1966signal}. Within the framework of Signal Detection Theory, $d'$ characterizes the ability to discriminate between two stimuli $x_1=x$ and $x_2=x+\delta x$. If one assumes optimal Bayesian decoding, then this behavioural quantity $d'$ is equal to $|\delta x| \sqrt{F_{\text{code}}(x)}$ \citep{seung1993simple}. Thus, category learning entails greater Fisher information $F_{\text{code}}(x)$ between categories, hence better cross-category than within-category discrimination, leading to the so-called \textit{categorical perception}.
\subsection{Category learning and local neural geometry}
\label{sec:geometry}
One may define the dissimilarity $D_{\text{neural}}(x_1, x_2)$ between the stimuli $x_1$ and $x_2$ at the neural level, that is in the space defined by the neural activations, by a distance or a dissimilarity measure between the distributions of activities that they evoke, $D_{\text{neural}}(x_1, x_2) \equiv d(P(\mathbf{r}|x_1)||P(\mathbf{r}|x_2))$. Natural choices for $d(.||.)$ are f-divergences, among which the Kullback-Leibler divergence $D_{KL}(P(\mathbf{r}|x_1)||P(\mathbf{r}|x_2))$, or the symmetrised Kullback-Leibler divergence, $D_{KL}(P(\mathbf{r}|x_1)||P(\mathbf{r}|x_2)) + D_{KL}(P(\mathbf{r}|x_2)||P(\mathbf{r}|x_1))$. In such cases, for small $\delta x = x_2 - x_1$, the distance is proportional to the Fisher information. For, e.g., the symmetrised Kullback-Leibler divergence,
\begin{equation}
D(x, x+\delta x) = (\delta x)^2\, F_{\text{code}}(x).
\end{equation}
Thus, the Fisher information $F_{\text{code}}(x)$ provides the local neural geometry: a larger $F_{\text{code}}(x)$ around a certain value of $x$ means that the neural representation is stretched at that location. For $\mathbf{x}$ in dimension $K>1$, it is the $K\times K$ Fisher information matrix $F_{\text{code}}(\mathbf{x})$ which defines the local, non isotropic, geometry, $D(\mathbf{x},\mathbf{x}+\delta \mathbf{x}) \propto \;\delta\mathbf{x}^T\,F(\mathbf{x})\,\delta\mathbf{x}$. \\
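As a sanity check of this relation, the following sketch uses a toy Gaussian encoding model $P(\mathbf{r}|x) = \mathcal{N}(x, \sigma^2)$ (an illustrative stand-in, not the model of the paper), for which $F_{\text{code}}(x) = 1/\sigma^2$ and the symmetrised Kullback-Leibler divergence is available in closed form:

```python
import numpy as np

sigma = 0.3
F_code = 1.0 / sigma**2   # Fisher information of N(x, sigma^2) w.r.t. x

def sym_kl_gauss(m1, m2, s):
    """Symmetrised KL divergence between N(m1, s^2) and N(m2, s^2);
    each direction equals (m1 - m2)^2 / (2 s^2)."""
    return 2 * (m1 - m2) ** 2 / (2 * s ** 2)

x, dx = 0.2, 1e-2
d = sym_kl_gauss(x, x + dx, sigma)
# For equal variances the identity D(x, x+dx) = dx^2 * F_code(x) is exact.
print(np.isclose(d, dx**2 * F_code))  # -> True
```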
The Fisher information $F_{\text{code}}(x)$ also characterizes the local neural geometry from the viewpoint of parameter estimation. The inverse of the Fisher information $F_{\text{code}}(x)$ is an optimal lower bound on the variance $\sigma_x^2 $ of any unbiased estimator $\widehat{x}(\mathbf{r})$ of $x$ from the noisy neural activity $\mathbf{r}$ \citep[Cramér-Rao bound, see e.g.][]{Blahut_1987}:
\begin{equation}
\sigma_x^2 \equiv \int \, \big(\widehat{x}(\mathbf{r}) - x\big)^2 \;P(\mathbf{r}|x)\,d\mathbf{r}\;\geq \; \frac{1}{F_{\text{code}}(x)}
\label{eq:cramer_rao}
\end{equation}
Given that after category learning the Fisher information $F_{\text{code}}(x)$ is greater between categories, the Cramér-Rao bound implies that the variance in the estimation of an input $x$ is smaller between categories than within. This between category vs. within category difference means that near a category boundary, we expect that the variance should be lower in the direction of the boundary vs. parallel to it, where the change is within category, i.e., the variance will be anisotropic (see also Section~\ref{sec:coding_efficiency}).
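A quick Monte-Carlo illustration of the bound (again with a toy Gaussian encoding; all numbers are illustrative): when estimating $x$ from $N$ noisy responses $r_i \sim \mathcal{N}(x, \sigma^2)$, the sample mean is unbiased with variance $\sigma^2/N = 1/F_{\text{code}}$, so the Cramér-Rao bound is attained:

```python
import numpy as np

rng = np.random.default_rng(0)

# Estimate x from N noisy "neural" responses r_i ~ N(x, sigma^2).
# Here F_code(x) = N / sigma^2 and the sample mean attains the bound.
x_true, sigma, N, trials = 0.3, 0.5, 20, 20000
r = rng.normal(x_true, sigma, size=(trials, N))
x_hat = r.mean(axis=1)             # unbiased estimator of x_true
emp_var = x_hat.var()
cramer_rao = sigma**2 / N          # = 1 / F_code
print(0.9 * cramer_rao < emp_var < 1.1 * cramer_rao)
```

Greater Fisher information between categories thus translates directly into a smaller attainable estimation variance there.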
\subsection{Neuronal noise as virtual new inputs}
\label{sec:augmentation}
In biologically motivated models, neural noise is ubiquitous. Bayesian and information theoretic tools are well adapted to the study of such stochastic neural systems. In the simplest setting, neural spiking activity is described by a Poisson process. Hence, at each instant of time, in a feedforward architecture the input pattern from a layer to the next one has `missing data'. If we consider rates instead of spikes, one has an activity with a multiplicative noise, such as Poisson noise, very much in the same way as during a run of the dropout heuristic \citep{srivastava2014dropout}. Dropout is a widely used learning regularization technique consisting in perturbing the neural activity in any given layer with multiplicative Bernoulli noise, but other types of noise work just as well \citep[such as multiplicative Gaussian noise, also discussed in][]{srivastava2014dropout}. \citet{bouthillier2015dropout} propose an interesting interpretation of dropout: it serves as a kind of data augmentation \citep[see also][]{zhao2019equivalence}. In this work, the authors transform dropout noise into new samples: for a given input $x$, and a given perturbed neural activity $\mathbf{r}$, they compute the estimate $\widehat{x}$ that, in the absence of noise, would have produced an activity as close as possible to $\mathbf{r}$. In other words, presenting many times the same input to the noisy network is somewhat equivalent to presenting new inputs to a noiseless version of this network. This leads to an augmented training dataset, containing many more samples than the original dataset. They show that training a deterministic network on this augmented dataset leads to results on par with the dropout results. We have seen in the previous section \ref{sec:geometry} that, given the local geometry characterized by the Fisher information $F_{\text{code}}$, the variance in the estimation of an input is smaller between categories than within. 
Put in terms of data augmentation, regions that are within-category allow for more variability, whereas in cross-category regions the acceptable variability is much more constrained in order to avoid generating a new input with an incorrect label (\textit{i.e.} a virtual input crossing the decision boundary; see also \ref{sec:appendix_model}, section~\ref{sec:learning-cp} and Fig.~\ref{fig:posterior_estimation}).
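A minimal sketch of this virtual-input construction (the random encoder weights, sizes, noise level, and the grid-search inversion are all illustrative stand-ins for the trained network and for the optimization used by \citet{bouthillier2015dropout}):

```python
import numpy as np

rng = np.random.default_rng(1)

# A fixed random 1-hidden-layer encoder stands in for a trained network.
W = rng.normal(size=16)
b = rng.normal(size=16)
f = lambda x: np.tanh(np.multiply.outer(x, W) + b)   # noiseless hidden activity

# Invert by grid search: the virtual input is the x_hat whose noiseless
# activity best matches one noisy realization of the activity for x.
grid = np.linspace(-3.0, 3.0, 601)
grid_acts = f(grid)                                  # (601, 16)

def virtual_input(x, noise_std=0.5):
    r = f(x) * rng.normal(1.0, noise_std, size=16)   # multiplicative noise
    return grid[np.argmin(((grid_acts - r) ** 2).sum(axis=1))]

samples = np.array([virtual_input(0.4) for _ in range(300)])
print("spread of the virtual inputs:", samples.std())
```

Repeating this for stimuli along a continuum gives the per-stimulus variance of the virtual inputs used in the toy examples below.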
\subsection{A framework for the study of categorization in artificial neural networks}
\label{sec:framework}
In the following, we study the building of categorical information in artificial neural networks making numerical experiments with two main techniques guided by the above formal analysis.\\
First, we consider a protocol inspired by the categorical perception experiments mentioned above. We assess the discrimination ability of each layer of an artificial neural network along a continuum of stimuli by considering the distance in neural space between contiguous elements. More distant stimuli are easier to discriminate than closer ones. We make use of a neural distance (the cosine distance between activities, see Materials and Methods), chosen as a proxy for the Fisher information $F_{\text{code}}$. This distance similarly quantifies how much the neural activity changes on average with respect to small variations in the input $x$. However, contrary to Fisher information, it is straightforward to compute. Note that this quantity reflects sensitivity at the population level (a given layer in this study), and not at the single-neuron level. In parallel to looking at the neural distance between neighboring stimuli along the continuum, in the simplest toy examples, we also compute for each stimulus in the continuum the variance of the virtual inputs as defined in the previous section.\\
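The neural-distance measurement along a continuum can be sketched as follows (the activation matrix here is a made-up toy; in the experiments it holds a layer's activations for each morphed stimulus):

```python
import numpy as np

def cosine_distance(a, b):
    """Cosine distance between two activation vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def neural_distances(activations):
    """Distance between contiguous items along a continuum;
    `activations` has shape (n_items, n_units), one row per stimulus."""
    return np.array([cosine_distance(activations[i], activations[i + 1])
                     for i in range(len(activations) - 1)])

# Toy layer activations for 5 stimuli along a continuum (4 units).
acts = np.array([[1.0, 0.0, 0.0, 0.0],
                 [1.0, 0.1, 0.0, 0.0],
                 [0.5, 0.5, 0.0, 0.0],
                 [0.0, 1.0, 0.1, 0.0],
                 [0.0, 1.0, 0.0, 0.0]])
d = neural_distances(acts)
print(np.argmax(d))  # -> 2: the largest jump sits mid-continuum
```

A peak of this curve at the class boundary is the signature of categorical perception at the population level.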
Second, we consider the measure of a \textit{categoricality} index. Authors have proposed different measures of categoricality, either at the single neuron level \citep{kreiman2000category, freedman2001categorical}, or at the population level \citep{kriegeskorte2008matching, kreiman2000category}. The higher this index, the greater the intra-class similarity and inter-class dissimilarity. Here, we want to quantify categoricality at the population level. To do so, our choice of categoricality index consists in comparing the distributions of the distances in neural space, as given by the activations in each hidden layer, between items drawn from the same category vs. items drawn from two different categories (see Materials and Methods, paragraph ``Categoricality index'', for details). In what follows, this measure is computed from a held-out set different from the training set.
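A sketch of such an index follows (the Kolmogorov-Smirnov statistic between intra- and inter-class distance distributions and the cosine distance follow the Materials and Methods description; the clustered toy data and all sizes are illustrative):

```python
import numpy as np
from itertools import combinations

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def ks_statistic(s1, s2):
    """Two-sample Kolmogorov-Smirnov statistic: maximal gap between
    the two empirical CDFs (numpy-only implementation)."""
    s1, s2 = np.sort(s1), np.sort(s2)
    pts = np.concatenate([s1, s2])
    cdf1 = np.searchsorted(s1, pts, side="right") / len(s1)
    cdf2 = np.searchsorted(s2, pts, side="right") / len(s2)
    return np.max(np.abs(cdf1 - cdf2))

def categoricality(activations, labels):
    """Population-level index: KS statistic between intra-class and
    inter-class pairwise neural distances."""
    intra, inter = [], []
    for i, j in combinations(range(len(labels)), 2):
        d = cosine_distance(activations[i], activations[j])
        (intra if labels[i] == labels[j] else inter).append(d)
    return ks_statistic(np.array(intra), np.array(inter))

rng = np.random.default_rng(0)
labels = np.repeat([0, 1], 20)
# A categorical code (two tight, well-separated clusters) ...
cat = np.vstack([rng.normal([3, 0], 0.1, (20, 2)),
                 rng.normal([0, 3], 0.1, (20, 2))])
# ... versus a non-categorical one (a single diffuse cloud).
noncat = rng.normal(0, 1.0, (40, 2))
print(categoricality(cat, labels) > categoricality(noncat, labels))  # -> True
```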
\section{Results}
\label{sec:results}
We present the results of numerical experiments on classification tasks of increasing difficulty, working with a variety of datasets, from a simple one-dimensional example with two categories to a case that involves natural images with one thousand categories. We consider neural network architectures of complexity congruent with the ones of the tasks, allowing us to explore various conditions: multi-layer perceptrons and convolutional networks, a wide range of depths (from one hidden layer to ten or more), and learning with different types of multiplicative noise (Bernoulli noise as in dropout, or Gaussian noise as in Gaussian dropout).
\subsection{Experiments with toy examples}
\label{sec:toy}
\subsubsection{One dimensional example}
\label{sec:1d}
We consider a one dimensional input space with two overlapping Gaussian categories. The resulting ensemble of stimuli corresponds to a continuum interpolating from a stimulus well inside a category to a stimulus well inside the other category, with a category boundary in between. The neural network is a multi-layer perceptron with one hidden layer of $128$ cells, with sigmoid activation, subject to Gaussian dropout with rate $0.5$ -- i.e. multiplicative Gaussian noise with standard deviation $1.0$. First, in Figure~\ref{fig:gaussian1d}, left panel, we present the neural distance, after learning, between contiguous inputs that are evenly distributed in stimulus space. As expected (see Section \ref{sec:CP_mod}), the network exhibits categorical perception, with greater distance in neural space between categories than within. Second, we generate virtual inputs as explained in Section~\ref{sec:augmentation}: for each considered stimulus $x$, we compute inputs which would have produced, in the absence of noise, an activity as close as possible to the neural activity evoked in the hidden layer by $x$ (see the Materials and Methods section for details on this computation). We present in Figure~\ref{fig:gaussian1d}, right panel, the inverse of the variance of the generated data-points at each point along the $x$ continuum after learning. In compliance with our analysis presented in Sections~\ref{sec:geometry} and~\ref{sec:augmentation}, this quantity is greater at the boundary between categories, paralleling the behavior of the distance shown in the left panel.
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{fig/gaussian1d_mlp.pdf}
\caption{\textbf{One dimensional example with two Gaussian categories}, respectively centered in $x_{\mu_1} = -0.5$ and $x_{\mu_2} = +0.5$, with variance equal to $0.25$. For both panels, the dotted colored lines indicate the true posterior probabilities $P(\mu|x)$.
(Left) The dark solid line corresponds to the distance in the neural space between contiguous stimuli.
(Right) The dark solid line corresponds to one over the variance of the distribution of the estimated input $\widehat{x}(\mathbf{r})$ for $n=10000$ probabilistic realizations of the neural activity $\mathbf{r}$ given an input $x$. }
\label{fig:gaussian1d}
\end{figure}
\subsubsection{Two dimensional example}
\label{sec:2d}
Similarly, we consider a two dimensional example with two Gaussian categories. The neural network is a multi-layer perceptron with one hidden layer of $128$ cells, with ReLU activation, subject to dropout with rate $0.2$ (proportion of the input units to drop, i.e. multiplicative noise drawn from a Bernoulli distribution with $p=0.2$). As in the 1d case, we generate virtual inputs, here for a set of stimuli tiling the 2d input space. In Figure~\ref{fig:gaussian2d}, we see how the individual estimates (the generated virtual inputs) $\widehat{\mathbf{x}}(\mathbf{r})$ are distributed around each input $\mathbf{x}$. As expected, we observe that over the course of learning, the variance of these estimates for a given $\mathbf{x}$ gets lower near the boundary between the two categories, and that it is not isotropic: it is larger in the direction that is safe from crossing the boundary, and much smaller in the direction that is orthogonal to the decision boundary.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig/gaussian2d_mlp_dynamics.pdf}
\caption[]{\textbf{Two dimensional example with two Gaussian categories}, respectively centered in $\mathbf{x}_{\mu_1} = \begin{psmallmatrix}-0.5\\-0.5\end{psmallmatrix}$ and $\mathbf{x}_{\mu_2} = \begin{psmallmatrix}0.5\\0.5\end{psmallmatrix}$ , with covariance matrix
$\mathbf{\Sigma} = \begin{psmallmatrix}0.25 & 0\\ 0 & 0.25\end{psmallmatrix}$.
This example shows how the representation changes over the course of learning. We present snapshots taken at three points during learning: early (after 1 epoch), intermediate (after 4 epochs) and late (after 20 epochs).
Individual training examples drawn from these multivariate Gaussian distributions are indicated as downward blue triangles (category 1) and upward red triangles (category 2). The background color indicates the true posterior probabilities $P(\mu|\mathbf{x})$, from blue (category 1) to red (category 2) through white (region between categories). The largest dark dots correspond to a $5\times5$ grid of stimuli tiling the input space between -1.0 and 1.0 in both dimensions. For each one of these inputs, we computed the estimates $\widehat{\mathbf{x}}(\mathbf{r})$ for $n=100$ probabilistic realizations of the neural activity $\mathbf{r}$. We represent these estimates as smaller gray dots, enclosed for each input by an ellipse that represents the $2\sigma$ confidence ellipse.}
\label{fig:gaussian2d}
\end{figure}
\subsection{Experiments with handwritten digits}
\label{sec:mnist}
In this section we move beyond the two simple previous examples to look at the MNIST dataset \citep{lecun1998gradient}. It is a database of handwritten digits that is commonly used in machine learning, with a training set of 60,000 images and a test set of 10,000 images. Here the goal is to look at categorical perception effects in neural networks that learn to classify these digits.
\subsubsection{Creation of an image continuum}
\label{sec:mnist_continuum}
We want to build sequences of images that smoothly interpolate between two items of different categories. Directly mixing two stimuli in input space cannot work, as it would just superimpose the two original images. In cognitive experiments, one builds such artificial stimuli so that each new stimulus remains perceptually plausible, as in the cases mentioned in Section \ref{sec:CP}. One may think of various methods to generate sequences of images. In the present work, we want to obtain sequences that somehow remain in the space generated by the database itself. Taking inspiration from the work of \citet{bengio2013better} to create an image continuum, we used an autoencoder trained to reproduce single digits from the MNIST training set (see Materials and Methods for algorithmic details). The autoencoder consists of an encoder and a decoder. The first part learns to build a compressed representation of the input. The second part learns to reconstruct the original input from this compressed representation. This representation projects the data onto a lower dimensional space that meaningfully represents the structure of the data. By interpolating between two stimuli in this space, then reconstructing the resulting image thanks to the decoder, we obtain a continuum that nicely morphs between two stimuli, along the nonlinear manifold represented by the data.\\
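The interpolation step can be sketched independently of the autoencoder's training. Below, a PCA projection plays the role of the encoder/decoder pair (a linear stand-in; the actual continua use the trained nonlinear autoencoder, and the ``images'' here are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for the trained autoencoder: PCA provides a
# linear encoder (project on the top components) and decoder (project
# back). Only the interpolation logic is illustrated here.
data = rng.normal(size=(500, 64))          # placeholder "images"
mean = data.mean(axis=0)
U, S, Vt = np.linalg.svd(data - mean, full_matrices=False)
k = 8
encode = lambda img: (img - mean) @ Vt[:k].T
decode = lambda z: z @ Vt[:k] + mean

def continuum(img_a, img_b, steps=16):
    """Interpolate in code space, then decode each point."""
    za, zb = encode(img_a), encode(img_b)
    ts = np.linspace(0.0, 1.0, steps)
    return np.array([decode((1 - t) * za + t * zb) for t in ts])

seq = continuum(data[0], data[1])
print(seq.shape)  # -> (16, 64)
```

With the nonlinear autoencoder, the decoded sequence stays close to the manifold of plausible digits rather than superimposing the endpoints.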
We provide in Figure~\ref{fig:mnist_continua} several examples of such generated continua, each one interpolating between two digits from the MNIST test set.
\begin{figure}
\centering
\includegraphics[width=0.99\linewidth]{fig/mnist_continua_examples.pdf}\\
\caption{\textbf{Examples of image continua}, smoothly interpolating between pairs of digits taken from the MNIST test set.}
\label{fig:mnist_continua}
\end{figure}
\subsubsection{Peak in discrimination at the boundary between categories}
\label{sec:mnist_cp}
In this section we consider a multi-layer perceptron with two hidden layers of 256 cells, and look at the changes in representation before and after learning on the whole MNIST training set. We investigate the behavior of the network with respect to the 4--9 continuum presented on top of Fig.~\ref{fig:mnist_continua}. The 4/9 classes are among the most confused ones, hence of particular interest for the present study which focuses on cases where the stimuli can be ambiguous. We summarize the results in Figure~\ref{fig:mnist_cp}, for each of the two hidden layers, before and after learning. Before training (left panel), the representations of the $4$ and $9$ digits are not well separated. If one looks at the neural distance between items along the 4--9 continuum (bottom row), we see that it is rather flat. Conversely, after training (right panel), the two categories are now well separated in neural space. The neural distance between items along the specific 4--9 continuum presents a clear peak at the decision boundary, thus exhibiting categorical perception. Again, this is predicted by the analysis presented in Section~\ref{sec:CP_mod}: as the category uncertainty increases, the neural distance increases. We can also already notice that the deeper hidden layer exhibits a more categorical representation.
\begin{figure}
\centering
\includegraphics[width=0.48\linewidth]{fig/mnist_4_9_example_before_learning}
\hfill
\includegraphics[width=0.48\linewidth]{fig/mnist_4_9_example_after_learning}
\caption{\textbf{Changes in the neural representation following learning of categories}: example on a `4' to `9' continuum, using the MNIST dataset. The neural network is a multi-layer perceptron with two hidden layers of 256 cells. (Left) Representation before learning. (Right) Representation after learning. (Top row) Two-dimensional PCA projections based on the activations of the hidden layers on the test set. Items from category `4' are colored in blue, while items from category `9' are colored in red. The rest of the test set is represented in gray. For a better visualization, only one every four data points is shown. One specific `4' to `9' continuum, connecting two items from the test set is represented in black. (Middle row) Same, zoomed in on the `4' to `9' continuum. (Bottom row) The dotted colored lines indicate the posterior probabilities, as found by the network, of category `4' (blue) or `9' (red) along the continuum. The dark solid line indicates the neural distance between adjacent items along the continuum. The scale along the y-axis is shared across conditions.}
\label{fig:mnist_cp}
\end{figure}
\subsubsection{Gradient categorical perception as a function of depth}
\label{sec:mnist_depth}
After having studied the properties along a single continuum, we now make a statistical analysis by looking at the pattern of discriminability along an ensemble of many similar continua that interpolate between pairs of stimuli from different categories (see Materials and Methods). We present in Fig.~\ref{fig:mnist_continua} a few examples of such continua. In order to look at the effect of depth, we consider here a multi-layer perceptron with three hidden layers, trained on the whole MNIST training set. For the particular examples shown in Fig.~\ref{fig:mnist_continua}, we plot in Fig.~\ref{fig:mnist_continua_examples_input_distance_pred} the labeling response provided by this network after learning, together with the distance in input space between adjacent items along each one of these continua. For each continuum in our large ensemble, we computed the neural distance between neighboring stimuli along the continuum, for each hidden layer. In order to compute averages over the set of continua, we aligned all these curves by centering them around the point where the two posterior probabilities cross. We present the results in Figure~\ref{fig:mnist_cp_depth}. We first observe that all layers exhibit categorical perception: space is dilated at the boundary between categories and compressed within a category. Moreover, we see that the deeper the layer the more pronounced the effect.
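The alignment-and-averaging step can be sketched as follows (the boundary is located where the posterior passes 0.5; the toy curves, window width, and sizes are illustrative):

```python
import numpy as np

def align_and_average(distance_curves, posteriors, width=5):
    """Center each neural-distance curve on the item where the class
    posterior crosses 0.5, then average the aligned windows."""
    aligned = []
    for d, p in zip(distance_curves, posteriors):
        cross = int(np.argmin(np.abs(p - 0.5)))      # boundary index
        lo, hi = cross - width, cross + width
        if lo >= 0 and hi <= len(d):                 # keep fully covered windows
            aligned.append(d[lo:hi])
    return np.mean(aligned, axis=0)

# Toy data: 3 continua of 20 items whose distance curve peaks at a
# (differently located) boundary.
curves, posts = [], []
for shift in (8, 10, 12):
    idx = np.arange(19)                              # 19 adjacent pairs
    curves.append(np.exp(-0.5 * ((idx - shift) / 2.0) ** 2))
    posts.append(1.0 / (1.0 + np.exp(-(np.arange(20) - shift))))
avg = align_and_average(curves, posts)
print(np.argmax(avg))  # -> 5, the center of the aligned window
```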
\begin{figure}
\centering
\includegraphics[width=0.5\linewidth,valign=T]{fig/mnist_mlp_neuraldistance_layers}
\caption{\textbf{Gradual categorical perception across layers}: the deeper the layer, the more pronounced the categorical perception effect. The neural network is a multi-layer perceptron with three hidden layers of 256 cells, trained on the whole MNIST dataset. The dotted colored lines indicate the mean posterior probabilities from the network.
For each hidden layer, the solid line corresponds to the mean neural distance between adjacent items, averaged over several continua, and aligned with the boundary between the two classes (error bars indicate 95\% confidence intervals, estimated by bootstrap).}
\label{fig:mnist_cp_depth}
\end{figure}
\subsubsection{Categoricality as a function of depth and layer type}
\label{sec:mnist_categoricality}
We now turn to a second method for characterizing how much the neural code is specific to the categorization task, making use of the categoricality index mentioned in Section \ref{sec:framework}. We recall that this index quantifies the degree of relative intra-class compression vs. inter-class expansion of the neural representation provided by a given layer (see Materials and Methods). A reasonable expectation is that after learning, categoricality increases with layer depth, since the input is not categorical, and the decision layer is constrained to be categorical. The issue now is to characterize how categoricality changes as a function of depth.\\
In Figure~\ref{fig:mnist_categoricality} we compare the categoricality index before (broken line) and after (solid line) learning on the MNIST training set for two types of neural network: on the left (a), a multi-layer perceptron with three hidden layers, and on the right (b), a convolutional neural network with seven hidden layers (see Materials and Methods for full details). Let us first consider the multi-layer perceptron. The results in Fig.~\ref{fig:mnist_categoricality}a echo those presented in Fig.~\ref{fig:mnist_cp_depth}: all layers present categorical effects, and, in line with recent findings \citep{alain2016understanding, mehrer2020individual}, categoricality increases with depth. Let us now turn to the convolutional neural network. The first convolutional layer does not have a larger categoricality index than an untrained network, meaning that the features it has learned are quite general and not yet specialized for the classification task at hand. The categoricality then increases with depth, with the last hidden layer, a dense layer, presenting the largest value. Finally, comparing the two figures, we can see that the dense layers in the fully-connected multi-layer perceptron on the left have a larger categoricality index than the convolutional layers, even for the first layer.
\begin{figure}
\centering
\textbf{a.}
\includegraphics[width=0.44\linewidth,valign=T]{fig/mnist_mlp_categoricality}
\hfill
\textbf{b.}
\includegraphics[width=0.44\linewidth,valign=T]{fig/mnist_cnn_categoricality}
\caption{\textbf{Categoricality as a function of layer depth, using the MNIST dataset}. Categoricality is a measure that quantifies the distance between items drawn from the same category vs. different categories, thanks to the Kolmogorov-Smirnov statistic between the intra- and inter-category distributions of neural distances. Solid lines correspond to trained networks, broken lines to untrained ones (error bars indicate 95\% confidence intervals, estimated by bootstrap). (a) The neural network is a multi-layer perceptron with three hidden layers.
(b) The neural network is a convolutional neural network whose layer structure is described in the x-axis of the figure.}
\label{fig:mnist_categoricality}
\end{figure}
\subsection{Experiments with natural images}
\label{sec:naturalimage}
We now go one step further in task and network complexity by considering natural image databases and deeper networks with ten or more hidden layers.
\subsubsection{Categorical perception of a cat/dog continuum}
\label{sec:cat_dog}
In this section, we consider a deep convolutional neural network trained to classify natural images of cats and dogs. We investigate its behavior with respect to a continuum that interpolates between different cat/dog categories. Let us first introduce the neural network, the database used for training and finally the continua that are considered (see Materials and Methods for details). The neural network is a convolutional network with three blocks of two convolutional layers and a max pooling layer, followed by a global average pooling layer. We used Gaussian dropout during learning. We trained each network instance on the Kaggle Dogs vs. Cats database, which contains 25,000 images of dogs and cats, with a final classification performance on the test set of about 95\%. In order to assess the changes in representation induced by learning for the different layers in the neural network, we considered continua that either interpolate between items from the two categories, in which case we expect categorical perception to emerge, or between items from the same category, as a control. The continua that we consider cycle from one dog back to the same dog, going from one dog to a cat, to another cat, to another dog, and finally back to the first dog. Each sub-continuum is made of 8 images, thus totaling 28 images for a given full continuum. We considered two such continua, using the same cat/dog categories but considering different viewpoints (close up vs. full body -- see the x-axes of Fig.~\ref{fig:cat_dog}a and b for the morphed images that were used as input). The different cat/dog, cat/cat or dog/dog morphed continua were generated thanks to the code and pretrained model provided by \citet{miyato2018spectral} that uses a method based on Generative Adversarial Networks \citep{goodfellow2014generative}. Note that these continua on which the neural networks are tested have been generated by a network trained on a completely different database.\\
We present the results in Figure~\ref{fig:cat_dog}. First, one can see that the network correctly categorizes the different cat and dog images (see the dotted blue and red lines). The last hidden layer, right before the final decision, exhibits a strong categorical perception effect (see the darkest line): only items that straddle the categories can be discriminated. In contrast, the first convolutional layers do not exhibit categorical perception: there is no clear peak of discrimination between categories. Instead, differences between contiguous images appear to mainly reflect differences in input space (for comparison, see Appendix \ref{sec:appendix_catdog_pixel}, Fig.~\ref{fig:cat_dog_input} for a picture of the distances in input space). For instance, the peak difference in input space and for these first convolutional layers is driven by the tongue sticking out of the mouth, which does not affect the more categorical upper layers. Finally, the last convolutional layers exhibit intermediate behavior, with an ability to discriminate between within-category stimuli, but with a clear peak at the cat/dog boundary, thus displaying categorical perception.
\begin{figure}
\textbf{a.}
\includegraphics[width=.95\linewidth,valign=T]{fig/cat_dog_neuraldistance_a.pdf}
\vspace{0.1cm}\\
\textbf{b.}
\includegraphics[width=.95\linewidth,valign=T]{fig/cat_dog_neuraldistance_b.pdf}
\caption{\textbf{Categorical perception of a cat/dog circular continuum}. Experiment with continua interpolating between cats and dogs, with two different viewpoints: (a) close up on the face, and (b) full body.
The interpolations involve two types of dogs and two types of cats. Each continuum corresponds to a circular interpolation with four sub-continua: from the first dog to a cat, then to the other cat, then to the second dog, and finally back to the first dog. The neural network is a convolutional neural network with three blocks of two convolutional layers and a max pooling layer, finally followed by a global average pooling layer. The blue and red dashed lines indicate the posterior probabilities from the network (blue is dog, red is cat). The colored solid lines correspond to the neural distance between adjacent items along the continuum, the darker the line the deeper the layer. Only the last convolutional layer of each block and the global average pooling layer are shown. The colored dotted lines are the counterparts for the same networks before learning. Error bars indicate 95\% confidence intervals, estimated by bootstrap. The thin dotted vertical lines indicate the start and end points of each sub-continuum.}
\label{fig:cat_dog}
\end{figure}
\subsubsection{Categoricality in deep networks}
\label{sec:imagenet}
In this section, we work with the ImageNet dataset \citep{deng2009imagenet}, and more precisely with the subset of images used in the ILSVRC-2010 challenge \citep{ILSVRC15}. The network that we consider is the VGG16 model described in \citet{simonyan2014very}, which won the ImageNet Challenge 2014. This model is characterized by 16 weight layers, an architecture considered very deep at the time it was published. Here, we compare the categoricality index on randomly initialized networks with the exact same architecture, and on a network that has been pretrained on the ImageNet database (as provided by the keras package\footnote{\url{https://keras.io/api/applications/vgg/}}) (see Materials and Methods for details).\\
We show in Figure~\ref{fig:imagenet_categoricality} the results of this comparison. As expected, for an untrained network the categoricality index is flat across layers: the neuronal layers do not show any preferential knowledge of the categories. For a trained network, as seen in the MNIST section above, categoricality increases as a function of depth. For the first convolutional layers, this index is essentially no different from that of an untrained network. Intermediate convolutional layers do exhibit some effects of category learning, while for the last convolutional layers the categoricality is much more marked. Finally, the last two dense layers exhibit the greatest categoricality.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{fig/imagenet_vgg16_categoricality.pdf}
\caption{\textbf{Categoricality as a function of layer depth, using the ImageNet dataset}. The neural network is the VGG16 model \citep{simonyan2014very}. The dashed line corresponds to a network with random initialization of the weights, whereas the solid line corresponds to a network pretrained on ImageNet (error bars indicate 95\% confidence intervals, estimated by bootstrap).}
\label{fig:imagenet_categoricality}
\end{figure}
\section{Discussion}
\label{sec:discussion}
\subsection{A new view of dropout}
\label{sec:dropout}
In the present discussion we show how our formal and numerical analyses of categoricality in multi-layer networks help elucidate various empirical observations and heuristic practices in the use of the dropout technique. As we have seen, the effect of learning categories leads to neural representations with a distorted geometry: a finer resolution is obtained near a class boundary, the Fisher information of the neural code becoming greater near the boundary between categories than within categories. Experimentally, we showed that after learning, the neural distance between neighboring stimuli is indeed greater in these boundary regions compared to regions well within a category. As discussed, this categorical effect builds up gradually in the course of learning, becoming stronger for deeper layers, and stronger for dense layers compared to convolutional ones. These geometrical properties have the consequence of controlling the impact of neuronal noise as a function of the probability of misclassification (see Section~\ref{sec:CP_mod} and \ref{sec:appendix_model}). Thus the level of acceptable noise depends on how much the neural geometry in the considered layer is adapted to the category geometry. It is thus different at different learning stages, and different depending on layer depth and layer type. Dropout precisely injects noise in each layer, opening the possibility of a noise level adapted locally with respect to the current (in time and depth) congruence of the neural representation with the category geometry. We speculate here that this very possibility of having the noise (dropout) level adapted to the current neural geometry induces a positive interaction between noise and learning.\\
\citet{bouthillier2015dropout} propose that dropout is equivalent to a data augmentation technique, in that injecting such noise would be equivalent to processing additional stimuli (inputs). As discussed in Section~\ref{sec:augmentation}, this data augmentation is not uniform with respect to stimulus space: the generated stimuli, following the method proposed by \citet{bouthillier2015dropout}, do not distribute uniformly over the input space (see Fig.~\ref{fig:gaussian1d} and Fig.~\ref{fig:gaussian2d}). Their variability is adapted to the structure of the categories, being greater within a category than between categories, where the risk of generating a new stimulus with a wrong label is highest. Hence, if one considers dropout as a data augmentation technique~\citep{bouthillier2015dropout,zhao2019equivalence}, injecting noise in the hidden layers is likely to yield better results than techniques that only consider input noise, as this latter type of noise might be too weak within a category and too strong between categories.\\
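To make the multiplicative nature of this injected noise concrete, here is a minimal NumPy sketch of Gaussian dropout (the variance convention $\sigma^2 = p/(1-p)$ follows the standard Bernoulli-equivalent rate matching; the activation vector is a hypothetical illustration, not taken from our experiments):

```python
import numpy as np

rng = np.random.default_rng(1)

def gaussian_dropout(h, rate):
    """Gaussian dropout: multiply each activation by noise drawn from
    N(1, rate / (1 - rate)), the variance conventionally matched to
    Bernoulli dropout with drop probability `rate`."""
    sigma = np.sqrt(rate / (1.0 - rate))
    return h * rng.normal(1.0, sigma, size=np.shape(h))

# The perturbation is multiplicative: its absolute magnitude follows the
# geometry of the learned representation.  A weakly activated direction
# (e.g. one compressed within a category) receives little absolute noise,
# while a strongly activated direction receives much more.
h = np.array([0.05, 2.0])                     # hypothetical hidden activations
samples = np.stack([gaussian_dropout(h, 0.2) for _ in range(5000)])
spread = samples.std(axis=0)                  # absolute noise per coordinate
```

With rate $0.2$ the per-coordinate standard deviation is $0.5\,|h_i|$, so the strongly activated coordinate is perturbed roughly forty times more, in absolute terms, than the weak one.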
From this viewpoint, we expect that it should be beneficial to inject more dropout when and where more noise can be tolerated -- hence at later stages of learning, in deeper layers, and in dense layers vs. convolutional layers. If all this is correct, we then have a coherent picture: there is a positive interaction between categoricality and dropout. More categoricality allows for more noise, allowing the network to better benefit from the data augmentation, which in turn helps increase categoricality, inducing a greater separation between categories. We now show that the widespread practices in the use of dropout, together with supplementary numerical simulations, support such a conclusion. Although more work is needed to quantitatively investigate the effectiveness of dropout, it is reasonable to assume that the most common practices reflect intuitions and trial-and-error explorations leading to the selection of beneficial heuristics.
\paragraph{More dropout in deeper layers.}
A widespread practice is to use a greater dropout level for the hidden layers compared to the input layer (see \citealp{Goodfellow-et-al-2016}\footnote{``Typically, an input unit is included with probability 0.8, and a hidden unit is included with probability 0.5.''}). In their original dropout article, \citet{srivastava2014dropout} also use three levels of dropout rate, with noise increasing with layer depth\footnote{``Dropout was applied to all the layers of the network with the probability of retaining a hidden unit being p = (0.9, 0.75, 0.75, 0.5, 0.5, 0.5) for the different layers of the network (going from input to convolutional layers to fully connected layers).''}. Yet, to our knowledge, there is no systematic study of the optimal dropout level as a function of layer depth. In Appendix \ref{sec:appendix_dropout_depth}, we provide a first numerical investigation giving evidence that the optimal dropout rate increases with layer depth. We trained a multi-layer perceptron on the CIFAR-10 image dataset \citep{krizhevsky2009learning}, with varying dropout rates after the input layer and each of the two hidden layers. Fig.~\ref{fig:cifar10_dropout} presents the results. First, one can see that the deeper the layer the more robust it is to large amounts of noise, and, second, that the best dropout value for each layer increases with depth.
\paragraph{Categoricality increases without dropout, but dropout helps.}
\citet{mehrer2020individual} have found that an increase of the dropout probability yields greater categoricality (see their Fig. 8c). Our simulations in Section \ref{sec:mnist_categoricality} made use of a dropout rate that increases with layer depth. One may then ask whether this use of dropout is not actually the main source of the difference in categoricality between layers.
We thus performed a control experiment on the MNIST dataset reproducing the results of Fig.~\ref{fig:mnist_categoricality}, Section \ref{sec:mnist_categoricality}, but without the use of dropout. The results are presented in Appendix \ref{sec:appendix_dropout_mnist}, Fig.~\ref{fig:mnist_categoricality_nodropout}. We find that categoricality does build up in the absence of dropout, and that after learning the categoricality index does increase as a function of depth. Yet, dropout helps increase the separation between categories: the slope of the increase of the categoricality index when learning with (Bernoulli or Gaussian) dropout is slightly larger than without the use of noise.
\paragraph{Dense vs.~convolutional layers.}
As we have shown in Sections \ref{sec:mnist_categoricality} and \ref{sec:imagenet}, dense layers exhibit more categoricality than their convolutional counterparts, and convolutional layers exhibit very little categoricality when close to the input layer, whereas the deepest ones do exhibit some categoricality. In agreement with our expectations, the dropout level is usually chosen to be stronger for dense layers, convolutional layers closest to the input layer receive a small amount of dropout or no dropout at all, and a gain of performance is obtained thanks to dropout in the deepest convolutional layers (\citealp{srivastava2014dropout}; see also \citealp{park2016analysis,spilsbury2019don}).
\paragraph{Increasing the dropout rate during learning is preferable.}
Several works have considered adapting the dropout rate during the course of learning. Relying on the simulated annealing metaphor, \citet{rennie2014annealed} suggest decreasing the dropout rate over the course of training, starting from a high initial value. The intuition is that high noise at first makes it possible to explore the space of solutions, avoiding local minima, with a last phase of fine-tuning without noise. Conversely, taking inspiration from curriculum learning~\citep{bengio2009curriculum}, \citet{morerio2017curriculum} propose the exact opposite: to start learning with low noise, and then increase the difficulty by raising the amount of noise. As discussed, a network needs to have already partly learned the structure of the categories in order to allow for the use of a high level of dropout. We thus expect curriculum dropout to have better performance than the original dropout (that is, with a fixed value of the dropout rate over the course of training), and better than annealed dropout: this is precisely what is found in the experiments by \citet{morerio2017curriculum} on a variety of image classification datasets \citep[see also][]{spilsbury2019don}.
\paragraph{Nonlinear interaction with the size of the dataset.}
Finally, from the interaction that we describe between learning and dropout, we also expect an interaction with the size of the dataset. Obviously, with a very large number of training samples, there is no need for a regularization technique such as dropout, as there is no worry about overfitting. Less intuitive is the expectation here that if the number of training samples is too small, since the structure of the categories cannot be sufficiently learned, the use of dropout will be detrimental to the performance. In between, we expect the introduction of noise to be beneficial to the generalization ability of the neural network. All in all, this is exactly what is found by \citet[][see Fig.~10]{srivastava2014dropout}.
\subsection{Relevance to the fields of psychology and neuroscience}
\label{sec:neuro}
\subsubsection{An historical debate on categorical perception}
Historically, categorical perception was first presented as an all-or-nothing phenomenon according to which subjects can discriminate between stimuli only through their phonetic labels -- or more precisely through their class identification probability \citep[following the view of the classical Haskins model; see][]{Liberman_etal_1957}. As a result, discrimination was thought to be almost zero within category. In our setting, it would be as if discrimination was only made possible through the use of the very last layer of the neural network, the one corresponding to the decision process with the softmax function. However, from the very beginning, this idealized form of categorical perception has never been met experimentally (see, e.g., \citealp{Liberman_etal_1957} or \citealp{liberman1961effect}; see \citealp{lane1965motor, Repp_1984} for reviews): observed discrimination is always above the one predicted from the identification function. Some authors rather talk about \textit{phoneme boundary effect} in order to distinguish it from the original categorical perception proposal \citep{wood1976discriminability, iverson2000perceptual}. As this phenomenon is not limited to speech, we keep here the term \textit{categorical perception}, simply characterized as a better ability to perceive differences between stimuli in the cross-category regions than within a category.\\
Our framework makes it possible to better understand this phenomenon. As we have seen, category learning does not only affect the last decisional layer, but also the upstream coding layers, so as to better represent categories and robustly cope with noise. For each hidden layer, we observe a warping of the neural space within categories and an expansion between categories. Thus, without the need to actually compute labels, and assuming the coding layers are reused during discrimination experiments, we expect to see categorical perception effects during discrimination tasks, but possibly with above-chance discrimination within category, as indeed found experimentally. This effect is not due to labeling, but is a consequence of the optimization of the coding layers upstream of the final categorization process.\\
As we have seen, a deep network exhibits a gradient of categorical perception, from the first layers showing almost no trace of categoricality to the last layers showing an important distortion of the space according to the structure of the categories. Where exactly the basis for our conscious perception lies remains an important research question.
\subsubsection{Results in neuroscience and imagery}
Neurophysiological studies (\citealp{xin2019sensory}; \citealp[][see their Fig. 6 panel H]{okazawa2021thegeometry}) have recently shown that category learning leads to neural representations with a geometry in agreement with our prediction of expansion/contraction along a direction quantifying the degree of ambiguity of the stimulus category (see \citealp{LBG_JPN_2008}, and Section~\ref{sec:CP_mod}).\\
In the present work, we notably study categorical representations by looking at the neural distance between stimuli that are equally spaced in stimulus space. Previous experimental works have also compared stimuli in neural space by looking at the distance between activities. Using high-density intracranial recordings in the human posterior superior temporal gyrus, \citet{chang2010categorical} have found that this region responds categorically to a /ba/--/da/--/ga/ continuum (as used in the original \citealp{Liberman_etal_1957} study): stimuli from the same category indeed yield more similar neural patterns than stimuli that cross categories. Still in the domain of speech perception, using event-related brain potentials, \citet{bidelman2013tracing} followed a similar approach in comparing neural activities in response to a vowel continuum. They found that the brainstem encodes stimuli in a continuous way, with changes in activity mirroring changes in the sounds, contrary to late cortical activity, which is shaped according to the categories. This is in line with the view of gradient effects of categorical perception as a function of depth in the processing stream: input layers show almost no effect of categories, whereas the last layers are strongly shaped by them.\\
The monkey study by \citet{freedman2003comparison} has shown a distinct behavior of the prefrontal and inferior temporal cortices during a visual categorization task. The visual processing stream goes from sensory areas to the prefrontal cortex (PFC) through the inferior temporal cortex (ITC). In order to assess the categorical nature of the activity of each region, \citet{freedman2003comparison} introduced an index similar to the one we use in our study, comparing the responses to within- vs. between-category pairs of stimuli, but at the level of an individual neuron. In agreement with the picture proposed here, the authors have found that both regions show significant effects of category learning, with larger differences in the neural responses for pairs of stimuli drawn from different categories, and that the PFC neurons show stronger category effects than the ITC neurons. Note though that the categoricality of the ITC is actually underestimated due to the use of a measure at the single-neuron level. A subsequent study by the same team has indeed shown that the ITC contains more categorical information when analyzed at the population level \citep[see][]{meyers2008dynamic}. Similarly, using both electrode recordings in monkeys and fMRI data in humans, \citet{kriegeskorte2008matching} have shown that the ITC exhibits a categorical representation, contrary to the early visual cortex. Interestingly enough, the categorical information in ITC can only be seen at the population level. All in all, it is once again found that the visual stream is organized along a path that goes from being not categorical (early visual areas), to being categorical at the population level while retaining information about within-category individual examples (inferior temporal cortex), to a more fully categorical representation (prefrontal cortex), in agreement with our findings.
From the work presented here, we expect an even wider range of such gradient effects of categorical perception to be found experimentally, either with neurophysiology or imagery.
\section{Conclusion}
Studying categorical perception in biological and artificial neural networks can foster a fruitful dialogue between cognitive science and machine learning. Here, we have shown that artificial neural networks that learn a classification task exhibit an enlarged representation near the boundary between categories, with a peak in discrimination, \textit{i.e.} categorical perception. Our analysis is based on a mathematical understanding and empirical investigations of the geometry of neural representations optimized in view of a classification task, with contraction of space far from category boundaries, and expansion near boundaries. Our results also find counterparts in the literature of neurophysiology and imagery, with low-level generic regions feeding high-level task-specific regions. Our work further suggests a strong gradient of categorical effects along the processing stream, which will be interesting to investigate experimentally.\\
Considering various properties and practical uses of dropout, we have shown that our framework allows us to propose a coherent picture of what makes dropout beneficial. We argued that dropout has a differential impact as a function of the neural representation that has been learned so far, implying that its effect is not the same before, during, and after learning, and depends on the layer type (dense vs. convolutional) and depth. Further work is needed to quantify more finely how the amount of noise should depend on the level of representation. We believe that our viewpoint should help devise better dropout protocols, but also new regularization techniques with other types of perturbation than dropout.\\
Another interesting perspective is in the domain of transfer learning, where a neural network trained for a specific task is reused for another task. The measure of categoricality of each layer, which is specific to the classification task at hand, thus gives a measure of the degree (or lack) of genericity of the layer. We expect that this quantity, along with a measure of the overlap between the old and the new tasks, can be used to decide where to cut a neural network for reuse, with the lower part left untouched and the deeper part fine-tuned or retrained from scratch on the new task.\\
To conclude, our work insists on the geometry of internal representations shaped by learning categories, and on the resulting positive impact of noise as a way to learn more robustly. The influence of noise is actually structured by learning, which implies an interaction between these two aspects.
\section*{Materials and Methods}
\paragraph*{Neural distance.}
For a given stimulus $\mathbf{x}$, let us denote by $f(\mathbf{x})$ the $N$-dimensional deterministic function computed by the network in the absence of noise (for a given layer with $N$ neurons). The neural distance $D_{\text{neural}}(\mathbf{x}_1, \mathbf{x}_2)$ between two stimuli $\mathbf{x}_1$ and $\mathbf{x}_2$ is then defined, at the population level, as the cosine distance between $f(\mathbf{x}_1)$ and $f(\mathbf{x}_2)$. The cosine distance is equal to 1 minus the cosine similarity, which is the dot product of the two vectors normalized by the product of their norms:
\begin{equation}
D_{\text{neural}}(\mathbf{x}_1, \mathbf{x}_2) = 1 - \frac{f(\mathbf{x}_1) \cdot f(\mathbf{x}_2)}{\lVert f(\mathbf{x}_1)\rVert \,\lVert f(\mathbf{x}_2)\rVert}
\end{equation}
Note that this measure is not mathematically a distance metric as it does not satisfy the triangular inequality. We nevertheless improperly call this dissimilarity a distance, following a common abuse of language.\\
This cosine distance was chosen so as to easily compare layers with different numbers of neurons: some normalization was needed, which the cosine distance provides. For normalized vectors, the cosine distance is directly related to the Euclidean distance. In the spirit of Fisher information, the neural distance retains the idea that, as one moves from $x$ to $x+\delta x$, the greater the distance between the activations evoked by these two stimuli, the greater the distance in neural space. It would be interesting to consider other measures, notably ones that take the absolute amplitude of the neural responses into account.
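As a minimal illustration, the neural distance reduces to a few lines of NumPy (the arguments stand for the activation vectors $f(\mathbf{x}_1)$ and $f(\mathbf{x}_2)$ of the layer under consideration):

```python
import numpy as np

def neural_distance(a, b):
    """Cosine distance between two activation vectors:
    1 - (a . b) / (||a|| ||b||)."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Identical patterns are at distance 0, orthogonal ones at distance 1,
# opposite ones at distance 2 -- the measure is insensitive to the
# overall amplitude of the responses.
```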
\paragraph*{Categoricality index.}
The categoricality index quantifies the degree of relative intra-class compression vs. inter-class expansion of the representation provided by a given layer. It measures the distance between (i) the distribution of the neural distances of items that belong to the same category, and (ii) the distribution of the neural distances of items drawn from different categories. Technically, we compute these distributions from random samples taken either from the same categories or from different categories, and we take as distance between the two distributions the two-sample Kolmogorov-Smirnov statistic \citep[using the Python implementation provided by the SciPy package,][]{scipy}. Other distance measures could be considered as well, such as those previously proposed in the literature; we expect that they would yield overall qualitatively similar results. For instance, \citet{mehrer2020individual} recently considered a clustering index defined as the ``normalized difference in average distances for stimulus pairs from different categories (across) and stimulus pairs from the same category (within): CCI = (across - within)/(across + within)''. Yet, while it similarly quantifies the degree of intra-class compression vs. inter-class separation, we believe our measure gives a better account of the categorical nature of the neural representation by considering not just the average values of the `across' and `within' distances between pairs but their full distributions. Imagine two cases where the average quantities `across' and `within' are equal, but whose distributions exhibit different variances: in the first case, the two distributions of distances are very well separated, with a small variance for each `across' or `within' distribution; in the second case, the two distributions overlap substantially, with larger variances.
By definition, both cases will receive the same clustering index as defined in \citet{mehrer2020individual}, but our measure assigns a greater categoricality to the first case, as expected by construction of this example.
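For illustration, the index can be sketched as follows; this is a minimal NumPy implementation of the two-sample Kolmogorov-Smirnov statistic (our experiments use the equivalent \texttt{scipy.stats.ks\_2samp}):

```python
import numpy as np

def ks_statistic(x, y):
    """Two-sample Kolmogorov-Smirnov statistic: the maximum absolute
    difference between the two empirical CDFs."""
    x, y = np.sort(x), np.sort(y)
    pts = np.concatenate([x, y])
    cdf_x = np.searchsorted(x, pts, side="right") / len(x)
    cdf_y = np.searchsorted(y, pts, side="right") / len(y)
    return np.max(np.abs(cdf_x - cdf_y))

def categoricality(within_distances, between_distances):
    """Categoricality index: KS distance between the distributions of
    within-category and between-category neural distances."""
    return ks_statistic(np.asarray(within_distances, dtype=float),
                        np.asarray(between_distances, dtype=float))
```

Well-separated distance distributions give an index close to 1; fully overlapping ones give an index close to 0.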
\paragraph*{Section ``\textbf{\nameref{sec:toy}}'': Estimate of the deterministic counterpart of a noisy neural activity.}
For a given layer, we consider the neural activity $\mathbf{r}=\{r_1,\ldots,r_N \}$ from a population of $N$ neurons evoked by a stimulus $\mathbf{x}$ as a noisy version $\widetilde{f(\mathbf{x})}$ of the $N$-dimensional deterministic function $f(\mathbf{x})$ computed by the network at that level. As an example in the spirit of the dropout heuristic, for a Gaussian multiplicative noise, $\mathbf{r} = \widetilde{f(\mathbf{x})} = f(\mathbf{x})*\xi$, where $\xi \sim \mathcal{N}(1, \sigma^2)$ ($\sigma^2$ being the noise variance). For a given $\mathbf{x}$ and a given $\mathbf{r} = \widetilde{f(\mathbf{x})}$, we make use of gradient descent to compute the estimate $\widehat{\mathbf{x}}$ that minimizes the square error between $\widetilde{f(\mathbf{x})}$ and $f(\widehat{\mathbf{x}})$:
\begin{equation}
\widehat{\mathbf{x}} \equiv \argmin_{\mathbf{x}^*} \left(\widetilde{f(\mathbf{x})} - f(\mathbf{x}^*)\right)^2.
\end{equation}
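As an illustration of this reconstruction, the following sketch recovers $\widehat{\mathbf{x}}$ for a hypothetical one-dimensional stimulus and a hand-picked smooth $f$, using central finite differences in place of automatic differentiation (the function, learning rate and noise level are illustrative assumptions, not the settings used in our experiments):

```python
import numpy as np

def estimate_stimulus(f, r, x0, lr=0.05, steps=2000, eps=1e-6):
    """Gradient descent on the squared error ||r - f(x)||^2 to recover
    the estimate x_hat from a noisy activity r; the gradient is
    approximated by central finite differences."""
    loss = lambda z: float(np.sum((r - f(z)) ** 2))
    x = float(x0)
    for _ in range(steps):
        grad = (loss(x + eps) - loss(x - eps)) / (2.0 * eps)
        x -= lr * grad
    return x

# Toy example: a smooth 3-unit "population response" f, a stimulus
# x = 0.3, and multiplicative Gaussian noise as in Gaussian dropout.
f = lambda x: np.array([np.sin(x), np.cos(x), x])
rng = np.random.default_rng(0)
x_true = 0.3
r = f(x_true) * rng.normal(1.0, 0.05, size=3)   # noisy activity
x_hat = estimate_stimulus(f, r, x0=0.0)
```

The recovered estimate lies close to the true stimulus, with a residual deviation set by the noise level.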
\paragraph*{Section ``\textbf{\nameref{sec:mnist_continuum}}'': Autoencoder architecture and learning.}
The autoencoder is made of an encoder chained with a decoder, both convolutional neural networks. The encoder consists of three convolutional layers, each followed by a max-pooling layer. Similarly, the decoder uses three convolutional layers, each followed by an upsampling layer. All units use ReLU activations. The autoencoder is trained with a mean square error loss on the full MNIST training set for 1000 epochs, through gradient descent using the Adam optimizer \citep{kingma2015adam} with a learning rate of $10^{-4}$.
\paragraph*{Section ``\textbf{\nameref{sec:mnist_depth}}'': Selection of a set of image continua.}
We explain here how the continua considered in Section \ref{sec:mnist_depth} are selected. We draw the first 100 samples of each class from the test set. Each $n$th sample from one category is paired with the $n$th sample from another category, leading to 4500 pairs of stimuli drawn from two different digit categories. For each pair we generate a continuum as explained above. Continua presented in Fig.~\ref{fig:mnist_continua} and used in the experiment presented in Section~\ref{sec:mnist_cp}, Fig.~\ref{fig:mnist_cp}, are made of 16 images. Continua used in Section~\ref{sec:mnist_depth}, Fig.~\ref{fig:mnist_cp_depth}, use a finer resolution of 50 images. Not all pairs are valid in the context of our study: we are interested in pairs that can be smoothly interpolated from one category to another. This mainly concerns categories that are close to one another -- for instance, if the generated continuum straddles a third category, it should be dismissed from this analysis. In order to only consider the relevant pairs, we keep a pair only if the sum of the two posterior probabilities remains above a certain threshold (0.95 here) all along the continuum. In order to average over all these examples, we also exclude cases where the category boundary is too close to one of the extremities of the continuum. In the end, 1650 pairs fulfill these criteria and are included in the study.
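These selection criteria can be sketched as a simple filter on the posterior probabilities along a continuum (the function name and the boundary \texttt{margin} parameter are illustrative assumptions; only the 0.95 threshold comes from the protocol above):

```python
import numpy as np

def is_valid_continuum(p_a, p_b, threshold=0.95, margin=5):
    """Keep a continuum only if (i) the summed posteriors of the two
    endpoint classes stay above `threshold` everywhere (i.e. no third
    category is straddled), and (ii) the category boundary, taken as
    the point where p_a is closest to 0.5, is at least `margin` steps
    away from both extremities of the continuum."""
    p_a = np.asarray(p_a, dtype=float)
    p_b = np.asarray(p_b, dtype=float)
    if np.any(p_a + p_b < threshold):
        return False
    boundary = int(np.argmin(np.abs(p_a - 0.5)))
    return margin <= boundary <= len(p_a) - 1 - margin
```

For instance, a 50-step continuum whose two posteriors sum to one everywhere and cross 0.5 near the middle is kept, while one whose boundary sits at an extremity, or along which a third class takes probability mass, is rejected.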
\paragraph*{Section ``\textbf{\nameref{sec:mnist_categoricality}}'': Details on the numerical protocol.}
The multi-layer perceptron has three hidden layers of 1024 cells, with ReLU activations. Gaussian dropout is used after each dense layer, with respective rates $0.1$, $0.2$, $0.4$. The convolutional neural network has two blocks of two convolutional layers of 32 cells with ReLU activations, followed by a max pooling layer, these two blocks finally followed by a dense hidden layer of $128$ cells. Each block is followed by a dropout layer with rate $0.2$, and the dense layer is followed by a dropout layer with rate $0.5$. Simulations for the multi-layer perceptrons and the convolutional neural networks share the same framework. It considers 10 trials. Each trial uses a different random initialization. For each trial, learning is done over 50 epochs through gradient descent using the Adam optimizer with default parameters \citep{kingma2015adam}. In order to compute the categoricality index, the distributions of both the within- and between-category distances are evaluated thanks to 1000 pairs of samples drawn from the same category and 1000 pairs of samples drawn from different categories. Samples come from the test set.
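The sampling of the two distance distributions can be sketched as follows (Euclidean distance between activation vectors is used here as an illustrative assumption; the categoricality index of the main text is then computed by comparing these two distributions):

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_distributions(activations, labels, n_pairs=1000):
    """Sample within- and between-category distances of one layer's activity.

    `activations`: (n_samples, n_units) array of responses of a given layer;
    `labels`: (n_samples,) category labels.  Pairs of samples are drawn at
    random until `n_pairs` within-category and `n_pairs` between-category
    distances have been collected.
    """
    labels = np.asarray(labels)
    within, between = [], []
    while len(within) < n_pairs or len(between) < n_pairs:
        i, j = rng.integers(0, len(labels), size=2)
        if i == j:
            continue
        d = np.linalg.norm(activations[i] - activations[j])
        if labels[i] == labels[j] and len(within) < n_pairs:
            within.append(d)
        elif labels[i] != labels[j] and len(between) < n_pairs:
            between.append(d)
    return np.array(within), np.array(between)
```

For a categorical layer, the between-category distribution is shifted toward larger distances than the within-category one.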
\paragraph*{Section ``\textbf{\nameref{sec:cat_dog}}'': Details on the numerical protocol.}
We make use of the Kaggle Dogs vs. Cats database\footnote{\url{https://www.microsoft.com/en-us/download/details.aspx?id=54765}, \url{https://www.kaggle.com/c/dogs-vs-cats}}, which contains 25,000 images of dogs and cats. Image size is $180\times180$. The convolutional neural network has three blocks, each one composed of two convolutional layers of 64 cells with ReLU activations followed by a max pooling layer, these three blocks being finally followed by a global average pooling layer. Such an average pooling layer has been introduced in~\citet{lin2013network} to replace the final dense layers of previous common models, drastically reducing the number of parameters -- this avoids overfitting when dealing with not so large databases such as the one considered here. The first two convolutional blocks are followed by a Gaussian dropout layer with rate $0.1$ and $0.2$ respectively, while the global average pooling layer is followed by a dropout layer with rate $0.4$. The simulation is repeated 10 times with different random initializations. Learning is done over 100 epochs through gradient descent using the Adam optimizer with default parameters \citep{kingma2015adam}. The learning database being quite small, we used data augmentation during learning, performing horizontal random flips and random rotations (with an angle in $[-0.1 \times 2\pi, 0.1 \times 2\pi]$). After learning, average classification performance on the test set is 95\%. Finally, the different cat/dog, cat/cat or dog/dog morphed continua are generated thanks to the code provided by \citet{miyato2018spectral}\footnote{\url{https://github.com/pfnet-research/sngan_projection}}. We made use of the $256\times256$ model pretrained on ImageNet that is provided by the authors.
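The corresponding architecture can be sketched in \texttt{tf.keras} as follows (the $3\times3$ kernels and the single sigmoid output unit for the binary cat/dog decision are assumptions made for illustration):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

def build_cat_dog_model():
    inp = layers.Input(shape=(180, 180, 3))
    x = inp
    # Three blocks of two 64-cell conv layers + max pooling; the first two
    # blocks are followed by Gaussian dropout with rates 0.1 and 0.2.
    for rate in (0.1, 0.2, None):
        x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
        x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
        x = layers.MaxPooling2D(2)(x)
        if rate is not None:
            x = layers.GaussianDropout(rate)(x)
    # Global average pooling replaces the final dense layers (Lin et al., 2013),
    # followed by dropout with rate 0.4.
    x = layers.GlobalAveragePooling2D()(x)
    x = layers.Dropout(0.4)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # binary output (assumed)
    return models.Model(inp, out)
```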
\paragraph*{Section ``\textbf{\nameref{sec:imagenet}}'': Evaluation of the categoricality index for the ImageNet experiment.}
The full ImageNet database consists of more than a million images categorized into 1000 different classes. Categoricality is evaluated through the use of 1000 pairs of samples for the within-category distribution of neural distances, and 1000 pairs of samples for the between-categories one. Samples come from the validation set.
\paragraph*{Computer code.}
The custom Python 3 code written for the present project makes use of the following libraries: \texttt{tensorflow v2.4.1} \citep{tensorflow2015-whitepaper} (using \texttt{tf.keras} API, \citealp{chollet2015keras}), \texttt{matplotlib v3.3.4} \citep{hunter2007matplotlib}, \texttt{numpy v1.19.5} \citep{harris2020array}, \texttt{scipy v1.4.1} \citep{scipy}, \texttt{pandas v1.2.3} \citep{mckinney2010data}, \texttt{seaborn v0.11.1} and \texttt{scikit\_learn v0.24.1} \citep{scikit-learn}. The code is available at \url{https://github.com/l-bg/categorical_perception_ann_neco}.
\section*{Acknowledgments}
We are grateful to Gary Cottrell as well as to an anonymous referee for important and constructive comments. We thank the Information Systems Division (DSI) of the EHESS, and in particular Laurent Henry, for their helpfulness in providing us with an access to computing resources during the covid-19 lockdown.
\clearpage
\renewcommand{\thesection}{Appendix \Alph{section}}
\renewcommand{\thesubsection}{\Alph{section}.\arabic{subsection}}
\setcounter{section}{0}
\renewcommand{\theequation}{\Alph{section}.\arabic{equation}}
\setcounter{equation}{0}
\renewcommand{\thefigure}{\Alph{section}.\arabic{figure}}
\setcounter{figure}{0}
\section{Modeling categorical perception}
\label{sec:appendix_model}
For completeness, in this Appendix we review and synthesize the results in \citet{LBG_JPN_2008, LBG_JPN_2012} that are relevant for the present paper. In a companion paper \citep{LBG_JPN_Theory_2020}, we show that the analysis presented here can be extended to multi-layer networks, making explicit that $x$, instead of being the stimulus, corresponds to the projection (achieved by the network) of the stimulus on a space relevant for the discrimination task.
\subsection{Model Description}
\label{sec:model_description}
We consider a finite set of $M$ categories, denoted by $\mu = 1, \ldots, M$, with probabilities of occurrence (relative frequency) $q_{\mu} > 0$, so that $\sum_{\mu} q_{\mu} = 1$. Each category is defined as a probability density distribution $P(\mathbf{x}|\mu)$ over the continuous space of stimulus $\mathbf{x}$. The stimulus space is assumed to be of small dimension $K$, corresponding to the selection of the features or directions relevant to the task at hand. For the sake of simplicity, we only consider here the one-dimensional case, $K=1$. See \citet{LBG_JPN_2008,LBG_JPN_2012} for equations in the general case $K>1$.\\
A stimulus $x \in \mathbb{R}$ elicits a response $\mathbf{r}=\{r_1,\ldots,r_N \}$ from a population of $N$ neurons. This neural activity $\mathbf{r}$ is some noisy representation of $x$ and aims at encoding properties about the given categories. The read-out is realized by $M$ output cells with activities $g_{\mu}, \mu=1,...,M$. Each output activity is a deterministic function of the neural activity $\mathbf{r}$, $g_{\mu} = g(\mu | \mathbf{r})$. With the goal of computing an estimate $\widehat{\mu}$ of $\mu$, we consider these outputs as estimators of the posterior probability $P(\mu|x)$, where $x$ is the (true) stimulus that elicited the neural activity $\mathbf{r}$. The processing chain can be summarized with the following Markov chain:
\begin{equation}
\mu \rightarrow x \xrightarrow[\text{coding}]{} \mathbf{r} \xrightarrow[\text{decoding}]{} \widehat{\mu}
\end{equation}
\subsection{Estimation of the posterior probabilities}
The read-out assumes that, given a neural activity $\mathbf{r}$ in the coding layer, the goal is to construct as neural output an estimator $g(\mu|\mathbf{r})$ of the posterior probability $P(\mu|x)$, where $x$ indicates the (true) stimulus that elicited the neural activity $\mathbf{r}$. The relevant Bayesian quality criterion is given by the Kullback-Leibler divergence (or relative entropy) $\mathcal{C}(x,\mathbf{r})$ between the true probabilities $\{P(\mu|x), \mu=1,...,M\}$ and the estimator
$\{g(\mu|\mathbf{r}), \mu=1,...,M\}$, defined as \citep{Cover_Thomas_2006}:
\begin{equation}
\mathcal{C}(x,\mathbf{r})
\equiv D_{KL}(P_{\mu|x}||g_{\mu|\mathbf{r}})
= \sum_{\mu=1}^{M} P(\mu|x) \ln \frac{P(\mu|x)}{g(\mu|\mathbf{r}) }
\label{eq:cost}
\end{equation}
Averaging over $\mathbf{r}$ given $x$, and then over $x$, the mean cost induced by the estimation can be written:
\begin{equation}
\mathcal{\overline{C}} = - \mathcal{H}(\mu|x) - \int \,\left( \int \, \sum_\mu P(\mu|x) \ln g(\mu|\mathbf{r})\; P(\mathbf{r}|x)\,d\mathbf{r} \,\right)\; p(x) \,dx
\label{eq:mean_cost}
\end{equation}
where $\mathcal{H}(\mu|x) = - \int dx\,p(x) \sum_{\mu=1}^{M} P(\mu|x) \ln P(\mu|x)$ is the conditional entropy of $\mu$ given $x$.\\
We can rewrite (\ref{eq:mean_cost}) as the sum of two terms:
\begin{equation}
\mathcal{\overline{C}} = \mathcal{\overline{C}}_{\text{coding}} + \mathcal{\overline{C}}_{\text{decoding}}
\label{eq:mean_cost_coding_decoding}
\end{equation}
respectively defined as:
\begin{equation}
\mathcal{\overline{C}}_{\text{coding}} = I(\mu,x)- I(\mu,\mathbf{r})
\label{eq:cost_coding}
\end{equation}
and
\begin{equation}
\mathcal{\overline{C}}_{\text{decoding}} = \int \, D_{KL}(P_{\mu|\mathbf{r}}||g_{\mu|\mathbf{r}}) \;P(\mathbf{r})\,d\mathbf{r}
\label{eq:cost_decoding}
\end{equation}
$I(\mu,x)$ and $I(\mu,\mathbf{r})$ are respectively the mutual information between the categories $\mu$ and the stimulus $x$, and between the categories $\mu$ and the neural activity $\mathbf{r}$, defined by \citep{Blahut_1987}:
\begin{equation}
I(\mu,x) = \sum_{\mu=1}^M q_{\mu} \int \,\ln \frac{P(x|\mu)}{P(x)} \, P(x|\mu) \, dx\,, \;\;
I(\mu,\mathbf{r}) = \sum_{\mu=1}^M q_{\mu} \int \, \ln \frac{P(\mathbf{r}|\mu)}{P(\mathbf{r})}\; P(\mathbf{r}|\mu) \,d\mathbf{r}
\label{eq:mi_mu_x}
\end{equation}
$D_{KL}(P_{\mu|\mathbf{r}}||g_{\mu|\mathbf{r}})$ is the relative entropy between the true probability of the category given the neural activity and the output function $g$:
\begin{equation}
D_{KL}(P_{\mu|\mathbf{r}}||g_{\mu|\mathbf{r}}) = \sum_{\mu=1}^{M} P(\mu|\mathbf{r}) \ln \frac{P(\mu|\mathbf{r})}{g(\mu|\mathbf{r}) }
\label{eq:dklpgr}
\end{equation}
Since processing cannot increase information \citep[see e.g.][pp. 158-159]{Blahut_1987}, the information $I(\mu,\mathbf{r})$ conveyed by $\mathbf{r}$ about $\mu$ is at most equal to the one conveyed by the sensory input $x$, hence we have that $\mathcal{\overline{C}}_{\text{coding}} \geq 0$. This coding cost tends to zero as noise vanishes. The decoding cost $\mathcal{\overline{C}}_{\text{decoding}}$ is the only term that depends on $g$, hence the function minimizing the cost function (\ref{eq:mean_cost_coding_decoding}) is (if it can be realized by the network):
\begin{equation}
g(\mu|\mathbf{r}) = P(\mu|\mathbf{r})
\label{eq:optdecod}
\end{equation}
From a machine learning viewpoint, minimization of the decoding cost (\ref{eq:dklpgr}) can be achieved through supervised learning taking as cost function the cross-entropy loss, as shown in \citet{LBG_JPN_2012}, SI Section 1.
\subsection{Coding efficiency}
\label{sec:coding_efficiency}
In \citet{LBG_JPN_2008}, we show that, in a high signal-to-noise ratio limit, that is when the number $N$ of coding cells grows to infinity, the mutual information $I(\mu,\mathbf{r})$ between the activity of the neural population and the set of discrete categories reaches its upper bound, which is the mutual information $I(\mu,x)$ between the stimuli and the categories. For large but finite $N$, the leading correction takes an interesting form. It can be written as the average (over the stimulus space) of the ratio between two Fisher-information values: in the denominator, the Fisher information $ F_{\text{code}}(x)$, specific to the neural encoding stage $x \rightarrow \mathbf{r}$, and in the numerator, the Fisher information $ F_{\text{cat}}(x)$, that characterizes the category realizations $\mu \rightarrow x$ and does not depend on the neural code. Fisher information is an important concept that comes from the field of parameter estimation in statistics. $F_{\text{code}}(x)$ characterizes the sensitivity of the neural activity $\mathbf{r}$ with respect to small variations of $x$. The higher the Fisher information $F_{\text{code}}(x)$, the better an estimate of $x$ can be obtained. $F_{\text{cat}}(x)$ quantifies the categorization uncertainty. As a consequence, $F_{\text{cat}}(x)$ is larger in the transition regions between categories, where the identification function $P(\mu|x)$ changes quickly, than within category, where the identification function $P(\mu|x)$ is almost flat.\\
Explicitly, the coding cost~(\ref{eq:cost_coding}) writes:
\begin{equation}
\mathcal{\overline{C}}_{\text{coding}} = \frac{1}{2} \int \frac{F_{\text{cat}}(x)}{F_{\text{code}}(x)}\;p(x)\,dx
\label{eq:midiff_app}
\end{equation}
where $F_{\text{code}}(x)$ and $F_{\text{cat}}(x)$ are respectively defined as
\begin{equation}
F_{\text{code}}(x) = - \int \, \frac{\partial^2 \ln P(\mathbf{r}|x) }{\partial x^2} \;P(\mathbf{r}|x) \,d\mathbf{r}
\label{eq:fisher_code}
\end{equation}
\begin{equation}
F_{\text{cat}}(x) = -\sum_{\mu=1}^M \, \frac{\partial^2 \ln P(\mu|x)}{\partial x^2} \; P(\mu|x).
\label{eq:fisher_cat}
\end{equation}
Crucially for the present work, the inverse of the Fisher information is an optimal lower bound on the variance $\sigma_x^2 $ of any unbiased estimator $\widehat{x}(\mathbf{r})$ of $x$ \citep[Cramér-Rao bound, see e.g.][]{Blahut_1987}:
\begin{equation}
\sigma_x^2 \equiv \int \, \big(\widehat{x}(\mathbf{r}) - x\big)^2 \;P(\mathbf{r}|x)\,d\mathbf{r}\;\geq \; \frac{1}{F_{\text{code}}(x)}
\label{eq:cramer_rao_app}
\end{equation}
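For illustration, the bound can be checked numerically in a simple setting (a sketch with assumed parameter values, not part of the original derivation): $N$ independent neurons responding as $r_i = x + \eta_i$ with Gaussian noise $\eta_i \sim \mathcal{N}(0, \sigma^2)$, for which $F_{\text{code}}(x) = N/\sigma^2$ and the sample mean is an unbiased estimator that saturates the Cramér-Rao bound:

```python
import numpy as np

rng = np.random.default_rng(1)

# N neurons encoding x with independent Gaussian noise of variance sigma^2.
N, sigma, x = 50, 0.5, 0.3
F_code = N / sigma**2                 # Fisher information of this Gaussian code

# Monte Carlo estimate of the variance of the unbiased estimator mean(r).
trials = 20000
r = x + sigma * rng.standard_normal((trials, N))
x_hat = r.mean(axis=1)
empirical_var = x_hat.var()

cramer_rao = 1.0 / F_code             # Cramér-Rao lower bound on the variance
# For this code, empirical_var matches the bound sigma^2 / N.
```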
In the case of $x$ in dimension $K>1$, one has $K\times K$ Fisher information matrices $F_{\text{cat}}$ and $F_{\text{code}}$, the Cramér-Rao bound relates the covariance matrix of the estimator to the inverse of $F_{\text{code}}$, and the ratio of the Fisher information quantities in (\ref{eq:midiff_app}) is replaced by the trace of the product of the transpose of $F_{\text{cat}}$ by the inverse of $F_{\text{code}}$ \citep[see][]{LBG_JPN_2008}. In such a case the resulting neural metric will be anisotropic. Suppose one follows a 1d path (a continuum between two items). If this path crosses a category boundary, one will observe the same contraction/expansion of space as in the 1d case, whereas along a path parallel to a boundary the neural Fisher information will be more or less constant.
\subsection{Optimal decoding}
\label{sec:opt_decoding}
In \citet{LBG_JPN_2012}, we show that the function $g(\mu|\mathbf{r})$ that minimizes the decoding cost function given by Eq. \ref{eq:cost_decoding} is equal to $P(\mu|\mathbf{r})$, which is an (asymptotically) unbiased and (asymptotically) efficient estimator of $P(\mu|x)$. For a given $x$, its mean is thus equal to
\begin{equation}
\int \, g(\mu|\mathbf{r}) \, P(\mathbf{r}|x)\,d\mathbf{r} = P(\mu|x)
\end{equation}
and its variance is given by the Cramér-Rao bound, that is, in this 1d case,
\begin{equation}
\int \, \big(g(\mu|\mathbf{r})-P(\mu|x)\big)^2 \;P(\mathbf{r}|x) \, d\mathbf{r}\;
=\; \frac{P'(\mu|x)^2}{F_\text{code}(x)}
\label{eq:pmur_cramer_rao}
\end{equation}
\subsection{Category learning implies categorical perception}
\label{sec:learning-cp}
The Fisher information $F_{\text{cat}}(x)$ is the largest at the boundary between categories. If the neural code is to be optimized, from Eq.~\ref{eq:midiff_app} we therefore expect the Fisher information $F_{\text{code}}(x)$ to be greater between categories than within, so as to compensate for the higher value of $F_{\text{cat}}(x)$ in this region. Depending on the constraints specific to the system under consideration, minimization of the cost leads to a neural code such that $F_{\text{code}}(x)$ is some increasing function of $F_{\text{cat}}(x)$. For some constraints one gets $F_{\text{code}}(x)\propto F_{\text{cat}}(x)$ as optimum, but other constraints may lead to other relationships -- see \citet{LBG_JPN_2008,LBG_JPN_Theory_2020, KB_JPN_2020}. \\
Another way to look at the benefit of having greater neural sensitivity in the transition region between categories is through Eq.~\ref{eq:pmur_cramer_rao}: a greater Fisher information $F_{\text{code}}(x)$ in this region, where $P'(\mu|x)^2$ is the highest, makes it possible to lower the variance of the estimate $g(\mu|\mathbf{r})$ of $P(\mu|x)$. As a result, the probability of misclassifying $x$ given the neural activity $\mathbf{r}$ is also decreased, as illustrated in Fig.~\ref{fig:posterior_estimation}. \\
Larger Fisher information $F_{\text{code}}(x)$ means greater sensitivity of the neural code to a small change in stimulus $x$. $F_{\text{code}}$ gives the metric of the representation: a larger value around a certain $x$ means that the representation is dilated at that location. In other words, category learning implies better cross-category than within-category discrimination, hence the so-called \textit{categorical perception}.
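As a minimal numerical illustration of this mechanism (a sketch with assumed category parameters), consider two equiprobable Gaussian categories on a one-dimensional stimulus space; the categorical Fisher information $F_{\text{cat}}(x)$ of Eq.~\ref{eq:fisher_cat} then peaks at the boundary between the two categories:

```python
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

# Two equiprobable Gaussian categories on a 1d stimulus space.
x = np.linspace(-3.0, 3.0, 601)
p_x_given_1 = gaussian(x, -1.0, 0.7)
p_x_given_2 = gaussian(x, +1.0, 0.7)

# Posterior P(mu|x) by Bayes' rule (q_1 = q_2 = 1/2).
evidence = 0.5 * p_x_given_1 + 0.5 * p_x_given_2
post = np.stack([0.5 * p_x_given_1, 0.5 * p_x_given_2]) / evidence

# F_cat(x) = -sum_mu P(mu|x) d^2 log P(mu|x) / dx^2, via finite differences.
dx = x[1] - x[0]
d2_log_post = np.gradient(np.gradient(np.log(post), dx, axis=1), dx, axis=1)
F_cat = -(post * d2_log_post).sum(axis=0)

boundary = x[np.argmax(F_cat)]   # F_cat peaks at the category boundary, x = 0
```

One can check that in this equal-variance case the identification function is a sigmoid $p(x)$ and $F_{\text{cat}}(x)=\beta^2 p(x)(1-p(x))$ for some slope $\beta$, so $F_{\text{cat}}$ is maximal exactly where the identification function is steepest.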
\begin{figure}[hb]
\centering
\includegraphics[width=0.98\linewidth]{fig/posterior_estimation}
\caption{\textbf{Increasing Fisher information $F_\text{code}$ decreases the probability of error due to noise in the neural processing.}
The probability of misclassifying a stimulus $x$ given the neural activity $\mathbf{r}$ that it has evoked is greater the closer $x$ is to the boundary between categories. Here, we consider an example $x$ that belongs to category $\mu$. In order to minimize the probability of error, the best strategy is to categorize $x$ as $\mu$ if $\mu$ has the greatest posterior probability $P(\mu|\mathbf{r})$, above some value defining the decision boundary. All the values of $P(\mu|\mathbf{r})$ below the decision boundary result in an error in classification. Learning the categories increases the Fisher information $F_\text{code}$ at the boundary between categories, which reduces the variance $P'(\mu|x)^2/F_{\text{code}}(x)$ of the estimator of $P(\mu|x)$, thus reducing the probability of error.}
\label{fig:posterior_estimation}
\end{figure}
\clearpage
\setcounter{figure}{0}
\section{Examples of image continua, using the MNIST dataset}
\label{sec:appendix_mnist_continua_examples}
$\,$\\
\begin{figure}[h]
\centering
\includegraphics[width=0.95\linewidth]{fig/mnist_continua_examples_input_distance_pred}
\caption{\textbf{Examples of image continua}, smoothly interpolating between pairs of digits taken from the MNIST test set. The dotted colored lines indicate the posterior probabilities from the network of Fig.~\ref{fig:mnist_cp_depth} (blue corresponds to the correct response for the leftmost digit, red for the rightmost one).
The green solid line corresponds to the distance in input (pixel) space between adjacent items along the continuum.}
\label{fig:mnist_continua_examples_input_distance_pred}
\end{figure}
\clearpage
\setcounter{figure}{0}
\section{Supplementary figure for the cat/dog example: Distance in input space}
\label{sec:appendix_catdog_pixel}
$\,$\\
\begin{figure}[h]
\textbf{a.}
\includegraphics[width=.95\linewidth,valign=T]{fig/cat_dog_inputdistance_a.pdf}
\vspace{0.1cm}\\
\textbf{b.}
\includegraphics[width=.95\linewidth,valign=T]{fig/cat_dog_inputdistance_b.pdf}
\caption{\textbf{Categorical perception of a cat/dog continuum: Distance in input space}. Same legend as in Fig.~\ref{fig:cat_dog}. The green solid line corresponds to the distance in input (pixel) space between adjacent items along the continuum.}
\label{fig:cat_dog_input}
\end{figure}
\clearpage
\setcounter{figure}{0}
\section{Supplementary results on dropout}
\label{sec:appendix_dropout}
\subsection{Dropout rate vs. layer depth}
\label{sec:appendix_dropout_depth}
We propose here a first investigation aiming at testing the effectiveness of dropout as a function of layer depth. We conduct an experiment using a network trained on the CIFAR-10 image dataset \citep{krizhevsky2009learning}. This dataset is a collection of 60000 images (with 50000 images in the training set and 10000 in the test set) divided into 10 classes (such as airplanes, cars, birds or cats). The neural network is a multi-layer perceptron with two hidden layers of 1024 cells, with ReLU activations. We apply dropout after the input layer and after each hidden layer. For each of these three cases, we vary the dropout rate from 0.0 to 0.7 while keeping the other two with a fixed dropout rate of 0.2. For each value of dropout, the experiment consists of 10 trials with different random initializations. For each trial, learning is done over 1000 epochs through gradient descent using the Adam optimizer with learning rate 1e-4 \citep{kingma2015adam}. In order to account for the fact that different values of the dropout rate might require different learning rates or numbers of epochs \citep{srivastava2014dropout}, we consider the average of the ten best values obtained on the test set during these 1000 epochs (instead of considering only the very last value).\\
Fig.~\ref{fig:cifar10_dropout} presents the results. First, one can see that the deeper the layer, the more robust it is to a large amount of noise: while a high dropout rate (e.g. 0.7) is very detrimental when applied at the level of the input layer, the deepest hidden layer, which is more categorical, does not suffer much from it. Second, one can see that the best dropout value for each layer increases with depth, which supports the idea that a deep layer benefits more from a larger value of dropout.
\begin{figure}[h]
\centering
\includegraphics[width=0.6\linewidth]{fig/cifar10_dropout_depth.pdf}
\caption{\textbf{Classification accuracy on the CIFAR-10 image dataset (test set) using a multi-layer perceptron with varying levels of dropout.} The neural network is a multi-layer perceptron with two hidden layers. Dropout is applied after the input layer (called layer 0 here) and the two hidden layers (layer 1 and 2).
For a given layer, we vary the dropout rate from 0.0 to 0.7 while keeping a fixed rate value of 0.2 in the other two layers.
Each arrow marks the maximum value in classification accuracy for the corresponding layer. Error bars indicate 95\% bootstrap confidence intervals.}
\label{fig:cifar10_dropout}
\end{figure}
\clearpage
\subsection{Comparing categoricality on the MNIST dataset with and without dropout}
\label{sec:appendix_dropout_mnist}
\begin{figure}[h]
\centering
\textbf{a.}
\includegraphics[width=0.44\linewidth,valign=T]{fig/mnist_mlp_categoricality_nodropout}
\hfill
\textbf{b.}
\includegraphics[width=0.44\linewidth,valign=T]{fig/mnist_cnn_categoricality_nodropout}
\caption{\textbf{Categoricality as a function of layer depth, using the MNIST dataset, with and without the use of dropout}. Reproduction of Fig.~\ref{fig:mnist_categoricality}, along with the categoricality obtained without the use of dropout or Gaussian dropout (orange lines).}
\label{fig:mnist_categoricality_nodropout}
\end{figure}
\clearpage
\bibliographystyle{apalike}
\title{Parameterized complexity of fair deletion problems.\thanks{Research was supported by the project GAUK 338216 and by the project SVV-2016-260332.
}}
\titlerunning{Fair deletion problems}
\author{Tomáš Masařík\inst{1}\thanks{Author was supported by the project CE-ITI P202/12/G061.} \and Tomáš Toufar\inst{2}}
\authorrunning{Tomáš Masařík and Tomáš Toufar}
\tocauthor{Tomáš Masařík, Tomáš Toufar}
\institute{
Department of Applied Mathematics, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic, \\
\email{[email protected]} \and
Computer Science Institute of Charles University, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic \\
\email{[email protected]}
}
\maketitle
\begin{abstract}
\input{src/abstract}
\end{abstract}
\input{src/intro}
\input{src/prelim}
\input{src/hardness}
\input{src/FPT}
\input{src/conclusions}
\input{src/ack}
\bibliographystyle{siam}
\section{Introduction}
We study the computational complexity of \emph{fair deletion problems}.
Deletion problems are a standard reformulation of some classical problems in combinatorial optimization examined by Yannakakis~\cite{Yannakakis81}.
For a graph property $\pi$ we can formulate an \emph{edge deletion problem}: given a graph $G=(V,E)$, find a minimum set of edges $F$ whose deletion makes the graph ${G'=(V,E\setminus F)}$ satisfy the property $\pi$.
A similar notion holds for the \emph{vertex deletion problem}.
Many classical problems can be formulated in this way such as {\sc minimum vertex cover, maximum matching} or {\sc minimum feedback arc set}.
For example {\sc minimum vertex cover} is formulated as a vertex deletion problem since we aim to find a minimum set of vertices such that the rest of the graph forms an independent set.
An example of an edge deletion problem is {\sc perfect matching}: we would like to find a minimum edge set such that every vertex of the resulting graph has degree exactly one.
Many of such problems are {$\mathsf{NP}$}\xspace-complete~\cite{Yannakakis78,Watanabe,KriDeo}.
\emph{Fair deletion problems} are such modifications where the cost of the solution should be split such that the cost is not too high for anyone. More formally, the \textsc{fair edge deletion problem} for a given graph $G=(V,E)$ and a property $\pi$ finds a set $F\seq E$ which minimizes the maximum degree of the graph ${G^*=(V,F)}$ where the graph ${G'=(V,E\setminus F)}$ satisfies the property~$\pi$. Fair deletion problems were introduced by Lin and Sahni~\cite{LiSah}.
Minimizing the fair cost arises naturally in many situations, for example in defective coloring~\cite{defcol}. A graph is $(k,d)$-colorable if every vertex can be assigned a color from the set $\{1,\ldots,k\}$ in such a way that every vertex has at most $d$ neighbors of the same color. This problem can be reformulated in terms of fair deletion: we aim to find a set of edges of maximum degree $d$ such that after its removal the graph can be partitioned into $k$ independent sets.
We focus on fair deletion problems with properties definable in either first order ({$\mathsf{FO}$}\xspace) or monadic second order ({$\mathsf{MSO}$}\xspace) logic.
Our work extends the result of Kolman et al.~\cite{Kolman09onfair}.
They showed an \ensuremath{\mathsf{XP}}\xspace algorithm for a generalization of fair deletion problems definable by {$\mathsf{MSO}_2$}\xspace formula on graphs of bounded tree-width.
We give formal definitions of the problems under consideration in this work.
\prob{\sc Fair {$\mathsf{FO}$}\xspace edge-deletion}
{An undirected graph $G$, an {$\mathsf{FO}$}\xspace sentence $\psi$, and a positive integer~$k$.}
{Is there a set $F \subseteq E(G)$ such that $G \setminus F \models \psi$ and for every
vertex $v$ of $G$, the number of edges in $F$ incident with $v$ is at most
$k$?}
Similarly, the \textsc{fair vertex deletion problem} finds, for a given graph $G=(V,E)$ and a property $\pi$, a set $W$ minimizing the maximum degree of the graph ${G[W]}$ such that the graph ${G[V\setminus W]}$ satisfies the property $\pi$. These problems are {$\mathsf{NP}$}\xspace-complete for some formulas. For example, Lin and Sahni~\cite{LiSah} showed that deciding whether a graph $G$ has a degree-one subgraph $H$ such that $G\setminus H$ is a spanning tree is {$\mathsf{NP}$}\xspace-complete.
\prob{\sc Fair {$\mathsf{FO}$}\xspace vertex-deletion}
{An undirected graph $G$, an {$\mathsf{FO}$}\xspace sentence $\psi$, and a positive integer~$k$.}
{Is there a set $W \subseteq V(G)$ such that $G \setminus W \models \psi$ and for every
vertex $v$ of $G$, it holds that $|N(v) \cap W| \leq k$?}
Both problems can be straightforwardly modified for {$\mathsf{MSO}_1$}\xspace or {$\mathsf{MSO}_2$}\xspace.
The following notions are useful when discussing the fair deletion problems.
The \emph{fair cost of a set} $F \subseteq E$ is defined as $\max_{v\in V} |\{ e \in F \mathrel| v \in e \}|$. We refer to the function that assigns each set $F$ its fair cost as the
\emph{fair objective function}. In case of vertex-deletion problems, the \emph{fair cost of a set $W \subseteq V$} is defined as $\max_{v\in V} |N(v) \cap W|$. The \emph{fair objective function} is defined analogously. Whenever we refer to the fair cost or the fair objective function, it should be clear from context whether we mean the edge or the vertex version.
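For concreteness, both fair cost functions can be computed directly from these definitions (a plain illustration, not an algorithm from this paper):

```python
from collections import Counter

def fair_cost_edges(vertices, F):
    """Fair cost of an edge set F: the maximum, over all vertices v, of the
    number of edges of F incident with v.  Edges are 2-element tuples."""
    deg = Counter()
    for u, v in F:
        deg[u] += 1
        deg[v] += 1
    return max((deg[v] for v in vertices), default=0)

def fair_cost_vertices(adjacency, W):
    """Fair cost of a vertex set W: the maximum, over all vertices v, of
    |N(v) & W|.  `adjacency` maps each vertex to the set of its neighbors."""
    W = set(W)
    return max(len(adjacency[v] & W) for v in adjacency)
```

For instance, deleting all three edges of a star $K_{1,3}$ has fair cost 3 (attained at the center), while deleting both endpoints of a path on three vertices has fair vertex cost 2 (attained at the middle vertex).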
We now describe the generalization of fair deletion problems considered by Kolman et al. The main motivation is that sometimes we want to put additional constraints on the deleted set itself (e.g. \textsc{Connected Vertex Cover}, \textsc{Independent Dominating Set}). However, the framework of deletion problems does not allow that. To overcome this problem, we define the generalized problems as follows.
\prob{\sc Generalized Fair {$\mathsf{MSO}$}\xspace edge-deletion}
{An undirected graph $G$, an {$\mathsf{MSO}$}\xspace formula $\psi$ with one free edge-set
variable, and a positive integer $k$.}
{Is there a set $F \subseteq E(G)$ such that $G \models \psi(F)$ and for every
vertex $v$ of $G$, the number of edges in $F$ incident with $v$ is at most
$k$?}
\prob{\sc Generalized Fair {$\mathsf{MSO}$}\xspace vertex-deletion}
{An undirected graph $G$, an {$\mathsf{MSO}$}\xspace formula $\psi$ with one free vertex-set variable, and a positive integer $k$.}
{Is there a set $W \subseteq V(G)$ such that $G \models \psi(W)$ and for every vertex $v$ of $G$, it holds that $|N(v) \cap W| \leq k$?}
In this version, the formula $\psi$ can force that $G$ has the desired property after deletion as well as imposing additional constraints on the deleted set itself.
Courcelle and Mosbah~\cite{CourcelleMosbah} introduced a semiring homomorphism framework that can be used
to minimize various functions over all sets satisfying a given {$\mathsf{MSO}$}\xspace formula. A natural question is
whether this framework can be used to minimize the fair objective function. The answer is no: we exclude
the existence of an \ensuremath{\mathsf{FPT}}\xspace algorithm for the parameterization by tree-width under a reasonable assumption. Note that there are semirings that capture the fair objective function, but their size is of order $\bigO{n^{\ensuremath{\mathop{\mathrm{tw}}}{(G)}}}$, so this approach does not lead
to an \ensuremath{\mathsf{FPT}}\xspace algorithm.
\subsection{Our results}
We prove that the \ensuremath{\mathsf{XP}}\xspace algorithm given by Kolman et al.~\cite{Kolman09onfair} is almost optimal under the Exponential Time Hypothesis (ETH) for both the edge and the vertex version. In fact, we prove something slightly stronger: the hardness of the classical (weaker) formulation of {\sc fair deletion problems}, expressed in the (also weaker) {$\mathsf{FO}$}\xspace logic.
\begin{theorem}\label{thm:hardvertex}
If there is an \ensuremath{\mathsf{FPT}}\xspace algorithm for \textsc{Fair {$\mathsf{FO}$}\xspace vertex-deletion} parameterized by the size of the formula $\psi$,
the pathwidth of $G$, and the size of minimum feedback vertex set of $G$ combined,
then $\ensuremath{\mathsf{FPT}}\xspace = \W{1}$.
Moreover, let $k$ denote $\ensuremath{\mathop{\mathrm{pw}}}(G)+ \ensuremath{\mathop{\mathrm{fvs}}}(G)$. If there is an algorithm for
\textsc{Fair {$\mathsf{FO}$}\xspace vertex-deletion} with running time $f(|\psi|, k) n^{o(\sqrt[3] k)}$,
then Exponential Time Hypothesis fails.
\end{theorem}
\begin{theorem}
\label{thm:edge_deletion_hardness}
If there is an \ensuremath{\mathsf{FPT}}\xspace algorithm for \textsc{Fair {$\mathsf{FO}$}\xspace edge-deletion} parameterized by the size of the formula $\psi$,
the pathwidth of $G$, and the size of minimum feedback vertex set of $G$ combined,
then $\ensuremath{\mathsf{FPT}}\xspace = \W{1}$.
Moreover, let $k$ denote $\ensuremath{\mathop{\mathrm{pw}}}(G)+\ensuremath{\mathop{\mathrm{fvs}}}(G)$.
If there is an algorithm for
\textsc{Fair {$\mathsf{FO}$}\xspace edge-deletion} with running time $f(|\psi|, k) n^{o(\sqrt[3] k)}$,
then Exponential Time Hypothesis fails.
\end{theorem}
By a small modification of our proofs we are able to derive tighter results ($\sqrt{k}$ instead of $\sqrt[3]{k}$) using {$\mathsf{MSO}_2$}\xspace or {$\mathsf{MSO}_1$}\xspace logic, respectively. However, a small gap still remains open.
\begin{theorem}\label{thm:MSO_hardvertex}
If there is an \ensuremath{\mathsf{FPT}}\xspace algorithm for \textsc{Fair {$\mathsf{MSO}_1$}\xspace vertex-deletion} parameterized by the size of the formula $\psi$,
the pathwidth of $G$, and the size of minimum feedback vertex set of $G$ combined,
then $\ensuremath{\mathsf{FPT}}\xspace = \W{1}$.
Moreover, let $k$ denote $\ensuremath{\mathop{\mathrm{pw}}}(G)+ \ensuremath{\mathop{\mathrm{fvs}}}(G)$. If there is an algorithm for
\textsc{Fair {$\mathsf{MSO}_1$}\xspace vertex-deletion} with running time $f(|\psi|, k) n^{o(\sqrt k)}$,
then Exponential Time Hypothesis fails.
\end{theorem}
\begin{theorem}
\label{thm:MSO_edge_deletion_hardness}
If there is an \ensuremath{\mathsf{FPT}}\xspace algorithm for \textsc{Fair {$\mathsf{MSO}_2$}\xspace edge-deletion} parameterized by the size of the formula $\psi$,
the pathwidth of $G$, and the size of minimum feedback vertex set of $G$ combined,
then $\ensuremath{\mathsf{FPT}}\xspace = \W{1}$.
Moreover, let $k$ denote $\ensuremath{\mathop{\mathrm{pw}}}(G)+\ensuremath{\mathop{\mathrm{fvs}}}(G)$.
If there is an algorithm for
\textsc{Fair {$\mathsf{MSO}_2$}\xspace edge-deletion} with running time $f(|\psi|, k) n^{o(\sqrt k)}$,
then Exponential Time Hypothesis fails.
\end{theorem}
On the other hand, we show positive algorithmic results for the generalized versions of the problems.
\begin{theorem}
\label{thm:FPTneighbordiversity}
\textsc{Generalized Fair {$\mathsf{MSO}_1$}\xspace vertex-deletion} is in \ensuremath{\mathsf{FPT}}\xspace with respect to the neighborhood diversity $\ensuremath{\mathop{\mathrm{nd}}}(G)$ and the size of the formula $\psi$.
\end{theorem}
We also provide an algorithm for the {$\mathsf{MSO}_2$}\xspace logic (strictly more powerful than {$\mathsf{MSO}_1$}\xspace); however, we need a more restrictive parameter, because model checking of an {$\mathsf{MSO}_2$}\xspace formula is not even in \ensuremath{\mathsf{XP}}\xspace on cliques unless ${\mathsf{E}}={\mathsf{NE}}$~\cite{Courcelle:00,Lampis:13}. We consider the size of a minimum vertex cover, which allows us to attack the edge-deletion problem in \ensuremath{\mathsf{FPT}}\xspace time.
\begin{theorem}
\label{thm:FPTvertexCover}
\textsc{Generalized Fair {$\mathsf{MSO}_2$}\xspace edge-deletion} is in \ensuremath{\mathsf{FPT}}\xspace with respect to the size of minimum vertex cover $\ensuremath{\mathop{\mathrm{vc}}}(G)$ and the size of the formula $\psi$.
\end{theorem}
\section{FPT algorithms}
We now turn our attention to \ensuremath{\mathsf{FPT}}\xspace algorithms for fair deletion problems.
\subsection{FPT algorithm for parameterization by neighborhood diversity}
\begin{definition}
Let $G = (V,E)$ be a graph of neighborhood diversity $k$ and let $N_1,\ldots,N_k$
denote its classes of neighborhood diversity.
A \emph{shape of a set $X \subseteq V$ in $G$} is a $k$-tuple $s=(s_1,\ldots,s_k)$, where $s_i = |X \cap N_i|$.
We denote by $\overline s$ the \emph{complementary shape to $s$}, which is defined as the shape of $V \setminus X$,
i.e. $\overline{s} = (|N_1| - s_1, \ldots, |N_k| - s_k)$.
\end{definition}
\begin{proposition}
\label{prop:property_depends_on_shape}
Let $G = (V,E)$ be a graph, $\pi$ a property of a set of vertices, and let $X,Y \subseteq V$ be two sets of the same shape in $G$. Then $X$ satisfies $\pi$ if and only if $Y$ satisfies $\pi$.
\end{proposition}
\begin{proof}
Clearly, we can construct an automorphism of $G$ that maps $X$ to $Y$.
\end{proof}
\begin{definition}
Let $r$ be a non-negative integer and let $(s_1, \ldots, s_k)$, $(t_1, \ldots, t_k)$ be two shapes. The
shapes are \emph{$r$-equivalent}, if for every $i$:
\begin{itemize}
\item $s_i = t_i$, or
\item both $s_i$, $t_i$ are strictly greater than $r$,
\end{itemize}
and the same condition holds for the complementary shapes $\overline s$, $\overline t$.
\end{definition}
The following proposition gives a bound on the number of $r$-nonequivalent shapes.
\begin{proposition}
\label{prop:num_of_noneq_shapes}
For any graph $G$ of neighborhood diversity $k$, the number
of $r$-nonequivalent shapes is at most $(2r+3)^k$.
\end{proposition}
\emph{Proof.}
We show that for every $i$, there are at most $2r+3$ choices of $s_i$.
This holds trivially if $|N_i| \leq 2r+3$. Otherwise we have the following $2r+3$ choices:
\begin{itemize}
\item $s_i = c$ and $\overline{s_i} > r$ for $c = 0,1,\ldots,r$, or
\item both $s_i, \overline{s_i} > r$, or
\item $s_i > r$ and $\overline{s_i} = c$ for $c = 0,1,\ldots,r$.
\end{itemize}
\vskip-20pt \hfill\qed
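To sanity-check this count, the following brute-force sketch (our own illustration, not part of the paper; all identifiers are ours) enumerates, for a single class of size $n$, the $r$-equivalence keys of the possible values $s_i = 0,\ldots,n$ and counts how many distinct keys arise.

```python
def equivalence_key(s, n, r):
    # Collapse a per-class count s (with complement n - s) into its
    # r-equivalence key: values up to r are kept exactly, larger
    # values are merged into a single "big" bucket.
    comp = n - s
    return (s if s <= r else 'big', comp if comp <= r else 'big')

def count_classes(n, r):
    # Number of r-equivalence classes among the choices s = 0, ..., n
    # for one neighborhood-diversity class of size n.
    return len({equivalence_key(s, n, r) for s in range(n + 1)})
```

For a sufficiently large class the count is exactly $2r+3$, matching the proposition.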
The next lemma states that the fair cost of a set can be computed from its shape in a straightforward manner.
Before we state it, let us introduce some auxiliary notation.
If a graph $G$ of neighborhood diversity $k$ has classes of neighborhood diversity $N_1,\ldots,N_k$,
we write $i \sim j$ if the classes $N_i$ and $N_j$ are adjacent. If the class $N_i$ is a clique,
we set $i \sim i$.
Moreover, we set $\eta_i =1$ if the class $N_i$ is a clique and $\eta_i = 0$ if it is an independent set.
The classes of size one are treated as cliques for this purpose.
\begin{lemma}
\label{lem:fair_cost_from_shape}
Let $G = (V,E)$ be a graph of neighborhood diversity $k$ and let $N_i$ be its classes of neighborhood diversity.
Moreover, let $X \subseteq V$ be a set of shape $s$. Then
the fair vertex cost of $X$ is
$$ \max_{i} \bigg(\Big(\sum_{j:i\sim j} s_j\Big)- \eta_{i}\bigg).$$
\end{lemma}
\begin{proof}
It is straightforward to check that vertex $v \in N_i$ has exactly
$\sum_{j:i\sim j} s_j - \eta_{i}$ neighbors in $X$.
\end{proof}
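As an illustration (our own sketch; the function and variable names are ours), the formula above can be evaluated directly from a shape:

```python
def fair_vertex_cost(shape, adjacent, clique):
    # Evaluate max_i ( sum_{j : i ~ j} s_j - eta_i ), the fair vertex
    # cost formula of the lemma.  adjacent[i][j] is True iff classes
    # N_i and N_j are adjacent (with adjacent[i][i] True iff N_i is a
    # clique); clique[i] encodes eta_i.
    k = len(shape)
    return max(
        sum(shape[j] for j in range(k) if adjacent[i][j]) - (1 if clique[i] else 0)
        for i in range(k)
    )
```

For instance, for two classes where $N_1$ is a clique of size two adjacent to an independent class $N_2$, and the shape is $(2,1)$, the formula yields $2$.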
Our main tool is a reformulation of Lemma~5 from \cite{Lam}:
\begin{lemma}
\label{lem:formula_and_large_shape}
Let $\psi$ be an {$\mathsf{MSO}_1$}\xspace formula with one free vertex-set variable, $q_E$ vertex element quantifiers,
and $q_S$ vertex set quantifiers. Let $r = 2^{q_S}q_E$. If $G = (V,E)$ is a graph
of neighborhood diversity $k$ and
$X,Y \subseteq V$ are two sets such that their shapes are $r$-equivalent, then $G \models \psi(X)$ if and only if
$G \models \psi(Y)$.
\end{lemma}
The last result required is the {$\mathsf{MSO}_1$}\xspace model checking for graphs of bounded neighborhood diversity~\cite{Lam}:
\begin{theorem}
\label{thm:nd_MSO_model_checking}
Let $\psi$ be an {$\mathsf{MSO}_1$}\xspace formula with one free vertex-set variable. There exists an \ensuremath{\mathsf{FPT}}\xspace algorithm
that given a graph $G = (V,E)$ of neighborhood diversity $k$ and a set $X\subseteq V$ decides whether $G \models \psi(X)$.
The running time of the algorithm is $f(k,|\psi|)n^{\bigO{1}}$.
\end{theorem}
We now have all the tools required to prove Theorem~\ref{thm:FPTneighbordiversity}.
\begin{proof}[Proof of Theorem~\ref{thm:FPTneighbordiversity}]
Let $\psi$ be an {$\mathsf{MSO}_1$}\xspace formula in the input of \textsc{Fair {$\mathsf{MSO}_1$}\xspace vertex-deletion}. Denote by $q_S$ the number
of vertex-set quantifiers in $\psi$, by $q_E$ the number of vertex-element quantifiers in $\psi$, and set $r = 2^{q_S}q_E$.
By Proposition~\ref{prop:property_depends_on_shape}, the validity of $\psi(X)$ depends only on the shape of $X$.
Let us abuse notation slightly and write $G \models \psi(s)$ when ``$X$ has shape $s$'' implies $G\models \psi(X)$. Similarly, Lemma~\ref{lem:fair_cost_from_shape} allows us to refer to the fair cost of a shape $s$.
From Lemma~\ref{lem:formula_and_large_shape} it follows that the validity of $\psi(s)$ does not depend on the choice
of an $r$-equivalence class representative. The fair cost is not the same for all $r$-equivalent shapes, but since the fair cost is monotone in $s$, we can easily find the representative of minimal fair cost.
Suppose we have to decide whether there is a set of fair cost at most $\ell$. The algorithm proceeds as follows:
for each class of $r$-equivalent shapes, pick a shape $s$ of minimal fair cost; if this cost is at most $\ell$ and $G \models \psi(s)$,
output \texttt{true}. If no such shape is found throughout the run, output \texttt{false}.
By the previous claims, the algorithm is correct. Let us turn our attention to the running time. The number
of shapes is at most $(2r+3)^k$ by Proposition~\ref{prop:num_of_noneq_shapes}, and so it is bounded
by $f(|\psi|,k)$ for some function $f$. The {$\mathsf{MSO}_1$}\xspace model checking runs in time $f'(|\psi|,k)n^{\bigO{1}}$ by Theorem~\ref{thm:nd_MSO_model_checking},
so the total running time is $f(|\psi|,k)f'(|\psi|,k)n^{\bigO{1}}$; hence the described algorithm
is an \ensuremath{\mathsf{FPT}}\xspace algorithm.
\end{proof}
\subsection{FPT algorithm for parameterization by vertex cover}
The \ensuremath{\mathsf{FPT}}\xspace algorithm for parameterization by the size of a minimum vertex cover uses the same idea.
We use the fact that every {$\mathsf{MSO}_2$}\xspace formula can be translated into an {$\mathsf{MSO}_1$}\xspace formula --- roughly speaking,
every edge-set variable is replaced by $\ensuremath{\mathop{\mathrm{vc}}}{(G)}$ vertex-set variables.
We only sketch the translation from {$\mathsf{MSO}_2$}\xspace to {$\mathsf{MSO}_1$}\xspace; for the proof we refer
the reader to Lemma~6 in~\cite{Lam}. Let $G = (V,E)$ be a graph
with vertex cover ${C = \{v_1,\ldots,v_k\}}$ and $F\subseteq E$ a set of edges.
We construct vertex sets $U_1,\ldots,U_k$ in the following way: if $w$ is
a vertex such that an edge in $F$ connects $w$ with $v_i$, we put $w$ into $U_i$.
It is easy to see that the sets $U_1,\ldots,U_k$ together with the vertex cover
$v_1,\ldots, v_k$ describe the set $F$.
In this way, we reduce the problem of finding a set $F$ to finding a $k$-tuple
of sets $(U_1, \ldots, U_k)$. We can define shapes and classes of $r$-equivalence
in a way analogous to the previous section. Since the number of $r$-equivalence classes defined in this way is still bounded, we can use essentially the same algorithm:
for each class of $r$-equivalence, run a model checking on a representative of this class.
From those representatives that satisfy $\psi$, we choose the one with best fair cost.
The translation from set of edges into $k$ sets of vertices is captured by the following definition.
\begin{definition}
Let $G = (V,E)$ be a graph with vertex cover $v_1,\ldots,v_k$. For a set $F\subseteq E$, we define
\emph{the signature of $F$ with respect to $v_1,\ldots,v_k$} as the $k$-tuple ${\cal U} = (U_1,\ldots,U_k)$,
where $U_i = \{ w \in V \mid \{w,v_i\} \in F\}$.
We refer to it simply as \emph{the signature} of $F$ and denote it by $S(F)$ if the vertex cover is clear from the context.
\end{definition}
In the original problem, we had an {$\mathsf{MSO}_2$}\xspace formula $\psi_2$ with one free edge-set variable.
By the translation, we obtain an {$\mathsf{MSO}_1$}\xspace formula $\psi$ with $k$ free vertex-set variables
and $k$ free vertex-element variables (the vertex-element variables describe the vertex
cover; the formula needs to have access to a vertex cover, and it is useful to fix one throughout the whole run of the
algorithm).
We start by finding a vertex cover $v_1,\ldots,v_k$ (this can be solved by an \ensuremath{\mathsf{FPT}}\xspace algorithm \cite{df13}).
We now want to find the sets $U_1,\ldots,U_k$ such that: $${G \models \psi(v_1,\ldots,v_k,U_1,\ldots,U_k)}.$$
To find such a $k$-tuple of sets, we need to extend the notion of shapes to signatures.
\begin{definition}
Let $G = (V,E)$ be a graph with vertex cover $v_1,\ldots,v_k$, and let ${\cal U} = (U_1,\ldots,U_k)$
be a collection of $k$ subsets of $V$.
Denote by $N_1,\ldots,N_\ell$ the classes of neighborhood diversity of $G$.
For $j \in \{1,\ldots,\ell\}$ and $I \subseteq \{1 \ldots k\}$, denote by $\overline I$ the
set $\{1,\ldots,k\} \setminus I$. Furthermore, we define $S_{\cal U}(j,I)$ as
$$ S_{\cal U}(j,I) = \bigg|N_j \cap \bigcap_{i \in I} U_i \cap \bigcap_{i\in \overline I} (V \setminus U_i) \bigg|.$$
The mapping $S_{\cal U}$ is called \emph{the shape of a signature $\cal U$}.
\end{definition}
The shapes defined in this way have properties similar to those defined for neighborhood diversity; we only state those
properties without proofs.
\begin{definition}
Two shapes $S$, $S'$ are \emph{$r$-equivalent} if for every $j \in \{1,\ldots,\ell\}$ and every $I \subseteq \{1,\ldots,k\}$ it holds
that
\begin{itemize}
\item $S(j,I) = S'(j,I)$, or
\item both $S(j,I)$, $S'(j,I)$ are strictly greater than $r$.
\end{itemize}
\end{definition}
As in the neighborhood diversity case, the number of $r$-nonequivalent shapes is bounded by a function of $r$ and $k$.
\begin{proposition}
\label{prop:extShapeCount}
Let $G = (V,E)$ be a graph with vertex cover $v_1,\ldots,v_k$ and denote by $\ell$
the neighborhood diversity of $G$.
The number of $r$-nonequivalent shapes is at most $(2r+3)^{\ell 2^k}$.
\end{proposition}
We now state corresponding variants of Lemma~\ref{lem:fair_cost_from_shape} and Lemma~\ref{lem:formula_and_large_shape}.
\begin{lemma}
Let $G = (V,E)$ be a graph with a vertex cover $v_1,\ldots, v_k$, and let $F \subseteq E$ be a set of edges with signature $S(F) = (U_1,\ldots,U_k)$.
The number of edges in $F$ incident to $v_i$ is $|U_i|$. If $w$ is a vertex different from $v_1,\ldots,v_k$,
then the number of edges in $F$ incident to $w$ is $|\{ i \mid w \in U_i \}|$.
Those quantities (and therefore the fair cost of $F$) can be determined from the shape of $S(F)$.
\end{lemma}
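The computation described in the lemma can be sketched as follows (our own illustration; the names are ours), recovering the fair edge cost of $F$ from its signature:

```python
def fair_edge_cost(cover, signature, vertices):
    # signature[i] = U_i = { w : {w, cover[i]} in F }.
    # The cover vertex cover[i] is incident to |U_i| edges of F; any
    # vertex w outside the cover is incident to |{ i : w in U_i }|
    # edges of F.  The fair edge cost is the maximum of these counts.
    cover_set = set(cover)
    cost = max((len(U) for U in signature), default=0)
    for w in vertices:
        if w not in cover_set:
            cost = max(cost, sum(1 for U in signature if w in U))
    return cost
```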
\begin{lemma}
Let $G = (V,E)$ be a graph with a vertex cover $v_1,\ldots, v_k$, let
$\psi$ be an {$\mathsf{MSO}_1$}\xspace formula with $k$ free vertex-element variables and $k$ free vertex-set variables, and let ${\cal U} = (U_1,\ldots,U_k)$,
${\cal W} = (W_1, \ldots, W_k)$ be two signatures. If the shapes of $\cal U$ and $\cal W$ are $r$-equivalent,
then $G \models \psi(v_1,\ldots,v_k, U_1, \ldots, U_k)$ if and only if $G \models \psi(v_1,\ldots,v_k, W_1,\ldots,W_k)$.
\end{lemma}
\begin{proof}[Proof of Theorem~\ref{thm:FPTvertexCover}]
The algorithm goes as follows:
\begin{itemize}
\item We translate the {$\mathsf{MSO}_2$}\xspace formula $\psi_2$ with one free edge-set variable into
the {$\mathsf{MSO}_1$}\xspace formula $\psi$ with $k$ vertex-element variables and $k$ vertex-set variables.
\item We find a vertex cover $c_1,\ldots, c_k$.
\item For each class of $r$-equivalent shapes, we pick a shape achieving the minimal fair cost,
determine a signature $U_1,\ldots,U_k$ of that shape, and check whether $${G \models \psi(c_1,\ldots,c_k,U_1,\ldots,U_k)}.$$
\end{itemize}
Similarly to Theorem~\ref{thm:FPTneighbordiversity}, the algorithm is correct. Moreover, we perform only a bounded number (Proposition~\ref{prop:extShapeCount})
of {$\mathsf{MSO}_1$}\xspace model checking calls, so the whole algorithm runs in \ensuremath{\mathsf{FPT}}\xspace time.
\end{proof}
\section{Preliminaries}
Throughout the paper we deal with simple undirected graphs.
For further standard notation in graph theory, we refer to Diestel~\cite{Diestel}.
For terminology in parameterized computational complexity we refer to Downey and Fellows~\cite{df13}.
\subsection{Graph parameters}
We define several graph parameters being used throughout the paper.
\begin{figure}[ht]
\begin{minipage}[c]{0.5\textwidth}
\centering
\includegraphics{img/classes-401.mps}
\end{minipage}\hfill
\begin{minipage}[c]{0.5\textwidth}
\caption{Hierarchy of graph parameters. An arrow indicates that one graph parameter upper-bounds the other. Thus, hardness results are implied in the direction of the arrows and \ensuremath{\mathsf{FPT}}\xspace algorithms are implied in the reverse direction.}
\label{fig:classes}
\end{minipage}
\end{figure}
We start with the definition of a \emph{vertex cover}: a set of vertices whose complement forms an independent set. By $\ensuremath{\mathop{\mathrm{vc}}}{(G)}$ we denote the size of a smallest such set. This is the strongest of the considered parameters; it is not bounded for any natural graph class.
A \emph{feedback vertex set} is a set of vertices whose removal leaves an acyclic graph. Again, by $\ensuremath{\mathop{\mathrm{fvs}}}{(G)}$ we denote the size of a smallest such set.
Another famous graph parameter is \emph{tree-width}, introduced by Bertelè and Brioschi~\cite{bb72}.
\begin{definition}[Tree decomposition]
A \emph{tree decomposition} of a graph $G$ is a pair $(T,X)$, where ${T=(I,F)}$ is a tree, and $X=\{X_i\mid i\in I\}$ is a family of subsets of $V(G)$ such that:
\begin{itemize}
\item the union of all $X_i$, $i\in I$ equals $V$,
\item for all edges $\{v,w\}\in E$, there exists $i\in I$, such that $v,w\in X_i$ and
\item for all $v\in V$ the set of nodes $\{i\in I\mid v\in X_i\}$ forms a subtree of $T$.
\end{itemize}
\end{definition}
The \emph{width} of the tree decomposition is $\max_{i \in I}(|X_i|-1)$.
The \emph{tree-width} of a graph $\ensuremath{\mathop{\mathrm{tw}}}{(G)}$ is the minimum width over all possible tree decompositions of the graph $G$.
The parameter \emph{path-width} (denoted analogously by $\ensuremath{\mathop{\mathrm{pw}}}{(G)}$) is defined in almost the same way, except that the decomposition needs to form a path instead of a general tree.
A less known graph parameter is the \emph{neighborhood diversity} introduced by Lampis~\cite{Lam}.
\begin{definition}[Neighborhood diversity]
The \emph{neighborhood diversity} of a graph $G$ is denoted by $\ensuremath{\mathop{\mathrm{nd}}}{(G)}$ and it is the minimum size of a partition of vertices into classes such that all vertices in the same class have the same neighborhood, i.e. ${N(v)\setminus\{v'\}=N(v')\setminus\{v\}}$, whenever
$v,v'$ are in the same class.
\end{definition}
It can be easily verified that every class of neighborhood diversity is either a clique or an independent set.
Moreover, for every two distinct classes $C$ and $C'$, either every vertex in $C$ is adjacent to every vertex in $C'$,
or there is no edge between them. If classes $C$ and $C'$ are connected by edges,
we refer to such classes as \emph{adjacent}.
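To make the definition concrete, here is a brute-force sketch (our own; it is not an efficient algorithm, and all identifiers are ours) that partitions the vertices into neighborhood-diversity classes by checking the defining condition directly. Since the relation is an equivalence, comparing each vertex against one representative per class suffices.

```python
def neighborhood_diversity_classes(vertices, edges):
    # Partition vertices into classes with  u ~ v  iff
    # N(u) \ {v} == N(v) \ {u}  (the defining condition).
    nbrs = {v: set() for v in vertices}
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    classes = []
    for v in vertices:
        for cls in classes:
            u = cls[0]  # one representative per class is enough
            if nbrs[u] - {v} == nbrs[v] - {u}:
                cls.append(v)
                break
        else:
            classes.append([v])
    return classes
```

For the complete bipartite graph $K_{2,3}$ this yields two classes (one per side), while a path on four vertices has neighborhood diversity $4$.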
\subsection{Parameterized problems and Exponential Time Hypothesis}
\begin{definition}[Parameterized language]
Let $\Sigma$ be a finite alphabet.
A \emph{parameterized language} $L \subseteq \Sigma^\ast \times \N$ is a set of pairs $(x, k)$ where $x$ is a finite word over $\Sigma$ and $k$ is a nonnegative integer.
\end{definition}
We say that an algorithm for a parameterized language $L$ is an \emph{\ensuremath{\mathsf{FPT}}\xspace algorithm} if there exist
a constant $c$ and a computable function $f$ such that the running time for input $(x,k)$ is $f(k)|x|^c$ and the algorithm accepts
$(x,k)$ if and only if $(x,k) \in L$.
A standard tool for showing nonexistence of an \ensuremath{\mathsf{FPT}}\xspace algorithm is \W{1}-hardness (assuming $\ensuremath{\mathsf{FPT}}\xspace \neq \W{1}$).
For the definition of \W{1} class and the notion of \W{1}-hardness, we refer the reader to~\cite{df13}.
A stronger assumption than $\ensuremath{\mathsf{FPT}}\xspace \neq \W{1}$ that can be used to obtain hardness results is
the Exponential Time Hypothesis (ETH for short). It is a complexity theoretic assumption introduced by Impagliazzo, Paturi and Zane~\cite{IPZ01:ETH}.
We follow a survey on the topic of lower bounds obtained from ETH by Lokshtanov, Marx, and Saurabh~\cite{LMS11:ETHLowerBoundsSurvey}, which contains more details on this topic.
The hypothesis states that there is no subexponential time algorithm for {\sc 3-SAT} if we measure the time complexity by the number of variables in the input formula, denoted by $n.$
\vskip .2cm
\begin{minipage}[c]{.9\textwidth}
{\bf Exponential Time Hypothesis (ETH)~\cite{IPZ01:ETH}}
There is a positive real $s$ such that {\sc 3-SAT} with $n$ variables and $m$ clauses cannot be solved in time ${2^{sn}(n+m)^{\bigO{1}}}.$
\end{minipage}
\vskip .2cm
\begin{definition}[Standard parameterized reduction]\label{def:reduction}
We say that a parameterized language $L$ reduces to a parameterized language $L'$ by a \emph{standard parameterized reduction} if there are functions ${f,g\colon\N\to\N}$
and ${h\colon\Sigma^*\times\N\to\Sigma^*}$ such that
function $h$ is computable in time $g(k) |x|^c$ for a constant $c$, and
$(x,k)\in L$ if and only if $(h(x,k),f(k))\in L'$.
\end{definition}
For preserving bounds obtained from the ETH, the asymptotic growth of the function $f$ needs to be as slow as possible.
\subsection{Logic systems}
\label{subsec:logic_systems}
We heavily use graph properties that can be expressed in certain logical systems,
namely \emph{monadic second-order logic} ({$\mathsf{MSO}$}\xspace), where monadic means that we allow quantification over sets (of vertices and/or edges). In \emph{first-order logic} ({$\mathsf{FO}$}\xspace) there are no set variables at all. The \emph{size} $|\psi|$ of a formula $\psi$ is the number of symbols it contains.
We distinguish {$\mathsf{MSO}_2$}\xspace and {$\mathsf{MSO}_1$}\xspace. In {$\mathsf{MSO}_1$}\xspace quantification only over sets of vertices is allowed and we can use the predicate of adjacency \adj{u}{v} returning true whenever there is an edge between vertices $u$ and $v$.
In {$\mathsf{MSO}_2$}\xspace we can additionally quantify over sets of edges and we can use the predicate of incidence \inc{v}{e} returning true whenever a vertex $v$ belongs to an edge $e$.
It is known that {$\mathsf{MSO}_2$}\xspace is strictly more powerful than {$\mathsf{MSO}_1$}\xspace. For example, the property that a graph is Hamiltonian is expressible in {$\mathsf{MSO}_2$}\xspace but not in {$\mathsf{MSO}_1$}\xspace \cite{LibkinFMT}.
Note that in {$\mathsf{MSO}_1$}\xspace it is easy to describe several complex graph properties, such as being connected or having a vertex of constant degree.
\section{Hardness results}
In this section, we prove hardness of \textsc{Fair {$\mathsf{FO}$}\xspace vertex-deletion} by exhibiting a reduction from \textsc{Equitable 3-coloring}.
\prob{Equitable 3-coloring}
{An undirected graph $G$.}
{Is there a proper coloring of vertices of $G$ by at most $3$ colors such that the size of any two color classes differ by at most one?}
The following result was proven implicitly in~\cite{eq_coloring}.
\begin{theorem}
\label{thm:eq_col_hardness}
\textsc{Equitable 3-coloring} is $\W{1}$-hard with respect to $\ensuremath{\mathop{\mathrm{pw}}}(G)$ and $\ensuremath{\mathop{\mathrm{fvs}}}(G)$ combined. Moreover, if there exists an algorithm for \textsc{Equitable 3-color\-ing} running in time $f(k)n^{o(\sqrt[3]k)}$, where $k$ is $\ensuremath{\mathop{\mathrm{pw}}}(G) + \ensuremath{\mathop{\mathrm{fvs}}}(G)$, then the Exponential Time Hypothesis fails.
\end{theorem}
The proof in~\cite{eq_coloring} relies on a reduction from \textsc{Multicolored Clique}~\cite{DBLP:journals/tcs/FellowsHRV09} to \textsc{Equitable coloring}. The reduction transforms an instance of \textsc{Multicolored Clique} with parameter $k$ into an \textsc{Equitable coloring} instance of path-width and feedback vertex set size at most $\bigO{k}$ (though only tree-width is explicitly stated in the paper). An algorithm for \textsc{Equitable coloring} running in time $f(k)n^{o(\sqrt[3] k)}$ would lead to an algorithm for \textsc{Multicolored Clique} running in time $f(k)n^{o(k)}$. It was shown by Lokshtanov, Marx, and Saurabh~\cite{LMS11:ETHLowerBoundsSurvey} that such an algorithm does not exist unless ETH fails.
We now describe the idea behind the reduction from \textsc{Equitable 3-coloring} to \textsc{Fair {$\mathsf{FO}$}\xspace vertex-deletion}. Let us denote by $n$ the number of vertices of $G$ and assume that $3$ divides $n$. The vertices of $G$ are referred to as \emph{original vertices}. First, we add three vertices called \emph{class vertices}, each of which corresponds to a particular color class. Then we add an edge between every class vertex and every original vertex and subdivide each such edge. The vertices subdividing those edges are called \emph{selector vertices}.
We can encode a partition of $V(G)$ by deleting vertices in the following way: if $v$ is an original vertex and $c$ is a class vertex, then by deleting the selector vertex between $v$ and $c$ we declare that the vertex $v$ \emph{belongs} to the class represented by $c$. If we ensure that the set is deleted in such a way that every vertex belongs to exactly one class, we obtain a partition of $V(G)$.
The equitability of the partition will be handled by the fair objective function. Note that if we delete a subset $W$ of selector vertices that encodes a partition then $|W| = n$. Those $n$ vertices are adjacent to $3$ class vertices, so the best possible fair cost is $n/3$ and thus a solution of the fair cost $n/3$ corresponds to an equitable partition.
Of course, not every subset $W$ of vertices of our new graph encodes a partition. Therefore, the formula we are trying to satisfy must ensure that:
\begin{itemize}
\item every original vertex belongs to exactly one class,
\item no original or class vertex was deleted,
\item every class is an independent set.
\end{itemize}
However, the reduction as described is too naive to achieve those goals; we need to adjust it slightly.
Let us now describe the reduction formally:
\begin{proof}[Proof of Theorem~\ref{thm:hardvertex}]
Let $G$ be a graph on $n$ vertices. We can assume without loss of generality (by adding isolated vertices) that $3$ divides $n$ and $n\ge 6$.
First we describe how to construct the reduction. All vertices of $G$ will be referred to as \emph{original vertices}. We add three vertices called \emph{class vertices} and connect every original vertex with every class vertex by an edge. We subdivide each such edge once; the vertices subdividing those edges are called \emph{selector vertices}. Finally, for every original vertex $v$, we add $n$ new vertices called \emph{dangling vertices} and connect each of them by an edge to $v$. We denote the graph obtained in this way as $G'$. For a schema of the reduction, see Figure~\ref{fig:reduction_figure}.
\begin{figure}
\begin{center}
\includegraphics{img/fo-reduction.pdf}
\end{center}
\caption{The schema of the reduction}
\label{fig:reduction_figure}
\end{figure}
Now, we wish to find a set $W\seq V(G')$ that encodes an equitable 3-coloring of the graph $G$. The set is described by the {$\mathsf{FO}$}\xspace formula $eq\_3\_col$ below, imposed on the graph $G'\setminus W$. We claim that whenever $G' \setminus W$ satisfies this formula, the set $W$ encodes an equitable 3-coloring: $W$ can contain only selector vertices and possibly some dangling vertices (which do not affect the coloring), and for each original vertex $v$ exactly one selector vertex adjacent to $v$ is in $W$; the class vertex adjacent to that selector vertex determines the color
of $v$.
We use the following shorthand $\exists_{=k}$ meaning there are exactly $k$ distinct elements satisfying a given predicate:
\begin{multline*}
(\exists_{=k} w)(pred(w)) \equiv (\exists v_1, \ldots, v_k)\bigg(
\mathop{\bigwedge\limits}_{i=1}^k pred(v_i) \land \mathop{\bigwedge\limits}_{1 \leq i < j \leq k}(v_i \neq v_j)\\
\land (\forall v')\Big(pred(v') \rightarrow \mathop{\bigvee\limits}_{i=1}^k (v' = v_i)\Big)
\bigg)
\end{multline*}
The building blocks for the formula are as follows:
{
\allowdisplaybreaks
\begin{align*}
isol(v) &\equiv (\forall w)(\lnot adj(v,w)) \\
dangling(v) &\equiv (\exists w)\big(adj(v,w) \land (\forall w')(adj(v,w') \rightarrow w = w')\big) \\
original(v) &\equiv (\exists w)(dangling(w) \land adj(v,w)) \\
selector(v) &\equiv (\exists_{=2} w)(adj(v,w))\\
class(v) &\equiv \lnot original(v) \land \lnot selector(v) \land \lnot dangling(v) \\
belongs\_to(v,a) &\equiv original(v) \land class(a) \land \lnot(\exists w)(adj(v,w) \land adj(w,a)) \\
same\_class(v,w) &\equiv original(v) \land original(w) \\
&\quad \land (\exists a)(class(a) \land belongs\_to(v,a) \land belongs\_to(w,a)) \\
valid\_deletion &\equiv (\forall v)(\lnot isol(v)) \\ & \quad \land (\forall v)\big(original(v) \rightarrow (\exists_{=1} c)(belongs\_to(v,c))\big)\\
eq\_3\_col &\equiv valid\_deletion \land (\forall v,w)(same\_class(v,w) \rightarrow \lnot adj(v,w)) \\
\end{align*}
}
The described reduction maps an instance $G$ of an \textsc{Equitable coloring} into an instance $(G', eq\_3\_col, n/3)$ of \textsc{Fair {$\mathsf{FO}$}\xspace vertex-deletion}.
We claim that there exists a set $W \subseteq V(G')$ of the fair cost at most $n/3$ if and only if $G$ admits an equitable 3-coloring.
If we have an equitable $3$-coloring of $G$ then it is easy to see that the set $W \subseteq V(G')$ corresponding to a partition into color classes has the fair cost exactly $n/3$ and it is straightforward to check that $G' \setminus W \models eq\_3\_col$.
For the other implication, we prove that if we delete a subset $W \subseteq V(G')$ of fair cost at most $n/3$ such that the formula $valid\_deletion$ is true, then we obtain an equitable 3-coloring of the graph $G$. To this end, we prove a few basic claims.
\emph{Claim 1: no original vertex was deleted:}
Suppose for contradiction that an original vertex $v$ was deleted. If we keep at least one of the dangling vertices attached to $v$, then this vertex is now isolated and the formula $valid\_deletion$ is not true. On the other hand, if we delete all dangling vertices that were attached to $v$, our deleted set has fair cost at least $n$.
\emph{Claim 2: if $w$ has degree one in $G' \setminus W$, then its only neighbor is an original vertex:}
If $w$ is dangling, then its only neighbor is an original vertex by the construction of $G'$. Suppose that $w$ has degree one in $G' \setminus W$ but is not dangling. Since both class and original vertices have degree at least $n$ in $G'$, we cannot bring them down to degree one without exceeding the fair cost limit $n/3$. This leaves the only possibility: $w$ is a selector vertex and exactly one of its two neighbors is in the deleted set $W$. By Claim 1, the deleted neighbor must have been a class vertex, so the only remaining neighbor of $w$ in $G' \setminus W$ is an original vertex.
\emph{Claim 3: the formula $original$ correctly recognizes original vertices:}
If $v$ is original, then at least one of its dangling neighbors is not in $W$, otherwise we would exceed the fair cost limit. In this case the formula $original(v)$ is true. The other direction ($original(v)$ being true implies $v$ is original) follows from Claim 2.
\emph{Claim 4: if $v$ is a dangling vertex such that $v \notin W$ then $dangling(v)$ is true:} By Claim 1, we cannot delete the only neighbor of $v$, which means $v$ has exactly one neighbor and so $dangling(v)$ is true.
\emph{Claim 5: the formula $class(v)$ is true if and only if $v$ is a class vertex that was not deleted:} Suppose that $v \notin W$ is a class vertex. It cannot have a neighbor of degree one in $G' \setminus W$, because that would mean that an original vertex was deleted, which violates Claim 1. This means that $original(v)$ is false. Moreover, we cannot decrease the degree of $v$ to two or less by deleting at most $n/3$ of its neighbors, so $dangling(v)$ and $selector(v)$ are false too. But then $class(v)$ is true.
For the other direction suppose that $v$ is not a class vertex. If it is original or dangling, then $original(v)$ or $dangling(v)$ is true (by Claim 3 or Claim 4) and hence $class(v)$ is false. If $v$ is a selector then either none of its neighbors were deleted, $v$ has degree two in $G' \setminus W$ and $selector(v)$ is true, or its class neighbor was deleted, $v$ has degree one in $G' \setminus W$ and $dangling(v)$ is true. Either way, $class(v)$ is false as required.
\emph{Claim 6: no class vertex was deleted:} since $valid\_deletion$ is true, we know that for every original vertex $v$ there is exactly one class vertex $c$ such that there is no path of length two between $v$ and $c$ (in other words, the selector vertex on the unique path of length two between $v$ and $c$ was deleted). Suppose for contradiction that one of the class vertices was deleted; then by Claim 5 we have at most two class vertices left. But the $valid\_deletion$ formula implies that at least $n$ selector vertices were deleted. By the pigeonhole principle, one of the remaining class vertices has at least $n/2$ deleted neighbors, which means the fair cost is greater than $n/3$, a contradiction.
The chain of claims we just proved guarantees that the deleted set $W$ indeed obeys the rules we required and corresponds to a partition (though we might have deleted a small number of dangling vertices, this does not affect the partition in any way). In order to meet the fair cost limit, each class of the partition must have at most $n/3$ vertices and since no original vertex was deleted, it has exactly $n/3$ vertices. Now it is easy to see that the formula $eq\_3\_col$ forces that each class of the partition is independent and so the graph $G$ has an equitable $3$-coloring.
Let us now discuss the parameters and the size of the \textsc{Fair {$\mathsf{FO}$}\xspace vertex-deletion} instance. If $G$ has a feedback vertex set $S$ of size $k$, then the union of $S$ with the set of class vertices is a feedback vertex set of $G'$. Therefore, $\ensuremath{\mathop{\mathrm{fvs}}}(G') \leq \ensuremath{\mathop{\mathrm{fvs}}}(G) + 3$. To bound the path-width, observe that after deletion of the class vertices we are left with $G$ with $\bigO{n^2}$ added vertices of degree one; the addition of degree-one vertices to the original vertices can increase the path-width by at most one, and so we have $\ensuremath{\mathop{\mathrm{pw}}}(G') \leq \ensuremath{\mathop{\mathrm{pw}}}(G) + 4$. Moreover, the instance clearly has size $\bigO{n^2}$, and the reduction can be carried out in polynomial time.
\end{proof}
Let us mention that if we are allowed to use {$\mathsf{MSO}$}\xspace formulas, we are actually able to reduce any equitable partition problem to fair vertex deletion. This allows us to reduce for example \textsc{Equitable connected partition} to \textsc{Fair {$\mathsf{MSO}$}\xspace vertex-deletion} which in turn allows us to prove Theorem~\ref{thm:MSO_hardvertex}.
\prob{Equitable connected partition}
{An undirected graph $G$, a positive integer $r$}
{Is there a partition of $V(G)$ into $r$ sets such that each of them induces a connected graph and the sizes of every two sets differ by at most one?}
Enciso et al.~\cite{eq_conn_part} showed that \textsc{Equitable Connected Partition} is \W{1}-hard for the combined parameterization by $\ensuremath{\mathop{\mathrm{fvs}}}(G)$, $\ensuremath{\mathop{\mathrm{pw}}}(G)$, and the number of parts $r$. The fact that an $f(k)n^{\smallo{\sqrt{k}}}$ algorithm would refute ETH is again contained only implicitly; the proof reduces an instance of \textsc{Multicolored clique} with parameter $k$ to an instance of \textsc{Equitable connected partition} with parameter $\bigO{k^2}$.
Our reduction can be easily adapted to $r$ parts (we just add $r$ class vertices and we set the fair cost limit to $n / r$). We define the formula $eq\_conn$ as follows.
\begin{align*}
class\_set(W) &\equiv (\exists v \in W) \land (\forall v,w \in W)( same\_class(v,w)) \\
&\quad \land (\forall w \in W, z \notin W)(\lnot same\_class(w,z)) \\
eq\_conn &\equiv (\forall W)(class\_set(W) \rightarrow connected(W))
\end{align*}
By the same argument as in the proof of Theorem~\ref{thm:hardvertex}, we can show that there exists $W \subseteq V$ of fair cost at most $n/r$ such that $G' \setminus W \models eq\_conn$ if and only if $G$ admits an equitable connected partition.
\smallskip
\noindent\emph{Sketch of proof of Theorem~\ref{thm:edge_deletion_hardness}:} We do not present the complete proof, as the critical parts are the same as in the proof of Theorem~\ref{thm:hardvertex}.
The reduction follows the same idea as before: we add three class vertices and connect each class vertex to each original vertex by an edge. This time, we do not subdivide the edges, as the partition is encoded by deleting the edges.
The protection against tampering with the original graph has to be done in a slightly different way: in this case, we add $n/3 + 1$ dangling vertices of degree one to each original vertex.
Note that if we delete a set $F \subseteq E(G')$ of fair cost at most $n/3$, at least one of the added edges at every original vertex survives the deletion, so we can recognize the original vertices by their having at least one neighbor of degree one.
In our formula, we require that each vertex has at most two neighbors of degree one. This forces us to delete all but two of the added edges at every original vertex. Since at least one further edge incident to each original vertex must be deleted to encode the partition, deleting any edge of the original graph $G$ would exceed the fair cost limit $n/3$.
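For concreteness, the gadget just described can be sketched in a few lines; the vertex labels and edge-list representation below are illustrative choices of ours, not part of the reduction's formal definition.

```python
def build_edge_deletion_gadget(n, edges):
    """Sketch of the edge-deletion gadget: three class vertices adjacent to
    every original vertex, plus n/3 + 1 dangling degree-one vertices
    attached to each original vertex (the edges of G are kept unchanged)."""
    assert n % 3 == 0, "G is assumed to have n divisible by 3"
    new_edges = list(edges)                       # original edges of G
    dangling_id = 0
    for v in range(n):
        for c in ("c0", "c1", "c2"):              # class edges encode the 3-partition
            new_edges.append((v, c))
        for _ in range(n // 3 + 1):               # dangling guards: they make deleting
            new_edges.append((v, f"d{dangling_id}"))  # an original edge exceed the
            dangling_id += 1                      # fair cost n/3
    return new_edges
```

Each original vertex ends up incident to its original edges plus $3$ class edges and $n/3+1$ dangling edges, matching the degree accounting in the argument above.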
For the edge-deletion the formula $eq\_3\_col$ is built as follows.
\allowdisplaybreaks
\begin{align*}
dangling(v) &\equiv (\exists w)\big(adj(v,w) \land (\forall w')(adj(v,w') \rightarrow w = w')\big) \\
original(v) &\equiv (\exists w)(dangling(w) \land adj(v,w)) \\
class(v) &\equiv \lnot original(v) \land \lnot dangling(v) \\
belongs\_to(v,a) &\equiv original(v) \land class(a) \land \lnot adj(v,a) \\
same\_class(v,w) &\equiv original(v) \land original(w) \\
&\quad \land (\exists a)(class(a) \land belongs\_to(v,a) \land belongs\_to(w,a)) \\
valid\_deletion &\equiv (\forall v)( \exists_{\leq 2} w)(adj(v,w) \land dangling(w)) \\ & \quad \land (\forall v)\big(original(v) \rightarrow (\exists_{=1} c)(belongs\_to(v,c))\big)\\
eq\_3\_col &\equiv valid\_deletion \land (\forall v,w)(same\_class(v,w) \rightarrow \lnot adj(v,w))
\end{align*}
The complete proof of correctness is omitted due to space considerations; however, it is almost exactly the same as in the proof of Theorem~\ref{thm:hardvertex}.\hfill\qed
The transition between the {$\mathsf{FO}$}\xspace case and the {$\mathsf{MSO}$}\xspace case of edge-deletion (Theorem~\ref{thm:MSO_edge_deletion_hardness}) is done in exactly the same way as before.
\section{Open problems}
The main open problem is whether the bound in Theorems~\ref{thm:edge_deletion_hardness} and~\ref{thm:hardvertex} can be improved to $f(|\psi|,k)n^{\smallo{k/\log k}}$ or even to $f(|\psi|,k)n^{\smallo{k}}$.
\section{Conclusions}
\label{sec:conclusions}
In this paper, we have studied the problem of mining frequent episodes over changing data streams. Our contribution is threefold. We unearth an interesting aspect of temporal data mining in which the data owner desires results over a span of the data that cannot fit in memory or be processed at a rate faster than the data generation rate. We have proposed a new sliding window model that slides forward in hops of batches, so that at any point only one batch of data is available for processing. We have studied this problem and identified the theoretical guarantees one can give and the assumptions necessary to support them.
In many real applications we find the need to characterize patterns not just by their frequency but also by their tendency to persist over time. In particular, in neuroscience, the network structure underlying an ensemble of neurons changes much more slowly than the culture-wide periodic bursting phenomenon, so separating the persistent patterns from the bursty ones can give more insight into the underlying connectivity map of the network. We have proposed the notion of $(v,k)$-persistent patterns to address this problem and outlined methods to mine all $(v,k)$-persistent patterns in the data. Finally, we have provided detailed experimental results on both synthetic and real data to show the advantages of the proposed methods.
We reiterate that although we have focused on episodes, the ideas presented in this paper can be applied to other pattern classes with similar considerations.
\section{Problem Statement}
\label{sec:statement}
The data available (referred to as an {\em event stream}) is a potentially infinite sequence of events:
\begin{equation}
\mathcal{D} = \langle (e_1, \tau_1), (e_2, \tau_2), \ldots, (e_i, \tau_i),\ldots, (e_n, \tau_n),\ldots\rangle
\label{eq:streamD}
\end{equation}
Our goal is to find all episodes that were frequent in the recent past and, to this end, we consider a sliding window model for the {\em window of interest} of the user\footnote{Streaming patterns literature has also considered other models, such as the landmark and time-fading models \cite{CKN08}, but we do not consider them in this paper.}. In this model, the user wants to determine episodes that are frequent over a window of fixed size terminating at the current time-tick. As new events arrive in the stream, the user's window of interest shifts, and the data mining task is then to report the frequent episodes in the new window of interest.
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig/sliding-window}
\caption{A sliding window model for episode mining over event streams: $B_{s}$ is the most recent batch of events that arrived in the stream and $W_{s}$ is the window of interest over which the user wants to determine the set of frequent episodes.}
\label{fig:sliding-window}
\end{figure}
Typically, the window of interest is very large and cannot be stored and processed in-memory. This straightaway precludes the use of standard multi-pass algorithms for frequent episode discovery over the window of interest. Events in the stream can be organized into batches such that at any given time only the new incoming batch needs to be stored and processed in memory. This is illustrated in Fig.~\ref{fig:sliding-window}. The current window of interest is denoted by $W_{s}$
and the most recent batch, $B_{s}$, consists of a sequence of events in $\mathcal{D}$ with times of occurrence, $\tau_{i}$, such that,
\begin{equation}
(s-1) T_{b}\leq \tau_{i} < s T_{b}
\end{equation}
where $T_{b}$ is the time-span of each batch and $s$ is the batch number ($s=1,2,\ldots$)\footnote{We assume that the number of events in any batch is bounded above and that we have sufficient memory to store and process all events that occur in a batch. For example, if time is integer-valued and if only one event occurs at any time-tick, then there are at most $T_{b}$ events in any batch.}. The frequency of an episode $\alpha$ in a batch $B_s$ is referred to as its {\em batch frequency} $f^s(\alpha)$. The current {\em window of interest}, $W_{s}$, consists of $m$ consecutive batches ending in batch $B_{s}$, i.e.
\begin{equation}
W_{s} = \langle B_{s-m+1},B_{s-m+2},\ldots,B_{s} \rangle
\end{equation}
\begin{definition}[Window Frequency]
The frequency of an episode $\alpha$ over window $W_s$, referred to as its {\em window frequency} and denoted by $f^{W_s}(\alpha)$, is defined as the sum of batch frequencies of $\alpha$ in $W_s$. Thus, if $f^j(\alpha)$ denotes the batch frequency of $\alpha$ in batch $B_j$, then the window frequency of $\alpha$ is given by $f^{W_s}(\alpha)=\sum_{B_j\in W_s} f^j(\alpha)$.
\label{def:window-frequency}
\end{definition}
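Computing window frequencies thus amounts to summing per-batch counts. The following is a minimal sketch of {\em Definition~\ref{def:window-frequency}}, with hypothetical episode names and counts:

```python
from collections import Counter

def window_frequency(batch_freqs):
    """Window frequency of every episode: the sum of its batch
    frequencies over the m batches currently in the window."""
    total = Counter()
    for freqs in batch_freqs:   # one episode->count dict per batch B_{s-m+1}, ..., B_s
        total.update(freqs)     # Counter.update adds counts
    return total

# Toy window of m = 2 batches with hypothetical episode counts
w = window_frequency([{"ABCD": 10, "EFGH": 7}, {"ABCD": 12, "IJKL": 5}])
```

Here episode ``ABCD'' gets window frequency $10 + 12 = 22$, while episodes appearing in only one batch keep their single batch count.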
In summary, we are given an event stream ($\ensuremath{\mathcal{D}}$), a time-span for batches ($T_b$), the number of consecutive batches that constitute the current window of interest ($m$), the desired size of frequent episodes ($\ell$) and the desired number of most frequent episodes ($k$). We are now ready to formally state the problem of discovering top-$k$ episodes in an event stream.
\begin{problem}[Streaming Top-$k$ Mining]
For each new batch, $B_s$, of events in the stream, find all $\ell$-node episodes in the corresponding window of interest, $W_s$, whose window frequencies are greater than or equal to the window frequency, $f_s^k$, of the $k^\mathrm{th}$ most frequent $\ell$-node episode in $W_s$.
\label{prob:streaming-topk-episodes-problem}
\end{problem}
\section{Introduction}
\label{sec:intro}
The problem of discovering interesting patterns from large datasets has been
well studied in the form of
pattern classes such as itemsets,
sequential patterns, and episodes with temporal constraints. However, most of these
techniques
deal with static datasets, over which multiple passes are
performed.
In many domains like telecommunication and computer security, it is becoming increasingly difficult to store
and process data at speeds comparable to their generation rate.
A few minutes of call-log data in a telecommunication network can easily run into millions of records.
Such data are referred to as \textit{data streams}~\cite{sketch19}. A \textit{data stream} is an unbounded sequence where new data points or events arrive continuously and often at very high rates. Many traditional data mining algorithms are rendered useless in this context as one cannot hope to store the entire data and then process it. Any method for data streams must thus
operate under the constraints of limited memory and processing time. In addition,
the data must be processed faster than it is being generated. In this paper, we
investigate the problem of mining temporal patterns called episodes under these constraints; while we focus on discovering frequent episodes from event streams, our method is general and adaptable to any class of patterns that might be of interest over the given data.
Several applications in which frequent episodes have been found useful share these streaming data characteristics. In neuroscience, multi-electrode arrays are being used as implants to control artificial prosthetics. These interfaces interpret commands from the brain and direct external devices. Identifying the controlling signals in the brain is much like finding a needle in a haystack: large volumes of data must be processed in real time. Similar situations exist in telecom and computer networks, where network traffic and call logs must be analyzed to detect attacks or fraudulent activity.
A few works exist in the current literature for determining frequent itemsets from a stream of transactions (e.g., see \cite{WF06}). However, they are either computationally impractical, due to worst-case assumptions, or ineffective, due to strong independence assumptions. We make no statistical assumptions on the stream, independence or otherwise. We develop the error characterization of our algorithms by identifying two key properties of the data, namely, the maximum rate of change and top-$k$ separation. Our key algorithmic contribution is an adaptation of border-set data structures that reuses work done in previous batches when computing the frequent patterns of the current batch. This reduces the candidate generation effort from $F^2$ to $FF_{new}$ (where $F$ denotes the number of frequent patterns of a particular size in the previous batch, and $F_{new}$ the number of {\em newly} frequent patterns of that size in the current batch). Experimental work demonstrates the practicality of our algorithms, both in terms of the accuracy of the returned frequent pattern sets and in terms of computational efficiency.
\section{Method}
\begin{figure}[t]
\centering
\includegraphics[width=\columnwidth]{fig/top-k-connection}
\caption{Batch frequencies in {\em Example~\ref{eg:window-disconnect}}.}
\label{fig:top-k-connection}
\end{figure}
\begin{table}[t]
\centering
\caption{Window frequencies in {\em Example~\ref{eg:window-disconnect}}.}
\label{tab:window-counts}
\tiny
\begin{tabular}{|l|c|c|c|c|c|c|}
\hline
{\small Episode} & ABCD & MNOP & EFGH & WXYZ & IJKL & PQRS \\ \hline
{\small Window Freq} & 35 & 34 & 25 & 24 & 23 & 19\\
\hline
\end{tabular}
\end{table}
In general, the top-$k$ episodes over a window may be quite different from the top-$k$ episodes in the individual batches constituting the window. This is illustrated through an example in Fig.~\ref{fig:top-k-connection}.
\begin{example}[\label{eg:window-disconnect}Window Top-$k$ v/s Batch Top-$k$] Let $W$ be a window of four batches $B_1,\ldots, B_4$.
The episodes in each batch with corresponding batch frequencies are listed in Fig.~\ref{fig:top-k-connection}. The corresponding window frequencies (sum of each episodes' batch frequencies) are listed in Table~\ref{tab:window-counts}.
The top-2 episodes in $B_1$ are ${\rm (PQRS)}$ and ${\rm (WXYZ)}$. Similarly ${\rm (EFGH)}$ and ${\rm (IJKL)}$ are the top-2 episodes in $B_2$, and so on.
${\rm (ABCD)}$ and ${\rm (MNOP)}$ have the highest window frequencies but never appear in the top-2 of any batch -- these episodes would `fly below the radar' and go undetected if we considered only the top-2 episodes in every batch as candidates for the top-2 episodes over $W$. This example can be easily generalized to any number of batches and any $k$.
\end{example}
{\em Example~\ref{eg:window-disconnect}} highlights the main challenge in the streaming top-$k$ mining problem: while we can only store and process the most recent batch of events in the window of interest, the batchwise top-$k$ episodes may not contain sufficient information about the top-$k$ over the entire window. It is obviously not possible to count and track all episodes (both frequent and infrequent) in every batch in the window, since the pattern space is typically very large. This brings us to the question of which episodes to select and track in every batch. How deep must we search within each batch for episodes that have the potential to become top-$k$ over the window? In this paper, we develop the formalism to answer this question. We identify two important properties of the underlying event stream which determine the design and analysis of our algorithms. These are stated in {\em Definitions~\ref{def:maximum-rate-change} \& \ref{def:topk-separation}} below.
\begin{definition} [Maximum Rate of Change, $\Delta$]
\label{def:maximum-rate-change}
The maximum change in batch frequency of any episode, $\alpha$, across any pair of consecutive batches, $B_{s}$ and $B_{s+1}$, is bounded above by $\Delta (>0)$, i.e.,
\begin{equation}
|\f{s+1} - \f{s}| \leq \Delta,
\end{equation}
and $\Delta$ is referred to as the {\em maximum rate of change}.
\end{definition}
Intuitively, $\Delta$ controls the extent of change that we may see from one batch to the next. It is trivially bounded above by the maximum number of events arriving per batch, and in practice, it is in fact much smaller.
\begin{definition}[Top-$k$ Separation of $(\varphi,\epsilon)$]
\label{def:topk-separation}
A batch $B_s$ of events is said to have a {\em top-$k$ separation of $(\varphi,\epsilon)$}, $\varphi\geq 0$, $\epsilon\geq 0$, if there are no more than $(1+\epsilon)k$ episodes with batch frequency greater than or equal to $(f_k^s - \varphi\Delta)$, where $f_k^s$ denotes the batch frequency of the $k^\mathrm{th}$ most-frequent episode in $B_s$ and $\Delta$ denotes the maximum rate of change as per {\em Definition~\ref{def:maximum-rate-change}}.
\end{definition}
This is a measure of how well-separated the frequencies of the top-$k$ episodes are relative to the rest of the episodes. We expect to see roughly $k$ episodes with batch frequencies of at least $f_s^k$ and the separation can be considered to be high (or good) if $\epsilon$ can remain small even for relatively large $\varphi$. We observe that $\epsilon$ is a non-decreasing function of $\varphi$ and that top-$k$ separation is measured relative to the maximum rate of change $\Delta$. Also, top-$k$ separation of any given batch of events is characterized through not one but several pairs of $(\varphi,\epsilon)$ since $\varphi$ and $\epsilon$ are essentially functionally related -- $\epsilon$ is typically close to zero for $\varphi=0$ and $\epsilon$ is roughly the size of the entire class of episodes (minus $k$) for $\varphi \geq f_s^k$.
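The top-$k$ separation of a batch can be checked directly from its frequency multiset. Below is a small sketch of {\em Definition~\ref{def:topk-separation}}, using hypothetical episode counts:

```python
def has_topk_separation(freqs, k, phi, eps, delta):
    """Check whether a batch has (phi, eps) top-k separation: no more than
    (1 + eps) * k episodes may have frequency >= f_k - phi * delta, where
    f_k is the batch frequency of the k-th most frequent episode."""
    counts = sorted(freqs.values(), reverse=True)
    f_k = counts[k - 1]                  # frequency of k-th most frequent episode
    threshold = f_k - phi * delta
    n_above = sum(1 for c in counts if c >= threshold)
    return n_above <= (1 + eps) * k

# Hypothetical batch: the top 2 episodes are well separated from the rest
batch = {"A": 50, "B": 48, "C": 20, "D": 18}
ok = has_topk_separation(batch, k=2, phi=3, eps=0.5, delta=5)
```

With $k=2$, $\varphi=3$, $\Delta=5$, the threshold is $48 - 15 = 33$ and only the two top episodes reach it, so the batch is $(3, 0.5)$-separated; raising $\varphi$ to $6$ pulls all four episodes above the threshold and the check fails.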
We now use the maximum rate of change property to design efficient streaming algorithms for top-$k$ episode mining and show that top-$k$ separation plays a pivotal role in determining the quality of approximation that our algorithms can achieve.
\begin{lemma}
\label{lem:fk}
Consider two consecutive batches, $B_{s}$ and $B_{s+1}$, with a maximum rate of change $\Delta$. The batch frequencies of the $k^\mathrm{th}$ most-frequent episodes in the corresponding batches are related as follows:
\begin{equation}
|\fk{s+1} - \fk{s}| \leq \Delta
\label{eq:fk-drift}
\end{equation}
\end{lemma}
\begin{proof}
There exist at least $k$ episodes in $B_{s}$ with batch frequency greater than or equal to $\fk{s}$ (by definition). Hence, there exist at least $k$ episodes in $B_{s+1}$ with batch frequency greater than or equal to $(f^s_k - \Delta)$ (since frequency of any episode can decrease by at most $\Delta$ going from $B_{s}$ to $B_{s+1}$). Hence we must have $f^{s+1}_k \geq (f^s_k-\Delta)$. Similarly, there can be at most $(k-1)$ episodes in $B_{s+1}$ with batch frequency strictly greater than $(\fk{s}+\Delta)$. Hence we must also have $f^{s+1}_k \leq (f^s_k+\Delta)$.
\end{proof}
Next we show that if the batch frequency of an episode is known relative to $\fk{s}$ in the current batch $B_{s}$, we can bound its frequency in a later batch.
\begin{lemma}
\label{lem:r-Delta}
Consider two batches, $B_s$ and $B_{s+r},\ r\in\mathbb{Z}$, located $r$ batches away from each other. If $\Delta$ is the maximum rate of change (as per {\em Definition~\ref{def:maximum-rate-change}}) then the batch frequency of any episode $\alpha$ in $B_{s+r}$ must satisfy the following:
\begin{enumerate}
\item If $\f{s} \geq \fk{s}$, then $\f{s+r} \geq \fk{s+r} - 2|r|\Delta$
\item If $\f{s} < \fk{s}$, then $\f{s+r} < \fk{s+r} + 2|r|\Delta$
\end{enumerate}
\end{lemma}
\begin{proof}
Since $\Delta$ is the maximum rate of change, we have $\f{s+r}\geq (\f{s}-|r|\Delta)$ and from Lemma~\ref{lem:fk}, we have $\fk{s+r} \leq (\fk{s}+|r|\Delta)$. Therefore, if $f^{s}(\alpha) \geq \fk{s}$, then
\[
\f{s+r} + |r|\Delta \geq \f{s} \geq \fk{s} \geq \fk{s+r}-|r|\Delta
\]
which implies $\f{s+r} \geq \fk{s+r} - 2|r|\Delta$.
Similarly, if $\f{s} < \fk{s}$, then
\[
\f{s+r} - |r|\Delta \leq \f{s} < \fk{s} \leq \fk{s+r}+|r|\Delta
\]
which implies $\f{s+r} < \fk{s+r} + 2|r|\Delta$.
\end{proof}
{\em Lemma~\ref{lem:r-Delta}} gives us a way to track episodes that have the potential to be in the top-$k$ of future batches. This is an important property which our algorithm exploits, and we record it as a remark below.
\begin{remark}
The top-$k$ episodes of batch, $B_{s+r},\ r\in\mathbb{Z}$, must have batch frequencies of at least $(\fk{s} - 2|r|\Delta)$ in batch $B_{s}$. Specifically, the top-$k$ episodes of $B_{s+1}$ must have batch frequencies of at least $(\fk{s}-2\Delta)$ in $B_{s}$.
\label{rem:batch-topk}
\end{remark}
Based on the maximum rate of change property we can derive a necessary condition for any episode to be top-$k$ over a window. The following theorem prescribes the minimum batch frequencies that an episode must satisfy if it is a top-$k$ episode over the window $W_s$.
\begin{theorem}[Exact Top-$k$ over $W_s$]
\label{thm:topk-mine}
An episode, $\alpha$, can be a top-$k$ episode over window $W_{s}$ only if its batch frequencies satisfy $f^{s'}(\alpha) \geq (\fk{s'}-2(m-1)\Delta)$ $\forall B_{s'} \in W_{s}$.
\end{theorem}
\begin{proof}
Consider an episode $\beta$ for which $\fb{s'} < (\fk{s'}-2(m-1)\Delta)$ in batch $B_{s'}\in W_s$. Let $\alpha$ be any top-$k$ episode of $B_{s'}$.
In any other batch $B_p\in W_s$, we have
\begin{align}
\label{eq:alph1}
\f{p} &\geq \f{s'}-|p-s'|\Delta \nonumber\\
&\geq \fk{s'}-|p-s'|\Delta
\end{align}
and
\begin{align}
\label{eq:alph2}
\fb{p} &\leq \fb{s'}+|p-s'|\Delta\nonumber\\
&< (\fk{s'}-2(m-1)\Delta)+|p-s'|\Delta
\end{align}
Applying $|p-s'|\leq (m-1)$ to the above, we get
\begin{align}
\f{p} \geq \fk{s'}-(m-1)\Delta > \fb{p}
\label{eq:alpha-greater-than-beta}
\end{align}
This implies $\fb{W_{s}} < \f{W_{s}}$ for every top-$k$ episode $\alpha$ of $B_{s'}$.
Since there are at least $k$ top-$k$ episodes in $B_{s'}$, $\beta$ cannot be a top-$k$ episode over the window $W_{s}$.
\end{proof}
Based on {\em Theorem~\ref{thm:topk-mine}} we can have the following simple algorithm for obtaining the top-$k$ episodes over a window: Use a traditional level-wise approach to find all episodes with a batch frequency of at least $(f_1^k-2(m-1)\Delta)$ in the first batch ($B_1$), simply accumulate their corresponding batch frequencies over all $m$ batches of $W_s$ and report the episodes with the $k$ highest window frequencies over $W_s$. This approach is guaranteed to give us the exact top-$k$ episodes over $W_s$. Further, in order to report the top-$k$ over the next sliding window $W_{s+1}$, we need to consider all episodes with batch frequency of at least $(f_2^k-2(m-1)\Delta)$ in the second batch and track them over all batches of $W_{s+1}$, and so on. Thus, an exact solution to {\em Problem~\ref{prob:streaming-topk-episodes-problem}} would require running a level-wise episode mining algorithm in every batch, $B_s$, $s=1,2,\ldots$, with a frequency threshold of $(f_s^k-2(m-1)\Delta)$.
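The exact procedure just described can be sketched as follows. For illustration only, each batch is represented by a full episode-to-frequency dictionary, which is affordable only for toy data; in practice a level-wise miner would be run with the stated threshold instead:

```python
from collections import Counter

def exact_topk_window(window_batches, k, m, delta):
    """Sketch of exact top-k over a window: in each batch keep every episode
    with batch frequency >= f_k^s - 2(m-1)*delta, sum the kept batch
    frequencies over the window, and report the k highest totals."""
    window = Counter()
    for freqs in window_batches[-m:]:
        f_k = sorted(freqs.values(), reverse=True)[k - 1]
        thresh = f_k - 2 * (m - 1) * delta       # batchwise threshold
        window.update({e: f for e, f in freqs.items() if f >= thresh})
    return [e for e, _ in window.most_common(k)]

# Two toy batches, k = 1: "Y" wins on window frequency (4 + 6 = 10)
top = exact_topk_window([{"X": 5, "Y": 4}, {"X": 3, "Y": 6}], k=1, m=2, delta=1)
```

Dropping an episode in some batch is safe here because, by the necessary condition above, any episode that fails the batchwise threshold in even one batch cannot be top-$k$ over the window.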
\subsection{Class of $(v,k)$-Persistent Episodes}
\label{sec:persistance}
{\em Theorem~\ref{thm:topk-mine}} characterizes the minimum batchwise computation needed in order to obtain the exact top-$k$ episodes over a sliding window. This is effective when $\Delta$ and $m$ are small (compared to $f_s^k$). However, the batchwise frequency thresholds can become very low in other settings, making the processing time per-batch as well as the number of episodes to track over the window to become impractically high. To address this issue, we introduce a new class of episodes called {\em $(v,k)$-persistent episodes} which can be computed efficiently by employing higher batchwise thresholds. Further, we show that these episodes can be used to approximate the true top-$k$ episodes over the window and the quality of approximation is characterized in terms of the top-$k$ separation property (cf.~{\em Definition~\ref{def:topk-separation}}).
\begin{definition}[$(v,k)$-Persistent Episode] A pattern is said to be {\em $(v,k)$-persistent} over window $W_{s}$ if it is a top-$k$ episode in {\em at least} $v$ batches of $W_{s}$.
\label{def:persitent-episode}
\end{definition}
\begin{problem}[Mining $(v,k)$-Persistent Episodes]
For each new batch, $B_s$, of events in the stream, find all $\ell$-node $(v,k)$-persistent episodes in the corresponding window of interest, $W_s$.
\label{prob:vk-persistent-episodes-problem}
\end{problem}
\begin{theorem}
An episode, $\alpha$, can be $(v,k)$-persistent over the window $W_{s}$ only if its batch frequencies satisfy $f^{s'}(\alpha) \geq (\fk{s'}-2(m-v)\Delta)$ for every batch $B_{s'} \in W_{s}$.
\label{thm:persistent-episodes}
\end{theorem}
\begin{proof}
Let $\alpha$ be $(v,k)$-persistent over $W_s$ and let $V_\alpha$ denote the set of batches in $W_s$ in which $\alpha$ is in the top-$k$. For any $B_q\notin V_\alpha$ there exists $B_{\widehat{p}(q)}\in V_\alpha$ that is {\em nearest to} $B_q$. Since $|V_\alpha|\geq v$, we must have $|\widehat{p}(q)-q| \leq (m-v)$. Applying {\em Lemma~\ref{lem:r-Delta}} we then get
$f^q(\alpha)\geq f^q_k-2(m-v)\Delta$ for all $B_q\notin V_\alpha$.
\end{proof}
{\em Theorem~\ref{thm:persistent-episodes}} gives us the necessary conditions for computing all $(v,k)$-persistent episodes over sliding windows in the stream. The batchwise threshold required for $(v,k)$-persistent episodes depends on the parameter $v$. For $v=1$, the threshold coincides with the threshold for exact top-$k$ in {\em Theorem~\ref{thm:topk-mine}}. The threshold increases linearly with $v$ and is highest at $v=m$ (when the batchwise threshold is same as the corresponding batchwise top-$k$ frequency).
The algorithm for discovering $(v,k)$-persistent episodes follows the same general lines as the one described earlier for exact top-$k$ mining, except that we now apply higher batchwise thresholds: for each new batch, $B_s$, entering the stream, use a standard level-wise episode mining algorithm to find all episodes with batch frequency of at least $(f_s^k-2(m-v)\Delta)$. (We provide more details of our algorithm later in Sec.~\ref{sec:incremental-algorithm}.) First, however, we investigate the quality of approximation of top-$k$ that $(v,k)$-persistent episodes offer and show that the number of errors is closely related to the degree of top-$k$ separation in the data.
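The per-batch bookkeeping for $(v,k)$-persistence can be sketched as follows. Again, toy episode-to-frequency dictionaries stand in for the level-wise miner, and ties at $f_s^k$ are counted as top-$k$ for simplicity:

```python
def vk_persistent(window_batches, k, v, delta):
    """Sketch of mining (v,k)-persistent episodes: for each episode passing
    the batchwise threshold f_k^s - 2(m-v)*delta, count the batches in
    which it is among the top k, and report those with at least v hits."""
    m = len(window_batches)
    hits = {}
    for freqs in window_batches:              # freqs: episode -> batch frequency
        f_k = sorted(freqs.values(), reverse=True)[k - 1]
        thresh = f_k - 2 * (m - v) * delta    # necessary-condition threshold
        for e, f in freqs.items():
            if f >= thresh:
                hits.setdefault(e, 0)         # start tracking this episode
            if f >= f_k:                      # episode is in this batch's top-k
                hits[e] = hits.get(e, 0) + 1
    return {e for e, h in hits.items() if h >= v}

# "A" is top-1 in two of the three toy batches, hence (2,1)-persistent
persistent = vk_persistent(
    [{"A": 10, "B": 9}, {"A": 10, "B": 9}, {"B": 10, "A": 9}], k=1, v=2, delta=1)
```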
\subsubsection{Top-$k$ Approximation}
\label{sec:topk-approximation}
The main idea here is that, under a maximum rate of change $\Delta$ and a top-$k$ separation of $(\varphi,\epsilon)$, there cannot be too many distinct episodes which are not $(v,k)$-persistent, while having sufficiently high window frequencies. To this end, we first compute a lower-bound ($f_L$) on the window frequencies of $(v,k)$-persistent episodes and an upper-bound ($f_U$) on the window frequencies of episodes that are {\em not} $(v,k)$-persistent (cf.~{\em Lemmas~\ref{lem:fL} \& \ref{lem:fU}}).
\begin{lemma}
If episode $\alpha$ is $(v,k)$-persistent over a window, $W_s$, then its window frequency, $f^{W_s}(\alpha)$, must satisfy the following lower-bound:
\begin{equation}
f^{W_s}(\alpha) \geq \sum_{B_{s'}} f^{s'}_k - (m-v)(m-v+1)\Delta \stackrel{\rm def}{=} f_L
\end{equation}
\label{lem:fL}
\end{lemma}
\begin{proof}
Consider episode $\alpha$ that is $(v,k)$-persistent over $W_s$ and let $V_\alpha$ denote the batches of $W_s$ in which $\alpha$ is in the top-$k$. The window frequency of $\alpha$ can be written as
\begin{eqnarray}
f^{W_s}(\alpha) &=& \sum_{B_p\in V_\alpha} f^p(\alpha) + \sum_{B_q\in W_s\setminus V_\alpha} f^q(\alpha) \nonumber\\
&\geq& \sum_{B_p\in V_\alpha} f^p_k + \sum_{B_q\in W_s\setminus V_\alpha} \big(f^q_k -2|\widehat{p}(q)-q|\Delta\big) \nonumber\\
&=& \sum_{B_{s'}\in W_s} f^{s'}_k - \sum_{B_q\in W_s\setminus V_\alpha} 2|\widehat{p}(q)-q|\Delta \label{eq:lemfL1}
\end{eqnarray}
where $B_{\widehat{p}(q)}\in V_\alpha$ denotes the batch nearest $B_q$ where $\alpha$ is in the top-$k$. Since $|W_s\setminus V_\alpha|\leq (m-v)$, we must have
\begin{eqnarray}
\sum_{B_q\in W_s\setminus V_\alpha} |\widehat{p}(q)-q| &\leq& (1+2+\cdots+(m-v)) \nonumber\\
&=&\frac{1}{2}(m-v)(m-v+1) \label{eq:lemfL2}
\end{eqnarray}
Putting together (\ref{eq:lemfL1}) and (\ref{eq:lemfL2}) gives us the lemma.
\end{proof}
\begin{lemma}
If episode $\beta$ is not $(v,k)$-persistent over a window, $W_s$, then its window frequency, $f^{W_s}(\beta)$, must satisfy the following upper-bound:
\begin{equation}
f^{W_s}(\beta) < \sum_{B_{s'}} f^{s'}_k + v (v+1)\Delta \stackrel{\rm def}{=} f_U
\end{equation}
\label{lem:fU}
\end{lemma}
\begin{proof}
Consider episode $\beta$ that is not $(v,k)$-persistent over $W_s$ and let $V_\beta$ denote the batches of $W_s$ in which $\beta$ is in the top-$k$. The window frequency of $\beta$ can be written as:
\begin{eqnarray}
f^{W_s}(\beta) &=& \sum_{B_p\in V_\beta} f^p(\beta) + \sum_{B_q\in W_s\setminus V_\beta} f^q(\beta) \nonumber\\
&<& \sum_{B_p\in V_\beta} \big(f^p_k + 2|\widehat{q}(p)-p|\Delta\big) + \sum_{B_q\in W_s\setminus V_\beta} f^q_k \nonumber\\
&=& \sum_{B_{s'}\in W_s} f^{s'}_k + \sum_{B_p\in V_\beta} 2|\widehat{q}(p)-p|\Delta \label{eq:lemfU1}
\end{eqnarray}
where $B_{\widehat{q}(p)}\in W_s\setminus V_\beta$ denotes the batch nearest $B_p$ where $\beta$ is not in the top-$k$. Since $|V_\beta| < v$, we must have
\begin{eqnarray}
\sum_{B_p\in V_\beta} |\widehat{q}(p)-p| &\leq& (1+2+\cdots+(v-1)) \nonumber\\
&=&\frac{1}{2}v(v-1) \ \leq\ \frac{1}{2}v(v+1) \label{eq:lemfU2}
\end{eqnarray}
Putting together (\ref{eq:lemfU1}) and (\ref{eq:lemfU2}) gives us the lemma.
\end{proof}
It turns out that $f_U > f_L\ \forall v,$ $1\leq v \leq m$ (indeed, $f_U - f_L = \big((m-v)(m-v+1)+v(v+1)\big)\Delta > 0$), and hence there is always a possibility that some episodes which are not $(v,k)$-persistent end up with higher window frequencies than one or more $(v,k)$-persistent episodes. We observed a specific instance of this kind of `mixing' in our motivating example as well (cf.~{\em Example~\ref{eg:window-disconnect}}). This brings us to the top-$k$ separation property that we introduced in {\em Definition~\ref{def:topk-separation}}. Intuitively, if there is sufficient separation of the top-$k$ episodes from the rest of the episodes in every batch, then we would expect to see very little mixing. As we shall see, this separation need not occur exactly at the $k^\mathrm{th}$ most-frequent episode in every batch; separation somewhere close to it is sufficient to achieve a good top-$k$ approximation.
\begin{definition}[Band Gap Episodes, $\ensuremath{\mathcal{G}}_\varphi$]
In any batch $B_{s'}\in W_s$, the half-open frequency interval $[f_{s'}^k - \varphi \Delta,\ f_{s'}^k)$ is called the {\em band gap} of $B_{s'}$. The corresponding set, $\ensuremath{\mathcal{G}}_\varphi$, of {\em band gap episodes} over the window $W_s$, is defined as the collection of all episodes with batch frequencies in the band gap of at least one $B_{s'}\in W_s$.
\label{def:band-gap-patterns}
\end{definition}
The main feature of $\ensuremath{\mathcal{G}}_\varphi$ is that, if $\varphi$ is large-enough, then the only episodes which are not $(v,k)$-persistent but that can still mix with $(v,k)$-persistent episodes are those belonging to $\ensuremath{\mathcal{G}}_\varphi$. This is stated formally in the next lemma.
\begin{lemma}
If $\frac{\varphi}{2} > \max \{1, (1-\frac{v}{m})(m-v+1)\}$, then any episode $\beta$ that is {\em not} $(v,k)$-persistent over $W_s$, can have $f^{W_s}(\beta) \geq f_L$ only if $\beta\in\ensuremath{\mathcal{G}}_\varphi$.
\label{lem:freq-G}
\end{lemma}
\begin{proof}
If an episode $\beta$ is {\em not} $(v,k)$-persistent over $W_s$ then there exists a batch $B_{s'}\in W_s$ where $\beta$ is not in the top-$k$. Further, if $\beta\notin\ensuremath{\mathcal{G}}_\varphi$ then we must have $f^{s'}(\beta)<f^k_{s'}-\varphi\Delta$. Since $\varphi>2$, $\beta$ cannot be in the top-$k$ of any neighboring batch of $B_{s'}$, and hence, it stays below the band gap of every batch of $W_s$, i.e.,
\begin{equation*}
f^{W_s}(\beta) < \sum_{B_{s'}\in W_s} f^k_{s'} - m\varphi\Delta.
\end{equation*}
The Lemma follows from the given condition $\frac{\varphi}{2} > (1-\frac{v}{m})(m-v+1)$.
\end{proof}
The number of episodes in $\ensuremath{\mathcal{G}}_\varphi$ is controlled by the top-$k$ separation property, and since many of the non-persistent episodes which can mix with persistent ones must spend not one, but several batches in the band gap, the number of unique episodes that can cause such errors is bounded. {\em Theorem~\ref{thm:topk-approximation}} is our main result about quality of top-$k$ approximation that $(v,k)$-persistence can achieve.
\begin{theorem}[Quality of Top-$k$ Approximation]
Let every batch $B_{s'}\in W_s$ have a top-$k$ separation of $(\varphi,\epsilon)$ with $\frac{\varphi}{2} > \max \{1, (1-\frac{v}{m})(m-v+1)\}$. Let $\ensuremath{\mathcal{P}}$ denote the set of all $(v,k)$-persistent episodes over $W_s$. If $|\ensuremath{\mathcal{P}}| \geq k$, then the top-$k$ episodes over $W_s$ can be determined from $\ensuremath{\mathcal{P}}$ with an error of no more than $\left(\frac{\epsilon k m}{\mu}\right)$ episodes, where $\mu = \min \{ m-v+1, \frac{\varphi}{2}, \frac{1}{2}(\sqrt{1+2m\varphi} - 1)\}$.
\label{thm:topk-approximation}
\end{theorem}
\begin{proof}
By top-$k$ separation, we have a maximum of $(1+\epsilon)k$ episodes in any batch $B_{s'}\in W_s$, with batch frequencies greater than or equal to $f_{s'}^k-\varphi\Delta$. Since at least $k$ of these must belong to the top-$k$ of $B_{s'}$, there are no more than $\epsilon k$ episodes that can belong to the band gap of $B_{s'}$. Thus, there can be no more than a total of $\epsilon km$ episodes over all $m$ batches of $W_s$ that can belong to $\ensuremath{\mathcal{G}}_\varphi$.
Consider any $\beta\notin\ensuremath{\mathcal{P}}$ with $f^{W_s}(\beta)\geq f_L$ -- these are the only episodes whose window frequencies can exceed that of any $\alpha\in\ensuremath{\mathcal{P}}$ (since $f_L$ is the minimum window frequency of any $\alpha$). If $\mu$ denotes the minimum number of batches in which $\beta$ belongs to the band gap, then there can be at most $\left(\frac{\epsilon k m}{\mu}\right)$ such {\em distinct} $\beta$. Thus, if $|\ensuremath{\mathcal{P}}|\geq k$, we can determine the set of top-$k$ episodes over $W_s$ with error no more than $\left(\frac{\epsilon k m}{\mu}\right)$ episodes.
There are now two cases to consider to determine $\mu$: (i)~$\beta$ is in the top-$k$ of some batch, and (ii)~$\beta$ is not in the top-$k$ of any batch.
Case (i): Let $\beta$ be in the top-$k$ of $B_{s'}\in W_s$. Let $B_{s''}\in W_s$ be $t$ batches away from $B_{s'}$. Using {\em Lemma~\ref{lem:r-Delta}} we get $f^{s''}(\beta) \geq f_{s''}^k - 2t\Delta$. The minimum $t$ for which $(f_{s''}^k - 2t\Delta < f_{s''}^k - \varphi\Delta)$ is $\left(\frac{\varphi}{2}\right)$. Since $\beta\notin\ensuremath{\mathcal{P}}$, $\beta$ is below the top-$k$ in at least $(m-v+1)$ batches. Hence $\beta$ stays in the band gap of at least $\min\{m-v+1,\frac{\varphi}{2}\}$ batches of $W_s$.
Case (ii): Let $V_G$ denote the set of batches in $W_s$ where $\beta$ lies in the band gap and let $|V_G|=g$. Since $\beta$ does not belong to top-$k$ of any batch, it must stay below the band gap in all the $(m-g)$ batches of $(W_s\setminus V_G)$. Since $\Delta$ is the maximum rate of change, the window frequency of $\beta$ can be written as follows:
\begin{eqnarray}
f^{W_s}(\beta) &=& \sum_{B_p\in V_G} f^p(\beta) + \sum_{B_q\in W_s\setminus V_G} f^q(\beta) \nonumber\\
&<& \sum_{B_p\in V_G} f^p(\beta) + \sum_{B_q\in W_s\setminus V_G} (f^k_q-\varphi\Delta) \label{eq:thm-topk-pf1}
\end{eqnarray}
Let $B_{\widehat{q}(p)}$ denote the batch in $W_s\setminus V_G$ that is nearest to $B_p\in V_G$. Then we have:
\begin{eqnarray}
f^p(\beta) &\leq& f^{\widehat{q}(p)}(\beta) + |p-\widehat{q}(p)|\Delta \nonumber\\
&<& f^k_{\widehat{q}(p)} -\varphi\Delta + |p-\widehat{q}(p)|\Delta \nonumber\\
&<& f^k_p -\varphi\Delta + 2 |p-\widehat{q}(p)|\Delta \label{eq:thm-topk-pf2}
\end{eqnarray}
where the second inequality holds because $\beta$ is below the band gap in $B_{\widehat{q}(p)}$ and (\ref{eq:thm-topk-pf2}) follows from {\em Lemma~\ref{lem:fk}}. Using (\ref{eq:thm-topk-pf2}) in (\ref{eq:thm-topk-pf1}) we get
\begin{eqnarray}
f^{W_s}(\beta) &<& \sum_{B_{s'}\in W_s} f^k_{s'} -m\varphi\Delta+ \sum_{B_p\in V_G} 2|p-\widehat{q}(p)|\Delta \nonumber\\
&<& \sum_{B_{s'}\in W_s} f^k_{s'} -m\varphi\Delta+ 2(1+2+\cdots+g)\Delta\nonumber\\
&=& \sum_{B_{s'}\in W_s} f^k_{s'} -m\varphi\Delta+ g(g+1)\Delta = \mathrm{UB}\label{eq:thm-topk-pf3}
\end{eqnarray}
The smallest $g$ for which $(f^{W_s}(\beta)\geq f_L)$ is feasible can be obtained by setting $\mathrm{UB}\geq f_L$. Since $\frac{\varphi}{2} > (1-\frac{v}{m})(m-v+1)$, $\mathrm{UB} \geq f_L$ implies
\begin{eqnarray}
\sum_{B_{s'}\in W_s} f^k_{s'} - m\varphi\Delta + g(g+1)\Delta
&>& \sum_{B_{s'}\in W_s} f^k_{s'} - \frac{m\varphi\Delta}{2} \nonumber
\end{eqnarray}
Solving for $g$, we get $g\geq \frac{1}{2}(\sqrt{1+2m\varphi}-1)$. Combining cases (i) and (ii), we get $\mu=\min \{ m-v+1, \frac{\varphi}{2}, \frac{1}{2}(\sqrt{1+2m\varphi} - 1)\}$.
\end{proof}
{\em Theorem~\ref{thm:topk-approximation}} shows the relationship between the extent of top-$k$ separation required and quality of top-$k$ approximation that can be obtained through $(v,k)$-persistent episodes. In general, $\mu$ increases with $\frac{\varphi}{2}$ until the latter starts to dominate the other two factors, namely, $(m-v+1)$ and $\frac{1}{2}(\sqrt{1+2m\varphi}-1)$. The theorem also brings out the tension between the persistence parameter $v$ and the quality of approximation. At smaller values of $v$, the algorithm mines `deeper' within each batch and so we expect fewer errors with respect to the true top-$k$ episodes. On the other hand, deeper mining within batches is computationally more intensive, with the required effort approaching that of exact top-$k$ mining as $v$ approaches 1. Finally, we use {\em Theorem~\ref{thm:topk-approximation}} to derive error-bounds for three special cases; first for $v=1$, when the batchwise threshold is same as that for exact top-$k$ mining as per {\em Theorem~\ref{thm:topk-mine}}; second for $v=m$, when the batchwise threshold is simply the batch frequency of the $k^{\rm th}$ most-frequent episode in the batch; and third, for $v=\left\lfloor\frac{m+1}{2}\right\rfloor$, when the batchwise threshold lies midway between the thresholds of the first two cases.
\begin{corollary}
Let every batch $B_{s'}\in W_s$ have a top-$k$ separation of $(\varphi,\epsilon)$ and let $W_s$ contain at least $m\geq 2$ batches. Let $\ensuremath{\mathcal{P}}$ denote the set of all $(v,k)$-persistent episodes over $W_s$. If we have $|\ensuremath{\mathcal{P}}| \geq k$, then the maximum number of errors in the top-$k$ episodes derived from $\ensuremath{\mathcal{P}}$, for three different choices of $v$, is given by:
\begin{enumerate}
\item $\left(\frac{\epsilon k m}{m-1}\right)$, for $v=1$, if $\frac{\varphi}{2} > (m-1)$
\item $(\epsilon k m)$, for $v=m$, if $\frac{\varphi}{2} > 1$
\item $\left(\frac{4 \epsilon k m^2}{m^2-1}\right)$, for $v=\left\lfloor\frac{m+1}{2}\right\rfloor$, if $\frac{\varphi}{2} > \frac{1}{m}\left\lceil \frac{m-1}{2}\right\rceil\left\lceil\frac{m+1}{2} \right\rceil$
\end{enumerate}
\label{cor:topk-approximation-heuristic-v}
\end{corollary}
\begin{proof}
We show the proof only for $v=\left\lfloor\frac{m+1}{2}\right\rfloor$. The cases of $v=1$ and $v=m$ are obtained immediately upon application of {\em Theorem~\ref{thm:topk-approximation}}.
Fixing $v=\left\lfloor\frac{m+1}{2}\right\rfloor$ implies $(m-v)=\left\lceil\frac{m-1}{2}\right\rceil$. For $m\geq 2$, $\frac{\varphi}{2} > \frac{1}{m}\left\lceil \frac{m-1}{2}\right\rceil\left\lceil\frac{m+1}{2} \right\rceil$ implies $\frac{\varphi}{2} > \max \{1, (1-\frac{v}{m})(m-v+1)\}$. Let $t_{\rm min} = \min\{m-v+1,\frac{\varphi}{2}\}$. The minimum value of $t_{\rm min}$ is governed by
\begin{eqnarray}
t_{\rm min} &\geq& \min \left\{ \left\lceil\frac{m+1}{2}\right\rceil, \frac{1}{m}\left\lceil \frac{m-1}{2}\right\rceil \left\lceil\frac{m+1}{2} \right\rceil \right\}\nonumber\\
&=& \frac{1}{m}\left\lceil \frac{m-1}{2}\right\rceil \left\lceil\frac{m+1}{2} \right\rceil \nonumber\\
&\geq& \left(\frac{m^2-1}{4m}\right)
\label{eq:thm-topk-heuristic-pf1}
\end{eqnarray}
Let $g_{\rm min}=\frac{1}{2}(\sqrt{1+2m\varphi} - 1)$. $\varphi > \frac{2}{m}\left\lceil \frac{m-1}{2}\right\rceil\left\lceil\frac{m+1}{2} \right\rceil$ implies $g_{\rm min}>\left(\frac{m-1}{2}\right)$. From {\em Theorem~\ref{thm:topk-approximation}} we have
\begin{equation*}
\mu=\min\{t_{\rm min}, g_{\rm min}\} \geq \left(\frac{m^2-1}{4m}\right)
\end{equation*}
and hence the number of errors is no more than $\left(\frac{4\epsilon k m^2}{m^2-1}\right)$.
\end{proof}
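To make this trade-off concrete, the following sketch evaluates the error bound of {\em Theorem~\ref{thm:topk-approximation}} for the three choices of $v$ above, with $\varphi$ set just above its respective minimum; the particular values of $m$, $k$ and $\epsilon$ are illustrative only.

```python
import math

def mu(m, v, phi):
    # mu from Theorem (quality of top-k approximation)
    return min(m - v + 1, phi / 2, (math.sqrt(1 + 2 * m * phi) - 1) / 2)

def max_errors(m, v, phi, k, eps):
    # error bound (eps * k * m) / mu from the theorem
    return eps * k * m / mu(m, v, phi)

m, k, eps = 10, 5, 0.2

# v = 1: phi/2 must exceed m - 1
e1 = max_errors(m, 1, 2 * (m - 1) + 0.01, k, eps)
# v = m: phi/2 must exceed 1
em = max_errors(m, m, 2 + 0.01, k, eps)
# v = floor((m+1)/2): phi/2 must exceed (1/m) * ceil((m-1)/2) * ceil((m+1)/2)
v_mid = (m + 1) // 2
phi_mid = 2 * (math.ceil((m - 1) / 2) * math.ceil((m + 1) / 2)) / m + 0.01
e_mid = max_errors(m, v_mid, phi_mid, k, eps)
```

With these settings the computed bounds respect the three corollary formulas: at most $\epsilon km/(m-1)$ errors for $v=1$, exactly $\epsilon km$ for $v=m$, and at most $4\epsilon km^2/(m^2-1)$ for the intermediate choice.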
\subsection{Incremental Algorithm}
\label{sec:incremental-algorithm}
In this section we present an efficient algorithm for incrementally mining patterns with frequency $\geq (\fk{s} - \theta)$. In our formalism, the value of $\theta$ is determined by the class of patterns we want to mine: for $(v,k)$-persistence, $\theta = 2(m-v)\Delta$, whereas for mining the exact top-$k$ the threshold is $2(m-1)\Delta$.
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{fig/batch-update}
\caption{The set of frequent patterns can be incrementally updated as new batches arrive.}
\label{fig:batch-update}
\end{figure}
Recall that the goal of our mining task is to report frequent patterns of size $\ell$.
After processing the data in the batch $B_{s-1}$, we desire all patterns with frequency greater than $(\fk{s-1}-\theta)$. Algorithmically this is achieved by first setting a high frequency threshold and mining for patterns using the classical level-wise Apriori method~\cite{srikant-agrawal}.
If the number of size-$\ell$ patterns is less than $k$, the support threshold is decreased and the mining repeated until at least $k$ $\ell$-size patterns are found. At this point $\fk{s}$ is known. The mining process is repeated once more with the frequency threshold $(\fk{s}-\theta)$. Doing this entire procedure for every new batch can be expensive and wasteful. After seeing the first batch of the data, whenever a new batch arrives we have information about the patterns that were frequent in the previous batch. This can be exploited to incrementally and efficiently update the set of frequent episodes in the new batch. The intuition behind this is that the frequencies of the majority of episodes do not change much from one batch to the next. As a result, only a small number of episodes fall below the new support threshold in the new batch. There is also the possibility of some new episodes becoming frequent. This is illustrated in Figure~\ref{fig:batch-update}. In order to efficiently find these sets of episodes, we need to maintain additional information that allows us to avoid full-blown candidate generation.
We show that this state information is a by-product of the Apriori algorithm, and therefore no extra processing is needed to maintain it.
In the Apriori algorithm, frequent patterns are discovered iteratively, in ascending order of their size; the procedure is often referred to as level-wise and alternates between counting and candidate generation. First a set $C^{i}$ of candidate $i$-size patterns is created by joining the frequent $(i-1)$-size patterns found in the previous iteration. Then the data is scanned to determine the frequency or count of each candidate pattern, and the frequent $i$-size patterns are extracted from the candidates. An interesting observation is that all candidate patterns that are not frequent constitute the negative border of the frequent lattice. This is true because, in the Apriori algorithm, a candidate pattern is generated only when all its subpatterns are frequent. The usual approach is to discard the border.
For our purposes, the patterns in the border contain the information required to identify the change in the frequent sets from one batch to the next.
The pseudocode for incrementally mining frequent patterns in batches is listed in Algorithm~\ref{alg:mine-top-k}.
Let the frequent episodes of size-$i$ be denoted by $\mathcal{F}_{s}^{i}$. Similarly, the border episodes of size-$i$ are denoted by $\mathcal{B}_{s}^{i}$. The frequency threshold used in each batch is $\fk{s}-\theta$. In the first batch of data, the top-$k$ patterns are found by progressively lowering the frequency threshold $f_{min}$ by a small amount $\epsilon$ (Lines 1-8). Once at least $k$ patterns of size $\ell$ are found, $\fk{s}$ is determined and the mining procedure repeated with a threshold of $\fk{s}-\theta$. The border patterns generated during level-wise mining are retained.
For subsequent batches, first $\fk{s}$ is determined. As shown in Remark~\ref{rem:batch-topk}, if $\theta \geq 2\Delta$, then the set of frequent patterns $\mathcal{F}_{s-1}^{\ell}$ in batch $B_{s-1}$ contains all patterns that can be frequent in the next batch $B_{s}$. Therefore simply updating the counts of all patterns in $\mathcal{F}_{s-1}^{\ell}$ in the batch $B_{s}$ and picking the $k^{th}$ highest frequency gives $\fk{s}$ (Lines 10-11). The new frequency threshold $f_{min}$ is set to be $\fk{s}-\theta$.
The procedure, starting from the bottom (size-1 patterns), updates the lattice for $B_{s}$. The data is scanned to determine the frequency of new candidates together with the frequent and border patterns from the lattice (Lines 15-18). In the first level (patterns of size 1), the candidate set is empty. After counting, the patterns from the frequent set $\mathcal{F}_{s-1}^{i}$ that continue to be frequent in the new batch are added to $\mathcal{F}_{s}^{i}$. But if a pattern is no longer frequent, it is moved to the border set and all its super-patterns are deleted (Lines 19-24). This ensures that only border patterns are retained in the lattice.
All patterns, either from the border set or the new candidate set, that are found to be frequent are added to $\mathcal{F}_{s}^{i}$. Such episodes are also added to $F_{new}^{i}$. Any remaining infrequent patterns belong to the border set, because otherwise they would have at least one infrequent subpattern and would have been deleted at a previous level (Line 24). These patterns are added to $\mathcal{B}_{s}^{i}$ (Line 30).
The candidate generation step is required to fill out the missing parts of the frequent lattice. We want to avoid a full-blown candidate generation. Note that if a pattern is frequent in $B_{s-1}$ and $B_{s}$ then all its subpatterns are also frequent in both $B_{s}$ and $B_{s-1}$. Any new pattern ($\not\in \mathcal{F}_{s-1}^{\ell} \cup \mathcal{B}_{s-1}^{\ell}$) that turns frequent in $B_{s}$, therefore, must have at least one subpattern that was not frequent in $B_{s-1}$ but is frequent in $B_{s}$. All such patterns are listed in $F_{new}^{i}$.
The candidate generation step (Line 31) for the next level generates only candidate patterns with at least one subpattern $\in F_{new}^{i}$. This greatly restricts the number of candidates generated at each level without compromising the completeness of the results.
The space and time complexity of the candidate generation is now $O(|F_{new}^{i}| \cdot |\mathcal{F}_{s}^{i}|)$ instead of $O(|\mathcal{F}_{s}^{i}|^{2})$, and in most practical cases $|F_{new}^{i}| \ll |\mathcal{F}_{s}^{i}|$. This is crucial in a streaming application where the processing rate must match the data arrival rate.
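The restricted candidate generation of Line~31 can be sketched as follows; this is an itemset-flavoured simplification (episode candidate generation additionally requires order-aware joins), and the function name and tuple representation are illustrative. Only candidates with at least one size-$i$ subpattern in $F_{new}^{i}$ are generated, and the standard Apriori prune is still applied:

```python
from itertools import combinations

def restricted_candidates(frequent_i, new_i, i):
    """Generate (i+1)-size candidates from the size-i frequent sets,
    keeping only those with at least one size-i subpattern in new_i."""
    frequent_i = {tuple(sorted(s)) for s in frequent_i}
    new_i = {tuple(sorted(s)) for s in new_i}
    cands = set()
    for a in new_i:                      # at least one member from F_new
        for b in frequent_i:
            u = tuple(sorted(set(a) | set(b)))
            if len(u) != i + 1:
                continue
            # Apriori prune: every size-i subpattern must be frequent
            if all(sub in frequent_i for sub in combinations(u, i)):
                cands.add(u)
    return cands

# Hypothetical scenario: XY has just turned frequent in the new batch.
frequent_2 = [('C', 'D'), ('C', 'X'), ('D', 'X'), ('X', 'Y'),
              ('C', 'Y'), ('D', 'Y')]
cands_3 = restricted_candidates(frequent_2, [('X', 'Y')], 2)
```

In this hypothetical scenario only the candidates $CXY$ and $DXY$ are generated, rather than every pairwise join over the frequent size-2 sets.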
\begin{algorithm}[!ht]
\caption{Mine top-$k$ $v$-persistent patterns.}
\label{alg:mine-top-k}
\begin{algorithmic}[1]
\small
\REQUIRE{A new batch of events $B_{s}$, the lattice of frequent and border patterns $(\mathcal{F}^{*}_{s-1}, \mathcal{B}^{*}_{s-1})$, and parameters $k$ and $\theta$}
\ENSURE{The lattice of frequent and border patterns $(\mathcal{F}^{*}_{s}, \mathcal{B}^{*}_{s})$}
\IF {$s = 1$}
\STATE $f_{min} = $ high value
\WHILE{$|\mathcal{F}_{s}^{\ell}| < k$}
\STATE Mine patterns with frequency $\geq f_{min}$
\STATE $f_{min} = f_{min} - \epsilon$
\ENDWHILE
\STATE $\fk{s} = $ frequency of the $k^{th}$ most frequent pattern $\in \mathcal{F}_{s}^{\ell}$
\STATE Mine $\ell$-size patterns in $B_{s}$ with frequency threshold $f_{min} = \fk{s} - \theta$
\STATE Store the frequent and border patterns (of size = $1\ldots \ell$) in $(\mathcal{F}^{*}_{s}, \mathcal{B}^{*}_{s})$
\ELSE
\STATE {\bf CountPatterns}$(\mathcal{F}^{\ell}_{s-1}, B_{s})$
\STATE Set $\fk{s} =$ the $k^{th}$ highest frequency among patterns $\in \mathcal{F}^{\ell}_{s-1}$
\STATE Set frequency threshold for $B_{s}$, $f_{min} =(\fk{s}-\theta)$
\STATE $\mathcal{C}^{1} = \emptyset$ \COMMENT{New candidate patterns of size $=1$}
\FOR{$i = 1\ldots\ell-1$}
\STATE $\mathcal{F}^{i}_{s} = \emptyset$ \COMMENT{Frequent patterns of size $i$}
\STATE $\mathcal{B}^{i}_{s} = \emptyset$ \COMMENT{Border patterns of size $i$}
\STATE $F^{i}_{new} = \emptyset$ \COMMENT{List of newly frequent patterns}
\STATE {\bf CountPatterns}$(\mathcal{F}^{i}_{s-1} \cup \mathcal{B}^{i}_{s-1} \cup \mathcal{C}^{i}, B_{s})$
\FOR{$\alpha \in \mathcal{F}^{i}_{s-1}$}
\IF{$\f{s} \geq f_{min}$}
\STATE $\mathcal{F}^{i}_{s} = \mathcal{F}^{i}_{s} \cup \{\alpha \}$
\ELSE
\STATE $\mathcal{B}^{i}_{s} = \mathcal{B}^{i}_{s} \cup \{\alpha \}$
\STATE Delete all its super-patterns from $(\mathcal{F}^{*}_{s-1}, \mathcal{B}^{*}_{s-1})$
\ENDIF
\ENDFOR
\FOR{$\alpha \in \mathcal{B}^{i}_{s-1} \cup \mathcal{C}^{i}$}
\IF{$\f{s} \geq f_{min}$}
\STATE $\mathcal{F}^{i}_{s} = \mathcal{F}^{i}_{s} \cup \{\alpha \}$
\STATE $F^{i}_{new} = F^{i}_{new} \cup \{\alpha\}$
\ELSE
\STATE $\mathcal{B}^{i}_{s} = \mathcal{B}^{i}_{s} \cup \{\alpha \}$
\ENDIF
\ENDFOR
\STATE $C^{i+1} = \mbox{\bf GenerateCandidate}_{i+1}(F^{i}_{new}, \mathcal{F}^{i}_{s})$
\ENDFOR
\ENDIF
\RETURN $(\mathcal{F}^{*}_{s}, \mathcal{B}^{*}_{s})$
\end{algorithmic}
\end{algorithm}
For a window $W_{s}$ ending in the batch $B_{s}$, the set of output patterns can be obtained by picking the top-$k$ most frequent patterns from the set $\mathcal{F}_{s}^{\ell}$. Each pattern also maintains a list that stores its batch-wise counts in the last $m$ batches. The window frequency is obtained by adding these entries together. The output patterns are listed in decreasing order of their window counts.
\begin{example}
In this example we illustrate the procedure for incrementally updating the frequent patterns lattice as a new batch $B_{s}$ is processed (see Figure~\ref{fig:lattice}).
\begin{figure}[htbp]
\centering
\includegraphics[width=\columnwidth]{fig/lattice}
\caption{Incremental lattice update for the next batch $B_{s}$ given the lattice of frequent and border patterns in $B_{s-1}$.}
\label{fig:lattice}
\end{figure}
Figure~\ref{fig:lattice}(A) shows the lattice of frequent and border patterns found in the batch $B_{s-1}$. $A B C D$ is a 4-size frequent pattern in the lattice. In the new batch $B_{s}$, the pattern $A B C D$ is no longer frequent. The pattern $C D X Y$ appears as a new frequent pattern. The pattern lattice in $B_{s}$ is shown in Figure~\ref{fig:lattice}(B).
In the new batch $B_{s}$, $A B$ falls out of the frequent set. $AB$ now becomes the new border, and all its super-patterns, namely $ABC$, $ABD$ and $ABCD$, are deleted from the lattice.
At level 2, the border pattern $X Y$ turns frequent in $B_{s}$. This allows us to generate $DXY$ as a new 3-size candidate. At level 3, $DXY$ is also found to be frequent and is combined with $CDX$ which is also frequent in $B_{s}$ to generate $CDXY$ as a 4-size candidate. Finally at level 4, $CDXY$ is found to be frequent. This shows that border sets can be used to fill out the parts of the pattern lattice that become frequent in the new data.
\end{example}
\subsection{Estimating $\Delta$ dynamically}
The parameter $\Delta$ in the bounded rate change assumption is critical to the entire formulation, but unfortunately the correct choice of $\Delta$ is highly data-dependent. Moreover, in the streaming setting the characteristics of the data can change over time, so no single predetermined value of $\Delta$ can be provided in any intuitive way. We therefore estimate $\Delta$ from the frequencies of $\ell$-size episodes in consecutive batches: we compute the differences in frequencies of episodes that are common to consecutive batches and take the value at the 75th percentile as an estimate of $\Delta$. We avoid using the maximum change as it tends to be noisy: a few patterns exhibiting large changes in frequency can skew the estimate and adversely affect the mining procedure.
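A minimal sketch of this estimator follows; the index-based percentile computation and the handling of an empty intersection are implementation choices, not prescribed by the analysis.

```python
def estimate_delta(prev_counts, curr_counts, q=0.75):
    """Estimate the bounded-rate-change parameter from the batch-wise
    frequencies of episodes seen in two consecutive batches."""
    common = set(prev_counts) & set(curr_counts)
    diffs = sorted(abs(curr_counts[e] - prev_counts[e]) for e in common)
    if not diffs:
        return 0
    # q-th percentile; the maximum is avoided because it is too noisy
    idx = min(len(diffs) - 1, int(q * len(diffs)))
    return diffs[idx]
```

For example, with batch frequencies $\{A\!:\!10, B\!:\!20, C\!:\!5\}$ followed by $\{A\!:\!12, B\!:\!18, C\!:\!9\}$, the absolute changes are $\{2,2,4\}$ and the estimate is 4.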
\section{Preliminaries}
\label{sec:preliminaries}
In the framework of frequent episodes \cite{MTV97}, an {\em event sequence} is denoted as $\langle (e_1,\tau_1), \ldots, (e_n,\tau_n) \rangle$, where $(e_i,\tau_i)$ represents the $i^\mathrm{th}$ event; $e_i$ is drawn from a finite alphabet $\ensuremath{\mathcal{E}}$ of symbols (called {\em event-types}) and $\tau_i$ denotes the time-stamp of the $i^\mathrm{th}$ event, with $\tau_{i+1} \geq \tau_i$, $i=1,\ldots,(n-1)$. An {\em $\ell$-node episode} $\alpha$ is defined by a triple $\alpha=(V_\alpha,<_\alpha,g_\alpha)$, where $V_\alpha=\{v_1,\ldots,v_\ell\}$ is a collection of $\ell$ nodes, $<_\alpha$ is a partial order over $V_\alpha$ and $g_\alpha\::\:V_\alpha \rightarrow \ensuremath{\mathcal{E}}$ is a map that assigns an event-type $g_\alpha(v)$ to each node $v\in V_\alpha$. There are two special classes of episodes: when $<_\alpha$ is total, $\alpha$ is called a {\em serial episode}, and when it is empty, $\alpha$ is called a {\em parallel episode}. An {\em occurrence} of an episode $\alpha$ is a map $h\::\:V_\alpha\rightarrow\{1, \ldots, n\}$ such that $e_{h(v)}=g_\alpha(v)$ for all $v\in V_\alpha$, and such that, for all pairs of nodes $v,v'\in V_\alpha$ with $v<_\alpha v'$, the map $h$ ensures $\tau_{h(v)}<\tau_{h(v')}$. Two occurrences of an episode are {\em non-overlapped} \cite{laxman06} if no event corresponding to one appears in-between the events corresponding to the other. The maximum number of non-overlapped occurrences of an episode is defined as its {\em frequency} in the event sequence. The task in frequent episode discovery is to find all patterns whose frequency exceeds a user-defined threshold. Given a frequency threshold, Apriori-style level-wise algorithms \cite{MTV97,ALVS12} can be used to obtain the frequent episodes in the event sequence. An important variant of this task is top-$k$ episode mining, where, rather than issue a frequency threshold to the mining algorithm, the user supplies the {\em number} of top frequent episodes that need to be discovered.
\begin{definition}[Top-$k$ episodes of size $\ell$]
\label{def:top-k}
The set of top-$k$ episodes of size $\ell$ is defined as the collection of all $\ell$-node episodes with frequency {\em greater than or equal to} the frequency $f^k$ of the $k^{th}$ most frequent $\ell$-node episode in the given event sequence.
\end{definition}
Note that the number of top-$k$ $\ell$-node episodes can exceed $k$, although the number of $\ell$-node episodes with frequencies strictly greater than $f^k$ is at most $(k-1)$.
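For instance, a schematic computation of this set (with string labels standing in for $\ell$-node episodes):

```python
def top_k_with_ties(freqs, k):
    """All patterns with frequency >= f^k, where f^k is the frequency of
    the k-th most frequent pattern; ties at f^k can push the size past k."""
    ranked = sorted(freqs.values(), reverse=True)
    fk = ranked[k - 1]
    return {p for p, f in freqs.items() if f >= fk}
```

With frequencies $\{a\!:\!10,\ b\!:\!8,\ c\!:\!8,\ d\!:\!8,\ e\!:\!3\}$ and $k=3$, we get $f^k=8$ and four episodes are returned, illustrating the remark above.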
\section{Related work}
\label{sec:related-work}
Most prior work in streaming pattern mining is related to frequent itemsets and sequential patterns~\cite{Karp:2003,Jin2005,Manku:2002,Calders2007}. Some interesting algorithms have also been proposed for streaming motif mining in time-series data~\cite{Mueen2010}. But these methods do not easily extend to other pattern classes like episodes and partial orders.
To our knowledge, there has been very little work in the area of mining patterns in discrete event streams. In this section we discuss some of the existing methods for itemsets, sequential patterns, and motifs.
Karp \textit{et al.}\ proposed a one-pass streaming algorithm for finding frequent events in an item sequence~\cite{Karp:2003}. The algorithm, at any given time, maintains a set $K$ of event types and their corresponding counts. Initially, this set is empty. When an event is read from the input sequence, if its event type exists in the set then its count is incremented; otherwise the event type is inserted into the set $K$ with count 1. When the size of the set $K$ exceeds $\lfloor 1/\theta \rfloor$, the count of each event type in the set is decremented by 1 (and the event type is deleted from the set if its count drops to zero). The key property is that any event type that occurs at least $n\theta$ times in the sequence is in the set $K$. To see this, consider an event type that occurs $f$ times in the sequence but is not in $K$. Each occurrence of this event type is eliminated together with more than $\lfloor 1/\theta \rfloor - 1$ occurrences of other event types (by decrementing all counts by 1). Thus, at least a total of $f/\theta$ events are eliminated, and hence $f/\theta < n$, where $n$ is the number of events in the sequence, giving $f < n\theta$. This method guarantees no false negatives for a given support threshold, but the space and time complexity of the algorithm varies inversely with the support threshold chosen by the user. This can be a problem when operating at low support thresholds. In \cite{Jin2005}, this approach was extended to mine frequent itemsets.
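The counting scheme described above can be sketched as follows (the dictionary representation and the rebuild-on-decrement step are implementation choices):

```python
def karp_frequent(stream, theta):
    """One-pass candidate set K: every event type occurring at least
    n * theta times in the stream is guaranteed to remain in K
    (no false negatives); false positives are possible."""
    K = {}
    cap = int(1 / theta)
    for e in stream:
        K[e] = K.get(e, 0) + 1
        if len(K) > cap:
            # decrement every count and drop the ones that reach zero
            K = {x: c - 1 for x, c in K.items() if c > 1}
    return set(K)
```

On the stream `a b a c a d a` with $\theta = 0.5$, the event type `a` (which occurs $4 \geq 7\theta$ times) is guaranteed to survive in $K$.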
Lossy counting, proposed by Manku and Motwani in 2002 \cite{Manku:2002}, constitutes another important class of streaming algorithms; in that work an approximate counting algorithm for itemsets is described.
The algorithm stores a list of tuples which comprise an item or itemset, a lower bound on its count, and a maximum error term $(\Delta)$. When processing the $i^{th}$ item, if it is currently stored then its count is incremented by one; otherwise, a new tuple is created with the lower bound set to one and $\Delta$ set to $\lfloor i \epsilon \rfloor$. Periodically, each tuple whose upper bound is less than $\lfloor i \epsilon \rfloor$ is deleted. This technique guarantees that the percentage error in reported counts is no more than $\epsilon$, and it is also shown that the space used by this algorithm is $O(\frac{1}{\epsilon}\log \epsilon n)$ for itemsets. Unfortunately, this method requires operating at a very low support threshold $\epsilon$ in order to provide small enough error bounds.
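A sketch of this per-item bookkeeping follows (bucket boundaries every $\lfloor 1/\epsilon\rfloor$ items; the variable names are illustrative and the pruning schedule is the standard bucket-boundary one):

```python
def lossy_counting(stream, eps):
    """Each tracked item keeps (count, max_error); a reported count
    undercounts the true frequency by at most eps * n."""
    width = int(1 / eps)                 # bucket width
    entries = {}                         # item -> (count, max_error)
    for i, e in enumerate(stream, start=1):
        bucket = (i - 1) // width + 1    # current bucket id
        if e in entries:
            c, d = entries[e]
            entries[e] = (c + 1, d)
        else:
            entries[e] = (1, bucket - 1)
        if i % width == 0:
            # prune entries whose upper bound c + d falls to the bucket id
            entries = {x: (c, d) for x, (c, d) in entries.items()
                       if c + d > bucket}
    return entries
```

Infrequent items are repeatedly created and pruned, so only items whose counts keep pace with the bucket id persist; this is what keeps the memory footprint sublinear in the stream length.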
In \cite{Mendes:2008}, the pattern-growth algorithm PrefixSpan \cite{PrefixSpan} for mining sequential patterns was extended to incorporate the idea of lossy counting.
In \cite{Calders2007}, the authors propose a new frequency measure for itemsets over data streams. The frequency of an itemset in a stream is defined as its maximal frequency over all windows in the stream from any point in the past until the current time that satisfy a minimal length constraint. They present an incremental algorithm that produces the current frequencies of all frequent itemsets in the data stream. The focus of this work is on the new frequency measure and its unique properties.
In \cite{Mueen2010} an online algorithm for mining time series motifs was proposed. The algorithm uses an interesting data structure to find a pair of approximately repeating subsequences in a window, with the Euclidean distance measure used to assess the similarity of the motif sequences. Unfortunately, this notion does not extend naturally to discrete patterns. Further, this motif mining formulation does not explicitly make use of a support or frequency threshold and returns exactly one pair of motifs that are found to be the closest in terms of distance.
A particular sore point in pattern mining is coming up with a frequency threshold for the mining process. The choice of this parameter is key to the success of any effective strategy for pruning the exponential search space of patterns. Mining the top-$k$ most frequent patterns has been proposed in the literature as a more intuitive formulation for the end user. In \cite{PLR10} we proposed an information-theoretic principle for determining the frequency threshold that is ultimately used in learning a dynamic Bayesian network model for the data. In both cases the idea is to mine patterns at the highest possible support threshold that either outputs the top-$k$ patterns or yields patterns satisfying a minimum mutual information criterion. This is different from the approach adopted, for example, in lossy counting, where the mining algorithm operates at a support threshold proportional to the error bound: in order to guarantee low errors, the algorithm tries to operate at the lowest possible threshold.
An episode or a general partial order pattern can be thought of as a generalization of itemsets where each item in the set is not confined to occur within the same transaction (i.e. at the same time tick) and there is additional structure in the form of ordering of events or items. In serial episodes, events must occur in exactly one particular order. Partial order patterns allow multiple orderings. In addition there could be repeated event types in an episode.
The loosely coupled structure of events in an episode results in a narrower separation between the frequencies of true and noisy patterns (i.e., those resulting from random co-occurrences of events) and quickly leads to a combinatorial explosion of candidates when mining at low frequency thresholds. Most of the itemset literature does not deal with the problem of candidate generation; the focus is on counting and not so much on efficient candidate generation schemes.
In this work we explore ways of doing both counting and candidate generation efficiently. Our goal is to devise algorithms that can operate at frequency thresholds that are as high as possible and yet give certain guarantees about the output patterns.
\section{Results}
\label{sec:results}
In this section we present results both on synthetic data and on data from real neuroscience experiments. We compare the performance of the proposed streaming episode mining algorithms on synthetic data to quantify the effect of different parameter choices and data characteristics on the quality of the top-$k$ episodes reported by each method. Finally, we show the quality of results obtained on neuroscience data.
For the purpose of comparing the quality of results, we set up the following six variants of the frequent episode mining algorithm:
\begin{description}
\item[Alg 0:] This is the naive brute-force top-$k$ mining algorithm that loads an entire window of events at a time and mines the top-$k$ episodes by repeatedly lowering the frequency threshold. When a new batch arrives, events from the oldest batch are retired and the mining process is repeated from scratch. This method acts as the baseline for comparing all other algorithms in terms of precision and recall.
\item[Alg 1:] The top-k mining is done batch-wise. The top-k episodes over a window are
reported from within the set of episodes that belong to the batch-wise top-k of at least one batch in the window.
\item[Alg 2:] Here the algorithm is the same as above, but once an episode enters the top-k in any of the batches in a window, it is tracked over several subsequent batches. An episode is removed from the list of episodes being tracked if it does not occur in the top-k of the last $m$ consecutive batches. This strategy helps in obtaining a larger candidate set and also in getting more accurate counts of candidate patterns over the window.
\item[Alg 3:] This algorithm uses a batch-wise frequency threshold $\fk{s}-2\delta$, which ensures that the top-k episodes in the next batch $B_{s+1}$ are contained in the frequent lattice of $B_{s}$. This avoids the multiple passes over the data otherwise needed to obtain the $k$ most frequent episodes by iteratively lowering the support threshold. The patterns with frequency between $\fk{s}$ and $\fk{s}-2\delta$ also improve the overall precision and recall with respect to the window.
\item[Alg 4:] In this case the batch-wise frequency threshold is $\fk{s}-2(m-v)\delta$ which guarantees finding all $(v,k)$-persistent episodes in the data. We report results for $v=3m/4$ and $v=m/2$.
\item[Alg 5:] Finally, this last algorithm uses a heuristic batch-wise threshold of $\fk{s} -m(2-\frac{v}{m} -(\frac{v}{m})^2)\delta$. Again we report results for $v=3m/4$ and $v=m/2$.
\end{description}
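The batch-wise thresholds used by Alg 3--5 can be sketched as simple functions. In the sketch below (function names are ours, for illustration), `f_k` stands for $\fk{s}$, the frequency of the $k$-th most frequent episode in batch $B_s$, and `delta` for the estimated batch-to-batch frequency drift $\delta$:

```python
def threshold_alg3(f_k, delta):
    """Alg 3: mine batch B_{s+1} at f_k^s - 2*delta, so the next batch's
    top-k is guaranteed to lie inside the mined frequent lattice."""
    return f_k - 2 * delta

def threshold_alg4(f_k, delta, m, v):
    """Alg 4: threshold that guarantees recovering all (v,k)-persistent
    episodes over a window of m batches."""
    return f_k - 2 * (m - v) * delta

def threshold_alg5(f_k, delta, m, v):
    """Alg 5: heuristic threshold f_k^s - m*(2 - v/m - (v/m)^2)*delta."""
    r = v / m
    return f_k - m * (2 - r - r * r) * delta
```

Note that for fixed `f_k` and `delta`, `threshold_alg3` is the highest of the three, and `threshold_alg4`/`threshold_alg5` decrease as `v` decreases, which is why lower `v` yields larger candidate sets.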
\subsection{Synthetic Datasets}
The datasets we used for experimental evaluation are listed in Table~\ref{tab:datasets}. The name of the data set is listed in Column 1, the size of the alphabet (or total number of event types) in Column 2, the average resting firing rate in Column 3, and the number of embedded patterns in Column 4.
In these datasets the data length is varied from - million to - million events, the alphabet size is varied from 500 to 5000, the resting firing rate from 2.0 to 25.0,
and the number of patterns embedded in the data from 10 to 50.
\paragraph{Data generation model:}
The data generation model for synthetic data is based on the inhomogeneous Poisson process model used for evaluating the algorithm for learning excitatory dynamic networks \cite{PLR10}. We introduce two changes to this model. First, in order to mimic real data more closely, the event-type distribution of the events that constitute the background noise follows a power law. This gives the simulated data its long-tail characteristics.
The second modification was to allow the rate of arrival of episodes to change over time. As time progresses, the frequency of episodes in the recent window or batch slowly changes. We use a randomized scheme to update the connection strengths in the neuronal simulation model. The updates happen at the same timescale as the batch sizes used for evaluation.
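A minimal sketch of the background-noise part of such a generator is shown below. Function names and the power-law exponent `alpha` are ours for illustration; the actual simulator also embeds episodes via the inhomogeneous Poisson model of \cite{PLR10} and drifts their rates over time, which is omitted here:

```python
import random

def power_law_weights(n_types, alpha=1.5):
    """Zipf-like weights: a few event types dominate the background
    noise, giving the simulated data its long-tail character."""
    return [1.0 / (rank ** alpha) for rank in range(1, n_types + 1)]

def noise_events(n_types, rate, duration, alpha=1.5, seed=0):
    """Homogeneous Poisson noise stream: exponential inter-arrival
    times at `rate` events/sec; event types drawn from a power law.
    Returns a time-ordered list of (timestamp, event_type) pairs."""
    rng = random.Random(seed)
    weights = power_law_weights(n_types, alpha)
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate)
        if t > duration:
            return events
        events.append((t, rng.choices(range(n_types), weights)[0]))
```

For example, `noise_events(1000, 10.0, 1e5)` would approximate the noise component of dataset A2 (alphabet size 1000, resting rate 10.0 Hz, one batch of $10^5$ sec).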
\begin{table}
\centering
\caption{Datasets}
\label{tab:datasets}
\vspace{1mm}
\begin{tabular}{|l|rrr|}
\hline
Dataset & Alphabet & Rest Firing & Number of \\
Name & Size & Rate & Patterns \\
\hline
A1 & 500 & 10.0 & 50\\
A2 & 1000 & 10.0 & 50\\
A3 & 5000 & 10.0 & 50\\
\hline
B1 & 1000 & 2.0 & 50\\
B2 & 1000 & 10.0 & 50\\
B3 & 1000 & 25.0 & 50\\
\hline
C1 & 1000 & 10.0 & 10\\
C2 & 1000 & 10.0 & 25\\
C3 & 1000 & 10.0 & 50\\
\hline
\end{tabular}
\end{table}
\subsection{Comparison of algorithms}
In Fig.~\ref{fig:all-compare}, we compare the five algorithms---Alg 1 through Alg 5---that
report the frequent episodes over the window while looking at one batch at a time, against the baseline algorithm Alg 0 that stores and processes the entire window at each window slide. The results are averaged over all 9 data sets shown in Table~\ref{tab:datasets}. This averaging marginalizes over the data characteristics and gives a more general picture of each algorithm. The parameter settings for the experiments are shown in Table~\ref{tab:params}.
Fig.~\ref{fig:all-compare} (a) plots the precision of the output of each algorithm compared to that of Alg 0 (treated as ground truth). Similarly, Fig~\ref{fig:all-compare} (b) shows the recall. Since the size of output of each algorithm is roughly $k$, the corresponding precision and recall numbers are almost the same.
Average runtimes are shown in Fig.~\ref{fig:all-compare}~(c) and average memory requirement in MB is shown in Fig.~\ref{fig:all-compare}~(d).
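The set-based precision and recall used throughout this section can be computed with a small helper; the sketch below (function name is ours) also makes it clear why precision and recall nearly coincide when both sets have size roughly $k$:

```python
def precision_recall(reported, truth):
    """Set-based precision/recall of a reported top-k episode set
    against the baseline (Alg 0) top-k, treated as ground truth."""
    reported, truth = set(reported), set(truth)
    hits = len(reported & truth)
    precision = hits / len(reported) if reported else 1.0
    recall = hits / len(truth) if truth else 1.0
    return precision, recall
```

When `len(reported) == len(truth) == k`, both ratios share the numerator `hits` and have equal denominators, so precision equals recall exactly.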
\begin{table}[htbp]
\centering
\caption{Parameter settings}
\label{tab:params}
\begin{tabular}{|l|l|}
\hline
Parameter & Value(s)\\
\hline
Batch size $T_{b}$ & $10^{5}$ sec ($\approx 1$ million events per batch)\\
Number of batches in a window $m$ & 10 (5,15)\\
$v$, in $(v,k)$-persistence & 0.5m, 0.75m\\
$k$ in $(v,k)$-persistence and in top-$k$ & 25, 50\\
$\ell$ - size of episode & 4 \\
\hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
\centering
\subfigure[Precision] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-all-precision}}
\subfigure[Recall] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-all-recall}}
\subfigure[Runtime] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-all-runtime}}
\subfigure[Memory] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-all-memory}}
\caption{Comparison of the average performance of different streaming episode mining algorithms. Alg 1 and 2 give lower precision and recall values than any other algorithm. Overall the proposed methods give at least one order of magnitude improvement over the baseline algorithm (Alg 0) in terms of both time and space complexity.}
\label{fig:all-compare}
\end{figure}
We consistently observe that Alg 1 and 2 give lower precision and recall values than
any other algorithm. This reinforces our observation that the top-$k$ patterns in a window can be quite different from the top-$k$ patterns in the constituent batches. Alg 2 provides only a slight improvement over Alg 1 by tracking an episode over subsequent batches once it enters the top-$k$. This improvement can be attributed to the fact that the window frequencies of patterns that were once in the top-$k$ are better estimated. Alg 3 gives higher precision and recall than Alg 1 and 2. The frequency threshold used in Alg 3 is given by Theorem~\ref{thm:topk-mine}. Using this result we are able to estimate the value $\fk{s}-2\delta$ by simply counting the episodes that are frequent in the previous batch. This avoids the multiple iterations required in general for finding the top-$k$ patterns. Fortunately, this threshold also results in significantly higher precision and recall.
We ran Alg 4 and 5 for two different values of $v$, viz. $v = m/2$ and $v = 3m/4$. Both these algorithms guarantee finding all $(v,k)$-persistent patterns for the respective values of $v$. For the sake of comparison we also include episodes that exceed the frequency threshold prescribed by $(v,k)$-persistence but do not appear in the top-$k$ of at least $v$ batches. We observe that the precision and recall improve a little over Alg 3 with a reasonable increase in memory and runtime requirements.
For a higher value of $v$, this algorithm insists that the frequent patterns must persist over more batches. This raises the support threshold and, as a result, there is an improvement in terms of memory and runtime, but a small loss in precision and recall. Note that the patterns missed by the algorithm are either not $(v,k)$-persistent or our estimate of $\delta$ has some error in it.
In addition, Alg 5 gives a slight improvement over Alg 4. This shows that our heuristic threshold is effective.
Overall the proposed methods give at least one order of magnitude improvement over the baseline algorithm in terms of both time and space complexity.
\subsubsection{Performance over time}
Next we consider dataset A2, with 1000 event types, an average resting firing rate of 10 Hz, and 50 embedded patterns. On this data we show how the
performance of the five algorithms changes over time. The window size is set to $m=10$ batches and the batch size to $10^{5}$ sec. Fig.~\ref{fig:over-time} shows the comparison over 50 contiguous batches. Fig.~\ref{fig:over-time} (a) and (b) show how precision and recall evolve over time; Fig.~\ref{fig:over-time} (c) and (d) show the corresponding memory usage and runtimes.
The data generation model allows the episode frequencies to change slowly over time. In the
dataset used in this comparison we change the frequencies of embedded episodes in two time intervals: batches 15 to 20 and batches 35 to 42. In Fig.~\ref{fig:over-time} (a) and (b), the dashed line (listed as Alg 0 in the legend) shows the comparison of top-$k$ episodes between consecutive window slides: the top-$k$ episodes in window $W_{s-1}$ are treated as the predicted output when computing precision and recall for $W_{s}$. The purpose of this curve is to show how the true top-$k$ set changes with time and how well the proposed algorithms track this change.
\begin{figure}[!ht]
\centering
\subfigure[Precision] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-n1000-p50-e10-k50-Tb5-precision}}
\subfigure[Recall] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-n1000-p50-e10-k50-Tb5-recall}}
\subfigure[Runtime] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-n1000-p50-e10-k50-Tb5-runtime}}
\subfigure[Memory] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-n1000-p50-e10-k50-Tb5-memory}}
\caption{Comparison of the performance of different streaming episode mining algorithms over a sequence of 50 batches (where each batch is $10^{5}$ sec wide and each window consists of 10 batches).}
\label{fig:over-time}
\end{figure}
Alg 1 and 2 perform poorly. On average, in the transient regions (batches 15 to 20 and batches 35 to 42) they perform 15 to 20\% worse than any other method. Alg 3, 4 and 5 (for v=0.75m and v=0.5m) perform consistently above the reference curve of Alg 0. Any reasonable algorithm is expected to do better than one that uses the top-$k$ of $W_{s-1}$ to predict the top-$k$ of the window $W_{s}$. The precision and recall performance are in the order Alg 3 $<$ Alg 4 v=0.75m $<$ Alg 4 v=0.5m $<$ Alg 5 v=0.75m $<$ Alg 5 v=0.5m. This is the same order as the frequency thresholds used by each method, as expected.
In terms of runtime and memory usage, the changing top-$k$ does not affect these numbers. The lowest runtimes are those of Alg 3. The initial slope in the runtimes and memory usage of Alg 0 is due to the fact that the algorithm loads the entire window, one batch at a time, into memory. In this experiment the window consists of $m=10$ batches; therefore, only after the first 10 batches is one complete window span available in memory.
\subsubsection{Effect of Data Characteristics}
In this section we present results on synthetic data with different characteristics, namely, number of event types (or alphabet size), noise levels and number of patterns embedded in the data.
\begin{figure}[!ht]
\centering
\subfigure[Precision] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-alphabet-precision}}
\subfigure[Recall] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-alphabet-recall}}
\subfigure[Runtime] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-alphabet-runtime}}
\subfigure[Memory] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-alphabet-memory}}
\caption{Effect of alphabet size. The proposed algorithms are robust to large alphabet sizes. Precision and recall drop by only 2-4\% going from alphabet size of 500 to 5000.}
\label{fig:alphabet}
\end{figure}
In Fig.~\ref{fig:alphabet} we report the effect of alphabet size on the quality of results of the different algorithms. In datasets A1, A2 and A3 the alphabet size, i.e. the number of distinct event types, is varied from 500 to 5000. We observe that the performance is better for smaller alphabet sizes. Alg 1 and 2 perform consistently worse than the other algorithms across alphabet sizes.
In this experiment we find that the quality of results for the proposed algorithms is not very sensitive to alphabet size. The precision and recall numbers drop by only 2-4\%. This is quite different from the pattern mining setting where the user provides a frequency threshold. In our experience alphabet size is critical in the fixed frequency threshold based formulation. For low thresholds, large alphabet sizes can quickly lead to uncontrolled growth in the number of candidates. In our formulation the support threshold is dynamically readjusted and as a result the effect of large alphabet size is attenuated.
\begin{figure}[t]
\centering
\subfigure[Precision] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-noise-precision}}
\subfigure[Recall] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-noise-recall}}
\subfigure[Runtime] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-noise-runtime}}
\subfigure[Memory] {\includegraphics[width=0.45\columnwidth,height=2in]{fig/sim-noise-memory}}
\caption{Effect of noise. In terms of precision and recall Alg 5 is most robust to noise in the data.}
\label{fig:noise}
\end{figure}
Next, in Fig.~\ref{fig:noise}, we show the effect of noise. The average firing rate of the noise event-types (event types that do not participate in pattern occurrences) is varied from 2.0 Hz to 25 Hz. The precision and recall of Alg 1 and 2 degrade quickly with increasing noise. A small decrease in precision and recall of Alg 3 and Alg 4 is seen, but the performance of Alg 5 (for both v=0.75m and v=0.5m) stays almost at the same level. It seems that the frequency threshold generated by Alg 5 is sufficiently low to find the correct patterns even at higher noise levels, but not so low as to require significantly more memory ($\approx 400$ MB) or runtime ($\approx 70$ sec per batch at noise = 25.0 Hz for v=0.5m) than the other algorithms.
\begin{figure}[t]
\centering
\subfigure[Precision] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-patterns-precision}}
\subfigure[Recall] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-patterns-recall}}
\subfigure[Runtime] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-patterns-runtime}}
\subfigure[Memory] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-patterns-memory}}
\caption{Effect of number of embedded patterns.}
\label{fig:patterns}
\end{figure}
In Fig.~\ref{fig:patterns}, we vary the number of patterns embedded in the data from 10 to 50 and study its effect.
Once again the performance of our proposed methods is fairly flat in all the metrics. Alg 3, 4 and 5 are less sensitive to the number of patterns embedded in the data than Alg 1 and 2.
\subsubsection{Effect of Parameters}
So far the discussion has been about the effect of data characteristics of the synthetic data. The parameters of the mining algorithms were kept fixed. In this section we look at two important parameters of the algorithms, namely, the batch size $T_{b}$ and the number of batches that make up a window, $m$.
In Fig.~\ref{fig:batch}, the quality and performance metrics are plotted for three different batch sizes: $10^{3}$, $10^{4}$ and $10^{5}$ (in sec). Batch size appears to have a significant effect on precision and recall. There is a 10\% decrease in both precision and recall when the batch size is reduced from $10^{5}$ sec to $10^{3}$ sec. But note that a 100-fold decrease in batch size changes the quality of results by only 10\%.
It is not hard to imagine that for smaller batch sizes the episode statistics have higher variability across batches, resulting in lower precision and recall over the window. As the batch size grows, the top-$k$ in a batch starts to resemble the top-$k$ of the window. Transient patterns are not able to gather sufficient support within a large batch.
As expected, the runtimes and memory usage are directly proportional to the batch size in all cases. The extra space and time is required only to handle more data. Batch size does not play a role in growth of number of candidates in the mining process.
\begin{figure}[!ht]
\centering
\subfigure[Precision] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-batch-precision}}
\subfigure[Recall] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-batch-recall}}
\subfigure[Runtime] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-batch-runtime}}
\subfigure[Memory] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-batch-memory}}
\caption{Effect of batchsize. Larger batch sizes have higher precision and recall. Precision and recall increase logarithmically with batch size. (Note that x-axis is in log scale)}
\label{fig:batch}
\end{figure}
Next, in Fig.~\ref{fig:window}, we show how the number of batches in a window affects the performance. Precision and recall decrease linearly with the number of batches in a window in Fig.~\ref{fig:window}(a) and (b), whereas the memory and runtime requirements grow linearly with the number of batches. The choice of the number of batches provides a trade-off between the window size over which the user desires the frequent persistent patterns and the accuracy of the results: for larger window sizes the quality of the results is poorer. Note that the memory usage and runtime do not increase much for algorithms other than Alg 0 (see Fig.~\ref{fig:window} (c) and (d)), because these algorithms process only one batch of data irrespective of the number of batches in a window. Although the
batch-wise support threshold decreases for larger windows, in the synthetic datasets we see that this does not lead to an unprecedented increase in the number of candidates.
\begin{figure}[!ht]
\centering
\subfigure[Precision] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-window-precision}}
\subfigure[Recall] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-window-recall}}
\subfigure[Runtime] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-window-runtime}}
\subfigure[Memory] {\includegraphics[width=0.45\columnwidth,height=1.8in]{fig/sim-window-memory}}
\caption{Effect of the number of batches in a window. The memory usage and runtime do not increase much for algorithms other than Alg 0.}
\label{fig:window}
\end{figure}
\subsection{Multi-neuronal Data}
Multi-electrode arrays provide high-throughput recordings of the spiking activity in neuronal tissue and are hence rich sources of event data, where events correspond to specific neurons being activated. We used data from dissociated cortical cultures gathered over several days in
Steve Potter's laboratory at Georgia Tech \cite{Potter2006}. This is a rich collection of recordings from a 64-electrode MEA setup.
We show the results of mining frequent episodes in the data collected over several days from Culture 6~\cite{Potter2006}. We use a batch size of 150 sec; all other mining parameters are the same as those used for the synthetic data. The plots in Fig.~\ref{fig:real} show the performance of the different algorithms as time progresses. Alg 1 and 2 give very low precision values, which implies that the top-$k$ in a batch is quite different from the top-$k$ in the window. Alg 3, 4 and 5 perform equally well on the MEA data, with Alg 3 giving the best runtime performance.
\begin{figure}[!ht]
\centering
\subfigure[Precision] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-real-precision}}
\subfigure[Recall] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-real-recall}}
\subfigure[Runtime] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-real-runtime}}
\subfigure[Memory] {\includegraphics[width=\columnwidth,height=1.4in]{fig/sim-real-memory}}
\caption{Comparison of performance of different algorithms on real multi-neuronal data.}
\label{fig:real}
\end{figure}
At times Alg 3 requires slightly more memory than the other algorithms (Alg 4 and 5). This may seem counter-intuitive, as Alg 4 and 5 use lower frequency thresholds. But since $\delta$ is dynamically estimated from all episodes being tracked by the algorithm, it can easily be the case that the $\delta$ estimates made by Alg 3 are looser and hence result in higher memory usage.
\label{sec:introducion}
Tag-aware recommendation systems (TRS) have been widely deployed to better depict user preferences and item features and to bridge the information gap between users and items \cite{hutchison_information_2006, bischoff_can_2008}. The fundamental element of TRS is a folksonomy, where users can freely assign tags to items that they interact with (e.g., movies, music, and bookmarks) \cite{rendle_learning_2009}. These tags are concise and comprehensive words or phrases that reflect users' subjective preferences and items' characteristics \cite{chen_tgcn_2020}. From this perspective, tags in a folksonomy carry collaborative information between users and items. By exploring this collaborative information in the tagging procedure, TRS is able to provide personalized item lists for users \cite{zuo_tag-aware_2016,li_tag-aware_2019,chen_tgcn_2020}. Therefore, folksonomy records can be introduced into recommendation systems to enhance interpretability and improve recommendation quality.
A common paradigm is to transform tags into a generic feature vector and feed them into feature-based models to integrate auxiliary folksonomy records. For example, CFA\cite{zuo_tag-aware_2016} used a sparse autoencoder (SAE) to obtain tag-based user latent representations and combined them with user-based collaborative filtering (CF). Besides, DSPR\cite{xu_dspr_2016} and HDLPR\cite{xu_hdlpr_2017} leveraged the multi-layer perceptron (MLP) to process such sparse feature vectors and extract abstract user and item representations, and AIRec\cite{chen_airec_2021} provided a hybrid user model with hierarchical attention networks, which can infer implicit user preferences from explicit folksonomy records. In recent years, some researchers have organized the folksonomy records as a graph and utilized graph neural networks (GNN) for TRS. TGCN\cite{chen_tgcn_2020} used graph convolutional networks (GCN) for TRS and outperformed other state-of-the-art TRS models. TA-GNN\cite{huang_tagnn_2021} leveraged two graph attention networks for embedding aggregation and achieved higher-quality recommendations as well.
Although the above methods improve recommendation performance in TRS, they come with some issues resulting from the sparsity of data and the redundancy and ambiguity of tags \cite{shepitsen_personalized_2008}. To be specific, the \textbf{sparsity} issue arises because most users assign only a small number of tags to the items they interact with. The \textbf{redundancy} and \textbf{ambiguity} issues arise because several tags can share the same meaning, or a single tag can carry different meanings, due to the lack of contextual semantics. For example, owing to the diversity in users' writing and expression styles, some tags with different forms have the same meaning and indicate similar preferences: ``world war 2" and ``ww2" are two different tags in the folksonomy, but both are usually assigned to the movie ``Schindler's List". Moreover, some tags consist of polysemous words, which have different interpretations in different contexts: the tag ``apple" could be understood as a technology company by most tech enthusiasts rather than as a kind of fruit. These issues increase training difficulty and ultimately degrade the effectiveness of recommendation systems.
Several recent works that introduced GNN to TRS, such as TGCN\cite{chen_tgcn_2020}, GNN-PTR\cite{chen_graph_2020} and TA-GNN\cite{huang_tagnn_2021}, have demonstrated the effectiveness of exploiting subgraphs with high-hop neighbors. Although those works have shown promising results, we argue that their designs are rather burdensome, directly inherited from GCN without justification.
This paper focuses on the issues mentioned above and proposes a GCN-based recommendation model for TRS. Specifically, we construct two sets of edges among the user tagging records. The first set of edges reflects the active interaction between users and tags, and the second set reflects the passive interaction between items and tags. The Folksonomy Graph (FG) can then be constructed from these edge sets. It is worth mentioning that direct relationships between users and items are removed, because part of the tagging behavior constitutes negative interaction; for example, some users assign the tag ``worst movie ever" to items. The FG is thus constructed based on tagging and tagged information; Fig.\ref{folksonomy} illustrates an example. Then, we propose Light Folksonomy Graph Collaborative Filtering (LFGCF), inspired by LightGCN\cite{he_lightgcn_2020}, in which graph convolutional networks integrate the topological structure for representation learning and share tag representations that bridge the information gap between users and items. Finally, we design a regularization function named TransRT to model the representations and perform joint learning with the recommendation task from the user-tagging and item-tagged perspectives.
The main contributions of this paper are summarized as follows:
\begin{itemize}
\item We construct the FG based on the user tagging records and the item tagged records, which reflect users' preferences and items' characteristics, respectively; the direct interaction between users and items is not used to construct the graph structure;
\item We leverage a GCN for representation learning with a light design specific to TRS, and jointly optimize TransRT for the Top-K recommendation task;
\item We perform extensive experiments on three public datasets, demonstrating improvements of LFGCF over state-of-the-art GCN methods for the tag-aware Top-K recommendation.
\end{itemize}
The rest of this paper is organized as follows. Section~\ref{sec:related} presents related work on TRS and GNN-based recommendation systems. Section~\ref{sec:material} gives some background, including the problem formulation and the definition of the FG. In Section~\ref{sec:method}, the proposed GCN-based recommendation method is described in detail. Section~\ref{sec:experiment} reports the results of the hyperparameter experiments and ablation studies. Finally, we conclude our contributions and give directions for future work in Section~\ref{sec:conclusions}.
\section{Related Works}
\label{sec:related}
\subsection{Tag-aware recommendations}
Collaborative tagging recommendation allows users to freely assign tags to all kinds of items and then utilizes these tags to provide better personalized recommendations. However, just like collaborative filtering, collaborative tagging recommendation suffers from the sparsity of interactions, and the redundancy and ambiguity of tags further compromise recommendation performance. By incorporating tagging information into collaborative filtering, Zhen et al. \cite{zhen_tagicofi_2009} came up with TagiCofi, which managed to better characterize user similarities and achieved better recommendations. To tackle the redundancy of tags, Shepitsen et al. \cite{shepitsen_personalized_2008} used hierarchical clustering on social tagging systems. Peng et al. \cite{peng_collaborative_2010} further integrated the relationships between users, items, and tags and came up with a joint item-tag recommendation. Zhang et al. \cite{zhang_personalized_2010, zhang_solving_2012} incorporated a user-tag-item tripartite graph, which turned out to be effective for improving the accuracy, diversity, and novelty of recommendations. FolkRank++ was brought up by Zhao et al. \cite{zhao_folkrank_2021} to fully exploit the similarities among users and items. Focusing on personalized ranking in collaborative tagging systems, Rendle et al. \cite{rendle_learning_2009} introduced matrix factorization and brought up RTF, which optimizes the ranking in personalized recommendations. Later, enlightened by BPR \cite{rendle_bpr_2009}, Li et al. \cite{li_tag-aware_2019} came up with BPR-T for tag-aware recommendations. With the development of deep learning, Zuo et al. \cite{zuo_tag-aware_2016} used deep neural networks to extract tag information, which helped alleviate the sparsity, redundancy, and ambiguity of tags and generate accurate user profiles.
Despite the fact that much effort has been devoted to tag-aware recommendations, the implicit patterns underlying user tagging behavior are not fully extracted. We introduce a modified knowledge graph algorithm to tackle this problem.
\subsection{GNN-based recommendations}
Recent years have witnessed rapid development in graph neural networks, which perform excellently in node classification and edge prediction. Berg et al. \cite{berg_graph_2017} brought up GCMC, in which an autoencoder is used on the user-item bipartite graph to generate expressive embeddings. Ying et al. \cite{ying_graph_2018} focused on web-scale recommendation systems and came up with a GCN-based algorithm, PinSage. Results showed that PinSage has excellent robustness and can generate high-quality recommendations. Wang et al. \cite{wang_neural_2019} incorporated GCN and came up with NGCF. Benefiting from multi-hop neighborhood connectivities, NGCF achieved good recommendation performance. Later, He et al. \cite{he_lightgcn_2020} showed that some designs in NGCF are burdensome and compromise recommendation performance, so LightGCN was brought up to simplify the model design, achieving better recommendations. Focusing on click-through rate (CTR) prediction, Li et al. \cite{li_fi-gnn_2019} came up with Fi-GNN, in which gated recurrent units (GRU) are used for information aggregation. DG-ENN was brought up by Guo et al. \cite{guo_dual_2021} for the same task; by incorporating the attribute graph and the collaborative graph, DG-ENN is able to alleviate the feature and behavior sparsity problems. Considering specific factors in CTR tasks, Zheng et al. \cite{zheng_price-aware_2020} took price into account in a GCN-based model. Furthermore, Su et al. \cite{su_detecting_2020} used $L_0$ regularization in a GNN approach to automatically distinguish valuable factors. Focusing on the re-ranking task in recommendations, Liu et al. \cite{liu_personalized_2020} developed IRGPR, which introduces an intent embedding network to embed user intents and has proven effective for re-ranking. Lately, several researchers have focused on applying graph neural networks to collaborative tagging systems. Chen et al. \cite{chen_tgcn_2020} used graph convolutional networks for tag-aware recommendations, and their TGCN model outperformed other state-of-the-art models. Huang et al. \cite{huang_tagnn_2021} used two graph attention networks for embedding aggregation and achieved high-quality recommendations as well.
Few researchers have applied graph neural networks to tag-aware recommendations, and the existing model structures are quite complicated, making training rather tricky. Our proposed method uses a relatively light and straightforward graph structure, which is expected to lower the training cost and improve performance.
\section{Material}
\label{sec:material}
\subsection{Problem Formulation}
Folksonomy, also called user tagging behavior, is the fundamental element of TRS. It is defined as a series of tags assigned by users when interacting with items they are interested in. Generally, users interacting with items through operations such as clicking, tagging, or commenting can be viewed as producing folksonomy records. A record is aggregated into a triplet, i.e., $a=(u,t,i)$, which represents that user $u$ assigned tag $t$ to item $i$. These personalized tags reflect users' subjective preferences and the characteristics of items. The tagging procedure is rich in collaborative information. By exploring this collaborative information, TRS can further infer users' preferences, summarize the features of items and understand the connotations of tags, which hopefully improves the quality of recommendation systems.
Suppose the numbers of elements in the user set $\mathcal{U}$, item set $\mathcal{I}$, and tag set $\mathcal{T}$ are $N_u$, $N_i$ and $N_t$, respectively. The folksonomy is a tuple $\mathcal{F}=(\mathcal{U},\mathcal{T},\mathcal{I},\mathcal{A})$, where $\mathcal{A} \subseteq \mathcal{U} \times \mathcal{I} \times \mathcal{T}$ is the set of records in a typical TRS. During the tagging procedure, user $u$ interacts with the target item $i$ through explicit tagging feedback with tag $t$, e.g., a user watching the movie 'Transformers' because of the tag 'sci-fi'. However, few researchers have addressed the problem of information leakage in folksonomy. We argue that directly modeling explicit tagging feedback as implicit feedback may leak information, because part of the tagging behavior is negative feedback, e.g., a user tagging a movie as 'boring'. This leakage can hinder recommendation performance on the test dataset. Our approach to tackling the problem is to model the user-tag tagging interactions and item-tag tagged interactions in $\mathcal{A}$ separately, rather than modeling user-item interactions directly.
This paper focuses on recommending a personalized ranking list of items to each user $u$ in a TRS. By exploring the implicit feedback in user tagging assignments, leveraging collaborative information, and training a model to refine embeddings, we generate a Top-K list of items for each user:
\begin{equation}
Top(u, K) = \mathop{argmax}^{(K)}_{i\in \mathcal{I}}\hat{y}_{u,i}
\end{equation}
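As a concrete illustration, given predicted scores $\hat{y}_{u,i}$ for one user, the $Top(u, K)$ selection can be sketched in a few lines of NumPy (the score values here are purely hypothetical):

```python
import numpy as np

def top_k(scores: np.ndarray, k: int) -> np.ndarray:
    """Return the indices of the k highest-scoring items, best first."""
    # argpartition finds the k largest in O(n); argsort then orders them
    idx = np.argpartition(-scores, k - 1)[:k]
    return idx[np.argsort(-scores[idx])]

# Hypothetical predicted scores \hat{y}_{u,i} for one user over 6 items
y_hat = np.array([0.1, 0.9, 0.3, 0.8, 0.2, 0.5])
print(top_k(y_hat, 3))  # -> [1 3 5]
```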
\subsection{Folksonomies Graph}
To prevent this information leakage, we construct two sets of edges, $\mathcal{E}_{u,t}$ and $\mathcal{E}_{i,t}$, from the folksonomy records $a=(u,t,i) \in \mathcal{A}$. Set $\mathcal{E}_{u,t}$ reflects the tagging assignments between users and tags, while set $\mathcal{E}_{i,t}$ indicates the passive tagged interactions between items and tags. In the TRS scenario, each kind of edge $e$ from $\mathcal{E}_{u,t}$ and $\mathcal{E}_{i,t}$ is respectively defined as:
\begin{equation}
e_{u,t} =
\begin{cases}
1,&\text{if $(u,t) \in \mathcal{E}_{u,t}$} \\
0,&\text{otherwise}.
\end{cases}
\end{equation}
\begin{equation}
e_{i,t} =
\begin{cases}
1,&\text{if $(i,t) \in \mathcal{E}_{i,t}$} \\
0,&\text{otherwise}.
\end{cases}
\end{equation}
\begin{figure}[H]
\centering
\includegraphics[width=0.9\textwidth]{image/folksonomy.png}
\caption{Matrix form of Folksonomy}
\label{folksonomy}
\end{figure}
To make it easier to follow, the matrix form of the folksonomy is shown in Fig. \ref{folksonomy}. The FG is therefore constructed from the user-tag tagging and item-tag tagged interactions in the assignment set $\mathcal{A}$. Each set of edges can be regarded as the edge set of a bipartite graph: $G_{tagging} = (\mathcal{U}, \mathcal{T}, \mathcal{E}_{u,t})$ and $G_{tagged} = (\mathcal{I}, \mathcal{T}, \mathcal{E}_{i,t})$.
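A minimal sketch of constructing the two edge sets as binary matrices from folksonomy triplets follows; the toy records and set sizes are hypothetical, and the field order $(u, t, i)$ follows the definition above:

```python
import numpy as np

def build_folksonomy_graphs(records, n_users, n_items, n_tags):
    """Build binary user-tag and item-tag interaction matrices
    from folksonomy triplets (u, t, i)."""
    E_ut = np.zeros((n_users, n_tags), dtype=int)
    E_it = np.zeros((n_items, n_tags), dtype=int)
    for u, t, i in records:
        E_ut[u, t] = 1  # user u assigned tag t
        E_it[i, t] = 1  # item i was tagged with t
    return E_ut, E_it

# Hypothetical records (user, tag, item)
records = [(0, 0, 1), (0, 1, 0), (1, 0, 1)]
E_ut, E_it = build_folksonomy_graphs(records, n_users=2, n_items=2, n_tags=2)
```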
\section{Method}
\label{sec:method}
In this section, we first present the design of the Light Folksonomy Graph Collaborative Filtering (LFGCF) method, as illustrated in Fig.\ref{lagcf}, which is composed of two core modules: 1) the \textbf{Light Folksonomy Graph Convolutional Network}, a light yet effective model that keeps only the essential ingredients of GCN, constructing the FG from the folksonomy records $\mathcal{A}$ and applying GCN-based collaborative aggregation to capture higher-order semantic information on the tagging graph and the tagged graph, respectively; and 2) \textbf{TransR on Tags}, a regularization function named TransRT that bridges the tagging graph and the tagged graph through the preserved triplets. The joint optimization details and the training procedure for Top-K recommendation in TRS are discussed later in this section.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{image/lagcf_model.png}
\caption{Model structure of LFGCF}
\label{lagcf}
\end{figure}
\subsection{LFGCF}
Mainstream GNN models, such as GCN\cite{kipf_gcn_2017} and GAT\cite{peter_gan_2017}, were originally proposed for node or graph representation learning on attributed graphs. Specifically, each node has existing embeddings as input features, which are first transformed into a uniform shape by feature transformation and then aggregated with those of its neighbors on the graph; finally, the embeddings are updated by nonlinear activations. In contrast, from the view of the bipartite graph for collaborative filtering, each node (user or item) is only identified by a unique token without any concrete semantics as input features. Given semantic input features, performing multiple layers of feature transformation and nonlinear activation is key to the success of modern neural networks\cite{he_deep_2015}. With only token embeddings as input, however, these operations not only bring little benefit to the performance of recommendation tasks but also increase the difficulty of representation training. Inspired by the ideas of LightGCN\cite{he_lightgcn_2020}, we propose a light yet effective model by including only the essential ingredients of GCN for recommendation.
\subsubsection{Light Assign Graph Convolutional Network}
The fundamental idea of GCN is to learn node representations by aggregating neighbor features over the graph\cite{kipf_gcn_2017}. To be specific, the aggregated features of a node's neighbors form its new representation. The neighbor aggregation function can be universally defined as follows:
\begin{equation}
e_{u/i}^{(k+1)} = \mathcal{AGG}(e_{u/i}^{(k)}, \{e_t^{(k)}: t \in \mathcal{N}_{u/i}\})
\end{equation}
where $\mathcal{AGG}$ is the neighbor aggregation function, which takes the $k$-th layer representations of the target node and its neighbors into consideration. Besides, $e_{u/i} \in \mathbf{R}^d$ denotes the representations of user or item embeddings in $G_{tagging}$ and $G_{tagged}$, respectively, and $e_t \in \mathbf{R}^d$ denotes the representations of tags, whose parameters are shared between $G_{tagging}$ and $G_{tagged}$. Many researchers have proposed different aggregation functions, such as the weighted sum aggregator in GIN\cite{xu_gin_2018}, the LSTM aggregator in GraphSAGE\cite{hamilton_graphsage_2017}, and the bilinear interaction in BGNN\cite{zhu_bgnn_2020}. However, most of them tie feature transformation or nonlinear activation to the $\mathcal{AGG}$ function, such as the complex attention-based and CNN-based feature transformations in TGCN\cite{chen_tgcn_2020}. Although these designs perform well on node or graph classification tasks with semantic input features, they could be burdensome for Top-K recommendation tasks in TRS, which only have token embeddings as input features.
\subsubsection{Light Aggregation}
We apply the LFGCN to the FG, which is constructed from $G_{tagging}$ and $G_{tagged}$. In LFGCN, we focus on the essential ingredients of GCN for recommendation: we adopt a light weighted-sum aggregator and abandon the use of feature transformation and nonlinear activation. The light aggregation function is defined as follows:
\begin{eqnarray}
e_{u/i}^{(k+1)} &=& \sum_{t\in \mathcal{N}_{u/i}}\frac{1}{\sqrt{|\mathcal{N}_{u/i}|}\sqrt{|\mathcal{N}_{t}|}}e_t^{(k)}
\\
e_t^{(k+1)} &=& \sum_{u\in \mathcal{N}_t}\frac{1}{\sqrt{|\mathcal{N}_t|}\sqrt{|\mathcal{N}_{u/i}|}}e_{u/i}^{(k)}
\end{eqnarray}
The symmetric normalization term $\frac{1}{\sqrt{|\mathcal{N}_{u/i}|}\sqrt{|\mathcal{N}_t|}}$ follows the design of standard GCN\cite{kipf_gcn_2017}, which prevents the scale of the embeddings from increasing with graph convolution operations.
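Assuming the bipartite interactions are stored as a binary matrix, one light-aggregation layer can be sketched in matrix form as follows. This is a simplified illustration of Equations (5-6); guarding isolated nodes against division by zero is an implementation detail the equations leave implicit:

```python
import numpy as np

def light_aggregate(E, emb_src, emb_tag):
    """One layer of light aggregation on a bipartite graph.

    E       : binary user/item-tag matrix, shape (n_src, n_tags)
    emb_src : layer-k embeddings of users (or items), shape (n_src, d)
    emb_tag : layer-k embeddings of tags, shape (n_tags, d)
    Returns the layer-(k+1) embeddings of both sides.
    """
    deg_src = np.maximum(E.sum(axis=1), 1)   # |N_{u/i}|, floored at 1
    deg_tag = np.maximum(E.sum(axis=0), 1)   # |N_t|, floored at 1
    # symmetric normalization 1 / (sqrt|N_src| * sqrt|N_tag|)
    norm = E / np.sqrt(np.outer(deg_src, deg_tag))
    new_src = norm @ emb_tag     # aggregate tag embeddings into users/items
    new_tag = norm.T @ emb_src   # aggregate user/item embeddings into tags
    return new_src, new_tag

E = np.array([[1, 1], [1, 0]], dtype=float)  # toy 2 users x 2 tags graph
emb_u = np.ones((2, 3))
emb_t = np.ones((2, 3))
u1, t1 = light_aggregate(E, emb_u, emb_t)
```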
\subsubsection{Layer Combination and Model Prediction}
In our LFGCF, the only trainable parameters are the embeddings at the 0-th layer. Once they are initialized, the embeddings at higher layers can be computed via the LFGCN defined by Equations (5-6). After $K$ layers of LFGCN, we propose a layer combination function that combines the embeddings obtained at each layer to form the final representation of nodes:
\begin{equation}
e_{u/i/t} =\mathcal{COMB}(e_{u/i/t}^{(k)}:k \in K)
\end{equation}
where $\mathcal{COMB}$ is the layer combination function, which fuses all layers' representations of a specific type of node, and $K$ is the number of layers.
The embeddings at different layers capture different semantics in the FG. For example, the first layer enforces smoothness on users (items) and tags that have interactions, the second layer smooths users (items) that overlap in their interacted tags, and higher layers capture higher-order proximity\cite{wang_neural_2019}. Thus, we do not design a special component, and the layer combination function is further defined as follows:
\begin{equation}
e_u= \sum_{k=0}^{K}a_ke_u^{(k)},\quad e_i = \sum_{k=0}^{K}a_ke_i^{(k)}, \quad e_t = \sum_{k=0}^{K}a_ke_t^{(k)}
\end{equation}
where $a_k \geq 0$ denotes the importance of the $k$-th layer embedding in constituting the final embedding. It can be treated as a hyperparameter to be tuned manually, or as a model parameter optimized automatically. We set $a_k$ uniformly to $1/(K + 1)$, where $K$ is the number of layers. The reasons we designed the layer combination function to obtain final representations are two-fold: (1) as the number of layers increases, the embeddings become over-smoothed\cite{li_deeper_2018}, so using only the last layer is problematic; (2) combining embeddings at different layers with different weights captures the effect of GCN with self-connections\cite{he_lightgcn_2020}.
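With $a_k = 1/(K+1)$, the layer combination reduces to a uniform mean over the per-layer embeddings. A minimal sketch with hypothetical layer embeddings:

```python
import numpy as np

def combine_layers(layer_embs):
    """Final representation as a uniform mean over layers: a_k = 1/(K+1)."""
    return np.mean(np.stack(layer_embs, axis=0), axis=0)

# Hypothetical embeddings of 2 nodes (d = 2) at layers 0..2 (K = 2)
layers = [np.zeros((2, 2)), np.ones((2, 2)), 2 * np.ones((2, 2))]
e_final = combine_layers(layers)  # each entry is (0 + 1 + 2) / 3 = 1.0
```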
The model prediction is defined as the inner product of the user and item final representations:
\begin{equation}
\hat{y}_{ui} = e_u^{T}e_i
\end{equation}
which is used as the ranking score for recommendation generation.
\subsection{TransRT}
Knowledge graph embedding effectively parameterizes nodes as vector representations while preserving the graph's topology. Here we propose a new regularization function based on TransR\cite{lin_learning_nodate}, a widely used translation-based method for knowledge graph embedding. To be specific, it learns the embedding of each node by optimizing the translation principle $e_u + e_t \approx e_i$ if a triplet $(u,t,i) \in \mathcal{A}$. Herein, $e_u, e_i, e_t \in \mathbf{R}^d$ are the final embeddings of user $u$, item $i$ and tag $t$, respectively, and $e_u, e_i$ are the representations of $e_u$ and $e_i$ projected into the tag space. Hence, for a given folksonomy record $(u, t, i)$, its plausibility score is defined as follows:
\begin{equation}
g(u, t, i) = ||e_u + e_t - e_i||_2^2
\end{equation}
where $e_u, e_i, e_t$ lie in the same $d$-dimensional space, but not the same semantic space. A lower score $g(u, t, i)$ suggests that the folksonomy record is more likely to be reasonable, and vice versa.
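The plausibility score can be computed directly from the final embeddings; a small sketch with hypothetical 2-dimensional embeddings:

```python
import numpy as np

def plausibility(e_u, e_t, e_i):
    """Plausibility score g(u, t, i) = ||e_u + e_t - e_i||_2^2."""
    diff = e_u + e_t - e_i
    return float(diff @ diff)

e_u = np.array([1.0, 0.0])
e_t = np.array([0.0, 1.0])
e_i = np.array([1.0, 1.0])
print(plausibility(e_u, e_t, e_i))  # -> 0.0 (a perfectly plausible triplet)
```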
\subsection{Jointly Optimization Details}
The trainable parameters of LFGCF are only the embeddings of the 0-th layer, which correspond to the users, items, and tags in the FG. In other words, the model complexity is the same as that of standard matrix factorization (MF). To obtain better rankings, we employ the Bayesian Personalized Ranking (BPR) loss\cite{rendle_bpr_2009}, a pairwise loss that encourages the prediction of an observed record to be higher than that of its unobserved sampled counterpart:
\begin{equation}
\mathcal{L}_{LFGCN} = \sum_{(u,i,i') \in \mathcal{O}} -ln(\sigma(\hat{y}_{u,i} - \hat{y}_{u,i'}))
\end{equation}
where $\mathcal{O} = \{(u,i,i')|(u,i) \in \mathcal{A},(u,i') \notin \mathcal{A}\}$, in which the pair $(u,i)$ is observed in the folksonomy records and the pair $(u,i')$, with user $u$ and item $i'$ not observed together in any record, is uniformly sampled from the unobserved pairs.
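A minimal sketch of the BPR loss over a batch of sampled pairs; the scores are hypothetical, and the numerically stable identity $-\ln\sigma(x) = \ln(1+e^{-x})$ is used:

```python
import numpy as np

def bpr_loss(pos_scores, neg_scores):
    """BPR loss: -sum ln sigma(y_pos - y_neg) over sampled pairs."""
    x = pos_scores - neg_scores
    # -ln(sigma(x)) = ln(1 + e^{-x}), computed stably via log1p
    return float(np.sum(np.log1p(np.exp(-x))))

pos = np.array([2.0, 1.5])   # scores of observed (u, i) pairs
neg = np.array([0.5, 1.0])   # scores of sampled (u, i') pairs
loss = bpr_loss(pos, neg)    # small when positives outrank negatives
```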
To train TransRT, we minimize the plausibility score:
\begin{equation}
\mathcal{L}_{TransRT} = \alpha g(u, t, i)
\end{equation}
where $\alpha$ controls the strength of the knowledge graph regularization, and $g(u, t, i)$ is the plausibility score of the record $(u, t, i)$.
To effectively learn the parameters for recommendation and preserve the regularization relationship among folksonomy records, we integrate the Top-K recommendation task and TransRT in a joint learning framework. Finally, the total objective function of LFGCF is defined as follows:
\begin{equation}
\mathcal{L}_{LFGCF} = \mathcal{L}_{LFGCN} + \mathcal{L}_{TransRT} + \gamma ||\Theta||_2
\end{equation}
where $\gamma$ controls the strength of regularization. We employ Adam\cite{kingma_adam_2014} to optimize $\mathcal{L}_{LFGCN}$ and $\mathcal{L}_{TransRT}$ in a mini-batch manner. Besides, an early stopping strategy is applied to avoid overfitting during training.
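Assembling the total objective is then a matter of summing the three terms. A sketch with hypothetical loss values and parameters; here $||\Theta||_2$ is taken as the plain L2 norm, matching the formula above:

```python
import numpy as np

def joint_loss(l_bpr, g_score, params, alpha=1e-4, gamma=1e-5):
    """Total objective: L_LFGCN + alpha * g(u, t, i) + gamma * ||Theta||_2."""
    l2 = np.sqrt(sum(float(p @ p) for p in params))
    return l_bpr + alpha * g_score + gamma * l2

# Hypothetical BPR loss, plausibility score, and parameter vector
total = joint_loss(0.68, 2.0, [np.array([3.0, 4.0])])
# ||[3, 4]||_2 = 5, so total = 0.68 + 1e-4 * 2 + 1e-5 * 5
```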
\section{Experimental}
\label{sec:experiment}
In this section, we focus on the following questions:
\subparagraph{RQ1} Does LFGCF outperform other tag-aware recommendation models in the Top-K recommendation task?
\subparagraph{RQ2} Does it help improve the recommendation performance to remove some components in the GCN?
\subparagraph{RQ3} Does the implementation of TransRT solve the redundancy and ambiguity of tags?
\subsection{Setup}
In order to answer the above questions, a series of experiments is designed and carried out. We first show that LFGCF outperforms other state-of-the-art models, then elaborate on its expressiveness and explainability. Meanwhile, according to the investigation in \cite{dacrema_arewe_2019}, experiments on other models were carried out with different procedures, which makes them difficult to reproduce. Hence, all the experiments in this paper are conducted under the unified experimental framework RecBole\cite{zhao_recbole_2021} to make fair comparisons.
\subsubsection{Experiment Datasets}
Extensive experiments are carried out to evaluate the proposed LFGCF based on three real-world datasets: MovieLens, LastFM, and Delicious. They were all released in HetRec \cite{cantador_hetrec_2011}.
\begin{itemize}
\item [$\bullet$] \textbf{MovieLens} is a recommendation dataset that contains a list of tags assigned by users to various movies.
\item [$\bullet$] \textbf{LastFM} is a dataset from Last.FM music system with music listening information, tags assignments to artists, and user social networks.
\item [$\bullet$] \textbf{Delicious} is obtained from Del.icio.us and contains tagging information to web bookmarks.
\end{itemize}
Due to the sparsity of tagging information, some infrequently used tags exist. To rule out their negative influence, tags used fewer than 5 times in MovieLens and LastFM and fewer than 15 times in Delicious are removed\cite{zuo_tag-aware_2016}. Basic statistics of the datasets after preprocessing are summarized in Table \ref{data_sta}.
\begin{table}[h]
\centering
\caption{Datasets Statistics}
\begin{tabular}{c|c|c|c|c|c}
\hline
\textbf{Datasets} & \textbf{User} & \textbf{Item} & \textbf{Tag} & \textbf{Assignments} & \textbf{Sparsity} \\
\hline
Last.FM & 1808 & 12212 & 2305 & 175641 & 99.20\% \\
MovieLens & 1651 & 5381 & 1586 & 36728 & 99.59\% \\
Delicious & 1843 & 65877 & 3508 & 330744 & 99.73\% \\
\hline
\end{tabular}
\label{data_sta}
\end{table}
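The tag-frequency filtering described above can be sketched as follows; the records and threshold are toy values, while the real preprocessing would use the dataset-specific thresholds of 5 or 15:

```python
from collections import Counter

def filter_rare_tags(records, min_count):
    """Drop folksonomy records whose tag is used fewer than min_count times."""
    counts = Counter(t for _, t, _ in records)
    return [(u, t, i) for u, t, i in records if counts[t] >= min_count]

# Hypothetical records (user, tag, item): tag 1 appears only once
records = [(0, 0, 0), (1, 0, 1), (2, 1, 2)]
kept = filter_rare_tags(records, min_count=2)  # -> [(0, 0, 0), (1, 0, 1)]
```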
Since the data we use are not time-sequential, the training, validation, and test sets are randomly selected in the proportion \{0.6, 0.2, 0.2\}. The metrics reported in the remainder of the paper are all calculated on the test sets.
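A seeded random split in the \{0.6, 0.2, 0.2\} proportion can be sketched as follows (the seed value is arbitrary):

```python
import random

def split_records(records, ratios=(0.6, 0.2, 0.2), seed=42):
    """Randomly split records into train/validation/test sets."""
    records = list(records)
    rng = random.Random(seed)     # fixed seed for reproducibility
    rng.shuffle(records)
    n = len(records)
    n_train = int(ratios[0] * n)
    n_valid = int(ratios[1] * n)
    return (records[:n_train],
            records[n_train:n_train + n_valid],
            records[n_train + n_valid:])

train, valid, test = split_records(range(100))  # sizes: 60 / 20 / 20
```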
\subsection{Evaluation Metrics}
The performance of a TRS is directly related to the quality of its Top-K recommendations, which are evaluated by the following metrics: Recall@N, Precision@N, Hit Ratio@N, NDCG@N, and MRR@N\cite{jarvelin_cumulated_2002}. Empirically, higher metrics mean better performance. Each metric is elaborated below:
\begin{itemize}
\item [$\bullet$] \textbf{Recall@N} measures the fraction of the items the user actually interacts with that appear in the Top-N recommendations.
\begin{equation}
Recall@N = \frac{|R^N(u) \bigcap T(u)|}{|T(u)|}
\end{equation}
where $R^N(u)$ denotes the Top-K recommendations and $T(u)$ denotes the ground truth item set.
\item [$\bullet$] \textbf{Precision@N} measures the fraction of the Top-N recommendations that the user would interact with.
\begin{equation}
Precision@N = \frac{|R^N(u) \bigcap T(u)|}{N}
\end{equation}
\item [$\bullet$] \textbf{Hit Ratio@N} measures the percentage of users who interact with at least one of the recommended items.
\begin{equation}
HR@N = \frac{1}{|\mathcal{U}|}\sum_{u \in \mathcal{U}}I(|R^N(u) \bigcap T(u)|>0)
\end{equation}
where $I(\cdot)$ is the indicator function.
\item [$\bullet$] \textbf{NDCG@N} reflects the quality of ranking by distinguishing the contributions of the accurately recommended items.
\begin{equation}
NDCG@N = \frac{1}{|\mathcal{U}|}\sum_{u \in \mathcal{U}} \frac{\sum^N_{n=1} \frac{I(R^N_n(u) \in T(u))}{\log(n+1)}}{\sum^N_{n=1}\frac{1}{\log(n+1)}}
\end{equation}
where $R^N_n(u)$ means the $n^{th}$ item in Top-K recommendations $R^N(u)$.
\item [$\bullet$] \textbf{MRR@N} computes the reciprocal rank of the first relevant item found by a ranking algorithm.
\begin{equation}
MRR@N = \frac{1}{|\mathcal{U}|} \sum_{u \in \mathcal{U}}\frac{1}{rank_u^*}
\end{equation}
where $rank_u^*$ means the rank position of the first relevant item in recommendations for a user.
\end{itemize}
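For reference, Recall@N, Precision@N, and NDCG@N can be computed per user as sketched below. The recommendation list and ground truth are toy values, and the NDCG discount uses $\log_2$, one common choice consistent with the definition above:

```python
import numpy as np

def recall_precision_at_n(recommended, ground_truth, n):
    """Recall@N and Precision@N for a single user."""
    top_n = list(recommended)[:n]
    hits = len(set(top_n) & set(ground_truth))
    return hits / len(ground_truth), hits / n

def ndcg_at_n(recommended, ground_truth, n):
    """NDCG@N with log2 discounting; rank is 0-based inside enumerate."""
    top_n = list(recommended)[:n]
    dcg = sum(1.0 / np.log2(rank + 2)
              for rank, item in enumerate(top_n) if item in ground_truth)
    idcg = sum(1.0 / np.log2(rank + 2)
               for rank in range(min(len(ground_truth), n)))
    return dcg / idcg

rec, truth = [3, 1, 7, 5], {1, 5}
r, p = recall_precision_at_n(rec, truth, n=4)  # recall 1.0, precision 0.5
```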
\subsection{Baselines and Parameters}
To fairly evaluate the performance and effectiveness of LFGCF, we adopt some classic or state-of-the-art TRS models as baselines.
\begin{itemize}
\item [$\bullet$] \textbf{DSPR}\cite{xu_dspr_2016} leverages deep neural networks to learn tag-based features by mapping user and item profiles to deep latent space.
\item [$\bullet$] \textbf{CFA}\cite{zuo_tag-aware_2016} is a user-based collaborative filtering model which adopts a sparse autoencoder to extract latent features.
\item [$\bullet$] \textbf{BPR-T}\cite{li_tag-aware_2019} is a collaborative filtering model which incorporates tagging information and the Bayesian ranking optimization.
\item [$\bullet$] \textbf{TGCN}\cite{chen_tgcn_2020} is a collaborative filtering model which incorporates tagging information into GCN along with an attention mechanism.
\end{itemize}
In order to make impartial comparisons, each model is optimized with mini-batch Adam, with the batch size set to 2048. The learning rate of each model is searched in \{0.0001, 0.0005, 0.001, 0.005, 0.01, 0.05\} and the regularization weight in \{1e-5, 1e-4, 1e-3, 1e-2\}. For the autoencoder in CFA and the graph structures in TGCN and LFGCF, the number of layers is searched in \{1, 2, 3, 4\}. For BPR-T, its three regularization weights are searched in \{1e-5, 1e-4, 1e-3, 1e-2, 1e-1\}. Additionally, by further searching the coefficients $\alpha, \beta$ in \{1e-5, 1e-4\}, we analyze the sensitivity of TransRT. The hyperparameter experiments are conducted under 10 random seeds. Through these experiments, all models reach their optimal performance, ensuring the fairness of the following comparisons.
\subsection{Performance Analysis}
The experimental results in this section are all achieved with the optimal hyperparameters. The best performance is in boldface, the best baseline performance is underlined, and imp. denotes the improvement of LFGCF over the state-of-the-art baseline.
\newpage
\begin{center}
\begin{longtable}{c|c|c|c|c|c|c|c}
\caption{Performance Comparison}
\label{performance_comparison}\\
\hline
\textbf{Dataset} & \textbf{Metric} & \textbf{DSPR} & \textbf{CFA} & \textbf{BPRT} & \textbf{TGCN} & \textbf{LFGCF} & \textbf{imp}\\
\hline
MovieLens & \begin{tabular}[c]{@{}c@{}}
Recall@10 \\ Recall@20 \\ Precision@10 \\ Precision@20 \\ Hit@10 \\ Hit@20 \\ NDCG@10 \\ NDCG@20 \\ MRR@10 \\MRR@20
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.0755 \\ 0.1076 \\ 0.0326 \\ 0.0266 \\ 0.1838 \\ 0.2358 \\ 0.0683 \\ 0.0748 \\ 0.0949 \\ 0.0984
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.0549 \\ 0.1073 \\ 0.0180 \\ 0.0169 \\ 0.1269 \\ 0.2068 \\ 0.0397 \\ 0.0533 \\ 0.0549 \\ 0.0607
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
\underline{0.2032} \\ 0.2146 \\ 0.0353 \\ 0.0256 \\ 0.2773 \\ 0.3025 \\ \underline{0.1778} \\ \underline{0.1822} \\ \underline{0.1879} \\ \underline{0.1894}
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.1951 \\ \underline{0.2362} \\ \underline{0.0400} \\ \underline{0.0292} \\ \underline{0.2997} \\ \underline{0.3626} \\ 0.1639 \\ 0.1748 \\ 0.1790 \\ 0.1833
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
\textbf{0.2928} \\ \textbf{0.3035} \\ \textbf{0.0546} \\ \textbf{0.0345} \\ \textbf{0.3950} \\ \textbf{0.4118} \\ \textbf{0.2468} \\ \textbf{0.2489} \\ \textbf{0.2482} \\ \textbf{0.2492}
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
44.09\% \\ 28.49\% \\ 36.5\% \\ 18.15\% \\ 31.80\% \\ 13.57\% \\ 38.81\% \\ 36.61\% \\ 32.09\% \\ 31.57\%
\end{tabular} \\
\hline
Last.FM & \begin{tabular}[c]{@{}c@{}}
Recall@10 \\ Recall@20 \\ Precision @10 \\ Precision@20 \\ Hit@10 \\ Hit@20 \\ NDCG@10 \\ NDCG@20 \\ MRR@10 \\MRR@20
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.0982 \\ 0.1472 \\ 0.0971 \\ 0.0754 \\ 0.3883 \\ 0.4637 \\ 0.1373 \\ 0.1389 \\ 0.2204 \\ 0.2257
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.1086 \\ 0.1661 \\ 0.0727 \\ 0.0603 \\ 0.3596 \\ 0.4561 \\ 0.1220 \\ 0.1320 \\ 0.1876 \\ 0.1946
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.3357 \\ 0.4118 \\ 0.1371 \\ 0.1106 \\ 0.6649 \\ 0.7450 \\ 0.3317 \\ 0.3487 \\ 0.4080 \\ 0.4136
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
\underline{0.3745} \\ \underline{0.4444}\\ \underline{0.1691} \\ \underline{0.1325} \\ \underline{0.6971} \\ \underline{0.7649} \\ \underline{0.3920} \\ \underline{0.4044} \\ \underline{0.4708} \\ \underline{0.4755}
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
\textbf{0.4189} \\ \textbf{0.4980} \\ \textbf{0.1847} \\ \textbf{0.1475} \\ \textbf{0.7404} \\ \textbf{0.7889} \\ \textbf{0.4248} \\ \textbf{0.4442} \\ \textbf{0.5023} \\ \textbf{0.5058}
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
11.86\% \\ 12.06\% \\ 9.22\% \\ 11.32\% \\ 6.21\% \\ 3.13\% \\ 8.36\% \\ 9.84\% \\ 6.69\% \\ 6.37\%
\end{tabular} \\
\hline
Delicious & \begin{tabular}[c]{@{}c@{}}
Recall@10 \\ Recall@20 \\ Precision @10 \\ Precision@20 \\ Hit@10 \\ Hit@20 \\ NDCG@10 \\ NDCG@20 \\ MRR@10 \\MRR@20
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.0134 \\ 0.0195 \\ 0.0320 \\ 0.0262 \\ 0.1611 \\ 0.2284 \\ 0.0374 \\ 0.0335 \\ 0.0788 \\ 0.0836
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.0097 \\ 0.0182 \\ 0.0112 \\ 0.0102 \\ 0.0963 \\ 0.1526 \\ 0.0161 \\ 0.0173 \\ 0.0377 \\ 0.0415
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
0.1396 \\ 0.2595 \\ 0.2473 \\ 0.2736 \\ 0.8069 \\ \underline{0.9174} \\ 0.2485 \\ 0.2961 \\ 0.3195 \\ 0.3277
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
\underline{0.1769} \\ \underline{0.3073} \\ \underline{0.3586} \\ \underline{0.3514} \\ \underline{0.8835} \\ 0.9168 \\ \underline{0.3866} \\ \underline{0.4048} \\ \underline{0.5474} \\ \underline{0.5498}
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
\textbf{0.1956} \\ \textbf{0.3341} \\ \textbf{0.3716} \\ \textbf{0.3615} \\ \textbf{0.9032} \\ \textbf{0.9404} \\ \textbf{0.4015} \\ \textbf{0.4225} \\ \textbf{0.5546} \\ \textbf{0.5572}
\end{tabular}
& \begin{tabular}[c]{@{}l@{}}
10.57\% \\ 8.72\% \\ 3.63\% \\ 2.87\% \\ 2.23\% \\ 2.51\% \\ 3.85\% \\ 4.37\% \\ 1.32\% \\ 1.35\%
\end{tabular} \\
\hline
\end{longtable}
\end{center}
Table \ref{performance_comparison} shows the Top-K recommendation performance of LFGCF and the other baselines on the three datasets for $N = \{10, 20\}$. Fig. \ref{per_com} shows the performance in terms of Recall@N, Precision@N, and MRR@N as $N$ ranges from 5 to 30 with an interval of 5.
\begin{figure}[!htb]
\centering
\includegraphics[width=\textwidth]{image/performance_comparison.png}
\caption{Performance comparison on three datasets}
\label{per_com}
\end{figure}
The comparison between our model and the other baselines shows that LFGCF achieves state-of-the-art performance while successfully alleviating training difficulty with the help of its light GCN design. Among the baseline models, TGCN and BPR-T outperform DSPR and CFA by a large margin.
We attribute the superiority of LFGCF over the baselines to the following reasons: (1) a more effective GCN: the two LFGCNs used for feature learning show better performance than the deep neural networks in CFA and DSPR, and effective representation propagation and aggregation benefit LFGCF by lifting performance and reducing training difficulty; (2) TransRT assists in extracting more expressive user and item representations: recent years have witnessed efforts devoted to loss functions based on the BPR loss to better fit recommendation tasks, and our experiments indicate that the improved knowledge graph regularization TransRT allows more efficient use of collaborative tagging information.
\subsection{Ablation Studies}
To verify our summaries of the superior performance of LFGCF above, we conduct a series of ablation studies on LFGCF.
\subsubsection{Effect of LFGCN}
Several models in this research field incorporate user-item interaction information into personalized recommendation. According to the research of He et al. \cite{he_lightgcn_2020}, complex feature transformation and nonlinear activation not only bring difficulties to training but also compromise performance. To verify the effectiveness of the light GCN design in tag-aware recommendation systems, we implement NGCFT as a baseline model based on NGCF \cite{wang_neural_2019}. Apart from the complex feature aggregation and propagation operations inherited from NGCF, NGCFT is identical to LFGCF in the loss function, recommendation generation, and parameter settings.
\begin{center}
\begin{longtable}{c|c|c|c|c|c}
\caption{Performance of NGCFT and LFGCF on Movielens}
\label{ngcft_ml}\\
\hline
\textbf{Model} & \textbf{Recall@20} & \textbf{Precision@20} & \textbf{Hit@20} & \textbf{NDCG@20} & \textbf{MRR@20} \\
\hline
NGCFT & 0.2158 & 0.0227 & 0.3109 & 0.1383 & 0.1405 \\
LFGCF & 0.2767 & 0.0375 & 0.4176 & 0.2156 & 0.2199 \\
imp & 28.22\% & 65.20\% & 34.32\% & 55.89\% & 56.51\% \\
\hline
\end{longtable}
\end{center}
\begin{center}
\begin{longtable}{c|c|c|c|c|c}
\caption{Performance of NGCFT and LFGCF on LastFM}
\label{ngcft_lfm}\\
\hline
\textbf{Model} & \textbf{Recall@20} & \textbf{Precision@20} & \textbf{Hit@20} & \textbf{NDCG@20} & \textbf{MRR@20} \\
\hline
NGCFT & 0.5005 & 0.1454 & 0.7953 & 0.4543 & 0.5167 \\
LFGCF & 0.5069 & 0.1468 & 0.8035 & 0.4442 & 0.5058 \\
imp & 1.28\% & 0.96\% & 1.03\% & -2.23\% & -2.11\% \\
\hline
\end{longtable}
\end{center}
\begin{center}
\begin{longtable}{c|c|c|c|c|c}
\caption{Performance of NGCFT and LFGCF on Delicious}
\label{ngcft_de}\\
\hline
\textbf{Model} & \textbf{Recall@20} & \textbf{Precision@20} & \textbf{Hit@20} & \textbf{NDCG@20} & \textbf{MRR@20} \\
\hline
NGCFT & 0.3256 & 0.3574 & 0.9354 & 0.4150 & 0.5453 \\
LFGCF & 0.3341 & 0.3615 & 0.9404 & 0.4225 & 0.5572 \\
imp & 2.61\% & 1.15\% & 0.53\% & 1.81\% & 2.18\% \\
\hline
\end{longtable}
\end{center}
Generally, LFGCF outperforms NGCFT on the three datasets. On MovieLens, LFGCF outperforms NGCFT by a large margin. This suggests that the light graph structure effectively improves recommendation performance.
\subsubsection{Effect of TransRT}
The modified knowledge graph algorithm TransRT is implemented alongside the BPR loss to help train the model and obtain more expressive representations. Theoretically, TransRT allows better use of tagging information to improve the quality of user and item representations, which should lift recommendation performance. To verify the effectiveness of TransRT, we remove it, train LFGCF with the BPR loss only, and name this baseline model LFGCF-RT.
Apart from learning more expressive user and item representations, we believe that TransRT helps learn tag representations as well. To verify this deduction, we follow the visualization method in \cite{yu_are_2022}. The learned tag embeddings are first reduced to two dimensions using t-SNE \cite{van_Visualizing_2008}. The 2-dimensional embeddings are then normalized and mapped onto the unit hypersphere (i.e., a circle with radius 1). For presentation, the density estimates of the angles of the points on the hypersphere are visualized in Fig.\ref{density_analysis_lfm}.
\begin{figure}[H]
\centering
\subfigure[LastFM]{
\label{density_lastfm}
\includegraphics[width=0.45\textwidth]{image/density_analysis_LastFM.png}
}
\hspace{0in}
\subfigure[Delicious]{
\label{density_delicious}
\includegraphics[width=0.45\textwidth]{image/density_analysis_Delicious.png}
}
\caption{Density analysis of LFGCF and LFGCF-RT}
\label{density_analysis_lfm}
\end{figure}
As can be seen from Fig.\ref{density_analysis_lfm}, the tag representations trained by LFGCF are much smoother than those learned by the baseline model without TransRT. This indicates that TransRT helps promote recommendation performance by smoothing the learned representations. To make a clear comparison between LFGCF and LFGCF-RT, their performances are shown in Tables \ref{transrt_ml}-\ref{transrt_de}.
\newpage
\begin{center}
\begin{longtable}{c|c|c|c|c|c}
\caption{Performance of LFGCF-RT and LFGCF on Movielens}
\label{transrt_ml}\\
\hline
\textbf{Model} & \textbf{Recall@20} & \textbf{Precision@20} & \textbf{Hit@20} & \textbf{NDCG@20} & \textbf{MRR@20} \\
\hline
LFGCF-RT & 0.2739 & 0.0311 & 0.3782 & 0.2109 & 0.2096 \\
LFGCF & 0.2767 & 0.0375 & 0.4176 & 0.2156 & 0.2199 \\
imp & 1.02\% & 20.58\% & 10.42\% & 2.23\% & 4.91\% \\
\hline
\end{longtable}
\end{center}
\begin{center}
\begin{longtable}{c|c|c|c|c|c}
\caption{Performance of LFGCF-RT and LFGCF on LastFM}
\label{transrt_lfm}\\
\hline
\textbf{Model} & \textbf{Recall@20} & \textbf{Precision@20} & \textbf{Hit@20} & \textbf{NDCG@20} & \textbf{MRR@20} \\
\hline
LFGCF-RT & 0.4637 & 0.1358 & 0.7708 & 0.3995 & 0.4565 \\
LFGCF & 0.5069 & 0.1468 & 0.8035 & 0.4442 & 0.5058 \\
imp & 9.32\% & 8.10\% & 4.24\% & 11.19\% & 10.80\% \\
\hline
\end{longtable}
\end{center}
\begin{center}
\begin{longtable}{c|c|c|c|c|c}
\caption{Performance of LFGCF-RT and LFGCF on Delicious}
\label{transrt_de}\\
\hline
\textbf{Model} & \textbf{Recall@20} & \textbf{Precision@20} & \textbf{Hit@20} & \textbf{NDCG@20} & \textbf{MRR@20} \\
\hline
LFGCF-RT & 0.3161 & 0.3512 & 0.9223 & 0.4068 & 0.5516 \\
LFGCF & 0.3254 & 0.3526 & 0.9305 & 0.4122 & 0.5558 \\
imp & 2.94\% & 0.40\% & 0.89\% & 1.33\% & 0.76\% \\
\hline
\end{longtable}
\end{center}
Tables \ref{transrt_ml}-\ref{transrt_de} show the recommendation performance of LFGCF and LFGCF-RT. LFGCF-RT underperforms LFGCF on all three datasets, indicating that removing TransRT hurts LFGCF; it compromises performance to the same level as BPR-T and TGCN. We conclude that the light convolutional graph is effective in learning user and item representations, but the implicit feedback is not fully exploited without TransRT. Implementing TransRT leverages the information pattern inside user tagging behavior to make the representations more expressive.
\section{Conclusions and Future Work}
\label{sec:conclusions}
In this work, we explore folksonomy records with collaborative information in the FG for tag-aware recommendation. We devised a new method, LFGCF, which leverages a light yet effective model to capture higher-order information in the FG, together with a regularization function that bridges users and items within folksonomy triplets. Its core is the GCN-based model LFGCN, which consists of only two essential components: light aggregation and information updating. In light aggregation, we remove feature transformation and nonlinear activation, which are burdensome. In information updating, we construct the final representations of users and items as a weighted sum of their embeddings across layers. To adequately model folksonomy records while preserving topology information, we proposed the TransRT regularization function and performed joint learning of users' subjective preferences and items' characteristics. We argued that this specific light design and regularization for TRS alleviate the sparsity, redundancy, and ambiguity issues in folksonomy records. Extensive hyperparameter experiments and ablation studies on three real-world datasets demonstrate the effectiveness and rationality of LFGCF.
This work explores the specific design of GNNs in TRS. We believe the insights of LFGCF are inspirational for future developments in TRS. With the prevalence of user tagging behavior in real applications, GNN-based models are becoming increasingly crucial in TRS; by explicitly exploiting the interactions among entities in the system, they have an advantage over context-based supervised learning schemes\cite{he_neural_2017} that model the interactions only implicitly. Besides folksonomy, much other structural information can be extracted from real-world recommendation systems, such as social networks and knowledge graphs. For example, by integrating entity relationships with FG, we can capture explicit information rather than only collaborative information. In future work, we would like to explore self-supervised learning for TRS, which is a promising research direction.
\section{Introduction}
Quantum phenomena have been observed in numerous experiments on microscopic scales. However, on macroscopic scales it is difficult to find quantum effects, such as quantum superpositions. Physicists have been investigating the origin of the quantum-to-classical transition for decades. The proposed explanations fall into two categories: coarsened measurement and decoherence\cite{lab1,lab2,lab3,lab4,lab5,lab6,lab7,lab8}.
A common viewpoint is that decoherence plays a prominent role in the quantum-to-classical transition. There are two routes to explain decoherence: in one route, the system interacts with external environments; the other invokes wave-function collapse\cite{lab9,lab10,lab11}, which does not require external environments.
The latter route is often inspired by general relativity and makes a fundamental modification of quantum theory. Recently, Pikovski et al.\cite{lab12} demonstrated the existence of decoherence induced by gravitational time dilation without any modification of quantum mechanics. This work motivates further study of decoherence due to time dilation.
Spontaneous emission between two atomic levels inevitably occurs. We study decoherence due to time dilation during spontaneous emission. Without spontaneous emission, decoherence does not occur in our model from time dilation alone. As is well known, spontaneous emission can itself induce decoherence. We find that gravitational time dilation can reduce or increase the decoherence due to spontaneous emission in different reference frames (different choices of the zero of potential energy). This is attributed to the fact that in different reference frames, the distinguishability of the photon emitted from different positions is different. The direction of the emitted light also influences the coherence of quantum superpositions for a fixed direction of the gravitational field.
In order to make the decoherence due to time dilation stronger than that due to spontaneous emission, time-delayed feedback control\cite{lab121,lab1211} is used.
The rest of the paper is arranged as follows. In section II, we present the model for the decoherence of quantum superpositions due to time dilation during spontaneous emission. The coherence of the particle's position in different reference frames is explored in section III. In section IV, we discuss the influence of different directions of the emitted light. In section V, a time-delayed feedback scheme is utilized to increase the decoherence induced by gravitational time dilation. We deliver a conclusion and outlook in section VI.
\section{Model}
Firstly, let us briefly review gravitational time dilation, which causes clocks to run slower near a massive object. Consider a particle of rest mass $m$ with an arbitrary internal Hamiltonian $H_0$, which interacts with the gravitational potential $\Phi(x)$. The total Hamiltonian $H$ is described by\cite{lab12,lab122}
\begin{eqnarray}
H=H_{ext}+H_0[1+\Phi(x)/c^2-p^2/(2m^2c^2)],
\end{eqnarray}
where $H_{ext}$ is the external Hamiltonian. For a free particle, $H_{ext}=mc^2+p^2/2m+m\Phi(x)$. In Eq.(1), the last term, $-H_0p^2/(2m^2c^2)$, is simply the velocity-dependent special relativistic time dilation. The coupling with position, $H_0\Phi(x)/c^2$, represents the gravitational time dilation. For slowly moving particles, $p\approx0$, the gravitational time dilation is the main source of time dilation, and it is not canceled by the velocity-dependent special relativistic contribution.
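For orientation, the dimensionless dilation factor $gx/c^2$ entering the Hamiltonian is extremely small for laboratory-scale separations. A quick numerical estimate (the chosen separations are illustrative):

```python
g = 9.81      # gravitational acceleration on Earth, m/s^2
c = 2.998e8   # speed of light, m/s

# Dimensionless correction g*x/c^2 for a few illustrative separations x
for x in (1e-6, 1e-3, 1.0):
    print(f"x = {x:g} m  ->  g*x/c^2 = {g * x / c**2:.3e}")
```

Even for a one-meter separation the correction is of order $10^{-16}$, which is why the parameter $\Delta$ introduced below is so small in practice.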
We consider that an atom with two levels is in superposition of two vertically distinct positions $x_1$ and $x_2$.
The atom is coupled to a single unidirectional light field, as depicted in Fig. 1.
\begin{figure}[h]
\includegraphics[scale=0.30]{1.eps}
\caption{\label{fig.1} Decoherence of an atom induced by gravitational time dilation in the presence of spontaneous emission. The dotted line represents a homogeneous gravitational field $\Phi(x)\approx g x$, where $g =9.81\,m/s^2$ is the gravitational acceleration on Earth. Initially, the atom is in the superposition state $1/\sqrt{2}(|x_1\rangle+|x_2\rangle)$. The photon is emitted along the solid line (the x-direction), which is opposite to the direction of the gravitational field. }
\end{figure}
The whole system interacts with a homogeneous gravitational field $\Phi(x)\approx g x$ which generates the gravitational time dilation. The total system-field Hamiltonian is described by ($\hbar=1$)
\begin{eqnarray}
H=[mc^2+m g x_1+w_1(1+g x_1/c^2)|1\rangle\langle1|+w_2(1+g x_1/c^2)|2\rangle\langle2|]|x_1\rangle\langle x_1|\nonumber
\\+\sqrt{\kappa_1/2\pi}(1+g x_1/c^2)|x_1\rangle\langle x_1|\int dw [a b^\dagger(w)\exp(-i w x_1/c)+H.c]\nonumber\\
+[mc^2+m g x_2+ w_1(1+g x_2/c^2)|1\rangle\langle1|+ w_2(1+g x_2/c^2)|2\rangle\langle2|]|x_2\rangle\langle x_2|\nonumber\\
+\sqrt{\kappa_2/2\pi}(1+g x_2/c^2)|x_2\rangle\langle x_2|\int dw [a b^\dagger(w)\exp(-i w x_2/c)+H.c]+\int dw w b^\dagger(w)b(w),
\end{eqnarray}
where $w_1$ and $w_2$ ($w_1>w_2$) are the eigenvalues of the atomic levels 1 and 2, respectively, and the operator $a=|2\rangle\langle1|$.
$\kappa_1$ and $\kappa_2$ denote the coupling constants at the positions $x_1$ and $x_2$, respectively. Without extra control, the two coupling constants are the same: $\kappa_1=\kappa_2=\kappa$. The last term in Eq.(2) represents the free-field Hamiltonian, and the field modes $b(w)$ satisfy $[b(w),b^\dagger(w')]=\delta(w-w')$.
Using the Pauli operator $\sigma_z=|1\rangle\langle1|-|2\rangle\langle2|$ to simplify Eq.(2), we obtain the new form of the system-field Hamiltonian
\begin{eqnarray}
H=[E_1+w_0/2(1+g x_1/c^2)\sigma_z]|x_1\rangle\langle x_1|+\sqrt{\kappa/2\pi}(1+g x_1/c^2)|x_1\rangle\langle x_1|\int dw [a b^\dagger(w)\exp(-i w x_1/c)+H.c]\nonumber\\
+[E_2+w_0/2(1+g x_2/c^2)\sigma_z]|x_2\rangle\langle x_2|+\sqrt{\kappa/2\pi}(1+g x_2/c^2)|x_2\rangle\langle x_2|\int dw [a b^\dagger(w)\exp(-i w x_2/c)+H.c]\nonumber\\+\int dw w b^\dagger(w)b(w),
\end{eqnarray}
where $E_i=mc^2+\frac{(w_1+w_2)}{2}(1+g x_i/c^2)$ for $i=1,2$ and $w_0=w_1-w_2$.
We consider the initial field in the vacuum state and the atom in the state $|1\rangle\frac{|x_1\rangle+|x_2\rangle}{\sqrt{2}}$.
The atom then spontaneously emits a photon. Since there is only a single excitation shared between the system and the field\cite{lab13}, the system state at any time $t$ can be obtained analytically; see the Appendix.
\section{Coherence of particle's position}
The quantum coherence of the particle's position state can be quantified by the interferometric visibility $V(t)$, as given in Eq.(27) in the Appendix. When the time $t$ satisfies $\lambda_1\kappa t\gg1$ and $\lambda_2\kappa t\gg1$, the excited-state amplitudes obey $C_1\approx0$ and $C_2\approx0$. Then, we arrive at
\begin{eqnarray}
V=\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}
\exp[-\lambda_1^2\kappa\tau],
\end{eqnarray}
where $\lambda_i=1+g x_i/c^2$ for $i=1,2$.
From the above equation, we can see that the decoherence comes from the spontaneous emission (present even when $\lambda_1=\lambda_2=1$) and from the gravitational time dilation. Spontaneous emission generates decoherence because the photon is emitted from different positions, which leads to a phase difference $w\tau$, where $w$ denotes the frequency of the photon.
We also find that the coherence depends on the reference frame: different zero points of the potential energy (different values of $\lambda_1$) give different coherence strengths. This counterintuitive result arises because the phase difference varies between frames, so the distinguishability of the photon emitted from the two positions also varies. Lowering the zero potential point (increasing the value of $\lambda_1$) increases the phase difference because of time dilation. For a fixed position difference $\Delta=g(x_2-x_1)/c^2$, the quantum coherence can be rewritten as
\begin{eqnarray}
V(\lambda_1,\Delta)=\frac{2\kappa\lambda_1(\lambda_1+\Delta)}{\sqrt{[\kappa(\lambda_1^2+(\lambda_1+\Delta)^2)]^2+(w_0\Delta)^2}}
\exp[-\lambda_1^2\kappa\tau].
\end{eqnarray}
There is an optimal value of $\lambda_1$ that gives the maximal quantum coherence, as shown in Fig. 2.
\begin{figure}[h]
\includegraphics[scale=1]{2.eps}
\caption{\label{fig.2}Quantum coherence $V$ as a function of $\lambda_1$. The quantum coherence depends on the reference frame. The parameters are given by: $w_0/\kappa=10^6$, $\Delta=10^{-6}$, $\kappa \tau=10^{-2}$.}
\end{figure}
In an optimal reference frame, one can observe the maximal coherence: $V$ is close to 1.
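This optimum can be located by a direct numerical scan of Eq. (5); the parameter values below mirror the Fig. 2 caption in $\kappa=1$ units, and the scan grid is illustrative:

```python
import math

w0, Delta, tau = 1e6, 1e-6, 1e-2  # w0/kappa, Delta, kappa*tau from the Fig. 2 caption

def V(lam1):
    """Interferometric visibility of Eq. (5), in kappa = 1 units."""
    lam2 = lam1 + Delta
    pref = 2 * lam1 * lam2 / math.sqrt((lam1**2 + lam2**2)**2 + (w0 * Delta)**2)
    return pref * math.exp(-lam1**2 * tau)

# Crude scan over the reference-frame parameter lambda_1
grid = [0.5 + 0.001 * i for i in range(5000)]
v_max, lam_opt = max((V(l), l) for l in grid)
print(lam_opt, v_max)  # the optimum lies away from lambda_1 = 1, with V close to 1
```

The scan confirms that the maximal visibility exceeds the value at $\lambda_1=1$ and approaches unity, consistent with Fig. 2.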
In order to observe the decoherence induced by gravitational time dilation, the decoherence effect from time dilation needs to be stronger than that from spontaneous emission alone ($\lambda_1=\lambda_2=1$):
\begin{eqnarray}
\frac{2\kappa\lambda_1(\lambda_1+\Delta)}{\sqrt{[\kappa(\lambda_1^2+(\lambda_1+\Delta)^2)]^2+(w_0\Delta)^2}}\exp[-\lambda_1^2\kappa\tau]
\ll\exp[-\kappa\tau].
\end{eqnarray}
Noting that the value of $\lambda_2-\lambda_1$ is generally small in experiments, the condition $\exp[(\lambda_1^2-1)\kappa\tau]\gg1$ is necessary for observing decoherence mainly induced by gravitational time dilation.
When one changes the direction of the emitted photon, the quantum coherence changes accordingly,
\begin{eqnarray}
V'=\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}
\exp[-\lambda_2^2\kappa\tau].
\end{eqnarray}
This is because the phase difference changes with the direction of the emitted photon, becoming $-w\tau$. Different directions of the emitted photon in a fixed gravitational field generate different quantum coherences $V$.
Then, we consider the general three-dimensional case: the photon can be emitted in any direction, as shown in Fig. 3.
\begin{figure}[h]
\includegraphics[scale=0.40]{3.eps}
\caption{\label{fig.3}Diagram showing that the photon can be spontaneously emitted in any direction. Here $\theta$ denotes the angle between the direction of the photon and the x-direction, which can range from 0 to $\pi$.}
\end{figure}
We obtain the quantum coherence of the particle's position state as follows:
\begin{eqnarray}
&V_3=\frac{3\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}|
\int_0^{\pi/2} d\theta\sin\theta\cos^2\theta
\exp[(iw_0\lambda_1-\lambda_1^2\kappa)\tau\cos\theta]+\exp[-(iw_0\lambda_2+\lambda_2^2\kappa)\tau\cos\theta]|,\\
&=\frac{3\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}|[-2 + \exp(k_1) (2 - 2 k_1 + k_1^2)]/k_1^3+[-2 + \exp(k_2) (2 - 2 k_2 + k_2^2)]/k_2^3|,\\
&\textmd{in which},\nonumber\\
&k_j=[(-1)^jiw_0\lambda_j-\lambda_j^2\kappa]\tau, \ \textmd{for} \ j=1,2,
\end{eqnarray}
where the coupling strength between the atom and the light field changes with the direction of the emitted photon, becoming $\sqrt{\kappa/2}\cos\theta$\cite{lab14}.
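The angular integral leading from Eq. (8) to Eq. (9) uses the identity $\int_0^{\pi/2} d\theta\,\sin\theta\cos^2\theta\, e^{k\cos\theta}=[-2+e^{k}(2-2k+k^2)]/k^3$. A quick numerical cross-check for real $k$ (the paper's $k_j$ are complex, but the antiderivative identity is the same):

```python
import math

def closed_form(k):
    """[-2 + e^k (2 - 2k + k^2)] / k^3, the angular integral in Eq. (9)."""
    return (-2.0 + math.exp(k) * (2.0 - 2.0*k + k**2)) / k**3

def numeric(k, n=100000):
    """Midpoint rule for int_0^{pi/2} sin(t) cos(t)^2 exp(k cos(t)) dt."""
    h = (math.pi / 2.0) / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        total += math.sin(t) * math.cos(t)**2 * math.exp(k * math.cos(t))
    return total * h

for k in (0.3, 1.0, 2.5):
    print(k, numeric(k), closed_form(k))
```

The agreement follows from the substitution $u=\cos\theta$, which reduces the integral to $\int_0^1 u^2 e^{ku}\,du$.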
For $w_0\lambda_j\tau\ll1$ and $\lambda_j^2\kappa\tau\ll1$, $V_3\approx\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}(1-\lambda_1^2\kappa\tau-\lambda_2^2\kappa\tau)<V'<V.$ This means that, in this regime, the quantum coherence in general three-dimensional space is smaller than in the one-dimensional case with a fixed direction.
For $w_0\lambda_j\tau\gg1$ and $\lambda_j^2\kappa\tau\gg1$, $V_3\approx\frac{2\kappa\lambda_1\lambda_2}{\sqrt{[\kappa(\lambda_1^2+\lambda_2^2)]^2+[w_0(\lambda_2-\lambda_1)]^2}}|\cos3\varphi|(3/[w_0^2\lambda_1^2+(\lambda_1^2\kappa\tau)^2]^{3/2}+3/[w_0^2\lambda_2^2+(\lambda_2^2\kappa\tau)^2]^{3/2})\geq V,$ with $\cos\varphi=\lambda_1^2\kappa\tau/\sqrt{w_0^2\lambda_1^2+(\lambda_1^2\kappa\tau)^2}$. This means that, in this regime, the quantum coherence in general three-dimensional space is larger than in the one-dimensional case with a fixed direction.
The root of the difference $V_3\neq V$ is that the phase difference changes from $w\tau$ to $w\tau\cos\theta$.
\section{Time-delayed feedback}
When one chooses the center of the two positions as the zero potential point, the interferometric visibility reads
\begin{eqnarray}
V_c=\frac{2\kappa(1-\Delta/2)(1+\Delta/2)}{\sqrt{[\kappa((1-\Delta/2)^2+(1+\Delta/2)^2)]^2+(w_0\Delta)^2}}
\exp[-(1-\Delta/2)^2\kappa\tau].
\end{eqnarray}
In order to observe the decoherence from gravitational time dilation rather than from spontaneous emission, it is necessary to satisfy the condition $V_c\ll\exp[-\kappa\tau]$. However, the value of $\Delta$ is very small in experiments, so this condition is hard to meet. We can utilize time-delayed feedback \cite{lab15,lab16} to increase the decoherence from gravitational time dilation.
\begin{figure}[h]
\includegraphics[scale=0.30]{4.eps}
\caption{\label{fig.4}Diagram of the time-delayed feedback. The center of the two positions is chosen as the zero potential point, so in the new reference frame the position $x_1$ ($x_2$) is transformed to $\Delta c^2/2g$ ($-\Delta c^2/2g$). Here, we consider only a photon emitted along the fixed x-direction; the scheme can easily be generalized to three-dimensional space. At $r+\Delta c^2/2g$, a mirror is placed to reflect the light field, so that a time-delayed light field is fed back into the system-field interaction. }
\end{figure}
As shown in Fig. 4, the light field is reflected by a mirror. The whole system-field Hamiltonian can be described by
\begin{eqnarray}
&H=\sqrt{\kappa/2\pi}(1+g x_1/c^2)|x_1\rangle\langle x_1|\int dw \{a b^\dagger(w)2\exp[-i w(r+\Delta c^2/2g)]\cos(2w r)+H.c\}\nonumber\\
&+\sqrt{\kappa/2\pi}(1+g x_2/c^2)|x_2\rangle\langle x_2|\int dw\{a b^\dagger(w)2\exp[-i w(r+\Delta c^2/2g)]\cos[ w (2r+2\Delta c^2/g)/c]+H.c\}\nonumber\\
&+\int dw w b^\dagger(w)b(w)+[E_1+w_0/2(1+g x_1/c^2)\sigma_z]|x_1\rangle\langle x_1|+[E_2+w_0/2(1+g x_2/c^2)\sigma_z]|x_2\rangle\langle x_2|,
\end{eqnarray}
Following the approach in the Appendix, we can obtain the quantum coherence at time $t\gg1$. With the feedback, the spontaneous emission is suppressed due to interference. The total system-field wave function can still be described by Eq.(13) in the Appendix.
When the conditions $w_0(1+\Delta/2)2r/c=n\pi$ and $w_0(1-\Delta/2)(2r+2\Delta c^2/g)/c\neq m\pi$ hold, for $t\gg1$ the amplitudes are $|C_2|^2=\exp[-n\pi\kappa(1+\Delta/2)/w_0]$ and $|C_1|\simeq0$, where $n,m=1,2,3,\ldots$. When $w_0\gg\kappa$ and $n=1$, $|C_2|^2\simeq1$. We thus find that the quantum coherence $V_c\simeq0$. Without gravitational time dilation, the quantum coherence would be much larger than 0. Hence, the time-delayed feedback scheme ensures that the decoherence induced by gravitational time dilation is far larger than that from spontaneous emission.
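As a quick numerical illustration of the last step (the ratio $\kappa/w_0$ below is an assumed value in the $w_0\gg\kappa$ regime):

```python
import math

kappa_over_w0, Delta, n = 1e-6, 1e-6, 1  # illustrative values with w0 >> kappa

# Surviving amplitude under the feedback condition, |C_2|^2 = exp[-n*pi*kappa*(1+Delta/2)/w0]
C2_sq = math.exp(-n * math.pi * kappa_over_w0 * (1 + Delta / 2))
print(C2_sq)  # deviates from 1 only at the 1e-6 level
```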
\section{Conclusion and outlook}
We explore the decoherence of an atom's position superposition induced by gravitational time dilation in the presence of spontaneous emission. Since the phase difference of the photon emitted from the two positions differs between reference frames, the quantum coherence of the superposition of positions depends on the reference frame. One can therefore choose a proper reference frame to observe the decoherence from gravitational time dilation. It is worth mentioning that the direction of the emitted photon also influences the quantum coherence, so the case of a fixed emission direction differs from the case of emission in an arbitrary direction. When one chooses the center of the two positions as the zero potential point, it is difficult to make the decoherence induced by gravitational time dilation far larger than that from spontaneous emission. Time-delayed feedback can be used to increase the decoherence from time dilation under proper conditions.
In this article, we only discuss the decoherence of an atom with two energy levels induced by gravitational time dilation. It would be interesting to study the decoherence of many particles with many energy levels induced by time dilation with spontaneous emission; we believe this would increase the decoherence effect from gravitational time dilation. Considering an extra drive is a further research direction. In that situation, since the single-excitation conservation between system and field no longer holds, the problem becomes more complex and rich.
\section{Acknowledgement}
This work was supported by the National Natural Science Foundation of China under Grant No. 11375168.
\section{Introduction and summary of results}
In \cite{PabloPablo} we showed that, up to cubic order in curvature, the most general dimension-independent gravity theory constructed from arbitrary contractions of the metric and the Riemann tensor whose linearized spectrum coincides with the one of Einstein gravity can be written as a linear combination of the Lovelock terms \cite{Lovelock1,Lovelock2} plus a new cubic contribution $\mathcal{P}$, defined as
\begin{equation}
\mathcal{P}=12 R_{a\ b}^{\ c \ d}R_{c\ d}^{\ e \ f}R_{e\ f}^{\ a \ b}+R_{ab}^{cd}R_{cd}^{ef}R_{ef}^{ab}-12R_{abcd}R^{ac}R^{bd}+8R_{a}^{b}R_{b}^{c}R_{c}^{a}\, ,
\end{equation}
which we coined \emph{Einsteinian cubic gravity} (ECG) term. Remarkably, as opposed to the quadratic and cubic Lovelock terms --- which are respectively topological and trivial in that case --- this new term is dynamical in four dimensions. Hence, the $D=4$ ECG action can be written as
\begin{equation}\label{ECGaction}
S=\frac{1}{16 \pi G}\int_{\mathcal{M}}d^4x\sqrt{|g|}\left[-2\Lambda_0+R- G^2 \lambda \mathcal{P}\right]\, ,
\end{equation}
where $\Lambda_0$ is the cosmological constant, $G$ is the Newton constant and $\lambda$ is a dimensionless coupling constant which we will assume to be positive throughout the paper, {\it i.e.,}\ $\lambda\geq 0$. Also in \cite{PabloPablo}, we anticipated that \req{ECGaction} admits static and spherically-symmetric black-hole-like solutions characterized by a single function $f(r)$, {\it i.e.,}\ metrics of the form
\begin{equation}
\label{ansatz0}
ds^2= -f( r ) dt^2+\frac{dr^2}{f( r )}+r^2 d\Omega_{(2)}^2\, ,
\end{equation}
where $d\Omega^2_{(2)}=d\theta^2+\sin^2\theta d\phi^2$ is the metric of the round sphere. In this paper we will show that this is indeed the case. In particular, we will construct new black-hole solutions of the ECG theory \req{ECGaction} which generalize the usual Schwarzschild black hole --- and reduce to it as $\lambda\rightarrow 0$. We will show that for general values of $\lambda$, $f(r)$ is determined by a non-linear second-order differential equation --- see \req{fequation} below. We shall not be able to solve this equation analytically. However, we will be able to compute analytic expressions --- valid for general values of $\lambda$ --- for the Hawking temperature $T$, the Wald entropy $\mathsf{S}$ and the Abbott-Deser mass $M$ of the black hole as functions of the new horizon radius $r_h$. Remarkably enough, we are able to show that our black hole solutions exactly satisfy the first-law of black hole mechanics
\begin{equation}\label{1st}
dM=T d\mathsf{S}\, .
\end{equation}
We stress that our calculations of $T$, $\mathsf{S}$ and $M$ are independent from each other, so the fact that \req{1st} holds is a quite non-trivial fact. Note that the first law was proven to hold for small perturbations of stationary black hole solutions of general higher-order gravities in \cite{Wald:1993nt}, so this is an important check on our solutions. To the best of our knowledge, the solutions presented here constitute the first examples of four-dimensional generalizations of the Schwarzschild and Reissner-Nordstr\"om black holes to any higher-order gravity which are determined by a single function and for which the first law is proven to hold analytically --- see the Note added at the end of this section though.
In the asymptotically flat case, given a value of the mass $M$, all solutions with $\lambda >0$ have horizon radius and Wald entropy $\mathsf{S}$ larger than the Schwarzschild ones. Also, some of them are thermodynamically stable --- {\it i.e.,}\ they have positive specific heat. Interestingly, the temperature of the new solutions, which is always smaller than Schwarzschild's, is bounded from above by
\begin{equation}
T_{\rm{max}}=\frac{1}{12\pi G^{1/2} \lambda^{1/4}}\, ,
\end{equation}
which is reached for $M_{\rm max}=16/27 G^{-1/2}\lambda^{1/4}$, and it vanishes both as $M\rightarrow 0$ and as $M\rightarrow +\infty$.
These results for uncharged asymptotically flat solutions are extended in section \ref{charged} to incorporate a non-vanishing cosmological constant and electric charge --- {\it i.e.,}\ we add a Maxwell term to the gravitational action \req{ECGaction}. The corresponding solutions, which are again characterized by a single function, generalize the usual Reissner-Nordstr\"om-(Anti) de Sitter (RN-(A)dS) black hole and we show that they also satisfy the first law, which in that case reads
\begin{equation}
dM= T d\mathsf{S}+\Phi dq,
\end{equation}
where $\Phi=q/(4\pi r)$ is the electrostatic potential.
The structure of the paper goes as follows. In section \ref{ecgse} we show that \req{ansatz0} is a valid ansatz for ECG, and determine the equation satisfied by $f(r)$. In section \ref{asy}, we focus on the uncharged asymptotically flat case. There, we provide asymptotic expressions for $f(r)$ and show that it can describe a black hole with horizon radius $r_h$. Then, we obtain exact expressions for the mass and the surface gravity as functions of $r_h$ and $\lambda$. We also plot numerically the $f(r)$ of our solution for various values of $\lambda$. In section \ref{them} we compute the Wald entropy and the Hawking temperature of the solution, unveiling some interesting differences with respect to the usual Schwarzschild case --- {\it e.g.,}\ some solutions possess positive specific heat. Finally, we prove that the first law holds exactly for our solutions. As explained above, these results are extended to the charged asymptotically (A)dS case in section \ref{charged}. We conclude in section \ref{discussion}, where we also argue that ECG is in fact the most general four-dimensional theory of gravity which allows for non-trivial single-function generalizations of the Schwarzschild- and RN-(A)dS solutions which reduce to the usual Einstein gravity ones when the corresponding higher-order couplings are set to zero. We also explain that, as opposed to the $D=4$ case, five-dimensional ECG does not admit single-function solutions. Appendix \ref{remnants} contains some speculations on the possibility that our solutions could describe long-lived microscopic black hole remnants without the ECG coupling affecting the usual macroscopic physics of general relativity.
\textbf{Note added}: As we were writing this paper, \cite{Hennigar:2016gkm} appeared in the arXiv.
This work contains a major overlap with our results in sections \ref{ecgse}, \ref{asy} and \ref{them}. There are some differences though: in \cite{Hennigar:2016gkm}, the fact that $f(r)$ satisfies a second-order differential equation has been overlooked (it is not obvious from the field equations of the theory that the corresponding third-order equation is a total-derivative). As a consequence, the authors do not find an expression for the mass $M$ as a function of the horizon radius. Instead, they use the first law to determine $M$ using the values of $T$ and $\mathsf{S}$ (which agree with ours). Here, we are able to compute $M$ independently, which allows us to verify that the first law holds. Of course, the final result is the same. On the other hand, the charged case is not considered in \cite{Hennigar:2016gkm}.
\section{Spherically symmetric solutions of ECG}\label{ecgse}
In this section we will show that \req{ECGaction} admits generalizations of the Schwarzschild-(A)dS black hole characterized by a single function whose expression can be determined by solving a second-order differential equation.
Without loss of generality, we assume the following ansatz for our static and spherically symmetric solution \footnote{The extension to hyperbolic and planar horizons is straightforward.}
\begin{equation}
\label{ansatz1}
ds^2= -N^2( r )f( r ) dt^2+\frac{dr^2}{f( r )}+r^2 d\Omega_{(2)}^2\, ,
\end{equation}
Now, a possible route would entail evaluating the field equations of \req{ECGaction} on the above ansatz and finding the corresponding equations for $N(r)$ and $f(r)$. Here we will use a different method inspired by the construction of \emph{Quasi-topological gravity} black holes in \cite{Quasi}. This consists in considering the action as a functional of these functions, $S[N,f]$. In fact, using the chain rule it is possible to show that the following equations hold for any higher-derivative Lagrangian
\begin{equation}
\frac{1}{4\pi r^2}\frac{\delta S[N,f]}{\delta N}= \frac{ 2 \mathcal{E}_{tt}}{f N^2}\, , \quad
\frac{1}{4\pi r^2} \frac{\delta S[N,f]}{\delta f}= \frac{\mathcal{E}_{tt}}{N f^2 }+N \mathcal{E}_{rr}\, .
\end{equation}
Here, $\mathcal{E}_{tt}$ and $\mathcal{E}_{rr}$ are the $tt$ and $rr$ components of the corresponding field equations\footnote{Note that $\mathcal{E}_{a b}=\frac{1}{\sqrt{|g|}}\frac{\delta S}{\delta g^{ab}}
$}. Therefore, one finds
\begin{equation}
\frac{\delta S[N,f]}{\delta N}= \frac{\delta S[N,f]}{\delta f}=0 \Leftrightarrow \mathcal{E}_{tt}=\mathcal{E}_{rr}=0\, ,
\end{equation}
{\it i.e.,}\ imposing the variations of $S[N,f]$ to vanish is equivalent to imposing those components of the field equations to be solved. Finally, the Bianchi identity $\nabla^{a}\mathcal{E}_{ab}=0$ ensures that the angular equations also hold whenever $ \mathcal{E}_{tt}=\mathcal{E}_{rr}=0$ are satisfied. This shows that the equations for $f(r)$ and $N(r)$ can be obtained from the action functional $S[N,f]$ without needing to compute the full non-linear equations explicitly.
Interestingly, for four-dimensional ECG \req{ECGaction}, the action functional $S[N,f]$ can be written as
\begin{equation}
\label{Nfaction}
\begin{aligned}
S[N,f]=\frac{1}{8\pi G}\int dr &N(r) \cdot \bigg\{-\frac{1}{3}\Lambda_0r^3-(f-1)r-G^2 \lambda \bigg[4f'^3+12\frac{f'^2}{r}-24f(f-1)\frac{f'}{r^2}-12ff''\left(f'-\frac{2(f-1)}{r}\right)\bigg]\bigg\}'\\
&+\ldots\, ,
\end{aligned}
\end{equation}
where $()^{\prime}=d()/dr$ and where the ellipsis denote terms involving at least two derivatives of $N$, like $N'^2/N, N''N'/N$, $N'^3/N^2$, and so on. Now we can get the equations of $N$ and $f$ by computing the variation of this action with respect to them. Since $N$ is multiplied by a total derivative in (\ref{Nfaction}), when we compute $\delta_f S$ we get an expression which is homogeneous in derivatives of $N$. Hence, $\delta_f S=0$ can be solved by imposing
\begin{equation}
N'(r)=0\, .
\end{equation}
This is enough to show that ECG admits solutions characterized by a single function $f(r)$. From now on we set $N=1$. On the other hand, the equation $\delta_N S=0$ yields, after setting $N=1$ and integrating once, the following equation for $f$:
\begin{equation}
\label{fequation}
\begin{aligned}
-\frac{1}{3}\Lambda_0r^3-(f-1)r-G^2 \lambda \bigg[4f'^3
+12\frac{f'^2}{r}-24f(f-1)\frac{f'}{r^2}
-12ff''\left(f'-\frac{2(f-1)}{r}\right)\bigg]=r_0\, ,
\end{aligned}
\end{equation}
where $r_0$ is an integration constant. Let us stress here two important points. In general, in higher-order gravities one cannot set $N$ to a constant, so the solutions are characterized by two different functions, see {\it e.g.,}\ \cite{Lu:2015cqa,Lu:2015psa}.
Moreover, the equations of motion of higher-order gravity include in general up to fourth-order derivatives. Here we have been able to reduce the problem to a second-order equation for a single function.
\section{Asymptotically flat black hole}\label{asy}
In this section we construct black-hole solutions of \req{ECGaction} using \req{fequation}. For simplicity, we focus on the asymptotically flat case, {\it i.e.,}\ we set $\Lambda_0=0$. Unfortunately, we have not been able to solve \req{fequation} analytically. However, we can make several expansions and approximations which will help us understand the nature of the solution and will enable us to show that the first-law is exactly satisfied for our solutions. At the end of the section we also plot some numerical solutions of \req{fequation} corresponding to generalized Schwarzschild black holes for various values of the coupling $\lambda$.
\subsection{Asymptotic behavior}
Since (\ref{fequation}) is a second-order differential equation, it possesses a two-parameter family of solutions.
We will require the solution to be asymptotically flat, so that
\begin{equation}\label{bdy}
\lim_{r\rightarrow+\infty} f( r )=1\, .
\end{equation}
Then, the question is: does this condition completely fix the solution? In order to answer it, we can make an expansion around $r\rightarrow +\infty$. We assume that, for $r\rightarrow +\infty$ the solution can be expressed as Schwarzschild plus a small correction, {\it i.e.,}\
\begin{equation}
f( r )=1-\frac{r_0}{r}+f_1( r )\, ,
\end{equation}
where we assume that $| f_1( r )|\ll1$. Inserting this into (\ref{fequation}) and expanding linearly in $f_1$ we obtain a differential equation for the correction:
\begin{equation}
\begin{aligned}
-r^6 f_1-G^2\lambda (108 r_0^2-92 r_0^3/r)+12 G^2\lambda r_0\Big[(6r-14 r_0)f_1
+3r(r_0-2r)f_1'+3r^2(r-r_0)f_1''\Big]=0\,.
\end{aligned}
\end{equation}
The general solution of the above equation is given by the sum of the homogeneous solution plus a particular solution, $f_1=f_{1,p}+f_{1,h}$. To first order in $\lambda$, a particular solution is
\begin{equation}\label{party}
f_{1,p}( r )=G^2 \lambda\left(-\frac{108 r_0^2}{r^6}+\frac{92 r_0^3}{r^7}\right)+\mathcal{O}\left(\lambda^2,\frac{r_0^4}{r^8}\right)\, ,
\end{equation}
where terms with higher orders in $\lambda$ decay faster as $r\rightarrow+\infty$, so the first term provides a good approximation. The homogeneous equation can in turn be written as:
\begin{equation}
f_{1,h}''-\gamma ( r ) f_{1,h}' -\omega^2( r )f_{1,h}=0\, ,\quad \text{where} \quad \omega^2( r )=\frac{r^4}{36G^2\lambda r_0(r-r_0)}-\frac{6r-14 r_0}{3r^2(r-r_0)}, \quad \gamma( r )=\frac{2r-r_0}{r(r-r_0)}\, .
\end{equation}
Now, when $r$ is large we get $\omega'/\omega^2\ll1$ and $\gamma\ll\omega$. In this situation, the solution of the previous equation is approximately $f_{1,h}\approx A \exp\left[\int dr \omega( r )\right]+B\exp\left[-\int dr \omega( r )\right]$, for arbitrary constants $A$ and $B$. In particular, when $r\rightarrow+\infty$, we get $\omega^2=r^3/(36G^2\lambda r_0)+\mathcal{O}(r^2)$, and the solution is given very approximately by
\begin{equation}
f_{1,h}( r )\simeq A \exp\left(\frac{r^{5/2}}{15G\sqrt{\lambda r_0}}\right)+B \exp\left(-\frac{r^{5/2}}{15G\sqrt{\lambda r_0}}\right)\, .
\label{homogeneous}
\end{equation}
Now, since we want the metric to be asymptotically flat, we must set $A=0$. This leaves us with a 1-parameter family of solutions which are asymptotically flat. Hence, in the large $r$ limit, the solution is given by
\begin{equation}
f( r )\simeq 1-\frac{r_0}{r}-G^2\lambda\left(\frac{108 r_0^2}{r^6}-\frac{92 r_0^3}{r^7}\right)+\mathcal{O}\left(\lambda^2,\frac{r_0^4}{r^8}\right)+B \exp\left(-\frac{r^{5/2}}{15G \sqrt{\lambda r_0}}\right)\, ,
\label{asymptotic}
\end{equation}
for some constant $B$. Observe that all the leading asymptotic corrections to the Schwarzschild metric come from the solution \req{party}, while the contributions from the homogeneous equation are extremely subleading. Hence, the term proportional to $B$ above can be discarded from the asymptotic expansion \req{asymptotic}.
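As a quick numerical cross-check (ours, not part of the original derivation), one can verify that the particular solution \req{party} solves the linearized equation up to $\mathcal{O}(\lambda^2)$ terms: the $-r^6 f_{1,p}$ piece cancels the inhomogeneity exactly, so the residual of the full equation scales as $\lambda^2$. A minimal sketch in Python, in units $G=r_0=1$:

```python
# Check that f_{1,p}(r) = lam*(-108/r^6 + 92/r^7)  (units G = r0 = 1)
# solves the linearized equation to first order in lam: the residual of
# the full equation must be O(lam^2).

def residual(r, lam):
    f   = lam * (-108.0 / r**6 + 92.0 / r**7)      # f_{1,p}
    fp  = lam * ( 648.0 / r**7 - 644.0 / r**8)     # f_{1,p}'
    fpp = lam * (-4536.0 / r**8 + 5152.0 / r**9)   # f_{1,p}''
    return (-r**6 * f - lam * (108.0 - 92.0 / r)
            + 12.0 * lam * ((6.0 * r - 14.0) * f
                            + 3.0 * r * (1.0 - 2.0 * r) * fp
                            + 3.0 * r**2 * (r - 1.0) * fpp))

r = 5.0
# residual/lam^2 should be independent of lam if the residual is O(lam^2)
c1 = residual(r, 1e-4) / 1e-8
c2 = residual(r, 1e-6) / 1e-12
print(abs(c1 - c2) / abs(c1))   # tiny: residual is indeed O(lam^2)
```

The printed relative difference is at the level of rounding error, confirming that the leading asymptotic corrections in \req{asymptotic} come entirely from \req{party}.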
Note that had we considered a theory with massive modes (of mass $m$) in the linearized spectrum, we would have expected the corresponding asymptotic expansion to contain decaying exponential terms $\sim e^{-m r}$. By construction, ECG does not propagate massive modes linearly on the vacuum \cite{PabloPablo} and, consistently, those terms do not appear in \req{asymptotic}. However, since ECG is a higher-derivative theory, nothing prevents additional pseudo-modes from appearing at the non-linear level, or on backgrounds different from the vacuum. This seems to be the case here, because we can associate the decaying exponential in (\ref{asymptotic}) with a pseudo-mode of mass $m^2=\omega^2( r )$. Indeed, the decay is faster-than-exponential because the mass of this pseudo-mode goes to infinity as $r\rightarrow\infty$. Hence we see that, even though ECG can propagate additional pseudo-modes in backgrounds different from the vacuum, these modes are rapidly killed in the asymptotic limit, so they only live in a bounded region. The same behavior is expected to occur for any other higher-order gravity which only propagates a massless graviton on the vacuum --- {\it i.e.,}\ those belonging to the \emph{Einstein-like} class in the classification of \cite{Bueno:2016ypa}.
From the asymptotic expansion above we can obtain the mass of the black hole.
In the case of an asymptotically flat space-time, the Abbott-Deser mass formula is not changed by higher-order curvature terms, so we can apply the usual recipe \cite{Abbott:1981ff,Deser:2002jk}. In particular, the total mass can be found in our case through
\begin{equation}
M=\frac{1}{2G}\lim_{r\rightarrow+\infty}r(g_{rr}( r )-1)\, .
\end{equation}
Now, as we said before, higher-order corrections in $\lambda$ will decay with higher powers of $r$ as $r\rightarrow +\infty$, so the leading term, $-r_0/r$, will not be affected by these corrections. Therefore, this formula yields
\begin{equation}\label{massro}
r_0=2GM\, ,
\end{equation}
as usual. Naturally, using this and \req{asymptotic} we can write the final expression for the asymptotic expansion of $f(r)$ --- for small values of $\lambda$ --- as
\begin{equation}
f( r\rightarrow \infty )= 1-\frac{2GM}{r}-G^2\lambda\left(\frac{108 (2GM)^2}{r^6}-\frac{92 (2GM)^3}{r^7}\right)\, .
\label{asymptotic2}
\end{equation}
\subsection{Horizon}\label{hori}
For a metric of the form (\ref{ansatz0}), a horizon is a surface $r=r_h$ at which $f(r_h)=0$ and $f'(r_h)\ge 0$. In particular, the function must be differentiable at $r_h$. Note also that the surface gravity on the horizon for this kind of metric is just $\kappa_g=f'(r_h)/2$. Assuming that the function $f$ is completely regular at the horizon and that it can be Taylor-expanded around it \footnote{As we see from (\ref{fequation}), at the horizon --- {\it i.e.,}\ when $f=0$ --- the term which multiplies $f''$ vanishes. This can give rise to non-differentiability on the horizon of some of the solutions, so imposing that the horizon is regular is indeed a strong restriction.}, we can write
\begin{equation}\label{Hexpansion}
f( r )=2 \kappa_g (r-r_h)+\sum_{n=2}^{\infty} a_n(r-r_h)^n\, ,
\end{equation}
where we have made explicit the first term, and where $a_n=f^{(n)}(r_h)/n!$.
The idea is to plug this expansion into (\ref{fequation}) and solve order by order in $(r-r_h)^n$. Up to quadratic order we get
\begin{align}
&r_h-2GM-16G^2\lambda\kappa_g^2\left(2\kappa_g+\frac{3}{r_h}\right)
+\left(1-2\kappa_g r_h-48G^2\lambda\frac{\kappa_g^2}{r_h^2}\right)(r-r_h)\\ \notag &+
\Bigg[G^2\lambda \Big(144 a_3 \kappa _g\left(\kappa_g+\frac{1}{r_h}\right)+48 a_2^2 \kappa _g-\frac{144 a_2 \kappa _g}{r_h^2}
-\frac{192 a_2 \kappa _g^2}{r_h}+\frac{144 \kappa _g^2}{r_h^3}+\frac{192 \kappa _g^3}{r_h^2}\Big)
-a_2r_h-2 \kappa _g\Bigg](r-r_h)^2\\ \notag &+\mathcal{O}((r-r_h)^3)=0,
\end{align}
where we have already taken into account \req{massro}. Now this equation must hold at every order in $(r-r_h)$. We see that the first two equations determine the horizon radius $r_h$ and the surface gravity $\kappa_g$ as a function of the mass,
\begin{eqnarray}
\label{rh}
r_h-2GM-16G^2\lambda\kappa_g^2\left(2\kappa_g+\frac{3}{r_h}\right)&=&0\, ,\\
\label{kg1}
1-2\kappa_g r_h-48G^2\lambda\frac{\kappa_g^2}{r_h^2}&=&0\, .
\end{eqnarray}
It is important to stress that the above expressions are exact. The fact that we can obtain exact relations among the mass, the horizon radius and the surface gravity is remarkable and not shared by other higher-order gravities.
Once $r_h$ and $\kappa_g$ are determined, from the third equation we get a relation between $a_2$ and $a_3$. Since it is linear in $a_3$, we can easily determine $a_3$ as a function of $a_2$. In the fourth equation $a_4$ appears linearly, so we can obtain it as a function of the previous coefficients, and so on. In general, from the $n-$th equation we can determine the coefficient $a_n$. Hence, we get a family of solutions with only one free parameter, $a_2$. We learn two things from this: one is that the theory \req{ECGaction} admits black hole solutions with regular horizons. Another is that the regular horizon condition reduces the number of solutions from a two-parameter family to a one-parameter one.
The expressions (\ref{rh}) and (\ref{kg1}) allow us to determine $\kappa_g$ and $r_h$ as functions of the mass. However, it is easier to obtain the relations $\kappa_g(r_h)$ and $M(r_h)$. We get \footnote{In fact there is another solution with a minus sign in front of the square root, but that choice does not reproduce the correct limit when $\lambda=0$. }
\begin{equation}
\kappa_g=\frac{1}{r_h(1+\sqrt{1+48G^2\lambda/r_h^4})}\, , \quad
\label{surfacegravity}
\frac{2GM}{r_h}=1-\frac{16G^2\lambda}{r_h^4}\frac{\Big(5+3\sqrt{1+48G^2\lambda/r_h^4}\Big)}{\Big(1+\sqrt{1+48G^2\lambda/r_h^4}\Big)^{3}} \, .
\end{equation}
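These closed-form relations can be verified directly: substituting \req{surfacegravity} back into \req{rh} and \req{kg1}, both residuals vanish identically for any $r_h$ and $\lambda$. A short numerical check (ours, in units $G=1$):

```python
import numpy as np

# Verify that kappa_g(r_h) and 2GM(r_h) from eq. (surfacegravity)
# satisfy the exact horizon relations (rh) and (kg1).  Units G = 1.
G, lam = 1.0, 1.0
for rh in [0.5, 1.0, 2.0, 5.0, 10.0]:
    s = np.sqrt(1.0 + 48.0 * G**2 * lam / rh**4)
    kg = 1.0 / (rh * (1.0 + s))                                  # surface gravity
    twoGM = rh * (1.0 - 16.0 * G**2 * lam / rh**4
                  * (5.0 + 3.0 * s) / (1.0 + s)**3)              # 2GM
    res1 = rh - twoGM - 16.0 * G**2 * lam * kg**2 * (2.0 * kg + 3.0 / rh)
    res2 = 1.0 - 2.0 * kg * rh - 48.0 * G**2 * lam * kg**2 / rh**2
    print(f"r_h = {rh:5.2f}: |res1| = {abs(res1):.2e}, |res2| = {abs(res2):.2e}")
```

Both residuals are zero to machine precision for every $r_h$.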
In Fig. \ref{fig2} we plot $M(r_h)$. It is apparent that, for a given mass, all solutions have a larger horizon radius than Schwarzschild, {\it i.e.,}\
\begin{equation}
r_{h}(\lambda)\geq r_h(0) \, \quad \text{for all} \quad \lambda \geq 0\, .
\end{equation}
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.56]{rh2.pdf}
\caption{We plot $2G\bar{M}$ as a function of $\bar{r}_h$ for the ECG solution (blue) with $\lambda>0$ and the usual Schwarzschild solution (red) where, for the sake of clarity, we defined $\bar{M}=M/(G^2 \lambda)^{1/4}$ and $\bar{r}_h=r_h/(G^2 \lambda)^{1/4}$. Note that the blue plot is valid for all values of $\lambda> 0$.}
\labell{fig2}
\end{figure}
\subsection{Numerical solution}
In the previous two subsections we have argued that we can construct a one-parameter family of asymptotically flat solutions and another one-parameter family of solutions which possesses a regular horizon. Thus, we expect that there exists one solution which connects both. A numerical computation shows that this is in fact the case. In order to perform the numerical computation, we start from the solution at the horizon, with $f(r_h)=0$, $f'(r_h)=2 \kappa_g$, with both $r_h$ and $\kappa_g$ determined in terms of the mass and $\lambda$ through (\ref{rh}) and (\ref{kg1}), and then we choose the following value for the free parameter, \begin{equation}
a_2=f''(r_h)/2\, .
\end{equation}
This must in fact be chosen very carefully so that we do not excite the growing exponential mode in (\ref{homogeneous}). For that value of $a_2$, we are able to construct numerically the solution up to a sufficiently large $r$ for which the solution becomes very similar to Schwarzschild. Then, for larger $r$ the approximation (\ref{asymptotic}) holds and we can use it to continue the solution all the way to $r=+\infty$. Also, since the horizon is regular, we can in practice continue the solution to the inner region $f<0$. The result for various values of $G^2 \lambda$ is presented in Fig. \ref{fig1}.
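For concreteness, the shooting procedure just described can be sketched as follows. This is our own illustrative reconstruction (the integrator, tolerances and bisection strategy are our choices, not taken from the original computation): we solve (\ref{fequation}) for $f''$, integrate outward from $r_h+\epsilon$ with the near-horizon Taylor data, and bisect on $a_2$ so as not to excite the growing mode in (\ref{homogeneous}). In units $G=\lambda=1$ and for $r_h=2$, so that $r_0=2GM=32/27$:

```python
import numpy as np
from scipy.integrate import solve_ivp

G = lam = 1.0
rh = 2.0
s  = np.sqrt(1.0 + 48.0 * G**2 * lam / rh**4)   # = 2 for r_h = 2
kg = 1.0 / (rh * (1.0 + s))                     # surface gravity, eq. (surfacegravity)
r0 = rh - 16.0 * G**2 * lam * kg**2 * (2.0 * kg + 3.0 / rh)   # r0 = 2GM = 32/27

def rhs(r, y):
    # f'' solved from the uncharged field equation -(f-1)r - G^2 lam [...] = r0
    f, fp = y
    A = 4.0 * fp**3 + 12.0 * fp**2 / r - 24.0 * f * (f - 1.0) * fp / r**2
    fpp = (r0 + (f - 1.0) * r + G**2 * lam * A) / (
        12.0 * G**2 * lam * f * (fp - 2.0 * (f - 1.0) / r))
    return [fp, fpp]

def endpoint(a2, rmax=8.0, eps=1e-4):
    """Integrate from r_h + eps; return (r, f) at the end or at blow-up."""
    y0 = [2.0 * kg * eps + a2 * eps**2, 2.0 * kg + 2.0 * a2 * eps]
    blow = lambda r, y: abs(y[0]) - 10.0       # growing mode excited
    blow.terminal = True
    hit0 = lambda r, y: y[0] - 1e-9            # f collapses back to zero
    hit0.terminal, hit0.direction = True, -1
    sol = solve_ivp(rhs, (rh + eps, rmax), y0, rtol=1e-10, atol=1e-12,
                    events=[blow, hit0])
    return sol.t[-1], sol.y[0, -1]

def overshoot(a2):
    # Sign of f - f_Schw at the end of integration classifies the branch.
    r_end, f_end = endpoint(a2)
    return f_end - (1.0 - r0 / r_end)

# bracket the critical a2 on a coarse grid, then bisect (the growing mode
# ~ exp(r^{5/2}/15) makes the endpoint extremely sensitive to a2)
grid = np.linspace(-2.0, 2.0, 81)
signs = [overshoot(a) > 0 for a in grid]
i = next(j for j in range(80) if signs[j] != signs[j + 1])
lo, hi = grid[i], grid[i + 1]
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if (overshoot(mid) > 0) == signs[i + 1]:
        hi = mid
    else:
        lo = mid
a2 = 0.5 * (lo + hi)
r_end, f_end = endpoint(a2)
print(f"a2 = {a2:.6f}, f(8) = {f_end:.6f}, Schwarzschild: {1 - r0 / 8:.6f}")
```

The resulting $f(8)$ should agree with the Schwarzschild value up to the small asymptotic corrections in (\ref{asymptotic}).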
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.71]{fig1.pdf}
\caption{Profile of $f( r )$ for several values of $\lambda$. The red line corresponds to the usual Schwarzschild blackening factor, $\lambda=0$. }
\labell{fig1}
\end{figure}
As we can see, the solutions are very similar to Schwarzschild when $r$ is large enough, but they differ notably as $r$ approaches the horizon. Note again that the horizon radius of all solutions with $\lambda>0$ is greater than the Schwarzschild value $r_h=2GM$. Remarkably, $f(r)$ does not diverge at the origin $r=0$. In fact, it can be shown that the behavior at $r=0$ is approximately $f( r )=a+b r^2+\mathcal{O}(r^3)$, for some constants $a$ and $b$. Although there is no metric divergence, the curvature is still divergent at the origin. The divergence is nevertheless softened with respect to the Schwarzschild case. In particular, the Kretschmann scalar reads
\begin{equation}\label{krets}
R_{abcd}R^{abcd}=\frac{4(f(0)-1)^2}{r^4}+\mathcal{O}\left(\frac{1}{r^2}\right)\, ,
\end{equation}
where $f(0)$ is the constant value which $f(r)$ takes at $r=0$ when $\lambda>0$. Note that the limit $\lambda \rightarrow 0$ is not continuous in the above expression because $f(0)$ diverges for the Schwarzschild solution. In that case, one finds the usual result $R_{abcd}R^{abcd}=48G^2M^2/r^6$ instead. Note also that if we had $f(0)=1$, the singularity would be completely removed --- the $\mathcal{O}(1/r^2)$ term would also vanish. Although this never happens for the ECG black holes, since we always have $f(0)<0$, it could be possible that the addition of even higher-order terms, --- {\it e.g.,}\ quartic ones --- would completely remove the singularity.
\section{Black Hole Thermodynamics}\label{them}
Our analysis in section \ref{hori} allowed us to obtain the horizon properties $r_h$ and $\kappa_g$ as functions of the black-hole mass $M$ and $\lambda$. As we stressed, those results are exact, {\it i.e.,}\ fully non-perturbative in $\lambda$. Let us now study some thermodynamic properties \cite{Hawking:1974sw,Bardeen:1973gs,Bekenstein:1973ur,Bekenstein:1974ax} associated with these solutions. According to Wald's formula \cite{Wald:1993nt,Iyer:1994ys,Jacobson:1993vj}, the entropy of a black hole in a higher-order derivative theory of gravity is given by
\begin{equation}
\mathsf{S}=-2\pi \int_{H} d^2x\sqrt{h} \frac{\delta \mathcal{L}}{\delta R_{abcd}}\epsilon_{ab}\epsilon_{cd}\, ,
\end{equation}
where $\frac{\delta }{\delta R_{abcd}}$ is the Euler-Lagrange derivative, $\mathcal{L}$ is the gravitational Lagrangian, $h$ is the determinant of the induced metric on the horizon and $\epsilon_{ab}$ is the binormal of the horizon, normalized as $\epsilon_{ab}\epsilon^{ab}=-2$.
Let us now apply this formula to our theory \req{ECGaction}. At this point it is convenient for our purposes to turn on an explicit Gauss-Bonnet term in the action, {\it i.e.,}\ $\mathcal{L} \rightarrow \mathcal{L}+ \frac{\alpha}{16 \pi} \mathcal{X}_4$, where $\mathcal{X}_4=R^2-4R_{ab}R^{ab}+R_{abcd}R^{abcd}$. This term has no effect in our discussion so far, but we will make use of it below. The result for $\mathsf{S}$ reads
\begin{equation}
\begin{aligned}
\mathsf{S}=\frac{1}{4G} \int_{H} d^2x\sqrt{h}\Big[&1+2\alpha G R_{(2)}+G^2\lambda\Big(36 R_{b\ d}^{\ e\ f}R_{aecf}+3R_{ab}^{\ \ ef}R_{cdef}\\
&-12 R_{ac}R_{db}-24R^{ef}R_{ebfc}g_{bd}+24g_{bd}R_{ce}R^{e}_{\ a}\Big)\epsilon^{ab}\epsilon^{cd}\Big]\, ,
\end{aligned}
\end{equation}
where $R_{(2)}$ is the Ricci scalar of the induced metric in the horizon, coming from the Gauss-Bonnet term. Naturally, this term yields a topological contribution.
For a metric of the form (\ref{ansatz0}) and with a spherical horizon placed at $r=r_h$, one finds
\begin{equation}
\label{GeneralEntropy}
\mathsf{S}=\frac{\pi r_h^2}{G}\left[1-48G^2 \lambda \frac{\kappa_g^2}{r_h^2}\left(\frac{2}{\kappa_g r_h}+1\right)\right]+2\pi\alpha\, ,
\end{equation}
where we have taken into account that $f'(r_h)=2\kappa_g$. This expression for the entropy is in principle valid for any static and spherically symmetric black hole solving the equations of motion of \req{ECGaction}. For the black hole we have constructed, the surface gravity is given by (\ref{surfacegravity}), so we can write the entropy in terms of the radius as
\begin{equation}
\mathsf{S}=\frac{\pi r_h^2}{G}\left[1- \frac{48G^2\lambda}{r_h^4}\frac{\left(3+2\sqrt{1+48G^2\lambda/r_h^4}\right)}{\left(1+\sqrt{1+48G^2\lambda/r_h^4}\right)^{2}}\right] +2\pi\alpha\, .
\label{Entropy}
\end{equation}
Also, by using the relation between $M$ and $r_h$ in (\ref{surfacegravity}), we can obtain the relation $\mathsf{S}(M)$ parametrized by $r_h$.
For $M=0$ the entropy reads $\mathsf{S}(M=0)=-2\pi\sqrt{48\lambda}+2\pi\alpha$. Hence, we can fix the Gauss-Bonnet coupling to $\alpha=\sqrt{48\lambda}$ so that the horizon entropy vanishes when $M=0$. As is clear from Fig. \ref{fig4}, the entropy is positive and larger than the Schwarzschild one for all other values of $M$ and $\lambda\geq 0$.
\begin{figure}[h]
\centering
\subfigure[ ]{
\includegraphics[scale=0.52]{entrop.pdf}}
\ \ \
\subfigure[ ]{
\includegraphics[scale=0.54]{Temp2.pdf}}
\caption{(a) We plot $\mathsf{S}(M)$ for the ECG black hole with $\lambda>0$ (blue) and the usual Schwarzschild solution (red). Again, we use the normalized mass $\bar{M}=M/(G^2 \lambda)^{1/4}$ and also $\bar{S}=S/ \lambda^{1/2}$. (b) We plot the Hawking temperature as a function of the mass for Schwarzschild (red) and the ECG solution (blue). In this case we used the normalized temperature $\bar{T}=T\cdot (G^2\lambda)^{1/4}$. The dashed lines highlight the presence of a maximum temperature --- see \req{tee}. For smaller values, there exist two solutions for each $T$.}
\labell{fig4}
\end{figure}
On the other hand, the Hawking temperature \cite{Hawking:1974sw} of our solution can be written in terms of the radius as
\begin{equation}
T=\frac{1}{2\pi r_h(1+\sqrt{1+48G^2\lambda/r_h^4})}\, ,
\label{Temperature}
\end{equation}
where we used \req{surfacegravity}.
This temperature increases as the mass decreases up to a maximum temperature
\begin{equation}\label{tee}
T_{\rm{max}}=\frac{1}{12\pi G^{1/2} \lambda^{1/4}}\, , \quad \text{which is reached for a mass}\quad M_{\rm{max}}=\frac{16}{27}\, G^{-1/2}\lambda^{1/4}\, .
\end{equation}
Then, the temperature decreases until it vanishes for $M=0$. As we can see from Fig. \ref{fig4}, this behavior is very different from the Einstein gravity one, since in that case the temperature blows up as $M\rightarrow 0$. In fact, the temperature behaves in a similar fashion to that of the usual Reissner-Nordstr\"om (RN) solution to the Einstein-Maxwell system --- see {\it e.g.,}\ \cite{Ortin:2004ms}. In that case, the temperature also reaches a maximum value $T\sim 1/|Q|$ and vanishes as $M\rightarrow +\infty$. An important difference with respect to that case is that for RN, the temperature vanishes when the extremality condition $M^2=Q^2$ is met, {\it i.e.,}\ for a positive value of the mass, whereas for the ECG black holes this occurs when the mass goes to zero.
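The values quoted in \req{tee} are easy to confirm numerically from \req{surfacegravity} and \req{Temperature}. In units $G=\lambda=1$, the maximum of $T(r_h)$ sits at $r_h=2$, where $T=1/(12\pi)$ and $M=16/27$; a brief check (ours):

```python
import numpy as np

G = lam = 1.0

def s(rh):
    return np.sqrt(1.0 + 48.0 * G**2 * lam / rh**4)

def T(rh):                      # Hawking temperature, eq. (Temperature)
    return 1.0 / (2.0 * np.pi * rh * (1.0 + s(rh)))

def M(rh):                      # mass, from eq. (surfacegravity)
    return 0.5 * rh / G * (1.0 - 16.0 * G**2 * lam / rh**4
                           * (5.0 + 3.0 * s(rh)) / (1.0 + s(rh))**3)

rh = np.linspace(0.5, 20.0, 200001)
i = int(np.argmax(T(rh)))
# close to 2, 1, 1 respectively
print(rh[i], T(rh[i]) * 12.0 * np.pi, M(rh[i]) * 27.0 / 16.0)
```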
Now, using (\ref{Temperature}) and (\ref{Entropy}) it is possible to show that the First law of black hole mechanics
\begin{equation}
dM=T d\mathsf{S}\, ,
\end{equation}
holds exactly.
This is an interesting check of our calculations, since the three physical quantities appearing in this expression --- namely, the Abbott-Deser mass $M$, the Wald entropy $\mathsf{S}$ and the Hawking temperature $T$ --- have been computed independently.
Observe also that using \req{tee} and \req{Entropy} it is possible to find the following explicit expression for the entropy as a function of the temperature
\begin{equation}
\mathsf{S}/\mathsf{S}_{E}=\frac{(T/T_{E})^2-4(1-T/T_{E})+\sqrt{1-T/T_{E}}}{T/T_{E}}\, ,
\end{equation}
where $\mathsf{S}_{E}=\pi r_h^2/G$ and $T_{E}=1/(4\pi r_h)$ are the Einstein gravity values of the entropy and the temperature, {\it i.e.,}\ those corresponding to the Schwarzschild solution. When $T=T_{E}$, one recovers $\mathsf{S}=\mathsf{S}_{E}$, as expected.
We can also compute the specific heat, defined as
\begin{equation}
C=T\left(\frac{\partial \mathsf{S}}{\partial T}\right).
\end{equation}
Parametrized in terms of $r_h$, we can write it as
\begin{equation}\label{cct}
C=-\frac{8 \pi \left(1152 G^4\lambda ^2+24 G^2\lambda r_h^4 \left(4 \sqrt{1+\frac{48 G^2\lambda }{r_h^4}}+5\right)+r_h^8 \left(\sqrt{1+\frac{48 G^2\lambda }{r_h^4}}+1\right)\right)}{Gr_h^2 \left(\sqrt{1+\frac{48 G^2\lambda }{r_h^4}}+1\right)^2 \left(-48G^2 \lambda +r_h^4 \left(\sqrt{1+\frac{48G^2 \lambda }{r_h^4}}+1\right)\right)}\, .
\end{equation}
It has two regions: for $M>M_{\rm{max}}$, it is negative as in the Schwarzschild black hole, while for $M<M_{\rm{max}}$ we get $C>0$. It diverges at the midpoint $M=M_{\rm{max}}$. As we can see from Fig. \ref{fig6}, when expressed as a function of the temperature, $C(T)$ has two branches, one negative and one positive, and both diverge at $T=T_{\rm{max}}$, which suggests the presence of a phase transition.
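As a consistency check (ours), the closed form \req{cct} can be compared against finite differences of $T\,\mathrm{d}\mathsf{S}/\mathrm{d}T$ computed from \req{Entropy} and \req{Temperature}, and the sign change at $M=M_{\rm{max}}$, {\it i.e.,}\ at $r_h=2(G^2\lambda)^{1/4}$, confirmed. In units $G=\lambda=1$:

```python
import numpy as np

G = lam = 1.0

def s(rh): return np.sqrt(1.0 + 48.0 * G**2 * lam / rh**4)

def T(rh): return 1.0 / (2.0 * np.pi * rh * (1.0 + s(rh)))

def S(rh):                      # Wald entropy (constant 2*pi*alpha dropped)
    return np.pi * rh**2 / G * (1.0 - 48.0 * G**2 * lam / rh**4
                                * (3.0 + 2.0 * s(rh)) / (1.0 + s(rh))**2)

def C_closed(rh):               # eq. (cct)
    sr = s(rh)
    num = (1152.0 * G**4 * lam**2
           + 24.0 * G**2 * lam * rh**4 * (4.0 * sr + 5.0)
           + rh**8 * (sr + 1.0))
    den = (G * rh**2 * (sr + 1.0)**2
           * (-48.0 * G**2 * lam + rh**4 * (sr + 1.0)))
    return -8.0 * np.pi * num / den

def C_numeric(rh, h=1e-5):      # C = T dS/dT via central differences
    dS = (S(rh + h) - S(rh - h)) / (2.0 * h)
    dT = (T(rh + h) - T(rh - h)) / (2.0 * h)
    return T(rh) * dS / dT

for rh_ in (1.5, 3.0):          # one point on each side of r_h = 2
    print(rh_, C_closed(rh_), C_numeric(rh_))
```

The two evaluations agree, with $C>0$ for $r_h<2$ and $C<0$ for $r_h>2$, as in the figure.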
\begin{figure}[ht!]
\centering
\includegraphics[scale=0.66]{C.pdf}
\caption{We plot $C$ as a function of the temperature for the ECG solution (blue) with $\lambda>0$ and the usual Schwarzschild solution (red). In this case, we defined the normalized specific heat and temperature as $\bar C=C/ \lambda^{1/2}$ and again $\bar{T}=T\cdot (G^2\lambda)^{1/4}$.}
\labell{fig6}
\end{figure}
In Fig. \ref{fig6}, the upper blue branch corresponds to the solutions with $M<M_{\rm{max}}$, while the ones with $M>M_{\rm{max}}$ are the ones in the lower branch. The solutions with positive specific heat are thermodynamically stable, and very different from the usual Schwarzschild solution, which has $C(T)<0$ for all $T$ as is clear from Fig. \ref{fig6}. As we saw, for a given temperature $T$ there exist two solutions with different horizon radius --- and hence mass, entropy, etc. In all cases, the one with the smaller $r_h$ is the one with positive specific heat, and vice-versa. The situation is reminiscent of the one observed in \cite{Myers:1988ze}, where certain odd-dimensional Lovelock black holes were shown to become stable for small enough masses.
\section{Generalized Reissner-Nordstr\"om-(Anti-)de Sitter solution}\label{charged}
In the previous section we focused on the simplest possible case, corresponding to generalized versions of the asymptotically flat Schwarzschild black hole. However, these solutions can be easily generalized. Here we will turn on the cosmological constant $\Lambda_0$ and add a Maxwell field to the action \req{ECGaction}. These two extensions allow us to obtain generalized versions of the usual Reissner-Nordstr\"om-(Anti-)de Sitter (RN-(A)dS) black hole. Hence, let us consider now the action
\begin{equation}
S=\int_{\mathcal{M}}d^4x\sqrt{|g|}\left[\frac{1}{16 \pi G}\left(-2\Lambda_0+R+\alpha G \mathcal{X}_4-\lambda G^2\mathcal{P}\right)-\frac{1}{4}F_{ab}F^{ab}\right],
\end{equation}
where $F_{ab}=2\partial_{[a}A_{b]}$. We consider the same ansatz \req{ansatz1} for the metric, while for the vector field we choose
\begin{equation}
A=A_0(r) dt \,.
\end{equation}
As before, we find that $N'=0$, so we set $N(r)=1$. On the other hand, we find the following equation for $A_0$,
\begin{equation}
-\left(A_0'\frac{r^2}{N(r)}\right)'=0\, .
\end{equation}
Since $N(r)=1$, this equation yields the usual expressions
\begin{equation}\label{elec}
A_0=\frac{q}{4 \pi r}\, , \quad F=\frac{q}{4\pi r^2}dt\wedge dr\, ,
\end{equation}
for the electric potential and its field strength. Here, $q$ is an integration constant related to the electric charge of the solution. Finally, the equation for $f$ is found from the variation with respect to $N$. It reads
\begin{equation}\label{fist}
-(f-1)r-G^2 \lambda \bigg[4f'^3
+12\frac{f'^2}{r}-24f(f-1)\frac{f'}{r^2}
-12ff''\left(f'-\frac{2(f-1)}{r}\right)\bigg]=\frac{1}{3}\Lambda_0 r^3+r_0-\frac{G Q^2}{r},
\end{equation}
where $Q^2=q^2 /(4\pi)$, and $r_0$ is an integration constant. Observe that for $\lambda=0$ we obtain
\begin{equation}
f(r)=1-\frac{\Lambda_0r^2}{3}-\frac{2GM}{r}+\frac{GQ^2}{r^2},
\end{equation}
where we identified $r_0=2GM$. This is of course nothing but the usual RN-(A)dS blackening factor. Interestingly, when $\lambda$ is turned on, the asymptotic quantities get corrected in this case. Indeed, by performing an asymptotic expansion around $r\rightarrow +\infty$, we see that $r_0$ is identified with the mass as before, $r_0=2 GM$, and that $f(r)$ takes the form
\begin{equation}
f(r)=1-\frac{1}{3}\Lambda_{\rm{eff}}r^2-\frac{2 G_{\rm{eff}}M}{r}+\frac{G_{\rm{eff}} Q^2}{r^2}+\mathcal{O}\left(\frac{1}{r^3}\right),
\end{equation}
where the effective cosmological constant $\Lambda_{\rm{eff}}$ is a solution of the equation
\begin{equation}\label{geg}
\frac{16}{9} \lambda G^2 \Lambda_{\rm{eff}}^3-\Lambda_{\rm{eff}}+\Lambda_0=0\,,
\end{equation}
and where the effective gravitational constant is given by
\begin{equation}
G_{\rm{eff}}=\frac{G}{1-\frac{16}{3}\lambda G^2 \Lambda_{\rm{eff}}^2}\,.
\end{equation}
Observe that \req{geg} is nothing but the embedding equation that pure (A)dS$_4$ with curvature $\Lambda=\Lambda_{\rm eff}/3$ must satisfy in order for it to be a solution of the ECG theory \cite{PabloPablo}.
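For small $\lambda$ the relevant root of \req{geg} is the one continuously connected to $\Lambda_0$, namely $\Lambda_{\rm{eff}}=\Lambda_0+\tfrac{16}{9}\lambda G^2\Lambda_0^3+\mathcal{O}(\lambda^2)$. A small numerical sketch (ours, with $G=1$ and sample couplings):

```python
import numpy as np

def lambda_eff(lam0, lam, G=1.0):
    """Real root of (16/9) lam G^2 x^3 - x + lam0 = 0 closest to lam0."""
    roots = np.roots([16.0 / 9.0 * lam * G**2, 0.0, -1.0, lam0])
    real = roots[np.abs(roots.imag) < 1e-10].real
    return real[np.argmin(np.abs(real - lam0))]

def G_eff(lam_eff, lam, G=1.0):
    return G / (1.0 - 16.0 / 3.0 * lam * G**2 * lam_eff**2)

lam0, lam = -0.3, 0.01          # an AdS example
le = lambda_eff(lam0, lam)
print(le, 16.0 / 9.0 * lam * le**3 - le + lam0)    # residual ~ 0
print(le - (lam0 + 16.0 / 9.0 * lam * lam0**3))    # O(lam^2) difference
print(G_eff(le, lam))
```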
As in the uncharged asymptotically flat case, it is possible to compute higher-order terms in the asymptotic expansion. Similarly, one can study the thermodynamic properties of these black holes by making a Taylor expansion around the horizon, as in (\ref{Hexpansion}). In that case we find the following generalized equations, which relate $r_h$ and $\kappa_g$ to $M$, $Q$ and $\Lambda_0$:
\begin{eqnarray}\label{ff}
-2GM+r_h-16\lambda G^2\kappa_g^2\left(2\kappa_g+\frac{3}{r_h}\right)-\Lambda_0\frac{r_h^3}{3}+\frac{GQ^2}{r_h}&=&0\, ,\\ \label{ff2}
1-\frac{GQ^2}{r_h^2}-\Lambda_0 r_h^2-2\kappa_g r_h-48\lambda G^2\frac{\kappa_g^2}{r_h^2}&=&0\, .
\end{eqnarray}
Using these relations, we can write the Hawking temperature $T=\kappa_g/(2\pi)$, the entropy $\mathsf{S}$ (\ref{GeneralEntropy}) and the mass $M$ in terms of $r_h$ and $Q$. The result reads
\begin{align}
T=&\frac{r_h^2-\Lambda_0 r_h^4-GQ^2}{2 \pi\left(r_h^3+ \sqrt{r_h^6+48G^2 \lambda ( r_h^2-\Lambda_0 r_h^4-GQ^2)}\right)}\, ,\\
\mathsf{S}=&\frac{\pi r_h^2}{G}\left[1-\frac{48 \lambda G^2 \left(r_h^2-\Lambda_0 r_h^4-GQ^2\right) \left(3 r_h^3-\Lambda_0 r_h^5-GQ^2 r_h+2 \sqrt{r_h^6+48G^2 \lambda ( r_h^2-\Lambda_0 r_h^4-GQ^2)}\right)}{r_h^3 \left(\sqrt{r_h^6+48G^2 \lambda ( r_h^2-\Lambda_0 r_h^4-GQ^2)}+r_h^3\right){}^2}\right]\\ \notag &+2\pi\alpha\, ,\\
\frac{2GM}{r_h}= &1+\frac{GQ^2}{r_h^2}-\frac{\Lambda_0 r_h^2}{3}\\ \notag &-\frac{16 G^2\lambda \left(r_h^2-\Lambda_0 r_h^4-GQ^2\right)^2 \left(5 r_h^3-2 \Lambda_0 r_h^5-2 GQ^2 r_h+3 \sqrt{r_h^6+48G^2 \lambda ( r_h^2-\Lambda_0 r_h^4-GQ^2)}\right)}{r_h^2\left(\sqrt{r_h^6+48G^2 \lambda ( r_h^2-\Lambda_0 r_h^4-GQ^2)}+r_h^3\right)^3}\, . \hspace{0.7cm}
\end{align}
The last equation fixes the horizon radius $r_h$ in terms of $M$, $Q$ and $\Lambda_0$. Using these expressions,
it is possible to show --- using Mathematica --- that the first law also holds for these solutions. In this case, it reads
\begin{equation}
dM= T d\mathsf{S}+\Phi dq,
\end{equation}
where $\Phi=A_0$ is the electrostatic potential \req{elec}. Let us also mention that the extremal limit --- corresponding to $\kappa_g=0$ --- coincides with the Einstein gravity one. This can be easily seen from \req{ff} and \req{ff2}, which become in that case
\begin{equation}
1-\frac{2GM}{r_h}+\frac{GQ^2}{r_h^2}-\frac{\Lambda_0 r_h^2}{3}=0\, , \quad \Lambda_0 r_h^4-r_h^2+GQ^2=0\, ,
\end{equation}
the second of which imposes $T=0$, $\mathsf{S}=\pi r_h^2/G$.
This is because when $\kappa_g=0$, all terms involving $\lambda$ vanish in the previous equations. Therefore, neither the extremality condition nor the horizon radius are altered by the ECG term in that case. In particular, if $\Lambda_0=0$, extremality is reached when $Q^2=GM^2$ and in that case, $r_h=\sqrt{G} |Q|$.
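The first law quoted above can also be checked numerically from the explicit expressions for $T$, $\mathsf{S}$ and $M$. The sketch below (ours) does so by finite differences along the two independent directions $r_h$ and $Q$, using $\Phi\, dq=(Q/r_h)\,dQ$ in units $G=1$, for sample values of the couplings:

```python
import numpy as np

lam, Lam0 = 0.05, -0.1          # sample couplings (G = 1)

def W(rh, Q):
    return np.sqrt(rh**6 + 48.0 * lam * (rh**2 - Lam0 * rh**4 - Q**2))

def T(rh, Q):
    return (rh**2 - Lam0 * rh**4 - Q**2) / (2.0 * np.pi * (rh**3 + W(rh, Q)))

def S(rh, Q):                   # additive constant 2*pi*alpha dropped
    x = rh**2 - Lam0 * rh**4 - Q**2
    return np.pi * rh**2 * (1.0 - 48.0 * lam * x
            * (3.0 * rh**3 - Lam0 * rh**5 - Q**2 * rh + 2.0 * W(rh, Q))
            / (rh**3 * (W(rh, Q) + rh**3)**2))

def M(rh, Q):
    x = rh**2 - Lam0 * rh**4 - Q**2
    return 0.5 * rh * (1.0 + Q**2 / rh**2 - Lam0 * rh**2 / 3.0
            - 16.0 * lam * x**2
            * (5.0 * rh**3 - 2.0 * Lam0 * rh**5 - 2.0 * Q**2 * rh + 3.0 * W(rh, Q))
            / (rh**2 * (W(rh, Q) + rh**3)**3))

# first law dM = T dS + Phi dq, with Phi dq = (Q/r_h) dQ in these units
rh, Q, h = 2.0, 0.3, 1e-6
lhs_r = (M(rh + h, Q) - M(rh - h, Q)) / (2.0 * h)
rhs_r = T(rh, Q) * (S(rh + h, Q) - S(rh - h, Q)) / (2.0 * h)
lhs_q = (M(rh, Q + h) - M(rh, Q - h)) / (2.0 * h)
rhs_q = T(rh, Q) * (S(rh, Q + h) - S(rh, Q - h)) / (2.0 * h) + Q / rh
print(lhs_r - rhs_r, lhs_q - rhs_q)   # both ~ 0
```

Both differences vanish to numerical precision, in agreement with the Mathematica check quoted in the text.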
\section{Discussion}
In this paper we have constructed generalizations of the Schwarzschild and Reissner-Nordstr\"om black holes in four-dimensional Einsteinian cubic gravity \req{ECGaction} both with Minkowski and (A)dS asymptotes. We have shown that the theory admits solutions with a single function $f(r)$ determined through a non-linear second-order differential equation \req{fist} and studied some of their thermodynamic properties which, remarkably enough, can be accessed analytically. As far as we know, the new solutions represent the first non-trivial four-dimensional generalizations of the Schwarzschild- and RN-(A)dS black holes in higher-order gravity whose thermodynamic properties can be computed exactly.
Using those results we have been able to check analytically that the solutions satisfy the first law of black hole mechanics.
We have observed that the addition of the ECG term to the EH action softens the black-hole singularity --- see \req{krets}. In particular, the metric of the ECG black hole does not diverge at $r=0$ in Schwarzschild coordinates and the Kretschmann invariant diverges as $\sim r^{-4}$ instead of the $\sim r^{-6}$ behavior of the usual Schwarzschild solution.
It would be interesting to understand to what extent this appealing behavior is a generic phenomenon in other theories. One might wonder, for example, if certain theories (possibly of order higher than ECG) allow for a complete removal of the black hole singularity --- as expected in a UV-complete theory of gravity.
Another remarkable property of the solutions constructed here is the existence of neutral stable small black holes. Indeed, for a given temperature $T<T_{\rm{max}}$, we have found two possible solutions: a large black hole with $C<0$, and a small one with $C>0$.
As we explain in appendix \ref{remnants}, these black holes never evaporate completely, which gives rise to long-lived remnants. In the general case of a charged, asymptotically (A)dS black hole, we have also obtained exact formulas for all the relevant thermodynamical quantities. However, a detailed analysis of the physical consequences of these new thermodynamic relations is still lacking, and should be carried out elsewhere.
Let us also mention possible generalizations of the solutions presented here. We constructed black holes with spherical horizons, but it should be easy to extend our solutions to different horizon topologies. This possibility has been already considered for the uncharged case in \cite{Hennigar:2016gkm}. Coupling to more complicated matter fields also seems possible. One could of course try to construct non-static or non-spherically symmetric solutions, although that looks more challenging at first sight. Apart from these aspects, we would like to stress again that the properties of ECG make it quite appealing for holographic applications --- see {\it e.g.,}\ \cite{Dey:2016pei} for a first approach.
\subsection{On the uniqueness of $D=4$ ECG black holes }
According to our computations, up to cubic order in curvature ECG is the most general four-dimensional higher-order gravity which allows for non-trivial single-function generalizations of Schwarzschild- and RN-(A)dS which reduce to the usual Einstein gravity solutions when the corresponding higher-order couplings are set to zero. This requires some further clarification. In fact, four-dimensional black holes with a single function have been previously constructed for different higher-order gravities. However, all the known cases fall within one of the following three classes:
\begin{enumerate}
\item They are ``trivial'' embeddings of the usual Einstein black holes \footnote{Besides, it is not clear what the physical meaning of these embeddings is as, in general, the corresponding solutions do not correspond to the exterior gravitational field of any source.}, {\it i.e.,}\ the metric of the solutions is exactly the same as for Einstein gravity --- see {\it e.g.,}\ \cite{delaCruzDombriz:2009et} for $f(R)$ or \cite{Lu:2012xu,Smolic:2013gz} for quadratic gravities.
\item They are solutions to pure higher-order gravities, {\it i.e.,}\ the action does not incorporate the Einstein-Hilbert term. For example, pure Weyl-squared gravity --- whose Lagrangian reads $\mathcal{L}=\alpha C_{abcd}C^{abcd}$ --- allows for solutions in four dimensions with a single function \cite{Riegert:1984zz,Klemm:1998kf} but $\mathcal{L}=-2\Lambda_0+R+\alpha C_{abcd}C^{abcd}$ does not \cite{Lu:2012xu}. See also {\it e.g.,}\ \cite{Oliva:2011xu,Oliva:2010zd}.
\item They require the fine-tuning of some of the higher-order couplings appearing in the action, so the Einstein gravity limit does not exist. For example, extended single-function solutions to a theory of the form $\mathcal{L}=-(-4\Lambda_0+R)^2/(8\Lambda_0)=-2\Lambda_0+R-R^2/(8\Lambda_0)$, which is a perfect square, can be constructed by simply setting $R=4\Lambda_0$. Examples of this kind of constructions can be found {\it e.g.,}\ in \cite{Cai:2009ac,Love}.
\end{enumerate}
Our claim on the uniqueness of ECG four-dimensional black holes amongst quadratic and cubic gravities applies instead to the situation that we consider most natural. By ``natural'' we mean the following. Consider a theory extending the general relativity action through
\begin{equation}\label{Exx}
S=\frac{1}{16 \pi G}\int_{\mathcal{M}}d^4x\sqrt{|g|}\left[-2\Lambda_0+R+\sum_i \alpha_i X_i \right]\, ,
\end{equation}
where the $X_i=X_i(R_{abcd},g^{ef})$ are higher-curvature invariants and the $\alpha_i$ are independent parameters. Then, we consider black hole solutions of \req{Exx} which non-trivially extend the Einstein gravity ones and reduce to them as we set $\alpha_i=0$ --- a value for which \req{Exx} is also asked to reduce to the Einstein gravity action. This is naturally the kind of scenario that one expects from an effective-action perspective. Single-function examples satisfying these conditions are known in $D\geq 5$ {\it e.g.,}\ for Lovelock theories \cite{Boulware:1985wk,Cai:2001dz,Dehghani:2009zzb,deBoer:2009gx} or Quasi-topological gravity \cite{Quasi,Quasi2,Dehghani:2011vu}. We claim that in $D=4$, ECG is the only theory that admits this kind of genuine single-function extensions up to cubic order in curvature. According to our analysis, any other term will either imply $N^{\prime}(r)\neq 0$, or keep the Schwarzschild solution unaffected --- or the condition $N^{\prime}(r)=0$ would be achieved by fine-tuning some of the couplings like in the case discussed in item 3 above. This is a remarkable property of ECG, even more so if we take into account that the motivation for constructing this theory was quite different --- namely the fact that it is the most general dimension-independent quadratic or cubic theory whose linearized spectrum coincides with Einstein's \cite{PabloPablo}. In fact, there seems to be a connection between theories which only propagate a massless graviton in the vacuum, and those which allow for single-function black holes. Examples include again Lovelock \cite{Lovelock1,Lovelock2} and Quasi-topological gravity \cite{Quasi,Quasi2}. But note that the connection between these two different aspects of higher-order theories cannot be an \emph{if and only if} because we know examples of theories with the same linearized spectrum as Einstein gravity which do not possess single-function static and spherically symmetric solutions.
This is the case for example of certain $f($Lovelock$)$ theories considered in \cite{Love,Bueno:2016ypa,Karasu:2016ifk}, or even of ECG itself when considered in dimensions higher than four --- see the next subsection.
\subsection{On higher-dimensional ECG black holes}
Let us close the paper by mentioning that the generalization of the solutions presented here to higher dimensions is not straightforward. As we have seen, in $D=4$ we were able to set $N=1$, which allowed us to reduce the problem to a second-order differential equation for $f$. However, this property of four-dimensional ECG no longer holds for $D=5$, since in that case two independent functions are required instead --- the same presumably holding for $D> 5$. Hence, in dimensions higher than four, the construction of solutions should be considerably more involved.
\label{discussion}
\begin{acknowledgments}
We are thankful to Robert Mann, Rob Myers, Julio Oliva, Tom\'as Ort\'in and C. S. Shahbazi for useful comments. The work of PB was supported by a postdoctoral fellowship from the Fund for Scientific Research - Flanders (FWO). PB also acknowledges support from the Delta ITP Visitors Programme. The work of PAC was supported by a ``la Caixa-Severo Ochoa'' International pre-doctoral grant and in part by the Spanish Ministry of Science and Education grants FPA2012-35043-C02-01 and FPA2015-66793-P and the Centro de Excelencia Severo Ochoa
Program grant SEV-2012-0249.\end{acknowledgments}
\section{Introduction}
Bulk transition metal dichalcogenides (TMD) are layered van der Waals
solids displaying remarkable properties, promising both for fundamental research
and for technological applications.
Metallic bulk transition
metal dichalcogenides like NbSe$_2$
exhibit coexistence of charge density waves and
superconductivity \cite{DiSalvo,IwasaMoS2}, while insulating TMD (MoS$_2$,
WS$_2$) are flexible, have high mobilities and are routinely
used in flexible electronics.
Since the pioneering work of Frindt and coworkers \cite{ FrindtJAP1966,
JoensenMRB1986, FrindtPRL1972, YangPRB1991} and the successive
developments in the fields of mechanical \cite{NovoselovPNAS} and
liquid \cite{ColemanSCI} exfoliation, it has been possible to obtain
free-standing or supported single-layer TMD. These monolayers
are the inorganic analogue of graphene and display a rich
chemistry\cite{ChhowallaNChem} that makes them attractive
for energy storage applications.
Insulating single-layer TMD have much lower
mobilities \cite{Radisavljevic2011, FuhrerComment} than
graphene, but are nevertheless interesting for
nanoelectronics, mainly due to the presence of a finite
bandgap.
In this context, MoS$_2$ is considered one of the most
promising materials\cite{NotAlone}. The most stable polytype of bulk MoS$_2$
is the 2H (Molybdenite), where each Mo has a trigonal prismatic coordination
with the nearby S atoms. Mechanical exfoliation of bulk 2H MoS$_2$
leads to the formation of single-layer samples with the same local
coordination (here labeled 1HMoS$_2$).
In chemically exfoliated samples the situation is different.
In the first step of chemical exfoliation of bulk 2H MoS$_2$,
Li atoms are intercalated between the layers.
The Li intercalation stabilizes a 1T Li$_{x}$MoS$_2$
polytype in which each Mo is octahedrally coordinated with the nearby S
atoms. Subsequent hydration with excess water and ultrasonication leads to the separation
of the layers via LiOH formation and synthesis of large-area single-layer MoS$_2$ samples
\cite{JoensenMRB1986}.
\begin{figure*}
\begin{minipage}[c]{0.2\linewidth}
\includegraphics[scale=0.125,angle=0]{1T_Top-eps-converted-to.pdf}
\includegraphics[scale=0.125,angle=0]{2H_Top-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.7\linewidth}
\includegraphics[scale=0.225,angle=0]{zig_zag_measure-eps-converted-to.pdf}\includegraphics[scale=0.12,angle=0]{struct_frind_2ax2a_measures-eps-converted-to.pdf}
\end{minipage}\hfill
\caption{Phases of chemically exfoliated MoS$_2$. The
1H has trigonal prismatic coordination and is the most stable
among all polytypes.
The 1T, 1T$^{'}$ and 1T$^{''}$ polytypes all have octahedral
coordination. The 1T$^{'}$ is the lowest energy polytype among
those with octahedral coordination. The in-plane Mo-Mo distance is
$3.193$ \AA\, and $3.183$ \AA\, for the 1T$^{'}$ and 1H structures, respectively.}
\label{fig:struct}
\end{figure*}
The properties of chemically exfoliated MoS$_2$ single layers are poorly
understood. Recently it has been shown that
these samples are actually composed of heterostructures
of 1H, 1T and 1T-distorted (labeled 1T$^{\prime}$)
MoS$_{2}$ phases \cite{EdaACSnano}. The 1T$^{\prime}$ phase is a $2\times 1$
superstructure of the 1T phase formed by zig-zag chains. Remarkably, the
three phases coexist in the same sample and have substantially different
conducting properties, as the 1T phase is metallic while the 1H and
1T$^{\prime}$ are insulating\cite{EdaNanoL}. Upon mild annealing at $200-300\,^{\circ}$C
the 1T and 1T$^{\prime}$ phases disappear and transform
into the 1H one.
Exposure to a $60-80$ keV electron beam induces S vacancies
\cite{Komsa} and transforms the 1T$^{\prime}$ phase into
the 1T one\cite{EdaACSnano,Suenaga}. Finally, it is important to remark that
chemically exfoliated single layers are covered
with adsorbates that can play some role in stabilizing one structure or
the other.
The dynamical properties of the 1T$^{\prime}$ phase are not
understood.
For example, while it is well established that the high energy optical Raman
spectra of the 1H phase are composed of two prominent peaks,
attributed to the E$_{\rm 2g}$ mode at $\approx 385 $ cm$^{-1}$ and to
the A$_{\rm 1g}$ mode at $\approx 403$ cm$^{-1}$\cite{Lee}, little is known about
the Raman spectra of the 1T$^{\prime}$ phase. Raman
measurements \cite{Sandoval} on freshly-prepared single-layers with dominant 1T$^{\prime}$ phase
show that the E$_{\rm 2g}$ peak is missing, while at least five additional peaks
appear at lower energies (some of
these peaks are labeled J$_1$, J$_2$, J$_3$). Nothing is known about the phonon displacements
generating these features.
In this work we study the stability, the electronic structure and the
dynamical properties of the 1T and 1T$^{\prime}$ phases in single
layer MoS$_2$ by using density functional theory (DFT) calculations.
We show that the metallic 1T phase is dynamically
unstable. We find distorted structures with a $2\times 1$
(1T$^{'}$MoS$_2$) and $2\times 2$ (labeled
1T$^{''}$MoS$_2$) real space
periodicities having lower energies than the 1T one.
Both 1T$^{'}$ and 1T$^{''}$ structures are, however,
substantially higher in energy than the 1HMoS$_2$ phase (see Fig.~\ref{fig:struct} for
a plot of the crystal structure of the different phases).
We then fully characterize the
distorted 1T$^{\prime}$ phase found in experiments on chemically
exfoliated MoS$_2$, by obtaining
its electronic structure, phonon dispersion and Raman intensities.
Finally, we study catalytic absorption in the 1T$^{'}$ phase and show
that H adsorbates stabilize the
1T$^{\prime}$ with respect to all the others.
The paper is organised as follows. In section \ref{sec:tech} we
describe the technical details of the calculation. In
sec. \ref{sec:oct}
we analyze the stability of octahedral phases with respect to the
trigonal prismatic ones and in sec. \ref{sec:raman} we study the Raman
spectrum of the distorted 1T$^{'}$ phase. Finally, in sec.
\ref{sec:cata} we study catalytic adsorption of hydrogen and its
effect on the structural and electronic properties of the different
structures.
\section{Technical details\label{sec:tech}}
The results reported in the present paper were obtained from
first-principles
density functional theory in the generalized gradient approximation\cite{PBE}.
The QUANTUM-ESPRESSO\cite{QE} package was used with norm-conserving
pseudopotentials and a plane-wave cutoff energy of $90$ Ry. Semicore
states were included in the Mo pseudopotential.
Electronic structure calculations were performed
using $24\times 24$, $12\times 24$ and $12\times 12$ electron-momentum grids for the
1T, 1T$^{'}$ and 1T$^{''}$ phases, respectively. For the metallic 1T structure
we use a Hermitian-Gaussian smearing of $0.01$ Ry.
The phonon dispersion of the 1T phase was calculated by Fourier
interpolating dynamical matrices calculated on an $8\times 8$
phonon-momentum grid and on a $24\times
24$ electron-momentum grid. The Raman intensity calculation for the
1T$^{\prime}$ phase was performed on an $8\times 16$ electron-momentum
grid.
The phonon dispersion calculation for the 1T$^{'}$ structure was
performed using a $4\times 4$ phonon momentum grid.
\section{Results}
\subsection{Stability of octahedral phases\label{sec:oct}}
We first investigate the relative stability of the 1H and 1T phases shown in Fig.~\ref{fig:struct}.
As expected, we find that the 1H MoS$_2$ phase is the most stable one,
with an energy lower by 0.83 eV/Mo atom with respect to the 1T one.
The electronic structure calculation of the 1T
structure in Fig. \ref{fig:1Tpure_el} shows that this polytype is indeed metallic.
In contrast to the 1H case, here the spin-orbit coupling
is very weak and from now on it will be neglected.
As the energy difference between the 1T and 1H phases is
more than 30 times larger than the 200-300 K
annealing temperature necessary to transform the 1T phase
into the 1H one, the experimental detection of the 1T and
1T$^{\prime}$ phases cannot be explained by the total energy
difference between the two.
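As a quick numerical sanity check of this comparison (an illustrative sketch: the 0.83 eV/Mo value is taken from the text, while the $\sim 300$ K reference scale is an assumption based on the annealing range quoted above):

```python
# Convert the 1T-1H energy difference (0.83 eV/Mo, from the text) into a
# temperature scale and compare it with an assumed ~300 K reference scale
k_B = 8.617e-5            # Boltzmann constant in eV/K
dE = 0.83                 # eV/Mo, energy of the 1T phase relative to 1H
T_eq = dE / k_B           # equivalent temperature, ~9.6e3 K
ratio = T_eq / 300.0      # ~32, i.e. "more than 30 times" a ~300 K scale
print(T_eq, ratio)
```
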
It has been suggested that the 1T phase is metastable and, as a consequence, an
energetic barrier occurs between the two \cite{ChhowallaNChem}.
To verify this hypothesis, we calculate the phonon dispersion
for the 1T phase. We find that the 1T structure is dynamically
unstable (see Fig. \ref{fig:1Tpure_ph}) at the zone border, with the largest
instability at the M point of the hexagonal lattice.
\begin{table}[hb!]
\begin{tabular}{l c c c }
\hline
Atom & x & y & z \\
\hline
Mo & 0.0508948 & 0.0508948 & 0.0051972 \\
S & 0.1662841 & 0.6662835 & 0.1240922 \\
S & 0.3337158 & 0.3337161 & -0.1240922 \\
Mo & 0.4491051 & -0.0508948 & -0.0051972 \\
S & 0.6714116 & 0.6714108 & 0.0957802 \\
S & 0.8285883 & 0.3285888 & -0.0957802 \\
\hline
\end{tabular}
\caption{Atomic coordinates with respect to the direct axis for the 1T$^{'}$ structure. The lengths of
the two direct lattice vectors of the two dimensional lattice are identified by
$a=6.411$~\AA, $b=3.111$~\AA. The angle between them is
$\gamma=119.034^{\circ}$. \label{tab:structTprime}}
\end{table}
This distortion is compatible with a $2\times 1$ superstructure.
To identify the lowest energy superstructure, we perform calculations on a $2\times 1$
supercell, by displacing the atoms along the direction given by the
phonon displacement of the most unstable mode at M. We find that substantial energy
(0.19 eV/Mo) is gained by the distortion. We then start from this
distorted structure and perform full structural optimization
of internal coordinates and of the 2D-cell. As shown in
Fig.~\ref{fig:H_Stability}, we find a stabilization
of an octahedrally coordinated structure composed of zig-zag chains, with an energy
gain of 0.29 eV/Mo with respect to the 1T phase.
Structural parameters of the zig-zag distorted-structure are given in
Tab. \ref{tab:structTprime}.
Here we remark that the
shortest distance between Mo atoms belonging to the same chain is
$\approx 2.72$~\AA, while the shortest distance between atoms on different
chains is $\approx 3.71$~\AA.
The angle between the Mo atoms
in the chain is $\approx 69.64^{\circ}$.
The in-plane nearest-neighbour Mo-Mo distance of the 1T$^{'}$ structure is
almost identical to the nearest-neighbour
distance of Mo atoms in bcc Mo\cite{Ashcroft}, that is $2.728$~\AA.
By contrast, in the 1T structure the Mo-Mo bond length is $3.193$~\AA,
substantially elongated with respect to the Mo-Mo nearest-neighbour
distance in bcc Mo.
The devised 1T$^{'}$ structure closely
resembles that detected in experiments on chemically exfoliated
samples \cite{EdaACSnano}.
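As an illustrative cross-check (a sketch, not part of the original analysis), the intra-chain and inter-chain Mo-Mo distances quoted above can be recovered directly from the lattice parameters and fractional Mo coordinates of Tab.~\ref{tab:structTprime}, neglecting the small out-of-plane buckling of the Mo layer:

```python
import math

# Lattice parameters of the 1T' cell from Tab. 1 (a, b in angstrom, gamma in deg)
a, b, gamma = 6.411, 3.111, 119.034
# In-plane fractional coordinates of the two inequivalent Mo atoms (Tab. 1)
mo1 = (0.0508948, 0.0508948)
mo2 = (0.4491051, -0.0508948)

g = math.radians(gamma)
a1 = (a, 0.0)                              # first direct lattice vector
a2 = (b * math.cos(g), b * math.sin(g))    # second direct lattice vector

# Mo1-Mo2 distances over nearest periodic images (out-of-plane z neglected)
dists = []
for n in range(-1, 2):
    for m in range(-1, 2):
        f = (mo2[0] - mo1[0] + n, mo2[1] - mo1[1] + m)
        x = f[0] * a1[0] + f[1] * a2[0]
        y = f[0] * a1[1] + f[1] * a2[1]
        dists.append(math.hypot(x, y))
dists.sort()
# The two shortest are the zig-zag bonds inside a chain (~2.72 A);
# the next one is the separation between chains (~3.71 A)
print(dists[0], dists[2])
```
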
\begin{figure}
\centerline{\includegraphics[scale=0.5,angle=0]{PBE_wit_without_SO_1T_Mono-eps-converted-to.pdf}}
\caption{Electronic structure and density of states of the
  1TMoS$_2$ phase with and without spin-orbit coupling. The energies
  are plotted with respect to the Fermi level.}
\label{fig:1Tpure_el}
\end{figure}
\begin{figure}
\centerline{\includegraphics[scale=0.5,angle=0]{1T_branchie_cmm1-eps-converted-to.pdf}}
\caption{Phonon dispersion of the 1TMoS$_2$ phase
showing a dynamical instability at the zone border.}
\label{fig:1Tpure_ph}
\end{figure}
The electronic density of states of the distorted structure is shown
in Fig. \ref{fig:Dos_Distorted}. The distortion opens a very small gap
($\approx 0.045$ eV) that makes the system insulating. The formation
of zig-zag chains is actually very similar to the standard
Peierls dimerization in one-dimensional systems, i.e., the system
gains energy by opening a gap. The Peierls distortion reduces the
dimensionality of the 2D layer, which is now broken into 1D zig-zag chains.
This is at odds with most bulk metallic
transition metal dichalcogenides where the charge density wave state
coexists with metallicity and superconductivity \cite{DiSalvo}.
However, given the large energy gain and the strong bond
deformation involved in this distortion, the transition to 1D zig-zag
chains has to be considered more a real structural transition than a
charge density wave.
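The electronic energy gain from gap opening can be illustrated with a minimal one-dimensional tight-binding toy model (an illustrative sketch, not the DFT calculation of the text): for a half-filled chain with alternating hoppings $t(1\pm d)$, dimerization opens a gap $2t_1t_2$-independent of size $2|t_1-t_2|$ at the zone boundary while lowering the occupied-band energy.

```python
import math

def band_energy(d, t=1.0, nk=2001):
    # Occupied (lower) band energy per unit cell of a half-filled chain
    # with alternating hoppings t(1+d) and t(1-d); bands are
    # E(k) = -sqrt(t1^2 + t2^2 + 2 t1 t2 cos k)
    t1, t2 = t * (1 + d), t * (1 - d)
    ks = [math.pi * i / (nk - 1) for i in range(nk)]
    es = [-math.sqrt(t1**2 + t2**2 + 2 * t1 * t2 * math.cos(k)) for k in ks]
    return sum(es) / nk

E_undimerized = band_energy(0.0)   # ~ -4/pi in units of t
E_dimerized = band_energy(0.1)     # lower: dimerization gains electronic energy
gap = 2 * abs(1.1 - 0.9)           # band gap 2|t1 - t2| at the zone boundary
print(E_dimerized < E_undimerized, gap)
```
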
\begin{table}[hb!]
\begin{tabular}{l c c c }
\hline
Atom & x & y & z \\
\hline
Mo & 0.022531337 & 0.022531337 & 0.0 \\
S & 0.317728127 & 0.651403657 & 0.056424736 \\
S & 0.651403657 & 0.317728127 & -0.056424736 \\
Mo & 0.465000117 & 0.001783220 & -0.000455305 \\
S & 0.815301793 & 0.651805630 & 0.049761398 \\
S & 1.150643861 & 0.316295157 & -0.062393003 \\
Mo & 0.444399776 & 0.444399776 & 0.0 \\
S & 0.814643878 & 1.148463448 & 0.056499460 \\
S & 1.148463448 & 0.814643878 & -0.056499460 \\
Mo & 0.001783220 & 0.465000117 & 0.000455305 \\
S & 0.316295157 & 1.150643861 & 0.062393003 \\
S & 0.651805630 & 0.815301793 & -0.049761398 \\
\hline
\end{tabular}
\caption{Coordinates of the 12 atoms in the 1T$^{''}$ unit cell with
respect to the direct lattice vectors. The lengths of the two direct
lattice vectors of the two dimensional lattice are identified by
$a=b=6.422$~\AA. The angle between the two is $\gamma=119.331^{\circ}$.\label{tab:structTdec}}
\end{table}
As the energy difference between the 1T$^{'}$ and the 1H structures is
large (0.54 eV/Mo), we perform additional structural optimization on the
$2\times 2$ supercell to see if other superstructures can be
stabilized. We do indeed find another distorted structure formed
by Mo rhombi (1T$^{''}$ MoS$_2$, see Fig.~\ref{fig:struct} and Tab.~\ref{tab:structTdec})
that is 0.19 eV/Mo lower in energy than the 1T structure
but still higher than both the 1T$^{'}$ and the 1H ones.
Interestingly, in past experimental works on chemically exfoliated MoS$_2$ samples
\cite{YangPRB1991}, a similar 1T$^{''}$ structure was proposed as the most stable one in
the monolayer.
\subsection{Raman spectra of the distorted 1T' phase \label{sec:raman}}
In order to
substantiate that the 1T$^{\prime}$ structure determined theoretically
is the same as the experimental one, we calculate the phonon
frequencies at the zone center and the first-order Raman intensities for the 1T$^{\prime}$
structure. We also give a complete interpretation of Raman spectra
in chemically exfoliated samples that is currently lacking in
the literature.
\begin{table}
\begin{tabular}{cccc}
{\it Theory } & {\it Theory} & {\it Experiment}\cite{Sandoval} & 1HMoS$_2$ \cite{Lee} \\
{\it (cm$^{-1}$)} & {\it (Intensity)} & {\it (cm$^{-1}$)} & {\it (cm$^{-1}$)} \\ \hline
147 & 0.003 & & \\
151 & 0.008 & 156 (J1) & \\
216 & 1.0 & 226 (J2) & \\
223 & 0.006 & & \\
286 & 0.011 & 287 & \\
333 & 0.033 & 333 (J3) & \\
350 & $<$0.001 & 358 & 385 (E$_{2g}$) \\
412 & 0.13 & 408 & 403 (A$_{1g}$) \\ \hline
\end{tabular}
\caption{Calculated phonon frequencies (in
cm$^{-1}$) and first-order Raman intensities of the 1T$^{\prime}$ phase, as compared
with experiments on both 1T$^{\prime}$ and 1H phases. The intensities
are normalized to the most intense peak. The incoming and outgoing
light in the Raman experiment are assumed to be unpolarized. See Ref. \onlinecite{Boukhicha}
for more details on the definition of the Raman intensities.\label{tab:Raman}}
\end{table}
\begin{figure*}
\includegraphics[scale=0.125,angle=0]{mode4_freq-eps-converted-to.pdf}\includegraphics[scale=0.125,angle=0]{mode5_freq-eps-converted-to.pdf}
\includegraphics[scale=0.125,angle=0]{mode6_freq-eps-converted-to.pdf}\includegraphics[scale=0.125,angle=0]{mode7_freq-eps-converted-to.pdf}
\includegraphics[scale=0.125,angle=0]{mode13_freq-eps-converted-to.pdf}\includegraphics[scale=0.125,angle=0]{mode14_freq-eps-converted-to.pdf}
\includegraphics[scale=0.125,angle=0]{mode16_freq-eps-converted-to.pdf}\includegraphics[scale=0.125,angle=0]{mode17_freq-eps-converted-to.pdf}
\caption{ Raman active modes of 1T$^{\prime}$ MoS$_2$
single-layer. The length of the arrows is proportional to the modulus of the phonon
eigenvector.}
\label{fig:Raman_Modes}
\end{figure*}
In 1H MoS$_2$, at high energy, only two Raman peaks are seen, namely the
E$_{\rm 2g}$ mode at $\approx 385 $ cm$^{-1}$ and the A$_{\rm 1g}$
mode at $\approx 403$ cm$^{-1}$ (see Ref. \onlinecite{Lee}).
The experimental Raman spectra of the 1T$^{\prime}$ phase
show two main differences with respect to the 1H polytype:
(i) the E$_{\rm 2g}$ peak disappears and (ii) five additional peaks
occur (see Table \ref{tab:Raman}).
Due to the reduced symmetry of the 1T$^{'}$ structure, we do indeed find several Raman
active peaks and a very rich spectrum. The E$_{2g}$ peak is
missing and the additional calculated Raman peaks can be associated
with the experimental ones with a high degree of accuracy.
\begin{figure}[t]
\includegraphics[scale=0.5,angle=0]{branchie_PBE-eps-converted-to.pdf}
\caption{Phonon dispersion of the 1T$^{'}$ structure along selected
directions}
\label{fig:1Tprimeph}
\end{figure}
In our calculation the peak with the largest intensity is the so-called
J$_2$ peak at $216$ cm$^{-1}$ ($226$ cm$^{-1}$ in experiment).
This mode tends to shorten the distance
between the two zig-zag chains and to recover the 1H structure (see
Fig. \ref{fig:Raman_Modes}). In experiments \cite{Sandoval} this mode
has a much larger linewidth than all the others. This partly explains why the
experimental height of the peak is substantially reduced with respect to the calculated Raman
intensity.
The so-called J$_1$
peak at $156$ cm$^{-1}$ in experiments is actually composed of two different phonon
modes separated by $4$ cm$^{-1}$. The one at $147$
cm$^{-1}$ shifts each stripe of Mo atoms inside the zig-zag chain
out-of-plane, in opposite directions.
The mode at $151$ cm$^{-1}$ is an in-plane shearing mode of one
stripe of atoms with respect to the other inside a chain. The
peaks at $223$ cm$^{-1}$ and at $286$ cm$^{-1}$
involve shifts of the S-atom layers with respect
to the Mo atoms. The J$_3$ mode at $333$ cm$^{-1}$,
in excellent agreement with experiments, tends to break
each zig-zag chain into two stripes with a slight out-of-plane component.
The mode at $350$ cm$^{-1}$ compares favourably with the 358 cm$^{-1}$
peak detected in experiments, although its calculated
intensity is too small.
Finally, the mode at $412$ cm$^{-1}$ is nothing
else than the usual A$_{1g}$ mode seen in the 1H polytype.
The agreement between the calculated zone-center energies
and the position of Raman peaks suggests that the devised structure
closely resembles the experimental one. Some disagreement still
exists between the calculated relative intensities and the
experimental ones. However, it should be noted that Raman spectra
on different samples\cite{Sandoval,EdaNanoL} show substantially different Raman
intensities, probably due either to the inhomogeneity of
the sample composed of several phases or to the presence of adsorbates
and vacancies.
Finally, in Fig. \ref{fig:1Tprimeph} we show the calculated phonon dispersion of the 1T$^{'}$ structure,
which is dynamically stable, suggesting that an energy barrier does indeed exist
between the 1H and the 1T$^{'}$ phases and that the 1T$^{'}$ is metastable.
\begin{figure}
\includegraphics[scale=0.5,angle=0]{Stability_plot-eps-converted-to.pdf}
\caption{Stability of different MoS$_2$ structures with respect to the
1H polytype and as a function of H coverage per Mo atom.}
\label{fig:H_Stability}
\end{figure}
\begin{figure}
\includegraphics[scale=0.5,angle=0]{dos_distorted_noundist-eps-converted-to.pdf}
\caption{ Electronic density of states of the 1T$^{\prime}$ phase at 0 and 0.5 H/Mo
coverage. The zero of the energy has been set to the bottom of
the conduction band.}
\label{fig:Dos_Distorted}
\end{figure}
\subsection{Catalytic adsorption\label{sec:cata}}
In order to justify the stabilization of the 1T$^{\prime}$ crystal
structure with respect to the 1H one detected in experiments, we study
adsorbate adsorption
on the 1H, 1T, 1T$^{'}$ and 1T$^{''}$ phases.
Single-layer MoS$_2$ samples at the end of the chemical exfoliation
process are fully covered with adsorbates, due to the hydration of
Li$_x$MoS$_2$ with water.
We focus on the simple case of
H adsorption. We consider $4\times4$ supercells of the
1T and 1H phases, as well as $2\times 4$ supercells of
the 1T$^{'}$ unit cell. We start by considering a single H ion at random positions
on top of the MoS$_{2}$ layer and then perform several structural optimizations.
We find that the
H ion always binds to an S-atom, similarly to what
happens in WS$_2$ \cite{ChhowallaNChem}.
Indeed, in the absence of adsorbates, a positive (negative)
charge resides on the Mo (S) atom \cite{AtacaH2O},
as can also be inferred from the relative electronegativity of S
and Mo.
We then add a second H atom and find that two H
atoms prefer to bind to different S atoms. Thus, we consider
as starting guesses of the structural minimization all the possible ways
of binding H to different S atoms that are compatible with the supercell size.
\begin{figure}
\includegraphics[scale=0.35,angle=0]{Dist_ZigZag_H_mesure-eps-converted-to.pdf}
\caption{ Most stable structure at 0.5 H coverage
(left). The S atoms are depicted in
yellow, while the hydrogens are the small cyan spheres. }
\label{fig:H_Distorted}
\end{figure}
By performing structural minimization, we find that at all H coverages
the 1H structure retains its trigonal prismatic
coordination. Similarly,
even when higher in energy, the H-covered 1T$^{\prime}$ structure never decays
into the 1H one, but preserves its zig-zag structure, although the
separation between the chains and the bonding inside the chain are affected by
H concentration. This confirms once more that an energy barrier does indeed
occur between the 1H and the 1T$^{\prime}$ structures.
Finally, we find that the H-covered 1T structure always decays
into the H-covered 1T$^{\prime}$ one, confirming the dynamical
instability of the 1T phase towards the 1T$^{\prime}$. At large enough
coverage,
this is also what happens to the 1T$^{''}$ structure, which also decays
into the 1T$^{\prime}$.
In Fig. \ref{fig:H_Stability} we show the lowest energy configuration
of all phases
with respect to the most stable configuration of the 1H structure at
a given H coverage.
We find that at H coverages above
$0.35$ H/Mo, the 1T$^{\prime}$ phase is more stable than the 1H
one. This suggests that in chemically exfoliated MoS$_2$ monolayers, the
samples are divided into H-rich regions, where the 1T$^{\prime}$
structure is stabilized, and H-poor regions, where the 1H
phase is stabilized.
By comparing in detail the 1T$^{\prime}$ structures at 0 and 0.5 H/Mo
coverage (see Fig. \ref{fig:H_Distorted}),
it is seen that upon H adsorption the separation between the
chains strongly increases, as the shortest distance between Mo atoms
on different chains is $3.91$~\AA\ ($3.71$~\AA) at coverage $0.5$ H/Mo
($0$ H/Mo). Furthermore, at coverage $0.5$ H/Mo the Mo atoms do not lie
on the same plane, as in the undistorted case, but are displaced above or below
by $\approx 0.07$~\AA.
The increased distance between
the chains implies a
larger band gap and more insulating character, as shown in Fig.
\ref{fig:Dos_Distorted}.
This agrees with experiments where it was found that the
zig-zag chain structure is indeed insulating \cite{EdaACSnano, EdaNanoL}.
\section{Conclusion}
Chemically and mechanically exfoliated MoS$_2$ single-layer samples have
substantially different properties. While mechanically exfoliated
single-layers are mono-phase (1H phase), the chemically exfoliated
samples show coexistence of three phases, 1H, 1T and 1T$^{'}$.
The fact that three phases experimentally coexist could lead to the
conclusion that the three pure structures have similar energies. However, as
we have shown in the present work, this is far from being the case,
as all octahedrally coordinated phases are much higher (more than
$0.54$ eV/Mo) in energy than the trigonal prismatic one (1H).
Moreover, the pure (i.e. without adsorbates or
vacancies) 1T phase is dynamically unstable and undergoes a phase
transition, again
with a considerable energy gain ($0.29$ eV/Mo),
towards the most stable 1T$^{'}$ structure composed of separated
zig-zag chains. This finding strongly questions the detection of the
pure 1T phase in experiments \cite{EdaACSnano,EdaNanoL,Suenaga}
and points to a key role of either adsorbates or vacancies in
stabilizing the 1T metallic structure.
We have calculated dynamical properties of the
lowest energy octahedral structure (1T$^{'}$) and found that it is
dynamically stable, suggesting that an energy barrier does indeed exist
between the 1H and the 1T$^{'}$, similar to what happens in WS$_2$
where nudged elastic band calculations \cite{Voiry} find a $0.92$ eV/Mo barrier between the
1T$^{'}$ and the 1H phases. By investigating catalytic adsorption on single-layer
MoS$_2$ we demonstrate the key role of adsorbates, and, more
generally, of negative charging of the MoS$_2$ layer, in stabilizing the
1T$^{'}$ phase. This phase becomes the most stable at concentrations above
$\approx 0.35$ H/Mo.
Finally, we provided a microscopical description of the 1T$^{'}$ Raman spectrum
attributing the J$_1$, J$_2$ and J$_3$ features to specific
lattice vibrations.
These features were experimentally detected in 1986
\cite{JoensenMRB1986},
but their interpretation had remained unknown.
Our work represents the first complete study of static and lattice dynamical
properties of chemically exfoliated samples. We believe
that our results will be of great interest for future studies of
chemically exfoliated two dimensional crystals.
\section{Acknowledgements}
The author acknowledges useful discussions with Manish Chhowalla and Goki
Eda. The author acknowledges support from the Graphene Flagship and
from the French state funds managed by
the ANR within the Investissements d'Avenir programme under reference
ANR-11-IDEX-0004-02, ANR-11-BS04-0019 and ANR-13-IS10- 0003-01.
Computer facilities were provided by CINES, CCRT and IDRIS
(project no. x2014091202).
\section{Introduction}
\label{intro}
Blazars are extremely bright and fast varying extragalactic sources observed throughout the
electromagnetic spectrum. Depending on the presence or absence of observed optical emission
lines they are classified as Flat Spectrum Radio Quasars (FSRQs) or BL Lac objects,
respectively (Urry \& Padovani 1995). In either case, it is believed that the blazar emission
is powered by relativistic jets which emerge from supermassive
black holes and beam their emission at our line of sight.
Blazar variability on timescales ranging from hours to decades has been commonly
observed at various wavelengths of the electromagnetic spectrum (see, e.g., B{\"o}ttcher 2007).
The recent discovery of blazar flaring on $\sim 5-10$ minute timescales
came as a great surprise. Such extreme flaring has now been observed from several
objects, both BL Lacs and FSRQs [Markarian 421 (Gaidos et al. 1996);
PKS 2155--304 (Aharonian et al. 2007; hereafter referred to
as PKS 2155), Markarian 501 (Albert et al. 2007; Mrk 501), and the
prototype object of the class BL Lac (Arlen et al. 2013); and the FSRQ
source PKS 1222+216 (Aleksi\'c et al. 2011; hereafter PKS 1222)].
Fast-evolving TeV flares are, therefore, a generic feature of blazar activity.
Ultra-fast flares pose several challenges to theoretical models for the blazar emission.
The observed timescale of variability is too short to originate
directly from the central engine. Modulations of the properties of the
plasma in the vicinity of the black hole are limited by causality arguments to be longer
that the light crossing time of the horizon $t_{\rm v}\simmore R_{\rm Sch}/c\simeq 10^4M_9$ sec,
where $R_{\rm Sch}=2GM_{\rm BH}/c^2$ and $M_{\rm BH}=10^9M_9M_{\odot}$.
The observed $\sim 10$-min-long flares are far more likely to originate
from compact emitting regions that somehow form in the jet
(Begelman et al. 2008; Giannios et al. 2009; Narayan \& Piran 2012).
Furthermore, for the TeV photons to escape the source in PKS 2155
and Mrk 501 the emitting blob must have a Doppler boosting
of $\delta\simmore 50-100$ towards the observer (Begelman et al.
2008; Finke et al. 2008; Mastichiadis \& Moraitis 2008). This
is much larger than the Lorentz factor $\Gamma_j\sim 10-20$
typically inferred for blazar jets from superluminal motions
(see, e.g., Savolainen et al. 2010). Moreover, the $\gamma$-ray flaring from PKS 1222
directly constrains the location of the emitting region.
For $\simmore 100$ GeV photons to escape the {\it observed} broad line
region of the FSRQ,
the emitting region must be located at scales $\simmore 0.5$ pc
(Tavecchio et al. 2011; Nalewajko et al. 2012). This constraint is practically independent
of the assumed geometry of the broad-line region (Tavecchio \&
Ghisellini 2012).
Given the large dissipation distance and the
large inferred energy density at the source, the
intense flaring from PKS 1222 implies an unrealistically large jet power
unless the emitting material is, again, strongly boosting its emission
with $\delta\simmore 50$ (Nalewajko et al. 2012)\footnote{For
PKS 1222 a lower $\delta\sim 20$
is allowed {\it if} the distribution of energetic particles is extremely focused
in the rest frame of the emitting material; see Nalewajko et al. (2012).}.
One final clue for the origin of the fast flaring is that
it is observed on top of an envelope of longer $\sim$day-long flares.
During the fast flaring the flux increases by a factor of $\sim$
a few with respect to that of the envelope (Aharonian et al. 2007; Albert
et al. 2007).
A large number of theoretical interpretations have been put forward
to explain the fast flares in {\it individual} sources.
Fast beams of particles at the light cylinder (Ghisellini \& Tavecchio
2008) or the interaction of the jet with a red giant star (Barkov et al. 2012)
are some of them. Rarefaction waves in magnetized shells (Lyutikov \&
Lister 2010), relativistic turbulence in the jet (Narayan \& Piran 2012) or
reconnection-driven minijets (Giannios et al. 2009)
may also be responsible for the fast flares.
I focus here on the latter possibility.
In the MHD jet paradigm (Blandford \& Payne 1982)
the jet is expected to emerge from the central engine
in the form of a Poynting-flux dominated flow.
If the magnetic field configuration is appropriate
after the jet acceleration and collimation phases,
magnetic reconnection can effectively dissipate
magnetic energy in the flow. As pointed out in
Giannios et al. (2009; 2010; Nalewajko et al. 2011)
magnetic reconnection dissipates energy in compact regions
characterized by fast motions within the jet, i.e., the radiating plasma
can move faster than the jet material on average. The extreme Doppler
boosting of the emitting region and its small size can naturally
account for the fast-evolving flares observed in blazars.
The reconnection minijet model is, however, based on a {\it steady} reconnection picture.
Both observations and recent advances in reconnection theory reveal that reconnection
is a highly time-dependent and dynamical process. Time-dependent aspects of reconnection
turn out to be critical in understanding the multiple observed timescales related to blazar flaring.
The goal of this work is twofold: (i) to relax the steady-state
assumptions of the reconnection model for blazar flaring
and (ii) confront the model against all the available observational constraints.
In Sec. 2 I summarize some of the recent observational and theoretical progress in understanding
time-dependent aspects of magnetic reconnection. In Sec. 3 this knowledge is
applied to blazar jets, predicting the relevant timescales and energetics
of flaring. The model is applied to specific sources in Sec.~4. I conclude in Sec.~5.
\section{Magnetic reconnection: a dynamical process}
In this Section I summarize recent progress in magnetic reconnection
theory that is relevant to the blazar jet application presented here.
Reconnection is the process of fast release of magnetic energy during
a topological rearrangement of magnetic field lines. It takes place
when magnetic field lines of opposite polarity are coming together
towards the reconnection plane ($x-y$ plane in fig.~1)
and annihilate, liberating magnetic energy that
heats the plasma and accelerates particles.
The large scale $l$ of the reconnection region is determined
by the distance over which the magnetic field strength drops
by a factor of $\sim$2 (along the $y$ direction).
The magnetic pressure gradient and also magnetic tension
along the $y$ direction result in the bulk acceleration of
the reconnected material to the
Alfv\'en speed $V_A$ of the upstream fluid.
The fast outflow in the downstream allows for fresh magnetized fluid
to enter the region and reconnect.
Observationally reconnection has been extensively studied
during solar flares and in Earth's magnetotail. Laboratory
experiments complement these studies in a controlled environment.
A richness of processes taking place on very different
timescales has been revealed by these works
(see, e.g., Aschwanden 2002). A characteristic long timescale of the process
is the global reconnection timescale $t_{\rm rec} \sim l/\epsilon
V_A$ over which the magnetic energy stored in a region of typical
scale\footnote{For simplicity and throughout this paper I will
assume that the reconnection region
has the same characteristic scale $l$ in all directions.} $l$
is released. Here $\epsilon$
parametrizes the reconnection speed, with $\epsilon \sim 0.1$
being a typical observationally inferred value.
{\it Besides the global reconnection timescale,
much shorter timescale variability and eruptive events are evident
both observationally and experimentally
highlighting the very dynamical nature of the process}
(see e.g., Lin et al. 2005; Park et al. 2006; Karlick\'y \& Kliem 2010).
For instance, a solar flare of typical duration $\sim$10 min can
show strong variability on $\sim$s timescale (see, e.g., Karlick\'y \& Kliem 2010).
For some time, reconnection theory was dominated by steady-state models
(Sweet 1958; Parker 1957; Petschek 1964). They provide intuition on how {\it average}
properties such as the reconnection speed, the outflow speed and
temperature of the reconnected fluid depend on parameters.
Steady state models assume a continuous inflow of plasma in the
reconnection region and a smooth outflow. As such they
cannot account for the erratic behavior observed at the current sheet.
A distinctly different picture has emerged from recent theoretical studies of
magnetic reconnection. When the resistivity $\eta$ is sufficiently low, e.g.,
the corresponding Lundquist number $S=V_Al/\eta\gg S_c=10^4$
(as expected in most Astrophysical environments) the reconnection current
sheet is formally predicted by the Sweet-Parker theory to be extremely thin, with thickness $\delta/l=S^{-1/2}\ll
S_c^{-1/2}\sim 0.01$. Very thin current sheets suffer from tearing instabilities
that lead to their fragmentation to a large number of plasmoids separated by smaller current sheets
(Loureiro et al. 2007; Lapenta 2008; Daughton et al. 2009; Loureiro et
al. 2009; Samtaney et al. 2009; Bhattacharjee et al. 2009;
Loureiro et al. 2012b). As a
result the reconnection process is fast and resistivity-independent.
Plasmoids grow fast through mergers and
leave the reconnection at a speed comparable to the Alfv\'en speed
of the upstream plasma (Uzdensky et al. 2010; Loureiro et al. 2012a). The typical plasmoid
forms away from the reconnection center (the so-called x-point) at $y\sim l$
and grows up to a characteristic size $R_{\rm p}\ll l$.
Plasmoids that form fairly close to the reconnection center
have, however, more time available to merge and grow on their way out of the
reconnection region. These
plasmoids undergo significant (exponential) growth to reach
macroscopic scale. They are referred to as ``monster'' plasmoids (Uzdensky et al. 2010).
Their size reaches a scale $R_p \simeq S_c^{-1/4}l\equiv f l\simeq 0.1
l$, i.e., approaching the global reconnection scale.
The growth of the size
of a plasmoid is exponential $\propto$e$^{t/t_A}$, where $t_A=l/V_A$.
The mass doubling of the plasmoid takes place on a timescale
$\sim t_A$ with the plasmoid emerging from the reconnection region
at the Alfv\'en speed $V_A$.
Resistive MHD simulations support this theoretical picture
(see, e.g., Bhattacharjee et al. 2009; Loureiro et al. 2012a).
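The plasmoid-hierarchy scalings quoted above can be verified with one-line arithmetic; the sketch below is illustrative only, its sole input being the critical Lundquist number $S_c=10^4$ from the text:

```python
# Sanity check of the current-sheet scalings quoted in the text.
S_c = 1e4  # critical Lundquist number above which the sheet fragments

# Sweet-Parker thickness of a marginally stable sheet: delta/l = S_c^(-1/2)
delta_over_l = S_c ** -0.5
print(delta_over_l)   # 0.01; thinner sheets tear into plasmoid chains

# "Monster" plasmoid size relative to the layer: f = S_c^(-1/4)
f = S_c ** -0.25
print(f)              # 0.1, i.e. R_p ~ 0.1 l
```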
The monster plasmoids consist of energetic particles that have undergone acceleration
in the secondary current sheets. Additional acceleration of particles takes place
during the merging process of the plasmoids (Zenitani \& Hoshino 2001;
2005; Drake et al. 2006; Sironi \& Spitkovsky 2011;
although the energy spectrum of the particles depends on the details and
is still a topic of investigation). As I demonstrate below,
the macroscopic scale of the monster plasmoids, their short growth
timescale, and fast motion and energetic particles that they contain
make them natural candidates for powering the ultra-fast blazar flares.
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[]{sketch1.eps}}
\caption[] {Sketch of the magnetic reconnection region with a large scale
$l$. Magnetic field lines of opposite polarity annihilate at the $x-y$
plane with speed $v_{\rm rec}$, i.e., the reconnection proceeds on
a global timescale $t_{\rm rec}=l/\epsilon c$. For the physical conditions
relevant to AGN jets, the current layer is expected to fragment
into a large number of sub-layers separated by plasmoids (Loureiro et al. 2007).
The plasmoids leave the reconnection region with the Alfv\'en speed $V_A$
powering the ``envelope'' emission.
Occasionally, plasmoids grow to become ``monster'' plasmoids (shaded blob)
with scale $fl\sim 0.1 l$ giving rise to powerful, fast-evolving flares
of duration $t_{\rm flare}\ll t_{\rm rec}$.
\label{fig1}}
\end{figure}
\section{Application to blazar flaring}
In the MHD-driving paradigm for jets (e.g., Blandford \& Payne 1982),
it is postulated that jets emerge from the central engine in the form
of Poynting-flux dominated flows.
Further out the flow converts part of its magnetic energy into
kinetic. At a (both theoretically and observationally very uncertain)
distance $R_{\rm diss}$, the blazar emission emerges.
If the magnetic field configuration is appropriate, magnetic reconnection can
effectively dissipate magnetic energy in the flow and power the blazar
emission. I assume that substantial magnetic energy release takes place in reconnection
regions of characteristic scale $l'$.\footnote{Hereafter primed quantities are measured in
the rest frame of the jet while double-primed quantities in the rest frame of
a plasmoid.} The magnetization $\sigma\equiv B'^2/4\pi\rho c^2$ of
the jet at the dissipation region is assumed to be $\sigma\simmore 1$. As a
result, the Alfv\'en four velocity and, correspondingly, that of the reconnection outflows is expected to
be moderately relativistic $u_{\rm out}\simeq u_A=\sqrt{\sigma}$ (Lyubarsky 2005).
The location of $R_{\rm diss}$ and the scale $l'$ are highly model dependent.
The trigger of magnetic dissipation may be instabilities that develop in the jet.
Even if the jet is launched by an axisymmetric magnetic field configuration, non-axisymmetric
instabilities can introduce smaller scale field
structures. Current-Driven Instabilities (CDIs) are likely to be the most relevant in strongly magnetized jets
(Eichler 1993; Begelman 1998; Giannios \& Spruit 2006). The observational indications that
the jet opening angle is related to the jet Lorentz factor through the relation
$\theta_{j}\Gamma_j\sim 0.2$ (Pushkarev et al. 2009) imply
that causal contact is established in the transverse direction of a
high-$\sigma$ jet. Under these conditions, CDIs can potentially grow as soon as the jet
develops a dominant toroidal field component. CDIs are non-axisymmetric
instabilities that reorganize the field configuration.
In the non-linear stages of their development small
scale field reversals may be induced in the jet allowing for energy
extraction through reconnection (Moll 2009). The non-linear stages of CDIs are,
however, poorly understood making it hard to predict
the distance at which they develop or the characteristic scales of the reconnection layers.
An interesting alternative is that the magnetic field is not axisymmetric at the launching region.
The jet contains, instead, small-scale field structures imprinted from the
central engine (Romanova \& Lovelace 1997, Levinson \& van Putten 1997,
Giannios 2010, McKinney \& Uzdensky 2012).
Such configurations can introduce field reversals in the jet of the order
of the size of the black hole horizon $R_{\rm sch}= 3\times
10^{14}M_9$ cm (in the lab frame).
The scale of the field reversal in the rest frame of the jet is $l'\simeq \Gamma_jR_{\rm Sch}
\simeq 6\times 10^{15}\Gamma_{\rm j,20}M_9$ cm. In such a configuration,
substantial dissipation takes place when
the reconnection timescale $t_{\rm rec}=l'/\epsilon c=\Gamma_j R_{\rm Sch}/\epsilon c$
becomes comparable to the expansion timescale of the jet $t_{\rm exp}=R_{\rm diss}/\Gamma_jc$,
i.e., at a distance
\begin{equation}
R_{\rm diss}\simeq \Gamma_j^2R_{\rm Sch}/\epsilon= 1.2 \times 10^{18}M_9
\Gamma_{j,20}^2\epsilon_{-1}^{-1} \rm \quad cm.
\end{equation}
In the following, and for more concrete estimates, I adopt $R_{\rm diss}$ given by eq.~(1) and
$l'=\Gamma_{\rm j}R_{\rm Sch}$ as motivated by the preceding discussion.
The model presented here can, however, be applied to any choice of the
parameters $R_{\rm diss}$ and $l'$.
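As a quick numerical cross-check (not part of the derivation), eq.~(1) and the rest-frame reversal scale $l'$ evaluate as follows for the fiducial parameters used in the text:

```python
# Evaluate eq. (1): R_diss = Gamma_j^2 * R_Sch / epsilon
M9, Gamma_j, eps = 1.0, 20.0, 0.1   # fiducial values from the text
R_Sch = 3e14 * M9                   # Schwarzschild-radius scale, cm

R_diss = Gamma_j**2 * R_Sch / eps
print(f"{R_diss:.2g} cm")           # 1.2e+18 cm, as quoted

# Field-reversal scale in the jet rest frame: l' = Gamma_j * R_Sch
l_prime = Gamma_j * R_Sch
print(f"{l_prime:.2g} cm")          # 6e+15 cm
```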
\begin{figure}
\resizebox{\hsize}{!}{\includegraphics[]{Sketch2.eps}}
\caption[] {Sketch of the envelope-flare structure of the emission
from a reconnection layer. The envelope duration corresponds to that
of the reconnection event: $t_{\rm env}= l'/\Gamma_j\epsilon c$.
Monster plasmoids power fast flares which show exponential rise
and last for $t_{\rm flare}=0.1l'/\delta_{\rm p} c$.
For a $\sim$1-day envelope of blazar flaring, the model predicts that
monster plasmoids result in $\sim$10-minute flares.
\label{fig2}}
\end{figure}
\subsection{Fast flares from monster plasmoids}
For the physical conditions prevailing at the reconnection region relevant
to AGN jets, the current sheet is expected to fragment into a large number
of plasmoids while the reconnection process
proceeds fast (see Appendix A for details).
Some plasmoids regularly grow into ``monster'' plasmoids, i.e.,
large magnetized blobs that contain energetic particles
freshly injected by the reconnection process (Uzdensky et al. 2010). The relativistic
motion of the plasmoids in the rest frame of the jet results in additional
beaming of their emission (i.e., beyond that induced by the jet motion).
When the layer's orientation is such that plasmoids beam their emission towards the
observer, powerful and fast evolving flares emerge.
{\it Here we focus on the characteristic observed timescales and luminosities
resulting from plasmoids that form in the reconnection region.} To this end
I assume that the dissipated energy is efficiently converted into radiation.
In practice electrons are likely to be responsible for the emission
(e.g. see Nalewajko et al. 2012)
so I, in effect, assume that a significant amount of the dissipated energy
is deposited to electrons which undergo fast radiative cooling.
The latter assumption will be checked a posteriori.
The former may be justified by the efficient electron
acceleration by the electric field present at the
current sheets but remains an assertion.
This study can be trivially generalized by including the
efficiency factor with which dissipated energy converts into radiation.
Consider a spherical blob (or plasmoid) emerging from the reconnection layer
moving with the Alfv\'en speed of the reconnection upstream ($V_{\rm
A}=\sqrt {\sigma/(1+\sigma)}c$), i.e, with a corresponding
bulk Lorentz factor $\gamma_p=\sqrt{1+\sigma}\sim$ a few
(measured in the jet rest frame) and of size $R_p''=fl'$.\footnote
{I treat the plasmoid as a sphere in its own rest frame (and not in
the jet rest frame). It is unclear which approximation describes better
the reconnection plasmoids for relativistic reconnection.
In the limit of modest relativistic motions of interest here
($\gamma_{\rm p} \sim$ a few), this distinction does not affect the results presented
here by much.}
The growth of the plasmoid in the reconnection layer is exponential
with time. The observed rise time
for the plasmoid emission is $t_{\rm rise,obs}\simeq R''_{\rm
p}/\delta_p c$, where $\delta_{\rm p}$ is the Doppler boost of the plasmoid radiation towards the observer.
The plasmoid subsequently cools down and expands on a similar observed timescale $t_{\rm decline,obs}\sim
R''_{\rm p}/c\delta_{\rm p}$. We, therefore, define as the
variability timescale $t_v\equiv t_{\rm rise, obs}=fl'/c\delta_{\rm
p}$. For field reversals imprinted from the central engine $l'\simeq
\Gamma_{\rm j} R_{\rm Sch}$ resulting in
\begin{equation}
t_{\rm v}=\frac{f\Gamma_jR_{\rm Sch}}{\delta_{\rm p}c}=400
f_{-1}\Gamma_{\rm j, 20}M_9\delta_{p, 50}^{-1}\rm \quad s,
\end{equation}
where $\delta_{\rm p}=50\delta_{\rm p,50}$, $f=0.1f_{-1}$, and $\Gamma_{\rm
j}=20\Gamma_{\rm j,20}$.
Flaring on a several-minute timescale is therefore expected in this picture.
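For reference, the normalization of eq.~(2) can be reproduced directly (illustrative arithmetic only):

```python
# Evaluate eq. (2): t_v = f * Gamma_j * R_Sch / (delta_p * c)
c = 3e10                              # speed of light, cm/s
f, Gamma_j, M9, delta_p = 0.1, 20.0, 1.0, 50.0
R_Sch = 3e14 * M9                     # cm

t_v = f * Gamma_j * R_Sch / (delta_p * c)
print(t_v)                            # 400.0 s: several-minute flares
```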
Consider a jet emerging from a supermassive black hole with
(isotropic equivalent) power $L_{\rm iso}$, opening angle $\theta_j$ and Lorentz factor
$\Gamma_j$. We also assume that $\theta_j \Gamma_j=0.2$ as indicated by observations (Pushkarev et al. 2009).
The typical bulk Lorentz factor of gamma-ray active blazars is $\Gamma_j\sim 10-20$
(Savolainen et al. 2010; Piner et al. 2012). The energy density at the
dissipation, or ``blazar'', zone is
\begin{equation}
U'_{\rm j}=\frac{L_{\rm iso}}{4\pi (\theta_{\rm j} R_{\rm
diss})^2\delta_{\rm j}^4c}=0.12\frac{L_{\rm
iso,48}\epsilon_{-1}^{2}}{M_9^2\Gamma_{j,20}^2\delta_{\rm j,20}^4}
\quad \rm erg/cm^{3},
\end{equation}
where $L_{\rm iso}$ is normalized to $10^{48}$ erg/s and the
dissipation distance is given by eq.~(1).
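A direct evaluation of eq.~(3) with the fiducial numbers (a sketch, using $\theta_j=0.2/\Gamma_j$ and $R_{\rm diss}$ from eq.~1) confirms the quoted normalization:

```python
import math

# Evaluate eq. (3): U'_j = L_iso / (4 pi (theta_j R_diss)^2 delta_j^4 c)
c = 3e10
L_iso = 1e48                    # erg/s
Gamma_j = delta_j = 20.0
theta_j = 0.2 / Gamma_j         # from theta_j * Gamma_j = 0.2
R_diss = 1.2e18                 # cm, eq. (1)

U_j = L_iso / (4 * math.pi * (theta_j * R_diss)**2 * delta_j**4 * c)
print(round(U_j, 2), "erg/cm^3")   # ~0.12 erg/cm^3, as quoted
```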
Pressure balance across
the reconnection layer requires the energy density of the plasmoid to be similar to that
of the jet $U''_p\sim U'_j$.\footnote{The exact relation of $U''_p$ and $U'_j$ depends
on the magnetic and heat content of the upstream and the plasmoid.
For instance balancing the pressure of cold, strongly magnetized
upstream $p_j=B'^2/8\pi=U'_j/2$ with that of an assumed
relativistically hot plasmoid $p_p=U''_p/4$, we find $U''_{p}=2U'_j$.
The expression is slightly different if the magnetic field contributes
to the pressure of the plasmoid. The assumption $U''_p\sim U'_j$ is expected
to hold within a factor of $\sim 2$ independently of these details.}
Assuming efficient conversion of dissipated energy into radiation
(assumption to be verified in Sect. 4.3),
the rest-frame luminosity of the plasmoid is thus
$L''=U_{\rm p}''4\pi R_{\rm p}''^2c$. This luminosity can be converted to the
observed luminosity $L_{\rm p,obs}=\delta_p^4L''$.
Because of the $R_{\rm p}''^2$ dependence of the luminosity it is clear that the largest
``monster'' plasmoids (with $R_p''=fl'$, $f\simeq 0.1$) power the brightest flares.
Putting everything together, the observed luminosity of the plasmoid is
\begin{equation}
L_{\rm p,obs}=10^{47}\frac{\epsilon_{-1}^2f_{-1}^2\delta_{p,50}^4L_{\rm iso,48}}{\delta_{j,20}^4}\quad \rm erg/s.
\label{lp}
\end{equation}
The Doppler factor of the plasmoid $\delta_{\rm p}$ depends on several parameters. It is
related to $\Gamma_j$, $\gamma_p$, the angle of the plasmoid with
respect to the jet motion and the observer's angle of sight. For perfectly aligned
jet, plasmoid and observer $\delta_{\rm p} \simeq 4 \Gamma_{\rm
j}\gamma_{\rm p}$. In, perhaps, more common situations where
the reconnection layer is at a large $\theta\sim \pi/2$ angle with respect to the jet propagation
(as seen in the jet rest) and fairly aligned with the observer (giving
powerful flares) $\delta_{\rm p} \sim
\Gamma_{\rm j}\gamma_{\rm p}$. For the demonstrative arithmetic examples used here we adopt
$\delta_{\rm p}=1.25 \Gamma_{\rm j}\gamma_{\rm p}=50\Gamma_{\rm j,20}\gamma_{\rm p,2}$.
One can see from eq.~(2) that powerful flares on a timescale of $\sim$10 min are possible even with
very modest relativistic motions within the jet\footnote {Narayan \& Piran (2012)
come to a different conclusion concerning the fastest flares expected by magnetic reconnection.
Their analysis assumes {\it steady} reconnection and is therefore applicable to the global reconnection
timescale (topic of next Section) but is not constraining the fast
flares powered by plasmoids.} $\gamma_{\rm p}\sim 2$.
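Putting the numbers into eq.~(4) gives the quoted luminosity (illustrative arithmetic with the fiducial values above):

```python
import math

# Evaluate eq. (4): L_p,obs = delta_p^4 * U''_p * 4 pi R_p''^2 * c
c = 3e10
U_p = 0.12                      # erg/cm^3, ~U'_j from eq. (3)
f, l_prime, delta_p = 0.1, 6e15, 50.0
R_p = f * l_prime               # monster-plasmoid size: 6e14 cm

L_p_obs = delta_p**4 * U_p * 4 * math.pi * R_p**2 * c
print(f"{L_p_obs:.1g} erg/s")   # ~1e+47 erg/s, as in eq. (4)
```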
\subsubsection{Ejection of multiple monster plasmoids}
During a reconnection event multiple monster plasmoids are expected to form.
The seed of a monster plasmoid forms fairly close to the reconnection
center region $y\ll l'$ and spends sufficient time in the reconnection
region to grow to a large size. 2D simulations (Loureiro et al. 2012a)
indicate that monster plasmoids form every few Alfv\'en times $t_A$,
i.e., at a rate of $\sim 0.3t_A^{-1}$.
It appears likely that 2D simulations underestimate the rate of formation
of monster plasmoids. The actual rate may be higher when the 3D structure
of the layer is considered. The x-point is in reality an elongated structure
(along the x-axis of fig.~1) providing a larger physical region where
the seeds of monster plasmoids form. Monster
plasmoids potentially form at a rate up to $l'/R'_{\rm p}\sim 10$ times
higher than that found in 2D studies. Clearly, this question can only be
answered by high-resolution, 3D resistive MHD simulations.
If monster plasmoids emerge at a rate $\sim (0.3-3)t_A^{-1}$,
some $(3-30)/\epsilon_{-1}$ plasmoids are expected
from a single reconnection layer powering multiple flares.
The observed properties of the monster plasmoids
are determined by the basic properties of the reconnection region that
generates them. To the extent that all monster plasmoids reach a similar size of $\sim 0.1 l'$,
the model predicts a similar duration and brightness for this sequence
of fast flares. Smaller plasmoids $f<0.1$ can power even faster flares
since $t_{\rm v}\propto f$ albeit of lower peak luminosity ($L_{\rm p}\propto f^2$).
A sketch of such a pattern is shown in Fig.~2.
\subsection{The ``envelope emission'' from the reconnection region}
The bulk motion of a monster plasmoid is expected to be similar to the speed of other
structures (e.g. smaller plasmoids) leaving the reconnection region.
When the plasmoid emission is beamed towards the observer (powering a fast flare), the
overall emission from the current layer is also beamed by a similar factor.
The emission from the layer forms a slower-evolving envelope.
In the following I calculate the timescale and luminosity of the
emission from the reconnection layer.
At the dissipation distance $R_{\rm diss}$, the reconnection proceeds
within the expansion time of the jet, which is observed to last for
$t_{\rm exp,obs}\simeq R_{\rm diss}/\Gamma_j^2c$. Therefore, $t_{\rm exp,obs}$
corresponds to the observed duration of the envelope emission (using
also eq.~(1)):
\begin{equation}
t_{\rm env}=\frac{R_{\rm diss}}{\Gamma_j^2c}=
10^5\frac{M_9}{\epsilon_{-1}} \quad \rm s.
\end{equation}
The duration of the envelope emission is of order a day.
Such a timescale is characteristic of blazar flares (see next Section).
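The envelope timescale of eq.~(5) follows from a one-line estimate (fiducial values as before):

```python
# Evaluate eq. (5): t_env = R_diss / (Gamma_j^2 c) = R_Sch / (eps * c)
c = 3e10
M9, eps = 1.0, 0.1
R_Sch = 3e14 * M9

t_env = R_Sch / (eps * c)
print(t_env, "s")        # 1e5 s, i.e. roughly a day
```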
The (lab frame) energy available to power the envelope emission is $E_{\rm env}=U_{\rm j}2l'^3/\Gamma_{\rm j}$,
where $U_j=\Gamma_j^2U'_j$ is the energy density of the jet and $2l'^3/\Gamma_{\rm j}$
accounts for the (lab frame) volume of the reconnection region that powers each minijet (see fig.~1).
The emitted luminosity of the reconnection region is $E_{\rm env}/t_{\rm env}$. It can be converted
into {\it observed} luminosity by accounting for the beaming factor of the
emission $\sim \delta_p^2$:
\begin{equation}
L_{\rm env,obs}\simeq 2\Gamma_{\rm j}^2\delta_{\rm p}^2l'^2U_{\rm
j}'\epsilon c=3\times 10^{46}\frac{\Gamma_{\rm j,20}^2\delta_{\rm p,50}^2
\epsilon^3_{-1}L_{\rm iso, 48}}{\delta_{j,20}^4} \quad \rm erg/s. \label{lenv}
\end{equation}
The envelope emission is quite bright.
Dividing eqs.~(\ref{lp}) and (\ref{lenv}), one arrives at a fairly simple expression
for the ratio of the plasmoid to envelope luminosities $L_p/L_{\rm
env}\sim 3f_{-1}^2\delta_{\rm p,50}^2/\Gamma_{\rm j,20}^2\epsilon_{-1}$.
The luminosity contrast depends only on the Lorentz factor of the minijet in the rest frame of the
jet $\gamma_{\rm p}\simeq \delta_{\rm p}/\Gamma_{\rm j}$, the size
of the plasmoid parametrized by $f$,
and the reconnection rate $\epsilon$. As we discuss in the next
Sections, the luminosity ratio is observed to be of order unity constraining
$\delta_{\rm p,50}/\Gamma_{\rm j,20} \sim 1$ for $\epsilon\sim f\sim
0.1$. The ratio $\delta_{\rm p,50}/\Gamma_{\rm j,20}$ is determined by
the reconnection-induced bulk motions in the jet and points to
$\gamma_{\rm p}\sim 2$ or, equivalently, moderately magnetized jet
with $\sigma\sim $ a few.
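Numerically, eq.~(6) and the plasmoid-to-envelope contrast come out as follows (a sketch; $L_{\rm p,obs}\simeq 10^{47}$ erg/s is taken from eq.~4):

```python
# Evaluate eq. (6): L_env,obs = 2 Gamma_j^2 delta_p^2 l'^2 U'_j eps c
c = 3e10
Gamma_j, delta_p, eps = 20.0, 50.0, 0.1
l_prime, U_j = 6e15, 0.12       # cm and erg/cm^3 (eqs. 1 and 3)

L_env = 2 * Gamma_j**2 * delta_p**2 * l_prime**2 * U_j * eps * c
print(f"{L_env:.1g} erg/s")     # ~3e+46 erg/s

# Contrast with the plasmoid luminosity of eq. (4):
ratio = 1e47 / L_env
print(round(ratio, 1))          # a few: order unity, as stated
```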
So far I have considered the emission from a {\it single} reconnection
region which is beaming its emission towards the observer. When the
reconnection scale is smaller than the cross section of the jet ($l'<R\theta_j$),
there may be as many as $\sim (R\theta_j/l')^3$ reconnection regions in the emitting zone. Even after correcting for
beaming of the emission from current sheets, up to $\sim (R\theta_j/l')^3/\gamma_{\rm p}^2$
reconnection regions may beam their emission towards the observer.
The overall amplitude of the envelope emission and the number of fast
flares are therefore enhanced by this factor\footnote
{The opposite limit where $l'>R\theta_j$ is not physical since the reconnection region should fit
within the jet cross section: the condition $l'\simless R\theta_j$ must be satisfied.}.
In this case, the contrast $L_{\rm p,obs}/L_{\rm env,obs}$ drops since more than one reconnection layer contributes
to the envelope emission.
\section{Application to observations}
In this section we examine how the wealth of information from fast TeV
flaring among blazars can be used to extract information
on the physical conditions of the emitting region and
constrain the reconnection model. We first discuss observations
of several BL Lac objects and then FSRQ PKS 1222.
\subsection{Flaring BL Lacs}
Fast TeV flares have been observed in several BL Lac objects including
PKS 2155, Mrk 501, Mrk 421 and BL Lac itself. The variability timescale ranges from
a few to $\sim$10 minutes. Most detailed are the observations of
PKS 2155 which reveal an envelope emission of duration of hours that contains
several $\sim5$-minute flares of comparable luminosity (Aharonian et
al. 2007). Mrk 501 also shows flaring with a similar flare-to-envelope ratio. BL Lac, the prototype
source, has also been observed to flare on $\sim$10-minute timescales, suggesting that
minute-timescale flaring is a generic property of blazars.
Here we apply the model to the most constraining PKS 2155 observations (see Aharonian et al. 2007).
The observed luminosity of the envelope and of the fast flares is
$L_{\rm env}\sim L_{\rm ff}\sim 3 \times 10^{46}$ erg/s. The fast flares last for
$t_v\sim 5$ min. When the observation started the envelope emission
was already on and lasted for $\sim$100 min. Narayan \& Piran (2012)
use a Bayesian analysis to estimate a mean duration of the envelope
emission of $t_{\rm env}\sim 2\times 10^4$ s.
The isotropic equivalent jet power is uncertain but $L_{\rm iso}\sim
10^{48}$ erg/s appears a reasonable estimate given the {\it observed} luminosity
of the source of up to $10^{47}$ erg/s and assuming an overall
radiative efficiency of $10\%$.
A high Doppler factor $\delta_{\rm p}\simmore 50$ of the emitting
material is required for the escape of the TeV radiation from
the soft radiation field of the jet without extensive pair creation (Begelman et al. 2008).
Observations do not directly constrain where the emission takes place in this source.
The fact that $L_{\rm env}\sim L_{\rm ff}$ means that $\gamma_{\rm p}
\sim 2$. Setting $\delta_{\rm p}=50$, $M=10^9M_{\odot}$, and the
inferred $L_{\rm iso}$ in eqs.~(2), (4), (5), and (6), we derive
all fast flaring and envelope timescales and luminosities in good agreement
with the observed values. Moreover, PKS 2155 showed
(i) several fast flaring events of (ii) similar characteristic timescale and luminosity.
Multiple flares have a natural explanation within the reconnection model.
They can be understood to come from different monster plasmoids that emerge
from the same reconnection region.
\subsection{TeV flares from PKS 1222}
The model can also be applied to the FSRQ PKS 1222. In this source
the dissipation distance is robustly constrained to be $R_{\rm diss}\simmore 0.5$ pc
(Tavecchio et al. 2011; Nalewajko et al. 2012).
If the emitting region is characterized by $\delta\sim \Gamma_j\sim 20$ the
required luminosity of the jet is unrealistically high (Nalewajko et al. 2012).
A moderately higher $\delta\sim 50$ (in agreement with that inferred for PKS 2155)
is, however, sufficient to relax the energetic requirements of the
jet and is adopted here. Around the epoch of the TeV flare there is an envelope of high
$\gamma$-ray activity. Fermi-LAT detected a flare of
$L_{\rm env}\sim 10^{48}$ erg/s and duration of roughly $t_{\rm
env}\sim 10^5$ sec (Tanaka et al. 2011; note, however, that {\it Fermi}
observations are not strictly simultaneous with the {\it MAGIC} ones).
The fast flares are observed with {\it MAGIC} in the sub-TeV energy range:
$L_{\rm ff}\sim 10^{47}$ erg/s (with total luminosity possibly higher by
$\sim$ a few to account for bolometric corrections in view of the steep
observed TeV spectrum). The flux evolved on a timescale of
$t_v\sim 7$ min. The isotropic equivalent jet power is also
uncertain but $L_{\rm iso}\sim 10^{49}$ erg/s appears a reasonable
estimate given that the observed disk emission is several $10^{46}$ erg/s
and that the beamed observed radiative luminosity of the jet reaches $10^{48}$ erg/s.
Setting $\delta_{\rm p}=50$, $L_{\rm iso}\sim 10^{49}$ erg/s, one can
verify that the observed duration of the envelope and of the fast flare are
reproduced by the model. The same holds true for the flare luminosity.
The envelope emission is observed to be more luminous than the fast flare
by a factor of several (though weaker, the fast flare
is clearly observed with MAGIC because of its harder emission).
Given the adopted parameters, eq.~(4) implies that a single reconnection
region has envelope luminosity $L_{\rm env}\sim L_{\rm ff}$ while
the envelope was a factor of $\sim$several brighter in this source.
Possibly several reconnection layers contribute to the envelope emission
simultaneously if $R\theta_j/l'\sim$ several, enhancing the ratio of the
luminosity of the envelope emission with respect to that of fast flares.
Summarizing, all these blazar flares can be accounted for with small changes of the
physical parameters of the model. Typically I infer $\Gamma_{\rm
j}\sim 20$, $\gamma_{\rm p}\sim 2$ and the size of the reconnection
region $l'\simless 10^{16}$ cm. The blazar zone is located at
$R_{\rm diss}\sim (0.3-1)$ pc.
\subsection{Radiative mechanisms and particle cooling}
The energetic requirements for the fast flaring
can become more stringent if the radiating particles
(assumed to be electrons) are not in the fast cooling regime.
Here we assume that the TeV emission is the result of either synchrotron
self-Compton (SSC) or external inverse Compton (EIC) emission and investigate
the electron energetics required to produce the observed $\sim$100 GeV to multi-TeV emission
and whether the electrons are likely to radiate efficiently for the model parameters adopted
in the previous Section. At the end of the Section, I discuss the
expectation of X-ray flares as result of synchrotron radiation from
the same population of electrons.
To assess whether the emitting particles ``cool fast'',
the expansion timescale of the plasmoid $t''_{\rm exp}=R''_{\rm p}/c=\delta_{\rm p} t_{\rm v}
=3\times 10^{4}\delta_{50}t_{\rm v,10}$ s
is to be compared with the radiative cooling timescale.
In the case of SSC emission from $e^-$ (or pairs)
with random Lorentz factor $\gamma_e$ in magnetic field $B''$,
the characteristic energy is $e_{\rm SSC}\simeq 10^{-8}
\delta_{\rm p} \gamma_e^4B''$ eV. Depending on details of the
reconnection configuration (such as guide field strength), the
plasmoid can be magnetically or heat dominated\footnote{The fact that
the jet is Poynting-flux dominated in this model does not necessarily mean that
the emitting region is also strongly magnetized. On the contrary, efficient
reconnection may result in weakly-magnetized (heat-dominated) plasmoids.}.
For simplicity, I parametrize the magnetic field strength in the plasmoid
as $B''=(\epsilon_B4\pi U_p'')^{1/2}=0.7\epsilon_{\rm
B,1/3}^{1/2}L_{\rm iso, 48}^{1/2}
\epsilon_{-1}M_9^{-1}\Gamma_{\rm j,20}^{-1}\delta_{\rm j,20}^{-2}$
Gauss. Setting $e_{\rm SSC}=100e_{\rm 100GeV}$ GeV, one finds for
the required electron (random) Lorentz factor $\gamma_e=2\times 10^4e_{\rm 100GeV}^{1/4}
M_9^{1/4}\Gamma_{\rm j,20}^{1/4}\delta_{j,20}^{1/2}\epsilon_{-1}^{-1/4}\epsilon_{\rm
B,1/3}^{-1/8}L^{-1/8}_{iso,48}\delta_{\rm p,50}^{-1/4}$. The SSC cooling
timescale is $t''_{\rm SSC}=5\times 10^{8}/[(1+y)\gamma_eB''^2]$ s or
\begin{equation}
t''_{\rm SSC}\simeq 1.5\times
10^4\frac{3}{1+y}\frac{\delta_{\rm p,50}^{1/4}M_9^{7/4}\Gamma_{\rm
j,20}^{7/4}\delta_{\rm j,20}^{7/2}}
{e_{\rm 100GeV}^{1/4}\epsilon_{-1}^{7/4}\epsilon_{\rm
B,1/3}^{7/8}L_{\rm iso,48}^{7/8}}\quad \rm s,
\end{equation}
where $y\sim$ a few accounts for the ratio of the SSC to synchrotron power.
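These SSC estimates can be reproduced numerically (illustrative; $B''\simeq 0.7$ G and $y=2$ as adopted in the text):

```python
# Required electron Lorentz factor for 100 GeV SSC photons:
# e_SSC ~ 1e-8 * delta_p * gamma_e^4 * B''  [eV]
delta_p, B = 50.0, 0.7          # Doppler factor; plasmoid field in Gauss
e_SSC = 100e9                   # 100 GeV, in eV

gamma_e = (e_SSC / (1e-8 * delta_p * B)) ** 0.25
print(f"{gamma_e:.1g}")         # ~2e+04, as quoted

# SSC cooling time: t''_SSC = 5e8 / ((1+y) gamma_e B''^2) s, with y ~ 2
y = 2.0
t_SSC = 5e8 / ((1 + y) * gamma_e * B**2)
print(f"{t_SSC:.1g} s")         # ~1e+04 s, modestly shorter than t''_exp
```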
If a substantial external radiation field of energy density $U_{\rm rad}$
is present, it can contribute to the particle cooling through EIC. Assuming an isotropic
radiation field of characteristic energy $e_{\rm seed}$, the energy of the up-scattered photon
is $e_{\rm EIC}\simeq \Gamma_{\rm p}\delta_{\rm p}\gamma_e^2e_{\rm seed}$ (for scattering in the Thomson limit).
Solving for the electron Lorentz factor: $\gamma_e\simeq 7\times 10^3(e_{\rm EIC, 100GeV}/
e_{\rm seed,1})^{1/2}\delta_{50}^{-1}$, where $e_{\rm seed}=1e_{\rm
seed,1}$ eV and $\Gamma_{\rm p}\simeq \delta_{\rm p}$.
The energy density of radiation in the rest frame of the blob is $U''_{\rm rad}\simeq
\Gamma_{\rm p}^2U_{\rm rad}$. The EIC cooling timescale for such an electron is
$t''_{\rm EIC}=2\times 10^7/(\gamma_eU''_{\rm rad})$ s or
\begin{equation}
t''_{\rm EIC}=\frac{1.4\times 10^4}{U_{\rm seed,-4}\delta_{\rm p,50}}\Big(\frac{e_{\rm seed,1}}{e_{\rm 100GeV}}\Big)^{1/2}\quad \rm s,
\end{equation}
for $U_{\rm seed}=10^{-4}U_{\rm seed,-4}$ erg cm$^{-3}$.
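The EIC numbers follow in the same way (illustrative; seed photons of 1 eV and $U_{\rm seed}=10^{-4}$ erg cm$^{-3}$ as normalized above):

```python
# Required Lorentz factor for 100 GeV EIC photons (Thomson limit):
# e_EIC ~ Gamma_p * delta_p * gamma_e^2 * e_seed, with Gamma_p ~ delta_p
delta_p = 50.0
e_EIC, e_seed = 100e9, 1.0      # eV

gamma_e = (e_EIC / (delta_p**2 * e_seed)) ** 0.5
print(f"{gamma_e:.1g}")         # ~6e+03, close to the quoted 7e3

# EIC cooling time: t''_EIC = 2e7 / (gamma_e U''_rad), U''_rad = Gamma_p^2 U_rad
U_rad = 1e-4                    # erg/cm^3
t_EIC = 2e7 / (gamma_e * delta_p**2 * U_rad)
print(f"{t_EIC:.1g} s")         # ~1e+04 s
```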
In the case of PKS 2155 no powerful ambient isotropic radiation field is evident. The plasmoid may
well propagate into a dense radiation field emerging from the large scale jet or
other reconnection regions. This depends however on uncertain details of the overall
geometry (see Nalewajko et al. 2011 for various possible geometrical
configurations). On the other hand SSC emission
has to be present. Setting the model parameters to those relevant for
PKS 2155 (see previous Section),
and $\epsilon_B=1/3$, $y=2$, I arrive at $t_{\rm SSC}\simless
10^4$ s (see eq.~7).
This timescale is, by a modest factor, shorter than the expansion time of the blob
$t''_{\rm exp}=1.5\times 10^4\delta_{50}t_{v,5}$ s indicating that efficient
$\sim$TeV emission is plausible. The required random Lorentz factor of
the emitting electrons is $\gamma_e\simmore 10^4$.
One can derive a similar estimate for the SSC cooling timescale for the
parameters relevant to PKS 1222 flaring, i.e. $t''_{\rm SSC}\simless t''_{\rm exp}\sim 2\times 10^4$ s.
Another obvious possibility for emission from this source is EIC of
photons from the infrared torus.
With $U_{IR}\sim 10^{-4}$ erg cm$^{-3}$ and characteristic energy $e_{\rm IR}\sim$ 0.3 eV,
the EIC takes place in the Thomson regime (Nalewajko et al. 2012).
The typical electron emitting $\sim 100$ GeV has
$\gamma_e\sim 10^4$ with a cooling time scale of the particles of
$t''_{\rm EIC}\sim 8\times 10^3$ s $\sim t''_{\rm SSC}\simless t''_{\rm exp}$.
In summary, the $\sim$TeV-emitting electrons are characterized by a random $\gamma_e\sim 10^4$
and have a cooling timescale somewhat shorter than the expansion timescale of the blob
allowing for efficient TeV emission. The detailed spectrum necessarily depends on the
details of the particle distribution that are model
dependent. However, an equipartition argument (e.g. sharing of the
dissipated magnetic energy between electrons and protons) would
give $\gamma_e \sim (m_p/2m_e)\sigma \simeq 3\times 10^3\sigma_3$ for the
electrons in the plasmoid, where $\sigma=3\sigma_3$ is the upstream
magnetization. Therefore, a modest particle acceleration
above equipartition is sufficient to explain the observed emission.
For the typical conditions inferred in the emitting region
($\gamma_e\simmore 10^4$, $B''\sim 1$ Gauss, $\delta_p\sim$ 50),
the synchrotron component naturally peaks in the soft X-ray band.
If SSC is the mechanism for the very high energy emission, the
synchrotron component may be quite powerful $L_{\rm syn}=L_{\gamma}/y$.
Fast X-ray flares, simultaneous to the very-high energy ones,
are therefore quite likely in this model (see the Discussion section for
observational evidence for such flaring).
\section{Discussion/Conclusions}
\label{sec-conclusions}
The similarities in the variability patterns seen in
several blazars (PKS 1222, PKS 2155, BL Lac,
Mrk 501) are striking: fast TeV flares on $\sim$minutes timescale
that appear on top of an
envelope of enhanced gamma-ray activity that lasts
for hours or days. The similarities strongly indicate
similar physical conditions at the emitting region:
large Doppler factor $\delta_{\rm p}\sim 50$
and a dissipation zone located at $\sim$pc distance from the black hole.
\subsection{Models for fast blazar flaring}
Several suggestions have been put forward for the ultrafast blazar
flaring. Fast electron beams (with bulk $\gamma_e\sim 10^6$)
may develop along magnetic field lines close to the light cylinder
(i.e. within several $R_{\rm Sch}$, see, Ghisellini \& Tavecchio 2008).
The TeV flare, in this scenario, is the result of the beam inverse-Compton
scattering external radiation fields. This model fails to account
for the large emission distance required by PKS 1222 (Aleksi\'c et
al. 2011). For a hadronic model applicable to the fast flares of
PKS 1222 see Dermer et al. (2012). In this model $\simmore 100$ TeV
neutrinos are predicted. Alternatively, a red giant may cross the jet (Barkov et al. 2012).
The ram pressure of the jet strips the envelope of the star which
consequently fragments. Emission from shocked stellar and/or jet plasma may
power blazar flares. While stellar-jet encounters are expected,
any interaction region will necessarily move slower than the
jet, $\Gamma_{\rm int}\simless \Gamma_j$. The required Doppler
boost of the emitting region towards the observer $\delta\sim 50$
may therefore be hard to explain in this picture. Alternatively, a recollimation
shock on pc scales can help to focus the jet energy in a small region
inducing short-timescale variability. However, non-axisymmetries
in the jet-external medium interaction will likely make the focusing
insufficient to explain the most extreme flares
(Bromberg \& Levinson 2009). If the jet activity is sufficiently erratic,
the jet can be envisioned as magnetized shells separated by vacuum
(Lyutikov \& Lister 2010). Rarefaction waves of the front part of a shell
can reach a bulk Lorentz factor much higher than that of the jet on average.
Fast flares may come from these rarefied regions.
Relativistic turbulence in the jet can also allow for emitters-blobs
moving with $\Gamma_b\gg\Gamma_j$ to be responsible
for intense and fast flares (Narayan \& Piran 2012).
For the turbulence not to be supersonic (or it would decay fast
by shocks) the jet must be Poynting flux dominated. The driver
and the region where the turbulence develops remain to be identified.
Magnetic reconnection could drive the turbulent motions
(and turbulence can, in turn, enhance the reconnection rate;
Lazarian \& Vishniac 1999). In this case, however, it is quite likely
that the most powerful flares are directly related to the driver,
i.e., the reconnection process itself rather than the turbulent
motions.
\subsection{The reconnection model for fast flaring}
Here we have revisited the reconnection minijet model
for the fast flaring (Giannios et al. 2009; 2010). We focus on time-dependent aspects
that are naturally expected (and directly observed in laboratory experiments
and solar system environment) in the reconnection process.
It is demonstrated that at least two timescales appear in the problem.
The longer one is associated with the time it takes for the magnetic
energy in the reconnection region to be dissipated, and it creates
an ``envelope'' of flaring activity that lasts for several hours to days.
Instabilities in the current sheet (e.g. secondary tearing
instability; Loureiro et al. 2007) result in erratic formation
of plasmoids that leave the reconnection region at relativistic
speed. The largest ones, ``monster'' plasmoids, can power the fast,
$\simless 10$-minute-long blazar flares. Several to tens of monster
plasmoids can emerge from a single reconnection layer.
The super-fast flaring may therefore not happen in isolation.
{\it A sequence of fast flares is expected, with a similar timescale
set by the size of the reconnection layer, as observed in PKS 2155.}
Verification of this trend of a sequence of flares in more sources
and/or in other bands such as X-rays (see below)
would provide strong support for the model.
A virtue of the model is that it can be applied to all
blazar sources with observed fast flaring for similar adopted parameters.
In this model, the dissipation of energy that powers the blazar
radiation takes place at distance $R_{\rm diss}\sim 0.3-1$ pc,
the bulk Lorentz factor of the jet is $\Gamma_j\sim
20$, and the size of the reconnection region $l'\simless 10^{16}$ cm.
These quantities point towards an interesting possibility for the
magnetic field structure in the jet and the origin of
the blazar emission. If the magnetic field configuration is not
exactly axisymmetric at the horizon, the jet may emerge with
small-scale field structures of size similar to that of the
central engine $\sim R_{\rm Sch}\sim 3\times
10^{14}M_9$ cm (along the direction of the jet propagation). Even a modest
non-axisymmetry at the base of the jet can be amplified by stretching
in the toroidal direction because of the jet (lateral)
expansion. The jet expands from a lateral size of $r\sim R_{\rm Sch}$
at the launching radius to $r=\theta_{\rm j}R_{\rm diss}\gg R_{\rm
Sch}$ at the dissipation distance. The resulting scale of the
field reversals in the rest frame of the jet is $\sim \Gamma_j R_{\rm Sch}$,
which may be used as an estimate of the characteristic scale of the
reconnection layer, $l'\simless 10^{16}$ cm. For typical parameters, the
reconnection time catches up with the expansion time of the jet
at distance $R_{\rm diss}\sim \Gamma_{\rm j}^2R_{\rm Sch}/\epsilon\sim 1$ pc.
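These scale estimates can be checked with a few lines of arithmetic. The sketch below assumes a black-hole mass of $10^9 M_\odot$ ($M_9=1$), $\Gamma_j=20$, and a fiducial reconnection rate $\epsilon=0.1$ ($\epsilon$ is defined earlier in the paper; the value used here is an assumption):

```python
# Order-of-magnitude check of the jet scales quoted in the text (cgs units).
G = 6.674e-8           # gravitational constant
c = 2.998e10           # speed of light [cm/s]
Msun = 1.989e33        # solar mass [g]
pc = 3.086e18          # parsec [cm]

M = 1e9 * Msun         # black-hole mass (M_9 = 1)
Gamma_j = 20.0
eps = 0.1              # assumed reconnection rate

R_Sch = 2 * G * M / c**2            # Schwarzschild radius, ~3e14 cm
l_prime = Gamma_j * R_Sch           # field-reversal scale, ~6e15 cm < 1e16 cm
R_diss = Gamma_j**2 * R_Sch / eps   # dissipation distance

print(f"R_Sch  ~ {R_Sch:.1e} cm")
print(f"l'     ~ {l_prime:.1e} cm")
print(f"R_diss ~ {R_diss / pc:.2f} pc")   # a fraction of a pc, i.e. the ~1 pc scale
```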
The very-high-energy flares are modeled as the result of the SSC or EIC
process of energetic electrons in the plasmoids,
depending on the source. For the physical parameters inferred
at the emitting region, however, the synchrotron emission from the
same population of electrons naturally peaks in the X-ray band.
In at least some of the sources, fast $\gamma$-ray flares should also
be accompanied by fast X-ray flaring. Evidence for ultra-fast flares has
existed for some time. For instance Cui (2004) shows, using {\it RXTE}
observations of Mrk 421, that X-ray flaring on timescales as short as
$\sim 10$ minutes is evident. Also, a characteristic envelope-fast
flares structure is evident (see their Fig.~10). Simultaneous
detection of fast flares in both X-ray and $\gamma$-ray bands will be very
informative and constraining for the models.
The bulk motion of plasmoids in the jet rest frame required for the model to work is
very modest, $\gamma_{\rm p}\sim 2$. The bulk motion in the reconnection picture
corresponds to the Alfv\'en speed: $\gamma_{\rm p}\simeq \sqrt{1+\sigma}$, implying
that a magnetization $\sigma\sim 3$ is required at the dissipation zone $R_{\rm diss}$.
Is it reasonable that the jet remains modestly Poynting-flux dominated at $\sim 1$pc
scale? This would imply that the conversion of Poynting-to-kinetic flux
is not complete by that distance. A systematic study of superluminal motions
on pc to tens of pc scales reveals that blazar jets still undergo acceleration
out to the larger scale (Piner et al. 2012). In the context of MHD acceleration
of jets, this would imply that, indeed, the pc-scale jet maintains a substantial
magnetization.
Ultra-fast flares are the tip of the iceberg of blazar variability.
The process of magnetic reconnection is potentially responsible for
powering a much broader range of blazar activity. Reconnection
may well take place at larger (e.g. multi pc) scale where the plasma
is presumably less magnetized (because of further conversion
of magnetic energy into kinetic). When the reconnecting plasma is characterized by
$\sigma\simless 1$, the reconnection speed $v_{\rm rec}$ slows down (since
$v_{\rm rec}\propto V_A<c$). In this case, the reconnection timescale
becomes longer and reconnection layers may power days-to-weeks
long flares of ``envelope'' emission.
\subsubsection{Other astrophysical implications}
These ideas of plasmoid-dominated reconnection may be applied to other astrophysical
sources. The variability patterns of gamma-ray burst (GRB) emission show fast flaring on top of
slower-evolving lightcurves that may be connected to such reconnection process
(see also Lyutikov 2006; Lazar et
al. 2009; McKinney \& Uzdensky
2012; Zhang \& Yan 2011). Similar considerations
may apply to flares observed during the GRB afterglows (Giannios 2006).
Reconnection minijets may also be the key to understand the fast GeV
flaring of the pulsar wind nebula of Crab (Clausen-Brown \& Lyutikov
2012; Cerutti et al. 2012).
In particular, such a model may attempt to explain the day-long
flares and shorter-timescale variability observed during major
flaring of the Crab (Abdo et al. 2011; Tavani et al. 2011; Buehler et al. 2012).
\subsubsection{Open issues}
This study focused on the rough energetics and timescales of plasmoid-dominated
reconnection in blazar jets. While the feasibility of the process to account for blazar flares
has been demonstrated, a more detailed comparison to observations requires
progress in our theoretical understanding on a number of fronts.
Where is the dissipation zone of jets located?
Studies of the global jet structure and stability can reveal
where and why reconnection in the jet develops. These studies will
also probe the characteristic length scales and orientation of the
reconnection regions. The theory of plasmoid-dominated reconnection
is also a work in progress: a better understanding of the fragmentation
instabilities of the current sheet requires high-resolution 3D simulations.
The theory should be tested and extended to the trans-relativistic
regime of $\sigma\sim$ a few, which is the relevant one here.
Finally, to make predictions for the spectra of the resulting
emission and to allow a direct comparison to observations, a better
understanding of particle acceleration in reconnection regions is
required. Particle-in-cell simulations make rapid progress in this
direction (Zenitani \& Hoshino 2005; Drake et al. 2006; Sironi \&
Spitkovsky 2011).
\section*{Acknowledgments}
I thank H. Spruit and D. Uzdensky for insightful discussions and comments during the preparation of
the manuscript. I thank the referee for carefully reading the
manuscript, spotting inaccuracies in the derivations and making
suggestions that greatly improved the manuscript.
\section*{Introduction}
In this work we are interested in the convergence in distribution of
\begin{equation}\label{purpose}
\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right)
\end{equation}
where $U$ is a general function defined on some Wasserstein space of probability measures on $\mathbb{R}^d$ and $\left( X_i\right)_{i\geq 1} $ is a given sequence of $\mathbb{R}^d$-valued random vectors.\\ In the mathematical statistics literature, Von Mises \cite{vonmises1} \cite{vonmises2} (see also Chapter 6 of \cite{VonMises}) was the first to develop an approach for deriving the asymptotic distribution theory of functionals $U(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i})$ (called ``statistical functions'') under the assumption of independent and identically distributed real-valued random variables $\left( X_i\right)_{i\geq 1} $. \\Through the use of the G\^{a}teaux differential, he introduced a Taylor expansion of $U(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i})$ around $U(\mu)$ where $\mu$ is the common distribution of the random variables. He proved that if the linear term is the first
nonvanishing term in the Taylor expansion of the functional $U(.)$ at $\mu$, the limit distribution
is normal (under the usual restrictions corresponding to the central limit
theorem). One of the main difficulties was to prove that the remainder in the Taylor expansion goes to zero.\\
In dimension $d=1$, Boos and Serfling \cite{BoosSerfling} assumed the existence of a differential for $U(.)$ at $\mu$ in a sense stronger than the G\^{a}teaux differential. They assumed the existence of $\frac{d}{d\epsilon}_{|\epsilon=0^+}U\left(\mu+\epsilon\left(\nu-\mu\right) \right)= dU\left(\mu,\nu-\mu\right)$ at $\mu$ that is linear in $\nu-\mu$ (for $\nu$ any probability measure on the real line ) and such that
\begin{equation} \label{boosserfling}
U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)=\frac{1}{N}\sum_{i=1}^{N}dU\left(\mu,\delta_{X_i}-\mu \right)+ o\left(\left\|\frac{1}{N}\sum_{i=1}^{N}1_{\left\lbrace X_i\leq\cdot \right\rbrace }-\mu\left(-\infty,\cdot \right] \right\|_\infty \right) .
\end{equation}
Taking
advantage of known stochastic properties of the Kolmogorov-Smirnov distance, in particular the boundedness in probability of $\left(\sqrt{N}\left(\left\|\frac{1}{N}\sum_{i=1}^{N}1_{\left\lbrace X_i\leq\cdot \right\rbrace }-\mu\left(-\infty,\cdot \right] \right\|_\infty \right) \right)_{N\geq1}$, they concluded that $\sqrt{N}\left( U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right) $ converges in distribution to a centered Gaussian random
variable with asymptotic variance equal to the common variance of the independent and identically distributed random variables $dU\left(\mu,\delta_{X_i}-\mu \right)$ when they are square integrable and centered. In addition to the limitation of their approach to dimension $d=1$, it relies on the uniformity in \eqref{boosserfling} of the approximation with respect to the Kolmogorov-Smirnov distance which is a strong assumption almost amounting to Fréchet differentiability of $U$ at $\mu$ for the Kolmogorov norm.
When $\mu$ is a probability measure on any measurable space, Dudley \cite{Dudley} obtains central limit theorems for $\sqrt{N}(U(\frac{1}{N}\sum_{i=1}^N\delta_{X_i})-U(\mu))$ under Fréchet differentiability of $U$ at $\mu$ with respect to $\|\nu-\mu\|=\sup_{f\in{\mathcal F}}\left|\int f(x)(\nu-\mu)(dx)\right|$ where the class ${\mathcal F}$ of measurable functions is such that a central limit theorem for empirical measures holds with respect to uniform convergence over ${\mathcal F}$. Clearly the requirements on ${\mathcal F}$ impose some balance: the larger the class ${\mathcal F}$, the easier Fréchet differentiability becomes, but the stronger the uniform convergence over ${\mathcal F}$ becomes. \\
Recently Jourdain and Tse \cite{Jourdain} reconsidered the same problem under the assumption of independent and identically distributed $\mathbb{R}^d$-valued random vectors $\left(X_i \right)_{i\geq1} $. By means of the notion of the linear functional derivative of $U$ that is a G\^{a}teaux differential with the property that $dU\left(\mu,\nu-\mu\right)=\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,y \right)\left(\nu-\mu\right)(dy) $ for some measurable real valued function $\mathbb{R}^d\ni y\rightarrow \frac{\delta U}{\delta m}\left(\mu,y \right)$
with some polynomial growth assumption in $y$, they linearize $\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right)$ into the sum of
\begin{equation} \label{linearization}
\frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left( \frac{\delta U}{\delta m}\left(\frac{1}{N}\sum_{j=1}^{i-1}\delta_{X_j}+\frac{N+1-i}{N}\mu,X_i \right)- \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\frac{1}{N}\sum_{j=1}^{i-1}\delta_{X_j}+\frac{N+1-i}{N}\mu,x \right)\mu(dx)\right)
\end{equation}
and a remainder. Such a decomposition allows to apply to the above sum the Central Limit Theorem for arrays of martingale increments and to investigate sufficient conditions for the remainder to vanish in probability. They finally proved that
$$\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right) \overset{d}{\Longrightarrow } \mathcal{N}\left(0, Var\left( \frac{\delta U}{\delta m}\left(\mu,X_1 \right)\right) \right).$$
They replace the uniformity leading to Fréchet differentiability required in the statistical literature, by supposing that the linear functional derivative exists not only at $\mu$ but on a Wasserstein ball with positive radius containing $\mu$. This is a mild restriction, since when a central limit theorem holds for some statistical functional, it is in general not limited to a single value of the common distribution $\mu$ of the samples.
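As an elementary illustration of this type of result (a toy example chosen here for concreteness, not one taken from \cite{Jourdain}), take $U(m)=\left(\int_{\mathbb{R}}x\,m(dx)\right)^2$, whose linear functional derivative is $\frac{\delta U}{\delta m}(m,y)=2y\int_{\mathbb{R}} x\,m(dx)$. For i.i.d. $X_i\sim\mathcal{N}(1,1)$ the predicted asymptotic variance is $Var\left(\frac{\delta U}{\delta m}(\mu,X_1)\right)=Var(2X_1)=4$, which a short Monte Carlo experiment confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 5_000, 1_000            # sample size, Monte Carlo repetitions

# Toy functional U(m) = (\int x m(dx))^2 with linear functional derivative
# (dU/dm)(mu, y) = 2 * y * \int x mu(dx).  For X_i i.i.d. N(1, 1), the
# predicted asymptotic variance of sqrt(N) (U(mu_N) - U(mu)) is Var(2 X_1) = 4.
X = rng.normal(loc=1.0, scale=1.0, size=(M, N))
U_emp = X.mean(axis=1) ** 2        # U(empirical measure), one value per run
Z = np.sqrt(N) * (U_emp - 1.0)     # rescaled fluctuation around U(mu) = 1

print(f"empirical variance: {Z.var():.2f}  (CLT predicts 4)")
```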
The aim of this work is to generalize what has been done by Jourdain and Tse. We first relax the equidistribution assumption by studying the convergence of (\ref{purpose}) when the $\mathbb{R}^d$-valued random vectors $(X_i)_{i\geq1}$ are independent but not equidistributed, denoting the law of $X_i$ by $\nu_{i}$.
Since in our case we do not assume the equidistribution of the random variables, we need to give sufficient conditions for $\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i}$ and for $\frac{1}{N}\sum_{i=1}^{N}\nu_{i} $ to converge to a common limit $\mu$ (for a distance that will be specified later). We will split $ \sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right)$ into the sum
\begin{equation}
\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\nu_{i} \right)-U\left(\mu\right)\right)+\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\frac{1}{N}\sum_{i=1}^{N}\nu_{i}\right)\right) .
\end{equation} \\
We will give sufficient conditions for the first component to converge to a constant that will be specified later. As for the second component, we generalize the linearization argument based on the linear functional derivative $\frac{\delta U}{\delta m}$ in (\ref{linearization}) to prove the convergence in distribution to a Gaussian random variable.\newline \\
We next get rid of the independence hypothesis by assuming the sequence of random vectors $\left( X_i\right)_{i\geq 1}$ to be an $\mathbb{R}^d$-valued Markov chain with transition kernel $P$ and unique invariant probability measure $\mu$.
By using the linear functional derivative $\frac{\delta U}{\delta m}$, we again linearize $\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right)$ into the sum of $(\ref{linearization})$
and a remainder. Under assumptions on the Markov kernel $P$ that ensure that the central limit theorem holds for linear functionals and by giving sufficient conditions for the remainder to vanish in probability as $N$ goes to infinity, we conclude that
$$\sqrt{N}\left(U\left(\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i} \right)-U\left(\mu\right)\right) \overset{d}{\Longrightarrow } \mathcal{N}\left(0, \mu\left(PF^2\left(\mu, \cdot \right)\right)-\mu\left( \left(PF \right)^2\left(\mu, \cdot \right) \right)\right)$$
with $F$ solution of the Poisson equation
$$F\left(\mu,x \right)-PF\left(\mu,x \right)= \frac{\delta U}{\delta m}\left(\mu,x\right)- \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,y\right)\mu(dy), \quad x\in \mathbb{R}^d.$$
As far as we know, such a generalization to non i.i.d. random vectors $X_i$ appears for the first time in the literature. This illustrates that the linear functional derivative is a versatile tool. \\
In the first section, we will provide the statements of the two results. Each of them will be preceded by reminders useful for its understanding. Together with the independent non-equidistributed case we will recall the notions of Wasserstein distance and linear functional derivative, and together with the Markov chain case we will recall definitions and general facts about Markov chains and the Poisson equation.
In the second section, the proofs of the two results are given.
\section{Main Results}
\subsection{Independent Non-Equidistributed Random Variables}
Let us observe that the case of a linear functional $U$, that is $U(m)=\int_{\mathbb{R}^d}f(x)m(dx)$ with $f:\mathbb{R}^d\rightarrow \mathbb{R}$ a measurable function, has been widely studied in this context.\\
We recall that a sequence $(f(X_i))_{i\geq 1}$ is Strongly Residually Cesaro $\beta$-Integrable for some $\beta >0$ (SRCI($\beta$), in short) if
\begin{enumerate}[label=(\roman*)]
\item $\sup_{N\geq 1}\dfrac{1}{N}\mathlarger{\sum}_{i=1}^{N}\mathbb{E}(|f(X_i)|)<\infty$
\item $\mathlarger{\sum}_{i=1}^{\infty}\dfrac{1}{i}\mathbb{E}\left(\left( |f(X_i)|-i^\beta\right) 1_{\left\lbrace|f(X_i)|> i^\beta \right\rbrace } \right) < \infty $.
\end{enumerate}
Chandra and Goswami \cite[Theorem 4.1]{chandra} proved that if $(f(X_i))_{i\geq 1}$ is a sequence of pairwise independent random variables satisfying the condition SRCI($\beta$) for some $\beta \in (0,1)$, then the Strong Law of Large Numbers holds:
\begin{equation} \label{lln}
\lim_{N\rightarrow \infty} \dfrac{\sum_{i=1}^{N} (f(X_i)-\mathbb{E}(f(X_i)))}{N} = 0 \quad a.s.
\end{equation}
Moreover, Lindeberg proved (see for instance \cite{billingsley}) that the Central Limit Theorem also holds in this case. In more detail, consider a sequence $(f(X_i))_{i\geq 1}$ of square-integrable independent random variables such that $\lim_{N\rightarrow \infty}\frac{\mathlarger{\sum}_{i=1}^{N}Var(f(X_i))}{N}=\sigma^2$ where $\sigma^2>0$. If moreover Lindeberg's condition is satisfied
$$ \forall \epsilon >0 \quad \lim_{N\rightarrow \infty}\dfrac{1}{N} \mathlarger{\sum}_{i=1}^{N} \mathbb{E}((f(X_i)-\mathbb{E}(f(X_i)))^21_{\left| f(X_i)-\mathbb{E}(f(X_i))\right|>\epsilon \sqrt{N} })=0, $$
then
\begin{equation} \label{clt}
\dfrac{\sum_{i=1}^{N} (f(X_i)-\mathbb{E}(f(X_i)))}{\sqrt{N}} \overset{d}{\Longrightarrow } \mathcal{N}(0,\sigma^2).
\end{equation}
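A short simulation (a sketch with arbitrarily chosen variances, not taken from the references) illustrates (\ref{clt}) in the non-equidistributed setting: with $Var(f(X_i))$ alternating between $1$ and $3$, the Cesaro averages of the variances converge to $\sigma^2=2$, and Lindeberg's condition holds since the variables are Gaussian with uniformly bounded variances.

```python
import numpy as np

rng = np.random.default_rng(2)
N, M = 5_000, 1_000            # number of summands, Monte Carlo repetitions

# Independent but non-equidistributed summands: Var(f(X_i)) alternates
# between 1 and 3, so the Cesaro average of the variances tends to
# sigma^2 = 2; Lindeberg's condition holds for these Gaussian variables.
sigmas = np.where(np.arange(N) % 2 == 0, 1.0, np.sqrt(3.0))
X = rng.normal(size=(M, N)) * sigmas   # centered, so E f(X_i) = 0
Z = X.sum(axis=1) / np.sqrt(N)

print(f"empirical variance: {Z.var():.2f}  (CLT predicts 2)")
```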
Before giving the statement of the Central Limit Theorem for general $U$ in the case of independent non-equidistributed random variables, let us recall some notions about the Wasserstein distance and the linear functional derivative.
\subsubsection{The Wasserstein distance and the linear functional derivative}
Let $U:\mathcal{P}_\ell\left( \mathbb{R}^d\right)\rightarrow \mathbb{R}$ where for $\ell \geq 0$ we denote by $\mathcal{P}_\ell\left( \mathbb{R}^d\right)$ the set of probability measures $m$ on $\mathbb{R}^d$ such that $\int_{\mathbb{R}^d}\left|x \right|^\ell m(dx)<\infty$. For $\ell>0$, we consider the $\ell$-Wasserstein metric defined for $\mu_1,\mu_2 \in \mathcal{P}_\ell\left( \mathbb{R}^d\right)$ by
\begin{equation} \label{wasserstein}
W_\ell\left(\mu_1,\mu_2\right) = \inf\left\lbrace \left(\int_{\mathbb{R}^{2d}}\left|x-y \right|^\ell \rho\left(dx,dy\right) \right)^{\frac{1}{\ell \vee 1}} : \rho\in \mathcal{P}(\mathbb{R}^{2d})\,with\, \rho\left(\cdot\times\mathbb{R}^d \right)=\mu_1(\cdot), \rho\left(\mathbb{R}^d\times \cdot \right)=\mu_2(\cdot) \right\rbrace.
\end{equation}
If $\mu \in \mathcal{P}_\ell\left(\mathbb{R}^d\right)$ and $\left(\mu_n\right)_{n\in \mathbb{N}}$ is a sequence in this space, then $\lim_{n\to\infty} W_\ell\left(\mu_n,\mu \right)=0$ if and only if $\lim_{n\rightarrow \infty}\int_{\mathbb{R}^d}|x|^\ell\mu_{n}(dx)=\int_{\mathbb{R}^d}|x|^\ell\mu(dx)$ and $\mu_n$ converges weakly to $\mu$ as $n\to\infty$ where we write $\mu_n\rightharpoonup \mu$ to denote the weak convergence. Alternatively $\lim_{n\to\infty} W_\ell\left(\mu_n,\mu \right)=0$ if and only if $\forall \phi:\mathbb{R}^d\rightarrow \mathbb{R}$ continous $\mu$-almost everywhere and such that $\sup_{x\in \mathbb{R}^d}\frac{|\phi(x)|}{1+|x|^\ell}<\infty,$
\begin{equation} \label{carat_wass}
\lim_{n\to\infty}\int_{\mathbb{R}^d}\phi(x)\mu_{n}(dx)=\int_{\mathbb{R}^d}\phi(x)\mu(dx).
\end{equation}\\
For $\ell=0$ and $\mu_1,\mu_2 \in \mathcal{P}_0\left( \mathbb{R}^d\right)$, we consider
$$W_0\left(\mu_1,\mu_2\right) = \inf\left\lbrace \int_{\mathbb{R}^{2d}}(1\wedge\left|x-y\right|)\rho\left(dx,dy\right) :\rho\in \mathcal{P}(\mathbb{R}^{2d})\,with\, \rho\left(\cdot\times\mathbb{R}^d \right)=\mu_1(\cdot), \rho\left(\mathbb{R}^d\times \cdot \right)=\mu_2(\cdot) \right\rbrace.$$
Notice that $W_0$ metricizes the topology of weak convergence.
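For intuition, in dimension $d=1$ the distance $W_1$ between two equally weighted samples can be computed explicitly: the optimal coupling matches order statistics, and equivalently $W_1$ is the $L^1$ distance between the empirical cumulative distribution functions. The following sketch (an illustration, not part of the paper's argument) checks that the two computations agree:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=500)             # sample from N(0, 1)
y = rng.normal(loc=0.5, size=500)    # sample from N(0.5, 1)

# Method 1: in d = 1 the optimal W_1 coupling of two equally weighted
# samples of the same size matches order statistics.
w1_sorted = np.abs(np.sort(x) - np.sort(y)).mean()

# Method 2: W_1 = \int |F_x(t) - F_y(t)| dt, with piecewise-constant
# empirical CDFs evaluated on the merged grid of sample points.
grid = np.sort(np.concatenate([x, y]))
Fx = np.searchsorted(np.sort(x), grid, side="right") / x.size
Fy = np.searchsorted(np.sort(y), grid, side="right") / y.size
w1_cdf = np.sum(np.abs(Fx[:-1] - Fy[:-1]) * np.diff(grid))

print(w1_sorted, w1_cdf)   # the two computations agree (both close to the shift 0.5)
```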
For $\ell\geq 0$ we can also consider $\mathcal{M}_\ell(\mathbb{R}^d)$, the space of the signed measures $\tau$ on $\mathbb{R}^d$ such that $\int_{\mathbb{R}^d}|x|^\ell |\tau|(dx)<\infty$ where $|\cdot|$ denotes the total variation of a signed measure.
For each $\tau \in \mathcal{M}_\ell(\mathbb{R}^d)$ we define the norm $\left\|\tau \right\|_\ell= \sup_{f:|f(x)|\leq 1+|x|^\ell} \int_{\mathbb{R}^d}f(x)\tau(dx)$, where the supremum is taken over the measurable functions satisfying the growth condition. One can prove that if $\tau\in\mathcal{M}_\ell(\mathbb{R}^d)$ and $(\tau_n)_{n\in \mathbb{N}}$ is a sequence in this space such that $\left\|\tau_n-\tau \right\|_\ell\rightarrow 0,$ then $\left\||\tau_n|-|\tau| \right\|_\ell\rightarrow 0.$\\
Let us observe that in $\mathcal{P}(\mathbb{R}^d)$ the convergence with respect to $\left\|\cdot \right\|_\ell$ implies the convergence with respect to $W_\ell$. Moreover if $\ell=0$, for $\mu_1,\mu_2\in \mathcal{P}(\mathbb{R}^d)$ we have
\begin{align*}
\left\|\mu_1-\mu_2 \right\|_0&=\sup_{\left\|f \right\|_\infty\leq 1 }\left|\int_{\mathbb{R}^d}f(x)(\mu_1(dx)-\mu_2(dx)) \right| = 2d_{TV}(\mu_1,\mu_2)\\
&=2\sup_{A\in \mathcal{B}\left(\mathbb{R}^d \right) }\left|\mu_1(A)-\mu_2(A) \right|=\left|\mu_1-\mu_2 \right|\left(\mathbb{R}^d \right)
\end{align*}
where $\mathcal{B}\left(\mathbb{R}^d \right)$ denotes the Borel $\sigma$-algebra of $\mathbb{R}^d$ and $d_{TV}$ the total variation distance between $\mu_1$ and $\mu_2$.
Let us now recall the notion of (first order) linear functional derivative associated to $U$. For a more detailed description, including the definition of higher-order linear functional derivatives, see \cite{linearfunctionalderivative}.
\begin{definition}
Let $\ell \geq 0$. A function $U:\mathcal{P}_\ell\left(\mathbb{R}^d\right) \rightarrow \mathbb{R}$ admits a linear functional derivative at $\mu \in \mathcal{P}_\ell\left(\mathbb{R}^d\right)$ if there exists a measurable function $\mathbb{R}^d\ni y \mapsto \frac{\delta U}{\delta m}\left(\mu,y\right) $ such that $\sup_{y\in \mathbb{R}^d}\frac{\left| \frac{\delta U}{\delta m}\left(\mu,y\right)\right| }{1+|y|^\ell}< \infty$ and
$$\forall \nu \in \mathcal{P}_\ell\left(\mathbb{R}^d\right), \frac{d}{d\epsilon}_{\lvert_{\epsilon=0^+}} U\left(\mu+\epsilon(\nu-\mu) \right)= \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,y\right) (\nu - \mu)(dy).$$
\end{definition}
The next lemma allows to express a finite difference of the function $U$ as an integral of the functional linear derivative.
\begin{lemma} \label{linearderivative}
Let $\ell \geq 0$, $m$, $m'\in \mathcal{P}_\ell\left(\mathbb{R}^d\right)$ and suppose that the linear functional derivative of a function $U:\mathcal{P}_\ell\left(\mathbb{R}^d\right) \rightarrow \mathbb{R} $ exists in the segment $\left(m_s:= sm'+(1-s)m\right)_{s\in \left[ 0,1\right] } $. Then if $\sup_{(s,y)\in \left[0,1 \right]\times \mathbb{R}^d}\frac{\left| \frac{\delta U}{\delta m}\left(m_s,y\right)\right| }{1+|y|^\ell}< \infty$, one has
\begin{equation}\label{linearfunc}
U(m')-U(m)= \int_{0}^{1}\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}((1-s)m+sm',y)(m'-m)(dy)ds.
\end{equation}
\end{lemma}
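To make the lemma concrete, here is a numerical verification of (\ref{linearfunc}) on the toy functional $U(m)=\left(\int x\,m(dx)\right)^2$, whose linear functional derivative is $\frac{\delta U}{\delta m}(m,y)=2y\int x\,m(dx)$, applied to two discrete measures (an illustrative sketch, not part of the proof):

```python
import numpy as np

# U(m) = (\int x m(dx))^2, with (dU/dm)(m, y) = 2 * y * \int x m(dx).
a = np.array([0.0, 1.0, 3.0])   # atoms of m (equal weights)
b = np.array([1.0, 2.0, 2.5])   # atoms of m'

ma, mb = a.mean(), b.mean()
lhs = mb**2 - ma**2             # U(m') - U(m)

# Right-hand side of the lemma: for m_s = (1-s) m + s m',
# \int (dU/dm)(m_s, y) (m' - m)(dy) = 2 * mean(m_s) * (mb - ma);
# integrate over s with the midpoint rule (exact: the integrand is linear in s).
s = (np.arange(1000) + 0.5) / 1000
mean_s = (1 - s) * ma + s * mb
rhs = np.mean(2 * mean_s * (mb - ma))

print(lhs, rhs)   # the two sides agree
```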
\subsubsection{Statement of the theorem}
Given a measure $\mu\in\mathcal{P}_\ell\left(\mathbb{R}^d \right)$, we will consider the following group of hypotheses about the regularity of the functional derivative $U$ (\textbf{RU}) in a neighborhood of $\mu$:
\debutRU
\item \label{functional_derivative2} there exists $r>0$ such that $U$ admits a linear functional derivative on the ball $B(\mu,r)$ centered at $\mu$ with radius $r$ for the metric $W_\ell$
\item \label{growth2} $\exists C < \infty, \forall (\tilde{\mu},x) \in B(\mu,r) \times \mathbb{R}^d, \left| \dfrac{\delta U}{\delta m}(\tilde{\mu},x)\right| \leq C \left( 1 + |x|^{\frac{\ell}{2}}\right) $
\item\label{derivative_conv2} $ \sup_{x\in \mathbb{R}^d} \dfrac{|\frac{\delta U}{\delta m}(\tilde{\mu},x)-\frac{\delta U}{\delta m}(\mu,x)|}{1+|x|^\frac{\ell}{2}} $ converges to $0$ when $W_\ell(\tilde{\mu},\mu)$ goes to $0$
\item \label{continuity} $x\mapsto \frac{\delta U}{\delta m}\left(\mu,x \right) $ is continuous $\mu$-almost everywhere
\item \label{growth_diff2} $\exists \alpha\in (\frac{1}{2},1], \exists C < \infty, \forall \mu_1,\mu_2\in B(\mu,r), \forall x\in \mathbb{R}^d$
$$\left| \dfrac{\delta U}{\delta m}(\mu_2,x)-\dfrac{\delta U}{\delta m}(\mu_1,x)\right| \leq C\left( (1+|x|^\ell)\left\| \mu_2-\mu_1\right\|_0^\alpha +(1+|x|^{\ell (1-\alpha)})\left( \int_{\mathbb{R}^d}|y|^\ell |\mu_2-\mu_1|(dy) \right)^\alpha \right). $$
\finRU
Moreover we will consider the following assumption about the tails of the random vectors $(X_i)_{i\geq1}$:
\begin{itemize}
\item[\textbf{TX}]\label{srci}there exists $\beta \in (0,1)$ such that $\mathlarger{\sum}_{i=1}^{\infty}\dfrac{1}{i}\mathbb{E}\left(\left( |X_i|^\ell-i^\beta\right) 1_{\left\lbrace|X_i|^\ell> i^\beta \right\rbrace } \right) < \infty. $
\end{itemize}
Let us observe that it coincides with condition $(ii)$ of Strongly Residually Cesaro $\beta$-Integrability with the choice $f(x)=|x|^\ell$.
We are now ready to state respectively the Strong Law of Large Numbers and the Central Limit Theorem.
\begin{teo}[LLN for independent non-equidistributed
r.v.]\label{SLLN}
Let $\ell \geq 0$ and let $X_i,\, i\geq 1$ be a sequence of independent random variables on $\mathbb{R}^d$ with law $\nu_i \in \mathcal{P}_{\ell}(\mathbb{R}^d)$ and let us define
$$\mu_N :=\frac{1}{N}\sum_{i=1}^{N} \delta_{X_i}$$
and
$$\bar{\nu}_N:= \mathbb{E}\left( \mu_N\right) = \frac{1}{N}\sum_{i=1}^{N} \nu_i.$$
Let us assume \textbf{TX} and the existence of $\mu \in \mathcal{P}_\ell(\mathbb{R}^d)$ such that $\lim_{N\rightarrow \infty}W_\ell(\bar{\nu}_N,\mu)=0.$ Then
$$W_\ell\left(\mu_N, \mu\right) \underset{N}{\longrightarrow} 0 \quad a.s.$$
\end{teo}
\begin{proof} Thanks to the existence of $\mu \in \mathcal{P}_\ell(\mathbb{R}^d)$ such that $\lim_{N\rightarrow \infty}W_\ell(\bar{\nu}_N,\mu)=0$, by the characterization of the Wasserstein convergence, one has $ \bar{\nu}_N \rightharpoonup \mu$ and so, by Wellner's result \cite{Wellner}, $\mu_N \rightharpoonup \mu$ a.s.\\
If $\ell=0$ the proof is completed, let us therefore suppose that $\ell>0$ and let us check the convergence of the $\ell$-th moment.
Since the $\left| X_i\right|^\ell$'s are independent, Assumption \textbf{TX} holds and $\sup_{N\geq 1}\frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left(\left| X_i\right|^\ell \right)=\sup_{N\geq 1} \int_{\mathbb{R}^d}|x|^\ell\bar{\nu}_N(dx) <\infty$ (using again the characterization of the Wasserstein convergence), we can apply (\ref{lln}) and obtain that
$$\int_{\mathbb{R}^d} | x |^\ell \mu_N\left(dx\right) - \int_{\mathbb{R}^d} | x |^\ell \bar{\nu}_N\left(dx\right)=\dfrac{| X_1|^\ell+\cdots+| X_N|^\ell - \left( \mathbb{E}\left( | X_1|^\ell\right) +\cdots+ \mathbb{E}\left( | X_N|^\ell\right) \right) }{N} \underset{N\rightarrow \infty}{\longrightarrow} 0 \qquad a.s. $$
Therefore
\begin{align*}
\lim_{N\rightarrow \infty}\int_{\mathbb{R}^d}& | x |^\ell \mu_N\left(dx\right) - \int_{\mathbb{R}^d} | x |^\ell \mu\left(dx\right)\\
&=\lim_{N\rightarrow \infty} \int_{\mathbb{R}^d} | x |^\ell \mu_N\left(dx\right) - \int_{\mathbb{R}^d} | x |^\ell \bar{\nu}_N\left(dx\right)+ \int_{\mathbb{R}^d} | x |^\ell \bar{\nu}_N\left(dx\right) - \int_{\mathbb{R}^d} | x |^\ell \mu\left(dx\right) = 0 \qquad a.s.
\end{align*}
\end{proof}
\begin{teo}[CLT for independent non-equidistributed
r.v.]\label{independent_case}
With the same notation as in Theorem \ref{SLLN}, let us assume that
\begin{enumerate}
\item \label{wess.conv2} $W_\ell\left(\frac{1}{N}\sum_{i=1}^N\nu_i(dx)\nu_i(dy),\eta(dx,dy)\right) \underset{N\rightarrow \infty}{\longrightarrow} 0$ for some measure $\eta \in \mathcal{P}_\ell (\mathbb{R}^d \times \mathbb{R}^d)$ with $\mu(\cdot) = \eta(\mathbb{R}^d,\cdot)=\eta(\cdot,\mathbb{R}^d)$
\item\label{l_convergence}
$\|\sqrt{N}\left(\bar{\nu}_N-\mu \right) - \sigma\|_\ell \underset{N\rightarrow \infty}{\longrightarrow} 0$
for some measure $\sigma$ in $\mathcal{P}_\ell(\mathbb{R}^d). $
\end{enumerate}
If moreover \textbf{RU\ref{functional_derivative2}-\ref{growth_diff2}} and \textbf{TX} hold then
$$\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu \right) \right) \overset{d}{\Longrightarrow} \mathcal{N}\left(\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,x\right)\sigma(dx),\int_{\mathbb{R}^d} \left( \frac{\delta U}{\delta m}\left(\mu,x \right)\right) ^2\mu(dx)-\int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy) \right). $$
\end{teo}
\begin{remark} \label{conv_nuntomu}
Since the marginals of $\frac{1}{N}\sum_{i=1}^N\nu_i(dx)\nu_i(dy)$ and $\eta(dx,dy)$ are respectively $\bar{\nu}_N$ and $\mu$, one has $$W_\ell\left(\bar{\nu}_N,\mu\right) \leq W_\ell\left(\frac{1}{N}\sum_{i=1}^N\nu_i(dx)\nu_i(dy),\eta(dx,dy)\right)$$
and so Assumption \ref{wess.conv2} implies that $W_\ell(\bar{\nu}_N,\mu)\underset{N\rightarrow \infty}{\longrightarrow}0$.
\end{remark}
\begin{remark}
According to the theorem, the asymptotic variance is given by
$$ \int_{\mathbb{R}^d}\left(\frac{\delta U}{\delta m} (\mu,x)\right) ^2\mu(dx)-\int_{\mathbb{R}^{2d}} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy).$$
By Jensen's inequality
$$\frac{1}{N}\sum_{i=1}^N \int_{\mathbb{R}^{2d}} \frac{\delta U}{\delta m} (\mu,x) \frac{\delta U}{\delta m} (\mu,y)\nu_i(dx)\nu_i(dy) = \frac{1}{N}\sum_{i=1}^N \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m} (\mu,x)\nu_i(dx)\right) ^2\ge \left( \frac{1}{N}\sum_{i=1}^N \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(\mu,x)\nu_i(dx)\right)^2.$$
Thanks to Hypothesis \ref{wess.conv2} and Hypotheses \textbf{RU\ref{growth2}} and \textbf{RU\ref{continuity}}, letting $N\rightarrow \infty$, one deduces that
$$
\int_{\mathbb{R}^{2d}} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy)\ge \left( \int_{\mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x)\mu(dx)\right) ^2.$$
Therefore the following ``variance'' inequality holds
$$ \int_{\mathbb{R}^d} \left( \frac{\delta U}{\delta m} (\mu,x)\right) ^2\mu(dx)-\int_{\mathbb{R}^{2d}} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy)\leq \int_{\mathbb{R}^d} \left( \frac{\delta U}{\delta m} (\mu,x)\right) ^2\mu(dx)- \left( \int_{\mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x) \mu(dx)\right)^2. $$
\end{remark}
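As a sanity check (an illustrative computation of ours, not needed in the sequel), in the equidistributed case $\nu_i=\mu$ for all $i$, Hypothesis \ref{wess.conv2} holds with $\eta(dx,dy)=\mu(dx)\mu(dy)$ and the asymptotic variance reduces to the classical delta-method variance:

```latex
% Equidistributed case: eta(dx,dy) = mu(dx) mu(dy), hence
\int_{\mathbb{R}^d} \left( \frac{\delta U}{\delta m}(\mu,x) \right)^2 \mu(dx)
- \int_{\mathbb{R}^{2d}} \frac{\delta U}{\delta m}(\mu,x)\,\frac{\delta U}{\delta m}(\mu,y)\,\mu(dx)\mu(dy)
= \mathrm{Var}_\mu\!\left( \frac{\delta U}{\delta m}(\mu,X) \right), \quad X\sim \mu,
% since the double integral factorizes into the square of the mean; the
% ``variance'' inequality of the remark above is then an equality.
```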
In what follows, we provide an example where Hypotheses \ref{wess.conv2} and \ref{l_convergence} are verified.
\begin{ex}
Let $\theta_0, \theta_1, \cdots, \theta_{m-1} \in \mathcal{P}_\ell(\mathbb{R}^d)$ for $m\in \mathbb{N}^*$ and define $\nu_{i}(dx):=\theta_{(i-1)\,mod\,m}(dx)$ for $i\geq 1$. We are now going to verify that Hypotheses \ref{wess.conv2} and \ref{l_convergence} are satisfied with $\eta(dx,dy)=\frac{1}{m}\sum_{i=0}^{m-1}\theta_i(dx)\theta_i(dy)$ and $\sigma=0$.\\
If we prove that
\begin{equation}\label{strongproof}
\lim_{N\to\infty}\sqrt{N}\left\|\frac{1}{N}\sum_{i=1}^{N}\theta_{(i-1)\,mod\,m}(dx)\theta_{(i-1)\,mod\,m}(dy)- \frac{1}{m}\sum_{i=0}^{m-1}\theta_i(dx)\theta_i(dy)\right\|_\ell =0,
\end{equation}
then using that the convergence with respect to $\left\|\cdot\right\|_\ell$ implies the convergence with respect to $W_\ell$, Hypothesis \ref{wess.conv2} will follow. Moreover since the marginals of $\frac{1}{N}\sum_{i=1}^{N}\theta_{(i-1)\,mod\,m}(dx)\theta_{(i-1)\,mod\,m}(dy)$ and $\frac{1}{m}\sum_{i=0}^{m-1}\theta_i(dx)\theta_i(dy)$ are respectively $\frac{1}{N}\sum_{i=1}^{N}\theta_{(i-1)\,mod\,m}$ and $\frac{1}{m}\sum_{i=0}^{m-1}\theta_i$, Hypothesis \ref{l_convergence} will follow too. Let us therefore prove (\ref{strongproof}).
For each $N>0$, there exist $k_N\in \mathbb{N}$ and $r_N\in \mathbb{N}$ with $0\leq r_N < m$ such that $N=k_Nm+r_N$. Let $f$ be a function on $\mathbb{R}^{2d}$ such that $|f(x,y)|\leq 1+|x|^\ell+|y|^\ell$. One has
\begin{align*}
&\sqrt{N}\left| \int_{\mathbb{R}^{2d}}f(x,y)\frac{1}{N}\sum_{i=1}^{N}\theta_{(i-1)\,mod\,m}(dx)\theta_{(i-1)\,mod\,m}(dy)-\frac{1}{m}\sum_{i=0}^{m-1}\int_{\mathbb{R}^{2d}}f(x,y)\theta_i(dx)\theta_i(dy)\right| \\
&\phantom{==}=\sqrt{N} \left| \left( \frac{k_N}{N}-\frac{1}{m}\right) \sum_{i=0}^{m-1}\int_{\mathbb{R}^{2d}}f(x,y) \theta_{i}(dx)\theta_{i}(dy)+\frac{1}{N}\sum_{i=0}^{r_N-1}\int_{\mathbb{R}^{2d}}f(x,y) \theta_i(dx)\theta_i(dy)\right| \\
&\phantom{==}=\left| -\frac{r_N}{\sqrt{N}m}\sum_{i=0}^{m-1}\int_{\mathbb{R}^{2d}}f(x,y) \theta_i(dx)\theta_i(dy)+\frac{1}{\sqrt{N}}\sum_{i=0}^{r_N-1}\int_{\mathbb{R}^{2d}}f(x,y) \theta_i(dx)\theta_i(dy) \right| .
\end{align*}
Letting $N\rightarrow \infty$, the right-hand side converges to $0$, which completes the proof.
\end{ex}
In the following example Hypothesis \ref{l_convergence} is verified with $\sigma=0$.
\begin{ex}
Let $\mu\in \mathcal{P}_\ell(\mathbb{R}^d)$ and $\nu_{i}\in \mathcal{P}_\ell(\mathbb{R}^d)$ such that $\left\|\nu_{i} - \mu \right\|_\ell\leq \dfrac{c}{i^\alpha}$ for $i\geq 1$ with $c <\infty$ and $\alpha>\frac{1}{2}$. Then
\begin{align*}
&\sqrt{N}\sup_{ f:|f(x)|\leq 1+|x|^\ell}\left|\dfrac{1}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^d}f(x)\nu_{i}(dx)- \int_{\mathbb{R}^d}f(x)\mu(dx)\right|\\
&\phantom{==} \leq \dfrac{1}{\sqrt{N}} \sum_{i=1}^{N}\left\|\nu_{i} - \mu \right\|_\ell\leq\dfrac{c}{\sqrt{N}}\sum_{i=1}^{N}\dfrac{1}{i^\alpha} = \begin{cases}
O\left( \frac{1}{\sqrt{N}}\right) \, &\mbox{if}\,\,\alpha>1\\
O\left( \frac{\ln N}{\sqrt{N}}\right) \,&\mbox{if}\,\, \alpha=1\\
O\left( N^{\frac{1}{2}-\alpha}\right) \, &\mbox{if}\,\,\alpha\in(0,1)\\
\end{cases}
\end{align*}
Therefore, since $\alpha >\frac{1}{2}$, letting $N\rightarrow \infty$ we obtain $\sqrt{N}\|\bar{\nu}_N-\mu\|_\ell \underset{N\rightarrow \infty}{\longrightarrow}0.$
\end{ex}
\subsection{Markov Chains}
\subsubsection{Markov Chains and the Poisson equation}
We are now going to state the Central Limit Theorem for Markov chains, but before doing so, let us recall some useful facts. See \cite{notesbenj} for more details. \\
Let $X_i,\, i\geq 1$ be a Markov chain with initial distribution $\nu_1$ and transition kernel $P$ on $\left(\mathbb{R}^d, \mathcal{B}\left(\mathbb{R}^d\right) \right) $.\\
We say that $P$ verifies the \textbf{Lyapunov Condition} if:
\debutL
\item \label{D1} $\exists V:\mathbb{R}^d \rightarrow \mathbb{R}_+$ measurable, $\exists K \in \mathbb{R}_+$, $\exists \gamma\in\left(0,1 \right)$, $\forall x\in \mathbb{R}^d$, $PV\left(x\right)\leq \gamma V\left(x\right)+K$
\item \label{D2} $\exists R>\dfrac{2K}{1-\gamma}$, $\exists \rho \in (0,1] $, $\forall x, y \in \mathbb{R}^d$ such that $V(x)+V(y)\leq R$, $P(x,\cdot)\wedge P(y,\cdot)(\mathbb{R}^d)\geq \rho.$
\finL
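For a concrete illustration (an example of ours; the kernel below is not discussed elsewhere in the text), the one-dimensional Gaussian autoregressive kernel $P(x,\cdot)=\mathcal{N}(ax,s^2)$ with $|a|<1$ satisfies \textbf{L\ref{D1}} with the Lyapunov function $V(x)=1+x^2$:

```latex
% AR(1) kernel P(x,.) = N(a x, s^2), |a| < 1, and V(x) = 1 + x^2:
PV(x) = \mathbb{E}\left( 1 + (a x + s\,\xi)^2 \right)
      = 1 + a^2 x^2 + s^2
      = a^2\, V(x) + 1 - a^2 + s^2, \qquad \xi\sim\mathcal{N}(0,1),
% i.e. L1 holds with gamma = a^2 and K = 1 - a^2 + s^2; L2 follows from the
% uniform overlap of the Gaussian laws P(x,.) and P(y,.) when V(x)+V(y) <= R.
```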
We can introduce the following normed space
$$\mathcal{V}_{V}:= \left\lbrace \phi:\mathbb{R}^d \rightarrow \mathbb{R} \text{ measurable} : \sup_{x\in \mathbb{R}^d}\frac{\left|\phi(x) \right|}{1+ V(x)} <\infty \right\rbrace$$
equipped with the norm
$$ \|\phi \|_{V,\beta} = \sup_{x\in \mathbb{R}^d}\frac{|\phi(x)|}{1+\beta V(x)} $$
where $\beta>0$.
We can also associate the distance $d_{V,\beta}$ on $\mathcal{P}_V\left(\mathbb{R}^d \right):= \left\lbrace \theta \in \mathcal{P}\left(\mathbb{R}^d\right) : \theta(V)<\infty \right\rbrace $ defined by
$$d_{V,\beta}\left( \theta,\sigma\right) = \sup_{\phi:\|\phi \|_{V,\beta}\leq 1} |\theta\left(\phi\right)-\sigma\left(\phi\right)|.$$
It can be proved that under the Lyapunov condition, the transition kernel $P$ admits a unique invariant probability measure $\mu$. Moreover
\begin{equation}\label{int_of_V}
\mu(V)\leq \frac{K}{1-\gamma}
\end{equation}
and
\begin{equation}\label{stima_distanza}
\forall \beta> 0, \forall \sigma \in \mathcal{P}\left(\mathbb{R}^d\right), \forall n\in \mathbb{N}, d_{V,\beta}\left(\sigma P^n,\mu \right)\leq \left( \chi\left(\rho,\beta,\gamma,K,R \right)\right)^n d_{V,\beta}\left(\sigma,\mu \right)
\end{equation}
where $\chi\left(\rho,\beta,\gamma,K,R \right) = \left( 1-\rho +\beta K\right)\vee \frac{2+\beta \gamma R+2 \beta K}{2+\beta R} \in \left(0,1 \right) $ if $\beta \in \left(0,\frac{\rho}{K} \right).$
The proof of this result is given in \cite{Hairer} under a stronger condition on $P$: it verifies \textbf{L\ref{D1}} and there exist a constant $q \in \left(0,1 \right)$ and a probability measure $\zeta$ such that $\inf_{x\in C}P\left(x,\cdot \right)\geq q \zeta\left(\cdot \right) $ with $C=\left\lbrace x\in \mathbb{R}^d:V\left(x \right) \leq R \right\rbrace $ for some $R>\frac{2K}{1-\gamma}$. The proof remains valid in our context as well.\\
We are now ready to recall the Strong Law of Large Numbers under the Lyapunov condition, a result that will be used extensively in what follows.
\begin{teo}\label{large_numbers}
Let us assume that the transition kernel $P$ satisfies the Lyapunov condition and let $\mu$ be its unique invariant probability measure. Then for each function $f:\mathbb{R}^d\rightarrow \mathbb{R}$ measurable and such that $\mu(|f|)<\infty$,
$$\forall \nu_1 \in \mathcal{P}(\mathbb{R}^d)\quad \mathbb{P}_{\nu_1}\left(\frac{1}{N}\sum_{k=1}^{N}f(X_k)\underset{N\rightarrow \infty}{\rightarrow} \mu(f) \right)=1.$$
\end{teo}
Since, by (\ref{int_of_V}), $\mu(V)<\infty$,
\begin{equation} \label{check}
\sup_{x\in \mathbb{R}^d} \frac{|f(x)|}{1+V(x)} <\infty
\end{equation}
is a sufficient condition to ensure that $\mu(|f|)<\infty$.
\begin{comment}
\begin{prop}\label{psforeachf}
Under the assumptions of Theorem \ref{large_numbers}, let $\mathcal{H}\subseteq \mathcal{F}\subseteq L^\infty(\mathbb{R}^d)$ where $\mathcal{H}$ is a countable, dense subset of $\mathcal{F}$ with respect to the infinity norm and let us define
$$\mu_{N}:=\frac{1}{N}\sum_{i=1}^{N}\delta_{X_i}. $$
Then
\begin{equation} \label{psforeachf2}
\mathbb{P}\left( \mu_{N}(f) \underset{N\rightarrow \infty}{\rightarrow} \mu(f)\,\, \forall f\in \mathcal{F} \right)=1.
\end{equation}
\end{prop}
\begin{proof}
Let us first observe that by Theorem \ref{large_numbers} and using that $\mathcal{H}$ is a countable set, one has
\begin{align*}
\mathbb{P}\left(\mu_{N}(h) \underset{N\rightarrow \infty}{\rightarrow} \mu(h)\,\, \forall h\in \mathcal{H} \right)=1.
\end{align*}
Therefore to prove (\ref{psforeachf2}), it is sufficient to prove that
$$\left\lbrace \mu_{N}(f) \underset{N\rightarrow \infty}{\rightarrow} \mu(f)\,\, \forall f\in \mathcal{F} \right\rbrace = \left\lbrace \mu_{N}(h) \underset{N\rightarrow \infty}{\rightarrow} \mu(h)\,\, \forall h\in \mathcal{H} \right\rbrace.$$
The inclusion ``$\subseteq$'' is
immediate. Let us now prove the other one. Let $f\in \mathcal{F}$, by the density of $\mathcal{H}$ in $\mathcal{F}$, $\forall \epsilon >0$ there exists $h\in \mathcal{H}$ such that $\left\|f-h \right\|_\infty\leq \epsilon. $ Therefore
\begin{align*}
|\mu_{N}(f)-\mu(f)|&\leq |\mu_{N}(f)-\mu_{N}(h)|+ |\mu(f)-\mu(h)|+|\mu_{N}(h)-\mu(h)|\\
&\leq 2\epsilon + |\mu_{N}(h)-\mu(h)|.
\end{align*}
For fixed $\epsilon$ the second term of the right hand-side goes to $0$ as $N$ goes to infinity while the first one is arbitrarily small for $\epsilon$ small.
\end{proof}
\end{comment}
Before stating the Central Limit Theorem for Markov chains, we need to introduce some facts about the Poisson equation.
For a fixed $f$ such that $\mu(|f|)<\infty$, a measurable function $ F:\mathbb{R}^d\rightarrow \mathbb{R}$ such that $P|F|(x)<\infty$ for all $x\in \mathbb{R}^d$ is called a solution of the Poisson equation if it satisfies
\begin{equation} \label{eq_poisson}
\forall x\in \mathbb{R}^d,\quad F(x)-PF(x)=f(x)-\mu(f)
\end{equation}
where $\mu$ as above denotes the invariant probability measure associated to $P$.\\
If the transition kernel $P$ satisfies the Lyapunov condition and $f\in \mathcal{V}_{V}$, the series with general term $\left(P^nf-\mu(f)\right)_{n\in \mathbb{N}}$ converges in the space $\mathcal{V}_{V}$
equipped with the norm $\left\|\cdot\right\|_{V,1} $ and
\begin{equation}\label{poisson}
F= \sum_{n\in \mathbb{N}}\left(P^nf-\mu(f)\right)
\end{equation}
is a solution of the Poisson equation (\ref{eq_poisson}). Moreover any solution can be written as $c+\sum_{n\in \mathbb{N}}\left(P^nf-\mu(f)\right)$ for a constant $c\in \mathbb{R}$.\\
We now strengthen the condition on $P$ in order to have the Lyapunov condition satisfied with $\sqrt{V}$ as well as with $V$. It can be proved that if $P$ satisfies \textbf{L\ref{D1}} and
\begin{description}
\item[L2'] \label{D2'} $\exists R>\dfrac{4K}{\left( 1-\sqrt{\gamma}\right) ^2}$, $\exists \rho \in (0,1] $, $\forall x, y \in \mathbb{R}^d$ such that $V(x)+V(y)\leq R$, $P(x,\cdot)\wedge P(y,\cdot)(\mathbb{R}^d)\geq \rho,$
\end{description}
then one has the Lyapunov condition for the quadruple $\left(V,\gamma,K,R\right)$ and for the quadruple $\left(\sqrt{V},\sqrt{\gamma},\sqrt{K},\sqrt{R}\right).$ In particular \textbf{L\ref{D1}} and Jensen's inequality imply that
\begin{equation}\label{sqrtV}
P\sqrt{V}(x)\leq \sqrt{PV(x)}\leq\sqrt{\gamma V(x)+K}\leq \sqrt{\gamma} \sqrt{V(x)}+\sqrt{K}.
\end{equation}
The following proposition holds.
\begin{prop} \label{poisson_sqrt}
Let us assume that the transition kernel $P$ verifies \textbf{L\ref{D1}} and \textbf{L2'}. Then if $f$ is such that $f^2\in \mathcal{V}_{V},$ $F=\sum_{n\in
\mathbb{N}}(P^nf-\mu(f))$ converges in $\left\|\cdot\right\|_{\sqrt{V},1}$. $F$ is a solution of the Poisson equation (\ref{eq_poisson}) and satisfies $F^2\in \mathcal{V}_{V}.$
Moreover for each $\beta \in (0,\frac{\rho}{\sqrt{K}})$ and for each $n\in \mathbb{N}$
$$ \sup_{x\in \mathbb{R}^d} \frac{\left|P^nf(x) - \mu(f) \right|}{1+\sqrt{V(x)}}\leq D \chi^n \sup_{x\in \mathbb{R}^d}\frac{|f(x)|}{1+\beta \sqrt{V(x)}} $$
with $\chi = \chi(\rho, \beta,\sqrt{\gamma}, \sqrt{K},\sqrt{R})\in (0,1)$ and $D= D(\beta,\sqrt{K},\sqrt{\gamma})$ a finite constant.
\end{prop}
We are now ready to state the Central Limit Theorem for Markov chains in the linear case (see the reference book \cite{CLTMC}).
\begin{teo}
Let us assume that the transition kernel $P$ verifies \textbf{L\ref{D1}} and \textbf{L2'}. Then for each function $f: \mathbb{R}^d\rightarrow \mathbb{R}$ measurable and such that $f^2\in \mathcal{V}_{V},$
$$\sqrt{N}\left(\frac{1}{N}\sum_{i=1}^{N}f\left(X_i \right)-\int_{\mathbb{R}^d}f(x)\mu(dx) \right) \overset{d}{\Longrightarrow } \mathcal{N}\left(0,\mu\left(F^2\right)- \mu((PF)^2)\right) $$
with $F$ solution of the Poisson equation
$$F(x)-PF(x)=f(x)-\mu(f) \quad x\in \mathbb{R}^d. $$
\end{teo}
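To make the asymptotic variance explicit, here is a worked example of ours (not taken from the text): take $d=1$, $P(x,\cdot)=\mathcal{N}(ax,s^2)$ with $|a|<1$, whose invariant measure is $\mu=\mathcal{N}\left(0,\frac{s^2}{1-a^2}\right)$, and $f(x)=x$. Since $P^nf(x)=a^nx$ and $\mu(f)=0$, the Poisson series (\ref{poisson}) sums explicitly:

```latex
% Poisson equation for f(x) = x under the AR(1) kernel P(x,.) = N(a x, s^2):
F(x) = \sum_{n\geq 0} a^n x = \frac{x}{1-a}, \qquad PF(x) = \frac{a\,x}{1-a},
% and the asymptotic variance of the CLT is
\mu\left(F^2\right) - \mu\left((PF)^2\right)
= \frac{1-a^2}{(1-a)^2}\,\frac{s^2}{1-a^2}
= \frac{s^2}{(1-a)^2},
% the classical long-run variance of the empirical mean of an AR(1) chain.
```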
\subsubsection{Statement of the theorem}
As in the statement of Theorem \ref{independent_case}, for a given measure $\mu\in\mathcal{P}_\ell\left(\mathbb{R}^d \right)$, let us consider the Hypotheses \textbf{RU\ref{functional_derivative2}-\ref{derivative_conv2}}, together with
\debutRU
\item \label{growth_diff}
$\exists \alpha\in (\frac{1}{2},1], \exists C < \infty, \forall \mu_1,\mu_2\in B(\mu,r)$
$$\sup_{x\in \mathbb{R}^d}
\frac{\left| \dfrac{\delta U}{\delta m}(\mu_2,x)-\dfrac{\delta U}{\delta m}(\mu_1,x)\right| }{1+|x|^{\frac{\ell}{2}}}
\leq C\left( \int_{\mathbb{R}^d}(1+|y|^{\frac{\ell}{2\alpha}}) |\mu_2-\mu_1|(dy) \right)^\alpha.$$
\finRU
\debutL
\item $C_{\ell} := \sup_{x\in \mathbb{R}^d} \dfrac{|x|^{\ell}}{1+V(x)}< \infty.$
\finL
\begin{remark}
Let us observe that by (\ref{int_of_V}) and \textbf{L3},
we have
\begin{equation} \label{l_moment_finite}
\int_{\mathbb{R}^d} |x|^\ell \mu(dx)< \infty.
\end{equation}
\end{remark}
We are now ready to provide the Strong Law of Large Numbers and the Central Limit Theorem in this context.
\begin{teo}[LLN for Markov chains]\label{llnmarkov}
Let $\ell\geq 0$ and let $X_i, \,i\geq1$ be a Markov chain with initial law $\nu_1$ and transition kernel $P$ that we assume to satisfy \textbf{L\ref{D1}}, \textbf{L2} and \textbf{L3}. Let $\mu$ denote its unique invariant probability measure and let us define
$$\mu_N :=\frac{1}{N}\sum_{i=1}^{N} \delta_{X_i}.$$
Then $$W_\ell\left(\mu_N, \mu\right) \underset{N}{\longrightarrow} 0 \quad a.s.$$
\begin{proof}
Let us first prove that $\mu_{N} \rightharpoonup \mu$ a.s. It can be proved that for probability measures it is possible to test the weak convergence over the continuous functions with compact support (see for instance Corollary 30.9 in \cite{Bauer}). Therefore
$$\mathbb{P} \left( \mu_{N}(f)\underset{N}{\rightarrow} \mu(f)\,\, \forall f\in C_b(\mathbb{R}^d)\right) = \mathbb{P} \left( \mu_{N}(f)\underset{N}{\rightarrow} \mu(f)\,\, \forall f\in C_c(\mathbb{R}^d)\right) $$
where $C_b(\mathbb{R}^d)$ denotes the set of bounded continuous functions and $C_c(\mathbb{R}^d)$ the set of continuous functions with compact support. Since $C_c(\mathbb{R}^d)$ is separable with respect to the infinity norm, we can apply Theorem \ref{large_numbers} to deduce that the right-hand side is equal to $1$, and so the almost sure weak convergence of $\mu_{N}$ to $\mu$ is proved. If $\ell >0$, to conclude the proof of the Wasserstein convergence we need to prove the convergence of the $\ell$-th moment. Since (\ref{l_moment_finite}) holds,
we can again apply Theorem \ref{large_numbers} and deduce that
$$ \lim_{N\rightarrow \infty}\int_{\mathbb{R}^d}|x|^\ell\mu_{N}(dx) = \int_{\mathbb{R}^d}|x|^\ell\mu(dx) \quad a.s.$$
\end{proof}
\end{teo}
\begin{teo}[CLT for Markov chains] \label{main_teo}
With the same notation as in Theorem \ref{llnmarkov},
let us assume that the transition kernel $P$ satisfies \textbf{L\ref{D1}}, \textbf{L2'} and \textbf{L3}. Let us moreover assume \textbf{RU\ref{functional_derivative2}-\ref{derivative_conv2}} and \textbf{RU\ref{growth_diff}}.\\
Then
$$ \sqrt{N}\left(U\left(\mu_{N} \right)-U\left(\mu \right)\right) \overset{d}{\Longrightarrow} \mathcal{N}\left(0, \mu\left(F^2\left(\mu,\cdot\right) \right) - \mu\left( \left( PF\right)^2\left(\mu,\cdot \right)\right) \right) $$
where $F\left( \mu,\cdot\right)$ denotes the unique (up to an additive constant) solution of the Poisson equation
$$F(\mu,x) - PF(\mu,x) = \frac{\delta U}{\delta m}\left(\mu,x \right)-\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,y \right)\mu(dy), \quad x\in \mathbb{R}^d.$$
\end{teo}
In what follows, we provide an example of a functional satisfying the Hypotheses \textbf{RU\ref{functional_derivative2}-\ref{derivative_conv2}} and \textbf{RU\ref{growth_diff}}.
\begin{ex}(U-statistics)
Let $\ell> 0$, $n\in\mathbb{N}\setminus \left\lbrace0,1 \right\rbrace $ and $\phi:(\mathbb{R}^{d})^n\rightarrow \mathbb{R}$ be a symmetric continuous function such that
\begin{equation}\label{ipotesi_phi}
\lim_{|x_1|\rightarrow \infty} \sup_{(x_2,\cdots,x_n)\in (\mathbb{R}^d)^{n-1}}\dfrac{|\phi(x_1,x_2,\cdots,x_n)|}{\prod_{i=1}^{n}(1+|x_i|^\frac{\ell}{2})}=0.
\end{equation}
We consider the function on $\mathcal{P}_\ell(\mathbb{R}^d)$ defined by
$$U(\mu):=\int_{(\mathbb{R}^{d})^n}\phi(x_1,\cdots,x_n)\mu(dx_1)\cdots\mu(dx_n).$$
It is possible to prove (see Example 2.7 \cite{Jourdain}) that its linear functional derivative is defined for each $\mu\in \mathcal{P}_\ell(\mathbb{R}^d)$ and it is given by
$$\frac{\delta U}{\delta m}(\mu,x_1)=n\int_{(\mathbb{R}^{d})^{n-1}} (\phi(x_1,x_2,\cdots,x_n)-\phi(0,x_2,\cdots,x_n))\mu(dx_2)\cdots\mu(dx_n), \,\, x_1\in \mathbb{R}^d.$$
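For a concrete instance (our illustration, not part of the cited example), take $d=1$, $n=2$ and $\phi(x,y)=\frac{1}{2}(x-y)^2$, so that $U(\mu)=\mathrm{Var}(\mu)$; condition (\ref{ipotesi_phi}) then holds for any $\ell>4$ since $|\phi(x,y)|\leq x^2+y^2$:

```latex
% Variance functional: U(mu) = (1/2) int (x-y)^2 mu(dx) mu(dy) = Var(mu).
% Writing m(mu) = int x mu(dx), the formula above gives
\frac{\delta U}{\delta m}(\mu,x_1)
= 2\int_{\mathbb{R}} \left( \tfrac{1}{2}(x_1-x_2)^2 - \tfrac{1}{2}\,x_2^2 \right) \mu(dx_2)
= x_1^2 - 2\,x_1\, m(\mu).
```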
By the symmetry of $\phi$ and (\ref{ipotesi_phi}), $\forall \epsilon>0$ $\exists M_{\epsilon}$ such that for $|x_i|> M_{\epsilon}$
\begin{equation} \label{deflimite}
\sup_{(x_1,\cdots,x_{i-1},x_{i+1},\cdots,x_n)\in (\mathbb{R}^d)^{n-1}}\dfrac{|\phi(x_1,x_2,\cdots,x_n)|}{\prod_{j=1}^{n}(1+|x_j|^\frac{\ell}{2})} \leq \epsilon.
\end{equation}
Moreover by (\ref{deflimite}) and the continuity of $\phi$, $\forall$ $(x_1,\cdots,x_n)\in (\mathbb{R}^d)^n$ one has
\begin{align}
\left| \phi(x_1,x_2,\cdots,x_n)\right| &=\left| \phi(x_1,x_2,\cdots,x_n)\right| 1_{\left\|x\right\|_\infty > M_1 } + \left| \phi(x_1,x_2,\cdots,x_n)\right| 1_{\left\|x\right\|_\infty \leq M_1 }\\
& \leq \prod_{i=1}^{n}(1+|x_i|^\frac{\ell}{2}) + C \label{estimation}
\end{align}
for some positive constant $C<\infty$.
We are now going to verify that \textbf{RU\ref{growth2}-\ref{derivative_conv2}} and \textbf{RU\ref{growth_diff}} are satisfied.\\
Let us fix $\mu\in \mathcal{P}_\ell(\mathbb{R}^d)$ and $r>0$. We recall that $B(\mu,r)$ denotes the ball centered at $\mu$ with radius $r$ for the metric $W_\ell$.
\begin{itemize}
\item [\textbf{RU2.}]
Let $(\tilde{\mu},x_1) \in B(\mu,r)\times \mathbb{R}^d$. Using the estimate (\ref{estimation}) we obtain
\begin{align*}
\left|\dfrac{\delta U}{\delta m}(\tilde{\mu},x_1) \right|&\leq n\int_{(\mathbb{R}^{d})^{n-1}}\left( \left| \phi(x_1,x_2,\cdots,x_n)\right|+ \left| \phi(0,x_2,\cdots,x_n)\right|\right) \tilde{\mu}(dx_2)\cdots\tilde{\mu}(dx_n)\\
&\leq n\int_{(\mathbb{R}^{d})^{n-1}}\left( \prod_{i=1}^{n}(1+|x_i|^\frac{\ell}{2})+ \prod_{i=2}^{n}(1+|x_i|^\frac{\ell}{2})\right) \tilde{\mu}(dx_2)\cdots\tilde{\mu}(dx_n)+ 2Cn\\
&\leq D(1+|x_1|^\frac{\ell}{2})
\end{align*}
for some positive constant $D<\infty$.
\item[\textbf{RU3.}] To study the behavior as $W_\ell(\tilde{\mu},\mu)$ goes to $0$, let $\pi\in \mathcal{P}(\mathbb{R}^{2d})$ be a coupling with $\pi\left(\cdot\times \mathbb{R}^{d} \right)=\tilde{\mu}(\cdot)$ and $\pi\left(\mathbb{R}^{d}\times \cdot \right)=\mu(\cdot)$. One has
\begin{align}
\sup_{x_1\in \mathbb{R}^d}& \dfrac{|\frac{\delta U}{\delta m}(\tilde{\mu},x_1)-\frac{\delta U}{\delta m}(\mu,x_1)|}{1+|x_1|^\frac{\ell}{2}}\\
&\leq n \sup_{x_1\in \mathbb{R}^d}\dfrac{\int_{(\mathbb{R}^{d})^{2(n-1)}} \left| \phi(x_1,x_2,\cdots,x_n)- \phi(x_1,s_2,\cdots,s_n)\right| \prod_{i=2}^{n}\pi(dx_i,ds_i)}{1+|x_1|^\frac{\ell}{2}} \\
&+n \int_{(\mathbb{R}^{d})^{2(n-1)}} \left| \phi(0,x_2,\cdots,x_n)- \phi(0,s_2,\cdots,s_n)\right| \prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\leq 2n \sup_{x_1\in \mathbb{R}^d}\dfrac{\int_{(\mathbb{R}^{d})^{2(n-1)}} \left| \phi(x_1,x_2,\cdots,x_n)- \phi(x_1,s_2,\cdots,s_n)\right| \prod_{i=2}^{n}\pi(dx_i,ds_i)}{1+|x_1|^\frac{\ell}{2}}. \label{termine_conv}
\end{align}
Let $\epsilon >0$. Observing that $\frac{1}{\left|y\right|}\leq \epsilon$ for $\left|y\right|\geq \frac{1}{\epsilon}$, let us define $\tilde{M}_\epsilon:=\max( \frac{1}{\epsilon}, M_{\epsilon})$. The function $\phi$ is uniformly continuous on $B=\left\lbrace x\in(\mathbb{R}^d)^{n}: \left\| x\right\|_\infty\leq \tilde{M}_\epsilon\right\rbrace$. Therefore $\exists \eta_{\epsilon,\tilde{M}_\epsilon}>0$ such that for $x, \tilde{x} \in B$ satisfying $\max_{2\leq i \leq n}|x_i - \tilde{x}_i|\leq \eta_{\epsilon,\tilde{M}_\epsilon}$,
\begin{equation}\label{unifcontinuity}
\left|\phi(x_1,x_2,\cdots,x_n)-\phi(x_1,\tilde{x}_2,\cdots,\tilde{x}_n) \right|\leq \epsilon.
\end{equation}
If we denote the vectors $(x_2,\cdots,x_n)$ and $(s_2,\cdots,s_n)$ respectively by $x_{2:n}$ and $s_{2:n}$, it is possible to rewrite the integral in (\ref{termine_conv}) in the following way ($\star$)
\begin{align*}
\int_{}& \left| \phi(x_1,x_{2:n})- \phi(x_1,s_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon}1_{\left\| x_{2:n} - s_{2:n}\right\|_\infty \leq \eta_{\epsilon,\tilde{M}_\epsilon}} \prod_{i=2}^{n}\pi(dx_i,ds_i)\\
& \phantom{=}+2\int_{} \left| \phi(x_1,x_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon}1_{\left\| x_{2:n} - s_{2:n}\right\|_\infty> \eta_{\epsilon,\tilde{M}_\epsilon}}\prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\phantom{=}+\int_{}\left| \phi(x_1,x_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty> \tilde{M}_\epsilon}\prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\phantom{=}+\int_{}\left| \phi(x_1,s_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty> \tilde{M}_\epsilon}\prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\phantom{=}+\int_{}\left| \phi(x_1,x_{2:n})\right| 1_{\left\|(x_1,x_{2:n})\right\|_\infty> \tilde{M}_\epsilon } \prod_{i=2}^{n}\pi(dx_i,ds_i)+\int_{}\left| \phi(x_1,s_{2:n})\right| 1_{\left\|(x_1,x_{2:n})\right\|_\infty> \tilde{M}_\epsilon } \prod_{i=2}^{n}\pi(dx_i,ds_i).
\end{align*}
Let us now bound one by one the terms appearing in the above expression ($\star$).
\begin{enumerate}
\item By (\ref{unifcontinuity})
$$ \int_{} \left| \phi(x_1,x_{2:n})- \phi(x_1,s_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon}1_{\left\| x_{2:n} - s_{2:n}\right\|_\infty \leq \eta_{\epsilon,\tilde{M}_\epsilon}} \prod_{i=2}^{n}\pi(dx_i,ds_i) \leq \epsilon. $$
\item By the continuity of $\phi$ and Markov's inequality
\begin{align*}
2&\int_{} \left| \phi(x_1,x_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon}1_{\left\| x_{2:n} - s_{2:n}\right\|_\infty> \eta_{\epsilon,\tilde{M}_\epsilon}}\prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\leq 2\cdot\sup_{ \left\|x\right\|_\infty\leq \tilde{M}_\epsilon }\left| \phi(x)\right|\cdot\sum_{i=2}^{n}\int_{}1_{|x_i - s_i|> \eta_{\epsilon,\tilde{M}_\epsilon}}\pi(dx_i,ds_i)\leq \frac{2 }{\eta_{\epsilon,\tilde{M}_\epsilon}^\ell}\cdot\sup_{ \left\|x\right\|_\infty\leq \tilde{M}_\epsilon}\left| \phi(x)\right|\cdot \sum_{i=2}^{n}\int_{}|x_i - s_i|^\ell\pi(dx_i,ds_i).
\end{align*}
\item By (\ref{estimation}), Markov's inequality, Cauchy-Schwarz inequality and the fact that $\frac{1}{\tilde{M}_\epsilon}\leq \epsilon$
\begin{align*}
\int_{}&\left| \phi(x_1,x_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty> \tilde{M}_\epsilon}\prod_{i=2}^{n}\pi(dx_i,ds_i) \leq \sum_{i=2}^{n}\int_{}(C+\prod_{j=1}^{n} (1+|x_j|^\frac{\ell}{2})) 1_{|s_i| > \tilde{M}_\epsilon}\prod_{j=2}^{n}\pi(dx_j,ds_j) \\
&\leq\epsilon^\ell(n-1)C\int_{}|s|^\ell \mu(ds)+\epsilon^\frac{\ell}{2}(n-1)(1+|x_1|^\frac{\ell}{2})\left(\int_{}(1+|x|^\frac{\ell}{2})^2\tilde{\mu}(dx) \right)^\frac{n-1}{2}\left( \int |s|^\ell \mu(ds)\right) ^\frac{1}{2}.
\end{align*}
\item Similarly to the previous point
\begin{align*}
\int_{}&\left| \phi(x_1,s_{2:n})\right|1_{\left\|(x_1,x_{2:n})\right\|_\infty\leq \tilde{M}_\epsilon }1_{\left\|(x_1,s_{2:n})\right\|_\infty> \tilde{M}_\epsilon}\prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\leq \sum_{i=2}^{n}\int_{}(C+(1+|x_1|^\frac{\ell}{2})\prod_{j=2}^{n} (1+|s_j|^\frac{\ell}{2})) 1_{|s_i| > \tilde{M}_\epsilon}\prod_{j=2}^{n}\pi(dx_j,ds_j)\\
&\leq \epsilon^\ell(n-1)C\int|s|^\ell \mu(ds)+\epsilon^\frac{\ell}{2}(n-1)(1+|x_1|^\frac{\ell}{2})\left(\int_{}(1+|s|^\frac{\ell}{2})^2\mu(ds) \right)^\frac{n-1}{2}\left( \int |s|^\ell \mu(ds)\right) ^\frac{1}{2}
\end{align*}
\item Finally by (\ref{deflimite})
\begin{align*}
\int_{}&\left| \phi(x_1,x_{2:n})\right| 1_{\left\|(x_1,x_{2:n})\right\|_\infty> \tilde{M}_\epsilon } \prod_{i=2}^{n}\pi(dx_i,ds_i)+\int_{}\left| \phi(x_1,s_{2:n})\right| 1_{\left\|(x_1,x_{2:n})\right\|_\infty> \tilde{M}_\epsilon } \prod_{i=2}^{n}\pi(dx_i,ds_i)\\
&\leq \epsilon(1+|x_1|^\frac{\ell}{2})\left( \int_{}\left( 1+|x|^\frac{\ell}{2}\right) \tilde{\mu}(dx)\right)^{n-1} +\epsilon(1+|x_1|^\frac{\ell}{2})\left( \int_{}\left( 1+|s|^\frac{\ell}{2}\right) \mu(ds)\right)^{n-1}.
\end{align*}
\end{enumerate}
Therefore, plugging all these estimates into (\ref{termine_conv}) and choosing $\pi$ as the optimal $W_\ell$ coupling between $\tilde{\mu}$ and $\mu$, we obtain
\begin{align*}
\sup_{x_1\in \mathbb{R}^d}& \dfrac{|\frac{\delta U}{\delta m}(\tilde{\mu},x_1)-\frac{\delta U}{\delta m}(\mu,x_1)|}{1+|x_1|^\frac{\ell}{2}}\\
&\leq D \left( \sup_{ \left\|x\right\|_\infty\leq \tilde{M}_\epsilon }\left| \phi(x)\right|\cdot \frac{W^{\ell\wedge 1}_\ell(\tilde{\mu},\mu)}{\eta_{\epsilon,\tilde{M}_\epsilon}^\ell} + (\epsilon+\epsilon^\frac{\ell}{2}+\epsilon^\ell)\left( \left(\int_{}(1+|x|^\frac{\ell}{2})^2\tilde{\mu}(dx) \right)^\frac{n-1}{2}+1 \right)\right)
\end{align*}
for a positive constant $D$ depending neither on $\epsilon$ nor on $\tilde{\mu}$.
For fixed $\epsilon$, let $W_\ell(\tilde{\mu},\mu)$ converge to $0$. Then $\int_{}| x|^\ell\tilde{\mu}(dx)$ converges to $\int_{}| s|^\ell\mu(ds)$ and $\int_{}(1+|x|^\frac{\ell}{2})^2\tilde{\mu}(dx)$ converges to $\int_{}(1+|s|^\frac{\ell}{2})^2\mu(ds)$. It is then possible to conclude that the left-hand side goes to $0$ by letting $\epsilon$ go to $0$.
\item[\textbf{RU6.}] Let $\mu_1,\mu_2\in B(\mu,r)$. By (\ref{estimation}) we have
\begin{align*}
\sup_{x_1\in \mathbb{R}^d}&
\frac{\left| \dfrac{\delta U}{\delta m}(\mu_2,x_1)-\dfrac{\delta U}{\delta m}(\mu_1,x_1)\right| }{1+|x_1|^{\frac{\ell}{2}}}\\
&\leq 2n\sup_{x_1\in \mathbb{R}^d} \frac{ \int_{(\mathbb{R}^{d})^{n-1}} \left| \phi(x_1,x_2,\cdots,x_n)\right|\left| \mu_2(dx_2)\cdots\mu_2(dx_n)-\mu_1(dx_2)\cdots\mu_1(dx_n)\right| }{1+|x_1|^{\frac{\ell}{2}}}\\
&\leq 2n\sup_{x_1\in \mathbb{R}^d} \frac{\sum_{k=2}^{n} \int_{(\mathbb{R}^{d})^{n-1}} \left(C+\prod_{i=1}^{n}(1+|x_i|^\frac{\ell}{2}) \right) \mu_2(dx_{k+1})\cdots\mu_2(dx_n)\left| \mu_2-\mu_1\right|(dx_k) \mu_1(dx_2)\cdots\mu_1(dx_{k-1}) }{1+|x_1|^{\frac{\ell}{2}}}\\
&\leq \tilde{C}\int_{\mathbb{R}^{d}}(1+|x|^\frac{\ell}{2})\left| \mu_2-\mu_1\right|(dx)
\end{align*}
for a constant $\tilde{C}<\infty$ not depending on $(\mu_1,\mu_2)$.
\end{itemize}
\end{ex}
\section{Proof of the Results}
In this section we provide the proofs of Theorem \ref{independent_case} and Theorem \ref{main_teo}. We start with the Markov chain case, since it is more complex, and then give the proof of the independent non-equidistributed case, where the arguments common to both cases are, where possible, not repeated.
\subsection{Markov Chains}
In the proof of the theorem we will need the integral of $V$ with respect to $\nu_1$ to be finite. The next lemma allows us to assume this in the proof.
\begin{lemma} \label{int_V}
If Theorem \ref{main_teo} holds for each $\nu_1 \in \mathcal{P}(\mathbb{R}^d)$ such that $\nu_1(V)<\infty$, then it holds for all $\nu_1 \in \mathcal{P}(\mathbb{R}^d)$.
\end{lemma}
\begin{proof}
If $\nu_1(V)=\infty$, for $K> \inf_{x\in \mathbb{R}^d}V(x)$ let us consider $x_0\in \mathbb{R}^d$ such that $V\left(x_0\right)\leq K$ and define $\phi_K(x)= x1_{\left\lbrace V(x)\leq K\right\rbrace }+x_01_{\left\lbrace V(x)> K\right\rbrace }$.\\
If we now consider the measure $\nu^K_1(A):= \nu_1(\phi_K^{-1}(A))$ for $A\in \mathcal{B}\left(\mathbb{R}^d \right)$, then
$$ \nu^K_1(V) = \int_{\mathbb{R}^d}V(x)\nu^K_1(dx) =\int_{\mathbb{R}^d}V(\phi_K(x))\nu_1(dx)=\int_{\mathbb{R}^d}\left( V(x)1_{\left\lbrace V(x)\leq K \right\rbrace}+V(x_0)1_{\left\lbrace V(x)> K \right\rbrace}\right)\nu_1(dx)\leq K. $$
Moreover
\begin{align*}
d_{TV}\left( \nu_1^K,\nu_1\right)&= \sup_{\psi:\left\| \psi\right\|_\infty\leq \frac{1}{2}}\left|\int_{\mathbb{R}^d}\psi(x)\nu_1^K(dx)-\int_{\mathbb{R}^d}\psi(x)\nu_1(dx)\right|\\
&= \sup_{\psi:\left\| \psi\right\|_\infty\leq \frac{1}{2}}\left|\int_{\mathbb{R}^d}\left( \psi(x_0)-\psi(x)\right)1_{\left\lbrace V(x)>K \right\rbrace} \nu_1(dx)\right| \leq \nu_1\left( \left\lbrace x:V(x)>K \right\rbrace \right).
\end{align*}
Let us consider now the Markov chain with initial law $\nu_1^K$ and we denote by $\mathbb{P}_{\nu_1^K}$ the law of the process ($\mathbb{P}_{\nu_1}$ will denote the law of the original Markov chain). Since the initial law of a Markov chain determines the law of the entire process, one has that $d_{TV}\left(\mathbb{P}_{\nu_1},\mathbb{P}_{\nu_1^K}\right)=d_{TV}\left(\nu_1,\nu_1^K \right)$. \\
Therefore if we are able to prove that for all bounded, continuous functions $f$
$$\lim_{N\to\infty}\mathbb{E}_{\nu_1^K}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right) = \mathbb{E}\left(f(\mathcal{G}) \right)$$
where $\mathcal{G}$ is a Gaussian random variable not depending on $K$, then
\begin{align*}
&\left| \mathbb{E}_{\nu_1}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right) - \mathbb{E}\left(f(\mathcal{G}) \right)\right| \\
&\phantom{\mathbb{E}_{\nu_1}}\leq \left| \mathbb{E}_{\nu_1}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right) - \mathbb{E}_{\nu_1^K}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right)\right|\\
&\phantom{\mathbb{E}_{\nu_1}\leq}+ \left| \mathbb{E}_{\nu_1^K}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right)- \mathbb{E}\left(f(\mathcal{G}) \right)\right|\\
&\phantom{\mathbb{E}_{\nu_1}}\leq 2\left\|f\right\|_\infty d_{TV}\left(\mathbb{P}_{\nu_1},\mathbb{P}_{\nu_1^K} \right)+ \left| \mathbb{E}_{\nu_1^K}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right)- \mathbb{E}\left(f(\mathcal{G}) \right)\right|\\
&\phantom{\mathbb{E}_{\nu_1}}=2\left\|f\right\|_\infty d_{TV}\left(\nu_1,\nu_1^K \right)+ \left| \mathbb{E}_{\nu_1^K}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right)- \mathbb{E}\left(f(\mathcal{G}) \right)\right|.
\end{align*}
Since $d_{TV}\left(\nu_1,\nu_1^K \right)\leq\nu_1\left( \left\lbrace x:V(x)>K \right\rbrace \right)$, we can first take the limit superior as $N\rightarrow \infty$ to obtain that
$$ \limsup_{N\rightarrow \infty}\left| \mathbb{E}_{\nu_1}\left(f\left(\sqrt{N}\left(U(\mu_N)-U(\mu) \right) \right) \right) - \mathbb{E}\left(f(\mathcal{G}) \right)\right|\leq 2\left\|f\right\|_\infty\nu_1\left( \left\lbrace x:V(x)>K \right\rbrace \right).$$
We can then conclude that the left-hand side is equal to $0$ by letting $K$ tend to infinity.
\end{proof}
The proof of the theorem relies on the Poisson equation, whose explicit solution and norm estimates are studied in the following lemma.
\begin{lemma}
Under the assumptions of the previous theorem, for a given $m\in B(\mu,r)$ let us consider the following Poisson equation
\begin{equation} \label{poisson_trm}
F(m,x) - PF(m,x) = \frac{\delta U}{\delta m}\left(m,x \right)-\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(m,y \right)\mu(dy), \quad x\in \mathbb{R}^d.
\end{equation}
Then (\ref{poisson_trm}) admits a solution given by $F(m,\cdot) =\sum_{n\in \mathbb{N}}(P^n\frac{\delta U}{\delta m}\left(m,\cdot \right)-\mu(\frac{\delta U}{\delta m}\left(m,\cdot \right)))$.\\
Moreover
\begin{itemize}
\item For $m\in B(\mu,r)$ \begin{equation} \label{norm_F}
\left\|F\left(m,\cdot \right) \right\|_{\sqrt{V},1}\leq \bar{C}_\frac{1}{2}
\end{equation}
\item For $m_1, m_2 \in B(\mu,r)$
\begin{equation} \label{norm_diff_F_1}
\left\|F\left(m_1,\cdot \right) - F\left(m_2,\cdot \right) \right\|_{\sqrt{V},1} \leq \bar{C}_\frac{1}{2} \sup_{x\in \mathbb{R}^d} \frac{\left|\frac{\delta U}{\delta m} \left(m_1,x \right) -\frac{\delta U}{\delta m} \left(m_2,x \right) \right| }{1+ \sqrt{V(x)}}
\end{equation}
and so for $\alpha \in (\frac{1}{2},1]$
\begin{equation} \label{norm_diff_F}
\left\|F\left(m_1,\cdot \right) - F\left(m_2,\cdot \right) \right\|_{\sqrt{V},1} \leq\bar{C}_\frac{1}{2}\left( \int_{\mathbb{R}^d}(1+|y|^{\frac{\ell}{2\alpha}}) |m_2-m_1|(dy) \right)^\alpha
\end{equation}
with $\bar{C}_\frac{1}{2}$ a finite constant not depending on $m$, $m_1$ and $m_2$.
\end{itemize}
\end{lemma}
\begin{proof}
By Proposition \ref{poisson_sqrt}, to ensure the existence of the solution, it is sufficient to check that $\left( \frac{\delta U}{\delta m}\left(m,\cdot \right)\right) ^2 \in \mathcal{V}_{V}.$
By Hypothesis \textbf{RU\ref{growth2}} and \textbf{L3}
$$\sup_{x\in \mathbb{R}^d} \frac{\left( \frac{\delta U}{\delta m}\left(m,x \right)\right) ^2 }{1+V(x)}\leq 2C^2\sup_{x\in \mathbb{R}^d} \frac{1+|x|^\ell}{1+V(x)} \leq 2C^2\left(1+C_\ell \right) < \infty. $$
Let us now estimate the norm $\left\|\cdot\right\|_{\sqrt{V},1}$ of $F(m,\cdot)$. By Proposition \ref{poisson_sqrt}, one has
\begin{align*}
\left\|F\left(m,\cdot \right) \right\|_{\sqrt{V},1} &= \lim_{n\rightarrow \infty} \left\|\sum_{k=0}^{n} P^k\frac{\delta U}{\delta m}\left(m,\cdot \right)-\mu(\frac{\delta U}{\delta m}\left(m,\cdot \right)) \right\|_{\sqrt{V},1}\\
&\leq \sum_{k=0}^{\infty}\left\| P^k\frac{\delta U}{\delta m}\left(m,\cdot \right)-\mu(\frac{\delta U}{\delta m}\left(m,\cdot \right)) \right\|_{\sqrt{V},1}.
\end{align*}
Let $\beta\in \left( 0,\frac{\rho}{\sqrt{K}}\right)$, then again by Proposition \ref{poisson_sqrt} and Hypothesis \textbf{RU\ref{growth2}}
\begin{align*}
\left\| P^k\frac{\delta U}{\delta m}\left(m,\cdot \right)-\mu(\frac{\delta U}{\delta m}\left(m,\cdot \right)) \right\|_{\sqrt{V},1}&= \sup_{x\in \mathbb{R}^d}\dfrac{\left| P^k\frac{\delta U}{\delta m}\left(m,x \right)-\mu(\frac{\delta U}{\delta m}\left(m,\cdot \right))\right| }{1+\sqrt{V(x)}}\leq D\chi^k \sup_{x\in \mathbb{R}^d}\frac{\left| \frac{\delta U}{\delta m}\left(m,x \right)\right| }{1+\beta \sqrt{V(x)}}\\
& \leq D\chi^k C\left( 1+ \sup_{x\in \mathbb{R}^d}\frac{|x|^\frac{\ell}{2}}{1+\beta \sqrt{V(x)}} \right)\leq DC \chi^k \left(1+ \frac{\sqrt{C_\ell} }{\min(\beta,1)} \right)
\end{align*}
where to obtain the last inequality we used \textbf{L3}.
Therefore we have obtained that
$$ \left\|F\left(m,\cdot \right) \right\|_{\sqrt{V},1} \leq \frac{1}{1-\chi} DC \left(1+ \frac{\sqrt{C_\ell} }{\min(\beta,1)} \right). $$
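In particular, (\ref{norm_F}) holds with, for instance, the explicit constant
$$ \bar{C}_\frac{1}{2} := \frac{DC}{1-\chi} \left(1+ \frac{\sqrt{C_\ell} }{\min(\beta,1)} \right), $$
which indeed does not depend on $m$; the computation below shows that the same constant can be taken in (\ref{norm_diff_F_1}) and (\ref{norm_diff_F}).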
Finally, let $m_1, m_2 \in B(\mu,r)$; proceeding as above, by Proposition \ref{poisson_sqrt} we have
$$\left\|F\left(m_1,\cdot \right) - F\left(m_2,\cdot \right) \right\|_{\sqrt{V},1} \leq \sum_{k=0}^{\infty}\left\|P^k\left( \frac{\delta U}{\delta m}\left(m_1,\cdot \right)-\frac{\delta U}{\delta m}\left(m_2,\cdot \right) \right)- \mu\left(\frac{\delta U}{\delta m}\left(m_1,\cdot \right)-\frac{\delta U}{\delta m}\left(m_2,\cdot \right) \right) \right\|_{\sqrt{V},1}. $$
Let $\beta \in (0,\frac{\rho}{\sqrt{K}})$, then by Proposition \ref{poisson_sqrt} and Hypothesis \textbf{RU\ref{growth_diff}}\\
\begin{align*}
&\left\|P^k\left( \frac{\delta U}{\delta m}\left(m_1,\cdot \right)-\frac{\delta U}{\delta m}\left(m_2,\cdot \right) \right)- \mu\left(\frac{\delta U}{\delta m}\left(m_1,\cdot \right)-\frac{\delta U}{\delta m}\left(m_2,\cdot \right) \right) \right\|_{\sqrt{V},1}\\
&\phantom{P^k\quad\quad}= \sup_{x\in \mathbb{R}^d} \dfrac{\left| P^k\left( \frac{\delta U}{\delta m}\left(m_1,x \right)-\frac{\delta U}{\delta m}\left(m_2,x \right) \right)- \mu\left(\frac{\delta U}{\delta m}\left(m_1,\cdot \right)-\frac{\delta U}{\delta m}\left(m_2,\cdot \right) \right)\right| }{1+\sqrt{V(x)}}\\
&\phantom{P^k\quad\quad}\leq D\chi^k \sup_{x\in \mathbb{R}^d}\frac{\left| \frac{\delta U}{\delta m}\left(m_1,x \right)-\frac{\delta U}{\delta m}\left(m_2,x \right)\right|} {1+\beta \sqrt{V(x)}}\\
&\phantom{P^k\quad\quad}\leq DC\chi^k\sup_{x\in \mathbb{R}^d} \frac{1+|x|^{\frac{\ell}{2}}}{1+\beta \sqrt{V(x)}}\left( \int_{\mathbb{R}^d}(1+|y|^{\frac{\ell}{2\alpha}}) |m_2-m_1|(dy) \right)^\alpha\\ &\phantom{P^k\quad\quad}=DC\chi^k\left(1+\frac{\sqrt{C_\ell}}{\min\left(\beta,1\right) } \right) \left( \int_{\mathbb{R}^d}(1+|y|^{\frac{\ell}{2\alpha}}) |m_2-m_1|(dy) \right)^\alpha
\end{align*}
where we used \textbf{L3} to obtain the last equality. Therefore we have
$$ \left\|F\left(m_1,\cdot \right) - F\left(m_2,\cdot \right) \right\|_{\sqrt{V},1}\leq DC\frac{1}{1-\chi}\left(1+\frac{\sqrt{C_\ell}}{\min\left(\beta,1\right) } \right)\left( \int_{\mathbb{R}^d}(1+|y|^{\frac{\ell}{2\alpha}}) |m_2-m_1|(dy) \right)^\alpha. $$
\end{proof}
\begin{proof}[Proof of Theorem \ref{main_teo}]
\textbf{FIRST STEP}\quad
Let us first recall that, by Theorem \ref{llnmarkov}, $W_\ell\left(\mu_N, \mu\right) \underset{N}{\longrightarrow} 0$ a.s.
To study the limit distribution of $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu \right)\right)$, let us define for $i=1,\cdots,N$ and $s\in \left[ 0,1\right] $
$$\mu_{N}^{i,s} = \frac{1}{N} \sum_{j=1}^{i-1}\delta_{X_j}+\frac{s}{N}\delta_{X_i}+\left(1+\frac{1-i-s}{N} \right)\mu.$$
We have $\mu_{N}^{N,1}= \mu_{N} $ and
$\mu_{N}^{1,0}=\mu$. Moreover, since for $i=1,\cdots,N-1$
$$\mu_{N}^{i,1} = \mu_{N}^{i+1,0}, $$
one has
$$\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu\right) \right) = \sqrt{N} \sum_{i=1}^{N}\left(U(\mu_{N}^{i,1}) -U(\mu_{N}^{i,0}) \right) .$$
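For the reader's convenience, the identity above is just a telescoping sum: since $\mu_{N}^{i,1}= \mu_{N}^{i+1,0}$ for $i=1,\cdots,N-1$,
$$ \sum_{i=1}^{N}\left(U(\mu_{N}^{i,1}) -U(\mu_{N}^{i,0}) \right) = U(\mu_{N}^{N,1}) - U(\mu_{N}^{1,0}) = U\left(\mu_N \right)- U\left(\mu\right).$$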
Let us now show that almost surely $\mu_{N}^{i,s}$ converges uniformly to $\mu$ with respect to the distance $W_\ell$ that is
\begin{equation} \label{unif_conv}
\max_{1\leq i\leq N} \sup_{s\in \left[0,1 \right]} W_\ell(\mu_{N}^{i,s},\mu) \underset{N\rightarrow\infty}{\rightarrow} 0 \quad a.s.
\end{equation}
For $i=1,\cdots,N$ and $s\in \left[0,1 \right]$
$$ \mu_{N}^{i,s} = s\mu_{N}^{i,1}+(1-s)\mu_{N}^{i-1,1}$$
under the convention $\mu_{N}^{0,1}=\mu_{N}^{1,0}$. Let $\pi \in \Pi(\mu_{N}^{i,1},\mu)$ and $\tilde{\pi}\in \Pi(\mu_{N}^{i-1,1},\mu)$ where $\Pi(\mu_1,\mu_2)= \left\lbrace \mu \in \mathcal{P}(\mathbb{R}^{2d}): \mu(\cdot\times \mathbb{R}^{d})=\mu_1(\cdot),\mu(\mathbb{R}^{d}\times\cdot)= \mu_2(\cdot)\right\rbrace $ for $\mu_1, \mu_2 \in\mathcal{P}_\ell(\mathbb{R}^{d})$ and define
$$ \bar{\pi}\left(dx,dy \right)= s\pi\left(dx,dy \right) + (1-s)\tilde{\pi}\left(dx,dy \right).$$
Then
\begin{align*}
&\bar{\pi}\left(dx,\mathbb{R}^d \right)= s\pi\left(dx,\mathbb{R}^d \right) + (1-s)\tilde{\pi}\left(dx,\mathbb{R}^d \right) = s\mu_{N}^{i,1}+(1-s)\mu_{N}^{i-1,1} = \mu_{N}^{i,s}\\
&\bar{\pi}\left(\mathbb{R}^d,dy \right)= s\pi\left(\mathbb{R}^d,dy \right) + (1-s)\tilde{\pi}\left(\mathbb{R}^d,dy \right) = s\mu + (1-s)\mu = \mu.
\end{align*}
Supposing first that $\ell>0$, we have:
\begin{equation}\label{magg0}
W_\ell^{\ell\vee 1}\left(\mu_{N}^{i,s},\mu \right)\leq \int_{\mathbb{R}^d \times \mathbb{R}^{d}} |x-y|^\ell \bar{\pi}\left(dx,dy \right) = s\int_{\mathbb{R}^d \times \mathbb{R}^{d}} |x-y|^\ell \pi\left(dx,dy \right) + (1-s)\int_{\mathbb{R}^d \times \mathbb{R}^{d}} |x-y|^\ell \tilde{\pi}\left(dx,dy \right).
\end{equation}
Taking the infimum over $\pi$ and $\tilde{\pi}$, we conclude that
$$ W_\ell^{\ell\vee 1}\left(\mu_{N}^{i,s},\mu \right)\leq sW_\ell^{\ell\vee 1}\left(\mu_{N}^{i,1},\mu \right) + (1-s)W_\ell^{\ell\vee 1}\left(\mu_{N}^{i-1,1},\mu \right)\leq W_\ell^{\ell\vee 1}\left(\mu_{N}^{i,1},\mu \right) \vee W_\ell^{\ell\vee 1}\left(\mu_{N}^{i-1,1},\mu \right) $$
and so
\begin{equation}\label{unifconv}
\max_{1\leq i\leq N} \sup_{s\in \left[0,1 \right]} W_\ell\left( \mu_{N}^{i,s},\mu\right) = \max_{0\leq i\leq N}W_\ell\left( \mu_{N}^{i,1},\mu \right).
\end{equation}
Now since for $i=1,\cdots, N$
$$ \mu_{N}^{i,1} = \frac{1}{N}\sum_{j=1}^{i}\delta_{X_j}+\left(1-\frac{i}{N} \right)\mu = \frac{i}{N}\mu_{i} + \left(1-\frac{i}{N} \right)\mu, $$
let $\gamma\in \Pi(\mu_{i},\mu)$ and define
$ \tilde{\gamma}(dx,dy)=\frac{i}{N}\gamma(dx,dy) +\left(1-\frac{i}{N} \right)\mu\left(dx \right) \delta_x(dy).$
Then
\begin{align*}
&\tilde{\gamma}(dx,\mathbb{R}^d) = \frac{i}{N}\gamma(dx,\mathbb{R}^d)+\left(1-\frac{i}{N} \right)\mu\left(dx \right) \delta_x(\mathbb{R}^d) = \frac{i}{N}\mu_{i}(dx)+\left(1-\frac{i}{N} \right)\mu\left(dx \right) = \mu_{N}^{i,1}(dx)\\
&\tilde{\gamma}(\mathbb{R}^d,dy) = \frac{i}{N}\gamma(\mathbb{R}^d,dy)+\left(1-\frac{i}{N} \right)\mu\left(dy \right)\delta_y(\mathbb{R}^d) = \frac{i}{N}\mu(dy)+ \left(1-\frac{i}{N} \right)\mu(dy) = \mu(dy).
\end{align*}
Therefore
\begin{equation}\label{magg}
W_\ell^{\ell\vee1}\left( \mu_{N}^{i,1},\mu\right)\leq \int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^\ell\tilde{\gamma}(dx,dy) = \frac{i}{N}\int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^\ell \gamma(dx,dy).
\end{equation}
Taking the infimum over $\gamma$, one has
$$W_\ell\left( \mu_{N}^{i,1},\mu\right)\leq \left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu_i,\mu\right). $$
Finally let $\alpha\in\left(0,1 \right) $ so that
\begin{align*}
\max_{0\leq i\leq N}W_\ell\left( \mu_{N}^{i,1},\mu \right)&\leq \max_{1\leq i\leq N} \left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1} W_\ell\left( \mu_i,\mu \right) \leq \max_{1\leq i\leq \lfloor\alpha N\rfloor} \left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1} W_\ell\left( \mu_i,\mu \right) +\max_{\lfloor\alpha N\rfloor< i\leq N}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1} W_\ell\left( \mu_i,\mu \right)\\
&\leq \alpha^\frac{1}{\ell\vee1}\sup_{i} W_\ell\left( \mu_i,\mu \right) + \max_{\lfloor\alpha N\rfloor< i\leq N} W_\ell\left( \mu_i,\mu \right).
\end{align*}
Since we have proved that $\lim_{N\to\infty}W_\ell(\mu_{N},\mu)=0$, for fixed $\alpha$ the last term goes to $0$ as $N$ goes to infinity while the first one is arbitrarily small for $\alpha$ small and so (\ref{unif_conv}) is proved.\\\\
By replacing $\left|x-y\right|^\ell$ with $1\wedge \left|x-y\right|$ in (\ref{magg0}) and (\ref{magg}), the same argument applies to the case $\ell=0$. \\\\
\textbf{SECOND STEP}\quad Let us define
$$I_N = \min\left\lbrace 1\leq i \leq N : \exists s\in\left[ 0,1\right] :W_\ell\left(\mu^{i,s}_{N},\mu \right)\geq r \right\rbrace $$
and let us introduce the filtration $\left(\mathcal{F}_i=\sigma\left(X_1,\cdots,X_i\right)\right)_{i\geq1}$, for which $I_N$ is a stopping time. According to the first step, under the convention $\min\emptyset=N+1$, $I_N$ is a.s. equal to $N+1$ for each $ N\geq N^*$, where $N^*$ is an integer-valued random variable. This stopping time allows us to introduce in the proof the linear functional derivative associated with $U$, since by Hypothesis \textbf{RU\ref{functional_derivative2}} it is well defined on the ball of radius $r$ and center $\mu$. \\
For $N\geq N^*$, by Lemma \ref{linearderivative}, we have
$$U\left(\mu_N \right)- U\left(\mu\right) = \sum_{i=1}^{N}\left(U(\mu_{N}^{i,1}) -U(\mu_{N}^{i,0}) \right) = \sum_{i=1}^{N} \int_{0}^{1}ds\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(\mu_{N}^{i,s},x) \frac{(\delta_{X_i}-\mu)(dx)}{N}. $$
Setting
$$Q_N :=\frac{1}{N}\sum_{i=1}^{N}\left( \frac{\delta U}{\delta m}(\mu_{N}^{i\wedge I_N,0},X_i) -\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(\mu_{N}^{i\wedge I_N,0},x)\mu(dx) \right),$$
we deduce that for $N\geq N^*$, $U\left(\mu_N \right)- U\left(\mu\right) - Q_N$ coincides with
$$R_N = \frac{1_{N\geq N^*}}{N}\sum_{i=1}^{N} \int_{0}^{1}ds\int_{\mathbb{R}^d} \left( \frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\right) (\delta_{X_i}-\mu)(dx).$$
Let us therefore consider the following decomposition:
$$\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu\right) \right) = \sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu\right) - Q_N \right) + \sqrt{N}Q_N. $$
We will see that the first term will go to $0$ in probability (\textbf{third step}) while the second one will converge in distribution to a normal random variable (\textbf{fourth step}). \\
\textbf{THIRD STEP}\quad Let us first prove the convergence in probability of $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu\right) - Q_N \right)$ to $0$.
By definition we need to prove that $\forall \epsilon>0$
$$\lim_{N\to\infty} \mathbb{P}\left(\sqrt{N}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N| \geq \epsilon\right) = 0. $$
One has
$$\left\lbrace \sqrt{N}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N| \geq \epsilon\right\rbrace$$
$$= \left( \left\lbrace \sqrt{N}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N|\geq \epsilon \right\rbrace\bigcap \left\lbrace N\geq N^* \right\rbrace \right) \overset{\cdot}{\bigcup} \left( \left\lbrace \sqrt{N}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N|\geq \epsilon \right\rbrace\bigcap \left\lbrace N<N^* \right\rbrace \right) $$
$$ \subseteq \left( \left\lbrace \sqrt{N}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N|\geq \epsilon \right\rbrace\bigcap \left\lbrace N\geq N^* \right\rbrace \right) \overset{\cdot}{\bigcup} \left\lbrace N<N^* \right\rbrace. $$
Therefore
$$\mathbb{P}\left(\sqrt{N}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N| \geq \epsilon\right)\leq \mathbb{P}\left(N<N^* \right)+\mathbb{P}\left(\sqrt{N}1_{N\geq N^*}|U\left(\mu_N \right)- U\left(\mu\right) - Q_N|\geq \epsilon\right). $$
Since $\lim_{N\to\infty} \mathbb{P}\left(N<N^* \right) =0, $ it is sufficient to prove the almost sure convergence of $\sqrt{N}R_N$ to $0$. One has \\
\begin{align*}
|R_N|&\leq \frac{1_{N\geq N^*}}{N}\sum_{i=1}^{N}\int_{0}^{1}ds\int_{\mathbb{R}^d} \bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert|\delta_{X_i}-\mu|(dx)\\
&\leq \frac{1_{N\geq N^*}}{N}\sum_{i=1}^{N}\int_{0}^{1}ds\int_{\mathbb{R}^d} \bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert(\delta_{X_i}+\mu)(dx).
\end{align*}
By Assumption \textbf{RU\ref{growth_diff}}, $\exists C < \infty$, $\exists \alpha \in \left(\frac{1}{2},1 \right] $ such that for $N\geq N^*$
$$ \bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert \leq C(1+|x|^{\frac{\ell}{2}})\left(\int_{\mathbb{R}^d}\left( 1+|y|^{\frac{\ell}{2\alpha}}\right) |\mu_{N}^{i,s}-\mu_{N}^{i,0}|(dy) \right)^\alpha $$
with
$$|\mu_{N}^{i,s} -\mu_{N}^{i,0}|(dy) \leq \frac{s}{N}(\delta_{X_i}+\mu)(dy).$$
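The latter bound follows directly from the definition of $\mu_{N}^{i,s}$: only the weight of the $i$-th atom and the weight of $\mu$ depend on $s$, so that
$$\mu_{N}^{i,s} -\mu_{N}^{i,0} = \frac{s}{N}\delta_{X_i}-\frac{s}{N}\mu,$$
and the total variation measure of the right-hand side is bounded by $\frac{s}{N}(\delta_{X_i}+\mu)$.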
Substituting the above quantity and using the subadditivity of $x\mapsto x^\alpha$ one obtains
\begin{align*}
\bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert &\leq C\left((1+|x|^{\frac{\ell}{2}})\left(2+\int_{\mathbb{R}^d}|y|^{\frac{\ell}{2\alpha}}(\delta_{X_i}+\mu)(dy) \right)^\alpha \left( \frac{s}{N}\right) ^\alpha \right)\\
&\leq \frac{C}{N^\alpha} (1+|x|^{\frac{\ell}{2}})\left(2^\alpha+|X_i|^\frac{ \ell}{2}+\left( \int_{\mathbb{R}^d}|y|^\frac{ \ell}{2\alpha}\mu(dy)\right)^\alpha \right)\\
& \leq \frac{C^*}{N^\alpha}\left(1+|x|^{\frac{\ell}{2}} \right) \left(1+|X_i|^\frac{ \ell}{2} \right)
\end{align*}
where we used (\ref{l_moment_finite}) to obtain the last inequality with $C^*$ a finite constant.
Therefore
\begin{align*}
|R_N| &\leq \frac{C^*}{N^{\alpha+1}} \sum_{i=1}^{N}\int_{\mathbb{R}^d}\left(1+|x|^{\frac{\ell}{2}} \right) \left(1+|X_i|^\frac{ \ell}{2} \right)(\delta_{X_i}+\mu)(dx)\\
& = \frac{C^*}{N^{\alpha+1}} \sum_{i=1}^{N}\left(1+|X_i|^\frac{ \ell}{2} \right) \left(2+ |X_i|^\frac{ \ell}{2} + \int_{\mathbb{R}^d}|x|^{\frac{\ell}{2}} \mu(dx) \right)\\
& = \frac{C^*}{N^{\alpha+1}} \sum_{i=1}^{N} \left(2 +\int_{\mathbb{R}^d}|x|^{\frac{\ell}{2}} \mu(dx) + \left( 3+\int_{\mathbb{R}^d}|x|^{\frac{\ell}{2}} \mu(dx)\right) |X_i|^\frac{ \ell}{2} + |X_i|^\ell\right)\\
&\leq \frac{C_1}{N^\alpha}+ \frac{C_2}{N^\alpha} \int_{\mathbb{R}^d}|y|^\ell\mu_{N}(dy)
\end{align*}
for some positive constants $C_1$ and $C_2$. Therefore one has
$$\sqrt{N}|R_N| \leq \frac{C_1}{N^{\alpha-\frac{1}{2}}}+\frac{C_2}{N^{\alpha-\frac{1}{2}}}\int_{\mathbb{R}^d}|y|^\ell\mu_{N}(dy)$$
and since $\alpha>\frac{1}{2}$ and $\lim_{N\to\infty}\int_{\mathbb{R}^d}|y|^\ell\mu_{N}(dy) = \int_{\mathbb{R}^d}|y|^\ell\mu(dy)$, we can conclude that the left-hand side goes to $0$ as $N$ goes to infinity and so the third step is concluded. \\
\textbf{FOURTH STEP}\quad As anticipated, we are now going to prove the convergence in distribution of $\sqrt{N}Q_N$ to a Gaussian random variable. \\
Recall that
$$ \sqrt{N}Q_N = \frac{1}{\sqrt{N}}\sum_{i=1}^{N}\left( \frac{\delta U}{\delta m}(\mu_{N}^{i\wedge I_N,0},X_i) -\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(\mu_{N}^{i\wedge I_N,0},x)\mu(dx) \right)$$
and let us consider for a given $m\in B(\mu,r)$ the following Poisson equation
\begin{equation} \label{poisson_fd}
F(m,x) - PF(m,x) = \frac{\delta U}{\delta m}\left(m,x \right)-\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(m,y \right)\mu(dy), \quad x\in \mathbb{R}^d.
\end{equation}
By the previous lemma, we know that it admits a solution.
Thanks to that, we are able to rewrite $\sqrt{N}Q_N$ as
\begin{align*}
\sqrt{N}Q_N &= \frac{1}{\sqrt{N}} \sum_{i=1}^{N}\left( F\left(\mu_{N}^{i\wedge I_N,0},X_i\right) - PF\left(\mu_{N}^{i\wedge I_N,0},X_i\right)\right)\\
& = \dfrac{1}{\sqrt{N}}\left( \sum_{i=1}^{N} F\left(\mu_{N}^{i\wedge I_N,0},X_i\right)
- \sum_{i=2}^{N} F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) \right) \\
&\phantom{=}+\frac{1}{\sqrt{N}}\left( \sum_{i=2}^{N} F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)
- \sum_{i=2}^{N+1} PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\right) \\
&= K_{0,N} + K_{1,N} + K_{2,N} + K_{3,N}
\end{align*}
with
\begin{align*}
&K_{0,N} = \dfrac{F\left(\mu,X_1\right)}{\sqrt{N}} ,\\
& K_{1,N} = \dfrac{1}{\sqrt{N}} \sum_{i=2}^{N}\left(F\left(\mu_{N}^{i\wedge I_N,0},X_i\right) - F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)\right),\\
&K_{2,N} =- \dfrac{PF\left(\mu_{N}^{N\wedge I_N,0},X_{N}\right)}{\sqrt{N}},\\
&K_{3,N} =\dfrac{1}{\sqrt{N}}\sum_{i=2}^{N}\left( F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) -PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\right).
\end{align*}
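To verify this decomposition, note that the change of index $j=i-1$ gives
$$\sum_{i=2}^{N+1} PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right) = \sum_{j=1}^{N} PF\left(\mu_{N}^{j\wedge I_N,0},X_{j}\right),$$
so that the two brackets above indeed sum to $\sqrt{N}Q_N$, while the term $i=1$ of the first sum and the term $i=N+1$ of the last one give rise to $K_{0,N}$ (using that $\mu_{N}^{1\wedge I_N,0}=\mu_{N}^{1,0}=\mu$) and $K_{2,N}$ respectively.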
The idea now is to study the convergence of $ K_{i,N}$ for $i=0,\cdots, 3.$ We will see that $K_{0,N}, K_{1,N}$ and $K_{2,N}$ go to $0$ in probability as $N$ goes to infinity, while $K_{3,N}$ is the term providing the convergence in distribution to a Gaussian random variable. By Slutsky's theorem, we can therefore obtain the limit of $\sqrt{N}Q_N$ (which coincides with the limit of $K_{3,N}$) and conclude the proof. \\
\underline{\textbf{Convergence of $K_{0,N}+K_{1,N}+K_{2,N}$ to $0$ in probability}}\\
The almost sure convergence (and so in probability) of $K_{0,N}$ to $0$ is immediate.\\
Concerning the convergence in probability of $K_{1,N}$ to $0$, following the same idea as in the third step, it is sufficient to prove the almost sure convergence of $1_{N\geq N^*}|K_{1,N}|$ to $0$. Therefore
\begin{align}
1_{N\geq N^*}\left|K_{1,N}\right|&= \left| \dfrac{1_{N\geq N^*}}{\sqrt{N}} \sum_{i=2}^{N}\left(F\left(\mu_{N}^{i\wedge I_N,0},X_i\right) - F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)\right) \right|\nonumber\\
& \leq \dfrac{1_{N\geq N^*}}{\sqrt{N}} \sum_{i=2}^{N}\left|F\left(\mu_{N}^{i,0},X_i\right) - F\left(\mu_{N}^{i-1,0},X_{i}\right) \right|\nonumber\\\label{k1N}
&\leq \dfrac{1_{N\geq N^*}}{\sqrt{N}} \sum_{i=2}^{N} \left\|F\left(\mu_{N}^{i,0},\cdot\right) - F\left(\mu_{N}^{i-1,0},\cdot\right) \right\|_{\sqrt{V},1} \left( 1 + \sqrt{V(X_i)}\right).
\end{align}
By (\ref{norm_diff_F}), observing that $\mu_{N}^{i,0}-\mu_{N}^{i-1,0}= \frac{1}{N}\left( \delta_{X_{i-1}}-\mu\right), $
one obtains
\begin{align*}
1_{N\geq N^*}\left\|F\left(\mu_{N}^{i,0},\cdot\right) - F\left(\mu_{N}^{i-1,0},\cdot\right) \right\|_{\sqrt{V},1}&\leq1_{N\geq N^*}\bar{C}_\frac{1}{2}\left( \int_{\mathbb{R}^d}\left(1+ |y|^{\frac{\ell}{2\alpha}}\right) |\mu_{N}^{i-1,0}-\mu_{N}^{i,0}|(dy) \right)^\alpha\\
&\leq 1_{N\geq N^*}\bar{C}_\frac{1}{2} \frac{1}{N^\alpha} \left(2+ \int_{\mathbb{R}^d}|y|^{\frac{\ell}{2\alpha}} \delta_{X_{i-1}}(dy)+\int_{\mathbb{R}^d}|y|^{\frac{\ell}{2\alpha}}\mu(dy) \right)^\alpha\\
&\leq 1_{N\geq N^*}\bar{C}^*_\frac{1}{2} \frac{1}{N^\alpha}\left( |X_{i-1}|^{\frac{\ell}{2}} +1 \right)
\end{align*}
where we used (\ref{l_moment_finite}) and the subadditivity of $x\mapsto x^\alpha$ to obtain the last inequality, with $\bar{C}^*_\frac{1}{2}$ a finite constant. Plugging this inequality into (\ref{k1N}), we obtain the following estimate for $1_{N\geq N^*}\left|K_{1,N}\right|$:
\begin{align*}
1_{N\geq N^*}\left|K_{1,N}\right| &\leq \dfrac{1_{N\geq N^*}\bar{C}^*_\frac{1}{2}}{N^{\alpha+\frac{1}{2}}} \sum_{i=2}^{N} \left( |X_{i-1}|^{\frac{\ell}{2}} + 1 \right)\left( 1 + \sqrt{V(X_i)}\right)\\
&=1_{N\geq N^*}\bar{C}^*_\frac{1}{2} \left( \dfrac{1}{N^{\alpha-\frac{1}{2}}}\times \frac{1}{N}\sum_{i=2}^{N} |X_{i-1}|^{\frac{\ell}{2}} +\dfrac{1}{N^{\alpha+\frac{1}{2}}} \sum_{i=2}^{N}|X_{i-1}|^{\frac{\ell}{2}} \sqrt{V(X_i)}\right) \\
&\phantom{=}+ 1_{N\geq N^*}\bar{C}^*_\frac{1}{2}\left( \frac{1}{N^{\alpha-\frac{1}{2}}}\times\frac{1}{N}\sum_{i=2}^{N} \sqrt{V(X_i)} + \frac{N-1}{N^{\alpha+\frac{1}{2}}}\right)\\
&\leq 1_{N\geq N^*}\bar{C}^*_\frac{1}{2} \left( \dfrac{1}{N^{\alpha-\frac{1}{2}}}\times\frac{1}{N}\sum_{i=2}^{N} |X_{i-1}|^{\ell}+\dfrac{1}{N^{\alpha-\frac{1}{2}}}\times\frac{1}{N} \sum_{i=2}^{N} V(X_i) + 2\frac{N-1}{N^{\alpha+\frac{1}{2}}}\right).
\end{align*}
Since $\alpha> \frac{1}{2}$, $\frac{1}{N}\sum_{i=2}^{N} |X_{i-1}|^{\ell}$ converges a.s. to $\int_{\mathbb{R}^d}|x|^\ell \mu(dx)$ and $\frac{1}{N} \sum_{i=2}^{N} V(X_i)$ converges a.s. to $\mu(V)$, we can conclude that $K_{1,N}$ goes to $0$ in probability. \\
Finally, let us prove the convergence of $K_{2,N}$ to $0$ in probability. Again, it is sufficient to prove that $1_{N\geq N^*}|K_{2,N}|$ converges in probability to $0$.
By (\ref{norm_F}) we have
\begin{align*}
1_{N\geq N^*}|K_{2,N}| &= \frac{1_{N\geq N^*}}{\sqrt{N}}\left|PF\left(\mu_{N}^{N,0},X_{N}\right) \right|\leq\frac{1_{N\geq N^*}}{\sqrt{N}} \left\|F\left(\mu_{N}^{N,0},\cdot \right) \right\|_{\sqrt{V},1}\int_{\mathbb{R}^d}(1+\sqrt{V(x)})P(X_N,dx)\\
&=\frac{1_{N\geq N^*}}{\sqrt{N}} \left\|F\left(\mu_{N}^{N,0},\cdot \right) \right\|_{\sqrt{V},1}\left(1+P\sqrt{V}(X_N) \right) \leq \frac{1_{N\geq N^*}}{\sqrt{N}}\bar{C}_\frac{1}{2}\left(1+\sqrt{\gamma}\sqrt{V(X_N)}+\sqrt{K} \right)
\end{align*}
where we applied (\ref{sqrtV}) to obtain the last inequality. We can therefore conclude that the right-hand side goes to $0$ as $N$ goes to infinity by observing that by Theorem \ref{large_numbers}, $\bar{V}_N:=\frac{1}{N}\sum_{i=1}^{N}V\left(X_i \right)$ converges almost surely to $ \mu\left(V \right)$ and consequently
$$\sqrt{\frac{V\left(X_N\right) }{N}} =\sqrt{\bar{V}_N-\frac{N-1}{N}\bar{V}_{N-1}} \longrightarrow 0 \,\, a.s.$$
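The identity used in the last display is elementary: by definition of $\bar{V}_N$,
$$\bar{V}_N-\frac{N-1}{N}\bar{V}_{N-1} = \frac{1}{N}\sum_{i=1}^{N}V\left(X_i \right) - \frac{1}{N}\sum_{i=1}^{N-1}V\left(X_i \right) = \frac{V\left(X_N\right)}{N},$$
and since both $\bar{V}_N$ and $\frac{N-1}{N}\bar{V}_{N-1}$ converge a.s. to $\mu\left(V\right)$, their difference converges a.s. to $0$.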
\underline{\textbf{Convergence in distribution of $K_{3,N}$}}\\
To study the convergence in distribution of $K_{3,N}$, we apply the Central Limit Theorem for martingales (see Corollary 3.1 \cite{HallHeyde}). First we need to establish a square-integrable martingale difference property, and then check that the bracket condition and the Lindeberg condition hold.\\
Let us recall the expression of $K_{3,N}$
$$ K_{3,N} =\dfrac{1}{\sqrt{N}}\sum_{i=2}^{N}\left( F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) -PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\right).$$
For $i=2,\cdots,N$ let
$$Y_{N,i}= \dfrac{1}{\sqrt{N}}\left( F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) -PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\right).$$\\
We will start by \textbf{checking that $\mathbb{E}\left(Y_{N,i}|\mathcal{F}_{i-1}\right)=0$.}
Since $\mu^{(i-1)\wedge I_N,0}_{N}=\sum_{j=1}^{i-2}1_{\left\lbrace I_N=j\right\rbrace }\mu^{j,0}_{N}+1_{\left\lbrace I_N>i-2\right\rbrace}\mu^{i-1,0}_{N}$ is $\mathcal{F}_{i-1}$- measurable, one has
\begin{align*}
\mathbb{E}\left(Y_{N,i}|\mathcal{F}_{i-1}\right) &= \dfrac{1}{\sqrt{N}}\mathbb{E}\left(F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) -PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right) |\mathcal{F}_{i-1}\right)\\
&= \dfrac{1}{\sqrt{N}}\left(\mathbb{E}\left(F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) |\mathcal{F}_{i-1}\right) - \mathbb{E}\left(F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)| \mathcal{F}_{i-1}\right)
\right) = 0,
\end{align*}
where for the second equality we used the Markov property to write $PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)=\mathbb{E}\left(F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)| \mathcal{F}_{i-1}\right)$.
Concerning the \textbf{square integrability}, it is sufficient to check that
$$\mathbb{E}\left(F^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) \right) < \infty.$$
By (\ref{norm_F})
$$ \mathbb{E}\left(F^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) \right)\leq \mathbb{E}\left( \left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot\right) \right\|^2_{\sqrt{V},1}\left(1+\sqrt{V\left( X_i\right)} \right)^2\right) \leq 2\bar{C}^2_\frac{1}{2} \left(
1+\mathbb{E}\left( V\left( X_i\right) \right)\right) $$
where $\mathbb{E}\left( V\left( X_i\right) \right)\leq \gamma^{i-1}\nu_1(V)+K\sum_{j=0}^{i-2}\gamma^j $, and since by Lemma \ref{int_V} we may assume $\nu_1(V)<\infty$, the right-hand side is finite.\\
Let us now study the \textbf{convergence of $\sum_{i=2}^{N}\mathbb{E}\left(Y^2_{N,i}|\mathcal{F}_{i-1} \right)$}:
\begin{align*}
&\sum_{i=2}^{N}\mathbb{E}\left(Y^2_{N,i}|\mathcal{F}_{i-1} \right)\\
&= \frac{1}{N} \sum_{i=2}^{N} \mathbb{E}\left(
F^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) +\left( PF\right)^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)
-2 F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)|\mathcal{F}_{i-1} \right)\\
&= \frac{1}{N} \sum_{i=2}^{N}
PF^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)-\frac{1}{N} \sum_{i=2}^{N}\left( PF\right)^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right).
\end{align*}
Before studying the behavior of the above quantity, let us observe that the convergence of $\sup_{x\in \mathbb{R}^d}\dfrac{|\frac{\delta U}{\delta m}\left(\tilde{\mu},x \right)-\frac{\delta U}{\delta m}\left( \mu,x \right)|}{1+|x|^\frac{\ell}{2}}$ to $0$ as $W_\ell\left(\tilde{\mu}, \mu \right)$ goes to $0$, together with the a.s. convergence of $\max_{1\leq i\leq N} W_\ell(\mu_{N}^{i,0},\mu)$ to $0$, implies the existence of a sequence of random variables $\left( \epsilon_N\right)_{N\geq 0} $ converging a.s. to $0$ as $N\rightarrow \infty$ such that
\begin{equation} \label{control}
\forall\, 1\leq i\leq N,\ \forall\, x\in \mathbb{R}^d\qquad \bigg\rvert\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_N,x \right)-\frac{\delta U}{\delta m}\left( \mu,x \right)\bigg\rvert\leq\left(1+|x|^\frac{\ell}{2}\right)\epsilon_N.
\end{equation}
With (\ref{norm_diff_F_1}) and \textbf{L3}, we deduce that
\begin{equation} \label{control_diff}
\left\|F\left(\mu^{i\wedge I_N,0}_N,\cdot \right) - F\left(\mu,\cdot \right) \right\|_{\sqrt{V},1} \leq \bar{C}_\frac{1}{2} \epsilon_N \sup_{x\in \mathbb{R}^d} \frac{1+|x|^\frac{\ell}{2} }{1+\sqrt{V(x)}} \leq\bar{C}_\frac{1}{2} \left(1+\sqrt{C_\ell} \right) \epsilon_N.
\end{equation}\\
\underline{First Component}\quad
We can rewrite the first component in the following way
$$\frac{1}{N} \sum_{i=2}^{N}
PF^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right) = \frac{1}{N} \sum_{i=2}^{N}\left(
PF^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right) - PF^2\left(\mu,X_{i-1}\right)\right) +\frac{1}{N} \sum_{i=2}^{N} PF^2\left(\mu,X_{i-1}\right). $$
Since by \textbf{L\ref{D1}} \begin{equation}\label{PFsquared}
\left\|PF^2\left(\mu,\cdot\right)\right\|_{V,1} \leq \left\|F^2\left(\mu,\cdot\right)\right\|_{V,1} \sup_{x\in \mathbb{R}^d} \frac{1+PV(x)}{1+V(x)} < \infty,
\end{equation}
by Theorem \ref{large_numbers} we obtain that
$$ \lim_{N\rightarrow \infty}\frac{1}{N} \sum_{i=2}^{N} PF^2\left(\mu,X_{i-1}\right) = \mu(PF^2\left(\mu,\cdot\right))=\mu(F^2\left(\mu,\cdot\right)) \quad a.s. $$
where for the last equality we used the invariance of $\mu$ with respect to $P$. On the other hand
\begin{align*}
&\left| \frac{1}{N} \sum_{i=2}^{N}\left(
PF^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right) - PF^2\left(\mu,X_{i-1}\right)\right)\right|\\
&\phantom{\frac{1}{N}}\leq \frac{1}{N} \sum_{i=2}^{N} \int_{\mathbb{R}^d}\left| F^2\left(\mu_{N}^{(i-1)\wedge I_N,0},x\right) - F^2\left(\mu,x\right)\right| P\left(X_{i-1},dx \right)\\
&\phantom{\frac{1}{N}}\leq \frac{1}{N} \sum_{i=2}^{N}\int_{\mathbb{R}^d}\left( F\left(\mu_{N}^{(i-1)\wedge I_N,0},x\right) - F\left(\mu,x\right)\right)^2 P\left(X_{i-1},dx \right)\\
& \phantom{\frac{1}{N}}\quad+ \frac{2}{N} \sum_{i=2}^{N}\int_{\mathbb{R}^d} \left|F\left(\mu,x\right) \right| \left|F\left(\mu_{N}^{(i-1)\wedge I_N,0},x\right) - F\left(\mu,x\right) \right| P\left(X_{i-1},dx \right)\\
&\phantom{\frac{1}{N}}\leq \frac{1}{N} \sum_{i=2}^{N} \left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot\right) - F\left(\mu,\cdot\right) \right\|^2_{\sqrt{V},1}\left(1+PV\left(X_{i-1}\right)+2P\sqrt{V}\left(X_{i-1}\right) \right)\\
&\phantom{\frac{1}{N}}\quad+ \frac{2}{N} \sum_{i=2}^{N} \left\|F\left(\mu, \cdot \right)\right\|_{\sqrt{V},1} \left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot \right) - F\left(\mu, \cdot \right)\right\|_{\sqrt{V},1} \left(1+PV\left(X_{i-1}\right)+2P\sqrt{V}\left(X_{i-1}\right) \right)\\
&\phantom{\frac{1}{N}}\leq \epsilon_N\left(\bar{C}^2_\frac{1}{2} \left(1+\sqrt{C_\ell} \right)^2 \epsilon_N+ 2\bar{C}^2_\frac{1}{2} \left(1+\sqrt{C_\ell} \right) \right)\frac{1}{N}\sum_{i=2}^{N} \left(1+\gamma V\left(X_{i-1}\right) + K+2\sqrt{\gamma}\sqrt{V\left(X_{i-1}\right)} +2\sqrt{K} \right)
\end{align*}
where we used (\ref{control_diff}), (\ref{norm_F}), the Lyapunov condition and (\ref{sqrtV}). The right-hand side therefore goes to $0$, since $\frac{1}{N}\sum_{i=2}^{N}V\left(X_{i-1} \right) $ converges to $\mu(V)$, $\frac{1}{N}\sum_{i=2}^{N}\sqrt{V\left(X_{i-1} \right)} $ converges to $\mu(\sqrt{V})$ and $\epsilon_N$ goes to $0$ a.s.\\
\underline{Second Component}\quad
As before, we can rewrite the second component in the following way
\begin{align*}
& \frac{1}{N} \sum_{i=2}^{N}\left( PF\right)^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right) \\
&\phantom{\frac{1}{N}}= \frac{1}{N} \sum_{i=2}^{N}\left( \left( PF\right)^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)-\left( PF\right)^2\left(\mu,X_{i-1}\right)\right) +\frac{1}{N} \sum_{i=2}^{N}\left( PF\right)^2\left(\mu,X_{i-1}\right).\end{align*}
By (\ref{PFsquared}) and Jensen's inequality, $ \left\|\left( PF\right)^2\left(\mu,\cdot\right) \right\|_{V,1} < \infty $; therefore, by Theorem \ref{large_numbers},
$$ \lim_{N\rightarrow \infty}\frac{1}{N} \sum_{i=2}^{N}\left( PF\right)^2\left(\mu,X_{i-1}\right) = \mu(\left( PF\right)^2\left(\mu,\cdot\right)) \quad a.s. $$
On the other hand
\begin{align*}
&\left| \frac{1}{N} \sum_{i=2}^{N}\left( \left( PF\right)^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)-\left( PF\right)^2\left(\mu,X_{i-1}\right)\right)\right|\\
&\phantom{\frac{1}{N}}\leq\frac{1}{N} \sum_{i=2}^{N} \left( PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)- PF\left(\mu,X_{i-1}\right)\right)^2\\
&\phantom{=\frac{1}{N}}+ \frac{2}{N}\sum_{i=2}^{N}\left| PF\left(\mu,X_{i-1}\right)\right|\left| PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)- PF\left(\mu,X_{i-1}\right)\right|\\
&\phantom{\frac{1}{N}}\leq \frac{1}{N} \sum_{i=2}^{N}\left(\left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot \right)- F\left(\mu,\cdot \right) \right\|_{\sqrt{V},1} \left( 1+P\sqrt{V}(X_{i-1})\right) \right)^2\\
&\phantom{\frac{1}{N}}\quad+\frac{2}{N}\sum_{i=2}^{N} \left\| F\left(\mu,\cdot \right) \right\|_{\sqrt{V},1}\left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot \right)- F\left(\mu,\cdot \right) \right\|_{\sqrt{V},1}\left( 1+P\sqrt{V}(X_{i-1})\right)^2\\
&\phantom{\frac{1}{N}}\leq \epsilon_N\left( \bar{C}^2_\frac{1}{2} \left(1+\sqrt{C_\ell} \right)^2\epsilon_N + 2\bar{C}^2_\frac{1}{2} \left(1+\sqrt{C_\ell} \right)\right) \frac{1}{N}\sum_{i=2}^{N}\left(1+\sqrt{\gamma}\sqrt{V(X_{i-1})} +\sqrt{K}\right)^2.
\end{align*}
The right-hand side goes to $0$ as $N$ goes to infinity since $\epsilon_N$ goes to $0$ a.s. and, by Theorem \ref{large_numbers}, $\frac{1}{N}\sum_{i=2}^{N}V\left(X_{i-1} \right) $ converges to $\mu(V)$ and $\frac{1}{N}\sum_{i=2}^{N}\sqrt{V\left(X_{i-1} \right)} $ converges to $\mu(\sqrt{V})$.\\
In conclusion we have proved that almost surely
\begin{equation}\label{asint.variance}
\frac{1}{N} \sum_{i=2}^{N}
\left( PF^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)-\left( PF\right)^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\right) \underset{N\rightarrow \infty}{\longrightarrow} \mu\left(F^2\left(\mu,\cdot\right) \right) - \mu\left( \left( PF\right)^2\left(\mu,\cdot \right) \right).
\end{equation}
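For the reader's convenience, let us recall the form of the Central Limit Theorem for martingales invoked here (we state it in the form used below; see any standard reference on martingale limit theory): if $\left(Y_{N,i}\right)_{2\leq i\leq N}$ is a square-integrable martingale difference array with respect to $\left(\mathcal{F}_i\right)_i$, i.e. $\mathbb{E}\left(Y_{N,i}|\mathcal{F}_{i-1}\right)=0$, such that
$$\sum_{i=2}^{N}\mathbb{E}\left(Y^2_{N,i}|\mathcal{F}_{i-1}\right)\underset{N\rightarrow \infty}{\longrightarrow}\sigma^2 \quad \text{in probability}$$
and the conditional Lindeberg condition holds, then $\sum_{i=2}^{N}Y_{N,i}\overset{d}{\Longrightarrow}\mathcal{N}\left(0,\sigma^2\right)$. The convergence (\ref{asint.variance}) provides the first condition with $\sigma^2=\mu\left(F^2\left(\mu,\cdot\right)\right)-\mu\left(\left(PF\right)^2\left(\mu,\cdot\right)\right)$.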
To finally conclude that $K_{3,N} = \sum_{i=2}^{N} Y_{N,i}
\overset{d}{\Longrightarrow} \mathcal{N}\left(0, \mu\left(F^2\left(\mu,\cdot\right) \right) - \mu\left( \left( PF\right)^2\left(\mu,\cdot \right)\right) \right)$, it only remains to verify that the \textbf{Lindeberg condition} holds.\\ We need to check that for every $\epsilon>0$, $\sum_{i=2}^{N}\mathbb{E}\left(Y^2_{N,i} 1_{\left\lbrace Y^2_{N,i}> \epsilon \right\rbrace }| \mathcal{F}_{i-1}\right) $ goes to $0$ in probability as $N$ goes to infinity.
By Jensen's inequality, the Lyapunov condition, (\ref{sqrtV}) and (\ref{norm_F}) we obtain
\begin{align*}
NY^2_{N,i} &= \left( F\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right) -PF\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\right)^2\\
&\leq 2F^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i}\right)+2 PF^2\left(\mu_{N}^{(i-1)\wedge I_N,0},X_{i-1}\right)\\
&\leq 2 \left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot\right) \right\|^2_{\sqrt{V},1}\left(1+\sqrt{V(X_i)} \right)^2+2\left\|F\left(\mu_{N}^{(i-1)\wedge I_N,0},\cdot\right) \right\|^2_{\sqrt{V},1} \left(1+PV(X_{i-1})+2P\sqrt{V}(X_{i-1}) \right)\\
&\leq 2\bar{C}^2_\frac{1}{2}\left( 2+V(X_i)+2\sqrt{V\left(X_{i} \right) } +\gamma V\left(X_{i-1} \right)+K+2\sqrt{\gamma}\sqrt{V(X_{i-1})}+2\sqrt{K} \right)\\
&\leq 2L\bar{C}^2_\frac{1}{2}\left(1+ V(X_i)+ V\left(X_{i-1} \right)\right)
\end{align*}
where $L$ is a finite constant. As, for $a,b,c,g \in \mathbb{R}_+$
\begin{align*}
\left(a+b+c \right)1_{\left\lbrace a+b+c\geq g \right\rbrace}&=\left(a+b+c \right) \left(1_{\left\lbrace a>b,a>c,a+b+c\geq g \right\rbrace}+1_{\left\lbrace b\geq a,b>c,a+b+c\geq g \right\rbrace}+1_{\left\lbrace c\geq a,c\geq b,a+b+c\geq g \right\rbrace} \right)\\
&\leq 3a1_{\left\lbrace a>b,a>c,a+b+c\geq g \right\rbrace}+3b1_{\left\lbrace b\geq a,b>c,a+b+c\geq g \right\rbrace}+3c1_{\left\lbrace c\geq a,c\geq b,a+b+c\geq g \right\rbrace}\\
&\leq 3a1_{\left\lbrace a\geq\frac{g}{3} \right\rbrace} + 3b1_{\left\lbrace b\geq\frac{g}{3} \right\rbrace}+3c1_{\left\lbrace c\geq\frac{g}{3} \right\rbrace},
\end{align*}
it is enough to check that for each $\epsilon >0$
$$\frac{N-1}{N} 1_{\left\lbrace \frac{1}{N}> \epsilon\right\rbrace } + \sum_{i=2}^{N} \mathbb{E}\left(\frac{ V(X_i)}{N} 1_{\left\lbrace \frac{V(X_i)}{N}> \epsilon\right\rbrace } |\mathcal{F}_{i-1} \right) + \sum_{i=2}^{N} \frac{V(X_{i-1})}{N} 1_{\left\lbrace \frac{V(X_{i-1})}{N}> \epsilon\right\rbrace }$$
goes to $0$ in probability as $N$ goes to infinity.\\
It is immediate that the \textbf{first component} goes to $0$. For the \textbf{second component} let us observe that
\begin{align*}
\mathbb{E}\left( \sum_{i=2}^{N} \mathbb{E}\left(\frac{V(X_i)}{N} 1_{\left\lbrace \frac{V(X_i)}{N}> \epsilon\right\rbrace } |\mathcal{F}_{i-1} \right)\right) &= \sum_{i=2}^{N} \mathbb{E}\left(\frac{V(X_i)}{N} 1_{\left\lbrace \frac{V(X_i)}{N}> \epsilon\right\rbrace } \right)\\
& = \sum_{i=2}^{N} \int_{\mathbb{R}^d} \frac{V(x)}{N} 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace } \nu_1P^{i-1}\left( dx\right)\\
&= \sum_{i=2}^{N} \int_{\mathbb{R}^d} \frac{V(x)}{N} 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace } \left( \nu_1P^{i-1}\left( dx\right)-\mu\left( dx\right) \right) \\
&\phantom{=}+ \frac{N-1}{N}\int_{\mathbb{R}^d} V(x) 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace }\mu(dx)\\
&\leq \sum_{i=2}^{N} \left\|\frac{V(\cdot)}{N} 1_{\left\lbrace \frac{V(\cdot)}{N}> \epsilon\right\rbrace }\right\|_{V,\beta} d_{V,\beta}\left( \nu_1P^{i-1},\mu \right)+ \frac{N-1}{N}\int_{\mathbb{R}^d} V(x) 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace }\mu(dx).
\end{align*}
Choosing $\beta \in \left(0,\frac{\rho}{\sqrt{K}} \right)$, we can apply (\ref{stima_distanza}) and deduce that
\begin{align*}
&\sum_{i=2}^{N} \left\|\frac{V(\cdot)}{N} 1_{\left\lbrace \frac{V(\cdot)}{N}> \epsilon\right\rbrace }\right\|_{V,\beta} d_{V,\beta}\left( \nu_1P^{i-1},\mu \right)+ \frac{N-1}{N}\int_{\mathbb{R}^d} V(x) 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace }\mu(dx)\\
&\phantom{==}\leq \left\|\frac{V(\cdot)}{N} 1_{\left\lbrace \frac{V(\cdot)}{N}> \epsilon\right\rbrace }\right\|_{V,\beta} \sum_{i=0}^{\infty} \chi^i d_{V,\beta}\left(\nu_1,\mu \right)+ \frac{N-1}{N}\int_{\mathbb{R}^d} V(x) 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace }\mu(dx)\\
& \phantom{==}\leq \frac{1}{N}\sup_{x\in \mathbb{R}^d} \frac{V(x)}{1+\beta V(x)} \frac{1}{1-\chi}d_{V,\beta}\left(\nu_1,\mu \right)+ \frac{N-1}{N}\int_{\mathbb{R}^d} V(x) 1_{\left\lbrace \frac{V(x)}{N}> \epsilon\right\rbrace }\mu(dx).
\end{align*}
By Lemma \ref{int_V}, it is enough to assume that $\nu_1(V)<\infty$ to guarantee that $d_{V,\beta}\left(\nu_1,\mu \right)<\infty$, so that the first term goes to $0$.
The second term goes to $0$ by Lebesgue's theorem since $\mu(V)<\infty$.
Finally let us study the \textbf{third component}.
By Theorem \ref{large_numbers} we know that $\overline{V}_N:= \frac{1}{N}\sum_{i=1}^{N}V\left( X_{i}\right)$ converges almost surely to $\mu\left(V \right)$. Therefore
$$\frac{V\left( X_{N}\right)}{N} = \overline{V}_N - \frac{N-1}{N}\overline{V}_{N-1}\underset{N\rightarrow \infty}{\longrightarrow} 0 \quad a.s.$$
which implies that for every $\epsilon >0$
$$V\left( X_{N}\right) 1_{\left\lbrace \frac{V\left( X_{N}\right)}{N} > \epsilon \right\rbrace} \underset{N\rightarrow \infty}{\longrightarrow} 0 \quad a.s.$$
and taking the Cesàro mean, since $1_{\left\lbrace \frac{V\left( X_{i}\right)}{N} > \epsilon\right\rbrace } \leq 1_{\left\lbrace \frac{V\left( X_{i}\right)}{i} > \epsilon \right\rbrace}$ for $i\leq N$, we deduce that
$$\lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} V\left( X_{i}\right) 1_{\left\lbrace \frac{V\left( X_{i}\right)}{N} > \epsilon\right\rbrace } \leq \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N}V\left( X_{i}\right)1_{\left\lbrace \frac{V\left( X_{i}\right)}{i} > \epsilon \right\rbrace} = 0. $$
\end{proof}
\subsection{Independent Non-Equidistributed Random Variables}
Before providing the proof of Theorem \ref{independent_case}, let us observe that Hypothesis \textbf{TX} implies the Lindeberg condition.
\begin{lemma} Hypothesis \textbf{TX} implies that
\begin{equation}\label{lindenberg}
\forall \epsilon>0,\underset{N\to\infty}{\lim}\dfrac{1}{N}\mathlarger{\sum}_{i=1}^{N}\mathbb{E}\left(|X_i|^\ell 1_{\left\lbrace |X_i|^\ell>N\epsilon \right\rbrace} \right)= 0.
\end{equation}
\end{lemma}
\begin{proof}
An application of Kronecker's lemma shows that \textbf{TX} implies
\begin{equation} \label{sci}
\lim_{N\to\infty} \dfrac{1}{N}\mathlarger{\sum}_{i=1}^{N}\mathbb{E}\left( \left( |X_i|^\ell-i^\beta\right) 1_{\left\lbrace |X_i|^\ell>i^\beta \right\rbrace} \right)=0.
\end{equation}
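Let us recall the form of Kronecker's lemma used in this deduction: if $\left(b_i\right)_{i\geq1}$ is a sequence increasing to infinity and the series $\sum_{i\geq1}\frac{x_i}{b_i}$ converges, then
$$\frac{1}{b_N}\sum_{i=1}^{N}x_i\underset{N\rightarrow \infty}{\longrightarrow}0.$$
Here it is applied with $b_i=i$ and $x_i=\mathbb{E}\left( \left( |X_i|^\ell-i^\beta\right) 1_{\left\lbrace |X_i|^\ell>i^\beta \right\rbrace} \right)$, the required convergence of the series $\sum_{i\geq1}\frac{x_i}{i}$ being provided by Hypothesis \textbf{TX}.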
Let $\epsilon>0$; then there exists $\bar{N}_{\epsilon,\beta}$ such that for $N\geq \bar{N}_{\epsilon,\beta}$, $N^\beta \leq \dfrac{\epsilon N}{2}$. Therefore, if $N\geq \bar{N}_{\epsilon,\beta}$, the following chain of inequalities holds for each $i=1,\cdots,N$:
\begin{align*}
\left( |X_i|^\ell-i^\beta\right) 1_{\left\lbrace |X_i|^\ell>i^\beta \right\rbrace} &\geq\left( |X_i|^\ell-\dfrac{\epsilon N}{2}\right) 1_{\left\lbrace |X_i|^\ell>\epsilon N \right\rbrace}\\
&\geq \dfrac{|X_i|^\ell}{2}1_{\left\lbrace |X_i|^\ell>\epsilon N \right\rbrace}.
\end{align*}
Taking the expectation and using (\ref{sci}) we can obtain (\ref{lindenberg}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{independent_case}]
To study the limit distribution of $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu \right) \right)$, let us consider the following decomposition
\begin{equation} \label{decomposition}
\sqrt{N}\left(U\left(\mu_N \right)- U\left(\mu \right) \right) = \sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) \right)+ \sqrt{N}\left(U\left(\bar{\nu}_N \right)- U\left(\mu\right) \right).
\end{equation}
We will prove that $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) \right) \overset{d}{\Rightarrow} \mathcal{N}\left(0,\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\mu(dx)-\int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy) \right)$ and $\sqrt{N}\left(U\left(\bar{\nu}_N \right)- U\left(\mu\right) \right)\underset{N\rightarrow \infty}{\longrightarrow} \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)\sigma(dx).$\\
Let us start by studying the second term in the above decomposition.
\subsection*{Limit of $\sqrt{N}\left(U\left(\bar{\nu}_N \right)- U\left(\mu\right)\right)$}
Let us first observe that by Remark \ref{conv_nuntomu}, Assumption \ref{wess.conv2} implies $
W_\ell\left(\bar{\nu}_N, \mu\right) \underset{N\rightarrow \infty}{\longrightarrow} 0$. Therefore one has
$$\lim_{N\to\infty} \sup_{s\in \left[0,1 \right]} W_\ell\left(\mu+s(\bar{\nu}_N - \mu),\mu \right)=0.$$
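The uniformity in $s$ follows from the convexity of the Wasserstein distance along mixtures: for $\ell>0$ and $s\in\left[0,1\right]$, given any coupling $\gamma\in\Pi\left(\bar{\nu}_N,\mu\right)$, the measure $s\gamma+(1-s)\left(\mathrm{id},\mathrm{id}\right)_{\#}\mu$ is a coupling of $\mu+s(\bar{\nu}_N-\mu)$ and $\mu$, so that
$$W_\ell^{\ell\vee1}\left(\mu+s(\bar{\nu}_N-\mu),\mu\right)\leq s\, W_\ell^{\ell\vee1}\left(\bar{\nu}_N,\mu\right)\leq W_\ell^{\ell\vee1}\left(\bar{\nu}_N,\mu\right),$$
and the right-hand side does not depend on $s$; the case $\ell=0$ is handled analogously.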
For $N$ bigger than a fixed $N_r$, by Lemma \ref{linearderivative} we can rewrite
$$\sqrt{N}\left(U\left(\bar{\nu}_N\right)- U\left(\mu\right) \right) = \sqrt{N}\int_{0}^{1}ds\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu+s(\bar{\nu}_N - \mu),x)(\bar{\nu}_N-\mu)(dx).$$
Let us now prove that the above quantity tends to $\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)\sigma(dx)$ where $\sigma$ is the measure such that $\|\sqrt{N}\left(\bar{\nu}_N-\mu \right) - \sigma\|_\ell \underset{N\rightarrow \infty}{\longrightarrow} 0$ (see Hypothesis \ref{l_convergence}).
By the triangle inequality, for $N\geq N_r$, one has
\begin{align*}
&\left|\sqrt{N}\int_{0}^{1}ds\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu+s(\bar{\nu}_N - \mu),x)(\bar{\nu}_N-\mu)(dx) - \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)\sigma(dx) \right|\\
& \phantom{==}\leq \left| \sqrt{N}\int_{0}^{1}ds\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu+s(\bar{\nu}_N - \mu),x)(\bar{\nu}_N-\mu)(dx) - \sqrt{N}\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)(\bar{\nu}_N-\mu)(dx)\right|\\
&\phantom{===}+ \left|\sqrt{N}\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)(\bar{\nu}_N-\mu)(dx) - \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)\sigma(dx) \right| .
\end{align*}
Recalling that $\left\|\tau \right\|_\ell= \sup_{f:|f(x)|\leq 1+|x|^\ell} \int_{\mathbb{R}^d}f(x)\tau(dx)$ for $\tau \in \mathcal{M}_\ell(\mathbb{R}^d)$, the second component of the right-hand side goes to $0$ thanks to Assumption \ref{l_convergence} and Assumption \textbf{RU\ref{growth2}}.\\
For what concerns the first component of the right-hand side, we have by Hypothesis \textbf{RU\ref{growth_diff2}}
\begin{align*}
&\left| \sqrt{N}\int_{0}^{1}ds\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu+s(\bar{\nu}_N - \mu),x)(\bar{\nu}_N-\mu)(dx) - \sqrt{N}\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}(\mu,x)(\bar{\nu}_N-\mu)(dx)\right|\\
&\leq \int_{0}^{1}ds\int_{\mathbb{R}^d} \left| \frac{\delta U}{\delta m}(\mu+s(\bar{\nu}_N - \mu),x) - \frac{\delta U}{\delta m}(\mu,x) \right|\left|\sqrt{N}\left(\bar{\nu}_N-\mu \right) \right|(dx) \\
&\leq \int_{0}^{1}ds\int_{\mathbb{R}^d} C\left( (1+|x|^\ell)\left\| s(\bar{\nu}_N-\mu)\right\|_0^\alpha +(1+|x|^{\ell (1-\alpha)})\left( \int_{\mathbb{R}^d}|y|^\ell |s(\bar{\nu}_N-\mu)|(dy) \right)^\alpha \right)\left| \sqrt{N}\left(\bar{\nu}_N-\mu \right) \right|(dx)\\
&\leq C \left( \left|\bar{\nu}_N-\mu \right| (\mathbb{R}^d)\right)^\alpha\cdot\int_{\mathbb{R}^d}(1+|x|^\ell)\left| \sqrt{N}\left(\bar{\nu}_N-\mu \right) \right|(dx)\\
&\phantom{=}+ C \left( \int_{\mathbb{R}^d}|y|^\ell |\bar{\nu}_N-\mu|(dy) \right)^\alpha \int_{\mathbb{R}^d}(1+|x|^{\ell (1-\alpha)})\left| \sqrt{N}\left(\bar{\nu}_N-\mu \right) \right|(dx).
\end{align*}
Since $\|\sqrt{N}\left(\bar{\nu}_N-\mu \right) - \sigma\|_\ell \underset{N\rightarrow \infty}{\longrightarrow} 0$ implies
$\||\sqrt{N}\left(\bar{\nu}_N-\mu \right)| - |\sigma|\|_\ell \underset{N\rightarrow \infty}{\longrightarrow} 0$, we can deduce that:
\begin{enumerate}[label=(\roman*)]
\item $\int_{\mathbb{R}^d}(1+|x|^\ell)| \sqrt{N}\left(\bar{\nu}_N-\mu \right) |(dx) \underset{N\rightarrow \infty}{\longrightarrow} \int_{\mathbb{R}^d}(1+|x|^\ell)\sigma(dx) < \infty $\\
\item $\int_{\mathbb{R}^d}(1+|x|^{\ell (1-\alpha)})| \sqrt{N}\left(\bar{\nu}_N-\mu \right) |(dx) \underset{N\rightarrow \infty}{\longrightarrow} \int_{\mathbb{R}^d}(1+|x|^{\ell (1-\alpha)})\sigma(dx) < \infty$\\
\item $\left|\bar{\nu}_N-\mu \right| (\mathbb{R}^d) = \dfrac{|\sqrt{N}\left( \bar{\nu}_N-\mu\right)| (\mathbb{R}^d)}{\sqrt{N}}\underset{N\rightarrow \infty}{\longrightarrow} 0$\\
\item $\int_{\mathbb{R}^d}|y|^\ell |\bar{\nu}_N-\mu|(dy) \leq \int_{\mathbb{R}^d}(1+|y|^\ell) |\bar{\nu}_N-\mu|(dy) = \dfrac{\int_{\mathbb{R}^d}(1+|y|^\ell) |\sqrt{N}(\bar{\nu}_N-\mu)|(dy)}{\sqrt{N}} \underset{N\rightarrow \infty}{\longrightarrow} 0.$
\end{enumerate}
To conclude the proof, we have to study the convergence of $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) \right) $ in the decomposition (\ref{decomposition}).
\subsection*{Limit distribution of $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) \right)$}
\textbf{FIRST STEP}\quad
Let us first recall that by Remark \ref{conv_nuntomu}, Theorem \ref{SLLN} implies $W_\ell\left(\mu_N, \mu\right) \underset{N\rightarrow \infty}{\longrightarrow} 0$ a.s.\\
Let us now define for $i=1,\cdots,N$ and $s\in \left[ 0,1\right] $
$$\mu_{N}^{i,s} = \frac{1}{N}\sum_{j=1}^{i-1}\delta_{X_j} +\frac{s}{N}\delta_{X_i}+\frac{\left(1-s \right) }{N}\nu_i+\frac{1}{N} \sum_{j=i+1}^{N}\nu_j.$$
We have $\mu_{N}^{N,1}= \mu_{N} $ and $\mu_{N}^{1,0}=\bar{\nu}_N.$ Moreover since for $i=1,\cdots,N-1$
$$\mu_{N}^{i,1} = \mu_{N}^{i+1,0}, $$
one has
$$\sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) \right) = \sqrt{N} \sum_{i=1}^{N}\left(U(\mu_{N}^{i,1}) -U(\mu_{N}^{i,0}) \right) .$$
Let us now show that
$$\max_{1\leq i\leq N} \sup_{s\in \left[0,1 \right]} W_\ell(\mu_{N}^{i,s},\mu) \underset{N\rightarrow \infty}{\longrightarrow} 0 \quad a.s.$$
Since for $i=1,\cdots,N$ and $s\in \left[0,1 \right]$
$$ \mu_{N}^{i,s} = s\mu_{N}^{i,1}+(1-s)\mu_{N}^{i-1,1}$$
under the convention $\mu_{N}^{0,1}=\mu_{N}^{1,0},$ we can proceed as in the first step of the proof of Theorem \ref{main_teo} to prove that
\begin{equation}\label{unifconv2}
\max_{1\leq i\leq N} \sup_{s\in \left[0,1 \right]} W_\ell\left( \mu_{N}^{i,s},\mu\right) = \max_{0\leq i\leq N}W_\ell\left( \mu_{N}^{i,1},\mu \right).
\end{equation}
By the triangle inequality,
\begin{equation}\label{triangine}
W_\ell\left( \mu_{N}^{i,1},\mu \right) \leq W_\ell\left( \mu_{N}^{i,1},\bar{\nu}_N \right) + W_\ell\left(\bar{\nu}_N,\mu \right)
\end{equation}
where $W_\ell\left(\bar{\nu}_N,\mu \right)$ goes to $0$ as $N$ goes to infinity.\\Therefore it is sufficient to prove that $\max_{0\leq i\leq N} W_\ell\left( \mu_{N}^{i,1},\bar{\nu}_N \right)\underset{N\rightarrow \infty}{\longrightarrow}0$ a.s. which we are now going to do.
For $i=1,\cdots, N$
$$ \mu_{N}^{i,1} = \frac{i}{N}\mu_{i}+\frac{1}{N}\sum_{j=i+1}^{N}\nu_j. $$
Hence let $\gamma\in \Pi(\mu_{i},\bar{\nu}_i)$ and define
$ \tilde{\gamma}(dx,dy)=\frac{i}{N} \gamma(dx,dy) +\frac{1}{N}\sum_{j=i+1}^{N}\nu_j(dx)\delta_x(dy):$
one has
$ \tilde{\gamma}(dx,\mathbb{R}^d) = \mu_{N}^{i,1}(dx) $ and $ \tilde{\gamma}(\mathbb{R}^d,dy)=\bar{\nu}_N(dy)$. If $\ell>0$ we have
\begin{align*}
W_\ell^{\ell\vee1}\left( \mu_{N}^{i,1},\bar{\nu}_N\right)&\leq \int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^\ell\tilde{\gamma}(dx,dy)=\frac{i}{N}\int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^\ell \gamma(dx,dy)+\frac{1}{N}\sum_{j=i+1}^{N}\int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^\ell\nu_j(dx)\delta_x(dy)\\
&= \frac{i}{N}\int_{\mathbb{R}^d\times\mathbb{R}^d}|x-y|^\ell \gamma(dx,dy).
\end{align*}
Taking the infimum over $\gamma$, one has
$$W_\ell\left( \mu_{N}^{i,1},\bar{\nu}_N\right)\leq \left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu_i,\bar{\nu}_i\right). $$
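The fact that $\tilde{\gamma}$ is indeed a coupling of $\mu_{N}^{i,1}$ and $\bar{\nu}_N$, as asserted above, can be checked directly from the definitions: recalling that $\bar{\nu}_i=\frac{1}{i}\sum_{j=1}^{i}\nu_j$ and that $\gamma\in\Pi(\mu_i,\bar{\nu}_i)$,
\begin{align*}
\tilde{\gamma}(dx,\mathbb{R}^d) &= \frac{i}{N}\mu_i(dx)+\frac{1}{N}\sum_{j=i+1}^{N}\nu_j(dx)=\mu_{N}^{i,1}(dx),\\
\tilde{\gamma}(\mathbb{R}^d,dy) &= \frac{i}{N}\bar{\nu}_i(dy)+\frac{1}{N}\sum_{j=i+1}^{N}\nu_j(dy)=\frac{1}{N}\sum_{j=1}^{N}\nu_j(dy)=\bar{\nu}_N(dy).
\end{align*}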
Finally let $\alpha\in\left(0,1 \right) $ so that
\begin{align*}
\max_{0\leq i\leq N} W_\ell\left( \mu_{N}^{i,1},\bar{\nu}_N \right) &= \max_{1\leq i\leq N} W_\ell\left( \mu_{N}^{i,1},\bar{\nu}_N \right) \leq \max_{1\leq i \leq N}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu_i,\bar{\nu}_i\right)\\
&\leq \max_{1\leq i\leq N}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu_i,\mu\right)+\max_{1\leq i\leq N}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu,\bar{\nu}_i\right)\\
&\leq
\max_{1\leq i\leq \lfloor\alpha N\rfloor}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu_i,\mu\right)+
\max_{\lfloor\alpha N\rfloor< i\leq N}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu_i,\mu\right)\\
&+
\max_{1\leq i\leq \lfloor\alpha N\rfloor}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu,\bar{\nu}_i\right)+\max_{\lfloor\alpha N\rfloor< i\leq N}\left( \frac{i}{N}\right) ^\frac{1}{\ell\vee1}W_\ell\left( \mu,\bar{\nu}_i\right)\\
&\leq \alpha^\frac{1}{\ell\vee1} \sup_{ i}W_\ell\left( \mu_i,\mu\right) + \alpha^\frac{1}{\ell\vee1}\sup_{i} W_\ell\left(\mu,\bar{\nu}_i \right)+
\max_{\lfloor\alpha N\rfloor< i\leq N}W_\ell\left( \mu_i,\mu\right)+ \max_{\lfloor\alpha N\rfloor< i\leq N}W_\ell\left(\mu,\bar{\nu}_i \right).
\end{align*}
For fixed $\alpha$, the sum of the last two terms goes to $0$ as $N$ goes to infinity, while the sum of the first two terms is arbitrarily small for $\alpha$ small.
It is therefore possible to conclude that the left-hand side goes to $0$ almost surely.\\
As explained in the first step of the proof of Theorem \ref{main_teo}, we can adapt the reasoning to the case $\ell=0$.\\
\textbf{SECOND STEP}\quad We can now reconsider the following stopping time
$$I_N = \min\left\lbrace 1\leq i \leq N : \exists s\in\left[ 0,1\right] :W_\ell\left(\mu^{i,s}_{N},\mu \right)\geq r \right\rbrace$$
for the filtration $\left(\mathcal{F}_i=\sigma\left(X_1,\cdots,X_i\right)\right)_{i\geq1}.$
Proceeding exactly as in the second step of the proof of Theorem \ref{main_teo} and keeping the same notation, we obtain the following decomposition
$$\sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) \right) = \sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) - Q_N \right) + \sqrt{N}Q_N $$
where
$$Q_N :=\frac{1}{N}\sum_{i=1}^{N}\left( \frac{\delta U}{\delta m}(\mu_{N}^{i\wedge I_N,0},X_i) -\int_{\mathbb{R}^d} \frac{\delta U}{\delta m}(\mu_{N}^{i\wedge I_N,0},x)\nu_i(dx) \right)$$
and for $N\geq N^*$, $U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) - Q_N$ coincides with
$$R_N = \frac{1_{N\geq N^*}}{N}\sum_{i=1}^{N}\int_{0}^{1}ds\int_{\mathbb{R}^d} \left(\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\right)(\delta_{X_i}-\nu_i)(dx). $$\\
\textbf{THIRD STEP}\quad Let us first prove the convergence in $L^1$ of $\sqrt{N}R_N$ to $0$. By the same argument as in the proof of Theorem \ref{main_teo}, this is sufficient to deduce that $\sqrt{N}\left(U\left(\mu_N \right)- U\left(\bar{\nu}_N\right) - Q_N \right)$ goes to $0$ in probability. \\
By Assumption \textbf{RU\ref{growth_diff2}}, $\exists C < \infty$, $\exists \alpha \in \left(\frac{1}{2},1 \right] $ such that for $N\geq N^*$
$$ \bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert \leq C\left((1+|x|^\ell)\left\|\mu_{N}^{i,s}-\mu_{N}^{i,0} \right\|_0^\alpha +(1+|x|^{\ell(1-\alpha)})\left(\int_{\mathbb{R}^d}|y|^\ell|\mu_{N}^{i,s}-\mu_{N}^{i,0}|(dy) \right)^\alpha \right) $$
with
$$|\mu_{N}^{i,s} -\mu_{N}^{i,0}|(dy) \leq \frac{s}{N}(\delta_{X_i}+\nu_i)(dy)$$
and
$$ \left\| \mu_{N}^{i,s}-\mu_{N}^{i,0}\right\|_0 = |\mu_{N}^{i,s} -\mu_{N}^{i,0}|(\mathbb{R}^d)=\frac{s}{N}|\delta_{X_i}-\nu_i|(\mathbb{R}^d)\leq \frac{s}{N}
(\delta_{X_i}+\nu_i)(\mathbb{R}^d)= \frac{2s}{N}.$$
Substituting the above quantities and using Young's inequality, one obtains
\begin{align*}
&\bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert \leq C\left((1+|x|^\ell)\left( \frac{2s}{N}\right) ^\alpha+(1+|x|^{\ell(1-\alpha)})\left(\int_{\mathbb{R}^d}|y|^\ell(\delta_{X_i}+\nu_i)(dy) \right)^\alpha \left( \frac{s}{N}\right) ^\alpha \right)\\
&\phantom{=}\leq \frac{2^\alpha C}{N^\alpha} \left((1+|x|^\ell)+\alpha\left(|X_i|^\ell+\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy) \right)+(1-\alpha)+\alpha \left(|X_i|^\ell+\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy) \right)+|x|^\ell(1-\alpha)\right)\\
&\phantom{=}\leq \frac{2^\alpha C}{N^\alpha} \left((1+|x|^\ell)+\left(|X_i|^\ell+\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy) \right)+\frac{1}{2}+ \left(|X_i|^\ell+\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy) \right)+\frac{|x|^\ell}{2}\right) \\
&\phantom{=}= \frac{2^\alpha C}{N^\alpha} \left(\frac{3}{2}(1+|x|^\ell)+ 2|X_i|^\ell +2\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy) \right).
\end{align*}
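The form of Young's inequality used in the second line above is the weighted arithmetic--geometric mean inequality: for $a,b\geq 0$ and $\alpha\in\left(0,1\right]$,
$$a^\alpha b^{1-\alpha}\leq \alpha a+(1-\alpha)b.$$
It is applied once with $a=|X_i|^\ell+\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy)$ and $b=1$, giving $a^\alpha\leq \alpha a+(1-\alpha)$, and once with the same $a$ and $b=|x|^\ell$, giving $|x|^{\ell(1-\alpha)}a^\alpha\leq(1-\alpha)|x|^\ell+\alpha a$.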
Therefore
\begin{align*}
|R_N|
&\leq \frac{1_{N\geq N^*}}{N}\sum_{i=1}^{N}\int_{0}^{1}ds\int_{\mathbb{R}^d} \bigg\rvert\frac{\delta U}{\delta m}(\mu_{N}^{i,s},x)- \frac{\delta U}{\delta m}(\mu_{N}^{i,0},x)\bigg\rvert(\delta_{X_i}+\nu_i)(dx)\\
&\leq \frac{2^\alpha C}{N^{\alpha+1}} \sum_{i=1}^{N}\int_{\mathbb{R}^d}\left(\frac{3}{2}(1+|x|^\ell)+ 2|X_i|^\ell +2\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy) \right)(\delta_{X_i}+\nu_i)(dx)\\
& = \frac{2^\alpha C}{N^{\alpha+1}}\sum_{i=1}^{N}\left( 3 + \frac{11}{2}\int_{\mathbb{R}^d}|y|^\ell\delta_{X_i}(dy) + \frac{11}{2}\int_{\mathbb{R}^d}|y|^\ell\nu_i(dy)\right)\\
& = \frac{C_1}{N^{\alpha}}+ \frac{C_2}{N^{\alpha}}\int_{\mathbb{R}^d}|y|^\ell\mu_{N}(dy)+\frac{C_3}{N^{\alpha}}\int_{\mathbb{R}^d}|y|^\ell\bar{\nu}_N(dy)
\end{align*}
for some positive constants $C_1,C_2$ and $C_3$.\\
Finally
$$\mathbb{E}(\sqrt{N}|R_N|) \leq \frac{C_1}{N^{\alpha-\frac{1}{2}}}+\frac{C_2}{N^{\alpha-\frac{1}{2}}}\int_{\mathbb{R}^d}|y|^\ell\bar{\nu}_{N}(dy)+\frac{C_3}{N^{\alpha-\frac{1}{2}}}\int_{\mathbb{R}^d}|y|^\ell\bar{\nu}_N(dy)$$
and since $\alpha > \frac{1}{2}$ and $\lim_{N\to\infty} \int_{\mathbb{R}^d}|y|^\ell\bar{\nu}_{N}(dy)=\int_{\mathbb{R}^d}|y|^\ell\mu(dy) $, we can conclude that the left-hand side goes to $0$ as $N$ goes to infinity and so the third step is concluded.\\
\textbf{FOURTH STEP}\quad We are now going to prove the convergence in distribution of $\sqrt{N}Q_N$ to a Gaussian random variable. In this case, we can apply the Central Limit Theorem for martingales directly, while in Theorem \ref{main_teo} we first had to introduce the Poisson equation before applying it to $K_{3,N}$.
For $1\leq i \leq N$ let
$$Y_{N,i}=\frac{1}{\sqrt{N}}\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},X_i \right)-\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) \nu_i(dx)\right).$$\\
Let us start by \textbf{checking that $\mathbb{E}\left(Y_{N,i}|\mathcal{F}_{i-1}\right)=0$. }\\
Since $X_i$ is independent of $\mathcal{F}_{i-1}$ and $\mu^{i\wedge I_N,0}_{N}=\sum_{j=1}^{i-1}1_{\left\lbrace I_N=j\right\rbrace }\mu^{j,0}_{N}+1_{\left\lbrace I_N>i-1\right\rbrace}\mu^{i,0}_{N}$ is $\mathcal{F}_{i-1}$-measurable, applying the Freezing Lemma one has
\begin{align*}
\mathbb{E}\left(Y_{N,i}|\mathcal{F}_{i-1}\right) &= \mathbb{E}\left(\frac{1}{\sqrt{N}}\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},X_i \right)-\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) \nu_i(dx)\right)|\mathcal{F}_{i-1}\right)\\
&= \frac{1}{\sqrt{N}}\left(\mathbb{E}\left( \frac{\delta U}{\delta m}\left(t,X_i \right) \right)_{t=\mu^{i\wedge I_N,0}_{N}} - \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) \nu_i(dx)\right) = 0.
\end{align*}
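Let us recall the statement of the Freezing Lemma being used: if $X$ is independent of a $\sigma$-algebra $\mathcal{G}$, $Z$ is $\mathcal{G}$-measurable and $\phi$ is measurable with $\phi(X,Z)$ integrable, then
$$\mathbb{E}\left(\phi(X,Z)|\mathcal{G}\right)=\Phi(Z) \quad \text{where} \quad \Phi(z)=\mathbb{E}\left(\phi(X,z)\right).$$
Here it is applied with $X=X_i$, $\mathcal{G}=\mathcal{F}_{i-1}$, $Z=\mu^{i\wedge I_N,0}_{N}$ and $\phi(x,t)=\frac{\delta U}{\delta m}\left(t,x \right)$, whose integrability follows from Hypothesis \textbf{RU\ref{growth2}} together with $\nu_i\in\mathbb{P}_\ell\left(\mathbb{R}^d\right)$.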
For what concerns the \textbf{square integrability}, let us check that both components are square integrable by applying Hypothesis \textbf{RU\ref{growth2}} and the fact that $\nu_{i} \in \mathbb{P}_\ell\left(\mathbb{R}^d \right)$. \\
\begin{itemize}
\item
$$\mathbb{E}\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},X_i \right)^2\right)\leq C^2 \mathbb{E}\left( \left(1+|X_i|^\frac{\ell}{2} \right)^2 \right)\leq 2C^2 \mathbb{E}\left( 1+|X_i|^\ell \right) < \infty$$
\item $$\mathbb{E}\left(\left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) \nu_i(dx) \right) ^2\right) \leq \mathbb{E}\left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2 \nu_i(dx) \right) \leq 2C^2 \left( 1+\int_{\mathbb{R}^d}|x|^\ell\nu_i(dx)\right)<\infty.$$\\
\end{itemize}
Let us now study the \textbf{convergence of
$\sum_{i=1}^{N}\mathbb{E}\left(Y^2_{N,i}|\mathcal{F}_{i-1} \right):$}\\
\begin{align*}
\sum_{i=1}^{N}\mathbb{E}\left(Y^2_{N,i}|\mathcal{F}_{i-1} \right)&= \frac{1}{N}\sum_{i=1}^{N}\mathbb{E}\left(\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},X_i \right)-\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) \nu_i(dx)\right)^2 \arrowvert\mathcal{F}_{i-1} \right)\\
&= \frac{1}{N}\sum_{i=1}^{N}\left( \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2\nu_i(dx)-\left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)\nu_i(dx) \right)^2\right) .
\end{align*}
As in the fourth step of the proof of Theorem \ref{main_teo}, thanks to Hypothesis \textbf{RU\ref{derivative_conv2}} and to the a.s. convergence of $\max_{1\leq i\leq N} W_\ell(\mu_{N}^{i,0},\mu)$ to $0$, we obtain the existence of a sequence of random variables $\left( \epsilon_N\right)_{N\geq 0} $ converging a.s. to $0$ such that
\begin{equation} \label{control2}
\forall 1\leq i\leq N\qquad \bigg\rvert\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_N,x \right)-\frac{\delta U}{\delta m}\left( \mu,x \right)\bigg\rvert\leq\left(1+|x|^\frac{\ell}{2}\right)\epsilon_N.
\end{equation}
\underline{First Component}\quad Let us first show that
$$\lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^{N} \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2\nu_i(dx) - \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\mu(dx)=0\quad a.s. $$
We can rewrite the difference in the following way
\begin{align*}
&\frac{1}{N}\sum_{i=1}^{N} \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2\nu_i(dx) - \frac{1}{N}\sum_{i=1}^{N} \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\nu_i(dx)\\
& \phantom{=}+ \frac{1}{N}\sum_{i=1}^{N} \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\nu_i(dx) - \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\mu(dx) \\
& = \frac{1}{N}\sum_{i=1}^{N} \int_{\mathbb{R}^d} \left(\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2 - \frac{\delta U}{\delta m}\left(\mu,x \right)^2 \right) \nu_i(dx)+ \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\left(\bar{\nu}_N-\mu \right)(dx).
\end{align*}
Since $\lim_{N\to\infty}W_\ell(\bar{\nu}_N,\mu)=0$, by the characterization (\ref{carat_wass}), we can deduce that the second term of the sum goes to $0$ thanks to Assumption \textbf{RU\ref{growth2}} and Assumption \textbf{RU\ref{continuity}}.
For what concerns the first term for $i=1,\cdots,N$, using Assumption \textbf{RU\ref{growth2}} and (\ref{control2}), one has
\begin{align*}
\bigg\rvert \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2\ - \frac{\delta U}{\delta m}\left(\mu,x \right)^2\bigg\rvert&\leq \left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) - \frac{\delta U}{\delta m}\left(\mu,x \right)\right)^2+2\bigg\rvert\frac{\delta U}{\delta m}\left(\mu,x \right) \bigg\rvert \bigg\rvert \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)\ - \frac{\delta U}{\delta m}\left(\mu,x \right)\bigg\rvert\\
&\leq \left(1+|x|^\frac{\ell}{2}\right)^2\epsilon^2_N+2C\left(1+|x|^\frac{\ell}{2}\right)^2\epsilon_N \leq 2\left(1+|x|^\ell\right) \epsilon_N \left(\epsilon_N+2C\right).
\end{align*}
Therefore
\begin{align*}
\bigg\rvert\frac{1}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^d} \left(\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2\ - \frac{\delta U}{\delta m}\left(\mu,x \right)^2 \right) \nu_i(dx)\bigg\rvert&\leq2 \epsilon_N \left(\epsilon_N+2C\right) \frac{1}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^d}\left(1+|x|^\ell\right)\nu_i(dx)\\
&= 2 \epsilon_N \left(\epsilon_N+2C\right) \left(1+\int_{\mathbb{R}^d}|x|^\ell\bar{\nu}_N(dx) \right)
\end{align*}
where the right-hand side goes to $0$ a.s. since $\lim_{N\to\infty}\int_{\mathbb{R}^d}|x|^\ell\bar{\nu}_N(dx)= \int_{\mathbb{R}^d}|x|^\ell\mu(dx)$.\\
\underline{Second Component} We are going to prove that
$$\lim_{N\to\infty} \frac{1}{N}\sum_{i=1}^{N} \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)\nu_i(dx) \right)^2 -\int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy)=0 \,\, a.s. $$
Let us rewrite the difference in the following way
$$\frac{1}{N}\sum_{i=1}^{N}\left( \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)\nu_i(dx) \right)^2 - \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,x \right)\nu_i(dx) \right)^2\right) $$
$$+\left( \frac{1}{N}\sum_{i=1}^{N} \int_{\mathbb{R}^d \times \mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,x \right)\frac{\delta U}{\delta m}\left(\mu,y\right)\nu_i(dx)\nu_i(dy) - \int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy)\right). $$
Since Hypothesis \ref{wess.conv2} holds, again by the characterization (\ref{carat_wass}), we can deduce that the second term of the sum goes to $0$ a.s. thanks to Assumption \textbf{RU\ref{growth2}} and Assumption \textbf{RU\ref{continuity}}. For what concerns the first term, using Assumption \textbf{RU\ref{growth2}} and (\ref{control2}), one has that for $i=1,\cdots,N$
\begin{align*}
&\left|\left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)\nu_i(dx) \right)^2- \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,x \right)\nu_i(dx) \right)^2 \right|\\
&\leq \left( \int_{}\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)-\frac{\delta U}{\delta m}\left(\mu,x \right)\right) \nu_i(dx) \right)^2 + 2\left|\int_{}\frac{\delta U}{\delta m}\left(\mu,x \right)\nu_i(dx) \right| \left|\int_{}\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)-\frac{\delta U}{\delta m}\left(\mu,x \right)\right) \nu_i(dx) \right|\\
&\leq \int_{}\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)-\frac{\delta U}{\delta m}\left(\mu,x \right)\right)^2 \nu_i(dx) + 2\int_{}\left|\frac{\delta U}{\delta m}\left(\mu,x \right)\right|\nu_i(dx) \int_{\mathbb{R}^d}\left|\left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)-\frac{\delta U}{\delta m}\left(\mu,x \right)\right)\right| \nu_i(dx)\\
&\leq 2\epsilon_N^2\int_{\mathbb{R}^d} (1+|x|^\ell)\nu_i(dx)+ 2C\epsilon_N \int_{\mathbb{R}^d}\left(1+|x|^\frac{\ell}{2} \right) \nu_i(dx) \int_{\mathbb{R}^d}\left(1+|x|^\frac{\ell}{2} \right)\nu_i(dx) \\
&\leq 2\epsilon_N^2\int_{\mathbb{R}^d} (1+|x|^\ell)\nu_i(dx)+ 4C\epsilon_N \int_{\mathbb{R}^d}\left(1+|x|^\ell \right) \nu_i(dx) = \left(2\epsilon_N^2+ 4C\epsilon_N \right) \left(1+ \int_{\mathbb{R}^d} |x|^\ell\nu_i(dx)\right).
\end{align*}
Therefore
\begin{align*}
&\frac{1}{N}\sum_{i=1}^{N} \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)\nu_i(dx) \right)^2 - \frac{1}{N}\sum_{i=1}^{N} \left( \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu,x \right)\nu_i(dx) \right)^2\\
&\phantom{==}\leq \left(2\epsilon_N^2+4C\epsilon_N \right)\left(1+\int_{\mathbb{R}^d}\left|x\right|^\ell\bar{\nu}_N(dx) \right)
\end{align*}
and the left-hand side goes to $0$ since $\lim_{N\to\infty}\int_{\mathbb{R}^d}|x|^\ell\bar{\nu}_N(dx)= \int_{\mathbb{R}^d}|x|^\ell\mu(dx)$.\\
Finally, in order to apply the Central Limit Theorem for martingales (see Corollary $3.1$ in \cite{HallHeyde}) and conclude that $$\sqrt{N}Q_N=\sum_{i=1}^{N}Y_{N,i}\underset{d}{\Longrightarrow} \mathcal{N}\left(0, \int_{\mathbb{R}^d} \frac{\delta U}{\delta m}\left(\mu,x \right)^2\mu(dx)-\int_{\mathbb{R}^d \times \mathbb{R}^d} \frac{\delta U}{\delta m} (\mu,x)\frac{\delta U}{\delta m} (\mu,y)\eta(dx,dy) \right),$$ it remains only to verify that the \textbf{Lindeberg condition} holds.
We need to check that for $\epsilon>0$, $\sum_{i=1}^{N}\mathbb{E}\left(Y^2_{N,i} 1_{\left\lbrace Y^2_{N,i}> \epsilon \right\rbrace }| \mathcal{F}_{i-1}\right) $ goes to $0$ in probability as $N\rightarrow \infty$.\\
By Jensen's inequality and Assumption \textbf{RU\ref{growth2}}, one has
\begin{align*}
NY^2_{N,i} &= \left( \frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},X_i \right)-\int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right) \nu_i(dx)\right)^2\\
&\leq2\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},X_i \right)^2+2 \int_{\mathbb{R}^d}\frac{\delta U}{\delta m}\left(\mu^{i\wedge I_N,0}_{N},x \right)^2\nu_i(dx)\\
&\leq 4C^2\left( 1+ |X_i|^\ell \right)+ 4C^2 \int_{\mathbb{R}^d}\left( 1+ |x|^\ell \right)\nu_i(dx) = 4C^2\left(2+|X_i|^\ell+\int_{\mathbb{R}^d} |x|^\ell\nu_i(dx) \right).
\end{align*}
Therefore, as in the fourth step of the proof of Theorem \ref{main_teo}, it is enough to check that for each $\epsilon>0$
$$2\times1_{\left\lbrace \frac{2}{N} > \epsilon\right\rbrace }+\sum_{i=1}^{N}\mathbb{E}\left(\frac{|X_i|^\ell}{N} 1_{\left\lbrace |X_i|^\ell>N\epsilon \right\rbrace} \big\arrowvert\mathcal{F}_{i-1}\right)+ \sum_{i=1}^{N} \frac{\int_{\mathbb{R}^d}|x|^\ell\nu_i(dx)}{N} 1_{\left\lbrace \int_{\mathbb{R}^d}|x|^\ell\nu_i(dx) > N\epsilon\right\rbrace } $$
goes to $0$ as $N$ goes to infinity (a.s.).\\
It is immediate to prove that the \textbf{first component} goes to $0$, while the \textbf{second component} goes to $0$ by the independence of $X_i$ from $\mathcal{F}_{i-1}$ and (\ref{lindenberg}). For what concerns the \textbf{third component}, we have that $\bar{y}_N:= \frac{1}{N}\sum_{i=1}^{N}\int_{\mathbb{R}^d}|x|^\ell\nu_i(dx) \rightarrow \int_{\mathbb{R}^d}|x|^\ell\mu(dx).$ Therefore
$$\frac{\int_{\mathbb{R}^d}|x|^\ell\nu_N(dx)}{N} = \bar{y}_N - \frac{N-1}{N}\bar{y}_{N-1}\underset{N\to\infty}{\longrightarrow} 0 $$
which implies that for every $\epsilon >0$
$$\int_{\mathbb{R}^d}|x|^\ell\nu_N(dx) 1_{\left\lbrace \int_{\mathbb{R}^d}|x|^\ell\nu_N(dx) >N \epsilon \right\rbrace} \underset{N\to\infty}{\longrightarrow} 0 $$
and taking the Ces\`{a}ro mean we deduce that
$$\lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N} \int_{\mathbb{R}^d}|x|^\ell\nu_i(dx) 1_{\left\lbrace \int_{\mathbb{R}^d}|x|^\ell\nu_i(dx) > N\epsilon\right\rbrace } \leq \lim_{N\to\infty} \frac{1}{N} \sum_{i=1}^{N}\int_{\mathbb{R}^d}|x|^\ell\nu_i(dx)1_{\left\lbrace \int_{\mathbb{R}^d}|x|^\ell\nu_i(dx) >i \epsilon \right\rbrace} =0$$
and so the proof is concluded.
\end{proof}
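The limit law above can also be observed numerically. The following Python sketch is purely illustrative and not part of the proof: we take the simplest possible data, namely $\frac{\delta U}{\delta m}(\mu,x)=x$ and $\nu_i=\mu=\mathcal{N}(0,1)$ (our own simplification), so that $\sqrt{N}Q_N$ reduces to $N^{-1/2}\sum_{i=1}^{N}(X_i-\mathbb{E}X_i)$ and the asymptotic variance equals $\int x^2\,\mu(dx)-\left(\int x\,\mu(dx)\right)^2=1$.

```python
import random
import statistics

# Illustrative Monte Carlo sketch (a deliberate simplification, not the
# general setting of the theorem): with delta U / delta m (mu, x) = x and
# nu_i = mu = N(0, 1), sqrt(N) * Q_N reduces to N^(-1/2) * sum_i X_i,
# whose limit law is N(0, 1).
random.seed(0)
N, reps = 500, 2000
samples = [sum(random.gauss(0.0, 1.0) for _ in range(N)) / N ** 0.5
           for _ in range(reps)]

print(abs(statistics.mean(samples)) < 0.1)        # True: mean close to 0
print(abs(statistics.pstdev(samples) - 1) < 0.1)  # True: std close to 1
```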
\section{Introduction}
Generalized topological spaces were introduced by Cs\'{a}sz\'{a}r in the last years of the twentieth century (see \cite{csaszar} and \cite{genopen}). They have been investigated by many authors from all over the world, who have invented generalized analogues of separation axioms (see \cite{sep}, \cite{separ} and \cite{sarsak}), filters (see \cite{filters}), convergence (see \cite{baskar}, \cite{seq} and \cite{sarma}) and topological groups (see \cite{groups}).
What is quite surprising is the fact that Cs\'{a}sz\'{a}r's spaces are barely used in formal logic. Recently, we have prepared generalized topological semantics for certain weak modal logics (see \cite{witczak}). We have shown connections between our frames and certain subclasses of neighbourhood frames. We have also prepared a subintuitionistic version of our semantics.
Our \emph{strong} generalized models (complete with respect to the modal logic $\mathbf{MTN4}$, as we can infer e.g. from \cite{indrze}) turned out to be similar to the \emph{complete extensional abstractions} investigated by Soldano in \cite{soldano}. However, both his language and his goals were different from ours. Moreover, he started from different intuitions. For us the primary goal was to check if it is possible to use Cs\'{a}sz\'{a}r's structures as a semantic tool for logic, while Soldano's approach was more practical (he wanted to model certain aspects of human reasoning and of classifying objects in databases).
There is also an interesting paper by J\"{a}rvinen, Kondo and Kortelainen (see \cite{jarvi}). These authors used \emph{interior systems} (which are compatible with Cs\'{a}sz\'{a}r's generalized topologies and Soldano's extensional abstractions) to speak about soundness and completeness of the logics $\mathbf{MT4}$ and $\mathbf{MTN4}$. Their approach was motivated by certain reflections on approximate reasoning and rough sets.
While working on this topic, we have developed certain purely topological notions and tools which seemed to be interesting in their own right. Basically, in Cs\'{a}sz\'{a}r's spaces it is possible that certain points lie beyond every open set (and thus beyond the maximal open set $\bigcup \mu$). We propose an intermediate solution: such points can still have a non-empty family of open neighbourhoods. This family of sets is connected with these points by means of a special function $\mathcal{F}$. This approach allows us to speak about new types of convergence and "openness" (this "openness" is weaker than the one which is typical for generalized topologies).
\section{General overview of \gtf-spaces}
\subsection{Basic notions}
First of all, we repeat the very definition of \textit{generalized topological space} in the sense of Cs\'{a}sz\'{a}r (see \cite{csaszar} and \cite{genopen}).
\begin{df}
Assume that $W$ is a non-empty set (universe) and $\mu \subseteq P(W)$. We say that $\mu$ is a generalized topology on $W$ if and only if: $\emptyset \in \mu$ and $\mu$ is closed under arbitrary unions, i.e. if $J \neq \emptyset$ and for each $i \in J$, $X_{i} \in \mu$, then $\bigcup_{i \in J} X_{i} \in \mu$.
In such a case we say that $\langle W, \mu \rangle$ is a generalized topological space. The elements of $\mu$ are named $\mu$-open sets (or just open sets, if there is no risk of confusion) and for any $A \subseteq W$, we define $Int(A)$ as the union of all open sets contained in $A$.
\end{df}
Sometimes we shall say that all points from $W \setminus \bigcup \mu$ are \textit{orphaned}. As for the notion of \emph{closed set}, we say that the set $A \subseteq W$ is \emph{closed} if and only if its complement is open. We define $Cl(A)$ (closure of $A$) as the smallest closed set containing $A$. It is easy to show that $Cl(A) = W \setminus Int(W \setminus A)$ (see \cite{geno}).
The second thing to do is to establish our new structure, equipped with an additional function which connects orphaned points with open sets (open neighbourhoods).
\begin{df}
\label{gtf}
We define $\gtf$-structure as a triple $M_\mu = \langle W, \mu, \mathcal{F} \rangle$ such that $\mu$ is a generalized topology on $W$ and $\mathcal{F}$ is a function from $W$ into $P(P(\bigcup \mu))$ such that:
\begin{itemize}
\item If $w \in \bigcup \mu$, then $[X \in \mathcal{F}_{w} \Leftrightarrow X \in \mu \text{ and } w \in X]$ {\normalfont [$\mathcal{F}_{w}$ is a shortcut for $\mathcal{F}(w)$]}
\item If $w \in W \setminus \bigcup \mu$, then $[X \in \mathcal{F}_{w} \Rightarrow X \in \mu]$
\end{itemize}
\end{df}
\begin{figure}[h]
\centering
\includegraphics[height=4.5cm]{mtop5}
\caption{Generalized topological model with function $\mathcal{F}$}
\label{fig:obrazek {mtop5}}
\end{figure}
Clearly, if $w \in \bigcup \mu$, then it belongs to \textit{each} of its open neighbourhoods. On the contrary, orphaned points do not belong to \textit{any} of their neighbourhoods. They are only \textit{associated} with them by means of $\mathcal{F}$.
The next definition is just a useful shortcut:
\begin{df}
Assume that $\langle W, \mu, \mathcal{F} \rangle$ is a \gtf-structure and $A \in \mu$. Then we introduce the following notation: $A^{-1} = \{z \in W; A \in \mathcal{F}_{z}\}$.
\end{df}
Below we shall discuss a simple example of a \gtf-structure. Its basic form is based strictly on Ex. 3.1 from \cite{baskar}.
\begin{przy}
\label{zet}
\normalfont{
Consider $\langle W, \mu, \mathcal{F} \rangle$, where $W = \mathbb{Z}$, $\mu = \{ \emptyset, \{1\}, \{1, 3\}, \{1, 3, 5\}, \{1, 3, 5, 7\}, ... \}$, $\mathcal{F}_{n} = \emptyset$ for any $n \in 2 \mathbb{Z}$ and if $n$ is odd, then $\mathcal{F}_{n}$ is just a collection of its open neighbourhoods.
Of course, this is a \gtf-structure, but undoubtedly it is a rather degenerate case. However, we may think about more complex versions of this space. For example, we may replace $\mathcal{F}$ by:
\begin{enumerate}
\item $\mathcal{F}'$. Consider $\gamma: 2 \mathbb{Z} \rightarrow 2 \mathbb{Z} + 1$, where $\gamma(x) = \max \{m; m \in 2\mathbb{Z} + 1, m<x\}$. Assume that:
- if $n \in 2 \mathbb{Z} + 1$, then $G \in \mathcal{F}'_{n}$ $\Leftrightarrow$ $G \in \mu$ and $n \in G$.
- if $n \in 2 \mathbb{Z}$, then $G \in \mathcal{F}'_{n}$ $\Leftrightarrow$ $G \in \mathcal{F}'_{\gamma(n)}$.
For example, $\mathcal{F}'_{8} = \mathcal{F}'_{\gamma(8)} = \mathcal{F}'_{7} = \{\{1, 3, 5, 7\}, \{1, 3, 5, 7, 9\}, \{1, 3, 5, 7, 9, 11\}, ... \}$.
\item $\mathcal{F}''$. It is just like $\mathcal{F}'$ but instead of $\gamma$ we use $\delta(x) = \min \{m; m \in 2 \mathbb{Z} + 1, m > x\}$. Then $\mathcal{F}''_{8} = \{ \{1, 3, 5, 7, 9\}, \{1, 3, 5, 7, 9, 11\}, ...\}$.
\item $\mathcal{F}'''$. We use $\gamma$ again and if $n \in 2 \mathbb{Z}$, then we define: $G \in \mathcal{F}'''_{n}$ $\Leftrightarrow$ $G \in \mu$, $G \neq \emptyset$ and $G \notin \mathcal{F}'''_{\gamma(n) + 2}$.
Now $\mathcal{F}'''_{8} = \{ \{1\}, \{1, 3\}, \{1, 3, 5\}, \{1, 3, 5, 7\}\}$.
\end{enumerate}
Of course our associating function $\mathcal{F}$ can be very arbitrary but the most interesting cases are those with certain regularities or patterns. Later (when discussing convergence) we shall go back to the examples above.
}
\end{przy}
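The three associating functions above can be computed mechanically. The following Python sketch is only an illustration: it truncates the generalized topology to the open sets built from odd numbers up to $11$ (our own simplification), and within this truncation it reproduces the families $\mathcal{F}'_{8}$, $\mathcal{F}''_{8}$ and $\mathcal{F}'''_{8}$ listed in the example.

```python
# Truncation of the example's space: only open sets built from odd numbers
# up to 11 are kept, so the infinite families are captured up to this bound.
mu = [frozenset()] + [frozenset(range(1, k + 1, 2)) for k in range(1, 12, 2)]
# mu = [{}, {1}, {1,3}, {1,3,5}, {1,3,5,7}, {1,3,5,7,9}, {1,3,5,7,9,11}]

def gamma(n):  # largest odd number below an even n
    return n - 1

def delta(n):  # smallest odd number above an even n
    return n + 1

def nbhd(m):   # open neighbourhoods of an odd point m
    return [G for G in mu if m in G]

F1_8 = nbhd(gamma(8))                                        # F'_8
F2_8 = nbhd(delta(8))                                        # F''_8
F3_8 = [G for G in mu if G and G not in nbhd(gamma(8) + 2)]  # F'''_8

print(sorted(map(sorted, F3_8)))  # [[1], [1, 3], [1, 3, 5], [1, 3, 5, 7]]
```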
In the next subsection we shall use our function $\mathcal{F}$ to establish some analogues of the well-known topological notions (like interior and closure).
\subsection{$\mathcal{F}$-interiors and $\mathcal{F}$-open sets}
The notions of $\mathcal{F}$-interior and $\mathcal{F}$-closure are based on the intuitions arising from the typical understanding of openness and closeness. However, one must note that there will be no full analogy. Our new concepts are somewhat weaker than their topological (and generalized topological) counterparts. This situation can be considered both as a limitation and a strength.
\begin{df}
\label{fint}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $w \in W$. Assume that $A \subseteq W$. We say that $w \in \mathcal{F}Int(A) \Leftrightarrow \text{ there is } G \in \mathcal{F}_{w} \text{ such that } G \subseteq A$.
\end{df}
In fact, we assume that $G \subseteq A \cap \bigcup \mu$. According to our earlier declarations, the definition above is modeled after the standard definition of interior in (generalized or not) topological spaces. Note, however, that in general we cannot say that $\mathcal{F}Int(A) \subseteq A$. To see this \footnote{Sometimes our examples and counter-examples will be presented only in the sketchy form.}, it is sufficient to consider any $A \in \mu$ and $w \in W \setminus \bigcup \mu$ such that $A \in \mathcal{F}_{w}$. Clearly, $w \in \mathcal{F}Int(A)$ but $w \notin A$.
Now let us think about a different situation: that $A \subseteq W \setminus \bigcup \mu$ and $w \in A$. Then $w \notin \mathcal{F}Int(A)$ (because even if there are some sets in $\mathcal{F}_{w}$, they cannot be contained in $A$: they belong to $\mu$, while $A$ is beyond $\bigcup \mu$). This example (or rather sketch of reasoning) shows us that sometimes $A \nsubseteq \mathcal{F}Int(A)$. Of course, this lack of inclusion is not as surprising as the first one (in fact, it is normal also for ordinary and generalized interiors). Be that as it may, the last example could be even more general: it is enough to assume that $A \cap (W \setminus \bigcup \mu) \neq \emptyset$, $\bigcup \mu \nsubseteq A$, $w \in A \cap (W \setminus \bigcup \mu)$ and for each $G \in \mathcal{F}_{w}$, $G \cap A = \emptyset$.
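Both failures of inclusion can be checked mechanically on a three-point structure. The following Python sketch is a hypothetical toy example of ours (not taken from the text above): the points $0$ and $2$ are orphaned, with $\mathcal{F}_{0}=\{\{1\}\}$ and $\mathcal{F}_{2}=\emptyset$.

```python
# Hypothetical three-point gtf-structure: mu = {0, {1}}, the points 0 and 2
# are orphaned, 0 is associated with the open set {1}, 2 with nothing.
W = {0, 1, 2}
mu = [frozenset(), frozenset({1})]
F = {0: [frozenset({1})],
     1: [G for G in mu if 1 in G],
     2: []}

def FInt(A):
    # w is in FInt(A) iff some G in F_w is contained in A
    return {w for w in W if any(G <= A for G in F[w])}

A = {1}
print(FInt(A))   # {0, 1}: FInt(A) is not a subset of A
B = {0, 2}
print(FInt(B))   # set(): B is not a subset of FInt(B)
```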
For the reasons above, it is sensible to consider at least three concepts related to the notion of openness:
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $A \subseteq W$. We say that $A$ is:
\begin{itemize}
\item $\mathcal{F}$-open ($\mathcal{F}$o.) iff $A = \mathcal{F}Int(A)$
\item d$\mathcal{F}$-open (d$\mathcal{F}$o.) iff $\mathcal{F}Int(A) \subseteq A$
\item u$\mathcal{F}$-open (u$\mathcal{F}$o.) iff $A \subseteq \mathcal{F}Int(A)$
\end{itemize}
\end{df}
Each of the following lemmas refers to the \gtf-structure $\langle W, \mu, \mathcal{F} \rangle$. Hence, we shall not repeat this assumption.
\begin{lem}Assume that $A, B \subseteq W$ and $A \subseteq B$. Then $\mathcal{F}Int(A) \subseteq \mathcal{F}Int(B)$. \end{lem}
\begin{proof}This is obvious. If $v \in \mathcal{F}Int(A)$, then there is $G \in \mathcal{F}_{v}$ such that $G \subseteq A \subseteq B$.\end{proof}
\begin{lem}
\label{amu}
Assume that $A \subseteq W$. Then $Int(A) \subseteq \mathcal{F}Int(A)$. In particular, if $A \in \mu$, then $A \subseteq \mathcal{F}Int(A)$. \end{lem}
\begin{proof}If $Int(A) \neq \emptyset$ and $w \in Int(A)$, then there is certain $G \in \mu$ such that $w \in G \subseteq A$. Because $w \in \bigcup \mu$, $G \in \mathcal{F}_{w}$.
If $Int(A) = \emptyset$, then the result is obvious. \end{proof}
\begin{lem}
If $A \subseteq W$, then $\mathcal{F}Int(A) \cap \bigcup \mu = Int(A)$.
\end{lem}
\begin{proof}
($\subseteq$) If $v \in \mathcal{F}Int(A) \cap \bigcup \mu$, then there is $G \in \mathcal{F}_{v}$ such that $G \subseteq A$. Of course, $v \in G$ (because $v \in \bigcup \mu$). Hence, $v \in Int(A)$.
($\supseteq$) If $v \in Int(A)$, then $v \in \mathcal{F}Int(A)$ (by means of Lemma \ref{amu}). But if $v \in Int(A)$, then $v \in \bigcup \mu$. Hence, $v \in \mathcal{F}Int(A) \cap \bigcup \mu$.
\end{proof}
\begin{lem}If $A \subseteq W$, then $\mathcal{F}Int(\mathcal{F}Int(A)) \cap \bigcup \mu \subseteq \mathcal{F}Int(A)$.
\end{lem}
\begin{proof}
Assume that $v \in \mathcal{F}Int(\mathcal{F}Int(A)) \cap \bigcup \mu$. Then there is $G \in \mathcal{F}_{v}$ such that $G \subseteq \mathcal{F}Int(A)$. Since $v \in \bigcup \mu$ and $G \in \mathcal{F}_{v}$, we have $v \in G$ (by the very definition of $\mathcal{F}$). Thus $v \in G \subseteq \mathcal{F}Int(A)$, i.e. $v \in \mathcal{F}Int(A)$.
\end{proof}
\begin{lem}Assume that $A \in \mu$. Then $A^{-1} \subseteq \mathcal{F}Int(A)$. \end{lem}
\begin{proof}If $w \in A^{-1}$, then it means that $A \in \mathcal{F}_{w}$. Hence, there is $G \in \mathcal{F}_{w}$, namely $A$, such that $G \subseteq A$. Thus $w \in \mathcal{F}Int(A)$. \end{proof}
\begin{lem}$\mathcal{F}Int(W) = W \Leftrightarrow \text{ for any } w \in W, \mathcal{F}_{w} \neq \emptyset$. \end{lem}
\begin{proof}($\Rightarrow$) Assume that there is $v \in W$ such that $\mathcal{F}_{v} = \emptyset$. Then $v \notin \mathcal{F}Int(W)$. ($\Leftarrow$) Assume that $v \notin \mathcal{F}Int(W)$. This means that for any $G \in \mathcal{F}_{v}$, $G \nsubseteq W$. Clearly, this is possible only if there are no sets in $\mathcal{F}_{v}$ at all. \end{proof}
\begin{lem}$\mathcal{F}Int(\emptyset) = \emptyset \Leftrightarrow \text{ for any } w \in W$, $\emptyset \notin \mathcal{F}_{w}$. \end{lem}
\begin{proof}
($\Rightarrow$) Assume that there is $v \in W$ such that $\emptyset \in \mathcal{F}_{v}$. Then $v \in \mathcal{F}Int(\emptyset)$. Contradiction.
($\Leftarrow$) Suppose that $\mathcal{F}Int(\emptyset) \neq \emptyset$. Then there must be at least one $v \in W$ for which there is $G \in \mathcal{F}_{v}$ such that $G \subseteq \emptyset$. Undoubtedly, $G$ must be empty.
\end{proof}
\begin{lem}Assume that for certain $X \subseteq W$ and for any $A \in \mu$: if $A \neq \emptyset$, then $A \nsubseteq X$. Then $\mathcal{F}Int(X) = \emptyset$ or $\mathcal{F}Int(X) \subseteq Z = \{z \in W; \emptyset \in \mathcal{F}_{z}\}$. \end{lem}
\begin{proof}Suppose that $\mathcal{F}Int(X) \neq \emptyset$ and $\mathcal{F}Int(X) \nsubseteq Z$. Hence, there is $v \in \mathcal{F}Int(X)$ such that $\emptyset \notin \mathcal{F}_{v}$. It means that $\mathcal{F}_{v}$ contains only non-empty sets. But we assumed that there are no non-empty sets (from $\mu$) contained in $X$. \end{proof}
Note that if there is at least one $X \subseteq W$ such that $\mathcal{F}Int(X) = \emptyset$, then $Z$ (defined as above) must be empty. Assume the contrary. If $z \in Z$, then we can always say that there is $G \in \mathcal{F}_{z}$, namely $G = \emptyset$, such that $G \subseteq X$. But then $z \in \mathcal{F}Int(X)$.
\begin{lem}
\label{sumint}
Suppose that $J \neq \emptyset$ and $\{X_i\}_{i \in J}$ is a family of subsets of $W$. Then $\bigcup_{i \in J} \mathcal{F}Int(X_i) \subseteq \mathcal{F}Int(\bigcup_{i \in J} X_i)$. If each $X_i$ is u$\mathcal{F}$o., then $\bigcup_{i \in J} X_{i} \subseteq \mathcal{F}Int(\bigcup_{i \in J} X_i)$.
\end{lem}
\begin{proof}
Let $v \in \bigcup_{i \in J}\mathcal{F}Int(X_i)$. Hence, there is $k \in J$ such that $v \in \mathcal{F}Int(X_{k})$. Then there is $G \in \mathcal{F}_{v}$ such that $G \subseteq X_k$. But then $G \subseteq X_k \subseteq \bigcup_{i \in J} X_i$. Hence, we can say that $v \in \mathcal{F}Int(\bigcup_{i \in J} X_i)$.
\end{proof}
Note that we can easily imagine the following situation: there is $v \in W$ such that for any $G \in \mathcal{F}_{v}$ and for each $i \in J$, $G \nsubseteq X_i$, but at the same time there is $H \in \mathcal{F}_{v}$ such that $H \subseteq \bigcup_{i \in J} X_i$. Hence, $v \in \mathcal{F}Int(\bigcup_{i \in J}X_i)$ but $v \notin \bigcup_{i \in J} \mathcal{F}Int(X_i)$. Please take a look below:
\begin{przy}
\normalfont{
Let $\langle W, \mu, \mathcal{F} \rangle$ be \gtf where $W = \mathbb{R}^2$ and $\mu$ is a standard (hence, in particular, generalized) topology but restricted to the ball $K[(0,0), 10]$. It means that $\mu$-open sets are unions of open balls contained in $K$, i.e. $K = \bigcup \mu$. Now we can take $K_1[(0,0), 2]$, $K_2[(2, 0), 2]$, $v = (20, 20)$ and assume that $\mathcal{F}_{v}$ contains only one set, namely the open rectangle with vertices $(-1, -0.5), (-1, 0.5), (3, 0.5), (3, -0.5)$. Clearly, this rectangle is contained in $K_1 \cup K_2$ (and in the union of their $\mathcal{F}$-interiors, because these balls are open) but it is contained neither in $K_1$ nor in $K_2$.
}
\end{przy}
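A finite analogue of this counterexample can be checked mechanically. The Python sketch below uses a hypothetical three-point structure of our own design: the orphaned point $9$ is associated only with the open set $\{1,2\}$, which is contained in $X_1 \cup X_2$ but in neither set alone.

```python
# Hypothetical finite analogue of the planar counterexample: the orphaned
# point 9 is associated only with {1, 2}, which lies inside the union
# X1 | X2 but in neither X1 nor X2 alone.
W = {1, 2, 9}
mu = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
F = {1: [G for G in mu if 1 in G],
     2: [G for G in mu if 2 in G],
     9: [frozenset({1, 2})]}

def FInt(A):
    return {w for w in W if any(G <= A for G in F[w])}

X1, X2 = {1}, {2}
print(FInt(X1) | FInt(X2) == {1, 2})   # True
print(FInt(X1 | X2) == {1, 2, 9})      # True: the inclusion is strict
```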
\begin{lem}
Suppose that $J \neq \emptyset$ and $\{X_i\}_{i \in J}$ is a family of subsets of $W$. Then $\mathcal{F}Int(\bigcap_{i \in J} X_i) \subseteq \bigcap_{i \in J} \mathcal{F}Int(X_i)$. If each $X_i$ is d$\mathcal{F}$o., then $\mathcal{F}Int(\bigcap_{i \in J} X_i) \subseteq \bigcap_{i \in J} X_i$.
\end{lem}
\begin{proof}The proof is similar to the former one. Also, we can find a counterexample to the opposite inclusion.\end{proof}
\begin{lem}
\label{uopen}
Assume that $A \subseteq \mathcal{F}Int(A) \subseteq \bigcup \mu$. Then $A \in \mu$.
\end{lem}
\begin{proof}
For any $v \in A$, there is $G \in \mathcal{F}_{v}$ such that $G \subseteq A$. Of course, $G \in \mu$. But $v \in \bigcup \mu$, hence $v \in G$ (from the very definition of $\mathcal{F}$ in \gtf-structure). This means that for any $v \in A$, $v \in Int(A)$. Thus $A \subseteq Int(A)$, so $A$ is open, i.e. $A \in \mu$.
\end{proof}
As we said, $\mathcal{F}$-open sets do not have all the properties of open sets, even in Cs\'{a}sz\'{a}r's sense. Nonetheless, the following theorem should be considered useful. In fact, it \textit{will} be useful in our further investigations.
\begin{tw}
\label{eachopen}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf-structure and $w \in W$. Then $\mathcal{F}_{w} \neq \emptyset$ $\Leftrightarrow$ there is $\mathcal{F}$o. set $G \subseteq W$ such that $w \in G$.
\end{tw}
\begin{proof}
($\Rightarrow$)
Since $\mathcal{F}_{w} \neq \emptyset$, there is at least one $A \in \mathcal{F}_{w}$. Of course, $w \in \mathcal{F}Int(A)$. If $A = \mathcal{F}Int(A)$, then we can finish our proof. If not, then it means that $\mathcal{F}Int(A) \nsubseteq A$. Let us define $G$ as $A \cup \mathcal{F}Int(A)$. We show that $G$ is $\mathcal{F}$-open, i.e. that $\mathcal{F}Int(G) = G$.
($\subseteq$) Let $v \in \mathcal{F}Int(G)$. Hence, $v \in \mathcal{F}Int(A \cup \mathcal{F}Int(A))$. Now there is $U \in \mathcal{F}_{v}$ such that $U \subseteq A \cup \mathcal{F}Int(A)$. In fact, it means that $U \subseteq A$ (because $U \subseteq \bigcup \mu$ and $\mathcal{F}Int(A) \cap \bigcup \mu = Int(A) = A$). Thus $v \in \mathcal{F}Int(A)$. From this we infer that $v \in G$.
($\supseteq$) Let $v \in G$. Hence, $v \in A$ or $v \in \mathcal{F}Int(A)$. If $v \in A$, then $A \in \mathcal{F}_{v}$ (because $A \in \mu$). Therefore $v \in \mathcal{F}Int(G)$. If $v \in \mathcal{F}Int(A)$, then there is $U \in \mathcal{F}_{v}$ such that $U \subseteq A \subseteq G$. Hence, $v \in \mathcal{F}Int(G)$.
($\Leftarrow$)
Suppose that $G \subseteq W$ is $\mathcal{F}o.$, $w \in G$ and $\mathcal{F}_{w} = \emptyset$. Of course $\mathcal{F}Int(G) = G$, so $w \in \mathcal{F}Int(G)$. Hence, there is $H \in \mathcal{F}_{w}$ such that $H \subseteq G$. Contradiction.
\end{proof}
Note that in the case of $\Leftarrow$-direction it is enough to assume that $G$ is $u\mathcal{F}$o.
\subsection{$\mathcal{F}$-closures and $\mathcal{F}$-closed sets}
Any sensible definition of "openness" should be dual to a certain understanding of "closedness". We propose the following notion, based on the well-known property of closed (and generalized closed) sets.
\begin{df}
\label{fclosur}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $w \in W$. Assume that $A \subseteq W$. We say that $w \in \mathcal{F}Cl(A) \Leftrightarrow \text{ for any } G \in \mathcal{F}_{w}, G \cap A \neq \emptyset$.
\end{df}
Now we can define $\mathcal{F}$-closed sets:
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $A \subseteq W$. We say that:
\begin{itemize}
\item $A$ is $\mathcal{F}$-closed ($\mathcal{F}$c.) if and only if $\mathcal{F}Cl(A) = A$.
\item d$\mathcal{F}$-closed (d$\mathcal{F}$c.) iff $\mathcal{F}Cl(A) \subseteq A$
\item u$\mathcal{F}$-closed (u$\mathcal{F}$c.) iff $A \subseteq \mathcal{F}Cl(A)$
\end{itemize}
\end{df}
This definition makes sense because it gives us the expected duality:
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf. Assume that $A \subseteq W$ is $\mathcal{F}$-open. Then $-A$ is $\mathcal{F}$-closed.
\end{tw}
\begin{proof}
We know that $\mathcal{F}Int(A) = \{z \in W; \text{ there is } G \in \mathcal{F}_{z} \text{ such that } G \subseteq A\} = A$. Let us consider $-A = \{z \in W; \text{ for each } G \in \mathcal{F}_{z}, G \nsubseteq A\}$. We shall show that $\mathcal{F}Cl(-A) = -A$.
($\subseteq$) Assume that $w \in \mathcal{F}Cl(-A)$. Hence, for any $G \in \mathcal{F}_{w}$, $G \cap -A \neq \emptyset$. Now $G \nsubseteq A$ and for this reason $w \in -A$.
($\supseteq$) Suppose that $w \in -A$ and assume that there is $H \in \mathcal{F}_{w}$ such that $H \cap -A = \emptyset$. It means that $H \subseteq A$. But then $w \in \mathcal{F}Int(A) = A$ which gives us plain contradiction.
\end{proof}
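The theorem can be verified mechanically on a small finite structure. The Python sketch below uses a hypothetical four-point \gtf-structure of our own design: the orphaned point $0$ is associated with $\{1\}$, while the orphaned point $3$ has an empty family (and hence belongs to every $\mathcal{F}$-closure vacuously).

```python
# Hypothetical four-point gtf-structure: 0 and 3 are orphaned,
# F_0 = {{1}} and F_3 is empty (so 3 lies in every F-closure vacuously).
W = {0, 1, 2, 3}
mu = [frozenset(), frozenset({1}), frozenset({2}), frozenset({1, 2})]
F = {0: [frozenset({1})],
     1: [G for G in mu if 1 in G],
     2: [G for G in mu if 2 in G],
     3: []}

def FInt(A):
    return {w for w in W if any(G <= A for G in F[w])}

def FCl(A):
    return {w for w in W if all(G & A for G in F[w])}

A = {0, 1}
print(FInt(A) == A)          # True: A is F-open
print(FCl(W - A) == W - A)   # True: its complement is F-closed
```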
As in the case of interiors, properties of $\mathcal{F}Cl$ are rather weak (when compared to the properties of closures and generalized closures). For example, we may ask if $A \subseteq \mathcal{F}Cl(A)$. The answer is (in general) negative. We may easily imagine the following situation: $A \subseteq W$, $A \cap (W \setminus \bigcup \mu) \neq \emptyset$, $w \in A \cap (W \setminus \bigcup \mu)$ and there is $G \in \mathcal{F}_{w}$ such that $G \cap A = \emptyset$.
On the other hand, it is possible that $\mathcal{F}Cl(A) \nsubseteq A$. It is even easier to imagine that for any $G \in \mathcal{F}_{w}$, $G \cap A \neq \emptyset$ but at the same time $w \notin A$ (we may assume that $w \in W \setminus \bigcup \mu$ and $A \subseteq \bigcup \mu$).
We have the following lemmas (with respect to an arbitrary $\langle W, \mu, \mathcal{F} \rangle$):
\begin{lem}
$\mathcal{F}Cl(\emptyset) = \emptyset \Leftrightarrow \text{ for any } z \in W, \mathcal{F}_{z} \neq \emptyset$
\end{lem}
\begin{proof}
Note that $\mathcal{F}Cl(\emptyset) = \{z \in W; G \cap \emptyset \neq \emptyset \text{ for each } G \in \mathcal{F}_{z}\}$.
($\Rightarrow$) If we assume that there is $z \in W$ such that $\mathcal{F}_{z} = \emptyset$, then we can say anything about an arbitrary "$G \in \mathcal{F}_{z}$". In particular, we can say that "such $G$" has non-empty intersection with $\emptyset$. Hence, $z \in \mathcal{F}Cl(\emptyset)$ and $\mathcal{F}Cl(\emptyset) \neq \emptyset$.
($\Leftarrow$) If $\mathcal{F}Cl(\emptyset)$ is not empty, then there must be at least one $z \in \mathcal{F}Cl(\emptyset)$. Now if we assume that $\mathcal{F}_{z} \neq \emptyset$ (i.e. there is certain $G \in \mathcal{F}_{z}$), then it means that $\emptyset$ forms non-empty intersection with $G$. This is not possible.
\end{proof}
\begin{lem}
$\mathcal{F}Cl(W) = W \Leftrightarrow \text{ for any } z \in W, \emptyset \notin \mathcal{F}_{z}$.
\end{lem}
\begin{proof}
Note that $\mathcal{F}Cl(W) = \{z \in W; G \cap W \neq \emptyset \text{ for each } G \in \mathcal{F}_{z}\}$.
($\Rightarrow$) If the set in question is equal to $W$, then it is impossible that there exists $z \in W$ such that $\emptyset \in \mathcal{F}_{z}$: it would mean that $\emptyset \cap W \neq \emptyset$.
($\Leftarrow$) On the other hand, if there is $z \in W$ such that $\emptyset \in \mathcal{F}_{z}$, then of course $\emptyset \cap W = \emptyset$. Hence, $z \notin \mathcal{F}Cl(W)$.
\end{proof}
\begin{lem}
Suppose that $J \neq \emptyset$ and $\{X_i\}_{i \in J}$ is a family of subsets of $W$. Then $\bigcup_{i \in J} \mathcal{F}Cl(X_i) \subseteq \mathcal{F}Cl(\bigcup_{i \in J} X_i)$. If each $X_i$ is u$\mathcal{F}$c., then $\bigcup_{i \in J} X_{i} \subseteq \mathcal{F}Cl(\bigcup_{i \in J} X_i)$.
\end{lem}
\begin{proof}
Let $v \in \bigcup_{i \in J} \mathcal{F}Cl(X_i)$. It means that there is $k \in J$ such that $v \in \mathcal{F}Cl(X_{k})$. Hence, for any $G \in \mathcal{F}_{v}$, $G \cap X_{k} \neq \emptyset$. Clearly, $X_{k} \subseteq \bigcup_{i \in J} X_i$. Thus $G \cap \bigcup_{i \in J} X_i \neq \emptyset$. For this reason, $v \in \mathcal{F}Cl(\bigcup_{i \in J} X_i)$.
\end{proof}
As in the case of $\mathcal{F}$-interiors, we can find a counterexample to the opposite inclusion.
\begin{przy}
\normalfont{
Let us go back to the \gtf used after Lemma \ref{sumint}, but now assume that $\mathcal{F}_{v}$ contains only two sets, namely the open balls $L_1[(-1, 0), 0.5]$ and $L_2[(3, 0), 0.5]$. Clearly, each of them has non-empty intersection with $K_1 \cup K_2$ (in fact, they are contained in this union), hence $v \in \mathcal{F}Cl(K_1 \cup K_2)$. On the other hand, $v$ is not in $\mathcal{F}Cl(K_1)$ (because $L_2 \cap K_1 = \emptyset$) and is not in $\mathcal{F}Cl(K_2)$ (because $L_1 \cap K_2 = \emptyset$).
}
\end{przy}
The next lemma is about intersections:
\begin{lem}
Suppose that $J \neq \emptyset$ and $\{X_i\}_{i \in J}$ is a family of subsets of $W$. Then $\mathcal{F}Cl(\bigcap_{i \in J} X_i) \subseteq \bigcap_{i \in J} \mathcal{F}Cl(X_i)$. If each $X_i$ is d$\mathcal{F}$c., then $\mathcal{F}Cl(\bigcap_{i \in J} X_i) \subseteq \bigcap_{i \in J} X_i$.
\end{lem}
\begin{proof}
Let $v \in \mathcal{F}Cl(\bigcap_{i \in J}X_i)$. It means that for any $G \in \mathcal{F}_{v}$, $G \cap \bigcap_{i \in J} X_i \neq \emptyset$. Hence, for any $k \in J$, $G \cap X_k \neq \emptyset$. Thus $v \in \mathcal{F}Cl(X_k)$. Finally, $v \in \bigcap_{i \in J} \mathcal{F}Cl(X_i)$.
\end{proof}
As earlier, the converse is not true (in general). Finally, we should prove a quite simple but important lemma:
\begin{lem}
\label{close}
Assume that $\langle W, \mu, \mathcal{F} \rangle$ is a \gtf-structure and $A \subseteq W$. Then $\mathcal{F}Cl(A) \subseteq Cl(A)$.
\end{lem}
\begin{proof}
Let us assume that $w \in \mathcal{F}Cl(A)$. Hence, for any $G \in \mathcal{F}_{w}$, $G \cap A \neq \emptyset$. If $w \in \bigcup \mu$, then the result is obvious: each $G \in \mathcal{F}_{w}$ is just an ordinary generalized topological neighbourhood of $w$. Hence, $w \in Cl(A)$. If $w \in W \setminus \bigcup \mu$, then $w$ belongs to no open set, so $w \notin Int(W \setminus A)$ and thus $w \in Cl(A)$ trivially.
\end{proof}
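The lemma can be checked computationally on finite structures. Below is a minimal sketch; the three-point structure is our own assumption (not taken from the text), with the ordinary interior defined in the standard generalized-topological way. It also exhibits a strict inclusion, as the remark after the figure indicates.

```python
W = {"a", "b", "c"}
mu = [frozenset(), frozenset({"a", "b"})]        # generalized topology; its union is {a, b}
F = {
    "a": [G for G in mu if "a" in G],            # points of the union: open neighbourhoods
    "b": [G for G in mu if "b" in G],
    "c": [frozenset({"a"})],                     # c lies outside the union of mu
}

def f_cl(A):
    """F-closure: points w such that every G in F_w meets A (vacuous if F_w is empty)."""
    return {w for w in W if all(G & A for G in F[w])}

def interior(B):
    """Ordinary generalized-topological interior with respect to mu."""
    return {w for w in W if any(w in G and G <= B for G in mu)}

def cl(A):
    """Ordinary closure: Cl(A) = W minus Int(W minus A)."""
    return W - interior(W - A)

A = {"b"}
assert f_cl(A) <= cl(A)                  # the lemma
print(sorted(f_cl(A)), sorted(cl(A)))    # ['a', 'b'] ['a', 'b', 'c']: strict inclusion
```

The point $c$ witnesses strictness: its only $\mathcal{F}$-set misses $A$, yet no $\mu$-open set separates it from $A$.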
\begin{figure}[h]
\centering
\includegraphics[height=5.5cm]{asso1}
\caption{$w \notin \mathcal{F}Cl(A)$ but $w \in Cl(A)$}
\label{fig:asso1}
\end{figure}
Of course, the converse is not true. We shall not present a detailed counterexample, just a scheme based on Fig. 1. Clearly, $w \in W \setminus Int(W \setminus A) = Cl(A)$ but at the same time there are some sets in $\mathcal{F}_{w}$ which have empty intersection with $A$.
\section{Generalized nets and sequences}
In this section we adhere mostly to the notions introduced and investigated by Baskaran in \cite{baskar} and \cite{seq}. Of course, they are placed in our specific environment. Moreover, we have developed some new definitions and ideas. We have been inspired also by Dev Sarma (see \cite{sarma}) and Palaniammal, Murugalingam (see \cite{filters}).
In the presence of $\mathcal{F}$ we can introduce various definitions of convergence, limit and limit point. The first definition refers to the notion of generalized net and is based on Baskaran's one:
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(P, \geq)$ be a poset. A generalized net (gnet) in $W$ is a function $f: P \rightarrow W$. The image of $\lambda \in P$ under $f$ is denoted by $f_\lambda$ and the whole gnet is denoted as $(f_\lambda)$.
\end{df}
When there is no risk of confusion (e.g. when $P$ may be arbitrary or when we are working only with one gnet), we shall not always write directly that $f$ is a function from $P$ into $W$. It will be known from the context that $P$ in question is connected to the given $(f_\lambda)$ and \emph{vice-versa}.
What may be surprising is the fact that a generalized net has a \textit{partially ordered} domain which is not necessarily directed (directedness was assumed both by \cite{baskar} and \cite{sarma}). For this reason we can also introduce two other notions:
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(P, \geq)$ be a poset. We say that gnet $(f_\lambda)$, $f: P \rightarrow W$, is net if and only if $P$ is directed, i.e. for any two elements $\lambda_1, \lambda_2 \in P$ there is $\lambda_3 \in P$ such that $\lambda_1 \leq \lambda_3$ and $\lambda_2 \leq \lambda_3$. If $P = \mathbb{N}$, then we say that $(f_\lambda)$ is a sequence.
\end{df}
Now we go to the convergence, using $\mathcal{F}$ directly:
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda)$ be a gnet in $W$. We say that:
\begin{itemize}
\item $(f_\lambda)$ is eventually in $U \subseteq W$ iff there is $\lambda_0 \in P$ such that for any $\lambda \geq \lambda_0$, $f_\lambda \in U$.
\item $(f_\lambda)$ converges to $w \in W$ (i.e. $(f_\lambda) \rightarrow w$) iff for any $G \in \mathcal{F}_{w}$, $f_\lambda$ is eventually in $G$. In this case we say that $w$ is a limit of $(f_\lambda)$. We say that $(f_\lambda)$ is convergent if there is $v \in W$ such that $(f_\lambda) \rightarrow v$.
\item $(f_\lambda)$ is frequently in $U$ iff for any $\lambda \in P$ there is $\lambda_1 \in P$ such that $\lambda_1 \geq \lambda$ and $f_{\lambda_1} \in U$. We say that $w$ is a limit point of $(f_\lambda)$ if it is frequently in every $G \in \mathcal{F}_{w}$.
\end{itemize}
\end{df}
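For finite index sets the definitions above are directly executable. The sketch below is a hypothetical demonstration (the structure, the index set and the gnet are our own choices): it also previews the fact, discussed below, that limits need not be unique.

```python
def eventually(f, P, ge, U):
    """(f_lambda) is eventually in U: some lambda0 with f_lambda in U for all lambda >= lambda0."""
    return any(all(f[l] in U for l in P if ge(l, l0)) for l0 in P)

def frequently(f, P, ge, U):
    """(f_lambda) is frequently in U: every lambda admits lambda1 >= lambda with f_lambda1 in U."""
    return all(any(f[l1] in U for l1 in P if ge(l1, l)) for l in P)

def converges_to(f, P, ge, F_target):
    """(f_lambda) -> w: eventually in every G from F_w (vacuously true if F_w is empty)."""
    return all(eventually(f, P, ge, G) for G in F_target)

# hypothetical demo: W = {0, 1}, mu = {emptyset, {0}}, and F_0 = F_1 = [{0}]
F = {0: [frozenset({0})], 1: [frozenset({0})]}
P = [0, 1, 2]                      # a directed (linearly ordered) index set
ge = lambda a, b: a >= b
f = {0: 1, 1: 0, 2: 0}             # gnet equal to 0 from index 1 onwards

assert converges_to(f, P, ge, F[0])   # (f_lambda) -> 0
assert converges_to(f, P, ge, F[1])   # (f_lambda) -> 1 as well: limits need not be unique
```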
Sometimes, when it is useful for clarity, we shall refer to this kind of convergence as $\mathcal{F}$-convergence or $\rightarrow$-convergence. Contrary to the result for \gt (without $\mathcal{F}$), in our environment a constant gnet may fail to be convergent. Let us formulate the following lemma and theorem:
\begin{lem}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda) = (w)$ be a constant gnet in $W$. Then $(w)$ is convergent $\Leftrightarrow$ $(w) \rightarrow w$.
\end{lem}
\begin{proof}
($\Leftarrow$) This is obvious.
($\Rightarrow$) Assume the contrary, i.e. that there is $v \in W, v \neq w$ such that $(w) \rightarrow v$ but $(w) \nrightarrow w$. Hence, for any $G \in \mathcal{F}_{v}$, $w \in G$ (note that we speak about \emph{constant} gnet) but there still is $H \in \mathcal{F}_{w}$ such that $w \notin H$. But if $w$ is in each open neighbourhood of $v$, then $w$ must be in $\bigcup \mu$. Then for any $G \in \mathcal{F}_{w}$, $w \in G$, hence the existence of $H$ is not possible.
\end{proof}
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda) = (w)$ be a constant gnet in $W$. Then $(f_\lambda)$ is convergent $\Leftrightarrow$ $w \in \bigcup \mu$ or $\mathcal{F}_{w} = \emptyset$.
\end{tw}
\begin{proof}
($\Leftarrow$) Assume that $(f_\lambda)$ is not convergent. In particular (by the preceding lemma) it means that $(w) \nrightarrow w$. Hence, there is $G \in \mathcal{F}_{w}$ such that $w \notin G$. Now we have two options. If $w \in \bigcup \mu$, then $w \in G$; this is a contradiction. If $\mathcal{F}_{w} = \emptyset$ (which means, in particular, that $w \in W \setminus \bigcup \mu$), then $w \notin G \subseteq \bigcup \mu$.
($\Rightarrow$) Now $(w)$ is convergent. In particular, it means that $(w) \rightarrow w$. Suppose that $w \notin \bigcup \mu$ and $\mathcal{F}_{w} \neq \emptyset$. But then for any $G \in \mathcal{F}_{w}$, $w \notin G$. Hence, $(f_\lambda) = (w)$ is not eventually in $G$. Contradiction with convergence.
\end{proof}
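The theorem admits an exhaustive check on small structures. For a constant gnet $(w)$, "eventually in $G$" just means $w \in G$, so $(w) \rightarrow v$ iff $w$ belongs to every $G \in \mathcal{F}_{v}$. The structure below is our own assumption, chosen so that every point with empty $\mathcal{F}$ lies where the theorem places it.

```python
W = {0, 1, 2}
mu = [frozenset(), frozenset({0}), frozenset({0, 1})]     # union of mu = {0, 1}
F = {
    0: [G for G in mu if 0 in G],     # inside the union: open neighbourhoods
    1: [G for G in mu if 1 in G],
    2: [frozenset({0})],              # outside the union, with nonempty F_2
}
union_mu = set().union(*mu)

def constant_convergent(w):
    # (w) -> v  iff  w belongs to every G in F_v  (constant gnet)
    return any(all(w in G for G in F[v]) for v in W)

for w in W:
    assert constant_convergent(w) == (w in union_mu or not F[w])
print("constant-gnet theorem verified on this structure")
```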
The next question about constant gnets is: is the limit of a convergent gnet unique? Let us introduce a certain subclass of our structures.
\begin{df}
We say that \gtf-structure $\langle W, \mu, \mathcal{F} \rangle$ is $\mathcal{F}T_1$ $\Leftrightarrow$ for any $w \neq v$ there are $G \in \mathcal{F}_{w}$ such that $v \notin G$ and $H \in \mathcal{F}_{v}$ such that $w \notin H$.
\end{df}
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf-structure. Then the limit of every constant and convergent gnet is unique $\Leftrightarrow$ $\langle W, \mu, \mathcal{F} \rangle$ is $\mathcal{F}T_1$.
\end{tw}
\begin{proof}
($\Rightarrow$) Assume that $(f_\lambda)$ has unique limit $w$. Hence, for any $v \neq w$, $f_\lambda = (w) \nrightarrow v$. Thus there is $H \in \mathcal{F}_{v}$ such that $w \notin H$.
But maybe for any $G \in \mathcal{F}_{w}$, $v \in G$? This would mean that $(v) \rightarrow w$ (by the very definition of convergence). But $(v) \rightarrow v$ and the limits are unique, so $v = w$. Contradiction.
($\Leftarrow$) Suppose that our space is $\mathcal{F}T_1$. Let $w \neq v$, and $(w)$ be a convergent gnet. Then $(w) \rightarrow w$. Assume that at the same time $(w) \rightarrow v$. It means that for any $G \in \mathcal{F}_{v}$, $w \in G$. But this is a contradiction.
\end{proof}
The following theorem is nearly compatible with ($\Rightarrow$) part of Th. 13 in \cite{baskar}. However, we must assume that our gnet $(w)$ is convergent.
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf. Assume that $w, v \in W$, $w \neq v$, $(w)$ is a convergent, constant gnet and $f_\lambda$ may be an arbitrary gnet (in $W$). Then:
$[(f_\lambda) \rightarrow w \Rightarrow (f_\lambda) \rightarrow v]$ $\Rightarrow$ $[w \in \bigcap \mathcal{F}_{v}]$.
\end{tw}
\begin{proof}
Suppose that whenever $(f_\lambda) \rightarrow w$, also $(f_\lambda) \rightarrow v$. Let us consider the constant gnet $w, w, w, ...$. It converges to $w$ but also to $v$. Hence, $w$ is eventually in every $G \in \mathcal{F}_{v}$. This means that $w \in \bigcap \mathcal{F}_{v}$.
\end{proof}
In the next theorem we do not need to assume that $(w)$ is convergent. This thesis is just like ($\Leftarrow$) from the aforementioned theorem.
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf. Assume that $w, v \in W$, $w \neq v$ and $(f_\lambda)$ is an arbitrary gnet in $W$. Then:
$[w \in \bigcap \mathcal{F}_{v}]$ $\Rightarrow$ $[(f_\lambda) \rightarrow w \Rightarrow (f_\lambda) \rightarrow v]$
\end{tw}
\begin{proof}
Suppose that $w \in \bigcap \mathcal{F}_{v}$. It allows us to say that any $H \in \mathcal{F}_{v}$ is also in $\mathcal{F}_{w}$. Hence, $\mathcal{F}_{v} \subseteq \mathcal{F}_{w}$. Now assume that $(f_\lambda) \rightarrow w$. Thus, $(f_\lambda)$ is eventually in every $G \in \mathcal{F}_{w}$. In particular, it is in every $G \in \mathcal{F}_{v}$. Clearly, this means that $(f_\lambda) \rightarrow v$.
\end{proof}
The next lemma is an interesting and useful observation.
\begin{lem}
\label{pusty}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda)$ be a gnet. If $(f_\lambda) \rightarrow w \in W$, then $\emptyset \notin \mathcal{F}_{w}$.
\end{lem}
\begin{proof}
Assume that $\emptyset \in \mathcal{F}_{w}$. By convergence, we know that for any $G \in \mathcal{F}_{w}$, so also for $\emptyset$, there is $\lambda_0 \in P$ such that for each $\lambda \geq \lambda_0$, $f_\lambda \in \emptyset$. This is impossible.
\end{proof}
The last lemma in this section is a modification of Th. 2.6. in \cite{baskar}.
\begin{lem}
\label{max}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $f: P \rightarrow W$ be a gnet in $W$. Assume that $m$ is a maximal element of $P$ and $f_{m} \in \bigcup \mu$ or $\mathcal{F}_{f_{m}} = \emptyset$. Then $(f_\lambda) \rightarrow f_{m}$.
\end{lem}
\begin{proof}
If $f_{m} \in \bigcup \mu$, then let us consider an arbitrary $G \in \mathcal{F}_{f_{m}}$. Of course, $f_{m} \in G$ and $f_\lambda \in G$ for each $\lambda \geq m$. The reason is that $\lambda \geq m$ implies $\lambda = m$. We conclude that $f_\lambda \rightarrow f_{m}$.
If $\mathcal{F}_{f_{m}}$ is empty, then our result is trivial.
\end{proof}
In the original Baskaran's theorem there was no need to assume anything special about $f_m$. However, it should be clear for the reader that our assumptions are important. Without them it would be easy to imagine the following situation: that we construct $(f_\lambda)$ in such a way that $f_{m}$ is somewhere beyond $\bigcup \mu$ and there is at least one $G \in \mathcal{F}_{f_{m}}$. Then for any $\lambda \in P$ there is $\lambda_0 \geq \lambda$ (namely $m$) such that $f_{\lambda_0} = f_{m} \notin G$.
\section{The higher level of convergence}
We have already proved that each point of $W$ is contained in a certain $\mathcal{F}$-open neighbourhood (provided that $\mathcal{F}_{w} \neq \emptyset$). This observation leads us to the second understanding of convergence.
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf-structure. Assume that $w \in W$. We define $\mathcal{E}_{w}$ as the set of all $\mathcal{F}$-open sets to which $w$ belongs.
\end{df}
As we know from Th. \ref{eachopen}, $\mathcal{E}_{w} = \emptyset \Leftrightarrow \mathcal{F}_{w} = \emptyset$. Let us go back to the \gtf-structure from Ex. \ref{zet}.
\begin{przy}
\label{zetdwa}
\normalfont{
Recall that basically we are working with $\langle \mathbb{Z}, \mu, \mathcal{F} \rangle$ where $\mu = \{ \emptyset, \{1\}, \{1, 3\}, \{1, 3, 5\}, \{1, 3, 5, 7\}, ... \}$ and $\mathcal{F}_{x} = \emptyset$ for any $x \in 2 \mathbb{Z}$ (if $x$ is odd, then $\mathcal{F}_{x}$ is just the collection of its open neighbourhoods; this comes from the general suppositions).
Assume that $m$ is an odd integer. Consider an arbitrary $G \in \mathcal{F}_{m}$. Note that for any $n \in 2 \mathbb{Z}$, $\mathcal{F}_{n} = \emptyset$. For this reason, $\mathcal{F}Int(G) \subseteq G$, hence (by means of Lemma \ref{amu} and the fact that $G \in \mu$) $G$ is $\mathcal{F}$-open. Of course $m \in G$ (because $m \in \bigcup \mu$), so $G \in \mathcal{E}_{m}$.
On the other hand, assume that there is $H \in \mathcal{E}_{m}$ such that $H \notin \mathcal{F}_{m}$. It means that $m \notin H$ (contradiction) or that $H \notin \mu$. If $H \cap [W \setminus \bigcup \mu] \neq \emptyset$ then we have a contradiction again: if there is any $n \in H \cap [W \setminus \bigcup \mu]$ and $H$ is $\mathcal{F}$-open, then it means that $n \in \mathcal{F}Int(H)$, so $\mathcal{F}_{n} \neq \emptyset$. This is not possible (because $n$ is even).
Hence, $H = \mathcal{F}Int(H) \subseteq \bigcup \mu$. All the assumptions of Lemma \ref{uopen} are satisfied. Thus, $H \in \mu$. Of course, $m \in H$. It means that $H \in \mathcal{F}_{m}$. Finally, in this case $\mathcal{F}_{m} = \mathcal{E}_{m}$.
}
\end{przy}
Note that the reasoning presented above is in fact general. Hence, we can formulate the following conclusion:
\begin{tw}
Assume that $\langle W, \mu, \mathcal{F} \rangle$ is a \gtf-structure and $\mathcal{F}_{w} = \emptyset$ for each $w \in W \setminus \bigcup \mu$. Then for any $v \in \bigcup \mu$, $\mathcal{F}_{v} = \mathcal{E}_{v}$. Moreover, this result is true also for any $w \in W \setminus \bigcup \mu$: $\mathcal{F}_{w} = \mathcal{E}_{w} = \emptyset$.
\end{tw}
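A finite truncation of the example makes this theorem machine-checkable. In the sketch below we assume (as an assumption of ours, mirroring the later definition of $\mathcal{E}$-open sets) that $A$ is $\mathcal{F}$-open iff $\mathcal{F}Int(A) = A$; the truncated universe $\{1, \ldots, 8\}$ stands in for $\mathbb{Z}$.

```python
from itertools import chain, combinations

W = set(range(1, 9))                      # finite stand-in for the integers of the example
mu = [frozenset()] + [frozenset(range(1, m + 2, 2)) for m in (1, 3, 5, 7)]
F = {x: ([] if x % 2 == 0 else [G for G in mu if x in G]) for x in W}

def f_int(A):
    # w is in FInt(A) iff some G in F_w is contained in A (impossible when F_w is empty)
    return {w for w in W if any(G <= A for G in F[w])}

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

f_open = [A for A in powerset(W) if f_int(A) == set(A)]       # assumed F-open criterion
E = {w: [A for A in f_open if w in A] for w in W}

assert set(f_open) == set(mu)                                  # F-open sets are exactly mu
assert all(set(E[m]) == set(F[m]) for m in (1, 3, 5, 7))       # E_m = F_m for odd m
assert all(E[n] == [] == F[n] for n in (2, 4, 6, 8))           # both empty for even n
print("F_v = E_v verified on the truncated example")
```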
Now we can go further:
\begin{df}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda)$ be a gnet in $W$. We say that:
\begin{itemize}
\item $(f_\lambda)$ $\mathcal{E}$-converges to $w \in W$ (i.e. $(f_\lambda) \rightarrow^{\mathcal{E}} w$) iff for any $G \in \mathcal{E}_{w}$, $f_\lambda$ is eventually in $G$. In this case we say that $w$ is an $\mathcal{E}$-limit of $(f_\lambda)$. We say that $(f_\lambda)$ is $\mathcal{E}$-convergent if there is $v \in W$ such that $(f_\lambda) \rightarrow^{\mathcal{E}} v$.
\item We say that $w$ is an $\mathcal{E}$-limit point of $(f_\lambda)$ if it is frequently in every $G \in \mathcal{E}_{w}$.
\end{itemize}
\end{df}
What are the properties of such convergence? Let us start from constant gnets.
\begin{lem}
Each constant gnet in any \gtf-structure $\langle W, \mu, \mathcal{F} \rangle$ is $\mathcal{E}$-convergent.
\end{lem}
\begin{proof}
Let us consider $(f_\lambda) = (w)$. Suppose that for any $v \in W$, $(w) \nrightarrow^{\mathcal{E}} v$. Hence, for any $v \in W$ there is $S \in \mathcal{E}_{v}$ such that $w \notin S$. In particular, this is true for $v = w$. Thus, there is $S \in \mathcal{E}_{w}$ such that $w \notin S$. This is impossible because of the very definition of $\mathcal{E}_{w}$.
\end{proof}
\begin{lem}
\label{econv}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda) = (w)$ be a constant gnet in $W$. Then $(w)$ is $\mathcal{E}$-convergent $\Leftrightarrow$ $(w) \rightarrow^{\mathcal{E}} w$.
\end{lem}
\begin{proof}
Assume that $(w) \nrightarrow^{\mathcal{E}} w$. It means that there is $S \in \mathcal{E}_{w}$ such that $w \notin S$. This is a contradiction.
\end{proof}
Now we introduce the notion of $\mathcal{E}T_1$-spaces.
\begin{df}
We say that \gtf-structure $\langle W, \mu, \mathcal{F} \rangle$ is $\mathcal{E}T_1$ $\Leftrightarrow$ for any $w \neq v$ there are $G \in \mathcal{E}_{w}$ such that $v \notin G$ and $H \in \mathcal{E}_{v}$ such that $w \notin H$.
\end{df}
We can prove the following theorem about uniqueness:
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf-structure. Then the $\mathcal{E}$-limit of every constant gnet is unique $\Leftrightarrow$ $\langle W, \mu, \mathcal{F} \rangle$ is $\mathcal{E}T_1$.
\end{tw}
\begin{proof}
($\Rightarrow$)
Suppose that $(w)$ is $\mathcal{E}$-convergent. We may assume that $(w) \rightarrow^{\mathcal{E}} w$. For any $v \neq w$, $(w) \nrightarrow^{\mathcal{E}} v$, i.e. there is $H \in \mathcal{E}_{v}$ such that $w \notin H$.
But maybe for any $S \in \mathcal{E}_{w}$, $v \in S$? Let us consider constant gnet $(v)$. Then $(v) \rightarrow^{\mathcal{E}} v$. But then $(v) \nrightarrow^{\mathcal{E}} w$. Hence, there must be $G \in \mathcal{E}_{w}$ such that $v \notin G$.
($\Leftarrow$)
Assume that there is constant gnet $(w)$ with two different $\mathcal{E}$-limits, i.e. $(w) \rightarrow^{\mathcal{E}} w$ and $(w) \rightarrow^{\mathcal{E}} v \neq w$. It means that for any $S \in \mathcal{E}_{v}$, $w \in S$. Contradiction.
\end{proof}
Below we prove certain connection between convergence and $\mathcal{E}$-convergence.
\begin{tw}
\label{conv}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $(f_\lambda)$ be a gnet (we assume that $f: P \rightarrow W$). If $(f_\lambda) \rightarrow w$, then $(f_\lambda) \rightarrow^{\mathcal{E}} w$.
\end{tw}
\begin{proof}
Suppose that $(f_\lambda) \nrightarrow^{\mathcal{E}} w$. Then there is $S \in \mathcal{E}_{w}$ such that for any $\lambda \in P$ there exists $\lambda_1 \geq \lambda$ for which $f_{\lambda_1} \notin S$.
We know that $S \neq \emptyset$ (because $S \in \mathcal{E}_{w}$, so $w \in S$). Moreover, $S$ is $\mathcal{F}$-open, so $w \in \mathcal{F}Int(S)$. Hence, there is $H \in \mathcal{F}_{w}$ such that $H \subseteq S$. Recall that $(f_\lambda) \rightarrow w$, so there is $\lambda_0 \in P$ such that for any $\lambda \geq \lambda_0$, $f_\lambda \in H \subseteq S$. Contradiction.
\end{proof}
Let us go back to the \gtf-structure from Ex. \ref{zet}. Our considerations can be compared with Ex. 3.1. in \cite{baskar}.
\begin{przy}
\label{zettrzy}
\normalfont{
As earlier, we are working with $\langle \mathbb{Z}, \mu, \mathcal{F} \rangle$ where $\mu = \{ \emptyset, \{1\}, \{1, 3\}, \{1, 3, 5\}, \{1, 3, 5, 7\}, ... \}$ and $\mathcal{F}_{n} = \emptyset$ for any $n \in 2 \mathbb{Z}$.
Assume that $(P, \leq)$ is a poset, where $P = 2^{\mathbb{Z}} \setminus \{\emptyset\}$ and if $A, B \in P$, then $A \leq B \Leftrightarrow B \subseteq A$. Let us define $f: P \rightarrow \mathbb{Z}$ by $f(A) \in A$ (i.e. we require only this condition). Such $f$ is a gnet.
Let us discuss $\rightarrow$-convergence and $\mathcal{E}$-convergence. Assume that $m$ is an odd integer. Then $\mathcal{F}_{m} = \{ \{1, 3, ..., m\}, \{1, 3, ..., m, m+2 \}, \{1, 3, ..., m, m+2, m+4\}, ...\}$. If $G \in \mathcal{F}_{m}$, then for every $A \subseteq G$, $f(A) \in A \subseteq G$. In particular, it means that $(f_\lambda) \rightarrow m$ ($G$ itself plays the role of $\lambda_0$ in the general definition). Hence, we can say that every odd integer $m$ is a limit of $f$. Note that $m$ is \emph{not} a limit point of $f$, because for any $G \in \mathcal{F}_{m}$ we can take $B = \{2, 4, 6, ..., m-1\}$. Now $f(A) \notin G$ for any $A \subseteq B$.
Now consider $n \in 2 \mathbb{Z}$. We have assumed that in this case $\mathcal{F}_{n} = \emptyset$, hence we can immediately say that $(f_\lambda) \rightarrow n$. We see that $(f_\lambda)$ converges to every even integer.
Now let us think about $\mathcal{G}$, which can be just like $\mathcal{F}'$, $\mathcal{F}''$ or $\mathcal{F}'''$ in Ex. \ref{zet}. The case of odd numbers is without changes. As for the $n \in 2 \mathbb{Z}$, assume that $G \in \mathcal{G}_{n}$ and $A \subseteq G$. Then $f(A) \in A \subseteq G$. In fact, this reasoning is identical with the one for odd integers.
By means of Th. \ref{conv} we can say that in each case, both for odd and even integers, $(f_\lambda)$ is $\mathcal{E}$-convergent to each number.
}
\end{przy}
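The example can be verified computationally on a finite stand-in. In the sketch below we truncate $\mathbb{Z}$ to $\{1, \ldots, 6\}$ and pick $f(A) = \min(A)$, which is one particular admissible choice satisfying $f(A) \in A$; both choices are our own assumptions.

```python
from itertools import combinations

W = list(range(1, 7))                           # finite stand-in for Z
mu = [frozenset(), frozenset({1}), frozenset({1, 3}), frozenset({1, 3, 5})]
F = {x: ([] if x % 2 == 0 else [G for G in mu if x in G]) for x in W}

# P: nonempty subsets of W with A >= B iff A is a subset of B (reverse inclusion);
# f(A) = min(A) is one particular choice with f(A) in A
P = [frozenset(c) for r in range(1, len(W) + 1) for c in combinations(W, r)]
f = {A: min(A) for A in P}
ge = lambda A, B: A <= B                        # "A >= B" in the poset sense

def eventually(U):
    return any(all(f[A] in U for A in P if ge(A, A0)) for A0 in P)

def frequently(U):
    return all(any(f[A1] in U for A1 in P if ge(A1, A)) for A in P)

m = 3
assert all(eventually(G) for G in F[m])         # (f_lambda) -> 3
assert not all(frequently(G) for G in F[m])     # yet 3 is not a limit point
assert all(eventually(G) for G in F[2])         # vacuously -> every even integer
```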
We can easily prove that the converse of Th. \ref{conv} is not true.
\begin{przy}
\normalfont{
Let us consider very simple \gtf-structure: $\langle W, \mu, \mathcal{F} \rangle$, where $W = \{w, v\}$, $\mu = \{ \emptyset, \{w\} \}$ and $\mathcal{F}_{v} = \{ \{w\} \}$. Then the set $\{w, v\} = W$ is $\mathcal{F}$-open and it is the only element of $\mathcal{E}_{v}$ (note that $\mathcal{F}Int(\{v\}) = \emptyset$). Now let us think about constant gnet $(f_\lambda) = (v)$ (connected to an arbitrary $P$). Undoubtedly, $(v) \rightarrow^{\mathcal{E}} v$. Note, however, that $(v) \nrightarrow v$ because there is $G \in \mathcal{F}_{v}$, namely $\{w\}$ such that $v \notin G$.
}
\end{przy}
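This two-point counterexample is small enough to verify by enumeration. As before, we assume (our own reading, mirroring the $\mathcal{E}$-open definition) that $A$ is $\mathcal{F}$-open iff $\mathcal{F}Int(A) = A$; with that assumption the computation reproduces the claims of the example.

```python
W = {"w", "v"}
mu = [frozenset(), frozenset({"w"})]
F = {"w": [frozenset({"w"})],                 # open neighbourhoods of w
     "v": [frozenset({"w"})]}                 # as in the counterexample

def f_int(A):
    return {z for z in W if any(G <= A for G in F[z])}

subsets = [frozenset(), frozenset({"w"}), frozenset({"v"}), frozenset({"w", "v"})]
f_open = [A for A in subsets if f_int(A) == set(A)]   # assumed F-open criterion
E_v = [A for A in f_open if "v" in A]

assert f_int({"v"}) == set()                  # FInt({v}) is empty, as stated
assert E_v == [frozenset({"w", "v"})]         # the whole space is the only member of E_v

# (v) -> v fails: {w} in F_v does not contain v ...
assert any("v" not in G for G in F["v"])
# ... yet (v) ->^E v holds: v lies in every S in E_v
assert all("v" in S for S in E_v)
```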
We can also reformulate Lemma \ref{max}. Now we do not need any special assumptions about $f_m$.
\begin{lem}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf and $f: P \rightarrow W$ be a gnet in $W$. Assume that $m$ is a maximal element of $P$. Then $(f_\lambda) \rightarrow^{\mathcal{E}} f_{m}$.
\end{lem}
\begin{proof}
If $\mathcal{E}_{f_{m}} = \emptyset$, then the result is trivial. If not, then consider $S \in \mathcal{E}_{f_{m}}$. Clearly, $f_{m} \in S$ and $f_\lambda \in S$ for any $\lambda \geq m$ (because in such a case $\lambda = m$).
\end{proof}
\section{Gnets and the question of closure}
There is a strict dependence between closures and gnets in generalized topology. It has been proven by Baskaran in \cite{baskar} that if $\emptyset \neq A \subseteq W$ and $w \in W$, then $w \in Cl(A)$ $\Leftrightarrow$ there is a gnet $(f_\lambda)$ in $A$ (i.e. with its values in $A$) converging to $w$. However, Baskaran considered the notion of convergence based on typical open neighbourhoods. Hence, he assumed that each point is in each of its neighbourhoods. Clearly, in our case this is false (for all points from $W \setminus \bigcup \mu$). For this reason, we formulate the following dependence:
\begin{tw}
Assume that $\langle W, \mu, \mathcal{F} \rangle$ is a \gtf-structure, $\emptyset \neq A \subseteq W$. Then:
$w \in \mathcal{F}Cl(A)$ $\Leftrightarrow$ there is a gnet $(f_\lambda)$ in $A$ such that $(f_\lambda) \rightarrow w$.
\end{tw}
\begin{proof}
($\Rightarrow$) Assume that $w \in \mathcal{F}Cl(A)$. There are two possibilities. First, $\mathcal{F}_{w} = \emptyset$. In this case we may assume that $P = 2^{W} \setminus \{ \emptyset \}$ and $C \geq D \Leftrightarrow C \subseteq D$. We define $f: P \rightarrow W$ in such a way that $f(C) \in A$. Clearly, $(f_\lambda)$ becomes a gnet in $A$ and moreover $(f_\lambda) \rightarrow w$ (because there are no sets in $\mathcal{F}_{w}$, so the convergence condition holds vacuously).
Second option is that $\mathcal{F}_{w} \neq \emptyset$. Here we assume that $P = \mathcal{F}_{w}$. As for the $\geq$, it is defined as above. Note that (from the very definition of $\mathcal{F}Cl(A)$) for any $G \in \mathcal{F}_{w}$, $G \cap A \neq \emptyset$. Then assume \footnote{This reasoning is based on the one for ordinary generalized neighbourhoods, presented in \cite{baskar}. However, there is a mistake there (probably a typo). The author assumed only that $f(G) \in G$ (we use our notation). Clearly, we must assume that our gnet is in the (nonempty) intersection of neighbourhood and the set $A$.} that $f(G) \in G \cap A$ for any $G \in \mathcal{F}_{w}$. Then $(f_\lambda)$ is a gnet in $A$ and for any $G \in \mathcal{F}_{w}$ our gnet is eventually in $G$, i.e. $(f_\lambda) \rightarrow w$.
($\Leftarrow$) Assume that there is a gnet $(f_\lambda)$ in $A$ such that $(f_\lambda) \rightarrow w$. Hence, for any $G \in \mathcal{F}_{w}$, $(f_\lambda)$ is eventually in $G$, which means that for any $G \in \mathcal{F}_{w}$ there is $\lambda_0$ such that for any $\lambda \geq \lambda_0$, $f_\lambda \in G$. But for any $\lambda$, $f_\lambda \in A$. Hence, for any $G \in \mathcal{F}_{w}$, $G \cap A \neq \emptyset$. Thus $w \in \mathcal{F}Cl(A)$. Moreover, due to the Lemma \ref{close}, $w \in Cl(A)$.
\end{proof}
Is it possible to replace $\rightarrow$-convergence by $\rightarrow^{\mathcal{E}}$-convergence? Of course, if $w \in \mathcal{F}Cl(A)$, then we can find our expected $\rightarrow$-convergent gnet, as has been shown above, and this gnet is (by means of Th. \ref{conv}) $\rightarrow^{\mathcal{E}}$-convergent. But the converse is not true. Let us think about the following (counter)-example.
\begin{przy}
\normalfont{
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf-structure where $W = \{w, v, u\}$, $\mu = \{\emptyset, \{w\}\}$, $\mathcal{F}_{v} = \{ \{w\}, \{u\} \}$, $\mathcal{F}_{u} = \emptyset$. Of course, $\mathcal{F}_{w} = \{ \{w \} \}$. Consider $A = \{v \}$ and the constant gnet $(v)$. Clearly, $(v)$ is a gnet in $A$. It is $\rightarrow^{\mathcal{E}}$-convergent (at least to $v$).
Now $v \notin \mathcal{F}Cl(A)$ because there is $G = \{u\} \in \mathcal{F}_{v}$ such that $G \cap \{v\} = G \cap A = \{u\} \cap \{v\} = \emptyset$. In fact, $\mathcal{F}Cl(A) = \mathcal{F}Cl(\{v\}) = \{z \in W; \text{ for any } G \in \mathcal{F}_{z}, G \cap \{v\} \neq \emptyset\} = \{u\}$ (because there are no sets in $\mathcal{F}_{u}$).
}
\end{przy}
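The computation of $\mathcal{F}Cl(\{v\})$ in this example can be reproduced mechanically; the sketch below follows the structure as given in the text.

```python
W = {"w", "v", "u"}
F = {"w": [frozenset({"w"})],
     "v": [frozenset({"w"}), frozenset({"u"})],
     "u": []}                                  # F_u is empty

def f_cl(A):
    # z such that every G in F_z meets A; vacuously true when F_z is empty
    return {z for z in W if all(G & set(A) for G in F[z])}

assert f_cl({"v"}) == {"u"}        # as computed in the example
assert "v" not in f_cl({"v"})      # in particular, v is outside the F-closure of {v}
```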
We have introduced $\mathcal{F}$-interiors (closures) to speak later about $\mathcal{F}$-open (closed) sets. Then we have discussed the set $\mathcal{E}_{w}$ for an arbitrary $w$. One could ask: does it make sense to lift these notions to an even higher level? We wish to treat this issue very briefly.
\begin{df}
Assume that $\langle W, \mu, \mathcal{F} \rangle$ is a \gtf-structure and $A \subseteq W$. We say that:
\begin{itemize}
\item $w \in \mathcal{E}Int(A)$ $\Leftrightarrow$ there is $S \in \mathcal{E}_{w}$ such that $S \subseteq A$.
\item $w \in \mathcal{E}Cl(A)$ $\Leftrightarrow$ for any $S \in \mathcal{E}_{w}$, $S \cap A \neq \emptyset$.
\end{itemize}
We say that $A$ is $\mathcal{E}$-open if and only if $\mathcal{E}Int(A) = A$ and that $A$ is $\mathcal{E}$-closed if and only if $\mathcal{E}Cl(A) = A$.
\end{df}
Although at this stage of research such definitions may seem somewhat artificial, there is at least one interesting thing to note.
\begin{tw}
Let $\langle W, \mu, \mathcal{F} \rangle$ be a \gtf-structure and let $\mathcal{EO}$ be the collection of all $\mathcal{E}$-open sets (with respect to $\mu$ and $\mathcal{F}$). Then $\mathcal{EO}$ forms a generalized topology on $W$. If for any $w \in W$, $\mathcal{F}_{w} \neq \emptyset$, then $\mathcal{EO}$ is a strong generalized topology.
\end{tw}
\begin{proof}
First, let us prove that $\emptyset$ is $\mathcal{E}$-open. Compute: $\mathcal{E}Int(\emptyset) = \{z \in W; \text{ there is } S \in \mathcal{E}_{z} \text { such that } S \subseteq \emptyset\} = \emptyset$. This is because the only set contained in $\emptyset$ is the empty set itself, but for any $S \in \mathcal{E}_{z}$, $z \in S$, hence $S \neq \emptyset$.
Assume now that $J \neq \emptyset$ and for any $i \in J$, $X_{i}$ is $\mathcal{E}$-open. Then $\bigcup_{i \in J} X_{i}$ is also $\mathcal{E}$-open, i.e. $\mathcal{E}Int(\bigcup_{i \in J} X_i) = \bigcup_{i \in J} X_{i}$.
($\subseteq$) Let $w \in \mathcal{E}Int(\bigcup_{i \in J} X_i)$. Hence, there is $S \in \mathcal{E}_{w}$ such that $S \subseteq \bigcup_{i \in J} X_i$. But $w \in S$. Hence $w \in \bigcup_{i \in J} X_i$.
($\supseteq$) Let $w \in \bigcup_{i \in J} X_i$. Then there is $X_k$ such that $w \in X_k$. $X_k$ is $\mathcal{E}$-open, so $w \in \mathcal{E}Int(X_k)$. Thus there is $S \in \mathcal{E}_{w}$ such that $S \subseteq X_k \subseteq \bigcup_{i \in J} X_i$. Hence, $w \in \mathcal{E}Int(\bigcup_{i \in J}X_i)$.
Now assume that $\mathcal{F}_{w} \neq \emptyset$ for any $w \in W$. Then $\mathcal{E}_{w} \neq \emptyset$. Hence, $\mathcal{E}Int(W) = \{z \in W; \text{ there is } S \in \mathcal{E}_{z} \text{ such that } S \subseteq W\} = W$.
\end{proof}
Moreover, it is always true that $\mathcal{E}Int(\bigcap_{i \in J} X_i) \subseteq \bigcap_{i \in J} X_i$ (even if none of the sets indexed by $J$ is $\mathcal{E}$-open). Assume that $w \in \mathcal{E}Int(\bigcap_{i \in J}X_i)$. Hence, there is $S \in \mathcal{E}_{w}$ such that $S \subseteq \bigcap_{i \in J} X_i$. Hence, for any $X_i$ we can say that $S \subseteq X_i$. But by the very definition of $\mathcal{E}_{w}$, $w \in S$. So $w$ is in each $X_i$, i.e. it is in $\bigcap_{i \in J} X_i$.
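Both claims (the generalized-topology structure of $\mathcal{EO}$ and the intersection inclusion) can be confirmed by enumeration on a finite structure. In the sketch below the families $\mathcal{E}_{w}$ are treated abstractly as given data (our own example, with $\mathcal{E}_{2} = \emptyset$, so the resulting generalized topology is not strong).

```python
from itertools import chain, combinations

W = {0, 1, 2}
# E_w given abstractly; each S in E_w contains w, as the definition guarantees
E = {0: [frozenset({0}), frozenset({0, 1})],
     1: [frozenset({0, 1})],
     2: []}

def e_int(A):
    return {w for w in W if any(S <= A for S in E[w])}

def powerset(s):
    s = sorted(s)
    return [frozenset(c) for c in chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))]

e_open = {A for A in powerset(W) if e_int(A) == set(A)}

assert frozenset() in e_open                       # the empty set is E-open
for A in e_open:                                   # closed under (finite, hence all) unions
    for B in e_open:
        assert A | B in e_open
assert frozenset(W) not in e_open                  # E_2 empty: not a strong gen. topology
for A in powerset(W):                              # EInt of intersections sits inside them
    for B in powerset(W):
        assert e_int(A & B) <= (A & B)
```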
\section{Further investigations}
We would like to investigate the structures and functions presented above. It would be cognitively valuable to establish analogues of various topological notions (like density, nowhere density, connectedness, continuity etc.) both in the context of $\mu$-topology with $\mathcal{F}$ and in the context of $\mathcal{E}$ (i.e. $\mathcal{F}$-open and $\mathcal{E}$-open sets). It would be very natural to impose certain restrictions on $\mathcal{F}$ (that, for example, it is closed under supersets or unions). Logical applications of our new notions are also interesting: in the presence of $\mathcal{F}$ we can discuss two types of possible worlds; those in $\bigcup \mu$ and those in $W \setminus \bigcup \mu$. Their logical "strength" (in terms of validating given formulas and rules) is different.
\section{Introduction}
Mechanical thrombectomy \cite{powers20152015,berkhemer2015randomized} is guided by an interventional C-arm system capable of 3-D imaging. Although its soft-tissue contrast is comparable to that of helical CT scans, the prolonged acquisition time renders C-arm CBCT more susceptible to rigid head motion artifacts \cite{leyhe2017latest}. In the clinical workflow, however, it is desirable to reduce the time-to-therapy by avoiding prior patient transfers to helical CT or MR scanners \cite{psychogios2017one}. To this end, robust motion compensation methods are needed.
Methods for rigid motion compensation can be clustered into four categories: 1) image-based autofocus \cite{sisniega2017motion,wicklein2012image}, 2) registration-based \cite{Ouadah2016}, 3) consistency-based \cite{Frysch2015,preuhs2018double,preuhs2019symmetry} and 4) data-driven \cite{bier2018detecting,latif2018automating,kustner2019retrospective}.
Recent data-driven approaches use image-to-image translation methods based on GANs \cite{latif2018automating,kustner2019retrospective} or aim to estimate anatomical landmarks in order to minimize a \textit{reprojection error} (RPE) \cite{bier2018detecting}.
The latter approach does not provide the required accuracy, whereas GAN-based approaches are deceptive for clinical applications, as data integrity cannot be assured \cite{huang2019data}.
We propose a learning-based approach for rigid motion compensation ensuring data integrity.
An image-based autofocus method is introduced, where a regression network predicts the RPE directly from reconstructed slice images.
The motion parameters are found by iteratively minimizing the predicted RPE using the Nelder-Mead simplex method \cite{olsson1975nelder}.
\section{Motion Estimation and Compensation Framework}
\paragraph{Autofocus Framework:}
Rigid motion is compensated by estimating a motion trajectory $\mathcal{M}$ which samples the motion at each of the $N$ acquired views within the trajectory \cite{Kim2014}. $\mathcal{M}$ contains the motion matrices $\vec{M}_i$, where each motion matrix $\vec{M}_i \in \mathbb{SE}(3)$~---~with $\mathbb{SE}(3)$ being the special Euclidean group~---~describes the patient movement at view $i \in [1,N]$.
The motion matrices can be incorporated in the backprojection operator of a \textit{filtered backprojection}-type (FBP) reconstruction algorithm.
We denote the reconstructed image in dependence of the motion trajectory by FBP$_y(\mathcal{M})$, where FBP$_y$ is the FDK-reconstruction \cite{feldkamp1984practical} from projection data $y$.
In the following, FBP$_y$ will reconstruct the central slice on a $512^2$ pixel grid using a sharp filter kernel to emphasize motion artifacts.
Typical autofocus frameworks (cf.~\cite{sisniega2017motion}) estimate the motion trajectory based on an \textit{image quality metric} (IQM) evaluated on the reconstructed image by minimizing
\begin{equation}
\argmin_\mathcal{M} \enspace\text{IQM}(\text{FBP}_y(\mathcal{M})) \enspace.
\label{eq:IQM}
\end{equation}
A common problem in solving \eqref{eq:IQM} is the non-convexity of the IQM, which is typically chosen to be the image histogram entropy or the total variation of the reconstructed slice. To overcome this limitation, we propose to replace the IQM by a network architecture that is trained to regress the RPE, which was shown to be quasi-convex for a geometric reconstruction problem \cite{ke2007quasiconvex}.
\paragraph{Learning to Assess Image Quality:}
\label{sec:regression}
Let $\mathcal{X}$ be a set of \mbox{3-D} points $\vec{x} \in \mathbb{P}^3$ uniformly sampled from a sphere surface and let the acquisition trajectory associated to a dataset $y$ be defined by projection matrices $\vec{P}_i \in \mathbb{R}^{3\times4}$ mapping world points on the detector of a CBCT system at view $i$ \cite{hartley2003multiple}, then the RPE is computed as
\begin{equation}
\text{RPE}(\mathcal{M}) =\frac{1}{|\mathcal{X}|N} \sum_{\vec{x} \in \mathcal{X}} \sum_{i=1}^{N}||\vec{P_i}\vec{M_i}\vec{x} - \vec{P_i}\vec{x}||_2^2 \enspace.
\label{eq:rpe}
\end{equation}
This metric measures the reconstruction-relevant deviations induced by motion \cite{strobel2003improving} and can thus be expected to be estimated directly from the reconstruction images.
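The RPE of Eq. \eqref{eq:rpe} is straightforward to implement. The sketch below uses a toy geometry of our own (random unit-sphere points and simple projection matrices $P = [I \,|\, t]$, not the clinical calibration), and makes the dehomogenization to detector pixel coordinates explicit, which the homogeneous shorthand of the equation leaves implicit.

```python
import numpy as np

def reprojection_error(P_list, M_list, points_h):
    """Mean squared detector displacement induced by the motion (cf. Eq. (2)).
    P_list: N projection matrices (3x4); M_list: N rigid motion matrices (4x4);
    points_h: homogeneous 3-D points, shape (K, 4)."""
    errs = []
    for P, M in zip(P_list, M_list):
        a = P @ M @ points_h.T            # 3 x K, projections of the moved points
        b = P @ points_h.T                # 3 x K, projections of the static points
        a2, b2 = a[:2] / a[2], b[:2] / b[2]   # dehomogenise to pixel coordinates
        errs.append(np.sum((a2 - b2) ** 2, axis=0))
    return float(np.mean(errs))

def rot_z(deg):
    """Rigid rotation about the z (longitudinal) axis as a 4x4 motion matrix."""
    c, s = np.cos(np.deg2rad(deg)), np.sin(np.deg2rad(deg))
    M = np.eye(4)
    M[:2, :2] = [[c, -s], [s, c]]
    return M

# toy setup: K points sampled on a unit sphere surface, N identical views
rng = np.random.default_rng(0)
K, N = 20, 8
pts = rng.normal(size=(K, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
pts_h = np.hstack([pts, np.ones((K, 1))])
P_list = [np.hstack([np.eye(3), [[0.0], [0.0], [5.0]]]) for _ in range(N)]

assert reprojection_error(P_list, [np.eye(4)] * N, pts_h) == 0.0   # no motion, no RPE
assert reprojection_error(P_list, [rot_z(2.0)] * N, pts_h) > 0.0   # motion induces RPE
```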
To this end, we devise a regression network learning the RPE directly from a reconstructed image. Our regression network $R$ consists of a feature extraction stage, pretrained on ImageNet and realized by the first 33 layers of a residual network \cite{he2016deep}, followed by a densely connected layer defining the regression stage. The cost function $L$ of the network is defined by the difference between the network-predicted RPE from a reconstruction slice with simulated motion trajectory $\mathcal{M}$ and the corresponding RPE as defined by Eq. \eqref{eq:rpe}
\begin{equation}
L = || R(\text{FBP}_y(\mathcal{M})) - \text{RPE}(\mathcal{M})||_2^2 \enspace .
\end{equation}
For training, the projection data $y$ is ensured to be motion free, such that motion artifacts solely source from the virtual motion trajectory $\mathcal{M}$.
For training and testing, we use CBCT acquisitions (Artis Q, Siemens Healthcare GmbH, Germany) of the head ($N=496$) acquired from 20 patients, which were split into 16 for training, 3 for validation, and 1 for testing.
For each patient, we simulate 450 random motion trajectories, resulting in a training set of 7200 reconstructions.
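For concreteness, the RPE of Eq.~\eqref{eq:rpe} can be evaluated in a few lines of numpy. The toy geometry, the $4\times4$ homogeneous motion matrices standing in for $\vec{M}_i$, and all names below are illustrative assumptions rather than the implementation used in this work:

```python
import numpy as np

def rpe(P, M, X):
    """Mean reprojection error: average over views i and homogeneous 3-D
    points x of ||P_i M_i x - P_i x||_2^2, implemented literally as in the
    RPE definition."""
    total = 0.0
    for P_i, M_i in zip(P, M):           # P_i: 3x4 projection, M_i: 4x4 motion
        d = P_i @ M_i @ X.T - P_i @ X.T  # 3 x |X| residuals on the detector
        total += np.sum(d ** 2)
    return total / (len(X) * len(P))

# Toy setup: 4 views, 8 homogeneous points sampled on a unit sphere.
rng = np.random.default_rng(0)
pts = rng.normal(size=(8, 3))
pts /= np.linalg.norm(pts, axis=1, keepdims=True)
X = np.hstack([pts, np.ones((8, 1))])
P = [rng.normal(size=(3, 4)) for _ in range(4)]
M_id = [np.eye(4)] * 4                   # motion-free trajectory
assert rpe(P, M_id, X) == 0.0            # no motion => zero RPE
```

Any non-identity $\vec{M}_i$ yields a strictly positive RPE for generic geometry, which is the quantity the regression network is trained to predict.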
\section{Experiments and Results}
For motion generation, we use rotational movements along the patient's longitudinal axis. The motion trajectory is modeled by an Akima spline \cite{akima1970new} with 15 equally distributed nodes inducing RPEs ranging from $0$\,mm to $0.6$\,mm. Since the RPE is also sensitive to constant offsets, which do not induce motion artifacts, we further only use motions affecting a third of the acquisition.
First, we inspect how well the network is able to regress the RPE on test and validation data. Then, in an inverse crime scenario~---~i.e., the modeling capacity of the spline used for motion generation is equal to that of the spline used for motion compensation~---~we inspect the behavior for motion types that differ significantly in shape from any motion seen during training. In a last experiment, we compare the performance of the network with a state-of-the-art IQM utilizing the histogram entropy. For this comparison, we deploy an inverse crime scenario and a more realistic case in which we use 10 spline nodes for motion generation and 20 nodes for compensation.
\paragraph{Regression Network:}
We use Adam optimization with a learning rate of $0.00001$ and select the network parameters that achieved the best RPE prediction on our validation dataset. Our network achieves an average RPE deviation from the ground truth (Gt) of \mbox{$0.031$\,mm} on the test dataset, as depicted in Fig.\,\ref{fig:test_accuracy}.
\begin{figure}
\adjustbox{width=0.94\textwidth,trim=0ex 0pt 0ex 0pt}{
\begin{tikzpicture}
\node[inner sep=0pt, anchor = north west] (recos) at (-1,0)
{
\resizebox{!}{0.1\textheight}{
\begin{tabular}{lr}
\includegraphics[width=0.2\linewidth]{png/recos/rpe40a.png}\\ \includegraphics[width=0.2\linewidth]{png/recos/rpe40b.png}\\ \includegraphics[width=0.2\linewidth]{png/recos/rpe40c.png}
\end{tabular} }};
\node[inner sep=0pt, anchor = north east] (plot) at (-1,0.107)
{\input{tikz/regression.pgf}};
\node [inner sep=0pt] (A) at (-4.4,-1.6) {};
\node [inner sep=0pt] (B) at (-4.4,-1.5) {};
\node [inner sep=0pt] (C) at (-4.4,-1) {};
\node [inner sep=0pt] (A1) at (-0.8,-0.65) {};
\node [inner sep=0pt] (B1) at (-0.8,-2.2) {};
\node [inner sep=0pt] (C1) at (-0.8,-3.8) {};
\draw [-] (A) -- (C1);
\draw [-] (B) -- (B1);
\draw [-] (C) -- (A1);
\end{tikzpicture}
}
\caption{Network-estimated RPE and different reconstructions, all revealing an RPE of $\approx 0.34$\,mm.}
\label{fig:test_accuracy}
\end{figure}
\paragraph{Network Inference for Motion Compensation:}
\begin{figure}
\adjustbox{width=0.94\textwidth,trim=0ex 0pt 0ex 0pt}{
\input{tikz/amp.pgf}
}
\caption{Left: Network-predicted and Gt RPE in each iteration step of the optimization. Right: Simulated motion trajectory and estimated motion trajectory after optimization.}
\label{fig:ic}
\end{figure}
Using the test patient, the network behavior for motion exceeding the RPE range of the training process is inspected in an inverse crime scenario. The simulated motion trajectory is depicted in Fig.\,\ref{fig:ic} together with the estimated motion trajectory after optimization using the network as IQM (cf.\,Eq.\,\eqref{eq:IQM}). For each iteration of the optimization process, the network-predicted RPE together with the corresponding Gt RPE is depicted. While the RPE is underestimated within the first iterations, the proportionality is still preserved, guiding the optimization to a motion-free reconstruction.
Figure~\ref{fig:motion_reco} compares the proposed network-based IQM with the entropy-based IQM. The optimization process is identical for both metrics. In an inverse crime scenario, both methods can restore the original image quality; however, in a more realistic setting, the image entropy gets stuck in a local minimum, whereas the network is able to lead the optimization to a nearby motion-free solution.
\begin{figure}
\begin{center}
\resizebox{1\textwidth}{!}{
{\def\arraystretch{1}\tabcolsep=2pt
\begin{tabular}{cccccccc}
\begin{tikzpicture}
\node[anchor=south west,inner sep=0,outer sep=0] (image) at (0,0) {%
\includegraphics[width=0.2\textwidth]{png/recos/01_org.png}%
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[white,anchor=west] at (-0.0,0.9) {Gt};
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}
\node[anchor=south west,inner sep=0,outer sep=0] (image) at (0,0) {%
\includegraphics[width=0.2\textwidth]{png/recos/02_art.png}
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[white,anchor=west] at (-0.0,0.9) {Mo};
\end{scope}
\end{tikzpicture}
&{\color{white} a}&
\begin{tikzpicture}
\node[anchor=south west,inner sep=0,outer sep=0] (image) at (0,0) {%
\includegraphics[width=0.2\textwidth]{png/recos/04_ic_e.png}%
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[white,anchor=west] at (-0.0,0.9) {Ent};
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}
\node[anchor=south west,inner sep=0,outer sep=0] (image) at (0,0) {%
\includegraphics[width=0.2\textwidth]{png/recos/03_ic_ai.png}%
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[white,anchor=west] at (-0.0,0.9) {Pro};
\end{scope}
\end{tikzpicture}
&{\color{white} a}&
\begin{tikzpicture}
\node[anchor=south west,inner sep=0,outer sep=0] (image) at (0,0) {%
\includegraphics[width=0.2\textwidth]{png/recos/06_e.png}%
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[white,anchor=west] at (-0.0,0.9) {Ent};
\end{scope}
\end{tikzpicture}
&
\begin{tikzpicture}
\node[anchor=south west,inner sep=0,outer sep=0] (image) at (0,0) {%
\includegraphics[width=0.2\textwidth]{png/recos/05_ai.png}
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\node[white,anchor=west] at (-0.0,0.9) {Pro};
\end{scope}
\end{tikzpicture}
\\
\multicolumn{2}{c}{Ground Truth (Gt) and Motion Affected}&&
\multicolumn{2}{c}{Inverse Crime Compensation}&&
\multicolumn{2}{c}{Clinical Setting (Entropy and Proposed)}
\end{tabular}%
}}
\end{center}
\caption{Reconstructions of the test patient using a [500, 2000]\,HU window. In the inverse crime scenario, the SSIM to the Gt is $0.84$ (Ent/Gt) and $0.95$ (Pro/Gt), respectively for the entropy (Ent) and proposed (Pro) measure. For the more realistic setting (Clinical Setting), the SSIM is $0.65$ (Ent/Gt) and $0.84$ (Pro/Gt), respectively.
}
\label{fig:motion_reco}
\end{figure}
\section{Conclusion and Discussion}
We present a novel data-driven autofocus approach led by a convolutional neural network.
The network is trained to predict the RPE given a slice of a CBCT reconstruction. The final motion compensated reconstruction is solely based on the projection raw-data and the estimated motion trajectory.
This allows us to devise a learning-based motion compensation approach while ensuring data integrity. We showed that the network is capable of generalizing well to unseen motion shapes and achieves higher SSIM compared to a state-of-the-art IQM measure.
\textbf{Disclaimer:} The concepts and information presented in this paper are based on
research and are not commercially available.
\medskip
\small
\bibliographystyle{splncs04}
\section{Introduction}
Consider a fractional Brownian motion (fBm), a self-similar Gaussian
process with stationary increments. It was introduced by Kolmogorov
\cite{kol} and studied by Mandelbrot and Van Ness \cite{MN}. The fBm
with Hurst parameter $H\in(0,1)$ is a centered Gaussian process with
covariance function
\[
R_H(t,s)=E\bigl(B_{t}^{H}B_{s}^{H}
\bigr)=\frac{1}{2} \bigl(t^{2H}+s^{2H}-|t-s|^{2H}
\bigr).
\]
If $H=1/2$, then the process $B^{1/2}$ is a standard Brownian motion.
When \hbox{$H\neq\frac{1}{2}$}, $B^H$ is neither a semimartingale nor
a Markov process, so that many of the techniques employed in stochastic
analysis are not available for an fBm. The self-similarity and
stationarity of increments make the fBm an appropriate model for many
applications in diverse fields from biology to finance. We refer to
\cite{Nua10} for details on these notions.
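As a quick numerical sanity check of the covariance function above (a small Python sketch, not part of the paper): for $H=1/2$ it reduces to the Brownian covariance $\min(s,t)$, and the variance along the diagonal is $t^{2H}$.

```python
from math import isclose

def R(H, t, s):
    """Covariance of fBm: R_H(t, s) = (t^{2H} + s^{2H} - |t - s|^{2H}) / 2."""
    return 0.5 * (t ** (2 * H) + s ** (2 * H) - abs(t - s) ** (2 * H))

# H = 1/2 recovers the standard Brownian motion covariance min(t, s):
assert isclose(R(0.5, 2.0, 3.0), 2.0)
# Variance along the diagonal is t^{2H} (self-similarity of order H):
assert isclose(R(0.3, 4.0, 4.0), 4.0 ** 0.6)
```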
Consider the following stochastic differential equation (SDE)
\begin{align}
\label{SDE1}
\begin{cases}
dX_t = b(t,X_t)\,dt + dB^H_t,\\
X_0=x \in\R^d,
\end{cases}
\end{align}
where $
b:[0,T]\times\R^d \rightarrow\R^d $ is a measurable function, and
$B^H$ is a $d$-dimensional fBm with Hurst parameter $H < 1/2$ whose
components are one-dimensional independent fBms defined on a
probability space $(\varOmega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, P)$,
where the filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$ is generated by
$B^H_t$, $t\in[0,T]$, augmented by the $P$-null sets. It has been
proved in \cite{BNP} that if $b$ satisfies the assumption
\begin{equation}
\label{ass} b\in L_{\infty}^{1,\infty}:= L^\infty
\bigl([0,T]; L^1 \bigl(\mathbb{R}^{d}\bigr)\cap
L^\infty\bigl(\R^d\bigr)\bigr),
\end{equation}
for $H < \frac{1}{2(3d-1)}$, then Eq.~(\ref{SDE1}) has a unique strong
solution, which will be assumed throughout this paper.
Notice that if the drift coefficient is Lipschitz continuous, then
Eq.~(\ref{SDE1}) has a unique strong solution, which is continuous with
respect to the initial condition. Moreover, the solution can be
constructed using various numerical schemes.
Our purpose in this paper is to establish some stability results under
the pathwise uniqueness of solutions and under weak regularity
conditions on the drift coefficient~$b$. We mention that a considerable
result in this direction has been established in \cite{BMO} when an fBm
is replaced by a standard Brownian motion.
The paper is organized as follows. In Section~2, we introduce some
properties, notation, definitions, and preliminary results. Section~3
is devoted to the study of the variation of the solution with respect to
the initial data. In the last section, we drop the continuity
assumption on the drift and try to obtain the same result as in Section~3.
\section{Preliminaries}
In this section, we give some properties of an fBm, definitions, and
some tools used in the proofs.
For any $H<1/2$, let us define the square-integrable kernel
\begin{equation*}
K_{H}(t,s)=c_{H} \Biggl[ \biggl(\frac{t}{s}
\biggr)^{H-\frac{1}{2}}- \biggl(H-\frac{1}{2} \biggr)s^{\frac{1}{2}-H}\int
_{s}^{t}(u-s)^{H-\frac
{1}{2}}u^{H-\frac{3}{2}}\,du
\Biggr], \quad t>s,
\end{equation*}
where $c_{H}= [\frac{2H}{(1-2H)\beta(1-2H,H+\frac{1}{2})} ]^{1/2}$, $t>s$.
Note that
\begin{equation*}
\frac{\partial K_{H}}{\partial t}(t,s)=c_{H} \biggl(H-\frac{1}{2} \biggr)
\biggl(\frac{t}{s} \biggr)^{H-\frac{1}{2}}(t-s)^{H-\frac{3}{2}}.
\end{equation*}
Let $B^{H}=\{B_{t}^{H}, \ t\in[0,T]\}$ be an fBm defined on $(\varOmega,
\mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, P)$. We denote by $\zeta$
the set of step functions on $[0,T]$. Let $\mathcal{H}$ be the Hilbert
space defined as
the closure of $\zeta$ with respect to the scalar product
\begin{equation*}
\langle\textbf{1}_{[0,t]}, \textbf{1}_{[0,s]}\rangle_{\mathcal{H}}=R_{H}(t,s).
\end{equation*}
The mapping $\textbf{1}_{[0,t]}\rightarrow B_{t}^{H}$ can be extended
to an isometry between $\mathcal{H}$ and the Gaussian subspace of
$L^2({\varOmega})$ associated with $B^{H}$, and such an isometry is
denoted by $\varphi\rightarrow B^{H}(\varphi)$.
Now we introduce the linear operator $K_{H}^{*}$ from $\zeta$ to
$L^{2}([0,T])$ defined by
\begin{equation*}
\bigl(K_{H}^{*}\varphi\bigr) (s)=K_{H}(T,s)\varphi(s)+\int_{s}^{T}\bigl(\varphi(t)-\varphi(s)\bigr)\frac{\partial K_{H}}{\partial t}(t,s)\,dt.
\end{equation*}
The operator $K_{H}^{*}$ is an isometry between $\zeta$ and
$L^{2}([0,T])$, which can be extended to the Hilbert space $\mathcal
{H}$.
Define the process $W= \{W_{t},t\in[0,T]\}$ by
\begin{equation*}
W_{t}=B^{H}\bigl(\bigl(K_{H}^{\ast}
\bigr)^{-1}\textbf{1}_{[0,t]}\bigr).
\end{equation*}
Then $W$ is a Brownian motion; moreover, $B^{H}$ has the integral representation
\begin{equation*}
B_t^{H}=\int_{0}^{t}K_{H}(t,s)\,dW(s).
\end{equation*}
We need also to define an isomorphism $K_H$ from $L^2([0,T])$ onto
$I_{0+}^{H+\frac{1}{2}}(L^2)$ associated with the kernel $K_H(t,s)$ in
terms of the fractional integrals as follows:
\[
(K_H \varphi) (s) = I_{0^+}^{2H} s^{\frac{1}{2}-H}
I_{0^+}^{\frac
{1}{2}-H}s^{H-\frac{1}{2}} \varphi, \quad\varphi\in
L^2\bigl([0,T]\bigr).
\]
Note that, for $\varphi\in L^2([0,T])$, $I_{0^+}^{\alpha}$ is the left
fractional Riemann-Liouville integral operator of order $\alpha$
defined by
\[
I_{0^+}^\alpha\varphi(x) = \frac{1}{\varGamma(\alpha)} \int
_0^x (x-y)^{\alpha-1}\varphi(y)\,dy,
\]
where $\varGamma$ is the gamma function (see \cite{DU} for details).
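The operator $I_{0^+}^{\alpha}$ can be probed numerically against the standard identity $I_{0^+}^{\alpha}[y^{\beta}](x)=\frac{\varGamma(\beta+1)}{\varGamma(\alpha+\beta+1)}x^{\alpha+\beta}$ (a verification sketch; the midpoint-rule grid size is an arbitrary choice):

```python
from math import gamma

def frac_integral_monomial(alpha, beta, x, n=200_000):
    """Midpoint-rule evaluation of the left Riemann-Liouville integral
    I^alpha[y^beta](x) = 1/Gamma(alpha) * int_0^x (x - y)^{alpha-1} y^beta dy."""
    h = x / n
    s = sum((x - (k + 0.5) * h) ** (alpha - 1) * ((k + 0.5) * h) ** beta
            for k in range(n))
    return h * s / gamma(alpha)

# Closed form: I^alpha[y^beta](x) = Gamma(beta+1)/Gamma(alpha+beta+1) * x^(alpha+beta)
alpha, beta, x = 0.5, 2.0, 1.5
exact = gamma(beta + 1) / gamma(alpha + beta + 1) * x ** (alpha + beta)
approx = frac_integral_monomial(alpha, beta, x)
assert abs(approx - exact) / exact < 1e-2
```

For $\alpha=1$ the operator reduces to the ordinary integral, which the same quadrature reproduces to high accuracy since the integrand is then smooth.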
The inverse of $K_H$ is given by
\[
\bigl(K_H^{-1} \varphi\bigr) (s) = s^{\frac{1}{2}-H}
D_{0^+}^{\frac{1}{2}-H} s^{H-\frac{1}{2}} D_{0^+}^{2H}
\varphi(s), \quad\varphi\in I_{0+}^{H+\frac{1}{2}}\bigl(L^2
\bigr),
\]
where for $\varphi\in I_{0^+}^{H+\frac{1}{2}} (L^2)$, $D_{0^+}^{\alpha}$ is the left-sided Riemann-Liouville derivative of order $\alpha$
defined by
\[
D_{0^+}^{\alpha} \varphi(x)= \frac{1}{\varGamma(1-\alpha)} \frac{d
}{d x}
\int_0^x \frac{\varphi(y)}{(x-y)^{\alpha}}\,dy.
\]
If $\varphi$ is absolutely continuous (see \cite{NO}), then
\begin{align}
\label{inverseKH} \bigl(K_H^{-1} \varphi\bigr) (s) =
s^{H-\frac{1}{2}} I_{0^+}^{\frac{1}{2}-H} s^{\frac{1}{2}-H}
\varphi'(s).
\end{align}
\begin{definition}
On a given probability space $(\varOmega, \mathcal{F}, P)$, a process $X$
is called a strong solution to \eqref{SDE1} if
\begin{itemize}
\item[\rm(1)] $X$ is $\{\mathcal{F}_t\}_{t\in[0,T]}$ adapted, where $\{
\mathcal{F}_t\}_{t\in[0,T]}$ is the filtration generated by $B^H_t,
t\!\in[0,T]$;
\item[\rm(2)] $X$ satisfies \eqref{SDE1}.
\end{itemize}
\end{definition}
\begin{definition}
A sextuple $(\varOmega, \mathcal{F}, \{\mathcal{F}_t\}_{t\in[0,T]}, P, X,
B^H )$ is called a weak solution to \eqref{SDE1} if
\begin{itemize}
\item[\rm(1)] $(\varOmega, \mathcal{F}, P)$ is a probability space
equipped with the filtration $\{\mathcal{F}_t\}_{t\in[0,T]}$ that
satisfies the usual conditions;
\item[\rm(2)] $X$ is an $\{\mathcal{F}_t\}_{t\in[0,T]}$-adapted
process, and $B^H$ is an $\{\mathcal{F}_t\}_{t\in[0,T]}$-fBm;
\item[\rm(3)] $X$ and $B^H$ satisfy \eqref{SDE1}.
\end{itemize}
\end{definition}
\begin{definition}[Pathwise uniqueness]
We say that pathwise uniqueness holds for Eq.~\eqref{SDE1}
if whenever $(X,B^{H})$ and $(\widetilde{X},{B^{H}})$
are two weak solutions of Eq.~\eqref{SDE1} defined on the
same probability space $ (\varOmega,\mathcal{F},(\mathcal{F}_{t})_{t\in
[0,T]},P )$,
then $X$ and $\widetilde{X}$ are indistinguishable.
\end{definition}
The main tool used in the proofs is Skorokhod's selection theorem given
by the following lemma.
\begin{lemma} \textup{(\cite{IW}, p.~9)}
Let $(S,\rho)$ be a complete separable metric space, and let $P$,
$P_n$, $n=1,2,\ldots$, be probability measures on $(S,\mathbb{B}(S))$
such that $P_n$ converges weakly to $P$ as $n \rightarrow\infty$.
Then, on a probability space $(\widetilde{\varOmega}, \widetilde{\mathcal
{F}},\widetilde{P})$, we can construct $S$-valued random variables $X$,
$X_n$, $n=1,2,\ldots$, such that:
\begin{itemize}
\item[\rm(i)] $P_n = \widetilde{P}^{X_n}$, $n=1,2,\ldots$, and $P =
\widetilde{P}^{X}$, where $\widetilde{P}^{X_n}$ and $\widetilde{P}^{X}$
are respectively the laws of ${X_n}$ and ${X}$;
\item[\rm(ii)] $X_n$ converges to $X$ $\widetilde{P}$-a.s.
\end{itemize}
\end{lemma}
We will also make use of the following result, which gives a criterion
for the tightness of sequences of laws associated with continuous processes.
\begin{lemma} \textup{(\cite{IW}, p.~18)} \label{tightness}
Let $\{X_{t}^{n}$, $t\in[0,T]\}$, $n=1,2,\ldots$, be a sequence of
$d$-dimensional continuous processes satisfying the following two conditions:
\begin{itemize}
\item[\rm(i)] There exist positive constants $M$ and $\gamma$ such that
$E[|X^n(0)|^{\gamma}]\leq M$ for every $n=1,2,\ldots$;
\item[\rm(ii)] there exist positive constants $\alpha$, $\beta$, $M_k$,
$k=1,2,\ldots$, such that, for every $n\geq1$ and all $t, s \in[0,k]$,
$k=1,2,\ldots$,
\[
E\bigl[\bigl|X_{t}^{n}-X_{s}^{n}\bigr|^{\alpha}
\bigr]\leq M_k |t-s|^{1+\beta}.
\]
\end{itemize}
Then, there exist a subsequence $(n_k)$, a probability space
$(\widetilde{\varOmega}, \widetilde{\mathcal{F}},\widetilde{P})$, and
$d$-dimen\-sional continuous processes $\widetilde{X}$, $\widetilde
{X}^{n_k}$, $k=1,2,\ldots$, defined on $\widetilde{\varOmega}$ such that
\begin{itemize}
\item[\rm(1)] The laws of $\widetilde{X}^{n_k}$ and $X^{n_k}$ coincide;
\item[\rm(2)] $\widetilde{X}^{n_k}_{t}$ converges to $\widetilde
{X}_{t}$ uniformly on every finite time interval $\widetilde{P}$-a.s.
\end{itemize}
\end{lemma}
\section{Variation of solutions with respect to initial conditions}
The purpose of this section is to ensure the continuous dependence of
the solution with respect to the initial condition when the drift $b$
is continuous and bounded. Note that, in the case of ordinary
differential equation, the continuity of the coefficient is sufficient
to ensure this dependence.
Next, we give a theorem that will be essential in establishing the
desired result.
\begin{theorem}\label{main}
Let $b$ be a continuous bounded function. Then, under the pathwise
uniqueness for SDE \eqref{SDE1}, we have
\[
\lim_{x\rightarrow x_0} E \Bigl[\sup_{0\leq t\leq T}\bigl\llvert
X_t (x)-X_t (x_0)\bigr\rrvert
^{2} \Bigr]=0.
\]
\end{theorem}
Before we proceed to the proof of Theorem~\ref{main}, we state the
following technical lemma.
\begin{lemma} \label{tight}
Let $X^n$ be the solution of \eqref{SDE1} corresponding to the initial
condition $x_n$. Then, for every $p>\frac{1}{2H}$, there exists a
positive constant $C_p$ such that, for all $s, t \in[0,T]$,
\[
E\bigl[\bigl|X^{n}_t-X^{n}_s\bigr|^{2p}
\bigr] \leq C_p |t-s|^{2pH}.
\]
\end{lemma}
\begin{proof}
Fix $s<t$ in $[0,T]$. We have
\begin{align*}
\bigl|X^{n}_t-X^{n}_s\bigr|^{2p} &\leq C_p \Biggl[\Biggl\llvert \int_s^t b\bigl(u,X^{n}_u\bigr)\,du\Biggr\rrvert ^{2p} + \bigl|B^H_t-B^H_s\bigr|^{2p}\Biggr].
\end{align*}
Due to the stationarity of the increments and the scaling property
of an fBm and the boundedness of $b$, we get that
\begin{align*}
E\bigl|X^{n}_t-X^{n}_s\bigr|^{2p} &\leq C_p \bigl[ |t-s|^{2p}+|t-s|^{2pH} \bigr]\\
&\leq C_p|t-s|^{2pH},
\end{align*}
where the last inequality uses that $|t-s|\leq T$ and $H<1/2$, so that $|t-s|^{2p}\leq T^{2p(1-H)}|t-s|^{2pH}$. This finishes the proof.
\end{proof}
Let us now turn to the proof of Theorem~\ref{main}.
\begin{proof}
Suppose that the result of the theorem is false. Then there exist a
constant $\delta>0$ and a sequence $x_n$ converging to $x_0$ such that
\[
\inf_n E \Bigl[\sup_{0\leq t\leq T}\bigl\llvert
X_t (x_n)-X_t (x_0)\bigr\rrvert
^{2} \Bigr]\geq\delta.
\]
Let $X^n$ (respectively, $X$) be the solution of (\ref{SDE1})
corresponding to the initial condition $x_n$ (respectively, $x_0$).
According to Lemma~\ref{tight}, the sequence $(X^{n},X,B^H)$ satisfies
conditions (i) and (ii) of Lemma~\ref{tightness}. Then, by Skorokhod's
selection theorem there exist a subsequence $\{n_k,k\geq1 \}$, a
probability space $(\widetilde{\varOmega}, \widetilde{\mathcal
{F}},\widetilde{P})$, and stochastic processes $(\widetilde{X},
\widetilde{Y}, \widetilde{B}^H)$, $(\widetilde{X^{k}}, \widetilde
{Y^{k}},\widetilde{B}^{H,k})$, $k\geq1 $, defined on $(\widetilde
{\varOmega}, \widetilde{\mathcal{F}},\widetilde{P})$ such that:
\begin{itemize}
\item [$(\alpha)$] for each $k\geq1$, the laws of $(\widetilde
{X^{k}}, \widetilde{Y^{k}},\widetilde{B}^{H,k})$ and $({X^{n_k}},
X,B^H)$ coincide;
\item[$(\beta)$] $(\widetilde{X}^{k}, \widetilde{Y}^{k}, \widetilde
{B}^{H,k})$ converges to
$(\widetilde{X}, \widetilde{Y}, \widetilde{B}^H)$ uniformly on every
finite time interval $\widetilde{P}$-a.s.
\end{itemize}
Thanks to property $(\alpha)$, we have, for $k\geq1$ and $t>0$,
\[
E\Biggl\llvert \widetilde{X}^{k}_{t} -x_k -
\int_0^t b\bigl(s,\widetilde
{X}^{k}_{s}\bigr)\,ds - \widetilde{B}^{H,k}_t
\Biggr\rrvert ^{2}=0.
\]
In other words, $\widetilde{X}^{k}_{t}$ satisfies the following SDE:
\[
\widetilde{X}^{k}_t= x_k + \int
_0^t b\bigl(s,\widetilde{X}^{k}_{s}
\bigr)\,ds + \widetilde{B}^{H,k}_t .
\]
Similarly,
\[
\widetilde{Y}^{k}_t= x_0 + \int
_0^t b\bigl(s,\widetilde{Y}^{k}_{s}
\bigr)\,ds + \widetilde{B}^{H,k}_t .
\]
Using $(\beta)$, we deduce that
\[
\lim_{k \rightarrow\infty}\int_0^t b\bigl(s,
\widetilde{X}^{k}_{s}\bigr)\,ds=\int_0^t
b(s,\widetilde{X}_{s})\,ds
\]
and
\[
\lim_{k \rightarrow\infty}\int_0^t b\bigl(s,
\widetilde{Y}^{k}_{s}\bigr)\,ds=\int_0^t
b(s,\widetilde{Y}_{s})\,ds
\]
in probability and uniformly in $t \in[0, T]$.
Thus, the processes $\widetilde{X}$ and $\widetilde{Y}$ satisfy the same
SDE on $(\widetilde{\varOmega}, \widetilde{\mathcal{F}},\widetilde{P})$
with the same driving noise $\widetilde{B}^{H}_t$ and the initial
condition $x_0$. Then, by pathwise uniqueness, we conclude that
$\widetilde{X}_{t}=\widetilde{Y}_{t}$ for all $t \in[0, T]$,
$\widetilde{P}$-a.s.
On the other hand, by uniform integrability we have that
\begin{eqnarray*}
\delta&\leq& \liminf_n E \Bigl[\max_{0\leq t\leq T}
\bigl\llvert X_t (x_n)-X_t (x_0)
\bigr\rrvert ^{2} \Bigr]
\\
&=&\liminf_k \widetilde{E} \Bigl[\max_{0\leq t\leq T}
\bigl\llvert \widetilde {X}^{k}_t-\widetilde{Y}^{k}_t
\bigr\rrvert ^{2} \Bigr]
\\
&\leq& \widetilde{E} \Bigl[\max_{0\leq t\leq T}\llvert \widetilde
{X}_t-\widetilde{Y}_t\rrvert ^{2} \Bigr],
\end{eqnarray*}
which is a contradiction. Then the desired result follows.
\end{proof}
\section{The case of discontinuous drift coefficient}
In this section, we drop the continuity assumption on the drift
coefficient and only assume that $b$ is bounded. The goal of this
section is to establish the same result as in Theorem~\ref{main} without
the continuity assumption.
Next, in order to use the fractional Girsanov theorem given in \cite[Thm.~2]{NO}, we should first check that the conditions imposed in the latter are satisfied in our context. This will be done in the following lemma.
\begin{lemma}\label{gir}
Suppose that $X$ is a solution of SDE \eqref{SDE1}, and let $b$ be a
bounded function. Then the process
$v= K_H^{-1} ( \int_0^{\cdot} b(r, X_r) \,dr )$ enjoys the
following properties:
\begin{itemize}
\item[$(1)$] $v_s \in L^2([0,T]), \ P\text{-a.s.}$;
\item[$(2)$] $E [ \exp \{\frac{1}{2} \int_0^T |v_s|^2 \,ds \} ] < \infty$.
\end{itemize}
\end{lemma}
\begin{proof}
(1) In light of \eqref{inverseKH}, we can write
\begin{align*}
|v_s| &= \bigl|s^{H-\frac{1}{2}} I_{0^+}^{\frac{1}{2}-H} s^{\frac{1}{2}-H} \bigl|b(s,X_s)\bigr| \bigr|\\
&= \frac{1}{\varGamma (\frac{1}{2}-H )} s^{H- \frac{1}{2}} \int_0^s(s-r)^{-\frac{1}{2}-H} r^{\frac{1}{2}-H} \bigl|b(r,X_r)\bigr|\,dr\\
&\leq \, \|b\|_\infty\frac{1}{\varGamma (\frac{1}{2}-H )} s^{H- \frac{1}{2}} \int_0^s (s-r)^{-\frac{1}{2}-H} r^{\frac{1}{2}-H}\,dr\\
&= \, \|b\|_\infty\frac{\varGamma (\frac{3}{2}-H )}{\varGamma(2-2H )}s^{\frac{1}{2}-H}\\
&\leq\, \|b\|_\infty\frac{\varGamma (\frac{3}{2}-H )}{\varGamma(2-2H )}T^{\frac{1}{2}-H},
\end{align*}
where $\|\cdot\|_{\infty}$ denotes the norm in $L^{\infty}([0,T];
L^\infty(\R^d))$.
As a result, we get that
\begin{align*}
\int_{0}^{T} |v_{s}|^2 \,ds
<\infty, \quad P\text{-a.s.}
\end{align*}
(2) The second item is obtained easily by the following estimate:
\begin{align*}
E& \Biggl[ \exp \Biggl\{\frac{1}{2} \int_0^T
\llvert v_s\rrvert ^2 \,ds \Biggr\} \Biggr] \leq\exp \biggl
\{ \frac{1}{2} C_H T^{2(1-H)} \|b\|_\infty
^2 \biggr\},
\end{align*}
where $C_H=\frac{\varGamma (\frac{3}{2}-H )^2}{\varGamma (2-2H
)^2}$,
which finishes the proof.
\end{proof}
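As a side check (not part of the proof), the closed-form evaluation of the singular Beta-type integral used in part (1) above, $\frac{s^{H-1/2}}{\varGamma(\frac{1}{2}-H)}\int_0^s (s-r)^{-\frac{1}{2}-H} r^{\frac{1}{2}-H}\,dr = \frac{\varGamma(\frac{3}{2}-H)}{\varGamma(2-2H)}s^{\frac{1}{2}-H}$, can be verified numerically; the quadrature parameters below are arbitrary choices:

```python
from math import gamma

def v_bound_quadrature(H, s, n=200_000):
    """Midpoint rule for
    s^{H-1/2} / Gamma(1/2-H) * int_0^s (s-r)^{-1/2-H} r^{1/2-H} dr."""
    h = s / n
    integral = h * sum((s - (k + 0.5) * h) ** (-0.5 - H) * ((k + 0.5) * h) ** (0.5 - H)
                       for k in range(n))
    return s ** (H - 0.5) * integral / gamma(0.5 - H)

H, s = 0.1, 2.0
closed_form = gamma(1.5 - H) / gamma(2 - 2 * H) * s ** (0.5 - H)
assert abs(v_bound_quadrature(H, s) - closed_form) / closed_form < 2e-2
```

The agreement confirms the constant $C_H$ appearing in the exponential-moment bound of item (2).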
Next, we will establish the following Krylov-type inequality that will
play an essential role in the sequel.
\begin{lemma}
\label{krylov1}
Suppose that $X$ is a solution of SDE \eqref{SDE1}. Then, there exists
$\beta>1+dH$ such that, for
any measurable nonnegative function $g:[0,T]\times\mathbb{R}^d \rightarrow\mathbb{R}_+$, we have
\begin{eqnarray}
\label{krylovI} E\int_{0}^{T}g(t,X_{t})
\,dt \leq M \Biggl(\int_{0}^{T}\int
_{\R^d} g^{\beta}(t,x)\,dx\,dt \Biggr)^{1/\beta},
\end{eqnarray}
where $M$ is a constant depending only on $T$, $d$, $\beta$, and $H$.
\end{lemma}
\begin{proof}
Let $W$ be a $d$-dimensional Brownian motion such that
\begin{equation*}
B^{H}_t=\int_0^t
K_H(t,s)\,dW_s.
\end{equation*}
For the process $v$ introduced in Lemma~\ref{gir}, let us define
$\widehat{P}$ by
\begin{eqnarray*}
\label{ZT} \frac{d\widehat{P} }{dP}=\exp \Biggl\{ -\int_{0}^{T}v_{t}
\,dW_{t} - \frac{1}{2}\int_{0}^{T}v_{t}^{2}
\,dt \Biggr\}:=Z_T^{-1}.
\end{eqnarray*}
Then, in light of Lemma~\ref{gir} together with the fractional Girsanov
theorem \cite[Thm.~2]{NO}, we can conclude that $\widehat{P}$
is a probability measure under which the process $X-x$ is an fBm.
Now, applying H\"{o}lder's inequality, we have
\begin{align}
E\int_{0}^{T}g(t,X_{t})\,dt&= \widehat E \Biggl\{Z_T\int_0^Tg(t,X_{t})\,dt\Biggr\}\nonumber\\[-3pt]
&\leq C \bigl\{\widehat{E} \bigl[Z_T^\alpha \bigr] \bigr\}^{1/\alpha} \Biggl\{\widehat{E}\int_{0}^{T}g^{\rho}(t,X_{t})\,dt \Biggr\}^{1/\rho},\label{ineq}
\end{align}
where $1/\alpha+1/\rho=1$, and $C$ is a positive constant depending
only on $T$, $\alpha$, and $\rho$.
From \cite[Lemma~4.3]{BNP} we can see that $\widehat
E[Z_T^\alpha]$ satisfies the following property:
\begin{equation}
\label{ineq-2} \widehat E\bigl[Z_T^\alpha\bigr] \leq
C_{H,d,T}\bigl(\|b\|_\infty\bigr) < \infty,
\end{equation}
where $C_{H,d,T}$ is a continuous increasing function depending only on
$H$, $d$, and~$T$.
On the other hand, applying again H\"{o}lder's inequality with $1/\gamma
+1/\gamma^{\prime}=1$
and $\gamma>dH+1$, we obtain
\begin{align}
\label{previous-inequality}
\widehat{E}\int_{0}^{T}g^{\rho}(t,X_{t})\,dt&=\int_{0}^{T} \int_{\R^d}g^{\rho}(t,y)\bigl(2\pi t^{2H}\bigr)^{-d/2}e^{-\|y-x\|^{2}/(2t^{2H})}\,dy\,dt\nonumber\\[-3pt]
&\leq \Biggl(\int_{0}^{T}\int_{\R^d} \bigl(2\pi t^{2H}\bigr)^{-d\gamma^{\prime}/2}e^{-\gamma^{\prime}\|y-x\|^{2}/(2 t^{2H})}\,dy\,dt \Biggr)^{1/\gamma^{\prime}}\nonumber\\[-3pt]
&\quad \times \Biggl(\int_{0}^{T}\int_{\R^d} g^{\rho\gamma}(t,y)\,dy\,dt \Biggr)^{1/\gamma}.
\end{align}
A direct calculation gives
\begin{eqnarray*}
\int_{\R^d}\bigl(2\pi t^{2H}\bigr)^{-d\gamma^{\prime} /2} e^{-\gamma^{\prime}\|y-x\|^{2}/(2t^{2H})}\,dy =(2\pi)^{d/2-d\gamma^{\prime} /2} \bigl(\gamma^{\prime}\bigr)^{-d/2}t^{(1-\gamma^{\prime})\,dH}.
\end{eqnarray*}
Plugging this into \eqref{previous-inequality}, we get
\begin{align*}
\widehat{E}\int_{0}^{T}g^{\rho}(t,X_{t})\,dt&\leq \Biggl(\int_{0}^{T}(2\pi)^{d/2-d\gamma^{\prime} /2}\bigl(\gamma ^{\prime} \bigr)^{-d/2}t^{(1-\gamma^{\prime})\,dH}\,dt\Biggr)^{1/\gamma^{\prime}}\\[-3pt]
&\quad \times \Biggl(\int_{0}^{T}\int_{\R^d} g^{\rho\gamma}(t,y)\,dy\,dt \Biggr)^{1/\gamma}\\[-3pt]
&\leq \bigl((2\pi)^{d/2-d\gamma^{\prime} /2} \bigl(\gamma^{\prime}\bigr)^{-d/2} \bigr)^{1/\gamma^{\prime}} \Biggl(\int_{0}^{T}t^{(1-\gamma^{\prime})\,dH}\,dt\Biggr)^{1/\gamma^{\prime}}\\
&\quad \times \Biggl(\int_{0}^{T}\int_{\R^d} g^{\rho\gamma}(t,y)\,dy\,dt \Biggr)^{1/\gamma}\\
&\leq C\bigl(\gamma^{\prime},T,d,H\bigr) \Biggl(\int_{0}^{T}\int_{\R^d} g^{\rho\gamma}(t,y)\,dy\,dt \Biggr)^{1/\gamma}.
\end{align*}
Finally, combining this with \eqref{ineq} and \eqref{ineq-2}, we get
estimate \eqref{krylovI} with $\beta=\rho\gamma$. The proof is now complete.
\end{proof}
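The Gaussian integral used in the proof above can be checked numerically in dimension $d=1$ (an illustrative sketch; the truncation range and grid size are arbitrary choices, and \texttt{gp} plays the role of $\gamma^{\prime}$):

```python
from math import exp, pi

def gaussian_mass(t, H, gp, x=0.3, L=40.0, n=200_000):
    """Midpoint rule (d = 1) for
    int_R (2*pi*t^{2H})^{-gp/2} * exp(-gp*(y - x)^2 / (2*t^{2H})) dy."""
    h = 2 * L / n
    c = (2 * pi * t ** (2 * H)) ** (-gp / 2)
    return h * sum(c * exp(-gp * ((-L + (k + 0.5) * h) - x) ** 2 / (2 * t ** (2 * H)))
                   for k in range(n))

t, H, gp = 0.7, 0.2, 1.5
# Closed form for d = 1: (2*pi)^{(1-gp)/2} * gp^{-1/2} * t^{(1-gp)H}
closed_form = (2 * pi) ** (0.5 - gp / 2) * gp ** (-0.5) * t ** ((1 - gp) * H)
assert abs(gaussian_mass(t, H, gp) - closed_form) / closed_form < 1e-5
```

The result is independent of the center $x$, as expected from translation invariance of the Lebesgue integral.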
Now we are able to state the main result of this section.
\begin{theorem}
If the pathwise uniqueness holds for Eq.~\eqref{SDE1}, then without the
continuity assumption on the drift coefficient, the conclusion of
Theorem~\ref{main} remains valid.
\end{theorem}
\begin{proof}
The proof is similar to that of Theorem~\ref{main}. The only difficulty
is to show that
\[
\lim_{k \rightarrow\infty}\int_0^t b\bigl(s,
\widetilde{X}^{k}_{s}\bigr)\,ds=\int_0^t
b(s,\widetilde{X}_{s})\,ds
\]
in probability. In other words,
for $\epsilon>0$, we will show that
\begin{eqnarray}
\label{desired-result} \limsup_{k \rightarrow\infty} P \Biggl[\Biggl\llvert \int
_0^t \bigl(b\bigl(s,\widetilde{X}^{k}_{s}
\bigr)- b(s,\widetilde{X}_{s}) \bigr) \,ds \Biggr\rrvert > \epsilon
\Biggr]=0.
\end{eqnarray}
Let us first define
\[
b^{\delta} (t,x)= \delta^{-d} \phi(x/\delta) \ast b(t,x) ,
\]
where $\ast$ denotes the convolution on $\R^d$, and $\phi$ is an
infinitely differentiable function with support in the unit ball such
that $\int\phi(x)\,dx = 1$.
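The behavior of the mollification $b^{\delta}$ can be illustrated in one dimension (a sketch, not part of the proof; a triangle kernel is used as a compactly supported stand-in for the smooth bump $\phi$, and the drift is a step function):

```python
from math import isclose

def phi(x):
    """Triangle kernel on [-1, 1] with unit mass; a compactly supported
    stand-in for the infinitely differentiable bump assumed in the text."""
    return max(0.0, 1.0 - abs(x))

def mollify(b, delta, x, n=20_000):
    """b^delta(x) = (delta^{-1} * phi(./delta) * b)(x) in 1-D, midpoint rule."""
    h = 2 * delta / n
    s = 0.0
    for k in range(n):
        y = -delta + (k + 0.5) * h
        s += phi(y / delta) / delta * b(x - y)
    return h * s

def step(x):
    """A bounded, discontinuous drift."""
    return 1.0 if x >= 0 else 0.0

# Away from the discontinuity, b^delta reproduces b once delta is small:
assert isclose(mollify(step, 0.05, 1.0), 1.0, rel_tol=1e-3)
assert abs(mollify(step, 0.05, -1.0)) < 1e-3
# At the jump itself, the mollification averages the one-sided limits:
assert abs(mollify(step, 0.05, 0.0) - 0.5) < 1e-3
```

This localization of the disagreement between $b^{\delta}$ and $b$ around the discontinuity is what makes the $L^{\beta}$ norm $\|b^{\delta}-b\|_{\beta,R}$ in the estimate of $J_1$ vanish as $\delta\rightarrow0$.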
Applying Chebyshev's inequality, we obtain
\begin{align*}
&P \Biggl[\Biggl\llvert \int_0^t\bigl(b\bigl(s,\widetilde{X}^{k}_{s}\bigr)- b(s,\widetilde{X}_{s}) \bigr) \,ds\Biggr\rrvert > \epsilon \Biggr]\nonumber\\
&\quad \leq \frac{1}{\epsilon^2} E \Biggl[\int_0^t\bigl| b\bigl(s,\widetilde{X}^{k}_{s}\bigr)- b(s,\widetilde{X}_{s})\bigr|^{2} \,ds \Biggr]\nonumber\\
&\quad \leq \frac{4}{\epsilon^2} \Biggl\{ E \Biggl[\int_0^t\bigl|b\bigl(s,\widetilde {X}^{k}_{s}\bigr)- b^{\delta}\bigl(s,\widetilde{X}^{k}_{s}\bigr)\bigr|^{2} \,ds\Biggr]\nonumber\\
&\qquad + E \Biggl[\int_0^t\bigl|b^{\delta} \bigl(s,\widetilde{X}^{k}_{s}\bigr)-b^{\delta} (s,\widetilde{X}_{s})\bigr|^{2} \,ds \Biggr]\nonumber\\
&\qquad + E \Biggl[\int_0^t\bigl|b^{\delta} (s,\widetilde{X}_{s})- b(s,\widetilde{X}_{s})\bigr|^{2}\,ds \Biggr] \Biggr\}\nonumber\\
&\quad = \frac{4}{\epsilon^2}(J_1+J_2+J_3).
\end{align*}
From the continuity of $b^{\delta}$ in $x$ and from the convergence of
$\widetilde{X}^{k}_{s}$ to $\widetilde{X}_{s}$ uniformly on every
finite time interval $\widetilde{P}$-a.s., it follows that $J_2$
converges to 0 as $k \rightarrow\infty$ for every $\delta>0$.
On the other hand, let $\theta: \mathbb R^d\rightarrow\mathbb R_+ $
be a smooth truncation function
such that $\theta(z)=1$ in the unit ball and $\theta(z)=0$ for
$|z|>1$.
By applying Lemma~\ref{krylov1} we obtain
\begin{align}
\label{in2} J_1 &= E\int_{0}^{t}\theta\bigl(\widetilde{X}^{k}_{s}/R\bigr) \bigl| b^{\delta}\bigl(s,\widetilde {X}^{k}_{s}\bigr) - b\bigl(s,\widetilde{X}^{k}_{s}\bigr) \bigr|^2 \,ds\nonumber\\
&\quad + E\int_{0}^{t} \bigl(1 - \theta\bigl(\widetilde{X}^{k}_{s}/R\bigr) \bigr) \bigl|b^{\delta}\bigl(s,\widetilde{X}^{k}_{s}\bigr)-b\bigl(s,\widetilde{X}^{k}_{s}\bigr) \bigr|^2 \,ds\nonumber\\
&\leq N \bigl\llVert b^{\delta}- b \bigr\rrVert _{\beta,R}+2CE\int_{0}^{t} \bigl(1-\theta\bigl(\widetilde{X}^{k}_{s}/R\bigr) \bigr)\,ds,
\end{align}
where $N$ does not depend on $\delta$ and $k$, and $\|\cdot\|_{\beta
,R}$ denotes the norm in
$L^{\beta}([0,T]\times B(0,R))$.
The last expression in the right-hand side of the last inequality
satisfies the following estimate:
\begin{equation}
\label{in2'} E\int_{0}^{t} \bigl(1 - \theta\bigl(\widetilde{X}^{k}_{s}/R\bigr) \bigr)\,ds \leq
t\,\sup_{k\geq1} P \Bigl[\sup_{s\leq t}\bigl|\widetilde{X}^{k}_{s}\bigr|>R\Bigr].
\end{equation}
But we know that $\sup_{k\geq1} E [\sup_{s\leq t}|\widetilde
{X}^{k}_{s}|^{p} ] < \infty$ for all $p>1$, and thus
\begin{equation}
\label{sup} \lim_{R \rightarrow\infty}\sup_{k\geq1} P
\Bigl[\sup_{s\leq t}\bigl|\widetilde{X}^{k}_{s}\bigr|>R \Bigr] = 0.
\end{equation}
Substituting estimate \eqref{in2'} into \eqref{in2}, letting $\delta
\rightarrow0$, and using \eqref{sup}, we deduce that the convergence
of the term $J_1$ follows.
Finally, since estimate \eqref{in2'} also holds for $\widetilde X$, it
suffices to use the same arguments as before to obtain the convergence
of the term $J_3$, which completes the proof.
\end{proof}
\section*{Acknowledgements} We thank the reviewer for his thorough
review and highly appreciate the comments and
suggestions, which significantly contributed to improving the quality
of the paper.
\section{Introduction}
\label{Sec:1}
Over the last two decades, the TWDP model has been extensively used to characterize small-scale fluctuations of a signal envelope in cases where the mmWave band and directional antennas are employed~\cite{Zoc19, Zoc19-1, Mav15}, as well as in wireless sensor networks within cavity environments~\cite{Fro07}. In these conditions, the waves arriving at the receiver can be observed as a sum of two specular line-of-sight (LOS) components with constant magnitudes $V_1$ and $V_2$ and uniformly distributed phases, plus many diffuse non-LOS components treated as a complex zero-mean Gaussian random process with average power $2\sigma^2$. As such, the model is conventionally characterized by the parameters $K$, $\Delta$, and $\Omega$, defined as~\cite{Dur02}:
\begin{equation}
\label{eq1}
K = \frac{V_1^2+V_2^2}{2 \sigma^2}, ~~ \Delta = \frac{2V_1 V_2}{V_1^2+V_2^2}, ~~ \Omega = V_1^2 + V_2^2 + 2\sigma^2
\end{equation}
where parameter $K$ ($K \geq 0$) characterizes the ratio of the average power of the specular components to the power of the remaining diffuse components (like the Rician parameter $K$), parameter $\Delta$ \mbox{$(0\leq \Delta \leq 1)$} characterizes the relation between magnitudes of specular components, and $\Omega$ represents the average power of the received signal.
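To make the conventional parameterization concrete, the following minimal Python sketch (illustrative values only, not tied to any of the referenced measurement setups) draws envelope samples directly from the model's definition — two constant-magnitude specular components with uniform phases plus a complex zero-mean Gaussian diffuse term — and checks that the sample mean of $r^2$ approaches $\Omega$:

```python
import numpy as np

rng = np.random.default_rng(0)

def twdp_samples(V1, V2, sigma, n):
    # two specular components with constant magnitudes and uniformly
    # distributed phases, plus a complex zero-mean Gaussian diffuse
    # term of average power 2*sigma^2
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n)
    diffuse = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diffuse)

V1, V2, sigma = 1.0, 0.5, 0.25            # illustrative values
K = (V1**2 + V2**2) / (2.0 * sigma**2)    # -> 10.0
Delta = 2.0 * V1 * V2 / (V1**2 + V2**2)   # -> 0.8
Omega = V1**2 + V2**2 + 2.0 * sigma**2    # -> 1.375

r = twdp_samples(V1, V2, sigma, 200_000)
print(K, Delta, Omega, np.mean(r**2))     # sample E[r^2] is close to Omega
```

Since the cross terms between the uniform-phase specular components and the diffuse term vanish in expectation, $\mathbb{E}[r^2]$ equals $\Omega$ exactly, which the simulation reproduces to within sampling noise.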
However, it is elaborated in~\cite{rad} that the definition of parameter $\Delta$ is not in accordance with the model's underlying physical mechanisms.
Namely, according to the model's definition, the specular components have constant magnitudes and propagate in a linear medium, so the function which characterizes the
ratio between $V_1$ and $V_2$ has to be linear~\cite{rad}. On the other hand, parameter $\Delta$ introduces a nonlinear relation between the magnitudes of the specular components (since \mbox{$V_2 = V_1(1-\sqrt{1-\Delta^2})/\Delta$}), hindering accurate observation of the impact of their ratio on a system's performance metrics~\cite{rad}.
Therefore, a novel parameterization proposed in~\cite{rad} introduces parameter $\Gamma$ instead of $\Delta$, defined as:
\begin{equation}
\label{eq2}
\Gamma = \frac{V_2}{V_1}
\end{equation}
where $0 \leq \Gamma \leq 1$ for $0 \leq V_2 \leq V_1$ obviously ensures linear dependence between $V_1$ and $V_2$. According to~\cite{rad}, the definition of parameter $K$ in the novel TWDP parameterization remains unchanged. However, it can be observed that parameter $\Delta$ completely changes the character of the original defining expression of parameter $K$ in (\ref{eq1}), since $K$ expressed in terms of $\Delta$ becomes $K = (V_1^2+V_2^2)/(2\sigma^2)=({V_1^2}/{2\sigma^2})({2}/{\Delta})({V_2}/{V_1})= 2 K_{Rice}({1-\sqrt{1-\Delta^2}})/{\Delta^2}$
\begin{comment}
\begin{equation}
\nonumber
\begin{split}
K = \frac{V_1^2+V_2^2}{2\sigma^2}=\frac{V_1^2}{2\sigma^2}\frac{2}{\Delta}\frac{V_2}{V_1}= 2 K_{Rice}\frac{1-\sqrt{1-\Delta^2}}{\Delta^2}
\end{split}
\end{equation}
\end{comment}
\noindent where $K_{Rice} = V_1^2/(2\sigma^2)$. On the other hand, when $K$ is expressed in terms of $\Gamma$, as $K = {V_1^2}/{2 \sigma^2}\left[1+({V_2}/{V_1})^2\right] = K_{Rice}(1 + \Gamma^2)$, \begin{comment}
\begin{equation}
\nonumber
K = \frac{V_1^2}{2 \sigma^2}\left[1+\left(\frac{V_2}{V_1}\right)^2\right] = K_{Rice}(1 + \Gamma^2)
\end{equation}
\end{comment}
the character of the original defining expression of $K$ in (\ref{eq1}), given in terms of $V_2/V_1$, remains unchanged by the parameter $\Gamma$.
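The two expressions for $K$ above can be checked numerically. The short Python sketch below (with illustrative values of $V_1$ and $\sigma$) confirms that, under the mapping $\Delta = 2\Gamma/(1+\Gamma^2)$, both forms reproduce the defining ratio $K=(V_1^2+V_2^2)/(2\sigma^2)$:

```python
import math

def delta_of_gamma(g):
    # nonlinear map from Gamma = V2/V1 to Delta = 2*V1*V2/(V1^2+V2^2)
    return 2.0 * g / (1.0 + g * g)

def K_via_delta(K_rice, d):
    # K expressed through Delta: K = 2*K_Rice*(1 - sqrt(1-Delta^2))/Delta^2
    return 2.0 * K_rice * (1.0 - math.sqrt(1.0 - d * d)) / (d * d)

def K_via_gamma(K_rice, g):
    # K expressed through Gamma: K = K_Rice*(1 + Gamma^2)
    return K_rice * (1.0 + g * g)

V1, sigma = 1.0, 0.3                      # illustrative values
K_rice = V1**2 / (2.0 * sigma**2)
for g in (0.1, 0.4, 0.7, 0.95):
    K_def = (V1**2 + (g * V1)**2) / (2.0 * sigma**2)
    d = delta_of_gamma(g)
    assert abs(K_via_delta(K_rice, d) - K_def) < 1e-9
    assert abs(K_via_gamma(K_rice, g) - K_def) < 1e-9
print("both parameterizations recover the defining K")
```

The $\Gamma$-based form is visibly a plain scaling of $K_{Rice}$, whereas the $\Delta$-based form hides the same quantity behind the nonlinear factor discussed above.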
Besides the aforesaid, the nonphysical definition of parameter $\Delta$ also causes anomalies related to the estimation of $K$ and $\Delta$. These anomalies were first observed in~\cite{Fer16}, by noticing that ``\textit{when $\Delta$
is an unknown parameter, the estimation error bound is lower for higher $\Delta$ values ($\Delta \to 1$). This is in contrast to the opposite situation on which $\Delta$ is a known parameter, and parameter $K$ is estimated}''.
However, the observed anomaly has not been further investigated. Another anomaly can be observed in~\cite[Fig. 4]{Fer16-1}, where the estimated values of parameter $\Delta$ (i.e. $\hat{\Delta}$), obtained from $500$ different realizations of the TWDP process with $N = 10^4$ i.i.d. samples, are illustrated along with the sample means of these estimated values for each considered tuple $(K, \Delta)$. The figure clearly shows that $\hat{\Delta}$
takes values greater than one for $\Delta \approx 1$.
However, according to the definition given by (\ref{eq1}), parameter $\Delta$ can take values only between zero and one, indicating that estimates greater than one are nonphysical and useless for gaining insight into the relation between $V_1$ and $V_2$.
The consequence of the above can be observed in the sample mean of the estimated values in the vicinity of one, for which apparently accurate results are obtained by averaging potentially accurate values of $\hat{\Delta}$ ($\hat{\Delta} \leq 1$) and incorrect ones ($\hat{\Delta} > 1$).
This caused a decrease in the estimation error of $\Delta$ as it approaches one (as shown in~\cite[Fig. 2]{Fer16-1}) and the occurrence of the anomaly recognized in~\cite{Fer16} as inadequate behavior of $\hat{\Delta}$.
Therefore, although~\cite{Fer16, Fer16-1} provide mathematically correct
expressions for the estimators of $K$ and $\Delta$, the results obtained by their application are inaccurate due to the nonphysical definition of $\Delta$.
So, after the anomalies caused by the conventional parameterization are identified, an overview of TWDP parameters' estimation results is presented, indicating the absence of those related to the estimation of the tuple $(K, \Gamma)$. Thus, closed-form moment-based estimators of parameters $K$ and $\Gamma$ are derived and
analyzed through their asymptotic variances (AsVs) and Cramer-Rao bounds (CRBs). Estimators of the conventional and improved parameters are then compared, but only in order to gain qualitative insight into the differences in their behaviour. Otherwise, due to the described problems in the definition of $\Delta$ and its impact on the definition of $K$, it makes no sense to perform a quantitative comparison between the results obtained for the tuples $(K, \Gamma)$ and $(K, \Delta)$.
\section{TWDP parameters' estimation - an overview}
\label{Sec:2}
The estimation of TWDP parameters is of practical importance in a variety of wireless scenarios. It includes not only delay-insensitive channel characterization and link budget calculations, but also online adaptive coding/modulation, for which the estimation of parameters must be both accurate and prompt. Accordingly, different approaches to the estimation of TWDP parameters have been proposed to strike a balance between computational complexity and estimation accuracy. These approaches are used to estimate the tuple $(K, \Delta)$, while parameter $\Omega$ has not usually been part of the estimation procedure (since it can be estimated directly from the data set as the second moment)~\cite{Zoc19-1}.
Among the investigated approaches, the distribution fitting approach is used to estimate the tuple $(K, \Delta)$ from measurements performed in aircraft and buses at 2.4 GHz~\cite{Fro07}, while the maximum likelihood (ML) procedure is used to estimate the tuple $(K, \Delta)$ at 60 GHz in an indoor environment~\cite{Zoc19} and in a vehicular-to-vehicular propagation scenario~\cite{Zoc19-1}. However, it is shown that both approaches are computationally very complex and inappropriate for online applications. Accordingly, the moment-based approach is considered in~\cite{Fer16, Fer16-1, Mav15} as a compromise between complexity and estimation accuracy. Thereby, in~\cite{Fer16, Fer16-1}, analytical expressions for the $K$ and $\Delta$ estimators are derived and examined in terms of AsV and CRB. In~\cite{Fer16}, parameters $K$ and $\Delta$ are estimated separately, under the assumption that one of them is previously known. However, although providing valuable insight into the behaviour of the $(K,\Delta)$ estimators, the expressions from~\cite{Fer16} cannot be used for empirical estimation, since in reality both parameters are unknown. The issue is overcome in~\cite{Fer16-1} by deriving computationally simple joint estimators of the parameters $(K, \Delta)$.
\begin{figure*}[h!]
\centering
\begin{minipage}[t]{1\textwidth}
\centering
\includegraphics[trim={0.6cm 0.2cm 1.2cm 1.8cm},clip,width=0.7\linewidth]{nova_slika___.pdf}
\caption{$\hat{K}_{mean}$ (with absolute error bars) vs. $K$ for different values of $\Gamma$. The solid line shows a linear regression fit to the data. Unit slope dashed line is illustrated as a benchmark.}
\label{Figure_nova}
\end{minipage}%
\vspace{0.7cm}
\begin{minipage}[t]{1\textwidth}
\centering
\includegraphics[trim={0.6cm 0.2cm 1.2cm 1.8cm},clip,width=0.7\linewidth]{Slika4___.pdf}
\caption{$\hat{\Gamma}_{mean}$ (with absolute error bars) vs. $\Gamma$ for different values of $K$. The solid line shows a linear regression fit to the data. Unit slope dashed line is illustrated as a benchmark.}
\label{Figure_dd}
\end{minipage}%
\end{figure*}
However, due to the nonphysical definition of parameter $\Delta$, the estimators derived in~\cite{Fer16-1} provide irrelevant results for some combinations of parameters $K$ and $\Delta$. Therefore, it is necessary to derive estimators for the physically justified TWDP parameters $K$ and $\Gamma$ and to investigate their behaviour in terms of asymptotic efficiency and estimation accuracy.
\begin{comment}
In order to acquire more general insight into the anomalies caused by physically invalid $\Delta$-based parameterization and its impact on the behaviour of $\Delta$ estimates for different values of parameter $K$, Monte-Carlo simulation is performed.
Thus, for any fixed value of $\Delta$ and $K$, $500$ sequences of $10^4$ i.i.d. samples are generated and used to estimate $\hat{\Delta}_{j}$ $(j \in [1, 500])$ based on~\cite[eq. (13) - (16)]{Fer16-1} for each sequence, and to calculate ${\hat{\Delta}}_{mean}$ as a sample mean of estimated values for each tuple $(K, \Delta)$, i.e. ${\hat{\Delta}_{mean}}=(1/500) \sum_{j = 1}^{500} \hat{\Delta}_j$. Values of $\hat{\Delta}_{mean}$, obtained for $K \in [0, 10]$ taken with the step $0.25$ and for $\Delta \in [0,1]$ taken with the step $0.03$, are illustrated in Fig.~\ref{Figurex}, while the estimates $\hat{\Delta}_j$ for $K = 5$ and $K = 20$ are illustrated in Fig.~\ref{Figure_c}, for $\Delta$ inbetween $0.9$ and $1$.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Fig1.pdf}
\caption{$\hat{\Delta}_{mean}$ vs. $\Delta$ and $K$, calculated from the samples generated using Monte-Carlo simulation}
\label{Figurex}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Slika4_1.jpg}
\caption{$\hat{\Delta}_{j}$ vs. $\Delta$ for a) $K = 5$, b) $K = 20$, with unit slope line illustrated as a benchmark}
\label{Figure_c}
\end{figure}
Results given in Fig.~\ref{Figurex} indicate that values of $\hat{\Delta}_j$ greater than one are going to be obtained for small values of $K$ over the entire range of $\Delta$. Besides, from Fig.~\ref{Figure_c} can be observed that even for large values of parameter $K$, several sequence estimates $\hat{\Delta}_j$ will always exceed one for $\Delta$ values near to one, thus providing nonphysical results and acknowledging the necessity for introducing physically grounded parameterization within the TWDP fading model.
\end{comment}
\section{Moment-based estimation of the improved TWDP parameters}
\label{Sec:3}
To create moment-based estimators for the improved set of TWDP parameters, the general expression for the even moments of the signal envelope $r$ is derived from the expression for the moments of the squared signal envelope $\gamma = r^2$~\cite[eq. (6)]{Fer16-1}, by transforming parameter $\Delta$ to $\Gamma$ ($\Delta=2\Gamma/(1+\Gamma^2)$), as:
\begin{equation}
\label{eq_3}
\begin{split}
\mu_n & = \mathbb{E}[r^n] = \mathbb{E}[{\gamma}^{\frac{n}{2}}] = \frac{\left({\frac{n}{2}}\right)!{\Omega}^{\frac{n}{2}}}{(1+K)^{\frac{n}{2}} 2\pi} \sum_{m = 0}^{\frac{n}{2}} \binom{{\frac{n}{2}}}{m} \frac{K^m}{m!} \\
& \times \int_{0}^{2\pi} \left(1 + \left(\frac{2\Gamma}{1 + \Gamma^2} \right) \cos(\theta)\right)^m \diff{\theta}, ~~ \text{for}~~\frac{n}{2} \in \mathbb{N}
\end{split}
\end{equation}
Obviously, the signal envelope's $n$-th moment $\mu_n = \mathbb{E}[r^n]$ depends on three unknown parameters: $K$, $\Gamma$, and $\Omega$. Consequently, the TWDP parameters' estimators can be constructed from at least three different moments.
Thereby, it is shown that the estimation accuracy is largest when the lower-order moments are used~\cite{Fer16-1}. However, since only the even moments are obtained as closed-form expressions, the moment-based estimators of the improved TWDP parameters are generated using the second-, fourth-, and sixth-order moments.
To further reduce the estimation complexity, the impact of parameter $\Omega$ on $K$ and $\Gamma$ is canceled out by properly defining the ratios between the fourth- and second-order, as well as the sixth- and second-order moments of the envelope~\cite{Fer16-1},
as:
\begin{equation}
\label{eq_4}
\frac{\mu_4}{\mu_2^2} = \frac{\left(2 + \left(\frac{2\Gamma}{1 + \Gamma^2}\right)^2\right)K^2 + 8K + 4}{2(1+K)^2}
\end{equation}
\begin{equation}
\begin{split}
\frac{\mu_6}{\mu_2^3} = \frac{\left(2 + 3\left(\frac{2\Gamma}{1 + \Gamma^2}\right)^2\right)K^3 + \left(18 + 9\left(\frac{2\Gamma}{1 + \Gamma^2}\right)^2\right)K^2 + 36K + 12}{2(1+K)^3}
\end{split}
\nonumber
\end{equation}
\vspace{-0.3cm}
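As a sanity check on (\ref{eq_3}) and the moment ratios built from it, the Python sketch below (illustrative parameter values) evaluates $\mu_{2h}/\mu_2^h$ directly from (\ref{eq_3}) — with the $\theta$-average of $(1+\Delta\cos\theta)^m$ computed on a fine uniform grid — and compares it against Monte-Carlo sample moments of a simulated TWDP envelope:

```python
import numpy as np
from math import comb, factorial

rng = np.random.default_rng(1)

def twdp_samples(V1, V2, sigma, n):
    # two uniform-phase specular components plus a complex Gaussian
    # diffuse term of average power 2*sigma^2
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n)
    diff = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diff)

def moment_ratio(K, D, h):
    # mu_{2h}/mu_2^h from the even-moment formula, with the theta-average
    # of (1 + D*cos(theta))^m taken on a uniform grid over [0, 2*pi)
    th = np.linspace(0.0, 2.0 * np.pi, 100_000, endpoint=False)
    s = sum(comb(h, m) * K**m / factorial(m)
            * np.mean((1.0 + D * np.cos(th))**m) for m in range(h + 1))
    return factorial(h) / (1.0 + K)**h * s

V1, Gamma, sigma = 1.0, 0.5, 0.3          # illustrative values
V2 = Gamma * V1
K = (V1**2 + V2**2) / (2.0 * sigma**2)
D = 2.0 * Gamma / (1.0 + Gamma**2)        # Delta expressed through Gamma

r = twdp_samples(V1, V2, sigma, 500_000)
mc4 = np.mean(r**4) / np.mean(r**2)**2
mc6 = np.mean(r**6) / np.mean(r**2)**3
print(mc4, moment_ratio(K, D, 2), mc6, moment_ratio(K, D, 3))
```

The two fourth-order values and the two sixth-order values agree to within sampling noise, confirming the moment expressions used in the estimator construction.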
\noindent Using the sample moments $\hat{\mu}_n = \frac{1}{N}\sum_{i = 1}^N{r_i^n}$ instead of the ensemble averages $\mu_n$, the system (\ref{eq_4}) is solved for $K$ and $\Gamma$ using the results from~\cite{Fer16-1} and the established relation between $\Delta$ and $\Gamma$,
providing the expressions for the moment-based estimators $\hat{K}$ and $\hat{\Gamma}$ as:
\begin{equation}
\label{eq_6}
\hat{K} = \left(p+\sqrt{p^2+q^3}\right)^\frac{1}{3} +{\left(p-\sqrt{p^2+q^3}\right)}^\frac{1}{3} - \frac{a_1}{3}
\end{equation}
\begin{equation}
\label{eq_7}
\hat{\Gamma} = \frac{
\sqrt{2} \hat{K} \left(
1 - \sqrt{\left( \frac{2 \left( 4\hat{K} + \hat{K}^2 - \frac{\hat{\mu}_4}{\hat{\mu}_2^2}(\hat{K} + 1)^2 + 2\right)}{\hat{K}^2} \right)+1}
\right)
}{
2\sqrt{\frac{\hat{\mu}_4}{\hat{\mu}_2^2}(\hat{K}+1)^2-\hat{K}^2 - 4\hat{K} - 2}
}
\end{equation}
where $p$, $q$, $a_1$, $a_2$, and $a_3$ are defined in~\cite{Fer16-1} as:
\[
p = \frac{1}{54}(9a_1 a_2-27 a_3-2 a_1^3), ~~~ q = \frac{1}{9}(3a_2-a_1^2)
\]
\[
a_1 = \frac{6\hat{\mu}_6 - 30\hat{\mu}_4\hat{\mu}_2 + 24\hat{\mu}_2^3}{2\hat{\mu}_6 - 6\hat{\mu}_4\hat{\mu}_2 + 4\hat{\mu}_2^3} ~~~~~~~~~ a_2 = \frac{6\hat{\mu}_6 - 42\hat{\mu}_4\hat{\mu}_2 + 48\hat{\mu}_2^3}{2\hat{\mu}_6 - 6\hat{\mu}_4\hat{\mu}_2 + 4\hat{\mu}_2^3}
\]
\[
a_3 = \frac{2\hat{\mu}_6 - 18\hat{\mu}_4\hat{\mu}_2 + 24\hat{\mu}_2^3}{2\hat{\mu}_6 - 6\hat{\mu}_4\hat{\mu}_2 + 4\hat{\mu}_2^3}
\]
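The estimators (\ref{eq_6}) and (\ref{eq_7}) are straightforward to implement. The sketch below is one possible Python implementation (simulation values are illustrative): since (\ref{eq_6}) reads as the principal Cardano branch of the cubic $K^3+a_1K^2+a_2K+a_3=0$, which corresponds to its largest real root, the cubic is solved here with \texttt{numpy.roots} instead of the explicit radicals:

```python
import numpy as np

rng = np.random.default_rng(7)

def twdp_samples(V1, V2, sigma, n):
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n)
    diff = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diff)

def estimate_K_Gamma(r):
    # moment-based estimators of (K, Gamma) from envelope samples r
    m2, m4, m6 = np.mean(r**2), np.mean(r**4), np.mean(r**6)
    den = 2*m6 - 6*m4*m2 + 4*m2**3
    a1 = (6*m6 - 30*m4*m2 + 24*m2**3) / den
    a2 = (6*m6 - 42*m4*m2 + 48*m2**3) / den
    a3 = (2*m6 - 18*m4*m2 + 24*m2**3) / den
    # largest real root of K^3 + a1*K^2 + a2*K + a3 = 0
    roots = np.roots([1.0, a1, a2, a3])
    K = float(roots[np.abs(roots.imag) < 1e-6].real.max())
    # hat-Gamma, with A = (mu4/mu2^2)*(K+1)^2 - K^2 - 4K - 2
    A = m4 / m2**2 * (K + 1)**2 - K**2 - 4*K - 2
    G = np.sqrt(2) * K * (1 - np.sqrt(max(1 - 2*A/K**2, 0.0))) / (2*np.sqrt(A))
    return K, float(G)

# simulate a channel with K = 5, Gamma = 0.5 and estimate both back
V1, Gamma_true, K_true = 1.0, 0.5, 5.0
V2 = Gamma_true * V1
sigma = np.sqrt((V1**2 + V2**2) / (2.0 * K_true))
K_hat, G_hat = estimate_K_Gamma(twdp_samples(V1, V2, sigma, 200_000))
print(K_hat, G_hat)
```

With exact moments, $A = (\Delta K)^2/2$ and the $\hat\Gamma$ expression collapses to $(1-\sqrt{1-\Delta^2})/\Delta = V_2/V_1$, so the returned pair approaches $(5, 0.5)$ as the sample size grows.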
Although the system given by (\ref{eq_4}) has several solutions, it can be shown that (\ref{eq_6}) and (\ref{eq_7}) represent the only real and positive solution for $\hat{K}$ and $\hat{\Gamma}$.
The performance of the proposed estimators $\hat{K}$ and $\hat{\Gamma}$ is investigated by resorting to Monte-Carlo simulation, with the results illustrated in Fig.~\ref{Figure_nova} and Fig.~\ref{Figure_dd}.
Thereby, for every fixed value of $K$ and $\Gamma$, $500$ realizations of the TWDP process with $N = 10^4$ i.i.d. samples are generated and used to determine ${\hat{K}_{j}}$ (\ref{eq_6}) and ${\hat{\Gamma}_{j}}$ (\ref{eq_7}) for $j \in [1, 500]$. These values are then used to calculate the sample means ${\hat{K}}_{mean}=(1/500) \sum_{j = 1}^{500} \hat{K}_j$ and ${\hat{\Gamma}}_{mean}=(1/500) \sum_{j = 1}^{500} \hat{\Gamma}_j$.
Accordingly, Fig.~\ref{Figure_nova} shows the boundaries within which the estimates of parameter $K$ from each realization of the TWDP process are located with respect to the mean estimated value $\hat{K}_{mean}$, while Fig.~\ref{Figure_dd} illustrates the boundaries within which each estimate of parameter $\Gamma$ is located with respect to its mean estimated value.
\begin{comment} --------------------->
\begin{figure}[h]
\centering
\includegraphics[trim={1.3cm 0.2cm 1.2cm 0.4cm},clip,width=0.8\linewidth]{Fig3.pdf}
\caption{$\hat{\Gamma}_{mean}$ vs. $\Gamma$ and $K$, calculated from the samples generated using Monte-Carlo simulation}
\label{Figure_d}
\end{figure}
\end{comment}
From Fig.~\ref{Figure_nova} it can be observed that the estimator of parameter $K$ given by (\ref{eq_6}) provides accurate results, especially in the region of medium and large values of $K$ and $\Gamma$ (e.g.
$\Gamma \geq 0.3$ and $K \geq 3$ for $N = 10^4$ samples), for which $\hat{K}_{mean}$ is very close to $K$.
From Fig.~\ref{Figure_dd} it can be observed that the estimated values of $\Gamma$, $\hat{\Gamma}_j$, are always smaller than one. Consequently, as $\Gamma$ approaches one,
$\hat{\Gamma}_{mean}$ starts to deviate increasingly from $\Gamma$, causing an increase in its estimation error.
Accordingly, Fig.~\ref{Figure_dd} indicates that none of the anomalies ascertained for $\hat{\Delta}$ can be observed for $\hat{\Gamma}$.
It can also be observed that for small values of $K$ (in the vicinity of one), $\hat{\Gamma}$ provides accurate estimation only in a narrow range of $\Gamma$ values close to $0.6$. However, from the practical point of view, the results for relatively small values of $K$ are irrelevant, since in that region the TWDP and Rayleigh distributions are almost identical. On the other hand, an increase in $K$ widens the range of $\Gamma$ values in which the estimation is accurate (more precisely, for the considered simulation with $N = 10^4$ samples, $\hat{\Gamma}_{mean}$ is remarkably close to $\Gamma$ for $0.2 \leq \Gamma \leq 0.8$ and $K \geq 3$).
From Fig.~\ref{Figure_dd} it can also be observed that in the considered range, the dispersion of the estimated values is quite insignificant, indicating that the derived estimator provides accurate estimates of $\Gamma$ even for a relatively small number of samples (i.e. $10^4$).
\section{AsV and CRB}
\label{Sec:3A}
To further assess the performance of the proposed estimators, equations~\cite[eq. (17) - (19)]{Fer16-1} are used to derive the corresponding asymptotic variances of $\hat{K}$ and $\hat{\Gamma}$, $AsV_K$ and $AsV_{\Gamma}$. Also, the Cramer-Rao lower bounds, denoted as $CRB_K$ and $CRB_{\Gamma}$, are calculated numerically using~\cite[eq. (20)]{Fer16-1}, by employing the Fisher information matrix applied to the TWDP envelope PDF~\cite[eq. (7)]{rad}, obtained by modifying the infinite-series expression originally derived in~\cite{Kos78}. Thereby, following the approach presented in~\cite{Fer16-1, Fer16}, instead of the CRB and AsV themselves, the square-root CRB and AsV normalized to $N$ and to the true value of the estimated parameter are used in the estimators' performance assessment. Thus, the sqrt-normalized CRB and AsV of $\hat{K}$, $\sqrt{{CRB_K N}/{K^2}}$ and $\sqrt{{AsV_K N}/{K^2}}$, are plotted in Fig.~\ref{Figure1}, while the sqrt-normalized CRB and AsV of $\hat{\Gamma}$, $\sqrt{{CRB_{\Gamma} N}/{\Gamma^2}}$ and $\sqrt{{AsV_{\Gamma} N}/{\Gamma^2}}$, are plotted in Fig.~\ref{Figure2}; both are denoted as the estimation error.
\begin{figure*}[h!]
\centering
\begin{minipage}[t]{1\textwidth}
\centering
\includegraphics[trim={0.5cm 0.1cm 1.2cm 1.6cm},clip,width=0.6\linewidth]{Fig5__.pdf}
\caption{$\sqrt{CRB_K N/K^2}$ (solid line) and $\sqrt{AsV_K N/K^2}$ (dashed line) of $\hat{K}$ given by (\ref{eq_6}), for different values of $\Gamma$}
\label{Figure1}
\end{minipage}%
\vspace{0.7cm}
\begin{minipage}[t]{1\textwidth}
\centering
\includegraphics[trim={0.5cm 0.1cm 1.2cm 1.6cm},clip,width=0.6\linewidth]{Fig6_lin__.pdf}
\caption{$\sqrt{CRB_\Gamma N/{\Gamma}^2}$ (solid line) and $\sqrt{AsV_\Gamma N/{\Gamma}^2}$ (dashed line) of $\hat{\Gamma}$ given by (\ref{eq_7}), for different values of $K$}
\textbf{\label{Figure2}}
\end{minipage}%
\end{figure*}
\begin{figure*}[t]
\centering
\begin{minipage}[h]{1\textwidth}
\centering
\includegraphics[trim={0.5cm 0.1cm 1.2cm 1.6cm},clip,width=0.6\linewidth]{Fig7__.pdf}
\caption{$\sqrt{AsV_\Gamma N/(V_2/V_1)^2}$ of $\hat{\Gamma}$ (solid line) and $\sqrt{AsV_\Delta N/(V_2/V_1)^2}$ of $\hat{\Delta}$ (dashed line), for different values of $K$}
\label{Figure4}
\end{minipage}%
\end{figure*}
From Fig.~\ref{Figure1} it can be observed that the estimation error of $K$ increases as parameter $K$ decreases. Thereby, when the power of the specular components $V_1^2 + V_2^2$ is small with respect to the power of the diffuse components $2\sigma^2$ (i.e. when the TWDP channel becomes Rayleigh-like), the error in the estimation of $K$ is very large. However, as the value of parameter $K$ increases, i.e. as $V_1^2 + V_2^2$ overrides $2\sigma^2$, the estimation of parameter $K$ becomes very accurate. Fig.~\ref{Figure1} also shows that the error in the estimation of $K$ grows as $\Gamma$ is reduced, indicating that it becomes harder to accurately estimate $K$ as the specular component with magnitude $V_2$ becomes more insignificant with respect to the one with magnitude $V_1$.
From Fig.~\ref{Figure1} it can also be observed that the values of the sqrt-normalized $AsV_K$ are remarkably close to the sqrt-normalized $CRB_K$ for the entire considered range of parameters $K$ and $\Gamma$, indicating almost asymptotic efficiency of the proposed estimator of parameter $K$.
Fig.~\ref{Figure2} shows that the estimation error of parameter $\Gamma$ behaves similarly to the estimation error of $\hat{K}$ with respect to $K$ and $\Gamma$.
Hence, the estimation of $\Gamma$ deteriorates as $K$ is reduced, i.e. as the power of the diffuse components becomes more significant with respect to $V_1^2 + V_2^2$. The estimation error of $\hat{\Gamma}$ is large for small values of $\Gamma$, indicating that it is hard to estimate $\Gamma$ when $V_2$ is insignificant with respect to $V_1$.
For moderate values of $\Gamma$, $\hat{\Gamma}$ given by (\ref{eq_7}) starts to provide rather accurate results, especially for large values of $K$. However, as $\Gamma$ approaches one,
the estimation of $\Gamma$ becomes less accurate. In these conditions,
the magnitudes of the specular components, $V_1$ and $V_2$, become similar. Since their phase difference is uniformly distributed,
the probability of destructive superposition of the specular components becomes rather large, making their overall magnitude often insignificant. Thus, as $\Gamma$ approaches one, it gets harder to accurately determine the value of $V_2/V_1$, especially when the power of the diffuse components is large with respect to $V_1^2 + V_2^2$.
Regarding the proximity of the proposed $\Gamma$ estimator to its CRB, from Fig.~\ref{Figure2} it can be concluded that the values of the sqrt-normalized $AsV_{\Gamma}$ are remarkably close to the sqrt-normalized $CRB_{\Gamma}$ for $K \geq 2$ in the entire range of $\Gamma$, making the proposed estimator asymptotically efficient for the considered values of $K$.
Accordingly, besides providing estimates significantly close to the corresponding CRBs, the moment-based estimators (\ref{eq_6}) and (\ref{eq_7}) provide accurate estimates obtained from a relatively small number of samples, which can be clearly observed from Fig.~\ref{Figure1} and Fig.~\ref{Figure2}. Namely,
if we assume that sufficient accuracy of the estimation process is achieved when the relative estimation error (obtained by multiplying the estimation errors $\sqrt{AsV_K N/K^2}$ and $\sqrt{AsV_{\Gamma} N/{\Gamma}^2}$ by $1/\sqrt{N}$) is smaller than $20\%$, it can be reached by employing only $N = 10^4$ samples for $\Gamma \in [0.3, 0.9]$ and $K \geq 3$.
If necessary, the estimation accuracy in the determined region can be increased, or the region itself can be further expanded, by involving more samples in the estimation process (e.g. by employing $N = 10^6$ samples, the relative estimation error in the considered region of the tuple $(K, \Gamma)$ could be reduced to $2\%$, or the estimation error of $20\%$ could be achieved for a wider range of $K$ and $\Gamma$, i.e. $K\geq3$ and $\Gamma \in [0.16, 0.99]$). In this way, the procedures used to create Fig.~\ref{Figure1} and Fig.~\ref{Figure2} can be used to determine the number of samples needed to obtain a desired estimation accuracy within a desired range of parameters $K$ and $\Gamma$.
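The sample-size bookkeeping in the preceding paragraph can be expressed in two lines. Since the text's figures ($20\%$ at $N=10^4$, $2\%$ at $N=10^6$) imply a normalized error reading of about $20$ from the curves, the sketch below uses that value; any other reading off Fig.~\ref{Figure1} or Fig.~\ref{Figure2} can be substituted:

```python
from math import ceil

def samples_needed(norm_err, target_rel_err):
    # the plotted quantity sqrt(AsV*N)/param does not depend on N, so the
    # relative error is norm_err/sqrt(N); invert that for the sample size
    return ceil((norm_err / target_rel_err) ** 2)

# a normalized-error reading of 20 reproduces the figures quoted in the text
print(samples_needed(20.0, 0.20))  # -> 10000   (20% error from 10^4 samples)
print(samples_needed(20.0, 0.02))  # -> 1000000 (2% error from 10^6 samples)
```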
\vspace{-0.1cm}
\section{Comparison of conventional and improved moment-based TWDP parameters estimators}
\label{Sec:4}
In order to observe the qualitative differences between the $\Gamma$- and $\Delta$-based parameterizations and to gain more precise insight into the relation between the estimation errors for the considered parameterizations, $AsV_{\Delta}$ and $AsV_{\Gamma}$ are normalized to the same parameter $V_2/V_1$ and presented in Fig.~\ref{Figure4}. This enables us to compare the absolute values of the AsV for $\Delta$ and $\Gamma$ and to discover the differences in their estimation errors for the considered ratios between $V_1$ and $V_2$.
Fig.~\ref{Figure4} shows that for $0\leq V_2/V_1 \leq 0.5$, the error in the estimation of $\Gamma$ is two times smaller than the error obtained by estimating $\Delta$.
On the other hand, for $0.5 < V_2/V_1 < 0.8$ and $K \geq 3$, there is no significant difference in the accuracy of $\hat{\Delta}$ and $\hat{\Gamma}$.
Finally, for $V_2/V_1 \in [0.8, 1]$, the error in the estimation of $\Gamma$ starts to increase with the increase of $V_2/V_1$, which is in line with the model's physical mechanisms. On the contrary, the $\Delta$-based parameterization provides spuriously accurate results, obtained by also considering values of $\hat{\Delta}$ greater than one in the calculation of $AsV_{\Delta}$.
Besides the benefits of the $\Gamma$-based parameterization observed with respect to $\hat{\Gamma}$, it also enables a reduction of the estimation error of parameter $K$ for a much wider set of values of the parameter which reflects the relation between $V_1$ and $V_2$.
Namely, based on the expression for parameter $K$ given in terms of $\Delta$ and the results presented in~\cite[Fig. 2]{rad}, it can be concluded that $K/K_{Rice}$ is almost constant for the entire range of small and medium values of $\Delta$, implying that values of $V_2 \in [0,V_1/2]$ make almost no impact on the value of parameter $K$. That causes quite pronounced errors in the estimation of $K$ for the entire range of small and medium values of $\Delta$ (i.e. $0 \leq \Delta < 0.5$), which can be clearly observed from~\cite[Fig. 1]{Fer16-1}. On the contrary, when $K$ is expressed in terms of $\Gamma$, no such anomaly can be observed, as shown in Fig.~\ref{Figure1}. In these circumstances, the errors in the estimation of $K$ are large only for small values of $\Gamma$ (i.e. $\Gamma < 0.2$).
\section{Conclusion}
\label{Sec:5}
In this paper, the problem of TWDP parameters' estimation has been investigated in depth. The investigation revealed that the existing moment-based estimators of the conventional TWDP parameters $K$ and $\Delta$ are not able to provide accurate estimates for various combinations of their values, due to the nonphysical definition of parameter $\Delta$. Accordingly, in this paper, moment-based estimators for the improved, physically justified parameters $K$ and $\Gamma$ are derived. It is shown that the derived estimators provide estimates from $10^4$ samples with an estimation error smaller than $20\%$ when parameters $K$ and $\Gamma$ are in the range $K \geq 3$ and $0.3 \leq \Gamma \leq 0.9$. This indicates that the tuple $(K, \Gamma)$ can be efficiently estimated using the derived expressions within the range of these parameters expected in the mmWave band, even from a relatively small number of samples.
Since the estimators of the improved parameters enable us to gain precise insight into the ratios between the two specular components, and between the specular and diffuse components, of the TWDP model in a wide variety of propagation conditions, while simultaneously decreasing the estimation errors with respect to the conventional parameterization, it is recommended to adopt parameters $K$ and $\Gamma$ as the only relevant parameters for the description of TWDP fading and to revise the existing measurement-based results related to the estimation of TWDP parameters in specific propagation conditions.
\section*{Acknowledgment}
The authors would like to thank Prof. Ivo Kostić for many valuable discussions and advice.
\bibliographystyle{IEEEtran}
\section{Introduction}
\label{sec:I}
\IEEEPARstart{O}{ver} the last two decades, TWDP model has been used extensively to characterize signal envelope small-scale fluctuations in cases where mmWave band and directional antennas are employed~\cite{Rap15, Zoc19, Zoc19-1}, as well as in wireless sensor networks that are surface mounted within a different cavity structures~\cite{Fro07}. In these conditions, the waves arriving at the receiver can be split into two specular line-of-site (LOS) components with constant magnitudes $V_1$ and $V_2$ and uniformly distributed phases, plus many diffuse non-LOS (NLOS) components treated as a complex zero-mean Gaussian random process with average power $2\sigma^2$. As such, the model is conventionally characterized by the parameters $K$, $\Delta$ and $\Omega$ defined as~\cite{Dur02}:
\begin{equation}
\label{eq1}
K = \frac{V_1^2+V_2^2}{2 \sigma^2}, \Delta = \frac{2V_1 V_2}{V_1^2+V_2^2}, \Omega = V_1^2 + V_2^2 + 2\sigma^2
\end{equation}
where $K \geq 0$ characterizes the ratio of the average power of the specular components to the power of the remaining diffuse components (like the Rician $K$ parameter), $\Delta$ \mbox{$(0\leq \Delta \leq 1 \text{ for } 0 \leq V_2 < V_1)$} characterizes the relation between magnitudes of specular components and $\Omega$ represents the average received signal power.
However, it is noticed in~\cite{rad} that definition of parameter $\Delta$ is not in accordance with model’s underlying physical mechanisms.
Namely, according to the model's definition, specular components has constant magnitudes and are propagating in a linear medium. So, the only way to appropriately characterize the relation between magnitudes $V_1$ and $V_2$ is linear~\cite{Kos20}. However, parameter $\Delta$ introduces nonlinear relation between the magnitude of the specular components, hindering accurate observation of the impact of their ratio on system's performance metrics~\cite{rad}. Consequently, it is recommended to use the \hl{alternative} set of parameters $K$, $\Gamma$ and $\Omega$, where parameter $\Gamma$ defined as~\cite{rad}:
\begin{equation}
\label{eq1}
\Gamma = \frac{V_2}{V_1}
\end{equation}
introduces linear dependence between $V_1$ and $V_2$, while definitions of $K$ and $\Omega$ remain unchanged.
In these circumstances, it is necessary to revise the existing results related to TWDP parameter estimation and to investigate the impact of conventional and improved parameters on their estimation errors.
So, when it comes to conventional TWDP parameters, few different approaches have been treated to date for their estimation, providing either analytical expressions for parameters' estimators or estimated values of a tuple $(K, \Delta)$ for a specific propagation conditions.
Thereby, in~\cite{Fro07}, distribution fitting approach is used to estimate the tuple $(K, \Delta)$ from measurements performed in air-crafts and buses at 2.4 GHz, and to characterize small-scale propagation environments in in-vehicle wireless sensor networks. In~\cite{Zoc19, Zoc19-1}, maximum likelihood procedure (ML) is implemented based on~\cite[eq. (10)]{Zoc19}, realized by applying numerical differentiation and maximization implemented as an exhaustive search in the $(K, \Delta)$ grid. It is then used for estimation of the tuples $(K, \Delta)$ at 60 GHz in indoor environment~\cite{Zoc19} and in vehicular-to-vehicular propagation scenario~\cite{Zoc19-1}. In~\cite{Zoc19-1, Zoc19, Mav15}, moment-based approach is considered. Thereby, in~\cite{Fer16, Fer16-1}, analytical expressions for $K$ and $\Delta$ estimators are derived and examined in terms of their asymptotic values and CRBs. In~\cite{Fer16}, parameters $K$ and $\Delta$ are estimated separately, under the assumption that one of them is previously known. However, in reality, both parameters are unknown and due to their mutual dependency, can not be estimated separately. The issue is overcome in~\cite{Fer16-1} by $(K, \Delta)$ joint parameter estimation, providing computationally simple estimators of conventional TWDP parameters. The approach is used in~\cite{Mav15} to estimated $K$ and $\Delta$ in 60 GHz indoor near body channels, in the front and in the back region.
On the contrary, estimation of the physically justified parameters $K$ and $\Gamma$ has not yet been considered.
Accordingly, in this paper, moment-based estimators obtained using the second-, fourth- and sixth-order moments are derived for the improved parameters $K$ and $\Gamma$ as simple closed-form expressions. For the adopted estimators, asymptotic variances (AsV) are derived and graphically presented. Then, the limits of the estimation problem are explored and determined in terms of the Cramer-Rao bound (CRB), which provides a lower bound on the variance of any unbiased estimator~\cite{Fer16}. The AsVs are then compared to the corresponding CRBs derived under the i.i.d. assumption, for the considered values of parameters. In the last step, the results obtained for the improved set of TWDP parameters are compared to those obtained for the conventional ones, in terms of their relative estimation errors and their proximity to the CRB.
\section{Estimation of improved TWDP parameters}
The estimation of the parameters that characterize a fading model is of practical importance in a variety of wireless scenarios, including not only delay-insensitive channel characterization and link budget calculations but also real-time adaptive coding/modulation and geolocation applications~\cite{Tep03}, for which parameter estimation must be both accurate and prompt~\cite{Tep03}. Accordingly, various approaches to parameter estimation have been proposed to date in order to achieve different compromises between computational complexity and estimator accuracy. Among them, the most popular are MLE, distribution fitting and the moment-based approach.
However, in spite of their accuracy, it has been shown that the MLE and distribution fitting approaches are not well suited for online implementation due to their complexity, even for the Rician distribution~\cite{Tep03}, which presents a special case of TWDP. Accordingly, in order to avoid cumbersome inversions of the nonlinear functions describing TWDP parameters, it is preferable to use the moment-based estimation approach, realized by expressing the unknown distribution parameters in terms of the moments of the received signal envelope.
So, in order to create moment-based estimators for the proposed set of TWDP parameters, the general $n$-th moment of the received signal envelope, $\mu_n = E[r^n]$, is expressed in terms of the moments of the squared signal envelope $\gamma = r^2$, as~\cite[eq. (6)]{Fer16-1}:
\begin{equation}
\label{eq_3}
\begin{split}
\mu_n = E[r^n] = & E({\gamma}^{\frac{n}{2}}) = \frac{\left({\frac{n}{2}}\right)!{\Omega}^{\frac{n}{2}}}{(1+K)^{\frac{n}{2}} 2\pi} \sum_{m = 0}^{\frac{n}{2}} \binom{{\frac{n}{2}}}{m} \frac{K^m}{m!} \\
& \int_{0}^{2\pi} \left(1 + \left(\frac{2\Gamma}{1 + \Gamma^2} \right) \cos(\theta)\right)^m d\theta
\end{split}
\end{equation}
From (\ref{eq_3}) it can be seen that the received signal envelope's $n$-th moment $\mu_n = E[r^n]$ depends on three unknown parameters, $K$, $\Gamma$ and $\Omega$. Hence, moment-based estimation requires estimates of at least three different moments. Thereby, it has been shown that the estimation accuracy is maximal for the lowest-order moments~\cite{Tep03}. However, from (\ref{eq_3}) it is obvious that only even moments can be calculated from the existing equation. Accordingly, estimators obtained using properly defined ratios between the fourth- and second-order and the sixth- and second-order moments of the signal envelope, $\frac{\mu_4}{\mu_2^2}$ and $\frac{\mu_6}{\mu_2^3}$, present a compromise between complexity and accuracy, simultaneously canceling out the impact of parameter $\Omega$ on the estimation of the two other parameters.
Accordingly, the ratios $\frac{\mu_4}{\mu_2^2}$ and $\frac{\mu_6}{\mu_2^3}$ are expressed in terms of parameters $K$ and $\Gamma$, as:
\begin{equation}
\label{eq_4}
\frac{\mu_4}{\mu_2^2} = \frac{\left(2 + \left(\frac{2\Gamma}{1 + \Gamma^2}\right)^2\right)K^2 + 8K + 4}{2(1+K)^2}
\end{equation}
\begin{equation}
\label{eq_5}
\begin{split}
\frac{\mu_6}{\mu_2^3} = & \frac{\left(6 + 9\left(\frac{2\Gamma}{1 + \Gamma^2}\right)^2\right)K^3 + \left(42 + 9\left(\frac{2\Gamma}{1 + \Gamma^2}\right)^2\right)K^2 }{2(1+K)^3} + \\
& + \frac{36K + 12}{2(1+K)^3}
\end{split}
\end{equation}
Using the sample moments $\hat{\mu}_n = \frac{1}{N}\sum_{i = 1}^N{r_i^n}$ instead of the ensemble averages $\mu_n$~\cite{Fer16-1}, (\ref{eq_4}) and (\ref{eq_5}) are then solved for $K$ and $\Gamma$, resulting in moment-based estimators:
$\hat{K} = (((\hat{\mu}_2^6 (16 \hat{\mu}_2^6 - 24 \hat{\mu}_2^4 \hat{\mu}_4 + 24 \hat{\mu}_2^3 \hat{\mu}_6 - 15 \hat{\mu}_2^2 \hat{\mu}_4^2 - 18 \hat{\mu}_2 \hat{\mu}_4 \hat{\mu}_6 + 16 \hat{\mu}_4^3 + \hat{\mu}_6^2))/(2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^4)^{1/2} - (12 \hat{\mu}_2^3 - 9 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)/(2 (2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)) - (4 \hat{\mu}_2^3 - 5 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^3/(2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^3 + ((12 \hat{\mu}_2^3 - 15 \hat{\mu}_4 \hat{\mu}_2 + 3 \hat{\mu}_6) (24 \hat{\mu}_2^3 - 21 \hat{\mu}_4 \hat{\mu}_2 + 3 \hat{\mu}_6))/(6 (2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^2))^{1/3} - (4 \hat{\mu}_2^3 - 5 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)/(2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6) - (2 \hat{\mu}_2^2 (\hat{\mu}_2^2 \hat{\mu}_4 + \hat{\mu}_6 \hat{\mu}_2 - 2 \hat{\mu}_4^2))/((2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^2 (((\hat{\mu}_2^6 (16 \hat{\mu}_2^6 - 24 \hat{\mu}_2^4 \hat{\mu}_4 + 24 \hat{\mu}_2^3 \hat{\mu}_6 - 15 \hat{\mu}_2^2 \hat{\mu}_4^2 - 18 \hat{\mu}_2 \hat{\mu}_4 \hat{\mu}_6 + 16 \hat{\mu}_4^3 + \hat{\mu}_6^2))/(2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^4)^{1/2} - (12 \hat{\mu}_2^3 - 9 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)/(4 \hat{\mu}_2^3 - 6 \hat{\mu}_4 \hat{\mu}_2 + 2 \hat{\mu}_6) - (4 \hat{\mu}_2^3 - 5 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^3/(2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^3 + ((12 \hat{\mu}_2^3 - 15 \hat{\mu}_4 \hat{\mu}_2 + 3 \hat{\mu}_6) (24 \hat{\mu}_2^3 - 21 \hat{\mu}_4 \hat{\mu}_2 + 3 \hat{\mu}_6))/(6 (2 \hat{\mu}_2^3 - 3 \hat{\mu}_4 \hat{\mu}_2 + \hat{\mu}_6)^2))^{1/3})$
\begin{equation}
\hat{\Gamma} = \frac{\sqrt{2} \hat{K} \left(1 - \sqrt{\frac{2 \left( 4\hat{K} + \hat{K}^2 - \frac{\hat{\mu}_4 (\hat{K} + 1)^2}{\hat{\mu}_2^2} + 2\right)}{\hat{K}^2} + 1} \right)}{2\sqrt{\frac{\hat{\mu}_4 (\hat{K}+1)^2}{\hat{\mu}_2^2}-\hat{K}^2 - 4\hat{K} - 2}}
\end{equation}
which are the only solutions for $\hat{K}$ and $\hat{\Gamma}$ providing real and positive estimates of the corresponding parameters~\cite{Zoc19-1}.
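As a numerical sanity check of the moment relations above, the following sketch (assuming Python with NumPy; the sampling routine and the particular parameter values are our own illustration, not taken from the cited measurement campaigns) draws synthetic TWDP envelopes $r = |V_1 e^{j\phi_1} + V_2 e^{j\phi_2} + n|$ with uniform phases and a complex Gaussian diffuse component, and compares the empirical ratio $\hat{\mu}_4/\hat{\mu}_2^2$ with the closed-form value of (\ref{eq_4}):

```python
import numpy as np

def twdp_envelope(K, Gamma, Omega, n, rng):
    """Draw n TWDP envelope samples: two specular waves with uniform
    phases plus a complex Gaussian diffuse component."""
    V1 = np.sqrt(Omega * K / ((1 + K) * (1 + Gamma**2)))
    V2 = Gamma * V1                          # Gamma = V2/V1
    sigma = np.sqrt(Omega / (2 * (1 + K)))   # per-dimension diffuse std
    phi1 = rng.uniform(0, 2 * np.pi, n)
    phi2 = rng.uniform(0, 2 * np.pi, n)
    diffuse = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diffuse)

rng = np.random.default_rng(0)
K, Gamma, Omega = 3.0, 0.5, 1.0
r = twdp_envelope(K, Gamma, Omega, 200_000, rng)

mu2_hat = np.mean(r**2)                      # sample second moment
mu4_hat = np.mean(r**4)                      # sample fourth moment

Delta = 2 * Gamma / (1 + Gamma**2)
ratio_theory = ((2 + Delta**2) * K**2 + 8 * K + 4) / (2 * (1 + K)**2)
print(mu4_hat / mu2_hat**2, ratio_theory)
```

For these parameter values the closed-form ratio is $\approx 1.62$; the empirical ratio should agree within Monte Carlo noise, and $\hat{\mu}_2 \approx \Omega$.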
To assess the performance of the proposed estimators, the corresponding asymptotic variances are derived for $K$ and $\Gamma$ in closed form using~\cite[eqs. (17)--(19)]{Fer16-1}. Then, Cramer-Rao lower bounds are calculated numerically by~\cite[eq. (20)]{Fer16-1}, employing the Fisher Information Matrix (FIM) applied directly to the TWDP PDF for the chosen sets of parameters. Thereby, following the approach proposed in~\cite{Fer16-1, Fer16, Tep03}, the square roots of the CRB and AsV are considered, normalized by both $N$ and the values of parameters $K$ and $\Gamma$, i.e. $\sqrt{\frac{CRB_K N}{K^2}}$, $\sqrt{\frac{CRB_{\Gamma} N}{\Gamma^2}}$, $\sqrt{\frac{AsV_K N}{K^2}}$ and $\sqrt{\frac{AsV_{\Gamma} N}{\Gamma^2}}$. The sqrt-normalized $CRB_K$ and $AsV_K$ are plotted in Fig. 1, while the sqrt-normalized $CRB_{\Gamma}$ and $AsV_{\Gamma}$ are plotted in Fig. 2.
\begin{figure}[h]
\centering
\includegraphics[width=0.39\textwidth]{Figure1.jpg}
\caption{Sqrt-normalized $CRB_K$ and $AsV_K$ as a function of $K$, for different values of $\Gamma$}
\label{Figure1}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.39\textwidth]{Figure2.jpg}
\caption{Sqrt-normalized $CRB_{\Gamma}$ and $AsV_{\Gamma}$ as a function of $\Gamma$, for different values of $K$}
\label{Figure2}
\end{figure}
From Fig. 1 it can be observed that the estimation error increases as $\Gamma$ decreases, which means that the estimation error of $K$ grows as one LOS component vanishes. Also, when the diffuse components vanish, causing $K$ to increase, the estimate $\hat{K}$ becomes more and more accurate.
From Fig. 1 it can also be observed that the value of the sqrt-normalized $AsV_K$ is remarkably close to the sqrt-normalized $CRB_K$ over the entire considered range of parameters, indicating that the proposed estimator of the parameter $K$ is almost asymptotically efficient.
Fig. 2 shows the corresponding results for the estimation of parameter $\Gamma$. A similar conclusion can be drawn when considering the sqrt-normalized $AsV_{\Gamma}$ vs. $CRB_{\Gamma}$ for small values of $\Gamma$ (i.e. for $0 \leq \Gamma \leq 0.5$), while for larger values of $\Gamma$, the difference between $AsV_{\Gamma}$ and $CRB_{\Gamma}$ is somewhat larger, but still relatively small.
\section{Comparison of estimation accuracy of conventional and proposed TWDP parameters}
In order to observe the benefits of the $\Gamma$-based parameterization, the results presented in Fig. 1 and Fig. 2 are compared to those given in~\cite[Fig. 1 and Fig. 2]{Fer16-1}. Thus, for the same values of parameters $\Delta$ and $\Gamma$, the estimation error obtained for parameter $K$ using the conventional parameterization is between two and three times larger than the one obtained using the newly proposed parameterization (e.g. for $\Delta = 0.3$ and $K = 1$, the sqrt-normalized $CRB_K$ is 160, while for $\Gamma = 0.3$ and $K = 2$, the sqrt-normalized $CRB_K$ is 60; for $\Delta = 0.2$ and $K = 2$, the sqrt-normalized $CRB_K$ is 140, while for $\Gamma = 0.2$ and $K = 2$, the sqrt-normalized $CRB_K$ is 40). Accordingly, it is obvious that the newly proposed parameterization enables more accurate estimation of parameter $K$ than the conventional one, for a much wider set of values of the parameter which reflects the relation between $V_2$ and $V_1$.
Similar conclusions can be drawn when considering the estimation errors for parameters $\Delta$ and $\Gamma$, as shown in Fig. 3 and Fig. 4 (which are, unlike in~\cite{Fer16-1}, given in linear instead of logarithmic scale in order to represent the realistic relation between the estimation errors for the treated parameterizations).
\begin{figure}[h]
\centering
\includegraphics[width=0.39\textwidth]{Figure3.jpg}
\caption{Sqrt-normalized $AsV_{\Gamma}$ and $AsV_{\Delta}$ as a function of $\Gamma = \Delta$, for different values of $K$}
\label{Figure3}
\end{figure}
So, when considering the same numerical values of parameters $\Delta$ and $\Gamma$, the $\Gamma$-based parameterization provides a significantly smaller estimation error than the $\Delta$-based one for small values of the parameters. On the contrary, for values of the parameters close to 1, the $\Delta$-based parameterization outperforms the $\Gamma$-based one. However, in that region, the relative estimation error is much smaller than in the region of small $\Delta$ and $\Gamma$ values, making the overall estimation much more precise when using the new parameterization.
\begin{figure}[h]
\centering
\includegraphics[width=0.39\textwidth]{Figure4.jpg}
\caption{Sqrt-normalized $AsV_{\Gamma}$ and $AsV_{\Delta}$ as a function of $V_2/V_1$, for different values of $K$}
\label{Figure4}
\end{figure}
However, better insight into the effect of the examined parameterizations on the estimation errors can be obtained by comparing the square-root values of AsV normalized to the same parameter, i.e. $V_2/V_1$, as shown in Fig. 4 - Fig. 6. That way, it is possible to compare the absolute values of AsV for parameters $\Delta$ and $\Gamma$ and to discover the true differences in estimation errors for the considered ratio between the specular components.
\begin{figure}[h]
\centering
\includegraphics[width=0.39\textwidth]{Figure5.jpg}
\caption{Sqrt-normalized $AsV_{\Gamma}$ and $AsV_{\Delta}$ as a function of $V_2/V_1$, for different values of $K$ and $0 \leq V_2/V_1 \leq 0.6$}
\label{Figure5}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.39\textwidth]{Figure6.jpg}
\caption{Sqrt-normalized $AsV_{\Gamma}$ and $AsV_{\Delta}$ as a function of $V_2/V_1$, for different values of $K$ and $V_2/V_1$ close to 1}
\label{Figure6}
\end{figure}
Fig. 4 - Fig. 6 show that for $0\leq V_2/V_1 \leq 0.6$, the estimation error obtained using the $\Gamma$-based parameterization is significantly smaller than the error obtained using the $\Delta$-based parameterization. Only for very small values of $K$ (i.e. $K<2$) and large $V_2/V_1$ (i.e. $V_2/V_1>0.9$) does the $\Delta$-based parameterization outperform the $\Gamma$-based one regarding the estimation error. Accordingly, in general, the $\Gamma$-based estimator reduces both the estimation error and the difference between the estimated values of the parameter and its CRB, with respect to the $\Delta$-based estimator.
\subsection{Monte-Carlo simulation}
In order to obtain qualitative insight into the behavior of the considered estimators~\cite{Fer16-1}, we resort to Monte Carlo simulations~\cite{Abd01}.
So, for each fixed $K$ from 0 to 10 with step 1, 500 sequences of i.i.d. samples of length $N = 10^4$ are first generated for $\Delta = [0.2, 0.3, 0.5, 1]$ and $\Gamma = [0.2, 0.3, 0.5, 1]$ in order to estimate $\hat{K}$ and to determine the sample means of $\hat{K}$, $(1/500) \sum_{j = 1}^{500} \hat{K}_j$, considering the $\Delta$- and $\Gamma$-based parameterizations.
Then, 500 sequences of i.i.d. samples of length $N = 10^4$ are generated also for $K = [1, 2, 3, 10]$ in order to estimate $\hat{\Delta}$ and $\hat{\Gamma}$, and to determine their sample means $(1/500) \sum_{j = 1}^{500} \hat{\Delta}_j$ and $(1/500) \sum_{j = 1}^{500} \hat{\Gamma}_j$, respectively. Results are shown in Fig. 7 - Fig. 14.
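The Monte Carlo procedure above can be sketched as follows for the parameter $\Gamma$ (assuming Python with NumPy). For readability, the sketch inverts the fourth-to-second moment ratio directly while treating $K$ as known; this is an illustrative simplification equivalent to a rearrangement of the closed-form estimator, not the joint estimation evaluated in the figures, and the sampler and all numerical settings are our own.

```python
import numpy as np

def twdp_envelope(K, Gamma, Omega, n, rng):
    """Two specular waves with uniform phases plus complex Gaussian diffuse part."""
    V1 = np.sqrt(Omega * K / ((1 + K) * (1 + Gamma**2)))
    V2 = Gamma * V1
    sigma = np.sqrt(Omega / (2 * (1 + K)))
    phi1 = rng.uniform(0, 2 * np.pi, n)
    phi2 = rng.uniform(0, 2 * np.pi, n)
    diffuse = sigma * (rng.standard_normal(n) + 1j * rng.standard_normal(n))
    return np.abs(V1 * np.exp(1j * phi1) + V2 * np.exp(1j * phi2) + diffuse)

def estimate_gamma(r, K):
    """Moment-based Gamma estimate with K assumed known: invert the
    mu4/mu2^2 ratio for Delta = 2*Gamma/(1+Gamma^2), then map Delta to Gamma."""
    b = np.mean(r**4) / np.mean(r**2)**2
    delta2 = (2 * b * (1 + K)**2 - 2 * K**2 - 8 * K - 4) / K**2
    delta2 = min(max(delta2, 0.0), 1.0)   # guard against sampling noise
    if delta2 == 0.0:
        return 0.0
    return (1 - np.sqrt(1 - delta2)) / np.sqrt(delta2)   # Gamma in [0, 1]

rng = np.random.default_rng(1)
K_true, Gamma_true = 3.0, 0.5
gamma_hats = [estimate_gamma(twdp_envelope(K_true, Gamma_true, 1.0, 10_000, rng),
                             K_true)
              for _ in range(100)]
gamma_mean = float(np.mean(gamma_hats))
print(gamma_mean)
```

Note that this mapping keeps every estimate within the physical range $[0, 1]$ by construction, mirroring the boundedness of $\Gamma$ discussed below.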
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure7.jpg}
\caption{Samples of $\hat{K}$ vs. true values of $K$, for $\Delta$ and $\Gamma$ equals 0.2}
\label{Figure7}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure8.jpg}
\caption{Samples of $\hat{K}$ vs. true values of $K$, for $\Delta$ and $\Gamma$ equals 0.3}
\label{Figure8}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure9.jpg}
\caption{Samples of $\hat{K}$ vs. true values of $K$, for $\Delta$ and $\Gamma$ equals 0.5}
\label{Figure9}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure10.jpg}
\caption{Samples of $\hat{K}$ vs. true values of $K$, for $\Delta$ and $\Gamma$ equals 1}
\label{Figure10}
\end{figure}
From Fig. 7 - Fig. 10 it is clearly visible that the overall mean estimated values are closer to the true values of parameter $K$ and that the dispersion of the estimated values is generally reduced when using the $\Gamma$-based estimator instead of the $\Delta$-based one, for all values of parameter $K$.
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure11.jpg}
\caption{Samples of $\hat{\Delta}$ vs. $\Delta$ and $\hat{\Gamma}$ vs. $\Gamma$, for $K = 1$}
\label{Figure11}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure12.jpg}
\caption{Samples of $\hat{\Delta}$ vs. $\Delta$ and $\hat{\Gamma}$ vs. $\Gamma$, for $K = 2$}
\label{Figure12}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure13.jpg}
\caption{Samples of $\hat{\Delta}$ vs. $\Delta$ and $\hat{\Gamma}$ vs. $\Gamma$, for $K = 3$}
\label{Figure13}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figure14.jpg}
\caption{Samples of $\hat{\Delta}$ vs. $\Delta$ and $\hat{\Gamma}$ vs. $\Gamma$, for $K = 10$}
\label{Figure14}
\end{figure}
The differences between the estimated and true values, as well as the dispersion of the estimated values, for the $\Delta$- and $\Gamma$-based estimators are shown in Fig. 11 - Fig. 14. From these figures it can be seen that in the region of small and moderate values of parameters $\Delta$ and $\Gamma$, the difference between the mean estimated values and the true values is significantly reduced when using the $\Gamma$ estimator. However, for values of $\Delta$ and $\Gamma$ close to 1, the difference is smaller when using the $\Delta$ parameter. Accordingly, when considering the entire range of parameters $\Delta$ and $\Gamma$, the $\Gamma$ estimator reduces both the difference between the mean and true values and the dispersion of the estimated values, for all considered values of $K$.
Note also that the estimated values of $\Delta$ reach up to 1.2, exceeding the parameter's physical upper bound, while the estimated values of $\Gamma$ remain bounded by 1.
\section{Conclusion}
In this paper, estimation of the improved TWDP parameters, $K$ and $\Gamma$, is performed. Thereby, moment-based estimators involving the second-, fourth- and sixth-order moments are chosen as a compromise between estimation complexity and accuracy. For the chosen estimators it is shown that, when parameter $K$ is expressed in terms of $\Gamma$ instead of $\Delta$, the estimation error is significantly reduced for all values of the parameter which reflects the relation between $V_1$ and $V_2$. It is also shown that the estimation error of parameter $\Gamma$ is significantly reduced with respect to that of $\Delta$ in the region of their small and medium values, providing more accurate estimation over the entire range of these parameters' values. Accordingly, besides being in line with the physical mechanisms behind the TWDP fading model, the proposed estimators reduce the overall estimation errors with respect to those obtained for the conventional parameters, and are recommended for further use when considering parameter estimation in propagation environments described by the TWDP fading model.
\bibliographystyle{IEEEtran}
\providecommand{\isection}[2] {\section{#1}\inputtex{#2}}
\providecommand{\isubsection}[2] {\subsection{#1}\inputtex{#2}}
\providecommand{\etal} {\emph{et al\@.}\xspace}
\providecommand{\ie} {\emph{i.e\@.}\xspace}
\providecommand{\eg} {\emph{e.g\@.}\xspace}
\providecommand{\cf} {\emph{cf\@.}\xspace}
\providecommand{\defacto} {\emph{de facto}\xspace}
\providecommand{\adhoc} {\emph{ad hoc}\xspace}
\providecommand{\apriori} {\emph{a priori}\xspace}
\providecommand{\myurl}[1][] {\texttt{web.eecs.umich.edu/$\sim$fessler#1}\xspace}
\providecommand{\myweb} {\url{web.eecs.umich.edu/~fessler}\xspace}
\providecommand{\onweb}[1] {Available from \myurl.}
\let\comments=\comment
\let\endcomments=\endcomment
\long\def\comment#1{}
\providecommand{\uncomment}[1] {#1}
\providecommand{\bcent} {\begin{center}}
\providecommand{\ecent} {\end{center}}
\providecommand{\benum} {\begin{enumerate}}
\providecommand{\eenum} {\end{enumerate}}
\providecommand{\bitem} {\begin{itemize}}
\providecommand{\eitem} {\end{itemize}}
\providecommand{\bvers} {\begin{verse}}
\providecommand{\evers} {\end{verse}}
\providecommand{\btab} {\begin{tabbing}}
\providecommand{\etab} {\end{tabbing}}
\providecommand{\lfill} {\mbox{}\hfill}
\providecommand{\rfill} {\hfill\mbox{}}
\newcommand{\cent}[1] {\lfill{#1}\rfill}
\newcommand{\stacktext}[2][c] {\begin{tabular}{#1}#2\end{tabular}}
\newcommand{\fsbox}[2][c] {\fbox{\stacktext[#1]{#2}}}
\providecommand{\ul}[1] {\underline{#1}}
\providecommand{\ebox}[1] {\mbox{\fbox{$\displaystyle#1$}}}
\providecommand{\cbox}[1] {\[\ebox{#1}\]}
\newcounter{blist}
\providecommand{\blistmark} {\makebox[0pt]{$\bullet$}}
\providecommand{\blistitemsep} {0pt}
\providecommand{\blist}[1][] {%
\begin{list}{\blistmark}{%
\usecounter{blist}%
\setlength{\itemsep}{\blistitemsep}%
\setlength{\parsep}{0pt}%
\setlength{\parskip}{0pt}%
\setlength{\partopsep}{0pt}%
\setlength{\topsep}{0pt}%
\setlength{\leftmargin}{1.2em}%
\setlength{\labelsep}{0.5\leftmargin}%
\setlength{\labelwidth}{0em}%
#1
}}
\providecommand{\elist} {\end{list}}
\providecommand{\elistup} {\elist\vspace{-\parskip}}
\providecommand{\blistitemsep} {0pt}
\providecommand{\bjfenum}[1][] {%
\begin{list}{\bcolor{\arabic{blist}.} }{%
\usecounter{blist}%
\setlength{\itemsep}{\blistitemsep}%
\setlength{\parsep}{0pt}%
\setlength{\parskip}{0pt}%
\setlength{\partopsep}{0pt}%
\setlength{\topsep}{0pt}%
\setlength{\leftmargin}{0.0em}%
\setlength{\labelsep}{1.0\leftmargin}%
\setlength{\labelwidth}{0pt}%
#1
}}
\newcounter{blistAlph}
\providecommand{\blistAlph}[1][]
{\begin{list}{\makebox[0pt][l]{\Alph{blistAlph}.}}{%
\usecounter{blistAlph}%
\setlength{\itemsep}{0pt}\setlength{\parsep}{0pt}%
\setlength{\parskip}{0pt}\setlength{\partopsep}{0pt}%
\setlength{\topsep}{0pt}%
\setlength{\leftmargin}{1.2em}%
\setlength{\labelsep}{1.0\leftmargin}%
\setlength{\labelwidth}{0.0\leftmargin}#1}%
}
\newcounter{blistRoman}
\providecommand{\blistRoman}[1][]
{\begin{list}{\Roman{blistRoman}.}{%
\usecounter{blistRoman}%
\setlength{\itemsep}{0.5em}\setlength{\parsep}{0pt}%
\setlength{\parskip}{0pt}\setlength{\partopsep}{0pt}%
\setlength{\topsep}{0pt}%
\setlength{\leftmargin}{4em}%
\setlength{\labelsep}{0.4\leftmargin}
\setlength{\labelwidth}{0.6\leftmargin}#1}%
}
\providecommand{\myheadheight} {0in}
\providecommand{\myheadsep} {0in}
\providecommand{\Halfheight} {5.5in}
\providecommand{\Halfwidth} {4.25in}
\providecommand{\pagesize}[3][\parindent]{
\setlength{\parindent}{#1}
\setlength{\textwidth}{#2}
\setlength{\textheight}{#3}
\setlength{\headsep}{\myheadsep}
\setlength{\headheight}{\myheadheight}
\setlength{\oddsidemargin}{\Halfwidth}
\addtolength{\oddsidemargin}{-0.5\textwidth}
\addtolength{\oddsidemargin}{-1in}
\setlength{\evensidemargin}{\oddsidemargin}
\setlength{\topmargin}{\Halfheight}
\addtolength{\topmargin}{-0.5\textheight}
\addtolength{\topmargin}{-0.5\headheight}
\addtolength{\topmargin}{-0.5\headsep}
\addtolength{\topmargin}{-1in}
}
\providecommand{\pagesizeland}[3][\parindent]{
\setlength{\parindent}{#1}
\setlength{\textwidth}{#2}
\setlength{\textheight}{#3}
\setlength{\headsep}{0.0in}
\setlength{\headheight}{0.0in}
\setlength{\oddsidemargin}{5.5in}
\addtolength{\oddsidemargin}{-0.5\textwidth}
\addtolength{\oddsidemargin}{-1in}
\setlength{\evensidemargin}{\oddsidemargin}
\setlength{\topmargin}{4.25in}
\addtolength{\topmargin}{-0.5\textheight}
\addtolength{\topmargin}{-1in}
}
\providecommand{\pagehead}{%
\setlength{\headsep}{\topmargin}%
\addtolength{\headsep}{1in}%
\setlength{\headsep}{0.5\headsep}%
\setlength{\headheight}{\headsep}%
\setlength{\topmargin}{-1in}%
}
\section{Introduction}
The computational expense of first-order methods
depends only mildly on the problem dimension,
so they are attractive
for solving large-dimensional optimization problems~\cite{cevher:14:cof}.
In particular, Nesterov's fast gradient method (FGM)
\cite{nesterov:83:amf,nesterov:04,beck:09:afi}
is used widely
because it has a worst-case cost function rate
that is optimal up to constant
for large-dimensional smooth convex problems
\cite{nesterov:04}.
In addition, for smooth and strongly convex problems
where the strong convexity parameter is known,
a version of FGM has a linear convergence rate~\cite{nesterov:04}
that improves upon that of a standard gradient method.
However,
without knowledge of the function parameters,
conventional
FGM does not guarantee a linear convergence rate.
When the strong convexity parameter is unknown,
a simple adaptive restarting scheme~\cite{odonoghue:15:arf}
for FGM
heuristically improves its convergence rate
(see also~\cite{giselsson:14:mar,su:16:ade} for theory
and~\cite{cevher:14:cof,muckley:15:fpm,monteiro:16:aaa} for applications).
In addition,
adaptive restart is useful
even when the function is only locally strongly convex
near the minimizer
\cite{odonoghue:15:arf}.
First-order methods are known to be suitable
when only moderate solution accuracy is required,
and
adaptive restart can help
first-order methods achieve medium to high accuracy.
Recently we proposed the optimized gradient method (OGM)~\cite{kim:16:ofo}
(built upon \cite{drori:14:pof})
that has efficient per-iteration computation similar to FGM
yet that exactly achieves the optimal worst-case rate
for decreasing a large-dimensional smooth convex function
among all first-order methods
\cite{drori:17:tei}.
(See~\cite{kim:16:gto-arxiv,kim:17:otc,taylor:17:ewc}
for further analysis and extensions of OGM.)
This paper examines
\cblue{a general class of accelerated first-order methods
that includes a gradient method (GM), FGM, and OGM}
for strongly convex \emph{quadratic} functions,
and
develops an OGM variant, \cblue{named OGM-$q$},
that provides a linear convergence rate
that is faster than that of \cblue{the analogous version of} FGM.
The analysis reveals that,
like FGM~\cite{odonoghue:15:arf},
OGM may exhibit undesirable oscillating behavior
in some cases.
Building on the quadratic analysis
and the adaptive restart scheme of FGM in~\cite{odonoghue:15:arf},
we propose an adaptive restart scheme
that heuristically accelerates the convergence rate of OGM
when the function is strongly convex
or even when it is only locally strongly convex.
This restart scheme
circumvents the oscillating behavior.
Numerical results illustrate
that the proposed OGM with restart
performs better than
FGM with restart in~\cite{odonoghue:15:arf}.
Sec.~\ref{sec:prob,algo}
reviews first-order methods
for convex problems
such as GM, FGM, and OGM.
Sec.~\ref{sec:quadanal}
\cblue{analyzes a general class of accelerated first-order methods
that includes GM, FGM, and OGM}
for strongly convex quadratic problems,
\cblue{and proposes a new OGM variant
with a fast linear convergence rate}.
Sec.~\ref{sec:restart}
suggests an adaptive restart scheme for OGM
using the quadratic analysis in Sec.~\ref{sec:quadanal}.
Sec.~\ref{sec:propOGM}
illustrates the proposed adaptive version of OGM
that we use for numerical experiments
on various convex problems in Sec.~\ref{sec:result},
including nonsmooth composite convex functions,
and
Sec.~\ref{sec:conc} concludes.
\section{Problem and Methods}
\label{sec:prob,algo}
\subsection{Smooth and Strongly Convex Problem}
We first consider
the smooth and strongly convex minimization problem:
\begin{align}
\min_{\x\in\Reals^d} \;&\; f(\x)
\label{eq:prob} \tag{M}
\end{align}
that satisfies the following smooth and strongly convex conditions:
\begingroup
\allowdisplaybreaks
\begin{itemize}
\item
$f\;:\;\Reals^d\rightarrow\Reals$
has Lipschitz continuous gradient with Lipschitz constant $L>0$, \ie,
\begin{align}
||\nabla f(\x) - \nabla f(\y)|| \le L||\x-\y||, \quad \forall \x, \y\in\Reals^d
,\end{align}
\item $f$ is strongly convex with strong convexity parameter $\mu>0$, \ie,
\begin{align}
f(\x) \ge f(\y) + \Inprod{\nabla f(\y)}{\x - \y} + \frac{\mu}{2}||\x - \y||^2,
\quad \forall \x, \y\in\Reals^d
\label{eq:strcvx}
.\end{align}
\end{itemize}
\endgroup
We let \SL denote the class of functions $f$
that satisfy the above two conditions hereafter,
and let $\x_*$ denote the unique minimizer of $f$.
We let $q := \Frac{\mu}{L}$
denote the reciprocal of the condition number
of a function $f \in \SL$.
We also let \FL
denote the class of smooth convex functions $f$
that satisfy the above two conditions with $\mu=0$,
and let $\x_*$ denote a minimizer of $f$.
Some algorithms discussed in this paper
require knowledge of both $\mu$ and $L$,
but in many cases estimating $\mu$ is challenging compared to computing $L$.\footnote{
For some applications even estimating $L$ is expensive,
and one must employ a backtracking scheme~\cite{beck:09:afi}
or similar approaches.
We assume $L$ is known throughout this paper.
An estimate of $\mu$ could be found
by a backtracking scheme
as described in~\cite[Sec. 5.3]{nesterov:13:gmf}.
}
Therefore,
this paper focuses on the case where
the parameter $\mu$ is unavailable while $L$ is available.
Even without knowing $\mu$,
the adaptive restart approach in~\cite{odonoghue:15:arf}
and the proposed adaptive restart approach in this paper
both exhibit linear convergence rates
in strongly convex cases.
We next review known accelerated first-order methods
for solving~\eqref{eq:prob}.
\subsection{\cblue{Review of} Accelerated First-order Methods}
This paper focuses on accelerated first-order \cblue{methods (AFM)}
of the form shown in~\algref{alg:ogm}.
The fast gradient method (FGM)
\cite{nesterov:83:amf,nesterov:04,beck:09:afi}
(with $\gamma_k = 0$ in \algref{alg:ogm})
accelerates the gradient method (GM)
(with $\beta_k = \gamma_k = 0$)
using the \emph{momentum} term $\beta_k(\y_{k+1} - \y_k)$
with negligible additional computation.
The optimized gradient method (OGM)
\cite{kim:16:ofo,kim:17:otc}
uses
an \emph{over-relaxation} term
$\gamma_k(\y_{k+1} - \x_k) = -\gamma_k\alpha\nabla f(\x_k)$
for further acceleration.
\begin{algorithm}[H]
\caption{Accelerated First-order \cblue{Methods (AFM)}}
\label{alg:ogm}
\begin{algorithmic}[1]
\State {\bf Input:} $f\in\FL$ or $\SL$, $\x_0 = \y_0\in\Reals^d$.
\For{$k \ge 0$}
\State $\y_{k+1} = \x_k - \alpha\nabla f(\x_k)$
\State $\x_{k+1} = \y_{k+1}
+ \beta_k(\y_{k+1} - \y_k)
+ \gamma_k(\y_{k+1} - \x_k)$
\EndFor
\end{algorithmic}
\end{algorithm}
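A minimal sketch of Algorithm~\ref{alg:ogm} (assuming Python with NumPy; the ill-conditioned least-squares test problem and all constants are our own illustration) shows how the three coefficient choices behave, where \texttt{ogm\_p} uses the OGM$'$ coefficients $\beta_k = (t_k-1)/t_{k+1}$ and $\gamma_k = t_k/t_{k+1}$ with the standard $t_k$ sequence:

```python
import numpy as np

def afm(grad, L, x0, iters, mode="fgm"):
    """Accelerated first-order method of Algorithm 1.
    mode: 'gm' (beta_k = gamma_k = 0), 'fgm' (gamma_k = 0),
    'ogm_p' (adds the over-relaxation term gamma_k)."""
    x, y = x0.copy(), x0.copy()
    t = 1.0
    for _ in range(iters):
        y_new = x - grad(x) / L                    # gradient step, alpha = 1/L
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        beta = 0.0 if mode == "gm" else (t - 1) / t_new
        gamma = t / t_new if mode == "ogm_p" else 0.0
        x = y_new + beta * (y_new - y) + gamma * (y_new - x)
        y, t = y_new, t_new
    return y

# Ill-conditioned least-squares demo (our own test problem)
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 20))
A[:, 0] *= 30.0                                    # worsen the conditioning
b = rng.standard_normal(40)
L = np.linalg.norm(A, 2) ** 2                      # Lipschitz constant of grad f
f = lambda x: 0.5 * np.sum((A @ x - b) ** 2)
grad = lambda x: A.T @ (A @ x - b)

x0 = np.zeros(20)
results = {m: f(afm(grad, L, x0, 60, m)) for m in ("gm", "fgm", "ogm_p")}
print(results)
```

On this problem both momentum variants decrease $f$ well below plain GM after the same number of gradient evaluations, consistent with the worst-case rates discussed next.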
\cblue{
Tables~\ref{tab:alg1} and~\ref{tab:alg2}
summarize
the standard choices of coefficients $(\alpha,\beta_k,\allowbreak \gamma_k)$
for GM, FGM, OGM
in~\cite{nesterov:83:amf,nesterov:04,beck:09:afi,kim:16:ofo,kim:17:otc}
and their worst-case rates
for smooth convex functions $\FL$
and smooth and strongly convex functions $\SL$
respectively.
(Other choices can be found in
\cite{nesterov:04,kim:16:gto-arxiv,chambolle:15:otc}.)
For convenience hereafter,
we use the names GM, GM-$q$, FGM, FGM-$q$, OGM, and OGM$'$
to distinguish different choices of standard AFM coefficients
in Tables~\ref{tab:alg1} and~\ref{tab:alg2}.
}
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.4}
\caption{
\cblue{Accelerated First-order Methods for Smooth Convex Problems}}
\label{tab:alg1}
\cblue{
\begin{tabular}{|B||A|A|A||c|}
\hline
Method & $\alpha$ & $\beta_k$ & $\gam_k$ & Worst-case Rate \\
\hline
GM & $\frac{1}{L}$
& $0$
& $0$
& $\scriptstyle f(\y_k) - f(\x_*) \le \frac{L||\x_0-\x_*||^2}{4k+2}$
\cite{drori:14:pof}
\\ \hline
\multirow{2}{*}{FGM~\cite{nesterov:83:amf}}
& \multirow{2}{*}{$\frac{1}{L}$}
& \multirow{2}{*}{$\frac{t_k-1}{t_{k+1}}$}
& \multirow{2}{*}{$0$}
& $\scriptstyle f(\y_k) - f(\x_*) \le \frac{L||\x_0-\x_*||^2}{2t_{k-1}^2}
\le \frac{2L||\x_0-\x_*||^2}{(k+1)^2}$
\cite{beck:09:afi} \\
& & &
& $\scriptstyle f(\x_k) - f(\x_*) \le \frac{L||\x_0-\x_*||^2}{2t_k^2}
\le \frac{2L||\x_0-\x_*||^2}{(k+2)^2}$
\cite{kim:16:ofo}
\\ \hline
OGM$'$~\cite{kim:17:otc}
& $\frac{1}{L}$
& $\frac{t_k-1}{t_{k+1}}$
& $\frac{t_k}{t_{k+1}}$
& $\scriptstyle f(\y_k) - f(\x_*) \le \frac{L||\x_0-\x_*||^2}{4t_{k-1}^2}
\le \frac{L||\x_0-\x_*||^2}{(k+1)^2}$
\cite{kim:17:otc}
\\ \hline
OGM~\cite{kim:16:ofo}
& $\frac{1}{L}$
& $\frac{\theta_k-1}{\theta_{k+1}}$
& $\frac{\theta_k}{\theta_{k+1}}$
& $\scriptstyle f(\x_N) - f(\x_*) \le \frac{L||\x_0-\x_*||^2}{2\theta_N^2}
\le \frac{L||\x_0 - \x_*||^2}{(N+1)^2}$
\cite{kim:16:ofo}
\\ \hline \hline
\multicolumn{5}{|c|}
{Parameters}
\\ \hline
\multicolumn{5}{|c|}
{
\hspace{-4.4em}
$\scriptstyle
\hspace{0.8em}
t_0 = 1, \;\;\;
t_k = \frac{1}{2}\paren{1+\sqrt{1+4t_{k-1}^2}},\;\; k=1,\ldots,$
} \\
\multicolumn{5}{|c|}
{
$\scriptstyle
\theta_0 = 1, \;\;
\theta_k = \begin{cases}
\scriptstyle
\frac{1}{2}\paren{1+\sqrt{1+4\theta_{k-1}^2}}, & \scriptstyle k=1,\ldots,N-1, \\
\scriptstyle
\frac{1}{2}\paren{1+\sqrt{1+8\theta_{k-1}^2}}, & \scriptstyle k=N.
\end{cases}
$
} \\ \hline
\end{tabular}
}
\end{table}
\vspace{-5pt}
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.4}
\caption{
\cblue{
Accelerated First-order Methods (with $\gamma_k = 0$)
for Smooth and Strongly Convex Problems}
\cblue{(The worst-case rates also apply to $\frac{\mu}{2}||\y_k - \x_*||^2$
due to the strong convexity~\eqref{eq:strcvx}.)}}
\label{tab:alg2}
\cblue{
\begin{tabular}{|C||D|D||c|}
\hline
Method & $\alpha$ & $\beta_k$ & Worst-case Rate \\
\hline
GM
& $\frac{1}{L}$
& $0$
& $\scriptstyle f(\y_k) - f(\x_*) \le \paren{1 - \frac{2q}{1+q}}^k\frac{L||\x_0-\x_*||^2}{2}$
\cite{nesterov:04}
\\ \hline
GM-$q$
& $\frac{2}{\mu+L}$
& $0$
& $\scriptstyle f(\y_k) - f(\x_*) \le \paren{\frac{1-q}{1+q}}^{2k}\frac{L||\x_0-\x_*||^2}{2}$
\cite{nesterov:04}
\\ \hline
FGM-$q$ \cite{nesterov:04}
& $\frac{1}{L}$
& $\frac{1-\sqrt{q}}{1+\sqrt{q}}$
& $\scriptstyle f(\y_k) - f(\x_*) \le (1-\sqrt{q})^k\frac{(1+q)L||\x_0-\x_*||^2}{2}$
\cite{nesterov:04}
\\ \hline
\end{tabular}
}
\end{table}
\vspace{-5pt}
\cblue{
The worst-case OGM rate~\cite{kim:16:ofo} in Table~\ref{tab:alg1}
is about twice as fast as the FGM rate~\cite{beck:09:afi}}
and is optimal for first-order methods
for the function class \FL
under the large-scale condition
$d\ge N+1$~\cite{drori:17:tei}.
\cblue{However,}
it is yet unknown
\cblue{which first-order methods provide an optimal worst-case}
linear convergence rate
for the function class \SL;
this topic is left as interesting future work.\footnote{
Recently,
\cite{vanscoy:18:tfk}
developed a new first-order method for known $q$
that \cblue{is not in AFM class but}
achieves a linear worst-case rate $(1-\sqrt{q})^2$
for the decrease of a strongly convex function
that is faster than the linear rate $(1-\sqrt{q})$
\cblue{of FGM-$q$ in Table~\ref{tab:alg2}.}}
Towards this direction,
Sec.~\ref{sec:quadanal}
studies \cblue{AFM} for strongly convex \emph{quadratic} problems,
\cblue{leading to a new method named OGM-$q$
with a linear convergence rate that is faster than that of FGM-$q$}.
Sec.~\ref{sec:restart} uses
this quadratic analysis
to analyze an adaptive restart scheme for OGM.
\section{
Analysis of \cblue{AFM} for Quadratic Functions
}
\label{sec:quadanal}
This section analyzes the behavior of \cblue{AFM}
for minimizing a strongly convex quadratic function.
The quadratic analysis of \cblue{AFM}
in this section is similar
in spirit to the analyses of a heavy-ball method~\cite[Sec. 3.2]{polyak:87}
and \cblue{AFM with $\gamma_k=0$}
\cite[Appx. A]{lessard:16:aad}
\cite[Sec. 4]{odonoghue:15:arf}.
\cblue{In addition, Sec.~\ref{sec:ogmq}}
optimizes the coefficients of \cblue{AFM} for such quadratic functions,
yielding a linear convergence rate that is faster than that of
\cblue{FGM-$q$}.
The resulting \cblue{method, named OGM-$q$,} requires the knowledge of $q$,
and \cblue{Sec.~\ref{sec:OGM,conv}}
shows that using
\cblue{OGM (and OGM$'$) in Table~\ref{tab:alg1}}
instead
(without the knowledge of $q$)
will cause the OGM iterates to oscillate
when the momentum is larger than a critical value.
This analysis stems
from the dynamical system analysis of
\cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma_k=0$}
in~\cite[Sec. 4]{odonoghue:15:arf}.
\subsection{Quadratic Analysis of \cblue{AFM}}
\label{sec:quad}
This section
considers minimizing a strongly convex quadratic function:
\begin{align}
f(\x) = \frac{1}{2}\x^\top \Q \x - \vp\tr \x \in \SL
\label{eq:quad}
\end{align}
where $\Q \in \Reals^{d\times d}$
is a symmetric positive definite matrix,
and $\vp \in \Reals^d$ is a vector.
Here, $\nabla f(\x) = \Q\x - \vp$ is the gradient,
and $\x_* = \Q^{-1} \vp$ is the optimum.
The smallest and the largest eigenvalues of \Q
correspond to the parameters $\mu$ and $L$ of the function respectively.
For simplicity in the quadratic analysis,
we consider the version of \cblue{AFM}
that has constant coefficients
$(\alpha,\beta,\gamma)$.
Defining the vectors
$\xii_k := (\x_k^\top, \x_{k-1}^\top)^\top \in \Reals^{2d}$
and
$\xii_* := (\x_*^\top, \x_*^\top)^\top\in \Reals^{2d}$,
and extending the analysis for \cblue{AFM with $\gamma=0$}
in~\cite[Appx. A]{lessard:16:aad},
\cblue{AFM}
has
the following equivalent form
for $k \ge 1$:
\begin{align}
&\xii_{k+1} - \xii_* = \T(\alpha,\beta,\gamma) \, (\xii_k - \xii_*)
\label{eq:xii,update}
,\end{align}
where the system matrix
$\T(\alpha,\beta,\gamma)$ of \cblue{AFM} is defined as
\begin{align}
\T(\alpha,\beta,\gamma) := \bigg[\begin{array}{cc}
(1+\beta)(\I-\alpha\Q) - \gamma\alpha\Q & -\beta(\I-\alpha\Q) \\
\I & \Zero
\end{array}\bigg] \in \Reals^{2d\times 2d}
\label{eq:T}
\end{align}
for an identity matrix $\I \in \Reals^{d\times d}$.
The sequence
$\{\tilde{\xii}_k := (\y_k^\top, \y_{k-1}^\top)^\top\}_{k\ge1}$
also satisfies the recursion~\eqref{eq:xii,update},
implying that~\eqref{eq:xii,update}
characterizes the behavior of both the primary sequence $\{\y_k\}$
and the secondary sequence $\{\x_k\}$ of \cblue{AFM}
with constant coefficients.
The spectral radius $\rho(\T(\cdot))$ of matrix $\T(\cdot)$
determines the convergence rate of the algorithm.
Specifically,
for any $\epsilon>0$,
there exists $K\ge0$ such that
$[\rho(\T)]^k \le ||\T^k|| \le (\rho(\T) + \epsilon)^k$
for all $k\ge K$,
establishing
the following worst-case rate:
\begin{align}
||\xii_{k+1} - \xii_*||^{\cblue{2}} \le
(\rho(\T(\alpha,\beta,\gamma)) + \epsilon)^{\cblue{2}k}
\ ||\xii_1 - \xii_*||^{\cblue{2}}
\label{eq:xii,rate}
.\end{align}
We next analyze
$\rho(\T(\alpha,\beta,\gamma))$.
Considering the eigen-decomposition of \Q in $\T(\cdot)$
as in~\cite[Appx. A]{lessard:16:aad},
the spectral radius of $\T(\cdot)$
is:
\begin{align}
\rho(\T(\alpha,\beta,\gamma))
= \max_{\mu\le\lambda\le L} \rho(\T_\lambda(\alpha,\beta,\gamma))
\label{eq:rhoT}
,\end{align}
where
for any eigenvalue $\lambda$
of matrix \Q we define
a matrix $\T_\lambda(\alpha,\beta,\gamma) \in \Reals^{2\times 2}$
by substituting $\lambda$ and $1$ for $\Q$ and $\I$ in
$\T(\alpha,\beta,\gamma)$ respectively.
Similar to the analysis of \cblue{AFM with $\gamma=0$}
in~\cite[Appx. A]{lessard:16:aad},
the spectral radius of $\T_\lambda(\alpha,\beta,\gamma)$
is:
\begingroup
\allowdisplaybreaks
\begin{align}
&\, \rho(\T_\lambda(\alpha,\beta,\gamma))
= \max\{|r_1(\alpha,\beta,\gamma,\lambda)|, |r_2(\alpha,\beta,\gamma,\lambda)|\}
\label{eq:rho} \\
=&\, \begin{cases}
\frac{1}{2} \paren{
|(1 + \beta)\paren{1 - \alpha\lambda} - \gamma \alpha\lambda|
+ \sqrt{\Delta(\alpha,\beta,\gamma,\lambda)} },
& \Delta(\alpha,\beta,\gamma,\lambda) \ge 0, \\
\sqrt{\beta(1-\alpha\lambda)}, & \text{otherwise,}
\nonumber
\end{cases}
\end{align}
\endgroup
where $r_1(\alpha,\beta,\gamma,\lambda)$
and $r_2(\alpha,\beta,\gamma,\lambda)$
denote the roots of the characteristic polynomial of $\T_\lambda(\cdot)$:
\begin{align}
r^2 - ((1+\beta)(1-\alpha\lambda) - \gamma\alpha\lambda)r + \beta(1-\alpha\lambda)
\label{eq:poly}
,\end{align}
and
$\Delta(\alpha,\beta,\gamma,\lambda)
:= \paren{(1 + \beta)\paren{1 - \alpha\lambda} - \gamma \alpha\lambda}^2
- 4 \beta\paren{1 - \alpha\lambda}$
denotes the corresponding discriminant.
For fixed $(\alpha,\beta,\gamma)$,
the spectral radius
$\rho(\T_\lambda(\alpha,\beta,\gamma))$
in~\eqref{eq:rho}
is a continuous and quasi-convex\footnote{
\label{ftquasi}
It is straightforward to show that
$\rho(\T_\lambda(\alpha,\beta,\gamma))$ in~\eqref{eq:rho}
is quasi-convex over $\lambda$.
First, $\sqrt{\beta(1-\alpha\lambda)}$ is quasi-convex over $\lambda$
(for $\Delta(\alpha,\beta,\gamma,\lambda) < 0$).
Second, the eigenvalue $\lambda$ satisfying
$\Delta(\alpha,\beta,\gamma,\lambda) \ge 0$
is in the region where
the function
$
\frac{1}{2} \paren{
|(1 + \beta)\paren{1 - \alpha\lambda} - \gamma \alpha\lambda|
+ \sqrt{\Delta(\alpha,\beta,\gamma,\lambda)} }
$
either monotonically increases or decreases,
which overall makes the continuous function $\rho(\T_\lambda(\alpha,\beta,\gamma))$
quasi-convex over $\lambda$.
This proof can be simply applied to other variables, \ie,
$\rho(\T_\lambda(\alpha,\beta,\gamma))$
is quasi-convex over either $\alpha$, $\beta$ or \gam.
}
function of $\lambda$;
thus its maximum
over $\lambda$
occurs at one of its boundary points
$\lambda = \mu$ or $\lambda = L$.
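The boundary-maximum property above is easy to verify numerically. The following Python sketch (an illustration with arbitrarily chosen coefficient values, not part of the analysis) evaluates $\rho(\T_\lambda(\alpha,\beta,\gamma))$ on a grid of $\lambda \in [\mu, L]$ and checks that the maximum occurs at an endpoint:

```python
import numpy as np

def rho_T_lambda(alpha, beta, gamma, lam):
    # Spectral radius of the 2x2 mode matrix T_lambda from (eq:T),
    # obtained by substituting lam for Q and 1 for I.
    T = np.array([[(1 + beta) * (1 - alpha * lam) - gamma * alpha * lam,
                   -beta * (1 - alpha * lam)],
                  [1.0, 0.0]])
    return max(abs(np.linalg.eigvals(T)))

mu, L = 0.1, 1.0
alpha, beta, gamma = 1.0 / L, 0.5, 0.3   # illustrative coefficients
lams = np.linspace(mu, L, 1001)
radii = [rho_T_lambda(alpha, beta, gamma, lam) for lam in lams]
# Quasi-convexity in lambda puts the maximum at a boundary point.
boundary = max(rho_T_lambda(alpha, beta, gamma, mu),
               rho_T_lambda(alpha, beta, gamma, L))
assert np.isclose(max(radii), boundary, atol=1e-8)
```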
\cblue{The next section reviews the optimization of AFM coefficients
to provide the fastest convergence rate,
\ie, the smallest spectral radius $\rho(\T(\cdot))$ in \eqref{eq:rhoT},
under certain constraints on $(\alpha,\beta,\gamma)$.
}
\subsection{\cblue{Review of Optimizing AFM} Coefficients
\cblue{under Certain Constraints on $(\alpha,\beta,\gamma)$}}
\cblue{The AFM} coefficients
that provide the fastest convergence
for minimizing a strongly convex quadratic function
\cblue{would} solve
\begin{align}
\argmin{\alpha,\beta,\gamma} \rho(\T(\alpha,\beta,\gamma))
=
\argmin{\alpha,\beta,\gamma}
\max\{\rho(\T_{\mu}(\alpha,\beta,\gamma)),\rho(\T_L(\alpha,\beta,\gamma))\}
\label{eq:optogm}
.\end{align}
\cblue{Note that a heavy-ball method~\cite{polyak:87}
(that is not in AFM class)
with similarly optimized coefficients
has a linear worst-case rate
with $\rho(\cdot) = \frac{1 - \sqrt{q}}{1 + \sqrt{q}}$
that is optimal (up to constant)
for strongly convex quadratic problems
\cite{nesterov:04}.
Thus, optimizing~\eqref{eq:optogm}
would be of little practical benefit for quadratic problems.
Nevertheless, such optimization is new to AFM for $\gamma>0$
(with the additional constraint $\alpha=\Frac{1}{L}$ introduced below),
and is useful in our later analysis for the adaptive restart
in Sec.~\ref{sec:restart}.
A heavy-ball method with the coefficients
optimized for strongly convex quadratic problems
does not converge
for some strongly convex nonquadratic problems~\cite{lessard:16:aad},
and other choices of coefficients
do not yield worst-case rates that are comparable
to those of some accelerated choices of AFM
\cite{drori:14:pof,lessard:16:aad},
so we focus on AFM hereafter.}
\cblue{The coefficient optimization~\eqref{eq:optogm} for AFM}
was studied previously
\cblue{with various constraints}.
\cblue{For example,}
optimizing~\eqref{eq:optogm} over $\alpha$
\cblue{with the constraint $\beta=\gamma=0$
yields GM-$q$.}
Similarly,
\cblue{FGM-$q$}
results from
optimizing~\eqref{eq:optogm} over $\beta$
for the \cblue{constraint}%
\footnote{%
For \cblue{FGM-$q$}
the value of
$\rho(\T_L(\Frac{1}{L},\beta,0))$ is $0$,
and the function $\rho(\T_{\mu}(\Frac{1}{L},\beta,0))$
is continuous and quasi-convex over $\beta$
(see footnote~\ref{ftquasi}).
The minimum of $\rho(\T_{\mu}(\Frac{1}{L},\beta,0))$
occurs at the point $\beta = \frac{1-\sqrt{q}}{1+\sqrt{q}}$
in \cblue{Table~\ref{tab:alg2}}
satisfying $\Delta\paren{\Frac{1}{L},\beta,0,\mu} = 0$,
verifying the statement
that \cblue{FGM-$q$}
results from optimizing~\eqref{eq:optogm} over $\beta$
given $\alpha = \Frac{1}{L}$ and $\gamma = 0$.
}
$\alpha = \Frac{1}{L}$ and $\gamma = 0$.
\cblue{In~\cite[Prop.~1]{lessard:16:aad},
AFM with coefficients
\(
(\alpha,\beta,\gamma)
= \paren{\frac{4}{\mu + 3L},
\frac{\sqrt{3+q} - 2\sqrt{q}}{\sqrt{3+q} + 2\sqrt{q}},\,0}
,\)
named FGM$'$-$q$ in Table~\ref{tab:coeff},
was derived by optimizing~\eqref{eq:optogm} over $(\alpha,\beta)$
with the constraint $\gamma=0$.}
Although a general unconstrained solution to~\eqref{eq:optogm}
would be an interesting future direction,
here
we focus on optimizing~\eqref{eq:optogm} over $(\beta,\gamma)$
\cblue{with the constraint} $\alpha=\Frac{1}{L}$.
This choice
simplifies the problem~\eqref{eq:optogm}
and is useful for analyzing
an adaptive restart scheme for OGM in Sec.~\ref{sec:restart}.
\subsection{Optimizing the Coefficients $(\beta,\gamma)$ of \cblue{AFM}
When $\alpha = \Frac{1}{L}$}
\label{sec:ogmq}
When $\alpha = \Frac{1}{L}$ and $\lambda = L$,
the characteristic polynomial~\eqref{eq:poly}
becomes
\(
r^2 + \gamma r = 0
.\)
The roots are $r = 0$ and $r = -\gamma$,
so $\rho(\T_L(\Frac{1}{L},\beta,\gamma)) = |\gamma|$.
In addition, because
$\rho(\T_{\mu}(\Frac{1}{L},\beta,\gamma))$
is continuous and quasi-convex over $\beta$
(see footnote~\ref{ftquasi}),
it can be easily shown that
the smaller value of $\beta$
satisfying the following equation:
\begin{align}
&\,\Delta(\Frac{1}{L},\beta,\gamma,\mu)
= ((1+\beta)(1-q) - \gamma q)^2 - 4\beta(1-q) \\
=&\, (1-q)^2\beta^2 - 2(1-q)(1+q+q\gamma)\beta + (1-q)(1-q-2q\gamma) + q^2\gamma^2
= 0 \nonumber
\end{align}
minimizes $\rho(\T_{\mu}(\Frac{1}{L},\beta,\gamma))$
for any given \gam (satisfying \cblue{$\gamma \ge -1$}).
The optimal
$\beta$ \cblue{for a given $\gamma$ (when $\alpha=\Frac{1}{L}$)}
is
\begin{align}
&\betastar(\gamma) := \Frac{\paren{1 - \sqrt{q(1 + \gamma)}}^2}{(1-q)}
\label{eq:betaopt}
,\end{align}
which reduces to
$\beta = \betastar(0) = \frac{1-\sqrt{q}}{1+\sqrt{q}}$
for \cblue{FGM-$q$} (with $\gamma = 0$).
Substituting
\eqref{eq:betaopt} into~\eqref{eq:rho}
yields
\(
\rho(\T_{\mu}(\Frac{1}{L},\betastar(\gamma),\gamma))
= |1 - \sqrt{q(1+\gamma)}|
,\)
leading
to the following
simplification of~\eqref{eq:optogm}
with $\alpha = \Frac{1}{L}$ and $\beta = \betastar(\gamma)$
from~\eqref{eq:betaopt}:
\begin{align}
\gamstar := \argmin{\gamma} \,
\max\left\{ |1 - \sqrt{q(1 + \gamma)} |, \, |\gamma| \right\}
\label{eq:optogmalpbet}
.\end{align}
The minimizer of~\eqref{eq:optogmalpbet}
satisfies
\(
1 - \sqrt{q(1 + \gamma)} = \pm \gamma
,\)
and with simple algebra,
we get
the following solutions to~\eqref{eq:optogm}
with \cblue{the constraint} $\alpha = \Frac{1}{L}$
(and~\eqref{eq:optogmalpbet}):
\begin{align}
\betastar := \beta^\star(\gamstar)
= \frac{\paren{\gamstar}^2}{1-q}
= \frac{(2+q - \sqrt{q^2 + 8q})^2}{4(1-q)},
\quad
\gamstar = \frac{2 + q - \sqrt{q^2 + 8q}}{2}
\label{eq:ogmquadalp}
,\end{align}
for which
the spectral radius is
\(
\rho^\star
:= \rho( \T(\Frac{1}{L}, \betastar, \gamstar) )
= 1 - \sqrt{q(1+\gamstar)}
= \gamstar
.\)
\cblue{We denote \algref{alg:ogm} with coefficients
$\alpha=\Frac{1}{L}$ and (\betastar,\gamstar) in~\eqref{eq:ogmquadalp}
as OGM-$q$.}
Table~\ref{tab:coeff}
compares
the spectral radius
of the
\cblue{OGM-$q$}
to \cblue{GM-$q$, FGM-$q$, and FGM$'$-$q$~\cite[Prop.~1]{lessard:16:aad}}.
Simple algebra
shows that the spectral radius of
\cblue{OGM-$q$} is smaller than those of \cblue{FGM-$q$ and FGM$'$-$q$},
\ie,
\(
\frac{2 + q - \sqrt{q^2 + 8q}}{2}
\le 1 - \frac{2\sqrt{q}}{\sqrt{3+q}}
\le 1 - \sqrt{q}
.\)
Therefore,
\cblue{OGM-$q$}
achieves a worst-case convergence rate
of $||\xii_k - \xii_*||^{\cblue{2}}$
that is faster than that of FGM \cblue{variants}
\cblue{(but that is slower than a heavy-ball method~\cite{polyak:87})}
for a strongly convex quadratic function.
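As a quick numerical check of this ordering, the following sketch evaluates the spectral radius expressions of Table~\ref{tab:coeff} over a grid of $q$ values; the formulas are taken directly from the table, and the grid range is an arbitrary illustration:

```python
import numpy as np

q = np.linspace(1e-4, 0.99, 500)                     # q = mu / L
rho_gm   = (1 - q) / (1 + q)                         # GM-q
rho_fgm  = 1 - np.sqrt(q)                            # FGM-q
rho_fgmp = 1 - 2 * np.sqrt(q) / np.sqrt(3 + q)       # FGM'-q
rho_ogm  = (2 + q - np.sqrt(q**2 + 8 * q)) / 2       # OGM-q (= gamma*)
# Ordering claimed in the text: OGM-q <= FGM'-q <= FGM-q (<= GM-q).
assert np.all(rho_ogm <= rho_fgmp + 1e-12)
assert np.all(rho_fgmp <= rho_fgm + 1e-12)
assert np.all(rho_fgm <= rho_gm + 1e-12)
```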
\newcommand{\mycol}[1] { \colorbox{gray!25}{ \makebox[3em]{ $ #1 $ }}}
\begin{table}[htbp]
\centering
\renewcommand{\arraystretch}{1.4}
\caption{
Optimally tuned coefficients $(\alpha,\beta,\gamma)$
of \cblue{GM-$q$, FGM-$q$, FGM$'$-$q$, and OGM-$q$},
and their spectral radius $\rho(\T(\alpha,\beta,\gamma))$~\eqref{eq:rhoT}.
These optimal coefficients result from solving~\eqref{eq:optogm}
with the shaded coefficients fixed.}
\label{tab:coeff}
\begin{tabular}{|C||E|E|E||E|}
\hline
Method & $\alpha$ & $\beta$ & \gam & $\rho(\T(\alpha,\beta,\gamma))$ \\
\hline
\vspace{-1.1em} \cblue{GM-$q$}
& $\frac{2}{\mu+L}$
& \mycol{ 0 }
& \mycol{ 0 }
& $\frac{1-q}{1+q}$
\\ \hline
\vspace{-1.1em} \cblue{FGM-$q$ \cite{nesterov:04}}
& \mycol{ \frac{1}{L} }
& $\frac{1-\sqrt{q}}{1+\sqrt{q}}$
& \mycol{ 0 }
& $1-\sqrt{q}$
\\ \hline
\vspace{-1.1em} \cblue{FGM$'$-$q$ \cite{lessard:16:aad}}
& $\frac{4}{\mu+3L}$
& $\frac{\sqrt{3+q}-2\sqrt{q}}{\sqrt{3+q}+2\sqrt{q}}$
& \mycol{ 0 }
& $1 - \frac{2\sqrt{q}}{\sqrt{3+q}}$
\\ \hline
\vspace{-1.1em} \cblue{OGM-$q$}
& \mycol{ \frac{1}{L} }
& $\frac{(2+q - \sqrt{q^2 + 8q})^2}{4(1-q)}$
& $\frac{2+q - \sqrt{q^2 + 8q}}{2}$
& $\frac{2+q - \sqrt{q^2 + 8q}}{2}$
\\ \hline
\end{tabular}
\end{table}
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{fig,ogm,roots.eps}
\includegraphics[clip,width=0.48\textwidth]{fig,ogmb,roots.eps}
\end{center}
\caption{
Plots of
$|r_1(\Frac{1}{L},\beta,\gamma,\lambda)|$
and
$|r_2(\Frac{1}{L},\beta,\gamma,\lambda)|$
over $\mu\le\lambda\le L$
for various
(Left) \gam values for given $\beta = \betastar(\gamma)$,
and
(Right) $\beta$ values for given $\gamma = \gamstar$,
for a strongly convex quadratic problem with
$\mu=0.1$ and $L=1$ ($q=0.1$),
where $(\betastar,\gamstar) = (0.4,0.6)$.
The maximum of
$|r_1(\Frac{1}{L},\beta,\gamma,\lambda)|$
and
$|r_2(\Frac{1}{L},\beta,\gamma,\lambda)|$,
\ie the upper curve in the plot,
corresponds to
the value of $\rho(\T_\lambda(\Frac{1}{L},\beta,\gamma))$ in~\eqref{eq:rho},
\cblue{and
the maximum value of $\rho(\T_\lambda(\Frac{1}{L},\beta,\gamma))$ over $\lambda$
corresponds to
the spectral radius $\rho(\T(\Frac{1}{L},\beta,\gamma))$ in~\eqref{eq:rhoT}.}
}
\label{fig:ogm,roots}
\vspace{-3pt}
\end{figure}
To further understand the behavior of \cblue{AFM}
for each eigen-mode,
Fig.~\ref{fig:ogm,roots}
plots
$\rho(\T_\lambda(\Frac{1}{L},\beta,\gamma))$
\cblue{over} $\mu \le \lambda \le L$
for $\mu = 0.1$ and $L = 1$ ($q=0.1$) as an example,
where $(\betastar,\gamstar) = (0.4,0.6)$.
\cblue{The left plot of} Fig.~\ref{fig:ogm,roots}
first compares
\cblue{the $\rho(\T_{\lambda}(\Frac{1}{L},\beta,\gamma))$ values of OGM-$q$}
to \cblue{those of}
other choices of $\gamma=0,0.4,0.8$
\cblue{with $\beta=\betastar(\gamma)$ in~\eqref{eq:betaopt}.}
The \cblue{OGM-$q$}
(\cblue{see} upper red curve in Fig.~\ref{fig:ogm,roots})
has
\cblue{the largest value ($\rho^\star = \gamstar = 0.6$)
of $\rho(\T_{\lambda}(\Frac{1}{L},\beta,\gamma))$}
at both the smallest and the largest eigenvalues
\cblue{($\mu$ and $L$ respectively)},
unlike other choices of \gam (with $\betastar(\gamma)$)
where either $\rho(\T_{\mu}(\Frac{1}{L},\beta,\allowbreak\gamma))$
or $\rho(\T_L(\Frac{1}{L},\beta,\gamma))$
are the largest.
The other choices thus have a spectral radius
\cblue{$\rho(\T(\Frac{1}{L},\beta,\gamma))$}
larger than that of the \cblue{OGM-$q$}.
\cblue{The right plot of} Fig.~\ref{fig:ogm,roots}
illustrates \cblue{$\rho(\T_{\lambda}(\Frac{1}{L},\beta,\gamma))$ values}
for different choices of $\beta\cblue{=0,0.2,0.4,0.6}$
for given $\gamma = \gamstar$,
showing that suboptimal $\beta$ value
will slow down convergence,
\cblue{compared to the optimal $\beta^\star=0.4$}.
\cblue{AFM with
$(\alpha,\beta,\gamma) = (\Frac{1}{L},0,\gamma^\star)$
in Fig.~\ref{fig:ogm,roots}
is equivalent to
AFM with $\paren{\frac{1}{L}(1 + \gamma^\star),0,0}$,
and this implies that AFM with $\beta=\gamma=0$ (\eg, GM)
may have}
some modes for mid-valued $\lambda$ values
that will converge faster
than the accelerated methods,
\cblue{whereas}
its overall convergence rate \cblue{(\ie, the spectral radius value)}
is worse.
Evidently, no single method can achieve superior convergence rates for all modes.
\cblue{Similarly,} although
\cblue{OGM-$q$ has}
the smallest possible
spectral radius $\rho(\T(\cdot))$
\cblue{among known AFM},
the upper blue and red curves in
\cblue{the left plot of} Fig.~\ref{fig:ogm,roots},
\cblue{corresponding to FGM-$q$ and OGM-$q$ respectively,}
illustrate that
\cblue{OGM-$q$}
will have modes
for large eigenvalues
that converge slower than with
\cblue{FGM-$q$}.
This behavior may be undesirable
when such modes \cblue{of large eigenvalues}
dominate the overall convergence behavior.
\cblue{The next section reveals}
that the convergence of the primary sequence $\{\y_k\}$ of
\cblue{AFM with $\alpha=\Frac{1}{L}$}
is not governed by such modes \cblue{of large eigenvalues}
unlike \cblue{its} secondary sequence $\{\x_k\}$.
\cblue{In addition,}
Fig.~\ref{fig:ogm,roots}
reveals change points across $\lambda$
meaning that there are different regimes;
\cblue{the next section} elaborates on this behavior,
building upon \cblue{the} dynamical system analysis of
\cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma=0$ in}
\cite[Sec. 4]{odonoghue:15:arf}.
\subsection{Convergence Properties of \cblue{AFM}
When
$\alpha = \Frac{1}{L}$}
\label{sec:OGM,conv}
\cite[Sec. 4]{odonoghue:15:arf}
analyzed a constant-step \cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma=0$}
as a linear dynamical system
for minimizing a strongly convex quadratic function~\eqref{eq:quad},
and showed that there are three regimes of behavior for the system;
low momentum, optimal momentum, and high momentum regimes.
This section
similarly analyzes \cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma\ge0$}
to better understand its convergence behavior
when solving a strongly convex quadratic problem~\eqref{eq:quad},
complementing the previous
section's
spectral radius analysis of \cblue{AFM}.
We use the eigen-decomposition of $\Q = \V\Lam\V^\top$
with $\Lam := \diag{\lambda_i}$,
where the eigenvalues $\{\lambda_i\}$ are in an ascending order,
\ie, $\mu=\lambda_1\le\lambda_2\le\cdots\le\lambda_d=L$.
For simplicity, we let $\vp = \Zero$
without loss of generality,
leading to $\x_* = \Zero$.
By defining
$\w_k := (w_{k,1},\cdots,w_{k,d})^\top = \V^\top \y_k \in \Reals^d$
and
$\vv_k := (v_{k,1},\cdots,v_{k,d})^\top = \V^\top \x_k \in \Reals^d$
as the mode coefficients
of the primary and secondary sequences respectively
and using~\eqref{eq:xii,update},
we have the following $d$ independently evolving
identical recurrence relations
for the evolution of $w_{\cdot,i}$ and $v_{\cdot,i}$
of the constant-step \cblue{AFM with $\alpha=\Frac{1}{L}$} respectively:
\begin{align}
w_{k+2,i} &= \paren{(1 + \beta)\paren{1 - \Frac{\lambda_i}{L}}
- \gamma\Frac{\lambda_i}{L}}w_{k+1,i}
- \beta\paren{1 - \Frac{\lambda_i}{L}}w_{k,i},
\label{eq:wrecur} \\
v_{k+2,i} &= \paren{(1 + \beta)\paren{1 - \Frac{\lambda_i}{L}}
- \gamma\Frac{\lambda_i}{L}}v_{k+1,i}
- \beta\paren{1 - \Frac{\lambda_i}{L}}v_{k,i}
\nonumber
,\end{align}
for $i=1,\ldots,d$,
although the initial conditions differ as follows:
\begin{align}
w_{1,i} = (1 - \Frac{\lambda_i}{L})w_{0,i},
\quad
v_{1,i} = ((1+\beta+\gamma)(1-\Frac{\lambda_i}{L}) - (\beta+\gamma))v_{0,i}
\label{eq:initial}
\end{align}
with $w_{0,i} = v_{0,i}$.
The convergence behavior of the $i$th mode of the dynamical system
of both $w_{\cdot,i}$ and $v_{\cdot,i}$
in \eqref{eq:wrecur}
is determined
by the characteristic polynomial~\eqref{eq:poly}
with $\alpha = \Frac{1}{L}$
and $\lambda = \lambda_i$.
Unlike the previous sections
that studied only the worst-case convergence performance
using the largest absolute value of the roots
of the polynomial~\eqref{eq:poly},
we next discuss
the convergence behavior of \cblue{AFM}
more comprehensively
using~\eqref{eq:poly} with $\alpha=\Frac{1}{L}$ and $\lambda=\lambda_i$
for the two cases
1) $\lambda_i = L$ and 2) $\lambda_i < L$.
1) $\lambda_i = L$:
The characteristic polynomial~\eqref{eq:poly}
of the mode of $\lambda_i = L$
reduces to $r^2 + \gamma r=0$
with two roots $0$ and $-\gamma$
regardless of the choice of $\beta$.
Thus we have monotone
convergence
for this ($d$th) mode of the
dynamical system~\cite[Sec. 17.1]{chiang:84:fmo}:
\begin{align}
w_{k,d} = 0^k + c_d(-\gamma)^k,
\quad
v_{k,d} = 0^k + \hat{c}_d(-\gamma)^k
\label{eq:wkd}
,\end{align}
where $c_d$ and $\hat{c}_d$ are constants
depending on the initial conditions~\eqref{eq:initial}.
Substituting $w_{1,d} = 0$
and $v_{1,d} = -(\beta+\gamma)v_{0,d}$~\eqref{eq:initial}
into~\eqref{eq:wrecur}
yields
\begin{align}
c_d = 0,
\quad
\hat{c}_d = v_{0,d}\paren{1 + \Frac{\beta}{\gamma}}
\label{eq:wkd,c}
,\end{align}
illustrating that the primary sequence $\{w_{k,d}\}$
reaches its optimum after one iteration,
whereas the secondary sequence $\{v_{k,d}\}$
has slow monotone convergence of the distance to the optimum,
while exhibiting undesirable oscillation
due to the term
$ (-\gamma)^k $,
corresponding
to overshooting over the optimum.
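This contrast between the two sequences can be checked directly from the recursion; the sketch below iterates the $\lambda_i = L$ mode (using the example values $(\beta,\gamma) = (0.4, 0.6)$ from Fig.~\ref{fig:ogm,roots}) and compares against the closed forms~\eqref{eq:wkd} and~\eqref{eq:wkd,c}:

```python
import numpy as np

# Mode lambda_i = L with alpha = 1/L: recursion (eq:wrecur) reduces to
# u_{k+2} = -gamma u_{k+1}, since the beta term carries a factor 1 - lambda/L = 0.
beta, gamma = 0.4, 0.6               # (beta*, gamma*) for q = 0.1
w = [1.0, 0.0]                       # w_{0,d} = 1; w_{1,d} = (1 - L/L) w_{0,d} = 0
v = [1.0, -(beta + gamma)]           # v_{1,d} from (eq:initial)
for _ in range(20):
    w.append(-gamma * w[-1])
    v.append(-gamma * v[-1])
assert all(wk == 0.0 for wk in w[1:])   # primary mode: optimal after one step
assert v[1] * v[2] < 0                  # secondary mode: sign oscillation via (-gamma)^k
# Closed form (eq:wkd), (eq:wkd,c): v_{k,d} = v_{0,d} (1 + beta/gamma) (-gamma)^k
assert np.isclose(v[3], (1 + beta / gamma) * (-gamma) ** 3)
```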
2) $\lambda_i < L$:
In \eqref{eq:ogmquadalp}
we found the optimal overall $\betastar$
for \cblue{AFM when $\alpha=\Frac{1}{L}$}.
One can alternatively explore
what the best value of $\beta$ would be
for any given mode of the system
for comparison.
The polynomial~\eqref{eq:poly} has repeated roots
for the following $\beta$,
corresponding to the smaller zero of
the discriminant
$\Delta(\Frac{1}{L},\beta,\gamma,\lambda_i)$
for given \gam and $\lambda_i$:
\begin{align}
\beta_i^\star(\gamma)
:= \Frac{\paren{1-\sqrt{(1+\gamma)\Frac{\lambda_i}{L}}}^2}{(1-\Frac{\lambda_i}{L})}
\label{eq:betaistar}
.\end{align}
This choice
satisfies
$\betastar = \betastar(\gamstar)
= \beta_1^\star(\gamstar)$~\eqref{eq:ogmquadalp},
because $\lambda_1$ is the smallest eigenvalue.
Next we examine
the convergence behavior of \cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma\ge0$}
in the following three regimes,
similar to \cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma=0$}
in~\cite[Sec. 4.3]{odonoghue:15:arf}:\footnote{
For simplicity in the momentum analysis,
we considered values $\beta$
within $[0\;1]$,
containing the \cblue{standard} $\beta_k$ values
in
\cblue{Tables~\ref{tab:alg1} and~\ref{tab:alg2}.}
\cblue{
This restriction
excludes the effect of the $\beta$
that corresponds
to the larger zero of the discriminant
$\Delta(\Frac{1}{L},\beta,\gamma,\lambda_i)$
for given $\gamma$ and $\lambda_i$,
and that is larger than $1$.
Any $\beta$ greater than $1$
has $\rho(\T_{\lambda_i}(\Frac{1}{L},\allowbreak\beta,\gamma))$ values
(in~\eqref{eq:rho} with $\alpha=\Frac{1}{L}$)
that are
larger than those for $\beta\in[\beta_i^\star(\gamma)\;1]$
due to the quasi-convexity of
$\rho(\T_{\lambda_i}(\Frac{1}{L},\allowbreak\beta,\gamma))$ over $\beta$.
}
}
\begin{itemize}[leftmargin=25pt]
\item $\beta<\beta_i^\star(\gamma)$:
low momentum, over-damped,
\item $\beta=\beta_i^\star(\gamma)$:
optimal momentum, critically damped,
\item $\beta>\beta_i^\star(\gamma)$:
high momentum, under-damped.
\end{itemize}
If $\beta \le \beta_i^\star(\gamma)$,
the polynomial~\eqref{eq:poly} has two real roots,
$r_{1,i}$ and $r_{2,i}$
where we omit $(\Frac{1}{L},\beta,\gamma,\lambda_i)$
in $r_{\cdot,i} = r_\cdot(\Frac{1}{L},\beta,\gamma,\lambda_i)$ for simplicity.
Then, the system evolves as~\cite[Sec. 17.1]{chiang:84:fmo}:
\begin{align}
w_{k,i} = c_{1,i}r_{1,i}^k + c_{2,i}r_{2,i}^k,
\quad
v_{k,i} = \hat{c}_{1,i}r_{1,i}^k + \hat{c}_{2,i}r_{2,i}^k
\label{eq:wki_over}
,\end{align}
where constants $c_{1,i}$, $c_{2,i}$, $\hat{c}_{1,i}$ and $\hat{c}_{2,i}$
depend on the initial conditions~\eqref{eq:initial}.
In particular, when $\beta = \beta_i^\star(\gamma)$ \eqref{eq:betaistar},
we have the repeated root:
\begin{align}
r_i^\star(\gamma) := 1 - \sqrt{(1+\gamma)\Frac{\lambda_i}{L}}
\label{eq:ris}
,\end{align}
corresponding to critical damping,
yielding the fastest monotone convergence
among \eqref{eq:wki_over}
for any $\beta$ \st $\beta\le\beta_i^\star(\gamma)$.
This property is due to the quasi-convexity
of $\rho(\T_{\lambda_i}(\Frac{1}{L},\allowbreak\beta,\gamma))$ over $\beta$.
If $\beta < \beta_i^\star(\gamma)$,
the system is over-damped, which corresponds to the low momentum regime,
where the system is dominated by the larger root
that is greater than $r_i^\star(\gamma)$ \cblue{\eref{eq:ris}},
and thus has slow monotone convergence.
However, depending on the initial conditions~\eqref{eq:initial},
the system may only be dominated by the smaller root,
as noticed for the case $\lambda_i=L$ in~\eqref{eq:wkd}
\cblue{and~\eqref{eq:wkd,c}}.
Also note that
the mode of $\lambda_i=L$ is always in the low momentum regime
regardless of the value of $\beta$.
If $\beta > \beta_i^\star(\gamma)$,
the system is under-damped,
which corresponds to the high momentum regime.
This means that the system evolves
as~\cite[Sec. 17.1]{chiang:84:fmo}:
\begin{align}
w_{k,i} = c_i\paren{\sqrt{\beta(1-\Frac{\lambda_i}{L})}}^k
\cos(k\psi_i(\beta,\gamma)-\delta_i),
\label{eq:wki_under} \\
v_{k,i} = \hat{c}_i\paren{\sqrt{\beta(1-\Frac{\lambda_i}{L})}}^k
\cos(k\psi_i(\beta,\gamma)-\hat{\delta}_i),
\nonumber
\end{align}
where the
frequency of the oscillation
is given by
\begin{align}
\psi_i(\beta,\gamma)
:= \cos^{-1}\paren{\Frac{\paren{(1+\beta)(1-\Frac{\lambda_i}{L}) - \gamma\Frac{\lambda_i}{L}}}
{\paren{2\sqrt{\beta(1-\Frac{\lambda_i}{L})}}}}
\label{eq:psii}
,\end{align}
and $c_i$, $\delta_i$, $\hat{c}_i$ and $\hat{\delta}_i$
denote constants that depend on the initial conditions~\eqref{eq:initial};
in particular for $\beta\approx1$,
we have $\delta_i\approx0$ and $\hat{\delta}_i\approx0$
so we will ignore them.
Based on the above momentum analysis,
we categorize
the behavior of the $i$th mode
of \cblue{AFM} for each $\lambda_i$ \cblue{in Fig.~\ref{fig:ogm,roots}}.
Regimes with two curves and one curve
\cblue{(over $\lambda$)}
in Fig.~\ref{fig:ogm,roots}
correspond to the low- and high-momentum regimes, respectively.
In particular, for $\beta = \betastar(\gamma)$
in \cblue{the left plot of} Fig.~\ref{fig:ogm,roots},
most $\lambda_i$ values
\cblue{(satisfying $\beta>\beta_i^\star(\gamma)$)}
experience high momentum
(and the optimal momentum for
$\lambda_i$
satisfying $\betastar(\gamma) = \beta_i^\star(\gamma)$,
\eg, $\lambda_i = \mu$),
whereas modes where $\lambda_i\approx L$
experience low momentum.
The fast convergence of
the primary sequence $\{w_{k,d}\}$
in \eref{eq:wkd} \cblue{and \eref{eq:wkd,c}}
generalizes to the case $\lambda_i \approx L$,
corresponding to the lower curves
in Fig.~\ref{fig:ogm,roots}.
In addition, for $\beta\cblue{=0,0.2}$ \cblue{that are}
smaller than $\betastar(\gamma)$
in \cblue{the right plot of} Fig.~\ref{fig:ogm,roots},
both $\lambda \approx \mu$ and $\lambda \approx L$
experience low momentum
so increasing $\beta$ improves the convergence rate.
Based on the quadratic analysis in this section,
we would like to use appropriately large $\beta$ and \gam
coefficients,
namely $(\betastar,\gamstar)$,
to have fast monotone convergence
(for the dominating modes).
However,
such values require knowing
the function parameter $q = \mu/L$
that is usually unavailable in practice.
Using OGM \cblue{(and OGM$'$) in Table~\ref{tab:alg1}}
without knowing $q$
will likely lead to oscillation
due to the high momentum (or under-damping)
for strongly convex functions.
The next section describes restarting schemes
inspired by~\cite{odonoghue:15:arf}
that we suggest using with OGM
to avoid such oscillation
and thus heuristically accelerate
the rate of OGM
for a strongly convex quadratic function
and even
for a convex function that is locally strongly convex.
\mbox{}
\section{Restarting Schemes}
\label{sec:restart}
Restarting an algorithm
(\ie, starting the algorithm again
by using the current iterate as the new starting point)
after
a certain number of iterations
or when some restarting condition is satisfied
has been found useful,
\eg,
for the conjugate gradient method
\cite{powell:77:rpf,nocedal:06},
called ``fixed restart'' and ``adaptive restart'' respectively.
The fixed restart approach was also studied
for accelerated gradient schemes such as FGM
in~\cite{nesterov:13:gmf,nemirovski:94:emi}.
Recently
adaptive restart of FGM was shown
to provide dramatic practical acceleration
without requiring knowledge of function parameters%
~\cite{odonoghue:15:arf,giselsson:14:mar,su:16:ade}.
Building upon those ideas,
this section reviews and applies
restarting approaches for OGM.
A quadratic analysis in~\cite{odonoghue:15:arf}
justified using a restarting condition for FGM;
this section extends that analysis to OGM
by studying an observable quantity of oscillation
that serves as an indicator
for restarting the momentum of OGM.
\subsection{Fixed Restart}
Restarting an algorithm
every $k$ iterations
can yield a linear rate
for decreasing a function in \SL
\cite[Sec. 5.1]{nesterov:13:gmf}%
~\cite[Sec. 11.4]{nemirovski:94:emi}.
\cblue{Suppose one restarts OGM every $k$ (inner) iterations
by initializing the $(j+1)$th outer iteration using
$\x_{j+1,0} = \x_{j,k}$,
where
$\x_{j,i}$
denotes an iterate at the $j$th outer iteration
and $i$th inner iteration.}
Combining the OGM rate
\cblue{in Table~\ref{tab:alg1}}
and the strong convexity inequality~\eqref{eq:strcvx}
yields the following linear rate
\cblue{for each outer iteration of OGM with fixed restart}:
\begin{align}
f(\x_{j,k}) - f(\x_*)
\le \frac{L ||\x_{j,0} - \x_*||^2}{k^2}
\le \frac{2L}{\mu k^2} (f(\x_{j,0}) - f(\x_*))
\label{eq:fixed,cost}
.\end{align}
This \cblue{rate is faster} than the
\(
\Frac{4L}{\mu k^2}
\)
rate
\cblue{of one outer iteration of FGM with fixed restart}
(using the FGM \cblue{rate in Table~\ref{tab:alg1}}).
\cblue{For a given $N=jk$ total number of steps,
a simple calculation
shows that
the} optimal restarting interval $k$
minimizing
\cblue{the rate $\paren{\Frac{2L}{(\mu k^2)}}^j$ after $N$ steps
(following from~\eqref{eq:fixed,cost})
is}
$k_{\mathrm{fixed}}^{ } := e\sqrt{\Frac{2}{q}}$
\cblue{that does not depend on $N$,
where $e$ is Euler's number.}
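The optimal interval can be recovered numerically; the sketch below minimizes the per-step contraction of the $N$-step rate over $k$ for an example value of $q$ (chosen arbitrarily for illustration) and compares the minimizer against $k_{\mathrm{fixed}}$:

```python
import numpy as np

q = 1e-3                                   # q = mu / L (example value)
ks = np.linspace(2.0, 2000.0, 200001)
# Log of the per-step contraction of the N-step rate (2L/(mu k^2))^{N/k};
# minimizing this over k is independent of N.
log_rate = np.log(2.0 / (q * ks**2)) / ks
k_best = ks[np.argmin(log_rate)]
k_fixed = np.e * np.sqrt(2.0 / q)          # closed-form optimal interval
assert abs(k_best - k_fixed) / k_fixed < 0.01
```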
There are two drawbacks of the fixed restart approach
\cite[Sec. 3.1]{odonoghue:15:arf}.
First,
computing the optimal interval
$k_{\mathrm{fixed}}^{ }$
requires knowledge of $q$
that is usually unavailable in practice.
Second, using a global parameter $q$
may be too conservative
when the iterates enter a locally strongly convex region.
Therefore,
adaptive restarting~\cite{odonoghue:15:arf}
\cblue{is more} useful in practice,
which we review next and then apply to OGM.
The above two drawbacks
also apply to the \cblue{methods in Table~\ref{tab:coeff}}
that assume knowledge of the global parameter $q$.
\subsection{Adaptive Restart}
To circumvent the drawbacks of fixed restart,
\cite{odonoghue:15:arf}
proposes
the following two adaptive restart schemes
for FGM:
\begin{itemize}[leftmargin=25pt]
\item Function scheme for restarting (FR): restart whenever
\begin{align}
f(\y_{k+1}) > f(\y_k)
\label{eq:fr}
,\end{align}
\item Gradient scheme for restarting (GR): restart whenever
\begin{align}
\Inprod{-\nabla f(\x_k)}{\y_{k+1} - \y_k} < 0
\label{eq:gr}
.\end{align}
\end{itemize}
These schemes
heuristically improve convergence rates
of FGM
and both performed similarly
well~\cite{odonoghue:15:arf,su:16:ade}.
Although
the function scheme
guarantees monotonic decreasing function values,
the gradient scheme has two advantages
over the function scheme~\cite{odonoghue:15:arf};
the gradient scheme involves only arithmetic operations
with already computed quantities,
and it is numerically more stable.
These two schemes encourage an algorithm to restart
whenever the iterates take a ``bad'' direction,
\ie,
when the function value increases
or the negative gradient and the momentum have an obtuse angle,
respectively.
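As a concrete illustration (with arbitrary problem data, not those of the original papers), a minimal NumPy sketch of FGM equipped with either restart test on the strongly convex quadratic $f(\x)=\frac{1}{2}\x\tr\Q\x$ might look as follows:

```python
import numpy as np

# Minimal sketch (illustrative problem data) of FGM with the function
# restart (FR) or gradient restart (GR) test, on the strongly convex
# quadratic f(x) = 0.5 * x'Qx, whose gradient is Qx.
def fgm_restart(Q, x0, iters, scheme="gradient"):
    L = np.linalg.eigvalsh(Q).max()
    f = lambda x: 0.5 * x @ Q @ x
    x, y, t = x0.copy(), x0.copy(), 1.0
    for _ in range(iters):
        g = Q @ x
        y_new = x - g / L                        # gradient step
        if scheme == "function":
            restart = f(y_new) > f(y)            # FR test
        else:
            restart = (-g) @ (y_new - y) < 0     # GR test
        if restart:
            t = 1.0                              # discard the momentum
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x = y_new + ((t - 1.0) / t_new) * (y_new - y)  # momentum step
        y, t = y_new, t_new
    return f(y)

Q = np.diag(np.linspace(1e-2, 1.0, 50))          # q = 1e-2, illustrative
print(fgm_restart(Q, np.ones(50), 600))          # ~0: restart recovers a linear rate
```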
However, a convergence proof that justifies
their empirical acceleration is not yet known,
so~\cite{odonoghue:15:arf}
analyzes such restarting schemes
for strongly convex quadratic functions.
An alternative scheme
in~\cite{su:16:ade} that restarts
whenever the magnitude of the momentum decreases, \ie,
$||\y_{k+1} - \y_k|| < ||\y_k - \y_{k-1}||$,
has a theoretical convergence analysis
for the function class \SL.
However,
empirically both the function and gradient schemes
performed better in~\cite{su:16:ade}.
Thus,
this paper
focuses on
adapting practical restart
schemes to OGM
and extending the
analysis in~\cite{odonoghue:15:arf} to OGM.
First we introduce a new additional adaptive scheme
designed specifically for
\cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma > 0$ (\eg, OGM).}
\subsection{Adaptive Decrease of \gam for
\cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma>0$}}
\label{sec:gr,gamma}
Sec.~\ref{sec:OGM,conv}
described
that the secondary sequence $\{\x_k\}$ of
\cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma>0$ (\eg, OGM)}
might experience overshoot and thus slow convergence,
unlike \cblue{its} primary sequence $\{\y_k\}$,
when the iterates enter
a region where the mode of the largest eigenvalue dominates.
(Sec.~\ref{sec:result,case2} illustrates such an example.)
From~\eqref{eq:wkd},
the overshoot of $\x_k$
has magnitude proportional to $|\gamma|$,
yet a suitably large \gam,
such as \gamstar \eqref{eq:optogmalpbet},
is essential for overall acceleration.
To avoid (or reduce) such overshooting,
we suggest the following adaptive scheme:
\begin{itemize}[leftmargin=25pt]
\item Gradient scheme for decreasing \gam (GD\gam):
decrease \gam whenever
\begin{align}
\inprod{\nabla f(\x_k)}{\nabla f(\x_{k-1})} < 0
\label{eq:gr,gamma}
.\end{align}
\end{itemize}
Because
the primary sequence $\{\y_k\}$ of \cblue{AFM with $\alpha=\Frac{1}{L}$}
is unlikely to overshoot,
one could choose to simply
use the primary sequence $\{\y_k\}$
as algorithm output
instead of the secondary sequence $\{\x_k\}$.
However, if one needs
to use the secondary sequence of
\cblue{AFM with $\alpha=\Frac{1}{L}$ and $\gamma>0$}
(\eg, Sec.~\ref{sec:nonstr}),
adaptive scheme
\eqref{eq:gr,gamma}
can help.
\subsection{Observable \cblue{AFM} Quantities \cblue{When $\alpha=\Frac{1}{L}$}}
\label{sec:OGM,obs}
This section revisits Sec.~\ref{sec:OGM,conv}
that suggested that
observing the evolution
of the mode coefficients $\{w_{k,i}\}$ and $\{v_{k,i}\}$
can help
identify the momentum regime.
However,
in practice
that evolution is unobservable
because the optimum $\x_*$ is unknown,
whereas
Sec.~\ref{sec:OGM,conv}
assumed $\x_* = \Zero$.
Instead we can observe the evolution of the function values,
which are related to the mode coefficients as follows:
\begin{align}
f(\y_k) = \frac{1}{2}\sum_{i=1}^d \lambda_i w_{k,i}^2,
\quad
f(\x_k) = \frac{1}{2}\sum_{i=1}^d \lambda_i v_{k,i}^2
\label{eq:cond,func}
,\end{align}
and also the inner products of the gradient and momentum, \ie,
\begin{align}
\inprod{-\nabla f(\x_k)}{\y_{k+1} - \y_k}
&= -\sum_{i=1}^d \lambda_i v_{k,i}(w_{k+1,i} - w_{k,i}),
\label{eq:cond,grad1} \\
\inprod{\nabla f(\x_k)}{\nabla f(\x_{k-1})}
&= \sum_{i=1}^d \lambda_i^2 v_{k,i}v_{k-1,i}
\label{eq:cond,grad2}
.\end{align}
These quantities appear
in the conditions for the adaptive schemes
\eqref{eq:fr},~\eqref{eq:gr}, and~\eqref{eq:gr,gamma}.
One would like to increase
$\beta$ and \gam as much as possible for acceleration
up to \betastar and \gamstar~\eqref{eq:ogmquadalp}.
However, without knowing $q$ (and \betastar,\gamstar),
\cblue{using large $\beta$ and \gam}
could end up placing
the majority of the modes
in the high momentum regime,
eventually leading to slow convergence
with oscillation as described in Sec.~\ref{sec:OGM,conv}.
To avoid such oscillation,
we
hope to detect it
using~\eqref{eq:cond,func} and~\eqref{eq:cond,grad1}
and
restart the algorithm.
We also hope to detect the overshoot~\eqref{eq:wkd}
of the modes of the large eigenvalues (in the low momentum regime)
using~\eqref{eq:cond,grad2}
so that we can then decrease \gam and avoid such overshoot.
\cblue{The rest of this section}
focuses on
the case where
$\beta > \beta_1(\gamma)$
for given \gam,
when most of the modes are in the high momentum regime.
Because the maximum
of $\rho(\T_\lambda(\Frac{1}{L},\beta,\gamma))$
occurs at the points $\lambda=\mu$ or $\lambda=L$,
we expect that%
~\eqref{eq:cond,func},~\eqref{eq:cond,grad1}, and~\eqref{eq:cond,grad2}
will be quickly dominated
by the mode of the smallest or the largest \cblue{eigen}values.
\cblue{Specifically,
substituting the expressions for $w_{k,i}$ and $v_{k,i}$
from~\eqref{eq:wkd},~\eqref{eq:wkd,c} and~\eqref{eq:wki_under}
into~\eqref{eq:cond,func},~\eqref{eq:cond,grad1}, and~\eqref{eq:cond,grad2},
keeping only the (dominating) modes of the smallest and the largest eigenvalues
\cblue{($\lambda_1=\mu$ and $\lambda_d=L$ respectively)},
leads to
the following approximations:}
\begingroup
\allowdisplaybreaks
\begin{align}
f(\y_k) \approx&\, \frac{1}{2}\mu c_1^2 \, \beta^k \, (1-\Frac{\mu}{L})^k \, \cos^2(k\psi_1),
\label{eq:mode} \\
f(\x_k) \approx&\, \frac{1}{2}\mu\hat{c}_1^2 \, \beta^k \,
(1-\Frac{\mu}{L})^k \, \cos^2(k\psi_1)
+ \frac{1}{2}L\hat{c}_d^2\gamma^{2k}
\nonumber \\
\inprod{-\nabla f(\x_k)}{\y_{k+1} - \y_k}
\approx&\, - \mu c_1\hat{c}_1 \, \beta^k \, (1-\Frac{\mu}{L})^k
\cos(k\psi_1) \nonumber \\
&\, \times \paren{ \sqrt{\beta(1-\Frac{\mu}{L})} \cos((k+1) \psi_1) - \cos(k\psi_1) },
\nonumber \\
\inprod{\nabla f(\x_k)}{\nabla f(\x_{k-1})}
\approx&\, \mu^2 \hat{c}_1^2 \, \beta^{k-\frac{1}{2}} \, (1-\Frac{\mu}{L})^{k-\frac{1}{2}}
\cos(k\psi_1) \, \cos((k-1)\psi_1) \nonumber \\
&\,
- L^2 \hat{c}_d^2 \, \gamma^{2k-1}
\nonumber
,\end{align}
\endgroup
where
$\psi_1 = \psi_1(\beta,\gamma)$ \cblue{in~\eqref{eq:psii}}.
\cblue{Furthermore,}
it is likely that
these expressions
will be dominated
by the mode
of either the smallest or largest eigenvalues,
\cblue{so we}
next analyze each case separately.
\subsubsection{Case 1: the Mode of the Smallest Eigenvalue Dominates}
\label{sec:OGM,obs1}
When the mode of the smallest eigenvalue dominates,
we further approximate
\eqref{eq:mode} as
\begingroup
\allowdisplaybreaks
\begin{align}
&f(\y_k) \approx \frac{1}{2}\mu c_1^2 \, \beta^k \, (1-\Frac{\mu}{L})^k \, \cos^2(k\psi_1),
\quad
f(\x_k) \approx \frac{1}{2}\mu \hat{c}_1^2 \, \beta^k \, (1-\Frac{\mu}{L})^k \, \cos^2(k\psi_1),
\nonumber \\
&\inprod{-\nabla f(\x_k)}{\y_{k+1} - \y_k} \label{eq:mode,s} \\
&\;\qquad\approx - \mu c_1\hat{c}_1 \, \beta^k \, (1-\Frac{\mu}{L})^k
\, \cos(k\psi_1) \, (\cos((k+1)\psi_1) - \cos(k\psi_1))
\nonumber \\
&\;\qquad= 2 \mu c_1\hat{c}_1 \, \beta^k \, (1-\Frac{\mu}{L})^k
\, \cos(k\psi_1) \sin((k+\Frac{1}{2})\psi_1)
\, \sin(\Frac{\psi_1}{2})
\nonumber \\
&\;\qquad\approx 2\mu c_1\hat{c}_1 \, \sin(\Frac{\psi_1}{2})
\, \beta^k \, (1-\Frac{\mu}{L})^k \, \sin(2k\psi_1)
\nonumber
,\end{align}
\endgroup
using simple trigonometric identities
and the approximations
$\sqrt{\beta(1-\Frac{\mu}{L})} \approx 1$
and
$\sin(k\psi_1) \approx \sin((k+\Frac{1}{2})\psi_1)$
\cblue{for small $\mu$ (leading to small $\psi_1$ in~\eqref{eq:psii})}.
The values~\eqref{eq:mode,s}
exhibit oscillations at a frequency proportional to
\(
\psi_1(\beta,\gamma)
\)
in
\eqref{eq:psii}.
This oscillation
can be
detected by the conditions~\eqref{eq:fr} and~\eqref{eq:gr}
and is useful
in detecting
the high momentum regime
where a restart
can help improve the convergence rate.
\subsubsection{Case 2: the Mode of the Largest Eigenvalue Dominates}
\label{sec:OGM,obs2}
Unlike the primary sequence $\{\y_k\}$
of \cblue{AFM with $\alpha=\Frac{1}{L}$ (\eg, OGM)},
convergence of
\cblue{its} secondary sequence $\{\x_k\}$
may be dominated by the mode of the largest eigenvalue
in~\eqref{eq:wkd} \cblue{and~\eqref{eq:wkd,c}}.
By further approximating~\eqref{eq:mode} for
the case when the mode of the largest eigenvalue dominates,
the function value
\(
f(\x_k) \approx \frac{1}{2} L \hat{c}_d^2 \, \gamma^{2k}
\)
decreases slowly but monotonically,
whereas
$f(\y_k) \approx f(\x_*) = 0$
and
\(
\inprod{-\nabla f(\x_k)}{\y_{k+1} - \y_k}
\approx 0
.\)
Therefore,
neither restart condition~\eqref{eq:fr} nor~\eqref{eq:gr}
can detect such non-oscillatory observable values,
even though the secondary mode $\{w_{k,d}\}$
of the largest eigenvalue is oscillating
(corresponding to overshooting over the optimum).
However,
the inner product of two sequential gradients:
\begin{align}
\inprod{\nabla f(\x_k)}{\nabla f(\x_{k-1})}
\approx -L^2 \hat{c}_d^2 \, \gamma^{2k-1}
\end{align}
can detect the overshoot of the secondary sequence $\{\x_k\}$,
suggesting that the algorithm should adapt by decreasing
\gam
when condition~\eqref{eq:gr,gamma} holds.
Decreasing \gam too much
may slow down the overall convergence rate
when the mode of the smallest eigenvalue
\cblue{is not negligible.}
Thus,
we use~\eqref{eq:gr,gamma}
only
when using
the secondary sequence \cblue{$\{\x_k\}$}
as algorithm output
(\eg, Sec.~\ref{sec:nonstr}).
\section{Proposed Adaptive Schemes for OGM}
\label{sec:propOGM}
\subsection{Adaptive Scheme of OGM for Smooth and Strongly Convex Problems}
\label{sec:OGMrestart}
\algref{alg:OGMrestart}
illustrates
a new adaptive version of OGM\cblue{$'$}
\cblue{(rather than OGM)}\footnote{
\label{ft:ogmp}
OGM requires choosing the number of iterations $N$ in advance
for computing $\theta_N$ in Table~\ref{tab:alg1},
which seems incompatible with adaptive restarting schemes.
\cblue{In contrast,} the parameters $t_k$
in Table~\ref{tab:alg1} and \algref{alg:OGMrestart}
\cblue{are independent of $N$.}
The fact that $\theta_N$
is larger than $t_N$
at the last ($N$th) iteration
helps to dampen
(by reducing the values of $\beta$ and \gam)
the final update
to guarantee a faster \cblue{(optimal)}
worst-case rate \cblue{for the last secondary iterate $\x_N$}.
This property
was studied in~\cite{kim:17:otc}.
We could perform one last update using $\theta_N$
after a restart condition is satisfied,
but this
step appears unnecessary because
restarting already has the effect of
\cblue{dampening (reducing $\beta$ and $\gamma$)}.
Thus,
\cblue{\algref{alg:OGMrestart} uses OGM$'$ instead
that uses $t_k$
and that has a worst-case rate
that is similar to that of OGM.}}
that is used in our numerical experiments in Sec.~\ref{sec:result}.
When a restart condition is satisfied in~\algref{alg:OGMrestart},
we reset $t_k = 1$
to discard
the previous momentum
that has a bad direction.
When the decreasing \gam condition
is satisfied in~\algref{alg:OGMrestart},
we decrease $\sigma$
to suppress undesirable overshoot
of the secondary sequence $\{\x_k\}$.
Although
the analysis in Sec.~\ref{sec:quadanal}
considered only strongly convex quadratic functions,
the numerical experiments in Sec.~\ref{sec:result}
illustrate that the adaptive scheme
is also useful
more generally
for smooth convex functions in \FL,
as described in~\cite[Sec. 4.6]{odonoghue:15:arf}.
\begin{algorithm}[htbp]
\caption{OGM\cblue{$'$} with restarting momentum and decreasing \gam}
\label{alg:OGMrestart}
\begin{algorithmic}[1]
\State {\bf Input:}
$f\in\SL$ or $\FL$,
$\x_{-1} = \x_0 = \y_0\in\Reals^d$,
$t_0=\sigma=1$, $\bsig \in [0,\;1]$.
\For{$k \ge 0$}
\State $\y_{k+1} = \x_k - \frac{1}{L}\nabla f(\x_k)$
\If{$f(\y_{k+1}) > f(\y_k)$
(or $\Inprod{-\nabla f(\x_k)}{\y_{k+1} - \y_k} < 0$)}
\Comment{Restart condition}
\State $t_k = 1$, $\sigma \leftarrow 1$
\ElsIf{$\Inprod{\nabla f(\x_k)}{\nabla f(\x_{k-1})} < 0$}
\Comment{Decreasing \gam condition}
\State $\sigma \leftarrow \bsig \sigma$
\EndIf
\State $t_{k+1} = \frac{1}{2}\paren{1 + \sqrt{1 + 4t_k^2}}$
\State $\x_{k+1} = \y_{k+1}
+ \frac{t_k-1}{t_{k+1}}(\y_{k+1} - \y_k)
+ \sigma\frac{t_k}{t_{k+1}}(\y_{k+1} - \x_k)$
\EndFor
\end{algorithmic}
\end{algorithm}
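For concreteness, the following is a minimal NumPy transcription of \algref{alg:OGMrestart}, specialized to the quadratic $f(\x)=\frac{1}{2}\x\tr\Q\x$ whose gradient is $\Q\x$; the problem data are illustrative:

```python
import numpy as np

# Minimal NumPy transcription of Algorithm 1 (OGM' with restarting momentum
# and decreasing gamma) for f(x) = 0.5 * x'Qx, grad f(x) = Qx.
# sigma_bar < 1 activates the decreasing-gamma (GD-gamma) scheme.
def ogm_restart(Q, x0, iters, sigma_bar=1.0):
    L = np.linalg.eigvalsh(Q).max()
    f = lambda x: 0.5 * x @ Q @ x
    x, y = x0.copy(), x0.copy()
    g_prev = Q @ x0                  # grad f(x_{-1}), with x_{-1} = x_0
    t, sigma = 1.0, 1.0
    for _ in range(iters):
        g = Q @ x
        y_new = x - g / L
        if f(y_new) > f(y):          # restart condition (function scheme)
            t, sigma = 1.0, 1.0
        elif g @ g_prev < 0:         # decreasing-gamma condition
            sigma *= sigma_bar
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        x = (y_new
             + ((t - 1.0) / t_new) * (y_new - y)
             + sigma * (t / t_new) * (y_new - x))
        y, g_prev, t = y_new, g, t_new
    return f(y)

Q = np.diag(np.linspace(1e-2, 1.0, 50))   # q = 1e-2, illustrative
print(ogm_restart(Q, np.ones(50), 600))   # small: restart recovers a linear rate
```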
\subsection{
Adaptive Scheme of a Proximal Version of OGM for Nonsmooth Composite Convex Problems
}
\label{sec:nonstr}
Modern applications often involve
nonsmooth composite convex problems:
\begin{align}
\argmin{\x} \; \{F(\x) := f(\x) + \phi(\x)\}
\label{eq:nonsmooth}
,\end{align}
where $f \in \FL$ is a smooth convex function
(typically not strongly convex)
and $\phi\in\Finf$ is a convex function
that is possibly nonsmooth
and ``proximal-friendly''~\cite{combettes:11:psm},
such as the $\ell_1$ regularizer
$\phi(\x) = ||\x||_1$.
Our numerical experiments in Sec.~\ref{sec:result}
show that
a new adaptive version
of a proximal variant of OGM
can be useful for solving such problems.
To solve~\eqref{eq:nonsmooth},
\cite{beck:09:afi} developed
a fast proximal gradient method,
popularized under the name
fast iterative shrinkage-thresh\allowbreak olding algorithm (FISTA).
\cblue{FISTA has the same rate as FGM in Table~\ref{tab:alg1}
for solving~\eqref{eq:nonsmooth},
by simply replacing line 3 of \algref{alg:ogm}
(which uses FGM coefficients)
by $\y_{k+1} = \prox_{\alpha\phi}(\x_k - \alpha\nabla f(\x_k))$,
where the proximity operator is defined as
$
\prox_{h}(\z)
:= \argmin{\x}\{\frac{1}{2}||\z - \x||^2 + h(\x)\}
.$}
Variants of FISTA
with adaptive restart
were studied in~\cite[Sec. 5.2]{odonoghue:15:arf}.
Inspired by the fact that OGM
\cblue{has a worst-case rate} faster than FGM,
\cite{taylor:17:ewc}
studied a proximal variant\footnote{
Applying the proximity operator
to the primary sequence $\{\y_k\}$ of OGM,
similar to the extension of FGM to FISTA,
leads to
a poor worst-case rate~\cite{taylor:17:ewc}.
Therefore,~\cite{taylor:17:ewc}
applied the proximity operator
to the secondary sequence of OGM
and showed numerically
that this version has a worst-case rate
about twice as fast as that of FISTA.
} of OGM (POGM).
It is natural
to pursue acceleration of POGM\footnote{
\cblue{Like OGM,}
POGM
in~\cite[\cblue{Sec.~4.3}]{taylor:17:ewc}
requires choosing the number of iterations $N$ in advance
for computing $\theta_N$,
\cblue{and this is incompatible with adaptive restarting schemes.
Therefore, analogous to using OGM$'$ instead of OGM
for an adaptive scheme
in \algref{alg:OGMrestart} (see footnote~\ref{ft:ogmp}),
\algref{alg:POGMrestart}
uses a proximal version of OGM$'$
(rather than the POGM in~\cite{taylor:17:ewc})
with restart.
An extension of OGM$'$ (without restart)
to a proximal version
with a fast worst-case rate
is not yet known.}}
by using
variations of any (or all)
of the three adaptive schemes
\eqref{eq:fr},
\eqref{eq:gr},
\eqref{eq:gr,gamma},
as illustrated in \algref{alg:POGMrestart}.
\cblue{Regarding} a function restart condition for POGM,
we use
\begin{align}
F(\x_{k+1}) > F(\x_k)
\end{align}
instead of $F(\y_{k+1}) > F(\y_k)$,
because $F(\y_k)$ can be unbounded
(\eg, $\y_k$ can be unfeasible for constrained problems).
For gradient conditions of POGM,
we consider the composite gradient mapping
\cblue{$G(\x_k) \in \nabla f(\x_k) + \partial \phi(\x_{k+1})$}
in \algref{alg:POGMrestart}
that differs from the standard composite gradient mapping
in~\cite{nesterov:13:gmf}.
We then use the gradient conditions
\begin{align}
\Inprod{-G(\x_k)}{\y_{k+1} - \y_k} < 0,
\quad
\Inprod{G(\x_k)}{G(\x_{k-1})} < 0
\label{eq:GR,gamma}
\end{align}
for restarting POGM or decreasing \gam of POGM
respectively.
Here POGM must output the secondary sequence $\{\x_k\}$
because the function value $F(\y_k)$ of the primary sequence
may be unbounded.
This situation
was the motivation
for~\eqref{eq:gr,gamma}
\cblue{(and the second inequality of~\eqref{eq:GR,gamma})}
and Sec.~\ref{sec:gr,gamma}.
When $\phi(\x) = 0$,
\algref{alg:POGMrestart} reduces to an algorithm that is similar to
\algref{alg:OGMrestart},
where only the location of the restart and decreasing $\gamma$ conditions differs.
\begin{algorithm}[htbp]
\caption{\color{black} POGM\cblue{$'$} with restarting momentum and decreasing \gam}
\label{alg:POGMrestart}
\begin{algorithmic}[1]
\State {\bf Input:} $f\in\FL$, $\phi\in\Finf$,
$\x_{-1} = \x_0 = \y_0 = \u_0 = \z_0 \in\Reals^d$, \\
\qquad\quad $t_0=\zeta_0=\sigma=1$, $\bsig \in [0,\;1]$.
\For{$k \ge 0$}
\State $\u_{k+1} = \x_k - \frac{1}{L}\nabla f(\x_k)$
\State $t_{k+1} = \frac{1}{2}\paren{1 + \sqrt{1 + 4t_k^2}}$
\State $\z_{k+1} = \u_{k+1} + \frac{t_k-1}{t_{k+1}}(\u_{k+1} - \u_k)
+ \sigma\frac{t_k}{t_{k+1}}(\u_{k+1} - \x_k)
- \frac{t_k-1}{t_{k+1}}\frac{1}{L\zeta_k}(\x_k - \z_k)$
\State $\zeta_{k+1} = \frac{1}{L}
\paren{1 + \frac{t_k-1}{t_{k+1}} + \sigma\frac{t_k}{t_{k+1}}}$
\State $\x_{k+1} = \prox_{\zeta_{k+1}\phi}(\z_{k+1})$
\State $G(\x_k) = \nabla f(\x_k) - \frac{1}{\zeta_{k+1}}(\x_{k+1} - \z_{k+1})$
\State $\y_{k+1} = \x_k - \frac{1}{L}G(\x_k)$
\If{$F(\x_{k+1}) > F(\x_k)$
(or $\Inprod{-G(\x_k)}{\y_{k+1} - \y_k} < 0$)}
\Comment{Restart condition}
\State $t_{k+1} = 1$, $\sigma \leftarrow 1$
\ElsIf{$\Inprod{G(\x_k)}{G(\x_{k-1})} < 0$}
\Comment{Decreasing \gam condition}
\State $\sigma \leftarrow \bsig \sigma$
\EndIf
\EndFor
\end{algorithmic}
\end{algorithm}
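For concreteness, the following is a minimal NumPy transcription of \algref{alg:POGMrestart} using the function restart condition; the lasso instance at the end is an illustrative stand-in, not an experiment from this paper:

```python
import numpy as np

# Minimal NumPy transcription of Algorithm 2 (POGM' with restarting momentum
# and decreasing gamma), using the function restart condition.
def pogm_restart(gradf, F, prox, L, x0, iters, sigma_bar=0.8):
    x, u, z = x0.copy(), x0.copy(), x0.copy()
    G_prev = np.zeros_like(x0)
    t, zeta, sigma = 1.0, 1.0, 1.0
    for _ in range(iters):
        g = gradf(x)
        u_new = x - g / L
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z_new = (u_new
                 + ((t - 1.0) / t_new) * (u_new - u)
                 + sigma * (t / t_new) * (u_new - x)
                 - ((t - 1.0) / t_new) / (L * zeta) * (x - z))
        zeta_new = (1.0 + (t - 1.0) / t_new + sigma * t / t_new) / L
        x_new = prox(z_new, zeta_new)            # proximal step
        G = g - (x_new - z_new) / zeta_new       # composite gradient mapping
        if F(x_new) > F(x):                      # restart condition
            t_new, sigma = 1.0, 1.0
        elif G @ G_prev < 0:                     # decreasing-gamma condition
            sigma *= sigma_bar
        x, u, z, zeta, t, G_prev = x_new, u_new, z_new, zeta_new, t_new, G
    return x

# Illustrative use on a tiny sparse linear regression (lasso) instance.
A = np.vstack([np.eye(10), np.diag(np.linspace(0.5, 1.0, 10))])
x_true = np.r_[2.0, -1.0, 3.0, np.zeros(7)]
b = A @ x_true
tau = 0.1
L = np.linalg.eigvalsh(A.T @ A).max()
F = lambda x: 0.5 * np.sum((A @ x - b) ** 2) + tau * np.sum(np.abs(x))
gradf = lambda x: A.T @ (A @ x - b)
prox = lambda z, c: np.sign(z) * np.maximum(np.abs(z) - c * tau, 0.0)
x_hat = pogm_restart(gradf, F, prox, L, np.zeros(10), 200)
```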
\section{Numerical Results}
\label{sec:result}
This section shows the results of applying
OGM\cblue{$'$} and POGM\cblue{$'$}
with adaptive schemes
\cblue{in \algref{alg:OGMrestart} and \algref{alg:POGMrestart}}
to
various numerical examples including
both strongly convex quadratic problems
and non-strongly convex problems.\footnote{
Software for the algorithms and for producing the figures in Sec.~\ref{sec:result}
is available at
\url{https://gitlab.eecs.umich.edu/michigan-fast-optimization/ogm-adaptive-restart}.
}
\cblue{(For simplicity,
we omit the prime symbol of OGM$'$ and POGM$'$ with adaptive restart
hereafter.)}
The results illustrate that
OGM (or POGM) with adaptive schemes
converges faster
than FGM (or FISTA) with adaptive restart.
The plots show the decrease of $F(\y_k)$ of the primary sequence
for FGM (FISTA) and OGM unless specified.
For POGM, we use the secondary sequence $\{\x_k\}$ as an output
and plot $F(\x_k)$,
since $F(\y_k)$ can be unbounded.
\subsection{Strongly Convex Quadratic Examples}
This section considers two types
of strongly convex quadratic examples,
where the mode of either the smallest eigenvalue
or the largest eigenvalue dominates,
providing examples of the analysis
in Sec.~\ref{sec:OGM,obs1} and~\ref{sec:OGM,obs2}
respectively.
\subsubsection{Case 1: the Mode of the Smallest Eigenvalue Dominates}
Fig.~\ref{fig:toy} compares
GM, FGM and OGM,
with or without the knowledge of $q$,
for minimizing a strongly convex quadratic function~\eqref{eq:quad}
in $d=500$ dimensions with $q=10^{-4}$,
where we generated \A (for $\Q=\A\tr\A$) and \vp randomly.
As expected,
knowing $q$ accelerates convergence.
Fig.~\ref{fig:toy} also illustrates that
adaptive restart helps FGM and OGM
to nearly achieve
the fast linear convergence rate
of their non-adaptive versions that know $q$.
As expected,
OGM \cblue{variants} converge faster than FGM \cblue{variants} for all cases.
In Fig.~\ref{fig:toy},
`FR' and `GR' stand for function restart~\eqref{eq:fr}
and gradient restart~\eqref{eq:gr},
respectively,
and both behave nearly the same.
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=0.55\textwidth]{fig,toy.eps}
\end{center}
\caption{Minimizing a strongly convex quadratic function -
Case 1: the mode of the smallest eigenvalue dominates.
(FGM-FR and FGM-GR are almost indistinguishable,
as are
OGM-FR and OGM-GR.)
}
\label{fig:toy}
\vspace{-3pt}
\end{figure}
\subsubsection{Case 2: the Mode of the Largest Eigenvalue Dominates}
\label{sec:result,case2}
Consider the strongly convex quadratic function
with
\(
\Q =
\left[ \begin{array}{cc} q & 0 \\ 0 & 1 \end{array}\right]
,\)
$q = 0.01$,
$\vp = \Zero$
and $\x_* = \Zero$.
When starting the algorithm
from the initial point
\(
\x_0 = (0.2,\; 1)
,\)
the secondary sequence $\{\x_k\}$ of OGM-GR\footnote{
Fig.~\ref{fig:worst,toy}
only compares the results of the gradient restart (GR) scheme for simplicity,
since the function restart (FR) scheme behaves similarly.
}
(or equivalently OGM-GR-GD\gam$(\bsig=1.0)$)
is dominated by the mode of largest eigenvalue in Fig.~\ref{fig:worst,toy},
illustrating
the analysis of Sec.~\ref{sec:OGM,obs2}.
Fig.~\ref{fig:worst,toy} illustrates that
the primary sequence of OGM-GR converges faster
than that of FGM-GR,
whereas the secondary sequence of OGM-GR
initially converges
even more slowly than GM.
To deal with such slow convergence
coming from the overshooting behavior
of the mode of the largest eigenvalue
of the secondary sequence of OGM,
we employ the decreasing \gam scheme in~\eqref{eq:gr,gamma}.
Fig.~\ref{fig:worst,toy} shows that using
$\bsig < 1$ in~\algref{alg:OGMrestart}
leads to overall faster convergence
of the secondary sequence $\{\x_k\}$
than the standard OGM-GR
where $\bsig = 1$.
We leave optimizing the choice of \bsig
or studying other strategies for decreasing \gam
as future work.
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=0.55\textwidth]{fig,worst,toy.eps}
\end{center}
\caption{Minimizing a strongly convex quadratic function -
Case 2: the mode of the largest eigenvalue dominates
for the secondary sequence $\{\x_k\}$ of OGM.
Using GD\gam~\eqref{eq:gr,gamma} with $\bsig < 1$ accelerates convergence
of the secondary sequence of OGM-GR,
where both the primary and secondary sequences
behave similarly
after the first few iterations,
unlike $\bsig=1$.
}
\label{fig:worst,toy}
\vspace{-3pt}
\end{figure}
\subsection{Non-strongly Convex Examples}
This section applies adaptive OGM (or POGM)
to three non-strongly convex numerical examples
in~\cite{odonoghue:15:arf,su:16:ade}.
The numerical results show that
adaptive OGM (or POGM)
converges
faster than FGM (or FISTA) with adaptive restart.
\subsubsection{Log-Sum-Exp}
The following function
from~\cite{odonoghue:15:arf}
is smooth but non-strongly convex:
\[
f(\x) =
\eta\log\paren{\sum_{i=1}^m\exp\paren{\frac{1}{\eta}(\a_i^\top\x - b_i)}}
.\]
It approaches
$\max_{i = 1,\ldots,m} (\a_i^\top\x - b_i)$
as $\eta \to 0$.
Here,
$\eta$ controls the function smoothness
$L = \frac{1}{\eta}\lambda_{\max}(\A^\top\A)$
where $\A = [\a_1 \cdots \a_m]^\top \in \Reals^{m\times d}$.
The region around the optimum is approximately quadratic
since the function is smooth,
and thus the adaptive restart can be useful
without knowing the local condition number.
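As a practical aside (not part of the original experiments), evaluating this objective for small $\eta$ requires the usual max-shift to avoid overflow; a small sketch with names of our choosing:

```python
import numpy as np

# Numerically stable evaluation of the Log-Sum-Exp objective above:
# shifting by the max before exponentiating avoids overflow for small eta.
def logsumexp_f(x, A, b, eta):
    s = (A @ x - b) / eta
    m = s.max()
    return eta * (m + np.log(np.sum(np.exp(s - m))))

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
b = np.zeros(3)
x = np.array([3.0, 4.0])
# Sandwich bound: max_i(a_i'x - b_i) <= f(x) <= max_i(...) + eta*log(m).
logsumexp_f(x, A, b, 1e-2)   # close to max(Ax - b) = 7
```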
For $(m,d)=(100,20)$,
we randomly generated $\a_i\in\Reals^d$ and $b_i\in\Reals$
for $i=1,\ldots,m$,
and investigated $\eta=1,10$.
Fig.~\ref{fig:logsumexp}
shows that OGM with adaptive restart
converges faster
than FGM with the adaptive restart.
The benefit of adaptive restart is dramatic here;
apparently
FGM and OGM enter
a locally strongly convex region
after about $100$--$200$ iterations,
where adaptive restart
then provides a fast linear rate.
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{fig,logsumexp1.eps}
\includegraphics[clip,width=0.48\textwidth]{fig,logsumexp2.eps}
\end{center}
\caption{Minimizing a smooth but non-strongly convex Log-Sum-Exp function.
}
\label{fig:logsumexp}
\vspace{-3pt}
\end{figure}
\subsubsection{Sparse Linear Regression}
\label{sec:sparselr}
Consider the following cost function
used for sparse linear regression:
\[
f(\x) = \frac{1}{2}||\A\x - \bb||_2^2
,\quad
\phi(\x) = \tau ||\x||_1
\label{eq:sparselr}
,\]
for $\A\in\Reals^{m\times d}$,
where $L = \lambda_{\max}(\A^\top\A)$
and the parameter $\tau$ balances between
the measurement error and signal sparsity.
The proximity operator
becomes
a soft\hyp thresholding operator,
\eg,
$
\prox_{\zeta_{k+1}\phi}(\x)
= \sgn(\x)\max\big\{|\x|-\zeta_{k+1}\tau,0\big\}
$.
The minimization seeks a sparse solution $\x_*$,
and often the cost function is strongly convex
with respect to the non-zero elements of $\x_*$.
Thus
we expect to benefit from adaptive restarting.
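In NumPy, this operator is a one-line sketch (vectorized over the entries of $\x$; the function name is ours):

```python
import numpy as np

# Vectorized soft-thresholding operator described above.
def soft_threshold(x, thresh):
    return np.sign(x) * np.maximum(np.abs(x) - thresh, 0.0)

soft_threshold(np.array([-2.0, 0.3, 1.5]), 0.5)   # -> [-1.5, 0.0, 1.0]
```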
For each choice of $(m, d, s, \tau)$ in Fig.~\ref{fig:sparselr},
we generated an $s$-sparse true vector $\x_{\mathrm{true}}$
by taking the $s$ largest entries of a randomly generated vector.
We then simulated
$\bb = \A\x_{\mathrm{true}} + \omegaa$,
where the entries of matrix \A and vector \omegaa
were sampled from a zero-mean normal distribution
with variances $1$ and $0.1$ respectively.
Fig.~\ref{fig:sparselr}
illustrates that POGM with adaptive schemes
provides acceleration over FISTA with adaptive restart.
While Sec.~\ref{sec:OGM,conv} discussed the undesirable overshooting behavior
that a secondary sequence of OGM (or POGM) may encounter,
these examples rarely encountered such behavior.
Therefore the choice of \bsig in the adaptive POGM
was not significant in this experiment,
unlike Sec.~\ref{sec:result,case2}.
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{fig,sparselr1.eps}
\includegraphics[clip,width=0.48\textwidth]{fig,sparselr2.eps}
\end{center}
\caption{Solving a sparse linear regression problem.
(ISTA is a proximal variant of GM.)}
\label{fig:sparselr}
\vspace{-3pt}
\end{figure}
\subsubsection{Constrained Quadratic Programming}
Consider the following box-constrained quadratic program:
\[
f(\x) = \frac{1}{2}\x^\top\Q\x - \vp^\top \x,
\quad
\phi(\x) = \begin{cases}
0, & \lll \preceq \x \preceq \u, \\
\infty, & \text{otherwise}
\end{cases}
,\]
where $L = \lambda_{\max}(\Q)$.
The ISTA (a proximal variant of GM),
FISTA and POGM use the projection operator:
\(
\prox_{\frac{1}{L}\phi}(\x)
= \prox_{\zeta_{k+1}\phi}(\x)
= \min\{\max\{\x, \lll\}, \u\}
.\)
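In NumPy, this projection is simply elementwise clipping (a small sketch with names of our choosing):

```python
import numpy as np

# Elementwise box projection used by the projected methods above.
def box_project(x, lo, hi):
    return np.minimum(np.maximum(x, lo), hi)

box_project(np.array([-2.0, 0.5, 3.0]), -1.0, 1.0)   # -> [-1.0, 0.5, 1.0]
```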
In Fig.~\ref{fig:quad},
the algorithms are denoted as
projected GM, projected FGM, and projected OGM, respectively.
Similar to Sec.~\ref{sec:sparselr},
after the algorithm identifies the active constraints
the problem typically becomes a strongly convex quadratic problem
where we expect to benefit from adaptive restart.
Fig.~\ref{fig:quad} studies two examples
with problem dimensions $d = 500,1000$,
where we randomly generated a positive definite matrix \Q
with condition number $10^7$ (\ie, $q = 10^{-7}$),
and a vector \vp.
Vectors \lll and \u
correspond to the interval constraints
$-1 \leq x_i \leq 1$ for $\x = \{x_i\}$.
The optimum $\x_*$ had $47$ and $81$ active constraints
out of the $500$ and $1000$ variables, respectively.
In Fig.~\ref{fig:quad},
the projected OGM with adaptive schemes
converged faster than
the projected FGM with adaptive restart
and other non-adaptive algorithms.
\begin{figure}[htbp]
\begin{center}
\includegraphics[clip,width=0.48\textwidth]{fig,quad1.eps}
\includegraphics[clip,width=0.48\textwidth]{fig,quad2.eps}
\end{center}
\caption{Solving a box-constrained quadratic programming problem.}
\label{fig:quad}
\vspace{-15pt}
\end{figure}
\section{Conclusions}
\label{sec:conc}
We introduced adaptive restarting schemes
for the optimized gradient method (OGM)
to heuristically provide a fast linear convergence rate
when the function is strongly convex
or even when the function is not globally strongly convex.
The method resets
the momentum
when it takes a ``bad'' direction.
We provided a heuristic dynamical system analysis
to justify the practical acceleration of the adaptive scheme of OGM,
by extending the existing analysis of the fast gradient method (FGM).
On the way, we described
\cblue{a new accelerated gradient method named OGM-$q$}
for strongly convex quadratic problems.
Numerical results illustrate that
the proposed adaptive approach
practically accelerates the convergence rate of OGM,
and in particular,
performs faster than FGM with adaptive restart.
An interesting open problem is to
determine the worst-case rates
for OGM (and FGM) with adaptive restart.
\begin{acknowledgements}
This research was supported in part by NIH grant U01 EB018753.
\end{acknowledgements}
\bibliographystyle{spmpsci_unsrt}
\subsection{Data Analytic Pipelines}
The data analytic pipeline refers to a framework consisting of a sequence of computational transformations on the data to produce the final predictions (or outputs) \cite{kumar2015model}. Pipelines help users better understand and organize the analysis task, as well as increase the reusability of algorithm implementations in each step.
Several widely adopted machine learning toolboxes provide the functionality to run analytic pipelines. Scikit-learn \cite{pedregosa2011scikit} and Spark ML \cite{mengml} provide programmatic ways to instantiate a pipeline. SPSS \cite{coakes2009spss} and RapidMiner \cite{mierswa2006yale} provide a visual way to assemble and run an analytic pipeline instance. Microsoft Azure Machine Learning\footnote{\url{https://studio.azureml.net}} provides a similar capability in a cloud setting. There are also specialized pipelines, such as PARAMO \cite{ng2014paramo} in healthcare data analysis.
However, a major difficulty in using these systems is that none of the above-described tools efficiently helps users decide which algorithms to use in each step. Some of the tools, such as scikit-learn, Spark ML, and PARAMO, allow searching all possible pipeline paths and tuning the hyperparameters of each step using an expensive grid search approach. While the search process can be sped up by running in parallel, the search space is still too large for exhaustive search algorithms.
\subsection{Bayesian Optimization}
Bayesian optimization is a well-established technique for global and black-box optimization problems. In a nutshell, it comprises two main components: a probabilistic model and an acquisition function. For the probabilistic model, there are several popular choices: Gaussian processes \cite{snoek2012practical,snoek2015scalable}, random forests as used in Sequential Model-based Algorithm Configuration (SMAC) \cite{hutter2011sequential}, and density estimation models such as the Tree-structured Parzen Estimator (TPE) \cite{bergstra2011algorithms}. Given any of these models, the posterior mean and variance of a new input can be computed and used to compute the acquisition function. The acquisition function defines the criterion used to determine future input candidates for evaluation. Compared to the objective function, the acquisition function is chosen to be relatively cheap to evaluate, so that the most promising next input for querying can be found quickly. Various forms of acquisition functions have been proposed \cite{srinivas2009gaussian,hoffman2011portfolio,villemonteix2009informational,hoffman2014correlation}. One of the most prominent acquisition functions is the \emph{Expected Improvement} (EI) function \cite{mockus1978application}, which has been widely used in Bayesian optimization. In this work, we use EI as our acquisition function, which is formally described in Section~\ref{sec:method}.
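For reference, the classical closed form of EI for minimization \cite{mockus1978application}, given a Gaussian posterior with mean $\mu(x)$ and standard deviation $\sigma(x)$ at a candidate input and incumbent best observed value $f_{\mathrm{best}}$, can be sketched as follows (function names are ours):

```python
import math

# Closed-form Expected Improvement (EI) for minimization, given the
# posterior mean mu and standard deviation sigma at a candidate input and
# the incumbent best observed value f_best.
def expected_improvement(mu, sigma, f_best):
    if sigma <= 0.0:
        return max(f_best - mu, 0.0)   # no posterior uncertainty
    z = (f_best - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))         # normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # normal PDF
    return (f_best - mu) * Phi + sigma * phi

# A candidate whose mean matches the incumbent still has positive EI
# because of its posterior uncertainty:
expected_improvement(mu=0.0, sigma=1.0, f_best=0.0)   # ~0.3989
```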
Bayesian optimization is known to be successful in tuning hyperparameters for various learning algorithms on different types of tasks \cite{snoek2015scalable,feurer2015initializing,bergstra2013making,snoek2012practical,wang2013bayesian}. Recently, for the problem of tuning pipeline configurations, several Bayesian optimization-based systems have been proposed: Auto-WEKA \cite{thornton2013auto}, which applies SMAC \cite{hutter2011sequential} to WEKA \cite{hall2009weka}; auto-sklearn \cite{feurer2015efficient}, which applies SMAC to scikit-learn \cite{pedregosa2011scikit}; and hyperopt-sklearn \cite{komer2014hyperoptsklearn}, which applies TPE \cite{bergstra2011algorithms} to scikit-learn. The basic idea of applying Bayesian optimization to pipeline tuning is to expand the hyperparameters of all algorithms and create a large search space over which to perform the optimization, as we will show in the experiments. However, for practical pipelines the space becomes too large, which hinders convergence of the optimization process. Auto-sklearn \cite{feurer2015efficient} uses a meta-learning algorithm that leverages the performance history of algorithms on existing datasets to reduce the search space. However, in real-world applications, we often have unique datasets and tasks, such that finding similar datasets and problems for the meta-learning algorithm will be difficult.
\section{Introduction}
\label{sec:intro}
\input{intro.tex}
\section{Background and Related Work}
\label{sec:back}
\input{back.tex}
\section{Methodology}
\label{sec:method}
\input{method.tex}
\section{Benchmark Experiments}
\label{sec:exp}
\input{exp.tex}
\section{Real-world Experiments}
\label{sec:real}
\input{exp_real.tex}
\section{Conclusions}
\label{sec:future}
\input{conclusion.tex}
\section{Acknowledgments}
This work was supported by the National Science Foundation, awards IIS-\#1418511 and CCF-\#1533768, a research partnership between Children's Healthcare of Atlanta and the Georgia Institute of Technology, the CDC I-SMILE project, a Google Faculty Award, Sutter Health and UCB.
\clearpage
\subsection{Results and Discussions}
Table~\ref{table:real_perf} shows the performance of our methods compared to the baselines when caching is enabled. Due to lack of space we only report the test performance. In all cases FLASH and FLASH$^\star$ significantly outperform all the baselines.
Figure~\ref{fig:cache} shows the performance of FLASH without caching and the original FLASH with caching on the real-world medical dataset. With caching, more pipeline paths can be evaluated within a given period of time, and our EIPS-based path selection leverages caching to select paths with high performance that run fast. As a result, FLASH with caching converges much faster; for example, with caching it reaches a low test error within 6 hours.
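The benefit of caching comes from reusing the output of an already-computed pipeline prefix. A minimal sketch of this idea, assuming a fixed training dataset so that a prefix configuration uniquely determines its intermediate output (our simplification, not the system's actual implementation):

```python
_cache = {}

def run_pipeline(steps, data):
    """Run a pipeline given as a list of (name, params, fn) steps,
    reusing the cached intermediate result for any prefix of steps
    that has already been computed on the same (fixed) dataset."""
    key = ()
    for name, params, fn in steps:
        key = key + ((name, tuple(sorted(params.items()))),)
        if key in _cache:
            data = _cache[key]          # cache hit: skip recomputation
        else:
            data = fn(data, **params)   # cache miss: compute and store
            _cache[key] = data
    return data
```

Two pipeline paths sharing an expensive preprocessing prefix then pay for that prefix only once, which is where the speed-up on large datasets comes from.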
\subsection{Results and Discussions}
Table~\ref{table:benchmark_perf} reports the experimental results on the benchmark datasets. For each dataset, we report the performance achieved within three different time budgets. As shown in the table, our methods FLASH and FLASH$^\star$ consistently perform significantly better than the baselines in all settings, in terms of both lower error rate and faster convergence.
For example, on the \textsc{Madelon} dataset, our methods reach around 12\% test error in only 3 hours, while the baselines are still far from that even after 10 hours.
Performing statistical significance tests via bootstrapping, we find that FLASH and FLASH$^\star$ often tie with each other on these benchmark datasets. For all the methods, the test error is quite consistent with the validation error, showing that the potential overfitting problem is well prevented by using cross validation.
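The bootstrap tie test used here can be sketched as follows; the function below is a generic paired bootstrap on per-run errors, with the resample count and significance level as parameters (the exact test statistic used in the paper may differ).

```python
import random

def bootstrap_tie(errors_a, errors_b, n_boot=10000, alpha=0.05, seed=0):
    """Paired bootstrap test on the mean error difference between two
    methods over matched runs. Returns True if the (1 - alpha)
    confidence interval for the difference contains 0, i.e. the two
    methods are statistically indistinguishable at level alpha."""
    rng = random.Random(seed)
    n = len(errors_a)
    diffs = []
    for _ in range(n_boot):
        idx = [rng.randrange(n) for _ in range(n)]
        diffs.append(sum(errors_a[i] - errors_b[i] for i in idx) / n)
    diffs.sort()
    lo = diffs[int(alpha / 2 * n_boot)]
    hi = diffs[int((1 - alpha / 2) * n_boot) - 1]
    return lo <= 0.0 <= hi
```

With 10 runs per method, resampling the paired per-run differences 10,000 times gives a cheap, distribution-free interval for the median or mean gap between methods.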
Figure~\ref{fig:exp_MRBI_test} plots the convergence curves of the median test error rate over time for all methods. As shown in the figure, after running for about 4 hours, FLASH and FLASH$^\star$ start to lead the others with a steep drop in error rate, and then quickly converge to a superior performance.
\setlength\tabcolsep{3.5pt}
\begin{table*}[!ht]
\centering
\caption{Performance on both 3-fold cross-validation and test data of the benchmark datasets. For each method, we perform 10 independent runs of 10 hours each. Results are reported as the median percent error across the 10 runs within different time budgets. Test data is never seen by any optimization method; it is only used for offline evaluation to compute test error rates. Boldface indicates the best result within a block of comparable methods. We underline results that are not statistically significantly different from the best according to a bootstrap test with 10,000 resamples and $p=0.05$.}
\label{table:benchmark_perf}
\vspace{5pt}
\footnotesize
\begin{tabular}{lccccccccccccc}
\toprule
\multirow{2}{*}{\textbf{Dataset}} & \multicolumn{1}{l}{\multirow{2}{*}{\textbf{\begin{tabular}[c]{@{}l@{}}Budget\\ (hours)\end{tabular}}}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{5}{c}{\textbf{Cross-validation Performance (\%)}} & \multicolumn{1}{l}{\textbf{}} & \multicolumn{5}{c}{\textbf{Test Performance (\%)}} \\ \cmidrule(l){3-14}
& \multicolumn{1}{l}{} & \multicolumn{1}{l}{} & \begin{tabular}[c]{@{}c@{}}\textsc{Rand.}\\ \textsc{Search}\end{tabular} & TPE & SMAC & FLASH & FLASH$^\star$ & & \begin{tabular}[c]{@{}c@{}}\textsc{Rand.}\\ \textsc{Search}\end{tabular} & TPE & SMAC & FLASH & FLASH$^\star$ \\ \midrule
\multirow{3}{*}{\textsc{Madelon}} & 3 & & 25.16 & 18.90 & 20.25 & {\ul 14.84} & \textbf{14.04} & & 19.17 & 16.15 & 16.03 & {\ul 12.18} & \textbf{11.73} \\
& 5 & & 23.60 & 18.82 & 19.12 & {\ul 14.31} & \textbf{14.04} & & 18.21 & 15.26 & 15.38 & {\ul 12.18} & \textbf{11.60} \\
& 10 & & 20.77 & 17.28 & 17.34 & {\ul 13.87} & \textbf{13.76} & & 15.58 & 14.49 & 13.97 & {\ul 11.49} & \textbf{11.47} \\ \midrule
\multirow{3}{*}{MNIST} & 3 & & 7.68 & 6.78 & 6.05 & \textbf{4.93} & {\ul 5.05} & & 7.75 & 5.41 & 6.11 & \textbf{4.62} & {\ul 4.84} \\
& 5 & & 6.58 & 5.94 & 5.83 & \textbf{4.26} & {\ul 4.87} & & 7.10 & 5.41 & 5.40 & \textbf{3.94} & {\ul 4.57} \\
& 10 & & 6.58 & 5.39 & 5.64 & \textbf{4.03} & {\ul 4.46} & & 6.64 & 5.03 & 5.23 & \textbf{3.78} & {\ul 4.37} \\ \midrule
\multirow{3}{*}{MRBI} & 3 & & 61.80 & 59.83 & 62.89 & {\ul 57.43} & \textbf{57.08} & & 60.58 & 59.83 & 60.58 & {\ul 54.72} & \textbf{54.28} \\
& 5 & & 58.67 & 58.61 & 58.14 & \textbf{45.11} & 54.25 & & 56.42 & 58.61 & 55.81 & \textbf{43.19} & 51.65 \\
& 10 & & 57.20 & 53.92 & 54.60 & \textbf{41.15} & {\ul 41.90} & & 54.43 & 52.01 & 52.30 & \textbf{39.13} & {\ul 39.89} \\ \midrule
\multirow{3}{*}{\textsc{Convex}} & 3 & & 28.14 & 24.70 & 24.69 & \textbf{22.63} & {\ul 23.31} & & 25.04 & 21.42 & 21.97 & \textbf{20.65} & {\ul 21.04} \\
& 5 & & 25.25 & 23.61 & 23.30 & \textbf{21.34} & {\ul 22.02} & & 23.18 & 21.37 & 20.82 & \textbf{19.56} & {\ul 19.71} \\
& 10 & \multicolumn{1}{l}{} & 24.51 & 22.21 & 23.30 & \textbf{20.49} & {\ul 20.62} & & 22.18 & 20.31 & 20.82 & \textbf{18.94} & {\ul 19.01} \\ \bottomrule
\end{tabular}
\end{table*}
\setlength\tabcolsep{6pt}
\begin{figure}[!ht]
\centering
\includegraphics[width=0.6\textwidth]{./Figs/MRBI_test}
\caption{Performance of our methods (FLASH and FLASH$^\star$) and other compared methods on MRBI dataset. We show the median percent error rate on test set along with standard error bars (generated by 10 independent runs) over time.}
\label{fig:exp_MRBI_test}
\end{figure}
\begin{figure*}[t]
\centering
\subfigure[The impact of optimal design \newline on MRBI dataset]{
\label{fig:exp_bench_optd}
\includegraphics[width=0.32\textwidth]{./Figs/component_opt}}
\subfigure[The impact of pipeline pruning \newline on MRBI dataset]{
\label{fig:exp_bench_prune}
\includegraphics[width=0.32\textwidth]{./Figs/component_pruning}}
\subfigure[The impact of pipeline caching \newline on real-world dataset]{
\label{fig:cache}
\includegraphics[width=0.32\textwidth]{./Figs/cache_compare}}
\caption{Component analysis experiments. \subref{fig:exp_bench_optd} Optimal design makes the initialization phase more robust. \subref{fig:exp_bench_prune} Pipeline pruning in the second phase of FLASH is the key to its superior performance. \subref{fig:cache} Performance of FLASH without caching and the original FLASH with caching on real-world dataset. In all figures, we show the median error rate on test set along with standard error bars (generated by 10 independent runs). Note that (a) and (b) are plotted with different x-axes; (c) is on a different dataset as (a) and (b).}
\label{fig:exp_bench}
\end{figure*}
\subsection{Detailed Study of FLASH Components}
FLASH has three main components: optimal design for initialization, a cost-sensitive model for pipeline pruning, and pipeline caching. To study their individual contributions to the performance gain, we drop each component in turn and compare the performance with the original FLASH. Since caching is used for the real-world experiments on a large dataset, we describe the analysis of the caching component in Section~\ref{sec:real}. Here we use the MRBI dataset for these experiments.
Figure~\ref{fig:exp_bench_optd} shows the difference between using random initialization and optimal design by plotting the performance of the initial 30 pipeline runs. The desirable property of optimal design ensures that reasonable pipeline paths are run, giving FLASH a head start at the beginning of optimization. Random initialization, by contrast, is not robust enough, especially when the number of pipeline runs is very limited and some algorithms may never run at all due to randomness.
Figure~\ref{fig:exp_bench_prune} shows the impact of pipeline pruning in the second phase of FLASH. Dropping the pruning phase with EIPS and running SMAC immediately after Phase 1, we see a major degradation in performance.
The figure clearly shows that in Phase 2 of FLASH, the linear model with the EIPS acquisition function efficiently shrinks the search space so that SMAC can focus on those algorithms which perform well at little cost.
This confirms the main idea of this paper: a simple linear model can be more effective in searching a high-dimensional and highly conditional hyperparameter space.
\subsection{Two-layer Bayesian Optimization}
\label{sec:linear}
\input{twolayer.tex}
\subsection{Optimal Design for Initialization}
\label{sec:optimal}
\input{initial.tex}
\subsection{Cost-sensitive Modeling}
\label{sec:costsensitive}
\input{costsensitive.tex}
\subsection{Pipeline Caching}
\label{sec:caching}
\input{caching.tex}
\section{INTRODUCTION}
In a recent series of papers we have considered the evolution of
low--mass X--ray binaries (LMXBs), and in particular
the question of which LMXBs should be transient
according to the disk instability model.
To have low enough mass transfer rates for transient behavior,
short--period ($P \la 1$~d) neutron star systems require
special formation conditions (King, Kolb \& Burderi 1996, hereafter
KKB). These formation conditions can indeed be shown to hold in a
significant fraction of neutron--star systems (King \& Kolb 1997).
By contrast almost all short--period black--hole systems will be
transient (King, Kolb \& Szuszkiewicz 1997, hereafter KKS).
Longer--period LMXBs whose evolution is driven by the nuclear
expansion of an evolved low--mass companion with a degenerate helium
core are almost always transient, regardless of whether they contain a
black hole or a neutron star (KKS and King et al.\ 1997).
The recently discovered soft X--ray transient
GRO~J1655--40 (Orosz \& Bailyn 1997) belongs to none of these
classes. The companion orbits a $7{\rm M}_\odot$ black hole with a period of
2.62 days, and
has a rather large mass ($\simeq 2.3 {\rm M}_\odot$), too high for the simple
core--envelope picture used by KKS, King et al.\ (1997) to hold.
Its spectral type (F3--F6; effective temperature $T_{\rm
eff} \simeq 6500$~K) actually implies a star in the process of
crossing the Hertzsprung gap.
Since here the stellar radius $R$ expands significantly on
the star's thermal time $\sim 10^7$~yr, one would expect mass transfer rates
$\dot M \sim 10^{-7}$~${\rm M}_\odot$~yr$^{-1}$, which are too high for
disk instabilities to occur, particularly when one allows for
irradiation of the disk by the central accreting source
(van Paradijs 1996).
Nevertheless GRO~J1655--40 clearly shows transient behavior, though in
a rather complex form: the BATSE detectors aboard the Compton GRO observed
distinct hard X--ray outbursts in July 1994 (when the source was
discovered), as well as September 1994, November 1994, March 1995 and July 1995
(Harmon et al.\ 1995, Tavani et al.\ 1996). The 1994 outbursts were
associated with radio flaring and apparent
superluminal motion of radio plasmoids (Hjellming \& Rupen 1995),
giving the distance to the system as $d=3.2$~kpc. After a
phase of true X--ray quiescence a soft X--ray outburst was
observed by RXTE in April 1996 (Levine et al.\ 1996), again
accompanied by brightening in other wavebands.
Accordingly we consider the evolution of GRO~J1655--40 in detail
here. We apply results of computations of LMXBs with donors crossing the
Hertzsprung gap, details of which will be presented
elsewhere (Kolb 1997). We shall see that a short transient phase is
indeed possible, although the observed parameters of GRO~J1655--40 place
it slightly outside our currently computed instability
strip. Uncertainties in the input stellar physics
and perhaps in the observationally determined value for the effective
temperature probably allow the
source to lie within the strip.
Since the instability strip is narrow, we consider the consequences of the
high implied space density of systems like GRO~J1655--40.
\section{TRANSIENT SYSTEMS IN THE HERTZSPRUNG GAP}
A star crossing the Hertzsprung gap has finished core hydrogen
burning, but not yet ignited core helium burning.
Mass transfer from such a donor
is known as Case B. This case is well studied (e.g. Kippenhahn et al.\
1967, Pylyser \& Savonije 1988, Kolb \& Ritter 1990, De~Greve 1993),
particularly in the context of Algol evolution. The main difference
for black hole LMXBs is that the donor is the less massive
star. Elsewhere we shall present extensive computations of this case
(Kolb 1997), in
which we distinguish transient from persistent LMXBs by comparing the
mass transfer rate $\dot M$ with the critical rate $\dot M_{\rm crit}$
at which hydrogen is just ionized at the outer disk edge. A system is
transient if $\dot M < \dot M_{\rm crit}$.
We can summarize these results as follows.
Figure~1 gives the evolutionary tracks of the donor stars of selected
sequences on an HR diagram with parameters chosen to lie close to
GRO~J1655--40. The positions of single stars of fixed mass are also
shown. At any time the luminosity and effective temperature of the
donor is roughly the same as that of a single star of the same mass
and radius.
This explains why Orosz \& Bailyn (1997) found parameters for the
secondary in GRO~J1655--40 which are `consistent with a $2.3{\rm M}_\odot$
single star evolution'. However, the donor stars are not in
thermal equilibrium, and their evolution through any point on the
figure depends on their previous history.
In particular for much of the time the mass transfer rates for the
binaries are of order $M_2/t_{\rm th}$, where $t_{\rm th}$ is the
thermal timescale of the donor (of order the Kelvin--Helmholtz time)
at the start of mass transfer, and thus
depend strongly on the mass $M_{2 {\rm i}}$ of the donor at that initial
time. The portions of the tracks in thick linestyle show
where the systems are transient according to the criterion of KKS. As
can be seen, in addition to the expected transient phase when the
donors are close to the low--mass giant branch, there is a short
transient phase when the star is still in the Hertzsprung gap.
This phase occurs because the stellar radius expands very little (or
even contracts) at this point; this in turn is a result of the complex
structural change brought about by the switch from thick to thin shell
H--burning.
The radius slowdown always occurs at the same effective temperature
as for a single star with the donor's initial mass, but at roughly the
same radius as for a single star with the donor's actual mass.
These findings can be conveniently generalized to arbitrary
$M_{2 {\rm i}}$ in a plot of orbital period versus donor mass
(Fig.~2). Conservative evolution with constant binary mass $M$ is towards
longer $P$ and smaller
$M_2$ along the curves $P M_2^3 (M-M_2)^3=$~const, while along an
evolution with mass loss $|{\rm d} P/{\rm d} M_2|$ is larger
(see the tracks shown in the Figure).
The hatching on the Figure relates to the
transient/persistent nature of the systems:
in the unhatched region all black--hole systems are persistent, and in
the narrow hatched strip bisecting this zone all the systems are
transient. This is the phase where the radius expansion slows. Because
this only depends on the current stellar mass, it is insensitive to
the initial mass $M_{2 {\rm i}}$,
unlike most other properties of the evolutionary sequences.
Most systems in the large hatched region are also transient (this is
the phase where the donor becomes a low--mass giant), although systems
evolving into this region from the unhatched `transient exclusion
zone' (thus with large initial donor mass) remain persistent until
$M_2$ has dropped somewhat.
The transient strip terminates at $M_2 \simeq 3.5{\rm M}_\odot$, as for more
massive donors $\dot M > \dot M_{\rm crit}$ even when the
radius expansion slows.
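For conservative transfer, the relation $P M_2^3 (M-M_2)^3=$~const quoted above fixes the period along a track once the total mass is known. A short numerical sketch, with illustrative parameters close to GRO~J1655--40:

```python
def period_along_track(P0, M2_0, M_total, M2):
    """Orbital period (same units as P0) after conservative mass
    transfer, using P * M2^3 * (M - M2)^3 = const at fixed total mass."""
    const = P0 * M2_0**3 * (M_total - M2_0)**3
    return const / (M2**3 * (M_total - M2)**3)

# Illustrative track: donor shrinking from 2.3 to 1.0 Msun beside a
# 7 Msun black hole (total mass 9.3 Msun), starting from P = 2.62 d
print(period_along_track(2.62, 2.3, 9.3, 1.0))  # ~19 d: P grows as M2 drops
```

This is the sense of the constant-$M$ curves in Fig.~2: conservative evolution moves systems towards longer $P$ and smaller $M_2$.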
\section{APPLICATION TO GRO~J1655--40}
Taken at face value, the system parameters of GRO~J1655--40
as derived by Orosz \& Bailyn (1997) place it in a region of the $P -
M_2$ plane where it should be persistent rather than transient, with a
mass transfer rate close to the Eddington limit (if the initial donor
mass was $\la 2.5{\rm M}_\odot$) or super--Eddington (if $M_{2 {\rm i}} \ga 2.5{\rm M}_\odot$).
In this case the observed outbursts
and variability must presumably be caused by instabilities in the accretion
flow at Eddington- or super--Eddington mass transfer rates.
Small variations in the accretion rate could have a significant effect
on the appearance of the source by changing its effective photosphere.
However, GRO~J1655--40 is classified as a transient, and indeed,
the X--ray outbursts show characteristic features consistent
with the operation of an underlying disk instability.
Simple scalings for the outburst duty cycle and
recurrence time of soft X--ray transients, taking into account the
effect of irradiation on the outburst (King \& Ritter 1997), suggest
an irregular outburst pattern in long--period systems like
GRO~J1655--40: a series of outbursts with short recurrence time (the
presently observed behavior) may alternate with a much longer
quiescence (the behavior before the first detection in 1994). The
delay between the optical and X--ray outburst observed in April
1996 (Orosz et al.\ 1997) is in close analogy to the UV delay observed
in dwarf novae. In terms of the disk instability model such
delays occur naturally whenever the disk does not extend all the way
to the white dwarf surface, or the innermost stable orbit in the case
of a black hole accretor (e.g. Livio \& Pringle 1992, King 1997). Hameury et
al.\ (1997) suggest the presence of a ``hole'' in the disk of
GRO~J1655--40 by postulating that the disk flow gives way to advection
at some inner radius.
In the light of this evidence it is intriguing that the observed
system parameters put GRO~J1655--40 quite close to the
short transient phase that similar systems encounter when the secondary
evolves through the Hertzsprung gap. If we assume that GRO~J1655--40
in reality actually falls into the narrow transient strip,
its outburst behavior is consistent with the disk instability
model for soft X--ray transients. This would require some error in the
position of the transient strip or of the system. The precise location of the
transient strip in the HR diagram is determined by the effective
temperature where single stars in the mass range $2-4{\rm M}_\odot$
temporarily slow or reverse their expansion from the main sequence to
the first giant branch. This effective temperature (and hence the
radius and luminosity) depends on details of the input physics of the
evolution code, particularly opacities and the treatment of
convection.
In addition, it is conceivable that the observational determination of
$T_{\rm eff}$ might yet change. A value $T_{\rm eff} \la 5600$~K would place
GRO~J1655--40 in the predicted transient strip. In Orosz \& Bailyn's
spectral classification procedure
the corresponding spectral type G2IV achieves a minimum r.m.s.\ value
which is only $\sim 10\%$ larger than that of the best--fit type F4IV.
With $T_{\rm eff} = 5600$~K, Roche geometry implies a secondary luminosity
$\simeq 20{\rm L}_\odot$. This would require a color excess $E(B-V) \la 1.0$
(rather than 1.3) for consistency with the observed mean V magnitude
(17.12 mag) and distance (3.2~kpc).
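This consistency check can be made explicit: converting $L \simeq 20{\rm L}_\odot$ to an absolute magnitude and comparing with the mean V magnitude and distance gives the required extinction. The bolometric correction ($BC=-0.1$) and $R_V=3.1$ adopted below are our assumptions, not values from the text.

```python
import math

def implied_color_excess(m_V, d_pc, L_Lsun, BC=-0.1, R_V=3.1):
    """E(B-V) required for a star of luminosity L_Lsun at distance d_pc
    to appear at apparent magnitude m_V, with A_V = R_V * E(B-V)."""
    M_bol = 4.74 - 2.5 * math.log10(L_Lsun)   # solar M_bol = 4.74
    M_V = M_bol - BC                          # M_V = M_bol - BC
    distance_modulus = 5.0 * math.log10(d_pc / 10.0)
    A_V = m_V - M_V - distance_modulus        # leftover dimming = extinction
    return A_V / R_V

print(implied_color_excess(17.12, 3200.0, 20.0))  # ~0.97, i.e. E(B-V) close to 1.0
```

The result is close to the $E(B-V) \la 1.0$ quoted for the cooler classification.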
A further uncertainty may arise from the fact that the accretion disk still
contributes $\la 10\%$ of the V flux during the extended X--ray
quiescence in 1996, and as much as $50\%$ in 1995 (Orosz \& Bailyn 1997).
Although this transient phase is very short ($\simeq 1$~Myr if
$M_{2 {\rm i}}=2.5{\rm M}_\odot$), it is quite plausible
that such a system would nevertheless be observable, because its
outbursts would be extremely luminous.
The actual transfer rate in the transient phase depends on the initial
orbital separation. The closer the initial
separation was to the present value, the lower the transfer rate.
With a typical transient duty cycle
$\la 10^{-2}$, the instantaneous accretion rate can be as high as
$\dot M \sim 10^{-8} - 10^{-6}$~${\rm M}_\odot$~yr$^{-1}$, implying
luminosities up to $10^{38} - 10^{40}$~erg~s$^{-1}$.
This is consistent with the outburst X--ray luminosity $\ga
10^{38}$~erg~s$^{-1}$ inferred for GRO~J1655--40 from observations
(e.g.\ Tanaka \& Shibazaki 1996).
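The quoted luminosity range follows from $L \simeq \eta \dot M c^2$; the sketch below adopts an assumed radiative efficiency $\eta = 0.1$, so the numbers are order-of-magnitude only.

```python
MSUN_G = 1.989e33   # solar mass in g
YEAR_S = 3.156e7    # year in s
C_CMS = 2.998e10    # speed of light in cm/s

def accretion_luminosity(mdot_msun_yr, efficiency=0.1):
    """L = eta * Mdot * c^2 in erg/s, for Mdot in Msun/yr."""
    mdot_gs = mdot_msun_yr * MSUN_G / YEAR_S
    return efficiency * mdot_gs * C_CMS**2

low = accretion_luminosity(1e-8)   # ~6e37 erg/s
high = accretion_luminosity(1e-6)  # ~6e39 erg/s
```

With $\eta$ closer to the maximal black-hole value ($\sim 0.3$), the upper end approaches the $10^{40}$~erg~s$^{-1}$ quoted in the text.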
X--ray luminosities of anything approaching this order
are visible throughout the Galaxy, as observations of high--mass
X--ray binaries show.
However, just as for those systems, the price one
pays for asserting the visibility of a short--lived evolutionary phase
is the requirement of a very high birth rate. Here we would need
a rate of order 1 per $10^4$ yr.
The giant branch transient phase is $10-20$ times longer than the
Hertzsprung gap transient phase. The detection probability of the
correspondingly higher number of giant branch transients with
intermediate--mass donors may nevertheless be comparable to that of
the Hertzsprung gap transients as the duty cycle decreases with
increasing orbital period (King \& Ritter 1997).
A more serious problem is that the required high formation rate
implies about $500$ bright persistent accreting sources in the
Hertzsprung gap, in apparent conflict with observation. In the next
section we examine possible resolutions of this disagreement.
\section{DISCUSSION}
The disk instability picture provides a very clean separation of LMXBs
into transient and persistent sources, provided that one takes due account
of the effects of irradiation by the central accretor (van Paradijs 1996).
This is strong circumstantial evidence in its favor.
Accordingly one would be very reluctant to abandon it simply because
the application of this picture to the transient source
GRO~J1655--40 leads to an apparent overpopulation of bright accreting
sources in the Galaxy.
The resolution of this conflict is probably related to the fact
that most systems populating the $P-M_2$ plane in Fig.~2
are extremely luminous. The persistent sources with $M_{2 {\rm i}} \la 2.5{\rm M}_\odot$
would be close to the Eddington limit for a $7{\rm M}_\odot$ black hole (and
super--Eddington if $M_{2 {\rm i}}\ga 2.5{\rm M}_\odot$), while
with typical duty cycles, the outbursts of the transient systems
could be still brighter.
This raises the possibility of significant mass
loss, in two forms. GRO~J1655--40 and another system, GRS~1915+105,
are seen to lose
mass in relativistic jets (Hjellming \& Rupen 1995, Mirabel \&
Rodr\'iguez 1994), although there is no consensus on the actual amount
of mass lost.
Further, it is theoretically possible (e.g. Begelman, McKee \& Shields
1983, Czerny \& King 1989) for most of the transferred mass to
be lost from the outer parts of the accretion disk, driven by the
luminosity of the accreting fraction. There is indirect evidence that
much of the mass transferred in neutron--star LMXBs is lost, since the
remnant pulsars always have rather modest masses (see King \& Kolb
1997 for a discussion).
This mass loss may prevent the bright systems appearing as X--ray
sources, either because much of the mass never accretes to the compact
primary, or because the mass loss itself shrouds the system and
degrades the X--rays.
Alternatively, the accretion rates may be too high for X--ray
production. The effective photosphere for the accretion flow may
actually radiate at much longer wavelengths for most of
the time.
Since most of these sources are
probably rather distant and heavily reddened, the infrared is the most
likely place to detect them.
If GRO~J1655--40 is indeed a genuine soft X--ray transient in the
Hertzsprung gap, then either of these possibilities would reconcile
theory and observation.
In this connection it is interesting that GRS~1915+105,
also classified as a soft X--ray transient,
has been persistently bright in X--rays since 1992. At
its minimum X--ray luminosity $\sim 5\times 10^{38}$~erg~s$^{-1}$
(e.g. Belloni et al.\ 1997) this is far longer than can be explained
by a disk instability. It may be that this source, too, is actually one of the
`persistent' sources predicted by Fig.~2. Certainly its current mass
accretion rate $\sim 10^{-7}$~${\rm M}_\odot$~yr$^{-1}$ is consistent with that
expected, although the heavy interstellar reddening
makes the nature of the companion very uncertain (Mirabel et al.\ 1997).
The current (post--1992) state may correspond to a
{\it lower} accretion rate, so that the X--rays become temporarily
visible (see also Belloni et al.\ 1997). Infrared monitoring of this
source should provide more information about this possibility, in that
the X--rays and infrared should be anticorrelated.
This
work was partially supported by the U.K.\ Particle Physics and Astronomy
Research Council (PPARC) and by grant NAG 5-3082 to LSU.
ARK acknowledges support as a PPARC Senior Fellow. JF thanks the Space
Telescope Science Institute for its hospitality.
We thank the anonymous referee for useful comments.
\clearpage
\section{The Einstein-Maxwell equations in a static space-time}
In 2+1 dimensions, the metric for an arbitrary static circularly symmetric
space-time can be written in the form
\begin{eqnarray} \label{metrica}
ds^{2}= e^{2 \, \alpha(r)}\, dt^{2}- e^{2 \, \beta(r)} \, dr^{2}- e^{2 \,
\gamma(r)} \, d\theta^{2}.
\end{eqnarray}
The general form of the electromagnetic field tensor which shares the static
circularly symmetric space-time is given by
$F= E_{r} \, dr \, \wedge \,dt + B \, dr \, \wedge \,d\theta$.
The electromagnetic field tensor in terms of a four-potential
\begin{eqnarray} \label{4-potencial}
A= A_{a}(r)\,dx^{a}
\end{eqnarray}
is given by $F = dA= \frac{1}{2}\,F_{a b}\,dx^{a} \wedge dx^{b}$,
where the functions $A_{a}(r)$ can be freely specified. So
$dA=F=q_{0}\,A^{\prime}_{0}\, dr \wedge dt + q_{2}\,A^{\prime}_{2} \,dr
\wedge d\theta$
where $A_{0}$ and $A_{2}$ are arbitrary functions of the $r$ coordinate, the
differentiation with respect to $r$ is denoted by ' and the constant
coefficients $q_{0}$ and $q_{2}$ are introduced for switching off the
electric and/or magnetic fields. This implies that the $r$-component of the
electric field is given by $q_{0}\,A^{\prime}_{0}$ and of the magnetic field
by $q_{2}\,A^{\prime}_{2}$.
To write the Einstein's equations we will use the tetrad formalism and
Cartan structure equations. A convenient orthonormal basis for the metric~(%
\ref{metrica}) is
\begin{eqnarray} \label{tetrada}
\theta^{(0)} = e^{\alpha}\, dt , \,\,\, \theta^{(1)} = e^{\beta} \, dr, \,
\,\, \theta^{(2)} = e^{\gamma} \, d\theta.
\end{eqnarray}
To construct Einstein-Maxwell fields, we must consider the Maxwell's
equations and the stress-energy tensor of the electromagnetic field which,
with respect to~(\ref{tetrada}) in Gaussian units, is defined by~\cite{Gott}
\begin{eqnarray} \label{tensor electromagnetico}
T_{(a)(b)}= \frac{g_{(a)(b)}}{8 \pi} \, F_{(c)(d)} \, F^{(c)(d)} -
\frac{1}{2 \pi} F_{(a)(c)} \, F_{(b)}^{ \,\,\,\,\,\, (c)}
\end{eqnarray}
In (2+1) dimensions the trace of~(\ref{tensor electromagnetico}) is
$T=\frac{1}{8\pi} \, F_{a b} \, F^{a b}$.
To get its components we must compute $F_{(a)(b)}$. In the coordinate basis
\begin{eqnarray} \label{maxwellito}
F_{a b}= \frac{1}{2} \, q_{0}\,A^{\prime}_{0}\,\delta^{r}_{[a}\,
\delta^{t}_{b]}+ \frac{1}{2} \, q_{2}\,A^{\prime}_{2}\,\delta^{r}_{[a}\,
\delta^{\theta}_{b]}
\end{eqnarray}
or in the basis~(\ref{tetrada})
$F_{(a) (b)}= q_{0}\,I_{e} \, \delta^{1}_{[a}\, \delta^{0}_{b]} +
q_{2} \, I_{m} \, \delta^{1}_{[a}\, \delta^{2}_{b]}$, where
$I_{e}=\frac{1}{2} \,A^{\prime}_{0}\, e^{- \alpha - \beta}$ and
$I_{m}=\frac{1}{2} \, q_{2}\,A^{\prime}_{2} \, e^{- \beta - \gamma}$.
Thus from~(\ref{tensor electromagnetico}) we get
\begin{eqnarray}
T_{(a)(b)}^{e.m.} \, \theta^{(a)} \, \otimes \, \theta^{(b)}= \left(
\frac{q_{0}{}^{2}}{ \pi} \, I_{e}{}^{2} + \frac{q_{2}{}^{2}}{ \pi} \,
I_{m}{}^{2} \right) \, \theta^{(0)} \, \otimes \, \theta^{(0)}- \nonumber \\
\left( - \frac{q_{0}{}^{2}}{ \pi} \, I_{e}{}^{2} + \frac{q_{2}{}^{2}}{ \pi} \,
I_{m}{}^{2} \right) \, \theta^{(1)} \, \otimes \, \theta^{(1)}-
\nonumber \\
\left( \frac{q_{0}{}^{2}}{ \pi} \, I_{e}{}^{2} + \frac{q_{2}{}^{2}}{ \pi} \,
I_{m}{}^{2} \right) \, \theta^{(2)} \, \otimes \, \theta^{(2)}-
\nonumber \\
\frac{2 q_{0} \, q_{2}}{ \pi} I_{e} \, I_{m} \left( \theta^{(0)} \, \otimes \,
\theta^{(2)} + \theta^{(2)} \, \otimes \, \theta^{(0)} \right). \hspace{1cm}
\nonumber
\end{eqnarray}
From the tetrad~(\ref{tetrada}) and using Cartan exterior forms calculus the
following non-trivial components of the Einstein's equations with the
cosmological constant are obtained:
\begin{eqnarray} \label{cero-cero}
e^{- 2 \beta} \left( \gamma {}^{\prime} \beta {}^{\prime} -
\gamma {}^{\prime \prime}- \gamma {}^{\prime}{}^{2}\right) = \Lambda +
\frac{\kappa q_{0}{}^{2} I_{e}{}^{2}}{\pi} +
\frac{\kappa q_{2}{}^{2} I_{m}{}^{2}}{\pi}
\end{eqnarray}
\begin{eqnarray} \label{uno-uno}
- \alpha\, ^{\prime}\, \gamma\, ^{\prime}\,e^{- 2\, \beta}= \Lambda +
\frac{\kappa q_{0}{}^{2} I_{e}{}^{2}}{\pi} -
\frac{\kappa q_{2}{}^{2} I_{m}{}^{2}}{\pi}
\end{eqnarray}
\begin{eqnarray} \label{dos-dos}
e^{- 2 \beta} \left( \alpha {}^{\prime} \beta {}^{\prime}-
\alpha {}^{\prime\prime}- \alpha {}^{\prime}{}^{2} \right) = \Lambda -
\frac{\kappa q_{0}{}^{2} I_{e}{}^{2}}{\pi} -
\frac{\kappa q_{2}{}^{2} I_{m}{}^{2}}{\pi}
\end{eqnarray}
\begin{eqnarray} \label{cero-dos}
\frac{2 \kappa \, q_{0} \, q_{2}}{\pi} \, I_{e} \, I_{m} = 0
\end{eqnarray}
Now we must consider the Maxwell's equations. The contravariant density
components of~(\ref{maxwellito}) are
\begin{eqnarray*}
\sqrt{-g} \, F^{a b}=e^{\alpha+\beta+\gamma} \left(\frac{2 q_{0} I_{e}{}^{2}}
{A^{\prime}_{0}} \, \delta^{[a}_{r} \, \delta^{b]}_{t} +
\frac{2 q_{2} I_{m}{}^{2}} {A^{\prime}_{2}} \, \delta^{[a}_{r} \,
\delta^{b]}_{\theta} \right)
\end{eqnarray*}
It is clear that the source-free Maxwell's equations are satisfied if
\begin{eqnarray} \label{ecuacion electro-magnetica}
e^{ \alpha + \beta - \gamma}= A^{\prime}_{0}, \,\,
e^{- \alpha + \beta + \gamma}= A^{\prime}_{2}
\end{eqnarray}
where the constants of integration, without any loss of generality, have
been made equal to~$1$. So the Einstein-Maxwell equations are given by Eqs.~(
\ref{cero-cero})-(\ref{cero-dos}) and~(\ref{ecuacion electro-magnetica}). To
obtain the Einstein-Maxwell solutions it is useful
to notice that Eq.~(\ref{cero-dos}) says to us that either the electric or
the magnetic field must be zero.
It is easy to check that under the transformation
\begin{eqnarray} \label{transformacion1}
q_{0}= i \, q_{2}, \,\,\,\,\,\,\, q_{2}= i \, q_{0}, \,\,\,\,\,\,\, \alpha
\rightleftharpoons \gamma
\end{eqnarray}
the Einstein-Maxwell equations are invariant. This
means that if we have the
magnetic solution, then one can obtain the electric analog by making the
formal transformation~(\ref{transformacion1}). In other words, if the
Einstein-Maxwell solution is given in the form~(\ref{metrica}), then we
obtain an analogous metric making
\begin{eqnarray} \label{transformacion2}
dt= i \, d\theta, \,\,\,\,\,\,\,\,\,\, d\theta= i \, dt,
\,\,\,\,\,\,\,\,\,\, q_{0}= i \, q_{2}, \,\,\,\,\,\,\,\,\,\, q_{2}= i \,
q_{0}.
\end{eqnarray}
Now, we will find a general magnetic solution (in analytical form). So
we must consider Eq.~(\ref{cero-dos}). That means that in this case either
the electric or the magnetic field must be zero, so that in the considered
equations we must set $q_{0}= 0$ (or $A_{0}= 0$). This implies that, in
order to solve the self-consistent equations, it is not necessary to
consider the first condition of~(\ref{ecuacion electro-magnetica}).
Subtracting equations (\ref{uno-uno}) and (\ref{dos-dos}) and using the second
condition of~(\ref{ecuacion electro-magnetica}), we obtain
$e^{2 \alpha} = D\, e^{C\, A_{2}(r)}$,
where $A_{2}$ is an arbitrary function of $r$, and $C$ and $D$ are constants
of integration. On the other hand the combination of~(\ref{uno-uno}) and~(%
\ref{dos-dos}) leads us to the equation
\begin{eqnarray}
\left(\alpha\,^{^{\prime}}\, e^{\alpha - \beta+ \gamma} \right)^{^{\prime}}=
- 2 \, \Lambda \, A^{^{\prime}}_{2} \, D \, e^{C \, A_{2}} + \frac{\kappa}{2 \pi}%
\, q_{2}{}^{2}\, A^{\prime}_{2}
\end{eqnarray}
which yields with the help of~(\ref{ecuacion electro-magnetica}) the following
expression for the function $e^{2 \gamma}$:
\begin{eqnarray}
e^{2 \gamma}= \frac{\kappa}{\pi \, C}\, q_{2}{}^{2}\, A_{2}- \frac{4 \,
\Lambda \, D}{C^{2}}\, e^{C \, A_{2}} + F,
\end{eqnarray}
where $F$ is a new constant of integration. From~(\ref{ecuacion electro-magnetica})
we obtain
$e^{2 \beta}= D\, A^{\prime}_{2} \, ^{2}\, e^{C \, A_{2}} \, e^{-2 \gamma}$.
Introducing a new coordinate $\tilde{r}$ defined by $\tilde{r}= A_{2}(r)$,
we have
\begin{eqnarray} \label{metrica2}
ds^{2} = D e^{C\, \tilde{r}} dt^{2}- D e^{C\, \tilde{r}} \,e^{-2 \gamma}
d\tilde{r}^{2} - e^{2 \gamma} d\theta^{2},
\end{eqnarray}
where now $e^{2 \gamma}=\frac{\kappa \, q_{2}{}^{2}}{\pi \, C}\, \tilde{r}-
\frac{4 \, \Lambda \, D}{C^{2}}\, e^{C \, \tilde{r}} + F$.
In this coordinate gauge the magnetic field is constant. The
solution~(\ref{metrica2}) cannot be carried over to the spatial gauge
$g_{2 2}= r^{2}$ with the help of the transformation
\begin{eqnarray} \label{gauge}
\tilde r^{2} = \frac{\kappa \, q_{2}{}^{2}}{\pi \, C}\, r -\frac{4 \,
\Lambda \, D}{C^{2}}\, e^{C \, r} + F,
\end{eqnarray}
because the resulting metric does not take an analytical form. However it is
possible to obtain the metrics for the gauge $g_{2 2}= r^{2}$ in exact form
by setting $\Lambda=0$ or switching off the magnetic field ($q_{2}=0$). This
means that if $q_{2}=0$ then, from~(\ref{gauge}), one recovers the
non-charged BTZ three-dimensional black hole~\cite{Teitelboim}; and that when
$\Lambda=0$ we obtain the counterpart of the 3+1 magnetic
Reissner-Nordstr\"om solution. In
fact, in this case we obtain the following metric:
\begin{eqnarray} \label{Reissner2+1}
ds^{2}= e^{a \, r^{2}} dt^{2}- e^{a \, r^{2}} dr^{2}- r^{2} d\theta^{2},
\end{eqnarray}
with $a=\frac{\kappa \, q_{2}{}^{2}}{\pi}$. From Eq.~(\ref{Reissner2+1}) it
follows that the 2+1 magnetic monopole counterpart is not a black hole, in
contrast with the 2+1 electric analog: since $a>0$, the coefficient
$e^{a \, r^{2}}$ never vanishes, so no horizon is present. If we switch off
the magnetic field ($a=0$), one recovers the flat three-dimensional
space-time.
Finally, we note that application of transformation (\ref{transformacion2})
on~(\ref{metrica2}) leads to
\begin{eqnarray} \label{metricaBTZ}
ds^{2}= e^{2 \alpha} dt^{2} - D e^{C \, \tilde{r}} e^{-2 \alpha} \,
d\tilde{r}^{2}- D \, e^{C\, \tilde{r}} \, d\theta^{2},
\end{eqnarray}
where $e^{2 \alpha}=\frac{- \kappa \, q_{2}{}^{2}}{\pi \, C}\, \tilde{r} -
\frac{4 \, \Lambda \, D}{C^{2}}\, e^{C \, \tilde{r}} + F$. In the gauge
$g_{2 2}=r^{2}$ the metric~(\ref{metricaBTZ}) takes on the new form
\begin{eqnarray} \label{metricaBTZBTZ}
ds^{2}= e^{2 \alpha} dt^{2} - e^{-2 \alpha} d{r}^{2} - r^{2} \, d\theta^{2},
\end{eqnarray}
where now $e^{2 \alpha}=\frac{- 2 \kappa \, q_{2}{}^{2}}{\pi} \,
\ln r - \Lambda \, r^{2} + F$. Then the BTZ charged black hole~(\ref{BTZ}) is
obtained if $\Lambda= - l^{-2}$, $F=-M$ and
$Q^{2}= 4 \kappa \, q_{2}{}^{2}/ \pi$. When $\Lambda=0$ one gets the
electrically charged solution~(\ref{Gott}). In this case the
electromagnetic potential is given by $A= \ln r$. This means that
$dA=q_{2}/r \, dr \wedge dt$ (or $E_{r}=q_{2} /r$). If one uses the
transformation~(\ref{transformacion2}), then $dA= - q_{0}/r \, dr \wedge
d\theta$, so that the electric field is replaced by a magnetic field and $%
B=- q_{0}/r$.
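This sign can be checked directly: applying~(\ref{transformacion2}), i.e. $dt= i \, d\theta$ and $q_{2}= i \, q_{0}$, to the field strength gives
\begin{eqnarray*}
dA= \frac{q_{2}}{r}\, dr \wedge dt \;\longrightarrow\;
\frac{i \, q_{0}}{r}\, dr \wedge (i \, d\theta)=
- \frac{q_{0}}{r}\, dr \wedge d\theta,
\end{eqnarray*}
in agreement with the expression above.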
Now, in order to obtain the Einstein-Maxwell fields with a neutral fluid, we
must consider the stress-energy tensor of the perfect fluid, which is given by
\begin{eqnarray} \label{fluido}
T_{(a)(b)}^{p.f.} = (\mu +p) \, U_{(a)} U_{(b)} - p \, g_{(a)(b)},
\end{eqnarray}
where $\mu$ and $p$ are the mass-energy density and pressure of the fluid,
respectively, and $U_{(a)}$ is its timelike four-velocity. If we take the
four-velocity $ {\bf U} = \theta^{(0)} $, then~(\ref{fluido}) becomes
$T_{(0)(0)}^{p.f.}=\mu$, $T_{(1)(1)}^{p.f.}=T_{(2)(2)}^{p.f.}=
T_{(3)(3)}^{p.f.}=p$. With the electromagnetic potential~(\ref{4-potencial})
and a neutral perfect fluid, the Einstein-Maxwell equations with cosmological
constant are now given by
\begin{eqnarray} \label{rn-cero-cero}
e^{-2\beta} \left(\gamma {}^{\prime} \beta {}^{\prime} -
\gamma {}^{\prime \prime}- \gamma {}^{\prime}{}^{2}\right) = \Lambda +
\frac{\kappa q_{0}{}^{2} I_{e}{}^{2}}{\pi} + \\
\frac{\kappa q_{2}{}^{2} I_{m}{}^{2}}{\pi} + \kappa \mu \hspace{2cm}
\nonumber
\end{eqnarray}
\begin{eqnarray} \label{rn-uno-uno}
- \alpha\, ^{\prime}\, \gamma\, ^{\prime}\,e^{- 2\, \beta}= \Lambda +
\frac{\kappa q_{0}{}^{2} I_{e}{}^{2}}{\pi} -
\frac{\kappa q_{2}{}^{2} I_{m}{}^{2}}{\pi} - \kappa p
\end{eqnarray}
\begin{eqnarray} \label{rn-dos-dos}
e^{- 2 \beta} \left( \alpha {}^{\prime} \beta {}^{\prime}-
\alpha {}^{\prime\prime}- \alpha {}^{\prime}{}^{2} \right) = \Lambda -
\frac{\kappa q_{0}{}^{2} I_{e}{}^{2}}{\pi} - \\
\frac{\kappa q_{2}{}^{2} I_{m}{}^{2}}{\pi} - \kappa p \hspace{2cm}
\nonumber
\end{eqnarray}
\begin{eqnarray} \label{rn-cero-dos}
\frac{2 \kappa \, q_{0} \, q_{2}}{\pi} \, I_{e} \, I_{m} = 0
\end{eqnarray}
and by conditions~(\ref{ecuacion electro-magnetica}). It is easy to see that
equations~(\ref{rn-cero-cero})-(\ref{rn-cero-dos})
are not invariant under transformation (\ref{transformacion2}). This means
that one must solve the system~(\ref{rn-cero-cero})-(\ref{rn-cero-dos}) for
an electric or a magnetic field separately (Eq.~(\ref{rn-cero-dos}) implies
that either the electric or the magnetic field must be zero).
First we consider the electric solution: in this case we cannot obtain the
solution for the general electromagnetic potential~(\ref{4-potencial}) with
an arbitrary function $A_{0}$ ($A_{2}=0$). We must consider some concrete
function, such as the 4-potential in the following form
\begin{eqnarray}
A_{0} =\left\{
\begin{array}{ll}
\frac{1}{n+1} \, r^{n+1} \, dt \,\,\, & \mbox {if $n\neq-1$} \\
\ln r \, dt \,\,\, & \mbox {if $n=-1$}
\end{array}
\right. \label{rn-potencial}
\end{eqnarray}
where $n$ is an arbitrary constant, and solve the Einstein-Maxwell
equations.
Thus the electric field takes the form $E=q_{0} \, r^{n}$. In the case of
the circularly symmetric metric, one takes $e^{2 \, \gamma}= r^{2}$,
arriving at the following metrics:~\cite{Firenze} \\
For {\bf $n \neq 1, -1, -3$},
\begin{eqnarray*}
\lefteqn{e^{2\alpha}= r^{2(n+1)} e^{- 2 \beta}=
\frac{\kappa q^{2} r^{2(n+1)}}{4 \pi (n-1)(n+1)} +\frac{A r^{n+3}}{n+3} + B} \\
& & \kappa p=\frac{\kappa q^{2}(n+1)}{8\pi (n-1)r^{2}} + \frac{A}{2}r^{-(n+1)} +
\Lambda, \\
& & \kappa \mu=\frac{B(n+1)}{r^{2(n+2)}}+\frac{A(n-1)}{2(n+3)}r^{-(n+1)}- \frac{\kappa
q^{2}}{8 \pi r^{2}}- \Lambda.
\end{eqnarray*}
For {\bf $n=-1$},
\begin{eqnarray*}
\lefteqn{e^{2\alpha}=e^{-2 \beta}=-\frac{\kappa q^{2}}{4\pi}\ln r + Ar^{2}+B,}
\\
& & \kappa p = -\kappa \mu = A + \Lambda.
\end{eqnarray*}
For {\bf $n=1$},
\begin{eqnarray*}
\lefteqn{e^{2\alpha}= r^{4} e^{-2 \beta}= \frac{\kappa q^{2}}{8\pi}r^{4}\,\ln r + A r^{4} + B,}
\\
& & \kappa p= \frac{\kappa q^{2}}{16\pi r^{2}}(3+4\ln r)+ \frac{2A}{r^{2}} +
\Lambda, \\
& & \kappa \mu= -\frac{3\kappa q^{2}}{16\pi r^{2}} + \frac{2B}{r^{6}} -
\Lambda.
\end{eqnarray*}
For {\bf $n=-3$},
\begin{eqnarray*}
\lefteqn{e^{2\alpha}= r^{-4} e^{-2 \beta}=
\frac{\kappa q^{2}}{32\pi r^{4}} + A \ln r + B,}\\
& & \kappa p= \frac{\kappa q^{2}}{16\pi r^{2}} + \frac{A}{2}r^{2} + \Lambda,\\
& & \kappa \mu= -\frac{\kappa q^{2}}{8\pi r^{2}}- \frac{Ar^{2}}{2}(1+\ln r)-
2Br^{2} - \Lambda.
\end{eqnarray*}
From $n=-1$ we see that if $A = - \Lambda$ then we obtain the 2+1
Kottler solution analog~(\ref{BTZ}). We remark that when $B=q=0$ and the
fluid obeys a $\gamma$-law equation, i.e., $\mu$ and $p$ are related by an
equation of the form $p=(\gamma-1)\mu$ where $\gamma$ is a constant (which,
for physical reasons satisfies the inequality $1 \leq \gamma \leq 2$), the
constant $\gamma$ may be expressed as $ \gamma=2 \, \frac{n+1}{n-1}, $
where the limits of $n$ are $-\infty < n \leq -3$. In this case
\begin{equation}
\kappa \mu =\kappa \frac{n-1}{n+3}p=\frac{A(n-1)}{2(n+3)}r^{-(n+1)}.
\end{equation}
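For completeness, the expression for $\gamma$ follows by direct substitution (taking $\Lambda=0$ in addition to $B=q=0$, so that the cosmological term drops out of both $p$ and $\mu$): the case $n\neq 1,-1,-3$ above then gives
\begin{eqnarray*}
\kappa p= \frac{A}{2}\, r^{-(n+1)}, \qquad
\kappa \mu= \frac{A(n-1)}{2(n+3)}\, r^{-(n+1)},
\end{eqnarray*}
and the $\gamma$-law $p=(\gamma-1)\mu$ requires
\begin{eqnarray*}
\gamma-1= \frac{p}{\mu}= \frac{n+3}{n-1},
\qquad \mbox{that is} \qquad
\gamma= 2\, \frac{n+1}{n-1}.
\end{eqnarray*}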
Recently, G\"urses~\cite{Gurses} obtained a class of metrics of Einstein
theory with perfect fluid sources in 2+1 dimensions. However, this class of
solutions was found for the particular case $\mu=$const and $p=$const.
Finally, we present the circularly symmetric magnetic case ($A_{0}=0$) which
takes the following form:
\begin{eqnarray*}
ds^{2}=D\, e^{C \, A_{2} (r)} \left ( dt^{2} - r^{-2} \, A^{\prime}_{2}{}^{2}
\, dr^{2} \right) - r^{2} \, d \theta^{2},
\end{eqnarray*}
where the mass-energy density is given by
\begin{eqnarray*}
\kappa \, \mu = \frac{e^{- C \, A_{2}}}{ D}\, \left( \frac{C \, r}{2 \,
A^{\prime}_{2}} + \frac{A^{\prime\prime}_{2}}{A^{\prime}_{2}{}^{3}} - \frac{1%
}{A^{\prime}_{2}{}^{2}} - \frac{\kappa \, q_{2}^{2}}{4 \, \pi} \right) -
\Lambda,
\end{eqnarray*}
and the pressure by
\begin{eqnarray*}
\kappa \, p = \frac{e^{- C \, A_{2}}}{ D} \, \left( \frac{C \, r}{2 \,
A^{\prime}_{2}} - \frac{\kappa \, q_{2}^{2}}{4 \, \pi} \right) + \Lambda.
\end{eqnarray*}
Before we finish this paper we would like to make a few comments about our
most important results. We have studied the static Einstein-Maxwell fields in
(2+1)-dimensions and obtained new exact solutions with circular symmetry.
All of them were found in the presence of the cosmological constant $\Lambda$%
. It is noteworthy that the 2+1 magnetic Reissner-Nordstr\"om analog is not
a black hole in contrast with the 2+1 electric Reissner-Nordstr\"om analog,
where a black hole is present. It is shown that the magnetic solution
obtained with the help of the procedure used in~\cite{Cataldo}, can be
obtained from the static electrically charged BTZ metric using
transformation~(\ref{transformacion2}). In this case the radial parameter
satisfies $0 < r < \infty$.
Superpositions of a perfect fluid and an electric or a magnetic field are
separately studied and their corresponding solutions found.
Shortly after we completed this work M. Ba\~nados informed us of the
existence of ref.~\cite{Welch} where a magnetic solution (HW solution) was
also obtained by a different procedure. However, this solution uses a gauge
which lacks one parameter. This can be seen by introducing the new coordinate
(and taking a negative cosmological constant $\Lambda= - l^{- 2}$)
$\tilde{r}= \ln \left( \frac{r^{2}/ l^{2}- M}{D} \right)^{1/C}$.
Then~(\ref{metrica2}) with this new coordinate becomes \\
$ds^{2}= \left( \frac{r^{2}}{l^{2}}- M \right) dt^{2} - r^{2}
\left(\frac{r^{2}}{l^{2}}- M \right)^{-1} e^{- 2 \gamma} dr^{2} -
e^{2 \gamma}d\theta^{2}$, \\
where $e^{2 \gamma}=r^{2} + \frac{\kappa \, q_{2}{}^{2} \, l^{4}}{4 \, \pi}
\, \ln \left(r^{2}/ l^{2}- M \right) + F $. In this gauge the missing
parameter is $F$.
We would like to thank P. Minning for carefully reading the manuscript. This
work was supported in part by Direcci\'on de Promoci\'on y Desarrollo de la
Universidad del B\'\i o-B\'\i o through Grants No.~951105-1 and
No.~960205-1, and in part by Direcci\'on de Investigaci\'on de la Universidad
de Concepci\'on through Grant \#~94.11.09-1.
\section{Introduction}
Synthetic biology is concerned with building useful biological circuits with predictable dynamics. Most genetic circuits depend on the local machinery of a host chassis. Since the host must expend energy in expressing exogenous genes, resources are taken from endogenous processes which can impact the function of both synthetic and native genes. Genetic circuit burden is often explicitly modeled as sequestration of cellular resources or competition for a limited set of binding sites. These models are hypothesis driven, in that they rely on hypotheses or explicit knowledge of the nature of circuit burden.
Functional genetic circuits require a minimal footprint on the host. The higher the burden on the host, the more likely it is that the host mutates out the circuit. Even in cases where synthetic genes leave a minimal footprint on the host, an unpredictable change in dynamic behavior may occur due to the activation of synthetic genes.
Rondelez showed that competition between synthetic and native genes has important effects on the global dynamics of the system \cite{rondelez2012competition}. Recent studies involving host-circuit interactions looked at cross talk \cite{yeung2012quantifying} between genetic circuits and host resources for transcription \cite{gyorgy2015isocost}, translation \cite{borkowski2016overloaded,ceroni2015quantifying,gorochowski2016minimal,carbonell2015dealing,nystrom2018dynamic}, and protein degradation \cite{cookson2011queueing,qian2017resource}. These model-based approaches further our understanding of host-circuit interaction; however, the models are often based on biophysical mechanisms that are difficult to validate or observe. Transcriptomics and proteomics resolve the activity of thousands of genes, providing a rich resource for learning models to answer key questions without the need for hypothesis-driven modeling. How can these measurements be leveraged through a data-driven approach to better understand host-circuit interaction and genetic stability?
Spectral methods have been increasingly popular in the data-driven analysis of nonlinear dynamical systems. Recently, researchers working in Koopman operator theory
have shown that it is possible to identify and learn the fundamental modes of a nonlinear dynamical system from data \cite{rowley_mezic_bagheri_schlatter_henningson_2009}. The seminal work by Schmid in developing dynamic mode decomposition (DMD) has led to an enormous growth in the use of Koopman spectral analysis of nonlinear dynamical systems \cite{schmid2010dynamic}. More recently, learning higher dimensional Koopman operators from data has become computationally tractable, largely due to advances in integrating machine learning and deep learning to generate efficient representations of observable bases \cite{yeung2017learning,lusch2018deep,otto2019linearly}. Often in biology and especially in omics measurements, the data are temporally sparse. Sinha and Yeung developed a method for computing the Koopman operator from sparse data \cite{sinha_yeung_2019}.
Synthetic biological circuit design is often viewed from a reductionist's perspective. Biological parts are designed and optimized for modularity, so that composition gives rise to predictable behavior. The challenge is that composition, while at times successfully gives rise to predictable {\it observed} behavior, has an unknown {\it emergent} effect on the host, and by the principles of feedback, the circuit as well \cite{cardinale2012contextualizing}.
In this paper we develop a completely novel algorithm, \textit{structured DMD}, to complement bottom-up genetic circuit design approaches in synthetic biology. Structured DMD is a purely data-driven model discovery framework that takes advantage of a part-by-part construction process to decouple emergent phenomena from isolated part dynamics. It reduces the total model complexity in the model identification process by adopting a hierarchical approach to identifying components of the model in stages. The decomposition we obtain is additive, due to nice linear mathematical properties endowed by Koopman operators \cite{mezic2005spectral}. We showcase our algorithm on a coupled oscillator system, but then consider a real genetic circuit design problem using a NAND gate designed from TetR orthologs \cite{stanton2014genomic}. Full-state but temporally sparse RNAseq measurements collected from wild-type \textit{E. coli}, single gate components transformed in \textit{E. coli}, and a NAND circuit composed from individual gates in \textit{E. coli} are used to explore how Koopman subspace functions encode increasing circuit interference on \textit{E. coli} chassis dynamics.
\section{Koopman operator formulation and Dynamic mode decomposition} \label{sec:Koop}
We briefly introduce Koopman operator theory (see \cite{mezic2005spectral} for a full discussion); as we will use it throughout this paper. Consider a discrete time open-loop nonlinear system of the form
\begin{equation}
x_{t+1} =f(x_t)\\
\label{eq:sys}
\end{equation}
where $f: M \subset \mathbb{R}^n \rightarrow M$ is an analytic map on the state space. The Koopman operator of (\ref{eq:sys}), $\mathcal{K}$ : $\mathcal{F}$ $\rightarrow$ $\mathcal{F}$, is a linear operator that acts on observable functions $\psi (x_t)$ and propagates them forward in time as
\begin{equation}
\psi (x_{t+1})=\mathcal{K}\psi (x_t).
\label{eq:KoopEqInf}
\end{equation}
Here ${\mathcal F}$ is the space of observable functions that is invariant under the action of $\mathcal{K}$.
Using data-driven approaches, commonly DMD \cite{schmid2010dynamic} or extended DMD \cite{williams_kevrekidis_rowley_2015}, an approximation to the Koopman operator, $K$, can be computed. The approach taken to compute an approximation to the Koopman operator in both DMD and extended DMD is to solve the following optimization problem
\begin{equation}
\min_{K} || \Psi(X_f) - K\Psi(X_p)||
\label{eq:learnKoop}
\end{equation}
where
$ X_p \equiv \begin{bmatrix} x_{1} & \hdots & x_{N-1} \end{bmatrix},$ $ X_f \equiv \begin{bmatrix} x_{2} & \hdots & x_{N} \end{bmatrix}
$
are snapshot matrices formed from the discrete-time dynamical system (\ref{eq:sys}) and
$
\Psi(X) \equiv \begin{bmatrix} \psi_1(x) & \hdots & \psi_R(x) \end{bmatrix}
$
is the mapping from physical space into the space of observables. DMD is a special case of extended DMD where $\psi(x) = x$. It was shown by Rowley et al. that the approximate Koopman operator obtained from DMD is closely related to a spectral analysis of the linear but infinite-dimensional Koopman operator \cite{rowley_mezic_bagheri_schlatter_henningson_2009}.
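As an illustration, with the identity observable $\psi(x)=x$ the optimization problem~(\ref{eq:learnKoop}) is an ordinary least-squares problem whose minimizer is given by a pseudoinverse. The following NumPy sketch (illustrative only, not the implementation used in this work) computes it and checks the result on a known linear system:

```python
import numpy as np

def dmd(X):
    """Plain DMD: least-squares fit of K such that x_{t+1} ~ K x_t.

    X is an n x N matrix whose columns are the snapshots x_1, ..., x_N.
    """
    Xp = X[:, :-1]   # past snapshots   x_1 ... x_{N-1}
    Xf = X[:, 1:]    # future snapshots x_2 ... x_N
    # The minimizer of ||Xf - K Xp||_F is K = Xf Xp^+ (pseudoinverse).
    return Xf @ np.linalg.pinv(Xp)

# Sanity check on a known linear system x_{t+1} = A x_t.
A = np.array([[0.9, 0.1],
              [0.0, 0.8]])
X = np.zeros((2, 20))
X[:, 0] = [1.0, 1.0]
for t in range(19):
    X[:, t + 1] = A @ X[:, t]
K = dmd(X)
```

On noise-free data from a linear system the recovered $K$ coincides with the true transition matrix up to numerical error.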
\begin{comment}
\begin{figure}
\centering
\includegraphics[height=1.25in]{figures/Psi_SDMD_noSDMD.pdf}
\caption{(a) $\psi_m$ obtained from structured DMD and (b) $\psi_m$ obtained from standard DMD.}
\label{fig:Psi}
\end{figure}
\end{comment}
\section{Structured dynamic mode decomposition} \label{sec:sdmd}
In this section we, for the first time, introduce structured dynamic mode decomposition (structured DMD). The structured DMD algorithm takes advantage of bottom-up design approaches where an original system is built upon by adding parts layer by layer to achieve complex dynamical behaviors.
The Koopman model (\ref{eq:KoopEqInf}) can be decomposed into original and added equations, or its parts in the bottom-up design approach, written as
\begin{equation}
\begin{bmatrix} \psi_O(x_{t+1})\\\psi_A(x_{t+1}) \end{bmatrix} \equiv \begin{bmatrix} K_{OO} & K_{OA} \\ K_{AO} & K_{AA} \end{bmatrix} \begin{bmatrix} \psi_O(x_{t})\\\psi_A(x_{t}) \end{bmatrix}
\label{eq:KoopEqOA}
\end{equation}
where the subscripts $O$ and $A$ correspond to the original and added components. The matrix of Koopman operators in (\ref{eq:KoopEqOA}) is unknown, but can be obtained using the standard techniques outlined in Section \ref{sec:Koop}. However, these approaches would not allow the decoupling of the underlying dynamics of the original components from added components. Therefore, it would not be possible to determine the impact that new components have on the original components. With this algorithm, we propose to discover the underlying (original) dynamics directly from data first, and then subsequently learn the interaction dynamics as an additive perturbation in the Koopman model.
If we want to solely understand the impact of added components on the original components of a system, we first learn the original-original interaction dynamics matrix $K_{OO}$ from the original system without added parts. This original system has Koopman model
\begin{equation}
\psi_O(x_{t+1}) = K_{OO}\psi_O(x_t)
\end{equation}
where $K_{OO}$ is learned through the optimization problem (\ref{eq:learnKoop}). The original-added interaction dynamics matrix $K_{OA}$ can now be learned by viewing it as an added perturbation in the original Koopman model, i.e. the new model is
\begin{equation}
\psi_O(x_{t+1}) = \underbrace{K_{OO}}_{known}\psi_O(x_t) + K_{OA}\psi_A(x_t).
\end{equation}
Here $K_{OO}$ has already been learned, and the remaining unknown, $K_{OA}$, can now be learned directly from data. In this way, we can completely decouple the underlying original dynamics from the effect that any added parts have on the original dynamics.
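Concretely, with identity observables the two stages reduce to two successive least-squares fits. The NumPy sketch below (a toy illustration with hypothetical matrix names, not the code used for the results that follow) first learns $K_{OO}$ from data of the original system alone and then fits $K_{OA}$ from the residual of the composite data:

```python
import numpy as np

def structured_dmd(X_orig, Xo_comp, Xa_comp):
    """Two-stage structured DMD with identity observables psi(x) = x.

    X_orig : snapshots of the original system alone      (n_O x N1)
    Xo_comp: original-state snapshots, composite system  (n_O x N2)
    Xa_comp: added-state snapshots, composite system     (n_A x N2)
    """
    # Stage 1: learn K_OO from the original system alone.
    K_OO = X_orig[:, 1:] @ np.linalg.pinv(X_orig[:, :-1])
    # Stage 2: with K_OO fixed, fit the residual of the composite data
    # as an additive perturbation: x_{t+1} - K_OO x_t = K_OA a_t.
    R = Xo_comp[:, 1:] - K_OO @ Xo_comp[:, :-1]
    K_OA = R @ np.linalg.pinv(Xa_comp[:, :-1])
    return K_OO, K_OA

# Toy composite system with known interaction matrices.
K_OO = np.array([[0.9, 0.05], [0.0, 0.8]])
K_OA = np.array([[0.1, 0.0], [0.02, 0.3]])
K_AA = np.array([[0.7, 0.2], [0.0, 0.6]])
N = 12
Xo = np.zeros((2, N)); Xo[:, 0] = [1.0, -0.5]   # original system alone
for t in range(N - 1):
    Xo[:, t + 1] = K_OO @ Xo[:, t]
Xoc = np.zeros((2, N)); Xoc[:, 0] = [0.3, 1.0]  # composite system
Xac = np.zeros((2, N)); Xac[:, 0] = [1.0, 0.7]
for t in range(N - 1):
    Xoc[:, t + 1] = K_OO @ Xoc[:, t] + K_OA @ Xac[:, t]
    Xac[:, t + 1] = K_AA @ Xac[:, t]
K_OO_est, K_OA_est = structured_dmd(Xo, Xoc, Xac)
```

On this noise-free toy system both stages recover their interaction matrices exactly, up to numerical error.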
\begin{comment}
\subsubsection{Example (Coupled oscillators)}
To validate structured DMD, we illustrate the algorithm on the coupled oscillator system of two masses, $m$, coupled with a spring of spring constant $k_c$. Each mass is connected to an unmovable wall with springs of spring constant $k$. The equations of motion are
\begin{equation}
\begin{aligned}
m\Ddot{x}_1 & = -k x_1 + k_c (x_2 - x_1) \\
m\Ddot{x}_2 & = -k x_2 - k_c (x_2 - x_1).
\label{eq:coupledOscillators}
\end{aligned}
\end{equation}
In this example, the original system can be thought of as one mass connected to an unmovable wall with a spring. The equation of motion for this original system is
\begin{equation*}
m\Ddot{x}_1 = -k x_1.
\end{equation*}
The associated Koopman model is
\begin{equation*}
\psi_m(x_{1,t+1}) = K_{mm} \psi_m(x_{1,t})
\end{equation*}
where the subscript $m$ denotes the original mass. Adding the second mass and corresponding springs we get back the system (\ref{eq:coupledOscillators}), with the corresponding Koopman model
\begin{equation*}
\psi_m(x_{1,t+1}) = \underbrace{K_{mm}}_{known} \psi_m(x_{1,t}) + K_{m,am} \psi_{am}(x_{2,t})
\end{equation*}
for the original mass dynamics. The original mass-added mass interaction dynamics are given by $K_{a,am}$, from which the impact of the additional mass on the original mass can be noted. In this case, $K_{a,am}$ is nearly zero, therefore we expect $\psi_m$ computed from SDMD and from DMD to be identical. Figure \ref{fig:Psi}a shows $\psi_m$ computed using structured DMD. To validate, we then compare this to $\psi_m$ computed using standard DMD, which is shown in figure \ref{fig:Psi}b. The $\psi_m$ computed from both algorithms are identical. We next use structured DMD to quantify the impact of a genetic circuit on its host.
\end{comment}
\section{Impact of genetic circuit on host} \label{sec:impact}
\begin{figure*}
\centering
\vspace{-10mm}
\includegraphics[width=1.9\columnwidth]{figures/NAND_SDMD_model_copy.pdf}
\caption{Schematic of the bottom-up design of a NAND gate in \textit{E. coli}. a) \textit{E. coli}, b) \textit{E. coli} with IPTG input c) \textit{E. coli} with L-arabinose input d) \textit{E. coli} with IPTG input, PhlF gate, and YFP reporter e) \textit{E. coli} with L-arabinose input, IcaR gate, and YFP reporter, f) complete NAND circuit. Under each design iteration is the associated host Koopman model.}
\label{fig:CircuitParts}
\end{figure*}
First, note that typically RNAseq measurements are sparse in time (two timepoints and four replicates in this case). Sinha and Yeung \cite{sinha_yeung_2019} have addressed the problem of computation of the Koopman operator when the data is sparse. For each data tuple $(x_i,x_{i+1})$, the artificial data point $(x_i + \delta x_i, x_{i+1} + \delta x_{i+1})$ is added. Artificial snapshot matrices are formed as
\begin{equation*}
\begin{aligned}
X_p &\equiv \begin{bmatrix} x_{1} & \hdots & x_{N-1} & x_{1}+\delta x_{1} & \hdots & x_{N-1}+\delta x_{N-1} \end{bmatrix}, \\ X_f & \equiv \begin{bmatrix} x_{2} & \hdots & x_{N} & x_2+\delta x_2 & \hdots & x_{N} +\delta x_{N} \end{bmatrix}.
\end{aligned}
\end{equation*}
These artificial data points (which are sufficiently small perturbations) are added to the sparse data set to enrich the data. Robust optimization-based techniques are then used to compute the approximate Koopman operator. The optimization problem to be solved is
\begin{equation*}
\min_{K} || \Psi(X_f) - K\Psi(X_p) ||_F + \lambda || K ||_F
\end{equation*}
where $\lambda$ is a regularization parameter.
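A minimal sketch of this enrichment step is given below. Here the perturbations $\delta x$ are drawn independently, the regularizer is the squared Frobenius norm (so that the ridge-regression closed form applies), and all parameter values are illustrative choices rather than those used in our analysis:

```python
import numpy as np

def robust_dmd(Xp, Xf, lam=1e-8, eps=1e-4, n_copies=3, seed=0):
    """DMD from temporally sparse data: enrich each snapshot pair
    (x_i, x_{i+1}) with small perturbed copies, then solve a
    Frobenius-regularized least-squares problem for K."""
    rng = np.random.default_rng(seed)
    Ps, Fs = [Xp], [Xf]
    for _ in range(n_copies):
        Ps.append(Xp + eps * rng.standard_normal(Xp.shape))
        Fs.append(Xf + eps * rng.standard_normal(Xf.shape))
    P, F = np.hstack(Ps), np.hstack(Fs)
    n = Xp.shape[0]
    # Ridge (closed-form) solution of min ||F - K P||_F^2 + lam ||K||_F^2.
    return F @ P.T @ np.linalg.inv(P @ P.T + lam * np.eye(n))

# Example: two replicates measured at two timepoints only.
A = np.array([[0.9, 0.1], [0.0, 0.8]])
Xp = np.array([[1.0, 1.0], [1.0, -2.0]])   # first timepoint (two replicates)
Xf = A @ Xp                                # second timepoint
K = robust_dmd(Xp, Xf)
```

The augmented snapshot matrices together with the ridge term keep the normal equations well posed even when only a handful of timepoints are available.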
Figure \ref{fig:CircuitParts} shows the bottom-up construction of a NAND gate in \textit{E. coli} where each iteration also has an associated host Koopman model (under each schematic). The composite interaction matrix $K_{HAIRPY}$ defines the impact of the NAND gate on the host. To learn this matrix, we first learn the underlying host dynamics $K_H$ in figure \ref{fig:CircuitParts}a. A heatmap of $K_H$ can be seen in figure \ref{fig:Kol}. Inducer-host interactions $K_{HI}$ and $K_{HA}$ in figure \ref{fig:CircuitParts}b and \ref{fig:CircuitParts}c are computed next. The subscripts $I$ and $A$ correspond to the inducers IPTG and L-arabinose, respectively. PhlF and IcaR inverters are then added along with \textit{yfp} reporters as seen in figure \ref{fig:CircuitParts}d and \ref{fig:CircuitParts}e. $K_{HIPY}$ and $K_{HARY}$ are learned from these systems where the subscripts $P$, $R$, and $Y$ denote the PhlF inverter, IcaR inverter, and \textit{yfp} reporters, respectively. At this stage, we can learn $K_{HAIRPY}$ since all the terms in the Koopman model of figure \ref{fig:CircuitParts}f are now known. Figure \ref{fig:K_NAND} shows a heatmap of $K_{HAIRPY}$.
The discovered host Koopman model from wild type MG1655K12 {\it E. coli} reveals a fundamentally antagonistic relationship between the LacI and AraC control modules in the transition from the first timepoint (log phase) to the second timepoint (stationary phase). We see that arabinose induction activates most of the Ara operon genes, as expected, but simultaneously creates a negative inhibitory effect on LacY, LacZ, and LacI. Conversely, induction with IPTG has a significant downregulating effect on the Ara operon genes, specifically the cluster of genes downstream of the pBAD cluster. This is consistent with prior analysis of the hierarchy of sugar utilization \cite{aidelberg2014hierarchy}. Crosstalk mechanisms mediated by the catabolite repression protein (CRP) and cyclic AMP pathway prioritize Lac operon activity over Ara operon activity when both sugars are present. While IPTG is not a sugar, it acts as a structural analog and thus induces the same diauxic response. We have verified this phenomenon with a purely data-driven approach, using only two timepoints and four biological replicates from noisy RNAseq measurements.
The import of this finding is that the NAND circuit inherently activates the diauxic response mechanism to its advantage. Since PhlF and IcaR are designed and intended to act independently, mutual repression of their underlying host machinery results in stronger underlying XOR logic and mutual coupling. That is, activation with arabinose will result in indirect repression of the lactose operon, and vice-versa.
Finally, we discovered that IcaR gene expression induces a positive-feedback loop with the AraBAD cluster. This, in turn, results in elevated IcaR expression, which induces cytotoxicity. When calculating the Frobenius norm, as a total sum measure of circuit-to-host impact, we found that arabinose induction in the host had an impact of $ \frac{||K_{HA}||_F}{p\times q} = \num{4e-7} $ ($p$ and $q$ are the dimensions of $K_{HA}$), while induction of the IcaR component had an impact of $ \frac{||K_{HIPY}||_F}{r\times s} = \num{2e-2}$, nearly 5 orders of magnitude greater. Even though PhlF and IcaR had comparable per-term gain over the 429 genes we analyzed, IcaR impacted 420 genes, while PhlF only impacted 269 genes. When analyzing DNA sequencing data, we found that the IcaR gene had been deleted from the NAND circuit on the genome; the disparity in gain and widespread influence between IcaR and the other circuit components provides a hypothesis for IcaR mutation. The IcaR part imposes considerable widespread perturbation on host genes, which is evidence of cytotoxicity leading to mutation.
\begin{figure}
\centering
\includegraphics[width=.7\columnwidth]{figures/Kol.pdf}
\caption{The Koopman operator estimated from structured DMD of the host dynamics in response to arabinose and IPTG induction. The color scale represents likely causal interaction; positive causal interaction represented by positive values and negative causal interaction represented by negative values.}
\label{fig:Kol}
\vspace{-4mm}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/K_NAND.pdf}
\vspace{-4mm}
\caption{The input Koopman operator estimated from structured DMD modeling circuit-to-host interaction of the NAND circuit. The color scale represents likely causal interaction; positive causal interaction represented by positive values and negative causal interaction represented by negative values.}
\label{fig:K_NAND}
\end{figure}
\section*{Acknowledgements}
The authors gratefully acknowledge the funding of DARPA grants FA8750-17-C-0229, HR001117C0092, HR001117C0094, DEAC0576RL01830. The authors would also like to thank Professors Igor Mezic, Alexandre Mauroy, Nathan Kutz, Steve Haase, John Harer, and Eric Klavins for insightful discussions.
Any opinions, findings, conclusions, or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the Defense Advanced Research Project Agency, the Department of Defense, or the United States government. This material is based on work supported by DARPA and AFRL under contract numbers FA8750-17-C-0229, HR001117C0092, HR001117C0094, DEAC0576RL01830.
\bibliographystyle{abbrv}
\chapter*{Introduction}
\addcontentsline{toc}{chapter}{Introduction}
\begin{quote}
\emph{Siamo liberi di sceglierci ogni volta \\invece che lasciare troppe cose gi\`a decise \\ a scegliere per noi. (Tiromancino)}
\end{quote}
\
\noindent This Ph.D. thesis, \emph{The geometry of spherical random fields},
collects research results obtained over the last three years.
The main purpose is the study
of random fields indexed by the two-dimensional unit sphere $\mathbb S^2\subset \R^{3}$.
Let us first fix some probability space $(\Omega, \F, \P)$.
\begin{definition}\label{defIntro}
A random field $T$ on $\mathbb S^2$ \cite{dogiocam} is a (possibly complex-valued) measurable map
$$
T: (\Omega \times \mathbb S^2, \F\otimes \B(\mathbb S^2)) \goto (\C, \B(\mathbb C))\ ; \qquad (\omega, x)\mapsto T_x(\omega)\ ,
$$
where $\B(\mathbb S^2)$ (resp. $\B(\C)$) denotes, as usual, the Borel $\sigma$-field on the sphere (resp. the field of complex numbers).
\end{definition}
Often in this work we will write $T(\cdot, x)$ instead of $T_x(\cdot)$.
Loosely speaking, $T$ is a collection
of r.v.'s $(T_x)_{x\in \mathbb S^2}$ indexed by the points of the sphere or, equivalently, it can be seen as a r.v. $x\mapsto T_x$ taking values in some space of functions on $\mathbb S^2$.
In particular, we are interested in rotationally invariant or \emph{isotropic} random fields (e.g. see \cite{dogiocam, balditrapani, mauspin, malyarenkobook}): briefly, we mean
that the random field $T$ and the ``rotated'' field $T^g:=(T_{gx})_{x\in \mathbb S^2}$ have the same law for every $g\in SO(3)$ (for details see Definition \ref{invarian}).
$SO(3)$, as usual, denotes the group of all rotations of $\R^3$ about the origin, under the operation of composition.
Spherical random fields naturally arise
in a wide class of instances in geophysics, atmospheric sciences, medical imaging
and cosmology.
The application domain we are interested in concerns the latter, mainly in connection with the
analysis of Cosmic Microwave Background (CMB) radiation.
We can imagine that physical experiments for CMB measure, for each point $x\in \mathbb S^2$,
an ellipse on $T_x \mathbb S^2$ - the tangent plane to the sphere at $x$ (\cite{dogiocam, malyarenko}).
The ``width'' of this ellipse is related to the \emph{temperature} of this radiation whereas the other features (elongation and orientation) are collected in complex \emph{polarization} data.
Indeed, the modern random model for the absolute temperature of CMB is an isotropic random field on the sphere, according to Definition \ref{defIntro} (see also Part 1). Instead, to model the polarization of this radiation we need a more complex structure, namely an invariant random field on the sphere taking values in some space of algebraic curves (the so-called spin random fields - see Part 3).
Testing features of the CMB -- for instance, whether it is a realization of a Gaussian field -- is a question that has attracted a lot of attention in recent years: asymptotic theory must hence be developed in the high-frequency sense (see Part 2).
Although our attention has been mostly attracted by the spherical case, in this work we decided to treat more general
situations whenever it is possible to extend our results from the sphere to other structures. Actually the interplay between the probabilistic aspects and the geometric ones produces sometimes fascinating insights.
We shall deal for instance with homogeneous spaces of a compact group (Part 1)
as well as vector bundles (Part 3).
This thesis can be split into three strongly correlated parts: namely Part 1: Gaussian fields, Part 2:
High-energy Gaussian eigenfunctions and Part 3: Spin random fields. It is nice to note that
this work will turn out to have a ``circulant'' structure, in a sense to be made clear below
(see Theorem \ref{intro1} and Theorem \ref{introFin}).
\subsection*{Related works}
\addcontentsline{toc}{subsection}{Related works}
Throughout the whole thesis, we refer to the following:
\begin{itemize}
\item P. Baldi, M. Rossi. \emph{On L\'evy's Brownian motion indexed by elements of compact groups}, Colloq. Math. 2013 (\cite{mauSO(3)});
\item P. Baldi, M. Rossi. \emph{Representation of Gaussian isotropic spin random fields}, Stoch. Processes Appl. 2014 (\cite{mauspin});
\item D. Marinucci, M. Rossi. \emph{Stein-Malliavin approximations for nonlinear functionals of random eigenfunctions
on $\mathbb S^d$}, J. Funct. Anal. 2015 (\cite{maudom});
\item D. Marinucci, G. Peccati, M. Rossi, I. Wigman. (2015+) \emph{Non-Universality of nodal length distribution for
arithmetic random waves}, Preprint arXiv:1508.00353 (\cite{misto}).
\end{itemize}
However some of the results presented here are works still in progress,
and should appear in forthcoming papers:
\begin{itemize}
\item M. Rossi. (2015+) \emph{The Defect of random hyperspherical harmonics}, in preparation (\cite{mau});
\item M. Rossi. (2015) \emph{Level curves of spherical Gaussian eigenfunctions}, Preprint.
\end{itemize}
Moreover, we decided not to include some other works: for brevity, \cite{simonmaudom}, written with S. Campese and D. Marinucci, and, to avoid heterogeneity, \cite{ld, sld}, both
joint works with P. Baldi and L. Caramellino.
\section*{Part 1: Gaussian fields}
\addcontentsline{toc}{section}{Part 1: Gaussian fields}
\subsection*{Chapters 1 \& 2}
Our investigation starts from a ``typical'' example of random field on the sphere,
i.e. P.~L\'evy's spherical Brownian motion. We mean a centered
Gaussian field $W=(W_x)_{x\in \mathbb S^2}$ whose covariance kernel $K$ is given by
\begin{equation}\label{covMBintro}
K(x,y) := \frac12 \left ( d(x,o) + d(y,o) - d(x,y) \right )\ ,\qquad x,y\in \mathbb S^2\ ,
\end{equation}
where $o$ is some fixed point on the sphere -- say the ``north pole'', and $d$ denotes the usual geodesic distance.
Note that \paref{covMBintro} implies $W_o=0$ (a.s.).
In particular, we recall P.~L\'evy's idea \cite{levy} for constructing $W$ (see Example \ref{MB}).
Consider a Gaussian white noise $S$ on the sphere, i.e. an isometry between
the space of square integrable functions on $\mathbb S^2$ and
finite-variance r.v.'s, defined on $(\Omega, \F, \P)$.
P.~L\'evy defines a spherical Gaussian field $T$ as
\begin{equation}\label{LevyT}
T_x := \sqrt{\pi} S(1_{H_x})\ ,\quad x\in \mathbb S^2\ ,
\end{equation}
where $1_{H_x}$ denotes the indicator function of the half-sphere centered at $x$.
It turns out that $T$ is isotropic and
$$
\E[|T_x - T_y|^2] = d(x,y)\ ,\quad x,y\in \mathbb S^2\ .
$$
From now on, $\E$ denotes the expectation under the probability measure $\P$.
P.~L\'evy's spherical Brownian motion is hence the Gaussian field $W$ defined as
\begin{equation}\label{MB}
W_x := T_x - T_o\ , \quad x\in \mathbb S^2\ .
\end{equation}
It is worth remarking
that the Brownian motion on the $m$-dimensional unit sphere $\mathbb S^m\subset \R^{m+1}$ ($m>2$) is defined analogously, and P.~L\'evy himself extended the previous construction to the higher-dimensional situation.
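The positive definiteness of the kernel \paref{covMBintro} on $\mathbb S^2$ -- equivalently, the existence of $W$ -- can also be probed numerically. The following sketch (an illustration of ours, not part of the thesis; all names are our own) builds the Gram matrix of the kernel at random points of the sphere and checks that its spectrum is nonnegative up to rounding error.

```python
# Numerical sanity check (illustration only): the Brownian kernel
# K(x, y) = (d(x, o) + d(y, o) - d(x, y)) / 2 on S^2, with d the
# geodesic (great-circle) distance, should be positive semidefinite.
import numpy as np

rng = np.random.default_rng(0)

def sphere_points(n):
    """Sample n points uniformly on the unit sphere S^2 in R^3."""
    v = rng.standard_normal((n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def geodesic(u, v):
    """Great-circle distances between the rows of u and the rows of v."""
    return np.arccos(np.clip(u @ v.T, -1.0, 1.0))

n = 200
x = sphere_points(n)
o = np.array([[0.0, 0.0, 1.0]])        # the "north pole" o
d_xo = geodesic(x, o)                   # shape (n, 1)
K = 0.5 * (d_xo + d_xo.T - geodesic(x, x))

eigmin = np.linalg.eigvalsh(K).min()
print(f"smallest eigenvalue of K: {eigmin:.2e}")
```

On random samples the smallest eigenvalue is zero up to floating-point noise, consistent with Schoenberg's and L\'evy's results.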
Our first question is the following.
$\bullet$ Can we extend this technique to construct isotropic Gaussian fields $T$ on $\mathbb S^2$?
We answered this question in the first part of \cite{mauspin}.
We note that \paref{LevyT} can be rewritten as
$$
T_x := \sqrt{\pi} S(L_{g} 1_{H_{o}})\ ,\quad x\in \mathbb S^2\ ,
$$
where $g=g_x$ is any rotation matrix $\in SO(3)$ mapping the north pole $o$ to the point $x$ and $L_{g} 1_{H_{o}}$ is the function
defined as $L_{g} 1_{H_o}(y):= 1_{H_o}(g^{-1} y)$, $y\in \mathbb S^2$.
Actually,
$$\displaylines{
L_{g} 1_{H_o}(y)= 1_{H_o}(g^{-1} y) = 1_{g H_{o}}( y)= 1_{H_{g o}}( y)= 1_{H_{x}}( y)\ ,\quad y\in \mathbb S^2\ .
}$$
$L$ coincides with the left regular representation (\ref{rappsin}) of $SO(3)$.
Consider now some homogeneous space $\X$ (see Definition \ref{hom}) of a compact group $G$
(e.g. $\X= \mathbb S^2$ and $G=SO(3)$).
As for the spherical case, we have the following.
\begin{definition}
A random field $T$ on $\X$ is a (possibly complex-valued) measurable map
$$
T: (\Omega \times \X, \F\otimes \B(\X)) \goto (\C, \B(\mathbb C))\ ; \qquad (\omega, x)\mapsto T_x(\omega)\ ,
$$
where $\B(\X)$ denotes, as usual, the Borel $\sigma$-field on $\X$.
\end{definition}
We develop P.~L\'evy's construction to obtain isotropic Gaussian fields on $\X$.
First we consider a Gaussian white noise $S$ on $\X$, extended to the space $L^2(\X)$ of square integrable \emph{complex} functions $f$. $S$ respects the real character of $f$, i.e. $f$ is real if and only if $S(f)$ is real.
Let us fix once and for all some point $x_0\in \X$ and denote by $K$ the isotropy group of $x_0$, i.e. the closed subgroup of elements $g\in G$ fixing $x_0$.
Recall that $\X$ is isomorphic to the quotient space $G/K$.
To each $f\in L^2(\X)$ which is moreover left invariant w.r.t. the action of $K$, we
associate
an isotropic complex-valued Gaussian field $T^f$ on $\X$ as
\begin{equation}\label{defTintro}
T^f_x := S(L_g f)\ ,\quad x\in \X\ ,
\end{equation}
where the function $L_g f$ is defined as
$L_g f(y) := f(g^{-1} y)$, $y\in \X$
and $g\in G$ is any element that maps the point $x_0$ to the point $x=g x_0$. $L$ coincides with the left regular representation of $G$ (see \paref{rappsin}).
The law of the field $T^f$ is completely characterized
by the associated positive definite function $\phi^f$ which is defined, for $g\in G$, as
\begin{equation}\label{phii}
\phi^f(g) := \Cov \left (T^f_x, T^f_{x_0} \right )=\langle L_g f, f\rangle
\ ,
\end{equation}
where $x$ is such that $gx_0 = x$. As usual, $\langle \cdot, \cdot \rangle$ denotes the inner product in $L^2(\X)$. Moreover we need
the ``relation'' function of $T^f$
\begin{equation}
\zeta^f(g) := \Cov \left (T^f_x, \overline{T^f_{x_0}} \right )=\langle L_g f, \overline{f}\rangle
\ ,
\end{equation}
where $\overline{T^f_{x_0}}$ (and $\overline{f}$) denotes complex conjugation.
$\bullet$ Now we ask whether every isotropic, complex-valued Gaussian
random field on $\X$ can be obtained with this construction.
The answer is \emph{no} in general (see Remark \ref{rem-sfera} and \paref{cic} for some counterexample). It is
however positive if we consider isotropic \emph{real} Gaussian fields $T$ on $\X$. Our first result is the following (Theorem \ref{real-general}).
\begin{theorem}\label{intro1}
Let $T$ be a real isotropic Gaussian field on $\X$. Then there exists a real left-$K$-invariant function $f\in L^2(\X)$ such that $T$ and $T^f$ have the same law
$$
T \ \mathop{=}^{\mathcal L} \ T^f\ ,
$$
where $T^f$ is defined as \paref{defTintro}.
\end{theorem}
Actually, we prove that the associated positive definite function $\phi$ on the group $G$ of $T$ is of the form \paref{phii}. Precisely, if $\phi$ is defined as before by $\phi(g) := \Cov \left (T_x, T_{x_0} \right )$, where $x= g x_0$, then we show (see Proposition \ref{real-sq}) that there exists a real function $f\in L^2(\X)$ such that
$$
\phi(g) = \langle L_g f, f \rangle\ ,\quad g\in G\ ,
$$
i.e. $T$ and $T^f$ have the same distribution.
\subsection*{Chapter 3}
Assume now that $\X$ is in addition endowed with some metric
$d$. Analogously to the spherical case, P.~L\'evy's Brownian motion on the metric space $(\X, d)$ is defined as a real centered Gaussian field on $\X$ which vanishes at some point $x_0\in\X$ and such that $\E[|X_x-X_y|^2]=d(x,y)$. By polarization, its covariance function is
\begin{equation}\label{kernel MBintro}
K(x,y)=\frac{1}{2} \,( d(x,x_0) + d(y,x_0) - d(x,y) )\ .
\end{equation}
Note that it is not obvious that the Brownian motion exists on $(\X, d)$,
equivalently that the kernel \paref{kernel MBintro} is positive definite on $\X$.
Positive
definiteness of $K$ for $\X=\R^{m+1}$ and $d$ the Euclidean metric was proved by Schoenberg \cite{schoenberg} in 1938 and, as recalled above, P.~L\'evy himself
constructed the Brownian motion on $\X=\bS^{m}$, here
$d$ being the spherical distance.
Later Gangolli \cite{GAN:60} gave an analytical proof of the positive definiteness of the kernel \paref{kernel MBintro} for the same metric space $(\bS^{m},d)$, in a paper that dealt with this question for a large class of homogeneous spaces.
Finally Takenaka in \cite{TKU} proved the positive definiteness of the kernel \paref{kernel MBintro} for the Riemannian metric spaces
of constant sectional curvature equal to $-1,0$ or $1$, therefore adding the hyperbolic disk to the list. To be precise in the case of the hyperbolic space
$\mathcal{H}_m = \lbrace (x_0, x_1, \dots, x_m)\in \R^{m+1} :
x_0^2 - x_1^2 - \dots - x_m^2 = 1,\ x_0 > 0 \rbrace $, the distance under consideration is the unique, up to multiplicative
constants, Riemannian distance that is invariant with respect to the action of $G=L_m$, the Lorentz group.
$\bullet$ Now we ask the question of the existence of P.L\'evy's Brownian motion on $\X=SO(3)$, endowed with the Riemannian metric induced by the embedding $SO(3)\hookrightarrow \R^9$.
There are deep motivations for this choice, connected to the spin theory, which will be clearer in Part 3.
We answer this question in \cite{mauSO(3)} (Proposition \ref{kernel MB su SO(3)}).
\begin{prop}\label{propIntro}
The kernel $K$ in \paref{kernel MBintro} is not positive definite on $SO(3)$, endowed with the Riemannian metric induced by the embedding $SO(3)\hookrightarrow \R^9$.
\end{prop}
This is somehow surprising
as, in particular, $SO(3)$ is locally isometric to $SU(2)$, where
positive definiteness of $K$ is immediate, since $SU(2)$ is isometric to
the unit hypersphere $\mathbb S^3$.
Proposition \ref{propIntro} moreover allows us to prove the non-existence of P.~L\'evy's Brownian motion on the group $SO(n)$ of all rotations of $\R^{n}$ for $n>3$. Actually, $SO(n)$ contains a closed subgroup that is isomorphic
to $SO(3)$.
The same argument holds also for, e.g.,
the group $SU(n)$ of all $n\times n$ unitary matrices with determinant one, for $n\ge 3$ (see Corollary \ref{kernel MB su SO(n)}).
Our method could be applied to investigate positive definiteness of the Brownian kernel on other compact Lie groups.
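Proposition \ref{propIntro} can also be probed numerically. The sketch below (an illustration of ours, not part of the thesis) samples random rotations and builds the Gram matrix of the Brownian kernel, taking $d$ proportional to the rotation angle of $g^{-1}h$, which agrees with the bi-invariant Riemannian distance up to a constant factor; rescaling $d$ rescales $K$ and so does not affect positive definiteness. A negative eigenvalue of the sampled matrix witnesses the failure of positive definiteness, although a random sample is of course not guaranteed to exhibit one.

```python
# Numerical probe (illustration only) of the non-positive-definiteness of
# the Brownian kernel K(g, h) = (d(g, e) + d(h, e) - d(g, h)) / 2 on SO(3),
# with d proportional to the rotation angle of g^{-1} h.
import numpy as np

rng = np.random.default_rng(1)

def random_rotation():
    """Random element of SO(3) via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q = q * np.sign(np.diag(r))
    if np.linalg.det(q) < 0:
        q[:, [0, 1]] = q[:, [1, 0]]    # enforce det = +1
    return q

def angle(a, b):
    """Rotation angle of a^T b, i.e. the geodesic distance up to a constant."""
    t = (np.trace(a.T @ b) - 1.0) / 2.0
    return np.arccos(np.clip(t, -1.0, 1.0))

n = 300
g = [random_rotation() for _ in range(n)]
e = np.eye(3)                                        # base point x0 = identity
d0 = np.array([angle(gi, e) for gi in g])            # d(g_i, e)
D = np.array([[angle(g[i], g[j]) for j in range(n)] for i in range(n)])
K = 0.5 * (d0[:, None] + d0[None, :] - D)

print("smallest eigenvalue of K:", np.linalg.eigvalsh(K).min())
```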
\section*{Part 2: High-energy Gaussian eigenfunctions}
\addcontentsline{toc}{section}{Part 2: High-energy Gaussian eigenfunctions}
\subsection*{Chapters 4, 5 \& 6}
As already briefly stated, the investigation of spherical random fields has been strongly motivated by cosmological applications (e.g. concerning CMB): the asymptotic analysis in this setting must be hence developed in the high-energy sense, as follows.
First recall that the eigenvalues of the Laplace-Beltrami operator $\Delta_{\mathbb S^2}$ on $\mathbb S^2$ are integers of the form $-\ell(\ell +1)$, $\ell\in \N$.
Under Gaussianity, an isotropic random field $T$ on $\mathbb S^2$ can be decomposed in terms of its random Fourier components $T_\ell$, $\ell\in \N$. The latter are independent and isotropic centered Gaussian fields, whose covariance kernel is
\begin{equation}\label{ker2intro}
\E[T_\ell(x) T_\ell(y)]=P_\ell(\cos d(x,y))\ ,\quad x,y\in \mathbb S^2\ ,
\end{equation}
where $P_\ell$ is the $\ell$-th Legendre polynomial \cite{szego, dogiocam}
and $d(x,y)$ denotes the spherical distance between $x$ and $y$.
The following spectral representation holds \cite[Proposition 5.13]{dogiocam}
$$
T_x = \sum_{\ell\in \N} c_\ell T_\ell(x)\ ,
$$
where the series converges in $L^2(\Omega\times \mathbb S^2)$ and the nonnegative sequence $(c_\ell)_\ell$ is the power spectrum of the field \cite{dogiocam}.
$T_\ell$ is known as the $\ell$-th \emph{Gaussian spherical eigenfunction} or random spherical harmonic (see \paref{Telle} for a precise definition); indeed it satisfies ``pathwise''
$$
\Delta_{\mathbb S^2}T_\ell + \ell(\ell+1) T_\ell = 0\ .
$$
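As a numerical aside (not part of the thesis; names and normalizations below are our own), the covariance structure \paref{ker2intro} can be verified by simulating $T_\ell$ through its expansion into spherical harmonics $Y_{\ell m}$ with independent complex Gaussian coefficients subject to the reality constraint $a_{\ell,-m}=(-1)^m\overline{a_{\ell m}}$; the addition theorem then yields $\E[T_\ell(x)T_\ell(y)]=P_\ell(\cos d(x,y))$.

```python
# Monte-Carlo check (illustrative) that the simulated random spherical
# harmonic T_l has covariance P_l(cos d(x, y)), for points on one meridian.
import numpy as np
from math import factorial
from scipy.special import lpmv, eval_legendre

rng = np.random.default_rng(2)
ell, N = 5, 20000
c = 4.0 * np.pi / (2 * ell + 1)          # normalization giving unit variance

def Y(m, l, theta):
    """Spherical harmonic Y_{lm} at polar angle theta, azimuth 0 (real-valued)."""
    k = abs(m)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi) * factorial(l - k) / factorial(l + k))
    val = norm * lpmv(k, l, np.cos(theta))
    return val if m >= 0 else (-1) ** k * val

polar = np.array([0.0, 0.3, 0.7, 1.2])   # four points on a common meridian
Ymat = np.array([Y(m, ell, polar) for m in range(-ell, ell + 1)])

a = np.zeros((N, 2 * ell + 1), dtype=complex)
a[:, ell] = np.sqrt(c) * rng.standard_normal(N)            # a_0 real
for m in range(1, ell + 1):
    u, v = rng.standard_normal((2, N))
    a[:, ell + m] = np.sqrt(c / 2) * (u + 1j * v)
    a[:, ell - m] = (-1) ** m * np.conj(a[:, ell + m])     # reality constraint

T = (a @ Ymat).real                       # N samples of T_l at the four points
emp = T.T @ T / N                         # empirical covariance (mean is zero)
theo = eval_legendre(ell, np.cos(np.abs(polar[:, None] - polar[None, :])))
print("max |empirical - P_l| :", np.abs(emp - theo).max())
```

The discrepancy is of Monte-Carlo order $O(N^{-1/2})$.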
In this second part, we investigate the high-energy behavior (i.e., as $\ell\to +\infty$) of $T_\ell$. We are interested in the geometry of the $z$-\emph{excursion set} (\cite{adlertaylor} e.g.), which is defined for $z\in \R$ as
\begin{equation}\label{excset}
A_\ell(z) := \lbrace x\in \mathbb S^2 : T_\ell(x) > z\rbrace\ .
\end{equation}
For instance, one can investigate the area of $A_\ell(z)$, the length of the boundary $\partial A_\ell(z)$ -- that is the length of level curves $T^{-1}_\ell(z)$, and the Euler-Poincar\'e characteristic of these domains. For completeness, we recall that these three quantities correspond to the so-called Lipschitz-Killing curvatures on the sphere \cite{adlertaylor}.
Many authors have studied properties of excursion sets of random fields on the sphere or other manifolds:
for instance, one can investigate the behavior of the excursion probability \cite{yimin},
i.e. as $z\to +\infty$
$$
\P\left ( \sup_{x\in \mathbb S^2} T_x > z\right )\ ,
$$
where $T$ is some random field on the sphere
(see also e.g.
\cite{adlertaylor, yimin-cheng, cheng, cheng2, chenschwar, fluct}).
It is worth remarking that random spherical harmonics have attracted great interest also in other disciplines, such as Mathematical Physics.
Indeed Berry's Random Wave Model (\cite{berry}) allows one to compare - at least for ``generic'' chaotic Riemannian \emph{surfaces} $\mathcal M$ - a \emph{deterministic} Laplace eigenfunction $f$ on $\mathcal M$ of some large eigenvalue $E$ to a ``typical''
instance of an isotropic, monochromatic \emph{random} wave with wavenumber
$\sqrt{E}$ (see also \cite{wigsurvey}). In view of this conjecture, much effort has been devoted to $2$-dimensional manifolds such as the torus $\mathbb T$ (see e.g. \cite{AmP}) and the sphere $\mathbb S^2$ (see e.g. \cite{vale2}, \cite{vale3}, \cite{Nonlin},
\cite{Wig}), as stated just above.
In this setting, the \emph{nodal} case corresponding to $z=0$ has received the greatest attention.
Indeed nodal domains (the complement of the set where eigenfunctions are equal to zero) appear
in many problems in engineering, physics and the natural sciences: they
describe the sets that remain \emph{stationary} during vibrations, hence their importance
in such areas as musical instruments industry, earthquake study and astrophysics
(for further motivating details see \cite{wigsurvey}).
$\bullet$ In this thesis we want to investigate the geometry of excursion sets \paref{excset} of high-energy Gaussian eigenfunctions $T_\ell$ on the sphere.
The geometric features we are interested in can be written as nonlinear functionals of the random field itself (and its spatial derivatives).
\underline{Excursion area}
The area $S_\ell(z)$ of $z$-excursion sets \paref{excset} can be written as
\begin{equation*}
S_\ell(z) = \int_{\mathbb S^2} 1_{(z, +\infty)}(T_\ell(x))\,dx\ ,
\end{equation*}
where $1_{(z, +\infty)}$ is the indicator function of the interval $(z, +\infty)$. The expected value is simply computed to be
$\E[S_\ell(z)] = 4\pi (1 - \Phi(z))$, where $\Phi$ denotes the cumulative distribution function of a standard Gaussian r.v. The variance has been studied in \cite{wigexc, Nonlin, def}:
we have, as $\ell \to +\infty$,
\begin{equation}\label{varintro}
\Var(S_\ell(z)) = z^2 \phi(z)^2 \cdot \frac{1}{\ell} + O\left (\frac{\log \ell}{\ell^2} \right )\ ,
\end{equation}
where $\phi$ is the standard Gaussian probability density function.
In particular, for $z\ne 0$, \paref{varintro} gives the exact asymptotic form of the
variance.
The nodal case corresponds to the Defect $D_\ell$, which is defined as
$$
D_\ell := \int_{\mathbb S^2} 1_{(0, +\infty)}(T_\ell(x))\,dx - \int_{\mathbb S^2} 1_{(-\infty, 0)}(T_\ell(x))\,dx\ ,
$$
i.e. the difference between the measure of the positive and negative regions.
Note that $D_\ell = 2S_\ell(0) - 4\pi$. We have $\E[D_\ell]=0$ and from \cite{def}
\begin{equation}\label{varD}
\Var(D_\ell) = \frac{C}{\ell^2}( 1 +o(1))\ ,\quad \ell\to +\infty\ ,
\end{equation}
for some $C> \frac{32}{\sqrt{27}}$.
It is worth remarking that the Defect variance is of smaller order than the non-nodal case.
This situation is similar to the \emph{cancellation phenomenon} observed by Berry in a different setting (\cite{berry}).
In \cite{Nonlin} Central Limit Theorems are proved for the excursion area:
\begin{align*}
\frac{S_\ell(z) - \E[S_\ell(z)]}{\sqrt{\Var(S_\ell(z))}}&\mathop{\goto}^{\mathcal L} Z\ , \qquad z\ne 0\ ,\\
\frac{D_\ell}{\sqrt{\Var(D_\ell)}}&\mathop{\goto}^{\mathcal L} Z\ ,
\end{align*}
$Z\sim \mathcal N(0,1)$ being a standard Gaussian r.v. and $\mathop{\goto}^{\mathcal L}$ denoting the convergence in distribution from now on. Often we will write $\mathop{\goto}^{d}$ instead of $\mathop{\goto}^{\mathcal L}$.
A CLT result is ``only'' an asymptotic result with no information on the \emph{speed of convergence} to the limiting
distribution. More refined results indeed aim at the investigation of the asymptotic behaviour for
various probability metrics, such as Wasserstein, Kolmogorov and total variation distances, see \paref{prob distance}. In this respect, a major development in the last few years has been provided by the
so-called \emph{fourth-moment} literature, which is summarized in the recent monograph \cite{noupebook}. In short, a
rapidly growing family of results is showing how to establish bounds on probability
distances between multiple stochastic integrals and the Gaussian distribution by analyzing fourth moments/fourth cumulants alone (\cite{taqqu, simon, nou-pe, nou-pe2, simonS} e.g.).
$\bullet$ We establish a quantitative CLT for the excursion area of random spherical harmonics.
In \cite{maudom} we consider a more general situation, i.e. nonlinear functionals of Gaussian eigenfunctions $(T_{\ell})_{\ell\in \N}$ on the $m$-dimensional unit sphere $\mathbb S^m$, $m\ge 2$.
The eigenvalues of the Laplace-Beltrami operator $\Delta_{\mathbb S^m}$
on $\mathbb S^m$ are integers of the form $-\ell(\ell+m-1)$, $\ell\in \N$.
The $\ell$-th Gaussian eigenfunction $T_\ell$ on $\mathbb S^m$ \paref{Telle}, satisfying
$$
\Delta_{\mathbb S^m} T_\ell + \ell(\ell +m-1) T_\ell = 0\ , \quad a.s.
$$
is a centered isotropic Gaussian field with covariance function
\begin{equation}\label{kermintro}
\E[T_\ell(x) T_\ell(y)] = G_{\ell;m} (\cos d(x,y))\ ,
\end{equation}
where $G_{\ell;m}$ denotes the normalized Gegenbauer polynomial \cite{szego} and $d$ the usual distance on the $m$-sphere.
Precisely, we consider
sequences of r.v.'s of the form
$$
S_\ell(M) := \int_{\mathbb S^m} M(T_\ell(x))\,dx\ ,
$$
where $M:\R\to \R$ is some measurable function such that $\E[M(Z)^2]<+\infty$,
$Z\sim \mathcal N(0,1)$. Note that if
we choose $M=1_{(z,+\infty)}$, then $S_\ell(M) = S_\ell(z)$, the excursion volume of random hyperspherical harmonics, i.e. the empirical measure of the set where the eigenfunction lies above the level $z$.
The main idea for our proof is first to develop $S_\ell(M)$ into Wiener chaoses, i.e. as a series
in $L^2(\P)$ of the type \paref{chaos exp}
$$
S_\ell(M) = \sum_{q=0}^{+\infty} \frac{J_q(M)}{q!} \underbrace{\int_{\mathbb S^m} H_q(T_\ell(x))\,dx}_{:= h_{\ell,q;m}}\ ,
$$
where $H_q$ denotes the $q$-th Hermite polynomial \paref{hermite} (see also \cite{szego, noupebook}) and $J_q(M):= \E[M(Z)H_q(Z)]$. Then, we study the asymptotic behavior of each summand $h_{\ell,q;m}$ of the previous series by means of a careful investigation of asymptotic variances (see Proposition \ref{varianza}) and the Fourth Moment Theorem \paref{th}: we are hence able to prove a quantitative CLT for $h_{\ell,q;m}$ (Proposition \ref{teo1}) in Wasserstein distance \paref{prob distance}. To be more precise, we can restrict ourselves to even integers $\ell$ (see the related discussion in Chapter $5$).
It turns out that, if the projection of $M(Z)$ onto the second order Wiener chaos is not zero ($J_2(M)\ne 0$), then this component dominates the whole series, i.e., as $\ell\to +\infty$
$$
\frac{S_\ell(M) - \E[S_\ell(M)]}{\sqrt{\Var(S_\ell(M))}} = \frac{\frac{J_2(M)}{2}h_{\ell,2;m}}{\sqrt{\Var(S_\ell(M))}} + o_\P(1)\ .
$$
We can therefore prove the following (Theorem \ref{general}).
\begin{theorem}
If $J_2(M)\ne 0$, then
$$
d_W\left( \frac{S_\ell(M) - \E[S_\ell(M)]}{\sqrt{\Var(S_\ell(M))}}, Z \right) = O(\ell^{-1/2})\ ,
$$
that gives the rate of convergence, as $\ell\to +\infty$, to the standard Gaussian distribution in Wasserstein distance $d_W$.
\end{theorem}
Moreover if $M=1_{(z,+\infty)}$, then it is easy to compute that $J_2(M)\ne 0 \iff z\ne 0$. The following corollary is therefore immediate (Theorem \ref{mainteo}).
\begin{cor}
If $z\ne 0$, then
$$
d_W\left( \frac{S_\ell(z) - \E[S_\ell(z)]}{\sqrt{\Var(S_\ell(z))}}, Z \right) = O(\ell^{-1/2})\ ,\quad \ell\to \infty\ .
$$
\end{cor}
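The criterion $J_2(M)\ne 0$ can be made concrete for $M=1_{(z,+\infty)}$: integrating by parts gives the closed form $J_q(M)=\phi(z)H_{q-1}(z)$ for $q\ge 1$ (with $H_q$ the probabilists' Hermite polynomials), so in particular $J_2(M)=z\phi(z)$, which vanishes exactly at the nodal level $z=0$. The short quadrature check below is an illustration of ours, not part of the thesis.

```python
# Verify J_q(1_{(z,+inf)}) = E[1_{(z,+inf)}(Z) He_q(Z)] = phi(z) He_{q-1}(z)
# numerically; in particular J_2 = z * phi(z) vanishes only at z = 0.
import numpy as np
from numpy.polynomial.hermite_e import hermeval
from scipy.integrate import quad

phi = lambda x: np.exp(-x ** 2 / 2) / np.sqrt(2 * np.pi)

def J(q, z):
    """J_q(1_{(z,+inf)}) computed as the integral of He_q * phi over (z, inf)."""
    coeffs = np.zeros(q + 1)
    coeffs[q] = 1.0                      # selects the q-th Hermite polynomial
    return quad(lambda x: hermeval(x, coeffs) * phi(x), z, np.inf)[0]

for z in (0.0, 0.5, 1.0, 2.0):
    print(z, J(2, z), z * phi(z))        # the last two columns agree
```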
We have just obtained a quantitative CLT for the excursion volume of random hyperspherical eigenfunctions in the non-nodal case.
The nodal case, which corresponds to the Defect $D_\ell$, requires harder work, since it is no longer true that the second chaotic component dominates.
In \cite{mau} we show first the exact rate for the Defect variance (Theorem \ref{thdefvar}).
\begin{theorem}
For $m>2$, as $\ell\to +\infty$
$$
\Var(D_\ell) = \frac{C_m}{\ell^m}(1 +o(1))\ ,
$$
where $C_m>0$ is some constant depending on the dimension $m$.
\end{theorem}
Remark that the case $m=2$ has been solved in \cite{def}.
Moreover we prove CLT results for the Defect, in the high-energy limit (Theorem \ref{thDef}).
\begin{theorem}
For $m\ne 3,4,5$ we have, as $\ell\to +\infty$,
$$
\frac{D_\ell}{\sqrt{\Var(D_\ell)}}\mathop{\to}^{\mathcal L} Z\ ,
$$
where as before $Z\sim \mathcal N(0,1)$.
\end{theorem}
The remaining cases ($m=3,4,5$) require a precise investigation of the fourth-order cumulants of the r.v.'s $h_{\ell,3;m}$ and are still work in progress in \cite{mau}, where moreover the \emph{quantitative} CLT for the Defect in the Wasserstein distance will be proved:
$$
d_W \left ( \frac{D_\ell}{\sqrt{\Var(D_\ell)}}, Z\right ) =O\left( \ell^{-1/4} \right) \ .
$$
\subsection*{Chapters 7 \& 8}
\underline{Length of level curves}
The length $\mathcal L_\ell(z)$ of level curves $T_\ell^{-1}(z)$ can be formally written as
$$
\mathcal L_\ell(z) = \int_{\mathbb S^2} \delta_z(T_\ell(x))\| \nabla T_\ell(x)\|\, dx\ ,
$$
where $\delta_z$ denotes the Dirac mass in $z$, $\| \cdot \|$ the norm in $\R^2$ and $\nabla$ the gradient.
The expected value is \cite{Wig, wigsurvey}
$$
\E[\mathcal L_\ell(z)] = 4\pi\cdot \frac{\e^{-z^2/2}}{2\sqrt{2}} \sqrt{\ell(\ell+1)}
$$
and for the variance we have \cite{wigsurvey} if $z\ne 0$
$$
\Var\left (\mathcal L_\ell(z)\right ) \sim C\e^{-z^2}z^4\cdot \ell\ ,\quad \ell\to +\infty\ ,
$$
for some $C>0$. Moreover, I.~Wigman computed (private calculations) the exact constant
$$
C=\frac{\pi^2}{2}\ .$$
For the nodal length $\mathcal L_\ell:= \mathcal L_\ell(0)$ we have \cite{Wig}
\begin{equation}\label{varN}
\Var(\mathcal L_\ell) \sim \frac{1}{32} \cdot \log \ell\ ,\quad \ell\to +\infty\ .
\end{equation}
Here also we observe a different behavior of the asymptotic variances: in the nodal case the order (logarithmic) is smaller than the ``natural'' scaling $\approx \ell$. This is due to an analytic cancellation in the asymptotic expansion of the length variance which occurs only for $z=0$.
This phenomenon has been called ``obscure'' Berry's cancellation (see \cite{Berry2002, Wig}).
$\bullet$ We investigate the asymptotic distribution of the length of level curves.
We try to answer this question in \cite{mistosfera}: here also we first compute the chaotic expansion of $\mathcal L_\ell(z)$ (Proposition \ref{teoexpS}). Let us denote by $(\partial_1 \widetilde T_\ell, \partial_2 \widetilde T_\ell)$ the normalized gradient, i.e. $\sqrt{\frac{2}{\ell(\ell+1)}}\nabla T_\ell$.
\begin{prop}
The chaotic expansion of $\mathcal L_\ell(z)$ is
\begin{eqnarray}\label{chaosexpintro}
\mathcal L_\ell(z) &=& \E [\mathcal L_\ell(z)] + \sqrt{\frac{\ell(\ell+1)}{2}}\sum_{q=2}^{+\infty}\sum_{u=0}^{q}\sum_{k=0}^{u}
\frac{\alpha _{k,u-k}\beta _{q-u}(z)
}{(k)!(u-k)!(q-u)!} \!\!\times\\
&&\hspace{4.5cm} \times \int_{\mathbb S^2}\!\! H_{q-u}(T_\ell (x))
H_{k}(\partial_1 \widetilde T_\ell (x))H_{u-k}(\partial_2
\widetilde T_\ell (x))\,dx,\notag
\end{eqnarray}
where the series converges in $L^2(\P)$ and $(\beta_l(z))_l$, $(\alpha_{n,m})_{n,m}$ are respectively the chaotic coefficients of the Dirac mass
in $z$ and the norm in $\R^2$.
\end{prop}
By some computations, it is possible to give an exact formula for the second chaotic component (Proposition \ref{2sfera})
$$
\text{proj}(\mathcal L_\ell(z)|C_2) = \sqrt{\frac{\ell(\ell+1)}{2}}
\frac{1}{4} \e^{-z^2/2} z^2 \int_{\mathbb S^2} H_2(T_\ell(x))\,dx=
\sqrt{\frac{\ell(\ell+1)}{2}}
\frac{1}{4} \e^{-z^2/2} z^2 h_{\ell,2;2}\ .
$$
Note that, as for the excursion area, the second component vanishes if and only if $z=0$.
Computing the exact variance of $\text{proj}(\mathcal L_\ell(z)|C_2) $, it turns out that, for $z\ne 0$
$$
\lim_{\ell\to +\infty} \frac{\Var(\mathcal L_\ell(z))}{\Var(\text{proj}(\mathcal L_\ell(z)|C_2) )}=1\ ,
$$
so that, as $\ell\to \infty$,
$$
\frac{\mathcal L_\ell(z) - \E[\mathcal L_\ell(z)]}{\sqrt{\Var(\mathcal L_\ell(z))}}= \frac{\text{proj}(\mathcal L_\ell(z)|C_2)}{\sqrt{\Var(\mathcal L_\ell(z))}}+o_\P(1)\ .
$$
This implies that the total length has the same asymptotic distribution as its second chaotic projection (Theorem \ref{main th S}).
\begin{theorem}
As $\ell\to +\infty$, if $z\ne 0$, we have
$$
\frac{\mathcal L_\ell(z) - \E[\mathcal L_\ell(z)]}{\sqrt{\Var(\mathcal L_\ell(z))}}\mathop{\to}^{\mathcal L} Z\ ,
$$
where $Z\sim \mathcal N(0,1)$.
\end{theorem}
The nodal case requires harder work: indeed it is not a simple task to derive an explicit expression for the fourth-order chaos, and this is still work in progress. We can anticipate that the fourth-order chaotic projection dominates the whole nodal length, and limit theorems will hopefully come soon (\cite{mistosfera}).
Furthermore we decided to investigate the nodal case on the standard $2$-torus $\mathbb T$: in \cite{misto} we prove a Non-Central Limit Theorem for
nodal lengths of arithmetic random waves (Theorem \ref{thm:lim dist sep}). The situation is analogous to the sphere: indeed the second chaotic component disappears and the fourth-order chaos dominates. The limit distribution is unexpectedly non-Gaussian: indeed it is a linear combination of
$H_2(Z_1)$ and $H_2(Z_2)$, where $H_2$ is the second Hermite polynomial and $Z_1, Z_2$ are i.i.d. standard Gaussian r.v.'s.
\underline{Euler-Poincar\'e characteristic}
The Euler-Poincar\'e characteristic of $z$-excursion set $\chi(A_\ell(z))$ for random spherical harmonics has been investigated in \cite{fluct}. In the quoted paper, a precise expression for the
asymptotic variance is proven; moreover, the Gaussian Kinematic
Formula \cite{adlertaylor} immediately gives the expected value:
$$
\E[\chi(A_\ell(z))]= \sqrt{\frac{2}{\pi}}\e^{-z^2/2} z \frac{\ell(\ell+1)}{2}
+ 2(1 - \Phi(z))\ ,
$$
and
$$
\lim_{\ell\to +\infty} \ell^{-3}\Var(\chi(A_\ell(z)))= \frac{1}{4}(z^3 - z)^2 \phi(z)^2\ .
$$
The same phenomenon occurs here: the nodal variance is of smaller order than in the case $z\ne 0$.
For a unified formula for asymptotic variances of excursion area, length of level curves and Euler-Poincar\'e characteristic see (1.11) in \cite{fluct}.
Quantitative CLTs for $\chi(A_\ell(z))$ will be treated in forthcoming papers by the same authors, V.~Cammarota, D.~Marinucci and I.~Wigman.
\begin{remark}\rm
A careful investigation of the previous results on asymptotic variances suggests a strict connection between Berry's cancellation phenomenon for Lipschitz-Killing curvatures and chaotic expansions. Indeed the unique case which shows a different scaling for the variance, as well as a vanishing second-order chaotic component, is the nodal one. Moreover, in the situations analyzed above, there is always a single dominating chaotic projection: the second one at non-zero levels and the fourth one in the nodal case.
We conjecture that this qualitative behaviour should be universal somehow: we mean that it should hold for all Lipschitz-Killing curvatures on every ``nice'' compact manifold.
\end{remark}
\section*{Part 3: Spin random fields}
\addcontentsline{toc}{section}{Part 3: Spin random fields}
\subsection*{Chapter 9}
As briefly stated above, in cosmology and astrophysics spherical random fields are used
to model CMB data \cite{dogiocam}. More precisely, the \emph{temperature} of this radiation is seen as a single
realization of an isotropic random field on $\mathbb S^2$, whereas to model its \emph{polarization}
we need to introduce random fields which do not take ordinary scalar values but have a more complex
geometrical structure, the so-called spin random fields \cite{dogiocam, malyarenko, mauspin, claudiospin}. Roughly speaking, they can be seen
as random structures that at each point of the sphere take as a value a random ``curve''.
This family of random models is indexed by
integers $s\in \Z$: for instance, a spin-$0$ random field is simply a spherical random field,
whereas a spin-$1$ random field takes as a value at the point $x\in \mathbb S^2$
a random tangent vector to the sphere at $x$. The polarization of CMB is seen as a realization of a spin-$2$ random field.
From a mathematical point of view, spin random fields are random sections of particular
complex-line bundles on the sphere $\mathbb S^2$, whose construction can be interpreted
in terms of group representation concepts. These
are special cases of the so-called homogeneous vector bundles, which we handle in
the second part of \cite{mauspin}. Briefly, given a compact topological group $G$ and an irreducible
representation $\tau$ of its
closed subgroup $K$, we can construct the $\tau$-homogeneous vector bundle as follows.
Let $H$ be the (finite-dimensional) Hilbert space of the representation $\tau$.
Consider the action of $K$ on $G\times H$ defined as
$$\displaylines{
k(g,h) := (gk, \tau(k^{-1}) h)\ ,
}$$
and denote by $G\times_\tau H$ the quotient space. The $\tau$-homogeneous vector
bundle is the triple $\xi_\tau:=(G\times_\tau H, \pi_\tau, G/K)$ where the bundle projection is
$$
\pi_\tau :G\times_\tau H\to G/K;\qquad \theta(g,h)\mapsto gK\ ,
$$
$\theta(g,h)$ denoting the orbit of the pair $(g,h)$ and $gK$ the lateral class of $g$.
\begin{definition}
A random field $T$ on the $\tau$-homogeneous vector bundle is a random section of $\xi_\tau$, i.e. a measurable map
$$
T: \Omega\times G/K \to G\times_\tau H; \quad (\omega, x)\mapsto T_x(\omega)\ ,
$$
where for each $\omega\in \Omega$, the sample path is a section of $\xi_\tau$, that is
$\pi_\tau \circ T (\omega) = \text{id}_{G/K}$.
\end{definition}
Of course this means that for each $x=gK\in G/K$, $T_x$ takes as a value a random element in $\pi^{-1}_\tau(gK)$.
In the quoted paper, we first introduce a new approach
to study random fields in $\tau$-homogeneous vector bundles: the ``pullback''
random field. The latter is a (complex-valued) random field $X$ on $G$ whose paths satisfy the following invariance property
\begin{equation}\label{invS}
X_{gk} = \tau(k^{-1}) X_g\ ,\qquad g\in G, k\in K\ .
\end{equation}
There is a one-to-one correspondence between (random) sections of $\xi_\tau$ and (random) functions on $G$ satisfying \paref{invS} (Proposition \ref{pullback-s-deterministic}), and we prove that $T$ is \emph{equivalent} to its pullback $X$ (see Proposition \ref{prop-pull1}).
Now our attention is devoted to the spherical case. Here $G=SO(3)$, $K\cong SO(2)$: the latter being abelian, its irreducible representations are all one-dimensional and coincide with its linear characters $\chi_s$, $s\in \Z$. Each $k\in SO(2)$ is a counterclockwise rotation of the complex plane $\C$ by an angle $\theta(k)$. The action of $k$ through $\chi_s$ can be seen as a clockwise rotation by the angle $s\cdot \theta(k)$.
The $\chi_s$-homogeneous vector bundle $\xi_s:= \xi_{\chi_s}$ on the sphere is called spin $s$ line bundle and a random field in $\xi_s$ is known as spin $s$ random field.
Our aim is to extend the representation formulas for isotropic Gaussian fields on homogeneous spaces of a compact group in Part 1 to the spin case. The pullback approach allows us to deal with isotropic Gaussian fields $X$ on $SO(3)$ whose sample paths satisfy the invariance property \paref{invS}, that is
$$
X_{gk} = \chi_s(k^{-1}) X_g\ ,\quad g\in SO(3), k\in SO(2)\ .
$$
Indeed we prove, with a construction analogous to the one developed in Chapter 2, that to each bi-$s$-associated function $f\in L^2(SO(3))$, i.e. such that
$$
f(k_1 g k_2) = \chi_s(k_1) f(g) \chi_s(k_2)\ ,\qquad g\in SO(3), k_1, k_2\in SO(2)\ ,
$$
an isotropic complex Gaussian spin $s$ random field $X^f$ is associated (Proposition \ref{propspin=}). Moreover, the converse is also true (Theorem \ref{teospin=}).
\begin{theorem}\label{introFin}
Let $T$ be an isotropic complex-Gaussian section of the spin $s$ line bundle and $X$ its pullback random field on $SO(3)$. Then there exists a bi-$s$-associated function $f\in L^2(SO(3))$ such that $X$ and $X^f$ have the same law.
\end{theorem}
Finally, we prove that our approach is equivalent to the existing ones:
by Malyarenko \cite{malyarenko} (Proposition \ref{noimal}) and Marinucci \& Geller \cite{marinuccigeller} (\S 9.4, especially Lemma \ref{lemma angolo}).
\
\noindent The anticipated ``circulant'' structure can be found in Theorem \ref{intro1} and Theorem \ref{introFin}: this connection is the starting point for the analysis of spin random fields. Indeed, open questions concern how to extend the results presented in Chapters 1--8 to the spin case. We leave this as a possible topic for future research.
\chapter*{Part 1\\ Gaussian fields}
\addcontentsline{toc}{chapter}{Part 1: Gaussian fields}
\chapter{Background: isotropic random fields }
In this first chapter we recall basic results concerning both
Fourier analysis on a topological compact group $G$
and the structure of isotropic random fields indexed by elements of $G$-homogeneous spaces.
The plan is as follows. In \S 1.1 we give the main definitions and fix some notation, whereas in \S 1.2 we investigate Fourier developments for square integrable functions on $G$-homogeneous spaces \cite{faraut}. In \S 1.3 we collect some useful properties of isotropic random fields from several works (e.g. \cite{mauspin, balditrapani, dogiocam}). Finally, the last section is devoted to the connection between isotropy and positive definite functions on compact groups, highlighting the main features we will need in the sequel.
Particular attention is devoted to the case of the $2$-dimensional unit sphere
$\mathbb S^2$, to which we specialize the results recalled in each section of this chapter.
\section{Preliminaries}
Throughout this work $G$ denotes a topological \emph{compact} group (e.g. \cite{faraut}). Let us recall the notion of homogeneous space.
\begin{definition}\label{hom}
A topological space $\X$ is said to be a $G$-homogeneous space if $G$ acts on $\X$
with a continuous and transitive action which we shall denote
$$
G\times \X \goto \X\ ;\qquad (g,x)\mapsto gx\ .
$$
\end{definition}
Remark that $G$ itself is a $G$-homogeneous space: indeed left multiplication
$(g,h)\mapsto gh$
is a continuous and transitive action.
$\B(\X)$ and
$\B(G)$ stand for the Borel $\sigma$-fields of $\X$ and $G$
respectively
and $dg$ for the Haar measure (see \cite{faraut} e.g.) of $G$.
The latter induces
on $\X$ a $G$-invariant measure which we denote $dx$ abusing notation, given by
$$dx:=\int_G \delta_{gx}\,dg\ ,$$
where $\delta_{gx}$ as usual stands for the Dirac mass at the singleton $\lbrace gx \rbrace$.
By $G$-invariance we mean that for every integrable function $f$ on $\X$ and every $g\in G$
$$\int_\X f(gx)\,dx=\int_\X f(x)\,dx\ .$$
We assume that both these
measures have total mass equal to 1, unless explicitly stated.
For instance, in the case $\X=\mathbb S^d$, the unit $d$-dimensional sphere, and $G=SO(d+1)$, the special orthogonal group of order $d+1$,
we have
$\int_{\mathbb{S}^{d}}\,dx=\mu _{d}$ where
\begin{equation}\label{ms}
\mu _{d}:=\frac{2\pi ^{\frac{d+1}{2}}}{\Gamma \left( \frac{d+1}{2}\right) }\ .
\end{equation}
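As a quick numerical sanity check of \paref{ms} (a throwaway sketch, not part of the formal exposition): for $d=1$ the formula must return the circumference $2\pi$ of the circle, and for $d=2$ the area $4\pi$ of the sphere.

```python
from math import gamma, pi

def sphere_measure(d):
    """Total measure mu_d of the unit d-sphere S^d embedded in R^{d+1}:
    mu_d = 2 * pi^{(d+1)/2} / Gamma((d+1)/2)."""
    return 2 * pi ** ((d + 1) / 2) / gamma((d + 1) / 2)

# d = 1: circle of circumference 2*pi; d = 2: sphere of area 4*pi
print(sphere_measure(1), sphere_measure(2))
```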
We set $L^2(G):=L^2(G,dg)$ and similarly $L^2(\X):=L^2(\X,dx)$;
the $L^2$-spaces are spaces of \emph{complex-valued} square integrable functions, unless otherwise
stated.
Let us fix a point $x_0\in\X$ once and for all and denote by $K$ the
isotropy group of $x_0$
$$K=\lbrace g\in G : gx_0=x_0 \rbrace\ ,$$
i.e. the (closed) subgroup of the elements $g\in G$
fixing $x_0$. Then it is immediate to check that $\X\cong G/K$, i.e. there exists a $G$-invariant
isomorphism $\phi: \X \goto G/K$. Indeed,
the map $\widetilde \phi: G\goto \X$ defined as $\widetilde \phi(g):=gx_0$ is
surjective and $K=\widetilde \phi^{-1}(x_0)$.
For instance, in the case
$G=SO(3)$ the group of all rotations about the origin of three-dimensional Euclidean space $\R^3$ (under the operation of composition), and $\X=\cS^2$ the two-dimensional unit sphere,
$x_0$ will be the north pole and the subgroup $K$ of matrices that leave $x_0$ fixed will be isomorphic to $SO(2)$, the special orthogonal group of order two.
The $G$-invariant isomorphism $\X\cong G/K$ suggests that it is possible to identify
functions defined on $\X$ with particular functions on the group $G$.
\begin{definition}
The pullback on $G$ of a function $f:\X\to\C$ is the function $\widetilde f$ defined as
\begin{equation}\label{pul}
\widetilde f(g) := f(gx_0)\ ,\quad g\in G\ .
\end{equation}
\end{definition}
Note that $\widetilde f$ is a right-$K$-invariant function on $G$, i.e.
constant on the left cosets of $K$. Indeed, for $g\in G, k\in K$ it is immediate that
$
\tilde f(gk)=f(gkx_0)=f(gx_0)=\tilde f(g)
$.
We have
\begin{equation}\label{int-rule}
\int_\X f(x)\, dx=\int_G \widetilde f(g)\, dg\ ,
\end{equation}
by the
integration rule of image measures,
whenever one of the above integrals makes sense.
\begin{remark}\label{id}
In particular, from \paref{pul} and \paref{int-rule}, the map
$f\mapsto \widetilde f$ is an isometry between $L^2(\X)$ and the (closed) subspace of
right-$K$-invariant
functions in $L^2(G)$.
\end{remark}
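As a concrete example (fixing, for this illustration only, the standard $zyz$ Euler-angle parametrization of $SO(3)$): a rotation $g(\varphi,\vartheta,\psi)$ maps the north pole to the point of colatitude $\vartheta$ and longitude $\varphi$, whatever the value of $\psi$, so the pullback of $f\in L^2(\mathbb S^2)$ reads

```latex
% Example: pullback on the sphere, zyz Euler angles g = g(\varphi,\vartheta,\psi)
\widetilde f\bigl(g(\varphi,\vartheta,\psi)\bigr)=f(\vartheta,\varphi)\ ,
\qquad 0\le\vartheta\le\pi,\ 0\le\varphi,\psi<2\pi\ ,
```

and the right-$K$-invariance of $\widetilde f$ is precisely its independence of the third angle $\psi$.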
\section{Fourier expansions}
In this section we briefly recall Fourier expansions on compact groups
(for further details see \cite{faraut}).
The \emph{left regular representation} $L$ of $G$ is given, for $g\in G$ and $f\in
L^2(G)$, by
\begin{equation}\label{rappsin}
L_g f(h)= f(g^{-1}h)\ ,\qquad h\in G\ .
\end{equation}
Let $\widehat G$ be the \emph{dual} of $G$, i.e., the set of
the equivalence classes of irreducible unitary representations of
$G$. The compactness of $G$ implies that $\widehat G$ is at most
countable.
In what follows, we use the same approach as in \cite{balditrapani, mauspin, faraut}.
Let us choose, for every $\sigma\in \widehat G$, a representative $(D^\sigma, H_\sigma)$
where $D^\sigma$ is a unitary operator acting
irreducibly on $H_\sigma$ (a complex finite dimensional Hilbert space).
As usual, $\langle \cdot, \cdot \rangle$ denotes the inner
product of $H_\sigma$ and $\dim\sigma:=\dim H_\sigma$ its dimension.
Recall that the \emph{character} of $\sigma$ is the (continuous) function on $G$ defined as
$$
\chi_\sigma(g) := \tr D^\sigma(g)\ ,\qquad g\in G\ ,
$$
where $\tr D^\sigma(g)$ denotes the trace of $D^\sigma(g)$.
Given $f\in
L^2(G)$, for every $\sigma\in \widehat G$ we define the Fourier operator coefficient
\begin{equation}\label{Fourier coefficient}
\widehat f(\sigma) :=\sqrt{\dim \sigma}\int_G f(g)
D^\sigma(g^{-1})\,dg
\end{equation}
which is a linear endomorphism of $H_\sigma$.
Let us denote for $g\in G$
\begin{equation}\label{component}
f^\sigma(g):=\sqrt{\dim \sigma}\, \tr(\widehat f(\sigma) D^\sigma(g))\ ;
\end{equation}
by standard arguments in Representation Theory \cite{sugiura}, $f^\sigma$ is a continuous function on $G$.
Let us denote $*$ the convolution operator on $G$,
defined for $f_1, f_2\in L^2(G)$ as
$$
f_1*f_2 (g) := \int_G f_1(h) f_2(h^{-1}g)\,dh\ ,
$$
so that
\begin{equation}\label{conv1}
\widehat{f_1\ast f_2}(\sigma)=\frac 1{\sqrt{\dim \sigma}}\,\widehat
f_2(\sigma) \widehat f_1(\sigma)\ .
\end{equation}
Actually by a Fubini argument and the $G$-invariance property of Haar measure
$$\displaylines{
\widehat{f_1\ast f_2}(\sigma) = \sqrt{\dim \sigma}\int_G f_1\ast f_2(g)
D^\sigma(g^{-1})\,dg = \cr
= \sqrt{\dim \sigma}\int_G \left( \int_G f_1(h) f_2(h^{-1}g)\,dh \right)
D^\sigma(g^{-1})\,dg = \cr
= \sqrt{\dim \sigma}\int_G f_1(h)\left( \int_G f_2(g)D^\sigma(g^{-1})\,dg \right) D^\sigma(h^{-1})
\,dh= \frac 1{\sqrt{\dim \sigma}}\,\widehat
f_2(\sigma) \widehat f_1(\sigma)\ .
}$$
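When $G$ is abelian every irreducible representation is one-dimensional, and \paref{conv1} reduces, up to normalization, to the familiar convolution theorem. A minimal numerical sketch on the cyclic group $\Z_n$ (an illustration only: numpy's unnormalized DFT differs from the Haar-normalized conventions of this section):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
f1, f2 = rng.standard_normal(n), rng.standard_normal(n)

# Convolution over the cyclic group Z_n (Haar measure = counting measure here)
conv = np.array([sum(f1[h] * f2[(g - h) % n] for h in range(n))
                 for g in range(n)])

# For abelian G every irreducible representation is a character (dim sigma = 1),
# so the Fourier side of the convolution is a plain product of coefficients.
assert np.allclose(np.fft.fft(conv), np.fft.fft(f1) * np.fft.fft(f2))
print("convolution theorem verified on Z_%d" % n)
```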
Moreover note that
$$
f^\sigma = \text{dim}\,\sigma\cdot f\ast \chi_\sigma \ .
$$
The
Peter-Weyl Theorem and Schur's orthogonality relations
(see \cite{dogiocam} or \cite{sugiura} e.g.) imply the following, known as Plancherel's Theorem.
\begin{theorem}\label{PW}
Let $f\in L^2(G)$. Then
\begin{equation}\label{PW for compact groups}
f(g) = \sum_{\sigma\in \widehat G} f^\sigma(g)\ ,
\end{equation}
the convergence of the series taking place in $L^2(G)$;
$$
\| f\|_{L^2(G)}^2 = \sum_{\sigma \in \widehat G} \| \widehat f(\sigma) \|_{\text{H.S.}}^2\ ,
$$
where $\| \cdot \|_{\text{H.S.}}$ denotes the Hilbert-Schmidt norm \cite{faraut}. If $f_1, f_2\in L^2(G)$, then
$$
\langle f_1, f_2 \rangle_{L^2(G)} = \sum_{\sigma\in \widehat G} \tr \widehat f_1(\sigma) \widehat
f_2(\sigma)^* \ .
$$
\end{theorem}
Recall that a function $f$ is said to be a \emph{central} (or class) function if for every $g\in G$
$$
f(hgh^{-1}) = f(g), \quad h\in G\ .
$$
\begin{prop}\label{classExp}
The set of characters $\lbrace \chi_\sigma : \sigma\in \widehat G \rbrace$
is an orthonormal basis of the space of square integrable
central functions on $G$.
\end{prop}
Now fix any orthonormal basis $v_1, v_2, \dots, v_{\dim\sigma }$ of
$H_\sigma$ and for $i,j=1,\dots , \text{dim $\sigma$}$ denote
$D^\sigma_{ij}(g):= \langle D^\sigma(g) v_j, v_i\rangle$
the $(i,j)$-th coefficient of
the matrix representation for $D^\sigma(g)$ with respect to this
basis. The matrix representation for $\widehat f(\sigma)$ has entries
$$
\dlines{\widehat f(\sigma)_{i,j}=\langle \widehat f(\sigma) v_j,
v_i\rangle = \sqrt{\dim \sigma} \int_G f(g) \langle D^\sigma(g^{-1})
v_j, v_i\rangle \,dg = \cr = \sqrt{\dim \sigma} \int_G f(g)
D^\sigma_{i,j}(g^{-1})\,dg = \sqrt{\dim \sigma} \int_G f(g)
\overline{D^\sigma_{j,i}(g)}\,dg\ ,}
$$
and Theorem \ref{PW} becomes
\begin{equation}\label{PW1}
f(g)=\sum_{\sigma\in \widehat G} \sqrt{\dim \sigma} \sum_{i,j=1}^{\dim\sigma
} \widehat f(\sigma)_{j,i} D^\sigma_{i,j}(g)\ ,
\end{equation}
the above series still converging in $L^2(G)$. The Peter-Weyl Theorem also states that the set of functions
$\lbrace \sqrt{\text{dim}\,\sigma}D^\sigma_{i,j}\ , \sigma \in \hat G, i,j=1,\dots, \text{dim}\,\sigma \rbrace$ is a complete orthonormal basis for
$L^2(G)$. Therefore \paref{PW1} is just the corresponding Fourier development and $\widehat f(\sigma)_{j,i}$ is
the coefficient corresponding to the element $\sqrt{\text{dim}\,\sigma} D^\sigma_{i,j}(g)$ of this basis.
Let $L^2_\sigma(G)\subset L^2(G)$ be the $\sigma$-isotypical subspace,
i.e. the subspace generated by the functions $D^\sigma_{i,j}, i,j=1,\dots , \text{dim}\,\sigma$; it is a $G$-module that can
be decomposed into the
orthogonal direct sum of $\dim\sigma $ irreducible and
equivalent $G$-modules
$(L^2_{\sigma,j}(G))_{j=1,\dots,\dim\sigma }$ where each
$L^2_{\sigma,j}(G)$ is spanned by the functions
$D^\sigma_{i,j}$ for $i=1,\dots, \dim\sigma $, loosely
speaking by the $j$-th column of the matrix $D^\sigma$.
Note that $f^\sigma$ is the component (i.e. the orthogonal projection) of $f$ in $L^2_\sigma(G)$.
Equivalently the Peter-Weyl Theorem can be stated as
\begin{equation}\label{PW2}
L^2(G)= \bigoplus_{\sigma \in \widehat G} \,\, \mathop{\oplus}\limits_{j=1}^{\dim\sigma }
L^2_{\sigma,j}(G)\ ,
\end{equation}
the direct sums being orthogonal.
Let us now deduce the Fourier expansion of functions $f\in L^2(\X)$. It can be easily obtained
from Theorem \ref{PW} and Remark \ref{id}: indeed the pullbacks $\widetilde f$ of such functions
belong to $L^2(G)$ and form a $G$-invariant closed subspace of $L^2(G)$.
We can therefore associate to $f\in L^2(\X)$ the family of operators $\bigl(
\widehat{\widetilde f}(\sigma) \bigr)_{\sigma\in \widehat G}$.
Let $H_{\sigma,0}$ denote the subspace of $H_\sigma$ (possibly
reduced to $\{0\}$) formed by the vectors that remain fixed under the action of
$K$, i.e. for every $k\in K, v\in
H_{\sigma,0}$, $D^\sigma(k)v=v$.
Right-$K$-invariance implies that the image of $\widehat
{\widetilde f}(\sigma)$ is contained in $H_{\sigma,0}$:
\begin{equation}\label{proiezione triviale}
\begin{array}{c}
\displaystyle\widehat {\widetilde f}(\sigma) =\sqrt{\dim \sigma}
\int_G \widetilde f(g) D^\sigma(g^{-1})\,dg=\\
=\sqrt{\dim \sigma}
\int_G \widetilde f(gk) D^\sigma(g^{-1})\,dg
\displaystyle= \sqrt{\dim \sigma} \int_G\widetilde f(h)
D^\sigma(kh^{-1} )\,dh =\\
= D^\sigma(k)\sqrt{\dim \sigma}
\int_G\widetilde f(h) D^\sigma(h^{-1} )\,dh=D^\sigma(k)\widehat
{\widetilde f}(\sigma)\ .
\end{array}
\end{equation}
Let us denote by $P_{\sigma,0}$ the projection of $H_\sigma$ onto
$H_{\sigma,0}$, so that $\widehat {\widetilde
f}(\sigma)=P_{\sigma,0}\widehat {\widetilde f}(\sigma)$,
and $\widehat G_0$ the set of irreducible unitary
representations of $G$ whose restriction to $K$ contains the trivial
representation.
If $\sigma\in \widehat G_0$ let us consider a basis of $H_\sigma$
such that the elements $\lbrace v_{p+1}, \dots , v_{\dim\sigma} \rbrace$,
for some integer $p=p(\sigma)\ge 0$, span $H_{\sigma,0}$. Then the first $p$ rows
of the representative matrix of $\widehat {\widetilde f}(\sigma)$ in this
basis contain only zeros. Actually, by
(\ref{proiezione triviale}) and $P_{\sigma,0}$ being self-adjoint,
for $i\le p$
$$
\widehat {\widetilde f}_{i,j}(\sigma) = \langle \widehat {\widetilde
f}(\sigma) v_j, v_i \rangle = \langle P_{\sigma,0} \widehat
{\widetilde f}(\sigma) v_j, v_i \rangle=\langle \widehat
{\widetilde f}(\sigma) v_j, P_{\sigma,0}v_i \rangle= 0\ .
$$
Identifying $L^2(\X)$ as the closed subspace of right-$K$-invariant functions in $L^2(G)$, the Peter-Weyl Theorem entails that
$$
L^2(\X) = \bigoplus_{\sigma \in \widehat G_0} \oplus_{j=p+1}^{\text{dim}\,\sigma} L^2_{\sigma, j}(G)\ ,
$$
the direct sums being orthogonal.
Now we consider an important class of functions we shall need in the sequel.
\begin{definition}
A function $f:G \goto \C$ is said to be \emph{bi-$K$-invariant} if for every $g\in G, k_1, k_2\in K$
\begin{equation}\label{bicappa}
f(k_1gk_2)=f(g)\ .
\end{equation}
\end{definition}
If moreover $f\in L^2(G)$, the equality in \paref{bicappa} entails that, for every
$k_1,k_2\in K$, $\sigma\in\widehat G$,
$$
\widehat f(\sigma)=D^\sigma(k_1)\widehat f(\sigma)D^\sigma(k_2)
$$
and therefore a function $f\in L^2(G)$ is {bi-$K$-invariant} if and only if
for every $\sigma\in \hat G$
\begin{equation}\label{fourier-bicappa}
\widehat f(\sigma) = P_{\sigma,0} \widehat f(\sigma) P_{\sigma,0}\ .
\end{equation}
Note that we can of course identify bi-$K$-invariant functions in $L^2(G)$ with
left-$K$-invariant functions in $L^2(\X)$.
\subsection{Spherical harmonics}
Now we focus on the case of $\X=\mathbb S^2$ under the action of $G=SO(3)$, first
specializing previous results and then
recalling basic facts we will need in the rest of this work (see \cite{faraut}, \cite{dogiocam} e.g.
for further details).
The isotropy
group $K\cong SO(2)$ of the north pole is abelian, therefore its unitary irreducible
representations are unitarily equivalent to its linear characters which we shall denote
$\chi_s, s\in \Z$, throughout the whole work.
A complete set of unitary irreducible matrix representations of $SO(3)$ is given by
the so-called Wigner's $D$ matrices
$\lbrace D^\ell, \ell \ge 0 \rbrace$, where each $D^\ell(g)$ has dimension
$(2\ell+1)\times (2\ell+1)$ and acts on a representative space that we shall denote $H_\ell$.
The restriction to $K$ of each $D^\ell$ being unitarily equivalent to the direct sum of the
representations $\chi_m$, $m=-\ell, \dots, \ell$, we can suppose
$v_{-\ell}, v_{-\ell +1}, \dots, v_\ell$ to be an orthonormal basis for $H_\ell$ such that
for every $m : |m| \le \ell$
\begin{equation}\label{restrizione}
D^\ell(k)v_m = \chi_m(k)v_m\ , \qquad k\in K\ .
\end{equation}
Let $D^\ell_{m,n}=\langle D^\ell v_n, v_m \rangle$ be the $(m,n)$-th entry of $D^\ell$
with respect to the basis fixed above.
It follows from (\ref{restrizione}) that for every $g\in SO(3), k_1, k_2\in K$,
\begin{equation}\label{prop fnz di Wigner}
D^\ell_{m,n}(k_1gk_2) = \chi_m(k_1)D^\ell_{m,n}(g)\chi_n(k_2)\ .
\end{equation}
The functions $D^\ell_{m,n}:SO(3) \goto \C$, $ \ell\ge 0, m,n=-\ell,\dots,\ell$
are usually called Wigner's $D$ functions.
Given $f\in L^2(SO(3))$, its $\ell$-th Fourier coefficient (\ref{Fourier coefficient}) is
\begin{equation}\label{coefficiente ellesimo}
\widehat f(\ell) := \sqrt{2\ell+1}\,\int_{SO(3)} f(g)
D^\ell(g^{-1})\,dg
\end{equation}
and its Fourier development (\ref{PW1}) becomes
\begin{equation}\label{PW SO(3)}
f(g)=\sum_{\ell \ge 0} \sqrt{2\ell + 1} \sum_{m,n=-\ell}^{\ell
} \widehat f(\ell)_{n,m} D^\ell_{m,n}(g)\ .
\end{equation}
If $\widetilde f$ is the pullback of $f\in L^2(\mathbb S^2)$, (\ref{restrizione}) entails that
for every $\ell\ge 0$
$$
\widehat{\widetilde f}(\ell)_{n,m} = 0 \quad \text{unless}\quad n= 0\ .
$$
Moreover if $f$ is left-$K$-invariant, then
$$
\widehat{\widetilde f}(\ell)_{n,m} = 0 \quad \text{unless}\quad n=m= 0\ .
$$
In words, an orthogonal basis for the space of the square integrable right-$K$-invariant
functions on $SO(3)$ is given by the central columns of the matrices $D^\ell$, $\ell \ge 0$. Furthermore
the subspace of the bi-$K$-invariant functions is spanned by the central functions
$D^\ell_{0,0}(\cdot),\, \ell \ge 0$, which are \emph{real-valued}.
The important role of the other columns of Wigner's $D$ matrices will appear further in this work.
\begin{definition}
For every $\ell \ge 0, m=-\ell \dots, \ell$, let us define the spherical harmonic $Y_{\ell,m}$ as
\begin{equation}\label{armoniche sferiche1}
Y_{\ell,m}(x) := \sqrt{\frac{2\ell+1}{4\pi}}\, \overline{D^\ell_{m,0}(g_x)}\ , \qquad x\in \cS^2\ ,
\end{equation}
where $g_x$ is any rotation mapping the north pole of the sphere to $x$.
\end{definition}
Remark that this is well defined thanks to the
invariance of each $D^\ell_{m,0}(\cdot)$ under the right action of $K$.
The functions in (\ref{armoniche sferiche1}) form an orthonormal basis of the space $L^2(\mathbb S^2)$
considering the sphere with total mass equal to $4\pi$.
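The orthonormality just stated can be checked numerically (a throwaway sketch, outside the formal development; the helper \texttt{Y} below uses the classical normalization built from associated Legendre functions, and the quadrature is exact for band-limited integrands up to rounding):

```python
from math import factorial
import numpy as np
from scipy.special import lpmv  # associated Legendre function P_l^m(x)

def Y(l, m, theta, phi):
    """Spherical harmonic Y_{l,m}(theta=colatitude, phi=longitude), normalized
    so the family is orthonormal when the sphere carries total mass 4*pi."""
    am = abs(m)
    norm = np.sqrt((2 * l + 1) / (4 * np.pi)
                   * factorial(l - am) / factorial(l + am))
    val = norm * lpmv(am, l, np.cos(theta)) * np.exp(1j * am * phi)
    return val if m >= 0 else (-1) ** am * np.conj(val)

# Gauss-Legendre nodes in cos(theta), periodic trapezoid rule in the azimuth
nodes, weights = np.polynomial.legendre.leggauss(40)
colat = np.arccos(nodes)
K = 64
azim = 2 * np.pi * np.arange(K) / K

def inner(l1, m1, l2, m2):
    """<Y_{l1,m1}, Y_{l2,m2}> in L^2 of the sphere with total mass 4*pi."""
    acc = 0.0 + 0.0j
    for x, w in zip(colat, weights):
        acc += w * (Y(l1, m1, x, azim)
                    * np.conj(Y(l2, m2, x, azim))).sum() * (2 * np.pi / K)
    return acc

print(abs(inner(2, 1, 2, 1)), abs(inner(2, 1, 3, 1)))  # ~ 1 and ~ 0
```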
Often, e.g. in the second part ``High-energy eigenfunctions'',
we work with \emph{real} spherical harmonics, i.e. the orthonormal set
of functions given by
\begin{equation}\label{realSH}
\frac{1}{\sqrt 2}\left ( Y_{\ell,m} + \overline{Y_{\ell,m}} \right )\ , \qquad \frac{1}{i\sqrt 2}\left ( Y_{\ell,m} - \overline{Y_{\ell,m}} \right )\
\end{equation}
which abusing notation we will again denote by $Y_{\ell,m}$ for $\ell\ge 0$, $m=1, \dots, 2\ell +1$.
Every $f\in L^2(\mathbb S^2)$ admits the Fourier development of the form
\begin{equation}
f(x) = \sum_{\ell=0}^{+\infty} \sum_{m=-\ell}^{\ell} a_{\ell,m} Y_{\ell,m}(x)\ ,
\end{equation}
where the above series converges in $L^2(\mathbb S^2)$ and
$$
a_{\ell,m} = \int_{\mathbb S^2} f(x) \overline{Y_{\ell,m}(x)}\,dx\ .
$$
Moreover the Fourier expansion of a left-$K$-invariant function $f\in L^2(\cS^2)$ is
\begin{equation}\label{PW invariant sphere}
f=\sum_{\ell=0}^{+\infty} \beta_\ell Y_{\ell,0}\ ,
\end{equation}
where $\beta_\ell := \int_{\mathbb S^2} f(x) Y_{\ell,0}(x)\,dx$.
The functions $Y_{\ell,0},\,\ell\ge 0$ are called \emph{central spherical harmonics}.
\
We stress that there exists an alternative characterization of spherical harmonics,
as \emph{eigenfunctions} of the spherical Laplacian $\Delta_{\mathbb S^2}$ (see \cite{dogiocam}
e.g.). We shall make extensive use of this formulation in Part 2: High-energy Gaussian eigenfunctions.
Recall that the spherical Laplacian is the Laplace-Beltrami operator on $\mathbb S^2$ with its
canonical metric of constant sectional curvature $1$, moreover
its (totally discrete) spectrum
is given by the set of eigenvalues $\lbrace -\ell(\ell+1)=: - E_\ell$, $\ell\in \mathbb N \rbrace$.
It can be proved that for $\ell \ge 0, m=-\ell, \dots, \ell $
$$
\Delta_{\mathbb S^2} Y_{\ell,m} + E_{\ell} Y_{\ell,m} = 0\ ,
$$
and the subset of spherical harmonics
$\lbrace Y_{\ell,m}, m=-\ell, \dots, \ell \rbrace$ is an orthonormal basis for the eigenspace $\mathcal H_\ell$ corresponding to the $\ell$-th
eigenvalue.
The Spectral Theorem for self-adjoint compact operators then entails that $\mathcal H_\ell$ and $\mathcal H_{\ell'}$ are orthogonal whenever $\ell \ne \ell'$ and moreover
$$
L^2(\mathbb S^2) = \bigoplus_{\ell\ge 0} \mathcal H_\ell\ ,
$$
which coincides with the Peter-Weyl decomposition for the sphere.
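The eigenvalue relation above can be verified symbolically in a small sanity check (a sketch relying on sympy's \texttt{Ynm}; the normalization convention is immaterial for the eigenvalue equation):

```python
import sympy as sp
from sympy import Ynm, sin, diff

theta, phi = sp.symbols('theta phi')
l, m = 2, 1
Yexpr = Ynm(l, m, theta, phi).expand(func=True)  # explicit Y_{2,1}

# Laplace-Beltrami operator on S^2: theta = colatitude, phi = longitude
lap = (diff(sin(theta) * diff(Yexpr, theta), theta) / sin(theta)
       + diff(Yexpr, phi, 2) / sin(theta) ** 2)

# residual of  Delta Y_{l,m} + l(l+1) Y_{l,m} = 0
residual = sp.simplify(lap + l * (l + 1) * Yexpr)

# numeric spot check that the residual vanishes
val = complex(residual.evalf(subs={theta: 0.7, phi: 1.3}))
assert abs(val) < 1e-10
print(residual)
```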
\section{Isotropic random fields}
Let us recall the main definitions and facts about isotropic random fields on homogeneous spaces
(see \cite{dogiocam, mauspin, balditrapani} e.g.). First fix some probability space $(\Omega, \F, \P)$ and
denote $L^2(\P):=L^2(\Omega, \P)$ the space of finite-variance random variables.
\begin{definition}\label{campo aleatorio su uno spazio omogeneo}
A (complex-valued) random field $T=(T_x)_{x \in \X}$ on the $G$-ho\-mo\-ge\-neous space $\X$ is
a collection of (complex-valued) random variables indexed by elements of $\X$
such that the map
\begin{align}
\nonumber
T:\Omega \times \X \goto \C\ ;\qquad
(\omega, x) \mapsto T_x(\omega)
\end{align}
is $\F \otimes \B(\X)$-measurable.
\end{definition}
Note that often we shall write $T(x)$ instead of $T_x$.
\begin{definition}
We say that the random field $T$ on the $G$-homogeneous space $\X$
is second order if $T_x\in L^2(\P)$ for every $x\in \X$.
\end{definition}
\begin{definition}\label{contRF}
We say that the random field $T$ on the $G$-homogeneous space $\X$
is a.s. continuous if the functions
$\X\ni x\mapsto T_x$ are a.s. continuous.
\end{definition}
In this work the minimal regularity assumption for the paths of a random field $T$ is the a.s. square integrability. From now on, $\E$ shall stand for the expectation under the probability measure $\P$.
\begin{definition}
We say that the random field $T$ on the $G$-homogeneous space $\X$ is
\begin{enumerate}
\item a.s. square integrable if
\begin{equation}
\int_\X | T_x |^2\,dx < +\infty\;\; a.s.\label{integrabile q.c.}
\end{equation}
\item mean square integrable if
\begin{equation}
\E \left[\int_\X | T_x |^2\,dx \right] < +\infty\ .\label{integrabile in media quadratica}
\end{equation}
\end{enumerate}
\end{definition}
\noindent Note that the mean square integrability implies the $a.s.$ square integrability.
\begin{remark}\rm\label{variabile aleatoria che prende valori in L2X}
If $T$ is a.s. square integrable,
it can be regarded as a random variable taking a.s. its values in $L^2(\X)$,
i.e. $T(\cdot)=(x\goto T_x(\cdot))$.
\end{remark}
We define now the notion of isotropy.
Let $T$ be a.s. square integrable. For every $f\in L^2(\X)$, we can consider the integral
$$
T(f):=\int_\X T_x \overline{f(x)}\,dx
$$
which defines a r.v. on $(\Omega, \F, \P)$.
For every $g\in G$, let $T^g$ be the \emph{rotated field} defined as
$$
T^g_x:=T_{gx},\qquad x\in \X\ .
$$
Loosely speaking, $T$ is isotropic if its law and the law of the rotated field $T^g$
coincide for every $g\in G$.
We give the following formal definition of isotropy (see \cite{maupec, mauspin, balditrapani}).
\begin{definition}\label{invarian}
An a.s. square integrable random field $T$ on the homogeneous space
$\X$ is said to be (strict sense) $G$-invariant or
isotropic if
the joint laws of
\begin{equation}\label{l2-invar}
(T(f_1),\dots,T(f_m))\quad\mbox{and}\quad (T(L_gf_1),\dots,T(L_gf_m))=
(T^g(f_1),\dots,T^g(f_m))
\end{equation}
coincide for every $g\in G$ and $f_1,f_2,\dots,f_m\in L^2(\X)$.
\end{definition}
This definition is somewhat different from the one usually considered in the literature, where
the requirement is the equality of the finite dimensional distributions, i.e. that the random vectors
\begin{equation}\label{invar-continui1}
(T_{x_1},\dots,T_{x_m})\qquad\mbox{and}\qquad(T_{gx_1},\dots,T_{gx_m})
\end{equation}
have the same law for every choice of $g\in G$ and
$x_1,\dots,x_m\in \X$. Remark that (\ref{l2-invar})
implies (\ref{invar-continui1}) (see \cite{maupec}) and
that, conversely, by standard approximation arguments
(\ref{invar-continui1}) implies (\ref{l2-invar}) if $T$ is
continuous.
We see now how the Peter-Weyl decomposition naturally applies to
random fields.
It is worth remarking that every a.s. square integrable random field $T$ on $\X$
uniquely defines an a.s. square integrable random field on $G$ (whose paths
are the pullback functions of the paths $x\mapsto T_x$).
Therefore, without loss of generality, we can investigate the case $\X=G$.
To every a.s. square integrable random field $T$ on $G$ we
can associate the set of operator-valued r.v.'s $(\widehat
{T}(\sigma))_{\sigma\in \widehat G}$ defined ``pathwise'' as
\begin{equation}
\widehat { T}(\sigma) = \sqrt{\dim\sigma}\int_G T_g D^\sigma(g^{-1})\,dg\ .
\end{equation}
From (\ref{PW for compact groups}) therefore
\begin{equation}\label{l2as-dev}
T_g = \sum_{\sigma\in \widehat G} T^\sigma_g
\end{equation}
where the convergence takes place in $L^2(G)$ a.s.
Remark that
\begin{equation}
T^\sigma_g:=\sqrt{\dim\sigma} \tr(\widehat {
T}(\sigma) D^\sigma(g))= \sqrt{\dim \sigma} \sum_{i,j=1}^{\dim\sigma
} \widehat T(\sigma)_{j,i} D^\sigma_{i,j}(g)\ ,
\end{equation}
is the projection of $T$ on $L^2_\sigma(G)$, and that $T^\sigma$ is continuous.
For the proof of the following see \cite{balditrapani}.
\begin{prop}
Let $T$ be an a.s. square integrable random field on $G$. Then $T$ is isotropic if and only
if, for every $g\in G$, the two families of r.v.'s
$$
(\widehat T(\sigma))_{\sigma\in \widehat G} \quad \text{and} \quad
(\widehat T(\sigma)D^\sigma(g))_{\sigma\in \widehat G}
$$
are equi-distributed.
\end{prop}
If the random field $T$ is second order and isotropic (so that
\paref{integrabile in media quadratica} holds by a standard Fubini argument),
it is possible to say more about the convergence of the series
in (\ref{l2as-dev}).
Indeed we have the following result, that we reproduce here from \cite{dogiocam}.
\begin{theorem}\label{Stochastic-PW}
Let $T$ be a second order and
isotropic random field on the compact group $G$.
Then
\begin{equation}
T=\sum_{\sigma \in \hat G} T^\sigma\ .
\end{equation}
The convergence of the infinite series is both in the sense of
$L^2(\Omega\times G, \P \otimes dg)$ and $L^2(\P)$ for every fixed $g$, that is, for any enumeration
$\lbrace \sigma_k: k\ge 1 \rbrace$ of $\hat G$, we have both
\begin{align}
&\lim_{N\to +\infty} \E \left [\int_G \left | T_g - \sum_{k=1}^{N} T^{\sigma_k}_g \right |^2\,dg \right ]=0\ ,\label{align1}\\
&\lim_{N\to +\infty} \E \left [\left | T_g - \sum_{k=1}^{N} T^{\sigma_k}_g \right |^2\right ] =0\ .\label{2}
\end{align}
\end{theorem}
\noindent The previous theorem has the following interesting consequence (for a proof see \cite{maupec}).
\begin{prop}\label{Mean square continuity of invariant}
Every second order and isotropic random field $T$ on the homogeneous space $\X$ of a compact group is
mean square continuous, i.e.
\begin{equation}
\lim_{y\to x} \E[|T_y - T_x|^2 ]=0\ .
\end{equation}
\end{prop}
It is worth remarking some features of Fourier coefficients of a second order and isotropic random field $T$ (see \cite{balditrapani}).
\begin{theorem}\label{th coeff}
If $\sigma\in \widehat G$ is not the trivial representation, then
$$
\E[\widehat T(\sigma)]=0\ ;
$$
moreover, for $\sigma_1, \sigma_2\in \widehat G$, we have
\begin{enumerate}
\item if $\sigma_1, \sigma_2$ are not equivalent, the r.v.'s $\widehat T(\sigma_1)_{i,j}$ and $\widehat T(\sigma_2)_{k,l}$ are orthogonal for
$i,j=1, \dots, \text{dim}\,\sigma_1$ and $k,l=1, \dots , \text{dim}\,\sigma_2$;
\item if $\sigma_1=\sigma_2=\sigma$, and $\Gamma(\sigma)=\E[\widehat T(\sigma) \widehat T(\sigma)^*]$, then $\Cov(\widehat T(\sigma)_{i,j}, \widehat T(\sigma)_{k,l})=\delta_j^l \Gamma(\sigma)_{i,k}$. In particular
coefficients belonging to different columns are orthogonal and the covariance between entries in
different rows of a same column does not depend on the column.
\end{enumerate}
\end{theorem}
Theorem \ref{th coeff} states that the entries of $\widehat T(\sigma)$ might not be pairwise orthogonal; this happens when
the matrix $\Gamma(\sigma)$ is not diagonal. This phenomenon has actually already been remarked by other authors
(see \cite{malyarenkobook} e.g.).
Of course there are situations in
which orthogonality is still guaranteed: when the dimension of $H_{\sigma,0}$ is at most one (i.e. in every
irreducible $G$-module the dimension of the space $H_{\sigma,0}$ of the $K$-invariant vectors is at most one), as
is the case for $G = SO(m+1)$, the special orthogonal group of order $m+1$, $K = SO(m)$ and $G/K \cong \mathbb S^m$ the $m$-dimensional unit sphere. In this case the matrix $\widehat T(\sigma)$ has
just one row that does not vanish and $\Gamma(\sigma)$ vanishes except for a single diagonal entry.
\
Let us now focus on Gaussian fields, which will receive the greatest attention in this work.
First it is useful to recall the following.
\begin{definition}\label{Gaussian}
Let $Z=Z_1+iZ_2$ be a complex random variable (we mean that $Z_1,Z_2$ are real random variables).
We say that
\begin{itemize}
\item $Z$ is a \emph{complex-valued} Gaussian random variable if $(Z_1,Z_2)$ are jointly Gaussian;
\item $Z$ is a \emph{complex} Gaussian random variable if $Z_1,Z_2$ are independent Gaussian random variables
with the same variance.
\end{itemize}
Furthermore we say that the random vector
$(Y_1,Y_2,\dots,Y_m)$ is a complex (resp. com\-plex-valued) Gaussian vector if
$$\sum_i a_i Y_i$$
is a complex (resp. complex-valued) Gaussian random variable for every choice of $a_1,a_2,\dots,a_m\in \mathbb C$.
\end{definition}
\noindent From this definition it follows that if $T$ is complex-valued Gaussian,
meaning that the r.v.
$T(f)$
is complex-valued Gaussian for every $f\in L^2(\X)$, then its Fourier coefficients
are complex-valued Gaussian r.v.'s. Furthermore, if each representation of $G$ occurs at most once
in the Peter-Weyl decomposition of $L^2(\X)$ and $T$ is Gaussian and isotropic, then by Theorem \ref{th coeff} these Fourier
coefficients are pairwise independent. This is the case, for instance, for $G=SO(m+1)$ and $\X=\mathbb S^m$.
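The distinction in Definition \ref{Gaussian} can be made concrete with a minimal Monte Carlo sketch (an illustration only): the pseudo-variance $\E[Z^2]=\Var(Z_1)-\Var(Z_2)+2i\Cov(Z_1,Z_2)$ vanishes for a complex Gaussian variable but need not vanish for a general complex-valued Gaussian one.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 200_000

# complex Gaussian: independent real and imaginary parts, equal variance
Z = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# complex-valued Gaussian, but NOT complex Gaussian: unequal variances
W = 2.0 * rng.standard_normal(N) + 1j * rng.standard_normal(N)

print(np.mean(Z ** 2))  # ~ 0: pseudo-variance of a complex Gaussian vanishes
print(np.mean(W ** 2))  # ~ 3: here Var(W1) - Var(W2) = 4 - 1 = 3
```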
In \cite{BMV} a characterization
of isotropic real Gaussian fields on homogeneous spaces of compact groups is given: under some mild
additional assumption also the converse is true, namely that if a random field is isotropic
and its Fourier coefficients are independent, then it is necessarily Gaussian.
For more discussions on this topic see also \cite{balditrapani}.
\subsection*{Isotropic spherical random fields}
Let us consider a random field $T=(T_x)_{x\in \cS^2}$ on $\cS^2$
according to Definition (\ref{campo aleatorio su uno spazio omogeneo}).
We assume that $T$ is $a.s.$ square integrable.
From previous sections,
$T$ admits the following stochastic Fourier expansion
\begin{equation}\label{espansione stocastica sulla sfera}
T_x = \sum_{\ell \ge 0} \sum_{m=-\ell}^{\ell} a_{\ell,m} Y_{\ell,m}(x)
\end{equation}
where $a_{\ell,m}=\int_{\cS^2} T_x \overline{Y_{\ell,m}(x)}\,dx$ are the Fourier coefficients w.r.t. the basis of spherical harmonics and the convergence is
in the sense of $L^2( \cS^2)$ $a.s.$ \\
If the random field $T$ is in addition second order and isotropic, Theorem (\ref{Stochastic-PW}) states that the convergence
of the series in (\ref{espansione stocastica sulla sfera}) holds both
in the sense of $L^2(\Omega\times \cS^2, \P\otimes dx)$ and $L^2(\P)$ for every fixed $x$, and
furthermore, Corollary \ref{Mean square continuity of invariant} states that $T$ is mean square continuous.
\noindent Moreover from Theorem \ref{th coeff}
we obtain easily
\begin{equation}\label{media zero}
\E(a_{\ell,m}) = 0\; \mbox{for every}\; m=-\ell,\dots , \ell\; \mbox{and}\; \ell > 0
\end{equation}
so that $\E (T_x)=\E (a_{0,0})/ \sqrt{4\pi}$, as $Y_{0,0}= 1/\sqrt{4\pi}$, consistent with the
fact that the mean of an isotropic random field is constant.
If $c$ is any additive constant, the random field $T^c:=T+c$ has the same
Fourier expansion as $T$, except for the term $a_{0,0}^c Y_{0,0}= c + a_{0,0}Y_{0,0}$, because for every $\ell \ge 1$ the spherical harmonics $Y_{\ell,m}$ are orthogonal to the constants.
In what follows we often consider \emph{centered} isotropic random fields;
this is generally done by requiring that the trivial coefficient $a_{0,0}$ is itself a centered
random variable. However we will often require that $a_{0,0}=0$, i.e.,
the average of the random field vanishes on $\cS^2$:
\begin{equation}
\int_{\cS^2} T_x\,dx = 0\ .
\end{equation}
As in the Peter-Weyl decomposition of $L^2(\cS^2)$ two irreducible representations with $\ell\not=\ell'$ are not equivalent,
the random coefficients $a_{\ell,m}, m=-\ell, \dots, \ell$ are pairwise orthogonal and moreover
the variance of $a_{\ell,m}$ does not depend on $m$. We denote
$$c_\ell:=\E[|a_{\ell,m}|^2]$$
the variance of $a_{\ell,m}$. The (nonnegative) sequence
$(c_\ell)_\ell$ is known as the \emph{angular power spectrum} of the field.
It turns out that $T$ is Gaussian and isotropic if and only if the $a_{\ell,m}$'s are independent Gaussian random variables.
In this case, from \paref{espansione stocastica sulla sfera}, setting
$$
T_\ell(x) := \sum_{m=-\ell}^{\ell} \frac{a_{\ell,m}}{\sqrt{c_\ell}} Y_{\ell,m}(x)
$$
we can write
$$
T_x = \sum_\ell \sqrt{c_\ell}\, T_\ell(x) \ ,
$$
where $T_\ell$ is known as the $\ell$-th Gaussian eigenfunction on $\mathbb S^2$ or random spherical harmonic (see \paref{Telle} for further details).
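\noindent From the orthogonality of the coefficients and the addition theorem for spherical harmonics (which, with the normalization of this work, where $\cS^2$ has total measure $1$, reads $\sum_{m=-\ell}^{\ell} Y_{\ell,m}(x)\overline{Y_{\ell,m}(y)}=(2\ell+1)P_\ell(\cos d(x,y))$, $P_\ell$ denoting the $\ell$-th Legendre polynomial), the covariance kernel of $T$ takes the explicit form
$$
\Cov(T_x,T_y)=\sum_{\ell\ge 0} c_\ell \sum_{m=-\ell}^{\ell} Y_{\ell,m}(x)\overline{Y_{\ell,m}(y)}=\sum_{\ell\ge 0} c_\ell\,(2\ell+1)\,P_\ell(\cos d(x,y))\ ,
$$
so that, as expected for an isotropic field, the covariance depends on $x$ and $y$ only through their distance.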
\section{Positive definite functions}
To every second order random field $T$ one can associate
the \emph{covariance kernel} $R:\X\times \X \goto \C$ defined as
$$
R(x,y)=\Cov(T_x,T_y)\ .
$$
This kernel is positive definite, as, for every choice of $x_1,\dots,x_m\in\X$ and of $\xi\in\C^m$ we have
$$
\displaylines{
\sum_{i,j=1}^mR(x_i,x_j)\xi_i\overline{\xi_j}=
\sum_{i,j=1}^m\Cov(T_{x_i},T_{x_j})\xi_i\overline{\xi_j}
=\Var\left(\sum_{i}T_{x_i}\xi_i\right)\ge0\ .
}
$$
If in addition $T$ is isotropic we have, for every $g\in G$,
$$
R(gx,gy)=R(x,y)
$$
and, in this case, $R$ turns out to be continuous, thanks to proposition (\ref{Mean square continuity of invariant}).
Moreover to this kernel one can associate the function on $G$
\begin{equation}\label{funzione fi}
\phi(g):=R(gx_0,x_0)\ .
\end{equation}
This function $\phi$ is
\begin{itemize}
\item continuous, as a consequence of the continuity of $R$.
\item \emph{bi-$K $-invariant} i.e. for every $k_1,k_2\in K $ and $g\in G$ we have
$$\phi(k_1gk_2)=R(k_1gk_2x_0,x_0)\overset{k_2x_0=x_0}{=}R(k_1gx_0,x_0)\overset{G\text{-invariance of }R}{=}R(gx_0,k_1^{-1}x_0)\overset{k_1^{-1}x_0=x_0}{=}R(gx_0,x_0)=\phi(g)$$
\item \emph{positive definite}; indeed, as $R$ is a positive definite kernel, for every $g_1,\dots,g_m\in G$ and
$\xi_1,\dots,\xi_m\in \C$ we have
\begin{align}\label{ab}
\sum_{i,j} \phi(g_i^{-1}g_j)\overline{\xi_i} \xi_j = \sum_{i,j} R(g_i^{-1}g_jx_0,x_0)\overline{\xi_i} \xi_j=\sum_{i,j} R(g_jx_0,g_ix_0)\overline{\xi_i} \xi_j\ge 0\ .
\end{align}
\noindent By standard approximation arguments \paref{ab} implies that for every continuous function $f$ we have
\begin{equation}\label{dis1}
\int_G \int_G \phi(h^{-1}g) f(h) \overline{f(g)}\,dg\,dh \ge 0\ .
\end{equation}
\end{itemize}
Moreover $\phi$ determines the covariance kernel $R$ by observing that if $g_x x_0=x, g_yx_0=y$, then
$$R(x,y)=R(g_xx_0,g_yx_0)=R(g_y^{-1}g_xx_0,x_0)=\phi(g_y^{-1}g_x)\ .$$
\noindent Now it is useful to introduce the following functions and their properties.
\begin{definition}\label{f chech}
Let $\zeta$ be a function defined on $G$. We denote by $\zsmile$
the function
$$\zsmile(g):=\overline{\zeta(g^{-1})}$$
\end{definition}
\begin{remark}\rm\label{algebra gruppo}
We have just defined a map
\begin{align}
\zeta \goto \zsmile
\end{align}
that is an \emph{involution} of the convolution algebra $L^2(G)$, which thus becomes an $H^*$-algebra. $L^2(G)$ is known as the \emph{group algebra} of $G$. \qed
\end{remark}
\begin{remark}\rm\label{aggiunto}
If $\zeta\in L^2(G)$, then for every $\sigma \in \hat G$ we have
$$\hat \zsmile (\sigma)= \hat \zeta(\sigma)^*\ .$$
Actually,
$$
\hat \zsmile (\sigma) = \int_G \zsmile(g) D^\sigma(g^{-1})\,dg=
\int_G \overline{\zeta(g^{-1})} D^\sigma(g^{-1})\,dg\ .
$$
Thus, for every $v\in H_\sigma$,
$$\dlines{
\langle \hat \zsmile (\sigma) v,v \rangle = \int_G \overline{\zeta(g^{-1})} \langle D^\sigma(g^{-1})v,v \rangle \,dg=\cr
= \int_G \overline{\zeta(g^{-1})} \langle v,D^\sigma(g)v \rangle \,dg= \int_G \overline{\zeta(g^{-1})} \overline{\langle D^\sigma(g)v,v \rangle} \,dg=\cr
=\overline{ \langle \hat \zeta(\sigma)v, v \rangle}=\langle v, \hat \zeta(\sigma)v \rangle\ .
}$$
\end{remark}
Remark that every positive definite function $\phi$ on $G$ (see \cite{sugiura} p.123) satisfies
$$
\fismile = \phi\ .
$$
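\noindent This can be checked directly: choosing $m=2$, $g_1=e$, $g_2=g$ in the definition of positive definiteness, the quadratic form associated to the matrix
$$
\begin{pmatrix}\phi(e) & \phi(g)\\ \noalign{\vskip2pt} \phi(g^{-1}) & \phi(e)\end{pmatrix}
$$
takes real nonnegative values only, so that this matrix is Hermitian; hence $\phi(g^{-1})=\overline{\phi(g)}$, i.e. $\fismile=\phi$.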
The following proposition states some (not really unexpected) properties of continuous positive definite functions that we shall need later.
\begin{prop}\label{structure of positive definite}
Let $\phi$ be a continuous positive definite function and $\sigma\in \hat G$.
a) Let $\widehat\phi(\sigma):H_\sigma\to H_\sigma$ be the Fourier coefficient $\widehat\phi(\sigma)=\int_G \phi(g)D^\sigma(g^{-1})\, dg$. Then $\widehat\phi(\sigma)$ is Hermitian positive definite.
b) Let $\phi^\sigma:G\to \C$ be the component of $\phi$ corresponding to $\sigma$. Then $\phi^\sigma$ is also a positive definite function.
\end{prop}
\begin{proof}
a) Let us fix an orthonormal basis $v_1,\dots,v_{d_\sigma}$ of $H_\sigma$. For $v\in H_\sigma$ we have
\begin{equation}\label{dis3}
\langle \hat \phi(\sigma) v,v \rangle = \int_G \phi(g)
\langle D^\sigma(g^{-1})v,v\rangle\,dg\ .
\end{equation}
By the invariance of the Haar measure
$$
\dlines{ \int_G \phi(g)
\langle D^\sigma(g^{-1})v,v\rangle\,dg = \int_G \int_G \phi(h^{-1}g) \langle D^\sigma(g^{-1}h)v,v\rangle\,dg\,dh=\cr =
\int_G \int_G \phi(h^{-1}g) \langle D^\sigma(h)v,D^\sigma(g)v\rangle\,dg\,dh=\cr
=\int_G \int_G \phi(h^{-1}g) \sum_k ({D^\sigma(h)}{v})_k\overline{({D^\sigma(g)} {v})}_k\,dg\,dh =\cr
\sum_k \int_G \int_G \phi(h^{-1}g) f_k(h)\overline{f_k(g)}\,dg\,dh \ge 0\ ,
}
$$
where we have set, for every $k$, $f_k(g)=({D^\sigma(g)}{v})_k$, and (\ref{dis1}) allows us to conclude.
b) Let $\phi^\sigma$ be the projection of $\phi$ onto the $\sigma$-isotypical subspace $L^2_\sigma(G)\subset L^2(G)$.
The Peter-Weyl theorem states that
\begin{equation}
\phi= \sum_{\sigma\in \hat G} \phi^\sigma\ ,
\end{equation}
the convergence of the series taking place in $L^2(G)$.
\noindent Let $f\in L^2_\sigma(G)$ in \ref{dis1} and replace $\phi$ with its Fourier series.
Recall that $f$ is a continuous function.
We have
$$
\dlines{
0\le \int_G \int_G \sum_{\sigma'} \phi^{\sigma'}(h^{-1}g) f(h) \overline{f(g)}\,dg\,dh=
\int_G \sum_{\sigma'} \underbrace{\int_G \phi^{\sigma'}(h^{-1}g) f(h)\,dh}_{= f\ast \phi^{\sigma'}} \overline{f(g)}\,dg\ .}
$$
Now recalling that the subspaces $L^2_{\sigma'}(G)$ are pairwise orthogonal
under the product of convolution, we obtain
$$
f\ast \phi^{\sigma'} = 0 \quad\mbox{unless}\quad \sigma'=\sigma\ .
$$
Therefore for every $\sigma\in \hat G$
\begin{equation}
\int_G \int_G \phi^{\sigma}(h^{-1}g) f(h) \overline{f(g)}\,dg\,dh=\int_G \int_G \phi(h^{-1}g) f(h) \overline{f(g)}\,dg\,dh \ge 0
\end{equation}
for every $f\in L^2_\sigma(G)$. Let now $f\in L^2(G)$ and let $f=\sum_{\sigma'} f^{\sigma'}$ be its Fourier series. The same argument as above gives
$$
\int_G \int_G \phi^{\sigma}(h^{-1}g) f(h) \overline{f(g)}\,dg\,dh=\int_G \int_G \phi^{\sigma}(h^{-1}g) f^\sigma(h) \overline{f^\sigma(g)}\,dg\,dh\ge 0\ ,
$$
so that $\phi^\sigma$ is a positive definite function.
\end{proof}
\noindent Another important property enjoyed by positive definite and continuous functions on $G$ is shown
in the following classical theorem (see \cite{GAN:60}, Theorem 3.20, p.151).
\begin{theorem}\label{gangolli-true}
Let $\zeta$ be a continuous positive definite function on $G$ and let $\zeta^\sigma$ be
the component of $\zeta$ on the $\sigma$-isotypical subspace $L^2_\sigma(G)$. Then
$$
\sum_{\sigma\in \hat G} \sqrt{\text{dim}\,\sigma}\, \tr\, \widehat \zeta(\sigma) < +\infty\ ,$$
and the Fourier series
$$
\zeta = \sum_{\sigma\in \hat G} \zeta^\sigma
$$
converges uniformly on $G$.
\end{theorem}
\begin{remark}\rm
This theorem is an extension of a classical result for trigonometric series:
\emph{every continuous function on the unit circle with all nonnegative Fourier coefficients has
its Fourier series converging uniformly on the unit circle}. \qed
\end{remark}
\chapter{Representation of isotropic Gaussian fields}
In this chapter we present the first part of \cite{mauspin}: as stated in the Introduction, starting from P. L\'evy's construction of his spherical Brownian motion, we prove a representation formula for isotropic Gaussian fields on homogeneous spaces $\X$ of a compact group $G$ (\S 2.1 and \S 2.2).
In particular, we show that to every square integrable bi-$K$-invariant
function $f$ on $G$ a Gaussian isotropic random field on $\X$ can be associated
and also that every \emph{real} Gaussian
isotropic random field on $\X$ can be obtained in this way.
This kind of result is extended to the case of random fields in the
spin-line bundles of the sphere in the second part of \cite{mauspin} and will be presented in the last chapter of this thesis.
\section{Construction of isotropic Gaussian fields}\label{sec4}
In this section we describe a method for constructing Gaussian isotropic random
fields on the homogeneous space $\X$ of a compact group $G$. We start with the construction
of a white noise on $\X$.
Let $(X_n)_n$ be a sequence of i.i.d. standard Gaussian r.v.'s on some probability
space $(\Omega, \F, \P)$ and denote by $\mathscr{H}\subset L^2(\P)$ the real Hilbert
space generated by $(X_n)_n$.
Let $(e_n)_n$ be an orthonormal basis of $L^2_{\mathbb{R}}(\mathscr{X})$, the space of real square integrable
functions on $\X$.
We define an isometry $S:L^2_{\mathbb{R}}(\mathscr{X})\to\mathscr{H}$ by
$$
L^2_{\mathbb{R}} (\mathscr{X})\ni \sum_k \alpha_k
e_k\enspace\leftrightarrow\enspace \sum_k \alpha_k X_k \in
\mathscr{H}\ .
$$
It is easy to extend $S$ to an isometry on $L^2(\mathscr{X})$,
indeed if $f\in L^2(\mathscr{X})$, then $f=f_1+if_2$, with $f_1, f_2
\in L^2_{\mathbb{R}}(\mathscr{X})$, hence just set
$S(f)=S(f_1)+iS(f_2)$. Such an isometry respects the real character
of the function $f\in L^2(\X)$ (i.e. if $f$ is real then $S(f)$ is a
real r.v.).
Let $f$ be a left $K $-invariant function in $L^2(\mathscr{X})$.
We then define a random field $(T^f_x)_{x\in \mathscr{X}}$
associated to $f$ as follows: set $T^f_{x_0}=S(f)$ and, for every $x\in \X$,
\begin{equation}\label{def campo}
T^f_x =S(L_gf)\ ,
\end{equation}
where $g\in G$ is such that $gx_0=x$ ($L$ still denotes the left
regular action of $G$).
This is a good definition: in fact if also $\widetilde{g}\in G$ is such that
$\widetilde{g}x_0=x$, then $\widetilde{g}=gk$ for some $k\in K$ and therefore $L_{\widetilde g}f(x)=f(k^{-1}g^{-1}x)=
f(g^{-1}x)=L_gf(x)$ so that
$$
S(L_{\widetilde{g}}f)=S(L_gf)\ .
$$
The random field $T^f$ is mean square integrable, indeed
$$
\dlines{ \E \Bigl[\int_\X |T^f_x|^2\,dx \Bigr]
< +\infty\ .}
$$
Actually,
if $g_x$ is any element of $G$ such that $g_xx_0=x$ (chosen in some
measurable way), then, as $\E[|T^f_x|^2]=\E [|S(L_{g_x} f)|^2]=\|
L_{g_x}f \|^2_{L^2(\X)}=\| f \|^2_{L^2(\X)}$, we have $\E \int_\X |T^f_x|^2\,dx= \| f \|^2_{L^2(\X)}$.
$T^f$ is a centered and \emph{complex-valued Gaussian} random field.
Let us now check that $T^f$ is isotropic. Recall that
the law of a complex-valued Gaussian random vector $Z=(Z_1, Z_2)$ is
completely characterized by its mean value $\E[Z]$, its covariance
matrix $\E[ \left(Z - \E[Z]\right) \left( Z- \E[Z] \right)^*]$ and
the \emph{pseudocovariance} or \emph{relation matrix} $\E[ \left(Z -
\E[Z]\right) \left( Z- \E[Z] \right)']$. We have
(i) as $S$ is an isometry
$$
\displaylines{
\mathbb{E}[T^f_{gx}\overline{T^f_{gy}}]=\mathbb{E}[S(L_{gg_x}f)\overline{S(L_{gg_y}f)}]=
\langle L_{gg_x }f, L_{gg_y}f\rangle_{L^2(\X)}
=\cr
=\langle L_{g_x }f, L_{g_y}f \rangle_{L^2(\X)}=
\mathbb{E}[T^f_x\overline{T^f_y}]\ .}
$$
(ii) Moreover, as complex conjugation commutes both with $S$ and the
left regular representation of $G$,
$$
\displaylines{
\mathbb{E}[T^f_{gx}T^f_{gy}]=\mathbb{E}[S(L_{gg_x}f)\overline{S(L_{gg_y}\overline{f})}]=
\langle L_{gg_x }f, L_{gg_y}\overline{f}\rangle_{L^2(\X)}=\cr
=
\langle L_{g_x }f, L_{g_y}\overline{f}\rangle_{L^2(\X)}=\mathbb{E}[T^f_xT^f_y]\ .}
$$
Therefore $T^f$ is isotropic because it has the same covariance and
relation kernels as the rotated field $(T^f)^g$ for every $g\in G$.
If $R^f(x,y)=\E[T^f_x \overline{T^f_y}]$ denotes its covariance
kernel, then the associated positive definite function
$\phi^f(g):=R(gx_0,x_0)$ satisfies
\begin{equation}\label{convolution for phi}
\begin{array}{c}
\displaystyle\phi^f(g)=\E[S(L_g f)\overline{S(f)}]=
\langle L_gf, f\rangle
=\\
\noalign{\vskip3pt}
\displaystyle= \int_G \widetilde f(g^{-1}h) \overline{\widetilde f(h)}\,dh= \int_G \widetilde f(g^{-1}h) \breve {\widetilde f} (h^{-1})\,dh=\widetilde f \ast \breve {\widetilde f} (g^{-1})\ , \\
\end{array}
\end{equation}
where $\widetilde f$ is the pullback on $G$ of $f$ and the convolution $\ast$ is in $G$. Moreover the relation function of $T^f$, that is
$
\zeta^f(g) := \E[T^f_{gx_0} T^f_{x_0}]
$
satisfies
\begin{equation}\label{convolution for zeta}
\zeta^f(g)=\E[S(L_gf)S(f)]=\langle L_gf, \overline{f}\rangle\ .
\end{equation}
One may ask whether every a.s. square integrable, isotropic, complex-valued Gaussian centered random field on $\X$ can be obtained with this construction: the answer
is \emph{no} in general. It is however positive if we consider
{\it real} isotropic Gaussian random fields (see Theorem \ref{real-general} below). Before considering the case of a general homogeneous space $\X$, let us look first at the case of the sphere, where things are particularly simple.
\begin{remark}\label{rem-sfera} \rm (Representation of
real Gaussian isotropic random fields on $\cS^2$) If $\X=\cS^2$
under the action of $SO(3)$, every isotropic, \emph{real} Gaussian
and centered random field is of the form \paref{def campo} for some
left-$K$-invariant function $f:\cS^2\to \R$. Indeed let us consider on $L^2(\cS^2)$ the Fourier
basis $Y_{\ell,m}$, $\ell=0,1,2,\dots$, $m=-\ell,\dots,\ell$,
given by the spherical harmonics (\ref{armoniche sferiche1}).
Every continuous positive definite left-$K$-invariant function $\phi$ on $\cS^2$ has a Fourier expansion of the form (\ref{PW invariant sphere})
\begin{equation}\label{fi per sfera}
\phi = \sum_{\ell \ge 0} \alpha_\ell Y_{\ell,0}\ ,
\end{equation}
where (Proposition
\ref{structure of positive definite}) $\alpha_\ell \ge 0$ and
$$
\sum_{\ell \ge 0} \sqrt{2\ell+1}\,\alpha_\ell<+\infty
$$
(Theorem \ref{gangolli-true}).
The $Y_{\ell,0}$'s being real, the function $\phi$ in (\ref{fi per sfera})
is \emph{real}, so that
$\phi(g)=\phi(g^{-1})$
(in this remark and in the next example we identify functions on $\cS^2$ with
their pullbacks on $SO(3)$ for simplicity of notations).
If $\phi$ is the positive definite left-$K$-invariant function associated to
$T$, then, keeping in mind that $Y_{\ell,0}*Y_{\ell',0}=(2\ell+1)^{-1/2}
Y_{\ell,0}\,\delta_{\ell,\ell'}$, a ``square root'' $f$ of $\phi$ is given by
\begin{equation}
f = \sum_{\ell \ge 0} \beta_\ell \, Y_{\ell,0}\ ,
\end{equation}
where $\beta_\ell$ is a complex number such that
$$
\frac{|\beta_\ell |^2}{\sqrt{2\ell+1}}= \alpha_\ell\ .
$$
Therefore there exist infinitely many real
functions $f$ such that $\phi(g)=\phi(g^{-1})=
f \ast \breve{f}(g)$, corresponding to the choices $\beta_\ell=\pm
\bigl( (2\ell+1)\alpha_\ell^2 \bigr)^{1/4}$. For each of these, the random field $T^f$ has
the same distribution as $T$, being real and having the same associated positive
definite function. \qed
\end{remark}
As stated in the Introduction, this method generalizes P. L\'evy's construction of his spherical Brownian motion. In the following example, we show
the connection between this construction and our method. Moreover, it is easy to extend the following to the case of the hyperspherical Brownian motion.
\begin{example}\rm (P. L\'evy's spherical Brownian field)\label{MB}
Let us choose as a particular instance of the previous construction
$f=c1_{H}$, where $H$ is the half-sphere centered at the north pole
$x_0$ of $\cS^2$ and $c$ is some constant to be chosen later.
Still denoting by $S$ a white noise on $\cS^2$, from (\ref{def campo})
we have
\begin{equation}
T^f_x = c S(1_{H_x})\ ,
\end{equation}
where $1_{H_x}$ is the half-sphere centered at $x\in \cS^2$. Now,
let $x, y\in \bS^2$ and denote by $d(x,y) = \theta$ their distance, then,
$S$ being an isometry,
\begin{equation}
\Var(T^f_x - T^f_y) = c^2 \| 1_{H_x\vartriangle H_y}\|^2\ .
\end{equation}
The symmetric difference $H_x\vartriangle H_y$ is formed by the union of
two wedges whose total surface is equal to $\frac{\theta}{\pi}$
(recall that we
consider the surface of $\cS^2$ normalized with total mass $=1$).
Therefore, choosing $c= \sqrt{\pi}$, we have
\begin{equation}
\Var(T^f_x - T^f_y) = d(x,y)
\end{equation}
and furthermore $\Var(T^f_x) = c^2\,\| 1_{H_x}\|^2 = \pi\cdot\tfrac12 = \tfrac\pi2$.
Thus
\begin{equation}\label{aa}
\Cov(T^f_x, T^f_y) = \tfrac12 \bigl( \Var(T^f_x) + \Var(T^f_y) -
\Var(T^f_x - T^f_y)\bigr) = \tfrac\pi2 - \tfrac12 d(x,y)\ .
\end{equation}
Note that the positive definiteness of (\ref{aa}) implies that the distance $d$
is a Schoenberg restricted negative definite kernel on $\cS^2$ (see \paref{neg def}). The random
field $W$
\begin{equation}
W_x := T^f_x - T^f_{o}\ ,
\end{equation}
where $o$ denotes the north pole of the sphere
is \emph{P.L\'evy's spherical Brownian field}, as $W_o=0$ and its
covariance kernel is
\begin{equation}\label{kernel del mb}
\Cov(W_x, W_y) = \tfrac12 \left( d(x,o) + d(y,o) - d(x,y) \right)\ .
\end{equation}
In particular the kernel at the r.h.s. of (\ref{kernel del mb})
is positive definite (see also \cite{GAN:60}).
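\noindent The covariance \paref{kernel del mb} follows immediately from \paref{aa}:
$$
\dlines{
\Cov(W_x, W_y) = \Cov(T^f_x, T^f_y) - \Cov(T^f_x, T^f_o) - \Cov(T^f_o, T^f_y) + \Var(T^f_o)=\cr
= \tfrac\pi2 - \tfrac12\, d(x,y) - \tfrac\pi2 + \tfrac12\, d(x,o) - \tfrac\pi2 + \tfrac12\, d(y,o) + \tfrac\pi2
= \tfrac12 \left( d(x,o) + d(y,o) - d(x,y) \right)\ .\cr
}
$$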
Let us compute the expansion into spherical harmonics of the
positive definite function $\phi$ associated to the random field
$T^f$ and to $f$. We have $\phi(x)=\frac\pi2-\frac
12\,d(x,o)$, i.e. $\phi(x)=\frac\pi2-\frac12\, \vt$ in spherical coordinates, $\vt$ being the colatitude of $x$, whereas
$Y_{\ell,0}(x)=\sqrt{2\ell+1}\,P_\ell(\cos\vt)$ where $P_\ell$ is the $\ell$-th Legendre polynomial. This formula for the central spherical harmonics
differs slightly from the usual one, as we consider the total
measure of $\cS^2$ to be $=1$. Then, recalling the normalized
measure of the sphere is $\frac 1{4\pi}\,\sin\vt\, d\vt\, d\phi$ and
that $Y_{\ell,0}$ is orthogonal to the constants
$$
\dlines{ \int_{\cS^2}\phi(x)Y_{\ell,0}(x)\, dx=-\frac
14\,\sqrt{2\ell+1}\int_0^\pi\vt P_\ell(\cos\vt)\sin \vt\, d\vt=\cr
=-\frac 14\,\sqrt{2\ell+1}\int_{-1}^{1}\arccos t\, P_\ell(t)\, dt
=\frac 14\,\sqrt{2\ell+1}\int_{-1}^{1}\arcsin t\, P_\ell(t)\,
dt=\frac14\,\sqrt{2\ell+1}\,c_\ell\ , \cr }
$$
where
$$
c_\ell=\pi\Bigl\{\frac{3\cdot 5\cdots(\ell-2)}{2\cdot 4\cdots
(\ell+1)}\Bigr\}^2\,\quad \ell=1,3,\dots
$$
and $c_\ell=0$ for $\ell$ even (see \cite{WW}, p.~325). As for
the function $f=\sqrt{\pi}\,1_{H}$, we have
$$
\int_{\cS^2}f(x)Y_{\ell,0}(x)\, dx=\frac {\sqrt{\pi}}2
\,\sqrt{2\ell+1}\int_0^{\pi/2}P_\ell(\cos\vt)\sin\vt\, d\vt= \frac
{\sqrt{\pi}}2 \,\sqrt{2\ell+1}\int_0^1P_\ell(t)\, dt\ .
$$
The r.h.s. can be computed using Rodrigues' formula for the
Legendre polynomials (see again \cite{WW}, p.~297), giving
that it vanishes for $\ell$ even and is equal to
\begin{equation}\label{2m}
(-1)^{m}\,\frac {\sqrt{\pi}}2 \,\sqrt{2\ell+1}\,\frac
{(2m)!{2m+1\choose m}}{2^{2m+1}(2m+1)!}
\end{equation}
for $\ell=2m+1$. Details of this computation are given in Remark
\ref{rod}. Simplifying the factorials the previous
expression becomes
$$
\dlines{ (-1)^m\,\frac {\sqrt{\pi}}2 \,\sqrt{2\ell+1}\,\frac
{(2m)!}{2^{2m+1}m!(m+1)!}=(-1)^m\,\frac {\sqrt{\pi}}2
\,\sqrt{2\ell+1}\,\frac{3\cdots (2m-1)}{2\cdots (2m+2)}=\cr
=(-1)^m\,\frac 12 \,\sqrt{2\ell+1}\,\sqrt{c_\ell}\ .\cr }
$$
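\noindent As a sanity check, the first of these integrals are
$$
\int_0^1P_1(t)\,dt=\tfrac12\ ,\qquad \int_0^1P_3(t)\,dt=-\tfrac18\ ,\qquad \int_0^1P_5(t)\,dt=\tfrac1{16}\ ,
$$
corresponding to $m=0,1,2$, as can be verified directly from $P_1(t)=t$, $P_3(t)=\tfrac12(5t^3-3t)$ and $P_5(t)=\tfrac18(63t^5-70t^3+15t)$.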
Therefore the choice $f=\sqrt{\pi}\, 1_H$ corresponds to taking
alternating signs when taking the square roots. Note that the
choice $f'=\sum_\ell \beta_\ell Y_{\ell,0}$ with $\beta_\ell=\frac 12
\,\sqrt{2\ell+1}\,\sqrt{c_\ell}$ would have given a function
diverging at the north pole $o$. Actually it is elementary to
check that the series $\sum_\ell ( {2\ell+1})\,\sqrt{c_\ell}$
diverges so that $f'$ cannot be continuous by Theorem \ref{gangolli-true}.\qed
\end{example}
\begin{remark}\label{rod}\rm Rodrigues' formula for the Legendre polynomials states that
$$
P_\ell(x)=\frac 1{2^\ell\ell!}\,\frac
{d^\ell\hfil}{dx^\ell}\,(x^2-1)^\ell\ .
$$
Therefore
\begin{equation}\label{integral}
\int_0^1P_\ell(x)\, dx=\frac 1{2^\ell\ell!}\,\frac
{d^{\ell-1}\hfil}{dx^{\ell-1}}\,(x^2-1)^\ell\Big|^1_0\ .
\end{equation}
The primitive vanishes at $1$, as the polynomial $(x^2-1)^\ell$ has
a zero of order $\ell$ at $x=1$ and all its derivatives up to the
order $\ell-1$ vanish at $x=1$. In order to compute the primitive at
$0$ we make the binomial expansion of $(x^2-1)^\ell$ and take the
result of the $(\ell-1)$-th derivative of the term of order $\ell-1$
of the expansion. This is actually the term of order $0$ of the
primitive. If $\ell$ is even then $\ell-1$ is odd so that this term
of order $\ell-1$ does not exist (in the expansion only even
powers of $x$ can appear). If $\ell=2m+1$, then the term of order
$\ell-1=2m$ in the expansion is
$$
(-1)^{m+1}{2m+1\choose m} x^{2m}
$$
and the result of the integral in \paref{integral} is actually, as
given in \paref{2m},
$$
(-1)^{m}\,\frac {(2m)!}{2^{2m+1}(2m+1)!}\,{2m+1\choose m}\ \cdotp
$$\qed
\end{remark}
\section{Representation formula}
The result of Remark \ref{rem-sfera} concerning $\mathbb S^2$ can be extended to the case of
a general homogeneous space $\X$.
We shall need the following ``square root'' theorem in the proof of the representation formula
of Gaussian isotropic random fields on $\X$.
\begin{theorem}\label{square-root} Let $\phi$ be a bi-$K$-invariant positive
definite continuous function on $G$. Then there exists a bi-$K$-invariant function $f\in L^2(G)$
such that $\phi=f*\breve
f$. Moreover, if $\phi$ is real valued then $f$ can also be chosen
to be real valued.
\end{theorem}
\begin{proof}
For every $\sigma\in\widehat G$, $\widehat \phi(\sigma)$ is
Hermitian positive definite. Therefore there exist matrices
$\Lambda(\sigma)$ such that
$\Lambda(\sigma)\Lambda(\sigma)^*=\widehat\phi(\sigma)$.
Let
$$
f= \sum_{\sigma\in\widehat G} \underbrace{\sqrt{\dim\sigma}\,
\tr\bigl(\Lambda(\sigma) D^\sigma\bigr)}_{=f^\sigma}\ .
$$
This actually defines a function $f\in L^2(G)$ as it is easy to see that
$$
\Vert f^\sigma\Vert^2_2=\sum_{i,j=1}^{\dim\sigma}|\Lambda(\sigma)_{ij}|^2=
\tr(\Lambda(\sigma)\Lambda(\sigma)^*)=\tr(\widehat\phi(\sigma))
$$
so that
$$
\Vert f\Vert^2_2=\sum_{\sigma\in\widehat G}\Vert f^\sigma\Vert^2_2=
\sum_{\sigma\in\widehat G}\tr(\widehat\phi(\sigma))<+\infty
$$
thanks to \paref{gangolli-true}.
By Remark \ref{aggiunto} and (\ref{conv1}), we have
$$
\phi= f\ast \breve f\ .
$$
Finally the matrix $\Lambda(\sigma)$ can be chosen to be Hermitian
and with this choice $f$ is bi-$K $-invariant
as the relation \paref{fourier-bicappa}
$\widehat f (\sigma)=P_{\sigma,0}\widehat f(\sigma)P_{\sigma,0}$
still holds. The last statement follows from the next proposition.
\end{proof}
\begin{prop}\label{real-sq}
Let $\phi$ be a real positive definite function on a compact group $G$,
then there exists a real function $f$ such that $\phi=f*\breve f$.
\end{prop}
\begin{proof}
Let
$$
\phi(g)=\sum_{\sigma\in\widehat G}
\phi^\sigma(g)=\sum_{\sigma\in\widehat
G}\sqrt{\dim
\sigma}\,\tr(\widehat\phi(\sigma)D^\sigma(g))
$$
be the Peter-Weyl decomposition of $\phi$ into isotypical components. We know
that the Hermitian matrices $\widehat\phi(\sigma)$ are positive
definite, so that there exist square roots
$\widehat\phi(\sigma)^{1/2}$ i.e. matrices such that
$\widehat\phi(\sigma)^{1/2}{\widehat\phi(\sigma)^{1/2}}^*=\widehat\phi(\sigma)$
and the functions
$$
f(g)=\sum_{\sigma\in\widehat G}\sqrt{\dim
\sigma}\,\tr(\widehat\phi(\sigma)^{1/2}D^\sigma(g))
$$
are such that $\phi=f*\breve f$. We need to prove that these square
roots can be chosen in such a way that $f$ is also real. Recall that
a representation of a compact group $G$ can be classified as being
of real, complex or quaternionic type (see \cite{B-D}, p. 93 e.g. for details).
\tin{a)} If $\sigma$ is of real type then there exists a conjugation $J$
of $H_\sigma\subset L^2(G)$ such that $J^2=1$. A conjugation is a
$G$-equivariant antilinear endomorphism. It is well known that in
this case one can choose a basis $v_1,\dots, v_{d_\sigma}$ of
$H_\sigma$ formed of ``real'' vectors, i.e. such that $Jv_i=v_i$. It
is then immediate that the representative matrix $D^\sigma$ of the
action of $G$ on $H_\sigma$ is real. Actually, as $J$ is equivariant
and $Jv_i=v_i$,
$$
D^\sigma_{ij}(g)=\langle gv_j,v_i\rangle=\overline{\langle
Jgv_j,Jv_i\rangle}=\overline{\langle
gv_j,v_i\rangle}=\overline{D^\sigma_{ij}(g)}\ .
$$
With this choice of the basis, the matrix $\widehat\phi(\sigma)$ is real
and also $\widehat\phi(\sigma)^{1/2}$ can be chosen to be real and
$g\mapsto\sqrt{\dim \sigma}\,
\tr(\widehat\phi(\sigma)^{1/2}D^\sigma(g))$ turns out to be real
itself.
\tin{b)} If $\sigma$ is of complex type, then
it is not isomorphic to its dual representation $\sigma^*$.
As $D^{\sigma^*}(g):=D^\sigma (g^{-1})^t=
\overline{D^\sigma (g)}$ and $\phi$ is real-valued, we have
\begin{equation*}\label{easy}
\widehat \phi(\sigma^*) = \overline{\widehat \phi(\sigma)}\ ,
\end{equation*}
so that we can choose $\widehat \phi(\sigma^*)^{1/2} = \overline{\widehat \phi(\sigma)^{1/2}}$ and, as $\sigma$ and $\sigma^*$ have the same dimension, the function
$$
g\mapsto \sqrt{\dim \sigma} \tr(\widehat \phi(\sigma)^{1/2} D^\sigma(g))+ \sqrt{\dim \sigma^*} \tr(\widehat \phi(\sigma^*)^{1/2} D^{\sigma^*}(g))
$$
turns out to be real.
\tin{c)} If $\sigma$ is quaternionic, let $J$ be the corresponding
conjugation. It is immediate that the vectors $v$ and $Jv$ are
orthogonal and from this it follows that
$\dim \sigma=2k$ and that there exists an orthogonal basis for $H_\sigma$
of the form
\begin{equation}\label{anti-basis}
v_1, \dots, v_k, w_1=J(v_1), \dots, w_k=J(v_k)\ .
\end{equation}
In such a basis the representation matrix of any linear transformation $U:H_\sigma\to H_\sigma$ which commutes with $J$ has the form
\begin{equation}\label{eq blocchi-gen}
\begin{pmatrix}
A & B\\
\noalign{\vskip4pt}
-\overline{B} & \overline{A} \\
\end{pmatrix}
\end{equation}
and in particular $D^\sigma(g)$ takes the form
\begin{equation}\label{eq blocchi}
D^\sigma(g)=\begin{pmatrix}
A(g) & B(g)\\
\noalign{\vskip4pt}
-\overline{B(g)} & \overline{A(g)} \\
\end{pmatrix}\ .
\end{equation}
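\noindent A standard instance is the defining representation of $SU(2)$, which is of quaternionic type: its matrices are exactly those of the form \paref{eq blocchi} with $k=1$,
$$
\begin{pmatrix}
a & b\\
\noalign{\vskip4pt}
-\overline{b} & \overline{a} \\
\end{pmatrix}\ ,\qquad |a|^2+|b|^2=1\ .
$$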
By \paref{eq blocchi} we have also, $\phi$ being real valued,
\begin{equation}\label{matr per fi}
\widehat \phi(\sigma) = \begin{pmatrix}
\int_G \phi(g)A(g^{-1})\,dg & \int_G \phi(g)B(g^{-1})\,dg\\
\noalign{\vskip6pt}
-\int_G \phi(g)\overline{B(g^{-1})}\,dg & \int_G
\phi(g)\overline{A(g^{-1})}\,dg \\
\end{pmatrix}:=\begin{pmatrix}
\phi_A & \phi_B \\
\noalign{\vskip4pt}
-\overline{ \phi_B} & \overline{\phi_A } \\
\end{pmatrix}\ .
\end{equation}
More interestingly, if $\phi$ is any function such that, with respect to the basis above,
$\widehat\phi(\sigma)$ is of the form \paref{matr per fi}, then the corresponding component
$\phi^\sigma$ is necessarily a real valued function: actually
$$
\dlines{
\phi^\sigma(g)=\tr(\widehat\phi(\sigma)D^\sigma(g))
=\tr\bigl(\phi_A A(g)-\phi_B\overline {B(g)}-\overline{\phi_B}B(g)+\overline {\phi_A}\,\overline {A(g)}\bigr)=\cr
=\tr\bigl(\phi_A A(g)+\overline{\phi_A A(g)}\bigr)-
\tr\bigl(\phi_B\overline {B(g)}+\overline{\phi_B\overline {B(g)}}\bigr)\ .\cr
}
$$
We now prove that the Hermitian square root, $U$ say, of $\widehat\phi(\sigma)$ is of the form
\paref{matr per fi}. Actually note that $\widehat\phi(\sigma)$ is self-adjoint, so that it
can be diagonalized and all its eigenvalues are real (and positive by Proposition
\ref{structure of positive definite} a)). Let $\lambda$ be an eigenvalue and $v$ a
corresponding eigenvector. Then, as
$$
\widehat\phi(\sigma)Jv=J\widehat\phi(\sigma)v=J(\lambda v)=\lambda Jv\ ,
$$
$Jv$ is also an eigenvector associated to $\lambda$. Therefore there exists a basis as in
\paref{anti-basis} that is formed of eigenvectors, i.e. of the form
$v_1,\dots,v_k, w_1,\dots,w_k$ with $Jv_j=w_j$ and
$v_j$ and $w_j$ associated to the same positive eigenvalue $\lambda_j$. In this basis
$\widehat\phi(\sigma)$ is of course diagonal with the (positive) eigenvalues on the diagonal.
Its Hermitian square root $U$ is also diagonal, with the square roots of the eigenvalues
on the diagonal. Therefore $U$ is also of the form \paref{matr per fi} and the corresponding
function $\psi(g)=\tr(UD(g))$ is real valued and such that $\psi*\breve\psi=\phi^\sigma$.
\end{proof}
Note that the decomposition of Theorem \ref{square-root} is not
unique, as the Hermitian square root of the positive definite
operator $\widehat\phi(\sigma)$ is not unique itself.
Now we prove the main result of this chapter.
\begin{theorem}\label{real-general} Let $\X$ be the homogeneous space of a compact group $G$ and let $T$ be an a.s. square
integrable isotropic Gaussian \emph{real} random field on $\X$.
Then there exists a left-$K$-invariant function $f\in L^2(\X)$ such that $T^f$ has the same distribution as $T$.
\end{theorem}
\begin{proof} Let $\phi$ be the invariant positive definite function associated to $T$.
Thanks to \paref{convolution for phi} it is sufficient to prove that
there exists a \emph{real} $K$-invariant
function $f\in L^2(\X)$ such that $\phi(g)=\widetilde f \ast \breve{\widetilde
f}(g^{-1})$. Keeping in mind that $\phi(g)=\phi(g^{-1})$, as $\phi$
is real,
this follows from
Theorem \ref{square-root}.
\end{proof}
As remarked above $f$ is not unique.
Recall that a complex valued Gaussian r.v. $Z=X+iY$ is said to be
\emph{complex Gaussian} if the
r.v.'s $X,Y$ are jointly Gaussian, are independent and have the same variance. A $\C^m$-valued r.v.
$Z=(Z_1,\dots, Z_m)$ is said to be complex-Gaussian if the r.v.
$\alpha_1Z_1+\dots+\alpha_mZ_m$ is complex-Gaussian for every choice of $\alpha_1,\dots,\alpha_m\in \C$.
\begin{remark}An a.s. square integrable random field $T$ on $\X$ is \emph{complex Gaussian}
if and only if the complex valued r.v.'s
$$
\int_\X T_xf(x)\, dx
$$
are complex Gaussian for every choice of $f\in L^2(\X)$.
\end{remark}
Complex Gaussian random fields will play an important role in the last chapter of this work. For
now let us remark
that, in general, it is not possible to obtain a complex Gaussian random field by the procedure
\paref{def campo}.
\begin{prop}\label{zeta-prop} Let $\zeta(x,y)=\E[T_xT_y]$ be the relation kernel of a centered complex
Gaussian random field $T$. Then $\zeta\equiv 0$.
\end{prop}
\begin{proof} It is easy to check that a centered complex valued r.v. $Z$ is complex Gaussian if and only
if $\E[Z^2]=0$. As for every $f\in L^2(\X)$
$$
\int_\X\int_\X \zeta(x,y)f(x)f(y)\, dx dy=\E\Bigl[\Bigl(\int_\X T_xf(x)\, dx\Bigr)^2\Bigr]=0\ ,
$$
it is easy to derive
that $\zeta\equiv0$.
\end{proof}
Going back to the situation of Remark \ref{rem-sfera}, the relation function $\zeta$ of the
random field $T^f$ is easily found to be
\begin{equation}\label{cic}
\zeta^f=\sum_{\ell \ge 0} \beta_\ell^2\, Y_{\ell,0}\ ,
\end{equation}
and cannot vanish unless $f\equiv 0$, in which case $T^f$ itself vanishes.
Therefore no isotropic complex Gaussian random field on the sphere can be obtained by
the construction \paref{def campo}.
\chapter{On L\'evy's Brownian fields}
In 1959 P.L\'evy \cite{levy} asked the question of the existence of a random field $X$
indexed by the points of
a metric space $(\X,d)$ and generalizing the Brownian motion, i.e. of a real Gaussian process
which would be centered, vanishing at some point $x_0\in\X$ and such that $\E(|X_x-X_y|^2)=d(x,y)$.
By polarization, the covariance function of such a process would be
\begin{equation}\label{kernel MB}
K(x,y)=\frac{1}{2} \,( d(x,x_0) + d(y,x_0) - d(x,y) )
\end{equation}
so that this question is equivalent to the fact that the kernel $K$ is positive definite. As anticipated in the Introduction, $X$ is called P.L\'evy's Brownian field on $(\X,d)$. Positive
definiteness of $K$ for $\X=\R^{m+1}$ and $d$ the Euclidean metric had been proved by
\cite{schoenberg} in 1938 and P. L\'evy himself
constructed the Brownian field on $\X=\bS^{m}$, the Euclidean sphere of $\R^{m+1}$,
$d$ being the distance along the geodesics (Example \ref{MB}).
Later Gangolli \cite{GAN:60} gave an analytical proof of the positive definiteness of the kernel
\paref{kernel MB} for the same metric space $(\bS^{m},d)$, in a paper that dealt with this
question for a large class of homogeneous spaces.
Finally Takenaka in \cite{TKU} proved the positive definiteness of the kernel \paref{kernel MB}
for the Riemannian metric spaces
of constant sectional curvature equal to $-1,0$ or $1$, therefore adding the hyperbolic disk to the
list. To be precise, in the case of the hyperbolic space
$\mathcal{H}_m = \lbrace (x_0, x_1, \dots, x_m)\in \R^{m+1} :
x_0^2 - x_1^2 - \dots - x_m^2 = 1,\ x_0>0 \rbrace $, the distance under consideration is the unique, up to
multiplicative
constants, Riemannian distance that is invariant with respect to the action of $G=L_m$, the Lorentz group.
In \cite{mauSO(3)}
we investigate this question for the case
$\X=SO(n)$. The answer is that the kernel \paref{kernel MB} is not
positive definite on $SO(n)$ for $n>2$. This is somewhat surprising
as, in particular, $SO(3)$ is locally isometric to $SU(2)$, where
positive definiteness of the kernel $K$ is immediate, as shown below.
This immediately implies the analogous negative result for $SU(n)$, $n>2$.
The plan of this chapter is as follows.
In \S\ref{elem} we recall some elementary facts about invariant distances and positive definite kernels. In
\S\ref{sud} we treat the case $G=SU(2)$, recalling well known facts about the invariant distance and Haar measure of this group.
Positive definiteness of $K$ for $SU(2)$ is just a simple remark, but these facts are needed in
\S\ref{sot}, where we treat the case $SO(3)$ and deduce from it the case $SO(n)$, $n\ge 3$.
\section{Some elementary facts}\label{elem}
In this section we recall some well known facts about Lie groups (see mainly
\cite{faraut} and also \cite{MR2088027, sugiura}).
\subsection{Invariant distance of a compact Lie group}
In this chapter $G$ denotes a compact \emph{Lie group}. It is well known that $G$ admits {at least one} bi-invariant Riemannian metric
(see \cite{MR2088027} p.66 e.g.), that
we shall denote $\lbrace \langle \cdot, \cdot \rangle_g \rbrace_{g\in G}$ where of course
$\langle \cdot, \cdot \rangle_g$ is the inner product defined on the tangent space $T_g G$ to the manifold $G$ at $g$ and the family $\lbrace \langle \cdot, \cdot \rangle_g \rbrace_{g\in G}$ smoothly depends on $g$. By the bi-invariance property, for $g\in G$ the diffeomorphisms
$L_g$ and $R_g$ (resp. the left multiplication and the right multiplication of the group) are isometries.
Since the tangent space $T_g G$ at any point $g$ can be translated to the tangent space $T_e G$ at the identity element $e$ of the group, the metric $\lbrace \langle \cdot, \cdot \rangle_g \rbrace_{g\in G}$ is completely characterized by $\langle \cdot, \cdot \rangle_e$.
Moreover, $T_e G$ being the Lie algebra $\mathfrak g$ of $G$, the bi-invariant metric corresponds to an inner product $\langle \cdot, \cdot \rangle$ on
$\mathfrak g$ which is invariant under the adjoint representation $Ad$ of $G$. Indeed there is a one-to-one correspondence between bi-invariant Riemannian metrics on $G$ and $Ad$-invariant inner products on
$\mathfrak g$. If in addition $\mathfrak g$ is {semisimple}, then the negative Killing form
of $G$ is an $Ad$-invariant inner product on $\mathfrak g$ itself.
If there exists a unique
(up to a multiplicative factor) bi-invariant metric on $G$ (for a sufficient condition see
\cite{MR2088027}, Th. $2.43$) and $\mathfrak g$ is semisimple, then
this metric is necessarily proportional to the negative Killing form of $\mathfrak g$. It is well known that this is the case for $SO(n), (n\ne 4)$ and $SU(n)$; furthermore, the (natural) Riemannian metric on $SO(n)$ induced by the embedding
$SO(n) \hookrightarrow \R^{n^2}$ corresponds to the negative Killing form of ${so}(n)$.
Endowed with this bi-invariant Riemannian metric, $G$ becomes a {metric space}, with a distance $d$ which is
bi-invariant. Therefore the function $g\in G \to d(g,e)$ is a class function as
\begin{equation}\label{cfunction}
d(g,e)=d(hg,h)=d(hgh^{-1},hh^{-1})=d(hgh^{-1},e), \qquad g,h\in G\ .
\end{equation}
It is well known that {geodesics} on $G$ through the identity $e$
are exactly the one parameter subgroups of $G$ (see \cite{MR0163331} p.113 e.g.), thus a geodesic from $e$
is the curve on $G$
\begin{equation*}
\gamma_X(t) : t\in [0,1] \to \exp(tX)
\end{equation*}
for some $X\in \mathfrak g$. The length of this geodesic is
\begin{equation*}
L(\gamma_X) = \| X \| = \sqrt{\langle X, X \rangle}\ .
\end{equation*}
Therefore
\begin{equation*}\label{distanza}
d(g,e) = \inf_{X\in \mathfrak g: \exp X=g} \| X \|\ .
\end{equation*}
\subsection{Brownian kernels on a metric space}
Let $(\X, d)$ be a metric space.
\begin{lemma}\label{fondamentale}
The kernel $K$ in (\ref{kernel MB}) is positive definite on $\X$ if
and only if $d$ is a restricted negative definite kernel, i.e., for
every choice of elements $x_1, \dots, x_n\in \X$ and of complex
numbers $\xi_1, \dots, \xi_n$ with $\sum_{i=1}^n \xi_i =0$
\begin{equation}\label{neg def}
\sum_{i,j=1}^n d(x_i,x_j) \xi_i \overline{\xi_j} \le 0\ .
\end{equation}
\end{lemma}
\begin{proof}
For every $x_1, \dots, x_n\in \X$ and complex numbers $\xi_1,\dots,
\xi_n$
\begin{equation}\label{eq1}
\sum_{i,j} K(x_i,x_j) \xi_i \overline{\xi_j} = \frac{1}{2} \Bigl(
\overline{a} \sum_i d(x_i, x_0)\xi_i + a \sum_j d(x_j,
x_0)\overline{\xi_j}- \sum_{i,j}d(x_i, x_j)\xi_i \overline{\xi_j}
\Bigr)
\end{equation}
where $a:= \sum_i \xi_i$. If $a=0$ then it is immediate that in (\ref{eq1}) the l.h.s. is $\ge 0$ if and only if the r.h.s. is $\le 0$. Otherwise set
$\xi_{0}:=-a$ so that $\sum_{i=0}^n \xi_i =0$. The following equality
\begin{equation}
\sum_{i,j=0}^{n} K(x_i,x_j) \xi_i \overline{\xi_j} =
\sum_{i,j=1}^n K(x_i,x_j) \xi_i \overline{\xi_j}
\end{equation}
is then easy to check, keeping in mind that $K(x_i,x_0)=K(x_0,
x_j)=0$, which finishes the proof.
\end{proof}
For a more general proof see \cite{GAN:60} p. $127$ in the proof of Lemma 2.5.
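For the reader who wishes to check Lemma \ref{fondamentale} numerically, here is a short Python sketch in the classical case $\X=\R$, $d(x,y)=|x-y|$, $x_0=0$ (the setting of \cite{schoenberg}, where positive definiteness is known); the function names, sample sizes and tolerances are our own choices. It verifies that the matrix $K(x_i,x_j)$ has no negative eigenvalue, while random quadratic forms of $d$ with zero-sum coefficients are nonpositive.

```python
import random
import numpy as np

# Illustrative sketch of the lemma for X = R, d(x, y) = |x - y|, x0 = 0:
# the Brownian kernel K(x, y) = (|x| + |y| - |x - y|)/2 should have no
# negative eigenvalue, while quadratic forms of d with zero-sum
# coefficients are <= 0 (indeed, xi' D xi = -2 xi' K xi when sum xi_i = 0).
rng = random.Random(0)
pts = [rng.uniform(-3.0, 3.0) for _ in range(12)]

K = np.array([[0.5 * (abs(x) + abs(y) - abs(x - y)) for y in pts] for x in pts])
D = np.array([[abs(x - y) for y in pts] for x in pts])

min_eig = float(np.linalg.eigvalsh(K).min())

worst = float("-inf")
for _ in range(500):
    xi = np.array([rng.gauss(0.0, 1.0) for _ in pts])
    xi -= xi.mean()                 # enforce sum_i xi_i = 0
    worst = max(worst, float(xi @ D @ xi))

print(min_eig, worst)
```

Both printed quantities should be zero up to rounding error, $K$ being positive semidefinite and $d$ restricted negative definite.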
If $\X$ is the homogeneous space of some topological group $G$, and
$d$ is a $G$-invariant distance, then (\ref{neg def}) is satisfied
if and only if for every choice of elements $g_1,\dots,g_n\in G$ and
of complex numbers $\xi_1,\dots, \xi_n$ with $\sum_{i=1}^n \xi_i =0$
\begin{equation}\label{neg def2}
\sum_{i,j=1}^n d(g_ig_j^{-1}x_0,x_0) \xi_i \overline{\xi_j} \le 0
\end{equation}
where $x_0\in \X$ is a fixed point. We shall say that
the function $g\in G \to d(gx_0, x_0)$ is restricted negative definite on $G$ if it satisfies (\ref{neg def2}).
In our case of interest $\X=G$ a compact (Lie) group and
$d$ is a bi-invariant distance as in $\S 3.1$.
The Peter-Weyl development for the class function $d(\cdot,e)$ on $G$ (see Theorem \ref{classExp}) is
\begin{equation}\label{PW dev}
d(g,e)= \sum_{\ell \in \widehat G} \alpha_\ell \chi_\ell(g)\ ,
\end{equation}
where $\widehat G$ denotes the family of equivalence classes of irreducible representations of $G$ and $\chi_\ell$ the character of the
$\ell$-th irreducible representation of $G$.
\begin{remark}\label{coeff neg}\rm
A function $\phi$ with a development as in (\ref{PW dev}) is
restricted negative definite if and only if $\alpha_\ell \le 0$ but
for the trivial representation.
Actually note first that, by standard approximation arguments,
$\phi$ is restricted negative definite if and only if for every
continuous function $f:G\to \C$ with $0$-mean (i.e. orthogonal to
the constants)
\begin{equation}\label{neg def measure}
\int_G\int_G \phi(gh^{-1}) f(g)\overline{f(h)}\,dg\, dh \le 0
\end{equation}
$dg$ denoting the Haar measure of $G$.
Choosing $f=\chi_\ell$ in the l.h.s. of
(\ref{neg def measure}) and denoting $d_\ell$ the dimension of the corresponding representation, a straightforward computation gives
\begin{equation}\label{semplice}
\int_G\int_G \phi(gh^{-1}) \chi_\ell(g)\overline{\chi_\ell(h)}\,dg\,
dh = \frac{\alpha_\ell}{d_\ell}
\end{equation}
so that if $\phi$ is restricted negative definite, then necessarily
$\alpha_\ell\le 0$.
Conversely, if $\alpha_\ell \le 0$
but for the
trivial representation, then $\phi$ is restricted negative definite,
as the characters $\chi_\ell$'s are positive definite and orthogonal
to the constants.
\end{remark}
\section{$SU(2)$}\label{sud}
The special unitary group $SU(2)$ consists of the complex unitary $2\times 2$-matrices $g$ such that
$\det(g)=1$.
Every $g\in SU(2)$ has the form
\begin{equation}\label{matrice}
g= \begin{pmatrix} a & b \\
-\overline{b} & \overline{a}
\end{pmatrix}, \qquad a,b\in \C,\, |a|^2 + |b|^2 = 1\ .
\end{equation}
If $a=a_1 + ia_2$ and $b=b_1 + ib_2$, then the map
\begin{align}\label{omeomorfismo}
\Phi(g)=
(a_1, a_2, b_1, b_2)
\end{align}
is a {homeomorphism} (see \cite{faraut}, \cite{sugiura} e.g.) between $SU(2)$ and the unit sphere $\cS^3$
of $\R^4$. Moreover the right translation
\begin{equation*}
R_g : h\to hg, \qquad h,g\in SU(2)
\end{equation*}
of $SU(2)$ is a rotation (an element of $SO(4)$) of $\cS^3$ (identified with $SU(2)$).
The homeomorphism (\ref{omeomorfismo}) preserves the invariant measure, i.e., if $dg$ is the normalized Haar measure
on $SU(2)$, then $\Phi(dg)$ is the normalized Lebesgue measure on $\cS^3$.
As the $3$-dimensional polar coordinates on $\cS^3$ are
\begin{equation}\label{polar coord}
\begin{array}{l}
a_1=\cos \theta,\cr
a_2= \sin \theta\,\cos \varphi,\cr
b_1= \sin \theta\,\sin \varphi\,\cos \psi,\cr
b_2= \sin \theta\,\sin \varphi\,\sin \psi\ ,
\end{array}
\end{equation}
$(\theta, \varphi, \psi) \in [0,\pi] \times [0,\pi]\times [0,2\pi]$, the normalized Haar integral of $SU(2)$ for an integrable function $f$ is
\begin{equation}\label{int}
\int_{SU(2)} f(g)\,dg = \frac{1}{2\pi^2}\int_0^\pi\sin \varphi\, d\varphi\,
\int_0^\pi \sin^2 \theta\,d\theta\, \int_0^{2\pi}
f(\theta, \varphi, \psi)\, d\psi\ .
\end{equation}
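As a quick sanity check of the normalization in \paref{int}, one can verify numerically that the constant $1/(2\pi^2)$ gives total mass one; the sketch below (function name and grid size are ours) uses a midpoint rule.

```python
import math

# Midpoint-rule check that the Haar-integral normalization 1/(2 pi^2) of
# SU(2) gives total mass 1: for psi-independent integrands the psi-integral
# contributes a factor 2 pi, and the remaining weight is sin^2(theta) sin(phi).
def haar_mass(n=400):
    h = math.pi / n
    total = 0.0
    for i in range(n):
        theta = (i + 0.5) * h
        for j in range(n):
            phi = (j + 0.5) * h
            total += math.sin(theta) ** 2 * math.sin(phi)
    return total * h * h * 2.0 * math.pi / (2.0 * math.pi ** 2)

mass = haar_mass()
print(mass)  # close to 1
```

Indeed $\int_0^\pi\sin^2\theta\,d\theta=\pi/2$ and $\int_0^\pi\sin\varphi\,d\varphi=2$, so the product is $2\pi^2$.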
The bi-invariant Riemannian metric on $SU(2)$ is necessarily proportional to the negative
Killing form of its Lie algebra ${su(2)}$ (the real vector space of
$2\times 2$ anti-hermitian complex matrices).
We consider the bi-invariant metric corresponding to the $Ad$-invariant inner product on ${su(2)}$
\begin{equation*}
\langle X, Y \rangle= -\frac12\,{\tr(XY)},\qquad X,Y \in {su(2)}\ .
\end{equation*}
Therefore as an orthonormal basis of ${su(2)}$ we can consider the matrices
$$
\dlines{
X_1= \begin{pmatrix} 0 & 1 \\
-1 & 0
\end{pmatrix}, \quad
X_2= \begin{pmatrix} 0 & i \\
i & 0
\end{pmatrix}, \quad
X_3 = \begin{pmatrix} i & 0\\
0 & -i
\end{pmatrix}
}$$
The homeomorphism (\ref{omeomorfismo}) is actually an isometry between $SU(2)$ endowed with this
distance and $\cS^3$. Hence the restricted negative definiteness of the kernel $d$ on $SU(2)$ is an immediate consequence of this property on $\cS^3$ which is known
to be true as mentioned in the introduction (\cite{GAN:60}, \cite{levy}, \cite{TKU}).
In order to develop a comparison with $SO(3)$, we shall give a different proof of this fact in \S\ref{s-final}.
\section{$SO(n)$}\label{sot}
We first investigate the case $n=3$. The group
$SO(3)$ can also be realized as a quotient of $SU(2)$. Actually
the adjoint representation $Ad$ of $SU(2)$
is a surjective morphism from $SU(2)$ onto $SO(3)$ with kernel $\lbrace \pm e \rbrace$ (see \cite{faraut} e.g.).
Hence the well known result
\begin{equation}\label{iso}
SO(3) \cong {SU(2)}/{\lbrace \pm e \rbrace}\ .
\end{equation}
Let us explicitly recall this morphism:
if $a=a_1 +ia_2, b=b_1 + ib_2$ with $|a|^2 + |b|^2=1$ and
$$
\widetilde g=\begin{pmatrix} a & b \\
-\overline{b} & \overline{a}
\end{pmatrix}$$
then the orthogonal matrix $Ad(\widetilde g)$ is given by
\begin{equation}\label{matr}
g=\begin{pmatrix}
a_1^2-a_2^2-(b_1^2-b_2^2)&-2a_1a_2-2b_1b_2&-2(a_1b_1-a_2b_2)\cr
2a_1a_2-2b_1b_2&(a_1^2-a_2^2)+(b_1^2-b_2^2)&-2(a_1b_2+a_2b_1)\cr
2(a_1b_1+a_2b_2)&-2(-a_1b_2+a_2b_1)&|a|^2-|b|^2
\end{pmatrix}
\end{equation}
The isomorphism in (\ref{iso}) might suggest that the
positive definiteness of the Brownian kernel on $SU(2)$ implies
a similar result for $SO(3)$.
This is not true and actually it turns out that the distance $(g,h) \to d(g,h)$ on $SO(3)$
induced by its bi-invariant Riemannian metric \emph{is not
a restricted negative definite kernel} (see Lemma \ref{fondamentale}).
As for $SU(2)$, the bi-invariant Riemannian metric on $SO(3)$ is proportional to the negative Killing form of its Lie algebra
${ so(3)}$ (the $3\times 3$ real antisymmetric matrices).
We shall consider the $Ad$-invariant inner product on ${so(3)}$ defined as
\begin{equation*}
\langle A,B \rangle = -\frac12\,{\tr(AB)}\ , \qquad A,B\in {so(3)}\ .
\end{equation*}
An orthonormal basis for ${so(3)}$ is therefore given by
the matrices
$$\dlines{
A_1= \begin{pmatrix} 0 & 0 & 0\\
0 & 0 & -1\\
0 & 1 & 0
\end{pmatrix}, \quad
A_2= \begin{pmatrix} 0 & 0 & 1 \\
0 & 0 & 0\\
-1 & 0 & 0
\end{pmatrix}, \quad
A_3 = \begin{pmatrix} 0 & -1 & 0\\
1 & 0 & 0\\
0 & 0 & 0
\end{pmatrix}
}$$
Similarly to the case of $SU(2)$, it is easy to compute the distance
from $g\in SO(3)$ to the identity. Actually $g$ is conjugate to a matrix of the form
\begin{equation*}
\Delta(t)= \begin{pmatrix} \cos t & \sin t & 0 \\
-\sin t & \cos t & 0\\
0 & 0 & 1
\end{pmatrix} = \exp(-tA_3)
\end{equation*}
where $t\in [0,\pi]$ is the {rotation angle} of $g$.
Therefore if $d$ still denotes the distance induced by the bi-invariant metric,
\begin{equation*}
d(g,e) = d( \Delta(t), e ) = t
\end{equation*}
i.e. the distance from $g$ to $e$ is the rotation angle of $g$.
Let us denote $\lbrace \chi_\ell \rbrace_{\ell \ge 0}$ the set of characters for $SO(3)$.
It is easy to compute the Peter-Weyl development in (\ref{PW dev}) for $d(\cdot, e)$ as the
characters $\chi_\ell$ are also simple functions of the rotation angle.
More precisely, if $t$ is the rotation angle of $g$ (see \cite{dogiocam} e.g.),
\begin{equation*}
\chi_\ell(g)= \frac{\sin\frac{(2\ell + 1)t}{2}}{\sin\frac{t}{2}}=1 + 2\sum_{m=1}^\ell \cos(mt)\ .
\end{equation*}
We shall prove that the coefficient
$$
\alpha_\ell=\int_{SO(3)} d(g,e)\chi_\ell(g)\, dg
$$
is positive for some $\ell\ge 1$. As both $d(\cdot,e)$ and $\chi_\ell$ are functions of the rotation angle $t$,
we have
$$
\alpha_\ell=\int_0^\pi t\Bigl( 1 + 2\sum_{j=1}^\ell \cos(jt)\Bigr)\, p_T(t)\, dt
$$
where $p_T$ is the density of $t=t(g)$, considered as a r.v. on the probability space $(SO(3),dg)$. The next statements are devoted to the computation of the density $p_T$. This is certainly well
known but we were unable to find a reference in the literature. We first compute the density of the trace of $g$.
\begin{prop}
The distribution of the trace of a matrix in $SO(3)$ with respect to the normalized Haar measure is given by the density
\begin{equation}\label{trace3}
f(y)=\frac 1{2\pi}\,(3-y)^{1/2}(y+1)^{-1/2}1_{[-1,3]}(y)\ .
\end{equation}
\end{prop}
\begin{proof}
The trace of the matrix \paref{matr} is equal to
$$
\tr(g)=3a_1^2-a_2^2-b_1^2-b_2^2\ .
$$
Under the normalized Haar measure of $SU(2)$ the vector $(a_1,a_2,b_1,b_2)$ is uniformly distributed on the sphere $\cS^3$.
Recall the normalized Haar integral (\ref{int})
so that, taking the corresponding marginal, $\th$ has density
\begin{equation}\label{denth}
f_1(\th)=\frac 2\pi\,\sin^2 \th\ .
\end{equation}
Now
$$
\dlines{
3a_1^2-a_2^2-b_1^2-b_2^2=4\cos^2\th-1\ .\cr
}
$$
Let us first compute the density of $Y=\cos^2 X$, where
$X$ is distributed according to the density \paref{denth}. This is elementary as
$$
\dlines{
F_Y(t)=\P(\cos^2 X\le t)=
\P(\arccos(\sqrt{t})\le X\le \arccos(-\sqrt{t}))=\cr
=\frac 2\pi\!\int\limits_{\arccos(\sqrt{t})}^{\arccos(-\sqrt{t})}
\sin^2(\th)\, d\th\ .\cr
}
$$
Taking the derivative it is easily found that the density of $Y$ is, for $0<t<1$
$$
\dlines{
F'_Y(t)=\frac 2\pi\,(1-t)^{1/2}t^{-1/2}\ .\cr
}
$$
By an elementary change of
variable the distribution of the trace $4Y-1$ is therefore given by \paref{trace3}.
\end{proof}
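Since the trace is the character $\chi_1$, the density \paref{trace3} must have mean $0$ and second moment $1$ (characters are orthonormal and orthogonal to the constants). The following Monte Carlo sketch (sample size, seed and tolerances are our own choices) samples the uniform distribution on $\cS^3$ through normalized Gaussians and is consistent with this.

```python
import math, random

# Monte Carlo sketch: (a1, a2, b1, b2) uniform on S^3 (normalized Gaussians),
# tr(g) = 3 a1^2 - a2^2 - b1^2 - b2^2 = 4 cos^2(theta) - 1. Since tr = chi_1
# is a nontrivial character, its Haar mean is 0 and its second moment is 1.
random.seed(0)
n = 100_000
m1 = m2 = 0.0
for _ in range(n):
    v = [random.gauss(0.0, 1.0) for _ in range(4)]
    r2 = sum(x * x for x in v)
    a1s = v[0] * v[0] / r2          # a1^2 = cos^2(theta)
    t = 4.0 * a1s - 1.0
    m1 += t
    m2 += t * t
m1 /= n
m2 /= n
print(m1, m2)
```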
\begin{cor}
The distribution of the rotation angle of a matrix in $SO(3)$ is
\begin{equation*}
p_T(t)=\frac1\pi\,(1 - \cos t)\,1_{[0,\pi]}(t)\ .
\end{equation*}
\end{cor}
\begin{proof}
It suffices to remark that if $t$ is the rotation angle of $g$, then its trace is equal to $2\cos t + 1$. $p_T$ is therefore the distribution of $W=\arccos ( \frac{Y-1}{2})$, $Y$ being distributed as \paref{trace3}. The elementary details are left to the reader.
\end{proof}
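The rotation-angle density can be checked in the same spirit: under $p_T$ one computes $\E[t]=\pi/2+2/\pi$ and $\E[\cos t]=-1/2$, and a Monte Carlo simulation (an illustrative sketch; seed, sample size and tolerances are ours) agrees.

```python
import math, random

# Monte Carlo sketch of p_T(t) = (1 - cos t)/pi on [0, pi]: sample g
# Haar-uniform via a uniform point on S^3 and use
# cos t = (tr(g) - 1)/2 = 2 a1^2 - 1.
# Under p_T: E[t] = pi/2 + 2/pi and E[cos t] = -1/2.
random.seed(1)
n = 100_000
s_t = s_c = 0.0
for _ in range(n):
    v = [random.gauss(0.0, 1.0) for _ in range(4)]
    a1s = v[0] * v[0] / sum(x * x for x in v)
    c = 2.0 * a1s - 1.0
    s_t += math.acos(max(-1.0, min(1.0, c)))
    s_c += c
mean_t, mean_c = s_t / n, s_c / n
print(mean_t, mean_c)
```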
Now it is easy to compute the Fourier development of the function $d(\cdot, e)$.
\begin{prop}\label{kernel MB su SO(3)}
The kernel $d$ on $SO(3)$ is not restricted negative definite.
\end{prop}
\begin{proof}
It is enough to show that in the Fourier development
$$
d(g,e)=\sum_{\ell \ge 0} \alpha_\ell \chi_\ell(g)
$$
$\alpha_\ell > 0$ for some $\ell \ge 1$ (see Remark \ref{coeff neg}).
We have
$$
\dlines{ \alpha_\ell =\int_{SO(3)} d(g,e) \chi_\ell(g) dg = \frac 1\pi
\int_0^\pi t \Bigl( 1 + 2\sum_{m=1}^\ell \cos(mt) \Bigr) (1-\cos t)\, dt =\cr
=\frac{1}{\pi}\underbrace{\int_0^\pi t (1-\cos t)\, dt}_{:= I_1} + \frac{2}{\pi}\sum_{m=1}^{\ell} \underbrace{\int_0^\pi t \cos(mt)\, dt}_{:= I_2}
- \frac{2}{\pi}\sum_{m=1}^{\ell} \underbrace{\int_0^\pi t \cos(mt)\cos t\, dt}_{:= I_3}\ .\cr
}$$
Now integration by parts gives
$$
I_1 = \frac{\pi^2}{2} + 2,\quad
I_2=
\frac{ (-1)^m -1}{m^2}\ \raise2pt\hbox{,}
$$
whereas, if $m\ne 1$, we have
$$
\dlines{I_3=\int_0^\pi t \cos(mt)\cos t\, dt
= -\frac{ m^2 +1}{ (m^2 -1)^2}\, ( (-1)^m +1)}
$$
and for $m=1$,
$$
I_3=\int_0^\pi t\cos^2 t\, dt=\frac {\pi^2}4\ .
$$
Putting things together we find
$$
\alpha_\ell =\frac{2}{\pi} \Bigl( 1 + \sum_{m=1}^{\ell} \frac{ (-1)^m -1}{m^2} + \sum_{m=2}^{\ell} \frac{ m^2 +1}{ (m^2 -1)^2} ( (-1)^m +1) \Bigr)\ .
$$
If $\ell=2$, for instance, we find $\alpha_2=\frac{2}{9\pi}>0$, but it is easy to see
that $\alpha_\ell>0$ for every $\ell$ even.
\end{proof}
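The computation above is easy to cross-check numerically; the following sketch (function names and grid size are ours) compares a midpoint quadrature of $\alpha_\ell=\int_0^\pi t\,\chi_\ell(t)\,p_T(t)\,dt$ with the closed expression obtained in the proof.

```python
import math

# Cross-check of the coefficients alpha_ell on SO(3): midpoint quadrature of
# (1/pi) int_0^pi t chi_ell(t) (1 - cos t) dt, where
# chi_ell(t) = 1 + 2 sum_{m=1}^ell cos(m t), against the closed formula.
def alpha_quad(ell, n=20_000):
    h = math.pi / n
    s = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        chi = 1.0 + 2.0 * sum(math.cos(m * t) for m in range(1, ell + 1))
        s += t * chi * (1.0 - math.cos(t))
    return s * h / math.pi

def alpha_closed(ell):
    s = 1.0 + sum(((-1) ** m - 1) / m ** 2 for m in range(1, ell + 1))
    s += sum((m * m + 1) / (m * m - 1) ** 2 * ((-1) ** m + 1)
             for m in range(2, ell + 1))
    return 2.0 * s / math.pi

print(alpha_closed(2), alpha_quad(2))  # both 2/(9*pi), about 0.0707
```

In particular $\alpha_1<0$ while $\alpha_2, \alpha_4>0$, confirming that the distance is not restricted negative definite.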
Consider now the case $n>3$. $SO(n)$ (resp. $SU(n)$) contains a closed subgroup $H$ that is isomorphic to $SO(3)$ and
the restriction to $H$ of any bi-invariant distance $d$ on $SO(n)$ (resp. $SU(n)$) is a bi-invariant distance $\widetilde d$ on $SO(3)$.
By Proposition \ref{kernel MB su SO(3)}, $\widetilde d$ is not
restricted negative definite, therefore there exist $g_1, g_2, \dots, g_m\in H$,
$\xi_1, \xi_2, \dots, \xi_m \in \R$ with $\sum_{i=1}^m \xi_i =0$ such that
\begin{equation}
\sum_{i,j} d(g_i, g_j) \xi_i \xi_j =\sum_{i,j} \widetilde d(g_i, g_j) \xi_i \xi_j > 0\ .
\end{equation}
We have therefore
\begin{cor}\label{kernel MB su SO(n)}
Any bi-invariant distance $d$ on $SO(n)$ and $SU(n)$, $n\ge 3$ is not
a restricted negative definite kernel.
\end{cor}
Remark that the same argument applies to other compact groups.
Moreover the bi-invariant Riemannian metric on $SO(4)$ is not unique, meaning that it is not necessarily
proportional to the negative Killing form of $so(4)$.
In this case Corollary \ref{kernel MB su SO(n)} states that no such
bi-invariant distance can be restricted negative definite.
\section{Final remarks}\label{s-final}
We were intrigued by the different behavior of the invariant distances of $SU(2)$ and $SO(3)$, although these groups
are locally isometric, and decided to compute also for $SU(2)$ the development
\begin{equation}\label{sviluppo}
d(g,e) = \sum_{\ell} \alpha_\ell \chi_\ell(g)\ .
\end{equation}
This is not difficult as, denoting by $t$ the distance of $g$ from $e$, the characters of $SU(2)$ are
$$
\chi_\ell(g)=\frac{\sin((\ell+1)t)}{\sin t},\quad t\not=k\pi
$$
and $\chi_\ell(e)=\ell+1$ if $t=0$, $\chi_\ell(-e)=(-1)^\ell(\ell+1)$ if $t=\pi$. Then it is elementary to compute, for $\ell>0$,
$$
\alpha_\ell =\frac 2\pi\,\int_0^\pi t\sin((\ell+1)t)\sin t\, dt=\begin{cases}
-\frac 8\pi\,\frac{\ell+1}{\ell^2(\ell+2)^2}&\ell\mbox{ odd}\cr
0&\ell\mbox{ even}
\end{cases}
$$
thus confirming the restricted negative definiteness of $d$ (see Remark \ref{coeff neg}).
Remark also that the coefficients corresponding to the even numbered
representations, which are also representations of $SO(3)$, vanish here.
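These coefficients too can be verified numerically: since the distance $t$ of $g$ from $e$ has density $\frac2\pi\sin^2 t$ on $[0,\pi]$ (the marginal \paref{denth}), $\alpha_\ell=\frac2\pi\int_0^\pi t\sin((\ell+1)t)\sin t\,dt$, and a simple quadrature (function names and grid size are ours) reproduces the closed form.

```python
import math

# Quadrature check of the SU(2) coefficients: t has density (2/pi) sin^2 t
# and chi_ell(t) = sin((ell+1) t)/sin t, so
#   alpha_ell = (2/pi) int_0^pi t sin((ell+1) t) sin t dt,
# which should equal -8 (ell+1)/(pi ell^2 (ell+2)^2) for ell odd, 0 for ell even.
def alpha_su2(ell, n=20_000):
    h = math.pi / n
    s = sum((i + 0.5) * h
            * math.sin((ell + 1) * (i + 0.5) * h)
            * math.sin((i + 0.5) * h)
            for i in range(n))
    return 2.0 * s * h / math.pi

def alpha_su2_closed(ell):
    if ell % 2 == 0:
        return 0.0
    return -8.0 * (ell + 1) / (math.pi * ell ** 2 * (ell + 2) ** 2)

for ell in range(1, 7):
    print(ell, alpha_su2(ell), alpha_su2_closed(ell))
```

All nontrivial coefficients are $\le 0$, in agreement with the restricted negative definiteness of $d$ on $SU(2)$.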
\chapter*{Part 2 \\High-energy Gaussian eigenfunctions}
\addcontentsline{toc}{chapter}{Part 2: High-energy Gaussian eigenfunctions}
\chapter{Background: Fourth-Moment phenomenon and Gaussian eigenfunctions}\label{background}
As made clear by the title, this chapter is first devoted to the so-called Fourth Moment phenomenon. Main results in this area are
summarized in the recent monograph \cite{noupebook}:
a
beautiful connection has been established between Malliavin calculus and Stein's method for normal approximations
to prove Berry-Esseen bounds and quantitative Central Limit
Theorems
for functionals of a Gaussian random field.
Finally we recall definitions and fix some notation for Gaussian eigenfunctions on the $d$-dimensional unit sphere $\mathbb S^d$ ($d\ge 2$), whose properties we will investigate in depth in the sequel of this work.
\section{Fourth-moment theorems}
\subsection{Isonormal Gaussian fields}
Let $H$ be a (real) separable Hilbert space with inner product $\langle \cdot, \cdot \rangle_H$ and $(\Omega, \F, \P)$ some probability space.
\begin{definition}
The isonormal Gaussian field $T$ on $H$ is a centered Gaussian random field $(T(h))_{h\in H}$ whose covariance kernel is given by
\begin{equation}
\Cov(T(h), T(h')) = \langle h, h' \rangle_H\ , \qquad h,h'\in H\ .
\end{equation}
\end{definition}
Consider, from now on, the case $H=L^{2}(X,\mathcal{X},\mu )$, the space of
square integrable functions on the measure space
$(X,\mathcal{X},\mu )$, where $X$ is a
Polish space, $\mathcal{X}$ is the $\sigma $-field on $X$ and $\mu $ is a
positive, $\sigma $-finite and non-atomic measure on $(X,\mathcal{X})$.
As usual the inner product is given by
$\langle f,g\rangle _{H}=\int_{X}f(x)g(x)\,d\mu (x)$.
Let us recall the
construction of an isonormal Gaussian field on $H$. Consider a
(real) Gaussian measure over $(X, \mathcal X)$, i.e. a centered Gaussian family $W$
\begin{equation*}
W=\{W(A):A\in \mathcal{X},\mu (A)<+\infty \}
\end{equation*}
such that for $A,B\in \mathcal{X}$ of $\mu$-finite measure, we have
\begin{equation*}
\E[W(A)W(B)]=\mu(A\cap B)\ .
\end{equation*}
We define a random field $T=(T(f))_{f\in H}$ on $H$ as follows. For each $f\in H$,
let
\begin{equation}
T(f)=\int_{X}f(x)\,dW(x)\ \label{isonormal}
\end{equation}
be the Wiener-It\^o integral of $f$ with respect to $W$. The random field $T$ is the
isonormal Gaussian field on $H$: indeed it is centered Gaussian and by construction
\begin{equation*}
\mathrm{Cov\,}(T(f),T(g))=\langle f,g\rangle _{H}\ .
\end{equation*}
\subsection{Wiener chaos and contractions}
Let us recall now the notion of Wiener chaos. Define the space of
constants $C_{0}:=\mathbb{R}\subseteq L^2(\P)$, and for $q\geq 1$,
let $C_{q}$ be the closure in $L^{2}(\P):=L^2(\Omega, \F, \P)$ of the linear subspace
generated by random variables of the form
\begin{equation*}
H_{q}(T(f))\ ,\qquad f\in H,\ \Vert f\Vert _{H}=1\ ,
\end{equation*}
where $H_{q}$ denotes the $q$-th Hermite polynomial, i.e.
\begin{equation}\label{hermite}
H_q (t) := (-1)^q \phi^{-1}(t) \frac{d^q}{d t^q} \phi(t)\ ,\quad t\in \R\ ,
\end{equation}
$\phi$ being the density function of a standard Gaussian r.v. $Z\sim \mathcal N(0,1)$. $C_{q}$ is
called the $q$-th Wiener chaos.
The following, well-known property is very important: let $Z_{1},Z_{2}\sim \mathcal{N}(0,1)$ be jointly
Gaussian; then, for all $q_{1},q_{2}\geq 0$
\begin{equation} \label{hermite orto}
\E[H_{q_{1}}(Z_{1})H_{q_{2}}(Z_{2})]=q_{1}!\,\E[Z_{1}Z_{2}]^{q_{1}}\,\delta _{q_{2}}^{q_{1}}\ .
\end{equation}
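Identity \paref{hermite orto} can be checked numerically, writing $Z_2=\rho Z_1+\sqrt{1-\rho^2}\,Y$ with $Y$ independent of $Z_1$ and integrating with Gauss-Hermite nodes; the sketch below (helper name and node count are our own choices) uses NumPy's probabilists' Hermite module.

```python
import numpy as np
from numpy.polynomial import hermite_e as He

# Check of E[H_{q1}(Z1) H_{q2}(Z2)] = q1! * rho^{q1} * delta_{q1 q2} for
# standard jointly Gaussian Z1, Z2 with correlation rho, using probabilists'
# Hermite polynomials (hermite_e) and Gauss-Hermite nodes for exp(-x^2/2).
x, wx = He.hermegauss(60)
w = wx / np.sqrt(2.0 * np.pi)       # weights of the standard N(0,1)

def cross_moment(q1, q2, rho):
    X, Y = np.meshgrid(x, x, indexing="ij")
    Z2 = rho * X + np.sqrt(1.0 - rho ** 2) * Y
    vals = He.hermeval(X, [0] * q1 + [1]) * He.hermeval(Z2, [0] * q2 + [1])
    return float(np.einsum("i,j,ij->", w, w, vals))

rho = 0.7
print(cross_moment(3, 3, rho))      # = 3! * rho^3
print(cross_moment(2, 3, rho))      # = 0
```

The quadrature is exact here, the integrands being polynomials of degree far below twice the number of nodes.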
\begin{theorem}
The Wiener-It\^o chaos expansion holds
\begin{equation*}
L^2(\P)=\bigoplus_{q=0}^{+\infty }C_{q}\ ,
\end{equation*}
the above sum being orthogonal from (\ref{hermite orto}).
Equivalently, each random variable $F\in L^2(\P)$ admits a unique
decomposition in the $L^2(\P)$-sense of the form
\begin{equation}
F=\sum_{q=0}^{\infty }J_{q}(F)\ , \label{chaos exp}
\end{equation}
where $J_{q}:L^2(\P){\goto} C_{q}$ is the orthogonal
projection operator onto the $q$-th Wiener chaos. Remark that $J_{0}(F)=\E[F]$.
\end{theorem}
Often we will use the symbols $\text{proj}(F| C_q)$ or $F_q$ instead of $J_q(F)$.
We denote by $H^{\otimes q}$ and $H^{\odot q}$ the $q$-th tensor product and
the $q$-th symmetric tensor product of $H$ respectively. Therefore
$H^{\otimes q} = L^2(X^{q}, \mathcal{X}^{q},\mu ^{q})$ and $H^{\odot
q}=L^2_s(X^{q}, \mathcal{X}^{q},\mu ^{q})$, where by $L^2_s$ we mean the symmetric and
square integrable functions w.r.t. $\mu^q$. Note that for $(x_1,x_2,\dots,
x_q)\in X^q$ and $f\in H$, we have
\begin{equation*}
f^{\otimes q}(x_1,x_2,\dots,x_q)=f(x_1)f(x_2)\dots f(x_q)\ .
\end{equation*}
Now for $q\ge 1$, let us define the map $I_{q}$ as
\begin{equation}
I_{q}(f^{\otimes q}):=\,H_{q}(T(f))\ ,\qquad f\in H\ , \label{isometria}
\end{equation}
which can be extended to a linear isometry between $H^{\odot q}$ equipped
with the modified norm $\sqrt{q!}\,\Vert \cdot \Vert _{H^{\odot q}}$ and the
$q$-th Wiener chaos $C_{q}$. Moreover for $q=0$, set $I_{0}(c)=c\in \mathbb{R}$. Hence \eqref{chaos exp} becomes
\begin{equation}
F=\sum_{q=0}^{\infty }I_{q}(f_{q})\ , \label{chaos exp2}
\end{equation}
where the kernels $f_{q}$, $q\ge 0$, are uniquely determined, $f_{0}=\E[F]$ and, for $q\ge 1$, $f_{q}\in
H^{\odot q}$.
In our setting, it is well known that for $h\in H^{\odot q}$, $I_q(h)$
coincides with the multiple Wiener-It\^o integral of order $q$ of $h$ with respect to the
Gaussian measure $W$, i.e.
\begin{equation} \label{int multiplo}
I_q(h)= \int_{X^q} h(x_1,x_2,\dots, x_q)\,dW(x_1)\, dW(x_2)\dots dW(x_q)
\end{equation}
and, loosely speaking, $F$ in (\ref{chaos exp2}) can be seen as a series of
(multiple) stochastic integrals.
For every $p,q\ge 1$, $f\in H^{\otimes p}, g\in H^{\otimes q}$ and
$r=1,2,\dots, p\wedge q$, the so-called \emph{contraction} of $f$ and $g$ of
order $r$ is the element $f\otimes _{r}g\in H^{\otimes p+q-2r}$ given by
\begin{equation}\label{contrazione}
\begin{split}
f\otimes _{r}g\, &(x_{1},\dots,x_{p+q-2r})=\\
=\int_{X^{r}}f(x_{1},\dots,x_{p-r},y_{1},\dots,&y_{r})
g(x_{p-r+1},\dots,x_{p+q-2r},y_{1},\dots,y_{r})\,d\mu(\underline{y})\text{ ,}
\end{split}
\end{equation}
where we set $d\mu(\underline{y}):=d\mu(y_{1})\dots d\mu
(y_{r})$.
For $p=q=r$, we have $f\otimes _{r}g = \langle f,g \rangle_{H^{\otimes r}}$
and for $r=0$, $f\otimes _{0}g = f\otimes g$. Note that
$f\otimes_r g$ is not necessarily symmetric; we denote by $f\widetilde \otimes
_{r}g$ its canonical symmetrization.
The following
multiplication formula is well-known: for $p,q=1,2,\dots$, $f\in H^{\odot
p}$, $g\in H^{\odot q}$, we have
\begin{equation*}
I_p(f)I_q(g)=\sum_{r=0}^{p\wedge q} r! {\binom{p }{r}} {\binom{q }{r}}\,
I_{p+q-2r}(f\widetilde \otimes _{r}g)\ .
\end{equation*}
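In the special case $f=g$ with $\Vert f\Vert_H=1$, every contraction reduces to a power of $\langle f,f\rangle=1$ and $I_p(f^{\otimes p})=H_p(T(f))$, so the multiplication formula collapses to the classical Hermite identity $H_p(z)H_q(z)=\sum_{r=0}^{p\wedge q} r!\binom{p}{r}\binom{q}{r}H_{p+q-2r}(z)$, which can be tested directly (a sketch with our own function names).

```python
import math

# Special case of the multiplication formula with f = g, ||f||_H = 1:
#   H_p(z) H_q(z) = sum_{r=0}^{min(p,q)} r! C(p,r) C(q,r) H_{p+q-2r}(z),
# where H_n are the probabilists' Hermite polynomials, computed via the
# recurrence H_{k+1}(z) = z H_k(z) - k H_{k-1}(z).
def H(n, z):
    h0, h1 = 1.0, z
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, z * h1 - k * h0
    return h1

def gap(p, q, z):
    lhs = H(p, z) * H(q, z)
    rhs = sum(math.factorial(r) * math.comb(p, r) * math.comb(q, r)
              * H(p + q - 2 * r, z)
              for r in range(min(p, q) + 1))
    return abs(lhs - rhs)

err = max(gap(p, q, z) for p in range(5) for q in range(5)
          for z in (-1.3, 0.4, 2.0))
print(err)  # ~ 0 (up to rounding)
```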
\subsection{Some language of Malliavin calculus}
Let $\mathcal S$ be the set of all cylindrical r.v.'s of the type
$F=f(T(h_1), \dots , T(h_m))$, where $m\ge 1$, $f:\R^m \to \R$ is an infinitely differentiable
function with compact support and $h_i\in H$.
The Malliavin derivative $DF$ (or $D^1 F$) of $F$ w.r.t. $T$ is the element of $L^2(\Omega, H)$ defined as
$$
DF = \sum_i \frac{\partial f}{\partial x_i }(T(h_1), \dots , T(h_m)) h_i\ .
$$
We can define, by iteration, the $r$-th derivative $D^r F$ which is an element of $L^2(\Omega, H^{\odot r})$ for every $r\ge 2$. Recall that
$\mathcal S$ is dense in $L^q(\P)$ for each $q\ge 1$.
For $r\ge 1$ and $q\ge 1$,
let us denote by $\mathbb{D}^{r,q}$ the closure of $\mathcal S$ w.r.t.
the norm $\| \cdot \|_{\mathbb{D}^{r,q}}$ defined by the relation
\begin{equation*}
\Vert F\Vert _{\mathbb{D}^{r,q}}:=\left( \E[|F|^{q}]+\dots +
\E[\Vert D^{r}F\Vert _{H^{\odot r}}^{q}]\right) ^{\frac{1}{q}}\ .
\end{equation*}
For $q,r\ge 1$, the $r$-th Malliavin derivative of the random
variable $F=I_{q}(f)\in C_{q}$ where $f\in H^{\odot q}$, is given by
\begin{equation}
D^{r}F=\frac{q!}{(q-r)!}I_{q-r}(f)\text{ ,} \label{marra2}
\end{equation}
for $r\leq q$, and $D^{r}F=0$ for $r>q$.
It is possible to show that if we consider the chaotic representation (\ref{chaos exp2}), then $F\in \mathbb{D}^{r,2}$ if and only if
$$
\sum_{q=r}^{+\infty} q^r q!\, \| f_q\|^2_{H^{\odot q}} < +\infty
$$
and in this case
$$
D^r F = \sum_{q=r}^{+\infty}\frac{q!}{(q-r)!}I_{q-r}(f_q)\ .
$$
We need to introduce also the generator of the Ornstein-Uhlenbeck semigroup,
defined as
\begin{equation*}
L:=-\sum_{q=1}^{\infty }q\,J_{q}\ ,
\end{equation*}
where $J_{q}$ is the orthogonal projection operator on $C_{q}$, as in
(\ref{chaos exp}). The domain of $L$ consists of $F\in L^2(\P)$ such that
\begin{equation*}
\sum_{q=1}^{+\infty }q^{2}\Vert J_{q}(F)\Vert _{L^2(\P)}^{2}<+\infty
\ .
\end{equation*}
The pseudo-inverse operator of $L$ is defined as
\begin{equation*}
L^{-1}=-\sum_{q=1}^{\infty }\frac{1}{q}J_{q}
\end{equation*}
and satisfies for each $F\in L^2(\P)$
\begin{equation*}
LL^{-1}F=F-\E[F]\text{ ,}
\end{equation*}
an equality that justifies its name.
\subsection{Main theorems}
We will need
the following definition throughout the rest of this thesis.
\begin{defn} {\rm Denote by $\mathscr{P}$ the collection of all probability measures on $\R$, and let $d : \mathscr{P}\times \mathscr{P}\to \R$ be a distance on $\mathscr{P}$. We say that $d$ {\it metrizes weak convergence on} $\mathscr{P}$ if the following double implication holds for every collection $\{\P, \P_n : n\geq 1\}\subset \mathscr{P} $, as $n\to \infty$:
$$
\P_n \mbox{\ converges weakly to } \P \quad \mbox{if and only if} \quad d(\P_n , \P)\to 0.
$$
Given two random variables $X_1,X_2$ and a distance $d$ on $\mathscr{P}$, by an abuse of notation we shall
write $d(X_1,X_2)$ to indicate the quantity $d(\mathbf{D}(X_1) ,\mathbf{D}(X_2))$, where $\mathbf{D}(X_i)$
indicates the distribution of $X_i$, $i=1,2$. Recall that, given random variables $\{X, X_n : n\geq 1\}$, one
has that $\mathbf{D}(X_n)$ converges weakly to $\mathbf{D}(X)$ if and only if $X_n$ converges in distribution
to $X$. In this case, we write
$$
X_n \stackrel{\rm d}{\longrightarrow} X\quad \text{or}\quad X_n \stackrel{\mathcal L}{\longrightarrow} X\ ,
$$
whereas $X \stackrel{\rm d}{=} Y$ or $X \stackrel{\mathcal L}{=} Y$ indicates that $\mathbf{D}(X) = \mathbf{D}(Y)$.}
\end{defn}
Outstanding examples of distances metrizing weak convergence are the {\it Prokhorov distance} (usually denoted by $\rho$) and the {\it Fortet-Mourier distance} (or {\it bounded Wasserstein} distance, usually denoted by $\beta$). These are given by
$$
\rho\, (\P, \Q) = \inf\left\{ \epsilon >0 : \P(A)\leq \epsilon +\Q(A^\epsilon),\,\, \, \mbox{for every Borel set}\, A\subset \R\right\},
$$
where $A^{\epsilon} := \{ x : |x-y| <\epsilon, \mbox{ for some } y \in A\}$, and
$$
\beta\, (\P, \Q) = \sup\left\{ \left| \int_\R f \, d(\P - \Q) \right| : \|f\|_{BL}\leq 1 \right\}\ ,
$$
where $\|\cdot\|_{BL} = \|\cdot \|_L + \|\cdot \|_\infty$ and $\|\cdot \|_L$ is the usual Lipschitz seminorm (see e.g. \cite[Section 11.3]{D} for further details on these notions).
Let us recall moreover the
usual Kolmogorov $d_{K}$, total variation $d_{TV}$ and Wasserstein $d_{W}$
distances between r.v.'s $X,Y$: for $\mathcal D \in \lbrace K, TV, W \rbrace$
\begin{equation}
d_{\mathcal D}(X,Y) :=\sup_{h\in H_{\mathcal D}}\left\vert {\mathbb{E}}[h(X)]-{\mathbb{E}}[h(Y)]\right\vert \text{ ,} \label{prob distance}
\end{equation}
where $H_{K} = \lbrace 1(\cdot \le z), z\in \mathbb R \rbrace$,
$H_{TV} = \lbrace 1_A(\cdot), A\in \B(\mathbb R) \rbrace$ and $H_{W}$ is
the set of Lipschitz functions with Lipschitz
constant one.
It is a standard fact (see e.g. \cite[Proposition C.3.1]{noupebook}) that $d_K$ {\it does not} metrize, in general, weak convergence on $\mathscr{P}$.
The connection between stochastic calculus and probability metrics
is summarized in the following result (see e.g. \cite{noupebook},
Theorem 5.1.3), which will provide the basis for most of our results to
follow.
From now on, $\mathcal N(\mu, \sigma^2)$ shall denote the Gaussian law with mean $\mu$ and variance $\sigma^2$.
\begin{prop}
\label{BIGnourdinpeccati}
Let $F\in
\mathbb{D}^{1,2}$ such that $\mathbb{E}[F]=0,$ $\mathbb{E}[F^{2}]=\sigma
^{2}<+\infty .$ Then we have for $Z\sim \mathcal{N}(0,\sigma^2)$
\begin{equation*}
d_{W}(F,Z)\leq \sqrt{\frac{2}{\sigma ^{2}\,\pi }}\,\mathbb{E}[\left\vert \sigma ^{2}-\langle DF,-DL^{-1}F\rangle _{H}\right\vert ]\text{ .}
\end{equation*}
Also, assuming in addition that $F$ has a density,
\begin{eqnarray*}
d_{TV}(F,Z) &\leq &\frac{2}{\sigma ^{2}}\mathbb{E}[\left\vert
\sigma ^{2}-\langle DF,-DL^{-1}F\rangle _{H}\right\vert ]\text{ ,} \\
d_{K}(F,Z) &\leq &\frac{1}{\sigma ^{2}}\mathbb{E}[\left\vert
\sigma ^{2}-\langle DF,-DL^{-1}F\rangle _{H}\right\vert ]\text{ .}
\end{eqnarray*}
Moreover if $F\in \mathbb{D}^{1,4}$, we have also
\begin{equation*}
\mathbb{E}[\left\vert \sigma ^{2}-\langle DF,-DL^{-1}F\rangle
_{H}\right\vert ]\leq \sqrt{\mathrm{Var}[\langle DF,-DL^{-1}F\rangle _{H}]}\text{ .}
\end{equation*}
\end{prop}
Furthermore, in the special case where $F=I_{q}(f)$ for some $f\in H^{\odot q}$, $q\ge 2$,
Theorem 5.2.6 in \cite{noupebook} gives
\begin{equation}
\mathbb{E}[\left\vert \sigma ^{2}-\langle DF,-DL^{-1}F\rangle
_{H}\right\vert ]\leq \sqrt{\Var \left ( \frac{1}{q} \|DF\|_H^2 \right )}\ ,
\end{equation}
and Lemma 5.2.4 gives
\begin{equation}\label{casoparticolare}
\Var \left ( \frac{1}{q} \|DF\|_H^2 \right )=\frac{1}{q^{2}}\sum_{r=1}^{q-1}r^{2}\,r!^{2}\,{\binom{q}{r}}^{4}(2q-2r)!\,\Vert f\widetilde{\otimes }_{r}f\Vert _{H^{\otimes 2q-2r}}^{2}\ .
\end{equation}
In addition it is possible to show the powerful chain of inequalities: for $q\ge 2$
$$
\Var \left ( \frac{1}{q} \|DF\|_H^2 \right )\le \frac{q-1}{3q}k_4(F) \le (q-1) \Var \left ( \frac{1}{q} \|DF\|_H^2 \right )\ ,
$$
where
$$
k_4(F) := \E[F^4] - 3(\sigma^2)^2
$$
is the \emph{fourth} cumulant of $F$.
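As a simple but useful consequence of the first inequality above, note that for $F=I_q(f)$, $q\ge 2$,
\begin{equation*}
k_4(F)=\E[F^4]-3\,\E[F^2]^2\ \geq\ \frac{3q}{q-1}\,\Var \left ( \frac{1}{q} \|DF\|_H^2 \right )\ \geq\ 0\ ,
\end{equation*}
whereas $k_4(Z)=0$ whenever $Z$ is Gaussian; on a fixed Wiener chaos the fourth cumulant thus provides a nonnegative measure of non-Gaussianity.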
\begin{remark}\rm
Note that in (\ref{casoparticolare}) we can replace $\Vert f\widetilde{\otimes }_{r}f\Vert _{H^{\otimes 2q-2r}}^{2}$ with the norm of the unsymmetrized
contraction $\Vert f\otimes _{r}f\Vert _{H^{\otimes 2q-2r}}^{2}$ for the upper
bound, since
$$\Vert f\widetilde{\otimes }_{r}f\Vert _{H^{\otimes
2q-2r}}^{2}\leq \Vert f\otimes _{r}f\Vert _{H^{\otimes 2q-2r}}^{2}$$
by the triangle inequality, since the symmetrization $f\widetilde{\otimes }_{r}f$ is an average of kernels each having the same norm as $f\otimes _{r}f$.
\end{remark}
We shall make an extensive use of the following.
\begin{cor}
Let $F_n$, $n\ge 1$, be a sequence of random variables belonging to the $q$-th Wiener chaos, for some fixed integer $q\ge 2$. Then we have the following bound: for
$\mathcal D \in \lbrace K, TV, W\rbrace$
\begin{equation}\label{th}
d_{\mathcal D}\left (\frac{F_n}{\sqrt{\Var(F_n)}},Z \right )\le C_{\mathcal D}(q) \sqrt{\frac{k_4 \left ( F_n \right )}{\Var(F_n)^2}}\ ,
\end{equation}
where $Z\sim \mathcal N(0,1)$, for some constant $C_{\mathcal D}(q)>0$. In particular, if the right hand side in \paref{th} vanishes for $n\to +\infty$, then the following convergence in distribution holds
$$
\frac{F_n}{\sqrt{\Var(F_n)}}\mathop{\goto}^{\mathcal L} Z\ .
$$
\end{cor}
\newpage
\section{Gaussian eigenfunctions on the $d$-sphere}
\subsection{Some more notation}
For any two positive sequences $a_{n},b_{n}$,
we shall write $a_{n}\sim b_{n}$ if $\lim_{n\rightarrow \infty }\frac{a_{n}}{b_{n}}=1$, and $a_{n}\ll b_{n}$ or $a_{n}=O(b_{n})$ if the sequence $\frac{a_{n}}{b_{n}}$ is bounded; moreover $a_n = o(b_n)$ if $\lim_{n\rightarrow \infty }\frac{a_{n}}{b_{n}}=0$. Also, we write as usual $dx$ for the Lebesgue
measure on the unit $d$-dimensional sphere ${\mathbb{S}}^{d}\subset \R^{d+1}$, so that
$\int_{{\mathbb{S}}^{d}}\,dx=\mu _{d}$, where $\mu _{d}:=\frac{2\pi ^{\frac{d+1}{2}}}{\Gamma \left( \frac{d+1}{2}\right) }$, as already stated in \paref{ms}.
Recall that
the triple $(\Omega, \F, \P)$ denotes a probability space and $\E$
stands for the expectation w.r.t.\ $\P$; convergence (resp. equality) in law
is denoted by $\mathop{\rightarrow}^{\mathcal{L}}$ or equivalently $\mathop{\rightarrow}^{d}$ (resp. $\mathop{=}^{\mathcal{L}}$ or $\mathop{=}^{d}$)
and finally, as usual, $\mathcal N(\mu, \sigma^2)$ stands for a Gaussian
random variable with mean $\mu$ and variance $\sigma^2$.
Let $\Delta _{{\mathbb{S}}^{d}}$ $(d\geq 2)$ denote the Laplace-Beltrami operator on ${\mathbb{S}}^{d}$ and $\left( Y_{\ell ,m;d}\right) _{\ell ,m}$ the
orthonormal system of (real-valued) spherical harmonics, i.e. for $\ell \in
\mathbb{N}$ the set of eigenfunctions
\begin{equation*}
\Delta _{{\mathbb{S}}^{d}}Y_{\ell ,m;d}+\ell (\ell +d-1)Y_{\ell ,m;d}=0\
,\quad m=1,2,\dots ,n_{\ell ;d}\ .
\end{equation*}
For $d=2$ compare with \paref{realSH}.
As well-known, the spherical harmonics $\left( Y_{\ell ,m;d}\right)
_{m=1}^{n_{\ell ;d}}$ represent a family of linearly independent homogeneous
polynomials of degree $\ell $ in $d+1$ variables restricted to ${\mathbb{S}}^{d}$ of size
\begin{equation*}
n_{\ell ;d}:=\frac{2\ell +d-1}{\ell }{\binom{\ell +d-2}{\ell -1}}\ \sim \ \frac{2}{(d-1)!}\ell ^{d-1}\text{ ,}\quad \text{as}\ \ell \rightarrow +\infty \ ,
\end{equation*}
see e.g. \cite{andrews} for further details.
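For instance, for $d=2$ the formula above reduces to the familiar dimension of the space of degree-$\ell$ spherical harmonics on $\mathbb S^2$:
\begin{equation*}
n_{\ell ;2}=\frac{2\ell +1}{\ell }{\binom{\ell }{\ell -1}}=\frac{2\ell +1}{\ell }\cdot \ell =2\ell +1\ .
\end{equation*}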
\subsection{Definition and properties}
\begin{definition}
For $\ell \in \mathbb{N}$, the Gaussian eigenfunction $T_{\ell }$
on ${\mathbb{S}}^{d}$ is defined as
\begin{equation}\label{Telle}
T_{\ell }(x):=\sum_{m=1}^{n_{\ell ;d}}a_{\ell ,m}Y_{\ell ,m;d}(x)\ ,\quad
x\in {\mathbb{S}}^{d}\ ,
\end{equation}
with the random coefficients $\left( a_{\ell ,m}\right) _{m=1}^{n_{\ell ;d}}$
Gaussian i.i.d. random variables, satisfying the relation
\begin{equation*}
{\mathbb{E}}[a_{\ell ,m}a_{\ell ,m^{\prime }}]=\frac{\mu _{d}}{n_{\ell ;d}}\,\delta _{m}^{m^{\prime }}\ ,
\end{equation*}
where
$\delta _{a}^{b}$ denotes the Kronecker delta function and $\mu_d=\frac{2\pi ^{\frac{d+1}{2}}}{\Gamma \left( \frac{d+1}{2}\right) }$ the hypersurface volume of $\mathbb S^d$, as in \paref{ms}.
\end{definition}
It is then readily checked that $\left( T_{\ell }\right) _{\ell \in \mathbb{N}}$ represents a sequence of isotropic, zero-mean Gaussian random fields on ${\mathbb{S}}^{d}$, according to Definition \paref{campo aleatorio su uno spazio omogeneo} and moreover
$$
\E[T_\ell(x)^2] = 1\ ,\qquad x\in \mathbb S^d\ .
$$
$T_\ell$ is a continuous random field (Definition \ref{contRF}) and its isotropy simply means that the probability laws of
the two random fields $T_{\ell }(\cdot )$ and $T_{\ell }^{g}(\cdot
):=T_{\ell }(g\,\cdot )$ are equal (in the sense of finite dimensional distributions)
for every $g\in SO(d+1)$ (see \paref{invar-continui1}).
As briefly stated in the Introduction of this thesis, it is also well-known that every Gaussian and isotropic random field $T$ on $\mathbb{S}^{d}$
satisfies in the $L^{2}(\Omega \times {\mathbb{S}}^{d})$-sense the spectral representation
(see \cite{malyarenko, adlertaylor, dogiocam} e.g.)
\begin{equation*}
T(x)=\sum_{\ell =1}^{\infty }c_{\ell }T_{\ell }(x)\ ,\quad x\in {\mathbb{S}}^{d}\text{ ,}
\end{equation*}
where for every $x\in \mathbb S^d$, $\mathbb{E}\left[ T(x)^{2}\right] =\sum_{\ell =1}^{\infty }c_{\ell
}^{2}<\infty $; hence the spherical Gaussian eigenfunctions $\left( T_{\ell
}\right) _{\ell \in \mathbb{N}}$ can be viewed as the Fourier components (Chapter 1) of
the field $T$ (note that w.l.o.g. we are implicitly assuming that $T$ is
centered). Equivalently these random eigenfunctions \paref{Telle}
could be defined
by their covariance function, which equals
\begin{equation}
{\mathbb{E}}[T_{\ell }(x)T_{\ell }(y)]=G_{\ell ;d}(\cos d(x,y))\ ,\quad
x,y\in {\mathbb{S}}^{d}\ . \label{defT}
\end{equation}
Here and in the sequel, $d(x,y)$ is the spherical distance between $x,y\in
\mathbb{S}^{d}$, and $G_{\ell ;d}:[-1,1]{\longrightarrow }\mathbb{R}$ is the
$\ell $-th normalized Gegenbauer polynomial, i.e.
$$G_{\ell ;d}\equiv \frac{P_{\ell }^{(\frac{d}{2}-1,\frac{d}{2}-1)}}{\left(
\begin{array}{c}
\ell +\frac{d}{2}-1 \\
\ell
\end{array}
\right)}\ ,$$
where $P_{\ell }^{(\alpha ,\beta )}$ are the Jacobi
polynomials; throughout the whole thesis therefore
$G_{\ell ;d}(1)=1$. As a special case, for $d=2$, it equals $G_{\ell ;2}\equiv
P_{\ell },$ the degree-$\ell $ Legendre polynomial.
Remark that the Jacobi polynomials
$P_{\ell }^{(\alpha ,\beta )}$ are orthogonal on the interval $[-1,1]$ with
respect to the weight function $w(t)=(1-t)^{\alpha }(1+t)^{\beta }$ and
satisfy
$
P_{\ell }^{(\alpha ,\beta )}(1)=\left(
\begin{array}{c}
\ell +\alpha \\
\ell
\end{array}
\right)
$,
see e.g. \cite{szego} for more details.
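In particular, evaluating at $t=1$ one immediately recovers the normalization stated above:
\begin{equation*}
G_{\ell ;d}(1)=P_{\ell }^{(\frac{d}{2}-1,\frac{d}{2}-1)}(1)\left/\binom{\ell +\frac{d}{2}-1}{\ell }\right. =1\ .
\end{equation*}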
\subsection{Isonormal representation}
Let us give the isonormal representation (\ref{isonormal}) on $L^{2}({\mathbb{S}}^{d})$
for the Gaussian random eigenfunctions $T_{\ell }$, $\ell\ge 1$.
We shall show that the following identity in law holds:
\begin{equation*}
T_{\ell }(x)\overset{\mathcal L}{=}\int_{{\mathbb{S}}^{d}}\sqrt{\frac{n_{\ell ;d}}{\mu _{d}}}G_{\ell ;d}(\cos d(x,y))\,dW(y)\text{ ,}\qquad x\in \cS^d\ ,
\end{equation*}
where $W$ is a Gaussian white noise on ${\mathbb{S}}^{d}$. To compare with
(\ref{isonormal}), $$T_{\ell }(x)=T(f_{x})\ ,$$
where $T$ is the isonormal Gaussian field
on $L^2(\cS^d)$ and $$f_{x}(\cdot ):=\sqrt{\frac{n_{\ell ;d}}{\mu _{d}}}G_{\ell ;d}(\cos d(x,\cdot ))\ .$$
Moreover we have
immediately that
\begin{equation*}
{\mathbb{E}}\left[\int_{{\mathbb{S}}^{d}}\sqrt{\frac{n_{\ell ;d}}{\mu _{d}}}G_{\ell ;d}(\cos d(x,y))\,dW(y)\right]=0\ ,
\end{equation*}
and by the reproducing formula for Gegenbauer polynomials (\cite{szego})
$$\displaylines{
{\mathbb{E}}\left[\int_{{\mathbb{S}}^{d}}\sqrt{\frac{n_{\ell ;d}}{\mu _{d}}}G_{\ell ;d}(\cos d(x_{1},y_{1}))\,dW(y_{1})\int_{{\mathbb{S}}^{d}}\sqrt{\frac{n_{\ell ;d}}{\mu _{d}}}G_{\ell ;d}(\cos d(x_{2},y_{2}))\,dW(y_{2})\right] = \cr
=\frac{n_{\ell ;d}}{\mu _{d}}\int_{{\mathbb{S}}^{d}}G_{\ell ;d}(\cos d(x_{1},y))G_{\ell ;d}(\cos d(x_{2},y))\,dy=G_{\ell ;d}(\cos d(x_{1},x_{2}))\text{ .}
}$$
\chapter{Empirical measure of excursion sets}
The asymptotic behavior (i.e. for growing eigenvalues) of Gaussian eigenfunctions on a compact Riemannian manifold is a topic which has
recently drawn considerable attention, see e.g. \cite{Sodin-Tsirelson, AmP, bogomolnyschmit}.
In particular, in view of Berry's Random Wave model \cite{wigsurvey} much effort has been devoted to the case of the sphere
$\mathbb S^2$ (see \cite{nazarovsodin, Wig, def, Nonlin}).
As anticipated in the Introduction, the aim of this chapter is the investigation of the asymptotic distribution of the empirical measure $S_\ell(z)$ of $z$-excursion sets
of random spherical harmonics. A Central Limit Theorem has already been proved \cite{Nonlin}, but it provides little guidance on the
actual distribution of random functionals, as it is only an asymptotic
result with no information on the \emph{speed of convergence} to the limiting
distribution.
In \cite{maudom} therefore we exploit the results about fourth-moments phenomenon (see \cite{noupebook} and Chapter 4) to
establish \emph{quantitative} Central Limit Theorems
for the excursion volume of Gaussian eigenfunctions on the $d$-dimensional unit sphere $\mathbb{S}^d$, $d\ge 2$ (see also \cite{mauproc}).
We note that
there are already results in the literature giving rates of convergence in CLTs for value distributions of
eigenfunctions of the spherical Laplacian, see in particular \cite{meckes}, which investigates however
the complementary situation to the one considered here,
i.e. the limit for eigenfunctions of fixed degree $\ell$ and increasing dimension $d$.
To achieve our goal, we will provide a number of
intermediate results of independent interest, namely the asymptotic analysis for
the variance of moments of Gaussian eigenfunctions, the rates of convergence for various probability
metrics for so-called Hermite subordinated processes, the analysis of arbitrary polynomials
of finite order and square integrable nonlinear transforms. All these results could be useful
to attack other problems, for instance quantitative Central Limit Theorems for Lipschitz-Killing
curvatures of arbitrary order. A more precise statement of
our results and a short outline of the proof is given in \S \ref{main results}.
\section{Main results and outline of the proofs}
\label{main results}
The excursion volume of $T_\ell$ \paref{Telle},
for any fixed $z\in \mathbb{R}$ can be written as
\begin{equation}\label{voldef}
S_{\ell }(z)=\int_{{\mathbb{S}}^{d}}1(T_{\ell }(x)> z)\,dx\text{ ,}
\end{equation}
where $1(\cdot > z)$ denotes the indicator function of the interval
$(z,\infty)$; note that ${\mathbb{E}}[S_{\ell }(z)]=\mu _{d}(1-\Phi (z))$, where
$\Phi (z)$ is the standard Gaussian cdf and $\mu_d$ as in \paref{ms}. The variance of this excursion volume will be
shown below to have the following asymptotic behavior (as $\ell \to +\infty$)
\begin{equation}
\Var(S_{\ell}(z))=\frac{z^2\phi(z)^2}{2}\frac{\mu_d^2}{n_{\ell ;d}}+o(\ell^{-d})\ ,
\end{equation}
where $\phi$ denotes the standard Gaussian density and $n_{\ell ;d} \sim \frac{2}{(d-1)!}\ell^{d-1}$ is the dimension of the eigenspace related to the eigenvalue $-\ell(\ell+d-1)$, as in Chapter \ref{background}.
Note that the variance is of order smaller than $\ell^{-(d-1)}$ if and only if $z=0$.
The main result of this chapter is then as follows.
\begin{theorem}\label{mainteo}
The excursion volume $S_\ell(z)$ in \paref{voldef} of Gaussian eigenfunctions $T_\ell$
on $\mathbb S^d$, $d\ge 2$,
satisfies a quantitative CLT as $\ell \to +\infty$, with rate of convergence in the Wasserstein
distance \paref{prob distance} given by, for $z\ne 0$ and $Z\sim \mathcal{N}(0,1)$
\begin{equation*}
d_{W}\left( \frac{S_{\ell }(z)-\mu_d (1-\Phi(z))}{\sqrt{\mathrm{Var}[S_{\ell
}(z)]}}, Z\right) =O\left(\ell^{-1/2}\right)\text{
.}
\end{equation*}
\end{theorem}
An outline of the main steps and auxiliary results to prove this theorem
is given in the following subsection.
\subsection{Steps of the proofs}\label{outline}
The first tool to investigate quantitative CLTs for the excursion volume
of Gaussian eigenfunctions on ${\mathbb{S}}^{d}$ (compare for $d=2$ with \cite{Nonlin}) is
to study the asymptotic behavior, as $\ell \rightarrow \infty $, of the random variables
$h_{\ell ;q,d}$ defined for $\ell =1,2,\dots $ and $q=0,1,\dots $ as
\begin{equation}
h_{\ell ;q,d}=\int_{{\mathbb{S}}^{d}}H_{q}(T_{\ell }(x))\,dx\ , \label{hq}
\end{equation}
where $H_{q}$ represent the family of Hermite polynomials \paref{hermite} (see also \cite{noupebook}).
The rationale to investigate these sequences is the fact that
the excursion volume, and more generally any square integrable transform of $T_\ell$,
admits the Wiener-Ito chaos decomposition \paref{chaos exp}
(for more details see e.g. \cite{noupebook}, \S 2.2),
i.e. a series expansion in the $L^2(\P)$-sense
of the form
\begin{equation}\label{volseries}
S_{\ell }(z) = \sum_{q=0}^{+\infty} \frac{J_q(z)}{q!}\, h_{\ell ;q,d}\ ,
\end{equation}
where $J_0(z) =1-\Phi(z)$ and for $q\ge 1$, $J_q(z)=\E[1(Z>z)H_q(Z)]$,
$\Phi$ and $\phi$
denoting again respectively the cdf and the density of $Z\sim \mathcal N(0,1)$.
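A standard Gaussian integration by parts yields the closed form
\begin{equation*}
J_{q}(z)=\phi (z)H_{q-1}(z)\ ,\qquad q\geq 1\ ,
\end{equation*}
so that in particular $J_{1}(z)=\phi (z)$ and $J_{2}(z)=z\phi (z)$; the latter value is consistent with the leading term of $\Var(S_{\ell }(z))$ given above, since $\left(\frac{J_{2}(z)}{2!}\right)^{2}\Var[h_{\ell ;2,d}]=\frac{z^{2}\phi (z)^{2}}{4}\cdot \frac{2\mu _{d}^{2}}{n_{\ell ;d}}=\frac{z^{2}\phi (z)^{2}}{2}\frac{\mu _{d}^{2}}{n_{\ell ;d}}$.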
The main idea in our argument will then be to establish first a CLT for each of the summands
in the series, and then to deduce from this a CLT for the excursion volume.
The starting point will then
be the analysis of the asymptotic variances for $h_{\ell ;q,d}$, as $\ell\to +\infty$.
To this aim, note first that, for all $d$
\begin{equation*}
h_{\ell ;0,d}=\mu _{d}\ ,\qquad h_{\ell ;1,d}=0
\end{equation*}
a.s., and therefore it is enough to restrict our discussion to $q\geq 2$.
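Indeed, $H_{0}\equiv 1$ gives $h_{\ell ;0,d}=\int_{{\mathbb{S}}^{d}}dx=\mu _{d}$, while $H_{1}(t)=t$ and the orthogonality of the spherical harmonics $Y_{\ell ,m;d}$, $\ell \geq 1$, to the constants yield
\begin{equation*}
h_{\ell ;1,d}=\int_{{\mathbb{S}}^{d}}T_{\ell }(x)\,dx=\sum_{m=1}^{n_{\ell ;d}}a_{\ell ,m}\int_{{\mathbb{S}}^{d}}Y_{\ell ,m;d}(x)\,dx=0\quad \text{a.s.}
\end{equation*}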
Moreover ${\mathbb{E}}[h_{\ell; q, d}]=0$ and
\begin{equation}
\mathrm{Var}[h_{\ell ;q,d}]=q!\mu _{d}\mu _{d-1}\int_{0}^{\pi }G_{\ell
;d}(\cos \vartheta )^{q}(\sin \vartheta )^{d-1}\,d\vartheta \label{int var}
\end{equation}
(see \S \ref{tech} for more details). Gegenbauer polynomials satisfy the symmetry
relationships \cite{szego}
\begin{equation*}
G_{\ell ;d}(t)=(-1)^{\ell }G_{\ell ;d}(-t)\ ,
\end{equation*}
whence the r.h.s. integral in (\ref{int var}) vanishes identically when both
$\ell $ and $q$ are odd; therefore in these cases $h_{\ell ;q,d}= 0$ a.s. For
the remaining cases we have
\begin{equation}\label{int var doppio}
\mathrm{Var}[h_{\ell ;q,d}]=2q!\mu _{d}\mu _{d-1}\int_{0}^{\frac{\pi }{2}}G_{\ell ;d}(\cos \vartheta )^{q}(\sin \vartheta )^{d-1}\,d\vartheta \ .
\end{equation}
We have hence the following asymptotic result for these variances,
whose proof is given in \S \ref{subvar}.
\begin{prop}
\label{varianza} As $\ell \rightarrow \infty ,$ for $q,d\ge 3$,
\begin{equation} \label{int1}
\int_{0}^{\frac{\pi }{2}}G_{\ell ;d}(\cos \vartheta )^{q}(\sin \vartheta
)^{d-1}\,d\vartheta =\frac{c_{q;d}}{\ell ^{d}}(1+o_{q;d}(1))\ .
\end{equation}
The constants $c_{q;d}$ are given by the formula
\begin{equation}\label{ecq}
c_{q;d}=\left(2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\right)^q\int_{0}^{+\infty }J_{\frac{d}{2}-1}(\psi )^{q}\psi ^{-q\left( \frac{d}{2}-1\right) +d-1}\,d\psi\ ,
\end{equation}
where $J_{\frac{d}{2}-1}$ is the Bessel function
of order $\frac{d}{2}-1$. The r.h.s. integral in (\ref{ecq}) is absolutely convergent for any
pair $(d,q)\neq (3,3)$ and conditionally convergent for $d=q=3$.
\end{prop}
It is well known that for $d\geq 2$, the second moment of the Gegenbauer
polynomials is given by
\begin{equation}
\int_{0}^{\pi }G_{\ell ;d}(\cos \vartheta )^{2}(\sin \vartheta
)^{d-1}\,d\vartheta =\frac{\mu _{d}}{\mu _{d-1}\,n_{\ell ;d}}\ ,
\label{momento 2}
\end{equation}
whence
\begin{equation}
\mathrm{Var}(h_{\ell ;2,d})=2\frac{\mu _{d}^{2}}{n_{\ell ;d}}\ \sim\ 4\mu_d \mu_{d-1}\frac{c_{2;d}}{\ell ^{d-1}}\ ,\qquad \text{as}\ \ell \rightarrow
+\infty \ , \label{q=2}
\end{equation}
where $c_{2;d} :=
\frac{(d-1)!\mu _{d}}{4 \mu_{d-1}}$.
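As a quick consistency check, since $n_{\ell ;d}\sim \frac{2}{(d-1)!}\ell ^{d-1}$, the first expression in \paref{q=2} gives
\begin{equation*}
2\frac{\mu _{d}^{2}}{n_{\ell ;d}}\ \sim\ (d-1)!\,\mu _{d}^{2}\,\ell ^{-(d-1)}=4\mu _{d}\mu _{d-1}\,\frac{(d-1)!\,\mu _{d}}{4\mu _{d-1}}\,\ell ^{-(d-1)}\ ,
\end{equation*}
in agreement with the stated value of $c_{2;d}$.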
For $d=2$ and every $q$, the asymptotic behavior of these integrals was resolved in \cite{def}. In particular, it was shown that for $q=3$ or $q\geq 5$
\begin{equation}
\mathrm{Var}(h_{\ell ;q,2})=(4\pi )^{2}q!\int_{0}^{\frac{\pi}{2}}P_{\ell }(\cos
\vartheta )^{q}\sin \vartheta \,d\vartheta =(4\pi )^{2}q!\frac{c_{q;2}}{\ell
^{2}}(1+o_{q}(1))\ , \label{int2}
\end{equation}
where
\begin{equation}
c_{q;2}=\int_{0}^{+\infty }J_{0}(\psi )^{q}\psi \,d\psi \ , \label{cq2}
\end{equation}
$J_{0}$ being the Bessel function of order $0$ and the above integral being
absolutely convergent for $q\ge 5$ and conditionally convergent for
$q=3$. On the other hand, for $q=4$, as $\ell \rightarrow \infty $,
\begin{equation}
\mathrm{Var}[h_{\ell;4,2}]\ \sim \ 24^{2}\frac{\log \ell }{\ell ^{2}}= (4\pi)^2 4!\, c_{4;2}\frac{\log \ell}{\ell^2}\ , \label{q=4d=2}
\end{equation}
where we set $c_{4;2}:=\frac{3}{2\pi^2}$.
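Note that the two expressions in \paref{q=4d=2} indeed agree, since
\begin{equation*}
(4\pi )^{2}\,4!\,\frac{3}{2\pi ^{2}}=16\pi ^{2}\cdot 24\cdot \frac{3}{2\pi ^{2}}=576=24^{2}\ .
\end{equation*}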
Clearly for any $d,q\geq 2$, the constants $c_{q;d}$ are
nonnegative and it is obvious that $c_{q;d}>0$ for all even $q$.
Moreover, as we will recall in the next chapter, there exists an explicit formula
in the case $q=3$.
We conjecture that this strict inequality holds for every $(d,q)$,
but leave this issue as an open question for future research;
also, in view of the previous discussion on the symmetry
properties of Gegenbauer polynomials, to simplify the discussion
in the sequel we restrict ourselves to even multipoles $\ell $.
As argued earlier, the following step is to establish quantitative CLTs for
$h_{\ell ;q,d}$ (see \S \ref{clthl}) in various probability metrics
\paref{prob distance}. Here the crucial point to stress
is that the Gaussian eigenfunctions $(T_{\ell})_\ell$
can be always expressed as stochastic integrals with respect to a
Gaussian white noise measure on $\mathbb{S}^d$, as seen in
Chapter \ref{background}. As a consequence, the random
sequences $h_{\ell ;q,d}$ can themselves be represented as multiple Wiener-Ito integrals,
and therefore fall inside the domain of
quantitative CLTs by means of the Nourdin-Peccati approach (Chapter \ref{background}).
It is thus sufficient to investigate the so-called circular components of their normalized
fourth-order cumulants (Proposition \ref{BIGnourdinpeccati}) to establish the following Proposition
\ref{teo1}.
\begin{prop}
\label{teo1}
For all $d,q\ge 2$ and
${\mathcal{D}}\in \lbrace K, TV, W \rbrace$ we have, as $\ell\to +\infty$,
\begin{equation}\label{bounds}
d_{\mathcal{D}}\left( \frac{h_{\ell ;q,d}}{\sqrt{\mathrm{Var}[h_{\ell ;q,d}]}},Z\right) =O \left (\ell^{-\delta(q;d)} (\log \ell)^{-\eta(q;d)}\right )\ ,
\end{equation}
where for $d=2$
\begin{align}\label{rate2}
&\delta(2;2)=\delta(3;2)=1/2\ , \quad \delta(4;2)=0\ ,\quad \delta(q;2)=1/4\ \text{ for }q\ge 5\ ;\\
\nonumber
&\eta(4;2)=1\ , \quad \eta(5;2)=\eta(6;2)=-1\ ,\quad \eta(q;2)=0\ \text{ for }q=2,3\text{ and for }q\ge 7\ ;
\end{align}
whereas for $d\ge 3$ we have
$$\eta(q;d) = 0\quad \text{ for } q\ge 2\ ;
$$
$$\delta(2;d)=(d-1)/2\ , \quad \delta(3;d)=(d-5)/2\ ,\quad \delta(4;d)=(d-3)/2\ $$
$$ \delta(q;d) = (d-1)/4\ \text{ for }q\ge 5\ .$$
\end{prop}
Let us set $R(\ell;q,d) := \ell^{-\delta(q;d)} (\log \ell)^{-\eta(q;d)}$.
The following corollary is immediate.
\begin{cor}
\label{cor1} For all $q$ such that $(d,q)\neq (3,3), (3,4),(4,3),(5,3)$ and
$c_{q;d}>0$, $d\ge 2$,
\begin{equation}
\frac{h_{2\ell ;q,d}}{\sqrt{\Var[h_{2\ell ;q,d}]}}\mathop{\goto}^{\mathcal{L}} Z\ ,\qquad \text{as}\ \ell \rightarrow +\infty \ ,
\label{convergenza hq}
\end{equation}
where $Z\sim \mathcal{N}(0,1)$.
\end{cor}
\begin{remark}\rm
\textrm{For $d=2$, the CLT in (\ref{convergenza hq}) was already provided by
\cite{Nonlin}; nevertheless Proposition \ref{teo1} improves the existing bounds
on the speed of convergence to the asymptotic Gaussian distribution. More
precisely, for $d=2,q=2,3,4$ the same rate of convergence as in (\ref{rate2}) was given in their Proposition 3.4; however for arbitrary $q$ the total
variation rate was only shown to satisfy (up to logarithmic terms)
$d_{TV}=O(\ell ^{-\delta _{q}})$, where $\delta _{4}=\frac{1}{10}$, $\delta
_{5}=\frac{1}{7}$, and $\delta _{q}=\frac{q-6}{4q-6}<\frac{1}{4}$ for $q\geq
7$.}
\end{remark}
\begin{remark}\rm
\textrm{The cases not included in Corollary \ref{cor1} correspond to the
pairs where $q=4$ and $d=3$, or $q=3$ and $d=3,4,5$; in these circumstances
the bounds we establish on fourth-order cumulants in Proposition \ref{cumd}
are not sufficient to
ensure that the CLT holds. We leave these computations as a topic for future research.}
\end{remark}
As briefly anticipated earlier in this subsection,
the random variables $h_{\ell;q,d}$ defined in (\ref{hq}) are the basic
building blocks for the analysis of any square integrable nonlinear transforms
of Gaussian eigenfunctions on ${\mathbb{S}}^{d}$. Indeed, let us
first consider generic polynomial functionals of the form
\begin{equation}
Z_{\ell }=\sum_{q=0}^{Q}b_{q}\int_{{\mathbb{S}}^{d}}
T_{\ell
}(x)^{q}\,dx\ ,\qquad Q\in \mathbb{N},\text{ }b_{q}\in \mathbb{R},
\label{Z}
\end{equation}
which include, for instance, the so-called polyspectra (see e.g. \cite{dogiocam}, p.148)
of isotropic random
fields defined on ${\mathbb{S}}^{d}$. Note
\begin{equation}
Z_{\ell }=\sum_{q=0}^{Q}\beta _{q}h_{2\ell ;q,d} \label{Z2}
\end{equation}
for some $\beta _{q}\in \mathbb{R}$. It is easy to establish CLTs
for generic polynomials (\ref{Z2}) from convergence results on
$ h_{2\ell ;q,d}$, see e.g. \cite{peccatitudor}. It is more
difficult to investigate the speed of convergence in the CLT in terms of the
probability metrics we introduced earlier; indeed, in \S \ref{genpoly} we establish the
following.
\begin{prop}
\label{corollario1} As $\ell \rightarrow \infty$, for $Z\sim \mathcal N(0,1)$
\begin{equation*}
d_{\mathcal{D}}\left( \frac{Z_{\ell }-{\mathbb{E}}[Z_{\ell }]}{\sqrt{\mathrm{Var}[Z_{\ell }]}},Z\right) =O(R(Z_{\ell };d))\text{ ,}
\end{equation*}
where $d_{\mathcal{D}}=d_{TV},d_{W},d_{K}$ and for $d\ge 2$
\begin{equation*}
R(Z_{\ell };d) =
\begin{cases}
\ell ^{-\left( \frac{d-1}{2}\right) }\qquad & \text{if}\quad \beta _{2}\neq 0\ , \\
\max_{q=3,\dots ,Q\,:\,\beta _{q},c_{q;d}\neq 0}\,R(\ell ;q,d)\qquad & \text{if}\quad \beta _{2}=0\ .
\end{cases}
\end{equation*}
\end{prop}
The previous results can be summarized as follows: for polynomials of
\emph{Hermite rank $2$}, i.e. such that
$\beta
_{2}\neq 0$ (more details later on the notion of Hermite rank) the asymptotic behavior of $Z_{\ell }$ is
dominated by the single term $h_{\ell ;2,d},$ whose variance is of order $\ell
^{-(d-1)}$ rather than $\ell ^{-d}$ as for the other terms. On the other
hand, when $\beta _{2}=0,$ the convergence rate to the asymptotic Gaussian
distribution for a generic polynomial is the slowest among the rates for the
Hermite components into which $Z_{\ell }$ can be decomposed, i.e.
the terms $\beta _{q}h_{2\ell ;q,d}$ in \paref{Z2}.
The fact that the bound for generic polynomials is of the same order as for
the Hermite case (and not slower) is indeed rather unexpected; it can be
shown to be due to the cancellation of some cross-product terms, which are
dominant in the general Nourdin-Peccati framework, while they vanish for
spherical eigenfunctions of arbitrary dimension (see (\ref{eq=0}) and Remark \ref{rem0}). An inspection of our proof will reveal that
this result is a by-product of the orthogonality of eigenfunctions
corresponding to different eigenvalues; it is plausible that similar ideas
may be exploited in many related circumstances, for instance random
eigenfunctions on generic compact Riemannian manifolds.
Proposition \ref{corollario1} shows that the asymptotic behavior of arbitrary
polynomials of Hermite rank $2$ is of particularly simple nature. Our result
below will show that this feature holds in much greater generality, at least
as far as the Wasserstein distance $d_W$ is concerned. Indeed, we shall consider
the case of functionals of the form
\begin{equation}
S_{\ell }(M)=\int_{{\mathbb{S}}^{d}}M(T_{\ell }(x))\,dx\ , \label{S}
\end{equation}
where $M:\mathbb{R}\rightarrow \mathbb{R}$ is some
measurable function such that $\E[M(Z)^2]<+\infty$, where
$Z\sim \mathcal N(0,1)$. As in Chapter \ref{background},
for such transforms the
following chaos expansion holds in the $L^2(\P)$-sense \paref{chaos exp}
\begin{equation}
M(T_{\ell })=\sum_{q=0}^{\infty }\frac{J_{q}(M)}{q!}H_{q}(T_{\ell })\ ,\qquad J_{q}(M):={\mathbb{E}}[M(T_{\ell
})H_{q}(T_{\ell })]\ . \label{exp}
\end{equation}
Therefore the asymptotic analysis, as $\ell \rightarrow \infty $, of $S_{\ell }(M)$
in (\ref{S}) directly follows from the Gaussian approximation
for $h_{\ell ;q,d}$ and their polynomial transforms $Z_{\ell }$. More
precisely, in \S \ref{genvol} we prove the following result.
\begin{prop}
\label{general} Let $Z\sim \mathcal N(0,1)$. For functions $M$ in (\ref{S}) such that $$\mathbb{E}\left[
M(Z)H_{2}(Z)\right] =J_{2}(M)\neq 0\ ,$$ we have
\begin{equation}
d_{W}\left( \frac{S_{2\ell }(M)-{\mathbb{E}}[S_{2\ell }(M)]}{\sqrt{\mathrm{Var}[S_{2\ell }(M)]}},Z\right) =O(\ell ^{-1/2}) \ ,\qquad \text{as}\ \ell \rightarrow \infty\ , \label{sun2}
\end{equation}
in particular
\begin{equation}
\frac{S_{2\ell }(M)-{\mathbb{E}}[S_{2\ell }(M)]}{\sqrt{\mathrm{Var}[S_{2\ell
}(M)]}}\mathop{\goto}^{\mathcal{L}} Z\ . \label{sun1}
\end{equation}
\end{prop}
Proposition \ref{general} provides a Breuer-Major like result on nonlinear
functionals, in the high-frequency limit (compare for instance \cite{nourdinpodolskij}). While the CLT in \eqref{sun1} is somewhat expected, the
square-root speed of convergence \eqref{sun2} to the limiting distribution
may be considered quite remarkable; it is mainly due to some specific
features in the chaos expansion of Gaussian eigenfunctions, which is dominated
by a single term at $q=2.$ Note that the function $M$ need not be smooth in
any meaningful sense; indeed, as explained above, our
main motivating rationale here is the analysis of the asymptotic behavior of the excursion
volume in \paref{voldef}, i.e.
$S_{\ell }(z)=S_{\ell }(M)$, where $M(\cdot )=M_{z}(\cdot )=1(\cdot > z)$ is again the
indicator function of the interval $(z, +\infty)$. An application
of Proposition \ref{general} (compare \paref{volseries} to \paref{exp}) provides a quantitative CLT for
S_{\ell }(z)$, $z\neq 0$, thus completing the proof of our main result.
The plan of the rest of this chapter is as follows: in \S \ref{2.2} we specialize results in Chapter \ref{background} to the hypersphere, in \S \ref{clthl} we establish the
quantitative CLT for the sequences $h_{\ell;q,d}$, while \S \ref{genpoly} extends these
results to generic finite-order polynomials. The results for general nonlinear transforms and excursion
volumes are given in \S \ref{genvol}; most technical proofs and (hard) estimates, including in particular
evaluations of asymptotic variances, are collected in \S \ref{tech}.
\section{Polynomial transforms in Wiener chaoses}\label{2.2}
As mentioned earlier in \S \ref{main results}, we shall be concerned first
with random variables $h_{\ell ;q,d}$, $\ell \geq 1$, $q,d\geq 2$
\begin{equation*}
h_{\ell ;q,d}=\int_{{\mathbb{S}}^{d}}H_{q}(T_{\ell }(x))\,dx\ ,
\end{equation*}
and their (finite) linear combinations
\begin{equation}
Z_{\ell }=\sum_{q=2}^{Q}\beta _{q}h_{\ell ;q,d}\text{ ,}\qquad \beta _{q}\in
\mathbb{R},Q\in \mathbb{N}\ . \label{genovese}
\end{equation}
Our first objective is to represent \paref{genovese} as a (finite) sum of
(multiple) stochastic integrals as in (\ref{chaos exp2}), in order to apply
the results recalled in Chapter \ref{background}.
Note that by (\ref{isometria}), we have
\begin{equation*}
\displaylines{ H_{q}(T_{\ell }(x))=I_{q}(f_{x}^{\otimes q})=\cr
=\int_{({\mathbb{S}}^{d})^{q}}\left( \frac{n_{\ell;d}}{\mu _{d}}\right)
^{q/2}G_{\ell ;d}(\cos d(x,y_{1}))\dots G_{\ell ;d}(\cos
d(x,y_{q}))\,dW(y_{1})...dW(y_{q})\text{ ,} }
\end{equation*}
so that
\begin{equation*}
h_{\ell ;q,d}\overset{\mathcal L}{=}\int_{({\mathbb{S}}^{d})^{q}}g_{\ell
;q}(y_{1},...,y_{q})\,dW(y_{1})...dW(y_{q})\ ,
\end{equation*}
where
\begin{equation}
g_{\ell ;q}(y_{1},...,y_{q}):=\int_{{\mathbb{S}}^{d}}\left( \frac{n_{\ell ;d}}{\mu _{d}}\right) ^{q/2}G_{\ell ;d}(\cos d(x,y_{1}))\dots G_{\ell ;d}(\cos d(x,y_{q}))\,dx\text{ .} \label{moltobello}
\end{equation}
Thus we just established that $h_{\ell ;q,d}\overset{\mathcal L}{=}I_{q}(g_{\ell ;q})$
and therefore
\begin{equation}
Z_{\ell }\overset{\mathcal L}{=}\sum_{q=2}^{Q}I_{q}(\beta _{q}\,g_{\ell ;q})\ ,
\end{equation}
as required. It should be noted that for such random variables $Z_{\ell }$,
the conditions of Proposition \ref{BIGnourdinpeccati} are trivially
satisfied.
\section{The quantitative CLT for Hermite transforms}
\label{clthl}
In this section we prove Proposition \ref{teo1} with the help of Proposition \ref{BIGnourdinpeccati} and \eqref{casoparticolare} in particular. The
identifications of \S \ref{2.2} lead to some very explicit expressions for the
contractions \eqref{contrazione}, as detailed in the following result.
For $\ell\ge 1, q\ge 2$, let $g_{\ell ;q}$ be defined as in \eqref{moltobello}.
\begin{lemma}
\label{contractions}
For all $q_{1},q_{2}\ge 2$, $r=1,...,q_{1}\wedge q_{2}-1$,
we have the identities
$$\displaylines{ \left\Vert g_{\ell ;q_{1}}\otimes _{r}g_{\ell
;q_{2}}\right\Vert _{H^{\otimes
n}}^{2}
= \int_{(\cS^{d})^4}\!\!\!\!G_{\ell
;d}^{r}(\cos d( x_{1},x_{2}) ) G_{\ell
;d}^{q_{1}\wedge q_{2}-r}\!(\cos d( x_{2},x_{3}) ) G_{\ell
;d}^{r}(\cos d(x_{3},x_{4}) ) \times \cr
\times G_{\ell
;d}^{q_{1}\wedge q_{2}-r}(\cos d( x_{1},x_{4}) )\,d\underline{x}\ ,}
$$
where we set $d\underline{x} := dx_{1}dx_{2}dx_{3}dx_{4}$ and
$n:= q_{1}+q_{2}-2r$.
\end{lemma}
\begin{proof}
Assume w.l.o.g. $q_1\le q_2$ and set for simplicity of notation $d\underline{t}:=dt_{1}\dots dt_{r}$.
The contraction \paref{contrazione} here takes the form
$$\displaylines{
(g_{\ell;q_{1} }\otimes _{r}g_{\ell;q_{2}})(y_{1},...,y_{n})=\cr
=\int_{(\cS^d)^r} g_{\ell;q_{1} }(y_{1},\dots,y_{q_{1}-r},t_{1},\dots,t_{r})g_{\ell;q_{2}
}(y_{q_{1}-r+1},\dots,y_{n},t_{1},\dots,t_{r})\,d\underline{t}=\cr
=\int_{(\cS^d)^r} \int_{\cS^{d}}\left(\frac{n_{\ell; d}}{\mu_d }
\right) ^{q_1/2} G_{\ell ;d}(\cos d(
x_{1},y_{1}) )\dots
G_{\ell
;d}(\cos d( x_{1},t_{r}) )\,dx_{1}\times \cr
\times \int_{\cS^{d}}\left(\frac{n_{\ell; d}}{\mu_d }
\right) ^{q_2/2}G_{\ell ;d}(\cos d(
x_{2},y_{q_{1}-r+1}) )\dots
G_{\ell ;d}(\cos d(
x_{2},t_{r}) )\,dx_{2}\, d\underline{t}=\cr
=\int_{(\cS^{d})^2}\left(\frac{n_{\ell; d}}{\mu_d }
\right) ^{n/2}G_{\ell ;d}(\cos d(
x_{1},y_{1})) \dots G_{\ell ;d}(\cos d(
x_{1},y_{q_1-r}) )\times \cr
\times G_{\ell ;d}(\cos d(
x_{2},y_{q_{1}-r+1}) )\dots G_{\ell ;d}(\cos d(
x_{2},y_{n}) )G_{\ell
;d}^{r}(\cos d( x_{1},x_{2}) )\,dx_{1}dx_{2}\ ,
}$$
where in the last equality
we have repeatedly used the reproducing property of Gegenbauer polynomials (\cite{szego}).
Now set $d\underline{y}:=dy_{1}\dots dy_{n}$. It follows at once that
$$\displaylines{
\left\Vert g_{\ell; q_{1} }\otimes _{r}g_{\ell; q_{2} }\right\Vert _{H^{\otimes n}}^{2}
=\int_{(\cS^d)^{n}} (g_{\ell; q_{1} }\otimes _{r}g_{\ell; q_{2} })^{2}
(y_{1},\dots,y_{n})\,d\underline{y}=\cr
=\int_{(\cS^d)^{n}} \int_{(\cS^{d})^2}\Bigl(\frac{n_{\ell; d}}{\mu_d
} \Bigr) ^{n}\!G_{\ell ;d}(\cos d(
x_{1},y_{1})) \dots G_{\ell ;d}(\cos d(
x_{2},y_{n})) G_{\ell
;d}^{r}(\cos d(x_{1},x_{2}))\, dx_1 dx_2\times \cr
\times
\int_{(\cS^{d})^2}G_{\ell ;d}(\cos d(
x_{4},y_{1})) \dots G_{\ell ;d}(\cos d(
x_{3},y_{n})) G_{\ell
;d}^{r}(\cos d( x_{3},x_{4}))
\,dx_{3}dx_{4}\,d\underline{y}=\cr
=\int_{(\cS^{d})^4}\!\!\!G_{\ell
;d}^{r}(\cos d( x_{1},x_{2})) G_{\ell
;d}^{q_{1}-r}(\cos d( x_{2},x_{3})) G_{\ell
;d}^{r}(\cos d(x_{3},x_{4})) G_{\ell
;d}^{q_{1}-r}(\cos d( x_{1},x_{4}))
\,d\underline{x}\ ,
}$$
as claimed.
\end{proof}
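The reproducing (convolution) property of Gegenbauer polynomials used repeatedly in the proof above can be checked numerically for $d=2$, where it reads $\int_{\mathbb{S}^2}P_\ell(\langle x,z\rangle)P_\ell(\langle z,y\rangle)\,dz=\frac{4\pi}{2\ell+1}P_\ell(\langle x,y\rangle)$. The sketch below is a purely illustrative numerical aside (it assumes NumPy; the product quadrature is our own construction, not part of the text):

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def P(ell, t):
    """Legendre polynomial P_ell evaluated at t."""
    c = np.zeros(ell + 1)
    c[ell] = 1.0
    return legval(t, c)

def sphere_quadrature(n_theta, n_phi):
    """Nodes z_k on S^2 and weights w_k with sum(w_k) = 4*pi.
    Uniform phi points kill all nonzero Fourier modes of order < n_phi,
    and Gauss-Legendre in t = cos(theta) then integrates the remaining
    zonal polynomial exactly (degree <= 2*n_theta - 1)."""
    t, w = leggauss(n_theta)
    phi = 2 * np.pi * np.arange(n_phi) / n_phi
    st = np.sqrt(1.0 - t**2)
    z = np.stack([np.outer(st, np.cos(phi)).ravel(),
                  np.outer(st, np.sin(phi)).ravel(),
                  np.outer(t, np.ones(n_phi)).ravel()], axis=1)
    wts = np.outer(w, np.full(n_phi, 2 * np.pi / n_phi)).ravel()
    return z, wts

ell = 10
x = np.array([0.0, 0.0, 1.0])
y = np.array([np.sin(1.0), 0.0, np.cos(1.0)])   # at angle 1 rad from x
z, w = sphere_quadrature(2 * ell + 2, 4 * ell + 4)

# lhs: int_{S^2} P_ell(<x,z>) P_ell(<z,y>) dz; rhs: (4*pi/(2*ell+1)) P_ell(<x,y>)
lhs = np.sum(w * P(ell, z @ x) * P(ell, z @ y))
rhs = 4 * np.pi / (2 * ell + 1) * P(ell, x @ y)
assert abs(lhs - rhs) < 1e-10
```

The integrand is a spherical polynomial of degree $2\ell$, so the quadrature is exact up to rounding and the identity holds to machine precision.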
We now need to introduce some further notation: for $q\ge 2$ and $r= 1,\dots, q-1$
\begin{equation*}
\displaylines{ \mathcal{K}_{\ell }(q;r):=\int_{(\cS^{d})^4}G_{\ell
;d}^{r}(\cos d( x_{1},x_{2}) ) G_{\ell ;d}^{q-r}(\cos d( x_{2},x_{3})
)\times \cr \times G_{\ell ;d}^{r}(\cos d(x_{3},x_{4}) ) G_{\ell
;d}^{q-r}(\cos d( x_{1},x_{4}) ) \,dx_{1}dx_{2}dx_{3}dx_{4}\ . }
\end{equation*}
Lemma \ref{contractions} asserts that
\begin{equation}\label{K}
\mathcal{K}_{\ell }(q;r)=\left\Vert g_{\ell ;q}\otimes _{r}g_{\ell
;q}\right\Vert _{H^{\otimes 2q-2r}}^{2}\text{ ;}
\end{equation}
it is immediate to check that
\begin{equation}\label{simm}
\mathcal{K}_{\ell }(q;r)= \mathcal{K}_{\ell }(q;q-r)\ .
\end{equation}
In the following two propositions we bound each term of the form $\mathcal{K}_{\ell}(q;r)$ (from (\ref{simm})
it is enough to consider $r=1,\dots, \left[\frac{q}2\right]$).
As noted in
\S \ref{outline}, these bounds improve the existing literature even for the case $d=2,$
from which we start our analysis.
For $d=2,$ as previously recalled, Gegenbauer polynomials become standard
Legendre polynomials $P_{\ell },$ for which it is well-known that (see (\ref{momento 2}))
\begin{equation}
\int_{{\mathbb{S}}^{2}}P_{\ell }(\cos d( x_{1},x_{2}) )^{2}\,dx_{1}=O\left(
\frac{1}{\ell }\right) \text{ ;} \label{tri1}
\end{equation}
also, from \cite{Nonlin}, Lemma 3.2 we have that
\begin{equation}
\int_{{\mathbb{S}}^{2}}P_{\ell }(\cos d( x_{1},x_{2}) )^{4}\,dx_{1}=O\left(
\frac{\log \ell }{\ell ^{2}}\right) \text{ .} \label{tri2}
\end{equation}
Finally, it is trivial to show that
\begin{equation}
\int_{{\mathbb{S}}^{2}}\left\vert P_{\ell }(\cos d( x_{1},x_{2})
)\right\vert\, dx_{1}\leq \sqrt{\int_{{\mathbb{S}}^{2}}P_{\ell
}(\cos d( x_{1},x_{2}) )^{2}\,dx_{1}}=O\left( \frac{1}{\sqrt{\ell }}\right) \label{tri3}
\end{equation}
and by the Cauchy-Schwarz inequality
\begin{equation}
\int_{{\mathbb{S}}^{2}}\left\vert P_{\ell }(\cos d( x_{2},x_{3})
)\right\vert ^{3}\,dx_{2}
=O\left( \sqrt{\frac{\log \ell }{\ell
^{3}}}\right) \text{ .} \label{tri4}
\end{equation}
These results will be the main tools to establish the upper bounds which are
collected in the following Proposition, whose proof is deferred to the last section.
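As a numerical aside (assuming NumPy; not part of the proofs), the bounds \paref{tri1} and \paref{tri2} can be illustrated directly: the second moment of $P_\ell$ over the sphere is exactly $4\pi/(2\ell+1)$, and the fourth moment, computed by exact Gauss--Legendre quadrature, exhibits the $\log \ell /\ell^{2}$ decay:

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

def moment(ell, q):
    """2*pi * int_{-1}^{1} P_ell(t)^q dt = int_{S^2} P_ell(cos d(x,y))^q dy,
    computed exactly by Gauss-Legendre quadrature (degree q*ell integrand)."""
    t, w = leggauss(q * ell // 2 + 1)
    c = np.zeros(ell + 1)
    c[ell] = 1.0
    return 2 * np.pi * np.sum(w * legval(t, c) ** q)

# (tri1): the second moment equals 4*pi/(2*ell+1), i.e. O(1/ell) exactly
for ell in (10, 50, 100):
    assert abs(moment(ell, 2) - 4 * np.pi / (2 * ell + 1)) < 1e-10

# (tri2): the fourth moment decays like log(ell)/ell^2; the normalized
# ratio ell^2 * moment / log(ell) stays bounded as ell grows
ratios = [ell**2 * moment(ell, 4) / np.log(ell) for ell in (50, 100, 200)]
assert max(ratios) < 10.0
```

The bound $10$ on the normalized ratio is an empirical margin for the unspecified constant in the $O(\cdot)$; the quadrature itself is exact since the integrand is a polynomial of degree $q\ell$.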
\begin{prop}
\label{cum2} For all $r=1,2,\dots ,q-1,$ we have
\begin{eqnarray}
\mathcal{K}_{\ell }(q;r) &=& O\left(\frac{1}{\ell ^{5}}\right)\text{ for }q=3\text{ ,} \label{hotel1} \\
\mathcal{K}_{\ell }(q;r) &=& O\left(\frac{1}{\ell ^{4}}\right)\text{ for }q=4\text{ ,} \label{hotel2}\\
\mathcal{K}_{\ell }(q;r) &=& O\left(\frac{\log \ell }{\ell ^{9/2}}\right)\text{ for }q=5,6\text{ } \label{hotel3}
\end{eqnarray}
and
\begin{equation}
\mathcal{K}_{\ell }(q;1) =\mathcal{K}_{\ell
}(q;q-1) =O\left(\frac{1}{\ell ^{9/2}}\right)\text{ , }
\mathcal{K}_{\ell }(q;r) =O\left(\frac{1}{\ell ^{5}}\right)\text{
, }r=2,...,q-2,\text{ for }q\geq 7\text{ .} \label{hotel4}
\end{equation}
\end{prop}
We can now move to the higher-dimensional case, as follows. Let us start
with bounds for the moments of all orders of Gegenbauer polynomials.
From (\ref{momento 2})
\begin{equation}
\int_{{\mathbb{S}}^{d}}G_{\ell ;d}(\cos d( x_{1},x_{2}) )^{2}dx_{1}=O\left(
\frac{1}{\ell ^{d-1}}\right) \text{ ;} \label{trid2}
\end{equation}
also, from Proposition \ref{varianza}, we have that if $q=2p,$ $p=2,3,4,\dots$,
\begin{equation}
\int_{{\mathbb{S}}^{d}}G_{\ell ;d}(\cos d( x_{1},x_{2}) )^{q}dx_{1}=O\left(
\frac{1}{\ell ^{d}}\right) \text{ .} \label{tridq}
\end{equation}
Finally, it is trivial to show that
$$
\int_{{\mathbb{S}}^{d}}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert dx_{2}\le
$$
\begin{equation}
\leq \sqrt{\int_{{\mathbb{S}}^{d}} G_{\ell
;d}(\cos d( x_{2},x_{3}) )^{2}dx_{2}}=O\left( \frac{1}{\sqrt{\ell ^{d-1}}}\right) \text{ ,} \label{trid1}
\end{equation}
$$
\int_{{\mathbb{S}}^{d}}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert ^{3}dx_{2}\le
$$
\begin{equation}
\leq \sqrt{\int_{{\mathbb{S}}^{d}}G_{\ell
;d}(\cos d( x_{2},x_{3}) )^{2}dx_{2}}\sqrt{\int_{{\mathbb{S}}^{d}}G_{\ell
;d}(\cos d( x_{1},x_{2}) )^{4}dx_{1}}=O\left( \frac{1}{\ell ^{d-{\textstyle\frac{1}{2}}}}\right) \text{ } \label{trid3}
\end{equation}
and for $q\geq 5$ odd,
$$
\int_{{\mathbb{S}}^{d}}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert ^{q}dx_{2}\le
$$
\begin{equation}
\leq \sqrt{\int_{{\mathbb{S}}^{d}}G_{\ell
;d}(\cos d( x_{2},x_{3}) )^{4}dx_{2}}\sqrt{\int_{{\mathbb{S}}^{d}}G_{\ell
;d}(\cos d( x_{1},x_{2}) )^{2(q-2)}dx_{1}}=O\left( \frac{1}{\ell ^{d}}\right) \text{ .} \label{tridgen}
\end{equation}
Analogously to the $2$-dimensional case, we can exploit the previous results
to obtain the following bounds, whose proof is again collected in \S \ref{tech}.
\begin{prop}
\label{cumd} For all $r=1,2,\dots, q-1$
\begin{eqnarray}
\mathcal{K}_{\ell }(q;r) &=&O\left( \frac{1}{\ell ^{2d+{\frac{d-5}{2}}}}\right) \text{ for }q=3\text{ ,} \label{hoteld1} \\
\mathcal{K}_{\ell }(q;r) &=&O\left( \frac{1}{\ell ^{2d+{\frac{d-3}{2}}}}\right) \text{ for }q=4\text{ ,} \label{hoteld2}
\end{eqnarray}
and, for $q\geq 5$,
\begin{equation}
\mathcal{K}_{\ell }(q;1) = \mathcal{K}_{\ell
}(q;q-1) =O\left( \frac{1}{\ell ^{2d+{\frac{d-1}{2}}}}\right)
\text{ , }\mathcal{K}_{\ell }(q;r)=O\left( \frac{1}{\ell ^{3d-1}}\right) \text{ for }r=2,\dots ,q-2\text{ .}
\label{hoteld4}
\end{equation}
\end{prop}
Exploiting the results in this section and the variance evaluation in Proposition \ref{varianza}
in \S \ref{tech}, we can now prove our first quantitative CLT.
\begin{proof}[Proof of Proposition \ref{teo1}]
By Parseval's identity, the case $q=2$ can be treated as a sum of independent random variables and the proof
follows from standard Berry-Esseen arguments, as in Lemma 8.3 of \cite{dogiocam} for the case $d=2$.
For $q\ge 3$, from Proposition \ref{BIGnourdinpeccati} and \paref{casoparticolare}, for $d_{\mathcal{D}}=
d_K, d_{TV}, d_W$
\begin{equation}
d_{\mathcal{D}}\left(\frac{h_{\ell ;q,d}}{\sqrt{\Var[h_{\ell ;q,d}]}},Z\right)
= O\left(\sup_{r}\sqrt{\frac{\mathcal{K}_{\ell }(q;r) }{\Var[h_{\ell ;q,d}]^{2}}}\right)\text{ .}
\end{equation}
The proof is thus an immediate consequence of
the previous equality and the results in Proposition \ref{varianza},
Proposition \ref{cum2} and Proposition \ref{cumd}.
\end{proof}
\section{General polynomials}\label{genpoly}
In this section, we show how the previous results can be extended to establish quantitative
CLTs for general, non-Hermite polynomials.
To this aim, we need to introduce some more notation, namely
(for $Z_{\ell }$ defined as in (\ref{genovese}))
\begin{equation*}
\mathcal{K}(Z_{\ell }; d):=\max_{q:\beta _{q}\neq 0}\max_{r=1,...,q-1}\mathcal{K}_{\ell }(q;r)\text{ ,}
\end{equation*}
and as in Proposition \ref{corollario1}
\begin{equation*}
R(Z_{\ell}; d )=\begin{cases}
\ell ^{-\frac{d-1}2}\ ,\quad&\text{for }\beta _{2}\neq 0\ , \\
\max_{q=3,\dots,Q:\beta _{q}, c_{q;d}\neq 0}\,R(\ell ;q,d)\ ,\quad&\text{for }\beta _{2}=0\ .
\end{cases}
\end{equation*}
In words, $\mathcal{K}(Z_{\ell }; d)$ is the largest contraction term among
those emerging from the analysis of the different Hermite components, and
$R(Z_{\ell };d)$ is the slowest convergence rate of the same components. The
next result states that these are the only quantities to look at when
considering the general case.
\begin{proof}[Proof of Theorem \ref{corollario1}]
We apply Proposition \ref{BIGnourdinpeccati}.
In our case $H=L^2(\mathbb S^d)$ and
$$\displaylines{
\Var[\langle DZ_{\ell },-DL^{-1}Z_{\ell }\rangle _{H}]=\Var\left[ \langle \sum_{q_{1}=2}^{Q}\beta _{q_{1}}Dh_{\ell;q_{1},d},
-\sum_{q_{2}=2}^{Q}\beta _{q_{2}}DL^{-1}h_{\ell;q_{2},d}\rangle_{H}\right]=\cr
=\Var\left[ \sum_{q_{1}=2}^{Q}\sum_{q_{2}=2}^{Q}\beta _{q_{1}}\beta
_{q_{2}}\langle Dh_{\ell;q_{1},d},-DL^{-1}h_{\ell;q_{2},d}\rangle_{H}\right] \text{ .}
}$$
From Chapter \ref{background} recall that for $q_{1}\neq q_{2}$
\begin{equation*}
E[\langle Dh_{\ell;q_{1},d},-DL^{-1}h_{\ell ;q_{2},d}\rangle_{H}]=0\text{ ,}
\end{equation*}
whence we write
$$\displaylines{
\Var\left[ \sum_{q_{1}=2}^{Q}\sum_{q_{2}=2}^{Q}\beta _{q_{1}}\beta
_{q_{2}}\langle Dh_{\ell;q_{1},d},-DL^{-1}h_{\ell;q_{2},d}\rangle_{H}\right]=\cr
=\sum_{q_{1}=2}^{Q}\sum_{q_{2}=2}^{Q}\beta _{q_{1}}^{2}\beta
_{q_{2}}^{2}\Cov\left( \langle Dh_{\ell;q_{1},d},-DL^{-1}h_{\ell;q_{1},d}
\rangle_{H},\langle Dh_{\ell;q_{2},d},-DL^{-1}h_{\ell ;q_{2},d}\rangle_{H}\right)+\cr
+\sum_{q_{1}, q_3=2}^{Q}\sum_{q_{2}\neq
q_{1}}^{Q}\sum_{q_{4}\neq q_{3}}^{Q}\beta _{q_{1}}\beta
_{q_{2}}\beta _{q_{3}}\beta _{q_{4}}\times \cr
\times
\Cov\left( \langle Dh_{\ell;q_{1},d},-DL^{-1}h_{\ell;q_{2},d}\rangle_{H},\langle
Dh_{\ell ;q_{3},d},-DL^{-1}h_{\ell ;q_{4},d}\rangle_{H}\right).
}$$
Now of course we have
$$\displaylines{
\Cov\left( \langle Dh_{\ell ;q_{1},d},-DL^{-1}h_{\ell
;q_{1},d}\rangle_{H},\langle Dh_{\ell
;q_{2},d},-DL^{-1}h_{\ell ;q_{2},d}\rangle_{H}\right) \le \cr
\leq \left(\Var\left[\langle Dh_{\ell ;q_{1},d},-DL^{-1}h_{\ell
;q_{1},d}\rangle_{H}\right] \Var\left[ \langle Dh_{\ell
;q_{2},d},-DL^{-1}h_{\ell ;q_{2},d}\rangle_{H}\right] \right)^{1/2},
}$$
$$\displaylines{
\Cov\left( \langle Dh_{\ell ;q_{1},d},-DL^{-1}h_{\ell
;q_{2},d}\rangle_{H},\langle Dh_{\ell
;q_{3},d},-DL^{-1}h_{\ell ;q_{4},d}\rangle_{H}\right) \le \cr
\leq \left( \Var\left[ \langle Dh_{\ell ;q_{1},d},-DL^{-1}h_{\ell
;q_{2},d}\rangle_{H}\right] \Var\left[ \langle Dh_{\ell
;q_{3},d},-DL^{-1}h_{\ell ;q_{4},d}\rangle_{H}\right]
\right)^{1/2}.
}$$
Applying \cite{noupebook}, Lemma 6.2.1 it is immediate to show that
$$\displaylines{
\Var\left[ \langle Dh_{\ell ;q_{1},d},-DL^{-1}h_{\ell ;q_{1},d}\rangle
_{H}\right]\le \cr
\leq q_{1}^{2}\sum_{r=1}^{q_{1}-1}((r-1)!)^{2}{q_{1}-1 \choose r-1}^{4}(2q_{1}-2r)!\left\Vert g_{\ell; q_{1}}\otimes
_{r}g_{\ell;q_{1}}\right\Vert _{H^{\otimes 2q_1-2r}}^{2} =\cr
=q_{1}^{2}\sum_{r=1}^{q_{1}-1}((r-1)!)^{2}{q_{1}-1 \choose r-1}^{4}(2q_{1}-2r)! \mathcal{K}_{\ell }(q_1;r) \text{ .}
}$$
Also, for $q_{1}<q_{2}$
$$\displaylines{
\Var\left[ \langle Dh_{\ell ;q_{1},d},-DL^{-1}h_{\ell ;q_{2},d}\rangle
_{H}\right]=\cr
=q_{1}^{2}\sum_{r=1}^{q_{1}}((r-1)!)^{2}{q_{1}-1 \choose r-1}^{2}{q_{2}-1 \choose r-1}^{2}(q_{1}+q_{2}-2r)!\left\Vert g_{\ell;q_{1}}\widetilde{\otimes }
_{r}g_{\ell ; q_{2}}\right\Vert _{H^{\otimes
(q_{1}+q_{2}-2r)}}^{2}=\cr
=q_{1}^{2}((q_{1}-1)!)^{2}{q_{2}-1 \choose q_{1}-1}^{2}(q_{2}-q_{1})!\left\Vert g_{\ell;q_{1}}\widetilde{\otimes }
_{q_{1}}g_{\ell ; q_{2}}\right\Vert _{H^{\otimes (q_{2}-q_1)}}^{2} + \cr
+q_{1}^{2}\sum_{r=1}^{q_{1}-1}((r-1)!)^{2}{q_{1}-1 \choose r-1}^{2}{q_{2}-1 \choose r-1}^{2}(q_{1}+q_{2}-2r)!\left\Vert g_{\ell;q_{1} }\widetilde{\otimes }
_{r} g_{\ell ;q_{2}}\right\Vert _{H^{\otimes
(q_{1}+q_{2}-2r)}}^{2}
=:A+B\text{ .}
}$$
Let us focus on the first summand $A$, which includes terms that, from Lemma \ref{contractions},
take the form
$$\displaylines{
\left\Vert g_{\ell;q_{1}}\widetilde{\otimes }_{q_{1}}g_{\ell;q_{2}
}\right\Vert_{H^{\otimes (q_{2}-q_1)}}^{2}\le
\left\Vert g_{\ell;q_{1}}\otimes_{q_{1}}g_{\ell;q_{2}
}\right\Vert_{H^{\otimes (q_{2}-q_1)}}^{2}=\cr
=\int_{(\cS^d)^{q_2-q_1}}\int_{(\cS^{d})^2}\left( \frac{n_{\ell; d}}{\mu_d }\right)^{q_{2}-q_{1}}
G_{\ell;d }(\cos d( x_{2},y_{1}) )\cdots \times \cr
\times \cdots G_{\ell;d }(\cos d( x_{2},y_{q_{2}-q_1}) )
G_{\ell;d }(\cos d(
x_{1},x_2) )^{q_1}\,dx_1 dx_2\times\cr
\times \int_{(\cS^d)^2}G_{\ell;d}(\cos d( x_{3},y_{1}) )
...G_{\ell;d}(\cos d(
x_{3},y_{q_{2}-q_1}) )G_{\ell;d }(\cos d(
x_{3},x_4) )^{q_1}\,dx_3 dx_4\,d\underline{y} =: I\ ,
}$$
where for the sake of simplicity we have set $d\underline{y}:=dy_{1}...dy_{q_2-q_1}$.
Applying $q_2-q_1$ times the reproducing formula for Gegenbauer polynomials (\cite{szego}) we get
\begin{equation}\label{anvedi}
I = \int_{(\cS^{d})^4}G_{\ell;d}(\cos d(
x_{1},x_{2}) )^{q_{1}}G_{\ell;d}(\cos d(
x_{2},x_{3}) )^{q_{2}-q_{1}}G_{\ell;d}(\cos d(
x_{3},x_{4}) )^{q_{1}}\,d\underline{x}\ .
\end{equation}
In graphical terms, these contractions correspond to the diagrams such that
all $q_{1}$ edges corresponding to vertex $1$ are linked to vertex 2, vertex
$2$ and $3$ are connected by $q_{2}-q_{1}$ edges, vertex $3$ and $4$ by
$q_{1}$ edges, and no edges exist between $1$ and $4,$ i.e. the diagram has
no proper loop.
Now immediately we write
$$\displaylines{
\paref{anvedi} =\int_{\cS^{d}}G_{\ell;d}(\cos d( x_{1},x_{2})
)^{q_{1}}\,dx_{1}\int_{\cS^{d}}G_{\ell;d}(\cos d( x_{3},x_{4})
)^{q_{1}}\,dx_{4}\times \cr
\times \int_{(\cS^{d})^2}G_{\ell;d}(\cos d(
x_{2},x_{3}) )^{q_{2}-q_{1}}\,dx_{2}dx_{3}=\cr
=\frac{1}{(q_{1}!)^{2}} \Var[ h_{\ell;q_1,d}]^{2}
\int_{(\cS^{d})^2}G_{\ell;d}
(\cos d( x_{2},x_{3}) )^{q_{2}-q_{1}}\,dx_{2}dx_{3}\text{ .}
}$$
Moreover we have
\begin{equation}\label{eq=0}
\int_{(\cS^{d})^2}G_{\ell;d}(\cos d(
x_{2},x_{3}) )^{q_{2}-q_{1}}\,dx_{2}dx_{3}=0\ ,\quad \text{if}\ q_{2}-q_{1}=1\ ,
\end{equation}
and from \paref{momento 2} if $q_{2}-q_{1}\geq 2$
\begin{equation*}
\int_{(\cS^{d})^2}G_{\ell;d}(\cos d(
x_{2},x_{3}) )^{q_{2}-q_{1}}\,dx_{2}dx_{3}\leq \mu_d \int_{\cS^{d}}G_{\ell;d}(\cos d(x,y))^{2}\,dx=
O\left(\frac{1}{\ell^{d-1} }\right)\ .
\end{equation*}
It follows that
\begin{equation}
\left\Vert g_{\ell;q_{1} }\otimes_{q_{1}}g_{\ell;q_{2}
}\right\Vert_{H^{\otimes (q_2-q_1)}}^{2}= O\left(\Var[ h_{\ell;q_1,d}]^{2}\frac{1}{\ell^{d-1} }\right)
\label{efficientbound}
\end{equation}
always. For the second term, still from \cite{noupebook}, Lemma $6.2.1$ we have
\begin{eqnarray*}
B \leq \frac{q_{1}^{2}}{2}\sum_{r=1}^{q_{1}-1}((r-1)!)^{2}\left(
\begin{array}{c}
q_{1}-1 \\
r-1
\end{array}
\right) ^{2}\left(
\begin{array}{c}
q_{2}-1 \\
r-1
\end{array}
\right) ^{2}(q_{1}+q_{2}-2r)!\times \\
\times \left( \left\Vert g_{\ell;q_{1} }\otimes _{q_{1}-r}g_{\ell;q_{1}
}\right\Vert _{H^{\otimes 2r}}^{2}+\left\Vert g_{\ell;q_{2}
}\otimes _{q_{2}-r}g_{\ell;q_{2} }\right\Vert _{H^{\otimes
2r}}^{2}\right)
\end{eqnarray*}
\begin{equation}
=\frac{q_{1}^{2}}{2}\sum_{r=1}^{q_{1}-1}((r-1)!)^{2}\left(
\begin{array}{c}
q_{1}-1 \\
r-1
\end{array}
\right) ^{2}\left(
\begin{array}{c}
q_{2}-1 \\
r-1
\end{array}
\right) ^{2}(q_{1}+q_{2}-2r)!\left( \mathcal{K}_{\ell }(q_{1};r)+\mathcal{K}_{\ell }(q_{2};r)\right)\ , \label{juve2}
\end{equation}
where the last step follows from Lemma \ref{contractions}.
Let us first investigate the case $d=2$. From \paref{q=2}, \paref{int2} and \paref{q=4d=2}
it is immediate that
\begin{equation}
\Var[Z_{\ell }]=\sum_{q=2}^{Q}\beta _{q}^{2}\Var[h_{\ell ;q,2}]=
\begin{cases}
O(\ell ^{-1})\ ,\quad &\text{for }\beta _{2}\neq 0 \\
O(\ell ^{-2}\log \ell)\ ,\quad &\text{for }\beta _{2}=0\text{ , }\beta _{4}\neq 0 \\
O(\ell ^{-2})\ ,\quad &\text{otherwise.}
\end{cases}
\end{equation}
Hence we have that for $\beta _{2}\neq 0$ and $Z\sim \mathcal{N}(0,1)$
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(2;r)}}{\Var[Z_{\ell }]}\right)
=O\left(\ell ^{-1/2}\right)\text{ ;}
\end{eqnarray*}
for $\beta _{2}=0$ , $\beta _{4}\neq 0$
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(4;r)}}{\Var[Z_{\ell }]}\right)
=O\left(\frac{1}{\log \ell }\right)
\end{eqnarray*}
and for $\beta _{2}=\beta _{4}=0$, $\beta _{5}\neq 0$ and $c_5 >0$
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(5;r)}}{\Var[Z_{\ell }]}\right)
=O\left(\frac{\log \ell }{\ell ^{1/4}}\right)\text{ ,}
\end{eqnarray*}
and analogously we deal with the remaining cases, so that we obtain the claimed result for $d=2$.
For $d\ge 3$ from \paref{momento 2} and Proposition \ref{varianza}, it holds
\begin{equation*}
\Var[Z_{\ell }]=\sum_{q=2}^{Q}\beta _{q}^{2}\Var[h_{\ell ;q,d}]=
\begin{cases}
O(\ell ^{-(d-1)})\ ,\quad &\text{for }\beta _{2}\neq 0\ , \\
O(\ell ^{-d})\ ,\quad &\text{otherwise}\ .
\end{cases}
\end{equation*}
Hence we have for $\beta _{2}\neq 0$
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(2;r)}}{\Var[Z_{\ell }]}\right) =
O\left(\frac{1}{\ell ^{\frac{d-1}{2}}}\right)\text{ .}
\end{eqnarray*}
Likewise for $\beta _{2}=0$ , $\beta _{3},c_{3;d}\neq 0$,
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(3;r)}}{\Var[Z_{\ell }]}\right)
=O\left(\frac{1}{\ell ^{\frac{d-5}{4}}}\right)\text{ ,}
\end{eqnarray*}
and for $\beta _{2}=\beta _{3}=0$, $\beta _{4}\neq 0$
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(4;r)}}{\Var[Z_{\ell }]}\right)
=O\left(\frac{1}{\ell ^{\frac{d-3}{2}}}\right)\text{ .}
\end{eqnarray*}
Finally if $\beta_2=\beta_3=\beta_4=0$, $\beta_q, c_{q;d} \ne 0$ for some $q$, then
\begin{eqnarray*}
d_{TV}\left(\frac{Z_{\ell }-EZ_{\ell }}{\sqrt{\Var[Z_{\ell }]}},Z\right)
=O\left(\frac{\sqrt{\mathcal{K}_{\ell }(q;r)}}{\Var[Z_{\ell }]}\right)
=O\left(\sqrt{\frac{\ell^{2d} }{\ell ^{2d +\frac{d}{2}-\frac{1}{2}}}}\right)=
O\left(\frac{1}{\ell ^{\frac{d-1}{4}}}\right)\text{ .}
\end{eqnarray*}
\end{proof}
\begin{remark}\rm
\label{rem0}\textrm{To compare our result in these specific circumstances
with the general bound obtained by Nourdin and Peccati, we note that for \paref{anvedi},
these
authors exploit the inequality
\begin{equation*}
\displaylines{ \left\Vert g_{\ell;q_{1}}\otimes_{q_{1}} g_{\ell;q_{2}}
\right\Vert_{H^{\otimes (q_2-q_1)}} ^{2}\le \left\Vert g_{\ell;q_{1}}
\right\Vert _{H^{\otimes q_{1}}}^{2}\left\Vert g_{\ell;q_{2}}\otimes_{q_{2}-q_{1}}g_{\ell;q_{2}}
\right\Vert_{H^{\otimes 2q_{1}}}\ , }
\end{equation*}
see \cite{noupebook}, Lemma $6.2.1$. In the special framework we
consider here (i.e., orthogonal eigenfunctions), this provides, however, a less efficient bound than
(\ref{efficientbound}): indeed from \paref{anvedi}, repeating the same argument as in Lemma
\ref{contractions},
one obtains
$$\displaylines{
\left\Vert g_{\ell ;q_{1}}\otimes _{q_{1}}g_{\ell ;q_{2}}
\right\Vert_{H^{\otimes (q_2-q_1)}}^{2}=
\int_{({\mathbb{S}}^{d})^4}G_{\ell ;d}(\cos
d(x_{1},x_{2}))^{q_1}G_{\ell ;d}(\cos d(x_{2},x_{3}))^{q_2-q_1}\times \cr
\times G_{\ell ;d}(\cos
d(x_{3},x_{4}))^{q_1}\,d\underline{x}
\le \int_{({\mathbb{S}}^{d})^2} G_{\ell
;d}(\cos d(x_{1},x_{2}))^{q_1}\,dx_1 dx_2\times \cr
\times \Big (\int_{({\mathbb{S}}^{d})^4}\!\!\!\!G_{\ell ;d}(\cos d(x_{1},x_{2}))^{q_1}
G_{\ell ;d}(\cos
d(x_{2},x_{3}))^{q_2-q_1}\times \cr
\times G_{\ell ;d}(\cos
d(x_{3},x_{4}))^{q_1}G_{\ell ;d}(\cos d(x_{1},x_{4}))^{q_2-q_1}\,d\underline{x} \Big )^{1/2}=\cr
=O\left( \Var[h_{\ell ;q_{1},d}] \sqrt{\mathcal{K}_{\ell
}(q_{2};q_{1})}\right) \text{ ,}
}$$
yielding a bound of order
\begin{equation}
O\left( \sqrt{\frac{ \Var[h_{\ell;q_{1},d}] \sqrt{\mathcal{K}_{\ell }(q_{2};q_{1})}}{\Var[h_{\ell ;q_{1},d}]^{2}}}\right)
=O\left( \frac{\sqrt[4]{\mathcal{K}_{\ell }(q_{2};q_{1})}}{\sqrt{\Var[h_{\ell
;q_{1},d}]}}\right) \label{cdip}
\end{equation}
rather than
\begin{equation}
O\left( \sqrt{\frac{\mathcal{K}_{\ell }(q_{2};q_{1})}{ \Var[h_{\ell
;q_{1},d}]^{2}}}\right) \text{ ;} \label{cdip2}
\end{equation}
for instance, for $d=2$ note that (\ref{cdip}) is typically $=O(\ell
\times \ell ^{-9/8})=O(\ell ^{-1/8})$, while we have established for (\ref{cdip2}) bounds of order $O(\ell ^{-1/4})$. }
\end{remark}
\begin{remark}\rm
\textrm{Clearly the fact that $\left\Vert g_{\ell;q_{1}
}\otimes_{q_{1}}g_{\ell;q_{2} }\right\Vert_{H^{\otimes (q_2-q_1)}}^{2}=0$ for $q_{2}=q_{1}+1$
entails that the contraction $g_{\ell;q_{1} }\otimes_{q_{1}}g_{\ell;q_{2} }$ is identically
null. Indeed repeating the same argument as in Lemma \ref{contractions}
$$\displaylines{
g_{\ell;q_{1} }\otimes_{q_{1}}g_{\ell; q_{1}+1}=\cr
=
\int_{(\cS^{d})^{2}}G_{\ell ;d}(\cos d( x_{1},y) )G_{\ell ;d}(\cos d(
x_{1},x_2) )^{q_1}\,dx_{1}dx_{2}=\cr
=\int_{{\mathbb{S}}^{d}}G_{\ell ;d}(\cos d( x_{1},y) )\left( \int_{{\mathbb{S}}^{d}}G_{\ell ;d}(\cos d( x_{1},x_2) )^{q_1}\,dx_{2} \right)\,dx_1=0\text{ ,}
}$$
as expected, because the inner integral in the last equation does not depend on $x_1$
by rotational invariance. }
\end{remark}
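For $d=2$ the vanishing used in the last display is elementary: the outer integral reduces to $2\pi\int_{-1}^{1}P_\ell(t)\,dt=0$ for every $\ell\ge 1$, by orthogonality of $P_\ell$ to $P_0\equiv 1$. A quick numerical confirmation (assuming NumPy; purely illustrative):

```python
import numpy as np
from numpy.polynomial.legendre import legval, leggauss

# int_{-1}^{1} P_ell(t) dt = 0 for ell >= 1 (orthogonality to P_0 = 1), hence
# the outer integral of G_{ell;2}(cos d(x_1, y)) over the sphere vanishes
t, w = leggauss(64)
for ell in (1, 2, 5, 17):
    c = np.zeros(ell + 1)
    c[ell] = 1.0                     # coefficient vector selecting P_ell
    assert abs(np.sum(w * legval(t, c))) < 1e-12
```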
\section{Nonlinear functionals and excursion volumes}
\label{genvol}
The techniques and results developed previously are restricted to
finite-order polynomials. In the special case of the Wasserstein distance,
we shall show below how they can indeed be extended to general nonlinear
functionals of the form \paref{S}
\begin{equation*}
S_{\ell }(M)=\int_{{\mathbb{S}}^{d}}M(T_{\ell }(x))dx\text{ ;}
\end{equation*}
here $M:\mathbb{R}\rightarrow \mathbb{R}$ is a measurable function such that
$\mathbb{E}[M(Z)^{2}]<\infty $, $Z\sim \mathcal N(0,1)$ as in \S \ref{outline}, and $J_{2}(M)\neq 0,$ where we recall
that $J_{q}(M):=\mathbb{E}[M(Z)H_{q}(Z)]$ .
\begin{remark}\rm
\textrm{Without loss of generality, the first two coefficients
$J_{0}(M),J_{1}(M)$ can always be taken to be zero in the present framework.
Indeed, $J_{0}(M):=\mathbb{E}[M(Z)]=0,$ assuming we work with
centered variables and moreover as we noted earlier $h_{\ell ;1,d}=\int_{{\mathbb{S}}^{d}}T_{\ell }(x)\,dx=0$. }
\end{remark}
\begin{proof}[Proof of Proposition \ref{general}]
As in \cite{wigexc}, from \paref{exp} we write the expansion
\begin{equation*}
S_{\ell }(M) =\int_{\cS^{d}}\sum_{q=2}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell
}(x))}{q!}dx\ .
\end{equation*}
Precisely, we write for $d=2$
\begin{equation}\label{sum2}
S_{\ell }(M) =\frac{J_{2}(M)}{2}h_{\ell;2,2}+ \frac{J_{3}(M)}{3!}h_{\ell;3,2} + \frac{J_{4}(M)}{4!}h_{\ell;4,2} +\int_{\cS^{2}}\sum_{q=5}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell }(x))}{q!}dx\text{ ,}
\end{equation}
whereas for $d\ge 3$
\begin{eqnarray}\label{sum2d}
S_{\ell }(M) =\frac{J_{2}(M)}{2}h_{\ell;2,d}
+\int_{\cS^{d}}\sum_{q=3}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell }(x))}{q!}dx\text{ .}
\end{eqnarray}
Let us first investigate the case $d=2$.
Set for the sake of simplicity
\[
S_{\ell }(M;1):=\frac{J_{2}(M)}{2}h_{\ell;2,2}+ \frac{J_{3}(M)}{3!}h_{\ell;3,2} + \frac{J_{4}(M)}{4!}h_{\ell;4,2}\text{ ,}
\]
\[
S_{\ell }(M;2):=\int_{\cS^{2}}\sum_{q=5}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell
}(x))}{q!}dx\text{ .}
\]
Consider $Z\sim \mathcal N(0,1)$ and $Z_\ell \sim \mathcal{N}\left(0,\frac{\Var[S_{\ell }(M;1)]}{\Var[S_{\ell }(M)]}\right)$. Hence from \paref{sum2} and the triangle inequality
$$\displaylines{
d_{W}\left( \frac{S_{\ell }(M)}{\sqrt{\Var[S_{\ell }(M)]}},Z\right)\le\cr
\le d_{W}\left( \frac{S_{\ell }(M)}{\sqrt{\Var[S_{\ell }(M)]}},\frac{S_{\ell }(M;1)}{\sqrt{\Var[S_{\ell }(M)]}}\right) +d_{W}\left( \frac{S_{\ell
}(M;1)}{\sqrt{\Var[S_{\ell }(M)]}},Z_\ell\right)
+d_{W}\left( Z_\ell,
Z\right)\le}$$
$$\displaylines{
\le \frac{1}{\sqrt{\Var[S_{\ell }(M)]}}\mathbb{E}\left[\left(
\int_{\cS^{2}}\sum_{q=5}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell }(x))}{q!}dx
\right)^2\right]^{\tfrac12}+\cr
+d_{W}\left( \frac{S_{\ell }(M;1)}{\sqrt{\Var[S_{\ell }(M)]}},Z_\ell\right) +d_{W}\left( Z_\ell,Z\right)\ .
}$$
Let us bound the first term of the previous summation. Of course
$$\displaylines{
\Var[S_{\ell }(M)] = \Var[S_{\ell }(M;1)] +
\Var[S_{\ell }(M;2)]\ ;
}$$
now we have (see \cite{wigexc})
$$
\Var[S_{\ell }(M;1)] =\frac{J_{2}^{2}(M)}{2^2}\Var[h_{\ell ;2,2}]+\frac{J_{3}^{2}(M)}{6^2}\Var[h_{\ell ;3,2}]
+\frac{J_{4}^{2}(M)}{(4!)^2}\Var[h_{\ell ;4,2}]
$$
and moreover
$$\displaylines{
\Var[S_{\ell }(M;2)]=
\mathbb{E}\left[ \left(\int_{\cS^{2}}\sum_{q=5}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell
}(x))}{q!}dx \right)^2\right ]=\sum_{q=5}^{\infty }\frac{J_{q}^{2}(M)}{(q!)^2}\Var[h_{\ell ;q,2}]\ll \cr
\ll \frac{1}{\ell ^{2}}\sum_{q=5}^{\infty }\frac{J_{q}^{2}(M)}{q!}\ll
\frac{1}{\ell ^{2}}\text{ ,}
}$$
where the last bounds follow from \paref{int2} and \paref{cq2}. Remark that $$
\mathbb{E}[M(Z)^{2}]=\sum_{q=0}^{\infty }\frac{J_{q}^{2}(M)}{q!} <+\infty\ .
$$
Therefore recalling also \paref{q=2} and \paref{q=4d=2}
\[
\frac{1}{\Var[S_{\ell }(M)]}\mathbb{E}\left[ \left(\int_{\cS^{2}}\sum_{q=5}^{\infty }
\frac{J_{q}(M)H_{q}(T_{\ell }(x))}{q!}dx\right)^2\right]\ll \frac{1}{\ell }
\text{ .}
\]
On the other hand, from Proposition \ref{corollario1},
\[
d_{W}\left( \frac{S_{\ell }(M;1)}{\sqrt{\Var[S_{\ell }(M)]}},Z_\ell\right) =O\left(\frac{1}{\sqrt{\ell
}}\right)
\]
and finally, using Proposition 3.6.1 in \cite{noupebook},
\begin{eqnarray*}
d_{W}\left( Z_\ell,
Z\right) &\leq &\sqrt{\frac{2}{\pi }}\left|\frac{\Var[S_{\ell }(M;1)]}{\Var[S_{\ell }(M)]}-1\right|=O\left(\frac{1}{\ell }\right)\text{ ,}
\end{eqnarray*}
so that the proof for $d=2$ is completed.
The proof in the general case $d\ge 3$ is indeed
analogous, just setting
\[
S_{\ell }(M;1):=\frac{J_{2}(M)}{2}h_{\ell;2,d}\text{ ,}
\]
\[
S_{\ell }(M;2):=\int_{\cS^{d}}\sum_{q=3}^{\infty }\frac{J_{q}(M)H_{q}(T_{\ell
}(x))}{q!}dx
\]
and recalling from \paref{momento 2} that $\Var[h_{\ell;2,d}]=O( \frac{1}{\ell^{d-1}})$ whereas for
$q\ge 3$, $\Var[h_{\ell;q,d}]=O( \frac{1}{\ell^{d}})$ from Proposition \ref{varianza}.
\end{proof}
We are now in a position to establish our main result, concerning the volume
of the excursion sets, which we recall for any fixed $z\in \mathbb{R}$ is given by
\begin{equation*}
S_{\ell }(z):=S_{\ell }(\mathbb{I}(\cdot > z))=\int_{{\mathbb{S}}^{d}}\mathbb{I}(T_{\ell }(x) > z)dx\text{ .}
\end{equation*}
Again, ${\mathbb{E}}[S_{\ell }(z)]=\mu _{d}(1-\Phi (z))$, where
$\Phi (z)$ is the cdf of the standard Gaussian law, and in this case we have
$M=M_z:=\mathbb{I}(\cdot > z)$, $J_{2}(M_z)=z\phi (z)$, $\phi$ denoting
the standard Gaussian density. The proof of Theorem
\ref{mainteo} is then just an immediate consequence of Proposition \ref{general}.
\begin{remark}\rm
\textrm{It should be noted that the rate obtained here is much sharper than
the one provided by \cite{pham} for the Euclidean case with $d=2$. The
asymptotic setting we consider is rather different from his, in that we
consider the case of spherical eigenfunctions with diverging eigenvalues,
whereas he focusses on functionals evaluated on increasing domains
$[0,T]^{d} $ for $T\rightarrow \infty .$ However the contrast in the
convergence rates is not due to these different settings, indeed \cite{vale3}
establish rates of convergence analogous to those by \cite{pham} for
spherical random fields with more rapidly decaying covariance structure than
the one we are considering here. The main point to notice is that the slow
decay of Gegenbauer polynomials entails some form of long range dependent
behaviour on random spherical harmonics; in this sense, hence, our results
may be closer in spirit to the work by \cite{dehlingtaqqu} on empirical
processes for long range dependent stationary processes on $\mathbb{R}$. }
\end{remark}
\section{Technical proofs}\label{tech}
\subsection{On the variance of $h_{\ell ;q,d}$}\label{subvar}
In this section we study the variance of $h_{\ell ;q,d}$ defined in \eqref{hq}. By
(\ref{hermite orto}) and the definition of Gaussian random eigenfunctions
\eqref{defT}, it follows that \eqref{int var} holds at once:
\begin{equation*}
\displaylines{ \Var[h_{\ell;q,d}]= \E \left[ \left( \int_{\cS^d}
H_q(T_\ell(x))\,dx \right)^2 \right] = \int_{(\cS^d)^2} \E[ H_q(T_\ell(x_1))
H_q(T_\ell(x_2))]\,dx_1 dx_2 = \cr = q! \int_{(\cS^d)^2} \E[T_\ell(x_1)
T_\ell(x_2)]^q\,dx_1 dx_2 = q! \int_{(\cS^d)^2} G_{\ell;d}(\cos
d(x_1,x_2))^q\,dx_1 dx_2=\cr = q! \mu_d \mu_{d-1} \int_0^{\pi}
G_{\ell;d}(\cos \vartheta)^q (\sin \vartheta)^{d-1}\, d\vartheta. }
\end{equation*}
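The key identity in this computation, $\E[H_q(T_\ell(x_1))H_q(T_\ell(x_2))]=q!\,\E[T_\ell(x_1)T_\ell(x_2)]^q$, can be verified numerically for a correlated standard Gaussian pair; the sketch below (assuming NumPy; the quadrature construction is ours, not from the text) evaluates the expectation by exact two-dimensional Gauss--Hermite quadrature:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval, hermegauss
from math import factorial

def cov_hermite(q, rho, n=12):
    """E[He_q(X) He_q(Y)] for a jointly standard Gaussian pair (X, Y) with
    correlation rho, via 2-D Gauss-Hermite quadrature (probabilists'
    weight exp(-x^2/2), total mass sqrt(2*pi)); n >= q + 1 makes it exact."""
    x, w = hermegauss(n)
    c = np.zeros(q + 1)
    c[q] = 1.0                                 # coefficient vector selecting He_q
    s = np.sqrt(1.0 - rho**2)
    X, Z = np.meshgrid(x, x)                   # Y is represented as rho*X + s*Z
    W = np.outer(w, w) / (2.0 * np.pi)
    return np.sum(W * hermeval(X, c) * hermeval(rho * X + s * Z, c))

# the Hermite covariance identity E[He_q(X) He_q(Y)] = q! * rho^q
for q in (2, 3, 4):
    for rho in (0.3, 0.7):
        assert abs(cov_hermite(q, rho) - factorial(q) * rho**q) < 1e-10
```

Writing $Y=\rho X+\sqrt{1-\rho^2}Z$ with $Z$ independent of $X$ reduces the expectation to a polynomial integral against the two-dimensional Gaussian weight, which the tensorized rule integrates exactly.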
Now we prove Proposition \ref{varianza}, inspired by the proof of \cite{def}, Lemma $5.2$.
\begin{proof}[Proof of Proposition \protect\ref{varianza}]
By Hilb's asymptotic formula for Jacobi polynomials (see \cite{szego}, Theorem $8.21.12$),
we have uniformly for $\ell\ge 1$, $\vartheta\in [0, \tfrac{\pi}2]$
$$\displaylines{
(\sin \vartheta)^{\frac{d}{2} - 1}G_{\ell ;d} (\cos \vartheta) =
\frac{2^{\frac{d}{2} - 1}}{{\ell + \frac{d}{2}-1\choose \ell}}
\left( a_{\ell, d} \left(\frac{\vartheta}{\sin
\vartheta}\right)^{\tfrac12}J_{\frac{d}{2} - 1}(L\vartheta) +
\delta(\vartheta) \right)\ , }$$ where $L=\ell + \frac{d-1}{2}$,
\begin{equation}\label{al}
a_{\ell, d} = \frac{\Gamma(\ell + \frac{d}{2})}{(\ell + \frac{d-1}{2})^{\tfrac{d}{2}-1} \ell !}\
\sim\ 1\quad \text{as}\ \ell \to \infty,
\end{equation}
and the remainder is
\begin{equation*}
\delta(\vartheta) \ll \begin{cases} \sqrt{\vartheta}\,
\ell^{-\tfrac32}\ & \qquad \ell^{-1}< \vartheta
< \tfrac{\pi}2\ , \\
\vartheta^{\left(\tfrac{d}2-1\right) + 2}\, \ell^{\tfrac{d}2-1}\ & \qquad 0 < \vartheta
< \ell^{-1}\ .
\end{cases}
\end{equation*}
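As a numerical aside for $d=2$ (where $2^{d/2-1}$, the binomial coefficient and $a_{\ell,d}$ all equal $1$), Hilb's approximation can be compared against exact Legendre values. The sketch assumes SciPy is available, and the tolerance $0.01$ is an empirical margin for the unspecified constant in $\delta(\vartheta)$:

```python
import numpy as np
from scipy.special import eval_legendre, j0

# For d = 2 Hilb's formula reduces to
#   P_ell(cos(theta)) = sqrt(theta / sin(theta)) * J_0(L * theta) + delta(theta),
# with L = ell + 1/2 and delta(theta) << sqrt(theta) * ell^(-3/2) on (1/ell, pi/2)
ell = 100
L = ell + 0.5
for theta in (0.05, 0.3, 1.0):
    exact = eval_legendre(ell, np.cos(theta))
    approx = np.sqrt(theta / np.sin(theta)) * j0(L * theta)
    assert abs(exact - approx) < 0.01   # empirical margin for the O(.) constant
```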
Therefore, in light of \eqref{al} and the boundedness of $\vartheta \mapsto \frac{\vartheta}{\sin \vartheta}$ on $[0,\tfrac{\pi}2]$,
\begin{equation}\label{eq:G moment main error}
\begin{array}{c}
\displaystyle\int_{0}^{\frac{\pi}{2}} G_{\ell ;d} (\cos \vartheta)^q (\sin
\vartheta)^{d-1}d\vartheta =\\
\displaystyle=\left(\frac{2^{\frac{d}{2} - 1}}{{\ell + \frac{d}{2}-1\choose
\ell}}\right)^q a^q_{\ell,d} \int_0^{\frac{\pi}{2}} ( \sin
\vartheta)^{- q(\frac{d}{2} -1)}
\Big( \frac{\vartheta}{\sin \vartheta} \Big)^{\frac{q}{2}}
J^q_{\frac{d}{2}-1}(L\vartheta) (\sin \vartheta)^{d-1} d\vartheta\ +\qquad\cr
\displaystyle\qquad+O\left(\frac{1}{\ell^{q(\frac{d}{2}-1)}} \int_0^{\frac{\pi}{2}}
( \sin \vartheta)^{- q(\frac{d}{2} -1)}
|J_{\frac{d}{2}-1}(L\vartheta)|^{q-1}
\delta(\vartheta)(\sin \vartheta)^{d-1}d\vartheta\right),\cr
\end{array}
\end{equation}
where we used $${\ell + \frac{d}{2}-1\choose \ell}^{-1} \ll \frac{1}{\ell^{\frac{d}{2}-1}}$$
(note that we readily neglected the smaller terms, corresponding to higher powers of $\delta(\vartheta)$).
We rewrite \eqref{eq:G moment main error} as
\begin{equation}
\label{eq:G moment=M+E}
\begin{split}
\int_{0}^{\frac{\pi}{2}} G_{\ell ;d} (\cos \vartheta)^q (\sin
\vartheta)^{d-1} d\vartheta
=N+E\ ,
\end{split}
\end{equation}
where
\begin{equation}
\label{eq:Mdql def}
N=N(d,q;\ell) := \left(\frac{2^{\frac{d}{2} - 1}}{{\ell + \frac{d}{2}-1\choose
\ell}}\right)^q a^q_{\ell,d} \int_0^{\frac{\pi}{2}} ( \sin
\vartheta)^{- q(\frac{d}{2} -1)}
\Big( \frac{\vartheta}{\sin \vartheta} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(L\vartheta)^q (\sin \vartheta)^{d-1} d\vartheta\
\end{equation}
and
\begin{equation}
\label{eq:Edql def}
E=E(d,q;\ell) \ll \frac{1}{\ell^{q(\frac{d}{2}-1)}} \int_0^{\frac{\pi}{2}}
( \sin \vartheta)^{- q(\frac{d}{2} -1)}
|J_{\frac{d}{2}-1}(L\vartheta)|^{q-1}
\delta(\vartheta)(\sin \vartheta)^{d-1}d\vartheta\ .
\end{equation}
To bound the error term $E$ we split the range of the integration in \eqref{eq:Edql def}
and write
\begin{equation}
\label{eq:int error split}
\begin{split}
E \ll & \frac{1}{\ell^{q(\frac{d}{2}-1)}}
\int\limits_{0}^{\frac{1}{\ell}}( \sin \vartheta)^{- q(\frac{d}{2} -1)}
|J_{\frac{d}{2}-1}(L\vartheta)|^{q-1}
\vartheta^{\left(\tfrac{d}2-1\right) + 2}\, \ell^{\tfrac{d}2-1}(\sin \vartheta)^{d-1}\,d\vartheta +
\\&+\frac{1}{\ell^{q(\frac{d}{2}-1)}}
\int_{\frac{1}{\ell}}^{\frac{\pi}{2}}
( \sin \vartheta)^{- q(\frac{d}{2} -1)}
|J_{\frac{d}{2}-1}(L\vartheta)|^{q-1}
\sqrt{\vartheta}\,
\ell^{-\tfrac32}(\sin \vartheta)^{d-1}\,d\vartheta\ .
\end{split}
\end{equation}
For the first integral in \eqref{eq:int error split}
recall that $J_{\frac{d}{2}-1}(z) \ll z^{\frac{d}{2}-1}$ as
$z\to 0$, so that as $\ell\to \infty$,
$$\displaylines{
\frac{1 }{\ell^{(q-1)(\frac{d}{2}-1)}} \int_0^{\frac{1}{\ell}}
\left(\frac{\vartheta}{ \sin \vartheta}\right)^{ q(\frac{d}{2} -1)-d+1}
|J_{\frac{d}{2}-1}(L\vartheta)|^{q-1}
\vartheta^{-(q-1)\left(\tfrac{d}2-1\right) + d+1}\, \,d\vartheta
\ll\cr
}$$
\begin{equation}\label{err1}
\ll \int_0^{\frac{1}{\ell}}\vartheta^{d+1}\,d\vartheta = \frac{1}{\ell^{d+2}}\ ,
\end{equation}
which is enough for our purposes. Furthermore,
since $|J_{\frac{d}{2}-1}(z)|=O(z^{-\tfrac12})$ for large $z$ (and keeping in mind
that $L$ is of the same order of magnitude as $\ell$), we may bound the second integral
in \eqref{eq:int error split} as
$$\displaylines{\ll
\frac{1}{\ell^{q(\frac{d}{2}-1)+\frac32}}\int_{\frac{1}{\ell}}^{\frac{\pi}{2}}
\left( \frac{\vartheta}{\sin \vartheta}\right)^{ q(\frac{d}{2} -1)-d+1}
|J_{\frac{d}{2}-1}(L\vartheta)|^{q-1}
\vartheta^{-q(\frac{d}{2}-1)+d-\frac12}\,d\vartheta \ll \cr
\ll
\frac{1}{\ell^{q(\frac{d}{2}-1)+\frac32}}
\int_{\frac{1}{\ell}}^{\frac{\pi}{2}}
(\ell\vartheta)^{-\frac{q-1}{2}}
\vartheta^{-q(\frac{d}{2}-1)+d-\frac12}\,d\vartheta
=\frac{1}{\ell^{q(\frac{d}{2}-\frac12)+2}}
\int_{\frac{1}{\ell}}^{\frac{\pi}{2}}
\vartheta^{-q(\frac{d}{2}-\frac12)+d}\,d\vartheta \ll \cr
}$$
\begin{equation}\label{err2}
\ll \frac{1}{\ell^{(d+2)\wedge \left(q\left(\tfrac{d}2 -\tfrac{1}2\right) +1\right)}} = o(\ell^{-d})\ ,
\end{equation}
where the last equality in \paref{err2} holds for $q\ge 3$.
From \paref{err1} (bounding the first integral in \eqref{eq:int error split}) and
\paref{err2} (bounding the second integral in \eqref{eq:int error split}) we finally
find that the error term in
\eqref{eq:G moment=M+E} is
\begin{equation}
\label{resto}
E =o(\ell^{-d})
\end{equation}
for $q\ge 3$, admissible for our purposes.
Therefore, substituting \paref{resto} into \eqref{eq:G moment=M+E} we have
$$\displaylines{
\int_{0}^{\frac{\pi}{2}} G_{\ell ;d} (\cos \vartheta)^q (\sin
\vartheta)^{d-1}\, d\vartheta =\cr
=\left(\frac{2^{\frac{d}{2} -
1}}{{\ell + \frac{d}{2}-1\choose \ell}}\right)^q a^q_{\ell,d}
\int_0^{\frac{\pi}{2}} ( \sin \vartheta)^{- q(\frac{d}{2} -1)}
\Big( \frac{\vartheta}{\sin \vartheta} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(L\vartheta)^q (\sin \vartheta)^{d-1}d\vartheta + o(\ell^{-d}) =
}$$
\begin{align}
\label{bene}
=\left(\frac{2^{\frac{d}{2} -
1}}{{\ell + \frac{d}{2}-1\choose \ell}}\right)^q a^q_{\ell,d} \frac{1}{L}
\int_0^{L\frac{\pi}{2}} ( \sin \frac{\psi}{L})^{- q(\frac{d}{2} -1)}
\Big( \frac{ \frac{\psi}{L}}{\sin \frac{\psi}{L}} \Big)^{\frac{q}{2}}\times \\
\nonumber
\times
J_{\frac{d}{2}-1}(\psi)^q (\sin \frac{\psi}{L})^{d-1}\, d\psi + o(\ell^{-d})\ ,
\end{align}
where in the last equality we made the change of variables $\vartheta=\psi/L$; it then remains to evaluate
the first term in \paref{bene}, which we denote by
\begin{equation*}
N_L := \left(\frac{2^{\frac{d}{2} -
1}}{{\ell + \frac{d}{2}-1\choose \ell}}\right)^q a^q_{\ell,d} \frac{1}{L}
\int_0^{L\frac{\pi}{2}} ( \sin \psi/L)^{- q(\frac{d}{2} -1)}
\Big( \frac{\psi/L}{\sin \psi/L} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(\psi)^q (\sin \psi/L)^{d-1}\, d\psi\ .
\end{equation*}
Now recall that as $\ell\to \infty$
\begin{equation*}
{\ell + \frac{d}{2}-1\choose \ell}\ \sim \ \frac{\ell^{\frac{d}{2}-1}}{(\frac{d}2-1)!}\ ;
\end{equation*}
moreover \paref{al} holds, therefore we find that as $L\to \infty$
\begin{equation}\label{mado}
N_L\ \sim\ \frac{(2^{\frac{d}{2} -
1}(\frac{d}2-1)!)^q}{L^{q(\frac{d}{2}-1)+1}}
\int_0^{L\frac{\pi}{2}} ( \sin \psi/L)^{- q(\frac{d}{2} -1)}
\Big( \frac{\psi/L}{\sin \psi/L} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(\psi)^q (\sin \psi/L)^{d-1}\, d\psi\ .
\end{equation}
In order to finish the proof of Proposition \ref{varianza},
it is enough to check that, as $L\to \infty$
$$\displaylines{
L^d \, \frac{(2^{\frac{d}{2} -
1}(\frac{d}2-1)!)^q}{L^{q(\frac{d}{2}-1)+1}}
\int_0^{L\frac{\pi}{2}} ( \sin \frac{\psi}{L})^{- q(\frac{d}{2} -1)}
\Big( \frac{ \frac{\psi}{L}}{\sin \frac{\psi}{L}} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(\psi)^q \left (\sin \frac{\psi}{L} \right )^{d-1}\, d\psi\, \to\, c_{q;d}\ ,
}$$
indeed, from \paref{bene} and \paref{mado}, we have
$$\displaylines{
\lim_{\ell\to +\infty} \ell^d \int_{0}^{\frac{\pi}{2}} G_{\ell ;d} (\cos \vartheta)^q (\sin
\vartheta)^{d-1}\, d\vartheta = \cr
= \lim_{L\to +\infty} L^d \, \frac{(2^{\frac{d}{2} -
1}(\frac{d}2-1)!)^q}{L^{q(\frac{d}{2}-1)+1}}
\int_0^{L\frac{\pi}{2}} ( \sin \frac{\psi}{L})^{- q(\frac{d}{2} -1)}
\Big( \frac{ \frac{\psi}{L}}{\sin \frac{\psi}{L}} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(\psi)^q \left (\sin \frac{\psi}{L} \right )^{d-1}\, d\psi \ .
}$$
Now, by the Taylor expansion $\sin x = x - \frac{x^3}{6}+O(x^5)$, we write
$$\frac{\psi/L}{\sin \psi/L} = 1 + O\left( \psi^2/L^2 \right),$$
so that
$$\displaylines{
L^d\, \frac{\left(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\right)^q}{L^{q(\frac{d}{2}-1)+1}}
\int_0^{L\frac{\pi}{2}} ( \sin \psi/L)^{- q(\frac{d}{2} -1)}
\Big( \frac{\psi/L}{\sin \psi/L} \Big)^{\frac{q}{2}}
J_{\frac{d}{2}-1}(\psi)^q (\sin \psi/L)^{d-1}\, d\psi=\cr
=\left(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\right)^q
\int_0^{L\frac{\pi}{2}}
\Big( \frac{\psi/L}{\sin \psi/L} \Big)^{q(\tfrac{d}{2} -\tfrac12)-d+1}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d-1}\, d\psi=\cr
=\left(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\right)^q
\int_0^{L\frac{\pi}{2}}
\Big( 1 + O\left( \psi^2/L^2 \right) \Big)^{q(\tfrac{d}{2} -\tfrac12)-d+1}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d-1}\, d\psi=\cr
=\left(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\right)^q
\int_0^{L\frac{\pi}{2}}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d-1}\, d\psi +\cr
+
O\left( \frac{1}{L^2}\int_0^{L\frac{\pi}{2}}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d+1}\,d\psi \right).
}$$
Note that as $L\to +\infty$, the first term of the previous summation
converges to $c_{q;d}$ defined in \paref{ecq}, i.e.
\begin{equation}\label{to cq}
\left(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\right)^q
\int_0^{L\frac{\pi}{2}}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d-1}\, d\psi \to c_{q;d}\ .
\end{equation}
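Note that the integral defining $c_{q;d}$ is absolutely convergent at infinity: since $|J_{\frac{d}{2}-1}(\psi)|=O(\psi^{-\tfrac12})$ for large $\psi$, the integrand is bounded by
$$
|J_{\frac{d}{2}-1}(\psi)|^q \psi^{- q(\frac{d}{2} -1)+d-1} \ll \psi^{-q\left(\frac{d}{2}-\frac12\right)+d-1}\ ,
$$
which is integrable at $+\infty$ as soon as $q\left(\frac{d}{2}-\frac12\right)-d+1>1$, i.e. $q>\frac{2d}{d-1}$.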
It remains to bound the remainder
$$
\frac{1}{L^2}\int_0^{L\frac{\pi}{2}}
|J_{\frac{d}{2}-1}(\psi)|^q \psi^{- q(\frac{d}{2} -1)+d+1}\,d\psi =
O(1) + \frac{1}{L^2}\int_1^{L\frac{\pi}{2}}
|J_{\frac{d}{2}-1}(\psi)|^q \psi^{- q(\frac{d}{2} -1)+d+1}\,d\psi\ .
$$
Now for the second term on the r.h.s.
$$\displaylines{
\int_1^{L\frac{\pi}{2}}
|J_{\frac{d}{2}-1}(\psi)|^q \psi^{- q(\frac{d}{2} -1)+d+1}\,d\psi \ll
\int_1^{L\frac{\pi}{2}} \psi^{- q(\frac{d}{2} -\frac{1}{2})+d+1}\,d\psi=\cr
= O(1 + L^{- q(\frac{d}{2} -\frac{1}{2})+d+2})\ .
}$$
Therefore we obtain
$$\displaylines{
\Bigl(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\Bigr)^q \int_0^{L\frac{\pi}{2}}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d-1}\, d\psi +\cr
+
O\left( \frac{1}{L^2}\int_0^{L\frac{\pi}{2}}
J_{\frac{d}{2}-1}(\psi)^q \psi^{- q(\frac{d}{2} -1)+d+1}\,d\psi
\right)
= \cr
=\left(2^{\frac{d}{2} - 1}(\frac{d}2-1)!\right)^q
\int_0^{L\frac{\pi}{2}} J_{\frac{d}{2}-1}(\psi)^q \psi^{-
q(\frac{d}{2} -1)+d-1}\, d\psi + O(L^{-2} + L^{- q(\frac{d}{2}
-\frac{1}{2})+d})\ , }$$ so that we have just checked the
statement of the present proposition for $q > \frac{2d}{d-1}$. This is indeed enough for each $q\ge
3$ when $d\ge 4$.
It remains to investigate separately just the case $d=q=3$.
Recall that for $d=3$ we have an explicit formula for the Bessel
function of order $\frac{d}{2}-1=\frac12$ (see \cite{szego}), that is
\begin{equation*}
J_{\frac{1}{2}}(z) = \sqrt{\frac{2}{\pi z}} \sin (z)\ ,
\end{equation*}
and hence the integral in \paref{ecq} is indeed convergent for $q=d=3$, by integration by parts.
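Indeed, for $q=d=3$ the integrand in \paref{ecq} reduces, up to a multiplicative constant, to
$$
J_{\frac12}(\psi)^3\, \psi^{\tfrac12} = \left(\frac{2}{\pi}\right)^{\tfrac32} \frac{\sin^3 \psi}{\psi}\ ,
$$
which decays only like $1/\psi$: the convergence is not absolute, but follows from the oscillations of $\sin^3\psi$.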
Hence we have to study the convergence of the following integral
\begin{equation*}
\frac{8}{\pi^\frac{3}{2} }
\int_0^{L\frac{\pi}{2}}
\left( \frac{\psi/L}{\sin \psi/L} \right)
\frac{\sin^3 \psi}{\psi} \, d\psi\ .
\end{equation*}
To this aim, let us consider a large parameter $K\gg 1$ and divide the integration range
into $[0,K]$ and $[K, L\frac{\pi}{2}]$;
the main contribution comes from the first range, so we have to prove that the contribution of the second one vanishes.
Note that
\begin{equation}
\label{eq:int K-> bound}
\int_K^{L\frac{\pi}{2}} \left( \frac{\psi/L}{\sin \psi/L} \right)
\frac{\sin^3 \psi}{\psi} \, d\psi \ll \frac{1}{K}\ ,
\end{equation}
where we integrate by parts, using that the function $I(T) = \int_0^T \sin^3 z\, dz$ is bounded.
On $[0,K]$, we write
$$\displaylines{
\frac{8}{\pi^\frac{3}{2} }\int_0^{K} \left( \frac{\psi/L}{\sin \psi/L} \right)
\frac{\sin^3 \psi}{\psi} \, d\psi
= \frac{8}{\pi^\frac{3}{2} }\int_0^{K}
\frac{\sin^3 \psi}{\psi} \, d\psi + O\left( \frac{1}{L^2}\int_0^{K}
\psi \sin^3 \psi\, d\psi \right) = \cr
=\frac{8}{\pi^\frac{3}{2} }\int_0^{K}
\frac{\sin^3 \psi}{\psi} \, d\psi +O\left( \frac{K^2}{L^2} \right).
}$$
Consolidating the latter with \eqref{eq:int K-> bound} we find that
$$\displaylines{
\frac{8}{\pi^\frac{3}{2} }
\int_0^{L\frac{\pi}{2}}
\left( \frac{\psi/L}{\sin \psi/L} \right)
\frac{\sin^3 \psi}{\psi} \, d\psi = \frac{8}{\pi^\frac{3}{2} }\int_0^{K}
\frac{\sin^3 \psi}{\psi} \, d\psi + O\left( \frac{1}{K} + \frac{K^2}{L^2} \right).
}$$
Now as $K\to +\infty$,
$$
\frac{8}{\pi^\frac{3}{2} }\int_0^{K}
\frac{\sin^3 \psi}{\psi} \, d\psi \to c_{3;3}\ ;
$$
to conclude the proof, it is then enough to choose $K=K(L)\rightarrow\infty$
sufficiently slowly, e.g. $K=\sqrt{L}$, which makes the error term $O(L^{-1/2})$.
\end{proof}
\subsection{Proofs of Propositions \ref{cum2} and \ref{cumd}}
\begin{proof}[Proof of Proposition \protect\ref{cum2}]
The bounds (\ref{hotel1}), (\ref{hotel2}) are known and indeed the
corresponding integrals can be evaluated explicitly in terms of Wigner's 3j
and 6j coefficients, see e.g. \cite{dogiocam}. The
bounds in (\ref{hotel3}), (\ref{hotel4}) derive from a simple improvement in
the proof of Proposition 2.2 in \cite{Nonlin}, which can be obtained by
focusing only on a subset of the terms (the circulant ones) considered in
that reference. In the proof to follow, we exploit repeatedly
(\ref{tri1}), (\ref{tri2}), (\ref{tri3}) and \paref{tri4}.
Let us start by investigating the case $q=5$:
$$\displaylines{
\mathcal{K}_{\ell }(5;1)=\int_{(\cS^{2})^4}\left\vert
P_{\ell }(\cos d( x_{1},x_{2}) )\right\vert ^{4}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert \times \cr
\times \left\vert
P_{\ell }(\cos d( x_{3},x_{4}) )\right\vert ^{4}\left\vert
P_{\ell }(\cos d( x_{4},x_{1}) )\right\vert
dx_{1}dx_{2}dx_{3}dx_{4} \le \cr
\leq \int_{(\cS^{2})^4}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{4}\left\vert P_{\ell
}(\cos d( x_{2},x_{3}) )\right\vert \left\vert P_{\ell
}(\cos d( x_{3},x_{4}) )\right\vert
^{4}dx_{1}dx_{2}dx_{3}dx_{4} \le \cr
\leq \int_{(\cS^{2})^3}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{4}\left\vert P_{\ell
}(\cos d( x_{2},x_{3}) )\right\vert
\int_{\cS^{2}}\left\vert P_{\ell }(\cos d( x_{3},x_{4})
)\right\vert ^{4}dx_{4}\,\, dx_{1}dx_{2}dx_{3} \le\cr
\leq O\left( \frac{\log \ell }{\ell ^{2}}\right) \times
\int_{\cS^{2}\times \cS^{2}}\left\vert P_{\ell }(\cos d(
x_{1},x_{2}) )\right\vert ^{4}\left\{ \int_{\cS^{2}}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert dx_{3}\right\}
dx_{1}dx_{2} \le \cr
\leq O\left( \frac{\log \ell }{\ell ^{2}}\right) \times O\left( \frac{1}{\sqrt{\ell }}\right) \times \int_{\cS^{2}\times \cS^{2}}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{4}dx_{1}dx_{2} \le \cr
\leq O\left( \frac{\log \ell }{\ell ^{2}}\right) \times O\left( \frac{1}{\sqrt{\ell }}\right) \times O\left( \frac{\log \ell }{\ell ^{2}}\right)
=O\left( \frac{\log ^{2}\ell }{\ell ^{9/2}}\right) \text{ ;}
}$$
$$\displaylines{
\mathcal{K}_{\ell }(5;2)=\int_{(\cS^{2})^4}\left\vert
P_{\ell }(\cos d( x_{1},x_{2}) )\right\vert ^{3}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert^2 \times \cr
\times
\left\vert
P_{\ell }(\cos d( x_{3},x_{4}) )\right\vert ^{3}\left\vert
P_{\ell }(\cos d( x_{4},x_{1}) )\right\vert^2
dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{2})^4}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{3}\left\vert P_{\ell
}(\cos d( x_{2},x_{3}) )\right\vert^2 \left\vert P_{\ell
}(\cos d( x_{3},x_{4}) )\right\vert
^{3}dx_{1}dx_{2}dx_{3}dx_{4} \le \cr
\leq \int_{(\cS^{2})^3}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{3}\left\vert P_{\ell
}(\cos d( x_{2},x_{3}) )\right\vert^2
\int_{\cS^{2}}\left\vert P_{\ell }(\cos d( x_{3},x_{4})
)\right\vert ^{3}dx_{4}\,\, dx_{1}dx_{2}dx_{3} \le \cr
\leq O\left( \sqrt{\frac{\log \ell }{\ell ^{3}}}\right) \times
\int_{\cS^{2}\times \cS^{2}}\left\vert P_{\ell }(\cos d(
x_{1},x_{2}) )\right\vert ^{3}\left\{ \int_{\cS^{2}}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert^2 dx_{3}\right\}
dx_{1}dx_{2} \le \cr
\leq O\left( \sqrt{\frac{\log \ell }{\ell ^{3}}}\right) \times O\left( \frac{1}{\ell }\right) \times \int_{\cS^{2}\times \cS^{2}}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{3}dx_{1}dx_{2} \le \cr
\leq O\left( \sqrt{\frac{\log \ell }{\ell ^{3}}}\right)\times O\left( \frac{1}{\ell}\right) \times O\left( \sqrt{\frac{\log \ell }{\ell ^{3}}}\right)
=O\left( \frac{\log \ell }{\ell ^{4}}\right) \text{ .}
}$$
For $q=6$ and $r=1$ we simply note that $\mathcal{K}_{\ell }(6;1)\le \mathcal{K}_{\ell }(5;1)$; indeed
$$\displaylines{
\mathcal{K}_{\ell }(6;1)=\int_{(\cS^{2})^4}\left\vert
P_{\ell }(\cos d( x_{1},x_{2}) )\right\vert ^{5}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert \times \cr
\times\left\vert
P_{\ell }(\cos d( x_{3},x_{4}) )\right\vert ^{5}\left\vert
P_{\ell }(\cos d( x_{4},x_{1}) )\right\vert
dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{2})^4}\left\vert P_{\ell }(\cos d(
x_{1},x_{2}) )\right\vert ^{4}\left\vert P_{\ell }(\cos d(
x_{2},x_{3}) )\right\vert \times \cr
\times \left\vert P_{\ell }(\cos d(
x_{3},x_{4}) )\right\vert ^{4}\left\vert P_{\ell }(\cos d(
x_{4},x_{1}) )\right\vert dx_{1}dx_{2}dx_{3}dx_{4}=
\mathcal{K}_{\ell }(5;1)=O\left( \frac{\log ^{2}\ell }{\ell ^{9/2}}\right)\ .
}$$
Then, by computations analogous to the case $q=5$, we find that
$$\displaylines{
\mathcal{K}_{\ell }(6;2)=\int_{(\cS^{2})^4}\left\vert
P_{\ell }(\cos d( x_{1},x_{2}) )\right\vert ^{4}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert ^{2}\times \cr
\times \left\vert
P_{\ell }(\cos d( x_{3},x_{4}) )\right\vert ^{4}\left\vert
P_{\ell }(\cos d( x_{4},x_{1}) )\right\vert
^{2}dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\le
\int_{(\cS^{2})^4}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{4}\left\vert P_{\ell
}(\cos d( x_{2},x_{3}) )\right\vert ^{2}\left\vert P_{\ell
}(\cos d( x_{3},x_{4}) )\right\vert
^{4}dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{2})^2}\left\vert P_{\ell }(\cos d(
x_{1},x_{2}) )\right\vert ^{4}dx_{1}
\int_{\cS^{2}}\left\vert P_{\ell }(\cos d( x_{2},x_{3})
)\right\vert ^{2}dx_{2} \times \cr
\times \int_{\cS^{2}}\left\vert P_{\ell
}(\cos d( x_{3},x_{4}) )\right\vert ^{4}dx_{4}
dx_{3}=\cr
=O\left( \frac{\log \ell }{\ell ^{2}}\right) \times O\left( \frac{1}{\ell }\right) \times O\left( \frac{\log \ell }{\ell ^{2}}\right) =O\left( \frac{\log ^{2}\ell }{\ell ^{5}}\right)
}$$
and likewise
$$\displaylines{
\mathcal{K}_{\ell }(6;3)=\int_{(\cS^{2})^4}\left\vert
P_{\ell }(\cos d( x_{1},x_{2}) )\right\vert ^{3}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert ^{3}\times \cr
\times \left\vert
P_{\ell }(\cos d( x_{3},x_{4}) )\right\vert ^{3}\left\vert
P_{\ell }(\cos d( x_{4},x_{1}) )\right\vert
^{3}dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{2})^4}\left\vert P_{\ell
}(\cos d( x_{1},x_{2}) )\right\vert ^{3}\left\vert P_{\ell
}(\cos d( x_{2},x_{3}) )\right\vert ^{3}\left\vert P_{\ell
}(\cos d( x_{3},x_{4}) )\right\vert
^{3}dx_{1}dx_{2}dx_{3}dx_{4}=\cr
=O\left( \frac{\sqrt{\log \ell }}{\ell ^{3/2}}\right) \times O\left( \frac{\sqrt{\log \ell }}{\ell ^{3/2}}\right) \times O\left( \frac{\sqrt{\log \ell }}{\ell ^{3/2}}\right) =O\left( \frac{\log ^{3/2}\ell }{\ell ^{9/2}}\right)
\text{ .}
}$$
Finally for $q=7$
$$\displaylines{
\mathcal{K}_{\ell }(7;1) =\int_{(\cS^{2})^4}\left\vert
P_{\ell }(\cos d( x_{1},x_{2}) )\right\vert ^{6}\left\vert
P_{\ell }(\cos d( x_{2},x_{3}) )\right\vert \times \cr
\times \left\vert
P_{\ell }(\cos d( x_{3},x_{4}) )\right\vert ^{6}\left\vert
P_{\ell }(\cos d( x_{4},x_{1}) )\right\vert
dx_{1}dx_{2}dx_{3}dx_{4} \le \cr
\leq \int_{(\cS^{2})^2}\left\vert P_{\ell }(\cos d(
x_{1},x_{2}) )\right\vert ^{6}dx_{1}
\int_{\cS^{2}}\left\vert P_{\ell }(\cos d( x_{2},x_{3})
)\right\vert dx_{3} \times \cr
\times \int_{\cS^{2}}\left\vert P_{\ell
}(\cos d( x_{3},x_{4}) )\right\vert ^{6}dx_{4}\,
dx_{2}
=O\left( \frac{1}{\ell ^{2}}\right) \times O\left( \frac{1}{\ell ^{1/2}}\right) \times O\left( \frac{1}{\ell ^{2}}\right) =O\left( \frac{1}{\ell ^{9/2}}\right)
}$$
and repeating the same argument we obtain
$$
\mathcal{K}_{\ell }(7;2)=O\left( \frac{1}{\ell ^{5}}\right)\qquad \text{and}\qquad \mathcal{K}_{\ell }(7;3) =
O\left( \frac{\log^{9/2} \ell}{\ell ^{11/2}}\right)\ .
$$
By
\paref{simm}, we have thus obtained bounds for $\mathcal{K}_{\ell
}(q;r)$, $q=1,\dots,7$ and $r=1,\dots, q-1$.
To conclude the proof we note that, for $q>7$
\begin{equation*}
\max_{r=1,...,q-1}\mathcal{K}_{\ell }(q;r)=\max_{r=1,...,\left[ \frac{q}{2}\right] }\mathcal{K}_{\ell }(q;r)\leq \max_{r=1,...,3}\mathcal{K}_{\ell
}(6;r)=O\left( \frac{\log ^{2}\ell}{\ell ^{9/2}}\right) \text{ .}
\end{equation*}
In particular,
\begin{equation*}
\max_{r=2,...,\left[ \frac{q}{2}\right] }\mathcal{K}_{\ell }(q;r)\leq
\mathcal{K}_{\ell }(7;2)\vee \mathcal{K}_{\ell }(7;3)=O\left( \frac{1}{\ell
^{5}}\right) \text{ ,}
\end{equation*}
so that the dominant terms are those of the form $\mathcal{K}_{\ell }(q;1)$.
\end{proof}
\begin{proof}[Proof of Proposition \protect\ref{cumd}]
The proof relies on the same argument as the proof of Proposition \ref{cum2},
therefore we shall omit some calculations.
In what follows we exploit repeatedly the
inequalities \paref{tridq}, \paref{trid1}, \paref{trid3} and \paref{tridgen}.
For $q=3$ we immediately have
$$\displaylines{
\mathcal{K}_{\ell }(3;1)=\int_{(\cS^{d})^4}\left\vert G_{\ell ;d}(\cos d( x_{1},x_{2})
)\right\vert ^{2}\left\vert G_{\ell ;d}(\cos d(
x_{2},x_{3}) )\right\vert \times \cr
\times \left\vert G_{\ell
;d}(\cos d( x_{3},x_{4}) )\right\vert
^{2}\left\vert G_{\ell ;d}(\cos d( x_{4},x_{1})
)\right\vert dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{d})^4}\left\vert
G_{\ell ;d}(\cos d( x_{1},x_{2}) )\right\vert
^{2}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert \left\vert G_{\ell ;d}(\cos d(
x_{3},x_{4}) )\right\vert
^{2}dx_{1}dx_{2}dx_{3}dx_{4} =\cr
= O\left( \frac{1 }{\ell ^{d-1}}\right) \times O\left( \frac{1}{\sqrt{\ell^{d-1} }}\right) \times O\left( \frac{1 }{\ell ^{d-1}}\right)
=O\left( \frac{1 }{\ell ^{2d +\tfrac{d}{2} - \tfrac{5}{2}}}\right) \text{ .}
}$$
Likewise for $q=4$,
$$\displaylines{
\mathcal{K}_{\ell }(4;1)=\int_{(\cS^{d})^4}\left\vert G_{\ell ;d}(\cos d( x_{1},x_{2})
)\right\vert ^{3}\left\vert G_{\ell ;d}(\cos d(
x_{2},x_{3}) )\right\vert \times \cr
\times \left\vert G_{\ell
;d}(\cos d( x_{3},x_{4}) )\right\vert
^{3}\left\vert G_{\ell ;d}(\cos d( x_{4},x_{1})
)\right\vert dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{d})^4}\left\vert
G_{\ell ;d}(\cos d( x_{1},x_{2}) )\right\vert
^{3}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert \left\vert G_{\ell
;d}(\cos d( x_{3},x_{4}) )\right\vert^3\,dx_{1}dx_{2}dx_{3}dx_{4} = \cr
= O\left( \frac{1 }{\ell ^{d-\tfrac12}}\right) \times
O\left(\frac{1 }{\ell ^{\tfrac{d}2-\tfrac12}}\right) \times
O\left( \frac{1 }{\ell ^{d-\tfrac12}}\right) =O\left( \frac{1
}{\ell ^{2d + \tfrac{d}2-\tfrac32}}\right)
}$$
and moreover
$$\displaylines{
\mathcal{K}_{\ell }(4;2)=\int_{(\cS^{d})^4}\left\vert G_{\ell ;d}(\cos d( x_{1},x_{2})
)\right\vert ^{2} \times \cr
\times \left\vert G_{\ell ;d}(\cos d(
x_{2},x_{3}) )\right\vert^2 \left\vert G_{\ell
;d}(\cos d( x_{3},x_{4}) )\right\vert
^{2}\left\vert G_{\ell ;d}(\cos d( x_{4},x_{1})
)\right\vert^2 dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{d})^4}\left\vert
G_{\ell ;d}(\cos d( x_{1},x_{2}) )\right\vert
^{2}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert^2 \left\vert G_{\ell
;d}(\cos d( x_{3},x_{4}) )\right\vert^2\,d\underline{x} =\cr
=O\left( \frac{1 }{\ell ^{d-1}}\right) \times O\left(
\frac{1 }{\ell ^{d-1}}\right) \times O\left( \frac{1 }{\ell
^{d-1}}\right) =O\left( \frac{1 }{\ell ^{3d - 3}}\right) \text{ ,}
}$$
where we set $d\underline{x}:=dx_{1}dx_{2}dx_{3}dx_{4}$.
Similarly,
for $q=5$ we get the bounds
$$\displaylines{
\mathcal{K}_{\ell }(5;1)=\int_{(\cS^{d})^4}\left\vert G_{\ell ;d}(\cos d( x_{1},x_{2})
)\right\vert ^{4} \times \cr
\times \left\vert G_{\ell ;d}(\cos d(
x_{2},x_{3}) )\right\vert \left\vert G_{\ell
;d}(\cos d( x_{3},x_{4}) )\right\vert
^{4}\left\vert G_{\ell ;d}(\cos d( x_{4},x_{1})
)\right\vert dx_{1}dx_{2}dx_{3}dx_{4}\le \cr
\leq \int_{(\cS^{d})^4}\left\vert
G_{\ell ;d}(\cos d( x_{1},x_{2}) )\right\vert
^{4}\left\vert G_{\ell ;d}(\cos d( x_{2},x_{3})
)\right\vert \left\vert G_{\ell ;d}(\cos d(
x_{3},x_{4}) )\right\vert
^{4}d\underline{x} =\cr
= O\left( \frac{1}{\ell ^{d}}\right) \times O\left( \frac{1}{\ell^{\tfrac{d}2-\tfrac12}}\right) \times O\left( \frac{1 }{\ell ^{d}}\right)
=O\left( \frac{ 1}{\ell ^{2d +\tfrac{d}2-\tfrac12 }}\right)
}$$
and
$$
\mathcal{K}_{\ell }(5;2)=O\left( \frac{1 }{\ell ^{3d -2}}\right)\ .
$$
It is immediate to check that
$$
\mathcal{K}_{\ell }(6;1)=\mathcal{K}_{\ell }(7;1)=O\left( \frac{1 }{\ell^{2d +\tfrac{d}2-\tfrac12 }}\right)\ ,\quad
\mathcal{K}_{\ell }(6;2)=\mathcal{K}_{\ell }(7;2)=O\left( \frac{1 }{\ell ^{2d + d -1}}\right)\ ,
$$
whereas
$$
\mathcal{K}_{\ell }(6;3)=O\left( \frac{1 }{\ell ^{2d + d - \tfrac32}}\right)\quad \text{and} \quad
\mathcal{K}_{\ell }(7;3)=O\left( \frac{1}{\ell ^{2d + d -\tfrac12}}\right)\ .
$$
The remaining terms are indeed bounded thanks to \paref{simm}.
In order to finish the proof, it is enough to note that, as before, for $q>7$
\begin{equation}
\max_{r=1,...,q-1}\mathcal{K}_{\ell }(q;r)=\max_{r=1,...,\left[ \frac{q}{2}\right] }\mathcal{K}_{\ell }(q;r)\leq \max_{r=1,...,3}\mathcal{K}_{\ell
}(6;r)=O\left( \frac{1}{\ell^{2d +\tfrac{d}2-\tfrac12 }}\right) \text{ .}
\end{equation}
In particular we have
\begin{equation}
\max_{r=2,...,\left[ \frac{q}{2}\right] }\mathcal{K}_{\ell }(q;r)\leq
\mathcal{K}_{\ell }(7;2)\vee \mathcal{K}_{\ell }(7;3)=O\left( \frac{1}{\ell
^{3d-1}}\right) \text{ ,}
\end{equation}
so that the dominant terms are again those of the form $\mathcal{K}_{\ell }(q;1)$.
\end{proof}
\chapter{On the Defect distribution}
In this chapter we refer to \cite{mau}, where the high-energy limit distribution of the Defect of random hyperspherical harmonics is investigated. Indeed, in the previous chapter quantitative Central Limit Theorems for the empirical measure of $z$-excursion sets have been shown for every level but $z= 0$.
We find the exact asymptotic rate for the Defect variance and a CLT for the case of the $d$-sphere, $d>5$. The CLT in the $2$-dimensional case has already been proved in \cite{Nonlin}, whereas the variance has been investigated in \cite{def}.
The remaining cases ($d=3,4,5$) will be investigated in \cite{mau}, where moreover quantitative CLTs will be proved (work still in progress).
\section{Preliminaries}
Consider the sequence of random eigenfunctions $T_\ell$, $\ell\in \N$ \paref{Telle} on $\mathbb S^d$, $d\ge 2$. As in the previous chapter, the empirical measure of excursion
sets \paref{excset} can be written, for $z\in \mathbb R$, as
\begin{equation}
S_\ell(z) := \int_{\mathbb S^d} 1(T_\ell(x) > z)\,dx\ ,
\end{equation}
where $1(T_\ell(x) > z)$ denotes the indicator function of the event $\{T_\ell(x)>z\}$.
The case $z\ne 0$ has been treated in \cite{Nonlin, maudom} (see also Chapter 5).
Now consider the Defect, i.e. the difference between the measures of ``warm'' and ``cold'' regions
\begin{equation}\label{defect}
D_\ell :=\int_{\mathbb S^d} 1(T_\ell(x) > 0)\,dx -\int_{\mathbb S^d} 1(T_\ell(x) < 0)\,dx\ ;
\end{equation}
note that
$$
D_\ell =2S_\ell(0) - \mu_d\ ,
$$
$\mu_d$ denoting the hyperspherical volume \paref{ms}.
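Indeed, since the zero set $\{x \in \mathbb S^d : T_\ell(x)=0\}$ has almost surely zero Lebesgue measure, we have a.s.
$$
\int_{\mathbb S^d} 1(T_\ell(x) < 0)\,dx = \mu_d - S_\ell(0)\ ,
$$
whence $D_\ell = S_\ell(0) - (\mu_d - S_\ell(0)) = 2S_\ell(0) - \mu_d$.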
Recall that the Heaviside function is defined for $t\in \R$ as
$$
\mathcal H(t) := \begin{cases} 1\ &t>0\\
0\ &t=0\\
-1\ &t<0\ ,
\end{cases}
$$
thus \paref{defect} can be rewritten simply as
$$
D_\ell = \int_{\mathbb S^d} \mathcal H(T_\ell(x))\,dx\ .
$$
Note that exchanging expectation and integration on $\mathbb S^d$ we have
$$
\E[D_\ell]= \int_{\mathbb S^d}\E\left [ \mathcal H(T_\ell(x)) \right ]\,dx = 0\ ,
$$
since $\E\left [ \mathcal H(T_\ell(x)) \right ]=0$ for every $x$, by the symmetry of the Gaussian distribution.
\section{The Defect variance}
The proofs in this section are inspired by \cite{def}, where the case $d=2$ has been investigated.
\begin{lemma}\label{lemI}
For $\ell$ even we have
\begin{equation}
\Var(D_\ell)= \frac{4}{\pi}\mu_d \mu_{d-1} \int_{0}^{\frac{\pi}{2}} \arcsin (G_{\ell;d}(\cos \theta)) (\sin \theta)^{d-1}\,d\theta\ ,
\end{equation}
where $\mu_d$ is the hyperspherical volume \paref{ms} and $G_{\ell;d}$ the $\ell$-th normalized Gegenbauer polynomial (Chapter \ref{background} or \cite{szego}).
\end{lemma}
\begin{proof}
The proof is indeed analogous to the proof of Lemma 4.1 in \cite{def}. First note that
$$\displaylines{
\Var(D_\ell) = \int_{\cS^d} \int_{\cS^d}\E[\mathcal H(T_\ell(x))\mathcal H(T_\ell(y))]\,dx dy=\cr
=\mu_d \int_{\cS^d} \E[\mathcal H(T_\ell(x))\mathcal H(T_\ell(N))]\,dx\
}$$ by the isotropy of the random field $T_\ell$, where $N$ denotes some fixed point in $\mathbb S^d$. As explained in the proof of Lemma 4.1 in \cite{def}, we have
$$
\E[\mathcal H(T_\ell(x))\mathcal H(T_\ell(N))]= \frac{2}{\pi} \arcsin (G_{\ell;d}(\cos \vartheta))\ ,
$$
where $\vartheta$ is the geodesic distance between $x$ and $N$.
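This identity is an instance of the classical formula for Gaussian orthant probabilities: if $(X,Y)$ is a centred Gaussian vector with unit variances and correlation $\rho$, then $\mathbb{P}(X>0,Y>0)= \frac14 + \frac{\arcsin \rho}{2\pi}$, so that by symmetry
$$
\E[\mathcal H(X)\mathcal H(Y)] = 4\,\mathbb{P}(X>0,Y>0) - 1 = \frac{2}{\pi}\arcsin \rho\ ;
$$
here $\rho = \E[T_\ell(x)T_\ell(N)]=G_{\ell;d}(\cos \vartheta)$.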
Moreover, evaluating the last integral in hyperspherical coordinates, we get
\begin{equation}\label{vardef1}
\Var(D_\ell) = \mu_d \mu_{d-1} \int_0^\pi \frac{2}{\pi} \arcsin (G_{\ell;d}(\cos \vartheta))(\sin \vartheta)^{d-1} d\vartheta\ .
\end{equation}
For $\ell$ even, we can hence write
\begin{equation}
\Var(D_\ell)=\frac{4}{\pi}\mu_d \mu_{d-1} \int_0^{\pi/2} \arcsin(G_{\ell;d}(\cos \vartheta))(\sin \vartheta)^{d-1} d\vartheta\ .
\end{equation}
\end{proof}
Note that, by \paref{vardef1}, if $\ell$ is odd then $\Var(D_\ell)=0$, i.e. $D_\ell=0$ a.s.; therefore we can restrict ourselves to even $\ell$ only.
The main result of this section is the following.
\begin{theorem}\label{thdefvar}
As $\ell\to +\infty$ along even integers, the Defect variance satisfies
$$
\Var(D_\ell) = \frac{C_d}{\ell^d}(1 + o(1))\ ,
$$
where $C_d$ is a strictly positive constant depending on $d$, that can be expressed by the formula
$$\displaylines{
C_d=\frac{4}{\pi}\mu_d \mu_{d-1}\int_{0}^{+\infty} \psi^{d-1}
\Big ( \arcsin \left (2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\,
J_{\frac{d}{2}-1}(\psi )\psi ^{-\left( {\textstyle\frac{d}{2}}
-1\right)}\right) +\cr
- 2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\,
J_{\frac{d}{2}-1}(\psi )\psi ^{-\left( {\textstyle\frac{d}{2}}
-1\right)} \Big )\,d\psi\ .
}$$
\end{theorem}
\begin{proof}
Here we are inspired by \cite[Proposition 4.2, Theorem 1.2]{def}.
Since, for $\ell$ even \cite{szego},
$$
\int_0^{\pi/2} G_{\ell;d}(\cos \vartheta)(\sin \vartheta)^{d-1} d\vartheta=0\ ,
$$
from Lemma \ref{lemI} we can write
$$\displaylines{
\Var(D_\ell)=
\frac{4}{\pi}\mu_d \mu_{d-1} \int_{0}^{\frac{\pi}{2}} \left (\arcsin (G_{\ell;d}(\cos \theta))-
G_{\ell;d}(\cos \theta)\right )(\sin \theta)^{d-1}\,d\theta\ .
}$$
Let now
$$
\arcsin(t) - t = \sum_{k=1}^{+\infty} a_k t^{2k+1}
$$
be the Taylor expansion of the arcsine,
where
$$
a_k = \frac{(2k)!}{4^k (k!)^2 (2k+1)}\sim \frac{1}{2\sqrt{\pi}k^{3/2}}\ ,\quad k\to +\infty\ .
$$ Since the Taylor series is uniformly absolutely convergent,
we may write
$$
\Var(D_\ell)
=\frac{4}{\pi}\mu_d \mu_{d-1} \sum_{k=1}^{+\infty} a_k \int_{0}^{\frac{\pi}{2}}
G_{\ell;d}(\cos \theta)^{2k+1}(\sin \theta)^{d-1}\,d\theta\ .
$$
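For instance, the first coefficients are
$$
a_1 = \frac{2!}{4\cdot 3} = \frac16\ ,\qquad a_2 = \frac{4!}{16\cdot 4\cdot 5} = \frac{3}{40}\ ,
$$
consistently with $\arcsin t = t + \frac{t^3}{6} + \frac{3t^5}{40} + \dots$; the asymptotic behavior of $a_k$ follows from the central binomial estimate $\binom{2k}{k} \sim \frac{4^k}{\sqrt{\pi k}}$, since $a_k = \binom{2k}{k}\frac{1}{4^k(2k+1)}$.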
Now from Proposition \ref{varianza} we have
\begin{equation}\label{limG}
\lim_{\ell\to +\infty} \ell^d \int_{0}^{\frac{\pi}{2}}
G_{\ell;d}(\cos \theta)^{2k+1}(\sin \theta)^{d-1}\,d\theta= c_{2k+1;d}\ ,
\end{equation}
where
\begin{equation*}
c_{2k+1;d}=\left(2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\right)^{2k+1}\int_{0}^{+\infty
}J_{\frac{d}{2}-1}(\psi )^{2k+1}\psi ^{-(2k+1)\left( {\textstyle\frac{d}{2}}
-1\right) +d-1}d\psi\ . \label{cq}
\end{equation*}
Therefore we would expect that
\begin{equation}\label{guess}
\Var(D_\ell) \sim \frac{C_d}{\ell^d}\ ,
\end{equation}
where
\begin{equation}\label{guess1}
C_d = \frac{4}{\pi}\mu_d \mu_{d-1} \sum_{k=1}^{+\infty} a_kc_{2k+1;d}\ .
\end{equation}
Before proving \paref{guess}, which is the statement of this theorem,
let us check that $C_d>0$, assuming \paref{guess1}.
This is easy since the r.h.s. of \paref{guess1} is a series of nonnegative terms and from \cite[p. 217]{andrews} we have
\begin{equation}
c_{3;d} = \left(2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\right)^3
\frac{3^{\frac{d}{2} -\frac32}}{2^{3\left (\frac{d}2-1 \right)-1}\sqrt \pi\,
\Gamma \left ( \frac{d}{2} -\frac12 \right )}>0\ .
\end{equation}
Moreover, assuming \paref{guess1} true, we get
$$\displaylines{
C_d = \cr
= \frac{4}{\pi}\mu_d \mu_{d-1} \sum_{k=1}^{+\infty} a_k
\left(2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\right)^{2k+1}\int_{0}^{+\infty
}J_{\frac{d}{2}-1}(\psi )^{2k+1}\psi ^{-(2k+1)\left( {\textstyle\frac{d}{2}}
-1\right) +d-1}d\psi=\cr
=\frac{4}{\pi}\mu_d \mu_{d-1}\int_{0}^{+\infty} \sum_{k=1}^{+\infty} a_k
\left(2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\,
J_{\frac{d}{2}-1}(\psi )\psi ^{-\left( {\textstyle\frac{d}{2}}
-1\right)}\right)^{2k+1}\psi^{d-1}\, d\psi=\cr
=\frac{4}{\pi}\mu_d \mu_{d-1}\int_{0}^{+\infty}
\Big( \arcsin \left (2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\,
J_{\frac{d}{2}-1}(\psi )\psi ^{-\left( {\textstyle\frac{d}{2}}
-1\right)}\right) +\cr
- 2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\,
J_{\frac{d}{2}-1}(\psi )\psi^{-\left( {\textstyle\frac{d}{2}}
-1\right)} \Big)\psi^{d-1}\,d\psi\ ,
}$$
which is the second statement of this theorem.
To justify the exchange
of the integration and summation order, we consider the finite summation
$$
\sum_{k=1}^{m} a_k \int_0^{+\infty}
\left(2^{\frac{d}{2} - 1}\left (\frac{d}2-1 \right)!\,
J_{\frac{d}{2}-1}(\psi )\psi ^{-\left( {\textstyle\frac{d}{2}}
-1\right)}\right)^{2k+1}\psi^{d-1}\, d\psi
$$
using $a_k\sim \frac{c}{k^{3/2}}$ ($c>0$) and the asymptotic behavior of Bessel functions for large argument \cite{szego} to bound the contributions of tails, and take the limit $m\to +\infty$.
Let us now formally prove the asymptotic result for the variance \paref{guess}.
Note that
$$\displaylines{
\sum_{k=m+1}^{+\infty} a_k \int_{0}^{\frac{\pi}{2}}
\left | G_{\ell;d}(\cos \theta) \right |^{2k+1}(\sin \theta)^{d-1}\,d\theta\le\cr
\le \sum_{k=m+1}^{+\infty} a_k \int_{0}^{\frac{\pi}{2}}
\left | G_{\ell;d}(\cos \theta) \right |^{5}(\sin \theta)^{d-1}\,d\theta
\ll \frac{1}{\ell^d} \sum_{k=m+1}^{+\infty} \frac{1}{k^{3/2}}\ll \frac{1}{\sqrt{m}\, \ell^d}\ .
}$$
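The tail estimate $\sum_{k>m}k^{-3/2}\ll 1/\sqrt{m}$ used in the last display follows by comparison with $\int_m^{+\infty}t^{-3/2}\,dt = 2/\sqrt{m}$. A small numerical illustration (the truncation point $10^6$ is an arbitrary choice of ours):

```python
import math

# tail of sum_{k>m} k^(-3/2), truncated at N terms
def tail(m, N=10**6):
    return sum(k**-1.5 for k in range(m + 1, N))

for m in (10, 50, 200):
    # comparison with the integral: tail <= \int_m^infty t^(-3/2) dt = 2/sqrt(m)
    assert tail(m) <= 2 / math.sqrt(m)
```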
Therefore we have for $m=m(\ell)$ to be chosen
$$
\Var(D_\ell)=\frac{4}{\pi}\mu_d \mu_{d-1} \sum_{k=1}^{m} a_k \int_{0}^{\frac{\pi}{2}}
G_{\ell;d}(\cos \theta)^{2k+1}(\sin \theta)^{d-1}\,d\theta + O\left ( \frac{1}{\sqrt{m}\, \ell^d} \right)\ .
$$
From \paref{limG}, we can write
$$
\Var(D_\ell) =C_{d,m} \cdot \frac{1}{\ell^d} + o(\ell^{-d}) + O\left ( \frac{1}{\sqrt{m}\, \ell^d} \right )\ ,
$$
where
$$
C_{d,m}:=\frac{4}{\pi}\mu_d \mu_{d-1} \sum_{k=1}^{m} a_k c_{2k+1;d}\ .
$$
Now since $C_{d,m}\to C_d$ as $m\to +\infty$, we can conclude.
\end{proof}
\section{The CLT}
In this section we prove a CLT for the Defect of random eigenfunctions on the $d$-sphere, $d\ne 3,4,5$, whose proof is inspired by the proof of Corollary 4.2 in \cite{Nonlin}.
\begin{theorem}\label{thDef}
As $\ell\to +\infty$ along even integers, we have
$$
\frac{D_\ell}{\sqrt{\Var(D_\ell)}}\mathop{\to}^{\mathcal L} Z\ ,
$$
where $Z\sim \mathcal N(0,1)$.
\end{theorem}
Since our aim in the near future is to establish a quantitative CLT, we first compute the chaotic expansion of the Defect.
\subsection{Chaotic expansions}
Let us write the chaotic expansion \paref{chaos exp} for the Defect in the form
$$
D_\ell = \sum_{q=0}^{+\infty} \frac{J_q(D_\ell)}{q!}\int_{\mathbb S^d} H_q(T_\ell(x))\,dx\ .
$$
Recalling that $D_\ell=2S_\ell(0) -\mu_d$, let us find the chaotic expansion of $S_\ell(0)$. Note that
$\E[S_\ell(0)]= \frac12 \mu_d$.
For $q\ge 1$
$$\displaylines{
J_q(S_\ell(0)) = \int_\R 1(z>0) (-1)^q \phi^{-1}(z) \frac{d^q}{dz^q} \phi(z) \phi(z)\,dz=\cr
= (-1)^q \int_0^{+\infty} \frac{d^q}{dz^q} \phi(z) \,dz =\cr
=-(-1)^{(q-1)} \phi(z) \phi^{-1}(z) \frac{d^{(q-1)}}{dz^{(q-1)}} \phi(z) \Big |_0^{+\infty}=\cr
=-\phi(z) H_{q-1}(z)|_0^{+\infty}=\phi(0) H_{q-1}(0)=\begin{cases} 0\ & q\ \text{even}\\
\frac{(-1)^{\frac{q-1}{2}}}{\sqrt{2\pi} 2^{\frac{q-1}{2}} \left ( \frac{q-1}{2}\right )!}=
\frac{(-1)^{\frac{q-1}{2}}}{\sqrt{2\pi} (q-1)!!} & q\ \text{odd}\ .
\end{cases}
}$$
Therefore the Wiener-It\^o chaos decomposition for the Defect is
$$
D_\ell = 2 S_\ell(0)-\mu_d = \sum_{k=1}^{+\infty}
\sqrt{\frac{2}{\pi}}\frac{(-1)^{k}}{ (2k+1)! (2k)!!}\int_{\mathbb S^d} H_{2k+1}(T_\ell(x))\,dx\ ,
$$
with
$$
J_{2k}(D_\ell)=0\ , \qquad J_{2k+1}(D_\ell) =\sqrt{\frac{2}{\pi}}\frac{(-1)^{k}}{(2k)!!}\ .
$$
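The vanishing of the even-order coefficients and the closed form $J_q(S_\ell(0))=\phi(0)H_{q-1}(0)$ can be double-checked numerically. In the sketch below, numpy's `hermite_e` module implements the probabilists' Hermite polynomials, which match $H_q$ in \paref{hermite}; the quadrature grid and tolerances are our own choices.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite H_q

phi = lambda v: np.exp(-v**2 / 2) / np.sqrt(2 * np.pi)  # standard Gaussian pdf

x = np.linspace(0.0, 12.0, 240001)
w = np.full(x.size, x[1] - x[0]); w[0] /= 2; w[-1] /= 2  # trapezoid weights

def He(q, v):
    c = np.zeros(q + 1); c[q] = 1.0
    return hermeval(v, c)

def J(q):  # J_q(S_ell(0)) = int_0^infty H_q(z) phi(z) dz
    return float(np.sum(w * He(q, x) * phi(x)))

for q in range(1, 9):
    assert abs(J(q) - phi(0.0) * He(q - 1, 0.0)) < 1e-6  # = phi(0) H_{q-1}(0)

# the double factorial identity used above: (2k)!! = 2^k k!
for k in range(6):
    assert math.prod(range(2, 2 * k + 1, 2)) == 2**k * math.factorial(k)
```

The tail beyond $z=12$ is negligible for $q\le 8$, since $\phi$ decays super-polynomially.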
\subsection{Proof of Theorem \ref{thDef}}
Let $m\in \N, m\ge 2$ to be chosen later and set
$$
D_{\ell,m} :=\sum_{k=1}^{m-1}
\sqrt{\frac{2}{\pi}}\frac{(-1)^{k}}{ (2k+1)! (2k)!!}\int_{\mathbb S^d} H_{2k+1}(T_\ell(x))\,dx\ .
$$
Simple estimates give
$$\displaylines{
\E\left [ \left ( \frac{D_\ell}{\sqrt{\Var(D_\ell)}} - \frac{D_{\ell,m}}{\sqrt{\Var(D_{\ell,m})}} \right )^2 \right ]\le \cr
\le 2\E\left [ \left ( \frac{D_\ell}{\sqrt{\Var(D_\ell)}} - \frac{D_{\ell,m}}{\sqrt{\Var(D_{\ell})}} \right )^2 \right ] + 2\E\left [ \left ( \frac{D_{\ell,m}}{\sqrt{\Var(D_\ell)}} - \frac{D_{\ell,m}}{\sqrt{\Var(D_{\ell,m})}} \right )^2 \right ]\le \cr
\le 2\E\left [ \left ( \frac{D_\ell}{\sqrt{\Var(D_\ell)}} - \frac{D_{\ell,m}}{\sqrt{\Var(D_{\ell})}} \right )^2 \right ] + 2 \left ( \frac{\Var(D_{\ell,m})}{\Var(D_\ell)} +1 - 2\sqrt{ \frac{\Var(D_{\ell,m})}{\Var(D_{\ell})}} \right )\ .
}$$
The first term to control is therefore
$$
\frac{D_\ell}{\sqrt{\Var[D_\ell]}} - \frac{D_{\ell,m}}{\sqrt{\Var[D_\ell]}}\ .
$$
We have, repeating the same argument as in \cite{Nonlin}
$$\displaylines{
\E\left [\left (\frac{D_\ell}{\sqrt{\Var[D_\ell]}} -
\frac{D_{\ell,m}}{\sqrt{\Var[D_\ell]}} \right )^2 \right ]
=\frac{1}{\Var[D_\ell]}\E\left [\left (D_\ell -
D_{\ell,m} \right )^2 \right ] =\cr
=\frac{1}{\Var[D_\ell]}\E\left [\left (\sum_{k=m}^{+\infty}
\sqrt{\frac{2}{\pi}}\frac{(-1)^{k}}{ (2k+1)! (2k)!!}\int_{\mathbb S^d} H_{2k+1}(T_\ell(x))\,dx \right )^2 \right ] =\cr
=\frac{1}{\Var[D_\ell]}\sum_{k=m}^{+\infty}\frac{2}{\pi}\frac{1}{ ((2k+1)! (2k)!!)^2}
\Var\left (
\int_{\mathbb S^d} H_{2k+1}(T_\ell(x))\,dx\right ) =\cr
=\frac{1}{\Var[D_\ell]}\left (\frac{1}{\ell^d}\sum_{k=m}^{+\infty}a_k c_{2k+1;d} + o(\ell^{-d}) \right )\le \cr
\le \frac{1}{\Var[D_\ell]}\left (\frac{1}{2\sqrt{\pi}}
\frac{1}{\ell^d}\sum_{k=m}^{+\infty}\frac{c_{5;d}}{k^{\frac32}} + o(\ell^{-d}) \right ) = \frac{1}{\Var[D_\ell]}\times O \left ( \frac{1}{\ell^d \sqrt{m} } \right )=O \left (
\frac{1}{\sqrt m}\right)\ ,
}$$
where we used Theorem \ref{thdefvar}.
Moreover for the second term we have, from \paref{limG} and Theorem \ref{thdefvar}
$$\displaylines{
\frac{\Var(D_{\ell,m})}{\Var(D_\ell)} +1 - 2\sqrt{ \frac{\Var(D_{\ell,m})}{\Var(D_{\ell})}} = 2 +O\left( \frac{1}{\sqrt{m}} \right ) -2 \sqrt{1 +O\left( \frac{1}{\sqrt{m}} \right ) } =\cr
= O\left( \frac{1}{\sqrt{m}} \right )\ .
}$$
Putting things together we immediately get
$$
\E\left [ \left ( \frac{D_\ell}{\sqrt{\Var(D_\ell)}} - \frac{D_{\ell,m}}{\sqrt{\Var(D_{\ell,m})}} \right )^2 \right ]= O\left( \frac{1}{\sqrt{m}} \right )\ .
$$
For every fixed $m$, Corollary \ref{cor1} and \S \ref{genpoly} give, for $d\ne 3,4,5$,
$$
\frac{D_{\ell,m}}{\sqrt{\Var(D_{\ell,m})}}\to Z\ ,
$$
where $Z\sim \mathcal N(0,1)$, so that since $m$ can be chosen arbitrarily large we must have
$$
\frac{D_{\ell}}{\sqrt{\Var(D_{\ell})}}\to Z\ .
$$
\section{Final remarks}
For the remaining cases $d=3,4,5$, we need to prove a CLT for the random variables $h_{\ell,3;d}$, as $\ell\to +\infty$. Indeed, the bounds on fourth order cumulants obtained in Theorem \ref{teo1} are not enough to guarantee the convergence to the standard Gaussian distribution. In \cite{mau} we will investigate the exact rate for fourth order cumulants of $h_{\ell,3;d}$, which will allow us to extend Theorem \ref{thDef} to dimensions $d=3,4,5$.
Moreover, we will prove quantitative CLTs for the Defect in the high-energy limit which should be of the form
$$
d_W\left ( \frac{D_{\ell}}{\sqrt{\Var(D_{\ell})}} , Z \right ) = O\left( \ell^{-1/4} \right)\ ,
$$
where $d_W$ denotes Wasserstein distance \paref{prob distance}.
\chapter{Random length of level curves}
In this chapter, our aim is to investigate the high-energy behavior of the length of level curves of Gaussian spherical eigenfunctions $T_\ell$, $\ell\in \N$ \paref{Telle}, on $\mathbb S^2$.
\section{Preliminaries}
\subsection{Length: mean and variance}
Consider the total length of level curves of random
eigenfunctions, i.e. the sequence of random variables $\{\Lc_{\ell}(z)\}_{\ell \in \N}$ given by, for $z\in \R$,
\begin{equation}\label{e:lengthS}
\Lc_\ell(z) := \text{length}(T_\ell^{-1}(z)).
\end{equation}
As already anticipated in the Introduction of this thesis, the expected value of $\Lc_{\ell}(z)$ was computed (see e.g. \cite{wigsurvey}) to be
\begin{equation}
\E[\mathcal L_\ell(z)]= 4\pi \frac{\e^{-z^2/2}}{2\sqrt{2}}\sqrt{\ell(\ell+1)},
\end{equation}
consistent with Yau's conjecture \cite{Yau, Yau2}.
The asymptotic behaviour of the variance $\Var(\Lc_{\ell}(z))$ of $\Lc_{\ell}(z)$ was
resolved in \cite{wigsurvey, Wig} as follows.
For $z\ne 0$, we have
\begin{equation}
\Var(\mathcal L_\ell(z))\sim \ell\cdot C z^4\e^{-z^2} \ ,\quad \ell\to +\infty\ ,
\end{equation}
for some $C>0$. Moreover, I. Wigman computed the exact constant (private computation)
$$
C= \frac{\pi^2}{2}\ .
$$
For the nodal case ($z=0$) we have
\begin{equation*}
\Var(\mathcal L_\ell(0))\sim \frac{1}{32} \cdot \log \ell\ ,\quad \ell\to +\infty\ .
\end{equation*}
The order of magnitude of $\Var(\Lc_{\ell}(0))$ is
smaller than the natural guess
(i.e., $\ell$, as in the non-nodal case); this is due to a cancellation in the asymptotic expansion of the nodal variance (``obscure Berry's cancellation'' -- see \cite{wigsurvey, Wig}) and is similar to the {\em cancellation phenomenon} observed by Berry
in a different setting \cite{Berry2002}.
\subsection{Main result}
{{}
Our principal aim is to study the asymptotic behaviour, as $\ell\to \infty$, of the distribution of the sequence of normalized random variables
\begin{equation}\label{e:culpS}
\widetilde{\Lc}_{\ell}(z) := \frac{\mathcal{L}_{\ell}(z) - \E[\mathcal{ L}_{\ell}(z)]}{\sqrt{\Var(\mathcal{L}_{\ell}(z) )}}, \quad \ell\geq 1.
\end{equation}
The following statement is the main result of this chapter.
\begin{theorem}
\label{main th S} For $z\ne 0$ the sequence $\big\{\widetilde{\mathcal L}_\ell(z) : \ell\geq 1\big\}$ converges in distribution to a standard Gaussian r.v. $Z$. In particular
\begin{equation}
\lim_{\ell\to +\infty} d(\widetilde{\mathcal L}_\ell(z), Z) = 0\ ,
\end{equation}
where $d$ denotes either the Kolmogorov distance \paref{prob distance}, or an arbitrary distance
metrizing the weak convergence on $\mathscr P$, the space of all probability measures on $\R$ (see Chapter \ref{background}).
\end{theorem}
}
{{}
\subsection{ Wiener chaos and Berry's cancellation}\label{ss:berryintroS}
The proof of our result relies on a pervasive use of Wiener-It\^o chaotic expansions (see e.g. Chapter \ref{background}), and the reader is referred
to the two monographs \cite{noupebook, P-T} for an exhaustive discussion.
According to \eqref{Telle}, the Gaussian spherical eigenfunctions considered in this work are built starting from a family of i.i.d. Gaussian r.v.'s $\{a_{\ell,m} : \ell\ge 1, m=1,\dots, 2\ell+1\}$, defined on some probability space $(\Omega, \mathscr{F}, \mathbb{P})$ and verifying the following properties:
$$\E[a_{\ell,m}]=0\ ,\quad \E[a_{\ell,m} a_{\ell',m'}]=\frac{4\pi}{2\ell+1}\delta_{m}^{m'} \delta_{\ell}^{\ell'}\ .$$
We define ${\bf A}$ to be the closure in $L^2(\mathbb{P})$ of all real finite linear combinations of $\{a_{\ell,m} : \ell\ge 1, m=1,\dots, 2\ell+1\}$. ${\bf A}$ is a real centered Gaussian space, that is, a linear space of jointly Gaussian centered real-valued random variables which is stable under convergence in $L^2(\mathbb{P})$ (compare to Chapter \ref{background}).
\begin{definition}\label{d:chaosS}{\rm For every $q=0,1,2,...$ the $q$th {\it Wiener chaos} associated with ${\bf A}$ (compare to Chapter \ref{background}), written $C_q$, is the closure in $L^2(\mathbb{P})$ of all real finite linear combinations of random variables with the form
$$
H_{p_1}(\xi_1)H_{p_2}(\xi_2)\cdots H_{p_k}(\xi_k),
$$
where the integers $p_1,...,p_k \geq 0$ verify $p_1+\cdots+p_k = q$, and $(\xi_1,...,\xi_k)$ is a real centered Gaussian vector with identity covariance matrix extracted from ${\bf A}$ (note that, in particular, $C_0 = \mathbb{R}$).}
\end{definition}
$C_q \,\bot\, C_m$ (where the orthogonality holds in the sense of $L^2(\mathbb{P})$) for every $q\neq m$, and moreover
\begin{equation}\label{e:chaosS}
L^2(\Omega, \sigma({\bf A}), \mathbb{P}) = \bigoplus_{q=0}^\infty C_q,
\end{equation}
that is: each real-valued functional $F$ of ${\bf A}$ can be (uniquely) represented in the form
\begin{equation}\label{e:chaos2S}
F = \sum_{q=0}^\infty {\rm proj}(F \, | \, C_q),
\end{equation}
where ${\rm proj}(\bullet \, | \, C_q)$ stands for the projection operator onto $C_q$, and the series converges in $L^2(\mathbb{P})$. Plainly, ${\rm proj}(F \, | \, C_0) = \E[ F]$.
Now recall the definition of $T_\ell$ given in \eqref{Telle}: the following elementary statement shows that the Gaussian field
$$
\left\{ T_\ell(\theta),\, \frac{\partial}{\partial \theta_1} T_\ell (\theta),\, \frac{\partial}{\partial \theta_2} T_\ell (\theta) : \theta =(\theta_1,\theta_2)\in \mathbb{S}^2\right\}
$$
is a subset of ${\bf A}$, for every $\ell\in \N$.
\begin{prop} \label{p:fieldS}Fix $\ell\in \N$, let the above notation and conventions prevail. Then, for every $j=1,2$ one has that
\begin{equation}\label{e:partialS}
\partial_j T_\ell(\theta) := \frac{\partial}{\partial \theta_j} T_\ell(\theta) = \sum_{m} a_{\ell,m} \frac{\partial}{\partial \theta_j} Y_{\ell,m}(\theta),
\end{equation}
and therefore $T_\ell (\theta), \, \partial_1 T_\ell (\theta), \, \partial_2 T_\ell (\theta) \in {\bf A}$, for every $\theta\in \mathbb{S}^2$. Moreover, for every fixed $\theta\in \mathbb{S}^2$, one has that $T_\ell (\theta), \, \partial_1 T_\ell (\theta), \, \partial_2 T_\ell (\theta)$ are stochastically independent (see e.g. \cite{Wig}).
\end{prop}
We shall often use the fact that
$$\displaylines{
\Var[\partial_j T_\ell (\theta)] = \frac{\ell(\ell+1)}{2}\ ,
}$$
and, accordingly, for $\theta=(\theta_1, \theta_2)\in \mathbb{S}^2$ and $j=1,2$, we will denote by $\partial_j \widetilde T_\ell (\theta)$ the normalized derivative
\begin{equation}\label{e:normaS}
\partial_j \widetilde T_\ell (\theta) := \sqrt{\frac{2}{\ell(\ell+1)}} \frac{\partial}{\partial \theta_j} T_\ell (\theta) \ .
\end{equation}
The next statement gathers together some of the main technical achievements of the present chapter. It shows in particular that the already evoked `arithmetic Berry cancellation phenomenon' (see \cite{Berry2002}) -- according to which the variance of the nodal length $\mathcal{L}_\ell:=\mathcal L_\ell(0)$ (as defined in \eqref{e:lengthS}) has asymptotically the same order as $\log \ell$ (rather than the expected order $\ell$) -- \emph{should} be a consequence of the following:
\begin{itemize}
\item[\bf (i)] The projection of $\mathcal{L}_\ell(z)$ on the second Wiener chaos $C_2$ is {\it exactly equal to zero} for every $\ell\in \N$ if and only if $z=0$ (the same holds for the projection of $\mathcal{L}_\ell(0)$ onto any chaos of odd order $q\geq 3$).
\item[\bf (ii)] For $z\ne 0$, the variance of $ {\rm proj}(\mathcal{L}_{\ell}(z) \, | C_2)$ has the order $ \ell$, as $\ell\to +\infty$, and one has moreover that
$$
\Var(\Lc_{\ell}(z)) \sim \Var\left( {\rm proj}(\mathcal{L}_{\ell}(z) \, | C_2)\right)\ .
$$
\end{itemize}
\subsection{Plan}
The rest of the chapter is organized as follows: \S 7.2 contains a study of the chaotic
representation of nodal lengths, \S 7.3 focuses on the projection of nodal lengths on the second
Wiener chaos, whereas \S 7.4 contains a proof of our main result.
}
\section{Chaotic expansions}\label{expanS}
The aim of this section is to derive an explicit expression for each projection of the type ${\rm proj}(\Lc_\ell(z) \, | \, C_q)$, $q\geq 1$. In order to accomplish this task, we first focus on a family of auxiliary random variables $\{\Lc_\ell^\varepsilon (z) : \varepsilon > 0 \}$ that approximate $\Lc_\ell(z)$ in the sense of the $L^2(\P)$-norm.
\subsection{Preliminary results}
$\bullet$ For each $z\in \R$, $\mathcal L_\ell(z)$ is
the $\omega$-a.s. limit, for $\varepsilon \to 0$, of the $\varepsilon$-approximating r.v.
\begin{equation}\label{napproxS}
\mathcal L_\ell^\varepsilon(z,\omega) := \frac{1}{2\varepsilon}
\int_{\mathbb S^2} 1_{[z-\varepsilon, z+\varepsilon]}(T_\ell(\theta,\omega))\|\nabla T_\ell (\theta,\omega) \|\,d\theta\ ,
\end{equation}
where
$$
\nabla T_\ell := (\partial_1 T_\ell,\partial_2
T_\ell),
$$
$\| \cdot \|$ is the norm in $\mathbb R^2$, and we have used the notation \eqref{e:partialS}.
We know moreover that

$\bullet$ $\mathcal L_\ell (z)\in L^2(\P)$, for every $z\in \R$ (\cite{wigsurvey, Wig}).

\noindent Now we want to prove that $\mathcal L_\ell (z)$ is the $L^2(\P)$-limit, as
$\varepsilon \to 0$,
of $\mathcal L_\ell^\varepsilon(z)$.
Remark first that arguments analogous to those in \cite{Wig} prove that
the function $z\mapsto \E[\mathcal L_\ell (z)^2]$ is continuous (further details will appear in \cite{mistosfera}).
\begin{lemma}\label{approxS}
It holds that
$$
\lim_{\varepsilon\to 0} \E[ (\mathcal L_\ell^\varepsilon (z) - \mathcal L_\ell(z) )^2 ] = 0\ .
$$
\end{lemma}
\begin{proof}
Since $\mathcal L^\varepsilon_\ell (z)\to_{\varepsilon} \mathcal L_\ell (z)$ a.s., it is enough to show that
$$
\E[\mathcal L^\varepsilon_\ell (z)^2]\to \E[\mathcal L_\ell (z)^2]\ ,
$$
and then use the well-known fact that convergence a.s. plus convergence of the norms
implies convergence in mean square \cite[Proposition 3.39]{cannarsa}.
By Fatou's Lemma (for the first inequality) we have
$$\displaylines{
\E[\mathcal L_\ell (z)^2]\le \liminf_\varepsilon \E[\mathcal L^\varepsilon_\ell (z)^2]\le
\limsup_\varepsilon \E[\mathcal L^\varepsilon_\ell (z)^2]\mathop{=}^{*}\cr
\mathop{=}^{*}\limsup_\varepsilon \E\left [\left (
\int_{\mathbb \R} \mathcal L_\ell (u)\frac{1}{2\varepsilon}1_{[-\varepsilon, \varepsilon]}
(u-z)\,du\right) ^2 \right ]\ ,
}$$
where to establish the equality $\mathop{=}^{*}$
we have used the co-area formula \cite[(7.14.13)]{adlertaylor}, which in our case gives
$$\displaylines{
\mathcal L_\ell^\varepsilon(z) =\frac{1}{2\varepsilon}
\int_{\mathbb S^2} 1_{[z-\varepsilon, z+\varepsilon]}(T_\ell (\theta))\|\nabla T_\ell (\theta) \|\,d\theta=\cr
=\int_\R du \int_{T_\ell ^{-1}(u)}\frac{1}{2\varepsilon}
1_{[-\varepsilon, \varepsilon]}(u-z)\,d\theta
=\frac{1}{2\varepsilon}\int_\R \mathcal L_\ell (u)1_{[-\varepsilon, \varepsilon]}(u-z)\,du\ .
}$$
Now by Jensen's inequality we find that
$$\displaylines{
\limsup_\varepsilon \E\left [\left (
\int_{\mathbb \R} \mathcal L_\ell (u)\frac{1}{2\varepsilon}1_{[-\varepsilon, \varepsilon]}
(u-z)\,du\right) ^2 \right ]\le \cr
\le \limsup_\varepsilon
\int_{\mathbb \R} \E\left [\mathcal L_\ell (u)^2\right ]\frac{1}{2\varepsilon}1_{[-\varepsilon, \varepsilon]}
(u-z)\,du =\cr
=\E\left [\mathcal L_\ell (z)^2\right ]\ ,
}$$
the last step following by continuity of the map $u\mapsto \E[\mathcal L_\ell (u)^2]$.
\end{proof}
To conclude the section, we observe that the previous result suggests that the random variable $\mathcal L_\ell(z)$ can be formally written as
\begin{equation}\label{formalS}
\mathcal L_\ell(z) = \int_{ \mathbb S^2} \delta_z(T_\ell(\theta))\| \nabla T_\ell \|
\,d\theta\ ,
\end{equation}
where $\delta_z$ denotes the Dirac mass in $z$.
\subsection{The chaotic expansion for $\mathcal L_\ell(z)$}
In view of the convention \eqref{e:normaS}, throughout the section we will rewrite \paref{napproxS} as
\begin{equation}\label{formal2S}
\mathcal L_\ell^\eps (z)= \sqrt{\frac{\ell(\ell+1)}{2}}\frac{1}{2\eps}\int_{\mathbb S^2}
1_{[z-\eps,z+ \eps]}(T_\ell(\theta)) \sqrt{\partial_1 \widetilde
T_\ell(\theta)^2+\partial_2 \widetilde T_\ell(\theta)^2}\,d\theta\ .
\end{equation}
We also need to introduce two collections of coefficients
$\{\alpha_{n,m} : n,m\geq 1\}$ and $\{\beta_{l}(z) : l\geq 0\}$, that are connected to the (formal) Hermite expansions of the norm $\| \cdot
\|$ in $\R^2$ and of the Dirac mass $ \delta_z(\cdot)$, respectively.
These are given by
\begin{equation}\label{e:beta}
\beta_{l}(z):= \phi(z)H_{l}(z)\ ,
\end{equation}
where $\phi$ is the standard Gaussian pdf, $H_{l}$ denotes the $l$-th Hermite polynomial \paref{hermite}, and $\alpha_{n,m}=0$ unless both $n$ and $m$ are even, in which case
\begin{equation}\label{e:alpha}
\alpha_{2n,2m}=\sqrt{\frac{\pi}{2}}\frac{(2n)!(2m)!}{n!
m!}\frac{1}{2^{n+m}} p_{n+m}\left (\frac14 \right)\ ,
\end{equation}
where for $N=0, 1, 2, \dots $ and $x\in \R$
\begin{equation}\label{pN}
p_{N}(x) :=\sum_{j=0}^{N}(-1)^{j}\,(-1)^{N}{N
\choose j}\,\frac{(2j+1)!}{(j!)^2}\, x^j \ ,
\end{equation}
$\frac{(2j+1)!}{(j!)^2}$ being the so-called {\it swinging factorial}
restricted to odd indices.
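The coefficients just introduced can be checked numerically: by Step 2 of the proof below, $\alpha_{2n,2m}$ coincides with the Gaussian integral \eqref{coeff}. The following sketch compares the closed form \eqref{e:alpha} with a brute-force trapezoidal evaluation of that integral; the function names, the grid $[-8,8]$ and the tolerances are our own choices.

```python
import math
import numpy as np
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite H_q

def p(N, x):  # p_N as in \paref{pN}
    return sum((-1)**j * (-1)**N * math.comb(N, j)
               * math.factorial(2 * j + 1) / math.factorial(j)**2 * x**j
               for j in range(N + 1))

def alpha_closed(n, m):  # closed form \eqref{e:alpha} for alpha_{2n,2m}
    return (math.sqrt(math.pi / 2)
            * math.factorial(2 * n) * math.factorial(2 * m)
            / (math.factorial(n) * math.factorial(m) * 2**(n + m))
            * p(n + m, 0.25))

def He(q, v):
    c = np.zeros(q + 1); c[q] = 1.0
    return hermeval(v, c)

# (1/2pi) \int\int sqrt(y^2+z^2) H_{2n}(y) H_{2m}(z) e^{-(y^2+z^2)/2} dy dz
g = np.linspace(-8.0, 8.0, 1601)
wq = np.full(g.size, g[1] - g[0]); wq[0] /= 2; wq[-1] /= 2  # trapezoid weights
W2 = np.outer(wq, wq)
Y, Z = np.meshgrid(g, g, indexing="ij")
base = np.sqrt(Y**2 + Z**2) * np.exp(-(Y**2 + Z**2) / 2) / (2 * np.pi)

def alpha_num(n, m):
    return float(np.sum(W2 * base * He(2 * n, Y) * He(2 * m, Z)))

for n, m in [(0, 0), (1, 0), (1, 1), (2, 1)]:
    assert abs(alpha_num(n, m) - alpha_closed(n, m)) < 1e-3
```

For instance $\alpha_{0,0}=\sqrt{\pi/2}$ (the mean of a $\chi(2)$ random variable) and $\alpha_{2,0}=\tfrac12\sqrt{\pi/2}$, both reproduced by the quadrature.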
{{}
\begin{prop}[\bf Chaotic expansion of $\mathcal L_\ell(z)$]\label{teoexpS} For every $\ell\in \N$ and $q\geq 2$,
\begin{eqnarray}\label{e:ppS}
&&{\rm proj}(\mathcal L_\ell(z)\, | \, C_{q}) \\
&&= \sqrt{\frac{\ell(\ell+1)}{2}}\sum_{u=0}^{q}\sum_{k=0}^{u}
\frac{\alpha _{k,u-k}\beta _{q-u}(z)
}{(k)!(u-k)!(q-u)!} \!\!\int_{\mathbb S^2}\!\! H_{q-u}(T_\ell (\theta))
H_{k}(\partial_1 \widetilde T_\ell (\theta))H_{u-k}(\partial_2
\widetilde T_\ell (\theta))\,d\theta.\notag
\end{eqnarray}
As a consequence, one has the representation
\begin{eqnarray}\label{chaosexpS}
\mathcal L_\ell(z) &=& \E\mathcal L_\ell(z) + \sqrt{\frac{\ell(\ell+1)}{2}}\sum_{q=2}^{+\infty}\sum_{u=0}^{q}\sum_{k=0}^{u}
\frac{\alpha _{k,u-k}\beta _{q-u}(z)
}{(k)!(u-k)!(q-u)!} \!\!\times\\
&&\hspace{4.5cm} \times \int_{\mathbb S^2}\!\! H_{q-u}(T_\ell (\theta))
H_{k}(\partial_1 \widetilde T_\ell (\theta))H_{u-k}(\partial_2
\widetilde T_\ell (\theta))\,d\theta,\notag
\end{eqnarray}
where the series converges in $L^2(\P)$.
\end{prop}
}
\noindent\begin{proof}[Proof of Proposition \ref{teoexpS}] The proof is divided into four steps.
\medskip
\noindent\underline{\it Step 1: dealing with indicators.} We start by expanding the function $\frac{1}{2\eps}{1}_{[z-\eps ,z+\eps ]}(\cdot)$ into Hermite polynomials, as defined in \S \ref{ss:berryintroS}:
\begin{equation*}
\frac{1}{2\eps}{1}_{[z-\eps ,z+\eps]}(\cdot
)=\sum_{l=0}^{+\infty }\frac{1}{l!}\beta_l^\eps (z)\,H_{l}(\cdot )\ .
\end{equation*}
{{}One has that $\beta_0^\eps(z) = \frac{1}{2\eps} \int_{z-\eps}^{z+\eps} \phi(x)\,dx$, and, for $l\geq 1$}
$$\displaylines{
\beta_l^\eps(z) = \frac{1}{2\eps} \int_{z-\eps}^{z+\eps} \phi(x) H_l(x) \,dx=
\frac{1}{2\eps} \int_{z-\eps}^{z+\eps} \phi(x) (-1)^l \phi^{-1}(x) \frac{d^l}{dx^l} \phi(x) \,dx =\cr
=(-1)^l\frac{1}{2\eps} \int_{z-\eps}^{z+\eps} \frac{d^l}{dx^l} \phi(x) \,dx\ .
}$$
Using the notation \eqref{e:beta}, we have that
$$
\lim_{\eps\to 0} \beta_0^\eps(z) = \phi(z) = \phi(z) H_0(z)=\beta_0(z)\ ,
$$
and for all $l\geq 1$,
\begin{equation}\label{e:satS}
\lim_{\eps\to 0} \beta_{l}^\eps(z) =(-1)^l \frac{d^l}{dx^l} \phi(x)\Big|_{x=z} =
\phi(z) H_l(z)= \beta_{l}(z)\ .
\end{equation}
\noindent\underline{\it Step 2: dealing with the Euclidean norm.} {{} Fix $\theta\in \mathbb{S}^2$, and recall that, according to Proposition \ref{p:fieldS}, the vector $$
\nabla \widetilde T_\ell := (\partial_1 \widetilde T_\ell, \partial_2 \widetilde T_\ell)\ ,
$$ is composed of centered independent Gaussian random variables with variance one. Now, since the random variable $\| \nabla \widetilde T_\ell (\theta) \|$ is square-integrable, it can be expanded into the following infinite series of Hermite polynomials:
\begin{equation*}
\| \nabla \widetilde T_\ell (\theta) \| = \sum_{u=0}^{+\infty}
\sum_{m=0}^{u} \frac{\alpha_{u,u-m}}{u! (u-m)!} H_u(\partial_1 \widetilde T_\ell (\theta)) H_{u-m}(\partial_2 \widetilde T_\ell (\theta)),
\end{equation*}
where
\begin{equation}
\alpha_{n,n-m}=\frac{1}{2\pi} \int_{\R^2} \sqrt{y^2 + z^2} H_{n}(y) H_{n-m}(z)
\mathrm{e}^{-\frac{y^2+z^2}{2}}\,dy dz\ .
\end{equation}
Our aim is to compute $\alpha_{n,n-m}$ as explicitly as possible. }First of all, we observe that, if $n$ or $n-m$ is odd, then the above integral
vanishes {{} (since the two mappings $z\mapsto \sqrt{y^2 + z^2}$ and $y\mapsto \sqrt{y^2 + z^2}$ are even)}. It follows therefore that
\begin{equation*}
\| \nabla \widetilde T_\ell (\theta) \| = \sum_{n=0}^{+\infty}
\sum_{m=0}^{n} \frac{\alpha_{2n,2n-2m}}{(2n)! (2n-2m)!} H_{2n}(\partial_1 \widetilde T_\ell (\theta)) H_{2n-2m}(\partial_2 \widetilde T_\ell (\theta)).
\end{equation*}
We are therefore left with the task of showing that the integrals
\begin{equation} \label{coeff}
\alpha_{2n,2n-2m}=\frac{1}{2\pi} \int_{\R^2} \sqrt{y^2 + z^2} H_{2n}(y)
H_{2n-2m}(z) \mathrm{e}^{-\frac{y^2+z^2}{2}}\,dy dz,
\end{equation}
where $n\ge 0$ and $m=0, \dots, n$, can be evaluated according to \eqref{e:alpha}. One elegant way for dealing with this task is to use the following Hermite polynomial expansion
\begin{equation}
\mathrm{e}^{\lambda y - \frac{\lambda^2}{2}} = \sum_{a=0}^{+\infty} H_a(y)
\frac{\lambda^a}{a!}, \quad {{} \lambda \in \R}.
\end{equation}
We start by considering the integral
\begin{equation*}
\frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} \e^{\lambda y - \frac{\lambda^2}{2}} \e^{\mu z - \frac{\mu^2}{2}}\e^{-\frac{y^2+z^2}{2}}\,dy dz
=\frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} \e^{-\frac{(y-\lambda)^2+(z-\mu)^2}{2}}\,dy dz\ .
\end{equation*}
This integral coincides with the expected value of the random variable $W:=\sqrt{Y^2 + Z^2}$, where $(Y,Z)$ is a vector of independent Gaussian random variables with variance one and mean $\lambda $ and $\mu$, respectively. Since $W^2=Y^2 + Z^2$ has a non-central
$\chi^2$ distribution (more precisely, $Y^2 + Z^2\sim \chi^2(2,\lambda^2 +
\mu^2)$), it is easily checked that the density of $W$ is given by
\begin{equation}
f_W(t) = \sum_{j=0}^{+\infty} \mathrm{e}^{-(\lambda^2 + \mu^2)/2}\frac{((\lambda^2 + \mu^2)/2)^j }{j!}f_{2+2j}(t^2)\, 2t \,{\mathbb{I}}_{\{t> 0\}}\ ,
\end{equation}
where $f_{2+2j}$ is the density function of a $\chi^2_{2+2j}$ random variable. The expected
value of $W$ is therefore
\begin{equation}
2\sum_{j=0}^{+\infty} \mathrm{e}^{-(\lambda^2 + \mu^2)/2}\frac{((\lambda^2 +
\mu^2)/2)^j }{j!}\int_{0}^{+\infty} f_{2+2j}(t^2)\, t^2\,dt\ .
\end{equation}
From the definition of $f_{2+2j}$ we have
\begin{eqnarray*}
\int_{0}^{+\infty} f_{2+2j}(t^2)\, t^2\,dt &=&
\frac{1}{2^{1+j}\Gamma(1+j)} \int_{0}^{+\infty} t^{2j+2}\e^{-t^2/2}\,dt \\
&=& \frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{2^{1+j}\Gamma(1+j)}\ .
\end{eqnarray*}
As a consequence,
\begin{eqnarray*}
&& \frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} \e^{-\frac{(y-\lambda)^2+(z-\mu)^2}{2}}\,dy dz
\\ &&=2\e^{-(\lambda^2 + \mu^2)/2}\sum_{j=0}^{+\infty} \frac{((\lambda^2
+ \mu^2)/2)^j }{j!}\frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{2^{1+j}\Gamma(1+j)} =: F(\lambda, \mu)\ .
\end{eqnarray*}
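The series just obtained for $F(\lambda,\mu)=\E[W]$ can be checked against a direct Monte Carlo simulation of $W=\sqrt{Y^2+Z^2}$. In the sketch below, the sample size, seed and test point $(\lambda,\mu)=(0.7,-1.3)$ are arbitrary choices of ours.

```python
import math
import numpy as np

def F(lam, mu, terms=80):
    # E[W] = 2 e^{-d} sum_j (d^j / j!) (2j+1)!! sqrt(pi/2) / (2^{1+j} j!),
    # with d = (lam^2 + mu^2)/2, matching the series in the text
    d = (lam**2 + mu**2) / 2
    tot = 0.0
    for j in range(terms):
        odd_dfact = math.prod(2 * i - 1 for i in range(1, j + 2))  # (2j+1)!!
        tot += (d**j / math.factorial(j)) * odd_dfact \
               * math.sqrt(math.pi / 2) / (2**(1 + j) * math.factorial(j))
    return 2 * math.exp(-d) * tot

# for lam = mu = 0, W is chi(2) with mean sqrt(pi/2)
assert abs(F(0.0, 0.0) - math.sqrt(math.pi / 2)) < 1e-12

rng = np.random.default_rng(42)
lam, mu = 0.7, -1.3
Yv = rng.normal(lam, 1.0, 10**6)
Zv = rng.normal(mu, 1.0, 10**6)
mc = float(np.mean(np.sqrt(Yv**2 + Zv**2)))
assert abs(mc - F(lam, mu)) < 1e-2
```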
We can develop the function $F$ as follows:
\begin{equation*}
\displaylines{
F(\lambda, \mu) =
2\sum_{a=0}^{+\infty} \frac{(-1)^a\lambda^{2a}}{2^a a!}
\sum_{b=0}^{+\infty} \frac{(-1)^b\mu^{2b}}{2^b b!}
\sum_{j=0}^{+\infty} \frac{1 }{j!} \sum_{l=0}^{j} {j \choose l} \lambda^{2l}
\mu^{2j-2l}
\frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{2^{1+2j}\Gamma(1+j)} = \cr
= \sum_{a,b=0}^{+\infty} \frac{(-1)^a}{2^a a!}
\frac{(-1)^b}{2^b b!}\sum_{j=0}^{+\infty}
\frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{j!2^{2j}\Gamma(1+j)}
\sum_{l=0}^{j} {j \choose l} \lambda^{2l+2a}
\mu^{2j+2b-2l}\ .
}
\end{equation*}
On the other hand
\begin{eqnarray*}
{{} F(\lambda, \mu) }&=&\frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} \e^{\lambda y - \frac{\lambda^2}{2}} \e^{\mu z - \frac{\mu^2}{2}}\e^{-\frac{y^2+z^2}{2}}\,dy dz\\
&& = \frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} \sum_{a=0}^{+\infty} H_a(y) \frac{\lambda^a}{a!} \sum_{b=0}^{+\infty} H_b(z) \frac{\mu^b}{b!}\e^{-\frac{y^2+z^2}{2}}\,dy dz \\
&& = \sum_{a,b=0}^{+\infty} \left( \frac{1}{a!b!2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} H_a(y) H_b(z) \e^{-\frac{y^2+z^2}{2}}\,dy dz \right)\lambda^a \mu^b.
\end{eqnarray*}
By the same reasoning as above, if $a$ or $b$ is odd, then the
integral coefficient in the previous expression must be zero. Setting $n:=l+a$ and
$m:=j+b-l$, we also have that
\begin{eqnarray*}
&&\sum_{a,b=0}^{+\infty} \frac{(-1)^a}{2^a a!}
\frac{(-1)^b}{2^b b!}\sum_{j=0}^{+\infty} \frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{j!2^{2j}\Gamma(1+j)}
\sum_{l=0}^{j} {j \choose l} \lambda^{2l+2a}
\mu^{2j+2b-2l} \\
&&=\sum_{n,m}^{} \sum_{j}^{}\frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{j!2^{2j}\Gamma(1+j)} \sum_{l=0}^{j}\frac{(-1)^{(n-l)}}{2^{n-l} {(n-l)}!}
\frac{(-1)^{m+l-j}}{2^{m+l-j} {(m+l-j)}!}
{j \choose l} \lambda^{2n}
\mu^{2m}\ .
\end{eqnarray*}
Thus we obtain
\begin{eqnarray*}
\alpha_{2n,2m} &=& \frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} H_{2n}(y) H_{2m}(z) \e^{-\frac{y^2+z^2}{2}}\,dy dz \\ &=& (2n)!(2m)!\frac{(-1)^{m+n}}{2^{n+m}}\sum_{j}^{}(-1)^{j}\frac{\prod_{i=1}^{1+j} (2i-1)
\sqrt{\frac{\pi}{2}}}{2^j j!\Gamma(1+j)} \sum_{l=0}^{j}\frac{{j \choose l}}{ {(n-l)}! {(m+l-j)}!}
\ .
\end{eqnarray*}
Representation \eqref{e:alpha} now follows from the computations:
\begin{eqnarray*}
\alpha_{2n,2m}&=&\frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} H_{2n}(y) H_{2m}(z) \e^{-\frac{y^2+z^2}{2}}\,dy dz \\
&=& (2n)!(2m)!\frac{(-1)^{m+n}}{2^{n+m}}\sum_{j}^{}(-1)^{j}
\frac{\prod_{i=1}^{1+j} (2i-1)\sqrt{\frac{\pi}{2}}}{2^j j!\Gamma(1+j)}
\sum_{l=0}^{j}\frac{{j \choose l}}{ {(n-l)}! {(m+l-j)}!} \\
&=& (2n)!(2m)!\frac{(-1)^{m+n}}{2^{n+m}}
\sum_{j}^{}(-1)^{j}\frac{(2j+1)!!\sqrt{\frac{\pi}{2}}}{2^j(j!)^2}
\sum_{l=0}^{j}\frac{{j \choose l}}{ {(n-l)}! {(m+l-j)}!}\\
&=&\frac{(2n)!(2m)!}{n! m!}\frac{(-1)^{m+n}}{2^{n+m}}
\sum_{j=0}^{n+m}(-1)^{j}\frac{(2j+1)!!\sqrt{\frac{\pi}{2}}}{2^j j!}
\sum_{l=0}^{j}{ n\choose l}{ m\choose j-l} \\
&=&\sqrt{\frac{\pi}{2}}\frac{(2n)!(2m)!}{n! m!}\frac{(-1)^{m+n}}{2^{n+m}}
\sum_{j=0}^{n+m}(-1)^{j}\frac{(2j+1)!!}{2^j j!}
{ n+m\choose j} \\
&=&\sqrt{\frac{\pi}{2}}\frac{(2n)!(2m)!}{n! m!}\frac{(-1)^{m+n}}{2^{n+m}}
\sum_{j=0}^{n+m}(-1)^{j}\frac{(2(j+1))!}{2^{j+1} 2^j j! (j+1)!}
{ n+m\choose j} \\
&=&\sqrt{\frac{\pi}{2}}\frac{(2n)!(2m)!}{n! m!}\frac{(-1)^{m+n}}{2^{n+m}}
\sum_{j=0}^{n+m}(-1)^{j}\frac{(2j+1)!}{2^{2j} (j!)^2}
{ n+m\choose j}\ .
\end{eqnarray*}
\medskip
{{}
\noindent\underline{\it Step 3: letting $\eps \to 0$.} In view of Definition \ref{d:chaosS}, the computations at Step 1 and Step 2 (together with the fact that the three random variables $T_\ell (\theta),\, \partial_1 \widetilde T_\ell (\theta)$ and $\partial_2 \widetilde T_\ell (\theta)$ are stochastically independent ) show that, for fixed $\theta\in \mathbb{S}^2$, the projection of the random variable
$$
\frac{1}{2\eps} 1_{[z-\eps, z+\eps]}(T_\ell (\theta)) \sqrt{\partial_1 \widetilde
T_\ell (\theta)^2+\partial_2 \widetilde T_\ell (\theta)^2}
$$
on the chaos $C_q$ equals
$$
\sum_{u=0}^{q}\sum_{m=0}^{u}
\frac{\alpha _{m,u-m}\beta^\eps _{q-u}(z)
}{(m)!(u-m)!(q-u)!} H_{q-u}(T_\ell (\theta))
H_{m}(\partial_1 \widetilde T_\ell (\theta))H_{u-m}(\partial_2
\widetilde T_\ell (\theta))\ .
$$
Since $\int_{\mathbb{S}^2}\,dx<\infty$, standard arguments based on Jensen's inequality and dominated convergence yield that, for every $q\geq 1$,
\begin{eqnarray*}
&&{\rm proj}(\mathcal L^\eps_\ell(z)\, | \, C_{q}) \\
&&= \sqrt{\frac{\ell(\ell+1)}{2}}\sum_{u=0}^{q}\sum_{m=0}^{u}
\frac{\alpha _{m,u-m}\beta^\eps _{q-u}(z)
}{(m)!(u-m)!(q-u)!}\times \\
&& \hspace{4cm} \times\!\!\int_{\mathbb S^2}\!\! H_{q-u}(T_\ell (\theta))
H_{m}(\partial_1 \widetilde T_\ell (\theta))H_{u-m}(\partial_2
\widetilde T_\ell (\theta))\,d\theta.\notag
\end{eqnarray*}
One has that, as $\eps\to 0$, ${\rm proj}(\mathcal L^\eps_\ell (z)\, | \, C_{q})$ necessarily converges to ${\rm proj}(\mathcal L_\ell(z)\, | \, C_{q})$ in probability. Using \eqref{e:satS}, we deduce from this fact that representation \eqref{e:ppS} is valid for every $q\geq 1$.
}
\end{proof}
\begin{remark}\label{Gilles Becker}\rm The coefficients $\alpha_{2n,2m}$ can also be computed by first using polar coordinates and then the explicit expression for Hermite polynomials \cite{szego}. Briefly,
\begin{equation*}
\displaylines{
\frac{1}{2\pi} \int_{\mathbb R^2} \sqrt{y^2 + z^2} H_{2n}(y) H_{2m}(z) \e^{-\frac{y^2+z^2}{2}}\,dy dz = \cr
= \frac{1}{2\pi} \int_{0}^{2\pi} \int_{0}^{+\infty} \rho^2 H_{2n}(\rho \cos \vartheta) H_{2m}(\rho \sin \vartheta) \e^{-\frac{\rho^2}{2}}\,d \rho d \vartheta = \cr
= \frac{(2n)!(2m)!}{2\pi} \sum_{a=0}^{n} \frac{(-1)^a}{2^aa!(2n - 2a)!}\sum_{b=0}^{m} \frac{(-1)^b}{2^bb!(2m - 2b)!}\times \cr
\times \int_{0}^{2\pi}(\cos \vartheta)^{2n - 2a} (\sin \vartheta)^{2m - 2b}\, d \vartheta \int_{0}^{+\infty} \rho^{2+2n - 2a+2m - 2b} \e^{-\frac{\rho^2}{2}} \,d \rho\ .
}
\end{equation*}
It remains to evaluate the two remaining integrals, which are standard.
\qed
\end{remark}
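As a quick numerical sanity check of the polar-coordinate reduction above (not part of the argument; all function names are ours), the following Python sketch compares a brute-force two-dimensional quadrature of $\frac{1}{2\pi}\int_{\R^2}\sqrt{y^2+z^2}\,H_{2n}(y)H_{2m}(z)e^{-(y^2+z^2)/2}\,dy\,dz$ with an exact evaluation of the radial and angular integrals in terms of Gamma functions.

```python
import math

def hermite(k, x):
    # probabilists' Hermite polynomial H_k, via H_{k+1}(x) = x H_k(x) - k H_{k-1}(x)
    h0, h1 = 1.0, x
    if k == 0:
        return h0
    for j in range(1, k):
        h0, h1 = h1, x * h1 - j * h0
    return h1

def alpha_quadrature(n2, m2, L=8.0, N=400):
    # brute-force trapezoidal quadrature of
    # (1/2pi) int sqrt(y^2+z^2) H_{n2}(y) H_{m2}(z) exp(-(y^2+z^2)/2) dy dz
    h = 2 * L / N
    total = 0.0
    for i in range(N + 1):
        y = -L + i * h
        wy = 0.5 if i in (0, N) else 1.0
        for j in range(N + 1):
            z = -L + j * h
            wz = 0.5 if j in (0, N) else 1.0
            total += (wy * wz * math.sqrt(y * y + z * z) * hermite(n2, y)
                      * hermite(m2, z) * math.exp(-(y * y + z * z) / 2))
    return total * h * h / (2 * math.pi)

def alpha_polar(n2, m2):
    # exact evaluation via the polar-coordinate reduction: expand H_{n2}, H_{m2}, then use
    # int_0^{2pi} cos^{2p} sin^{2q} dtheta = 2 Gamma(p+1/2) Gamma(q+1/2) / Gamma(p+q+1)
    # and int_0^inf rho^k exp(-rho^2/2) drho = 2^{(k-1)/2} Gamma((k+1)/2)
    n, m = n2 // 2, m2 // 2
    s = 0.0
    for a in range(n + 1):
        ca = (-1) ** a * math.factorial(n2) / (2 ** a * math.factorial(a) * math.factorial(n2 - 2 * a))
        for b in range(m + 1):
            cb = (-1) ** b * math.factorial(m2) / (2 ** b * math.factorial(b) * math.factorial(m2 - 2 * b))
            p, q = n - a, m - b
            ang = 2 * math.gamma(p + 0.5) * math.gamma(q + 0.5) / math.gamma(p + q + 1)
            k = 2 + 2 * p + 2 * q
            rad = 2 ** ((k - 1) / 2) * math.gamma((k + 1) / 2)
            s += ca * cb * ang * rad
    return s / (2 * math.pi)

print(alpha_polar(0, 0))  # equals E||(Z1,Z2)|| = sqrt(pi/2) ~ 1.2533
```

In particular one recovers $\alpha_{0,0}=\sqrt{\pi/2}$, the mean of a $\chi$-distributed random variable with two degrees of freedom, as expected.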
\section{Asymptotic study of $\text{proj}(\mathcal L_\ell(z) | C_2)$}
In this section we find an explicit expression for the second order chaotic projection of the length of level curves.
\begin{prop}\label{2sfera}
We have
$$
\displaylines{
\text{proj}(\mathcal L_\ell(z) | C_2)= \sqrt{\frac{\ell(\ell+1)}{2}}
\sqrt{\frac{\pi}{8}} \phi(z) z^2 \int_{\mathbb S^2} H_2(T_\ell(x))\,dx=\cr
=\sqrt{\frac{\ell(\ell+1)}{2}}
\sqrt{\frac{\pi}{8}} \phi(z) z^2 \sum_{m=1}^{2\ell+1} \left (a_{\ell,m}^2 - \frac{4\pi}{2\ell +1} \right )\ .}
$$
\end{prop}
\begin{proof}
The second chaotic projection is, omitting the factor $\sqrt{\frac{\ell(\ell+1)}{2}}$,
$$
\displaylines{
\frac{\alpha_{0,0}\beta_{2}(z)}{2}
\int_{\mathbb S^2} H_{2}(T_{\ell}(x))\,dx
+ \frac{\alpha_{0,2}\beta_{0}(z)}{2}
\int_{\mathbb S^2} H_{2}(\widetilde \partial_2 T_\ell(x))\,dx +\cr
+ \frac{\alpha_{2,0}\beta_{0}(z)}{2}
\int_{\mathbb S^2} H_{2}(\widetilde \partial_1 T_\ell(x))\,dx\
= \frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} (T_{\ell}(x)^2-1)\,dx
+ \cr
+\alpha_{0,2}\beta_{0}(z)
\int_{\mathbb S^2} ((\widetilde \partial_2 T_\ell(x))^2-1)\,dx
+ \alpha_{2,0}\beta_{0}(z)
\int_{\mathbb S^2} ((\widetilde \partial_1 T_\ell(x))^2-1)\,dx \Big )\ =\cr
= \frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
+\alpha_{0,2}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}\int_{\mathbb S^2} ( \partial_2 T_\ell(x))^2\,dx
+ \cr
+\alpha_{2,0}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}\int_{\mathbb S^2} (\partial_1 T_\ell(x))^2\,dx - 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )\ .
}
$$
Now, by Green's formula, we have for $j=1,2$
$$\displaylines{
\int_{\mathbb S^2} (\partial_j T_\ell(x))^2\,dx = -
\int_{\mathbb S^2} T_\ell(x) \partial_{j}^2 T_\ell(x)\,dx
}$$
and putting things together
$$\displaylines{
\frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
+\alpha_{0,2}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}\int_{\mathbb S^2} ( \partial_2 T_\ell(x))^2\,dx
+ \cr
+\alpha_{2,0}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}\int_{\mathbb S^2} (\partial_1 T_\ell(x))^2\,dx - 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )=}$$
$$\displaylines{
=\frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
-\alpha_{0,2}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}
\int_{\mathbb S^2} T_\ell(x) \partial_{2}^2 T_\ell(x)\,dx
+ \cr
-\alpha_{2,0}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}
\int_{\mathbb S^2} T_\ell(x) \partial_{1}^2 T_\ell(x)\,dx - 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )=\cr}$$
$$\displaylines{
=\frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
-\alpha_{0,2}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}
\int_{\mathbb S^2} T_\ell(x) (\partial_{1}^2 T_\ell(x)+\partial_{2}^2 T_\ell(x))\,dx
+ \cr
- 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )=\cr}$$
$$\displaylines{
=\frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
-\alpha_{0,2}\beta_{0}(z)
\frac{2}{\ell(\ell+1)}
\int_{\mathbb S^2} T_\ell(x) \Delta T_\ell(x)\,dx
+ \cr
- 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )=\cr}$$
$$\displaylines{
=\frac12 \Big (\alpha_{0,0}\beta_{2}(z)
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
+\alpha_{0,2}\beta_{0}(z)
\frac{2}{\ell(\ell+1)} \ell(\ell+1)
\int_{\mathbb S^2} T_\ell(x)^2\,dx
+ \cr
- 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )=\cr
=\frac12 \Big ((\alpha_{0,0}\beta_{2}(z)+2\alpha_{0,2}\beta_{0}(z))
\int_{\mathbb S^2} T_{\ell}(x)^2\,dx
- 4\pi(\alpha_{0,0}\beta_{2}(z) + 2\alpha_{0,2}\beta_{0}(z))\Big )=\cr
=\frac12 (\alpha_{0,0}\beta_{2}(z)+2\alpha_{0,2}\beta_{0}(z))
\int_{\mathbb S^2} (T_{\ell}(x)^2-1)\,dx=\cr
=\frac12\sqrt{\frac{\pi}{2}}\phi(z) z^2 \int_{\mathbb S^2} H_2(T_{\ell}(x))\,dx\ .
}$$
Moreover
$$\displaylines{
\int_{\mathbb S^2} H_2(T_{\ell}(x))\,dx = \int_{\mathbb S^2} \Big( \sum_{m,m'} a_{\ell,m} a_{\ell,m'} Y_{\ell,m}(x) Y_{\ell,m'}(x) - 1\Big )\,dx =\cr
= \sum_{m=1}^{2\ell+1} \left(a_{\ell,m}^2 - \frac{4\pi}{2\ell+1}\right)\ ,
}$$
since the $Y_{\ell,m}$ form an orthonormal family and $\int_{\mathbb S^2}\,dx = 4\pi$.
\end{proof}
Now it immediately follows that
\begin{cor}\label{corNodal}
The second chaotic projection of the length $\mathcal L_\ell(z)$ vanishes if and only if $z=0$.
\end{cor}
\begin{remark}\rm
The computations in the proof of Proposition \ref{2sfera} in fact carry over to every two-dimensional compact Riemannian manifold, since Green's formula is always available in that setting.
\end{remark}
\section{The CLT}
In this section we prove the main result of this chapter, that is, a CLT for the length of the $z$-level curve for $z\ne 0$.
Let us first show the following.
\begin{lemma}
For $z\ne 0$, we have
$$
\frac{\text{proj}(\mathcal L_\ell(z) | C_2)}{\sqrt{\Var(\text{proj}(\mathcal L_\ell(z) | C_2))}}\mathop{\goto}^{\mathcal L} Z\ ,
$$
where $Z\sim \mathcal N(0,1)$.
\end{lemma}
\begin{proof}
The variance of the second chaotic projection (Proposition \ref{2sfera}) is
$$\displaylines{
\Var(\text{proj}(\mathcal L_\ell(z) | C_2))= \ell(\ell+1) \frac{\pi}{16} \phi(z)^2 z^4 2 \cdot 4\pi\cdot 2\pi \int_0^{\pi} P_\ell(\cos\vartheta)^2\sin\vartheta\,d\vartheta=\cr
= \ell(\ell+1) \frac{\pi}{16} \phi(z)^2 z^4 2 \cdot 4\pi\cdot 2\pi \frac{2}{2\ell +1}=\ell(\ell+1) \frac{\pi}{16} \frac{1}{2\pi} \e^{-z^2} z^4 2 \cdot 4\pi\cdot 2\pi \frac{2}{2\ell +1}=\cr
=\ell(\ell+1)\frac{2}{2\ell +1}\cdot \frac{ \pi^2}{2} \e^{-z^2} z^4 \sim
\ell \cdot \frac{ \pi^2}{2} \e^{-z^2} z^4\ , \quad \ell\to +\infty\ ,
}$$
where we used the identity \paref{momento 2}
$$
\int_0^\pi P_\ell(\cos \vartheta)^2\sin\vartheta\,d\vartheta = \frac{2}{2\ell+1}\ .
$$
Moreover we can rewrite the second chaotic projection as
$$\displaylines{
\text{proj}(\mathcal L_\ell(z) | C_2) = \sqrt{\frac{\ell(\ell+1)}{2}}
\sqrt{\frac{\pi}{8}} \phi(z) z^2 \sum_m \left (a_{\ell,m}^2 - \frac{4\pi}{2\ell+1} \right )=\cr
=\sqrt{\frac{\ell(\ell+1)}{2}}
\sqrt{\frac{\pi}{4}} \phi(z) z^2\frac{4\pi}{\sqrt{2\ell+1}} \frac{1}{\sqrt{2(2\ell+1)}}\sum_m \left (\left (\sqrt{\frac{2\ell+1}{4\pi}}a_{\ell,m} \right )^2 - 1\right )\ .
}$$
Now we can apply the standard CLT to the sequence of normalized sums
$$
\frac{1}{\sqrt{2(2\ell+1)}}\sum_m \left (\left (\sqrt{\frac{2\ell+1}{4\pi}}a_{\ell,m} \right )^2 - 1\right )\mathop{\goto}^{\mathcal L} Z\ ,
$$
where $Z\sim \mathcal N(0,1)$. Finally this implies the CLT for the second chaotic projection
$$
\frac{\text{proj}(\mathcal L_\ell(z) | C_2)}{\sqrt{\Var(\text{proj}(\mathcal L_\ell(z) | C_2))}} \mathop{\goto}^{\mathcal L} Z\ ,
$$
which concludes the proof.
\end{proof}
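The Legendre normalization invoked in the variance computation above (cf. \paref{momento 2}) amounts to $\int_{-1}^{1}P_\ell(t)^2\,dt = \frac{2}{2\ell+1}$, i.e. $\int_0^\pi P_\ell(\cos\vartheta)^2\sin\vartheta\,d\vartheta = \frac{2}{2\ell+1}$ after the substitution $t=\cos\vartheta$. This can be verified in exact rational arithmetic; the following small stdlib sketch (ours, purely illustrative) builds $P_\ell$ by Bonnet's recurrence and integrates its square exactly.

```python
from fractions import Fraction

def legendre_coeffs(l):
    # coefficient lists (index = power of t) via Bonnet's recurrence
    # (k+1) P_{k+1}(t) = (2k+1) t P_k(t) - k P_{k-1}(t)
    p_prev, p_cur = [Fraction(1)], [Fraction(0), Fraction(1)]
    if l == 0:
        return p_prev
    for k in range(1, l):
        shifted = [Fraction(0)] + p_cur                  # t * P_k(t)
        nxt = [Fraction(2 * k + 1) * c for c in shifted]
        for i, c in enumerate(p_prev):
            nxt[i] -= Fraction(k) * c
        p_cur, p_prev = [c / (k + 1) for c in nxt], p_cur
    return p_cur

def int_square(l):
    # exact int_{-1}^{1} P_l(t)^2 dt, by squaring the polynomial
    c = legendre_coeffs(l)
    sq = [Fraction(0)] * (2 * len(c) - 1)
    for i, a in enumerate(c):
        for j, b in enumerate(c):
            sq[i + j] += a * b
    # int_{-1}^{1} t^k dt = 2/(k+1) for even k, 0 for odd k
    return sum(2 * a / (k + 1) for k, a in enumerate(sq) if k % 2 == 0)

print(int_square(3))  # -> 2/7
```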
Now we can easily prove Theorem \ref{main th S}.
\begin{proof}[Proof of Theorem \ref{main th S}]
We have, for $z\ne 0$,
\begin{equation}\label{nice}
\lim_\ell \frac{\Var(\text{proj}(\mathcal L_\ell(z) | C_2))}{\Var(\mathcal L_\ell(z))} = 1\ .
\end{equation}
It follows from the chaotic decomposition and from \eqref{nice} that, as $\ell\to \infty$,
$$
\frac{\mathcal L_\ell(z)}{\sqrt{\Var(\mathcal L_\ell(z))}} = \frac{\text{proj}(\mathcal L_\ell(z) | C_2)}{\sqrt{\Var(\mathcal L_\ell(z))}} + o_\P (1)\ ,
$$
therefore $\frac{\mathcal L_\ell(z)}{\sqrt{\Var(\mathcal L_\ell(z))}}$ and
$\frac{\text{proj}(\mathcal L_\ell(z) | C_2)}{\sqrt{\Var(\mathcal L_\ell(z))}}$ have the same asymptotic distribution. The previous lemma then allows us to conclude the proof, recalling moreover that, when the limiting distribution is absolutely continuous, convergence in distribution is equivalent to convergence in the Kolmogorov distance.
\end{proof}
\chapter{Nodal lengths for arithmetic random waves}
\section{Introduction and main results}
In this chapter we investigate the asymptotic behavior of nodal lengths for arithmetic random waves.
\subsection{Arithmetic random waves}
Let $\T:=\R^2/\Z^2$ be the standard
$2$-torus and $\Delta$ the Laplace operator on $\Tb$. We are interested
in the (totally discrete) spectrum of $\Delta$ i.e. eigenvalues $E>0$
of the Schr\"{o}dinger equation
\begin{equation}
\label{eq:Schrodinger}
\Delta f + Ef=0.
\end{equation}
Let $$S=\{ n \in \Z : n = a^2+b^2 \,\, \mbox{ for some } a,\, b\in\Z\}$$ be the collection of all numbers
expressible as a sum of two squares. Then the eigenvalues of \eqref{eq:Schrodinger}
(also called ``energy levels" of the torus) are all numbers of the form $E_{n}=4\pi^{2}n$ with $n\in S$.
In order to describe the Laplace eigenspace corresponding to $E_{n}$, let
$\Lambda_n$ denote the set of ``frequencies'':
\begin{equation*}
\Lambda_n := \lbrace \lambda =(\lambda_1,\lambda_2)\in \Z^2 : \lambda_1^2 + \lambda_2^2 = n\rbrace\
\end{equation*}
of cardinality $| \Lambda_n |$. (Geometrically, $\Lambda_{n}$ is the set of standard lattice points
lying on the centered circle of radius $\sqrt{n}$.)
For $\lambda\in \Lambda_{n}$ denote the complex exponential associated to the frequency $\lambda$
\begin{equation*}
e_{\lambda}(\theta) = \exp(2\pi i \langle \lambda, \theta \rangle)
\end{equation*}
with $\theta=(\theta_{1},\theta_{2})\in\Tb$.
The collection
\begin{equation*}
\{e_{\lambda}(\theta)\}_{\lambda\in \Lambda_n}
\end{equation*}
of complex exponentials corresponding to frequencies $\lambda \in \Lambda_{n}$
is an $L^{2}$-orthonormal basis of the eigenspace of $\Delta$ corresponding to
eigenvalue $E_{n}$. In particular, the dimension of this eigenspace
equals the number of ways to express $n$ as a sum of two squares
\begin{equation*}
\mathcal N_n := \dim E_{n} = |\Lambda_n|
\end{equation*}
(also denoted in the number theoretic literature $r_{2}(n)=|\Lambda_{n}|$). The number
$\Nc_{n}$ is subject to large and erratic fluctuations; it grows \cite{La} {\em on average}
as $\sqrt{\log{n}}$, but could be as small as $8$ for (an infinite sequence of) prime numbers
$p\equiv 1\mod{4}$, or as large as a power of $\log{n}$.
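The sets $\Lambda_n$ and the counts $\Nc_n = r_2(n)$ are easy to enumerate by brute force; the following Python sketch (purely illustrative, function names are ours) reproduces the behaviour just described, e.g. $r_2(p)=8$ for the primes $p = 5, 13, 17 \equiv 1 \bmod 4$.

```python
import math

def frequencies(n):
    # Lambda_n: all lattice points (l1, l2) with l1^2 + l2^2 = n
    r = math.isqrt(n)
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == n]

# N_n = r_2(n): dimension of the eigenspace corresponding to E_n = 4 pi^2 n
print({n: len(frequencies(n)) for n in (1, 2, 5, 25, 65)})
# -> {1: 4, 2: 4, 5: 8, 25: 12, 65: 16}
```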
Following ~\cite{RW} and ~\cite{AmP} we define
the ``arithmetic random waves" (random Gaussian toral Laplace eigenfunctions)
to be the random fields
\begin{equation}\label{defrf}
T_n(\theta)=\frac{1}{\sqrt{\mathcal N_n}}\sum_{ \lambda\in \Lambda_n}
a_{\lambda}e_\lambda(\theta),
\end{equation}
$\theta\in\Tb$, where the coefficients $a_{\lambda}$ are i.i.d. standard complex Gaussian random variables, save
for the relations $$a_{-\lambda}= \overline{a_{\lambda}}$$ (ensuring that $T_{n}$ is real-valued).
By the definition \eqref{defrf}, $T_n$ is a centered
Gaussian random field with covariance function
\begin{equation*}
r_n(\theta,\zeta) = r_{n}(\theta-\zeta) := \E[T_n(\theta) \overline{T_n(\zeta)}] = \frac{1}{\mathcal N_n}
\sum_{\lambda\in \Lambda_n}e_{\lambda}(\theta-\zeta)=\frac{1}{\mathcal N_n}\sum_{\lambda\in \Lambda_n}\cos\left(2\pi\langle \theta-\zeta,\lambda \rangle\right),
\end{equation*}
$\theta,\zeta\in\Tb$ (by the standard abuse of notation). Note that $r_{n}(0)=1$, i.e. $T_{n}$ is unit variance.
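Since the frequencies come in $\pm$ pairs, the covariance function is indeed real-valued, depends only on $\theta-\zeta$, and equals $1$ at the origin; a quick numerical illustration (our sketch, not part of the argument):

```python
import cmath
import math

def frequencies(n):
    # Lambda_n: lattice points (l1, l2) with l1^2 + l2^2 = n
    r = math.isqrt(n)
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == n]

def r_cov(n, theta):
    # r_n(theta) = (1/N_n) sum_{lambda in Lambda_n} exp(2 pi i <lambda, theta>)
    lam = frequencies(n)
    return sum(cmath.exp(2j * math.pi * (l1 * theta[0] + l2 * theta[1]))
               for (l1, l2) in lam) / len(lam)

print(r_cov(5, (0.0, 0.0)))   # unit variance: r_n(0) = 1
print(r_cov(5, (0.13, 0.37))) # imaginary part vanishes up to rounding
```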
\subsection{Nodal length: mean and variance}
Consider the total {\em nodal length} of random
eigenfunctions, i.e. the sequence of random variables $\{\mathcal L_{n}\}_{n\in S}$ given by
\begin{equation}\label{e:length}
\mathcal L_n := \text{length}(T_n^{-1}(0)).
\end{equation}
The expected value of $\mathcal L_{n}$ was computed in \cite{RW} to be
\begin{equation}
\E[\mathcal L_n]= \frac{1}{2\sqrt{2}}\sqrt{E_n},
\end{equation}
consistently with Yau's conjecture \cite{Yau,D-F}.
The more subtle question of asymptotic behaviour of the variance $\Var(\Lc_{n})$ of $\Lc_{n}$ was
addressed in \cite{RW}, and fully resolved in \cite{AmP}, as follows.
Given $n\in S$ define a probability measure $\mu_{n}$ on the unit circle $\Sc^{1}\subseteq\R^{2}$
supported on angles corresponding to lattice points in $\Lambda_{n}$:
\begin{equation*}
\mu_{n} := \frac{1}{\mathcal N_n} \sum_{\lambda\in \Lambda_n} \delta_{\frac{\lambda}{\sqrt{n}}}.
\end{equation*}
It is known that for a density $1$ sequence of numbers $\{n_{j}\}\subseteq S$ the angles
of the lattice points in $\Lambda_{n_j}$ tend to be equidistributed in the sense that
\begin{equation*}
\mu_{n_{j}}\Rightarrow \frac{d\theta}{2\pi}
\end{equation*}
(where $\Rightarrow$ is weak-$*$ convergence of probability measures). However the sequence $\{\mu_{n}\}_{n\in S}$
has other weak-$*$ partial limits ~\cite{Ci,AmP} (``attainable measures"), partially
classified in \cite{KW}.
It was proved in \cite{AmP} that one has
\begin{equation}
\label{eq:var leading KKW}
\var(\Lc_{n}) =c_n \frac{E_n}{\Nc_{n}^2}(1 + o_{\Nc_{n}\rightarrow\infty}(1)),
\end{equation}
where
\begin{equation}\label{cn}
c_n = \frac{1+\widehat{\mu_n}(4)^2}{512},
\end{equation}
and for a measure $\mu$ on $\mathbb{S}^{1}$,
\begin{equation}\label{e:smet}
\widehat \mu(k) = \int_{\mathbb S^1} z^{-k}\,d\mu(z)
\end{equation}
are the usual Fourier coefficients of $\mu$ on the unit circle.
As $$|\widehat{\mu_{n}}(4)|\le 1$$ by the triangle inequality, the result \paref{eq:var leading KKW}
shows that the order of magnitude of $\var(\Lc_{n})$ is
$ \frac{E_n}{\Nc_n^2} $, that is, of smaller order than what would be a natural guess
$ \frac{E_n}{\Nc_n} $; this situation (`arithmetic Berry's cancellation' -- see \cite{AmP}) is similar to the {\em cancellation phenomenon} observed by Berry
in a different setting ~\cite{Berry2002}.
In addition, \eqref{eq:var leading KKW} shows that for $\var(\Lc_{n})$ to exhibit an
asymptotic law (equivalent to $\{c_{n}\}$ in \eqref{cn} convergent along a subsequence)
we need to pass to a subsequence $\{ n_{j} \}\subset S$ such that the limit
$$\lim\limits_{j\rightarrow\infty}| \widehat{\mu_{n_{j}}}(4)|$$ exists. For example,
if $\{ n_{j}\} \subset S$ is a subsequence such that $\mu_{n_{j}}\Rightarrow\mu$
for some probability measure $\mu$ on $\Sc^{1}$,
then \eqref{eq:var leading KKW} reads (under the usual extra-assumption $\Nc_{n_{j}}\rightarrow\infty$)
\begin{equation}\label{e:varz}
\var(\Lc_{n_j})\sim c(\mu) \frac{E_{n_j}}{\Nc_{n_j}^2}
\end{equation}
with $$c(\mu) = \frac{1+\widehat{\mu}(4)^2}{512},$$
where, here and for the rest of the chapter, we write $a_n\sim b_n$ to
indicate that the two positive sequences $\{a_n\}$ and $\{b_n\}$ are such
that $a_n/b_n \rightarrow 1$, as $n\to\infty$. Note that the set of the possible
values for the $4$th Fourier coefficient $\widehat{\mu}(4)$ covers the whole interval
$[-1,1]$ (see \cite{AmP, KW}). This implies in particular that the possible values of
the constant $c(\mu)$ cover the whole interval
$$\left[\frac{1}{512},\frac{1}{256}\right];$$ the above discussion provides
a complete classification of the asymptotic behaviour of $\var(\Lc_{n})$.
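The fourth Fourier coefficient $\widehat{\mu_n}(4)$, and hence the constant $c_n$, can be computed directly from the lattice points; a hedged stdlib sketch (function names are ours). For instance $\widehat{\mu_{25}}(4)=-143/625$, and $c_n$ always lies in $[1/512, 1/256]$, consistently with the discussion above.

```python
import math

def frequencies(n):
    # Lambda_n: lattice points (l1, l2) with l1^2 + l2^2 = n
    r = math.isqrt(n)
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == n]

def mu_hat(n, k):
    # \hat{mu_n}(k) = (1/N_n) sum_lambda z^{-k}, with z = (l1 + i l2)/sqrt(n)
    lam = frequencies(n)
    return sum(complex(l1, l2) ** (-k) for (l1, l2) in lam) * math.sqrt(n) ** k / len(lam)

def c_n(n):
    # c_n = (1 + \hat{mu_n}(4)^2) / 512; the coefficient is real for these measures
    return (1 + mu_hat(n, 4).real ** 2) / 512

print(mu_hat(25, 4).real)  # approximately -143/625 = -0.2288
```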
\subsection{Main results}
{{}
Let $\{n_j : j\geq 1\}\subset S$ be a sequence such that
$\lim_{j\to\infty }{\mathcal N}_{n_j} = \infty$. As is customary, generic
subsequences of $\{n_j\}$ will be denoted by $\{n'_j\}$, $\{n''_j\}$, and so on.
Our principal aim in this chapter is to study the asymptotic behaviour, as $j\to \infty$,
of the distribution of the sequence of normalized random variables
\begin{equation}\label{e:culp}
\widetilde{\Lc}_{n_j} := \frac{\mathcal{L}_{n_j} - \E[\mathcal{ L}_{n_j}]}
{\sqrt{\Var[\mathcal{L}_{n_j} ]}}, \quad j\geq 1.
\end{equation}
Since, in this setting, the variance \eqref{eq:var leading KKW} diverges to infinity, it seems
reasonable to expect a central limit result, that is, that the sequence $\widetilde{\Lc}_{n_j}$, $ j\geq 1$,
converges in distribution to a standard Gaussian random variable.
Our main findings not only contradict this somewhat naive prediction,
but also show the following non-trivial facts:
\begin{itemize}
\item[\bf (i)] the sequence $\big\{\widetilde{\Lc}_{n_j} \big\}$ does not
necessarily converge in distribution, and
\item[\bf (ii)] the adherent points of the sequence
$\big\{\widetilde{\Lc}_{n_j} : j\geq 1 \big\}$
(in the sense of the topology induced by the convergence in distribution of random variables)
coincide with the distributions spanned by a class of linear combinations of independent
squared Gaussian random variables; such linear combinations are moreover parameterized by
the adherent points of the numerical sequence
$$
j \mapsto \left|\widehat{\mu_{n_j}}(4)\right|, \quad j\geq 1.
$$
\end{itemize}
One should note that the phenomenon described at Point {\bf (ii)} is consistent with the fact that the
variance ${\rm Var}(\mathcal{L}_n)$ explicitly depends on the constant $\widehat{\mu_n}(4)^2$ (see
\eqref{e:varz}). In order to formally state our main findings, we introduce some further notation: for every
$\eta\in [0,1]$, we write $\mathcal{M}_\eta$ to indicate the random variable
\begin{equation}\label{e:r}
\mathcal{M}_\eta := \frac{1}{2\sqrt{1+\eta^2}} (2 - (1+\eta) X_1^2-(1-\eta) X_2^2),
\end{equation}
where $X=(X_{1},X_{2})$ is a two-dimensional centered Gaussian vector with identity covariance matrix (more
information on the distributions of the random variables $\Mc_{\eta}$ is provided in Proposition
\ref{p:meta}). For every $n\in S$, we write
\begin{equation}\label{e:k}
\Mc^n := \Mc_{ | \widehat{\mu_{n}}(4) | },
\end{equation}
where the quantity $\widehat{\mu_{n}}(4)$ is defined according to formula \eqref{e:smet}.
\smallskip
The following statement is the main result of the chapter.
\begin{theorem}\label{thm:lim dist sep}
Let the above notation and assumptions prevail. Then, the sequence $\big\{\mathbf{D}\big(\widetilde{\Lc}_{n_j}\big) : j\geq 1\big\}$ is relatively compact with respect to the topology of weak convergence, and a subsequence $\big\{ \widetilde{\Lc}_{n'_j}\big\}$ admits a limit in distribution if and only if the corresponding numerical subsequence $\big\{\big|\widehat{\mu_{n'_j}}(4)\big| : j\geq 1 \big\}$ converges to some $\eta\in [0,1]$, and in this case $$\widetilde{\Lc}_{n'_j} \stackrel{\rm d}{\longrightarrow} \mathcal{M}_\eta.$$
In particular, letting $d$ denote either the Kolmogorov distance \paref{prob distance}, or an arbitrary distance metrizing weak convergence on $\mathscr{P}$ (the space of all probability measures on $\R$ -- see Chapter 4), this implies that
\begin{equation}\label{e:b}
\lim_{j\to\infty} d\big(\widetilde{\Lc}_{n_j} , \Mc^{n_j}\big) = 0.
\end{equation}
\end{theorem}
The next result is a direct consequence of Theorem \ref{thm:lim dist sep}, of \cite[Theorem 11.7.1]{D} and of the fact that $\big\{\widetilde{\Lc}_{n_j} \big\}$ is a bounded sequence in $L^2$: it shows that one can actually couple the elements of the sequences $\{\widetilde{\Lc}_{n_j}\}$ and $\big\{\Mc^{n_j}\big\}$ on the same probability space, in such a way that their difference converges to zero almost surely and in $L^p$, for every $p<2$.
\begin{corollary}\label{c:coupling} There exists a probability space $(\Omega^*, \mathcal{F}^*, \P^*)$ as well as random variables $\{A_j, B_j : j\geq 1\}$ defined on it such that, for every $j\geq 1$, $A_j \stackrel{\rm d}{=} \widetilde{\Lc}_{n_j}$, $B_j \stackrel{\rm d}{=} \Mc^{n_j}$, and, as $j\to \infty$,
$$
A_j-B_j \to 0, \quad \mbox{a.s.}-\P^*.
$$
Also, for every $p\in (0,2)$, $\E^*[ | A_j-B_j| ^p ]\to 0$.
\end{corollary}
We conclude this section by stating some elementary properties of the random variables $\Mc_\eta$, $\eta\in [0,1]$, whose proof (left to the reader) can be easily deduced from the representation
\begin{equation}
\mathcal{M}_\eta = a(\eta) H_2(X_1)+b(\eta) H_2(X_2),
\end{equation}
where $H_2 (x) = x^2-1$ is the second Hermite polynomial,
$a(\eta) := -(1+\eta)/\sqrt{4(1+\eta^2)}$ and $b(\eta) := -(1-\eta)/\sqrt{4(1+\eta^2)}$,
as well as from the (classical) results presented in \cite[Section 2.7.4]{noupebook}.
In what follows, we will use the elementary fact that, if $\Mc_\eta$ is the random variable defined in \eqref{e:r} and if $\eta\to \eta_0\in [0,1]$, then $\Mc_\eta \stackrel{\rm d}{\longrightarrow} \Mc_{\eta_0}$.
\begin{proposition}[\bf About $\Mc_{\eta}$]\label{p:meta}
Let the above notation prevail.
\begin{enumerate}
\item For every $\eta\in [0,1]$, the distribution of $\Mc_\eta$ is absolutely continuous with respect to the Lebesgue measure, with support equal to $\big(-\infty, (1+\eta^2)^{-1/2} \big)$.
\item For every $\eta\in [0,1]$, the characteristic function of $\Mc_\eta$ is given by
$$
\varphi_\eta(\mu) := \E[\exp(i\mu \Mc_\eta)] = \frac{e^{-i\mu(a(\eta)+b(\eta))}}{\sqrt{(1-2i\mu a(\eta))(1-2i\mu b(\eta))}}, \quad \mu\in \R.
$$
\item For every $\eta\in [0,1]$, the distribution of $\Mc_\eta$ is determined
by its moments (or, equivalently, by its cumulants). Moreover, the sequence of
the cumulants of $\Mc_\eta$, denoted by $\{\kappa_p(\Mc_\eta) : p\geq 1\}$,
admits the representation: $\kappa_1(\Mc_\eta)=0$ and $\kappa_p(\Mc_\eta) = 2^{p-1}(p-1)!
(a(\eta)^p + b(\eta)^p)$, for every $p\geq 2$ (in particular, $\mathcal{M}_\eta$ is centered and has unit variance).
\item Let $\eta_0, \eta_1\in [0,1]$ be such that $\eta_0\neq \eta_1$. Then,
${\bf D}(\Mc_{\eta_0}) \neq {\bf D}(\Mc_{\eta_1})$.
\end{enumerate}
\end{proposition}
We observe that Point 4 in the previous statement is an immediate consequence of Point 1 and of the fact that the mapping $\eta\mapsto (1+\eta^2)^{-1/2}$ is injective on $[0,1]$. In the next section, we will discuss the role of chaotic expansions in the proofs of our main findings.
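The cumulant representation in Point 3 can be checked against exact Gaussian moments ($\E X^{2k} = (2k-1)!!$, so that $\Var(H_2(X))=2$ and $\kappa_3(H_2(X)) = \E(X^2-1)^3 = 8$); a short illustrative sketch (ours, not part of the argument):

```python
import math

def a_coef(eta):
    return -(1 + eta) / math.sqrt(4 * (1 + eta * eta))

def b_coef(eta):
    return -(1 - eta) / math.sqrt(4 * (1 + eta * eta))

def kappa(p, eta):
    # stated representation: kappa_p(M_eta) = 2^{p-1} (p-1)! (a^p + b^p), for p >= 2
    return 2 ** (p - 1) * math.factorial(p - 1) * (a_coef(eta) ** p + b_coef(eta) ** p)

# unit variance for every eta, since a^2 + b^2 = ((1+eta)^2 + (1-eta)^2)/(4(1+eta^2)) = 1/2
print([round(kappa(2, eta), 12) for eta in (0.0, 0.3, 0.7, 1.0)])
```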
}
{{}
\subsection{ Chaos and the Berry cancellation phenomenon}\label{ss:berryintro}
As in the previous chapter, the proofs of our results rely on a pervasive use of Wiener-It\^o chaotic expansions for non-linear functionals of Gaussian fields (the reader is referred to the two monographs \cite{noupebook, P-T} for an exhaustive discussion).
\medskip
According to \eqref{defrf}, the arithmetic random waves considered in this work are built starting from a family of complex-valued Gaussian random variables $\{a_\lambda : \lambda\in \mathbb{Z}^2\}$, defined on some probability space $(\Omega, \mathscr{F}, \mathbb{P})$ and verifying the following properties: {\bf (a)} each $a_\lambda$ has the form $x_\lambda+iy_\lambda$, where $x_\lambda$ and $y_\lambda$ are two independent real-valued Gaussian random variables with mean zero and variance $1/2$; {\bf (b)} $a_\lambda$ and $a_\tau$ are stochastically independent whenever $\lambda \notin\{ \tau, -\tau\}$, and {\bf (c)} $a_\lambda = \overline{a_{-\lambda}}$. We define ${\bf A}$ to be the closure in $L^2(\mathbb{P})$ of all real finite linear combinations of random variables $\xi$ having the form $\xi = z \, a_\lambda + \overline{z} \, a_{-\lambda}$, where $\lambda\in \mathbb{Z}^2$ and $z\in \mathbb{C}$. It is easily verified that ${\bf A}$ is a real centered Gaussian space (that is, a linear space of jointly Gaussian centered real-valued random variables, that is stable under convergence in $L^2(\mathbb{P})$).
\begin{defn}\label{d:chaos}{\rm For every $q=0,1,2,...$ the $q$th {\it Wiener chaos} associated with ${\bf A}$, written $C_q$, is the closure in $L^2(\mathbb{P})$ of all real finite linear combinations of random variables with the form
$$
H_{p_1}(\xi_1)H_{p_2}(\xi_2)\cdots H_{p_k}(\xi_k),
$$
where the integers $p_1,...,p_k \geq 0$ verify $p_1+\cdots+p_k = q$, and $(\xi_1,...,\xi_k)$ is a real centered Gaussian vector with identity covariance matrix extracted from ${\bf A}$ (note that, in particular, $C_0 = \mathbb{R}$).}
\end{defn}
Again $C_q \,\bot\, C_m$ (where the orthogonality holds in the sense of $L^2(\mathbb{P})$) for every $q\neq m$, and moreover
\begin{equation}\label{e:chaos}
L^2(\Omega, \sigma({\bf A}), \mathbb{P}) = \bigoplus_{q=0}^\infty C_q,
\end{equation}
that is: each real-valued functional $F$ of ${\bf A}$ can be (uniquely) represented in the form
\begin{equation}\label{e:chaos2}
F = \sum_{q=0}^\infty {\rm proj}(F \, | \, C_q),
\end{equation}
where ${\rm proj}(\bullet \, | \, C_q)$ stands for the projection operator onto $C_q$, and the series converges in $L^2(\mathbb{P})$. Plainly, ${\rm proj}(F \, | \, C_0) = \E F$. Now recall the definition of $T_n$ given in \eqref{defrf}: the following elementary statement shows that the Gaussian field
$$
\left\{ T_n(\theta),\, \frac{\partial}{\partial \theta_1} T_n(\theta),\, \frac{\partial}{\partial \theta_2} T_n(\theta) : \theta =(\theta_1,\theta_2)\in \mathbb{T}\right\}
$$
is a subset of ${\bf A}$, for every $n\in S$.
\begin{proposition} \label{p:field} Fix $n\in S$, let the above notation and conventions prevail. Then, for every $j=1,2$ one has that
\begin{equation}\label{e:partial}
\partial_j T_n(\theta) := \frac{\partial}{\partial \theta_j} T_n(\theta) = \frac{2\pi i}{\sqrt{\mathcal{N}_n} }\sum_{(\lambda_1,\lambda_2)\in \Lambda_n} \lambda_j a_\lambda e_\lambda(\theta),
\end{equation}
and therefore $T_n(\theta), \, \partial_1 T_n(\theta), \, \partial_2 T_n(\theta) \in {\bf A}$, for every $\theta\in \mathbb{T}$. Moreover, for every fixed $\theta\in \mathbb{T}$, one has that $T_n(\theta), \, \partial_1 T_n(\theta), \, \partial_2 T_n(\theta)$ are stochastically independent.
\end{proposition}
We shall often use the fact that
$$\displaylines{
\Var[\partial_j T_n(\theta)] = \frac{4\pi^2}{\mathcal N_n}
\sum_{\lambda\in \Lambda_n} \lambda_j^2 = 4\pi^2 \frac{n}{2}\ ,
}$$
and, accordingly, for $\theta=(\theta_1, \theta_2)\in \mathbb T$ and $j=1,2$, we will denote by $\partial_j \widetilde T_n(\theta)$ the normalized derivative
\begin{equation}\label{e:norma}
\partial_j \widetilde T_n(\theta) := \frac{1}{2\pi} \sqrt{\frac{2}{n}} \frac{\partial}{\partial \theta_j} T_n(\theta) = \sqrt{\frac{2}{n}}\frac{ i}{\sqrt{\mathcal N_n}}\sum_{ \lambda\in \Lambda_n}\lambda_j\,
a_{\lambda}e_\lambda(\theta)\ .
\end{equation}
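The normalization above rests on the exact lattice identity $\sum_{\lambda\in\Lambda_n}\lambda_j^2 = \Nc_n\, n/2$ for $j=1,2$, which follows from the symmetry $(\lambda_1,\lambda_2)\mapsto(\lambda_2,\lambda_1)$ of $\Lambda_n$ together with $\lambda_1^2+\lambda_2^2=n$; a quick integer-arithmetic check (our sketch, purely illustrative):

```python
import math

def frequencies(n):
    # Lambda_n: lattice points on the circle of radius sqrt(n)
    r = math.isqrt(n)
    return [(a, b) for a in range(-r, r + 1) for b in range(-r, r + 1)
            if a * a + b * b == n]

def second_moments(n):
    lam = frequencies(n)
    s1 = sum(l1 * l1 for (l1, l2) in lam)
    s2 = sum(l2 * l2 for (l1, l2) in lam)
    # s1 = s2 and s1 + s2 = n * N_n, hence s_j = n * N_n / 2
    return s1, s2, n * len(lam)

print(second_moments(25))  # -> (150, 150, 300)
```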
The next statement gathers together some of the main technical achievements of the present chapter. It shows in particular that the already evoked `arithmetic Berry cancellation phenomenon' (see \cite{AmP}, as well as \cite{Berry2002}) -- according to which the variance of the nodal length $\mathcal{L}_n$ (as defined in \eqref{e:length}) has asymptotically the same order as $ \frac{E_n}{\Nc_n^2}$ (rather than the expected order $ \frac{E_n}{\Nc_n}$) -- is a consequence of the following two facts:
\begin{itemize}
\item[\bf (i)] The projection of $\mathcal{L}_n$ on the second Wiener chaos $C_2$ is {\it exactly equal to zero} for every $n\in S$ (and so is the projection of $\mathcal{L}_n$ onto any chaos of odd order $q\geq 3$).
\item[\bf (ii)] The variance of $ {\rm proj}(\mathcal{L}_{n} \, | C_4)$ has the order $ \frac{E_n}{\Nc_n^2}$, as $\Nc_n\to \infty$, and one has moreover that
$$
\var(\Lc_{n}) = \var\left( {\rm proj}(\mathcal{L}_{n} \, | C_4)\right)+ o\left(\frac{E_{n}}{\Nc_{n}^2}\right).
$$
\end{itemize}
Note that, in principle, if ${\rm proj}(\mathcal{L}_n \, | C_2)$ did not vanish, then the sequence $n\mapsto \var\left( {\rm proj}(\mathcal{L}_{n} \, | C_2)\right)$ would have provided the leading term (of the order $ \frac{E_n}{\Nc_n}$) in the asymptotic development of $\var(\Lc_{n})$.
\begin{proposition}[\bf Berry cancellation phenomenon]\label{p:berry} For every fixed $n\in S$,\\ one has that
\begin{equation}\label{e:berry}
{\rm proj}(\mathcal{L}_n \, | C_2) ={\rm proj}(\mathcal{L}_n \, | C_{2k+1}) = 0, \quad k=0,1,...,
\end{equation}
Moreover, if $\{n_j : j\geq 1\}\subset S$ is such that $\lim_{j\to\infty }{\mathcal N}_{n_j} = \infty$, then (as $j \to \infty$)
$$
\var(\Lc_{n_j})\sim c({\mu_{n_j} }) \frac{E_{n_j}}{\Nc_{n_j}^2} \sim \var\left( {\rm proj}(\mathcal{L}_{n_j} \, | C_4)\right),
$$
and therefore,
$$
\sum_{k=3}^\infty \var\left( {\rm proj}\left(\mathcal{L}_{n_j} \, | C_{2k} \right)\right) = o\left( \frac{E_{n_j}}{\Nc_{n_j}^2}\right).
$$
\end{proposition}
\begin{remark}\label{remMau}\rm Nodal lengths of Gaussian Laplace eigenfunctions $T_\ell$, $\ell\in \N$,
on the two-dimensional sphere
have the same qualitative behavior as their toral counterparts. Indeed, in Proposition \ref{2sfera} it is shown that
the second chaotic term in the Wiener-It\^o
expansion of the length of level curves $T_\ell^{-1}(u)$, $u\in \mathbb R$ disappears
if and only if $u=0$. These findings shed some light on Berry's cancellation
phenomenon: indeed, they explain why
the asymptotic variance of the length of level curves respects the natural scaling --
except for the nodal case \cite{Wig,wigsurvey}.
\end{remark}
\begin{conjecture}\rm
Consider Gaussian eigenfunctions $T$ on some manifold
$\mathbb M$ and define as usual the $u$-excursion set as
\[
A_u(T,\mathbb M):=\{x \in \mathbb M : T(x)>u\}, \qquad u \in \mathbb{R}.
\]
Toral (resp. spherical) nodal lengths can be viewed as the length of the boundary of $A_u$ for
$u=0$ for $\mathbb M =\mathbb T$ the $2$-torus (resp. $\mathbb M = \mathbb S^2$ the $2$-sphere).
In this sense, as stated in the Introduction of this thesis,
they represent a special case of the second {\it Lipschitz-Killing curvature} of $A_u$, $u \in \mathbb{R}$
(see \cite{adlertaylor} for the definition and a comprehensive treatment of Lipschitz-Killing curvatures
on Gaussian excursion sets). We conjecture that for excursion sets $A_u$ of Gaussian
eigenfunctions on \emph{compact} manifolds $\mathbb M$ the projection of each Lipschitz-Killing
curvature on the second-order
Wiener chaos vanishes if and only if $u=0$; clearly the proof of this conjecture would represent a major step
towards a global understanding of Berry's cancellation phenomenon. In the two-dimensional case, there
are three Lipschitz-Killing curvatures, which correspond to the area, half the boundary length and the
Euler-Poincar\'e characteristic of the excursion sets; for the $2$-sphere, we refer to
\cite{Nonlin,maudom,fluct} for
results supporting our conjecture in the case of the area and the Euler-Poincar\'e characteristic, and to
Remark
\ref{remMau} and Chapter 7
for the boundary lengths.
\end{conjecture}
\subsection{Plan}
The rest of the chapter is organized as follows: \S 8.2 contains a study of the chaotic
representation of nodal lengths, \S 8.3 focuses on the projection of nodal lengths on the fourth
Wiener chaos, whereas \S 8.4 contains a proof of our main result.
}
\section{Chaotic expansions}\label{expan}
The aim of this section is to derive an explicit expression for each projection of the type ${\rm proj}(\Lc_n \, | \, C_q)$, $q\geq 1$. In order to accomplish this task, as in Chapter 7, we first focus on a sequence of auxiliary random variables $\{\Lc_n^\eps : \eps>0\}$ that approximate $\Lc_n$ in the sense of the $L^2(\P)$ norm.
\subsection{Preliminary results}
Fix $n\geq 1$, and let $T_n$ be defined according to \eqref{defrf}. Define, for $\varepsilon >0$, the approximating random variables
\begin{equation}\label{napprox}
\mathcal L_n^\varepsilon :=\frac{1}{2\varepsilon}
\int_{\mathbb T} 1_{[-\varepsilon,\varepsilon]}(T_n(\theta))\|\nabla T_n(\theta) \|\,d\theta\ .
\end{equation}
\begin{lemma}\label{approx}
We have
$$
\lim_{\varepsilon\to 0} \E[|\mathcal L_n^\varepsilon - \mathcal L_n|^2]=0\ .
$$
\end{lemma}
\begin{proof}
We have that
$$
\lim_{\varepsilon\to 0} \mathcal L_n^\varepsilon = \mathcal L_n\ ,\qquad a.s.
$$
Moreover, for every $\varepsilon$
$$\displaylines{
|\mathcal L_n^\varepsilon - \mathcal L_n|^2\le 2((\mathcal L_n^\varepsilon)^2 +(\mathcal L_n)^2 )\le\cr
\le 2((12\sqrt{4\pi^2n})^2 + (\mathcal L_n)^2)\ ,
}$$
where the last inequality follows from Lemma 3.2 in \cite{RW}. We can hence apply the dominated convergence theorem to conclude.
\end{proof}
We observe that the previous result suggests that the random variable $\mathcal L_n$ can be formally written as
\begin{equation}\label{formal}
\mathcal L_n = \int_{ \mathbb T} \delta_0(T_n(\theta))\| \nabla T_n(\theta) \|
\,d\theta\ ,
\end{equation}
where $\delta_0$ denotes the Dirac mass in $0$.
\subsection{Chaotic expansion of nodal length $\mathcal{L}_n$}
In view of the convention \eqref{e:norma}, we will rewrite \paref{napprox} as
\begin{equation}\label{formal2}
\mathcal L_n^\eps = \frac{1}{2\eps}\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\int_{\mathbb T}
1_{[-\eps, \eps]}(T_n(\theta)) \sqrt{\partial_1 \widetilde
T_n(\theta)^2+\partial_2 \widetilde T_n(\theta)^2}\,d\theta\ .
\end{equation}
Recall from Chapter 7 the two collections of coefficients
$\{\alpha_{2n,2m} : n,m\geq 0\}$ and $\{\beta_{2l} : l\geq 0\}$, which are connected to the (formal) Hermite expansions of the norm $\| \cdot
\|$ in $\R^2$ and of the Dirac mass $ \delta_0(\cdot)$, respectively.
These are given by \paref{e:beta} and \paref{e:alpha}
\begin{equation*}
\beta_{2l}:= \frac{1}{\sqrt{2\pi}}H_{2l}(0)\ ,
\end{equation*}
where $H_{2l}$ denotes the $2l$-th Hermite polynomial and
\begin{equation*}
\alpha_{2n,2m}=\sqrt{\frac{\pi}{2}}\frac{(2n)!(2m)!}{n!
m!}\frac{1}{2^{n+m}} p_{n+m}\left (\frac14 \right)\ ,
\end{equation*}
where for $N=0, 1, 2, \dots $ and $x\in \R$ (as in \paref{pN})
\begin{equation*}
p_{N}(x) :=\sum_{j=0}^{N}(-1)^{j}\, (-1)^{N}{N
\choose j}\, \frac{(2j+1)!}{(j!)^2}\, x^j \ ,
\end{equation*}
$\frac{(2j+1)!}{(j!)^2}$ being the so-called {\it swinging factorial}
restricted to odd indices. The following result provides the key tool in order to prove Proposition \ref{p:berry}.
{{}
\begin{proposition}[\bf Chaotic expansion of $\Lc_n$]\label{teoexp} Relation \eqref{e:berry} holds for every $n\in S$ and also, for every $q\geq 2$,
\begin{eqnarray}\label{e:pp}
&&{\rm proj}(\Lc_n\, | \, C_{2q}) \\
&&= \sqrt{\frac{4\pi^2n}{2}}\sum_{u=0}^{q}\sum_{k=0}^{u}
\frac{\alpha _{2k,2u-2k}\beta _{2q-2u}
}{(2k)!(2u-2k)!(2q-2u)!} \!\!\int_{\mathbb T}\!\! H_{2q-2u}(T_n(\theta))
H_{2k}(\partial_1 \widetilde T_n(\theta))H_{2u-2k}(\partial_2
\widetilde T_n(\theta))\,d\theta.\notag
\end{eqnarray}
As a consequence, one has the representation
\begin{eqnarray}\label{chaosexp}
\Lc_n &=& \E \Lc_n + \sqrt{\frac{4\pi^2n}{2}}\sum_{q=2}^{+\infty}\sum_{u=0}^{q}\sum_{k=0}^{u}
\frac{\alpha _{2k,2u-2k}\beta _{2q-2u}
}{(2k)!(2u-2k)!(2q-2u)!}\times\\
&&\hspace{4.5cm} \times \int_{\mathbb T}H_{2q-2u}(T_n(\theta))
H_{2k}(\partial_1 \widetilde T_n(\theta))H_{2u-2k}(\partial_2
\widetilde T_n(\theta))\,d\theta,\notag
\end{eqnarray}
where the series converges in $L^2(\P)$.
\end{proposition}
}
\noindent\begin{proof}[Proof of Proposition \ref{teoexp}] The proof of the chaotic projection formula is based on the same arguments as the proof of Proposition \ref{teoexpS}; we therefore omit the details.
Let us show that ${\rm proj}(\Lc_n\, | \, C_{2})$ vanishes. It equals the quantity
$$\displaylines{
\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\Big(
\frac{\alpha_{0,0}\beta_{2}}{2} \int_{\mathbb T}
H_{2}(T_n(\theta))\,d\theta + \frac{\alpha_{0,2}\beta_{0}}{2}
\int_{\mathbb T} H_{2}(\partial_2 \widetilde
T_n(\theta))\,d\theta + \cr
+\frac{\alpha_{2,0}\beta_{0}}{2}
\int_{\mathbb T} H_{2}(\partial_1 \widetilde T_n(\theta))\,d\theta
\Big )\ . }$$ Using the explicit expression $H_2(x)=x^2-1$, we deduce that
\begin{eqnarray*}
\int_{\mathbb T} H_{2}(T_n(\theta))\,d\theta &=& \int_{\mathbb T}
\left ( T_n(\theta)^2 -1 \right)\,d\theta=
\int_{\mathbb T}
\left ( \frac{1}{\mathcal N_n}
\sum_{\lambda, \lambda'\in \Lambda_n} a_\lambda \overline a_{\lambda'}
e_{\lambda - \lambda'}(\theta)
-1 \right)\,d\theta\\
&=&
\frac{1}{\mathcal N_n}\sum_{\lambda, \lambda'\in \Lambda_n} a_\lambda \overline a_{\lambda'}
\underbrace{\int_{\mathbb T}e_{\lambda - \lambda'}(\theta)\,d\theta}_{\delta_{\lambda}^{\lambda'}}
-1= \frac{1}{\mathcal N_n} \sum_{\lambda\in \Lambda_n} (|a_\lambda|^2 -1),
\end{eqnarray*}
where $\delta_{\lambda}^{\lambda'}$ is the Kronecker symbol (observe that $\E[|a_\lambda|^2]=1$, so that the integral
$\int_{\mathbb T} H_{2}(T_n(\theta))\,d\theta$ has mean $0$, as expected). Analogously, for $j=1,2$ we have that
\begin{eqnarray*}
\int_{\mathbb T} H_{2}(\partial_j \widetilde T_n(\theta))\,d\theta
&=& \int_{\mathbb T} \left ( \frac{2}{n}\frac{1}{\mathcal N_n}
\sum_{\lambda, \lambda'\in \Lambda_n} \lambda_j \lambda'_j
a_\lambda \overline a_{\lambda'} e_{\lambda - \lambda'}(\theta )
-1 \right)\,d\theta\\
&=&\frac{2}{n}\frac{1}{\mathcal N_n}
\sum_{\lambda\in \Lambda_n} \lambda_j^2 |a_\lambda|^2 - 1
=\frac{1}{\mathcal N_n} \frac{2}{n} \sum_{\lambda \in \Lambda_n}
\lambda_j^2 (|a_\lambda|^2 - 1)\ ,
\end{eqnarray*}
{{} where the last equality follows from the elementary identity
$$
\sum_{\lambda \in \Lambda_n}\lambda_j^2 = \frac{n \mathcal N_n} {2},
$$
itself a consequence of the relation $\sum_{\lambda\in \Lambda_n}(\lambda_1^2+\lambda_2^2)=n\mathcal N_n$, together with the fact that $(\lambda_1,\lambda_2)\mapsto (\lambda_2,\lambda_1)$ is a bijection of $\Lambda_n$.
}Since $\alpha_{2n,2m}=\alpha_{2m,2n}$ we can rewrite ${\rm proj}(\Lc_n\, | \, C_{2})$ as
\begin{eqnarray*}
&&\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\left( \frac{\alpha_{0,0}\beta_{2}}{2}
\frac{1}{\mathcal N_n} \sum_{\lambda\in \Lambda_n} (|a_\lambda|^2 -1)
+ \frac{\alpha_{0,2}\beta_{0}}{2}
\frac{1}{\mathcal N_n} \frac{2}{n} \sum_{\lambda \in \Lambda_n}
\underbrace{(\lambda_1^2+\lambda_2^2)}_{=n} (|a_\lambda|^2 - 1)
\right)\\
&&=\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\frac{1}{2\mathcal N_n}
\left( \alpha_{0,0}\beta_{2}
\sum_{\lambda\in \Lambda_n} (|a_\lambda|^2 -1)
+ 2\alpha_{0,2}\beta_{0}
\sum_{\lambda \in \Lambda_n}
(|a_\lambda|^2 - 1)
\right)\\
&&=\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\frac{1}{2\mathcal N_n}
\left( \alpha_{0,0}\beta_{2}+ 2\alpha_{0,2}\beta_{0}\right)
\sum_{\lambda\in \Lambda_n} (|a_\lambda|^2 -1)\ .
\end{eqnarray*}
Easy computations (using $H_0(0)=1$, $H_2(0)=-1$, $p_0\equiv 1$ and $p_1(x)=6x-1$) show that
$$
\alpha_{0,0} = \sqrt{\frac{\pi}{2}}\ ,\quad \alpha_{0,2}=\alpha_{2,0} = \frac12
\sqrt{\frac{\pi}{2}}\ , \quad \beta_0 =\frac{1}{ \sqrt{2\pi}}\ , \quad \beta_2 = - \frac{1}{ \sqrt{2\pi}}\ ,
$$
and therefore
\begin{eqnarray*}
{\rm proj}(\Lc_n\, | \, C_{2}) &=&\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\frac{1}{2\mathcal N_n}
\left(- \sqrt{\frac{\pi}{2}}\frac{1}{\sqrt{2\pi}}
+ 2\frac{1}{2}\sqrt{\frac{\pi}{2}}\frac{1}{\sqrt{2\pi}}
\right)
\sum_{\lambda\in \Lambda_n} (|a_\lambda|^2 -1)
\\
&=&\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\frac{1}{2\mathcal N_n}
\left(- \frac12 + \frac12
\right)
\sum_{\lambda\in \Lambda_n} (|a_\lambda|^2 -1) = 0,
\end{eqnarray*}
thus concluding the proof.
\end{proof}
\section{Asymptotic study of ${\rm proj}(\Lc_n \, | \, C_4)$}
\subsection{Preliminary considerations}
As anticipated in the Introduction, one of the main findings of the present chapter is that, whenever $\Nc_{n_j}\to \infty$, the asymptotic behaviour of the normalized sequence $\{ \widetilde{\Lc}_{n_j}\}$ in \eqref{e:culp} is completely determined by that of the fourth order chaotic projections
\begin{equation}\label{e:ron}
{\rm proj}(\widetilde{\Lc}_{n_j} \, | \, C_4) = \frac{ {\rm proj}(\mathcal{L}_{n_j} \, | \, C_4)}{\sqrt{\Var[\mathcal{L}_{n_j} ]}} , \quad j\geq 1.
\end{equation}
The aim of this section is to provide a detailed asymptotic study of the sequence appearing in \eqref{e:ron}, by using in particular the explicit formula \eqref{e:pp}. For the rest of the chapter, we use the notation
\begin{equation}\label{e:pzi}
\psi(\eta ) := \frac{3+\eta}{8}, \quad \eta\in [-1,1],
\end{equation}
and will exploit the following elementary relations, valid as $\Nc_{n_j}\to \infty$:
\begin{enumerate}
\item[(i)] if $\widehat{\mu}_{n_j}(4)\to \eta\in [-1,1]$, then, for $\ell=1,2$,
\begin{equation}\label{e:easy1}
\frac{2}{n_j^2 \Nc_{n_j}} \sum_{\substack{\lambda = (\lambda_1,\lambda_2)\in \Lambda_{n_j} \\ \lambda_2\geq 0 }}\!\!\!\lambda_\ell^4 \longrightarrow \psi(\eta);
\end{equation}
\item[(ii)] if $\widehat{\mu}_{n_j}(4)\to \eta\in [-1,1]$, then
\begin{equation}\label{e:easy2}
\frac{2}{n_j^2 \Nc_{n_j}} \sum_{\substack{\lambda = (\lambda_1,\lambda_2)\in \Lambda_{n_j} \\ \lambda_2\geq 0 }}\!\!\!\lambda_1^2\lambda_2^2 \longrightarrow \frac12 - \psi(\eta).
\end{equation}
\end{enumerate}
Note that \eqref{e:easy1}--\eqref{e:easy2} follow immediately from the fact that, for every $n$,
$$
\widehat{\mu}_n(4) = \frac{1}{n^2 \Nc_n} \sum_{\lambda\in \Lambda_n} \left(\lambda_1^4 +\lambda_2^4-6\lambda_1^2\lambda_2^2\right),
$$
as well as from elementary symmetry considerations. We will also use the identity:
\begin{equation}\label{e:magic}
64\, \psi(\eta)^2-48\, \psi(\eta)+10 = \eta^2+1.
\end{equation}
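For the reader's convenience, we sketch how \eqref{e:easy1}--\eqref{e:easy2} and \eqref{e:magic} can be checked. Since $\lambda_1^2+\lambda_2^2=n$ for every $\lambda \in \Lambda_n$, and since $\Lambda_n$ is invariant under the mapping $(\lambda_1,\lambda_2)\mapsto (\lambda_2,\lambda_1)$ (so that $\sum_\lambda \lambda_1^4 = \sum_\lambda \lambda_2^4$), one has the linear system
\begin{equation*}
\sum_{\lambda\in \Lambda_n}\left(\lambda_1^4+\lambda_2^4\right) + 2\!\!\sum_{\lambda\in \Lambda_n}\lambda_1^2\lambda_2^2 = n^2\Nc_n, \qquad
\sum_{\lambda\in \Lambda_n}\left(\lambda_1^4+\lambda_2^4\right) - 6\!\!\sum_{\lambda\in \Lambda_n}\lambda_1^2\lambda_2^2 = n^2\Nc_n\,\widehat{\mu}_n(4),
\end{equation*}
whose solution is
\begin{equation*}
\frac{1}{n^2\Nc_n}\sum_{\lambda\in \Lambda_n}\lambda_\ell^4 = \frac{3+\widehat{\mu}_n(4)}{8} = \psi\big(\widehat{\mu}_n(4)\big),
\qquad
\frac{1}{n^2\Nc_n}\sum_{\lambda\in \Lambda_n}\lambda_1^2\lambda_2^2 = \frac{1-\widehat{\mu}_n(4)}{8} = \frac12 - \psi\big(\widehat{\mu}_n(4)\big);
\end{equation*}
restricting the sums to $\{\lambda_2\geq 0\}$ (which asymptotically halves them) yields \eqref{e:easy1}--\eqref{e:easy2}. Relation \eqref{e:magic} follows by direct expansion: since $64\,\psi(\eta)^2 = (3+\eta)^2$ and $48\,\psi(\eta) = 6(3+\eta)$, one has $64\,\psi(\eta)^2-48\,\psi(\eta)+10 = (3+\eta)^2-6(3+\eta)+10 = \eta^2+1$.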
One of our principal tools will be the following multivariate central limit theorem (CLT).
\begin{proposition}\label{p:clt} Assume that the subsequence $\{n_j\}\subset S$ is such that $\Nc_{n_j} \to \infty$ and $\widehat{\mu}_{n_j}(4) \to \eta\in [-1,1]$. Define
\begin{eqnarray*}
H(n_j)=\left(
\begin{array}{c}
H_{1}(n_j) \\
H_{2}(n_j) \\
H_{3}(n_j) \\
H_{4}(n_j)
\end{array}
\right ) & :=& \frac{1}{n_j\sqrt{\Nc_{n_j}/2}}\sum_{\substack{\lambda = (\lambda_1,\lambda_2)\in \Lambda_{n_j} \\ \lambda_2\geq 0 }}\left(
|a_{\lambda }|^{2}-1\right) \left(
\begin{array}{c}
n_j \\
\lambda _{1}^{2} \\
\lambda _{2}^{2} \\
\lambda _{1}\lambda _{2}
\end{array}
\right) .
\end{eqnarray*}
Then, as $n_j\to \infty$, the following CLT holds:
\begin{eqnarray}\label{e:punz}
H(n_j) \stackrel{\rm d}{\longrightarrow} Z(\eta)= \left(
\begin{array}{c}
Z_{1} \\
Z_{2} \\
Z_{3} \\
Z_{4}
\end{array}
\right )\text{ ,}
\end{eqnarray}
where $Z(\eta)$ is a centered Gaussian vector with covariance
\begin{equation}\label{e:sig}
\Sigma =\left(
\begin{array}{cccc}
1 & \frac{1}{2} & \frac{1}{2} & 0 \\
\frac{1}{2} & \psi & \frac{1}{2}-\psi & 0 \\
\frac{1}{2} & \frac{1}{2}-\psi & \psi & 0 \\
0 & 0 & 0 & \frac{1}{2}-\psi
\end{array}
\right) \text{,}
\end{equation}
with $\psi = \psi(\eta)$, as defined in \eqref{e:pzi}.
\end{proposition}
\noindent\begin{proof}
Each component of the vector $H(n_j)$ is an element of the second Wiener chaos associated with ${\bf A}$ (see Section \ref{ss:berryintro}). As a consequence, according e.g. to \cite[Theorem 6.2.3]{noupebook}, in order to prove the desired result it is enough to establish the following relations (as $n_j\to \infty$): (a) the covariance matrix of $H(n_j)$ converges to $\Sigma$, and (b) for every $k=1,2,3,4$, $H_k(n_j)$ converges in distribution to a one-dimensional centered Gaussian random variable. Point (a) follows by a direct computation based on formulae \eqref{e:easy1}--\eqref{e:easy2}, as well as on the fact that the random variables in the set $$\left\{
|a_{\lambda }|^{2}-1 : \lambda \in \Lambda_{n_j}, \, \lambda_2\geq 0 \right\}$$ are centered, independent, identically distributed and with unit variance. To prove Point (b), write $\Lambda_{n_j}^+ := \{ \lambda \in \Lambda_{n_j}, \, \lambda_2\geq 0\}$ and observe that, for every $k$ and every $n_j$, the random variable $H_k(n_j)$ has the form
$$
H_k(n_j) = \sum_{\lambda \in \Lambda_{n_j}^+} c_k(n_j, \lambda)\times (|a_{\lambda }|^{2}-1)
$$
where $\{c_k(n_j, \lambda)\}$ is a collection of positive deterministic coefficients such that $$\max_{\lambda \in \Lambda_{n_j}^+} c_k(n_j, \lambda)\to 0,$$ as $n_j\to\infty$. An application of the Lindeberg criterion, e.g. in the quantitative form stated in \cite[Proposition 11.1.3]{noupebook}, consequently yields that $H_k(n_j)$ converges in distribution to a Gaussian random variable, thus concluding the proof.
\end{proof}
\begin{remark}{\rm
The eigenvalues of the matrix $\Sigma $ are given by: $\left\{ 0,\frac{3}{2},
\frac{1}{2}-\psi ,2\psi -\frac{1}{2}\right\} .$
}
\end{remark}
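Observe that the eigenvalue $0$ of $\Sigma$ reflects an exact linear relation: since $\lambda_1^2+\lambda_2^2 = n_j$ for every $\lambda\in \Lambda_{n_j}$, the components of $H(n_j)$ satisfy $H_2(n_j)+H_3(n_j) = H_1(n_j)$ for every $j$, and therefore $Z_2+Z_3 = Z_1$, almost surely. This is consistent with \eqref{e:sig}, since
$$
\Var(Z_2+Z_3-Z_1) = \psi+\psi+1+2\Big(\frac12-\psi\Big)-2\cdot\frac12-2\cdot\frac12 = 0.
$$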
Following \cite{AmP}, we will make extensive use of the structure of
the {\it length-$4$ correlation set of frequencies}:
$$
S_n(4) := \lbrace (\lambda, \lambda', \lambda'', \lambda''')\in (\Lambda_n)^4 :
\lambda + \dots + \lambda''' = 0 \rbrace\ .
$$
It is easily seen that an element of $S_n(4)$ necessarily verifies one of the following properties (A)--(C):
\begin{eqnarray*}
{\rm (A)} &:&\left\{
\begin{array}{l}
\lambda = - \lambda ^{\prime } \\
\lambda ^{\prime \prime }= - \lambda ^{\prime \prime \prime }
\end{array}
\right. , \\
{\rm (B)} &:&\left\{
\begin{array}{l}
\lambda =-\lambda ^{\prime \prime } \\
\lambda ^{\prime }=-\lambda ^{\prime \prime \prime }
\end{array}
\right. , \\
{\rm (C)} &:&\left\{
\begin{array}{l}
\lambda =- \lambda ^{\prime \prime \prime } \\
\lambda ^{\prime }= - \lambda ^{\prime \prime }
\end{array}
\right. .
\end{eqnarray*}
We also have the following identities between sets:
\[
\{(\lambda, \lambda', \lambda'', \lambda''') : \mbox{(A) and (B) are verified} \} = \left\{ (\lambda, \lambda', \lambda'', \lambda''') : \lambda = - \lambda ^{\prime }=-\lambda ^{\prime \prime
}=\lambda ^{\prime \prime \prime }\right\} ,
\]
\[
\{(\lambda, \lambda', \lambda'', \lambda''') : \mbox{(A) and (C) are verified} \}= \left\{(\lambda, \lambda', \lambda'', \lambda''') : \lambda =- \lambda ^{\prime }=\lambda ^{\prime \prime
}=- \lambda ^{\prime \prime \prime }\right\} ,
\]
\[
\{(\lambda, \lambda', \lambda'', \lambda''') : \mbox{(B) and (C) are verified} \} = \left\{(\lambda, \lambda', \lambda'', \lambda''') : \lambda =\lambda ^{\prime }=-\lambda ^{\prime \prime
}=- \lambda ^{\prime \prime \prime }\right\} ,
\]
whereas, for $n\ne 0$, there is no element of $S_n(4)$ simultaneously verifying (A), (B) and (C). Since each of the conditions (A), (B), (C) is verified by exactly $\Nc_n^2$ elements of $(\Lambda_n)^4$, while each of the three pairwise intersections described above contains exactly $\Nc_n$ elements, the inclusion-exclusion principle yields that, for $n\neq 0$,
\[
| S_{n}(4)| =3\Nc_{n}^2-3\Nc_{n}=3\Nc_{n}(\Nc_{n}-1)\text{ .}
\]
In the next subsections, we will establish a non-central limit theorem for the sequence defined in \eqref{e:ron}.
\subsection{Non-central convergence of the fourth chaotic projection: statement}
One of the main achievements of the present chapter is the following statement.
\begin{proposition}\label{p:4nclt} Let $\{n_j\} \subset S$ be such that $\Nc_{n_j}\to \infty$ and $\widehat{\mu}_{n_j}(4)\to \eta\in [-1,1]$; set
\begin{equation}\label{e:varz}
v(n_j) :=\sqrt{\frac{4\pi^2n_j}{512}}\frac{1}{\Nc_{n_j}}, \quad j\geq 1.
\end{equation}
Then, as $n_j\to \infty$,
\begin{equation}\label{e:4nclt}
Q(n_j) := \frac{ {\rm proj}(\mathcal{L}_{n_j} \, | \, C_4)}{v(n_j) } \stackrel{\rm d}{\longrightarrow} 1+Z_1^2-2Z_2^2-2Z_3^2-4Z_4^2,
\end{equation}
where the four-dimensional vector $Z^{\top} = Z^{\top}(\eta) = (Z_1,Z_2,Z_3,Z_4)$ is defined in \eqref{e:punz}. Moreover, one has that
\begin{equation}\label{e:magic2}
{\rm Var} ( 1+Z_1^2-2Z_2^2-2Z_3^2-4Z_4^2) = 1+ \eta^2.
\end{equation}
\end{proposition}
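We also observe that \eqref{e:magic2} is consistent with the classical formula ${\rm Var}(Z^{\top} A Z) = 2\,{\rm tr}\big((A\Sigma)^2\big)$, valid for every centered Gaussian vector $Z$ with covariance $\Sigma$ and every symmetric matrix $A$: taking $A = {\rm diag}(1,-2,-2,-4)$, a direct computation gives
$$
2\,{\rm tr}\big((A\Sigma)^2\big) = 64\,\psi(\eta)^2-48\,\psi(\eta)+10,
$$
which equals $1+\eta^2$ by virtue of \eqref{e:magic}.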
Since the multidimensional CLT stated in \eqref{e:punz} implies that
$$
(H_1(n_j)^2, H_2(n_j)^2, H_3(n_j)^2, H_4(n_j)^2) \stackrel{\rm d}{\longrightarrow} (Z_1^2, Z_2^2, Z_3^2, Z_4^2),
$$
in order to prove Proposition \ref{p:4nclt} it is sufficient to show that
\begin{eqnarray}\label{e:zut}
&&Q(n_j) = H_1(n_j)^2-2H_2(n_j)^2-2H_3(n_j)^2 - 4H_4(n_j)^2+ R(n_j),
\end{eqnarray}
where, as $n_j\to \infty$, the sequence of random variables $\{R(n_j)\}$ converges in probability to some numerical constant $\alpha\in \R$. To see this, observe that, in view of \eqref{eq:var leading KKW} and of the orthogonal chaotic decomposition \eqref{e:chaos}, one has that $\{Q(n_j)\}$ is a centered sequence of random variables such that $\sup_j \E Q(n_j)^2 <\infty$. By uniform integrability, this fact implies that, as $n_j\to \infty$,
$$
\E[Z_1^2-2Z_2^2-2Z_3^2-4Z_4^2 ]+\alpha = \lim_{n_j\to \infty} \E[Q(n_j)] = \lim_{n_j\to \infty} 0 = 0,
$$
and therefore, since $\E[ Z_1^2-2Z_2^2-2Z_3^2-4Z_4^2 ] = 1-2\psi-2\psi-4(2^{-1}-\psi)=-1$, one has necessarily that $\alpha=1$. (Note that our results yield indeed a rather explicit representation of the term $R(n_j)$, so that the fact that $\alpha=1$ can alternatively be verified by a careful bookkeeping of the constants appearing in the computations to follow).
\medskip
Our proof of \eqref{e:zut} is based on a number of technical results that are gathered together in the next subsection. These results will be combined with the following representation of ${\rm proj}(\mathcal{L}_{n_j} \, | \, C_4)$, which is a direct consequence of \eqref{e:pp} in the case $q=2$ (for notational simplicity, we write $n$ instead of $n_j$ on the right-hand side):
\begin{eqnarray}\label{e:nus}
{\rm proj}(\mathcal{L}_{n_j} \, | \, C_4)&=&\sqrt{4\pi^2}\sqrt{\frac{n}{2}}\Big(\frac{\alpha_{0,0}\beta_4}{4!}\int_{\mathbb
T} H_4(T_n(\theta))\,d\theta\\ \notag
&&+\frac{\alpha_{0,4}\beta_0}{4!}\int_{\mathbb T} H_4(\partial_2
\widetilde T_n(\theta))\,d\theta
+\frac{\alpha_{4,0}\beta_0}{4!}\int_{\mathbb T} H_4(\partial_1
\widetilde T_n(\theta))\,d\theta+\\ \notag
&&+\frac{\alpha_{0,2}\beta_2}{2!2!}\int_{\mathbb T}
H_2(T_n(\theta))H_2(\partial_2 \widetilde
T_n(\theta))\,d\theta\\
\notag
&&+\frac{\alpha_{2,0}\beta_2}{2!2!}\int_{\mathbb T}
H_2(T_n(\theta))H_2(\partial_1 \widetilde
T_n(\theta))\,d\theta+\\ \notag
&& +\frac{\alpha_{2,2}\beta_0}{2!2!}\int_{\mathbb T}
H_2(\partial_1 \widetilde T_n(\theta))H_2(\partial_2 \widetilde
T_n(\theta))\,d\theta \Big),
\end{eqnarray}
where the coefficients $\alpha_{\cdot, \cdot}$ and $\beta_{\cdot}$ are defined according to equation \eqref{e:alpha} and equation \eqref{e:beta}, respectively.
\subsection{Some ancillary lemmas}
The next four lemmas provide some useful representations for the six summands appearing on the right-hand side of \eqref{e:nus}. In what follows, $n$ always indicates a positive integer different from zero and, moreover, in order to simplify the discussion we sometimes use the shorthand notation:
\begin{eqnarray*}
&& \sum_{\lambda} = \sum_{\substack{\lambda = (\lambda_1,\lambda_2)\in \Lambda_{n} }}, \quad\quad \sum_{\lambda, \lambda'} = \sum_{\substack{\lambda, \lambda' \in \Lambda_{n} }} \quad \quad\mbox{and}\quad \quad \sum_{\lambda : \lambda_2\geq 0} = \sum_{\substack{\lambda = (\lambda_1,\lambda_2)\in \Lambda_{n} \\ \lambda_2\geq 0 }},
\end{eqnarray*}
in such a way that the exact value of the integer $n$ will always be clear from the context. Also, the symbol $\{n_j\}$ will systematically indicate a subsequence of integers contained in $S$ such that $\Nc_{n_j}\to \infty$ and $\widehat{\mu}_{n_j} (4) \to \eta\in [-1,1]$, as $n_j\to \infty$. As is customary, we write `$\stackrel{\mathbb{P}}\longrightarrow$' to denote convergence in probability, and we use the symbol $o_{\P}(1)$ to indicate a sequence of random variables converging to zero in probability, as $\mathcal{N}_n\to\infty$.
\begin{lemma}\label{lem1} One has the following representation:
$$\displaylines{
\int_{\mathbb T} H_{4}(T_n(\theta))\,d\theta
=\frac{6}{\Nc_{n}}\left (\frac{1}{\sqrt{\Nc_{n}/2}}\sum_{\lambda :\lambda
_{2}\geq 0}(|a_{\lambda }|^{2}-1) + o_{\P}(1) \right )^{2}-\frac{3}{\Nc_{n}^{2}}\sum_{\lambda
}|a_{\lambda }|^{4}.
}$$
Also, as $n_j\to\infty$,
$$
\frac{3}{\Nc_{n_j}}\sum_{\lambda
}|a_{\lambda }|^{4}\stackrel{\mathbb{P}}\longrightarrow 6.
$$
\end{lemma}
\noindent\begin{proof}
Using the explicit expression $H_4(x)=x^4 - 6x^2 +3$, we deduce that
\begin{eqnarray*}
\int_{\mathbb T} H_{4}(T_n(\theta))\,d\theta &=& \int_{\mathbb T}
\left ( T_n(\theta)^4 -6T_n(\theta)^2 +3 \right)\,d\theta\\
&=& \frac{1}{\mathcal N_n^2}
\sum_{\lambda, \dots, \lambda''' \in \Lambda_n} a_{\lambda} \overline a_{\lambda'} a_{\lambda''}
\overline a_{\lambda'''}
\int_{\mathbb T}
\exp(2\pi i\langle \lambda -\lambda' + \lambda'' - \lambda''', \theta\rangle )\,d\theta+\\
&& -\, 6\frac{1}{\mathcal N_n}
\sum_{\lambda, \lambda' \in \Lambda_n} a_{\lambda}
\overline a_{\lambda'}
\int_{\mathbb T}\exp(2\pi i\langle \lambda - \lambda', \theta\rangle )\,d\theta +3 \\
&=& \frac{1}{\mathcal N_n^2}
\sum_{ \lambda - \lambda' + \lambda'' - \lambda'''=0} a_{\lambda} \overline a_{\lambda'} a_{\lambda''}
\overline a_{\lambda'''}
-6\frac{1}{\mathcal N_n}
\sum_{\lambda \in \Lambda_n} |a_{\lambda}|^2
+3,
\end{eqnarray*}
where the subscript $ \lambda - \lambda' + \lambda'' - \lambda'''=0$ indicates that $(\lambda, -\lambda', \lambda'', -\lambda''')\in S_n(4)$. Owing to the structure of $S_n(4)$ discussed above,
the previous expression simplifies to
\begin{eqnarray*}
&&3\frac{1}{\mathcal N_n^2}\Big( \sum_{\lambda, \lambda'\in \Lambda_n} |a_\lambda|^2 |a_{\lambda'}|^2
-\sum_\lambda |a_{\lambda}|^4 \Big)
-6\frac{1}{\mathcal N_n}
\sum_{\lambda \in \Lambda_n} |a_{\lambda}|^2 +3 \\
&& =3\frac{1}{\mathcal N_n} \Big( \frac{1}{\sqrt{\mathcal N_n}}
\sum_{\lambda \in \Lambda_n} (|a_\lambda|^2-1 ) \Big)^2 -
3\frac{1}{\mathcal N_n^2}
\sum_{\lambda \in \Lambda_n} |a_\lambda|^4\\
&& = \frac{6}{\Nc_{n}}\left (\frac{1}{\sqrt{\Nc_{n}/2}}\sum_{\lambda
:\lambda _{2}\geq 0}(|a_{\lambda }|^{2}-1) +o_{\P}(1) \right
)^{2}-\frac{3}{\Nc_{n}^{2}}\sum_{\lambda }|a_{\lambda }|^{4},
\end{eqnarray*}
where $o_{\P}(1) = - (2\Nc_{n})^{-1/2} (|a_{(\sqrt{n}, 0)}|^2+|a_{(-\sqrt{n}, 0)}|^2-2)$ (a term that vanishes whenever $n$ is not a perfect square), thus immediately yielding the first part of the statement (after developing the square). The second part of the statement follows from a standard application of the law of large numbers to the sum,
$$
\frac{3}{\Nc_{n_j}}\sum_{\lambda }|a_{\lambda }|^{4} =\frac{3}{\Nc_{n_j}/2}\sum_{\lambda : \lambda_2\geq 0}|a_{\lambda }|^{4} + o_{\P}(1),
$$
as well as from the observation that the set $\{|a_\lambda|^4 : \lambda\in \Lambda_{n_j}, \, \lambda_2\geq 0\}$ is composed of i.i.d. random variables such that $\E |a_\lambda|^4 =2$.
\end{proof}
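In the notation of Proposition \ref{p:clt} (extended in the obvious way from $n_j$ to a generic $n\in S$), the first part of Lemma \ref{lem1} can be rewritten as
\begin{equation*}
\int_{\mathbb T} H_{4}(T_n(\theta))\,d\theta = \frac{6}{\Nc_n}\big( H_1(n)+o_{\P}(1)\big)^2 - \frac{3}{\Nc_n^2}\sum_{\lambda} |a_\lambda|^4,
\end{equation*}
since $H_1(n) = (\Nc_n/2)^{-1/2}\sum_{\lambda : \lambda_2\geq 0}(|a_\lambda|^2-1)$. Analogous rewritings hold for the lemmas that follow, and this is, roughly speaking, the mechanism producing the quadratic form appearing in \eqref{e:zut}.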
\begin{lemma}\label{lem2} For $\ell =1,2$,
$$\displaylines{
\int_{\mathbb T} H_{4}(\partial_\ell \widetilde T_n(\theta))\,d\theta
= \frac{24}{\Nc_{n}}\left[ \frac{1}{\sqrt{\Nc_{n}/2}}
\sum_{\lambda : \lambda _{2}\geq 0}\left (\frac{\lambda _{\ell}^{2}}{n}\left( |a_{\lambda
}|^{2}-1\right) \right) + o_{\P}(1) \right] ^{2} +\cr
-\left( \frac{2}{n}\right) ^{2}\frac{3}{\Nc_{n}^{2}}\sum_{\lambda }\lambda
_{\ell}^{4}|a_{\lambda }|^{4}.}$$
Moreover, as $n_j\to \infty$,
\[
\left( \frac{2}{n_j}\right) ^{2}\frac{3}{\Nc_{n_j}}\sum_{\lambda }\lambda
_{\ell}^{4}|a_{\lambda }|^{4} \stackrel{\P}{\longrightarrow} 24\, \psi(\eta).
\]
\end{lemma}
\noindent\begin{proof}
The proof is similar to that of Lemma \ref{lem1}. We have that
\begin{eqnarray*}
&&\int_{\mathbb T} H_{4}(\partial_\ell \widetilde T_n(\theta))\,d\theta
=\int_{\mathbb T}
(\partial_\ell \widetilde T_n(\theta)^4 -6 \partial_\ell \widetilde T_n(\theta)^2+3)\,d\theta\\
&& = \frac{1}{\mathcal N_n^2}\frac{4}{n^2}
\sum_{\lambda, \dots, \lambda''' \in \Lambda_n} \lambda_\ell\lambda'_\ell\lambda''_\ell\lambda'''_\ell a_{\lambda} \overline a_{\lambda'} a_{\lambda''}
\overline a_{\lambda'''}
\int_{\mathbb T}
\exp(2\pi i\langle \lambda -\lambda' + \lambda'' - \lambda''', \theta\rangle )\,d\theta+\\
&& -6\frac{1}{\mathcal N_n}\frac{2}{n}
\sum_{\lambda, \lambda' } \lambda_\ell\lambda'_\ell a_{\lambda}
\overline a_{\lambda'}
\int_{\mathbb T}\exp(2\pi i\langle \lambda - \lambda', \theta\rangle )\,d\theta +3 \\
&&=\frac{1}{\mathcal N_n^2}\frac{4}{n^2}
\sum_{\lambda-\lambda'+\lambda''- \lambda''' =0}\lambda_\ell\lambda'_\ell\lambda''_\ell\lambda'''_\ell a_{\lambda} \overline a_{\lambda'} a_{\lambda''}
\overline a_{\lambda'''}
-6\frac{1}{\mathcal N_n}\frac{2}{n}
\sum_{\lambda \in \Lambda_n} \lambda_\ell^2 |a_{\lambda}|^2
+3\\
&&=\frac{3}{\mathcal N_n^2}\frac{4}{n^2}\left (
\sum_{\lambda,\lambda'}\lambda_\ell^2(\lambda'_\ell)^2 |a_{\lambda}|^2 |a_{\lambda'}|^2 - \sum_{\lambda} \lambda_\ell^4 |a_{\lambda}|^4\right )
-6\frac{1}{\mathcal N_n}\frac{2}{n}
\sum_{\lambda \in \Lambda_n}\lambda_\ell^2 |a_{\lambda}|^2
+3\\
&& =\frac{24}{\Nc_{n}}\left[ \frac{1}{\sqrt{\Nc_{n}/2}}
\sum_{\lambda : \lambda _{2}\geq 0}\left (\frac{\lambda _{\ell}^{2}}{n}\left( |a_{\lambda
}|^{2}-1\right) \right) + o_{\P}(1) \right] ^{2} -\left( \frac{2}{n}\right) ^{2}\frac{3}{\Nc_{n}^{2}}\sum_{\lambda }\lambda
_{\ell}^{4}|a_{\lambda }|^{4}\ .
\end{eqnarray*}
To conclude the proof, we observe that
\begin{eqnarray*}
&&\left( \frac{2}{n_j}\right) ^{2}\frac{3}{\Nc_{n_j}}\sum_{\lambda }\lambda
_{\ell}^{4}|a_{\lambda }|^{4} \\
&&= o_{\P}(1)+ \left( \frac{2}{n_j}\right) ^{2}\frac{3}{\Nc_{n_j}/2}\sum_{\lambda:\lambda_2\geq 0 }\lambda
_{\ell}^{4}(|a_{\lambda }|^{4} -2) +\frac{2\times 24}{n^2_j\Nc_{n_j} }\sum_{\lambda:\lambda_2\geq 0 }\lambda
_{\ell}^{4} := K_1(n_j) + K_2(n_j),
\end{eqnarray*}
so that the conclusion follows from \eqref{e:easy1}, as well as from the fact that, since the random variables $\{|a_\lambda|^4 -2: \lambda\in \Lambda_{n_j}, \, \lambda_2\geq 0\}$ are i.i.d., square-integrable and centered and $\lambda_\ell^4/n^2\leq 1$, $\E K_1(n_j)^2 = O(\Nc_{n_j}^{-1})\to 0$.
\end{proof}
\medskip
\begin{lemma}\label{lem3} One has that
\begin{eqnarray}\label{e:barber}
&& \int_{\mathbb T} H_2(T_n(\theta))\Big( H_2(\partial_1 \widetilde
T_n(\theta)) + H_2(\partial_2 \widetilde
T_n(\theta))\Big)\,d\theta\\
&&\hspace{3cm}= \frac{4}{\Nc_{n}}\left\{ \frac{1}{\sqrt{\Nc_{n}/2}}\sum_{\lambda
: \lambda _{2}\geq 0}\left( |a_{\lambda }|^{2}-1\right) +o_{\P}(1) \right\} ^{2}-\frac{2
}{\Nc_{n}^{2}}\sum_{\lambda }|a_{\lambda }|^{4}.\notag
\end{eqnarray}
\end{lemma}
\noindent \begin{proof} For $\ell=1,2$,
\begin{eqnarray*}
&& \int_{\mathbb T} H_2(T_n(\theta))H_2(\partial_\ell \widetilde
T_n(\theta))\,d\theta=\int_{\mathbb T} (T_n(\theta)^2-1)(\partial_\ell \widetilde
T_n(\theta)^2-1)\,d\theta\\
&&=\int_{\mathbb T} \left (\frac{1}{\mathcal N_n} \sum_{\lambda, \lambda'} a_\lambda \overline{a}_{\lambda'} e_\lambda(\theta) e_{-\lambda'}(\theta)-1\right )\left (\frac{2}{n}\frac{1}{\mathcal N_n} \sum_{\lambda'', \lambda'''} \lambda_\ell'' \lambda_\ell''' a_{\lambda''} \overline{a}_{\lambda'''} e_{\lambda''}(\theta) e_{-\lambda'''}(\theta)-1 \right )\,d\theta\\
&& =\frac{2}{n}\frac{1}{\mathcal N_n^2} \sum_{\lambda-\lambda'+\lambda''-\lambda'''=0} \lambda_\ell'' \lambda_\ell''' a_\lambda \overline{a}_{\lambda'} a_{\lambda''} \overline{a}_{\lambda'''}-
\frac{1}{\mathcal N_n} \sum_{\lambda} |a_\lambda|^2
-
\frac{2}{n}\frac{1}{\mathcal N_n} \sum_{\lambda}\lambda_\ell^2 |a_\lambda|^2 +1\ .
\end{eqnarray*}
An application of the inclusion-exclusion principle yields that the first summand in the previous expression equals
$$\displaylines{
\frac{2}{n}\frac{1}{\mathcal N_n^2} \left ( \sum_{\lambda,\lambda'} \lambda_\ell^2 |a_\lambda|^2 |a_{\lambda'}|^2 +2\sum_{\lambda,\lambda'} \lambda_\ell \lambda'_\ell |a_\lambda|^2 |a_{\lambda'}|^2 - 2\sum_\lambda \lambda_\ell^2|a_\lambda|^4 + \sum_\lambda \lambda_\ell^2|a_\lambda|^4 \right )\ .
}$$
Using the relation $a_{-\lambda} = \overline a_\lambda$, we also infer that
$$
\sum_{\lambda,\lambda'} \lambda_\ell \lambda'_\ell |a_\lambda|^2 |a_{\lambda'}|^2 = \left (\sum_\lambda \lambda_\ell |a_\lambda|^2 \right)^2 =0\ .
$$
Summing the terms corresponding to $\partial _{1}$ and $\partial _{2}$, we deduce that the left-hand side of \eqref{e:barber}
\begin{eqnarray*}
&=&\frac{2}{n}\frac{1}{\Nc_{n}^{2}}\left\{ \sum_{\lambda ,\lambda ^{\prime
}}\left( \lambda _{1}^{2}+\lambda _{2}^{2}\right) |a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}-\sum_{\lambda }\left( \lambda
_{1}^{2}+\lambda _{2}^{2}\right) |a_{\lambda }|^{4}\right\} \\
&&-\frac{2}{\Nc_{n}}\sum_{\lambda }|a_{\lambda }|^{2}-\frac{2}{n}\frac{1}{\Nc_{n}}\sum_{\lambda }\left( \lambda _{1}^{2}+\lambda _{2}^{2}\right) |a_{\lambda }|^{2}+2 \\
&=&\frac{2}{n}\frac{1}{\Nc_{n}^{2}}\left\{ \sum_{\lambda ,\lambda ^{\prime
}}n|a_{\lambda }|^{2}|a_{\lambda ^{\prime }}|^{2}-\sum_{\lambda
}n|a_{\lambda }|^{4}\right\} -\frac{2}{\Nc_{n}}\sum_{\lambda }|a_{\lambda }|^{2}-\frac{2}{n}\frac{1}{\Nc_{n}}\sum_{\lambda }n|a_{\lambda }|^{2}+2 \\
&=&2\frac{1}{\Nc_{n}^{2}}\left\{ \sum_{\lambda ,\lambda ^{\prime }}|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}-\sum_{\lambda }|a_{\lambda }|^{4}\right\} -\frac{2}{\Nc_{n}}\sum_{\lambda }|a_{\lambda }|^{2}-\frac{2}{\Nc_{n}}\sum_{\lambda }|a_{\lambda }|^{2}+2\\
&=&\frac{2}{\Nc_{n}}\left\{ \frac{\sqrt{2}}{\sqrt{\Nc_{n}/2}}\sum_{\lambda
: \lambda _{2}\geq 0}\left( |a_{\lambda }|^{2}-1\right) +o_{\P}(1) \right\} ^{2}-\frac{2
}{\Nc_{n}^{2}}\sum_{\lambda }|a_{\lambda }|^{4},
\end{eqnarray*}
which corresponds to the desired conclusion.
\end{proof}
\medskip
Our last lemma allows one to deal with the most challenging term appearing in formula \eqref{e:nus}.
\begin{lemma}\label{lem4}
We have that
\begin{eqnarray*}
&&\int H_{2}(\partial _{1}\widetilde{T}_{n})H_{2}(\partial _{2}\widetilde{T}_{n})\,d\theta \\
&&=-\frac{4}{\Nc_{n}}\left[ \frac{1}{\sqrt{\Nc_{n}/2}}\frac{1}{n}\sum_{\lambda : \lambda _{2}\geq
0}\lambda _{2}^{2}(|a_{\lambda }|^{2}-1) \right] ^{2}
-\frac{4}{\Nc_{n}}\left[ \frac{1}{\sqrt{\Nc_{n}/2}}\frac{1}{n}\sum_{\lambda : \lambda _{2}\geq
0}\lambda _{1}^{2}(|a_{\lambda }|^{2}-1)+o_{\P}(1)\right] ^{2}\\
&&\quad +\frac{4}{\Nc_{n}}\left[ \frac{1}{\sqrt{\Nc_{n}/2}}\sum_{\lambda : \lambda _{2}\geq
0}(|a_{\lambda }|^{2}-1) +o_{\P}(1)
\right] ^{2} \\
&& \quad+\frac{16}{\Nc_{n}}\left[
\frac{1}{\sqrt{\Nc_{n}/2}}\frac{1}{n}\sum_{\lambda : \lambda _{2}\geq
0}\lambda _{1}\lambda _{2}\left( |a_{\lambda }|^{2}-1\right)+o_{\P}(1)
\right] ^{2}
-\frac{12}{n^{2}}\frac{1}{\Nc_{n}^{2}}\sum_{\lambda
}\lambda _{1}^{2}\lambda _{2}^{2}|a_{\lambda }|^{4}\ .
\end{eqnarray*}
Moreover, the following convergence takes place as $n_j\to\infty$:
$$
\frac{12}{n_j^{2}}\frac{1}{\Nc_{n_j}}\sum_{\lambda
}\lambda _{1}^{2}\lambda _{2}^{2}|a_{\lambda }|^{4}\stackrel{\P}{\longrightarrow} 12 - 24\, \psi(\eta).
$$
\end{lemma}
\noindent\begin{proof} One has that
\begin{eqnarray}
&&\int H_{2}(\partial _{1}\widetilde{T}_{n})H_{2}(\partial _{2}\widetilde{T
_{n})d\theta \notag \\
&&=\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\sum_{\lambda -\lambda ^{\prime
}+\lambda ^{\prime \prime }-\lambda ^{\prime \prime \prime }=0}\lambda
_{1}\lambda _{1}^{\prime }\lambda _{2}^{\prime \prime }\lambda _{2}^{\prime
\prime \prime }a_{\lambda }\overline{a}_{\lambda ^{\prime }}a_{\lambda
^{\prime \prime }}\overline{a}_{\lambda ^{\prime \prime \prime }} \label{hope1}\\
&&-\frac{2}{n}\frac{1}{\Nc_{n}}\sum_{\lambda }\lambda _{1}^{2}|a_{\lambda
}|^{2}-\frac{2}{n}\frac{1}{\Nc_{n}}\sum_{\lambda }\lambda _{2}^{2}|a_{\lambda
}|^{2}+1\text{ .}\label{hope2}
\end{eqnarray}
First of all, we note that
\[
\mathbb{E}\left[ \frac{2}{n}\frac{1}{\Nc_{n}}\sum_{\lambda }(\lambda
_{1}^{2}+\lambda _{2}^{2})|a_{\lambda }|^{2}\right] =\mathbb{E}\left[ \frac{2
}{\Nc_{n}}\sum_{\lambda }|a_{\lambda }|^{2}\right] =2\text{ .}
\]
Let us now focus on (\ref{hope1}). Using the structure of $S_n(4)$ recalled above, we obtain
\[
\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\sum_{\lambda -\lambda ^{\prime }+\lambda
^{\prime \prime }-\lambda ^{\prime \prime \prime }=0}\lambda _{1}\lambda
_{1}^{\prime }\lambda _{2}^{\prime \prime }\lambda _{2}^{\prime \prime
\prime }a_{\lambda }\overline{a}_{\lambda ^{\prime }}a_{\lambda ^{\prime
\prime }}\overline{a}_{\lambda ^{\prime \prime \prime }}
\]
\[
=\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\left\{ \sum_{\lambda ,\lambda
^{\prime }}\lambda _{1}^{2}(\lambda _{2}^{\prime })^{2}|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}+2\sum_{\lambda ,\lambda
^{\prime }}\lambda _{1}\lambda _{2}\lambda _{1}^{\prime }\lambda
_{2}^{\prime }|a_{\lambda }|^{2}|a_{\lambda ^{\prime
}}|^{2}-3\sum_{\lambda }\lambda _{1}^{2}\lambda
_{2}^{2}|a_{\lambda }|^{4}\right\}.
\]
Let us now write
\begin{eqnarray*}
\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\sum_{\lambda ,\lambda ^{\prime }}\lambda
_{1}^{2}(\lambda _{2}^{\prime })^{2}|a_{\lambda }|^{2}|a_{\lambda ^{\prime
}}|^{2} &:=&A\text{ ,} \\
\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}2\sum_{\lambda ,\lambda ^{\prime }}\lambda
_{1}\lambda _{2}\lambda _{1}^{\prime }\lambda _{2}^{\prime }|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2} &:=&B\text{ ,} \\
-3\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\sum_{\lambda }\lambda _{1}^{2}\lambda
_{2}^{2}|a_{\lambda }|^{4} &:=&C\text{ ,} \\
\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\left\{ -\Nc_{n}\frac{n^{2}}{2}\sum_{\lambda
}|a_{\lambda }|^{2}+\frac{\Nc_{n}^{2}n^{2}}{4}\right\} &:=&D\text{ ,}
\end{eqnarray*}
so that $D$ coincides with the sum of the terms appearing in \eqref{hope2}. We have that $A$ equals
\begin{eqnarray*}
&&\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\sum_{\lambda ,\lambda ^{\prime
}}\lambda _{1}^{2}(\lambda _{2}^{\prime })^{2}|a_{\lambda }|^{2}|a_{\lambda
^{\prime }}|^{2} \\
&&=\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\frac{1}{2}\left\{ \sum_{\lambda
,\lambda ^{\prime }}\lambda _{1}^{2}(\lambda _{2}^{\prime })^{2}|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}+\sum_{\lambda ,\lambda ^{\prime }}\lambda
_{1}^{2}(\lambda _{2}^{\prime })^{2}|a_{\lambda }|^{2}|a_{\lambda ^{\prime
}}|^{2}\right\}
\\
&&=\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\frac{1}{2}\left\{ \sum_{\lambda ,\lambda
^{\prime }}(n-\lambda _{2}^{2})(\lambda _{2}^{\prime })^{2}|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}+\sum_{\lambda ,\lambda ^{\prime }}\lambda
_{1}^{2}(n-(\lambda _{1}^{\prime })^{2})|a_{\lambda }|^{2}|a_{\lambda
^{\prime }}|^{2}\right\},
\end{eqnarray*}
an expression that can be rewritten as
\begin{eqnarray*}
&&\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\frac{1}{2}\left\{ -\sum_{\lambda
,\lambda ^{\prime }}\lambda _{2}^{2}(\lambda _{2}^{\prime })^{2}|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}-\sum_{\lambda ,\lambda ^{\prime }}\lambda
_{1}^{2}(\lambda _{1}^{\prime })^{2}|a_{\lambda }|^{2}|a_{\lambda ^{\prime
}}|^{2}\right\} \\
&&+\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\frac{1}{2}\left\{ n\sum_{\lambda
,\lambda ^{\prime }}(\lambda _{2}^{\prime })^{2}|a_{\lambda
}|^{2}|a_{\lambda ^{\prime }}|^{2}+n\sum_{\lambda ,\lambda ^{\prime
}}\lambda _{1}^{2}|a_{\lambda }|^{2}|a_{\lambda ^{\prime }}|^{2}\right\}.
\end{eqnarray*}
As a consequence,
\begin{eqnarray*}
A+D &=&-\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\frac{1}{2}\left[ \sum_{\lambda
}\lambda _{2}^{2}(|a_{\lambda }|^{2}-1)\right] ^{2} \\
&&-\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}\frac{1}{2}\left[ \sum_{\lambda
}\lambda _{1}^{2}(|a_{\lambda }|^{2}-1)\right] ^{2} \\
&&+\frac{4}{\Nc_{n}^{2}}\frac{1}{2}\left[ \sum_{\lambda }(|a_{\lambda }|^{2}-1)\right] ^{2}.
\end{eqnarray*}
On the other hand,
\begin{eqnarray*}
B &=&\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}2\sum_{\lambda ,\lambda ^{\prime
}}\lambda _{1}\lambda _{2}\lambda _{1}^{\prime }\lambda _{2}^{\prime
}|a_{\lambda }|^{2}|a_{\lambda ^{\prime }}|^{2} \\
&=&\frac{4}{n^{2}}\frac{1}{\Nc_{n}^{2}}2\left[ \sum_{\lambda }\lambda
_{1}\lambda _{2}\left( |a_{\lambda }|^{2}-1\right) \right] ^{2}.
\end{eqnarray*}
The last assertion in the statement, which concerns the term $C$ defined above, is a direct consequence of \eqref{e:easy2} and of an argument similar to the one that concluded the proof of Lemma \ref{lem2}.
\end{proof}
\subsection{End of the proof of Proposition \ref{p:4nclt}}
Plugging the explicit expressions appearing in Lemma \ref{lem1}, Lemma \ref{lem2}, Lemma \ref{lem3} and Lemma \ref{lem4} into \eqref{e:nus} (and exploiting the fact that $p_2(1/4) =-1/8$), one deduces after some standard simplification that representation \eqref{e:zut} is indeed valid, so that the conclusion of Proposition \ref{p:4nclt} follows immediately. In order to prove relation \eqref{e:magic2}, introduce the centered Gaussian vector $\widetilde{Z}^{\top} := (\widetilde{Z}_{1}, \widetilde{Z}_{2}, \widetilde{Z}_{3}, \widetilde{Z}_{4})$, with covariance matrix given by
$$
\widetilde{\Sigma}
:=\left(
\begin{array}{cccc}
1 & \frac{1}{2\sqrt{\psi }} & \frac{1}{2\sqrt{\psi }} & 0 \\
\frac{1}{2\sqrt{\psi }} & 1 & \frac{1}{2\psi }-1 & 0 \\
\frac{1}{2\sqrt{\psi }} & \frac{1}{2\psi }-1 & 1 & 0 \\
0 & 0 & 0 & 1
\end{array}
\right) \text{ .}
$$
Then,
\begin{eqnarray*}
&& \Var\left [ {Z}_{1}^{2}-2{Z}_{2}^{2}-2{Z}_{3}^{2}-4{Z}_{4}^{2}\right ]
\\
&& =\Var\left [ \widetilde{Z}_{1}^{2}-2\psi \widetilde{Z}_{2}^{2}-2\psi
\widetilde{Z}_{3}^{2}-4(\frac{1}{2}-\psi )\widetilde{Z}_{4}^{2}\right ] \\
&&=\Var\left [ H_{2}(\widetilde{Z}_{1})-2\psi H_{2}(\widetilde{Z}_{2})-2\psi
H_{2}(\widetilde{Z}_{3})-4\left (\frac{1}{2}-\psi \right )H_{2}(\widetilde{Z}_{4})\right ] \\
&&=2+8\psi ^{2}+8\psi ^{2}+32\left (\frac{1}{2}-\psi \right )^{2}-4\psi \Cov\left[ H_{2}(\widetilde{Z}_{1}),H_{2}(\widetilde{Z}_{2})\right] \\
&&\quad-4\psi \Cov\left[ H_{2}(\widetilde{Z}_{1}),H_{2}(\widetilde{Z}_{3})\right]
+8\psi ^{2} \Cov\left[ H_{2}(\widetilde{Z}_{2}),H_{2}(\widetilde{Z}_{3})\right]
\\
&&=2+8\psi ^{2}+8\psi ^{2}+32\left (\frac{1}{2}-\psi \right )^{2} \\
&&\quad-8\psi \left (\frac{1}{2\sqrt{\psi }}\right )^{2}-8\psi \left (\frac{1}{2\sqrt{\psi }}\right )^{2}+16\psi ^{2}\left (\frac{1}{2\psi }-1\right )^{2}
\\
&&=2+8\psi ^{2}+8\psi ^{2}+32\left (\frac{1}{4}+\psi ^{2}-\psi \right )-2-2+16\psi ^{2}\left (\frac{1}{4\psi ^{2}}+1-\frac{1}{\psi }\right )
\\
&&=64\psi ^{2}-48\psi +10\text{,}
\end{eqnarray*}
and the conclusion follows from \eqref{e:magic}.
\section{End of the proof of Theorem \ref{thm:lim dist sep}}\label{s:mainproof}
A direct computation (obtained e.g. by diagonalising the covariance matrix $\Sigma$ appearing in \eqref{e:sig}) reveals that, for every $\eta\in [-1,1]$, the random variable
$$
\frac{1}{\sqrt{1+\eta^2}}\Big( 1+Z_1^2-2Z_2^2-2Z_3^2-4Z_4^2 \Big)
$$
has the same law as $\mathcal{M}_{|\eta|}$, as defined in \eqref{e:r}. This implies, in particular, that such a random variable has unit variance, and has a distribution that does not depend on the sign of $\eta$. Now let the assumptions and notations of Theorem \ref{thm:lim dist sep} prevail
(in particular, $\Nc_{n_j} \to \infty$). Since the sequence $\{|\widehat{\mu}_{n_j}(4)| : j\geq 1\}$ is nonnegative and bounded by 1, there exists a subsequence $\{n'_j\}$ such that $|\widehat{\mu}_{n'_j}(4)|$ converges to some $\eta\in [0,1]$. It follows that $\{n'_j\}$ necessarily contains a subsequence $\{n''_j\}\subset \{n'_j\}$ such that one of the following properties holds: either (i) $\widehat{\mu}_{n''_j}(4)$ converges to $\eta$, or (ii) $\widehat{\mu}_{n''_j}(4)$ converges to $-\eta$. Now, if $\{n''_j\}$ is of type (i), then our initial remarks together with \eqref{eq:var leading KKW} and \eqref{e:4nclt} imply that
$$
\lim_{n''_j} \frac{\Var[\mathcal L_{n''_j}]}{\Var[{\rm proj}(\Lc_{n''_j} \, | \, C_4) ]} = 1.
$$
In view of the chaotic decomposition \eqref{e:chaos2}, this result implies that, as $n''_j\to \infty$,
$$
\widetilde{\Lc}_{n''_j} = \frac{{\rm proj}(\Lc_{n''_j} \, | \, C_4)}{\sqrt{\Var[\mathcal L_{n''_j}]}} + o_{\P}(1),
$$
and consequently that $\widetilde{\Lc}_{n''_j}$ converges in distribution to $\Mc_{\eta}$, by virtue of Proposition \ref{p:4nclt}. An analogous argument shows that, if $\{n''_j\}$ is of type (ii), then necessarily $\widetilde{\Lc}_{n''_j}$ converges in distribution to $\Mc_{| -\eta|} = \Mc_{\eta}$. The results described above readily imply the following three facts: (a) if the subsequence $\{n'_j\}\subset \{n_j\}$ is such that $|\widehat{\mu}_{n'_j}(4)|\to \eta\in [0,1]$, then $\widetilde{\Lc}_{n'_j}\stackrel{\rm d}{\longrightarrow}\Mc_{\eta}$, (b) any subsequence $\{n'_j\}\subset \{n_j\} $ contains a further subsequence $\{n''_j\}\subset \{n'_j\}$ such that $|\widehat{\mu}_{n''_j}(4)|$ converges to some $\eta\in [0,1]$, and therefore $\widetilde{\Lc}_{n''_j}\stackrel{\rm d}{\longrightarrow}\Mc_{\eta}$, and (c) if the subsequence $\{n'_j\}$ is such that $|\widehat{\mu}_{n'_j}(4)|$ is not converging, then $\widetilde{\Lc}_{n'_j}$ is not converging in distribution, since in this case the set $\{{\bf D}(\widetilde{\Lc}_{n'_j})\}$ has necessarily two distinct adherent points (thanks to Point 4 in Proposition \ref{p:meta}). The first part of the statement is therefore proved. To prove \eqref{e:b}, use fact (b) above to deduce that, for every subsequence $\{n'_j\}$, there exists a further subsequence $\{n''_j\}$ such that $\widetilde{\Lc}_{n''_j}\stackrel{\rm d}{\longrightarrow}\Mc_{\eta}$ and $\Mc^{n''_j} \stackrel{\rm d}{\longrightarrow}\Mc_{\eta}$ (where we have used the notation \eqref{e:k}), and consequently
$$
d\big(\widetilde{\Lc}_{n''_j} \,,\, \Mc^{n''_j}\big)\leq d\big(\widetilde{\Lc}_{n''_j}\, ,\, \Mc_\eta \big)+d\big(\Mc_\eta\,,\, \Mc^{n''_j}\big)\longrightarrow 0, \quad n_j''\to \infty.
$$
The previous asymptotic relation is obvious whenever $d$ is a distance
metrizing weak convergence on $\mathscr{P}$. To deal with the case where
$d=d_K$ equals the Kolmogorov distance, one has to use the standard fact that,
since $\Mc_\eta$ has an absolutely continuous distribution,
$X_n \stackrel{\rm d}{\longrightarrow} \Mc_\eta$ if and only if
$d_K(X_n,\Mc_\eta)\longrightarrow 0$. The proof of Theorem \ref{thm:lim dist sep} is complete.
\chapter*{Part 3\\ Spin random fields}
\addcontentsline{toc}{chapter}{Part 3: Spin random fields}
\chapter{Representation of Gaussian isotropic spin random fields}
This chapter is based on the second part of \cite{mauspin}: we investigate spin random fields on the sphere, extending the representation formula for Gaussian isotropic random fields on homogeneous spaces of a compact group in Chapter 2 to the spin case. Moreover we introduce a powerful tool for studying spin random fields and more generally random sections of homogeneous vector bundles, that is, the ``pullback'' random field.
\section{Random sections of vector bundles}
We now investigate the case of Gaussian isotropic spin random fields
on $\cS^2$, with the aim of extending the representation result
of Theorem \ref{real-general}. As stated in the Introduction of this thesis, these models have
recently received much attention (see \cite{bib:LS}, \cite{malyarenko} or \cite{dogiocam}),
being motivated by the modeling of CMB data. Actually our point of view stems from \cite{malyarenko}.
We consider first the case of a general vector bundle. Let $\xi= (E,
p, B)$ be a finite-dimensional \emph{complex vector bundle} on the
topological space $B$, which is called the \emph{base space}. The
surjective map
\begin{equation}
p: E\goto B
\end{equation}
is the
\emph{bundle projection}, and $p^{-1}(x)$, $x\in B$, is the \emph{fiber}
above $x$.
Let us denote by $\B(B)$ the Borel $\sigma$-field of $B$.
A section of $\xi$ is a map $u: B \to E$ such that $p \circ u=id_B$. As $E$ is itself a topological space, we can speak of continuous sections.
We suppose from now on that every fiber $p^{-1}(x)$ carries an inner
product and that a measure $\mu$ is given on the base space. Hence we can
consider square integrable sections, namely those such that
$$
\int_B\langle u(x),u(x)\rangle_{p^{-1}(x)}\,d\mu(x)<+\infty
$$
and define the corresponding $L^2$ space accordingly.
Let $(\Omega, \F, \P)$ be a probability space.
\begin{definition}\label{definizione di sezione aleatoria}
A \emph{random section $T$
of the vector bundle $\xi$} is a collection of $E$-valued random variables
$(T_x)_{x\in B}$ indexed by the elements of the base space $B$ such that
the map $\Omega \times B \ni(\omega, x) \mapsto T_x(\omega)$
is $\F \otimes \B(B)$-measurable and, for every $\omega$, the path
$$
B\ni x\to T_x(\omega)\in E
$$
is a section of $\xi$, i.e. $p\circ T_\cdot(\omega) =id_B$.
\end{definition}
Continuity of a random section $T$ is easily defined by requiring that
for every $\omega\in \Omega$ the map $x \mapsto T_x$
is a continuous section of $\xi$. Similarly a.s. continuity is defined.
A random section $T$ of $\xi$ is a.s. square integrable if the map
$x \mapsto \| T_x (\omega)\|^2 _{p^{-1}(x)}$ is a.s. integrable; it is second order if $\E[ \| T_x \|^2_{p^{-1}(x)}] < +\infty$ for every $x\in B$; and it is
mean square integrable
if
$$
\E\Bigl[\int_B\| T_x \|^2 _{p^{-1}(x)} \, d\mu(x)\Bigr]< +\infty\ .
$$
As already remarked in \cite{malyarenko}, in defining the notion of mean square continuity for
a random section, the naive approach
$$
\lim_{y\to x} \E [\| T_x - T_y \|^2] =0
$$
is not immediately meaningful as $T_x$ and $T_y$ belong to different
(even if possibly isomorphic) spaces (i.e. the fibers).
A similar difficulty arises for the definition of strict sense invariance w.r.t. the action of a topological group on the bundle.
We shall investigate these points below.
A case of particular interest to us is that of the homogeneous (or twisted) vector bundles. Let $G$ be a compact group, $K$ a closed subgroup and $\X=G/K$.
Given an irreducible unitary representation $\tau$ of $K$ on the complex (finite-dimensional) Hilbert space $H$,
$K$ acts on the Cartesian product $G\times H$ by the action
$$
k(g,z):= (gk, \tau(k^{-1})z)\ .
$$
Let $G\times_\tau H=\lbrace \theta(g,z) : (g,z) \in G\times H
\rbrace$ denote the quotient space of the orbits $\theta(g,z) =
\lbrace (gk, \tau(k^{-1})z) : k\in K \rbrace$ under the above
action. $G$ acts on $G\times_\tau H$ by
\begin{equation}\label{action}
h \theta(g,z) := \theta(hg,z)\ .
\end{equation}
The map $G\times H \to \X: (g,z)\to gK$ is constant on the
orbits $\theta(g,z)$ and induces the projection
$$
G\times_\tau H\ni\theta(g,z)\enspace\mathop{\to}^{\pi_\tau\,}\enspace gK\in \X
$$
which is a continuous $G$-equivariant map. $\xi_\tau=
(G\times_\tau H, \pi_\tau, \X)$ is a $G$-vector bundle: it is the \emph{homogeneous vector bundle associated to
the representation $\tau$}. The fiber
$\pi^{-1}_\tau(x)$ is isomorphic to $H$ for every $x\in \X$ (see
\cite{B-D}). More precisely, for $x\in\X$ the fiber $\pi_\tau^{-1}(x)$
is the set of
elements of the form $\th(g,z)$ such that $gK=x$. We define the scalar
product of two such elements as
\begin{equation}\label{prod scalare}
\langle \th(g,z),\th(g,w)\rangle_{\pi_\tau^{-1}(x)}=\langle z,w\rangle_{H}
\end{equation}
for some fixed $g\in G$ such that $gK=x$, as it is immediate that this
definition does not depend on the choice of such a $g$.
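Indeed, if $g'=gk$, $k\in K$, is another element with $g'K=x$, then $\th(g,z)=\th(g',\tau(k^{-1})z)$ and $\th(g,w)=\th(g',\tau(k^{-1})w)$, so that, $\tau$ being unitary,
$$
\langle \tau(k^{-1})z,\tau(k^{-1})w\rangle_{H}=\langle z,w\rangle_{H}\ .
$$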
Given a function $f:G \to H$ satisfying
\begin{equation}\label{funzioni di tipo W}
f(gk)=\tau(k^{-1})f(g)\ ,
\end{equation}
then to it we can associate the section of $\xi_\tau$
\begin{equation}\label{proiezione}
u(x)=u(gK)=\theta(g,f(g))
\end{equation}
as again this is a good definition, not depending
on the choice of $g$ in the coset $gK$. The functions $f$ satisfying
(\ref{funzioni di tipo W}) are called right $K$-covariant functions
of type $\tau$ (\emph{functions of type $\tau$} from now on).
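Again the verification that (\ref{proiezione}) is a good definition is immediate: if $g'=gk$ with $k\in K$, then, by (\ref{funzioni di tipo W}),
$$
\theta(g',f(g'))=\theta(gk,\tau(k^{-1})f(g))=\theta(g,f(g))\ ,
$$
so that (\ref{proiezione}) does not depend on the representative of the coset $gK$.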
More interestingly, also the converse is true.
\begin{prop}\label{pullback-s-deterministic}
Given a section $u$ of $\xi_\tau$, there exists a unique function
$f$ of type $\tau$ on $G$ such that $u(x)=\theta(g,f(g))$ where
$gK=x$. Moreover $u$ is continuous if and only if
$f:G\to H$ is continuous.
\end{prop}
\begin{proof} Let $(g_x)_{x\in\X}$ be
a measurable selection such that
$g_xK=x$ for every $x\in\X$. If $u(x)=\theta(g_x, z)$, then define
$f(g_x):=z$ and extend the definition to the elements of the coset
$g_xK$ by $f(g_xk):= \tau(k^{-1})z$; it is easy to check that such a
$f$ is of type $\tau$, satisfies (\ref{proiezione}) and is the unique
function of type $\tau$ with this property.
Note that the continuity of $f$ is equivalent to the continuity of
the map
\begin{equation}\label{mappa1}
F: g\in G \to (g,f(g))\in G\times H\ .
\end{equation}
Denote $pr_1: G\to \X$ the canonical projection onto the quotient space $\X$
and $pr_2: G\times H \to G\times_\tau H$ the canonical projection
onto the quotient space $G\times_\tau H$. It is immediate that
\begin{equation*}\label{mappa2}
pr_2 \circ F = u \circ pr_1\ .
\end{equation*}
Therefore $F$ is continuous if and only if $u$ is continuous,
the projections $pr_1$ and $pr_2$ being continuous open mappings.
\end{proof}
We shall again call $f$ the \emph {pullback} of $u$.
Remark that, given two sections $u_1, u_2$ of $\xi_\tau$ and their respective pullbacks $f_1,f_2$, we have
\begin{equation}\label{prod_scalare}
\langle u_1, u_2 \rangle := \int_\X \langle u_1(x),
u_2(x)\rangle_{\pi_\tau^{-1}(x)}\,dx=
\int_G \langle f_1(g),f_2(g)\rangle_H\,dg
\end{equation}
so that $u\longleftrightarrow f$ is an isometry between the space $L^2(\xi_\tau)$ of
the square integrable sections of $\xi_\tau$ and the space $L^2_\tau(G)$ of the square
integrable functions of type $\tau$.
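The last equality in (\ref{prod_scalare}) follows by remarking that, $\tau$ being unitary, the function $g\mapsto \langle f_1(g),f_2(g)\rangle_H$ is right $K$-invariant:
$$
\langle f_1(gk),f_2(gk)\rangle_H=\langle \tau(k^{-1})f_1(g),\tau(k^{-1})f_2(g)\rangle_H=\langle f_1(g),f_2(g)\rangle_H\ ,
$$
so that it defines a function on $\X=G/K$, whose integral w.r.t. the (suitably normalized) invariant measure of $\X$ coincides with its integral over $G$.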
The left regular action of $G$ on $L^2_\tau(G)$ (also called the
\emph{representation of $G$ induced by $\tau$})
$$
L_h f(g) := f(h^{-1}g)
$$
can be equivalently realized on $L^2(\xi_\tau)$ by
\begin{equation}\label{indotta}
U_h u(x) = h u(h^{-1}x)\ .
\end{equation}
We have
$$
U_h u(gK) = h u(h^{-1}gK) = h \theta( h^{-1} g, f(h^{-1}g)) =
\theta(g, f(h^{-1}g)) = \theta(g, L_h f(g))
$$
so that, thanks to the uniqueness of the pullback function:
\begin{prop}\label{action-pullback}
If $f$ is the pullback function of the section $u$ then $L_hf$ is
the pullback of the section $U_hu$.
\end{prop}
Let $T=(T_x)_{x\in \X}$ be
a random section of the homogeneous vector bundle $\xi_\tau$. As, for
fixed $\omega$, $x\mapsto T_x(\omega)$ is a section of $\xi_\tau$, by Proposition
\ref{pullback-s-deterministic} there
exists a unique function $g\mapsto X_g(\omega)$ of type $\tau$ such that
$T_{gK}(\omega)=\theta(g, X_g(\omega))$. We refer to the
random field $X=(X_g)_{g\in G}$ as the \emph{pullback random field
of $T$}. It is a random field on $G$ of type $\tau$, i.e. $X_{gk}(\omega)=\tau(k^{-1})X_g(\omega)$ for each $\omega$.
Conversely every random field $X$ on $G$ of type $\tau$ uniquely defines a
random section of $\xi_\tau$ whose pullback random field is $X$. It is immediate that
\begin{prop}\label{prop-pull1}
Let $T$ be a random section of $\xi_\tau$.
a) $T$ is a.s. square integrable if and only if its pullback random field $X$ is a.s. square
integrable.
b) $T$ is second order if and only if its pullback random field $X$ is second order.
c) $T$ is a.s. continuous if and only if its pullback random field $X$ is a.s. continuous.
\end{prop}
Proposition \ref{prop-pull1} shows that many properties of random sections of
the homogeneous bundle can be stated or investigated through the corresponding properties of
their pullbacks, which are just ordinary random fields to which the results of the previous
sections can be applied. A first instance is the following definition.
\begin{definition}\label{definizione di continuita in media quadratica}
The random section $T$ of the homogeneous vector bundle $\xi_\tau$ is
said to be \emph{mean square continuous} if its pullback random
field $X$ is mean square continuous, i.e.,
\begin{equation}
\lim_{h\to g} \E [ \| X_h - X_g \|_H^2 ]=0\ .
\end{equation}
\end{definition}
Recalling Definition \ref{invarian}, we state now the definition of
strict-sense invariance.
Let $T$ be an a.s. square integrable random section of
$\xi_\tau$.
For every $g\in G$, the ``rotated'' random section
$T^g$ is defined as
\begin{equation}
T^g_x(\cdot):= g^{-1} T_{gx}(\cdot)
\end{equation}
which is still an a.s. square integrable random section of $\xi_\tau$. For any square integrable
section $u$ of $\xi_\tau$, let
\begin{equation}
T(u):= \int_{\X} \langle T_x, u(x)\rangle_{\pi^{-1}(x)}\,dx\ .
\end{equation}
\begin{definition}\label{isotropia per sezioni aleatorie}
Let $T$ be an a.s. square integrable random section of the homogeneous vector bundle
$\xi_\tau$. It is said to be \emph{(strict-sense) $G$-invariant}
or \emph{isotropic} if and only if
for every choice of square integrable sections
$u_1, u_2, \dots, u_m$ of $\xi_\tau$, the random vectors
\begin{equation}\label{= in legge}
\bigl( T(u_1), \dots, T(u_m) \bigr)\quad
\mbox{and} \quad\bigl( T^g(u_1), \dots, T^g(u_m) \bigr)=\bigl( T( U_g u_1), \dots, T( U_g u_m) \bigr)
\end{equation}
have the same law for every $g\in G$.
\end{definition}
\begin{prop}\label{pullback-invariant}
Let $T$ be an a.s. square integrable random section of $\xi_\tau$ and let
$X$ be its pullback random field on $G$. Then $X$ is isotropic
if and only if $T$ is an isotropic random section.
\end{prop}
\begin{proof}
Let us denote $X(f) := \int_G \langle X_g, f(g) \rangle_H\,dg$.
Thanks to Proposition \ref{action-pullback} the equality in law (\ref{= in legge}) is equivalent to the requirement that
for every choice of square integrable functions $f_1, f_2, \dots, f_m$ of type $\tau$ (i.e. the pullbacks of corresponding sections of $\xi_\tau$), the random vectors
\begin{equation}\label{pullback invariante}
( X(f_1), \dots, X(f_m) )\quad \mbox{and}\quad( X(L_g f_1), \dots, X(L_g f_m) )
\end{equation}
have the same law for every $g\in G$.
As $L^2_\tau(G)$ is a closed subspace of $L^2(G)$ and is invariant under the left regular
representation of $G$, every square integrable function $f:G\to H$
can be written as the sum $f^{(1)}+ f^{(2)}$
with $f^{(1)}\in L^2_\tau(G)$, $ f^{(2)}\in L^2_\tau(G)^{\bot}$. As the paths of the random field $X$ are of type $\tau$ we have $X(f^{(2)})=X(L_h f^{(2)})=0$ for every $h\in G$ so that
\begin{equation}
X(f)=X(f^{(1)})\quad \mbox{and}\quad X(L_h f) = X(L_h f^{(1)})\ .
\end{equation}
Therefore (\ref{pullback invariante}) implies that, for every choice $f_1, f_2, \dots, f_m$
of square integrable $H$-valued functions on $G$, the random vectors
\begin{equation}
( X(L_h f_1), \dots, X(L_h f_m) )\quad \mbox{and} \quad ( X(f_1), \dots, X(f_m) )
\end{equation}
have the same law for every $h\in G$, so that the pullback random field $X$ is a strict-sense isotropic random field on $G$.
\end{proof}
As a consequence of Proposition \ref{Mean square continuity of invariant}
we have
\begin{cor} Every a.s.
square integrable, second order and isotropic random section $T$ of
the homogeneous vector bundle $\xi_\tau$ is mean square
continuous.
\end{cor}
In order to make a comparison with the pullback approach developed above, we briefly recall
the approach to the theory of random fields in vector bundles introduced by Malyarenko in \cite{malyarenko}.
The main tool is the
scalar random field associated to the random section $T$ of $\xi=(E,p,B)$.
More precisely, it is the complex-valued random field $T^{sc}$ indexed by the elements $\eta\in E$ given by
\begin{equation}\label{scalar random field}
T^{sc}_{\eta} := \langle \eta, T_{b} \rangle_{p^{-1}(b)}, \; b\in B, \eta\in p^{-1}(b)\ .
\end{equation}
$T^{sc}$ is a scalar random field on $E$ and we can define $T$ to be mean square continuous
if and only if $T^{sc}$ is mean square continuous, i.e., if the map
\begin{equation}
E \ni \eta \mapsto T^{sc}_{\eta}\in L_\C^2(\P)
\end{equation}
is continuous.
Given a topological group $G$ acting with a continuous action
$(g,x)\mapsto gx, g\in G$ on the base space
$B$, an action of $G$ on $E$ is called associated if its restriction to any fiber $p^{-1}(x)$ is a linear isometry between
$p^{-1}(x)$ and $p^{-1}(gx)$. In our case of interest, i.e. the homogeneous vector
bundles $\xi_\tau=(G\times_\tau H, \pi_\tau, \X)$, we can consider the action defined in \paref{action}, which is obviously associated. We then say that $T$ is strict-sense $G$-invariant w.r.t. the action of $G$ on $B$ if the finite-dimensional distributions of $T^{sc}$ are invariant under the associated action \paref{action}. In the next statement we prove the equivalence of the two approaches.
\begin{prop}\label{noimal} The square integrable random section $T$ of the homogeneous bundle $\xi_\tau$ is mean square continuous (i.e. its pullback random field on $G$ is mean square continuous) if and only if the associated scalar random field $T^{sc}$ is mean square continuous. Moreover if $T$ is a.s. continuous then it is isotropic if and only if the associated scalar random field $T^{sc}$ is $G$-invariant.
\end{prop}
\begin{proof}
Let $X$ be the pullback random field of $T$. Consider the scalar random field on $G\times H$ defined as $X^{sc}_{(g,z)} := \langle z, X_g \rangle_H$. Let us denote $pr$ the projection $G\times H\to G\times_\tau H$: keeping in mind (\ref{prod scalare}) we have
\begin{equation}\label{2=}
T^{sc} \circ pr =X^{sc}\ ,
\end{equation}
i.e.
$$
T^{sc}_{\th(g,z)} (\omega) = X^{sc}_{(g,z)}(\omega)
$$
for every $(g,z)\in G\times H$, $\omega\in \Omega$.
Therefore the map $G\times_\tau H \ni \theta(g,z) \mapsto T^{sc}_{\theta(g,z)}\in L^2_\C(P)$ is continuous if and only if the map $G\times H \ni (g,z) \mapsto X^{sc}_{(g,z)}\in L^2_\C(P)$ is continuous, the projection $pr$ being open and continuous.
Let us show that the continuity of the latter map is equivalent to the mean square continuity of the pullback random field $X$, which will allow us to conclude.
The proof of this equivalence is inspired by the one of a similar statement in \cite{malyarenko}, \S 2.2.
Actually consider an orthonormal basis $\lbrace v_1, v_2, \dots, v_{\dim\tau} \rbrace$ of $H$, and
denote by $X^i=\langle X,v_i\rangle$ the $i$-th component of $X$ w.r.t. the above basis. Assume that the map $G\times H \ni (g,z) \mapsto X^{sc}_{(g,z)}\in L^2_\C(P)$ is continuous; then the random field on $G$
$$
g\mapsto X^{sc}_{(g,v_i)}=\overline{ X_g ^i}
$$
is continuous for every $i=1, \dots, \dim\tau$.
As $\E[| \overline{X_g^i}-\overline{X_h^i}|^2]=\E [| X_g^i - X_h^i|^2]$,
$$
\lim_{h\to g} \E [\| X_g - X_h \|_H^2] =\lim_{h\to g} \sum_{i=1}^{\dim\tau}
\E [| X_g^i - X_h^i|^2] = 0\ .
$$
Conversely, suppose that the pullback random field $X$ is mean square continuous.
Then for each $i=1, \dots, \dim\tau$
$$
\dlines{
0 \le \limsup_{h\to g} \E[ | X_g^i - X_h^i|^2] \le \lim_{h\to g} \E[ \| X_g - X_h \|^2_H ]= 0
}$$
so that the maps $G\ni g\mapsto X_g ^i \in L^2_\C(\P)$ are continuous.
Therefore
$$\dlines{
\lim_{(h,w) \to (g,z)} \E [| X^{sc}_{(h,w)} - X^{sc}_{(g,z)} |^2] \le 2 \sum_{i=1}^{\dim\tau} \lim_{(h,w) \to (g,z)} \E[|w_i X_h^i - z_i X_g^i |^2] = 0\ ,
}$$
$a_i$ denoting the $i$-th component of $a\in H$.
Assume that $T$ is a.s. continuous and let us show that it is isotropic if and only if the associated scalar random field $T^{sc}$ is $G$-invariant.
Note first that, by \paref{2=} and $(T^{sc})^h\circ pr=(X^{sc})^h$ for any $h\in G$, $T^{sc}$ is $G$-invariant if and only if $X^{sc}$ is $G$-invariant.
Actually if the random fields $X^{sc}$ and its rotated $(X^{sc})^h$
have the same law, then $T^{sc}$ and $(T^{sc})^h$ have the same law, and
vice versa.
Now recalling the definition of $X^{sc}$, it is obvious that $X^{sc}$ is $G$-invariant
if and only if $X$ is isotropic.
\end{proof}
\section{Random sections of the homogeneous line bundles on $\cS^2$}
We now concentrate on the case of the homogeneous line bundles on
$\X=\cS^2$ with $G=SO(3)$ and $K\cong SO(2)$. For every character $\chi_s$ of $K$, $s\in\Z$, let $\xi_s$ be the corresponding homogeneous vector bundle on $\cS^2$, as
explained in the previous section.
Given the action of $K$ on $SO(3)\times \mathbb{C}$:
$k(g,z)=(gk, \chi_s(k^{-1})z)$, $k\in K$, let $\mE_s:=SO(3)\times_s\C$ be the space of the orbits
$\mE_s=\lbrace \theta(g,z), (g,z)\in G\times \mathbb{C}\rbrace$
where $\theta(g,z) = \lbrace (gk, \chi_s(k^{-1})z); k\in K \rbrace$.
If $\pi_s: \mE_s \ni\theta(g,z)\to gK\in \cS^2$,
$\xi_s=(\mE_s, \pi_s, \cS^2)$ is a homogeneous line bundle (each fiber $\pi_s^{-1}(x)$ is isomorphic to $\C$ as a vector space).
The space $L^2(\xi_s)$ of the square integrable sections of
$\xi_s$ is therefore isomorphic to the space $L^2_s(SO(3))$ of the
square integrable \emph{functions of type $s$}, i.e. such that, for every $g\in G$ and $k \in K$,
\begin{equation}
f(gk)=\chi_s(k^{-1})f(g)=\overline{\chi_s(k)}f(g)\ .
\end{equation}
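Note that for $s=0$ the condition above reduces to right $K$-invariance,
$$
f(gk)=\chi_0(k^{-1})f(g)=f(g)\ ,
$$
so that the functions of type $0$ can be identified with the functions on $\cS^2=SO(3)/SO(2)$.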
Let us investigate the Fourier expansion of a function of type $s$.
\begin{prop}\label{infinite-linear}
Every function of type $s$ is an infinite linear combination of the
functions appearing in the $(-s)$-columns of Wigner's $D$ matrices
$D^\ell$, $\ell \ge |s|$. In particular functions of type $s$ and type
$s'$ are orthogonal if $s\not=s'$.
\end{prop}
\begin{proof}
For every $\ell \ge |s|$, let $\widehat f(\ell)$ be as in (\ref{coefficiente ellesimo}). We have,
for every $k\in K$,
\begin{equation}\label{leading}
\begin{array}{c}
\displaystyle\widehat f(\ell)
= \sqrt{2\ell+1}\,\int_{SO(3)}f(g)D^\ell(g^{-1})\,dg =\\
\sqrt{2\ell+1}\,\chi_s(k)\int_{SO(3)} f(gk) D^\ell(g^{-1})\,dg=\\
\displaystyle=\sqrt{2\ell+1}\,\chi_s(k)\int_{SO(3)} f(g) D^\ell(kg^{-1})\,dg
=\\=\sqrt{2\ell+1}\,\chi_s(k)D^\ell(k)\int_{SO(3)} f(g)
D^\ell(g^{-1})\,dg
=\chi_s(k)D^\ell(k)\widehat f(\ell) \ ,\\
\end{array}
\end{equation}
i.e. the image of $\widehat f(\ell)$ is contained in the subspace
$H_{\ell}^{(-s)} \subset H_\ell$ of the vectors such that $D^\ell(k)v=
\chi_{-s}(k)v$ for every $k\in K$. In particular $\widehat f(\ell) \ne 0$ only if
$\ell \ge |s|$, as for every $\ell$ the restriction to $K$ of the representation
$D^\ell$
is unitarily equivalent to the direct sum of the representations $\chi_m$,
$m=-\ell, \dots, \ell$ as recalled at the end of \S2.
Let $\ell \ge |s|$ and $v_{-\ell}, v_{-\ell + 1}, \dots, v_{\ell}$ be the orthonormal basis of $H_\ell$
as in (\ref{restrizione}), in other words $v_m$ spans $H_{\ell}^{m}$, the
one-dimensional subspace of $H_\ell$
formed by the vectors that transform under the action of $K$ according to the
representation $\chi_{m}$.
It is immediate that
\begin{align}
\widehat f(\ell)_{i,j} = \langle \widehat f(\ell) v_j, v_i \rangle=0\ ,
\end{align}
unless $i=-s$. Thus all the Fourier coefficients of $f$ vanish except those corresponding to
the $(-s)$-th column of the matrix representation $D^\ell$, and the Peter-Weyl expansion (\ref{PW SO(3)}) of $f$ becomes, in $L^2(SO(3))$,
\begin{equation}\label{sviluppo per una funzione di tipo s}
f=\sum_{\ell \ge |s|} \sqrt{2\ell + 1} \sum_{m=-\ell}^{\ell}
\widehat f(\ell)_{-s,m} D^\ell_{m,-s}\ .
\end{equation}
\end{proof}
We introduced the spherical harmonics in (\ref{armoniche sferiche1}) from the entries $D^{\ell}_{m,0}$ of the central columns of Wigner's $D$ matrices.
Analogously to the case of $s=0$, for any $s\in \Z$ we define for $\ell \ge |s|, m=-\ell, \dots, \ell$
\begin{equation}\label{armoniche di spin}
_{-s} Y_{\ell,m} (x) := \theta \Bigl( g, \sqrt{\frac{2\ell +1}{4\pi}}\, \overline{D^\ell_{m,s}(g)} \Bigr)\ , \qquad x=gK\in \cS^2\ .
\end{equation}
$ _{-s} Y_{\ell,m}$
is a section of $\xi_s$ whose pullback function (up to a factor)
is $g\mapsto D^{\ell}_{-m,-s}(g)$ (recall the relation
$\overline{D^\ell_{m,s}(g)} = (-1)^{m-s} D^{\ell}_{-m,-s}(g)$, see e.g. \cite{dogiocam}, p.~55).
Therefore thanks to Proposition \ref{infinite-linear} the sections
$_{-s} Y_{\ell,m},\, \ell \ge |s|, m=-\ell, \dots, \ell$, form an
\emph{orthonormal} basis of $L^2(\xi_s)$. Actually, recalling
(\ref{prod scalare}) and considering the total mass equal to $4\pi$ on the sphere and to $1$ on $SO(3)$, we have
$$
\int_{\cS^2}\, _{-s} Y_{\ell,m}\, \overline{_{-s}Y_{\ell',m'}}\,dx =
4\pi \int_{SO(3)} \sqrt{\frac{2\ell +1}{4\pi}}\, \overline{D^\ell_{m,s}(g)}\,
\sqrt{\frac{2\ell' +1}{4\pi}}\,D^{\ell'}_{m',s}(g)\,dg = \delta_{\ell'}^{\ell}\delta_{m'}^{m}\ .
$$
The sections $_{-s} Y_{\ell,m},\, \ell \ge |s|, m=-\ell, \dots, \ell$ in
(\ref{armoniche di spin}) are called \emph{spin $-s$ spherical harmonics}.
Recall that the spaces $L^2_s(SO(3))$ and $L^2(\xi_s)$ are isometric through
the identification $u \longleftrightarrow f$ between a section $u$ and its
pullback $f$ and the definition of the scalar product on $L^2(\xi_s)$ in
(\ref{prod_scalare}).
Proposition \ref{infinite-linear} can be restated as follows.
\noindent \emph{Every square integrable section $u$ of the homogeneous line
bundle $\xi_s=(\mE_s, \pi_s, \cS^2)$ admits a Fourier expansion in terms of spin
$-s$ spherical harmonics of the form
\begin{equation}
u(x) = \sum_{\ell \ge |s|} \sum_{m=-\ell}^{\ell} u_{\ell,m}\, _{-s} Y_{\ell,m}(x)\ ,
\end{equation}
where $u_{\ell,m} := \langle u,\, _{-s} Y_{\ell,m} \rangle_2$, the above series converging in $L^2(\xi_s)$.}
In particular we have the relation
$$
\dlines{
u_{\ell,m} = \int_{\cS^2} u(x)\, \overline{_{-s} Y_{\ell,m}(x)} \,dx =
4\pi \int_{SO(3)} f(g) \sqrt{ \frac{2\ell +1}{4\pi}} D^\ell_{m,s}(g)\,dg =\cr
(-1)^{s-m} \sqrt{4\pi (2\ell+1)} \int_{SO(3)} f(g) \overline{ D^\ell_{-m,-s}(g)}\,dg = (-1)^{s-m} \sqrt{4\pi}\, \widehat f(\ell)_{-s,-m}\ . }
$$
\begin{definition}
Let $s\in \Z$. A square integrable function $f$ on $SO(3)$ is said
to be \emph{bi-$s$-associated} if for every $g\in SO(3), k_1, k_2 \in K$,
\begin{equation}
f(k_1 g k_2) = \chi_s (k_1) f(g) \chi_s(k_2)\ .
\end{equation}
\end{definition}
Of course, for $s=0$, bi-$0$-associated is equivalent to bi-$K$-invariant.
We are particularly interested in bi-$s$-associated functions
as explained in the remark below.
\begin{remark}\label{associate-bi} \rm Let $X$ be an isotropic random field of type $s$ on
$SO(3)$. Then its associate positive definite function $\phi$ is
bi-$(-s)$-associated. Actually, assuming for simplicity that $X$ is
centered, as $\phi(g)=\E[X_g\overline{X_e}]$, we have, using
invariance on $k_1$ and type $s$ property on $k_2$,
$$\displaylines{
\phi(k_1gk_2)=\E[X_{k_1gk_2}\overline{X_e}]=
\E[X_{gk_2}\overline{X_{k_1^{-1}}}]=\cr
=
\chi_s(k_1^{-1})\E[X_g\overline{X_e}]\chi_s(k_2^{-1})=
\chi_{-s}(k_1)\phi(g)\chi_{-s}(k_2)\ .
}$$\qed
\end{remark}
Let us investigate the Fourier expansion of a bi-$s$-associated
function $f$: note first that a bi-$s$-associated function is also a
function of type $(-s)$, so that $\widehat f(\ell) \ne 0$
only if $\ell \ge |s|$ as above and all its rows vanish but for
the $s$-th. A repetition of the computation
leading to \paref{leading} gives easily that
$$
\widehat f(\ell)=\chi_{-s}(k)\widehat f(\ell)D^\ell(k)
$$
so that the only non-vanishing entry of the matrix $\widehat f(\ell)$ is
the $(s,s)$-th.
Therefore the Fourier expansion of a bi-$s$-associated function $f$ is
\begin{equation}\label{sviluppo per una funzione bi-s-associata}
f= \sum_{\ell \ge |s|} \sqrt{2\ell + 1}\, \alpha_\ell
D^\ell_{s,s}\ ,
\end{equation}
where we have set $\alpha_\ell=\widehat f(\ell)_{s,s}$.
Now let $T$ be an a.s. square integrable random section of the line
bundle $\xi_s$ and $X$ its pullback random field. Recalling that $X$
is a random function of type $s$ and its sample paths are a.s.
square integrable, we easily obtain the stochastic Fourier
expansion of $X$ by applying (\ref{sviluppo per una funzione di tipo s})
to the functions $g\mapsto X_g(\omega)$. Actually define, for every $\ell \ge |s|$, the random operator
\begin{equation}
\widehat X(\ell)= \sqrt{2\ell + 1}\int_{SO(3)} X_g D^\ell(g^{-1})\,dg\ .
\end{equation}
The basis of $H_\ell$ being fixed as above and recalling (\ref{sviluppo per una funzione di tipo s}), we obtain, a.s. in $L^2(SO(3))$,
\begin{equation}\label{sviluppo di Fourier per X}
X_g =\sum_{\ell \ge |s|} \sqrt{2\ell + 1}
\sum_{m=-\ell}^{\ell} \widehat X(\ell)_{-s,m} D^\ell_{m,-s}(g)\ .
\end{equation}
If $T$ is isotropic, then by Definition
\ref{isotropia per sezioni aleatorie} its pullback
random field $X$ is also isotropic in the sense of Definition
\ref{invarian}. The following is a consequence of well known general properties of the random
coefficients of invariant random fields (see \cite{balditrapani} Theorem 3.2 or \cite{malyarenko} Theorem 2).
\begin{prop}\label{teorema di struttura}
Let $s\in \Z$ and $\xi_s=(\mE_s, \pi_s, \cS^2)$ be the homogeneous
line bundle on $\cS^2$ induced by the $s$-th linear character $\chi_s$ of $SO(2)$.
Let $T$ be a random section
of $\xi_s$ and $X$ its pullback random field. If $T$ is second order and strict-sense isotropic, then the Fourier coefficients $\widehat X(\ell)_{-s,m}$ of $X$
in its stochastic expansion \paref{sviluppo di Fourier per X}
are pairwise orthogonal and the variance, $c_\ell$, of $\widehat X(\ell)_{-s,m}$ does not depend on $m$.
Moreover $\E [ \widehat X(\ell)_{-s,m} ]=0$ unless $\ell=0, s=0$.
\end{prop}
For the random field $X$ of Proposition \ref{teorema di struttura} we have
immediately
\begin{equation}\label{conv}
\E[|X_g|^2]=\sum_{\ell \ge |s|} (2\ell + 1) c_\ell < +\infty\ .
\end{equation}
The convergence of the series above is also a consequence of
Theorem \ref{gangolli-true}, as the positive definite function $\phi$ associated to
$X$ is given by
$$
\dlines{
\phi(g)=\E[X_g\overline{X_e}]=\sum_{\ell \ge |s|} (2\ell + 1)c_\ell
\sum_{m=-\ell}^{\ell} D^\ell_{m,-s}(g)\overline{D^\ell_{m,-s}(e)}=\cr
=\sum_{\ell \ge |s|} (2\ell + 1)c_\ell
\sum_{m=-\ell}^{\ell} D^\ell_{m,-s}(g)D^\ell_{-s,m}(e)=
\sum_{\ell \ge |s|} (2\ell + 1)c_\ell D^\ell_{-s,-s}(g)\ . \cr
}
$$
\begin{remark} \rm
Let $X$ be a type $s$ random field on $SO(3)$ with $s\not=0$. Then the
relation $X_{gk}=\chi_s(k^{-1})X_{g}$ implies that $X$ cannot be real
(unless it is vanishing). If in addition it were Gaussian,
then the identity in law between $X_g$ and $X_{gk}=\chi_s(k^{-1})X_{g}$
would imply that, for every $g\in G$, $X_g$ is a complex Gaussian r.v.
\end{remark}
\section{Construction of Gaussian isotropic spin random fields}
We now give an extension of the construction of \S\ref{sec4}
and prove that every complex Gaussian random section of a homogeneous
line bundle on $\cS^2$ can be obtained in this way, a result very
similar to Theorem \ref{real-general}.
Let $s\in \Z$ and let $\xi_s$ be the homogeneous line bundle associated to
the representation $\chi_s$.
Let $(X_n)_n$ be a sequence of i.i.d. standard Gaussian r.v.'s on some probability space $(\Omega, \F, \mathbb{P})$, and
$\mathscr{H}\subset L^2(\Omega,\F,\P)$ the \emph{complex} Hilbert space
generated by $(X_n)_n$.
Let $(e_n)_n$ be an orthonormal basis of $L^2(SO(3))$ and
define an isometry $S$ between $L^2(SO(3))$ and
$\mathscr{H}$ by
$$
L^2(SO(3))\ni \sum_k \alpha_k e_k\enspace \to\enspace \sum_k \alpha_k X_k \in \mathscr{H}\ .
$$
For $f\in L^2(SO(3))$ we define a random field $X^f$ on $SO(3)$ by
\begin{align}\label{spin-def}
X^f_g=S(L_gf)\ ,
\end{align}
$L$ denoting as usual the left regular representation.
\begin{prop}\label{propspin=}
If $f$ is a square integrable bi-$s$-associated function
on $SO(3)$, then $X^f$ defined in \paref{spin-def} is a second order,
square integrable Gaussian isotropic random field of type $s$. Moreover it is
complex Gaussian.
\end{prop}
\begin{proof}
It is immediate that $X^f$ is second order as
$\E[|X^f_g|^2]=\Vert L_gf\Vert^2_2=\Vert f\Vert^2_2$. It
is of type $s$ as for every $g\in SO(3)$ and $k\in K$,
$$
X^f_{gk}=S(L_{gk}f)=\chi_s(k^{-1})S(L_gf)=\chi_s(k^{-1})X^f_g\ .
$$
Let us prove strict-sense invariance. Actually, $S$ being an isometry,
for every $h\in SO(3)$
$$
\dlines{
\E[ X^f_{hg}\overline{X^f_{hg'}}] = \E[ S(L_{hg}f)\overline{S(L_{hg'}f)}]
= \langle L_{hg}f, L_{hg'}f \rangle_2 =
\langle L_{g}f, L_{g'}f \rangle_2
= \E[X^f_{g}\overline{X^f_{g'}}]\ .
}$$
Therefore the random fields $X^f$ and its rotated $(X^f)^h$
have the same covariance
kernel. Let us prove that they also have the same relation function.
Actually we have, for every $g,g',h\in SO(3)$,
\begin{equation}\label{complex-spin1}
\E[X^f_{hg}X^f_{hg'}]=\E[ S(L_{hg}f){S(L_{hg'}f)}]
= \langle L_{hg}f, \overline {L_{hg'}f} \rangle_2 =\langle L_{g}f, \overline{L_{g'}f} \rangle_2=0
\end{equation}
as the function $\overline{L_{g'}f}$ is bi-$(-s)$-associated and therefore of
type $s$ and orthogonal to $L_{g}f$ which is of type $-s$ (orthogonality of functions
of type $s$ and $-s$ is a consequence of Proposition \ref{infinite-linear}).
In order to prove that $X^f$ is complex Gaussian we must show that
for every $\psi\in L^2(SO(3))$, the r.v.
$$
Z=\int_{SO(3)}X^f_g\psi(g)\, dg
$$
is complex Gaussian. As $Z$ is Gaussian by construction we must just prove that
$\E[Z^2]=0$. But as, thanks to \paref{complex-spin1}, $\E[X^f_{g}X^f_{g'}]=0$
$$\displaylines{
\E[Z^2]=\E\Bigl[\int_{SO(3)}\int_{SO(3)}X^f_gX^f_h\psi(g)\psi(h)\,dg dh\Bigr]=\cr
=
\int_{SO(3)}\int_{SO(3)}\E[X^f_gX^f_h]\psi(g)\psi(h)\,dg dh=0\ .
}$$
\end{proof}
Let us investigate the stochastic Fourier expansion of
$X^f$.
Let us consider first the random field $X^\ell$ associated to
$f=D^\ell_{s,s}$. Recall first that the r.v. $Z=S(D^\ell_{s,s})$
has variance $\E[|Z|^2]=\Vert D^\ell_{s,s}\Vert_2^2=(2\ell+1)^{-1}$ and that
$\overline{D^\ell_{m,s}}=(-1)^{m-s}D^\ell_{-m,-s}$. Therefore
$$
\dlines{
X^\ell_g =S(L_g D^\ell_{s,s})=
\sum_{m=-\ell}^{\ell} S(D^\ell_{m,s}) D^\ell_{s,m}(g^{-1})=\cr
=\sum_{m=-\ell}^{\ell} S(D^\ell_{m,s})
\overline{D^\ell_{m,s}(g)}=
\sum_{m=-\ell}^{\ell} S(D^\ell_{m,s}) (-1)^{m-s}D^\ell_{-m,-s}(g)\ .\cr
}
$$
Therefore, reindexing $m\to -m$ in the last sum, the r.v.'s
$$
a_{\ell,m}=\sqrt{2\ell+1}\,(-1)^{m+s}\, S(D^\ell_{-m,s})
$$
are complex Gaussian, independent and with variance $\E[|a_{\ell,m}|^2]=1$ and
we have the expansion
\begin{equation}
X^\ell_g=
\frac{1}{\sqrt{2\ell+1}}\sum_{m=-\ell}^{\ell} a_{\ell, m} D^\ell_{m,-s}(g)\ .
\end{equation}
Note that the coefficients
$a_{\ell,m}$ are independent complex Gaussian r.v.'s.
This is a difference with respect to the case $s=0$,
where, for a real random field, the coefficients $a_{\ell,m}$ and $a_{\ell,-m}$ are not independent. Recall that random fields of type $s\ne 0$ on $SO(3)$ cannot be real.
In general, for a square integrable bi-$s$-associated function $f$
\begin{equation}\label{fs}
f=\sum_{\ell \ge |s|} \sqrt{2\ell + 1}\, \alpha_\ell D^\ell_{s,s}
\end{equation}
with
$$
\Vert f\Vert_2^2=\sum_{\ell \ge |s|} |\alpha_\ell|^2 < +\infty\ ,
$$
the Gaussian random field $X^f$ has the expansion
\begin{align}\label{sviluppo per X}
X^f_g&=\sum_{\ell \ge |s|} \alpha_\ell
\sum_{m=-\ell}^{\ell} a_{\ell, m} D^\ell_{m,-s}(g)\ ,
\end{align}
where $(a_{\ell,m} )_{\ell,m}$ are independent complex Gaussian
r.v.'s with zero mean and unit variance.
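As a quick numerical sanity check of the expansion above (a sketch in Python with \texttt{numpy} only; the truncation level and the coefficients $\alpha_\ell$ are arbitrary illustrative choices, not taken from the text), one can evaluate $X^f$ at $g=e$: since $D^\ell_{m,-s}(e)=\delta_{m,-s}$, only one term per $\ell$ survives, and $\E[|X^f_e|^2]=\sum_{\ell}|\alpha_\ell|^2=\Vert f\Vert_2^2$.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 2
ells = np.arange(abs(s), 20)           # truncate the series at ell = 19
alpha = 1.0 / (1.0 + ells)             # square-summable coefficients alpha_ell

# a_{ell,-s}: i.i.d. standard complex Gaussians, normalized so that E|a|^2 = 1
n_samples = 200_000
a = (rng.standard_normal((n_samples, ells.size))
     + 1j * rng.standard_normal((n_samples, ells.size))) / np.sqrt(2.0)

# At g = e only the m = -s terms survive (D^ell_{m,-s}(e) = delta_{m,-s}):
X_e = a @ alpha                        # X^f_e = sum_ell alpha_ell a_{ell,-s}

empirical = np.mean(np.abs(X_e) ** 2)
exact = np.sum(alpha ** 2)             # = ||f||_2^2
print(empirical, exact)                # agree up to Monte Carlo error
```

The same check at a generic $g$ would require evaluating the Wigner functions $D^\ell_{m,-s}(g)$, but by unitarity of $D^\ell$ one has $\sum_m|D^\ell_{m,-s}(g)|^2=1$ for every $g$, so the variance is the same.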
The associated positive definite function of $X^f$,
$\phi^f(g):=\E[X^f_g \overline{X^f_e}]$ is bi-$(-s)$-associated (Remark
\ref{associate-bi}) and
continuous (Theorem \ref{gangolli-true}) and, by \paref{convolution for phi},
is related to $f$ by
$$
\phi^f(g)=f \ast \breve f (g^{-1})\ .
$$
This allows us to derive its Fourier expansion:
$$
\dlines{
\phi^f(g)=f \ast \breve f (g^{-1})=
\int_{SO(3)} f(h) \overline{f(g h)}\,dh
=\cr
=\sum_{\ell, \ell' \ge |s|} \sqrt{2\ell + 1}\,\sqrt{2\ell'+1}\, \alpha_\ell \overline{\alpha_{\ell'}}
\int_{SO(3)} D^\ell_{s,s}(h) \overline{D^{\ell'}_{s,s}(g h)}\,dh=\cr
=\sum_{\ell, \ell' \ge |s|} \sqrt{2\ell + 1}\,\sqrt{2\ell'+1}\, \alpha_\ell \overline{\alpha_{\ell'}}
\sum_{j=-\ell}^{\ell} \underbrace{\Bigl (\int_{SO(3)} D^\ell_{s,s}(h)
\overline{D^{\ell'}_{j,s}(h)}\,dh\, \Bigr)}_{= \frac 1 {2\ell +1}
\delta_{\ell,\ell'} \delta_{s,j}} \overline{D^{\ell}_{s,j}(g)}=\cr
=\sum_{\ell \ge |s|} |\alpha_\ell |^2 D^\ell_{-s,-s}(g)\ .
}
$$
Note that in accordance with Theorem \ref{gangolli-true}, as
$|D^\ell_{-s,-s}(g)|\le D^\ell_{-s,-s}(e)=1$, the above series converges uniformly.
Conversely, it is immediate that, given a continuous positive definite
bi-$(-s)$-associated function $\phi$, whose expansion is
$$
\phi(g)=\sum_{\ell \ge |s|} |\alpha_\ell |^2 D^\ell_{-s,-s}(g)\ ,
$$
by choosing
$$
f(g)=\sum_{\ell \ge |s|} \sqrt{2\ell+1}\,\beta_\ell D^\ell_{s,s}(g)
$$
with $|\beta_\ell|=|\alpha_\ell|$,
one obtains a square integrable bi-$s$-associated function $f$ as in \paref{fs}
such that $\phi(g)=f*\breve f(g^{-1})$. Therefore, for every isotropic complex Gaussian random field
$X$ of type
$s$ on $SO(3)$ there exists a square integrable bi-$s$-associated
function $f$ such that $X$ and $X^f$ coincide in law. Such a function $f$ is not
unique.
From $X^f$ we can define a random section $T^f$ of the homogeneous
line bundle $\xi_s$ by
\begin{equation}
T^f_{x} := \theta(g, X^f_g)\ ,
\end{equation}
where $x=gK\in \cS^2$.
Now, as for the case $s=0$ that was treated in \S\ref{sec4}, it is natural to ask whether every Gaussian isotropic section of $\xi_s$ can be obtained in this way.
\begin{theorem}\label{teospin=}
Let $s\in \Z\setminus \{0\}$.
For every square integrable, isotropic, (complex) Gaussian
random section $T$ of the homogeneous $s$-spin line bundle $\xi_s$, there exists a square integrable and bi-$s$-associated function $f$ on $SO(3)$
such that
\begin{equation}\label{=}
T^f\enspace \mathop{=}^{law}\enspace T\ .
\end{equation}
Such a function $f$ is not unique.
\end{theorem}
\begin{proof}
Let $X$ be the pullback random field (of type $s$) of $T$. $X$ is of course
mean square continuous. Let $R$ be its covariance kernel. The function
$\phi(g):=R(g,e)$ is continuous,
positive definite and bi-$(-s)$-associated, therefore has the expansion
\begin{equation}\label{sviluppo per fi 2}
\phi=\sum_{\ell \ge |s|} \sqrt{2\ell + 1}\,\beta_\ell D^\ell_{-s,-s}\ ,
\end{equation}
where
$\beta_\ell=\sqrt{2\ell + 1}\,\int_{SO(3)} \phi(g)
\overline{D^\ell_{-s,-s}(g)}\,dg \ge 0$. Furthermore,
by Theorem \ref{gangolli-true}, the series
in (\ref{sviluppo per fi 2}) converges uniformly, i.e.
$$
\sum_{\ell \ge |s|} \sqrt{2\ell + 1}\,\beta_\ell < +\infty\ .
$$
Now set $f:=\sum_{\ell \ge |s|} (2\ell +1)^{3/4}\sqrt{\beta_\ell}\, D^\ell_{s,s}$, so that $\phi^f=\phi$.
The function $f$ is bi-$s$-associated by construction and belongs to $L^2(SO(3))$, as $\| f\|^2_{L^2(SO(3))} = \sum_{\ell \ge |s|} \sqrt{2\ell +1}\,\beta_\ell < +\infty$.
Note that every function $f$ of the form
$f=\sum_{\ell \ge |s|} (2\ell +1)^{3/4}\alpha_\ell D^\ell_{s,s}$ where
$\alpha_\ell$ is such that $|\alpha_\ell|^2=\beta_\ell$ satisfies
(\ref{=}) (and clearly every function $f$ such that $\phi(g)=f*\breve f(g^{-1})$ is of this form).
\end{proof}
\section{The connection with classical spin theory}
There are different approaches to the theory of random sections of homogeneous line bundles
on $\cS^2$ (see \cite{marinuccigeller}, \cite{bib:LS}, \cite{malyarenko}, \cite{bib:NP} e.g.). In this section
we compare them, taking into account, besides the one outlined
in \S 6, the classical Newman and Penrose spin theory (\cite{bib:NP})
later formulated in a more mathematical framework by Geller and Marinucci
(\cite{marinuccigeller}).
Let us first recall some basic notions about vector bundles. From now on $s\in \Z$. We shall state them for
the complex line bundle $\xi_s=(\mE_s, \pi_s, \cS^2)$, even though they can be immediately extended
to more general situations. An atlas of $\xi_s$ (see \cite{Husemoller} e.g.) can be defined as follows.
Let $U\subset \cS^2$ be an open set and $\Psi$ a diffeomorphism between $U$ and an open set of $\R^2$. A chart $\Phi$ of $\xi_s$ over $U$ is an isomorphism
\begin{equation}\label{def chart}
\Phi: \pi^{-1}_s(U)\goto \Psi(U)\times \C\ ,
\end{equation}
whose restriction to every fiber $\pi_s^{-1}(x)$ is a linear isomorphism onto $\C$. An atlas of $\xi_s$ is a family $( U_j, \Phi_j)_{j\in J}$ such that $\Phi_j$ is a chart of $\xi_s$ over $U_j$
and the family $(U_j)_{j\in J}$ covers $\cS^2$.
Given an atlas $( U_j, \Phi_j)_{j\in J}$, for each pair $i,j\in J$ there exists a unique map
(see \cite{Husemoller} Prop. 2.2) $\lambda_{i,j}: U_i\cap U_j \goto \C\setminus \{0\}$
such that for $x\in U_i\cap U_j, z\in \C$,
\begin{equation}
\Phi_i^{-1}(\Psi_i(x),z)=\Phi_j^{-1}(\Psi_j(x),\lambda_{i,j}(x)z)\ .
\end{equation}
The map $\lambda_{i,j}$ is called the \emph{transition function} from the chart
$(U_j,\Phi_j)$ to the chart $(U_i,\Phi_i)$.
Transition functions satisfy the cocycle conditions, i.e.
for every $i,j,l\in J$
\begin{equation}
\begin{array}{l}\label{cociclo}
\nonumber
\lambda_{j,j} = 1\ \ \ \ \qquad \text{on}\ \ \ U_j\ ,\cr
\lambda_{j,i} = \lambda_{i,j}^{-1}\ \ \ \ \quad \text{on}\ \ \ U_i\cap U_j\ ,\cr
\nonumber
\lambda_{l,i}\lambda_{i,j}=\lambda_{l,j}\ \ \ \text{on}\ \ \ U_i\cap U_j\cap U_l\ .
\end{array}
\end{equation}
Recall that we denote by $K\cong SO(2)$ the isotropy group of the north pole, as in \S 6 and \S 7, so that $\cS^2\cong SO(3)/K$.
We show now that an atlas of the line bundle $\xi_s$ is given as soon as we specify
a) an atlas $(U_j,\Psi_j)_{j\in J}$ of the manifold $\cS^2$,
b) for every $j\in J$ a family $(g_x^j)_{x\in U_j}$ of representative
elements $g_x^j\in G$ with $g_x^jK=x$.
\noindent More precisely, let $(g_x^j)_{x\in U_j}$ be as in b) such that $x\mapsto g^j_x$ is smooth for each $j\in J$.
Let $\eta \in \pi^{-1}_s(U_j)\subset \mE_s$ and $x:=\pi_s(\eta)\in U_j$,
therefore $\eta=\theta(g^j_x,z)$, for a unique $z\in \C$.
Define the chart $\Phi_j$ of $\xi_s$ over $U_j$ as
\begin{equation}\label{triv}
\Phi_j(\eta)= (\Psi_j(x), z)\ .
\end{equation}
Transition functions of this atlas are easily determined.
If $\eta \in \xi_s$ is such that $x=\pi_s(\eta)\in U_i\cap U_j$,
then $\Phi_j(\eta)=(\Psi_j(x), z_j)$, $\Phi_i(\eta)=(\Psi_i(x), z_i)$.
As $g_x^iK=g^j_xK$, there exists a unique $k=k_{i,j}(x)\in K$ such that
$g^j_x=g^i_xk$, so that
$\eta=\theta(g^i_x,z_i)=\theta(g^j_x, z_j)=\theta(g^i_xk, z_j)=\theta(g^i_x, \chi_s(k)z_j)$ which implies $z_i=\chi_s(k)z_j$.
Therefore
\begin{equation}\label{transizione}
\lambda_{i,j}(x)=\chi_s(k)\ .
\end{equation}
\bigskip
\noindent The spin $s$ concept was introduced by Newman and Penrose in \cite{bib:NP}:
\emph{a quantity $u$ defined on $\cS^2$ has spin weight $s$ if, whenever a tangent vector $\rho$
at any point $x$ on the sphere transforms under coordinate change by
$\rho'=e^{i \psi} \rho$, then the quantity at this point $x$ transforms
by $u'=e^{is\psi} u$}. Recently, Geller and Marinucci in \cite{marinuccigeller}
have put this notion in a more mathematical framework modeling such a $u$
as a section of a complex line bundle on $\cS^2$ and they describe this line bundle by giving charts and fixing transition functions to express the transformation laws under
changes of coordinates.
More precisely,
they define an atlas of $\cS^2$ as follows. They consider the open
covering $(U_R)_{R\in SO(3)}$ of $\cS^2$ given by
\begin{equation}\label{charts}
U_e := \cS^2 \setminus \lbrace x_0, x_1 \rbrace \qquad\text{and}\qquad U_R:=R U_e\ ,
\end{equation}
where $x_0$ is the north pole (as usual) and $x_1$ the south pole. On $U_e$ they consider the usual spherical coordinates $(\vartheta, \varphi)$, with $\vartheta$ the colatitude and $\varphi$ the longitude,
and on any $U_R$ the ``rotated'' coordinates $(\vartheta_R, \varphi_R)$, in such a way that $x$ in $U_e$ and $Rx$ in $U_R$ have the same coordinates.
The transition functions are defined as follows.
For each $x\in U_R$, let $\rho_R(x)$ denote the unit tangent vector at $x$, tangent to the circle $\vartheta_R = const$ and
pointing to the direction of increasing $\varphi_R$. If
$x\in U_{R_1}\cap U_{R_2}$, let $\psi_{R_2,R_1}(x)$ denote the (oriented) angle from
$\rho_{R_1}(x)$ to $\rho_{R_2}(x)$.
They prove that the quantity
\begin{equation}\label{triv-mg}
e^{is\psi_{R_2,R_1}(x)}
\end{equation}
satisfies the cocycle relations \paref{cociclo} so that this defines a unique (up to isomorphism)
structure of complex line bundle on $\cS^2$ having \paref{triv-mg} as transition
functions at $x$ (see \cite{Husemoller} Th. 3.2).
We shall prove that
this spin $s$ line bundle is the same as the
homogeneous line bundle $\xi_{-s}=(\mE_{-s}, \pi_{-s}, \cS^2)$.
To this aim we have just to check that, for a suitable choice of the atlas
$(U_R, \Phi_R)_{R\in SO(3)}$ of $\xi_{-s}$ of the type described in a), b) above,
the transition functions \paref{transizione} and \paref{triv-mg} are the same.
Essentially we have to determine the family $(g^R_x)_{R\in SO(3), x\in U_R}$ as in b).
Recall first that every rotation $R\in SO(3)$ can be realized as a composition of three rotations:
(i) a rotation by an angle $\gamma_R$ around the $z$ axis, (ii) a rotation by an angle
$\beta_R$ around the $y$ axis, and (iii) a rotation by an angle $\alpha_R$
around the $z$ axis (the so-called $z$-$y$-$z$ convention); $(\alpha_R, \beta_R, \gamma_R)$ are the {\it Euler angles} of $R$.
Therefore the rotation $R$ acts on the north pole $x_0$
of $\cS^2$ as mapping $x_0$ to the new location on $\cS^2$
whose spherical coordinates are $(\beta_R, \alpha_R)$ after rotating the tangent plane at $x_0$
by an angle $\gamma_R$. In each coset $\cS^2\ni x=gK$ let us choose the element $g_x\in SO(3)$ as the rotation such that $g_xx_0=x$ and having its third Euler angle $\gamma_{g_x}$ equal to $0$. Of course
if $x\ne x_0,x_1$, such $g_x$ is unique.
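To make the $z$-$y$-$z$ convention and the choice of the representatives $g_x$ concrete, here is a small numerical illustration (a sketch assuming \texttt{scipy} is available; in \texttt{scipy} the intrinsic sequence \texttt{'ZYZ'} with angles $(\alpha,\beta,\gamma)$ produces the matrix $R_z(\alpha)R_y(\beta)R_z(\gamma)$; the numerical angles are arbitrary):

```python
import numpy as np
from scipy.spatial.transform import Rotation

alpha, beta, gamma = 0.7, 0.4, 1.1   # arbitrary Euler angles (z-y-z convention)
x0 = np.array([0.0, 0.0, 1.0])       # north pole

# R = R_z(alpha) R_y(beta) R_z(gamma): scipy's intrinsic 'ZYZ' sequence
R = Rotation.from_euler('ZYZ', [alpha, beta, gamma])
x = R.apply(x0)

# R x0 has spherical coordinates (colatitude, longitude) = (beta, alpha),
# whatever the third angle gamma, since the first rotation R_z(gamma) fixes x0
expected = np.array([np.sin(beta) * np.cos(alpha),
                     np.sin(beta) * np.sin(alpha),
                     np.cos(beta)])
print(np.allclose(x, expected))      # True

# The representative g_x of the coset x = g_x K: same alpha, beta, but gamma = 0
g_x = Rotation.from_euler('ZYZ', [alpha, beta, 0.0])
print(np.allclose(g_x.apply(x0), x)) # True: g_x and R move x0 to the same point
```

Since $R_z(\gamma)$ fixes the north pole, the image $Rx_0$ depends only on $(\alpha_R,\beta_R)$, which is why the representative $g_x$ can be normalized by requiring $\gamma_{g_x}=0$.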
Consider the atlas $(U_R, \Psi_R)_{R\in SO(3)}$ of $\cS^2$ defined as follows.
Set the charts as
\begin{align}
&\Psi_e(x) := (\beta_{g_x}, \alpha_{g_x})\ , \qquad x\in U_e\ ,\\
&\Psi_R(x) := \Psi_e (R^{-1}x)\ , \qquad x\in U_R\ .
\end{align}
Note that for each $R$, $\Psi_R(x)$ coincides with the ``rotated'' coordinates $(\vartheta_R, \varphi_R)$ of $x$.
Let us choose now the family $(g^R_x)_{x\in U_R, R\in SO(3)}$.
For $x\in U_e$ choose $g^e_x:=g_x$
and for $x\in U_R$
\begin{equation}
g^R_x:=Rg_{R^{-1}x}\ .
\end{equation}
Therefore the corresponding atlas
$( U_R, \Phi_R)_{R\in SO(3)}$ of $\xi_s$ is given, for $\eta\in \pi^{-1}_s(U_R)$, by
\begin{equation}
\Phi_R(\eta)=( \Psi_R(x), z)\ ,
\end{equation}
where $x:=\pi_s(\eta)\in U_R$ and $z$ is such that $\eta=\theta(g_x^R,z)$.
Moreover for $R_1, R_2\in SO(3)$, $x\in U_{R_1}\cap U_{R_2}$ we have
\begin{equation}\label{eq k}
k_{R_2,R_1}(x)= (g_{R_2^{-1}x})^{-1} R_2^{-1}R_1 g_{R_1^{-1}x}
\end{equation}
and the transition function
from the chart $(U_{R_1}, \Phi_{R_1})$ to the chart $(U_{R_2}, \Phi_{R_2})$ at $x$ is given by \paref{transizione}
\begin{equation}\label{fnz transizione}
\lambda^{(-s)}_{R_2, R_1}(x):=\chi_s(k_{R_2,R_1}(x))\ .
\end{equation}
From now on let us denote by $\omega_{R_2,R_1}(x)$ the rotation angle of $k_{R_2,R_1}(x)$.
Note that, with this choice of the family $(g^R_x)_{x\in U_R, R\in SO(3)}$,
$\omega_{R_2,R_1}(x)$ is the third Euler angle of the rotation $R_2^{-1}R_1g_{R_1^{-1}x}$.
\begin{remark}\label{particular} \rm Note that we have
$$
R^{-1}g_x=g_{R^{-1}x}\ ,
$$
i.e. $g^R_x=g_x$, in either of the following two situations:
a) $R$ is a rotation around the north-south axis (i.e. not changing the latitude of the points of $\cS^2$).
b) The rotation axis of $R$ is orthogonal to the plane $[x_0,x]$ (i.e. changes the colatitude of $x$ leaving its longitude unchanged).
Note that if each of the rotations $R_1,R_2$ are of type a) or of type b), then
$$
k_{R_2,R_1}(x)=g^{-1}_{R_2^{-1}x}R_2^{-1}R_1g_{R_1^{-1}x}=(R_2g_{R_2^{-1}x})^{-1}R_1g_{R_1^{-1}x}=
g_x^{-1}g_x= \mbox{the identity}
$$
and in this case the rotation angle of $k_{R_2,R_1}(x)$ coincides with the angle $-\psi_{R_2,R_1}(x)$, as neither $R_1$ nor $R_2$ changes the orientation of the tangent plane at $x$.
Another situation in which the rotation $k$ can be easily computed appears when $R_1$ is the
identity and $R_2$ is a rotation of an angle $\gamma$ around an axis passing through $x$.
Actually
\begin{equation}\label{rot}
k_{R_2,e}(x)=g_x^{-1}R_2^{-1}g_x
\end{equation}
which, by conjugation, turns out to be a rotation of the angle $-\gamma$ around the north-south axis. In this case also it is immediate that the rotation angle $\omega_{R_2,R_1}(x)$ coincides with $-\psi_{R_2,R_1}(x)$.
\qed
\end{remark}
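The conjugation identity \paref{rot} discussed in the remark above can also be checked numerically (again a sketch assuming \texttt{scipy}; the angles are arbitrary): with $R_2$ a rotation of angle $\gamma$ around the axis through $x$, the element $k_{R_2,e}(x)=g_x^{-1}R_2^{-1}g_x$ is the rotation of angle $-\gamma$ around the north--south axis.

```python
import numpy as np
from scipy.spatial.transform import Rotation

alpha, beta, gamma = 0.7, 0.4, 0.9
x0 = np.array([0.0, 0.0, 1.0])                        # north pole

g_x = Rotation.from_euler('ZYZ', [alpha, beta, 0.0])  # representative with third Euler angle 0
x = g_x.apply(x0)                                     # the point x = g_x x0

R2 = Rotation.from_rotvec(gamma * x)                  # rotation of angle gamma about the axis through x

# k_{R_2,e}(x) = g_x^{-1} R_2^{-1} g_x; scipy's * composes rotations (right factor acts first)
k = g_x.inv() * R2.inv() * g_x
k_expected = Rotation.from_rotvec(-gamma * x0)        # rotation of -gamma about the z axis

print(np.allclose(k.as_matrix(), k_expected.as_matrix()))  # True
```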
The following relations will be useful in the sequel, setting $y_1=R_1^{-1}x$, $y_2=R_2^{-1}x$,
\begin{align}
k_{R_2,R_1}(x)&=g^{-1}_{R_2^{-1}x}R_2^{-1}R_1g_{R_1^{-1}x}=g^{-1}_{R_2^{-1}R_1y_1}R_2^{-1}R_1g_{y_1}=
k_{R_1^{-1}R_2,e}(R_1^{-1}x)\ ,\label{al1}\\
k_{R_2,R_1}(x)&=g^{-1}_{R_2^{-1}x}R_2^{-1}R_1g_{R_1^{-1}x}=g^{-1}_{y_2}R_2^{-1}R_1g_{R_1^{-1}R_2y_2}=
k_{e,R_2^{-1}R_1}(R_2^{-1}x)\ .
\end{align}
We have already shown in Remark \ref{particular} that $\omega_{R_2,R_1}(x)=-\psi_{R_2,R_1}(x)$ in two particular situations:
rotations that move $y_1=R_1^{-1}x$ to $y_2=R_2^{-1}x$
without turning the tangent plane and rotations that turn the tangent plane without moving the point.
In the next statement, by combining these two particular cases,
we prove that the two angles actually always coincide.
\begin{lemma}\label{lemma angolo}
Let $x\in U_{R_1}\cap U_{R_2}$, then
$\omega_{R_2,R_1}(x)=-\psi_{R_2,R_1}(x)$\ .
\end{lemma}
\begin{proof}
The matrix $R_2^{-1}R_1$ can be decomposed as $R_2^{-1}R_1=EW$ where $W$ is the product of a rotation around an axis that is orthogonal to the plane $[x_0,y_1]$ bringing $y_1$ to a point having the same colatitude as $y_2$ and of a rotation around the north-south axis taking this point to $y_2$. By Remark \ref{particular} we have
$Wg_{y_1}=g_{Wy_1}=g_{y_2}$. $E$ instead is a rotation around an axis passing through $y_2$ itself.
We have then, thanks to \paref{rot} and \paref{al1}
$$
\dlines{
k_{R_2,R_1}(x)=k_{R_1^{-1}R_2,e}(R_1^{-1}x)=k_{W^{-1}E^{-1},e}(y_1)=
g_{EWy_1}^{-1}EWg_{y_1}
=g_{y_2}^{-1}Eg_{y_2}=k_{E^{-1},e}(y_2)\ .\cr
}
$$
By the previous discussion, $\omega_{E^{-1},e}(y_2)=-\psi_{E^{-1},e}(y_2)$.
To finish the proof it is enough to show that
\begin{equation}\label{ss}
\psi_{R_2,R_1}(x)=\psi_{E^{-1},e}(y_2)\ .
\end{equation}
Let us denote by $\rho(x)=\rho_e(x)$ the tangent vector at $x$ which is parallel to the curve $\vartheta=const$ and points in the direction of increasing $\varphi$. Then in coordinates
$$
\rho(x)=\frac 1{\sqrt{x_1^2+x_2^2}}\ \bigl(-x_2,x_1,0\bigr)
$$
and the action of $R$ is given by (\cite{marinuccigeller},\S3) $\rho_R(x)=R\rho(R^{-1}x)$. As $W\rho(y_1)=\rho(y_2)$ ($W$ does not change the orientation of the tangent plane),
$$
\dlines{
\langle \rho_{R_2}(x), \rho_{R_1}(x) \rangle =
\langle R_2 \rho(R_2^{-1}x), R_1 \rho(R_1^{-1}x) \rangle
= \langle R_1^{-1}R_2 \rho(R_2^{-1}x),\rho(R_1^{-1}x) \rangle=\cr
=\langle W^{-1}E^{-1} \rho(EWR_1^{-1}x), \rho(W^{-1}E^{-1}R_2^{-1}x) \rangle=
\langle E^{-1} \rho(Ey_2), W\rho(W^{-1}y_2) \rangle =\cr
=\langle E^{-1} \rho(y_2), W\rho(y_1) \rangle=\langle E^{-1} \rho(y_2), \rho(y_2) \rangle\ ,
}
$$
so that the oriented angle $\psi_{R_2,R_1}(x)$ between $\rho_{R_2}(x)$ and $\rho_{R_1}(x)$
is actually the rotation angle of $E^{-1}$.
\end{proof}
\clearpage
\fancyhf{} \fancyfoot[CE,CO]{\thepage}
\fancyhead[CO]{\textit{Acknowledgments}}
\fancyhead[CE]{\textit{Acknowledgments}}
\renewcommand{\headrulewidth}{0.5pt}
\renewcommand{\footrulewidth}{0.0pt}
\addtolength{\headheight}{0.5pt}
\fancypagestyle{plain}{\fancyhead{}\renewcommand{\headrulewidth}{0pt}}
\chapter*{Acknowledgments}\addcontentsline{toc}{chapter}{Acknowledgments}
This is the part of my thesis that maybe I like most. Indeed, here I can write whatever I want, with no definition, label, rule... For the same reasons it is the most difficult for me,
as if I feel something,
then I deeply feel it. With no rule, logic step and reason. And to write it down with no guideline and of course no usual word, I should work hard somehow.
If you think that what I will write for you is not enough, then yes, you are right.
But I am quite sure that you know how indebted I am to you for your words, help, contribution... whatever.
\section*{Part 1}
\begin{center}
\emph{To Prof. Paolo Baldi:\\ you taught me fundamental things for life. To not use too much the emph-style in \LaTeX, how to deal with the Grushin operator, where to see the painting ``Battaglia di San Romano'', why the novel ``Persuasion'' is great, how to fast destroy some Introduction section. {\rm To think}. \\
I thank you for each of these things, for the great opportunity to work with you, for
your necessary help and solving ideas during these research years (master and PhD), all the fruitful discussions, computations, proofs, mistakes we did together. \\ But most of all I sincerely thank you for your friendship. }
\smallskip
\newpage
To Prof. Domenico Marinucci:\\ you made me improve several aspects of my life. Now I like both, the National Gallery and Clebsch-Gordan coefficients, I appreciate very much the movie ``Million dollar baby'' as well as the Cosmic Microwave Background radiation, I can show the best of Rome to some tourist in at most three hours, walking of course. \\
I thank you for all these improvements,
and for many other things. During these last years, your solving ideas, suggestions and help were valuable: I mean not only the discussions, long computations and proofs we did together, but also the special opportunity you gave me to join your research project ERC Grant 277742 \emph{Pascal} and to meet great mathematicians all around the world... Thanks, a lot.
\smallskip
To Prof. Lucia Caramellino: \\
I heartily thank you for your help, suggestions and ideas, for the time we spent together proving some theorem and most of all talking as good friends do.
\smallskip
To Prof. Giovanni Peccati:\\ I sincerely thank you for your help and ideas, the time we spent together in Luxembourg
discussing maths and the opportunity you gave me to visit your nice department, to meet your friendly research team and especially to work with you.
\smallskip
To Prof. Igor Wigman: \\
I thank you for fruitful discussions we had in London. Your suggestions and ideas were very important and I heartily thank you for your kindness, for the great opportunity you gave me to visit your amazing department and most of all to work with you on topics which are collected in this thesis.
\smallskip
I wish to thank also Proff. Andrea Iannuzzi and Stefano Trapani \\for fruitful discussions on several topics and
valuable assistance in \cite{mauSO(3)}, which is completely collected in Chapter 3,
and Gilles Becker for the idea in Remark \ref{Gilles Becker}.
Finally, I sincerely thank all ERC \emph{Pascal} project members: Valentina Cammarota, Simon Campese and Claudio Durastanti for valuable comments on an earlier version of this thesis, Alessandro Renzi and Yabebal Fantaye for useful suggestions.
\end{center}
\section*{Part 2}
Many other people helped me to write this thesis, somehow. First of all my friends, and of course my family.
It is not easy for me to write this part, as you know, so I will give just one thought for each of you. Only one, but full.
When reading your part, remember that the main reason for which your name is here is simple: I thank you for making me smile.
\smallskip
\begin{center}
To Giada: \\you are always close to me. Everywhere.
\smallskip
To Camilla, Stefano, Chiara, Stefania, Eloisa, Serena and Erika: \\thanks for all the time we spent together - laughing, basically.
\smallskip
To Andrea, Vincenzo, Stefano, Claudio, Valentina, Gianluca, Simon, Alessandro and Yabebal:\\
I am so lucky to have colleagues and friends as nice as you are. My time at the Department would have been worse if any of you had not been there - but our favourite \emph{Cinsenke} (see e.g. The Red Bar) would have been useful, as well.
\smallskip
To my Cardio Combat Dance team:\\ sometimes, at night, kicking something with you (the air e.g.) helped me to forget sad situations.
\smallskip
To everyone with whom I shared the office somewhere in the world, even if only for a few days: thanks.
\smallskip
\emph{Finally my greatest thanks. \\To my family: \\
every reason I think of is too deep to
write here.}
\end{center}
\vspace{17pt}
\begin{center}
No page of this thesis
would have had any meaning, if any of you had not been close to me.
\end{center}
\clearpage
\fancyhf{} \fancyfoot[CE,CO]{\thepage}
\fancyhead[CO]{\textit{Bibliography}}
\fancyhead[CE]{\textit{Bibliography}}
\renewcommand{\headrulewidth}{0.5pt}
\renewcommand{\footrulewidth}{0.0pt}
\addtolength{\headheight}{0.5pt}
\fancypagestyle{plain}{\fancyhead{}\renewcommand{\headrulewidth}{0pt}}
\addcontentsline{toc}{chapter}{Bibliography}
\providecommand{\bysame}{\leavevmode\hbox to3em{\hrulefill}\thinspace}
\providecommand{\MR}{\relax\ifhmode\unskip\space\fi MR }
\providecommand{\MRhref}[2]{%
\href{http://www.ams.org/mathscinet-getitem?mr=#1}{#2}%
}
\providecommand{\href}[2]{#2}
Introduced in order to resolve problems within the Big-Bang cosmological model (such as the horizon, flatness, and magnetic-monopole problems), inflation also naturally provides the seeds for generating primordial matter fluctuations from quantum fluctuations (see for instance Ref.~\cite{kamionkowski16} and references herein).
Measurements of the cosmic microwave background (CMB) allow constraints to be placed on the amplitude of the tensor perturbations that are predicted to be generated by primordial gravitational waves during the inflationary epoch, leaving some imprints on the CMB anisotropies \cite{Seljak97a,Kamionkowski97,Seljak97b,HuWhite1997}.
Over the last decade, while no primordial signals have been discovered, significant improvements on the upper limit for the tensor-to-scalar ratio $r$ have progressively led to the constraint becoming lower than a few percent in amplitude: $r<0.11$ in 2013 using only temperature data from \textit{Planck}~\cite{planck2013-p11}; $r<0.12$ in 2015 using polarization from BICEP/Keck and \textit{Planck}~\cite{BKP} to debias the initially claimed detection from BICEP/Keck in 2014, $r=0.2^{+0.07}_{-0.05}$~\cite{BicepDetection}; $r<0.09$ in 2016 using BICEP/Keck and \textit{Planck}~\cite{planck2014-a15}; $r<0.07$ in 2018 using BICEP/Keck 2015 data~(\BK{15},~\cite{BK15}); $r<0.065$ in 2019 using \textit{Planck}\ in combination with BK15~\cite{planck2016-l06}; $r<0.044$ in 2021 using \textit{Planck}\ in combination with BK15~\cite{planck_tensor}; and $r<0.036$ in 2021 using the latest BICEP/Keck data (\BK{18},~\cite{BK18}).
\begin{figure}[htpb!]
\centering
\includegraphics[width=8.5cm,height=6.8cm]{fig_history.pdf}
\caption{History of constraints on the tensor-to-scalar ratio~$r$ (\textit{Planck}\ PR1~\cite{planck2013-p11}, \textit{Planck}\ PR1+BK~\cite{BKP}, \textit{Planck}\ PR2+BK~\cite{planck2014-a15}, BK15~\cite{BK15}, \textit{Planck}\ PR3+BK15~\cite{planck2016-l06}, \textit{Planck}\ PR4~\cite{planck_tensor}, \textit{Planck}\ PR4+BK15~\cite{planck_tensor}, BK18~\cite{BK18}, \textit{Planck}\ PR4+BK18 this work). Upper limits are given at 95\,\% CL.}
\label{fig:history}
\end{figure}
In this paper, we first discuss the impact of uncertainties in {$\rm{\Lambda CDM}$}\ parameters for constraints on $r$ derived from the latest BICEP/Keck data (\BK{18}) \cite{BK18} alone. Then we add in data from the latest \textit{Planck}\ release (PR4) \cite{planck2020-LVII} in order to provide the best currently available constraint on the tensor-to-scalar ratio $r$.
\section{\label{sec:model}Cosmological model}
The cosmological model used in this paper is based on adiabatic, nearly scale-invariant perturbations. It has been established as the simplest model that is consistent with the different cosmological probes and in particular with the CMB \cite{planck2016-l06}.
The standard {$\rm{\Lambda CDM}$}+r model includes 6+1 parameters.
Power spectra for scalar and tensor modes are parameterized by power laws with no running, so the parameters include the scalar amplitude $A_{\rm s}$ and the scalar spectral index $n_{\rm s}$, while the tensor spectral index $n_{\rm t}$ is fixed by the single-field slow-roll consistency relation. The amplitudes and the tensor-to-scalar power ratio, $r \equiv A_{\rm t}/A_{\rm s}$, are evaluated at a pivot scale of 0.05\,Mpc$^{-1}$.
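This parameterization can be written down directly. The following minimal sketch (with illustrative \textit{Planck}-like parameter values, not the fitted ones) evaluates the two power-law spectra and the slow-roll consistency relation $n_{\rm t} = -r/8$:

```python
import math

A_s = 2.1e-9        # scalar amplitude (illustrative value)
n_s = 0.965         # scalar spectral index (illustrative value)
r = 0.01            # tensor-to-scalar ratio (illustrative value)
k_pivot = 0.05      # pivot scale in Mpc^-1

def P_scalar(k):
    """Power-law scalar spectrum with no running."""
    return A_s * (k / k_pivot) ** (n_s - 1.0)

def P_tensor(k):
    """Tensor spectrum; n_t is fixed by the single-field slow-roll
    consistency relation n_t = -r/8."""
    n_t = -r / 8.0
    return r * A_s * (k / k_pivot) ** n_t

# At the pivot scale the tensor-to-scalar ratio is r by construction.
print(round(P_tensor(k_pivot) / P_scalar(k_pivot), 6))  # -> 0.01
```

For $r > 0$ the consistency relation makes the tensor spectrum slightly red ($n_{\rm t} < 0$), so the tensor power decreases toward smaller scales.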
Three other parameters ($\Omega_{\rm b}h^2$, $\Omega_{\rm c}h^2$, and $\theta_{\ast}$) determine the linear evolution of perturbations after they re-enter the Hubble scale.
Finally, reionization is modeled with a widely used step-like transition from an essentially vanishing ionized fraction at early times to a value of unity at low redshifts. The transition is modeled using a $\tanh$ function with a non-zero width fixed to $\Delta z = 0.5$~\cite{lewis08}. The reionization optical depth $\tau$ is then directly related to the redshift at which this transition occurs.
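A simplified sketch of this step-like transition is the following; note that the actual \textsc{camb} parameterization of Ref.~\cite{lewis08} applies the smoothing in $(1+z)^{3/2}$ rather than in $z$ itself, so this is an illustration of the shape only:

```python
import math

def x_e(z, z_re, delta_z=0.5):
    """Step-like ionized fraction: ~0 at high z, ~1 after reionization.
    Simplified tanh-in-z sketch of the CAMB-style transition."""
    return 0.5 * (1.0 + math.tanh((z_re - z) / delta_z))

z_re = 7.7  # illustrative midpoint redshift
print(x_e(0.0, z_re))   # ~1 (fully ionized today)
print(x_e(15.0, z_re))  # ~0 (neutral at early times)
print(x_e(z_re, z_re))  # exactly 0.5 at the transition midpoint
```

The optical depth $\tau$ is then the line-of-sight integral of the electron density weighted by this ionized fraction, which is why $\tau$ and $z_{\rm re}$ are in one-to-one correspondence for fixed $\Delta z$.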
The CMB power spectra are generated using the Boltzmann-solver code \textsc{camb}~\cite{Lewis:1999bs,Howlett:2012mh}.
We sample the likelihood combinations using the \textsc{cobaya} framework \cite{cobaya} with fast and efficient Markov chain Monte Carlo sampling methods described in Refs.~\cite{Lewis:2002ah} and \cite{Lewis:2013hha}. All the likelihoods that we use are publicly available on the \textsc{cobaya} web site\footnote{\href{https://cobaya.readthedocs.io}{cobaya.readthedocs.io}} and are briefly described in the next section.
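As a rough illustration of what such a sampler does, the following is a minimal Metropolis--Hastings loop on a toy one-dimensional posterior for $r$ with a positivity prior. This is not \textsc{cobaya}'s actual algorithm (which uses adaptive proposals and fast/slow parameter splitting), and the numbers are illustrative:

```python
import math, random

random.seed(0)

def log_post(r, r_hat=0.014, sigma=0.011):
    """Toy Gaussian log-posterior for r with a flat prior on r >= 0
    (illustrative numbers, loosely matching the BK18 constraint)."""
    if r < 0.0:
        return -math.inf
    return -0.5 * ((r - r_hat) / sigma) ** 2

chain, r_cur = [], 0.02
for _ in range(20000):
    r_prop = r_cur + random.gauss(0.0, 0.01)   # symmetric random-walk proposal
    if random.random() < math.exp(min(0.0, log_post(r_prop) - log_post(r_cur))):
        r_cur = r_prop
    chain.append(r_cur)

chain = chain[2000:]                            # discard burn-in
mean_r = sum(chain) / len(chain)
print(round(mean_r, 3))
```

The positivity prior pushes the posterior mean slightly above the maximum-likelihood value, which is one reason upper limits rather than detections are quoted when the posterior is compatible with zero.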
\section{Planck likelihoods}
We use the polarized likelihood at large scales, \texttt{lowlEB}, described in Ref.~\cite{planck_tensor} and available on github.\footnote{\href{https://github.com/planck-npipe}{github.com/planck-npipe}} Specifically, it is a \textit{Planck}\ \mbox{low-$\ell$}\ polarization likelihood based on cross-spectra using the Hamimeche-Lewis approximation~\cite{hamimeche08,mangilli15}.
Using this formalism, the likelihood function consistently takes into account the two polarization fields $E$ and $B$ (including $EE$, $BB$, and $EB$ power-spectra), as well as {\it all\/} correlations between multipoles and modes. It is important to appreciate that such correlations are relevant at large angular scales where cut-sky effects and systematic residuals (both from the instrument and from the foregrounds) are important.
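The core of the Hamimeche--Lewis approximation is a transform that makes the skewed distribution of measured power spectra approximately Gaussian. For a single spectrum it reduces to the function below (a one-field sketch; the full likelihood applies this to the eigenvalues of $\hat{C}_\ell C_\ell^{-1}$ matrices and rescales by a fiducial spectrum):

```python
import math

def g(x):
    """Hamimeche-Lewis transform for a single power spectrum,
    g(x) = sign(x-1) * sqrt(2 (x - ln x - 1)), with x = C_hat / C_model."""
    return math.copysign(math.sqrt(2.0 * (x - math.log(x) - 1.0)), x - 1.0)

# Near x = 1 the transform is close to the Gaussian residual x - 1 ...
print(round(g(1.001), 6))
# ... but it is asymmetric away from 1, reflecting the skewness of the
# chi^2-like distribution of measured spectra: deficits are penalized
# more strongly than excesses of the same size.
print(g(0.5), g(1.5))
```

A Gaussian likelihood in the transformed variables then captures the non-Gaussian tails of the $C_\ell$ distribution at low multipoles far better than a Gaussian in the raw spectra.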
The cross-spectra are calculated on component-separated CMB ``detset'' maps processed by {\sc commander}\ from the \textit{Planck}\ PR4 frequency maps, on 50\,\% of the sky. The sky fraction is optimized in order to obtain maximum sensitivity (and lowest sample variance), while ensuring low contamination from residual foregrounds.
The covariance matrix is estimated from the PR4 Monte Carlos. The statistical distribution of the recovered $C_\ell$s naturally includes the effect of all components included in the Monte Carlo, namely the CMB signal, instrumental noise, \textit{Planck}\ systematic effects incorporated in the PR4 simulations (see Ref.~\cite{planck2020-LVII}), component-separation uncertainties, and foreground residuals.
In the case of \textit{Planck}, we are not able to analytically predict the shape of the full covariance matrix for component-separated maps. However, analytical predictions exist for the covariance of instrumental noise in low-resolution individual-frequency maps. Analysis of these matrices highlights non-trivial structures in the harmonic space noise, whose covariance cannot be approximated as diagonal. Since component-separated maps are a combination of the input frequency maps, part of these structures will carry over into the final covariance matrix, on top of any additional correlations induced by systematic effects, masking, and foreground residuals that cannot be modeled analytically but only reconstructed via simulations. Given these considerations, we cannot apply any type of simplifying ``conditioning'' (such as setting off-diagonal elements to zero), as done for some ground-based experiments, nor do we wish to make such assumptions about the data.
In this paper, unlike previous CMB work to our knowledge, we now marginalize the likelihood over the unknown true covariance matrix (as proposed in Ref.~\cite{sellentin16}) in order to propagate the uncertainty in the estimation of the covariance matrix caused by a limited number of simulations. This provides us with a likelihood that is unbiased and accurate for the estimation of the uncertainty. The robustness of the results is discussed further in the Appendix.
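Concretely, the marginalization of Ref.~\cite{sellentin16} replaces the Gaussian likelihood $\mathcal{L} \propto e^{-\chi^2/2}$ by a multivariate-$t$-like form whose $\chi^2$ dependence is $(1 + \chi^2/(N_{\rm sim}-1))^{-N_{\rm sim}/2}$. A small sketch (normalization prefactors omitted) shows how it converges to the Gaussian case as the number of simulations grows:

```python
import math

def log_like_gaussian(chi2):
    return -0.5 * chi2

def log_like_marginalized(chi2, n_sim):
    """Covariance-marginalized likelihood of Sellentin & Heavens (2016):
    L propto (1 + chi2/(n_sim - 1))^(-n_sim/2), up to a chi2-independent
    normalization (sketch; the prefactor is omitted)."""
    return -0.5 * n_sim * math.log1p(chi2 / (n_sim - 1.0))

# With many simulations the marginalized form approaches the Gaussian one;
# with few simulations the tails are heavier, reflecting the extra
# uncertainty in the estimated covariance.
for n_sim in (100, 1000, 10000):
    print(n_sim, round(log_like_marginalized(25.0, n_sim), 3))
print(log_like_gaussian(25.0))
```

With the several hundred PR4 simulations available, the correction is modest but non-negligible for the tails of the posterior, which is exactly where upper limits are set.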
At large angular scales in temperature, we make use of the \textit{Planck}\ public \mbox{low-$\ell$}\ temperature-only likelihood, based on the CMB map recovered from the component-separation procedure (specifically {\sc commander}) described in detail in Ref.~\cite{planck2016-l05}.
At small scales, we use the \textit{Planck}\ \texttt{HiLLiPoP}\ likelihood, which can include the $TT$, $TE$, and/or $EE$ power spectra computed on the PR4 detset maps at 100, 143, and 217\,GHz. The likelihood is a spectrum-based Gaussian approximation, with semi-analytic estimates of the $C_\ell$ covariance matrix based on the data. The model consists of a linear combination of the CMB power spectrum and several foreground residuals, including Galactic dust, cosmic infrared background, thermal and kinetic Sunyaev-Zeldovich (SZ) effects, SZ-CIB correlations, and unresolved point sources. For details, see Refs.~\cite{planck_tensor} and \cite{planck2013-p08,planck2014-a13,couchot2017}.
\section{BICEP/Keck likelihood}
We use the publicly available BICEP/Keck likelihood (\BK{18}) corresponding to the data taken by the BICEP2, Keck Array, and BICEP3 CMB polarization experiments, up to and including the 2018 observing season~\cite{BK18}.
The format of the likelihood is identical to the one introduced in Refs.~\cite{BKP} and \cite{BK15};
it is a Hamimeche-Lewis approximation \cite{hamimeche08} to the joint likelihood of the ensemble of $BB$ auto- and cross-spectra taken between the BICEP/Keck (two at 95, one each at 150 and 220\,GHz), \textit{WMAP} (23 and 33\,GHz), and \textit{Planck}\ (PR4 at 30, 44, 143, 217, and 353\,GHz) frequency maps.
The effective coverage is approximately $400\,{\rm deg}^2$ (which corresponds to 1\,\% of the sky) centered on a region with low foreground emission.
The data model includes Galactic dust and synchrotron emission, as well as correlations between dust and synchrotron.
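The dust component in such models is conventionally scaled across frequencies with a modified blackbody. The following sketch uses typical parameter values ($\beta_{\rm d} \approx 1.59$, $T_{\rm d} \approx 19.6$\,K, pivot at 353\,GHz); these numbers are illustrative, and the conversion to CMB thermodynamic units is omitted:

```python
import math

H_OVER_K = 0.0479924  # h/k_B in K/GHz

def planck_bb(nu_ghz, T):
    """Planck blackbody shape B_nu up to constants."""
    x = H_OVER_K * nu_ghz / T
    return nu_ghz ** 3 / math.expm1(x)

def dust_sed_ratio(nu_ghz, nu0_ghz=353.0, beta_d=1.59, T_d=19.6):
    """Modified-blackbody scaling of dust brightness from a pivot
    frequency nu0 to nu (sketch; thermodynamic-unit conversion omitted)."""
    return (nu_ghz / nu0_ghz) ** beta_d * planck_bb(nu_ghz, T_d) / planck_bb(nu0_ghz, T_d)

# Dust is much dimmer at the CMB channels than at 353 GHz:
for nu in (95.0, 150.0, 220.0, 353.0):
    print(nu, dust_sed_ratio(nu))
```

The steep rise of this SED with frequency is why the 353\,GHz \textit{Planck}\ channel and the 220\,GHz BICEP/Keck channel anchor the dust amplitude in the joint fit.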
\\
In the following, we neglect correlations between the BICEP/Keck and \textit{Planck}\ data sets. This is justified because the BK18 spectra are estimated on 1\,\% of the sky, while the \textit{Planck}\ analysis is derived from 50\,\% of the sky.
\section{Impact of $\mathbf{\Lambda}$\bf{CDM} uncertainty}
In the baseline analysis described in Ref.~\cite{BK18}, the BICEP/Keck Collaboration fixed the cosmology to the best-fitting \textit{Planck}\ 2018 model and quoted an upper limit of $r<0.036$ at 95\,\% CL.
Within this baseline, the uncertainty on {$\rm{\Lambda CDM}$}\ parameters was not propagated, reducing the width of the posterior for the tensor-to-scalar ratio $r$.
We find that when fitting the BK18 data for {$\rm{\Lambda CDM}$}\ parameters in addition to $r$, the uncertainty on $r$ slightly increases (as illustrated in Fig.~\ref{fig:r}), because BK18 alone constrains the {$\rm{\Lambda CDM}$}\ parameters (other than $A_{\rm s}$) only weakly.
\begin{figure}[htbp!]
\centering
\includegraphics[width=8.6cm,height=6cm]{fig_bk_r.pdf}
\caption{Posterior distribution for the tensor-to-scalar ratio $r$, showing the impact of marginalization over {$\rm{\Lambda CDM}$}\ parameters.}
\label{fig:r}
\end{figure}
The constraints on $r$ then become
\begin{eqnarray}
r = 0.014_{-0.011}^{+0.012} &&\quad \text{(BK18 with {$\rm{\Lambda CDM}$}\ fixed)},\label{eq:BK18fixed}\\
r = 0.015_{-0.013}^{+0.015} &&\quad \text{(BK18 with {$\rm{\Lambda CDM}$}\ free)},
\end{eqnarray}
all compatible with zero\footnote{Uncertainties in Eq.~\ref{eq:BK18fixed} are slightly larger than those in Ref.~\cite{BK18}, despite using the same likelihood. This small difference could be due to assuming different values for the reference {$\rm{\Lambda CDM}$}\ model parameters (we used \textit{Planck}\ 2018 TT,TE,EE+lowE+lensing~\cite{planck2016-l06}), or might arise from using different MCMC/Boltzmann-solver codes.} and resulting in the following upper limits at 95\,\% CL:
\begin{eqnarray}
r < 0.036 &&\quad \text{(BK18 with {$\rm{\Lambda CDM}$}\ fixed)};\\
r < 0.042 &&\quad \text{(BK18 with {$\rm{\Lambda CDM}$}\ free)}.
\end{eqnarray}
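These upper limits follow from integrating the posterior over $r \geq 0$. As a numerical sketch, treating the BK18 fixed-cosmology posterior as a Gaussian truncated at zero with a symmetric $\sigma \approx 0.0115$ (an approximation to the quoted asymmetric errors) reproduces a limit close to the quoted value:

```python
import math

def upper_limit(r_hat, sigma, cl=0.95):
    """95% CL upper limit for a Gaussian posterior truncated at r >= 0,
    found by bisection on the normalized CDF."""
    def cdf(r):  # posterior mass in [0, r], normalized over [0, inf)
        phi = lambda t: 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
        lo = phi(-r_hat / sigma)
        return (phi((r - r_hat) / sigma) - lo) / (1.0 - lo)
    a, b = 0.0, r_hat + 10.0 * sigma
    for _ in range(80):
        m = 0.5 * (a + b)
        a, b = (m, b) if cdf(m) < cl else (a, m)
    return 0.5 * (a + b)

print(round(upper_limit(0.014, 0.0115), 3))  # -> 0.034, near the quoted 0.036
```

The small residual difference reflects the mild asymmetry of the true posterior, which a symmetric Gaussian cannot capture.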
\\
\section{Combining \textit{Planck}\ and BICEP/Keck}
The addition of \textit{Planck}\ data allows us to constrain {$\rm{\Lambda CDM}$}\ parameters, thus reducing the uncertainty on $r$. This was mentioned in a secondary analysis of~Ref.~\cite{BK18} (their appendix~E.1), yielding an upper limit on $r$ similar to that of the baseline when using the earlier \textit{Planck}\ PR3 data. In this paper, we update the \textit{Planck}\ data to PR4 and add constraints from the polarized low-$\ell$ likelihood.
With the new BICEP/Keck data set included, the uncertainty on $r$ has decreased to $\sigma(r) = 0.014$. We may compare this to the \textit{Planck}\ uncertainty $\sigma(r) = 0.056$ based on polarized low multipoles; this uncertainty reduces to $\sigma(r)=0.024$ when the $TT+TE+EE$ high multipole data are included as well.
The addition of \mbox{low-$\ell$}\ from \textit{Planck}\ polarization modes allows the degeneracy with $\tau$ to be broken and also slightly reduces the width of the posterior distribution for $r$. This is illustrated in Fig.~\ref{fig:tau-r}.
\begin{figure}[htbp!]
\centering
\includegraphics[width=8.6cm]{fig_As-tau-r.pdf}
\caption{Posterior distributions for $\tau$, $A_{\rm s}$, and $r$ using \BK{18}~\cite{BK18}, \textit{Planck}~\cite{planck_tensor}, and the combination of the two.}
\label{fig:tau-r}
\end{figure}
The resulting constraint on $r$ using a combination of \textit{Planck}\ and \BK{18} data tightens to
\begin{eqnarray}
r = 0.014_{-0.009}^{+0.011} &&\quad \text{(Planck+BK18)},
\end{eqnarray}
which corresponds to $r < 0.034$ at 95\,\% CL. The reionization optical depth is found to be
\begin{eqnarray}
\tau = 0.057 \pm 0.007.
\end{eqnarray}
The combination of the two data sets allows us to cover the full range of multipoles that are most sensitive to tensor modes. In combination with baryon acoustic oscillation (BAO \cite{SDSSdr16}) and CMB lensing~\cite{planck2016-l08} data, we obtain an improved upper limit of
\begin{eqnarray}
r<0.032 \quad \text{(95\% CL).}
\end{eqnarray}
In the $n_{\rm s}$--$r$ plane (Fig.~\ref{fig:nsr}), the constraints now rule out the simplest monomial potentials for single-field inflation (strongly excluding $V\propto\phi^2$, $\phi$, and even $\phi^{2/3}$ at about 5$\,\sigma$).
\begin{figure}[htbp!]
\centering
\includegraphics[width=0.95\linewidth]{ns_r_inflation.pdf}
\caption{
Constraints in the tensor-to-scalar ratio $r$ versus scalar spectral index $n_{\rm s}$ plane for the {$\rm{\Lambda CDM}$}\ model, using CMB data in combination with baryon acoustic oscillation (BAO) and CMB lensing data. The CMB data are \textit{Planck}\ PR3 (TT,TE,EE+lowE, gray contour), \textit{Planck}\ PR4 \cite{planck_tensor} (TT,TE,EE+lowlEB, green contour), and \textit{Planck}\ PR4 joint with BK18 \cite{BK18} (blue contour, this paper).
These constraints assume the inflationary consistency relation and negligible running. Dotted lines show the loci of approximately constant e-folding number $50 < N < 60$, assuming simple $V \propto (\phi/m_{\rm Pl})^p$ single-field inflation. Solid lines show the approximate $n_{\rm s}$--$r$ relation for locally power-law potentials, to first order in slow roll. The solid black line (corresponding to a linear potential) separates concave and convex potentials. This plot is adapted from figure~28 in Ref.~\cite{planck2016-l06}.}
\label{fig:nsr}
\end{figure}
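The exclusion of these monomial potentials can be seen from the standard first-order slow-roll predictions for $V \propto \phi^p$ with $N$ e-folds, $n_{\rm s} \approx 1 - (p+2)/(2N)$ and $r \approx 4p/N$. A quick numerical check (a sketch of the textbook formulas, not a replacement for the full likelihood analysis):

```python
def slow_roll_predictions(p, N):
    """First-order slow-roll predictions for V ~ phi^p with N e-folds:
    n_s = 1 - (p+2)/(2N), r = 4p/N."""
    return 1.0 - (p + 2.0) / (2.0 * N), 4.0 * p / N

for p in (2.0, 1.0, 2.0 / 3.0):
    for N in (50, 60):
        n_s, r_pred = slow_roll_predictions(p, N)
        print(p, N, round(n_s, 4), round(r_pred, 4))
# phi^2 predicts r between 0.133 (N=60) and 0.16 (N=50),
# far above the observed limit r < 0.032; even phi^(2/3)
# predicts r ~ 0.044-0.053, above the limit.
```

Since $r = 4p/N$ exceeds $0.032$ for all these exponents at $50 \leq N \leq 60$, the whole family of simple monomial potentials is in tension with the joint constraint.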
\section{\label{sec:conclusion}Conclusions}
We have derived constraints on the tensor-to-scalar ratio $r$ using the two most sensitive data sets to date, namely BICEP3 and \textit{Planck}\ PR4. The BICEP/Keck Collaboration recently released a likelihood derived from their data up to the 2018 observing season, demonstrating a sensitivity on $r$ of $\sigma_r = 0.013$, covering the multipole range from $\ell = 20$ to 300~\cite{BK18}. Complementary \textit{Planck}\ PR4 data released in 2020~\cite{planck2020-LVII} provide information on the large scales, with a polarized likelihood covering the multipole range from $\ell =2$ to $\ell = 150$~\cite{planck_tensor}. This has poorer sensitivity, with $\sigma_r = 0.024$, but offers independent information, with the constraint on $r$ coming from a combination of $TT$, $TE$, and large-scale $E$ and $B$ data. It is interesting to note that constraints derived from temperature anisotropies alone are no longer competitive ($\sigma_r = 0.1$ \cite{planck_tensor}), since those data are dominated by cosmic variance.
The addition of \textit{Planck}\ data (including large angular scales in polarization, as well as small angular scales in $TT$ and $TE$) allows us to increase the sensitivity on $r$, as well as to break the degeneracy with the usual six parameters of the {$\rm{\Lambda CDM}$}\ model.
We find that other {$\rm{\Lambda CDM}$}\ parameters are not affected by the addition of BK18 data (Fig.~\ref{fig:lcdm}).
Combining \textit{Planck}\ PR4 and \BK{18}, we find an upper limit of $r<0.034$, which tightens to $r<0.032$ when adding BAO and CMB lensing data.
\begin{figure*}[htbp!]
\centering
\includegraphics[width=0.95\linewidth]{fig_LCDMr.pdf}
\caption{Constraint contours (at 68 and 95\,\% confidence) on parameters of a {$\rm{\Lambda CDM}$}+r model using \textit{Planck}\ (red) and \textit{Planck}+BK18 (black).}
\label{fig:lcdm}
\end{figure*}
Ground-based experiments (such as BICEP/Keck, the Simons Observatory \cite{SO}, and later CMB-S4 \cite{S4}) will observe the sky with ever deeper sensitivity, placing even stronger constraints on the tensor-to-scalar ratio $r$ (or, of course, detecting primordial $B$ modes).
However, improved measurements of the {$\rm{\Lambda CDM}$}\ parameters are essential to achieve strong constraints on $r$. In particular, the reionization optical depth requires measurements at very large angular scales, which are extremely difficult to make from the ground. The next generation of polarized CMB space missions (including LiteBIRD~\cite{litebird}) will be able to deliver $\tau$ with a precision dominated by cosmic variance.
\begin{acknowledgments}
\textit{Planck}\ is a project of the European Space Agency (ESA) with instruments provided by two scientific consortia funded by ESA member states and led by Principal Investigators from France and Italy, telescope reflectors provided through a collaboration between ESA and a scientific consortium led and funded by Denmark, and additional contributions from NASA (USA).
We gratefully acknowledge support from the CNRS/IN2P3 Computing Center for providing computing and data-processing resources needed for this work.
This research used resources of the National Energy Research Scientific Computing Center, which is supported by the Office of Science of the U.S. Department of Energy under Contract No.\ DE-AC02-05CH11231.
Part of the research was carried out at the Jet Propulsion Laboratory, California Institute of Technology, under a contract with the National Aeronautics and Space Administration (80NM0018D0004).
\end{acknowledgments}
\label{sec:introduction}
Gibbs processes are very popular point process models which are extensively used both in spatial statistics and in statistical physics, see e.g.\ \citep{moellerwaage04,ruelle69}. Especially the pairwise interaction processes allow a simple yet flexible modelling of point interactions. However, a major drawback of Gibbs processes is that in general there are no analytic formulas available for their intensities or higher order correlation functions. In a recent pair of articles \citep{baddeleynair12, bn12} Baddeley and Nair proposed an approximation method that is fast to compute and that Monte Carlo experiments show to be accurate. However, there are no theoretical results in this respect, and hence for most concrete models neither guarantees of accuracy nor a quantification of the approximation error.
The aim of the present paper is to derive rigorous lower and upper bounds for correlation functions and related quantities. These allow us to narrow down the true values quite precisely if the Gibbs process is not too far away from a Poisson process. Figure~\ref{fig:Strauss50} shows our bounds on the intensity for a two-dimensional Strauss process in dependence of its interaction parameter~$\gamma$. The pluses are estimates of the true intensity obtained as averages over the numbers of points in $[0,1]^2$ of 10,000 Strauss processes simulated by dominated coupling from the past. The point processes were simulated on a larger window in order to avoid noticeable edge effects. All simulations and numerical computations in this paper were performed in the R language \citep{r12} using the contributed package \texttt{spatstat} \citep{spatstat12}.
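The Strauss process underlying the figure has unnormalized density $u(\xi) = \beta^{n(\xi)} \gamma^{s_r(\xi)}$, where $n(\xi)$ is the number of points and $s_r(\xi)$ the number of point pairs closer than $r$. The paper's computations use R and \texttt{spatstat}; the sketch below is in Python purely for illustration, with the parameter values of the figure:

```python
import itertools, math

def strauss_log_density(points, beta=50.0, gamma=0.5, r=0.05):
    """Unnormalized log-density of a Strauss process,
    log u(xi) = n(xi) log(beta) + s_r(xi) log(gamma),
    where s_r counts point pairs at distance < r. Here gamma in (0, 1]
    penalizes close pairs; gamma = 1 recovers the Poisson process."""
    n = len(points)
    s_r = sum(1 for a, b in itertools.combinations(points, 2)
              if math.dist(a, b) < r)
    return n * math.log(beta) + s_r * math.log(gamma)

xi = [(0.10, 0.10), (0.12, 0.10), (0.90, 0.90)]  # one close pair
print(strauss_log_density(xi))                    # penalized by log(gamma)
print(strauss_log_density(xi, gamma=1.0))         # Poisson case, no penalty
```

Smaller values of $\gamma$ make close pairs less likely and hence reduce the intensity below the Poisson value $\beta$, which is exactly the behaviour visible in Figure~\ref{fig:Strauss50}.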
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[x=1pt,y=1pt]
\definecolor[named]{fillColor}{rgb}{1.00,1.00,1.00}
\path[use as bounding box,fill=fillColor,fill opacity=0.00] (0,0) rectangle (254.39,231.26);
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (254.39,231.26);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 50.64, 28.91) -- (232.66, 28.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 50.64, 28.91) -- ( 50.64, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 87.05, 28.91) -- ( 87.05, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (123.45, 28.91) -- (123.45, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (159.85, 28.91) -- (159.85, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (196.25, 28.91) -- (196.25, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (232.66, 28.91) -- (232.66, 22.91);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 50.64, 15.71) {0.0};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 87.05, 15.71) {0.2};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (123.45, 15.71) {0.4};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (159.85, 15.71) {0.6};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (196.25, 15.71) {0.8};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (232.66, 15.71) {1.0};
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 35.87) -- ( 43.36,209.85);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 35.87) -- ( 37.36, 35.87);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 64.86) -- ( 37.36, 64.86);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 93.86) -- ( 37.36, 93.86);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,122.86) -- ( 37.36,122.86);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,151.86) -- ( 37.36,151.86);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,180.85) -- ( 37.36,180.85);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,209.85) -- ( 37.36,209.85);
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 33.46) {20};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 62.45) {25};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 91.45) {30};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,120.45) {35};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,149.45) {40};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,178.44) {45};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,207.44) {50};
\end{scope}
\begin{scope}
\path[clip] ( 43.36, 28.91) rectangle (239.94,216.81);
\definecolor[named]{fillColor}{rgb}{0.83,0.83,0.83}
\path[fill=fillColor] ( 50.64,138.76) --
( 52.46,139.20) --
( 54.28,139.65) --
( 56.10,140.09) --
( 57.92,140.54) --
( 59.74,141.00) --
( 61.56,141.46) --
( 63.38,141.92) --
( 65.20,142.38) --
( 67.02,142.85) --
( 68.84,143.33) --
( 70.66,143.80) --
( 72.48,144.28) --
( 74.30,144.77) --
( 76.12,145.26) --
( 77.94,145.75) --
( 79.76,146.25) --
( 81.58,146.75) --
( 83.40,147.25) --
( 85.23,147.76) --
( 87.05,148.28) --
( 88.87,148.79) --
( 90.69,149.32) --
( 92.51,149.84) --
( 94.33,150.37) --
( 96.15,150.91) --
( 97.97,151.45) --
( 99.79,152.00) --
(101.61,152.55) --
(103.43,153.10) --
(105.25,153.66) --
(107.07,154.23) --
(108.89,154.80) --
(110.71,155.37) --
(112.53,155.95) --
(114.35,156.54) --
(116.17,157.13) --
(117.99,157.72) --
(119.81,158.32) --
(121.63,158.93) --
(123.45,159.54) --
(125.27,160.16) --
(127.09,160.78) --
(128.91,161.41) --
(130.73,162.04) --
(132.55,162.68) --
(134.37,163.33) --
(136.19,163.98) --
(138.01,164.64) --
(139.83,165.31) --
(141.65,165.98) --
(143.47,166.65) --
(145.29,167.34) --
(147.11,168.03) --
(148.93,168.72) --
(150.75,169.43) --
(152.57,170.14) --
(154.39,170.86) --
(156.21,171.58) --
(158.03,172.31) --
(159.85,173.05) --
(161.67,173.79) --
(163.49,174.55) --
(165.31,175.31) --
(167.13,176.08) --
(168.95,176.85) --
(170.77,177.63) --
(172.59,178.43) --
(174.41,179.22) --
(176.23,180.03) --
(178.05,180.85) --
(179.87,181.67) --
(181.69,182.50) --
(183.51,183.35) --
(185.33,184.20) --
(187.15,185.05) --
(188.97,185.92) --
(190.79,186.80) --
(192.61,187.69) --
(194.43,188.58) --
(196.25,189.49) --
(198.07,190.40) --
(199.89,191.33) --
(201.71,192.26) --
(203.53,193.21) --
(205.35,194.16) --
(207.17,195.13) --
(208.99,196.10) --
(210.81,197.09) --
(212.63,198.09) --
(214.45,199.10) --
(216.27,200.12) --
(218.09,201.15) --
(219.91,202.20) --
(221.74,203.25) --
(223.56,204.32) --
(225.38,205.40) --
(227.20,206.49) --
(229.02,207.60) --
(230.84,208.72) --
(232.66,209.85) --
(232.66,209.85) --
(230.84,208.72) --
(229.02,207.59) --
(227.20,206.47) --
(225.38,205.37) --
(223.56,204.27) --
(221.74,203.18) --
(219.91,202.09) --
(218.09,201.02) --
(216.27,199.95) --
(214.45,198.89) --
(212.63,197.84) --
(210.81,196.80) --
(208.99,195.77) --
(207.17,194.74) --
(205.35,193.72) --
(203.53,192.71) --
(201.71,191.70) --
(199.89,190.71) --
(198.07,189.72) --
(196.25,188.73) --
(194.43,187.76) --
(192.61,186.79) --
(190.79,185.83) --
(188.97,184.88) --
(187.15,183.93) --
(185.33,182.99) --
(183.51,182.05) --
(181.69,181.13) --
(179.87,180.20) --
(178.05,179.29) --
(176.23,178.38) --
(174.41,177.48) --
(172.59,176.58) --
(170.77,175.69) --
(168.95,174.81) --
(167.13,173.93) --
(165.31,173.06) --
(163.49,172.20) --
(161.67,171.34) --
(159.85,170.49) --
(158.03,169.64) --
(156.21,168.80) --
(154.39,167.96) --
(152.57,167.13) --
(150.75,166.30) --
(148.93,165.48) --
(147.11,164.67) --
(145.29,163.86) --
(143.47,163.06) --
(141.65,162.26) --
(139.83,161.47) --
(138.01,160.68) --
(136.19,159.90) --
(134.37,159.12) --
(132.55,158.35) --
(130.73,157.58) --
(128.91,156.82) --
(127.09,156.06) --
(125.27,155.30) --
(123.45,154.56) --
(121.63,153.81) --
(119.81,153.07) --
(117.99,152.34) --
(116.17,151.61) --
(114.35,150.89) --
(112.53,150.16) --
(110.71,149.45) --
(108.89,148.74) --
(107.07,148.03) --
(105.25,147.33) --
(103.43,146.63) --
(101.61,145.93) --
( 99.79,145.24) --
( 97.97,144.56) --
( 96.15,143.88) --
( 94.33,143.20) --
( 92.51,142.53) --
( 90.69,141.86) --
( 88.87,141.19) --
( 87.05,140.53) --
( 85.23,139.87) --
( 83.40,139.22) --
( 81.58,138.57) --
( 79.76,137.92) --
( 77.94,137.28) --
( 76.12,136.64) --
( 74.30,136.01) --
( 72.48,135.38) --
( 70.66,134.75) --
( 68.84,134.13) --
( 67.02,133.51) --
( 65.20,132.89) --
( 63.38,132.28) --
( 61.56,131.67) --
( 59.74,131.06) --
( 57.92,130.46) --
( 56.10,129.86) --
( 54.28,129.27) --
( 52.46,128.68) --
( 50.64,128.09) --
cycle;
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.2pt,line join=round,line cap=round] ( 50.64,138.76) --
( 52.46,139.20) --
( 54.28,139.65) --
( 56.10,140.09) --
( 57.92,140.54) --
( 59.74,141.00) --
( 61.56,141.46) --
( 63.38,141.92) --
( 65.20,142.38) --
( 67.02,142.85) --
( 68.84,143.33) --
( 70.66,143.80) --
( 72.48,144.28) --
( 74.30,144.77) --
( 76.12,145.26) --
( 77.94,145.75) --
( 79.76,146.25) --
( 81.58,146.75) --
( 83.40,147.25) --
( 85.23,147.76) --
( 87.05,148.28) --
( 88.87,148.79) --
( 90.69,149.32) --
( 92.51,149.84) --
( 94.33,150.37) --
( 96.15,150.91) --
( 97.97,151.45) --
( 99.79,152.00) --
(101.61,152.55) --
(103.43,153.10) --
(105.25,153.66) --
(107.07,154.23) --
(108.89,154.80) --
(110.71,155.37) --
(112.53,155.95) --
(114.35,156.54) --
(116.17,157.13) --
(117.99,157.72) --
(119.81,158.32) --
(121.63,158.93) --
(123.45,159.54) --
(125.27,160.16) --
(127.09,160.78) --
(128.91,161.41) --
(130.73,162.04) --
(132.55,162.68) --
(134.37,163.33) --
(136.19,163.98) --
(138.01,164.64) --
(139.83,165.31) --
(141.65,165.98) --
(143.47,166.65) --
(145.29,167.34) --
(147.11,168.03) --
(148.93,168.72) --
(150.75,169.43) --
(152.57,170.14) --
(154.39,170.86) --
(156.21,171.58) --
(158.03,172.31) --
(159.85,173.05) --
(161.67,173.79) --
(163.49,174.55) --
(165.31,175.31) --
(167.13,176.08) --
(168.95,176.85) --
(170.77,177.63) --
(172.59,178.43) --
(174.41,179.22) --
(176.23,180.03) --
(178.05,180.85) --
(179.87,181.67) --
(181.69,182.50) --
(183.51,183.35) --
(185.33,184.20) --
(187.15,185.05) --
(188.97,185.92) --
(190.79,186.80) --
(192.61,187.69) --
(194.43,188.58) --
(196.25,189.49) --
(198.07,190.40) --
(199.89,191.33) --
(201.71,192.26) --
(203.53,193.21) --
(205.35,194.16) --
(207.17,195.13) --
(208.99,196.10) --
(210.81,197.09) --
(212.63,198.09) --
(214.45,199.10) --
(216.27,200.12) --
(218.09,201.15) --
(219.91,202.20) --
(221.74,203.25) --
(223.56,204.32) --
(225.38,205.40) --
(227.20,206.49) --
(229.02,207.60) --
(230.84,208.72) --
(232.66,209.85);
\path[draw=drawColor,line width= 0.2pt,line join=round,line cap=round] ( 50.64,128.09) --
( 52.46,128.68) --
( 54.28,129.27) --
( 56.10,129.86) --
( 57.92,130.46) --
( 59.74,131.06) --
( 61.56,131.67) --
( 63.38,132.28) --
( 65.20,132.89) --
( 67.02,133.51) --
( 68.84,134.13) --
( 70.66,134.75) --
( 72.48,135.38) --
( 74.30,136.01) --
( 76.12,136.64) --
( 77.94,137.28) --
( 79.76,137.92) --
( 81.58,138.57) --
( 83.40,139.22) --
( 85.23,139.87) --
( 87.05,140.53) --
( 88.87,141.19) --
( 90.69,141.86) --
( 92.51,142.53) --
( 94.33,143.20) --
( 96.15,143.88) --
( 97.97,144.56) --
( 99.79,145.24) --
(101.61,145.93) --
(103.43,146.63) --
(105.25,147.33) --
(107.07,148.03) --
(108.89,148.74) --
(110.71,149.45) --
(112.53,150.16) --
(114.35,150.89) --
(116.17,151.61) --
(117.99,152.34) --
(119.81,153.07) --
(121.63,153.81) --
(123.45,154.56) --
(125.27,155.30) --
(127.09,156.06) --
(128.91,156.82) --
(130.73,157.58) --
(132.55,158.35) --
(134.37,159.12) --
(136.19,159.90) --
(138.01,160.68) --
(139.83,161.47) --
(141.65,162.26) --
(143.47,163.06) --
(145.29,163.86) --
(147.11,164.67) --
(148.93,165.48) --
(150.75,166.30) --
(152.57,167.13) --
(154.39,167.96) --
(156.21,168.80) --
(158.03,169.64) --
(159.85,170.49) --
(161.67,171.34) --
(163.49,172.20) --
(165.31,173.06) --
(167.13,173.93) --
(168.95,174.81) --
(170.77,175.69) --
(172.59,176.58) --
(174.41,177.48) --
(176.23,178.38) --
(178.05,179.29) --
(179.87,180.20) --
(181.69,181.13) --
(183.51,182.05) --
(185.33,182.99) --
(187.15,183.93) --
(188.97,184.88) --
(190.79,185.83) --
(192.61,186.79) --
(194.43,187.76) --
(196.25,188.73) --
(198.07,189.72) --
(199.89,190.71) --
(201.71,191.70) --
(203.53,192.71) --
(205.35,193.72) --
(207.17,194.74) --
(208.99,195.77) --
(210.81,196.80) --
(212.63,197.84) --
(214.45,198.89) --
(216.27,199.95) --
(218.09,201.02) --
(219.91,202.09) --
(221.74,203.18) --
(223.56,204.27) --
(225.38,205.37) --
(227.20,206.47) --
(229.02,207.59) --
(230.84,208.72) --
(232.66,209.85);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 50.64,129.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 59.74,132.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 68.84,136.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 77.94,139.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 87.05,142.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 96.15,146.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (105.25,148.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (114.35,152.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (123.45,155.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (132.55,159.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (141.65,162.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (150.75,166.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (159.85,170.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (168.95,174.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (178.05,178.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (187.15,182.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (196.25,187.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (205.35,192.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (214.45,196.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (223.56,203.13) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (232.66,208.13) {+};
\end{scope}
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (254.39,231.26);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 28.91) --
(239.94, 28.91) --
(239.94,216.81) --
( 43.36,216.81) --
( 43.36, 28.91);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (141.65, 2.51) {$\gamma$};
\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 16.96,122.86) {$\lambda$};
\end{scope}
\end{tikzpicture}
\caption{\label{fig:Strauss50} Bounds on the intensities of two-dimensional Strauss processes with $\beta=50$, $r=0.05$ and values of $\gamma$ ranging from $0$ to $1$. The pluses are estimates of the intensities based on 10,000 simulations each.}
\end{center}
\end{figure}
Our main result, Theorem~\ref{thm:bounds}, more generally gives bounds on the probability generating functional of a Gibbs process. Let $\Xi$ be an arbitrary point process on $\mathbb{R}^d$. The \emph{probability generating functional (p.g.fl.)} $\Psi_{\Xi}$ is defined as
\begin{equation}
\label{eq:pgfl}
\Psi_\Xi(g)={\mathbb E} \Bigl(\prod_{y\in \Xi}g(y)\Bigr)
\end{equation}
for any measurable function $g\colon \mathbb{R}^d\to [0,1]$ for which $1-g$ has bounded support,
see e.g.\ \cite[p.\,59]{dvj08} for details.
Many statistics of point processes, such as the \emph{empty space function} ($F$ function), contain expectations as in \eqref{eq:pgfl}. For pairwise interaction processes the situation is even better. By the Georgii--Nguyen--Zessin equation~\eqref{eq:gnz} the \emph{nearest neighbour function} ($G$ function), Ripley's $K$ function and the correlation functions of all orders can be rewritten using the p.g.fl.
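For a homogeneous Poisson process the p.g.fl.\ is available in closed form, $\Psi_{\mathrm{H}_\lambda}(g)=\exp\bigl(-\lambda\int_{\mathbb{R}^d}(1-g(x))\,dx\bigr)$, which makes it a convenient sanity check for Monte Carlo estimates of expectations as in \eqref{eq:pgfl}. The following sketch is our own illustration; the window, test function and sample sizes are arbitrary choices, not taken from the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def pgfl_poisson_mc(lam, g, window=1.0, reps=20000):
    """Monte Carlo estimate of E prod_{y in Xi} g(y) for a homogeneous
    Poisson process Xi with intensity lam on the square [0, window]^2."""
    area = window ** 2
    vals = np.empty(reps)
    for i in range(reps):
        n = rng.poisson(lam * area)                  # number of points
        pts = rng.uniform(0.0, window, size=(n, 2))  # their locations
        vals[i] = g(pts).prod() if n > 0 else 1.0    # empty product = 1
    return vals.mean()

# 1 - g is supported on a disc of radius r around the window centre
r = 0.2
def g(pts):
    d = np.hypot(pts[:, 0] - 0.5, pts[:, 1] - 0.5)
    return np.where(d <= r, 0.5, 1.0)

lam = 10.0
G = 0.5 * np.pi * r ** 2        # integral of 1 - g
exact = np.exp(-lam * G)        # closed-form Poisson p.g.fl.
estimate = pgfl_poisson_mc(lam, g)
```

Since the disc lies well inside the window, truncating the Poisson process to the window does not affect the product, and the Monte Carlo estimate typically agrees with the closed form to about two decimal places.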
The idea for proving Theorem~\ref{thm:bounds} is to replace the Gibbs process $\Xi$ in \eqref{eq:pgfl} by a suitable Poisson process and bound the error using Stein's method.
The rest of the paper is organised as follows. In Section~\ref{sec:pre} we introduce some notation and state the main result. In Section~\ref{sec:bounds-int} we provide bounds on the intensity, and in Section~\ref{sec:s-stat} bounds on other summary statistics are derived. Section~\ref{sec:proofs} contains the proof of the main result.
\section{Preliminaries and main result}
\label{sec:pre}
Let $(\mathfrak{N},\mathcal{N})$ denote the space of locally finite point measures on $\mathbb{R}^d$ equipped with the $\sigma$-algebra generated by the evaluation maps $[\mathfrak{N}\ni\xi\mapsto \xi(A) \in \mathbb{Z}_{+}]$ for bounded Borel sets $A\subset \mathbb{R}^d$. A point process is just an $\mathfrak{N}$-valued random element. We assume all point processes to be \emph{simple}, i.e.\ we do not allow multiple points. Thus we can use set notation, i.e.\ $x\in \xi$ means that the point $x$ lies in the support of the measure $\xi$.
In spatial statistics point processes are usually defined on a bounded window $\mathcal{W}\subset \mathbb{R}^d$. Let $\mathfrak{N}\vert_\mathcal{W}$ denote the restriction of $\mathfrak{N}$ to $\mathcal{W}$. A point process $\Xi$ on $\mathcal{W}$ is called a \emph{Gibbs process} if it has a hereditary density $u$ with respect to the distribution of the Poisson process with unit intensity. Hereditarity means that $u(\xi)>0$ implies $u(\eta)>0$ for all subconfigurations
$\eta\subset \xi$. By hereditarity we can define the \emph{conditional intensity} as
\begin{equation}
\label{eq:cond-int}
\lambda(x\mid \xi)=\frac{u(\xi\cup\{x\})}{u(\xi)},
\end{equation}
where $0/0=0$. Roughly speaking, the conditional intensity is the infinitesimal probability that $\Xi$ has a point at $x$, given that $\Xi$ coincides with the configuration $\xi$ everywhere else. Furthermore $\lambda(\cdot\, \vert \, \cdot)$ uniquely characterises the distribution of $\Xi$, since by \eqref{eq:cond-int} one can recursively recover an unnormalised density. It is well-known that the conditional intensity is the $dx \otimes \mathscr{L}(\Xi)$-almost everywhere unique product measurable function that satisfies the \emph{Georgii--Nguyen--Zessin equation}
\begin{equation}
\label{eq:gnz}
\mathbb{E} \biggl( \int_{\mathcal{W}} h(x, \Xi\setminus\{x\}) \; \Xi(\mathrm{d} x) \biggr) = \int_{\mathcal{W}} \mathbb{E} \bigl( h(x, \Xi) \lambda(x \, \vert \, \Xi) \bigr) \; dx
\end{equation}
for every measurable $h \colon \mathcal{W} \times \mathfrak{N}\vert_\mathcal{W} \to \mathbb{R}_{+}$.
So far $\lambda(\cdot\, \vert \,\cdot)$ is only a function on $\mathcal{W}\times \mathfrak{N}\vert_\mathcal{W}$, but in many cases there exists a natural extension to the whole space, which we shall also denote by $\lambda(\cdot\, \vert \,\cdot)$.
One way to generalise Gibbs processes to the whole space $\mathbb{R}^d$ is then by the so-called \emph{integral characterisation}. A point process $\Xi$ on $\mathbb{R}^d$ is a Gibbs process corresponding to the conditional intensity $\lambda(\cdot\, \vert \, \cdot)$ if it satisfies \eqref{eq:gnz} with $\mathcal{W}$ replaced by $\mathbb{R}^d$ for all measurable $h \colon \mathbb{R}^d\times \mathfrak{N} \to \mathbb{R}_{+}$; see \citep[p.\,95]{moellerwaage04}, or \citep{nguyenzessin79} for a more rigorous presentation. Unlike in the case of a bounded domain, $\Xi$ may not be uniquely determined by~\eqref{eq:gnz}. For the rest of this paper we will only deal with the conditional intensity, i.e.\ if we say that a result holds for a Gibbs process with conditional intensity $\lambda(\cdot\, \vert \, \cdot)$, we mean that it holds for \emph{all} processes corresponding to this conditional intensity.
A Gibbs process $\Xi$ is said to be a \emph{pairwise interaction process} if its conditional intensity is of the form
\begin{equation}
\label{eq:pip}
\lambda(x\, \vert \, \xi)=\beta \prod_{y \in \xi}\varphi (x,y)
\end{equation}
for a constant $\beta>0$ and a symmetric \emph{interaction function} $\varphi$. We denote the distribution of $\Xi$ by $\pip(\beta,\varphi)$. The process $\Xi$ is called \emph{inhibitory} if $\varphi \le 1$ and it is said to have a \emph{finite interaction range} if $1-\varphi$ is compactly supported. $\Xi$ is \emph{stationary} if $\varphi(x,y)$ depends only on the difference $x-y$; we then write $\varphi(x,y)=\varphi(x-y)$. If in addition the interaction function is rotation invariant, i.e.\ $\varphi(x)=\varphi(\norm{x})$, then $\Xi$ is called \emph{isotropic}. For conditions on $\varphi$ ensuring the existence of $\Xi$ we refer the reader to \citep{ruelle69}.
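For instance, for the interaction function $\varphi(x,y)=\gamma$ if $\|x-y\|\le r$ and $\varphi(x,y)=1$ otherwise (a Strauss process, discussed in detail below), the product in \eqref{eq:pip} collapses to $\beta\gamma^{s}$, where $s$ is the number of points of $\xi$ within distance $r$ of $x$. A minimal sketch of our own; the helper name and test configuration are illustrative.

```python
import numpy as np

def strauss_cond_intensity(x, xi, beta, gamma, r):
    """Conditional intensity lambda(x | xi) = beta * prod_{y in xi} phi(x, y)
    for the Strauss interaction: phi = gamma within distance r, 1 otherwise."""
    if len(xi) == 0:
        return beta                                   # empty product
    d = np.linalg.norm(np.asarray(xi, float) - np.asarray(x, float), axis=1)
    return beta * gamma ** int(np.sum(d <= r))

xi = [(0.10, 0.10), (0.12, 0.11), (0.90, 0.90)]
lam = strauss_cond_intensity((0.11, 0.10), xi, beta=50.0, gamma=0.5, r=0.05)
# two of the three points lie within r, so lam = 50 * 0.5**2 = 12.5
```

Since $\gamma\le 1$ the value never exceeds $\beta$; this is exactly the local stability property with constant $c^*=\beta$ used later on.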
If $\Xi$ is a general point process, its \emph{expectation measure or first order moment measure} ${\mathbb E} \Xi$ on $\mathbb{R}^d$ is simply given by $({\mathbb E}\Xi)(A) = {\mathbb E}(\Xi(A))$ for every Borel set $A \subset \mathbb{R}^d$. For $k \in \mathbb{N}$ the \emph{$k$-th order factorial moment measure} of $\Xi$ is the expectation measure of the factorial product measure
\begin{equation*}
\Xi^{[k]} = \sum_{\substack{X_1,\ldots,X_k \in \Xi \\[2pt] \text{pairwise different}}} \delta_{(X_1,\ldots,X_k)}
\end{equation*}
on $(\mathbb{R}^d)^k$. Any moment measure is said to \emph{exist} if it is locally finite.
The \emph{intensity (function)} $\lambda(x)$ of a Gibbs process $\Xi$ is the density of the first moment measure of $\Xi$ with respect to Lebesgue measure, provided the first moment measure exists. For a bounded $A\subset \mathbb{R}^d$, Equation~\eqref{eq:gnz} yields
\begin{equation*}
{\mathbb E} \Xi(A)=\int_A {\mathbb E}\big(\lambda(x\, \vert \, \Xi)\big)\; dx,
\end{equation*}
hence the intensity exists and is given by $\lambda(x)={\mathbb E}(\lambda(x\, \vert \, \Xi))$. For stationary processes the intensity is constant and we just write $\lambda$. For a stationary pairwise interaction process we get
\begin{equation}
\label{eq:lambda-pip}
\lambda={\mathbb E}\big(\lambda(0\, \vert \, \Xi)\big)=\beta \hspace*{1.5pt} {\mathbb E}\Bigl(\prod_{y\in \Xi}\varphi(y)\Bigr)=\beta \hspace*{1.5pt} \Psi_\Xi( \varphi).
\end{equation}
In a similar manner it is possible to obtain the densities of the higher order factorial moment measures, the so-called \emph{correlation functions}; see \citep{mw07,mase90}. For a stationary process $\Xi\sim\pip(\beta,\varphi)$ the $k$-th correlation function is given by
\begin{align}
\lambda_k(x_1,\dots,x_k)&=\beta^k\Bigl(\prod_{1\le i<j\le k}\varphi(x_i-x_j)\Bigr)\hspace*{1.5pt} {\mathbb E}\Bigl(\prod_{y\in \Xi}\varphi(y-x_1)\cdots\varphi(y-x_k)\Bigr) \nonumber \\
&=\beta^k\Bigl(\prod_{1\le i<j\le k}\varphi(x_i-x_j)\Bigr)\hspace*{1.5pt} \Psi_\Xi\big(\varphi(\cdot-x_1)\cdots\varphi(\cdot-x_k)\big).
\label{eq:corr-fun}
\end{align}
A frequently used function in spatial statistics is the \emph{pair correlation function} which is defined as
\begin{equation}
\label{eq:pcf}
\rho(x,y)=\frac{\lambda_2(x,y)}{\lambda(x)\lambda(y)}.
\end{equation}
In the stationary isotropic case this simplifies to
$\rho(s)=\lambda_2(x,y)/\lambda^2$, where $s=\|x-y\|$.
For our results we need a stability condition for the Gibbs processes. A Gibbs process $\Xi$ is called \emph{locally stable} if there exists a non-negative function $c^*$ such that $\int_\mathcal{W} c^*(x)\,dx < \infty$ for all bounded domains $\mathcal{W}\subset \mathbb{R}^d$ and the conditional intensity satisfies
\begin{equation}
\label{eq:loc-s}
\lambda(x\, \vert \, \xi)\le c^*(x),
\end{equation}
for all $x \in \mathbb{R}^d$ and all $\xi \in \mathfrak{N}$.
For the rest of the paper we restrict ourselves to stationary point processes on $\mathbb{R}^d$ and require $c^{*}$ to be a constant. The following is the key theorem for obtaining the results in Sections~\ref{sec:bounds-int} and~\ref{sec:s-stat}. Its proof is the subject of Section~\ref{sec:proofs}.
\begin{theorem}
\label{thm:bounds}
Let $\Xi$ be a stationary locally stable Gibbs process with intensity $\lambda$ and local stability constant $c^*$, and let $g\colon \mathbb{R}^d\to [0,1]$ be a function for which $1-g$ has bounded support. Then
\begin{equation}
\label{eq:bounds}
1-\lambda G \le {\mathbb E}\Bigl(\prod_{y\in \Xi}g(y)\Bigr) \le 1-\frac{\lambda}{c^*}\big(1-e^{-c^*G}\big),
\end{equation}
where $G=\int_{\mathbb{R}^d} 1-g(x)\, dx$.
\end{theorem}
\section{Bounds on the intensity}
\label{sec:bounds-int}
For the intensity of an inhibitory pairwise interaction process we immediately obtain the following result from Theorem~\ref{thm:bounds}.
\begin{theorem}
\label{thm:lambdaPIP}
Let $\Xi\sim \pip(\beta,\varphi)$ be inhibitory with finite interaction range. Then
\begin{equation}
\label{eq:lambdaPIP}
\frac{\beta}{1+\beta G}\le \lambda \le \frac{\beta}{2-e^{-\beta G}},
\end{equation}
where $G=\int_{\mathbb{R}^d}1-\varphi(x)\,dx$.
\end{theorem}
\begin{proof}
Recall from \eqref{eq:lambda-pip} that $\lambda=\beta \hspace*{1.5pt} {\mathbb E} \prod_{y\in \Xi}\varphi(y)$ and use $c^*=\beta$. Theorem~\ref{thm:bounds} then yields
\begin{equation*}
1-\lambda G \le \frac{\lambda}{\beta}\le 1-\frac{\lambda}{\beta}\big(1-e^{-\beta G}\big)
\end{equation*}
which can be rearranged as \eqref{eq:lambdaPIP}.
\end{proof}
\begin{remark}
The lower bound of \eqref{eq:lambdaPIP} can also be found in \cite[p.\,96]{ruelle69} with the restriction $\beta G <e^{-1}$, whereas our inequality holds for all values of $\beta$ and $G$.
\end{remark}
\begin{example}
Let $\Xi$ be a Strauss process, i.e.
\begin{equation*}
\varphi(x) =\begin{cases} \gamma \quad &\text{if} \quad \|x\| \le r\\
1 \quad &\text{if} \quad \|x\|>r \end{cases}
\end{equation*}
for some parameters $r>0$ and $0\le \gamma \le 1$. Then $G=(1-\gamma)\alpha_dr^d$, where $\alpha_d$ denotes the volume of the unit ball. Figure~\ref{fig:Strauss50} shows that for a reasonable choice of the parameters $(\beta,r,\gamma)$ the bounds on $\lambda$ are quite good. The maximal relative error between the bounds and the simulated values is about 3.5\%.
\end{example}
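The bounds of Theorem~\ref{thm:lambdaPIP} are trivial to evaluate numerically for the Strauss process. The following sketch is our own; the function name and the hard-core parameter choice are illustrative, while $\beta$ and $r$ match Figure~\ref{fig:Strauss50}.

```python
import math

def strauss_intensity_bounds(beta, r, gamma, d=2):
    """Bounds beta/(1 + beta*G) <= lambda <= beta/(2 - exp(-beta*G)),
    where G = (1 - gamma) * alpha_d * r**d for the Strauss process."""
    alpha_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # volume of unit ball
    G = (1.0 - gamma) * alpha_d * r ** d
    lower = beta / (1.0 + beta * G)
    upper = beta / (2.0 - math.exp(-beta * G))
    return lower, upper

# hard-core case gamma = 0 with beta = 50, r = 0.05
lower, upper = strauss_intensity_bounds(50.0, 0.05, 0.0)
# gives roughly (35.9, 37.7); for gamma = 1 both bounds collapse to beta
```

For $\gamma=1$ we have $G=0$, so lower and upper bound both equal $\beta$, consistent with the Poisson case $\lambda=\beta$.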
\begin{figure}[!ht]
\begin{center}
\begin{tikzpicture}[x=1pt,y=1pt]
\definecolor[named]{fillColor}{rgb}{1.00,1.00,1.00}
\path[use as bounding box,fill=fillColor,fill opacity=0.00] (0,0) rectangle (254.39,231.26);
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (254.39,231.26);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 50.64, 28.91) -- (232.66, 28.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 50.64, 28.91) -- ( 50.64, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 87.05, 28.91) -- ( 87.05, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (123.45, 28.91) -- (123.45, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (159.85, 28.91) -- (159.85, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (196.25, 28.91) -- (196.25, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (232.66, 28.91) -- (232.66, 22.91);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 50.64, 15.71) {0.0};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 87.05, 15.71) {0.2};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (123.45, 15.71) {0.4};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (159.85, 15.71) {0.6};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (196.25, 15.71) {0.8};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (232.66, 15.71) {1.0};
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 35.87) -- ( 43.36,209.85);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 35.87) -- ( 37.36, 35.87);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 79.36) -- ( 37.36, 79.36);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,122.86) -- ( 37.36,122.86);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,166.35) -- ( 37.36,166.35);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,209.85) -- ( 37.36,209.85);
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 33.46) {20};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 76.95) {40};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,120.45) {60};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,163.94) {80};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,207.44) {100};
\end{scope}
\begin{scope}
\path[clip] ( 43.36, 28.91) rectangle (239.94,216.81);
\definecolor[named]{fillColor}{rgb}{0.83,0.83,0.83}
\path[fill=fillColor] ( 50.64,133.22) --
( 52.46,133.55) --
( 54.28,133.88) --
( 56.10,134.22) --
( 57.92,134.56) --
( 59.74,134.91) --
( 61.56,135.26) --
( 63.38,135.61) --
( 65.20,135.97) --
( 67.02,136.33) --
( 68.84,136.70) --
( 70.66,137.08) --
( 72.48,137.45) --
( 74.30,137.84) --
( 76.12,138.23) --
( 77.94,138.62) --
( 79.76,139.02) --
( 81.58,139.42) --
( 83.40,139.83) --
( 85.23,140.25) --
( 87.05,140.67) --
( 88.87,141.09) --
( 90.69,141.53) --
( 92.51,141.97) --
( 94.33,142.41) --
( 96.15,142.86) --
( 97.97,143.32) --
( 99.79,143.78) --
(101.61,144.25) --
(103.43,144.73) --
(105.25,145.21) --
(107.07,145.70) --
(108.89,146.20) --
(110.71,146.70) --
(112.53,147.22) --
(114.35,147.73) --
(116.17,148.26) --
(117.99,148.80) --
(119.81,149.34) --
(121.63,149.89) --
(123.45,150.45) --
(125.27,151.02) --
(127.09,151.59) --
(128.91,152.18) --
(130.73,152.77) --
(132.55,153.38) --
(134.37,153.99) --
(136.19,154.61) --
(138.01,155.24) --
(139.83,155.88) --
(141.65,156.54) --
(143.47,157.20) --
(145.29,157.87) --
(147.11,158.55) --
(148.93,159.25) --
(150.75,159.96) --
(152.57,160.67) --
(154.39,161.40) --
(156.21,162.15) --
(158.03,162.90) --
(159.85,163.67) --
(161.67,164.45) --
(163.49,165.24) --
(165.31,166.05) --
(167.13,166.87) --
(168.95,167.71) --
(170.77,168.56) --
(172.59,169.43) --
(174.41,170.31) --
(176.23,171.20) --
(178.05,172.12) --
(179.87,173.05) --
(181.69,174.00) --
(183.51,174.96) --
(185.33,175.94) --
(187.15,176.95) --
(188.97,177.97) --
(190.79,179.01) --
(192.61,180.07) --
(194.43,181.15) --
(196.25,182.25) --
(198.07,183.37) --
(199.89,184.52) --
(201.71,185.69) --
(203.53,186.88) --
(205.35,188.10) --
(207.17,189.34) --
(208.99,190.61) --
(210.81,191.90) --
(212.63,193.23) --
(214.45,194.58) --
(216.27,195.96) --
(218.09,197.37) --
(219.91,198.81) --
(221.74,200.28) --
(223.56,201.79) --
(225.38,203.33) --
(227.20,204.90) --
(229.02,206.51) --
(230.84,208.16) --
(232.66,209.85) --
(232.66,209.85) --
(230.84,208.16) --
(229.02,206.49) --
(227.20,204.84) --
(225.38,203.23) --
(223.56,201.63) --
(221.74,200.06) --
(219.91,198.52) --
(218.09,196.99) --
(216.27,195.49) --
(214.45,194.01) --
(212.63,192.56) --
(210.81,191.12) --
(208.99,189.70) --
(207.17,188.31) --
(205.35,186.93) --
(203.53,185.57) --
(201.71,184.23) --
(199.89,182.91) --
(198.07,181.61) --
(196.25,180.33) --
(194.43,179.06) --
(192.61,177.81) --
(190.79,176.58) --
(188.97,175.36) --
(187.15,174.16) --
(185.33,172.97) --
(183.51,171.80) --
(181.69,170.65) --
(179.87,169.51) --
(178.05,168.38) --
(176.23,167.27) --
(174.41,166.17) --
(172.59,165.09) --
(170.77,164.02) --
(168.95,162.96) --
(167.13,161.91) --
(165.31,160.88) --
(163.49,159.86) --
(161.67,158.86) --
(159.85,157.86) --
(158.03,156.88) --
(156.21,155.91) --
(154.39,154.95) --
(152.57,154.00) --
(150.75,153.06) --
(148.93,152.13) --
(147.11,151.22) --
(145.29,150.31) --
(143.47,149.41) --
(141.65,148.53) --
(139.83,147.65) --
(138.01,146.79) --
(136.19,145.93) --
(134.37,145.08) --
(132.55,144.25) --
(130.73,143.42) --
(128.91,142.60) --
(127.09,141.79) --
(125.27,140.99) --
(123.45,140.19) --
(121.63,139.41) --
(119.81,138.63) --
(117.99,137.86) --
(116.17,137.10) --
(114.35,136.35) --
(112.53,135.60) --
(110.71,134.87) --
(108.89,134.14) --
(107.07,133.42) --
(105.25,132.70) --
(103.43,131.99) --
(101.61,131.29) --
( 99.79,130.60) --
( 97.97,129.91) --
( 96.15,129.23) --
( 94.33,128.56) --
( 92.51,127.89) --
( 90.69,127.23) --
( 88.87,126.58) --
( 87.05,125.93) --
( 85.23,125.29) --
( 83.40,124.66) --
( 81.58,124.03) --
( 79.76,123.40) --
( 77.94,122.79) --
( 76.12,122.18) --
( 74.30,121.57) --
( 72.48,120.97) --
( 70.66,120.38) --
( 68.84,119.79) --
( 67.02,119.20) --
( 65.20,118.62) --
( 63.38,118.05) --
( 61.56,117.48) --
( 59.74,116.92) --
( 57.92,116.36) --
( 56.10,115.81) --
( 54.28,115.26) --
( 52.46,114.72) --
( 50.64,114.18) --
cycle;
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 50.64,126.40) --
( 52.46,126.84) --
( 54.28,127.28) --
( 56.10,127.73) --
( 57.92,128.18) --
( 59.74,128.64) --
( 61.56,129.10) --
( 63.38,129.56) --
( 65.20,130.03) --
( 67.02,130.50) --
( 68.84,130.97) --
( 70.66,131.45) --
( 72.48,131.94) --
( 74.30,132.43) --
( 76.12,132.92) --
( 77.94,133.42) --
( 79.76,133.93) --
( 81.58,134.44) --
( 83.40,134.95) --
( 85.23,135.47) --
( 87.05,135.99) --
( 88.87,136.52) --
( 90.69,137.05) --
( 92.51,137.59) --
( 94.33,138.14) --
( 96.15,138.69) --
( 97.97,139.25) --
( 99.79,139.81) --
(101.61,140.38) --
(103.43,140.95) --
(105.25,141.53) --
(107.07,142.12) --
(108.89,142.71) --
(110.71,143.31) --
(112.53,143.92) --
(114.35,144.53) --
(116.17,145.15) --
(117.99,145.78) --
(119.81,146.41) --
(121.63,147.05) --
(123.45,147.70) --
(125.27,148.35) --
(127.09,149.02) --
(128.91,149.69) --
(130.73,150.37) --
(132.55,151.06) --
(134.37,151.75) --
(136.19,152.46) --
(138.01,153.17) --
(139.83,153.89) --
(141.65,154.62) --
(143.47,155.36) --
(145.29,156.11) --
(147.11,156.87) --
(148.93,157.64) --
(150.75,158.42) --
(152.57,159.21) --
(154.39,160.01) --
(156.21,160.82) --
(158.03,161.64) --
(159.85,162.47) --
(161.67,163.32) --
(163.49,164.17) --
(165.31,165.04) --
(167.13,165.92) --
(168.95,166.82) --
(170.77,167.72) --
(172.59,168.64) --
(174.41,169.58) --
(176.23,170.53) --
(178.05,171.49) --
(179.87,172.47) --
(181.69,173.46) --
(183.51,174.47) --
(185.33,175.49) --
(187.15,176.54) --
(188.97,177.60) --
(190.79,178.67) --
(192.61,179.77) --
(194.43,180.88) --
(196.25,182.01) --
(198.07,183.16) --
(199.89,184.34) --
(201.71,185.53) --
(203.53,186.75) --
(205.35,187.98) --
(207.17,189.25) --
(208.99,190.53) --
(210.81,191.84) --
(212.63,193.18) --
(214.45,194.54) --
(216.27,195.93) --
(218.09,197.35) --
(219.91,198.79) --
(221.74,200.27) --
(223.56,201.78) --
(225.38,203.32) --
(227.20,204.90) --
(229.02,206.51) --
(230.84,208.16) --
(232.66,209.85);
\path[draw=drawColor,line width= 0.4pt,dash pattern=on 1pt off 3pt ,line join=round,line cap=round] ( 50.84, 0.00) --
( 52.46, 61.39) --
( 54.28, 67.55) --
( 56.10, 71.86) --
( 57.92, 75.30) --
( 59.74, 78.25) --
( 61.56, 80.87) --
( 63.38, 83.24) --
( 65.20, 85.43) --
( 67.02, 87.48) --
( 68.84, 89.41) --
( 70.66, 91.25) --
( 72.48, 93.01) --
( 74.30, 94.70) --
( 76.12, 96.33) --
( 77.94, 97.91) --
( 79.76, 99.44) --
( 81.58,100.94) --
( 83.40,102.40) --
( 85.23,103.83) --
( 87.05,105.23) --
( 88.87,106.61) --
( 90.69,107.96) --
( 92.51,109.30) --
( 94.33,110.61) --
( 96.15,111.91) --
( 97.97,113.19) --
( 99.79,114.46) --
(101.61,115.72) --
(103.43,116.97) --
(105.25,118.21) --
(107.07,119.43) --
(108.89,120.65) --
(110.71,121.87) --
(112.53,123.07) --
(114.35,124.27) --
(116.17,125.46) --
(117.99,126.65) --
(119.81,127.84) --
(121.63,129.02) --
(123.45,130.20) --
(125.27,131.38) --
(127.09,132.55) --
(128.91,133.73) --
(130.73,134.90) --
(132.55,136.07) --
(134.37,137.24) --
(136.19,138.41) --
(138.01,139.59) --
(139.83,140.76) --
(141.65,141.93) --
(143.47,143.11) --
(145.29,144.29) --
(147.11,145.47) --
(148.93,146.65) --
(150.75,147.84) --
(152.57,149.03) --
(154.39,150.22) --
(156.21,151.42) --
(158.03,152.62) --
(159.85,153.83) --
(161.67,155.04) --
(163.49,156.26) --
(165.31,157.48) --
(167.13,158.71) --
(168.95,159.94) --
(170.77,161.18) --
(172.59,162.43) --
(174.41,163.69) --
(176.23,164.95) --
(178.05,166.22) --
(179.87,167.50) --
(181.69,168.78) --
(183.51,170.08) --
(185.33,171.38) --
(187.15,172.70) --
(188.97,174.02) --
(190.79,175.35) --
(192.61,176.70) --
(194.43,178.05) --
(196.25,179.42) --
(198.07,180.80) --
(199.89,182.19) --
(201.71,183.59) --
(203.53,185.01) --
(205.35,186.44) --
(207.17,187.88) --
(208.99,189.34) --
(210.81,190.81) --
(212.63,192.30) --
(214.45,193.80) --
(216.27,195.33) --
(218.09,196.86) --
(219.91,198.42) --
(221.74,199.99) --
(223.56,201.58) --
(225.38,203.20) --
(227.20,204.83) --
(229.02,206.48) --
(230.84,208.15) --
(232.66,209.85);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.64,118.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 59.74,121.12) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.84,124.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 77.94,127.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 87.05,130.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.15,134.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (105.25,137.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.35,140.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (123.45,144.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.55,147.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (141.65,152.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (150.75,155.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (159.85,160.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (168.95,165.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.05,169.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (187.15,174.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (196.25,180.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (205.35,186.63) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (214.45,193.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (223.56,200.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (232.66,208.45) {+};
\end{scope}
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (254.39,231.26);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 28.91) --
(239.94, 28.91) --
(239.94,216.81) --
( 43.36,216.81) --
( 43.36, 28.91);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (141.65, 2.51) {$\gamma$};
\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 16.96,122.86) {$\lambda$};
\end{scope}
\end{tikzpicture}
\caption{\label{fig:Strauss100} Intensities of two-dimensional Strauss processes with $\beta=100$, $r=0.05$ and values of $\gamma$ ranging from $0$ to $1$. The solid line is $\lambda_{PS}$, the dashed line is $\lambda_{MF}$, and the grey area corresponds to the bounds in \eqref{eq:lambdaPIP}. The pluses are estimates of the intensities based on 10,000 simulations each.}
\end{center}
\end{figure}
To compare our bounds on the intensity with known approximations from the literature, we concentrate on two methods.
The first is the \emph{Poisson-saddlepoint} approximation proposed in \citep{baddeleynair12}. The authors replaced the Gibbs process $\Xi\sim\pip(\beta,\varphi)$ in \eqref{eq:lambda-pip} by a Poisson process $\mathrm{H}_{\lambda_{PS}}$ with intensity $\lambda_{PS}$ such that the following equality holds
\begin{equation}
\label{eq:ps}
\lambda_{PS}={\mathbb E}\big(\lambda(0\, \vert \, \mathrm{H}_{\lambda_{PS}})\big)=\beta \hspace*{1.5pt} {\mathbb E}\Bigl(\prod_{y\in \mathrm{H}_{\lambda_{PS}}}\varphi(y)\Bigr).
\end{equation}
Solving this equation yields
\begin{equation}
\label{eq:lambda-PS}
\lambda_{PS}=\frac{W(\beta G)}{G},
\end{equation}
where $W$ is \emph{Lambert's W function}, the inverse of $x\mapsto xe^x$, and $G=\int_{\mathbb{R}^d}1-\varphi(x)\,dx$ as above.
The second method is the \emph{mean-field} approximation that was also described in \citep{baddeleynair12} and is given by
\begin{equation}
\label{eq:lambda-MF}
\lambda_{MF}=\frac{W(\beta \Gamma)}{\Gamma},
\end{equation}
where $\Gamma=-\int_{\mathbb{R}^d}\log(\varphi(x))\,dx$. Figure~\ref{fig:Strauss100} shows the two approximations and our bounds from Inequality~\eqref{eq:lambdaPIP} for two-dimensional Strauss processes.
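Both approximations only require an evaluation of Lambert's W function, e.g.\ via \texttt{scipy.special.lambertw}. The following sketch is our own and assumes SciPy is available; for the Strauss process $\Gamma=-\log(\gamma)\,\alpha_d r^d$, so $\gamma>0$ is required.

```python
import math
import numpy as np
from scipy.special import lambertw

def strauss_approximations(beta, r, gamma, d=2):
    """Poisson-saddlepoint and mean-field approximations for a Strauss process:
    lambda_PS = W(beta*G) / G         with G     = (1 - gamma) * alpha_d * r**d,
    lambda_MF = W(beta*Gamma) / Gamma with Gamma = -log(gamma) * alpha_d * r**d."""
    alpha_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)
    G = (1.0 - gamma) * alpha_d * r ** d
    Gamma = -math.log(gamma) * alpha_d * r ** d   # only defined for gamma > 0
    lam_ps = float(np.real(lambertw(beta * G))) / G
    lam_mf = float(np.real(lambertw(beta * Gamma))) / Gamma
    return lam_ps, lam_mf

lam_ps, lam_mf = strauss_approximations(100.0, 0.05, 0.5)
```

Since $-\log \varphi \ge 1-\varphi$ we have $\Gamma \ge G$, and because $W(x)/x=e^{-W(x)}$ is decreasing in $x$ this gives $\lambda_{MF}\le\lambda_{PS}$, consistent with the ordering of the dashed and solid curves in Figure~\ref{fig:Strauss100}.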
In \citep{baddeleynair12} it is shown that under the conditions of Theorem~\ref{thm:lambdaPIP} we have $\lambda\ge \lambda_{MF}$. The authors also conjectured, based on simulations for Strauss processes, that $\lambda_{PS}$ is an upper bound for $\lambda$. However, the next example indicates that this is not generally true.
\begin{figure}[!h]
\begin{center}
\begin{tikzpicture}[x=1pt,y=1pt]
\definecolor[named]{fillColor}{rgb}{1.00,1.00,1.00}
\path[use as bounding box,fill=fillColor,fill opacity=0.00] (0,0) rectangle (420.48,183.96);
\begin{scope}
\path[clip] ( 14.45, 0.00) rectangle (210.24,169.51);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 33.87, 6.28) --
(190.82, 6.28) --
(190.82,163.23) --
( 33.87,163.23) --
( 33.87, 6.28);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.75, 87.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.13,153.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 97.62,127.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (171.27, 93.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 92.11, 69.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (168.68, 53.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.32, 37.20) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (159.91,151.13) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 99.79, 31.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 83.57, 39.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (131.29, 96.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 61.11, 54.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 53.80, 13.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (152.29,108.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.19, 18.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.94,135.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (133.97, 96.63) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.53,149.63) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.92,124.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (189.54,101.33) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (141.71, 74.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (164.45, 56.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 71.32, 88.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.81, 5.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 65.70, 5.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 59.88,100.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 61.97, 22.05) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 44.82, 27.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.64, 92.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (164.58, 6.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (186.23, 82.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.54, 84.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (157.23,150.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 82.72,107.27) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (141.69,137.69) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.55, 45.86) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (111.26,142.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.02,103.33) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 57.60,100.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 47.39, 33.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (167.18, 50.89) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (101.12,108.72) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (175.63,149.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.44, 91.17) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 82.07,107.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (159.39,156.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.51, 40.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 94.55, 89.65) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.56,121.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (150.22, 54.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (176.53,112.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (166.54, 50.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (138.06, 6.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.71,151.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 42.76,137.62) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (110.16,128.10) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (110.19, 93.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 63.53, 57.86) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.29,138.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (133.96, 19.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.59,153.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 38.07,136.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.63,152.13) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (145.42,138.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.74, 67.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (103.49,112.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 62.14, 77.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (158.87,151.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (149.79,105.90) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.54,136.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 63.50, 40.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (174.79, 93.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (129.63, 81.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 74.98, 51.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.08,138.38) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 59.97, 59.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (166.05,125.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.01,126.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (154.36, 37.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (121.62, 7.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (179.37,111.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.71,154.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 99.36,153.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (172.37, 93.10) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (186.39, 26.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 99.64,123.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (140.14,137.30) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 80.72,123.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (187.24, 83.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.62, 41.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (158.98,152.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (127.15, 36.05) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 85.88,141.03) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 93.15, 87.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (143.36,149.33) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (107.35, 78.49) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.96, 32.81) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (152.88, 20.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 79.01, 71.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (103.82, 19.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.72,151.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 97.68, 86.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.23,151.90) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.28, 54.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 42.76,140.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.32,106.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.42, 75.90) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.80, 89.57) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.28,117.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (116.56,140.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (131.32,125.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (109.81, 76.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (157.05,107.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (151.91,109.57) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (166.24,139.84) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (177.16,128.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (137.64, 5.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (129.32,143.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (121.78, 54.17) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.94, 90.72) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (143.43,154.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.83, 55.69) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 37.63,105.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 91.48, 22.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 98.72,110.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 99.75,122.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (124.80, 83.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.40,107.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (102.00,105.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 34.77,123.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 83.53,120.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (183.79,129.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (137.50,113.89) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 82.82, 4.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 35.21, 92.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.27,138.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 78.46,108.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (127.02, 39.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (144.84, 43.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (181.89, 81.87) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.45,152.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (142.40,149.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (174.65, 93.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.96,107.62) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.28,116.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (158.78,153.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.97, 40.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 84.74,137.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (184.44, 57.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 49.34,116.23) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 87.44, 7.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 34.95,157.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (166.28, 6.15) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 59.47, 77.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (153.53,110.76) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (172.38, 17.49) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (108.21, 91.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 40.22, 30.93) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (159.16, 32.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 86.39, 23.15) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (172.40, 89.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 78.75,108.90) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 65.11,156.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (183.13, 64.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (125.12, 64.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (118.68, 5.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.33, 94.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.26, 71.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.09, 68.57) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.53,138.01) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (151.21, 55.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.98, 20.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (100.87,110.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.39, 67.69) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 97.95, 67.93) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.11,139.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.11, 87.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (190.59, 29.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (167.97, 72.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 36.10, 41.51) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 34.17,156.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (125.89, 82.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 44.42, 34.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 47.14,113.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (179.90, 44.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (187.81, 82.84) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.53,108.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.34, 97.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (110.58, 60.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 85.86,107.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.27, 23.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (125.56, 84.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 57.48, 41.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 51.85,154.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 92.28, 50.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (130.77, 20.84) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 88.02, 5.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.67,156.12) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 58.91, 39.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 83.48, 39.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.53, 21.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (115.54, 18.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.79, 38.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.10, 5.30) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (164.33,124.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.03, 42.10) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (157.60,152.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.90,134.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (167.95,104.81) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (149.07, 40.38) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.17, 20.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (183.36,128.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 90.63, 23.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.52,124.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.61,156.23) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 45.15, 31.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.38,119.10) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 91.97, 51.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (152.26,110.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (127.21, 66.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 90.39,141.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 41.18,142.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (186.45, 59.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (144.23,154.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 93.83, 70.57) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 51.05, 88.27) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 61.58, 57.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.41, 78.86) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 77.67, 76.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (163.46, 30.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.59, 75.90) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (163.83, 73.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 38.68, 98.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (112.71,142.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 48.14, 50.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.02,114.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (147.15, 75.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (178.32, 43.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.68, 94.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.23, 23.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (151.46,109.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 47.91, 13.12) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (118.39, 25.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.06, 92.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (150.76,105.12) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.64, 18.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 34.69,156.62) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (184.53, 27.90) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.78, 92.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (190.27,145.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (102.59, 20.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (142.71,154.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 40.79, 29.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (105.26, 56.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (167.98, 56.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.44,134.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (109.75, 93.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.38,155.22) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 78.77, 76.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 45.45, 88.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 35.95, 67.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (176.20, 93.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (179.19,107.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.59, 76.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.55,127.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.03,124.65) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.13, 95.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (160.92,154.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 65.13,120.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.27,113.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.48, 5.76) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.35,114.05) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.90,153.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 89.25, 8.34) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.52, 91.94) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.05,153.05) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 65.69, 57.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (107.20, 77.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 97.26, 34.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (142.55, 74.72) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.40, 76.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 92.78, 70.48) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.69, 20.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (173.79,109.87) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.11, 68.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 80.70, 64.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 92.51,156.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (147.91, 74.27) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.35, 53.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 77.78, 75.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 38.59,137.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (154.31,130.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.86,137.48) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (129.75,150.57) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 99.55,108.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 41.18, 65.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (103.85, 57.12) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (104.37, 57.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (176.42,154.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 85.58, 21.87) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (131.23,144.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (112.62, 39.22) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (139.98,136.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (133.54, 39.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.77,106.80) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.54,153.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (150.79, 53.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.67, 21.51) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.08, 53.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 49.86,154.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (127.51,146.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (189.46,148.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 92.70, 89.18) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.15, 81.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (105.75,108.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (174.45, 18.53) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (111.42,141.54) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (169.20, 53.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 70.15, 93.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 98.76, 68.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 81.36,123.69) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 80.76,124.38) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 85.36,106.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 61.85, 53.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (181.92, 59.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (135.29, 20.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 58.84,100.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 74.00, 47.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (148.86, 19.34) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 84.70,103.89) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 35.71,156.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.08, 66.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 44.37,136.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (186.47, 26.34) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 65.03, 39.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (139.13,111.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 48.12, 90.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 96.56, 37.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 74.41, 49.81) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (148.48, 75.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.95, 35.81) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.43, 95.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 79.38, 79.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (110.68, 38.34) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (185.28, 30.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (149.11, 56.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.93, 16.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (149.57, 40.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 88.67, 52.96) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (130.67, 79.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (100.28,156.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 85.48, 23.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 95.47,128.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.79, 22.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 79.79, 78.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (189.80,149.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 85.11,141.62) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (102.80,129.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (115.65,125.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 79.53, 36.53) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 87.66, 52.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 60.13, 78.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.83,116.80) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.79,126.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 43.01, 50.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 81.71, 37.49) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 86.53,137.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (166.79, 51.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (148.68, 71.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (176.80, 44.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (182.32, 64.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 49.52, 52.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.71,142.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 46.66, 33.98) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (149.49, 20.01) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 99.07,129.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.50, 20.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.71,150.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.05, 20.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 97.46, 84.12) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (130.15, 81.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (130.64,144.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 54.58,103.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (160.17,152.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (154.63, 86.72) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 83.66, 37.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (163.75, 52.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (112.31,146.76) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.46, 68.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.54, 87.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (161.01,154.84) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (123.22,108.47) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.72,137.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 71.95, 88.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (121.12, 53.30) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 65.39, 20.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.52, 18.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (186.27, 28.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (108.50, 57.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (127.92,129.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.56, 39.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (165.65,140.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (181.29, 44.76) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 91.54, 48.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (163.82,123.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (160.57,153.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 42.90, 27.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (169.74, 72.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (106.49, 93.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 58.98, 78.27) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (107.66, 57.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (114.41,125.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (153.06, 89.01) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 83.86, 20.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 63.18, 24.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.22, 92.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 87.45, 50.62) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (112.22, 37.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (187.41,143.43) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 81.76,119.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (109.40, 41.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (128.46,126.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 70.41, 93.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (167.77, 53.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (186.01, 29.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 62.84,151.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 45.00, 26.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (117.09, 18.40) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (134.36, 91.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 39.20, 71.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (153.11, 22.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.73, 94.23) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (170.32, 33.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (150.36, 59.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (166.10,141.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.76,132.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (106.23,111.83) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (116.37, 23.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 50.80, 15.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (108.24, 60.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 81.84, 40.01) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 69.31,136.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 67.59, 22.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (105.22, 59.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 51.78,117.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 34.77,124.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 48.34, 84.34) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (141.81,118.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (119.43,156.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (173.60,153.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.60, 76.80) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 68.47,134.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 41.63, 67.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.11, 85.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (151.04, 5.87) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 64.66,116.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (132.63, 22.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (153.02,128.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 93.70, 67.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (110.13, 58.47) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (168.74, 69.22) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (161.75,156.47) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (155.12,108.05) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 66.56,118.62) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (143.39,153.86) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (184.97, 76.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (117.78,104.80) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (144.46, 75.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (151.92, 56.19) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (141.52,149.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (184.13,126.53) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (107.90, 61.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 60.34, 71.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (106.96, 93.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (173.57, 90.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 97.89, 37.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (133.45, 38.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (102.04, 16.66) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 63.67, 36.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 45.33, 85.69) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (111.33, 39.84) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 86.11,120.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (115.63,128.22) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (151.69,108.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (136.42, 96.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 45.52, 48.27) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at ( 89.71,137.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (154.14,130.07) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (156.36, 92.83) {+};
\end{scope}
\begin{scope}
\path[clip] (224.69, 0.00) rectangle (420.48,169.51);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (244.11, 6.28) --
(401.06, 6.28) --
(401.06,163.23) --
(244.11,163.23) --
(244.11, 6.28);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (386.19,111.68) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (385.36,158.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (326.38, 67.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (308.94,126.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (381.76,118.94) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (360.81, 65.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (365.27,108.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (386.52, 29.83) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (251.05, 25.48) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (379.65, 57.83) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (372.13, 74.95) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (370.97, 65.17) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (315.98,116.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (272.37, 60.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (321.43, 55.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (396.60, 28.43) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (310.96,158.80) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (310.57, 84.69) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (248.43, 44.86) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (356.11, 84.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (338.96, 55.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (247.85, 65.36) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (397.45, 49.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (294.53,153.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (302.57,131.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (310.21, 22.22) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (260.34,127.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (319.23, 47.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (335.05,100.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (359.28,124.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (267.52, 47.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (324.55,120.91) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (354.51, 14.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (292.11, 20.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (378.52, 37.82) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (277.23,158.10) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (351.53, 43.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (281.64, 59.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (287.58, 28.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (292.35,161.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (348.71, 51.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (385.27, 10.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (367.94, 51.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (310.78, 14.34) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (292.26,142.17) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (322.29, 24.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (249.96, 37.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (256.33,111.85) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (292.62, 81.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (262.51, 24.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (388.40, 88.75) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (329.65,157.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (375.10, 12.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (276.77, 14.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (290.63,111.43) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (371.59, 82.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (387.64, 42.46) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (325.63, 41.93) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (400.96,138.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (333.97,147.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (261.85, 70.21) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (391.23,151.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (251.45,160.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (393.12, 98.87) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (307.07,112.01) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (375.56,157.49) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (266.92, 16.94) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (308.66, 61.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (283.69, 50.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (275.72, 77.88) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (356.96, 37.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (396.41, 61.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (283.20,144.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (351.34, 5.03) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (384.54,130.22) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (381.89, 72.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (362.76,116.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (321.06,128.86) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (358.47, 96.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (253.99,135.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (271.70, 28.48) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (256.45, 42.63) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (252.56, 89.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (373.87, 44.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (267.17,154.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (337.49, 15.38) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (359.85, 49.11) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (313.78, 93.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (369.17,129.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (262.64,119.17) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (368.44,146.23) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (305.87, 44.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (400.75, 86.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (353.75, 28.31) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (301.31,145.26) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (358.10,140.61) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (258.43, 50.48) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (319.08, 36.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (283.10, 42.15) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (376.36,136.97) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (254.52,120.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (379.82,101.24) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (348.32, 79.55) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (271.90,120.27) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (333.09, 7.39) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (343.54, 5.94) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (313.51,105.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (313.70,148.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (271.51, 40.43) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (364.20, 76.89) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (254.58, 11.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (372.17, 20.71) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (366.20, 31.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (269.11,112.08) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (298.17, 6.04) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (286.80, 75.89) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (274.88, 94.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (271.69, 69.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (289.73, 58.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (325.26, 94.64) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (290.91, 11.43) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (304.52, 96.59) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (272.42,140.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (303.79,154.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (272.79,148.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (347.29,114.87) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (393.99,159.50) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (352.93, 62.45) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (303.18,120.52) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (387.63, 56.35) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (264.88,145.92) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (253.34,144.58) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (286.63,125.13) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (379.24, 25.30) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (392.66, 78.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (271.63,130.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (303.60, 53.13) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (295.89, 95.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (327.03, 84.60) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (325.61,103.09) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (393.35,143.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (381.96, 83.76) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (318.91,140.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (337.30,113.99) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (375.13,113.28) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (247.83,115.32) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (360.10, 5.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (382.35,150.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (333.40,135.41) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (348.63,159.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (282.13, 87.30) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (371.27,120.18) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (293.03, 69.56) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (258.24, 31.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (317.21, 63.76) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (392.69,134.01) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (398.59, 40.25) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (340.06, 47.77) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (248.21, 98.74) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (399.98,118.47) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (329.21, 50.42) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (267.77, 87.79) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (367.17, 11.80) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (280.09,118.17) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (255.24, 76.57) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (343.20,149.16) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (386.09, 19.65) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (340.72, 65.53) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (314.60, 73.94) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (265.44, 79.02) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (298.52,112.78) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (390.83,124.37) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (368.88, 94.29) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (349.08,132.10) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (330.32, 28.70) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (365.09, 41.00) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (349.93, 89.65) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (359.82, 57.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (355.48, 75.20) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (284.06,152.63) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (326.42,112.67) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (306.58, 72.06) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (388.70, 68.73) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (400.07,106.14) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (279.74,105.44) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (349.45,100.81) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (262.41, 98.93) {+};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.55] at (298.35, 39.62) {+};
\end{scope}
\end{tikzpicture}
\caption{\label{fig:ringprozess} The two processes in Example~\ref{ex:ringprozess} restricted to the unit square. The left panel shows a realisation of the hard annulus process with $489$ points, whereas $\lambda_{PS}=295.2$. The right panel shows a realisation of a hard core process with $188$ points.}
\end{center}
\end{figure}
\begin{example}
\label{ex:ringprozess}
Consider the process $\Xi\sim \pip(\beta,\varphi)$ with the interaction function
\begin{equation*}
\varphi(x) =\begin{cases} 1 \quad &\text{if} \quad \|x\| \le r\\
0 \quad &\text{if} \quad r<\|x\| \le R\\
1 \quad &\text{if} \quad \|x\| > R
\end{cases}
\end{equation*}
for constants $0\le r\le R$. We refer to this as a \emph{hard annulus process}. It is a special case of a so-called \emph{multiscale Strauss process}, see \citep[Ex.~6.2]{moellerwaage04}. Let $d=2$, $\beta=3000$, $r=0.05$ and $R=\sqrt{2}r$.
Then
\begin{equation*}
\lambda_{MF}=0,\quad \frac{\beta}{1+\beta G}=122.1, \quad \lambda_{PS}=295.2\quad \text{and}\quad \frac{\beta}{2-e^{-\beta G}}=1500.
\end{equation*}
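To indicate where these values come from, write $G$ for the interaction integral $\int_{\mathbb{R}^2}\bigl(1-\varphi(x)\bigr)\,\mathrm{d}x$, which is consistent with the numbers above. Then
\begin{equation*}
G=\pi\bigl(R^2-r^2\bigr)=\pi r^2\approx 0.00785, \qquad \beta G\approx 23.56,
\end{equation*}
so that, for instance, $\beta/(1+\beta G)=3000/24.56\approx 122.1$ and $\lambda_{PS}=W(\beta G)/G\approx 2.318/0.00785\approx 295.2$, where $W$ denotes the Lambert $W$ function.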
An estimate of the intensity based on $300$ simulations gave $\hat{\lambda}=493.8 > \lambda_{PS}$. For comparison, we also estimated the intensity of a Strauss hard core process ($\gamma=0$) with the same $\beta$ and $G$ and obtained $\hat{\lambda}=193.3$. Figure~\ref{fig:ringprozess} shows that although the two processes have the same $\beta$ and $G$, their realisations look quite different. All simulations were performed using long runs ($10^7$ steps) of Markov chain Monte Carlo.
We were not able to prove that $\lambda>\lambda_{PS}$ in this case, but offer the following heuristic argument for the observed phenomenon. The simulations show that for large $\beta$ the points tend to cluster on ``islands'' of radius $\le r/2$ that are separated by a distance $\ge R$. Since the points within each island do not interact, we expect the intensity to grow linearly in $\beta$ for large $\beta$. However, $\lambda_{PS}$ only grows logarithmically in $\beta$, so at some point the intensity will overtake it.
\end{example}
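The quantities reported in Example~\ref{ex:ringprozess} can be reproduced numerically. The following is a minimal sketch (Python, stdlib only) under the stated parameters $d=2$, $\beta=3000$, $r=0.05$, $R=\sqrt{2}r$; it takes $G=\int(1-\varphi(x))\,dx=\pi(R^2-r^2)$, the area of the annulus, and evaluates the lower bound, $\lambda_{PS}=W(\beta G)/G$ via a Newton iteration for the Lambert $W$ function, and the upper bound.

```python
import math

# Parameters of the hard annulus process from the example:
# d = 2, beta = 3000, r = 0.05, R = sqrt(2) * r
beta, r = 3000.0, 0.05
R = math.sqrt(2.0) * r
G = math.pi * (R**2 - r**2)    # interaction integral = area of the annulus

def lambert_w(x, tol=1e-12):
    """Principal branch of the Lambert W function for x >= 0, by Newton."""
    if x == 0.0:
        return 0.0
    w = math.log(1.0 + x)      # reasonable starting point for x >= 0
    while True:
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            return w

lower  = beta / (1.0 + beta * G)               # lower bound on the intensity
lam_ps = lambert_w(beta * G) / G               # Poisson saddlepoint approx.
upper  = beta / (2.0 - math.exp(-beta * G))    # upper bound on the intensity

print(round(lower, 1), round(lam_ps, 1), round(upper, 1))  # 122.1 295.2 1500.0
```

The printed values match those stated in the example, while the mean-field approximation $\lambda_{MF}$ is $0$ here since $\varphi$ vanishes on the annulus.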
Even if $\lambda_{PS}$ may not serve as a bound on $\lambda$, it remains useful as an approximation. Empirically its values stay relatively close to the simulated values, whereas the difference between our upper and lower bounds in \eqref{eq:lambdaPIP} increases for large $\beta G$.
The following result, in connection with Theorem~\ref{thm:lambdaPIP}, gives an upper bound on the error of the Poisson saddlepoint approximation.
\begin{proposition}
\label{lemma:PS}
Under the conditions of Theorem~\ref{thm:lambdaPIP} we have
\begin{equation}
\label{eq:bounds-lambda-PS}
\frac{\beta}{1+\beta G}\le \lambda_{PS} \le \frac{\beta}{2-e^{-\beta G}}.
\end{equation}
\end{proposition}
\begin{proof}
Since $\lambda_{PS}=W(\beta G)/G$, it suffices to show the following two inequalities:
\begin{equation}
\label{eq:W}
\frac{x}{1+x}\le W(x) \quad \text{and} \quad
W(x) \le \frac{x}{2-e^{-x}}
\end{equation}
for all $x\ge0$. The first one follows from $x/(1+x)\le \log(1+x)$, see \citep[Eq.~4.1.33]{as64}, by transforming it to
\begin{equation*}
\frac{x}{1+x} \exp\Bigl( \frac{x}{1+x} \Bigr) \leq x
\end{equation*}
and applying the increasing function $W$ on both sides. For the second inequality note that
\begin{equation*}
\log(2-e^{-x})\le \frac{x}{2-e^{-x}}.
\end{equation*}
This holds because we have equality for $x=0$ and it is straightforward to see that the derivative of the left-hand side is less than or equal to the derivative of the right-hand side for all $x\ge 0$. A transformation similar to the one above, followed by an application of the increasing function $W$ to both sides, gives the second inequality in~\eqref{eq:W}.
\end{proof}
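As a quick numerical sanity check, the two inequalities in \eqref{eq:W} can be verified on a grid of nonnegative $x$. The sketch below (Python, with the same Newton iteration for the principal branch of $W$) does this; the grid and tolerances are arbitrary choices.

```python
import math

def lambert_w(x, tol=1e-12):
    """Principal branch of W (x >= 0) by Newton iteration on w*exp(w) = x."""
    if x == 0.0:
        return 0.0
    w = math.log(1.0 + x)
    while True:
        e = math.exp(w)
        step = (w * e - x) / (e * (w + 1.0))
        w -= step
        if abs(step) < tol:
            return w

# Check x/(1+x) <= W(x) <= x/(2 - exp(-x)) on a grid of x in (0, 100]
for k in range(1, 2001):
    x = 0.05 * k
    w = lambert_w(x)
    assert x / (1.0 + x) <= w + 1e-9
    assert w <= x / (2.0 - math.exp(-x)) + 1e-9
```

Both inequalities hold with equality only at $x=0$, consistent with the proof.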
\section{Summary statistics}
\label{sec:s-stat}
For a point process $\Xi$ the \emph{empty space function} or \emph{$F$ function} is defined as the cumulative distribution function of the distance from the origin to the nearest point in $\Xi$, i.e.
\begin{align*}
F(t)&=\P(\exists y\in \Xi \colon \|y\| \le t)=1-\P(\Xi(\mathbb{B}(0,t))=0)\\
&=1-{\mathbb E}\Bigl(\prod_{y\in\Xi}\mathbbm{1}\{y\notin \mathbb{B}(0,t)\}\Bigr)=1-\Psi_\Xi(\mathbbm{1}\{\cdot \notin \mathbb{B}(0,t)\}),
\end{align*}
where $\mathbb{B}(x,t)$ denotes the closed ball centred at $x\in\mathbb{R}^d$ with radius $t\ge0$. Thus for a locally stable process $\Xi$ with constant $c^*$ we obtain from Theorem~\ref{thm:bounds}
\begin{equation}
\label{eq:boundsF}
\frac{\lambda}{c^*}\big(1-\exp(-c^*\alpha_dt^d)\big)\le F(t) \le \lambda \alpha_d t^d.
\end{equation}
Note that for a Poisson process with intensity $\lambda$ we may choose $c^*=\lambda$, in which case the lower bound in \eqref{eq:boundsF} is exact. A minor drawback of the bounds in \eqref{eq:boundsF} is that the intensity is in general not known and has to be estimated as well, e.g.\ by the methods of Section~\ref{sec:bounds-int}.
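In $d=2$ we have $\alpha_2=\pi$, and the bounds \eqref{eq:boundsF} are straightforward to evaluate. The sketch below (Python) computes them for arbitrary $d$ via $\alpha_d=\pi^{d/2}/\Gamma(d/2+1)$; the numerical values $\lambda=100$ and $c^*=150$ are illustrative assumptions, not taken from the paper. It also checks that the lower bound reduces to the exact Poisson empty space function when $c^*=\lambda$.

```python
import math

def F_bounds(t, lam, c_star, d=2):
    """Lower/upper bounds on the empty space function F(t) from (boundsF)."""
    alpha_d = math.pi ** (d / 2) / math.gamma(d / 2 + 1)  # unit-ball volume
    vol = alpha_d * t ** d                                # volume of B(0, t)
    lower = (lam / c_star) * (1.0 - math.exp(-c_star * vol))
    upper = lam * vol
    return lower, upper

# Illustrative values (assumed for the sake of the example)
lo, up = F_bounds(0.05, lam=100.0, c_star=150.0)
assert 0.0 < lo < up

# Poisson case: with c* = lambda the lower bound is exact,
# F(t) = 1 - exp(-lam * pi * t^2) in d = 2
lo_p, _ = F_bounds(0.05, lam=100.0, c_star=100.0)
exact = 1.0 - math.exp(-100.0 * math.pi * 0.05 ** 2)
assert abs(lo_p - exact) < 1e-12
```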
The \emph{nearest neighbour function} or \emph{$G$ function} is defined as the cumulative distribution function of the distance from a typical point of $\Xi$ (in the sense of the Palm distribution) to its nearest neighbour. For pairwise interaction processes $\Xi\sim\pip(\beta,\varphi)$ the $G$ function is computed in \citep[Sec.~5]{mase90} as
\begin{equation}
\label{eq:G}
G(t)=1-\frac{\beta}{\lambda}{\mathbb E}\Big(\prod_{y\in \Xi} \mathbbm{1}\{y\notin \mathbb{B}(0,t)\}\varphi(y)\Big)=1-\frac{\beta}{\lambda}\Psi_\Xi\big(\mathbbm{1}\{\cdot \notin \mathbb{B}(0,t)\}\varphi(\cdot)\big).
\end{equation}
Thus if $\Xi$ is inhibitory and has finite interaction range, setting $c^*=\beta$ in Theorem~\ref{thm:bounds} yields
\begin{equation}
\label{eq:boundsG}
2-\frac{\beta}{\lambda}-\exp(-\beta \tilde{G}_t)\le G(t) \le 1-\frac{\beta}{\lambda}+\beta\tilde{G}_t,
\end{equation}
where $\tilde{G}_t=\int_{\mathbb{R}^d}\bigl(1-\varphi(x)\mathbbm{1}\{\|x\|>t\}\bigr)\,dx$. The left panel of Figure~\ref{fig:StraussK} shows these bounds for the hard annulus process from Example~\ref{ex:ringprozess} with parameters $\beta=70$, $r=0.025$ and $R=0.035$.
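For the hard annulus process, $\tilde{G}_t$ has a simple closed form: the integrand equals $1$ on the ball $\mathbb{B}(0,t)$ and on the part of the annulus lying outside it, and $0$ elsewhere. The sketch below (Python) implements this piecewise formula with the figure's parameters $\beta=70$, $r=0.025$, $R=0.035$; the intensity estimate passed to the bound function is a placeholder, since $\lambda$ must be estimated separately, e.g.\ by the methods of Section~\ref{sec:bounds-int}.

```python
import math

r, R = 0.025, 0.035   # hard annulus parameters used in the left panel

def G_tilde(t):
    """G~_t = integral of 1 - phi(x)*1{||x||>t} for the hard annulus process."""
    if t <= r:
        # ball of radius t plus the whole annulus
        return math.pi * t**2 + math.pi * (R**2 - r**2)
    if t <= R:
        # ball of radius t plus the annulus part outside it: pi*t^2 + pi*(R^2 - t^2)
        return math.pi * R**2
    # the ball of radius t covers the annulus entirely
    return math.pi * t**2

def G_bounds(t, beta, lam_hat):
    """Bounds (boundsG) on the nearest neighbour function G(t),
    given an externally obtained intensity estimate lam_hat."""
    gt = G_tilde(t)
    return (2.0 - beta / lam_hat - math.exp(-beta * gt),
            1.0 - beta / lam_hat + beta * gt)
```

For instance, `G_bounds(0.03, 70.0, 60.0)` evaluates the two bounds at $t=0.03$ under a hypothetical estimate $\hat\lambda=60$.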
Let us furthermore assume that $\Xi$ is isotropic. Then the \emph{$K$ function} is defined as
\begin{equation*}
K(t)=\alpha_dd\int_0^ts^{d-1}\rho(s)\;ds,
\end{equation*}
where $\rho$ is the pair correlation function. By \eqref{eq:pcf}, \eqref{eq:corr-fun} and Theorem~\ref{thm:bounds} we obtain bounds on $\rho$ as
\begin{equation}
\label{eq:boundspcf}
\varphi(x)\biggl(\frac{\beta^2}{\lambda^2}-\frac{\beta^2\tilde{G}_x}{\lambda}\biggr)\le \rho(\|x\|) \le \varphi(x)\biggl(\frac{\beta^2}{\lambda^2}-\frac{\beta}{\lambda}\big(1-\exp(-\beta\tilde{G}_x)\big)\biggr),
\end{equation}
where $\tilde{G}_x=\int_{\mathbb{R}^d}\bigl(1-\varphi(y)\varphi(y-x)\bigr)\;dy$, and (in most cases numeric) integration of \eqref{eq:boundspcf} yields bounds on the $K$ function.
\begin{example}
\label{ex:StraussK}
Let $\Xi$ be a Strauss process in two dimensions. Then
\begin{equation*}
\tilde{G}_x=2\pi r^2(1-\gamma)-2 r^2 (1-\gamma)^2\Biggl(\arccos\biggl(\frac{\|x\|}{2r}\biggr)-\frac{\|x\|}{2r}\sqrt{1-\Bigl(\frac{\|x\|}{2r}\Bigr)^2}\Biggr)
\end{equation*}
for $\|x\|\le 2r$, while $\tilde{G}_x=2\pi r^2(1-\gamma)$ for $\|x\|> 2r$; see also \citep{bn12}.
Since we do not know the true intensity of the Strauss process, we plug the bounds of \eqref{eq:lambdaPIP} into \eqref{eq:boundspcf} to obtain bounds on the $K$ function. This procedure introduces an error twice, so the resulting estimates of $K$ are accurate only for smaller values of $\beta G$.
The right panel of Figure~\ref{fig:StraussK} shows these estimates for $\beta=40$, $r=0.05$ and $\gamma=0$.
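The closed-form $\tilde{G}_x$ is easy to implement and to sanity-check at the endpoints: at $\|x\|=0$ (with $\gamma=0$) it reduces to $\pi r^2$, and for $\|x\|\ge 2r$ the correction term vanishes. A sketch (Python; the intensity value fed into the pair correlation bounds is a plugged-in bound or estimate, as in the example):

```python
import math

def G_tilde_strauss(s, r, gamma):
    """G~_x for a planar Strauss process, as a function of s = ||x||."""
    base = 2.0 * math.pi * r**2 * (1.0 - gamma)
    if s >= 2.0 * r:
        return base            # the two interaction discs no longer overlap
    u = s / (2.0 * r)
    # 2*r^2*(acos(u) - u*sqrt(1-u^2)) is the overlap area of two discs
    # of radius r whose centres are a distance s apart
    return base - 2.0 * r**2 * (1.0 - gamma)**2 * (
        math.acos(u) - u * math.sqrt(1.0 - u**2))

def rho_bounds(s, r, gamma, beta, lam):
    """Bounds (boundspcf) on the pair correlation function at distance s,
    with lam a plugged-in intensity bound or estimate."""
    phi = gamma if s <= r else 1.0
    gt = G_tilde_strauss(s, r, gamma)
    lower = phi * (beta**2 / lam**2 - beta**2 * gt / lam)
    upper = phi * (beta**2 / lam**2 - (beta / lam) * (1.0 - math.exp(-beta * gt)))
    return lower, upper
```

Numerically integrating `rho_bounds` over $[0,t]$ against $2\pi s$ then yields the bounds on $K(t)$ shown in the right panel.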
\begin{figure}[ht]
\begin{center}
\begin{tikzpicture}[x=1pt,y=1pt]
\definecolor[named]{fillColor}{rgb}{1.00,1.00,1.00}
\path[use as bounding box,fill=fillColor,fill opacity=0.00] (0,0) rectangle (462.53,210.24);
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (462.53,210.24);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.79, 28.91) -- (210.39, 28.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.79, 28.91) -- ( 49.79, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 89.94, 28.91) -- ( 89.94, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (130.09, 28.91) -- (130.09, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (170.24, 28.91) -- (170.24, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (210.39, 28.91) -- (210.39, 22.91);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 49.79, 15.71) {0.00};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 89.94, 15.71) {0.02};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (130.09, 15.71) {0.04};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (170.24, 15.71) {0.06};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (210.39, 15.71) {0.08};
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 35.09) -- ( 43.36,159.13);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 35.09) -- ( 37.36, 35.09);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 76.44) -- ( 37.36, 76.44);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,117.79) -- ( 37.36,117.79);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36,159.13) -- ( 37.36,159.13);
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 32.68) {0.0};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96, 74.03) {0.2};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,115.37) {0.4};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 34.96,156.72) {0.6};
\end{scope}
\begin{scope}
\path[clip] ( 43.36, 28.91) rectangle (216.81,195.79);
\definecolor[named]{fillColor}{rgb}{0.83,0.83,0.83}
\path[fill=fillColor] ( 49.79, 36.95) --
( 50.14, 36.96) --
( 50.49, 36.96) --
( 50.84, 36.97) --
( 51.19, 36.98) --
( 51.54, 36.99) --
( 51.89, 37.01) --
( 52.25, 37.02) --
( 52.60, 37.04) --
( 52.95, 37.07) --
( 53.30, 37.09) --
( 53.65, 37.12) --
( 54.00, 37.16) --
( 54.35, 37.19) --
( 54.70, 37.23) --
( 55.06, 37.27) --
( 55.41, 37.31) --
( 55.76, 37.36) --
( 56.11, 37.41) --
( 56.46, 37.46) --
( 56.81, 37.51) --
( 57.16, 37.57) --
( 57.51, 37.63) --
( 57.87, 37.69) --
( 58.22, 37.76) --
( 58.57, 37.83) --
( 58.92, 37.90) --
( 59.27, 37.97) --
( 59.62, 38.05) --
( 59.97, 38.13) --
( 60.33, 38.21) --
( 60.68, 38.29) --
( 61.03, 38.38) --
( 61.38, 38.47) --
( 61.73, 38.56) --
( 62.08, 38.66) --
( 62.43, 38.76) --
( 62.78, 38.86) --
( 63.14, 38.97) --
( 63.49, 39.07) --
( 63.84, 39.18) --
( 64.19, 39.30) --
( 64.54, 39.41) --
( 64.89, 39.53) --
( 65.24, 39.65) --
( 65.60, 39.77) --
( 65.95, 39.90) --
( 66.30, 40.03) --
( 66.65, 40.16) --
( 67.00, 40.30) --
( 67.35, 40.44) --
( 67.70, 40.58) --
( 68.05, 40.72) --
( 68.41, 40.87) --
( 68.76, 41.02) --
( 69.11, 41.17) --
( 69.46, 41.32) --
( 69.81, 41.48) --
( 70.16, 41.64) --
( 70.51, 41.80) --
( 70.86, 41.97) --
( 71.22, 42.14) --
( 71.57, 42.31) --
( 71.92, 42.48) --
( 72.27, 42.66) --
( 72.62, 42.84) --
( 72.97, 43.02) --
( 73.32, 43.21) --
( 73.68, 43.39) --
( 74.03, 43.58) --
( 74.38, 43.78) --
( 74.73, 43.97) --
( 75.08, 44.17) --
( 75.43, 44.37) --
( 75.78, 44.58) --
( 76.13, 44.79) --
( 76.49, 45.00) --
( 76.84, 45.21) --
( 77.19, 45.43) --
( 77.54, 45.64) --
( 77.89, 45.87) --
( 78.24, 46.09) --
( 78.59, 46.32) --
( 78.94, 46.55) --
( 79.30, 46.78) --
( 79.65, 47.01) --
( 80.00, 47.25) --
( 80.35, 47.49) --
( 80.70, 47.74) --
( 81.05, 47.98) --
( 81.40, 48.23) --
( 81.76, 48.49) --
( 82.11, 48.74) --
( 82.46, 49.00) --
( 82.81, 49.26) --
( 83.16, 49.52) --
( 83.51, 49.79) --
( 83.86, 50.06) --
( 84.21, 50.33) --
( 84.57, 50.60) --
( 84.92, 50.88) --
( 85.27, 51.16) --
( 85.62, 51.44) --
( 85.97, 51.73) --
( 86.32, 52.01) --
( 86.67, 52.31) --
( 87.03, 52.60) --
( 87.38, 52.90) --
( 87.73, 53.20) --
( 88.08, 53.50) --
( 88.43, 53.80) --
( 88.78, 54.11) --
( 89.13, 54.42) --
( 89.48, 54.73) --
( 89.84, 55.05) --
( 90.19, 55.37) --
( 90.54, 55.69) --
( 90.89, 56.01) --
( 91.24, 56.34) --
( 91.59, 56.67) --
( 91.94, 57.00) --
( 92.29, 57.34) --
( 92.65, 57.68) --
( 93.00, 58.02) --
( 93.35, 58.36) --
( 93.70, 58.71) --
( 94.05, 59.06) --
( 94.40, 59.41) --
( 94.75, 59.77) --
( 95.11, 60.13) --
( 95.46, 60.49) --
( 95.81, 60.85) --
( 96.16, 61.22) --
( 96.51, 61.58) --
( 96.86, 61.96) --
( 97.21, 62.33) --
( 97.56, 62.71) --
( 97.92, 63.09) --
( 98.27, 63.47) --
( 98.62, 63.86) --
( 98.97, 64.25) --
( 99.32, 64.64) --
( 99.67, 65.03) --
(100.02, 65.37) --
(100.38, 65.37) --
(100.73, 65.37) --
(101.08, 65.37) --
(101.43, 65.37) --
(101.78, 65.37) --
(102.13, 65.37) --
(102.48, 65.37) --
(102.83, 65.37) --
(103.19, 65.37) --
(103.54, 65.37) --
(103.89, 65.37) --
(104.24, 65.37) --
(104.59, 65.37) --
(104.94, 65.37) --
(105.29, 65.37) --
(105.64, 65.37) --
(106.00, 65.37) --
(106.35, 65.37) --
(106.70, 65.37) --
(107.05, 65.37) --
(107.40, 65.37) --
(107.75, 65.37) --
(108.10, 65.37) --
(108.46, 65.37) --
(108.81, 65.37) --
(109.16, 65.37) --
(109.51, 65.37) --
(109.86, 65.37) --
(110.21, 65.37) --
(110.56, 65.37) --
(110.91, 65.37) --
(111.27, 65.37) --
(111.62, 65.37) --
(111.97, 65.37) --
(112.32, 65.37) --
(112.67, 65.37) --
(113.02, 65.37) --
(113.37, 65.37) --
(113.72, 65.37) --
(114.08, 65.37) --
(114.43, 65.37) --
(114.78, 65.37) --
(115.13, 65.37) --
(115.48, 65.37) --
(115.83, 65.37) --
(116.18, 65.37) --
(116.54, 65.37) --
(116.89, 65.37) --
(117.24, 65.37) --
(117.59, 65.37) --
(117.94, 65.37) --
(118.29, 65.37) --
(118.64, 65.37) --
(118.99, 65.37) --
(119.35, 65.37) --
(119.70, 65.37) --
(120.05, 65.37) --
(120.40, 65.37) --
(120.75, 65.37) --
(121.10, 65.92) --
(121.45, 66.48) --
(121.81, 67.05) --
(122.16, 67.63) --
(122.51, 68.20) --
(122.86, 68.78) --
(123.21, 69.36) --
(123.56, 69.94) --
(123.91, 70.53) --
(124.26, 71.12) --
(124.62, 71.71) --
(124.97, 72.30) --
(125.32, 72.90) --
(125.67, 73.50) --
(126.02, 74.10) --
(126.37, 74.71) --
(126.72, 75.32) --
(127.07, 75.93) --
(127.43, 76.54) --
(127.78, 77.16) --
(128.13, 77.78) --
(128.48, 78.40) --
(128.83, 79.03) --
(129.18, 79.66) --
(129.53, 80.29) --
(129.89, 80.92) --
(130.24, 81.56) --
(130.59, 82.20) --
(130.94, 82.84) --
(131.29, 83.48) --
(131.64, 84.13) --
(131.99, 84.78) --
(132.34, 85.43) --
(132.70, 86.09) --
(133.05, 86.75) --
(133.40, 87.41) --
(133.75, 88.07) --
(134.10, 88.74) --
(134.45, 89.41) --
(134.80, 90.08) --
(135.15, 90.76) --
(135.51, 91.43) --
(135.86, 92.12) --
(136.21, 92.80) --
(136.56, 93.49) --
(136.91, 94.18) --
(137.26, 94.87) --
(137.61, 95.56) --
(137.97, 96.26) --
(138.32, 96.96) --
(138.67, 97.66) --
(139.02, 98.37) --
(139.37, 99.08) --
(139.72, 99.79) --
(140.07,100.50) --
(140.42,101.22) --
(140.78,101.94) --
(141.13,102.66) --
(141.48,103.39) --
(141.83,104.12) --
(142.18,104.85) --
(142.53,105.58) --
(142.88,106.32) --
(143.24,107.06) --
(143.59,107.80) --
(143.94,108.54) --
(144.29,109.29) --
(144.64,110.04) --
(144.99,110.80) --
(145.34,111.55) --
(145.69,112.31) --
(146.05,113.07) --
(146.40,113.84) --
(146.75,114.60) --
(147.10,115.37) --
(147.45,116.15) --
(147.80,116.92) --
(148.15,117.70) --
(148.50,118.48) --
(148.86,119.27) --
(149.21,120.05) --
(149.56,120.84) --
(149.91,121.63) --
(150.26,122.43) --
(150.61,123.23) --
(150.96,124.03) --
(151.32,124.83) --
(151.67,125.64) --
(152.02,126.45) --
(152.37,127.26) --
(152.72,128.07) --
(153.07,128.89) --
(153.42,129.71) --
(153.77,130.53) --
(154.13,131.36) --
(154.48,132.19) --
(154.83,133.02) --
(155.18,133.85) --
(155.53,134.69) --
(155.88,135.53) --
(156.23,136.37) --
(156.59,137.22) --
(156.94,138.06) --
(157.29,138.91) --
(157.64,139.77) --
(157.99,140.62) --
(158.34,141.48) --
(158.69,142.35) --
(159.04,143.21) --
(159.40,144.08) --
(159.75,144.95) --
(160.10,145.82) --
(160.45,146.70) --
(160.80,147.57) --
(161.15,148.46) --
(161.50,149.34) --
(161.85,150.23) --
(162.21,151.12) --
(162.56,152.01) --
(162.91,152.90) --
(163.26,153.80) --
(163.61,154.70) --
(163.96,155.61) --
(164.31,156.51) --
(164.67,157.42) --
(165.02,158.33) --
(165.37,159.25) --
(165.72,160.17) --
(166.07,161.09) --
(166.42,162.01) --
(166.77,162.94) --
(167.12,163.87) --
(167.48,164.80) --
(167.83,165.73) --
(168.18,166.67) --
(168.53,167.61) --
(168.88,168.55) --
(169.23,169.50) --
(169.58,170.44) --
(169.93,171.40) --
(170.29,172.35) --
(170.64,173.31) --
(170.99,174.26) --
(171.34,175.23) --
(171.69,176.19) --
(172.04,177.16) --
(172.39,178.13) --
(172.75,179.10) --
(173.10,180.08) --
(173.45,181.06) --
(173.80,182.04) --
(174.15,183.02) --
(174.50,184.01) --
(174.85,185.00) --
(175.20,185.99) --
(175.56,186.99) --
(175.91,187.99) --
(176.26,188.99) --
(176.61,189.99) --
(176.96,191.00) --
(177.31,192.01) --
(177.66,193.02) --
(178.02,194.04) --
(178.37,195.05) --
(178.72,196.07) --
(179.07,197.10) --
(179.42,198.12) --
(179.77,199.15) --
(180.12,200.18) --
(180.47,201.22) --
(180.83,202.26) --
(181.18,203.30) --
(181.53,204.34) --
(181.88,205.39) --
(182.23,206.43) --
(182.58,207.49) --
(182.93,208.54) --
(183.28,209.60) --
(183.50,210.24) --
(326.44,210.24) --
(326.27,210.22) --
(325.92,210.19) --
(325.57,210.16) --
(325.22,210.12) --
(324.86,210.09) --
(324.51,210.05) --
(324.16,210.02) --
(323.81,209.98) --
(323.46,209.94) --
(323.11,209.91) --
(322.76,209.87) --
(322.40,209.83) --
(322.05,209.79) --
(321.70,209.76) --
(321.35,209.72) --
(321.00,209.68) --
(320.65,209.64) --
(320.30,209.60) --
(319.95,209.56) --
(319.59,209.52) --
(319.24,209.48) --
(318.89,209.44) --
(318.54,209.40) --
(318.19,209.36) --
(317.84,209.32) --
(317.49,209.27) --
(317.13,209.23) --
(316.78,209.19) --
(316.43,209.14) --
(316.08,209.10) --
(315.73,209.06) --
(315.38,209.01) --
(315.03,208.97) --
(314.68,208.92) --
(314.32,208.87) --
(313.97,208.83) --
(313.62,208.78) --
(313.27,208.73) --
(312.92,208.69) --
(312.57,208.64) --
(312.22,208.59) --
(311.87,208.54) --
(311.51,208.49) --
(311.16,208.44) --
(310.81,208.39) --
(310.46,208.34) --
(310.11,208.29) --
(309.76,208.24) --
(309.41,208.19) --
(309.05,208.14) --
(308.70,208.08) --
(308.35,208.03) --
(308.00,207.98) --
(307.65,207.92) --
(307.30,207.87) --
(306.95,207.81) --
(306.60,207.76) --
(306.24,207.70) --
(305.89,207.65) --
(305.54,207.59) --
(305.19,207.53) --
(304.84,207.47) --
(304.49,207.42) --
(304.14,207.36) --
(303.78,207.30) --
(303.43,207.24) --
(303.08,207.18) --
(302.73,207.12) --
(302.38,207.06) --
(302.03,206.99) --
(301.68,206.93) --
(301.33,206.87) --
(300.97,206.81) --
(300.62,206.74) --
(300.27,206.68) --
(299.92,206.61) --
(299.57,206.55) --
(299.22,206.48) --
(298.87,206.41) --
(298.52,206.35) --
(298.16,206.28) --
(297.81,206.21) --
(297.46,206.14) --
(297.11,206.07) --
(296.76,206.00) --
(296.41,205.93) --
(296.06,205.86) --
(295.70,205.79) --
(295.35,205.72) --
(295.00,205.64) --
(294.65,205.57) --
(294.30,205.50) --
(293.95,205.42) --
(293.60,205.35) --
(293.25,205.27) --
(292.89,205.20) --
(292.54,205.12) --
(292.19,205.04) --
(291.84,204.96) --
(291.49,204.88) --
(291.14,204.80) --
(290.79,204.72) --
(290.44,204.64) --
(290.08,204.56) --
(289.73,204.48) --
(289.38,204.40) --
(289.03,204.32) --
(288.68,204.23) --
(288.33,204.15) --
(287.98,204.06) --
(287.62,203.98) --
(287.27,203.89) --
(286.92,203.80) --
(286.57,203.72) --
(286.22,203.63) --
(285.87,203.54) --
(285.52,203.45) --
(285.17,203.36) --
(284.81,203.27) --
(284.46,203.17) --
(284.11,203.08) --
(283.76,202.99) --
(283.41,202.90) --
(283.06,202.80) --
(282.71,202.71) --
(282.35,202.61) --
(282.00,202.51) --
(281.65,202.42) --
(281.30,202.32) --
(280.95,202.22) --
(280.60,202.12) --
(280.25,202.02) --
(279.90,201.92) --
(279.54,201.82) --
(279.19,201.71) --
(278.84,201.61) --
(278.49,201.51) --
(278.14,201.40) --
(277.79,201.30) --
(277.44,201.19) --
(277.09,201.08) --
(276.73,200.97) --
(276.38,200.87) --
(276.03,200.76) --
(275.68,200.65) --
(275.33,200.53) --
(274.98,200.42) --
(274.63,200.31) --
(274.27,200.20) --
(273.92,200.08) --
(273.57,199.97) --
(273.22,199.85) --
(272.87,199.74) --
(272.52,199.62) --
(272.17,199.50) --
(271.82,199.38) --
(271.46,199.26) --
(271.11,199.14) --
(270.76,199.02) --
(270.41,198.90) --
(270.06,198.77) --
(269.71,198.65) --
(269.36,198.52) --
(269.01,198.40) --
(268.65,198.27) --
(268.30,198.14) --
(267.95,198.02) --
(267.60,197.89) --
(267.25,197.76) --
(266.90,197.63) --
(266.55,197.49) --
(266.19,197.36) --
(265.84,197.23) --
(265.49,197.09) --
(265.14,196.96) --
(264.79,196.82) --
(264.44,196.68) --
(264.09,196.55) --
(263.74,196.41) --
(263.38,196.27) --
(263.03,196.13) --
(262.68,195.98) --
(262.33,195.84) --
(261.98,195.70) --
(261.63,195.55) --
(261.28,195.41) --
(260.92,195.26) --
(260.57,195.11) --
(260.22,194.97) --
(259.87,194.82) --
(259.52,194.67) --
(259.17,194.51) --
(258.82,194.36) --
(258.47,194.21) --
(258.11,194.05) --
(257.76,193.90) --
(257.41,193.74) --
(257.06,193.59) --
(256.71,193.43) --
(256.36,193.27) --
(256.01,193.11) --
(255.66,192.95) --
(255.30,192.79) --
(254.95,192.62) --
(254.60,192.46) --
(254.25,192.29) --
(253.90,192.13) --
(253.55,191.96) --
(253.20,191.79) --
(252.84,191.62) --
(252.49,191.45) --
(252.14,191.28) --
(251.79,191.11) --
(251.44,190.94) --
(251.09,190.76) --
(250.74,190.59) --
(250.39,190.41) --
(250.03,190.23) --
(249.68,190.05) --
(249.33,189.88) --
(248.98,189.69) --
(248.63,189.51) --
(248.28,189.33) --
(247.93,189.15) --
(247.57,188.96) --
(247.22,188.78) --
(246.87,188.59) --
(246.52,188.40) --
(246.17,188.21) --
(245.82,188.02) --
(245.47,187.83) --
(245.12,187.64) --
(244.76,187.44) --
(244.41,187.25) --
(244.06,187.05) --
(243.71,186.86) --
(243.36,186.66) --
(243.01,186.46) --
(242.66,186.26) --
(242.31,186.06) --
(241.95,185.85) --
(241.60,185.65) --
(241.25,185.45) --
(240.90,185.24) --
(240.55,185.03) --
(240.20,184.83) --
(239.85,184.62) --
(239.49,184.41) --
(239.14,184.19) --
(238.79,183.98) --
(238.44,183.77) --
(238.09,183.55) --
(237.74,183.34) --
(237.39,183.12) --
(237.04,182.90) --
(236.68,182.68) --
(236.33,182.46) --
(235.98,182.24) --
(235.63,182.01) --
(235.28,181.79) --
(234.93,181.56) --
(234.58,181.34) --
(234.23,181.11) --
(233.87,180.88) --
(233.52,180.65) --
(233.17,180.42) --
(232.82,180.19) --
(232.47,179.95) --
(232.12,179.72) --
(231.77,179.48) --
(231.41,179.24) --
(231.06,179.01) --
(230.71,178.77) --
(230.36,178.53) --
(230.01,178.28) --
(229.66,178.04) --
(229.31,177.80) --
(228.96,177.55) --
(228.60,177.30) --
(228.25,177.05) --
(227.90,176.81) --
(227.55,176.55) --
(227.20,176.30) --
(226.85,176.05) --
(226.50,175.80) --
(226.14,175.54) --
(225.79,175.28) --
(225.44,175.03) --
(225.09,174.77) --
(224.74,174.51) --
(224.39,174.24) --
(224.04,173.98) --
(223.69,173.72) --
(223.33,173.45) --
(222.98,173.19) --
(222.63,172.92) --
(222.28,172.65) --
(221.93,172.38) --
(221.58,172.11) --
(221.23,171.83) --
(220.88,171.56) --
(220.52,171.29) --
(220.17,171.01) --
(219.82,170.73) --
(219.47,170.45) --
(219.12,170.17) --
(218.77,169.89) --
(218.42,169.61) --
(218.06,169.32) --
(217.71,169.04) --
(217.36,168.75) --
(217.01,168.47) --
(216.66,168.18) --
(216.31,167.89) --
(215.96,167.60) --
(215.61,167.30) --
(215.25,167.01) --
(214.90,166.71) --
(214.55,166.42) --
(214.20,166.12) --
(213.85,165.82) --
(213.50,165.52) --
(213.15,165.22) --
(212.80,164.92) --
(212.44,164.61) --
(212.09,164.31) --
(211.74,164.00) --
(211.39,163.70) --
(211.04,163.39) --
(210.69,163.08) --
(210.34,162.77) --
(209.98,162.45) --
(209.63,162.14) --
(209.28,161.82) --
(208.93,161.51) --
(208.58,161.19) --
(208.23,160.87) --
(207.88,160.55) --
(207.53,160.23) --
(207.17,159.91) --
(206.82,159.59) --
(206.47,159.26) --
(206.12,158.94) --
(205.77,158.61) --
(205.42,158.28) --
(205.07,157.95) --
(204.71,157.62) --
(204.36,157.29) --
(204.01,156.95) --
(203.66,156.62) --
(203.31,156.28) --
(202.96,155.95) --
(202.61,155.61) --
(202.26,155.27) --
(201.90,154.93) --
(201.55,154.59) --
(201.20,154.24) --
(200.85,153.90) --
(200.50,153.56) --
(200.15,153.21) --
(199.80,152.86) --
(199.45,152.51) --
(199.09,152.16) --
(198.74,151.81) --
(198.39,151.46) --
(198.04,151.11) --
(197.69,150.75) --
(197.34,150.39) --
(196.99,150.04) --
(196.63,149.68) --
(196.28,149.32) --
(195.93,148.96) --
(195.58,148.60) --
(195.23,148.23) --
(194.88,147.87) --
(194.53,147.51) --
(194.18,147.14) --
(193.82,146.77) --
(193.47,146.40) --
(193.12,146.03) --
(192.77,145.66) --
(192.42,145.29) --
(192.07,144.92) --
(191.72,144.54) --
(191.36,144.17) --
(191.01,143.79) --
(190.66,143.41) --
(190.31,143.03) --
(189.96,142.66) --
(189.61,142.27) --
(189.26,141.89) --
(188.91,141.51) --
(188.55,141.13) --
(188.20,140.74) --
(187.85,140.35) --
(187.50,139.97) --
(187.15,139.58) --
(186.80,139.19) --
(186.45,138.80) --
(186.10,138.41) --
(185.74,138.01) --
(185.39,137.62) --
(185.04,137.23) --
(184.69,136.83) --
(184.34,136.43) --
(183.99,136.04) --
(183.64,135.64) --
(183.28,135.24) --
(182.93,134.84) --
(182.58,134.44) --
(182.23,134.03) --
(181.88,133.63) --
(181.53,133.23) --
(181.18,132.82) --
(180.83,132.41) --
(180.47,132.01) --
(180.12,131.60) --
(179.77,131.19) --
(179.42,130.78) --
(179.07,130.37) --
(178.72,129.95) --
(178.37,129.54) --
(178.02,129.13) --
(177.66,128.71) --
(177.31,128.30) --
(176.96,127.88) --
(176.61,127.46) --
(176.26,127.04) --
(175.91,126.63) --
(175.56,126.21) --
(175.20,125.78) --
(174.85,125.36) --
(174.50,124.94) --
(174.15,124.52) --
(173.80,124.09) --
(173.45,123.67) --
(173.10,123.24) --
(172.75,122.81) --
(172.39,122.39) --
(172.04,121.96) --
(171.69,121.53) --
(171.34,121.10) --
(170.99,120.67) --
(170.64,120.24) --
(170.29,119.81) --
(169.93,119.37) --
(169.58,118.94) --
(169.23,118.50) --
(168.88,118.07) --
(168.53,117.63) --
(168.18,117.20) --
(167.83,116.76) --
(167.48,116.32) --
(167.12,115.88) --
(166.77,115.45) --
(166.42,115.01) --
(166.07,114.56) --
(165.72,114.12) --
(165.37,113.68) --
(165.02,113.24) --
(164.67,112.80) --
(164.31,112.35) --
(163.96,111.91) --
(163.61,111.46) --
(163.26,111.02) --
(162.91,110.57) --
(162.56,110.13) --
(162.21,109.68) --
(161.85,109.23) --
(161.50,108.79) --
(161.15,108.34) --
(160.80,107.89) --
(160.45,107.44) --
(160.10,106.99) --
(159.75,106.54) --
(159.40,106.09) --
(159.04,105.64) --
(158.69,105.18) --
(158.34,104.73) --
(157.99,104.28) --
(157.64,103.83) --
(157.29,103.37) --
(156.94,102.92) --
(156.59,102.47) --
(156.23,102.01) --
(155.88,101.56) --
(155.53,101.10) --
(155.18,100.65) --
(154.83,100.19) --
(154.48, 99.73) --
(154.13, 99.28) --
(153.77, 98.82) --
(153.42, 98.36) --
(153.07, 97.91) --
(152.72, 97.45) --
(152.37, 96.99) --
(152.02, 96.53) --
(151.67, 96.07) --
(151.32, 95.62) --
(150.96, 95.16) --
(150.61, 94.70) --
(150.26, 94.24) --
(149.91, 93.78) --
(149.56, 93.32) --
(149.21, 92.86) --
(148.86, 92.40) --
(148.50, 91.94) --
(148.15, 91.48) --
(147.80, 91.02) --
(147.45, 90.56) --
(147.10, 90.10) --
(146.75, 89.64) --
(146.40, 89.18) --
(146.05, 88.72) --
(145.69, 88.26) --
(145.34, 87.80) --
(144.99, 87.34) --
(144.64, 86.88) --
(144.29, 86.42) --
(143.94, 85.96) --
(143.59, 85.50) --
(143.24, 85.04) --
(142.88, 84.58) --
(142.53, 84.12) --
(142.18, 83.66) --
(141.83, 83.20) --
(141.48, 82.74) --
(141.13, 82.28) --
(140.78, 81.82) --
(140.42, 81.37) --
(140.07, 80.91) --
(139.72, 80.45) --
(139.37, 79.99) --
(139.02, 79.53) --
(138.67, 79.07) --
(138.32, 78.62) --
(137.97, 78.16) --
(137.61, 77.70) --
(137.26, 77.24) --
(136.91, 76.79) --
(136.56, 76.33) --
(136.21, 75.88) --
(135.86, 75.42) --
(135.51, 74.96) --
(135.15, 74.51) --
(134.80, 74.06) --
(134.45, 73.60) --
(134.10, 73.15) --
(133.75, 72.69) --
(133.40, 72.24) --
(133.05, 71.79) --
(132.70, 71.34) --
(132.34, 70.89) --
(131.99, 70.44) --
(131.64, 69.98) --
(131.29, 69.54) --
(130.94, 69.09) --
(130.59, 68.64) --
(130.24, 68.19) --
(129.89, 67.74) --
(129.53, 67.29) --
(129.18, 66.85) --
(128.83, 66.40) --
(128.48, 65.96) --
(128.13, 65.51) --
(127.78, 65.07) --
(127.43, 64.62) --
(127.07, 64.18) --
(126.72, 63.74) --
(126.37, 63.30) --
(126.02, 62.86) --
(125.67, 62.42) --
(125.32, 61.98) --
(124.97, 61.54) --
(124.62, 61.10) --
(124.26, 60.67) --
(123.91, 60.23) --
(123.56, 59.80) --
(123.21, 59.36) --
(122.86, 58.93) --
(122.51, 58.50) --
(122.16, 58.07) --
(121.81, 57.64) --
(121.45, 57.21) --
(121.10, 56.78) --
(120.75, 56.36) --
(120.40, 56.36) --
(120.05, 56.36) --
(119.70, 56.36) --
(119.35, 56.36) --
(118.99, 56.36) --
(118.64, 56.36) --
(118.29, 56.36) --
(117.94, 56.36) --
(117.59, 56.36) --
(117.24, 56.36) --
(116.89, 56.36) --
(116.54, 56.36) --
(116.18, 56.36) --
(115.83, 56.36) --
(115.48, 56.36) --
(115.13, 56.36) --
(114.78, 56.36) --
(114.43, 56.36) --
(114.08, 56.36) --
(113.72, 56.36) --
(113.37, 56.36) --
(113.02, 56.36) --
(112.67, 56.36) --
(112.32, 56.36) --
(111.97, 56.36) --
(111.62, 56.36) --
(111.27, 56.36) --
(110.91, 56.36) --
(110.56, 56.36) --
(110.21, 56.36) --
(109.86, 56.36) --
(109.51, 56.36) --
(109.16, 56.36) --
(108.81, 56.36) --
(108.46, 56.36) --
(108.10, 56.36) --
(107.75, 56.36) --
(107.40, 56.36) --
(107.05, 56.36) --
(106.70, 56.36) --
(106.35, 56.36) --
(106.00, 56.36) --
(105.64, 56.36) --
(105.29, 56.36) --
(104.94, 56.36) --
(104.59, 56.36) --
(104.24, 56.36) --
(103.89, 56.36) --
(103.54, 56.36) --
(103.19, 56.36) --
(102.83, 56.36) --
(102.48, 56.36) --
(102.13, 56.36) --
(101.78, 56.36) --
(101.43, 56.36) --
(101.08, 56.36) --
(100.73, 56.36) --
(100.38, 56.36) --
(100.02, 56.36) --
( 99.67, 56.10) --
( 99.32, 55.80) --
( 98.97, 55.51) --
( 98.62, 55.21) --
( 98.27, 54.91) --
( 97.92, 54.62) --
( 97.56, 54.33) --
( 97.21, 54.04) --
( 96.86, 53.75) --
( 96.51, 53.46) --
( 96.16, 53.17) --
( 95.81, 52.89) --
( 95.46, 52.61) --
( 95.11, 52.33) --
( 94.75, 52.05) --
( 94.40, 51.77) --
( 94.05, 51.49) --
( 93.70, 51.22) --
( 93.35, 50.95) --
( 93.00, 50.68) --
( 92.65, 50.41) --
( 92.29, 50.14) --
( 91.94, 49.88) --
( 91.59, 49.61) --
( 91.24, 49.35) --
( 90.89, 49.09) --
( 90.54, 48.83) --
( 90.19, 48.58) --
( 89.84, 48.32) --
( 89.48, 48.07) --
( 89.13, 47.82) --
( 88.78, 47.57) --
( 88.43, 47.32) --
( 88.08, 47.08) --
( 87.73, 46.84) --
( 87.38, 46.59) --
( 87.03, 46.35) --
( 86.67, 46.12) --
( 86.32, 45.88) --
( 85.97, 45.65) --
( 85.62, 45.42) --
( 85.27, 45.19) --
( 84.92, 44.96) --
( 84.57, 44.73) --
( 84.21, 44.51) --
( 83.86, 44.29) --
( 83.51, 44.07) --
( 83.16, 43.85) --
( 82.81, 43.63) --
( 82.46, 43.42) --
( 82.11, 43.21) --
( 81.76, 43.00) --
( 81.40, 42.79) --
( 81.05, 42.58) --
( 80.70, 42.38) --
( 80.35, 42.18) --
( 80.00, 41.98) --
( 79.65, 41.78) --
( 79.30, 41.58) --
( 78.94, 41.39) --
( 78.59, 41.20) --
( 78.24, 41.01) --
( 77.89, 40.82) --
( 77.54, 40.64) --
( 77.19, 40.46) --
( 76.84, 40.28) --
( 76.49, 40.10) --
( 76.13, 39.92) --
( 75.78, 39.75) --
( 75.43, 39.57) --
( 75.08, 39.40) --
( 74.73, 39.24) --
( 74.38, 39.07) --
( 74.03, 38.91) --
( 73.68, 38.75) --
( 73.32, 38.59) --
( 72.97, 38.43) --
( 72.62, 38.28) --
( 72.27, 38.13) --
( 71.92, 37.98) --
( 71.57, 37.83) --
( 71.22, 37.68) --
( 70.86, 37.54) --
( 70.51, 37.40) --
( 70.16, 37.26) --
( 69.81, 37.12) --
( 69.46, 36.99) --
( 69.11, 36.86) --
( 68.76, 36.73) --
( 68.41, 36.60) --
( 68.05, 36.47) --
( 67.70, 36.35) --
( 67.35, 36.23) --
( 67.00, 36.11) --
( 66.65, 36.00) --
( 66.30, 35.88) --
( 65.95, 35.77) --
( 65.60, 35.66) --
( 65.24, 35.56) --
( 64.89, 35.45) --
( 64.54, 35.35) --
( 64.19, 35.25) --
( 63.84, 35.15) --
( 63.49, 35.09) --
( 63.14, 35.09) --
( 62.78, 35.09) --
( 62.43, 35.09) --
( 62.08, 35.09) --
( 61.73, 35.09) --
( 61.38, 35.09) --
( 61.03, 35.09) --
( 60.68, 35.09) --
( 60.33, 35.09) --
( 59.97, 35.09) --
( 59.62, 35.09) --
( 59.27, 35.09) --
( 58.92, 35.09) --
( 58.57, 35.09) --
( 58.22, 35.09) --
( 57.87, 35.09) --
( 57.51, 35.09) --
( 57.16, 35.09) --
( 56.81, 35.09) --
( 56.46, 35.09) --
( 56.11, 35.09) --
( 55.76, 35.09) --
( 55.41, 35.09) --
( 55.06, 35.09) --
( 54.70, 35.09) --
( 54.35, 35.09) --
( 54.00, 35.09) --
( 53.65, 35.09) --
( 53.30, 35.09) --
( 52.95, 35.09) --
( 52.60, 35.09) --
( 52.25, 35.09) --
( 51.89, 35.09) --
( 51.54, 35.09) --
( 51.19, 35.09) --
( 50.84, 35.09) --
( 50.49, 35.09) --
( 50.14, 35.09) --
( 49.79, 35.09) --
cycle;
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 49.79, 35.09) --
( 51.79, 35.11) --
( 53.80, 35.23) --
( 55.81, 35.43) --
( 57.82, 35.80) --
( 59.82, 36.23) --
( 61.83, 36.67) --
( 63.84, 37.22) --
( 65.85, 37.83) --
( 67.85, 38.60) --
( 69.86, 39.33) --
( 71.87, 40.20) --
( 73.88, 41.17) --
( 75.88, 42.18) --
( 77.89, 43.13) --
( 79.90, 44.33) --
( 81.91, 45.52) --
( 83.91, 46.76) --
( 85.92, 48.02) --
( 87.93, 49.69) --
( 89.94, 51.16) --
( 91.94, 52.61) --
( 93.95, 54.37) --
( 95.96, 56.21) --
( 97.97, 57.99) --
( 99.97, 59.72) --
(101.98, 59.72) --
(103.99, 59.72) --
(106.00, 59.72) --
(108.00, 59.72) --
(110.01, 59.72) --
(112.02, 59.72) --
(114.03, 59.72) --
(116.03, 59.72) --
(118.04, 59.72) --
(120.05, 59.72) --
(120.76, 59.72) --
(122.06, 61.37) --
(124.06, 64.24) --
(126.07, 66.93) --
(128.08, 69.69) --
(130.09, 72.19) --
(132.09, 74.81) --
(134.10, 77.57) --
(136.11, 80.26) --
(138.12, 83.24) --
(140.12, 86.23) --
(142.13, 88.93) --
(144.14, 91.78) --
(146.15, 94.67) --
(148.15, 97.40) --
(150.16,100.26) --
(152.17,102.87) --
(154.18,105.64) --
(156.18,108.69) --
(158.19,111.69) --
(160.20,114.59) --
(162.21,117.25) --
(164.21,119.79) --
(166.22,122.53) --
(168.23,125.19) --
(170.24,128.05) --
(172.24,130.98) --
(174.25,133.71) --
(176.26,136.54) --
(178.27,139.25) --
(180.27,141.89) --
(182.28,144.49) --
(184.29,146.90) --
(186.30,149.47) --
(188.30,151.99) --
(190.31,154.41) --
(192.32,156.90) --
(194.33,159.36) --
(196.33,161.86) --
(198.34,164.17) --
(200.35,166.48) --
(202.36,168.69) --
(204.36,170.94) --
(206.37,173.47) --
(208.38,175.64) --
(210.39,177.57) --
(212.39,179.68) --
(214.40,181.46) --
(216.41,183.45) --
(218.42,185.41) --
(220.42,187.22) --
(222.43,189.10) --
(224.44,190.90) --
(226.45,192.75) --
(228.45,194.65) --
(230.46,196.31) --
(232.47,197.96) --
(234.48,199.70) --
(236.48,201.28) --
(238.49,202.94) --
(240.50,204.39) --
(242.51,205.70) --
(244.51,207.19) --
(246.52,208.62) --
(248.53,209.72) --
(249.39,210.24);
\path[draw=drawColor,line width= 0.4pt,dash pattern=on 1pt off 3pt ,line join=round,line cap=round] ( 49.79, 35.09) --
( 51.79, 35.13) --
( 53.80, 35.25) --
( 55.81, 35.45) --
( 57.82, 35.74) --
( 59.82, 36.10) --
( 61.83, 36.54) --
( 63.84, 37.06) --
( 65.85, 37.66) --
( 67.85, 38.34) --
( 69.86, 39.10) --
( 71.87, 39.93) --
( 73.88, 40.84) --
( 75.88, 41.82) --
( 77.89, 42.88) --
( 79.90, 44.00) --
( 81.91, 45.20) --
( 83.91, 46.47) --
( 85.92, 47.80) --
( 87.93, 49.20) --
( 89.94, 50.67) --
( 91.94, 52.20) --
( 93.95, 53.79) --
( 95.96, 55.44) --
( 97.97, 57.15) --
( 99.97, 58.91) --
(101.98, 60.73) --
(103.99, 62.60) --
(106.00, 64.52) --
(108.00, 66.49) --
(110.01, 68.50) --
(112.02, 70.56) --
(114.03, 72.66) --
(116.03, 74.80) --
(118.04, 76.98) --
(120.05, 79.19) --
(120.76, 79.99) --
(122.06, 81.44) --
(124.06, 83.72) --
(126.07, 86.02) --
(128.08, 88.35) --
(130.09, 90.71) --
(132.09, 93.09) --
(134.10, 95.49) --
(136.11, 97.90) --
(138.12,100.34) --
(140.12,102.78) --
(142.13,105.24) --
(144.14,107.70) --
(146.15,110.18) --
(148.15,112.65) --
(150.16,115.13) --
(152.17,117.62) --
(154.18,120.10) --
(156.18,122.57) --
(158.19,125.05) --
(160.20,127.52) --
(162.21,129.97) --
(164.21,132.42) --
(166.22,134.86) --
(168.23,137.28) --
(170.24,139.69) --
(172.24,142.08) --
(174.25,144.46) --
(176.26,146.81) --
(178.27,149.15) --
(180.27,151.46) --
(182.28,153.75) --
(184.29,156.01) --
(186.30,158.25) --
(188.30,160.47) --
(190.31,162.65) --
(192.32,164.81) --
(194.33,166.94) --
(196.33,169.03) --
(198.34,171.10) --
(200.35,173.13) --
(202.36,175.14) --
(204.36,177.11) --
(206.37,179.04) --
(208.38,180.94) --
(210.39,182.81) --
(212.39,184.64) --
(214.40,186.44) --
(216.41,188.20) --
(218.42,189.93) --
(220.42,191.62) --
(222.43,193.27) --
(224.44,194.89) --
(226.45,196.47) --
(228.45,198.02) --
(230.46,199.52) --
(232.47,201.00) --
(234.48,202.44) --
(236.48,203.84) --
(238.49,205.20) --
(240.50,206.54) --
(242.51,207.83) --
(244.51,209.09) --
(246.39,210.24);
\end{scope}
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (462.53,210.24);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (130.09, 2.51) {$t$};
\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at ( 16.96,112.35) {$G(t)$};
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] ( 43.36, 28.91) --
(216.81, 28.91) --
(216.81,195.79) --
( 43.36,195.79) --
( 43.36, 28.91);
\end{scope}
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (462.53,210.24);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (281.05, 28.91) -- (441.65, 28.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (281.05, 28.91) -- (281.05, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (321.20, 28.91) -- (321.20, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (361.35, 28.91) -- (361.35, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (401.50, 28.91) -- (401.50, 22.91);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (441.65, 28.91) -- (441.65, 22.91);
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (281.05, 15.71) {0.00};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (321.20, 15.71) {0.05};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (361.35, 15.71) {0.10};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (401.50, 15.71) {0.15};
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (441.65, 15.71) {0.20};
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (274.63, 35.09) -- (274.63,189.61);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (274.63, 35.09) -- (268.63, 35.09);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (274.63, 86.59) -- (268.63, 86.59);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (274.63,138.10) -- (268.63,138.10);
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (274.63,189.61) -- (268.63,189.61);
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at (266.23, 32.68) {0.00};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at (266.23, 84.18) {0.05};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at (266.23,135.69) {0.10};
\node[text=drawColor,anchor=base east,inner sep=0pt, outer sep=0pt, scale= 0.70] at (266.23,187.19) {0.15};
\end{scope}
\begin{scope}
\path[clip] (274.63, 28.91) rectangle (448.07,195.79);
\definecolor[named]{fillColor}{rgb}{0.83,0.83,0.83}
\path[fill=fillColor] (281.05, 35.09) --
(321.04, 35.09) --
(321.28, 35.21) --
(321.52, 35.33) --
(321.76, 35.45) --
(322.00, 35.57) --
(322.24, 35.69) --
(322.48, 35.81) --
(322.73, 35.93) --
(322.97, 36.06) --
(323.21, 36.18) --
(323.45, 36.31) --
(323.69, 36.43) --
(323.93, 36.56) --
(324.17, 36.68) --
(324.41, 36.81) --
(324.65, 36.94) --
(324.89, 37.07) --
(325.13, 37.20) --
(325.38, 37.33) --
(325.62, 37.46) --
(325.86, 37.59) --
(326.10, 37.72) --
(326.34, 37.85) --
(326.58, 37.98) --
(326.82, 38.12) --
(327.06, 38.25) --
(327.30, 38.39) --
(327.54, 38.52) --
(327.78, 38.66) --
(328.03, 38.79) --
(328.27, 38.93) --
(328.51, 39.07) --
(328.75, 39.21) --
(328.99, 39.35) --
(329.23, 39.49) --
(329.47, 39.63) --
(329.71, 39.77) --
(329.95, 39.91) --
(330.19, 40.05) --
(330.43, 40.19) --
(330.68, 40.34) --
(330.92, 40.48) --
(331.16, 40.62) --
(331.40, 40.77) --
(331.64, 40.92) --
(331.88, 41.06) --
(332.12, 41.21) --
(332.36, 41.36) --
(332.60, 41.50) --
(332.84, 41.65) --
(333.08, 41.80) --
(333.33, 41.95) --
(333.57, 42.10) --
(333.81, 42.25) --
(334.05, 42.41) --
(334.29, 42.56) --
(334.53, 42.71) --
(334.77, 42.87) --
(335.01, 43.02) --
(335.25, 43.17) --
(335.49, 43.33) --
(335.73, 43.49) --
(335.98, 43.64) --
(336.22, 43.80) --
(336.46, 43.96) --
(336.70, 44.12) --
(336.94, 44.27) --
(337.18, 44.43) --
(337.42, 44.59) --
(337.66, 44.76) --
(337.90, 44.92) --
(338.14, 45.08) --
(338.38, 45.24) --
(338.63, 45.40) --
(338.87, 45.57) --
(339.11, 45.73) --
(339.35, 45.90) --
(339.59, 46.06) --
(339.83, 46.23) --
(340.07, 46.40) --
(340.31, 46.56) --
(340.55, 46.73) --
(340.79, 46.90) --
(341.03, 47.07) --
(341.27, 47.24) --
(341.52, 47.41) --
(341.76, 47.58) --
(342.00, 47.75) --
(342.24, 47.93) --
(342.48, 48.10) --
(342.72, 48.27) --
(342.96, 48.45) --
(343.20, 48.62) --
(343.44, 48.80) --
(343.68, 48.97) --
(343.92, 49.15) --
(344.17, 49.32) --
(344.41, 49.50) --
(344.65, 49.68) --
(344.89, 49.86) --
(345.13, 50.04) --
(345.37, 50.22) --
(345.61, 50.40) --
(345.85, 50.58) --
(346.09, 50.76) --
(346.33, 50.94) --
(346.57, 51.13) --
(346.82, 51.31) --
(347.06, 51.49) --
(347.30, 51.68) --
(347.54, 51.86) --
(347.78, 52.05) --
(348.02, 52.24) --
(348.26, 52.42) --
(348.50, 52.61) --
(348.74, 52.80) --
(348.98, 52.99) --
(349.22, 53.18) --
(349.47, 53.37) --
(349.71, 53.56) --
(349.95, 53.75) --
(350.19, 53.94) --
(350.43, 54.14) --
(350.67, 54.33) --
(350.91, 54.52) --
(351.15, 54.72) --
(351.39, 54.91) --
(351.63, 55.11) --
(351.87, 55.30) --
(352.12, 55.50) --
(352.36, 55.70) --
(352.60, 55.89) --
(352.84, 56.09) --
(353.08, 56.29) --
(353.32, 56.49) --
(353.56, 56.69) --
(353.80, 56.89) --
(354.04, 57.09) --
(354.28, 57.30) --
(354.52, 57.50) --
(354.77, 57.70) --
(355.01, 57.91) --
(355.25, 58.11) --
(355.49, 58.32) --
(355.73, 58.52) --
(355.97, 58.73) --
(356.21, 58.94) --
(356.45, 59.14) --
(356.69, 59.35) --
(356.93, 59.56) --
(357.17, 59.77) --
(357.42, 59.98) --
(357.66, 60.19) --
(357.90, 60.40) --
(358.14, 60.61) --
(358.38, 60.83) --
(358.62, 61.04) --
(358.86, 61.25) --
(359.10, 61.47) --
(359.34, 61.68) --
(359.58, 61.90) --
(359.82, 62.11) --
(360.07, 62.33) --
(360.31, 62.55) --
(360.55, 62.77) --
(360.79, 62.99) --
(361.03, 63.21) --
(361.27, 63.43) --
(361.51, 63.65) --
(361.75, 63.87) --
(361.99, 64.09) --
(362.23, 64.31) --
(362.47, 64.54) --
(362.72, 64.76) --
(362.96, 64.98) --
(363.20, 65.21) --
(363.44, 65.44) --
(363.68, 65.66) --
(363.92, 65.89) --
(364.16, 66.12) --
(364.40, 66.35) --
(364.64, 66.58) --
(364.88, 66.81) --
(365.12, 67.04) --
(365.37, 67.27) --
(365.61, 67.50) --
(365.85, 67.73) --
(366.09, 67.97) --
(366.33, 68.20) --
(366.57, 68.43) --
(366.81, 68.67) --
(367.05, 68.91) --
(367.29, 69.14) --
(367.53, 69.38) --
(367.77, 69.62) --
(368.01, 69.86) --
(368.26, 70.10) --
(368.50, 70.34) --
(368.74, 70.58) --
(368.98, 70.82) --
(369.22, 71.06) --
(369.46, 71.30) --
(369.70, 71.55) --
(369.94, 71.79) --
(370.18, 72.03) --
(370.42, 72.28) --
(370.66, 72.53) --
(370.91, 72.77) --
(371.15, 73.02) --
(371.39, 73.27) --
(371.63, 73.52) --
(371.87, 73.76) --
(372.11, 74.01) --
(372.35, 74.26) --
(372.59, 74.52) --
(372.83, 74.77) --
(373.07, 75.02) --
(373.31, 75.27) --
(373.56, 75.53) --
(373.80, 75.78) --
(374.04, 76.04) --
(374.28, 76.29) --
(374.52, 76.55) --
(374.76, 76.81) --
(375.00, 77.06) --
(375.24, 77.32) --
(375.48, 77.58) --
(375.72, 77.84) --
(375.96, 78.10) --
(376.21, 78.36) --
(376.45, 78.62) --
(376.69, 78.89) --
(376.93, 79.15) --
(377.17, 79.41) --
(377.41, 79.68) --
(377.65, 79.94) --
(377.89, 80.21) --
(378.13, 80.48) --
(378.37, 80.74) --
(378.61, 81.01) --
(378.86, 81.28) --
(379.10, 81.55) --
(379.34, 81.82) --
(379.58, 82.09) --
(379.82, 82.36) --
(380.06, 82.63) --
(380.30, 82.90) --
(380.54, 83.18) --
(380.78, 83.45) --
(381.02, 83.72) --
(381.26, 84.00) --
(381.51, 84.27) --
(381.75, 84.55) --
(381.99, 84.83) --
(382.23, 85.11) --
(382.47, 85.38) --
(382.71, 85.66) --
(382.95, 85.94) --
(383.19, 86.22) --
(383.43, 86.50) --
(383.67, 86.78) --
(383.91, 87.07) --
(384.16, 87.35) --
(384.40, 87.63) --
(384.64, 87.92) --
(384.88, 88.20) --
(385.12, 88.49) --
(385.36, 88.77) --
(385.60, 89.06) --
(385.84, 89.35) --
(386.08, 89.64) --
(386.32, 89.93) --
(386.56, 90.22) --
(386.81, 90.51) --
(387.05, 90.80) --
(387.29, 91.09) --
(387.53, 91.38) --
(387.77, 91.67) --
(388.01, 91.97) --
(388.25, 92.26) --
(388.49, 92.56) --
(388.73, 92.85) --
(388.97, 93.15) --
(389.21, 93.44) --
(389.45, 93.74) --
(389.70, 94.04) --
(389.94, 94.34) --
(390.18, 94.64) --
(390.42, 94.94) --
(390.66, 95.24) --
(390.90, 95.54) --
(391.14, 95.84) --
(391.38, 96.14) --
(391.62, 96.45) --
(391.86, 96.75) --
(392.10, 97.06) --
(392.35, 97.36) --
(392.59, 97.67) --
(392.83, 97.97) --
(393.07, 98.28) --
(393.31, 98.59) --
(393.55, 98.90) --
(393.79, 99.21) --
(394.03, 99.52) --
(394.27, 99.83) --
(394.51,100.14) --
(394.75,100.45) --
(395.00,100.76) --
(395.24,101.08) --
(395.48,101.39) --
(395.72,101.71) --
(395.96,102.02) --
(396.20,102.34) --
(396.44,102.65) --
(396.68,102.97) --
(396.92,103.29) --
(397.16,103.61) --
(397.40,103.93) --
(397.65,104.25) --
(397.89,104.57) --
(398.13,104.89) --
(398.37,105.21) --
(398.61,105.53) --
(398.85,105.86) --
(399.09,106.18) --
(399.33,106.51) --
(399.57,106.83) --
(399.81,107.16) --
(400.05,107.48) --
(400.30,107.81) --
(400.54,108.14) --
(400.78,108.47) --
(401.02,108.80) --
(401.26,109.13) --
(401.50,109.46) --
(401.74,109.79) --
(401.98,110.12) --
(402.22,110.45) --
(402.46,110.78) --
(402.70,111.12) --
(402.95,111.45) --
(403.19,111.79) --
(403.43,112.12) --
(403.67,112.46) --
(403.91,112.80) --
(404.15,113.14) --
(404.39,113.47) --
(404.63,113.81) --
(404.87,114.15) --
(405.11,114.49) --
(405.35,114.83) --
(405.60,115.18) --
(405.84,115.52) --
(406.08,115.86) --
(406.32,116.21) --
(406.56,116.55) --
(406.80,116.89) --
(407.04,117.24) --
(407.28,117.59) --
(407.52,117.93) --
(407.76,118.28) --
(408.00,118.63) --
(408.25,118.98) --
(408.49,119.33) --
(408.73,119.68) --
(408.97,120.03) --
(409.21,120.38) --
(409.45,120.73) --
(409.69,121.09) --
(409.93,121.44) --
(410.17,121.79) --
(410.41,122.15) --
(410.65,122.51) --
(410.90,122.86) --
(411.14,123.22) --
(411.38,123.58) --
(411.62,123.93) --
(411.86,124.29) --
(412.10,124.65) --
(412.34,125.01) --
(412.58,125.37) --
(412.82,125.74) --
(413.06,126.10) --
(413.30,126.46) --
(413.54,126.82) --
(413.79,127.19) --
(414.03,127.55) --
(414.27,127.92) --
(414.51,128.28) --
(414.75,128.65) --
(414.99,129.02) --
(415.23,129.39) --
(415.47,129.76) --
(415.71,130.13) --
(415.95,130.50) --
(416.19,130.87) --
(416.44,131.24) --
(416.68,131.61) --
(416.92,131.98) --
(417.16,132.36) --
(417.40,132.73) --
(417.64,133.11) --
(417.88,133.48) --
(418.12,133.86) --
(418.36,134.23) --
(418.60,134.61) --
(418.84,134.99) --
(419.09,135.37) --
(419.33,135.75) --
(419.57,136.13) --
(419.81,136.51) --
(420.05,136.89) --
(420.29,137.27) --
(420.53,137.65) --
(420.77,138.04) --
(421.01,138.42) --
(421.25,138.81) --
(421.49,139.19) --
(421.74,139.58) --
(421.98,139.96) --
(422.22,140.35) --
(422.46,140.74) --
(422.70,141.13) --
(422.94,141.52) --
(423.18,141.91) --
(423.42,142.30) --
(423.66,142.69) --
(423.90,143.08) --
(424.14,143.47) --
(424.39,143.87) --
(424.63,144.26) --
(424.87,144.66) --
(425.11,145.05) --
(425.35,145.45) --
(425.59,145.84) --
(425.83,146.24) --
(426.07,146.64) --
(426.31,147.04) --
(426.55,147.44) --
(426.79,147.84) --
(427.04,148.24) --
(427.28,148.64) --
(427.52,149.04) --
(427.76,149.44) --
(428.00,149.85) --
(428.24,150.25) --
(428.48,150.66) --
(428.72,151.06) --
(428.96,151.47) --
(429.20,151.87) --
(429.44,152.28) --
(429.69,152.69) --
(429.93,153.10) --
(430.17,153.51) --
(430.41,153.92) --
(430.65,154.33) --
(430.89,154.74) --
(431.13,155.15) --
(431.37,155.56) --
(431.61,155.97) --
(431.85,156.39) --
(432.09,156.80) --
(432.34,157.22) --
(432.58,157.63) --
(432.82,158.05) --
(433.06,158.47) --
(433.30,158.88) --
(433.54,159.30) --
(433.78,159.72) --
(434.02,160.14) --
(434.26,160.56) --
(434.50,160.98) --
(434.74,161.40) --
(434.99,161.83) --
(435.23,162.25) --
(435.47,162.67) --
(435.71,163.10) --
(435.95,163.52) --
(436.19,163.95) --
(436.43,164.38) --
(436.67,164.80) --
(436.91,165.23) --
(437.15,165.66) --
(437.39,166.09) --
(437.63,166.52) --
(437.88,166.95) --
(438.12,167.38) --
(438.36,167.81) --
(438.60,168.24) --
(438.84,168.68) --
(439.08,169.11) --
(439.32,169.54) --
(439.56,169.98) --
(439.80,170.41) --
(440.04,170.85) --
(440.28,171.29) --
(440.53,171.72) --
(440.77,172.16) --
(441.01,172.60) --
(441.25,173.04) --
(441.49,173.48) --
(441.73,173.92) --
(441.97,174.36) --
(442.21,174.81) --
(442.45,175.25) --
(442.69,175.69) --
(442.93,176.14) --
(443.18,176.58) --
(443.42,177.03) --
(443.66,177.47) --
(443.90,177.92) --
(444.14,178.37) --
(444.38,178.82) --
(444.62,179.26) --
(444.86,179.71) --
(445.10,180.16) --
(445.34,180.61) --
(445.58,181.07) --
(445.83,181.52) --
(446.07,181.97) --
(446.31,182.42) --
(446.55,182.88) --
(446.79,183.33) --
(447.03,183.79) --
(447.27,184.24) --
(447.51,184.70) --
(447.75,185.16) --
(447.99,185.62) --
(448.23,186.08) --
(448.48,186.54) --
(448.72,187.00) --
(448.96,187.46) --
(449.20,187.92) --
(449.44,188.38) --
(449.68,188.84) --
(449.92,189.31) --
(450.16,189.77) --
(450.40,190.23) --
(450.64,190.70) --
(450.88,191.17) --
(451.13,191.63) --
(451.37,192.10) --
(451.61,192.57) --
(451.85,193.04) --
(452.09,193.51) --
(452.33,193.98) --
(452.57,194.45) --
(452.81,194.92) --
(453.05,195.39) --
(453.29,195.86) --
(453.53,196.34) --
(453.78,196.81) --
(454.02,197.28) --
(454.26,197.76) --
(454.50,198.23) --
(454.74,198.71) --
(454.98,199.19) --
(455.22,199.67) --
(455.46,200.15) --
(455.70,200.62) --
(455.94,201.10) --
(456.18,201.58) --
(456.43,202.07) --
(456.67,202.55) --
(456.91,203.03) --
(457.15,203.51) --
(457.39,204.00) --
(457.63,204.48) --
(457.87,204.97) --
(458.11,205.45) --
(458.35,205.94) --
(458.59,206.43) --
(458.83,206.91) --
(459.08,207.40) --
(459.32,207.89) --
(459.56,208.38) --
(459.80,208.87) --
(460.04,209.36) --
(460.28,209.86) --
(460.47,210.24) --
(462.53,210.24) --
(462.53,160.24) --
(462.45,160.13) --
(462.21,159.78) --
(461.97,159.44) --
(461.73,159.10) --
(461.48,158.75) --
(461.24,158.41) --
(461.00,158.07) --
(460.76,157.72) --
(460.52,157.38) --
(460.28,157.04) --
(460.04,156.70) --
(459.80,156.36) --
(459.56,156.02) --
(459.32,155.68) --
(459.08,155.34) --
(458.83,155.00) --
(458.59,154.67) --
(458.35,154.33) --
(458.11,153.99) --
(457.87,153.66) --
(457.63,153.32) --
(457.39,152.98) --
(457.15,152.65) --
(456.91,152.31) --
(456.67,151.98) --
(456.43,151.65) --
(456.18,151.31) --
(455.94,150.98) --
(455.70,150.65) --
(455.46,150.31) --
(455.22,149.98) --
(454.98,149.65) --
(454.74,149.32) --
(454.50,148.99) --
(454.26,148.66) --
(454.02,148.33) --
(453.78,148.00) --
(453.53,147.67) --
(453.29,147.35) --
(453.05,147.02) --
(452.81,146.69) --
(452.57,146.37) --
(452.33,146.04) --
(452.09,145.71) --
(451.85,145.39) --
(451.61,145.06) --
(451.37,144.74) --
(451.13,144.42) --
(450.88,144.09) --
(450.64,143.77) --
(450.40,143.45) --
(450.16,143.13) --
(449.92,142.80) --
(449.68,142.48) --
(449.44,142.16) --
(449.20,141.84) --
(448.96,141.52) --
(448.72,141.20) --
(448.48,140.89) --
(448.23,140.57) --
(447.99,140.25) --
(447.75,139.93) --
(447.51,139.61) --
(447.27,139.30) --
(447.03,138.98) --
(446.79,138.67) --
(446.55,138.35) --
(446.31,138.04) --
(446.07,137.72) --
(445.83,137.41) --
(445.58,137.10) --
(445.34,136.78) --
(445.10,136.47) --
(444.86,136.16) --
(444.62,135.85) --
(444.38,135.54) --
(444.14,135.23) --
(443.90,134.92) --
(443.66,134.61) --
(443.42,134.30) --
(443.18,133.99) --
(442.93,133.68) --
(442.69,133.37) --
(442.45,133.07) --
(442.21,132.76) --
(441.97,132.45) --
(441.73,132.15) --
(441.49,131.84) --
(441.25,131.54) --
(441.01,131.23) --
(440.77,130.93) --
(440.53,130.62) --
(440.28,130.32) --
(440.04,130.02) --
(439.80,129.72) --
(439.56,129.41) --
(439.32,129.11) --
(439.08,128.81) --
(438.84,128.51) --
(438.60,128.21) --
(438.36,127.91) --
(438.12,127.61) --
(437.88,127.31) --
(437.63,127.02) --
(437.39,126.72) --
(437.15,126.42) --
(436.91,126.12) --
(436.67,125.83) --
(436.43,125.53) --
(436.19,125.24) --
(435.95,124.94) --
(435.71,124.65) --
(435.47,124.35) --
(435.23,124.06) --
(434.99,123.77) --
(434.74,123.47) --
(434.50,123.18) --
(434.26,122.89) --
(434.02,122.60) --
(433.78,122.31) --
(433.54,122.02) --
(433.30,121.73) --
(433.06,121.44) --
(432.82,121.15) --
(432.58,120.86) --
(432.34,120.57) --
(432.09,120.29) --
(431.85,120.00) --
(431.61,119.71) --
(431.37,119.43) --
(431.13,119.14) --
(430.89,118.85) --
(430.65,118.57) --
(430.41,118.29) --
(430.17,118.00) --
(429.93,117.72) --
(429.69,117.43) --
(429.44,117.15) --
(429.20,116.87) --
(428.96,116.59) --
(428.72,116.31) --
(428.48,116.03) --
(428.24,115.75) --
(428.00,115.47) --
(427.76,115.19) --
(427.52,114.91) --
(427.28,114.63) --
(427.04,114.35) --
(426.79,114.07) --
(426.55,113.80) --
(426.31,113.52) --
(426.07,113.24) --
(425.83,112.97) --
(425.59,112.69) --
(425.35,112.42) --
(425.11,112.14) --
(424.87,111.87) --
(424.63,111.60) --
(424.39,111.32) --
(424.14,111.05) --
(423.90,110.78) --
(423.66,110.51) --
(423.42,110.24) --
(423.18,109.97) --
(422.94,109.70) --
(422.70,109.43) --
(422.46,109.16) --
(422.22,108.89) --
(421.98,108.62) --
(421.74,108.35) --
(421.49,108.08) --
(421.25,107.82) --
(421.01,107.55) --
(420.77,107.29) --
(420.53,107.02) --
(420.29,106.75) --
(420.05,106.49) --
(419.81,106.23) --
(419.57,105.96) --
(419.33,105.70) --
(419.09,105.44) --
(418.84,105.17) --
(418.60,104.91) --
(418.36,104.65) --
(418.12,104.39) --
(417.88,104.13) --
(417.64,103.87) --
(417.40,103.61) --
(417.16,103.35) --
(416.92,103.09) --
(416.68,102.83) --
(416.44,102.57) --
(416.19,102.32) --
(415.95,102.06) --
(415.71,101.80) --
(415.47,101.55) --
(415.23,101.29) --
(414.99,101.04) --
(414.75,100.78) --
(414.51,100.53) --
(414.27,100.27) --
(414.03,100.02) --
(413.79, 99.77) --
(413.54, 99.52) --
(413.30, 99.26) --
(413.06, 99.01) --
(412.82, 98.76) --
(412.58, 98.51) --
(412.34, 98.26) --
(412.10, 98.01) --
(411.86, 97.76) --
(411.62, 97.51) --
(411.38, 97.27) --
(411.14, 97.02) --
(410.90, 96.77) --
(410.65, 96.52) --
(410.41, 96.28) --
(410.17, 96.03) --
(409.93, 95.79) --
(409.69, 95.54) --
(409.45, 95.30) --
(409.21, 95.05) --
(408.97, 94.81) --
(408.73, 94.57) --
(408.49, 94.32) --
(408.25, 94.08) --
(408.00, 93.84) --
(407.76, 93.60) --
(407.52, 93.36) --
(407.28, 93.12) --
(407.04, 92.88) --
(406.80, 92.64) --
(406.56, 92.40) --
(406.32, 92.16) --
(406.08, 91.92) --
(405.84, 91.68) --
(405.60, 91.45) --
(405.35, 91.21) --
(405.11, 90.97) --
(404.87, 90.74) --
(404.63, 90.50) --
(404.39, 90.27) --
(404.15, 90.03) --
(403.91, 89.80) --
(403.67, 89.56) --
(403.43, 89.33) --
(403.19, 89.10) --
(402.95, 88.87) --
(402.70, 88.64) --
(402.46, 88.40) --
(402.22, 88.17) --
(401.98, 87.94) --
(401.74, 87.71) --
(401.50, 87.48) --
(401.26, 87.25) --
(401.02, 87.03) --
(400.78, 86.80) --
(400.54, 86.57) --
(400.30, 86.34) --
(400.05, 86.12) --
(399.81, 85.89) --
(399.57, 85.66) --
(399.33, 85.44) --
(399.09, 85.21) --
(398.85, 84.99) --
(398.61, 84.77) --
(398.37, 84.54) --
(398.13, 84.32) --
(397.89, 84.10) --
(397.65, 83.87) --
(397.40, 83.65) --
(397.16, 83.43) --
(396.92, 83.21) --
(396.68, 82.99) --
(396.44, 82.77) --
(396.20, 82.55) --
(395.96, 82.33) --
(395.72, 82.11) --
(395.48, 81.90) --
(395.24, 81.68) --
(395.00, 81.46) --
(394.75, 81.25) --
(394.51, 81.03) --
(394.27, 80.81) --
(394.03, 80.60) --
(393.79, 80.38) --
(393.55, 80.17) --
(393.31, 79.96) --
(393.07, 79.74) --
(392.83, 79.53) --
(392.59, 79.32) --
(392.35, 79.10) --
(392.10, 78.89) --
(391.86, 78.68) --
(391.62, 78.47) --
(391.38, 78.26) --
(391.14, 78.05) --
(390.90, 77.84) --
(390.66, 77.63) --
(390.42, 77.42) --
(390.18, 77.22) --
(389.94, 77.01) --
(389.70, 76.80) --
(389.45, 76.60) --
(389.21, 76.39) --
(388.97, 76.18) --
(388.73, 75.98) --
(388.49, 75.77) --
(388.25, 75.57) --
(388.01, 75.37) --
(387.77, 75.16) --
(387.53, 74.96) --
(387.29, 74.76) --
(387.05, 74.56) --
(386.81, 74.35) --
(386.56, 74.15) --
(386.32, 73.95) --
(386.08, 73.75) --
(385.84, 73.55) --
(385.60, 73.35) --
(385.36, 73.15) --
(385.12, 72.96) --
(384.88, 72.76) --
(384.64, 72.56) --
(384.40, 72.36) --
(384.16, 72.17) --
(383.91, 71.97) --
(383.67, 71.78) --
(383.43, 71.58) --
(383.19, 71.39) --
(382.95, 71.19) --
(382.71, 71.00) --
(382.47, 70.81) --
(382.23, 70.61) --
(381.99, 70.42) --
(381.75, 70.23) --
(381.51, 70.04) --
(381.26, 69.85) --
(381.02, 69.66) --
(380.78, 69.47) --
(380.54, 69.28) --
(380.30, 69.09) --
(380.06, 68.90) --
(379.82, 68.71) --
(379.58, 68.52) --
(379.34, 68.33) --
(379.10, 68.15) --
(378.86, 67.96) --
(378.61, 67.78) --
(378.37, 67.59) --
(378.13, 67.40) --
(377.89, 67.22) --
(377.65, 67.04) --
(377.41, 66.85) --
(377.17, 66.67) --
(376.93, 66.49) --
(376.69, 66.30) --
(376.45, 66.12) --
(376.21, 65.94) --
(375.96, 65.76) --
(375.72, 65.58) --
(375.48, 65.40) --
(375.24, 65.22) --
(375.00, 65.04) --
(374.76, 64.86) --
(374.52, 64.68) --
(374.28, 64.51) --
(374.04, 64.33) --
(373.80, 64.15) --
(373.56, 63.98) --
(373.31, 63.80) --
(373.07, 63.63) --
(372.83, 63.45) --
(372.59, 63.28) --
(372.35, 63.10) --
(372.11, 62.93) --
(371.87, 62.76) --
(371.63, 62.58) --
(371.39, 62.41) --
(371.15, 62.24) --
(370.91, 62.07) --
(370.66, 61.90) --
(370.42, 61.73) --
(370.18, 61.56) --
(369.94, 61.39) --
(369.70, 61.22) --
(369.46, 61.05) --
(369.22, 60.88) --
(368.98, 60.71) --
(368.74, 60.55) --
(368.50, 60.38) --
(368.26, 60.21) --
(368.01, 60.05) --
(367.77, 59.88) --
(367.53, 59.72) --
(367.29, 59.55) --
(367.05, 59.39) --
(366.81, 59.23) --
(366.57, 59.06) --
(366.33, 58.90) --
(366.09, 58.74) --
(365.85, 58.58) --
(365.61, 58.42) --
(365.37, 58.25) --
(365.12, 58.09) --
(364.88, 57.93) --
(364.64, 57.78) --
(364.40, 57.62) --
(364.16, 57.46) --
(363.92, 57.30) --
(363.68, 57.14) --
(363.44, 56.99) --
(363.20, 56.83) --
(362.96, 56.67) --
(362.72, 56.52) --
(362.47, 56.36) --
(362.23, 56.21) --
(361.99, 56.05) --
(361.75, 55.90) --
(361.51, 55.74) --
(361.27, 55.59) --
(361.03, 55.44) --
(360.79, 55.29) --
(360.55, 55.14) --
(360.31, 54.98) --
(360.07, 54.83) --
(359.82, 54.68) --
(359.58, 54.53) --
(359.34, 54.38) --
(359.10, 54.23) --
(358.86, 54.09) --
(358.62, 53.94) --
(358.38, 53.79) --
(358.14, 53.64) --
(357.90, 53.49) --
(357.66, 53.35) --
(357.42, 53.20) --
(357.17, 53.05) --
(356.93, 52.91) --
(356.69, 52.76) --
(356.45, 52.62) --
(356.21, 52.47) --
(355.97, 52.33) --
(355.73, 52.19) --
(355.49, 52.04) --
(355.25, 51.90) --
(355.01, 51.76) --
(354.77, 51.61) --
(354.52, 51.47) --
(354.28, 51.33) --
(354.04, 51.19) --
(353.80, 51.05) --
(353.56, 50.91) --
(353.32, 50.77) --
(353.08, 50.63) --
(352.84, 50.49) --
(352.60, 50.35) --
(352.36, 50.21) --
(352.12, 50.07) --
(351.87, 49.93) --
(351.63, 49.79) --
(351.39, 49.65) --
(351.15, 49.52) --
(350.91, 49.38) --
(350.67, 49.24) --
(350.43, 49.11) --
(350.19, 48.97) --
(349.95, 48.84) --
(349.71, 48.70) --
(349.47, 48.57) --
(349.22, 48.43) --
(348.98, 48.30) --
(348.74, 48.16) --
(348.50, 48.03) --
(348.26, 47.90) --
(348.02, 47.76) --
(347.78, 47.63) --
(347.54, 47.50) --
(347.30, 47.37) --
(347.06, 47.23) --
(346.82, 47.10) --
(346.57, 46.97) --
(346.33, 46.84) --
(346.09, 46.71) --
(345.85, 46.58) --
(345.61, 46.45) --
(345.37, 46.32) --
(345.13, 46.19) --
(344.89, 46.06) --
(344.65, 45.94) --
(344.41, 45.81) --
(344.17, 45.68) --
(343.92, 45.55) --
(343.68, 45.43) --
(343.44, 45.30) --
(343.20, 45.17) --
(342.96, 45.05) --
(342.72, 44.92) --
(342.48, 44.80) --
(342.24, 44.67) --
(342.00, 44.55) --
(341.76, 44.42) --
(341.52, 44.30) --
(341.27, 44.18) --
(341.03, 44.05) --
(340.79, 43.93) --
(340.55, 43.81) --
(340.31, 43.69) --
(340.07, 43.56) --
(339.83, 43.44) --
(339.59, 43.32) --
(339.35, 43.20) --
(339.11, 43.08) --
(338.87, 42.96) --
(338.63, 42.84) --
(338.38, 42.72) --
(338.14, 42.60) --
(337.90, 42.48) --
(337.66, 42.37) --
(337.42, 42.25) --
(337.18, 42.13) --
(336.94, 42.01) --
(336.70, 41.90) --
(336.46, 41.78) --
(336.22, 41.66) --
(335.98, 41.55) --
(335.73, 41.43) --
(335.49, 41.32) --
(335.25, 41.20) --
(335.01, 41.09) --
(334.77, 40.97) --
(334.53, 40.86) --
(334.29, 40.75) --
(334.05, 40.63) --
(333.81, 40.52) --
(333.57, 40.41) --
(333.33, 40.30) --
(333.08, 40.18) --
(332.84, 40.07) --
(332.60, 39.96) --
(332.36, 39.85) --
(332.12, 39.74) --
(331.88, 39.63) --
(331.64, 39.52) --
(331.40, 39.41) --
(331.16, 39.30) --
(330.92, 39.20) --
(330.68, 39.09) --
(330.43, 38.98) --
(330.19, 38.87) --
(329.95, 38.77) --
(329.71, 38.66) --
(329.47, 38.55) --
(329.23, 38.45) --
(328.99, 38.34) --
(328.75, 38.24) --
(328.51, 38.13) --
(328.27, 38.03) --
(328.03, 37.93) --
(327.78, 37.82) --
(327.54, 37.72) --
(327.30, 37.62) --
(327.06, 37.51) --
(326.82, 37.41) --
(326.58, 37.31) --
(326.34, 37.21) --
(326.10, 37.11) --
(325.86, 37.01) --
(325.62, 36.91) --
(325.38, 36.81) --
(325.13, 36.71) --
(324.89, 36.61) --
(324.65, 36.51) --
(324.41, 36.42) --
(324.17, 36.32) --
(323.93, 36.22) --
(323.69, 36.12) --
(323.45, 36.03) --
(323.21, 35.93) --
(322.97, 35.84) --
(322.73, 35.74) --
(322.48, 35.65) --
(322.24, 35.55) --
(322.00, 35.46) --
(321.76, 35.37) --
(321.52, 35.27) --
(321.28, 35.18) --
(321.04, 35.09) --
(284.18, 35.09) --
(283.94, 35.09) --
(283.70, 35.09) --
(283.46, 35.09) --
(283.22, 35.09) --
(282.98, 35.09) --
(282.74, 35.09) --
(282.50, 35.09) --
(282.25, 35.09) --
(282.01, 35.09) --
(281.77, 35.09) --
(281.53, 35.09) --
(281.29, 35.09) --
(281.05, 35.09) --
cycle;
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (281.05, 35.09) --
(281.44, 35.09) --
(281.83, 35.09) --
(282.23, 35.09) --
(282.62, 35.09) --
(283.01, 35.09) --
(283.40, 35.09) --
(283.80, 35.09) --
(284.19, 35.09) --
(284.58, 35.09) --
(284.97, 35.09) --
(285.37, 35.09) --
(285.76, 35.09) --
(286.15, 35.09) --
(286.54, 35.09) --
(286.94, 35.09) --
(287.33, 35.09) --
(287.72, 35.09) --
(288.11, 35.09) --
(288.51, 35.09) --
(288.90, 35.09) --
(289.29, 35.09) --
(289.68, 35.09) --
(290.08, 35.09) --
(290.47, 35.09) --
(290.86, 35.09) --
(291.25, 35.09) --
(291.65, 35.09) --
(292.04, 35.09) --
(292.43, 35.09) --
(292.82, 35.09) --
(293.22, 35.09) --
(293.61, 35.09) --
(294.00, 35.09) --
(294.39, 35.09) --
(294.79, 35.09) --
(295.18, 35.09) --
(295.57, 35.09) --
(295.96, 35.09) --
(296.36, 35.09) --
(296.75, 35.09) --
(297.14, 35.09) --
(297.53, 35.09) --
(297.93, 35.09) --
(298.32, 35.09) --
(298.71, 35.09) --
(299.10, 35.09) --
(299.50, 35.09) --
(299.89, 35.09) --
(300.28, 35.09) --
(300.67, 35.09) --
(301.07, 35.09) --
(301.46, 35.09) --
(301.85, 35.09) --
(302.24, 35.09) --
(302.64, 35.09) --
(303.03, 35.09) --
(303.42, 35.09) --
(303.81, 35.09) --
(304.21, 35.09) --
(304.60, 35.09) --
(304.99, 35.09) --
(305.38, 35.09) --
(305.78, 35.09) --
(306.17, 35.09) --
(306.56, 35.09) --
(306.95, 35.09) --
(307.35, 35.09) --
(307.74, 35.09) --
(308.13, 35.09) --
(308.52, 35.09) --
(308.92, 35.09) --
(309.31, 35.09) --
(309.70, 35.09) --
(310.09, 35.09) --
(310.49, 35.09) --
(310.88, 35.09) --
(311.27, 35.09) --
(311.66, 35.09) --
(312.06, 35.09) --
(312.45, 35.09) --
(312.84, 35.09) --
(313.23, 35.09) --
(313.63, 35.09) --
(314.02, 35.09) --
(314.41, 35.09) --
(314.80, 35.09) --
(315.20, 35.09) --
(315.59, 35.09) --
(315.98, 35.09) --
(316.37, 35.09) --
(316.77, 35.09) --
(317.16, 35.09) --
(317.55, 35.09) --
(317.94, 35.09) --
(318.34, 35.09) --
(318.73, 35.09) --
(319.12, 35.09) --
(319.51, 35.09) --
(319.91, 35.09) --
(320.30, 35.09) --
(320.69, 35.09) --
(321.08, 35.09) --
(321.48, 35.19) --
(321.87, 35.38) --
(322.26, 35.59) --
(322.65, 35.77) --
(323.05, 35.91) --
(323.44, 36.08) --
(323.83, 36.27) --
(324.22, 36.45) --
(324.62, 36.60) --
(325.01, 36.79) --
(325.40, 36.96) --
(325.79, 37.15) --
(326.19, 37.35) --
(326.58, 37.52) --
(326.97, 37.73) --
(327.36, 37.93) --
(327.76, 38.19) --
(328.15, 38.32) --
(328.54, 38.53) --
(328.93, 38.79) --
(329.33, 38.99) --
(329.72, 39.22) --
(330.11, 39.39) --
(330.50, 39.62) --
(330.90, 39.82) --
(331.29, 40.03) --
(331.68, 40.24) --
(332.07, 40.44) --
(332.47, 40.66) --
(332.86, 40.87) --
(333.25, 41.06) --
(333.64, 41.28) --
(334.04, 41.50) --
(334.43, 41.76) --
(334.82, 42.00) --
(335.21, 42.20) --
(335.60, 42.40) --
(336.00, 42.62) --
(336.39, 42.88) --
(336.78, 43.09) --
(337.17, 43.36) --
(337.57, 43.62) --
(337.96, 43.80) --
(338.35, 44.11) --
(338.74, 44.33) --
(339.14, 44.59) --
(339.53, 44.86) --
(339.92, 45.12) --
(340.31, 45.32) --
(340.71, 45.55) --
(341.10, 45.78) --
(341.49, 46.03) --
(341.88, 46.25) --
(342.28, 46.50) --
(342.67, 46.82) --
(343.06, 47.06) --
(343.45, 47.32) --
(343.85, 47.54) --
(344.24, 47.79) --
(344.63, 48.03) --
(345.02, 48.31) --
(345.42, 48.56) --
(345.81, 48.80) --
(346.20, 49.05) --
(346.59, 49.32) --
(346.99, 49.59) --
(347.38, 49.89) --
(347.77, 50.14) --
(348.16, 50.40) --
(348.56, 50.61) --
(348.95, 50.89) --
(349.34, 51.19) --
(349.73, 51.50) --
(350.13, 51.77) --
(350.52, 52.00) --
(350.91, 52.27) --
(351.30, 52.56) --
(351.70, 52.84) --
(352.09, 53.10) --
(352.48, 53.38) --
(352.87, 53.64) --
(353.27, 53.92) --
(353.66, 54.23) --
(354.05, 54.47) --
(354.44, 54.75) --
(354.84, 55.00) --
(355.23, 55.28) --
(355.62, 55.57) --
(356.01, 55.87) --
(356.41, 56.21) --
(356.80, 56.50) --
(357.19, 56.78) --
(357.58, 57.09) --
(357.98, 57.43) --
(358.37, 57.80) --
(358.76, 58.05) --
(359.15, 58.34) --
(359.55, 58.68) --
(359.94, 58.96) --
(360.33, 59.28) --
(360.72, 59.58) --
(361.12, 59.82) --
(361.51, 60.15) --
(361.90, 60.47) --
(362.29, 60.81) --
(362.69, 61.09) --
(363.08, 61.43) --
(363.47, 61.74) --
(363.86, 62.04) --
(364.26, 62.36) --
(364.65, 62.65) --
(365.04, 62.99) --
(365.43, 63.34) --
(365.83, 63.76) --
(366.22, 64.12) --
(366.61, 64.42) --
(367.00, 64.75) --
(367.40, 65.10) --
(367.79, 65.44) --
(368.18, 65.77) --
(368.57, 66.09) --
(368.97, 66.43) --
(369.36, 66.77) --
(369.75, 67.13) --
(370.14, 67.53) --
(370.54, 67.84) --
(370.93, 68.17) --
(371.32, 68.57) --
(371.71, 68.91) --
(372.11, 69.31) --
(372.50, 69.64) --
(372.89, 70.03) --
(373.28, 70.36) --
(373.68, 70.71) --
(374.07, 71.11) --
(374.46, 71.44) --
(374.85, 71.79) --
(375.25, 72.18) --
(375.64, 72.57) --
(376.03, 72.94) --
(376.42, 73.29) --
(376.82, 73.68) --
(377.21, 74.03) --
(377.60, 74.42) --
(377.99, 74.81) --
(378.39, 75.18) --
(378.78, 75.59) --
(379.17, 75.98) --
(379.56, 76.39) --
(379.96, 76.77) --
(380.35, 77.17) --
(380.74, 77.55) --
(381.13, 77.94) --
(381.53, 78.44) --
(381.92, 78.84) --
(382.31, 79.25) --
(382.70, 79.65) --
(383.10, 80.06) --
(383.49, 80.45) --
(383.88, 80.88) --
(384.27, 81.30) --
(384.67, 81.74) --
(385.06, 82.13) --
(385.45, 82.55) --
(385.84, 83.01) --
(386.24, 83.43) --
(386.63, 83.76) --
(387.02, 84.16) --
(387.41, 84.61) --
(387.81, 85.02) --
(388.20, 85.45) --
(388.59, 85.86) --
(388.98, 86.27) --
(389.38, 86.69) --
(389.77, 87.10) --
(390.16, 87.56) --
(390.55, 88.00) --
(390.94, 88.43) --
(391.34, 88.77) --
(391.73, 89.19) --
(392.12, 89.68) --
(392.51, 90.14) --
(392.91, 90.61) --
(393.30, 91.14) --
(393.69, 91.56) --
(394.08, 91.97) --
(394.48, 92.46) --
(394.87, 92.89) --
(395.26, 93.37) --
(395.65, 93.80) --
(396.05, 94.30) --
(396.44, 94.78) --
(396.83, 95.29) --
(397.22, 95.78) --
(397.62, 96.24) --
(398.01, 96.76) --
(398.40, 97.23) --
(398.79, 97.68) --
(399.19, 98.23) --
(399.58, 98.71) --
(399.97, 99.19) --
(400.36, 99.66) --
(400.76,100.20) --
(401.15,100.65) --
(401.54,101.18) --
(401.93,101.66) --
(402.33,102.07) --
(402.72,102.58) --
(403.11,103.00) --
(403.50,103.55) --
(403.90,104.09) --
(404.29,104.53) --
(404.68,105.04) --
(405.07,105.52) --
(405.47,105.97) --
(405.86,106.47) --
(406.25,106.90) --
(406.64,107.35) --
(407.04,107.83) --
(407.43,108.37) --
(407.82,108.88) --
(408.21,109.36) --
(408.61,109.86) --
(409.00,110.38) --
(409.39,110.90) --
(409.78,111.39) --
(410.18,111.94) --
(410.57,112.42) --
(410.96,112.90) --
(411.35,113.47) --
(411.75,113.91) --
(412.14,114.39) --
(412.53,114.90) --
(412.92,115.42) --
(413.32,115.99) --
(413.71,116.47) --
(414.10,116.93) --
(414.49,117.39) --
(414.89,117.93) --
(415.28,118.42) --
(415.67,118.98) --
(416.06,119.57) --
(416.46,120.07) --
(416.85,120.57) --
(417.24,121.11) --
(417.63,121.63) --
(418.03,122.16) --
(418.42,122.69) --
(418.81,123.25) --
(419.20,123.81) --
(419.60,124.36) --
(419.99,124.90) --
(420.38,125.42) --
(420.77,125.97) --
(421.17,126.49) --
(421.56,127.04) --
(421.95,127.63) --
(422.34,128.21) --
(422.74,128.79) --
(423.13,129.29) --
(423.52,129.85) --
(423.91,130.41) --
(424.31,130.99) --
(424.70,131.50) --
(425.09,132.06) --
(425.48,132.63) --
(425.88,133.21) --
(426.27,133.75) --
(426.66,134.33) --
(427.05,134.98) --
(427.45,135.58) --
(427.84,136.13) --
(428.23,136.69) --
(428.62,137.26) --
(429.02,137.80) --
(429.41,138.38) --
(429.80,138.95) --
(430.19,139.57) --
(430.59,140.16) --
(430.98,140.73) --
(431.37,141.28) --
(431.76,141.82) --
(432.16,142.42) --
(432.55,143.03) --
(432.94,143.60) --
(433.33,144.24) --
(433.73,144.82) --
(434.12,145.36) --
(434.51,145.99) --
(434.90,146.63) --
(435.30,147.24) --
(435.69,147.85) --
(436.08,148.49) --
(436.47,149.13) --
(436.87,149.79) --
(437.26,150.42) --
(437.65,151.05) --
(438.04,151.68) --
(438.44,152.33) --
(438.83,153.01) --
(439.22,153.63) --
(439.61,154.30) --
(440.01,154.97) --
(440.40,155.58) --
(440.79,156.21) --
(441.18,156.82) --
(441.58,157.44) --
(441.97,158.12) --
(442.36,158.78) --
(442.75,159.33) --
(443.15,159.96) --
(443.54,160.59) --
(443.93,161.22) --
(444.32,161.95) --
(444.71,162.58) --
(445.11,163.21) --
(445.50,163.76) --
(445.89,164.45) --
(446.28,165.04) --
(446.68,165.63) --
(447.07,166.25) --
(447.46,166.92) --
(447.85,167.50) --
(448.25,168.17) --
(448.64,168.77) --
(449.03,169.43) --
(449.42,170.03) --
(449.82,170.72) --
(450.21,171.42) --
(450.60,172.08) --
(450.99,172.72) --
(451.39,173.33) --
(451.78,173.96) --
(452.17,174.63) --
(452.56,175.35) --
(452.96,176.02) --
(453.35,176.66) --
(453.74,177.28) --
(454.13,178.06) --
(454.53,178.72) --
(454.92,179.41) --
(455.31,180.07) --
(455.70,180.64) --
(456.10,181.36) --
(456.49,182.04) --
(456.88,182.70) --
(457.27,183.34) --
(457.67,184.07) --
(458.06,184.77) --
(458.45,185.40) --
(458.84,186.13) --
(459.24,186.81) --
(459.63,187.55) --
(460.02,188.25) --
(460.41,189.05) --
(460.81,189.72) --
(461.20,190.40) --
(461.59,191.15) --
(461.98,191.88) --
(462.38,192.66) --
(462.53,192.89);
\path[draw=drawColor,line width= 0.4pt,dash pattern=on 1pt off 3pt ,line join=round,line cap=round] (281.05, 35.09) --
(281.44, 35.09) --
(281.83, 35.09) --
(282.23, 35.10) --
(282.62, 35.10) --
(283.01, 35.11) --
(283.40, 35.12) --
(283.80, 35.13) --
(284.19, 35.14) --
(284.58, 35.15) --
(284.97, 35.17) --
(285.37, 35.18) --
(285.76, 35.20) --
(286.15, 35.22) --
(286.54, 35.24) --
(286.94, 35.26) --
(287.33, 35.29) --
(287.72, 35.31) --
(288.11, 35.34) --
(288.51, 35.37) --
(288.90, 35.40) --
(289.29, 35.43) --
(289.68, 35.46) --
(290.08, 35.50) --
(290.47, 35.53) --
(290.86, 35.57) --
(291.25, 35.61) --
(291.65, 35.65) --
(292.04, 35.69) --
(292.43, 35.74) --
(292.82, 35.78) --
(293.22, 35.83) --
(293.61, 35.88) --
(294.00, 35.93) --
(294.39, 35.98) --
(294.79, 36.03) --
(295.18, 36.09) --
(295.57, 36.15) --
(295.96, 36.20) --
(296.36, 36.26) --
(296.75, 36.32) --
(297.14, 36.39) --
(297.53, 36.45) --
(297.93, 36.52) --
(298.32, 36.58) --
(298.71, 36.65) --
(299.10, 36.72) --
(299.50, 36.79) --
(299.89, 36.87) --
(300.28, 36.94) --
(300.67, 37.02) --
(301.07, 37.10) --
(301.46, 37.18) --
(301.85, 37.26) --
(302.24, 37.34) --
(302.64, 37.42) --
(303.03, 37.51) --
(303.42, 37.60) --
(303.81, 37.69) --
(304.21, 37.78) --
(304.60, 37.87) --
(304.99, 37.96) --
(305.38, 38.06) --
(305.78, 38.15) --
(306.17, 38.25) --
(306.56, 38.35) --
(306.95, 38.45) --
(307.35, 38.56) --
(307.74, 38.66) --
(308.13, 38.77) --
(308.52, 38.87) --
(308.92, 38.98) --
(309.31, 39.09) --
(309.70, 39.20) --
(310.09, 39.32) --
(310.49, 39.43) --
(310.88, 39.55) --
(311.27, 39.67) --
(311.66, 39.79) --
(312.06, 39.91) --
(312.45, 40.03) --
(312.84, 40.16) --
(313.23, 40.28) --
(313.63, 40.41) --
(314.02, 40.54) --
(314.41, 40.67) --
(314.80, 40.80) --
(315.20, 40.93) --
(315.59, 41.07) --
(315.98, 41.21) --
(316.37, 41.34) --
(316.77, 41.48) --
(317.16, 41.63) --
(317.55, 41.77) --
(317.94, 41.91) --
(318.34, 42.06) --
(318.73, 42.21) --
(319.12, 42.36) --
(319.51, 42.51) --
(319.91, 42.66) --
(320.30, 42.81) --
(320.69, 42.97) --
(321.08, 43.12) --
(321.48, 43.28) --
(321.87, 43.44) --
(322.26, 43.60) --
(322.65, 43.77) --
(323.05, 43.93) --
(323.44, 44.10) --
(323.83, 44.26) --
(324.22, 44.43) --
(324.62, 44.60) --
(325.01, 44.78) --
(325.40, 44.95) --
(325.79, 45.13) --
(326.19, 45.30) --
(326.58, 45.48) --
(326.97, 45.66) --
(327.36, 45.84) --
(327.76, 46.03) --
(328.15, 46.21) --
(328.54, 46.40) --
(328.93, 46.58) --
(329.33, 46.77) --
(329.72, 46.96) --
(330.11, 47.16) --
(330.50, 47.35) --
(330.90, 47.55) --
(331.29, 47.74) --
(331.68, 47.94) --
(332.07, 48.14) --
(332.47, 48.34) --
(332.86, 48.55) --
(333.25, 48.75) --
(333.64, 48.96) --
(334.04, 49.16) --
(334.43, 49.37) --
(334.82, 49.58) --
(335.21, 49.80) --
(335.60, 50.01) --
(336.00, 50.23) --
(336.39, 50.44) --
(336.78, 50.66) --
(337.17, 50.88) --
(337.57, 51.10) --
(337.96, 51.33) --
(338.35, 51.55) --
(338.74, 51.78) --
(339.14, 52.01) --
(339.53, 52.24) --
(339.92, 52.47) --
(340.31, 52.70) --
(340.71, 52.93) --
(341.10, 53.17) --
(341.49, 53.41) --
(341.88, 53.64) --
(342.28, 53.88) --
(342.67, 54.13) --
(343.06, 54.37) --
(343.45, 54.61) --
(343.85, 54.86) --
(344.24, 55.11) --
(344.63, 55.36) --
(345.02, 55.61) --
(345.42, 55.86) --
(345.81, 56.12) --
(346.20, 56.37) --
(346.59, 56.63) --
(346.99, 56.89) --
(347.38, 57.15) --
(347.77, 57.41) --
(348.16, 57.67) --
(348.56, 57.94) --
(348.95, 58.20) --
(349.34, 58.47) --
(349.73, 58.74) --
(350.13, 59.01) --
(350.52, 59.29) --
(350.91, 59.56) --
(351.30, 59.84) --
(351.70, 60.11) --
(352.09, 60.39) --
(352.48, 60.67) --
(352.87, 60.95) --
(353.27, 61.24) --
(353.66, 61.52) --
(354.05, 61.81) --
(354.44, 62.10) --
(354.84, 62.39) --
(355.23, 62.68) --
(355.62, 62.97) --
(356.01, 63.26) --
(356.41, 63.56) --
(356.80, 63.86) --
(357.19, 64.16) --
(357.58, 64.46) --
(357.98, 64.76) --
(358.37, 65.06) --
(358.76, 65.37) --
(359.15, 65.67) --
(359.55, 65.98) --
(359.94, 66.29) --
(360.33, 66.60) --
(360.72, 66.92) --
(361.12, 67.23) --
(361.51, 67.55) --
(361.90, 67.86) --
(362.29, 68.18) --
(362.69, 68.50) --
(363.08, 68.83) --
(363.47, 69.15) --
(363.86, 69.47) --
(364.26, 69.80) --
(364.65, 70.13) --
(365.04, 70.46) --
(365.43, 70.79) --
(365.83, 71.12) --
(366.22, 71.46) --
(366.61, 71.79) --
(367.00, 72.13) --
(367.40, 72.47) --
(367.79, 72.81) --
(368.18, 73.15) --
(368.57, 73.50) --
(368.97, 73.84) --
(369.36, 74.19) --
(369.75, 74.54) --
(370.14, 74.89) --
(370.54, 75.24) --
(370.93, 75.59) --
(371.32, 75.95) --
(371.71, 76.30) --
(372.11, 76.66) --
(372.50, 77.02) --
(372.89, 77.38) --
(373.28, 77.74) --
(373.68, 78.10) --
(374.07, 78.47) --
(374.46, 78.84) --
(374.85, 79.21) --
(375.25, 79.58) --
(375.64, 79.95) --
(376.03, 80.32) --
(376.42, 80.69) --
(376.82, 81.07) --
(377.21, 81.45) --
(377.60, 81.83) --
(377.99, 82.21) --
(378.39, 82.59) --
(378.78, 82.97) --
(379.17, 83.36) --
(379.56, 83.75) --
(379.96, 84.14) --
(380.35, 84.53) --
(380.74, 84.92) --
(381.13, 85.31) --
(381.53, 85.70) --
(381.92, 86.10) --
(382.31, 86.50) --
(382.70, 86.90) --
(383.10, 87.30) --
(383.49, 87.70) --
(383.88, 88.11) --
(384.27, 88.51) --
(384.67, 88.92) --
(385.06, 89.33) --
(385.45, 89.74) --
(385.84, 90.15) --
(386.24, 90.56) --
(386.63, 90.98) --
(387.02, 91.39) --
(387.41, 91.81) --
(387.81, 92.23) --
(388.20, 92.65) --
(388.59, 93.07) --
(388.98, 93.50) --
(389.38, 93.92) --
(389.77, 94.35) --
(390.16, 94.78) --
(390.55, 95.21) --
(390.94, 95.64) --
(391.34, 96.07) --
(391.73, 96.51) --
(392.12, 96.94) --
(392.51, 97.38) --
(392.91, 97.82) --
(393.30, 98.26) --
(393.69, 98.71) --
(394.08, 99.15) --
(394.48, 99.60) --
(394.87,100.04) --
(395.26,100.49) --
(395.65,100.94) --
(396.05,101.39) --
(396.44,101.85) --
(396.83,102.30) --
(397.22,102.76) --
(397.62,103.22) --
(398.01,103.68) --
(398.40,104.14) --
(398.79,104.60) --
(399.19,105.06) --
(399.58,105.53) --
(399.97,106.00) --
(400.36,106.47) --
(400.76,106.94) --
(401.15,107.41) --
(401.54,107.88) --
(401.93,108.36) --
(402.33,108.83) --
(402.72,109.31) --
(403.11,109.79) --
(403.50,110.27) --
(403.90,110.75) --
(404.29,111.24) --
(404.68,111.72) --
(405.07,112.21) --
(405.47,112.70) --
(405.86,113.19) --
(406.25,113.68) --
(406.64,114.18) --
(407.04,114.67) --
(407.43,115.17) --
(407.82,115.67) --
(408.21,116.17) --
(408.61,116.67) --
(409.00,117.17) --
(409.39,117.67) --
(409.78,118.18) --
(410.18,118.69) --
(410.57,119.20) --
(410.96,119.71) --
(411.35,120.22) --
(411.75,120.73) --
(412.14,121.25) --
(412.53,121.76) --
(412.92,122.28) --
(413.32,122.80) --
(413.71,123.32) --
(414.10,123.85) --
(414.49,124.37) --
(414.89,124.90) --
(415.28,125.42) --
(415.67,125.95) --
(416.06,126.48) --
(416.46,127.02) --
(416.85,127.55) --
(417.24,128.09) --
(417.63,128.62) --
(418.03,129.16) --
(418.42,129.70) --
(418.81,130.24) --
(419.20,130.78) --
(419.60,131.33) --
(419.99,131.88) --
(420.38,132.42) --
(420.77,132.97) --
(421.17,133.52) --
(421.56,134.07) --
(421.95,134.63) --
(422.34,135.18) --
(422.74,135.74) --
(423.13,136.30) --
(423.52,136.86) --
(423.91,137.42) --
(424.31,137.98) --
(424.70,138.55) --
(425.09,139.11) --
(425.48,139.68) --
(425.88,140.25) --
(426.27,140.82) --
(426.66,141.39) --
(427.05,141.97) --
(427.45,142.54) --
(427.84,143.12) --
(428.23,143.70) --
(428.62,144.28) --
(429.02,144.86) --
(429.41,145.44) --
(429.80,146.03) --
(430.19,146.61) --
(430.59,147.20) --
(430.98,147.79) --
(431.37,148.38) --
(431.76,148.97) --
(432.16,149.57) --
(432.55,150.16) --
(432.94,150.76) --
(433.33,151.36) --
(433.73,151.96) --
(434.12,152.56) --
(434.51,153.16) --
(434.90,153.77) --
(435.30,154.38) --
(435.69,154.98) --
(436.08,155.59) --
(436.47,156.20) --
(436.87,156.82) --
(437.26,157.43) --
(437.65,158.05) --
(438.04,158.66) --
(438.44,159.28) --
(438.83,159.90) --
(439.22,160.52) --
(439.61,161.15) --
(440.01,161.77) --
(440.40,162.40) --
(440.79,163.03) --
(441.18,163.66) --
(441.58,164.29) --
(441.97,164.92) --
(442.36,165.55) --
(442.75,166.19) --
(443.15,166.83) --
(443.54,167.46) --
(443.93,168.10) --
(444.32,168.75) --
(444.71,169.39) --
(445.11,170.03) --
(445.50,170.68) --
(445.89,171.33) --
(446.28,171.98) --
(446.68,172.63) --
(447.07,173.28) --
(447.46,173.94) --
(447.85,174.59) --
(448.25,175.25) --
(448.64,175.91) --
(449.03,176.57) --
(449.42,177.23) --
(449.82,177.89) --
(450.21,178.56) --
(450.60,179.23) --
(450.99,179.89) --
(451.39,180.56) --
(451.78,181.23) --
(452.17,181.91) --
(452.56,182.58) --
(452.96,183.26) --
(453.35,183.93) --
(453.74,184.61) --
(454.13,185.29) --
(454.53,185.98) --
(454.92,186.66) --
(455.31,187.34) --
(455.70,188.03) --
(456.10,188.72) --
(456.49,189.41) --
(456.88,190.10) --
(457.27,190.79) --
(457.67,191.49) --
(458.06,192.18) --
(458.45,192.88) --
(458.84,193.58) --
(459.24,194.28) --
(459.63,194.98) --
(460.02,195.69) --
(460.41,196.39) --
(460.81,197.10) --
(461.20,197.81) --
(461.59,198.52) --
(461.98,199.23) --
(462.38,199.94) --
(462.53,200.22);
\end{scope}
\begin{scope}
\path[clip] ( 0.00, 0.00) rectangle (462.53,210.24);
\definecolor[named]{drawColor}{rgb}{0.00,0.00,0.00}
\node[text=drawColor,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (361.35, 2.51) {$t$};
\node[text=drawColor,rotate= 90.00,anchor=base,inner sep=0pt, outer sep=0pt, scale= 0.70] at (248.23,112.35) {$K(t)$};
\path[draw=drawColor,line width= 0.4pt,line join=round,line cap=round] (274.63, 28.91) --
(448.07, 28.91) --
(448.07,195.79) --
(274.63,195.79) --
(274.63, 28.91);
\end{scope}
\end{tikzpicture}
\caption{\label{fig:StraussK} \emph{Left:} $G$ function of the hard annulus process from Example~\ref{ex:ringprozess} with parameters $\beta=70$, $r=0.025$, $R=0.035$. \emph{Right:} $K$ function of a Strauss hard core process with parameters $\beta=40$, $r=0.05$.
In both panels the solid line is an estimate of the true function based on 1,000 simulations, the dashed line is the true function for a Poisson process (in the case of the $G$ function with the same intensity as the one obtained by simulation for the hard annulus process), and the grey area corresponds to the bounds computed in the text.}
\end{center}
\end{figure}
\end{example}
\section{Proof of Theorem~\ref{thm:bounds}}
\label{sec:proofs}
The main strategy of the proof is to replace in \eqref{eq:pgfl} the process $\Xi$ by a Poisson process $\mathrm{H}$, and then use Stein's method to bound the error
\begin{equation}
\label{eq:error}
{\mathbb E}\Big( \prod_{y\in \Xi} g(y) \Big)-{\mathbb E}\Big( \prod_{y\in \mathrm{H}} g(y) \Big).
\end{equation}
In the context of Poisson process approximation, Stein's method for bounding expressions of the form $\abs{{\mathbb E} f(\Xi)-{\mathbb E} f(\mathrm{H})}$ uniformly in $f$ from a class of functions $\mathcal{F}$ was first introduced in \citep{bb92}. It has undergone numerous developments in the subsequent years, including a recent generalisation to Gibbs process approximation \cite{ss12}. In what follows we briefly touch on the key points of Stein's method needed for our proof, referring to the nice expository chapter of \citep{xia05} for details.
Assume that $g \colon \mathbb{R}^d \to [0,1]$ is such that $A = \supp(1-g)$ is compact. Consider the function $f \colon \mathfrak{N} \to [0,1]$,
\begin{equation}
\label{eq:f}
f(\xi)=f(\xi \vert_A)=\prod_{y\in\xi}g(y).
\end{equation}
For any $\xi \in \mathfrak{N} \vert_A$ let $Z_{\xi} = \{Z_\xi(t)\}_{t\ge0}$ be a spatial immigration-death process on $A$ with immigration rate $\nu>0$ and unit per-capita death rate, started in the configuration $\xi$. This is a pure-jump Markov process on $\mathfrak{N} \vert_A$ that holds any state $\eta \in \mathfrak{N} \vert_A$ for an exponentially distributed time with mean $1/(\nu \abs{A} + \eta(A))$, where $\abs{A}$ denotes the Lebesgue measure of $A$; then a uniformly distributed point in $A$ is added with probability $\nu \abs{A}/(\nu \abs{A} + \eta(A))$, or a uniformly distributed point in $\eta$ is deleted with probability $\eta(A)/(\nu \abs{A} + \eta(A))$.
The process $Z_{\xi}$ has the Poisson process distribution on $A$ with intensity $\nu$ as its stationary distribution. If the process is started at the empty configuration $\emptyset$, then $Z_{\emptyset}(t)$ is a Poisson process with intensity $\nu (1-e^{-t})$ for every $t \geq 0$. Let $\{E_x\}_{x\in \xi}$ be i.i.d.\ standard exponentially distributed random variables and introduce the death process $D_\xi(t)=\sum_{x\in\xi}\mathbbm{1}\{E_x>t\}\delta_x$.
Constructing $Z_{\emptyset}$ and $D_{\xi}$ independently on the same probability space, $Z_\xi$ can be represented as $Z_\xi(t) \text{\raisebox{0pt}[0pt][0pt]{${}\stackrel{\mathscr{D}}{=}{}$}} Z_\emptyset(t)+D_\xi(t)$ for every $t \geq 0$; see \citep[Thm.~3.5]{xia05}.
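The representation $Z_\xi(t) \stackrel{\mathscr{D}}{=} Z_\emptyset(t)+D_\xi(t)$ lends itself to direct simulation. The following Python sketch (our own illustration; the function name and the choice $A=[0,1]^d$ are ours, not from any reference) samples the two components independently, using that $Z_\emptyset(t)$ is a Poisson process with intensity $\nu(1-e^{-t})$ and that each point of $\xi$ survives up to time $t$ with probability $e^{-t}$:

```python
import numpy as np

def sample_coupling(xi, nu, t, rng=None):
    """Sample Z_xi(t) on A = [0, 1]^d via Z_xi(t) =_D Z_empty(t) + D_xi(t).

    xi : (n, d) array, the initial configuration.
    Z_empty(t) is a Poisson process with intensity nu * (1 - e^{-t}); each
    point of xi survives independently with probability e^{-t} (unit
    per-capita death rate).
    """
    rng = rng or np.random.default_rng(0)
    d = xi.shape[1]
    # Immigration part: Poisson(nu * (1 - e^{-t}) * |A|) uniform points, |A| = 1.
    n_new = rng.poisson(nu * (1.0 - np.exp(-t)))
    z_empty = rng.uniform(size=(n_new, d))
    # Death part: thin xi, keeping each point with survival probability e^{-t}.
    d_xi = xi[rng.uniform(size=len(xi)) < np.exp(-t)]
    return np.vstack([z_empty, d_xi])

xi0 = np.random.default_rng(1).uniform(size=(5, 1))  # some initial configuration
z = sample_coupling(xi0, nu=40.0, t=2.0)
```

At $t=0$ the output is exactly the initial configuration, and as $t\to\infty$ it approaches a Poisson process with intensity $\nu$ on $A$, matching the stationary distribution described above.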
Let $\mathrm{H}_{\nu}$ be a Poisson process with intensity $\nu$ and define $h_f \colon \mathfrak{N} \to \mathbb{R}$ as
\begin{equation}
\label{eq:stein-sol}
h_f(\xi)=h_f(\xi \vert_A) = -\int_0^\infty \big[{\mathbb E} \bigl(f(Z_{\xi\vert_A}(t))\bigr)-{\mathbb E}(f(\mathrm{H}_\nu))\big]\;dt.
\end{equation}
By \citep[Lem.~5.2]{xia05} $h_f$ is well-defined and satisfies
\begin{align}
{\mathbb E} f(\Xi)-{\mathbb E} f(\mathrm{H}_\nu) &= {\mathbb E} \int_{A} \bigl[ h_f(\Xi+\delta_x)-h_f(\Xi) \bigr]
\hspace*{1.5pt} \nu \; dx+ {\mathbb E} \int_{A} \bigl[ h_f(\Xi-\delta_x)-h_f(\Xi) \bigr] \; \Xi(d x) \nonumber \\
&= {\mathbb E}\int_{A}\big[h_f(\Xi+\delta_x)-h_f(\Xi)\big]\big( \nu-\lambda(x\, \vert \, \Xi)\big)\; dx \nonumber \\
&= {\mathbb E}\int_{\mathbb{R}^d}\big[h_f(\Xi+\delta_x)-h_f(\Xi)\big]\big( \nu-\lambda(x\, \vert \, \Xi)\big)\; dx,
\label{eq:SteinGNZ}
\end{align}
where we applied the Georgii--Nguyen--Zessin equation on $\mathbb{R}^d$ to the function $\bigl[(x,\xi) \mapsto \mathbbm{1}_A(x) \bigl( h_f(\xi)-h_f(\xi+\delta_x) \bigr)\bigr]$ for obtaining the second equality.
Equation~\eqref{eq:SteinGNZ} is our starting point for further considerations.
\begin{proposition}
\label{prop:1}
Let $\Xi$ be a stationary Gibbs process with intensity $\lambda$ and conditional intensity $\lambda(\cdot\, \vert \, \cdot)$. Let $g\colon {\mathbb{R}^d} \to [0,1]$ be a function such that $1-g$ has bounded support. Then for all $\nu>0$
\begin{equation}
\label{eq:prop1}
{\mathbb E}\Big(\prod_{y\in\Xi}g(y)\Big)=1-\frac{\lambda}{\nu} \big(1-e^{-\nu G}\big)+I_\nu(g),
\end{equation}
where
\begin{equation}
\label{eq:I}
I_\nu(g)=e^{-\nu G}\hspace*{1.5pt}{\mathbb E}\biggl( \int_0^1e^{\nu Gs}\Big(1-\prod_{y\in \Xi}\big(1-s(1-g(y))\big)\Big)\;ds \hspace*{0.4em} \cdot \, \int_{\mathbb{R}^d}(1-g(x))(\lambda(x\, \vert \, \Xi)-\nu)\;dx\biggr).
\end{equation}
\end{proposition}
\begin{proof}
It is well known that for the Poisson process $\mathrm{H}_{\nu}$ we have
\begin{equation}
\label{eq:Pois-pgfl}
{\mathbb E} \Big(\prod_{y\in \mathrm{H}_\nu}g(y)\Big)=\exp\bigg(-\nu\int_{\mathbb{R}^d}1-g(x)\; dx\bigg)=e^{-\nu G};
\end{equation}
see e.g.\ \citep[Eq.~9.4.17]{dvj08}.
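As a quick numerical sanity check of \eqref{eq:Pois-pgfl} (our own illustration, not part of the argument): take $d=1$ and $g\equiv c$ on $[0,1]$, $g\equiv 1$ elsewhere, so that $G=\int(1-g)=1-c$ and $\prod_{y\in\mathrm{H}_\nu}g(y)=c^{N}$, where $N\sim\mathrm{Poisson}(\nu)$ is the number of points of $\mathrm{H}_\nu$ in $[0,1]$:

```python
import numpy as np

# Check E prod g(y) = exp(-nu * G) for a Poisson process H_nu, with
# g = c on [0, 1] and g = 1 elsewhere, so G = integral of (1 - g) = 1 - c
# and prod_{y in H_nu} g(y) = c**N with N ~ Poisson(nu).
rng = np.random.default_rng(0)
nu, c = 3.0, 0.6
G = 1.0 - c

n_points = rng.poisson(nu, size=200_000)  # N for each Monte Carlo replication
mc_estimate = np.mean(c ** n_points)
exact = np.exp(-nu * G)
print(mc_estimate, exact)  # the two agree to Monte Carlo accuracy
```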
We then follow the main proof strategy laid out above in order to re-express
\begin{equation*}
{\mathbb E}\Big(\prod_{y\in \Xi}g(y)\Big)-e^{-\nu G}.
\end{equation*}
Starting from Equation~\eqref{eq:SteinGNZ}, we may use the decomposition $Z_{\xi+\delta_x} \text{\raisebox{0pt}[0pt][0pt]{${}\stackrel{\mathscr{D}}{=}{}$}} Z_\xi+D_{\delta_x}$ with independent $Z_\xi$ and $D_{\delta_x}$ to see that for any $\xi \in \mathfrak{N}\vert_A$
\begin{align*}
h_f(\xi+\delta_x)-h_f(\xi)&=\int_0^\infty {\mathbb E} f(Z_\xi(t))-{\mathbb E} f(Z_{\xi+\delta_x}(t))\;dt \\
&=\int_0^\infty {\mathbb E} f(Z_\xi(t))-{\mathbb E} f(Z_{\xi}(t)+D_{\delta_x}(t))\;dt \\
&=(1-g(x))\int_0^\infty {\mathbb E} f(Z_\xi(t)) \hspace*{1.5pt} \P(D_{\delta_x}(t) \neq \emptyset)\;dt \\
&=(1-g(x))\int_0^\infty {\mathbb E} f(Z_\xi(t)) \hspace*{1.5pt} e^{-t}\;dt \\
&=(1-g(x))\int_0^\infty {\mathbb E} \big(f(Z_\emptyset(t))+(f(Z_\xi(t)) - f(Z_\emptyset(t)))\big) \hspace*{1.5pt} e^{-t}\;dt.
\end{align*}
By further decomposing $Z_\xi \text{\raisebox{0pt}[0pt][0pt]{${}\stackrel{\mathscr{D}}{=}{}$}} Z_\emptyset+D_\xi$ with independent $Z_{\emptyset}$ and $D_\xi$, we obtain
\begin{align*}
{\mathbb E} \big(f(Z_\xi(t)) - f(Z_\emptyset(t))\big)&={\mathbb E}\Big(\prod_{y\in Z_\emptyset(t)}g(y)\Big)\hspace*{1.5pt}{\mathbb E}\Big(\prod_{y\in D_\xi(t)}g(y)-1\Big)\\
&=\exp\big(-\nu(1-e^{-t})G\big)\Big(\prod_{y\in \xi}\bigl(1-e^{-t}(1-g(y))\bigr)-1 \Big),
\end{align*}
where for the first expectation we used \eqref{eq:Pois-pgfl} and that $Z_\emptyset(t)$ is a Poisson process with intensity $\nu(1-e^{-t})$; for the second expectation note that each point of $\xi$ survives independently up to time $t$ with probability $e^{-t}$. Thus in total
\begin{equation*}
\begin{split}
h_f(&\xi+\delta_x)-h_f(\xi)\\[1mm]
&=(1-g(x))e^{-\nu G}\int_0^\infty\Bigl[ e^{\nu Ge^{-t}} + e^{\nu Ge^{-t}}\Bigl(\prod_{y\in \xi}\bigl(1-e^{-t}(1-g(y))\bigr) -1\Bigr)\Bigr]e^{-t}\;dt \\
&= (1-g(x))\frac{1-e^{-\nu G}}{\nu G} + (1-g(x)) e^{-\nu G}\int_0^1 e^{\nu Gs}\Bigl(\prod_{y\in \xi}\bigl(1-s(1-g(y))\bigr) -1 \Bigr)\;ds
\end{split}
\end{equation*}
by the substitution $s=e^{-t}$.
Plugging this into Equation~\eqref{eq:SteinGNZ} and using ${\mathbb E} \lambda(x\, \vert \, \Xi)=\lambda$ finally yields
\begin{equation*}
{\mathbb E}\Big(\prod_{y\in \Xi}g(y)\Big)-e^{-\nu G}=\frac{\nu-\lambda}{\nu}\big(1-e^{-\nu G}\big)+I_\nu(g).
\end{equation*}
\end{proof}
\begin{proposition}
\label{prop:bounds-I}
Let $\Xi$ be a stationary locally stable Gibbs process with constant $c^*$. Then for all $0<\nu< c^*$
\begin{equation*}
\underline{I_{\nu}}(g)\le I_\nu(g)\le \overline{I_\nu}(g),
\end{equation*}
where
\begin{align}
\underline{I_{\nu}}(g)&= -\frac{1}{c^*-\nu}\big(c^*(1-e^{-\nu G})- \nu(1-e^{-c^*G})\big) \le 0,\label{eq:boundsI1}\\
\overline{I_\nu}(g)&= \frac{1}{\nu}\big(c^*(1-e^{-\nu G})- \nu(1-e^{-c^*G})\big)\ge 0.
\label{eq:boundsI2}
\end{align}
Furthermore $I_\nu(g) \le 0$ for all $\nu \ge c^*$.
\end{proposition}
\begin{proof}
Since $-\nu \le \lambda(x\, \vert \, \Xi)-\nu\le c^*-\nu$ a.s., we get for $\nu<c^*$ the upper bound
\begin{equation}
\label{eq:proof-b-I}
I_\nu(g)\le (c^*-\nu)e^{-\nu G}G \int_0^1e^{\nu Gs}\Big(1-{\mathbb E}\prod_{y\in \Xi}\big(1-s(1-g(y))\big)\Big)\;ds ,
\end{equation}
and a similar lower bound, where $c^*-\nu$ is replaced by $-\nu$. Because $A=\supp(1-g)$ is bounded, $\Xi$ can be replaced by $\Xi\vert_A$ in \eqref{eq:proof-b-I}. It is a known fact that every locally stable Gibbs process on a bounded domain can be obtained as a dependent random thinning of a Poisson process; see \citep[Remark 3.4]{km00}. In particular, there exists a Poisson process $\mathrm{H}_{c^*}$ such that $\Xi\vert_A\subset \mathrm{H}_{c^*}$ a.s. Since $1-s(1-g(y))\le 1$ for all $s\in [0,1]$ and for all $y\in \mathbb{R}^d$, we obtain
\begin{equation*}
1-{\mathbb E}\prod_{y\in \Xi}\bigl(1-s(1-g(y))\bigr) \le 1-{\mathbb E}\prod_{y\in \mathrm{H}_{c^*}}\bigl(1-s(1-g(y))\bigr) = 1-e^{-sc^*G},
\end{equation*}
where the last equality follows by \eqref{eq:Pois-pgfl}. Integrating and rearranging the terms yields the formulas \eqref{eq:boundsI1} and \eqref{eq:boundsI2}. If $\nu \ge c^*$, $I_\nu(g)$ is obviously non-positive.
\end{proof}
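The signs claimed in \eqref{eq:boundsI1} and \eqref{eq:boundsI2} are easy to confirm numerically; the following sketch (parameter values are arbitrary, chosen by us for illustration) evaluates both bounds for several $0<\nu<c^*$:

```python
import numpy as np

def I_bounds(nu, c_star, G):
    """Bounds on I_nu(g) from the proposition, valid for 0 < nu < c_star."""
    common = c_star * (1 - np.exp(-nu * G)) - nu * (1 - np.exp(-c_star * G))
    return -common / (c_star - nu), common / nu

# The bracket c*(1 - e^{-nu G}) - nu(1 - e^{-c* G}) is nonnegative for
# nu < c* because x -> (1 - e^{-x}) / x is decreasing, so lower <= 0 <= upper.
for nu in [0.5, 1.0, 3.0]:
    lower, upper = I_bounds(nu, c_star=5.0, G=0.2)
    print(nu, lower, upper)
```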
\vspace{4.5mm}
\noindent
\emph{Remainder of the proof of Theorem~\ref{thm:bounds}.}
Propositions~\ref{prop:1} and~\ref{prop:bounds-I} yield the upper bounds
\begin{align*}
{\mathbb E}\Big(\prod_{y\in \Xi}g(y)\Big)&\le 1-\frac{\lambda}{\nu}\big(1-e^{-\nu G}\big)+\frac{c^*}{\nu}\big(1-e^{-\nu G}\big)-\big(1-e^{-c^* G}\big)\\
&=(c^*-\lambda)G\frac{1-e^{-\nu G}}{\nu G}+e^{-c^*G}
\end{align*}
for $0<\nu<c^*$ and
\begin{equation*}
{\mathbb E}\Big(\prod_{y\in \Xi}g(y)\Big)\le 1-\lambda G\frac{1-e^{-\nu G}}{\nu G}
\end{equation*}
for $\nu \ge c^*$. Since the function $[x \mapsto (1-\exp(-x))/x]$
is monotonically decreasing for $x\ge 0$, the upper bound is minimised at $\nu = c^{*}$, which gives
\begin{equation*}
{\mathbb E}\Big(\prod_{y\in \Xi}g(y)\Big)\le 1-\frac{\lambda}{c^*} \big(1-e^{-c^* G}\big).
\end{equation*}
For the lower bound recall the Weierstrass product inequality, which states
\begin{equation*}
\prod_{i=1}^n(1-a_i)\ge 1-\sum_{i=1}^na_i
\end{equation*}
for $0\le a_1,\dots,a_n\le 1$. Then, noting that the products below contain only finitely many factors $\neq 1$ by the boundedness of~$\supp(1-g)$, we have
\begin{align*}
{\mathbb E} \Big(\prod_{y\in \Xi}g(y)\Big)&={\mathbb E}\Big(\prod_{y\in \Xi}\big(1-(1-g(y))\big)\Big)\\
&\ge 1-{\mathbb E}\sum_{y\in \Xi}(1-g(y))\\
&=1-{\mathbb E}\int_{\mathbb{R}^d}1-g(x)\; \Xi(dx)\\
&=1-\lambda\int_{\mathbb{R}^d}1-g(x)\;dx =1-\lambda G
\end{align*}
by Campbell's formula; see \cite[Section~$9.5$]{dvj08}. \hfill \qed
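Both final ingredients can be verified numerically; the snippet below (our own illustration, with arbitrary parameter values) checks the Weierstrass product inequality on random inputs and confirms that the lower bound $1-\lambda G$ never exceeds the upper bound $1-\frac{\lambda}{c^*}\bigl(1-e^{-c^*G}\bigr)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weierstrass product inequality: prod(1 - a_i) >= 1 - sum(a_i) for a_i in [0, 1].
for _ in range(100):
    a = rng.uniform(size=rng.integers(1, 10))
    assert np.prod(1 - a) >= 1 - np.sum(a)

def theorem_bounds(lam, c_star, G):
    """Lower and upper bounds on E prod_{y in Xi} g(y) from the theorem."""
    lower = 1 - lam * G
    upper = 1 - (lam / c_star) * (1 - np.exp(-c_star * G))
    return lower, upper

# Since 1 - e^{-x} <= x, the upper bound always dominates the lower bound.
lo, up = theorem_bounds(lam=40.0, c_star=70.0, G=0.01)
print(lo, up)
```

For small $G$ the two bounds differ only at order $G^2$, so they pinch together as $\supp(1-g)$ shrinks.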
\begin{remark}
An alternative proof for the lower bound can be obtained by using Propositions~\ref{prop:1} and \ref{prop:bounds-I} in the analogous way as for the upper bound.
\end{remark}
\section{Introduction}
Wireless communications has come a long way, from mobile telephony in the 1970s to today's massive, dynamic multimedia information access anywhere, anytime in the era of the Internet of everything (IoE). We have also witnessed a shift in research focus from outdoors to indoors, since most data demand now tends to take place in indoor environments. This has motivated the emerging concept of the smart radio environment, which uses software-defined materials or software-controlled metasurfaces \cite{Akyildiz-18,Marco-19} to engineer a radio environment suitable for wireless communications. Many believe this will play a role in 6G \cite{Tariq-20}.
The use of software-controlled metasurfaces for improving wireless communications is a thriving research area. A majority of recent efforts have been devoted to adopting passive radiating elements on a programmable surface that can apply arbitrary phase shifts to the received radio signals and reflect them constructively as a focussed beam towards the intended receiver. This approach is widely known as reconfigurable intelligent surfaces (RISs) \cite{Ntontin-20}.
RIS is particularly attractive for its low power consumption and hardware cost, owing to the use of relatively cheap radiating elements. It can be interpreted as using large surfaces present in the environment as a large aperture for collecting and transmitting radio signals for improved energy efficiency. Remarkably, it was reported in \cite{Dai-19} that it is possible to achieve a 21.7 dBi antenna gain using an RIS with 256 2-bit elements at 2.3 GHz, and a 19.1 dBi antenna gain at 28.5 GHz. It should be noted that programmable metasurfaces can also be used to directly modulate radio-frequency (RF) carrier signals, without the need for standard components such as mixers, filters, and phase shifters, greatly simplifying the hardware complexity of wireless communication systems \cite{Tang-19}.
The concept of smart radio environment is, however, much more than a low-cost alternative to relaying, beamforming and communication transceivers, and represents a new paradigm of engineering the radio environment through carefully designed, software-controlled metamaterials (or ``meta-atoms'') that can alter their electromagnetic (EM) properties to suit the purpose of various communication applications. Reducing interference, enhancing security, and extending the range of communication are amongst the most obvious applications \cite{Akyildiz-18}. Although the main advantages of metasurfaces are their low hardware cost and power consumption, such as in the case of RIS, utilizing programmable metasurfaces to create a smart radio environment may mean that additional signal processing and network intelligence will add to the cost and power consumption.
This article proposes a new vision of smart radio environment that considers the use of {\em non-radiative, trapped surface wave propagation} \cite{Barlow-1953,Tong-2019}, as opposed to free-space propagation where radio waves are launched from the surface in \cite{Akyildiz-18,Marco-19}.
The surface waves considered in this article are trapped surface waves \cite{Tong-2019}, which glide at the interface of materials with different dielectric constants, so that the radio propagation is confined to the surface. A unique advantage of surface wave communications (SWC) over free-space communications (FSC) is its much more favourable pathloss, which scales as the inverse of the distance, $1/d$, instead of the inverse of the squared distance, $1/d^2$, in the case of FSC. Also, confining the communication to the surface means that interference management becomes a lot easier, since communication can be managed on a particular pathway using software-controlled waveguiding surfaces, a concept that can be enabled by a software-controlled fluidic structure \cite{Fortuny-17}. The outcome resembles a transportation network of SWC superhighways on surfaces of meta-atoms, providing various functionalities of a smart radio environment.
The rest of this article is organized as follows. In Section~\ref{sec_Barlow}, we provide a high-level background of SWC and highlight the unique advantages that make it particularly appealing for the smart radio environment application. Section \ref{sec_vision} presents our vision of SWC superhighways. Then Section \ref{sec_enablers} describes the key enabling technologies for software-controlled SWC while Section \ref{sec_chall} discusses the main challenges of the proposed SWC paradigm. Finally, we conclude this article in Section \ref{sec_con}.
\section{Surface Wave}\label{sec_Barlow}
Surface wave is a non-radiating wave that propagates along the interface between two different media \cite{Barlow-1953}. Surface waves can be formally classified into eleven types according to their physical properties \cite{Schelkunoff-59}. When a radio wave is incident at a boundary from a denser medium and the incident angle is equal to or greater than the critical angle of the media, the radio wave will be `trapped' in the denser medium, with evanescent fields in the rarer medium, and the wave will be confined to the surface. Figure \ref{fig:sw0} illustrates the geometry of the directions of the waves at the interface. In practice, both media have finite losses, and the E-field of the surface wave will attenuate as it propagates along the interface. A classical result is that the power of a trapped surface wave is inversely proportional to the propagation distance $d$ \cite{Barlow-1953}:
\begin{equation}
P_{\sf SWC}(d)\propto\frac{1}{d},
\end{equation}
which is much more desirable than the inverse-square law that normally governs space wave propagation, or FSC.
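To make the comparison concrete, the toy sketch below contrasts the two pathloss laws under the simplifying (and assumed) normalization that both have the same unit proportionality constant; under that normalization, a surface path delivers more power than a direct free-space path whenever its length is less than the square of the direct distance.

```python
# Toy pathloss comparison (assumed equal, unit proportionality constants).
def p_swc(d):
    """Received power along a trapped surface wave path, P ~ 1/d."""
    return 1.0 / d

def p_fsc(d):
    """Received power along a free-space path, P ~ 1/d^2."""
    return 1.0 / d ** 2

direct, detour = 5.0, 20.0            # metres: short direct path vs long surface path
print(p_swc(detour), p_fsc(direct))   # 0.05 vs 0.04: the longer surface path still wins
# Under this normalization, SWC wins whenever detour < direct ** 2.
```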
\begin{figure}[]
\begin{center}
\includegraphics[width=0.95\linewidth]{./figs/fig-SW0-v2.png}
\caption{Illustration of different angles and the corresponding waves.}\label{fig:sw0}
\end{center}
\end{figure}
The surface resistance is associated with energy dissipation and determines the attenuation of the surface wave in the propagation direction, whereas the surface reactance is associated with the energy stored at the interface, which defines the decay of the wave away from the surface, in the direction normal to it. The higher the surface reactance, the more closely the energy is stored to the surface and hence the more tightly bound the wave is to the surface. Rectangular apertures of finite height are usually adopted as transducers to excite surface waves, with minimal space waves and reflected waves.
The most effective surface for SWC is one with a purely reactive surface impedance. Two possible approaches to obtain high-impedance surfaces are corrugated surfaces and dielectric-coated conductors. Corrugated surfaces have a purely reactive impedance that depends upon the dimensions of the grooves and humps of the surface. The main limitation of a corrugated structure is its directional periodic structure, which is difficult to fabricate in millimeter-wave frequency bands. A more viable solution is the use of dielectric-coated conductors, which are much easier to make. The dielectric layer should have a high dielectric constant and low conductivity. Also, the surface impedance can be further adjusted by layering several different dielectric layers on top of each other. In \cite{Tong-2019}, a 52 GHz wideband trapped SWC system was implemented using the dielectric-coated conductor approach. Figure \ref{fig:sw2} shows the E-field distribution along a dielectric-coated conductor with surface impedance $j200\Omega$ at 60 GHz.
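As a rough quantitative illustration, a textbook thin-coating approximation for the TM surface reactance of a grounded dielectric slab, $X_s \approx \eta_0 k_0 t\,(\epsilon_r-1)/\epsilon_r$ (an assumption introduced here, not given in the text, and valid only for coatings much thinner than the wavelength), suggests that a sub-millimetre coating can already reach the $j200\Omega$ regime at 60 GHz:

```python
import math

# Thin grounded-dielectric-slab approximation (assumed, valid for t << wavelength):
# X_s ~ eta0 * k0 * t * (eps_r - 1) / eps_r
eta0 = 376.73                  # free-space wave impedance, ohms
c = 299_792_458.0              # speed of light, m/s
f = 60e9                       # frequency, Hz
k0 = 2 * math.pi * f / c       # free-space wavenumber, rad/m

eps_r = 4.0                    # assumed coating relative permittivity
t = 0.56e-3                    # assumed coating thickness, m
X_s = eta0 * k0 * t * (eps_r - 1) / eps_r
print(round(X_s, 1))           # ~199 ohms, on the order of the j200-ohm surface in the text
```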
\begin{figure}[]
\begin{center}
\includegraphics[width=0.8\linewidth]{./figs/fig-SW2.png}
\caption{Simulation results showing the E-field distribution along the direction of propagation at 60 GHz with the surface impedance of $j200\Omega$.}\label{fig:sw2}
\end{center}
\end{figure}
\section{The Vision: SWC Superhighways}\label{sec_vision}
The lower propagation loss of SWC means that it can be preferable to take a detour and travel a longer distance along walls or surfaces while still receiving higher power than over a direct free-space path; see Figure \ref{fig:pathloss}. SWC is thus highly energy-efficient, and data can travel farther with the same energy consumption. More remarkably, confining the communication signals to the surface helps contain high-speed data streams and simplifies interference management.
\begin{figure}[]
\includegraphics[width=1\linewidth]{./figs/fig-0.png}
\caption{SWC versus FSC: Longer distance beats shorter distance.}\label{fig:pathloss}
\end{figure}
Our vision of the future smart radio environment therefore resembles a transportation network of communications superhighways with exceptionally localized data access, as depicted in Figure \ref{fig:vision}. In this vision, dedicated pathways are created on programmable metasurfaces to carry superfast data that travel along the surfaces to reach the user equipment (UE) or arrive at a position near the UE. In the latter case, FSC will provide the last hop from the metasurface to the UE over a very short distance. The pathways are software-controlled to adapt to the radio environment and always provide the best routes requiring the least power consumption to the UEs free of interference, thanks to the extraordinary spatial selectivity of SWC.
\begin{figure*}[]
\begin{center}
\includegraphics[width=1\linewidth]{./figs/fig-2-v2.png}
\end{center}
\caption{The vision of SWC superhighways in a smart radio environment with exceptionally localized wireless coverage (contactless or not).}\label{fig:vision}
\end{figure*}
In this vision, one reality is that radio signals only appear where they should, and wireless coverage, contactless or not, is exceptionally localized. By contrast, communications relying on FSC tend to have radio waves unintentionally occupying the entire space unless multiple antennas are used to beamform the radio signals into a confined region, which requires advanced signal processing and resource management. This is the inherent problem in wireless communications. As a matter of fact, much of the processing and intelligence in wireless networks for 5G and beyond goes to the management and control of radio signals for massive connectivity, allowing signals to coexist without causing harmful interference to each other. This is a very challenging task because radio waves naturally propagate in all directions, and when radio waves hit objects in the environment, reflection and diffraction further complicate the interference pattern. This task will, however, be greatly simplified in SWC, since the radio waves will be kept on the surface and their presence is absolutely predictable. As envisaged in Figure \ref{fig:int}, in addition to walls, surfaces such as office desks can also be equipped with meta-atoms to provide zonal, targeted data access, making possible the ultimate interference-free communications in indoor environments.
\begin{figure}[]
\includegraphics[width=1\linewidth]{./figs/fig-1.png}
\caption{SWC inherently confines communication signals to the surfaces and causes no interference to coexisting UEs without the need of sophisticated signal processing and interference management.}\label{fig:int}
\end{figure}
A natural result of this vision is that interference management becomes traffic control, and it is highly location-driven. In other words, the key will be to localize all the UEs relative to the infrastructure of metasurfaces in the environment and determine the best possible routes to serve the UEs. Interestingly, localization in the SWC paradigm can be less complex. The reason is that there will be a large number of meta-atoms with known locations available in the environment as anchors that can take part in the localization task for UEs. In addition, the UEs are likely to be in close proximity to the anchors and have direct line-of-sight (LoS) paths for ranging and localization. The assurance of a sufficiently large number of direct LoS paths within a short distance from the UEs makes low-complexity, high-resolution localization realizable. It is also worth pointing out that simultaneous SWC in the same pathway is possible, as the metasurface can operate over a wide frequency band.
Another feature in the SWC-based smart radio environment paradigm is the massive deployment of antennas on surfaces. Apart from the cases where communication takes place via direct contact with the UEs (e.g., laptops on a desk), as in the conventional RIS concept, the surfaces will be equipped with antennas as widely as possible so that the UEs can be reached using short-range FSC anywhere in the environment. The key difference is that the meta-atoms on the surfaces now need to switch between FSC and SWC, by acting as radiating elements and as the propagation medium, respectively, depending on the situation, and such switching is done seamlessly.
\section{Enabling Technologies}\label{sec_enablers}
The vision of the SWC paradigm is not a dream but a realistic candidate for the revolution being sought in 6G. In this section, we offer some ideas on the potential enabling technologies.
\subsection{Dual-Functionality Meta-Atoms}
The SWC paradigm suggests a hybrid communication network taking full advantage of SWC and FSC. To realize this, each meta-atom may serve as a tuneable propagation medium or radiating element at any given time and needs to be able to switch between the two functionalities. SWC propagation over a metasurface as the denser medium is a natural phenomenon, and the tricky part is how the signal comes off the surface to reach the intended UE. In general, two `exits' are possible. In the first type of `exit' where the UE is in direct contact with the metasurface (for example, a laptop on a ``smart'' desk), it is rather straightforward to have a transducer integrated on the UE to easily capture the signal and receive the data.
By contrast, in the case where the UE is not in contact with the surface, it requires that the metasurface have the capability to transform SWC into space wave to propagate over air to leap to the UE. Also, it is expected that the metasurface when acting as radiating elements, has the signal processing capability to form focused signal beams towards the intended UE and avoid interference, in the same way as in the RIS applications.
In the leaky-wave and holographic antenna literature, periodic metallic geometries have been well studied to radiate an RF signal from a surface \cite{Antarr-08}. In particular, a trapped surface wave can be diffracted by a periodic metallic geometry that will let part of the surface wave scatter into the free space as space wave. It has also proven possible to accurately control the direction of the radiation off the metasurface \cite{Johnston-10}. Several approaches have been proposed to reconfigure the amount of radiation and the angle of departure of the space wave from a diffracted surface, and they include using active semiconductor devices and metamaterials \cite{He-19}. Furthermore, recent advances in transparent conductive sheet using graphene, carbon nanotube or metallic compounds will help the realization of meta-atoms that can be lightweight and invisible on the surfaces.
\subsection{Software-Controlled Fluidic Waveguiding Structures}\label{sec_enablers_B}
The SWC vision is only realizable if we have a mechanism to create dynamic pathways on surfaces for SWC on demand. This is important for interference management and pathloss reduction of the data streams on the metasurface. Normally, an RF signal that leaves the surface wave transducer will travel along the surface wave plane, following a radial pattern with an angular coverage determined by the electrical size of the aperture width of the transducer \cite{Tong-2019}. Research on the dynamic creation of pathways on a surface is very limited, although high surface impedance will surely enhance surface wave propagation. A great deal of research effort in the SWC area has been devoted to realizing reconfigurable surface impedances on a surface.
Flexibly and dynamically creating pathways on a surface is a much more challenging task, but practically achievable. One possible approach is to leverage a microfluidic system where a large number of micro-tubes are pre-fabricated in the surface substrate of a few millimeters thickness. The micro-tubes are connected to an array of software-controlled pumps which can inject conductive fluid into the tubes when required. If some of the pumps are activated, a selected group of the micro-tubes will be filled with conductive fluid, which then creates an integrated waveguiding structure forming a tunnel-like pathway. The pattern formed by the fluid-filled micro-tubes determines the pathway for SWC. In Figure \ref{fig:fluid}, some preliminary results are provided to illustrate the feasibility of this concept with a surface impedance of $j96.5\Omega$ at the operating frequency of $28$ GHz. In this example, Galinstan, a liquid metal alloy with high conductivity, is chosen as the conductive fluid.
It is worth noting that software-controlled fluidic structures have recently been investigated for antenna design \cite{Fortuny-17}. The knowhow in that application is anticipated to be useful in the engineering and signal processing of the microfluidic system for programmable metasurfaces. This architecture makes possible joint optimization of the signal and data streams, the resource allocation and management of the communication, and the propagation media that accommodate the communications. Intelligence in such holistic approach will be essential.
\begin{figure*}%
\centering
\subfigure[Tubes on surface, with those in blue filled with conductive fluid.]{%
\label{fig:6a}%
\includegraphics[height=1.8in]{./figs/fluidSWC-a.png}}%
\qquad
\subfigure[Guided surface wave propagation over a preset pathway.]{%
\label{fig:6b}%
\includegraphics[height=2in]{./figs/fluidSWC-b.png}}%
\caption{Illustration of the software-controlled waveguiding structures for dynamic pathway creation with surface impedance $j96.5\Omega$ at 28 GHz.}\label{fig:fluid}
\end{figure*}
\subsection{Artificial Intelligence (AI) Empowered SWC}
Recent advances have seen AI, especially deep learning, given a major role in 5G-and-beyond communications \cite{Tariq-20}. There have been numerous successful examples of employing AI for wireless communications, from physical-layer designs such as channel estimation and precoding, to network resource allocation such as traffic control and user pairing, to security and authentication, topology formation and management, fault prediction and detection, and so on. A parallel can be drawn in the SWC paradigm, where AI will serve as the brain to empower the superhighway network on the metasurface.
There are several technical directions in which AI is anticipated to be key to the realization of the concept. One obvious avenue is the localization of UEs in the smart radio environment where connections need to be established seamlessly. In the SWC paradigm, the UEs' locations are most important as most communication remains on the surface but only leaps to reach the UEs in the last hop if required. The availability of the locations of the UEs allows the data traffic to be managed on the surface with carefully designed pathways such that the power consumption (and hence the pathloss) is minimized and the interference over the surface is also eliminated.
Localization of the UEs is expected to be achievable from simple energy sensing, as the last hop from the surface to the UEs is likely to be short-ranged with a direct LoS. Also, the large number of meta-atoms ensures that more than sufficient LoS paths exist to localize the UEs in a way similar to the traditional multilateration approach. The difference is that in this case, most of the paths come from the same direction with different energy levels, instead of from a variety of directions. Apart from this, metasurfaces are preinstalled to form a smart radio environment, meaning that a fingerprinting approach can be effectively utilized to localize the UEs. This can be addressed using deep learning by training an artificial neural network (ANN) on simulated energy-level data given the floor plan of the environment. Real-time localization of the UEs can then be achieved by a simple forward pass through the ANN after taking energy measurements from the meta-atoms.
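As a toy illustration of the fingerprinting idea (the geometry, the $1/d$ energy model, and the measurement grid are all assumptions, and a simple nearest-neighbour lookup stands in for the ANN described above), a new energy measurement can be matched against a precomputed database of location fingerprints:

```python
import numpy as np

# Minimal fingerprint-localization sketch (toy model, not the article's system):
# each candidate location is tagged with the energy levels seen by a few
# meta-atom sensors, and a new measurement is matched to the nearest fingerprint.
anchors = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 4.0], [4.0, 4.0]])  # sensor positions

def fingerprint(pos):
    d = np.linalg.norm(anchors - pos, axis=1)
    return 1.0 / np.maximum(d, 0.1)          # idealized energy ~ 1/d (SWC-like decay)

grid = np.array([[x, y] for x in range(5) for y in range(5)], dtype=float)
database = np.array([fingerprint(p) for p in grid])       # offline "training" phase

def localize(measurement):
    idx = np.argmin(np.linalg.norm(database - measurement, axis=1))
    return grid[idx]

true_pos = np.array([3.0, 1.0])
est = localize(fingerprint(true_pos))
print(est)   # recovers the grid point closest to the true position
```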
AI can also be useful to find the best pathways for the data to travel from the base station (which is now equipped with a surface wave launcher connecting to a metasurface network) to the data-receiving UEs. Such SWC map can be derived from simulated data using deep learning. The key would be to link the UEs' locations and data traffic demand with the physical structures and resources of the metasurfaces making up the radio environment. The optimal SWC superhighway network as a function of the UEs' parameters will be learned.
\section{Opportunities and Challenges}\label{sec_chall}
\subsection{Fully Reconfigurable Meta-Atoms}
Metamaterials have been researched for nearly two decades, having generated many mind-blowing results, including cloaking that can make objects invisible to sensors by controlling their EM radiation. For mobile communications, metamaterial-based antennas also hold the promise of making small-sized, highly efficient wideband antennas possible. The notion of metasurface-empowered smart radio environments is expected to make a huge impact in 6G and has been led by the mainstream efforts on RIS, where passive meta-atoms are considered. Despite the promising results of RIS, the fact that the meta-atom is based on current microstrip patch antenna technology means that the bandwidth unfortunately tends to be narrow, which hinders its development for high frequency bands. Exploring the use of double-negative (DNG) metamaterials would be key to unlocking the bandwidth limitation and miniaturizing the meta-atoms to increase the aperture for performance improvement.
In addition, the SWC paradigm requires that the meta-atoms not only act as antennas for radiation but also operate as a medium for surface wave propagation. For the latter mission, research is required to devise a technology to adaptively control the characteristic impedance of the meta-atoms, so that the surface can be optimized for binding the radio waves to it for superior propagation loss and ease of interference management. A microelectronics approach that can achieve fine control and resolution of the impedance of a meta-atom via a digital signal should be studied. Note that current state-of-the-art meta-atom technology nevertheless comes with the limitation that the amplitude and phase of the radiation off the meta-atoms are strongly coupled. This will need to be tackled if metasurfaces are to be fully intelligent for advanced coding and signal processing. Even then, deciding on the appropriate surface impedance is far from trivial, as it couples with the creation of SWC pathways, which in turn affects the characteristic impedance of the medium above the surface.
Depending upon the communication networking and whereabouts of the UEs, it may also be necessary for a meta-atom to act as a radiating element on one frequency but a propagation medium for SWC on another frequency at the same time. Such dual functionality will certainly need a more advanced meta-atom technology that can achieve independent control of the radiation characteristics over multiple frequency bands. All in all, programmable, reconfigurable and multi-functional meta-atom technologies will need to be sought.
\subsection{Pathway Creation and Control}
Creating dynamic pathways on surfaces is the central idea of the SWC paradigm. In Section \ref{sec_enablers_B}, an enabling technology which utilizes a microfluidic system of liquid metal to provide on-demand pathways is discussed. However, many obstacles are yet to be overcome. One apparent issue is the choice of the conductive fluid used for creating the pathways. Liquid metals such as mercury are toxic and should not be used. There is also a nontrivial trade-off between the response time, which is related to the fluid's density, and its conductivity. More will need to be investigated to fully understand the practicality and feasibility of such an approach in a living environment.
Another pressing challenge is the fabrication of tube spaces at millimetre scale for creating an adaptive metasurface. It becomes much more difficult considering that micro-tubes have finite wall thickness and will distort propagation along the fluid-made pathways. A thorough analysis and proper design of the architecture that integrates with the microfluidic system, to make possible rapid distribution of conductive fluid with great precision and control, will be indispensable.
Moreover, the size of the micro-tubes and their spacing are frequency-dependent. How to have an implementable structure that permits flexible size and spacing to accommodate different frequency bands is extremely challenging, not to mention the difficulty of realizing different pathways in the same space at the same time for different frequencies. Although this issue may be mitigated by careful frequency planning and pathway optimization, much more research will need to be conducted to obtain a feasible architecture for dynamic waveguide technologies such as the fluid-based approach discussed above.
\subsection{Model-Driven AI Signal Processing and Networking}
Signal processing and communication networking in such a smart radio environment depend greatly upon the floor plan of the indoor environment because metasurfaces on walls dictate the paths and positions in which data can be delivered to the UEs. One would expect that it is possible to use the floor plan as a model to develop and train a deep ANN that takes the UE's locations and service demands as inputs and produces as output the signal processing and networking solution. If such AI solution becomes available, then real-time optimization of the signal processing for SWC will be straightforward. Doing this, however, requires several obstacles to be tackled.
The first hurdle is to come up with a general representation of the floor plan that contains the essential information in a concise and manageable fashion. This task will impact the training efficiency of the ANN that extracts the logical features of the environment and translates them into parameters ready for the optimization. Secondly, the inputs to the ANN need to be properly defined, which may be the locations of the UEs, their service demands, or even their energy signatures. It is in fact more challenging to define the outputs of the ANN, as the variable space can be gigantic: it could range from the coding rate of transmission and the transmit power to the pathway, the frequency allocation, the beamforming of the meta-atoms, and even security-related processing and remote charging. Besides, the physical constraints of the meta-atoms, and of the metasurface as a whole, will need to be incorporated in the design.
Even after the inputs and outputs of the ANN are defined, it is unclear how the ANN can be trained properly. Supervised learning requires the availability of labelled training examples, whereas unsupervised learning looks for undetected patterns in datasets with no pre-existing labels. There is a long way to go in figuring out how reliable datasets can be obtained to train such an ANN. Ad hoc heuristics using a combination of traditional techniques should provide useful examples to start building a good dataset. Transfer learning is also expected to be useful to ensure generalizability across a range of situations.
\subsection{User-Centric Context-Aware Service-Based Coverage}
The concept of smart radio environment strongly suggests an integration between UEs and the environment. The large area of metasurfaces makes available multi-dimensional time series of the energy spectrum of the UEs that capture very rich behavioural data of the users and contextual information. The enriched understanding of the UEs and their needs means that context-aware services and applications can be provided to a level that has not been achievable before. Note that the large number of meta-atoms also brings the service of a massive number of sensors, enabling numerous applications such as remote patient monitoring. Opportunities abound in terms of applications, but it would be tricky to interpret the signals from sensors with highly uneven distributions.
\subsection{Security and Anti-Wiretapping}
With increasing reliance on mobile applications on our daily life, security has become a major concern. In 5G and beyond systems, physical-layer security schemes appear to provide an additional layer of defence to complement traditional cryptographic approaches by exploiting the
randomness of wireless channels and their unreplicability. The SWC network, however, makes the communication data more exposed, since the signals glide on surfaces and their paths are highly predictable; ironically, this same predictability is what allows simple interference management in SWC. Wiretapping on metasurfaces is a real threat that needs to be examined carefully and addressed.
Anti-wiretapping techniques will be key to ensure security on metasurfaces. Apart from operating as radiating and propagation elements, meta-atoms should act as sensors to probe, identify and localize any suspected wiretappers or adversaries so that SWC can be rerouted. Together with AI, metasurfaces should possess the brain power to predict suspicious activities of malicious intrusion, and optimize SWC accordingly.
\section{Conclusion}\label{sec_con}
Recent research has seen the notion of smart radio environment emerge as a mainstream effort to shape the environment to support communication needs using software-controlled metasurfaces. Contrary to conventional studies, this article has advocated the vision of utilizing metasurfaces not only as radiating platforms but also as propagation media, taking advantage of SWC for much lower pathloss and simple interference control. The SWC paradigm greatly reduces unnecessary radiation off the surfaces and only beams to the UEs in the last leg if needed. This novel SWC concept is made possible by several enabling technologies, including one that can dynamically adapt communication pathways on a metasurface by digitally controlling a microfluidic system of liquid metal, all of which have been discussed. This article has also touched upon the opportunities and challenges that come with the vision. It is hoped that this article will serve as a catalyst to trigger further research to make the SWC vision practically feasible.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
In online music streaming services such as {\it Spotify}\footnote{https://www.spotify.com/}, a huge number of active users interact with a library of over 40 million audio tracks. Here, an important challenge is to recommend the right music item to each user. To this end, there has been a large body of related work on music recommender systems. A standard approach was to construct a global model based on users' play counts\cite{celma2010music, van2013deep} and acoustic features\cite{van2013deep}. However, a significant aspect missing in these works is how a particular user sequentially interacts with the streamed contents. This can be thought of as a problem of personalization\cite{cho2002personalized} with few shots, or meta-learning\cite{snail} with external memory\cite{santoro2016meta}. The WSDM Cup 2019 tackles this issue by defining a new task with a real dataset\cite{brost2019music}. We can summarize the task as follows:
\begin{itemize}
\item The length $L^i$ of the $i$-th listening session for a particular (anonymized) user varies in the range from 10 to 20. We omit the index $i$ hereafter for readability.
\item We denote the input sequence (Figure 1) from the first half (={\it support}) and the second half (={\it query}) of each session $i$ as $X^i_s$ and $X^i_q$, respectively.
\item $X^i_s$ contains complete information including session logs and acoustic features.
\item $X^i_q$ contains only acoustic features.
\item $Y^i_s$ denotes the labels representing whether the supports were skipped ($=1$) or not ($=0$).
\item Given a set of inputs $\{X^i_s, Y^i_s, X^i_q\}$, our task is to predict $Y^i_q$ (Figure 2).
\end{itemize}
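To make the shapes concrete, the toy sketch below mocks up one session's tensors; the feature dimensions and random values are assumptions for illustration, not taken from the dataset.

```python
import numpy as np

# Toy illustration of the task's input/output shapes (values are made up):
# a session of length L is split in half; supports carry logs + acoustic
# features with skip labels, while queries carry acoustic features only.
L = 14                      # session length, between 10 and 20
n_acoustic, n_log = 29, 18  # assumed feature dimensions, not from the paper

X_s = np.random.rand(L // 2, n_acoustic + n_log)  # first half: full information
Y_s = np.random.randint(0, 2, size=L // 2)        # skip labels for supports
X_q = np.random.rand(L - L // 2, n_acoustic)      # second half: acoustic only

# the model must predict Y_q with shape (L - L // 2,)
print(X_s.shape, Y_s.shape, X_q.shape)  # (7, 47) (7,) (7, 29)
```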
One constraint of our research was that we did not make use of any external dataset or any model pre-trained on one. The code\footnote{https://github.com/mimbres/SeqSkip} and evaluation results\footnote{https://www.crowdai.org/challenges/spotify-sequential-skip-prediction-challenge} are available online.
\begin{figure*}
\includegraphics[width=6.1in]{input.png}\Description{Diagram of the input structure, spanning two columns.}
\caption{Input structure; The blue and yellow blocks represent the inputs of supports and queries for prediction, respectively.}
\end{figure*}
\section{Model Architectures}
\begin{figure}
\includegraphics[width=3.1in]{output.png}\Description{Diagram of the output structure.}
\caption{Output structure; The red block represents the skip-labels to be predicted for the $i$-th session.}
\end{figure}
In this section, we explain two different branches of algorithms based on 1) metric learning, and 2) sequence learning.
In the metric learning-based approach, one key feature is that we do not assume the presence of order in a sequence. This allows us to formulate the skip prediction problem similarly to previous work\cite{sung2018learning} on few-shot learning that {\it learns to compare}.
In the sequence learning-based approach, we employ temporal convolution layers that can learn or memorize information by assuming the presence of order in a sequence. In this fashion, we formulate the skip prediction problem as meta-learning\cite{snail} that learns to {\it refer to past experience}.
\subsection{Metric Learning}
\begin{figure}
\includegraphics[width=3.2in]{rnbc2_ue.png}
\caption{``rnb1'' is a relation network-based few-shot metric learner. It predicts pair-wise similarity (green arrows) through a learnt latent metric space: it constructs all possible relation pairs from the few-shot features and labels. ``rnbc2-UE'' (pink) shares the structure of ``rnb1'', and can be trained as a few-shot classifier that directly predicts skip-labels.}
\label{rnbc2ue}
\end{figure}
This model aims to learn how to compare a pair of input acoustic features, through a latent metric space, within the context given by the supports. Previously, Sung et al.\cite{sung2018learning} proposed metric learning for few-shot classification. The relation score $r_{m,n}$ for a pair of support and query inputs $\{x_{s(m)}, x_{q(n)}\}$ and the label $y_{s(m)}$ is defined by:
\begin{equation}
r_{m,n} = \text{RN}(\,C(\,f_\theta(x_{s(m)}),\, f_\theta(x_{q(n)}),\, y_{s(m)}\,)\,),
\end{equation}
where RN$(.)$ denotes the relation networks\cite{santoro2017simple}, $f_\theta$ is an MLP embedding network, and $C(.)$ is a concatenation operator. In the original model\cite{sung2018learning}, denoted by \textbf{rnb1}, the sum of the relation scores is trained to match a binary target similarity. The target similarity can be computed with an {\it XNOR} operation for each relation pair: for example, a pair of items with the same labels has a target similarity of $1$; otherwise $0$. The final model is denoted as \textbf{rnbc2-UE} (Figure \ref{rnbc2ue}) with:
\begin{enumerate}
\item training the classifier to predict the skip-labels directly, instead of the similarity,
\item trainable parameters to calculate a weighted sum of the relation scores $r$,
\item additional embedding layers (the red arrows in Figure 3) to capture user preference-like information.
\end{enumerate}
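As a minimal sketch, the all-pairs construction that feeds RN$(.)$ can be written as follows (pure Python with list-based embeddings; the embedding network $f_\theta$ and the relation network itself are omitted):

```python
import itertools

def relation_pairs(support_embs, support_labels, query_embs):
    """Enumerate every (support m, query n) relation input as in Eq. (1):
    the concatenation C(.) of f(x_s), f(x_q), and the support skip label."""
    pairs = []
    supports = list(enumerate(zip(support_embs, support_labels)))
    queries = list(enumerate(query_embs))
    for (m, (e_s, y_s)), (n, e_q) in itertools.product(supports, queries):
        pairs.append(((m, n), e_s + e_q + [float(y_s)]))
    return pairs
```

Each pair's concatenated vector would then be scored by RN$(.)$ and aggregated per query.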
\begin{figure}
\includegraphics[width=2in]{seq1HL.png}
\caption{``seq1HL'' has a 2-stack of causal encoders. A red right triangle represents a causal encoder, which is not allowed to observe future inputs.}
\label{seq1hl}
\end{figure}
\subsection{Sequence Learning}
\begin{figure}
\includegraphics[width=2in]{att.png}
\caption{``att(seq1eH(S), seq1eH(Q))'' has a non-causal encoder for the supports. This allows the model to observe future inputs, as represented by a red isosceles triangle.}
\label{att}
\end{figure}
In Figure \ref{seq1hl}, this model consists of dilated convolution layers followed by highway\cite{srivastava2015highway} activations or GLUs (gated linear units\cite{dauphin2016language}). A similar architecture can be found in the text encoder of a recent TTS (text-to-speech) system\cite{dctts}. In practice, we found that non-autoregressive (non-AR) models performed consistently better than AR models. This is explainable: the noisy outputs of previous steps cumulatively degraded the outputs of subsequent steps. The final model, \textbf{seq1HL}, has the following features:
\begin{enumerate}
\item a non-AR model,
\item highway activations with instance normalization\cite{vedaldi2016instance}, instead of GLUs,
\item $1$-$d$ causal convolution layers with a set of dilation parameters $d = \{1,2,4,8,16\}$ and kernel size $k=2$,
\item during training, parameters are updated using only the loss on $Y_q$, instead of the loss on the entire sequence $\{Y_s, Y_q\}$.
\end{enumerate}
We have two variants of the sequence learning model with attention modules. The model in Figure \ref{att} has separate encoders for supports and queries. The support encoder is a $1$-stack of non-causal convolutions with dilation parameters $d = \{1,3,9\}$ and kernel size $k=3$. The query encoder is a $1$-stack of causal convolutions with dilation parameters $d = \{1,2,4\}$ and kernel sizes $k=\{2,2,3\}$. These encoders are followed by a dot-product attention operation\cite{vaswani2017attention}.
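As a sanity check on these hyperparameters, the receptive field of stacked $1$-$d$ dilated causal convolutions grows by $(k-1)d$ per layer. The sketch below (assuming plainly stacked layers and ignoring the attention variants) shows that the 2-stack seq1HL configuration covers far more than the maximum session length of 20:

```python
def receptive_field(stacks, kernel_sizes, dilations):
    """Receptive field of stacked 1-d dilated causal convolution layers:
    each layer with kernel size k and dilation d adds (k - 1) * d steps."""
    rf = 1
    for _ in range(stacks):
        for k, d in zip(kernel_sizes, dilations):
            rf += (k - 1) * d
    return rf

# seq1HL: 2 stacks, k = 2, d = {1, 2, 4, 8, 16}
print(receptive_field(2, [2] * 5, [1, 2, 4, 8, 16]))  # -> 63
```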
\begin{figure}
\includegraphics[width=2in]{snail.png}
\caption{SNAIL\cite{snail}-like model. We removed the first embedding layer, and trained it as a non-AR model.}
\label{snailf}
\end{figure}
In contrast with the models mentioned above, SNAIL\cite{snail} (Figure \ref{snailf}) has its attention module at the bottom, followed by the causal convolution layers. For the multi-head attention, we set the number of heads to 8.
\section{Experiments}
\subsection{Pre-processing}
From the {\it Spotify} dataset\cite{brost2019music}, we decoded the categorical text labels in the session logs into one-hot vectors. Other integer values from the logs, such as ``number of times the user did a seek forward within track,'' were min-max normalized after taking the logarithm. We did not use the date fields. The acoustic features were standardized to mean $0$ and standard deviation $1$.
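As a hedged illustration, the log-then-min-max normalization of an integer feature column might look like the following (we use $\log(1+x)$ here to handle zero counts; this is one reasonable choice, not the only one):

```python
import math

def log_minmax(values):
    """Log-transform non-negative integer counts (log(1 + x) to handle
    zeros -- an assumption on our part), then min-max normalize to [0, 1]."""
    logged = [math.log1p(v) for v in values]
    lo, hi = min(logged), max(logged)
    if hi == lo:  # constant column: map everything to 0
        return [0.0 for _ in logged]
    return [(v - lo) / (hi - lo) for v in logged]
```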
\subsection{Evaluation Metric}
The primary metric for the challenge was {\it Mean Average Accuracy} (MAA), with the average accuracy defined by
\begin{math}
AA = \frac{1}{T} \sum_{i=1}^{T} A(i)\,L(i),
\end{math}
where $T$ is the number of tracks to be predicted for the given session, $A(i)$ is the accuracy at position $i$ of the sequence, and $L(i)$ is the boolean indicator for if the $i$-th prediction was correct.
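The metric can be computed per session as a direct transcription of the definition above:

```python
def average_accuracy(preds, labels):
    """Average Accuracy for one session: AA = (1/T) * sum_i A(i) * L(i),
    where A(i) is the accuracy over the first i positions and L(i)
    indicates whether the i-th prediction itself is correct."""
    T = len(labels)
    correct = 0
    aa = 0.0
    for i, (p, y) in enumerate(zip(preds, labels), start=1):
        L_i = int(p == y)
        correct += L_i
        aa += (correct / i) * L_i
    return aa / T

def mean_average_accuracy(sessions):
    """MAA over an iterable of (predictions, labels) pairs."""
    sessions = list(sessions)
    return sum(average_accuracy(p, y) for p, y in sessions) / len(sessions)
```

Note that early mistakes are penalized more heavily, since they lower $A(i)$ for all subsequent correct positions.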
\subsection{Training}
In all experiments displayed in Table 1, we trained the models using $80$\% of the train set; the rest was used for validation. \textbf{rnb1} and \textbf{rnb2-UE} were trained with an MSE loss; all other models were trained with a binary cross-entropy loss. We used the Adam\cite{kingma2014adam} optimizer with learning rate $10^{-3}$, annealed by 30\% after every 99,965,071 sessions (= 1 epoch). Every training run was stopped within 10 epochs, and training time varied from 20 to 48 hours. We used a batch size of 2,048 throughout. For baseline algorithms that were not submitted, we display the validation MAA instead. The total number of trainable parameters varies across models. For a fair comparison of model architectures, we kept the in-/output dimensions of every repeated linear unit in metric learning at 256; in sequence learning, we kept the in-/output channel sizes of every encoder unit at 256.
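The learning-rate schedule can be sketched as follows (interpreting ``annealed by 30\%'' as multiplying the rate by 0.7 after each epoch's worth of sessions; the step-wise form is an assumption):

```python
def learning_rate(seen_sessions, base_lr=1e-3, sessions_per_epoch=99_965_071):
    """Step-wise annealing: multiply the learning rate by 0.7 (i.e., reduce
    it by 30%) after each full epoch's worth of training sessions."""
    completed_epochs = seen_sessions // sessions_per_epoch
    return base_lr * (0.7 ** completed_epochs)
```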
\begin{table}
\small
\begin{threeparttable}
\caption{Main Results}
\label{mainresult}
\begin{tabular}{lccl}
\toprule
Model & Category & MAA(ofc) & MAA(val)\\
\midrule
rnb1 & M & - & 0.540\\
rnb2-UE & M & - & 0.564\\
rnbc2-UE & M &0.574 & 0.574\\
\midrule
seq1eH (1-stack) & S &0.633 & 0.633\\
seq1HL (2-stack)& S & \textbf{0.637} & \textbf{0.638}\\
att(seq1eH(S), seq1eH(Q))& S & - & 0.633\\
self-att. transformer & S & - & 0.631\\
replicated-SNAIL & S & - &0.630\\
\bottomrule
\end{tabular}
\begin{tablenotes}
\footnotesize
\item \textbf{MAA(ofc)}: official evaluation; \textbf{MAA(val)}: our validation; \textbf{M} and \textbf{S} denote metric and sequence learning, respectively; \textbf{rnb1} is a replication of ``learning to compare''\cite{sung2018learning}; \textbf{rnbc2-UE} and \textbf{seq1HL} are our final models for metric and sequence learning, respectively.
\end{tablenotes}
\end{threeparttable}
\end{table}
\subsection{Main Results and Discussion}
Note that we only discuss results from the non-AR setting here. The main results are displayed in Table 1. The first three rows compare the metric learning-based algorithms. \textbf{rnb1} was the first algorithm we implemented. \textbf{rnb2-UE} adds two embedding layers, achieving a 2.4\%p improvement over \textbf{rnb1}. The final model, \textbf{rnbc2-UE}, achieves an additional 1\%p improvement by changing the target from similarity to skip-labels.
The bottom five rows display the performance of the sequence learning-based algorithms. \textbf{seq1eH} and \textbf{seq1HL} share the same architecture but differ in network depth. \textbf{seq1HL} achieved the best result, a 0.5\%p improvement over \textbf{seq1eH}. \textbf{att(seq1eH(S), seq1eH(Q))} showed performance comparable to \textbf{seq1eH}. The transformer\cite{vaswani2017attention} and SNAIL\cite{snail} are also attention-based models; however, we observed that the sequence learning-based models without attention units worked better.
Overall, the sequence learning-based approaches outperformed the metric learning-based approaches by at least 5.9\%p. This large performance gap implies that sequence learning is more efficient, and that the metric learning-based models miss crucial information in the sequence data.
\subsection{How helpful would it be if complete information were provided to the query sets?}
\begin{table}
\small
\caption{The effect of complete information provided to the query set}
\label{tab:freq}
\begin{tabular}{ccccl}
\toprule
Model&User-logs & Acoustic feat. & Skip-label & MAA(val) \\
\midrule
Teacher&use & use & - & 0.849\\
seq1HL&- & use & - & 0.638\\
\bottomrule
\end{tabular}
\end{table}
So far, the query input $X_q$ has been defined as acoustic features only (see Figure 1). In this experiment, we trained a new model, \textbf{Teacher}, using both the user-logs and the acoustic features available in the dataset. In Table 2, the performance of \textbf{Teacher} is 21.1\%p higher than that of our best model, \textbf{seq1HL}. This reveals that the user-logs in $X_q$ contain very useful information for sequential skip prediction. In future work, we will investigate how to distill this knowledge.
\section{Conclusions}
In this paper, we have described two different approaches to the sequential few-shot skip prediction task in an online music service. The first approach is based on metric learning, which aims to learn how to compare music content represented by acoustic features and user interaction logs. The second approach is based on sequence learning, which is widely used for capturing temporal information, i.e., learning how to refer to past experience. In our experiments, the models were evaluated in the WSDM Cup 2019 using the real dataset provided by {\it Spotify}. The main results revealed that the sequence learning approach consistently outperformed metric learning. In an additional experiment, we verified that providing complete information to the query set could improve prediction accuracy. In future work, we will investigate how to generate or distill this knowledge within the model itself.
\begin{acks}
\small
This work was supported by Kakao and Kakao Brain corporations, and by National Research Foundation (NRF2017R1E1A1A01076284).
\end{acks}
\section{Conclusion}
In this paper we presented a novel approach for communicating information about a robot's internal state during physical interaction. Specifically, we introduced a class of soft, wrapped haptic displays that are mounted on the robot arm at the point of contact between the human and robot to communicate the robot's uncertainty. We designed and manufactured these pneumatic devices using flexible pouches that render one or more pressure signals, and then wrapped the soft displays around rigid robot arms (Section~\ref{Haptic Display and Design}). We finally performed psychophysics and robotics experiments with (a) $1$-DoF displays and (b) multi-DoF displays.
Starting with the $1$-DoF setting, our results suggest that humans can accurately distinguish between different pressures rendered by the wrapped haptic display (Section~\ref{sec:p1}), and that the wrapped display provides more informative feedback about robot learning than the current alternatives (Section~\ref{sec:vt1}). User study participants physically taught the robot in less time while making larger improvements when the $1$-DoF display was wrapped around the robot arm.
We next explored whether multi-DoF haptic displays could be leveraged to communicate more detailed and fine-grained feedback. We compared two approaches: localizing separate $1$-DoF haptic displays to different regions of the robot arm, or distributing identical $3$-DoF displays along the entire arm. From a psychophysics perspective, we found that localized feedback resulted in more accurate communication but at slower speeds: because these signals were spatially distributed over a larger surface area, humans could distinguish them more accurately, but it took longer for participants to move their hands, perceive each region, and recognize the signal (Section~\ref{sec:p2}). We next applied both types of haptic displays to a robot learning task. Here we found that distributed $3$-DoF signals were preferable to localized $1$-DoF signals in terms of teaching time, demonstration improvement, and subjective responses (Section~\ref{sec:vt2}). Participants needed to use their hands to kinesthetically teach the robot arm --- but because the localized feedback required humans to continually change their grasp and feel along the robot arm, this localized feedback conflicted with the human's teaching. In the context of learning from demonstration we therefore found that using multi-DoF haptic displays to concentrate signals in a smaller space resulted in more seamless communication and teaching.
\p{Limitations} This work is a first step towards wrapping pneumatic displays to convey the robot's internal state during physical human-robot interaction. One limitation of our work is that --- without written or verbal explanations --- human users do not know how to interpret the robot's feedback. For instance, we had to explain to users that increased pressures corresponded to increased robot uncertainty. Although it seems reasonable to provide operators with an instruction manual, moving forward we want to ensure that the robot's signals are as intuitive as possible.
\section{Developing a Wrapped Haptic Display} \label{Haptic Display and Design}
We first aim to design a soft haptic display that can be wrapped around a robot arm, conforming to the surface and effectively adding a haptic interface to existing points of contact between the human and robot. This section describes the identification of three critical requirements for the design of the haptic displays (low volume, fast inflation, and textured surface). With these requirements in mind, we outline two designs of wrapped haptic displays built on the same underlying principle. We first discuss the design of a simple 1-DoF display with a large area for contacting the device. Then, we discuss how the lessons learned from the 1-DoF device were used to create a more complex N-DoF design, consisting of multiple reduced-width ``ring'' sleeves placed side by side in a smaller area than the first 1-DoF sleeve design. Finally, we describe the implementation of these wrapped haptic displays in the experimental setting.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=2.0\columnwidth]{Figures/device4.pdf}
\vspace{-0.5em}
\caption{Overview of the soft wrapped haptic display design. (a) Detailed view of the square-cell array implemented in the 1-DoF sleeve display. The thick lines indicate where the LDPE plastic tube was heat-sealed. The sleeve is composed of 3 pouches taped together to form a sleeve with circumference $2{\pi}R$. The sleeve display is shown (b) deflated and (c) inflated. (d) Detailed view of the square-cell array implemented in the 1-DoF ring display. Grouping multiple individually-actuated pouches, placed side by side, forms an N-DoF wrapped haptic display. A 3-DoF ring display is shown in two states: (e) deflated and (f) one DoF (center) actuated while the others are deflated.}
\label{fig:device}
\end{center}
\vspace{-2.0em}
\end{figure*}
\subsection{Requirements}\label{Requirements}
While designing the wrapped haptic display concept we considered three key requirements, linked both to the operation and to improving the haptic sensation: low volume, fast inflation, and textured surface. First, we wanted to design a display that would clearly show inflation without using either large volumes of air or large volumes of static materials. This requirement is linked to keeping the haptic display flexible enough to easily wrap around target objects like the robot arm, and to limiting the effect the display has on the user's interaction with the surface, especially when deflated. Limiting the volume of air that the display holds also helps with the requirement for fast inflation and deflation. This is an important design feature, since fast transitions between inflation levels allow for faster changes in the signals that the display is producing. An additional requirement was to create an inflatable surface that would produce textured tactile sensations. Our hypothesis is that a textured surface helps users to quickly identify pressure changes in the display, since there are more surface features to explore with their hands. Since the target application of the wrapped haptic display is robot learning, an additional design constraint was the need to fully wrap the display around robot arms without constraining motion or impairing demonstration.
\subsection{Soft Haptic Display Concept}\label{concept}
To summarize our requirements, the goal was to conceptualize a soft haptic display that is relatively flat and conforms to a surface while not actuated, but that features low volume, fast inflation, and a textured surface when actuated. To pursue this concept, we use thin, heat-sealable, and relatively inextensible materials that can be formed into air-tight pouches. These pouches are then heat-sealed with patterns to generate a textured surface and constrict the volume of the bag when inflated. Initial testing showed that a single inflatable pouch without heat-sealed texture did not provide enough surface change to assist users in identifying pressure changes, and was also slow to inflate. Additionally, while initially flat when deflated, these pouches generated a large restorative moment against bending when inflated, making it difficult to wrap them around objects. Adding heat-sealed patterns subdivides the bag, limiting the volume, adding texture, and allowing the overall surface to remain flexible when inflated and deflated. The final soft wrapped haptic display design consists of an array of 2.54~cm square-shaped cells patterned into a low-density polyethylene (LDPE) plastic tube using a linear heat sealer (H-89 Foot-Operated Impulse Sealer, ULINE). The cells are interconnected to allow for smooth and fast inflation of the array via a pattern of gaps in sealing. A repeated, homogeneous pattern across the entire length allowed for even and reliable inflation of the display. If the pattern was not homogeneous, we found that issues such as superfluous contraction and unintentional airflow blocking would stop some cells from inflating consistently. The square-array design is shown in Figure~\ref{fig:device} in two form factors. The dimensions and shape of the display can be varied to fit different applications and conform to varied surfaces.
A unit of a soft wrapped haptic display consists of one or more patterned pouches attached to the same pressure source, forming a single degree of freedom (1-DoF); multiple degrees of freedom can be attached together to form an N-degree-of-freedom (N-DoF) display. Given this general description of the soft haptic display, we next describe the specific 1-DoF and 3-DoF displays that were used in experimental testing.
\subsection{Large Surface Display}\label{subsec:1-dof}
We first experimented with a simple 1-DoF display with a large surface area for humans to interact with. The 1-DoF soft wrapped haptic display is made from a set of three connected pouches made from a 10.16~cm flat-width LDPE plastic tube (S-5522, ULINE). The plastic tube was cut to fit the length of one of the sections of a UR-10 robotic arm (40.64 cm). As previously mentioned, the square pattern was manufactured into the LDPE tube using a heat sealer. The sealed lines are 1.27 cm long, alternated in rows and columns to create the 2.54 cm squares. Figure~\ref{fig:device}(a) shows the design with the dimensions in more detail. Through-wall straight connectors (5779K675, McMaster-Carr) were attached to one side of each bag strip to allow for individual inflation. The circumference of the robot arm segment ($2 \pi R$) was found to match the width of three bags placed side by side, as shown in Figure~\ref{fig:device}(a). The display was made of three bags taped together using viscoelastic adhesive tape (MD-9000, Marker Tape) to construct a sleeve that entirely wrapped the cylindrical surface. The bags were then connected using tee-adapters so that all three could be inflated from a single pressure line, essentially creating a 1-DoF soft wrapped haptic display in the shape of a sleeve. Tests showed that the 1-DoF soft wrapped haptic display can be inflated quickly: pressures above 1.5 psi (10.43 kPa) inflate the display in 0.86 seconds, and it can vary the inflation pressure from 1 to 3 psi (6.89 to 20.68 kPa) in 0.72 seconds and deflate back to 1 psi in 0.18 seconds. The display can operate up to a maximum of 3.5 psi (24.13 kPa); above that pressure the heat-sealed edges begin to tear, producing leaks.
\subsection{Multi-Degree of Freedom Display}\label{subsec:n-dof}
We next aimed to increase the signal complexity while maintaining the design requirements by building on the design of the 1-DoF display. We did this by grouping multiple individually-actuated pouches placed side by side, forming an N-DoF wrapped haptic display, as shown in Figure~\ref{fig:device}(d) and (e). For this design, each of the pouches consisted of a 2.54~cm flat-width LDPE plastic tube (S-11155, ULINE), cut to fit the circumference ($2 \pi R$) of varied segments of the robot arm. This way, the length of the pouch guaranteed that the haptic display fully wrapped around the cylindrical surfaces of the segments, forming a ring-shaped soft wrapped haptic display. The pattern is modified from the one used in the 1-DoF displays to better fit the available off-the-shelf LDPE tubing. The 2.54~cm square cell grid was achieved by heat sealing 1.7~cm long lines across the length of the tube, alternating sides as shown in Figure~\ref{fig:device}(c). The pattern allowed ring displays to conform better to a cylindrical surface, while providing a textured surface and restricting excessive inflation. Silicone tubing (0.66~cm OD) was attached to one end of each ring display for inflation. For the studies described in Sections~\ref{sec:p2} and \ref{sec:vt2}, three ring displays were placed side by side. The ends of the displays were taped with a 1.9~cm separation between each. Grommets were placed in the ends of the group of displays, and elastic bands tied the device around the cylindrical surface. The separation between pouches is intended to make identifying each pouch easier. This design allows the rendering of multiple signals in a smaller area than the 1-DoF sleeve design. For example, N ring displays can be placed in a single location to render N individual signals. Actuating individual ring displays in sequence can further increase the complexity of the haptic signals rendered by a set of N ring displays.
Since the N-DoF display covers a smaller area, it is easier to mount in different places on the robot arm. The geometry of these ring displays influences their actuation performance compared to the 1-DoF sleeve design: since the width of these displays (and therefore their radius when inflated) is small, they have a smaller volume when inflated and resist higher pressures, producing faster inflation/deflation speeds. These ring-shaped soft wrapped haptic displays can be inflated to pressures above 1.5 psi (10.43 kPa) in 0.55 seconds, and withstand a maximum of 5 psi (34.48 kPa). This design also allows for faster variations in inflation pressure, switching from 1 to 3 psi (6.89 to 20.68 kPa) in 0.38 seconds and deflating back to 1 psi in 0.12 seconds.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.80\columnwidth]{Figures/implementation.pdf}
\vspace{-0.5em}
\caption{The actuation of the soft wrapped haptic displays consisted of (1) a pressure regulator that supplied an electronically controlled pressurized-air supply and (2) a pressure release feature for deflating the displays. (a) Pneumatic arrangement implemented for the studies described in Sections \ref{sec:vt1} and \ref{sec:vt2}, which uses a 550-AID pressure regulator and an electronic pressure sensor. (b) Pneumatic arrangement implemented for the studies described in Sections \ref{sec:p1} and \ref{sec:p2}, which uses a QB3 pressure regulator with an integrated pressure sensor. In both cases, if faster switching between inflation and full deflation is needed, on-off solenoid valves can be implemented.}
\label{fig:implement}
\end{center}
\vspace{-2.0em}
\end{figure}
\subsection{Implementation} \label{subsec:implementation}
As mentioned in Subsections~\ref{subsec:1-dof} and \ref{subsec:n-dof}, the haptic displays were mounted on cylindrical surfaces for the studies outlined in the remainder of this work: either sections of the robot arms or a PVC pipe acting as a passive stand-in. The mounting arrangements fixed the wrapped display in place, restricting it to less than 10\% contraction. Figure~\ref{fig:implement} shows the basic pneumatic control systems used to actuate the wrapped haptic displays. In summary, the actuation consisted of (1) a pressure regulator that supplied an electronically controlled pressurized-air supply and (2) a pressure release feature for deflating the displays. Two different electronically controlled pressure regulators were used for the studies described in this paper. A pressure regulator (QB3, Proportion-Air, McCordsville, Indiana) was used for the studies outlined in Sections \ref{sec:p1} and \ref{sec:p2}. The regulator was controlled using an Arduino Uno via MATLAB: the Arduino sent analog signals to the pressure regulator, which provided the accurate pressure values needed for the studies. This pressure regulator has a built-in sensor and exhaust line. For the user studies described in Sections \ref{sec:vt1} and \ref{sec:vt2}, a pressure regulator (550-AID, ControlAir, Amherst, New Hampshire) was controlled using either the UR-10's I/O controller or an Arduino Uno. This pressure regulator does not have a built-in sensor, but does have an exhaust line; the inflation pressure was measured using an electronic pressure sensor (015PGAA5, Honeywell Sensing, Golden Valley, Minnesota). For both pressure regulators, we initially used on-off solenoid valves (ASCO Z134A, Emerson, St. Louis, Missouri) to switch the air supply and allow air to escape for deflation. The exhaust valves in the pressure regulators are not capable of deflating the display to zero volume, but only to zero pressure, leaving some air in the display.
However, since the experiments did not involve complete deflation of the haptic displays, we determined solenoid valves were not needed. If faster switching between inflation and full deflation is needed, on-off solenoid valves can be implemented. It is important to note that each 1-DoF device (either the 1-DoF sleeve or the individual rings in the 3-DoF display configuration) are connected to individual pressure supplies. For the case of the 3-DoF display, one can configure the device to effectively act as a 1-DoF device by connecting the individual rings to a single pressure, or have 3-DoF control if three pressure regulators are used.
\section{Introduction}
Imagine teaching a rigid robot arm to clean objects off a table (see \fig{front}). One intuitive way for you to teach this robot is through \textit{physical interaction}: you push, pull, and guide the arm along each part of the task. Of course, the robot may not learn everything from a single demonstration, and so you show multiple examples of closing shelves, removing trash, and sorting objects. As you kinesthetically teach the robot you are faced with two questions: i) has the robot learned enough to clear the table by itself and ii) if not, what features of the task is the robot still uncertain about?
While existing work enables robots to learn from physical human interaction \cite{argall2009survey,akgun2012keyframe,pastor2009learning,losey2021physical}, having the robot effectively provide \textit{real-time feedback} to human teachers remains an open problem. Ideally, this feedback should not be cumbersome or distracting (i.e., the human must be able to focus on seamlessly guiding the robot) and should be easily interpretable (i.e., the human must be able to clearly distinguish between different signals). These requirements present a tradeoff as human fingertips provide the densest mechanoreceptors, but placing rigid devices at the hand will impact task performance. Recent research has created communication channels by instead wrapping \textit{haptic devices} around the human's arm \cite{che2020efficient, mullen2021communicating, dunkelberger2020multisensory}, but locating feedback at unrelated locations on the human's body can create a disconnect with the task.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\columnwidth]{Figures/front.pdf}
\vspace{-0.5em}
\caption{Human physically teaching a robot arm. We wrap a soft pneumatic display around the arm and render haptic signals by controlling the pressure of the display. The robot learner leverages this haptic display in real-time to communicate the parts of the task that it is confident about, as well as the parts where it is uncertain and needs additional guidance.}
\label{fig:front}
\end{center}
\vspace{-2.0em}
\end{figure}
Our insight is that --- instead of asking the human teacher to wear a feedback device or watch a computer monitor ---
\begin{center}\vspace{-0.3em}
\textit{We can take advantage of the preexisting physical contact between the human and robot through slim form-factor soft haptic displays that \emph{wrap} around the robot arm.}\vspace{-0.3em}
\end{center}
Accordingly, in this paper we develop and analyze wrapped haptic displays for communicating robot learning based on soft robotic principles. We distribute these soft displays along rigid robot arms so that the human can physically interact with the robot to demonstrate a task while simultaneously perceiving the robot's feedback. We actively control the \textit{pressures} of the pneumatic display to render where in the task and what features of the task the robot is \textit{uncertain} about: the display inflates for regions and features of the task where the robot is unsure about its actions (and needs additional human teaching), and deflates where the robot is confident about the task (and does not need any additional human guidance). Our hypothesis is that --- because the soft wrapped display creates a channel for communication on any surface without impacting the task --- humans will be able to more intuitively and robustly use this feedback with a greater level of focus compared to other feedback modalities. We experimentally demonstrate that this pressure-based feedback enables humans i) to determine whether the robot has learned enough to be deployed and ii) to identify parts of the task where kinesthetic teaching is still required. Additionally, we demonstrate the importance of the location and distribution of the feedback on the robot arm for creating this improvement.
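As an illustrative sketch (not our exact controller), a linear mapping from normalized learner uncertainty to a display pressure command in the 1--3 psi operating band could look like:

```python
def uncertainty_to_pressure(u, p_min=1.0, p_max=3.0):
    """Map normalized learner uncertainty u in [0, 1] to a pressure
    command (psi). The 1-3 psi band matches the sleeve display's
    reported operating range; the linear map itself is an assumption."""
    u = min(max(u, 0.0), 1.0)  # clamp out-of-range uncertainties
    return p_min + u * (p_max - p_min)
```

Under this mapping the display fully deflates to the 1 psi baseline where the robot is confident, and inflates toward 3 psi where additional human guidance is needed.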
Parts of this work were previously published in \cite{valdivia2022wrapped}, which presented the experimental results for our one degree-of-freedom (DoF) haptic display. This current paper builds on that initial research by demonstrating the design, analysis, and application of \textit{multi-DoF} spatial signals localized or distributed along the robot arm. Here we also present follow-up analysis for the psychophysics of the $1$-DoF device. Overall, we make the following contributions:
\p{Developing Wrapped Haptic Display} We design and build a compliant pneumatic haptic device that wraps around and conforms to the robot, providing haptic stimuli that are localized to the robot arm and distributed along its geometry. This device is manufactured using soft, flexible pouches that render haptic signals through pressure.
\p{Measuring User Ability to Perceive Wrapped Displays} We perform a psychophysics study to find the range of pressures that humans can distinguish. We report the just noticeable difference (JND) for pressures rendered by the soft display.
\p{Applying Wrapped Displays to Communicate Learning} We ask participants to kinesthetically teach a robot arm while the robot provides real-time feedback about its learning. We map the robot's uncertainty to the pressure of our wrapped display. When compared to a graphical user interface, rendering feedback with our wrapped haptic display leads to faster and more informative human teaching, and is subjectively preferred by human teachers.
\p{Extension on Wrapped Displays to Multiple Degrees of Freedom} We extend and generalize the wrapped display design to create multi-degree of freedom displays. These displays can be configured to fit different robotic manipulator geometries and to change the interconnections between degrees of freedom.
\p{Measuring Effect of Display Distribution on User Perception} We perform a psychophysics study to understand how the spatial distribution of the wrapped haptic display signals affects the accuracy and speed of signal identification. We demonstrate a tradeoff between speed of identification and accuracy as signals are spread further apart.
\p{Measuring Effect of Display Distribution of Multi-Degree of Freedom Displays for Communicating Learning} We repeated the kinesthetic teaching study with three-degree-of-freedom displays, confirming that users still improve demonstrations over the baseline as signal complexity increases. When comparing different options for distributing feedback in 3-DoF displays, users performed better with and subjectively preferred wrapped display layouts where all feedback was displayed in the small area where contact was already occurring, rather than distributed over larger areas along the robot arm.
\section{Related Work}
In this paper we introduce a wrapped haptic display for communicating robot learning in real-time during physical human-robot interaction. We build on previous research for kinesthetic teaching, haptic interfaces, and soft displays.
\p{Kinesthetic Teaching} Humans can show robot arms how to perform new tasks by physically demonstrating those tasks on the robot \cite{argall2009survey,akgun2012keyframe,pastor2009learning,losey2021physical}. As the human backdrives the robot, the robot records the states that it visits and the human's demonstrated actions at those states. The robot then learns to imitate the human's actions and perform the task by itself \cite{ross2011reduction}. One important output of the learning process is the robot’s \textit{uncertainty} about the task. The uncertainty can be measured as the robot’s \textit{overall} confidence in \textit{what} to do at different states \cite{hoque2021thriftydagger,menda2019ensembledagger}, or also include measuring the robot’s confidence on \textit{how} to perform the task \cite{hough2017s, cakmak2012designing, basu2018learning, habibian2022here}. In this paper we explore how robots should \textit{communicate} their learning uncertainty back to the human teacher. Keeping the human up-to-date with what the robot has learned builds trust and improves teaching \cite{hellstrom2018understandable}. Outside of physical human-robot interaction, prior research has developed multiple non-haptic modalities to communicate robot learning and intent: these include robot motion \cite{dragan2013legibility}, graphical user interfaces \cite{huang2019enabling}, projections into the environment \cite{andersen2016projecting}, and augmented reality headsets \cite{walker2018communicating}. Within a teleoperation domain, our recent work suggests that \textit{haptic interfaces} are particularly effective at communicating low-dimensional representations of robot learning \cite{mullen2021communicating}. Here we will leverage these results to develop a real-time feedback interface \textit{specifically for} kinesthetic teaching.
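To make this notion of learning uncertainty concrete, the sketch below illustrates ensemble disagreement as an uncertainty measure. It is a deliberately minimal stand-in (a bootstrapped ensemble of linear least-squares policies rather than the neural network ensembles of \cite{menda2019ensembledagger}); all function names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_linear_policy(states, actions):
    """Behavior cloning with a linear policy a = s @ W (least squares)."""
    W, *_ = np.linalg.lstsq(states, actions, rcond=None)
    return W

def train_ensemble(states, actions, n_members=5):
    """Fit each ensemble member on a bootstrap resample of the demos."""
    members = []
    for _ in range(n_members):
        idx = rng.integers(0, len(states), len(states))
        members.append(fit_linear_policy(states[idx], actions[idx]))
    return members

def uncertainty(members, state):
    """Uncertainty = disagreement (std. dev.) of the members' actions."""
    preds = np.array([state @ W for W in members])
    return preds.std(axis=0).mean()
```

States far from the demonstration data produce larger disagreement between ensemble members, which is the scalar signal a haptic interface can then render.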
\p{Haptics for Information Transfer}
When considering using haptic signals for communicating features of robot learning, the type of information being transferred is important to consider. While haptic devices have a general goal of stimulating the human sense of touch, haptics has also previously been applied to communicate \textit{robot intent} or similar social features. For instance --- when studying how humans and robots should interact in shared spaces --- prior works have used haptics to explicitly convey the robot's intended direction of motion or planned actions \cite{che2020efficient, cini2021relevance, casalino2018operator, grushko2021improved}. Recent work has shown that, given appropriate context, complex human-to-human social touch signals, like stroking \cite{nunez2020investigating, muthukumarana2020touch}, hugging \cite{HuggyPajamaTeh}, dancing \cite{kobayashi2022whole}, and emotional communication \cite{SalvatoTOH2021, ju2021haptic, rognon2022linking}, can be replicated and understood in a wearable format. Some other work has shown the use of haptic interfaces for high information tasks, like assisting navigation, by rendering patterns with a certain meaning, such as direction guidance or identification of points of interest \cite{paneels2013WHC}. Lastly, work has shown communicating alerts with different urgency levels in car driving settings \cite{di2020purring, locken2015tacticar} and communicating contact events in teleoperation and AR/VR through hand-held devices with haptic feedback \cite{choi2018claw, mintchev2019portable}. These past works suggest that a wide range of social and collaborative information can be transferred using haptics with appropriate design of the interface and signals.
\p{Soft Haptic Devices}
Soft haptic devices offer an attractive option for human-robot communication due to their compliance and adaptability, either through the \textit{flexibility} of the interface or the \textit{compliance} of the actuators themselves. Soft haptic displays have been shown with a range of compliant actuation types: pneumatic actuation \cite{raitor2017wrap,HapWRAP2018}, shape memory alloys \cite{muthukumarana2020touch}, dielectric elastomers \cite{zhao2020wearable}, and fluidic elastomers \cite{barreiros2018fluidic}. Soft devices can target small areas of stimulation. Soft wearable fingertip devices have successfully targeted a range of stimuli in the skin \cite{yin2021wearable}, such as vibrations \cite{ji2021untethered, feng2017submerged, hinchet2018dextres}, indentation \cite{leonardis2022parallel, boys2018soft, hwang2017design}, skin-stretch \cite{minamizawa2010simplified, leonardis2015wearable}, or some combination of those \cite{zhakypov2022fingerprint, leroy2020multimode, youn2021wearable, giraud2022haptigami}. Soft haptic approaches scale easily to increased areas of stimulation; work on developing haptic surfaces out of arrays of actuators and sensors shows scaling to fit varied areas. These developments have typically used rigid elements embedded in cloths and silicone layers to create bi-directional interfaces that can cover large areas: actuation has included NFC electronics \cite{yu2019skin}, thin-metal film strain sensors \cite{sonar2020closed}, and piezo films \cite{suh2014soft}. These rigid elements can limit the flexibility of the device, and lead to issues with wear over time and comfort. Some tabletop haptic displays have used pneumatically actuated soft composite materials \cite{yao2013pneui} or combined particle jamming and pneumatic actuation \cite{stanley2015controllable} to control the shape and mechanical properties of surfaces, leading to highly complex signals and comfortable interaction.
Soft haptic interfaces also easily support a range of device types distinguished by the method of interaction: graspable, wearable, or touchable \cite{culbertson2018haptics}. This method of interaction can have a large impact on the usability of the devices. Many haptic interfaces are designed to be wearable. Fingertip-worn devices serve as an obvious choice for providing high-fidelity and interpretable signals \cite{yin2021wearable,hinchet2018dextres,zhakypov2022fingerprint}. These devices are popular for virtual reality, where physical contact with the real world is unlikely, but in other applications they can reduce the user's ability to use their hands during the target task. This motivated wearable devices for body areas other than the fingertip, such as the hand dorsum \cite{chossat2019soft, wang2019three}, wrists/forearms \cite{raitor2017wrap, muthukumarana2020touch, HapWRAP2018}, or gloves that cover the whole hand \cite{in2015exo}. Placing haptic signals \textit{directly on the human body} enables the human to move about the space while receiving real-time feedback; but as feedback is moved away from the fingertip and physically separated from the task it potentially requires additional mental energy to decode the intended message. A different approach has focused on developing touchable haptic surfaces consisting of arrays of actuators and sensors \cite{yu2019skin, sonar2020closed, suh2014soft}. These devices use the fingertip mechanoreceptors without burdening the user's hands. Soft touchable displays allow installation of haptic interfaces in common touch areas, like car steering wheels \cite{di2020purring}. While not a haptic display, recent work showed pneumatic actuators wrapped around robot arms to visualize the weight load carried by the robot \cite{larsen2022Roso}. 
Based on this past work, we target a touchable device placed at the point of human-robot interaction, and use soft pneumatic actuation to maximize the flexibility and transparency of the display.
\section{Measuring Human Perception of 1-DoF Wrapped Haptic Displays} \label{sec:p1}
Understanding the human sensory perception of the soft display, especially as it compares to rigid haptic displays, is essential in determining how to apply and control the wrapped haptic display.
To that end, we conducted a psychometric user study to measure the basic ability to distinguish touch sensations outside of the context of the target application scenario and to obtain qualitative data of how users perceive the display. Participants physically interacted with the 1-DoF display and were asked to distinguish between pairs of pressures. We focused on studying the user’s ability to differentiate pressure inflation levels in the display to understand the minimum pressure differential that can produce clear haptic signals.
\subsection{Experiment Setup} \label{Exp Setup P}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.95\columnwidth]{Figures/setup_study1.pdf}
\vspace{-0.5em}
\caption{Experimental Setup. Participants were instructed to sit at the desk directly in front of the curtain and to put on hearing protection headphones.}
\label{fig:exp_setup}
\end{center}
\vspace{-2.0em}
\end{figure}
The 1-DoF inflatable haptic display was mounted on a PVC pipe of identical diameter to the UR-10 used in Section~\ref{sec:vt1}. The pipe was placed lying flat and secured to a table. As shown in Figure~\ref{fig:exp_setup}, we placed a curtain to block the user’s vision and instructed users to wear hearing protection to ensure the perception study was focused entirely on tactile sensations.
The study was conducted as a forced-choice comparison in which participants were asked to identify the higher pressure. The pressures were shown in pairs (i.e., reference pressure, $P_o$, vs. test pressure, $P$), distinguished as ``Pressure 1'' and ``Pressure 2''. We selected 2 psi (13.79 kPa) as the reference pressure, and test pressure values of 1.5, 1.75, 1.875, 2.0, 2.125, 2.25, and 2.5 psi (10.34, 12.07, 12.93, 13.79, 14.65, 15.51, and 17.24 kPa), since these values lie within a safe operating range for the display. Each pressure was compared against the reference ten times.
We randomized the order in which the $P_o$ and $P$ pairs would be shown to the participant, as well as the order in which the reference and test pressure would be shown in each pair. We also showed the reference pressure against itself to measure bias on whether participants preferred choosing the first or the second pressure when unsure.
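The randomized trial schedule described above can be generated as in the following sketch (the function name and seed are our own illustrative choices; the pressures are those listed in the text):

```python
import random

def build_trials(ref=2.0,
                 tests=(1.5, 1.75, 1.875, 2.0, 2.125, 2.25, 2.5),
                 reps=10, seed=0):
    """Build the forced-choice trial list: each test pressure is paired
    with the reference `reps` times; both the within-pair order and the
    overall pair order are shuffled."""
    rng = random.Random(seed)
    trials = []
    for p in tests:
        for _ in range(reps):
            pair = [ref, p]
            rng.shuffle(pair)          # randomize which is "Pressure 1"
            trials.append(tuple(pair))
    rng.shuffle(trials)                # randomize the order of the pairs
    return trials
```

Note that because the reference value itself appears in the test set, the reference-vs-reference bias probes described above fall out of the same construction.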
The participants were instructed to sit at the desk, positioned in front of the curtain, and put on hearing protection headphones. Before beginning the experiments, we demonstrated the display function to the participants by inflating the display to three pressure levels and allowing them to interact with it. Each experimental trial started by inflating the display to the selected ``Pressure 1''. Once the display reached a steady-state, constant pressure, the participants were asked to touch and interact with the display for an unrestricted period of time and then release it. There was no restriction on how the participants could grasp or touch the display; however, they were allowed to interact only while the device held a constant pressure. Then, the display was inflated to ``Pressure 2''. Again, the participants were asked to touch the display and then release it. Once they had interacted with both pressure levels, we asked which one felt like a higher inflation pressure. The subjects were not told the correct answers during the experiment. This procedure was then repeated until all pairs of pressures had been tested ten times. Since seven different pressures were tested against the reference pressure, we had a total of 70 pairs in the study.
After completing the interaction portion of the experiment, the participants were given a post-experiment questionnaire. The questionnaire asked about the overall experience during the study (clarity of instructions, sense of safety during the experiment) and about their previous experiences and familiarity with haptic technology, robotics, and video games. The entire experiment took approximately 35 minutes, with an optional break after the first 35 experimental pairs.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.89\columnwidth]{Figures/single_sigmoid_10.pdf}
\vspace{-1em}
\caption{Raw data and sigmoid function fit for a single participant. The percentages represent the times this subject selected the test pressure, $P$, as higher. The JNDs were calculated using the sigmoid function to solve for the pressure value corresponding to the 75\% threshold, and subtracting it from the reference pressure.}
\label{fig:single_sigmoid}
\vspace{-0.75em}
\end{center}
\end{figure}
\subsection{Results}
A total of $10$ participants ($5$ female or nonbinary, average age $20.6$ years, age range $18-23$ years) participated in this experiment after giving informed consent. Out of the group, $9$ participants were right-handed, and $1$ was left-handed. The Purdue Institutional Review Board approved the study protocols.
Figure \ref{fig:single_sigmoid} shows a single subject's responses to the experiment. Each dot shows the percentage of times the test pressure was selected as higher when compared against the reference pressure. The just noticeable difference (JND) was calculated by first fitting a sigmoid function to the data
\begin{equation}\label{sigmoid}
q = \frac{100}{1+e^{-k(P-P_o)}}
\end{equation}
where $q$ is the modeled percentage of times the user chose the test pressure ($P$) as higher, $k$ is the steepness factor of the sigmoid curve, $P$ is the test pressure, and $P_o$ is the reference pressure.
Using this fit, the JND is calculated by finding the pressure value corresponding to the 75\% threshold, $P_{75}$, and subtracting the reference pressure, $P_o$:
\begin{equation}\label{JND}
JND = P_{75}-P_o = -\frac{1}{k} \ln\left(\frac{100}{75}-1 \right)
\end{equation}
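Equations~(\ref{sigmoid}) and~(\ref{JND}) can be combined into a short fitting routine. The sketch below uses a simple grid search over the steepness $k$ as an illustrative stand-in for the curve-fitting procedure actually used:

```python
import numpy as np

def fit_jnd(P, q, P_o=2.0, ks=np.linspace(0.1, 20.0, 2000)):
    """Fit q = 100 / (1 + exp(-k (P - P_o))) by least squares over a
    grid of steepness values k, then compute the JND from Eq. (2):
    JND = -(1/k) * ln(100/75 - 1) = ln(3)/k."""
    P, q = np.asarray(P, float), np.asarray(q, float)
    sse = [np.sum((100.0 / (1.0 + np.exp(-k * (P - P_o))) - q) ** 2)
           for k in ks]
    k = ks[int(np.argmin(sse))]        # best-fit steepness
    jnd = -np.log(100.0 / 75.0 - 1.0) / k
    return k, jnd
```

With percentages generated from the reported overall steepness ($k = 4.678$), this routine recovers the corresponding JND of roughly $0.235$~psi.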
Figure~\ref{fig:raw_sigmoid} shows the sigmoid function fit for each of the subjects, as well as the fit for the collection of responses from all subjects.
\subsection{Analysis} \label{subsec:study1_analysis}
The experimental results show that the $k$ steepness factor for the overall sigmoid fit (shown as the orange line in Figure~\ref{fig:raw_sigmoid}) was 4.678, with 95\% confidence bounds between 3.605 and 5.751, giving a JND of 0.235~psi (1.62~kPa). Table~\ref{tab:1} summarizes the JNDs for each of the participants. Individual JNDs ranged from 0.099 to 0.444 psi (0.68--3.06 kPa). The mean JND, defined as the mean of the values obtained for all participants, was 0.228 psi (1.57 kPa), with a standard deviation of 0.109 psi (0.75 kPa). The Weber fraction (WF), calculated as the ratio of the JND and the reference pressure, ranged between 4.9\% and 22.2\%, with a mean value of 11.4\%. Although there was no restriction on how the user could interact with the display, multiple users reported (via post-experiment questionnaire) using active interaction with the inflation to explore the display. This means that users relied on reactive force sensing to probe the dynamics of inflation and determine how much pressure was used to inflate the display. Additionally, the users reported they mainly used their fingertips. Previous fingertip psychophysics studies report similar JNDs and WFs. Frediani and Carpi \cite{frediani2020tactile} conducted psychophysical tests for a fingertip-mounted pneumatic haptic display, reporting JNDs in the range of 0.12--0.33 psi (0.8--2.3 kPa) for driving pressures between 0.58 and 2.90 psi (4 and 20 kPa). The WF found in that experiment was 15\%. Another study evaluating a haptic jamming display found fingertip WFs to be 16\% ($\sigma$ = 7.4\%) and 14.3\% ($\sigma$ = 2.6\%) for stiffness and size perception, respectively \cite{genecov2014perception}. A different study testing stiffness perception for a rigid vibrotactile, fingertip-mounted haptic device reported WFs between 17.7 and 29.9\% \cite{maereg2017wearable}. 
The results of this study demonstrate that our wrapped haptic display performs according to the psychometric baselines found in the literature. The JNDs and Weber fractions obtained show that the display produced detectable signals and matched previously developed rigid or soft haptic devices in performance.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.9\columnwidth]{Figures/raw_sigmoid_7.pdf}
\vspace{-1em}
\caption{Sigmoid function fit for each of the subjects (grey), and the collection of responses from all subjects (orange). The dots represent percentages associated with individual subject responses. The $k$ steepness factor for the overall sigmoid fit was 4.678, giving a JND of 0.235~psi. The individual steepness factors ranged from 2.477 to 11.15, with JNDs varying between 0.099 and 0.444~psi (0.68--3.06~kPa).}
\label{fig:raw_sigmoid}
\end{center}
\vspace{-2em}
\end{figure}
As mentioned in Section \ref{Exp Setup P}, the reference pressure was shown against itself 10 times to the subjects to measure bias on whether users had a preference for choosing ``Pressure 1'' or ``Pressure 2''. Overall, the results showed that there was no bias on their choices. The subjects chose ``Pressure 1'' as the higher pressure 45\% of the time, and ``Pressure 2'' 55\% of the time. Two subjects had a large preference for choosing ``Pressure 2'' as the highest when shown this pair of identical pressures (80\% of the time). Looking at the qualitative data, one of these subjects mentioned that they were unsure about their answers throughout the experiment, which may explain the discrepancy in their bias relative to the average bias shown by the complete pool of participants.
The qualitative data collected from the post-experiment questionnaire shows that, besides the participant already mentioned (who had the highest JND), no other participants struggled to identify the pressures. A majority of the participants (7 out of 10) mentioned that they could detect the differences and that they ``more or less agree'' or ``completely agree'' that they were sure about their answers throughout the experiment. Additionally, $9$ out of the $10$ participants said they felt safe interacting with the haptic display. It is also worth noting that the subjects with the highest correctness rates when comparing pressures mentioned they have dexterity-related hobbies or skills. For example, subject 2, who had the smallest JND and Weber fraction, mentioned that they play multiple string musical instruments. This activity requires them to vary contact pressure, which explains their high performance in the experiment. Other hand-related activities mentioned by high-performing participants include knitting, piano playing, and American Sign Language proficiency.
This study shows that the sensations produced by our wrapped haptic display match the psychometric measures for other haptic devices. The fingertip JNDs were in close agreement with those found in the literature. Additionally, qualitative data showed that users felt safe interacting with the display. The users were able to distinguish pressure changes without a specific task context and visual feedback. The qualitative and quantitative data show that the wrapped display fulfilled the requirements outlined in Section \ref{Requirements}. Overall, we demonstrated that the soft wrapped haptic display can perform as well as other haptic devices (both rigid or soft) in displaying tactile signals without encumbering interaction.
\begin{table}[t]
\centering
\caption{Experimental results for psychophysics study.}
\vspace{-0.5em}
\label{tab:1}
\begin{tabularx}{0.90\columnwidth}{
| >{\centering\arraybackslash}X
| >{\centering\arraybackslash}X
|| >{\centering\arraybackslash}X
| >{\centering\arraybackslash}X |}
\hline
\textbf{Subject} &\textbf{\textit{k}} &\textbf{JND (psi)} &\textbf{WF (\%)}\\
\hline \hline
1 &5.048 &0.218 &10.88 \\ \hline
2 &11.15 &0.099 &4.927 \\ \hline
3 &3.846 &0.286 &14.28 \\ \hline
4 &2.478 &0.443 &22.17 \\ \hline
5 &4.989 &0.220 &11.01 \\ \hline
6 &8.557 &0.128 &6.419 \\ \hline
7 &2.477 &0.444 &22.18 \\ \hline
8 &5.008 &0.219 &10.97 \\ \hline
9 &4.574 &0.240 &12.01 \\ \hline
10 &5.102 &0.215 &10.77 \\ \hline \hline
\textbf{Mean} &4.810 &0.228 &11.42 \\ \hline
\textbf{St Dev} &2.524 &0.109 &5.431 \\ \hline
\textbf{Overall} &4.678 &0.235 &11.74 \\ \hline
\end{tabularx}
\vspace{-1.5em}
\end{table}
\subsection{Follow-Up Study}
As timing became a significant factor during the later studies in Sections~\ref{sec:vt1}-\ref{sec:vt2}, we conducted a follow-up study that replicated the wrapped haptic display experimental procedure with the addition of a graphical user interface (GUI). The purpose of the GUI was to enable participants to control the pace of the experiment without the influence of the experimenter, and it allowed accurate recording of the time each user spent exploring each pressure. By evaluating time, we can better understand later results on the timing and difficulty of interpreting haptic signals.
A total of 12 participants (6 female or nonbinary and 6 male, average age 21.9 years, age range $21-23$ years) participated in the follow-up experiment after providing informed consent. Due to technical difficulties in data collection, 2 participants were removed from the study. 1 additional participant was excluded from analysis as an outlier since they performed equivalently to guessing. Of the remaining 9 participants, 7 were right-handed, and 2 were left-handed.
The results show a JND of 0.279 psi (1.923 kPa). Individual JNDs ranged from 0.114 to 0.674 psi (0.788--4.650 kPa), with a mean JND of 0.310 psi (2.136 kPa) and a standard deviation of 0.173 psi (1.195 kPa). The WF ranged between 5.7\% and 33.7\%, with a mean value of 15.5\%. These findings are consistent with those of the initial study.
Participants spent an average of 13.84 seconds on the first pressure ($\sigma$ = 7.323 seconds), an average of 11.27 seconds on the second pressure ($\sigma$ = 5.746 seconds), and an average of 25.11 seconds per pressure pair ($\sigma$ = 10.855 seconds). By one-way ANOVA, total time spent per pressure pair was found to significantly impact correctness ($p$ = 0.024). Subjects answering incorrectly spent significantly more time on average assessing the haptic device than when answering correctly. In particular, participants spent an average of 26.84 seconds assessing the pressure when answering incorrectly, and an average of 24.56 seconds when answering correctly. Notably, it was determined that mean time itself did not have a significant influence on overall accuracy ($p$ = 0.973).
\section{Applying Wrapped Haptic Displays to Communicate 1-DoF Robot Uncertainty} \label{sec:vt1}
So far we have studied the precision with which humans can perceive the 1-DoF wrapped haptic display. Next, we apply this display to convey robot learning from physical interactions. In this experiment, participants kinesthetically teach a UR-10 robot arm to perform a set of cleaning tasks. We apply an existing learning algorithm to measure the robot's uncertainty \cite{menda2019ensembledagger} and then convey that uncertainty back to the human in real-time. We highlight two key differences from the experiment in Section~\ref{sec:p1}: the robot arm is \textit{moving during interaction} (i.e., the wrapped haptic display is not stationary), and the haptic display \textit{now conveys a specific signal} that the human must interpret and react to during interaction. We recognize that --- because participants are now interacting with a moving robot arm --- they will experience both the forces they apply to the arm and the pressure rendered by the haptic display, which will also be changing to represent uncertainty. We anticipate this will make changes in pressure easier to recognize compared to the perception study, where participants interacted with constant signals.
\p{Independent Variables} We compared three different types of feedback (see \fig{setup_study2}):
\begin{itemize}
\item A graphical user interface (\textbf{GUI}) that displayed the robot's uncertainty on a computer monitor.
\item Our soft haptic display placed \textbf{Flat} on the table.
\item Our proposed approach where we \textbf{Wrapped} the haptic display around the robot arm.
\end{itemize}
All three types of feedback showed the same information but used different modalities.
Within the \textbf{GUI} baseline we displayed uncertainty on a computer screen that was located in front of the user. Here uncertainty was shown as a percentage, where numbers close to $0\%$ meant that the robot was certain about that specific part of the task, and numbers close to $100\%$ indicated that the robot was uncertain about what it had learned. The \textbf{Flat} and \textbf{Wrapped} interfaces used the 1-DoF soft haptic display from Section \ref{Haptic Display and Design}. Uncertainty was linearly scaled on the haptic display from $1-3$ psi ($6.89 - 20.68$ kPa). Here $1$ psi (deflated bags) corresponded to $0\%$ uncertainty and $3$ psi (inflated bags) corresponded to $100\%$ uncertainty. The \textbf{Flat} haptic display was placed in a designated area next to the human, such that participants could periodically touch it while guiding the robot.
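This linear mapping from uncertainty to display pressure is a one-line utility; the sketch below is our own minimal formulation (the function name and explicit clipping are assumptions, not taken from the control code):

```python
def uncertainty_to_pressure(u, p_min=1.0, p_max=3.0):
    """Linearly map normalized uncertainty u in [0, 1] to a display
    pressure in psi: deflated (p_min) when the robot is confident,
    inflated (p_max) when it is uncertain. Out-of-range u is clipped."""
    u = min(max(u, 0.0), 1.0)
    return p_min + u * (p_max - p_min)
```

For example, $0\%$ uncertainty maps to $1$ psi, $100\%$ to $3$ psi, matching the scaling described above.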
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{Figures/setup_study2.pdf}
\caption{Participant kinesthetically teaching the robot arm the \textit{Cleaning} task. (Top) We compared our proposed approach (\textbf{Wrapped}) to two alternatives. \textbf{GUI} displayed the robot's uncertainty on a screen, while in \textbf{Flat} we placed the haptic display on the table. (Bottom) We initialized the robot with data from known segments. During their first demonstration the human attempted to identify the region where the robot was uncertain (i.e., the new segment). The human then gave a second demonstration where they only guided the robot through the region(s) where they thought it was uncertain.}
\label{fig:setup_study2}
\end{center}
\vspace{-1.5em}
\end{figure}
\p{Experimental Setup} Participants completed three different tasks with each of the three feedback conditions (i.e., nine total trials). In the \textit{Organizing} task participants were asked to guide the robot to close a drawer, pick up a ball, and then place the ball in the basket. In the \textit{Shelving} task participants kinesthetically taught the robot to close a drawer and then pull an empty container from the shelf. Finally, in the \textit{Cleaning} task participants taught the robot to pick up a ball from the top of the shelf, place it in the basket, and drag the basket to a marked location (\fig{setup_study2} shows the \textit{Cleaning} task).
Before conducting any experiments we first initialized the robot's uncertainty. We collected five expert demonstrations of each task and trained the robot with a behavior cloning approach \cite{menda2019ensembledagger}. This approach outputs the robot's uncertainty at each state (i.e., uncertainty was a function of the robot's joint position). We purposely \textit{removed} segments of the expert's demonstrations from the training set: specifically, we trained the robot without showing it how to perform either the first segment or the last segment of the task. As a result, when participants interacted with the robot, the robot was uncertain about either the start or the end of the task.
For each trial the participant provided \textit{two demonstrations}. First, the participant kinesthetically guided the robot throughout the entire task while receiving real-time feedback from \textbf{GUI}, \textbf{Flat}, or \textbf{Wrapped}. Based on this feedback, the participant attempted to identify the region of the task where the robot was uncertain (and needed additional teaching). During the second demonstration, the human \textit{only taught the segment} of the task where they believed the robot was \textit{uncertain} (i.e., the region they identified in the first demonstration). If the feedback is effective, participants should only reteach segments where the robot is confused without repeating parts of the task that the robot already knows.
\p{Participants and Procedure} We recruited ten participants
from the Virginia Tech community to take part in our study ($5$
female, average age $22.9$ years, age range $19 - 26$ years). All subjects provided
informed written consent prior to the experiment. Only one participant had prior experience physically interacting with a robot arm. Before starting the trials, we allowed participants to familiarize themselves with each task and feedback method. We used a within-subject study design: every participant interacted with all three feedback conditions. To mitigate the confounding
effect of participants improving over time, we \textit{counterbalanced}
the order of the feedback conditions (e.g., different participants start with different feedback types).
\begin{figure}[t]
\begin{center}
\vspace{0.5em}
\includegraphics[width=1.0\columnwidth]{Figures/example_study2.pdf}
\caption{Participant teaching the same task under two different feedback conditions. (Top) When working with \textbf{GUI}, participants must occasionally look at the visual interface to monitor the robot's uncertainty. (Bottom) Wrapping the feedback around the robot arm enables the human to seamlessly teach the robot without having to remember to check an external interface.}
\label{fig:example_study2}
\end{center}
\vspace{-1.5em}
\end{figure}
\p{Dependent Measures -- Objective}
Our objective measures were based on the user's \textit{second demonstration} (i.e., the demonstration where they tried to reteach the uncertain part of the task). We recorded the amount of time users spent on this second demonstration (\textit{Teaching Time}) and the percentage of this second demonstration that overlapped with the segment where the robot was actually uncertain (\textit{Correct Segment}). Offline, we retrained the robot using the participant's second demonstration. We then measured the percentage reduction in uncertainty due to the user's demonstration (\textit{Improvement}). Let $U_1$ be the robot's uncertainty after the first demonstration, and $U_2$ be the uncertainty after the second demonstration. Here \textit{Improvement} $=\frac{U_1 - U_2}{U_1} \cdot 100$.
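As a minimal sketch (the function name is ours, not from the study software), the \textit{Improvement} metric can be computed directly from the two uncertainty values:

```python
def improvement(u1: float, u2: float) -> float:
    """Percentage reduction in robot uncertainty after the second demonstration.

    u1: uncertainty U_1 after the first demonstration
    u2: uncertainty U_2 after retraining on the second demonstration
    """
    if u1 <= 0:
        raise ValueError("initial uncertainty must be positive")
    return (u1 - u2) / u1 * 100.0
```

For example, a second demonstration that halves the robot's uncertainty yields an \textit{Improvement} of 50\%.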
\p{Dependent Measures -- Subjective} Participants filled out a 7-point Likert scale survey after completing all three tasks with a given method. Questions were grouped into six multi-item scales: was the user able to recognize the parts of the task they needed to repeat (\textit{informative}), did the robot's feedback
have any effect on the user's ability to demonstrate the task (\textit{easy}), was the user able to fully \textit{focus} on teaching the task, did the robot's feedback seem \textit{natural} to the user, did the user find the robot's feedback \textit{intuitive} and understandable, and did the
user \textit{prefer} the current feedback method to the alternatives.
\p{Hypotheses}
We had two hypotheses for this user study:
\begin{displayquote}
\textbf{H1.} \emph{Participants will most efficiently teach the robot with wrapped haptic displays.}
\end{displayquote}
\begin{displayquote}
\textbf{H2.} \emph{Participants will subjectively prefer our wrapped haptic display over other methods.}
\end{displayquote}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=2.0\columnwidth]{Figures/results_study2.pdf}
\caption{Objective and subjective results when communicating 1-DoF robot uncertainty in real-time with \textbf{GUI}, \textbf{Flat}, and \textbf{Wrapped} feedback. Participants taught the robot three tasks; we here report the aggregated results across tasks. Error bars show standard error of the mean (SEM), and $*$ indicates statistically significant comparisons ($p < .05$). (Left) Wrapping the haptic display around the robot arm caused participants to spend less time teaching the robot, focused their teaching on regions where the robot was uncertain and improved the robot's understanding of the task after the human's demonstration. (Right) Participants thought that the wrapped display best enabled them to focus on the task, and they preferred this feedback type to the alternatives.}
\label{fig:results_study2}
\end{center}
\vspace{-1.5em}
\end{figure*}
\p{Results -- Objective} We report our aggregated results in \fig{results_study2} and show an example interaction in \fig{example_study2}.
We first ran a repeated measures ANOVA, and found that the robot's feedback type had a statistically significant effect on \textit{Teaching Time}, \textit{Correct Segment}, and \textit{Improvement}. Post hoc analysis revealed that participants spent less time teaching the robot with \textbf{Wrapped} than with either \textbf{GUI} or \textbf{Flat} ($p < .05$). Participants also better focused their teaching on the region where the robot was actually uncertain: \textbf{Wrapped} resulted in a higher \textit{Correct Segment} than \textbf{Flat} ($p < .05$). However, here the differences between \textbf{Wrapped} and \textbf{GUI} were not statistically significant ($p=.287$).
Recall that \textit{Improvement} captures how much more confident the robot is about the task after the participant's demonstration. This metric is especially important: we want to enable humans to teach robots efficiently, and \textit{Improvement} quantifies how much the robot learned from the human's teaching. We found that the robot's confidence improved the most in the \textbf{Wrapped} condition as compared to either \textbf{GUI} or \textbf{Flat} ($p < .05$). Overall, these results support \textbf{H1}: when users get real-time feedback from a haptic display wrapped around the robot arm, they provide shorter duration kinesthetic demonstrations that more precisely home in on the robot's uncertainty and efficiently correct the robot.
To better explain why \textbf{Wrapped} outperformed \textbf{GUI}, we show an example interaction in \fig{example_study2}. Notice that --- when the feedback was not located on the robot arm --- participants had to periodically turn their attention away from the task to check the robot's uncertainty. For \textbf{Flat}, this required taking a hand away from the robot and feeling the haptic display on the table; for \textbf{GUI}, participants had to look up and check the computer monitor. The key difference with \textbf{Wrapped} is that this haptic display is located at the point of interaction, so participants could experience feedback while still remaining focused on the task and their physical demonstration.
We were initially surprised that --- although users with \textbf{Wrapped} and \textbf{GUI} scored similarly for \textit{Correct Segment} --- the results for \textit{Improvement} were significantly different. However, we believe the explanation for this lies in the quality of the participants' demonstrations. Returning to \fig{example_study2}, we recognize that with \textbf{GUI} participants often had to pause and check the uncertainty, breaking up their demonstration (and causing the demonstration to include multiple stops). Our subjective results support this explanation: as we will show, participants reported that they were more distracted with \textbf{GUI} than with \textbf{Wrapped} feedback.
\p{Results -- Subjective} \fig{results_study2} depicts the results from our Likert scale survey. After confirming that our six scales were reliable (using Cronbach's $\alpha$), we grouped these scales into combined scores and ran a one-way repeated measures ANOVA on each resulting score.
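As an illustrative sketch of this reliability check (the response matrix below is hypothetical, not our study data), Cronbach's $\alpha$ for a two-item scale can be computed as:

```python
import numpy as np

def cronbach_alpha(responses) -> float:
    """Cronbach's alpha for a (participants x items) matrix of Likert responses."""
    x = np.asarray(responses, dtype=float)
    k = x.shape[1]                            # number of items in the scale
    item_vars = x.var(axis=0, ddof=1)         # per-item sample variances
    total_var = x.sum(axis=1).var(ddof=1)     # variance of the summed scale
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Hypothetical 7-point responses from five participants on a two-item scale
# (negatively worded items are assumed to be reverse-coded beforehand).
scale = [[6, 7], [5, 5], [7, 6], [4, 4], [6, 6]]
print(f"alpha = {cronbach_alpha(scale):.2f}")
```

Scales with $\alpha$ near 1 indicate that the paired items measure the same underlying construct, justifying grouping them into one combined score.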
Participants perceived each of the feedback methods as similarly natural. But post hoc analysis showed that participants thought that \textbf{Wrapped} was more informative, easier to interact with, less distracting, and more intuitive than one or both of the alternatives ($p < .05$). Participants also indicated that they preferred \textbf{Wrapped} over \textbf{GUI} and \textbf{Flat}. When explaining this preference, one participant said, \textit{``I definitely prefer \textbf{Wrapped} over other methods. I was able to clearly focus and the other methods were distracting.''} Our subjective results support \textbf{H2}, and indicate that users perceived wrapped haptic displays as preferable when compared to alternatives like visual interfaces.
\section{Measuring Human Perception of 3-DoF Wrapped Haptic Displays} \label{sec:p2}
Having explored the human perception and application of the 1-DoF wrapped haptic display in the shape of a sleeve, we next pursue a study that expands to a multi-degree-of-freedom display and helps us understand how spatial distribution affects the perception of multi-DoF soft haptic displays. Both temporally and spatially varying signals can add complexity when we need to communicate multiple haptic signals within the space that a human might contact. We also seek to understand how the spatial distribution of signals affects the effectiveness of the display in terms of identification accuracy and the time needed to identify signals. To do so, we conducted a user study to measure the ability to distinguish haptic signals in different spatial distributions, outside the context of the target application scenario. We selected pressure levels considering the psychometric baselines (JNDs) obtained in Section~\ref{sec:p1}, and designed a study in which participants physically interacted with 3-DoF displays. The displays were arranged in two ways: (1) a 3-DoF ring display placed in a single location, and (2) three 1-DoF displays, made up of three interconnected rings each and placed at three different locations. We called these arrangements \textbf{Global} (for the 3-DoF display), meaning all information was available at the single point of contact, so it was ``globally'' available, and \textbf{Local} (for the three 1-DoF displays), meaning the information for each degree of freedom was only available locally. In each arrangement, the user was asked to identify the signal with the highest pressure out of the three, and we hypothesized that the distribution of the signals (whether three in a single location or spread over a distance) would affect performance.
As a note, these same methods are later used in the experiment in Section~\ref{sec:vt2}; however, there three of the \textbf{Global} displays are used instead of one to keep the total display area on the robot constant and to allow different users to make contact at different locations based on preference while still receiving the same feedback.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.85\columnwidth]{Figures/setup_study3.pdf}
\vspace{-0.5em}
\caption{Experimental Setup. (Top) The \textbf{Local} setup consists of three sets of 3-DoF displays configured as 1-DoF each, with a separation in between each. (Middle) The \textbf{Global} setup consists of a single 3-DoF display. Both methods essentially have 3-DoF, but the difference is the spatial distribution of each of the DoF. The three DoF were named \textit{Left}, \textit{Center}, and \textit{Right}, for both methods. (Bottom) Participants were instructed to sit in front of the setup; here we show a participant interacting with the \textbf{Local} setup.}
\label{fig:exp_setup3}
\end{center}
\vspace{-2.0em}
\end{figure}
\subsection{Experiment Setup and Procedure} \label{Exp Setup P2}
The 3-DoF wrapped haptic displays were mounted on passive stand-ins. For the \textbf{Local} method, three stand-ins, each with a 3-DoF display configured as 1-DoF, were placed on the table with a separation in between. For the \textbf{Global} method, a single stand-in with a 3-DoF display was used. Both methods essentially have 3-DoF, but the difference is the spatial distribution of the degrees of freedom; for \textbf{Global}, all signals are located in a small space, while for \textbf{Local} the signals are distributed over a 1~m span. The three degrees of freedom were named \textit{Left}, \textit{Center}, and \textit{Right}, for both methods. The setups are illustrated in Figure~\ref{fig:exp_setup3}. Participants were instructed to wear hearing protection and safety glasses during the study. The task was to identify which of the signals, \textit{Left}, \textit{Center}, or \textit{Right}, was inflated to the higher pressure. Two of the degrees of freedom were inflated to a reference pressure $P_o$ (2~psi) and one to a high pressure $P_H$ (2.75~psi). Subjects were not told that two degrees of freedom had the same pressure; they were simply instructed to identify the one inflated to a different pressure. We selected the $P_o$ and $P_H$ values based on the findings of the previous psychophysics study, taking into consideration the increased complexity of haptic signals in this new study. As reported in Section~\ref{subsec:study1_analysis}, the average JND found in the previous study was 0.228~psi. However, some participants had JNDs almost double the mean (i.e., Subjects 4 and 7, see Table~\ref{tab:1}). With that in mind, we determined that a difference between the signals of $\Delta P$~=~0.75~psi was large enough that all subjects could perform to an adequate level in this study.
Each of the DoF (\textit{Left}, \textit{Center}, and \textit{Right}) was rendered to the participant as $P_H$ a total of 16 times, for a total of 48 trials. The process was performed for both the \textbf{Global} and \textbf{Local} methods. Half of the participants completed the procedure with \textbf{Global} first and then \textbf{Local}, and the other half in the opposite order. The study was conducted as follows. Participants were instructed to sit at the desk directly in front of the arrangements. They interacted with a GUI developed in MATLAB to navigate through the study. The GUI first guided the participants through a demo of the study procedure. The GUI showed a red light that would turn green to indicate when the participant was allowed to touch the displays. For each trial, the GUI asked the participant to click a ``Next'' button to continue. Once clicked, the red light would turn green once the displays had reached their corresponding steady-state pressures. The participants were then allowed to touch the displays for an unrestricted period of time. There were also no restrictions on the way participants could explore the displays, and they were allowed to use both hands if desired. Right after the light turned green, the GUI displayed the question \textit{``Which one has the different pressure?''} and showed options for selecting \textit{Left}, \textit{Center}, and \textit{Right}. After the participants selected an option, they were instructed to click an ``Enter'' button to answer, and the GUI showed whether they were correct and, if not, what the right answer was. It is important to note that the GUI measured the participants' response time in the background; a timer started when the light turned green and stopped when the participants answered the question. To continue with the next trial, the participants then had to click ``Next.''
The procedure was repeated until the 48 trials were completed for the first method, and then for the second method. Participants were granted a break in the middle of each method's study, and another break in between methods. After completing the interaction portion of the experiment, the participants were given a post-experiment questionnaire. The questionnaire asked about how distinguishable the signals were, if they were often unsure about their answers, and if they were increasingly confident about their answers as the study progressed. We also asked about the overall experience during the study (clarity of instructions, sense of safety during the experiment) and about their previous experiences and familiarity with haptic technology, robotics, and video games. The study was 45 minutes long.
\begin{figure}[t]
\begin{center}
\includegraphics[width=1.0\columnwidth]{Figures/matrices.pdf}
\caption{Confusion matrices showing the mean accuracy for each signal rendered (\textit{Left}, \textit{Center}, \textit{Right}) in both methods (\textbf{Local} and \textbf{Global}).}
\label{fig:matrices}
\end{center}
\vspace{-2em}
\end{figure}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=2.0\columnwidth]{Figures/results_study3.pdf}
\vspace{-0.5em}
\caption{Experimental results. (Left) Count of incorrect guesses for each of the methods. A Wilcoxon signed-rank test showed that there is a significant association between participants' accuracy and methods ($Z$~=~-2.335, $p$~<~.05). (Center) Mean response time of individual participants for each method. Nine out of ten participants had a higher response time using the \textbf{Local} method. (Right) Mean response time for both methods, displayed by signal type (\textit{Left}, \textit{Center}, \textit{Right}). Signal type had a statistically significant effect on response time ($p$~=~.047) but to a lesser extent than the method type ($p$~<~.001). For \textbf{Global}, participants spent more time responding to the question when the \textit{Left} signal was the highest pressure than when it was \textit{Center} ($p$~<~.01) or \textit{Right} ($p$~=~.078).}
\label{fig:results_study3}
\end{center}
\vspace{-2.0em}
\end{figure*}
\subsection{Results}
We recruited 10 participants (4 female, ages $22 \pm 3$ years) from the Purdue community. All participants completed the study after giving informed consent. The Purdue Institutional Review Board approved the study protocols (IRB \#2021-1283). Out of the group, 9 participants were right-handed, and one was left-handed.
The confusion matrices in Figure~\ref{fig:matrices} summarize the accuracy of participants. Overall, participants' accuracy was higher for the \textbf{Local} method (average $\Bar{x}$~=~96.25\%, standard deviation $\sigma$~=~3.88) than for \textbf{Global} ($\Bar{x}$~=~92.71\%, $\sigma$~=~7.17). Participants spent an average of 15.09~s ($\sigma$~=~7.55~s) using the \textbf{Local} method, and 12.12~s ($\sigma$~=~5.88~s) for \textbf{Global}. Interestingly, looking at the complete pool of participants' responses (whether \textbf{Global} or \textbf{Local}), we found that participants had a greater response time when they responded incorrectly ($\Bar{x}$~=~16.89~s, $\sigma$~=~7.68~s) than when they answered correctly ($\Bar{x}$~=~13.41~s, $\sigma$~=~6.83~s). Figure~\ref{fig:results_study3} shows the average time spent by each participant for both the \textbf{Local} and \textbf{Global} methods.
\subsection{Analysis}
The two quantitative measures that we used to understand the results are \textit{Accuracy} and \textit{Response Time}. To further analyze the accuracy of participants, a Wilcoxon signed-rank test was conducted to understand the relation between accuracy and the methods used. The results showed that there is a significant association between participants' accuracy and methods ($Z$~=~-2.335, $p$~<~.05). This means that although participants responded faster to the task while using \textbf{Global} as shown by the mean response time values, participants were not as accurate at detecting the higher pressure as when they were using \textbf{Local}. Figure~\ref{fig:results_study3} shows the count of incorrect guesses for both local and global methods. Another Wilcoxon test was conducted to determine whether the order in which the experiments were conducted (Local first, then Global, or vice-versa) affected subjects' accuracy. The results showed that there was no significant association ($Z$~=~-0.143, $p$~=~.886), suggesting that subjects did not benefit from learning to improve their accuracy for the second half of the study.
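A sketch of this paired, non-parametric comparison using \texttt{scipy.stats.wilcoxon} is shown below; the per-participant error counts are hypothetical, not our study data.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical per-participant incorrect-guess counts (out of 48 trials each);
# the actual study data are reported in the text.
errors_local  = np.array([1, 2, 0, 3, 1, 2, 1, 0, 4, 2])
errors_global = np.array([3, 4, 1, 5, 2, 3, 4, 1, 6, 3])

# Wilcoxon signed-rank test: paired comparison of the two methods,
# with no normality assumption on the differences.
stat, p = wilcoxon(errors_local, errors_global)
print(f"W = {stat:.1f}, p = {p:.4f}")
```

Because every hypothetical participant makes fewer errors with the local arrangement, the signed-rank statistic here is zero and the test rejects the null hypothesis of no difference between methods.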
To analyze response time, we used a one-way repeated measures ANOVA. We found that the method type had a statistically significant effect on response time. Post hoc analysis revealed that participants spent less time identifying the target signal with \textbf{Global} as compared to \textbf{Local} ($p$~<~.001). This observation matches the mean values for response time previously mentioned, and also the mean response time for each participant as shown in Figure~\ref{fig:results_study3}. Nine out of ten participants spent more time using \textbf{Local} compared to \textbf{Global}. We also found that the rendered signal (whether \textit{Left}, \textit{Center}, or \textit{Right}) had a statistically significant effect on response time ($p$~=~.047) but with a smaller effect size than the method type. The data show that while using \textbf{Global}, participants spent more time responding when the \textit{Left} signal was the highest pressure than when it was \textit{Center} ($p$~<~.01) or \textit{Right} ($p$~=~.078). For \textbf{Local}, we did not find any statistically significant distinction between signals and their mean response time. These results can be observed in Figure~\ref{fig:results_study3}, where we show the response time of participants for each signal type (\textit{Left}, \textit{Center}, \textit{Right}) when using the \textbf{Local} and \textbf{Global} methods.
This study shows that the spatial distribution of soft wrapped haptic displays is an important factor to consider, since it has an effect on both detection accuracy and response time. Using the psychometric measures found in the previous study, we showed that participants were better able to identify the highest pressure out of a set of three when the signals were spatially distributed (\textbf{Local}) than when the signals were condensed in a smaller space (\textbf{Global}). However, the response time for the spatially distributed signals was higher; this makes sense because participants had to move around a larger space to interact with the places where the haptic signals were located. The results of the post-experiment questionnaire show that participants thought the pressure differences were detectable, that they were sure about their answers throughout the experiment, and that they felt safe interacting with the displays. Some participants mentioned that during the \textbf{Local} portion of the experiment, they wished they could place the displays together to make the exercise easier; this suggests that users consciously found having displays dispersed in different locations an inconvenience, even though the results show participants were slightly more accurate with this method than with \textbf{Global}. In summary, the \textbf{Global} method had the fastest response time, but \textbf{Local} had the highest accuracy. These observations show the trade-off between response time and accuracy when we increase the complexity of haptic signals in a smaller space or distribute them over a larger space.
\section{Using Multi-DoF Wrapped Haptic Displays to Communicate 3-DoF Robot Learning} \label{sec:vt2}
In Section~\ref{sec:vt1} we demonstrated that robot arms can leverage a haptic display to communicate with human teachers. However, this haptic device only had $1$-DoF: the same pressure was rendered along the entire robot arm. One degree-of-freedom is sufficient when the robot learner wants to convey whether or not it is uncertain --- but what if the robot needs to communicate more complicated feedback? For instance, the robot may want to indicate \textit{what} it is confused about or \textit{how} the human teacher could improve their demonstrations.
In our final user study we wrap multiple $3$-DoF haptic displays around a Franka Emika robot arm. Participants physically teach the robot to perform a mock welding task, and the robot applies multi-dimensional feedback to indicate what aspects of the task the human teacher must emphasize. From Section~\ref{sec:p2} we know that the speed and accuracy of humans' perception of $3$-DoF haptic displays depend on the distribution of the degrees of freedom. Here we build on these psychophysics results: we use the same haptic design and pressure differences as in Section~\ref{sec:p2}. But we also highlight the differences between these studies --- now the human is interacting with a moving robot arm, and the human must interpret the robot's feedback in real-time to actively change their own behavior.
Overall, our goal is to compare two different feedback distributions shown in Section~\ref{sec:p2} and understand how they impact humans kinesthetically teaching a task to the robot. Remember that we are wrapping haptic displays along the robot arm. One option is to \textit{localize} different signals to different parts of the arm, such that the place where the bags inflate helps indicate and remind users what the robot is uncertain about. Our second option is to \textit{distribute} all three signals along the entire arm; here the human perceives the same haptic rendering no matter where they grasp the robot. In this user study we explore how human teachers perceive and leverage multiple displays that use both feedback layouts.
\p{Independent Variables} The robot learner was confused about various parts of a mock welding task. We compared three different types of feedback for communicating when the robot was uncertain and what motions it needed the human teacher to emphasize (see \fig{setup_study4}):
\begin{itemize}
\item A \textbf{GUI} baseline where the robot showed its numerical uncertainty on a computer monitor.
\item Three $3$-DoF wrapped haptic displays, each configured as $1$-DoF, with signals localized to different regions of the robot arm (\textbf{Local}).
\item Three $3$-DoF wrapped haptic displays with signals distributed across the entire robot arm (\textbf{Global}).
\end{itemize}
All conditions provided the same information to the participants. Similar to Section~\ref{sec:vt1}, in \textbf{GUI} the robot displayed its uncertainties as a percentage: values close to $100\%$ meant that the robot needed assistance. For \textbf{Local} and \textbf{Global} we actuated three separate wrapped haptic displays with pressures between $1 - 3$ psi ($6.89 - 20.68$ kPa). In \textbf{Local} each location of the haptic display had a single pressure signal; i.e., bags at the end-effector were one pressure, bags at the base of the arm were another pressure, and bags in the middle of the arm were a third pressure. In \textbf{Global} each haptic display location rendered all three of the potentially different pressures using three independent degrees of freedom, and all \textbf{Global} displays rendered those same three pressures --- participants could feel the same feedback at the base, middle, and end of the robot arm. \textbf{GUI}, \textbf{Local}, and \textbf{Global} each provided a total of $3$-DoF feedback. The difference was whether that feedback was wrapped around the robot, and if so, how the feedback was distributed along the arm. We emphasize that with \textbf{Global} participants had to discern which segments of the $3$-DoF haptic display were inflated, while with \textbf{Local} participants needed to determine at which parts of the robot arm the haptic displays were inflated. Based on the results of our study in Section~\ref{sec:p2}, we anticipate that participants using the wrapped haptic displays will discern the robot's feedback signals faster and more accurately during the teaching process.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=2.0\columnwidth]{Figures/example_study4.pdf}
\caption{Participant teaching the welding task with \textbf{GUI}, \textbf{Local}, or \textbf{Global}. We show task progress in $5$ second intervals. (Top) With \textbf{GUI} users need to look at the computer monitor to obtain feedback. The monitor is placed on the near side of the table: this participant is looking at the \textbf{GUI} at times $t=5$, $t=10$, and $t=25$ seconds. (Middle) With \textbf{Local} participants must move their hands --- and change their grasp --- to sense the different wrapped displays. This participant keeps one hand on the end-effector, and then moves their other hand between the haptic displays at the middle and base of the robot arm. (Bottom) Finally, with \textbf{Global} the participants receive feedback through $3$-DoF haptic displays. \textbf{Global} helped this user remain focused on the task: notice that they are continually looking at the welding task, and keep both hands on the end-effector (where a $3$-DoF haptic display is located).}
\label{fig:example_study4}
\end{center}
\vspace{-1.5em}
\end{figure*}
\p{Experimental Setup} Participants physically interacted with a $7$-DoF robot arm (Franka Emika) to complete a mock welding task (see \fig{setup_study4}). We mounted lasers to the robot's end-effector: participants kinesthetically guided the robot across a table while the lasers marked where the robot was ``welding.''
The welding task consisted of three features: how close the end-effector was to the edge of the table, the end-effector's height from the table, and the orientation of the end-effector. When the task started participants would guide the robot arm towards the fixed goal position. As they moved, the robot would leverage its feedback to notify the human \textit{which feature} they needed to emphasize. For example, during the first third of the task the robot may prompt the human to keep the lasers close to the table; in the middle of the task the human should move the end-effector to the table edge; and during the final third of the task the human might need to align the robot's orientation. Participants had to dynamically determine \textit{what} feature the robot currently needed help with and then \textit{modify} their motion to emphasize that feature. Note that the robot asked for assistance with all three features at different segments of the task --- we randomized these segments so that participants could not anticipate the robot's feedback.
\p{Participants and Procedure} We recruited $12$ participants ($5$ female, ages $28 \pm 5.6$ years) from the Virginia Tech community. All participants provided informed written consent consistent with university guidelines (IRB \# $20$-$755$). None of the participants for this study took part in the previous study from Section~\ref{sec:vt1}. Three of the twelve participants reported that they had physically interacted with robot arms before.
Each user completed the welding task four times. First, we asked users to demonstrate the task without any feedback from the robot. We used this initial demonstration as a baseline to measure their improvement. Next, participants completed the welding task with \textbf{GUI}, \textbf{Local}, and \textbf{Global}. We counterbalanced the order of these feedback conditions: four participants started with \textbf{GUI}, four participants started with \textbf{Local}, and four participants started with \textbf{Global}. Overall, we followed a within-subjects design where all participants worked with every feedback condition. Prior to the experiment we explained and demonstrated each condition to the users so that they understood how to interpret the robot's feedback.
\p{Dependent Measures -- Objective} We measured the total time it took for participants to demonstrate the welding task (\textit{Teaching Time}). This includes idle time where the human has paused and is not moving the robot; for instance, the human may have stopped to feel the different haptic displays or to carefully read the \textbf{GUI} feedback. We also measured the \textit{Improvement} between the human's initial demonstration and their demonstration under each feedback condition. Let $e(\xi)$ be the total error between the correct feature values and the current features along trajectory $\xi$. Intuitively, $e(\xi)$ is the distance between the ideal demonstration and the human's actual demonstration. We defined \textit{Improvement} as $\big(e(\xi_{initial}) - e(\xi)\big)/e_{max} \cdot 100$, where $e_{max}$ is the maximum possible error for the welding task. \textit{Improvement} captures the percentage change in demonstration quality for each feedback condition: positive \textit{Improvement} reveals that the human is demonstrating the task more accurately.
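As a minimal sketch of this measure (function and variable names are ours, assuming the errors have already been reduced to scalars):

```python
def teaching_improvement(e_initial: float, e_current: float, e_max: float) -> float:
    """Percentage change in demonstration quality for one feedback condition.

    e_initial: feature error e(xi_initial) of the baseline demonstration
    e_current: feature error e(xi) under the current feedback condition
    e_max:     maximum possible error for the welding task (normalizer)
    """
    if e_max <= 0:
        raise ValueError("e_max must be positive")
    return (e_initial - e_current) / e_max * 100.0
```

Positive values indicate that the feedback helped the participant track the ideal feature values more closely than in their baseline demonstration; negative values indicate the demonstration got worse.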
\begin{table*}[b]
\caption{Questions on the Likert scale survey from Section~\ref{sec:vt2}. We grouped questions into five scales and examined their reliability using Cronbach's $\alpha$. Questions explored whether participants thought the robot's feedback was \textit{easy} to interpret, if they could \textit{focus} on teaching, how \textit{distinguishable} the robot's signals were, which methods were \textit{intuitive}, and their overall \textit{preferences}. For \textit{preference} we did not check for reliability since there was only a single item. We then performed a one-way repeated measures ANOVA on the grouped scores: here an $*$ denotes statistical significance.}
\label{table:likert}
\centering
\begin{tabular}{lccc}
\hline Questionnaire Item & Reliability & $F(2,22)$ & p-value \bigstrut \\ \hline
\bigstrut[t]
-- It was hard to figure out what the robot was trying to convey to me. & \multirow{2}{*}{$.75$} & \multirow{2}{*}{$1.699$} & \multirow{2}{*}{$.206$} \\ -- I could \textbf{easily} tell what the robot wanted. \bigstrut[b] \\ \hline
\bigstrut[t]
-- I could \textbf{focus} on the robot's feedback without having to look up or move my hands. & \multirow{2}{*}{$.74$} & \multirow{2}{*}{$6.266$} & \multirow{2}{*}{$<.01^{*}$} \\ -- I had to physically go out of my way to get the robot's feedback. \bigstrut[b] \\ \hline
\bigstrut[t]
-- It was easy to \textbf{distinguish} the different feedback signals. & \multirow{2}{*}{$.64$} & \multirow{2}{*}{$1.733$} & \multirow{2}{*}{$.215$} \\ -- I had to think carefully about what I was seeing / feeling to determine the signal. \bigstrut[b] \\ \hline
\bigstrut[t]
-- The way the robot provided feedback seemed \textbf{intuitive} to me. & \multirow{2}{*}{$.86$} & \multirow{2}{*}{$.081$} & \multirow{2}{*}{$.923$} \\ -- I thought the robot's feedback was unintuitive and hard to understand. \bigstrut[b] \\ \hline
\bigstrut[t]
-- Overall, I \textbf{prefer} this communication modality. & \multirow{1}{*}{$-$} & \multirow{1}{*}{$5.189$} & \multirow{1}{*}{$.191$} \bigstrut[b] \\ \hline
\end{tabular}
\vspace{-0.5em}
\end{table*}
\p{Dependent Measures -- Subjective} Participants responded to a $7$-point Likert scale survey after each feedback condition. Our survey was composed of four multi-item scales and one single-item scale (see Table~\ref{table:likert}). We asked participants how \textit{easy} it was to understand the robot's feedback, whether they could \textit{focus} on the task, how \textit{distinguishable} the robot's feedback was, if the feedback was \textit{intuitive} for this task, and to what extent they \textit{prefer} this condition as a communication modality. Finally, after participants had finished working with all the conditions, they responded to a forced-choice comparison: ``Which method did you like the most?'' Here users had to select one of \textbf{GUI}, \textbf{Local}, or \textbf{Global}.
\p{Hypotheses}
We had two hypotheses for this user study:
\begin{displayquote}
\textbf{H3.} \emph{Distributing multi-DoF haptic feedback along the robot arm (\textbf{Global}) will lead to improved demonstrations and lower teaching time.}
\end{displayquote}
\begin{displayquote}
\textbf{H4.} \emph{Participants will prefer distributed feedback (\textbf{Global}) as compared to localized feedback (\textbf{Local}).}
\end{displayquote}
\begin{figure*}[t]
\begin{center}
\includegraphics[width=2.0\columnwidth]{Figures/results_study4.pdf}
\caption{Objective and subjective results when communicating multi-dimensional robot feedback. We compared using a computer monitor (\textbf{GUI}), localizing wrapped haptic feedback to specific parts of the robot (\textbf{Local}), and distributing $3$-DoF feedback along the arm (\textbf{Global}). Error bars show standard error of the mean and an $*$ indicates statistically significant comparisons. (Left) Participants spent less time teaching the robot with \textbf{Global} as compared to \textbf{GUI}: shaded regions show the amount of time where participants stopped moving the robot to think about their next actions. The human's demonstrations improved more with \textbf{Global} feedback as compared to \textbf{Local} feedback. (Middle) Participants perceived the multi-DoF wrapped haptic display as similar to the alternatives, but indicated that \textbf{Global} enabled them to focus on teaching the robot. (Right) At the end of the experiment users were asked to choose their favorite method. Of the $12$ total participants, $8$ selected \textbf{Global}, $4$ selected \textbf{GUI}, and none selected \textbf{Local}.}
\label{fig:results_study4}
\end{center}
\vspace{-1.5em}
\end{figure*}
\p{Results -- Objective} The results from this user study are summarized in \fig{results_study4}. To get a sense of the users' experience, we also show participant demonstrations in \fig{example_study4}.
Let us start our analysis by looking at the objective results. Using a one-way repeated measures ANOVA, we determined that feedback type had a significant effect on \textit{Teaching Time} ($F(2,22)=3.423$, $p<.05$). Post hoc tests revealed that participants spent less time demonstrating the task with \textbf{Global} than with \textbf{GUI} ($p<.05$), while the differences between \textbf{Global} and \textbf{Local} were not significant ($p=.675$). To explain these results we measured the amount of idle time during the demonstration. We found that with \textbf{GUI} users needed to stop, look at the monitor, and think about their next action: shifting attention back-and-forth between the monitor and the welding task contributed to the increased \textit{Teaching Time}.
So with \textbf{Global} participants taught the robot more quickly --- but did they provide accurate, informative demonstrations? Remember that to measure \textit{Improvement} we first collected a demonstration without any feedback, and then compared that initial demonstration to the user's behavior under each feedback condition. The type of robot feedback had a significant effect on \textit{Improvement} ($F(2,22)=12.707$, $p<.001$). With both \textbf{GUI} and \textbf{Global} the participants made similar improvements to their teaching ($p=.769$). However, \textit{Improvement} was significantly lower for \textbf{Local} as compared to \textbf{Global} ($p<.01$). To illustrate why, we turn to \fig{example_study4}. When participants received \textbf{Local} feedback they frequently had to change their grasp and move their hands across the three haptic displays; by contrast, in \textbf{GUI} and \textbf{Global} the participants could maintain a fixed grasp. When \textbf{Local} participants did not constantly check all three haptic displays, they missed out on the robot's signals (and failed to emphasize the corresponding features).
Overall, our objective results support \textbf{H3}. Distributed, multi-DoF wrapped haptic feedback enabled users to teach robots more seamlessly than \textbf{GUI} and more accurately than \textbf{Local}.
\p{Results -- Subjective} Table~\ref{table:likert} and \fig{results_study4} outline the results of our Likert scale survey and forced-choice comparison. We first checked the reliability of our four multi-item scales: \textit{easy}, \textit{focus}, and \textit{intuitive} were reliable (Cronbach's $\alpha > 0.7$) but \textit{distinguish} was not. We then grouped each scale into a combined score and performed a one-way repeated measures ANOVA on the result. Note that we did not check for reliability in \textit{prefer} because we only had one item (i.e., one question) on this scale.
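For reference, Cronbach's $\alpha$ for one of these multi-item scales could be computed as below; this is a standard-formula sketch with invented scores, not the study's analysis code, and reverse-coded items (e.g., ``hard to figure out'') would need to be flipped before scoring.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (subjects x items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(totals))."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Invented 7-point responses: two perfectly consistent items give alpha = 1.
scores = np.array([[1, 1], [4, 4], [7, 7], [3, 3]])
print(round(cronbach_alpha(scores), 2))   # 1.0
```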
We found that participants perceived \textbf{GUI}, \textbf{Local}, and \textbf{Global} to be similar along several axes. For instance, users did not think that any of the feedback types were more distinguishable ($p=.215$) or intuitive ($p=.923$) than the others. However, users reported that they were better able to \textit{focus} on the task with \textbf{Global} than with \textbf{GUI} ($p<.05$) or with \textbf{Local} ($p<.001$). This result matches \fig{example_study4}, where we see examples of a participant shifting their attention during \textbf{GUI} and \textbf{Local} conditions. After the experiment was finished we asked users to select their favorite feedback type: eight of the twelve participants chose \textbf{Global}, and the remaining four selected \textbf{GUI}. These subjective results support \textbf{H4}. We were particularly interested to find that participants preferred \textbf{Global} feedback compared to \textbf{Local} feedback --- it seemed that the convenience of having the same three signals available at different contact points along the entire arm outweighed the potential difficulty in interpreting those signals and determining which parts of the bag were inflating. One participant mentioned that ``\textit{I liked \textbf{Local} the least, since it requires repositioning hands to get feedback.}''
\section{Introduction}
\IEEEPARstart{S}{tate}-of-the-art autonomous humanoid robots have demonstrated remarkable dynamic motion capabilities in controlled environments \cite{Atlas}. However, when operating in unknown environments---such as navigating in disaster scenes and manipulating everyday objects in homes---autonomous humanoid robots tend to fall short of society's expectations.
This limitation exists largely because these robots' artificial brains lack the intelligence and intuition required for planning complex actions under uncertainty. On the other hand, humans possess such planning skills through motor learning. Hence, teleoperation, which supplies human movement information to robots as motion plans, appears promising in transforming physically capable humanoid robots into dynamically intelligent---and useful---ones.
Humanoid robot teleoperation includes two sub-domains: 1) Manipulation teleoperation. 2) Locomotion teleoperation. Manipulation teleoperation typically involves upper limbs, and many related works concern fixed-base teleoperation systems and their kinematics alone \cite{Hauser_trina, Ciocarlie_teleoperation, my_ICRA_2021}. Locomotion teleoperation, however, typically involves lower limbs and needs to account for the system's dynamics due to the nature of human locomotion. This work aims to contribute to the specific area of dynamic humanoid robot locomotion teleoperation.
\begin{figure}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_1_Envisioned_System.pdf}
\caption{(Top) Design rendering of our envisioned teleoperation system, where a human uses a whole-body HMI to teleoperate a bi-wheeled humanoid robot's locomotion via body tilt. Images are not in scale. (Bottom) The four-stage bilateral teleoperation architecture. The red box indicates the scope of this work. Grey font color indicates the components not implemented in this work. }
\label{fig:envisioned_system}
\end{figure}
Several existing works on humanoid robot locomotion teleoperation utilize a human-machine interface (HMI), a reduced model, and a teleoperation mapping \cite{iCub_2018, iCub_2019, MECHA_Unilateral, MECHA_Bilateral, Ramos_TRO_2018, Ramos_RAL_2018}. The HMI captures the human's information to generate the human reduced model. The teleoperation mapping then translates variables of the human reduced model into control commands for the robot reduced model. In this way, the teleoperation problem becomes a tractable four-stage tracking control problem, as in Fig. \ref{fig:envisioned_system}.
In practice, commercial tetherless motion capture systems and omnidirectional treadmills have been popular HMIs because they provide useful human locomotion information and are readily available. The planar linear inverted pendulum (LIP) model has been a popular reduced model because it is one of the simplest models that encode the core dynamics of humanoid robot locomotion \cite{hof_LIP}. With these popular HMIs and LIP as the reduced model, teleoperation mappings designed to match the human's and the robot's zero-moment points (ZMPs) and divergent components of motion (DCMs) have produced meaningful locomotion behaviors of humanoid robots \cite{MECHA_Unilateral, iCub_2018, iCub_2019, multimode}.
However, with such teleoperation methods, the human pilot cannot feel what the robot feels or comfortably control the robot's physical interaction with the environment. This is because the teleoperation is unilateral---the human sends information to the robot but receives little information from the robot. Unlike when a driver drives and feels a car inside it, the human pilot is physically away from the robot and cannot feel the robot's response to the human's command or the environment. In this regard, bilateral teleoperation with force feedback becomes valuable: it aims to allow the human pilot to feel what the robot feels in addition to synchronizing movements of the two. Due to the lack of available commercial hardware, most existing works on bilateral locomotion teleoperation of humanoid robots utilize custom-built HMIs. With LIP as the reduced model, ZMP- and DCM-based teleoperation and force feedback mappings, these works have achieved initial successes \cite{MECHA_Bilateral, haptic_walking, Ramos_Science_Robotics_2019, Ramos_TRO_2018}.
Yet, the locomotion behaviors of those bilaterally teleoperated humanoid robots are preliminary. Some of them move quasi-statically, whereas others that are capable of dynamic motions cannot travel reliably for long distances. These limitations exist because walking is these robots' form of locomotion. The sophisticated dynamics of walking, combined with the coupled human-robot dynamics, make dynamic bilateral locomotion teleoperation challenging.
To tackle this challenge, we envision a bi-wheeled humanoid robot design for teleoperation, as shown in Fig. \ref{fig:envisioned_system}. The wheels simplify the locomotion teleoperation problem, and the humanoid design preserves the robot's anthropomorphism and intuitiveness of teleoperation. We developed a whole-body HMI with high backdrivability, bandwidth, and force capacity for bilateral teleoperation of our envisioned robot. The contribution of this work is twofold: 1) To introduce the HMI's design and performance. 2) To demonstrate and evaluate our concept of wheeled humanoid robot locomotion teleoperation by body tilt using the HMI via an experiment.
Specifically, we employed LIP as the reduced model, and developed two teleoperation mappings that map the human's body tilt to the robot's velocity or acceleration. To prevent the human from falling during the teleoperation, we designed the HMI feedback force as a spring force proportional to the human's body tilt. Then, we conducted an experiment, where seven human subjects teleoperated a simulated LIP with the HMI to perform dynamic target tracking tasks. The experimental results suggest that all subjects accomplished the tasks after practice, and the force feedback generally improved their performances. However, the subjects exhibited two distinct teleoperation styles, which benefited from the force feedback differently. Moreover, the force feedback affected the subjects' preferences on the teleoperation mappings, though most subjects performed better with the velocity mapping.
\begin{figure*}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_2_LISA.pdf}
\caption{LISA's overall design, transmission topology, and performance. Images are not in scale. (a) (Top) Trapezoidal force command VS time profile in trapezoid test. Time interval $T \in \left\{ 1, 2, 3 \right\}$ s. Force amplitude $A \in \left\{ \pm 20, \pm 60, \pm 100 \right\}$ N, where tension is positive and compression negative. We performed five trials for each $T$-$A$ combination. (Bottom) Identified LISA force VS steady-state current profile. (b) Step responses. Step amplitudes are the same as the force amplitudes in the trapezoid test. We performed five trials for each step amplitude. (c) Chirp test force and joint position results. The identified joint position in the blue dashed line is the response obtained by entering the force VS time data to the identified transfer function model as input. }
\label{fig:LISA}
\end{figure*}
\section{HMI Design and Performance}
The HMI we developed consists of three subsystems: 1) The linear sensor and actuator (LISA). 2) The forceplate. 3) The motion capture linkage. We introduced the motion capture linkage in our previous work on arm teleoperation \cite{my_ICRA_2021}. Hence, the following subsections will detail LISA's and the forceplate's designs and performances.
\subsection{Linear Sensor and Actuator (LISA)}
LISA is the subsystem that senses the human pilot's center of mass (CoM) position and exerts feedback forces to the human. As shown in Fig. \ref{fig:LISA}, it is a three-degree-of-freedom (3-DoF) serial mechanism with two passive revolute joints on a gimbal and one actuated and backdrivable prismatic joint that contains the end-effector. A timing belt-pulley transmission converts the rotation of a Turnigy 9235-100 KV brushless DC motor to the end-effector's translation. The theoretical transmission ratio is $\frac{1000}{26}$ $\text{m}^{-1}$. The motor is current controlled by an ODrive V3.6 motor driver with a 500-Hz PWM command and a 40 V, 4 Ah lithium-ion battery. Three joint encoders sense the motor's and the gimbal's rotational axes at 1 kHz. A uniaxial tension/compression load cell senses the end-effector's actuation force at 7.58 kHz. Upon use, the gimbal is mounted to a fixed frame, and the end-effector is mounted to the human pilot's back at CoM height via a spherical joint and a vest. This mounting design allows unconstrained movement of the human's torso with a maximum swivel of about $90 \degree$.
Empirically, a force of 50--60 N exerted at a human pilot's CoM would sufficiently perturb the human for humanoid robot teleoperation \cite{Ramos_TRO_2018, Ramos_RAL_2018}. To ensure an ample force capacity, we designed LISA's maximum actuation force to be 100 N. To evaluate LISA's actual force capacity and identify its actuation force VS motor current profile, we conducted a trapezoid test. First, we mounted LISA's gimbal ground and end-effector to the same fixed frame such that LISA's end-effector was perpendicular to the frame, and would apply a pure normal force to the frame when the prismatic joint was actuated. Second, we fed trapezoidal current command VS time profiles to LISA, and recorded the end-effector load cell reading and current command at 200 Hz. The result is a relatively linear curve with a maximum force of about 100 N at 35 A for both tension and compression, as shown in Fig. \ref{fig:LISA}.(a).
After the trapezoid test, we programmed an open-loop force controller for LISA using the identified force-current plot's linear best-fit curve. Then, we changed the data logging frequency to 1 kHz, and conducted a step response test. Fig. \ref{fig:LISA}.(b) shows the results. The responses are relatively consistent, and the 100\% rise time is within 10 ms. The steady-state tracking error is acceptable because LISA's actuation force will be the feedback force for the human. A few Newtons of error should be negligible in this case. The relatively long settling time could be caused by LISA's mechanical compliance.
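This identification-plus-inversion step can be sketched as follows; the data points are hypothetical stand-ins for the logged trapezoid-test measurements, and the real controller uses the full best-fit curve rather than these rounded values.

```python
import numpy as np

# Hypothetical trapezoid-test samples: steady-state current (A) vs. force (N).
current = np.array([-35.0, -20.0, 0.0, 20.0, 35.0])
force   = np.array([-100.0, -57.0, 0.0, 57.0, 100.0])

# Linear best fit F ~= k * I (the identified profile is roughly linear).
k = np.polyfit(current, force, 1)[0]          # N per A

def current_command(desired_force, i_max=35.0):
    """Open-loop force controller: invert the identified fit and
    saturate at the current that produced the 100 N maximum force."""
    return float(np.clip(desired_force / k, -i_max, i_max))

print(round(current_command(50.0), 1))        # 17.5 A for this fit
```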
Lastly, we conducted a chirp test to estimate the reflected inertia on LISA's prismatic joint. During the test, a human grasped LISA's end-effector, and backdrove the prismatic joint in a sine-sweep manner for ten seconds, as shown in Fig. \ref{fig:LISA}.(c). The data were recorded at 1 kHz. We then loaded the data into the MATLAB System Identification Toolbox with a $2^{\text{nd}}$-order transfer function model. The result was approximately 1.3 kg reflected inertia, which is about $1\%$--$2\%$ of human body mass. Hence, we assume LISA's joint dynamics to be negligible compared with the human's dynamics. Furthermore, we measured the mass of all moving components on LISA's prismatic joint using a weight scale. The result was 0.655 kg.
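The paper used the MATLAB System Identification Toolbox for this step; a comparable least-squares sketch in Python, run here on synthetic chirp data in place of the real logs, is:

```python
import numpy as np

def estimate_reflected_inertia(force, position, dt):
    """Least-squares fit of F = m*xdd + b*xd from back-drive logs;
    returns the reflected inertia m and a viscous term b."""
    xd = np.gradient(position, dt)
    xdd = np.gradient(xd, dt)
    A = np.column_stack([xdd, xd])
    m, b = np.linalg.lstsq(A, force, rcond=None)[0]
    return m, b

# Synthetic check: a pure 1.3 kg inertia back-driven by a sine sweep.
dt = 0.001
t = np.arange(0.0, 10.0, dt)
pos = 0.05 * np.sin(2.0 * np.pi * (0.5 + 0.3 * t) * t)
acc = np.gradient(np.gradient(pos, dt), dt)
f = 1.3 * acc                                   # force a 1.3 kg mass would need
m_est, _ = estimate_reflected_inertia(f, pos, dt)
print(round(m_est, 2))                          # 1.3
```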
\subsection{Forceplate}
The forceplate is the subsystem that senses the human pilot's ground reaction wrench and center of pressure (CoP) position. As shown in Fig. \ref{fig:envisioned_system} and Fig. \ref{fig:forceplate}, it consists of a rigid hexagonal platform connected to a fixed base frame via six legs. Each leg contains two spherical joints and a uniaxial tension/compression load cell operating at 7.58 kHz, so the subsystem is 6-DoF. However, these six DoFs are the six legs' axial rotations, which do not affect any leg's length. Hence, the platform has zero DoF and remains static upon external forces. Meanwhile, the forceplate's coordinate frame is the entire HMI's coordinate frame. Its origin is at the geometric center of the hexagonal platform's top surface.
The forceplate's leg arrangement is identical to that of a Stewart platform, and the forceplate functions as a Stewart platform sensor \cite{stewart_sensor_1, stewart_sensor_2}. Specifically, since the hexagonal platform is static, the forceplate has a constant square Jacobian, which establishes a linear relationship between the six load cell readings and the wrench experienced by the platform:
\begin{align}
\mathcal{F} = J^{-\intercal} f,
\end{align}
where $\mathcal{F} \in \mathbb{R}^6$ is the wrench the platform experiences, $f \in \mathbb{R}^6$ is the column vector of the six load cell readings, and $J \in \mathbb{R}^{6 \times 6}$ is the Jacobian. Since $J$ is constant and invertible, $J^{-\intercal}$ can be directly obtained via a calibration process: applying six or more different known wrenches to the platform that excite all three axes of the forceplate's coordinate frame. Once the corresponding load cell readings are collected, a least-squares matrix inversion would yield $J^{-\intercal}$. Lastly, the human's CoP position can be computed from the wrench itself.
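The calibration and wrench computation can be sketched as follows; the data are synthetic, with the constant map $M = J^{-\intercal}$ recovered by least squares from eight known wrenches.

```python
import numpy as np

def calibrate_wrench_map(F_applied, f_readings):
    """Least-squares estimate of the constant map M = J^{-T} from
    N >= 6 known wrenches; each row satisfies F_i = M f_i."""
    Mt, *_ = np.linalg.lstsq(f_readings, F_applied, rcond=None)  # solves f M^T = F
    return Mt.T

def wrench_from_cells(M, f):
    """Wrench experienced by the platform given the six load cell readings."""
    return M @ f

# Synthetic check: recover a known, well-conditioned map exactly.
rng = np.random.default_rng(0)
M_true = rng.normal(size=(6, 6)) + 6.0 * np.eye(6)
f_cal = rng.normal(size=(8, 6))        # eight load-cell reading vectors
F_cal = f_cal @ M_true.T               # wrenches consistent with F = M f
M_est = calibrate_wrench_map(F_cal, f_cal)
print(np.allclose(M_est, M_true))      # True
```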
\begin{figure}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_3_Forceplate.pdf}
\caption{Forceplate's overall design, leg design, and coordinate frame definition. The forceplate's coordinate frame is the entire HMI's coordinate frame.}
\label{fig:forceplate}
\end{figure}
\section{LIP-Based Locomotion Teleoperation and Force Feedback Mappings}
The planar LIP model is a 1-DoF model where the pendulum's CoP controls its CoM, which translates at a constant height \cite{hof_LIP}. LIP's dynamical equation is:
\begin{align} \label{LIP_dynamics}
\ddot{x} \left( t \right) = \omega^2 \left( x \left( t \right) - p \left( t \right) \right),
\end{align}
where $x(t)$ and $p(t)$ are the pendulum's CoM and CoP positions, respectively, and $\omega = \sqrt{g/h}$ is the natural frequency. $g$ and $h$ are gravitational acceleration and pendulum height, respectively, which are constant. Note that $x(t)$ is a continuous state while $p(t)$ is the input, which could be discontinuous.
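A minimal forward simulation of this dynamical equation under explicit Euler integration (illustrative only; the CoP is artificially held 1 cm behind the CoM to show the resulting constant forward acceleration):

```python
G = 9.81  # gravitational acceleration, m/s^2

def lip_step(x, xd, p, h, dt):
    """One explicit-Euler step of the planar LIP: xdd = (g/h) * (x - p)."""
    xdd = (G / h) * (x - p)
    return x + dt * xd, xd + dt * xdd

# A 0.5 m pendulum with the CoP held 1 cm behind the CoM for 0.5 s.
x, xd, h, dt = 0.0, 0.0, 0.5, 0.001
for _ in range(500):
    x, xd = lip_step(x, xd, x - 0.01, h, dt)
print(round(xd, 3))   # 0.098 m/s: constant xdd = (9.81/0.5)*0.01 for 0.5 s
```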
For locomotion teleoperation of our envisioned bi-wheeled humanoid robot, we assume that the planar LIP model represents the human and the robot in their respective sagittal planes. Physically, the LIP's CoM corresponds to the robot's CoM, and the LIP's CoP corresponds to the robot's wheel-ground contact point. For the human, the LIP's CoM and CoP are sensed by LISA and the forceplate, respectively. We designed two teleoperation mappings---feedback (FB) and feedforward (FF) mappings---that map the human LIP's tilt to the robot LIP's velocity or acceleration. With LISA's actuation capability, we also designed a force feedback mapping. The following subsections detail these mapping designs.
\begin{figure}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_4_LIP.pdf}
\caption{Schematics of the human LIP (left) and the robot LIP (right).}
\label{fig:LIP}
\end{figure}
\subsection{Feedback Teleoperation Mapping}
FB mapping maps the human LIP's tilt to the robot LIP's CoM velocity. Intuitively, the human body acts as a control lever that commands the speed at which the robot should be traveling. Technically, the robot has a stabilizing feedback controller that accepts a velocity command. FB mapping translates the human's body tilt into that velocity command with a gain. The name ``feedback'' comes from the design that the robot has its own feedback controller.
Mathematically, the robot's feedback controller is:
\begin{align} \label{p_R_FB}
p_{R} = - \frac{\ddot{x}_{Rcmd}}{\omega_R^2} - \frac{2\zeta_R}{\omega_R} \left( \dot{x}_{Rcmd} - \dot{x}_R \right) - x_{Rcmd} + 2 x_R,
\end{align}
where $p_R$ and $x_R$ are the robot LIP's CoP and CoM positions, respectively, $\omega_R$ is the robot LIP's natural frequency, and $\zeta_R$ is the controller's damping ratio. Subscript $``cmd"$ stands for ``command". Substituting (\ref{p_R_FB}) into the robot LIP's dynamical equation yields:
\begin{equation}
\left( \ddot{x}_{Rcmd} - \ddot{x}_{R} \right) + 2\zeta_R \omega_R \left( \dot{x}_{Rcmd} - \dot{x}_{R} \right) + \omega_R^2 \left( x_{Rcmd} - x_R \right) = 0,
\end{equation}
which is the standard $2^{\text{nd}}$-order error dynamics. FB mapping, then, has the expression:
\begin{align}
\dot{x}_{Rcmd} = K_{FB} \left( x_H - p_H \right),
\end{align}
where $K_{FB}$ is the user-selected mapping gain, and $x_H$ and $p_H$ are the human LIP's CoM and CoP positions, respectively. $\left( x_H - p_H \right)$ represents the human LIP's tilt. The higher the gain, the faster the robot will travel with a certain human tilt, hence the more sensitive the teleoperation. For implementation, we set $\ddot{x}_{Rcmd} = 0$ and $\zeta_R = 1$. We also compute $x_{Rcmd}$ by integrating $\dot{x}_{Rcmd}$ based on the initial $x_R$ to ensure the coherence between velocity and position commands.
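A minimal simulation sketch of FB mapping (assumed discretized with explicit Euler at 1 kHz; variable names and gains are illustrative):

```python
import numpy as np

G = 9.81

def fb_teleop_step(x_H, p_H, state, K_FB, h_R, dt):
    """One FB-mapping step: human tilt -> robot velocity command, tracked
    by the robot's own feedback controller with xdd_cmd = 0 and zeta = 1."""
    x_R, xd_R, x_cmd = state
    w_R = np.sqrt(G / h_R)
    xd_cmd = K_FB * (x_H - p_H)
    x_cmd += dt * xd_cmd          # integrate so position/velocity commands cohere
    p_R = -(2.0 / w_R) * (xd_cmd - xd_R) - x_cmd + 2.0 * x_R
    xdd_R = w_R**2 * (x_R - p_R)  # robot LIP dynamics
    return (x_R + dt * xd_R, xd_R + dt * xdd_R, x_cmd), p_R

# With zero human tilt and matched initial commands, the robot stays put.
state = (0.0, 0.0, 0.0)           # x_R, xd_R, x_cmd (x_cmd initialized to x_R)
for _ in range(1000):
    state, p_R = fb_teleop_step(0.0, 0.0, state, K_FB=2.0, h_R=0.5, dt=0.001)
print(state[0], state[1])         # 0.0 0.0
```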
\subsection{Feedforward Teleoperation Mapping}
FF mapping maps the human LIP's tilt to the robot LIP's tilt. Since an LIP's tilt is proportional to its CoM acceleration, FF mapping can also be understood as mapping the human LIP's tilt to the robot LIP's CoM acceleration. Intuitively, the human body ``becomes'' the robot with FF mapping, achieving a sense of dynamic similarity \cite{Ramos_RAL_2018}. On the other hand, the robot does not have any feedback controller, so its stability entirely relies on feedforward control from the human, which gives the mapping's name.
Mathematically, FF mapping has the expression:
\begin{align} \label{p_R_FF}
p_R = x_R - K_{FF} \frac{h_R}{h_H} \left( x_H - p_H \right),
\end{align}
where $h_R$ and $h_H$ are the robot LIP's and human LIP's respective heights, and $K_{FF}$ is the user-selected mapping gain. Rearranging (\ref{p_R_FF}) yields:
\begin{align}
\frac{x_R - p_R}{h_R} = K_{FF} \frac{x_H - p_H}{h_H},
\end{align}
which represents the synchronization between the human LIP's and the robot LIP's tilts adjusted by the gain. The higher the gain, the more the robot will tilt, i.e., accelerate, with a certain human tilt, hence the more sensitive the teleoperation.
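A matching simulation sketch for FF mapping (explicit Euler at an assumed 1 kHz; no robot-side stabilizer, so the robot's acceleration tracks the scaled human tilt directly):

```python
G = 9.81

def ff_teleop_step(x_H, p_H, state, K_FF, h_H, h_R, dt):
    """One FF-mapping step: the robot LIP's tilt is slaved to the human
    LIP's tilt (scaled by K_FF), with no feedback controller on the robot."""
    x_R, xd_R = state
    p_R = x_R - K_FF * (h_R / h_H) * (x_H - p_H)
    xdd_R = (G / h_R) * (x_R - p_R)     # LIP dynamics
    return (x_R + dt * xd_R, xd_R + dt * xdd_R), p_R

# A constant human tilt commands constant robot CoM acceleration:
# xdd_R = K_FF * g * (x_H - p_H) / h_H = 9.81 * 0.05 = 0.4905 m/s^2 here.
state = (0.0, 0.0)
for _ in range(1000):                    # 1 s at 1 kHz
    state, p_R = ff_teleop_step(0.05, 0.0, state, K_FF=1.0, h_H=1.0, h_R=0.5, dt=0.001)
print(round(state[1], 4))                # 0.4905 m/s after one second
```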
Note that FB and FF mappings reflect opposite design philosophies and represent the two extremes in shared control. One allows the robot to stabilize itself and merely regards the human as a command input, whereas the other makes the robot completely dependent on the human. In other words, one grants total control authority to the robot, whereas the other to the human.
\subsection{Force Feedback Mapping}
To prevent the human from falling during the teleoperation, we designed the HMI feedback force as a spring force proportional to the human LIP's tilt:
\begin{align}
F_{HMI} = K_{HMI} \left( x_H - p_H \right),
\end{align}
where $F_{HMI}$ and $K_{HMI}$ stand for the HMI feedback force and the virtual spring stiffness, respectively. $K_{HMI}$ is dependent on the human LIP's height. It takes the numerical value such that LISA will exert its 100 N maximum force when the human tilts to the robot's prescribed maximum tilt angle.
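One consistent way to pick $K_{HMI}$, using the tilt-angle definition $\text{tilt} = \text{atan} \left( |x_H - p_H| / h_H \right)$ and the $20 \degree$ maximum robot tilt prescribed in the experiment below (a sketch; the exact stiffness rule used on the HMI is paraphrased here):

```python
import numpy as np

def hmi_force(x_H, p_H, h_H, theta_max_deg=20.0, F_max=100.0):
    """Virtual-spring feedback: K_HMI is set so LISA reaches its 100 N
    force limit when the human tilts to the prescribed maximum angle."""
    K_HMI = F_max / (h_H * np.tan(np.radians(theta_max_deg)))
    return K_HMI * (x_H - p_H)

# At the prescribed 20-degree tilt the feedback force is exactly F_max.
h_H = 1.0
x_at_max = h_H * np.tan(np.radians(20.0))
print(round(hmi_force(x_at_max, 0.0, h_H), 1))   # 100.0
```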
\section{Experimental Design}
\begin{figure*}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_5_Experimental_Setup.pdf}
\caption{(a) Experimental setup and the graphical interface. (b) Experimental process for each subject and constant parameters during the experiment. }
\label{fig:experimental_setup_and_process}
\end{figure*}
The experiment's objectives are to compare FB and FF mappings and evaluate the force feedback's effect. Inspired by works in human-computer interaction and neuromechanics \cite{Fitts_1954, Hogan_Dynamic_Primitive}, we developed a setup and a human subject experiment using a simulated LIP as the robot. The robot's mass and height are 15 kg and 0.5 m, respectively. To allow the robot to move dynamically, we set its maximum tilt angle as $20\degree$, i.e., $|\theta_R| \leq 20 \degree$ in Fig. \ref{fig:LIP}. $\theta_H$ does not have any constraint.
As shown in Fig. \ref{fig:experimental_setup_and_process}.(a), the setup consists of the HMI, a monitor raised to human eye level, and a graphical interface. The interface is in the robot body frame's top view, and shows the robot's CoM, CoP, and a rectangular target. We designed two tests based on this interface: 1) Position test. 2) Velocity test. In position test, the target will stand still at a random distance between 3--5 m in front of the robot. In velocity test, the target will appear at a random distance between 1--2 m in front of the robot, and then move away from the robot at a random constant velocity between 2--4 m/s after a three-second countdown. For both tests, the human's task is to teleoperate the robot after the countdown such that the robot's CoM stays within the target for three seconds. The human must complete the task as fast as possible. An NI cRIO-9082 real-time computer executes the simulation and experiment at 1 kHz, and logs data at 200 Hz. The following video demonstrates the experiment in action: \href{https://youtu.be/GRI0GLWt-hs}{\textcolor{blue}{\underline{https://youtu.be/GRI0GLWt-hs}}}. The University of Illinois at Urbana-Champaign Institutional Review Board has reviewed and approved this research study.
\begin{figure}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_6_Mapping_Gain.pdf}
\caption{Mapping gains selected by every subject in every combination. }
\label{fig:mapping_Gain}
\end{figure}
We recruited seven subjects (six male, one female) for the experiment. The subjects' age mean and standard deviation are 27.86 and 3.08 years, respectively. We used the subjects' navel height as their CoM height. Fig. \ref{fig:experimental_setup_and_process}.(b) shows the experimental process for each subject. In LISA-off and LISA-on sections, LISA's actuation is off and on, respectively. Mapping 1 is FB mapping for subjects 1--3 and FF mapping for subjects 4--7. With two sections, two mappings, and two tests, each subject undergoes eight testing combinations (e.g., LISA-off section + FB mapping + position test is one combination). For each combination, the subject will first practice for however many trials he/she desires and tune the mapping gain. During velocity test practice, the target's velocity is always 4 m/s. After the practice, the gain will be fixed, and the subject will perform 20 trials. At the end of each section, the subject will complete a survey involving: 1) The NASA TLX \cite{NASA_TLX}. 2) Choosing the preferred mapping. 3) Ranking the section's four combinations in difficulty. 4) Commenting subjectively. The subject will rest between the two sections. Excluding the practice, each section takes about two hours.
\begin{figure*}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_7_Experimental_Result.pdf}
\caption{(a) Every subject's completion time VS target position/velocity plots for every combination. The numerical value on each subplot's top left corner is the best-fit performance line's slope. (b) Normalized completion time and deviation. (c) Maximum human tilt. (d) Maximum normalized LISA force.}
\label{fig:experimental_results}
\end{figure*}
\section{Experimental Results and Discussion}
\subsection{Normalized Completion Time}
The most representative experimental result is the combination completion time, as shown in Fig. \ref{fig:experimental_results}.(a). Since the target position and velocity are random in every trial, and a farther and faster target will lead to a longer completion time, we normalized every subject's per-trial completion times. Specifically, inspired by Fitts's law \cite{Fitts_1954}, we used the linear best-fit line of the 20 data points on the per-trial completion time VS target position/velocity plot to represent a subject's performance in one combination. We then computed the best-fit line's y-coordinate at the x-coordinate of the median of possible target position and velocity, i.e., 4 m and 3 m/s. That y-coordinate is the normalized completion time. The deviation is the absolute difference between the best-fit line's y-coordinates at the two ends of the target position/velocity range. Fig. \ref{fig:experimental_results}.(b) summarizes the normalized results. High performance is characterized by short normalized completion time and small deviation, i.e., small absolute value of the best-fit line's slope. The results suggest that most subjects performed better with FB mapping and when the force feedback was on.
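This Fitts-inspired normalization can be sketched as follows, using synthetic position-test trials in place of the experimental logs:

```python
import numpy as np

def normalize_completion_time(targets, times, x_range):
    """Fit a line to per-trial completion time vs. target position (or
    velocity), evaluate it at the median of the possible target values,
    and report the spread across the range as the deviation."""
    slope, intercept = np.polyfit(targets, times, 1)
    x_med = 0.5 * (x_range[0] + x_range[1])
    normalized = slope * x_med + intercept
    deviation = abs(slope * (x_range[1] - x_range[0]))
    return normalized, deviation, slope

# Synthetic position-test trials: targets 3--5 m, time grows with distance.
rng = np.random.default_rng(1)
targets = rng.uniform(3.0, 5.0, size=20)
times = 1.0 + 0.8 * targets + rng.normal(0.0, 0.05, size=20)
norm_t, dev, slope = normalize_completion_time(targets, times, (3.0, 5.0))
print(f"normalized: {norm_t:.2f} s, deviation: {dev:.2f} s")
```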
\subsection{Teleoperation Style Difference}
Though most subjects performed better with the force feedback, some subjects experienced less performance variation after the force feedback became available. Particularly, subjects 3 and 7 exhibit consistently high and stable performances compared with other subjects. In fact, the force feedback appears to have benefited subjects 3 and 7 little, but they are the two best-performing subjects in five out of eight combinations in terms of normalized completion time. A closer look reveals that these two subjects chose higher mapping gains, especially with FF mapping, as shown in Fig. \ref{fig:mapping_Gain}.
\begin{figure*}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_8_Style_Comparison.pdf}
\caption{Teleoperation style comparison for FF mapping (a) and FB mapping (b) in one trial. The human positions are normalized by the human CoM height.}
\label{fig:style_comparison}
\end{figure*}
Following this evidence, we computed every subject's mean and standard deviation of per-trial maximum human tilt magnitudes for every combination using the expression: $\text{tilt} = \text{atan} \left( \frac{\left| x_H - p_H \right|}{h_H} \right)$. As shown in Fig. \ref{fig:experimental_results}.(c), the results indicate that subjects 3 and 7 tilted less than other subjects did in most combinations. They also did not tilt more drastically in LISA-on section than in LISA-off section as other subjects did. Hence, we summarize two distinct teleoperation styles: 1) Low gain + large tilt (LGLT) style, represented by all subjects except subjects 3 and 7. 2) High gain + small tilt (HGST) style, represented by subjects 3 and 7.
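The tilt expression can be evaluated directly. The CoM offset and height below are illustrative values, not subject data:

```python
import math

def tilt_rad(x_H, p_H, h_H):
    # tilt = atan(|x_H - p_H| / h_H): the angle from vertical of the
    # line connecting the CoP (p_H) to the CoM (x_H) at height h_H.
    return math.atan(abs(x_H - p_H) / h_H)

# Illustrative: CoM displaced 0.10 m from the CoP at a 1.0 m CoM height.
tilt = tilt_rad(0.10, 0.0, 1.0)
tilt_deg = math.degrees(tilt)
```

For small offsets the tilt is nearly linear in the CoM--CoP distance, so per-trial maxima of this quantity track how far a subject leaned.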
Fig. \ref{fig:style_comparison} illustrates comparisons between the two styles in one trial. As shown in Fig. \ref{fig:style_comparison}.(a), with FF mapping, subject 7's mapping gain was 15 times subject 6's, but both subjects achieved similar completion times and robot trajectories. Yet, subject 6's normalized human CoM and CoP trajectories have larger amplitudes and are smoother than subject 7's. Subject 6's robot CoP trajectory is also smoother than subject 7's. This phenomenon is consistent with the mapping design, in which a higher gain corresponds to more sensitive teleoperation. Since the robot CoP position is the input, which is discontinuous in general, sensitive teleoperation may cause a jittery robot CoP trajectory, as shown in subject 7's case. A similar phenomenon also occurred with FB mapping, as in Fig. \ref{fig:style_comparison}.(b).
We have two assumptions about why such teleoperation style difference exists. The first one is that higher mapping gains reflect higher teleoperation proficiencies. The reasoning is that an HGST-style subject likely possesses motor skills that enable him/her to react fast enough to the sensitive teleoperation, whereas an LGLT-style subject might not. Specifically, high-gain teleoperation requires more jerky movements and greater actuation efforts than low-gain teleoperation. Hence, the LGLT style, which most subjects chose, could be a natural tendency to minimize jerk and effort \cite{Hogan_jerk}. In this regard, the HGST-style subjects overcame their natural tendency and employed more advanced motor skills. Since the robot can attain higher top speed and accelerate faster with higher gains, the HGST-style subjects chose high gains to minimize the time for the robot to catch the target and the completion time instead of jerk or effort. They did so because they could handle the more demanding teleoperation, which an LGLT-style subject might not be able to handle. This assumption is supported by the subjects' normalized completion time ranking.
The second assumption is that teleoperation style is a subjective preference. The LGLT-style subjects had long normalized completion times because a robot with low gains could not move fast enough to catch the target as quickly as a robot with high gains. This was especially true for combinations with FB mapping and velocity test, where the mapping gain determines the robot's top speed. As shown in Fig. \ref{fig:mapping_Gain}, with FB mapping, all subjects selected higher gains in velocity test than in position test. This assumption is also consistent with control rate setting in model aircraft aerobatics, which depends on the maneuvers the aircraft is performing and the pilot's preference \cite{model_aircraft_dual_rate}.
\subsection{Robot CoP Trajectory During FB Mapping}
Although the teleoperation style difference is consistent with both teleoperation mappings, we observed two features that are exclusive to FB mapping: 1) The robot CoP trajectory has discontinuous snaps. 2) The human's and the robot's CoP trajectories become more jittery when the robot moves slowly. Both features exist for subjects with both teleoperation styles.
The root cause of both features is that the robot CoP position is the discontinuous input. Since the robot has its own feedback controller with FB mapping, the human cannot directly control the robot CoP position. Instead, the robot controls it to track the human's velocity command, which may not embed dynamic similarity with the human or follow the relatively smooth human CoP trajectory.
In addition, feature 1) involves the maximum tilt angle imposed on the robot. By (\ref{LIP_dynamics}), this constraint is equivalent to limiting the maximum robot CoM acceleration. Hence, there could be moments when the robot cannot catch up with the change of the human's velocity command even at its maximum acceleration. In such cases, the robot CoP will abruptly move to and stay at its maximum relative to the CoM, which causes the snaps in the robot CoP trajectory in Fig. \ref{fig:style_comparison}.(b).
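The acceleration cap behind feature 1) can be sketched with the linear inverted pendulum relation in its standard form, $\ddot{x} = (g/h)(x - p)$: clamping the CoP offset clamps the CoM acceleration. The parameters below are illustrative, not the simulated robot's values:

```python
# Standard linear inverted pendulum (LIP) sketch: x_ddot = (g/h)(x - p),
# so limiting the CoP offset (x - p) to +/- p_max limits |x_ddot|.
g, h, p_max = 9.81, 0.8, 0.12  # gravity (m/s^2), CoM height (m), CoP limit (m)

def lip_accel(com, cop):
    offset = min(p_max, max(-p_max, com - cop))  # CoP saturation
    return (g / h) * offset

a_sat = lip_accel(0.0, -0.5)   # commanded far beyond the limit: saturates
a_lin = lip_accel(0.0, -0.05)  # within the limit: linear response
```

When the velocity command demands more acceleration than `(g/h) * p_max`, the CoP pins at the limit, producing exactly the snaps visible in the robot CoP trajectory.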
Feature 2) is a sign that the robot is rapidly switching between acceleration and deceleration. The robot does so because the human rarely commands a perfectly constant velocity, but subtly and continuously adjusts his/her tilt to recover from overshoot and accelerate/decelerate the robot. Because the robot CoP must move beyond the CoM to produce deceleration, the robot CoP trajectory appears as a high-frequency oscillation around the robot CoM trajectory. The oscillation's frequency is positively correlated to the mapping gain, which is demonstrated by the comparison between the two subjects' robot trajectories in Fig. \ref{fig:style_comparison}.(b).
\subsection{Normalized HMI Feedback Force}
Fig. \ref{fig:experimental_results}.(d) shows every subject's mean of per-trial maximum normalized LISA force magnitudes. We normalized LISA force by dividing it by the subject's body weight. Consistent with the maximum tilt results in Fig. \ref{fig:experimental_results}.(c), the HGST-style subjects have smaller normalized LISA forces than the LGLT-style subjects, since LISA force was a spring force proportional to the human's tilt. Moreover, for all subjects' LISA-off sections, the maximum normalized LISA force is between about $1\%$ and $2.3\%$, which reflects LISA's high backdrivability.
\begin{figure}[t]
\centering
\includegraphics[width = 0.999\linewidth]{Fig_9_Survey_Result.pdf}
\caption{Survey results of preferred mapping (left) and combination difficulty ranking (right). $1^{\text{st}}$ is the most difficult and $4^{\text{th}}$ the easiest. Numerical value in each entry represents number of subjects. }
\label{fig:survey}
\end{figure}
\subsection{Survey Results}
For the NASA TLX, most subjects reported lower demands, effort, and frustration, and higher performance for LISA-on section than for LISA-off section. All subjects considered the force feedback to be helpful for the teleoperation. Specifically, all subjects commented that the force feedback made them feel more secure when they tilted their body. Six subjects commented that the force feedback enabled them to find their zero tilt angle better and move their CoP more quickly and drastically, especially when they tilted backward. Four subjects mentioned that the force feedback reduced the physical strains the tilting caused on their toes and heels. Two subjects mentioned that, with FB mapping, the robot sometimes ``fought against'' the human command and was not as responsive as with FF mapping. They also suggested that FB mapping might be more suitable for smooth and gradual maneuvers, whereas FF mapping is more suitable for swift and dynamic maneuvers. Furthermore, the two HGST-style subjects acknowledged that they did not benefit significantly from the force feedback. In fact, one of them teleoperated the robot by pressing different parts of the feet to the forceplate without moving the upper body in some trials.
Fig. \ref{fig:survey} shows the subjects' mapping preferences and combination difficulty rankings, from which we observed the following: 1) The force feedback appears to have changed the subjects' mapping preferences, as more subjects preferred FF mapping when the force feedback was on. They did so even though most of them achieved shorter normalized completion times with FB mapping in LISA-on section. 2) More subjects considered velocity test with FF mapping to be easier when the force feedback was on. However, FF mapping + position test remains the most difficult for most subjects.
\subsection{Limitations of This Study}
This study's primary limitation is the relatively small number of subjects due to the experiment's time consumption and strenuousness. Moreover, the target velocities during the experiment are relatively high considering the simulated robot's size. This part of the experimental design might have been biased toward the HGST-style subjects, since the LGLT-style subjects could not teleoperate the robot to catch the target as fast as the HGST-style subjects could. Finally, the experiment was based on a simulated robot, which assumes a linear model and perfect sensing and actuation. These idealizations will break down during hardware implementation, so the teleoperation's practicality must be verified on a real robot.
\section{Conclusions and Future Work}
The contributions of this work are: 1) To introduce the HMI's design and performance. 2) To demonstrate and evaluate our concept of wheeled humanoid robot locomotion teleoperation by body tilt using the HMI and a simulated robot via a dynamic target tracking experiment. For contribution 1), we presented LISA's and the forceplate's designs and the analysis on LISA's force-current profile, step response, and reflected inertia. For contribution 2), we examined the subjects' normalized completion times, maximum tilts, maximum normalized LISA forces, and survey responses. The results suggest that most subjects performed better with FB mapping and the force feedback. However, we discovered two teleoperation styles, and the force feedback benefited the HGST-style subjects less than the LGLT-style subjects. We compared the two styles with human and robot CoM and CoP position data from one trial for both mappings, and proposed two assumptions about why the teleoperation style difference exists. We also discussed the oscillations in robot CoP trajectory that occurred exclusively for FB mapping. Lastly, the survey results show that more subjects preferred FF mapping and considered velocity test with FF mapping to be easier when the force feedback was on. Yet, most subjects achieved shorter normalized completion times with FB mapping in LISA-on section, and ranked FF mapping + position test as the most difficult regardless of the force feedback's availability.
Future works after this study include: 1) To implement the two teleoperation mappings on a real wheeled humanoid robot and evaluate the mappings' performances. 2) To combine the two teleoperation mappings---potentially in state-dependent manners---to preserve the strengths of both mappings while compensating for the weaknesses of each to achieve stable, intuitive, and dynamic locomotion teleoperation.
\section*{Acknowledgement}
The corresponding author would like to sincerely thank Dillan Kenney and Dr. Yeongtae Jung for their assistance in system integration, Guillermo Colin and Yu Zhou for their dedication in data collection, and all subjects for their commitment and perseverance during the experiment.
\bibliographystyle{IEEEtran}
\section{Introduction} \label{intro}
Let $R$ be a commutative ring, and fix two scalars $[2]_s,[2]_t \in R$.
The \defnemph{two-colored Temperley--Lieb algebra} $2\mathrm{TL}_R(\prescript{}{s}{n}):=2\mathrm{TL}_R(\prescript{}{s}{n};[2]_s,[2]_t)$ is the $R$-algebra with generators
$e_i$ for $1 \leq i \leq n-1$,
subject to the relations
\begin{align}
e_i^2& =-[2]_s e_i & & \text{$i$ odd,} \label{eq:oddquadratic} \\
e_i^2& =-[2]_t e_i & & \text{$i$ even,} \label{eq:evenquadratic} \\
e_i e_j & =e_j e_i & & \text{for $|i-j|>1$,} \\
e_i e_{i\pm 1} e_i & =e_i & & \label{eq:braidreln}
\end{align}
The algebra $2\mathrm{TL}_R(\prescript{}{t}{n})$ is defined identically, except that the parity conditions on the relations \eqref{eq:oddquadratic} and \eqref{eq:evenquadratic} are swapped.
These algebras form a generalization of the ordinary Temperley--Lieb algebra, which occurs as a special case when $[2]_s=[2]_t$.
The relations immediately imply that there is an $R$-basis of $2\mathrm{TL}_R(\prescript{}{s}{n})$ consisting of monomials in the generators $e_i$.
We call a non-zero idempotent $\mathrm{JW}_R(\prescript{}{s}{n}) \in 2\mathrm{TL}_R(\prescript{}{s}{n})$ (and similarly for $\prescript{}{t}{n}$) a \defnemph{two-colored Jones--Wenzl projector} if $e_i\mathrm{JW}_R(\prescript{}{s}{n})=0$ for all $1 \leq i \leq n-1$ and the coefficient of $1$ in $\mathrm{JW}_R(\prescript{}{s}{n})$ is $1$.
Such idempotents (if they exist) are unique.
The behavior of $2\mathrm{TL}_R(\prescript{}{s}{n})$ is controlled by certain elements $[n]_s,[n]_t \in R$ for $n \in \mathbb{Z}$ called the \defnemph{two-colored quantum numbers}.
These elements (defined in \eqref{eq:twocolqnum} below) are bivariate polynomials in $[2]_s$ and $[2]_t$ which are analogous to ordinary quantum numbers.
For an integer $0 \leq k \leq n$ the \defnemph{two-colored quantum binomial coefficient}
\begin{equation*}
\qbinom{n}{k}_{s}=\frac{[n]_{s}!}{[k]_{s}![n-k]_{s}!}=\frac{[n]_{s}[n-1]_{s}\dotsm [n-k+1]_{s}}{[k]_{s}[k-1]_{s} \dotsm [1]_{s}}
\end{equation*}
can also be shown to be an element of $R$.
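These definitions can be checked numerically by specializing $x_{{\color{red} s}}, x_{{\color{blue} t}}$ to sample rational values. The helper names below are ours, not the paper's:

```python
from fractions import Fraction

def qnums(n_max, x_s, x_t):
    # Two-colored quantum numbers [1].. [n_max] via the recursion
    # [n+1]_s = [2]_s [n]_t - [n-1]_s (and with the colors swapped).
    s = [None, Fraction(1), Fraction(x_s)]
    t = [None, Fraction(1), Fraction(x_t)]
    for n in range(2, n_max):
        s.append(Fraction(x_s) * t[n] - s[n - 1])
        t.append(Fraction(x_t) * s[n] - t[n - 1])
    return s, t

def qbinom_s(n, k, x_s, x_t):
    # [n]_s [n-1]_s ... [n-k+1]_s / ([k]_s [k-1]_s ... [1]_s)
    s, _ = qnums(n + 1, x_s, x_t)
    num = den = Fraction(1)
    for j in range(k):
        num *= s[n - j]
        den *= s[j + 1]
    return num / den
```

For instance, at $x_{{\color{red} s}}=x_{{\color{blue} t}}=2$ (the one-color specialization at $q=1$) one has $[n]=n$, so the quantum binomial coefficients reduce to ordinary binomial coefficients.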
Our first main result is the two-colored analogue of the well-known existence theorem for ordinary Jones--Wenzl projectors.
\begin{thmletter} \label{existence}
The two-colored Jones--Wenzl projector $\mathrm{JW}_R(\prescript{}{s}{n})$ exists if and only if $\qbinom{n}{k}_{s}$ is invertible in $R$ for each integer $0 \leq k \leq n$.
\end{thmletter}
The terminology for two-colored Temperley--Lieb algebras comes from their presentation as diagram algebras.
We associate the labels $s$ and $t$ with the colors red and blue, respectively, writing ${\color{red} s}$ and ${\color{blue} t}$ for emphasis.
A \defnemph{two-colored Temperley--Lieb diagram} is a Temperley--Lieb diagram with the planar regions between strands colored with alternating colors.
The algebra $2\mathrm{TL}_R(\prescript{}{{\color{red} s}}{n})$ is spanned by two-colored Temperley--Lieb diagrams with $n$ boundary points on the top and bottom whose leftmost region is colored red.
A blue circle inside a red region evaluates to $-[2]_{{\color{red} s}}$, while a red circle inside a blue region evaluates to $-[2]_{{\color{blue} t}}$.
We draw the two-colored Jones--Wenzl projector as a rectangle labeled $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$:
\begin{equation*}
\begin{gathered}
\begin{tikzpicture}[xscale=-.3,yscale=0.2]
\begin{scope}
\clip (-3,3.5) rectangle (3,-3.5);
\draw[fill=dgrmred] (0,4) rectangle (4,-4);
\draw[fill=dgrmblu] (1,4) rectangle (2,-4);
\draw[fill=dgrmblu] (-4,4) rectangle (-2,-4);
\draw[fill=white] (-2.5,2) rectangle (2.5,-2);
\node at (0,0) {$\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$};
\node at (-1,2.7) {$\dots$};
\node at (-1,-2.7) {$\dots$};
\end{scope}
\draw[dashed] (-3,3.5) to (3,3.5);
\draw[dashed] (-3,-3.5) to (3,-3.5);
\end{tikzpicture} \\
\text{$n$ odd}
\end{gathered} \qquad \qquad \qquad \qquad
\begin{gathered}
\begin{tikzpicture}[xscale=-.3,yscale=0.2]
\begin{scope}
\clip (-3,3.5) rectangle (3,-3.5);
\draw[fill=dgrmred] (0,4) rectangle (4,-4);
\draw[fill=dgrmblu] (1,4) rectangle (2,-4);
\draw[fill=dgrmred] (-4,4) rectangle (-2,-4);
\draw[fill=white] (-2.5,2) rectangle (2.5,-2);
\node at (0,0) {$\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$};
\node at (-1,2.7) {$\dots$};
\node at (-1,-2.7) {$\dots$};
\end{scope}
\draw[dashed] (-3,3.5) to (3,3.5);
\draw[dashed] (-3,-3.5) to (3,-3.5);
\end{tikzpicture} \\
\text{$n$ even}
\end{gathered}
\end{equation*}
Suppose both $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$ exist.
We say that $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ is \defnemph{rotatable} if the (clockwise and counterclockwise) rotations of $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ by one strand are equal to some scalar multiple of $\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$:
\begin{align*}
\begin{gathered}
\begin{tikzpicture}[xscale=.5,yscale=.4]
\begin{scope}
\clip (-3,2) rectangle (3.5,-2);
\draw[fill=dgrmblu] (-3.3,3) rectangle (1.4,-3);
\draw[fill=dgrmred] (3.8,3) rectangle (1.4,-3);
\draw[fill=dgrmred] (-.9,3) rectangle (0.5,-3);
\draw[fill=white] (-.3,3) rectangle (0.8,-3);
\draw[fill=dgrmblu] (2,3) to (2,-1)
to[out=-90,in=180] (2.5,-1.5) to[out=0,in=-90] (3,-1) to
(3,3) to (2,3);
\draw[fill=dgrmred] (-1.5,-3) to (-1.5,1)
to[out=90,in=0] (-2,1.5) to[out=180,in=90] (-2.5,1) to
(-2.5,-3) to (-1.5,-3);
\draw[fill=white] (-1.9,1) rectangle (2.4,-1);
\node at (.25,0) {$\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$};
\node at (.3,1.4) {$\dots$};
\node at (.3,-1.4) {$\dots$};
\end{scope}
\draw[dashed] (-3,2) to (3.5,2);
\draw[dashed] (-3,-2) to (3.5,-2);
\end{tikzpicture}
\end{gathered}
& =
\lambda
\begin{gathered}
\begin{tikzpicture}[xscale=.5,yscale=.4]
\begin{scope}
\clip (-2.3,2) rectangle (2.8,-2);
\draw[fill=dgrmblu] (-3.3,3) rectangle (.5,-3);
\draw[fill=dgrmred] (-1.5,3) rectangle (-.9,-3);
\draw[fill=dgrmred] (3.8,3) rectangle (.5,-3);
\draw[fill=dgrmblu] (2,3) rectangle (1.4,-3);
\draw[fill=white] (-.3,3) rectangle (.8,-3);
\draw[fill=white] (-1.9,1) rectangle (2.4,-1);
\node at (.25,0) {$\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$};
\node at (.3,1.4) {$\dots$};
\node at (.3,-1.4) {$\dots$};
\end{scope}
\draw[dashed] (-2.3,2) to (2.8,2);
\draw[dashed] (-2.3,-2) to (2.8,-2);
\end{tikzpicture}
\end{gathered}
=
\begin{gathered}
\begin{tikzpicture}[xscale=.5,yscale=-.4]
\begin{scope}
\clip (-3,2) rectangle (3.5,-2);
\draw[fill=dgrmblu] (-3.3,3) rectangle (1.4,-3);
\draw[fill=dgrmred] (3.8,3) rectangle (1.4,-3);
\draw[fill=dgrmred] (-.9,3) rectangle (0.5,-3);
\draw[fill=white] (-.3,3) rectangle (0.8,-3);
\draw[fill=dgrmblu] (2,3) to (2,-1)
to[out=-90,in=180] (2.5,-1.5) to[out=0,in=-90] (3,-1) to
(3,3) to (2,3);
\draw[fill=dgrmred] (-1.5,-3) to (-1.5,1)
to[out=90,in=0] (-2,1.5) to[out=180,in=90] (-2.5,1) to
(-2.5,-3) to (-1.5,-3);
\draw[fill=white] (-1.9,1) rectangle (2.4,-1);
\node at (.25,0) {$\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$};
\node at (.3,1.4) {$\dots$};
\node at (.3,-1.4) {$\dots$};
\end{scope}
\draw[dashed] (-3,2) to (3.5,2);
\draw[dashed] (-3,-2) to (3.5,-2);
\end{tikzpicture}
\end{gathered} & \text{($n$ odd)} \\
\begin{gathered}
\begin{tikzpicture}[xscale=.5,yscale=.4]
\begin{scope}
\clip (-3,2) rectangle (3.5,-2);
\draw[fill=dgrmblu] (-3.3,3) rectangle (3.8,-3);
\draw[fill=dgrmred] (-.9,3) rectangle (1.4,-3);
\draw[fill=white] (-.3,3) rectangle (.8,-3);
\draw[fill=dgrmred] (2,3) to (2,-1)
to[out=-90,in=180] (2.5,-1.5) to[out=0,in=-90] (3,-1) to
(3,3) to (2,3);
\draw[fill=dgrmred] (-1.5,-3) to (-1.5,1)
to[out=90,in=0] (-2,1.5) to[out=180,in=90] (-2.5,1) to
(-2.5,-3) to (-1.5,-3);
\draw[fill=white] (-1.9,1) rectangle (2.4,-1);
\node at (.25,0) {$\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$};
\node at (.3,1.4) {$\dots$};
\node at (.3,-1.4) {$\dots$};
\end{scope}
\draw[dashed] (-3,2) to (3.5,2);
\draw[dashed] (-3,-2) to (3.5,-2);
\end{tikzpicture}
\end{gathered}
& =
\lambda
\begin{gathered}
\begin{tikzpicture}[xscale=.5,yscale=.4]
\begin{scope}
\clip (-2.3,2) rectangle (2.8,-2);
\draw[fill=dgrmblu] (-3.3,3) rectangle (3.8,-3);
\draw[fill=dgrmred] (-1.5,3) rectangle (2,-3);
\draw[fill=dgrmblu] (-.9,3) rectangle (1.4,-3);
\draw[fill=white] (-.3,3) rectangle (.8,-3);
\draw[fill=white] (-1.9,1) rectangle (2.4,-1);
\node at (.25,0) {$\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$};
\node at (.3,1.4) {$\dots$};
\node at (.3,-1.4) {$\dots$};
\end{scope}
\draw[dashed] (-2.3,2) to (2.8,2);
\draw[dashed] (-2.3,-2) to (2.8,-2);
\end{tikzpicture}
\end{gathered}
=
\begin{gathered}
\begin{tikzpicture}[xscale=.5,yscale=-0.4]
\begin{scope}
\clip (-3,2) rectangle (3.5,-2);
\draw[fill=dgrmblu] (-3.3,3) rectangle (3.8,-3);
\draw[fill=dgrmred] (-.9,3) rectangle (1.4,-3);
\draw[fill=white] (-.3,3) rectangle (.8,-3);
\draw[fill=dgrmred] (2,3) to (2,-1)
to[out=-90,in=180] (2.5,-1.5) to[out=0,in=-90] (3,-1) to
(3,3) to (2,3);
\draw[fill=dgrmred] (-1.5,-3) to (-1.5,1)
to[out=90,in=0] (-2,1.5) to[out=180,in=90] (-2.5,1) to
(-2.5,-3) to (-1.5,-3);
\draw[fill=white] (-1.9,1) rectangle (2.4,-1);
\node at (.25,0) {$\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$};
\node at (.3,1.4) {$\dots$};
\node at (.3,-1.4) {$\dots$};
\end{scope}
\draw[dashed] (-3,2) to (3.5,2);
\draw[dashed] (-3,-2) to (3.5,-2);
\end{tikzpicture}
\end{gathered} & \text{($n$ even)}
\end{align*}
Our second main result gives a combined condition for the existence and rotatability of two-colored Jones--Wenzl projectors.
\begin{thmletter} \label{existsrotates}
The two-colored Jones--Wenzl projectors $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$ exist and are rotatable if and only if $\qbinom{n+1}{k}_{{\color{red} s}}=\qbinom{n+1}{k}_{{\color{blue} t}}=0$ in $R$ for all integers $1 \leq k \leq n$.
\end{thmletter}
In the course of proving Theorems~\ref{existence} and \ref{existsrotates} we generalize several well-known results to the two-color setting which may be of independent interest.
These include computations of the greatest common divisor and least common multiple of (two-colored) quantum binomial coefficients (\cref{qbinomideal} and \cref{qbinominvideal}) and the genericness of the coefficients of $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ (\cref{genericcoefs}) over arbitrary rings.
\subsection*{Soergel bimodules}
Two-colored Jones--Wenzl projectors lie at the heart of the Elias--Williamson construction of the diagrammatic Hecke category \cite{ew-soergelcalc}.
Recently Abe has shown that there is a ``bimodule-theoretic'' category (a modification of the category of classical Soergel bimodules) which is equivalent to the diagrammatic Hecke category under certain assumptions \cite{abe-bimodhecke,abe-homBS}.
An important consequence of \cref{existsrotates} (which we discuss in the final section) is that these assumptions essentially always hold.
\begin{corletter} \label{abeequalsew}
The diagrammatic Hecke category is equivalent to Abe's category of Soergel bimodules.
\end{corletter}
More precisely, we use \cref{existsrotates} to give an algebraic condition on the base ring for determining when the diagrammatic Hecke category is well defined (\cref{realizcorrected}), completely correcting an error in \cite{ew-soergelcalc} (first identified and partially corrected in \cite{ew-localizedcalc}).
This algebraic condition is precisely Abe's \cite[Assumption~1.1]{abe-homBS}, so \cref{abeequalsew} follows from Abe's results \cite[Theorem~3.9]{abe-homBS} and \cite[Theorem~5.6]{abe-bimodhecke}.
We find it noteworthy that our correction gives the best possible equivalence result for two seemingly distinct categorifications of the Hecke algebra.
\subsection*{Acknowledgments}
We thank the Royal Commission for the Exhibition of 1851 and EPSRC (EP/V00090X/1) for financial support.
\section{Preliminaries}
\label{prelim}
Let $A=\mathbb{Z}[x_{{\color{red} s}},x_{{\color{blue} t}}]$ be the integral polynomial ring in two variables.
The \defnemph{two-colored quantum numbers} are defined as follows.
First set $[1]_{{\color{red} s}}=[1]_{{\color{blue} t}}=1$, $[2]_{{\color{red} s}}=x_{{\color{red} s}}$, and $[2]_{{\color{blue} t}}=x_{{\color{blue} t}}$ in $A$.
For $n>1$ we inductively define
\begin{align}
[n+1]_{{\color{red} s}}& =[2]_{{\color{red} s}} [n]_{{\color{blue} t}} - [n-1]_{{\color{red} s}} \text{,} & [n+1]_{{\color{blue} t}} & =[2]_{{\color{blue} t}} [n]_{{\color{red} s}} - [n-1]_{{\color{blue} t}} \text{.} \label{eq:twocolqnum}
\end{align}
These formulas can be rearranged to inductively define $[n]_{{\color{red} s}}$ and $[n]_{{\color{blue} t}}$ for $n \leq 0$.
For a commutative $A$-algebra $R$, we also define two-colored quantum numbers in $R$ to be the specializations of two-colored quantum numbers in $A$, which we will write in the same way.
These polynomials are bivariate extensions of the usual (one-colored) quantum numbers, which can be recovered as follows.
Let $\overline{A}=A/(x_{{\color{red} s}}-x_{{\color{blue} t}}) \cong \mathbb{Z}[x]$, where $x$ is the image of $x_{{\color{red} s}}$ or $x_{{\color{blue} t}}$.
Then the one-colored quantum number $[n]$ is the image of $[n]_{{\color{red} s}}$ or $[n]_{{\color{blue} t}}$ in $\overline{A}$.
When $n$ is odd, $[n]$ is an even polynomial, so we can formally evaluate $[n]$ at $x=\sqrt{x_{{\color{red} s}} x_{{\color{blue} t}}}$ to obtain an element of $A$.
When $n$ is even, $[n]/[2]$ is an even polynomial, which we can similarly formally evaluate at $x=\sqrt{x_{{\color{red} s}} x_{{\color{blue} t}}}$.
In both cases, it is easy to show by induction that
\begin{equation} \label{eq:colordependence}
\begin{aligned}
[n]_{{\color{red} s}}& =[n](\sqrt{x_{{\color{red} s}} x_{{\color{blue} t}}})=[n]_{{\color{blue} t}} & & \text{if $n$ is odd,} \\
\frac{[n]_{{\color{red} s}}}{[2]_{{\color{red} s}}}& =\left(\frac{[n]}{[2]}\right)(\sqrt{x_{{\color{red} s}} x_{{\color{blue} t}}})=\frac{[n]_{{\color{blue} t}}}{[2]_{{\color{blue} t}}} & & \text{if $n$ is even}
\end{aligned}
\end{equation}
in $A$.
In other words, two-colored quantum numbers are essentially the same as ordinary quantum numbers up to a factor of $[2]_{{\color{red} s}}$ and $[2]_{{\color{blue} t}}$ depending on color.
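For concreteness, the first few cases, computed directly from \eqref{eq:twocolqnum}, read:

```latex
\begin{align*}
[3]_{{\color{red} s}} &= [2]_{{\color{red} s}}[2]_{{\color{blue} t}}-1
  = x_{{\color{red} s}} x_{{\color{blue} t}}-1 = [3]_{{\color{blue} t}} \text{,} \\
[4]_{{\color{red} s}} &= [2]_{{\color{red} s}}[3]_{{\color{blue} t}}-[2]_{{\color{red} s}}
  = [2]_{{\color{red} s}}(x_{{\color{red} s}} x_{{\color{blue} t}}-2) \text{,} &
[4]_{{\color{blue} t}} &= [2]_{{\color{blue} t}}(x_{{\color{red} s}} x_{{\color{blue} t}}-2) \text{,}
\end{align*}
```

so that $[4]_{{\color{red} s}}/[2]_{{\color{red} s}}=[4]_{{\color{blue} t}}/[2]_{{\color{blue} t}}$, as \eqref{eq:colordependence} predicts.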
It is self-evident that the automorphism of $A$ which exchanges $x_{{\color{red} s}}$ and $x_{{\color{blue} t}}$ (``color swap'') also exchanges $[n]_{{\color{red} s}}$ and $[n]_{{\color{blue} t}}$ for all $n$.
For this reason, we will generally write statements only for $[n]_{{\color{red} s}}$ and leave it to the reader to formulate color-swapped analogues.
Similarly we have $2\mathrm{TL}(\prescript{}{{\color{red} s}}{n}; [2]_{{\color{red} s}},[2]_{{\color{blue} t}}) \cong 2\mathrm{TL}(\prescript{}{{\color{blue} t}}{n}; [2]_{{\color{blue} t}},[2]_{{\color{red} s}})$, and this isomorphism maps $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ to $\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$ when they exist, so we will only state our results for $2\mathrm{TL}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$.
As mentioned in \cref{intro}, the two-colored Temperley--Lieb algebra $2\mathrm{TL}(\prescript{}{{\color{red} s}}{n})$ has a basis consisting of reduced monomials in the generators $e_i$ (i.e.~monomials whose length cannot be reduced using \eqref{eq:oddquadratic}--\eqref{eq:braidreln}).
Each basis element corresponds to a unique two-colored Temperley--Lieb diagram on $n$ strands, whose leftmost region is colored red.
Given an element $f \in 2\mathrm{TL}_R(\prescript{}{{\color{red} s}}{n})$ and a two-colored Temperley--Lieb diagram $D$ we will write
\begin{equation*}
\coeff_{{} \in f} D
\end{equation*}
for the coefficient of $D$ when $f$ is written in the diagrammatic basis.
If $R$ is a commutative $A$-algebra for which $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists for all $n$ (e.g.~$R=\Frac A$), then the coefficients of $\mathrm{JW}_{R}(\prescript{}{{\color{red} s}}{n})$ can be calculated inductively as follows.
Suppose $D$ is a two-colored Temperley--Lieb diagram in $2\mathrm{TL}_R(\prescript{}{{\color{red} s}}{(n+1)})$.
Let $\hat{D}$ be the diagram with $n+2$ bottom boundary points and $n$ top boundary points obtained by folding down the strand connected to the top right boundary point of $D$.
If there is a strand connecting the $i$th and $(i+1)$th bottom boundary points of $\hat{D}$, let $D_i$ denote the two-colored Temperley--Lieb diagram with $n$ strands obtained by deleting this cap.
For example, if
\begin{equation*}
D=\begin{gathered}
\begin{tikzpicture}[xscale=.4,yscale=.2]
\begin{scope}
\clip (-3,3.5) rectangle (3,-3.5);
\draw[fill=dgrmred] (2,4) rectangle (-3,-4);
\draw[fill=dgrmblu] (2,4) rectangle (3,-4);
\draw[fill=dgrmblu] (-2,-4) to (-2,4) to (-1,3.5) to[out=-90,in=90] (1,-3.5) to (1,-4) to (0,-3.5) to[out=90,in=0] (-.5,-2.5) to[out=180,in=90] (-1,-3.5) to (-2,-4);
\draw[fill=dgrmblu] (0,4) to (0,3.5) to[out=-90,in=180] (.5,2.5) to[out=0,in=-90] (1,3.5) to (0,4);
\end{scope}
\draw[dashed] (-3,3.5) to (3,3.5);
\draw[dashed] (-3,-3.5) to (3,-3.5);
\end{tikzpicture}
\end{gathered}
\end{equation*}
then
\begin{equation*}
\hat{D}=\begin{gathered}
\begin{tikzpicture}[xscale=.4,yscale=.2]
\begin{scope}
\clip (-3,3.5) rectangle (3.5,-3.5);
\draw[fill=dgrmred] (3.5,4) rectangle (-3,-4);
\draw[fill=dgrmblu] (2,-4) to (2,-3.5) to[out=90,in=180] (2.5,-2.5) to[out=0,in=90] (3,-3.5) to (2,-4);
\draw[fill=dgrmblu] (-2,-4) to (-2,4) to (-1,3.5) to[out=-90,in=90] (1,-3.5) to (1,-4) to (0,-3.5) to[out=90,in=0] (-.5,-2.5) to[out=180,in=90] (-1,-3.5) to (-2,-4);
\draw[fill=dgrmblu] (0,4) to (0,3.5) to[out=-90,in=180] (.5,2.5) to[out=0,in=-90] (1,3.5) to (0,4);
\end{scope}
\draw[dashed] (-3,3.5) to (3.5,3.5);
\draw[dashed] (-3,-3.5) to (3.5,-3.5);
\end{tikzpicture} \\
\end{gathered}
\end{equation*}
and
\begin{equation*}
D_2=\begin{gathered}
\begin{tikzpicture}[xscale=.4,yscale=.2]
\begin{scope}
\clip (-3,3.5) rectangle (2,-3.5);
\draw[fill=dgrmred] (3.5,4) rectangle (-3,-4);
\draw[fill=dgrmblu] (-2,4) rectangle (-1,-4);
\draw[fill=dgrmblu] (0,-4) to (0,-3.5) to[out=90,in=-180] (.5,-2.5) to[out=0,in=90] (1,-3.5) to (0,-4);
\draw[fill=dgrmblu] (0,4) to (0,3.5) to[out=-90,in=180] (.5,2.5) to[out=0,in=-90] (1,3.5) to (0,4);
\end{scope}
\draw[dashed] (-3,3.5) to (2,3.5);
\draw[dashed] (-3,-3.5) to (2,-3.5);
\end{tikzpicture} \\
\end{gathered} \text{,} \qquad \qquad \qquad
D_5=\begin{gathered}
\begin{tikzpicture}[xscale=.4,yscale=.2]
\begin{scope}
\clip (-3,3.5) rectangle (2,-3.5);
\draw[fill=dgrmred] (3.5,4) rectangle (-3,-4);
\draw[fill=dgrmblu] (-2,-4) to (-2,4) to (-1,3.5) to[out=-90,in=90] (1,-3.5) to (1,-4) to (0,-3.5) to[out=90,in=0] (-.5,-2.5) to[out=180,in=90] (-1,-3.5) to (-2,-4);
\draw[fill=dgrmblu] (0,4) to (0,3.5) to[out=-90,in=180] (.5,2.5) to[out=0,in=-90] (1,3.5) to (0,4);
\end{scope}
\draw[dashed] (-3,3.5) to (2,3.5);
\draw[dashed] (-3,-3.5) to (2,-3.5);
\end{tikzpicture} \\
\end{gathered} \text{.}
\end{equation*}
\begin{thm} \label{JWcoef}
Suppose $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{(n+1)})$ both exist.
We have
\begin{equation*}
\coeff_{{} \in \mathrm{JW}_{R}(\prescript{}{{\color{red} s}}{(n+1)})} D=\sum_{i} \frac{[i]_{{\color{Plum} u}}}{[n+1]_{{\color{red} s}}}\coeff_{{} \in \mathrm{JW}_{R}(\prescript{}{{\color{red} s}}{n})} D_i \text{,}
\end{equation*}
where the sum is taken over all positions $i$ where $D_i$ is defined, and ${\color{Plum} u}$ is the color of the deleted cap.
\end{thm}
\begin{proof}
The argument in the one-color setting (see \cite[Proposition~4.1]{morrison-JW} or \cite[Corollary~3.7]{frenkelkhovanov}) follows essentially unchanged from \cite[(6.29)]{ew-localizedcalc}.
\end{proof}
We will carefully show later that this computation is ``generic'', i.e.~if $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists, then its coefficients are specializations of the coefficients of $\mathrm{JW}_{\Frac A}(n)$.
The existence criterion in \cref{existence} is known to hold in the one-color setting, i.e.~when the images of $x_{{\color{red} s}}$ and $x_{{\color{blue} t}}$ in $R$ are equal.
In these circumstances we write $\mathrm{TL}_R(n)$ and $\mathrm{JW}_R(n)$ for the one-color Temperley--Lieb algebra and Jones--Wenzl projector.
\begin{thm}[{\cite[Theorem~A.2]{el-univcox}}] \label{existence-onecol}
Suppose $R$ is a commutative $A$-algebra which factors through $\overline{A}$.
Then $\mathrm{JW}_R(n)$ exists if and only if the one-color quantum binomial coefficients
\begin{equation*}
\qbinom{n}{k}=\frac{[n]!}{[k]![n-k]!}=\frac{[n][n-1]\dotsm [n-k+1]}{[k][k-1] \dotsm [1]}
\end{equation*}
are invertible in $R$ for all integers $0 \leq k \leq n$.
\end{thm}
In light of the ``generic'' nature of the coefficients of $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$, we can interpret \cref{existence-onecol} as a description of the denominators of the coefficients of $\mathrm{JW}_{\Frac \overline{A}}(n)$.
Unfortunately, none of the known proofs of this result (most of which use connections to Lie theory in a crucial way) generalize easily to the two-colored setting.
Finally, we will give an alternative criterion for checking rotatability.
For $f \in 2\mathrm{TL}_R(\prescript{}{{\color{red} s}}{n})$ define the \defnemph{partial trace} of $f$ to be
\begin{align*}
\pTr(f)& =\begin{gathered}
\begin{tikzpicture}[xscale=.6,yscale=0.8]
\begin{scope}
\clip (-1.5,1) rectangle (3,-1);
\draw[fill=dgrmblu] (1,2) rectangle (3.5,-2);
\draw[fill=dgrmred] (1.5,.6) to[out=90,in=180] (2,.8)
to[out=0,in=90] (2.5,0) to[out=-90,in=0] (2,-.8) to[out=180,in=-90]
(1.5,-.6) to (1.5,.6);
\draw[fill=dgrmred] (0,2) rectangle (-2,-2);
\draw[fill=dgrmblu] (-1,2) rectangle (-.5,-2);
\draw[fill=white] (-1.25,.6) rectangle (1.9,-.6);
\node at (.25,0) {$f$};
\node at (.55,.8) {$\dots$};
\node at (.55,-.8) {$\dots$};
\end{scope}
\draw[dashed] (-1.5,1) to (3,1);
\draw[dashed] (-1.5,-1) to (3,-1);
\end{tikzpicture}
\end{gathered} & &
\text{($n$ odd)} \\
\pTr(f)& =\begin{gathered}
\begin{tikzpicture}[xscale=.6,yscale=0.8]
\begin{scope}
\clip (-1.5,1) rectangle (3,-1);
\draw[fill=dgrmred] (1,2) rectangle (3.5,-2);
\draw[fill=dgrmblu] (1.5,.6) to[out=90,in=180] (2,.8)
to[out=0,in=90] (2.5,0) to[out=-90,in=0] (2,-.8) to[out=180,in=-90]
(1.5,-.6) to (1.5,.6);
\draw[fill=dgrmred] (0,2) rectangle (-2,-2);
\draw[fill=dgrmblu] (-1,2) rectangle (-.5,-2);
\draw[fill=white] (-1.25,.6) rectangle (1.9,-.6);
\node at (.25,0) {$f$};
\node at (.55,.8) {$\dots$};
\node at (.55,-.8) {$\dots$};
\end{scope}
\draw[dashed] (-1.5,1) to (3,1);
\draw[dashed] (-1.5,-1) to (3,-1);
\end{tikzpicture}
\end{gathered} & &
\text{($n$ even)}
\end{align*}
From the definition of the Jones--Wenzl projector, it is easy to see that $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ is rotatable if and only if $\pTr(\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n}))=0$.
Using entirely standard techniques (e.g.~\cite[\S 6.6]{ew-localizedcalc}), one can show that
\begin{equation}
\pTr(\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n}))=-\frac{[n+1]_{{\color{red} s}}}{[n]_{{\color{red} s}}}\mathrm{JW}_R(\prescript{}{{\color{red} s}}{(n-1)}) \label{eq:genericpTr}
\end{equation}
when both $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{(n-1)})$ exist.
This gives the following partial rotatability criterion.
\begin{prop} \label{genericrotatability}
Suppose both $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{(n-1)})$ exist.
Then $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ is rotatable if and only if $[n+1]_{{\color{red} s}}=0$.
\end{prop}
The key to proving the full rotatability criterion will be to interpret \eqref{eq:genericpTr} generically.
\section{Principal ideals}
In this section, we show that several ideals generated by certain two-colored quantum numbers and binomial coefficients are principal.
Recall that for ordinary quantum numbers, one can show that if $d|n$ then $[d]|[n]$.
Using \eqref{eq:colordependence}, it immediately follows that $[d]_{{\color{red} s}}|[n]_{{\color{red} s}}$.
\begin{lem}[Quantum B\'{e}zout's identity] \label{qbezout}
Let $m,n \in \mathbb{N}$.
There exist polynomials $a,b \in A$ such that
\begin{equation*}
a[m]_{{\color{red} s}}+b[n]_{{\color{red} s}}=[\gcd(m,n)]_{{\color{red} s}} \text{.}
\end{equation*}
\end{lem}
\begin{proof}
Suppose without loss of generality that $m<n$.
We will show that the ideal in $A$ generated by $[m]_{{\color{red} s}}$ and $[n]_{{\color{red} s}}$ contains $[n-m]_{{\color{red} s}}$.
If $m$ and $n$ are not both odd, then
\begin{align*}
\begin{split}
[n-1]_{{\color{blue} t}}[m]_{{\color{red} s}} - [m-1]_{{\color{blue} t}}[n]_{{\color{red} s}} & =
([m+n-2]_{{\color{red} s}}+[m+n-4]_{{\color{red} s}}+\dotsb+[-(n-m)+2]_{{\color{red} s}}) \\
& \quad -([m+n-2]_{{\color{red} s}}+[m+n-4]_{{\color{red} s}}+\dotsb+[n-m+2]_{{\color{red} s}})
\end{split} \\
& =[n-m]_{{\color{red} s}}+[n-m-2]_{{\color{red} s}}+\dotsb+[-(n-m)+2]_{{\color{red} s}} \\
& =[n-m]_{{\color{red} s}}
\end{align*}
by \cite[(6.5a)--(6.5c)]{ew-localizedcalc}.
If $m$ and $n$ are both odd, a similar calculation yields
\begin{equation*}
[n-1]_{{\color{red} s}}[m]_{{\color{red} s}} - [m-1]_{{\color{red} s}}[n]_{{\color{red} s}}=[n-m]_{{\color{red} s}} \text{.}
\end{equation*}
By repeating this step multiple times, we can run Euclid's algorithm, and the result follows.
\end{proof}
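As an aside, the key identity in this proof can be checked numerically in the one-color specialization $x_{{\color{red} s}}=x_{{\color{blue} t}}=x$, where quantum numbers obey the Chebyshev-like recursion $[k+1]=x[k]-[k-1]$. The sketch below (helper names are ad hoc; this plays no role in the proofs) verifies $[n-1][m]-[m-1][n]=[n-m]$ for $m=4$, $n=6$, representing polynomials over $\mathbb{Z}$ as coefficient lists.

```python
# Ad hoc helpers: polynomials over Z as coefficient lists (index = degree in x).
def psub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [p - q for p, q in zip(a, b)]

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, p in enumerate(a):
        for j, q in enumerate(b):
            r[i + j] += p * q
    return r

def norm(a):
    # Strip trailing zero coefficients.
    while len(a) > 1 and a[-1] == 0:
        a = a[:-1]
    return a

def qnum(n):
    # One-color quantum numbers: [0] = 0, [1] = 1, [k+1] = x[k] - [k-1].
    a, b = [0], [1]
    for _ in range(n):
        a, b = b, psub([0] + b, a)
    return a

# Check [n-1][m] - [m-1][n] = [n-m] with m = 4, n = 6; the result is [2] = x.
lhs = norm(psub(pmul(qnum(5), qnum(4)), pmul(qnum(3), qnum(6))))
assert lhs == qnum(2)
```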
Next we introduce the cyclotomic parts of quantum numbers, which are roughly analogous to cyclotomic polynomials.
Recall that the one-color quantum numbers are renormalizations of Chebyshev polynomials of the second kind.
More precisely, if we evaluate a quantum number $[n]$ at $x=2\cos \theta$, we obtain
\begin{equation*}
[n](2\cos \theta) = \frac{\sin n\theta}{\sin \theta} \text{.}
\end{equation*}
Since $[n]$ is a monic polynomial in $x$ of degree $n-1$, we conclude that
\begin{equation*}
[n]=\prod_{k=1}^{n-1} \left(x-2\cos \frac{k\pi}{n}\right) \text{.}
\end{equation*}
We define the \defnemph{cyclotomic part} of the one-color quantum number $[n]$ to be the polynomial
\begin{equation*}
\Theta_n = \prod_{\substack{1 \leq k < n\\ (k,n)=1}} \left(x-2\cos \frac{k\pi}{n}\right) \text{.}
\end{equation*}
\begin{lem} \label{cyclofacts}
Let $n \in \mathbb{N}$.
We have
\begin{enumerate}[label={\rm (\roman*)}]
\item \label{item:degree} $\Theta_n \in \mathbb{Z}[x]$, and $\deg \Theta_n=\varphi(n)$ when $n>1$;
\item \label{item:prod} $[n]=\prod_{k|n} \Theta_k$;
\item \label{item:mobinv} $\Theta_n=\prod_{k|n} [k]^{\mu(n/k)}$, where $\mu:\mathbb{N} \rightarrow \{-1,0,1\}$ is the M\"{o}bius function.
\end{enumerate}
Moreover, if $n>2$ then we also have $\Theta_n(x)=\Psi_n(x^2)$, where $\Psi_n \in \mathbb{Z}[x]$ is the minimal polynomial of $4\cos^2(\pi/n)$.
\end{lem}
\begin{proof}
Both \ref{item:degree} and \ref{item:prod} follow from the definition and basic properties of cyclotomic fields and algebraic integers.
Applying M\"obius inversion to \ref{item:prod} yields \ref{item:mobinv}.
For the final claim, we observe that if $n>2$ then $\Theta_n$ is an even polynomial, so it is of the form $\Psi_n(x^2)$ for some $\Psi_n \in \mathbb{Z}[x]$ of degree $\varphi(n)/2$.
By construction $4\cos^2(\pi/n)$ is a root of $\Psi_n$.
Since
\begin{equation*}
4\cos^2 \frac{\pi}{n}=2\cos \frac{2\pi}{n}+2
\end{equation*}
and $\mathbb{Q}(2\cos(2\pi/n)+2)=\mathbb{Q}(\cos(2\pi/n))$ is a field extension of $\mathbb{Q}$ of degree $\varphi(n)/2$, $\Psi_n$ must be the minimal polynomial of $4\cos^2(\pi/n)$.
\end{proof}
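The factorization $[n]=\prod_{k|n}\Theta_k$ and the values of the cyclotomic parts are easy to confirm numerically for small $n$. The following sketch (ad hoc helpers, not part of the proof) computes $\Theta_k$ by stripping off the cyclotomic parts of proper divisors, and checks that $\Theta_6=x^2-3$ and $[6]=\Theta_1\Theta_2\Theta_3\Theta_6$.

```python
# Ad hoc helpers: polynomials over Z as coefficient lists (index = degree in x).
def psub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [p - q for p, q in zip(a, b)]

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, p in enumerate(a):
        for j, q in enumerate(b):
            r[i + j] += p * q
    return r

def pdiv(a, b):
    # Exact division by a monic polynomial b.
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    for i in reversed(range(len(q))):
        q[i] = a[i + len(b) - 1]
        for j, c in enumerate(b):
            a[i + j] -= q[i] * c
    assert all(c == 0 for c in a)
    return q

def qnum(n):
    # One-color quantum numbers: [0] = 0, [1] = 1, [k+1] = x[k] - [k-1].
    a, b = [0], [1]
    for _ in range(n):
        a, b = b, psub([0] + b, a)
    return a

def theta(n):
    # Cyclotomic part: divide [n] by the cyclotomic parts of all proper divisors.
    t = qnum(n)
    for k in range(1, n):
        if n % k == 0:
            t = pdiv(t, theta(k))
    return t

assert theta(2) == [0, 1]        # Theta_2 = x
assert theta(3) == [-1, 0, 1]    # Theta_3 = x^2 - 1
assert theta(6) == [-3, 0, 1]    # Theta_6 = x^2 - 3
prod = [1]
for k in (1, 2, 3, 6):           # divisors of 6
    prod = pmul(prod, theta(k))
assert prod == qnum(6)           # [6] = Theta_1 Theta_2 Theta_3 Theta_6
```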
\begin{defn}
For $n \in \mathbb{N}$, we define the \defnemph{cyclotomic part} of the two-colored quantum number $[n]_{{\color{red} s}}$ to be
\begin{equation*}
\Theta_{n,{\color{red} s}}=\begin{cases}
\Psi_n(x_{{\color{red} s}} x_{{\color{blue} t}}) & \text{if $n>2$,} \\
x_{{\color{red} s}} & \text{if $n=2$,} \\
1 & \text{if $n=1$.}
\end{cases}
\end{equation*}
\end{defn}
Using \eqref{eq:colordependence} and \cref{cyclofacts} we similarly obtain $[n]_{{\color{red} s}}=\prod_{k|n} \Theta_{k,{\color{red} s}}$ and $\Theta_{n,{\color{red} s}}=\prod_{k|n} [k]_{{\color{red} s}}^{\mu(n/k)}$.
\begin{lem} \label{thetabezout}
Let $m,n \in \mathbb{N}$ such that $m \nmid n$ and $n \nmid m$.
There exist polynomials $a,b \in A$ such that
\begin{equation*}
a\Theta_{m,{\color{red} s}}+b\Theta_{n,{\color{red} s}}=1 \text{.}
\end{equation*}
\end{lem}
\begin{proof}
Suppose without loss of generality that $m<n$, and let $d=\gcd(m,n)$.
By \cref{qbezout} there exist $a',b' \in A$ such that
\begin{equation*}
a'[m]_{{\color{red} s}}+b'[n]_{{\color{red} s}}=[d]_{{\color{red} s}} \text{.}
\end{equation*}
By assumption $d<m<n$, so we have
\begin{align*}
\frac{[m]_{{\color{red} s}}}{[d]_{{\color{red} s}}} & \in \Theta_{m,{\color{red} s}} A \\
\frac{[n]_{{\color{red} s}}}{[d]_{{\color{red} s}}} & \in \Theta_{n,{\color{red} s}} A
\end{align*}
and thus dividing by $[d]_{{\color{red} s}}$ we obtain
\begin{equation*}
a\Theta_{m,{\color{red} s}}+b\Theta_{n,{\color{red} s}}=1 \text{.} \qedhere
\end{equation*}
\end{proof}
\begin{prop} \label{thetaprincipal}
Let $m_1,m_2,\dotsc,m_k,n_1,n_2,\dotsc,n_l \in \mathbb{N}$ such that for all $i,j$ either $m_i=n_j$ or $m_i \nmid n_j$ and $n_j \nmid m_i$.
Then the ideal
\begin{equation*}
(\Theta_{m_1,{\color{red} s}}\Theta_{m_2,{\color{red} s}}\dotsm \Theta_{m_k,{\color{red} s}}, \Theta_{n_1,{\color{red} s}}\Theta_{n_2,{\color{red} s}}\dotsm \Theta_{n_l,{\color{red} s}})
\end{equation*}
in $A$ is principal.
\end{prop}
\begin{proof}
Let $I$ be the ideal above.
We may assume without loss of generality that $m_i \neq n_j$ for all $i,j$, i.e.~the generators of $I$ are coprime in $A$.
For each $i,j$ we can apply \cref{thetabezout} to obtain $a_{i,j},b_{i,j} \in A$ such that $a_{i,j}\Theta_{m_i,{\color{red} s}}+b_{i,j}\Theta_{n_j,{\color{red} s}}=1$.
Taking the product over all $i$ and $j$ we obtain
\begin{equation*}
1=\prod_{i} \left(\prod_{j} (a_{i,j}\Theta_{m_i,{\color{red} s}} + b_{i,j}\Theta_{n_j,{\color{red} s}})\right) \in \prod_{i} (\Theta_{m_i,{\color{red} s}},\Theta_{n_1,{\color{red} s}}\Theta_{n_2,{\color{red} s}}\dotsm \Theta_{n_l,{\color{red} s}}) \subseteq I
\end{equation*}
so $I=(1)$ is principal.
\end{proof}
For $f \in A$, we define the \defnemph{cyclotomic valuation} $\nu_{l,{\color{red} s}}(f)$ to be the exponent of the highest power of $\Theta_{l,{\color{red} s}}$ dividing $f$.
This extends to $\Frac A$ in the obvious way, namely we define $\nu_{l,{\color{red} s}}(f/g)=\nu_{l,{\color{red} s}}(f)-\nu_{l,{\color{red} s}}(g)$ for $f,g \in A$.
If $f$ and $g$ are products of ${\color{red} s}$-colored cyclotomic parts then
\begin{equation*}
\frac{f}{g}=\prod_l \Theta_{l,{\color{red} s}}^{\nu_{l,{\color{red} s}}(f/g)} \text{.}
\end{equation*}
\begin{lem} \label{valbinom}
Let $n,k$ be non-negative integers.
For all integers $1 \leq l \leq n$ we have
\begin{equation*}
\nu_{l,{\color{red} s}} \qbinom{n}{k}_{{\color{red} s}}=\left\lfloor \frac{n}{l} \right\rfloor - \left\lfloor \frac{k}{l} \right\rfloor - \left\lfloor \frac{n-k}{l} \right\rfloor \text{.}
\end{equation*}
In particular, $\nu_{l,{\color{red} s}} \qbinom{n}{k}_{{\color{red} s}} \in \{0,1\}$.
\end{lem}
\begin{proof}
Clearly
\begin{equation*}
\nu_{l,{\color{red} s}}[m]_{{\color{red} s}}=\begin{cases}
1 & \text{if $l|m$,} \\
0 & \text{otherwise,}
\end{cases}
\end{equation*}
so $\nu_{l,{\color{red} s}}([m]_{{\color{red} s}}!)=\lfloor m/l \rfloor$ and the formula above follows.
To show the bound on the valuation, note that $m/l-1<\lfloor m/l\rfloor \leq m/l$, so
\begin{multline*}
-1=\left(\frac{n}{l}-1\right)-\frac{k}{l}-\frac{n-k}{l}<\left\lfloor \frac{n}{l} \right\rfloor - \left\lfloor \frac{k}{l} \right\rfloor - \left\lfloor \frac{n-k}{l} \right\rfloor \\
<\frac{n}{l}-\left(\frac{k}{l}-1\right)-\left(\frac{n-k}{l}-1\right)=2 \text{.}
\end{multline*}
\end{proof}
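In the one-color specialization the valuation formula can be checked against an explicit factorization; for instance, for $\qbinom{6}{2}$ it predicts exactly the cyclotomic parts $\Theta_3\Theta_5\Theta_6$, which indeed multiply out to $[6][5]/([2][1])$. The sketch below (ad hoc polynomial helpers as before; purely a sanity check) carries this out.

```python
# Ad hoc helpers: polynomials over Z as coefficient lists (index = degree in x).
def psub(a, b):
    n = max(len(a), len(b))
    a, b = a + [0] * (n - len(a)), b + [0] * (n - len(b))
    return [p - q for p, q in zip(a, b)]

def pmul(a, b):
    r = [0] * (len(a) + len(b) - 1)
    for i, p in enumerate(a):
        for j, q in enumerate(b):
            r[i + j] += p * q
    return r

def pdiv(a, b):
    # Exact division by a monic polynomial b.
    a, q = a[:], [0] * (len(a) - len(b) + 1)
    for i in reversed(range(len(q))):
        q[i] = a[i + len(b) - 1]
        for j, c in enumerate(b):
            a[i + j] -= q[i] * c
    assert all(c == 0 for c in a)
    return q

def qnum(n):
    # One-color quantum numbers: [0] = 0, [1] = 1, [k+1] = x[k] - [k-1].
    a, b = [0], [1]
    for _ in range(n):
        a, b = b, psub([0] + b, a)
    return a

def theta(n):
    # Cyclotomic part of [n].
    t = qnum(n)
    for k in range(1, n):
        if n % k == 0:
            t = pdiv(t, theta(k))
    return t

# Valuations of [6 choose 2] predicted by the floor formula: 1 exactly at l = 3, 5, 6.
exps = {l: 6 // l - 2 // l - 4 // l for l in range(1, 7)}
assert all(e in (0, 1) for e in exps.values())
assert [l for l, e in exps.items() if e == 1] == [3, 5, 6]

# Explicit check: [6][5]/([2][1]) = Theta_3 Theta_5 Theta_6.
binom = pdiv(pmul(qnum(6), qnum(5)), pmul(qnum(2), qnum(1)))
prod = [1]
for l in (3, 5, 6):
    prod = pmul(prod, theta(l))
assert binom == prod
```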
\begin{thm} \label{qbinomideal}
Let $n \in \mathbb{N}$.
The ideal
\begin{equation*}
\left(\qbinom{n}{1}_{{\color{red} s}}, \qbinom{n}{2}_{{\color{red} s}}, \dotsc, \qbinom{n}{n-1}_{{\color{red} s}}\right)
\end{equation*}
in $A$ is principal, generated by $\Theta_{n,{\color{red} s}}$.
\end{thm}
\begin{proof}
We will prove the result by induction.
Let
\begin{equation*}
I_m=\left(\qbinom{n}{1}_{{\color{red} s}},\qbinom{n}{2}_{{\color{red} s}}, \dotsc, \qbinom{n}{m}_{{\color{red} s}}\right)
\end{equation*}
and write
\begin{equation*}
[n]_{{\color{red} s}}^{>m}=\prod_{\substack{k>m\\ k|n}} \Theta_{k,{\color{red} s}} \text{.}
\end{equation*}
Suppose we have shown that $I_m$ is principal, generated by $[n]_{{\color{red} s}}^{>m}$.
We will show that $I_{m+1}$ is principal, generated by $[n]_{{\color{red} s}}^{>m+1}$.
It is enough to show that
\begin{equation}
\left(\qbinom{n}{m+1}_{{\color{red} s}},[n]_{{\color{red} s}}^{>m}\right)=([n]_{{\color{red} s}}^{>m+1}) \text{.} \label{eq:qbinomideal-induction}
\end{equation}
Clearly $[n]_{{\color{red} s}}^{>m+1}$ divides $[n]_{{\color{red} s}}^{>m}$.
If $k>m+1$ and $k|n$ it is easy to see that
\begin{equation*}
\left\lfloor\frac{n}{k}\right\rfloor - \left\lfloor\frac{m+1}{k}\right\rfloor - \left\lfloor\frac{n-(m+1)}{k}\right\rfloor=1 \text{,}
\end{equation*}
so $\Theta_{k,{\color{red} s}}$ divides $\qbinom{n}{m+1}_{{\color{red} s}}$ exactly once, and thus $[n]_{{\color{red} s}}^{>m+1}$ divides $\qbinom{n}{m+1}_{{\color{red} s}}$.
If $m+1 \nmid n$ then $[n]_{{\color{red} s}}^{>m+1}=[n]_{{\color{red} s}}^{>m}$, and \eqref{eq:qbinomideal-induction} follows trivially.
Otherwise suppose $m+1|n$.
We claim that if $\Theta_{l,{\color{red} s}}$ divides $\qbinom{n}{m+1}_{{\color{red} s}}/[n]_{{\color{red} s}}^{>m+1}$ we must have $l \nmid m+1$ and $m+1 \nmid l$.
This implies that
\begin{equation*}
\left(\frac{\qbinom{n}{m+1}_{{\color{red} s}}}{[n]_{{\color{red} s}}^{>m+1}},\Theta_{m+1,{\color{red} s}}\right)=(1)
\end{equation*}
by \cref{thetaprincipal}, from which \eqref{eq:qbinomideal-induction} holds and the result follows.
To prove the claim, suppose $l|m+1$.
It is straightforward to check that
\begin{equation*}
\left\lfloor\frac{n}{l}\right\rfloor - \left\lfloor\frac{m+1}{l}\right\rfloor - \left\lfloor\frac{n-(m+1)}{l}\right\rfloor=0 \text{,}
\end{equation*}
so $\Theta_{l,{\color{red} s}}$ does not divide $\qbinom{n}{m+1}_{{\color{red} s}}$, let alone $\qbinom{n}{m+1}_{{\color{red} s}}/[n]_{{\color{red} s}}^{>m+1}$.
Similarly, suppose $m+1|l$, and take $0 \leq r<l$ such that $n-(m+1)=ql+r$.
If $\Theta_{l,{\color{red} s}}$ divides $\qbinom{n}{m+1}_{{\color{red} s}}$ then
\begin{equation*}
\left\lfloor\frac{n}{l}\right\rfloor - \left\lfloor\frac{m+1}{l}\right\rfloor - \left\lfloor\frac{n-(m+1)}{l}\right\rfloor=1
\end{equation*}
and we must have $r+m+1 \geq l$.
Now let $d=\gcd(l,n-(m+1))$.
As $m+1|n$ and $m+1|l$, we have $m+1|d$ and in particular $m+1 \leq d$.
We also have $d|r$, so in particular $l-r \geq d$.
We combine these two inequalities to obtain $r+m+1 \leq l$, with equality if and only if $l-r=d$ and $m+1=d$.
Since we showed above that $r+m+1 \geq l$, equality must hold, which immediately implies that $l|n$; so $\Theta_{l,{\color{red} s}}$ divides $[n]_{{\color{red} s}}^{>m+1}$, and since $\nu_{l,{\color{red} s}} \qbinom{n}{m+1}_{{\color{red} s}} \leq 1$, $\Theta_{l,{\color{red} s}}$ does not divide $\qbinom{n}{m+1}_{{\color{red} s}}/[n]_{{\color{red} s}}^{>m+1}$.
\end{proof}
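The conclusion of \cref{qbinomideal} can be sanity-checked for small $n$ using the valuation formula from \cref{valbinom}. The sketch below (an aside, with an ad hoc helper name) confirms that for $n=6$, $\Theta_6$ divides every $\qbinom{6}{k}$ with $1 \leq k \leq 5$ and is their only common cyclotomic factor, as the statement that the ideal equals $(\Theta_6)$ requires.

```python
# Cyclotomic factors of [n choose k] predicted by the valuation formula
# nu_l = floor(n/l) - floor(k/l) - floor((n-k)/l), which is always 0 or 1.
def binom_factors(n, k):
    return [l for l in range(1, n + 1) if n // l - k // l - (n - k) // l == 1]

facs = [binom_factors(6, k) for k in range(1, 6)]
# Theta_6 divides every [6 choose k] for 1 <= k <= 5 ...
assert all(6 in f for f in facs)
# ... and is their only common cyclotomic factor.
common = set(facs[0]).intersection(*facs[1:])
assert common == {6}
```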
\begin{thm} \label{qbinominvideal}
Let $n \in \mathbb{N}$.
The fractional ideal of $A$ generated by
\begin{equation*}
\qbinom{n}{0}_{{\color{red} s}}^{-1}, \qbinom{n}{1}_{{\color{red} s}}^{-1}, \dotsc, \qbinom{n}{n}_{{\color{red} s}}^{-1}
\end{equation*}
is principal, generated by
\begin{equation*}
\left(\prod_{\substack{1 \leq k \leq n\\ k \nmid n+1}} \Theta_{k,{\color{red} s}}\right)^{-1} \text{.}
\end{equation*}
\end{thm}
\begin{proof}
We follow a similar strategy as in the proof of \cref{qbinomideal}.
Let $I_m$ denote the fractional ideal generated by
\begin{equation*}
\qbinom{n}{0}_{{\color{red} s}}^{-1},\qbinom{n}{1}_{{\color{red} s}}^{-1}, \dotsc, \qbinom{n}{m}_{{\color{red} s}}^{-1}
\end{equation*}
and let
\begin{equation*}
g_m=\prod_{\substack{k|n-m+i \text{ for some } 1 \leq i \leq m\\ k\nmid n+1}} \Theta_{k,{\color{red} s}} \text{.}
\end{equation*}
Suppose we have shown that $I_m$ is principal, generated by $g_m^{-1}$.
We will show that $I_{m+1}$ is principal, generated by $g_{m+1}^{-1}$.
It is enough to show that the fractional ideal generated by
\begin{equation*}
\qbinom{n}{m+1}_{{\color{red} s}}^{-1},g_m^{-1}
\end{equation*}
is equal to the principal fractional ideal generated by $g_{m+1}^{-1}$.
This is equivalent to proving equality of the following (ordinary) ideals
\begin{equation}
\left(g_m,\qbinom{n}{m+1}_{{\color{red} s}}\right)=\left(\frac{\qbinom{n}{m+1}_{{\color{red} s}}}{h_m}\right) \label{eq:qbinominvideal-induction}
\end{equation}
of $A$, where
\begin{equation*}
h_m=\frac{g_{m+1}}{g_m}=\prod_{\substack{k|n-m\\ k \nmid n-m+1, k \nmid n-m+2, \dotsc ,k \nmid n+1}} \Theta_{k,{\color{red} s}} \text{.}
\end{equation*}
We first check that the ideal on the right-hand side of \eqref{eq:qbinominvideal-induction} is an ordinary ideal.
If $\Theta_{k,{\color{red} s}}$ divides $h_m$ (i.e.~if $k|n-m$ and $k \nmid n-m+i$ for all $1 \leq i \leq m+1$) then $k \nmid m+1$ and the fractional part of $(n-(m+1))/k$ is $(k-1)/k$.
This implies that
\begin{equation*}
\left\lfloor \frac{n}{k} \right\rfloor - \left\lfloor \frac{m+1}{k} \right\rfloor - \left\lfloor \frac{n-(m+1)}{k} \right\rfloor = 1
\end{equation*}
so $\Theta_{k,{\color{red} s}}$ also divides $\qbinom{n}{m+1}_{{\color{red} s}}$.
It is clear that $\qbinom{n}{m+1}_{{\color{red} s}}/h_m$ divides $\qbinom{n}{m+1}_{{\color{red} s}}$.
Suppose $\Theta_{k,{\color{red} s}}$ divides $\qbinom{n}{m+1}_{{\color{red} s}}/h_m$.
Since we can write
\begin{equation*}
\qbinom{n}{m+1}_{{\color{red} s}}=\frac{[n]_{{\color{red} s}} [n-1]_{{\color{red} s}} \dotsm [n-m]_{{\color{red} s}}}{[m+1]_{{\color{red} s}} [m]_{{\color{red} s}} \dotsm [1]_{{\color{red} s}}} \text{,}
\end{equation*}
this implies that either $k\nmid n-m$ and $k|n-m+i$ for some $1 \leq i \leq m$, or $k|n-m$ and $k|n-m+i$ for some $1 \leq i \leq m+1$.
In either case, it is easy to check that $k\nmid n+1$, for otherwise $n/k$ has fractional part $(k-1)/k$, so
\begin{equation*}
\left\lfloor \frac{n}{k} \right\rfloor - \left\lfloor \frac{m+1}{k} \right\rfloor - \left\lfloor \frac{n-(m+1)}{k} \right\rfloor = 0
\end{equation*}
and $\Theta_{k,{\color{red} s}}$ cannot divide $\qbinom{n}{m+1}_{{\color{red} s}}$.
This shows that $\qbinom{n}{m+1}_{{\color{red} s}}/h_m$ divides $g_m$.
We will now show that
\begin{equation*}
\left(\frac{g_{m+1}}{\qbinom{n}{m+1}_{{\color{red} s}}},h_m\right)=(1)
\end{equation*}
using \cref{thetaprincipal}, from which \eqref{eq:qbinominvideal-induction} holds and the result follows.
It is enough to show that for any $l,d>1$, we do not have $\Theta_{l,{\color{red} s}}|g_{m+1}/\qbinom{n}{m+1}_{{\color{red} s}}$ and $\Theta_{ld,{\color{red} s}}|h_m$, or $\Theta_{ld,{\color{red} s}}|g_{m+1}/\qbinom{n}{m+1}_{{\color{red} s}}$ and $\Theta_{l,{\color{red} s}}|h_m$.
Suppose first that $\Theta_{l,{\color{red} s}}|g_{m+1}/\qbinom{n}{m+1}_{{\color{red} s}}$ and $\Theta_{ld,{\color{red} s}}|h_m$.
Then $l \nmid n+1$ and $ld|n-m$, so $l|n-m$ and $l \nmid m+1$.
This shows that the fractional part of $(n-(m+1))/l$ is $(l-1)/l$ and the fractional part of $(m+1)/l$ is non-zero, so
\begin{equation*}
\left\lfloor \frac{n}{l} \right\rfloor - \left\lfloor \frac{m+1}{l} \right\rfloor - \left\lfloor \frac{n-(m+1)}{l} \right\rfloor = 1
\end{equation*}
which contradicts $\Theta_{l,{\color{red} s}}|g_{m+1}/\qbinom{n}{m+1}_{{\color{red} s}}$.
Similarly, suppose that $\Theta_{ld,{\color{red} s}}|g_{m+1}/\qbinom{n}{m+1}_{{\color{red} s}}$ and $\Theta_{l,{\color{red} s}}|h_m$.
Then $l|n-m$ and $l \nmid n-m+i$ for $1 \leq i \leq m+1$, while $ld|n-m+i$ for some $0 \leq i \leq m$ and $ld \nmid n+1$.
The only way this can happen is if $ld|n-m$.
This implies that $ld \nmid m+1$, and we similarly obtain
\begin{equation*}
\left\lfloor \frac{n}{ld} \right\rfloor - \left\lfloor \frac{m+1}{ld} \right\rfloor - \left\lfloor \frac{n-(m+1)}{ld} \right\rfloor = 1
\end{equation*}
which contradicts $\Theta_{ld,{\color{red} s}}|g_{m+1}/\qbinom{n}{m+1}_{{\color{red} s}}$.
\end{proof}
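Similarly, the generator in \cref{qbinominvideal} can be sanity-checked numerically via the valuation formula. For $n=7$ the claimed generator is $(\Theta_3\Theta_5\Theta_6\Theta_7)^{-1}$; the sketch below (an aside, ad hoc helper name) confirms that every $\qbinom{7}{k}$ divides $\Theta_3\Theta_5\Theta_6\Theta_7$ and that the cofactors share no common cyclotomic factor, consistent with them generating the unit ideal via \cref{thetabezout}.

```python
# Cyclotomic factors of [n choose k] predicted by the valuation formula.
def binom_factors(n, k):
    return {l for l in range(1, n + 1) if n // l - k // l - (n - k) // l == 1}

n = 7
gen = {k for k in range(1, n + 1) if (n + 1) % k != 0}   # indices of the generator
fac = [binom_factors(n, k) for k in range(n + 1)]
# Every [7 choose k] divides the claimed generator Theta_3 Theta_5 Theta_6 Theta_7 ...
assert all(f <= gen for f in fac)
# ... and the cofactors have no common cyclotomic factor.
assert set.intersection(*(gen - f for f in fac)) == set()
```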
\section{Existence and rotatability}
Let $Q=\Frac A$ and $\overline{Q}=\Frac \overline{A}$.
Our goal in this section is to prove \cref{existence} by showing that the denominators of the coefficients of $\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$ divide
\begin{equation*}
\prod_{\substack{1 \leq k \leq n\\ k \nmid n+1}} \Theta_{k,{\color{red} s}}
\end{equation*}
by comparing them with the coefficients of $\mathrm{JW}_{\overline{Q}}(n)$.
First, we prove the analogous statement for $\mathrm{JW}_{\overline{Q}}(n)$.
\begin{lem} \label{onecolvalbound}
Let $k \in \mathbb{N}$, and let $D$ be a one-colored Temperley--Lieb diagram in $\mathrm{TL}_{\overline{Q}}(n)$.
Then
\begin{equation*}
\nu_k\left(\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(n)} D \right) \geq -1 \text{,}
\end{equation*}
with equality only if $1 \leq k \leq n$ and $k \nmid n+1$.
\end{lem}
\begin{proof}
We proceed by induction.
Suppose the result holds for $n=m$, and let $D$ be a one-colored Temperley--Lieb diagram in $\mathrm{TL}_{\overline{Q}}(m+1)$.
By the one-color version of \cref{JWcoef} we have
\begin{equation}
\begin{split}
\nu_k\left(\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(m+1)} D \right)& =\nu_k\left(\sum_{\{i\}} \frac{[i]}{[m+1]}\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(m)} D_i\right) \\
& \geq \min_{\{i\}} \left(\nu_k\left(\frac{[i]}{[m+1]}\right)+\nu_k\left(\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(m)} D_i\right)\right) \text{.}
\end{split}\label{eq:onecolvalbound}
\end{equation}
If $k \nmid m+1$, then $\nu_k([i]/[m+1]) \geq 0$ for any $i$ and $\nu_k(\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(m)} D_i) \geq -1$.
On the other hand, if $k|m+1$, then $\nu_k([i]/[m+1]) \geq -1$ while $\nu_k(\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(m)} D_i) \geq 0$.
In either case, the sum of the two valuations is at least $-1$, so the right-hand side of \eqref{eq:onecolvalbound} is at least $-1$.
In the case of equality, we must have $1 \leq k \leq m+1$ and $k \nmid m+2$ by \cref{existence-onecol} and the one-color version of \cref{qbinominvideal}.
\end{proof}
Now let $A'=A[x]/(x^2-x_{{\color{red} s}} x_{{\color{blue} t}})$.
We view $A'$ as both an $A$-algebra and an $\overline{A}$-algebra in the obvious way.
Writing $Q'=\Frac A'$, we have an isomorphism
\begin{equation*}
\begin{aligned}
\mathrm{TL}_{Q'}(n)& \longrightarrow 2\mathrm{TL}_{Q'}(\prescript{}{{\color{red} s}}{n})\\
e_i & \longmapsto \begin{cases}
\frac{x}{x_{{\color{red} s}}} e_i & \text{$i$ odd,} \\
\frac{x}{x_{{\color{blue} t}}} e_i & \text{$i$ even,}
\end{cases}
\end{aligned}
\end{equation*}
which maps $\mathrm{JW}_{Q'}(n) \mapsto \mathrm{JW}_{Q'}(\prescript{}{{\color{red} s}}{n})$.
So for any two-colored Temperley--Lieb diagram $D$, we have
\begin{equation*}
\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{n})} D=\coeff_{{}\in \mathrm{JW}_{Q'}(\prescript{}{{\color{red} s}}{n})} D=x^a x_{{\color{red} s}}^b x_{{\color{blue} t}}^c \coeff_{{}\in \mathrm{JW}_{Q'}(n)} \overline{D}=x^a x_{{\color{red} s}}^b x_{{\color{blue} t}}^c \coeff_{{}\in \mathrm{JW}_{\overline{Q}}(n)} \overline{D}
\end{equation*}
for some integers $a,b,c$, where $\overline{D}$ denotes the one-color diagram obtained from $D$ by forgetting the coloring.
It follows that when $k>2$ we have
\begin{equation}
\nu_{k,{\color{red} s}}\left(\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{n})} D \right)=\nu_k\left(\coeff_{{}\in \mathrm{JW}_{\overline{Q}}(n)} \overline{D} \right) \text{.} \label{eq:valequalitynottwo}
\end{equation}
\begin{lem} \label{twocolvalbound}
Let $k \in \mathbb{N}$, and let $D$ be a two-colored Temperley--Lieb diagram in $2\mathrm{TL}_{Q}(\prescript{}{{\color{red} s}}{n})$.
Then
\begin{equation*}
\nu_{k,{\color{Plum} u}}\left(\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{n})} D \right) \geq -1 \text{,}
\end{equation*}
and if we have equality then $1 \leq k \leq n$ and $k \nmid n+1$, and ${\color{Plum} u}={\color{red} s}$ if $k=2$.
\end{lem}
\begin{proof}
By \cref{onecolvalbound} and \eqref{eq:valequalitynottwo} we need only concern ourselves with the case where $k=2$.
We proceed by induction as in the proof of \cref{onecolvalbound}.
Suppose the result holds for $n=m$, and let $D$ be a two-colored Temperley--Lieb diagram in $2\mathrm{TL}_{Q}(\prescript{}{{\color{red} s}}{(m+1)})$.
By \cref{JWcoef} we have
\begin{equation}
\begin{split}
\nu_{2,{\color{Plum} u}}\left(\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{(m+1)})} D \right)& =\nu_{2,{\color{Plum} u}}\left(\sum_{\{i\}} \frac{[i]_{\color{OliveGreen} v}}{[m+1]_{{\color{red} s}}}\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{m})} D_i\right) \\
& \geq \min_{\{i\}} \left(\nu_{2,{\color{Plum} u}}\left(\frac{[i]_{{\color{OliveGreen} v}}}{[m+1]_{{\color{red} s}}}\right)+\nu_{2,{\color{Plum} u}}\left(\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{m})} D_i\right)\right) \text{.}
\end{split}\label{eq:twocolvalbound}
\end{equation}
If $m$ is even or ${\color{Plum} u}={\color{blue} t}$, then for all $i$
\begin{equation*}
\nu_{2,{\color{Plum} u}}\left(\frac{[i]_{{\color{OliveGreen} v}}}{[m+1]_{{\color{red} s}}}\right) \geq 0 \qquad \text{ and } \qquad \nu_{2,{\color{Plum} u}}\left(\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{m})} D_i\right) \geq -1 \text{.}
\end{equation*}
On the other hand, if $m$ is odd and ${\color{Plum} u}={\color{red} s}$, then for all $i$
\begin{equation*}
\nu_{2,{\color{Plum} u}}\left(\frac{[i]_{{\color{OliveGreen} v}}}{[m+1]_{{\color{red} s}}}\right) \geq -1 \qquad \text{ and } \qquad \nu_{2,{\color{red} s}}\left(\coeff_{{}\in \mathrm{JW}_{Q}(\prescript{}{{\color{red} s}}{m})} D_i\right) \geq 0 \text{.}
\end{equation*}
In either case, the sum of the two valuations is at least $-1$, so the right-hand side of \eqref{eq:twocolvalbound} is at least $-1$.
Now suppose the left-hand side of \eqref{eq:twocolvalbound} is $-1$ but $m$ is even.
By the case analysis above and the inductive hypothesis, we must have ${\color{Plum} u}={\color{red} s}$.
Let $D'$ be the diagram in $2\mathrm{TL}(\prescript{}{{\color{red} s}}{(m+1)})$ obtained by reflecting $D$ about a vertical axis and then recoloring.
The reflection map extends to a twisted involution of $2\mathrm{TL}(\prescript{}{{\color{red} s}}{(m+1)})$ mapping $e_i$ to $e_{m+1-i}$ and swapping $[2]_{{\color{red} s}}$ and $[2]_{{\color{blue} t}}$.
Thus $\nu_{2,{\color{blue} t}}(\coeff_{{} \in \mathrm{JW}_Q(\prescript{}{{\color{red} s}}{(m+1)})} D')=-1$, which is a contradiction.
\end{proof}
\begin{lem} \label{qbinominvcoef}
Let $n,k$ be integers with $0 \leq k \leq n$.
There exists a two-colored diagram $D$ such that $\coeff_{{}\in \mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})} D=\qbinom{n}{k}_{{\color{red} s}}^{-1}$.
\end{lem}
\begin{proof}
Take $D$ to be the diagram with $k$ nested caps on the bottom left, $k$ nested cups on the top right, and all other strands connected from bottom to top.
For example, if $n=5$ and $k=2$ we set
\begin{equation*}
D=\begin{gathered}
\begin{tikzpicture}[xscale=.4,yscale=.2]
\begin{scope}
\clip (-3,3.5) rectangle (3,-3.5);
\draw[fill=dgrmred] (2.5,4) rectangle (-4,-4);
\draw[fill=dgrmblu] (4,-4) to (2,-3.5) to[out=90,in=-90] (-2,3.5) to (-2,4) to (-1,3.5) to[out=-90,in=180] (0.5,1.5) to[out=0,in=-90] (2,3.5) to (4,4) to (4,-4);
\draw[fill=dgrmblu] (-2,-4) to (-2,-3.5) to[out=90,in=180] (-.5,-1.5) to[out=0,in=90] (1,-3.5) to (1,-4) to (0,-3.5) to[out=90,in=0] (-.5,-2.5) to[out=180,in=90] (-1,-3.5) to (-2,-4);
\draw[fill=dgrmblu] (0,4) to (0,3.5) to[out=-90,in=180] (.5,2.5) to[out=0,in=-90] (1,3.5) to (0,4);
\end{scope}
\draw[dashed] (-3,3.5) to (3,3.5);
\draw[dashed] (-3,-3.5) to (3,-3.5);
\end{tikzpicture}
\end{gathered} \text{.}
\end{equation*}
The result follows by \cref{JWcoef} and induction on $n$.
\end{proof}
\begin{thm} \label{genericcoefs}
For each two-colored Temperley--Lieb diagram $D$ there are coprime elements $f_D,g_D \in A$ such that if $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists,
\begin{equation*}
\coeff_{{} \in \mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})} D
\end{equation*}
is the specialization of $f_D/g_D$ in $R$.
In particular, the existence of $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ implies that the specialization of $g_D$ in $R$ is invertible for all diagrams $D$.
\end{thm}
\begin{rem}
We consider generic computation of the coefficients of one-colored Jones--Wenzl projectors (at least for subrings of $\mathbb{C}$) to be mathematical folklore, i.e.~a ``known'' result without a published proof.
In \cite[Theorem~6.13]{ew-localizedcalc} Elias--Williamson carefully prove an analogous result under the assumption that $R$ is both an integral domain and a henselian local ring.
Our proof does not require any restrictions on $R$ but is essentially equivalent to \cref{existence}.
\end{rem}
\begin{proof}
Let
\begin{equation*}
T_R=\{f \in 2\mathrm{TL}_R(\prescript{}{{\color{red} s}}{n}) : e_i f=0 \text{ for all } 1 \leq i \leq n-1\} \text{.}
\end{equation*}
In other words, $T_R$ is the (right) annihilator of the generators $e_1,\dotsc,e_{n-1}$.
One can show that $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists if and only if there exists $f \in T_R$ for which $\coeff_{{} \in f} 1$ is invertible in $R$ (see e.g.~\cite[Exercise~9.25]{emtw} for the one-colored case).
When this happens, $T_R=R\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$.
Clearly $\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$ exists, so $T_Q=Q\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$.
Thus $T_A$ is a free $A$-module of rank $1$, generated by $c\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n}) \in 2\mathrm{TL}_A(\prescript{}{{\color{red} s}}{n})$, where $c$ is the least common multiple of the denominators of the coefficients of $\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$.
\Cref{twocolvalbound} implies that $c$ divides
\begin{equation*}
g_n=\prod_{\substack{1 \leq k \leq n\\ k \nmid n+1}} \Theta_{k,{\color{red} s}} \text{,}
\end{equation*}
while \cref{qbinominvcoef} and \cref{qbinominvideal} give $c=g_n$.
If $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists, we have $T_R=R\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n}) \supseteq R \otimes_A g_n\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$, and thus $g_n\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})=1 \otimes_A g_n\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$.
But the coefficients of $g_n\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$ (which lie in $A$) generate $(1)$ as an ideal of $A$ (again by \cref{qbinominvideal}), which directly implies that $g_n$ is invertible in $R$, and $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})=g_n^{-1} \otimes_A g_n \mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$.
\end{proof}
\begin{proof}[Proof of \cref{existence}]
The invertibility of $\qbinom{n}{k}_{{\color{red} s}}$ is necessary for $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ to exist by \cref{qbinominvcoef}.
Sufficiency follows from \cref{qbinominvideal}, \cref{twocolvalbound}, and \cref{genericcoefs} (or more simply from the proof of \cref{genericcoefs}).
\end{proof}
For $f \in Q$, we say that $f$ \defnemph{exists in $R$} if there are $a,b \in A$ with $f=a/b$ and $b$ invertible in $R$.
\begin{lem}
Suppose $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists.
Then $\frac{[n+1]_{{\color{red} s}}}{[k]_{{\color{red} s}}}$ exists in $R$ for any integer $1 \leq k \leq n+1$.
\end{lem}
\begin{proof}
We have
\begin{equation*}
\frac{[n+1]_{{\color{red} s}}}{[k]_{{\color{red} s}}}=\frac{\prod_{l|n+1} \Theta_{l,{\color{red} s}}}{\prod_{l|k} \Theta_{l,{\color{red} s}}}=\frac{\prod_{\substack{l|n+1\\ l \nmid k}} \Theta_{l,{\color{red} s}}}{\prod_{\substack{l|k\\ l \nmid n+1}} \Theta_{l,{\color{red} s}}} \text{,}
\end{equation*}
and the denominator of the right-hand side divides
\begin{equation*}
\prod_{\substack{1 \leq l \leq n\\ l \nmid n+1}} \Theta_{l,{\color{red} s}}
\end{equation*}
which is invertible by \cref{existence} and \cref{qbinominvideal}.
\end{proof}
\begin{prop} \label{simple-rotatability}
Suppose the two-colored Jones--Wenzl projectors $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ and $\mathrm{JW}_R(\prescript{}{{\color{blue} t}}{n})$ exist.
Then $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ is rotatable if and only if $\frac{[n+1]_{{\color{red} s}}}{[k]_{{\color{red} s}}}=0$ for all integers $1 \leq k \leq n$.
\end{prop}
\begin{proof}
Calculating generically, we have
\begin{equation*}
\pTr(\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n}))=-\frac{[n+1]_{{\color{red} s}}}{[n]_{{\color{red} s}}}\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{(n-1)})
\end{equation*}
by \eqref{eq:genericpTr}.
From the proof of \cref{genericcoefs} the coefficients of $\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{(n-1)})$ can be written as sums of fractions of the form $a\qbinom{n-1}{k}_{{\color{red} s}}^{-1}$ for some $a \in A$ and some integer $0 \leq k \leq n-1$.
Now observe that
\begin{equation*}
-\frac{[n+1]_{{\color{red} s}}}{[n]_{{\color{red} s}}}\frac{a}{\qbinom{n-1}{k}_{{\color{red} s}}}=-\frac{[n+1]_{{\color{red} s}}[k]_{{\color{red} s}}! a}{[n]_{{\color{red} s}}[n-1]_{{\color{red} s}} \dotsm [n-k]_{{\color{red} s}}}=-\frac{[n+1]_{{\color{red} s}}}{[k+1]_{{\color{red} s}}}\frac{a}{\qbinom{n}{k+1}_{{\color{red} s}}}
\end{equation*}
noting that since $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ exists, $\qbinom{n}{k+1}_{{\color{red} s}}$ is invertible.
Thus $\mathrm{JW}_R(\prescript{}{{\color{red} s}}{n})$ is rotatable if $\frac{[n+1]_{{\color{red} s}}}{[k+1]_{{\color{red} s}}}=0$ for all integers $0 \leq k \leq n-1$.
Conversely, by \cref{qbinominvcoef} there is a diagram whose coefficient in $\mathrm{JW}_Q(\prescript{}{{\color{red} s}}{n})$ is exactly $\qbinom{n-1}{k}_{{\color{red} s}}^{-1}$, so the above calculation shows that rotatability implies $\frac{[n+1]_{{\color{red} s}}}{[k+1]_{{\color{red} s}}}=0$.
\end{proof}
\begin{proof}[Proof of \cref{existsrotates}]
The condition on quantum binomial coefficients is the same as \cite[Assumption~1.1]{abe-homBS}.
By \cite[Proposition~3.4]{abe-homBS} this implies that the quantum binomial coefficients $\qbinom{n}{k}_{{\color{red} s}}$ and $\qbinom{n}{k}_{{\color{blue} t}}$ are all invertible.
Since
\begin{equation}
\qbinom{n+1}{k}_{{\color{red} s}}=\frac{[n+1]_{{\color{red} s}}}{[k]_{{\color{red} s}}}\qbinom{n}{k-1}_{{\color{red} s}} \label{eq:binomcoefincrement}
\end{equation}
and similarly for ${\color{blue} t}$, we conclude that $\frac{[n+1]_{{\color{red} s}}}{[k]_{{\color{red} s}}}=\frac{[n+1]_{{\color{blue} t}}}{[k]_{{\color{blue} t}}}=0$ for all integers $1 \leq k \leq n$.
Conversely, if the two-colored Jones--Wenzl projectors exist and are rotatable, then \eqref{eq:binomcoefincrement} combined with \cref{existence} and \cref{simple-rotatability} show that $\qbinom{n+1}{k}_{{\color{red} s}}$ and $\qbinom{n+1}{k}_{{\color{blue} t}}$ vanish for all integers $1 \leq k \leq n$.
\end{proof}
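As a numerical sanity check on \eqref{eq:binomcoefincrement}, one can verify the identity in the one-colored specialization, where both two-colored quantum numbers reduce to ordinary quantum integers satisfying the Chebyshev-type recursion $[n+1]=[2][n]-[n-1]$. The sketch below is our own illustration (it plays no role in the proofs) and evaluates both sides at a generic real value of $[2]$.

```python
# Sanity check of [n+1 choose k] = ([n+1]/[k]) * [n choose k-1]
# in the one-colored specialization, where quantum integers obey
# the Chebyshev-type recursion [n+1] = [2][n] - [n-1].

def qints(x, n_max):
    """Quantum integers [0], [1], ..., [n_max] for [2] = x."""
    q = [0.0, 1.0]
    for _ in range(n_max - 1):
        q.append(x * q[-1] - q[-2])
    return q

def qbinom(q, n, k):
    """Quantum binomial coefficient [n]! / ([k]! [n-k]!)."""
    num = den = 1.0
    for i in range(1, k + 1):
        num *= q[n - k + i]
        den *= q[i]
    return num / den

x = 1.7  # a generic (non-root-of-unity) value of [2]
q = qints(x, 12)
for n in range(1, 10):
    for k in range(1, n + 1):
        lhs = qbinom(q, n + 1, k)
        rhs = (q[n + 1] / q[k]) * qbinom(q, n, k - 1)
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```

Since both sides are rational in $[2]$, agreement at a generic value is strong evidence; the actual justification is of course the algebraic manipulation of quantum factorials.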
\section{Applications to Soergel bimodules}
The diagrammatic Hecke category of Elias--Williamson is constructed from a reflection representation of a Coxeter group called a \defnemph{realization}.
For each finite parabolic dihedral subgroup they identify a corresponding two-colored Temperley--Lieb algebra, whose defining parameters depend on the realization \cite[\S 5.2]{ew-soergelcalc}.
In \cite[\S 5]{ew-localizedcalc} Elias--Williamson highlight some hidden assumptions about their realizations from \cite{ew-soergelcalc}.
Their most basic assumption (without which the diagrammatic Hecke category is not well defined) is that certain two-colored Jones--Wenzl projectors exist and are rotatable.
For the benefit of future work we give a corrected definition of a realization (which ensures the existence and rotatability of these Jones--Wenzl projectors) below.
\begin{defn} \label{realizcorrected}
Let $\Bbbk$ be an integral domain.
A \defnemph{realization} of a Coxeter system $(W,S)$ over $\Bbbk$ consists of a free, finite rank $\Bbbk$-module $V$ along with subsets
\begin{align*}
\{\alpha_s : s \in S\} & \subset V & \{\alpha_s^\vee : s \in S\} \subset V^\ast=\Hom_\Bbbk(V,\Bbbk)
\end{align*}
such that
\begin{enumerate}[label=(\roman*)]
\item $\langle \alpha_s^\vee,\alpha_s \rangle=2$ for all $s \in S$;
\item the assignment
\begin{equation*}
s(\beta)=\beta-\langle \alpha_s^\vee, \beta \rangle \alpha_s
\end{equation*}
for all $s \in S$ and $\beta \in V$ defines a representation of the Coxeter group $W$ on $V$;
\item \label{item:abecond} for all $s,t \in S$ with $m_{st}<\infty$, we have
\begin{equation*}
\qbinom{m_{st}}{k}_{{\color{red} s}}(\langle \alpha_s^\vee,\alpha_t\rangle, \langle \alpha_t^\vee,\alpha_s\rangle)=\qbinom{m_{st}}{k}_{{\color{blue} t}}(\langle \alpha_s^\vee,\alpha_t\rangle, \langle \alpha_t^\vee,\alpha_s\rangle)=0
\end{equation*}
for all integers $1 \leq k \leq m_{st}-1$.
\end{enumerate}
\end{defn}
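Conditions (i) and (ii) can be checked concretely in a small example. The sketch below is an illustration with values of our own choosing: it builds a rank-two realization over $\mathbb{R}$ with $\langle \alpha_s^\vee,\alpha_t\rangle = \langle \alpha_t^\vee,\alpha_s\rangle = -1$ (the type $A_2$ Cartan matrix) and verifies that $s$ and $t$ are involutions satisfying the braid relation $(st)^3=1$ appropriate to $m_{st}=3$.

```python
# Illustrative check of conditions (i)-(ii) for a rank-2 realization
# over the reals with m_st = 3 (Cartan matrix of type A2).
# All concrete values here are our own choices for illustration.

def make_reflection(alpha, alpha_vee):
    """The map beta |-> beta - <alpha^vee, beta> alpha on R^2."""
    def s(beta):
        pairing = sum(av * b for av, b in zip(alpha_vee, beta))
        return tuple(b - pairing * a for a, b in zip(alpha, beta))
    return s

alpha_s, alpha_s_vee = (1.0, 0.0), (2.0, -1.0)   # <alpha_s^vee, alpha_s> = 2
alpha_t, alpha_t_vee = (0.0, 1.0), (-1.0, 2.0)   # <alpha_t^vee, alpha_t> = 2

s = make_reflection(alpha_s, alpha_s_vee)
t = make_reflection(alpha_t, alpha_t_vee)

beta = (0.3, -1.1)  # an arbitrary test vector

# Condition (i) forces each generator to be an involution:
assert all(abs(u - v) < 1e-12 for u, v in zip(s(s(beta)), beta))
assert all(abs(u - v) < 1e-12 for u, v in zip(t(t(beta)), beta))

# Braid relation (st)^3 = 1, as required by condition (ii) when m_st = 3:
x = beta
for _ in range(3):
    x = s(t(x))
assert all(abs(u - v) < 1e-12 for u, v in zip(x, beta))
```

Here $\langle \alpha_s^\vee,\alpha_t\rangle \langle \alpha_t^\vee,\alpha_s\rangle = 1 = 4\cos^2(\pi/3)$, which is what makes the braid relation hold for $m_{st}=3$.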
By \cref{existsrotates}, condition \ref{item:abecond} above is equivalent to the existence and rotatability of $\mathrm{JW}_{\Bbbk}(\prescript{}{{\color{red} s}}{(m_{st}-1)})$ and $\mathrm{JW}_{\Bbbk}(\prescript{}{{\color{blue} t}}{(m_{st}-1)})$ for $[2]_{{\color{red} s}}=\langle \alpha_s^\vee,\alpha_t\rangle$ and $[2]_{{\color{blue} t}}=\langle \alpha_t^\vee,\alpha_s\rangle$.
This condition is exactly Abe's assumption \cite[Assumption~1.1]{abe-homBS}, so \cref{abeequalsew} immediately follows by Abe's results \cite[Theorem~3.9]{abe-homBS} and \cite[Theorem~5.6]{abe-bimodhecke}.
It is also equivalent to
\begin{equation} \label{eq:minpolycond}
\begin{aligned}
\Psi_{m_{st}}(\langle \alpha_s^\vee,\alpha_t \rangle \langle \alpha_t^\vee,\alpha_s \rangle)& =0 & &\text{if $m_{st}>2$,} \\
\langle \alpha_s^\vee,\alpha_t \rangle=\langle \alpha_t^\vee, \alpha_s \rangle& =0 & & \text{if $m_{st}=2$,}
\end{aligned}
\end{equation}
by \cref{qbinomideal}.
\begin{rem}
In \cite{ew-soergelcalc} Elias--Williamson incorrectly state that
\begin{equation} \label{eq:ewtechnicalcondition}
[m_{st}]_{{\color{red} s}}(\langle \alpha_s^\vee,\alpha_t\rangle, \langle \alpha_t^\vee,\alpha_s\rangle)=[m_{st}]_{{\color{blue} t}}(\langle \alpha_s^\vee,\alpha_t\rangle, \langle \alpha_t^\vee,\alpha_s\rangle)=0
\end{equation}
is enough to ensure the existence and rotatability of $\mathrm{JW}_\Bbbk(\prescript{}{{\color{red} s}}{(m_{st}-1)})$.
(This error was identified in \cite{ew-localizedcalc} but only partially resolved there.)
In the same paper Elias--Williamson also incorrectly state that \eqref{eq:ewtechnicalcondition} is equivalent to \eqref{eq:minpolycond}.
Amusingly, when these two statements are combined, these errors accidentally
\end{rem}
\printbibliography
\end{document} |
\chapter{Introduction}
Two world-wide networks are currently searching for
extra-solar planetary systems by making densely sampled observations of
ongoing microlensing events toward the Galactic bulge (PLANET,
Albrow et al.\ 1996; GMAN, Pratt et al.\ 1996).
Several other groups will join the search shortly and
there is serious discussion of new initiatives that would intensify the
search by an order of magnitude. More than 100 microlensing events have
been detected to date by four groups, MACHO (Alcock et al.\ 1996),
EROS (Ansari et al.\ 1996),
OGLE (Udalski et al.\ 1994), and DUO (Alard 1996) based on observations made
once or twice per night. The events typically last one week to a few months.
MACHO and OGLE have reported ``alerts'', events detected before peak. This
alert capability
is what has allowed PLANET and GMAN to make intensive, sometimes
round-the-clock, follow-up observations in hopes of finding the planetary
perturbations which are expected to last a day or less.
In sharp contrast to this explosion of observational activity,
theoretical work on planet detection has been rather sparse, amounting to only
five papers in as many years. Mao \&
Paczy\'nski (1991) originally suggested that planets might be detected
in microlensing events. Gould \& Loeb (1992) developed a formalism for
understanding the character of planetary perturbations and made systematic
estimates of the rate of detection for various planetary-system parameters.
Bolatto \& Falco (1994) studied the detection rate in the more general
context of binary systems. These early works assumed that the lensed star could
be treated as a point source. The usefulness of this approximation depends
primarily on the angular size of the source $\theta_*$, relative to the
planetary Einstein ring, $\theta_p$,
$$\theta_p = \biggl({m\over M}\biggr)^{1/2}\theta_e,\qquad
\theta_e = \biggl({4 G M {D_{\rm ls}}\over c^2{D_{\rm ol}}{D_{\rm os}}}\biggr)^{1/2}.\eqn\thetap$$
Here $\theta_e$ is the Einstein ring of the lensing star, $m$ and $M$ are the
masses of the planet and its parent star, and
${D_{\rm ol}}$, ${D_{\rm ls}}$, and ${D_{\rm os}}$ are the distances between the observer, lens, and
source. For Jupiter-mass planets at typical distances $({D_{\rm ls}}\sim 2\,{\rm kpc})$
from bulge giant sources, $\theta_p\sim 3\theta_*$ so the approximation is
a reasonable one. However, for Saturns, Neptunes, and especially Earths,
the finite size of the source becomes quite important, and even for Jupiters
it is not completely negligible. Moreover, as we will stress below, it is
quite possible to mistake a ``Jupiter event'' in which the source size is
negligible for a ``Neptune event'' with $\theta_*>\theta_p$. Hence it
is essential to understand finite-source effects even to interpret events
where the source size is in fact small.
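The estimate $\theta_p\sim 3\theta_*$ can be reproduced with representative numbers. In the sketch below, the lens mass ($0.3\,M_\odot$), the distances, and the giant source radius ($10\,R_\odot$) are illustrative assumptions of ours, not values taken from any particular event.

```python
# Order-of-magnitude check of theta_p ~ 3 theta_* for a Jupiter-mass
# planet and a bulge giant source.  The specific lens mass, distances,
# and source radius below are illustrative assumptions.
from math import sqrt

GM_SUN_OVER_C2 = 1.4766e3        # m (gravitational radius of the Sun)
KPC = 3.0857e19                  # m
R_SUN = 6.957e8                  # m
M_JUP = 9.54e-4                  # solar masses

M = 0.3                          # lens mass in solar masses (assumed)
D_ol, D_ls, D_os = 6 * KPC, 2 * KPC, 8 * KPC

# Einstein ring of the lensing star, from the expression for theta_e:
theta_e = sqrt(4 * GM_SUN_OVER_C2 * M * D_ls / (D_ol * D_os))

q = M_JUP / M                    # planet/star mass ratio
theta_p = sqrt(q) * theta_e      # planet Einstein ring

theta_star = 10 * R_SUN / D_os   # 10 R_sun giant at 8 kpc (assumed)

print(theta_e * 206265e3)        # theta_e in milliarcseconds, ~0.3 mas
print(theta_p / theta_star)      # ~3, as quoted in the text
```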
Progress on finite-source effects was substantially delayed by
problems of computation. Like all binary lenses, planetary-systems have
caustics, curves in the source plane where a point-source is infinitely
magnified as two images
either appear or disappear. If one attempts to integrate the magnification
of a finite source that crosses a caustic, one is plagued with numerical
instabilities near the caustic. While it is straightforward to solve these
problems for any given geometry, the broad range of possible geometries makes
it difficult to develop an algorithm sufficiently robust for a statistical
study of lensing events. Bennett \& Rhie (1996) solved this problem by
integrating in the image plane (where the variation of the magnification is
smooth) rather than the source plane (where it is discontinuous). They were
thereby able to investigate for the first time the detectability of Earth
to Neptune mass planets. Gould \& Gaucherel (1996) showed that this approach
could be simplified from a two-dimensional integral over the image of the
source to a one-dimensional integral over its boundary.
The implementation of this method requires some care. We describe
the practical procedures elsewhere (Gaudi 1996).
The difficult computational problems originally posed by
finite-source effects are now completely solved.
To date, the analysis of planetary-system lensing events has focused
on the question of ``detectability'' which was quantified by Gould \& Loeb
(1992) as a certain minimal fractional deviation from a standard
Paczy\'nski (1986) light curve having magnification
$$A(x) = {x^2+ 2\over x(x^2+4)^{1/2}},\qquad x(t) = \biggl[{(t-t_0)^2\over
t_e^2}+\beta^2\biggr]^{1/2},\eqn\aofx$$
where $x$ is the projected lens-source separation in units of $\theta_e$.
Note that this curve is characterized by just three parameters: $t_0$ the
time of closest approach, $\beta$, the impact parameter in units of
$\theta_e$, and $t_e$, the Einstein radius crossing time. Bennett \& Rhie
(1996) adopted a similar approach but added the qualification that the
deviation persist for a certain minimum time.
Here we investigate a different question: How well can the parameters
of the planetary-system be measured? As discussed by Gould \& Loeb (1992),
if there are light-curve data of ``sufficient quality'', two planetary-system
parameters can generically be extracted from a microlensing event that displays
a planetary perturbation. These are the planet/star mass ratio, $q$, and the
planet/star projected separation in units of the stellar Einstein ring,
$y_p$,
$$q \equiv {m\over M},\qquad y_p \equiv {a_p\over r_e}.\eqn\qdef$$
Here $a_p$ is the physical projected separation, and $r_e={D_{\rm ol}}\theta_e$.
As we discuss in \S\ 8,
it will often be possible to make additional observations that specify the
mass and distance of the lensing star, or equivalently $M$ and $r_e$.
For these cases, the measurements of $q$ and $r_e$ yield
the mass $m=q M$ and projected separation $a_p = y_p r_e$.
If a planet were detected by observing a deviation from the standard curve, but its
mass ratio remained uncertain by a factor of 10, the scientific value of the
detection would be severely degraded. Indeed such ``detections''
would probably not receive general acceptance. Thus, the problems of planet
detection and parameter measurement are intimately connected.
Microlensing planet-detection programs must monitor a total of at least
several hundred events in order to obtain representative statistics on the
frequency of planets. These observations require large blocks of 1--2 meter
class telescope time coordinated over several continents. For funding
agencies and time allocation committees to make rational decisions
about the allocation of scarce resources, and for observers to make rational
choices among prospective targets, it is essential to determine the minimum
observational requirements for detecting planetary systems {\it and}
measuring the characteristics of the detected systems.
\chapter{Types of Degeneracy}
\section{Discrete}
\FIG\one{
Discrete degeneracies. Panel (a) shows a lensing light curve
with ({\it solid curve}) and without ({\it dashed curve}) taking account of
the presence of a planet with mass ratio $q=10^{-3}$.
Panel (b) shows the associated lensing geometry.
The two solid curves represent the path of the images relative to the lens.
The crosses represent the image positions
at the time of the perturbation.
The circles are the four planet positions for which the light curves
reproduce the measured parameters $\delta_d$ (maximum fraction deviation)
and $t_d$ (FWHM of deviation) at the peak of the disturbance when
the source-lens separation is $x_d$.
The filled circle is the ``actual'' planet position.
Panel (c) shows the four associated light curves for times
near the peak of the perturbation, $t_{0,d}$. Note that time
is expressed in units of the perturbation time scale, $t_d$, not
$t_e$. The bold curve
corresponds to the ``actual'' planet position. Clearly, if the light curve
is well sampled, the two dashed curves corresponding to the image position
inside the Einstein ring in panel (b) could be ruled out immediately.
However, the two solid curves are less easily distinguished. These
differ by $\sim 15\%$ in planet/star separation and $10\%$ in mass.
See panel (b) and Table 1.
}
\topinsert
\mongofigure{fig1.ps}{6.4}{6.5}{7.0}
{
Discrete degeneracies. Panel (a) shows a lensing light curve
with ({\it solid curve}) and without ({\it dashed curve}) taking account of
the presence of a planet with mass ratio $q=10^{-3}$.
Panel (b) shows the associated lensing geometry.
The two solid curves represent the path of the images relative to the lens.
The crosses represent the image positions
at the time of the perturbation.
The circles are the four planet positions for which the light curves
reproduce the measured parameters $\delta_d$ (maximum fraction deviation)
and $t_d$ (FWHM of deviation) at the peak of the disturbance when
the source-lens separation is $x_d$.
The filled circle is the ``actual'' planet position.
Panel (c) shows the four associated light curves for times
near the peak of the perturbation, $t_{0,d}$. Note that time
is expressed in units of the perturbation time scale, $t_d$, not
$t_e$. The bold curve
corresponds to the ``actual'' planet position. Clearly, if the light curve
is well sampled, the two dashed curves corresponding to the image position
inside the Einstein ring in panel (b) could be ruled out immediately.
However, the two solid curves are less easily distinguished. These
differ by $\sim 15\%$ in planet/star separation and $10\%$ in mass.
See panel (b) and Table 1.
}
\endinsert
Planetary-system lensing events are subject to two different
discrete degeneracies. The first ambiguity relates to which image the
planet is perturbing: the major image outside the Einstein ring or the minor
image inside the Einstein ring. For almost all cases, this degeneracy is
easily broken provided there is good temporal coverage of the light curve.
However, if it is not broken the uncertainty in $q$ and $y_p$ can be a factor
of a few. The magnitudes of these uncertainties depend only on the overall
geometry of the event and not on the mass of the planet. The second ambiguity
relates to whether the planet lies closer to or farther from the star than
does the position of the source image that it is perturbing. This degeneracy
is more difficult to break, but it does not seriously affect the determination
of $q$, and the uncertainty induced in $y_p$ is proportional to
$q^{1/2}$ and is therefore often much smaller than the one induced by first
degeneracy. These two discrete degeneracies are illustrated in Figure \one.
The values of $q$ and $y_p$ for each of the four possible solutions are
displayed in Table 1.
$$\vbox{\halign{#\hfil\quad&\hfil#\quad&\hfil#
\quad&\hfil#\hfil\quad&\hfil#\hfil\quad&\hfil#\hfil\cr
\multispan{3}{\hfil TABLE 1 \hfil}\cr
\noalign{\medskip}
\multispan{3}{\hfil Degenerate Parameter Values: Discrete\hfil}\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
&\hfil planet/star \hfil&\hfil planet/star
\hfil\cr
&\hfil separation\hfil&\hfil mass ratio
\hfil\cr
&\hfil $y_p$ \hfil&\hfil $q/q_0$ \hfil\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
Major Image&\hfil1.40\hfil&\hfil1.00\hfil \cr
&\hfil1.19\hfil&\hfil0.91\hfil \cr
Minor Image&\hfil0.75\hfil&\hfil1.08\hfil \cr
&\hfil0.80\hfil &\hfil0.88\hfil \cr
\noalign{\smallskip}
\noalign{\hrule}
}}
$$
\FIG\two{
Ten light curves with shear $\gamma=0.6$, $\phi=90^\circ$
(see eqs.\ 3.1 and 3.2), all with maximum
deviation $\delta_d=10\%$ and FWHM
$t_d = 0.06\,t_e$. The ratios of source radius to planet Einstein ring range
from $\rho=0.1$ to $\rho=2.87$, the largest source radius consistent
with this maximum deviation. Table 2 gives the corresponding
values of $q=m/M$, and
proper motion, $\mu$, relative to the fiducial values $q_0$ and
$\mu_0$ at the arbitrarily chosen value $\rho=0.3$.
}
\topinsert
\mongofigure{fig2.ps}{6.4}{5.5}{6.0}
{
Ten light curves with shear $\gamma=0.6$, $\phi=90^\circ$
(see eqs.\ 3.1 and 3.2), all with maximum
deviation $\delta_d=10\%$ and FWHM
$t_d = 0.06\,t_e$. The ratios of source radius to planet Einstein ring range
from $\rho=0.1$ to $\rho=2.87$, the largest source radius consistent
with this maximum deviation. Table 2 gives the corresponding
values of $q=m/M$, and
proper motion, $\mu$, relative to the fiducial values $q_0$ and
$\mu_0$ at the arbitrarily chosen value $\rho=0.3$.
}
\endinsert
\section{Continuous}
In addition, there is a continuous
degeneracy arising from finite-source
effects being misinterpreted as a larger value of $q$. This is because
$q$ is determined from the (square of the) duration of the planetary
perturbation relative to the total duration of the event. If the size of the
source
is larger than the Einstein ring of the planet, then the duration of the
planetary perturbation will be the crossing time of the source, not of the
planet Einstein ring. Figure \two\ shows 10 light curves all with the same
maximum fractional deviation, $\delta_d$,
and same full width half maximum (FWHM) of perturbation, $t_d$.
The parameter that differs in each of these curves is the ratio
of source radius, $\theta_*$, to planet Einstein radius, $\theta_p=q^{1/2}\theta_e$,
$$\rho = {\theta_*\over \theta_p}.\eqn\rhodef$$
Table 2 gives the inferred values of $q$
and of the proper motion $\mu$ (of the planetary
system relative to the observer-source line of sight)
associated with each curve in units of the ``fiducial'' values
associated with $\rho=0.3$. Insofar as one could not distinguish among these
curves, any of these parameter combinations would be acceptable.
The fiducial parameters $q_0$ and $\mu_0$ would then be measurable but the
actual values of $\mu$ and $q$ would not. The proper motion of
both bulge and disk lenses is typically $\mu \sim {\cal O}(V_{\rm LSR}/R_0)
\sim 30\,{\rm km}\,{\rm s}^{-1}\,{\rm kpc}^{-1}$, where $V_{\rm LSR}\sim 220\,{\rm km}\,{\rm s}^{-1}$ is the
rotation speed of the Local Standard of Rest, and $R_0 \sim 8\, {\rm kpc}$
is the Galactocentric distance.
If, for the example shown in Table 2, the fiducial value were
measured as $\mu_0 \sim V_{\rm{LSR}}/R_0$,
one might then choose to argue that the proper motions
associated with the low-mass solutions (i.e.\ $\mu \sim \mu_0/3$)
would be so low as to be {\it a priori} unlikely.
However, these solutions could not actually be
ruled out by such an argument, since the distribution of
$\mu$ is rather broad (see Han \& Gould 1995).
Thus, there would remain a factor $\sim 15$
uncertainty in the planet/star mass ratio.
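The quoted factor of $\sim 15$ is simply the ratio of the extreme entries of the mass-ratio column of Table 2, as the following check (ours, for illustration) confirms.

```python
# The factor ~15 uncertainty quoted above follows directly from the
# extremes of the mass-ratio column of Table 2 (values transcribed
# from the table).
q_over_q0 = [1.095, 1.041, 1.000, 0.957, 0.767, 0.566,
             0.373, 0.236, 0.163, 0.127, 0.093, 0.074]
spread = max(q_over_q0) / min(q_over_q0)
print(round(spread, 1))  # ~14.8, i.e. a factor ~15
```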
$$\vbox{\halign{#\hfil\quad&\hfil#\quad&\hfil#
\quad&\hfil#\hfil\quad&\hfil#\hfil\quad&\hfil#\hfil\cr
\multispan{3}{\hfil TABLE 2 \hfil}\cr
\noalign{\medskip}
\multispan{3}{\hfil Degenerate Parameter Values:
Continuous Major Image\hfil}\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil dimensionless \hfil&\hfil
planet/star \hfil&\hfil proper motion \hfil\cr
\hfil source radius\hfil&\hfil
mass ratio\hfil&\hfil \hfil\cr
\hfil $\rho$ \hfil&\hfil $q/q_0$ \hfil&\hfil $\mu/\mu_0$ \hfil\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil0.10\hfil&\hfil1.095\hfil&\hfil2.867\hfil \cr
\hfil0.20\hfil&\hfil1.041\hfil&\hfil1.470\hfil \cr
\hfil0.30\hfil&\hfil1.000\hfil&\hfil1.000\hfil \cr
\hfil0.60\hfil&\hfil0.957\hfil&\hfil0.511\hfil \cr
\hfil0.90\hfil&\hfil0.767\hfil&\hfil0.381\hfil \cr
\hfil1.20\hfil&\hfil0.566\hfil&\hfil0.332\hfil \cr
\hfil1.50\hfil&\hfil0.373\hfil&\hfil0.327\hfil \cr
\hfil1.80\hfil&\hfil0.236\hfil&\hfil0.343\hfil \cr
\hfil2.10\hfil&\hfil0.163\hfil&\hfil0.354\hfil \cr
\hfil2.40\hfil&\hfil0.127\hfil&\hfil0.351\hfil \cr
\hfil2.70\hfil&\hfil0.093\hfil&\hfil0.364\hfil \cr
\hfil2.87\hfil&\hfil0.074\hfil&\hfil0.383\hfil \cr
\noalign{\smallskip}
\noalign{\hrule}
}}
$$
\section{Relation Between Degeneracies in $q$ and $\mu$}
From the relation $\mu=\theta_e/t_e$, we obtain the
identity $\mu =(\theta_e/\theta_p)(\theta_p/\theta_*)(\theta_*/t_e)$ or
$$\mu\rho q^{1/2}={\theta_*\over t_e}.\qquad \eqn\murhodeg$$
Since the quantities on the right hand side of this equation are observables,
the product on the left hand side must be constant for all allowed parameter
combinations in any given planetary event: $\mu\rho q^{1/2}=$constant. This
equation then establishes a relationship between degeneracies in $q$ and
degeneracies in $\mu$. If the allowed solutions have
different values of $\rho$ but very similar values of $q$,
then we say that the mass ratio is not degenerate. However, it follows from
equation \murhodeg\ that the proper motion $\mu$ then varies inversely as $\rho$ and
therefore that it is degenerate. Similarly, if the range of allowed solutions
all have the same value of $\mu$, then the proper motion is not degenerate
but then $q\propto \rho^{-2}$ and so the mass ratio is degenerate.
This relationship is illustrated by Table 2. The region $\rho \le 0.3$
has well-determined $q$ but degenerate $\mu$, while the region
$\rho \gsim 1$ has well-determined $\mu$ but degenerate $q$.
\chapter{The Chang-Refsdal Lens Approximation}
In order to systematically investigate the role of these degeneracies
and to determine the data that are required to break them, we follow Gould \&
Loeb (1992) and approximate the planetary perturbation as a Chang-Refsdal
lens (Chang \& Refsdal 1979; Schneider, Ehlers, \& Falco 1992).
A Chang-Refsdal lens is a point mass (in this case the planet) superimposed
on a uniform background shear $\gamma$. For any given lensing event, the
value of $\gamma$ is simply the shear due to the lensing star at the
{\it unperturbed} position of the image that is perturbed by the planet.
The evaluation of $\gamma$ is made at the mid-point of the
perturbation. The source
position at this midpoint, $x_d$ is known from the light curve
(see Figs.\ \one-a,b). The associated image positions are $y_{d,\pm}$ with
shears $\gamma_\pm$ given by
$$\gamma_\pm = y_{d,\pm}^{-2},\qquad y_{d,\pm}
={(x_d^2+4)^{1/2}\pm x_d\over 2}.
\eqn\gammadef$$
Thus the shear is known (up to a two-fold ambiguity) simply from the position
of the planetary perturbation on the overall light curve.
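The shear relation is straightforward to evaluate; note in particular that $y_{d,+}y_{d,-}=1$, so the two candidate shears are always reciprocal, $\gamma_-=\gamma_+^{-1}$, as in the figure captions. The sketch below (ours, for illustration) computes both shears from $x_d$.

```python
# Shear at the two unperturbed image positions as a function of the
# source position x_d at the time of the perturbation.
from math import sqrt

def shears(x_d):
    y_plus = (sqrt(x_d ** 2 + 4) + x_d) / 2   # major image, outside ring
    y_minus = (sqrt(x_d ** 2 + 4) - x_d) / 2  # minor image, inside ring
    return y_plus ** -2, y_minus ** -2

# Since y_+ y_- = 1, the two shears are always reciprocal:
gp, gm = shears(0.52)
assert abs(gp * gm - 1.0) < 1e-12
print(round(gp, 2))  # ~0.60, the pairing used for x_d = 0.52
```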
\FIG\three{
Chang-Refsdal magnification contours of a point source as
a function of source position in units of the planet Einstein ring, $\theta_p=q^{1/2}\theta_e$,
for various pairs of shears
$(\gamma_+$ and $\gamma_-=\gamma_+^{-1})$ corresponding to planetary
perturbations
of the major and minor images, respectively.
Magnification contours are calculated including the contribution
of the unperturbed image. Contour pairs are for
$\gamma_+=0.8,$ 0.6, 0.4, and 0.2 in panels (a,b), (c,d), (e,f), and (g,h),
and correspond to source positions at perturbation of
$x_d = 0.22,$ 0.52, 0.95, and 1.79. Super-bold contour is no deviation.
Bold contours are $\delta = 5\%$, 10\%, 20\%, and $\infty$. Non-bold contours
are $-5\%$, $-10\%$, and $-20\%$. Diagonal lines in panels (c) and (d)
represent possible trajectories assuming that the overall light curve shows
$x_d=0.52$ (i.e.\ $\gamma_+=0.6$) and $\beta=0.4$ (i.e.\ $\phi=\sin^{-1}(\beta/x_d)
\sim 50^\circ$).
If the maximum deviation were observed to be $\delta_d=20\%$ (and the
point-source approximation were known to be valid), then the trajectory
must be either B or D.
}
When computing the fractional deviation, $\delta$, of the
Chang-Refsdal lens from the standard Paczy\'nski curve, one always
normalizes relative to the total unperturbed magnification [eq.\ \aofx]
which includes both the image perturbed by the planet and the image that
remains unperturbed (Gould \& Loeb 1992).
\topinsert
\mongofigure{fig3.ps}{6.4}{6.5}{7.0}
{
Chang-Refsdal magnification contours of a point source as
a function of source position in units of the planet Einstein ring, $\theta_p=q^{1/2}\theta_e$,
for various pairs of shears
$(\gamma_+$ and $\gamma_-=\gamma_+^{-1})$ corresponding to planetary
perturbations
of the major and minor images, respectively.
Magnification contours are calculated including the contribution
of the unperturbed image. Contour pairs are for
$\gamma_+=0.8,$ 0.6, 0.4, and 0.2 in panels (a,b), (c,d), (e,f), and (g,h),
and correspond to source positions at perturbation of
$x_d = 0.22,$ 0.52, 0.95, and 1.79. Super-bold contour is no deviation.
Bold contours are $\delta = 5\%$, 10\%, 20\%, and $\infty$. Non-bold contours
are $-5\%$, $-10\%$, and $-20\%$. Diagonal lines in panels (c) and (d)
represent possible trajectories assuming that the overall light curve shows
$x_d=0.52$ (i.e.\ $\gamma_+=0.6$) and $\beta=0.4$ (i.e.\ $\phi=\sin^{-1}(\beta/x_d)
\sim 50^\circ$).
If the maximum deviation were observed to be $\delta_d=20\%$ (and the
point-source approximation were known to be valid), then the trajectory
must be either B or D.
}
\endinsert
The Chang-Refsdal approximation permits an immense conceptual
simplification of the problem. For a point source, all possible
light curves of an event with a given $x_d$ can be represented on a
pair of diagrams, one for $\gamma_+$ and one for $\gamma_-$. All possible
planetary perturbations can therefore be represented by a single-parameter
family of such diagrams.
See Figure \three. For a given event, one knows $\beta$
and $x_d$ from the overall light curve. One can therefore compute
$\gamma_\pm$ using equation \gammadef\ and thereby pick out which two diagrams
are relevant. One also knows the angle $\phi$ at which the source cuts through
the diagram,
$$\sin \phi = {\beta\over x_d}.\eqn\sinphi$$
If, for example, $x_d=0.516$ and $\beta=0.4$, then all possible light curves
are represented by the parallel lines indicated in Figures \three-c,d. If
the light curve is well sampled, it is easy to distinguish between $\gamma_+$
and $\gamma_-$. Suppose that $\gamma_+$ (Fig.\ \three-b) is correct, and
say that the maximum fractional deviation is $\delta_d=20\%$.
Then one can immediately identify the correct curve as being either B or D.
The observed duration of the perturbation relative to that of
the whole event then
sets the scale of the diagram relative to $\theta_e$ and thus determines the
mass ratio.
\FIG\four{
Chang-Refsdal magnification contours of a non-point source as
a function of source position in units of $\theta_p$, for the pair
of shears
$(\gamma_+,\gamma_-)=(0.6,1.67)$ corresponding to planetary perturbations
at source position $x_d=0.52$. The ratios of source radius to planet
Einstein radius are $\rho=0.5,$ 1.0, 1.5, and 2.5. Compare with point-source
case shown in Figs.\ \three-c,d. Contour levels are the same as for Fig.\
\three.
}
\topinsert
\mongofigure{fig4.ps}{6.4}{7.0}{7.0}
{
Chang-Refsdal magnification contours of a non-point source as
a function of source position in units of $\theta_p$, for the pair
of shears
$(\gamma_+,\gamma_-)=(0.6,1.67)$ corresponding to planetary perturbations
at source position $x_d=0.52$. The ratios of source radius to planet
Einstein radius are $\rho=0.5,$ 1.0, 1.5, and 2.5. Compare with point-source
case shown in Figs.\ \three-c,d. Contour levels are the same as for Fig.\
\three.
}
\endinsert
Of course, one does not know {\it a priori} that finite source
effects can be ignored. However, for any $x_d$, all possible events can still
be represented by a single-parameter family of diagrams. The relevant
parameter is, $\rho$,
the ratio of the angular radius of the source to the Einstein radius of the
planet. Hence, it is quite easy to study all possible degeneracies.
See Figure \four.
The drawback of using the Chang-Refsdal approximation is that it is
not exact. Moreover, for any given lensing event, it is straightforward to
construct models that are exact. As we argue below, however, the lack of
exactness has no significant impact in the analysis of degeneracies. On the
other hand, using the exact solution increases the dimensionality of parameter
space and thereby the conceptual complexity of the problem, without any
compensating benefits. We therefore strongly advocate using the Chang-Refsdal
framework. We present a more detailed analysis in the Appendix.
\chapter{Degeneracy Between Major and Minor Images}
Figure \three\ indicates that it should generally be quite easy to
distinguish between perturbations of the major and minor images provided that
there is good temporal coverage: perturbations of the major image have one
major positive excursion, while perturbations of the minor image have two
positive excursions separated by a large negative excursion. However, if the
observations are made from only one site, then good temporal coverage is
far from automatic. The time scale for these excursions is the minimum
of the crossing time of the star, $\theta_*/\mu\sim 10\,$ hours for a giant,
and the crossing time of the planet Einstein ring $\theta_p/\mu\sim
10\,(m/50 M_\oplus)^{1/2}\,$hours. Thus it would be quite possible to observe
a positive excursion (or a significant fraction of it) one night and then
miss any subsequent excursions due to one or two nights of bad weather.
However, if there were three observing sites on different continents, such
large data gaps would be rare.
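The $\sim 10$-hour time scales quoted above can be recovered with representative numbers. In the sketch below, the event time scale ($t_e=20\,$days), the lens mass, and the stellar parameters are illustrative assumptions of ours.

```python
# Rough check of the excursion time scales quoted above: both the
# source crossing time theta_*/mu and the planet Einstein ring
# crossing time theta_p/mu come out near 10 hours.  The event time
# scale, lens mass, and stellar parameters are assumed values.
from math import sqrt

KPC, R_SUN = 3.0857e19, 6.957e8          # m
M_EARTH = 3.0e-6                          # solar masses

t_e = 20 * 24.0                           # event time scale in hours (assumed)
theta_e = 1.55e-9                         # rad, representative stellar
                                          # Einstein radius (assumed)
theta_star = 10 * R_SUN / (8 * KPC)       # 10 R_sun giant at 8 kpc (assumed)

q = 50 * M_EARTH / 0.3                    # 50 Earth-mass planet, 0.3 M_sun lens

t_star = (theta_star / theta_e) * t_e     # = theta_*/mu, since mu = theta_e/t_e
t_planet = sqrt(q) * t_e                  # = theta_p/mu

print(round(t_star, 1), round(t_planet, 1))  # both ~10 hours
```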
\FIG\five{
Major/minor image degeneracy for $\phi=0$, with $\gamma_+=0.61, 0.38, 0.25$,
in panels a, c, and e. The solid curve corresponds to $\gamma_+$, the
dashed curve to $\gamma_-=\gamma_+^{-1}$. Also shown in panels b, d, and f are
the associated fractional color change $\Delta(V-H)$.
}
\topinsert
\mongofigure{fig5.ps}{6.4}{6.5}{6.5}
{
Major/minor image degeneracy for $\phi=0$, with $\gamma_+=0.61, 0.38, 0.25$,
in panels a, c, and e. The solid curve corresponds to $\gamma_+$, the
dashed curve to $\gamma_-=\gamma_+^{-1}$. Also shown in panels b, d, and f are
the associated fractional color change $\Delta(V-H)$.
}
\endinsert
There is nevertheless another possible source of degeneracy between
the major and minor images. If $\phi$ is sufficiently small, then a source
coming close to one of the caustics of the perturbation of the minor image
could cross the star-planet axis at a point far enough from the planet that
the negative excursion along this axis would be very small. In this case,
the light curve might be mistaken for one due to a perturbation
of the major image. In Figure \five, we present examples of this
degeneracy for three different values of $\gamma_+$ for $\phi = 0$,
along with the corresponding color shifts $\Delta(V-H)$ (see \S\ 6.3).
The parameter $\rho$ was chosen in each case such that the
curves for $\gamma_+$ and $\gamma_-$ would be most similar.
It is clear that
the degenerate curves could be distinguished only if precise measurements
could be made at the wings of the perturbation.
Considering now the curves for
$\Delta(V-H)$, there are relatively large ($2-10\%$) fractional
color changes associated with the $\gamma_+$ curves throughout the event,
while the fractional color changes
associated with the $\gamma_-$ curves are always negligible.
This large difference in the magnitude of the fractional color change
between the $\gamma_+$ and $\gamma_-$ curves arises from the fact that
the gradient of the magnification across the face of the star is
much larger in the $\gamma_+$ case, and thus the color effects are
more pronounced (see \S\ 6.3).
Thus by measuring $\Delta(V-H)$, one can
distinguish between the two degenerate cases
even during the peak of the perturbation.
The larger the value of $\phi$, the larger the negative excursion in the
minor-image perturbation, and thus curves
with $\phi > 0$ will be less degenerate than the examples shown
in Figure \five.
\chapter{Degeneracy of Planet Position Relative to Unperturbed Image}
In general, this degeneracy introduces an uncertainty in $y_p$
which is $\Delta y_p\sim 2\alpha\theta_p/\theta_e$ where $\alpha\theta_p$
is the separation between the planet and the unperturbed image at the mid-point
of the perturbation. From Figure \three\ one sees that if perturbations
$\delta_d\sim 5\%$ are detectable, the typical planetary event will have
$\alpha\sim 5$. Hence, if the degeneracy remains unbroken, the fractional
uncertainty is $\Delta y_p/y_p\sim (100 m/M)^{1/2}$. For Jupiter-mass
planets in orbit around M dwarfs, this error is of order unity, while
for Neptune-mass planets it is $\sim 10\%$. On the other hand, the degeneracy
in $q$ is small (see Table 1 and also Appendix).
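As a hedged numerical illustration of the estimate $\Delta y_p/y_p\sim (100\,m/M)^{1/2}$ (the planet masses and the $0.3\,M_\odot$ lens mass below are assumed fiducial values, not quantities fixed by the text):

```python
# Illustrative sketch of Delta y_p / y_p ~ (100 m/M)^(1/2); assumed masses.
import math

M_SUN_IN_M_EARTH = 333000.0   # approximate

def dyp_over_yp(m_earth, M_lens_sun):
    """Fractional uncertainty in y_p from the unbroken position degeneracy."""
    q = m_earth / (M_lens_sun * M_SUN_IN_M_EARTH)
    return math.sqrt(100.0 * q)

jup = dyp_over_yp(318.0, 0.3)   # Jupiter ~ 318 M_earth around a 0.3 M_sun M dwarf
nep = dyp_over_yp(17.1, 0.3)    # Neptune ~ 17.1 M_earth
print(f"Jupiter: {jup:.2f}, Neptune: {nep:.2f}")
```

The two cases come out of order unity and of order $10\%$, respectively, matching the scalings quoted above.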
In this section we consider only point sources. If the planet has
a low mass, so that finite source effects are important, then (as mentioned
above) the difference between the two degenerate solutions is small and
distinguishing between them is relatively less important. In addition,
finite-source effects are more properly addressed in the context of the
mass/finite-source degeneracy discussed in \S\ 6.
\section{Perturbations of the Major Image}
\FIG\six{
Asymmetry factor $P$ (see eqs.\ \pdef\ and \pphidef) for nine values
of $\gamma$,
as a function of maximum deviation, $\delta_d$. The actual asymmetry
of a given light curve, $P_\phi(\gamma,\delta_d)$, is given by
$P_\phi(\gamma,\delta_d)\sim P(\gamma,\delta_d)\cot\phi$. Using this figure
and formula, one can therefore determine whether the degeneracy can be
broken for any given sensitivity threshold.
}
It is clear from Figure \three\ that it is impossible to break the
degeneracy if $\phi=90^\circ$, i.e., if the planetary perturbation takes
place at the peak of the light curve ($x_d=\beta$). If the source crosses
the perturbation structure in the region of the caustic $(\alpha\lsim 1)$
then degeneracy is relatively unimportant and, in any event, is easily
broken (provided $\phi<90^\circ$) due to the richness of the structure
in this region. We therefore
focus on the case $\alpha>1$. From Figure \three, one
sees that there is an asymmetry in the light curve which has opposite senses
depending on whether the source crosses to the left or the right of the
planet. If it passes to the right, then the deviation is more pronounced
at the beginning of the perturbation than at the end, and if it passes to
the left, the deviation is more pronounced at the end.
We define the
asymmetry factor $P_\phi$ as the maximum over all times $t$ of the fractional
difference,
$$ P_\phi(\gamma,\delta_d) \equiv {{\rm max}\{|\delta(t_{0,d} + t)-
\delta(t_{0,d} -t)|\}}\eqn\pdef$$
where $t_{0,d}$ is the mid-point of the perturbation and
$\delta(t)$ is the fractional
deviation as a function of time. To lowest order,
one may approximate
$$P_\phi(\gamma,\delta_d)= P(\gamma,\delta_d)\cot\phi .\eqn\pphidef$$
From Figure \three, one can see that as $\gamma$ increases, the
positive contours of $\delta$ become more stretched along the
planet-star axis, and thus low-peak perturbations occur farther
from the areas of negative excursion. One would therefore expect smaller
values of $P$ for larger
values of $\gamma$.
Figure \six\ shows $P$ as a function of $\delta_d$ for several values of
$\gamma$.
As expected, $P$
generally decreases with increasing $\gamma$. From
Figure \six, one can determine, for a given sensitivity,
whether the degeneracy can be broken for any perturbation.
For example, consider trajectories
such that $\phi \gsim 75^{\circ}$. If one
were sensitive to asymmetries of $P_{\phi} \sim 1\%$, i.e.,
$P \sim 0.04$, then the degeneracy could
be broken only for $\gamma \lsim 0.3$, and then only if
$\delta_d > 0.1$. On the other hand, if $\phi \sim 30^{\circ} (P\sim 0.004)$,
then the degeneracy could be broken for essentially all values of $\gamma$.
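The conversion between a detectable asymmetry $P_\phi$ and the tabulated factor $P$ is a one-line computation; the following sketch simply inverts equation \pphidef\ for the worked example above:

```python
# Sketch: invert P_phi = P * cot(phi) to find the P needed for detection.
import math

def required_P(P_phi, phi_deg):
    """Asymmetry factor P needed so that P_phi is observable at angle phi."""
    return P_phi * math.tan(math.radians(phi_deg))

# For a 1% asymmetry sensitivity and phi = 75 deg, one needs P ~ 0.04,
# as quoted in the text.
print(round(required_P(0.01, 75.0), 3))
```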
\topinsert
\mongofigure{fig6.ps}{6.4}{5.0}{6.5}
{
Asymmetry factor $P$ (see eqs.\ \pdef\ and \pphidef) for nine values
of $\gamma$,
as a function of maximum deviation, $\delta_d$. The actual asymmetry
of a given light curve, $P_\phi(\gamma,\delta_d)$, is given by
$P_\phi(\gamma,\delta_d)\sim P(\gamma,\delta_d)\cot\phi$. Using this figure
and formula, one can therefore determine whether the degeneracy can be
broken for any given sensitivity threshold.
}
\endinsert
\section{Perturbations of the Minor Image}
As we discussed in \S\ 4, to distinguish minor-image from
major-image perturbations, it is necessary to observe the negative excursion
(centered on the $x$-axes of the right-hand side of Fig.\ \three). If these
are observed, then one can easily distinguish the case where the source
transits the right side from the case where it transits the left side of the
$x$-axis (provided $\phi<90^\circ$). Hence, there is no degeneracy of
minor-image perturbations unless the more severe major/minor-image degeneracy
remains unbroken.
\chapter{Continuous Mass Degeneracy of Major Image Perturbations}
By far the potentially most crippling form of parameter degeneracy
is the one that is illustrated in Figure \two\ and is tabulated in Table 2.
The basic character of this degeneracy can be understood analytically using
the following
theorem (Gould \& Gaucherel 1996): if the unperturbed major image
crosses the position of the planet and the source is larger than
the major-image caustic structure, then
$$\delta_d \simeq {2\over \rho^2 A(\gamma)},\qquad A(\gamma) =
{1+\gamma^2\over 1-\gamma^2}.\eqn\massdegone$$
The FWHM of such an event is $t_d\sim 2(\csc\phi)\rho q^{1/2}t_e$. On the
other hand, the FWHM of a low-peak perturbation of a point-source is
$t_d\sim 2(\csc\phi) q^{1/2}t_e$. Suppose that an event has observables
$t_e$, $\beta$, $x_d$, $t_d$, and $\delta_d$. One can form the combination
of observables $Q\equiv [(\beta/x_d)(t_d/t_e)/2]^2$, and can obtain one
possible solution that reproduces the maximum deviation and FWHM:
$$ q\sim Q,\qquad \rho\lsim 1.\eqn\solone$$
However, the solution
$$ q\sim {Q\over \rho_{\rm max}^2}, \qquad \rho \sim \rho_{\rm max}, \qquad \rho_{\rm max}
\equiv \biggl({2\over\delta_d}\,
{1-\gamma^2\over 1+\gamma^2}\biggr)^{1/2},\eqn\soltwo$$
would also reproduce the height and width of the curve. Note that the
ratio of masses for the two solutions is $\rho_{\rm max}^2$. For
$\delta_d\sim 5\%$, this ratio is typically $\gsim 20$ and can be as high
as 40. Thus, unless this degeneracy is broken, any low-peak perturbation
of a point source by a Jupiter-mass planet can masquerade as a Neptune-mass
event, and vice versa. All intervening masses are permitted as well.
Clearly, unless this degeneracy is broken, low-peak perturbations will contain
very little information about mass, and unambiguous detection of low-mass
planets will be impossible. There are three possible paths to breaking this
degeneracy.
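A quick numerical sketch of the size of this degeneracy, using $\rho_{\rm max}^2$ from equation \soltwo\ (the choice $\gamma=0.6$ in the second line is an illustrative assumption):

```python
# Sketch: ratio of mass ratios between the two degenerate solutions,
# rho_max^2 = (2/delta_d) * (1 - gamma^2) / (1 + gamma^2)  [eq. \soltwo].
def mass_degeneracy_factor(delta_d, gamma):
    return (2.0 / delta_d) * (1.0 - gamma**2) / (1.0 + gamma**2)

print(mass_degeneracy_factor(0.05, 0.0))              # -> 40.0 (the quoted maximum)
print(round(mass_degeneracy_factor(0.05, 0.6), 1))    # close to the typical ~20
```

For $\delta_d=5\%$ the factor ranges from $\sim 19$ at $\gamma=0.6$ up to the maximum of 40 at $\gamma=0$, in line with the statement above.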
\FIG\seven{
Fractional deviations $\delta$ for $H$-band light curves.
Similar to Fig.\ \two, except now shown for shears
$\gamma=0.2,$ 0.4, 0.6, and 0.8, and for trajectory of source motion
$\phi=30^\circ$, $45^\circ$, $60^\circ$, and $90^\circ$. (Corresponding curves
for $\phi\rightarrow -\phi$ can be found by reversing the $x$-axes.)\ \
In each curve, the maximum
deviation is $\delta_d=10\%$ and the FWHM is $t_d=0.06\,t_e$.
}
\FIG\eight{
The values of $(q/q_0)^{-1}$ (bold lines) and $(\mu/\mu_0)^{-1}$
as functions of
$\rho$ for each set of degenerate curves in Fig.\ \seven. The
fiducial values $q_0$ and $\mu_0$ are associated with the curve
with $\rho=0.1$.
}
\FIG\nine{
Fractional color change $\Delta(V-H)$ for light curves shown in Fig.\ \seven.
}
\topinsert
\mongofigure{fig7.ps}{7.5}{8.5}{7.5}
{
Fractional deviations $\delta$ for $H$-band light curves.
Similar to Fig.\ \two, except now shown for shears
$\gamma=0.2,$ 0.4, 0.6, and 0.8, and for trajectory of source motion
$\phi=30^\circ$, $45^\circ$, $60^\circ$, and $90^\circ$ (left to right).
Corresponding curves
for $\phi\rightarrow -\phi$ can be found by reversing the $x$-axes.\ \
In each curve, the maximum
deviation is $\delta_d=10\%$ and the FWHM is $t_d=0.06\,t_e$.
}
\endinsert
\topinsert
\mongofigure{fig8.ps}{7.5}{7.5}{7.5}
{
The values of $(q/q_0)^{-1}$ (bold lines) and $(\mu/\mu_0)^{-1}$
as functions of
$\rho$ for each set of degenerate curves in Fig.\ \seven. The
fiducial values $q_0$ and $\mu_0$ are associated with the curve
with $\rho=0.1$.
}
\endinsert
\topinsert
\mongofigure{fig9.ps}{9.0}{8.7}{7.5}
{
Fractional color change $\Delta(V-H)$ for light curves shown in Fig.\ \seven.
}
\endinsert
\section{Proper Motion Measurement}
If the proper motion $\mu$ of the lens relative to the source were
measured, then one could partially break the degeneracy between equations
\solone\ and \soltwo. The angular size of the source $\theta_*$ would be
known from its dereddened color and magnitude and Stefan's law. The time
for the source to cross the perturbation $x$-axis, $t_c\sim 2\theta_*\csc\phi/\mu$,
would then also be known. If one found $t_c<t_d$, this would imply that
$t_d$ was dominated by the size of the planet Einstein ring, not the source.
Hence, the solution \solone\ would be indicated. On the other hand, if
$t_c\sim t_d$, one would
know only that the solution \solone\ was not correct, and
that the mass ratio lay somewhere in the interval $\rho_{\rm max}^{-2}Q\leq q<Q$.
See Table 2.
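The comparison of $t_c$ with $t_d$ can be sketched numerically as follows; the source size, proper motion, and observed FWHM below are assumed fiducial values, not measurements from the text:

```python
# Sketch: the source-crossing diagnostic t_c ~ 2 theta_* csc(phi) / mu,
# compared with a hypothetical observed FWHM t_d.  All inputs are assumptions.
import math

def t_c_hours(theta_star_uas, mu_mas_yr, phi_deg):
    """Source crossing time of the perturbation x-axis, in hours."""
    mu_uas_hr = mu_mas_yr * 1e3 / (365.25 * 24.0)   # mas/yr -> micro-arcsec/hr
    return 2.0 * theta_star_uas / (math.sin(math.radians(phi_deg)) * mu_uas_hr)

t_c = t_c_hours(6.0, 5.0, 90.0)   # giant source, typical proper motion (assumed)
t_d = 15.0                        # hypothetical observed FWHM, hours
print(f"t_c ~ {t_c:.0f} hr; t_c < t_d: {t_c < t_d}")
```

If $t_c<t_d$, the point-source solution \solone\ is indicated; otherwise only the interval of mass ratios can be quoted.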
Proper motions can be measured if the lensing star transits or nearly
transits the source or by imaging the split image of the source using
infrared interferometry. See Gould (1996) for a review. Another approach
would be simply to wait a few decades and measure the angular separation of
the lens and source. Since typical proper motions are 5 mas yr${}^{-1}$, the
separation should be $\sim 0.1''$ after a few decades. Unfortunately, most
lenses are probably fainter than $M_I=10$, while typical giant sources are
$M_I\sim 0$, and even turnoff stars are $M_I\sim 3$. Thus it would be
difficult to image the lens until it was quite well separated from the source
at which point it might be hard to distinguish it from random field stars.
We explore this possibility further in \S\ 8.
\section{Detailed Light Curves}
Although the parameter combinations \solone\ and \soltwo\ reproduce
the gross features of the perturbation (peak and FWHM) equally well, the
detailed structures of the light curves are different. Figure \two\
illustrates the principal difference for elongated perturbation
structures, in this case $\gamma=0.6$. When $\rho\sim \rho_{\rm max}$, the wings
show a dip because the source passes over the caustic which is surrounded
by regions of negative perturbation (see Figs.\ \three-c and \four). On
the other hand, when $\rho\lsim 1$ the approach to the peak is smooth
because the source is passing over the smooth outer portion of the ridge
seen in Figure \three-c. From Figure \two\ it is clear that
if these wing structures could be resolved at the $\sim 1\%$ level, then
the degeneracy in mass could be reduced from the factor $\sim 15$ seen
in Table 2, to a factor $\sim 1.5$.
Figure \seven\ is an array of 16 diagrams each similar to Figure \two,
but with different values of $\gamma$ (0.2, 0.4, 0.6, and 0.8) and different
angles of source motion $\phi$ ($30^\circ$, $45^\circ$, $60^\circ$,
and $90^\circ$).
It is clear
that it is easier to break the degeneracy as $\gamma$
increases and as $\phi$ decreases. For $\gamma \gsim 0.4$
the uncertainty in $q$ could be significantly reduced if
the wing structures could be resolved at the $\sim 1\%$ level.
For $\gamma \lsim 0.2$, however,
distinguishing between the degenerate curves would
require an accuracy $\ll 1\%$.
Figure \eight\ shows the values of $q/q_0$ and $\mu/\mu_0$ as a function
of $\rho$ for each of the combinations of $\phi$ and $\gamma$ in Figure
\seven. Note that larger values of $\rho_{\rm max}$ are allowed for
smaller values of $\gamma$, and thus the range of acceptable values of
$q$ is largest for small $\gamma$. This is especially disturbing in
light of the fact that the degenerate curves are most similar for
small $\gamma$.
\section{Optical/Infrared Colors}
A major shortcoming of the detailed-light-curve method for
breaking the degeneracy is that it depends critically on obtaining accurate
observations during two brief intervals covering the wings of the light curve.
As a practical matter, it may be difficult to obtain such coverage for
a variety of reasons. Once the event is noticed, observatories that are
dedicated to the planet search can engage in frequent monitoring and thereby
obtain very accurate light curves. However, it is quite possible, indeed
likely, that the planetary perturbation will not be recognized in time for
intensive monitoring of the first wing. Sometimes observation of the first
wing is crucial to breaking the degeneracy. Moreover, the second wing will
likely be observable from at most one observatory which could be affected
by bad weather.
Optical/infrared color measurements by contrast yield
degeneracy-breaking information throughout the event. The reason is that
by the principle of equivalence, lensing of a point source is achromatic. If
lensing introduces color changes, the lens must be resolving the source
(Witt 1995; Loeb \& Sasselov 1995). The best opportunity to observe this
effect is by looking for optical/infrared color differences
(Gould \& Welch 1996) because giant
stars are more limb-darkened in the optical than in the infrared
(Manduca, Bell \& Gustafsson 1977).
Thus, if the planet Einstein ring is larger than the source (and the low peak
is due to the source passing over regions of small perturbation), the color
changes will be very small. On the other hand, if $\theta_*>\theta_p$
(and the low peak occurs when the large source passes over the caustic), the
caustic structure will resolve the differential limb-darkening of the star
and the color changes will be more pronounced.
Figure \nine\ shows the $V-H$ colors for the same parameters as are used for
the $H$-band curves in Figure \seven. The magnitude of the
fractional color change is largest for smallest $\gamma$. This is
fortunate, since, as discussed in \S\ 6.2, the degeneracy is most severe
for small $\gamma$, both in terms of the similarity in the light curves,
and in the range of allowed values of $q$. It is therefore essential
to have optical/infrared color measurements to ensure that the
continuous degeneracy can be broken for all possible values of $\gamma$.
\chapter{Continuous Degeneracy of Minor Image Perturbations}
\FIG\ten{a) Twelve light curves with $\gamma_-=1.67$, $\phi=90^\circ$,
all with maximum
deviation $\delta_d=-10\%$ and FWHM
$t_d = 0.06\,t_e$. The ratios of source radius to planet Einstein ring range
from $\rho=0.1$ to $\rho=1.83$, the largest source radius consistent
with this maximum deviation. The corresponding relative
values of $q=m/M$, and the
relative proper motion, $\mu/\mu_0$, are given in Table 4.
Source radii of $\rho = 1.6, 1.7, 1.83,$ and $1.80$ correspond
to the bold, dashed, bold dotted, and bold dashed curves.
b) The fractional color change $\Delta(V-H)$ for the ten curves in panel (a).
}
There is also a continuous degeneracy for minor-image perturbations,
but the degeneracy is considerably
less severe than for major-image perturbations because
the caustic structure is qualitatively different.
As with major image perturbations, the basic character of the minor
image degeneracy can be understood analytically. Consider the following
theorem (Gould \& Gaucherel 1996): if the unperturbed minor image
crosses the position of the planet and the source encloses
both minor-image caustics, then
$$
\delta_d \simeq -{{f(\rho,\gamma)}\over \rho^4A(\gamma)}
\rightarrow -{2\over{\rho^4 A(\gamma)}},\qquad A(\gamma) =
{\gamma^2+1\over \gamma^2-1},\eqn\massdegtwo
$$
where $f(\rho,\gamma)=[(1/2 + \rho^{-2})^2 -\gamma^2\rho^{-4}]^{-1/2}$,
and the limit applies for $\rho \gg \gamma^{1/2}$.
That is, in contrast to the major-image perturbation [cf.\ eq.\ \massdegone],
the minor-image perturbation goes to zero rapidly for large sources.
For minor image perturbations, the caustics are located at
(Schneider et al.\ 1992)
$$d_{\rm{caus}} \sim 2(\gamma-1)^{1/2}. \eqn\caus$$
Thus the source must have $\rho \gsim 2(\gamma-1)^{1/2}$
to enclose both caustics. If the source is significantly
larger than this, the perturbation will be negligibly small.
Hence we can restrict attention to sources $\rho < d_{\rm{caus}}$.
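The limiting behavior of equation \massdegtwo\ can be verified numerically; this sketch (with an illustrative choice $\gamma=1.67$) confirms that $f\rightarrow 2$ for large $\rho$ and that $\delta_d$ falls off roughly as $\rho^{-4}$:

```python
# Sketch: numerical check of eq. \massdegtwo for minor-image perturbations.
def f(rho, gamma):
    """f(rho,gamma) = [(1/2 + rho^-2)^2 - gamma^2 rho^-4]^(-1/2)."""
    return ((0.5 + rho**-2)**2 - gamma**2 * rho**-4) ** -0.5

def delta_d(rho, gamma):
    """Maximum deviation for a source enclosing both minor-image caustics."""
    A = (gamma**2 + 1.0) / (gamma**2 - 1.0)
    return -f(rho, gamma) / (rho**4 * A)

gamma = 1.67                                  # illustrative shear (assumed)
print(round(f(10.0, gamma), 3))               # approaches the limiting value 2
ratio = delta_d(4.0, gamma) / delta_d(2.0, gamma)
print(round(ratio, 4))                        # roughly (2/4)^4 = 1/16
```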
For minor-image perturbations, $t_d$ is the FWHM of
the negative deviation.
For a low-peak point source perturbation ($\alpha \gsim 2$),
this duration scales as the distance between the contours of $\delta = 0$
at $\alpha$. (See Fig.\ \three).
For finite sources of increasing $\rho$, the trajectories must move
closer to the center to maintain the observed value of $\delta_d$.
(See Fig.\ \four). However, until $\rho$ becomes so large as to cover
the caustics, the positions of the $\delta=0$ contours basically
do not change. (Compare Fig.\ \three-d with Fig.\ \four). Since
these contours are approximately horizontal, $t_d$ is not greatly
influenced by changes in $\rho$. Finally, since the largest permitted
source has $\rho \sim d_{\rm{caus}}$, which is also approximately
equal to the separation of the $\delta=0$ contours, $t_d$ remains roughly
the same even for this extreme case (see Fig.\ \three).
The small degeneracy that does exist arises from the difference
between this extreme case on the one hand and the smaller sources
and point sources on the other. Examining Figure \three, we can expect that the
degeneracy will be somewhat larger for larger values of $\gamma$, since
the contours of $\delta=0$ become less horizontal as $\gamma$ increases.
In Table 3 we give the degeneracy in the inferred values of $q$ for
$\delta_d = -10\%$ and $-5\%$, and for several
values of $\gamma$. Note that
the largest degeneracy in $q$ is only a factor of $\sim 4$.
$$\vbox{\halign{#\hfil\quad&\hfil#\quad&\hfil#
\quad&\hfil#\hfil\quad&\hfil#\hfil\quad&\hfil#\hfil\cr
\multispan{3}{\hfil TABLE 3 \hfil}\cr
\noalign{\medskip}
\multispan{3}{\hfil
Continuous Minor Image Degeneracy\hfil}\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil \hfil&\hfil\hfil&\hfil mass ratio \hfil\cr
\hfil \hfil&\hfil\hfil&\hfil degeneracy\hfil\cr
\hfil $\delta_d$\hfil&\hfil $\gamma$\hfil&\hfil $q_{\rm{max}}/q_{\rm{min}}$ \hfil\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil-10\%\hfil&\hfil1.25\hfil&\hfil1.45\hfil\cr
\hfil \hfil&\hfil1.43\hfil&\hfil1.09\hfil\cr
\hfil \hfil&\hfil1.67\hfil&\hfil1.68\hfil\cr
\hfil \hfil&\hfil2.00\hfil&\hfil3.82\hfil\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil -5\%\hfil&\hfil1.25\hfil&\hfil1.66\hfil\cr
\hfil \hfil&\hfil1.43\hfil&\hfil1.16\hfil\cr
\hfil \hfil&\hfil1.67\hfil&\hfil1.22\hfil\cr
\hfil \hfil&\hfil2.00\hfil&\hfil1.67\hfil\cr
\hfil \hfil&\hfil2.50\hfil&\hfil2.34\hfil\cr
\noalign{\smallskip}
\noalign{\hrule}
}}
$$
\FIG\ell{Twelve light curves with $\gamma_-=0.6^{-1}$, $\phi=60^\circ$,
all with maximum deviation $\delta_d=-10\%$ and FWHM
$t_d = 0.06\, t_e$. All other parameters are the same
as Fig. \ten, and are given in Table 4. Curves are
as in Fig. \ten.
}
\topinsert
\mongofigure{fig10.ps}{6.4}{7.5}{6.5}
{
a) Twelve light curves with $\gamma_-=1.67$, $\phi=90^\circ$,
all with maximum
deviation $\delta_d=-10\%$ and FWHM
$t_d = 0.06\,t_e$. The ratios of source radius to planet Einstein ring range
from $\rho=0.1$ to $\rho=1.83$, the largest source radius consistent
with this maximum deviation. The corresponding relative
values of $q=m/M$, and the
relative proper motion, $\mu/\mu_0$, are given in Table 4.
Source radii of $\rho = 1.6, 1.7, 1.83,$ and $1.80$ correspond
to the bold, dashed, bold dotted, and bold dashed curves.
b) The fractional color change $\Delta(V-H)$ for the ten curves in panel (a).
}
\endinsert
\topinsert
\mongofigure{fig11.ps}{6.4}{4.5}{6.5}
{
Twelve light curves with $\gamma_-=0.6^{-1}$, $\phi=60^\circ$,
all with maximum deviation $\delta_d=-10\%$ and FWHM
$t_d = 0.06\, t_e$. All other parameters are the same
as Fig. \ten, and are given in Table 4. Curves are
as in Fig. \ten.
}
\endinsert
Figure \ten a shows twelve light curves for $\gamma = 1.67$ and
$\phi = 90^\circ$, all with
maximum negative perturbation $\delta_d = -10\%$, and all with the
same FWHM. Table 4 gives the inferred values of $q$ and $\mu$ for
each curve, relative to the fiducial
values $q_0$ and $\mu_0$ associated with $\rho=0.4$.
Note that the degeneracy in the derived mass ratios is only a
factor of $\sim 1.5$. Also note that the inferred mass ratios of
the first nine curves agree to $\sim 4 \%$. Thus to resolve the small
degeneracy in $q$, one only needs to distinguish between the
last four curves. From Figure \ten a it is clear that this would be possible
if one could resolve the positive perturbation structures at the $\sim 1\%$ level.
Furthermore, the situation presented in Figure \ten, for which $\phi = 90^\circ$,
is the worst-case scenario. Due to the structure of the caustics of
minor-image perturbations, trajectories with $\phi < 90^\circ$ display
marked asymmetry about $t_{0,d}$, excepting trajectories with $\alpha \sim 0$, which are
nearly symmetric. This enables one to distinguish between curves
with $\alpha \gsim 1$ and $\alpha \sim 0$ more easily when $\phi < 90^\circ$. This is demonstrated in
Figure \ell, which shows twelve light curves with the same parameters as
Figure \ten, except that now $\phi = 60^\circ$. Comparing Figs.\ \ten\ and \ell,
it is clear that the curves are appreciably less degenerate for
$\phi=60^\circ$ than for $\phi=90^\circ$. From Figure \ten b, we see
that the magnitude of the fractional color change
for perturbations with $\gamma=1.67$ is always small, $\Delta(V-H) \lsim 1\%$.
From numerical calculations, we find that $\Delta(V-H) \lsim 1\%$ regardless
of the value of $\gamma$. Thus, in contrast to the case of major-image
perturbations, optical/infrared colors are not useful in resolving the
degeneracy of minor-image perturbations.
$$\vbox{\halign{#\hfil\quad&\hfil#\quad&\hfil#
\quad&\hfil#\hfil\quad&\hfil#\hfil\quad&\hfil#\hfil\cr
\multispan{4}{\hfil TABLE 4 \hfil}\cr
\noalign{\medskip}
\multispan{4}{\hfil Degenerate Parameter Values:
Continuous Minor Image\hfil}\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil impact \hfil&\hfil dimensionless \hfil&\hfil
planet/star \hfil&\hfil proper motion \hfil\cr
\hfil parameter\hfil&\hfil source radius\hfil&\hfil
mass ratio\hfil&\hfil \hfil\cr
\hfil $\alpha$\hfil&\hfil $\rho$ \hfil&\hfil $q/q_0$ \hfil&\hfil $\mu/\mu_0$ \hfil\cr
\noalign{\smallskip}
\noalign{\hrule}
\noalign{\smallskip}
\hfil4.32\hfil&\hfil0.10\hfil&\hfil0.993\hfil&\hfil4.014\hfil \cr
\hfil4.30\hfil&\hfil0.20\hfil&\hfil1.016\hfil&\hfil1.984\hfil \cr
\hfil4.25\hfil&\hfil0.40\hfil&\hfil1.000\hfil&\hfil1.000\hfil \cr
\hfil4.21\hfil&\hfil0.60\hfil&\hfil1.006\hfil&\hfil0.665\hfil \cr
\hfil4.00\hfil&\hfil0.80\hfil&\hfil1.020\hfil&\hfil0.495\hfil \cr
\hfil3.70\hfil&\hfil1.00\hfil&\hfil1.033\hfil&\hfil0.394\hfil \cr
\hfil3.50\hfil&\hfil1.20\hfil&\hfil1.017\hfil&\hfil0.331\hfil \cr
\hfil3.20\hfil&\hfil1.40\hfil&\hfil0.994\hfil&\hfil0.287\hfil \cr
\hfil2.60\hfil&\hfil1.60\hfil&\hfil1.004\hfil&\hfil0.249\hfil \cr
\hfil2.00\hfil&\hfil1.70\hfil&\hfil1.093\hfil&\hfil0.225\hfil \cr
\hfil1.00\hfil&\hfil1.83\hfil&\hfil1.591\hfil&\hfil0.173\hfil \cr
\hfil0.00\hfil&\hfil1.80\hfil&\hfil1.628\hfil&\hfil0.174\hfil \cr
\noalign{\smallskip}
\noalign{\hrule}
}}
$$
\chapter{From Mass Ratios to Planet Masses}
If the various degeneracies described in this paper are broken, one
generally recovers two planetary-system parameters from
a planetary microlensing event: $q$ and $y_p$. While $q$ is of some interest
in its own right, $y_p$ is not. The quantities one would most like to know
are the planet mass $m=q M$ and the physical projected separation
$a_p = r_e y_p$. One could take a purely statistical approach to estimating
these quantities: given the measured time scale $t_e$ of the event and a
plausible model of the distribution and velocities of lenses and sources
along the line of sight, $r_e$ and $M$ can be estimated to a factor of 3.
In this section, we discuss what further constraints might be
obtained on $M$ and $r_e$ in order to determine $m$ and $a_p$.
The single most powerful method of acquiring additional information
would be to launch a parallax satellite
(Refsdal 1966; Gould 1995a; Gaudi \& Gould 1997) which would routinely measure
$\tilde r_e\equiv ({D_{\rm os}}/{D_{\rm ls}})r_e$ and often measure the direction of motion
as well. This information would, by itself, narrow the uncertainty in the
mass to a factor $\sim 1.7$ (see Han \& Gould 1995, especially figure 7).
However, if the proper motion $\mu$ were also measured, this would yield
a complete solution of the lensing geometry including both $M$ and $r_e$
(e.g.\ Gould 1996). In general, one expects to measure $\mu$ only in
$\sim 20\%$ of giant events even with relatively aggressive observations
(Gould 1996). However, for events with planetary perturbations, $\mu$
can be measured much more frequently. Recall from \S\ 6 that
for giant sources, major image perturbations,
and planetary masses $m\lsim 100\,M_\oplus$, the planet
usually resolves the source (if it is detected at all) and that in the
process of resolving the resulting degeneracy, one measures $\mu$.
Even when $\theta_*<\theta_p$, the source will sometimes cross a caustic
in which case $\mu$ can be measured. Finally, for $\theta_*<\theta_p$ one
can obtain a lower limit $\mu>\mu_{\rm min}$ based on the lack of detection of
finite-source effects. Since the mass is given by
$M = (c^2/4 G)\tilde r_e t_e\mu$, and since $\tilde r_e$ and $t_e$ are
measured, this gives a lower limit on the mass (Gould 1995b).
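As an illustrative sketch of the mass solution $M=(c^2/4G)\tilde r_e t_e\mu$ (all fiducial inputs below are assumptions chosen to represent a typical bulge event, not measured values):

```python
# Sketch: lens mass from M = (c^2 / 4G) * r_e_tilde * t_e * mu,
# combining a satellite parallax (r_e_tilde) with a proper motion (mu).
import math

C = 2.998e8        # speed of light, m/s
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
AU = 1.496e11      # astronomical unit, m
MAS = math.radians(1.0 / 3600e3)   # 1 mas in radians
M_SUN = 1.989e30   # kg

r_e_tilde = 10.0 * AU               # projected Einstein radius (assumed)
t_e = 20.0 * 86400.0                # event time scale, s (assumed 20 days)
mu = 5.0 * MAS / (365.25 * 86400)   # proper motion, rad/s (assumed 5 mas/yr)

M = (C**2 / (4.0 * G)) * r_e_tilde * t_e * mu
print(f"M ~ {M / M_SUN:.2f} M_sun")
```

With these fiducial inputs the solution lands at a few tenths of a solar mass, i.e., a typical M-dwarf lens.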
However, it is much more difficult to resolve the finite source
degeneracy for minor-image perturbations. Even though
(or rather, because) the measurement
of the mass ratio, $q$, is not seriously hampered by this degeneracy,
the proper motion $\mu$ is poorly determined. See \S\ 2.3. Thus, it would be
necessary to measure $\mu$ using other methods. See \S\ 6.1.
We now address several questions related to one of those methods:
direct imaging of the source and lensing star several
decades after the event. For definiteness, we suppose that the measurement
is made after 20 years. The expected separation is $\sim 0.\hskip-2pt ''1$,
but could plausibly be $\sim 0.\hskip-2pt ''3$. At Baade's Window, the
expected number of stars $M_I<10$ inside this radius is $\sim 0.5$
(Light et al.\ 1996). Thus one would not be overwhelmed with candidates.
On the other hand, the great majority of lensing events are almost certainly
due to objects that are fainter than $M_I=10$ simply because one does not
come close to accounting for the observed events from the observed ($M_I<10$)
stars alone (Han 1996). Thus, to positively identify a candidate star
as the lens, one needs additional information. A parallax satellite could
provide two pieces of corroborating data. First, the measured $\tilde r_e$
together with the proper motion inferred from the candidate-source angular
separation would give a mass and distance to the lens (Gould 1995b). One could
then predict an apparent magnitude and see if it agreed with that of the
candidate. Second, if the parallax measurement gave the angle of motion,
one could check this against the direction of the source-candidate
separation vector. In addition, the candidate's inferred proper motion must
satisfy the lower limit derived from lack of finite-source effects as
discussed above. Finally, one could wait another decade or so to see if the
direction of the candidate's proper motion was indeed away from the source.
{\bf Acknowledgements}:
This work was supported in part by grant AST 94-20746 from the NSF,
and in part by grant NAG5-2864 from NASA.
\APPENDIX{A}{Justification for the Chang-Refsdal Approximation}
What errors are introduced by the Chang-Refsdal approximation?
The unperturbed image structure
consists of two images separated by $>2\theta_e$. The planet, with an
effective sphere of influence $\sim \theta_p\ll \theta_e$, can have a major
effect on at most one of these. For definiteness, say this is the major
image. In the Chang-Refsdal approximation, the minor image is then treated as
being completely unaffected by the presence of the planet. In fact,
the planet will change the shear at the minor image by
${\cal O}
(\theta_p/\theta_e)$ and therefore change the magnification by a similar
amount. However, what is directly of interest for analyzing the planetary
perturbation is not the absolute difference in magnification with and without
the planet. Rather, it is the {\it change} in this difference over the
lifetime of the planetary perturbation. Hence, the net effect is
${\cal O}[(\theta_p/\theta_e)^2]$, i.e., of higher order
than the effects being analyzed.
We now turn to the errors in the Chang-Refsdal estimate of the
magnification of the perturbed image. In general, the perturbed image is
split by the planet into two or four images. For each such image, $i$, the
shear due to the parent star is $\gamma_i$. If this value were exactly
equal to $\gamma$, the shear at the position of the unperturbed image, then
the Chang-Refsdal approximation would be exact. Typically,
$\Delta\gamma_i\equiv \gamma_i -\gamma$ is small,
$\Delta\gamma_i/\gamma\sim {\cal O}(\theta_p/\theta_e)$,
so one expects that the errors induced by the approximation are small.
We focus first on perturbations of the major image. Let
$\Delta\theta$ be the separation between the planet and the unperturbed image
and define $\alpha=\Delta\theta/\theta_p$. Consider first the case
$\alpha\gg 1$ which is important when $\gamma\gsim 0.5$ because the
magnification contours then become significantly elongated (see Fig.\ \three).
The image is then split into two images, one very close to planet and the
other very close to the unperturbed image. For the image close to the planet,
the shear due to the parent star may be significantly misestimated,
$\Delta\gamma_i/\gamma\sim \alpha\theta_p/\theta_e$. However, for this image,
the total shear is dominated by the planet and is ${\cal O}(\alpha^2)$,
so the true fractional error is only $\sim \alpha^{-1}\theta_p/\theta_e$.
Moreover, the magnification of this image is small, ${\cal O}(\alpha^{-4})$,
so the total error induced by the approximation is
$\sim \alpha^{-5}\theta_p/\theta_e$ and is completely negligible. The other
image is displaced by $\sim\alpha^{-1}\theta_p$ from the unperturbed image,
so $\Delta\gamma_i/\gamma\sim \alpha^{-1}\theta_p/\theta_e$ which induces
a similar small change in magnification. Recall from Figure \three-c that the
source trajectory is determined up to a two-fold degeneracy from the maximum
magnification. Since the sign of the image displacement is different
for the two allowed solutions, the error in estimating the magnification
structure could result in two types of errors. First, there
is an error in the planet star separation, but this is only
$\sim\alpha^{-1}\theta_p$ and is therefore lower by $\alpha^{-1}$ than the
basic degeneracy indicated in Figure \three-c. Second, there is an error
in the estimate of $q$ and, in fact, a degeneracy, because the error has
opposite sign for the two allowed solutions. This could in principle be
significant because, within the Chang-Refsdal
framework, the two allowed solutions indicated in Figure \three-c\ have
identical values of $q$, and this effect is therefore the lowest order
degeneracy. However, the mass ratio is estimated from the FWHM of the light
curve which is only a weak function of position along the elongated
magnification contours. Moreover, the misestimate of that position is small.
We therefore estimate a fractional mass degeneracy of
$\Delta q/q\sim \alpha^{-2} q^{1/2}$.
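This last estimate is easy to evaluate for representative parameters. The numbers below are hypothetical (a moderately split image with $\alpha=3$, and Jupiter- and Earth-like mass ratios), chosen only to show that the $q^{1/2}$ factor keeps the degeneracy well below the percent level in both cases:

```python
def mass_degeneracy(alpha: float, q: float) -> float:
    """Order-of-magnitude estimate Delta q / q ~ alpha**(-2) * q**0.5."""
    return alpha ** -2 * q ** 0.5

# hypothetical numbers, chosen only to exhibit the scaling:
jupiter = mass_degeneracy(alpha=3.0, q=1e-3)   # Jupiter-like mass ratio
earth = mass_degeneracy(alpha=3.0, q=3e-6)     # Earth-like mass ratio
print(f"{jupiter:.1e}  {earth:.1e}")
```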
For $\alpha\lsim 1$ and sources that are small compared to the caustic
structure (seen e.g., in Fig.\ \three), the situation is similar to that of
caustic-crossing binary-lens events. The light curves are highly
non-degenerate, and one determines not only $q$ and $x_p$, but also $\rho$.
From the standpoint of understanding degeneracies, the important case is
when the source is of order or larger than the caustic. Here, there are
roughly equally magnified images displaced by roughly $\theta_p$ on either
side of the planet. Hence, the lowest order errors cancel and the next
order errors are $\sim {\cal O}[(\theta_p/\theta_e)^2]$, and can therefore
be ignored.
There is one exception to this conclusion. In the argument given above,
we implicitly assumed that the planetary perturbation would
be significant only over an interval of source motion $\sim\theta_p$. This
assumption fails when the perturbation structure is elongated
$(\gamma\gsim 0.5$) {\it and} when the angle of source motion is low
($\sin\phi=\beta/x_d\ll 1$). In this case, the local shear is no longer
well approximated by the shear at the center of the perturbation. A proper
calculation would then require that the shear be recalculated at every point
along the source trajectory, holding
the planet fixed. This was the approach of
Gould \& Loeb (1992) and the resulting magnification for fixed planet position
can be seen in their Figure 3. [In the present work, by contrast, what is
held fixed in constructing Figs.\ \three\ and \four\ is the observable:
the shear at the mid-point of the perturbation.]\ \ As can be seen by comparing
Figure 3-c of Gould \& Loeb (1992) and Figure \three-c of the present work,
for Jupiter mass planets the difference in contours can be significant.
However, there are three points to note. First, such events are rare
both because the conditions ($\gamma\gsim 0.5$, $\beta\ll x_d$) together imply
$\beta\lsim 0.2$ and because the elongated contours are encountered
``edge on'', so the cross section is only $\sim \theta_p/\theta_e$. Second,
the effect is proportional to $q^{1/2}$ and so would not be significant for,
e.g., Earth-mass planets. Third, the nature of the effect is to provide
information to break degeneracies in cases when the Chang-Refsdal approximation
would lead one to believe that there is no information. In brief, in certain
rare cases, the Chang-Refsdal approximation leads one to underestimate the
amount of information available.
For perturbations of the minor image, the two principal sources of
degeneracy are first, confusion of the two caustic peaks with each other and
second, confusion of one of these peaks with a perturbation of the major
image. Because these peaks are offset in the direction perpendicular to
the star-planet axis, the error in their location is
${\cal O}[(\theta_p/\theta_e)^2]$ and hence of higher order than their
separation. As in the case of the major image, there are certain rare
events with $\sin\phi\ll 1$ for which the Chang-Refsdal approximation makes
the degeneracy seem somewhat worse than it is.
\endpage
\Ref\alard{Alard, C.\ 1996, in Proc. IAU Symp.\ 173,
Astrophysical Applications of Gravitational Lensing, p.\ 214,
(Eds.\ C.\ S.\ Kochanek,
J.\ N.\ Hewitt), Kluwer Academic Publishers}
\Ref\albrow{Albrow M., et al.\ 1996, in Proc. IAU Symp.\ 173,
Astrophysical Applications of Gravitational Lensing, p.\ 227
(Eds.\ C.\ S.\ Kochanek,
J.\ N.\ Hewitt), Kluwer Academic Publishers}
\Ref\Alcock{Alcock, C., et al.\ 1996, ApJ, submitted}
\Ref\Ansari{Ansari, R., et al.\ 1996, A\&A, in press}
\Ref\bandr{Bennett, D., \& Rhie, S.\ H.\ 1996, ApJ, in press}
\Ref\bandf{Bolatto, A.\ D., \& Falco, E.\ E.\ 1994, ApJ, 436, 112}
\Ref\cr{Chang, K., \& Refsdal, S.\ 1979, Nature, 282, 561}
\Ref\gaudi{Gaudi, B.\ 1996, in preparation}
\Ref\gandg{Gaudi, B., \& Gould, A.\ 1997, ApJ, 477, 000}
\Ref\goulda{Gould, A.\ 1995a, ApJ, 441, L21}
\Ref\gouldb{Gould, A.\ 1995b, ApJ, 447, 491}
\Ref\gouldc{Gould, A.\ 1996, PASP, 108, 465}
\Ref\gauch{Gould, A., \& Gaucherel, C.\ 1996, ApJ, 477, 000}
\Ref\gandl{Gould, A., \& Loeb, A.\ 1992, ApJ, 396, 104}
\Ref\gw{Gould, A., \& Welch, D.\ 1996, ApJ, 464, 212}
\Ref\ls{Loeb, A., \& Sasselov, D.\ 1995, ApJ, 449, 33L}
\Ref\mbg{Manduca, A., Bell, R.\ A., \& Gustafsson, B.\ 1977, A\&A, 61,809}
\Ref\mandp{Mao, S., \& Paczy\'nski, B.\ 1991, ApJ, 374, 37}
\Ref\pac{Paczy\'nski, B.\ 1986, ApJ, 304, 1}
\Ref\pratt{Pratt et al.\ 1996, in Proc. IAU Symp.\ 173,
Astrophysical Applications of Gravitational Lensing, p.\ 221
(Eds.\ C.\ S.\ Kochanek,
J.\ N.\ Hewitt), Kluwer Academic Publishers}
\Ref\Refs{Refsdal, S.\ 1964, MNRAS, 128, 295}
\Ref\sef{Schneider, P., Ehlers, J., \& Falco, E.\ E.\ 1992, Gravitational
Lenses (Berlin: Springer-Verlag)}
\Ref\udal{Udalski, A., et al.\ 1994, Acta Astronomica, 44, 165}
\Ref\witt{Witt, H.\ 1995, ApJ, 449, 42}
\refout
\endpage
\bye
\section{Introduction}
The East process is a one-dimensional spin system that was introduced in the physics literature by J\"{a}ckle and Eisinger~\cite{JE91} in 1991
to model the behavior of cooled liquids near the glass transition point, specializing a class of models that goes back to~\cite{FH}.
Each site in ${\ensuremath{\mathbb Z}} $ has a $\{0,1\}$-value (vacant/occupied), and, denoting this configuration by $\omega$, the process attempts to update $\omega_x$ to $1$ at rate $0<p<1$ (a parameter)
and to $0$ at rate $q=1-p$, only accepting the proposed update if
$\omega_{x-1}=0$ (a ``kinetic constraint'').
It is the properties of the East process before and towards reaching equilibrium --- it is reversible w.r.t.\ $\pi$, the product of Bernoulli($p$) variables ---
which are of interest, with the standard gauges for the speed of convergence to stationarity being the inverse spectral-gap and the total-variation mixing time (${\rm gap}^{-1}$ and $T_{\rm mix}$)
on a finite interval $\{0,\ldots,L\}$, where we fix $\omega_0=0$ for ergodicity (postponing formal definitions to \S\ref{sec:prelims}).
That the spectral-gap is uniformly bounded away from 0 for any
$p\in(0,1)$ was first proved in a beautiful work of Aldous and
Diaconis~\cite{AD02} in 2002.
This implies that $T_{\rm mix}$
is of order $L$ for any fixed threshold $0<\epsilon<1$ for the total-variation distance from $\pi$.
For a configuration $\omega$ with $\sup\{x:\omega_x=0\}<\infty$, call this rightmost 0 its \emph{front} $X(\omega)$; key questions on the East process $\omega(t)$ revolve around the law $\mu^t$ of the sites behind the front at time $t$,
basic properties of which remain unknown. One can imagine that the front advances to the right as a biased walk, behind which $\mu^t \approx \pi$ (its trail is mixed).
Indeed, if one (incorrectly!) ignores dependencies between sites as well as the randomness in the position of the front, it is tempting to conclude that $\mu^t$ converges to $\pi$, since upon updating a site $x$ its marginal is forever set to Bernoulli($p$). Whence, the positive vs.\ negative increments to $X(\omega)$ would have rates $q$ (a 0-update at
$X(\omega)+1$) vs.\ $pq$ (a 1-update at $X(\omega)$ with a 0 at its
left), giving the front an asymptotic speed $v =q^2>0$.
Of course, ignoring the irregularity near the front is problematic, since it is precisely the distribution of those spins that governs the speed of the front (hence mixing).
Still, just as a biased random walk, one expects the front to move at a positive speed with normal fluctuations,
whence its concentrated passage time through an interval would imply total-variation \emph{cutoff} --- a sharp transition in mixing --- within an $O(\sqrt{L})$-window.
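The naive bookkeeping above is easy to probe numerically. The following crude Monte-Carlo sketch (not the analysis used in this paper) simulates the East process on a finite interval with a frozen zero at the origin, via the superposition of rate-1 clocks, and reports the empirical front speed; interval size, horizon and seed are arbitrary illustrative choices.

```python
import random

def east_front_speed(p: float, t_max: float, size: int, seed: int = 1) -> float:
    """Crude simulation of the East process on {1,...,size} with a
    frozen zero at site 0; returns X(t_max)/t_max, the empirical speed
    of the front (rightmost zero)."""
    rng = random.Random(seed)
    q = 1.0 - p
    omega = [1] * (size + 1)
    omega[0] = 0                       # frozen zero, never updated
    front, t = 0, 0.0
    while True:
        t += rng.expovariate(size)     # next ring among `size` rate-1 clocks
        if t >= t_max:
            break
        x = rng.randint(1, size)
        if omega[x - 1] == 0:          # kinetic constraint: the ring is legal
            omega[x] = 0 if rng.random() < q else 1
            if omega[x] == 0 and x > front:
                front = x              # a legal 0-update just ahead of the front
            elif omega[x] == 1 and x == front:
                while omega[front] == 1:
                    front -= 1         # front retreats to the next zero
    return front / t_max

v_hat = east_front_speed(p=0.25, t_max=200.0, size=400)
print(round(v_hat, 2), "vs naive q^2 =", (1 - 0.25) ** 2)
```

Since $q^*>q$, the true speed $v=q-pq^*$ lies strictly below the naive value $q^2$, although at this short horizon the statistical error of the estimate is comparable to that gap.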
To discuss the behavior behind the front, let $\Omega_{\rm F}$ denote the set of configurations $\omega^{\rm F}$ on the negative half-line ${\ensuremath{\mathbb Z}} _-$ with a fixed 0 at the origin, and let $\omega^{\rm F}(t)$ evolve via the East process constantly re-centered (shifted by at most 1) to keep its front at the origin.
Blondel~\cite{Blondel} showed (see Theorem~\ref{Blondel1}) that the process $\omega^{\rm F}(t)$
converges to an invariant measure $\nu$, on which very little is
known, and that $\frac1{t} X(\omega(t)) $ converges in probability
to a positive limiting value $v$ as $t\to\infty$ (an asymptotic velocity)
given by the formula
\[
v = q-pq^* \qquad\mbox{ where }\qquad q^* := \nu(\omega_{-1}=0) .
\]
(We note that $q < q^* < q/p$ by the invariance of the measure $\nu$ and the fact that $v>0$.)
The East process $\omega(t)$ of course entails the joint distribution of $\omega^{\rm F}(t)$ and $X(\omega(t))$; thus, it is crucial to understand the dependencies between these as well as the rate at which $\omega^{\rm F}(t)$ converges to $\nu$
as a prerequisite for results on the fluctuations of $X(\omega(t))$.
Our first result confirms the biased random walk intuition for the front of the East process $X(\omega(t))$,
establishing a CLT for its fluctuations around $v t$ (illustrated in Fig.~\ref{fig:front}).
\begin{figure}[t]
\includegraphics[width=.75\textwidth]{front}
\caption{Trajectory of the front of an East process for $p=\frac14$ along a time interval of $10^5$, vs.\ its mean and standard deviation window.}
\label{fig:front}
\end{figure}
\begin{maintheorem}\label{th:main1} There exists a non-negative constant
$\s_*=\s_*(p)$ such that for all $\o\in \O_{\rm F}$,
\begin{align}
\label{th1.1}
\lim_{t\to \infty} \tfrac 1t X(\o(t))&=v \quad {\ensuremath{\mathbb P}} _\o\text{-a.s.},\\
\label{th1.2}
{\ensuremath{\mathbb E}} _\o\left[X(\o(t))\right]&=vt+O(1),\\
\label{th1.3}
\lim_{t\to \infty} \tfrac 1t \operatorname{Var}_\o\left(X(\o(t))\right)&=\s_*^2.
\end{align}
Moreover, $X(\o(t))$ obeys a
central limit theorem:
\begin{equation}
\label{th1.4}
\frac{X(\o(t))-v
t}{\sqrt{t}} \overset{d}{\rightarrow} \ensuremath{\mathcal N}(0,\s_*^2) \quad \text{w.r.t.}\quad {\ensuremath{\mathbb P}} _\o\mbox{ as $t\to\infty$}.
\end{equation}
\end{maintheorem}
A key ingredient for the proof is a quantitative bound on the rate of convergence to $\nu$, showing that it is exponentially fast (Theorem~\ref{coupling}). We then show that the increments
\begin{equation}
\label{eq-xi_n-def}
\xi_n:=X(\o(n))-X(\o(n-1)) \qquad (n\in {\ensuremath{\mathbb N}} )
\end{equation}
behave (after an initial burn-in time) as a stationary sequence of weakly dependent random variables (Corollary~\ref{cor:wf}),
whence one can apply an ingenious Stein's-method based argument of Bolthausen~\cite{Bolthausen} from 1982 to derive the CLT.
Moving our attention to finite volume, recall that
the \emph{cutoff phenomenon} (coined by Aldous and Diaconis~\cite{AD86}; see~\cites{Aldous,DiSh} as well as~\cite{Diaconis} and the references therein) describes a sharp transition in the convergence of a finite Markov chain to stationarity: over a negligible period of time (the cutoff window) the distance from equilibrium drops from near 1 to near $0$. Formally, a sequence of chains indexed by $L$ has cutoff around $t_L$ with window $w_L=o(t_L)$ if $T_{\rm mix}(L,\epsilon) = t_L + O_\epsilon(w_L)$ for any fixed $0<\epsilon<1$.
It is well-known (see, e.g.,~\cite{DiFi}*{Example 4.46}) that a biased random walk with speed $v>0$ on an interval of length $L$ has cutoff at $v^{-1}L$ with an $O(\sqrt{L})$-window due to normal fluctuations. Recalling the heuristics that depicts the front of the East process as a biased walk flushing a law $\mu^t\approx\pi$ in its trail, one expects precisely the same cutoff behavior. Indeed, the CLT in Theorem~\ref{th:main1} supports a result exactly of this form.
\begin{maintheorem}
\label{th:main2}
The East process on $\L=\{1,2,\dots ,L\}$ with parameter $0<p<1$
exhibits cutoff at $v^{-1}L$ with an $O(\sqrt{L})$-window:
for any fixed $0<\epsilon<1$ and large enough $L$,
\begin{align*}
T_{\rm mix}(L,\epsilon)&= v^{-1} L + O\left(\Phi^{-1}(1-\epsilon)\, \sqrt{L}\right),
\end{align*}
where $\Phi$ is the c.d.f.\ of $\ensuremath{\mathcal N}(0,1)$ and the implicit constant in the $O(\cdot)$ depends only on $p$.
\end{maintheorem}
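To see the scales in the theorem concretely, the sketch below evaluates $v^{-1}L+\Phi^{-1}(1-\epsilon)\sqrt{L}$ with the implicit constant of the $O(\cdot)$ term set to $1$ and a hypothetical stand-in for the speed $v$ (which is not known in closed form); it only illustrates that the $\epsilon$-dependent window is $o(L)$.

```python
from statistics import NormalDist

def tmix_estimate(L: int, eps: float, v: float) -> float:
    """v^{-1} L + Phi^{-1}(1 - eps) * sqrt(L), with the implicit constant
    of the O(.) term set to 1 (illustration only; v is a stand-in)."""
    return L / v + NormalDist().inv_cdf(1.0 - eps) * L ** 0.5

for L in (10 ** 2, 10 ** 4, 10 ** 6):
    # hypothetical speed v = 0.5; the window grows only like sqrt(L)
    print(L, round(tmix_estimate(L, eps=0.25, v=0.5), 1))
```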
\begin{figure}[t]
\vspace{-0.5cm}
\includegraphics
[width=.75\textwidth]{nu_p020304}
\vspace{-0.35cm}
\caption{The invariant measure $\nu$ behind the front of the East process
(showing $\nu(\o_{-i}=0)$ simulated
via Monte-Carlo for $p\in\{0.2,0.3,0.4\}$.)}
\label{fig:nu}
\vspace{-0.5cm}
\end{figure}
While these new results relied on a refined understanding of the
convergence of the process behind the front to its invariant law
$\nu$ (shown in Fig.~\ref{fig:nu}), various basic
questions on $\nu$ remain unanswered. For instance, are the single-site
marginals of
$\nu$ monotone in the distance from the front? What are the correlations between adjacent spins? Can one explicitly obtain $q^*=\nu(\omega_{-1}=0)$, thus yielding an expression for the velocity $v$?
For the latter, we remark that the well-known upper bound on $T_{\rm mix}$ in terms of the spectral-gap (Eq.~\eqref{eq-tmix-gap}), together with Theorem~\ref{th:main2}, gives the lower bound (cf.\ also~\cite{CFM})
\[
v\ge \limsup_{L\to\infty}\frac{{\rm gap}(\ensuremath{\mathcal L}_{[0,L]})}{\log\left(1/(p\wedge q)\right)}=\frac{{\rm gap}(\ensuremath{\mathcal L})}{\log\left(1/(p\wedge q)\right)}.
\]
\smallskip
Finally, we accompany the concentration for $X(\omega(t))$ and cutoff for the East process by analogous results --- including cutoff with an $O(1)$-window --- on the corresponding kinetically constrained models on trees, where a site is allowed to update (i.e., to be reset into a Bernoulli($p$) variable) given a certain configuration of its children (e.g., all-zeros/at least one zero/etc.). These results are
detailed in \S\ref{sec:trees} (Theorems~\ref{th:main3}--\ref{th:main4}).
\begin{remark*}
The concentration and cutoff results for the kinetically constrained models on trees (Theorems~\ref{th:main3}--\ref{th:main4})
do not apply to every scale but rather to infinitely many scales, as is sometimes the case in the context of tightness for maxima of branching random walks or discrete Gaussian Free Fields;
see, e.g.,~\cites{BDZ,DH91} as well as the beautiful method in~\cites{BZ1,BZ2} to overcome this hurdle for certain branching random walks. Indeed, similarly to the latter, one of the models here gives rise to a distributional recursion involving the maximum of i.i.d.\ copies of the random variable of interest, plus a non-negative increment. Unfortunately, unlike branching random walks, here this increment is not independent of those two copies, and extending our analysis to every scale appears to be quite challenging.
\end{remark*}
\section{Preliminaries and tools for the East process}\label{sec:prelims}
\subsection{Setup and notation}
\label{setting-notation}
Let $\O=\{0,1\}^{\ensuremath{\mathbb Z}} $ and let $
\O^*\subset \O$ consist of those configurations $\o\in \O$ such that the
variable $X(\o):=
\sup\{x:\omega_x=0\}$ is finite.
In the sequel, for any $\o\in \O^*$ we will often refer to $X(\o)$ as
the \emph{front} of $\o$. Given $\L\subset {\ensuremath{\mathbb Z}} $ and $\o\in \O$ we
will write $\o_\L$ for the restriction of $\o$ to $\L$.
\begin{enumerate}[(i)]
\item \emph{The East process.} For any $\o\in \O$ and $x\in {\ensuremath{\mathbb Z}} $ let $c_x(\o)$ denote the indicator
of the event $\{\o_{x-1}=0\}$. We will consider the Markov
process $\{\o(t)\}_{t\ge 0}$ on $\O$ with generator acting on local functions (\hbox{\it i.e.\ } depending
on finitely many coordinates) $f:\O\mapsto {\ensuremath{\mathbb R}} $ given by
\[
\ensuremath{\mathcal L} f(\o)=\sum_{x\in {\ensuremath{\mathbb Z}} }c_x(\o)\left[\pi_x(f)(\o)-f(\o)\right],
\]
where $\pi_x(f)(\o):= pf(\o^{(x,1)})+qf(\o^{(x,0)})$
and $\o^{(x,1)},\o^{(x,0)}$ are the configurations in $\O$ obtained from $\o$ by fixing equal to $1$
or to $0$ respectively the coordinate at $x$. In the sequel the above
process will be referred to as the \emph{East
process on ${\ensuremath{\mathbb Z}} $} and we will write ${\ensuremath{\mathbb P}} _\o(\cdot)$ for its law
when the starting configuration is $\o$. Average and variance w.r.t.\
${\ensuremath{\mathbb P}} _\o(\cdot)$ will be denoted by ${\ensuremath{\mathbb E}} _\o[\cdot]$ and
$\operatorname{Var}_\o(\cdot)$ respectively. Similarly we
will write ${\ensuremath{\mathbb P}} _\o^t(\cdot)$ and ${\ensuremath{\mathbb E}} _\o^t[\cdot]$ for the law and
average at a fixed time $t>0$. If the starting
configuration is distributed according to an initial distribution
$\eta$ we will simply write ${\ensuremath{\mathbb P}} _\eta(\cdot)$ for $\int
d\eta(\o){\ensuremath{\mathbb P}} _\o(\cdot)$ and similarly for ${\ensuremath{\mathbb E}} _\eta[\cdot]$.
It is easily seen that the East process has the following graphical
representation. To
each $x\in{\ensuremath{\mathbb Z}} $ we associate a rate-1 Poisson process and,
independently, a family of independent Bernoulli$(p)$ random variables
$\{s_{x,k} : k \in \mathbb N\}$. The occurrences of the Poisson process
associated to $x$ will be denoted by $\{t_{x,k} : k \in \mathbb N\}$. We
assume independence as $x$ varies in ${\ensuremath{\mathbb Z}} $. That fixes the
probability space. Notice that almost surely all
the occurrences $\{t_{x,k}\}_{k\in\mathbb N, x\in{\ensuremath{\mathbb Z}} }$ are different.
On the above probability space we construct a Markov process according to the following rules. At each time
$t_{x,n}$ the site $x$ queries the state of its own constraint $c_x$.
If and only if the constraint is satisfied ($c_x = 1$) then $t_{x,n}$
is called a \emph{legal ring} and the configuration resets its
value at site $x$ to the value of the corresponding Bernoulli variable
$s_{x,n}$. Using the graphical construction it is simple to see that if $
\o\in \O^*$
then
\[
{\ensuremath{\mathbb P}} _\o(\o(t)\in \O^* \, \forall t\ge 0)=1.
\]
\item \emph{The half-line East process.} Consider now $a\in {\ensuremath{\mathbb Z}} $ and let $\O^a$ consist of those
configurations $\o\in \O$ with a \emph{leftmost} zero at $a$.
Clearly, for any $\o\in \O^a$,
${\ensuremath{\mathbb P}} _\o(\o(t)\in \O^a\ \forall t>0)=1$
because $c_x(\o)=0$ for any $x\le a$. We will refer to the corresponding process in $\O^a$
as the East process on the half-line $(a,\infty)$. Notice that in this case the
variable at $a+1$ will always be unconstrained because $c_a(\o)=1$ for
all $\o\in \O^a$. The corresponding generator will be denoted by $\ensuremath{\mathcal L}_{(a,\infty)}$.
\item \emph{The finite volume East process.} Finally, if $\L\subset {\ensuremath{\mathbb Z}} $ is
a discrete interval of the form
$\L=[a+1,\dots,a+L]$, the projection on $\O_\L\equiv \{0,1\}^\L$ of the half-line East process on
$(a,\infty)$ is a continuous time Markov chain because each vertex
$x\in \L$ only
queries the state of the spin to its left. In the sequel the above chain will be referred to as the
\emph{East process} in $\L$. Let $\ensuremath{\mathcal L}_{\L}$ denote the corresponding generator.
\end{enumerate}
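The graphical construction doubles as a grand coupling: several initial conditions can be driven by one shared realisation of the Poisson rings and Bernoulli marks. A minimal finite-volume sketch (all parameters arbitrary) is:

```python
import random

def east_grand_coupling(L, p, t_max, inits, seed=0):
    """Run several East chains on {1,...,L} (boundary zero fixed at
    site 0) driven by ONE shared realisation of Poisson rings and
    Bernoulli(p) marks -- the grand coupling of the graphical
    construction.  Each init is a 0/1 list of length L+1 with init[0]=0."""
    rng = random.Random(seed)
    states = [list(init) for init in inits]
    t = 0.0
    while True:
        t += rng.expovariate(L)            # next ring among L rate-1 clocks
        if t >= t_max:
            return states
        x = rng.randint(1, L)
        s = 1 if rng.random() < p else 0   # shared Bernoulli(p) mark
        for omega in states:
            if omega[x - 1] == 0:          # the ring is legal for this copy
                omega[x] = s

a, b = east_grand_coupling(L=10, p=0.3, t_max=60.0,
                           inits=[[0] + [1] * 10, [0] * 11])
print(a == b)  # typically True: the two copies have coalesced by this time
```

The same coupling, started from all configurations agreeing left of the origin, is what is used below in the proof of Lemma~\ref{infinitesupport}.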
The main properties of the above processes can be summarized as
follows (cf.~\cite{East-survey} for a survey). They are all ergodic and
reversible w.r.t.\ the product Bernoulli($p$) measure $\pi$ (on the
corresponding state space). Their generators
$\ensuremath{\mathcal L},\ensuremath{\mathcal L}_{(a,\infty)},\ensuremath{\mathcal L}_\L$ are self-adjoint operators on
$L^2(\pi)$ satisfying the following
natural ordering:
\[
{\rm gap}(\ensuremath{\mathcal L})\le {\rm gap}(\ensuremath{\mathcal L}_{(a,\infty)})\le {\rm gap}(\ensuremath{\mathcal L}_{\L}).
\]
\begin{remark*}
By translation invariance the value of ${\rm gap}(\ensuremath{\mathcal L}_{(a,\infty)})$ does not
depend on $a$ and, similarly, ${\rm gap}(\ensuremath{\mathcal L}_\L)$ depends only on the cardinality of $\L$.
\end{remark*}
As mentioned before, the fact that ${\rm gap}(\ensuremath{\mathcal L})>0$ (but only for $p\sim 1$) was first proved by Aldous and
Diaconis~\cite{AD02}, where it was further shown that
\begin{align}
\label{eq-AD-gap}
e^{-(\frac{1}{\log 2}+o(1))\log^2(1/q)} \leq {\rm gap}(\ensuremath{\mathcal L}) \leq e^{-(\frac{1}{2\log 2}+o(1))\log^2(1/q)} \quad \mbox{as $q\downarrow 0$},
\end{align}
the order of the exponent in the lower bound matching non-rigorous predictions in the physics literature.
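Dropping the $o(1)$ corrections, the two sides of \eqref{eq-AD-gap} are easy to evaluate numerically; already at a moderately small $q$ they differ by several orders of magnitude, as the quick illustration below shows (leading order only).

```python
import math

def ad_gap_bounds(q: float):
    """Leading-order Aldous--Diaconis bounds on gap(L) as q -> 0
    (the o(1) corrections in the exponents are dropped)."""
    l2 = math.log(1.0 / q) ** 2
    lower = math.exp(-l2 / math.log(2))
    upper = math.exp(-l2 / (2 * math.log(2)))
    return lower, upper

lo, hi = ad_gap_bounds(0.05)
print(f"lower ~ {lo:.1e}, upper ~ {hi:.1e}")
```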
The positivity of ${\rm gap}(\ensuremath{\mathcal L})$ was rederived and extended to all $p\in (0,1)$
in~\cite{CMRT} by different methods, and the correct asymptotics
of the exponent as
$q\downarrow 0$ --- matching the \emph{upper bound} in~\eqref{eq-AD-gap} ---
was very recently established in~\cite{CFM}. It is easy to check (e.g., from~\cite{CMRT})
that $\lim_{p\to 0}{\rm gap}(\ensuremath{\mathcal L})=1$, a fact that
will be used later on.
For the East process in $\L$ it is natural to consider its mixing times
$T_{\rm mix}(L,\epsilon)$, $\epsilon\in (0,1)$, defined by
\[
T_{\rm mix}(L,\epsilon)=\inf\Big\{t:\ \max_{\o\in \O_\L}\|{\ensuremath{\mathbb P}} _\o^t(\cdot)-\pi\|\le \epsilon\Big\},
\]
where $\|\cdot \|$ denotes
total-variation distance.
It is a standard result for reversible Markov chains (see
e.g.~\cites{AF,LPW,Saloff}) that
\begin{equation}
\label{eq-tmix-gap}
T_{\rm mix}(L,\epsilon) \le \frac 12 {\rm gap}(\ensuremath{\mathcal L}_\L)^{-1}\left(2+\log
\frac{1}{\pi_\L^*}\right) \log \frac1\epsilon ,
\end{equation}
where $\pi_\L^*:= \min_{\o\in \O_\L}\pi(\o)$. In particular
$T_{\rm mix}(L,\epsilon)\le c(p) L \log 1/\epsilon$.
A lower bound which also grows linearly in the length $L$ of the
interval $\L$ follows easily from the \emph{finite speed of
information propagation}:
If we run the East
model in $\L$ starting from the configuration $\omega\equiv 1$
except for a zero at the origin, then, in order to create zeros
near the right boundary of $\L$ a sequence of order $L$ of successive
rings of the Poisson clocks at consecutive sites must have
occurred. That happens with probability $O(1)$ iff we allow a time
which is linear in $L$ (see \S\ref{sec:finite-speed} and in particular Lemma~\ref{finitespeed}).
\subsection{The process behind the front}\label{sec:proc-front}
Given two probability measures $\nu,\mu$ on $\O$ and $\L\subset
{\ensuremath{\mathbb Z}} $ we will write $\|\mu-\nu\|_\L$ to denote the total variation
distance between the marginals of $\mu$ and $\nu$ on $\O_\L=\{0,1\}^\L$.
When the process starts from an initial configuration $\o\in \O^*$ with
a front, it is convenient to
define a new process $\{\o^{\rm F}(t)\}_{t\ge 0}$ on $\O_{\rm F}:=\{\o\in \O^*:\ X(\o)=0\}$ as \emph{the
process as seen from the front} \cite{Blondel}. Such a process is obtained
from the original one by a random shift $-X(\o(t))$
which forces the front to be always at the origin. More precisely
we define on $\O_{\rm F}$ the Markov process with generator $\ensuremath{\mathcal L}^{\rm F}=\ensuremath{\mathcal L}^{\rm E}+\ensuremath{\mathcal L}^{\rm S}$ given by
\begin{align*}
\ensuremath{\mathcal L}^{\rm E} f(\o) &=
\sum_{x<0}c_x(\o)\left[\pi_x(f)(\o)-f(\o)\right], \\
\ensuremath{\mathcal L}^{\rm S} f(\o)&=
(1-p)\left[f(\vartheta^-\o)-f(\o)\right]+ p \,c_0(\o)\left[f(\vartheta^+\o)-f(\o)\right],
\end{align*}
where
\begin{equation*}
\left( \vartheta^\pm\o\right)_x=
\begin{cases}
0&\text{ if $x=0$}\\
1 &\text{ if $x>0$}\\
\o_{x\mp1}&\text{ otherwise}.
\end{cases}
\end{equation*}
That is, the generator $\ensuremath{\mathcal L}^{\rm F}$ incorporates the moves of the East
process behind the front plus $\pm1$ shifts corresponding to whenever the front itself jumps forward/backward.
\begin{remark*}
The same graphical construction that was given for the East process $\omega(t)$ applies to the process $\omega^{\rm F}(t)$: this is clear for the East part of the generator $\ensuremath{\mathcal L}^{\rm E}$; for the shift part $\ensuremath{\mathcal L}^{\rm S}$, simply apply a positive
shift $\vartheta^+$ when there is a ring at the origin and the
corresponding Bernoulli variable is one. If the Bernoulli variable is
zero, operate a negative shift $\vartheta^-$.
\end{remark*}
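The caption of Fig.~\ref{fig:nu} mentions Monte-Carlo estimates of $\nu$; a minimal version of such an estimator for the single marginal $q^*=\nu(\o_{-1}=0)$ is sketched below. It runs the East process with a frozen zero at the origin and records, after a burn-in, how often the site just behind the front is vacant; sampling at ring times is unbiased because the rings are independent of the state. All parameters are arbitrary illustrative choices.

```python
import random

def estimate_qstar(p: float, t_max: float, burn: float, seed: int = 7) -> float:
    """Monte-Carlo sketch of q* = nu(omega_{-1} = 0): simulate the East
    process with a frozen zero at the origin and record, after a
    burn-in, how often the site just behind the front is vacant."""
    rng = random.Random(seed)
    q = 1.0 - p
    size = int(2 * t_max) + 10          # room for the front to travel
    omega = [1] * (size + 1)
    omega[0] = 0
    front, t = 0, 0.0
    hits = samples = 0
    while True:
        t += rng.expovariate(size)
        if t >= t_max:
            return hits / samples
        x = rng.randint(1, size)
        if omega[x - 1] == 0:           # legal ring
            omega[x] = 0 if rng.random() < q else 1
            if omega[x] == 0 and x > front:
                front = x
            elif omega[x] == 1 and x == front:
                while omega[front] == 1:
                    front -= 1
        if t > burn and front >= 1:
            samples += 1
            hits += omega[front - 1] == 0

print(round(estimate_qstar(p=0.3, t_max=300.0, burn=50.0), 3))
```

For $p=0.3$ the estimate should land in the window $(q,1]$ implied by $q<q^*<q/p$ and $q^*\le 1$, consistent with the curves in Fig.~\ref{fig:nu}.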
With this notation, the main result of Blondel~\cite{Blondel} can be summarized as follows.
\begin{theorem}[\cite{Blondel}]
\label{Blondel1} The front of the East process, $X(\omega(t))$, and the process as seen from the front, $\omega^{\rm F}(t)$, satisfy the following:
\begin{enumerate}[(i)]
\item There exists a unique invariant measure
$\nu$ for the process $\{\o^{\rm F}(t)\}_{t\ge 0}$. Moreover, $\|\n-\pi\|_{(-\infty,-x]}$
decreases exponentially fast in $x>0$.
\item Let $q^*:=\nu(\o_{-1}=0)$ and let $v= q-pq^*$. Then $v>0$ and for any $\o\in \O_{\rm F}$,
\[
\lim_{t\to \infty}\frac{X(\o(t))}{t}\
\stackrel{{\ensuremath{\mathbb P}} _\o}{\longrightarrow} \ v.
\]
\end{enumerate}
\end{theorem}
Thus, if the East process has a front at time $t=0$ then it will have
a front at any later time. The latter progresses in time with an
asymptotically constant speed $v$.
\subsection{Local relaxation to equilibrium}
In this section we review the main technical results on the local
convergence to the stationary measure $\pi$ for the (infinite volume) East process. The key message here is
that \emph{each} vacancy in the starting configuration, in a time lag $t$,
induces the law $\pi$ in an interval in front of its position of length
proportional to $t$. That explains why the distance between the
invariant measure $\nu$ and $\pi$ deteriorates when we approach the
front from behind.
\begin{definition}
Given a configuration $\omega\in \Omega$ and an interval $I$ we say that $\o$
satisfies the \textbf{Strong Spacing Condition (SSC)} in $I$ if the largest sub-interval of
$I$ where $\omega$ is identically equal to one has length at most
$10\log |I|/(|\log p| \wedge 1)$.
Similarly, given $\d,\epsilon\in (0,1/4)$, we will say that $\o$ satisfies the
\textbf{$(\d,\epsilon)$-Weak Spacing Condition (WSC)} in $I$ if the largest sub-interval
of $I$ where $\omega $ is identically equal to one has length at most
$\delta|I|^{\epsilon}$.
\end{definition}
For brevity, we will omit the $(\d,\epsilon)$ dependence in the WSC case when these are clear from the context.
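Both spacing conditions reduce to computing the longest run of ones in a configuration; a direct check, with the thresholds transcribed from the definition, might look as follows.

```python
import math

def longest_one_run(omega) -> int:
    """Length of the longest block of consecutive ones in omega."""
    best = run = 0
    for s in omega:
        run = run + 1 if s == 1 else 0
        best = max(best, run)
    return best

def satisfies_ssc(omega, p: float) -> bool:
    """Strong Spacing Condition on an interval I with |I| = len(omega):
    longest run of ones at most 10 log|I| / (|log p| ^ 1)."""
    n = len(omega)
    return longest_one_run(omega) <= 10 * math.log(n) / min(abs(math.log(p)), 1.0)

def satisfies_wsc(omega, delta: float, eps: float) -> bool:
    """(delta, eps)-Weak Spacing Condition: longest run of ones at most
    delta * |I|**eps."""
    return longest_one_run(omega) <= delta * len(omega) ** eps

config = [0, 1, 1, 0, 1, 1, 1, 0] * 8   # |I| = 64, longest run of ones = 3
print(satisfies_ssc(config, p=0.5))      # threshold 10*log(64)/|log 0.5| is about 60
```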
\begin{proposition}
\label{prop:key1}
There exist universal positive constants $c^*,m$ independent
of $p$ such that the following holds. Let $\L=[1,2,\dots,\ell]$ and let $\o\in \O$ be such that $\o_0=0$.
Further let $\Delta(\o)$ be the maximum between the
maximal spacing between two consecutive zeros of $\o$ in $\L$ and
the distance of the last zero of $\o$ from the vertex $\ell$. Then
\[
\|{\ensuremath{\mathbb P}} ^t_\o-\pi\|_{\L}\le \ell\left(c^*/q \right)^{\Delta(\o)} e^{-t
\,({\rm gap}(\ensuremath{\mathcal L})\wedge m)}.
\]
\end{proposition}
To prove this proposition, we need the following lemma.
\begin{lemma}
\label{infinitesupport} There exist universal positive constants $c^*,m$ independent
of $p$ such that the following holds. Fix $\o\in \O$ with $\o_0=0$,
let $\ell\in
{\ensuremath{\mathbb N}} $ and let $f:\O_{(-\infty,\ell\,]}\mapsto
{\ensuremath{\mathbb R}} $ with $\|f\|_\infty\le 1$. Let also $\pi_{\ell}(f)$ denote the new
function obtained by averaging $f$ w.r.t. the marginal of $\pi$ over the
spin at $x=\ell$. Then,
\begin{equation}
\label{eq:fabio2}
|\mathbb{E}_{\omega}\left[f(\omega(t))-\pi_{\ell}(f)(\omega(t))\right]|\le
(c^*/q)^\ell e^{-t\, ({\rm gap}(\ensuremath{\mathcal L})\wedge m)}.
\end{equation}
\end{lemma}
\begin{remark*}
If we replaced the r.h.s.\ of \eqref{eq:fabio2} with $\left(2\sqrt{2}/(p\wedge q)\right)^\ell e^{-t\, {\rm gap}(\ensuremath{\mathcal L})}$, then the statement
would coincide with that in \cite{Blondel}*{Proposition
4.3}. Notice that as $p \downarrow 0$, the term $c^*/q$ does not blow up---
unlike $2\sqrt{2}/(p\wedge q)$---and as remarked below~\eqref{eq-AD-gap}, ${\rm gap}(\ensuremath{\mathcal L})$ stays bounded away from $0$. Hence, as $p\downarrow 0$, the time after which the r.h.s.\ in \eqref{eq:fabio2} becomes small is bounded from
above by $C_0\times \ell$ for some universal $C_0>0$ not depending on $p$. This fact will be crucially used in the proofs of some of the theorems to follow.
\end{remark*}
\begin{proof}[Proof of Lemma~\ref{infinitesupport}]
As mentioned in the remark, using \cite{Blondel}*{Proposition
4.3} it suffices to assume that
$p<1/3$. Fix $\o$ as in the lemma and let $\O^\o_{(-\infty,\ell\,]}$ be the set of all configurations
$\o'\in \O_{(-\infty,\ell\,]}$ which coincide with $\o$ on the half
line $(-\infty,0]$. The special configuration in
$\O^\o_{(-\infty,\ell\,]}$ which is identically equal to one in the
interval $[1,\ell]$ will be denoted by $\o^*$.
Observe that, using reversibility together with the fact that the
updates in $(-\infty,0]$ do not check the spins to the right of the
origin,
\begin{align}
\label{eq:fabio}
\sum_{\o'\in \O^\o_{(-\infty,\ell\,]}}\pi_{[1,\ell]}(\o')
\mathbb{E}_{\omega'}\left[f(\omega'(t))\right]
&=\mathbb{E}_{\omega}\left[\pi_{[1,\ell]}(f)(\omega(t))\right]\nonumber\\
\sum_{\o'\in \O^\o_{(-\infty,\ell\,]}}\pi_{[1,\ell]}(\o')
\mathbb{E}_{\omega'}\left[\pi_\ell(f)(\omega'(t))\right]
&=\mathbb{E}_{\omega}\left[\pi_{[1,\ell]}(f)(\omega(t))\right].
\end{align}
Using the
graphical construction as a grand coupling for the processes with
initial condition in $\O^\o_{(-\infty,\ell\,]}$, it is easy to verify
that, at the hitting time $\tau_\ell$
of the set $\{\o'\in \O_{(-\infty,\ell]}:\ \o'_\ell=0\}$ for the
process started from
$\o^*$, the processes starting from \emph{all} possible initial conditions in $\O^\o_{(-\infty,\ell\,]}$
have coupled.
Let $\o'\in\O^\o_{(-\infty,\ell\,]}$ be distributed according to $\pi_{[1,\ell]}.$
Then using the grand coupling,
\begin{align*}
|\mathbb{E}_{\omega}\left[f(\omega(t))-\pi_{\ell}(f)(\omega(t))\right]| &=|\mathbb{E}_{\omega,\pi_{[1,\ell]}}\left[f(\omega(t))-f(\omega'(t))+\pi_{\ell}(f)(\omega'(t))-\pi_{\ell}(f)(\omega(t))\right]|\\
&\le 4 \sup_{\o'\in \O^\o_{(-\infty,\ell\,]}}{\ensuremath{\mathbb P}} (\exists \, x\in
[1,\ell]:\ \o_x(t)\neq \o_x'(t))\\
&\le 4 {\ensuremath{\mathbb P}} _{\o^*}(\tau_\ell >t)\\
&\le 4 {\ensuremath{\mathbb P}} _{\o^*}(X(\o^*(t))<\ell).
\end{align*}
The first equality follows by adding and subtracting $\mathbb{E}_{\omega}\left[\pi_{[1,\ell]}(f)(\omega(t))\right]$ from the l.h.s.\ and then using \eqref{eq:fabio}. The rest of the inequalities are immediate from the above discussion.
In order to bound the above probability, we observe that the front
$X(\o^*(t))$, initially at $x=0$, can be coupled to an asymmetric random
walk $\xi(t)$, with jump rate $q$ to the right and $p$ to the left, in such a way that $X(\o^*(t))\ge \xi(t)$ for all $t\ge 0$. Since we have assumed that $p< 1/3$,
standard hitting time estimates for biased random walks give universal constants $c,m$ such that, for $t\ge c
\ell$, the above probability is smaller than $e^{-mt}$.
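For concreteness, here is a sketch of this standard estimate with the specific (illustrative, not necessarily optimal) choice $c=2/(q-p)$. Writing $\xi(t)$ as the difference of two independent Poisson processes of rates $q$ and $p$, for $t\ge c\ell$ one has $\ell\le (q-p)t/2$, and the exponential Markov inequality gives, for any $\lambda>0$,
\begin{align*}
{\ensuremath{\mathbb P}} (\xi(t)<\ell)
&\le {\ensuremath{\mathbb P}} \big(\xi(t)<(q-p)t/2\big)
\le e^{\lambda(q-p)t/2}\,{\ensuremath{\mathbb E}} \big[e^{-\lambda\xi(t)}\big]\\
&=\exp\Big(t\Big[\frac{\lambda(q-p)}{2}+q(e^{-\lambda}-1)+p(e^{\lambda}-1)\Big]\Big),
\end{align*}
and the expression in square brackets equals $-\lambda(q-p)/2+O(\lambda^2)$, hence is negative (equal to $-m$, say) for $\lambda$ small enough.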
\end{proof}
\begin{proof}[Proof of Proposition~\ref{prop:key1}]
Let $\o\in \O$ be such that $\o_0=0$.
Then
\begin{align*}
\max_{\substack{ f:\O_\L\mapsto {\ensuremath{\mathbb R}} \\
\|f\|_\infty\le 1}}
&|{\ensuremath{\mathbb E}} _\o\left[f(\omega(t))-\pi(f)\right]| \\
&\le \max_{\substack{f:\O_\L\mapsto {\ensuremath{\mathbb R}} \\
\|f\|_\infty\le
1}}|{\ensuremath{\mathbb E}} _\o\left[f(\omega(t))-\pi_\ell(f)(\omega(t))\right]|+
\max_{\substack{f:\O_\L\mapsto {\ensuremath{\mathbb R}} \\ \|f\|_\infty\le 1}}|{\ensuremath{\mathbb E}} _\o\left[\pi_\ell(f)(\omega(t))-\pi(f)\right]|
\\
&\le (c^*/q )^{\Delta(\o)} e^{-t\,
({\rm gap}(\ensuremath{\mathcal L})\wedge m)} +
\max_{\substack{f:\O_\L\mapsto {\ensuremath{\mathbb R}} \\\|f\|_\infty\le 1}}|{\ensuremath{\mathbb E}} _\o\left[\pi_\ell(f)(\omega(t))-\pi(f)\right]|,
\end{align*}
where we applied the above lemma to the shifted configuration in which
the origin coincides with the rightmost zero in $\L$ of $\o$. \\
We now
observe that the new function $\pi_\ell(f)$ depends only on the first
$\ell-1$ coordinates of $\o$ and that $\|\pi_\ell(f)\|_\infty\le
1$. Thus we can iterate the above bound $(\ell -1)$ times to get that
\begin{equation*}
\|{\ensuremath{\mathbb P}} _\o^t-\pi\|_\L\le 2\max_{\substack{f:\O_\L\mapsto {\ensuremath{\mathbb R}} \\
\|f\|_\infty\le 1}}|{\ensuremath{\mathbb E}} _\o\left[f(\omega(t))-\pi(f)\right]| \le
\ell (c^*/q )^{\Delta(\o)} e^{-t\,({\rm gap}(\ensuremath{\mathcal L})\wedge m)}.
\qedhere
\end{equation*}
\end{proof}
\begin{corollary}
\label{cor:spacing}
Fix $\o\in \O^*\,$, $\ell \in {\ensuremath{\mathbb N}} $ and let
$I^\ell_\o=[X(\o),X(\o)+\ell-1]$. Then
\begin{align}
\label{eq:1}
\sup_{\o\in \O^*}&\|\,{\ensuremath{\mathbb P}} ^t_\o-\pi\|_{I^\ell_\o}\le (c^*/q )^{\ell} e^{-t\,
({\rm gap}(\ensuremath{\mathcal L})\wedge m)}.\\
\label{eq:2} \sup_{\o\in \O^*}&{\ensuremath{\mathbb P}} _\o\left(\o(t) \text{ does not satisfy SSC in $I^\ell_\o$}\right)
\le \ell (c^*/q )^{\ell} e^{-t\,
({\rm gap}(\ensuremath{\mathcal L})\wedge m)} +\ell^{-9}.\\
\label{eq:3}
\sup_{\o\in \O^*}&{\ensuremath{\mathbb P}} _\o\left(\o(t) \text{ does not satisfy WSC in
$I^\ell_\o$}\right)\le
(c^*/q )^{\ell} e^{-t\,
({\rm gap}(\ensuremath{\mathcal L})\wedge m)} + \ell p^{\d \ell^{\varepsilon/2}}.
\end{align}
\end{corollary}
\begin{proof}
By construction, $\Delta_{I_\o^\ell}(\o)=\ell$ for any $\o\in \O^*$. Thus the first statement follows at once
from Proposition~\ref{prop:key1}. The other two statements follow
from the
fact that
\[
\pi\left(\{\o: \text{$\o$ does not satisfy SSC
in $[1,\dots,\ell]$}\}\right) \le \ell^{-9}
\]
and
\begin{equation*}
\pi\left(\{\o: \text{$\o$ does not satisfy the WSC
in $[1,\dots,\ell]$}\}\right) \le \ell p^{\d \ell^{\varepsilon/2}}.
\qedhere
\end{equation*}
\end{proof}
\subsection{Finite speed of information propagation}\label{sec:finite-speed}
As the East process is an interacting particle system whose jump rates are
bounded by one, it is well known that information can
only travel through the system at a finite speed. A quantitative statement of
this general fact goes as follows.
\begin{lemma} \label{finitespeed} For $x<y\in {\ensuremath{\mathbb Z}} $ and $0\le s<t$, define
the ``linking event'' $F(x,y;s,t)$ as the event that there exists an ordered sequence
$s\le t_{x}<t_{x+1}<\dots <t_y<t$ or $s\le t_{y}<t_{y-1}<\dots <t_x<t$ of rings of the Poisson clocks
associated to the corresponding sites in $[x,y]\cap {\ensuremath{\mathbb Z}} $. Then there exists a constant $v_{\rm max}$ such that, for all $|y-x|\ge v_{\rm max}(t-s)$,
\[
{\ensuremath{\mathbb P}} (F(x,y;s,t))\le e^{-|x-y|}.
\]
\end{lemma}
\begin{proof}The probability of $F(x,y;s,t)$ is equal to the
probability that a Poisson process of intensity $1$ has at least
$|x-y|$ instances within time $t-s$.
\end{proof}
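For the reader's convenience, this tail bound can be made explicit with, e.g., the (ad hoc, not necessarily optimal) choice $v_{\rm max}=e^2$: if $N$ is a Poisson random variable of mean $t-s$ and $k:=|x-y|\ge e^2(t-s)$, then by the exponential Markov inequality
\begin{align*}
{\ensuremath{\mathbb P}} (N\ge k)\le e^{-2k}\,{\ensuremath{\mathbb E}} \big[e^{2N}\big]
=\exp\big(-2k+(t-s)(e^2-1)\big)\le e^{-2k+k}= e^{-k},
\end{align*}
since $(t-s)(e^2-1)\le (t-s)e^2\le k$.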
\begin{remark}
\label{rem:speed}
An important consequence of the above lemma is the following
fact. Let $0<s<t$ and let $\ensuremath{\mathcal F}_s$ be the $\s$-algebra
generated by all the rings of the Poisson clocks and all the coin
tosses up to time $s$ in the graphical construction of the East
process. Fix $x<y<z$ and let $A,B$ be two events depending on
$\{\o_a\}_{a\le x}$ and $\{\o_a\}_{a\ge z}$ respectively. Then
\begin{gather*}
{\ensuremath{\mathbb P}} _\o\left(\{\o(t)\in A\cap B\}\cap F(y,z;s,t)^c \mid
\ensuremath{\mathcal F}_s\right)\\
={\ensuremath{\mathbb P}} _\o\left(\{\o(t)\in A\}\mid \ensuremath{\mathcal F}_s\right)
{\ensuremath{\mathbb P}} _\o\left(\{\o(t)\in B\}\cap F(y,z;s,t)^c\mid \ensuremath{\mathcal F}_s\right).
\end{gather*}
This is because: (i) on the event $F(y,z;s,t)^c$ the occurrence of the
event $B$ does not depend anymore on the Poisson rings and coin tosses to the
left of $y$; (ii) the occurrence of the event $A$ depends only on
the Poisson rings and coin tosses to the left of $x$ because of the
oriented character of the East process.
\end{remark}
The finite speed of information propagation, together with the results of~\cite{AD02}, implies the following rough bound on the position of the front
$X(\o(t))$ for the East process started from $\o\in \O^*$ (also see, e.g., \cite{Blondel}*{Lemma~3.2}).
\begin{lemma}
\label{linearspeed}
There exist constants $v_{\rm min}>0$ and $\gamma>0$ such that
\[
\sup_{\o\in \O^*}{\ensuremath{\mathbb P}} _\o\left(X(\o(t))\in[X(\o)+ v_{\rm min}t,X(\o)+v_{\rm max}t]\right) \ge
1-e^{-\gamma t}.
\]
\end{lemma}
\begin{remark}\label{speedbnd} When $p\downarrow 0$ one can obtain the above statement with $v_{\rm min}\to 1$ and $\gamma$ uniformly bounded away from $0$ by using our Proposition~\ref{prop:key1} instead of \cite{Blondel}*{Proposition~4.3} in the proof of \cite{Blondel}*{Lemma~3.2}.
\end{remark}
The second consequence of the finite speed of information propagation
is a kind of mixing result behind the front $X(\o(t))$ for the process started from
$\o\in \O^*$. We first need some additional notation.
\begin{definition}
\label{shift}For any $a\in {\ensuremath{\mathbb Z}} $, we define the
\emph{shifted} configuration $\vartheta_a \o$ by
\[
\vartheta_a \o_x= \o_{x+a},\ \forall x\in {\ensuremath{\mathbb Z}} .
\]
\end{definition}
\begin{proposition}
\label{prop:decor}
Let $\L\subset (-\infty, -\ell]\cap {\ensuremath{\mathbb Z}} $ and let
$B\subset \{0,1\}^{\L}$.
Assume $\ell \ge 2v_{\rm max}(t-s)$. Then for any $\o\in \O^*$ and any
$a\in {\ensuremath{\mathbb Z}} $ the following holds:
\begin{gather*}
\Big|\,{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}{\mathds
1}_{\{X(\o(t))=a\}}\mid \ensuremath{\mathcal F}_s\right]-{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}\mid \ensuremath{\mathcal F}_s\right]{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{X(\o(t))=a\}}\mid \ensuremath{\mathcal F}_s\right]\,\Big|\\= O(e^{-\ell}).
\end{gather*}
\end{proposition}
To see roughly what the proposition says, assume first that the front at time $s$ is at the origin. Then the above result says
that, at any later time $t$, any event supported on $(-\infty, -\ell]$ is almost independent of the location of the front.
\begin{proof}
Recall the definition of the event $F(x,y;s,t)$ from Lemma~\ref{finitespeed} and let
\begin{align*}
B_1 &:=F\left(X(\o(s))-\ell,X(\o(s))-\ell/2-1;s,t\right)\\
B_2&:= F\left(X(\o(s))-\ell/2,X(\o(s));s,t\right).
\end{align*}
We now write
\begin{align*}
{\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}{\mathds
1}_{\{X(\o(t))=a\}}
&={\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}{\mathds
1}_{\{X(\o(t))=a\}}{\mathds 1}_{\{B_1^c\}}{\mathds 1}_{\{B_2^c\}} \\
&+{\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}{\mathds
1}_{\{X(\o(t))=a\}}\left[1-{\mathds 1}_{\{B_1^c\}}{\mathds 1}_{\{B_2^c\}}\right].
\end{align*}
We first note that, given $\ensuremath{\mathcal F}_s$, for any $a< X(\o(s))-\ell/2$,
\begin{align*} {\mathds
1}_{\{X(\o(t))=a\}}{\mathds
1}_{\{B_2^c\}}=0,
\end{align*}
and hence
\begin{align*} {\ensuremath{\mathbb E}} _\o\left[{\mathds
1}_{\{X(\o(t))=a\}}{\mathds
1}_{\{B_2^c\}}\mid \ensuremath{\mathcal F}_s\right]=0.
\end{align*}
Thus, we may assume that $a\ge X(\o(s))-\ell/2$.
Now
\begin{align*}
{\ensuremath{\mathbb E}} _\o&\left[{\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}{\mathds
1}_{\{X(\o(t))=a\}}{\mathds 1}_{\{B_1^c\}}{\mathds
1}_{\{B_2^c\}}\mid \ensuremath{\mathcal F}_s\right]\\
&={\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{X(\o(s))} [\o(t)]_\L\in B\}}{\mathds
1}_{\{B_1^c\}}\mid \ensuremath{\mathcal F}_s\right]
{\ensuremath{\mathbb E}} _\o\left[{\mathds
1}_{\{X(\o(t))=a\}}{\mathds
1}_{\{B_2^c\}}\mid \ensuremath{\mathcal F}_s\right]
\end{align*}
because under the assumption that $a\ge X(\o(s))-\ell/2$, the two events are functions of an independent set of
variables in the graphical construction (cf.\ Remark~\ref{rem:speed}).
By Lemma~\ref{finitespeed} we know that ${\ensuremath{\mathbb P}} (B_i\mid \ensuremath{\mathcal F}_s)\le
e^{-\ell},\ i=1,2$, and the proof is complete.
\end{proof}
\section{The law behind the front of the East process}\label{sec:front}
Our main result in this section is a quantitative estimate on the rate of
convergence as $t\to \infty$ of the law $\mu_\o^t$ of the process seen from the front
to its invariant measure $\nu$.
Consider the process $\{\o^{\rm F}(t)\}_{t\ge 0}$ seen from the front (recalling \S\ref{sec:proc-front}) and let $\mu_\o^t$ be its law at time $t$ when the starting configuration
is $\o$.
\begin{theorem}
\label{coupling}
For any $p\in (0,1)$ there exist $\a\in (0,1)$ and $v^*>0$ such that
\[
\sup_{\o\in \O_{\rm F}}\|\,\mu_\o^t -\nu\|_{[-v^*t,\,0]}=O(e^{-t^\a}).
\]
Moreover, $\a$ and $v^*$ can be chosen uniformly as $p\to 0$.
\end{theorem}
A corollary of this result --- which will be key in the proof of Theorem~\ref{th:main1} --- is to show that, for
any $\o\in \O_{\rm F}$, the increments in the position of the front
(the variables $\xi_n$ below) behave asymptotically as a
stationary sequence of weakly dependent random variables with
exponential moments.
Fix $\Delta>0$\footnote{In the sequel we will always use the letter $\Delta$ to denote a time lag. Its value will depend on the context and will be specified in advance.} and let $t_n=n\Delta$ for $ n\in {\ensuremath{\mathbb N}} $. Define
\[
\xi_n:= X(\o(t_n))-X(\o(t_{n-1})),
\]
so that
\begin{equation}
\label{eq:19}
X(\o(t))=\sum_{n=1}^{N} \xi_n + \left[X(\o(t))-X(\o(t_N))\right], \quad N=
\lfloor t/\Delta\rfloor.
\end{equation}
Recall also that
$\a,v^*$ are the constants appearing in Theorem~\ref{coupling}.
\begin{corollary}
\label{cor:wf}\ Let $f:{\ensuremath{\mathbb R}} \mapsto [0,\infty)$ be such that $e^{-|x|}f^2(x)\in L^1({\ensuremath{\mathbb R}} )$. Then
\begin{equation}
\label{eq:20}
C_f\equiv \sup_{\o\in
\O_{\rm F}}{\ensuremath{\mathbb E}} _\o\left[f(\xi_1)^2\right]<\infty.
\end{equation}
Moreover, there exists a constant $\gamma>0$ such that
\begin{align}
\label{eq:20tris}
\sup_{\o\in \O_{\rm F}}|{\ensuremath{\mathbb E}} _\o\left[f(\xi_n)\right]- {\ensuremath{\mathbb E}} _\nu\left[f(\xi_1)\right]|=
O(e^{-\gamma n^\a}) \quad \forall n\ge 1,
\end{align}
\begin{gather}
\label{eq:13}
\sup_{\o\in \O_{\rm F}}|{\operatorname{Cov}}_\o\left(\xi_j,\xi_n \right)-{\operatorname{Cov}}_\nu\left(\xi_1,\xi_{n-j}\right)|= O(e^{- \gamma j^\a})\wedge
O(e^{-\gamma(n-j)^\a})\quad \forall j<n
\end{gather}
and
\begin{gather}
\label{eq:13fabio}
\sup_{\o\in \O_{\rm F}}|{\operatorname{Cov}}_\o\left(f(\xi_j),f(\xi_n)
\right)|=O(e^{-\gamma (n-j)^\a}), \quad \ \forall j\le n,
\end{gather}
where the constants in the r.h.s.\ of \eqref{eq:20tris} and
\eqref{eq:13fabio} depend on $f$ only through the constant $C_f$.
Finally, for any $k,n \in {\ensuremath{\mathbb N}} $ such that $v^*k>n v_{\rm max}$ and for any bounded $F:{\ensuremath{\mathbb R}} ^n\mapsto {\ensuremath{\mathbb R}} $,
\begin{equation}
\label{eq:21}
\sup_{\o\in \O_{\rm F}}\Big|
{\ensuremath{\mathbb E}} _\o\left[F\Bigl(\xi_{k},\xi_{k+1},\dots,\xi_{k+n-1}\Bigr)\right] -
{\ensuremath{\mathbb E}} _\nu\left[F\Bigl(\xi_{1},\xi_{2},\dots,\xi_{n}\Bigr)\right]\Big|=
O\left(e^{-\gamma t_k^\a}\right).
\end{equation}
\end{corollary}
To prove Theorem~\ref{coupling} we will require a technical result, Theorem~\ref{th:key3} below, which can informally be summarized as follows:
\begin{itemize}
\item Starting from $\o\in \O^*$, at any fixed large time $t$, with high probability the configuration satisfies
WSC apart from an interval behind the front $X(\o(t))$ of length proportional to
$t^\epsilon$.
\item If the above property is true at time $t$, then at a later
time $t'=t+ \text{const}\times t^\epsilon$ the law of the process will be very
close to $\pi$ apart from a small interval behind the front
where the strong spacing property will occur with high probability.
\end{itemize}
Formally, fix a constant $\kappa$ to be chosen later on and $t>0$. Let
$\ell\equiv t^{\epsilon}$, where $\epsilon$ is the exponent appearing in the WSC, and let $t_\ell=t-\kappa\ell/v_{\rm
min}$. Let $\ensuremath{\mathcal S}_\ell$ denote the set of
configurations which fail to satisfy SSC
in the interval $[-3(v_{\rm max}/v_{\rm min})\kappa\,\ell,- \kappa \log \ell)\cap {\ensuremath{\mathbb Z}} $ and let
$\ensuremath{\mathcal W}_{\ell,t}$ be those
configurations which fail to satisfy WSC
in the interval $[-v_{\rm min}t,- \ell)\cap {\ensuremath{\mathbb Z}} $.
\begin{theorem}
\label{th:key3}
It is possible to choose $\d$ small enough and $\kappa$ large enough depending only on $p$ in such a way
that for all $t$ large enough the following
holds:
\begin{align}
\label{1} &\sup_{\o\in \O_{\rm F}}\m^t_\o\left(\ensuremath{\mathcal W}_{\ell,t}\right) = O(e^{-t^{\epsilon/2}}),\\
\label{2}
&\sup_{\o\in \O_{\rm F}}\m^t_\o\left(\ensuremath{\mathcal S}_\ell \mid \ensuremath{\mathcal F}_{t_\ell}\right)= O(t^{-7\epsilon}) +
{\mathds 1}_{\ensuremath{\mathcal W}_{\ell,t}}(\o(t_\ell)),\\
\label{3} &\sup_{\o\in \O_{\rm F}}\|\mu_\o^t(\cdot \mid
\ensuremath{\mathcal F}_{t_\ell})-\pi\|_{[-v_{\rm min}t,-3(v_{\rm max}/v_{\rm min})\kappa\ell]}= O(e^{-t^{\epsilon/2}}) +
{\mathds 1}_{\ensuremath{\mathcal W}_{\ell,t}}(\o(t_\ell)).
\end{align}
Moreover, $\kappa$ stays bounded as $p\downarrow 0.$
\end{theorem}
\subsection{Non-equilibrium properties of the law behind the front: Proof of Theorem~\ref{th:key3}}
We begin by proving \eqref{1}. Bounding $\sup_{\o\in
\O_{\rm F}}\m^t_\o\left(\ensuremath{\mathcal W}_{\ell,t}\right)$ from above is equivalent to bounding $\sup_{\o\in
\O_{\rm F}}{\ensuremath{\mathbb P}} _\o(\o(t)\in \ensuremath{\mathcal W}^*_{\ell,t})$ from above, where
$\ensuremath{\mathcal W}^*_{\ell,t}$ denotes the set of configurations $\o\in \O^*$ which do not
satisfy the
spacing condition in $[X(\o)-v_{\rm min}t,X(\o)-\ell]$.
Using Lemma~\ref{linearspeed}, with probability greater than $1- e^{-\gamma
t}$ we can assume that $X(\o(t))\in [v_{\rm min}t, v_{\rm
max}t]$.
Next we observe that, for any $a\in [v_{\rm min}t, v_{\rm
max}t]$, the events $\{X(\o(t))=a\}$ and $\{\o(t)\in
\ensuremath{\mathcal W}^*_{\ell,t}\}$ imply that there exists $x\in {\ensuremath{\mathbb Z}} $ with the following properties:
\begin{itemize}
\item \hskip 1cm $0\le x \le a-\ell$;
\item \hskip 1cm The hitting time $\tau_x:=\inf\{s>0: X(\o(s))=x\}$ is smaller
than $t$;
\item \hskip 1cm $\o(t)$ is identically equal to one in the interval $I_x:=[x,x+\d (v_{\rm min}\,t)^{\epsilon}/2]$;
\item \hskip 1cm The linking event $F(x,a;\t_x,t)$ defined in Lemma~\ref{finitespeed}
occurred.
\end{itemize}
In conclusion, using a union bound twice (once for the
choice of $a\in [v_{\rm min}t, v_{\rm
max}t]$ and once for the choice of $x\in [0,a-\ell]$) together with the strong
Markov property at time $\t_x$, we get
\begin{align*}
& {\ensuremath{\mathbb P}} _\o(\o(t)\in \ensuremath{\mathcal W}^*_{\ell,t}) \\
\quad & \le e^{-\gamma t} + \sum_{a=v_{\rm min}t}^{v_{\rm max}t}\
\sum_{x=0}^{a-\ell } \ {\ensuremath{\mathbb P}} \left(F(x,a;\t_x,t)\right){\mathds
1}_{\{|x-a|\ge v_{\rm max}(t-\t_x)\}} \\
& \quad \qquad \, + \sum_{a=v_{\rm min}t}^{v_{\rm max}t}\ \sum_{x=0}^{a-\ell } \left[
\|{\ensuremath{\mathbb P}} ^t_{\o(\t_x)}-\pi\|_{I_x} + p^{\frac{\d}{2}( v_{\rm min}t)^{\epsilon}}\right]{\mathds 1}_{\{|x-a|\le v_{\rm max}(t-\t_x)\}} \\
\quad &\le (v_{\rm max}t)^2 \bigg[
e^{-\gamma t}+ 2e^{-\ell}+ \frac{\d}{2}( v_{\rm min}t)^{\epsilon}\,\left(\frac{c^*}{ q
}\right)
^{\frac{\d}{2}( v_{\rm min}t)^{\epsilon}} e^{-\frac{\ell}{v_{\rm max}}
\,({\rm gap}(\ensuremath{\mathcal L})\wedge m)}+ p^{\frac{\d}{2}( v_{\rm min}t)^{\epsilon}}\bigg].
\end{align*}
Above we used Lemma~\ref{finitespeed} in the case $
|x-a|\ge v_{\rm max}(t-\t_x)$ and \eqref{eq:1} of Corollary~\ref{cor:spacing}
otherwise. The statement \eqref{1} now follows by taking $\d$ small enough.
\medskip
We now prove \eqref{2}. As before we give the result in the East
process setting (\hbox{\it i.e.\ } for the law ${\ensuremath{\mathbb P}} _\o^t(\cdot\mid \ensuremath{\mathcal F}_s)$ and
$\ensuremath{\mathcal S}_{\ell}$ replaced by its random shifted version
$\ensuremath{\mathcal S}^*_{\ell}$). We decompose the interval $[X(\o(t))-3(v_{\rm max}/v_{\rm
min})\kappa\,\ell, X(\o(t)) -\kappa \log \ell\,]\cap {\ensuremath{\mathbb Z}} $ where we want SSC to
hold into $[X(\o(t_\ell)),X(\o(t))-\kappa \log \ell]$ and $[X(\o(t))-3(v_{\rm
max}/v_{\rm min})\kappa\,\ell,X(\o(t_\ell))].$\\
Note that by Lemma~\ref{linearspeed} we can ignore the events $\{X(\o(t_\ell))>X(\o(t))-\kappa \log \ell\}$ and $\{X(\o(t))-3(v_{\rm
max}/v_{\rm min})\kappa\,\ell > X(\o(t_\ell))\}.$ \\
\noindent
We now proceed in two steps: \begin{inparaenum}[(1)]
\item we show that
SSC occurs with high probability in the first interval. Here we do not use the condition that
$\o(t_\ell)\notin \ensuremath{\mathcal W}^*_{\ell,t}$.
\item we prove the same
statement for the second
interval. Here instead the fact that $\o(t_\ell)\notin
\ensuremath{\mathcal W}^*_{\ell,t}$ will be crucial.
\end{inparaenum}
\\
- {\it Step (1).} Let $\Delta\equiv 5\log \ell/(|\log p|\wedge 1)$. For any
intermediate time $s\in [t_\ell, t -(\kappa/v_{\rm
max})\log \ell]$,
Corollary~\ref{cor:spacing} together with the Markov property at time
$s$ show that
\begin{align}
{\ensuremath{\mathbb P}} _\o&\left(\o(t)_x=1\ \forall x\in [X(\o(s)),X(\o(s))+\Delta]\mid
\ensuremath{\mathcal F}_s\right)\nonumber\\
&\le \left\|{\ensuremath{\mathbb P}} _\o(\cdot\mid \ensuremath{\mathcal F}_s)-\pi\right\|_{[X(\o(s)),X(\o(s))+\Delta]}+
\pi\left(\o_x=1\ \forall x\in [X(\o(s)),X(\o(s))+\Delta]\right)
\nonumber\\
\label{eq:4}
&\le \Delta\left(\frac{c^*}{q}\right)^\Delta e^{-(t-s)({\rm gap}(\ensuremath{\mathcal L})\wedge m)} +p^{\Delta}=O(t^{-10\epsilon}).
\end{align}
Above we used the fact that $t-s\ge (\kappa/v_{\rm max})\log
\ell$. Hence, $\kappa$ can be chosen depending only on
$p$ such that \eqref{eq:4} holds and $\kappa$ stays bounded as $p\downarrow 0.$\\
\noindent
We now take the union of the random intervals $[X(\o(s)),X(\o(s))+\Delta]$
over discrete times $s$ of the form $s_j= t_\ell+ j/\ell^2$,
$j=0,1,\dots, n$ and $n$ such that $s_n=t -(\kappa/v_{\rm max})\log
\ell$. Thus $n=O(\ell^3)=O(t^{3\epsilon})$. The aim here is to show
that, with high probability,
the above union is actually an interval containing the target one $[X(\o(t_\ell)),
X(\o(t))-\kappa \log \ell]$, with the additional property that it does not contain
a sub-interval of length $\Delta$ where $\o(t)$ is constantly equal to one (which will then imply~\eqref{2}, with room to spare).\\
\noindent
We now upper bound the probability that the set
$\cup_{j=0}^n[X(\o(s_j)),X(\o(s_j))+\Delta]$ is not an interval. It is an easy observation that, on the event $\{X(\o(s_n))>X(\o(s_0))\}$, this can only happen if $X(\o(s_{j+1}))-X(\o(s_j))> \Delta$ for some $j=0,1,\ldots, n-1.$
Now by the lower bound in Lemma~\ref{linearspeed}
\[
{\ensuremath{\mathbb P}} (X(\o(s_n))>X(\o(s_0)))\ge 1-e^{-ct^{\epsilon}}
\]
for some constant $c.$
Also
\begin{align*}
\sum_j {\ensuremath{\mathbb P}} _\o&(X(\o(s_{j+1}))-X(\o(s_j))\ge \Delta)\\
&\le \sum_j\mathbb{E}\left[
{\ensuremath{\mathbb P}} _\o\left(F(X(\o(s_j)),X(\o(s_j))+\Delta; s_j,s_{j+1})\mid
\ensuremath{\mathcal F}_{s_j}\right)\right]\\
&\le n e^{-\Delta} = O(t^{-8\epsilon}).
\end{align*}
Above $F(X(\o(s_j)),X(\o(s_j))+\Delta; s_j,s_{j+1})$ is the linking event
and we used Lemma~\ref{finitespeed} because $\Delta\gg (s_{j+1}-s_j)$.
Moreover, Lemma~\ref{linearspeed} implies that $\kappa$ can be chosen (bounded as $p\downarrow 0$), such that with probability greater than
\[
1-e^{-\gamma (t-s_0)}-e^{-\gamma (t-s_n)}=1-O(t^{-10 \epsilon}),
\]
the front $X(\o(t))$ satisfies
\begin{align*}
X(\o(t))\le X(\o(s_n))+v_{\rm max}(t-s_n)\le X(\o(s_n))+\kappa\log
\ell.
\end{align*}
Thus, with probability $1-O(t^{-10 \epsilon})$,
\[
[X(\o(t_\ell)),X(\o(t))-\kappa \log \ell] \subset \cup_{j=0}^n[X(\o(s_j)),X(\o(s_j))+\Delta].
\]
Finally, using \eqref{eq:4} and a union bound, the probability that there exists $j\le n$ such that
$\o(t)$ is identically equal to one in $[X(\o(s_j)),X(\o(s_j))+\Delta]$ is
$O(t^{-7\epsilon})$ uniformly in the configuration at time $t_\ell$.
In conclusion we proved that SSC holds with probability $1-
O(t^{-7\epsilon})$ in an interval containing $[X(\o(t_\ell)),
X(\o(t))-\kappa \log \ell]$. The first step is complete.\\
- {\it Step (2).} Let $x^*=\max\{x\le X(\o(t_\ell))- 3\kappa (v_{\rm
max}/v_{\rm min})\ell: \ \o(t_\ell)_x=0\}$. Since $\o(t_\ell)\notin
\ensuremath{\mathcal W}^*_{\ell,t}$ such a zero exists.
Moreover, $\o(t_\ell)\notin \ensuremath{\mathcal W}^*_{\ell,t}$ implies that
$\o(t_\ell)$ has a zero in every sub-interval of $[x^*, X(\o(t_\ell))-
\ell]$ of length $\d t^\epsilon=\d \ell$. Hence we can apply Proposition
\ref{prop:key1} to the interval $[x^*, X(\o(t_\ell))]$ to get that
\[
\|\,{\ensuremath{\mathbb P}} ^t_\o(\cdot \mid \ensuremath{\mathcal F}_{t_\ell})-\pi\|_{[x^*, X(\o(t_\ell))]}=
O(e^{- t^{\epsilon/2}}),
\]
by choosing $\kappa$ large enough. Since by Remark~\ref{speedbnd} $v_{\rm min}\rightarrow 1$ as $p \downarrow 0$, we can choose $\kappa$ to be bounded as $p \downarrow 0$.
Also
\[
\pi\left(\o:\ \o \text{
violates SSC in }[x^*, X(\o(t_\ell))]\right)= O(t^{-7\epsilon}).
\]
Thus we have proved that SSC holds in $[x^*, X(\o(t_\ell))]$ with probability $1-O(t^{-7\epsilon})$.
Finite speed of propagation in the
form of Lemma~\ref{linearspeed} guarantees that, with probability $1- O(e^{-\gamma
(t-t_\ell)})$, $x^* < X(\o(t))-2\kappa (v_{\rm max}/v_{\rm
min})\ell$.
The proof of \eqref{2} is complete.
\medskip
It remains to prove \eqref{3}. Let $\L:= [-v_{\rm
min}t,-3(v_{\rm max}/v_{\rm min})\kappa\ell]\cap {\ensuremath{\mathbb Z}} $ and let $A\subset \{0,1\}^{\L}$. Recall
Definition~\ref{shift} of the shifted configuration $\vartheta_a \o$ and that $t_{\ell}=t-\kappa \ell/{v_{\rm min}}$.
Then \eqref{3} follows once we show that
\[
|{\ensuremath{\mathbb P}} _\o(\vartheta_{X(\o(t))} \o(t)_\L\in A\mid \ensuremath{\mathcal F}_{t_\ell})-\pi(A)|
\le e^{-t^{\epsilon/2}}
\]
whenever $\o(t_\ell)$ satisfies WSC in the interval
$I=[X(\o(t_\ell))-v_{\rm min}t, X(\o(t_\ell))-\ell]$. This property is
assumed henceforth.
Let us decompose ${\ensuremath{\mathbb P}} _\o(\vartheta_{X(\o(t))} \o(t)_\L\in A\mid
\ensuremath{\mathcal F}_{t_\ell})$ according to the value of the front:
\begin{gather*}
{\ensuremath{\mathbb P}} _\o\left(\vartheta_{X(\o(t))} \o(t)_\L\in A\mid
\ensuremath{\mathcal F}_{t_\ell}\right)=
\sum_{a\in {\ensuremath{\mathbb Z}} }{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{a}
\o(t)_\L\in A\}}\,{\mathds 1}_{\{X(\o(t))=a\}}\mid \ensuremath{\mathcal F}_{t_\ell}\right].
\end{gather*}
Using Lemma~\ref{linearspeed}, the event $\{0<X(\o(t))-X(\o(t_\ell))\le
v_{\rm max}(t-t_\ell)\}$ occurs with probability
greater than $1-e^{-\gamma(t-t_\ell)}$. Thus
\begin{gather*}
\sum_{a\in {\ensuremath{\mathbb Z}} }{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{a}
\o(t)_\L\in A\}}\,{\mathds 1}_{\{X(\o(t))=a\}}\mid
\ensuremath{\mathcal F}_{t_\ell}\right]\\
=
\sum_{\substack{ a\in {\ensuremath{\mathbb Z}} \\ 0<a-X(\o(t_\ell))\le v_{\rm max}(t-t_\ell)}}{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{a}
\o(t)_\L\in A\}}\,{\mathds 1}_{\{X(\o(t))=a\}}\mid
\ensuremath{\mathcal F}_{t_\ell}\right] + O(e^{-\gamma(t-t_\ell)}).
\end{gather*}
By definition, the event $\{\vartheta_{a}
\o(t)_\L\in A\}$ is the same as the event $\{
\o(t)_{\L+a}\in A\}$.
Using the restriction that $|a-X(\o(t_\ell))|\le v_{\rm max}(t-t_\ell)$, the choice of $\L$ and the fact that $(v_{\rm max}/v_{\rm min})\kappa\ell \ge v_{\rm max}(t-t_\ell)$, we get $\L+a \subset (-\infty,X(\o(t_\ell))-2( v_{\rm max}/v_{\rm min})\kappa\ell]$. Thus, the event $\{
\o(t)_{\L+a}\in A\}$ satisfies the hypothesis of Proposition~\ref{prop:decor}, which can then be applied to
each term in the above sum to
get
\begin{align*}
&\sum_{\substack{a\in {\ensuremath{\mathbb Z}} \\ 0<a-X(\o(t_\ell))\le v_{\rm max}(t-t_\ell)}}\!\!\!{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{a}
\o(t)_\L\in A\}}\,{\mathds 1}_{\{X(\o(t))=a\}}\mid \ensuremath{\mathcal F}_{t_\ell}\right]
\\
=&\sum_{\substack{a\in {\ensuremath{\mathbb Z}} \\ 0<a-X(\o(t_\ell))\le v_{\rm max}(t-t_\ell)}}
\!\!\!{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{a}
\o(t)_\L\in A\}}\mid \ensuremath{\mathcal F}_{t_\ell}\right]\,{\ensuremath{\mathbb E}} _\o\left[{\mathds
1}_{\{X(\o(t))=a\}}\mid \ensuremath{\mathcal F}_{t_\ell}\right] +O(\ell \,e^{-\ell}).
\end{align*}
Finally we claim that, for any $a$ such that $0<a-X(\o(t_\ell))\le v_{\rm
max}(t-t_\ell)$, if $\d$ is chosen small enough and $\kappa$ large
enough depending on $p$ (bounded as $p \downarrow 0$),
\begin{equation}
\label{eq:any a}
{\ensuremath{\mathbb E}} _\o\left[{\mathds 1}_{\{\vartheta_{a}
\o(t)_\L\in A\}}\mid \ensuremath{\mathcal F}_{t_\ell}\right] = \pi(A) + O(e^{-t^{\epsilon/2}}).
\end{equation}
To prove it we apply Proposition~\ref{prop:key1} to the interval $I=[X(\o(t_\ell))-v_{\rm min}t, X(\o(t_\ell))-\ell]$ to get that
\begin{equation}
\label{eq:5}
\|{\ensuremath{\mathbb P}} ^t_\o(\cdot\mid \ensuremath{\mathcal F}_{t_\ell})-\pi\|_I \le
|I|\left(\frac{c^*}{ q}\right)^{\d |I|^\epsilon}
e^{-(t-t_\ell)({\rm gap}(\ensuremath{\mathcal L})\wedge m)},
\end{equation}
where $|I|=O(t)$ is the length of $I$, since by assumption
$\o(t_\ell)$ satisfies WSC in $I$. Because of our choice of the
parameters $(\ell, t_\ell)$ the r.h.s.\ of \eqref{eq:5} is
$O(e^{-t^{\epsilon/2}})$ if $\d,\kappa$ are chosen small enough and large enough
respectively depending on $p$. Since by Remark~\ref{speedbnd} $v_{\rm min}\rightarrow 1$ as $p \downarrow 0,$ $\kappa$ can be chosen to be bounded as $p \downarrow 0.$\\
\noindent
The claim now follows because $\{\o:
\vartheta_a\o \in A\}\subset \{0,1\}^{\L+a}$, with
\begin{align*}
\L+a &=[-v_{\rm min}t +a, -3(v_{\rm max}/v_{\rm min})\kappa\ell +a]\\
&\subset [X(\o(t_\ell))-v_{\rm min}t_\ell,\, X(\o(t_\ell))- (v_{\rm max}/v_{\rm
min})\kappa\ell\,] \subset I,
\end{align*}
together with the translation invariance of $\pi$ expressed by $\pi\left(\{\o: \vartheta_a\o \in A\}\right)=\pi(A)$.
This establishes \eqref{3} and concludes the proof of Theorem~\ref{th:key3}.
Notice that at all points in the proof, $\kappa$ was chosen to be bounded as $p \downarrow 0.$
\qed
\subsection{On the rate of convergence to
the invariant
measure $\nu$: Proof of Theorem~\ref{coupling}}
The proof is based on a coupling argument. There exists $v^*>0$ such
that, for any $t$ large enough
and for any pair of starting
configurations $(\o,\o')\in \O_{\rm F}\times \O_{\rm F}$,
\begin{equation}
\label{eq:6}
\|\mu_\o^t-\mu_{\o'}^t\|_{[-v^*t,\,0]} \le c' e^{-t^\a},
\end{equation}
with $(c',\a)$ independent of $(\o,\o')$. Also $v^*, \alpha $ can be chosen uniformly as $p\downarrow 0.$ Once this step is
established and using the invariance of the measure $\nu$ under the
action of the semigroup $e^{t\ensuremath{\mathcal L}^{\rm F}}$,
\begin{align*}
\|\,\mu_\o^t -\nu\|_{[-v^*t,\,0]}&=
\|\,\mu_\o^t -\int d\nu(\o')\mu_{\o'}^t\,\|_{[-v^*t,\,0]}\\
&\le \int d\nu(\o')\|\,\mu_\o^t -\mu_{\o'}^t\,\|_{[-v^*t,\,
0]}\le c' e^{-t^\a}.
\end{align*}
We now prove \eqref{eq:6}. We first fix a bit of notation.
Given $\epsilon\in (0,1)$ and a large $t>0$, let $\Delta_1=(\kappa/v_{\rm
min}) t^\epsilon$ where $\kappa$ is the constant appearing in Theorem
\ref{th:key3}, let
$\Delta_2=\kappa\epsilon\log t$ and define $\Delta=\Delta_1+\Delta_2$. We then set
\[
t_0=(1-\epsilon)t,\quad t_n=t_{n-1}+\Delta, \quad n=1,\dots N,\quad N=
\lfloor\epsilon\, t/\Delta\rfloor.
\]
It will be
convenient to refer to the time lag $[t_{n-1},t_n)$ as the
$n^{\rm th}$ round. In turn we split each round into two parts: from
$t_{n-1}$ to $s_n:= t_{n-1}+\Delta_1$ and from $s_n$ to $t_n$. We will
refer to the first part of the round as the \emph{burn-in part} and to
the second part as the \emph{mixing part}. We also set
$I_n=[-v_{\rm min}t_n+ 2v_{\rm max} \Delta n, 0]$. Observe that $I_n\neq
\emptyset$ for any $n\le N+1$ if $\epsilon$ is chosen smaller
than $v_{\rm min}/v_{\rm max}$ and $t$ is large enough depending on $\epsilon$.
Next, for any pair $(\mu,\mu')$ of probability
measures on a finite set, we denote by ${\rm MC}(\mu,\mu')$ their
\emph{maximal coupling}, namely the one that achieves the total variation
distance between $\mu$ and $\mu'$ in the variational formula (see, e.g.,~\cite{LPW})
\[
\|\mu-\mu'\|=\inf\{M(\o\neq \o'):\ M \text{ a coupling of } \mu,\mu'\}.
\]
If $(\mu,\mu')$ are probability
measures on $\O$ and $\L$ is a finite subset of
${\ensuremath{\mathbb Z}} $, we define the \emph{
$\L$-maximal coupling} ${\rm MC}_\L(\mu,\mu')$ as follows:
\begin{enumerate}[a)]
\item first sample $(\o_\L,\o_\L')$ according to
the maximal coupling of the marginals of $\mu,\mu'$ on $\O_\L$;
\item then sample \emph{independently}
$(\o_{{\ensuremath{\mathbb Z}} \setminus \L}, \o'_{{\ensuremath{\mathbb Z}} \setminus \L})$ according to their
respective conditional distribution $\mu(\cdot\mid \o_\L), \mu'(\cdot\mid \o'_\L)$.
\end{enumerate}
Finally the \emph{basic coupling} for the
East process will be the one in which two configurations evolve
according to the graphical construction using the same Poisson clocks
and the same coin tosses.
We are now ready to recursively construct the coupling $M_{\o,\o'}^t$ of
$\mu_\o^t,\mu_{\o'}^t$ satisfying \eqref{eq:6}. For lightness of
notation, in the sequel the starting configurations $(\o,\o')$ will
sometimes be omitted. \\
\begin{definition}[The coupling $M_{\o,\o'}^t$] We first define a
family $\{M^{(n)}\}$ of couplings for
$\{\left(\mu_\o^{t_n},\mu^{t_n}_{\o'}\right)\}_{n=0}^N$ as follows.
$M^{(0)}$ is the trivial product coupling. Given $M^{(n)}$, the coupling $M^{(n+1)}$ at time $t_{n+1}$ is constructed
according to the following algorithm:
\begin{enumerate}[(a)]
\item Sample $(\o(t_n),\o'(t_n))$ from $M^{(n)}$. If they
coincide in the interval $I_n$ then let them evolve according
to the basic coupling for a time lag $\Delta$;\\
\item otherwise, sample $(\o(s_n),\o'(s_n))$ at the
end of the burn-in part of round $(n+1)$ via the
$\L_n$-maximal coupling ${\rm MC}_{\L_n}$ for the laws
$\mu_\o^{s_n}(\cdot\mid \ensuremath{\mathcal F}_{t_n})$ and $\mu_{\o'}^{s_n}(\cdot\mid\ensuremath{\mathcal F}_{t_n})
$ at the configurations $(\o(t_n),\o'(t_n))$ from step (a). Here $\L_n=[-v_{\rm min}s_n, -3\left(v_{\rm max}/v_{\rm
min}\right)\kappa t^\epsilon]$.\\
\begin{enumerate}[(i)]
\item If $(\o(s_n),\o'(s_n))$ are not equal in the interval $\L_n$,
then let them evolve for the mixing part of the round (i.e., from
time $s_n$ to
time $t_{n+1}$) via
the basic coupling.\\
\item If instead they agree on $\L_n$, then search for the rightmost common zero of
$(\o(s_n),\o'(s_n))$ in
$\L_n$ and call $x_*$ its position. If there is no such zero,
define $x_*$ to be the right boundary of $\L_n$. Next sample a
Bernoulli random variable $\xi$ with
$\text{\rm Prob}(\xi=1)=e^{-2\Delta_2}$. The value $\xi=1$ has to be
interpreted as corresponding to the event that the two Poisson clocks
associated to $x_*$ and to the origin in the graphical construction did not ring during the mixing part of
the round.
\vskip 0.1cm \begin{enumerate}[(1)]
\item
If $\xi=1$, set $\o(t_{n+1})_{x_*}=\o(s_n)_{x_*}$ and
similarly for $\o'$. The remaining part of the configurations at
time $t_{n+1}$ is sampled using the basic coupling to the left of
$x_*$ and the maximal coupling for the East process in the
interval $[x_*+1,-1]$ with boundary condition at $x_*$ equal to $\o(s_n)_{x_*}$.
\vskip 0.1cm
\item If $\xi=0$, we let $\left(\o(s_n),\o'(s_n)\right)$ evolve under the basic
coupling conditioned on having at least one ring at $x_*$, at
the origin, or at both.
\end{enumerate}
\end{enumerate}
\end{enumerate}
The final coupling $M^t_{\o,\o'}$ will be obtained by first sampling $\left(\o(t_N),\o'(t_N)\right)$
from $M^{(N)}$ and then by applying the basic coupling for the time
lag $(t-t_N)$.
\end{definition}
It is easy to check that $\{M^{(n)}\}$ is indeed a family of couplings
for $\{\left(\mu_\o^{t_n},\mu^{t_n}_{\o'}\right)\}_{n=0}^N$. Define now
\[
p_n:= M^{(n)}\left(\o\neq \o' \text{ in the interval } I_n\right)
\]
and recall that $\epsilon$ is the exponent entering in the definition of
the round length $\Delta$.
\begin{claim}\label{cm1} There exists $\epsilon_0>0$ such that, for all
$\epsilon<\epsilon_0$ and all $t$ large enough depending on $\epsilon$,
\[
p_N= O(e^{-t^\a}),
\]
for some positive $\a=\a(\epsilon)$.
\end{claim}
\begin{proof}
The claim follows from the recursive inequality:
\begin{equation}
\label{eq:8}
p_{n+1}\le Ce^{-t^{\epsilon/2}} +p_n(1-e^{-2\Delta_2}/2),
\end{equation}
for some constant $C$.
In fact, if we assume \eqref{eq:8} and recall
that $e^{-2\Delta_2}= t^{-2\kappa \epsilon}$, we get
\begin{align*}
p_N & \le C e^{-t^{\epsilon/2}}[1+(1-e^{-2\Delta_2}/2)+(1-e^{-2\Delta_2}/2)^2+\ldots] + \left(1-e^{-2\Delta_2}/2\right)^N \\
& \le 2C e^{-t^{\epsilon/2}}t^{2\kappa \epsilon} + \left(1-e^{-2\Delta_2}/2\right)^N=O(e^{-t^{\epsilon/3}}),
\end{align*}
provided that $1-\epsilon(1+ 2\kappa)> \epsilon/3$, \hbox{\it i.e.\ } $\epsilon< 3/(4+6\kappa),$ since $N>c t^{1-\epsilon}$ for some constant $c$.
Notice, crucially, that since $\kappa$ remains bounded as $p \downarrow 0$ in the statement of Theorem \ref{th:key3}, the constants $\epsilon_0$ and $\alpha(\epsilon)$ can be chosen uniformly as $p \downarrow 0.$\\
\noindent
To prove \eqref{eq:8} we use Lemma~\ref{finitespeed} together with Theorem~\ref{th:key3}.
We begin by examining the possible occurrence of two very unlikely
events each of which will contribute to
the constant term in \eqref{eq:8}.
\begin{itemize}[$\bullet$]
\item The first possibility is that $\o(t_{n})= \o'(t_{n}) \text{ in the
interval } I_n$ and $F(a_{n},a_{n+1};t_n,t_{n+1})$
occurred. Here $a_n=-v_{\rm
min}t_n+2v_{\rm max}\Delta n$ is the left boundary of $I_n$
and similarly for $a_{n+1}$. The linking event could in fact move
possible discrepancies between $\o(t_n),\o'(t_n)$ sitting outside $I_n$ to the inside of $I_{n+1}$. Since $|a_{n}-a_{n+1}|\ge v_{\rm
max}(t_{n+1}-t_n)$, Lemma~\ref{finitespeed} shows that this case gives a
contribution to $p_{n+1}$ which is $O(e^{-|a_{n}-a_{n+1}|})=O(e^{-v_{\rm max}t^\epsilon})$.
\item The second possibility is that either $\o(t_n)$ or $\o'(t_n)$ do not
satisfy the $(\d,\epsilon)$-weak
spacing condition in $[-v_{\rm min}t_n,-t^\epsilon_n]$. The bound
\eqref{1} of Theorem
\ref{th:key3} shows that the contribution of such a case is
$O(e^{-t^{\epsilon/2}})$.
\end{itemize}
Having discarded the occurrence of the above ``extremal'' situations, we now assume that $(\o(t_{n}),\o'(t_{n}))$ are such that: (i) they
are different in the interval
$I_n$; (ii) they satisfy the $(\d,\epsilon)$-weak
spacing condition in $[-v_{\rm min}t_n,-t^\epsilon_n]$. It will be useful
to denote by $\ensuremath{\mathcal G}_n$ the
set of pairs $(\o,\tilde \o)$ fulfilling (i) and (ii) above.
We will
show that, \emph{uniformly} in $(\o,\tilde\o)\in \ensuremath{\mathcal G}_n$, the
probability that at the end of the round $\left(\o(\Delta),\tilde \o(\Delta)\right)$ are not
coupled inside the interval $I_{n+1}$ is smaller
than $(1- \frac 12 e^{-2\Delta_2})$. That clearly
proves the second term in \eqref{eq:8}.
To prove that, recall the definition of the $\L_n$-maximal coupling
$MC_{\L_n}$, fix $(\o,\tilde\o)\in \ensuremath{\mathcal G}_n$ and consider the event $\ensuremath{\mathcal B}$
that:
\begin{enumerate}[(i)]
\item at the
end of the \emph{burn-in} part of the round $\o(\Delta_1)=\tilde\o(\Delta_1)$ in
$\L_n$,
\item the vertex $x_*$ appearing in (ii) of step (b) of
Definition~\ref{coupling} is within $\epsilon \log t$ from the right
boundary of $\L_n$ and $\o(\Delta_1)_{x_*}=\tilde\o(\Delta_1)_{x_*}=0$,
\item
$\o(\Delta_1)$ and $\tilde \o(\Delta_1)$ satisfy SSC in the interval $[-3(v_{\rm max}/v_{\rm min})\kappa
t^\epsilon,-\kappa \epsilon \log t]$.
\end{enumerate}
Theorem~\ref{th:key3} proves that, uniformly in $\o,\tilde\o\in \ensuremath{\mathcal G}_n$,
\[
MC_{\L_n}(\ensuremath{\mathcal B}) \ge 1- O(e^{-t^{\epsilon/2}}) - O(t^{-7\epsilon}) -
p^{\epsilon \log t}= 1- O(p^{\epsilon \log t}).
\]
The first error term takes into account the variation distance
from $\pi$ of the marginals in $\L_n$ of ${\ensuremath{\mathbb P}} _\o^{\Delta_1}$ and ${\ensuremath{\mathbb P}} _{\tilde
\o}^{\Delta_1}$, the second error term bounds the probability that
either $\o(\Delta_1)$ or $\tilde\o(\Delta_1)$ do not satisfy the SSC
condition in the interval $[-3(v_{\rm max}/v_{\rm min})\kappa
t^\epsilon,-\kappa \epsilon \log t]$ and the third term bounds the $\pi$-probability that the event in item (ii) does not occur.
Next we claim that, for any $\kappa$ large enough and any $z\in \L_n$ at
distance at most $\epsilon\log t$ from the right boundary of $\L_n$,
\begin{gather}
\sup_{\o,\tilde\o\in \ensuremath{\mathcal G}_n}{\ensuremath{\mathbb P}} \left(\o(\Delta)\neq \tilde \o(\Delta) \text{ in } I_{n+1}\mid
\ensuremath{\mathcal B},\{x_*=z\}, \{\xi =1\}\right)\nonumber \\
\label{eq:10}
\le e^{-|a_n-a_{n+1}|}\ + 3\kappa t^\epsilon
\left(\frac{c^*}{ q}\right)^{\epsilon \log t}\, e^{-\Delta_2({\rm gap}(\ensuremath{\mathcal L})\wedge m)}= O(t^{-2\epsilon}).
\end{gather}
The first term in the r.h.s.\ is the probability that the linking
event $F(a_n,a_{n+1};\Delta_1,\Delta)$ occurred. The second term comes
from Proposition~\ref{prop:key1} and it bounds from above the probability that, under the
maximal coupling for the East process in the interval $[x_*+1,-1]$ and
in a time lag $\Delta_2$, we see a discrepancy.
In conclusion, the probability that $\o(\Delta)= \tilde \o(\Delta)$ in
$I_{n+1}$ is larger
than
\[
MC_{\L_n}(\ensuremath{\mathcal B})(1-o(1)){\ensuremath{\mathbb P}} (\xi=1)\ge \frac 12 e^{-2\Delta_2},
\]
thus proving the claim.
\end{proof}
We are now in a position to finish the proof of Theorem~\ref{coupling}. Let
$v^*\equiv v_{\rm min} -3\epsilon v_{\rm max}$ and let $a_N=-v_{\rm min}t_N+\epsilon v_{\rm max} t$ be the left boundary of the interval $I_N=[a_N,0]$.
Since by Remark~\ref{speedbnd}, $v_{\rm min}$ converges to $1$ as $p \downarrow 0,$ $v^*$ can be chosen uniformly as $p \downarrow 0.$\\
\noindent
Pick two configurations
$\o(t_N),\o'(t_N)$ at time $t_N$ and make them evolve under the basic
coupling until time $t$. Clearly the events
$\{\o(t_N)_x=\o'(t_N)_x\ \forall x\in I_N\}$ and $\{\exists\, x\in [-v^*t,0]:\ \o_x(t)\neq
\o'_x(t)\}$ imply the linking event
$F(a_N,-v^*t;t_N,t)$ from Lemma~\ref{finitespeed}. By construction
$|v^*t-a_N|=\epsilon v_{\rm max}t \ge v_{\rm max}(t-t_N)$ for large enough $t$. Therefore,
\begin{align*}
M^t_{\o,\o'}(\exists\, x\in [-v^*t,0]:\ \o_x\neq \o'_x)&\le p_N +
{\ensuremath{\mathbb P}} \left(F(a_N,-v^*t;t_N,t)\right)\\
&\le O(e^{-t^\a})+ e^{-\epsilon v_{\rm max}t},
\end{align*}
as required. Moreover, by the proof of Claim \ref{cm1}, $\alpha$ can be chosen uniformly as $p \downarrow 0$. Thus we are done.\qed
\subsection{Mixing properties of the front increments: Proof of Corollary~\ref{cor:wf}}
To prove \eqref{eq:20} we observe that, for any $n\ge v_{\rm max}\Delta$, the event $|\xi_1|\ge n$
implies the occurrence of the linking event $F(0,n;0,\Delta)$. Lemma~\ref{finitespeed} now gives that
\begin{equation}
\label{eq:9}
{\ensuremath{\mathbb E}} _\o\left[f(\xi_1)^2\right] \le \max_{|x|\le v_{\rm max}\Delta}f(x)^2
+\sum_{n\ge v_{\rm max}\Delta} f(n+1)^2 e^{-n} <\infty.
\end{equation}
In order to prove \eqref{eq:20tris} we apply the Markov property at
time $t_{n-1}$ and write
\begin{gather*}
{\ensuremath{\mathbb E}} _\o\left[f(\xi_n)\right]=
\int d\mu^{t_{n-1}}_\o(\o')\, {\ensuremath{\mathbb E}} _{\o'}\left[f(\xi_1)\right].
\end{gather*}
At this stage we would like to appeal to Theorem~\ref{coupling} to get the
sought statement. However, Theorem~\ref{coupling} only states that, for
any $t$ large enough, $\mu^t_\o$ is very close to the
invariant measure $\nu$ in the interval $[-v^*t,0]$. In order to
overcome this problem,
for any $\o\in \O_{\rm F}$ and any $t>0$ we define $\Phi_t(\o)\in \O_{\rm F}$
as the configuration equal to $\o$ in $[-v^* t,0]$ and identically equal to $1$
elsewhere. Then, under the basic coupling, the front at time $t$
starting from $\Phi_t(\o)$ is different from the front starting from
$\o$
iff the linking event $F(-v^*t,0;0,\Delta)$ occurred.
In conclusion, if $v^* t_{n-1}\ge v_{\rm max}\Delta$,
\begin{align*}
\sup_{\o\in \O_{\rm F}}&\bigg|\int d\mu^{t_{n-1}}_\o(\o')\,
{\ensuremath{\mathbb E}} _{\o'}\left[f(\xi_1)\right] -
\int d\mu^{t_{n-1}}_\o(\o')\, {\ensuremath{\mathbb E}} _{\Phi_{t_{n-1}}(\o')}\left[f(\xi_1)\right]\bigg|\\
&\le
{\ensuremath{\mathbb P}} (F(-v^*t_{n-1},0;0,\Delta))^{1/2}\sup_{\o\in
\O_{\rm F}}{\ensuremath{\mathbb E}} _\o\left[f(\xi_1)^2\right]^{1/2}\\
&\le
e^{-v^*t_{n-1}/2} \sup_{\o\in
\O_{\rm F}}{\ensuremath{\mathbb E}} _\o\left[f(\xi_1)^2\right]^{1/2}.
\end{align*}
We can now apply Theorem~\ref{coupling} to get that
\begin{gather*}
\bigg|\int d\mu^{t_{n-1}}_\o(\o')\,
{\ensuremath{\mathbb E}} _{\Phi_{t_{n-1}}(\o')}\left[f(\xi_1)\right]-{\ensuremath{\mathbb E}} _{\nu}\left[f(\xi_1)\right]\bigg|
\\
\le
\left[ \sup_{\o\in \O_{\rm F}}\|\mu_\o^{t_{n-1}}-\nu\|^{1/2}_{[-v^*t_{n-1},0]} +
e^{-v^*t_{n-1}/2}\right]\sup_{\o\in
\O_{\rm F}}{\ensuremath{\mathbb E}} _\o\left[f(\xi_1)^2\right]^{1/2}
= O(e^{-t_{n-1}^\a/2}).
\end{gather*}
To prove \eqref{eq:13} suppose first that $v^*(j-1)\ge v_{\rm max}(n-j)$
where $v^*$ is the constant appearing in Theorem~\ref{coupling}. Then we can
use the Markov property at time $t_{j-1}$ and repeat the previous
steps to get the result. If instead $v^*(j-1)\le
v_{\rm max}(n-j)$ it suffices to write
\[
{\operatorname{Cov}}_\o\left(\xi_j,\xi_n \right)= {\operatorname{Cov}}_\o\left(\xi_j,{\ensuremath{\mathbb E}} _\o[\xi_n\thinspace |\thinspace \ensuremath{\mathcal F}_{t_{j}}] \right)
\]
and apply \eqref{eq:20tris} to ${\ensuremath{\mathbb E}} _\o[\xi_n\thinspace |\thinspace \ensuremath{\mathcal F}_{t_{j}}]$ to
get that in this case
\begin{equation}
\label{eq:16}
\sup_{\o\in \O_{\rm F}}\left| {\operatorname{Cov}}_\o\left(\xi_j,\xi_n \right) \right| =O(e^{-\gamma (n-j)^\a})
\end{equation}
for some constant $\gamma$ depending on $v^*,v_{\rm max}$.
Following the exact steps as above after replacing $\xi_j,\xi_n$ by $f(\xi_j),f(\xi_n)$ yields \eqref{eq:13fabio}.
Finally, \eqref{eq:21} follows from exactly the same steps leading to the proof of \eqref{eq:20tris}.
\qed
\section{Proofs of main results}\label{sec:main-proofs}
\subsection{Proof of Theorem~\ref{th:main1}}
We begin with the proofs of \eqref{th1.1} and \eqref{th1.2}.
As far as \eqref{th1.2} is concerned, this follows directly from observing that
\[
\frac{d}{dt} {\ensuremath{\mathbb E}} _{\o}\left[X(\o(t))\right]= q-p\mu^t_\o(\o_{-1}=0)=v +O(e^{-t^\a}).
\]
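Spelling out the last equality (here we use the identification $v=q-pq^*$, which is implicit in the display): since the event $\{\o_{-1}=0\}$ depends on the single site $-1\in [-v^*t,0]$, Theorem~\ref{coupling} gives
\[
\mu^t_\o(\o_{-1}=0)=\nu(\o_{-1}=0)+O(e^{-t^\a})=q^*+O(e^{-t^\a}),
\]
so that $q-p\mu^t_\o(\o_{-1}=0)=q-pq^*+O(e^{-t^\a})$.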
Appealing to \eqref{eq:19} and Corollary~\ref{cor:wf} we get immediately that for any $\o\in \O_{\rm F}$
\[
{\ensuremath{\mathbb E}} _\o\left[\left((X(\o(t))-vt)/t\right)^4\right]=O(t^{-2})
\]
and \eqref{th1.1} follows at once.
\medskip
We next prove \eqref{th1.3}. Using Corollary~\ref{cor:wf} with $f(x)=x^2$, we get that, for any $n$ large enough,
\begin{align*}
\operatorname{Var}_\o(\xi_n)= \operatorname{Var}_\nu(\xi_1)
+ O(e^{-n^\a}).
\end{align*}
Hence
\begin{gather*}
\lim_{t\to \infty} \frac 1t \left[\sum_{n=1}^{N_t} \operatorname{Var}_\o(\xi_n) +
\operatorname{Var}_\o(X(\o(t))-X(\o(t_N)))\right]
=\Delta^{-1}\operatorname{Var}_\nu(\xi_1).
\end{gather*}
Moreover, \eqref{eq:13} implies that
\begin{gather*}
\lim_{t\to \infty} \frac 2t \Bigg[\sum_{j<n}^{N_t} {\operatorname{Cov}}_\o(\xi_j,\xi_n) +
\sum_{n=1}^{N_t} {\operatorname{Cov}}_\o(\xi_n,X(\o(t))-X(\o(t_N)))\Bigg]
\\=\frac{2}{\Delta}\sum_{n\ge 2}{\operatorname{Cov}}_\nu\left(\xi_1,\xi_{n}\right),
\end{gather*}
the series being absolutely convergent because of \eqref{eq:16}.
In conclusion, for any $\o\in \O_{\rm F}$
\begin{gather}
\label{eq:18}
\lim_{t\to \infty}
\frac 1t \operatorname{Var}_\o\left(X(\o(t))\right)=\Delta^{-1}\Bigg[\operatorname{Var}_\nu(\xi_1)+2\sum_{n\ge 2}{\operatorname{Cov}}_\nu\left(\xi_1,\xi_{n}\right)\Bigg].
\end{gather}
Next we show that for $p$ small enough the r.h.s.\ of \eqref{eq:18} is positive.
We first observe that there exists $c=c(p)$ such that $\limsup_{p\to
0^+}c(p)<\infty$ and
\begin{equation}
\label{eq:facile}
\sup_{\Delta}\sum_{n\ge 2}|{\operatorname{Cov}}_\nu\left(\xi_1,\xi_{n}\right)|\le c(p).
\end{equation}
To prove \eqref{eq:facile} assume without loss of generality that
$\Delta\in {\ensuremath{\mathbb N}} $ and write $\xi_1= \sum_{i=1}^\Delta \xi_i'$
and $\xi_{n}=\sum_{i=(n-1)\Delta+1}^{n\Delta} \xi_i'$, where the increments $\xi_i'\,$'s
refer to a unit time lag. Thus
\[
\sum_{n\ge 2}|{\operatorname{Cov}}_\nu\left(\xi_1,\xi_{n}\right)| \le
\sum_{n\ge 2}\sum_{i=1}^\Delta\sum_{j=(n-1)\Delta+1}^{n\Delta}|{\operatorname{Cov}}_\nu\left(\xi'_i,\xi'_{j}\right)|.
\]
The claim now follows from \eqref{eq:13} together with the fact that
the constants $\a,v^*$ are uniformly bounded away from zero as $p\to
0$.
Thus, in order to show
that the r.h.s.\ of \eqref{eq:18} is positive, it is enough to show that
it is possible to choose $\Delta$ and $p$ such that $
\operatorname{Var}_\nu\left(\xi_1\right) > \limsup c(p)$.
Recall that $q^*=\nu(\o_{-1}=0)$. Then a little computation shows that
\begin{align}
\frac{d}{dt} \operatorname{Var}_\nu\left(X(\o(t))\right)&=q+pq^*-2p\,{\operatorname{Cov}}_\nu\left(X(\o(t)),{\mathds
1}_{\{\o(t)\in \O^{**}\}}\right)\nonumber\\
&\ge q+pq^*-2p
\operatorname{Var}_\nu\left(X(\o(t))\right)^{1/2}\left(q^*(1-q^*)\right)^{1/2}\nonumber\\
\label{eq:17}&\ge q+pq^*-p\operatorname{Var}_\nu\left(X(\o(t))\right)^{1/2},
\end{align}
where $\O^{**}=\{\o\in \O^*:\ \o_{X(\o)-1}=0\}$.
If
$\left[\operatorname{Var}_\nu\left(\xi_1\right)\right]^{1/2}\le \frac{q+pq^*}{2p}$
for all $\Delta>0$, then \eqref{eq:17} implies that
\[
\lim_{\Delta\to \infty}\operatorname{Var}_\nu\left(\xi_1\right)=\infty.
\]
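To see why \eqref{eq:17} forces this divergence (we use, as in the display above, the identification of $\operatorname{Var}_\nu(\xi_1)$ with $\operatorname{Var}_\nu(X(\o(\Delta)))$): under the standing assumption, applied at every time $t\le \Delta$, the bound \eqref{eq:17} gives
\[
\frac{d}{dt} \operatorname{Var}_\nu\left(X(\o(t))\right)\ge q+pq^*-p\cdot\frac{q+pq^*}{2p}=\frac{q+pq^*}{2}>0,
\]
so that $\operatorname{Var}_\nu\left(\xi_1\right)\ge \frac{q+pq^*}{2}\,\Delta\to \infty$ as $\Delta\to \infty$.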
Otherwise there exists $\Delta>0$ such that
$
\left[\operatorname{Var}_\nu\left(\xi_1\right)\right]^{1/2}\ge
\frac{q+pq^*}{2p}
$;
hence, the desired inequality~\eqref{th1.3} follows by taking $p$ small enough.
It remains to prove~\eqref{th1.4}. If $\s_*=0$, then necessarily
\[
\sup_\Delta \operatorname{Var}_\nu\left(\xi_1\right)<\infty.
\]
In this case the Chebyshev
inequality suffices to prove that, for any $\o\in \O_{\rm F}$,
\[
(X(\o(t))-vt)/\sqrt{t}\ \stackrel{{\ensuremath{\mathbb P}} _\o}{\longrightarrow}\ 0, \quad
\text{ as }t\to \infty.
\]
If instead $\s_*>0$, we appeal to an old result on the central limit
theorem for mixing stationary random fields \cite{Bolthausen}. Unfortunately our mixing
result, as expressed e.g.\ in Corollary~\ref{cor:wf} (cf.~\eqref{eq:21}), is not exactly
what is needed there and we have to go through some of the steps of
\cite{Bolthausen} to prove the sought statement.
Consider the sequence $\{\xi_j\}$ defined above (with e.g.\ $\Delta=1$) and let $\bar \xi_j:= \xi_j- v\Delta$. Further let
$S_n=\sum_{j=1}^n \bar \xi_j$.
It suffices to prove that, for all
$\o\in \O_{\rm F}$, the law of $S_n/\s_* \sqrt{n}$ converges to the normal
law $\ensuremath{\mathcal N}(0,1)$. As in \cite{Bolthausen} let $f_N(x)=\max\left[\min(x,N),-N\right]$ and
let $\tilde f_N(x):=x-f_N(x)$. Clearly $\operatorname{Var}(\tilde f_N(\bar \xi_j))\rightarrow 0$ as $N \rightarrow \infty$ uniformly in $j.$
Then Corollary~\ref{cor:wf} \eqref{eq:13fabio} implies that
\[
{\ensuremath{\mathbb E}} _\o\left[\frac{\sum_{j=1}^n \tilde f_N(\bar \xi_j)-{\ensuremath{\mathbb E}} _\o[\tilde
f_N(\bar \xi_j)]}{n^{1/2}}\right]^2=\frac 1n \sum_{j,k=1}^n {\operatorname{Cov}}_\o\left(\tilde f_N(\bar \xi_j),\tilde
f_N(\bar \xi_k)\right)
\]
converges to $0$ as $N\to \infty$ uniformly in $n$. Hence it is enough
to prove the result for the truncated variables $f_N(\bar \xi_j)$. For
lightness of notation we assume henceforth that the $\bar\xi_j$'s are bounded.
Let now $\ell_n= n^{1/3}$ and let
\begin{align*}
S_{j,n}= \sum_{k=1}^n{\mathds 1}_{|k-j|\le \ell_n}\bar\xi_k \quad (j\in \{1,\dots,n\}),\qquad
\a_n= \sum_{j=1}^n{\ensuremath{\mathbb E}} _\o\left[\bar \xi_jS_{j,n}\right].
\end{align*}
The decay of covariances \eqref{eq:13} implies that $\a_n=
\operatorname{Var}_\o(S_n)+ o(1)$. Hence it is enough to show that $S_n/\sqrt{\a_n}$
is asymptotically normal. The main observation of \cite{Bolthausen},
in turn inspired by the Stein method \cite{Stein}, is that the latter property
of $S_n/\sqrt{\a_n}$ follows if
\begin{equation}
\label{eq:190}
\lim_{n\to \infty} {\ensuremath{\mathbb E}} _\o\left[\bigl(i\l-S_n/\sqrt{\a_n}\bigr)e^{i\l
\frac{S_n}{\sqrt{\a_n}}}\right]=0, \quad \forall \l\in {\ensuremath{\mathbb R}} .
\end{equation}
In turn \eqref{eq:190} follows if (see \cite{Bolthausen}*{Eqs.~(4)--(5)})
\begin{align}
\label{eq:19bis}
\lim_{n\to \infty} {\ensuremath{\mathbb E}} _\o\Bigl[\bigl(1-\frac{1}{\sqrt{\a_n}}\sum_{j=1}^n\bar\xi_j
S_{j,n}\bigr)^2\Bigr]&=0\\
\label{eq:19tris}
\lim_{n\to \infty} \frac{1}{\sqrt{\a_n}} {\ensuremath{\mathbb E}} _\o\Bigl[\Big|\ \sum_{j=1}^n \bar\xi_j\bigl(1-e^{-i\l
\frac{S_n}{\sqrt{\a_n}}}-i\l S_{j,n}\bigr)\Big|\Bigr]&=0\\
\label{eq:19quatris}\lim_{n\to \infty}
\frac{1}{\sqrt{\a_n}}\sum_{j=1}^n{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j\, e^{i\l
\frac{(S_n-S_{j,n})}{\sqrt{\a_n}}}\Bigr] &=0.
\end{align}
As in \cite{Bolthausen}, the mixing properties \eqref{eq:13} and
\eqref{eq:21} easily prove that \eqref{eq:19bis} and
\eqref{eq:19tris} hold.
As far as \eqref{eq:19quatris} is concerned the formulation of Theorem~\ref{coupling} forces us to argue a bit differently than
\cite{Bolthausen}. We first observe that, using the boundedness of the variables $\bar \xi_j$'s, \eqref{eq:19quatris}
is equivalent to
\begin{equation}
\label{eq:20bis}
\lim_{n\to \infty}
\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n{\ensuremath{\mathbb E}} _\o\left[\bar \xi_j\, e^{i\l
\frac{(S_n-S_{j,n})}{\sqrt{\a_n}}}\right]=0,
\quad \forall \l\in {\ensuremath{\mathbb R}} .
\end{equation}
Fix two numbers $M$ and $L$ with $L\le M/10$ (eventually they will be
chosen logarithmically increasing in $n$)
and write
\begin{align*}
e^{i\l \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}}&= \sum_{m=0}^M
\frac{(i\l)^m}{m!}\left(\frac{(S_n-S_{j,n})}{\sqrt{\a_n}}\right)^m \\ &+ \sum_{m=M+1}^\infty
\frac{(i\l)^m}{m!}\left(\frac{(S_n-S_{j,n})}{\sqrt{\a_n}}\right)^m{\mathds
1}_{\{| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|\le L\}}
\\ &+ \left[e^{i\l \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}}- \sum_{m=0}^M
\frac{(i\l)^m}{m!}\left(\frac{(S_n-S_{j,n})}{\sqrt{\a_n}}\right)^m\right]{\mathds
1}_{\{| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|> L\}}\\
&=: \ Y^{(j)}_1+Y^{(j)}_2+Y^{(j)}_3.
\end{align*}
Let us first examine the contribution of $Y^{(j)}_2$ and $Y^{(j)}_3$ to
the covariance term \eqref{eq:20bis}.
Using the boundedness of the variables $\{\bar \xi_j\}_{j=1}^n$, there exists a positive constant $c$ such that:
\begin{align*}
\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n|{\ensuremath{\mathbb E}} _\o\left[\bar
\xi_j\,Y^{(j)}_2\right]| &\le c \,\sqrt{n} \ \frac{L^{M+1}}{M!},\\
\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n|{\ensuremath{\mathbb E}} _\o\left[\bar
\xi_j\,Y^{(j)}_3\right]|&\le c\,\sqrt{n}\max_j {\ensuremath{\mathbb E}} _\o\left[e^{2|\l|
\frac{|S_n-S_{j,n}|}{\sqrt{\a_n}}}\right]^{1/2}
{\ensuremath{\mathbb P}} _\o\left(| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|> L\right).
\end{align*}
\begin{lemma}
\label{large-dev} There exists $c>0$ such that,
for all $n$ large enough and any $\b=O(\log n)$,
\begin{equation}
\label{eq:22}
{\ensuremath{\mathbb E}} _\o\left[e^{\b\frac{|S_n-S_{j,n}|}{\sqrt{\a_n}}}\right] \le
2 e^{c \b^2}.
\end{equation}
Moreover, there exists $c'>0$ such that, for all $n$ large enough and
all $L\le \log n$,
\begin{equation}
\label{eq:23}
{\ensuremath{\mathbb P}} _\o\left(| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|> L\right) \le
e^{-c' L^2}.
\end{equation}
\end{lemma}
Assume for the moment the lemma and choose
$L=M/10$ and $M=\log n$. We can conclude that
\begin{gather*}
\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n|{\ensuremath{\mathbb E}} _\o\left[\bar
\xi_j\,(Y^{(j)}_2+Y^{(j)}_3)\right]|
\le
C\sqrt{n}\left[e^{-c'L^2}+ \frac{L^{M+1}}{M!}\right],
\end{gather*}
so that
\[
\lim_{n\to \infty}\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n|{\ensuremath{\mathbb E}} _\o\left[\bar
\xi_j\,(Y^{(j)}_2+Y^{(j)}_3)\right]|=0.
\]
We now examine the contribution of $Y^{(j)}_1$ to \eqref{eq:20bis}. Recall $$S_n-S_{j,n}=\sum_{\substack{1\le i \le n \\ |i-j|>\ell_n}} \bar
\xi_i.$$ Thus clearly,
\[
\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n{\ensuremath{\mathbb E}} _\o\left[\bar
\xi_j\,Y^{(j)}_1\right]=\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n\sum_{m=1}^M
\left( \frac{i\l}{\sqrt{\a_n}}\right)^m\sumtwo{i_1,\dots, i_m}{\min_k
|i_k-j|\ge \ell_n}{\ensuremath{\mathbb E}} _\o\left[\bar \xi_j \prod_{k=1}^m \bar \xi_{i_k}\right],
\]
where the labels $i_1,\dots, i_m$ run in $\{1,2,\dots, n\}$.
\begin{lemma}
\label{covar2}
Let $M=\log n$. Then, for any $m\le M$, any $j\in\{\ell_n,\dots,n\}$ and any $\{i_1,\dots, i_m\}$ satisfying $\min_k
|i_k-j|\ge \ell_n\,$, it holds that
\[
|{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j \prod_{k=1}^m \bar \xi_{i_k}\Bigr]|=O(e^{-n^{\a/6}}).
\]
Here $\a$ is the mixing exponent appearing in Theorem~\ref{coupling}.
\end{lemma}
Assuming the lemma we get immediately that also
\[
\lim_{n\to \infty}\frac{1}{\sqrt{\a_n}}\sum_{j=\ell_n}^n {\ensuremath{\mathbb E}} _\o\left[\bar
\xi_j\,(Y^{(j)}_1)\right]=0
\]
and \eqref{eq:20bis} is established. In conclusion, \eqref{th1.4} would follow from Lemmas~\ref{large-dev}--\ref{covar2}.
\begin{proof}[Proof of Lemma~\ref{large-dev}]
Let us begin with \eqref{eq:22}. For simplicity we prove that,
for any constant $\b=O(\log n)$, ${\ensuremath{\mathbb E}} _\o\left[\exp(\b
S_n/\sqrt{n})\right]\le e^{c\b^2}$ for some constant $c>0$.
Similarly one could proceed for ${\ensuremath{\mathbb E}} _\o\left[\exp(-\b
S_n/\sqrt{n})\right]$ and get that
\[
{\ensuremath{\mathbb E}} _\o\left[\exp(\b |S_n|/\sqrt{n})\right]\le {\ensuremath{\mathbb E}} _\o\left[\exp(\b
S_n/\sqrt{n})\right] +
{\ensuremath{\mathbb E}} _\o\left[\exp(-\b S_n/\sqrt{n})\right]\le 2 e^{c\b^2}.
\]
We partition the discrete interval $\{1,2,\dots, n\}$ into disjoint blocks of
cardinality $n^{1/3}$. Given an integer $\kappa$, by applying the Cauchy-Schwarz inequality a
finite number of times depending on $\kappa$, it is sufficient to prove the result for
$S_n$ replaced by the sum $S^{(\kappa)}_{\ensuremath{\mathcal B}}$ of the $\bar\xi_j$'s
restricted to an arbitrary collection $\ensuremath{\mathcal B}$ of blocks with the property that
any two blocks in $\ensuremath{\mathcal B}$ are
separated by at least $\kappa$ blocks.
Fix one such collection $\ensuremath{\mathcal B}$ and let $B$ be the rightmost block
in $\ensuremath{\mathcal B}$. Let $n_B$ be the largest label in $\ensuremath{\mathcal B}$ which is not in the
block $B$ and let
$t_B=n_B\Delta$ be the corresponding time. Further let $Z_B=\sum_{j\in
B}\bar\xi_j$. If $c\kappa>v_{\rm max}$ where $c$ is the constant
appearing in Theorem~\ref{coupling}, we can appeal to \eqref{eq:21} to
obtain
\[
{\ensuremath{\mathbb E}} _\o\left[\exp(\b Z_B/\sqrt{n})\thinspace |\thinspace \ensuremath{\mathcal F}_{t_B}\right]= {\ensuremath{\mathbb E}} _\nu\left[\exp(\b
Z_B/\sqrt{n})\right]+ O(e^{-n^{\a/3}}e^{\b n^{-1/6}}).
\]
Using the trivial bound
$Z_B/\sqrt{n}= O(n^{-1/6})$ we have
\[
{\ensuremath{\mathbb E}} _\nu\left[\exp(\b Z_B/\sqrt{n})\right] = 1 +\frac{\b^2}{2n}\operatorname{Var}_\nu(Z_B) + O(\b^3
n^{-7/6})\operatorname{Var}_\nu(Z_B),
\]
where $\operatorname{Var}_\nu(Z_B)= O(n^{1/3})$ thanks to \eqref{eq:13}. Above we used
the trivial bound
\[
{\ensuremath{\mathbb E}} _\nu\left[|Z_B|^3\right]\le c\, n^{1/3}\operatorname{Var}_\nu(Z_B).
\]
In conclusion, using the a priori bound $\b\le \log n$, we get that
\[
{\ensuremath{\mathbb E}} _\o\left[\exp(\b Z_B/\sqrt{n})\thinspace |\thinspace \ensuremath{\mathcal F}_{t_B}\right] \le 1+c \frac{\b^2}{n^{2/3}}.
\]
The Markov property and a simple iteration imply that,
\[
{\ensuremath{\mathbb E}} _\o\left[\exp\big(\b S^{(\kappa)}_{\ensuremath{\mathcal B}}/\sqrt{n}\big)\right]\le \left[1+c
\frac{\b^2}{n^{2/3}}\right]^{|\ensuremath{\mathcal B}|}\le \exp(c' \b^2),
\]
uniformly in the cardinality $|\ensuremath{\mathcal B}|$ of the collection. The bound
\eqref{eq:22} is proved.
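In the last display we used that the collection $\ensuremath{\mathcal B}$ consists of at most $n^{2/3}$ blocks: by the elementary inequality $1+x\le e^x$,
\[
\left[1+c
\frac{\b^2}{n^{2/3}}\right]^{|\ensuremath{\mathcal B}|}\le \exp\Bigl(c\,\b^2\, \frac{|\ensuremath{\mathcal B}|}{n^{2/3}}\Bigr)\le \exp(c\,\b^2).
\]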
The bound \eqref{eq:23} follows at once from \eqref{eq:22} and
the exponential Chebyshev inequality
\[
{\ensuremath{\mathbb P}} _\o\left(| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|> L\right) \le
e^{-\b L}\,{\ensuremath{\mathbb E}} _\o\left[\exp(\b| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|)\right],
\]
with $\b=\varepsilon L$, $\varepsilon$ being a
sufficiently small constant.
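Explicitly, choosing $\b=\varepsilon L$ (so that $\b=O(\log n)$, since $L\le \log n$), the bound \eqref{eq:22} gives
\[
{\ensuremath{\mathbb P}} _\o\left(| \frac{(S_n-S_{j,n})}{\sqrt{\a_n}}|> L\right) \le
e^{-\varepsilon L^2}\cdot 2e^{c\varepsilon^2L^2}\le e^{-c'L^2},
\]
with e.g.\ $c'=\varepsilon/2$, provided $\varepsilon<1/(2c)$ and $L$ is large enough.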
\end{proof}
\begin{proof}[Proof of Lemma~\ref{covar2}] Fix $j\in \{1,\dots,n\}$ and $m\le \log n$, together with
a choice of labels
$1\le i_1\le i_2 \le\dots \le i_m\le n$ such that $\min_k|i_k-j|\ge
\ell_n$. Let $t_{i_k}=i_k\Delta$. \\
$\bullet$ If $i_m\le j-\ell_n$ then we can apply the Markov property at time
$t_{i_m}$
together with Corollary~\ref{cor:wf} to get
\[
\bigg|{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j \prod_{k=1}^m \bar \xi_{i_k}\Bigr]\bigg|\le
e^{-n^{\a/3}}{\ensuremath{\mathbb E}} _\o\Bigl[\prod_{k=1}^m |\bar \xi_{i_k}|\Bigr]\le
c^{m}e^{-n^{\a/3}}.
\]
$\bullet$ If instead there exists $b\le m-1$ such that $i_b < j < i_{b+1}$
we need to distinguish between two sub-cases.
(a) For all $k\ge b+2$ it holds that
$i_k-i_{k-1}\le n^{1/6}$ and in particular
$t_{i_m}-t_{i_{b+1}}\le m\, n^{1/6}\Delta$. In this case the facts that
$t_{i_{b+1}}-j\Delta \ge \ell_n\Delta$ and
$v_{\rm max}(t_{i_m}-t_{i_{b+1}})\ll \ell_n\Delta$, together with \eqref{eq:21}, imply that
\[
{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j \prod_{k=1}^m \bar \xi_{i_k}\Bigr]=
{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j \prod_{k=1}^{b} \bar
\xi_{i_k}\Bigr]\left[{\ensuremath{\mathbb E}} _\nu\Bigl[\prod_{k=b+1}^{m} \bar
\xi_{i_k}\Bigr]+
O\Bigl(e^{-n^{\a/3}}{\rm poly}(n)\Bigr)\right].
\]
The conclusion of the lemma then follows from the previous case
$i_m\le j-\ell_n$.
(b) We now assume that the set $\{k\ge b+1:\ i_{k+1}\ge
i_{k}+n^{1/6}\}$ is non-empty and let $k^*$ be its maximum. By repeating the previous step with the
Markov property applied at time
$t_{i_{k^*}}$ we get
\[
{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j \prod_{k=1}^m \bar \xi_{i_k}\Bigr]=
{\ensuremath{\mathbb E}} _\o\Bigl[\bar \xi_j \prod_{k=1}^{k^*} \bar
\xi_{i_k}\Bigr]\left({\ensuremath{\mathbb E}} _\nu\Bigl[\prod_{k=k^*+1}^{m} \bar
\xi_{i_k}\Bigr]+
O\Bigl(e^{-n^{\a/3}}{\rm poly}(n)\Bigr)\right).
\]
By iterating the above procedure we can reduce ourselves to case (a)
and get the sought result.
\end{proof}
As Lemmas~\ref{large-dev}--\ref{covar2} imply~\eqref{th1.4}, this concludes the proof of Theorem~\ref{th:main1}.
\qed
\begin{remark}\label{rem:sigma*-small-p}
The above proof also established that the limiting variance
$\s_*^2$ is strictly positive for all $p$ small enough.
\end{remark}
\subsection{Proof of Theorem~\ref{th:main2}}
\label{East-cutoff}
Given the interval $\L=[1,\dots,L]$ and $\o\in \O_\L$, let ${\ensuremath{\mathbb P}} ^{\L,t}_{\o}$ be the law at time $t$ of
the process started from $\o$. Recall that
\[
\|{\ensuremath{\mathbb P}} ^{\L,t}_{\o}-{\ensuremath{\mathbb P}} ^{\L,t}_{\o'}\|=\inf\{M(\o(t)\neq \o'(t)):\ M \text{ a coupling of } {\ensuremath{\mathbb P}} ^{\L,t}_{\o} \text{ and } {\ensuremath{\mathbb P}} ^{\L,t}_{\o'}\},
\]
and introduce the hitting time
\[
\tau(L)=\inf\{t: X(\o(t))=L\},
\]
where the initial configuration is identically equal to one (in the
sequel $\mathbf 1$). It
is easy to check (see, e.g.,~\cite{East-survey}) that at time $\t(L)$
the basic coupling (cf.~\S\ref{setting-notation}) has coupled all initial
configurations.
Thus
\[
d_{{\textsc{tv}}}(t)\le \sup_{\o,\o'}\|{\ensuremath{\mathbb P}} ^{\L,t}_{\o}-{\ensuremath{\mathbb P}} ^{\L,t}_{\o'}\|\le
{\ensuremath{\mathbb P}} ^{\L}_{\mathbf 1}(\tau(L)> t).
\]
Using the graphical construction, up to time $\t(L)$ the East process in $\L$ started from the configuration
$\mathbf{1}$ coincides with the infinite East process started from the
configuration $\o^*\in \O_{\rm F}$ with a single zero at the
origin. Therefore
\[
{\ensuremath{\mathbb P}} ^{\L}_{\mathbf{1}}(\tau(L)> t) \le {\ensuremath{\mathbb P}} _{\o^*}(X(\o(t))<L),
\]
thus establishing a bridge with Theorem~\ref{th:main1}. Recall now the
definition of $\s_*$ from Theorem~\ref{th:main1} and distinguish
between the two cases $\s_*>0$ and $\s_*=0$.
\smallskip\noindent$\bullet$
The case $\s_*>0$. Here we will show that
\begin{equation}
\label{eq-tmix(epsilon)-sigma*>0}
T_{\rm mix}(L,\epsilon)= v^{-1}L + (1+o(1))\frac{\sigma_{*}}{v^{3/2}}\Phi^{-1}(1-\epsilon)\, \sqrt{L}\,.
\end{equation}
For $s\in {\ensuremath{\mathbb R}} $, let $t_\star=L/v+s\sqrt{L}$. Then \eqref{th1.4} implies that
\[
{\ensuremath{\mathbb P}} _{\o^*}\left(X(\o({t_\star}))<L\right)= {\ensuremath{\mathbb P}} _{\o^*}\Bigl(
\frac{X(\o(t_\star))-vt_\star}{\sqrt{L/v}}< -v^{3/2} s\Bigr)\rightarrow
\Phi\Bigl(-\frac{v^{3/2}s}{\sigma_{*}}\Bigr)
\]
as $L\to \infty.$
Hence,
\begin{equation}\label{sup1}
\limsup_{L\to \infty} d_{{\textsc{tv}}}(L/v+s\sqrt{L}) \le \Phi\Bigl(-\frac{v^{3/2}s}{\sigma_{*}}\Bigr).
\end{equation}
To prove a lower bound on the total variation norm, set $a_L=\log L$ (any diverging sequence which is $o(\sqrt{L})$ would do here)
and define the event $$A_t=\bigl(\o_x(t)=1 \text{ for all }x\in{(L-a_L,L]}\bigr).$$
Then
$$
{\ensuremath{\mathbb P}} ^\L_{\mathbf{1}}(A_t)\ge {\ensuremath{\mathbb P}} _{\o^*}\left(X(\o(t))\le L-a_L\right) \quad\text{and}\quad
\pi(A_t)=p^{a_L} = o(1),
$$
and so any lower bound on ${\ensuremath{\mathbb P}} _{\o^*}(X(\o(t_\star))\le L-a_L)$ would translate to a lower bound on $d_{\textsc{tv}}(t_\star)$ up to an additive $o(1)$-term.
Again by \eqref{th1.4},
\[ {\ensuremath{\mathbb P}} _{\o^*}\Bigl(X(\o(t_\star))\le L-a_L\Bigr)={\ensuremath{\mathbb P}} _{\o^*}\biggl(
\frac{X(\o(t_\star))- vt_\star}{\sqrt{L/v}}\le -v^{3/2}s -a_L\sqrt{v/L}\ \biggr)\rightarrow \Phi\Bigl(-\frac{v^{3/2}s}{\sigma_{*}}\Bigr)\]
as $L\to \infty.$
Thus we conclude that
\begin{equation}\label{inf1}
\liminf_{L\to \infty} d_{{\textsc{tv}}}(L/v+s\sqrt{L}) \ge \Phi\Bigl(-\frac{v^{3/2}s}{\sigma_{*}}\Bigr).
\end{equation}
Eq.~\eqref{eq-tmix(epsilon)-sigma*>0} now follows from \eqref{sup1} and \eqref{inf1} by choosing $s=\s_* v^{-3/2}\Phi^{-1}(1-\varepsilon).$
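Indeed, by \eqref{sup1} and \eqref{inf1} the distance $d_{{\textsc{tv}}}(L/v+s\sqrt{L})$ converges to $\Phi\bigl(-v^{3/2}s/\sigma_{*}\bigr)$, and
\[
\Phi\Bigl(-\frac{v^{3/2}s}{\sigma_{*}}\Bigr)=\epsilon
\quad \Longleftrightarrow \quad
s=-\frac{\sigma_{*}}{v^{3/2}}\,\Phi^{-1}(\epsilon)=\frac{\sigma_{*}}{v^{3/2}}\,\Phi^{-1}(1-\epsilon),
\]
where the last equality uses the symmetry $\Phi^{-1}(\epsilon)=-\Phi^{-1}(1-\epsilon)$ of the standard normal distribution function.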
\smallskip\noindent$\bullet$
The case $\s_*=0.$ Here a similar argument shows that
\[ T_{\rm mix}(L,\epsilon)= v^{-1} L + O_\epsilon(1),\]
using the fact (following the results in \S\ref{sec:front}) that
$\sup_{\o}\sup_{t}\operatorname{Var}_{\o}(X(\o(t))) < \infty$ if $\s_*=0$.
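A sketch of the Chebyshev step (writing $C$ for the uniform variance bound and using ${\ensuremath{\mathbb E}} _{\o^*}\left[X(\o(t))\right]=vt+O(1)$, which follows from \eqref{th1.2}): for $t=v^{-1}L+K$,
\[
{\ensuremath{\mathbb P}} _{\o^*}\left(X(\o(t))<L\right)\le
\frac{\operatorname{Var}_{\o^*}\left(X(\o(t))\right)}{\left(vK-O(1)\right)^2}\le \frac{C}{\left(vK-O(1)\right)^2}\le \epsilon,
\]
once $K=O_\epsilon(1)$ is chosen large enough.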
\smallskip
This concludes the proof of Theorem~\ref{th:main2}.\qed
\section{Cutoff and concentration for constrained models on
trees}\label{sec:trees}
In this section we consider constrained oriented models on
regular trees and prove strong concentration results for hitting
times which are the direct analog of the hitting time $\t(L)$ defined in \S\ref{East-cutoff} for the East process. As a
consequence we derive a strong cutoff result for the ``maximally
constrained model'' (see below).
\subsection{Kinetically constrained models on trees} Let ${\ensuremath{\mathbb T}} $ be the $k$-ary rooted tree, $k\ge 2$, in which each vertex
$x$ has $k$ children. We will denote by $r$ the root and by ${\ensuremath{\mathbb T}} _L$
the subtree of ${\ensuremath{\mathbb T}} $ consisting of the first $L$-levels starting
from the root.
In analogy to the East process, for a given integer $1\le j\le k$
consider the constrained oriented
process OFA-jf on $\O=\{0,1\}^{{\ensuremath{\mathbb T}} }$ (cf.~\cite{MT}) in which
each vertex waits an independent mean one exponential
time and then, provided that $j$ among its children are in state
$0$, updates its spin variable $\o_x$ to $1$ with probability $p$
and to $0$ with probability $q=1-p$. It is known that this process exhibits an ergodicity breakdown above a certain critical probability $p=p_c(k,j)$ (defined more precisely later).
In this paper we will only
examine the two extreme cases $j=1$ and $j=k$ which will be referred to
in the sequel as the \emph{minimally} and \emph{maximally} constrained
models.
The finite volume version of the
OFA-jf process is a continuous time Markov chain
on $\O_{{\ensuremath{\mathbb T}} _L}=\{0,1\}^{{\ensuremath{\mathbb T}} _L}$. In this case, in order to guarantee
irreducibility, the variables at leaves of ${\ensuremath{\mathbb T}} _L$ are assumed to be
unconstrained. As in the case of the East process, the product Bernoulli$(p)$
measure $\pi$ is the unique reversible measure and the same graphical
construction described in \S\ref{setting-notation} holds in this new context.
\subsection{New Results} We are now in a position to state our
results for the minimally and maximally constrained finite volume OFA-jf models. Recall that
\[
T_{\rm mix}(L,\varepsilon) := \inf\{t:\ \max_{\o\in \O_{{\ensuremath{\mathbb T}} _L}}\|\mu^t_\o-\pi\|\le
\varepsilon\},\quad \varepsilon\in (0,1)
\]
and define $T_{\rm hit}(L):={\ensuremath{\mathbb E}} \left[\t(L)\right]$,
where $\t(L)$ is the first legal ring for the root for the OFA-jf
process on $\O_{{\ensuremath{\mathbb T}} _L}$ started from the configuration identically equal
to one.
Our first result addresses the concentration of $\t(L)$. Recall that $O_\delta(\cdot)$ denotes that the implicit constant may depend on $\delta$.
\begin{theorem}
\label{th:main3}
The following hold for the centered variable
$\t(L)-T_{\rm hit}(L)$, denoted $\bar \t(L)$.
\begin{enumerate}[(i)]
\item Consider either the minimally or the maximally constrained model
and fix $p<p_c$. For any fixed $\d>0$, if $n\in{\ensuremath{\mathbb N}} $ is large enough there exists $L_n\in [n,(1+\d)n]$ such that
\[
{\ensuremath{\mathbb E}} |\bar \t(L_n)| = O_\delta(1).
\]
\item Consider the maximally constrained model and choose
$p=p_c$. For
any fixed $\d>0$, if $n\in{\ensuremath{\mathbb N}} $ is large enough then there exists $L_n\in [n,(1+\d)n]$ such that
\[
{\ensuremath{\mathbb E}} |\bar \t(L_n)| = O_\delta\left(L_n^{-1}\, T_{\rm hit}(L_n)\right).
\]
\end{enumerate}
\end{theorem}
The second result concerns the cutoff phenomenon.
\begin{theorem}
\label{th:main4} Consider the maximally constrained model.
\begin{enumerate}[(i)]
\item If $p<p_c$ then for any $\d>0$ and any large enough $n$ there exists $L_n\in [n,(1+\d)n]$ such that
\[
|T_{\rm mix}(L_n,\varepsilon)-T_{\rm hit}(L_n)|= O_{\varepsilon,\d}(1) \quad \forall \varepsilon\in (0,1).
\]
\item If $p=p_c$ then for any $\d>0$ and any large enough $n$ there exists $L_n\in [n,(1+\d)n]$ such that
\[
|T_{\rm mix}(L_n,\varepsilon)-T_{\rm hit}(L_n)|=O_{\varepsilon,\d}\bigl(L_n^{-1} \,T_{\rm hit}(L_n)\bigr) \quad\forall \varepsilon\in(0,1).
\]
\end{enumerate}
\end{theorem}
\subsection{Previous work}
Before proving our results we recall the main findings of \cite{MT}
and \cite{CMRTtree}. We now formally define the critical density for the OFA-jf model:
\[
p_c =\sup\{p\in[0,1]:\text{$0$ is a simple eigenvalue of } \ensuremath{\mathcal L}\},
\]
where $\ensuremath{\mathcal L}$ is the generator of the process. The regime $p<p_c$ is called the {\sl ergodic regime} and we say that
an {\sl ergodicity breaking transition} occurs at the critical density
$p_c$.
Let
\[
g_p(\l):=p\sum_{i=k-j+1}^{k}\binom{k}{i}\l^i (1-\l)^{k-i}
\]
be the natural bootstrap percolation recursion map (cf.~\cite{MT})
associated to the OFA-jf process and let
\[
\tilde p :=\sup \{p\in[0,1]:~\l=0 \text{ is the unique fixed point of } g_p(\l)\}.
\]
In \cite{MT} it was proved that $p_c=\tilde p$ and that $p_c\in (0,1)$
for $j\ge 2$ and $p_c=1$ for $j=1$. Notice that, for $j=k$, the value
$\tilde p$ coincides with the site percolation threshold on ${\ensuremath{\mathbb T}} $ so
that $p_c=\tilde p=1/k$.
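The identification $\tilde p=1/k$ in the maximally constrained case can be checked in one line from the recursion map; the following short computation is included only for the reader's convenience:
\[
g_p(\l)=p\sum_{i=1}^{k}\binom{k}{i}\l^{i}(1-\l)^{k-i}
       =p\bigl[1-(1-\l)^{k}\bigr],
\qquad g_p'(0)=pk.
\]
Since $g_p$ is concave on $[0,1]$ with $g_p(0)=0$, the equation $\l=g_p(\l)$ has a nonzero solution if and only if $g_p'(0)=pk>1$, whence $\tilde p=1/k$.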
Consider now the finite volume OFA-jf process on $\O_{{\ensuremath{\mathbb T}} _L}$ and let
$\mu^t_\o$ be the law of the process at
time $t$ when the initial configuration is $\o$.
Further let $h^t_\o$ be the relative density of $\mu^t_\o$ w.r.t the
reversible stationary measure $\pi$.
Define the family of mixing times $\{T_a(L)\}_{a\ge 1}$ by
\[
T_a(L):= \inf\left\{t\ge 0:\ \max_\o\pi\left(|h^t_\o -1|^a\right)^{1/a}\le 1/4\right\}.
\]
Notice that $T_1(L)$ coincides with the usual mixing time $T_{\rm mix}(L)$ of the chain (see,
e.g.,~\cite{LPW}) and that, for any $a\ge 1$, one has $T_1(L)\le
T_a(L)$. Further let $T_{\rm rel}(L)$ be the relaxation time of the chain,
i.e., the inverse of the spectral gap of the generator $\ensuremath{\mathcal L}_{{\ensuremath{\mathbb T}} _L}$.
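For later reference, recall the standard variational characterization of the spectral gap (see, e.g., \cite{Saloff}); the Dirichlet-form notation $\ensuremath{\mathcal D}_{{\ensuremath{\mathbb T}} _L}$ below is introduced only for this display:
\[
T_{\rm rel}(L)^{-1}=\operatorname{gap}(\ensuremath{\mathcal L}_{{\ensuremath{\mathbb T}} _L})
=\inf_{f\colon \operatorname{Var}_\pi(f)\neq 0}
\frac{\ensuremath{\mathcal D}_{{\ensuremath{\mathbb T}} _L}(f)}{\operatorname{Var}_\pi(f)},
\qquad
\ensuremath{\mathcal D}_{{\ensuremath{\mathbb T}} _L}(f)=\pi\bigl(f\,(-\ensuremath{\mathcal L}_{{\ensuremath{\mathbb T}} _L})f\bigr),
\]
where the infimum runs over non-constant functions $f:\O_{{\ensuremath{\mathbb T}} _L}\to{\ensuremath{\mathbb R}} $.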
\begin{theorem}[\cite{MT}]
\label{io e C}\
\begin{enumerate}[(i)]
\item Assume $p<p_c$ and consider the finite
volume OFA-jf model on $\O_{{\ensuremath{\mathbb T}} _L}$. Then
\[
\sup_L T_{\rm rel}(L)<\infty.
\]
If instead $p>p_c$ then $T_{\rm rel}(L)$ is exponentially large in $L$.
\item For all $p\in (0,1)$ there exists a constant $c>0$ such that
\[
T_2(L+1)-T_2(L)\le c \, T_{\rm rel}(L).
\]
In particular
\[
T_{\rm mix}(L)\le T_2(L) \le c\, T_{\rm rel}(L)\, L.
\]
\end{enumerate}
\end{theorem}
The second result concerns the critical behavior $p=p_c$.
\begin{theorem}[\cite{CMRTtree}]
\label{noi}Consider the maximally constrained model $j=k$ and choose $p=p_c$. Then there exists $\b\ge 2$ and $c>0$ such that
\begin{align*}
c^{-1} L^2\le T_{\rm rel}(L)\le c \,L^{\beta}.
\end{align*}
Moreover,
\begin{equation*}
c^{-1}L T_{\rm rel}(L) \leq T_{\rm mix}(L)\le T_2(L) \le
c L\,T_{\rm rel}(L).
\end{equation*}
\end{theorem}
\subsection{Proof of Theorem~\ref{th:main3}}
We first need a preliminary result saying that, for infinitely many values of
$L$,
the increments of $T_{\rm hit}(L)$ can be controlled by the
corresponding relaxation time.
\begin{lemma}
\label{treelem:1} There exists a
constant $c_1$ such that, for all $\d>0$ and all $n$ large enough, the
following holds.
\begin{enumerate}[(a)]
\item In the maximally constrained model at $p\le p_c$
\[
\max\Bigl(T_{\rm hit}(L_n)-T_{\rm hit} (L_n-1),\ T_{\rm hit}(L_n+1)-T_{\rm hit} (L_n)\,\Bigr)\le \frac{c_1}{\d} \,T_{\rm rel}((1+\d)n),
\]
for some $L_n\in [n,(1+\d)n]$.
\item In the minimally constrained model
\[
T_{\rm hit}(L_n+1)-T_{\rm hit} (L_n)\ge -\frac{c_1}{\d} \,T_{\rm rel}((1+\d)n)
\]
for some $L_n\in [n,(1+\d)n]$.
\end{enumerate}
\end{lemma}
\begin{proof}
Fix $\d$ and $n\ge 1/\d$ and consider the maximally constrained model.
Using part (ii) of Theorem~\ref{io e C},
\begin{equation}
\label{tree:1}
T_{\rm mix}(n)\le T_2(n)\le c \sum_{i=1}^n T_{\rm rel}(i)\le c \, nT_{\rm rel}(n),
\end{equation}
where we used the fact that $T_{\rm rel}(i)\le T_{\rm rel}(n)$ for all $i\le n$.
Fix now $c_1>0$ and suppose that, for all $i\in [n, (1+\d)n-1]$,
\begin{gather*}
\max\Bigl(T_{\rm hit}(i)-T_{\rm hit}(i-1) ,\,T_{\rm hit}(i+1)-T_{\rm
hit}(i)\,\Bigr)
\ge \frac{c_1}{\d}
\,T_{\rm rel}\left((1+\d)n\right).
\end{gather*}
In particular
\[
T_{\rm hit} ((1+\d)n)\ge c_1n \,T_{\rm rel}\left((1+\d)n\right) /2.
\]
On the other hand, using the results in \cite{Aldous}, there exists a
constant $\l=\l(p)$ such that
\begin{equation}
\label{tree:2}
T_{\rm hit}((1+\d)n) \le \l T_{\rm mix}((1+\d)n).
\end{equation}
In conclusion, using Theorem~\ref{io e C},
\begin{gather*}
T_{\rm rel}\left((1+\d)n\right) \le \frac{2}{c_1n }T_{\rm hit} ((1+\d)n)
\le \frac{2\l}{c_1n} T_{\rm mix}((1+\d)n)\\\le
\frac{2\l c (1+\d)}{c_1}T_{\rm rel}((1+\d)n),
\end{gather*}
and we reach a contradiction by choosing
$c_1> 2\l c(1+\d)$.
Similarly, in the minimally constrained case, assume
\[
T_{\rm hit}(i+1)-T_{\rm hit}(i)\le -\frac{c_1}{\d}
\,T_{\rm rel}\left((1+\d)n\right),\quad \forall i\in [n, (1+\d)n-1],
\]
so that
\[
0\le T_{\rm hit} ((1+\d)n)\le T_{\rm hit}(n)-c_1n \,T_{\rm rel}\left((1+\d)n\right).
\]
Using again Theorem~\ref{io e C} together with \eqref{tree:2} we get
\begin{gather*}
T_{\rm rel}\left((1+\d)n\right)\le \frac{1}{c_1n }T_{\rm hit}(n)\\
\le
\frac{\l}{c_1n } T_{\rm mix}(n)\le
\frac{c\l}{c_1n }\, n\,
T_{\rm rel}(n)\le \frac{\l c (1+\d)}{c_1}T_{\rm rel}((1+\d)n),
\end{gather*}
and again we reach a contradiction by choosing
$c_1> \l c(1+\d)$.
\end{proof}
\subsubsection{Proof of Theorem~\ref{th:main3} for the maximally
constrained model}
The key observation here is that, for any $L\in {\ensuremath{\mathbb N}} $, the hitting
time $\t(L+1)$ is stochastically larger than the maximum between $k$
independent copies $\{\t^{(i)}(L)\}_{i=1}^k$ of the hitting time $\t(L)$. That follows
immediately by noting that:
\begin{itemize}
\item starting from the configuration identically equal to $1$, a
vertex $x$ can be updated only after the first time at which all
its $k$-children have been updated;
\item the projection of the OFA-jf process
on the sub-trees rooted at each one of the children of the root of ${\ensuremath{\mathbb T}} _{L+1}$
are independent OFA-jf processes on ${\ensuremath{\mathbb T}} _L$.
\end{itemize}
From here the proof follows a beautiful argument of Dekking and Host, used in~\cite{DH91}
to derive tightness for the minima of certain branching random walks:
\begin{align*}
T_{\rm hit}(L+1)&\ge
{\ensuremath{\mathbb E}} \bigl[\max_{i=1,\dots,k}\t^{(i)}(L)\bigr]\\
&\ge \frac 12 {\ensuremath{\mathbb E}} \bigl[\t^{(1)}(L)+\t^{(2)}(L) + |\t^{(1)}(L)-\t^{(2)}(L)|\bigr]\\
&=T_{\rm hit}(L) +\frac 12
{\ensuremath{\mathbb E}} \bigl[|\t^{(1)}(L)-\t^{(2)}(L)|\bigr]\\
&\ge T_{\rm hit}(L) +\frac 12 {\ensuremath{\mathbb E}} \bigl[|\bar \t^{(1)}(L)|\bigr],
\end{align*}
since whenever $X',X''$ are i.i.d.\ copies of a random variable one has
$\mathbb{E}|X'-X''|
\geq \mathbb{E} \left| X' - \mathbb{E} X'\right|$
by conditioning on $X'$ and applying Jensen's inequality. Altogether,
\begin{equation}
\label{eq:tree7}
{\ensuremath{\mathbb E}} \bigl[|\bar \t^{(1)}(L)|\bigr] \le 2 \left(T_{\rm
hit}(L+1)-T_{\rm hit}(L)\right).
\end{equation}
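The i.i.d.\ inequality invoked above can be verified directly: since $X''$ is independent of $X'$ with ${\ensuremath{\mathbb E}} X''={\ensuremath{\mathbb E}} X'$,
\[
{\ensuremath{\mathbb E}} \bigl|X'-{\ensuremath{\mathbb E}} X'\bigr|
={\ensuremath{\mathbb E}} \Bigl|{\ensuremath{\mathbb E}} \bigl[X'-X''\mid X'\bigr]\Bigr|
\le {\ensuremath{\mathbb E}} \Bigl[{\ensuremath{\mathbb E}} \bigl[\,|X'-X''|\,\big|\,X'\bigr]\Bigr]
={\ensuremath{\mathbb E}} \bigl|X'-X''\bigr|,
\]
the middle step being Jensen's inequality applied to the conditional expectation.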
The conclusion of the theorem now follows from Lemma~\ref{treelem:1}
and Theorem~\ref{io e C}.
\qed
\subsubsection{Proof of Theorem~\ref{th:main3} for the minimally constrained model}
In this case we define
\[
\t_{\rm min}(L):= \min_{i=1,\dots,k}
\tau^{(i)}(L),
\]
where $\tau^{(i)}(L)$ is the first time that the
$i^{th}$-child of the root of ${\ensuremath{\mathbb T}} _{L+1}$ is updated and we write
\[
T_{\rm hit}(L+1)\le {\ensuremath{\mathbb E}} \bigl[\t_{\rm min}(L)\bigr] + \sup_L\sup_{\o\in \ensuremath{\mathcal G}_L} {\ensuremath{\mathbb E}} _\o\bigl[\t(L)\bigr],
\]
with $\ensuremath{\mathcal G}_L$ the set of configurations in $\O_{{\ensuremath{\mathbb T}} _{L}}$ with
$\o_r=1$ and at
least one zero among the children of the children of the root $r$.
\begin{lemma}
\label{l.1}
$
\sup_L\sup_{\o\in \ensuremath{\mathcal G}_L} {\ensuremath{\mathbb E}} _\o \t(L)< \infty.
$
\end{lemma}
Assuming the lemma we write
\begin{align*}
T_{\rm hit}(L+1)&\le {\ensuremath{\mathbb E}} \bigl[\t_{\rm min}(L)\bigr] +c\\
&\le \frac 12 {\ensuremath{\mathbb E}} \bigl[\t^{(1)}(L)+\t^{(2)}(L) -
|\t^{(1)}(L)-\t^{(2)}(L)|\bigr] +c\\
&=T_{\rm hit}(L) -\frac 12
{\ensuremath{\mathbb E}} \bigl[|\t^{(1)}(L)-\t^{(2)}(L)|\bigr]+c.
\end{align*}
Thus
\[
{\ensuremath{\mathbb E}} \bigl[|\bar \t(L)|\bigr]\le {\ensuremath{\mathbb E}} \bigl[|\t^{(1)}(L)-\t^{(2)}(L)|\bigr]
\le 2\bigl(T_{\rm hit}(L)-T_{\rm hit}(L+1)\bigr) +2c.
\]
Hence, if $L_n\in [n, (1+\d)n]$ satisfies property (b) of Lemma~\ref{treelem:1}, we get
\[
{\ensuremath{\mathbb E}} \bigl[|\bar \t(L_n)|\bigr]\le 2 \frac{c_1}{\d}
T_{\rm rel}\left((1+\d)n\right) +2c.
\]
The conclusion of the theorem now follows from Theorem~\ref{io e C}.
\qed
\begin{proof}[Proof of Lemma~\ref{l.1}]
Fix $L$
and $\o\in \ensuremath{\mathcal G}_L$ and observe that
\begin{gather*}
{\ensuremath{\mathbb P}} _\o(\o_r(t)=1)\\
={\ensuremath{\mathbb P}} _\o(\o_r(t)=1\mid \t(L)\ge t){\ensuremath{\mathbb P}} _\o(\t(L)\ge t)+
{\ensuremath{\mathbb P}} _\o(\o_r(t)=1\mid \t(L)<t){\ensuremath{\mathbb P}} _\o(\t(L)<t) \\
= (1-p){\ensuremath{\mathbb P}} _\o(\t(L)\ge t) +p.
\end{gather*}
That is because $\o_r=1$ at time $t=0$ while it is a Bernoulli(p)
random variable given that the root has
been updated at least once. Thus
\[
{\ensuremath{\mathbb E}} _\o[\t(L)]\le \frac{1}{1-p}\int_0^\infty dt \,
|{\ensuremath{\mathbb P}} _\o(\o_r(t)=1) -p |.
\]
In order to bound the above integral from above we closely follow the strategy of \cite{CMST}*{\S4}. In what
follows, for any finite subtree $\ensuremath{\mathcal T}$ of ${\ensuremath{\mathbb T}} $, we will refer to the \emph{children} of $\ensuremath{\mathcal T}$ as the vertices of ${\ensuremath{\mathbb T}} \setminus \ensuremath{\mathcal T}$
whose parent is in $\ensuremath{\mathcal T}$. Using the graphical construction, for all times $t\ge 0$ we define
a (random) \emph{distinguished} tree $\ensuremath{\mathcal T}_t$ according to the following algorithm:
\begin{enumerate}[(i)]
\item
$\ensuremath{\mathcal T}_0$ coincides with the root together with those among its children which have
at least one zero among their children (\hbox{\it i.e.\ } they are unconstrained).
\item
$\ensuremath{\mathcal T}_t=\ensuremath{\mathcal T}_0$ until the first ``legal'' ring at time $t_1$ at one of
the children of $\ensuremath{\mathcal T}_0$, call it $x_0$.
\item $\ensuremath{\mathcal T}_{t_1}=\ensuremath{\mathcal T}_0\cup \{x_0\}$.
\item Iterate.
\end{enumerate}
Exactly as in \cite{CMST}*{\S4.1}, one can easily verify the
following key properties of the above construction:
\begin{enumerate}[(a)]
\item for all $t\ge 0$ each leaf of $\ensuremath{\mathcal T}_t$ is unconstrained \hbox{\it i.e.\ }
there is a zero among its children;
\item if at time $t=0$ the variables $\{\o_x\}_{x\in \ensuremath{\mathcal T}_0}$ are not
fixed but instead are i.i.d.\ with law $\pi$, then, conditionally on $\{\ensuremath{\mathcal T}_s\}_{s\le t}$,
the same is true for the variables $\{\o_x(t)\}_{x\in \ensuremath{\mathcal T}_t}$.
\item For all $i\ge 1$, given $\ensuremath{\mathcal T}_{t_i}$ and $t_i$, the law of the
random time $t_{i+1}-t_i$ does not depend on the variables (clock
rings and coin tosses) of the graphical construction in $\ensuremath{\mathcal T}_{t_i}$.
\end{enumerate}
As in \cite{CMST}*{Eqs.~(4.8) and~(4.10)}, the above properties imply
that
\begin{align*}
\operatorname{Var}_\pi\left({\ensuremath{\mathbb E}} _\o\left[\o_r(t)\mid \{\ensuremath{\mathcal T}_s\}_{s\le t}\right]\right)\le
e^{-2t/T_{\rm rel}(L)}.
\end{align*}
Therefore,
\begin{align*}
\sup_{\o\in \ensuremath{\mathcal G}_L}\big|\,{\ensuremath{\mathbb E}} _\o\left[\o_r(t)-p\right]\big|&\le
\sup_{\o\in \ensuremath{\mathcal G}_L}{\ensuremath{\mathbb E}} _\o\,\Big|\, {\ensuremath{\mathbb E}} _\o\left[\o_r(t)-p\mid \{\ensuremath{\mathcal T}_s\}_{s\le
t}\right]\Big| \\
&\le \left(\frac{1}{p\wedge q}\right)^{|\ensuremath{\mathcal T}_0|}
\sup_{\o\in \ensuremath{\mathcal G}_L}{\ensuremath{\mathbb E}} _\o\Bigl[\sum_{\o\in \O_{\ensuremath{\mathcal T}_0}}\pi(\o)\big|\,{\ensuremath{\mathbb E}} _\o\left(\o_r(t)-p\mid \{\ensuremath{\mathcal T}_s\}_{s\le t}\right)\big|\Bigr]\\
&\le \left(\frac{1}{p\wedge q}\right)^{|\ensuremath{\mathcal T}_0|}\sup_{\o\in \ensuremath{\mathcal G}_L}{\ensuremath{\mathbb E}} _\o\left[\operatorname{Var}_{\pi}\left({\ensuremath{\mathbb E}} _\o\left(\o_r(t)\thinspace |\thinspace \{\ensuremath{\mathcal T}_s\}_{s\le t}\right)\right)^{1/2}\right]
\\
&\le \left(\frac{1}{p\wedge q}\right)^{|\ensuremath{\mathcal T}_0|}e^{-t/T_{\rm rel}(L)}\,.
\end{align*}
By Theorem~\ref{io e C} we have that $\sup_LT_{\rm rel}(L)<\infty$, and the proof is complete.
\end{proof}
Consider the maximally constrained process on $\O_{{\ensuremath{\mathbb T}} _{L+1}}$ and
let $\t^{\rm max}(L)$ be the first time at which all the children of
the root have been updated at least once starting from the configuration identically
equal to one. For a given $\o\in \O_{{\ensuremath{\mathbb T}} _{L+1}}$ and $x\in
{\ensuremath{\mathbb T}} _{L+1}$, further let $\ensuremath{\mathcal C}_\o(x)$ be the maximal subtree rooted at $x$ where
$\o$ is equal to one.
Finally, recall that
${\ensuremath{\mathbb P}} (\cdot)$ denotes the basic coupling given by the graphical
construction and that $\o(t)$ denotes the process at time $t$ started
from the initial configuration $\o$.
\begin{lemma}
\label{l.1bis} There exists some $c>0$ such that
\begin{align*}
\max_{\o\in \O_{{\ensuremath{\mathbb T}} _{L+1}}}{\ensuremath{\mathbb P}} \left( |\ensuremath{\mathcal C}_{\o(\t^{\rm
max}(L))}(r)|\ge n\right)&\le c\, \pi\left(|\ensuremath{\mathcal C}_\o(r)|\ge \frac{n-2}{k-1}\right),
\end{align*}
and in particular,
\[
\max_{\o\in \O_{{\ensuremath{\mathbb T}} _{L+1}}}{\ensuremath{\mathbb E}} \left|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}(r)\right|\le c \sum_\o\pi(\o)|\ensuremath{\mathcal C}_\o(r)|.
\]
\end{lemma}
\begin{proof}
Recall that under the basic coupling all the
starting configurations have coupled by time $\t^{\rm max}(L)$. Hence,
\begin{align*}
{\ensuremath{\mathbb P}} &\left(\exists \,\o \in\O_{{\ensuremath{\mathbb T}} _{L+1}}:\ |\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}(r)|\ge n\right) =
\sum_\o\pi(\o) {\ensuremath{\mathbb P}} \left(|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}(r)|\ge n\right)\\
&\le k \sum_\o\pi(\o) {\ensuremath{\mathbb P}} \left(|\ensuremath{\mathcal C}_{\o(\t^{(1)}(L))}(r)|\ge n\, ,\, \t^{\rm max}(L)=\t^{(1)}(L)\right) \\
&\le k \sum_\o\pi(\o) {\ensuremath{\mathbb P}} \left(|\ensuremath{\mathcal C}_{\o(\t^{(1)}(L))}(r)|\ge n\right) ,
\end{align*}
where $\t^{(1)}(L)$ is the first time that the first (in some chosen order)
child of the root has been updated starting from all ones. By
construction, at time $\t^{(1)}(L)$ the first child has all its
children equal to zero. Therefore the event $\{|\ensuremath{\mathcal C}_{\o(\t^{(1)}(L))}(r)|\ge n\}$ implies that there exists some other child $x$ of the
root such that $\ensuremath{\mathcal C}_{\o(\t^{(1)}(L))}(x)$ has cardinality at least $(n-2)/(k-1)$. Using reversibility and the independence between
$\t^{(1)}(L) $ and the
process in the subtree of depth $L$ rooted at $x$ together with a
union bound over the choice of $x$, we conclude that
\begin{gather*}
\sum_\o\pi(\o) {\ensuremath{\mathbb P}} \left(|\ensuremath{\mathcal C}_{\o(\t^{(1)}(L))}(r)|\ge n \right)\le
(k-1)\pi\left(|\ensuremath{\mathcal C}_\o(r)|\ge \frac{n-2}{k-1}\right).
\end{gather*}
The statement of the lemma follows at once by summing over $n$.
\end{proof}
Using the lemma we can now prove the analogue of Lemma~\ref{l.1}.
\begin{lemma}
\label{l.2} Fix any positive integer $\ell$. For all $p\le p_c$ there exists $c=c(\ell,p)$ such that
\begin{align*}
&(i) &T_{\rm hit}(L+\ell)&\le {\ensuremath{\mathbb E}} \left[\t^{\rm max}(L)\right]+ c\,T_{\rm rel}(L)\qquad
&\text{if } p<p_c,\\
&(ii)&T_{\rm hit}(L+\ell)&\le {\ensuremath{\mathbb E}} \left[\t^{\rm max}(L)\right]+ c L T_{\rm rel}(L) \qquad
&\text{if } p=p_c.
\end{align*}
Moreover, for any $d>0$,
\begin{equation}
\label{eq:tail}
{\ensuremath{\mathbb P}} \bigl(\t(L+\ell)-\t^{\rm max}(L) \ge d\,T_{\rm
rel}(L)\bigr)=
\begin{cases}
O(d^{-1})&\text{ if }p<p_c,\\
O(d^{-1/3})&\text{ if }p=p_c.\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
For simplicity we give a proof for the case $\ell=1$. The general proof is similar and we omit the details. We first claim that, starting from $\o\in \O_{{\ensuremath{\mathbb T}} _{L+1}}$, one has
\begin{equation}
\label{eq:tree15}
{\ensuremath{\mathbb E}} _\o[\t(L+1)]\le c\, |\ensuremath{\mathcal C}_\o|T_{\rm rel}(L)
\end{equation}
for some constant $c$, where $\ensuremath{\mathcal C}_\o:=\ensuremath{\mathcal C}_\o(r)$ and $|\ensuremath{\mathcal C}_\o|$ denotes its cardinality.
If we assume the claim, the strong Markov property implies that
\[
T_{\rm hit}(L+1)\le {\ensuremath{\mathbb E}} \left[\t^{\rm max}(L)\right]+c\, {\ensuremath{\mathbb E}} \left[|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}|\right]\,T_{\rm rel}(L)
\]
where all expectations are computed starting
from all ones. Using Lemma~\ref{l.1bis},
\[
{\ensuremath{\mathbb E}} \left[|\ensuremath{\mathcal C}_{\o(\t^{\rm
max}(L))}|\right]\le c' \sum_\o\pi(\o)|\ensuremath{\mathcal C}_\o(r)|
\] for some constant $c'$ and parts (i) and (ii) of the lemma
follow by standard results on percolation on regular trees (see, e.g.,~\cite{Grimmett}).
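The percolation input used in part (i) reduces, on the infinite $k$-ary tree, to the following elementary computation (a vertex at depth $m$ below the root belongs to $\ensuremath{\mathcal C}_\o(r)$ iff the $m+1$ vertices on its path to the root, itself included, all equal one; the finite-volume expectation is bounded above by the infinite-volume one):
\[
\sum_\o \pi(\o)\,\bigl|\ensuremath{\mathcal C}_\o(r)\bigr|
=\sum_{m\ge 0} k^{m}\,p^{\,m+1}
=\frac{p}{1-kp}<\infty
\qquad\text{whenever } p<p_c=\tfrac1k.
\]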
To prove \eqref{eq:tree15} we proceed exactly as in Lemma
\ref{l.1}. We first write
\[
{\ensuremath{\mathbb E}} _\o\left[\t(L+1)\right]\le \frac{1}{1-p}\int_0^\infty dt \,
|{\ensuremath{\mathbb P}} _\o(\o_r(t)=1) -p |
\]
and then we apply the results of \cite{CMST}*{\S4} to get that
\[
|{\ensuremath{\mathbb P}} _\o(\o_r(t)=1) -p |\le \min\left[1,\left(\frac{1}{p\wedge q}\right)^{|\ensuremath{\mathcal C}_\o|}e^{-t/T_{\rm rel}(L)}\right].
\]
Thus,
\[
\frac{1}{1-p}\int_0^\infty dt \, |{\ensuremath{\mathbb P}} _\o(\o_r(t)=1) -p |\le c
|\ensuremath{\mathcal C}_\o|\, T_{\rm rel}(L)
\]
for some constant $c$ and \eqref{eq:tree15} follows.
Lastly we prove \eqref{eq:tail}. The subcritical case $p<p_c$ follows easily from $(i)$ and Markov's inequality, while the critical case follows from \eqref{eq:tree15}. To see this, write
\begin{align*}
{\ensuremath{\mathbb P}} &\bigl(\t(L+1)-\t^{\rm max}(L)\ge d\,T_{\rm rel}(L)\bigr)\\
&={\ensuremath{\mathbb P}} \bigl(\t(L+1)-\t^{\rm max}(L)\ge d\,T_{\rm rel}(L)\, ,\, |\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}| \le {d}^{2/3}\bigr)\\
&+{\ensuremath{\mathbb P}} \bigl(\t(L+1)-\t^{\rm max}(L)\ge d\,T_{\rm rel}(L)\,,\,|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}| >{d}^{2/3}\bigr).
\end{align*}
Using Markov's inequality and \eqref{eq:tree15},
\begin{align*}
{\ensuremath{\mathbb P}} &\bigl(\t(L+1)-\t^{\rm max}(L)\ge d\,T_{\rm rel}(L)\, ,\,
|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}| \le {d}^{2/3}\bigr)\\
&\le \frac{1}{d T_{\rm rel}(L)}\,
{\ensuremath{\mathbb E}} \left[{\mathds 1}_{\{|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}| \le
d^{2/3}\}}{\ensuremath{\mathbb E}} _{\o(\t^{\rm
max}(L))}\left[\t(L+1)\right]\right]\\
&\le \frac{c}d \, {\ensuremath{\mathbb E}} \left[{\mathds 1}_{\{|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}| \le
d^{2/3}\}}|\ensuremath{\mathcal C}_{\o(\t^{\rm max}(L))}|\right] \leq c {d}^{-1/3}.
\end{align*}
The second term is also $O\left(d^{-1/3}\right)$
using Lemma~\ref{l.1bis} and the fact that, for $p=p_c$,
\begin{equation*}
\pi\left(|\ensuremath{\mathcal C}_\o|\ge
n\right)= O(1/\sqrt{n}).
\qedhere
\end{equation*}
\end{proof}
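As for the critical tail estimate just used: under $\pi$, and conditionally on $\o_r=1$, the cluster $\ensuremath{\mathcal C}_\o(r)$ is the family tree of a Galton--Watson process with $\mathrm{Bin}(k,p_c)$ offspring distribution, which is critical ($kp_c=1$) and of finite variance, so the classical estimate for the total progeny of a critical Galton--Watson process gives
\[
\pi\bigl(|\ensuremath{\mathcal C}_\o(r)|\ge n\bigr)
\le {\ensuremath{\mathbb P}} \bigl(\text{total progeny}\ge n\bigr)=O\bigl(n^{-1/2}\bigr).
\]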
\subsection{Proof of Theorem~\ref{th:main4}}
Fix $\varepsilon\in (0,1/2)$. Let $\{L_n\}$ be a sequence such that,
for all $n$ large enough,
\begin{equation}
\label{eq:tree12}
\max\Bigl(T_{\rm hit}(L_n)-T_{\rm hit}(L_n-1) ,T_{\rm hit}(L_n+1)-T_{\rm hit}(L_n)\Bigr)\le c T_{\rm rel}(L_n),
\end{equation}
for some constant
$c$ independent of $n$. The existence of such a sequence is guaranteed
by Lemma~\ref{treelem:1}. We begin by proving that
\begin{equation}
\label{eq:tree11}
T_{\rm mix}(L_n,\varepsilon)\le T_{\rm hit}(L_n)+ O_\varepsilon(T_{\rm rel}(L_n)).
\end{equation}
Exactly as for the East process, one readily infers from the
graphical construction that at time $\t(L_n)$ all initial configurations
$\o\in \O_{{\ensuremath{\mathbb T}} _{L_n}}$ have coupled. Therefore (cf.~\S\ref{East-cutoff}),
\[
\max_{\o,\o'}\left\|{\ensuremath{\mathbb P}} ^{{\ensuremath{\mathbb T}} _{L_n},t}_\o-{\ensuremath{\mathbb P}} ^{{\ensuremath{\mathbb T}} _{L_n},t}_{\o'}\right\|\le
{\ensuremath{\mathbb P}} (\t(L_n)>t).
\]
If $t=T_{\rm hit}(L_n)+\Delta $, Markov's inequality together with
\eqref{eq:tree7}
imply that
\begin{align*}
{\ensuremath{\mathbb P}} \left(\t(L_n)>T_{\rm hit}(L_n)+\Delta\right)&\le
\frac{1}{\Delta}{\ensuremath{\mathbb E}} \left(|\bar \t(L_n)|\right)
\le \frac{2}{\Delta} \bigl[T_{\rm hit}(L_n+1)-T_{\rm
hit}(L_n)\bigr]\\
&\le \frac{2}{\Delta} c\,T_{\rm rel}(L_n).
\end{align*}
Inequality \eqref{eq:tree11} now follows by choosing
$\Delta=2c\,T_{\rm rel}(L_n)/\varepsilon$.
Next we prove the lower bound
\begin{equation}
\label{eq:tree11bis}
T_{\rm mix}(L_n,1-\varepsilon)\ge T_{\rm hit}(L_n)- O_\varepsilon(T_{\rm rel}(L_n)).
\end{equation}
Start the process from the configuration $\o$ identically equal to one
and let $\t^{\rm max}(L_n-\ell)$ be the time when all the vertices at distance $\ell$ from the root have been updated at least once.
Conditionally on $\t^{\rm max}(L_n-\ell)>t$, the root is connected by a path of $1$'s to some vertex at distance $\ell$
at time $t$. On the other hand, standard percolation results for $p\le
p_c$ imply that the $\pi$-probability
of the above event is smaller than $\varepsilon/2$ provided that $\ell$ is
chosen large enough. Therefore, for such a value of $\ell$,
\begin{align*}
\|\mu^t_\o-\pi\|&\ge {\ensuremath{\mathbb P}} (\t^{\rm max}(L_n-\ell) >t)-\varepsilon/2.
\end{align*}
It remains to show that
\[
{\ensuremath{\mathbb P}} (\t^{\rm max}(L_n-\ell) >t)\ge 1-\frac{\varepsilon}{2},
\]
for $t= T_{\rm hit}(L_n)-O_\varepsilon(T_{\rm rel}(L_n))$.
We prove this by
contradiction. Let $t=T_{\rm
hit}(L_n)-DT_{\rm rel}$, where $D$ is a constant to be specified
later, and suppose that ${\ensuremath{\mathbb P}} (\t^{\rm max}(L_n-\ell) >t)<
1-\frac{\varepsilon}{2}$. Using Lemma~\ref{l.2} we can choose a large constant $\Delta$ independent of $L_n$ such that
\[
{\ensuremath{\mathbb P}} (\t(L_n)- \t^{\rm max}(L_n-\ell) \ge \Delta T_{\rm rel})\le \varepsilon/4,
\] and hence, by a union bound,
$${\ensuremath{\mathbb P}} \bigl(\t(L_n)<t+\Delta T_{\rm rel}\bigr)>\varepsilon/4.$$
However, for large enough $D$, this contradicts Theorem~\ref{th:main3}.
Theorem~\ref{th:main4} now follows from \eqref{eq:tree11},
\eqref{eq:tree11bis}, Theorems~\ref{io e C} and~\ref{noi}, and Lemma~\ref{treelem:1}. \qed
\subsection*{Acknowledgments}
We are grateful to Y. Peres for pointing out the relevant literature on branching random walks, which led to improved estimates in Theorems~\ref{th:main3}--\ref{th:main4}. We also thank O. Zeitouni for an interesting conversation about the concentration results on trees and O. Blondel for several useful comments.
This work was carried out while F.M.\ was a
Visiting Researcher at the Theory Group of Microsoft Research
and S.G.\ was an intern there; they thank the group for its hospitality.
\begin{bibdiv}
\begin{biblist}
\bib{Aldous}{article}{
author = {Aldous, David},
title = {Random walks on finite groups and rapidly mixing {M}arkov chains},
booktitle = {Seminar on probability, XVII},
series = {Lecture Notes in Math.},
volume = {986},
pages = {243--297},
publisher = {Springer},
address = {Berlin},
year = {1983},
}
\bib{AD86}{article}{
author = {Aldous, David},
author = {Diaconis, Persi},
title = {Shuffling cards and stopping times},
journal = {Amer. Math. Monthly},
volume = {93},
pages = {333--348},
year = {1986},
}
\bib{AD02}{article}{
author={Aldous, David},
author={Diaconis, Persi},
title={The asymmetric one-dimensional constrained Ising model: rigorous
results},
journal={J. Stat. Phys.},
volume={107},
date={2002},
number={5-6},
pages={945--975},
}
\bib{AF}{book}{
author={Aldous,David},
author={Fill, Jim},
title={Reversible Markov Chains and Random Walks on Graphs},
note = {In preparation, \texttt{http://www.stat.berkeley.edu/$\sim$aldous/RWG/book.html}},
}
\bib{Blondel}{article}{
author = {Blondel, Oriane},
title = {Front progression for the East model},
journal = {Stochastic Process. Appl.},
volume={123},
pages={3430--3465},
year = {2013},
}
\bib{Bolthausen}{article}{
author={Bolthausen, E.},
title={On the central limit theorem for stationary mixing random fields},
journal={Ann. Probab.},
volume={10},
number={4},
pages={1047--1050},
year={1982},
}
\bib{BDZ}{article}{
author={Bolthausen, Erwin},
author={Deuschel, Jean Dominique},
author={Zeitouni, Ofer},
title={Recursions and tightness for the maximum of the discrete, two
dimensional Gaussian free field},
journal={Electron. Commun. Probab.},
volume={16},
date={2011},
pages={114--119},
}
\bib{BZ1}{article}{
author={Bramson, Maury},
author={Zeitouni, Ofer},
title={Tightness for the minimal displacement of branching random walk},
journal={J. Stat. Mech. Theory Exp.},
date={2007},
number={7},
pages={P07010, 12},
}
\bib{BZ2}{article}{
author={Bramson, Maury},
author={Zeitouni, Ofer},
title={Tightness for a family of recursion equations},
journal={Ann. Probab.},
volume={37},
date={2009},
number={2},
pages={615--653},
}
\bib{CMRT}{article}{
author={Cancrini, N.},
author={Martinelli, F.},
author={Roberto, C.},
author={Toninelli, C.},
title={Kinetically constrained spin models},
journal={Probab. Theory Related Fields},
volume={140},
date={2008},
number={3-4},
pages={459--504},
}
\bib{CFM}{article}{
author={Chleboun, Paul},
author={Faggionato, Alessandra},
author={Martinelli, Fabio},
title={{Time scale separation and dynamic heterogeneity in the low
temperature East model}},
journal={Comm. Math. Phys.},
status={to appear},
}
\bib{CMRTtree}{article}{
author = {Cancrini, Nicoletta},
author = {Martinelli, Fabio},
author = {Roberto, Cyril},
author = {Toninelli, Cristina},
title= {Mixing time of a kinetically constrained spin model on trees: power law scaling at criticality},
journal={Probab. Theory Related Fields},
status={to appear},
}
\bib{CMST}{article}{
author={Cancrini, N},
author={Martinelli, F},
author={Schonmann, R},
author={Toninelli, C},
title={{Facilitated Oriented Spin Models: Some Non Equilibrium
Results}},
date={2010-01},
journal={J. Stat. Phys.},
volume={138},
number={6},
pages={1109\ndash 1123},
}
\bib{Diaconis}{article}{
author = {Diaconis, Persi},
title = {The cutoff phenomenon in finite {M}arkov chains},
journal = {Proc. Nat. Acad. Sci. U.S.A.},
volume = {93},
year = {1996},
number = {4},
pages = {1659--1664},
}
\bib{DiFi}{article}{
author = {Diaconis, Persi},
author = {Fill, James Allen},
title = {Strong stationary times via a new form of duality},
journal = {Ann. Probab.},
volume = {18},
year = {1990},
number = {4},
pages = {1483--1522},
}
\bib{DiSh}{article}{
author = {Diaconis, Persi},
author = {Shahshahani, Mehrdad},
title = {Generating a random permutation with random transpositions},
journal = {Z. Wahrsch. Verw. Gebiete},
volume = {57},
year = {1981},
number = {2},
pages = {159--179},
}
\bib{DH91}{article}{
author={Dekking, F. M.},
author={Host, B.},
title={Limit distributions for minimal displacement of branching random
walks},
journal={Probab. Theory Related Fields},
volume={90},
date={1991},
number={3},
pages={403--426},
}
\bib{East-survey}{article}{
author = {Faggionato, Alessandra},
author={Martinelli, Fabio},
author={Roberto, Cyril},
author = {Toninelli, Cristina},
title = {The East model: recent results and new progresses},
journal = {Markov Processes and Related Fields},
status= {in press},
}
\bib{FH}{article}{
title = {Kinetic Ising Model of the Glass Transition},
author = {Fredrickson, Glenn H.},
author = {Andersen, Hans C.},
journal = {Phys. Rev. Lett.},
volume = {53},
number = {13},
pages = {1244--1247},
date = {1984},
}
\bib{Grimmett}{book}{
author={Grimmett, Geoffrey},
title={Percolation},
series={Grundlehren der Mathematischen Wissenschaften [Fundamental
Principles of Mathematical Sciences]},
volume={321},
edition={2},
publisher={Springer-Verlag},
place={Berlin},
date={1999},
pages={xiv+444},
}
\bib{JE91}{article}{
author={J\"{a}ckle, J.},
author={Eisinger, S.},
title={A hierarchically constrained kinetic Ising model},
date={1991},
journal={Zeitschrift für Physik B Condensed Matter},
volume={84},
number={1},
pages={115-124},
}
\bib{LPW}{book}{
author={Levin, David A.},
author={Peres, Yuval},
author={Wilmer, Elizabeth L.},
title={Markov chains and mixing times},
note={With a chapter by James G. Propp and David B. Wilson},
publisher={American Mathematical Society},
place={Providence, RI},
date={2009},
pages={xviii+371},
isbn={978-0-8218-4739-8},
}
\bib{MT}{article}{
author={Martinelli, Fabio},
author = {Toninelli, Cristina},
title={{Kinetically constrained spin models on trees}},
date={2013-02},
journal={Ann. Appl. Probab.},
volume={23},
number={5},
date={2013},
pages={1721--2160},
}
\bib{Saloff}{article}{
author={Saloff-Coste, Laurent},
title={Lectures on finite Markov chains},
conference={
title={Lectures on probability theory and statistics},
address={Saint-Flour},
date={1996},
},
book={
series={Lecture Notes in Math.},
volume={1665},
publisher={Springer},
place={Berlin},
},
date={1997},
pages={301--413},
}
\bib{Stein}{article}{
author={Stein, Charles},
title={A bound for the error in the normal approximation to the
distribution of a sum of dependent random variables},
conference={
title={ Proc. of the Sixth Berkeley Symp. on Math. Statist. and Prob.},
},
book={
publisher={Univ. California Press},
},
date={1972},
pages={583--602},
}
\end{biblist}
\end{bibdiv}
\end{document}
|
\chapter{ILC Accelerator Parameters and Detector Concepts \label{sid:chapter_accelerator_detector}}
\input{Chapter_Accelerator_Detector/Section_ILCAcceleratorParameters.tex}
\input{Chapter_Accelerator_Detector/Section_DetectorConcepts.tex}
\input{Chapter_Accelerator_Detector/Section_SystematicErrors.tex}
\section{Detector Concepts} \label{sid:Accelerator_Detector:sec:detectorconcepts}
\input{Chapter_Accelerator_Detector/Subsection_ILD}
\input{Chapter_Accelerator_Detector/Subsection_SiD}
\section{ILC Accelerator Parameters} \label{sid:Accelerator_Detector:sec:accelparams}
\subsection{TDR Baseline ILC 250 - 500 GeV}
The International Linear Collider (ILC) is a high-luminosity linear electron-positron
collider based on \SI{1.3}{GHz} superconducting
radio-frequency (SCRF) accelerating technology.
Its center-of-mass-energy range is \SIrange{200}{500}{\GeV} (extendable
to \SI{1}{\TeV}). A schematic view of the accelerator complex,
indicating the location of the major sub-systems, is shown in \Fref{fig:tdres:ilcschematic}:
\thisfloatsetup{floatwidth=\textwidth}
\begin{figure}[htb]
\includegraphics[trim=8 14 0 12,clip,width=\hsize]{Chapter_Accelerator_Detector/figs/ilc-layout-schematic.pdf}
\caption[Schematic layout of the ILC]
{Schematic layout of the ILC, indicating all the major subsystems (not to scale).}
\label{fig:tdres:ilcschematic}
\end{figure}
\begin{itemize}
\item a polarized electron source based on a photocathode DC gun;
\item a polarized positron source in which positrons are obtained
from electron-positron pairs by converting high-energy photons
produced by passing the high-energy main electron beam through an
undulator;
\item \SI{5}{\GeV} electron and positron damping rings (DR) with a
circumference of \SI{3.2}{\km}, housed in a common tunnel;
\item beam transport from the damping rings to the main linacs,
followed by a two-stage bunch-compressor system prior to injection
into the main linac;
\item two \SI{11}{\km} main linacs, utilizing \SI{1.3}{GHz} SCRF
cavities operating at an average gradient
of \SI{31.5}{\mega\volt/\meter}, with a pulse length
of \SI{1.6}{\milli\second};
\item two beam-delivery systems, each \SI{2.2}{\km} long, which bring
the beams into collision with a \SI{14}{\milli\radian} crossing
angle, at a single interaction point which can be occupied by two
detectors in a so-called ``push-pull'' configuration.
\end{itemize}
The total footprint of the ILC complex is $\sim$~\SI{31}{\km}
long. The electron source, positron source (including an independent
low-powered auxiliary source), and the electron and positron damping
rings are centrally located around the interaction region (IR) in the
Central Region. The damping-ring complex is displaced laterally to
avoid interference with the detector hall. The electron and positron
sources themselves are housed in the same (main accelerator) tunnels
as the beam-delivery systems, which reduces the overall cost and size
of the central-region underground construction.
The top-level parameters for the baseline operational range of
center-of-mass energies from 250 to \SI{1000}{\GeV} were set in close
discussion with the physics community that will exploit the ILC. The
baseline performance requirements thus obtained have been optimized
with respect to cost, physics performance and risk. All have been
either directly demonstrated, or represent justifiable extrapolations
from the current state of the art. Table~\ref{tab:tdres:prms} shows the parameters for
several center-of-mass energies, including possible upgrades and
staging.
The parameters in \Tref{tab:tdres:prms} represent relatively conservative operating
points resulting from optimization subject to the constraints imposed
by the various accelerator sub-systems. For example, the bunch charge,
bunch spacing and the total number of bunches in the damping rings are
limited by various instability thresholds (most notably the electron
cloud in the positron ring), realistic rise-times for the injection
and extraction kickers, and the desire to minimize the circumference
of the rings. Secondly, the maximum length of the beam pulse is
constrained to $\sim$~\SI{1.6}{ms}, which is routinely achieved in the available
\SI{1.3}{GHz} \SI{10}{MW} multi-beam klystrons and modulators. The beam current is
further constrained by the need to minimize the number of klystrons
(peak power) and higher-order modes (cryogenic load and beam
dynamics). Dynamic cryogenic load (refrigeration) is also a cost
driver, which limits the repetition rate of the machine. Thirdly, both
the electron and positron sources constrain the achievable beam
current and total charge: For the laser-driven photocathode polarized
electron source, the limits are set by the laser; for the
undulator-based positron source, the limits are set by the power
deposition in the photon target. The beam pulse length is further
constrained by the achievable performance of the warm RF capture
sections (both sources). Finally, at the interaction point,
single-bunch parameters are limited by the strong beam-beam effects
and requirements on both the beam-beam backgrounds and beam stability.
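As an illustrative cross-check of the parameters in \Tref{tab:tdres:prms}, the average beam power and the geometric luminosity can be computed directly from the baseline \SI{500}{\GeV} column. The sketch below (Python) uses the standard flat-beam formulas with no beam-beam enhancement or hourglass correction, so the gap between the geometric value and the tabulated luminosity of \SI{1.8e34}{\cm^{-2}\s^{-1}} reflects the enhancement from the beam-beam interaction, which is not modeled here:

```python
import math

e = 1.602e-19  # elementary charge [C]

# Baseline 500 GeV column of the summary table
E_beam = 250e9    # energy per beam [eV]
N = 2.0e10        # bunch population
n_b = 1312        # bunches per pulse
f_rep = 5.0       # pulse repetition rate [Hz]
sigma_x = 474e-9  # IP RMS horizontal beam size [m]
sigma_y = 5.9e-9  # IP RMS vertical beam size [m]

def beam_power_MW(E_beam, N, n_b, f_rep):
    """Average total beam power, both beams, in MW."""
    return 2.0 * E_beam * e * N * n_b * f_rep / 1e6

def geometric_luminosity(N, n_b, f_rep, sigma_x, sigma_y):
    """Geometric luminosity in cm^-2 s^-1 (no enhancement)."""
    L_m2 = f_rep * n_b * N**2 / (4.0 * math.pi * sigma_x * sigma_y)
    return L_m2 * 1e-4  # m^-2 -> cm^-2

print(beam_power_MW(E_beam, N, n_b, f_rep))                 # ~10.5 MW
print(geometric_luminosity(N, n_b, f_rep, sigma_x, sigma_y))
```

The beam power reproduces the tabulated \SI{10.5}{\MW}, and the geometric luminosity of roughly \SI{7.5e33}{\cm^{-2}\s^{-1}} implies an enhancement factor of about 2.4 relative to the tabulated value.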
\thisfloatsetup{floatwidth=0.95\textwidth}
\begin{landscape}
\begin{table}[p]
\caption{Summary table of the \SIrange{250}{500}{\GeV} baseline and the luminosity and energy upgrade parameters.
Also included is a possible first-stage \SI{250}{\GeV} parameter set (half the original main-linac length).}
\setlength{\tabcolsep}{8pt}
\begin{tabular}{p{5.5cm} llcccccccccc }
\toprule
& & & \multicolumn{3}{c}{Baseline \SI{500}{\GeV} Machine} & & 1st Stage & & L Upgrade & & \multicolumn{2}{c}{$E\sub{CM}$ Upgrade} \\
\tcmidrule{4-6}
\tcmidrule{8-8}
\tcmidrule{10-10}
\tcmidrule{12-13}
& & & & & & & & & & & A & B \\
Center-of-mass energy & $E\sub{CM}$ & \si{\GeV} & 250 & 350 & 500 & & 250 & & 500 & & 1000 & 1000 \\
\midrule
Collision rate & $f\sub{rep}$ & \si{\Hz} & 5 & 5 & 5 & & 5 & & 5 & & 4 & 4 \\
Electron linac rate & $f\sub{linac}$ & \si{\Hz} & 10 & 5 & 5 & & 10 & & 5 & & 4 & 4 \\
Number of bunches & $n\sub b$ & & 1312 & 1312 & 1312 & & 1312 & & 2625 & & 2450 & 2450 \\
Bunch population & $N$ & $\times$\num{e10} & 2.0 & 2.0 & 2.0 & & 2.0 & & 2.0 & & 1.74 & 1.74 \\
Bunch separation & $\Delta t\sub b$ & \si{\ns} & 554 & 554 & 554 & & 554 & & 366 & & 366 & 366 \\
Pulse current & $I\sub{beam}$ & \si{\mA} & 5.8 & 5.8 & 5.8 & & 5.8 & & 8.8 & & 7.6 & 7.6 \\
\\
Main linac average gradient & $G\sub a$ & \si{\MV\per\metre} & 14.7 & 21.4 & 31.5 & & 31.5 & & 31.5 & & 38.2 & 39.2 \\
Average total beam power & $P\sub{beam}$ & \si{\MW} & 5.9 & 7.3 & 10.5 & & 5.9 & & 21.0 & & 27.2 & 27.2 \\
Estimated AC power & $P\sub{AC}$ & \si{\MW} & 122 & 121 & 163 & & 129 & & 204 & & 300 & 300 \\
\\
RMS bunch length & $\sigma\sub z$ & \si{\mm} & 0.3 & 0.3 & 0.3 & & 0.3 & & 0.3 & & 0.250 & 0.225 \\
Electron RMS energy spread & $\Delta p/p$ & \% & 0.190 & 0.158 & 0.124 & & 0.190 & & 0.124 & & 0.083 & 0.085 \\
Positron RMS energy spread & $\Delta p/p$ & \% & 0.152 & 0.100 & 0.070 & & 0.152 & & 0.070 & & 0.043 & 0.047 \\
Electron polarization &$P\sub{-}$ & \% & 80 & 80 & 80 & & 80 & & 80 & & 80 & 80 \\
Positron polarization & $P\sub{+}$ & \% & 30 & 30 & 30 & & 30 & & 30 & & 20 & 20 \\
\\
Horizontal emittance & $\gamma\epsilon\sub x$ & \si{\um} & 10 & 10 & 10 & & 10 & & 10 & & 10 & 10 \\
Vertical emittance & $\gamma\epsilon\sub y$ & \si{\nm} & 35 & 35 & 35 & & 35 & & 35 & & 30 & 30 \\
\\
IP horizontal beta function & $\beta\sub x^{*}$ & \si{\mm} & 13.0 & 16.0 & 11.0 & & 13.0 & & 11.0 & & 22.6 & 11.0 \\
IP vertical beta function & $\beta\sub y^{*}$ & \si{\mm} & 0.41 & 0.34 & 0.48 & & 0.41 & & 0.48 & & 0.25 & 0.23 \\
\\
IP RMS horizontal beam size & $\sigma\sub x^{*}$ & \si{\nm} & 729.0 & 683.5 & 474 & & 729 & & 474 & & 481 & 335 \\
IP RMS vertical beam size & $\sigma\sub y^{*}$ & \si{\nm} & 7.7 & 5.9 & 5.9 & & 7.7 & & 5.9 & & 2.8 & 2.7 \\
\\
Luminosity & $L$ & $\times$\SI{e34}{\cm^{-2}\s^{-1}} & 0.75 & 1.0 & 1.8 & & 0.75 & & 3.6 & & 3.6 & 4.9 \\
Fraction of luminosity in top 1\% & $L\sub{0.01}/L$ & & 87.1\% & 77.4\% & 58.3\% & & 87.1\% & & 58.3\% & & 59.2\% & 44.5\% \\
Average energy loss & $\delta\sub{BS}$ & & 0.97\% & 1.9\% & 4.5\% & & 0.97\% & & 4.5\% & & 5.6\% & 10.5\% \\
Number of pairs per bunch crossing & $N\sub{pairs}$ & $\times$\num{e3} & 62.4 & 93.6 & 139.0 & & 62.4 & & 139.0 & & 200.5 & 382.6 \\
Total pair energy per bunch crossing & $E\sub{pairs}$ & \si{\TeV} & 46.5 & 115.0 & 344.1 & & 46.5 & & 344.1 & & 1338.0 & 3441.0 \\
\bottomrule
\end{tabular}
\label{tab:tdres:prms}
\end{table}
\end{landscape}
\subsection{Luminosity and Energy Upgrade Options}
The ILC TDR outlines two upgrades. The first is the luminosity upgrade, which doubles the average beam power by adding RF power to the linacs. The
second is the energy
upgrade, which doubles the center-of-mass energy to 1 TeV by extending the main linacs. The latter will require substantial additional tunnel
construction, and the upgraded 1 TeV machine will consume more electrical power. The TDR also describes a possible first
stage, a 250 GeV center-of-mass-energy ``Higgs Factory''. These options are included in \Tref{tab:tdres:prms}.
Two additional options should be considered~\cite{Ross:2013aa}. The first is operation at 250 GeV center of mass energy following the baseline luminosity upgrade.
The second is the potential for
operation at 1.5 TeV center of mass energy. The latter is briefly mentioned in the ILC cover letter submitted to the European Strategy
Preparatory Group~\cite{Barish:2012es}.
Here we only consider the luminosity upgrade at 250 GeV center of mass energy.
For operation at 250~GeV, a second step may be considered in which the collider is operated at 10 Hz, instead of 5 Hz,
with an average beam power equivalent to that shown in the L Upgrade 500 column of \Tref{tab:tdres:prms}. It is assumed in what follows that the full
Baseline 500 and L Upgrade 500 have been completed. At that point, if the main-linac gradient is reduced to half of nominal, the
repetition rate can be doubled without substantially increasing the overall average power consumption (in a scheme quite similar
to that adopted for the electron linac at center of mass energies below 300 GeV). Naturally, the average beam power is also the same
as the L Upgrade 500 beam power. This second step allows the ILC light-Higgs-factory luminosity to be increased by a factor
of four, from \SI{0.75e34}{\cm^{-2}\s^{-1}} to \SI{3.0e34}{\cm^{-2}\s^{-1}}. \Tref{tab:loweupgrade}
and \Fref{fig:loweupgrade} summarize
the primary parameters for these three ILC operational modes.
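The factor-of-four increase follows from the linear scaling of the luminosity with the number of bunches per pulse and with the collision rate. A minimal numerical check (Python, with the values of \Tref{tab:loweupgrade}):

```python
# Luminosity scales linearly with the number of bunches per pulse (n_b)
# and with the collision rate (f_rep), all else being equal.
def scaled_luminosity(L0, nb0, nb1, f0, f1):
    return L0 * (nb1 / nb0) * (f1 / f0)

L_first_stage = 0.75e34  # 1st-stage 250 GeV luminosity [cm^-2 s^-1]

# Luminosity upgrade: 1312 -> 2625 bunches, still 5 Hz collisions
L_lumi_up = scaled_luminosity(L_first_stage, 1312, 2625, 5, 5)   # ~1.5e34
# Second step: 2625 bunches and 5 Hz -> 10 Hz collisions
L_10Hz = scaled_luminosity(L_first_stage, 1312, 2625, 5, 10)     # ~3.0e34
print(L_lumi_up, L_10Hz)
```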
\begin{table}
\caption{ILC Higgs factory operational modes.}
\begin{tabular}{llcccc }
\toprule
& & & 1st Stage & Baseline ILC, after & High Rep Rate \\
& & & Higgs Factory & Lumi Upgrade & Operation \\
\tcmidrule{4-4}
\tcmidrule{5-5}
\tcmidrule{6-6}
& & & & & \\
Center-of-mass energy & $E\sub{CM}$ & \si{\GeV} & 250 & 250 & 250 \\
\midrule
Collision rate & $f\sub{rep}$ & \si{\Hz} & 5 & 5 & 10 \\
Electron linac rate & $f\sub{linac}$ & \si{\Hz} & 10 & 10 & 10 \\
Number of bunches & $n\sub b$ & & 1312 & 2625 & 2625 \\
Pulse current & $I\sub{beam}$ & \si{\mA} & 5.8 & 8.75 & 8.75 \\
\\
Average total beam power & $P\sub{beam}$ & \si{\MW} & 5.9 & 10.5 & 21 \\
Estimated AC power & $P\sub{AC}$ & \si{\MW} & 129 & 160 & 200 \\
\\
Luminosity & $L$ & $\times$\SI{e34}{\cm^{-2}\s^{-1}} & 0.75 & 1.5 & 3.0 \\
\bottomrule
\end{tabular}
\label{tab:loweupgrade}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.75\hsize]{Chapter_Accelerator_Detector/figs/ecm250_lum_upgrade.pdf}
\end{center}
\caption{ILC stages and upgrades. The baseline design (yellow) is fully optimized and represents
the starting point for evaluating options. Three options
(1st stage, L upgrade and TeV upgrade) are described in the TDR (blue). A further three options are mentioned here (orange and red). }
\label{fig:loweupgrade}
\end{figure}
The main impact of low-energy 10 Hz collision-rate operation is on the injector systems: these must be able to cope with high-repetition-rate operation without
any reduction in gradient, as is presently conceived only for the electron side (see~\cite{Adolphsen:2013jya}, Part II, Section 2.2.2, page 9). Furthermore, the positron-source undulator
must be able to produce adequate positrons using only the nominal 125 GeV luminosity-production electron beam. The latter may require further development of
superconducting helical-undulator technology, as the helix pitch should be reduced from the present 12 cm (as demonstrated in Ref.~\cite{Adolphsen:2013jya}, Part I, Section 4.3.2, page 129)
to 0.9 cm without reducing the peak field. It is possible that a longer undulator with ILC TDR parameters would be adequate. Alternatively, an intermediate
solution could be considered with a reduced positron yield and possibly higher electron beam energy. For the latter, additional electrical power would be
required, and the e+/e- beams might not have equal energy. This does not pose a problem for machine operation in principle, but requires study. In addition,
the positron injector system would be operated at 10 Hz at full gradient, requiring about twice as much RF power and cryogenic capacity. Because of the
low center-of-mass-energy positron scheme, this aspect of electron injector operation is already accounted for in the TDR.
\input{Chapter_Accelerator_Detector/Subsection_gammagamma.tex}
\subsection{Energy/Luminosity Running Scenarios}
It is of interest to consider the evolution of ILC Higgs physics results over time
given the ILC machine parameters defined in \Tref{tab:tdres:prms} and \Tref{tab:loweupgrade}.
Taking eighteen years as a reasonable ILC lifetime, and using the concept of a Snowmass Year
where an accelerator is assumed to run at its nominal luminosity for one-third of the time,
we assume that the ILC runs for a total of $18\times 10^{7}$ seconds at nominal luminosity
during its life. Without optimization we make the simple assumption that we run
for $3\times 10^7$~s at the baseline luminosity at each of the
center of mass energies 250, 500, and 1000 GeV, in that order. Following those runs we go back and
run for $3\times 10^7$~s at the upgraded luminosity at each of the three center of mass energies.
To avoid a proliferation of table entries, most results are only presented for
the four different combinations of energy and luminosity listed in Table~\ref{tab:ecmlumruns}.
Each scenario corresponds to the accumulated luminosity at different points in time.
In the summary chapter, however, we present results for some alternative scenarios where, for example,
runs at center of mass energies of 250 and 500 GeV take place at the upgraded luminosity before
any runs at 1000 GeV.
\begin{table}
\begin{center}
\begin{tabular}{lcccccccccc}
Nickname & Ecm(1) & Lumi(1) & + & Ecm(2) & Lumi(2) & + & Ecm(3) & Lumi(3) & Runtime & Wall Plug E \cr
& (GeV) & (fb$^{-1}$) & & (GeV) & (fb$^{-1}$) & & (GeV) & (fb$^{-1}$) & (yr) & (MW-yr) \cr \hline
ILC(250) & 250 & 250 & & & & & & & 1.1 & 130 \cr
ILC(500) & 250 & 250 & & 500 & 500 & & & & 2.0 & 270 \cr
ILC(1000) & 250 & 250 & & 500 & 500 & & 1000 & 1000 & 2.9 & 540 \cr
ILC(LumUp) & 250 & 1150 & & 500 & 1600 & & 1000 & 2500 & 5.8 & 1220 \cr
\hline
\end{tabular}
\caption{Energy and luminosity scenarios assumed in this paper.}
\label{tab:ecmlumruns}
\end{center}
\end{table}
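The Runtime and Wall Plug E columns of Table~\ref{tab:ecmlumruns} can be approximately reproduced from the instantaneous luminosities and AC-power figures of \Tref{tab:tdres:prms}. The sketch below (Python) assumes \num{3.15e7} seconds per calendar year and the 1 TeV upgrade-A parameters for the 1000 GeV stage; the results agree with the table to within its rounding:

```python
SECONDS_PER_YEAR = 3.15e7  # assumed calendar seconds per year

# (label, integrated lumi [fb^-1], instantaneous L [cm^-2 s^-1], AC power [MW])
stages = [
    ("250 GeV baseline",  250.0,  0.75e34, 122.0),
    ("500 GeV baseline",  500.0,  1.8e34,  163.0),
    ("1 TeV (upgrade A)", 1000.0, 3.6e34,  300.0),
]

def runtime_years(lumi_fb, inst_lumi):
    """Years of running at nominal luminosity; 1 fb^-1 = 1e39 cm^-2."""
    return lumi_fb * 1e39 / inst_lumi / SECONDS_PER_YEAR

cum_time, cum_energy = [], []
t_tot = e_tot = 0.0
for _, lumi, inst, p_ac in stages:
    t = runtime_years(lumi, inst)
    t_tot += t
    e_tot += t * p_ac
    cum_time.append(round(t_tot, 1))
    cum_energy.append(round(e_tot))

print(cum_time)    # cumulative runtime [yr]
print(cum_energy)  # cumulative wall-plug energy [MW-yr]
```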
\section{Systematic Errors} \label{sid:Accelerator_Detector:sec:systematicerrors}
Most of the errors quoted in this document are statistical only.
For the three baseline luminosity scenarios this is an excellent
approximation of the total error. For the luminosity upgrade scenario,
however, some thought must be given to systematic errors.
\input{Chapter_Accelerator_Detector/Subsection_FlavorTagging}
\input{Chapter_Accelerator_Detector/Subsection_Luminosity}
\input{Chapter_Accelerator_Detector/Subsection_Polarization}
\input{Chapter_Accelerator_Detector/Subsection_SysError_Summary} \label{sid:Accelerator_Detector:sec:systematicerrors:subsec:summary}
\subsection{Flavor Tagging}
\subsubsection{Introduction}
We give a ballpark estimate of the systematic uncertainties
arising from $b$ tagging in the context of the Higgs branching ratio measurements. %
The strategy is to employ control samples to evaluate
the $b$ tagging efficiencies as well as
the fake rate due to non-$b$ jets (primarily $c$ jets) passing the
$b$ tag requirements.
For the former, we give an estimate using
a $b$-jet-rich sample obtained by selecting the
$ZZ\rightarrow \ell\ell b\overline{b}$ process.
For the latter, we use the
$WW\rightarrow \ell\nu qq$ process
to obtain a control sample
containing very few $b$ jets in the event.
We then evaluate the impact on the uncertainties of
$BR(h\rightarrow b\overline{b})$ assuming
a center-of-mass energy of $\sqrt{s}=250$~GeV
and an integrated luminosity of $\mathcal{L}=250$~fb$^{-1}$ (nominal ILC case)
with an extrapolation to $\mathcal{L}=1150$~fb$^{-1}$ for the high luminosity ILC case.
For the $b$ tagging efficiency, we use the following two working
points in our estimates,
$\epsilon_b=80\%$ and $50\%$,
with the $c$ and $uds$ fake rates summarized in
Tab.~\ref{tab:btag-wp}; these are read off from
Fig.~\ref{fig:btag},
which shows the signal and background efficiencies
obtained using LCFIPlus.
\begin{table}[hbtp]
\centering
\caption{$b$ tagging working points and fake rate
for $e^+e^-\rightarrow q\overline{q}$ samples at $\sqrt{s}=91.2$~GeV
using LCFIPlus.}
\label{tab:btag-wp}
\begin{tabular}{ccc}
\hline
$b$ tag efficiency & $c$ fake rate & $q$ fake rate \\
\hline
80\% & 8\% & 0.8\% \\
50\% & 0.13\% & 0.05\% \\
\hline
\end{tabular}
\end{table}
\begin{figure}[hbtp]
\centering
\includegraphics[width=0.5\linewidth]{Chapter_Accelerator_Detector/figs/eval-lcfiweights-test.pdf}
\caption{$b$ tagging efficiencies versus background efficiencies
for $e^+e^-\rightarrow q\overline{q}$ samples at $\sqrt{s}=91.2$~GeV
using LCFIPlus.
}
\label{fig:btag}
\end{figure}
\subsubsection{Estimate of $b$ tag efficiency using
$ZZ\rightarrow \ell^+\ell^- b\overline{b}$}
The signal efficiency is assumed to be 50\%,
noting that the analysis will be very similar to the
$e^+e^-\rightarrow Zh\rightarrow \ell\ell qq$ analysis~\cite{ild_loi_higgs_br}.
The background efficiency for the $WW$ process
is assumed to be 1\%, which should be a conservative estimate.
\begin{table}[hbtp]
\centering
\caption{Selection table for the $ZZ$ analysis.
The $b$ tag is applied to one of the two jets.}
\label{tab:zz-cut}
\begin{tabular}{cccc}
\hline
Process & Before selection & After selection &
Tag $b$ ($\epsilon=50\%$) \\
\hline
$ZZ\rightarrow\ell\ell bb$ & 30000 & 15000 & 7500 \\
$ZZ\rightarrow\ell\ell cc$ & 24000 & 12000 & 14 \\
$ZZ\rightarrow\ell\ell qq$ & 86000 & 43000 & 22 \\
$WW\rightarrow\ell\nu cs$ & $1.3\times10^6$ & 13000 & 13 \\
$WW\rightarrow\ell\nu ud$ & $1.3\times10^6$ & 13000 & 7 \\
\hline
\end{tabular}
\end{table}
The sample after the first $b$ tag is used as the control sample.
We apply $b$ tagging with varying efficiencies to this control sample.
Since the sample achieves a purity of over $99\%$,
we can safely neglect the contribution of
fake $b$ jets in our estimate of the $b$ tagging efficiencies.
We use the standard recipe for computing the
selection efficiencies, taking the uncertainty from the binomial
distribution,
$ \sqrt{ p(1-p)/N } $,
where $N=7500$ and $p$ is the efficiency under study.
The results are summarized in Tab.~\ref{tab:btag-result}.
\begin{table}[hbtp]
\centering
\caption{Expected $b$-tagging uncertainties at various selection efficiencies.}
\label{tab:btag-result}
\begin{tabular}{cc}
\hline
Efficiency & Uncertainty \\
\hline
80\% & 0.46\% \\
70\% & 0.53\% \\
60\% & 0.57\% \\
50\% & 0.58\% \\
\hline
\end{tabular}
\end{table}
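The entries of Tab.~\ref{tab:btag-result} follow directly from the binomial formula above with $N=7500$; a minimal sketch in Python:

```python
import math

def binomial_uncertainty(p, n):
    """Absolute statistical uncertainty on an efficiency p measured with n trials."""
    return math.sqrt(p * (1.0 - p) / n)

n_control = 7500  # size of the ZZ -> ll bb control sample after the first b tag
for p in (0.80, 0.70, 0.60, 0.50):
    print(f"eps = {p:.0%}: +/- {binomial_uncertainty(p, n_control):.2%}")
```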
We therefore conclude that the uncertainty in the
$b$ tagging efficiency is around 0.3\%. Since the uncertainty
scales with the square root of the control-sample statistics,
it goes down to around 0.15\% in the high-luminosity ILC case.
\subsubsection{Estimate of $b$ tagging fake rate using
the $WW\rightarrow \ell\nu qq$ process}
Here we assume a selection efficiency of 10\% for
$WW \rightarrow \ell\nu qq$ events.
The selection for this process could proceed by
selecting an isolated lepton with tight lepton identification criteria
and no more than one isolated lepton with loose lepton identification criteria.
We assume that the dominant background in this case will be
due to $ZZ\rightarrow \tau\tau bb$ events
where one $\tau$ will result in hadronic jets
while the other $\tau$ undergoes one-prong leptonic decay.
\begin{table}[hbtp]\centering
\caption{Summary of selection for the fake rate measurement.
Here the $b$ tag selection is such that
one of the two jets will pass the $b$ tag requirement
at the specified efficiency.}
\label{tab:ww-cut}
\begin{tabular}{ccccc}
\hline
Process & Before selection & After selection &
$b$ tag ($\epsilon_b=80\%$) & $b$ tag ($\epsilon_b=50\%$) \\
\hline
$WW\rightarrow \ell\nu cs$
& $1.3\times 10^6$ & $1.3\times 10^5$ (10\%) &
11310 (8.7\%) & 234 (0.18\%) \\
$WW\rightarrow \ell\nu ud$
& $1.3\times 10^6$ & $1.3\times 10^5$ (10\%) &
2080 (1.6\%) & 130 (0.1\%) \\
$ZZ\rightarrow \tau\tau b\overline{b}$
& $8500$ & $85$ (1\%) &
82 (96\%) & 64 (75\%) \\
\hline
\end{tabular}
\end{table}
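The per-event tag rates in Tab.~\ref{tab:ww-cut} follow from combining the per-jet fake rates of Tab.~\ref{tab:btag-wp}, assuming the two jets in the event are tagged independently; a minimal sketch in Python:

```python
def event_fake_rate(eps1, eps2):
    """Probability that at least one of two independently tagged jets passes the b tag."""
    return 1.0 - (1.0 - eps1) * (1.0 - eps2)

# Per-jet fake rates from the working-point table
wp80 = {"c": 0.08,   "q": 0.008}   # eps_b = 80% working point
wp50 = {"c": 0.0013, "q": 0.0005}  # eps_b = 50% working point

# WW -> l nu cs: one c jet and one light (s) jet
cs80 = event_fake_rate(wp80["c"], wp80["q"])  # ~8.7%
# WW -> l nu ud: two light jets
ud80 = event_fake_rate(wp80["q"], wp80["q"])  # ~1.6%

cs50 = event_fake_rate(wp50["c"], wp50["q"])  # ~0.18%
ud50 = event_fake_rate(wp50["q"], wp50["q"])  # ~0.1%
print(cs80, ud80, cs50, ud50)
```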
We estimate the fake rate as follows.
The contribution to the fake rate from
$WW\rightarrow \ell\nu cs$ will be dominated by $c$ jets,
as can be inferred from Tab.~\ref{tab:btag-wp}.
As the uncertainty on this number,
we take the number of $WW\rightarrow \ell\nu ud$ events
as the full uncertainty, which should be a very conservative estimate.
This gives 20\% (100\%) as the relative uncertainty on the fake rate
for the case of $\epsilon_b=80\%$ ($50\%$).
\subsubsection{Estimate of branching ratio systematic uncertainty}
We now translate these results into the Higgs branching ratio analysis.
We take the nominal Higgs branching ratios of
$BR(h\rightarrow b\overline{b})=58\%$ and
$BR(h\rightarrow c\overline{c})=2.9\%$.
For the $b$ tagging working point of $\epsilon_b=80\%$,
the fake rate from $c$ jet is around $\epsilon_c=8\pm2\%$
including the uncertainty which was just estimated.
Applying this $b$ tag gives
$BR(h\rightarrow b\overline{b})\cdot\epsilon_b = 46.4 \pm 0.14\%$
and
$BR(h\rightarrow c\overline{c})\cdot\epsilon_c = 0.23 \pm 0.06\%$
where we took 0.3\% as the relative uncertainty in $\epsilon_b$
and 20\% for the uncertainty in $\epsilon_c$.
Similarly for the $\epsilon_b=50\%$ working point, we compute
$BR(h\rightarrow b\overline{b})\cdot\epsilon_b = 29.0 \pm 0.09\%$
and
$BR(h\rightarrow c\overline{c})\cdot\epsilon_c = 0.038 \pm 0.038\%$,
where we took 0.3\% as the relative uncertainty in $\epsilon_b$
and 100\% for the uncertainty in $\epsilon_c$.
We conclude that despite the large relative uncertainty in $c$ jet tagging,
the overall uncertainty is dominated by the uncertainty in $b$ jet tagging
due to the small $h\rightarrow c\overline{c}$ branching ratio.
It is estimated that the uncertainty in the $b$ tagging efficiency in the observable
$\sigma(e^+e^-\rightarrow Zh)\cdot BR(h\rightarrow b\overline{b})$
is at the 0.3\% level in the nominal ILC and at the 0.15\% level in the ILC luminosity upgrade case.
Prospects for improving these numbers include
refined selection of the control samples (before the first $b$ tagging)
and the addition of other $ZZ$ and $Z\gamma$ modes
which will require background estimates with an actual simulation analysis.
Moving up to $\sqrt{s}=350$~GeV provides additional clean control samples from
fully leptonic top pair decays
$e^+e^-\rightarrow t\overline{t}\rightarrow b\ell\nu b\ell\nu$.
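The numbers quoted above can be reproduced with a simple error propagation; the sketch below (Python) uses the quoted $\epsilon_c=8\pm2\%$ (i.e. 25\% relative uncertainty) for the 80\% working point, which matches the quoted $\pm0.06\%$:

```python
BR_bb, BR_cc = 0.58, 0.029  # nominal Higgs branching ratios

def tagged(br, eps):
    """Fraction of Higgs decays of this mode passing the b tag."""
    return br * eps

# eps_b = 80% working point: eps_b rel. unc. 0.3%; eps_c = 8 +/- 2% (25% rel.)
bb80, dbb80 = tagged(BR_bb, 0.80), tagged(BR_bb, 0.80) * 0.003  # 46.4 +/- 0.14 %
cc80, dcc80 = tagged(BR_cc, 0.08), tagged(BR_cc, 0.08) * 0.25   # 0.23 +/- 0.06 %

# eps_b = 50% working point: eps_c = 0.13% with 100% relative uncertainty
bb50, dbb50 = tagged(BR_bb, 0.50), tagged(BR_bb, 0.50) * 0.003      # 29.0 +/- 0.09 %
cc50, dcc50 = tagged(BR_cc, 0.0013), tagged(BR_cc, 0.0013) * 1.0    # 0.0038 +/- 0.0038 %
print(bb80, dbb80, cc80, dcc80)
```

As the text notes, the $c$-jet contribution is negligible in absolute terms at either working point, so the overall uncertainty is set by the $b$-tag efficiency.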
\subsubsection{Summary and prospects}
We put forth a ballpark argument for the $b$ tagging systematic uncertainty
in the context of the Higgs branching ratio measurement.
Our preliminary findings are that the dominant contribution comes from
the uncertainty in the estimate of the $b$ efficiency,
which is at the level of 0.3\% (nominal ILC) / 0.15\% (high luminosity ILC) when applied to the
Higgs branching ratio measurement.
This number is expected to improve by including additional modes.
The contribution from the fake rate is found to be negligible.
It is highly desired to refine these estimates using
a proper simulation study including all background processes.
\subsection{Gamma-Gamma Option}
High energy photon-photon collisions can be achieved by integrating high average power short-pulse lasers to the Linear Collider, enabling an expanded physics program for the facility including:
\begin{itemize}
\item Single Higgs production sensitive to charged particles of arbitrary mass
\item Greater reach for single production of supersymmetric Higgs bosons, H and A
\item Probe of CP nature of the observed Higgs bosons through control of the polarization of the Compton photons that define the initial state
\item Anomalous couplings in single and double W boson production
\item Potential production of supersymmetric particles in electron-gamma collisions
\end{itemize}
The technology required to realize a photon linear collider continues to mature. Compton back-scattering technology is being developed worldwide for light source applications and high average power lasers continue to advance for Inertial Confinement Fusion.
Compton scattering can transfer $\sim80\%$ of the incident electron energy to the backscattered photons when a 1 micron wavelength laser pulse is scattered from a 250~GeV electron beam. A laser pulse of 5 Joules, compressed to 1 ps width and focused to a diffraction-limited spot, can convert most of the incoming electrons in a bunch to high-energy photons. An enormous amount of average laser power is required to provide 15,000 laser pulses per second to match the electron beam structure. Since most of the laser energy goes unused in the Compton process, the required energy can be greatly reduced if the laser pulses can be recirculated.
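The $\sim80\%$ energy transfer follows from the standard Compton kinematics, where the maximum fractional photon energy is $y_{\max}=x/(1+x)$ with $x=4E_eE_L/(m_ec^2)^2$; a minimal numerical sketch (Python):

```python
# Maximum fractional energy transfer in Compton backscattering:
# y_max = x / (1 + x), with x = 4 E_e E_L / (m_e c^2)^2
m_e = 0.511e6   # electron rest energy [eV]
E_e = 250e9     # electron beam energy [eV]
E_L = 1.24      # photon energy of a 1 um laser [eV] (E[eV] ~ 1240 / lambda[nm])

x = 4.0 * E_e * E_L / m_e**2
y_max = x / (1.0 + x)
print(f"x = {x:.2f}, y_max = {y_max:.2f}")   # y_max ~ 0.83

# Average laser power: 5 J pulses at 15,000 pulses per second,
# with and without the factor-~300 saving from recirculation
P_avg = 5.0 * 15000.0   # 75 kW without recirculation
P_recirc = P_avg / 300.0
print(P_avg, P_recirc)
```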
A design of a recirculating cavity \cite{Will:2001ha} was created in 2001 which takes advantage of the long inter-bunch spacing in the superconducting machine to recirculate the laser pulses around the outside of the detector. Calculations showed that the required laser power could be reduced by a factor of 300 in this design. Recent studies have shown that a laser with sufficient phase stability to drive such a cavity is achievable with current technology. The available power saving for a recirculating system depends on the achievable cavity size that determines the number of times a laser pulse could be reused in a single electron bunch train.
Implementation of the photon collider option has several requirements for both the detector and the electron accelerator. Apertures must be opened in the forward part of the detector to allow the laser pulses to reach the Interaction Point and be focused a few millimeters before the electron beams collide. The electron beam will be left with an enormous energy spread after the Compton backscatter and a large crossing angle will be required in order to allow sufficient aperture for the spent beam to be extracted. Finally, the photon collider option will require its own beam dump design in order to handle the photon beam which will have about 50\% of the final beam energy.
Compton backscattering for the creation of MeV gamma-ray light sources is a world-wide activity. The basic techniques of bringing an electron beam and a laser pulse into collision is independent of the electron beam energy and these facilities are providing vital experience in the development of these techniques for the linear collider. These facilities are also developing the technology for recirculating laser pulses which will be critical to achieve a cost effective solution for the photon linear collider.
Current MeV gamma-ray sources include the ThomX\cite{Variola:2011zb} machine at LAL, the LUCX\cite{Fukuda:2010zzb} machine at KEK and the T-REX\cite{Hartemann:2010zz} machine at LLNL. The MightyLaser collaboration is developing a four mirror recirculating cavity for the demonstration of Compton backscattering at ATF\cite{Delerue:2011nk}.
While the photon linear collider has always been envisioned as a later stage of the basic linear-collider program, there may be advantages to considering it as a first stage. The photon collider requires an electron linear collider to drive it, but it requires neither positrons nor flat electron beams at the Interaction Point to reduce the beamstrahlung. This opens up the possibility of creating a first-stage linear collider without a positron source. The creation of a low-emittance RF electron gun might also open the possibility of eliminating the damping rings in the first stage. Consideration of a dedicated photon-collider Higgs factory as a first stage of the linear-collider program is motivated by the discovery of a low-mass Higgs boson at the LHC.
\subsection{ILD}
The ILD detector is a multi-purpose detector.
It has been designed for optimal particle-flow (PFA) performance and high-precision
vertexing and tracking. The tracking system consists of a high-precision pixel
vertex detector, silicon trackers, and a time-projection chamber. The calorimeter system
consists of a highly segmented electromagnetic calorimeter and a hadron calorimeter.
They are placed inside a 3.5 Tesla solenoid magnet and achieve
high-precision measurements of particle flow, track momenta, and vertices. On the outside of the magnet coil, the iron
return yoke is instrumented as a muon system and as a tail-catcher calorimeter.
The forward region is covered by two layers of pixel detectors and five layers of silicon-strip trackers.
The calorimeter system covers down to 5 mrad from the outgoing beam, except for
the hole for the incoming beam due to the 14 mrad crossing angle.
The quadrant view of the ILD detector is shown in Fig.~\ref{fig:ILD-quad-view}.
Further details of the detector can be found in Ref.~\cite{Behnke:2013lya}.
\begin{figure}[htb]
\ffigbox{\CommonHeightRow{\begin{subfloatrow}[2]%
\ffigbox[\FBwidth]{}{\includegraphics[height=\CommonHeight]{Chapter_Accelerator_Detector/figs/ILD_all_110826.pdf}}
\ffigbox[\FBwidth]{}{\includegraphics[trim=0 0 0 6,clip,height=\CommonHeight]{Chapter_Accelerator_Detector/figs/ILD_quadrant_2.pdf}}
\end{subfloatrow}}%
}{%
\caption{The ILD detector, showing (left) an isometric view on the platform, and (right) a quadrant view.}
\label{fig:ILD-quad-view}
}
\vspace{0.7cm}
\end{figure}
The ILC beam operates at 5 Hz, with a 1 ms beam-collision period followed by a 199 ms
quiet period. This unique beam-pulse structure allows data acquisition without a hardware trigger and
permits a lower-power-consumption electronics system through pulsed operation of the readout electronics.
The cooling requirements for the electronics are therefore moderate, and a thin detector system
can be realized.
Particle flow requires a thin tracker, to minimize interactions before the calorimeters, and thick calorimeters,
to fully absorb the showers. A thin vertex detector, together with the small beam-pipe radius, enables
precise vertex reconstruction even for low-momentum tracks.
Figure~\ref{fig:ILD-detector-material} (left) shows the material in the detector in radiation lengths
up to the end of the tracking system. The amount of material up to the end of the tracking is
mostly below 10\% of a radiation length over the full solid angle. The right-hand plot shows the total interaction length
including the hadron calorimeter, showing that the calorimeters provide about 7 interaction lengths of coverage over almost the full
solid angle.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[h]
\begin{tabular}{cc}
\includegraphics[width=0.52\hsize,viewport={0 -10 600 500},clip]{Chapter_Accelerator_Detector/figs/material-budget-new.pdf}
\includegraphics[width=0.5\hsize]{Chapter_Accelerator_Detector/figs/intlen_ILD_o1_v05.pdf}
\end{tabular}
\caption{Left: Average total radiation length of the material in the tracking detectors
as a function of polar angle. Right: Total interaction length in the
detector up to the end of the calorimeter system and including the coil.}
\label{fig:ILD-detector-material}
\end{figure}
For the ILC TDR study, we have developed a realistic detector simulation model.
In the model, we have implemented materials for electronics, cooling system,
and support structure based on the ILD baseline design in order to evaluate
the detector performance as realistically as possible.
The simulated data were analyzed with realistic tracking software (the Marlin tracking packages~\cite{Gaede:2006pj}),
particle flow analysis (PandoraPFANew~\cite{Marshall:2013bda}) and flavor tagging analysis (LCFIPlus~\cite{LCFIPlus}). In physics event analyses,
background events, such as low-$P_T$ hadronic events from
collisions of bremsstrahlung or beamstrahlung photons and low-energy
electron/positron backgrounds hitting the beam calorimeter, were overlaid on the signal events.
According to the performance study using
$e^+e^-\rightarrow q\bar{q}$ events and single-$\mu$ events, we have obtained
the required jet energy resolution (90\% truncated RMS error) over the full
solid angle and a momentum resolution of $\sigma_{1/P_T}=2\times 10^{-5}$~GeV$^{-1}$ for
high-momentum tracks. From the study using $e^+e^- \rightarrow t\bar{t}$ events,
an average track reconstruction efficiency of 99.7\% for tracks greater than
1 GeV across the entire polar angle range has been achieved. For $e^+e^-\rightarrow q\bar{q}$
events at 91 GeV, the $b$-quark ($c$-quark) tagging purity at 60\% efficiency was about 100\% (60\%).
\subsection{Luminosity}
The number of Bhabha events
per bunch crossing for a detector with minimum and maximum polar angle coverage
$\theta_{min}$ and $\theta_{max}$ (in mrad) is:
$$ N = 0.5\,\mathrm{pb}\,\frac{L}{R}\int\limits_{\theta_{min}}^{\theta_{max}}\frac{d\cos\theta}{\sin^4(\theta/2)} \sim 6 \times 10^{-6} \left(
\frac{1}{\theta_{min}^2}-\frac{1}{\theta_{max}^2}\right) $$
\noindent for \ensuremath{\sqrt{s}}\xspace=0.5~TeV, L=2$\times10^{34}
\rm{cm}^{-2}\rm{s}^{-1}$, and bunch crossing rate R=$1.4\times10^4
\rm{s}^{-1}$. Our goal is to measure the luminosity normalization
with an accuracy of several $10^{-4}$ for \ensuremath{\sqrt{s}}\xspace=0.5~TeV.
To do
this one needs $\approx 10^8$ events collected over $\approx10^7$ s,
or about ten events per second. One can then calculate the absolute
luminosity with $\approx10\%$ statistical error every several
minutes during the run.
With a bunch crossing rate of $1.4\times10^4
\rm{s}^{-1}$, we need $>10^{-3}$ events per bunch crossing. To
achieve this statistical accuracy, we start the fiducial region for
the precision luminosity measurement well away from the
beamstrahlung pair edge at $\theta$=20~mrad, with a fiducial
region beginning at $\theta_{min}$=46~mrad, which gives $\approx 2 \times 10^{-3}$
events per bunch crossing.
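The quoted rate of $\approx 2 \times 10^{-3}$ events per bunch crossing can be checked numerically. The sketch below (Python) integrates the $1/\sin^4(\theta/2)$ Bhabha spectrum with the parameters given above and compares it with the small-angle approximation; the value of $\theta_{max}=86$~mrad is our own illustrative assumption, as the upper edge of the fiducial region is not stated in the text.

```python
import math

# Parameters from the text: 0.5 pb normalization at sqrt(s) = 0.5 TeV,
# L = 2e34 cm^-2 s^-1, bunch-crossing rate R = 1.4e4 s^-1.
SIGMA0_CM2 = 0.5e-36   # 0.5 pb expressed in cm^2
L = 2e34               # instantaneous luminosity, cm^-2 s^-1
R = 1.4e4              # bunch crossings per second

def bhabha_per_crossing(theta_min, theta_max, steps=200_000):
    """Events per crossing: 0.5 pb * (L/R) * integral of d(cos t)/sin^4(t/2)."""
    integral = 0.0
    dtheta = (theta_max - theta_min) / steps
    for i in range(steps):
        t = theta_min + (i + 0.5) * dtheta
        integral += math.sin(t) / math.sin(t / 2.0) ** 4 * dtheta  # d(cos t) = sin t dt
    return SIGMA0_CM2 * (L / R) * integral

def small_angle_estimate(theta_min, theta_max):
    """The 6e-6 * (1/theta_min^2 - 1/theta_max^2) approximation."""
    return 6e-6 * (1.0 / theta_min ** 2 - 1.0 / theta_max ** 2)

# theta_min = 46 mrad from the text; theta_max = 86 mrad is assumed here.
n_exact = bhabha_per_crossing(0.046, 0.086)
n_approx = small_angle_estimate(0.046, 0.086)
print(n_exact, n_approx)  # both close to the quoted ~2e-3 per crossing
```

Both evaluations agree to within a few percent, confirming that the small-angle form is adequate for rate estimates in this angular range.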
Since the Bhabha cross
section is $\sigma \sim 1/\theta^3$, the luminosity precision can be
expressed as
$$\frac{\Delta L}{L} = \frac{2\Delta\theta}{\theta_{min}},$$
\noindent
where $\Delta\theta$ is a systematic error (bias) in polar angle measurement
and $\theta_{min} = 46$~mrad is the minimum polar angle of the fiducial region.
Because of the steep angular dependence, the precision of the minimum polar
angle measurement determines the luminosity precision.
To reach the luminosity precision goal of $10^{-3}$,
the polar angle must be measured with a precision
$\Delta\theta <$ 0.02~mrad and the radial positions of the sensors
must be controlled within 30~\ensuremath{\upmu\mathrm{m}}\xspace relative to the IP.
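The stated precision goals are mutually consistent, as a one-line evaluation of $\Delta L/L = 2\Delta\theta/\theta_{min}$ shows:

```python
# Luminosity precision from a polar-angle bias: dL/L = 2 * dtheta / theta_min.
theta_min = 46e-3   # rad, start of the fiducial region
dtheta = 0.02e-3    # rad, required polar-angle measurement precision
dL_over_L = 2.0 * dtheta / theta_min
print(dL_over_L)    # ~8.7e-4, i.e. just below the 1e-3 luminosity goal
```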
\subsection{Polarization}
The primary polarization measurement comes from dedicated
Compton polarimeters detecting backscattered electrons and positrons.
A relative polarization error of 0.1\% is expected from implementing polarimeters both upstream
and downstream of the Interaction Region. In addition, the polarization can be measured directly
with physics processes such as $e^+e^-\rightarrow W^+W^-$. Combining the two techniques, we
assume a polarization systematic error on cross section times branching ratios of
0.1\% and 0.05\% for the baseline and upgraded luminosities, respectively.
\subsection{SiD}
{SiD}\xspace is a general-purpose detector designed to perform precision measurements at
a Linear Collider\cite{Aihara:2009ad,Behnke:2013lya}. It satisfies the challenging detector
requirements for physics at the ILC. {SiD}\xspace is the result of many years
of creative design by physicists and engineers, backed up by a
substantial body of past and ongoing detector research and
development. While each component has benefitted from continual
development, the {SiD}\xspace design integrates these components into a
complete system for excellent measurements of jet energies, based on
the Particle Flow Algorithm (PFA) approach, as well as of charged
leptons, photons and missing energy. The use of robust silicon vertexing and
tracking makes {SiD}\xspace applicable
to a wide range of energies from a Higgs factory to beyond \SI{1}{\TeV}. {SiD}\xspace has been designed in a cost-conscious manner, with the
compact design that minimizes the volumes of high-performing,
high-value, components, while maintaining critical levels of
performance. The restriction on dimensions is offset by the relatively
high central magnetic field from a superconducting solenoid.
{SiD}\xspace is a compact detector based on a powerful silicon pixel vertex detector,
silicon tracking, silicon-tungsten electromagnetic calorimetry (ECAL)
and highly segmented hadronic calorimetry (HCAL). {SiD}\xspace also
incorporates a high-field solenoid, iron flux return, and a muon
identification system (see \Fref{sid:ConceptOverview:Ovw_1}).
The choice of silicon detectors for tracking and vertexing ensures
that {SiD}\xspace is robust with respect to beam backgrounds or beam loss,
provides superior charged-particle momentum resolution, and eliminates
out-of-time tracks and backgrounds. The main tracking detector and
calorimeters are ``live'' only during each single bunch crossing, so
beam-related backgrounds and low-\ensuremath{p_\mathrm{T}}\xspace backgrounds from \ensuremath{\upgamma\upgamma \rightarrow \mathrm{hadrons}}\xspace
processes will be reduced to the minimum possible levels. The {SiD}\xspace
calorimetry is optimized for excellent jet-energy measurement using
the PFA technique. The complete tracking and calorimeter systems are
contained within a superconducting solenoid, which has
a \SI{5}{\tesla} field strength, enabling the overall compact
design. The coil is located within a layered iron structure that
returns the magnetic flux and is instrumented to allow the
identification of muons.
\begin{figure}[htb]
\ffigbox{\CommonHeightRow{\begin{subfloatrow}[2]%
\ffigbox[\FBwidth]{}{\includegraphics[height=\CommonHeight]{Chapter_Accelerator_Detector/figs/dbd1.jpg}}
\ffigbox[\FBwidth]{}{\includegraphics[trim=0 0 0 6,clip,height=\CommonHeight]{Chapter_Accelerator_Detector/figs/crossview.jpeg}}
\end{subfloatrow}}%
}{%
\caption{The {SiD}\xspace detector, showing (left) an isometric view on the platform, and (right) a quadrant section.
Colour coding: tracking (red), ECAL (green), HCAL (violet) and the flux return
(blue).}
\label{sid:ConceptOverview:Ovw_1}
}
\vspace{0.7cm}
\end{figure}
The tracking system is a key element as the particle-flow algorithm
requires excellent tracking with superb efficiency and good
two-particle separation. The requirements for precision measurements,
in particular in the Higgs sector, place high demands on the momentum
resolution at the level of $\delta (1/\ensuremath{p_\mathrm{T}}\xspace) \sim$~2--\SI{5e-5}{(GeV/c)^{-1}}
and the material budget of the tracking system.
Highly efficient tracking is achieved using the pixel detector and
main tracker to recognize and measure prompt tracks.
The {SiD}\xspace vertex detector uses a barrel and disk layout. The barrel
section consists of five silicon pixel layers with a pixel size of
20$\times$\SI{20}{\square\micro\meter}. The forward and backward regions each have
four silicon pixel disks. In addition, there are three silicon pixel
disks at a larger distance from the interaction point to provide
uniform coverage for the transition region between the vertex detector
and the outer tracker. This configuration provides for very good
hermeticity with uniform coverage and guarantees excellent
charged-track pattern-recognition capability and impact-parameter
resolution over the full solid angle. The vertex detector design
relies on power pulsing during bunch trains to minimize heating and
uses forced air for its cooling. The main tracker technology of
choice is silicon-strip sensors arrayed in five nested cylinders in
the central region with an outer cylinder radius of \SI{1.25}{\meter} and four
disks in each of the endcap regions. The geometry of the endcaps
minimizes the material budget to enhance forward tracking. The
detectors are single-sided silicon sensors with a readout pitch of
\SI{50}{\micro\meter}.
The choice of PFA imposes a number of basic requirements on the calorimetry.
The central calorimeter system must be contained within the solenoid in order
to reliably associate tracks to energy deposits. The electromagnetic and
hadronic sections must have imaging capabilities that allow both efficient
track-following and correct assignment of energy clusters to tracks. These
requirements imply that the calorimeters must be finely segmented both
longitudinally and transversely.
The combined ECAL and HCAL systems consist of a central barrel part and two
endcaps, nested inside the barrel. The entire barrel system is contained within
the volume of the cylindrical superconducting solenoid. The electromagnetic
calorimeter has silicon active layers between tungsten absorber layers. The
active layers use 3.5$\times$\SI{3.5}{mm^2} hexagonal silicon pixels, which provide excellent
spatial resolution. The structure has 30 layers in total, the first 20 layers
having a thinner absorber than the last ten layers. This configuration is a
compromise between cost, electromagnetic shower radius, sampling frequency, and
shower containment. The total depth of the electromagnetic calorimeter is 26
radiation lengths (\ensuremath{\mathrm{X}_{0}}\xspace) and one nuclear interaction length. The hadronic
calorimeter has a depth of 4.5 nuclear interaction lengths, consisting of
alternating steel plates and active layers. The baseline choice for the active
layers is the glass resistive-plate chamber with an individual readout segmentation of
10$\times$\SI{10}{mm^2}.
Two special calorimeters are foreseen in the very forward region: LumiCal for
precise measurement, and BeamCal for fast estimation, of the luminosity.
The {SiD}\xspace superconducting solenoid is based on the CMS solenoid
design philosophy and construction techniques, using a slightly modified CMS
conductor as its baseline design. Superconducting strand count in the coextruded
Rutherford cable was increased from 32 to 40 to accommodate the higher \SI{5}{\tesla}
central field.
The flux-return yoke is instrumented with position sensitive detectors to serve
as both a muon filter and a tail catcher. The {SiD}\xspace Muon System
baseline design is based on scintillator technology, using extruded scintillator
readout with wavelength-shifting fiber and SiPMs. Simulation studies have shown
that nine or more layers of sensitive detectors yield adequate energy
measurements and good muon-detection efficiency and purity.
A large fraction of the software for the generation, simulation and
reconstruction is shared between the detector concepts. The {SiD}\xspace detector is
fully implemented and simulated using \textsc{SLIC}\xspace, which is based on \textsc{Geant4}\xspace. The background
originating from incoherent pair interactions and from \ensuremath{\upgamma\upgamma \rightarrow \mathrm{hadrons}}\xspace for one bunch
crossing is fully taken into account by the simulation.
The events are then passed through the reconstruction software suite, which
encompasses digitization, tracking, vertexing and the Pandora PFA algorithm.
The material budget of the simulated tracker and the simulated tracking
performance for single particles are shown in \Fref{sid:trackingplots}.
\begin{figure}[htb]
\ffigbox{\CommonHeightRow{\begin{subfloatrow}[2]%
\ffigbox[\FBwidth]{}{\includegraphics[height=\CommonHeight]{Chapter_Accelerator_Detector/figs/sidloi3TrackerMaterialScan-ES.pdf}}
\ffigbox[\FBwidth]{}{\includegraphics[height=\CommonHeight]{Chapter_Accelerator_Detector/figs/single_muon_resolution_pt2_vs_p-ES.pdf}}
\end{subfloatrow}}%
}{%
\caption{Left: {SiD}\xspace Tracker Material budget in terms of \ensuremath{\mathrm{X}_{0}}\xspace. Right: the
normalised transverse momentum resolution for single-muon events.}
\label{sid:trackingplots}
}
\vspace{1.0cm}
\end{figure}
\begin{figure}[htb]
\ffigbox{\CommonHeightRow{\begin{subfloatrow}[2]%
\ffigbox[\FBwidth]{}{\includegraphics[height=\CommonHeight]{Chapter_Accelerator_Detector/figs/flavorTagging_with_background-ES.pdf}}
\ffigbox[\FBwidth]{}{\includegraphics[height=\CommonHeight]{Chapter_Accelerator_Detector/figs/massResolutionZZ-ES.pdf}}
\end{subfloatrow}}%
}{%
\caption{Left: Mis-identification efficiency of light quark (red points)
and c quark events (green points) as b quark jets versus the b
identification efficiency in di-jet events at \ensuremath{\sqrt{s}}\xspace
= \SI{91}{\GeV} including background from \ensuremath{\upgamma\upgamma \rightarrow \mathrm{hadrons}}\xspace and incoherent
pairs. Right: Mass resolution of reconstructed $\PZ\PZ$ events
with and without the backgrounds from
\ensuremath{\upgamma\upgamma \rightarrow \mathrm{hadrons}}\xspace and incoherent pairs at different values of \ensuremath{\sqrt{s}}\xspace.}
\label{sid:flavorpfaplots}
}
\vspace{1.0cm}
\end{figure}
The material budget of the entire tracking system is less than 0.2 \ensuremath{\mathrm{X}_{0}}\xspace down to
very low angles. The current design achieves an asymptotic momentum resolution
of $\delta (1/\ensuremath{p_\mathrm{T}}\xspace) = $~\SI{1.46e-5}{(GeV/c)^{-1}} and a transverse
impact parameter resolution better than \SI{2}{\micro\meter}.
The ability to tag bottom and charm decays with high purity has been a driving
factor in the design of the vertex detector. Figure~\ref{sid:flavorpfaplots}~(left)
illustrates the capability of the {SiD}\xspace to separate b-quarks also in
the presence of the full beam background.
Besides the detector performance, sophisticated reconstruction algorithms are
necessary to obtain a jet-energy resolution that allows the separation of hadronic W
and Z decays. To avoid a bias from possible tails, the rms$_{90}$ value is computed
to describe the energy or mass resolution of a particle-flow algorithm. It is
defined as the standard deviation of the distribution in the smallest range that
contains 90\% of the events. Figure~\ref{sid:flavorpfaplots}~(right)
shows the mass resolution of
reconstructed \PZ bosons in \epem~$\rightarrow~\PZ\PZ$ events at different
collision energies, where one \PZ decays to neutrinos, the other to two light
quarks that give rise to two jets.
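The rms$_{90}$ definition above translates directly into code. The following sketch (function and variable names are our own) sorts the values, slides a window containing 90\% of the entries over the sorted list, and returns the population standard deviation of the narrowest such window.

```python
import math

def rms90(values):
    """Truncated RMS: population std. dev. of the smallest interval of
    sorted values that contains 90% of the entries."""
    xs = sorted(values)
    n = len(xs)
    k = int(math.ceil(0.9 * n))  # number of entries to keep
    # Slide a window of k consecutive sorted entries; pick the narrowest.
    best = min(range(n - k + 1), key=lambda i: xs[i + k - 1] - xs[i])
    window = xs[best:best + k]
    mu = sum(window) / k
    return math.sqrt(sum((x - mu) ** 2 for x in window) / k)
```

For a Gaussian distribution rms$_{90}$ is somewhat smaller than the full standard deviation, which is precisely why it is used: it is insensitive to the non-Gaussian tails of particle-flow energy and mass distributions.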
\subsection{Systematic Error Summary}
The systematic errors that are used throughout this paper are
summarized in \Tref{tab:syserrorsummary}.
\begin{table}
\begin{center}
\begin{tabular}{lcc}
\hline
& Baseline & LumUp \cr \hline
luminosity & 0.1\% & 0.05\% \cr
polarization & 0.1\% & 0.05\% \cr
b-tag efficiency & 0.3\% & 0.15\% \cr
\hline
\end{tabular}
\caption{Systematic errors assumed throughout the paper. }
\label{tab:syserrorsummary}
\end{center}
\end{table}
\chapter{Higgs Couplings, Total Width and Branching Ratios \label{sid:chapter_couplings}}
\input{Chapter_Couplings/Section_ModelIndependentCouplings.tex}
\input{Chapter_Couplings/Section_BR.tex}
\input{Chapter_Couplings/Section_ModelDependentCouplings.tex}
\input{Chapter_Couplings/Section_EffectiveHiggsOperators.tex}
\section{Model Independent Determination of Higgs Cross Sections and Higgs Branching Ratios}
Alternatively, in the $\chi^2$ of our global fit, we can define the fit parameters to be the three cross sections $\sigma_{ZH}$, $\sigma_{\nu\bar{\nu}H}$, $\sigma_{t\bar{t}H}$, and the eight branching ratios
${\rm Br}(H\rightarrow b\bar{b})$, ${\rm Br}(H\rightarrow c\bar{c})$, ${\rm Br}(H\rightarrow gg)$, ${\rm Br}(H\rightarrow WW^*)$, ${\rm Br}(H\rightarrow ZZ^*)$, ${\rm Br}(H\rightarrow \tau^+\tau^-)$,
${\rm Br}(H\rightarrow \mu^+\mu^-)$, ${\rm Br}(H\rightarrow \gamma\gamma)$. Taking again the ILC(1000) luminosity scenario as an example, we use
the 34 independent cross section and cross section times branching ratio measurements from
\Tref{tab:stimesbrbase} and appropriately redefined
$Y^{'}_i$ functions to solve for the 11 parameters through the minimization of an alternate $\chi^2$ function.
The cross section and branching ratio accuracies for all four of our energy and luminosity scenarios
are summarized in \Tref{tab:brmodelindglobalfit}.
\begin{table}
\begin{center}
\begin{tabular}{|l|cccc|}
\hline
 & ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr \hline
process & \multicolumn{4}{c|}{$\Delta \sigma /\sigma$} \\
\hline
$e^+e^-\rightarrow ZH$ & 2.6 \% & 2.0 \% & 2.0 \% & 1.0 \% \cr
$e^+e^-\rightarrow \nu\bar{\nu}H$ & 11 \% & 2.3 \% & 2.2 \% & 1.1 \% \cr
$e^+e^-\rightarrow t\bar{t}H$ & - & 28 \% & 6.3 \% & 3.8 \% \cr
\hline
mode & \multicolumn{4}{c|}{$\Delta {\rm Br} / {\rm Br}$} \\
\hline
$H\rightarrow ZZ$ & 19 \% & 7.5 \% & 4.2 \% & 2.4 \% \cr
$H\rightarrow WW$ & 6.9 \% & 3.1 \% & 2.5 \% & 1.3 \% \cr
$H\rightarrow b\bar b$ & 2.9 \% & 2.2 \% & 2.2 \% & 1.1 \% \cr
$H\rightarrow c\bar c$ & 8.7 \% & 5.1 \% & 3.4 \% & 1.9 \% \cr
$H\rightarrow gg$ & 7.5 \% & 4.0 \% & 2.9 \% & 1.6 \% \cr
$H\rightarrow \tau^+\tau^-$ & 4.9 \% & 3.7 \% & 3.0 \% & 1.6 \% \cr
$H\rightarrow \gamma\gamma$ & 34 \% & 17 \% & 7.9 \% & 4.7 \% \cr
$H\rightarrow \mu^+\mu^-$ & 100 \% & 100 \% & 31 \% & 20 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies for the three cross sections
and eight branching ratios obtained from an eleven parameter
global fit of all available data.}
\label{tab:brmodelindglobalfit}
\end{center}
\end{table}
\section{Effective Higgs Operators}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\columnwidth]{Chapter_Couplings/figs/a1SM.pdf}
\includegraphics[width=0.48\columnwidth]{Chapter_Couplings/figs/c5SM.pdf}
\caption{
Distribution of the angle $\phi$ between two decay planes of $W$ and $W^*$ from the decay $H\to WW^\ast\to 4j$ with the inclusion of anomalous couplings \cite{Takubo:2010tc}.
(a) The SM curve along with that for $a=1$, $b=\tilde{b}=0$, $\Lambda=1$~TeV; the position of the minimum is the
same for both distributions. (b) The SM result with the cases $\tilde{b}=\pm5$, $a=b=0$, $\Lambda=1$~TeV;
the position of the minimum is now shifted as discussed in the text.
From \cite{Takubo:2010tc}.
}
\label{fig:phi}
\end{center}
\end{figure}
The $h \to WW^*$ decay provides an interesting
opportunity to study its differential
width and probe the Lorentz structure of the $hWW$ coupling through
angular
analyses of the decay products. The relevant part of the general
interaction
Lagrangian, which couples the Higgs boson to $W$ bosons in a both
Lorentz- and gauge-symmetric fashion, can be parameterized as
\begin{equation}
\label{eq:Lhww}
\mathcal{L}_{\rm hWW}
= 2 m_W^2\left(\frac{1}{v}+\frac{a}{\Lambda}\right) h\ W_\mu^+ W^{-\mu}
+\frac{b}{\Lambda} h\ W^+_{\mu\nu} W^{-\mu\nu}
+\frac{\tilde b}{\Lambda} h\ \epsilon^{\mu\nu\sigma\tau} W^+_{\mu\nu} W^-_{\sigma\tau}\ ,
\end{equation}
where $W_{\mu\nu}^\pm$ is the usual gauge field strength tensor,
$\epsilon^{\mu\nu\sigma\tau}$ is the Levi-Civita tensor, $v$ is the
VEV of the Higgs field, and $\Lambda$ is a cutoff scale\footnote{
The Lagrangian (\ref{eq:Lhww}) is not by itself gauge invariant; to
restore explicit
gauge invariance we must also include the corresponding anomalous
couplings
of the Higgs boson to $Z$ bosons and photons.
}.
The real dimensionless coefficients $a$, $b$, and $\tilde{b}$ are
all zero in the
Standard Model and measure the anomaly in the $hWW$ coupling, which
arises
from new physics at the scale $\Lambda$. The coefficient $a$
represents
the correction to the Standard Model coupling.
The
coefficients $b$ and $\tilde{b}$ parametrize the
leading dimension-five non-renormalizable interactions,
corresponding to
$(\mathbold{E} \cdot \mathbold{E}
- \mathbold{B} \cdot \mathbold{B})$-type $CP$-even and
$(\mathbold{E} \cdot \mathbold{B})$-type $CP$-odd contributions.
The $a$ coefficient, if nonzero, would modify just
the normalization of the Standard Model coupling, while
the $b$ and $\tilde{b}$ coefficients would change the angular
correlations of the decay planes. This effect is shown
in Fig.~\ref{fig:phi}~\cite{Takubo:2010tc}.
Nonzero $b$ and $\tilde{b}$ would also modify the momentum
distribution of the $W$ boson in the Higgs rest frame.
Simultaneous fits to $p_{W}$ and $\phi_{\mathrm{plane}}$ result in the
contour plots in Figs.~\ref{fig:cont_a-b} and~\ref{fig:cont_a-bt}.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[t]
\includegraphics[width=\hsize]{Chapter_Couplings/figs/cont_a-b.pdf}
\caption{Probability contours for $\Delta \chi^{2} =$ 1, 2.28,
and 5.99 in the $a$-$b$ plane, which correspond to 39\%, 68\%, and 95\% C.L., respectively.}
\label{fig:cont_a-b}
\end{figure}
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[t]
\includegraphics[width=\hsize]{Chapter_Couplings/figs/cont_a-bt.pdf}
\caption{Contours similar to Fig.$\,$\ref{fig:cont_a-b} plotted in the $a$-$\tilde{b}$ plane.}
\label{fig:cont_a-bt}
\end{figure}
\section{Model-Dependent Coupling Parameterizations}
While the couplings of the Higgs boson and the total
Higgs width can be determined at the ILC
without model assumptions, it is sometimes
useful to extract couplings from ILC data within
the context of certain models. Such
analyses make it easier to compare the experimental
precision of the ILC with other facilities, such
as the LHC, that cannot determine Higgs couplings
in a model independent manner.
\subsection{Benchmark Parameterizations of the LHC HXSWG}
The LHC Higgs Cross Section Working Group (HXSWG) has proposed
a series of benchmark Higgs coupling parameterizations~\cite{LHCHiggsCrossSectionWorkingGroup:2012nn,Dittmaier:2011ti}.
We take as an example the parameterization with seven free parameters $\kappa_g,\kappa_{\gamma},\kappa_W,\kappa_Z,\kappa_b,\kappa_t,\kappa_{\tau}$
and a dependent parameter $\kappa_H(\kappa_i)$ described in Section~10.3.7 of Ref.~\cite{Dittmaier:2011ti}.
In this parameterization 2nd generation fermion Higgs couplings are related to 3rd generation couplings via $\kappa_c=\kappa_t$,
$\kappa_\mu=\kappa_\tau$, etc., and the total Higgs width is assumed to be the sum of the partial widths for all Standard model decays.
We implement these boundary conditions by adding two new terms to our model independent chi-square function:
\begin{equation}
\chi^2 = \sum^{33}_{i=1}({Y_i-Y^{'}_i \over \Delta Y_i})^2+({\xi_{ct}\over\Delta\xi_{ct}})^2+({\xi_{\Gamma}\over\Delta\xi_{\Gamma}})^2
\end{equation}
where
\begin{equation}
\xi_{ct}=\kappa_c-\kappa_t={g_c \over g_c^{SM}}-{g_t \over g_t^{SM}}\,, \ \ \xi_\Gamma=\Gamma_T-\sum_{i=1}^{9}\Gamma_i\,,\ \ {\rm and} \ \ \Gamma_i=G_i\cdot g_i^2 \, .
\end{equation}
The error $\Delta\xi_{ct}$ is obtained by propagating the total theory errors on $g_c^{SM}$ and $g_t^{SM}$, while the error $\Delta\xi_\Gamma$ is obtained
by propagating the errors on $G_i$:
\begin{equation}
\Delta\xi_\Gamma=\Gamma_{SM}\left[\sum_i ({\Delta G_i \over G_i})^2(BR_i)^2\right]^{\frac{1}{2}} \approx \Gamma_{SM}{\Delta G \over G}\left[\sum_i (BR_i)^2\right]^{\frac{1}{2}} \approx 0.63\ \Gamma_{SM}{\Delta G \over G} \, .
\end{equation}
The results for the seven parameters in the HXSWG parameterization
are shown in \Tref{tab:modeldep7parhxswgtheory0p1} and \Tref{tab:modeldep7parhxswgtheory0p5}
assuming all theory errors are given by $\Delta\xi_{ct}=\Delta G_i/G_i=\Delta F_i/F_i=0.1\%$ and $0.5\%$, respectively.
\begin{table}
\begin{center}
\begin{tabular}{lcccc}
Mode & ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr \hline
$\gamma\gamma$ & 17 \% & 8.3 \% & 3.8 \% & 2.3 \% \cr
$gg$ & 6.1 \% & 2.0 \% & 1.1 \% & 0.7 \% \cr
$WW$ & 4.7 \% & 0.4 \% & 0.3 \% & 0.2 \% \cr
$ZZ$ & 0.7 \% & 0.5 \% & 0.5 \% & 0.3 \% \cr
$t\bar t$ & 6.4 \% & 2.5 \% & 1.3 \% & 0.9 \% \cr
$b\bar b$ & 4.7 \% & 1.0 \% & 0.6 \% & 0.4 \% \cr
$\tau^+\tau^-$ & 5.2 \% & 1.9 \% & 1.3 \% & 0.7 \% \cr
$\Gamma_T(h)$ & 9.0 \% & 1.7 \% & 1.1 \% & 0.8 \% \cr
\hline
\end{tabular}
\caption{Expected accuracies $\Delta g_i/g_i$ for Higgs boson couplings and the total width $\Gamma_T(h)$ using the seven parameter
HXSWG benchmark parameterization described in Section~10.3.7 of Ref.~\cite{Dittmaier:2011ti}
assuming all theory errors are 0.1\%.
}
\label{tab:modeldep7parhxswgtheory0p1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{lcccc}
Mode & ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr \hline
$\gamma\gamma$ & 17 \% & 8.3 \% & 3.8 \% & 2.3 \% \cr
$gg$ & 6.1 \% & 2.0 \% & 1.2 \% & 0.7 \% \cr
$WW$ & 4.7 \% & 0.5 \% & 0.3 \% & 0.2 \% \cr
$ZZ$ & 0.8 \% & 0.5 \% & 0.5 \% & 0.3 \% \cr
$t\bar t$ & 6.4 \% & 2.6 \% & 1.4 \% & 0.9 \% \cr
$b\bar b$ & 4.7 \% & 1.0 \% & 0.6 \% & 0.4 \% \cr
$\tau^+\tau^-$ & 5.2 \% & 2.0 \% & 1.3 \% & 0.8 \% \cr
$\Gamma_T(h)$ & 9.0 \% & 1.8 \% & 1.1 \% & 0.9 \% \cr
\hline
\end{tabular}
\caption{Expected accuracies $\Delta g_i/g_i$ for Higgs boson couplings and the total width $\Gamma_T(h)$ using the seven parameter
HXSWG benchmark parameterization described in Section~10.3.7 of Ref.~\cite{Dittmaier:2011ti}
and assuming all theory errors are 0.5\%.
}
\label{tab:modeldep7parhxswgtheory0p5}
\end{center}
\end{table}
\clearpage
\subsection{Higgs Couplings to $W$ and $Z$ Bounded by SM Couplings}
A different method to fit for Higgs couplings using LHC data is given in Ref.~\cite{Peskin:2012we},
where an effort is made to minimize the model dependence of the coupling fit.
Under rather general
conditions~\cite{Gunion:1990kf}, each scalar with a vev makes a
positive contribution to the masses of the $W$ and
$Z$. Since the Higgs couplings to the $W$ and $Z$ also arise from the
vev, this implies that the coupling of any single
Higgs field is bounded above by the coupling that would give the full
mass of the vector bosons. This implies
\begin{equation}
g^2(hWW) \leq g^2(hWW)|_{SM} \quad \mbox{and }
g^2(hZZ) \leq g^2(hZZ)|_{SM}
\eeqpeskin{upperbound}
Then the measurement of the $\sigma\cdot BR$ for a process such as
$WW$ fusion to $h$ with decay to $WW^*$, which
is proportional to $g^4(hWW)/\Gamma_T$, puts an upper limit on
$\Gamma_T$. This constraint was first
applied to Higgs coupling fitting by
D\"uhrssen {\it et al.}~\cite{Duhrssen:2004cv}. In the
literature, this constraint
is sometimes applied together with the relation
\begin{equation}
g^2(hWW)/g^2(hZZ) = \cos^2\theta_w \ .
\eeqpeskin{gsqrat}
The relation \leqn{gsqrat}, however, requires models in which
the Higgs is a mixture of $SU(2)$ singlet and doublet
fields only, while \leqn{upperbound} is more general~\cite{Low:2010jp}.
An estimate of Higgs coupling errors from the LHC under the assumption of Eqn.~\leqn{upperbound}
can be found in Ref.~\cite{Peskin:2012we}.
We have carried out a global
fit to the ILC measurements under the constraint \leqn{upperbound}
with 9 parameters representing
independent Higgs boson couplings to $WW$, $ZZ$, $b\bar b$,
$gg$, $\gamma\gamma$, $\tau^+\tau^-$, $c \bar c$, $t\bar t$, and the total Higgs width $\Gamma_T(h)$.
The results for
the
errors on Higgs couplings are shown in Table~\ref{tab:globalfit}. The
four columns represent the combination of results from LHC (300 fb$^{-1}$, 1 detector)~\cite{Peskin:2012we} and
our four ILC luminosity scenarios.
\begin{table}
\begin{center}
\begin{tabular}{lcccc}
& LHC(300 fb$^{-1}$) & LHC(300 fb$^{-1}$) & LHC(300 fb$^{-1}$) & LHC(300 fb$^{-1}$) \cr
Mode & +ILC(250) & +ILC(500) & +ILC(1000) & +ILC(LumUp) \cr \hline
$\gamma\gamma$ & 4.8 \% & 4.2 \% & 3.0 \% & 2.0 \% \cr
$gg$ & 3.8 \% & 1.9 \% & 1.1 \% & 0.7 \% \cr
$WW$ & 1.9 \% & 0.2 \% & 0.1 \% & 0.1 \% \cr
$ZZ$ & 0.4 \% & 0.3 \% & 0.3 \% & 0.1 \% \cr
$t\bar t$ & 12.0 \% & 9.6 \% & 2.9 \% & 1.8 \% \cr
$b\bar b$ & 2.8 \% & 1.0 \% & 0.6 \% & 0.3 \% \cr
$\tau^+\tau^-$ & 3.3 \% & 1.8 \% & 1.2 \% & 0.7 \% \cr
$c\bar c$ & 5.1 \% & 2.6 \% & 1.4 \% & 0.8 \% \cr
$\Gamma_T(h)$ & 4.7 \% & 1.6 \% & 0.9 \% & 0.5 \% \cr
\hline
\end{tabular}
\caption{Expected accuracies for Higgs boson couplings under the
assumption of Eqn.~\leqn{upperbound} and assuming LHC results with 300~fb$^{-1}$ are combined
with ILC results.
}
\label{tab:globalfit}
\end{center}
\end{table}
\section{Model Independent Determination of Higgs Couplings}
The $\sigma \times {\rm Br}$ measurements in the previous chapters
imply a very high level of precision for the
various Higgs boson couplings. To quantify this we perform a global fit of the Higgs boson couplings and total Higgs width using
all the available cross section and cross section times branching ratio data.
Before discussing the global fit in detail, it is helpful to show an
example of how the absolute couplings and the Higgs total width are obtained.
Consider the following four independent measurements:
\begin{Eqnarray}
Y_1 &=& \sigma_{ZH} = F_1\cdot g^2_{HZZ} \nonumber \\
Y_2 &=& \sigma_{ZH}\times {\rm Br}(H\rightarrow b\bar{b}) = F_2\cdot {g^2_{HZZ}g^2_{Hb\bar{b}} \over \Gamma_T} \nonumber \\
Y_3 &=& \sigma_{\nu\bar{\nu}H}\times {\rm Br}(H\rightarrow b\bar{b}) = F_3\cdot {g^2_{HWW}g^2_{Hb\bar{b}} \over \Gamma_T} \nonumber \\
Y_4 &=& \sigma_{\nu\bar{\nu}H}\times {\rm Br}(H\rightarrow WW^*) = F_4\cdot {g^4_{HWW} \over \Gamma_T}\,, \nonumber
\end{Eqnarray}
where $\Gamma_T$ is the Higgs total width, $g_{HZZ}$, $g_{HWW}$, and $g_{Hb\bar{b}}$ are the couplings
of the Higgs to $ZZ$, $WW$, and $b\bar{b}$, respectively, and $F_1$, $F_2$, $F_3$, $F_4$
are calculable quantities. It is straightforward to get the couplings with the
following steps:
\begin{enumerate}[label=\roman*.)]
\item from the measurement $Y_1$ we can get the coupling $g_{HZZ}$.
\item from the ratio $Y_2/Y_3$ we can get the coupling ratio $g_{HZZ}/g_{HWW}$.
\item with $g_{HZZ}$ and $g_{HZZ}/g_{HWW}$, we can get $g_{HWW}$.
\item once we know $g_{HWW}$, we can get the Higgs total width $\Gamma_T$ from the measurement $Y_4$.
\item from the ratio $Y_3/Y_4$ we get the ratio $g_{Hb\bar{b}}/g_{HWW}$, from which we obtain $g_{Hb\bar{b}}$.
\end{enumerate}
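The five steps above can be sketched as a round-trip check in code; the $F_i$ factors and the "true" couplings below are arbitrary illustrative numbers, not ILC values.

```python
import math

# Arbitrary illustrative inputs: calculable factors and "true" parameters.
F1, F2, F3, F4 = 2.0, 1.5, 3.0, 0.8
gZZ_true, gWW_true, gbb_true, Gamma_true = 1.1, 0.9, 0.6, 4.0e-3

# Forward model: the four independent measurements Y1..Y4.
Y1 = F1 * gZZ_true**2
Y2 = F2 * gZZ_true**2 * gbb_true**2 / Gamma_true
Y3 = F3 * gWW_true**2 * gbb_true**2 / Gamma_true
Y4 = F4 * gWW_true**4 / Gamma_true

# Step i:      g_HZZ from the absolute measurement Y1.
gZZ = math.sqrt(Y1 / F1)
# Steps ii-iii: g_HZZ/g_HWW from the ratio Y2/Y3, hence g_HWW.
gWW = gZZ * math.sqrt((F2 * Y3) / (F3 * Y2))
# Step iv:     total width from Y4 once g_HWW is known.
Gamma = F4 * gWW**4 / Y4
# Step v:      g_Hbb from the ratio Y3/Y4.
gbb = gWW * math.sqrt((F4 * Y3) / (F3 * Y4))

print(gZZ, gWW, Gamma, gbb)  # recovers the "true" input values
```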
This example already illustrates the clear synergy between the two main Higgs production channels. The
best energy
to investigate Higgsstrahlung production, $e^+e^-\rightarrow ZH$, is around 250 GeV; however, the
$e^+e^-\rightarrow \nu\bar{\nu}H$ cross section at 250 GeV is very small. $WW$-fusion production fully opens up
at 500 GeV, with a cross section one order of magnitude larger. This is an essential motivation to go to higher
energy after running at 250 GeV.
We discuss in detail the model-independent fit of the Higgs couplings for the ILC(1000) luminosity scenario. For this scenario
the 33 independent $\sigma \times {\rm Br}$ measurements in \Tref{tab:stimesbrbase} are used as experimental input. The
$\sigma \times {\rm Br}$ measurements are labelled
$Y_i$, $i=1,2,...,33$. The predicted values of these measurements as a function of the Higgs couplings are given by
$Y^{'}_i=F_i\cdot {g^2_{HZZ}g^2_{HXX} \over \Gamma_T}$,
$Y^{'}_i=F_i\cdot {g^2_{HWW}g^2_{HXX} \over \Gamma_T}$, or $Y^{'}_i=F_i\cdot {g^2_{Htt}g^2_{HXX} \over \Gamma_T}$,
where $XX$ denotes a specific Higgs decay final state and $F_i$ is a calculable factor corresponding to that decay. In addition we have one absolute cross section
measurement $Y_{34}=\sigma_{ZH}$, which is predicted as
$Y^{'}_{34}=F_{34}\cdot g^2_{HZZ}$.
In total we have 34 independent measurements and 10 fit parameters consisting of 9
fundamental couplings $HZZ$, $HWW$, $Hb\bar b$, $Hc \bar c$,
$Hgg$, $H\tau^+\tau^-$, $H\mu\mu$, $Htt$ and $H\gamma\gamma$, and the Higgs total width $\Gamma_T$.
The factors $F_i$ can be written
\begin{equation}
F_i=S_iG_i\ \ \ {\rm where}\ S_i=({\sigma_{ZH}\over g_Z^2})\,,\ ({\sigma_{\nu\bar\nu H}\over g_W^2})\,,\ {\rm or}\ ({\sigma_{t\bar{t}H}\over g_t^2})\,, \ {\rm and}\ G_i=({\Gamma_i\over g_i^2})\, .
\end{equation}
These are theoretical calculations with parametric and theoretical uncertainties.
Because the relevant quantities are ratios of cross sections and partial widths to couplings squared, the total theory errors for $S_i$, and particularly $G_i$, should be less than
the total theory errors for the corresponding cross sections and partial widths. We believe that a total theory error of 0.5\% or less can be achieved for the $F_i$ parameters at the time of ILC running.
We quote coupling results assuming total theory errors of $\Delta F_i/F_i=0.1\%$ and $\Delta F_i/F_i=0.5\%$.
The fitted couplings and width are obtained by minimizing the chi-square function $\chi^2$ defined by
\begin{equation}
\chi^2 = \sum^{34}_{i=1}({Y_i-Y^{'}_i \over \Delta Y_i})^2\,,
\end{equation}
where $\Delta Y_i$ is the square root of the sum in quadrature of the error on the measurement $Y_i$
and the total theory error for $Y^{'}_i$.
The results for theory errors of $\Delta F_i/F_i=0.1\%$ and $\Delta F_i/F_i=0.5\%$ are summarized in \Tref{tab:modelindglobalfit0p1} and \Tref{tab:modelindglobalfit0p5}, respectively.
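Because every predicted rate $Y^{'}_i$ is a product of couplings squared divided by $\Gamma_T$ (times $F_i$), the system becomes linear after taking logarithms. A minimal sketch of the fit, reusing the four-measurement toy example from above with hypothetical numbers:

```python
import numpy as np

# Toy illustration of the coupling fit: for rates of the form
# Y_i = F_i * (product of couplings squared) / Gamma_T, taking
# logarithms makes the system linear in (ln g^2, ln Gamma_T), so the
# chi-square minimum can be found with a linear solve.
# All numbers here are hypothetical, not ILC inputs.
gZ, gW, gb, GT = 0.5, 0.6, 0.3, 4.0e-3          # "true" parameters
F = np.array([1.0, 2.0, 1.5, 0.8])              # calculable factors
Y = np.array([F[0] * gZ**2,
              F[1] * gZ**2 * gb**2 / GT,
              F[2] * gW**2 * gb**2 / GT,
              F[3] * gW**4 / GT])

# Design matrix in the unknowns u = (ln gZ^2, ln gW^2, ln gb^2, ln GT)
A = np.array([[1, 0, 0,  0],
              [1, 0, 1, -1],
              [0, 1, 1, -1],
              [0, 2, 0, -1]], dtype=float)
b = np.log(Y / F)

u, *_ = np.linalg.lstsq(A, b, rcond=None)
gZ_fit, gW_fit, gb_fit = np.exp(u[:3] / 2.0)
GT_fit = np.exp(u[3])
```

In the real 34-measurement fit the system is over-determined and the errors $\Delta Y_i$ enter as weights in the $\chi^2$; the square, exactly solvable case is shown here only for clarity.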
\begin{table}
\begin{center}
\begin{tabular}{lcccc}
Mode & ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr \hline
$\gamma\gamma$ & 18 \% & 8.4 \% & 4.0 \% & 2.4 \% \cr
$gg$ & 6.4 \% & 2.3 \% & 1.6 \% & 0.9 \% \cr
$WW$ & 4.8 \% & 1.1 \% & 1.1 \% & 0.6 \% \cr
$ZZ$ & 1.3 \% & 1.0 \% & 1.0 \% & 0.5 \% \cr
$t\bar t$ & -- & 14 \% & 3.1 \% & 1.9 \% \cr
$b\bar b$ & 5.3 \% & 1.6 \% & 1.3 \% & 0.7 \% \cr
$\tau^+\tau^-$ & 5.7 \% & 2.3 \% & 1.6 \% & 0.9 \% \cr
$c\bar c$ & 6.8 \% & 2.8 \% & 1.8 \% & 1.0 \% \cr
$\mu^+\mu^-$ & 91 \% & 91 \% & 16 \% & 10 \% \cr
$\Gamma_T(h)$ & 12 \% & 4.9 \% & 4.5 \% & 2.3 \% \cr
\hline
\end{tabular}
\caption{Expected accuracies $\Delta g_i/g_i$ for Higgs boson couplings
for a completely model-independent fit, assuming theory errors of $\Delta F_i/F_i=0.1\%$.
}
\label{tab:modelindglobalfit0p1}
\end{center}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{lcccc}
Mode & ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr \hline
$\gamma\gamma$ & 18 \% & 8.4 \% & 4.0 \% & 2.4 \% \cr
$gg$ & 6.4 \% & 2.3 \% & 1.6 \% & 0.9 \% \cr
$WW$ & 4.9 \% & 1.2 \% & 1.1 \% & 0.6 \% \cr
$ZZ$ & 1.3 \% & 1.0 \% & 1.0 \% & 0.5 \% \cr
$t\bar t$ & -- & 14 \% & 3.2 \% & 2.0 \% \cr
$b\bar b$ & 5.3 \% & 1.7 \% & 1.3 \% & 0.8 \% \cr
$\tau^+\tau^-$ & 5.8 \% & 2.4 \% & 1.8 \% & 1.0 \% \cr
$c\bar c$ & 6.8 \% & 2.8 \% & 1.8 \% & 1.1 \% \cr
$\mu^+\mu^-$ & 91 \% & 91 \% & 16 \% & 10 \% \cr
$\Gamma_T(h)$ & 12 \% & 5.0 \% & 4.6 \% & 2.5 \% \cr
\hline
\end{tabular}
\caption{Expected accuracies $\Delta g_i/g_i$ for Higgs boson couplings
for a completely model-independent fit, assuming theory errors of $\Delta F_i/F_i=0.5\%$.
}
\label{tab:modelindglobalfit0p5}
\end{center}
\end{table}
\chapter{Gamma-Gamma and e-Gamma Option \label{sid:chapter_gamma_gamma}}
Higgs production in $\gamma\gamma$ collisions,
first studied in \cite{Barklow:1990ah,Gunion:1992ce,Borden:1993cw},
offers a unique
capability to measure the two-photon width of the Higgs and
to determine its charge conjugation and parity (CP)
composition through control of the photon
polarization. Both measurements have unique value in understanding
the nature of a Higgs boson eigenstate. Photon-photon collisions
also offer one of the best means for producing a heavy Higgs
boson singly, implying significantly greater mass reach than
electron-positron production of a pair of Higgs bosons.
There are many important reasons for measuring
the $\gamma\gamma$ coupling of a Higgs boson, generically denoted
$h$. In the Standard Model, the coupling
of the Higgs boson, $h_{SM}$, to two photons
receives contributions from loops containing any charged particle
whose mass arises in whole or part from the vacuum expectation
value (vev) of the neutral Higgs field.
In the limit of infinite mass for the charged particle in the loop, the
contribution asymptotes to a value that depends
on the particle's spin (i.e., the contribution does not decouple).
Thus, a measurement of $\Gamma(h_{SM}\to\gamma\gamma)$
provides the possibility of revealing the presence of arbitrarily
heavy charged particles, since in the SM context all particles
acquire mass via the Higgs
mechanism.\footnote{Loop contributions from charged particles that acquire
a large mass from some other mechanism, beyond
the SM context, will decouple as $({\rm mass})^{-2}$
and, if there is a SM-like Higgs boson $h$,
$\Gamma(h\to\gamma\gamma)$ will not be sensitive to their presence.}
Even if there are no new particles that acquire mass via the Higgs
mechanism, a precision measurement of $N(\gamma\gamma\to h\to X)$
for specific final states $X$ ($X=b\overline b,WW^*,\ldots$)
can allow one to distinguish between a $h$ that is part
of a larger Higgs sector and the SM $h_{SM}$. The ability to detect
deviations from SM expectations will be enhanced by combining this
with other types of precision measurements for the SM-like Higgs boson.
Observation of small deviations would be typical
for an extended Higgs sector as one approaches the decoupling limit
in which all other Higgs bosons are fairly heavy, leaving
behind one SM-like light Higgs boson. In such models,
the observed small deviations could then be interpreted as implying the
presence of heavier Higgs bosons.
The ability to detect $\gamma\gamma\to H^0, A^0$ will be of
greatest importance if the $H^0$ and $A^0$ cannot be detected either
at the LHC or in $e^+e^-$ collisions at the ILC. In fact, there is
a very significant section of parameter space in the MSSM for which
this is the case. The $\gamma\gamma$ collider would also play a very important role
in exploring a non-supersymmetric general two-Higgs-doublet model
(2HDM) of which the MSSM Higgs sector is a special case.
Once one or several Higgs bosons have been detected,
precision studies can be
performed. First on the list would be
the determination of the CP nature of any observed Higgs boson.
This and other types of measurements
become especially important if one is in the decoupling limit of a 2HDM.
The decoupling limit is defined by the situation in which
there is a light SM-like Higgs boson, while the other
Higgs bosons ($H^0,A^0,H^\pm$) are heavy and quite degenerate.
In the MSSM context, such decoupling is automatic
in the limit of large $m_{A^0}$.
In this situation, a detailed scan to
separate the $H^0$ and $A^0$ would be very important and
entirely possible at the $\gamma\gamma$ collider. Further,
measurements of relative branching fractions for the $H^0$ and
$A^0$ to various possible final states would also be possible
and reveal much about the Higgs sector model.
\section{Production Cross Sections and Luminosity Spectra}
The gamma-gamma option at the ILC opens a new opportunity for truly high-energy two-photon physics that is not limited to the QCD studies performed by most $e^+e^-$ colliders. The production cross sections for charged particles are considerably larger in $\gamma\gamma$ collisions than in $e^+e^-$, enabling the study of new particles above threshold at a higher rate; for example, $WW$ pair production at 500~GeV is a factor of 20 larger than in $e^+e^-$. This effect more than offsets the factor of $5-10$ lower $\gamma\gamma$ luminosity compared to the corresponding $e^+e^-$ collider. Similarly, the cross sections for charged scalar, lepton, and top pairs are a factor of $5-10$ higher at a photon collider, compensating for the luminosity reduction.
The proposed technique for the gamma-gamma option consists of Compton backscattering $\sim1$~eV laser photons from the 125--500~GeV electron and positron beams. The backscattered photon receives a large fraction of the incoming electron energy. This is described in detail in \cite{Asner:2001ia}. The maximum energy of the generated photons is given by $E^{max}_\gamma = xE_e/(1+x)$, where $E_e$ is the electron beam energy and $x = 4E_eE_L\cos^2(\theta/2)/m^2_ec^4$, with $E_L$ and $\theta$ the laser photon energy and the angle between the electron and laser beams. The distance from the conversion point to the interaction point is in the range of a few millimeters to a few centimeters. The optimal value of $x$ is around 4.8, yielding $E^{max}_\gamma\approx0.82 E_e$, which maximizes the spin-0 luminosity near $E_{\gamma\gamma}=0.8 E_{ee}$ for a particular configuration of beam and laser polarizations, as shown in Figure~\ref{fig:gamgamlum_plot}. The fundamental laser wavelength is determined by available technology and is typically 1.054~$\mu$m. For machine energies of $\sqrt{s}$=250, 500, 1000 GeV the corresponding values of $x$ are 2.26, 4.52 and 9.03, respectively. The maximum $E_{\gamma\gamma}$ are 173 GeV, 409 GeV, and 900 GeV, with the peak in the spin-0 luminosity somewhat lower. The optimal machine energy (using a 1.054~$\mu$m laser) to study a 126~GeV Higgs-like particle is about 215~GeV with $x=1.94$. As mentioned above, larger values of $x$ are desirable and can be obtained using non-linear optics to triple the laser frequency.%
\footnote{The efficiency
with which the standard $1.054~\mu$m laser beam is converted to $0.351~\mu$m
is 70\%. Thus, roughly 40\% more laser power is required
in order to retain the subpulse power.}
In this case, the optimal machine energy to study a 126~GeV Higgs-like particle is $\sim$170~GeV with $x=4.55$, much closer to the optimal value.
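The Compton kinematics quoted above are easy to reproduce from the formulas for $x$ and $E^{max}_\gamma$. The sketch below assumes head-on collisions ($\theta=0$) and the 1.054~$\mu$m fundamental wavelength, and reproduces the quoted $x$ values to within rounding:

```python
# Compton kinematics of the photon collider:
#   x = 4 * E_e * E_L / (m_e c^2)^2      (head-on, theta = 0)
#   E_gamma_max = x / (1 + x) * E_e
M_E = 0.510998950e6            # electron mass, eV
HC = 1239.841984               # h*c, eV*nm

def photon_collider_x(E_e_gev, laser_wavelength_nm=1054.0):
    """x parameter for beam energy E_e (GeV) and laser wavelength (nm)."""
    E_L = HC / laser_wavelength_nm           # laser photon energy, eV (~1.18 eV)
    return 4.0 * (E_e_gev * 1e9) * E_L / M_E**2

def e_gamma_max(E_e_gev, laser_wavelength_nm=1054.0):
    """Maximum backscattered photon energy in GeV."""
    x = photon_collider_x(E_e_gev, laser_wavelength_nm)
    return x / (1.0 + x) * E_e_gev
```

For $\sqrt{s}=250$ GeV ($E_e=125$ GeV) this gives $x\approx2.25$ and a maximum $E_{\gamma\gamma}=2E^{max}_\gamma\approx173$ GeV, matching the numbers in the text.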
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.99\hsize]{Chapter_Gamma_Gamma/figs/gamgamlum_plot_x5pt69_x4pt334_x1pt86_lame-80.pdf}
\vspace*{0.1cm}
\caption[0]{The normalized differential luminosity
${1\over {\cal L}_{\gamma\gamma}}{d{\cal L}_{\gamma\gamma}\over dy}$ and the corresponding
$\protect\vev{\lambda\lambda'}$ for $\lambda_e=\lambda'_e=0.4$ (80\%
polarization)
and three different choices of the initial laser photon polarizations
$P$ and $P'$. The distributions shown are for
$\rho^2\ll 1$ \cite{Ginzburg:1981vm,Ginzburg:1982yr}. Results for $x=5.69$,
$x=4.334$ and $x=1.86$ are compared.
}
\label{fig:gamgamlum_plot}
\end{figure}
\section{Higgs Studies}
A Standard Model-like Higgs boson $h$ arises in many models containing physics beyond
the SM. The $h\to\gamma\gamma$ coupling receives contributions from loops
containing any charged particle
whose mass, $M$, arises in whole or part from the vacuum expectation
value of the corresponding neutral Higgs field.
When the mass, $M$, derives in whole or part from the vacuum expectation
value ($v$) of the neutral Higgs
field associated with the $h$,
then in the limit of $M\gg m_h$ for the particle in the loop, the
contribution asymptotes to a value that depends
on the particle's spin (i.e., the contribution does not decouple).
As a result, a measurement of $\Gamma(h\to\gamma\gamma)$
provides the possibility of revealing the presence of
heavy charged particles that acquire their mass via the Higgs mechanism.
In addition, we note that $B(h\to X)$
is entirely determined by the spectrum
of particles with mass $<m_{h}/2$, and is not affected by heavy states
with $M>m_h/2$. Consequently,
measuring $N(\gamma\gamma\to h\to X)$ provides
an excellent probe of new heavy particles with mass deriving
from the Higgs mechanism.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.99\hsize]{Chapter_Gamma_Gamma/figs/higgs_80_aspect.pdf}
\vspace*{0.1cm}
\caption{
Higgs signal and heavy quark backgrounds in units of events per 2 GeV
for a Higgs mass of 115~GeV and assuming a running year of $10^7$ sec
\cite{Asner:2001vh}.
}
\label{fig:higgs}
\end{figure}
\subsection{$h_{SM}$ Mass Measurement}
A special feature of the $\gamma\gamma$ collider is the sharp edge of
the $\gamma\gamma$ luminosity function, as depicted in Fig.~\ref{fig:gamgamlum_plot}.
The position of this edge can be controlled by changing the electron beam
energy. As it sweeps across the threshold for Higgs production, the
number of, e.g., $b\overline b$ events will increase dramatically.
Since the position of this turn-on
depends on the Higgs mass, a threshold scan offers the possibility to
measure the Higgs mass kinematically, as developed in Ref.~\cite{Ohgaki:1999ez}.
This possibility was studied in the context of
CLICHE~\cite{Asner:2001vh}, assuming that the Higgs mass is already
known to within a GeV or so.
There is a point of optimum sensitivity to the
Higgs mass a few~GeV below the peak of the cross section. The raw number
of events at a single energy cannot be used to measure the
mass, however, because the $\gamma\gamma$ partial width cannot be assumed
known {\it a priori}. There is another point, though, close to the maximum
of the cross section, at which there is no sensitivity to the Higgs mass,
and with maximum sensitivity to $\Gamma_{\gamma\gamma}$, allowing the
separation of these two quantities. These points are illustrated in
Fig.~\ref{fig:scan}. Furthermore, the background can be estimated using
data obtained by running below the threshold. To estimate the
sensitivity of the yields to $m_H$, we work with a simple
observable based on the ratio of background-subtracted yields
at peak and at threshold:
\begin{displaymath}
Y = \frac{N_{\mathrm{peak}} - N_{\mathrm{below}}\cdot r_p}
{N_{\mathrm{threshold}} - N_{\mathrm{below}}\cdot r_t}
\end{displaymath}
where $N$ is the number of events in a mass window logged at the peak,
on the threshold, and below threshold, and $r_p$ and $r_t$ are scale
factors to relate the background data taken below threshold to
the expectation at peak and at threshold. We have propagated
statistical uncertainties, and, assuming one year of data on peak,
half a year on threshold and another half below threshold, we find
$\sigma_Y / Y = 0.088$. This translates into an error
on the inferred Higgs mass of~100~MeV. A more refined treatment should
improve this estimate somewhat. This estimate is
obtained using the laser and beam energies proposed for CLIC~1 and the
analysis results are similar to those shown in Fig.~\ref{fig:higgs}. It remains to be investigated how
sensitive this measurement is to the shape of the luminosity curve.
It is not sensitive to the precision of the electron polarization.
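A small sketch of the observable $Y$ and its statistical error, assuming independent Poisson errors on the three event counts. The counts and scale factors below are hypothetical placeholders, not the values of the CLICHE study:

```python
import math

# Y = (N_peak - N_below * r_p) / (N_threshold - N_below * r_t),
# with sigma_N = sqrt(N) for each independent count and standard
# first-order error propagation.  Inputs are illustrative only.
def yield_ratio(n_peak, n_thresh, n_below, r_p, r_t):
    num = n_peak - n_below * r_p
    den = n_thresh - n_below * r_t
    y = num / den
    # partial derivatives of Y with respect to the three counts
    dY_dpeak = 1.0 / den
    dY_dthresh = -num / den**2
    dY_dbelow = (-r_p * den + r_t * num) / den**2
    var = (dY_dpeak**2 * n_peak
           + dY_dthresh**2 * n_thresh
           + dY_dbelow**2 * n_below)
    return y, math.sqrt(var)
```

Evaluated on realistic-sized samples, the relative error $\sigma_Y/Y$ comes out at the few-percent level, in line with the 8.8\% quoted above for the actual run plan.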
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.3\hsize]{Chapter_Gamma_Gamma/figs/threshold_fom.pdf}
\includegraphics[width=0.3\hsize]{Chapter_Gamma_Gamma/figs/relyield_vs_mh.pdf}
\includegraphics[width=0.3\hsize]{Chapter_Gamma_Gamma/figs/mass_measurement.pdf}
\caption[0]{
(a) A figure of merit quantifying the measurement error on the mass as a function of the $e^-e^-$ center-of-mass energy. The optimum and zero
sensitivity points are marked.
(b) Relative yield for a 115~GeV Higgs boson at the point of optimum
sensitivity and zero sensitivity to $m_H$.
(c) Behavior of the observable~$Y$ as a function of $m_H$,
and the projected error.
}
\label{fig:scan}
\end{figure}
\subsection{Branching Fractions}
The precision with which the most important decay modes of a Standard Model Higgs boson can be measured at a gamma-gamma collider is presented in Table~\ref{tab:gghiggsbf}. One of the objectives of a gamma-gamma collider would be to test the Standard Model predictions for the Higgs branching fractions and to use measurements of them to distinguish between the
Standard Model and its possible extensions, such as the minimal
supersymmetric extension of the Standard Model (MSSM) or a more general
two-Higgs-doublet model (2HDM).
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{table}[htbp]
\begin{tabular}{cc}\toprule
Measurement & Precision \\ \midrule
$\Gamma_{\gamma\gamma}\times B(h\to b \overline b)$ & 0.012 \\
$\Gamma_{\gamma\gamma}\times B(h\to WW)$ & 0.035 \\
$\Gamma_{\gamma\gamma}\times B(h\to \gamma\gamma)$ & 0.121 \\
$\Gamma_{\gamma\gamma}\times B(h\to ZZ)$ & 0.064 \\
$\Gamma_{\gamma\gamma}\times B(h\to \gamma Z)$ & 0.020 \\ \midrule
$\Gamma_{\gamma\gamma}^*$ & 0.021 \\
$\Gamma_{Total}^*$ & 0.13 \\ \midrule
$m_{h_{SM}}$ $(h \to \gamma\gamma)$ & 61 MeV \\
$CP$ Asymmetry $(h \to WW)$ & 0.035-0.040 \\ \bottomrule
\multicolumn{2}{l}{*Taking $BR(h \to b \overline b)$ from $e^+e^-$ running at ILC} \cr
\end{tabular}
\caption{Summary of Higgs Branching Fraction and other measurements in 3 years of design luminosity at a Higgs Factory. This study assumed a 120~GeV Standard Model-like Higgs Boson and accelerator parameters as described in \cite{Asner:2001vh}.}
\label{tab:gghiggsbf}
\end{table}
\subsubsection{$h \to b\overline b$}
If there are no new particles that acquire mass via the Higgs
mechanism, a precision measurement of $\Gamma({\widehat h}\to\gamma\gamma)$
can allow one to distinguish between a ${\widehat h}$ that is part
of a larger Higgs sector and the SM $h_{SM}$. Figure~\ref{fig:higgs}
shows the dijet invariant
mass distributions for the
Higgs signal and
for the $b\overline b(g)$ and $c\overline c(g)$ backgrounds,
after
all cuts.
Due to the large branching ratio for $H\rightarrow\bar{b}b$ decay
for a Higgs mass $\sim 115$~GeV, this is the main channel for Higgs
studies at CLICHE. This channel has received
the most attention and the studies are already quite
detailed~\cite{Ohgaki:1997jp,Asner:2001vh}.
Our analysis includes perturbative QCD backgrounds,
including $\gamma \gamma \rightarrow {\bar b} b(g)$ and
$\gamma \gamma \rightarrow {\bar c} c(g)$.
The ${\bar q} q$ backgrounds are suppressed by choosing like polarizations
for the colliding photons, but this suppression is not so strong when the
final states contain additional gluons.
The mass resolution is around 6~GeV with a jet energy resolution of
$\sigma_E=0.6 \times \sqrt{E}$. The distribution in the dijet invariant
mass,
$m_{jets}$, for a $m_H=115$~GeV Higgs found in this study
with an integrated luminosity of 200~fb$^{-1}$ is shown in
Fig.~\ref{fig:higgs}. A clear signal peak can be seen above sharply falling
backgrounds. Including the three bins nearest to $m_{jets}\sim 115$~GeV,
we obtain 4952 signal events and 1100 background events. Thus, the
signal-to-background ratio is expected to be 4.5 after all cuts.
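The quoted statistical precision follows from simple counting. Evaluated for the 200~fb$^{-1}$ sample above (4952 signal and 1100 background events), $\sqrt{S+B}/S$ comes out at about 1.6\% for this sample alone; longer running reduces it further:

```python
import math

# Statistical precision sqrt(S+B)/S of a counting measurement,
# evaluated for the event counts quoted above for the 200 fb^-1 sample.
def counting_precision(s, b):
    return math.sqrt(s + b) / s

prec = counting_precision(4952, 1100)   # about 1.6% for this sample
```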
A feature which is not taken into account in these studies is the pile-up
of events from different bunch crossings. Initial studies indicate that pile-up of order 10 bunch crossings degrades the Higgs signal only slightly.
This would yield a measurement of
$\Gamma(h_{SM}\to\gamma\gamma)B(h_{SM}\to b\overline b)$ with an accuracy of
$\sqrt{S+B}/S\sim 1.2\%$ in 3 years of design luminosity at a Higgs Factory. This study assumed a 120~GeV Standard Model-like Higgs Boson and accelerator parameters as described in \cite{Asner:2001vh}.
\subsubsection{$h \to WW$}
Observation of this decay mode is extremely difficult at high-energy
$\gamma\gamma$ colliders, because of the large cross section for $W$~pair
production. If the $\gamma\gamma$ center-of-mass energy is below the
$W^+W^-$ threshold, however, the continuum production of $W$ pairs is
greatly reduced, allowing the observation of resonant production through a
Higgs boson. The sharp peak in the $\gamma\gamma$ luminosity function seen in
Fig.~\ref{fig:gamgamlum_plot} plays a key role here.
Figure~\ref{fig:wwcross}(a) compares the cross sections for the
continuum $W$~pair production with the Higgs resonance curve. As shown,
the cross sections for $\sigma(\gamma\gamma\to W^+W^-)$ and
${\cal B}r(h\to W^+W^-) \times \sigma(\gamma\gamma\to h)$ are comparable,
if $E_{CM}(e^-e^-)=150$~GeV for $m_H=115$~GeV.
One significant difference between the two types of events is
the energy distribution of the $W^+W^-$ pairs, as illustrated
in Figure~\ref{fig:wwcross}(b).
Our study is concentrated on the hadronic decays of the $W$ pairs,
applying several kinematic cuts. One pair of jets must reconstruct to the
$W$~mass, while the other pair is required to saturate the remaining phase
space. These cuts allow us not only to reduce the $W^+W^-$ pairs
to those with energy similar to those produced in Higgs events, but
also to reject
any possible $\gamma\gamma \to qq(g)$ background. There must be at least
four jets in the event and the jet reconstruction efficiency is assumed to
be 100\%. In contrast to the $h\to b\bar{b}$ analysis, here we are imposing
a $y=0.003$ cut in the Durham algorithm used in the jet reconstruction.
In addition, the transverse momentum is required to be smaller than 0.1.
After these cuts we have a 29\% reconstruction efficiency.
A comparison of the signal and the background after cuts is given in
Fig.~\ref{fig:wwcross}(c), which corresponds to a signal-to-background ratio
of 1.3, and the statistical precision in the
signal rate measurement is expected to be 5\%.
The other event topologies (two leptons and missing energy, or one lepton,
missing energy and jets) remain to be studied. Techniques similar to those
described in \cite{Dittmar:1996ss} may be used. We also believe that the
decay $H \rightarrow ZZ, Z\gamma$ might be interesting, despite their
relatively small branching ratios.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.49\hsize]{Chapter_Gamma_Gamma/figs/reduce_x_ww_comparison_120_2.pdf}
\includegraphics[width=0.49\hsize]{Chapter_Gamma_Gamma/figs/higgs_ww_75.pdf}
\caption[0]{
(a) Cross sections for $\gamma\gamma\rightarrow h$,
$\gamma\gamma\rightarrow h \times {\cal B}r(h\to WW)$ for
$m_H=115$~GeV and $\gamma\gamma\rightarrow WW$
production. (b) Comparison of the ideal invariant mass
of the $WW$ pairs from signal and background events.
(c) Selection of the $WW$ decay mode of the Higgs boson for
$m_H=115$~GeV, running at $E_{CM}(\gamma\gamma)=115$~GeV at CLICHE.
}
\label{fig:wwcross}
\end{figure}
\subsubsection{$h \to \gamma\gamma$}
In almost any phenomenological context, the decay $H \rightarrow
\gamma\gamma$ is a very rare one. However,
the number of Higgs events is large at a $\gamma\gamma$~collider,
so an interesting number of $H \to \gamma\gamma$
events would be produced. Furthermore, the backgrounds are expected
to be quite small, below 2~fb~\cite{Jikia:1993yc},
since there is no tree-level coupling of photons,
and the box-mediated processes are peaked very sharply in the
forward direction. A complete background study has not yet been made, but
initial estimates indicate that a clear peak
in the $\gamma\gamma$~mass distribution should be observable,
and we assume here that the background error would be negligible.
The number of events produced in this channel is proportional
to ${\Gamma_{\gamma\gamma}^2/\Gamma_{\mathrm{total}}}$. The quadratic
dependence is interesting, because if $\Gamma_{\mathrm{total}}$ could
be measured elsewhere, a small error on $\Gamma_{\gamma\gamma}$ would
be obtained. Similarly, if $\Gamma_{\gamma\gamma}$ is measured elsewhere,
a small error on $\Gamma_{\mathrm{total}}$ could be obtained.
In Fig.~\ref{fig:ggtogg}, we can see that a 10\% measurement of
${\Gamma_{\gamma\gamma}^2/\Gamma_{\mathrm{total}}}$ can be made with
less than a year of data taking.
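Since the rate measures $R=\Gamma_{\gamma\gamma}^2/\Gamma_{\mathrm{total}}$, standard error propagation makes the quadratic dependence explicit: the relative error on $\Gamma_{\gamma\gamma}$ is half the combined relative error on $R$ and $\Gamma_{\mathrm{total}}$. A sketch with purely illustrative input errors:

```python
import math

# Gamma_gg = sqrt(R * Gamma_tot), with R = Gamma_gg^2 / Gamma_tot, so
#   dGamma_gg / Gamma_gg = 0.5 * sqrt((dR/R)^2 + (dG/G)^2).
# Example inputs (hypothetical): a 10% measurement of R combined with a
# 13% determination of Gamma_tot from elsewhere.
def gamma_gg_rel_error(rel_R, rel_Gtot):
    return 0.5 * math.sqrt(rel_R**2 + rel_Gtot**2)
```

With a 10\% rate measurement and a perfectly known total width this would already give a 5\% determination of $\Gamma_{\gamma\gamma}$; a 13\% width error degrades it only to about 8\%.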
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.99\hsize]{Chapter_Gamma_Gamma/figs/gg_all.pdf}
\caption[0]{
The expected precision in the ${h \rightarrow \gamma\gamma}$ decay
width from direct measurements
of $h\rightarrow\gamma\gamma$ for $m_H = 115$~GeV. The precision is
lower than in the equivalent measurements of $H\rightarrow WW$ and $H\rightarrow b\bar{b}$,
but this observable is unique to a low-energy $\gamma\gamma$ collider
like CLICHE.
}
\label{fig:ggtogg}
\end{figure}
The cleanliness of these events and good energy resolution in the
electromagnetic calorimeter would allow for an independent measurement
of the Higgs mass. Assuming that the calorimeter energy scales can
be sufficiently well calibrated, a resolution better than $100$~MeV
can be expected.
\subsection{Determining CP Nature of a Higgs Boson}
Precision studies of the Standard Model-like Higgs can be performed using the peaked luminosity spectrum (II) with $\sqrt s=m_{\rm Higgs}/y_{\rm peak}$.
These include: determination of CP properties; a detailed scan to
separate the $H^0$ and $A^0$ in the decoupling limit of a 2HDM; branching ratios; and the ratio of vacuum expectation values, $\tan \beta$.
Determination of the CP properties of
any spin-0 Higgs $\widehat h$ produced in $\gamma\gamma$
collisions is possible since $\gamma\gamma\to {\widehat h}$ must proceed at one
loop, whether ${\widehat h}$ is CP-even, CP-odd or a mixture.
As a result, the CP-even and CP-odd parts of ${\widehat h}$ have
$\gamma\gamma$ couplings
of similar size. However, the structure of the couplings is very different:
\begin{equation}
{\cal A}_{CP=+}\propto \vec \epsilon_1\cdot\vec \epsilon_2\,,\quad
{\cal A}_{CP=-}\propto (\vec\epsilon_1\times\vec \epsilon_2)\cdot \hat p_{\rm beam}\,.
\end{equation}
By adjusting the orientation of the photon polarization vectors with
respect to one another, it is possible to determine the relative
amounts of CP-even and CP-odd content in the resonance ${\widehat h}$
\cite{Grzadkowski:1992sa}.
If ${\widehat h}$ is a mixture, one can use helicity asymmetries for this purpose
\cite{Grzadkowski:1992sa,Kramer:1993jn}.
However, if ${\widehat h}$ is either purely CP-even
or purely CP-odd, then one must employ transverse linear polarizations
\cite{Gunion:1994wy,Kramer:1993jn}.
For a Higgs boson of pure CP, one finds that the Higgs cross section
is proportional to
\begin{equation}
{d{\cal L}\over dE_{\gamma\gamma}}\left(1+\vev{\lambda\lambda'}+{\cal CP} \vev{\lambda_T\lambda_T'}
\cos2\delta \right)
\label{tranxsec}
\end{equation}
where ${\cal CP}=+1$ (${\cal CP}=-1$)
for a pure CP-even (CP-odd) Higgs boson,
and $\delta$ is the angle between the transverse polarizations of
the laser photons. Thus, one measure of
the CP nature of a Higgs is the asymmetry
for parallel vs. perpendicular orientation
of the transverse linear polarizations of the initial laser beams.
In the absence of background, this would take the form
\begin{equation}
{\cal A}\equiv{N_{\parallel}-N_{\perp}\over N_{\parallel}+N_{\perp}}
={{\cal CP}\,\vev{\lambda_T\lambda_T'}\over 1+\vev{\lambda\lambda'}} \,,
\label{asymzerob}
\end{equation}
which is positive (negative) for a CP-even (odd) state.
The $b\overline b(g)$ and $c\overline c(g)$ backgrounds result
in additional contributions
to the $N_{\parallel}+N_{\perp}$ denominator, which dilutes the
asymmetry. The backgrounds do not contribute to the numerator
for CP invariant cuts. Since, as described below, total
linear polarization for the laser beams translates into
only partial polarization for the back-scattered photons
which collide to form the Higgs boson, both $N_{\parallel}$
and $N_{\perp}$ will be non-zero for the signal.
The expected value of ${\cal A}$ must be carefully computed for a given model
and given cuts.
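The dilution of the asymmetry by a CP-symmetric background can be made explicit with a short sketch. The signal rates follow Eq.~(\ref{tranxsec}) with $\cos 2\delta = \pm 1$ for the parallel and perpendicular configurations, and the background is assumed to enter both equally; all numerical inputs are illustrative:

```python
# CP asymmetry A = (N_par - N_perp) / (N_par + N_perp), where the
# signal rates follow 1 + <ll'> +/- CP * <lT lT'> and a CP-symmetric
# background bkg adds to both orientations equally (diluting A).
def cp_asymmetry(s0, lam, lamT, cp, bkg):
    """s0: signal normalization; lam = <lambda lambda'>;
    lamT = <lambda_T lambda_T'>; cp = +1 (CP-even) or -1 (CP-odd);
    bkg: background count in each polarization configuration."""
    n_par = s0 * (1.0 + lam + cp * lamT) + bkg
    n_perp = s0 * (1.0 + lam - cp * lamT) + bkg
    return (n_par - n_perp) / (n_par + n_perp)
```

With zero background this reproduces Eq.~(\ref{asymzerob}) exactly, e.g. $A = \vev{\lambda_T\lambda_T'}/(1+\vev{\lambda\lambda'})$ for a CP-even state; any added background shrinks $|A|$ toward zero without changing its sign.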
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.99\hsize]{Chapter_Gamma_Gamma/figs/gamgamlum_plot_x1pt86_lame-80_pt1.pdf}
\caption[0]{We plot the luminosities and corresponding $\vev{\lambda\lambda'}$
and $\vev{\lambda_T\lambda_T^\prime}$
for operation at $\sqrt s=206$~GeV and $x=1.86$,
assuming 100\% transverse polarization for the laser photons
and $\lambda_e=\lambda_e^\prime=0.4$. These plots are for the naive
non-CAIN distributions.
}
\label{linlumnaive}
\end{figure}
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.99\hsize]{Chapter_Gamma_Gamma/figs/asner_lowe_lum_lin_new.pdf}
\caption[0]{We plot the luminosity,
$L=d{\cal L}/dE_{\gamma\gamma}$, in units of ${\rm fb}^{-1}/4.28$~GeV
and corresponding $\vev{\lambda\lambda'}$ predicted by CAIN
for operation at $\sqrt s=206$~GeV and $x=1.86$,
assuming 100\% transverse polarization for the laser photons
and $\lambda_e=\lambda_e^\prime=0.4$.
The dashed (dotted) curve gives the component
of the total luminosity that derives from the $J_z=0$ ($J_z=2$) two-photon
configuration. The solid luminosity curve is the sum of these two
components and $\vev{\lambda\lambda'}=(L_{J_z=0}-L_{J_z=2})/(L_{J_z=0}+L_{J_z=2})$.
}
\label{linlum}
\end{figure}
At the kinematic limit, $z=z_{\rm max}=x/(1+x)$,
the ratio of $\lambda$ to $\lambda_T$ is given by
\begin{equation}
{\lambda\over \lambda_T}= \lambda_e x{2+x\over 1+x}\sim 1
\end{equation}
for $\lambda_e=0.4$ and $x=1.86$.
Substantial luminosity and values of $\lambda_T$ close to the
maximum are achieved for moderately smaller $z$.
Operation at $x=1.86$ (corresponding to $\sqrt s=206$~GeV and a laser
wavelength of $\sim 1~\mu$m) would allow
$\lambda_T^{\rm max}\sim\lambda^{\rm max}\sim 0.6$.
Making these choices for both beams
is very nearly optimal for the CP study for the
following reasons. First, these choices will maximize
${d{\cal L}\over dE_{\gamma\gamma}}\vev{\lambda_T\lambda_T'}$
at ${E_{\gamma\gamma}=120}$~GeV. As seen
from earlier equations, it is
the square root of the former quantity that
essentially determines the accuracy with which
the CP determination can be made.
Second, $\lambda_e=\lambda_e'=0.4$ results in $\vev{\lambda\lambda'}>0$. This is
desirable for suppressing the background. (If there
were no background, Eq.~(\ref{asymzerob}) implies
that the optimal choice would be to employ
$\lambda_e$ and $\lambda_e'$ such that $\vev{\lambda\lambda'}<0$.
However, in practice the background is very substantial and it
is very important to have $\vev{\lambda\lambda'}>0$ to suppress
it as much as possible.)
In Fig.~\ref{linlumnaive}, we plot
the naive luminosity distribution
and associated values of $\vev{\lambda\lambda'}$ and $\vev{\lambda_T\lambda_T'}$
obtained for $\lambda_e=\lambda_e'=0.4$ and 100\% transverse polarization
for the laser beams.
As discussed in \cite{Gunion:1994wy}, the
asymmetry studies discussed below are not very sensitive to
the polarization of the colliding $e$ beams.
Thus, the studies could be performed in parasitic fashion during
$e^-e^+$ operation if the $e^+$ polarization is small. (As emphasized
earlier, substantial $e^+$ polarization would be needed for precision
studies of other $h_{SM}$ properties.)
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.6\hsize]{Chapter_Gamma_Gamma/figs/higgs_103_aspect_new.pdf}
\caption[0]{We plot the signal and $b\overline b$ and $c\overline c$
backgrounds for a SM Higgs boson with $m_{h_{SM}}=120$~GeV
assuming $\gamma\gamma$ operation at $\sqrt s=206$~GeV and $x=1.86$,
based on the luminosity and polarization distributions of Fig.~\ref{linlum}
for the case of linearly polarized laser photons.
The cross sections presented are those for $\delta=\pi/4$, \ie\
in the absence of any contribution from the transverse polarization
term in Eq.~(\ref{tranxsec}).
}
\label{linsignal}
\end{figure}
The luminosity distribution predicted by the CAIN
Monte Carlo for transversely polarized laser photons
and the corresponding result for $\vev{\lambda\lambda'}$ are plotted in
Fig.~\ref{linlum}. We note that
even though the luminosity spectrum is not peaked, it is very nearly the same
at $E_{\gamma\gamma}=120$~GeV as in the circular polarization case.
As expected from our earlier discussion
of the naive luminosity distribution,
at $E_{\gamma\gamma}=120$~GeV we find $\vev{\lambda\lambda'}\sim
\vev{\lambda_T\lambda_T'}\sim 0.36$. Since CAIN includes multiple interactions
and non-linear Compton processes, the luminosity is actually
non-zero for $E_{\gamma\gamma}$ values above the naive kinematic limit of
$\sim 132$~GeV. Both $\vev{\lambda\lambda'}$ and $\vev{\lambda_T\lambda_T'}$
continue to increase as one enters this region. However, the luminosity
becomes so small that we cannot make effective use of this region
for this study.
We employ these luminosity and polarization results
in the vicinity of $E_{\gamma\gamma}=120$~GeV
in a full Monte Carlo for Higgs production and decay as outlined
earlier in the circular polarization case. All the same cuts
and procedures are employed.
The resulting
signal and background rates for $\delta=\pi/4$
are presented in Fig.~\ref{linsignal}.
The width of the Higgs resonance peak is $5.0\pm 0.3$~GeV (using a Gaussian
fit), only slightly larger than in the circularly polarized case.
However, because of the shape of the luminosity distribution,
the backgrounds rise more rapidly for $m_{b\overline b}$
values below $120$~GeV than in the case of circularly polarized laser beams.
Thus, it is best to use a slightly higher cut on the $m_{b\overline b}$
values in order to obtain the best statistical significance for the signal.
Ref.~\cite{Asner:2001ia} finds $\sim 360$
reconstructed two-jet signal events with $m_{b\overline b}\geq 114$~GeV
in one year of operation, with roughly 440 background events
in this same region. Under luminosity assumptions similar to those used in Table~\ref{tab:gghiggsbf}, this corresponds to a precision
of $\sqrt{S+B}/S\sim 0.032$ for the measurement
of $\Gamma(h_{SM}\to\gamma\gamma)B(h_{SM}\to b\overline b)$.
Not surprisingly, this is not as good as for the circularly polarized
setup, but it is still indicative of a very strong Higgs signal.
Turning to the CP determination, let us assume that
we run 50\% in the parallel
polarization configuration and 50\% in the perpendicular
polarization configuration.
Then, because we have only 60\% linear
polarization for the colliding photons for $E_{\gamma\gamma} \sim 120$~GeV,
$N_{\parallel}\sim 180[1+(0.6)^2]+273\sim 518$ and
$N_{\perp}\sim 180[1-(0.6)^2]+273=388$.
For these numbers, ${\cal A}=130/906\sim 0.14$.
The error in ${\cal A}$ (again with luminosity assumptions similar to those used in Table~\ref{tab:gghiggsbf})
is $\delta{\cal A}=\sqrt{N_{\parallel}N_{\perp}/N^3}\sim 0.007$
($N\equiv N_\parallel+N_\perp$), yielding
${\delta{\cal A}\over{\cal A}}={\delta {\cal CP}\over {\cal CP}}\sim 0.05$.
This measurement would thus provide a
fairly strong confirmation of the CP=+ nature of the $h_{SM}$
after three $10^7$ sec years devoted to this study.
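The event counts and asymmetry quoted above follow from simple arithmetic. A minimal numerical check of the one-year numbers (the 180 signal and 273 background events per configuration are taken from the text; the variable names are ours):

```python
import math

# One-year event counts quoted in the text for 60% linear photon
# polarization: 180 polarization-dependent signal events and 273
# polarization-independent background events per configuration.
signal = 180
background = 273
pol2 = 0.6 ** 2  # <lambda_T lambda_T'> enters squared

n_par = signal * (1 + pol2) + background   # parallel laser polarizations
n_perp = signal * (1 - pol2) + background  # perpendicular polarizations

asym = (n_par - n_perp) / (n_par + n_perp)
print(round(n_par), round(n_perp), round(asym, 3))  # 518 388 0.143
```

This reproduces ${\cal A}=130/906\simeq 0.14$.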
\section{Understanding gamma-gamma backgrounds at the ILC}
QCD aspects of gamma-gamma physics have been studied at electron-positron colliders over the last several decades. At LEP, gamma-gamma collisions with $\sqrt{s}$ up to 140 GeV have been studied.
Up to now, the photons have been produced via bremsstrahlung from the electron and positron beams, leading to soft energy spectra with only limited statistics at high $\sqrt{s}$, whereas the
gamma-gamma option of the ILC will produce collisions in the high-energy part of the spectrum. A plethora of QCD physics topics in two-photon interactions can be addressed with a gamma-gamma collider,
as discussed in \cite{Asner:2001vh}. These topics include total gamma-gamma to hadrons cross sections and studies of the (polarized) photon structure functions. Furthermore, good knowledge and understanding of
two-photon processes will be essential for controlling physics background contributions to other processes and machine backgrounds at TeV and multi-TeV linear electron-positron colliders.
\section{Summary}
A gamma-gamma (and $e$-gamma) collider provides exciting physics opportunities that are complementary to, and thus strengthen, the physics case for the ILC.
This section presented a summary of Higgs studies possible at a gamma-gamma collider. The broader physics program of a photon collider is summarized in Table~\ref{gammagammasumtab}.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}
\includegraphics[width=0.99\hsize]{Chapter_Gamma_Gamma/figs/gammagamma-summary-table.pdf}
\caption[0]{Summary of gamma-gamma collider golden modes}
\label{gammagammasumtab}
\end{figure}
\chapter*{Introduction \label{sid:chapter_introduction}}
\addcontentsline{toc}{chapter}{Introduction}
Following an intense and successful R\&D phase, the ILC has now achieved a state of maturity and readiness, culminating recently with the publication
of the Technical Design Report~\cite{Behnke:2013xla,Baer:2013cma,Adolphsen:2013jya,Adolphsen:2013kya,Behnke:2013lya}. Several important physics goals at the TeV energy scale have motivated this effort. These include a detailed study of the properties of the recently discovered
Standard Model-like Higgs boson, including precision measurements of its couplings to fermions and bosons, and an improved knowledge of the top quark
and W, Z boson interactions to a high level of precision. In all of these, the ILC will yield substantial improvements over LHC measurements and will
have a qualitative advantage on signatures that have high backgrounds at LHC or are difficult to trigger on. Moreover, the ILC provides a unique
sensitivity in the search for signals of new physics beyond the Standard Model arising from the electroweak production of new particles (assuming these
are kinematically accessible), as well as extending the probe of new interactions to higher mass scales via the precision measurements
of W, Z and two-fermion processes. In this way, the ILC experiments will be sensitive to new phenomena such as supersymmetric partners of known
particles, new heavy gauge bosons, extra spatial dimensions, and particles connected with alternative theories of electroweak symmetry breaking~\cite{Baer:2013cma}. Indeed, the
ILC experiments will bring qualitatively new capabilities; detailed simulations with realistic detector designs show that the ILC can reach the precision goals needed~\cite{Behnke:2013lya}.
The requirements of the ILC~\cite{Heuer:2003nn} include tunability between center-of-mass energies of 200 and 500 GeV, with rapid changes in energy over a limited range for threshold scans.
The luminosity, which must exceed $10^{34}$ cm$^{-2}$s$^{-1}$ at 500 GeV, roughly scales proportionally with center-of-mass collision energy. Highly polarized electrons ($>80$\%) are specified,
with polarized positrons desirable. The TDR design~\cite{Adolphsen:2013jya,Adolphsen:2013kya} has met these specifications. R\&D has achieved
the accelerating gradient goal of 35 MV/m in test stands and 31.5 MV/m in
installed cryomodules with beam loading. Cavity fabrication to these specifications has been industrialized. The effects of the electron cloud in the positron damping ring have
been studied experimentally, leading to proven techniques for its mitigation. Fast kickers needed for damping ring beam injection and ejection have been developed. The required
small final focus spot size is being demonstrated in a test facility. The final focus and interaction region, including the detector push-pull system, has been designed. Two
detailed detector designs have been developed~\cite{Behnke:2013lya}, with R\&D supporting these designs. Beam tests with highly granular calorimeters have demonstrated the calorimetry performance
needed by using the particle flow technique. Similarly, tracking R\&D has advanced for vertex detection based on thin CMOS monolithic pixel sensors, outer tracking with low-mass
supported silicon microstrips, and advanced TPC technologies employing micropattern gas detectors or silicon sensors for readout.
Recently, the Japanese government has expressed a desire to host the ILC, and international negotiations are underway. In a staged approach, beginning at a center-of-mass energy
of 250 GeV, a physics program would start with precision measurements of the Higgs branching ratios and properties. Raising the energy to 500 GeV would move to precision measurements
of top quark properties well beyond those possible at the LHC. Measurements of the top coupling to the Higgs and the Higgs self coupling would begin at 500 GeV. Should there be
accessible new particles such as supersymmetric partners of gauge bosons, Higgs bosons and leptons, the ILC with the power of polarized beams is the only place where they can be studied in full detail. If there are
additional Higgs boson states (which are often difficult to observe at the LHC even if not too heavy), the ILC would be needed to measure their masses, quantum numbers, and
couplings to Standard Model particles. Extension of the ILC to 1 TeV is straightforward, with lengthened linac tunnels and additional cryomodules, building on the original
ILC sources, damping rings, final focus and interaction regions, and beam dumps.
\subsection{$q\bar{q}h$ at $\sqrt{s}=500$~GeV}
At $\sqrt{s}=500$ GeV, the total cross section for $e^{+}e^{-}\rightarrow q\bar{q}h$ is about 70~fb with
an $e^{-}/e^{+}$ beam polarization of -80\%/+30\%. About 35k such events are produced for 500 fb$^{-1}$
integrated luminosity. Unlike the situation at $\sqrt{s}=250$~GeV, the Z and h have a significant boost at $\sqrt{s}=500$~GeV and
the decay product jets can be unambiguously associated with the parent Z or h boson.
The major background processes
are 4-fermion $W^{+}W^{-}$, $Z^0Z^0$, and 2-fermion $q\bar{q}$.
The energy of the $Z$ from the $Zh$ process is more than 200 GeV, and the two jets from the $Z$
overlap significantly. Therefore, we reconstruct the hadronically decaying $Z$ as a single jet by the
$k_t$ jet algorithm with a jet radius of 1.2. From reconstructed jets, candidate jets are preselected by
requiring that (1) the jet p$_{t}$ be greater than 50 GeV, (2) the jet mass be between 70 and 150 GeV/c$^{2}$,
and (3) the jet energy be between 210 and 300 GeV.
With a fixed jet radius, both jet mass and jet energy are reduced if some decay products from the $Z$ are outside
the jet radius. This effect was corrected by assuming a linear relationship between jet mass and jet energy; in this way
a better separation between $Z$ jets and non-$Z$ jets was achieved. For the final selection
we required that (1) the corrected jet mass be between 87 and 105 GeV/c$^{2}$, (2) the maximum
energy of a photon in the event be less than 100 GeV, (3) the number of particles in the jet be greater than 20, and
(4) the jet angle satisfy $|\cos\theta_{jet}|<0.7$.
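The preselection and final-selection requirements listed above can be summarized as two predicates. The sketch below is illustrative only: the `Jet` container and its field names are our assumptions, not the analysis code used in the study, and the corrected jet mass is simply passed in as `mass` for the final selection.

```python
from dataclasses import dataclass

@dataclass
class Jet:
    pt: float          # transverse momentum (GeV)
    mass: float        # jet mass (GeV/c^2); corrected mass for final selection
    energy: float      # jet energy (GeV)
    n_particles: int   # number of particles in the jet
    cos_theta: float   # cosine of the jet polar angle

def preselect(jet):
    """Preselection cuts (1)-(3) quoted above."""
    return (jet.pt > 50.0
            and 70.0 < jet.mass < 150.0
            and 210.0 < jet.energy < 300.0)

def final_select(jet, max_photon_e):
    """Final-selection cuts (1)-(4); jet.mass is the corrected mass here."""
    return (87.0 < jet.mass < 105.0
            and max_photon_e < 100.0
            and jet.n_particles > 20
            and abs(jet.cos_theta) < 0.7)
```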
The recoil mass distribution of selected events is shown in Fig.~\ref{fig:incjet-recoil-mass-500}.
The figure shows the distribution after subtracting background events. The error bar and the central value of the histogram
correspond to the actual event statistics. All standard model processes simulated for the ILC TDR
were considered as background. For the number of events with the recoil mass between 100 and 210 GeV/c$^{2}$,
S/N is 11113/175437=0.063. 43\% of the backgrounds are due to 4-quark events through $ZZ$ and $WW$ processes. Other 4-fermion
processes and 2-fermion hadron events constitute 26\% and 27\% of background events, respectively. The relative cross section error for
500 fb$^{-1}$ is 3.9\%~\cite{Miyamoto:2013zva}.
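The quoted S/N and relative cross-section error follow directly from the event counts; as a check (notation ours):

```python
import math

s = 11113    # signal events with recoil mass in 100-210 GeV/c^2
b = 175437   # background events in the same window

print(round(s / b, 3))                       # S/N = 0.063
print(round(100 * math.sqrt(s + b) / s, 1))  # relative error = 3.9 (%)
```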
\begin{figure}
\begin{center}
\includegraphics[width=0.75\hsize]{Chapter_Mass_Spin_CP/figs/incjet-recoil-mass-500.pdf}
\end{center}
\caption{The recoil mass of inclusive jet pair after subtracting background
processes.}
\label{fig:incjet-recoil-mass-500}
\end{figure}
\subsection{$l^+l^-h$ at $\sqrt{s}=500$~GeV}
\begin{figure}
\includegraphics[width=0.49\hsize, height=0.23\vsize, keepaspectratio=false]{Chapter_Mass_Spin_CP/figs/mm-0p8-recoil-arranged.pdf}
\includegraphics[width=0.49\hsize, height=0.23\vsize, keepaspectratio=false]{Chapter_Mass_Spin_CP/figs/ee-0p9-recoil-arranged.pdf}
\caption{Results of the model independent analysis of the Higgs-strahlung
process $e^+e^- \to Zh$ at $\sqrt{s}=500$~GeV in which (a) $Z \to \mu^+\mu^-$ and
(b) $Z \to e^+e^- (n\gamma)$. The results are shown for
$P(e^+, e^-) = (+30 \%, -80 \%)$ beam polarization. \label{fig:mumuheeh}}
\end{figure}
A Higgs recoil mass analysis has been done at $\sqrt{s} = 500$ GeV with ILD full detector simulation.
At $\sqrt{s} = 500$ GeV the $Zh$ cross section is about one-third of that at $\sqrt{s} = 250$ GeV.
There are also numerous backgrounds from $t$-channel processes such as $ZZ$ at $\sqrt{s} = 500$ GeV.
Those aspects make the $Zh$ analysis at $\sqrt{s} = 500$ GeV less powerful than at $\sqrt{s} = 250$ GeV;
however the result can be combined with the $\sqrt{s} = 250$ result to improve the overall $Zh$ total cross section accuracy.
Firstly, lepton tagging is applied to both the muon and electron channels.
For the muon, the cluster energy is required to be smaller than half of the track energy.
For the electron, the ratio of the energy deposited in the ECAL to the total calorimeter energy must be greater than 90\%,
and the cluster energy is required to be between 70\% and 140\% of the track energy.
For the electron channel, all neutral particles with $\cos\theta < 0.99$ with respect to the electron candidate
are added to the candidate electron to recover photons from final state radiation and bremsstrahlung.
If more than two lepton candidates are found, a pair giving the dilepton mass nearest to the $Z$ mass
is selected.
Cuts on the $Z$ mass, recoil mass, and di-lepton $p_T$ are applied.
Additional cuts are applied to the acoplanarity of the di-lepton system and the difference between the $p_T$ of the di-lepton and
the most energetic neutral particle.
Likelihood cuts are applied as the final cuts, with input variables of
di-lepton $p_T$, $Z$ mass, di-lepton $\cos\theta$ and acollinearity of di-lepton.
The resultant recoil mass distributions with fit are shown in Figure \ref{fig:mumuheeh}.
The cross section error is 6.49\% in the $\mu\mu{}h$ channel and 7.10\% in the $eeh$ channel.
The combined precision for $Zh\rightarrow\ell\ell{}h$ at $\sqrt{s} = 500$ GeV is 4.8\%.
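The combined 4.8\% figure is consistent with an inverse-variance combination of the two channel errors, assuming uncorrelated statistical uncertainties (our assumption for this check):

```python
import math

err_mumu = 6.49  # % cross-section error, mumu-h channel
err_ee = 7.10    # % cross-section error, ee-h channel

# Uncorrelated (inverse-variance) combination of the two channels
combined = 1.0 / math.sqrt(err_mumu**-2 + err_ee**-2)
print(round(combined, 1))  # 4.8
```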
\chapter{Higgs Mass, ZH Cross Section, Spin and CP \label{sid:chapter_mass_spin_cp}}
\input{Chapter_Mass_Spin_CP/Section_Mass.tex}
\input{Chapter_Mass_Spin_CP/Section_SpinParity.tex}
\input{Chapter_Mass_Spin_CP/Section_CP.tex}
\section{Higgs Sector CP Measurements}
\subsection{Introduction }
The analytic power of the ILC is emphasized when we consider more
detailed questions.
It is possible that the $h$ is not a CP eigenstate but rather a
mixture of CP even and CP odd components. This occurs if there is
CP violation in the Higgs sector. It is known that CP violation
from the CKM matrix cannot explain the cosmological excess of baryons
over antibaryons; thus, a second source of CP violation in nature is
needed. One possibility is that this new CP violation comes from
the Higgs sector and gives rise to net baryon number at the
the electroweak phase transition, through mechanisms that we will discuss
in Section 9.1 of this report. For these models, the $h$ mass
eigenstates can be mainly CP even but contain a small admixture of a
CP odd component.
\subsection{$e^+e^-\rightarrow ZH$ }
A small CP odd contribution to the $hZZ$ coupling
can affect the threshold behavior. The right-hand side of
Fig.~\ref{fig:ZH:CP}
shows the determination of the CP-mixing angle at a center of mass energy of
350~GeV from the value of the total cross section and from an
appropriately defined optimal observable~\cite{Schumacher:2001ax}.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\hsize]{Chapter_Mass_Spin_CP/figs/Tesla-Hangle-lines3.pdf}
\end{center}
\caption{ Determination of
$CP$-mixing with $1$-$\sigma$ bands expected at
$\sqrt{s}=350$\,GeV and $500$\,fb$^{-1}$ \cite{Schumacher:2001ax}. }
\label{fig:ZH:CP}
\end{figure}
A new result was presented during the Snowmass study \cite{Anderson:2013}
for the CP mixing that would appear in the $hZZ$ coupling in both the $pp$ and $e^+e^-$ colliders.
The analysis utilized a simplified detector simulation based on the smearing of parton-level information and simply assumed $30\,\%$ efficiency and $10\,\%$ background for the signal process: $e^+e^- \to Zh \to \mu^+\mu^- b \bar{b}$.
From the cross section and the observables concerning the production and decay angles of both the $Z$ and $h$ bosons, the analysis estimated the expected sensitivity to the effective fraction of events due to the CP violating coupling, $f_{a3}$, which was then translated to that of the corresponding fraction of the anomalous contribution for the Higgs to two vector boson decays, $f_{a3}^{\rm dec}$, used in earlier CMS studies.
If the $Zh$ cross-section is first measured at the center of mass energy, $\sqrt{s}=250\,$GeV ($250\,$fb$^{-1}$), and then at $350$ ($350\,$fb$^{-1}$), $500$ ($500\,$fb$^{-1}$), and $1000\,$GeV ($1000\,$fb$^{-1}$), $f_{a3}$ can be measured to $0.035$, $0.041$, and $0.055$, respectively, which would translate to precision on $f_{a3}^{\rm dec}$ of $10^{-4}$, $4 \times 10^{-5}$, and $10^{-5}$, respectively.
However, the relative contributions of various possible anomalous couplings to the cross section might depend on the underlying dynamics that would appear as form factors in the anomalous couplings and would depend on the virtuality of $Z^*$.
At the ILC, the $q^2$ dependence can be separated by performing angular analyses separately at different energies since the virtuality of $Z^*$ is fixed at a fixed center-of-mass energy.
The expected precision of $f_{a3}$ is in the range of $0.03$ - $0.04$, being independent of the center-of-mass energy, and translates to $7 \times 10^{-4}$ to $8 \times 10^{-6}$, entering a region sensitive to a possible loop-induced CP-violating contribution.
\subsection{$H\rightarrow \tau^+\tau^-$ }
Tests of mixed CP property using the $hZZ$ coupling may not be the
most effective ones, since the CP odd $hZZ$ coupling is of higher
dimension and may be generated only through loops. It is more
effective to use a coupling for which the CP even and CP odd
components are on the same footing. An example is the $h$ coupling
to $\tau^+\tau^-$, given by
\begin{equation}
\Delta {\cal L} = - {m_\tau\over v}\, h\, \bar \tau (\cos\alpha + i
\sin\alpha \gamma^5) \tau
\label{eq:taucouple}
\end{equation}
for a Higgs boson with a CP odd component. The polarizations of the
final state $\tau$s can be determined from the kinematic distributions
of their
decay
products; the CP even and odd components interfere in these
distributions~\cite{Kramer:1993jn}.
In \cite{Desch:2003rw}, it
is estimated that the angle $\alpha$ can be determined to
an accuracy of 6$^\circ$ with $1\,$ab$^{-1}$ at $\sqrt{s}=350\,$GeV
in the case of maximal CP mixing, $\alpha=\pi/4$.
A similar study has been performed in \cite{Reinhard:2009} for a $120\,$GeV Higgs boson
assuming the baseline ILC machine running at $\sqrt{s}=230\,$GeV for
an integrated luminosity of $500\,$fb$^{-1}$ with beam polarizations
of $(P_{e^-}, P_{e^+})=(-0.8,+0.3)$.
A full simulation for the $e^+e^- \to Zh \to \mu^+\mu^- \tau^+\tau^-$ mode
in the study showed that with an inclusion of other $Z$ decay modes
an expected statistical precision of $\Delta\alpha=0.135$ (i.e. $28\,\%$) could be
achieved for $\alpha=-\pi/8$ given the baseline integrated luminosity of $500\,$fb$^{-1}$.
\subsection{$e^+e^- \to t\bar{t}H$ }
In the presence of CP violation, only the CP--even
component of the $HZZ$ coupling is projected out in Higgs decays to $ZZ$.
The $ZZ$ couplings of a pure
CP--odd $A$ state are zero at tree--level and are generated only through tiny
loop corrections.
The decays of the Higgs boson to fermions provide a more democratic probe of its CP
nature since, in this case, the CP--even and CP--odd components can have the
same magnitude. One therefore needs to look at channels where the Higgs boson is
produced and/or decays through these couplings.
A promising production mode for studying the Higgs CP properties is
$e^+e^- \to t\bar t H$.
The production of a spin 0 state with
arbitrary model-independent CP properties in association with a top
quark pair at the ILC was investigated in Ref.~\cite{BhupalDev:2007is,Godbole:2011hw}.
The CP properties of
the Higgs coupling to the top quarks were parametrized in a
model-independent way by a parameter $a$ for a CP-even Higgs, by a
parameter $b$ for a CP-odd Higgs and by simultaneously non-vanishing
$a$ and $b$ for a CP-mixed state:
\begin{equation}
C_{tt\Phi} = -i g_{ttH} (a+ib \gamma_5)\;.
\end{equation}
Notice that in the Standard Model, $a=1$ and $b=0$.
These parameters were determined
by a measurement of the total cross section, the polarization
asymmetry of the top quark and the up-down asymmetry of the antitop
quark with respect to the top-electron plane. The former two
observables are CP-even and can be exploited to distinguish a CP-even
from a CP-odd Higgs boson. Since the up-down asymmetry $A_\phi$ is
CP-odd, it can be exploited directly and unambiguously to test CP violation.
The sensitivities to $a$ and $b$ were studied in each observable
separately before investigating the combination of all three observables.
It was found that the total cross section is most sensitive to $a$ and to
some extent to $b$. The observables $P_t$ and $A_\phi$ do not exhibit
much sensitivity to $a$ and $b$, although polarization of the initial
$e^\pm$ beams slightly improves the sensitivity in case of $P_t$.
The combination of all three observables, however, reduces
the error on $a$ for polarized $e^\pm$ beams as shown in Fig.~\ref{fig:tthCP}.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\hsize]{Chapter_Mass_Spin_CP/figs/Sens_3.pdf}
\end{center}
\caption{ Errors $\Delta a^+$ (upper left) and
$\Delta a^-$ (upper right) on $a$ as well as $\Delta b^+$ (lower
left) and $\Delta b^-$ (lower right) on $b$, by combining all 3
observables $\sigma, P_t, A_\phi$, at $1 \sigma$ confidence level
for $M_\Phi=120$ GeV and $\sqrt{s}=800$ GeV with ${\cal L} = 500$
fb$^{-1}$. The electron and positron beams are polarized with $(P_{e^-}, P_{e^+})=(-0.8,+0.3)$.
The colour code
indicates the magnitude of the respective error.}
\label{fig:tthCP}
\end{figure}
If we assume that $a^2+b^2=1$ and parametrize $a$ and $b$ as $a = \cos\phi$ and $b=\sin\phi$,
as in Eq.~(\ref{eq:taucouple}) for $h \to \tau^+\tau^-$, then the cross section alone will be a measure of the mixing angle,
$\phi$.
Fig.~\ref{fig:tth-cp}(a) shows the $e^+e^- \to t\bar{t}h$ cross section as a function of $\sin^2\phi$ at three
different center of mass energies: $\sqrt{s}=500, 520,$ and $1000\,$GeV.
The cross section values are translated into the expected 1-$\sigma$ bounds and shown in Fig.~\ref{fig:tth-cp}(b) as a function of $\sin^2\phi$ for the three energies assuming $500\,$fb$^{-1}$ for $\sqrt{s}=500$ and $520\,$GeV, and the baseline $1\,$ab$^{-1}$ and the upgraded $2.5\,$ab$^{-1}$ at $\sqrt{s}=1\,$TeV \cite{Tanabe:2013tth-cp}.
The figure tells us that the contribution from the CP-odd component could be constrained to $\sim 5\%$ at 1-$\sigma$.
\begin{figure}
\begin{center}
\includegraphics[width=0.45\hsize]{Chapter_Mass_Spin_CP/figs/tth-cp-xsec.pdf}
\includegraphics[width=0.45\hsize]{Chapter_Mass_Spin_CP/figs/tth-cp.pdf}
\end{center}
\caption{(a) $e^+e^- \to t\bar{t}h$ cross section as a function of $\sin^2\phi$ at three
different center of mass energies: $\sqrt{s}=500, 520,$ and $1000\,$GeV,
(b) the expected 1-$\sigma$ bound on the CP-odd contribution, $\Delta\sin^2\phi$, as a function of $\sin^2\phi$ at the three energies.
The corresponding beam polarizations and integrated luminosities are indicated in the figure.}
\label{fig:tth-cp}
\end{figure}
\section{Higgs Mass and $\sigma(ZH)$ Measurements}
The Higgs mass and the total cross section for $e^+e^- \to Zh$
are measured simultaneously in the process
$e^+e^- \to Zh$, with $Z \to \mu^+\mu^-$, $Z \to e^+e^-$, and $Z \to q\bar{q}$ decays.
Here the shape of the distribution of the invariant mass recoiling against the
reconstructed $Z$ provides a precise measurement of $m_h$, while the
normalization of the distribution provides the total cross section $\sigma(ZH)$
independently of the Higgs decay mode. The
$\mu^+\mu^-X$ final state provides a particularly precise
measurement, as the $e^+e^-X$ channel suffers from larger
experimental uncertainties due to bremsstrahlung.
It should be noted that it is the capability to precisely
reconstruct the recoil mass distribution from $Z \to \mu^+\mu^-$
that defines the momentum resolution requirement for an ILC
detector. A measurement using $Z \to q\bar{q}$ decays appears to
only be feasible at $\sqrt{s} \ge 350$~GeV. A study of this channel
at $\sqrt{s}=500$~GeV is presented here.
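The recoil-mass variable used in the following subsections is built from the reconstructed $Z$ four-momentum alone, assuming the $Zh$ system carries total four-momentum $(\sqrt{s},\vec{0})$. A minimal sketch (function and argument names are ours):

```python
import math

def recoil_mass(sqrt_s, e_z, px, py, pz):
    """Mass recoiling against the reconstructed Z (all in GeV).

    Assumes the colliding system has four-momentum (sqrt_s, 0, 0, 0),
    so the recoil carries (sqrt_s - E_Z, -p_Z).
    """
    e_rec = sqrt_s - e_z
    m2 = e_rec**2 - (px**2 + py**2 + pz**2)
    return math.sqrt(max(m2, 0.0))

# On-shell two-body kinematics at sqrt(s) = 250 GeV recovers m_h exactly:
mz, mh, rs = 91.19, 125.0, 250.0
e_z = (rs**2 + mz**2 - mh**2) / (2 * rs)
p_z = math.sqrt(e_z**2 - mz**2)
print(round(recoil_mass(rs, e_z, 0.0, 0.0, p_z), 3))  # 125.0
```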
\subsection{$l^+l^-h$ at $\sqrt{s}=250$~GeV}
The reconstructed recoil mass distributions,
calculated assuming the $Zh$ is produced with four-momentum
$(\sqrt{s}, 0)$, are shown in Fig.~\ref{mass:mass:fig:Mrecoil}. In the $e^+e^-X$
channel FSR and bremsstrahlung photons are identified and used
in the calculation of the $e^+e^- (n\gamma)$ recoil mass. Fits to
signal and background components are used to extract $m_h$ and $\sigma(ZH)$.
Based on this model-independent analysis of Higgs production in
the ILD detector, it is shown that $m_h$ can be determined with a
statistical precision of $40$~MeV ($80$~MeV) from the $\mu^+\mu^-X$
($e^+e^-X$) channel. When the two channels are combined an
uncertainty of $32$~MeV is obtained \cite{Abe:2010aa,Li:2012taa}. The
corresponding
model independent uncertainty on the Higgs production cross
section is $2.6$\,\%. For a luminosity of 1150~fb$^{-1}$ at $\sqrt{s}$=250~GeV
(our scenario 4) the uncertainty on the Higgs mass and production cross
section drop to $15$~MeV and $1.2$\,\%, respectively.
Similar results were obtained from SiD
\cite{Aihara:2009ad}. It should be emphasized that these measurements
only used the information
from the leptonic decay products of the $Z$ and are independent of
the Higgs decay mode. As such this analysis technique could be
applied
even if the Higgs decayed invisibly and hence allows us to determine
the absolute branching ratios including that of invisible Higgs
decays.
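The improvement quoted for 1150~fb$^{-1}$ is just the $1/\sqrt{L}$ scaling of statistical errors from the 250~fb$^{-1}$ baseline; checking:

```python
import math

scale = math.sqrt(250.0 / 1150.0)  # statistical errors scale as 1/sqrt(L)

print(round(32.0 * scale))    # Higgs mass error: 32 MeV -> 15 MeV
print(round(2.6 * scale, 1))  # cross-section error: 2.6% -> 1.2%
```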
\begin{figure}
\includegraphics[width=0.49\hsize]{Chapter_Mass_Spin_CP/figs/new_mh_mmX.pdf}
\includegraphics[width=0.49\hsize]{Chapter_Mass_Spin_CP/figs/new_mh_eeX_br.pdf}
\caption{Results of the model independent analysis of the Higgsstrahlung
process $e^+e^- \to Zh$ at $\sqrt{s}=250$~GeV in which (a) $Z \to \mu^+\mu^-$ and
(b) $Z \to e^+e^- (n\gamma)$. The results are shown for
$P(e^+, e^-) = (+30 \%, -80 \%)$ beam polarization. \label{mass:mass:fig:Mrecoil}}
\end{figure}
\input{Chapter_Mass_Spin_CP/ll-500-recoil.tex}
\input{Chapter_Mass_Spin_CP/2jet-recoil.tex}
\section{Higgs Spin Measurement}
The threshold behavior of the $Zh$ cross section has a characteristic
shape for each spin and each possible CP parity. For spin 0,
the cross section rises as $\beta$ near the threshold for a CP even
state and as $\beta^3$ for a CP odd state. For spin 2,
for the canonical
form of the coupling to the energy-momentum tensor, the rise is also
$\beta^3$.
If the spin
is higher than 2, the cross section will grow as a higher power of
$\beta$.
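The different threshold shapes can be illustrated numerically. In the sketch below, $\beta$ is the usual two-body velocity factor $2p/\sqrt{s}$ for $e^+e^-\to Zh$; the masses match the $m_h=120$\,GeV scan of Fig.~\ref{fig:ZH:JPC}, while the code itself is our illustration, not the fit used in~\cite{Dova:2003py}:

```python
import math

M_Z, M_H = 91.19, 120.0  # GeV

def beta(sqrt_s):
    """Two-body velocity factor 2p/sqrt(s) for e+e- -> Zh."""
    s = sqrt_s**2
    lam = (s - (M_Z + M_H)**2) * (s - (M_Z - M_H)**2)  # Kallen function
    return math.sqrt(lam) / s if lam > 0.0 else 0.0

for rs in (215.0, 220.0, 230.0):
    b = beta(rs)
    # sigma ~ beta    for a CP-even spin-0 state,
    # sigma ~ beta**3 for a CP-odd state or the canonical spin-2 coupling
    print(rs, round(b, 3), round(b**3, 4))
```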
With a three-$20$\,fb$^{-1}$-point threshold scan of the $e^+e^- \to
Zh$
production cross section we can
separate these possibilities~\cite{Dova:2003py}
as
shown in Fig.~\ref{fig:ZH:JPC}.
The discrimination of more
general forms of the coupling is possible by the use of angular
correlations in the boson decay; this is discussed in detail in
\cite{Miller:2001bi}.
At energies well above the $Zh$ threshold, the $Zh$ process will
be dominated by longitudinal $Z$ production as implied by the
equivalence theorem. The reaction will then behave like a scalar pair
production, showing the characteristic $\sim \sin^2\theta$ dependence
if the $h$ particle's spin is zero. The measurement of the angular
distribution will hence strongly corroborate that the $h$ is indeed a
scalar particle.
\begin{figure}
\begin{center}
\includegraphics[width=0.75\hsize]{Chapter_Mass_Spin_CP/figs/spin_determination.pdf}
\end{center}
\caption{ Threshold scan of the $e^+e^- \to Zh$ process for $m_h =
120$\,GeV, compared with theoretical predictions for $J^{P}= 0^{+}$,
$1^{-}$, and $2^{+}$ \cite{Dova:2003py}.}
\label{fig:ZH:JPC}
\end{figure}
\chapter{Non-Minimal Higgs Models \label{sid:chapter_multiple_higgs}}
\input{Chapter_Multiple_Higgs/Section_Direct.tex}
\input{Chapter_Multiple_Higgs/Section_TanBeta.tex}
\section{Direct Production of Non-Minimal Higgs Bosons}
\defH^\pm{H^\pm}
\defm_{H}{m_{H}}
\defm_{h}{m_{h}}
\defm_{A}{m_{A}}
\defm_{\hpm}{m_{H^\pm}}
\defh_{\rm SM}{h_{\rm SM}}
\defm_{\hsm}{m_{h_{\rm SM}}}
\defG^0{G^0}
\defH{H}
\defh{h}
\defA{A}
\defH^+{H^+}
\defH^-{H^-}
\def\nonumber{\nonumber}
\def{\bar a}{{\bar a}}
\def{\bar b}{{\bar b}}
\def{\bar c}{{\bar c}}
\def{\bar d}{{\bar d}}
\def{\bar e}{{\bar e}}
\def{\bar f}{{\bar f}}
\def{\bar g}{{\bar g}}
\def{\bar h}{{\bar h}}
\defQ^0_L{Q^0_L}
\defU^0_R{U^0_R}
\defD^0_R{D^0_R}
\defQ_L{Q_L}
\defU_R{U_R}
\defD_R{D_R}
\defM_U{M_U}
\defM_D{M_D}
\def\eta_1^{U,0}{\eta_1^{U,0}}
\def\eta_2^{U,0}{\eta_2^{U,0}}
\def\eta_1^{D,0}{\eta_1^{D,0}}
\def\eta_2^{D,0}{\eta_2^{D,0}}
\def\eta_1^U{\eta_1^U}
\def\eta_2^U{\eta_2^U}
\def\eta_1^D{\eta_1^D}
\def\eta_2^D{\eta_2^D}
\def\eta_i^{U,0}{\eta_i^{U,0}}
\def\eta_i^{D,0}{\eta_i^{D,0}}
\def\eta_i^U{\eta_i^U}
\def\eta_i^D{\eta_i^D}
\def\lam{\lambda}
\def\eta_a^{U,0}{\eta_a^{U,0}}
\def\eta_a^{D,0}{\eta_a^{D,0}}
\def\eta_{\abar}^{U,0}{\eta_{{\bar a}}^{U,0}}
\def\eta_{\abar}^{D,0}{\eta_{{\bar a}}^{D,0}}
\def\eta_a^U{\eta_a^U}
\def\eta_a^D{\eta_a^D}
\def\eta_{\abar}^U{\eta_{{\bar a}}^U}
\def\eta_{\abar}^D{\eta_{{\bar a}}^D}
\def\kappa^U{\kappa^U}
\def\rho^U{\rho^U}
\def\kappa^D{\kappa^D}
\def\rho^D{\rho^D}
\def(\rho^D)\lsup{*}{(\rho^D)\lsup{*}}
\def(\rho^D)\lsup{T}{(\rho^D)\lsup{T}}
The discovery of additional Higgs bosons such as $H$, $A$, $H^\pm$ and
$H^{\pm\pm}$ would give direct evidence for an extended Higgs sector.
As discussed in Section~\ref{specialforms} there are many possibilities for the
decay branching ratios of these particles.
The ongoing searches at LHC
rely on specific production and decay
mechanisms that occupy only a part of the complete model parameter
space. At the ILC, the extended Higgs bosons are produced
in electroweak pair production through cross sections that depend only
on the $SU(2)\times U(1)$ quantum numbers and the mixing angles.
Thus, the reach of the ILC is typically limited to masses less than
$\sqrt{s}/2$, but it is otherwise almost uniform over the parameter space.
\subsection{Neutral Higgs pair production at ILC}
Signals from $HA$ production in the
$bbbb$ and $bb\tau\tau$ channels, in the context of
the MSSM (Type-II 2HDM), were studied
in Refs.~\cite{Aguilar-Saavedra:2001rg,Desch:2004yb}.
A rather detailed detector simulation was performed in \cite{Desch:2004yb},
including all the SM backgrounds
at $\sqrt{s}=500$, 800 and 1000 GeV.
Using a kinematical fit which imposes energy momentum conservation and
under the assumed experimental conditions, a statistical accuracy on
the Higgs boson mass
from 0.1 to 1 GeV is found to be achievable.
The topological cross section of $e^+e^- \to HA \to bbbb$
($e^+e^- \to HA \to \tau\tau bb$) could be determined with a relative
precision of 1.5\% to 7\% (4\% to 30\%).
The width of $H$ and $A$ could also be determined with an
accuracy of 20\% to 40\%, depending
on the mass of the Higgs bosons.
Figure~\ref{FIG:4tau_dist} shows, on the left,
the $\tau^+\tau^-$ invariant mass obtained by a kinematic
fit in $e^+e^- \to HA \to b\bar b \tau^+\tau^-$
for $m_A=140$ GeV and $m_H =150$ GeV, for
$\sqrt{s}=500$ GeV and 500 fb$^{-1}$~\cite{Desch:2004yb}.
The $\tau^+\tau^-\tau^+\tau^-$ and
$\mu^+\mu^-\tau^+\tau^-$ final states would be dominant for the
type X (lepton specific) 2HDM.
When $\sqrt{s}=500$ GeV, assuming an integrated luminosity of
$500$ fb$^{-1}$, one expects to collect 16,000 (18,000)
$\tau^+\tau^-\tau^+\tau^-$ events
in the type X (type II) 2HDM, and 110 (60)
$\mu^+\mu^-\tau^+\tau^-$ events in the same models,
assuming $m_H^{}=m_A^{}=m_{H^\pm}^{}=130$ GeV, $\sin(\beta-\alpha)=1$
and $\tan\beta=10$. These numbers do not change much for $\tan\beta\gtrsim 3$.
It is important to recognize that the four-momenta
of the $\tau$ leptons can be reconstructed by a kinematic fit based on the
known center-of-mass energy and momentum, applying
the collinear approximation to each set of $\tau$ lepton decay
products~\cite{Schael:2006cr,Abdallah:2002qj}.
Figure~\ref{FIG:4tau_dist} shows, on the right,
the two dimensional invariant mass
distribution of the $\tau$ lepton pairs from the neutral
Higgs boson decays as obtained with
a simulation at 500~GeV in which the masses of the neutral Higgs bosons
are taken to be 130 GeV and 170 GeV~\cite{Tsumura}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.42\hsize]{Chapter_Multiple_Higgs/figs/Hee-MSSM-HA2ex.pdf}\
\includegraphics[width=0.46\hsize]{Chapter_Multiple_Higgs/figs/ILC_4T_Mtata1_Mtata2.pdf}
\caption{Left:
Invariant mass reconstruction from the kinematical fit in the process
$e^+e^- \to HA \to b\bar{ b} \tau^+\tau^-$ in the Type-II (MSSM like) 2HDM
for $m_A=140$ GeV and $m_H
=150$ GeV at $\sqrt{s}=500$ GeV and 500 fb$^{-1}$~\cite{Desch:2004yb}.
Right:
Two dimensional distribution of ditau invariant mass
in $e^+e^-\to HA \to \tau^+\tau^- \tau^+\tau^-$ in the Type X (lepton
specific) 2HDM for $m_A =170$ GeV and $m_H=130$ GeV
for $\sqrt{s}=500$ GeV and 500 fb$^{-1}$~\cite{Tsumura}.
}
\label{FIG:4tau_dist}
\end{center}
\end{figure}
In an extended Higgs sector with singlets, it is very common to have lighter Higgs bosons
with suppressed couplings to the Z, but which can be seen at an $e^+e^-$ collider either through direct
production or by decays of the 125~GeV Higgs boson. A specific NMSSM example
that has been studied is the cascade decay $h_1\rightarrow aa \rightarrow (\tau^+\tau^-)(\tau^+\tau^-)$
at the ILC~\cite{Liu:2013gea}. In addition to discovery, the masses can be measured to better than 1\%.
Although the associated Higgs production process $e^+e^- \to HA$ is
a promising one for testing the properties of the extended Higgs sectors, the
kinematic reach is restricted by $m_H + m_A < \sqrt{s}$ and is not
available beyond this limit.
Above the threshold of the $HA$ production, the
associated production processes
$t \bar t \Phi$, $b \bar b \Phi$ and $\tau^+\tau^- \Phi$
($\Phi=h, H, A$) could be used~\cite{Djouadi:1992gp,Djouadi:1991tk}.
In particular, for $b \bar b \Phi$ and $\tau^+\tau^- \Phi$,
the mass reach is extended almost up to the collision energy.
The cross sections for these processes
are proportional to the Yukawa interaction,
so they directly depend on the type of Yukawa coupling
in the 2HDM structure. In the MSSM or the Type II 2HDM (in the Type I 2HDM),
these processes are enhanced (suppressed) for large $\tan\beta$ values.
In Type X 2HDM, only the $\tau^+\tau^- H/A$ channels
could be significant while only $b \bar b H/A$ channels would be
important in Type I and Type Y 2HDMs. These reactions can then
be used to discriminate
the type of the Yukawa interaction.
\subsection{Charged Higgs boson production}
At the ILC, charged Higgs bosons
are produced in pairs in $e^+e^- \to H^+H^-$~\cite{Komamiya:1988rs}. The cross
section is a function only of $m_{H^\pm}$ and is independent of the
type of Yukawa interaction in the 2HDM. Therefore, as in the
case of the $HA$ production, the study of the final state channels
can be used to determine the type of Yukawa interaction.
When $m_{H^\pm} > m_t + m_b$, the main decay mode is $tb$ in Type I,
II and Y, while in Type X the main decay mode is $\tau\nu$ for
$\tan\beta >2$.
When $H^\pm$ cannot decay into $tb$, the main decay mode is
$\tau\nu$ except in Type Y for large $\tan\beta$ values.
For $m_{H^\pm} < m_t -m_b$, the charged Higgs boson can also be studied
via the decay of top quarks $t \to b H^\pm$ in 2HDMs except
in Type X 2HDM case with $\tan\beta >2$.
In the MSSM, a detailed simulation study of this reaction
has been performed for the final state
$e^+e^- \to H^+H^- \to t \bar b \bar t b$ for $m_{H^\pm}=300$ GeV
at $\sqrt{s}=800$ GeV~\cite{Battaglia:2001be}.
The final state is 4 $b$-jets together with 4 non-$b$-tagged jets. Assuming
an integrated luminosity of 1 ab$^{-1}$, a mass resolution of
approximately 1.5\% can be achieved (Figure~\ref{fig:tbtb} (left)).
The decay mode $tbtb$ can also
be used to determine $\tan\beta$, especially for relatively small
values ($\tan\beta < 5$), where the production rate of the signal
strongly depends on this parameter.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.45\hsize]{Chapter_Multiple_Higgs/figs/e3017_fig3.pdf}
\
\includegraphics[width=0.53\hsize]{Chapter_Multiple_Higgs/figs/eebthpm_1000_MH.pdf}
\caption{
Left: Fitted charged Higgs boson mass in $H^+H^- \to (t \bar{ b})(\bar{ t }b)$
in the MSSM, with $m_{H^\pm}$ = 300 GeV, measured at the ILC
at CM energy 800 GeV with 1 ab$^{-1}$ of data.
The background is shown by the dark histogram~\cite{Battaglia:2001be}.
Right: Differential distribution of the reconstructed Higgs mass for the
signal $e^+e^-\to b \bar{ t} H^+ + t \bar{b} H^-\to t \bar{ t} b \bar{ b}$
and the background $e^+e^- \to t \bar{ t }g^\ast\to t \bar{ t} b \bar{ b}$
in the MSSM or the Type II 2HDM~\cite{Moretti:2003cd}.
}
\label{fig:tbtb}
\end{center}
\end{figure}
The pair production is kinematically limited to relatively light charged
Higgs bosons with $m_{H^\pm} < \sqrt{s}/2$.
When $m_{H^\pm} > \sqrt{s}/2$, one can make use of
the single production processes
$e^+e^- \to t \bar b H^+$,
$e^+e^- \to \tau \bar \nu H^+$,
$e^+e^- \to W^- H^+$,
$e^+e^- \to H^+ e^- \nu$ and
their charge conjugates.
The cross sections for the first two of these processes are
directly proportional to the square of the Yukawa coupling constants. The
others are one-loop induced.
Apart from the pair production rate, these single production processes
strongly depend on the type of Yukawa interaction in the 2HDM
structure.
In general, their rates are small and quickly suppressed for larger
values of $m_{H^\pm}$.
They can therefore be used only in limited parameter regions where $m_{H^\pm}$
is just above the pair-production threshold
and $\tan\beta$ is either very large or
very small.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[t]
\includegraphics[width=0.7\hsize]{Chapter_Multiple_Higgs/figs/tanb_error_totalonly_bw.pdf}
\caption{
Estimates of the 1 $\sigma$ statistical upper and lower bounds on $\tan\beta$
from ILC measurements, for an MSSM model with $m_{H^\pm} \sim m_A = 200$~GeV,
assuming $\sqrt{s}=500$ GeV and 2000 fb$^{-1}$ of data, from
\cite{Gunion:2002ip}. The quantity plotted is the relative error,
$\Delta\tan\beta/\tan\beta$.}
\label{totalonly}
\end{figure}
In Ref.~\cite{Moretti:2003cd}, a simulation study for the process
$e^+e^- \to t \bar b H^- + b \bar t H^+ \to 4b + jj+\ell + p_T^{\rm
miss}$ ($\ell=e$, $\mu$) has been done for $m_{H^\pm}$ just above
the pair production threshold $m_{H^\pm} \simeq\sqrt{s}/2$.
It is shown that this process provides a significant
signal of $H^\pm$ in a relatively small region just above
$\sqrt{s}/2$, for very large or very small values of $\tan\beta$,
assuming a high $b$-tagging efficiency. The reconstructed $H^+$ mass
distribution is shown in the right-hand side of Fig.~\ref{fig:tbtb}.
\section{Measurements of $\tan\beta$ at the ILC}
In multi-Higgs models, mixing angles between bosons with the same quantum numbers are
important parameters. In the CP-conserving two Higgs doublet model, there are two mixing angles
$\alpha$ and $\beta$, where $\alpha$ is introduced to diagonalize the mass matrix of the CP-even scalar states,
and $\tan\beta$ is defined as the ratio of the vacuum expectation values of the two Higgs doublets, in the basis diagonalizing
the charged and CP-odd scalar states. All coupling constants associated with the Higgs bosons,
i.e. the couplings of $h$, $H$, $A$ and $H^\pm$ to gauge bosons, fermions and themselves,
depend on these mixing angles.
The information on $\sin(\beta-\alpha)$ ($\cos(\beta-\alpha)$) can be directly extracted
from the precision measurement of the couplings of the SM-like boson $h$ (the extra Higgs boson $H$)
to weak gauge bosons, $hVV$ ($HVV$).
%
At the LHC, the SM-like coupling $hVV$ ($VV=WW$ and $ZZ$) is being measured, and the current
data indicate $\sin^2(\beta-\alpha) \simeq 1$ within errors of order 10--20\%.
At the ILC, the $hWW$ and $hZZ$ couplings can be measured precisely to the percent level or better.
%
When $\sin(\beta-\alpha)$ is precisely determined, all the Yukawa couplings $h f \overline f$ and $H f \overline f$
are a function of $\tan\beta$, so that one can extract $\tan\beta$ by precise measurements
of the Yukawa interactions.
%
The $\tan\beta$ dependences in the Yukawa couplings are different for each type of Yukawa interaction~\cite{Barger:1989fj,Aoki:2009ha,Su:2009fz,Logan:2009uf}.
In the Type-II 2HDM, the $\tan\beta$ dependences are large for Yukawa interactions of $H$ and $A$ with down type
fermions, such as $H b \overline b$, $A b\overline b$, $H \tau^+\tau^-$ and $A \tau^+\tau^-$
($Y_{Hbb, Abb} \sim m_b\tan\beta$, $Y_{H\tau\tau, A\tau\tau} \sim m_\tau\tan\beta$ ),
while in the Type-X (lepton specific) 2HDM the Yukawa couplings of $H$ or $A$
to charged leptons are sensitive to $\tan\beta$ ($Y_{H\tau\tau, A\tau\tau} \sim m_\tau \tan\beta$).
In Fig.~\ref{FIG:Bbb} the branching ratios of $h\to b\overline b$, $H\to b \overline b$ and $A\to b\overline b$
are shown as a function of $\tan\beta$ for a fixed value of $\sin^2(\beta-\alpha)=1$, $0.99$ and $0.98$
in the Type-II 2HDM (MSSM)~\cite{Kanemura:2013eja}. In Fig.~\ref{FIG:Btautau},
similar figures for the branching ratios of $h\to\tau^+\tau^-$, $H\to \tau^+\tau^-$
and $A\to\tau^+\tau^-$ are shown in the Type-X (lepton specific) 2HDM.
\begin{figure}[t]
\centering
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeII_Bb_100.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeII_Bb_99.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeII_Bb_98.pdf}
\caption{The decay branching ratios as a function of
$\tan\beta$ for a fixed $\sin^2(\beta-\alpha)$ for $h\to b\bar
b$ (black curves), $H\to b\bar b$ (red curves), and $A\to b\bar
b$ (blue curves) in the Type-II 2HDM. From left to right, $\sin^2(\beta-\alpha)$ is
taken to be $1$, $0.99$, and $0.98$.
The solid (dashed) curves denote the case with $\cos(\beta-\alpha) \leq 0$
($\cos(\beta-\alpha) \geq 0$).
}
\label{FIG:Bbb}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeX_Btau_100.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeX_Btau_99.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeX_Btau_98.pdf}
\caption{The decay branching ratios are shown as a function of
$\tan\beta$ with a fixed value of $\sin^2(\beta-\alpha)$ for $h\to
\tau\tau$ (black curves), for $H\to \tau\tau$ (red curves), and
for $A\to \tau\tau$ (blue curves) in the Type-X 2HDM.
From left to right, $\sin^2(\beta-\alpha)$ is taken to be $1$, $0.99$,
and $0.98$.
The solid (dashed) curves denote the case with $\cos(\beta-\alpha) \leq
0$ ($\cos(\beta-\alpha) \geq 0$).
}
\label{FIG:Btautau}
\end{figure}
In Refs.~\cite{Barger:2000fi,Gunion:2002ip} methods using the production and decay of the $H$ and $A$ have been
studied in the context of the MSSM. Since the masses of the $H$ and $A$ can
be measured by the invariant mass distributions in an appropriate decay mode in $e^+e^-\to HA$, the branching ratios can be predicted as a function of $\tan\beta$.
Thus one can extract $\tan\beta$ by measuring the branching ratios of $H$ and $A$. Since the $\tan\beta$
dependence of the branching ratio is large in small $\tan\beta$ regions, this method is useful for small $\tan\beta$.
A second method~\cite{Barger:2000fi} is based on the measurement of the total decay widths of the $H$ and $A$.
For large $\tan\beta$ values, the total decay widths are dominated by the $b\overline b$ and $\tau \tau$ decay modes in the Type-II and Type-X 2HDMs,
respectively, whose partial widths are proportional to $(\tan\beta)^2$. Therefore, $\tan\beta$ can be extracted using this method
in large $\tan\beta$ regions.
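The scaling behind this width-based method can be made explicit with a simple leading-order estimate (a sketch, not a full error propagation): since the dominant partial widths grow as $\tan^2\beta$,
\begin{equation}
\Gamma_{H,A} \propto \tan^2\beta
\quad\Longrightarrow\quad
\frac{\Delta\tan\beta}{\tan\beta} \simeq \frac{1}{2}\,\frac{\Delta\Gamma_{H,A}}{\Gamma_{H,A}},
\end{equation}
so a total width measured to, say, 20\% would translate into a roughly 10\% statistical determination of $\tan\beta$ in this regime.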
In addition to these two methods, a new method using the precision measurement of the SM-like Higgs boson $h$ has been proposed
in Ref.~\cite{Kanemura:2013eja}. This can be applied to the case where $\sin^2(\beta-\alpha) $ is smaller than unity through the $\tan\beta$ dependences
in the Yukawa couplings for $h$.
%
In the limit of $\sin^2(\beta-\alpha)=1$, the Yukawa couplings for the SM-like Higgs boson $h$ are identical to the SM ones,
so that there is no $\tan\beta$ dependence in them.
However, if $\sin^2(\beta-\alpha)$ turns out to be slightly smaller than unity in future precision measurements,
then the Yukawa couplings for $h$ can also depend on $\tan\beta$ significantly. For example, for the Type-II 2HDM
\begin{eqnarray}
Y_{hb\overline b} &\sim& \sin(\beta-\alpha) - \tan\beta \cos(\beta-\alpha) , \\
Y_{h\tau\tau} &\sim& \sin(\beta-\alpha) - \tan\beta \cos(\beta-\alpha),
\end{eqnarray}
and for the Type-X 2HDM
\begin{eqnarray}
Y_{hb\overline b} &\sim& \sin(\beta-\alpha) + \cot\beta \cos(\beta-\alpha) , \\
Y_{h\tau\tau} &\sim& \sin(\beta-\alpha) - \tan\beta \cos(\beta-\alpha).
\end{eqnarray}
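As a rough illustration of the size of this effect, consider an assumed benchmark point (not one of the fits discussed below) with $\sin^2(\beta-\alpha)=0.99$, $\cos(\beta-\alpha)=-0.1$ and $\tan\beta=10$ in the Type-II 2HDM:
\begin{equation}
Y_{hb\overline b} \sim \sin(\beta-\alpha) - \tan\beta\,\cos(\beta-\alpha) \simeq 0.995 + 1.0 \simeq 2,
\end{equation}
i.e.\ even a percent-level departure of $\sin^2(\beta-\alpha)$ from unity can shift the $h b \overline b$ coupling by a factor of order unity at large $\tan\beta$, which is why precision measurements of $h$ decays retain sensitivity to $\tan\beta$.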
At the ILC, the main decay modes of $h$ can be measured precisely to the few percent level.
The precision measurement of the decay of $h$ can be used to determine $\tan\beta$ for the case with $\sin(\beta-\alpha)<1$.
In Fig.~\ref{FIG:2HDM-II}, the numerical results for the sensitivities of the
$\tan\beta$ measurements are shown for the Type-II 2HDM~\cite{Kanemura:2013eja}.
The production cross section and the number of the signal events are evaluated for
$m_H=m_A=200$ GeV with $\sqrt{s} =500$ GeV and ${\cal L}_{\rm int}=250$ fb$^{-1}$. The acceptance ratio
of the $4b$ final states in the $e^+e^-\rightarrow HA$ signal process is set to 50\%.
The results for the three methods are shown: the 1 $\sigma$ (solid) and
2 $\sigma$ (dashed) sensitivities from the branching ratios of $H$ and $A$, their total widths, and
the branching ratio of $h$ are plotted as the red, blue and black curves, respectively.
The parameter $\sin^2(\beta-\alpha) $ is set to 1 (left), 0.99 (middle) and 0.98 (right)
for $\cos(\beta-\alpha)<0$.
In Fig.~\ref{FIG:2HDM-X}, the sensitivities to $\tan\beta$ are shown for the case of
the Type-X 2HDM, where the channels $H\to \tau^+\tau^-$ and $A\to\tau^+\tau^-$
are the main decay modes~\cite{Kanemura:2013eja}.
With or without the assumption of $\sin^2(\beta-\alpha)=1$, the total width measurement of
$H$ and $A$ is a useful probe for the large $\tan\beta$ regions.
For the smaller $\tan\beta$ regions, the branching ratio measurement of $H$ and $A$
can probe $\tan\beta$. For $\sin^2(\beta-\alpha)=0.99$ and $0.98$, the measurement of
the branching ratio of $h\to\tau^+\tau^-$ can give good $\tan\beta$ sensitivity over a wide range of
$\tan\beta$.
Here, comments on the $\tan\beta$ measurements for the other 2HDM types are given.
In the Type-I 2HDM, the Yukawa coupling constants are universally changed from those in the SM.
In the SM-like limit, $\sin(\beta-\alpha)=1$, the Yukawa interactions for
$H$ and $A$ become weak for $\tan\beta > 1$.
As for the $\tan\beta$ measurement at the ILC, the method using the
total width of $H$ and $A$ is not applicable, because the absolute value of the decay
width is too small compared to the detector resolution.
Without the SM-like limit, the branching ratio measurement of $H$ and $A$
using the fermionic decay modes may be difficult, because the bosonic decay
modes $H\to WW$ and $A\to Zh$ become important.
Furthermore, the decays of $h$ are almost unchanged from the SM because
there is no $\tan\beta$ enhancement.
Thus, the $\tan\beta$ determination in the Type-I 2HDM seems to be
difficult even at the ILC.
In the Type-Y 2HDM, the $\tan\beta$ sensitivity at the ILC would be
similar to that of the Type-II 2HDM, because the Yukawa interactions of
the neutral scalar bosons with the bottom quarks are enhanced by
$\tan\beta$ in the same way.
\begin{figure}[tb]
\centering
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeII_dTanB_100.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeII_dTanB_99_Negative.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeII_dTanB_98_Negative.pdf}
\caption{Sensitivities to the $\tan\beta$ measurement by the three
methods in the Type-II 2HDM.
From left to right, $\sin^2(\beta-\alpha)$ is taken to be $1$, $0.99$,
and $0.98$, with $\cos(\beta-\alpha) \le 0$.
Estimated $\Delta\tan\beta/\tan\beta$ using the branching ratio of
$H/A\to b\bar b$ (red curves), the total width of $H/A$ (blue curves),
and the branching ratio of $h\to b\bar b$ (black curves) are plotted as
a function of $\tan\beta$.
The solid curves stand for $1\sigma$ sensitivities, and the dashed
curves for $2\sigma$.
For $HA$ production, $m_H^{}=m_A^{}=200$ GeV with $\sqrt{s}=500$ GeV and
${\cal L}_{\rm int}=250$~fb$^{-1}$ are assumed.
For the $h\to b\bar b$ measurement, $\Delta{\cal
B}/{\cal B} = 1.3\%$ ($1\sigma$) and $2.7\%$ ($2\sigma$)
are used.
}~\label{FIG:2HDM-II}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeX_dTanB_100.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeX_dTanB_99_Negative.pdf}
\includegraphics[height=4.8cm]{Chapter_Multiple_Higgs/figs/TypeX_dTanB_98_Negative.pdf}
\caption{The same as Fig.~\ref{FIG:2HDM-II}, but with $\tau\tau$ decay modes
are used for the analysis in the Type-X 2HDM.
From left to right, $\sin^2(\beta-\alpha)$ is taken to be $1$, $0.99$,
and $0.98$, with $\cos(\beta-\alpha) \leq 0$.
For ${\cal B}^h_{\tau\tau}$, $\Delta{\mathcal B}/{\mathcal B} = 2\%
$ ($1\sigma$) and $5\%$ ($2\sigma$) are assumed.
}
\label{FIG:2HDM-X}
\end{figure}
\chapter{Cross Section Times Branching Ratio Measurements I \label{sid:chapter_sigma_times_br_i}}
The measurement accuracies of the cross section times branching ratio ($\sigma\cdot BR$)
for Higgs decay to $b\bar{b}$, $c\bar{c}$, $gg$, $WW^*$ and $\tau^+\tau^-$ are
described in this chapter.
\section{$h\rightarrow b\bar{b},\ c\bar{c},\ gg$}
\subsection{250 GeV and 350 GeV}
The measurement accuracies of the cross section times branching ratio $\Delta(\sigma\cdot BR)$
for Higgs decays to $b\bar{b}$, $c\bar{c}$ and gluons were studied in the ILD and SiD LOI's~\cite{Aihara:2009ad,Abe:2010aa}
at $\sqrt{s}=250$ GeV assuming $m_h = 120$~GeV. A comprehensive study
at 250 GeV and 350 GeV with $m_h = 120$~GeV was reported in Ref.~\cite{Ono:2013higgsbr},
which is presented below.
At these energies the Higgsstrahlung process ($e^+e^-\rightarrow Zh)$ is the dominant contribution to
the Higgs production. Therefore, the event signatures of 4-jet ($q\bar{q}h$) and 2-lepton ($e^+e^-$ or $\mu^+\mu^-$)+2-jet ($\ell\bar{\ell}h$)
were studied in addition to missing energy + 2-jet ($\nu\bar{\nu}h$) events.
In the case of the 4-jet analysis, the particles in the event were forced into four jets,
from which the dijet pairs for the $h$ and $Z$ candidates were selected
as the pairs which minimized the dijet mass $\chi^2$ for $Z$ and $h$ bosons.
The background events were rejected by cuts on the number of tracks for each jet,
the maximum scaled jet mass ($y_{max}$) needed to cluster as four jets, the thrust,
the thrust angle and the Higgs production angle. A kinematically constrained fit was
applied to the four jets to improve background rejection. Finally, the likelihood ratio (LR) was
derived from the thrust, $\cos\theta_{thrust}$, the minimum angle between all the jets,
the number of particles in the Higgs candidate jets, and the fitted $Z$ and Higgs masses.
The cut position to select 4-jet candidates was chosen to maximize the signal significance.
The background fractions after all cuts are 80\% $q\bar{q}q\bar{q}$ and 20\% $q\bar{q}$ at 250 GeV, and
60\% $q\bar{q}q\bar{q}$, 30\% $q\bar{q}$ and 10\% $t\bar{t}$ at 350 GeV.
In the case of the 2-lepton + 2-jet mode an event must have an $e^+e^-$ or $\mu^+\mu^-$ pair with mass
consistent with the $Z$, and the mass of everything else must be consistent with the $h$. Additionally,
cuts on the production angle of the $Z$ and the lepton pair recoil mass were applied
to improve the signal to noise ratio.
In the analysis of the missing energy+2-jet mode all visible objects were forced
into two jets, and the four vector sum of the two jets had to have a $P_T$
and mass consistent with the Higgs. In contrast to the 1 TeV study, the recoil mass calculated from the two jets was
required to be consistent with the $Z$ mass because the Higgsstrahlung process is dominant at these energies; in addition
this cut was effective in reducing backgrounds from non-Higgs four fermion processes. The likelihood ratio (LR)
was formed from the recoil mass, the number of particles, the jet momentum, the jet pair mass and
the minimum of the scaled jet mass for forced 2-jet clustering.
With 250 fb$^{-1}$ at 250 GeV (250 fb$^{-1}$ at 350 GeV), the signal significance, $S/\sqrt{S+B}$, is
47.9 (66.4) for $\nu\bar{\nu}h$, 32.3 (47.1) for $q\bar{q}h$, 22.4 (16.7) for $e^+e^-h$ and
28.2 (19.2) for $\mu^+\mu^-h$.
In order to evaluate the flavor content of the selected events, the flavor likeness of dijet events
was calculated by LCFIPlus and fitted by a template consisting of $h\rightarrow b\bar{b}$, $h\rightarrow c\bar{c}$, $h\rightarrow gg$,
other Higgs decays and the standard model processes. Pseudo experiments
were performed with the fraction of $b\bar{b}$, $c\bar{c}$ and $gg$ as free parameters, and $\Delta \sigma\cdot BR$ was determined by the widths of the fitted distribution. The results are summarized in
Table~\ref{table:bb-cc-gg-sigma-br}.
\thisfloatsetup{floatwidth=\hsize,capposition=top}
\begin{table}[!h]
\caption[Sensitivity to $\sigma\cdot BR$ for $h\rightarrow b\bar{b}, c\bar{c}$, $gg$]
{
\label{table:bb-cc-gg-sigma-br}
Summary of the sensitivity to
$\sigma\cdot BR$ for Higgs decay to $b\bar{b}$, $c\bar{c}$, $gg$
at 250 GeV with 250 fb$^{-1}$ and $P(e^{-})/P(e^{+}) = -80\%/+30\%$
and
350 GeV with 250 fb$^{-1}$ and $P(e^{-})/P(e^{+}) = -80\%/+30\%$.
$m_h=120$~GeV was used for this analysis.
}
\begin{center}
\begin{tabular}{lllllll}
\toprule
Energy & channel & missing+2-jet & 4-jet & $e^+e^-$+2-jet & $\mu^+\mu^-$+2-jet & Combined \\
\midrule
250 GeV &
${\Delta\sigma\cdot BR \over \sigma\cdot BR} (h\rightarrow b\bar{b})$ &
1.7 & 1.5 & 3.8 & 3.3 & 1.0 \\
& ${\Delta\sigma\cdot BR \over \sigma\cdot BR} (h\rightarrow c\bar{c})$ &
11.2 & 10.2 & 26.8 & 22.6 & 6.9 \\
& ${\Delta\sigma\cdot BR \over \sigma\cdot BR} (h\rightarrow gg)$ &
13.9 & 13.1 & 31.3 & 33.0 & 8.5 \\
\midrule
350 GeV &
${\Delta\sigma\cdot BR \over \sigma\cdot BR} (h\rightarrow b\bar{b})$ &
1.4 & 1.5 & 5.3 & 5.1 & 1.0 \\
& ${\Delta\sigma\cdot BR \over \sigma\cdot BR} (h\rightarrow c\bar{c})$ &
8.6 & 10.1 & 30.5 & 30.9 & 6.2 \\
& ${\Delta\sigma\cdot BR \over \sigma\cdot BR} (h\rightarrow gg)$ &
9.2 & 13.7 & 35.8 & 33.0 & 7.3 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
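Treating the four channels as statistically independent, the quoted combined precisions can be cross-checked by an inverse-variance (quadrature) combination of the per-channel relative errors. The short sketch below is illustrative, not the analysis code, and uses the 250 GeV $h\rightarrow b\bar{b}$ row of the table:

```python
# Inverse-variance combination of independent relative errors (in %).
# Channel values are the 250 GeV h -> bb row of the table above.
def combine(precisions):
    """Return the combined relative error of independent measurements."""
    return sum(1.0 / p**2 for p in precisions) ** -0.5

channels = [1.7, 1.5, 3.8, 3.3]  # missing+2-jet, 4-jet, ee+2-jet, mumu+2-jet
print(round(combine(channels), 1))  # reproduces the quoted combined ~1.0
```

Applied to the other rows, the same combination agrees with the quoted combined values to within a few percent.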
The $\sigma\cdot BR$ accuracies for a Higgs with 125~GeV mass were obtained by scaling the number
of signal events according to the branching ratio while keeping the number of background events
the same. From this extrapolation
${\Delta\sigma\cdot BR \over \sigma\cdot BR}$ for $h\rightarrow b\bar{b}, c\bar{c},$ and $gg$
were estimated to be 1.2\%, 8.3\% and 7.0\%, respectively, assuming 250~fb$^{-1}$ at $\sqrt{s}=250$~GeV\cite{Junping-keisuke-fitting}.
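The rescaling procedure used for this extrapolation can be sketched as follows; the event counts and branching-ratio values below are hypothetical placeholders, not the numbers used in the cited study:

```python
# Sketch of the m_h extrapolation: scale the signal yield by the ratio of
# branching ratios, keep the background fixed, and recompute the counting
# precision sqrt(S + B) / S.  All numbers below are hypothetical.
def precision(S, B):
    """Relative statistical error on sigma*BR from a counting measurement."""
    return (S + B) ** 0.5 / S

S120, B = 10000.0, 2000.0        # hypothetical yields for m_h = 120 GeV
br_ratio = 0.58 / 0.68           # approximate BR(h->bb) at 125 vs 120 GeV
S125 = S120 * br_ratio           # background left unchanged
print(precision(S120, B), precision(S125, B))
```

The smaller $b\bar{b}$ branching ratio at 125~GeV degrades the precision, consistent with the direction of the shift from 1.0\% to 1.2\% quoted above.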
The $\nu_{e}\bar{\nu}_{e}h$ $WW$-fusion channel was studied in the $h\rightarrow b\bar{b}$ mode at $\sqrt{s}=250$~GeV
using the ILD full simulation\cite{durig-jenny-lcws2012}.
From a $\chi^{2}$ fit of the missing mass distribution, the contributions from the
$WW$-fusion channel and the Higgsstrahlung channel were separated. A measurement
accuracy of ${\Delta\sigma_{\nu\bar{\nu}h}\cdot BR \over \sigma_{\nu\bar{\nu}h}\cdot BR}=0.11$ was obtained with 250 fb$^{-1}$ assuming
$P(e^{-})/P(e^{+})=-80\%/+30\%$ and $m_h=126$~GeV.
\subsection{500 GeV}
The Higgs decay to $b\bar{b}$ in the process $e^+e^-\rightarrow \nu\bar{\nu}h$ at 500 GeV was studied
by ILD\cite{durig-jenny-lcws2012} using full simulation with $m_h=125$~GeV.
In order to remove pile-up background particles, the anti-$k_t$ jet algorithm was employed
with the jet size parameter $R=1.5$.
Events with an isolated muon or electron were removed, and events with 2 $b$-tagged jets were selected.
The visible energy and missing $P_T$ were required to be consistent with $\nu\bar{\nu}$ production, and
the recoil mass opposite the dijet was required to be greater than 172~GeV to reject
$Z\rightarrow \nu\bar{\nu}$ events. The dijet mass distribution for selected
events is shown in Figure~\ref{fig:vvh-visible-dijet-mass-500GeV}.
The signal events were selected with 66\% efficiency and a signal-to-noise ratio of 3.7. The main background
was $e^+e^- \rightarrow \nu\bar{\nu}Z$, which is labelled as {\tt 4f\_sznu\_sl} in Figure~\ref{fig:vvh-visible-dijet-mass-500GeV}.
The signal significance for the $h\rightarrow b\bar{b}$ channel was 150 and $\Delta(\sigma\cdot BR)/(\sigma\cdot BR)=0.667\%$.
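For a pure counting measurement these two numbers are directly related: the relative precision is the inverse of the significance, as the small sketch below illustrates.

```python
# For a counting measurement with S signal and B background events,
# significance = S / sqrt(S + B) and the relative error on sigma*BR is
# sqrt(S + B) / S, i.e. exactly the inverse of the significance.
significance = 150.0                   # quoted above for h -> bb at 500 GeV
relative_error = 1.0 / significance
print(f"{100 * relative_error:.3f}%")  # 0.667%, matching the quoted value
```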
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\hsize]{Chapter_Sigma_Times_BR_I/figs/vvh_500GeV_h2bb.pdf}}
\caption{The dijet mass distribution of $e^+e^-\rightarrow \nu\bar{\nu}h\rightarrow \nu\bar{\nu}b\bar{b}$
at $\sqrt{s}=500$~GeV assuming 500 fb$^{-1}$, $P(e^{-})/P(e^{+}) = -80\%/+30\%$,
and $m_h=125$~GeV. }
\label{fig:vvh-visible-dijet-mass-500GeV}
\end{figure}
The decays $h\rightarrow c\bar{c}$ and $gg$ were studied at 500~GeV\cite{Ono:2013higgsdbd}
using the ILD full simulation samples for the ILC TDR.
The analysis strategy is similar to the 1 TeV case in order to select Higgs production via the
$WW$ fusion process $e^+e^-\rightarrow \nu_{e}\bar{\nu_e}h$. However, since
no low-$p_T$ $\gamma\gamma\rightarrow{}$hadrons background was overlaid, the
Durham jet clustering algorithm\cite{Catani:1991} was applied instead of the
$k_t$ algorithm. Events with two jets were selected by cuts on $P_T$, $P_Z$, $P_{max}$,
$N_{charged}$ with efficiencies of 57\%, 46\%, 65\% for $h\rightarrow b\bar{b}$, $c\bar{c}$
and $gg$, respectively. Among the background processes considered, $\nu\bar{\nu}q\bar{q}$
and $\nu\ell q\bar{q}$ were the largest. Flavor composition was determined using
the template method described above. The following sensitivities were obtained assuming 500 fb$^{-1}$ and
$P(e^{-})/P(e^{+})=-80\%/+30\%$:
$\Delta(\sigma\cdot BR)/(\sigma\cdot BR)=0.6\%(b\bar{b})$, $5.2\%(c\bar{c}$) and $5.0\%(gg)$.
The result can be extrapolated to the case of $m_h=125$~GeV by scaling the signal yield by
the total cross section and the branching ratio:
$\Delta(\sigma\cdot BR)/(\sigma\cdot BR)=0.66\%(b\bar{b})$, $6.2\%(c\bar{c}$) and $4.1\%(gg)$.
Note that this result for $h\rightarrow b\bar{b}$
is consistent with the dedicated $h\rightarrow b\bar{b}$ study described earlier.
The results from the $Zh$ process were obtained by extrapolating the 250~GeV full simulation results.
The numbers of signal and background events before template fitting were scaled according
to the cross section, and then they were extrapolated according to the enhanced statistical significance from
the template fitting. As a result, $\Delta\sigma \cdot BR \over \sigma \cdot BR$
with 500 fb$^{-1}$ at 500 GeV with $(-80\%, +30\%$) polarization was estimated
to be 1.8\%, 13\%, and 11\% for $h\rightarrow b\bar{b}, c\bar{c},$ and $gg$ respectively.
\subsection{1 TeV}
The Higgs decays to $b\bar{b}$, $c\bar{c}$, and $gg$ were studied at 1 TeV as one of the detector benchmark
studies for the ILC TDR by the ILD and SiD concept groups. At this energy
the Higgs is produced dominantly by the process $e^{+}e^{-} \rightarrow \nu\bar{\nu}h$.
Therefore, the event signature is a large missing $P_T$ due to un-detected
neutrinos and 2 jets from Higgs decays to $b\bar{b}, c\bar{c}$, and $gg$, with their
invariant mass consistent with the Higgs. To minimize the effect of the low $P_T$
hadron events, which are produced at an average rate of 4.1 events per bunch crossing at 1 TeV,
both ILD and SiD employed the $k_t$ jet clustering algorithm with a size parameter $R$
of 1.5 (1.1 in the case of ILD).
After the jet clustering the candidate 2-jet events were selected by cuts on
the visible $P_T$, visible energy, visible mass, the jet production angles,
and the number of tracks. In the case of the SiD analysis, these variables were used to form
Fisher Discriminants implemented in TMVA together with the flavor tagging variables
for $b$ jets and $c$ jets. Fisher discriminants which maximized the significance
for each decay mode were used to obtain the final results. The uncertainties
on the cross section times Higgs branching ratios were determined
from the numbers of signal and background events passing each selection.
A typical Higgs mass distribution in the case of SiD is shown in Figure~\ref{fig:visible-mass-higgs}.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\hsize]{Chapter_Sigma_Times_BR_I/figs/higgs-visible-mass-sid.pdf}}
\caption{The visible mass distribution for the $h\rightarrow b\bar{b}$ analysis
without the visible mass cut for 500 fb$^{-1}$ and $P(e^{-})/P(e^{+}) = -80\%/+20\%$. }
\label{fig:visible-mass-higgs}
\end{figure}
In the case of the ILD analysis\cite{Ono:2013higgsdbd}, a flavor tagging template fitting
was performed to extract $\sigma\cdot BR$ for the different channels. The flavor templates of
$h\rightarrow b\bar{b}$, $c\bar{c}$, $gg$, and background channels were obtained from the flavor
tagging output of the LCFIPlus package.
Taking into account the b-tagging efficiency systematic error of $0.3\%$, the accuracies for 1~ab$^{-1}$ and $P(e^{-})/P(e^{+}) = -80\%/+20\%$ beam polarization
were 0.49\%, 3.9\%, and 2.8\% for $h\rightarrow b\bar{b}$, $c\bar{c}$,
and $gg$ respectively. Following the publication of the ILC TDR, improvements to background rejection were developed~\cite{Junping-keisuke-fitting}, leading to relative
$\sigma\cdot BR$ errors of 0.45\%, 3.1\%, and 2.3\% for $h\rightarrow b\bar{b}$, $c\bar{c}$,
and $gg$ respectively.
\section{$h\rightarrow WW^*$}
\subsection{500 GeV}
The full simulation study of the process $e^+e^- \rightarrow \nu\bar{\nu}h \rightarrow \nu\bar{\nu}WW^*$
was performed using the fully hadronic mode of $WW^*$. In this case the event signature is
4 jets with missing energy and missing momentum, and a 4-jet mass consistent with the Higgs. The 2 jets from the $W^*$ are soft, and the
pile-up low-$P_T$ particles due to $\gamma\gamma$ collisions have to be removed effectively.
To this end, a multivariate analysis (MVA) to identify pile-up particles was employed, using $P_T$ and rapidity.
In the case of charged particles, the closest approach to the interaction point along the beam axis ($Z_0$)
was also used to reduce background contamination. Figure~\ref{fig:Z0-and-MVA-for-BKG-regection} shows the boosted
decision tree (BDT) MVA
response for neutral and charged particles.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[h]
\begin{tabular}{cc}
\includegraphics[width=0.5\hsize]{Chapter_Sigma_Times_BR_I/figs/vvh_500GeV_WW_TMVA_neutral.pdf}&
\includegraphics[width=0.5\hsize]{Chapter_Sigma_Times_BR_I/figs/vvh_500GeV_WW_TMVA_charged.pdf} \\
\end{tabular}
\caption{BDT response for particles from $\nu\bar{\nu}h\rightarrow WW^* \rightarrow q\bar{q}q\bar{q}$
events and low $P_T$ hadron background events for neutral particles (left) and charged particles (right).}
\label{fig:Z0-and-MVA-for-BKG-regection}
\end{figure}
After rejecting background tracks by the MVA,
events with isolated muons or electrons were removed
and anti-$k_t$ jet clustering was employed to select 4-jet events.
Each jet was required to not be tagged as a $b$-jet, and one jet pair
must have its mass consistent with the $W$ with the other jet pair mass
between 11 and 64~GeV.
The 4-jet mass distribution for selected events is shown in Figure~\ref{fig:vvh_500GeV_WW_4jetmass}.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[h]
\centerline{
\includegraphics[width=0.8\hsize]{Chapter_Sigma_Times_BR_I/figs/vvh_500GeV_WW_4jetmass.pdf}
}
\caption{4-jet mass distribution for selected events in the $h\rightarrow WW^*$ study.}
\label{fig:vvh_500GeV_WW_4jetmass}
\end{figure}
The signal selection efficiency was about 43\%.
The major backgrounds were other Higgs decays and the
semi-leptonic channels for $e^+e^-\rightarrow ZZ$ or $WW$.
The signal-to-noise ratio was about 1. For 500 fb$^{-1}$, the signal significance, $S/\sqrt{S+B}$,
was 35 and $\Delta(\sigma\cdot BR)/(\sigma\cdot BR)=2.8\%$.
When combined with an analysis of the semileptonic channel for $h\rightarrow WW^*$~\cite{durig-jenny-lcws2012} the precision improves to
$\Delta(\sigma\cdot BR)/(\sigma\cdot BR)=2.4\%$.
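For a counting measurement these precisions follow, to a good approximation, from statistics alone: with $S/N \approx 1$ the relative error on $\sigma\cdot BR$ is simply the inverse of the significance,
\[
\frac{\Delta(\sigma\cdot BR)}{\sigma\cdot BR} \simeq \frac{\sqrt{S+B}}{S}
= \left(\frac{S}{\sqrt{S+B}}\right)^{-1} \simeq \frac{1}{35} \approx 2.9\%,
\]
consistent with the quoted $2.8\%$ (systematic uncertainties are neglected in this estimate).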
\subsection{1 TeV}
The decay $h\rightarrow WW^*$ was studied at 1 TeV by ILD and SiD for the ILC TDR using the fully hadronic
decay mode of $WW^*$. The signal final state is four jets consistent with $WW^*$,
with total mass consistent with the Higgs mass, and large missing energy
and missing transverse momentum.
In the ILD analysis background from pile-up events was removed by
employing the $k_t$ jet clustering algorithm with $R=0.9$ and $N_{jet}=4$.
Further, the Durham algorithm was applied to force the remaining particles
to be clustered into four jets, which were paired so that one dijet system
had a mass consistent with the $W$, while the other had a mass between 15 and 60~GeV.
To reduce background from $h\rightarrow b\bar{b}$
the $b$-likeness of each jet was required to be low. The signal selection efficiency
was 12.4\%, and the remaining major backgrounds were
4-fermions ($e^+e^-\rightarrow \nu\bar{\nu}q\bar{q}$),
3-fermions ($e\gamma \rightarrow \nu q\bar{q}$)
and other decay channels of the Higgs.
The reconstructed Higgs mass distribution is shown in Figure~\ref{fig:massdistri-higgs-to-ww}.
With 1 ab$^{-1}$ luminosity and a beam
polarization of $P(e^{-}) = -80\%$, $P(e^{+}) = +20\%$, ILD obtained
${\Delta(\sigma\cdot BR) / (\sigma\cdot BR)} = 2.5\%$. SiD obtained a similar result.
By including the semi-leptonic topology for $h\rightarrow WW^*$, and by using the particle-based MVA
technique to better reject pileup, the precision for $h\rightarrow WW^*$ improves to ${\Delta(\sigma\cdot BR) / (\sigma\cdot BR)} = 1.6\%$~\cite{Junping-keisuke-fitting}.
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[h]
\centerline{\includegraphics[width=0.8\hsize]{Chapter_Sigma_Times_BR_I/figs/vvh_1TeV_h2ww.pdf}}
\caption{ILD reconstructed Higgs mass distribution for the $h\rightarrow WW^*$ analysis in the fully hadronic decay channel.
}
\label{fig:massdistri-higgs-to-ww}
\end{figure}
\section{$h\rightarrow \tau^+\tau^- $}
\subsection{250 GeV}
The full simulation samples of the ILD LOI~\cite{Abe:2010aa} were used for the study of $h\rightarrow\tau^{+}\tau^{-}$
in Ref.~\cite{Kawada:2013tautau}. In this study the Higgsstrahlung process ($e^+e^- \rightarrow Zh$) at $\sqrt{s}=250$ GeV
was considered using $m_h=120$~GeV and the $Z$ decay modes $Z\rightarrow l^+l^-$ ($l=e,\mu$) and $Z\rightarrow q\bar{q}$.
In the case of $Z\rightarrow l^+l^-$, events with an $l^+l^-$ mass consistent with the $Z$
were selected, where the $l^+l^-$ tracks were required to come from the IP to reject such tracks from $\tau$ decay.
Particles other than those from the $Z$ were fed to a tau jet finder, which
identified low-mass ($<2$~GeV) jets using particles within 1 radian of an energetic track.
Signal events were required to have a $\tau^+$, a $\tau^-$ and an $l^+l^-$ recoil mass close to the
Higgs mass. The signal events were selected with an efficiency of $47\%$ and
$62\%$ for the $e^+e^-$ and $\mu^+\mu^-$ channels, respectively. The $S/N$ was 1.43 (1.44) for $e^+e^-$ $(\mu^+\mu^-)$,
and the signal significance was 8.0$\sigma$ (8.8$\sigma$) for the $e^+e^-$ ($\mu^+\mu^-$)
channel.
In the case of $Z\rightarrow q\bar{q}$, a tau-jet was formed using
an energetic track and all particles within 0.2 radians of the energetic track. The mass of a tau-jet was required to be less than 2~GeV,
and additional cuts on the $\tau$ energy and isolation were applied to
reduce mis-identified quark jets.
Low-energy charged tracks in a jet were removed one by one until a jet of unit charge
with 1- or 3-prong charged multiplicity was obtained. Once a tau jet pair was found, the kinematics of the tau jet pair were
reconstructed assuming that the visible tau decay products and neutrinos were collinear, and that the missing
momentum of the event was generated only by the neutrinos from the tau decays. Following the reconstruction of the two $\tau$ jets,
$q\bar{q}$ jets were reconstructed by clustering the remaining particles with the Durham jet algorithm.
Variables such as jet mass, energy, and production angle were used together with particle multiplicities and the impact parameters of tracks from
$\tau$ jets to select $Zh\rightarrow q\bar{q}\tau^{+}\tau^{-}$
events.
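A standard way to write the collinear approximation used in this reconstruction is in terms of the momentum fractions $x_{1,2}$ carried by the visible $\tau$ decay products; the missing transverse momentum then determines $x_{1,2}$ and hence the di-tau mass (this is the usual formulation of the technique, shown here as a sketch rather than the exact implementation of the analysis):
\[
\vec{p}_T^{\;\mathrm{miss}} = \left(\frac{1}{x_1}-1\right)\vec{p}_{T,1}^{\;\mathrm{vis}}
 + \left(\frac{1}{x_2}-1\right)\vec{p}_{T,2}^{\;\mathrm{vis}},
\qquad
m_{\tau\tau} \simeq \frac{m_{\mathrm{vis}}}{\sqrt{x_1 x_2}}.
\]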
The efficiency for signal selection was about 0.24 and the $S/N$ was 1.85. With an integrated luminosity of 250~fb$^{-1}$,
the signal significance was 25.8$\sigma$.
If we combine the results for $Z\rightarrow l^+l^-$ and $Z\rightarrow q\bar{q}$, the
significance is 28.4$\sigma$, which corresponds to a measurement accuracy of $\Delta(\sigma\cdot BR)/(\sigma\cdot BR)=3.5\%$.
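The combined significance is the quadrature sum of the individual channels, and the accuracy is approximately its inverse:
\[
\sqrt{8.0^2 + 8.8^2 + 25.8^2} \approx 28.4\,\sigma,
\qquad
\frac{\Delta(\sigma\cdot BR)}{\sigma\cdot BR} \simeq \frac{1}{28.4} \approx 3.5\%.
\]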
Table~\ref{tab:zh-to-tautau-kawada} shows the extrapolation of this result to the case of $m_h=125$~GeV,
where it was assumed that the signal selection efficiency is unchanged.
\thisfloatsetup{floatwidth=\hsize,capposition=top}
\begin{table}[!h]
\caption{Relative error on $\sigma\cdot BR$ at $\sqrt{s}=250$~GeV for $h\rightarrow \tau^+\tau^-$ assuming $m_h=125$~GeV,
250 fb$^{-1}$ luminosity and beam polarization $P(e^{-})=-80\%$ and $P(e^+)=+30\%$. The results were obtained by
scaling the errors for $m_h=120$~GeV.}
\label{tab:zh-to-tautau-kawada}
\begin{center}
\begin{tabular}{c c c | c c }
\toprule
$Z\rightarrow e^+e^-$ & $Z\rightarrow \mu^+\mu^-$ & $Z\rightarrow q\bar{q}$ & Combined &
${\Delta(\sigma\cdot BR)\over (\sigma\cdot BR) }$ \\
\midrule
$6.8\sigma$ & $7.4\sigma$ & $21.9\sigma$ & $24.1\sigma$ & $4.2\%$ \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{500 GeV}
The decay $h\rightarrow \tau^+\tau^-$ was studied at $\sqrt{s}=500$~GeV using the ILD full simulation
with $m_h = 125$~GeV~\cite{Kawada-Seattle-Higgs-tautau}.
At this energy both Higgsstrahlung and $WW$ fusion
processes contribute with comparable weight.
For the Higgsstrahlung process $e^+e^- \rightarrow Zh \rightarrow q\bar{q}h$, methods similar to those used at $\sqrt{s}=250$~GeV
were employed. A signal efficiency of 21.0\% and a precision of ${\Delta(\sigma_{ZH}\cdot BR)\over \sigma_{ZH} \cdot BR} = 5.4\%$ were obtained
for -80\%/+30\% $e^{-}/e^{+}$ polarization and 500 fb$^{-1}$ luminosity.
Further improvement is expected by including the $Z\rightarrow \ell\bar{\ell}$ mode.
In the $WW$ fusion case $e^+e^- \rightarrow \nu_e \bar{\nu}_e h\rightarrow \nu_e \bar{\nu}_e \tau^+ \tau^-$, a jet with mass less than 2~GeV was considered
a $\tau$ jet. The most energetic $\tau^{+}$ and $\tau^{-}$ were combined as the Higgs boson, and cuts
were applied to the tau pair mass and event missing energy. A signal efficiency of 25\% and a precision of ${\Delta(\sigma_{\nu_e \bar{\nu}_e h}\cdot BR)\over \sigma_{\nu_e \bar{\nu}_e h} \cdot BR} = 9.0\%$ were obtained
for -80\%/+30\% $e^{-}/e^{+}$ polarization and 500 fb$^{-1}$ luminosity.
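If the Higgsstrahlung and $WW$ fusion measurements are treated as statistically independent, combining them in inverse quadrature would give, as a purely illustrative estimate (not a result quoted in the analyses),
\[
\frac{\Delta(\sigma\cdot BR)}{\sigma\cdot BR}\bigg|_{\mathrm{comb}}
= \left[\left(5.4\%\right)^{-2} + \left(9.0\%\right)^{-2}\right]^{-1/2} \approx 4.6\%.
\]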
\section{$h\rightarrow ZZ^*$}
A full simulation study of the decay $h\rightarrow ZZ^*$ has been performed using the process
$\epem \rightarrow Zh \rightarrow ZZZ^*$
at \ensuremath{\sqrt{s}}\xspace=250~GeV. This decay has a SM branching ratio of 2.7\% given the Higgs mass of 125~GeV.
The final state is characterized by two on-shell Z bosons and one off-shell Z boson, leading to
a variety of combinations of jets, isolated leptons and missing energy. The analysis is
directed toward topologies where the $Z$ opposite the Higgs boson decays in any manner, $Z\to q\bar{q},\ l^+l^-,\ \nu\bar{\nu}$,
while the Higgs decays without missing energy, $h\to ZZ^*$ with each $Z$ decaying to $q\bar{q}$ or $l^+l^-$, $l=e,\mu$.
The datasets used for this analysis are shown in Table~\ref{sid:benchmarking:tab:vvHdatasets}.
\begin{table}
\caption{\label{sid:benchmarking:tab:vvHdatasets}
Simulated data samples used for the $h \to ZZ^*$ analysis.
}
\begin{center}
\begin{tabular}{c l l}\toprule
Process & $P(e^{-})/P(e^{+})$ & $N_{Events}$ \\ \midrule
$f^+f^-h\,\ h \rightarrow ZZ^*$ &+80\%/-30\%&120,012\\\midrule
All SM background mix&+80\%/-30\%&2,058,374\\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Event reconstruction for $ h \to ZZ^*$}
Events are classified as 6-jet or 4-jet, depending on whether the visible
energy in the event is greater or less than 140~GeV; here the term
``jet'' can also refer to an isolated electron or muon.
In the low visible energy signal events
we expect a 4-jet topology if the $Z$ and $Z^{*}$ decay visibly. One pair of jets
must have a mass consistent with the $Z$ mass. Events that have opposite signed electrons
or muons with a mass consistent with the $Z$ mass are unlikely to come from the
WW background. Because of large missing energy and momentum from the
invisible $Z$ decay, it is unlikely that the reconstructed $Z$ bosons are back-to-back
and so we cut on the angle between them. Cutting on the number of tracks helps
to remove much of the two-photon background.
The high visible energy signal events are largely true six jet events with all $Z$ bosons decaying
visibly. Backgrounds from $ZZ$ and $WW$ decays can be suppressed using the Durham jet clustering
$y_{34}$ and $y_{56}$ variables. All jet pairs are tested and the pair most consistent with
the $Z$ mass is selected. Then from the remaining jets, the next pair most consistent with
the mass of the $Z$ is found and the remaining pair is taken as coming from the $Z^{*}$.
Each $Z$ is then paired with the $Z^{*}$ to see which one gives a mass most consistent
with coming from a 125 GeV Higgs. The analysis then proceeds similarly to the
4 jet analysis using this pair of $ZZ^*$.
Before applying an MVA selection, preselection cuts are applied separately for
$E_{\text{vis}}<140$~GeV and $E_{\text{vis}}>140$~GeV. The preselection cuts exclude only regions where
there is almost no signal. Events are preselected based on the Higgs topology being studied using the
criteria shown in Table~\ref{sid:benchmarking:tab:hvvpresel}.
\begin{table}[h!]
\caption{\label{sid:benchmarking:tab:hvvpresel} Overview of the preselections for the different
Higgs decay modes. The cuts as well as the efficiencies for signal and background events are shown.}
\begin{center}
\begin{small}
\begin{tabular}{l l c c} \toprule
Higgs decay & Preselection cuts & Signal eff. & Background eff. \\ \midrule
$h \to ZZ^*$ ($E_{\text{vis}}<140$~GeV) & \parbox{4cm}{
$25 < p^{T}_{\text{vis}} < 70$~GeV \\
$95. < M^{\text{higgs}}_{\text{vis}} < 140.$~GeV \\
$|\cos(\theta_{\text{jet}})|< 0.90$ \\
$N_{\text{PFO}} > 5$ \\
$y_{\text{34}} > 0.$ \\
$E_{\text{Z}} > 120$~GeV
} & & \\
\midrule
$h \to ZZ^*$ ($E_{\text{vis}}>140$~GeV) & \parbox{4cm}{
$90. < M^{\text{higgs}}_{\text{vis}} < 160.$~GeV \\
$|\cos(\theta_{\text{jet}})|< 0.90$ \\
$N_{\text{PFO}} > 5$ \\
$y_{\text{34}} > 0.$ \\
$E_{\text{Z}} > 120$~GeV \\
$|\text{thrust}| < 0.98$
} & & \\
\midrule
Both & \parbox{4cm}{
} & 77\%& $1.5 \times 10^{-2}$\\
\bottomrule
\end{tabular}
\end{small}
\end{center}
\end{table}
\newpage
\subsection{Multi-Variate Analysis}
After the preselection, multivariate methods, as implemented in TMVA, are then
used to maximize the significance (${S} / \sqrt{S+B}$) of the selection. They
are trained using 50\% of the signal and background events and done separately for the
different polarizations and integrated luminosities. The cuts on the
Fisher discriminant which maximize the significance for each decay mode are used
to obtain the final results. The input variables for the multivariate methods are:\\
\begin{itemize}
\item{visible mass of the event}
\item{the visible energy, mass and transverse momentum}
\item{$b$-likeness from $b$-tag flavor tagging values}
\item{$c$-likeness from $c$-tag flavor tagging values}
\item{number of high-energy electrons}
\item{Higgs mass (mass of the reconstructed $ZZ^*$)}
\item{reconstructed $Z$ energy}
\item{reconstructed $Z^{*}$ energy}
\item{cosine of the reconstructed $Z$ polar angle}
\item{cosine of the reconstructed $Z^{*}$ polar angle}
\item{reconstructed $Z$ mass}
\item{reconstructed $Z^{*}$ mass}
\item{angle between the reconstructed $Z$ and $Z^{*}$ in the plane perpendicular to the beam axis}
\item{event thrust magnitude}
\item{number of charged tracks}
\item{number of identified electrons}
\item{number of identified muons}
\item{Durham jet clustering $y_{34}$ value}
\item{Durham jet clustering $y_{56}$ value}
\item{lepton pair (PDG $\mathrm{ID}_1 = -\mathrm{ID}_2$) mass closest to $m(Z)$}
\item{jet pair mass closest to $m(W)$}
\item{sum of the absolute differences of the best $W$ jet pair mass w.r.t.\ $m(W)$}
\end{itemize}
The flavor tagging is used as implemented in the LCFIPlus package which uses boosted decision trees
on vertexing quantities to determine b-tag and c-tag probabilities for bottom and charm jets respectively.
It is trained using samples of four-jet events from $ZZ^{*} \to
\ensuremath{\mathrm{\PQb}\mathrm{\PAQb}}\xspace, \ensuremath{\mathrm{\PQc}\mathrm{\PAQc}}\xspace$~and~$\qqbar$ at \ensuremath{\sqrt{s}}\xspace = 350~GeV and the tagging is accordingly
applied to all signal and background samples.
\begin{figure}[htbp]
\includegraphics[width=0.90\textwidth]{Chapter_Sigma_Times_BR_II/figs/rejBvsS-pdf-p8g16.pdf}
\caption{Rejection of background vs. signal for selecting Higgs boson decays to
$ZZ^*$ from the BDT, BDTG, Fisher, Likelihood and CUTS multivariate methods.\label{sid:benchmarking:tmvaprobs}}
\end{figure}
The performance of the various MVA methods is shown in Figure~\ref{sid:benchmarking:tmvaprobs}.
It is found that the BDT method significantly outperforms the other methods.
The BDT output distributions for the signal and background training and test samples
are shown in Figure~\ref{sid:benchmarking:vvHsig}.
\begin{figure}[htbp]
\includegraphics[width=0.90\textwidth]{Chapter_Sigma_Times_BR_II/figs/overtrain_BDT-pdf-p8g16.pdf}
\caption{
The multi-variate BDT output for the signal ($ h \to ZZ^*$) and background for the training samples and test samples (points).\label{sid:benchmarking:vvHsig}}
\end{figure}
The composition of the samples of events passing all selections of the analysis are shown in Table~\ref{sid:benchmarking:tab:vvHcomp}
for the polarization P(\Pem) = +80\%, P(\Pep) = -30\% and 250~\ensuremath{\mathrm{fb}^{-1}}\xspace. The fraction of events passing all selections is
10.8$\%$ for the signal and 0.0008$\%$ for the background. The significance of the signal after the preselection is 1.0. After applying the
cut on the BDT output, the significance is 5.6.
\begin{table}[h!]
\caption{\label{sid:benchmarking:tab:vvHcomp}
Composition of the events passing all analysis selections for the polarizations P(\Pem) = +80\%, P(\Pep) = -30\%
and an integrated luminosity of 250~\ensuremath{\mathrm{fb}^{-1}}\xspace collected by SiD at a center of mass energy of 250~GeV.}
\begin{center}
\begin{tabular}{l r r r r}\toprule
& $ h\rightarrow ZZ^*$ \\
& \multicolumn{1}{c}{(events)} \\
\midrule
$\epem \to 2~\text{fermions}$ & 50 \\
$\epem \to 4~\text{fermions}$ & 462 \\
$\epem \to 6~\text{fermions}$ & 0 \\
$\gamgam \to X$ & 0 \\
$\gamma e^+ \to X$ & 0 \\
$e^- \gamma \to X$ & 0 \\
$qq h \to ZZ^{*}$ & 68 \\
$ee h,\mu\mu h \to ZZ^{*}$& 24 \\
$\tau\tau h \to ZZ^{*}$ & 3 \\
$\nu\nu h \to ZZ^{*}$ & 49 \\
\bottomrule
\end{tabular}
\end{center}
\end{table}
\subsection{Results for $ f^+f^-h \to ZZ^*$}
The uncertainties on the cross sections times Higgs branching fractions, $\Delta ( \sigma \times BR )$, are determined from
the numbers of signal and background events passing each selection. For 250~\ensuremath{\mathrm{fb}^{-1}}\xspace of 250~GeV \epem collisions
with P(\Pem) = +80\%, P(\Pep) = -30\% in the SiD detector, this benchmark indicates that a precision of
18\% can be obtained.
\chapter{Cross Section Times Branching Ratio Measurements II \label{sid:chapter_sigma_times_br_ii}}
\input{Chapter_Sigma_Times_BR_II/hzz.tex}
\section{$h\rightarrow \gamma\gamma$}
Fast Monte Carlo studies of $e^+e^- \to Zh \to f\bar{f}\gamma\gamma,\, \ f=q,\nu$ at $\sqrt{s}=250$~GeV~\cite{Boos:2000bz} and
$e^+e^- \to \nu \bar{\nu}h \to \nu \bar{\nu}\gamma\gamma$ at $\sqrt{s}=1000$~GeV~\cite{Barklow:2003hz} have been supplemented recently
with a full simulation study of $e^+e^- \to f\bar{f}h \to f\bar{f}\gamma\gamma,\, \ f=q,\nu$ at $\sqrt{s}=250$~GeV and $500$~GeV~\cite{Calancha:2013gm}.
These studies indicate that the ILC can measure $\sigma\cdot BR(h\to \gamma \gamma)$ with an accuracy of 34\% using $e^+e^- \to Zh$
at $\sqrt{s}=250$~GeV assuming 250~fb$^{-1}$. The process $e^+e^- \to \nu \bar{\nu}h \to \nu \bar{\nu}\gamma\gamma$ yields
errors of 23\% for 500~fb$^{-1}$ at $\sqrt{s}=500$~GeV and 8.5\% for 1000~fb$^{-1}$ at $\sqrt{s}=1000$~GeV.
\section{$h\rightarrow \mu^+\mu^- $}
The decay $h\rightarrow \mu^+\mu^- $ has been studied using $e^+ e^- \rightarrow Zh \rightarrow q\bar{q} \mu^+\mu^-$ at $\sqrt{s}=250$~GeV~\cite{Aihara:2009ad}
and $e^+ e^- \rightarrow \nu \bar{\nu} h \rightarrow \nu \bar{\nu} \mu^+\mu^- $ at $\sqrt{s}=1000$~GeV~\cite{Behnke:2013lya}.
This decay has a SM branching ratio of 0.02\%. The very small event rate at the ILC can
be compensated somewhat by the excellent $\delta (1/\ensuremath{p_\mathrm{T}}\xspace) \sim$~2--\SI{5e-5}{(GeV/c)^{-1}} charged particle momentum resolution of the ILD and SiD detectors.
At $\sqrt{s}=250$~GeV the largest background is $e^+ e^- \rightarrow ZZ \to q\bar{q} \mu^+\mu^-$. Following all cuts an error of 91\% for $\sigma\cdot BR(h\to \mu^+\mu^-)$
was obtained in Ref.~\cite{Aihara:2009ad} for 250~fb$^{-1}$ assuming a 120~GeV Higgs mass.
Scaling to a Higgs mass of 125~GeV this error becomes 100\%.
Figure~\ref{fig:mumumass} shows the
reconstructed muon pair mass distributions for signal and background after all cuts at $\sqrt{s}=1000$~GeV. At this center of mass energy the largest backgrounds
following all cuts are $\epem\to \nu_e\bar{\nu}_e\mpmm$, $\epem\to W^+W^- \to \nu_\mu\bar{\nu}_\mu\mpmm $, and $\gamma\gamma\to W^+W^-\to \nu_\mu\bar{\nu}_\mu\mpmm $. With 1000~fb$^{-1}$
an error of 31\% was obtained in Ref.~\cite{Behnke:2013lya}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\linewidth]{Chapter_Sigma_Times_BR_II/figs/SignalMumuMass.pdf}\ \includegraphics[width=0.47\linewidth]{Chapter_Sigma_Times_BR_II/figs/BgndMumuMass.pdf}
\end{center}
\caption{ Muon pair mass for $e^+ e^- \rightarrow \nu \bar{\nu} h \rightarrow \nu \bar{\nu} \mu^+\mu^- $ at $\sqrt{s}=1000$~GeV (left) and for all
Standard Model background (right) following all cuts. The plots are normalized to 1000~fb$^{-1}$ luminosity.
}
\label{fig:mumumass}
\end{figure}
\section{Invisible Higgs Decays}
The $h$ decay to invisible final states, if any, can be measured by
looking at the recoil mass under the condition that nothing observable
is recoiling against the $Z$ boson.
Higgs portal models predict such decays and provide a unique opportunity
to access dark matter particles~\cite{Englert:2011yb}.
The main background is $e^+e^- \to ZZ$ followed by one $Z$ decaying into
a lepton pair or quark pair, and the other into a neutrino pair. With an integrated
luminosity of 250\,fb$^{-1}$ at $\sqrt{s} = 250$\,GeV, the ILC can set
a 95\%~CL limit on the invisible branching ratio of
4.8\%
using the golden $Z \to \mu^+\mu^-$ mode alone~\cite{ref:2012onoc}. Using
other modes including $Z \to q\bar{q}$, we can improve this significantly to
0.9\%~\cite{ref:2012yamamoto}. Assuming a luminosity of 1150\,fb$^{-1}$ at $\sqrt{s} = 250$\,GeV
the 95\%~CL limit improves to 0.4\%.
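The improvement with integrated luminosity is consistent with simple $1/\sqrt{L}$ statistical scaling:
\[
0.9\% \times \sqrt{\frac{250\,\mathrm{fb}^{-1}}{1150\,\mathrm{fb}^{-1}}} \approx 0.4\%.
\]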
\section{Top Yukawa Coupling Measurement}
The
cross section for the process $e^+e^- \to t\bar{t}h$
is
significantly enhanced near the threshold due to the
bound-state effects between $t$ and $\bar{t}$
\cite{Dittmaier:1998dz,Dawson:1998ej,Belanger:2003nm,Denner:2003zp,You:2003zq,Farrell:2005fk,Farrell:2006xe}.
The effect is made
obvious in the right-hand plot of Fig.~\ref{fig:sigtth}. This enhancement
implies that
the measurement of the top Yukawa coupling
might be possible already at $\sqrt{s} = 500$~GeV~\cite{Juste:2006sv}.
A serious simulation study at $\sqrt{s} = 500$~GeV
was performed for the first time, with the QCD bound-state effects
consistently taken into account for both signal and background cross sections,
in \cite{Yonamine:2011jg}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.47\linewidth]{Chapter_Sigma_Times_BR_II/figs/sigtth.pdf}\
\includegraphics[width=0.47\linewidth]{Chapter_Sigma_Times_BR_II/figs/mtt_tth.pdf}
\end{center}
\caption{
Left: Cross section for the $e^+e^- \to t\bar{t}h$ process
as a function of $\sqrt{s}$, together with those
of background processes, $e^+e^- \to t\bar{t}Z$,
$\to t\bar{t}g^*$, and $\to t\bar{t}$.
Right: The invariant mass distribution of the
$t\bar{t}$ system from the $e^+e^- \to t\bar{t}h$
process with and without the non-relativistic QCD correction.
}
\label{fig:sigtth}
\end{figure}
\begin{figure}
\begin{center}
\includegraphics[width=0.25\hsize]{Chapter_Sigma_Times_BR_II/figs/Diagram-ttH-a.pdf}
\includegraphics[width=0.25\hsize]{Chapter_Sigma_Times_BR_II/figs/Diagram-ttH-b.pdf}
\includegraphics[width=0.25\hsize]{Chapter_Sigma_Times_BR_II/figs/Diagram-ttH-c.pdf}
\end{center}
\caption{
Three diagrams contributing to the $e^+e^- \to t\bar{t}h$ process.
The $h$-off-$t$ or $\bar{t}$ diagrams, (a) and (b), contain the top
Yukawa coupling while the $h$-off-$Z$ diagram (c) does not.}
\label{fig:tthdiagrams}
\end{figure}
The $e^+e^- \to t\bar{t}h$ reaction takes place through the three
diagrams shown in Fig.~\ref{fig:tthdiagrams}.
As shown in Fig.~\ref{fig:sigtth} (left), the contribution from the
irrelevant $h$-off-$Z$ diagram is negligible at $\sqrt{s} = 500$\,GeV,
thereby allowing us to extract the top Yukawa coupling $g_t$ by just
counting the number of signal events.
By combining the 8-jet and 6-jet-plus-lepton modes of $e^+e^- \to t\bar{t}h$
followed by $h \to b\bar{b}$, the analysis of \cite{Yonamine:2011jg}
showed that a measurement of the
top Yukawa coupling to $\Delta g_t / g_t = 14.1\%$ is possible for
$m_h=120$~GeV
with polarized electron and positron beams of $(P_{e^-}, P_{e^+})=(-0.8,+0.3)$
and an integrated luminosity of $500$~fb$^{-1}$. This result obtained with a
fast Monte Carlo simulation has just recently been corroborated by a full
simulation~\cite{Tabassam:2012it}.
When extrapolated to $m_h=125$~GeV, and taking into account a recent analysis improvement,
the corresponding expected precision would be $\Delta g_t/g_t = 14.0\%$.
It should be noted that a small increase in the center of mass
energy beyond $\sqrt{s} = 500$\,GeV can increase the cross section for
$e^+e^- \to t\bar{t}h$ significantly, as can be seen in Fig.~\ref{fig:sigtth}. By increasing the center of mass
energy to $\sqrt{s} = 520$\,GeV, for example, the cross section for $e^+e^- \to t\bar{t}h$ can be doubled and hence the precision can be improved to $9.9\%$
assuming $500$~fb$^{-1}$.
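The gain from doubling the cross section follows from statistics alone, since the relative coupling error scales as the inverse square root of the signal yield:
\[
\frac{\Delta g_t}{g_t}\bigg|_{520\,\mathrm{GeV}} \simeq \frac{14.0\%}{\sqrt{2}} \approx 9.9\%.
\]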
The $14$\% accuracy on the top quark Yukawa coupling expected at $\sqrt{s}=500$\,GeV can be significantly improved by the data taken at 1000\,GeV,
thanks to the larger cross section and the smaller background from $e^+e^- \to t\bar{t}$. Fast simulations at
$\sqrt{s}=800$\,GeV showed that we would be able to determine the top Yukawa coupling to $6$\% for
$m_h=120$\,GeV, given an integrated luminosity of $1$\,ab$^{-1}$ and residual background uncertainty
of $5$\%~\cite{Juste:1999af,Gay:2006vs}.
As described in the Detector Volume of the ILC TDR~\cite{Behnke:2013lya}
full simulations just recently completed by SiD and ILD
show that the top Yukawa coupling can indeed be measured to a
statistical precision of $3.1$\% for $m_h = 125\,$GeV
with~$1$\,ab$^{-1}$.
With luminosities of $1600$~fb$^{-1}$ at $500$~GeV and
$2500$~fb$^{-1}$ at $1000$~GeV, the statistical precision can be improved to $2.0\%$.
\section{Higgs Self Coupling Measurement}
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[t]%
\begin{subfigure}[b]{0.35\hsize-0.5\columnsep}%
\includegraphics[width=\hsize]{Chapter_Sigma_Times_BR_II/figs/ZHHdiagram.pdf}%
\end{subfigure}%
\hspace{\columnsep}%
\begin{subfigure}[b]{0.38\hsize-0.5\columnsep}%
\includegraphics[width=\hsize]{Chapter_Sigma_Times_BR_II/figs/nunuHHdiagram.pdf}%
\end{subfigure}%
\caption{
Relevant diagrams containing the triple Higgs coupling for the two
processes:
$e^+e^-\rightarrow Zhh$ (left) and $e^+e^-\rightarrow\nu_e\overline{\nu}_e hh$ (right).
}
\label{fig:HHHdiagrams}
\end{figure}
\thisfloatsetup{floatwidth=\SfigwFull,capposition=beside}
\begin{figure}[t]
\includegraphics[width=0.8\hsize]{Chapter_Sigma_Times_BR_II/figs/xsec_HHprod_120.pdf}
\caption{
Cross sections for the two processes
$e^+e^-\rightarrow Zhh$ (left) and $e^+e^-\rightarrow\nu_e\overline{\nu}_e hh$
as a function of $\sqrt{s}$
for $m_h=120$~GeV.}
\label{fig:sigHHH}
\end{figure}
The triple Higgs boson coupling can be studied at the ILC through
the processes $e^+e^-\rightarrow Zhh$ and
$e^+e^-\rightarrow\nu_e\overline{\nu}_e hh$. The relevant Feynman diagrams
are shown in Fig.~\ref{fig:HHHdiagrams}~\cite{Djouadi:1999gv}.
The cross sections for the two processes are plotted as a function of
$\sqrt{s}$ for $m_h=120$\,GeV in Fig.~\ref{fig:sigHHH}.
The cross section reaches its maximum of about $0.18$\,fb at around
$\sqrt{s}=500$\,GeV, where it is dominated by the $e^+e^-\rightarrow Zhh$ process.
A full
simulation study of the process $e^+e^- \to Zhh$
followed by $h \to b\bar{b}$ at $\sqrt{s}=500$~GeV has recently been carried out
using the ILD detector~\cite{Tian:2013hhbbbb}.
From the combined result of the three channels corresponding to
different $Z$ decay modes, $Z \to l^+l^-$, $\nu\bar{\nu}$, and $q\bar{q}$,
it was found that the process can be detected with an excess
significance of $4.5\,\sigma$ and the cross section can be
measured to $\Delta \sigma / \sigma = 0.30$ for an integrated
luminosity of $1.6$\,ab$^{-1}$ with beam polarization
$(P_{e^-}, P_{e^+})=(-0.8,+0.3)$. Unlike the $e^+e^- \to t\bar{t}h$ case,
however, the contribution from the background diagrams without the
self-coupling is significant and the relative error on the
self-coupling $\lambda$ is $\Delta \lambda / \lambda = 0.49$
with a proper event weighting to enhance the contribution from
the self-coupling diagram.
When extrapolated to $m_h=125$~GeV, taking into account a $20\%$ relative improvement
expected from a recent preliminary full simulation result including $hh \to b\bar{b}WW^*$ mode,
the precision would be improved to $46\%$.
At $\sqrt{s}=1000$\,GeV, the $e^+e^- \to \nu\bar{\nu}hh$ process will become significant~\cite{Yasui:2002se}. The cross
section for this process is only about $0.07$\,fb, but the sensitivity to the self-coupling is
potentially higher since the contribution from the background diagrams is smaller, leading to the relation
$\Delta \lambda / \lambda \simeq 0.85 \times (\Delta \sigma_{\nu\bar{\nu}hh} / \sigma_{\nu\bar{\nu}hh})$, as
compared to $\Delta \lambda / \lambda \simeq 1.8 \times (\Delta \sigma_{Zhh} / \sigma_{Zhh})$ for the
$e^+e^- \to Zhh$ process at 500\,GeV.
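Applying these sensitivity factors directly to the measured cross-section precision at 500\,GeV, without the event weighting, gives
\[
\frac{\Delta \lambda}{\lambda} \simeq 1.8 \times \frac{\Delta \sigma_{Zhh}}{\sigma_{Zhh}}
= 1.8 \times 0.30 = 0.54,
\]
which the weighting described above improves to the quoted $0.49$.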
The measurement of the
self-coupling has been studied at 1~TeV with full simulation.
That analysis
is described in the Detector Volume of the ILC TDR~\cite{Behnke:2013lya}. The
result, for $2.5$\,ab$^{-1}$ with $(P_{e^-}, P_{e^+})=(-0.8,+0.2)$,
is $\Delta \lambda/\lambda \simeq 0.16$ for $m_h=125$~GeV. This has recently been improved to $13\%$
with the inclusion of the $hh \to b\bar{b}WW^*$ mode~\cite{Kawada:2013hhbbww}.
Further improvements would be possible by adding more decay modes and/or improvements in jet clustering
\footnote{With perfect jet clustering we expect a $40\%$ relative improvement in the self-coupling precision.}.
\section{Cross Section Times Branching Ratio Summary}
The accuracies for all cross section and $\sigma\cdot BR$ measurements
considered in this paper are summarized in \Tref{tab:stimesbrbase} and \Tref{tab:stimesbrupgrade}.
\Tref{tab:stimesbrbase} shows the accuracies assuming running for $3\times 10^7$~s at
the baseline differential luminosity at each of the center of mass energies 250, 500 and 1000~GeV. \Tref{tab:stimesbrupgrade}
gives the accuracies when the luminosities of \Tref{tab:stimesbrbase} are added to $3\times 10^7$~s
of running at the upgraded differential luminosities at each of the three center of mass energies.
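To a good approximation, the accuracies in \Tref{tab:stimesbrupgrade} follow from those in \Tref{tab:stimesbrbase} by $1/\sqrt{L}$ statistical scaling; for example, for $h\to\mu^+\mu^-$ at 250~GeV,
\[
100\% \times \sqrt{\frac{250\,\mathrm{fb}^{-1}}{1150\,\mathrm{fb}^{-1}}} \approx 46.6\%.
\]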
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|}
\hline
$\sqrt{s}$ and ${\cal L}$ & \multicolumn{2}{c|}{250\,fb$^{-1}$ at 250\,GeV}
& \multicolumn{4}{c|}{500\,fb$^{-1}$ at 500\,GeV}
& \multicolumn{3}{c|}{1\,ab$^{-1}$ at 1\,TeV }\\
$(P_{e^-},P_{e^+})$ & \multicolumn{2}{c|}{(-0.8,+0.3)}
& \multicolumn{4}{c|}{(-0.8,+0.3)}
& \multicolumn{3}{c|}{(-0.8,+0.2)} \\
\hline
& $Zh$ & $\nu\bar{\nu}h$
& $Zh$ & $\nu\bar{\nu}h$ & $t\bar{t}h$ & $Zhh$
& $\nu\bar{\nu}h$ & $t\bar{t}h$ & $\nu\bar{\nu}hh$ \\
\hline\hline
$\Delta \sigma / \sigma$ & 2.6\% & - & 3.0\% & - & & 42.7\% & & & 26.3\% \\ \hline
BR(invis.) & $ <$ 0.9 \% & - & - & - & - & & \\ \hline \hline
mode & \multicolumn{9}{c|}{$\Delta (\sigma \cdot BR) / (\sigma \cdot BR)$} \\
\hline
$h \to b\bar{b}$ & 1.2\% & 10.5\% & 1.8\% & 0.7\% & 28\% & & 0.5\% & 6.0\% & \\
$h \to c\bar{c}$ & 8.3\% & - & 13\% & 6.2\% & & & 3.1\% & & \\
$h \to gg$ & 7.0\% & - & 11\% & 4.1\% & & & 2.3\% & & \\
$h \to WW^*$ & 6.4\% & - & 9.2\% & 2.4\% & & & 1.6\% & & \\
$h \to \tau^+\tau^-$ & 4.2\% & - & 5.4\% & 9.0\% & & & 3.1\% & & \\
$h \to ZZ^*$ & 18\% & - & 25\% & 8.2\% & & & 4.1\% & & \\
$h \to \gamma\gamma$ & 34\% & - & 34\% & 23\% & & & 8.5\% & & \\
$h \to \mu^+\mu^-$ & 100\% & - & - & - & & & 31\% & & \\
\hline
\end{tabular}
\caption{Expected accuracies for cross section and cross section times branching ratio
measurements for the $125\,$GeV $h$ boson assuming running for $3\times 10^7$~s at
the baseline differential luminosity for each center of mass energy. For invisible decays of the Higgs,
the number quoted is the 95\% confidence upper limit on the branching ratio.
}
\label{tab:stimesbrbase}
\end{center}
\end{table}
\begin{table}[t]
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|}
\hline
$\sqrt{s}$ and ${\cal L}$ & \multicolumn{2}{c|}{1150\,fb$^{-1}$ at 250\,GeV}
& \multicolumn{4}{c|}{1600\,fb$^{-1}$ at 500\,GeV}
& \multicolumn{3}{c|}{2.5\,ab$^{-1}$ at 1\,TeV }\\
$(P_{e^-},P_{e^+})$ & \multicolumn{2}{c|}{(-0.8,+0.3)}
& \multicolumn{4}{c|}{(-0.8,+0.3)}
& \multicolumn{3}{c|}{(-0.8,+0.2)} \\
\hline
& $Zh$ & $\nu\bar{\nu}h$
& $Zh$ & $\nu\bar{\nu}h$ & $t\bar{t}h$ & $Zhh$
& $\nu\bar{\nu}h$ & $t\bar{t}h$ & $\nu\bar{\nu}hh$ \\
\hline\hline
$\Delta \sigma / \sigma$ & 1.2\% & - & 1.7\% & - & & 23.7\% & & & 16.7\% \\ \hline
BR(invis.) & $ <$ 0.4 \% & - & - & - & & & - & & \\ \hline \hline
mode & \multicolumn{9}{c|}{$\Delta (\sigma \cdot BR) / (\sigma \cdot BR)$} \\
\hline
$h \to b\bar{b}$ & 0.6\% & 4.9\% & 1.0\% & 0.4\% & 16\% & & 0.3\% & 3.8\% & \\
$h \to c\bar{c}$ & 3.9\% & - & 7.2\% & 3.5\% & & & 2.0\% & & \\
$h \to gg$ & 3.3\% & - & 6.0\% & 2.3\% & & & 1.4\% & & \\
$h \to WW^*$ & 3.0\% & - & 5.1\% & 1.3\% & & & 1.0\% & & \\
$h \to \tau^+\tau^-$ & 2.0\% & - & 3.0\% & 5.0\% & & & 2.0\% & & \\
$h \to ZZ^*$ & 8.4\% & - & 14\% & 4.6\% & & & 2.6\% & & \\
$h \to \gamma\gamma$ & 16\% & - & 19\% & 13\% & & & 5.4\% & & \\
$h \to \mu^+\mu^-$ & 46.6\% & - & - & - & & & 20\% & & \\
\hline
\end{tabular}
\caption{Expected accuracies for cross section and cross section times branching ratio
measurements for the $125\,$GeV $h$ boson, assuming a run of $3\times 10^7$~s at
the sum of the baseline and upgrade differential luminosities for each center of mass energy.
For invisible decays of the Higgs,
the number quoted is the 95\% confidence upper limit on the branching ratio.
}
\label{tab:stimesbrupgrade}
\end{center}
\end{table}
\chapter{Summary \label{sid:chapter_summary}}
A summary of all model independent coupling precisions is given in
\Tref{tab:summodelindglobalfit0p1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcccc}
& ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr
$\sqrt{s}$ (GeV) & 250 & 250+500 & 250+500+1000 & 250+500+1000 \cr
L (fb$^{-1}$) & 250 & 250+500 & 250+500+1000 & 1150+1600+2500 \cr \hline
$\gamma\gamma$ & 18 \% & 8.4 \% & 4.0 \% & 2.4 \% \cr
$gg$ & 6.4 \% & 2.3 \% & 1.6 \% & 0.9 \% \cr
$WW$ & 4.8 \% & 1.1 \% & 1.1 \% & 0.6 \% \cr
$ZZ$ & 1.3 \% & 1.0 \% & 1.0 \% & 0.5 \% \cr
$t\bar t$ & -- & 14 \% & 3.1 \% & 1.9 \% \cr
$b\bar b$ & 5.3 \% & 1.6 \% & 1.3 \% & 0.7 \% \cr
$\tau^+\tau^-$ & 5.7 \% & 2.3 \% & 1.6 \% & 0.9 \% \cr
$c\bar c$ & 6.8 \% & 2.8 \% & 1.8 \% & 1.0 \% \cr
$\mu^+\mu^-$ & 91\% & 91\% & 16 \% & 10 \% \cr
$\Gamma_T(h)$ & 12 \% & 4.9 \% & 4.5 \% & 2.3 \% \cr
$hhh$ & -- & 83 \% & 21 \% & 13 \% \cr \hline
BR(invis.) & $ <$ 0.9 \% & $ <$ 0.9 \% & $ <$ 0.9 \% & $ <$ 0.4 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies $\Delta g_i/g_i$ for model independent
determinations of the Higgs boson couplings. The theory errors are $\Delta F_i/F_i=0.1\%$. For the invisible
branching ratio, the numbers quoted are 95\% confidence upper limits.}
\label{tab:summodelindglobalfit0p1}
\end{center}
\end{table}
For the purpose of comparing ILC coupling precisions with those of other facilities we present
the coupling errors in \Tref{tab:summatchlhctechnique0p1}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcccc}
& ILC(250) & ILC(500) & ILC(1000) & ILC(LumUp) \cr \hline
$\sqrt{s}$ (GeV) & 250 & 250+500 & 250+500+1000 & 250+500+1000 \cr
L (fb$^{-1}$) & 250 & 250+500 & 250+500+1000 & 1150+1600+2500 \cr \hline
$\gamma\gamma$ & 17 \% & 8.3 \% & 3.8 \% & 2.3 \% \cr
$gg$ & 6.1 \% & 2.0 \% & 1.1 \% & 0.7 \% \cr
$WW$ & 4.7 \% & 0.4 \% & 0.3 \% & 0.2 \% \cr
$ZZ$ & 0.7 \% & 0.5 \% & 0.5 \% & 0.3 \% \cr
$t\bar t$ & 6.4 \% & 2.5 \% & 1.3 \% & 0.9 \% \cr
$b\bar b$ & 4.7 \% & 1.0 \% & 0.6 \% & 0.4 \% \cr
$\tau^+\tau^-$ & 5.2 \% & 1.9 \% & 1.3 \% & 0.7 \% \cr
$\Gamma_T(h)$ & 9.0 \% & 1.7 \% & 1.1 \% & 0.8 \% \cr \hline
$\mu^+\mu^-$ & 91 \% & 91 \% & 16 \% & 10 \% \cr
$hhh$ & -- & 83 \% & 21 \% & 13 \% \cr
BR(invis.) & $ <$ 0.9 \% & $ <$ 0.9 \% & $ <$ 0.9 \% & $ <$ 0.4 \% \cr \hline
$c\bar c$ & 6.8 \% & 2.8 \% & 1.8 \% & 1.0 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies $\Delta g_i/g_i$ of Higgs boson couplings
using, for each coupling, the fitting technique that most closely matches
that used by LHC experiments.
For $g_g,g_{\gamma},g_W,g_Z,g_b,g_t,g_{\tau},\Gamma_T(h)$ the seven parameter
HXSWG benchmark parameterization described in Section~10.3.7 of Ref.~\cite{Dittmaier:2011ti} is used.
For the couplings $g_\mu$ and $g_{hhh}$ and for the limit on the invisible branching ratio, independent analyses are used.
The charm coupling $g_c$ comes from our 10 parameter model independent fit.
All theory errors are $0.1\%$.
For the invisible
branching ratio, the numbers quoted are 95\% confidence upper limits.}
\label{tab:summatchlhctechnique0p1}
\end{center}
\end{table}
In the energy and luminosity scenarios discussed in this paper it was assumed that
the luminosity upgrades at 250 and 500 GeV center of mass energy occurred after
the energy upgrade at 1000 GeV. It is of interest to consider
a scenario where the 250 GeV and 500 GeV luminosity upgrade running occurs before the
energy upgrade to 1000 GeV. This would correspond to the energies and luminosities in \Tref{tab:ecmlumrunsnotev}.
\begin{table}
\begin{center}
\begin{tabular}{lccccccc}
Nickname & Ecm(1) & Lumi(1) & + & Ecm(2) & Lumi(2) & Runtime & Wallplug E \cr
& (GeV) & (fb$^{-1}$) & & (GeV) & (fb$^{-1}$) & (yr) & (MW-yr) \cr \hline
ILC(250) & 250 & 250 & & & & 1.1 & 130 \cr
ILC(500) & 250 & 250 & & 500 & 500 & 2.0 & 270 \cr
ILC500(LumUp) & 250 & 1150 & & 500 & 1600 & 3.9 & 660 \cr
\hline
\end{tabular}
\caption{Energy and luminosities assuming no running at 1 TeV center of mass energy.}
\label{tab:ecmlumrunsnotev}
\end{center}
\end{table}
A summary of all model independent coupling precisions for the case where the 250 GeV and 500 GeV luminosity upgrade running occurs before the
energy upgrade to 1000 GeV is shown in \Tref{tab:summodelindglobalfit0p1notev}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lccc}
& ILC(250) & ILC(500) & ILC500(LumUp) \cr \hline
$\sqrt{s}$ (GeV) & 250 & 250+500 & 250+500 \cr
L (fb$^{-1}$) & 250 & 250+500 & 1150+1600 \cr \hline
$\gamma\gamma$ & 18 \% & 8.4 \% & 4.5 \% \cr
$gg$ & 6.4 \% & 2.3 \% & 1.2 \% \cr
$WW$ & 4.8 \% & 1.1 \% & 0.6 \% \cr
$ZZ$ & 1.3 \% & 1.0 \% & 0.5 \% \cr
$t\bar t$ & -- & 14 \% & 7.8 \% \cr
$b\bar b$ & 5.3 \% & 1.6 \% & 0.8 \% \cr
$\tau^+\tau^-$ & 5.7 \% & 2.3 \% & 1.2 \% \cr
$c\bar c$ & 6.8 \% & 2.8 \% & 1.5 \% \cr
$\mu^+\mu^-$ & 91 \% & 91 \% & 42 \% \cr
$\Gamma_T(h)$ & 12 \% & 4.9 \% & 2.5 \% \cr
$hhh$ & -- & 83 \% & 46 \% \cr \hline
BR(invis.) & $ <$ 0.9 \% & $ <$ 0.9 \% & $ <$ 0.4 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies $\Delta g_i/g_i$ for model independent
determinations of the Higgs boson couplings. The theory errors are $\Delta F_i/F_i=0.1\%$. For the invisible
branching ratio, the numbers quoted are 95\% confidence upper limits.
}
\label{tab:summodelindglobalfit0p1notev}
\end{center}
\end{table}
The facility comparison table in the case where the 250 GeV and 500 GeV luminosity upgrade running occurs before the
energy upgrade to 1000 GeV is shown in \Tref{tab:summatchlhctechnique0p1notev}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lccc}
& ILC(250) & ILC(500) & ILC500(LumUp) \cr \hline
$\sqrt{s}$ (GeV) & 250 & 250+500 & 250+500 \cr
L (fb$^{-1}$) & 250 & 250+500 & 1150+1600 \cr \hline
$\gamma\gamma$ & 17 \% & 8.3 \% & 4.4 \% \cr
$gg$ & 6.1 \% & 2.0 \% & 1.1 \% \cr
$WW$ & 4.7 \% & 0.4 \% & 0.3 \% \cr
$ZZ$ & 0.7 \% & 0.5 \% & 0.3 \% \cr
$t\bar t$ & 6.4 \% & 2.5 \% & 1.4 \% \cr
$b\bar b$ & 4.7 \% & 1.0 \% & 0.6 \% \cr
$\tau^+\tau^-$ & 5.2 \% & 1.9 \% & 1.0 \% \cr
$\Gamma_T(h)$ & 9.0 \% & 1.7 \% & 1.0 \% \cr \hline
$\mu^+\mu^-$ & 91 \% & 91 \% & 42 \% \cr
$hhh$ & -- & 83 \% & 46 \% \cr
BR(invis.) & $ <$ 0.9 \% & $ <$ 0.9 \% & $ <$ 0.4 \% \cr \hline
$c\bar c$ & 6.8 \% & 2.8 \% & 1.5 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies $\Delta g_i/g_i$ of Higgs boson couplings
using, for each coupling, the fitting technique that most closely matches
that used by LHC experiments.
For $g_g,g_{\gamma},g_W,g_Z,g_b,g_t,g_{\tau},\Gamma_T(h)$ the seven parameter
HXSWG benchmark parameterization described in Section~10.3.7 of Ref.~\cite{Dittmaier:2011ti} is used.
For the couplings $g_\mu$ and $g_{hhh}$ and for the limit on the invisible branching ratio, independent analyses are used.
The charm coupling $g_c$ comes from our 10 parameter model independent fit.
All theory errors are $0.1\%$.
For the invisible
branching ratio, the numbers quoted are 95\% confidence upper limits.
}
\label{tab:summatchlhctechnique0p1notev}
\end{center}
\end{table}
A comparison of model independent coupling precisions with and without 1 TeV running
is shown in \Tref{tab:summodelindglobalfit0p1notevvstev}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcc}
& ILC500(LumUp) & ILC(LumUp) \cr \hline
$\sqrt{s}$ (GeV) & 250+500 & 250+500+1000 \cr
L (fb$^{-1}$) & 1150+1600 & 1150+1600+2500 \cr \hline
$\gamma\gamma$ & 4.5 \% & 2.4 \% \cr
$gg$ & 1.2 \% & 0.9 \% \cr
$WW$ & 0.6 \% & 0.6 \% \cr
$ZZ$ & 0.5\% & 0.5 \% \cr
$t\bar t$ & 7.8 \% & 1.9 \% \cr
$b\bar b$ & 0.8 \% & 0.7 \% \cr
$\tau^+\tau^-$ & 1.2 \% & 0.9 \% \cr
$c\bar c$ & 1.5 \% & 1.0 \% \cr
$\mu^+\mu^-$ & 42 \% & 10 \% \cr
$\Gamma_T(h)$ & 2.5 \% & 2.3 \% \cr
$hhh$ & 46 \% & 13 \% \cr \hline
BR(invis.) & $ <$ 0.4 \% & $ <$ 0.4 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies $\Delta g_i/g_i$ for model independent
determinations of the Higgs boson couplings. The theory errors are $\Delta F_i/F_i=0.1\%$. For the invisible
branching ratio, the numbers quoted are 95\% confidence upper limits.
}
\label{tab:summodelindglobalfit0p1notevvstev}
\end{center}
\end{table}
The facility comparison table with and without 1 TeV running
is shown in \Tref{tab:summatchlhctechnique0p1notevvstev}.
\begin{table}[h]
\begin{center}
\begin{tabular}{lcc}
& ILC500(LumUp) & ILC(LumUp) \cr \hline
$\sqrt{s}$ (GeV) & 250+500 & 250+500+1000 \cr
L (fb$^{-1}$) & 1150+1600 & 1150+1600+2500 \cr \hline
$\gamma\gamma$ & 4.4 \% & 2.3 \% \cr
$gg$ & 1.1 \% & 0.7 \% \cr
$WW$ & 0.3 \% & 0.2 \% \cr
$ZZ$ & 0.3 \% & 0.3 \% \cr
$t\bar t$ & 1.4 \% & 0.9 \% \cr
$b\bar b$ & 0.6 \% & 0.4 \% \cr
$\tau^+\tau^-$ & 1.0 \% & 0.7 \% \cr
$\Gamma_T(h)$ & 1.0 \% & 0.8 \% \cr \hline
$\mu^+\mu^-$ & 42 \% & 10 \% \cr
$hhh$ & 46 \% & 13 \% \cr
BR(invis.) & $ <$ 0.4 \% & $ <$ 0.4 \% \cr \hline
$c\bar c$ & 1.5 \% & 1.0 \% \cr
\hline
\end{tabular}
\caption{Summary of expected accuracies $\Delta g_i/g_i$ of Higgs boson couplings
using, for each coupling, the fitting technique that most closely matches
that used by LHC experiments.
For $g_g,g_{\gamma},g_W,g_Z,g_b,g_t,g_{\tau},\Gamma_T(h)$ the seven parameter
HXSWG benchmark parameterization described in Section~10.3.7 of Ref.~\cite{Dittmaier:2011ti} is used.
For the couplings $g_\mu$ and $g_{hhh}$ and for the limit on the invisible branching ratio, independent analyses are used.
The charm coupling $g_c$ comes from our 10 parameter model independent fit.
All theory errors are $0.1\%$.
For the invisible
branching ratio, the numbers quoted are 95\% confidence upper limits.
}
\label{tab:summatchlhctechnique0p1notevvstev}
\end{center}
\end{table}
\chapter{Higgs Theory \label{sid:chapter_theory}}
\section{Introduction: the Higgs mechanism}
Quantum field theory has been enormously successful in describing the
behavior of fundamental point particles and their interactions in a framework
that is consistent with the principles of relativity and quantum mechanics.
Indeed, once these principles are invoked, quantum field theory appears to
be the only consistent framework for incorporating
interacting fundamental point particles. If such a framework is
to be predictive (i.e., dependent only on a finite number of input parameters
that are provided by experimental measurements), then
the properties of such fundamental particles are highly constrained---only
spin 0, spin 1/2 and spin 1 are allowed~\cite{Weinberg:1995mt,Weinberg:1996kr}.
Moreover, if the spin 1 particles are self-interacting,
they must be described by a gauge theory. It is remarkable that this is precisely the
spectrum of fundamental particles that have been observed in nature.
A gauge theory of fundamental self-interacting spin-1 gauge bosons
naively appears to require that gauge bosons should be massless, since
an explicit mass term for the gauge boson in the Lagrangian manifestly
violates the gauge symmetry. However, due to the seminal work of
Brout, Englert~\cite{Englert:1964et} and
Higgs~\cite{Higgs:1964ia,Higgs:1964pj} and subsequent work by Guralnik, Hagen and
Kibble~\cite{Guralnik:1964eu,Guralnik:1965uza,Kibble:1967sv}, a
mass-generation mechanism for gauge bosons that is consistent with the
gauge symmetry was developed. The simplest realization of this
mechanism was subsequently employed by Weinberg, when he incorporated
a self-interacting complex scalar doublet into a gauge theory of
electroweak interactions~\cite{Weinberg:1967tq}. The neutral scalar
of the doublet acquires a vacuum expectation value (vev), which
spontaneously breaks the gauge symmetry and generates mass for the
$W^\pm$ and $Z$ bosons of electroweak theory while leaving the photon
massless. Moreover, by coupling the complex scalar doublet to the
chiral fermions of the Standard Model (where no gauge-invariant mass
terms for the fermions are allowed prior to symmetry breaking), one
can also generate masses for all quarks and charged leptons. In the
Glashow-Weinberg-Salam theory of the electroweak
interactions~\cite{Glashow:1961tr,Weinberg:1967tq,Salam:1968rm}, the
gauge bosons acquire mass via the Higgs mechanism by absorbing three
of the four degrees of freedom of the complex scalar doublet, which
provide for the longitudinal degrees of freedom of the $W^\pm$ and $Z$
bosons. One physical scalar degree of freedom is left over---the
Higgs boson of the Standard Model.
There are other possible dynamics that can be used for achieving a
spontaneously broken gauge theory of the electroweak interactions (via
the Higgs mechanism) in which elementary scalar bosons are not
employed. For example, it is possible to spontaneously break a gauge
theory by introducing a strongly interacting fermion pair that
condenses in the vacuum, in analogy with Cooper pairs of
superconductivity (for a nice review, see Ref.~\cite{King:1994yr}).
However, in the summer of 2012 a new scalar boson
was discovered at the LHC by the ATLAS and CMS
Collaborations~\cite{Aad:2012tfa,Chatrchyan:2012ufa}, whose
properties appear to be consistent (within the experimental errors) with
those expected of the Standard Model Higgs boson~\cite{Aad:2013wqa,Aad:2013xqa,Chatrchyan:2013lba,Chatrchyan:2012jja}. Consequently, it
appears that nature has chosen to realize the Higgs mechanism via
scalar fields that are either elementary or appear elementary at the
electroweak scale. Although the scalar sector need not be a minimal
one, the data seem to favor the existence of one state of the scalar
sector whose properties resemble those of the Standard Model Higgs
boson; any deviations from Standard Model behavior, if they
exist, are likely to be small. Clearly, precision measurements of
the newly discovered scalar state will be critical for establishing
and testing the theory that governs the dynamics of electroweak
symmetry breaking.
\subsection{Vector boson mass generation and the unitarity of $VV\to VV$ scattering ($V=W$ or $Z$)}
Consider the theory of electroweak interactions without the attendant scalar sector. If one attempts
to simply add an explicit mass term to the $W^\pm$ and $Z$ bosons, then the resulting theory would
be mathematically inconsistent. One signal of this inconsistency would be revealed by using the
theory to compute the cross section for the scattering of longitudinally polarized gauge bosons,
$VV \rightarrow VV$ (where $V=W$ or $Z$) at tree-level. Such a calculation would yield a
scattering amplitude whose energy dependence grows with the square of the center of mass energy,
a result that grossly violates unitarity. Such a result would be in violation of one of the sacred
principles of quantum mechanics (which requires that the sum of probabilities can never exceed unity).
It is remarkable that this tree-level unitarity violation can be
mitigated by postulating the existence of an elementary scalar
particle that couples to $W^+ W^-$ and $ZZ$ with coupling strength
$gm_V^2/m_W$ (where $V=W$ or $Z$). This new interaction introduces an
additional contribution to $VV \rightarrow VV$, which exactly cancels
the bad high energy behavior of the scattering amplitude, and leaves a
result that approaches a constant at infinite energy. Thus, one can
reconstruct the Standard Model by imposing tree-level unitarity on all
scattering amplitudes of the
theory~\cite{LlewellynSmith:1973ey,Cornwall:1973tb,Cornwall:1974km}.
Thus, if the newly discovered scalar $h$ is to be interpreted as the
Higgs boson of the Standard Model, one should confirm that
\begin{equation} \label{hvv}
g_{hVV}=\frac{\sqrt{2}\,m_V^2}{v}\,,\qquad g_{hhVV}=\frac{\sqrt{2}\,m_V^2}{v^2}\,,
\end{equation}
where the Higgs vev, $v=174$~GeV, is related to the $W$ mass via $m_W=gv/\sqrt{2}$.
Suppose deviations from eq.~(\ref{hvv}) were uncovered by experimental Higgs studies. Then, one would surmise
that the scalar Higgs sector is not minimal, and other scalar states play a role in achieving tree-level unitarity~\cite{Gunion:1990kf}.
Indeed, one can examine the case of an arbitrary scalar sector and derive unitarity sum rules that
replace eq.~(\ref{hvv}) in the more general Higgs model. We shall impose one constraint on all extended Higgs
sectors under consideration---the scalar multiplets and vevs employed should satisfy the tree-level constraint that
\begin{equation}
\rho\equiv \frac{m_W^2}{m_Z^2\cos^2\theta_W}= 1\,,
\end{equation}
a result that is strongly suggested by precision electroweak
measurements~\cite{ErlerPDG,Baak:2012kk}. For example, consider a CP-conserving
extended Higgs sector with the
property that $\rho=1$ and no tree-level $ZW^\pm\phi^\mp$ couplings (where $\phi^\pm$ are physical charged scalars
that might appear in the scalar spectrum). It then follows that~\cite{Gunion:1990kf}
\begin{Eqnarray}
\sum_i g^2_{h_iVV}&=&\frac{2m_V^4}{v^2}\,,\label{sumv} \\
m_W^2 g_{h_i ZZ}&=&m_Z^2 g_{h_i WW}\,,\label{rel}
\end{Eqnarray}
where the sum in eq.~(\ref{sumv}) is taken over all neutral CP-even scalars $h_i$.
In this case, it follows that $g_{h_i VV}\leq g_{hVV}$ for all $i$ (where $h$ is the Standard Model Higgs boson).
Models that contain only scalar singlets and doublets satisfy the requirements stated above and hence respect
the sum rule given in eq.~(\ref{sumv}) and the coupling relation given in eq.~(\ref{rel}).
However, it is possible to violate $g_{h_i VV}\leq g_{hVV}$
and $m_W^2 g_{h_i ZZ}=m_Z^2 g_{h_i WW}$ if tree-level $ZW^\pm\phi^\mp$
couplings are present. Indeed, in this case, one can show that doubly charged
Higgs bosons must also occur in the model~\cite{Gunion:1990kf}.
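The sum rule in eq.~(\ref{sumv}) can be checked numerically. The sketch below assumes a hypothetical extended sector with two CP-even scalars whose $hVV$ couplings share the SM strength through a single mixing angle; the boson mass and the mixing angles are illustrative values, not taken from the text.

```python
import math

# Illustrative check of the unitarity sum rule, assuming a hypothetical
# extended sector with two CP-even scalars whose hVV couplings share the
# SM strength through a single mixing angle (not a model from the text).
m_V, v = 80.4, 174.0
g_SM = math.sqrt(2.0) * m_V**2 / v            # SM coupling sqrt(2) m_V^2 / v

for theta in (0.0, 0.3, 0.7, math.pi / 2.0):
    g1 = g_SM * math.cos(theta)               # coupling of h_1
    g2 = g_SM * math.sin(theta)               # coupling of h_2
    # Sum rule: sum_i g_{h_i VV}^2 = 2 m_V^4 / v^2, and each |g_i| <= g_SM.
    assert abs(g1**2 + g2**2 - 2.0 * m_V**4 / v**2) < 1e-6
    assert abs(g1) <= g_SM and abs(g2) <= g_SM

print("sum rule saturated for all mixing angles")
```

In this mixing pattern each scalar carries at most the SM coupling strength, illustrating the statement $g_{h_i VV}\leq g_{hVV}$ above.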
\subsection{Chiral fermion mass generation and the unitarity of $VV \rightarrow f\bar{f}$ scattering}
In the Standard Model, left-handed fermions are electroweak doublets and right-handed fermions are
electroweak singlets. A fermion mass term would combine a left-handed and right-handed fermion field, so
that gauge invariance does not allow for explicit fermion mass terms. However, in the Standard Model, it
is possible to couple a left-handed and right-handed fermion field to the scalar doublets. Such interactions
comprise the Yukawa couplings of the Standard Model. When the scalar field acquires a vev, mass terms for
the quarks and charged leptons are generated.
One can repeat the previous analysis by again considering the theory of electroweak interactions
without the attendant scalar sector. If one attempts
to simply add explicit mass terms to the quarks and charged leptons and the $W^\pm$ and $Z$ bosons, then the resulting theory would
again be mathematically inconsistent. One signal of this inconsistency would be revealed by using the
theory to compute the cross section for the scattering of longitudinally polarized gauge bosons into a pair of top quarks,
$VV \rightarrow t\bar{t}$ at tree-level.
Such a calculation would yield a
scattering amplitude whose energy dependence grows with the center of mass energy,
which violates tree-level unitarity.
Once again, the addition of an elementary particle that couples to $W^+ W^-$ and $ZZ$ with coupling strength $gm_V^2/m_W$
and couples to $t\bar{t}$ with coupling strength $gm_t/(2m_W)$ is sufficient to cure the bad high energy behavior.
Thus, if the newly discovered scalar $h$ is to be interpreted as the Higgs boson of the Standard Model,
one should confirm that
\begin{equation} \label{htt}
g_{hVV}g_{hff}=\frac{m^2_V m_f}{v^2}\,,
\end{equation}
for all quarks and charged leptons $f$ (in practice, $f=t,b, c$ and $\tau$ are the most relevant). In models of extended
Higgs sectors, eq.~(\ref{htt}) would be replaced by a unitarity sum rule in which the right-hand side of eq.~(\ref{htt}) would
be the result of summing over multiple Higgs states in the model~\cite{Gunion:1990kf}.
\section{Theoretical structure of the Standard Model Higgs boson}
\subsection{Tree level Higgs boson couplings}
The Higgs sector of the Standard Model (SM), which takes the minimal form,
consists of
one isospin doublet scalar field $\Phi$ with the hypercharge
$Y=1$~\cite{Gunion:1989we}.
The most general SU(2)$\times$U(1)-invariant renormalizable Higgs
potential is given by
\begin{eqnarray}
V(\Phi)= \mu^2 |\Phi|^2 + \tfrac{1}{2}\lambda |\Phi|^4\,.
\end{eqnarray}
The Higgs doublet field is parameterized as
\begin{eqnarray}
\Phi = \left(
\begin{array}{c} \omega^+ \\
v + (h + i z)/\sqrt{2} \end{array}\right),
\end{eqnarray}
where $\omega^\pm$ and $z$ represent the Nambu-Goldstone boson, $h$ is a
physical state, the Higgs boson, and $v=174$ GeV is the vacuum expectation
value (vev) of the Higgs field.
The self-coupling constant $\lambda$ must be positive to guarantee the stability of the vacuum.
Assuming that $\mu^2 < 0$, the shape of the potential resembles a
Mexican hat, and the minimum of the scalar potential occurs
at $\langle \Phi \rangle = v$, where $\mu^2=-\lambda v^2$.
The SU(2)$\times$U(1) electroweak symmetry is then broken down to
U(1)$_{\rm EM}$. Expanding the scalar field around its vacuum
expectation value, the scalar potential immediately yields
the mass and the self-couplings of the Higgs boson $h$,
\begin{eqnarray}
m_h^2 = 2\lambda v^2\,, \qquad \lambda_{hhh} = 3\sqrt{2}\,\lambda
v\,,\qquad \lambda_{hhhh}=3\lambda\,.
\end{eqnarray}
Hence, the Higgs mass and self-couplings are related by
\begin{eqnarray}
\lambda_{hhh} = \frac{3m_h^2}{v\sqrt{2}}\,,\qquad \lambda_{hhhh}=\frac{3m_h^2}{2v^2}\,.
\end{eqnarray}
That is, the Higgs mass is directly related to the dynamics of the
Higgs sector.
In particular, the heavier the Higgs mass the stronger the strength of
the Higgs self-couplings.
Indeed, the observed Higgs mass of 126 GeV implies that
$\lambda_{hhhh}\simeq 0.787$, which implies that the Higgs
dynamics is weakly coupled.
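These relations can be cross-checked numerically. The short sketch below uses the values $m_h = 126$ GeV and $v = 174$ GeV quoted in the text and reproduces the quartic coupling $\lambda_{hhhh}\simeq 0.787$ stated above.

```python
import math

# Cross-check of the Higgs mass/self-coupling relations, using the values
# m_h = 126 GeV and v = 174 GeV quoted in the text.
m_h, v = 126.0, 174.0

# From m_h^2 = 2 lambda v^2:
lam = m_h**2 / (2.0 * v**2)

# Self-couplings in the text's conventions.
lam_hhh = 3.0 * m_h**2 / (math.sqrt(2.0) * v)   # trilinear (GeV)
lam_hhhh = 3.0 * lam                            # quartic (dimensionless)

print(round(lam, 3), round(lam_hhh, 1), round(lam_hhhh, 3))
# lam_hhhh comes out near 0.787, the weakly coupled value quoted above
```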
The Higgs field couples to the weak gauge bosons (W and Z) via the
covariant derivative~\cite{Abers:1973qs},
$|{\cal D}_\mu \Phi |^2$, where ${\cal D}_\mu
= \partial_\mu + \tfrac{1}{2} i g A_\mu^a \tau^a + \tfrac{1}{2} i g' B_\mu Y$. Here,
the $\tau^a$ are the usual Pauli matrices, the electric charge
operator is $Q=\tfrac{1}{2}(\tau^3+Y)$, and $g$ and $g'$ are gauge coupling
constants for SU(2)$_T$ and U(1)$_Y$, respectively. The masses of the
gauge bosons, which are generated by electroweak symmetry breaking via
the Higgs mechanism, are proportional to the neutral scalar field vev,
\begin{eqnarray}
m_W^2 = \tfrac{1}{2} g^2 v^2, \hspace{1cm} m_Z^2 = \tfrac{1}{2} (g^2+g'^2) v^2.
\end{eqnarray}
Electroweak symmetry breaking also generates the $hWW$ and $hZZ$
couplings,
\begin{Eqnarray}
g(hWW) = \frac{1}{\sqrt{2}} g^2 v, \hspace{1cm} g(hZZ) = \frac{1}{\sqrt{2}} (g^2+g'^2) v.
\end{Eqnarray}
Therefore, the gauge boson masses and the couplings to the Higgs boson
are related as noted previously in \eq{hvv}.
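As a consistency check of the mass and coupling formulas above, one can invert the mass relations for the gauge couplings and verify that the $hVV$ couplings reduce to $\sqrt{2}\,m_V^2/v$. The boson masses used below are illustrative round values (an assumption of this sketch), with $v = 174$ GeV as in the text.

```python
import math

# Consistency check of the mass and coupling formulas above. The boson
# masses are illustrative round values (assumed); v = 174 GeV as in the text.
m_W, m_Z, v = 80.4, 91.2, 174.0

# Invert m_W^2 = g^2 v^2 / 2 and m_Z^2 = (g^2 + g'^2) v^2 / 2.
g = math.sqrt(2.0) * m_W / v
gp = math.sqrt(2.0 * m_Z**2 / v**2 - g**2)

# Couplings generated by electroweak symmetry breaking.
g_hWW = g**2 * v / math.sqrt(2.0)
g_hZZ = (g**2 + gp**2) * v / math.sqrt(2.0)

# These reproduce g_hVV = sqrt(2) m_V^2 / v, the relation of eq. (hvv).
assert abs(g_hWW - math.sqrt(2.0) * m_W**2 / v) < 1e-9
assert abs(g_hZZ - math.sqrt(2.0) * m_Z**2 / v) < 1e-9
print(round(g, 3), round(gp, 3))
```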
The Higgs field also couples to quarks and leptons via Yukawa
interactions. For example, the coupling of the Higgs fields to the
three generations of quarks is given by
\begin{equation} \label{smyuk}
-\mathcal{L}_{\rm Yukawa}=Y_U^{0ij}(\overline U_L\lsup{\,0i} U_R^{0j}\Phi^{0\,\ast}
-\overline D_L\lsup{\,0i}
U_R^{0j}\Phi^-)+Y_D^{0ij}(\overline D_L\lsup{\,0i} D_R^{0j}
\Phi^0
+\overline U_L\lsup{\,0i} D_R^{0j}\Phi^{+})+{\rm h.c.}\,,
\end{equation}
where $i,j$ are generation labels, $U^{0}=(u^0,c^0,t^0)$ and
$D^{0}=(d^0,s^0,b^0)$ are the interaction-eigenstate quark fields,
and $Y^0_U$ and $Y^0_D$ are arbitrary
complex $3\times 3$ matrices (the sum over repeated indices is
implied). In \eq{smyuk} we have introduced left and right-handed
quark fields via $Q^0_L\equiv P_LQ^0$ and $Q^0_R\equiv P_R Q^0$ where
$P_{R,L}\equiv\tfrac{1}{2}(1\pm\gamma\lsub{5})$.
Setting the Goldstone boson fields to
zero and writing $\Phi^0=v+h^0/\sqrt{2}$, we identify the
quark mass matrices,
\begin{equation}
M_U^0\equiv vY^0_U\,,\qquad\quad
M_D^0\equiv vY^0_D\,.
\end{equation}
We now determine the quark mass eigenstate fields, $U=(u,c,t)$ and
$D=(d,s,b)$ by introducing the following unitary transformations,
\begin{equation}
U_L= V_L^U U^0_L\,,\qquad U_R= V_R^U U^0_R\,,\qquad
D_L = V_L^D D^0_L\,,\qquad D_R= V_R^D D^0_R\,,
\end{equation}
where $V_L^U$, $V_R^U$, $V_L^D$, and $V_R^D$ are unitary matrices chosen
such that
\begin{equation}
M_U\equiv V_L^{U} M^0_U V_R^{U\,\dagger}={\rm diag}(m_u\,,\,m_c\,,\,m_t)\,,\qquad
M_D\equiv V_L^{D} M^0_D V_R^{D\,\dagger}={\rm diag}(m_d\,,\,m_s\,,\,m_b)\,,
\end{equation}
such that the $m_i$ are the positive quark masses (this is the
\textit{singular value decomposition} of linear algebra).
Having diagonalized the quark mass matrices, the neutral Higgs Yukawa couplings
are automatically flavor-diagonal. That is, if we define
\begin{equation}
Y_U\equiv V_L^{U} Y^0_U V_R^{U\,\dagger}=M_U/v\,,\qquad\quad
Y_D\equiv V_L^{D} Y^0_D V_R^{D\,\dagger}=M_D/v\,,
\end{equation}
then \eq{smyuk} can be rewritten in terms of quark mass eigenstates
as:
\begin{equation} \label{yukmass}
-\mathcal{L}_{\rm Yukawa}=\overline{U}_LY_U
U_R\Phi^{0\,\ast}-\overline{D}_LK^\dagger
Y_UU_R\Phi^-+\overline{U}_LKY_D D_R \Phi^++\overline{D}_L Y_D
D_R\Phi^0+{\rm h.c.}\,,
\end{equation}
where
\begin{equation}
K\equiv V_L^U V_L^{D\,\dagger}\,,
\end{equation}
is the Cabibbo-Kobayashi-Maskawa (CKM) matrix.
Hence the SM possesses no flavor-changing neutral currents (FCNCs) mediated
by neutral Higgs boson exchange at tree-level.
Note that independently of the Higgs sector,
the quark couplings to $Z$ and $\gamma$ are automatically flavor diagonal.
Flavor dependence only enters the quark couplings to the $W^\pm$ via
the CKM matrix.
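The bi-unitary diagonalization above is exactly the singular value decomposition, and this can be demonstrated numerically. The sketch below uses arbitrary random complex matrices standing in for $Y^0_U$ and $Y^0_D$ (purely illustrative, not physical Yukawa textures) and verifies that the mass matrices become diagonal and that $K$ is unitary.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary complex 3x3 Yukawa matrices, standing in for Y^0_U and Y^0_D
# of the text (random numbers, purely illustrative).
Y0_U = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Y0_D = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

# numpy's SVD writes Y0 = U @ diag(s) @ Vh, so the text's bi-unitary
# transformations are V_L = U^dagger and V_R = Vh.
UL, yU, URh = np.linalg.svd(Y0_U)
DL, yD, DRh = np.linalg.svd(Y0_D)
V_L_U, V_R_U = UL.conj().T, URh
V_L_D, V_R_D = DL.conj().T, DRh

# V_L Y0 V_R^dagger is diagonal with non-negative entries.
M_U = V_L_U @ Y0_U @ V_R_U.conj().T
M_D = V_L_D @ Y0_D @ V_R_D.conj().T

# K = V_L^U V_L^{D dagger} is the CKM matrix, unitary by construction.
K = V_L_U @ V_L_D.conj().T

print(np.allclose(M_U, np.diag(yU)),
      np.allclose(M_D, np.diag(yD)),
      np.allclose(K @ K.conj().T, np.eye(3)))
```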
The Yukawa coupling of the Higgs doublets to the
leptons can be similarly treated by
replacing $U\to N$, $D\to E$, $M_U\to 0$, $M_D\to M_E$ and
$K\to\mathds{1}$, where $N=(\nu_e,\nu_\mu,\nu_\tau)$, $E=(e,\mu,\tau)$
and $M_E$ is the diagonal charged lepton mass matrix. In the present
treatment, the right-handed neutrino fields are absent, in which case the
neutrinos are exactly massless. One can accommodate the very small
neutrino masses by including the right-handed neutrino fields and
adding a SU(2)$\times$U(1)-invariant mass term $\overline{N}_L M_N
N_R+{\rm h.c.}$ to \eq{yukmass}. Assuming that the eigenvalues of $M_N$ are much
larger than the scale of electroweak symmetry breaking, one obtains
three very light Majorana neutrino mass eigenstates due to the
seesaw mechanism~\cite{Minkowski:1977sc,GellMann:1980vs,Yanagida:1980xy,Mohapatra:1979ia,Mohapatra:1980yp}.
The very small neutrino masses have almost no impact on Higgs physics.
Consequently, we shall simply treat the neutrinos as massless in this chapter.
In the SM, there is a universal relation between the
masses of the fundamental particles and their couplings to the Higgs boson,
\begin{eqnarray} \label{universal}
\frac{g(hWW)}{\sqrt{2}m_W^2} = \frac{g(hZZ)}{\sqrt{2}m_Z^2} = \frac{y_c}{m_c} = \frac{y_\tau}{m_\tau}=\frac{y_b}{m_b}
=\frac{y_t}{m_t}=\frac{\sqrt{2}\,\lambda_{hhh}}{3m_h^2} = \cdots = \frac{1}{v}.
\end{eqnarray}
This is a unique feature of the SM with one Higgs doublet field.
By accurately measuring the mass and coupling to the Higgs boson independently for each particle,
one can test the mass generation mechanism of the SM by using this relation.
In Fig.~\ref{FIG:c-m}, the Standard Model relation is shown along with expected
precision from the full ILC program for the coupling determinations.
If the Higgs sector takes a non-minimal form, deviations from this
universal relation are expected.
Each non-minimal Higgs sector possesses a specific pattern of
deviations. Thus, if the Higgs couplings can be measured with
sufficient precision, these measurements would provide a way to
distinguish among different models of extended Higgs sectors.
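The universal relation in eq.~(\ref{universal}) can be sketched numerically. In the snippet below the fermion masses are illustrative round values (assumed, not precision inputs from the text), with $v = 174$ GeV in the text's convention.

```python
# Numerical sketch of the universal SM relation coupling/mass = 1/v.
# The fermion masses below are illustrative round values (assumed, not
# taken from the text); v = 174 GeV follows the text's convention.
v = 174.0
masses = {"charm": 1.27, "tau": 1.78, "bottom": 4.18, "top": 173.0}

# Each SM Yukawa coupling is y_f = m_f / v ...
yukawas = {f: m / v for f, m in masses.items()}

# ... so the ratio y_f / m_f is the same constant 1/v for every fermion.
assert all(abs(y / masses[f] - 1.0 / v) < 1e-12 for f, y in yukawas.items())
print("y_f / m_f = 1/v for all fermions")
```

A deviation of any single ratio from $1/v$ would signal a non-minimal Higgs sector, which is exactly what the precision coupling measurements above are designed to detect.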
\begin{figure}[t]
\begin{center}
\includegraphics[width=90mm]{Chapter_Theory/figs/mass-coupling1TeV.pdf}
\caption{ The Standard Model prediction
that the Higgs coupling to each particle is proportional to its mass.
Expected precision from the full ILC program for the coupling determination is also shown.}
\label{FIG:c-m}
\end{center}
\end{figure}
\subsection{Higgs couplings at one-loop}
The Higgs boson $h$ is a charge and color neutral state; hence, it does not couple to photons and gluons at the tree level.
However, beyond the tree level, the couplings $hgg$, $h\gamma\gamma$ and $h\gamma Z$ appear via the dimension-6 operators,
\begin{eqnarray}
\frac{1}{\Lambda^2} |\Phi|^2 F_{\mu\nu} F^{\mu\nu}, \hspace{1cm} \frac{1}{\Lambda^2} |\Phi|^2 G_{\mu\nu} G^{\mu\nu} .
\end{eqnarray}
In the SM, the effective $hgg$ coupling is induced at the one-loop
level via quark-loop diagrams, with the dominant contribution arising
from the top quark loop. In contrast, the $h\gamma\gamma$ and
$h\gamma Z$ couplings are induced via the top loop diagram and the
W-loop diagram in the SM. The leading contribution to the coupling of
$h\gamma\gamma$ is the $W^\pm$ boson loop, which is roughly 4.5
times larger in amplitude than the contribution of the top quark
loop. Analytic expressions for the $h\to gg$ decay width and the
diphoton partial width are given by \cite{Ellis:1975ap,Shifman:1979eb}
\begin{eqnarray}
\Gamma(h\to gg) &=&
\frac{G_F \alpha_s^2 m_h^3}{512\sqrt{2}\pi^3} \left|N_c Q_t^2 A_{1/2}(\tau_t) \right |^2 , \\
\Gamma(h\to \gamma\gamma) &=&
\frac{G_F \alpha^2 m_h^3}{128\sqrt{2}\pi^3}\left|A_1(\tau_W)+ N_c Q_t^2 A_{1/2}(\tau_t) \right |^2 ,
\end{eqnarray}
where $G_F$ is the Fermi constant, $N_c=3$ is the number of colors, $Q_t=+2/3$ is
the top quark electric charge in units of $e$, and $\tau_i\equiv
4m_i^2/m_h^2$
(for $i=t, W$).
Below the $WW$ threshold, the loop functions for spin-1 ($W$ boson) and spin-1/2 (top quark) particles are
given in the Appendix of Ref.~\cite{Carena:2012xa}.
In the limit that the particle running in the loop has a mass much heavier than the Higgs, we have
\begin{equation}
\label{eq:limit}
A_1 \rightarrow -7 \ , \qquad N_c Q_t^2\, A_{1/2} \rightarrow \frac{4}{3} N_c Q_t^2 \ .
\end{equation}
For a Higgs mass below the $WW$ threshold, the $W$ boson contribution is always dominant and monotonically decreasing from $A_1=-7$ for very small Higgs masses to $A_1\approx -12.4$ at the threshold,
while the top quark contribution is well-approximated by the asymptotic value of $(4/3)^2\approx 1.78$.
If we consider a Higgs mass at 126 GeV, the $W$ and top contributions are
\begin{equation}
m_h=126 \ \ {\rm GeV}: \ A_1=-8.32 \ , \quad N_c Q_t^2 A_{1/2}=1.84\ .
\end{equation}
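Inserting these amplitude values into the diphoton width formula above gives a quick numerical estimate; note the destructive interference between the $W$ and top loops. Using the on-shell $\alpha \simeq 1/137$ in this evaluation is an assumption of the sketch.

```python
import math

# Numerical evaluation of the Gamma(h -> gamma gamma) expression above,
# with the loop-amplitude values quoted in the text for m_h = 126 GeV.
# Using the on-shell alpha = 1/137 here is an assumption of this sketch.
G_F = 1.1664e-5            # Fermi constant (GeV^-2)
alpha = 1.0 / 137.036      # fine-structure constant
m_h = 126.0                # Higgs mass (GeV)
A_W = -8.32                # W-boson loop, A_1(tau_W)
A_t = 1.84                 # top-quark loop, N_c Q_t^2 A_{1/2}(tau_t)

gamma_aa = (G_F * alpha**2 * m_h**3
            / (128.0 * math.sqrt(2.0) * math.pi**3)) * abs(A_W + A_t)**2

print(round(gamma_aa * 1e6, 1), "keV")  # of order 10 keV, as expected in the SM
```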
There have been many studies on the new physics loop
contributions to the $hgg$ as well as $h\gamma\gamma$ couplings.
Recently, Carena, Low and Wagner have performed a comprehensive study for the effects on
the diphoton width from adding new colorless charged particles of spin-0, spin-1/2, and spin-1,
which would interfere with the SM contributions~\cite{Carena:2012xa}.
In general, the contribution of a heavy particle in loop diagrams to
a decay amplitude
scales inversely as a positive power of the corresponding particle mass.
That is, in the infinite mass limit, the effects of the heavy particle loops
decouple, as a consequence of the decoupling theorem of Appelquist and
Carazzone~\cite{Appelquist:1974tg}. However, the validity of the
decoupling theorem depends on the assumption that all couplings
are held fixed. In cases where the
origin of the heavy particle mass is due to electroweak symmetry
breaking, the squared-mass of the boson or the mass of the
fermion is proportional to the vacuum expectation value as indicated
in \eq{universal}, and the constant of proportionality is the
corresponding Higgs coupling. Thus in this case, one can only take the limit of large
mass by taking the corresponding Higgs coupling to be large.
As a result, the corresponding contributions of such particles in
loop diagrams do \textit{not} decouple.
For example, the one-loop contributions of the $W$ boson and of
chiral fermions such as the top quark to the $h\to\gamma\gamma$ and
$h\to gg$ decay amplitudes approach a constant in
the large mass limit.
Non-decoupling effects can also appear in radiative corrections to various observables.
As a dramatic example,
the one loop correction to the triple Higgs boson coupling $hhh$ is large because it receives a non-decoupling effect
proportional to the quartic power of the top quark mass after renormalization~\cite{Sirlin:1985ux,Kanemura:2004mg},
\begin{eqnarray}
\lambda_{hhh}^{\rm ren} \simeq \frac{3 m_h^2}{\sqrt{2}\,v} \left(1 - \frac{N_c\, m_t^4}{6 \pi^2 v^2 m_h^2} \right).
\end{eqnarray}
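The size of this non-decoupling top-quark effect can be estimated numerically. The following sketch (ours) uses the known one-loop result $\Delta\lambda_{hhh}/\lambda_{hhh} \simeq -N_c\, m_t^4/(3\pi^2 v^2 m_h^2)$, written in the convention $v\simeq 246$ GeV:

```python
import math

# Fractional one-loop top-quark correction to the hhh coupling,
# Delta(lambda)/lambda ~ -N_c m_t^4 / (3 pi^2 v^2 m_h^2), with v = 246 GeV.
N_c, m_t, m_h, v = 3, 173.0, 126.0, 246.0
delta = -N_c * m_t**4 / (3 * math.pi**2 * v**2 * m_h**2)
print(f"top-loop correction to lambda_hhh: {100*delta:.1f}%")
```

The correction is of order $-10\%$, large enough to be relevant for precision measurements of the triple Higgs coupling.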
In theories that go beyond the Standard Model (BSM), new particles may exist that couple to the Higgs boson.
For example, new bosonic loops yield positive contributions and fermionic loops yield negative
contributions to the $hhh$ coupling.
The loop induced couplings $hgg$, $h\gamma\gamma$, $hZ\gamma$ and the radiatively-corrected $hhh$ coupling
are particularly sensitive to
new particles in the loop when electroweak symmetry breaking provides the dominant contributions to the corresponding
new particle masses. Thus, the couplings of the Higgs boson to SM fields can exhibit deviations from SM
predictions due to BSM loop effects even when the corresponding tree-level couplings are fixed to their SM values.
The non-decoupling contribution of new particles can affect the
effective potential at finite temperatures. For example, a new bosonic loop
contribution can make the electroweak phase transition sufficiently strongly
first order as required for a successful scenario of electroweak
baryogenesis~\cite{Cohen:1993nk,Morrissey:2012db}. Such a
non-decoupling effect results in a large deviation in the $hhh$
coupling, so that one can test this scenario by measuring the $hhh$
coupling accurately. In Ref.~\cite{Kanemura:2004ch}, the correlation
between the condition for a first order phase transition and the
deviation in the $hhh$ coupling is studied. To test this scenario
of electroweak baryogenesis requires a determination of the $hhh$
coupling with a $10$--$20\%$ accuracy. A
measurement of the $hhh$ coupling with the required precision can be achieved at
the ILC as shown in Section 5.6.
\subsection{Higgs decays}
The Higgs boson couples to all the particles of the SM. Therefore, there are many decay modes. In particular,
with a mass of about 126 GeV the Higgs boson decays
into $b\bar b$, $WW^\ast$, $\tau^+\tau^-$, $gg$, $c\bar c$, $ZZ^\ast$,
$\gamma\gamma$, $\gamma Z$ and $\mu^+\mu^-$,
where $\gamma\gamma$ and $\gamma Z$ are one-loop induced decay processes.
\begin{figure}[b]
\begin{center}
\includegraphics[width=90mm]{Chapter_Theory/figs/Higgs_br.pdf}
\caption{Branching ratio of the Higgs boson in the SM as a function of the mass.\label{FIG:hbr_SM}}
\end{center}
\end{figure}
In Fig.~\ref{FIG:hbr_SM}, branching ratios for various decay modes are
shown as a function of the mass of the Higgs boson. The decay
branching ratios strongly depend on the mass of the Higgs boson $m_h$.
In Tables~\ref{tab_bf_hff} and \ref{tab_bf_hvv}, the predicted values
of decay branching ratios of the Standard Model Higgs boson are listed
for $m_h= 125.0$, 125.3, 125.6, 125.9, 126.2 and 126.5
GeV~\cite{Heinemeyer:2013tqa}. In Table~\ref{tab_bf_hvv} the
predicted values of the total decay width of the Higgs boson are also
listed. It is quite interesting that with a Higgs mass of 126 GeV, a
large number of decay modes have similar sizes and are accessible to
experiments. Indeed, the universal relation between the mass and the
coupling to the Higgs boson for each particle shown in
Fig.~\ref{FIG:c-m} can be well tested by measuring these branching
ratios as well as the total decay width accurately at the ILC. For
example, the top Yukawa coupling and the triple Higgs boson coupling
are determined respectively by measuring the production cross sections
of top pair associated Higgs boson production and double Higgs boson
production mechanisms.
\begin{table}[t!]
\centering
\caption{The Standard Model values of branching ratios of fermionic decays of the Higgs boson
for each value of the Higgs boson mass $m_h$.
\label{tab_bf_hff}
\\}
\begin{tabular}{|c||c|c|c|c|c|}\hline
$m_h$ (GeV) & $b\bar b$ & $\tau^+\tau^-$ & $\mu^+\mu^-$ & $c\bar c$ & $s \bar s$ \\
\hline
125.0 & 57.7 \% & 6.32 \% & 0.0219 \% & 2.91 \% & 0.0246 \% \\
125.3 & 57.2 \% & 6.27 \% & 0.0218 \% & 2.89 \% & 0.0244 \% \\
125.6 & 56.7 \% & 6.22 \% & 0.0216 \% & 2.86 \% & 0.0242 \% \\
125.9 & 56.3 \% & 6.17 \% & 0.0214 \% & 2.84 \% & 0.0240 \% \\
126.2 & 55.8 \% & 6.12 \% & 0.0212 \% & 2.81 \% & 0.0238 \% \\
126.5 & 55.3 \% & 6.07 \% & 0.0211 \% & 2.79 \% & 0.0236 \% \\
\hline
\end{tabular}
\end{table}
\begin{table}[t!]
\centering
\caption{The Standard Model values of branching ratios of bosonic decays of the Higgs boson for each
value of the Higgs boson mass $m_h$.
The predicted value of the total decay width of the Higgs boson is also listed for each value of $m_h$.
\label{tab_bf_hvv}
\\}
\begin{tabular}{|c||c|c|c|c|c|c|}\hline
$m_h$ (GeV) & $gg$ & $\gamma\gamma$ & $Z\gamma$ & $W^+W^-$ & $ZZ$ & $\Gamma_H$ (MeV) \\
\hline
125.0 & 8.57 \% & 0.228 \% & 0.154 \% & 21.5 \% & 2.64 \% & 4.07\\
125.3 & 8.54 \% & 0.228 \% & 0.156 \% & 21.9 \% & 2.72 \% & 4.11\\
125.6 & 8.52 \% & 0.228 \% & 0.158 \% & 22.4 \% & 2.79 \% & 4.15 \\
125.9 & 8.49 \% & 0.228 \% & 0.162 \% & 22.9 \% & 2.87 \% & 4.20\\
126.2 & 8.46 \% & 0.228 \% & 0.164 \% & 23.5 \% & 2.94 \% & 4.24\\
126.5 & 8.42 \% & 0.228 \% & 0.167 \% & 24.0 \% & 3.02 \% & 4.29\\
\hline
\end{tabular}
\end{table}
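As a quick consistency check of the tables (our own script; values transcribed from Tables~\ref{tab_bf_hff} and \ref{tab_bf_hvv}), the listed branching ratios at a fixed $m_h$ should sum to approximately 100\%:

```python
# Branching ratios (in %) for m_h = 125.0 GeV, transcribed from the tables.
br = {
    "bb": 57.7, "tautau": 6.32, "mumu": 0.0219, "cc": 2.91, "ss": 0.0246,
    "gg": 8.57, "gamma gamma": 0.228, "Z gamma": 0.154, "WW": 21.5, "ZZ": 2.64,
}
total = sum(br.values())
print(f"sum of listed branching ratios: {total:.2f}%")
```

Small deviations from exactly 100\% reflect rounding and decay modes not listed in the tables.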
\subsection{Higgs production at the ILC}
\begin{figure}[b!]
\begin{center}
\includegraphics[width=40mm]{Chapter_Theory/figs/ZHdiagram.pdf}
\includegraphics[width=40mm]{Chapter_Theory/figs/nunuHdiagram.pdf}
\includegraphics[width=40mm]{Chapter_Theory/figs/Diagram-ttH-a.pdf}
\caption{Three important Higgs boson production processes at the ILC:
the Higgsstrahlung process (Left), the $W$-boson fusion process (Middle) and top-quark pair associated production (Right).}
\label{FIG:Hpro_diags_SM}
\end{center}
\end{figure}
At the ILC, the SM Higgs boson $h$ is produced mainly via production mechanisms such as
the Higgsstrahlung process $e^+e^- \rightarrow Z^\ast \rightarrow Z h$ (Fig.~\ref{FIG:Hpro_diags_SM} Left)
and the weak boson fusion
processes $e^+e^- \rightarrow W^{+\ast} W^{-\ast} \nu \bar \nu \rightarrow h \nu \bar \nu$
(Fig.~\ref{FIG:Hpro_diags_SM} (Middle))
and
$e^+e^- \rightarrow Z^{\ast} Z^{\ast} e^+e^- \rightarrow h e^+e^-$.
The Higgsstrahlung process is an $s$-channel process so that it is maximal just above
the threshold of the process, whereas vector boson fusion is a
$t$-channel process which yields a cross section that grows logarithmically
with the center-of-mass energy.
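This contrast can be illustrated with a purely schematic toy model (ours; the normalizations are arbitrary and chosen only so that the crossover lands near $\sqrt{s}\approx 450$ GeV, in line with the actual cross sections discussed below):

```python
import math

# Schematic energy scaling only: an s-channel cross section falls as 1/s,
# while a t-channel (vector boson fusion) cross section grows ~ log(s).
def sigma_zh(roots_gev):        # Higgsstrahlung-like, arbitrary units
    return 1.2e7 / roots_gev**2

def sigma_wbf(roots_gev):       # WW-fusion-like, arbitrary units
    return 73.0 * math.log(roots_gev / 200.0)

for roots in (250, 350, 450, 550, 1000):
    print(f"sqrt(s) = {roots:4d} GeV: "
          f"ZH ~ {sigma_zh(roots):6.1f}, WBF ~ {sigma_wbf(roots):6.1f}")
```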
%
The Higgs boson is also produced in association with a fermion pair.
The most important process of this type is Higgs production
in association with a top quark pair, whose typical diagram
is shown in Fig.~\ref{FIG:Hpro_diags_SM} (Right).
The corresponding production cross sections at the ILC
are shown in Figs.~\ref{FIG:Higgs_xsec_SM} (Left) and (Right)
as a function of the collision energy by assuming
the initial electron (positron) beam polarization to be $-0.8$ ($+0.2$).
\begin{figure}[t!]
\begin{center}
\includegraphics[width=60mm]{Chapter_Theory/figs/higgs_xsec_P-08_02_1TeV.pdf}
\includegraphics[width=85mm]{Chapter_Theory/figs/xsec_log.pdf}
\caption{(Left)The production cross sections of the Higgs boson with the mass of 125 GeV
at the ILC as a function of the collision energy
$\sqrt{s}$. Polarization of the electron beam (80\%) and the positron beam (20\%) is
assumed.
(Right) The cross sections of the production processes $e^+e^- \to hZ$, $e^+e^- \to h \nu_e \bar \nu_e$,
$e^+e^- \to h e^+ e^-$, $e^+e^- \to t \bar t h$,
$e^+e^- \to hhZ$ and $e^+e^- \to hh \nu_e \bar \nu_e$ as a function of
the collision energy for the mass of 125 GeV.
No polarization is assumed for the initial electron and positron beams. \label{FIG:Higgs_xsec_SM}}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=50mm]{Chapter_Theory/figs/ZHHdiagram.pdf}
\includegraphics[width=50mm]{Chapter_Theory/figs/nunuHHdiagram.pdf}
\caption{Typical diagrams for double Higgs boson production via off-shell Higgsstrahlung (Left) and $W$-boson fusion (Right) processes. \label{FIG:HHpro_diags_SM}}
\end{center}
\end{figure}
The ILC operation will start with the $e^+e^-$ collision energy of 250
GeV (just above threshold for $hZ$ production), where
the Higgsstrahlung process is dominant and the contributions of
the fusion processes are small, as shown in Fig.~\ref{FIG:Higgs_xsec_SM} (Left).
As the center-of-mass energy $\sqrt{s}$ increases, the Higgsstrahlung
cross section falls off as $1/s$. Consequently,
the $W$-boson fusion mechanism is more significant at higher energies, and
its production cross section grows logarithmically and becomes
larger than that of the Higgsstrahlung cross section for $\sqrt{s} > 450$ GeV.
At $\sqrt{s} = 500$ GeV, both the Higgsstrahlung process and the W-boson fusion process are
important, and at $\sqrt{s}=1$ TeV the W-boson fusion is dominant.
The cross section of $e^+e^- \to t \bar t h$ is shown in
Fig.~\ref{FIG:Higgs_xsec_SM} (Right). The threshold of this production process
is roughly 480 GeV, so the $t\bar{t}h$ cross section can be measured at the
ILC with an energy of 1 TeV.
Finally, the triple Higgs boson coupling can be determined from measuring the
double Higgs production mechanisms $e^+e^- \to Z hh$ and $e^+e^- \to \nu \bar \nu hh$
by extracting the contribution of the Feynman diagram shown in Fig.~\ref{FIG:HHpro_diags_SM}.
The production cross section for the $Zhh$ process is typically of the order of 0.1 fb at the
collision energy just above the threshold at about 400 GeV
as shown in Fig.~\ref{FIG:Higgs_xsec_SM}(Right).
At the ILC with a center-of-mass energy of 500 GeV, the triple Higgs boson coupling can be measured
via this process.
On the other hand, at higher energies the cross section of the fusion process
$e^+e^- \to \nu \bar \nu hh$ becomes larger. This process becomes relevant
for the measurement of the triple Higgs boson coupling
at the energies around 1 TeV.
\subsection{Vacuum Stability}
The mass of the Higgs boson is proportional to the strength of the
Higgs
self-coupling $\lambda$ via $m_h^2 = 2 \lambda v^2$.
The magnitude of $\lambda$ at high energies can be predicted from the size of $\lambda$ at the electroweak scale
by using the renormalization group equation (RGE). The RGE for the coupling constant $\lambda$ is given by~\cite{Cabibbo:1979ay}
\begin{eqnarray}
16 \pi^2 \mu \frac{d}{d\mu} \lambda = 12(\lambda^2 +\lambda y_t^2-
y_t^4) -3\lambda(3g^2+g^{\prime\,2})+\tfrac{3}{4}\bigl[2g^4+(g^2+g^{\prime\,2})^2\bigr]+
\ldots,
\end{eqnarray}
where the $\ldots$ indicates terms proportional to the Yukawa
couplings of the five light quarks and the charged leptons, which can
be neglected in first approximation. If the Higgs mass is large, $\lambda$
is large and the $\beta$-function is positive. Then $\lambda$ grows
with energy and blows up at some high energy scale
(the Landau pole), which can lie below the Planck scale. In contrast,
when the Higgs mass is small, the $\beta$-function is negative due to
the term proportional to the fourth power of the top-quark Yukawa
coupling. In this case, the coupling $\lambda$ decreases
as the energy scale increases and eventually becomes negative.
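The mechanism can be sketched by integrating the one-loop RGE above with an Euler step, freezing $y_t$, $g$ and $g'$ at approximate weak-scale values (a crude toy of our own; neglecting the running of $y_t$ makes $\lambda$ turn negative far earlier than in the full analysis):

```python
import math

# Toy one-loop running of lambda with y_t, g, g' frozen at the weak scale.
lam, y_t, g, gp = 0.26, 0.94, 0.65, 0.36
mu, dt = 173.0, 0.01          # start near m_t; step size in ln(mu)

while lam > 0 and mu < 1.2e19:
    beta = (12 * (lam**2 + lam * y_t**2 - y_t**4)
            - 3 * lam * (3 * g**2 + gp**2)
            + 0.75 * (2 * g**4 + (g**2 + gp**2)**2)) / (16 * math.pi**2)
    lam += beta * dt
    mu *= math.exp(dt)

# With frozen couplings, lambda crosses zero at a much lower scale than
# the 10^7 - 10^15 GeV obtained in the full NNLO treatment.
print(f"lambda crosses zero near mu ~ {mu:.1e} GeV")
```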
If $\lambda$ is driven negative below the Planck scale (at which point
quantum gravitational effects would have to be taken into account), then we could
conclude that electroweak vacuum is not the global minimum,
since either a deeper scalar potential minimum exists or the
scalar potential is unbounded from below. In either case, the
electroweak minimum would no longer be stable.
By assuming that the electroweak minimum is stable up to a given
high energy scale $\Lambda$, below which the coupling $\lambda$
neither blows up nor is driven negative,
one can derive upper and lower Higgs mass bounds
as a function of $\Lambda$.
Given that the mass of the Higgs boson is
now known to be around 126 GeV, which corresponds to
$\lambda\sim 0.26$ at the electroweak scale,
it follows that the
$\beta$-function is negative. The recent RGE analysis in the NNLO
approximation~\cite{Degrassi:2012ry} shows that the scale $\Lambda$ where
$\lambda$ becomes negative is between $10^{7}$ GeV and $10^{15}$ GeV at the 3$\sigma$ level.
The main uncertainty comes from the top quark mass, $\alpha_s$, and the theoretical uncertainties in QCD corrections.
When the mass of the top quark is measured with an accuracy of about 30 MeV at the ILC, the cut-off scale of the
SM can be much better determined, as exhibited by Fig.~\ref{FIG:SM_VS}.
\begin{figure}[b!]
\includegraphics[width=55mm]{Chapter_Theory/figs/run125.pdf}
\includegraphics[width=85mm]{Chapter_Theory/figs/deadoraliveG2012.pdf}
\caption{
{\it Left}: RG evolution of $\lambda$ varying $M_t$ and $\alpha_{\rm s}$ by $\pm 3\sigma$.
{\it Right}: Regions of absolute stability, metastability and instability of the SM vacuum in the $M_t$--$M_h$
plane in the region of the preferred experimental range of $M_h$ and $M_t$ (the gray areas denote the allowed
region at 1, 2, and 3$\sigma$).
The three boundary lines correspond to $\alpha_s(M_Z)=0.1184\pm 0.0007$, and the grading of the
colors indicates the size of the theoretical error.
The dotted contour-lines show the instability scale $\Lambda$ in GeV assuming $\alpha_s(M_Z)=0.1184$.
}
\label{FIG:SM_VS}
\end{figure}
With a Standard Model Higgs mass of 126 GeV, the central value of
$\lambda$ is negative at the Planck scale. Therefore, the electroweak
vacuum is not stable in the Standard Model unless new physics enters
below the Planck scale. However, if we only require that the
electroweak vacuum is metastable, with a lifetime considerably longer
than the age of the Universe, then Fig.~\ref{FIG:SM_VS} indicates that
the Standard Model can be valid with no new physics required all the
way up to the Planck scale.
Finally, note that the bound from vacuum stability is largely relaxed when extended Higgs sectors
are considered where the lightest scalar behaves like the SM
Higgs boson~\cite{Sher:1988mj,Kanemura:1999xf}.
For example, if we consider the scalar sector with two Higgs doublets, the cut off scale
where the vacuum stability is violated can be easily above the Planck scale.
Due to the loop contribution of extra scalar fields, the beta-function of the
quartic coupling constant of the SM-like Higgs boson is in general larger than that in the SM.
Therefore, the cutoff scale is higher than that of the SM.
\section{The two-Higgs-doublet model (2HDM)}
\label{tdhm}
Given that there are multiple generations of quarks and leptons, it is
reasonable to consider the possibility that the Higgs sector of
electroweak theory is also non-minimal. The introduction of the
two-Higgs doublet extension of the Standard Model (2HDM)~\cite{Branco:2011iw}
was motivated for various reasons over the years. It was initially introduced to
provide a possible new source of CP violation mediated by neutral
scalars~\cite{Lee:1973iz}. Subsequently, the 2HDM was studied for phenomenological
reasons, as it provides for new scalar degrees of freedom including a
charged Higgs pair, a neutral CP-odd Higgs scalar in the case of a
CP-conserving scalar potential and neutral scalars of indefinite CP in
the case of a CP-violating scalar potential and/or vacuum~\cite{Gunion:1989we}.
These features yield new phenomenological signals for the production
and decay of fundamental spin-0 particles.
Today, the main motivation for the 2HDM is connected with models of
TeV-scale supersymmetry. Such models provide the only natural
framework for weakly-coupled fundamental scalar particles (for further
details, see Section~\ref{alternate}). In particular, the
minimal supersymmetric extension of the Standard Model (MSSM)
requires a Higgs sector with at least two Higgs doublet fields.
The MSSM Higgs sector is a 2HDM that is highly constrained
by supersymmetry. The structure of the MSSM Higgs sector will be
explored further in Section~\ref{mssm}.
The most general version of the 2HDM, which contains all possible
renormalizable terms (mass terms and interactions) allowed by the
electroweak gauge invariance, is not phenomenologically viable due to
the presence of Higgs--quark Yukawa interaction terms that lead to
tree-level Higgs-mediated flavor changing neutral currents (FCNCs)~\cite{Glashow:1976nt,Paschos:1976ay}.
Such effects are absent in the MSSM Higgs sector due to the
constraints imposed by supersymmetry on the Yukawa interactions. In
non-supersymmetric versions of the 2HDM, one can also naturally avoid
FCNCs by imposing certain simple discrete symmetries on the scalar
and fermion fields, as discussed in Section~\ref{specialforms}. These symmetries reduce the parameter freedom of
the 2HDM and automatically eliminate the dangerous FCNC interactions.
Nevertheless, it is instructive to examine the structure of the most
general 2HDM, as constrained versions of the 2HDM can then be examined
as limiting cases of the most general 2HDM.
\subsection{Model-independent treatment}
\label{modelind}
The scalar fields of the 2HDM are complex SU(2) doublet, hypercharge-one fields, $\Phi_1$ and $\Phi_2$,
where the corresponding vevs are $\langle\Phi_i\rangle=v_i$, and
$v^2\equiv |v_1|^2+|v_2|^2=(174~{\rm GeV})^2$ is fixed by the observed
$W$ mass, $m_W=gv/\sqrt{2}$. The most general
renormalizable SU(2)$\times$U(1) scalar potential is given by
\begin{Eqnarray}
\mathcal{V}&=& m_{11}^2 \Phi_1^\dagger \Phi_1+ m_{22}^2 \Phi_2^\dagger \Phi_2 -[m_{12}^2
\Phi_1^\dagger \Phi_2+{\rm h.c.}]
+\tfrac{1}{2} \lambda_1(\Phi_1^\dagger \Phi_1)^2+\tfrac{1}{2} \lambda_2(\Phi_2^\dagger \Phi_2)^2
+\lambda_3(\Phi_1^\dagger \Phi_1)(\Phi_2^\dagger \Phi_2)
\nonumber\\
&&\quad
+\lambda_4( \Phi_1^\dagger \Phi_2)(\Phi_2^\dagger \Phi_1)
+\left\{\tfrac{1}{2} \lambda_5 (\Phi_1^\dagger \Phi_2)^2 +\big[\lambda_6 (\Phi_1^\dagger
\Phi_1) +\lambda_7 (\Phi_2^\dagger \Phi_2)\big] \Phi_1^\dagger \Phi_2+{\rm
h.c.}\right\}\,.\label{genpot}
\end{Eqnarray}
In the most general 2HDM, the fields $\Phi_1$ and $\Phi_2$ are indistinguishable.
Thus, it is always possible to define two orthonormal linear combinations of the two
doublet fields without modifying any prediction of the model. Performing such a
redefinition of fields leads to a new scalar potential with the same form as \eq{genpot}
but with modified coefficients. This implies that the coefficients that parameterize
the scalar potential in \eq{genpot} are not directly physical~\cite{Davidson:2005cw}.
To obtain a scalar potential that is more closely related to physical observables, one
can introduce the so-called \textit{Higgs basis}, in which the redefined doublet fields (denoted below
by $H_1$ and $H_2$) have the property that $H_1$ has a non-zero vev whereas $H_2$ has a zero
vev~\cite{Branco:1999fs}. In particular, we define new Higgs doublet fields:
\begin{equation} \label{higgsbasispot}
H_1=\begin{pmatrix}H_1^+\\ H_1^0\end{pmatrix}\equiv \frac{v_1^* \Phi_1+v_2^*\Phi_2}{v}\,,
\qquad\quad H_2=\begin{pmatrix} H_2^+\\ H_2^0\end{pmatrix}\equiv\frac{-v_2 \Phi_1+v_1\Phi_2}{v}
\,.
\end{equation}
It follows that $\vev{H_1^0}=v$ and $\vev{H_2^0}=0$.
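This defining property of the Higgs basis is straightforward to verify numerically for arbitrary complex vevs (a small check of our own):

```python
import numpy as np

# Verify that the Higgs-basis rotation sends the vevs (v1, v2) to (v, 0).
rng = np.random.default_rng(0)
v1, v2 = rng.normal(size=2) + 1j * rng.normal(size=2)  # arbitrary complex vevs
v = np.sqrt(abs(v1)**2 + abs(v2)**2)

vev_H1 = (np.conj(v1) * v1 + np.conj(v2) * v2) / v   # <H_1^0>
vev_H2 = (-v2 * v1 + v1 * v2) / v                    # <H_2^0>

print(abs(vev_H1 - v), abs(vev_H2))   # both zero up to rounding
```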
The Higgs basis is uniquely defined
up to an overall rephasing, $H_2\to e^{i\chi} H_2$ (which does not alter the fact that
$\vev{H_2^0}=0$). In the Higgs basis, the scalar potential is
given by~\cite{Branco:1999fs,Davidson:2005cw}:
\begin{Eqnarray} \mathcal{V}&=& Y_1 H_1^\dagger H_1+ Y_2 H_2^\dagger H_2 +[Y_3
H_1^\dagger H_2+{\rm h.c.}]
+\tfrac{1}{2} Z_1(H_1^\dagger H_1)^2+\tfrac{1}{2} Z_2(H_2^\dagger H_2)^2
+Z_3(H_1^\dagger H_1)(H_2^\dagger H_2)
\nonumber\\
&&\quad
+Z_4( H_1^\dagger H_2)(H_2^\dagger H_1)
+\left\{\tfrac{1}{2} Z_5 (H_1^\dagger H_2)^2 +\big[Z_6 (H_1^\dagger
H_1) +Z_7 (H_2^\dagger H_2)\big] H_1^\dagger H_2+{\rm
h.c.}\right\}\,,
\end{Eqnarray}
where $Y_1$, $Y_2$ and $Z_1,\ldots,Z_4$ are real and uniquely defined,
whereas $Y_3$, $Z_5$, $Z_6$ and $Z_7$ are complex and transform under
the rephasing of $H_2$,
\vspace{-0.1in}
\begin{equation} \label{rephase}
[Y_3, Z_6, Z_7]\to e^{-i\chi}[Y_3, Z_6, Z_7] \quad{\rm and}\quad
Z_5\to e^{-2i\chi} Z_5\,.
\end{equation}
After minimizing the scalar potential, $Y_1=-Z_1 v^2$
and $Y_3=-Z_6 v^2$. This leaves 11 free parameters:
one vev, eight real parameters ($Y_2$, $Z_{1,2,3,4}$ and $|Z_{5,6,7}|$), and two relative phases.
If $\Phi_1$ and $\Phi_2$ are indistinguishable fields, then
observables can only depend on combinations of Higgs basis
parameters that are independent of $\chi$. Symmetries, such as
discrete symmetries or supersymmetry, can distinguish between
$\Phi_1$ and $\Phi_2$, which then singles out a specific physical basis for
the Higgs fields, and can yield additional observables such as
$\tan\beta\equiv |v_2|/|v_1|$ in the MSSM.
In the general 2HDM,
the physical charged Higgs boson is the charged component of the Higgs-basis doublet $H_2$, and its mass
is given by
\begin{equation} \label{chhiggsmass}
m_{H^\pm}^2=Y_{2}+Z_3 v^2\,.
\end{equation}
The three physical neutral Higgs boson mass-eigenstates
are determined by diagonalizing a $3\times 3$ real symmetric squared-mass
matrix that is defined in the Higgs basis~\cite{Branco:1999fs,Haber:2006ue}
\begin{equation} \label{mtwo}
\mathcal{M}^2=2v^2\left( \begin{array}{ccc}
Z_1&\,\, \Re(Z_6) &\,\, -\Im(Z_6)\\
\Re(Z_6) &\,\, \tfrac{1}{2} (Z_{345}+Y_2/v^2) & \,\,
- \tfrac{1}{2} \Im(Z_5)\\ -\Im(Z_6) &\,\, - \tfrac{1}{2} \Im(Z_5) &\,\,
\tfrac{1}{2} (Z_{345}+Y_2/v^2)-\Re(Z_5)\end{array}\right),
\end{equation}
where $Z_{345}\equiv Z_3+Z_4+\Re(Z_5)$. The diagonalizing matrix is a $3\times 3$
real orthogonal matrix that depends on three angles:
$\theta_{12}$, $\theta_{13}$ and~$\theta_{23}$.
Under the rephasing $H_2\to e^{i\chi}H_2$~\cite{Haber:2006ue},
\begin{equation}
\theta_{12}\,,\, \theta_{13}~{\hbox{\text{are invariant, and}}}\quad
\theta_{23}\to \theta_{23}-\chi\,.
\end{equation}
By convention, we choose
\begin{equation} \label{range}
-\tfrac{1}{2}\pi\leq\theta_{12}\,,\,\theta_{13}<\tfrac{1}{2}\pi\,.
\end{equation}
It is convenient to define
invariant combinations of $\theta_{12}$ and $\theta_{13}$, denoted by $q_{k1}$ and $q_{k2}$
in Table~\ref{tabinv} below, where $k=1,2,3$ corresponds to the associated neutral Higgs mass eigenstate $h_k$~\cite{Haber:2006ue}.
\begin{table}[h!]
\centering
\caption{Invariant combinations of neutral Higgs mixing angles $\theta_{12}$ and $\theta_{13}$,
where $c_{ij}\equiv\cos\theta_{ij}$ and $s_{ij}\equiv\sin\theta_{ij}$. \label{tabinv}
\\}
\begin{tabular}{|c||c|c|}\hline
$\phantom{a} k\phantom{a} $ &\phantom{a} $q_{k1}\phantom{a} $ & \phantom{a} $q_{k2} \phantom{a} $ \\
\hline
$1$ & $c_{12} c_{13}$ & $-s_{12}-ic_{12}s_{13}$ \\
$2$ & $s_{12} c_{13}$ & $c_{12}-is_{12}s_{13}$ \\
$3$ & $s_{13}$ & $ic_{13}$ \\ \hline
\end{tabular}
\end{table}
\noindent
The physical neutral Higgs states ($h_{1,2,3}$) are then given by:
\begin{equation}
h_k=\frac{1}{\sqrt{2}}\biggl\{q_{k1}^*\left(H_1^0-v\right)+q_{k2}^*H_2^0 e^{i\theta_{23}}+{\rm h.c.}\biggr\}\,.
\end{equation}
It is convenient to choose the mass ordering of the states such that $m_1<m_{2,3}$. The mass ordering fixes the neutral
Higgs mixing angles $\theta_{12}$ and $\theta_{13}$. Although the explicit formulae for the Higgs masses and mixing
angles are quite complicated, there are numerous relations among them which take on rather simple forms. The
following results are noteworthy~\cite{Haber:2006ue,Haber:2010bw}:
\begin{Eqnarray}
2Z_1 v^2&=&m_1^2 c_{12}^2 c_{13}^2+m_2^2 s_{12}^2 c_{13}^2 + m_3^2
s_{13}^2\,,\label{z1v} \\[5pt]
2\Re(Z_6\,e^{-i\theta_{23}})\,v^2 &=& c_{13}s_{12}c_{12}(m_2^2-m_1^2)\,,
\label{z6rv} \\[5pt]
2\Im(Z_6\,e^{-i\theta_{23}})\,v^2 &=& s_{13}c_{13}(c_{12}^2 m_1^2+s_{12}^2
m_2^2-m_3^2) \,, \label{z6iv} \\[5pt]
2\Re(Z_5\,e^{-2i\theta_{23}})\,v^2 &=& m_1^2(s_{12}^2-c_{12}^2 s_{13}^2)+m_2^2(c_{12}^2-s_{12}^2 s_{13}^2)-m_3^2 c_{13}^2\,,
\label{z5rv} \\[5pt]
\Im(Z_5\,e^{-2i\theta_{23}})\,v^2 &=& s_{12}c_{12}s_{13}(m_2^2-m_1^2)\,. \label{z5iv}
\end{Eqnarray}
If we also define the physical charged Higgs state by $H^\pm=e^{\pm i\theta_{23}}H_2^\pm$, then all the Higgs mass eigenstate
fields ($h_1$, $h_2$, $h_3$ and $H^\pm$) are invariant under the rephasing $H_2\to e^{i\chi}H_2$.
Thus, we have established a second well-defined basis of the general
2HDM, which corresponds to the
mass-eigenstate basis for the neutral Higgs bosons.
\subsection{Constraints on 2HDM scalar potential parameters}
The assumption of tree-level unitarity in the scattering of longitudinal gauge bosons
yields via the equivalence theorem upper bounds on the quartic couplings of the scalar
potential. The bounds are rather simple when expressed in the Higgs basis.
For example, the following bounds obtained in Ref.~\cite{Haber:2010bw} are based on single
channel scattering processes,
\begin{Eqnarray}
&& |Z_1|<4\pi\,,\qquad |Z_3|<8\pi\,,\qquad |Z_3+Z_4|<8\pi\,,\quad
|{\rm Re}(Z_5 e^{-2i\theta_{23}})|<2\pi\,,\nonumber \\
&& |{\rm Im}(Z_5 e^{-2i\theta_{23}})|<8\pi\,,\qquad
|{\rm Re}(Z_6 e^{-i\theta_{23}})|<2\pi\,,\qquad |{\rm Im}(Z_6 e^{-i\theta_{23}})|<\tfrac{8}{3}\pi\,.
\end{Eqnarray}
There are no unitarity restrictions at tree-level on $Z_2$ and $Z_7$
as these quantities are absent from the neutral scalar mass matrix.
One can obtain somewhat improved tree-level bounds by considering
multi-channel scattering processes and analyzing the eigenvalues of
the corresponding amplitude matrices. If the $|Z_i|$ are too large,
then the scalar sector becomes strongly coupled, and the tree-level
unitarity bounds become suspect. Nevertheless, it is common practice
to consider weakly-coupled scalar sectors, in which case one should
not allow any of the $|Z_i|$ to become too large. For practical
purposes, we shall assume that $|Z_i|\lesssim 2\pi$, in order to
maintain unitarity and perturbativity of tree-level amplitudes.
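For scans over the 2HDM parameter space it is convenient to encode these single-channel bounds in a small filter (our own helper; \texttt{Z5} and \texttt{Z6} denote the rephasing-invariant combinations $Z_5 e^{-2i\theta_{23}}$ and $Z_6 e^{-i\theta_{23}}$):

```python
import math

def unitarity_ok(Z1, Z3, Z4, Z5, Z6):
    """Single-channel tree-level unitarity bounds quoted in the text."""
    pi = math.pi
    return (abs(Z1) < 4 * pi and abs(Z3) < 8 * pi and abs(Z3 + Z4) < 8 * pi
            and abs(Z5.real) < 2 * pi and abs(Z5.imag) < 8 * pi
            and abs(Z6.real) < 2 * pi and abs(Z6.imag) < 8 * pi / 3)

print(unitarity_ok(0.26, 0.5, 0.3, 0.1 + 0.05j, 0.08 - 0.03j))  # True
print(unitarity_ok(15.0, 0.5, 0.3, 0.0 + 0.0j, 0.0 + 0.0j))     # False: |Z1| > 4*pi
```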
Additional constraints on the 2HDM scalar potential parameters arise from
the analysis of precision electroweak observables, which are sensitive to
Higgs bosons via loop corrections to Standard Model processes.
The $S$, $T$, and $U$ parameters, introduced by Peskin and
Takeuchi~\cite{Peskin:1991sw}, are independent ultraviolet-finite combinations of
radiative corrections to gauge boson two-point functions (the
so-called ``oblique'' corrections). The parameter $T$ is related to
the well known $\rho$-parameter of electroweak physics~\cite{Veltman:1977kh} by
$\rho - 1 = \alpha T$. The oblique parameters can be expressed in
terms of the transverse part of the gauge boson two-point
functions. For example,
\begin{equation}
\label{oblique}
\alpha\, T \equiv \frac{\Pi^{\rm new}_{WW}(0)}{m_W^2}
-\frac{\Pi^{\rm new}_{ZZ}(0)}{m_Z^2}\,,
\end{equation}
where $\alpha\equiv e^2/(4\pi)$ is the electromagnetic coupling defined
in the $\overline{\rm MS}$ scheme evaluated at $m_Z$.
The $\Pi_{V_a V_b}^{\rm new}$ are the new physics contributions
to the one-loop $V_a$--$V_b$ vacuum polarization functions (where $V=W$ or $Z$).
New physics contributions are defined as those that enter relative
to the Standard Model with the Higgs mass fixed to its observed value.
The definition of the two other oblique parameters $S$ and $U$ can be found
in Ref.~\cite{ErlerPDG}.
Explicit expressions for $S$, $T$ and $U$ in the general 2HDM have been written down
and the numerical contributions of the 2HDM states
(relative to that of the SM) to the oblique parameters in the
2HDM have been studied in Refs.~\cite{Haber:2010bw,Froggatt:1991qw,Froggatt:1992wt,Grimus:2008nb}.
The general conclusion is that corrections
to $S$ and $U$ due to the contribution of the Higgs sector are small, consistent
with the present experimental limits. However, the contributions
to $T$ may or may not be significant depending on the amount of custodial symmetry breaking
introduced by the 2HDM scalar potential. Indeed, in Ref.~\cite{Haber:2010bw} it is shown that
a custodial symmetric scalar potential is one in which CP is conserved and in addition,
the following condition is satisfied,
\begin{equation} \label{basindcust}
\hspace{-0.65in} Z_4=\begin{cases} \varepsilon_{56}|Z_5|\,, & \quad \text{for}~~Z_6\neq 0\,, \\[5pt]
\varepsilon_{57}|Z_5|\,, & \quad \text{for}~~Z_7\neq 0\,, \\[5pt]
\pm |Z_5|\,, & \quad \text{for}~~Z_6=Z_7=0\,,
\end{cases}
\end{equation}
where the two sign factors, $\varepsilon_{56}$ and $\varepsilon_{57}$ are defined by:
\begin{equation} \label{signs}
\varepsilon_{56}\equiv \frac{Z_5^* Z_6^2}{|Z_5|\,|Z_6|^2}\,,\quad\qquad
\varepsilon_{57}\equiv \frac{Z_5^* Z_7^2}{|Z_5|\,|Z_7|^2}\,.
\end{equation}
Note that since the scalar potential is assumed to be CP conserving (otherwise the custodial
symmetry is violated), it follows that ${\rm Im}(Z_5^* Z_6^2)={\rm Im}(Z_5^* Z_7^2)=0$.
Hence $\varepsilon_{56}$ and $\varepsilon_{57}$ are real numbers of unit modulus.
A numerical study shows that the 2HDM contribution to $T$ is within the experimentally
measured bounds as long as there is not a significant mass splitting between the
charged Higgs boson and the heavy neutral Higgs bosons. Such mass splittings would
require rather large values of some of the scalar quartic couplings (which would be
approaching their unitarity limits).
\subsection{Tree-level Higgs boson couplings--the general case}
The interactions of the Higgs bosons with the gauge bosons and the Higgs self-interactions,
when computed in the Higgs basis, can be expressed in terms of the
parameters $Z_i$, $\theta_{12}$, $\theta_{13}$
and $\theta_{23}$~\cite{Haber:2006ue}. In fact, the only combinations that
appear will be invariant with respect to the rephasing $H_2\to e^{i\chi} H_2$
(since observables cannot depend on the arbitrary angle $\chi$).
Indeed, the interaction terms will depend on the invariant quantities $q_{k1}$ and $q_{k2}$ defined
in Table~\ref{tabinv} and on invariant combinations of the $Z_i$ and $e^{-i\theta_{23}}$.
The interactions of the Higgs bosons and vector bosons of the Standard Model are given by:
\begin{Eqnarray}
\mathcal{L}_{VVH}&=&\left(gm_W W_\mu^+W^{\mu\,-}+\frac{g}{2c_W} m_Z
Z_\mu Z^\mu\right)q_{k1} h_k
\,,\label{VVH}\\[8pt]
\mathcal{L}_{VVHH}&=&\left[\tfrac{1}{4} g^2 W_\mu^+W^{\mu\,-}
+\frac{g^2}{8c_W^2}Z_\mu Z^\mu\right]h_k h_k \nonumber \\
&& +\left[\tfrac{1}{2} g^2 W_\mu^+ W^{\mu\,-}+
e^2A_\mu A^\mu+\frac{g^2}{c_W^2}\left(\tfrac{1}{2}
-s_W^2\right)^2Z_\mu Z^\mu +\frac{2ge}{c_W}\left(\tfrac{1}{2}
-s_W^2\right)A_\mu Z^\mu\right]H^+H^-
\nonumber \\
&& +\biggl\{ \left(\tfrac{1}{2} eg A^\mu W_\mu^+
-\frac{g^2s_W^2}{2c_W}Z^\mu W_\mu^+\right)
q_{k2}H^-h_k +{\rm h.c.}\biggr\}
\,,\label{VVHH} \\[8pt]
\mathcal{L}_{VHH}&=&\frac{g}{4c_W}\,\epsilon_{jk\ell}q_{\ell 1}
Z^\mu h_j\ddel_\mu h_k -\tfrac{1}{2} g\biggl[iq_{k2}W_\mu^+
H^-\ddel\lsup{\,\mu} h_k
+{\rm h.c.}\biggr]\nonumber \\
&& +\left[ieA^\mu+\frac{ig}{c_W}\left(\tfrac{1}{2} -s_W^2\right)
Z^\mu\right]H^+\ddel_\mu H^-\,, \label{VHH}
\end{Eqnarray}
where $s_W\equiv \sin\theta_W$, $c_W\equiv\cos\theta_W$,
and the sum over pairs of repeated indices $j,k=1,2,3$ is implied.
The trilinear Higgs self-interactions are given by
\begin{Eqnarray}
\mathcal{L}_{3h}&=&-\frac{v}{\sqrt{2}}\, h_j h_k h_\ell
\biggl[q_{j1}q_{k1}q_{\ell 1} Z_1
+q_{j2}q^*_{k2}q_{\ell 1}(Z_3+Z_4) + q_{j1} \Re(q_{k2}q_{\ell 2}Z_5\,
e^{-2i\theta_{23}}) \nonumber \\
&&
+3q_{j1}q_{k1}\Re\!\left(q_{\ell
2}Z_6\,e^{-i\theta_{23}}\right) +\Re(q_{j2}^*q_{k2}q_{\ell
2}Z_7\,e^{-i\theta_{23}})
\biggr]\nonumber \\
&&
+\sqrt{2}\,v\,h_k
H^+H^-\biggl[q_{k1}Z_3+\Re(q_{k2}\,e^{-i\theta_{23}}Z_7)\biggr]\,, \label{hhh}
\end{Eqnarray}
where there is an implicit sum over repeated indices.
Note that the complex $Z_{5,6,7}$ are always paired with the correct
power of $e^{-i\theta_{23}}$ such that the corresponding product is
invariant under the rephasing of $H_2$. Finally, for completeness, the
quartic Higgs self-interactions are exhibited,
\begin{Eqnarray}
\mathcal{L}_{4h}&=&
-\tfrac{1}{8} h_j h_k h_\ell h_m \biggl[q_{j1}q_{k1}q_{\ell 1}q_{m1}Z_1
+q_{j2}q_{k2}q^*_{\ell 2}q^*_{m2}Z_2
+2q_{j1}q_{k1}q_{\ell 2}q^*_{m2}(Z_3+Z_4)\nonumber \\
&& +2q_{j1}q_{k1}\Re(q_{\ell 2}q_{m2}Z_5\,e^{-2i\theta_{23}})
+4q_{j1}q_{k1}q_{\ell 1}\Re(q_{m2}Z_6\,e^{-i\theta_{23}})
+4q_{j1}\Re(q_{k2}q_{\ell
2}q^*_{m2}Z_7\,e^{-i\theta_{23}})\biggr]\nonumber\\
&& -\tfrac{1}{2} h_j h_k H^+ H^-\biggl[q_{j2}q^*_{k2} Z_2 +
q_{j1}q_{k1}Z_3
+2q_{j1}\Re(q_{k2}Z_7\,e^{-i\theta_{23}})\biggr] -\tfrac{1}{2} Z_2 H^+H^- H^+ H^-. \label{hhhh}
\end{Eqnarray}
It is remarkable how compact the expressions are for the Higgs boson interactions
when written explicitly in terms of invariant quantities that can be directly
related to observables.
We next turn to the Higgs-fermion Yukawa couplings. For simplicity, we focus
on the interaction of the Higgs bosons with three generations of quarks. The
corresponding interactions with leptons are easily obtained from the latter by
the appropriate substitutions. One starts with a Lagrangian
expressed in terms of the scalar doublet fields $\Phi_i$ ($i=1,2$) and
interaction--eigenstate quark fields. After electroweak symmetry breaking,
one can transform the scalar doublets into the Higgs basis fields $H_1$ and $H_2$.
At the same time,
one can identify the $3\times 3$ quark mass matrices. By redefining the left
and right-handed quark fields appropriately, the quark
mass matrices are transformed into diagonal form, where the diagonal elements are real
and non-negative. The resulting Higgs--quark Yukawa couplings are given by~\cite{Haber:2010bw}
\begin{Eqnarray}
-\mathcal{L}_{\rm Y}&=&\overline U_L (\kappa^U H_1^{0\,\dagger}
+\rho^U H_2^{0\,\dagger})U_R
-\overline D_L K^\dagger(\kappa^U H_1^{-}+\rho^U H_2^{-})U_R \nonumber \\
&& +\overline U_L K (\kappa^{D\,\dagger}H_1^++\rho^{D\,\dagger}H_2^+)D_R
+\overline D_L (\kappa^{D\,\dagger}H_1^0+\rho^{D\,\dagger}H_2^0)D_R+{\rm h.c.},
\label{lyuk}
\end{Eqnarray}
where $U=(u,c,t)$ and $D=(d,s,b)$ are the mass-eigenstate quark fields, $K$ is the
CKM
mixing matrix and $\kappa$ and $\rho$ are $3\times 3$
Yukawa coupling matrices. Note that $Q_{R,L}\equiv P_{R,L}Q$,
where $Q=U$ or~$D$ and $P_{R,L}\equiv\tfrac{1}{2}(1\pm\gamma\lsub{5})$ are
the right and left handed projection operators, respectively.
By setting $H_1^0=v$ and $H_2^0=0$, one can relate
$\kappa^U$ and $\kappa^D$ to the diagonal
quark mass matrices $M_U$ and $M_D$,
respectively,
\begin{equation} \label{mumd}
M_U=v\kappa^U={\rm diag}(m_u\,,\,m_c\,,\,m_t)\,,\qquad
M_D=v\kappa^{D\,\dagger}={\rm
diag}(m_d\,,\,m_s\,,\,m_b) \,.
\end{equation}
However, the complex matrices $\rho^Q$ ($Q=U,D$) are unconstrained. Moreover,
\begin{equation} \label{rhoq}
\rho^Q\to e^{-i\chi}\rho^Q\,,
\end{equation}
under the rephasing $H_2\to e^{i\chi}H_2$.
The Yukawa coupling of the Higgs doublets to the
leptons can be similarly treated by
replacing $U\to N$, $D\to E$, $M_U\to 0$, $M_D\to M_E$ and
$K\to\mathds{1}$, where $N=(\nu_e,\nu_\mu,\nu_\tau)$, $E=(e,\mu,\tau)$
and $M_E$ is the diagonal charged lepton mass matrix.
To obtain the physical Yukawa couplings of the Higgs boson, one must relate the
Higgs basis scalar fields to the Higgs mass-eigenstate fields. This yields the
physical Higgs--quark Yukawa couplings,
\begin{Eqnarray}
-\mathcal{L}_Y &=& \frac{1}{\sqrt{2}}\,\overline D\sum_{k=1}^3
\biggl\{q_{k1}\frac{M_D}{v} +
q_{k2}\,[e^{i\theta_{23}}\rho^D]^\dagger P_R+
q^*_{k2}\,e^{i\theta_{23}}\rho^D P_L\biggr\}Dh_k \nonumber \\
&& +\frac{1}{\sqrt{2}}\,\overline U \sum_{k=1}^3\biggl\{q_{k1}\frac{M_U}{v}+
q^*_{k2}\,e^{i\theta_{23}}\rho^U P_R+
q_{k2}\,[e^{i\theta_{23}}\rho^U]^\dagger P_L\biggr\}U h_k
\nonumber \\
&& +\biggl\{\overline U\bigl[K[e^{i\theta_{23}}\rho^D]^\dagger
P_R-[e^{i\theta_{23}}\rho^U]^\dagger KP_L\bigr] DH^+ +{\rm
h.c.}\biggr\}\,. \label{YUK}
\end{Eqnarray}
The
combinations $e^{i\theta_{23}}\rho^U$ and $e^{i\theta_{23}}\rho^D$ that appear in the interactions above are invariant under
the rephasing of $H_2$.
Note that no $\tan\beta$ parameter appears above! This is because $\tan\beta$ is
the absolute value of the ratio of the
two neutral Higgs vevs defined with respect to some arbitrary basis of the scalar doublets.
But, since the two Higgs doublet fields
carry identical quantum numbers, there is no physical principle that
singles out a particular basis. Indeed, physical observables cannot depend on the choice of basis. Hence,
$\tan\beta$ is an unphysical parameter. In contrast, all parameters that appear in \eq{YUK} are physical
and can be directly related to some observable.
It is convenient to rewrite the Higgs-fermion Yukawa couplings in terms of
the following two $3\times 3$ hermitian matrices that are invariant
with respect to the rephasing of $H_2$,
\begin{Eqnarray}
\rho^Q_R &\equiv& \frac{v}{2}\,M^{-1/2}_Q
\biggl\{e^{i\theta_{23}}\rho^Q +
[e^{i\theta_{23}}\rho^Q]^\dagger\biggr\}M^{-1/2}_Q\,,
\qquad \text{for $Q=U,D$}\,,\nonumber \\[6pt]
\rho^Q_I &\equiv& \frac{v}{2i}M^{-1/2}_Q
\biggl\{e^{i\theta_{23}}\rho^Q -
[e^{i\theta_{23}}\rho^Q]^\dagger\biggr\}M^{-1/2}_Q\,,
\qquad \text{for $Q=U,D$}\,.\label{rhodefs}
\end{Eqnarray}
Then, the Yukawa couplings take the following form:
\begin{Eqnarray}
\!\!\!\!\!\!\!\!\!\!
-\mathcal{L}_Y &=& \frac{1}{v\sqrt{2}}\,\overline D\sum_{k=1}^3 M_D^{1/2}
\biggl\{q_{k1}\mathds{1} + \Re(q_{k2})\rho^D_R+\Im(q_{k2})\rho^D_I+i\gamma\lsub{5}\bigl(\Im(q_{k2})\rho^D_R
-\Re(q_{k2})\rho^D_I\bigr)\biggr\} M_D^{1/2}Dh_k \nonumber \\
&& +\frac{1}{v\sqrt{2}}\,\overline U \sum_{k=1}^3 M_U^{1/2}\biggl\{q_{k1}\mathds{1}
+ \Re(q_{k2})\rho^U_R+\Im(q_{k2})\rho^U_I+i\gamma\lsub{5}\bigl(\Re(q_{k2})\rho^U_I
-\Im(q_{k2})\rho^U_R\bigr)\biggr\} M_U^{1/2}Uh_k \nonumber \\
&& +\frac{1}{v}\biggl\{\overline U\bigl[KM_D^{1/2}(\rho^D_R-i\rho^D_I)
M_D^{1/2}P_R-M_U^{1/2}(\rho^U_R-i\rho^U_I) M_U^{1/2}KP_L\bigr] DH^+ +{\rm
h.c.}\biggr\}, \label{YUK2}
\end{Eqnarray}
where $\mathds{1}$ is the $3\times 3$ identity matrix.
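The definitions in \eq{rhodefs} can be inverted as $e^{i\theta_{23}}\rho^Q=M_Q^{1/2}(\rho^Q_R+i\rho^Q_I)M_Q^{1/2}/v$, which is how \eq{YUK2} is obtained from \eq{YUK}. The following sketch (not part of the original text; the mass values and the random matrix are purely illustrative) verifies the hermiticity of $\rho^Q_{R,I}$ and this inversion numerically:

```python
import numpy as np

rng = np.random.default_rng(0)
v = 246.0  # GeV (illustrative normalization)

# Illustrative diagonal down-quark mass matrix (GeV) and a random complex
# 3x3 matrix standing in for the invariant combination X = e^{i theta_23} rho^D.
M_D = np.diag([0.005, 0.1, 4.2])
X = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))

Msqrt = np.diag(np.sqrt(np.diag(M_D)))        # M_D^{1/2}
Minv = np.diag(1.0 / np.sqrt(np.diag(M_D)))   # M_D^{-1/2}

rho_R = (v / 2) * Minv @ (X + X.conj().T) @ Minv
rho_I = (v / 2j) * Minv @ (X - X.conj().T) @ Minv

# Both combinations are hermitian by construction...
assert np.allclose(rho_R, rho_R.conj().T)
assert np.allclose(rho_I, rho_I.conj().T)

# ...and together they encode X with no loss of information.
assert np.allclose(Msqrt @ (rho_R + 1j * rho_I) @ Msqrt / v, X)
```

The same manipulation applies for $Q=U$, with $M_D$ replaced by $M_U$.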
The appearance of unconstrained complex $3\times 3$ Yukawa matrices
$\rho^Q_{R,I}$ in \eq{YUK2} indicates the presence of potential flavor-changing neutral Higgs--quark interactions.
If the off-diagonal elements of $\rho^Q_{R,I}$ are unsuppressed, they will generate tree-level Higgs-mediated FCNCs
that are incompatible with the strong suppression of FCNCs observed in nature.
\subsection{Tree-level Higgs boson couplings---the CP-conserving case}
It is instructive to consider the case of a CP-conserving Higgs scalar potential. If CP is \textit{explicitly} conserved,
then there exists a basis for the scalar fields in which all the parameters of the scalar potential are simultaneously
real. Such a basis (if it exists) is called a \textit{real basis}. If in addition the vacuum conserves CP, then
a real basis exists in which the vevs are simultaneously real. In
this case, a
real Higgs basis exists that is unique up to a redefinition of $H_2\to -H_2$.
Thus, without loss of generality, we can adopt a convention in which
the sign of $Z_6$ or $Z_7$ is fixed to be either positive or negative.
Having chosen a real Higgs basis, one can diagonalize the neutral Higgs mass matrix given in
\eq{mtwo}.
One immediately finds two neutral CP-even scalars, $h$ and $H$ (with $m_h<m_H)$ with squared-masses,
\begin{equation}
m_{H,h}^2=\tfrac{1}{2}\biggl\{Y_2+\bigl(Z_{345}+2Z_1\bigr)v^2\pm
\sqrt{\bigl[Y_2+\bigl(Z_{345}-2Z_1\bigr)v^2\bigr]^2+16Z_6^2v^4}\,\biggr\}\,,
\end{equation}
where $Z_{345}\equiv Z_3+Z_4+Z_5$ (since $Z_5$ is real by assumption),
and a CP-odd scalar $A$, with squared-mass
\begin{equation}
m_A^2=Y_2+(Z_3+Z_4-Z_5)v^2\,.
\end{equation}
Only one neutral Higgs mixing angle $\theta_{12}$ is required, since
$\theta_{13}=0$ and $e^{i\theta_{23}}={\rm sgn}~Z_6$.
It is conventional to
rotate from the Higgs basis to an arbitrary basis by an angle
$\beta$. In this basis, the conventionally defined Higgs mixing angle
$\alpha$ is related to $\theta_{12}$ by,
\begin{equation} \label{alphacp}
\alpha=\beta-\theta_{12}-\tfrac{1}{2}\pi\,.
\end{equation}
The quantity $\beta-\alpha=\theta_{12}+\tfrac{1}{2}\pi$ is clearly
independent of the choice of basis used to define $\beta$. In this
notation, we have
\begin{equation} \label{bma}
\cos\theta_{12}=\sin(\beta-\alpha)\,,\qquad\quad
\sin\theta_{12}=-\cos(\beta-\alpha)\, {\rm sgn}~Z_6\,,
\end{equation}
where $0\leq\beta-\alpha<\pi$ [in light of \eq{range}].
\Eqs{z1v}{z6rv} yield
\begin{Eqnarray}
\cos^2(\beta-\alpha)&=&\frac{2Z_1 v^2-m_1^2}{m_2^2-m_1^2}\,,\label{c2exact}\\
\sin(\beta-\alpha)\cos(\beta-\alpha)&=&-\frac{2Z_6v^2}{m_2^2-m_1^2}
\label{scexact}\,.
\end{Eqnarray}
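As a numerical cross-check of \eqs{c2exact}{scexact} (a sketch added for illustration, not from the original text), one can build the $2\times 2$ CP-even squared-mass matrix whose eigenvalues reproduce the closed-form masses quoted above: its diagonal entries are $2Z_1v^2$ and $Y_2+Z_{345}v^2$, with off-diagonal entry $2Z_6v^2$. This reconstruction, and the parameter values below, are assumptions:

```python
import numpy as np

# Illustrative Higgs-basis parameters (all assumed, CP-conserving case):
v = 246.0                      # GeV
Z1, Z345, Z6 = 0.13, 0.5, 0.1
Y2 = 400.0**2                  # GeV^2

# CP-even squared-mass matrix in the real Higgs basis, reconstructed so that
# its eigenvalues match the closed-form masses quoted in the text.
M2 = np.array([[2 * Z1 * v**2, 2 * Z6 * v**2],
               [2 * Z6 * v**2, Y2 + Z345 * v**2]])
mh2, mH2 = np.linalg.eigvalsh(M2)   # ascending order: m_h^2 < m_H^2

# Closed-form masses quoted in the text
disc = np.sqrt((Y2 + (Z345 - 2 * Z1) * v**2)**2 + 16 * Z6**2 * v**4)
mH2_formula = 0.5 * (Y2 + (Z345 + 2 * Z1) * v**2 + disc)
mh2_formula = 0.5 * (Y2 + (Z345 + 2 * Z1) * v**2 - disc)
assert np.isclose(mh2, mh2_formula) and np.isclose(mH2, mH2_formula)

# cos^2(beta-alpha) and sin(beta-alpha)cos(beta-alpha) from the exact relations
c2 = (2 * Z1 * v**2 - mh2) / (mH2 - mh2)
sc = -2 * Z6 * v**2 / (mH2 - mh2)
assert 0.0 <= c2 <= 1.0
assert np.isclose(c2 * (1 - c2), sc**2)   # the two relations are consistent
```

The final assertion reflects the exact identity $\cos^2(\beta-\alpha)\sin^2(\beta-\alpha)=[2Z_6v^2/(m_H^2-m_h^2)]^2$, which holds for any symmetric $2\times 2$ matrix of this form.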
It is convenient to
adopt a convention where $Z_6>0$ in which case we can take
$\theta_{23}=0$. In this convention, \eq{scexact} implies that the
sign of $\cos(\beta-\alpha)$ is negative (since by assumption,
$0\leq\sin(\beta-\alpha)\leq 1$ and $m_2>m_1$).
The corresponding invariant combinations of neutral Higgs mixing
angles given in Table~\ref{tabinv} simplify as shown in
Table~\ref{tabinvcp} below.
\begin{table}[h!]
\centering
\caption{Invariant combinations of Higgs mixing angles in the
CP-conserving case, where $c_{\beta-\alpha}\equiv\cos(\beta-\alpha)$ and
$s_{\beta-\alpha}\equiv\sin(\beta-\alpha)$, in a convention where $Z_6>0$. These are obtained from
Table~\ref{tabinv} by setting $\theta_{12}=\beta-\alpha-\tfrac{1}{2}\pi$
and $\theta_{13}=0$. \label{tabinvcp}
\\}
\begin{tabular}{|c||c|c|}\hline
$\phantom{a} k\phantom{a} $ &\phantom{a} $q_{k1}\phantom{a} $ & \phantom{a} $q_{k2} \phantom{a} $ \\
\hline
$1$ & $\phantom{-}s_{\beta-\alpha}$ & $c_{\beta-\alpha}$ \\
$2$ & $-c_{\beta-\alpha}$ & $s_{\beta-\alpha}$ \\
$3$ & $0$ & $i$ \\ \hline
\end{tabular}
\end{table}
\noindent
Using the results of Table~\ref{tabinvcp} and identifying
$h_1=h$, $h_2=-H$ and $h_3=A$ to match the standard conventions
of the CP-conserving 2HDM, we can obtain
from eqs.~(\ref{VVH})--(\ref{hhhh}) and \eq{YUK2}
the complete list of Higgs couplings in the CP-conserving case.
The properties of the three-point and
four-point Higgs boson-vector boson couplings are conveniently summarized
by listing the couplings that are proportional
to either $s_{\beta-\alpha}$ or $c_{\beta-\alpha}$, and the couplings
that are independent of $\beta-\alpha$~\cite{Gunion:1989we}:
\begin{equation}
\renewcommand{\arraycolsep}{1cm}
\let\us=\underline
\begin{array}{lll}
\us{\cos(\beta-\alpha)}& \us{\sin(\beta-\alpha)} &
\us{\hbox{\rm{angle-independent}}} \\ [3pt]
\noalign{\vskip3pt}
H W^+W^-& h W^+W^- & \qquad\longdash \\
H ZZ& h ZZ & \qquad\longdash \\
Z\hah& Z\haH & ZH^+H^-\,,\,\,\gamma H^+H^-\\
W^\pm H^\mph& W^\pm H^\mpH & W^\pm H^\mpA \\
ZW^\pm H^\mph& ZW^\pm H^\mpH & ZW^\pm H^\mpA \\
\gamma W^\pm H^\mph& \gamma W^\pm H^\mpH & \gamma W^\pm H^\mpA \\
\quad\longdash &\quad\longdash & VV\phi\phi\,,\,VVA\ha\,,\,VV H^+H^-
\end{array}
\label{littletable}
\end{equation}
where $\phi=h$ or $H$ and $VV=W^+W^-$, $ZZ$, $Z\gamma$ or
$\gamma\gamma$.
Note in particular that {\it all} vertices
in the theory that contain at least
one vector boson and {\it exactly one} non-minimal Higgs boson state
($H$, $A$ or $H^\pm$) are proportional to $\cos(\beta-\alpha)$.
This can be understood as a consequence of unitarity sum rules which
must be satisfied by the tree-level amplitudes of the
theory \cite{Cornwall:1974km,Lee:1977eg,Weldon:1984wt,Gunion:1990kf}.
\subsection{The decoupling/alignment limit of the 2HDM}
\label{sec:decouplalign}
Many models of extended Higgs sectors possess a decoupling limit, in which
there exists one scalar whose properties coincide with those of the
Standard Model Higgs boson~\cite{Haber:1989xc}.
The decoupling limit of the 2HDM corresponds
to the limiting case in which the Higgs doublet $H_2$ (in the Higgs basis)
receives a very large mass and is therefore decoupled from the
theory. This can be achieved by
assuming that $Y_2\gg v^2$ and $|Z_i|\lesssim\mathcal{O}(1)$ for all~$i$~\cite{Gunion:2002zf,Haber:2006ue}.
The effective low energy theory consists of a single Higgs doublet field
(namely, $H_1$), corresponding to the Higgs sector of the Standard Model.
The alignment limit of the 2HDM corresponds to the limiting case in
which the mixing of the two Higgs doublet fields $H_1$ and $H_2$ (in
the Higgs basis) is suppressed~\cite{Craig:2013hca}. This can be achieved by assuming that
$|Y_3|\ll v^2$ [which implies that $|Z_6|\ll 1$ via the scalar potential
minimum conditions given below \eq{rephase}]. In both the decoupling
and alignment limits, the neutral Higgs mass eigenstate is
approximately given by $\sqrt{2}\,\Re(H_1^0-v)$, and its couplings
approach those of the Standard Model (SM) Higgs boson.
In this section, we provide a general analysis of the
decoupling/alignment limit of the 2HDM following the work of Ref.~\cite{Haber:2013}.
It is convenient to order the neutral scalar masses such that $m_1\leq m_{2,3}$
and define the invariant Higgs mixing angles accordingly.
If we identify $h_1$ as the SM-like Higgs boson so that
\begin{equation}
\frac{g_{h_1VV}}{g_{h_{\rm SM}VV}}=c_{12} c_{13}\simeq 1\,,\qquad \text{where $V=W$ or $Z$}\,,
\end{equation}
then it follows that $s_{12}$, $s_{13}\ll 1$. Thus, in the
decoupling/alignment limit, \eqs{z6rv}{z6iv} yield~\cite{Haber:2006ue}:
\begin{Eqnarray}
s_{12}\equiv\sin\theta_{12}&\simeq &
\frac{2\,\Re(Z_6 e^{-i\theta_{23}})v^2}{m_2^2-m_1^2}\ll 1\,, \label{done}\\
s_{13}\equiv\sin\theta_{13}&\simeq&
-\frac{2\,\Im(Z_6 e^{-i\theta_{23}})v^2}{m_3^2-m_1^2}\ll 1\,.\label{dtwo}
\end{Eqnarray}
In addition, eq.~(\ref{z5iv}) implies that one additional small quantity characterizes the
decoupling/alignment limit,
\begin{equation} \label{dthree}
\Im(Z_5 e^{-2i\theta_{23}})\simeq \frac{(m_2^2-m_1^2) s_{12}s_{13}}{v^2}
\simeq-\frac{2\,\Im(Z_6^2 e^{-2i\theta_{23}})v^2}{m_3^2-m_1^2}\ll 1\,.
\end{equation}
Note that in the decoupling/alignment limit, \eq{z5rv} yields
\begin{equation}
m_2^2-m_3^2\simeq 2\,\Re(Z_5 e^{-2i\theta_{23}})v^2\,.
\end{equation}
In the decoupling limit, $m_1^2\simeq 2Z_1 v^2\ll m_2^2$, $m_3^2$,
$m_{H^\pm}^2$, which guarantees that \eqst{done}{dthree} are satisfied.
In addition, $m_2^2-m_3^2\simeq
m_{H^\pm}^2-m_3^2=\mathcal{O}(v^2)$.
That is, the
mass splittings among the heavy Higgs states are of order $v^2/m_3$.
In the alignment limit, $|Z_6|\ll 1$ ensures that \eqst{done}{dthree} are satisfied.
We again find that $m_1^2\simeq 2Z_1 v^2$, but
with no requirement that $h_2$, $h_3$ and $H^\pm$ must be
significantly heavier than $h_1$.
The couplings of $h_1$ to the vector bosons and fermions and the Higgs
self-couplings in the approach to the decoupling/alignment limit are exhibited in
Table~\ref{tabdecouplings}.
\begin{table}[h!]
\centering
\caption{2HDM couplings of the SM-like Higgs boson $h\equiv h_1$ normalized to
those of the SM Higgs boson, in the
decoupling/alignment limit. In the Higgs couplings to vector bosons,
$VV=W^+ W^-$ or $ZZ$.
In the Higgs self-couplings,
$Z_{6R}\equiv \Re(Z_6 e^{-i\theta_{23}})$ and $Z_{6I}\equiv \Im(Z_6 e^{-i\theta_{23}})$.
For the fermion couplings,
$D$ is a column vector of three down-type fermion fields
(either down-type quarks or charged leptons)
and $U$ is a column vector of three up-type quark fields. The
$3\times 3$ hermitian matrices, $\rho^Q_R$ and $\rho^Q_I$ (where $Q=U$
or $D$) are defined in \eq{rhodefs}. The normalization of the
pseudoscalar coupling of the Higgs boson $h$ to fermions is relative to
the corresponding scalar coupling to fermions.
\label{tabdecouplings} \\}
\begin{tabular}{|c||c|c|}\hline
Higgs interaction & 2HDM coupling & decoupling/alignment limit\\
\hline
$hVV$ & $c_{12} c_{13}$ & $1-\tfrac{1}{2} s_{12}^2-\tfrac{1}{2} s_{13}^2$
\\[6pt]
$hhh$ & see eq.~(\ref{hhh}) & $1-3(s_{12}Z_{6R}-s_{13}Z_{6I})/Z_1$
\\[6pt]
$hhhh$ & see eq.~(\ref{hhhh}) & $1-4(s_{12}Z_{6R}-s_{13}Z_{6I})/Z_1$
\\[6pt]
$h\overline{D}D$ & $c_{12} c_{13}\mathds{1}-s_{12}\rho^D_R-c_{12}s_{13}\rho^D_I$ &
$\mathds{1}-s_{12}\rho^D_R-s_{13}\rho^D_I$
\\[6pt]
$ih\overline{D}\gamma\lsub{5}D$ &
$s_{12}\rho^D_I-c_{12}s_{13}\rho^D_R$ &
$s_{12}\rho^D_I-s_{13}\rho^D_R$
\\[6pt]
$h\overline{U}U$ & $c_{12} c_{13}\mathds{1}-s_{12}\rho^U_R-c_{12}s_{13}\rho^U_I$ &
$\mathds{1}-s_{12}\rho^U_R-s_{13}\rho^U_I$
\\[6pt]
$ih\overline{U}\gamma\lsub{5}U$ &
$-s_{12}\rho^U_I+c_{12}s_{13}\rho^U_R$ &
$-s_{12}\rho^U_I+s_{13}\rho^U_R$
\\[6pt] \hline
\end{tabular}
\end{table}
If the scalar potential is CP-conserving, then in the
conventions established above, $\theta_{13}=\theta_{23}=0$ and $Z_6$
is real and positive. In this
case eqs.~(\ref{dtwo}) and (\ref{dthree}) are automatically satisfied.
The decoupling/alignment limit is then
achieved when eq.~(\ref{done}) is satisfied. Using \eq{scexact}, the
decoupling/alignment limit corresponds to~\cite{Gunion:2002zf}:
\begin{equation} \label{z6decoupling}
\cos(\beta-\alpha)\simeq -\frac{2Z_6 v^2}{m_H^2-m_h^2}\ll 1\,.
\end{equation}
In this limit, the neutral Higgs masses are given by,
\begin{equation}
m_h^2\simeq 2 Z_1 v^2\,,\qquad\quad m^2_{H,A}\simeq Y_2+(Z_3+Z_4\pm
Z_5)v^2\,.
\end{equation}
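The accuracy of \eq{z6decoupling} can be probed numerically. The sketch below (illustrative parameter values; the $2\times 2$ mass-matrix reconstruction with diagonal entries $2Z_1v^2$ and $Y_2+Z_{345}v^2$ and off-diagonal entry $2Z_6v^2$ is an assumption consistent with the mass formulas above) confirms that both $\cos(\beta-\alpha)\simeq -2Z_6v^2/(m_H^2-m_h^2)$ and $m_h^2\simeq 2Z_1v^2$ become increasingly accurate as $Y_2$ grows:

```python
import numpy as np

v, Z1, Z345, Z6 = 246.0, 0.13, 0.5, 0.1   # illustrative (assumed) values

errs_c, errs_m = [], []
for heavy in [500.0, 1000.0, 2000.0]:     # heavy scale sqrt(Y2) in GeV
    Y2 = heavy**2
    M2 = np.array([[2 * Z1 * v**2, 2 * Z6 * v**2],
                   [2 * Z6 * v**2, Y2 + Z345 * v**2]])
    mh2, mH2 = np.linalg.eigvalsh(M2)
    # exact cos(beta-alpha), negative in the convention Z_6 > 0
    c_exact = -np.sqrt((2 * Z1 * v**2 - mh2) / (mH2 - mh2))
    c_approx = -2 * Z6 * v**2 / (mH2 - mh2)
    errs_c.append(abs(c_exact - c_approx))
    errs_m.append(abs(mh2 - 2 * Z1 * v**2) / (2 * Z1 * v**2))

# Both approximations improve monotonically as the heavy scale grows
assert errs_c[0] > errs_c[1] > errs_c[2]
assert errs_m[0] > errs_m[1] > errs_m[2]
assert errs_m[-1] < 0.01
```
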
In the 2HDM with a CP-conserving scalar potential, the couplings of $h$ to the vector bosons and fermions and the Higgs
self-couplings in the approach to the decoupling/alignment limit are exhibited in
Table~\ref{tabdecouplingscp}.
Note that if the Yukawa coupling matrices $\rho^U$ and/or $\rho^D$ are complex,
then small CP-violating pseudoscalar couplings of the SM-like
Higgs boson to fermion pairs will be present, suppressed by a factor
of $c_{\beta-\alpha}$.
\begin{table}[h!]
\centering
\caption{2HDM couplings of the SM-like Higgs boson $h$ normalized to
those of the SM Higgs boson, in the
decoupling/alignment limit. The $hH^+H^-$ coupling given below is normalized to
the SM $hhh$ coupling.
The scalar Higgs potential is taken to be CP-conserving.
For the fermion couplings,
$D$ is a column vector of three down-type fermion fields
(either down-type quarks or charged leptons)
and $U$ is a column vector of three up-type quark fields. The
$3\times 3$ hermitian matrices, $\rho^Q_R$ and $\rho^Q_I$ (where $Q=U$
or $D$) are defined in \eq{rhodefs}. The normalization of the
pseudoscalar coupling of the Higgs boson $h$ to fermions is relative to
the corresponding scalar coupling to fermions.
\label{tabdecouplingscp} \\}
\begin{tabular}{|c||c|c|}\hline
Higgs interaction & 2HDM coupling & decoupling/alignment limit \\
\hline
$hVV$ & $s_{\beta-\alpha}$ & $1-\tfrac{1}{2}c^2_{\beta-\alpha}$ \\[6pt]
$hhh$ & see eq.~(\ref{hhh})
& $1+3(Z_6/Z_1)c_{\beta-\alpha}$ \\[6pt]
$hH^+H^-$ & see \eq{hhh} &
$\tfrac{1}{3}\left[(Z_3/Z_1)+(Z_7/Z_1)c_{\beta-\alpha}\right]$\\[6pt]
$hhhh$ & see eq.~(\ref{hhhh})
& $1+4(Z_6/Z_1)c_{\beta-\alpha}$ \\[6pt]
$h\overline{D}D$ & $s_{\beta-\alpha}\mathds{1}+c_{\beta-\alpha}\rho^D_R$
& $\mathds{1}+c_{\beta-\alpha}\rho^D_R$\\[6pt]
$ih\overline{D}\gamma\lsub{5}D$ &
$c_{\beta-\alpha}\rho^D_I$
& $c_{\beta-\alpha}\rho^D_I$\\[6pt]
$h\overline{U}U$ & $s_{\beta-\alpha}\mathds{1}+c_{\beta-\alpha}\rho^U_R$
& $\mathds{1}+c_{\beta-\alpha}\rho^U_R$ \\[6pt]
$ih\overline{U}\gamma\lsub{5}U$ &
$c_{\beta-\alpha}\rho^U_I$
& $c_{\beta-\alpha}\rho^U_I$\\[6pt] \hline
\end{tabular}
\end{table}
The 2HDM couplings of $H$ and $A$ in the decoupling/alignment limit
are also noteworthy. The couplings to vector boson pairs and fermion
pairs are displayed in Table~\ref{tabheavyhiggs}.
The pattern of Higgs couplings noted in \eq{littletable} indicates
that all couplings that involve at least one vector boson and exactly
one of the non-minimal Higgs states ($H$, $A$ or $H^\pm$) are
suppressed by a factor of $c_{\beta-\alpha}$ in the decoupling/alignment limit.
\begin{table}[h!]
\centering
\caption{2HDM couplings of $H$ and $A$ normalized to
those of the SM Higgs boson, in the
decoupling/alignment limit. The $Hhh$ coupling given below is
normalized to the SM $hhh$ coupling.
The scalar Higgs potential is taken to be
CP-conserving. In the convention of $Z_6>0$, we identify $H\equiv -h_2$
and $A\equiv h_3$.
See caption to Table~\ref{tabdecouplingscp}.
\label{tabheavyhiggs} \\}
\begin{tabular}{|c||c|c|}\hline
Higgs interaction & 2HDM coupling & decoupling/alignment limit \\
\hline
$HW^+W^-$\,,\,$HZZ$& $c_{\beta-\alpha}$ & $c_{\beta-\alpha}$ \\[6pt]
$Hhh$ & see \eq{hhh} & $-Z_6/Z_1+[1-\tfrac{2}{3}(Z_{345}/Z_1)]c_{\beta-\alpha}$
\\[6pt]
$H\overline{D}D$ & $c_{\beta-\alpha}\mathds{1}-s_{\beta-\alpha}\rho^D_R$ &
$c_{\beta-\alpha}\mathds{1}-\rho^D_R$
\\[6pt]
$iH\overline{D}\gamma\lsub{5}D$ &
$s_{\beta-\alpha}\rho^D_I$ &
$\rho^D_I$
\\[6pt]
$H\overline{U}U$ & $c_{\beta-\alpha}\mathds{1}-s_{\beta-\alpha}\rho^U_R$ &
$c_{\beta-\alpha}\mathds{1}-\rho^U_R$
\\[6pt]
$iH\overline{U}\gamma\lsub{5}U$ &
$-s_{\beta-\alpha}\rho^U_I$ &
$-\rho^U_I$
\\[6pt]
$AW^+W^-$\,,\,$AZZ$ & $0$ & $0$ \\[6pt]
$A\overline{D}D$ & $\rho^D_I$ &
$\rho^D_I$
\\[6pt]
$iA\overline{D}\gamma\lsub{5}D$ & $\rho^D_R$
&
$\rho^D_R$
\\[6pt]
$A\overline{U}U$ & $\rho^U_I$ &
$\rho^U_I$
\\[6pt]
$iA\overline{U}\gamma\lsub{5}U$ &
$-\rho^U_R$ &
$-\rho^U_R$
\\[6pt] \hline
\end{tabular}
\end{table}
For completeness we note that it may be possible to identify the SM-like Higgs
boson with $h_2=-H$. To achieve a
SM-like $HVV$ coupling, we must then have $c_{\beta-\alpha}\simeq 1$ and
$s_{\beta-\alpha}\ll 1$. In this case, \eq{scexact} yields
\begin{equation} \label{Hsm}
s_{\beta-\alpha}\simeq -\frac{2Z_6 v^2}{m_H^2-m_h^2}\ll 1\,.
\end{equation}
This cannot be satisfied in the decoupling limit, since by assumption
we are identifying $H$ with the SM-like Higgs boson, with $m_H>m_h$.
However, \eq{Hsm} can be satisfied in the alignment limit where
$Z_6\ll 1$.
The corresponding neutral Higgs masses are:
\begin{equation}
m_H^2=2Z_1 v^2\,,\qquad\quad
m_{h,A}^2=Y_2+(Z_3+Z_4\pm Z_5)v^2\,,
\end{equation}
which requires that $2Z_1 v^2>Y_2+(Z_3+Z_4+Z_5)v^2$ (since
$m_h<m_H$). In order for this interpretation to be viable, one must
check that the other Higgs states would not have been discovered at
LEP. Although it is not yet possible to fully rule out this case,
we shall not consider it further here.
\subsection{Higgs production at the ILC}
In the CP-conserving 2HDM, the neutral Higgs bosons are produced via
Higgsstrahlung and fusion processes as in the SM. In these
production mechanisms, the CP-even Higgs bosons are produced
via the coupling to gauge bosons.
Consequently, the production cross section of $h$ and $H$
via these processes are simply given by
\begin{eqnarray}
\sigma_{\rm 2HDM}(h) &=& \sigma_{\rm SM}(h) \sin^2(\beta-\alpha), \\
\sigma_{\rm 2HDM}(H) &=& \sigma_{\rm SM}(h)\cos^2(\beta-\alpha) .
\end{eqnarray}
In the decoupling regime where $\sin^2(\beta-\alpha) \simeq 1$ and $\cos^2(\beta-\alpha) \ll 1$,
the production cross section of $h$ is similar to that of the Higgs boson in the SM,
while that of $H$ is small.
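As written above, the two cross sections satisfy the sum rule $\sigma_{\rm 2HDM}(h)+\sigma_{\rm 2HDM}(H)=\sigma_{\rm SM}(h)$. A minimal numerical illustration (the value of $\cos(\beta-\alpha)$ below is an arbitrary choice near the decoupling limit):

```python
import math

cos_bma = -0.05                  # assumed value near the decoupling limit
sin2 = 1.0 - cos_bma**2          # sin^2(beta - alpha)

r_h = sin2                       # sigma_2HDM(h) / sigma_SM(h)
r_H = cos_bma**2                 # sigma_2HDM(H) / sigma_SM(h)

assert math.isclose(r_h + r_H, 1.0)      # sum rule
assert r_h > 0.99 and r_H < 0.01         # h is SM-like; H production is suppressed
```
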
The production of the CP-odd Higgs boson $A$ via Higgsstrahlung or
gauge boson fusion is highly suppressed since the $A$
does not couple to weak gauge boson pairs at tree-level.
\begin{figure}[h!]
\begin{center}
\includegraphics[width=50mm]{Chapter_Theory/figs/e+e-_HhA.pdf}\hspace{1cm}
\includegraphics[width=45mm]{Chapter_Theory/figs/e+e-H+H-.pdf}
\caption{Pair production diagrams for neutral and charged Higgs bosons. }
\label{FIG:HA}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\begin{minipage}{0.4\hsize}
\includegraphics[width=50mm,angle=-90]{Chapter_Theory/figs/AH_pro.pdf}
\end{minipage}
\hspace{1cm}
\begin{minipage}{0.4\hsize}
\includegraphics[width=80mm]{Chapter_Theory/figs/cs_2HDM.pdf}
\end{minipage}
\caption{Production cross sections of $e^+e^- \to H^+H^-$ and $e^+e^-\to AH$.}
\label{FIG:HApair}
\end{center}
\end{figure}
In addition, $H$ (or $h$) and $A$ are pair produced via the couplings
$H A Z$ and $h A Z$, as shown in
Fig.~\ref{FIG:HA} (Left). In light of \eq{littletable}, the corresponding cross-sections
are proportional to $\sin^2(\beta-\alpha)$ and $\cos^2(\beta-\alpha)$, respectively.
In the decoupling regime, where $\sin^2(\beta-\alpha) \simeq 1$, the $HA$ production
is maximal. In Fig.~\ref{FIG:HApair} (Left), the production cross section of $e^+e^- \to Z^\ast \to
HA$ is shown as a function of $m_A$, assuming $m_A=m_H$ for $\sqrt{s}=250$ and 500 GeV.
In these pair production mechanisms, the mass reach is kinematically limited by $\sqrt{s}/2$.
Beyond the threshold of pair production, single production processes
$e^+e^- \to f \bar f H$ ($f \bar f A$) could be used although the cross sections are
rather small due to the limited phase space.
Charged Higgs bosons are produced in pairs via $e^+e^-\to H^+H^-$ as long as
it is kinematically allowed as illustrated in Fig.~\ref{FIG:HA} (Right).
In Fig.~\ref{FIG:HApair} (Right), the production cross section
of $e^+e^- \to Z^\ast (\gamma) \to H^+H^-$ is shown
as a function of $m_{H^\pm}$ for $\sqrt{s}=300$, 500, 800 and 1000 GeV.
The associated production process, $e^+e^- \to H^\pm W^\mp$, is highly
suppressed in the
2HDM due to the absence of a $H^\pm W^\mp Z$ vertex at tree level.
Therefore, this process is especially sensitive to charged Higgs bosons
in extended Higgs sectors with exotic scalar fields in a
triplet or septet representation, where a tree-level $H^\pm W^\mp Z$
vertex is present.
If $m_{H^\pm}>\tfrac{1}{2}\sqrt{s}$, then pair production of charged Higgs
bosons is not kinematically allowed. In this case,
single charged Higgs boson production processes such as
$e^+e^- \to\tau \nu H^\pm$, $e^+e^- \to c s H^\pm$ and $e^+e^- \to t b H^\pm$
can be studied, although the cross sections for these processes
are rather small, typically 0.1 fb or less.
The production of multiple Higgs boson states can in principle
probe many of the Higgs self-coupling parameters and allow for a
(partial) reconstruction of the Higgs potential. The
mechanisms for double Higgs and triple Higgs production
in high energy $e^+e^-$ collisions have been
considered in Refs.~\cite{Djouadi:1996ah,Djouadi:1999gv,Djouadi:1999rca,Ferrera:2007sp,Hodgkinson:2009uj,LopezVal:2009qy}.
\subsection{Special forms for the Higgs-fermion Yukawa interactions}
\label{specialforms}
In the most general 2HDM, there is no reason why the matrices
$\rho_R^Q$ and $\rho_I^Q$, which appear in the Higgs-fermion Yukawa interactions
[cf.~eq.~(\ref{YUK2})], should be approximately diagonal,
as required by the absence of large FCNCs (mediated by tree-level
neutral Higgs exchange) in the data. Indeed in the general case, the diagonal structure
of $\rho_R^Q$ and $\rho_I^Q$ is not stable with respect to radiative
corrections, so that imposing such a condition
requires an artificial fine-tuning of parameters.
However, for special forms for the Higgs-fermion Yukawa interactions,
it turns out that the matrices $\rho_R^Q$ and $\rho_I^Q$ are
automatically diagonal, due to the presence of a symmetry that
guarantees that the diagonal structure is radiatively stable. In this
case, tree-level Higgs mediated FCNCs are ``naturally'' absent (in the
same way that tree-level FCNCs mediated by $Z$-exchange are absent due
to the GIM mechanism~\cite{Glashow:1970gm}).
In a general extended Higgs model, tree-level Higgs mediated FCNCs are
absent if for some choice of basis of the scalar fields,
at most one Higgs multiplet is responsible for
providing mass for quarks or leptons of a given electric
charge~\cite{Glashow:1976nt,Paschos:1976ay}. This Glashow-Weinberg-Paschos (GWP)
condition can be imposed by a symmetry
principle, which guarantees that the absence of tree-level Higgs
mediated FCNCs is natural. By an appropriate choice of symmetry transformation
laws for the fermions and the Higgs scalars, the resulting
Higgs-fermion Yukawa interactions take on the required form in some
basis. The symmetry also restricts the form of the Higgs scalar
potential in the same basis. These considerations were first applied in
the 2HDM in Refs.~\cite{Haber:1978jt} and \cite{Donoghue:1978cj}.
More generally, consider the Higgs--quark
Yukawa interactions of the 2HDM in the $\Phi_1$--$\Phi_2$ basis,
\begin{Eqnarray}
-\mathcal{L}_{\rm Y}&=&\overline U_L \Phi_{a}^{0\,*}{{h^U_a}} U_R -\overline
D_L K^\dagger\Phi_{a}^- {{h^U_a}}U_R
+\overline U_L K\Phi_a^+{{h^{D\,\dagger}_{a}}} D_R
+\overline D_L\Phi_a^0 {{h^{D\,\dagger}_{a}}}D_R \nonumber \\
&&\qquad\quad +\overline N_L\Phi_a^+{{h^{L\,\dagger}_{a}}} E_R
+\overline E_L\Phi_a^0 {{h^{L\,\dagger}_{a}}}E_R +{\rm h.c.}\,,
\label{higgsql}
\end{Eqnarray}
where we have made explicit both the couplings to the quarks and leptons.
In \eq{higgsql},
$h^{U,D,L}$ are $3\times 3$ Yukawa coupling matrices and there is an
implicit sum over $a=1,2$.
The GWP condition can be implemented in four different ways~\cite{Hall:1981bc,Barger:1989fj,Aoki:2009ha}:
\begin{enumerate}
\item Type-\Rmnum{1} Yukawa couplings: $h_1^U=h_1^D=h_1^L=0$,
\item Type-\Rmnum{2} Yukawa couplings: $h_1^U=h_2^D=h_2^L=0$,
\item Type-X Yukawa couplings: $h_1^U=h_1^D=h_2^L=0$,
\item Type-Y Yukawa couplings: $h_1^U=h_2^D=h_1^L=0$.
\end{enumerate}
The four types of Yukawa couplings can be implemented by a discrete
symmetry as shown in Table~\ref{Tab:type}.
\begin{table}[ht!]
\begin{center}
\caption{Four possible $\mathbb{Z}_2$ charge assignments that forbid
tree-level Higgs-mediated FCNC effects in the 2HDM~\cite{Aoki:2009ha}.}
\label{Tab:type}
\begin{tabular}{|cl||c|c|c|c|c|c|}
\hline && $\Phi_1$ & $\Phi_2$ & $U_R^{}$ & $D_R^{}$ & $E_R^{}$ &
$U_L$, $D_L$, $N_L$, $E_L$ \\ \hline
Type I && $+$ & $-$ & $-$ & $-$ & $-$ & $+$ \\
Type II &(MSSM like)& $+$ & $-$ & $-$ & $+$ & $+$ & $+$ \\
Type X &(lepton specific) & $+$ & $-$ & $-$ & $-$ & $+$ & $+$ \\
Type Y &(flipped) & $+$ & $-$ & $-$ & $+$ & $-$ & $+$ \\
\hline
\end{tabular}
\end{center}
\end{table}
The imposition of the discrete symmetry also restricts the form of
the Higgs scalar potential given in \eq{genpot} by setting
$m_{12}^2=\lambda_6=\lambda_7=0$. In this case, one can always
rephase $\Phi_1$ such that $\lambda_5$ is real, in which case the
scalar potential is CP-conserving. Moreover, assuming that a
U(1)$_{\rm EM}$-conserving potential minimum exists, the
corresponding vacuum is CP-conserving, corresponding to real
vacuum expectation values, $v_i\equiv\vev{\Phi_i^0}$. Thus, the
parameter
\begin{equation} \label{tanbeta}
\tan\beta\equiv\frac{v_2}{v_1}\,,
\end{equation}
is now meaningful since it refers to vacuum expectation values with respect to the basis
of scalar fields where the discrete symmetry has been imposed. By
convention, we shall take $0\leq \beta\leq\tfrac{1}{2}\pi$, in which case
$\tan\beta$ is non-negative. This can be achieved by redefining
$\Phi_2\to -\Phi_2$ if $\tan\beta$ is negative. However, such a redefinition would
also reverse the signs of $Z_6$ and $Z_7$.
Thus, by adopting the convention that $\tan\beta$ is non-negative, we
can no longer choose the convention where, say, $Z_6>0$. Indeed, in a convention
where $\tan\beta$ and $\sin(\beta-\alpha)$ are non-negative, both
$Z_6$ and $\cos(\beta-\alpha)$ can be of either sign [subject to the
constraint that $Z_6\cos(\beta-\alpha)<0$ due to \eq{scexact}].
It is straightforward to evaluate the $\rho^Q_{R,I}$ for the
Type-\Rmnum{1} and Type-\Rmnum{2} Higgs-quark Yukawa couplings.
Using the corresponding results for the $\rho^Q_{R,I}$, the couplings
of $h$, $H$ and $A$ are easily obtained from Tables~\ref{tabdecouplingscp} and \ref{tabheavyhiggs}.
\begin{enumerate}
\item
Type-\Rmnum{1}: $\rho^D_R=\rho^U_R=\mathds{1}\cot\beta$\,,\qquad $\rho^D_I=\rho^U_I=0$.
\begin{Eqnarray}
h\overline{D}D\,,\, h\overline{U}U:&&
\phantom{-}\frac{\cos\alpha}{\sin\beta}=s_{\beta-\alpha}+c_{\beta-\alpha}\cot\beta\,,\nonumber\\
H\overline{D}D\,,\, H\overline{U}U:&&
\phantom{-}\frac{\sin\alpha}{\sin\beta}=c_{\beta-\alpha}-s_{\beta-\alpha}\cot\beta\,,\nonumber\\
iA\overline{D}\gamma\lsub{5}D:&& \phantom{-}\cot\beta\,,\nonumber\\
iA\overline{U}\gamma\lsub{5}U:&& -\cot\beta\,.\label{typeoneff}
\end{Eqnarray}
\item
Type-\Rmnum{2}:
$\rho^D_R=-\mathds{1}\tan\beta$\,,\qquad $\rho^U_R=\mathds{1}\cot\beta$\,,\qquad $\rho^D_I=\rho^U_I=0$.
\begin{Eqnarray}
h\overline{D}D:&& -\frac{\sin\alpha}{\cos\beta}=s_{\beta-\alpha}-c_{\beta-\alpha}\tan\beta\,,\nonumber\\
h\overline{U}U:&&
\phantom{-}\frac{\cos\alpha}{\sin\beta}=s_{\beta-\alpha}+c_{\beta-\alpha}\cot\beta\,,\nonumber \\
H\overline{D}D:&& \phantom{-}\frac{\cos\alpha}{\cos\beta}=c_{\beta-\alpha}+s_{\beta-\alpha}\tan\beta\,,\nonumber\\
H\overline{U}U:&&
\phantom{-}\frac{\sin\alpha}{\sin\beta}=c_{\beta-\alpha}-s_{\beta-\alpha}\cot\beta\,,\nonumber\\
iA\overline{D}\gamma\lsub{5}D:&& -\tan\beta\,,\nonumber\\
iA\overline{U}\gamma\lsub{5}U:&& -\cot\beta\,.\label{t2}
\end{Eqnarray}
\end{enumerate}
Likewise, the charged Higgs Yukawa couplings to quarks are given by
\begin{Eqnarray}
-\mathcal{L}_Y&\ni&
\phantom{-}\frac{\sqrt{2}}{v}\cot\beta\biggl(\overline{U}\bigl[KM_DP_R-M_UKP_L\bigr]DH^+
+ {\rm h.c.}\biggr)\,,\qquad\qquad\quad \text{Type-I}\,,\label{chhiggsy1}\\
-\mathcal{L}_Y&\ni&
-\frac{\sqrt{2}}{v}\biggl(\overline{U}\bigl[KM_DP_R\tan\beta-M_UKP_L\cot\beta\bigr]DH^+
+ {\rm h.c.}\biggr)\,,\qquad\quad\!\!\! \text{Type-II}\,,\label{chhiggsy2}
\end{Eqnarray}
where
$M_{U,D}$ are the diagonal up-type and down-type $3\times 3$
quark mass matrices and $K$ is the CKM mixing matrix.
The Type-I [Type-II] neutral and charged
Higgs Yukawa couplings to quarks also apply to Type-X [Type-Y],
respectively.
Following the prescription below \eq{rhoq}, the charged
Higgs Yukawa couplings to leptons are obtained from the couplings to
quarks given above by
replacing $U\to N$, $D\to E$, $M_U\to 0$, $M_D\to M_E$ and
$K\to\mathds{1}$. The Type-I [Type-II] neutral and charged
Higgs Yukawa couplings to leptons also apply to Type-Y [Type-X], respectively.
The neutral Higgs Yukawa couplings to quarks and leptons (relative
to the corresponding couplings of the SM Higgs boson) are
conveniently summarized
in Table~\ref{yukawa_tab} for the four possible implementations of
the GWP condition.
\begin{table}[hb!]
\begin{center}
\caption{Higgs--fermion couplings in the 2HDM subject to the
$\mathbb{Z}_2$ symmetries given in Table~\ref{Tab:type}. The
couplings listed below are normalized relative to the SM Higgs
couplings $h_{\rm SM}\overline{U}U$, $h_{\rm SM}\overline{D}D$,
and $h_{\rm SM}\overline{E}E$.}
\label{yukawa_tab}
{\renewcommand\arraystretch{1.5}
\begin{tabular}{|c||ccccccccc|}\hline
&$h\overline{U}U$
&$h\overline{D}D$&$h\overline{E}E$&$H\overline{U}U$&$H\overline{D}D$&$H\overline{E}E$&$iA\overline{U}\gamma\lsub{5}U$&$iA\overline{D}\gamma\lsub{5}D$
& $iA\overline{E}\gamma\lsub{5}E$\\
&$\xi_h^u$&$\xi_h^d$&$\xi_h^e$&$\xi_H^u$&$\xi_H^d$&$\xi_H^e$&$\xi_A^u$&$\xi_A^d$&$\xi_A^e$\\
\hline\hline
Type I &$\frac{\cos\alpha}{\sin\beta}$&$\phantom{-}\frac{\cos\alpha}{\sin\beta}$&$\phantom{-}\frac{\cos\alpha}{\sin\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$-\cot\beta$&$\phantom{-}\cot\beta$&$\phantom{-}\cot\beta$\\\hline
Type II &$\frac{\cos\alpha}{\sin\beta}$&$-\frac{\sin\alpha}{\cos\beta}$&$-\frac{\sin\alpha}{\cos\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$\frac{\cos\alpha}{\cos\beta}$&$\frac{\cos\alpha}{\cos\beta}$&$-\cot\beta$&$-\tan\beta$&$-\tan\beta$\\\hline
Type X &$\frac{\cos\alpha}{\sin\beta}$&$\phantom{-}\frac{\cos\alpha}{\sin\beta}$&$-\frac{\sin\alpha}{\cos\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$\frac{\cos\alpha}{\cos\beta}$&$-\cot\beta$&$\phantom{-}\cot\beta$&$-\tan\beta$\\\hline
Type Y &$\frac{\cos\alpha}{\sin\beta}$&$-\frac{\sin\alpha}{\cos\beta}$&$\phantom{-}\frac{\cos\alpha}{\sin\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$\frac{\cos\alpha}{\cos\beta}$&$\frac{\sin\alpha}{\sin\beta}$&$-\cot\beta$&$-\tan\beta$&$\phantom{-}\cot\beta$\\\hline
\end{tabular}}
\end{center}
\end{table}
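The entries of Table~\ref{yukawa_tab} are simple trigonometric functions of $\alpha$ and $\beta$, and it can be useful to evaluate them numerically when comparing the four Types. The following Python sketch (the function name and data layout are our own, not from the text) collects the scaling factors $\xi$; in the alignment limit $\alpha=\beta-\tfrac{1}{2}\pi$, all $\xi_h$ reduce to unity, as expected for a SM-like $h$.

```python
import math

# Hypothetical helper (names are assumptions, not from the text): neutral-Higgs
# Yukawa scaling factors xi relative to the SM, as listed in Table yukawa_tab.
def xi_factors(model_type, alpha, beta):
    ca, sa = math.cos(alpha), math.sin(alpha)
    cb, sb = math.cos(beta), math.sin(beta)
    tb = math.tan(beta)
    cotb = 1.0 / tb
    up = {"h": ca / sb, "H": sa / sb, "A": -cotb}   # up-type: same in all Types
    d1 = {"h": ca / sb, "H": sa / sb, "A": cotb}    # "Type-I-like" column
    d2 = {"h": -sa / cb, "H": ca / cb, "A": -tb}    # "Type-II-like" column
    down = {"I": d1, "II": d2, "X": d1, "Y": d2}[model_type]
    lep  = {"I": d1, "II": d2, "X": d2, "Y": d1}[model_type]
    return {"u": up, "d": down, "e": lep}
```

As a quick consistency check, setting $\alpha=\beta-\tfrac{1}{2}\pi$ gives $\xi_h^u=\xi_h^d=\xi_h^e=1$ for every Type, while for Type II one recovers $\xi_A^d=-\tan\beta$ and $\xi_A^u=-\cot\beta$.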
In implementing the $\mathbb{Z}_2$ discrete symmetries given in
Table~\ref{Tab:type}, we noted above that the parameters of the scalar
Higgs potential are restricted such that
$m_{12}^2=\lambda_6=\lambda_7=0$ in the basis in which the discrete
symmetry is manifest. However, these latter conditions can be slightly
relaxed by taking $m_{12}^2\neq 0$ (while maintaining
$\lambda_6=\lambda_7=0$). In this case, it is convenient to introduce a
squared-mass parameter,
\begin{equation} \label{Mtwo}
M^2\equiv \frac{2m_{12}^2}{\sin 2\beta}=m_A^2+\lambda_5 v^2\,.
\end{equation}
When $M^2\neq 0$, the discrete symmetry is softly broken by a
dimension-two term in the scalar potential (while it is respected by
all dimension-four terms of the Lagrangian). In a 2HDM of this type,
Higgs--mediated FCNCs are still absent at tree-level, but can be
generated at the one-loop level. Since the neutral Higgs Yukawa
couplings are suppressed by fermion masses, one can check that
sensible parameter regimes exist in which the
radiatively generated FCNCs in this model are sufficiently suppressed
so as not to be in conflict with experimental data.
The existence of a softly-broken $\mathbb{Z}_2$ symmetry
that imposes $\lambda_6=\lambda_7=0$ in some basis yields the following constraint
on the Higgs basis scalar potential parameters:
\begin{equation}
(Z_6+Z_7)(Z_2-Z_1)(Z_1+Z_2-2Z_{345})+(Z_6-Z_7)\left[(Z_2-Z_1)^2-4(Z_6+Z_7)^2\right]=0\,,
\end{equation}
where $Z_{345}\equiv Z_3+Z_4+Z_5$.
The parameter $\beta$ is also determined by
\begin{equation}
\tan 2\beta=\frac{2(Z_6+Z_7)}{Z_2-Z_1}\,.
\end{equation}
The case of $Z_1=Z_2$ and $Z_6=-Z_7$ must be treated separately. In this case, a
$\mathbb{Z}_2$ symmetry governing the quartic terms of the scalar potential is automatically present, and the corresponding value
of $\beta$ is determined from the following quadratic equation,
\begin{equation}
(Z_1-Z_{345})\tan 2\beta+2Z_6(1-\tan^2 2\beta)=0\,.
\end{equation}
In the constrained 2HDMs considered in this subsection, there are a
number of benefits in allowing for a soft breaking of the
$\mathbb{Z}_2$ discrete symmetry, which permits a
nonzero $m_{12}^2$ in the basis where $\lambda_6=\lambda_7=0$.
First, this allows us to treat the MSSM Higgs sector, which employs
the Type-II Higgs--fermion Yukawa couplings as a consequence
of supersymmetry rather than a $\mathbb{Z}_2$ discrete symmetry.
Second, the 2HDM with a soft breaking of the $\mathbb{Z}_2$ discrete
symmetry possesses a decoupling limit (which corresponds to large
$m_{12}^2$). If $m_{12}^2=0$, no decoupling limit exists since the
scalar potential minimum conditions imply that $Y_2\sim\mathcal{O}(Z_i v^2)$.
Thus, in this latter case, a SM-like Higgs boson emerges only in the
alignment limit. Finally, taking $m_{12}^2\neq 0$ allows for
a new source of CP-violation in the Higgs sector. One can check~\cite{Gunion:2002zf}
that the Higgs scalar potential is explicitly
CP-violating if $\Im[(m_{12}^2)^2\lambda_5^*]\neq 0$.
If the scalar potential is explicitly CP-conserving, then one can
rephase the scalar fields such that $m_{12}^2$ and $\lambda_5$ are
real. In this case spontaneous CP-violation can occur if
$0<|m_{12}^2|<2\lambda_5 |v_1||v_2|$, in which case the minimum
of the scalar potential yields a relative phase
$\vev{\Phi_1^\dagger\Phi_2}= |v_1||v_2|e^{i\xi}$, where
$\cos\xi=m_{12}^2/(2\lambda_5 |v_1||v_2|)$.
The decays of the Higgs bosons in the constrained
2HDM depend on the Type of Yukawa interactions.
When $\sin(\beta-\alpha)=1$, the decay pattern of $h$ is
the same as those in the Standard Model at tree level.
When $\sin(\beta-\alpha)$ differs from 1, the couplings
of $h$ will deviate from Standard Model expectations. In particular,
the couplings of $h$
to down-type quarks and leptons will differ from those of the SM
with a pattern of deviations that
strongly depends on the Type of Yukawa interactions.
The precision measurement of these couplings makes it possible to
discriminate among the various Types of Yukawa interactions of the 2HDM.
On the other hand,
the decay patterns of $H$, $A$, and $H^\pm$ can vary over a large range~\cite{Barger:1989fj,Aoki:2009ha,Su:2009fz,Logan:2009uf}.
Figure~\ref{FIG:br_200} shows the decay branching ratios of $H$, $A$ and $H^\pm$
as a function of $\tan\beta$ for $\sin(\beta-\alpha)=1$ and
$m_H^{}=m_A^{}=m_{H^\pm}^{}=M=200$ GeV
[where $M$ is defined in \eq{Mtwo}], assuming Type I, II, X and Y Yukawa couplings.
For example, the amplitudes for the
fermionic and $gg$ decay modes of $H$, $A$ and $H^\pm$
in the Type-I 2HDM are all proportional to
$\cot\beta$. Consequently, the corresponding branching ratios are
roughly independent of $\tan\beta$.
Note that for $\sin(\beta-\alpha)=1$, the
couplings of $H$ and $A$ to fermion pairs are equal in magnitude [cf.~\eq{typeoneff}].
The differences between
the $A$ and $H$ branching ratios can be attributed to
$\Gamma(A\to gg, \gamma\gamma)>\Gamma(H\to gg, \gamma\gamma)$, which arises due to different loop factors
for the CP-odd and CP-even scalar couplings to $gg$ and $\gamma\gamma$ via the dominant top-quark loop.
In principle, the partial widths $\Gamma(H,A\to\gamma\gamma)$ can also differ due to
the contributions of the $W$ and charged Higgs loops to the amplitude for
$H\to\gamma\gamma$ (the corresponding couplings to $A$ are absent due to the assumed
CP conservation). However, in the limit of $\lambda_6=\lambda_7=\cos(\beta-\alpha)=0$,
the $W^+ W^- H$ coupling vanishes and
the $H^+ H^- H$ coupling takes a particularly simple form,
$g\lsub{H^+H^- H}=2(m_H^2-M^2)\cot 2\beta/v$~\cite{Gunion:2002zf},
which vanishes for the parameter choices employed in Figures~\ref{FIG:br_200} and \ref{FIG:br_400}.
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_1_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_2_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_X_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_Y_200.pdf}\\
\vspace{5mm}
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_1_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_2_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_X_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_Y_200.pdf}\\
\vspace{5mm}
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_1_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_2_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_X_200.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_Y_200.pdf}
\caption{Decay branching ratios of $H$, $A$ and $H^\pm$
in the four different Types of 2HDM as a function of $\tan\beta$
for $m_H^{}=m_A^{}=m_{H^\pm}^{}=M=200$ GeV.
The SM-like limit $\sin(\beta-\alpha) =1$ is taken.\label{FIG:br_200}}
\end{center}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_1_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_2_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_X_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/bH_Y_400.pdf}\\
\vspace{5mm}
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_1_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_2_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_X_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/A_Y_400.pdf}\\
\vspace{5mm}
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_1_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_2_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_X_400.pdf}%
\includegraphics[width = 35mm]{Chapter_Theory/figs/CH_Y_400.pdf}
\caption{Decay branching ratios of $H$, $A$ and $H^\pm$
in the four different Types of 2HDM as a function of $\tan\beta$
for $m_H^{}=m_A^{}=m_{H^\pm}^{}=M=400$ GeV.
The SM-like limit $\sin(\beta-\alpha) =1$ is taken.\label{FIG:br_400}}
\end{center}
\end{figure}
In Figure~\ref{FIG:br_400}, the decay
branching ratios of $H$, $A$ and $H^\pm$, analogous to those in Fig.~\ref{FIG:br_200},
are shown for $\sin(\beta-\alpha)=1$ and $m_H^{}=m_A^{}=m_{H^\pm}^{}=M=400$ GeV.
The two-body decays $H/A \to t \bar t$ are kinematically allowed in this case.
In general, the complexity of the $H$, $A$, $H^\pm$ decay schemes in
the various Types of Yukawa interactions
make it difficult to determine the underlying
model unless these scalars are created through a simple and well-characterized
pair-production reaction. Thus, even if these scalars are discovered at the
LHC, it will be important to study them via the pair-production
process at the ILC.
\subsection{Constraints due to flavor physics}
\begin{figure}[b!]
\vspace{-0.8in}
\begin{minipage}{0.49\hsize}
\includegraphics[width=7.5cm,angle=0]{Chapter_Theory/figs/BR_mh.pdf}
\end{minipage}
\begin{minipage}{0.49\hsize}
\includegraphics[width=7.5cm,angle=0]{Chapter_Theory/figs/BR_tanb.pdf}
\end{minipage}
\caption{Predictions of the decay branching ratio for $b\to s\gamma$
are shown at the NLO approximation as a function of $m_{H^\pm}^{}$ and $\tan\beta$.
The dark (light) shaded band represents the $1\sigma$ $(2\sigma)$ allowed region
of current experimental data. In the left panel, solid (dashed) curves denote
the prediction for $\tan\beta=2$ $(50)$ in various 2HDMs. In the right panel, solid,
dashed and dot-dashed curves are those for $m_{H^\pm}^{}=100, 300$ and $600$ GeV,
respectively.}
\label{FIG:bsg}
\end{figure}
Indirect contributions of Higgs bosons to precisely measurable observables
can be used to constrain extended Higgs sectors.
In this section, we summarize the experimental bounds from flavor
experiments on the constrained 2HDMs introduced in
Section~\ref{specialforms}. These bounds arise primarily due to
tree-level or loop diagrams that contain the charged Higgs boson.
The corresponding amplitudes involve the Yukawa interactions and hence
strongly depend on which Type of 2HDM is employed.
It is well known that the charged Higgs boson mass in the Type-II 2HDM
is stringently constrained by the precision measurements of the
radiative decay of $b\to s\gamma$~\cite{Hermann:2012fc}.
The process $b\to s\gamma$ receives contributions from the $W$ boson loop
and the charged Higgs boson loop in the 2HDM.
It is noteworthy that these two contributions
always constructively interfere in the Type-II (Type-Y) 2HDM,
whereas this is not the case in the Type-I (Type-X) 2HDM~\cite{Barger:1989fj,Aoki:2009ha,Su:2009fz,Logan:2009uf}.
In Fig.~\ref{FIG:bsg}~\cite{Aoki:2009ha}, we show the branching ratio of $B\to X_s\gamma$ for
each Type of 2HDM as a function of $m_{H^\pm}^{}$
(left-panel) and $\tan\beta$ (right-panel), which are evaluated at the next-to-leading
order (NLO) following the formulas in Ref.~\cite{Ciuchini:1997xe}.
The SM prediction at the NLO is also shown for comparison.
The theoretical uncertainty is about $15\%$
in the branching ratio (as indicated by dotted curves in Fig.~\ref{FIG:bsg}),
which mainly comes from the charm quark pole mass $m_c^\text{pole}=1.67\pm 0.07$
GeV~\cite{Beringer:1900zz}. (Note that Ref.~\cite{Ciuchini:1997xe} quotes an error in
$m_c^{\rm pole}$ of about $7\%$, which then leads to
a smaller theoretical uncertainty in the branching ratio for $b\to s\gamma$.)
The experimental bounds on the branching ratio are also indicated, where
the current world average value is given by
${\rm BR}(B\to X_s\gamma)=(3.52\pm 0.23\pm0.09)\times 10^{-4}$~\cite{Barberio:2008fa}.
One can see from Fig.~\ref{FIG:bsg} that the branching ratio in
the Type-I (Type-X) 2HDM lies within the $2\sigma$ experimental error
in all the indicated regions of $m_{H^{\pm}}$ for $\tan\beta\gtrsim 2$,
while that in the Type-II (Type-Y) 2HDM is far from the value indicated by
the data in the light charged Higgs boson region $(m_{H^\pm}^{}\lesssim 200$
GeV$)$.
In the right panel, a cancellation occurs in the Type-I (Type-X) 2HDM
since there are destructive interferences between the $W$ boson and
the $H^\pm$ contributions.
The results of these figures indicate that the $B\to X_s\gamma$
experimental results still permit a light charged Higgs boson
in the Type-I (Type-X) 2HDM.
We note that in the MSSM the chargino contribution can compensate
the charged Higgs boson contribution~\cite{Goto:1994ck,Ciuchini:1998xy}.
This cancellation weakens the limit on $m_{H^\pm}^{}$ from $b\to s\gamma$
in the Type-II 2HDM,
and allows a light charged Higgs boson
as in the Type-I (Type-X) 2HDM.
At the NNLO approximation, the branching ratio for $b\to s\gamma$
has been evaluated in the SM in
Refs.~\cite{Misiak:2006ab,Misiak:2006zs,Becher:2006pu}.
The predicted value at the NNLO approximation is less than that at the NLO approximation
over a wide range of renormalization scales.
The NNLO branching ratio for $b\to s\gamma$ in the Standard Model is $(3.15\pm 0.23)\times
10^{-4}$~\cite{Misiak:2006ab}, and a lower bound for $m_{H^\pm}^{}$, after adding the
NLO charged Higgs contribution, is found to be
$m_{H^\pm}^{} \gtrsim 380$~GeV ($95\%$ CL) in the Type-II (Type-Y)
2HDM~\cite{Hermann:2012fc}. (Note that the calculation of
Refs.~\cite{Misiak:2006zs,Becher:2006pu} for
the NNLO branching ratio in the SM yields $(2.98\pm 0.26)\times
10^{-4}$, and the corresponding charged Higgs mass bound is somewhat relaxed.)
On the other hand, in the Type-I (Type-X) 2HDM, although the branching ratio
becomes smaller as compared to the NLO evaluation, no serious bound on
$m_{H^\pm}^{}$ can be found for $\tan\beta \gtrsim 2$.
Therefore, the charged Higgs boson mass is not expected to be strongly constrained in
the Type-I (Type-X) 2HDM even at the NNLO, and the conclusion that
the Type-I (Type-X) 2HDM
is favored for $m_{H^\pm}^{}\lesssim 200$ GeV based on the NLO
analysis should not be changed.
The decay $B\to\tau\nu$ has been examined in the Type-II 2HDM in Refs.~\cite{Akeroyd:2003zr,Krawczyk:2007ne}.
The data for ${\rm BR}(B^+\to\tau^+\nu_\tau)=(1.65\pm 0.34)\times 10^{-4}$
are obtained at the $B$ factories~\cite{Beringer:1900zz}.
The decay branching ratio can be written as~\cite{Krawczyk:2007ne}
\begin{align}
\frac{{\mathcal B}(B^+\to\tau^+\nu_\tau)_\text{2HDM}}{{\mathcal B}(B^+\to\tau^+\nu_\tau)_\text{SM}}
\simeq\left(1-\frac{m_B^2}{m_{H^\pm}^2}\xi_A^d\xi_A^e\right)^2,
\end{align}
where coefficients $\xi_A^d$ and $\xi_A^e$ are defined in Table~\ref{yukawa_tab}.
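For the Type-II case, where $\xi_A^d\xi_A^e=\tan^2\beta$, the ratio above is easily evaluated numerically. The short Python sketch below (the function name and the quoted $B^+$ mass value are our own assumptions, not from the text) illustrates the $\tan^2\beta/m_{H^\pm}^2$ behavior of the correction:

```python
# Sketch: BR(B -> tau nu) in the Type-II 2HDM relative to the SM value,
# using xi_A^d * xi_A^e = tan^2(beta) from Table yukawa_tab.
# The helper name and the B^+ mass below are assumptions, not from the text.
M_B = 5.279  # B^+ meson mass in GeV (approximate PDG value)

def btaunu_ratio_type2(m_hpm, tan_beta):
    """(1 - m_B^2 tan^2(beta) / m_H+^2)^2, cf. the formula above."""
    return (1.0 - (M_B**2 / m_hpm**2) * tan_beta**2) ** 2
```

The correction decouples as $m_{H^\pm}\to\infty$ at fixed $\tan\beta$, and the ratio can even pass through zero when $m_{H^\pm}\simeq m_B\tan\beta$.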
In Fig.~\ref{FIG:mH+tanb}, the allowed region from the
experimental $B\to\tau\nu$ results
is shown in the Type-II 2HDM. The dark (light) shaded region denotes
the $2\sigma$ $(1\sigma)$ exclusion from the $B\to\tau\nu$ measurements.
The process is important only in the Type-II 2HDM at large values of $\tan\beta$.
The other Types of Yukawa interactions are not constrained by this process.
\begin{figure}[t!]
\begin{minipage}{0.49\hsize}
\includegraphics[width=7.5cm]{Chapter_Theory/figs/Btaunu.pdf}
\end{minipage}
\begin{minipage}{0.49\hsize}
\includegraphics[width=7.5cm]{Chapter_Theory/figs/TauLeptonicDecay.pdf}
\end{minipage}
\vspace{-0.4in}
\caption{Bounds from $B\to\tau\nu$ (left panel) and tau leptonic decay (right panel) on $m_{H^\pm}^{}$ as a function of $\tan\beta$ are shown. The dark (light) shaded region
corresponds to the $2\sigma$ $(1\sigma)$ exclusion of these experimental results.
In the Type-II 2HDM a wide region of the parameter space is constrained by $B\to\tau\nu$,
while only the tau leptonic decays are important for the Type-X 2HDM.\label{FIG:mH+tanb}}
\end{figure}
The rate for the leptonic decay of the tau lepton, $\tau\to\mu\overline{\nu}\nu$,
can deviate from the SM value due to the presence of a light charged Higgs
boson~\cite{Krawczyk:2004na}.
The partial decay rate is approximately expressed as~\cite{Krawczyk:2007ne}
\begin{align}
\frac{\Gamma_{\tau\to\mu\overline{\nu}\nu}^\text{2HDM}}{\Gamma_{\tau\to\mu\overline{\nu}\nu}^\text{SM}}
&\simeq 1-\frac{2m_\mu^2}{m_{H^\pm}^2}{\xi_A^e}^2 \kappa\left(\frac{m_\mu^2}{m_\tau^2}\right)+\frac{m_\mu^2m_\tau^2}{4m_{H^\pm}^4}{\xi_A^e}^4,
\end{align}
where the function $\kappa(x)$ is defined by
\begin{equation}
\kappa(x)=\frac{1+9x-9x^2-x^3+6x(1+x)\ln x}{1-8x+8x^3-x^4-12x^2\ln x}\,.
\end{equation}
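A small numerical sketch of these two formulas (the helper names and the lepton mass values below are our own assumptions, not from the text) exhibits the expected behavior: $\kappa(x)\to 1$ as $x\to 0$, and the charged-Higgs correction decouples as $m_{H^\pm}\to\infty$.

```python
import math

# Sketch of the leptonic tau decay rate ratio given above, with
# xi_A^e = -tan(beta) as in the Type-II/X 2HDM.
# Helper names and lepton mass values are assumptions, not from the text.
M_MU, M_TAU = 0.10566, 1.77686  # muon and tau masses in GeV (approx. PDG)

def kappa(x):
    """The phase-space function kappa(x) defined in the text."""
    num = 1 + 9*x - 9*x**2 - x**3 + 6*x*(1 + x)*math.log(x)
    den = 1 - 8*x + 8*x**3 - x**4 - 12*x**2*math.log(x)
    return num / den

def tau_decay_ratio(m_hpm, tan_beta):
    """Gamma(tau -> mu nu nu) in the 2HDM relative to the SM."""
    xi2 = tan_beta**2                      # (xi_A^e)^2 for Type II/X
    x = M_MU**2 / M_TAU**2
    return (1 - 2*M_MU**2/m_hpm**2 * xi2 * kappa(x)
              + M_MU**2*M_TAU**2/(4*m_hpm**4) * xi2**2)
```

For moderate charged Higgs masses and large $\tan\beta$ the ratio dips slightly below unity, which is the origin of the (weak) exclusion shown in Fig.~\ref{FIG:mH+tanb}.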
In the Type-II (Type-X) 2HDM, the leptonic Yukawa interaction
can be enhanced in the large $\tan\beta$ region. Hence, both model Types are weakly
constrained by tau decay data, as indicated in Fig.~\ref{FIG:mH+tanb}.
The precision measurement of the muon anomalous magnetic moment
can yield a mass bound on the Higgs boson in the SM~\cite{Jackiw:1972jz,Fujikawa:1972fe}.
This constraint can be applied to models with additional
interactions such as the 2HDM~\cite{Leveille:1977rc,Haber:1978jt,Krawczyk:1996sm}.
At the one-loop level, the 2HDM contribution is given by
\begin{align}
\delta a_\mu^{1-\text{loop}}
&\simeq \frac{G_Fm_\mu^4}{4\pi^2\sqrt2}
\left[\sum_{\phi^0=h,H}\frac{{\xi_{\phi^0}^e}^2}{m_{\phi^0}^2}
\left(\ln\frac{m_{\phi^0}^2}{m_\mu^2}-\frac76\right)
+\frac{{\xi_A^e}^2}{m_A^2}
\left(-\ln\frac{m_A^2}{m_\mu^2}+\frac{11}6\right)
-\frac{{\xi_A^e}^2}{6m_{H^\pm}^2}\right].
\end{align}
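As a rough numerical illustration, the one-loop formula can be coded directly (the function below and its input values are our own sketch, not from the text); for scalar masses far above $m_\mu$, the CP-even scalars give a positive contribution while the CP-odd and charged scalars give a negative one:

```python
import math

# Sketch of the one-loop 2HDM contribution to a_mu given above.
# The xi arguments are the Table yukawa_tab factors; the function name and
# sample inputs are assumptions, not from the text.
G_F, M_MU = 1.1663787e-5, 0.10566  # Fermi constant (GeV^-2), muon mass (GeV)

def delta_a_mu(m_h, m_H, m_A, m_hpm, xi_h, xi_H, xi_A):
    pref = G_F * M_MU**4 / (4 * math.pi**2 * math.sqrt(2))
    term = 0.0
    for m, xi in ((m_h, xi_h), (m_H, xi_H)):       # CP-even scalars
        term += xi**2 / m**2 * (math.log(m**2 / M_MU**2) - 7.0/6.0)
    term += xi_A**2 / m_A**2 * (-math.log(m_A**2 / M_MU**2) + 11.0/6.0)
    term -= xi_A**2 / (6 * m_hpm**2)               # charged Higgs piece
    return pref * term
```

Since each term scales as $m_\mu^4\,\xi^2/m_\phi^2$ (up to logarithms), the contribution is tiny unless $\xi\sim\tan\beta$ is very large and the scalars are light, in line with the statement above.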
This process is also purely leptonic and only yields milder bounds
on the Higgs boson masses for very large $\tan\beta$ values
in the Type-II (Type-X) 2HDM. No effective bound on the Type-I
(Type-Y) 2HDM is obtained.
It is also known that the two-loop (Barr-Zee type) diagram
can significantly affect $a_\mu$~\cite{Barr:1990vd}.
The contribution from this class of diagrams is only important
for large $\tan\beta$ values with smaller Higgs boson masses in the Type-II 2HDM.
For the other Types of 2HDM, a much less effective bound
on the parameter space is obtained.
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{Chapter_Theory/figs/type1.pdf}
\includegraphics[width=7cm]{Chapter_Theory/figs/type2.pdf}\\
\includegraphics[width=7cm]{Chapter_Theory/figs/type3.pdf}
\includegraphics[width=7cm]{Chapter_Theory/figs/type4.pdf}
\caption{
Excluded regions of the ($m_{H^+}\,,\,\tan\beta$) parameter space for
$Z_2$-symmetric 2HDM Types. The Type Y and X models [cf.~Table~\ref{Tab:type}]
are denoted above by Type III and IV, respectively.
The color coding is as follows: $\mathrm{BR}(B\to X_s\gamma)$ (red), $\Delta_{0-}$ (black contour),
$\Delta M_{B_d}$ (cyan), $B_u\to\tau\nu_\tau$ (blue), $B\to D\tau\nu_\tau$ (yellow),
$K\to\mu\nu_\mu$ (gray contour), $D_s\to\tau\nu_\tau$ (light green),
and $D_s\to\mu\nu_\mu$ (dark green).
\label{fig:combined}
}
\end{figure}
The $B^0-\bar B^0$ mass differences, $\Delta M_{B_d}$ and $\Delta M_{B_s}$,
are sensitive to charged Higgs exchange via box-type diagrams in which top
quarks are also exchanged. The data exclude relatively large top Yukawa couplings
that are proportional to $m_t \cot\beta$ for smaller charged Higgs boson masses.
This constraint is common among the four Types of 2HDMs.
In light of the current data for $\Delta M_{B_d}$,
$\tan\beta < 1$ is ruled out for $m_{H^\pm} < 500$ GeV at 95\% C.L.~\cite{Mahmoudi:2009zx}.
All the important constraints on the parameter space for each Type of 2HDM
are summarized in Fig.~\ref{fig:combined}, where excluded regions from the data of
$\mathrm{BR}(B\to X_s\gamma)$, $\Delta_{0-}$, $\Delta M_{B_d}$, $B_u\to\tau\nu_\tau$,
$B\to D\tau\nu_\tau$, $K\to\mu\nu_\mu$, $D_s\to\tau\nu_\tau$, and $D_s\to\mu\nu_\mu$
are plotted in the ($m_{H^+}$\,,\,$\tan\beta$) plane~\cite{Mahmoudi:2009zx}.
Here, we have included among the list of flavor observables the degree
of isospin asymmetry in the exclusive decay mode $B\to K^*\gamma$,
defined as~\cite{Kagan:2001zk,Bosch:2001gv}
\begin{equation}
\Delta_{0-}\equiv\frac{\Gamma(\overline{B}\llsup{0}\to\overline{K}\llsup{*0}\gamma)-
\Gamma(\overline{B}\llsup{\,-}\to\overline{K}\llsup{*\,-}\gamma)}
{\Gamma(\overline{B}\llsup{0}\to\overline{K}\llsup{*0}\gamma)+
\Gamma(\overline{B}\llsup{\,-}\to\overline{K}\llsup{*\,-}\gamma)}\,.
\end{equation}
The exclusion of $\tan\beta< 1$ in all four model Types for $m_{H^+}<500$~GeV
arises as a result of three observables: $\mathrm{BR}(B\to X_s\gamma)$, $\Delta_{0-}$, and $\Delta M_{B_d}$.
The constraints at low $\tan\beta$ are similar between the model Types, since the couplings to the up-type quarks are universal. In the Type I 2HDM,
a value of $\tan\beta>1$ signals the decoupling of one Higgs doublet from the whole fermion sector.
In Type II and Type III (=Type Y), which share the same coupling pattern for the quarks, there
exists a $\tan\beta$-independent lower limit of $m_{H^+}\gtrsim 300$ GeV imposed by
$\mathrm{BR}(B\to X_s\gamma)$. (This latter constraint is now
somewhat more stringent in light of Ref.~\cite{Hermann:2012fc}.)
No generic lower limit on $m_{H^+}$ is found in Type I and Type IV (=Type X) models.
Constraints for high $\tan\beta$ are only obtained in the Type II model.
This can be understood by noting that the leptonic and semi-leptonic observables
require $\tan\beta$-enhanced couplings $\lambda_{dd}\lambda_{\ell\ell}\sim \tan^2\beta\gg 1$
($d=d,s,b$) for the contributions to be interesting. In the Type III (=Type Y)
and the Type IV (=Type X) 2HDMs, these couplings are instead always $\lambda_{dd}\lambda_{\ell\ell}=-1$,
while in Type I they are proportional to $\cot^2\beta$.
Finally, recent BaBar data on the $\bar B \to D \tau
\bar \nu$ and $\bar B \to D^\ast \tau \bar \nu$ decays deviate slightly from
the SM predictions by $2.0\sigma$ and $2.7\sigma$,
respectively~\cite{Lees:2012xj}. Moreover, these data are also inconsistent with the
Type-I (X) and Type-II (Y) 2HDMs, since both decay rates, which depend
on the charged Higgs mass, cannot be
explained simultaneously for the same value of $m_{H^\pm}$.
However, these data can be compatible in
the context of a more general 2HDM with unconstrained Higgs-quark
Yukawa interactions~\cite{Tanaka:2012nw}. Meanwhile, there is no
confirmation yet of the BaBar results for $\bar B \to D \tau \bar\nu$
and $\bar B \to D^\ast \tau \bar \nu$ from the BELLE collaboration.
Thus, it is certainly premature to claim a definitive deviation from
the predictions of the Standard Model as well as all 2HDMs with Types
I, II, X, or Y Yukawa interactions.
\subsection{The inert 2HDM}
\label{inertsec}
The inert 2HDM is one of the simplest extensions of the
Standard Model~\cite{Barbieri:2006dq,Deshpande:1977rw}.
The inert 2HDM is defined as a Type-I model
in which the
$\mathbb{Z}_2$ discrete symmetry is imposed in the Higgs basis.
The $\mathbb{Z}_2$ charge assignments for the inert 2HDM are given
in Table~\ref{tabinert}. In particular, the Higgs basis field $H_2$,
which has no vev, is odd under the discrete symmetry.
As a result of this discrete symmetry, $Y_3=Z_6=Z_7=0$ and
there are no Yukawa couplings of $H_2$ to fermions.
\begin{table}[ht!]
\begin{center}
\caption{The $\mathbb{Z}_2$ charge assignments that define the Inert
2HDM, where $H_1$ and $H_2$ are the Higgs doublet fields in the
Higgs basis.}
\label{tabinert}
\begin{tabular}{|cl||c|c|c|c|c|c|}
\hline && $H_1$ & $H_2$ & $U_R^{}$ & $D_R^{}$ & $E_R^{}$ &
$U_L$, $D_L$, $N_L$, $E_L$ \\ \hline
Type I Inert && $+$ & $-$ & $-$ & $-$ & $-$ & $+$ \\
\hline
\end{tabular}
\end{center}
\end{table}
Since $Z_6=0$,
we are in the exact alignment limit, in which $h=\sqrt{2}\,\Re(H_1^0-v)$
is a mass-eigenstate whose couplings are equivalent to those of the
SM Higgs boson. We also identify
\begin{equation}
H_2=\begin{pmatrix} H^+ \\ (H+iA)/\sqrt{2}\end{pmatrix}\,,
\end{equation}
where $H$, $A$ and $H^+$ are the other Higgs mass eigenstates.
The $\mathbb{Z}_2$ discrete symmetry is unbroken in the vacuum since
$\vev{H_2^0}=0$. Thus, there are no couplings involving an odd
number of $H$, $A$ and $H^\pm$ fields. In particular, the lightest
inert particle (LIP) will be absolutely stable and is a potential dark matter
candidate~\cite{Barbieri:2006dq,LopezHonorez:2006gr,Gustafsson:2012aj,
Goudelis:2013uca}.
In addition, the inert 2HDM has rich phenomenological features. For
example, the dark matter could play a critical role in the breaking of
the electroweak symmetry~\cite{Hambye:2007vf} and the triggering of the
electroweak phase transition~\cite{Chowdhury:2011ga}. One can also
add a $\mathbb{Z}_2$-odd right-handed neutrino to the model, thereby providing
a mechanism for generating the light neutrino masses at the one-loop
level~\cite{Ma:2006km}.
The scalar potential for the inert 2HDM is given (in the Higgs basis)
by \eq{higgsbasispot} with $Y_3=Z_6=Z_7=0$. One can always rephase
$H_2\to e^{i\chi}H_2$ such that the one potentially complex parameter,
$Z_5$, is real.
Indeed, the sign of $Z_5$ is not physical, since the sign can be
flipped by redefining $H_2\to iH_2$.
Thus the Higgs sector of the inert 2HDM is CP-conserving
and depends on seven real
parameters $\{Y_1,Y_2,Z_1,Z_2,Z_3,Z_4,|Z_5|\}$. The potential
minimum condition is given by $Y_1=-Z_1 v^2$.
Using \eqs{chhiggsmass}{mtwo}, it follows that the physical Higgs
masses are given by
\begin{Eqnarray}
m^2_{h}&=&2Z_1 v^2\,,\\
m^2_{H,A}&=&Y_2+(Z_3+Z_4\pm |Z_5|)v^2\,,\\
m^2_{H^\pm}&=&Y_2+Z_3 v^2\,.
\end{Eqnarray}
Here, we have adopted a notation in which $h$ corresponds to the
scalar whose properties are precisely those of the SM Higgs boson. No
mass ordering of $h$ and $H$ is assumed. Indeed,
\eqs{c2exact}{scexact} imply that either $\cos(\beta-\alpha)=0$ if
$h$ is identified as $h_1$ or $\sin(\beta-\alpha)=0$ if $h$ is identified
as~$h_2$. In either case, this is the exact alignment limit with $h$
identified as the SM Higgs boson.
Moreover, we have used the
notation $H$ and $A$ above for the CP-even and CP-odd bosons of the
inert sector. However, an examination of the couplings of $H$ and $A$
implies only that $H$ and $A$ have CP-quantum numbers of opposite sign;
one cannot assign a unique CP-quantum number to $H$ or $A$
separately. (For further details, see Ref.~\cite{Haber:2010bw}.)
The couplings of the inert scalars to the gauge bosons and Higgs
bosons are easily obtained by setting $\cos(\beta-\alpha)=0$
[$\sin(\beta-\alpha)=0$] if one identifies the SM Higgs boson with $h_1$ [$h_2$].
If we require that all scalar squared masses are positive, with $v^2=-Y_1/Z_1$,
then it follows that~\cite{Deshpande:1977rw}
\begin{equation}
Y_1<0\,,\qquad Z_1 Y_2>Z_3 Y_1\,,\qquad Z_1
Y_2>(Z_3+Z_4\pm|Z_5|)Y_1\,.
\end{equation}
If we also require that the scalar potential is bounded from below
(vacuum stability), then the following additional constraints must be
satisfied~\cite{Deshpande:1977rw}
\begin{equation}
Z_1>0\,,\qquad Z_2>0\,,\qquad Z_3>-(Z_1 Z_2)^{1/2}\,,\qquad Z_3+Z_4\pm |Z_5|
>-(Z_1 Z_2)^{1/2}\,.
\end{equation}
If one associates the dark matter with an electrically
neutral LIP then $m_{H^\pm}>m_{H,A}$, which yields~\cite{Ginzburg:2010wa}
\begin{equation}
Z_4<|Z_5|\,.
\end{equation}
Finally, one can impose the conditions of
perturbativity~\cite{Barbieri:2006dq}, which can be used
to restrict the magnitudes of the $Z_i$.
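The inequalities above are straightforward to bundle into a single check on a candidate parameter point. The Python sketch below (the function name is our own; the perturbativity bounds are omitted) tests the positivity, boundedness-from-below and neutral-LIP conditions:

```python
import math

# Sketch: combined tree-level checks on an inert-2HDM parameter point,
# implementing the positive-mass, vacuum-stability and neutral-LIP
# inequalities quoted above (function name is an assumption).
def inert_point_ok(Y1, Y2, Z1, Z2, Z3, Z4, Z5_abs):
    if Z1 <= 0.0 or Z2 <= 0.0:
        return False                       # vacuum stability fails outright
    root = math.sqrt(Z1 * Z2)
    positive_masses = (Y1 < 0.0
                       and Z1 * Y2 > Z3 * Y1
                       and Z1 * Y2 > (Z3 + Z4 + Z5_abs) * Y1
                       and Z1 * Y2 > (Z3 + Z4 - Z5_abs) * Y1)
    bounded = (Z3 > -root
               and Z3 + Z4 + Z5_abs > -root
               and Z3 + Z4 - Z5_abs > -root)
    neutral_lip = Z4 < Z5_abs              # ensures m_{H^pm} > m_{H,A}
    return positive_masses and bounded and neutral_lip
```

Scanning candidate points through such a filter before computing observables is a common first step in inert-2HDM parameter studies.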
The seven parameters of the Higgs potential can be replaced by
the vev $v$, the four masses of the Higgs boson and the inert scalars,
$(m_h,m_{H^\pm},m_{H},m_{A})$, and two of the scalar self-couplings,
$Z_2$ and $Z_3$. For example, we can use this set of input
parameters to compute $Y_1=-\tfrac{1}{2} m_h^2$,
$Y_2=m^2_{H^\pm}-Z_3 v^2$, $Z_1 v^2=\tfrac{1}{2} m_h^2$,
$Z_4 v^2=\tfrac{1}{2}(m_H^2+m_A^2)-m_{H^\pm}^2$ and $|Z_5|v^2=\tfrac{1}{2}|m_H^2-m_A^2|$.
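These tree-level relations can be verified by a simple round trip between the potential parameters and the physical masses. In the Python sketch below the function names are our own; the formulas are those quoted above:

```python
# Sketch: translating between the inert-2HDM potential parameters and the
# physical squared masses, using the tree-level relations quoted in the text.
# Function names are assumptions, not from the text.
def masses_from_potential(Y1, Y2, Z1, Z3, Z4, Z5_abs):
    v2 = -Y1 / Z1                      # potential minimum: Y1 = -Z1 v^2
    m_h2 = 2 * Z1 * v2
    m_H2 = Y2 + (Z3 + Z4 + Z5_abs) * v2
    m_A2 = Y2 + (Z3 + Z4 - Z5_abs) * v2
    m_Hpm2 = Y2 + Z3 * v2
    return v2, m_h2, m_H2, m_A2, m_Hpm2

def potential_from_masses(v2, m_h2, m_H2, m_A2, m_Hpm2, Z3):
    Y1 = -0.5 * m_h2
    Z1 = 0.5 * m_h2 / v2
    Y2 = m_Hpm2 - Z3 * v2
    Z4 = (0.5 * (m_H2 + m_A2) - m_Hpm2) / v2
    Z5_abs = 0.5 * abs(m_H2 - m_A2) / v2
    return Y1, Y2, Z1, Z4, Z5_abs
```

Note that $Z_2$ drops out of the mass spectrum entirely; it enters only through the quartic self-interactions of the inert scalars.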
Collider phenomenology of the inert scalars in the inert 2HDM has been studied in
Refs.~\cite{Barbieri:2006dq,Cao:2007rm,Lundstrom:2008ai,Goudelis:2013uca}.
In Ref.~\cite{Lundstrom:2008ai}, experimental bounds on the inert scalar
masses are obtained by using the experimental results at
the LEP collider~\cite{Acciarri:1999km,Abbiendi:1999ar,Abdallah:2003xe}.
At the LHC, even though the parameter regions where the inert scalars could be
discovered have been suggested~\cite{Dolle:2009ft,Miao:2010rg,Gustafsson:2012aj},
a detailed search for the inert scalars and a
determination of their masses and quantum numbers would be difficult.
The ILC phenomenology for the inert scalars has been considered in
Ref.~\cite{Aoki:2013lhm}. Without loss of generality, we assume in what follows that $m_H<m_A$.
Four benchmark points for the mass spectrum of
inert scalars are listed in Table~\ref{tab:ilc}, which satisfy all available
theoretical and phenomenological constraints.
In the four benchmark points, the mass of $H$ is fixed to 65~GeV, so
that it does not permit the invisible decay of the SM Higgs boson,
$h\to HH$.
While a mass of $H$ up to $\sim80$~GeV is consistent with the dark matter relic
abundance analysis~\cite{LopezHonorez:2006gr,Gustafsson:2012aj,Goudelis:2013uca}, the
collider phenomenology does not change qualitatively by varying $m_H$
in this range.
\begin{table}[b!]
\begin{tabular}{c||ccc|cc}
& \multicolumn{3}{c|}{Inert scalar masses} &
\multicolumn{2}{c}{ILC cross sections [$\sqrt{s}=250$~GeV (500~GeV)]} \\
& $m_{H}$~[GeV] & $m_{A}$~[GeV] & $m_{H^\pm}$~[GeV] &
$\sigma_{e^+e^-\to HA}$~[fb] & $\sigma_{e^+e^-\to H^+H^-}$~[fb] \\
\hline
(I) & 65. & 73. & 120. & 152. (47.) & 11. (79.) \\
(II) & 65. & 120. & 120. & 74. (41.) & 11. (79.) \\
(III) & 65. & 73. & 160. & 152. (47.) & 0. (53.) \\
(IV) & 65. & 160. & 160. & 17. (35.) & 0. (53.)
\end{tabular}
\caption{Masses of inert scalars and ILC cross sections
for our four benchmark points.}\label{tab:ilc}
\end{table}
In Table~\ref{tab:ilc}, the cross sections of $HA$
production and $H^+H^-$ production at $\sqrt{s}=250$~GeV
and 500~GeV are shown.
The production cross sections of the inert
scalars are large enough to be tested at the ILC.
The cross section of $HA$ production can take the largest value, i.e.\
186~fb at $\sqrt{s}=190$~GeV, 78~fb at $\sqrt{s}=280$~GeV, and 46~fb at
$\sqrt{s}=350$~GeV for cases (I, III), (II), and (IV), respectively.
The cross section of $H^+H^-$ production can take the largest value, i.e.\
96~fb at $\sqrt{s}=380$~GeV and 53~fb at $\sqrt{s}=500$~GeV for
cases (I, II) and (III, IV), respectively.
At $\sqrt{s}=1$~TeV, they are about 10~fb and 20~fb for $HA$
production and $H^+H^-$ production, respectively, for all four benchmark points.
For cases (II, IV), $H^{\pm}$ decays predominantly into $W^\pm H$,
where the $W$ boson is off-shell if $m_{H^{\pm}}-m_{H}<m_W$.
For cases (I) and (III), $H^{\pm}\to W^\pm A$ decay would be
sizable as well, with the branching ratios about 32\% and 27\%,
respectively.
The decay of the $A$-boson is dominated by $A\to Z^{(*)}H$.
In the left panel of Fig.~\ref{fig:Mass}, the expected accuracy of
mass determination by the measurements of the four observables
for cases (I) and (II) at $\sqrt{s}=250$~GeV is shown~\cite{Aoki:2013lhm}.
The four bands are plotted in the ($m_H\,,\,m_{H^\pm}$) plane by
assuming that these four quantities are measured
in $\pm 2$~GeV accuracy (without any systematic shifts).
The accuracy of the $m_{H^\pm}$ ($m_{H}$) determination
would be $\pm 2$~GeV ($\pm 1$~GeV).
In the right panel of Fig.~\ref{fig:Mass}, the corresponding four bands are
plotted in the ($m_H\,,\,m_{H^\pm}$) plane for cases (III) and (IV) at
$\sqrt{s}=500$~GeV, again assuming that the four observables are measured
with $\pm 2$~GeV accuracy.
By combining the four measurements with the uncertainty of $\pm 2$~GeV,
$m_{H^\pm}$ and $m_{H}$ can be determined in $\pm 1$~GeV accuracy.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth,clip]{Chapter_Theory/figs/Off.pdf}
\includegraphics[width=0.49\textwidth,clip]{Chapter_Theory/figs/On.pdf}
\caption{Determinations of $m_{H^\pm}$ and $m_{H}$ by the four
observables are illustrated in the left [right] panel for the cases
(I, II) [(III, IV)] at $\sqrt{s}=250$~GeV [500~GeV].
Each observable is assumed to be measured with $\pm 2$~GeV accuracy.
} \label{fig:Mass}
\end{center}
\end{figure}
The determination of $m_{A}$ can also be achieved by combining the
observables in the process $e^+e^-\to HA$. However, at $\sqrt{s}=250$~GeV and
$\sqrt{s}=500$~GeV, the two
constraints are nearly degenerate, so the two masses cannot be determined uniquely.
In this case, one requires a fixed value of $m_{H}$ in the process
$e^+e^-\to H^+H^-$ as an input to determine $m_A$.
Then, the expected accuracy of the mass determination is
$\pm3$~GeV for the measurement of the observables in $\pm2$~GeV
accuracy.
The scenarios discussed above provide examples of parameter regions of the
inert 2HDM that cannot be detected at the LHC but can be probed in detail
at the ILC.
\subsection{The MSSM Higgs sector}\label{mssm}
In the minimal supersymmetric extension of the Standard Model (MSSM)
[see Ref.~\cite{HaberPDG} for a review and references],
all Standard Model particles are accompanied by an associated
supersymmetric partner whose spin differs by half a unit. However,
adding a doublet hypercharge-one higgsino superpartner would yield
a gauge anomaly, rendering the theory mathematically inconsistent.
In the MSSM, this problem is overcome by considering the
supersymmetric extension of a two-Higgs doublet model, where the two
Higgs doublets possess hypercharges $\pm 1$, respectively. As a
result, the corresponding two higgsino superpartners of opposite
hypercharge form a vector representation of spin-1/2 fermions, and the
gauge anomalies resulting from the higgsinos cancel exactly.
The Higgs sector of the MSSM is a 2HDM, whose Yukawa couplings and
scalar potential are constrained by supersymmetry (SUSY).
Instead of employing two hypercharge-one scalar doublets $\Phi_{1,2}$,
it is more convenient to introduce a $Y=-1$ doublet $H_d\equiv i\sigma_2 \Phi_1^*$
and a $Y=+1$ doublet
$H_u\equiv\Phi_2$:
\begin{equation}
H_d =\begin{pmatrix} H_d^1 \\ H_d^2 \end{pmatrix}=\begin{pmatrix}\Phi_1^{0\,*}\\
-\Phi_1^-\end{pmatrix}\,,\qquad\quad
H_u=\begin{pmatrix} H_u^1\\ H_u^2\end{pmatrix}=\begin{pmatrix}\Phi_2^+\\
\Phi_2^0\end{pmatrix}\,.
\end{equation}
The notation $H_{u,d}$ is motivated by the form of the Higgs--fermion Yukawa Lagrangian,
\begin{equation} \label{mssmyuk}
\mathcal{L}_{\rm Yukawa}=-h_u^{ij}(\bar u_R^i u_L^j H_u^2-\bar u_R^i
d_L^j H_u^1)-h_d^{ij}(\bar d_R^i d_L^j H_d^1-\bar d_R^i u_L^j H_d^2)+{\rm h.c.}\,,
\end{equation}
which arises as a consequence of supersymmetry.
That is, the neutral Higgs field $H_u^2$ couples exclusively to up-type quarks and
the neutral Higgs field $H_d^1$ couples exclusively to down-type
quarks.
In particular, the
so-called \textit{wrong Higgs interactions}~\cite{Haber:2007dj},
\begin{equation} \label{wronghiggs}
\mathcal{L}_{\rm wrong~Higgs}=-h_u^{\prime\,ij}(\bar u_R^i u_L^j H_d^{1\,*}+\bar u_R^i
d_L^j H_d^{2\,*})-h_d^{\prime\,ij}(\bar d_R^i d_L^j H_u^{2\,*}
+\bar d_R^i u_L^j H_u^{1\,*})+{\rm h.c.}\,,
\end{equation}
are not supersymmetric (due to the appearance of the
complex-conjugated scalar fields in the terms exhibited explicitly
above). Thus, the MSSM Higgs sector possesses Type-II Yukawa couplings
as a consequence of
supersymmetry (and not a $\mathbb{Z}_2$ discrete symmetry as discussed
in Section~\ref{specialforms}.)
The Higgs potential of the MSSM is:
\begin{Eqnarray}
V&=&
\left(m_d^2+|\mu|^2\right) H_d^{i*}H_d^i
+ \left(m_u^2+|\mu|^2\right) H_u^{i*}H_u^i
-b\left(\epsilon^{ij}H_d^iH_u^j+{\rm h.c.}\right) \nonumber \\
&&\qquad +\tfrac{1}{8}
\left(g^2 + g^{\prime\,2}\right) \left[H_d^{i*}H_d^i-H_u^{j*}H_u^j\right]^2
+\tfrac{1}{2} g^2 |H_d^{i*}H_u^i|^2\,, \label{susypot}
\end{Eqnarray}
where $\epsilon^{12}=-\epsilon^{21}=1$ and $\epsilon^{11}=\epsilon^{22}=0$,
and the sum over repeated indices is implicit.
In \eq{susypot}, $\mu$ is a supersymmetric Higgsino mass parameter and $m_d^2$,
$m_u^2$, $b$ are soft-supersymmetry-breaking squared-mass parameters.
The quartic
Higgs couplings are related to the SU(2) and U(1)$_{\rm Y}$ gauge couplings
as a consequence of SUSY.
After minimizing the Higgs potential, the neutral components of the
Higgs fields (in an appropriately chosen phase convention)
acquire real positive vevs: $\vev{H_d^0}=v_d$ and $\vev{H_u^0}=v_u$,
where
$v^2\equiv v_d^2+v_u^2={2m_W^2/ g^2}=(174~{\rm GeV})^2$.
The ratio of the two vevs is
\begin{equation}
\tan\beta\equiv\frac{v_u}{v_d}\,,\qquad 0\leq\beta\leq\tfrac{1}{2}\pi\,.
\end{equation}
In the Higgs basis, the phase of $H_2$ can be chosen such that
$Z_5$, $Z_6$ and $Z_7$ are real. In particular,
\begin{Eqnarray}
Z_1&=&Z_2=\tfrac{1}{4}(g^2+g^{\prime\,2})\cos^2 2\beta\,,\qquad Z_3=Z_5+\tfrac{1}{4}(g^2-g^{\prime\,2})\,,\qquad
Z_4=Z_5-\tfrac{1}{2} g^2\,,\nonumber \\
Z_5&=&\tfrac{1}{4}(g^2+g^{\prime\,2})\sin^2 2\beta\,,\qquad\qquad\quad
Z_7=-Z_6=\tfrac{1}{4}(g^2+g^{\prime\,2})\sin 2\beta\cos 2\beta\,,\label{zsusy}
\end{Eqnarray}
in the notation of Section~\ref{modelind}.
The existence of a Higgs basis where $Z_5$, $Z_6$ and $Z_7$ are
simultaneously real implies that the tree-level MSSM Higgs sector
is CP-conserving. Thus the neutral Higgs mass-eigenstates
are states of definite CP.
The five physical Higgs particles
consist of a charged Higgs pair
\begin{equation}
H^\pm=H_d^\pm\sin\beta+ H_u^\pm\cos\beta\,,
\end{equation}
with squared mass given by $m_{\hpm}^2 =m_{A}^2+m_W^2$,
one CP-odd scalar
\begin{equation}
A^0= \sqrt{2}\left({\rm Im\,}H_d^0\sin\beta+{\rm Im\,}H_u^0\cos\beta
\right)\,,
\end{equation}
with squared mass given by $m_A^2=2b/\sin 2\beta$, and two CP-even scalars
\begin{Eqnarray}
h^0 &=&\sqrt{2}\bigl[ -({\rm Re\,}H_d^0-v_d)\sin\alpha+
({\rm Re\,}H_u^0-v_u)\cos\alpha\bigr]\,,\nonumber\\
H^0 &=& \sqrt{2}\bigl[({\rm Re\,}H_d^0-v_d)\cos\alpha+
({\rm Re\,}H_u^0-v_u)\sin\alpha\bigr]\,,\nonumber
\end{Eqnarray}
where we have now labeled the Higgs fields according to their
electric charge. The angle $\alpha$ arises when the CP-even Higgs
squared-mass matrix (in the $H_d^0$---$H_u^0$ basis) is
diagonalized to obtain the physical CP-even Higgs states.
Equivalently, one can perform the diagonalization of the CP-even
Higgs squared-mass matrix in the Higgs basis, in which case the
corresponding diagonalization angle is given by $\alpha-\beta$.
All Higgs masses and couplings can be expressed in terms of two
parameters usually chosen to be $m_{A}$ and $\tan\beta$.
The CP-even Higgs bosons $h$ and $H$ are eigenstates of the
squared-mass matrix, which in the Higgs basis is given by
\begin{equation}
\mathcal{M}_e^2 =
\begin{pmatrix} m^2_Z \cos^2 2\beta&
-m^2_Z\sin 2\beta\cos 2\beta \\
-m^2_Z\sin 2\beta\cos 2\beta&
m_{A}^2+ m^2_Z \sin^2 2\beta \end{pmatrix}\,.
\end{equation}
The eigenvalues of $\mathcal{M}_e^2$ are
the squared-masses of the two CP-even Higgs scalars
\begin{equation} \label{mssmhm}
m^2_{H,h} = \tfrac{1}{2} \left( m_{A}^2 + m^2_Z \pm
\sqrt{(m_{A}^2+m^2_Z)^2 - 4m^2_Z m_{A}^2 \cos^2 2\beta}
\; \right)\,,
\end{equation}
and $\alpha$ is given by
\begin{equation}
\cos 2\alpha=-\cos 2\beta\frac{m_A^2-m_Z^2}{m_H^2-m_h^2}\,,\qquad\quad
\sin 2\alpha=-\sin 2\beta\frac{m_A^2+m_Z^2}{m_H^2-m_h^2}\,.
\end{equation}
Conventionally, one takes $0\leq\beta\leq\tfrac{1}{2}\pi$ and
$-\tfrac{1}{2}\pi\leq\alpha\leq 0$. It follows that [cf.~\eqs{c2exact}{scexact}]:
\begin{Eqnarray}
\cos^2(\beta-\alpha)&=&\frac{m_Z^2\cos^2 2\beta-m_h^2}{m_H^2-m_h^2}\,,
\label{cbma2}\\
\cos(\beta-\alpha)\sin(\beta-\alpha)&=&\frac{m_Z^2\sin 2\beta\cos 2\beta}{m_H^2-m_h^2}\,. \label{sbmacbma}
\end{Eqnarray}
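As a simple numerical illustration (a sketch, not part of the original text; the input values of $m_A$ and $\tan\beta$ are arbitrary), one can diagonalize the Higgs-basis squared-mass matrix above and verify both the closed-form eigenvalues of \eq{mssmhm} and the relation \eq{cbma2}:

```python
import numpy as np

mZ, mA, tanb = 91.1876, 300.0, 5.0     # GeV; illustrative inputs
beta = np.arctan(tanb)
s2b, c2b = np.sin(2 * beta), np.cos(2 * beta)

# Tree-level CP-even squared-mass matrix in the Higgs basis
M2 = np.array([[mZ**2 * c2b**2,         -mZ**2 * s2b * c2b],
               [-mZ**2 * s2b * c2b, mA**2 + mZ**2 * s2b**2]])
mh2, mH2 = np.linalg.eigvalsh(M2)      # eigenvalues in ascending order

# Closed-form squared masses
disc = np.sqrt((mA**2 + mZ**2)**2 - 4 * mZ**2 * mA**2 * c2b**2)
mh2_cf, mH2_cf = 0.5 * (mA**2 + mZ**2 - disc), 0.5 * (mA**2 + mZ**2 + disc)

# cos^2(beta - alpha) is then fixed by the masses alone
cbma2 = (mZ**2 * c2b**2 - mh2) / (mH2 - mh2)
print(np.sqrt(mh2), np.sqrt(mH2), cbma2)
```

The numerical eigenvalues agree with the closed-form expressions, and the resulting $m_h$ respects the tree-level bound $m_h\leq m_Z|\cos 2\beta|$.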
Note that $0\leq\beta-\alpha<\pi$ so that
$0\leq\sin(\beta-\alpha)\leq 1$ and the sign of $\cos(\beta-\alpha)$ is
given by the sign of $\sin 4\beta$.
The tree-level mass of the lightest CP-even Higgs
boson of the MSSM is bounded,
\begin{equation} \label{hbound}
m_{h}\leq m_Z |\cos 2\beta|\leq m_Z\,.
\end{equation}
This inequality arises because all Higgs self-coupling
parameters of the MSSM are related to the squares of the electroweak
gauge couplings. However, radiative corrections can boost the upper
bound of the lightest CP-even Higgs mass above its tree level bound of
$m_Z$. The leading effects of the radiative corrections will be
discussed further below.
The tree-level couplings of the MSSM Higgs bosons are those of a
CP-conserving 2HDM with Type-II Higgs--fermion Yukawa couplings.
For example, the Higgs couplings to gauge boson pairs ($V=W$ or $Z$)
are given by
\begin{equation}
g\lsub{h VV}= g\lsub{V} m\lsub{V}s_{\beta-\alpha} \,,\qquad\qquad
g\lsub{H VV}= g\lsub{V} m\lsub{V}c_{\beta-\alpha}\,,
\end{equation}
where
$g\lsub{V}\equiv \sqrt{2}m_V/v$.
There are no tree-level couplings of $A$ or $H^\pm$ to $VV$.
The couplings of $V$ to two neutral Higgs bosons
(which must have opposite CP-quantum numbers) are denoted by
$g_{\phi A Z}(p_\phi-p_A)$, where $\phi=h$ or $H$ and the
momenta $p_\phi$ and $p_A$ point into the vertex, and
\begin{equation}
g\lsub{hA Z}={gc_{\beta-\alpha}\over 2\cos\theta_W}\,,\qquad\qquad\quad
g\lsub{HA Z}=-\,{gs_{\beta-\alpha}\over 2\cos\theta_W}\,.
\end{equation}
The properties of the three-point and
four-point Higgs boson--vector boson couplings are conveniently summarized
in \eq{littletable} by listing the couplings that are proportional
to either $\sin(\beta-\alpha)$ or $\cos(\beta-\alpha)$ or are angle-independent.
Finally, the couplings of the MSSM Higgs bosons to quarks are given in
\eqs{t2}{chhiggsy2}
[with the corresponding coupling to leptons obtained by the
substitutions specified below \eq {chhiggsy2}].
The decoupling behavior of the MSSM Higgs sector is exhibited in
the limit of $m_{A}\gg m_Z$, where the corresponding tree-level
squared-masses of the Higgs bosons are given by:
\begin{Eqnarray}
m_{h}^2 &\simeq & m_Z^2\cos^2 2\beta\,,\nonumber \\
m_{H}^2 &\simeq & m_{A}^2+m_Z^2\sin^2 2\beta\,,\nonumber\\
m_{\hpm}^2& = & m_{A}^2+m_W^2\,.
\end{Eqnarray}
Since $\sin(\beta-\alpha)\simeq 1$ and $m_H\simeq m_A$ in the decoupling limit,
\eq{sbmacbma} yields
\begin{equation} \label{cbmadecoup}
\cos(\beta-\alpha)={m_Z^2\sin 4\beta\over 2m_{A}^2}+\mathcal{O}\left(\frac{m_Z^4}{m_A^4}\right)\,.
\end{equation}
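One can check \eq{cbmadecoup} numerically (an illustration not in the original text; the inputs are arbitrary): diagonalizing the exact Higgs-basis mass matrix, the light eigenvector is $(\sin(\beta-\alpha),\cos(\beta-\alpha))$, and for heavy $m_A$ its second component reproduces the approximation above.

```python
import numpy as np

mZ, mA, tanb = 91.1876, 1000.0, 5.0    # heavy mA: decoupling regime (illustrative)
beta = np.arctan(tanb)
s2b, c2b = np.sin(2 * beta), np.cos(2 * beta)

M2 = np.array([[mZ**2 * c2b**2,         -mZ**2 * s2b * c2b],
               [-mZ**2 * s2b * c2b, mA**2 + mZ**2 * s2b**2]])
w, V = np.linalg.eigh(M2)              # columns of V are eigenvectors, w ascending

# Light eigenvector = (sin(b-a), cos(b-a)); fix the overall eigenvector sign
sin_bma, cos_bma = V[0, 0], V[1, 0]
if sin_bma < 0:
    sin_bma, cos_bma = -sin_bma, -cos_bma

# Leading decoupling approximation quoted above
cos_bma_approx = mZ**2 * np.sin(4 * beta) / (2 * mA**2)
print(cos_bma, cos_bma_approx)
```

For $m_A=1$~TeV the exact and approximate values agree at the sub-percent level, as expected from the $\mathcal{O}(m_Z^4/m_A^4)$ remainder.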
Note that \eq{cbmadecoup} follows from the corresponding 2HDM result
given by \eq{z6decoupling} when $Z_6$ is given by its supersymmetric
value [cf.~\eq{zsusy}].
As expected, in the decoupling limit $m_{A}\simeq m_{H}
\simeq m_{\hpm}$, up to corrections of ${\cal O}(m_Z^2/m_{A})$, and
$\cos(\beta-\alpha)=0$ up to corrections of ${\cal O}(m_Z^2/m_{A}^2)$.
In general, in the limit of $\cos(\beta-\alpha)\to 0$,
all the $h^0$ couplings to SM particles approach their SM limits.
In particular, if $\lambda_V$ is a Higgs coupling to vector bosons
and $\lambda_t$ [$\lambda_b$] are Higgs couplings to up-type
[down-type] fermions, then
\begin{Eqnarray}
\frac{\lambda_{V}}{[\lambda_{V}]_{\rm SM}}&=&\sin(\beta-\alpha)=1+\mathcal{O}\left(\frac{m_Z^4\sin^2 4\beta}{m_A^4}\right)\,,\\
\frac{\lambda_{t}}{[\lambda_{t}]_{\rm
SM}}&=&1+\mathcal{O}\left(\frac{m_Z^2\cos^2\beta\cos 2\beta}{m_A^2}\right)\,,\\
\frac{\lambda_{b}}{[\lambda_{b}]_{\rm
SM}}&=&1+\mathcal{O}\left(\frac{m_Z^2\sin^2\beta\cos 2\beta}{m_A^2}\right)\,.\label{uf}
\end{Eqnarray}
Note that the approach to decoupling is fastest in the case of the
$hVV$ couplings and slowest in the case of the $hbb$ couplings at
large $\tan\beta$ (where the corresponding trigonometric factor in
\eq{uf} is maximal). This implies that at large $\tan\beta$,
a precision measurement of the
$h^0b\bar{b}$ coupling provides the greatest sensitivity to $m_A$~\cite{Carena:2001bg}.
When radiative corrections
are incorporated, additional parameters of the supersymmetric model
enter via virtual loops. The impact of these corrections
can be significant~\cite{Haber:1990aw,Okada:1990vk,Ellis:1990nz}.
For example, the tree-level prediction for
the upper bound of the lightest $CP$-even Higgs mass,
given by \eq{hbound}\cite{Inoue:1982ej,Flores:1982pr,Gunion:1984yn}, can be
substantially modified when
radiative corrections are included.
The qualitative behavior of these radiative corrections can be most easily
seen in the large top squark mass limit.
Denoting the geometric mean of the two top squark squared masses by
$M_{\rm S}^2\equiv M_{\widetilde t_1}M_{\widetilde t_2}$ and assuming that
$m_{A}>m_Z$ and that the top squark mass
splitting is small compared to $M_S$, the predicted upper bound for $m_h$
(which reaches its maximum at large $\tan\beta$ and $m_{A}\gg m_Z$)
is approximately given by
\begin{equation} \label{eqmH1up0}
m_{h}^2\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}}
m_Z^2\cos^2 2\beta+\frac{3g^2m_t^4}{8\pi^2m_W^2}
\left[\ln\left(\frac{M_{\rm S}^2}{m_t^2}\right)+
\frac{X_t^2}{M_{\rm S}^2}
\left(1-\frac{X_t^2}{12M_{\rm S}^2}\right)\right],
\end{equation}
where $X_t\equiv A_t-\mu\cot\beta$ is the top squark mixing factor.
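A rough numerical evaluation of \eq{eqmH1up0} (a sketch, not from the original text; the helper name `mh_bound` and all input values are illustrative) shows the familiar features: the bound grows logarithmically with $M_S$ and is maximized at "maximal mixing", $X_t^2=6M_S^2$, where the mixing term contributes its largest value of 3.

```python
import math

mZ, mW, mt, v = 91.1876, 80.379, 173.0, 246.0   # GeV (illustrative inputs)
g2 = 4.0 * mW**2 / v**2                          # SU(2) gauge coupling squared

def mh_bound(tanb, MS, Xt):
    """One-loop leading-log upper bound of eq. (eqmH1up0); a rough sketch."""
    c2b = math.cos(2.0 * math.atan(tanb))
    loop = (3.0 * g2 * mt**4 / (8.0 * math.pi**2 * mW**2)) * (
        math.log(MS**2 / mt**2)
        + (Xt**2 / MS**2) * (1.0 - Xt**2 / (12.0 * MS**2)))
    return math.sqrt(mZ**2 * c2b**2 + loop)

MS = 2000.0
print(mh_bound(20.0, MS, 0.0))                   # no stop mixing
print(mh_bound(20.0, MS, math.sqrt(6.0) * MS))   # maximal mixing, Xt^2 = 6 MS^2
```

The maximal-mixing value exceeds the refined bound of $\sim 130$~GeV quoted below, consistent with the statement that \eq{eqmH1up0} overestimates the true upper bound.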
A more complete treatment of the radiative corrections~\cite{Degrassi:2002fi,Martin:2007pg,Kant:2010tf}\
shows that
\eq{eqmH1up0} somewhat overestimates the true upper bound of $m_{h}$.
These more refined computations, which incorporate
renormalization group improvement and the
leading two-loop (and even some three-loop) contributions, yield $m_{h}\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 130$~GeV
(with an accuracy of a few GeV)
for $m_t=173$~GeV and $M_S\mathrel{\raise.3ex\hbox{$<$\kern-.75em\lower1ex\hbox{$\sim$}}} 2$~TeV. The observed Higgs
mass of 126 GeV is consistent with this bound.
Radiative corrections also can have an important impact on the
MSSM Higgs couplings.
Although radiative corrections to couplings generically tend to be at the
few-percent level, there is some potential for significant effects.
In certain cases, radiative corrections are enhanced for
values of $\tan\beta\gg 1$.
In addition,
CP-violating effects induced by complex SUSY-breaking parameters that
enter in loops, can yield new effects in the Higgs sector not present
at tree-level.
One leading correction of note is the renormalization of the mixing
angle $\alpha$. This modifies the quantity $\cos(\beta-\alpha)$ which
governs the decoupling limit and
plays a critical role in the couplings of the Higgs bosons. Employing
the same approximations used in obtaining \eq{eqmH1up0},
one finds that \eq{cbma2} is modified by replacing the
tree-level bound, $m_Z^2\cos^2 2\beta$, with the
radiatively-corrected bound given in \eq{eqmH1up0}, and replacing
$m_h$ and $m_H$ with the corresponding loop-corrected masses.
However, this approximation can still miss some important loop
contributions that can drastically alter the tree-level results.
Let $M_{\rm SUSY}$ characterize the mass scale of SUSY-breaking
(or equivalently, the masses of the superpartners that
appear in the loops that contribute to the radiative corrections). If
$M_{\rm SUSY}\gg m_A$, then one can integrate out the superpartners
that appear in the loops and obtain an
effective low-energy theory that is equivalent to the most general
2HDM. In this effective 2HDM,
the supersymmetric value of $Z_6$ given in \eq{zsusy} is modified by
radiative corrections. In certain special regions of the MSSM
parameter space, the radiative corrections are $\tan\beta$-enhanced
and can approximately cancel
out the tree-level value for a particular (large) value of
$\tan\beta$,
leaving $Z_6\simeq 0$. This is the
alignment limit (which is not possible in the MSSM Higgs sector at
tree-level) and yields $\cos(\beta-\alpha)\simeq 0$ in light of
\eq{z6decoupling}. In this case, the lightest CP-even Higgs
boson is very SM-like, whereas the other Higgs bosons of the model
need not be significantly heavier.
Moreover, the supersymmetric Yukawa couplings given
in \eq{mssmyuk} are modified by the radiative corrections,
$h_q\to h_q+\delta h_q$ ($q=u$, $d$). In particular, since the
effective 2HDM below the SUSY-breaking scale
does not respect supersymmetry, the wrong-Higgs Yukawa
interactions given in \eq{wronghiggs} are also generated by the
radiative corrections; the corresponding Yukawa couplings will be
denoted by $h'_q\equiv \Delta h_q$. Of particular interest are the
wrong Higgs Yukawa couplings to bottom quarks,
$$
\Delta h_b \simeq h_b\left[\frac{2\alpha_s}{3\pi}\mu M_3
\mathcal{I}(M_{\tilde b_1},M_{\tilde b_2},
M_3) + \frac{h_t^2}{16\pi^2}\mu A_t \mathcal{I}
(M_{\tilde t_1}, M_{\tilde t_2}, \mu)\right]\,,
$$
where $M_3$ is the Majorana gluino mass,
$\mu$ is the supersymmetric Higgs-mass parameter, and $\widetilde b_{1,2}$
and $\widetilde t_{1,2}$ are the mass-eigenstate bottom squarks and top
squarks, respectively. The loop integral is given by
\begin{equation}
\mathcal{I}(a,b,c) =
\frac{a^2b^2\ln(a^2/b^2) + b^2c^2\ln(b^2/c^2)
+ c^2a^2\ln(c^2/a^2)}{(a^2-b^2)(b^2-c^2)(a^2-c^2)}\,.
\end{equation}
In the limit where at least one of the arguments of
$\mathcal{I}(a,b,c)$ is large,
\begin{equation}
\mathcal{I}(a,b,c)\sim 1/{\rm max}(a^2,b^2,c^2)\,.
\end{equation}
Thus, in the limit where $M_3\sim \mu\sim A_t\sim M_{\tilde b}\sim M_{\tilde t}
\sim M_{\rm SUSY}\gg m_Z$, the one-loop contributions to $\Delta h_b$ do
\textit{not} decouple.
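The non-decoupling behavior is easy to verify numerically (an illustration not in the original text; the helper name `loop_I` is ours): the integral satisfies $\mathcal{I}(a,a,a)=1/(2a^2)$ in the degenerate limit, and rescaling all masses by a common factor $\lambda$ multiplies $\mathcal{I}$ by $1/\lambda^2$, so a combination such as $\mu M_3\,\mathcal{I}$ stays fixed as $M_{\rm SUSY}\to\infty$.

```python
import math

def loop_I(a, b, c):
    """The loop integral I(a,b,c) above, for pairwise-distinct arguments."""
    x, y, z = a * a, b * b, c * c
    num = (x * y * math.log(x / y) + y * z * math.log(y / z)
           + z * x * math.log(z / x))
    return num / ((x - y) * (y - z) * (x - z))

# Degenerate limit: I(a,a,a) -> 1/(2 a^2)
print(loop_I(1.0, 1.001, 0.999))

# Common rescaling of all masses: I scales like 1/lambda^2, so
# mu * M3 * I(...) is scale invariant -- the non-decoupling noted above.
print(100.0 * loop_I(8.0, 10.0, 12.0), loop_I(0.8, 1.0, 1.2))
```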
The wrong-Higgs couplings yield
$\tan\beta$-enhanced modifications of some physical observables.
After expressing the Higgs doublet fields $H_d$ and $H_u$ in terms of
the physical Higgs mass-eigenstates, one can identify the
$b$-quark mass,
\begin{equation}
m_b = h_bv \cos \beta \left(1 + \frac{\delta h_b}{h_b}
+ \frac{\Delta h_b \tan \beta}{h_b}\right)
\equiv h_bv\cos \beta (1 + \Delta_b)\,,
\end{equation}
which defines the quantity $\Delta_b$.
In the limit of large $\tan\beta$ the term proportional to $\delta
h_b$ can be neglected,
in which case,
\begin{equation}
\Delta_b\simeq \frac{\Delta h_b}{h_b}\tan\beta\,.
\end{equation}
Thus, $\Delta_b$ is $\tan\beta$--enhanced if $\tan\beta\gg 1$.
As previously noted, $\Delta_b$ survives in the limit of large $M_{\rm SUSY}$;
this effect does not decouple.
From the effective Yukawa Lagrangian, one can obtain the couplings of the physical Higgs
bosons to third generation fermions.
For example, the one-loop corrections can generate measurable shifts
in the decay rate for $h^0\to b\bar b$,
\begin{equation}
g_{h^0 b\bar b}= -\frac{gm_b}{2m_W}\frac{\sin\alpha}{\cos\beta}
\left[1+\frac{1}{1+\Delta_b}\left(\frac{\delta h_b}{h_b}-
\Delta_b\right)\left( 1 +\cot\alpha \cot\beta \right)\right]\,.
\end{equation}
At large $\tan\beta\sim 20$---50, $\Delta_b$ can be as large as 0.5 in magnitude and of
either sign, leading to a significant enhancement or suppression of the
Higgs decay rate to $b\bar b$.
If $m_A\sim\mathcal{O}(M_{\rm SUSY})$, then below the scale of
supersymmetry-breaking one must also integrate out the second Higgs
doublet (in the Higgs basis). In this case, the
low-energy effective Higgs theory is a one-Higgs doublet model, and thus
$g_{h^0 b\bar b}$ must approach its SM value. Indeed in this limit,
\begin{equation}
1+\cot\alpha\cot\beta=-\frac{2m_Z^2}{m_A^2}\cos
2\beta+\mathcal{O}\left(\frac{m_Z^4}{m_A^4}\right)\,.\nonumber
\end{equation}
Thus the previously non-decoupling SUSY radiative corrections
decouple for $m_A\gg m_Z$ as expected.
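The decoupling relation for $1+\cot\alpha\cot\beta$ can be checked against the exact tree-level mixing angle (an illustrative numerical sketch, not part of the original text; the input values are arbitrary):

```python
import math

mZ, mA, tanb = 91.1876, 1000.0, 10.0   # illustrative decoupling-regime inputs
beta = math.atan(tanb)
s2b, c2b = math.sin(2 * beta), math.cos(2 * beta)

# Exact tree-level CP-even masses and the mixing angle alpha (formulas above)
disc = math.sqrt((mA**2 + mZ**2)**2 - 4 * mZ**2 * mA**2 * c2b**2)
mh2, mH2 = 0.5 * (mA**2 + mZ**2 - disc), 0.5 * (mA**2 + mZ**2 + disc)
cos2a = -c2b * (mA**2 - mZ**2) / (mH2 - mh2)
sin2a = -s2b * (mA**2 + mZ**2) / (mH2 - mh2)
alpha = 0.5 * math.atan2(sin2a, cos2a)          # -pi/2 <= alpha <= 0

lhs = 1.0 + (1.0 / math.tan(alpha)) * (1.0 / math.tan(beta))
rhs = -2.0 * mZ**2 * c2b / mA**2                # leading decoupling expression
print(lhs, rhs)
```

For $m_A=1$~TeV the two sides agree up to the expected $\mathcal{O}(m_Z^2/m_A^2)$ relative correction, confirming that the SUSY vertex correction to $g_{h^0b\bar b}$ switches off in the decoupling limit.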
The one-loop corrected effective Higgs-fermion Lagrangian
can exhibit CP-violating effect due to possible CP-violating phases
in $\mu$, $A_t$ and $M_3$. This leads to mixed-CP neutral Higgs
states and CP-violating couplings. This is the so-called CPX scenario of the MSSM.
In the limit of $m_{H^\pm}\ll M_{\rm SUSY}$, the effective low-energy
theory is the most general CP-violating 2HDM. Thus, the
model-independent treatment of the general 2HDM is applicable.
Further details on the CPX scenario can be found in Refs.~\cite{Carena:2000ks,Carena:2001fw,Carena:2002es,Accomando:2006ga}.
\section{Other extended Higgs sectors}
\subsection{Constraints from the tree-level rho parameter}
Apart from the part of the SM that is governed by the gauge
principle, there is no theoretical principle for fixing the number of Higgs scalars.
Indeed, there are a variety of possibilities for non-minimal
Higgs sectors that are consistent with phenomenological requirements.
We have already treated in detail the two-Higgs doublet extension of
the SM in Section~\ref{tdhm}. However, it is also possible
that the Higgs sector contains additional Higgs doublets and/or
one or more non-doublet representations.
A very strong constraint on exotic Higgs sectors derives from
the electroweak rho parameter whose experimental value is very close to unity.
The current value for the rho parameter is given by
$\rho=1.0008 ^{+0.0017}_{-0.0007}$~\cite{ErlerPDG}.
In the SM, the rho parameter is exactly unity at the tree level,
\begin{eqnarray}
\rho = \frac{m_W^2}{m_Z^2 \cos^2\theta_W} =1. \label{rho_sm}
\end{eqnarray}
Moreover, including radiative corrections, the
SM with the Higgs boson mass of 126 GeV yields a value of $\rho$ that
is consistent with the experimentally measured value~\cite{Baak:2012kk}.
Models with only Higgs doublets and singlets do not spoil the
tree-level prediction of $\rho=1$. However,
the addition of scalars that transform under higher-dimensional
representations generally violates the tree-level prediction of $\rho=1$.
In a general SU(2)$\times$U(1) model with $n$ scalar multiplets $\phi_i$ with isospin $T_i$ and
hypercharge $Y_i$, the rho parameter is given at the tree level by~\cite{Gunion:1989we}
\begin{equation} \label{rhogen}
\rho= 1+
\frac{\sum_{i} [4T_i(T_i+1)-3Y_i^2]|v_i|^2 c_i}
{\sum_{i} 2Y_i^2 |v_i|^2}\,,
\end{equation}
where
\begin{equation}
c_i=\begin{cases} 1, & (T,Y)\in \text{complex representation}, \\
\tfrac{1}{2}, & (T,Y=0)\in \text{real representation}.
\end{cases}
\end{equation}
For a Higgs sector composed of complex ($c=1$) hypercharge-1 Higgs
doublets ($T=1/2$ and $Y=1$), it follows that $\rho=1$, independently
of the value of the vacuum expectation value $v$. One can also add
an arbitrary number of Higgs singlets ($T=Y=0$) without changing the
value of $\rho$.
To automatically have $\rho=1$ independently of the
Higgs vevs, one must satisfy the Diophantine equation~\cite{Tsao:1980em},
\begin{equation} \label{diaph}
(2T+1)^2-3Y^2=1\,,
\end{equation}
for non-negative integer values of
$(2T,Y)$. The smallest nontrivial solution beyond the complex
$Y=1$ Higgs doublet is a complex Higgs septet with $T=3$ and $Y=4$.
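A brute-force search (an illustration, not in the original text) confirms the solutions of \eq{diaph} quoted above:

```python
# Search for non-negative integer solutions (2T, Y) of (2T+1)^2 - 3 Y^2 = 1
solutions = [(twoT, Y) for twoT in range(0, 30) for Y in range(0, 30)
             if (twoT + 1) ** 2 - 3 * Y ** 2 == 1]
print(solutions)
```

Besides the trivial singlet $(2T,Y)=(0,0)$, the search returns the complex doublet $(1,1)$, the septet $(6,4)$, and the next (much larger) solution $(25,15)$.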
For extended Higgs sectors with multiplets that do not satisfy
\eq{diaph}, the tree-level value for $\rho$ will generally differ from
1. In order to be consistent with the rho parameter data,
there are two possible strategies. First, one can fine-tune the values of the
vevs $v_i$ such that $\rho=1$. This may require some vevs to be
significantly smaller than 174~GeV, or it may require an unnatural
cancellation of terms in the numerator of the second term in \eq{rhogen}.
As an example of the first strategy, consider the effect of adding to
the Standard Model an extra hypercharge-two complex scalar triplet
field $\phi_2=\Delta$ ($T_2=1$, $Y_2=2$),
which has been employed for generating the neutrino mass by the
so-called type-II seesaw mechanism~\cite{Mohapatra:1980yp}. Denoting the
vev of the neutral component of the scalar triplet by $v_\Delta$,
\eq{rhogen} yields
\begin{eqnarray}
\rho = \frac{1 + 2 v_\Delta^2/v^2}{1+4v_\Delta^2/v^2}.
\end{eqnarray}
Therefore, there is a strong upper bound for $v_\Delta$ ($\lesssim$ a few GeV)
in light of the rho parameter data.
Second, one can make a clever
choice of Higgs multiplets such that the required cancellation of terms in the
numerator of the second term in \eq{rhogen} appears to be natural. The simplest
example of this mechanism is the Georgi-Machacek model~\cite{Georgi:1985nv}, which consists
of the SM hypercharge-one complex scalar doublet $\Phi(T=\tfrac{1}{2},Y=1)$,
a complex hypercharge-two scalar triplet $\Delta(T=1,Y=2)$ and a real scalar triplet
$\xi(T=1,Y=0)$. Suppose that the vevs of the neutral fields of
the two scalar triplets are equal, $v_\Delta=v_\xi$, which can be arranged with a
special choice of the scalar potential parameters corresponding to
an enhanced SU(2)$_L\times$SU(2)$_R$ global symmetry. In this case,
it is easy to check that
\eq{rhogen} yields $\rho=1$ independently of the value of the vev of
the scalar doublet, $v_\Phi$, and the common value of the triplet
vevs, $v_\Delta=v_\xi$. Indeed
$v_\Delta/v_\Phi\sim 1$ is phenomenologically viable, which would lead to
a very different phenomenology than the simple doublet plus triplet
model considered above. However, since the enhanced
SU(2)$_L\times$SU(2)$_R$ global symmetry of the scalar potential is
not respected by the hypercharge gauge interactions and the Yukawa
interactions, the condition that $v_\Delta=v_\xi$ is not stable under
radiative corrections and therefore must be considered as a tuning of parameters~\cite{Gunion:1990dt}.
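The general formula \eq{rhogen} can be implemented in a few lines (a sketch not in the original text; the helper name `rho_tree` and the vev values are ours), reproducing the three cases just discussed: a lone doublet gives $\rho=1$, a doublet plus a $Y=2$ complex triplet gives $(1+2v_\Delta^2/v^2)/(1+4v_\Delta^2/v^2)$, and the Georgi-Machacek combination with $v_\Delta=v_\xi$ restores $\rho=1$.

```python
import math

def rho_tree(multiplets):
    """Tree-level rho from the general formula above.

    multiplets: iterable of (T, Y, vev, is_real); is_real=True means c_i = 1/2."""
    num = sum((4 * T * (T + 1) - 3 * Y**2) * v**2 * (0.5 if is_real else 1.0)
              for T, Y, v, is_real in multiplets)
    den = sum(2 * Y**2 * v**2 for T, Y, v, _ in multiplets)
    return 1.0 + num / den

vphi, vD = 174.0, 5.0                                       # illustrative vevs

rho_sm = rho_tree([(0.5, 1, vphi, False)])                  # doublet only
rho_t  = rho_tree([(0.5, 1, vphi, False), (1, 2, vD, False)])  # + Y=2 triplet
rho_gm = rho_tree([(0.5, 1, vphi, False),                   # Georgi-Machacek:
                   (1, 2, vD, False), (1, 0, vD, True)])    # equal triplet vevs
print(rho_sm, rho_t, rho_gm)
```

Note that the real $Y=0$ triplet contributes to the numerator but not to the denominator, which is why its vev can cancel the complex triplet's contribution exactly.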
\subsection{An upper bound for the Higgs coupling to vector boson pairs}
\label{Anupperbound}
Consider a CP-conserving
extended Higgs sector that has the
property that $\rho=1$ and no tree-level $ZW^\pm\phi^\mp$ couplings (where $\phi^\pm$ are physical charged scalars
that might appear in the scalar spectrum). Then it follows that~\cite{Gunion:1990kf}
\begin{equation}
\sum_i g^2_{\phi_iVV}=g^2 m_W^2=g^2_{h_{\rm SM}VV}\,,\qquad\quad
m_W^2 g_{\phi_i ZZ}=m_Z^2 g_{\phi_i WW}\,,
\end{equation}
where the sum is taken over all neutral CP-even scalars $\phi_i$ and
$h_{\rm SM}$ is the Higgs boson of the SM.
In this case, it follows that $g_{\phi_i VV}\leq g_{h_{\rm SM}VV}$ for all $i$.
Models that contain only scalar singlets and doublets satisfy these requirements
and hence respect the sum rule and the coupling relation above.
However, it is possible to violate $g_{\phi_i VV}\leq g_{h_{\rm SM}VV}$
and $m_W^2 g_{\phi_i ZZ}=m_Z^2 g_{\phi_i WW}$ if tree-level $ZW^\pm\phi^\mp$
and/or $\phi^{++}W^-W^-$ couplings are present~\cite{Gunion:1990kf,Hisano:2013sn, Kanemura:2013mc}. A more general sum rule is:
\begin{equation}
\sum_i g^2_{\phi_iVV}=g^2 m_W^2+\sum_k|g_{\phi^{++}_k W^- W^-}|^2\,.
\end{equation}
The Georgi-Machacek model provides an instructive example~\cite{Georgi:1985nv, Chanowitz:1985ug,Gunion:1989ci}.
This model
consists of a complex
Higgs doublet with $Y=1$, a complex Higgs triplet with $Y=2$ and a
real Higgs triplet with $Y=0$, with doublet vev $v_\Phi$ and triplet
vevs $v_\Delta=v_\xi$, such
that $v^2=v_\Phi^2+8v_\Delta^2$.
It is convenient to write~\cite{Gunion:1989ci}
$$
c_H\equiv\cos\theta_H=\frac{v_\Phi}{\sqrt{v_\Phi^2+8v_\Delta^2}}\,,
$$
and $s_H\equiv\sin\theta_H=(1-c_H^2)^{1/2}$. Then, the following couplings are noteworthy:
\begin{Eqnarray}
&&H_1^0 W^+ W^-:\quad g c_H m_W\,,\qquad\qquad\,\, \, H_1^{\prime\,0}W^+ W^-:\quad \sqrt{8/3}gm_W s_H\,,\nonumber\\
&&H_5^0 W^+ W^-:\quad \sqrt{1/3}gm_W s_H\,,\qquad H_5^{++}W^-W^-:\quad \sqrt{2}gm_W s_H\,.\nonumber
\end{Eqnarray}
$H_1^{\prime\,0}$ and $H_5^0$, $H_5^{++}$ have no coupling to fermions, whereas the $H_1^0$ coupling to fermions
is given by
$$
H_1^0 f\bar{f}:\quad \frac{gm_q}{2m_W c_H}\,.
$$
In general $H_1^0$ and $H_1^{\prime\,0}$ can mix.
In the absence of $H_1^0$--$H_1^{\prime\,0}$ mixing and $c_H=1$, we see that the couplings of $H_1^0$
match those of the SM. In contrast, in the case of $s_H=\sqrt{3/8}$, the
$H_1^{\prime\,0}$ coupling to $W^+ W^-$ matches that of the SM.
Nevertheless, this does not saturate the $\phi_iVV$ sum rule! Moreover, it
is possible that the $H_1^{\prime\,0}W^+ W^-$ coupling is
\textit{larger} than $gm_W$, without violating the $\phi_iVV$ sum rule.
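These statements are easy to verify from the couplings listed above (a numerical illustration, not part of the original text; the value of $s_H$ is arbitrary): the neutral CP-even couplings satisfy the generalized sum rule once the $H_5^{++}W^-W^-$ term is included, and for $s_H^2>3/8$ the $H_1^{\prime\,0}W^+W^-$ coupling exceeds $gm_W$.

```python
import math

# Couplings in units of g*mW, as listed above; sH = sin(theta_H), chosen freely
sH = 0.7
cH = math.sqrt(1.0 - sH**2)
g_H1, g_H1p = cH, math.sqrt(8.0 / 3.0) * sH
g_H50, g_H5pp = math.sqrt(1.0 / 3.0) * sH, math.sqrt(2.0) * sH

lhs = g_H1**2 + g_H1p**2 + g_H50**2      # sum over neutral CP-even scalars
rhs = 1.0 + g_H5pp**2                    # generalized sum rule with H5++ W- W-
print(lhs, rhs, g_H1p)                   # both sides equal 1 + 2 sH^2
```

With $s_H=0.7$ (so $s_H^2=0.49>3/8$) the $H_1^{\prime\,0}$ coupling indeed exceeds unity in units of $gm_W$, illustrating \eq{eq:hVVexotic}.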
Including $H_1^0$--$H_1^{\prime\,0}$ mixing allows for even more
baroque possibilities not possible in a multi-doublet extension of the
SM. Deviations above the $h_{\rm SM}VV$ coupling by as much as $10\%$
or more are possible.
Thus we have demonstrated the possibility that
\begin{eqnarray}
g_{\phi VV}^2 > g_{h_{\rm SM} VV}^2\,, \label{eq:hVVexotic}
\end{eqnarray}
in Higgs sectors with exotic (larger than doublet) Higgs representations.
In Table~\ref{tab:hVV_exotic}, the deviations in the Higgs boson couplings from the SM values are
listed for various extended Higgs sectors (further details are given in Ref.~\cite{Kanemura:2013mc}).
Moreover, except for the case of the 2HDM, the couplings of the SM-like Higgs boson to
the weak gauge bosons shown in Table~\ref{tab:hVV_exotic} can be greater than 1.
At the ILC the $hVV$ couplings can be measured at the percent level.
Therefore, even if extra Higgs bosons are not discovered directly,
a Higgs sector with exotic multiplets can be distinguished
via the precision measurement of the $hVV$ coupling.
\begin{table}[t]
\begin{center}
{\renewcommand\arraystretch{1.3}
\begin{tabular}{|c||c|c|c|c|}\hline
Model & $\tan\beta$ &$\tan\beta'$& $c_{hWW}$ & $c_{hZZ}$ \\\hline\hline
$\phi_1+\phi_2$ (2HDM) &$v_{\phi_2}/v_{\phi_1}$&$v_{\phi_2}/v_{\phi_1}$ &$\sin(\beta-\alpha)$ & $\sin(\beta-\alpha)$ \\\hline
$\phi+\Delta$ (cHTM) &$\sqrt{2}v_\Delta/v_\phi$&$2v_\Delta/v_\phi$& $\cos\beta \cos\alpha + \sqrt{2}\sin\beta\sin\alpha$ & $\cos\beta' \cos\alpha + 2\sin\beta'\sin\alpha$ \\\hline
$\phi+\xi$ (rHTM) &$2v_\xi/v_\phi$&-& $\cos\beta \cos\alpha + 2\sin\beta\sin\alpha$ & $\cos\alpha$ \\\hline
$\phi+\Delta+\xi$ (GM model) &$2\sqrt{2}v_\Delta/v_\phi$& $2\sqrt{2}v_\Delta/v_\phi$ &$\cos\beta \cos\alpha +\frac{2\sqrt{6}}{3}\sin\beta \sin\alpha$ &$\cos\beta \cos\alpha +\frac{2\sqrt{6}}{3}\sin\beta \sin\alpha$ \\\hline
$\phi+\varphi_7$ &$4v_{\varphi_7}/v_\phi$& $4v_{\varphi_7}/v_\phi$ &$\cos\beta \cos\alpha +4\sin\beta \sin\alpha$ &$\cos\beta \cos\alpha +4\sin\beta \sin\alpha$ \\\hline
\end{tabular}}
\caption{The deviations in the Higgs boson couplings from the SM values in various extended Higgs sectors.
$\phi$, $\Delta$, $\xi$ and $\varphi_7$ denote Higgs fields with ($T,Y$)=($1/2,1$), ($1,2$), ($1,0$) and ($3,4$), respectively.
In the second and third column, $v_X$ is the vev of the Higgs field $X$.
The mixing angle $\alpha$ is defined for each extended Higgs sector in Ref.~\cite{Kanemura:2013mc}.
}
\label{tab:hVV_exotic}
\end{center}
\end{table}
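To make concrete the possibility of $c_{hVV}>1$ shown in the table, a short numerical sketch evaluating the GM-model entry at illustrative (not fitted) mixing angles:

```python
import math

def c_hVV_GM(alpha, beta):
    """hVV coupling modifier in the GM model, as in the table row:
    c = cos(beta)cos(alpha) + (2*sqrt(6)/3) sin(beta)sin(alpha)."""
    return (math.cos(beta) * math.cos(alpha)
            + (2.0 * math.sqrt(6.0) / 3.0) * math.sin(beta) * math.sin(alpha))

# Illustrative mixing angles: a modest triplet vev and doublet-triplet mixing
print(c_hVV_GM(0.3, 0.4))  # ~1.07 > 1: impossible with doublets and singlets alone
```
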
\subsection{Adding Higgs singlets}
The introduction of an additional Higgs singlet field to the SM Higgs sector
does not affect $\rho=1$, and it does not generate any Higgs-mediated flavor-changing neutral current processes,
since the singlet does not couple to quarks, leptons or gauge bosons.
For example, such a singlet field has been introduced in new physics models with an
extra U(1) gauge symmetry, where the U(1) boson couples to B$-$L~\cite{Khalil:2006yi}.
A neutral singlet scalar field is also employed in the Next-to-Minimal
supersymmetric extension of the Standard Model (NMSSM) along with the
second doublet field required in SUSY~\cite{Ellwanger:2009dp}.
The existence of a singlet field $\phi_2=S$ ($T_2=0$, $Y_2=2$) only changes the Higgs boson couplings
via mixing of the singlet and doublet Higgs fields.
In the model with only one additional neutral singlet scalar field to the SM, $S$ and $\Phi$
can be parameterized as
\begin{eqnarray}
\Phi=\left(\begin{array}{c}
\omega^+ \\
v + (\phi + i \chi)/\sqrt{2}\\ \end{array}
\right) , \hspace{1cm} S=v_S + (\phi_S + i \chi_S)/\sqrt{2}
\end{eqnarray}
where $v=174$ GeV, and $v_S$ is the vev of $S$. The two CP-even mass eigenstates $h$ and $H$ are
defined by
\begin{eqnarray}
h = \phi \cos\theta - \phi_S \sin\theta, \hspace{1cm} H=\phi \sin\theta + \phi_S \cos\theta.
\end{eqnarray}
In models with an extra U(1) gauge boson, this boson absorbs the CP-odd component $\chi_S$ via the Higgs mechanism.
The difference from the SM is just one additional CP-even scalar boson $H$.
There is no physical charged scalar state in this model.
All of the SM fields obtain mass from the vev $v$, while the couplings of $h$ and $H$ are
obtained by the replacement $\phi_{\rm SM } \to h \cos\theta + H \sin\theta$ in the Standard Model Lagrangian.
In the decoupling region $\theta \sim 0$, $h$ is the SM-like Higgs boson with its couplings reduced from their SM values
by $\cos\theta \sim 1 -\theta^2/2$. On the other hand, when $\tan\theta \sim {\cal O} (1)$, both $h$ and $H$ behave
as SM-like Higgs bosons, sharing the SM couplings to gauge bosons and fermions. If $h$ and $H$ are almost degenerate
in mass, the two bosons might appear as a single SM Higgs boson in the LHC experiments.
At the ILC, by reconstructing the Higgs mass
in $e^+e^-\to Z +(h,H)$ from the invariant mass recoiling against the $Z$,
the two Higgs bosons could be better separated.
The ILC phenomenology of the Higgs sector in the minimal B-L model is surveyed in Ref.~\cite{Basso:2010si}.
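The percent-level ILC sensitivity to the $hVV$ coupling translates directly into a reach in the mixing angle $\theta$; a rough numerical sketch:

```python
import math

def hVV_modifier(theta):
    """Singlet-doublet mixing suppresses all h couplings by cos(theta)."""
    return math.cos(theta)

# A 1% deficit in the hVV coupling corresponds to cos(theta) = 0.99,
# i.e. the ILC probes mixing angles down to roughly
theta_reach = math.acos(0.99)
print(theta_reach)  # ~0.14 rad
```
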
\subsection{Adding Higgs triplets}
Triplet Higgs fields are introduced in several new physics models.
An example of these models is the Higgs sector with the SM Higgs doublet $\Phi$ with
an additional triplet $\Delta$ with $T_2=1$, $Y_2=2$. If $\Delta$ carries lepton number 2, it can
couple to leptons via
\begin{eqnarray}
{\cal L}_Y = h_{ij} \overline{L^{ic}} i\tau_2 \Delta L_L^j + {\rm h.c.} \label{triplet_Yukawa}
\end{eqnarray}
If $\Delta$ obtains a vev proportional to the term that explicitly violates
lepton number in the Higgs sector, then a neutrino mass matrix is generated,
\begin{eqnarray}
{\cal M}_{ij} = \sqrt{2} h_{ij} v_\Delta. \label{eq:neutrinomass}
\end{eqnarray}
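Equation~(\ref{eq:neutrinomass}) fixes the size of the triplet Yukawa couplings once $v_\Delta$ is chosen; an order-of-magnitude sketch (taking $m_\nu \approx 0.1$~eV as an illustrative value):

```python
import math

def triplet_yukawa(m_nu_eV, v_Delta_GeV):
    """h ~ m_nu / (sqrt(2) v_Delta), inverted from M_ij = sqrt(2) h_ij v_Delta."""
    m_nu_GeV = m_nu_eV * 1e-9
    return m_nu_GeV / (math.sqrt(2.0) * v_Delta_GeV)

# Smaller v_Delta forces larger Yukawa couplings h_ij; this is why the
# dilepton decay of H^{++} dominates at small triplet vev (see below).
for v in (1e-9, 1e-3, 1.0):  # v_Delta in GeV
    print(v, triplet_yukawa(0.1, v))
```
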
The Higgs fields $\Phi$ and $\Delta$ are expressed in terms of component fields as
\begin{eqnarray}
\Phi = \left( \begin{array}{c} \omega^+ \\ v_\Phi + (\phi + i \chi)/\sqrt{2} \\ \end{array}\right) ,
\hspace{1cm}
\Delta = \left(
\begin{array}{cc}
\Delta^+/\sqrt{2} & \Delta^{++} \\
v_\Delta+(\delta + i \eta)/\sqrt{2} & - \Delta^+/\sqrt{2} \\
\end{array}
\right),
\end{eqnarray}
where $v_\Phi$ and $v_\Delta$ are vevs of $\Phi$ and $\Delta$. The physical scalar states are two CP-even ($h$ and $H$),
a CP-odd ($A$), a singly charged pair ($H^\pm$) and a doubly charged pair ($H^{\pm\pm}$), which are
derived from the component fields by diagonalizing the squared-mass matrices with the mixing angles $\alpha$, $\beta_0$ and $\beta_\pm$ as
\begin{eqnarray}
&& h = \phi \cos\alpha + \delta \sin\alpha, \hspace{4mm}\qquad H = - \phi \sin\alpha + \delta \cos\alpha, \nonumber\\
&& A = -\chi \sin\beta_0 + \eta \cos\beta_0, \hspace{4mm}\, H^\pm= - \omega^\pm \sin\beta_\pm + \Delta^\pm \cos\beta_\pm,
\hspace{4mm}\, H^{++} = \Delta^{++}.
\end{eqnarray}
In light of the constraint from the rho parameter, $v_\Delta \ll v$ must be taken, and then the masses of the scalar bosons
are given by
\begin{eqnarray}
m_h^2 \simeq 2 \lambda_1 v^2, \hspace{1cm} m^2_{H^{++}} - m^2_{H^+} \simeq m^2_{H^+} - m_A^2, \hspace{1cm} m_H^2 \simeq m_A^2,
\label{trip}
\end{eqnarray}
with $\alpha \ll 1$, $\beta_0 \ll 1$, and $\beta_\pm \ll 1$,
where $\lambda_1$ represents the quartic coupling constant of the doublet field.
Therefore, $h$ behaves as the SM Higgs boson, and the other scalar states satisfy
the mass relations given in \eq{trip}.
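The relations in \eq{trip} imply equal spacing of the squared masses; a minimal numerical check (illustrative masses):

```python
import math

def m_Hp(m_Hpp, m_A):
    """Equal spacing in mass-squared: m_{H+}^2 = (m_{H++}^2 + m_A^2)/2,
    from m_{H++}^2 - m_{H+}^2 = m_{H+}^2 - m_A^2."""
    return math.sqrt(0.5 * (m_Hpp**2 + m_A**2))

# For example, m_{H++} = 320 GeV and m_A = 300 GeV give
print(m_Hp(320.0, 300.0))  # ~310.2 GeV
```
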
The most characteristic feature of this model is the existence of the doubly charged Higgs bosons $H^{\pm\pm}$.
Their discovery would be a direct probe of an exotic Higgs sector.
In general, doubly charged Higgs fields can arise from
the singlet with $Y=4$, the doublet with $Y=3$ and the triplet with $Y=2$.
In the model with an additional triplet field, the doubly charged Higgs bosons $H^{\pm\pm}$ can
decay into $\ell^\pm \ell^\pm$, $H^\pm W^\pm$ and $W^\pm W^\pm$
depending on the magnitude of $v_\Delta$~\cite{Perez:2008ha}.
In Fig.~\ref{FIG:BR1_HTM}, the branching ratios are shown as a function of
the vacuum expectation value of the triplet field, $v_\Delta$, for mass differences $\Delta m=m_{H^{++}}-m_{H^+}=0$,
$10$~GeV and $30$~GeV~\cite{Aoki:2011pz}.
\begin{figure}[t]
\centering
\includegraphics[width=48mm]{Chapter_Theory/figs/br_Hpp_dm0_300.pdf}
\includegraphics[width=48mm]{Chapter_Theory/figs/br_Hpp_dm10_320.pdf}
\includegraphics[width=48mm]{Chapter_Theory/figs/br_Hpp_dm30_360.pdf}
\caption{Decay branching ratio of $H^{++}$ as a function of $v_\Delta$.
In the left figure, $m_{H^{++}}$ is
fixed to be 300 GeV, and $\Delta m$ is taken to be zero.
In the middle figure, $m_{H^{++}}$ is
fixed to be 320 GeV, and $\Delta m$ is taken to be 10 GeV.
In the right figure, $m_{H^{++}}$ is
fixed to be 360 GeV, and $\Delta m$ is taken to be 30 GeV.}
\label{FIG:BR1_HTM}
\centering
\end{figure}
When $v_\Delta$ is smaller than about $10^{-3}$~GeV, the dilepton decay $H^{\pm\pm} \to \ell^\pm \ell^\pm$ is dominant. This signal
directly shows the existence of a doubly charged scalar boson with lepton number 2, which would be strong evidence for
neutrino mass generation via Eq.~(\ref{eq:neutrinomass}).
At the LHC, the current searches for $H^{\pm\pm}$ in this decay mode give the lower bound
$m_{H^{++}} > 400$~GeV~\cite{ATLAS:2012hi,Chatrchyan:2012ya}.
\begin{figure}[b!]
\begin{center}
\includegraphics[width=90mm]{Chapter_Theory/figs/bound7TeV_v2.pdf}
\caption{The signal cross section after the $M_{\ell\ell}$ cut as a function of $m_{H^{++}}$ at a collision energy of 7~TeV.
The light (dark) shaded band shows the 95\% CL (expected) upper limit on the cross section from the data in the
$\mu^+\mu^+$ channel with an integrated luminosity of 4.7~fb$^{-1}$ (20~fb$^{-1}$).
}
\label{FIG:diboson}
\end{center}
\end{figure}
In contrast, when $v_\Delta$ is sufficiently larger than $10^{-3}$~GeV, the diboson decay $H^{\pm\pm} \to W^\pm W^\pm$
becomes dominant. In this case, the signal can also be same-sign four leptons, but its rate is reduced by
the branching ratios of the leptonic $W$ decays.
The current lower bound from this final state is only $m_{H^{++}} > 60$~GeV at 95\% CL
from LHC data with 5~fb$^{-1}$~\cite{Kanemura:2013vxa}, as exhibited in Fig.~\ref{FIG:diboson}.
This bound is greatly relaxed as compared to the dilepton decay scenario.
By the extrapolation of the data to 20~fb$^{-1}$ with the same collision energy,
the lower limit is estimated to be 85 GeV.
If there is a mass difference between $H^{\pm\pm}$ and $H^\pm$, a parameter region appears where
$H^{\pm\pm}$ mainly decays into $H^\pm W^\pm$. For example, with a
mass difference $\Delta m = m_{H^{++}} - m_{H^+} \sim 10$~GeV, the decay into $H^+W^+$ dominates over a wide range of $v_\Delta$ around 1~GeV.
In this case, $H^{++}$ could be identified through its cascade decay. For a mass difference of the opposite sign,
$\Delta m \sim -10$~GeV with $v_\Delta \sim 1$~GeV, $H^{\pm\pm} \to W^\pm W^\pm$ is dominant.
There are wide regions of parameter space where the diboson decay is dominant. In this case, a relatively
light $H^{\pm\pm}$ with a mass of a few hundred GeV is expected to remain allowed even after
the LHC run at 14~TeV with an integrated luminosity of 3000~fb$^{-1}$.
The $H^{\pm\pm}$ can be pair produced at the ILC with $\sqrt{s} = 250$ GeV or $500$ GeV,
and a doubly-charged Higgs signal can be easily detected.
\subsection{The NMSSM Higgs sector}
The minimal Higgs sector required for an anomaly-free supersymmetric
extension of the Standard Model consists of two Higgs doublets as
described in Section~\ref{mssm}. However, the Higgs sector of the MSSM has
a number of troubling features. First, the Higgs-Higgsino Lagrangian
contains a dimensionful parameter $\mu$, which in principle can be
arbitrarily large. However, phenomenological considerations require
this parameter to be no larger than a typical supersymmetry-breaking
mass parameter, which should be of $\mathcal{O}(1~{\rm TeV})$ or less
in order to provide a natural explanation for the origin of the
scale of electroweak symmetry breaking (EWSB). Second, the coefficients of
the quartic terms of the MSSM Higgs potential are fixed by the
SU(2)$\times$U(1) gauge couplings $g$ and $g'$. This is the origin of
the tree-level bound $m_h\leq m_Z$, and implies that radiative
corrections due to loops of top-quarks must be large enough to explain
the observed mass of $m_h\simeq 126$~GeV. This in turn requires
rather large top squark masses and/or mixing, which pushes at least
one of the top squark masses to values above 1~TeV. Indeed, there is
already considerable tension in the MSSM between
achieving a large enough Higgs mass while maintaining a natural
explanation of the EWSB scale.
In the NMSSM, one adds a complex singlet scalar field $S$ to
the MSSM Higgs sector. A comprehensive review of the NMSSM
can be found in Refs.~\cite{Ellwanger:2009dp,Maniatis:2009re}.
The additional degrees of freedom of the NMSSM Higgs sector provide
an opportunity for ameliorating some of the troubling features of the
MSSM Higgs sector. First, in the NMSSM one can
set $\mu=0$ and generate an effective $\mu$ parameter dynamically that
is proportional to the vacuum expectation value of $S$. Thus, a
phenomenologically acceptable NMSSM Higgs sector exists that contains
no new fundamental dimensionful parameters. Second,
the NMSSM scalar potential contains a new quartic term proportional to
a dimensionless parameter $\lambda$ that is independent
of gauge couplings. Thus, the mass of the observed Higgs boson now
depends on an unknown coupling, and it is significantly easier to
achieve the observed mass of $m_h\simeq 126$~GeV without extremely
large top squark masses and/or mixing. As a result, the flexibility of the
additional degrees of freedom of the NMSSM can (somewhat) reduce the tension
with naturalness as compared with the MSSM.
In this section, we briefly review the structure of the Higgs sector
of the NMSSM. We first consider the general NMSSM (sometimes called
the GNMSSM~\cite{Ross:2011xv}), in which all possible terms of dimension-four or less are
allowed in the Higgs Lagrangian. The Higgs potential of the GNMSSM
is:
\begin{Eqnarray}
V&=&(m_d^2+|\mu+\lambda S|^2)H_d^{i*}H_d^i+(m_u^2+|\mu+\lambda
S|^2)H_u^{i*}H_u^i-b(\epsilon^{ij}H_d^i H_u^j+{\rm h.c.})
\nonumber \\
&&\qquad +m_s^2|S|^2+(\xi_s S+\tfrac{1}{2} b_s S^2+\tfrac{1}{3}\kappa A_\kappa S^3
-\lambda A_\lambda S\epsilon^{ij}H_d^i H_u^j+{\rm h.c.})\\
&&\qquad +|\xi+\mu_s S +\kappa S^2-\lambda\epsilon^{ij}H_d^i H_u^j|^2
+\tfrac{1}{8}
\left(g^2 + g^{\prime\,2}\right) \left[H_d^{i*}H_d^i-H_u^{j*}H_u^j\right]^2
+\tfrac{1}{2} g^2 |H_d^{i*}H_u^i|^2\,,\nonumber
\end{Eqnarray}
where $\mu$ and $\mu_s$ are the supersymmetric Higgsino mass
parameters,
$\lambda$ and $\kappa$ are dimensionless supersymmetric scalar self-interaction
parameters, $m_d^2$, $m_u^2$, $m_s^2$, $b$, $b_s$ and $\xi$ are
soft-supersymmetry-breaking squared-mass parameters, $A_\kappa$ and
$A_\lambda$ are soft-supersymmetry-breaking mass parameters, and
$\xi_s$ is a soft-supersymmetry-breaking cubed-mass parameter.
To eliminate the supersymmetry-preserving mass parameters, one can
impose a discrete $\mathbb{Z}_3$ symmetry, such that the scalar potential
is invariant under $\{H_d, H_u, S\}\to \omega\{H_d, H_u, S\}$ with
$\omega^3=1$ and $\omega\neq 1$. In this case we have
$\mu=\mu_s=b=b_s=\xi=\xi_s=0$, and the scalar potential is specified in
terms of two supersymmetric dimensionless parameters, $\lambda$ and
$\kappa$, and five dimensionful supersymmetry-breaking parameters,
$m_d^2$, $m_u^2$, $m_s^2$, $A_\lambda$ and $A_\kappa$. This model is
often referred to simply as the NMSSM (more accurately, it is called
the $\mathbb{Z}_3$-invariant NMSSM).
Unlike the MSSM, the tree-level (G)NMSSM allows for the possibility of
CP-violation in the Higgs sector. For simplicity, we shall assume in
the following
that the (G)NMSSM scalar potential is CP-conserving, and take all
scalar potential parameters and vacuum expectation values to be real,
\begin{equation}
\langle{H_d^0}\rangle=v_d\,,\qquad\quad
\langle{H_u^0}\rangle=v_u\,,\qquad\quad
\langle{S}\rangle=v_s\,.
\end{equation}
In this case, the scalar spectrum consists of three CP-even neutral
scalars, two CP-odd neutral scalars and a pair of charged Higgs
scalars $H^\pm$. It is again convenient to go to the Higgs basis in
which linear combinations of the two doublet fields, denoted by $H_1$ and $H_2$, are defined such
that $\langle{H_1^0}\rangle=v=174~{\rm GeV}$ and $\langle{H_2^0}\rangle=0$, where $v^2\equiv v_u^2+v_d^2$.
In this basis, the squared-mass matrix of the CP-even neutral Higgs bosons is given by
\begin{equation} \label{me}
\mathcal{M}_e^2 =
\begin{pmatrix} m^2_Z \cos^2 2\beta+\lambda^2 v^2 \sin^2 2\beta &
-(m^2_Z-\lambda^2 v^2)\sin 2\beta\cos 2\beta & M_{e13}^2
\\
-(m^2_Z-\lambda^2 v^2)\sin 2\beta\cos 2\beta&
m_{A}^2+ m^2_Z \sin^2 2\beta & M_{e23}^2\\
M_{e13}^2 & M_{e23}^2 & M_{e33}^2
\end{pmatrix}\,,
\end{equation}
where
\begin{equation}
m_A^2\equiv \frac{2\bigl[b+\lambda(\mu_s v_s+\xi)+\lambda v_s(A_\lambda+\kappa v_s)\bigr]}{\sin 2\beta}\,,
\end{equation}
and $M_{e13}^2$, $M_{e23}^2$ and $M_{e33}^2$ can be expressed in terms
of the scalar potential parameters (explicit expressions can be found
in Ref.~\cite{Ross:2011xv}). The parameter $m^2_A$ is no longer the
squared-mass of the CP-odd Higgs boson. Rather, it is a diagonal
element of the CP-odd Higgs squared-mass matrix,
\begin{equation}
\mathcal{M}_o^2=\begin{pmatrix} m_A^2 & M_{o12}^2 \\ M_{o12}^2 & M_{o22}^2
\end{pmatrix}\,,
\end{equation}
where $M_{o12}^2$ and $M_{o22}^2$ can be expressed in terms of the
scalar potential parameters (explicit expressions can be found in
Ref.~\cite{Ross:2011xv}).
The squared-masses of the neutral Higgs bosons are obtained by
computing the eigenvalues of $\mathcal{M}^2_e$ and $\mathcal{M}_o^2$.
Finally, the charged Higgs mass is given by
\begin{equation}
m^2_{H^\pm}=m_W^2+m_A^2-\lambda^2 v^2\,.
\end{equation}
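A quick numerical evaluation of this relation (illustrative parameter values, not a fit):

```python
import math

def m_charged(m_A, lam, v=174.0, m_W=80.4):
    """NMSSM charged Higgs mass: m_{H+}^2 = m_W^2 + m_A^2 - lam^2 v^2."""
    return math.sqrt(m_W**2 + m_A**2 - (lam * v)**2)

# The lam^2 v^2 term pulls m_{H+} below its MSSM value sqrt(m_W^2 + m_A^2):
print(m_charged(500.0, 0.7))  # ~491.6 GeV
print(m_charged(500.0, 0.0))  # MSSM limit, ~506.4 GeV
```
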
The phenomenology of the GNMSSM is richer than that of the MSSM due to
the additional Higgs states and the larger parameter space. The
simplest scenario is an MSSM-like scenario in which $M^2_{e33}$
and $M^2_{o22}$ are large
and/or $M_{e13}^2$, $M_{e23}^2$ and $M_{o12}^2$ are small.
In this case, the mixing
of the singlet and doublet Higgs components is suppressed and two of
the three CP-even Higgs bosons are governed by the $2\times 2$ block
obtained from the first two rows and columns of $\mathcal{M}^2_e$,
while the doublet-like CP-odd Higgs boson has mass $m_A$.
Nevertheless, the MSSM squared-mass relations are corrected by terms
of $\mathcal{O}(\lambda^2 v^2)$. Using \eq{me}, it follows that
the CP-even Higgs squared-mass
inequality given by \eq{eqmH1up0} is modified to~\cite{Ellwanger:2006rm},
\begin{equation}
m_h^2\leq m_Z^2\cos^2 2\beta+\lambda^2 v^2\sin^2 2\beta+
\frac{3g^2m_t^4}{8\pi^2m_W^2}
\left[\ln\left(\frac{M_{\rm S}^2}{m_t^2}\right)+
\frac{X_t^2}{M_{\rm S}^2}
\left(1-\frac{X_t^2}{12M_{\rm S}^2}\right)\right]\,,
\end{equation}
which includes the leading one-loop radiative correction. Whereas the bound is
saturated at large $\tan\beta$ in the MSSM (where $\lambda=0$), we see
that in the NMSSM it is possible to have a SM-like Higgs boson with a
mass of 126 GeV for relatively modest values of $\tan\beta$ if
$\lambda$ is sufficiently large. Indeed, in contrast to
the MSSM, one does not need as large a boost from the radiative corrections,
which means that lower top squark masses are allowed given the
observed Higgs mass (thereby lessening the tension with naturalness).
Typically, the region of interest corresponds to $\tan\beta\sim 2$ and
$\lambda\sim 0.7$. For values of $\lambda>0.7$, the running scalar
self-coupling $\lambda(Q)$ blows up at scales below
the Planck scale. Although such a scenario is not consistent
with perturbative unification, it does lead to some interesting model
building opportunities for a highly natural Higgs boson with a mass of 126 GeV
over a wide range of parameters~\cite{Hall:2011aa}.
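The size of the tree-level term can be made explicit; the following sketch evaluates only the first two terms of the bound (the one-loop radiative piece is omitted):

```python
import math

def mh_tree(tan_beta, lam, v=174.0, m_Z=91.19):
    """Tree-level NMSSM bound:
    m_h^2 <= m_Z^2 cos^2(2b) + lam^2 v^2 sin^2(2b)."""
    b = math.atan(tan_beta)
    c2b, s2b = math.cos(2.0 * b), math.sin(2.0 * b)
    return math.sqrt((m_Z * c2b)**2 + (lam * v * s2b)**2)

# At tan(beta)=2 and lam=0.7 the tree-level bound is already ~112 GeV,
# versus m_Z*|cos 2b| ~ 55 GeV in the MSSM limit lam=0, so only modest
# radiative corrections are needed to reach 126 GeV.
print(mh_tree(2.0, 0.7))
print(mh_tree(2.0, 0.0))
```
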
The scalar states that are dominantly singlet can only couple to gauge
bosons and fermions through the small doublet admixture in their wave
functions. Thus, these states are very difficult to produce and
observe at the ILC. One possible exception to this statement occurs
in the limit where the mixing terms $M_{e13}^2$, $M_{e23}^2$ and
$M_{o12}^2$ are small, and the diagonal elements of the CP-even and/or
CP-odd scalar squared-mass matrices are also small. In this case, the
lightest scalar particles of the Higgs spectrum can be dominantly
singlet. This leaves open the possibility that the decay channels
$h\to h_1 h_1$ and/or $h\to a_1 a_1$ are allowed, where $h$ is
identified as the observed SM-like Higgs boson and $h_1$ and $a_1$ are
the light dominantly singlet CP-even and CP-odd scalar states.
These light states would then decay dominantly
via their small scalar doublet admixtures into a pair of the heaviest
fermions that are kinematically allowed.
There are experimental limits on this scenario from
searches at LEP, the Tevatron and the LHC, but allowed regions of the (G)NMSSM
parameter space with light singlet-like scalars persist.
Finally, it is possible that the mixing between singlet and doublet
components is not particularly small. In this case, one can still
find parameter regimes (e.g. large $m_A$)
in which the lightest CP-even state is dominantly doublet and
SM-like (to be identified with the observed Higgs boson), while the
heavier states are admixtures of doublet and singlet states.
In this scenario, all Higgs states are in play and can be studied at
the ILC if their masses are less than half the center-of-mass energy.
\section{Model-independent treatments of Higgs properties}
In the quest for identifying the underlying physics of electroweak
symmetry breaking it will be crucial to study the properties of the
observed signal at 126~GeV with high precision, taking into account also
the limits from Higgs searches in other regions of the parameter space.
Besides the interpretation of the experimental results in specific
models, it is also useful to express the properties of the Higgs sector
in terms of less model-dependent parameterizations. For the observed
signal at 126~GeV this refers in particular to its mass, spin, CP
properties and couplings.
While the mass can be determined in an essentially model-independent
way,
and the spin quantum number for a single resonance
can be obtained from confronting distinct
hypotheses for different spin states with the data (see below), the
determination of CP properties and couplings is more involved. An
observed state can in general consist of any admixture of CP-even and
CP-odd components. Testing the distinct hypotheses of a pure CP-even and
a pure CP-odd state can therefore only be a first step in determining
the CP properties of a new particle. While it is an obvious goal to
extract information on the couplings of the discovered state to other
particles, some care is necessary regarding the actual definition of
those couplings. It would be tempting to treat the
couplings in a certain model, for instance the SM, as independent
free parameters and perform a fit to the experimental data. This is not
possible, however, once (electroweak) higher-order corrections are
taken into account, since model-specific relations between the couplings
and the other parameters of the theory are required to ensure properties
like UV-finiteness and gauge cancellations.
Moreover, modifying a certain
coupling as compared to its form in the SM will in general change both
the overall coupling strength and the tensor structure of the coupling.
The latter implies a modification of
the CP properties of the considered state. As a consequence, in
general the determination of couplings cannot be treated
separately from the determination of the CP properties.
Accordingly, in order to analyze the coupling properties of the
discovered state a well-defined framework is required where the
state-of-the art predictions within the SM
(or any other reference model), including all relevant higher-order
corrections, are supplemented by a parameterization of
possible deviations of the couplings from their reference values
(including also possible changes of the tensor structure). If one assumes that
effects of new physics occur only at rather high scales, such that the
contributions of heavy degrees of freedom
can be systematically integrated out, such
a framework can be formulated with the help of an effective Lagrangian.
\subsection{Effective Lagrangian treatments}
\label{sec:th_efflag}
Assuming that effects of new light particles in loops are absent,
physics beyond the Standard Model (BSM)
can be described via an effective Lagrangian in terms of the SM
fields. This approach has been pioneered in Ref.~\cite{Buchmuller:1985jz},
where a list of operators of dimensions 5 and 6 in the linear
parameterization of the Higgs sector with a Higgs doublet has been
provided. Those higher-dimensional operators arise from integrating out
the contributions of heavy degrees of freedom. Restricting to operators
of up to dimension 6 that are relevant for Higgs physics, such an
effective Lagrangian has the general form
\begin{equation}
{\cal L}_{\mathrm{eff}} = {\cal L}^{(4)}_{\mathrm{SM}}
+ \frac{1}{\Lambda^2}\sum_k \alpha_k {\cal O}_k,
\end{equation}
where ${\cal L}^{(4)}_{\mathrm{SM}}$ is the SM Lagrangian,
${\cal O}_k \equiv {\cal O}^{d=6}_k$
denotes dimension-6 operators, $\alpha_k$ the corresponding
Wilson coefficients, and $\Lambda$ is the scale of new physics.
Taking into account all dimension-6 operators that are in accordance
with gauge invariance leads to a rather large number of operators.
A minimal complete basis can be constructed using the equations of
motion to eliminate redundant operators~\cite{Grzadkowski:2010es}.
Proposals that are suitable for the analysis of the upcoming data at the
LHC are currently under discussion (see, e.g., Ref.~\cite{Heinemeyer:2013tqa} and
references therein). For the analysis of the LHC results up to 2012 an
``interim framework'' has been adopted that is based on a simplified
approach using ``leading order inspired'' scale factors
$\kappa_i$~\cite{LHCHiggsCrossSectionWorkingGroup:2012nn,Heinemeyer:2013tqa}.
In particular, in order to make use of reinterpretations of searches
that have been performed within the context of the SM, in this approach
only overall changes in the coupling strengths are
considered, while effects that change kinematic distributions are not
taken into account.
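The generic size of the coupling deviations induced by the dimension-6 operators is set by $v^2/\Lambda^2$; a back-of-envelope sketch (Wilson coefficients $\alpha_k$ of order one assumed):

```python
def deviation(Lambda_TeV, v=0.174):
    """Fractional coupling shift ~ alpha_k * v^2 / Lambda^2, with alpha_k ~ 1.
    v = 174 GeV expressed in TeV."""
    return (v / Lambda_TeV)**2

# Lambda = 1 TeV gives ~3% shifts; percent-level coupling measurements
# therefore probe new-physics scales of a few TeV.
for L in (1.0, 3.0, 10.0):
    print(L, deviation(L))
```
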
\subsection{Simplified approach for the analysis of Higgs couplings}
\label{sec:th_kappas}
The searches for a SM-like Higgs carried out at the LHC so far have
mainly focused on the production processes gluon fusion, $gg \to H$,
weak-boson fusion, $qq^\prime \to qq^\prime H$, associated production
with $W$ or $Z$, $q \bar q \to WH / ZH$, and associated production with
a top-quark pair, $q \bar q / gg \to t \bar t H$. The searches were based on the decay channels $\gamma\gamma$, $ZZ^{(*)}$, $WW^{(*)}$,
$b \bar b$ and $\tau^+\tau^-$. The couplings involved in those processes
have been analyzed in an ``interim framework'' under the following simplifying
assumptions~\cite{LHCHiggsCrossSectionWorkingGroup:2012nn,Heinemeyer:2013tqa}:
\begin{itemize}
\item
The observed signal is assumed to correspond to a single narrow
resonance. The case of several, possibly overlapping, resonances
is not considered.
\item
The zero-width approximation is assumed for this state. This implies
that all channels can be decomposed into a production cross section
times a decay branching ratio.
\item
Only modifications of coupling strengths, i.e.\ of absolute values of
couplings, are considered. No modifications of the tensor structure as
compared to the SM case are taken into account. This means in
particular that the observed state is assumed to be a CP-even scalar.
\end{itemize}
In order to parameterize possible deviations from the SM predictions in
this framework, scale factors $\kappa_i$ are introduced.
The scale factors $\kappa_i$ are defined in such a way that the cross
sections $\sigma_{ii}$ or the partial decay widths $\Gamma_{ii}$
associated with the SM particle $i$ scale
with the factor $\kappa_i^2$ when compared to the corresponding SM
prediction. For a process $ii \to H \to jj$ the application of the scale
factors results in the term $\kappa_i^2\kappa_j^2/\kappa_H^2$ relative
to the SM prediction, where $\kappa_H^2$ denotes the scale factor for
the total width of the observed signal.
By construction,
the SM predictions according to the current
state-of-the-art, i.e.\ including the available higher-order
corrections, are recovered if all $\kappa_i = 1$.
Since higher-order corrections in general do
not factorize with respect to the rescalings, the theoretical accuracy
degrades for $\kappa_i \neq 1$. This is a drawback of this simplified
framework in comparison to the effective Lagrangian approach discussed
above, where possible deviations from the SM predictions are
parameterized in a more systematic way.
For loop-induced processes such as $gg \to H$ and $H \to \gamma\gamma$
the scale factors $\kappa_g$ and $\kappa_\gamma$ are in general treated
as free parameters. If on the other hand
one assumes that there are no contributions of
BSM particles to the loops, those scale factors can be related to the
scale factors of the corresponding SM particles in the loop, e.g.\
$\kappa_\gamma =
\kappa_\gamma(\kappa_b, \kappa_t, \kappa_\tau, \kappa_W, m_H)$
in this approximation.
The total width $\Gamma_H$ is the sum of all Higgs partial widths. The
corresponding scale factor $\kappa_H^2$, i.e.\ $\Gamma_H = \kappa_H^2
\Gamma_H^\mathrm{SM}$, in general needs to be treated as
a free parameter. Under the assumption that no
additional BSM Higgs decay modes
(into either invisible or undetectable final states)
contribute to the total width and making additional approximations for
the scale factors of the currently undetectable decay modes into SM
particles, e.g.\ $\kappa_c = \kappa_t$, $\kappa_s = \kappa_b$ etc.,
the scale factor $\kappa_H^2$ can be related to the scale factors of the
partial widths of the different decay modes in the SM,
\begin{equation}
\kappa_H^2 = \kappa_H^2(\kappa_j, m_H) ,
\label{eq:kappaH}
\end{equation}
where $j = W, Z, b, \tau, \ldots$.
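Eq.~(\ref{eq:kappaH}) can be made concrete with approximate SM branching ratios at $m_H \approx 126$~GeV (the numerical values below are indicative only, not the state-of-the-art predictions):

```python
# Approximate SM Higgs branching ratios near m_H ~ 126 GeV (indicative values)
BR = {"b": 0.56, "W": 0.23, "g": 0.085, "tau": 0.062,
      "c": 0.028, "Z": 0.029, "gamma": 0.0023}

def kappa_H_sq(kappas):
    """kappa_H^2 = sum_j BR_j^SM kappa_j^2, assuming no BSM decay modes;
    unspecified kappas default to their SM value 1."""
    return sum(BR[j] * kappas.get(j, 1.0)**2 for j in BR)

# Example: a 10% enhanced b coupling inflates the total width and hence
# suppresses the rates kappa_i^2 kappa_j^2 / kappa_H^2 in all other channels.
kH2 = kappa_H_sq({"b": 1.1})
print(kH2, 1.0 / kH2)
```
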
Within this interim framework, several benchmark parameterizations have
been considered. Since the available data do not permit a measurement of
the total width $\Gamma_H$, in general it is not possible to directly
determine scale factors $\kappa_i$, but one is limited to determining
ratios of scale factors of the form $\kappa_i\kappa_j/\kappa_H$.
If one assumes that no BSM Higgs decay modes contribute to the total width and
using the approximations mentioned above for the currently undetectable
SM modes, one can relate $\kappa_H$ to the other scale factors as given
in eq.~(\ref{eq:kappaH}), which makes an absolute determination of the
$\kappa_i$ possible (a milder assumption that also allows one to constrain
the total width is $\kappa_W \leq 1$ and $\kappa_Z \leq 1$~\cite{Duhrssen:2004cv,Heinemeyer:2013tqa}).
For the experimental analyses up to now, benchmark
parameterizations with two free parameters have often been used. In
particular, a parameterization in terms of a common scale factor for the
couplings to fermions, $\kappa_F$, and a common scale factor for the
couplings to $W$ and $Z$, $\kappa_V$, has been considered, where
$\kappa_F = \kappa_t = \kappa_b = \kappa_\tau$ and
$\kappa_V = \kappa_W = \kappa_Z$. Besides assuming that all couplings to
fermions and the couplings to $W$ and $Z$ can be scaled with a universal
factor, in this case it is furthermore assumed that contributions of BSM
particles to the loop-induced processes are absent and that the
contributions to the total width can be approximated according to
eq.~(\ref{eq:kappaH}). The most general parameterization that has been
investigated up to now involves the parameters
$\kappa_V, \kappa_t, \kappa_b, \kappa_\tau, \kappa_g, \kappa_\gamma$~\cite{Chatrchyan:2013lba}.
\section{Alternative approaches to electroweak symmetry breaking
dynamics}\label{alternate}
In the Standard Model, electroweak symmetry is broken by perturbative
scalar dynamics. The scalar potential exhibits a minimum at a
non-zero value for the neutral component of a hypercharge one,
complex doublet of scalar fields. The scalar fields are elementary
(not composite) degrees of freedom, at least at energy scales of order
1 TeV and below. In all extensions of the Higgs sector discussed
previously in this document, the elementarity of the scalar fields is maintained and the
weakly-coupled nature of the scalar dynamics is preserved.
The Standard Model cannot be a fundamental theory of elementary
particle interactions to arbitrarily high energy scales. At Planck
scale energies ($M_{\rm PL}\simeq 10^{19}~{\rm GeV}$), gravitational
phenomena at the microscopic scale can no longer be neglected.
Indeed, other new scales of fundamental physics may exist between the
scale of electroweak symmetry breaking (EWSB) of order $v=174$~GeV and the
Planck scale, e.g. the grand unification scale, the seesaw scale (that
governs right-handed neutrinos and is responsible for generating mass
for the light neutrinos) and the mass scale associated with dark matter.
In the Standard Model, the scale of EWSB is not protected by any known
symmetry. Thus, it is deeply puzzling how the EWSB scale can be
stable with respect to the Planck scale (and other high energy scales
if they exist). An equivalent statement is that there is no mechanism
in the Standard Model that can keep the mass of an elementary scalar
field much lighter than the highest fundamental mass scale of the
theory, $\Lambda$. That is, the natural value for the squared-mass of
the scalar is~\cite{Weisskopf:1939zz}
\begin{equation}
m_h^2\sim \frac{g^2}{16\pi^2}\Lambda^2\,,
\end{equation}
where $g$ is the coupling of the scalar to other sectors of the theory.
That is, the scale of EWSB and the attendant Higgs mass is
extremely \textit{unnatural} if $\Lambda\gg \mathcal{O}(1~{\rm TeV})$.
Only if $\Lambda\sim 1$~TeV do we have a chance of providing a natural
mechanism for the EWSB dynamics~\cite{Susskind:1982mw}.
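To illustrate the size of this hierarchy, the estimate above can be evaluated numerically. The following sketch (the helper name is ours, and the electroweak-sized coupling value $g\simeq 0.65$ is an illustrative assumption) computes the natural scalar mass for a given cutoff:

```python
import math

def natural_higgs_mass(g, cutoff_gev):
    """Naive one-loop estimate m_h ~ sqrt(g^2 / (16 pi^2)) * Lambda, in GeV."""
    return math.sqrt(g**2 / (16.0 * math.pi**2)) * cutoff_gev

# For an electroweak-sized coupling and a Planck-scale cutoff, the "natural"
# mass lands roughly 15 orders of magnitude above 126 GeV:
m_natural = natural_higgs_mass(0.65, 1.0e19)
```

Note that a strongly coupled sector with $g\sim 4\pi$ instead gives $m_h\sim\Lambda$, which is another way of seeing why $\Lambda\sim 1$~TeV is needed for a natural EWSB scale.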
The quest for a natural theory of EWSB is one of the motivations for
TeV-scale supersymmetry~\cite{Maiani:1979cx,Witten:1981nf,Dimopoulos:1981zb,Sakai:1981gr,Kaul:1981wp,Kaul:1981hi}.
In this framework, elementary scalars are
related by supersymmetry to elementary fermionic superpartners. The
fermion masses can be naturally small due to weakly broken chiral
symmetries, which in turn protects the masses of the scalar partners.
In theories of TeV-scale supersymmetry, we identify $\Lambda$ with the
scale of supersymmetry breaking. Ideally, this scale should be no
larger than of $\mathcal{O}(1~{\rm TeV})$.
The fact that supersymmetry has not yet been discovered at the LHC
provides some tension for natural EWSB in the supersymmetric
framework. A pedagogical review of alternative EWSB scenarios can be found in
Ref.~\cite{Bhattacharyya:2009gw}.
In this section, we shall briefly consider non-supersymmetric
approaches that could provide a natural explanation for EWSB dynamics.
One of the first leading contenders for a natural theory of EWSB dynamics
was technicolor. (Reviews and references can be found in
Refs.~\cite{Farhi:1980xs,Kaul:1981uk,Hill:2002ap}.)
In this approach, EWSB was generated by the
condensation of bilinears of new fermion fields. No elementary scalar
fields were needed, and the naturalness problem associated with them
was avoided. Unfortunately, this approach was ultimately
unsuccessful. Apart from the fact that it was very difficult to
generate a realistic fermion mass spectrum (which required additional
new dynamics beyond the introduction of technicolor and the associated
techniquarks), the constraints of the precision electroweak observables
were extremely difficult to accommodate. The discovery of a Higgs boson
with Standard Model-like properties may have provided the final nail in the
coffin (although not all technicolor proponents have conceded~\cite{Eichten:2012qb,Foadi:2012bb}).
Any theory of EWSB dynamics must explain the presence of a
weakly-coupled SM-like Higgs boson whose mass is considerably smaller
than $\mathcal{O}(1~{\rm TeV})$.
\subsection{The Higgs boson as a pseudo-Goldstone boson}
Apart from supersymmetry, there is a known mechanism that can yield
naturally light elementary scalars. When a continuous global symmetry
is broken, one of the consequences is an exactly massless Goldstone
boson. If this global symmetry is now explicitly broken, the would-be
Goldstone boson acquires a mass proportional to the strength of the
symmetry breaking. This is a mechanism for producing naturally
light scalars---pseudo-Goldstone bosons whose masses are generated by
small symmetry-breaking effects~\cite{Georgi:1975tz}.
Thus, perhaps the Higgs boson is in fact a pseudo-Goldstone boson (PGB)
generated by strong dynamics associated with scale of new
physics $\Lambda$~\cite{Georgi:1984af,Dugan:1984hq}.
Even though the tree-level mass of the
PGB can be significantly smaller than $\Lambda$, one-loop corrections
to the PGB squared-mass due to the new physics at the scale $\Lambda$
will still be quadratically sensitive to $\Lambda$. This would imply
that $\Lambda\sim\mathcal{O}(1~{\rm TeV})$, which is in conflict with
precision electroweak observables that do not show any sign of
strongly-coupled new physics effects at a mass scale of order 1 TeV.
By a clever construction, one can overcome this last objection by
arranging to have the quadratic sensitivity at one loop ameliorated by
a cancellation of contributions to the one-loop radiative
corrections. The quadratic sensitivity will persist at two-loops, but
the presence of the extra loop factor would imply that
$\Lambda\sim\mathcal{O}(10~{\rm TeV})$, which is no longer in conflict with
precision electroweak observables. This is precisely the
mechanism employed by the little Higgs
Models~\cite{ArkaniHamed:2002pa}. In this framework, the
Higgs boson is a PGB associated with the explicit breaking of some
global symmetry. But in this case, the global symmetry becomes exact
when two different interactions separately vanish (this phenomenon is
known as collective symmetry breaking). That is, the
lightness of the Higgs boson mass is doubly protected. Indeed, at one
loop the quadratic sensitivity of the Higgs squared-mass to $\Lambda$
vanishes due to the cancellation between Standard Model particles and
partner particles of the same spin (in contrast to supersymmetry where
the cancellation is due to partners that differ by half a unit of
spin). For example, the top quark must be accompanied by new
fermionic top partners whose masses should be of order 1 TeV.
Likewise, such models typically include additional gauge bosons, which
are partners of the $W^\pm$ and $Z$.
Numerous realizations of little Higgs models can be found in the
literature~\cite{Schmaltz:2005ky,Chen:2006dy,Perelstein:2005ka}.
The challenge of precision electroweak observables
is still present but can be overcome by introducing a discrete
$T$-parity~\cite{Cheng:2003ju,Cheng:2004yc}
(whose consequences are similar to that of $R$-parity in supersymmetry
models). The presence of new physics (such as top partners and new
gauge bosons) can modify the properties of the 126 GeV Higgs boson and
provide additional constraints on model building.
An alternative approach for constructing Higgs bosons as PGBs arises
in composite models of Higgs bosons~\cite{Georgi:1984af,Dugan:1984hq}.
A pedagogical review of the recent progress in
developing realistic models of this type can be found in Ref.~\cite{Contino:2010rs}.
Such models often arise as low-energy effective theories of models constructed
in higher dimensions of spacetime, where the Higgs boson degree of freedom is
identified as the fifth component of a five-dimensional gauge field.
(A recent review of this mechanism, called
gauge-Higgs unification, can be found in Ref.~\cite{Serone:2009kf}.)
In this approach, $f\sim\mathcal{O}(1~{\rm TeV})$ characterizes
the scale of new strong interactions that produce the PGB when some
larger global symmetry (in which the Standard Model gauge group is
embedded) is broken. The effective cutoff of the theory is
$\Lambda\sim 4\pi f\sim 10~{\rm TeV}$. The natural value for the
Higgs mass is a few hundred GeV, so some amount of tuning is
required to identify the Higgs boson of these models with the 126 GeV
Higgs boson. Typically, deviations from SM Higgs properties are
expected at the 10\% level or higher due to the composite nature of
the Higgs state. Moreover, such approaches typically predict the
existence of other composite resonant states with masses below 1 TeV,
which can be searched for at the LHC~\cite{Pomarol:2012qf}.
This class of models is best studied using the
effective Lagrangian analysis of Section~\ref{sec:th_efflag}.
\subsection{The Higgs boson as a dilaton}
A massless scalar can also arise due to the spontaneous breaking of
conformal symmetry. In this case, the corresponding massless
Goldstone boson is called a dilaton. If there is a small explicit
breaking of conformal symmetry, the corresponding dilaton will be
light. So perhaps the Higgs boson can be identified as the dilaton of
a broken conformal symmetry. Indeed, if one sets the Higgs potential
of the Standard Model to zero, then the Standard Model is classically
scale invariant. The Higgs field can take on any value (without
costing energy at the classical level). If the Higgs field assumes a
nonzero value, then both the electroweak symmetry and the conformal
invariance are broken. In this case, the Higgs boson is identified as
the dilaton of spontaneously broken conformal symmetry. (In practice,
the resulting Higgs boson is not massless due to quantum effects that
break the conformal symmetry due to the conformal anomaly.)
Models of this type have been recently reviewed in Ref.~\cite{Terning:2013}.
It is not clear whether a theoretically consistent model of this type
exists. One would have to demonstrate that given a model with
spontaneously broken conformal symmetry with a flat direction
for the dilaton, a suitable weak explicit breaking of that symmetry
can be introduced that picks out a unique vacuum and generates a small
mass for the dilaton. Identifying the breaking scale $f$ with the
scale of EWSB $v$, it then follows that the leading couplings of the
dilaton to SM fermions are $m_f/v$ and to SM bosons are $m_b^2/v$, which
match precisely with the couplings of the SM Higgs boson. However,
there could be corrections to the one-loop couplings of the dilaton to
gluons and photons that depend in a model-dependent way on the
details of the conformal sector, which would yield deviations from
the expected one-loop couplings of the SM Higgs boson~\cite{Bellazzini:2012vz}.
The most significant difference between a SM Higgs boson and a dilaton
Higgs boson would be found in the triple and quartic Higgs
self-couplings. All these deviations can be parameterized via
the effective Lagrangian treatment of Section~\ref{sec:th_efflag}.
In models where $f\neq v$, the dilaton is a scalar distinct
from the Higgs boson. This would yield a phenomenology quite
distinctive from that of a typical extended Higgs sector~\cite{Chacko:2012vm}. However,
such an approach would not provide any fundamental understanding of
how the SM Higgs boson mass remains significantly smaller than the higher
energy scales that define this theory.
\section{Probing the properties of the signal at 126 GeV}
After the spectacular discovery of a signal at 126~GeV in the
Higgs searches at the LHC~\cite{Aad:2012tfa,Chatrchyan:2012ufa}, it is critically important to
determine the properties of the new state as comprehensively and as
accurately as possible. This information will provide crucial input for
identifying the nature of the
electroweak symmetry-breaking mechanism.
\subsection{Present status and prospects for the upcoming LHC runs}
We briefly discuss here the present status and the prospects for the
upcoming LHC runs. We keep this discussion at a rather qualitative level.
For a more quantitative treatment we refer to the latest results from
ATLAS and CMS (see in particular
\cite{Aad:2013xqa,Aad:2013wqa,Chatrchyan:2012jja,Chatrchyan:2013lba}
and references therein) and the
future projections that have been made under various assumptions.
The determination of the mass of the new particle is already at the
level of a precision measurement with the 2012 data, driven by the
$\gamma\gamma$ and $ZZ^* \to 4 \ell$ channels. The accuracy will further
improve with increasing statistics, requiring however a careful
treatment of systematic effects.
Concerning the spin of the discovered particle, the
observation in the $\gamma\gamma$ channel rules out the possibility of
a $J=1$ state as a consequence of the Landau--Yang theorem~\cite{Landau:1948kw,Yang:1950rg}. It should be
mentioned that there are two caveats to this argument. First,
the Landau--Yang theorem strictly applies to an
on shell resonance, so that the $J=1$ hypothesis can be excluded
only by making an additional small-width assumption~\cite{Ralston:2012ye}.
Second, the decay product could in principle
consist of two pairs of boosted
photons each misinterpreted as a single photon.
Nevertheless, assuming that the discovered state corresponds to a single
resonance rather than to several overlapping resonances corresponding to
different spin states, the spin of the discovered particle
can be determined by discriminating between
the distinct hypotheses for spin 0, (1), 2 states. Some care is
necessary in modeling possible incarnations of the spin~2 hypothesis in
order to make sure that the discriminating power of the analysis
actually refers to the spin properties rather than to some unphysical
behavior of the spin~2 implementation. The experimental results obtained
up to now are well
compatible with the spin~0 hypothesis~\cite{Chatrchyan:2012jja,Aad:2013xqa}, and there is growing evidence
against the alternative hypotheses.
The determination of the CP properties of the observed state is a much
more difficult task, since the observed state could in principle consist
of any admixture of CP-even and CP-odd components.
The analyses so far are mainly based on observables involving the coupling
of the new state to two gauge bosons, $HVV$, where $V = W, Z$,
in particular $H \to ZZ^* \to 4 \ell$.
The angular and kinematic distributions in these processes will only
provide sensitivity for a discrimination between CP-even and CP-odd
properties if a possible CP-odd component $A$ of the new state couples
with sufficient strength to $WW$ and $ZZ$. However,
in many models of physics beyond the
SM there is no lowest-order coupling between a pseudoscalar $A$ and a pair
of gauge bosons, so that the $AVV$ coupling is
strongly suppressed compared to the coupling of the CP-even component.
In this case, the angular and kinematic distributions will show
essentially no deviations from the expectations of a pure CP-even
state, even if
the state had a sizable CP-odd component. The difference between a pure
CP-even state and a state that is a
mixture of CP-even and CP-odd components would rather manifest itself
as a reduction of the total rate. However, such a reduction in rate could
be caused by other effects (and there could even be a
compensation with other contributions leading to an enhancement of the
rate). The couplings of the Higgs boson to
fermions offer a more democratic test of its CP nature,
since in this case the CP-even and odd components can have the same
magnitude.
Using the results of the $H \to ZZ^* \to 4 \ell$ channel to discriminate
between the distinct hypotheses of a pure CP-even and
a pure CP-odd state has led to a growing evidence against the pure
CP-odd hypothesis. Furthermore, first results of more general analyses
that take into account the possibility of a CP-admixture have been
obtained. As explained above, in the channels involving the $HVV$
coupling the effects of even a large CP-admixture can be heavily
suppressed as a consequence of a small coupling of the CP-odd component
to gauge bosons.
Concerning the determination of the couplings and the total width of the
observed particle, a modification of a
coupling will give rise to a change in the tensor structure and thus in
the CP properties. The determination of coupling and CP properties are
therefore closely related. For the analysis of the data taken
at the LHC so far, an ``interim framework'' (described in Section~\ref{sec:th_kappas}) has been
introduced for determining coupling properties. In this framework, it is assumed that
only the overall coupling strength gets modified while the tensor
structure of the different couplings is the same as in the SM.
In this way, results for scale factors $\kappa_i$ (or, with fewer
assumptions, ratios of scale factors) have been obtained
under certain assumptions, as discussed in Section~\ref{sec:th_kappas}. At
the present level of accuracy, these analyses do not show a significant
deviation from the SM predictions (the SM case corresponds to $\kappa_i = 1$
for all scale factors). Projections for future accuracies of the scale
factors $\kappa_i$ have also been discussed. The reported projections~\cite{CMSprojections}
should be interpreted with some care given the fact that one of the
goals of the analyses during the next run of the LHC will be to go
beyond the ``interim framework'' used for the definition of the
$\kappa_i$ in order to obtain more general results with less theoretical
assumptions (see Section~\ref{sec:th_efflag}).
The self-coupling $HHH$ will be very difficult to access at the LHC, even
with the integrated luminosities obtainable at the high-luminosity upgraded LHC. The
prospects are even
worse for the quartic self-coupling $HHHH$.
The total decay width for a light Higgs boson with a mass in the
observed range is not expected to be directly observable at the LHC. The
predicted total width of the Standard Model Higgs boson is about 4~MeV, which is
several orders of magnitude smaller than the LHC experimental mass
resolution. Furthermore, as all LHC channels rely on the identification
of Higgs decay products, the total Higgs width cannot be measured in
those analyses without additional assumptions. More sensitive
constraints on the total width than the ones limited by the experimental
mass resolution can be expected from the analysis of interference
effects between signal and background~\cite{Dixon:2003yb,Martin:2012xc,Martin:2013ula,Dixon:2013haa,Caola:2013yja}.
The limited access to the total width at the LHC implies that
without further assumptions only ratios of couplings can be determined
rather than the absolute values of the couplings.
In summary, while the experimental information obtained so far
about the signal
at 126~GeV is compatible with the expectations for the
Higgs boson of the SM, a large variety of other interpretations of
the discovered particle is also possible, corresponding to very
different underlying physics.
Some scenarios of this kind have been discussed in
the previous sections. These include
the possibility that the observed
state is composite or that it is an admixture or shares properties with
other scalar states of new physics.
Extended Higgs sectors are among the simplest alternatives to the SM Higgs boson.
In this context the signal at 126~GeV can be
interpreted as the lightest state of an extended Higgs sector, but
interpretations involving at least one lighter Higgs state below
126~GeV, having significantly suppressed couplings to gauge bosons as
compared to the SM case, are also possible. The sensitivity for
discriminating among the different possible interpretations correlates
with the achievable precision in confronting the experimental results
with the theory predictions.
\subsection{Experimental precision required to discriminate between
different possible interpretations}
If the observed signal at about 126~GeV is interpreted as the lightest
state of an extended Higgs sector, this interpretation
typically refers to
the decoupling region of the respective model, where the lightest state
has SM-like properties, while the heavier Higgs states decouple from the
gauge bosons of the SM. A concrete example of this kind is the MSSM,
where solely from the measured mass value of about 126~GeV important
constraints can be inferred
if the signal is interpreted in terms of the light CP-even
Higgs boson $h$. Requiring the prediction for the mass
of the light CP-even Higgs boson, $m_h$, to be compatible
with the measured mass value of about 126~GeV leads to a lower bound of
about 200~GeV on the mass of the CP-odd Higgs boson, $m_A$, if the
masses of the superpartners are in the TeV range~\cite{Carena:2013qia}.
The value of $m_A$ is therefore much larger than
$m_Z$ in this case, which corresponds to the decoupling region of the
MSSM. This implies that the properties of the state at 126~GeV are
expected to be SM-like, and that one generically would not have expected any
deviations from SM-like properties in the LHC measurements of the new
resonance carried out so far.
In the actual decoupling limit, the couplings of the light Higgs state to
SM particles are exactly the same as for the SM Higgs. Of course,
even if the couplings to SM particles were very close to the
SM values, there could still be deviations from the SM predictions
in the branching ratios and the total width if there is a
significant branching ratio into invisible BSM particles. The
deviations of the Higgs couplings from the SM limit depend on the mass scale
of the new physics.
In general 2HDM-type models (including the case of the MSSM) one
typically expects deviations from the SM predictions at the percent level for
BSM particles in the TeV range. In this context, one expects
the largest deviations to occur in the couplings to fermions
that get their mass from the Higgs doublet with the smaller vacuum
expectation value. For example, within the MSSM this refers in particular
to the couplings to $b \bar b$
and $\tau^+\tau^-$~\cite{Carena:2001bg,Baer:2013cma}.
The couplings to $W$ and $Z$ are usually less
affected by deviations from the decoupling limit. This can be
illustrated in a general 2HDM by the feature that the deviation of the
$HVV$ coupling ($V = W, Z$) from its SM values behaves quadratically in an expansion of the
deviation term, while the couplings to fermions behave linearly, as discussed in
Section~\ref{sec:decouplalign}. The loop-induced couplings $H\gamma\gamma$
and $Hgg$ can be significantly affected by the presence of
relatively light BSM particles in the loops. See
Refs.~\cite{Baer:2013cma,Gupta:2012mi} for a discussion of other electroweak
symmetry breaking scenarios.
Examples of the coupling patterns in specific models are discussed in
the following section.
\subsection{Examples of analyses in different models}
The decays of the Higgs bosons in the 2HDM depend on the Type of Yukawa
interactions. In the decoupling/alignment limit where $\sin(\beta-\alpha)=1$,
all the tree-level couplings of $h$ coincide with those in the SM, as discussed
in Section~\ref{sec:decouplalign}.
However at the loop level, the effects of the additional Higgs bosons $H$, $A$ and $H^\pm$
can generate deviations in the $h$ couplings from the SM predictions in the alignment
limit if the masses of the additional Higgs bosons are not significantly heavier than
the masses of the external particles.
When $\sin(\beta-\alpha)$ is slightly smaller than 1, the couplings of $h$
to various SM particles can differ from the SM predictions by mixing effects in addition to
the loop corrections due to extra fields.
The gauge couplings $hVV$
($VV=WW$ and $ZZ$) are modified by the factor $\sin(\beta-\alpha)$ relative to the SM values,
and Yukawa interactions of $h$ differ from the SM predictions by the factors given in
Table~\ref{yukawa_tab}. The pattern of deviation in Yukawa couplings strongly
depends on the Type of Yukawa interactions in the 2HDM.
Therefore, one can in principle distinguish the type of an extended Higgs sector
through precision measurements of the couplings of $h$ at the ILC.
For example, we discuss here the deviations from the SM of the Yukawa couplings of $h$ in 2HDMs
with a softly broken $\mathbb{Z}_2$ discrete symmetry, which is imposed to avoid
tree-level Higgs-mediated flavor changing neutral currents.
The Yukawa interactions of the SM-like Higgs boson ($h$) are given by
\begin{align}
{\mathcal L}_\text{yukawa}^\text{2HDM}
=&-\sum_f\frac{m_f}{v}\xi_h^f{\overline f}fh,
\end{align}
where the scaling factors $\xi^f_h$ are displayed in Table~\ref{yukawa_tab}.
The scaling parameters for the gauge couplings to $h$ are given by
$\kappa_V=\sin(\beta-\alpha)$, while those for the Yukawa interactions are
given by $\kappa_f=\xi_h^f$ for $f=u,d, \ell$.
The pattern in deviations for each coupling is different among the
various Types of Yukawa interactions.
In Fig.~\ref{FIG:lhc-ilc}, the scale factors $\kappa_f=\xi_h^f$
in the 2HDM with a softly broken $\mathbb{Z}_2$ symmetry are plotted on the $\kappa_\ell$-$\kappa_d$ plane
and the $\kappa_\ell$-$\kappa_u$ plane as a function of
$\tan\beta$ and $\kappa_V^{}=\sin(\beta-\alpha)$ with $\cos(\beta-\alpha) \le 0$.
The points and the dashed curves denote changes of $\tan\beta$ by steps of one.
The scaling factor $\kappa_V$ for the Higgs-gauge-gauge couplings
is taken to be $\kappa_V^2 = 0.99$, $0.95$ and $0.90$.
For $\kappa_V^{}=1$, all the scaling factors for the couplings of $h$ to SM particles become unity.
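The tree-level scale factors underlying these curves are simple enough to evaluate directly. The sketch below assumes the standard Type-II pattern $\xi_h^u=\cos\alpha/\sin\beta$ and $\xi_h^d=\xi_h^\ell=-\sin\alpha/\cos\beta$; the helper name and the explicit construction of the branch with $\cos(\beta-\alpha)\le 0$ are our own illustrative choices:

```python
import math

def kappas_type2(tan_beta, sin_bma):
    """Tree-level scale factors (kappa_V, kappa_u, kappa_d = kappa_l) in a
    Type-II 2HDM, on the branch with cos(beta - alpha) <= 0."""
    beta = math.atan(tan_beta)
    # choose beta - alpha in [pi/2, pi] so that cos(beta - alpha) <= 0
    bma = math.pi - math.asin(sin_bma)
    alpha = beta - bma
    kappa_v = sin_bma
    kappa_u = math.cos(alpha) / math.sin(beta)    # up-type Yukawa
    kappa_d = -math.sin(alpha) / math.cos(beta)   # down-type and lepton Yukawa
    return kappa_v, kappa_u, kappa_d
```

In the limit $\sin(\beta-\alpha)=1$ all three factors reduce to unity, while for $\kappa_V^2=0.95$ and large $\tan\beta$ the down-type coupling deviates far more strongly than the up-type one, in line with the Type-II curves in the figure.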
In Fig.~\ref{FIG:lhc-ilc}, the current LHC constraints and the expected LHC and ILC
sensitivities for $\kappa_d$ and $\kappa_\ell$ at 68.27 \% C.L. are also shown.
For the current LHC constraints (LHC30), we take the numbers
from the universal fit in Eq.~(18) of Ref.~\cite{Giardino:2013bma},
\begin{align}
\epsilon_b = -0.23 \pm0.31, \quad \epsilon_\tau = +0.00 \pm 0.19, \quad
\rho = \begin{pmatrix} 1 & 0.45 \\ 0.45 & 1 \end{pmatrix}
\end{align}
where $\kappa_x = 1 +\epsilon_x$.
Those including $\epsilon_t$ are not provided in Ref.~\cite{Giardino:2013bma},
because the uncertainties are much larger than unity.
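For illustration, the universal fit quoted above defines a two-parameter correlated $\chi^2$. The sketch below (the function name is ours; the numbers are the central values, uncertainties and correlation quoted above) inverts the $2\times 2$ covariance analytically:

```python
def chi2_correlated(eps_b, eps_tau):
    """chi^2 for (eps_b, eps_tau) with the quoted LHC30 universal fit:
    eps_b = -0.23 +/- 0.31, eps_tau = 0.00 +/- 0.19, correlation rho = 0.45."""
    mu_b, mu_tau = -0.23, 0.00
    sig_b, sig_tau = 0.31, 0.19
    rho = 0.45
    db = (eps_b - mu_b) / sig_b
    dt = (eps_tau - mu_tau) / sig_tau
    # analytic inverse of the correlated 2x2 Gaussian covariance
    return (db * db - 2.0 * rho * db * dt + dt * dt) / (1.0 - rho * rho)
```

With these inputs the SM point $\epsilon_b=\epsilon_\tau=0$ gives $\chi^2\approx 0.7$, i.e. well within the 68\% C.L. region, consistent with the statement that no significant deviation from the SM is observed.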
For the future LHC sensitivities (LHC300 and LHC3000),
the expected numbers are taken from Scenario 1
in Table~1 of Ref.~\cite{CMS:2012zoa}.
The central values and the correlations are assumed to be the same as in LHC30 (although in practice,
the correlations would change at a collision energy of $\sqrt{s}=14$ TeV).
The ILC sensitivities are based on the numbers in Table~2.6 in Ref.~\cite{Baer:2013cma}.
The same central values and no correlation are assumed for the plots of ILC sensitivity curves.
Therefore, by precisely measuring the couplings of $h$ at the ILC, one can
detect deviations from the SM. Those deviations can then be used to discriminate among the various types of extended Higgs sectors
by fingerprinting predictions in each model with the precision data of the $h$ coupling measurement.
\begin{figure}[t!]
\centering
\includegraphics[width=7cm]{Chapter_Theory/figs/KdKe.pdf}
\includegraphics[width=7cm]{Chapter_Theory/figs/KuKe.pdf}
\caption{
The deviation in $\kappa_f=\xi_h^f$ in the 2HDM with Type I, II, X and
Y Yukawa interactions are plotted
as a function of $\tan\beta=v_2/v_1$ and $\kappa_V^{}=\sin(\beta-\alpha)$
with $\cos(\beta-\alpha) \le 0$.
For the sake of clarity of illustration, several lines with $\kappa_x=\kappa_y$ are shifted
slightly so as to be separately visible.
The points and the dashed curves denote changes of $\tan\beta$ in steps of one.
The scaling factor for the Higgs-gauge-gauge couplings is taken to be
$\kappa_V^2 = 0.99$, $0.95$ and $0.90$.
For $\kappa_V^{}=1$, all the scaling factors with SM particles become unity.
The current LHC constraints, expected LHC and ILC sensitivities on
(left) $\kappa_d$ and $\kappa_\ell$ and (right) $\kappa_u$ and $\kappa_\ell$
are added.\label{FIG:lhc-ilc}}
\end{figure}
The behavior of the scaling factors depends on the structure of the extended Higgs sector.
For example, a model with mixing of the SM-like Higgs boson with a singlet Higgs field predicts a universal suppression
of the SM-like Higgs couplings, $\kappa_F^{} = \kappa_V^{} = \cos\alpha$,
where $\alpha$ is the mixing angle between the doublet field and the singlet field.
In contrast, $\kappa_F^{} \neq \kappa_V^{}$ in more complicated extended Higgs models
such as the 2HDM, the Georgi-Machacek model~\cite{Georgi:1985nv} and
doublet-septet model~\cite{Hisano:2013sn,Kanemura:2013mc}.
The scaling factors for these models are summarized in
Table~\ref{Tab.ScalingFactor2}.
Note that in exotic models with higher representation
scalar fields such as the Georgi-Machacek model and doublet-septet model,
$\kappa_V$ can be greater than 1 as already discussed in Section~\ref{Anupperbound},
which is the clear signature of these exotic Higgs sectors.
\begin{table}[h!]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline & Doublet-Singlet & 2HDM-I & Georgi-Machacek & Doublet-Septet\\ \hline \hline
$\tan\beta$ & --- & $v_2/v'_2$ & $v_2/(2\sqrt2\, v_3)$ & $v_2/(4\, v_7)$\\
$\xi_h^f$ & $c_\alpha$ & $\frac{c_\alpha}{s_\beta}$ & $\frac{c_\alpha}{s_\beta}$ & $\frac{c_\alpha}{s_\beta}$\\
$\xi_h^V$ & $c_\alpha$ & $s_{\beta-\alpha} (= s_\beta c_\alpha - c_\beta s_\alpha)$ & $s_\beta c_\alpha -\tfrac{2\sqrt6}3 c_\beta s_\alpha$ & $s_\beta c_\alpha-4 c_\beta s_\alpha$\\
\hline
\end{tabular}
\end{center}
\caption{Scaling factors in models with universal Yukawa coupling
constants.} \label{Tab.ScalingFactor2}
\end{table}
In Fig.~\ref{FIG:exotic}, the predictions for the scale factors of the universal Yukawa coupling $\kappa_F$ and
the gauge coupling $\kappa_V$ are plotted in the doublet-singlet model, the Type-I 2HDM, the
Georgi-Machacek model and the doublet-septet model for each set of $\tan\beta$ and $\alpha$.
The current LHC constraints, expected LHC and ILC
sensitivities for $\kappa_F$ and $\kappa_V$ at 68.27 \% C.L. are also shown.
By precision measurements of $\kappa_V$ and $\kappa_F$ one can discriminate among exotic models.
The central values of the contours correspond to the SM prediction.
For the contours for LHC~300 and LHC~3000, $\kappa_\tau$ is used for the scaling factor
of the Yukawa coupling, which exhibits the best sensitivity among the fermionic
channels.
For the contours for ILC250 and ILC500, the scaling factors are chosen as
$(\kappa_V, \kappa_F)=(\kappa_Z, \kappa_b)$ without making combinations.
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{Chapter_Theory/figs/KVKF.pdf}
\caption{The scaling factors in models with universal Yukawa couplings.}
\label{FIG:exotic}
\end{figure}
When $\kappa_V$ is slightly different from unity, we can obtain the upper bound on
the mass scale of the second Higgs boson.
Extended Higgs sectors usually contain additional mass parameters which are
irrelevant to electroweak symmetry breaking.
The mass of the second Higgs boson is then a free parameter
and can be taken to be very heavy, so that all the couplings of the lightest Higgs boson $h$
coincide with the SM value at tree level.
Although we cannot predict the mass of the second Higgs boson,
when the coupling $hVV$ is slightly different from the SM prediction,
the upper bound on the heavy Higgs mass scale can be theoretically obtained as a function of $\kappa_V$
by using the properties of
the SM-like Higgs boson and the constraint from vacuum stability and perturbative unitarity.
In the case of the 2HDM with a softly-broken discrete $\mathbb{Z}_2$ symmetry,
the vacuum stability bound is given by~\cite{Deshpande:1977rw}
\begin{align}
\lambda_1>0\,,\qquad \lambda_2>0\,,\qquad \sqrt{\lambda_1\lambda_2}+\lambda_3+\text{min}\bigl\{0\,,\,\lambda_4+\lambda_5\,,\,\lambda_4-\lambda_5\bigr\} > 0.
\end{align}
The unitarity bounds are obtained by imposing the conditions
$|x_i|<\tfrac{1}{2}$,
where the $x_i$ are the eigenvalues of the $s$-wave amplitude matrix for the elastic scattering of two scalar states.
They are calculated in Ref.~\cite{Kanemura:1993hm},
\begin{align}
x_1^\pm &= \frac{1}{16\pi}
\left[\tfrac{3}{2}(\lambda_1+\lambda_2)\pm\sqrt{\tfrac{9}{4}(\lambda_1-\lambda_2)^2+(2\lambda_3+\lambda_4)^2}\right],\\
x_2^\pm &=
\frac{1}{16\pi}\left[\tfrac{1}{2}(\lambda_1+\lambda_2)\pm\sqrt{\tfrac{1}{4}(\lambda_1-\lambda_2)^2+\lambda_4^2}\right],\\
x_3^\pm &= \frac{1}{16\pi}\left[
\tfrac{1}{2}(\lambda_1+\lambda_2)\pm\sqrt{\tfrac{1}{4}(\lambda_1-\lambda_2)^2+\lambda_5^2}
\right],\\
x_4 &= \frac{1}{16\pi}(\lambda_3+2\lambda_4-3\lambda_5),\qquad
x_5 = \frac{1}{16\pi}(\lambda_3-\lambda_5),\\
x_6 &= \frac{1}{16\pi}(\lambda_3+2\lambda_4+3\lambda_5),\qquad
x_7 = \frac{1}{16\pi}(\lambda_3+\lambda_5),\qquad
x_8 = \frac{1}{16\pi}(\lambda_3+\lambda_4).
\end{align}
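The conditions above are simple enough to scan numerically. The sketch below (the function names are ours) collects the eleven eigenvalues and checks $|x_i|<\tfrac{1}{2}$ together with the vacuum-stability conditions quoted earlier:

```python
import math

def unitarity_eigenvalues(l1, l2, l3, l4, l5):
    """The eleven s-wave eigenvalues x_i of the 2HDM scalar scattering matrix."""
    pref = 1.0 / (16.0 * math.pi)
    r1 = math.sqrt(2.25 * (l1 - l2)**2 + (2.0 * l3 + l4)**2)
    r2 = math.sqrt(0.25 * (l1 - l2)**2 + l4**2)
    r3 = math.sqrt(0.25 * (l1 - l2)**2 + l5**2)
    return [
        pref * (1.5 * (l1 + l2) + r1), pref * (1.5 * (l1 + l2) - r1),  # x1+-
        pref * (0.5 * (l1 + l2) + r2), pref * (0.5 * (l1 + l2) - r2),  # x2+-
        pref * (0.5 * (l1 + l2) + r3), pref * (0.5 * (l1 + l2) - r3),  # x3+-
        pref * (l3 + 2.0 * l4 - 3.0 * l5), pref * (l3 - l5),           # x4, x5
        pref * (l3 + 2.0 * l4 + 3.0 * l5), pref * (l3 + l5),           # x6, x7
        pref * (l3 + l4),                                              # x8
    ]

def is_allowed(l1, l2, l3, l4, l5):
    """Check |x_i| < 1/2 and the tree-level vacuum-stability conditions."""
    unitary = all(abs(x) < 0.5 for x in unitarity_eigenvalues(l1, l2, l3, l4, l5))
    # short-circuit keeps sqrt(l1*l2) safe when l1 or l2 is non-positive
    stable = (l1 > 0 and l2 > 0 and
              math.sqrt(l1 * l2) + l3 + min(0.0, l4 + l5, l4 - l5) > 0)
    return unitary and stable
```

Scanning such a check over the quartic couplings implied by a given $(M, \kappa_V^2, \tan\beta)$ point is essentially how the allowed regions in the figures below are delineated.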
\begin{figure}[t]
\centering
\includegraphics[width=8cm]{Chapter_Theory/figs/kv_M.pdf}
\caption{Regions inside the curves are allowed by the constraints from unitarity and vacuum
stability in the ($M\,,\,\kappa_V^2$) plane
for each fixed value of $\tan\beta$.
We take $M=m_A=m_H=m_{H^+}$.
The solid and dashed curves correspond to the boundaries of the exclusion regions due to
vacuum stability and unitarity, respectively.
}
\label{FIG:kv_M}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=7cm]{Chapter_Theory/figs/tanb_M3.pdf}
\includegraphics[width=7cm]{Chapter_Theory/figs/tanb_M5_v2-1.pdf}
\caption{
Regions below the curves are allowed by the constraints from unitarity and vacuum stability
in the ($\tan\beta\,,\,m_A$) plane for each fixed value of $\kappa_V^2$ for $M=m_A=m_H=m_{H^+}$
in the Type II and Type X 2HDMs.
Expected exclusion regions from gluon fusion production and from associated production of $A$ and $H$ with bottom quarks and
tau leptons at the LHC with a collision energy of 14 TeV are also shown as blue (orange) shaded regions
for an integrated luminosity of 300 fb$^{-1}$ (3000 fb$^{-1}$). }\label{FIG:tanb_M}
\end{figure}
In Fig.~\ref{FIG:kv_M}, an upper limit on the mass of the second lightest Higgs boson is shown
in the 2HDM with a softly broken $\mathbb{Z}_2$ discrete symmetry.
Regions to the left of each curve are allowed by the constraints from unitarity and vacuum
stability in the ($M\,,\,\kappa_V^2$) plane for each fixed value of $\tan\beta$.
We take $M=m_A=m_H=m_{H^+}$. If the equality of the heavier Higgs masses is relaxed, then
the bound on the mass of the second lightest Higgs boson is typically stronger.
The solid and dashed curves correspond to the boundaries of the exclusion regions due to
vacuum stability and unitarity, respectively.
In Fig.~\ref{FIG:tanb_M}, the upper bound on the mass scale of the additional Higgs bosons
is shown as a function of $\tan\beta$ for each fixed value of
$\kappa_V^2=\sin^2(\beta-\alpha)$ under the constraints of perturbative unitarity and vacuum
stability~\cite{Kanemura:1993hm,Deshpande:1977rw}.
The expected discovery regions at the LHC with 300 fb$^{-1}$ and 3000 fb$^{-1}$ are also
shown, assuming Type-II and Type-X Yukawa interactions.
These discovery regions are obtained from the analysis of the tau lepton decay
of $H$ and $A$ in gluon fusion production processes and in
associated production processes with bottom quarks and tau leptons,
\begin{align}
&gg \to \phi^0 \to \tau^+\tau^-,\\
&gg \to b\bar{b}\phi^0 \to b\bar{b}\tau^+\tau^-, \\
&gg \to \tau^+ \tau^-\phi^0 \to \tau^+\tau^- \tau^+ \tau^-,
\end{align}
where $\phi^0$ represents $H$ or $A$.
The cross section is obtained by rescaling the gluon fusion cross section
for $h_{\text{SM}}$ at 14 TeV from Ref.~\cite{Dittmaier:2011ti}, and the signal and background
analysis in the MSSM given in Ref.~\cite{Aad:2009wy} is used. The signal significance $\mathcal{S}$ is
computed by rescaling the results to the case of the 2HDMs, and the expected excluded
regions are obtained by requiring that $\mathcal{S} > 2$.
For moderate values of $\tan\beta$, it
may not be possible to detect the second lightest Higgs boson of the 2HDM at the LHC.
In this case, it is important to determine the mass scale of the second Higgs boson
in an indirect way. For example, it is possible to measure
$\kappa_V$ at the ILC with a precision at the one percent level or better.
If $\kappa_V$ is found to be slightly different from unity at the ILC at the percent level,
then the upper bound on the heavy Higgs mass scale can be obtained from perturbative unitarity.
If the deviation is a few percent, these upper bounds are above the discovery reach
at the LHC with 300 fb$^{-1}$ in a wide region of $\tan\beta$ in both Type-II and Type-X 2HDMs.
At the LHC with 3000 fb$^{-1}$, regions with relatively large $\tan\beta$ can be surveyed.
The ILC with a center-of-mass energy of 1~TeV can directly survey the extra Higgs bosons
with masses less than 500 GeV for relatively low $\tan\beta$ regions,
where the LHC cannot detect them.
\subsection{The $hhh$ coupling and electroweak baryogenesis}
How accurately should we measure the $hhh$ coupling?
\begin{figure}[b!]
\begin{center}
\includegraphics[width=7cm]{Chapter_Theory/figs/ewbg_hhh.pdf}
\caption{The region of strong first order phase transition
($\varphi_c/T_c>1$) required for successful electroweak baryogenesis
and the contour plot of the deviation in the triple Higgs boson
coupling from the SM prediction~\cite{Kanemura:2004ch}, where
$m_\Phi$ represents the common mass of $H$, $A$ and $H^\pm$ and
$M$ is the soft-breaking mass of the $\mathbb{Z}_2$ discrete symmetry in the Higgs
potential.
}
\label{fig:ewbg_hhh}
\end{center}
\end{figure}
Given a sufficient accuracy of the $hhh$ coupling measurement,
one can test certain scenarios of electroweak baryogenesis.
The matter anti-matter asymmetry in our Universe cannot be explained within
the Standard Model of particle physics. In particular,
the baryon-to-photon ratio is given by
$n_b/n_\gamma \simeq (5.1\text{--}6.5) \times 10^{-10}$
at 95\% CL~\cite{FieldsPDG}, where $n_b$ is the difference in the number density
between baryons and anti-baryons and $n_\gamma$ is the photon number density.
In order to generate the baryon asymmetry from a baryon number symmetric
initial state, three conditions first given by Sakharov must be satisfied~\cite{Sakharov:1967dj}.
The electroweak gauge theory can satisfy these conditions
via sphaleron processes at high temperatures, C and CP violation in
the theory, and a strongly first order phase transition
of the electroweak symmetry.
The mechanism of baryogenesis using such a scenario
is called electroweak baryogenesis~\cite{Kuzmin:1985mm,Cohen:1993nk,Morrissey:2012db},
which directly relates to the Higgs sector.
Electroweak baryogenesis is especially attractive because of its testability
at collider experiments.
In the SM, this scenario is already excluded by the data~\cite{Cohen:1993nk,Morrissey:2012db}.
The simplest viable model is the 2HDM~\cite{Fromme:2006cm},
which provides additional CP violating phases and a sufficiently strong first order
electroweak phase transition compatible with the 126 GeV SM-like
Higgs boson due to the loop effect of the extra Higgs bosons.
One of the interesting phenomenological predictions
for such a scenario is a large deviation in the
triple Higgs boson coupling~\cite{Grojean:2004xa,Kanemura:2004ch}.
The requirement of a sufficiently strong first order phase transition
results in a large deviation in the triple Higgs boson coupling
as seen in Fig.~\ref{fig:ewbg_hhh}.
This suggests that the electroweak baryogenesis scenario can be
tested by measuring the $hhh$ coupling with a 10\% accuracy.
An analysis of the first order phase transition has
also been performed in Ref.~\cite{Grojean:2004xa} using a simple Higgs
potential with higher order operators where similar
deviations in the $hhh$ coupling are predicted.
Moreover, the correlation between the condition of strong first order
phase transition and the deviation in the $hhh$ coupling from the SM prediction
can be seen in various extended Higgs models~\cite{Aoki:2008av,Kanemura:2012hr}.
Therefore, measuring the $hhh$ coupling accurately is a useful
probe of the class of models of electroweak baryogenesis.
The measurement of the $hhh$ coupling for $m_h \simeq 126$ GeV
is very challenging at the LHC and even at the high luminosity upgrade of the LHC.
At the ILC, the $hhh$ coupling can be measured via
$e^+e^- \rightarrow Zhh$
and $e^+e^- \rightarrow hh \nu\bar\nu$. As indicated in Chapter~\ref{sid:chapter_summary},
for the combined data taken at the ILC with $\sqrt{s}=250$ GeV with 1150 fb$^{-1}$ and $500$ GeV with 1600 fb$^{-1}$,
the $hhh$ coupling can be measured with an accuracy of about 46\%.
By adding additional data from a run of $\sqrt{s}=1$ TeV with 2500 fb$^{-1}$,
one can determine the $hhh$ coupling to an accuracy of about 13\%.
Therefore, the scenario for electroweak baryogenesis would be
testable by measuring the triple Higgs boson coupling at the ILC.
\subsection{Value added by the ILC Higgs program post-LHC}
What will be the value added by the ILC Higgs program in
the context of the current and future results from the LHC?
We provide a qualitative assessment of this question in this section.
The ILC will provide crucial information for identifying the underlying nature
of electroweak symmetry breaking. In particular,
high-precision measurements of the properties of
the signal observed at 126~GeV will be performed at the ILC.
For example, for the Higgs couplings to gauge bosons and
fermions, one typically expects an order of magnitude
improvement from the ILC measurements as compared to the
ultimate LHC precision. This expected accuracy provides a high
sensitivity for discriminating among possible realizations of
electroweak symmetry breaking, such as effects of an extended Higgs
sector, of additional states of new physics or deviations in the
couplings from the respective SM values that would occur in case the observed
signal is a composite state.
Besides those quantitative improvements, the ILC Higgs program will also
give rise to crucial qualitative improvements in studying the properties
of the observed signal. In particular, the
Higgsstrahlung process $e^+e^-\to ZH$ provides the unique opportunity
to make absolute measurements of Higgs couplings in a model-independent
way. The
clean experimental environment and the relatively low SM cross sections
for background processes allow $e^+e^-\to ZH$
events to be selected based on the identification of two oppositely
charged leptons with invariant mass consistent with $m_Z$. The remainder
of the event, i.e.\ the Higgs decay, is
not considered in the event selection.
Because only the properties of the dilepton system are used in the
selection, this decay-mode independent measurement provides an absolute
determination of the Higgsstrahlung cross section.
Subsequently, by identifying the individual final states for different
Higgs and $Z$ decay modes, absolute measurements of the Higgs boson
branching fractions can be made. Moreover,
the ILC provides a unique sensitivity to
invisible decay modes of the observed signal.
If dark matter consists of a particle (or more than one) with
a mass that is less than half the mass of the observed
signal, there could be a significant branching ratio of the discovered
state at 126 GeV into a pair of dark matter particles.
If an invisible decay mode is detected, this could be the first
hint for the production of dark matter in collider experiments.
Furthermore, the absolute measurements of the Higgs boson branching ratios
imply that the ILC can provide an absolute measurement of the
total width in a model-independent way. This can be accomplished
using the relationship
between the total and partial decay widths, for example
\begin{equation}
\Gamma_H = \frac{\Gamma(H \to W W^*)}{{\rm BR}(H \to W W^*)} ,
\end{equation}
where $\Gamma_H$ denotes the total width. The partial width
$\Gamma(H \to W W^*)$ can be determined from
the measurement of the $HWW$ coupling obtained from the
fusion process $e^+e^- \to H \nu\bar\nu$.
When combined with the direct measurement of ${\rm BR}(H \to W W^*)$,
the total Higgs width can be inferred.
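The relation above amounts to a one-line computation. The sketch below is purely illustrative (not part of the original text); the inputs are roughly SM-like round numbers for a 125--126 GeV Higgs, quoted in MeV.

```python
def total_width(partial_width_ww, br_ww):
    """Infer the total Higgs width from the W W* partial width and the
    corresponding branching ratio: Gamma_H = Gamma(H -> WW*) / BR(H -> WW*)."""
    if not 0.0 < br_ww <= 1.0:
        raise ValueError("branching ratio must lie in (0, 1]")
    return partial_width_ww / br_ww

# Roughly SM-like illustrative inputs: Gamma(H -> WW*) ~ 0.875 MeV,
# BR(H -> WW*) ~ 0.215, giving a total width of about 4.07 MeV.
gamma_h = total_width(0.875, 0.215)
```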
The measurement of the Higgs trilinear self-coupling is of particular
importance, since it provides direct access to the form of the Higgs
potential that gives rise to electroweak symmetry breaking.
This measurement is therefore
crucial for experimentally establishing the scalar dynamics of electroweak symmetry breaking.
As mentioned above, the measurement of the Higgs trilinear self-coupling
will be extremely challenging at the LHC even with 3000\,fb$^{-1}$ of
data. This is due to the complexity of the final state and the
smallness of the cross sections. At the ILC the processes
$e^+e^- \to ZHH$ and $e^+e^- \to HH \nu\bar\nu$ provide sensitivity
to the trilinear self-coupling given sufficiently high luminosity.
Besides a high-precision determination of the properties of the observed
signal at 126~GeV, the ILC also has a high physics potential in the
direct search for additional states of an extended Higgs sector.
The search capacity of the ILC for the pair production of heavy Higgs states is
expected to be close to the kinematic limit of $\tfrac{1}{2}\sqrt{s}$. An extended
Higgs sector could however also contain at least one state that is
{\em lighter\/} than 126~GeV with significantly
suppressed couplings to gauge bosons as compared to the case of a
SM-like Higgs. The search for such a light Higgs state
in the mass range between 60~GeV and 100~GeV is very challenging
in the standard search channels at the LHC. In contrast, at the
ILC there will be a high
sensitivity for probing scenarios of this kind.
\listoffigures
\listoftables
\end{document}
\section*{Acknowledgments}
This research work has been funded by the German Federal Ministry of Education and Research and the Hessian Ministry of Higher Education, Research, Science and the Arts within their joint support of the National Research Center for Applied Cybersecurity ATHENE.
\section*{Appendix}
\subsection{Hyperparameters of Involved AV Methods}
\begin{table} [h!]
\centering\small
\begin{tabular}{lrr}
\toprule
\textbf{Hyperparameter} & \textbf{Our grid search range} & \textbf{Original} \\ \midrule
$U_1$ = initial feature set sizes & $\{ 5, 15, 25, 35, 50, 75, 100, 150 \}$ & 250 \\
$U_2$ = \# eliminated features & $\{ 2, 3, 5 \}$ & 3 \\
$U_3$ = \# iterations & $\{ 3, 5, 7 \}$ & 10 \\
$U_4$ = chunk sizes (in words) & $\{ 5, 15, 25, 35, 50, 75 \}$ & 500 \\
$U_5$ = \# folds & $\{ 3, 5, 7, 10 \}$ & 10 \\
\bottomrule
\end{tabular}
\caption{Original and adjusted hyperparameter ranges of the \AV method \koppelUnmask (\# stands for \quote{number of}). \label{tab:UnmaskingHyperparams}}
\end{table}
\begin{table} [h!]
\centering\footnotesize
\begin{tabular}{ll rrcc l rrrrr}
\toprule
$\bm{\Corpus}$ & \textbf{Representation} & \multicolumn{4}{c}{\textbf{\stamatatosProf}} & & \multicolumn{5}{c}{\textbf{\koppelUnmask}} \\
& & $L_u$ & $L_k$ & $n$ & $d$ & & $U_1$ & $U_2$ & $U_3$ & $U_4$ & $U_5$ \\\midrule
\multirow{3}{*}{$\CorpusGutenberg$} & \originalCorpus & 8,000 & 3,000 & 5 & $d_0$ && 150 & 5 & 5 & 5 & 10 \\
& \posNoise & 3,000 & 1,000 & 4 & $d_0$ & & 100 & 2 & 7 & 25 & 10 \\
& \textDistortion & 1,000 & 2,000 & 4 & $d_0$ & & 150 & 5 & 5 & 5 & 10 \\\midrule
\multirow{3}{*}{$\CorpusWikiSockpuppets$} & \originalCorpus & 2,000 & 6,000 & 5 & $d_0$ && 75 & 3 & 7 & 15 & 10 \\
& \posNoise & 1,000 & 2,000 & 4 & $d_0$ && 100 & 3 & 3 & 15 & 10 \\
& \textDistortion & 1,000 & 8,000 & 5 & $d_0$ && 75 & 3 & 5 & 5 & 10 \\\midrule
\multirow{3}{*}{$\CorpusACL$} & \originalCorpus & 1,000 & 4,000 & 5 & $d_0$ && 150 & 2 & 3 & 5 & 10 \\
& \posNoise & 2,000 & 2,000 & 5 & $d_1$ && 100 & 5 & 3 & 5 & 10 \\
& \textDistortion & 9,000 & 8,000 & 5 & $d_0$ && 75 & 5 & 3 & 5 & 10 \\\midrule
\multirow{3}{*}{$\CorpusPeeJ$} & \originalCorpus & 2,000 & 3,000 & 5 & $d_0$ && 150 & 2 & 5 & 5 & 10 \\
& \posNoise & 1,000 & 2,000 & 4 & $d_0$ && 75 & 2 & 5 & 5 & 10 \\
& \textDistortion & 1,000 & 2,000 & 4 & $d_0$ && 75 & 2 & 3 & 5 & 10 \\\midrule
\multirow{3}{*}{$\CorpusTelegraph$} & \originalCorpus & 2,000 & 5,000 & 5 & $d_0$ && 100 & 5 & 7 & 25 & 10 \\
& \posNoise & 10,000 & 15,000 & 5 & $d_0$ && 150 & 3 & 7 & 5 & 10 \\
& \textDistortion & 6,000 & 10,000 & 5 & $d_0$ && 100 & 2 & 3 & 5 & 10 \\\midrule
\multirow{3}{*}{$\CorpusApricity$} & \originalCorpus & 7,000 & 5,000 & 5 & $d_0$ && 75 & 5 & 5 & 50 & 10 \\
& \posNoise & 1,000 & 1,000 & 4 & $d_0$ && 150 & 5 & 7 & 5 & 10 \\
& \textDistortion & 1,000 & 3,000 & 5 & $d_0$ && 100 & 3 & 5 & 50 & 10 \\\midrule
\multirow{3}{*}{$\CorpusReddit$} & \originalCorpus & 7,000 & 9,000 & 5 & $d_0$ && 50 & 3 & 5 & 5 & 10 \\
& \posNoise & 2,000 & 4,000 & 5 & $d_0$ && 35 & 2 & 3 & 5 & 10 \\
& \textDistortion & 1,000 & 2,000 & 4 & $d_0$ && 35 & 2 & 3 & 5 & 10 \\
\bottomrule
\end{tabular}
\caption{Hyperparameters of \stamatatosProf and \koppelUnmask. The hyperparameters of \stamatatosProf have the following notation: $L_u$ = profile size of the unknown document, $L_k$ = profile size of the known document, $n$ = order of \charNgrams and $d$ = dissimilarity function. The definitions of the three dissimilarity functions $d_0$, $d_1$ and \e{SPI} are described in detail in \cite[Section~3.1]{StamatatosProfileCNG:2014}. The five hyperparameters $U_1\,$--$\,U_5$ of \koppelUnmask are described in Table~\ref{tab:UnmaskingHyperparams}. \label{tab:BaselinesHyperparameters}}
\end{table}
\section{Conclusion and Future Work} \label{Conclusions}
We discussed a serious problem in \av, which affects many \AV methods that have no control over the features they capture.
As a result, the classification predictions of the respective \AV methods may be biased by the topic of the investigated documents.
To address this problem, we proposed a simple but effective approach called \posNoise, which aims to mask topic-related text units in documents.
In this way, only those features in the documents that relate to the authors' writing style are retained, so that the actual goal of an \AV method can be achieved.
In contrast to the alternative topic masking technique \textDistortion, our approach follows a two-step strategy to mask topic-related content in a given document $\D$.
The idea behind \posNoise is first to substitute topic-related words with \posTags predefined in a set $\mathcal{S}$.
In a second step, a predefined list $\mathcal{L}$ is used to retain certain categories of stylistically relevant words and phrases that occur in $\D$.
In addition to this list, we also retain all remaining words in $\D$, for which their corresponding \posTags are not contained in $\mathcal{S}$.
The result of this procedure is a topic-agnostic document representation that allows \AV methods to better quantify stylistically relevant features.
Besides a POS tagger and a predefined list $\mathcal{L}$, no further linguistic resources are required.
In particular, there is no hyperparameter that requires careful adjustment, as is the case with the \textDistortion approach.
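As a toy illustration of this two-step strategy (a sketch, not the authors' implementation): the tag set $\mathcal{S}$, the retained-word excerpt and the dictionary-based tagger below are stand-ins for the trained POS tagger and the full list $\mathcal{L}$.

```python
# Toy sketch of the two-step masking: substitute topic-bearing words by their
# POS tag, but retain words on the topic-agnostic list and words whose tags
# are not in S. All resources here are illustrative stand-ins.

TOPIC_TAGS = {"NOUN", "VERB", "ADJ", "ADV"}   # set S: tags treated as topic-bearing
RETAIN = {"the", "a", "of", "and", "to", "in", "on", "however", "very"}  # excerpt of L

TOY_LEXICON = {  # minimal stand-in for a trained POS tagger
    "the": "DET", "a": "DET", "cat": "NOUN", "sat": "VERB",
    "on": "ADP", "mat": "NOUN", "very": "ADV", "quietly": "ADV",
}

def tag(tokens):
    # Unknown words default to NOUN in this toy tagger.
    return [(t, TOY_LEXICON.get(t.lower(), "NOUN")) for t in tokens]

def pos_noise(tokens):
    masked = []
    for word, pos in tag(tokens):
        if word.lower() in RETAIN or pos not in TOPIC_TAGS:
            masked.append(word)   # stylistically relevant: keep verbatim
        else:
            masked.append(pos)    # topic-related: substitute its POS tag
    return masked
```

For example, the sentence "the cat sat on the mat" becomes "the NOUN VERB on the NOUN", retaining function words while masking content words.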
To assess our approach, we performed a comprehensive evaluation with six existing \AV approaches applied to seven test corpora with related and mixed topics.
Our results have shown that \AV methods based on \posNoise lead to better results than \textDistortion in 34 out of 42 cases with an accuracy improvement of up to 10\%.
Also, we have shown that regardless of which $k$ is chosen for \textDistortion, our approach consistently leads to a better trade-off between style and topic.
\\
\\
However, besides the benefits of our approach, there are also several issues that require further consideration.
When considering languages other than English, \posNoise must be adjusted accordingly.
First, a predefined list of topic-agnostic words and phrases must be compiled for the target language.
Second, our approach relies on a POS tagger so that the availability of a trained model for the desired language must be ensured.
Third, due to the imperfection of the tagger, incorrect \posTags may appear in the topic-masked representations.
Although we have hardly noticed this issue with respect to documents written in English, it is likely to happen with documents written in other languages.
Fourth, due to the underlying tagging process, \posNoise is slower than the existing approach \textDistortion, where the runtime depends on several factors such as the text length or the complexity of the trained model.
Besides these issues, \posNoise leaves further room for improvement. One idea for future work is to investigate automated ways to extend the compiled pattern list.
This can be achieved, for example, by using alternative linguistic resources such as lexical databases that are also available in multiple languages (\eg WordNet and GermaNet).
Another direction for future work is to investigate in which verification scenarios \posNoise is also applicable.
One idea, for example, is to perform experiments under cross-domain \AV conditions, which often occur in real forensic cases
(for example, how a model trained on forum posts or cooking recipes performs on suicide letters, for which no training data is available yet).
Beyond the boundaries of \AV, we also aim to investigate the suitability and effectiveness of our approach in related disciplines of authorship analysis, such as author clustering and author diarization.
\section{Experimental Evaluation} \label{Evaluation}
In the following, we present our experimental evaluation.
First, we present our seven compiled corpora, explain where their documents were obtained and how they were preprocessed, and summarize their main statistics.
Next, we mention which existing \AV methods were chosen for the evaluation as well as how they were trained and optimized.
Afterwards, we explain how an appropriate setting was chosen with regard to the topic-regularization hyperparameter of \textDistortion to allow a fair comparison between it and our \posNoise approach.
Finally, we present the results and describe our analytical findings.
\subsection{Corpora}
To compare our approach against \textDistortion, we compiled seven English corpora covering a variety of challenges,
such as texts (of the same authors) written at different periods of time, texts with an excessive use of slang and texts with no cohesion.
In total, the corpora comprise 4,742 verification cases, where each corpus $\Corpus = \{ \Problem_1, \Problem_2, \ldots \}$ is split into author-disjoint training and test partitions based on a 40/60 ratio.
Each $\Problem \in \Corpus$ denotes a verification case $(\Dunk, \Arefset)$, where $\Dunk$ represents an unknown document and $\Arefset$ a set of sample documents of the known author $\A$.
To counteract \e{population homogeneity}, a form of \AV bias described by Bevendorff et al. \cite{BevendorffBiasAV:2019},
we ensured that for each $\A$ there is one \textbf{same-authorship} (\classY) and one \textbf{different-authorship} (\classN) verification case.
Furthermore, we constructed all corpora so that the number of \classY and \classN cases is equal (in other words, all training and test corpora are \textbf{balanced}).
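A minimal sketch of this balanced construction (illustrative only; function and variable names are our own, and the real pipeline additionally enforces the corpus-specific constraints described in the following subsections):

```python
import random

def build_cases(docs_by_author, seed=0):
    """For every author A with (at least) two documents, create one
    same-authorship (Y) case and one different-authorship (N) case,
    yielding a balanced set of verification cases (D_unk, A_ref, label)."""
    rng = random.Random(seed)
    authors = sorted(docs_by_author)
    cases = []
    for a in authors:
        known, unknown_same = docs_by_author[a][:2]
        other = rng.choice([b for b in authors if b != a])
        cases.append((unknown_same, [known], "Y"))            # same authorship
        cases.append((docs_by_author[other][1], [known], "N"))  # different authorship
    return cases
```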
In Sections~\ref{Corpora_Gutenberg}$\,$--$\,$\ref{Corpora_Reddit}, we introduce the corpora and summarize their key statistics in Table~\ref{tab:CorpusStatistics}.
\newcolumntype{R}{>{\raggedleft\arraybackslash}X}
\begin{table}
\centering \footnotesize
\begin{tabularx}{11cm}{lXllrrRR} \toprule
\bfseries\boldmath Corpus & \textbf{Genre} & \textbf{Topic} & \textbf{Partition} & \boldmath$|\Corpus|$ & \boldmath$|\Arefset|$ & \bfseries\boldmath avg$|\DA|$ & \bfseries\boldmath avg$|\Dunk|$ \\\midrule
\multirow{2}{*}{$\CorpusGutenberg$} & Fiction & Mixed & $\CorpusTrain$ & 104 & 1 & 21660 & 21787 \\
& books & topics & $\CorpusTest$ & 156 & 1 & 21879 & 21913 \\\midrule
\multirow{2}{*}{$\CorpusWikiSockpuppets$} & Wikipedia & Related & $\CorpusTrain$ & 150 & 3 & 2280 & 2983 \\
& talk pages & topics & $\CorpusTest$ & 226 & 3 & 2363 & 3042 \\\midrule
\multirow{2}{*}{$\CorpusACL$} & Scientific & Related & $\CorpusTrain$ & 186 & 1 & 9497 & 14196 \\
& papers & topics & $\CorpusTest$ & 280 & 1 & 8250 & 13913 \\\midrule
\multirow{2}{*}{$\CorpusPeeJ$} & Chat & Related & $\CorpusTrain$ & 208 & $\approx\,$2 & 2329 & 2946 \\
& logs & topics & $\CorpusTest$ & 312 & $\approx\,$2 & 2324 & 2967 \\ \midrule
\multirow{2}{*}{$\CorpusTelegraph$} & News & Related & $\CorpusTrain$ & 220 & 2 & 3329 & 4946 \\
& articles & topics & $\CorpusTest$ & 332 & 2 & 3068 & 4702 \\\midrule
\multirow{2}{*}{$\CorpusApricity$} & Forum & Related & $\CorpusTrain$ & 228 & $\approx\,$4 & 3900 & 4023 \\
& posts & topics & $\CorpusTest$ & 340 & $\approx\,$4 & 3921 & 4020 \\\midrule
\multirow{2}{*}{$\CorpusReddit$} & Social & Mixed & $\CorpusTrain$ & 800 & 3 & 5735 & 6785 \\
& news & topics & $\CorpusTest$ & 1200 & 3 & 5854 & 6794 \\\bottomrule
\end{tabularx}
\caption{Overview of our corpora and their key statistics. Notation: $\bm{|\Corpus|}$ denotes the \textbf{number of verification cases} in each corpus $\bm{\Corpus}$, while $\bm{|\Arefset|}$ denotes the \textbf{number of the known documents}. The \textbf{average character length} of $\bm{\Dunk}$ and $\bm{\DA}$ is denoted by avg$\bm{|\Dunk|}$ and avg$\bm{|\DA|}$, respectively. Note that in this context $\bm{\DA}$ represents the \textbf{concatenation} of all documents in $\bm{\Arefset}$. \label{tab:CorpusStatistics}}
\end{table}
\subsubsection{Gutenberg Corpus ($\CorpusGutenberg$)} \label{Corpora_Gutenberg}
This corpus comprises 260 texts taken from the \e{Webis Authorship Verification Corpus 2019} dataset released by Bevendorff et al. \cite{BevendorffBiasAV:2019}.
The texts in $\CorpusGutenberg$ represent fragments extracted from fiction books that stem from the \e{Project Gutenberg} portal.
$\CorpusGutenberg$ is the only corpus where the contained documents have not been preprocessed by us, as they have already gone through a clean and well-designed preprocessing routine, which is described in detail by Bevendorff et al. \cite{BevendorffBiasAV:2019}. The only action we took was to re-arrange the training and test partitions and to balance the \classYdash{} and \classNdash{cases}, since the original partitions were both imbalanced.
\subsubsection{Wikipedia Sockpuppets Corpus ($\CorpusWikiSockpuppets$)} \label{Corpora_WikipediaSockpuppets}
This corpus comprises 752 excerpts of 288 Wikipedia talk page editors taken from the \e{Wikipedia Sockpuppets} dataset released by Solorio et al. \cite{SolorioSockpuppet:2014}.
The original dataset contains two partitions comprising sockpuppets and non-sockpuppets cases, where for $\CorpusWikiSockpuppets$ we considered only the latter subset.
In addition, we did not make use of the full range of authors within the non-sockpuppets cases, as appropriate texts of sufficient length were not available for each of the authors.
From the considered texts, we removed Wiki markup, timestamps, URLs and other types of noise.
Besides, we discarded sentences with many digits, proper nouns and near-duplicate string fragments as well as truncated sentences.
\subsubsection{ACL Anthology Corpus ($\CorpusACL$)} \label{Corpora_AclAnthology}
This corpus comprises 466 paper excerpts from 233 researchers, which were collected from the computational linguistics archive \e{ACL Anthology}\footnote{\url{https://www.aclweb.org/anthology}}. The corpus was constructed in such a way that for each author there are exactly two papers\footnote{Note that we ensured that each paper was single-authored.}, stemming from different periods of time. From the original papers we tried, as far as possible, to restrict the content of each text to only such sections that mostly comprise natural-language text (\eg \e{abstract}, \e{introduction}, \e{discussion}, \e{conclusion} or \e{future work}).
To ensure that the extracted fragments met the important \AV bias cues of Bevendorff et al. \cite{BevendorffBiasAV:2019}, we preprocessed each paper extract in $\CorpusACL$ manually. Among others, we removed tables, formulas, citations, quotes, references and sentences that include non-language content such as mathematical constructs or specific names of researchers, systems or algorithms. The average time span between both documents of each author is $\approx$12 years, whereas the minimum and maximum time span are 8 and 31 years, respectively.
Besides the temporal aspect of $\CorpusACL$, another characteristic of this corpus is the formal (scientific) language, where the usage of stylistic devices (\eg repetitions, metaphors or rhetorical questions) is more restricted, in contrast to other genres such as chat logs.
\subsubsection{Perverted Justice Corpus ($\CorpusPeeJ$)} \label{Corpora_PeeJ}
This corpus comprises 738 chat logs of 260 sex offenders, which have been crawled from the \e{Perverted-Justice} portal\footnote{\url{http://www.perverted-justice.com}}.
The chat logs stem from older instant messaging clients (\eg \e{MSN}, \e{AOL} or \e{Yahoo}), where we ensured for each conversation that only chat lines from the offender were extracted.
To obtain as much language variability as possible regarding the content of the conversations, we selected chat lines from different messaging clients and different time spans (where possible) and kept those lines that differed most from each other according to a similarity measure\footnote{For this purpose, we used the \e{FuzzyWuzzy} library available under \url{https://github.com/seatgeek/fuzzywuzzy} and selected the \e{Levenshtein} distance as a similarity measure.}.
One characteristic of the chats in $\CorpusPeeJ$ is an excessive use of slang, a variety of specific abbreviations and other forms of noise.
As further preprocessing steps, we discarded chat lines with less than 5 tokens (or 20 characters) and such lines containing usernames, timestamps, URLs as well as words with repetitive characters (\eg \quotetxt{yoooo} or \quotetxt{jeeeez}).
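The selection of dissimilar lines can be sketched as a greedy near-duplicate filter. Here, the standard library's SequenceMatcher serves as a stand-in for the Levenshtein-based FuzzyWuzzy scorer actually used, and the threshold value is illustrative.

```python
from difflib import SequenceMatcher

def select_diverse_lines(lines, threshold=0.8, min_tokens=5, min_chars=20):
    """Greedy sketch: drop too-short lines (mirroring the < 5 tokens / < 20
    characters filter), then keep a line only if it is not too similar to
    any previously selected line."""
    selected = []
    for line in lines:
        if len(line.split()) < min_tokens or len(line) < min_chars:
            continue
        if all(SequenceMatcher(None, line, kept).ratio() < threshold
               for kept in selected):
            selected.append(line)
    return selected
```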
\subsubsection{The Telegraph Corpus ($\CorpusTelegraph$)}
This corpus consists of 828 excerpts of news articles from 276 journalists, crawled from \e{The Telegraph} website.
Due to their nature, the original articles contain many verbatim quotes, which can distort the writing style of the author of the article.
To counter this problem, we sampled from each article such sentences that did not contain quotations and other types of noise including headlines and URLs.
As a result, the underlying characters in each preprocessed article are solely restricted to (case-insensitive) letters, spaces and common punctuation marks.
Finally, we concatenated the preprocessed sentences from each article into a single document.
Note that due to this procedure the coherence of the resulting document is distorted.
Consequently, \AV methods that make use of character and/or word $n$-grams may capture \quote{artificial features} that occur across sentence boundaries.
\subsubsection{The Apricity Corpus ($\CorpusApricity$)}
This corpus comprises 1,395 posts from 284 users that were obtained from \e{The Apricity - A European Cultural Community}\footnote{\url{https://theapricity.com}} portal. The postings are distributed across different subforums with related topics (\eg \e{anthropology}, \e{genetics}, \e{race and society} or \e{ethno cultural discussion}).
To construct $\CorpusApricity$, we ensured that all documents within each verification case stem from different subforums.
The crawled postings have been cleaned from markup tags, URLs, signatures, quotes, usernames and other forms of noise.
\subsubsection{Reddit Corpus ($\CorpusReddit$)} \label{Corpora_Reddit}
This corpus consists of 4,000 posts from 1,000 users, which were crawled from the \e{Reddit} community network.
Each document in $\CorpusReddit$ has been aggregated from multiple posts from the same so-called \emph{subreddit} to obtain a sufficient length.
However, all documents within each verification case originate from different \emph{subreddits} with unrelated topics.
Hence, in contrast to the $\CorpusApricity$ corpus, $\CorpusReddit$ represents a mixed-topic corpus.
In total, $\CorpusReddit$ covers exactly 1,388 different topics including \e{politics}, \e{science}, \e{books}, \e{news} and \e{movies}.
\subsection{Considered AV Methods}
To compare the effectiveness of \posNoise and \textDistortion, we selected six well-known \AV methods that have shown their potential in various studies
(\eg \cite{HalvaniARES:2017,BevendorffUnmasking:2019,StamatatosTextDistortion:2017}).
All of the chosen methods are based on \textbf{implicit feature categories} and are thus susceptible to topic influences.
Four of these (\coav \cite{HalvaniARES:2017}, \occav \cite{HalvaniOCCAV:2018}, \veenmanNNCD \cite{VeenmanPAN13:2013} and \stamatatosProf \cite{StamatatosProfileCNG:2014}) rely on \charNgrams, while the remaining two (\spatium \cite{KocherSavoySpatiumL1:2017} and \koppelUnmask \cite{KoppelAVOneClassClassification:2004}) are based on frequent words/tokens. In the following, we describe some design decisions we made with regard to these methods and explain how their hyperparameters were set.
\subsubsection{Source of Impostor Documents}
\veenmanNNCD and \spatium represent binary-extrinsic \AV methods (cf. \cite{HalvaniAssessingAVMethods:2019}), meaning that they rely on so-called \quote{impostor documents} (external documents outside the respective verification case) for their classification of whether or not there is a matching authorship. In the original paper, Veenman and Li \cite{VeenmanPAN13:2013} did not provide an automated solution to generate the impostor documents, but collected them in a manual way using a search engine.
However, since this manual approach is not scalable, we opted for an alternative idea in which the impostor documents were taken directly from the test corpora.
This strategy has also been considered by Kocher and Savoy \cite{KocherSavoySpatiumL1:2017} with respect to their \spatium approach.
Although using static corpora is not as flexible as using search engines, it has the advantage that due to the available metadata (for instance, user names of the authors) the true author of the unknown document is likely not among the impostors\footnote{However, we cannot guarantee if different user names in fact refer to different persons (multiple accounts might refer to the same person).}. Furthermore, the documents contained in the test corpora are already cleaned and therefore do not require additional preprocessing.
\subsubsection{Uniform (Binary) Predictions}
In their original form, \spatium and \stamatatosProf allow the three possible prediction outputs \classY (same-author), \classN (different-author) and \unanswered (unanswered), whereas for the remaining approaches only binary predictions (\classY/\classN) are considered. To enable a fair comparison, we therefore decided to unify the predictions of all involved \AV methods to the binary case.
In this context, verification cases for which the \AV methods determined similarity values greater than 0.5 were classified as \classY, otherwise as \classN. Here, all similarity values were normalized into the range $[0, 1]$, so that 0.5 marks the decision threshold.
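A minimal sketch of this unification step is given below. The min-max normalization is an illustrative assumption, as the text does not specify how the raw similarity values were mapped into $[0, 1]$.

```python
def min_max_normalize(scores):
    """Rescale raw similarity scores into [0, 1]; the concrete
    normalization scheme is an assumption, as the text does not specify it."""
    lo, hi = min(scores), max(scores)
    if hi == lo:
        return [0.5] * len(scores)
    return [(s - lo) / (hi - lo) for s in scores]

def to_binary(similarity):
    """Map a normalized similarity to Y (same author) or N (different
    author), with 0.5 acting as the decision threshold."""
    return "Y" if similarity > 0.5 else "N"

raw = [0.2, 1.4, 0.9, 2.0]  # hypothetical raw similarity values
print([to_binary(s) for s in min_max_normalize(raw)])
```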
\subsubsection{Settings for \coav, \occav, and \veenmanNNCD}
All three are compression-based \AV methods that rely on the PPMd compression algorithm, as specified in the original papers \cite{HalvaniARES:2017,HalvaniOCCAV:2018,VeenmanPAN13:2013}. However, these papers do not mention how the \e{model-order} hyperparameter of PPMd was set. We therefore decided to set this hyperparameter to 7 for all three methods, based on our observation\footnote{Regarding the \e{model-order} hyperparameter we experimented with values in $[1,10]$.} that this value led to the best accuracy across all training corpora.
Moreover, we used the same dissimilarity functions that were specified in the original papers (CDM for \veenmanNNCD as well as CBC for \coav\footnote{An implementation of \coav is available under \newline \url{https://paperswithcode.com/paper/authorship-verification-based-on-compression}.} and \occav\footnote{An implementation of \occav is available under \newline \url{https://paperswithcode.com/paper/authorship-verification-in-the-absence-of}.}). Apart from these, there are no other hyperparameters for these approaches.
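To illustrate how such compression-based dissimilarity scores are computed, the following sketch implements the CDM (as commonly defined in the compression-distance literature) used by \veenmanNNCD. Since PPMd is not available in the Python standard library, \texttt{bz2} serves here only as a stand-in compressor; the CBC function used by \coav and \occav follows the same pattern with a different formula.

```python
import bz2

def C(text):
    """Compressed size in bytes. The original methods use PPMd with model
    order 7; bz2 is used here only as a readily available stand-in."""
    return len(bz2.compress(text.encode("utf-8")))

def cdm(x, y):
    """Compression-based Dissimilarity Measure: C(xy) / (C(x) + C(y)).
    Scores near 0.5 indicate highly similar texts, scores near 1.0
    dissimilar ones."""
    return C(x + y) / (C(x) + C(y))

a = "the quick brown fox jumps over the lazy dog " * 50
b = "lorem ipsum dolor sit amet consectetur adipiscing elit " * 50
print(cdm(a, a) < cdm(a, b))  # a is more similar to itself than to b
```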
\subsubsection{Counteracting Non-Deterministic Behavior}
The two \AV methods \spatium and \koppelUnmask involve different sources of randomness (\eg impostor selection and chunk generation) and, due to this, cause non-deterministic behavior regarding their predictions. In other words, applying these methods multiple times to the same verification case can result in different predictions which, in turn, can lead to a biased evaluation (cf. \cite{HalvaniAssessingAVMethods:2019}).
To address this issue, we performed 11 runs for each method and selected the run where the accuracy score represented the median. The reason we avoided averaging multiple runs (as was the case, for example, in \cite{StamatatosPothaImprovedIM:2017}) was to obtain more precise numbers with respect to our analysis.
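The median-run selection can be expressed compactly as follows; the run accuracies below are hypothetical.

```python
import statistics

def median_run(accuracies):
    """Return the index of the run whose accuracy equals the median.
    With an odd number of runs (11 in the text), the median is always
    one of the observed values, so no averaging occurs."""
    return accuracies.index(statistics.median(accuracies))

runs = [0.71, 0.69, 0.74, 0.70, 0.72]  # hypothetical accuracies of 5 runs
print(median_run(runs))
```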
\subsubsection{Model and Hyperparameters}
Apart from \spatium\footnote{We used the original implementation of \spatium available under \url{https://github.com/pan-webis-de}.} and \occav, the remaining four \AV methods involve model and adjustable hyperparameters. The former represent parameters that are estimated directly from the data, while in contrast hyperparameters must be set manually.
Of the four respective \AV methods considered in our experiments, model parameters represent the weights that form the SVM-hyperplanes (used by \koppelUnmask) or the thresholds required to accept or reject the questioned authorships (used by \coav and \stamatatosProf). To obtain the model parameters of \stamatatosProf and \coav, we trained both methods on the \originalCorpus, \posNoise and \textDistortion training corpora, respectively.
The hyperparameters involved in our selected \AV approaches represent, among others, the number of $k$ cross-validation folds (used by \koppelUnmask) or the $n$-order of the \charNgrams (used by \stamatatosProf) and have been optimized as follows. Regarding \koppelUnmask, an adjustment was needed to fit our experimental setting. In the original definition of this method, Koppel and Schler \cite{KoppelAVOneClassClassification:2004,KoppelUnmasking:2007} considered entire books to train and evaluate \koppelUnmask, which differ in length from the documents used in our experiments. Therefore, instead of using the original fixed hyperparameter settings (which would make \koppelUnmask inapplicable in our evaluation setting), we decided to consider individual hyperparameter ranges with values that are more appropriate for the shorter documents available in our corpora. The customized hyperparameter ranges are listed in Table~\ref{tab:UnmaskingHyperparams}.
For \stamatatosProf, on the other hand, we employed the same hyperparameter ranges described in the original paper \cite{StamatatosProfileCNG:2014}.
Based on the original and adjusted hyperparameter ranges of \stamatatosProf and \koppelUnmask, we optimized both methods on the training partitions of \textbf{\originalCorpus}, \textbf{\posNoise} and \textbf{\textDistortion} using grid search guided by accuracy as a performance measure. The resulting hyperparameters are listed in Table~\ref{tab:BaselinesHyperparameters}.
\subsection{Comparison Between Representations} \label{TextDistortion_Setup}
To allow a reasonable comparison between \posNoise and \textDistortion, a suitable and fixed configuration for the latter is necessary.
For this, we opted for the DV-SA variant on the basis of the following considerations:
\begin{itemize}
\item Stamatatos \cite{StamatatosTextDistortion:2017} conducted a number of \AA and \AV experiments with the two variants DV-SA and DV-MA, where no differences were observed in the context of \AA. However, with regard to the \AV experiments Stamatatos found that the DV-SA variant was more competitive than DV-MA.
\item Both \posNoise and the \textDistortion variant DV-SA substitute topic-related text units token-wise, which allows a better comparability.
\item None of the selected \AV methods consider word length as a separate feature, so there is no advantage with regard to this possibility that is only maintained by the DV-MA variant.
\end{itemize}
Besides the choice between the two variants DV-SA and DV-MA, an adjustable hyperparameter $k$ must be set for \textDistortion, which regulates which tokens (not necessarily words) are retained, while all other tokens are masked.
However, finding a suitable value for $k$ can be seen as a problematic trade-off, since a low value suppresses too many style-related tokens, while in contrast a higher value leaves many topic-related tokens unmasked. To address this trade-off, we therefore pursued the following systematic approach.
Given $W_k$ (the list on which \textDistortion is based) we divided all the words contained in it with respect to their index in $W_k$ into two groups,
which include style-related tokens (\eg conjunctions, determiners, prepositions, pronouns, etc.) and topic-related tokens (\eg nouns, verbs and adjectives), respectively.
Then, we visualized the distributions of the tokens in both groups. As can be seen in Figure~\ref{TextDistortion_ChoiceOfK}, the number of style-related tokens increases at a decreasing rate, while the number of topic-related tokens increases linearly as the $k$ value grows.
In other words, as $k$ increases, fewer style-related tokens occur in $W_k$, while at the same time a higher number of topic-related tokens are present in $W_k$.
\\
\\
To determine an appropriate $k$ that (1) suppresses topic-related words and (2) retains stylistically relevant patterns as much as possible,
we chose $k=170$ which represents the value at which the style-related tokens outnumber the topic-related tokens the most in terms of absolute frequency.
Among all $k \in \{ 1, 2, \ldots, |W_k| \}$, the setting $k = 170$ best balances conditions (1) and (2).
Note that in Section~\ref{Results} we perform a more in-depth analysis showing which topic-related tokens occur in the \textDistortion-preprocessed documents
and the consequences of considering higher values for $k$.
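The selection criterion for $k$ described above can be sketched as follows, assuming a frequency-ranked token list in which each entry is labeled as style- or topic-related; the toy labeling below is purely illustrative.

```python
def choose_k(labels):
    """Given the ranked word list W_k, where labels[i] is True if the i-th
    most frequent token is style-related (function word etc.) and False if
    it is topic-related, return the prefix length k at which style-related
    tokens outnumber topic-related ones by the largest absolute margin."""
    best_k, best_margin, style, topic = 0, float("-inf"), 0, 0
    for i, is_style in enumerate(labels, start=1):
        if is_style:
            style += 1
        else:
            topic += 1
        margin = style - topic
        if margin > best_margin:
            best_margin, best_k = margin, i
    return best_k

# Toy ranking: style-related tokens dominate the head of the list, then
# topic-related tokens take over (the pattern described in the text).
labels = [True] * 8 + [False, True, False, False, False, False]
print(choose_k(labels))
```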
\begin{figure} [h!]
\centering
\includegraphics[width=0.45\linewidth]{images/TextDistortion_choice_of_k}
\caption{Choice of the topic regularization hyperparameter $\bm{k}$ of \textDistortion. \label{TextDistortion_ChoiceOfK}}
\end{figure}
Based on the setting $k=170$, we examined the extent to which the two topic-masked representations generated by \posNoise and \textDistortion differ from each other. For this, we list in Table~\ref{table:ComparisonTopicMasking} several example sentences taken from the documents in our test corpora,
which show the differences regarding the outputs of both approaches.
\begin{table}
\centering \footnotesize
\begin{tabular}{ll} \toprule
\textbf{Repres.} & \textbf{Original / topic-masked sentences} \\ \midrule
\originalCorpus & \texttt{As an example, let us analyze the following English sentence.} \\
\posNoise & \texttt{As an example, let us Ø the following @ \#.} \\
\textsf{TextDist.} & \texttt{As an *, * us * the * * *.} \\\midrule
\originalCorpus & \texttt{Like before, further improvements to this section are welcome.} \\
\posNoise & \texttt{Like before, further \# to this \# are @.} \\
\textsf{TextDist.} & \texttt{Like *, * * to this * are *.} \\\midrule
\originalCorpus & \texttt{I'd like to see some other editors' opinions on this question.} \\
\posNoise & \texttt{I'd like to see some other \#' \# on this \#.} \\
\textsf{TextDist.} & \texttt{*'* like to see some other *' * on this *.} \\\midrule
\originalCorpus & \texttt{Therefore we add another operator to erase this function.} \\
\posNoise & \texttt{Therefore we Ø another \# to Ø this \#.} \\
\textsf{TextDist.} & \texttt{* we * another * to * this *.} \\\midrule
\originalCorpus & \texttt{Regarding the lexicon, the model allows for clusters.} \\
\posNoise & \texttt{Regarding the \#, the \# Ø for \#.} \\
\textsf{TextDist.} & \texttt{* the *, the * * for *.} \\\midrule
\originalCorpus & \texttt{Look, most people have been lied to, most are...} \\
\posNoise & \texttt{Look, most \# have been Ø to, most are...} \\
\textsf{TextDist.} & \texttt{*, most people have been * to, most are...} \\
\bottomrule
\end{tabular}
\caption{Comparison between the resulting topic-masked representations generated by \posNoise and \textDistortion. \label{table:ComparisonTopicMasking}}
\end{table}
It can be clearly seen that both approaches entirely mask topic-related words.
However, in contrast to \textDistortion, our approach retains a greater number of syntactic structures including multi-word expressions
(\quotetxt{As an example} and \quotetxt{Like before}), contractions (\quotetxt{I'd}) or sentence openers (\quotetxt{Regarding} and \quotetxt{Therefore}) that represent important stylistic features.
Another difference that can be seen in Table~\ref{table:ComparisonTopicMasking} is that \posNoise not only \textbf{retains} stylistically relevant words and phrases occurring in the documents but also \textbf{generates} additional features, \ie \posTags, that increase the diversity of the documents' feature space. Depending on the considered \AV method, a variety of feature compositions can be derived from \posNoise representations, which include, for instance,
\posTags with preceding/succeeding punctuation marks or \posTags surrounded by function words. Such feature compositions can play a decisive role in the prediction of an \AV method and are therefore desirable.
\subsection{Results} \label{Results}
After training and optimizing all \AV methods, we applied these to the respective test partitions of the \originalCorpus, \posNoise and \textDistortion corpora. The overall results are shown in Table~\ref{tab:ExperimentComparisonResults}, where a compact summary with respect to the average/median improvements of \posNoise over \textDistortion is provided in Table~\ref{table:ImprovementsSummary}.
In what follows, we focus on the results between \posNoise and \textDistortion (using $k=170$) in Table~\ref{tab:ExperimentComparisonResults}, which are highlighted in yellow.
As can be seen from this table, \posNoise leads to better results than \textDistortion in terms of accuracy and \auc in 34 and 32 of 42 cases (81\% and 76\%, respectively). In total, there are four ties in terms of accuracy; in two of these, \posNoise achieves the better \auc score, which we selected as a secondary performance measure.
Besides the setting $k=170$, we further evaluated the six \AV methods on additional \textDistortion-preprocessed corpora using the settings 100, 300, 500 and 1000. A closer look at the results in Table~\ref{tab:ExperimentComparisonResults} shows that \textDistortion leads to increasingly larger accuracy improvements the higher the setting of $k$ is. What is not reflected in these results, however, is at what price the \quote{improvements} occur.
\\
\\
To gain a better understanding of how higher settings for $k$ affect the performance of the methods, we first examined how many topic-related text units remain in the documents preprocessed by \textDistortion. For this, we concatenated all documents in each \textDistortion corpus into a single text $\D$, tokenized $\D$, and subtracted from the resulting token list all words that appear in our pattern list $\listStylePatterns$.
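This procedure can be sketched as follows; the simple lowercase tokenizer and the tiny stand-in pattern list are illustrative assumptions.

```python
import re

def remaining_topic_tokens(documents, style_patterns):
    """Concatenate all documents of a corpus, tokenize, and drop every token
    that appears in the style pattern list, leaving the (mostly topic-related)
    residue that the word clouds visualize. The lowercase word split is a
    simplification; the paper's exact tokenizer is not specified."""
    joined = " ".join(documents).lower()
    tokens = re.findall(r"[a-z']+", joined)
    style = {p.lower() for p in style_patterns}
    return [t for t in tokens if t not in style]

docs = ["The system stores the data.", "We therefore evaluate the system."]
patterns = ["the", "we", "therefore"]  # tiny stand-in for the pattern list
print(remaining_topic_tokens(docs, patterns))
```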
The remaining words and patterns can be inspected in detail in the word clouds illustrated in Table~\ref{tbl:TextDistortion_Influence_of_K}.
As can be seen from these word clouds, the document representations masked by \textDistortion retain a wide variety of topic-related words (\eg \quotetxt{system}, \quotetxt{data}, \quotetxt{information}, etc.) as $k$ increases; these words are not present in the \posNoise representations.
The presence of these topic-related text units provides a first indication that the improvement in verification results can be attributed to them.
To investigate this further, we proceed on the basis of the following assumption: if the tokens retained by higher $k$-values are related to the topic of the texts, this should be reflected in correspondingly high topic classification results. Low $k$-values, on the other hand, should lead to low topic classification results.
As a first step, we selected three well-known benchmark corpora for topic classification namely:
\e{AG's News Topic Classification Dataset} \cite{GulliAGNewsArticles:2005} (denoted by $\CorpusAGNews$),
\e{BBC News Dataset} \cite{GreeneCunninghamBBCDataset:2006} (denoted by $\CorpusBBC$) and
\e{Yahoo! Answers Topic Classification Dataset} \cite{ZhangCNNtextClassifiction:2015} (denoted by $\CorpusYahoo$),
where all corpora were left in their original form (\ie no subsampling or subsets were considered) in order to allow reproducibility of our results.
Next, we trained a standard logistic regression classifier\footnote{We used the \e{Logistic Regression} classifier implementation available at \url{https://scikit-learn.org} with the default settings.} (on the basis of tokens as features) using the \originalCorpus, \posNoise and \textDistortion representations of $\CorpusAGNews$, $\CorpusYahoo$ and $\CorpusBBC$, where for \textDistortion we selected $k \in \{ 100, 170, 300, 500, 1000 \}$.
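The following sketch illustrates this setup with a dependency-free, plain-Python logistic regression over token counts (trained with batch gradient descent) in place of the scikit-learn implementation; the toy two-topic corpus is invented for illustration, and the 5-fold cross-validation loop is omitted for brevity.

```python
import math
import re
from collections import Counter

def tokenize(text):
    return re.findall(r"[a-z']+", text.lower())

def train_logreg(docs, ys, vocab, epochs=200, lr=0.5):
    """Batch gradient descent for binary logistic regression over token
    counts -- a dependency-free stand-in for the scikit-learn classifier."""
    w = {t: 0.0 for t in vocab}
    b = 0.0
    feats = [Counter(tokenize(d)) for d in docs]
    for _ in range(epochs):
        gw = {t: 0.0 for t in vocab}
        gb = 0.0
        for f, y in zip(feats, ys):
            z = b + sum(w[t] * c for t, c in f.items())
            p = 1.0 / (1.0 + math.exp(-z))
            err = p - y  # gradient of the log loss w.r.t. z
            gb += err
            for t, c in f.items():
                gw[t] += err * c
        b -= lr * gb / len(docs)
        for t in vocab:
            w[t] -= lr * gw[t] / len(docs)
    return w, b

def predict(w, b, doc):
    z = b + sum(w.get(t, 0.0) * c for t, c in Counter(tokenize(doc)).items())
    return int(z > 0)  # 1 = second topic, 0 = first topic

# Invented two-topic toy corpus (0 = sport, 1 = business).
docs = ["goals match team", "striker scored goals",
        "shares market fell", "stocks earnings fell"]
ys = [0, 0, 1, 1]
vocab = {t for d in docs for t in tokenize(d)}
w, b = train_logreg(docs, ys, vocab)
print([predict(w, b, d) for d in ["team scored", "market stocks"]])
```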
The results of the topic classifier (based on 5-fold cross-validation) for each $\Corpus \in \{ \CorpusAGNews, \CorpusYahoo, \CorpusBBC \}$ along with its \originalCorpus, \posNoise and \textDistortion representations are visualized against the median of the \AV results of the six methods in the scatter plots shown in Figure~\ref{Accuracy_Topic_vs_AV}.
\begin{figure*} [t]
\centering
\includegraphics[width=0.3\textwidth]{images/Topic_Vs_AV/AG_All_Classes_4}
\hspace{.5cm}
\includegraphics[width=0.3\textwidth]{images/Topic_Vs_AV/Yahoo_All_Classes_10}
\hspace{.5cm}
\includegraphics[width=0.3\textwidth]{images/Topic_Vs_AV/BBC_All_Classes_5}
\caption{Comparison between topic and \AV classification results. \label{Accuracy_Topic_vs_AV}}
\end{figure*}
As can be seen from the three scatter plots, the \textDistortion representations lead to higher \AV classification results,
but at the same time also to higher topic classification results as $k$ increases.
Conversely, lower $k$-values lead to both lower topic and \AV results.
In contrast, \posNoise comes closer to the best possible compromise between topic and style (across $\CorpusAGNews, \CorpusYahoo, \CorpusBBC$), which is located at the bottom right of each plot.
\\
\\
When comparing the topic classification results on the \originalCorpus and \posNoise corpora, one can further observe that the topic signal is considerably stronger in the former (accuracy up to $\approx$35\% higher). This shows that \posNoise leads to a substantial reduction of topic-related text units without strongly affecting the style-related classification performance.
It should be noted that we do not consider the absolute accuracy values meaningful for this experiment, since in all three topic corpora the function words alone already represent a strong classification signal. This is reflected by the fact that restricting the topic classifier exclusively to function words still yields high accuracy scores of 56\% ($\CorpusAGNews$), 35\% ($\CorpusYahoo$) and 70\% ($\CorpusBBC$), which are comparable to \posNoise.
Separately, one can see from the three scatter plots that, regarding \textDistortion, the setting $k=170$ in fact represents a good compromise between topic and style. While the median \AV classification results are almost identical ($\approx$1\% variance in terms of accuracy), the topic classification results vary from 3 to 6\%. Overall, this observation justifies our decision to choose this setting for \textDistortion when comparing both approaches.
\begin{table*} [h!]
\centering \small
\setlength\tabcolsep{-0.05cm}
\begin{tabularx}{\textwidth}{c@{\hspace{0.3cm}}ccccccc}
\toprule
$\bm{k}$ & $\bm{\CorpusGutenberg}$ & $\bm{\CorpusWikiSockpuppets}$ & $\bm{\CorpusACL}$ & $\bm{\CorpusPeeJ}$ & $\bm{\CorpusTelegraph}$ & $\bm{\CorpusApricity}$ & $\bm{\CorpusReddit}$ \\\midrule
100 &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/Gutenberg_k-100} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/DocSig_k-100} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/ACL_k-100} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/Perv_k-100} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/Tel_k-100} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/Apric_k-100} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-100/Reddit_k-100} \end{minipage} \\
\midrule
170 &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/Gutenberg_k-170} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/DocSig_k-170} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/ACL_k-170} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/Perv_k-170} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/Tel_k-170} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/Apric_k-170} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-170/Reddit_k-170} \end{minipage} \\
\midrule
300 &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/Gutenberg_k-300} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/DocSig_k-300} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/ACL_k-300} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/Perv_k-300} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/Tel_k-300} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/Apric_k-300} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-300/Reddit_k-300} \end{minipage} \\
\midrule
500 &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/Gutenberg_k-500} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/DocSig_k-500} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/ACL_k-500} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/Perv_k-500} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/Tel_k-500} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/Apric_k-500} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-500/Reddit_k-500} \end{minipage} \\
\midrule
1000 &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/Gutenberg_k-1000} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/DocSig_k-1000} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/ACL_k-1000} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/Perv_k-1000} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/Tel_k-1000} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/Apric_k-1000} \end{minipage} &
\begin{minipage}{.142\textwidth} \includegraphics[width=0.9\textwidth,trim=0cm 3.4cm 0cm 0cm,clip]{images/k-1000/Reddit_k-1000} \end{minipage} \\
\bottomrule
\end{tabularx}
\caption{Influence of the topic regularization hyperparameter $\bm{k}$ across all documents in each \textDistortion corpus. The word clouds illustrate those text units remaining in the corpora subtracted from all words and phrases present in $\listStylePatterns$. \label{tbl:TextDistortion_Influence_of_K}}
\end{table*}
\begin{table}
\centering\small
\begin{tabular}{l cccccc} \toprule
& \textbf{\coav} & \textbf{\occav} & \textbf{\veenmanNNCD} & \textbf{\stamatatosProf} & \textbf{\spatium} & \textbf{\koppelUnmask} \\\midrule
\textbf{Average} & 3.74\% & 1.88\% & 4.30\% & 2.33\% & 1.80\% & 4.94\% \\
\textbf{Median} & 3.53\% & 3.33\% & 4.64\% & 2.11\% & 0.88\% & 6.41\% \\
\bottomrule
\end{tabular}
\caption{Method-wise accuracy improvements with respect to \posNoise compared to \textDistortion for all test corpora. \label{table:ImprovementsSummary}}
\end{table}
\setlength{\aboverulesep}{0pt}
\setlength{\belowrulesep}{0pt}
\setlength{\extrarowheight}{.15ex}
\definecolor{white}{HTML}{ffffff}
\definecolor{lightyellow}{HTML}{fff7bc}
\definecolor{lightgray}{HTML}{f0f0f0}
\newcolumntype{y}{>{\columncolor{lightyellow}}r}
\newcolumntype{g}{>{\columncolor{lightgray}}r}
\begin{table*} [h!]
\centering\small
\begin{tabular}{p{0.21cm} l rr@{\hskip 0.25cm} yr@{\hskip 0.25cm} yr@{\hskip 0.25cm} gg@{\hskip 0.25cm} gg@{\hskip 0.25cm} gg@{\hskip 0.25cm} gg} \toprule
& & \multicolumn{2}{l}{\multirow{2}{*}{$\bm{\mathsf{Original}}$}} & \multicolumn{2}{l}{\cellcolor{white}\multirow{2}{*}{$\bm{\mathsf{POSNoise}}$}} & \multicolumn{10}{c}{$\bm{\mathsf{TextDistortion}}$} \\[0.04cm] \cline{7-16}
& & & & \multicolumn{2}{l}{} & \multicolumn{2}{l}{$\bm{k=170}$} & \multicolumn{2}{l}{$\bm{k=100}$} & \multicolumn{2}{l}{$\bm{k=300}$} & \multicolumn{2}{l}{$\bm{k=500}$} & \multicolumn{2}{c}{$\bm{k=1000}$} \\[0.09cm]
& \textbf{Method} & \textbf{Acc.} & \textbf{AUC} & \textbf{Acc.} & \textbf{AUC} & \textbf{Acc.} & \textbf{AUC} & \textbf{Acc.} & \textbf{AUC} & \textbf{Acc.} & \textbf{AUC} & \textbf{Acc.} & \textbf{AUC} & \textbf{Acc.} & \textbf{AUC} \\ \midrule
& \coav & 0.744 & 0.853 & 0.731 & 0.828 & 0.731 & \underline{0.850} & 0.737 & 0.830 & 0.737 & 0.857 & 0.750 & 0.861 & 0.769 & 0.868 \\
& \occav & 0.500 & 0.754 & 0.513 & 0.780 & \textbf{0.519} & 0.772 & 0.513 & 0.742 & 0.519 & 0.782 & 0.699 & 0.785 & 0.532 & 0.804 \\
& \veenmanNNCD & 0.769 & 0.843 & \textbf{0.712} & 0.815 & 0.692 & 0.825 & 0.641 & 0.809 & 0.641 & 0.839 & 0.679 & 0.842 & 0.679 & 0.845 \\
& \stamatatosProf & 0.718 & 0.788 & 0.686 & \underline{0.815} & 0.686 & 0.735 & 0.724 & 0.794 & 0.705 & 0.803 & 0.679 & 0.790 & 0.699 & 0.781 \\
& \spatium & 0.705 & 0.757 & \textbf{0.731} & 0.773 & 0.692 & 0.740 & 0.692 & 0.774 & 0.699 & 0.775 & 0.699 & 0.781 & 0.705 & 0.759 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusGutenberg}$}}
& \koppelUnmask & 0.667 & 0.763 & \textbf{0.744} & 0.800 & 0.679 & 0.745 & 0.679 & 0.770 & 0.660 & 0.770 & 0.724 & 0.778 & 0.686 & 0.732 \\\midrule
& \coav & 0.898 & 0.949 & \textbf{0.823} & 0.908 & 0.770 & 0.857 & 0.726 & 0.814 & 0.801 & 0.891 & 0.805 & 0.898 & 0.836 & 0.918 \\
& \occav & 0.783 & 0.864 & \textbf{0.770} & 0.834 & 0.730 & 0.805 & 0.695 & 0.778 & 0.704 & 0.807 & 0.765 & 0.822 & 0.748 & 0.829 \\
& \veenmanNNCD & 0.832 & 0.965 & \textbf{0.770} & 0.957 & 0.704 & 0.927 & 0.690 & 0.937 & 0.748 & 0.954 & 0.717 & 0.954 & 0.770 & 0.965 \\
& \stamatatosProf & 0.774 & 0.869 & 0.735 & 0.827 & \textbf{0.739} & 0.809 & 0.726 & 0.792 & 0.752 & 0.821 & 0.774 & 0.856 & 0.783 & 0.872 \\
& \spatium & 0.770 & 0.851 & \textbf{0.783} & 0.849 & 0.774 & 0.857 & 0.730 & 0.827 & 0.774 & 0.854 & 0.783 & 0.871 & 0.770 & 0.857 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusWikiSockpuppets}$}}
& \koppelUnmask & 0.708 & 0.793 & \textbf{0.743} & 0.844 & 0.721 & 0.790 & 0.704 & 0.749 & 0.708 & 0.821 & 0.726 & 0.809 & 0.712 & 0.806 \\\midrule
& \coav & 0.782 & 0.883 & \textbf{0.775} & 0.844 & 0.675 & 0.762 & 0.654 & 0.713 & 0.725 & 0.799 & 0.764 & 0.841 & 0.768 & 0.844 \\
& \occav & 0.500 & 0.713 & 0.500 & \underline{0.692} & 0.500 & 0.631 & 0.500 & 0.606 & 0.504 & 0.658 & 0.507 & 0.703 & 0.504 & 0.718 \\
& \veenmanNNCD & 0.736 & 0.996 & \textbf{0.739} & 0.994 & 0.693 & 0.996 & 0.657 & 0.997 & 0.686 & 0.995 & 0.718 & 0.995 & 0.725 & 0.994 \\
& \stamatatosProf & 0.739 & 0.793 & \textbf{0.732} & 0.789 & 0.668 & 0.703 & 0.664 & 0.693 & 0.689 & 0.711 & 0.696 & 0.728 & 0.718 & 0.776 \\
& \spatium & 0.664 & 0.717 & \textbf{0.721} & 0.736 & 0.661 & 0.733 & 0.654 & 0.713 & 0.650 & 0.720 & 0.721 & 0.760 & 0.696 & 0.740 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusACL}$}}
& \koppelUnmask & 0.679 & 0.738 & \textbf{0.689} & 0.762 & 0.625 & 0.688 & 0.614 & 0.667 & 0.632 & 0.712 & 0.679 & 0.769 & 0.664 & 0.734 \\\midrule
& \coav & 0.939 & 0.988 & \textbf{0.910} & 0.970 & 0.885 & 0.936 & 0.830 & 0.917 & 0.885 & 0.946 & 0.904 & 0.960 & 0.913 & 0.975 \\
& \occav & 0.776 & 0.967 & 0.753 & 0.959 & \textbf{0.760} & 0.923 & 0.715 & 0.909 & 0.740 & 0.940 & 0.747 & 0.945 & 0.747 & 0.953 \\
& \veenmanNNCD & 0.978 & 1.000 & \textbf{0.946} & 1.000 & 0.923 & 1.000 & 0.907 & 1.000 & 0.936 & 1.000 & 0.946 & 1.000 & 0.952 & 1.000 \\
& \stamatatosProf & 0.859 & 0.928 & \textbf{0.856} & 0.927 & 0.824 & 0.905 & 0.843 & 0.917 & 0.824 & 0.902 & 0.856 & 0.914 & 0.865 & 0.922 \\
& \spatium & 0.894 & 0.963 & \textbf{0.888} & 0.956 & 0.881 & 0.947 & 0.859 & 0.941 & 0.862 & 0.954 & 0.878 & 0.954 & 0.881 & 0.954 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusPeeJ}$}}
& \koppelUnmask & 0.910 & 0.971 & \textbf{0.885} & 0.958 & 0.798 & 0.895 & 0.824 & 0.873 & 0.827 & 0.906 & 0.824 & 0.926 & 0.856 & 0.909 \\\midrule
& \coav & 0.855 & 0.927 & \textbf{0.810} & 0.883 & 0.798 & 0.866 & 0.750 & 0.836 & 0.807 & 0.878 & 0.834 & 0.898 & 0.828 & 0.912 \\
& \occav & 0.732 & 0.828 & \textbf{0.684} & 0.790 & 0.648 & 0.752 & 0.636 & 0.744 & 0.678 & 0.768 & 0.699 & 0.785 & 0.732 & 0.814 \\
& \veenmanNNCD & 0.795 & 1.000 & \textbf{0.720} & 1.000 & 0.696 & 1.000 & 0.675 & 1.000 & 0.693 & 1.000 & 0.729 & 1.000 & 0.759 & 1.000 \\
& \stamatatosProf & 0.795 & 0.877 & \textbf{0.735} & 0.828 & 0.714 & 0.823 & 0.717 & 0.820 & 0.717 & 0.828 & 0.747 & 0.843 & 0.768 & 0.862 \\
& \spatium & 0.765 & 0.856 & \textbf{0.774} & 0.853 & 0.762 & 0.849 & 0.741 & 0.825 & 0.753 & 0.850 & 0.774 & 0.862 & 0.777 & 0.867 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusTelegraph}$}}
& \koppelUnmask & 0.741 & 0.832 & \textbf{0.783} & 0.831 & 0.684 & 0.783 & 0.711 & 0.794 & 0.714 & 0.782 & 0.747 & 0.837 & 0.738 & 0.801 \\\midrule
& \coav & 0.832 & 0.921 & \textbf{0.859} & 0.924 & 0.824 & 0.906 & 0.800 & 0.891 & 0.844 & 0.910 & 0.844 & 0.909 & 0.850 & 0.914 \\
& \occav & 0.809 & 0.877 & \textbf{0.818} & 0.881 & 0.782 & 0.857 & 0.779 & 0.842 & 0.788 & 0.868 & 0.788 & 0.878 & 0.782 & 0.863 \\
& \veenmanNNCD & 0.832 & 0.998 & \textbf{0.803} & 1.000 & 0.738 & 0.999 & 0.735 & 0.998 & 0.759 & 1.000 & 0.765 & 1.000 & 0.776 & 1.000 \\
& \stamatatosProf & 0.779 & 0.853 & \textbf{0.776} & 0.851 & 0.759 & 0.835 & 0.765 & 0.854 & 0.738 & 0.836 & 0.768 & 0.851 & 0.768 & 0.856 \\
& \spatium & 0.806 & 0.870 & \textbf{0.800} & 0.863 & 0.791 & 0.894 & 0.800 & 0.889 & 0.782 & 0.874 & 0.797 & 0.874 & 0.809 & 0.887 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusApricity}$}}
& \koppelUnmask & 0.735 & 0.831 & 0.774 & 0.858 & 0.774 & \underline{0.869} & 0.724 & 0.826 & 0.715 & 0.808 & 0.788 & 0.843 & 0.738 & 0.828 \\\midrule
& \coav & 0.836 & 0.909 & \textbf{0.818} & 0.876 & 0.782 & 0.836 & 0.753 & 0.823 & 0.787 & 0.851 & 0.796 & 0.867 & 0.818 & 0.892 \\
& \occav & 0.778 & 0.851 & \textbf{0.778} & 0.845 & 0.745 & 0.817 & 0.730 & 0.809 & 0.755 & 0.822 & 0.750 & 0.826 & 0.768 & 0.839 \\
& \veenmanNNCD & 0.774 & 0.999 & \textbf{0.761} & 1.000 & 0.703 & 1.000 & 0.692 & 1.000 & 0.709 & 1.000 & 0.728 & 1.000 & 0.745 & 1.000 \\
& \stamatatosProf & 0.764 & 0.821 & \textbf{0.769} & 0.835 & 0.737 & 0.799 & 0.756 & 0.828 & 0.753 & 0.815 & 0.766 & 0.828 & 0.753 & 0.828 \\
& \spatium & 0.797 & 0.863 & 0.809 & 0.873 & \textbf{0.818} & 0.880 & 0.826 & 0.892 & 0.823 & 0.881 & 0.824 & 0.886 & 0.813 & 0.884 \\
\multirow{-6}{*}{\large\rotatebox{90}{$\bm{\CorpusReddit}$}}
& \koppelUnmask & 0.719 & 0.785 & \textbf{0.731} & 0.796 & 0.722 & 0.797 & 0.716 & 0.801 & 0.738 & 0.815 & 0.758 & 0.822 & 0.755 & 0.812 \\\bottomrule
\end{tabular}
\caption{Comparison between the \numberOfExistingBaselines \AV methods applied to all \originalCorpus, \posNoise and \textDistortion test corpora.
Bold values indicate the best accuracy (\textbf{Acc.}) results with respect to the \posNoise and \textDistortion ($\bm{k=170}$) corpora.
In case of ties, \textbf{\auc} serves as a secondary ranking option represented by underlined values. \label{tab:ExperimentComparisonResults}}
\end{table*}
\section{Introduction} \label{Introduction}
Texts are written for a variety of purposes and appear in numerous digital and non-digital forms, including emails, websites, chat logs, office documents, magazines and books. They can be categorized according to various aspects including language, genre, topic, sentiment, readability or writing style. The latter is particularly relevant when the question regarding the authorship of a certain document (\eg ghostwritten paper, blackmail letter, suicide note, letter of confession or testament) arises.
Stylometry is the quantitative study of writing style, especially with regard to questions of authorship, and can be dated back to the 19th century \cite{HolmesEvolutionStylometryHumanities:1998}. Stylometry uses statistical methods to analyze style on the basis of measurable features and has historical, literary and forensic applications. The underlying assumption in stylometry is that authors tend to write in a recognizable and unique way \cite{EderStyloSystemMultilevelTextAnalysis:2017,BrennanAdversarialStylometry:2012}.
Over the years, a number of authorship analysis disciplines have been established, of which \aa (\AA) is the most widely researched.
\AA is concerned with assigning an anonymous text to the most likely author, based on a set of sample documents from candidate authors.
\AA relies on the so-called \quote{closed-set assumption} \cite{HitschlerAAWithCNNs:2017,SavoyBookAAandAP:2020}, which states that the true author of the anonymous text is indeed in this candidate set. However, if for some reason this assumption cannot be met, then an \AA method will necessarily fail to select the true author of the anonymous text.
\\
\\
A closely related discipline to \AA is \av (\AV), which deals with the \textbf{fundamental problem} of whether two given documents $\D_1$ and $\D_2$ were written by the same person \cite{KoppelFundamentalProblemAA:2012}. If this problem can be solved, almost any conceivable \AA problem can be solved \cite{KoppelFundamentalProblemAA:2012}, which is the reason why \AV is particularly attractive for practical use cases. Based on the fact that any \AA problem can be broken down into a series of \AV problems \cite{KoppelWinter2DocsBy1:2014}, we have decided to focus in this paper on the \AV problem.
From a machine learning point of view, \AV represents a similarity detection problem, where the focus lies on the \textbf{writing style} rather than the \textbf{topic} of the documents.
In spite of this, it can be observed in the literature that a large number of AV methods, including \cite{AgbeyangiAVYorubaBlogPostsCharNgrams:2020,BrocardoStylometryAV:2013,BrocardoDeepBeliefAV:2017,CastroAVAverageSimilarity:2015,KoppelSeidmanEMNLP:2013,KoppelWinter2DocsBy1:2014,LitvakAVwithCNNs:2019,NealAVviaIsolationForests:2018,StamatatosPothaImprovedIM:2017,PothaStamatatosExtrinsicAV:2019}, are based on implicit\footnote{A feature category is \textbf{implicit}, if it is not clear, which type of features will be indeed captured. Whenever a sliding window is moved over a stream of text units such as characters or tokens, the captured features are necessarily implicit, as it cannot be determined beforehand what exactly they represent (in contrast to explicit features).} feature categories such as character/word or token $n$-grams.
However, often it remains unclear which specific \quote{linguistic patterns} they cover in contrast to explicit\footnote{A feature category is \textbf{explicit}, if the specification of the underlying features is known beforehand. For example, function words are explicit, as we not only know how the extracted features will look like (\eg \quotetxt{while}, \quotetxt{for} or \quotetxt{thus}) but also what they represent (words that express grammatical relationships regarding other words).} feature categories such as punctuation marks, function words or part-of-speech (POS) tags, which can be interpreted directly.
Since in general one has no control over implicitly defined features, it is important to ensure (for example, through a post-hoc analysis) what they actually capture. Otherwise, predictions made with \AV methods based on such features may be influenced by the topic rather than the writing style of the documents. This, in turn, can prevent \AV methods from achieving their intended goal.
\\
\\
To counter this problem, we propose a simple but effective technique that deprives \AV methods of the ability to consider topic-related features with respect to their predictions.
The basic idea is to retain stylistically relevant words and phrases using a predefined list, while replacing topic-related text units with their corresponding \posTags. The latter represent word classes such as nouns, verbs or adjectives and thus provide grammatical information which refer to the content of the corresponding words. \posTags have been widely used in \AA, \AV and many other disciplines related to authorship analysis. They have been confirmed to be effective stylistic features, not only for documents written in English \cite{BrocardoDeepBeliefAV:2017,HitschlerAAWithCNNs:2017,PatchalaAAConsensusAmongFeatures:2018} but also in other languages such as Russian \cite{LitvinovaPOSNgramsAP:2015}, Estonian \cite{PetmansonAVEstonian:2014} and German \cite{DiederichAAwithSVMs:2003}.
While many \AV and \AA approaches consider simple \posTags \cite{PetmansonAVEstonian:2014}, other variants are also common in the literature including \posTag $n$-grams \cite{HirstAVUnmaskingAlzheimer:2012}, \posTag one-hot encodings \cite{HitschlerAAWithCNNs:2017}, \posTags combined with function words \cite{DiederichAAwithSVMs:2003} and probabilistic \posTag structures \cite{DingAuthorshipAnalysisRepresentations:2019}.
\\
\\
The remainder of the paper is organized as follows: Section~\ref{ExistingApproaches} discusses previous work that served as a motivation for our approach, which is proposed in Section~\ref{ProposedApproach}. Section~\ref{Evaluation} describes our experiments, and Section~\ref{Conclusions} concludes the work and provides suggestions for future research.
\section{Previous Work} \label{ExistingApproaches}
A fundamental requirement of any \AV method is the choice of a suitable data representation, which aims to model the writing style of the investigated documents.
The two most common representations that can be used for this purpose are (1) \textbf{vector space models} and (2) \textbf{language models}.
A large part of existing \AV approaches fall into category (1). Kocher and Savoy \cite{KocherSavoySpatiumL1:2017}, for example, as well as
Koppel and Schler \cite{KoppelAVOneClassClassification:2004,KoppelUnmasking:2007}, proposed \AV methods that consider the most frequent words occurring in the documents.
Other approaches, that also make use of vector space models, are those of Potha and Stamatatos \cite{StamatatosPothaImprovedIM:2017},
Koppel and Winter \cite{KoppelWinter2DocsBy1:2014}, Hürlimann et al. \cite{GLAD:2015}, Barbon et al. \cite{BarbonAV4CompromisedAccountsSocialNetworks:2017}
and Neal et al. \cite{NealAVviaIsolationForests:2018} which, among others, consider the most frequent character $n$-grams.
On the other hand, \AV methods based on neural networks such as the approaches of Hosseinia and Mukherjee \cite{HosseiniaNeuralAV:2018},
Boenninghoff et al. \cite{BoenninghoffSocialMediaAV:2019}, Bagnall \cite{BagnallRNN:2015} and Jasper et al. \cite{JasperStylometricEmbeddingsAV:2018} fall into category (2).
These approaches employ continuous space\footnote{Continuous-space language models are not limited to a fixed-size context. In contrast, count-based $n$-gram models, which represent the core of compression-based \AV methods (\eg \cite{HalvaniOCCAV:2018,HalvaniARES:2017,VeenmanPAN13:2013}) have restricted contexts, where $n$ is the limit.} language models that operate on the word and character level of the documents.
In contrast to these, \AV approaches such as those proposed by Veenman and Li \cite{VeenmanPAN13:2013} or Halvani et al. \cite{HalvaniOCCAV:2018,HalvaniARES:2017} are based on compression-based language models, where internally a probability distribution for a given document is estimated based on all characters and their preceding contexts. Regarding the latter, Bevendorff et al. \cite{BevendorffUnmasking:2019} have shown that \AV methods of this type are effective compared to the current \stateOfTheArt in \AV.
Regardless of their strengths and effectiveness, all the above-mentioned \AV approaches suffer from the same problem. They lack a control mechanism which ensures that their decision with respect to the questioned authorship is not inadvertently distorted by the topic of the documents. In the absence of such a control mechanism, \AV methods can (in the worst case) degenerate from style to simple topic classifiers.
\\
\\
To address this problem, Stamatatos \cite{StamatatosTextDistortion:2017} proposed a technique that we refer to in this paper as \textDistortion\footnote{An implementation of \textDistortion is available under \newline \url{https://paperswithcode.com/paper/authorship-attribution-using-text-distortion}.}. The method aims to mask topic-specific information in documents, before passing them further to \AA or \AV methods. The topic-specific information is not related to the author's personal writing style, which is why masking helps to maintain the correct objective
(classifying documents by their \textbf{writing style} rather than by their \textbf{content}).
To achieve this, occurrences of infrequent words are substituted entirely by uniform symbols. In addition, numbers are masked such that their structure is retained while hiding their specific value. Given these transformations, most of the syntactical structure of the text is retained (including capitalization and punctuation marks), which is more likely to be associated with the author's writing style \cite{StamatatosTextDistortion:2017}.
Stamatatos introduced the following two variants of \textDistortion, which require as a prerequisite a word list $W_{k}$ containing the $k$ most frequent words\footnote{\textDistortion uses the \e{British National Corpus} (BNC) word list available under \newline \url{https://www.kilgarriff.co.uk/bnc-readme.html}.} in the English language:
\begin{itemize}
\item \textbf{Distorted View - Single Asterisk (\textbf{DV-SA}):} Every word $w \notin W_{k}$ in a given document $\D$ is masked by replacing each word occurrence with a single asterisk \texttt{*}. Every sequence of digits in $\D$ is replaced by a single hashtag \texttt{\#}.
\item \textbf{Distorted View - Multiple Asterisks (\textbf{DV-MA}):} Every word $w \notin W_{k}$ in $\D$ is masked by replacing each of its characters with \texttt{*}. Every digit in $\D$ is replaced by \texttt{\#}.
\end{itemize}
Both variants can be applied to any given document $\D$ written in English, without the need for specific NLP tools or linguistic resources (besides the word list $W_{k}$).
However, in order to apply \textDistortion to $\D$, the hyperparameter $k$, which regulates how much content is going to remain in $\D$, must be carefully specified beforehand.
Moreover, one must take into account that the replacement of each potentially topic-related word $w$ in $\D$ is performed uniformly without any further distinction as to what $w$ represents.
Consequently, the masking procedure may necessarily miss relevant information associated with $w$ that could serve as a useful stylistic feature.
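To make the two distortion variants concrete, the following is a simplified Python sketch. The toy word list \texttt{W} stands in for the BNC frequency list $W_{k}$, and the regex-based tokenization is a simplifying assumption, not the original implementation:

```python
import re

# Toy stopword list standing in for the BNC frequency list W_k (assumption).
W = {"the", "of", "in", "a", "was", "and"}

def distort(text, wordlist, multiple=True):
    """DV-MA (multiple=True) / DV-SA (multiple=False) text distortion."""
    def mask_word(m):
        w = m.group(0)
        if w.lower() in wordlist:
            return w                     # frequent word: keep as-is
        return "*" * len(w) if multiple else "*"
    out = re.sub(r"[A-Za-z]+", mask_word, text)
    # DV-MA masks each digit separately, DV-SA masks each digit sequence.
    return re.sub(r"\d", "#", out) if multiple else re.sub(r"\d+", "#", out)

print(distort("The letter was written in 1998.", W))         # DV-MA
print(distort("The letter was written in 1998.", W, False))  # DV-SA
```

Note how DV-MA preserves word lengths ("letter" becomes six asterisks), whereas DV-SA collapses each masked word to a single symbol, discarding that information.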
\section{Proposed Approach} \label{ProposedApproach}
Inspired by the approach of Stamatatos \cite{StamatatosTextDistortion:2017}, we propose an alternative \topicMasking technique called $\bm{\mathsf{POSNoise}}$ (\quote{\underline{POS}-Tag-based \underline{Noise} smoothing}), which addresses the two issues of \textDistortion mentioned in Section~\ref{ExistingApproaches}.
The core idea of our approach is to keep stylistically relevant words and phrases in a given document $\D$ using a predefined list $\listStylePatterns$, while replacing topic-related words with their corresponding \posTags represented by a set $\mathcal{S}$. Text units not covered by $\listStylePatterns$ and $\mathcal{S}$, such as punctuation marks and idiosyncratic words, are further retained.
In what follows, we first describe the requirements of \posNoise and explain how it differs from the existing \textDistortion approach.
Afterwards, we present the respective steps of our \topicMasking algorithm, which is listed in Algorithm~\ref{POSNoiseAlgo} as Python-like pseudocode.
\\
\\
Similarly to \textDistortion, our approach also relies on a predefined list\footnote{The list is available under \url{http://bit.ly/ARES-2021}.} $\listStylePatterns$ of specific words that should not be masked.
However, unlike \textDistortion, which uses a list of words ordered by frequency of occurrence in the \e{British National Corpus} (BNC), our list $\listStylePatterns$ is structured by grammatical factors.
More precisely, $\listStylePatterns$ comprises different categories of function words, phrases, contractions, generic adverbs and empty verbs.
Regarding the function word categories, we consider conjunctions, determiners, prepositions, pronouns and quantifiers, which are widely known in the literature (\eg \cite{PavelecAAConjunctionsAndAdverbs:2008,StolermanPhD:2015}) to be content and topic independent.
With respect to the phrases, we use different categories of transitional phrases including \e{causation}, \e{contrast}, \e{similarity}, \e{clarification}, \e{conclusion}, \e{purpose} and \e{summary}.
As generic adverbs, we consider \e{conjunctive}, \e{focusing}, \e{grading} and \e{pronominal} adverbs, while as empty verbs, we take \e{auxiliary} and \e{delexicalised} verbs into account, as these have no meaning on their own. The tenses\footnote{For this purpose, we used \e{pattern} available under \url{https://github.com/clips/pattern}.} of verbs are additionally considered so that AV methods operating at the character level of documents can benefit from morphological features occurring in the inflected form of such words.
Table~\ref{table:POSNoiseFeatures} lists all the categories of words and phrases considered by \posNoise, together with a number of examples.
Note that the comparison of the text units with $\listStylePatterns$ is case-insensitive (analogous to \textDistortion), \ie the original case of the text units is retained. We also wish to emphasize that in our list $\listStylePatterns$ (in contrast to the word list $W_{k}$ used by \textDistortion) \textbf{topic-related} nouns and pronouns, verbs, adverbs and adjectives are not present. According to Sundararajan and Woodard \cite{SundararajanWhatIsStyle:2018}, especially the former two are strongly influenced by the content of the documents.
\begin{table}
\centering\footnotesize
\begin{tabular}{ll}
\toprule
\textbf{Category} & \textbf{Examples} \\ \midrule
Contractions & $\{\texttt{i'm, i'd, i'll, i've, it's, we're, how's,}\,\e{...}\,\}$ \\
Auxiliary verbs & $\{\texttt{can, could, might, must, ought, shall, will,}\,\e{...}\,\}$ \\
Delexicalised verbs & $\{\texttt{get, go, take, make, do, have, give, set,}\,\e{...}\,\}$ \\
Conjunctions & $\{\texttt{and, as, because, but, either, for, however,}\,\e{...}\,\}$ \\
Determiners & $\{\texttt{a, an, both, either, every, no, other, some,}\,\e{...}\,\}$ \\
Prepositions & $\{\texttt{above, below, beside, between, beyond, inside,}\,\e{...}\,\}$ \\
Pronouns & $\{\texttt{all, another, anyone, anything, everything,}\,\e{...}\,\}$ \\
Quantifiers & $\{\texttt{any, certain, each, either, lots, neither,}\,\e{...}\,\}$ \\
Generic adverbs & $\{\texttt{only, almost, just, again, yet, therefore,}\,\e{...}\,\}$ \\
Transitional phrases & $\{\texttt{of course, because of, in contrast,}\,\e{...}\,\}$ \\
\bottomrule
\end{tabular}
\caption{All categories of function words and phrases contained in our list $\listStylePatterns$. \label{table:POSNoiseFeatures}}
\end{table}
\\
\\
Besides our statically defined list $\listStylePatterns$, we also make use of certain dynamically generated \posTags to retain stylistic features.
For this, we apply a POS tagger\footnote{For this, we used \e{spaCy} (model \quotetxt{en\_core\_web\_lg}) available under \url{https://spacy.io}.} to $\D$ so that a sequence of pairs $\langle t_i, p_i \rangle$ is created, where $t_i$ denotes a token and $p_i$ its corresponding \posTag.
Here, we decided to restrict ourselves to the \e{Universal POS Tagset}\footnote{The list of all \posTags is available under \url{https://universaldependencies.org/u/pos}.}
so that each $p_i$ falls into a coarse-grained POS category (cf. Table~\ref{table:SubstitutionPOSTags}) of the token $t_i$.
There are two reasons why we decided to use this tagset. First, universal \posTags allow a better adaptation of \posNoise to other languages,
as it can be observed that the cardinality of fine-grained \posTags differ from language to language \cite{PetrovUniversalPOSTagsa:2012}.
Second, the POS tagger might lead to more misclassified \posTags if the fine-grained tagset is used instead.
Note that we do not use the original form of the tags as they appear in the tagset such as \texttt{PROPN} (proper noun) or \texttt{ADJ} (adjective).
Instead, we use individual symbols as \textbf{representatives} that are more appropriate with respect to \AV methods that operate at the character level of documents. However, for readability reasons, we refer to these symbols as \quote{tags}.
Once $\D$ has been tagged, \posNoise substitutes all adjacent pairs $\langle t_i, p_i \rangle, \langle t_{i + 1}, p_{i + 1} \rangle, \ldots, \langle t_{i + n}, p_{i + n} \rangle$, whose tokens $t_i, \ldots, t_{i+n}$ form an element in $\listStylePatterns$, with their corresponding \posTags $p_i, \ldots, p_{i+n}$, respectively. However, the replacement is only performed if $p_i \in \mathcal{S} = \{$\textsf{\#}, \textsf{§}, \textsf{Ø}, \textsf{@}, \textsf{©}, \textsf{$\mu$}, \textsf{\$}, \textsf{\yen}$\}$ holds (cf. Table~\ref{table:SubstitutionPOSTags}).
More precisely, every token $t_i$ for which $p_i \notin \mathcal{S}$ applies is retained, in addition to all words and phrases in the document $\D$ that occur in $\listStylePatterns$. The retained tokens are, among others, \textbf{punctuation marks} and \textbf{interjections} (\eg \quotetxt{yes}, \quotetxt{no}, \quotetxt{okay}, \quotetxt{hmm}, \quotetxt{hey}, etc.), where the latter represent highly personal and thus idiosyncratic stylistic features according to Silva et al. \cite{SilvaMicroBlogAA:2011}.
\\
\\
Regarding numerals, we keep written numbers unmasked as such words and their variations may reflect stylistic habits of certain authors (\eg \quotetxt{one hundred} / \quotetxt{one-hundred}). Digits, numbers and roman numerals, on the other hand, are masked by the \posTag \textsf{$\mu$}.
\begin{table}
\centering\small
\begin{tabular}{lcl}
\toprule
\textbf{Category} & \textbf{Tag} & \textbf{Examples} \\\midrule
Noun & \textsf{\#} & $\{\texttt{house, music, bird, tree, air, }\,\e{...}\,\}$ \\
Proper noun & \textsf{§} & $\{\texttt{David, Vivien, London, USA, COVID-19,}\,\e{...}\,\}$ \\
Verb & \textsf{Ø} & $\{\texttt{eat, laugh, dance, travel, hiking,}\,\e{...}\,\}$ \\
Adjective & \textsf{@} & $\{\texttt{red, shiny, fascinating, phenomenal,}\,\e{...}\,\}$ \\
Adverb & \textsf{©} & $\{\texttt{financially, foolishly, angrily,}\,\e{...}\,\}$ \\
Numeral & \textsf{$\mu$} & $\{\texttt{0, 5, 2013, 3.14159, III, IV, MMXIV,}\,\e{...}\,\}$ \\
Symbol & \textsf{\$} & $\{\texttt{\pounds, \copyright, §, \%, \#,}\,\e{...}\,\}$ \\
Other & \textsf{\yen} & $\{\texttt{xfgh, pdl, jklw, }\,\e{...}\,\}$ \\
\bottomrule
\end{tabular}
\caption{\posTags considered by \posNoise that aim to replace topic-related words, after all words and phrases listed in Table~\ref{table:POSNoiseFeatures} have been retained. \label{table:SubstitutionPOSTags}}
\end{table}
In a subsequent step, we adjust punctuation marks that were separated from their adjacent words (\eg \quotetxt{however ,} $\rightsquigarrow$ \quotetxt{however,}) as a result of the tokenization process of the POS tagger. Our intention is to retain the positional information of the punctuation marks, as certain \AV methods might use standard tokenizers that split by white-spaces so that \quotetxt{however ,} would result in two different tokens. As a final step, all tokens are concatenated into the topic-masked representation $\Dmasked$.
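The re-attachment step amounts to deleting the whitespace that the tokenizer inserted before closing punctuation. A minimal regex sketch (an illustration of the idea, not the exact implementation):

```python
import re

def reattach_punct(s):
    """Remove the whitespace a tokenizer inserted before closing
    punctuation, e.g. "however ," -> "however,"."""
    return re.sub(r"\s+([.,;:!?])", r"\1", s)

print(reattach_punct("Ø the # , however , it Ø ."))
# -> "Ø the #, however, it Ø."
```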
In Section~\ref{Evaluation}, we compare the resulting representations of \posNoise and \textDistortion. For the latter, we also explain how a suitable setting of the hyperparameter $k$ (\ie one that suppresses topic-related words but retains style-related words as best as possible) was determined.
\begin{algorithm} [h!]
\footnotesize
\SetKwInput{KwData}{Input}
\SetKwInput{KwResult}{Output}
\SetKw{KwBreak}{break}
\SetKw{KwContinue}{continue}
\SetKw{KwThrow}{throw}
\SetKwIF{If}{ElseIf}{Else}{if}{}{else if}{else}{end if}%
\SetKwFor{For}{for}{}{end for}%
\SetKwFor{ForEach}{foreach}{}{end foreach}%
\SetKwFor{While}{while}{}{end while}%
\SetNoFillComment
\DontPrintSemicolon
\KwData{Document $\D$, pattern list $\listStylePatterns$}
\KwResult{Topic-masked representation $\Dmasked$}
\BlankLine
$\Dtokens \leftarrow$ tokenize($\D$) \tcc{Segment $\D$ into a list of tokens $\Dtokens$.}
$\Dtoklocs \leftarrow$ token\_locations($\D$) \tcc{Store for all tokens their respective start locations in $\D$.}
$\Dpos \leftarrow$ pos\_tag($\Dtokens$) \tcc{Classify for each $t_i$ in $\Dtokens$ its corresponding \posTag.}
$n \leftarrow$ length($\Dtokens$) \\
\BlankLine
\tcc{Initialize a bitmask array for the $n$ tokens in $\D$. Activated bits ('1') reflect tokens that should be retained.}
$\Dbitmask \leftarrow [0, 0, 0, \ldots, 0]$
\BlankLine
\ForEach{$\ell \in \listStylePatterns$}
{
$\lWords \leftarrow$ tokenize($\ell$) \tcc{Note that $\ell$ might be a phrase such as \texttt{"apart from this"}.}
$m \leftarrow \mathrm{length}(\lWords)$ \\
$x \leftarrow$ 0 \\
$i \leftarrow$ 0
\While{$i < n$}
{
\If{$\mathrm{lowercase(}\Dtokens[i]\mathrm{) == lowercase(}\lWords[x]\mathrm{)}$}
{
$x \leftarrow x + 1$ \\
\If{$x == m$}
{
\For{$j \leftarrow i - m + 1; j < i + 1; j \leftarrow j + 1$}
{
$\Dbitmask[j] \leftarrow 1$
}
$x \leftarrow 0$
}
}
\Else
{
$i \leftarrow i - x$ \\
$x \leftarrow 0$
}
$i \leftarrow i + 1$
}
}
\BlankLine
\tcc{Define POS tags that aim to replace topic-related words.}
$ \mathcal{S} = \{$ \texttt{"\#"}, \texttt{"§"}, \texttt{"Ø"}, \texttt{"@"}, \texttt{"©"}, \texttt{"$\mu$"}, \texttt{"\$"}, \texttt{"\yen"} $\}$
\BlankLine
$\Dmasked \leftarrow \D$ \\
\For{$i \leftarrow n - 1; i \geq 0; i \leftarrow i - 1$}
{
\tcc{Retain truncated contraction tokens.}
\If{$\Dtokens[i] \in \{$ \e{\texttt{"'m"}}, \e{\texttt{"'d"}}, \e{\texttt{"'s"}}, \e{\texttt{"'t"}}, \e{\texttt{"'ve"}}, \e{\texttt{"'ll"}}, \e{\texttt{"'re"}}, \e{\texttt{"'ts"}} $\}$ }
{
$\Dbitmask[i] \leftarrow 1$
}
\If{$\Dtokens[i]$ \e{represents a written-out number (\eg \texttt{"four"} or \texttt{"twelve"})}}
{
$\Dbitmask[i] \leftarrow 1$
}
\If{ $\Dbitmask[i] == 0$}
{
sub $ \leftarrow $ \texttt{""} \\
\lIf{$\Dpos[i] \in \mathcal{S} $}
{
sub $ \leftarrow \Dpos[i]$
}
\lElse
{
sub $ \leftarrow \Dtokens[i]$
}
$\Dmasked \leftarrow \Dmasked[:\Dtoklocs[i]] + \mathrm{sub} + \Dmasked[(\Dtoklocs[i] + \mathrm{length(}\Dtokens[i]\mathrm{)}):]$
}
}
\KwRet{$\Dmasked$}\;
\caption{POSNoise}
\label{POSNoiseAlgo}
\end{algorithm}
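The pattern-matching loop of Algorithm~\ref{POSNoiseAlgo} (the part that activates the bitmask over multi-word entries of $\listStylePatterns$) can be transliterated into runnable Python as follows; whitespace tokenization is a simplifying assumption:

```python
def match_patterns(tokens, patterns):
    """Return a bitmask marking tokens covered by any (possibly
    multi-word) pattern; marked tokens are later retained unmasked."""
    n = len(tokens)
    keep = [0] * n
    for pat in patterns:
        words = pat.split()  # a pattern may be a phrase, e.g. "apart from this"
        m = len(words)
        x = 0                # number of pattern words matched so far
        i = 0
        while i < n:
            if tokens[i].lower() == words[x].lower():
                x += 1
                if x == m:   # full pattern matched: activate its bits
                    for j in range(i - m + 1, i + 1):
                        keep[j] = 1
                    x = 0
            else:            # partial match failed: rewind and restart
                i -= x
                x = 0
            i += 1
    return keep

toks = "Apart from this , the idea is simple".split()
print(match_patterns(toks, ["apart from this", "the", "is"]))
# -> [1, 1, 1, 0, 1, 0, 1, 0]
```

Tokens whose bit remains 0 are subsequently replaced by their \posTag whenever that tag lies in $\mathcal{S}$; all others, including punctuation, survive unchanged.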
The description of complex systems in terms of so-called "collective" variables has a long history in condensed matter physics. An important example of such a variable is the "order parameter field" usually employed for theoretical analysis of phase transitions. The appeal of this approach lies in its economical formulation, which nevertheless yields nontrivial results. Sometimes the correct description can even be constructed phenomenologically, as was the case, e.g., with the celebrated Ginzburg-Landau theory of superconductivity \cite{GL}, justified later on microscopic grounds \cite{Gor}.
Another milestone of this formalism is represented by the Feynman-Vernon influence functional theory \cite{FH} and the related Caldeira-Leggett analysis of quantum dissipation \cite{CL,Weiss}. Within
this description all "unimportant" (bath) degrees of freedom are integrated out and the theory is formulated in terms of the effective action being the functional of the only collective variable of interest. Both dissipation and superconductivity are combined within the Ambegaokar-Eckern-Sch\"on (AES) effective action approach \cite{AES,SZ} describing macroscopic quantum behavior of metallic tunnel junctions. In this case the collective variable of interest is the Josephson phase, and the whole analysis can be formulated for both superconducting and normal systems embracing various equilibrium and non-equilibrium situations.
Later on it was realized that the AES type-of-approach can be extended to arbitrary (though sufficiently short) coherent conductors, including, e.g., diffusive metallic wires, highly transparent quantum contacts etc. Also in this general case a complete effective action of the system can be
derived both within Matsubara \cite{Z} and Keldysh \cite{SN} techniques, however the resulting expressions turn out to be rather involved
and usually become tractable only if one treats them approximately in certain limits. The character of approximations naturally depends on the problem under consideration. E.g., Coulomb effects on electron transport in short coherent conductors, as well as on shot noise and higher current cumulants can be conveniently studied within the quasiclassical approximation for the phase variable \cite{GZ,GGZ,ns}, renormalization group methods \cite{BN}, instanton technique \cite{N} and for almost reflectionless scatterers \cite{SS,GGZ2}. Some of the above approximations are also helpful for the analysis of frequency dispersion of current cumulants \cite{GGZ2,GGZ3}.
Another type of approximation is realized if one restricts phase fluctuations to be sufficiently small. This approximation may be particularly useful for superconducting contacts with
arbitrary transmissions of their conducting channels. In this case one can derive the effective
action in a tractable form \cite{we} and employ it for the analysis of various phenomena, such as, e.g., equilibrium supercurrent noise, fluctuation-induced capacitance renormalization and Coulomb interaction effects.
An important feature of the effective action \cite{we} is that it fully accounts for the presence of
subgap Andreev bound states in superconducting contacts. In the case of sufficiently short contacts
the corresponding energies of such bound states are $\pm\epsilon_n(\chi )$, where
\begin{equation}
\epsilon_n(\chi)=\Delta\sqrt{1-T_n\sin^2(\chi/2)},
\label{And}
\end{equation}
$\Delta$ is the superconducting gap, $T_n \leq 1$ defines the transmission of the $n$-th conducting channel and $\chi$ is the superconducting phase jump across the contact. In the tunneling limit $T_n \ll 1$
we have $\epsilon_n(\chi)\simeq \Delta$ for any value of the phase $\chi$,
i.e. subgap bound states are practically irrelevant in this case. For this reason
such states are missing, e.g., in the AES action \cite{AES,SZ}. On the other hand, at higher
transmission values the energies of Andreev levels (\ref{And}) can be considerably lower than
$\Delta$ and may even tend to zero for fully open channels and $\chi \approx \pi$.
The presence of such subgap states may yield considerable changes in the behavior of
(relatively) transparent superconducting contacts as compared to that of Josephson tunnel junctions.
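A quick numerical illustration of Eq.~(\ref{And}) makes this phase and transmission dependence explicit (a sketch; energies are given in units of $\Delta$):

```python
import math

def andreev_energy(T_n, chi, delta=1.0):
    """Andreev bound-state energy eps_n(chi) of Eq. (1),
    with delta = 1 setting the unit of energy."""
    return delta * math.sqrt(1.0 - T_n * math.sin(chi / 2.0) ** 2)

# Tunneling limit T_n << 1: the level stays pinned near Delta for any phase.
print(andreev_energy(0.01, math.pi))  # ~0.995

# Fully open channel at chi = pi: the level reaches zero energy.
print(andreev_energy(1.0, math.pi))   # 0.0
```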
Recently, the authors of Refs.~\cite{Breth1,Breth2} performed experiments aimed at directly detecting Andreev levels by means of microwave spectroscopy of non-tunnel superconducting atomic contacts. In this work we will employ the effective action approach \cite{we} and develop a microscopic theory of Andreev level spectroscopy in superconducting contacts with an arbitrary distribution of transmission values $T_n$. As a result of our analysis, we will formulate a number of predictions which would allow for explicit experimental verification of our theory.
The structure of the paper is as follows. In section II we will specify the system under consideration and formulate the problem to be addressed in this work. In section III we will employ our effective action formalism \cite{we} and evaluate the impedance of an effective environment formed by a system involving subgap Andreev levels. These results will then be used in section IV in order to establish the $P(E)$-function for our system and to determine the relative intensity of different current peaks in the
subgap part of the $I-V$ curve. The effect of capacitance renormalization on both the positions and the heights of such peaks will be studied in section V, while in section VI we will address thermal broadening
of these peaks. In section VII we will analyse the $I-V$ curve at larger voltages where
quasiparticle tunneling dominates over that of Cooper pairs. The paper will be concluded in section VIII by a brief summary of our main observations.
\section{Statement of the problem}
Following Refs. \cite{Breth1,Breth2} we will consider the circuit depicted in Fig. 1. This circuit can be divided into two parts. The part to the right of the vertical dashed line represents a superconducting loop pierced by an external magnetic flux $\Phi$. This loop includes a Josephson tunnel junction with normal state resistance $R_N$ and Josephson coupling energy $E_J$ connected to a non-tunnel superconducting contact thereby forming an asymmetric SQUID. The latter contact is characterized by an arbitrary set of transmissions $T_n$ of its transport channels and -- provided
the superconducting phase difference $\chi$ is imposed -- may conduct the supercurrent \cite{KO}
\begin{eqnarray}
&& I_{S}(\chi )=\frac{e\Delta\sin\chi}{2}\sum_n\frac{T_n}{\sqrt{1-T_n\sin^2(\chi/2)}}
\label{Ichi}
\\ && \times \tanh\frac{\Delta\sqrt{1-T_n\sin^2(\chi/2)}}{2T},
\nonumber
\end{eqnarray}
where $-e$ stands for the electron charge. Below we will assume that temperature $T$ is sufficiently low, $T \ll \Delta$, and we will stick to the limit
\begin{equation}
R_N \ll R_c,
\label{RNRc}
\end{equation}
where $R_c$, defined via $1/R_c=(e^2/\pi )\sum_nT_n$, is the normal state resistance of the non-tunnel contact. In this case the critical current of the Josephson tunnel junction $\propto 1/R_N$ strongly
exceeds that of the non-tunnel superconducting contact $\propto 1/R_c$. In this limit the phase jump across the Josephson junction is close to zero, while
this jump across the non-tunnel contact is
$\chi\approx 2\pi \Phi/\Phi_0$. Here $\Phi_0=\pi c/e$ is the superconducting flux quantum, $c$ is the speed of light, and Planck's constant is set equal to unity, $\hbar=1$.
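The current-phase relation (\ref{Ichi}) can be tabulated directly. The sketch below (units $e=\Delta=1$; the temperature and transmissions are illustrative) recovers the two familiar limits: a sinusoidal relation in the tunneling regime and $I_S\propto\sin(\chi/2)$ for a single fully open channel at $T\to 0$:

```python
import math

def supercurrent(chi, transmissions, delta=1.0, temp=1e-3):
    """Supercurrent I_S(chi) of Eq. (Ichi), in units where e = 1."""
    total = 0.0
    for t_n in transmissions:
        # eps_n(chi) from Eq. (And); avoid chi = pi at T_n = 1, where eps -> 0.
        eps = delta * math.sqrt(1.0 - t_n * math.sin(chi / 2.0) ** 2)
        total += (t_n * delta / eps) * math.tanh(eps / (2.0 * temp))
    return 0.5 * delta * math.sin(chi) * total

# A single fully open channel gives I_S(chi) = sin(chi/2) at low temperature:
print(f"{supercurrent(math.pi / 2, [1.0]):.4f}")  # -> 0.7071
# In the tunneling limit the current-phase relation is simply proportional to sin(chi):
print(f"{supercurrent(math.pi / 2, [0.05]):.4f}")
```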
The remaining part of the circuit in Fig. 1 (the one to the left of the vertical dashed line) serves as a measuring device called a spectrometer \cite{Breth2}. It consists of a voltage biased superconducting tunnel junction with Josephson coupling energy $E_{JS}$ connected to the asymmetric SQUID via a large capacitance $C_0$.
\begin{figure}
\includegraphics[width=9cm]{schnf.eps}
\caption{The circuit under consideration. The measured system, shown to the right of the dashed line,
represents an asymmetric SQUID comprising a Josephson tunnel junction with resistance $R_N$ and Josephson coupling energy $E_J$ and
a non-tunnel superconducting contact, characterized by an arbitrary set of transmissions $T_n$ of its
conducting channels. The total capacitance $C$ consists of a sum of geometric capacitances of both superconducting junctions $C_\Sigma$ and
also includes the renormalization term from the Josephson element, cf. Eq. (\ref{capren}) below. The superconducting loop is pierced by
the magnetic flux $\Phi$. The measuring device (the spectrometer) is shown to the left of the dashed line.
It incorporates a voltage-biased tunnel junction with Josephson coupling energy $E_{JS}$ connected
to the measured system via a low resistance $R$ and a large capacitance $C_0$.}
\label{steps}
\end{figure}
Assuming that the value $E_{JS}$ is sufficiently small, one can evaluate the inelastic Cooper pair current $I$ across the spectrometer perturbatively in $E_{JS}$. At subgap values of the applied voltage $V$ one readily finds \cite{Av,Ing}
\begin{eqnarray}
I=\frac{eE_{JS}^2}{2}\left( P(2eV)-P(-2eV)\right),
\label{pert}
\end{eqnarray}
where
\begin{eqnarray}
P(E)=\int\limits_{-\infty}^\infty dt e^{iEt}\exp\left\{ \frac{4e^2}{\pi}\int\limits_0^\infty \frac{d\omega}{\omega} {\rm Re}\left[ Z(\omega)\right]\right.\nonumber\\
\left.\times
\left[\coth\frac{\omega}{2T}\left( \cos (\omega t)-1\right)-i\sin(\omega t)\right]\right\}
\label{pe}
\end{eqnarray}
is the function describing energy smearing of a tunneling Cooper pair due to its interaction with
the electromagnetic environment characterized by a frequency-dependent impedance $Z(\omega)$ and temperature $T$. Provided the function $P(E)$ has the form of a delta-function $P(E)\propto\delta(E-E_0)$, the current will be peaked as $I(V)\propto \delta(2eV-E_0)$. This situation is similar to a narrow spectral line on a photoplate, thereby justifying the name of the measuring device.
Coupling of the spectrometer to a single environmental mode (provided, e.g., by an $LC$-contour) was considered in Ref. \onlinecite{Ing}. In this case the environmental impedance takes a simple form
\begin{equation}
Z_{0}(\omega)=\frac{i\omega}{C((\omega+i0)^2-\omega_0^2)}.\label{plm}
\end{equation}
Here $C$ is an effective capacitance of the $LC$-contour and $\omega_0$ is the oscillation frequency. As usual, an infinitesimally small imaginary part $i0$ added to $\omega$ in the denominator indicates the retarded nature of the response. Employing Eq. (\ref{pe}) together with Sokhotsky's formula
\begin{equation}
{\rm Im}\,\frac{1}{x+i0}=-\pi\delta(x),
\end{equation}
in the limit of low temperatures one finds
\begin{equation}
P(E)=2\pi e^{-\rho}\sum_{k=0}^\infty\frac{\rho^k}{k!}\delta(E-k\omega_0),\quad \rho=\frac{4E_C}{\omega_0}.
\label{sm}
\end{equation}
Here and below $E_C=e^2/2C$ is the effective charging energy. Combining Eqs. (\ref{sm}) and (\ref{pert}) we obtain the $I-V$ curve for our device which
consists of narrow current peaks at voltages
\begin{equation}
2eV=k\omega_0, \quad k=1,2,...
\label{discv}
\end{equation}
The physics behind this result is transparent: A Cooper pair with energy $2eV$ that tunnels across the junction releases this energy by exciting the environmental modes. In the case of an environment with a single harmonic quantum mode considered above this process can occur only at a discrete set
of voltages (\ref{discv}).
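The relative weights of the delta peaks in Eq. (\ref{sm}) form a Poisson distribution in the number $k$ of emitted plasma quanta. A minimal numerical check (the values of $E_C$ and $\omega_0$ below are purely illustrative) is:

```python
import math

def peak_weight(rho, k):
    """Relative weight of the current peak at 2eV = k*omega_0, from the Poisson factor in Eq. (sm)."""
    return math.exp(-rho) * rho ** k / math.factorial(k)

rho = 4 * 0.2 / 1.48  # rho = 4*E_C/omega_0 for illustrative E_C = 0.2, omega_0 = 1.48
weights = [peak_weight(rho, k) for k in range(6)]
print("first peak weights:", [f"{w:.3f}" for w in weights])
# Summing the weights over all k recovers unity, as it must for a Poisson distribution.
```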
Turning back to the system depicted in Fig. 1, we observe a clear similarity
to the above example of the $LC$-contour. Indeed, the asymmetric SQUID configuration on the right of Fig. 1 plays
the role of an effective inelastic environment for the spectrometer. Bearing in mind the kinetic inductances of both the Josephson element
and the non-tunnel superconducting contact, to a certain approximation this environment can also be viewed as an effective $LC$-contour.
An important difference with the latter, however, is the presence of extra quantum states -- discrete Andreev levels (\ref{And}) --
inside the superconducting contact. Hence, tunneling of a Cooper pair can also be accompanied by upward transitions between these states
and -- along with the current peaks at voltages (\ref{discv}) -- one can now expect the appearance of extra peaks at
\begin{equation}
2eV=k\omega_0+2\epsilon_n(\chi), \quad k=0,1,2,...
\label{discv2}
\end{equation}
This simple consideration served as a basic principle for the Andreev spectroscopy experiments \cite{Breth1} as well as for their interpretation \cite{Breth2}. While this phenomenological theory \cite{Breth2} correctly captures some important features of the phenomenon, it does not yet allow for the complete understanding of the system behavior, see, e.g., the corresponding discussion in Ref. \onlinecite{Breth2}. Therefore, the task at hand is to microscopically evaluate the function $P(E)$ for the asymmetric SQUID of Fig. 1, which governs the response of the spectrometer to the applied voltage.
In the next section we will describe the effective formalism which will be employed in order to accomplish this goal.
\section{Effective action and effective impedance}
Let us denote the total phase difference across the non-tunnel superconducting contact as $\chi +2\varphi (t)$, where $\chi$ is the constant part determined
by the magnetic flux $\Phi$ and $2\varphi (t)$ is the fluctuating part of the superconducting phase. Assuming that the Josephson coupling energy of a
tunnel junction $E_J$ is sufficiently large one can restrict further
analysis to small phase fluctuations $2\varphi (t) \ll 1$ in both tunnel and non-tunnel contacts forming our asymmetric SQUID.
The total action $S$ describing our system consists of three terms
\begin{equation}
S=S_{Ch}+S_J+S_{sc},
\label{sum}
\end{equation}
describing respectively the charging energy, the Josephson tunnel junction and the non-tunnel superconducting contact. In what follows we will stick to the Keldysh representation of the action in which case it is necessary to consider the phase fluctuation variable on two branches of the Keldysh contour, i.e. to define $\varphi_1(t)$ and $\varphi_2(t)$. At subgap frequencies the sum of the first two terms in Eq. (\ref{sum}) reads
\begin{equation}
S_{Ch}+S_J= -\int dt\varphi_-(t)[\ddot \varphi_+(t)/(2E_C)+4E_{J}\varphi_+(t)].
\label{sum2}
\end{equation}
Here, as usually, we introduced the so-called "classical" and "quantum" phases $\varphi_+(t)=(\varphi_1(t)+\varphi_2(t))/2$, $\varphi_-(t)=\varphi_1(t)-\varphi_2(t)$ and defined
an effective capacitance
\begin{equation}
C=C_{\Sigma}+\frac{\pi}{16 \Delta R_N},
\label{capren}
\end{equation}
which accounts for the renormalization of the geometric capacitance $C_{\Sigma}$ due to fluctuation effects in the Josephson junction \cite{SZ}. The above expansion of the total effective action in powers of (small) phase fluctuations remains applicable for
\begin{equation}
E_{J} \gg E_C.
\end{equation}
Expanding now the action $S_{sc}$ around the phase value $\chi$, we obtain \cite{we}
\begin{equation}
iS_{sc}=-\frac{i}{e}\int\limits_0^t dt'I_S(\chi)\varphi_-(t')+ iS_R-S_I, \label{finalsS}
\end{equation}
where $I_{S}(\chi)$ is defined in Eq. (\ref{Ichi}) and
\begin{eqnarray}
S_R&=&\int\limits_0^t dt'\int\limits_{0}^{t}dt''
{\cal R}(t'-t'') \varphi_-(t')\varphi_+(t''), \label{SRRR}\\
S_I&=&\int\limits_{0}^{t}dt'\int\limits_{0}^{t}dt''
{\cal I}(t'-t'') \varphi_-(t')\varphi_-(t'').
\end{eqnarray}
Both kernels ${\cal R}(t)$ and ${\cal I}(t)$ are real functions related to each other via the fluctuation-dissipation theorem. Defining the Fourier transform of these two kernels respectively as ${\cal R}_\omega={\cal R}'_\omega+i{\cal R}''_\omega$ and ${\cal I}_\omega$ (having only the real part), we obtain
\begin{equation}
{\cal R}''_\omega=2 {\cal I}_\omega \tanh\frac{\omega}{2T}.
\label{FDT}
\end{equation}
The action (\ref{finalsS}) results in the following current through the contact \cite{we}
\begin{equation}
I=I_{S}(\chi)-e \int d t'{\cal R}(t-t')\varphi_+(t')+\delta I(t).
\label{Langevin}
\end{equation}
Here $\delta I(t)$ is the stochastic component of the current. In the non-fluctuating case $\dot\varphi_+(t)=eV(t)$, and Eq. (\ref{Langevin}) defines the current-voltage relation.
The explicit expression for the kernel ${\cal R}(t)$ contains three contributions \cite{we}: One of them originates from the subgap Andreev bound states, another one describes quasiparticle states above the gap and, finally, the third term accounts for the interference between the first two. As here we are merely interested in the subgap response of our system, below we will
specify only the part of the kernel ${\cal R}$ governed by the Andreev bound states.
In the limit of low temperatures it reads (cf. Eqs. (A3), (A5) in Ref. \onlinecite{we}):
\begin{equation}
{\cal R}_\omega=\sum_n\frac{\gamma_n}{4\epsilon_n^2(\chi)-(\omega+i0)^2},
\label{rand}
\end{equation}
where, as before, the summation is taken over the conducting channels of the superconducting contact and
\begin{equation}
\gamma_n=4T_n^2(1-T_n) \frac{\Delta^4}{\epsilon_n(\chi)}\sin^4\frac{\chi}{2}\tanh\frac{\epsilon_n(\chi)}{2T}.\label{randg}
\end{equation}
Now we are in a position to evaluate the current through the spectrometer. In the second order in $E_{JS}$ we obtain
\begin{eqnarray}
&& I(V)=\frac{eE_{JS}^2}{2}\int d t \,{\rm Re}\left( e^{2ieVt} \left\langle e^{2i\varphi_1(t)-2i\varphi_1(0)}+\right.\right.\label{peK}\\&&\left.\left.
e^{2i\varphi_2(t)-2i\varphi_1(0)}- e^{2i\varphi_1(t)-2i\varphi_2(0)}-e^{2i\varphi_2(t)-2i\varphi_2(0)}\right\rangle \right),\nonumber
\end{eqnarray}
where the angular brackets imply averaging performed with the total Keldysh action (\ref{sum}). Under
the approximations adopted here this average is Gaussian and it can be handled in a straightforward manner. As a result, we again arrive at Eqs. (\ref{pert}), (\ref{pe}), where the inverse
impedance of our effective environment takes the form
\begin{eqnarray}
\frac{1}{Z(\omega)}=\frac{C\left(\omega^2-\omega_0^2 \right)}{i\omega}+\sum_n\frac{e^2\gamma_n}{i\omega\left[ 4\epsilon_n^2(\chi)-\omega^2 \right]}.\label{mr}
\end{eqnarray}
Here and below $\omega_0=\sqrt{8E_JE_C}$ is the Josephson plasma frequency.
Eq. (\ref{mr}) -- combined with Eqs. (\ref{And}), (\ref{randg}) -- is our central result which will be employed below in order
to evaluate the $P(E)$-function and to quantitatively describe the results of Andreev level spectroscopy experiments.
\section{Intensity of spectral lines}
It is obvious from Eqs. (\ref{pert}), (\ref{pe}) that the positions of the current peaks are determined by zeroes of the inverse impedance (\ref{mr}). Our theory allows us to establish both the positions and relative
heights of these peaks.
To begin with, let us assume that only one transport channel with transmission $T_n$ in our superconducting contact is important, while all others are either absent or irrelevant. In this case
from Eq. (\ref{mr}) we obtain
\begin{widetext}
\begin{eqnarray}
&& {\rm Re}\left[ Z(\omega)\right]=\frac{\pi}{4C}\left\{\left[\delta(\omega-\sqrt{x_1})+\delta(\omega+\sqrt{x_1})\right] \left[ 1+\frac{4\epsilon_n^2(\chi)-\omega_0^2}{\sqrt{(4\epsilon_n^2(\chi)-\omega_0^2)^2 +(4e^2\gamma_n/C)}}\right]\right.+
\nonumber\\
&& \left. \left[\delta(\omega-\sqrt{x_2})+\delta(\omega+\sqrt{x_2})\right] \left[ 1+\frac{\omega_0^2-4\epsilon_n^2(\chi)}{\sqrt{(4\epsilon_n^2(\chi)-\omega_0^2)^2 +(4e^2\gamma_n/C)}}\right]
\right\},\label{f1}
\end{eqnarray}
\end{widetext}
where
\begin{equation}
x_{1,2}=\frac{4\epsilon_n^2(\chi)+\omega_0^2\mp\sqrt{(4\epsilon_n^2(\chi)-\omega_0^2)^2 +4e^2\gamma_n/C}}{2}.\label{f2}
\end{equation}
These equations demonstrate that close to the ``level intersection'' point $\omega_0\approx 2\epsilon_n$ an effective
``level repulsion'' is controlled by the factor $\gamma_n$ (\ref{randg}). Outside of an immediate vicinity of this point
one can make use of the condition
\begin{equation}
\gamma_n \ll E_C{\rm max}(\omega_0^2,\epsilon_n^2(\chi))
\end{equation}
(which is typically well satisfied for the parameters under consideration) and expand the
square roots in Eqs. (\ref{f1}), (\ref{f2}) in powers of $\gamma_n$. As a result, one finds
\begin{eqnarray}
{\rm Re}\left[ Z(\omega)\right] =\frac{\pi}{2C}\left[ \delta(\omega-\omega_0)+\delta(\omega+\omega_0)
\right.\nonumber\\
\left.+ \frac{2E_C\gamma_n}{\left(\omega_0^2-4\epsilon_n^2 \right)^2}\left( \delta(\omega-2\epsilon_n)+\delta(\omega+2\epsilon_n)\right)\right].\label{rez}
\end{eqnarray}
Introducing the dimensionless parameter
\begin{equation}
\kappa_n=\frac{E_C\omega_0\gamma_n}{\epsilon_n \left( \omega_0^2-4\epsilon_n^2\right)^2},\label{kap}
\end{equation}
we get, to first order in $\kappa_n$:
\begin{eqnarray}
P(E)=2\pi e^{-\rho(1+\kappa_n)}\sum_{k=0}^\infty\frac{\rho^k}{k!}\left[\delta(E-k\omega_0)\right.\nonumber\\
\left. +\kappa_n \rho \delta(E-k\omega_0-2\epsilon_n)\right].\label{pef}
\end{eqnarray}
Substituting this result into Eq. (\ref{pert}) we recover the $I-V$ curve of our device at subgap voltages, which
fully determines the heights of all current peaks.
For instance, Eq. (\ref{pef}) yields the following ratio for the intensities of the two principal (voltage-integrated)
current peaks occurring at the points $2eV=2\epsilon_n$ and $2eV=\omega_0$:
\begin{equation}
\frac{\int\limits_{eV\approx \epsilon_n} I(V) dV}{\int\limits_{eV\approx \omega_0/2} I(V) dV}=\kappa_n\propto\frac{\sin^4\frac{\chi}{2}}{\epsilon_n^2(\chi)\left( \omega_0^2-4\epsilon_n^2(\chi)\right)^2}.
\label{mch}
\end{equation}
This formula determines relative intensities of the spectral lines as a function of
the phase $\chi$ (or, equivalently, the applied magnetic flux $\Phi$) and constitutes a specific prediction of our theory that can be directly verified in experiments. Eq. (\ref{mch}) holds irrespective of the fact that in any realistic experiment the $\delta$-function current peaks can be somewhat broadened by inelastic effects and it applies not too close to the point $\omega_0=2\epsilon_n$. This ratio of intensities is graphically illustrated in Figs. 2 and 3. The parameters of the figures are chosen in such a way that $\omega_0=2\epsilon_n$ at $\chi\approx \pi/2$. Fig. 3 corresponds to a smaller value of $\gamma_n$. The approximate expression (\ref{mch}) provides a good description away from $\chi\approx\pi/2$ for both figures, and the agreement improves in Fig. 3 owing to the smaller $\gamma_n$.
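The flux dependence of the ratio (\ref{mch}) is straightforward to tabulate. The sketch below uses the parameters quoted for Fig. 2 ($\Delta=1$, $T_n=0.9$, $\omega_0=1.48\Delta$, $e^2/C=0.4\Delta$) at zero temperature, where $\tanh(\epsilon_n/2T)\to 1$; recall that the approximate ratio diverges near the intersection point $\chi\approx\pi/2$, where the exact expression (\ref{f1}) must be used instead:

```python
import math

def kappa(chi, t_n=0.9, omega0=1.48, e2_over_c=0.4, delta=1.0):
    """Intensity ratio kappa_n of Eq. (kap) at T = 0 (tanh factor -> 1)."""
    eps = delta * math.sqrt(1.0 - t_n * math.sin(chi / 2.0) ** 2)
    # gamma_n of Eq. (randg) with tanh -> 1:
    gamma = 4.0 * t_n ** 2 * (1.0 - t_n) * delta ** 4 / eps * math.sin(chi / 2.0) ** 4
    e_c = e2_over_c / 2.0  # E_C = e^2/2C
    return e_c * omega0 * gamma / (eps * (omega0 ** 2 - 4.0 * eps ** 2) ** 2)

for chi in (0.3 * math.pi, 0.75 * math.pi, math.pi):
    print(f"chi = {chi / math.pi:.2f} pi: kappa = {kappa(chi):.4f}")
```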
\begin{figure}
\includegraphics[width=8.cm]{rati1.eps}
\caption{The ratio of the intensities of the current peaks at $2eV=2\epsilon_n$ and $2eV=\omega_0$. The parameters are $T_n=0.9$, $\omega_0=1.48\Delta$, $e^2/C=0.4\Delta$. The dashed line results from the exact expression (\ref{f1}), the solid line represents the approximate expression (\ref{mch}). }
\end{figure}
\begin{figure}
\includegraphics[width=8.cm]{rati2.eps}
\caption{The same as in Fig. 2 for $e^2/C=0.1\Delta$.}
\end{figure}
The above consideration can be generalized to the case of several conducting channels in a straightforward manner. For the sake of definiteness let us consider a contact containing two transport channels with transmissions $T_n$ and $T_m$. In this case Eq. (\ref{f2})
should be modified accordingly. Outside an immediate vicinity of the point $\omega_0=2\epsilon_n$ we obtain for the root corresponding to the plasma mode
\begin{equation}
x_1=\omega_0^2+\frac{2E_C\gamma_n}{\left( \omega_0^2-4\epsilon_n^2\right)}+\frac{2E_C\gamma_m}{\left( \omega_0^2-4\epsilon_m^2\right)}+...
\end{equation}
where $...$ stands for higher order in $\gamma_{n,m}$ terms. Similarly, for the other root we get
\begin{eqnarray}
x_2=4\epsilon_n^2+\frac{2E_C\gamma_n}{\left( 4\epsilon_n^2-\omega_0^2\right)}-\frac{4E_C^2\gamma_n^2}{ \left( 4\epsilon_n^2-\omega_0^2\right)^3}\nonumber\\
+\frac{E_C^2\gamma_n\gamma_m}{\left( 4\epsilon_n^2-\omega_0^2\right)^2\left( \epsilon_n^2-\epsilon_m^2\right)}+...
\end{eqnarray}
It also follows that the coefficients in front of the $\delta$-functions in Eq. (\ref{rez}) take the same form in the leading order in $\gamma_{n,m}$. Thus, instead of Eq. (\ref{pef}) we now have
\begin{eqnarray}
P(E)=2\pi e^{-\rho(1+\kappa_n+\kappa_m)}\sum_{k=0}^\infty\frac{\rho^k}{k!}\left[\delta(E-k\omega_0)\right.\nonumber\\
\left.+\kappa_n \rho \delta(E-k\omega_0-2\epsilon_n)+\kappa_m \rho \delta(E-k\omega_0-2\epsilon_m)\right].\label{pef2}
\end{eqnarray}
Close to the intersection point between the plasma mode and one of the Andreev modes the picture will still be governed by Eqs. (\ref{f1}), (\ref{f2}).
Thus, Eq. (\ref{pef2}) demonstrates that the two transport channels just yield "additive" contributions to the $P(E)$-function
describing the asymmetric SQUID under consideration. Along the same lines one can also recover the $P(E)$-function for the case of
more than two transport channels available in the contact.
\section{Capacitance renormalization}
In the above analysis we implicitly assumed that the Josephson plasma frequency $\omega_0$ does not depend on $\chi$. In the limit (\ref{RNRc}) of interest here this assumption is well justified provided
all channel transmission values $T_n$ remain substantially lower than unity. The situation may change, however, if at least one channel is (almost) open $T_n \approx 1$ and, on top of that, the phase
$\chi$ controlled by the magnetic flux $\Phi$ is driven sufficiently close to $\pi$. In that case capacitance renormalization effects due to phase fluctuations in the superconducting contact may yield
an important contribution which needs to be properly accounted for.
In order to do so we make use of the results of Ref. \onlinecite{we}, where the capacitance renormalization in a superconducting contact with an arbitrary distribution of transmissions $T_n$ was investigated in detail.
Accordingly, Eq. (\ref{capren}) should in general be replaced by
\begin{equation}
C(\chi )=C_{\Sigma}+\frac{\pi}{16 \Delta R_N}+\delta C(\chi),
\label{capren2}
\end{equation}
where \cite{we}
\begin{eqnarray}
&& \delta C(\chi)=\frac{e^2}{4\Delta}\sum_n \bigg\{ \frac{ 2-(2-T_n)\sin^2(\chi/2)}{T_n\sin^4(\chi/2)} \label{capren3}
\\&&-\left(1-T_n\sin^2(\chi/2) \right)^{-5/2}\bigg[2T_n(T_n-2)\sin^2(\chi/2)
\nonumber\\&&+5+T_n
+\frac{2-2(1+2T_n)\sin^2(\chi/2)}{T_n\sin^4(\chi/2)}\bigg]\bigg\}.\nonumber
\end{eqnarray}
For any transmission distribution and small phase values $\chi \ll 1$ Eq. (\ref{capren3})
yields
\begin{equation}
\delta C\simeq \frac{\pi}{16 \Delta R_c},
\label{smallchi}
\end{equation}
while for small $T_n \ll 1$ and any $\chi$ one finds
\begin{equation}
\delta C(\chi)=\frac{3\pi}{32 \Delta R_c}\left(1-\frac{\cos\chi}{3} \right).
\end{equation}
In both cases under the condition (\ref{RNRc}) an extra capacitance term $\delta C(\chi)$ in Eq. (\ref{capren2}) can be safely neglected and the latter reduces back to Eq. (\ref{capren}). On the other hand, in the presence
of highly transparent channels with $T_n\approx 1$ Eq. (\ref{capren3}) results in a sharp peak of $\delta C$ at $\chi \to \pi$:
\begin{equation}
\delta C\simeq\frac{e^2}{4\Delta}\sum_n\frac{1}{(1-T_n)^{3/2}}, \label{fd}
\end{equation}
which, depending on the parameters, may even dominate the effective capacitance $C$ at such values of $\chi$. As a result, the plasma frequency $\omega_0$ acquires a dependence on $\chi$ which may become
quite significant for phase values approaching $\chi \approx \pi$. In this case, in the results derived
in the previous section, one should replace $\omega_0 \to \omega_0(\chi )=\sqrt{8E_{J}E_C(\chi )}$, where
$E_C(\chi )=e^2/[2(C+\delta C(\chi))]$.
The dependence $\delta C(\chi)$ for various transmission distributions was studied in Ref. \onlinecite{we}
(cf., e.g., Fig. 3 in that paper). One of the important special cases is that of diffusive barriers. In this case the distribution of channel transmissions $T_n$ approaches the universal bimodal form with some channels being almost fully open and, hence, the capacitance renormalization effect should play a prominent role at $\chi \approx \pi$. At such values of $\chi$ one finds \cite{we} $\delta C(\chi)\simeq [\Delta R_c (\pi -\chi )^2]^{-1}$.
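Eq. (\ref{capren3}) can also be checked numerically against its limiting forms. The sketch below (units $e=\Delta=1$; the transmissions are illustrative) verifies both the small-$\chi$ limit $\delta C \to e^2\sum_n T_n/(16\Delta)$, equivalent to Eq. (\ref{smallchi}), and the sharp growth near $\chi=\pi$ for an almost open channel, Eq. (\ref{fd}):

```python
import math

def delta_c(chi, transmissions, delta=1.0):
    """Fluctuation correction delta C(chi) of Eq. (capren3), units e = 1; valid for 0 < chi <= pi."""
    s2 = math.sin(chi / 2.0) ** 2
    total = 0.0
    for t in transmissions:
        first = (2.0 - (2.0 - t) * s2) / (t * s2 ** 2)
        bracket = (2.0 * t * (t - 2.0) * s2 + 5.0 + t
                   + (2.0 - 2.0 * (1.0 + 2.0 * t) * s2) / (t * s2 ** 2))
        total += first - (1.0 - t * s2) ** -2.5 * bracket
    return total / (4.0 * delta)

# Small chi: delta_c approaches sum(T_n)/16 in these units.
print(f"{delta_c(0.05, [0.7]):.5f}  (limit {0.7 / 16:.5f})")
# Near chi = pi an almost open channel produces the sharp peak of Eq. (fd).
print(f"{delta_c(math.pi, [0.99]):.1f}")
```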
It should be emphasized that this capacitance renormalization influences not only Andreev peaks at $eV=2\epsilon_n$, but also the peaks occurring at voltages (\ref{discv}). Namely, as the phase $\chi$ approaches $\pi$ the positions of these peaks are shifted towards smaller voltages (since $\omega_0 \propto 1/\sqrt{C(\chi )}$) while the magnitudes of these peaks decrease (since $\rho \propto 1/\sqrt{C(\chi )}$). Likewise, the magnitudes of principal Andreev peaks $\propto \rho \kappa_n$ may decrease significantly for
$\chi \to \pi$.
\section{Spectral lines width}
Within the framework of our model the width of current peaks should tend to zero at $T=0$. However, at any nonzero $T$ these peaks become effectively
broadened due to inelastic effects. The corresponding linewidth can be estimated as $\delta \sim 1/(\tilde RC)$, where $\tilde R(T)$ is the effective
resistance of our system which tends to infinity at $T \to 0$ but remains finite at nonzero temperatures. The value $\tilde R(T)$ is controlled by the imaginary part of the kernel ${\cal R}$. It is necessary to include two contributions to this kernel -- one from the non-tunnel superconducting contact (already discussed above) and another one from the Josephson tunnel junction. Accordingly, for the imaginary part of the Fourier component for the total kernel $\tilde {\cal R}$ we have
\begin{equation}
\tilde {\cal R}_\omega''={\cal R}_\omega''+{\cal R}_{J\omega}'',
\end{equation}
where (for $0<\omega<2\Delta$)
\begin{eqnarray}
{\cal R}_{J\omega}''=\frac{1}{e^2 R_N}\int\limits_\Delta^\infty \frac{d\epsilon\left[ \epsilon(\epsilon+\omega)+\Delta^2\right]}{\sqrt{\epsilon^2-\Delta^2}\sqrt{(\epsilon+\omega)^2-\Delta^2}}
\nonumber\\
\times\left[\tanh\frac{\epsilon+\omega}{2T}-\tanh\frac{\epsilon}{2T}\right]\label{imt}
\end{eqnarray}
and ${\cal R}_\omega''$ is obtained from Eq. (\ref{FDT}) combined with Eq. (A1) from Ref. \onlinecite{we}.
As a result, for the subgap region we get
\begin{widetext}
\begin{eqnarray}
&& {\cal R}_\omega''=\sum_n\left\{T_n^{3/2}\left[\tanh\frac{\omega+\epsilon_n(\chi)}{2T}- \tanh\frac{\epsilon_n(\chi)}{2T} \right]\theta(\omega-\Delta+\epsilon_n(\chi))\left| \sin\frac{\chi}{2}\right|\right.\label{ct}\\&& \times\frac{\Delta\left( \omega\epsilon_n(\chi)+\Delta^2(1+\cos\chi)\right)}{2\epsilon_n(\chi)\left((\omega+\epsilon_n(\chi))^2-
\epsilon^2_n(\chi)\right)}\sqrt{(\omega+\epsilon_n(\chi))^2-\Delta^2} \nonumber
\\&&+\frac{T_n}{\pi}\int\limits_\Delta^\infty d\epsilon\frac{\sqrt{\epsilon^2-\Delta^2}\sqrt{(\epsilon+\omega)^2-\Delta^2}}{
(\epsilon^2-\epsilon^2_n)((\epsilon+\omega)^2-\epsilon^2_n)}\left( \epsilon(\epsilon+\omega)+\Delta^2\cos\chi+T_n\Delta^2\sin^2\frac{\chi}{2}\right) \left.\left( \tanh\frac{\epsilon+\omega}{2T}-\tanh\frac{\epsilon}{2T}\right)\right\}.\nonumber
\end{eqnarray}
\end{widetext}
Note that in the lowest order in $T_n$ this expression naturally reduces to that in Eq. (\ref{imt})
(with $R_N \to R_c$). On the other hand, for higher transmission values the difference between the two contributions (\ref{imt}) and (\ref{ct}) becomes essential: while the former yields the standard thermal factor $\sim \exp(-\Delta/T)$, the latter turns out to be proportional to $\sum_n\exp(-\epsilon_n/T)$ (as long as $\omega+\epsilon_{n}>\Delta$).
It follows from the above consideration that the width of the plasma mode peak can be estimated as
\begin{equation}
\delta \sim \frac{2E_C \tilde {\cal R}''_{\omega_0}}{\omega_0 },
\end{equation}
whereas the width of the current peak corresponding to the $n$-th Andreev level (away from its intersection with the plasma mode) is
\begin{equation}
\delta \sim \frac{2\kappa_nE_C\tilde {\cal R}''_{2\epsilon_n}}{\omega_0 },\label{cw}
\end{equation}
with $\kappa_n$ defined in Eq. (\ref{kap}). In the vicinity of the intersection point $\omega_0=2\epsilon_n$ it is necessary to replace $\kappa_n$ by a more complicated expression resulting from Eq. (\ref{f1}).
These estimates demonstrate the crossover from the standard thermal broadening factor $\sim \exp(-\Delta/T)$ to a larger one $\sim \exp(-\epsilon_n(\chi)/T)$ which accounts for the presence of subgap Andreev levels.
Note that our present consideration is sufficient only in the absence of extra sources of dissipation and under the assumption of thermalization. Both additional dissipation and non-equilibrium effects can further broaden the current peaks beyond the above estimates. Non-equilibrium effects can be captured, e.g., within the effective action formalism
\cite{Kos} which -- being equivalent to that of Ref. \onlinecite{we} in equilibrium -- also allows for non-equilibrium population of Andreev bound states. The corresponding analysis, however, is beyond the scope of the present paper.
\section{Quasiparticle current}
To complete our analysis let us briefly discuss the system behavior at higher voltages $eV >2\Delta$. In this case the $I-V$ curve of our device is determined by quasiparticle tunneling. In the presence
of an inelastic environment one has \cite{Fal}
\begin{equation}
I_{{\rm qp}}(V)=\int\limits_{-\infty}^\infty \frac{d\omega}{2\pi}\frac{1-e^{- eV/T}}{1-e^{-\omega/T}}P(eV-\omega)I_{{\rm qp}}^{(0)}\left(\frac{\omega}{e} \right).\label{fbs}
\end{equation}
Here $I_{{\rm qp}}(V)$ and $I_{{\rm qp}}^{(0)}$ represent the non-oscillating part of the voltage-dependent quasiparticle current respectively in the presence and in the absence of the environment. At $T \to 0$ the latter is defined by the well-known expression
\begin{eqnarray}
&&I_{{\rm qp}}^{(0)}(V)=\\&& \frac{\Delta}{eR_{NS}}\theta(v-1)\left[ 2vE\left(1-v^{-2}\right)-\frac{1}{v}K\left(1-v^{-2}\right)\right], \nonumber
\end{eqnarray}
where $R_{NS}$ is the normal resistance of the spectrometer junction, $v=eV/2\Delta$ and $E(k),\, K(k)$ are complete elliptic integrals defined as
\begin{equation}
E(k)=\int\limits_0^{\pi/2} d\phi\sqrt{1-k\sin^2\phi}, \; K(k)=\int\limits_0^{\pi/2} \frac{d\phi}{\sqrt{1-k\sin^2\phi}}.
\label{EK}
\end{equation}
\begin{figure}
\includegraphics[width=8.5cm]{cv.eps}
\caption{Zero-temperature quasiparticle current (\ref{fbs}) with the environment characterized by two quantum modes with frequencies $\omega_1=0.4\Delta$ and $\omega_2=0.7\Delta$. We also set $\rho_1=2$ and $\rho_2=1.$ The current steps are observed at $eV=2\Delta+k\omega_1+l\omega_2$. If $\rho_2$ were much smaller than unity, the steps would be observed at $eV=2\Delta+k\omega_1$ and $eV=2\Delta+k\omega_1+\omega_2$.}
\end{figure}
Combining Eqs. (\ref{fbs})--(\ref{EK}) with the expression for the $P(E)$-function (which is still defined by Eq. (\ref{pef}) with
$\rho \to \rho /4=E_C/\omega_0$) we arrive at the $I-V$ curve which contains two sets of current jumps (steps) at $eV=2\Delta+k\omega_0$ and
$eV=2\Delta+k\omega_0+2\epsilon_n$. This behavior for an effective two-mode environment is illustrated in Fig. 4.
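For completeness, the bare quasiparticle branch entering Eq. (\ref{fbs}) can be sketched numerically. The complete elliptic integrals (\ref{EK}) are evaluated below by simple Simpson quadrature (units $\Delta=1$, current in units of $\Delta/eR_{NS}$); the sketch reproduces the well-known current jump of height $\pi\Delta/2eR_{NS}$ at the gap edge $eV=2\Delta$:

```python
import math

def _simpson(f, a, b, n=2000):
    """Composite Simpson rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3.0

def ellip_e(k):
    """E(k) as defined in Eq. (EK); valid for 0 <= k < 1."""
    return _simpson(lambda p: math.sqrt(1.0 - k * math.sin(p) ** 2), 0.0, math.pi / 2)

def ellip_k(k):
    """K(k) as defined in Eq. (EK); valid for 0 <= k < 1."""
    return _simpson(lambda p: 1.0 / math.sqrt(1.0 - k * math.sin(p) ** 2), 0.0, math.pi / 2)

def i_qp0(v):
    """Bare quasiparticle current at T = 0, units Delta/(e*R_NS); v = eV/(2*Delta)."""
    if v < 1.0:
        return 0.0
    m = 1.0 - v ** -2
    return 2.0 * v * ellip_e(m) - ellip_k(m) / v

print(f"I(v=1) = {i_qp0(1.0):.4f}  (jump height pi/2 = {math.pi / 2:.4f})")
print(f"I(v=3) = {i_qp0(3.0):.4f}")
```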
\section{Conclusions}
In this work we developed a microscopic theory enabling one to construct a quantitative description of microwave spectroscopy experiments aimed at detecting
subgap Andreev states in non-tunnel superconducting contacts. Employing the effective action analysis \cite{we} we derived an effective impedance of
an asymmetric SQUID structure of Fig. 1 which specifically accounts for the presence of Andreev levels in the system.
At subgap voltages the
$I-V$ curve for the spectrometer is determined by inelastic tunneling of Cooper pairs and has the form of narrow current peaks at voltage values
(\ref{discv}) and (\ref{discv2}). Our theory allows us to evaluate explicitly the intensity of these current peaks and to establish its dependence on the
external magnetic flux $\Phi$ piercing the system. We also estimated thermal broadening of the current peaks to be determined by the factor
$\sim \exp(-\epsilon_n(\chi)/T)$ rather than by the standard one $\sim \exp(-\Delta/T)$.
In the vicinity of the point $\Phi \approx \Phi_0/2$ and provided at least one of the channel transmissions
$T_n$ is sufficiently close to unity, the positions and heights of the current peaks may be significantly influenced by capacitance renormalization in a
superconducting contact. For instance, the current peaks can shift to lower voltages at flux values $\Phi \approx \Phi_0/2$. We speculate that
this effect could be responsible for experimental observations \cite{Breth1} of such a decrease in one of the samples (sample 3). This sample had about
20 conducting channels some of which could well turn out to be highly transparent, thus providing necessary conditions for substantial $\chi$-dependent
capacitance renormalization.
Finally, we also analyzed the system behavior at overgap voltages $eV >2\Delta$ in which case the $I-V$ curve is mainly determined
by quasiparticle tunneling. The presence of both the plasma mode and Andreev levels results in the sets of current steps
on the $I-V$ curve of our device, as illustrated, e.g., in Fig. 4.
All the above theoretical predictions can be directly verified in future experiments.
\section{Introduction}
Zero-range processes (ZRP) are minimal models~\cite{spitz},
often used as simplified realisations of more complex processes
(for reviews, see \cite{kip,evans,lux}).
For instance they are instrumental for the understanding
of condensation transitions in driven diffusive systems~\cite{wis1,glm}.
They are closely related to urn models,
which themselves are simplified models of
a number of stochastic processes in Physics~\cite{urn}.
In a ZRP, particles, all equivalent, hop from site to site
on a lattice, with prescribed rates which only depend on the occupation of the departure site.
The fundamental property of this process is that
the stationary measure is explicitly known as a function of the rates,
and is a product measure~\cite{spitz,andj}.
A natural generalisation of this definition consists in allowing
two different species to coexist on each site,
again hopping with prescribed rates. However in this case
the stationary measure is a product measure only if the rates with which
the particles of both species leave a given site satisfy a constraint~\cite{gross,hanney}
(see eq.~(\ref{constraint}) below).
For short, we refer to ZRP satisfying (\ref{constraint}) as {\it integrable}.
If these rates do not satisfy the constraints,
the stationary measure is not known,
and the corresponding ZRP is a generic nonequilibrium model:
it violates detailed balance, even in the absence of a drive applied to
the system, as will be shown below.
A question of fundamental importance posed by the study of
nonequilibrium systems is the nature of their stationary state,
and in particular the possible existence of phase transitions at stationarity.
The present work is devoted to the investigation of this question
on the particular example of a non integrable two-species ZRP.
The model arose from the study of a two-species driven diffusive
system (DDS) exhibiting,
at stationarity, condensation with coexistence between a high and a low density phase
in each individual domain~\cite{glm}.
A domain in the original DDS, i.e. a stretch of particles of the two species,
corresponds to a site in the ZRP,
while the high and low density phases correspond to the two
species of the ZRP.
We study the model on the complete graph (i.e., in the fully connected geometry),
using analytical and numerical methods.
While for equal densities of the two species the transition
between a fluid phase and a condensed phase is continuous
(as is the case for the corresponding single-species ZRP),
for non-equal densities this transition is discontinuous.
The model exhibits a sudden phase transition from an imbalanced fluid
where both species have densities larger than
the critical density to a neutral fluid, with densities of both species equal to the critical
density, and an imbalanced condensate.
As a consequence, reentrance is observed: the system is successively fluid, condensed, then fluid again
as the density of one species is increased, holding the density of the other species fixed.
Coexistence between the two phases takes place along the transition line only.
This study can serve as a template for the study of the one-dimensional model.
\section{Definition of the model}
\subsection{A reminder on zero-range processes}
We first give a short reminder of the definition of a ZRP.
Consider, in any dimension, a lattice of $M$ sites
on which $N$ particles are moving.
Multiple occupancy of a site is allowed.
The dynamics consists in choosing a site at random,
then transferring one of the particles present on this site,
to an arrival site.
On the complete graph all sites are connected.
The arrival site is any site chosen randomly.
In one dimension, the arrival site is one of the two nearest neighbours,
chosen with a given probability, $p$, to the right, or $q=1-p$, to the left.
The transfer of particles is done with the rate
$u_{k}$ ($k>0$),
only depending on the number $k$ of particles on
the departure site.
The fundamental property of the ZRP is that its stationary measure is known,
and is a product measure, as follows.
Let us denote by $N_i$ the random occupation of site $i$.
The stationary weight of a configuration of the system is
\begin{equation}\label{z:factor}
\P(N_{1},\dots,N_{M})=\frac{1}{Z_{M,N}}\prod_{i=1}^M p_{N_i},
\end{equation}
where the normalisation factor $Z_{M,N}$ reads
\begin{equation}\label{z:part}
Z_{M,N}=\sum_{N_{1}}\cdots\sum_{N_{M}}\,p_{N_{1}}\cdots p_{N_{M}}\;\delta
\left(\sum_{i}N_{i},N\right).
\end{equation}
For a given rate $u_k$, the factors $p_k$ obey the relation
\begin{equation}\label{rec1}
u_k\,p_k=p_{k-1}
\end{equation}
which leads to the explicit form
\begin{equation}
p_0=1,\qquad p_k=\frac{1}{u_1\dots u_k}.
\label{z:pk}
\end{equation}
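The recursion~(\ref{rec1}) is straightforward to iterate numerically. The following minimal Python sketch (ours) uses the rate $u_k=1+b/k$, a standard choice for which $p_k\sim k^{-b}$ at large $k$, purely for illustration:

```python
def p_factors(u, kmax):
    # iterate u_k p_k = p_{k-1}, i.e. p_k = p_{k-1} / u_k, starting from p_0 = 1
    p = [1.0]
    for k in range(1, kmax + 1):
        p.append(p[-1] / u(k))
    return p

b = 1.5
p = p_factors(lambda k: 1.0 + b / k, 1000)
ratio = p[1000] / p[500]   # approaches 2**(-b), since p_k ~ k**(-b)
```

The ratio $p_{2k}/p_k$ approaches $2^{-b}$, consistent with the power-law decay of the factors for this choice of rate.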
Let us emphasize two important characteristics of the ZRP (the same holding
for integrable two-species ZRP, defined below).
\begin{itemize}
\item
When the dynamics is symmetric, e.g. in the one dimensional geometry with $p=1/2$,
or in the fully connected geometry,
detailed balance with respect to the stationary measure
is satisfied. For the single-species ZRP the detailed balance condition reads
\[
p_k\,p_l\,u_k=p_{k-1}\,p_{l+1}\,u_{l+1},
\]
which is precisely the property that leads to~(\ref{rec1}).\footnote{When the dynamics
is not symmetric, the condition of detailed balance is replaced by a
condition of pairwise balance. See~\cite{lux} for a discussion of this point.}
\item
As can be seen on (\ref{z:factor}), the stationary measure is independent of the asymmetry.
As a consequence, any property of the ZRP based on the sole knowledge
of this measure is itself independent of the asymmetry.
For example, with special choices of the transfer rate $u_k$, a condensation transition
can occur in the system. The features characterising this
phase transition are independent of the asymmetry.
\end{itemize}
This is in contrast with
the ZRP studied in the present work, where
detailed balance is {\it not} satisfied, {\it even when the dynamics is symmetric},
i.e., in the absence of a drive, as explained below.
In this sense this model is a generic nonequilibrium model, and the phase transition
described in the next sections is specific of a nonequilibrium system.
\subsection{The model considered in the present work}
The model considered in the present work is
a two-species ZRP.
The general definition of a two-species ZRP is a simple extension
of that of the usual ZRP~\cite{gross,hanney}.
Consider, in any dimension, a lattice of $M$ sites
with $n$ particles of type 1, $m$ particles of type 2.
The dynamics consists in choosing a site at random,
then transferring one of the particles present on this site,
of one of the species chosen at random, to an arrival site.
The transfer of particles is done with rates
$u_{k,l}$ ($k>0$) for a particle of the first species,
and $v_{k,l}$ ($l>0$) for a particle of the other species,
where $k$ and $l$ are respectively the number of particles of each species
on the departure site.
At variance with the case of single-species ZRP where the stationary measure is
a product measure for any choice of the transfer rate,
for a two-species ZRP this property holds only if the following constraint
on the rates $u_{k,l}$ and $v_{k,l}$
is satisfied~\cite{gross,hanney}:
\begin{equation}\label{constraint}
u_{k,l}\,v_{k-1,l}=v_{k,l}\,u_{k,l-1}.
\end{equation}
In the present work we choose rates which violate this constraint.
As a consequence, nothing a priori is known on the nature of the stationary measure
of the model.
The rates read
\begin{equation}\label{rates}
u_{k,l}=1+\frac{b}{l},\qquad
v_{k,l}=1+\frac{b}{k},
\end{equation}
where $b$ is a given parameter, which plays the role of inverse temperature~\cite{lux}.
We also set
$u_{k,0}=v_{0,l}=1+b$ in order to complete the definition of the process.
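One checks directly that these rates violate~(\ref{constraint}). A small Python sketch of this check (ours), which also verifies that decoupled rates, with $u$ independent of $l$ and $v$ independent of $k$, satisfy the constraint identically:

```python
b = 1.5

def u(k, l):
    return 1.0 + (b / l if l > 0 else b)   # u_{k,l} = 1 + b/l, u_{k,0} = 1 + b

def v(k, l):
    return 1.0 + (b / k if k > 0 else b)   # v_{k,l} = 1 + b/k, v_{0,l} = 1 + b

def residual(u, v, k, l):
    # u_{k,l} v_{k-1,l} - v_{k,l} u_{k,l-1}; zero for an integrable ZRP
    return u(k, l) * v(k - 1, l) - v(k, l) * u(k, l - 1)

r = residual(u, v, 2, 3)                   # 0.6875, nonzero: non-integrable

# decoupled rates (u ignores l, v ignores k) satisfy the constraint trivially
r_dec = residual(lambda k, l: 1.0 + b / max(k, 1),
                 lambda k, l: 1.0 + b / max(l, 1), 2, 3)   # exactly zero
```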
These rates favour equality of the number of particles of both species
on each site.
This choice aims at reproducing a feature of the original DDS, where
inside the domains coexistence between a high and a low density phase takes place.
The model was first introduced in~\cite{glm}, and studied in the equal density case.
This is reviewed and extended below. We then focus on the non-equal density case.
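The dynamics defined above is also easy to simulate directly. The sketch below (ours; the system size and number of moves are illustrative) implements the process on the complete graph by rejection sampling, exploiting the fact that both rates are bounded by $1+b$:

```python
import random

def simulate(M=50, n=50, m=50, b=1.5, steps=200_000, seed=0):
    # Two-species ZRP on the complete graph, rates u_{k,l} = 1 + b/l and
    # v_{k,l} = 1 + b/k (boundary value 1 + b), via rejection sampling.
    rng = random.Random(seed)
    k = [0] * M
    l = [0] * M
    for _ in range(n):
        k[rng.randrange(M)] += 1
    for _ in range(m):
        l[rng.randrange(M)] += 1
    rate_max = 1.0 + b                    # both rates are bounded by 1 + b
    for _ in range(steps):
        i = rng.randrange(M)              # departure site
        if rng.random() < 0.5:            # species chosen at random
            if k[i] > 0:
                rate = 1.0 + (b / l[i] if l[i] > 0 else b)
                if rng.random() * rate_max < rate:
                    k[i] -= 1
                    k[rng.randrange(M)] += 1
        else:
            if l[i] > 0:
                rate = 1.0 + (b / k[i] if k[i] > 0 else b)
                if rng.random() * rate_max < rate:
                    l[i] -= 1
                    l[rng.randrange(M)] += 1
    return k, l

k_occ, l_occ = simulate()
```

Conservation of both particle numbers provides an elementary consistency check; time averages of $u_{k,l}$ and $v_{k,l}$ over sites then estimate the mean hopping rates $\u$ and $\v$ introduced below.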
\section{The case of two sites}
We begin by considering the case where the system is made of two sites.
This case shares many common features with the complete model and serves
as a useful preparation for the rest.
A configuration of the system is entirely specified by the numbers $k$ and $l$
of particles of each species on site 1,
since the number of particles on site 2 are then just equal to
$n-k$ and $m-l$.
Therefore the weight of a configuration of the system is given by
the probability $f_{k,l}(t)$ that site 1 contains
$k$ particles of one species, and $l$ particles of the other species, at time $t$.
It obeys the master equation
\begin{eqnarray}\label{master2}
\frac{\d f_{k,l}(t)}{\d t}
&=&
u_{k+1,l}\,f_{k+1,l}(1-\delta_{k,n})+v_{k,l+1}\,f_{k,l+1}(1-\delta_{l,m})\nonumber\\
&+&u_{n-k+1,m-l}\,f_{k-1,l}(1-\delta_{k,0})
+v_{n-k,m-l+1}\,f_{k,l-1}(1-\delta_{l,0})\nonumber\\
&-&\left[u_{k,l}+v_{k,l}
+u_{n-k,m-l}
+v_{n-k,m-l}\right]f_{k,l},
\end{eqnarray}
where it is understood that $u_{0,l}=v_{k,0}=0$.
This is the master equation of a biased random walk in the
rectangle $0\le k\le n$, $0\le l\le m$, with reflecting boundary conditions.
We would like to know the stationary solution
$f_{k,l}$ of this equation, for {\it any choice of rates $u_{k,l}$, $v_{k,l}$}.
It turns out that this question is already too hard to answer even for the seemingly simple
problem of a two-site model.
We must content ourselves with the knowledge of the stationary solution
for the class of processes fulfilling the constraint~(\ref{constraint}).
Indeed, provided the rates fulfill this constraint, the stationary distribution is given by
\begin{equation}\label{factor}
f_{k,l}=\frac{p_{k,l}\,p_{n-k,m-l}}{Z_{M,n,m}},
\end{equation}
where
\begin{eqnarray}\label{rec}
p_{k,l}\,u_{k,l}&=&p_{k-1,l},\nonumber \\
p_{k,l}\,v_{k,l}&=&p_{k,l-1},
\end{eqnarray}
and
$Z_{M,n,m}$ is a normalisation (the partition function).
Relations~(\ref{rec}) can be iterated, thus
determining the $p_{k,l}$ in terms of the rates.
Eqs.~(\ref{factor}) and (\ref{rec}) generalize eqs.~(\ref{z:factor}) and (\ref{rec1}).
The method used in~\cite{gross,hanney} to obtain these results consists
in making the ansatz~(\ref{factor}) and carrying this form into the master equation,
which leads to~(\ref{rec}), which in turn imposes~(\ref{bis}) as a compatibility
relation.
We wish to bring an independent and complementary viewpoint to this issue.
We first note that the dynamics between the two sites
is symmetric.
We therefore examine whether the process can be reversible
in time, and the consequences thereof.
Reversibility is equivalently the property that the process obeys
detailed balance with respect to the stationary measure,
or otherwise stated that the system is at equilibrium.
We proceed as follows.\\
(i)
We first determine the stationary distribution $f_{k,l}$
when
detailed balance is obeyed.
Consider the transitions from $\{k,l\}$
to $\{k+1,l\}$ and back, and
from $\{k,l\}$
to $\{k,l+1\}$ and back.
Detailed balance requires
\begin{eqnarray*}
u_{n-k,m-l}\,f_{k,l}&=&u_{k+1,l}\,f_{k+1,l},\\
v_{n-k,m-l}\,f_{k,l}&=&v_{k,l+1}\,f_{k,l+1}.
\end{eqnarray*}
It is readily found that a solution of these equations is given by (\ref{factor})
and (\ref{rec}).\\
(ii)
We now determine, by yet another path, the conditions on the rates
for the model to satisfy reversibility.
We use the Kolmogorov criterion~\cite{kol,kel} which is a
necessary and sufficient condition
for a Markov process to be reversible.
This condition states that the product of the transition rates along
any cycle in the space of configurations
should be equal to the product of the transition rates for the
reverse cycle.
In the present case, the space of configurations is the rectangle $0\le k\le n$,
$0\le l\le m$.
Taking the cycle
\[
(k,l)\to(k,l-1)\to(k+1,l-1)\to(k+1,l)\to(k,l),
\]
then the cycle in reverse order,
the Kolmogorov condition leads to the equation
\begin{equation}\label{kol}
\frac{u_{k+1,l}\,v_{k,l}}{u_{k+1,l-1}\,v_{k+1,l}} =
\frac{u_{n-k,m-l}\,v_{n-k,m-(l-1)}}
{u_{n-k,m-(l-1)}\,v_{n-(k+1),m-(l-1)}}.
\end{equation}
The two sides of this equation depend on different occupation numbers, and must therefore be satisfied separately.
This imposes that
\begin{equation}\label{bis}
u_{k,l}\,v_{k-1,l}=u_{k,l-1}\,v_{k,l},
\end{equation}
which is the constraint (\ref{constraint}).%
\footnote{Eq.~(\ref{kol}) can be satisfied by imposing
symmetry relations on the rates:
\begin{eqnarray*}
u_{k+1,l}&=&u_{n-k,m-l},\\
v_{k,l+1}&=&v_{n-k,m-l}.
\end{eqnarray*}
The corresponding stationary measure is uniform:
\[
f_{k,l}=\frac{1}{(n+1)(m+1)},
\]
and detailed balance is obeyed.
We discard this solution because the rates would then also depend on the arrival site.}
To summarize, reversibility implies stationary product measure, eqs.~(\ref{factor}),
(\ref{rec}), and a constraint on the rates, eq.~(\ref{bis}).
The reciprocal statement holds.
The proof follows easily from the fact that a Markov process with a finite configuration
space has a unique stationary solution.
We leave it to the reader.
The physical interpretation of the results above is that
when the system is at equilibrium, its energy
is equal to the sum of the energies of two independent sites.
Conversely, for a choice of rates violating (\ref{bis}),
as is the case for the model studied here, the model is
not reversible, the stationary measure does not take
the simple form (\ref{factor}), and is not known a priori.
In other words, for a general choice of rates, the two-site
model can have an arbitrarily complex
stationary measure. In this sense it represents an example of
a minimal nonequilibrium system.
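For the rates~(\ref{rates}), the failure of the Kolmogorov criterion can be verified explicitly. The sketch below (ours, with $n=m=4$ and $b=3/2$ as illustrative values) evaluates the products of transition rates along the elementary square cycle used above and along its reverse:

```python
b, n, m = 1.5, 4, 4      # illustrative values

def u(k, l):
    return 1.0 + (b / l if l > 0 else b)   # u_{k,l}, with u_{k,0} = 1 + b

def v(k, l):
    return 1.0 + (b / k if k > 0 else b)   # v_{k,l}, with v_{0,l} = 1 + b

def cycle_products(k, l):
    # Rates along (k,l) -> (k,l-1) -> (k+1,l-1) -> (k+1,l) -> (k,l) and
    # its reverse.  A particle arriving on site 1 leaves site 2, whose
    # occupations are (n-k, m-l); hence the shifted arguments below.
    fwd = (v(k, l)                        # (k,l)     -> (k,l-1)
           * u(n - k, m - l + 1)          # (k,l-1)   -> (k+1,l-1)
           * v(n - k - 1, m - l + 1)      # (k+1,l-1) -> (k+1,l)
           * u(k + 1, l))                 # (k+1,l)   -> (k,l)
    rev = (u(n - k, m - l)                # (k,l)     -> (k+1,l)
           * v(k + 1, l)                  # (k+1,l)   -> (k+1,l-1)
           * u(k + 1, l - 1)              # (k+1,l-1) -> (k,l-1)
           * v(n - k, m - l + 1))         # (k,l-1)   -> (k,l)
    return fwd, rev

fwd, rev = cycle_products(1, 1)
```

For the cycle based at $(k,l)=(1,1)$ the two products are $\approx15.04$ and $\approx9.84$, so the two-site dynamics with these rates is indeed irreversible.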
\section{The model on the complete graph}
The virtue of considering the fully connected geometry,
in the thermodynamical limit of an infinite system,
is that it leads to analytical
results on the model.
The rest of the paper is devoted to this case.
Consider again the single-site occupation probability $f_{k,l}(t)$, that is
the probability that a generic site contains
$k$ particles of one species, and $l$ particles of the other species, at time $t$.
Conservation of probability and particle numbers imposes
$\sum_{k,l}f_{k,l}(t)=1$
and
\begin{equation}
\sum_{k=1}^{\infty }k\,f_{k}(t) =\rho_1, \qquad
\sum_{l=1}^{\infty }l\,f_{l}(t) =\rho_2 ,
\label{rho}
\end{equation}
where for a large system, densities are defined as $\rho_1=n/M$, $\rho_2=m/M$,
and where the marginals are denoted by
$f_k=\sum_{l }f_{k,l}$, and
$f_l=\sum_{k }f_{k,l} $.
The master equation for the temporal evolution of
$f_{k,l}(t)$ reads
\begin{eqnarray}\label{recgene}
\frac{\d f_{k,l}(t)}{\d t}
&=&
u_{k+1,l}\,f_{k+1,l}+v_{k,l+1}\,f_{k,l+1}\nonumber
\\&+&\u_t\,f_{k-1,l}(1-\delta_{k,0})+\v_t\,f_{k,l-1}(1-\delta_{l,0})\\
&-&\left[u_{k,l}+v_{k,l}+\u_t+\v_t\right]f_{k,l}\;,
\nonumber
\end{eqnarray}
where
\[
\u_t =\sum_{k,l}u_{k,l}\,f_{k,l}, \qquad
\v_t =\sum_{k,l}v_{k,l}\,f_{k,l}
\]
are the mean rates at which a particle arrives on a site
($k\rightarrow k+1 $) or ($l\rightarrow l+1 $).
Equation~(\ref{recgene}) is the master equation for a biased random walk in the quadrant
$k, l\ge0 $, with reflecting boundary conditions on the axes.
We wish to determine the stationary solution of this equation.
We follow the same line of thought as in the previous section.
We show that the stationary distribution $f_{k,l}$ has a known
closed-form expression only if reversibility is assumed.
Indeed, using the detailed balance conditions
\begin{eqnarray*}
f_{k+1,l}\,u_{k+1,l}&=&\u\,f_{k,l},\\
f_{k,l+1}\,v_{k,l+1}&=&\v\,f_{k,l},
\end{eqnarray*}
it is easy to derive the following explicit expression for the stationary
distribution $f_{k,l}$:
\begin{equation}\label{fklmf}
f_{k,l}=\frac{p_{k,l}\,\u^k\,\v^l}{\sum_{k,l} p_{k,l}\,\u^k\,\v^l},
\end{equation}
where the $p_{k,l}$ are given by (\ref{rec}), and $\u$ and $\v$ are the
stationary mean hopping rates.
Let us also show that, as for the two-site system, the constraint~(\ref{bis})
is a consequence of imposing reversibility.
Indeed, the space of configurations is the quadrant $k,l\ge0$.
Taking the cycle
\[
(k,l)\to(k,l-1)\to(k+1,l-1)\to(k+1,l)\to(k,l),
\]
then the cycle in reverse order,
the Kolmogorov condition implies
\[
v_{k,l}\,\u\, \v\, u_{k+1,l}=\u\, v_{k+1,l}\,u_{k+1,l-1}\,\v,
\]
which yields~(\ref{bis}).
As for the two-site system, we conclude that conversely, when (\ref{bis}) is violated,
as is the case with the choice of rates~(\ref{rates}),
the stationary distribution remains unknown.
The present work is an endeavour to determine features of the stationary measure
of the model for a thermodynamical system,
and to investigate the possible existence of nonequilibrium phase transitions.
In the case of an integrable
two-species ZRP, the fugacities are functions of the densities (see eq.~(\ref{fklmf})).
Here the fugacity-density duality is replaced by a duality between the mean hopping rates and the densities.
\section{Criticality}
As will appear clearly as we proceed, the critical point for this model is unique, and
corresponds to taking $\u=\v=1$.
\subsection{Continuum limit: universal properties}
Let us first consider the
continuum limit of the stationary equation,
in the asymptotic regime where $k$ and $l$ are large, and setting
$\u=\e^\mu$, $\v=\e^\nu$, where $\mu$ and $\nu$ are small.
Expanding $f_{k,l}$ to second order, we obtain
\begin{equation}
\frac{\partial^2 f_{k,l}}{\partial k^2}+\frac{\partial^2 f_{k,l}}{\partial l^2}
+\frac{\partial f_{k,l}}{\partial k}\left(\frac{b}{l}-\mu\right)
+\frac{\partial f_{k,l}}{\partial l}\left(\frac{b}{k}-\nu\right)=0.
\label{limcont}
\end{equation}
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.7\linewidth]{ab_new.eps}
\caption{\small
Decay exponent $a$ as a function of $b$.
At large values of $b$, $a\approx2 b+\sqrt{2}$.
}
\label{f1}
\end{center}
\end{figure}
At criticality, i.e. when $\mu=\nu=0$, eq.~(\ref{limcont})
becomes scale invariant, and reads
\begin{equation}\label{crit}
\frac{\partial^2 f_{k,l}}{\partial k^2}+\frac{\partial^2 f_{k,l}}{\partial l^2}
+b\left(\frac{1}{l}\frac{\partial f_{k,l}}{\partial k}+
\frac{1}{k}\frac{\partial f_{k,l}}{\partial l}\right)
=0.
\end{equation}
Using polar coordinates: $k=r \cos\theta$,
$l=r \sin\theta$, with $0\le \t\le\pi/2$,
this equation is transformed into
\begin{equation}
\frac{\partial^2 f(r,\theta)}{\partial r^2}
+\frac{1}{r^2}\frac{\partial^2 f(r,\theta)}{\partial \theta^2}
+\frac{1}{r}\frac{\partial f(r,\theta)}{\partial r}
\left( 1+\frac{2b}{\sin 2\theta}\right)
=0.
\label{critpol}
\end{equation}
Now, setting $f(r,\theta)=r^{-a}g(\theta)$, we find an equation for the
angular function $g(\theta)$:
\begin{equation}\label{eq_g}
\frac{\d^2 g(\theta)}{\d \theta^2}
+a\left( a-\frac{2b}{\sin 2\theta}\right)g(\theta)
=0.
\end{equation}
The unknown decay exponent $a$ is determined by the boundary conditions imposed on
$g(\t)$, which are the quantisation conditions
of this Schr\"odinger equation.
Indeed, $g(\t)$ is positive for $0\le\t\le\pi/2$, symmetric with respect to $\pi/4$
and must vanish for $\t=0$ or $\t=\pi/2$.
For special values of $b$, exact solutions of eq.~(\ref{critpol})
can be found:
\begin{eqnarray}\label{beq0}
f(r,\t)=& r^{-2}\sin\t\cos\t\quad&(b=0),\\\label{beq1}
f(r,\t)=& r^{-3}\sin\t\cos\t(\sin\t+\cos\t)\quad&(b=2/3).
\end{eqnarray}
For $b=0$ the original model has no critical behaviour, hence formally $a=0$.
On the other hand the prediction of the continuum limit for the decay exponent,
in the limit $b\to0$, is $a=2$.
The decay exponent $a$ is discontinuous at $b=0$.
For a generic value of $b$, the decay exponent $a$ is determined by
numerical integration of the differential equation~(\ref{eq_g}).
At large values of $b$, the behaviour of this exponent can be obtained analytically.
Indeed, expanding the potential term in (\ref{eq_g}) to second order
around its minimum, located at $\pi/4$, yields the equation of a harmonic oscillator,
with coupling constant $\omega=2\sqrt{a b}$ and energy $a(a-2 b)/2$:
\[
g''(x)+a(a-2b-4b x^2)g(x)=0
\]
where $x=\pi/4-\t$.
Imposing that the ground state energy be equal to $\omega/2$ yields
the asymptotic quantisation condition $a=2 b+\sqrt{2}$. (See figure~\ref{f1}.)
As a consequence, we find the remarkable result that,
at criticality,
the marginal distributions $f_k$ and $f_l$
decay as power laws at large occupations, with a non-trivial exponent equal to $a-1$.
The same holds for $p_m$, with $m=k+l$, the distribution of the total
number of particles on a site,
\[
p_m=\sum_{k=0}^m f_{k,m-k}\sim m^{-(a-1)}.
\]
Note finally that both the function $g(\t)$ and the exponent $a$ are universal,
and only depend on $b$.
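The exponent can be reproduced by a simple shooting computation (our sketch): integrate eq.~(\ref{eq_g}) from the symmetry point $\t=\pi/4$, where $g'(\pi/4)=0$, down to a small cut-off, and bisect on $a$ until the first zero of $g$ reaches $\t=0$. For $b=2/3$ this recovers the exact value $a=3$ of eq.~(\ref{beq1}), and for $b=3/2$ it reproduces the value $a\approx4.520$ quoted in the next subsection.

```python
import math

def g_at_cutoff(a, b, eps=1e-3, n_steps=2000):
    # Integrate g'' = -a (a - 2b/sin 2t) g from t = pi/4, where symmetry
    # imposes g'(pi/4) = 0, down to t = eps, with classical RK4.
    def acc(t, g):
        return -a * (a - 2.0 * b / math.sin(2.0 * t)) * g
    t = math.pi / 4.0
    h = -(t - eps) / n_steps
    g, dg = 1.0, 0.0
    for _ in range(n_steps):
        k1g, k1d = dg, acc(t, g)
        k2g, k2d = dg + h / 2 * k1d, acc(t + h / 2, g + h / 2 * k1g)
        k3g, k3d = dg + h / 2 * k2d, acc(t + h / 2, g + h / 2 * k2g)
        k4g, k4d = dg + h * k3d, acc(t + h, g + h * k3g)
        g += h / 6 * (k1g + 2 * k2g + 2 * k3g + k4g)
        dg += h / 6 * (k1d + 2 * k2d + 2 * k3d + k4d)
        t += h
    return g

def decay_exponent(b, a_lo=1.0, a_hi=6.0):
    # Bisect on a: below the ground-state value g stays positive down to
    # the cut-off; above it, the first zero of g moves to t > 0.
    for _ in range(40):
        a_mid = 0.5 * (a_lo + a_hi)
        if g_at_cutoff(a_mid, b) > 0.0:
            a_lo = a_mid
        else:
            a_hi = a_mid
    return 0.5 * (a_lo + a_hi)

a_b23 = decay_exponent(2.0 / 3.0)   # close to the exact value a = 3
```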
\subsection{Discrete equations: critical density}
\label{discrete}
The determination of the non universal critical density $\rho_c$ of both species,
where
\[
\rho_c=\sum_{k=1}^{\infty }k\,f_{k}=
\sum_{l=1}^{\infty }l\,f_{l} ,\qquad (\u=\v=1),
\]
requires the knowledge of the stationary solution $f_{k,l}$ of the discrete eqs.~(\ref{recgene}).
These are integrated numerically with $\u=\v=1$,
using the following method.
We truncate these equations
at a given value of $k+l$ denoted by $m_\star$, which plays the role of a cut-off.
We solve the linear system $A\,F=I$, where $F$ is the column matrix of the occupation
probabilities $f_{k,l}$, $I$ is the matrix containing the inhomogeneous term $f_{0,0}$,
itself determined at the end of the computation by normalisation, and
$A$ is the matrix deduced from the stationary equations.
We impose the boundary conditions $f_{k,l}=0$ outside the triangle delimited by
$k=0$, $l=0$, and $k+l=m_\star$.
The maximal value of the cut-off $m_\star$ attainable is limited by the size of the matrices involved.
For example, taking $m_\star=160$ corresponds to a linear system of order $13040$.
As an illustration we take $b=3/2$,
corresponding to a value of the decay exponent $a\approx4.520$.
Extrapolating the data for several values
of $m_\star$,
using the estimate
\[
\rho_c-\rho_c(m_\star)\approx\int_{m_\star}^\infty \d m\, m\, p_m
\sim m_\star^{-(a-3)},
\]
as depicted in figure~\ref{f4}, leads to $\rho_c\approx0.976$.
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.7\linewidth]{rho_crit_b.eps}
\caption{\small
Determination of the critical density by extrapolation of the data
for $m_\star=40,60,\ldots,160$.
The circle on the vertical axis is the extrapolated value for $\rho_c$.
($b=3/2$, $a=4.520\ldots$,
$\u=\v=1$.)
}
\label{f4}
\end{center}
\end{figure}
The theoretical prediction for
the critical decay exponent of $p_{m}$, or $f_k\equiv f_l$,
agrees perfectly well with numerical measurements.
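A stripped-down version of this computation is sketched below (ours): Gauss--Seidel relaxation of the truncated stationary equations replaces the direct linear solve, with a modest cut-off, so the resulting density lies somewhat below the extrapolated value $\rho_c\approx0.976$:

```python
def critical_density(b=1.5, mstar=24, sweeps=3000):
    # Stationary f_{k,l} of the truncated equations at criticality
    # (mean hopping rates set to 1), with f = 0 outside k + l <= mstar.
    # Gauss-Seidel sweeps replace the direct linear solve of the text.
    f = [[0.0] * (mstar + 1) for _ in range(mstar + 1)]
    f[0][0] = 1.0                 # pinned; overall normalisation at the end
    for _ in range(sweeps):
        for k in range(mstar + 1):
            for l in range(mstar + 1 - k):
                if k == 0 and l == 0:
                    continue
                gain = 0.0
                if k > 0:
                    gain += f[k - 1][l]                    # arrival, rate 1
                if l > 0:
                    gain += f[k][l - 1]                    # arrival, rate 1
                if k + l < mstar:
                    gain += (1.0 + (b / l if l else b)) * f[k + 1][l]  # u_{k+1,l}
                    gain += (1.0 + (b / k if k else b)) * f[k][l + 1]  # v_{k,l+1}
                out = 2.0                 # ubar + vbar: the site gains a particle
                if k > 0:
                    out += 1.0 + (b / l if l else b)       # u_{k,l}
                if l > 0:
                    out += 1.0 + (b / k if k else b)       # v_{k,l}
                f[k][l] = gain / out
    tri = [(k, l) for k in range(mstar + 1) for l in range(mstar + 1 - k)]
    norm = sum(f[k][l] for k, l in tri)
    return sum(k * f[k][l] for k, l in tri) / norm

rho_est = critical_density()      # below the extrapolated rho_c ~ 0.976
```

Increasing $m_\star$ and extrapolating as above recovers $\rho_c$.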
\section{Fluid phase}
For non zero values of $\mu$ and $\nu$ the system is driven away from criticality.
We begin by investigating exponentially decreasing solutions of the continuum limit
stationary equation~(\ref{limcont}).
We then study those of the discrete stationary equations.
We finally determine the region of existence of such solutions.
\subsection{Stationary solutions at exponential order: continuum limit}
If we content ourselves with the knowledge of the stationary solutions
at exponential order, that is, retaining their exponential dependence only
and discarding any prefactor, then the terms containing $b$ can be neglected.
Equation~(\ref{limcont}) now reads
\begin{equation}
\frac{\partial^2 f_{k,l}}{\partial k^2}+\frac{\partial^2 f_{k,l}}{\partial l^2}
-\mu\frac{\partial f_{k,l}}{\partial k}
-\nu\frac{\partial f_{k,l}}{\partial l}
=0.
\label{massive}
\end{equation}
Setting
$f_{k,l}=\e^{\frac{1}{2}(\mu\,k+\nu\,l)}\,h_{k,l}$
in (\ref{massive}) yields
\[
\frac{\partial^2 h_{k,l}}{\partial k^2}+\frac{\partial^2 h_{k,l}}{\partial l^2}
-\frac{\mu^2+\nu^2}{4} h_{k,l}
=0.
\]
Changing to polar coordinates
and setting $h(r,\t)=u(r)v(\t)$, we obtain, after rescaling $r$ by $\sqrt{\mu^2+\nu^2}/2$,
\begin{equation}
r^2 u''(r)+r u'(r)-(r^2+n^2)u(r)=0,
\end{equation}
where $n$ is to be determined, and
\[
v''(\t)+n^2v(\t)=0.
\]
Imposing $v(\t)=0$ for $\t=0$ and $\pi/2$ leads to $v(\t)=\sin 2\t$, hence
$n=2$.
The solution of the differential equation for $u(r)$ is the Bessel function
\[
u(r)=K_2\left(\sqrt{\mu^2+\nu^2}\ \frac{r}{2}\right).
\]
Finally, the solution, when $b\to0$, reads, up to a normalising constant,
\begin{equation}\label{b0}
f(r,\t)=
K_2\left(\sqrt{\mu^2+\nu^2}\ \frac{r}{2}\right)
\e^{\frac{r}{2}(\mu\cos\t+\nu\sin\t)}\,\sin 2\t.
\end{equation}
This solution encompasses all three regimes where
$\mu r\sim 1$, $\mu r\ll1$, or $\mu r\gg1$.
Simplified expressions are obtained
in the two latter cases:

$\bullet$ For small values of the argument, we have $K_2(x)\sim 1/x^2$.
We thus find
\[
f(r,\t)\approx {\rm const.}\ r^{-2} \sin 2\t, \qquad (\mu r\ll 1),
\]
which matches consistently with the critical solution~(\ref{beq0}) for $b\to0$.

$\bullet$ For large values of the argument, we have
$K_2(x)\approx \sqrt{\pi/2x}\,\e^{-x}$, hence we obtain the
asymptotic behaviour
\begin{equation}
f(r,\t)\approx {\rm const.}\ r^{-1/2}\,\e^{-r P(\t)}\sin 2\t, \qquad (\mu r\gg 1),
\label{asympt}
\end{equation}
where
\begin{equation}
P(\t)=\frac{1}{2}\left(\sqrt{\mu^2+\nu^2}-\mu\cos\t-\nu\sin\t\right).
\label{ptheta}
\end{equation}
For any values of $\mu$ and $\nu$ that are not simultaneously positive, $P(\t)$ is positive,
and $f$ is exponentially decaying, corresponding to a fluid phase.
When $\mu$ and $\nu$ are simultaneously positive,
$P(\t)$ vanishes at an angle $\t$ satisfying $\tan\t=\nu/\mu$.
For such a value of $\t$, the function $f(r,\t)\sim r^{-1/2}$ is not normalisable.
The whole region $\mu>0$ and $\nu>0$ is therefore
non physical.
\subsection{Stationary solutions at exponential order: discrete equations }
\label{elim}
In order to investigate exponentially decaying solutions
beyond the continuum limit, we consider again the discrete stationary equations.
As above, for $k$ and $l$ large, we can neglect terms containing $b$, thus
obtaining
\begin{equation}
f_{k+1,l}+f_{k,l+1}
+\u\,f_{k-1,l}+\v\,f_{k,l-1}
-(2+\u+\v)f_{k,l}=0.
\label{stat}
\end{equation}
Introducing the generating function $\hat f(x,y)=\sum f_{k,l}x^ky^l$,
we get from (\ref{stat})
\begin{equation}\label{eq:fhat}
D(x,y)\,\hat f(x,y)
=
A(x,y),
\end{equation}
where
\begin{equation}\label{eq:D}
D(x,y)=x^{-1}+y^{-1}-2+\u(x-1)+\v(y-1).
\end{equation}
The locus of singularities of $\hat f$ is thus given by $D(x,y)=0$.
The right-hand side, $A(x,y)$, comes from the contribution of the boundary terms
$\hat f(0,y)$ and $\hat f(x,0)$
\footnote{For the generic case $b>0$, $A(x,y)$ is finite on $D(x,y)$,
while if $b=0$, the singularities of $A(x,y)$ cancel those of $D(x,y)$
in such a way that the resulting expression for $\hat f(x,y)$ has just a simple
pole in both complex variables $x$ and $y$:
\[
\hat f(x,y)=\frac{(1-\u)(1-\v)}{(1-\u x)(1-\v y)},\qquad (b=0).
\]
}.
By inversion we have
\[
f_{k,l}=\oint\frac{\d x}{2\i\pi x}\frac{\d y}{2\i\pi y}\hat f(x,y)\,x^{-k}y^{-l}.
\]
At large $k$ and $l$, $f_{k,l}$ can be estimated by taking the saddle point of
this expression, yielding
\[
f_{k,l}\sim x^{-k}y^{-l}\equiv \e^{-rP(\t)},
\]
where
$x,y$ is a point on the curve $D(x,y)=0$, and $P(\t)=\cos \t\ln x+\sin \t \ln y$.
Extremising $P(\t)$ on this curve with respect to the variables $x$ and $y$,
i.e. the expression $P(\t)-\lambda D(x,y)$, where $\lambda$
is a Lagrange multiplier, leads, together with $D(x,y)=0$, to three equations,
which determine $\lambda$, $x$ and $y$, hence $P(\t)$.
Let us first check that this method leads to the expected result~(\ref{ptheta})
in the particular case of the
continuum limit.
Set $x=\e^s$, $y=\e^t$, $\u=\e^\mu$, $\v=\e^\nu$,
with $\mu$ and $\nu$ small.
The equation for $D(x,y)$ reads:
\[
s^2+t^2+\mu\,s+\nu\,t=0.
\]
Extremising $P(\t)-\lambda D(x,y)$ with respect to $s$ and $t$ yields
\begin{eqnarray*}
\lambda\cos \t-2s+\mu&=&0,\\
\lambda\sin \t-2t+\nu&=&0.
\end{eqnarray*}
These three equations lead, after some algebra, to eq.~(\ref{ptheta}).
The general case leads to lengthy expressions for $P(\t)$.
We give the results of this method for a particular example.
We choose $\u=1.89$, $\v=0.66$ in the stationary equations, which
we integrate by solving the linear system, as explained in section~\ref{discrete},
for increasing values of the cut-off $m_\star$.
The resulting values of the densities $\rho_1$ and $\rho_2$ are plotted in
figure~\ref{fa1}.
Clearly $\rho_2$ is larger than $\rho_c$\footnote{
The values of $\u$ and $\v$ were precisely chosen to serve this purpose.
We first integrated the master equation~(\ref{recgene}) numerically for
$b=3/2$, corresponding to $\rho_c\approx0.976$, and for
values of the densities $\rho_1=10$ and $\rho_2=1$.
The stationary values $\u\approx1.894$, $\v\approx0.661$ were thus obtained.
}.
Figure~\ref{fa2} depicts $\ln p_m$, as obtained
by the same method, together with the theoretical
prediction for the coefficient of the exponential decay:
\[
p_m=\int r\,\d r\d \theta\,\e^{-rP(\t)}\delta\left ( r-\frac{m}{\cos \t+\sin\t}\right)
\sim\e^{-m \frac{P(\t_0)}{\cos \t_0+\sin\t_0}}
\]
where $\t_0$ denotes the value of the angle such that the argument of the
exponential is minimum.
In the present case, $\t_0=0$, and $p_m\sim\e^{-0.0372\, m}$.
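For this example the extremisation at $\t_0=0$ can in fact be completed in closed form: the derivative with respect to $y$ gives $y=\v^{-1/2}$, and $D(x,y)=0$ then reduces to a quadratic equation for $x$, whose root larger than unity yields $P(0)=\ln x$. A short numerical sketch (ours):

```python
import math

def decay_coefficient(ubar, vbar):
    # Extremise P(0) = ln x on D(x, y) = 0: d/dy gives y = vbar**-0.5,
    # and D = 0 then reads ubar*x**2 - c*x + 1 = 0 with
    # c = 2 + ubar + vbar - 2*sqrt(vbar); keep the root x > 1.
    c = 2.0 + ubar + vbar - 2.0 * math.sqrt(vbar)
    x = (c + math.sqrt(c * c - 4.0 * ubar)) / (2.0 * ubar)
    return math.log(x)

P0 = decay_coefficient(1.89, 0.66)   # ~0.0373, matching exp(-0.0372 m)
```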
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.4\linewidth]{fig1_appendix.eps}
\includegraphics[angle=-90,width=.4\linewidth]{fig2_appendix.eps}
\caption{\small
Densities as functions of $m_\star$ ($\u=1.89$, $\v=0.66$, $b=3/2$).
The dashed line corresponds to $\rho_c$.
}
\label{fa1}
\end{center}
\end{figure}
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.7\linewidth]{fig_fm.eps}
\caption{\small
Distribution of the total single-site occupation $p_m$ as a function of $m$ ($m\le m_\star=140$).
Line: theoretical prediction for the coefficient of the exponential decay.
($\u=1.89$, $\v=0.66$, $b=3/2$.)
}
\label{fa2}
\end{center}
\end{figure}
\subsection{Domain of existence of the fluid phase in the $\u-\v$ plane}
The domain of existence of the homogeneous fluid
solution in the $\u-\v$ plane is shown in figure \ref{f7}.
It is the interior of the domain delimited by the two symmetric curves.
These curves are obtained as follows.
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.7\linewidth]{domain.eps}
\caption{\small
Domain of existence of the homogeneous fluid solution ($b=3/2$).
The lower right wing corresponds to $\rho_1=\infty$, $\rho_2$ finite,
and symmetrically for the left upper wing.
}
\label{f7}
\end{center}
\end{figure}
Consider the situation where one of the densities, $\rho_1$, say, is infinite.
Then $v_{k,l}$ is to be taken equal to 1,
and it is intuitively expected that the two species decouple.
Hence $f_l=(1-\v)\v^l$. It follows that
\[
\rho_2=\frac{\v}{1-\v},
\]
and
\begin{equation}\label{frontiere}
\u=(1+b)\,f_0+\sum_{l=1}^\infty f_l\left(1+\frac{b}{l}\right)
=1+b(1-\v)(1-\ln(1-\v)),
\end{equation}
The two former equations give the equation of the boundary of
the domain of existence of fluid solutions, for $\rho_2$ varying.
The second part of the curve is obtained symmetrically by doing the
same analysis with $\rho_2$ infinite.
We note that the tangents to the two curves at the
symmetric point $\u=\v=1$ are parallel to the axes.
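The sums entering eq.~(\ref{frontiere}) are elementary (the $l=0$ term carries the boundary rate $1+b$); a quick numerical confirmation (our sketch):

```python
import math

def ubar_series(vbar, b, lmax=400):
    # (1+b) f_0 + sum_{l>=1} f_l (1 + b/l), with f_l = (1-vbar) vbar**l
    s = (1.0 + b) * (1.0 - vbar)
    for l in range(1, lmax + 1):
        s += (1.0 - vbar) * vbar ** l * (1.0 + b / l)
    return s

def ubar_closed(vbar, b):
    # closed form: 1 + b (1-vbar) (1 - ln(1-vbar))
    return 1.0 + b * (1.0 - vbar) * (1.0 - math.log(1.0 - vbar))

lhs = ubar_series(0.66, 1.5)
rhs = ubar_closed(0.66, 1.5)      # the two agree to truncation accuracy
```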
\subsection{Limits of stability of the fluid phase}
Finally the question is how the domain of existence of the fluid region defined
above is mapped onto the density plane.
As we now show, this domain
maps onto a region of the density plane complementary to a wedge
with tip located at the critical point
$\rho_1=\rho_2=\rho_c$,
as depicted in figure~\ref{f2}.
All the analysis relies on how the neighbourhood of the point $\u=\v=1$
is mapped onto the density plane.
We begin with a linear analysis of the mapping between the critical points ($\u=\v=1$)
and ($\rho_1=\rho_2=\rho_c$).
Consider a small segment, with one of the two ends located at $\u=\v=1$,
and with the other one at a given angle with the $\u$ axis.
Let $t$ be the tangent of this angle.
Because of the symmetry between the two species,
and since the transformation of the local derivatives around
the critical point is linear,
the slope of the transformed segment in the density plane is given by
\begin{equation}\label{tt}
T=\frac{1+c t}{c+t}.
\end{equation}
Thus, if $t=\pm1$, then $T=\pm1$.
The constant $c$ is determined numerically by taking $t=\infty$ (segment parallel to the $\v$ axis).
The limiting slopes of the tangents to the wedge at the tip
follow from (\ref{tt}).
For the lower edge it is given by $T(t=0)$, i.e., $1/c$.
For $b=3/2$, we find $T(t=0)\approx0.48$, with $m_\star=80$.
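The identities satisfied by the mapping (\ref{tt}) can be checked directly. In the sketch below the value of $c$ is inferred from the quoted $T(t=0)\approx0.48$ and is therefore only illustrative:

```python
def T(t, c):
    """Slope of the transformed segment in the density plane, eq. (tt)."""
    return (1.0 + c * t) / (c + t)

c = 1.0 / 0.48        # illustrative: from T(0) = 1/c ~ 0.48 (b = 3/2)

# The symmetric directions t = +1 and t = -1 are fixed points of the map.
print(T(1.0, c), T(-1.0, c))   # -> 1.0 -1.0

# Tangents to the wedge at the tip: lower edge T(0) = 1/c; upper edge
# (segment parallel to the v axis, t -> infinity) tends to c.
print(T(0.0, c))               # -> ~0.48
print(T(1e9, c))               # -> ~c ~ 2.083
```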
We then investigate how points
located
on the boundary curve~(\ref{frontiere}), at increasing distances
from the critical point, $\u=\v=1$, are mapped onto the density plane.
For successive values of the cut-off $m_\star$ the images
of these points are expected to converge to a single curve.
The lower edge of the wedge depicted in figure~\ref{f2}
is the curve with $m_\star=160$.
The other edge is obtained by symmetry.
These edges represent the limits of stability of the fluid phase.
\section{Condensation and phase diagram}
\subsection{Equal densities}
The case of equal densities, $\rho_1=\rho_2$, is similar to the
situation encountered for a single-species ZRP~\cite{evans, lux}.
From the analysis of the previous sections, as well as
from numerical integrations of the temporal equations~(\ref{recgene}),
or of their stationary form, the following picture is obtained.
The region $\u=\v<1$ maps onto the fluid phase $\rho_1=\rho_2<\rho_c$,
corresponding to exponential solutions of eq.~(\ref{limcont}).
The critical point corresponds to $\u=\v=1$, i.e., $\rho_1=\rho_2=\rho_c$.
Condensation occurs for
$a>3$, i.e. $b>2/3$ (see eq.~(\ref{beq1})), and
$\rho_1=\rho_2>\rho_c$.
A condensate appears sustaining the excess density
with respect to the critical fluid.
The region $\u=\v>1$ is unphysical.\\
\subsection{Nonequal densities: Existence of a line of transition}
The limits of stability of the condensed phase are given by the
line $\rho_2=\rho_c$, $\rho_1>\rho_c$, and the symmetric line with respect to
the bisectrix.
There is numerical evidence for the existence of a transition line
between the fluid phase and the condensed phase, lying in between
the two corresponding stability lines of these phases.
The transition is discontinuous on this coexistence line.
There are no coexisting solutions accessible dynamically
on both sides of the line.
The transition between the fluid and condensed phases
is obtained by Monte Carlo simulations of the model,
using the following procedure.
The density $\rho_1$ is fixed to a given value greater than $\rho_c$, and $\rho_2$ increases from a value less than $\rho_c$.
Crossing the stability line of the condensed phase, i.e. for $\rho_2>\rho_c$,
one might expect condensation to occur.
Instead, the only accessible phase turns out to be the fluid one (see an example of
fluid solution in section~\ref{elim}).
Then, increasing the density $\rho_2$, and crossing the transition line,
there is a sudden phase transition from an imbalanced fluid where both species have densities larger than
the critical density, to a neutral critical fluid and an imbalanced condensate.
Beyond this line, the only accessible solutions are condensed,
with $\u=\v=1$,
while fluid solutions to eq.~(\ref{recgene}) do exist, as long as $\rho_2$ has
not reached the edge of the wedge.
A surprising consequence of this phase diagram is the occurrence of a
reentrance phenomenon: increasing $\rho_2$ beyond
the symmetric transition line (with respect to the bisectrix) the system becomes fluid again.
\begin{figure}[htb]
\begin{center}
\includegraphics[angle=-90,width=.7\linewidth]{diag_phase.eps}
\caption{\small
Phase diagram in the density plane.
Dot-dashed line with circles: line of transition points
(the symmetric line is not shown).
Dashed lines: limits of stability of the fluid phase.
Dotted lines: limits of stability of the condensed phase.
Straight lines at the tip (critical point) are the local tangents, computed as
explained in the text.
($b=3/2$, $\rho_c\approx0.976$.)}
\label{f2}
\end{center}
\end{figure}
We now describe more precisely the method for the determination of the location
of the transition line.
We fix the value of $\rho_1$,
for example $\rho_1=10$, and let $\rho_2$ increase
from a value less than $\rho_c$.
Then $\u$ and $\v$ are measured.
Focussing on $\u$,
we observe that, when $\rho_2$ crosses some value, $\rho_2\approx1.8$, there is
a sudden discontinuity in $\u$, dropping from $\u\approx 1.4$ down to
$\u\approx1$.
More precisely, for a system of given size, say $M=40, 60, \ldots$,
the system is first run to stationarity.
Then $n$ successive runs of duration $\Delta t$ less than the flipping time
$\tau$ between the fluid and the condensed phases, and such that $n \Delta t\gg \tau$
are performed.
This flipping time is measured to be exponentially increasing with the system size
(it is approximately doubled
when $M$ is incremented by $20$).
The histogram of the values of $\u$ is a bimodal distribution.
The criterion for the location of the transition point $(\rho_1^*,\rho_2^*)$, when $\rho_2$ varies,
consists in choosing the value of $\rho_2$ such that the weights of the two maxima are equal.
Thus for $M=40, 60, 80$, we have $\rho_{2}^*\approx1.86, 1.82, 1.8$,
respectively.
We proceed in the same fashion to obtain the transition points visible
on figure~\ref{f2}, for $\rho_1=4, 6, 8$.
As $\rho_1$ decreases down to $\rho_c$, the discontinuity in $\u$ is smaller
and the determination of the transition point is harder since
it involves larger system sizes.
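The equal-weight criterion used above can be stated compactly: given a long series of measured values of $\u$, the bimodal histogram is split at a cut lying between the two peaks, and the transition point is where the weights of the two modes balance. The following Python sketch illustrates the criterion on synthetic Gaussian data standing in for the Monte Carlo measurements; all numbers are purely illustrative:

```python
import random

def phase_weights(samples, cut):
    """Fraction of measurements on each side of the cut separating
    the two modes of the bimodal distribution of u."""
    low = sum(1 for u in samples if u < cut)
    return low / len(samples), 1.0 - low / len(samples)

random.seed(0)
# Synthetic stand-in: condensed phase u ~ 1.0, fluid phase u ~ 1.4,
# with the system flipping between the two during the measurement.
samples = ([random.gauss(1.0, 0.03) for _ in range(5000)]
           + [random.gauss(1.4, 0.03) for _ in range(5000)])

w_cond, w_fluid = phase_weights(samples, cut=1.2)
print(f"condensed weight = {w_cond:.3f}, fluid weight = {w_fluid:.3f}")
# At the transition point rho_2 = rho_2* the two weights are equal
# (here they are equal by construction of the synthetic sample).
```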
\section{Final remarks}
Let us first summarize the main outcomes of the present work.
For the process considered, a non-integrable two-species ZRP, we are
able to obtain a number of analytical results from the study of the model
on the complete graph. In particular
the critical phase is well understood. The coefficient of the exponential decay
of the fluid solutions is analytically predicted.
Finally we can predict the phase diagram of the system by a joint
analytical and numerical investigation.
A salient feature of the phase diagram is the presence
of a single critical point.
A more thorough analysis of the first-order phase transition
taking place between the fluid phase
and the condensed phase would be interesting, though probably hard to achieve.
The existence of such a transition
is not expected in integrable
two-species ZRP~\cite{gross,hanney,tom,stefan}.
Beyond the present work,
a natural question to ask is whether the phenomena observed for the fully
connected geometry survive
in the one-dimensional geometry.
Preliminary investigations
using the analysis performed on the complete graph as a template
indicate similar behaviour.
\subsection*{Acknowledgements.}
It is a pleasure to thank J-M Luck for invaluable discussions.
Thanks are also due to EDG Cohen, M Evans, S Grosskinsky, T Hanney,
E Levine and
D Mukamel for helpful discussions.
This work was partially carried out while CG was a Meyerhoff
Visiting Professor at the Weizmann Institute. Support of the
Albert Einstein Minerva Center for Theoretical Physics and the
Israel Science Foundation (ISF) is gratefully acknowledged.
\section{Introduction: Shape variations in the loop space and renormalization-group evolution}
Generalized loop space, associated to some base manifold, consists of the distributional version\footnote{In the sense that one needs to integrate over the integration contour or path ($\{\Gamma_i\}$) in the base manifold.} of closed paths, with an equivalence relation introduced by the holonomy of the gauge connection at the base point of the closed paths, and where taking the trace provides gauge invariance, see, e.g., [\refcite{General_LS}] and references therein.
This holonomy is also referred to as the Wilson loop, Eq. (\ref{eq:WL_definition}), where the gauge fields ${\cal A}_\mu$ belong to a certain representation of the non-Abelian gauge group $SU(N_c)$ and where the closed paths $\{\Gamma_i\}$ are elements of generalized loop space. Taking vacuum expectation values over a set of Wilson loops leads to Wilson loop variables, Eq. (\ref{eq:wl_def}), which are an alternate way of representing a gauge theory [\refcite{General_LS}] under certain constraints known as the Mandelstam constraints, Eq. (\ref{eq : mandelstam constraints}):
\begin{eqnarray}
& & {\cal W}_n [\Gamma_1, ... \Gamma_n]
=
\Big \langle 0 \Big| {\cal T} \frac{1}{N_c} {\rm Tr}\ \Phi (\Gamma_1)\cdot \cdot \cdot \frac{1}{N_c}{\rm Tr}\ \Phi (\Gamma_n) \Big| 0 \Big\rangle \ , \label{eq:wl_def} \\
& & \Phi (\Gamma_i)
=
{\cal P} \ \exp\left[ig \oint_{\Gamma_i} \ dz^\mu {\cal A}_{\mu} (z) \right] \ .
\label{eq:WL_definition}
\end{eqnarray}
These variables are now {\it functionals on loops}, and exhibit non-trivial behaviour in the vicinity of path discontinuities, cusps or self-intersections. Moreover, the renormalization and conformal properties of Wilson loops possessing light-like segments (or lying completely on the light-cone) are known to be more intricate than their off-light-cone counterparts. Therefore, the study of the
geometrical and dynamical properties of loop space, which can include, in general, cusped light-like Wilson exponentials, will provide us with fundamental information on the renormalization group behaviour and evolution of the various gauge-invariant quantum correlation functions [\refcite{Loop_Space,WL_RG}].
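The gauge invariance provided by the trace in Eq.~(\ref{eq:wl_def}) can be demonstrated numerically on a discretized loop: the path-ordered exponential of Eq.~(\ref{eq:WL_definition}) becomes an ordered product of link matrices, and a gauge rotation $g(x)$ at the lattice sites telescopes around the closed path, leaving $\frac{1}{N_c}{\rm Tr}\,\Phi(\Gamma)$ unchanged. A minimal sketch for $SU(2)$, with random links standing in for the discretized factors ${\rm e}^{ig{\cal A}\cdot dz}$ (all values illustrative):

```python
import numpy as np

# Pauli matrices
SIGMA = [np.array([[0, 1], [1, 0]], dtype=complex),
         np.array([[0, -1j], [1j, 0]], dtype=complex),
         np.array([[1, 0], [0, -1]], dtype=complex)]

def random_su2(rng):
    """Random SU(2) matrix U = cos(t) 1 + i sin(t) n.sigma."""
    n = rng.normal(size=3)
    n /= np.linalg.norm(n)
    t = rng.uniform(0, 2 * np.pi)
    return (np.cos(t) * np.eye(2)
            + 1j * np.sin(t) * sum(ni * si for ni, si in zip(n, SIGMA)))

def holonomy(links):
    """Path-ordered product of the links around the closed loop."""
    phi = np.eye(2, dtype=complex)
    for u in links:
        phi = phi @ u
    return phi

rng = np.random.default_rng(1)
n_links = 12
links = [random_su2(rng) for _ in range(n_links)]
g = [random_su2(rng) for _ in range(n_links)]   # gauge rotation at each site

# Gauge-transformed links: U'_i = g_i U_i g_{i+1}^dagger (periodic sites),
# so the product telescopes to g_0 Phi g_0^dagger around the closed loop.
gauged = [g[i] @ links[i] @ g[(i + 1) % n_links].conj().T
          for i in range(n_links)]

w  = np.trace(holonomy(links)) / 2
wg = np.trace(holonomy(gauged)) / 2
print(abs(w - wg))    # ~ 0 (machine precision): traced holonomy is invariant
```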
Generalized loop space can be shown to be a topological group that has an (infinite dimensional) Lie algebra
associated with it, allowing the definition of different differential operators such as the path- and area-derivative, Eqs. (\ref{eq:area_derivative},\ref{eq:path_derivative}):
\begin{align}
\frac{\delta}{\delta \sigma_{\mu\nu} (x)} \ \Phi (\Gamma)
&\equiv
\lim_{|\delta \sigma_{\mu\nu} (x)| \to 0} \ \frac{ \Phi (\Gamma\delta \Gamma) - \Phi (\Gamma) } {|\delta \sigma_{\mu\nu} (x)|} \ ,
\label{eq:area_derivative}\\
\partial_\mu \Phi(\Gamma)
&=
\lim_{|\delta x_{\mu}| \to 0} \frac{\Phi(\delta x_\mu^{-1}\Gamma\delta x_\mu) - \Phi(\Gamma)}{|\delta x_{\mu}|} \ ,
\label{eq:path_derivative}
\end{align}
used by Makeenko and Migdal to derive their famous non-perturbative equations [\refcite{MM_WL,WL_Renorm}]:
\begin{equation}
\partial_x^\nu \ \frac{\delta}{\delta \sigma_{\mu\nu} (x)} \ W_1[\Gamma]
=
N_c g^2 \ \oint_{\Gamma} \ dz^\mu \ \delta^{(4)} (x - z) W_2[\Gamma_{xz} \Gamma_{zx}] \ ,
\label{eq:MM_general}
\end{equation}
supplemented with the Mandelstam constraints
\begin{equation}
\sum a_i { {\cal W}_{n_i} [\Gamma_{1} ... \Gamma_{n_i}] } = 0 \ .
\label{eq : mandelstam constraints}
\end{equation}
However, applying the MM equations directly in the form (\ref{eq:MM_general}) requires sufficiently smooth paths off the light-cone, which excludes, e.g., loops with cusps and loops (partially) lying on the light-cone. Such paths occur, e.g., in the investigation of the duality between the $n-$gluon scattering amplitudes and $n-$polygonal Wilson loops in ${\cal N} = 4$ super-Yang-Mills theory, Refs. [\refcite{WL_CFT}], or in the configurations arising in the soft parts of $3D$-parton densities (see below).
In recent work, Refs. [\refcite{ChMVdV_2012}], we developed an approach which allows one to apply the Schwinger method [\refcite{Schwinger51}], as used in the derivation of the Makeenko-Migdal equations, to certain classes of cusped loops (partially) lying on the light-like rays. To this end, we defined a new differential operator, Eq. (\ref{eq:area_log}), which might be related to the Fr{\'e}chet derivative [\refcite{General_LS}], where loop variations are generated by an infinite number of area-derivative-like variations.
This approach led us to propose an evolution equation valid for the planar light-like Wilson rectangular loops:
\begin{figure}[ht]
$$\includegraphics[angle=90,width=0.6\textwidth]{area_diff}$$
\caption{Shape variations allowed for a light-cone Wilson rectangle: we consider only those variations of the integration path which conserve the angles between the sides.}
\end{figure}
\begin{equation}
\mu \frac{d }{d \mu } \ {\left( \frac{d}{d \ln \sigma} \ \ln \ {\cal W} [\Gamma] \right) }
=
- \sum { \Gamma_{\rm cusp} } \ ,
\label{eq:mod_schwinger}
\end{equation}
where the area differentials are defined in the transverse plane at $\bm z_\perp = 0$:
\begin{equation}
{ d \sigma^{+-} }
=
{ N^+ d N^- } \ , \
{ d \sigma^{-+} }
=
- { N^- d N^+ } \ ,
\label{eq:delta_area}
\end{equation}
and the only allowed shape variations are presented in Fig. 1. The area logarithmic derivation operator then reads
\begin{eqnarray}
\frac{d}{d \ln \sigma}
\equiv
\sigma^{\mu\nu} \frac{d}{d \sigma^{\mu\nu}}
=
\sigma^{+-} \frac{d}{d \sigma^{+-}}
+
\sigma^{-+} \frac{d}{d \sigma^{-+}} \ .
\label{eq:area_log}
\end{eqnarray}
The r.h.s. of Eq. (\ref{eq:mod_schwinger}) is defined by the sum of the light-cone cusp anomalous dimensions, whose number is given by the number of cusp-like discontinuities in the slope of the integration path. The sum of the cusp anomalous dimensions [\refcite{CAD_universal}] in the r.h.s. of Eq. (\ref{eq:mod_schwinger}) can be interpreted as a fundamental ingredient of an effective quantum action for the Wilson loops with simple obstructions.
Hence, we reveal a connection between the geometrical properties (given in terms of the area/shape infinitesimal variations and corresponding differential equations) of the generalized loop space, and the renormalization-group behaviour of the Wilson polygons with conserved angles between the light-like straight lines (that is to say, classical conformal invariance is assumed). In other words, the dynamics in loop space is governed by the discontinuities of the path derivatives. These obstructions play the role of the {sources} within the Schwinger field-theoretical picture. We have shown, therefore, that the Schwinger quantum dynamical principle [\refcite{Schwinger51}] is helpful in the investigation of certain classes of elements of the loop space, namely, the planar cusped Wilson polygons. It is worth noting that Eq. (\ref{eq:mod_schwinger}) suggests, in fact, a duality relation between the rapidity evolution of certain correlation functions and the equations of motion in the generalised loop space. Rapidities associated with the light-like vectors $N^\pm$ are, of course, infinite, and are given by
\begin{equation}
Y^\pm
=
\frac{1}{2} \ \ln \frac{(N^\pm)^+}{(N^\pm)^-}
=
\lim_{\eta^\pm \to 0} \pm \frac{1}{2} \ \ln \frac{N^+N^-}{\eta^\pm} \ ,
\label{eq:rapidity}
\end{equation}
where $\eta^\pm$ is a cutoff and where we take into account the fact that plus- and minus- components of a vector $a_\mu$ are defined as the scalar products $a^\pm = (a \cdot N^\mp)$.
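The logarithmic divergence in Eq.~(\ref{eq:rapidity}) is easy to exhibit numerically. In the sketch below (normalization $N^+\cdot N^-=2$ and all numbers are illustrative) the light-like direction $N^+$ is tilted off the light-cone by a small admixture $\eta N^-$, and the regularized rapidity grows like $-\frac{1}{2}\ln\eta$:

```python
import numpy as np

G  = np.diag([1.0, -1.0, -1.0, -1.0])       # Minkowski metric (+,-,-,-)
Np = np.array([1.0, 0.0, 0.0,  1.0])        # light-like N^+
Nm = np.array([1.0, 0.0, 0.0, -1.0])        # light-like N^-

def dot(a, b):
    return a @ G @ b

def rapidity(a):
    """Y = (1/2) ln(a^+/a^-), with a^+ = a.N^-, a^- = a.N^+."""
    return 0.5 * np.log(dot(a, Nm) / dot(a, Np))

for eta in (1e-2, 1e-4, 1e-8):
    v = Np + eta * Nm                       # regularized (tilted) direction
    print(f"eta = {eta:.0e}:  Y = {rapidity(v):.3f}")
# Y = -(1/2) ln(eta): the rapidity diverges as the cutoff is removed.
```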
Eq. (\ref{eq:rapidity}) suggests, clearly, that
\begin{equation}
\frac{d }{d \ln \sigma} \sim \frac{d }{d Y} \ ,
\end{equation}
that is, the rapidity evolution equation of a certain correlation function can be set dual to the area variation law of a properly chosen class of elements of the generalized loop space.
In particular, we will argue below that Eq. (\ref{eq:mod_schwinger}) can be used to study the evolution of the $3D$-parton distribution functions.
To this end, we will focus on the behaviour of parton densities in the large Bjorken-$x_B$ approximation.
\section{Large-$x_B$ regime at modern experimental facilities}
Describing the three-dimensional structure of the nucleon is one of the hot topics of a number of current and planned experimental and theoretical projects (see [\refcite{INT,JLab12GeV,TMD_Pheno}] and references therein).
In the infinite-momentum frame (IMF), the nucleon can be treated as a complicated quantum many-body system of point-like constituents (partons). These partons can be probed in high-energy scattering processes with a spatial resolution equal to the inverse momentum transfer $Q^{-1}$. The energy transfer is then related to the Bjorken variable $x_B$, which also defines the fraction of the nucleon's longitudinal momentum carried by the struck parton.
The region of $0.1 < x_B <1$ features the predominance of the three valence quarks inside the nucleon, while the effects of sea quarks become significant in the region of $0.01 < x_B < 0.1$, and gluons are known to dominate at even smaller $x_B$.
Past studies of fully inclusive deep inelastic scattering (DIS) heavily relied on the IMF framework, using factorization to separate out IR and collinear singularities, the latter being collected from the hard (perturbatively calculable) part into the parton distribution functions (PDFs). In this case, it suffices to treat the struck parton as collinear with the parent nucleon. Deep inelastic electron-nucleon scattering experiments deliver, therefore, the information about the one-dimensional (longitudinal) structure of the nucleon. In semi-inclusive processes (such as Drell-Yan, SIDIS, etc.), however, it is necessary to additionally incorporate the intrinsic transverse degrees of freedom of the struck partons, and in specific processes one also has to add possible spin-correlations to the PDFs. This information is delivered by the transverse momentum distributions (TMDs), providing a full three-dimensional picture of the nucleon structure.
A promising setup to investigate the behaviour of $3D$ parton densities at large $x_B$ is the new energy upgrade from 6 to 12 GeV of CEBAF at Jefferson Lab, which is currently under way [\refcite{JLab12GeV}]. CEBAF is a fixed-target experiment, so one has $x_B=Q^2 / (2M\nu)$, implying that this upgrade will allow us to probe the region $x_B$ from $\sim 0.1$ up to very close to 1; in other words, one will probe mainly valence quarks. This is ideal to study the evolution of density functions in the so far unexplored large-$x_B$ region. On the other hand, smaller $x_B$ (in the same region of values of $Q^2$) can be reached by the planned EIC, which will target sea quarks and gluons as well [\refcite{INT}]. The combined kinematic range of both experiments will be of the order of $10^{-3} < x_B < 1$ and $2$ GeV$^2 < Q^2 < 100$ GeV$^2$. This coverage is unique in its large-$x_B$ component, which will allow us to make precision tests of TMD factorization and evolution. For the extended original discussion and figures, see [\refcite{JLab12GeV}] and Refs. therein.
\section{Large-$x_B$ factorization and evolution of transverse-distance dependent parton densities}
Three-dimensional parton densities in the momentum representation (TMDs) are intensively studied within different frameworks, Refs. [\refcite{TMD_CSS,TMD_origin,CS_all,New_TMD_Col,SCET_TMD,TMD_Pheno}]. It is beyond the scope of this paper to compare advantages and drawbacks of those approaches. Instead, we will just take it for granted that some reasonable TMD-factorization scheme can be formulated, and that an appropriate operator definition of the TMD quark distribution function exists that has the same quantum numbers as the correlation function below, Eq. (\ref{eq:TDD_general}).
In the present discussion we will be particularly concerned with the $3D$-correlation functions in the large-$x_B$ limit: this regime is apparently easier to analyze within a simple factorization scheme and perfectly fits the Jefferson Lab 12 GeV kinematics. Furthermore, we will argue that the large-$x_B$ approximation is an ideal natural laboratory for the study of applications of the generalized loop space formalism in hadronic and nuclear physics.
To this end, let us consider the following {\it transverse-distance dependent} (TDD) correlation function
\begin{eqnarray}
& & {\cal F} \left(x, {\bm b}_\perp; P^+, n^-, \mu^2 \right)
= \int\! d^2 k_\perp \ {\rm e}^{-ik_\perp \cdot b_\perp} {\cal F} \left(x, {\bm k}_\perp; P^+, n^-, \mu^2 \right) =
\label{eq:TDD_general} \nonumber \\
& & \int\! \frac{d z^-}{2\pi} \left\langle
P \ | \bar \psi (z^-, \bm{b}_\perp)
{\cal W}_{n^-}^\dagger[z^-, \bm{b}_\perp;
\infty^-, \bm{b}_\perp] {\cal W}_{\bm l}^\dagger[\infty^-, {\bm b}_\perp;
\infty^-, {\infty}_\perp] \right. \\
&& \left.
\times
\gamma^+ {\cal W}_{\bm l}[\infty^-, {\infty}_\perp;
\infty^-, \bm{0}_\perp]
{\cal W}_{n^-}[\infty^-, \bm{0}_\perp; 0^-,\bm{0}_\perp]
\psi (0^-,\bm{0}_\perp) | \ P
\right\rangle \nonumber \ ,
\end{eqnarray}
which is supposed to deliver the information about the quark distribution in the longitudinal one-dimensional momentum space and in the two-dimensional impact-parameter coordinate space.
Generic semi-infinite Wilson lines evaluated along a certain four-vector $w_\mu$ are defined as
\begin{equation}
\label{eq:SIWL}
{\cal W}_w[\infty ; z]
\equiv {}
{\cal P} \exp \left[
- i g \int_0^\infty d\tau \ w_{\mu} \
{\cal A}^{\mu} (z + w \tau)
\right] \ ,
\end{equation}
where, in the cases under consideration, the vector $w_\mu$ can be either longitudinal $w_\mu = (w_L, \bm 0_\perp)$, or transverse $w_\mu = (0_L, {\bm l}_\perp)$.
The TDD arises, by definition, as a result of the partial Fourier transform of the standard {\it transverse-momentum dependent} correlator ${\cal F} \left(x, {\bm k}_\perp; P^+, n^-, \mu^2 \right)$.
The factorization and evolution of the gauge-invariant collinear PDFs in the large-$x_B$ regime has been studied in Ref. [\refcite{LargeX_KM}]. We propose to generalize this approach to the $3D$-PDF, Eq. (\ref{eq:TDD_general}).
The large-$x_B$ regime suggests the following assumptions (for a detailed discussion, see Refs. [\refcite{LargeX_KM,CMTV_LargeX}]):
\begin{itemize}
\item In the IMF, the struck quark acquires ``almost all'' the momentum of the nucleon, that is: $k_\mu \approx P_\mu$. Provided that the transverse component of the nucleon momentum is equal to zero, the transverse momentum of the quark $\bm k_\perp$ is gained by the gluon interactions;
\item A very fast moving quark with momentum $k_\mu$ can be considered as a classical particle with a (dimensionless) velocity parallel to the nucleon momentum $P$, so that the quark fields are replaced by
$$
\psi (0)
=
{\cal W}_P [\infty; 0] \ \Psi_{\rm in-jet} (0)
\ , \ \bar \psi (z^-, \bm z_\perp)
=
\bar \Psi_{\rm in-jet} (z) \ {\cal W}_P^\dag [z ; \infty] \ ,
$$
where the fields $\bar \Psi_{\rm in-jet}, \Psi_{\rm in-jet}$ represent the incoming-collinear jets in the initial and final states [\refcite{Eikonal}];
\item Provided that ``almost all'' momentum of the nucleon is carried by the struck quark, real radiation can only be soft: $q_\mu \sim (1- x) P_\mu$;
\item Virtual gluons can be soft or collinear, collinear gluons can only be virtual, and quark radiation is suppressed at leading twist;
\item Rapidity singularities stem only from the soft contributions: they are known to occur in the soft region, at small gluon momentum $q^+ \to 0$. In other words, rapidity divergences originate from the minus-infinite rapidity region, where gluons travel along the direction of the outgoing jet, not along the incoming-collinear one;
\item Real contributions are UV-finite (in contrast to the integrated PDFs), but can contain rapidity singularities and a non-trivial $x_B$- and $\bm b_\perp$-dependence.
\end{itemize}
These assumptions imply the following large-$x_B$ factorization formula:
\begin{equation}
{\cal F} \left(x, {\bm b}_\perp; P^+, n^-, \mu^2 \right)
=
{\cal H} (\mu, P^2) \times {\Phi} (x, {\bm b}_\perp; P^+, n^-, \mu^2 ) \ ,
\label{eq:LargeX_factor}
\end{equation}
where the contribution of incoming-collinear partons is summed up into the $x_B$-independent function, while
the soft function $\Phi$ is given by\footnote{For the sake of simplicity, we work in covariant gauges, so that the transverse Wilson lines at infinity can be ignored.}
\begin{eqnarray}
& & {\Phi} (x, {\bm b}_\perp; P^+, n^-, \mu^2 )
= P^+ \int\!dz^- \ {\rm e}^{-i (1-x) P^+ z^-} \nonumber \\
& & \times \langle 0 | \ {\cal W}_P^\dag [z ; - \infty] {\cal W}_{n^-}^\dag[z; \infty] {\cal W}_{n^-} [\infty ; 0] {\cal W}_P [0; \infty] \ | 0 \rangle \ ,
\label{eq:soft_LargeX}
\end{eqnarray}
with two kinds of Wilson lines: incoming-collinear (non-light-like, $P^2 \neq 0$) ${\cal W}_P$, and outgoing-collinear ($(n^-)^2 = 0 $), ${\cal W}_{n^-}$.
The rapidity and renormalization-group evolution equations are:
\begin{eqnarray}
& & \mu \frac{d}{d\mu} \ln {\cal F} \left(x, {\bm b}_\perp; P^+, n^-, \mu^2 \right)
= \mu \frac{d}{d\mu} \ln {\cal H} (\mu^2) + \mu \frac{d}{d\mu} \ln {\Phi} (x, {\bm b}_\perp; P^+, \mu^2)
\label{eq:TDD_evolution_1}
\ , \\
& & P^+ \frac{d}{d P^+} \ln {\cal F} \left(x, {\bm b}_\perp; P^+, n^-, \mu^2 \right)
= P^+ \frac{d}{d P^+} \ln {\Phi} (x, {\bm b}_\perp; P^+, \mu^2) \ .
\label{eq:TDD_evolution_2}
\end{eqnarray}
Note that the rapidity is introduced via $\ln P^+$ with proper regularization [\refcite{Li_WL}]. The r.h.s. of Eq. (\ref{eq:TDD_evolution_1}) is, in fact, $\bm{b}_\perp$-independent and contains only a single-log dependence on the rapidity [\refcite{CS_all}]. Therefore, the r.h.s. of Eq. (\ref{eq:TDD_evolution_2}) corresponds to the Collins-Soper-Sterman rapidity-independent kernel ${\cal K}_{\rm CSS}$. At this point, we are in a position to make use of the evolution equation (\ref{eq:mod_schwinger}). To this end, we emphasize that the soft function ${\cal F}$ is a Fourier transform of an element of the loop space: the Wilson loop evaluated along the path, defined in Eq. (\ref{eq:soft_LargeX}). This fact enables us to consider the shape variations of this path, which are generated by the infinitesimal variations of the rapidity variable $\ln P^+$. The corresponding differential operator reads
$$
\frac{d}{d \ln \sigma} \sim P^+ \frac{d}{d P^+} \ ,
$$
given that $d P^+ = (dP \cdot n^-)$. Therefore:
\begin{equation}
\mu \frac{d}{d\mu} \ \left(P^+ \frac{d}{d P^+} \ln {\cal F} \right)
= \mu \frac{d}{d\mu} \ \left( P^+ \frac{d}{d P^+} \ln {\Phi} \right)
=
- \sum_{\rm TDD} \Gamma_{\rm cusp} (\alpha_s )
=
\mu \frac{d}{d\mu}
{\cal K}_{\rm CSS}(\alpha_s ) \ .
\label{eq:CSS}
\end{equation}
Equations (\ref{eq:TDD_evolution_1}-\ref{eq:CSS}) can be straightforwardly integrated to give a complete evolution of the TDD (\ref{eq:TDD_general}) in the large-$x_B$ region [\refcite{CMTV_LargeX}], which can be directly applied to the JLab 12 GeV phenomenology (see also [\refcite{Accardi_2013}] and Refs. therein).
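As an illustration of how Eq.~(\ref{eq:CSS}) is integrated, the sketch below evolves the Collins-Soper-Sterman kernel ${\cal K}_{\rm CSS}$ using the standard leading-order light-like cusp anomalous dimension $\Gamma_{\rm cusp}=\alpha_s C_F/\pi$ and a one-loop running coupling; the initial condition, coupling normalization and scales are illustrative, not fitted:

```python
import math

CF, CA, NF = 4.0 / 3.0, 3.0, 4
B0 = (11 * CA - 2 * NF) / (12 * math.pi)   # one-loop beta coefficient

def alpha_s(mu, mu0=1.0, a0=0.35):
    """One-loop running coupling, illustrative normalization a0 at mu0."""
    return a0 / (1.0 + a0 * B0 * math.log(mu**2 / mu0**2))

def gamma_cusp(mu):
    """Leading-order light-like cusp anomalous dimension."""
    return alpha_s(mu) * CF / math.pi

def K_css(mu, mu0=1.0, K0=0.0, n=2000):
    """K(mu) = K0 - int_{mu0}^{mu} dmu'/mu' Gamma_cusp(alpha_s(mu')),
    evaluated by the trapezoidal rule in ln(mu')."""
    ts = [math.log(mu0) + i * (math.log(mu) - math.log(mu0)) / n
          for i in range(n + 1)]
    f = [gamma_cusp(math.exp(t)) for t in ts]
    h = (ts[-1] - ts[0]) / n
    return K0 - h * (0.5 * f[0] + sum(f[1:-1]) + 0.5 * f[-1])

print(K_css(10.0))   # negative: K decreases with mu, driven by Gamma_cusp
```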
\section{Conclusions}
We have shown that the geometrical properties of the generalized loop space can be used to analyze the combined rapidity and RG-evolution of the transverse-distance dependent parton densities in the large-$x_B$ approximation. This regime allows one to factorize the soft contributions responsible for the non-trivial $x_B$- and $\bm b_\perp$-dependence in the form of vacuum averages of the system of Wilson lines lying on- and off- the light-cone. It is crucial that the rapidity divergences get factorized into the soft part as well, while incoming-collinear jets contribute to the renormalization-group evolution via the corresponding anomalous dimensions. Being an element of the generalized loop space, the soft function $\Phi$ obeys the equations of motion, which describe its behaviour under the shape variations of the specially chosen form. The latter can be connected with the rapidity differentials in the energy-momentum space.
Although the differential equations, Eqs. (\ref{eq:TDD_evolution_1}-\ref{eq:CSS}), coincide with the ones obtained in other approaches [\refcite{TMD_CSS,New_TMD_Col,SCET_TMD}], their solution seems to be easier due to the simpler structure of the integration constants and the more straightforward relation to the integrated case. The solution and applications will be reported separately [\refcite{CMTV_LargeX}].
\section*{Acknowledgments} We thank V. Guzey for interest in our work, and for numerous useful comments.
\section{Introduction}
Since the early work of Euler \cite{Euler1844}, one-dimensional (1D) models have been successfully used to describe the flow of blood in the large arteries of the systemic network \cite{PedleyBook1980,Alastruey2011,Wang2015,Muller2014,Boileau2015}. They have proved to be valuable and efficient tools to capture pulse wave propagation in large network simulations and obtain satisfactory evaluations of average quantities such as the cross-sectional area ($A$), the flow rate ($Q$) or the pressure ($P$) \cite{Alastruey2009,Politi2016}. In recent works, 1D models have also been used to solve inverse problems and obtain patient-specific parameters \cite{Lagree2000,Martin2005,Dumas2012}. Due to their simplicity, efficiency and the reduced number of parameters they require, we hope that in the near future these 1D models will be intensively used by medical practitioners to make pre- and post-operative diagnoses and perform patient-specific simulations of surgeries.
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.33\textwidth}
\centering
\includegraphics[scale=0.15,angle=0]{Figure1a.pdf}\\
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\centering
\includegraphics[scale=0.15,angle=0]{Figure1b.pdf}\\
\end{minipage}
\begin{minipage}[t]{0.33\textwidth}
\centering
\includegraphics[scale=0.15,angle=0]{Figure1c.pdf}\\
\end{minipage}
}
\caption{Schematic representations of possible arterial geometrical configurations.\\
\underline{\textit{Left}}: Taper; \underline{\textit{Center}}: Stenosis; \underline{\textit{Right}}: Aneurysm.}
\label{fig:Taper-Stenosis-Aneurysm}
\end{figure}
In physiological situations, the mechanical and geometrical properties of the arterial wall can vary locally. These variations can be caused by tapering (figure \ref{fig:Taper-Stenosis-Aneurysm} left), pathologies such as stenoses (figure \ref{fig:Taper-Stenosis-Aneurysm} center) or aneurysms (figure \ref{fig:Taper-Stenosis-Aneurysm} right) and endovascular prostheses (stents). Mathematically, they result in a source term in the momentum conservation equation that prevents the system from being written in conservation-law form. A naive discretization of this nonconservative source term can lead to spurious oscillations of the numerical solution and the failure of the numerical method, especially close to steady states \cite{Delestre2012}. This problem was originally pointed out by Roe \cite{Roe1987} for the scalar equation with source terms and reflects a truncation error between the discretization of the conservative flux gradient and the nonconservative source term that does not vanish close to steady states. Since the works of Berm\'udez and V\'azquez \cite{Bermudez1994} and LeRoux \cite{Gosse1996,Greenberg1996} in the context of shallow-water equations, numerical schemes that preserve some steady states at a discrete level are called well-balanced.\\
The aim of this study is to propose a simple, robust and efficient well-balanced numerical method for blood flow in an artery with variations of its mechanical and geometrical properties. As blood flow equations are mathematically similar to shallow water equations, several well-balanced numerical schemes have been derived for 1D blood flow equations with varying geometrical and mechanical properties. A popular approach consists in expressing the system in terms of primitive variables, namely the cross-sectional area ($A$) and the flow velocity $u$. The resulting system can be written in a conservation-law form, even in the presence of varying geometrical and mechanical properties. However, it has been proved for shallow water equations that this formulation is not mass-conservative and can lead to erroneous estimations of the wave celerity \cite{ToroBook2001}. This analysis is also valid for blood flow equations and the numerical solutions obtained with a nonconservative system will be incorrect in the presence of elastic jumps. Indeed, the Rankine-Hugoniot jump relation of the nonconservative form is different from the one of the conservative form. \u{C}ani\'c \cite{Canic2002} and Sherwin \cite{Sherwin2003} were among the first to address the issue of the nonconservative source term for blood flow simulation. \u{C}ani\'c proposed to treat the nonconservative product in this source term through jump conditions, while Sherwin used a two-rarefaction Riemann solver when the material properties varied abruptly. More recently, Toro and Siviglia \cite{Toro2013} reformulated the 1D conservative system with varying geometrical and mechanical properties as a homogeneous quasi-linear system and solved the associated Riemann problem. To do so, they introduced an auxiliary steady variable containing the geometrical and mechanical properties of the artery, and also included variations of the external pressure. 
In the framework of path-conservative methods \cite{Pares2006}, M\"uller and Toro \cite{Muller2013} used this augmented quasi-linear system to propose an exactly well-balanced numerical scheme for all steady states (subcritical, transcritical and supercritical). Murillo and Garc\'ia-Navarro \cite{Murillo2015} derived an energy balanced numerical scheme in the framework of augmented solvers for arteries with varying mechanical and geometrical properties, and also variations of the external pressure. In \cite{Delestre2012}, Delestre and Lagr\'ee successfully applied the hydrostatic reconstruction (HR), proposed in \cite{Audusse2004} for shallow water equations, to compute blood flow in arteries with varying cross-sectional area. In more recent work \cite{Delestre2016}, Delestre extended the hydrostatic reconstruction (HR) to arteries with varying cross-sectional area and arterial wall rigidity. \\
The hydrostatic reconstruction (HR) meets the simplicity and efficiency requirements for 1D blood flow simulation and will be the reference well-balanced method used in this study. The hydrostatic reconstruction (HR) can be used with any finite-volume numerical flux for a conservative problem and guarantees the following natural properties of shallow water flows:
\begin{itemize}
\item the preservation of the steady states at rest, or hydrostatic equilibria (the well-balanced property);
\item the conservation of mass;
\item the non-negativity of the water-height $h$;
\item the ability to compute dry states and transcritical flows;
\item a discrete or semi-discrete entropy inequality, which makes it possible to compute the entropic solution in the presence of a discontinuity.
\end{itemize}
Unfortunately, the steady states at rest preserved by the hydrostatic reconstruction (HR) are not relevant for blood flow as they only occur in "dead men" \cite{Delestre2012}. We propose two extensions of the hydrostatic reconstruction adapted to blood flow simulation in large arteries.
By relaxing some of the properties of the hydrostatic reconstruction (HR), such as the ability to compute dry states, we derive an extension of the hydrostatic reconstruction, which we refer to as the "low-Shapiro" hydrostatic reconstruction (HR-LS). The low-Shapiro hydrostatic reconstruction (HR-LS) accurately preserves low-Shapiro number steady states that may occur in large network simulations. The Shapiro number $S_h=u/c$ is the equivalent of the Froude number for shallow water equations and of the Mach number for compressible Euler equations. We also adapt the subsonic hydrostatic reconstruction (HR-S), proposed by Bouchut \cite{Bouchut2010}, to blood flow equations with variable geometrical and mechanical properties. The subsonic hydrostatic reconstruction (HR-S) exactly preserves all subcritical steady states, including low-Shapiro number steady states. By construction, both the low-Shapiro hydrostatic reconstruction (HR-LS) and the subsonic hydrostatic reconstruction (HR-S) are able to accurately compute wave reflections and transmissions. The different numerical methods are then tested and compared in a series of steady and unsteady physiological flow configurations, where both the geometrical and mechanical wall properties vary. \\
This work is organized as follows. In section \ref{sec:Math-Model} we derive the hyperbolic system of equations that describes the flow of blood in large arteries and recall its main mathematical properties. In section \ref{sec:Num}, we present a kinetic numerical scheme for the homogeneous problem and the boundary conditions used in the examples presented in this study. In section \ref{sec:HR}, we introduce the low-Shapiro hydrostatic reconstruction (HR-LS) and the subsonic hydrostatic reconstruction (HR-S) for blood flow in arteries with varying mechanical and geometrical wall properties. In sections \ref{sec:Ex-Single} and \ref{sec:Ex-55}, we present a series of steady and unsteady test cases for a single artery and a 55-artery network, in which we evaluate the performance of the different hydrostatic reconstruction techniques.\\
\section{Mathematical model}
\label{sec:Math-Model}
\subsection{Model derivation}
The 1D models for blood flow are derived by averaging over the cross-sectional area of each artery a simplified Navier-Stokes system of equations. These simplified equations are obtained using the long wave approximation (${D / \lambda} \ll 1$, ratio between the averaged diameter of an artery $D$ and the average wave length of the pulse wave $\lambda$) and supposing the axial symmetry of blood flow ($\partial_{\theta} = 0$). We recall that in arteries the ratio ${D / \lambda}$ is of the order of $1 \times 10^{-2}$; the long wave hypothesis is therefore asymptotically valid. Because blood and wall viscosities will damp the effects we want to highlight, namely pulse wave propagation, we neglect them in the rest of this study. We use the inviscid system of equations describing the flow of blood in an elastic artery at the axial position $x$ and time $t$
\begin{equation}
\left\{
\begin{split}
\frac{\partial A }{\partial t }& + \frac{\partial Q }{\partial x } = 0\\
\frac{\partial Q }{\partial t }& + \frac{\partial }{\partial x }\left( \frac{Q^2}{A}\right) = -\frac{A}{\rho}\frac{\partial P }{\partial x }.
\end{split}
\right.
\label{eq:BF-Sys-Pressure}
\end{equation}
The variables $Q$, $A$ and $P$ are respectively the flow rate, the cross-sectional area and the blood pressure. We also introduce the flow velocity $u = \frac{Q}{A}$. The parameter $\rho$ is the density of blood and is assumed constant. For a description of the derivation of system \eqref{eq:BF-Sys-Pressure} we refer the reader to \cite{Lambert1958,Barnard1966,Hughes1973}. To close the system of equations, the variation of pressure is linked to the deformation of the artery. Assuming that the arterial wall is a homogeneous, incompressible Hookean solid and that the artery is represented by a thin cylinder whose sections move independently of one another, the following wall law is obtained, describing the elastic, or spring-like, behavior of the arterial wall
\begin{equation}
P\left( x,t \right) = P_0 + K\left( x\right) \left( \sqrt{A\left( x,t \right)}-\sqrt{A_0\left( x\right)} \right),
\label{eq:Pressure-Elastic}
\end{equation}
where $A_0$ is the cross-sectional area at rest of the artery and $K$ is the arterial wall rigidity. Both quantities can vary with the axial position $x$. More complex and general pressure laws can be used (for example in veins \cite{Pedley1996}), yet equation \eqref{eq:Pressure-Elastic} contains sufficient information to describe the main features of blood flow in large arteries \cite{Wang2015,Politi2016}. Combining both system \eqref{eq:BF-Sys-Pressure} and equation \eqref{eq:Pressure-Elastic} we obtain the final 1D nonconservative system of equations
\begin{equation}
\left\{
\begin{split}
&\frac{\partial A }{\partial t } + \frac{\partial Q }{\partial x } = 0\\
&\frac{\partial Q }{\partial t } + \frac{\partial F }{\partial x } = S_T,
\end{split}
\right.
\label{eq:BF-Sys-Topo}
\end{equation}
where $F$ is the momentum flux
\begin{equation}
F = \frac{Q^2}{A} + \frac{K}{3\rho}A^{\frac{3}{2}},
\label{eq:BF-Flux}
\end{equation}
and $S_T$ is a source term taking into account the possible variations of the geometrical and mechanical properties of the arterial wall
\begin{equation}
S_T = \frac{A}{\rho} \left( \frac{\partial }{\partial x }\left( K\sqrt{A_0} \right) - \frac{2}{3}\sqrt{A}\frac{\partial K }{\partial x } \right).
\label{eq:BF-Source-Topo}
\end{equation}
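For the reader's convenience, the closure relations above can be evaluated pointwise. The following Python sketch is purely illustrative and not part of the numerical method of this paper; the density value and the test inputs are arbitrary choices.

```python
import math

RHO = 1060.0  # blood density [kg/m^3]; an illustrative value, not fixed by the text


def pressure(A, A0, K, P0=0.0):
    """Elastic wall law, eq. (Pressure-Elastic): P = P0 + K (sqrt(A) - sqrt(A0))."""
    return P0 + K * (math.sqrt(A) - math.sqrt(A0))


def momentum_flux(A, Q, K):
    """Momentum flux, eq. (BF-Flux): F = Q^2 / A + K A^{3/2} / (3 rho)."""
    return Q * Q / A + K / (3.0 * RHO) * A ** 1.5
```

Note that only the pointwise evaluation is shown here; the discretization of the source term $S_T$ is precisely the delicate point addressed by the well-balanced schemes discussed in the remainder of the paper.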
\subsection{Hyperbolic system}
System \eqref{eq:BF-Sys-Topo} can be written as a system of balance laws
\begin{equation}
\frac{\partial \mathbf{U} }{\partial t }
+ \frac{\partial }{\partial x }\left[ \mathbf{F}\left( \mathbf{U},K \right) \right]
= \mathbf{S}\left( \mathbf{U},K \right) \frac{\partial \bm{\sigma} }{\partial x }.
\label{eq:BF-Sys}
\end{equation}
$\mathbf{U}$ and $\mathbf{F}$ are respectively the vector of conservative variables and the vector of mass and momentum flux
\begin{equation}
\mathbf{U}=
\begin{bmatrix}
A\\
Q\\
\end{bmatrix},
\qquad
\mathbf{F}\left( \mathbf{U},K \right) =
\begin{bmatrix}
Q\\
F\\
\end{bmatrix}
,
\end{equation}
and the vector $\bm{\sigma}$ and the matrix $\mathbf{S}$ are defined as:
\begin{equation}
\bm{\sigma}=
\begin{bmatrix}
K\\
Z\\
\end{bmatrix}
=
\begin{bmatrix}
K\\
K\sqrt{A_0}\\
\end{bmatrix},
\qquad
\mathbf{S}\left( \mathbf{U} \right) =
\begin{bmatrix}
0 & 0 \\
-\frac{2}{3}\frac{A^{\frac{3}{2}}}{\rho} & \frac{A}{\rho} \\
\end{bmatrix}.
\end{equation}
The main difficulty of system \eqref{eq:BF-Sys} lies in the presence of the nonconservative source term $\mathbf{S}\frac{\partial \bm{\sigma} }{\partial x}$. This nonconservative term vanishes when the cross-sectional area at rest $A_0$ and the arterial wall rigidity $K$ are constant, and system \eqref{eq:BF-Sys} is reduced to the following system of conservation laws
\begin{equation}
\frac{\partial \mathbf{U} }{\partial t } + \frac{\partial }{\partial x }\left[\mathbf{F}\left( \mathbf{U},K \right)\right] = 0.
\label{eq:BF-Sys-Conservative}
\end{equation}
The conservative system \eqref{eq:BF-Sys-Conservative} has been thoroughly studied by many authors and we only briefly recall its properties. Additional details can be found in \cite{Formaggia2003}. To analyze the mathematical properties of the system \eqref{eq:BF-Sys-Conservative}, we compute the Jacobian matrix of the flux vector $\mathbf{F}$
\begin{equation}
\mathbf{J}\left( \mathbf{U},K \right) = \frac{\partial \mathbf{F} }{\partial \mathbf{U} }=
\begin{bmatrix}
0 & 1\\
\frac{K\sqrt{A}}{2 \rho}-\frac{Q^2}{A^2} & \frac{2Q}{A} \\
\end{bmatrix}.
\end{equation}
$\mathbf{J}\left( \mathbf{U},K \right)$ has two real eigenvalues $\lambda_1$ and $\lambda_2$, respectively associated to two right eigenvectors $\mathbf{R_1}$ and $\mathbf{R_2}$
\begin{equation}
\lambda_1 = \frac{Q}{A} - c
, \quad
\lambda_2 = \frac{Q}{A} + c ,
\qquad
\mathbf{R_1} =
\begin{bmatrix}
1\\
\lambda_1 \\
\end{bmatrix}
, \quad
\mathbf{R_2} =
\begin{bmatrix}
1\\
\lambda_2 \\
\end{bmatrix} .
\end{equation}
The variable $c$ is the Moens-Korteweg wave speed \cite{Moens1878,Korteweg1878} and corresponds to the speed of pulse waves in an artery
\begin{equation}
c = \sqrt{\frac{K}{2\rho}\sqrt{A}}.
\label{eq:Moens-Korteweg-c}
\end{equation}
The hyperbolicity of the system is characterized by the Shapiro number $S_h$, introduced by Shapiro in \cite{Shapiro1977}
\begin{equation}
S_h = \frac{u}{ c} = \frac{1}{c} \frac{Q}{A} .
\end{equation}
$S_h$ is the analogue of the Froude number $F_r$ for the shallow-water equations or of the Mach number $M_a$ for compressible flows. Depending on the value of $S_h$, we distinguish two flow regimes, represented respectively by the subcritical velocity domain $\mathbb{U}_{sub}$ and the supercritical velocity domain $\mathbb{U}_{sup}$
\begin{equation}
\left\{
\begin{split}
\mathbb{U}_{sub} = & \left\{ \frac{Q}{A} \in \mathbb{R} \: | \: A > 0 , \: K > 0, \: Z > 0, \: S_h < 1 \right\} \\
\mathbb{U}_{sup} = & \left\{ \frac{Q}{A} \in \mathbb{R} \: | \: A > 0 , \: K > 0, \: Z > 0, \: S_h > 1 \right\} . \\
\end{split}
\right.
\end{equation}
In both regions $\mathbb{U}_{sub}$ and $\mathbb{U}_{sup}$, system \eqref{eq:BF-Sys-Conservative} is strictly hyperbolic as $\lambda_1 \neq \lambda_2$ and the right eigenvectors $\mathbf{R_1}$ and $\mathbf{R_2}$ are linearly independent. However, when $S_h=1$ the flow is critical and the system loses its strict hyperbolicity. In this case resonance phenomena can occur, leading to a possible loss of uniqueness of the solution \cite{Liu1987,Isaacson1992,LevequeBook2002,Han2012}.
In physiological conditions, blood flow is almost always subcritical. Nevertheless, very specific pathologies may lead to supercritical flows but will not be the subject of this study. Only subcritical solutions of system \eqref{eq:BF-Sys-Conservative} and more generally of system \eqref{eq:BF-Sys} in $\mathbb{U}_{sub}$ will be considered here.\\
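As an illustration, the Moens-Korteweg celerity \eqref{eq:Moens-Korteweg-c} and the Shapiro number admit a direct transcription. The Python sketch below is illustrative only; the density $\rho$ and the test values are arbitrary choices, not data from this study.

```python
import math

RHO = 1060.0  # blood density [kg/m^3]; an illustrative value


def wave_speed(A, K):
    """Moens-Korteweg celerity, eq. (Moens-Korteweg-c): c = sqrt(K sqrt(A) / (2 rho))."""
    return math.sqrt(K / (2.0 * RHO) * math.sqrt(A))


def shapiro(A, Q, K):
    """Shapiro number S_h = u / c, with u = Q / A."""
    return (Q / A) / wave_speed(A, K)
```

For instance, with $A = 3\times 10^{-4}\ \mathrm{m^2}$ and $K = 10^{6}\ \mathrm{Pa\,m^{-1}}$ one obtains $c \approx 2.86\ \mathrm{m\,s^{-1}}$, and a physiological flow rate of $10^{-4}\ \mathrm{m^3\,s^{-1}}$ yields $S_h \approx 0.12$, well inside the subcritical regime.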
For solutions of system \eqref{eq:BF-Sys-Conservative} in $\mathbb{U}_{sub}$, linear algebra shows that the Jacobian matrix $\mathbf{J}$ is diagonalizable in the form $\mathbf{J} =\mathbf{R} \mathbf{\Delta} \mathbf{R}^{-1}$, where the columns of $\mathbf{R}$ are the right eigenvectors $\mathbf{R_1}$ and $\mathbf{R_2}$ and $\mathbf{\Delta}$ is a diagonal matrix containing the eigenvalues of $\mathbf{J}$. Introducing a new vector $\mathbf{W} = \left[W_1,W_2\right]^{T}$ such that $\partial_{\mathbf{U}} \mathbf{W} = \mathbf{R}^{-1}$, system \eqref{eq:BF-Sys-Conservative} can be written as:
\begin{equation}
\frac{\partial \mathbf{W} }{\partial t } + \mathbf{\Delta} \frac{\partial \mathbf{W} }{\partial x } = 0.
\label{eq:bf-characteristic}
\end{equation}
Finally, by integrating the equation $\partial_{\mathbf{U}} \mathbf{W} = \mathbf{R}^{-1}$, the following expression for $\mathbf{W}$ is obtained
\begin{equation}
\mathbf{W} =
\begin{bmatrix}
W_1\\
W_2\\
\end{bmatrix} =
\begin{bmatrix}
\frac{Q}{A} - 4 c\\
\\
\frac{Q}{A} + 4 c\\
\end{bmatrix}.
\label{eq:bf-Riemann-Invariants}
\end{equation}
The vector $\mathbf{W}$ is often referred to as the Riemann invariant vector and is linked to the conservative variables
\begin{equation}
\left\{
\begin{split}
& A = \left( \frac{2 \rho}{K} \right)^2 \left( \frac{W_2 - W_1}{8} \right)^4 \\
& Q = A \frac{W_1 + W_2}{2} .\\
\end{split}
\right.
\label{eq:bf-AQW}
\end{equation}
The relations \eqref{eq:bf-AQW} are useful to define the boundary conditions at the inlet and outlet of the computational domain.\\
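The change of variables between $(A,Q)$ and $(W_1,W_2)$ is explicit in both directions, as the following illustrative Python sketch shows (the density value is an arbitrary choice; the functions simply transcribe equations \eqref{eq:bf-Riemann-Invariants} and \eqref{eq:bf-AQW}).

```python
import math

RHO = 1060.0  # blood density [kg/m^3]; an illustrative value


def to_invariants(A, Q, K):
    """Riemann invariants, eq. (bf-Riemann-Invariants): W_{1,2} = u -/+ 4c."""
    c = math.sqrt(K / (2.0 * RHO) * math.sqrt(A))
    u = Q / A
    return u - 4.0 * c, u + 4.0 * c


def from_invariants(W1, W2, K):
    """Inverse map, eq. (bf-AQW): recover (A, Q) from (W1, W2)."""
    A = (2.0 * RHO / K) ** 2 * ((W2 - W1) / 8.0) ** 4
    return A, A * (W1 + W2) / 2.0
```

A round trip through both maps returns the initial state, which confirms that the two sets of variables are equivalent in the subcritical domain.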
The vector $\mathbf{U}$, solution of system \eqref{eq:BF-Sys-Conservative}, satisfies an entropy inequality linked to the entropy pair $\left( \eta,G \right)$
\begin{equation}
\frac{\partial \eta }{\partial t} + \frac{\partial G }{\partial x} \leq 0,
\label{BF:eq-Entropy-Inequality}
\end{equation}
where $\eta$ is the entropy and $G$ is the entropy flux
\begin{equation}
\left\{
\begin{split}
&\eta\left( \mathbf{U},K \right) = \frac{Q^2}{2A} + \frac{2}{3}\frac{K}{\rho} A^{\frac{3}{2}} \\
&G\left( \mathbf{U} ,K\right) = \left( \frac{Q^2}{2A} + \frac{K}{\rho}A^{\frac{3}{2}} \right)\frac{Q}{A}. \\
\end{split}
\right.
\end{equation}
This entropy inequality is extended to solutions of system \eqref{eq:BF-Sys} through a new entropy pair $\left( \tilde{\eta},\tilde{G} \right)$ taking into account the vector $\bm{\sigma}$
\begin{equation}
\left\{
\begin{split}
&\tilde{\eta}\left( \mathbf{U},\bm{\sigma} \right) = \eta\left( \mathbf{U},K \right) - \frac{Z}{\rho} A \\
&\tilde{G}\left( \mathbf{U} ,\bm{\sigma}\right) = G\left( \mathbf{U},K \right) - \frac{Z}{\rho} Q. \\
\end{split}
\right.
\end{equation}
This entropy inequality is closely linked to the variation of the physical energy of the system. The existence of such an inequality is essential in order to select the correct physical solution across discontinuities \cite{GosseBook2013}.\\
System \eqref{eq:BF-Sys} admits non-trivial steady solutions, verifying the following steady state system of equations
\begin{equation}
\left\{
\begin{split}
& Q = C_1\\
& \frac{1}{2} \frac{Q^2}{A^2} + \frac{1}{\rho} \left( K\sqrt{A}-Z \right) = C_2 , \\
\end{split}
\right.
\label{eq:BF-Steady-State-Equation}
\end{equation}
where $C_1$ and $C_2$ are two constants. In the following, we denote by $E = \frac{1}{2} \frac{Q^2}{A^2} + \frac{1}{\rho} \left( K\sqrt{A}-Z \right)$ the energy discharge. A particular family of steady states are the steady states at rest, or "man at eternal rest" equilibria, defined by
\begin{equation}
\left\{
\begin{split}
& Q = 0\\
& K\sqrt{A}-Z = C_2 . \\
\end{split}
\right.
\label{eq:BF-Steady-State-Equation-Rest}
\end{equation}
For shallow water flows, steady states mainly occur in lakes and verify the "man at eternal rest" equilibria \eqref{eq:BF-Steady-State-Equation-Rest}. In arteries, steady or quasi-steady flow regimes are observed in small segments when the frequency of the pulse wave is greatly reduced due to a high resistance of the flow, for example after severe stenoses or in smaller arteries. In these cases, the relevant equilibria are no longer the steady states at rest but the non-zero flow steady states described by system \eqref{eq:BF-Steady-State-Equation}.
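The energy discharge $E$ provides a convenient diagnostic of steadiness. A minimal Python sketch, with an arbitrary illustrative density, reads:

```python
import math

RHO = 1060.0  # blood density [kg/m^3]; an illustrative value


def energy_discharge(A, Q, K, Z):
    """Energy discharge E = u^2/2 + (K sqrt(A) - Z) / rho, constant along steady states."""
    return 0.5 * (Q / A) ** 2 + (K * math.sqrt(A) - Z) / RHO
```

For the "man at eternal rest" equilibrium \eqref{eq:BF-Steady-State-Equation-Rest} with $C_2 = 0$, i.e. $Q = 0$ and $Z = K\sqrt{A}$, the energy discharge vanishes identically.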
\section{Numerical scheme for the homogeneous conservative system}
\label{sec:Num}
In this section we describe the finite volume numerical scheme used to solve the homogeneous conservative system \eqref{eq:BF-Sys-Conservative}. The spatial domain is discretized into a series of cells $C_i$ defined as
\begin{equation}
C_i = \left[x_{i-\frac{1}{2}}, \: x_{i+\frac{1}{2}} \right] = \left[x_i - \frac{\Delta x}{2}, \: x_i + \frac{\Delta x}{2} \right] , \quad i \in \left[1,N\right],
\end{equation}
where $\Delta x$ is the cell size, supposed constant for simplicity. The time domain is also discretized using a constant time step $\Delta t$ and the discrete times are defined as
\begin{equation}
t^n = n \Delta t , \quad n \in \mathbb{N}.
\end{equation}
\subsection{Finite volume numerical scheme}
We first derive the integral form of the conservative system \eqref{eq:BF-Sys-Conservative} by integrating it with respect to $t$ and $x$ over $\left] t^n, \: t^{n+1}\right[ \times C_i$ \cite{LevequeBook2002}
\begin{equation}
\left.
\begin{split}
& \int_{C_i} \left[ \mathbf{U}\left( x,t^{n+1} \right) - \mathbf{U}\left( x,t^{n} \right) \right] \mathrm{d}x + \\
& \int_{t^n}^{t^{n+1}} \left[ \mathbf{F}\left( \mathbf{U}\left( x_{i+\frac{1}{2}},t \right), K\left( x_{i+\frac{1}{2}} \right) \right) - \mathbf{F}\left( \mathbf{U}\left( x_{i-\frac{1}{2}},t \right), K\left( x_{i-\frac{1}{2}} \right) \right) \right] \mathrm{d}t = 0
.
\end{split}
\right.
\label{eq:BF-Finite-Volume-Scheme}
\end{equation}
We then approximate the integrals in \eqref{eq:BF-Finite-Volume-Scheme} using the discrete variable $\mathbf{U_i^n}$ and the numerical flux $\mathbf{F^n_{i+\frac{1}{2}}}$, corresponding respectively to an approximation of the space average of the exact solution $\mathbf{U}$ over the cell $C_i$ at time $t^n$
\begin{equation}
\mathbf{U_i^n} \approx \frac{1}{\Delta x} \int_{C_i} \mathbf{U}\left( x,t^n \right)\text{d}x ,
\end{equation}
and to an approximation of the time average of $\mathbf{F}$ at the cell interface $I_{i+\frac{1}{2}}$
\begin{equation}
\mathbf{F^n_{i+\frac{1}{2}}} \approx \frac{1}{\Delta t} \int_{t^n}^{t^{n+1}} \mathbf{F}\left( \mathbf{U}\left( x_{i+\frac{1}{2}},t \right), K\left( x_{i+\frac{1}{2}} \right) \right) \mathrm{d}t .
\end{equation}
Using these definitions, we obtain the following explicit finite volume numerical scheme
\begin{equation}
\mathbf{U_i^{n+1}} = \mathbf{U_i^n} - \frac{\Delta t}{\Delta x} \left[ \mathbf{F^n_{i+\frac{1}{2}}} - \mathbf{F^n_{i-\frac{1}{2}}} \right].
\label{eq:BF-First-Order-Scheme}
\end{equation}
We define $\mathbf{F^n_{i+\frac{1}{2}}}$ as a two-points numerical flux vector, namely
\begin{equation}
\mathbf{F^n_{i+\frac{1}{2}}} = \mathcal{F}\left( \mathbf{U_L},\mathbf{U_R} \right)
=
\begin{bmatrix}
\mathcal{F_A}\left( \mathbf{U_L},\mathbf{U_R} \right)\\
\mathcal{F_Q}\left( \mathbf{U_L},\mathbf{U_R} \right)\\
\end{bmatrix}
.
\end{equation}
As we focus only on first-order finite volume numerical schemes, the vectors $\mathbf{U_L}$ and $\mathbf{U_R}$ at the cell interface $I_{i+\frac{1}{2}}$ at time $t^n$ are defined as
\begin{equation}
\mathbf{U_L} = \mathbf{U_i^n},
\qquad
\mathbf{U_R} = \mathbf{U_{i+1}^{n}}.
\label{eq:BF-UL-UR}
\end{equation}
The choice of the function $\mathcal{F}$ defines the numerical flux and thus the finite volume scheme. Several possibilities exist, and a review of the most common ones applied to blood flow equations can be found in \cite{Delestre2012,Muller2013,Muller2015,Wang2015,Murillo2015,Audebert2016}.
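The update \eqref{eq:BF-First-Order-Scheme} can be sketched in Python for any two-point numerical flux as follows. This is an illustrative skeleton only: the transmissive ghost cells used here (copies of the boundary cells) are a placeholder for the boundary conditions described later in this section.

```python
import numpy as np


def fv_step(U, dt, dx, num_flux):
    """One step of the explicit first-order scheme, eq. (BF-First-Order-Scheme).

    U        : array of shape (N, 2) holding (A, Q) in each cell
    num_flux : callable (U_L, U_R) -> two-component interface flux
    """
    Upad = np.vstack([U[:1], U, U[-1:]])  # transmissive ghost cells (placeholder)
    F = np.array([num_flux(Upad[i], Upad[i + 1])
                  for i in range(len(Upad) - 1)])  # fluxes at the N+1 interfaces
    return U - dt / dx * (F[1:] - F[:-1])
```

As a sanity check, a spatially constant numerical flux leaves the solution unchanged, as expected from the conservative form of the update.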
\subsection{Kinetic numerical flux}
We choose to compute the function $\mathcal{F}$ using a kinetic numerical flux, and a review of this method applied to different systems of equations can be found in \cite{Bouchut1999}. The kinetic method was first introduced for shallow water equations in \cite{Perthame2001}, combined with the hydrostatic reconstruction (HR) in \cite{Audusse2005} and adapted to blood flow in \cite{Delestre2012,Audebert2016}. The principal motivations for choosing a kinetic numerical flux are that it preserves the positivity of the cross-sectional area and that its numerical diffusion is better suited to compute resonant solutions \cite{BouchutBook2004, Andrianov2005}. We briefly recall the classical kinetic approach.\\
Following \cite{Perthame2001,Audusse2005}, we introduce the real, positive, even and compactly supported function $\chi\left( w \right)$, verifying the following properties
\begin{equation}
\left\{
\begin{split}
& \chi\left( -w \right) = \chi \left( w \right) \\
& \int_\mathbb{R} \chi \left( w \right)\mathrm{d}w = \int_\mathbb{R}w^2 \chi \left( w \right) \mathrm{d}w = 1 .\\
\end{split}
\right.
\label{eq:kin-chi-Properties}
\end{equation}
We choose the following expression for the function $\chi\left( w \right)$
\begin{equation}
\chi\left( w \right) =
\left\{
\begin{split}
& \frac{1}{2 \sqrt{3}} &\text{ if } |w| \leq \sqrt{3}\\
& 0 &\text{ else}.\\
\end{split}
\right.
\label{eq:kin-chi}
\end{equation}
Using this function, we define the kinetic Maxwellian, or so-called \textit{Gibbs equilibrium}, which represents the density of microscopic particles moving at the velocity $\xi \in \mathbb{R}$
\begin{equation}
M\left( x,t,\xi \right)=M\left( A,\xi-u \right) = \frac{A\left( x,t \right)}{\tilde{c}} \chi \left( \frac{\xi - u}{\tilde{c}} \right),
\label{eq:kin-Maxwellian}
\end{equation}
where
\begin{equation}
\tilde{c} = \sqrt{\frac{K}{3 \rho} \sqrt{A}}.
\label{eq:kin-c}
\end{equation}
Noticing that the integral and the first and second moments on $\mathbb{R}$ of $M$ respectively yield $A$, $Q$ and $F$, it can be proved \cite{Perthame2001} that $\mathbf{U}$ is solution of system \eqref{eq:BF-Sys-Conservative} if and only if $M$ satisfies the following linear kinetic equation
\begin{equation}
\frac{\partial M }{\partial t } + \xi \frac{\partial M }{\partial x } = \mathcal{Q}\left( x,t,\xi \right),
\label{eq:BF-kin-eq}
\end{equation}
where $\mathcal{Q}\left( x,t,\xi \right)$ is a collision term that satisfies
\begin{equation}
\int_{\mathbb{R}} \mathcal{Q}\mathrm{d}\xi = \int_{\mathbb{R}} \xi \mathcal{Q}\mathrm{d}\xi = 0.
\label{eq:BF-kin-Collision}
\end{equation}
As the equation \eqref{eq:BF-kin-eq} is linear, it can be approximated by a simple upwind scheme. The flux function $\mathcal{F}$ is then obtained using the integral and the first moment of the upwind numerical flux used to solve the linear kinetic equation \eqref{eq:BF-kin-eq}, and writes
\begin{equation}
\mathcal{F}\left( \mathbf{U_L},\mathbf{U_R} \right) = \mathcal{F}^+\left( \mathbf{U_L} \right) + \mathcal{F}^-\left( \mathbf{U_R} \right),
\end{equation}
where $\mathbf{U_L}$ and $\mathbf{U_R}$ are defined as in \eqref{eq:BF-UL-UR}. The fluxes $\mathcal{F}^+\left( \mathbf{U} \right)$ and $\mathcal{F}^- \left( \mathbf{U} \right)$ are defined as
\begin{equation}
\left\{
\begin{split}
& \mathcal{F}^+\left( \mathbf{U} \right) &=& \int_{\xi \geq 0} \xi
\begin{bmatrix}
1\\
\xi\\
\end{bmatrix}
M \left( A,\xi-u \right)\text{d} \xi\\
& \mathcal{F}^-\left( \mathbf{U} \right) &=& \int_{\xi \leq 0} \xi
\begin{bmatrix}
1\\
\xi\\
\end{bmatrix}
M \left( A,\xi-u \right)\text{d} \xi .\\
\end{split}
\right.
\end{equation}
After some computation, we find that
\begin{equation}
\left\{
\begin{split}
&\mathcal{F}^+\left( \mathbf{U} \right) =
\frac{A}{2 \sqrt{3} \tilde{c}}
\begin{bmatrix}
\frac{1}{2}\left(\left(\xi_{p}^+\right)^2 - \left(\xi_{m}^+\right)^2 \right)\\
\frac{1}{3}\left( \left(\xi_{p}^+\right)^3- \left(\xi_{m}^+\right)^3 \right)\\
\end{bmatrix}
\\
&\mathcal{F}^-\left( \mathbf{U} \right) =
\frac{A}{2 \sqrt{3} \tilde{c}}
\begin{bmatrix}
\frac{1}{2}\left(\left(\xi_{p}^-\right)^2 - \left(\xi_{m}^-\right)^2 \right)\\
\frac{1}{3}\left( \left(\xi_{p}^-\right)^3- \left(\xi_{m}^-\right)^3 \right)\\
\end{bmatrix} ,\\
\end{split}
\right.
\label{eq:kin-num-flux}
\end{equation}
with
\begin{equation}
\left\{
\begin{split}
&\xi_p^+ = \max\left(0,u + \sqrt{3}\tilde{c}\right), \qquad &\xi_m^+ = \max\left(0,u - \sqrt{3}\tilde{c}\right)\\
&\xi_p^- = \min\left(0,u + \sqrt{3}\tilde{c}\right), \qquad &\xi_m^- = \min\left(0,u - \sqrt{3}\tilde{c}\right).\\
\end{split}
\right.
\label{eq:kin-xi}
\end{equation}
The stability of the scheme is ensured if at each time $t^{n}$, the time step $\Delta t$ verifies the following CFL (Courant, Friedrichs and Lewy) \cite{Courant1967} condition
\begin{equation}
\Delta t \leq \min_{i=1}^N \frac{\Delta x}{|u_i^n| + \tilde{c}_i^n}.
\label{eq:CFL-kin}
\end{equation}
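The kinetic flux \eqref{eq:kin-num-flux}--\eqref{eq:kin-xi} and the time step restriction \eqref{eq:CFL-kin} admit a direct transcription. The Python sketch below is illustrative only; the density value is an arbitrary choice and the code follows the formulas above verbatim.

```python
import math

RHO = 1060.0  # blood density [kg/m^3]; an illustrative value


def half_fluxes(A, Q, K):
    """F^+ and F^- of eqs. (kin-num-flux)-(kin-xi)."""
    u = Q / A
    ct = math.sqrt(K / (3.0 * RHO) * math.sqrt(A))  # eq. (kin-c)
    s = math.sqrt(3.0) * ct
    xpp, xmp = max(0.0, u + s), max(0.0, u - s)
    xpm, xmm = min(0.0, u + s), min(0.0, u - s)
    coef = A / (2.0 * math.sqrt(3.0) * ct)
    Fp = (0.5 * coef * (xpp ** 2 - xmp ** 2), coef / 3.0 * (xpp ** 3 - xmp ** 3))
    Fm = (0.5 * coef * (xpm ** 2 - xmm ** 2), coef / 3.0 * (xpm ** 3 - xmm ** 3))
    return Fp, Fm


def kinetic_flux(UL, UR, K):
    """Interface flux F(UL, UR) = F^+(UL) + F^-(UR)."""
    Fp, _ = half_fluxes(UL[0], UL[1], K)
    _, Fm = half_fluxes(UR[0], UR[1], K)
    return Fp[0] + Fm[0], Fp[1] + Fm[1]


def cfl_dt(cells, dx, K):
    """Time step satisfying the CFL condition, eq. (CFL-kin)."""
    return min(dx / (abs(Q / A) + math.sqrt(K / (3.0 * RHO) * math.sqrt(A)))
               for A, Q in cells)
```

A direct computation confirms the consistency of the flux: for identical left and right subcritical states, $\mathcal{F}(\mathbf{U},\mathbf{U}) = \left[Q, \: Q^2/A + K A^{3/2}/(3\rho)\right]^T$, i.e. the exact flux $\mathbf{F}(\mathbf{U},K)$.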
\subsection{Initial condition}
All numerical simulations presented in this study are initialized by the following solution of the steady state at rest system \eqref{eq:BF-Steady-State-Equation-Rest}
\begin{equation}
Q = 0 \quad \mathrm{and} \quad A = A_0 ,
\label{eq:Initial-Condition}
\end{equation}
and the initial vector of conservative variable in the cell $C_i$ is then
\begin{equation}
\mathbf{U_i^0} =
\begin{bmatrix}
A_{0,i}\\
0\\
\end{bmatrix} .
\label{eq:Initial-Vector}
\end{equation}
\subsection{Subcritical boundary condition}
\label{sec:BC}
In each artery at time $t^{n}$, boundary conditions are imposed in inlet and outlet ghost cells, respectively noted $C_{in}$ and $C_{out}$, by setting the values of their associated vectors of conservative variables $\mathbf{U_{in}^n}$ and $\mathbf{U_{out}^n}$. As we compute subcritical solutions of system \eqref{eq:BF-Sys} in $\mathbb{U}_{sub}$, one boundary condition is imposed in the inlet ghost cell $C_{in}$ and one in the outlet ghost cell $C_{out}$, which respectively determine one component of $\mathbf{U_{in}^n}$ and one component of $\mathbf{U_{out}^n}$. To compute the remaining unknown components of $\mathbf{U_{in}^n}$ and $\mathbf{U_{out}^n}$, we follow the methodology proposed by Bristeau and Coussin \cite{Bristeau2001} and Alastruey \cite{Alastruey2008}. In the following, we assume that in each cell $C_i$ at time $t^{n}$, the discrete vector of conservative variables $\mathbf{U_i^n}$ is known.
\subsubsection{Inlet boundary condition: imposed flow rate $Q_{in}$}
We describe here a methodology to impose the flow rate $Q_{in}\left( t^{n} \right) = Q_{in}^n$ at the interface between the first cell of the computational domain $C_1$ and the inlet ghost cell $C_{in}$, namely
\begin{equation}
\mathcal{F_A}\left( \mathbf{U_{in}^n} , \mathbf{U_1^n} \right) = Q_{in}^n .
\label{eq:bc-inlet-flux}
\end{equation}
Taking advantage of the fact that the kinetic flux function $\mathcal{F}$ can be split in two, equation \eqref{eq:bc-inlet-flux} can be expressed as
\begin{equation}
\mathcal{F_A}^+\left( \mathbf{U_{in}^n} \right) +\mathcal{F_A}^-\left( \mathbf{U_1^n} \right) = Q_{in}^n.
\label{eq:bc-inlet-flux-kin}
\end{equation}
To ensure the stability of the scheme, this condition is imposed in an upwind manner. Following \cite{Bristeau2001}, we define the quantity
\begin{equation}
a_1 = Q_{in}^n - \mathcal{F_A}^-\left( \mathbf{U_1^n} \right).
\end{equation}
Two possible cases exist:
\begin{itemize}
\item If $a_1 \leq 0$, the dominant part of the information is coming from inside the computational domain. As we are performing an upwind evaluation of the inlet boundary condition, we impose
\begin{equation}
\left\{
\begin{split}
& \mathcal{F_A}^+\left( \mathbf{U_{in}^n} \right) = 0\\
& \mathcal{F_Q}^+\left( \mathbf{U_{in}^n} \right) = 0 .\\
\end{split}
\right.
\label{eq:bc-inlet-flux-kin-a1-neg}
\end{equation}
\item If $a_1 > 0$, the dominant part of the information is coming from outside the computational domain. In this case, we impose
\[
\mathcal{F_A}^+\left( \mathbf{U_{in}^n} \right) = a_1.
\]
An additional equation is required to completely determine $\mathbf{U_{in}^n}$. We take advantage of the characteristic structure of the problem: as we are in the subcritical case, there exists an outgoing characteristic on which the Riemann invariant $W_1$ is constant. Using this property, we assume that a correct estimation of the cell average value of the outgoing Riemann invariant $W_1\left(\mathbf{U_{in}^n}\right)$ is $W_1\left( \mathbf{U_1^n} \right)$. Finally, we impose
\begin{equation}
\left\{
\begin{split}
& \mathcal{F_A}^+\left( \mathbf{U_{in}^n} \right) = a_1\\
&W_1\left(\mathbf{U_{in}^n}\right) = W_1\left( \mathbf{U_1^n} \right) . \\
\end{split}
\right.
\label{eq:bc-inlet-flux-kin-a1-pos}
\end{equation}
\end{itemize}
$\mathbf{U_{in}^n}$ is obtained by solving either system \eqref{eq:bc-inlet-flux-kin-a1-neg} or system \eqref{eq:bc-inlet-flux-kin-a1-pos}. This can be done using a classic Newton's method in a limited number of iterations ($\sim 5$).
\subsubsection{Unsteady outlet boundary condition: reflection of the outgoing characteristic}
\label{sec:BC-Rt}
We propose here a methodology to characterize the incoming information at the outlet of the computational domain. Indeed, as we are in the subcritical regime, there exists an outgoing characteristic on which the Riemann invariant $W_2$ is constant and an incoming characteristic on which propagates the Riemann invariant $W_1$. As in the previous case, the cell average value of the outgoing Riemann invariant $W_2\left(\mathbf{U_{out}^n}\right)$ can be estimated by $W_2\left( \mathbf{U_N^n} \right)$ and we impose
\begin{equation}
W_2\left(\mathbf{U_{out}^n}\right) = W_2\left( \mathbf{U_N^n} \right).
\end{equation}
The value of the incoming Riemann invariant $W_1\left(\mathbf{U_{out}^n}\right)$ is unknown as it propagates on a characteristic coming from outside the computational domain. In large artery simulations, it is common to estimate the incoming Riemann invariant $W_1\left(\mathbf{U_{out}^n}\right)$ as a fraction of the outgoing Riemann invariant $W_2\left(\mathbf{U_{out}^n}\right)$ \cite{Wang2015,Alastruey2008,Alastruey2009,Murillo2015}. This fraction is quantified by a reflection coefficient $R_t$ such that
\begin{equation}
W_1\left(\mathbf{U_{out}^n}\right) - W_1\left(\mathbf{U_{out}^0}\right) = - R_t \left[ W_2\left(\mathbf{U_{out}^n}\right) - W_2\left(\mathbf{U_{out}^0}\right) \right],
\label{eq:bc-outlet-Rt}
\end{equation}
where $W_1\left(\mathbf{U_{out}^0}\right)$ and $W_2\left(\mathbf{U_{out}^0}\right)$ are the initial Riemann invariants of the ghost cell $C_{out}$. The reflection coefficient $R_t$, whose value ranges between $0$ and $1$, models the reflective and resistive behavior of the network that is not taken into account in the numerical simulation and lies distal (anatomically located far from the point of reference) to the outlet of the computational domain. Finally, using the relations \eqref{eq:bf-AQW}, we solve the following system of equations to obtain $\mathbf{U_{out}^n}$
\begin{equation}
\left\{
\begin{split}
&W_2\left(\mathbf{U_{out}^n}\right) = W_2\left( \mathbf{U_N^n} \right)\\
&W_1\left(\mathbf{U_{out}^n}\right) - W_1\left(\mathbf{U_{out}^0}\right) = - R_t \left[ W_2\left(\mathbf{U_{out}^n}\right) - W_2\left(\mathbf{U_{out}^0}\right) \right] .\\
\end{split}
\right.
\label{eq:bc-outlet-sys}
\end{equation}
When we wish to remove any incoming information, or equivalently any distal reflection, we set $R_t=0$.
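System \eqref{eq:bc-outlet-sys} is explicit in terms of the Riemann invariants, as the following illustrative Python sketch shows (the function is a sketch of the update, not code from this study):

```python
def outlet_invariants(W1_out0, W2_out0, W2_N, Rt):
    """Riemann invariants in the ghost cell C_out, system (bc-outlet-sys).

    W1_out0, W2_out0 : initial invariants of the ghost cell
    W2_N             : outgoing invariant estimated from the last cell
    Rt               : reflection coefficient in [0, 1]
    """
    W2_out = W2_N
    W1_out = W1_out0 - Rt * (W2_out - W2_out0)
    return W1_out, W2_out
```

The ghost-cell state $\mathbf{U_{out}^n}$ then follows from the relations \eqref{eq:bf-AQW}. With $R_t = 0$ the incoming invariant keeps its initial value, so no distal reflection is generated; with $R_t = 1$ any rise of the outgoing invariant is fully reflected.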
\subsubsection{Steady outlet boundary condition: imposed cross-sectional area $A_{out}$}
\label{sec:BC-At}
We describe here a methodology to impose the cross-sectional area $A_{out}$ at the outlet of the computational domain. Indeed, when the flow rate is imposed at the inlet of the computational domain and the vector $\bm{\sigma}$ is known, the steady states verifying system \eqref{eq:BF-Steady-State-Equation} are completely determined if we impose the value of the cross-sectional area at the outlet of the computational domain. We set in the outlet ghost cell $C_{out}$ a constant cross-sectional area $A_{out}$
\[
A_{out}^n = A_{out}.
\]
We need only to compute $Q_{out}^n$ to completely determine the outlet vector of conservative variables $\mathbf{U_{out}^n}$. To do so, we estimate as in the previous section $W_2\left( \mathbf{U_{out}^n} \right)$ by
\[
W_2\left(\mathbf{U_{out}^n}\right) = W_2\left( \mathbf{U_N^n} \right),
\]
and using the relations \eqref{eq:bf-AQW}, we compute $W_1\left(\mathbf{U_{out}^n}\right)$ and then $Q_{out}^n$
\begin{equation}
\left\{
\begin{split}
& W_1\left(\mathbf{U_{out}^n}\right) = W_2\left(\mathbf{U_{out}^n}\right) - 8 c_{out}^n\\
& Q_{out}^n = A_{out}^n \frac{W_1\left(\mathbf{U_{out}^n}\right)+W_2\left(\mathbf{U_{out}^n}\right)}{2} . \\
\end{split}
\right.
\label{eq:bc-out-sys}
\end{equation}
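As a minimal illustration of this boundary condition (Python; names are ours, with the invariant convention $W_{1,2}=u\mp 4c$ and $c=\sqrt{K/(2\rho)}\,A^{1/4}$):

```python
import math

rho = 1.0  # blood density in cgs units

def celerity(A, K):
    # wave speed c = sqrt(K / (2 rho)) * A^(1/4)
    return math.sqrt(K / (2.0 * rho)) * A ** 0.25

def outlet_flow_rate(A_N, Q_N, A_out, K):
    # outgoing invariant estimated from the last interior cell
    W2 = Q_N / A_N + 4.0 * celerity(A_N, K)
    # incoming invariant consistent with the imposed area: W1 = W2 - 8 c_out
    W1 = W2 - 8.0 * celerity(A_out, K)
    return A_out * (W1 + W2) / 2.0  # Q_out = A_out * u_out
```

By construction, the ghost state $(A_{out}, Q_{out})$ carries the same outgoing invariant $W_2$ as the interior state.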
\subsubsection{Junction boundary condition: conservation of mass and continuity of total pressure}
In large network simulations, boundary conditions must also be provided at every junction point, where arteries that are neither the inlet segment nor the terminal segments are connected together. At these junction points, the outlets of the parent arteries are linked to the inlets of the daughter arteries through the conservation of mass and the continuity of total pressure, or equivalently of the energy discharge \cite{Sherwin2003-2}. We consider here a junction point where a single parent artery $A_P$ is connected to $N_D$ daughter arteries $\left( A_{Di} \right)_{i=1}^{N_D}$. The values of $\mathbf{U_{out}^n} |_{A_P}$ and of $\left(\mathbf{U_{in}^n} |_{A_{Di}}\right)_{i=1}^{N_D}$ must be computed, for a total of $2\left( N_D+1 \right)$ unknowns. $N_D+1$ equations are obtained by estimating the outgoing Riemann invariants of the parent and daughter arteries as
\begin{equation}
\left\{
\begin{split}
&W_2\left(\mathbf{U_{out}^n}\right)|_{A_P} = W_2\left( \mathbf{U_N^n} \right)|_{A_P}\\
&W_1\left(\mathbf{U_{in}^n}\right)|_{A_{Di}} = W_1\left( \mathbf{U_1^n} \right)|_{A_{Di}} , \quad i\in \left[1,N_D\right] .\\
\end{split}
\right.
\label{eq:bc-conj-Riemann}
\end{equation}
The missing equations are provided by the conservation of mass and total pressure, or equivalently the energy discharge, at each junction point \cite{Raines1974,Sherwin2003}
\begin{equation}
\left\{
\begin{split}
&Q_{out}^n|_{A_P} = \sum_{i=1}^{N_D}Q_{in}^n |_{A_{Di}} \\
&\left[\frac{1}{2} \rho \left(\frac{Q_{out}^n}{A_{out}^n}\right)^2 + K \sqrt{A_{out}^n} - Z\right]|_{A_P} = \left[\frac{1}{2} \rho \left(\frac{Q_{in}^n}{A_{in}^n}\right)^2 + \right. \\
& \qquad \qquad \qquad \qquad \left.K \sqrt{A_{in}^n} - Z \right]|_{A_{Di}} , \quad i=1,...,N_D .\\
\end{split}
\right.
\label{eq:bc-conj-Q-P}
\end{equation}
In practice, since in physiological conditions the flow is always subcritical, we can simplify the problem and impose only the continuity of pressure \cite{Alastruey2009}, neglecting the advection terms in the second equation of system \eqref{eq:bc-conj-Q-P}
\begin{equation}
\left\{
\begin{split}
&Q_{out}^n|_{A_P} = \sum_{i=1}^{N_D}Q_{in}^n |_{A_{Di}} \\
&\left[K \sqrt{A_{out}^n} - Z\right]|_{A_P} = \left[ K \sqrt{A_{in}^n} - Z \right]|_{A_{Di}}, \quad i=1,...,N_D .\\
\end{split}
\right.
\label{eq:bc-conj-Q-P-Low-Fr}
\end{equation}
This set of equations allows us to accurately compute wave reflections and transmissions when a change of impedance occurs between the parent and the daughter arteries. Accurately computing reflected waves is crucial to obtaining physiological waveforms in the simulated network, as the observed pressure and flow waves are the sum of the incoming wave and of the multiple reflected waves \cite{Alastruey2009,Alastruey2011,Politi2016}.
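A sketch of such a junction solver is given below (Python with NumPy; all names are ours). It combines the outgoing invariants \eqref{eq:bc-conj-Riemann}, with the convention $W_{1,2}=u\mp 4c$ and $c=\sqrt{K/(2\rho)}\,A^{1/4}$, with the simplified conservation of mass and continuity of pressure \eqref{eq:bc-conj-Q-P-Low-Fr}, and solves the resulting nonlinear system by a Newton iteration with a finite-difference Jacobian:

```python
import numpy as np

rho = 1.0  # blood density in cgs units

def celerity(A, K):
    return np.sqrt(K / (2.0 * rho)) * A ** 0.25

def solve_junction(W2P, KP, ZP, W1D, KD, ZD, A_guess, n_iter=50):
    """Cross-sectional areas [A_P, A_1, ..., A_ND] at a junction, from the
    outgoing invariants W2P (parent) and W1D (daughters), enforcing
    conservation of mass and continuity of pressure."""
    x = np.array(A_guess, dtype=float)

    def residual(x):
        AP, AD = x[0], x[1:]
        QP = AP * (W2P - 4.0 * celerity(AP, KP))
        QD = AD * (W1D + 4.0 * celerity(AD, KD))
        r = np.empty_like(x)
        r[0] = QP - QD.sum()                                       # mass
        r[1:] = (KP * np.sqrt(AP) - ZP) - (KD * np.sqrt(AD) - ZD)  # pressure
        return r

    for _ in range(n_iter):  # Newton with a finite-difference Jacobian
        r = residual(x)
        J = np.empty((x.size, x.size))
        for j in range(x.size):
            xp = x.copy()
            h = 1e-7 * max(1.0, abs(x[j]))
            xp[j] += h
            J[:, j] = (residual(xp) - r) / h
        dx = np.linalg.solve(J, -r)
        x += dx
        if np.max(np.abs(dx)) < 1e-12 * np.max(np.abs(x)):
            break
    return x
```

For physiological (low-Shapiro) states and an initial guess taken from the current interior states, the iteration converges in a few steps.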
\section{Hydrostatic reconstruction}
\label{sec:HR}
In many physiological configurations, the geometrical and mechanical properties of an artery vary significantly with its length. In the scope of this paper, these geometrical and mechanical gradients are limited to variations of the cross-sectional area at rest $A_0$ and the arterial wall rigidity $K$. To prevent spurious oscillations of the numerical solution of system \eqref{eq:BF-Sys} close to steady states, a well-balanced numerical scheme is required to properly balance the source term $S_T$ and the flux gradient $\partial_x \mathbf{F}$.\\
In order to make an explicit analogy with the well-balanced methods derived for shallow water equations, we introduce the following notations
\begin{equation}
\mathcal{P}(A,K)= \frac{K}{3 \rho} A^{\frac{3}{2}}
, \qquad
\mathcal{E}(A,K) = \frac{2K}{3 \rho} \sqrt{A}
, \qquad
H = K \sqrt{A}.
\label{eq:SW-Notations}
\end{equation}
With these notations, we have $\frac{\partial \mathcal{E} }{\partial A }= \frac{\mathcal{P}}{A^2}$ and the flux vector $\mathbf{F}$ can be expressed as
\begin{equation}
\mathbf{F}\left( \mathbf{U},K \right) =
\begin{bmatrix}
Q\\
\frac{Q^2}{A} + \mathcal{P}(A,K)\\
\end{bmatrix}.
\label{eq:bf-Flux-Vector}
\end{equation}
Moreover, the steady state systems \eqref{eq:BF-Steady-State-Equation} and \eqref{eq:BF-Steady-State-Equation-Rest} can respectively be written as
\begin{equation}
\left\{
\begin{split}
& Q = C_1\\
& \frac{1}{2} \frac{Q^2}{A^2} + \mathcal{E}(A,K) + \frac{\mathcal{P}(A,K)}{A} -\frac{Z}{\rho} = C_2 ,\\
\end{split}
\right.
\label{eq:BF-Steady-State-Equation-SW}
\end{equation}
and
\begin{equation}
\left\{
\begin{split}
& Q = 0\\
& H-Z = C_2 .\\
\end{split}
\right.
\label{eq:BF-Rest-Steady-State-Equation-SW}
\end{equation}
\subsection{The hydrostatic reconstruction: HR}
The hydrostatic reconstruction (HR) was introduced by Audusse \cite{Audusse2004} for shallow water equations and applied to blood flow equations by Delestre \cite{Delestre2012,Delestre2016}. Through a reconstruction of the conservative variables, HR yields a simple and efficient well-balanced numerical scheme given any finite volume numerical flux for the homogeneous problem \eqref{eq:BF-Sys-Conservative}. It is easy to implement and can easily be adapted to different pressure laws with multiple varying parameters, which is useful when considering veins, collapsible tubes and external pressure variations \cite{Pedley1996,Cavallini2010,Muller2014}. This technique preserves at the discrete level the steady states at rest \eqref{eq:BF-Rest-Steady-State-Equation-SW} and guarantees that the scheme verifies some natural properties of the shallow water equations (listed as bullets in the introduction), such as the positivity of the water height (equivalent of the cross-sectional area $A$), the ability to compute dry states and transcritical flows and a semi-discrete entropy inequality. This last property is necessary to select the admissible entropy solution across a discontinuity, as explained in \cite{GosseBook2013}.
On both sides of each cell interface $I_{i+\frac{1}{2}}$, reconstructed conservative variables are defined to preserve the following system of equations, which coincides with the steady states at rest system \eqref{eq:BF-Rest-Steady-State-Equation-SW} when the flow rate $Q$ or the velocity $u$ are zero
\begin{equation}
\left\{
\begin{split}
& u = \frac{Q}{A} = C_1 \\
& H-Z = C_2 . \\
\end{split}
\right.
\label{eq:WB-HRU-sys-SW}
\end{equation}
Details on the derivation of HR for blood flow in an artery with variable cross-sectional area $A$ and variable arterial wall rigidity $K$ can be found in \cite{Delestre2016}.\\
In large arteries, the steady states at rest preserved by HR only occur for "dead men" or distal to an obliterated segment, and are of little interest when simulating blood flow in the systemic network. However, in regions of large flow resistance such as small arteries, arterioles or arteries severely constricted by a stenosis, the flow loses its pulsatility and reaches steady or near-steady states with a non-zero flow rate. These quasi-steady flow configurations, described by the steady state system \eqref{eq:BF-Steady-State-Equation-SW}, can occur in large network simulations when the level of arterial precision extends to small arteries and arterioles or in the presence of a very severe stenosis. Therefore, a modification of HR is necessary to capture these relevant steady states for blood flow in large arteries.
\subsection{The low-Shapiro hydrostatic reconstruction: HR-LS}
System \eqref{eq:BF-Steady-State-Equation-SW} is nonlinear and difficult to solve in practice. However, in physiological conditions, blood flow is subcritical with a Shapiro number of the order of $S_h \approx 1 \times 10^{-2}$. Therefore, the nonlinear advection term $\frac{1}{2}\frac{Q^2}{A^2}$ in system \eqref{eq:BF-Steady-State-Equation-SW} can be neglected at first order with respect to the term $\mathcal{E}(A,K) + \frac{\mathcal{P}(A,K)}{A} -\frac{Z}{\rho}$ that scales as $c^2$. Doing so, we obtain the following simplified low-Shapiro number steady state system of equations
\begin{equation}
\left\{
\begin{split}
& Q = C_1\\
& H-Z = C_2 . \\
\end{split}
\right.
\label{eq:WB-HRQ-sys-SW}
\end{equation}
System \eqref{eq:WB-HRQ-sys-SW} coincides with the steady state at rest system \eqref{eq:BF-Rest-Steady-State-Equation-SW} when $Q$ or $u$ are zero and is an asymptotically correct approximation of the steady state system \eqref{eq:BF-Steady-State-Equation-SW} in low-Shapiro number flow regimes. It also contains the correct conservation properties to obtain low-Shapiro number wave reflections if a change of impedance occurs at the interface between two cells of the computational domain. Indeed, the conservation properties of system \eqref{eq:WB-HRQ-sys-SW} are identical to those of system \eqref{eq:bc-conj-Q-P-Low-Fr}, which have proved to be adequate to compute wave reflections and transmissions at junction points \cite{Alastruey2009,Wang2015}. System \eqref{eq:WB-HRQ-sys-SW} is the basis for the derivation of the modification of HR we propose in this study, referred to as the low-Shapiro hydrostatic reconstruction (HR-LS) and better suited to compute blood flow in physiological conditions.
HR-LS aims at preserving low-Shapiro number steady states system \eqref{eq:WB-HRQ-sys-SW} in an artery with a varying cross-sectional area at rest $A_0$ and arterial wall rigidity $K$. Similarly to HR, the well-balanced property is enforced by defining reconstructed variables on both sides of each cell interface $I_{i+\frac{1}{2}}$ according to the reconstruction procedure \eqref{eq:WB-HRQ-sys-SW}. In the following, variables noted with $"^*"$ will refer to the reconstructed variables. Given the vectors of conservative variables $\mathbf{U_L}$ and $\mathbf{U_R}$ and the vectors $\bm{\sigma_L}$ and $\bm{\sigma_R}$ at the left and right of the interface $I_{i+\frac{1}{2}}$ between cells $C_i$ and $C_{i+1}$, the discrete analogue of system \eqref{eq:WB-HRQ-sys-SW} writes
\begin{equation}
\left\{
\begin{split}
&Q_{L}^* = Q_{L}\\
&H_{L}^*- Z^* = H_L - Z_L ,\\
\end{split}
\right.
\quad
\left\{
\begin{split}
&Q_{R}^* = Q_{R}\\
&H_{R}^*- Z^* = H_R - Z_R .\\
\end{split}
\right.
\label{eq:WB-HRQ-sys-SW-Discrete}
\end{equation}
By solving system \eqref{eq:WB-HRQ-sys-SW-Discrete} and preserving the positivity of $H$, we obtain the following reconstructed variables
\begin{equation}
\left\{
\begin{split}
&H_L^* = \max \left( 0,\:Z^* + H_L - Z_L \right)\\
&Q_L^* = Q_{L} ,\\
\end{split}
\right.
\quad
\left\{
\begin{split}
&H_R^* = \max \left( 0,\:Z^* + H_R - Z_R \right)\\
& Q_R^* = Q_{R} . \\
\end{split}
\right.
\end{equation}
The reconstructed variable $Z^*$ is chosen considering nonlinear stability arguments that require that
\[
\left\{
\begin{split}
&0 \leq H_{L}^* \leq H_{L}\\
&0 \leq H_{R}^* \leq H_{R} ,\\
\end{split}
\right.
\]
to preserve the positivity of $H$. A simple choice is the downwind value
\begin{equation}
\begin{split}
Z^* = \min \left( Z_L,Z_R \right) .
\end{split}
\end{equation}
In order to obtain the reconstructed values $A_L^*$ and $A_R^*$, we must select a reconstruction for $K^*$. Following \cite{BouchutBook2004,Delestre2016} we choose
\begin{equation}
K^* = \max(K_L,K_R).
\end{equation}
Therefore, we directly have
\begin{equation}
A_{L}^* = \left( \frac{H_{L}^*}{K^*} \right)^2
, \qquad
A_{R}^* = \left( \frac{H_{R}^*}{K^*} \right)^2 .
\end{equation}
Finally, at each side of the interface $I_{i+\frac{1}{2}}$, we obtain the reconstructed conservative vectors
\begin{equation}
\mathbf{U_L}^* =
\begin{bmatrix}
A_L^*\\
Q_L^*\\
\end{bmatrix}
, \qquad
\mathbf{U_R}^* =
\begin{bmatrix}
A_R^*\\
Q_R^*\\
\end{bmatrix} ,
\label{eq:bf-WB-Conservative-Vector}
\end{equation}
that will be used to compute the numerical flux $\mathcal{F}\left( \mathbf{U_L^*} , \mathbf{U_R}^* \right)$.\\
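The reconstruction procedure can be condensed into a few lines. The Python sketch below (names are ours) maps the left and right states at one interface to their reconstructed counterparts; at a steady state at rest ($Q=0$, $H-Z$ constant) both reconstructed states coincide, which is precisely what removes the spurious flux imbalance:

```python
import math

def hr_ls_reconstruct(AL, QL, KL, ZL, AR, QR, KR, ZR):
    # interface values Z* (downwind value) and K* (larger rigidity)
    Zs = min(ZL, ZR)
    Ks = max(KL, KR)
    # preserve Q and H - Z on each side, keeping H non-negative
    HLs = max(0.0, Zs + KL * math.sqrt(AL) - ZL)
    HRs = max(0.0, Zs + KR * math.sqrt(AR) - ZR)
    # back to conservative variables: A* = (H*/K*)^2, Q* = Q
    return (HLs / Ks) ** 2, QL, (HRs / Ks) ** 2, QR
```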
A conservative formulation for the source term $S_T$ is obtained by integrating over the cell $C_i$ the steady flux gradient in which the nonlinear advection term is neglected. This approximation is valid in low-Shapiro number flow regimes, and therefore particularly appropriate for blood flow in large arteries. The following conservative expression for $S_T$ is obtained, expressed in terms of the reconstructed conservative vector $\mathbf{U}^*$
\begin{equation}
\begin{split}
S_{Ti}^n = & \frac{1}{\Delta x} \int_{C_i} S_T\left( \mathbf{U},\bm{\sigma} \right) \text{d}x = \mathcal{P}\left( {A_{L,i+\frac{1}{2}}^*},K_{i+\frac{1}{2}}^* \right) - \mathcal{P}\left( {A_{R,i-\frac{1}{2}}^*},K_{i-\frac{1}{2}}^* \right) ,\\
\end{split}
\end{equation}
where $\left( {A_{L,i+\frac{1}{2}}^*},{A_{R,i-\frac{1}{2}}^*} \right)$ are the reconstructed cross-sectional areas at the left of the cell interface $I_{i+\frac{1}{2}}$ and at the right of the cell interface $I_{i-\frac{1}{2}}$ respectively and $\left( {K_{i+\frac{1}{2}}^*} , {K_{i-\frac{1}{2}}^*} \right)$ are the reconstructed arterial wall rigidities at the cell interfaces $I_{i+\frac{1}{2}}$ and $I_{i-\frac{1}{2}}$ respectively. For consistency reasons, we modify the previous expression and write
\begin{equation}
\left.
\begin{split}
&S_{Ti} =&&
\left[\mathcal{P}\left( {A_{L,i+\frac{1}{2}}^*},K_{i+\frac{1}{2}}^* \right) - \mathcal{P}\left( {A_{L,i+\frac{1}{2}}},K_{L,i+\frac{1}{2}} \right) \right] - \\
&&& \left[\mathcal{P}\left( {A_{R,i-\frac{1}{2}}^*},K_{R,i-\frac{1}{2}}^* \right) - \mathcal{P}\left( {A_{R,i-\frac{1}{2}}},K_{R,i-\frac{1}{2}} \right) \right] . \\
\end{split}
\right.
\end{equation}
To simplify the expression, we introduce the notation
\begin{equation}
\mathcal{P}\left( A,A^*,K,K^* \right)=\mathcal{P}\left( A^*,K^* \right) - \mathcal{P}\left( A,K \right).
\end{equation}
With these notations, the first order well-balanced finite-volume numerical scheme for system \eqref{eq:BF-Sys} is simply
\begin{equation}
\mathbf{U_i^{n+1}} = \mathbf{U_i^n} - \frac{\Delta t}{\Delta x} \left[ \mathbf{F^{n*}_{i+\frac{1}{2}}} - \mathbf{F^{n*}_{i-\frac{1}{2}}} \right],
\label{eq:BF-First-Order-Num-HRLS}
\end{equation}
with
\begin{equation}
\left\{
\begin{split}
&\mathbf{F^{n*}_{i+\frac{1}{2}}} = && \mathcal{F}\left( \mathbf{U_{L,i+\frac{1}{2}}}^*,\mathbf{U_{R,i+\frac{1}{2}}}^*,K_{i+\frac{1}{2}}^* \right) +\\
& & &
\begin{bmatrix}
0 \\
\mathcal{P}\left( A_{L,i+\frac{1}{2}},A_{L,i+\frac{1}{2}}^*,K_{L,i+\frac{1}{2}},K_{i+\frac{1}{2}}^* \right)\\
\end{bmatrix}
\\
&\mathbf{F^{n*}_{i-\frac{1}{2}}} = & & \mathcal{F}\left( \mathbf{U_{L,i-\frac{1}{2}}}^*,\mathbf{U_{R,i-\frac{1}{2}}}^*,K_{i-\frac{1}{2}}^* \right) + \\
& & &
\begin{bmatrix}
0 \\
\mathcal{P}\left( A_{R,i-\frac{1}{2}},A_{R,i-\frac{1}{2}}^*,K_{R,i-\frac{1}{2}},K_{i-\frac{1}{2}}^* \right) \\
\end{bmatrix}
.\\
\end{split}
\right.
\end{equation}
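To make the assembly concrete, here is a minimal one-step implementation of the scheme in Python (all names are ours). A Rusanov numerical flux is used for $\mathcal{F}$ as an example only, and the sign of the pressure correction is written so that steady states at rest are preserved exactly; reconstructed areas are assumed non-vanishing:

```python
import numpy as np

rho = 1.0  # blood density in cgs units

def pterm(A, K):                    # pressure flux term P(A, K) = K A^{3/2} / (3 rho)
    return K * A ** 1.5 / (3.0 * rho)

def celerity(A, K):                 # c = sqrt(K / (2 rho)) A^{1/4}
    return np.sqrt(K / (2.0 * rho)) * A ** 0.25

def rusanov(AL, QL, AR, QR, K):
    # example numerical flux for the homogeneous system
    lam = max(abs(QL / AL) + celerity(AL, K), abs(QR / AR) + celerity(AR, K))
    FL = np.array([QL, QL * QL / AL + pterm(AL, K)])
    FR = np.array([QR, QR * QR / AR + pterm(AR, K)])
    return 0.5 * (FL + FR) - 0.5 * lam * np.array([AR - AL, QR - QL])

def hr_ls_step(A, Q, A0, K, dx, dt):
    """One forward-Euler step of the first-order HR-LS scheme.
    Boundary cells are kept frozen for simplicity."""
    Z, H = K * np.sqrt(A0), K * np.sqrt(A)
    n = A.size
    FA = np.zeros(n - 1); FQL = np.zeros(n - 1); FQR = np.zeros(n - 1)
    for i in range(n - 1):          # interface between cells i and i+1
        Zs, Ks = min(Z[i], Z[i + 1]), max(K[i], K[i + 1])
        ALs = (max(0.0, Zs + H[i] - Z[i]) / Ks) ** 2
        ARs = (max(0.0, Zs + H[i + 1] - Z[i + 1]) / Ks) ** 2
        F = rusanov(ALs, Q[i], ARs, Q[i + 1], Ks)
        FA[i] = F[0]
        # pressure corrections on each side of the interface
        FQL[i] = F[1] + pterm(A[i], K[i]) - pterm(ALs, Ks)
        FQR[i] = F[1] + pterm(A[i + 1], K[i + 1]) - pterm(ARs, Ks)
    An, Qn = A.copy(), Q.copy()
    for i in range(1, n - 1):
        An[i] = A[i] - dt / dx * (FA[i] - FA[i - 1])
        Qn[i] = Q[i] - dt / dx * (FQL[i] - FQR[i - 1])
    return An, Qn
```

Starting from a stenosed artery at rest ($A = A_0$, $Q = 0$), one step of this scheme leaves the state unchanged up to round-off, which is the well-balanced property for the steady states at rest.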
It is straightforward to see that HR-LS is well-balanced for the steady states at rest system \eqref{eq:BF-Rest-Steady-State-Equation-SW} and provides a good evaluation of low-Shapiro number steady states system \eqref{eq:WB-HRQ-sys-SW}. It also guarantees the following natural properties of blood flow equations:
\begin{itemize}
\item the conservation of mass;
\item the non-negativity of the cross-sectional area $A$;
\item correct reflection and transmission conditions when variations of vessel impedance occur.
\end{itemize}
In physiological conditions, the arteries never completely collapse, therefore the numerical scheme no longer needs to be able to compute dry states. Furthermore, as the flow is subcritical and the heart input signal is not discontinuous, transcritical or supercritical regimes and discontinuities of the conservative variables do not occur. Hence the semi-discrete entropy inequality as well as the ability to compute transcritical flows are no longer crucial requirements of the numerical scheme. Finally, the viscosity of the blood and of the arterial wall, which are not taken into account in the theoretical part of this study, are of great importance in arteries and have diffusive and dissipative effects that remove high frequency components and therefore any discontinuity in the conservative variables. This last point will be addressed in the last example, in section \ref{sec:Ex-55}.
\subsection{The subsonic hydrostatic reconstruction: HR-S}
In \cite{Bouchut2010}, Bouchut proposed an extension of HR, referred to as the subsonic hydrostatic reconstruction (HR-S), ideal for blood flow simulations in large arteries. HR-S is well-balanced for all subcritical steady states \eqref{eq:BF-Steady-State-Equation-SW} and also preserves the good properties of HR (listed as bullets in the introduction), that is the positivity of the water height (equivalent of the cross-sectional area $A$), the ability to compute dry states and transcritical flows and a semi-discrete entropy inequality. HR-S is also able to correctly capture wave reflections and transmissions in regions where the impedance of the arterial wall changes. Indeed, the subcritical steady states system \eqref{eq:BF-Steady-State-Equation-SW} coincides with the junction conservation properties \eqref{eq:bc-conj-Q-P}. However, HR-S requires the resolution of the nonlinear steady state system \eqref{eq:BF-Steady-State-Equation-SW} at each time step at every cell interface presenting a gradient of the artery's geometrical or mechanical properties. This increases the computational cost compared to HR and HR-LS, especially if the region requiring a well-balanced treatment is not limited to a few cells.
In this section, we present the derivation of HR-S adapted to blood flow in an artery where both variations of the cross-sectional area at rest $A_0$ and variations of the arterial wall rigidity $K$ are taken into account. HR-S will serve as an exactly well-balanced reference method to be compared with HR and HR-LS. In particular, HR-S will allow us to assess whether relaxing the dry-state property and the semi-discrete entropy inequality in HR-LS impacts solutions of blood flow in physiological conditions. With the notations \eqref{eq:SW-Notations} and \eqref{eq:BF-Steady-State-Equation-SW}, we are in the framework introduced by Bouchut \cite{Bouchut2010}. Therefore, we will only briefly recall the main steps of the derivation of HR-S; additional details can be found in the cited publication.
\subsubsection{Well-balanced subsonic positivity-preserving reconstruction procedure for the cross-sectional area $A$}
Similarly to HR and HR-LS, the well-balanced property is enforced by defining reconstructed variables on both sides of each cell interface $I_{i+\frac{1}{2}}$ according to the reconstruction procedure \eqref{eq:BF-Steady-State-Equation-SW}. Variables noted with $"^*"$ will refer to the reconstructed variables. Following \cite{Bouchut2010}, we introduce the function $f$
\begin{equation}
\left.
\begin{split}
f: \: \:& \mathbb{R}\times \left(\mathbb{R}^{+*}\right)^2 &\rightarrow& \mathbb{R} \\
& \left(Q,A,K \right) &\rightarrow& \frac{1}{2}\frac{Q^2}{A^2} + \left[ \mathcal{E}\left( A,K \right) + \frac{\mathcal{P}\left( A,K \right)}{A} \right] ,\\
\end{split}
\right.
\label{eq:WB-HRS-f}
\end{equation}
and given the vectors of conservative variables $\mathbf{U_L}$ and $\mathbf{U_R}$ and the vectors $\bm{\sigma_L}$ and $\bm{\sigma_R}$ at the left and right of the interface $I_{i+\frac{1}{2}}$ between cells $C_i$ and $C_{i+1}$, the discrete analogue of system \eqref{eq:BF-Steady-State-Equation-SW} writes
\begin{equation}
\left.
\begin{split}
&\left\{
\begin{split}
& Q_L^* = Q_L \\
& f\left( Q_L^*,A_L^*,K^* \right) = f\left( Q_L,A_L,K_L \right) + \delta_L\\
\end{split}
\right.\\
&\left\{
\begin{split}
& Q_R^* = Q_R \\
& f\left( Q_R^*,A_R^*,K^* \right) = f\left( Q_R,A_R,K_R \right) + \delta_R ,\\
\end{split}
\right.\\
\end{split}
\right.
\label{eq:BF-Steady-State-Equation-f}
\end{equation}
with
\begin{equation}
\left\{
\begin{split}
&\delta_L = \frac{1}{\rho}\left( Z^* - Z_L \right)\\
&\delta_R = \frac{1}{\rho}\left( Z^* - Z_R \right) .\\
\end{split}
\right.
\label{eq:BF-HRS-delta}
\end{equation}
Similarly to HR-LS, the reconstruction of the flow rate $Q^*$ is straightforward. However, contrary to HR and HR-LS, system \eqref{eq:BF-Steady-State-Equation-f} is nonlinear in $A^*$ and is difficult to solve analytically. To aid in solving system \eqref{eq:BF-Steady-State-Equation-f}, we recall the following properties (see \cite{Bouchut2010} for details).\\
For fixed values of $Q$ and $K$, the function $f$ admits a minimum at $A_s\left( Q,K \right)$, and $m_s\left( Q,K \right)$ denotes this minimum value of $f$
\begin{equation}
A_s\left( Q,K \right) = \left( \frac{2 \rho }{K} Q^2 \right)^{\frac{2}{5}}
, \qquad
m_s\left( Q,K \right) = \frac{5}{4}\frac{K}{\rho}\left[ \frac{2 \rho }{K} Q^2 \right]^{\frac{1}{5}} .
\label{eq:WB-HRS-As-ms}
\end{equation}
For fixed values of $Q$ and $K$ and since the function $f$ is convex, system \eqref{eq:BF-Steady-State-Equation-f} admits a subcritical and a supercritical solution for the cross-sectional area $A$ if $f\left( Q,A,K \right)> m_s\left( Q,K \right)$. Furthermore, if $A > A_s\left( Q,K \right)$ the flow is subcritical with $U \in \mathbb{U}_{sub}$ and conversely, if $A < A_s\left( Q,K \right)$ the flow is supercritical with $U \in \mathbb{U}_{sup}$ (see figure \ref{fig:HRS-f}). \\
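These expressions are easy to check numerically. For the pressure law used here, $\mathcal{E}(A,K)+\mathcal{P}(A,K)/A = (K/\rho)\sqrt{A}$, so that $f(Q,A,K) = Q^2/(2A^2) + (K/\rho)\sqrt{A}$; the short Python check below (values are ours) confirms that $A_s$ is the minimizer of $f$ and $m_s$ its minimum value:

```python
import math

rho = 1.0  # blood density in cgs units

def f(Q, A, K):
    # f(Q, A, K) = Q^2/(2 A^2) + E(A, K) + P(A, K)/A = Q^2/(2 A^2) + (K/rho) sqrt(A)
    return 0.5 * Q * Q / (A * A) + (K / rho) * math.sqrt(A)

def A_s(Q, K):
    # critical (sonic) cross-sectional area, minimizer of f at fixed Q and K
    return (2.0 * rho * Q * Q / K) ** 0.4

def m_s(Q, K):
    # minimum value of f at fixed Q and K
    return 1.25 * (K / rho) * (2.0 * rho * Q * Q / K) ** 0.2
```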
Using these properties, Bouchut \cite{Bouchut2010} proposed a reconstruction procedure for the cross-sectional area $A^*$. The first step is to select reconstructions of the variables $Z^*$ and $K^*$ that preserve the positivity of $A$ and select the subcritical solution of system \eqref{eq:BF-Steady-State-Equation-f}. The following inequalities must be verified to respectively preserve the positivity of $A$ and select the subcritical solution of system \eqref{eq:BF-Steady-State-Equation-f}
\begin{equation}
\left\{
\begin{split}
&A_{L}^* \leq A_{L}\\
&A_{R}^* \leq A_{R} ,\\
\end{split}
\right.
\label{eq:WB-HRS-A-Inequality}
\end{equation}
and
\begin{equation}
\left\{
\begin{split}
&A_s \leq A_{L}^* \\
&A_s \leq A_{R}^* .\\
\end{split}
\right.
\label{eq:WB-HRS-A-Inequality-Sub}
\end{equation}
The inequalities \eqref{eq:WB-HRS-A-Inequality-Sub} are naturally verified as we consider only subcritical flow configurations. In contrast, the inequalities \eqref{eq:WB-HRS-A-Inequality} are verified if the inequalities \eqref{eq:WB-HRS-A-Inequality-Sub} hold and if $Z^*$ and $K^*$ are chosen such that $\delta_{L,R}\leq 0$. A simple choice for $Z^*$ and $K^*$ is
\begin{equation}
Z^* = \min \left( Z_L,Z_R \right)
, \qquad
K^* = \max(K_L,K_R).
\label{eq:WB-HRS-Z-K}
\end{equation}
Given the expressions \eqref{eq:WB-HRS-Z-K} for $Z^*$ and $K^*$, we adapted the reconstruction procedure for the cross-sectional area $A^*$ proposed by Bouchut \cite{Bouchut2010} to blood flow in arteries with variable cross-sectional area $A_0$ and variable arterial wall rigidity $K$. It is summarized in figure \ref{fig:HRS-f} and presented in algorithm \ref{alg:HRS-A-Reconstruction}, which describes the steps needed to obtain the reconstructed cross-sectional area $A_{L}^*$, solution of system \eqref{eq:BF-Steady-State-Equation-f}. The same algorithm can be applied to reconstruct $A_R^*$.\\
\begin{algorithm}[H]
\caption{Algorithm to compute the reconstructed cross-sectional area $A_L^*$ to enforce the well-balanced property by interface for the steady state system \eqref{eq:BF-Steady-State-Equation-SW}.}
\label{alg:HRS-A-Reconstruction}
\begin{algorithmic}
\If{$\delta_L =0$}
\State $A_{L}^* \gets A_L$
\Else
\If{$u_L \geq c_L$}
\State $A_{L}^* \gets A_L$
\Else
\If{$f\left( Q_L,A_L,K_L \right) + \delta_L > m_s\left( Q_L,K^* \right)$}
\State \[
\left\{
\begin{split}
&Q_{L}^* = Q_{L}\\
&f\left( Q_{L}^*,A_L^*,K^* \right) = f\left( Q_L,A_L,K_L \right) + \delta_L\\
\end{split}
\right.
\]
\State The solution of the system can be obtained numerically using a recursive procedure.
\Else
\State $A_{L}^* \gets A_s(Q_L,K^*)$
\EndIf
\EndIf
\EndIf
\end{algorithmic}
\end{algorithm}
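The nonlinear step of algorithm \ref{alg:HRS-A-Reconstruction} (the case $f\left( Q_L,A_L,K_L \right) + \delta_L > m_s$) can be solved by bisection on the increasing, subcritical branch of $f$. The Python sketch below (names are ours) assumes a subcritical left state, $\delta_L \le 0$ and $K^* \ge K_L$, so that $[A_s(Q_L,K^*), A_L]$ brackets the root:

```python
import math

rho = 1.0  # blood density in cgs units

def f(Q, A, K):
    # for this pressure law, E(A,K) + P(A,K)/A = (K/rho) sqrt(A)
    return 0.5 * Q * Q / (A * A) + (K / rho) * math.sqrt(A)

def A_s(Q, K):
    return (2.0 * rho * Q * Q / K) ** 0.4

def m_s(Q, K):
    return 1.25 * (K / rho) * (2.0 * rho * Q * Q / K) ** 0.2

def reconstruct_A_subcritical(QL, AL, KL, Ks, dL, n_iter=100):
    """Subcritical root A* of f(QL, A*, K*) = f(QL, AL, KL) + dL,
    assuming dL <= 0, Ks >= KL and a subcritical left state."""
    target = f(QL, AL, KL) + dL
    if target <= m_s(QL, Ks):
        return A_s(QL, Ks)                   # sonic limit case of the algorithm
    lo, hi = A_s(QL, Ks), max(AL, A_s(QL, Ks))
    for _ in range(n_iter):                  # f is increasing on [A_s, +inf)
        mid = 0.5 * (lo + hi)
        if f(QL, mid, Ks) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

In practice a Newton iteration on the same branch converges faster; bisection is used here for robustness and simplicity.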
\begin{figure}[!h]
\begin{center}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{1\textwidth}
\centering
\includegraphics[scale=0.35,angle=0,trim={4cm 4cm 2.4cm 4cm},clip]{Figure2.pdf}\\
\end{minipage}%
}
\caption{Representation of the function $f\left( Q_L,\cdot,\cdot\right)$. The abscissa of the intersections between the function $f$ and the straight lines representing the different values of $f\left( Q_L^*,A_L^*,K^* \right)$ give the possible values of $A_L^*$. A graphical analysis shows that conditions \eqref{eq:WB-HRS-A-Inequality} and \eqref{eq:WB-HRS-A-Inequality-Sub} are met only for $\delta_L < 0$.}
\label{fig:HRS-f}
\end{center}
\end{figure}
\subsubsection{Well-balanced subsonic first-order numerical scheme}
Similarly to HR and HR-LS, a conservative formulation for the source term $S_T$ is obtained by integrating over the cell $C_i$ the steady flux gradient. However, the nonlinear advection term is no longer neglected and an additional flux term is introduced to take it into account
\begin{equation}
\begin{split}
S_{Ti} =&
\mathcal{P}\left( A_{L,i+\frac{1}{2}},A_{L,i+\frac{1}{2}}^*,K_{L,i+\frac{1}{2}},K_{i+\frac{1}{2}}^* \right) + \\
&\mathcal{T}_L\left( \mathbf{U_{L,i+\frac{1}{2}}},\mathbf{U_{L,i+\frac{1}{2}}}^*,\mathbf{U_{R,i+\frac{1}{2}}}^*,K_{i+\frac{1}{2}}^* \right) - \\
& \mathcal{P}\left( A_{R,i-\frac{1}{2}},A_{R,i-\frac{1}{2}}^*,K_{R,i-\frac{1}{2}},K_{i-\frac{1}{2}}^* \right) - \mathcal{T}_R\left( \mathbf{U_{R,i-\frac{1}{2}}}, \mathbf{U_{L,i-\frac{1}{2}}}^*,\mathbf{U_{R,i-\frac{1}{2}}}^*,K_{i-\frac{1}{2}}^* \right),
\end{split}
\end{equation}
where $\left( {A_{L,i+\frac{1}{2}}^*},{A_{R,i-\frac{1}{2}}^*} \right)$ are the reconstructed cross-sectional areas at the left of the cell interface $I_{i+\frac{1}{2}}$ and at the right of the cell interface $I_{i-\frac{1}{2}}$ respectively and $\left( {K_{i+\frac{1}{2}}^*} , {K_{i-\frac{1}{2}}^*} \right)$ are the reconstructed arterial wall rigidities at the cell interfaces $I_{i+\frac{1}{2}}$ and $I_{i-\frac{1}{2}}$ respectively. The additional fluxes $\mathcal{T}_L$ and $\mathcal{T}_R$ are chosen such that the numerical scheme satisfies an entropy inequality by interface (see \cite{Bouchut2010} for details). The computation of $\mathcal{T}_L$ and $\mathcal{T}_R$ is presented in algorithm \ref{alg:HRS-Flux-Computation}. Only the steps that need to be followed to obtain $\mathcal{T}_L$ are detailed in algorithm \ref{alg:HRS-Flux-Computation}, but similar results are obtained for $\mathcal{T}_R$.
\begin{algorithm}[H]
\caption{Algorithm to compute the flux $\mathcal{T}_L$ used in HR-S to balance the nonlinear advection term $\frac{Q^2}{A}$ and the source term $S_T$.}
\label{alg:HRS-Flux-Computation}
To simplify the expression of $\mathcal{T}_L$ we use the following notations
\[
\left\{
\begin{split}
&\mathcal{F_A} = \mathcal{F_A}\left( \mathbf{U_L^*},\mathbf{U_R^*},K^* \right)\\
&\mathcal{F_Q} = \mathcal{F_Q}\left( \mathbf{U_L^*},\mathbf{U_R^*},K^* \right)\\
&\mathcal{P} = \mathcal{P}\left( A_L,A_L^*,K_L,K^* \right)\\
& \Delta f = f\left( Q_L^*,A_L^*,K^* \right) - f\left( Q_L,A_L,K_L \right) - \delta_L, \\
\end{split}
\right.
\]
\begin{algorithmic}
\If{$\delta_L =0$}
\State $\mathcal{T}_L\left(\mathbf{U_L}, \mathbf{U_L^*},\mathbf{U_R^*},K^* \right) \gets 0$
\Else
\If{$u_L \geq c_L$}
\State $ \mathcal{T}_L\left(\mathbf{U_L}, \mathbf{U_L^*},\mathbf{U_R^*},K^* \right) \gets -\frac{A_L}{Q_L} \, \mathcal{F_A} \, \delta_L $
\Else
\If{$f\left( Q_L,A_L,K_L \right) + \delta_L > m_s\left( Q_L,K^* \right)$}
\State \[
\begin{split}
\mathcal{T}_L\left(\mathbf{U_L}, \mathbf{U_L^*},\mathbf{U_R^*},K^* \right) \gets
&\frac{A_L - A_L^*}{A_L^*} \left[ \mathcal{F_Q} - \mathcal{P} - \frac{Q_L^*}{A_L^*}\mathcal{F_A} \right] - \\
&\mathcal{F_A} \left[ \frac{Q_L^*}{A_L^*} - \frac{Q_L}{A_L} \right]
\end{split}
\]
\Else
\State \[
\begin{split}
\mathcal{T}_L\left(\mathbf{U_L}, \mathbf{U_L^*},\mathbf{U_R^*},K^* \right) \gets
&\frac{A_L - A_L^*}{A_L^*} \left[ \mathcal{F_Q} - \mathcal{P} - \frac{Q_L^*}{A_L^*}\mathcal{F_A} \right] - \\
&\mathcal{F_A} \left[ \frac{Q_L^*}{A_L^*} - \frac{Q_L}{A_L} \right] + \frac{A_L}{Q_L} \mathcal{F_A} \, \Delta f
\end{split}
\]
\EndIf
\EndIf
\EndIf
\end{algorithmic}
\end{algorithm}
Finally, the first-order well-balanced finite-volume numerical scheme for system \eqref{eq:BF-Sys} is still
\begin{equation}
\mathbf{U_i^{n+1}} = \mathbf{U_i^n} - \frac{\Delta t}{\Delta x} \left[ \mathbf{F^{n*}_{i+\frac{1}{2}}} - \mathbf{F^{n*}_{i-\frac{1}{2}}} \right],
\label{eq:BF-First-Order-Num-HRS}
\end{equation}
with
\begin{equation}
\left\{
\begin{split}
&\mathbf{F^{n*}_{i+\frac{1}{2}}} & =& \mathcal{F}\left( \mathbf{U_{L,i+\frac{1}{2}}}^*,\mathbf{U_{R,i+\frac{1}{2}}}^*,K_{i+\frac{1}{2}}^* \right) +\\
& &&
\begin{bmatrix}
0 \\
\mathcal{P}\left( A_{L,i+\frac{1}{2}},A_{L,i+\frac{1}{2}}^*,K_{L,i+\frac{1}{2}},K_{i+\frac{1}{2}}^* \right) + \mathcal{T}_L\left(\mathbf{U_{L,i+\frac{1}{2}}}, \mathbf{U_{L,i+\frac{1}{2}}}^*,\mathbf{U_{R,i+\frac{1}{2}}}^*,K_{i+\frac{1}{2}}^* \right)\\
\end{bmatrix}
\\
&\mathbf{F^{n*}_{i-\frac{1}{2}}} & =& \mathcal{F}\left( \mathbf{U_{L,i-\frac{1}{2}}}^*,\mathbf{U_{R,i-\frac{1}{2}}}^*,K_{i-\frac{1}{2}}^* \right) +\\
& &&
\begin{bmatrix}
0 \\
\mathcal{P}\left( A_{R,i-\frac{1}{2}},A_{R,i-\frac{1}{2}}^*,K_{R,i-\frac{1}{2}},K_{i-\frac{1}{2}}^* \right) + \mathcal{T}_R\left(\mathbf{U_{R,i-\frac{1}{2}}}, \mathbf{U_{L,i-\frac{1}{2}}}^*,\mathbf{U_{R,i-\frac{1}{2}}}^*,K_{i-\frac{1}{2}}^* \right)\\
\end{bmatrix}
. \\
\end{split}
\right.
\end{equation}
In the following section, we present a series of numerical test-cases where we systematically compare HR, HR-LS and HR-S.
\section{Physiological examples in a single artery}
\label{sec:Ex-Single}
In this section we present a series of numerical computations designed to evaluate the performance, in physiological conditions, of the low-Shapiro hydrostatic reconstruction (HR-LS) in comparison with the hydrostatic reconstruction (HR) and the subsonic hydrostatic reconstruction (HR-S). All quantities are expressed in centimeters, grams and seconds, or equivalently "cgs", which are the natural units to describe blood flow. Indeed, the density of blood is close to $1$ in "cgs".\\
The following numerical simulations are performed in a single artery representative of a large artery such as the aorta. Table \ref{table:Artery-Properties} summarizes the values of the characteristic properties of blood and of the artery, namely the blood density $\rho$, the length $L$ of the artery and the inlet radius at rest and arterial wall rigidity $R_{in}$ and $K_{in}$, all written in "cgs".
\begin{table}[H]
\begin{center}
\def\arraystretch{1.2}
\begin{tabular}{c|c|c|c}
$\rho$ [$g.cm^{-3}$] & $L$ [$cm$] & $R_{in}$ [$cm$] & $K_{in}$ [$g.cm^{-2}.s^{-2}$] \\
\hline
1 & 10 & 0.5 & $1 \times 10^5$
\end{tabular}
\end{center}
\caption{Parameters describing the artery used in the different test-cases, given in "cgs": the density $\rho$, the length $L$, the inlet radius $R_{in}$ and the inlet rigidity $K_{in}$.}
\label{table:Artery-Properties}
\end{table}
We study two geometrical configurations in which both the cross-sectional area at rest $A_0$ and the arterial wall rigidity $K$ vary. Both are idealized representations of variations of arteries' geometrical and mechanical properties encountered in arterial networks. The first configuration is a smooth stenosis and corresponds to a local reduction of the cross-sectional area at rest $A_0$. It is a classical arterial pathology caused by the formation of plaque that deposits on the arterial wall and slowly obliterates the vessel. The stenosis is represented in figure \ref{fig:Stenosis-Step-R0-K} and is defined by the following radius at rest $R_0$ and arterial wall rigidity $K$
\begin{equation}
\left\{
\begin{split}
& R_0\left( x \right) = & \left\{
\begin{split}
&R_{in} & \: \: \: \text{ if } & x < x_s \text{ or } x > x_f \\
&R_{in} \left( 1 - \frac{\Delta \mathcal{G}}{2} \left[ 1 + \cos \left( \pi +2 \pi \frac{x-x_s}{x_f -x_s} \right) \right] \right) & \: \: \: \text{ if } & x_s \leq x \leq x_f \\
\end{split}
\right.\\
& K \left( x \right) = &\left\{
\begin{split}
&K_{in} & \: \: \: \text{ if } & x < x_s \text{ or } x > x_f \\
&K_{in} \left( 1 + \frac{\Delta \mathcal{G}}{2} \left[ 1 + \cos \left( \pi +2 \pi \frac{x-x_s}{x_f -x_s} \right) \right] \right) & \: \: \: \text{ if } & x_s \leq x \leq x_f .\\
\end{split}
\right.\\
\end{split}
\right.
\label{eq:Ex-Stenosis-Geom}
\end{equation}
We choose $x_s = \frac{3L}{10}$ and $x_f = \frac{7 L}{10}$. The second configuration we investigate is a decreasing step, or decreasing discontinuity. It is an idealized representation of a pointwise transition between a parent artery and a smaller daughter artery and is useful to evaluate the reflecting behavior of a numerical method. The decreasing step is represented in figure \ref{fig:Stenosis-Step-R0-K} and is defined by the following radius at rest $R_0$ and arterial wall rigidity $K$
\begin{equation}
\left\{
\begin{split}
& R_0\left( x \right) = & \left\{
\begin{split}
&R_{in} & \: \: \: \text{ if } & x < x_m \\
&R_{in} \left( 1 - \Delta \mathcal{G} \right) & \: \: \: \text{ if } & x \geq x_m\\
\end{split}
\right.\\
& K \left( x \right) = &\left\{
\begin{split}
&K_{in} & \: \: \: \text{ if } & x < x_m \\
&K_{in} \left( 1 + \Delta \mathcal{G} \right) & \: \: \: \text{ if } & x \geq x_m . \\
\end{split}
\right.\\
\end{split}
\right.
\label{eq:Ex-Step-Geom}
\end{equation}
We choose $x_m = \frac{L}{2}$. In both configurations, the amplitude of the geometrical and mechanical variations depends on the wall deformation parameter $\Delta \mathcal{G}$. The values of $\Delta \mathcal{G}$ used in the following simulations are taken from table \ref{table:Sh-DG-Steady} and are chosen to test the limits of the well-balanced methods while staying in the subcritical flow regime. From a well-balanced point of view, each of these two configurations behaves differently with respect to the cell size $\Delta x$. Indeed, the step configuration is a discontinuity of the cross-sectional area at rest $A_0$ and of the arterial wall rigidity $K$, and therefore the amplitude of the variation of the geometrical and mechanical properties of the artery, proportional to $\Delta \mathcal{G}$, is independent of $\Delta x$. On the contrary, the stenosis configuration is a smooth variation of $A_0$ and $K$, and therefore the local variation of the artery's geometrical and mechanical properties at each cell interface decreases with the cell size $\Delta x$.\\
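For illustration, the radius at rest $R_0$ and rigidity $K$ of both configurations can be evaluated with the short Python sketch below (helper names are ours, not the paper's code; parameters from table \ref{table:Artery-Properties}, in "cgs"):

```python
import math

# Parameters of table 1, in cgs (illustrative sketch).
L, R_in, K_in = 10.0, 0.5, 1.0e5

def stenosis(x, dG, x_s=0.3 * L, x_f=0.7 * L):
    """Radius at rest R0 and wall rigidity K for the smooth stenosis."""
    if x < x_s or x > x_f:
        return R_in, K_in
    bump = 0.5 * (1.0 + math.cos(math.pi + 2.0 * math.pi * (x - x_s) / (x_f - x_s)))
    return R_in * (1.0 - dG * bump), K_in * (1.0 + dG * bump)

def step(x, dG, x_m=0.5 * L):
    """Radius at rest R0 and wall rigidity K for the decreasing step."""
    if x < x_m:
        return R_in, K_in
    return R_in * (1.0 - dG), K_in * (1.0 + dG)
```

At the throat of the stenosis ($x = 5$ cm) the radius reduction reaches its maximum $\Delta \mathcal{G} R_{in}$, while the step applies the full variation for all $x \geq x_m$.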
\begin{figure}[!h]
\begin{center}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.33\textwidth}
\centering
\includegraphics[scale=0.25,angle=0]{Figure3a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{0.33\textwidth}
\centering
\includegraphics[scale=0.25,angle=0]{Figure3b.pdf}\\
\end{minipage}%
\begin{minipage}[t]{0.33\textwidth}
\centering
\includegraphics[scale=0.25,angle=0]{Figure3c.pdf}\\
\end{minipage}%
}
\caption{Representation of the radius at rest $R_0$ and the arterial wall rigidity $K$ for the smooth stenosis \eqref{eq:Ex-Stenosis-Geom} and the decreasing step \eqref{eq:Ex-Step-Geom} for $\Delta \mathcal{G}=10 \%$. \underline{\textit{Left}}: $R_0$ for the stenosis; \underline{\textit{Center}}: $R_0$ for the step; \underline{\textit{Right}}: $K$ for the stenosis (full line) and the step (dashed line).}
\label{fig:Stenosis-Step-R0-K}
\end{center}
\end{figure}
We now provide the values of the conservative variables at the inlet and outlet of the computational domain, based on the methods detailed in section \ref{sec:BC}. We impose the flow rate $Q_{in}$ at the inlet of the computational domain, in $x=0$. In practice, to control the flow regime, we prescribe the value of the inlet Shapiro number $S_{h,in}$ and compute the inlet flow rate $Q_{in}$ as a function of $S_{h,in}$
\begin{equation}
Q_{in} = S_{h,in} A_{in} c_{in}.
\label{bc:Qin-Shin}
\end{equation}
$A_{in}$ and $c_{in}$ are respectively the inlet cross-sectional area and Moens-Korteweg wave speed \eqref{eq:Moens-Korteweg-c} and are unknown. However, a dimensional analysis of system \eqref{eq:BF-Sys-Conservative} allows us to show that the inlet Shapiro number $S_{h,in}$ scales as the ratio of the perturbation of the wall's radius $\Delta R=R-R_0$ over the radius at rest $R_0$. With this scaling law, we can estimate a value of the inlet cross-sectional area $A_{in}$ consistent with the inlet Shapiro number $S_{h,in}$ and we compute $A_{in}$ as
\begin{equation}
A_{in} = A_0\left( x=0 \right) \left[ 1 + S_{h,in} \right]^2.
\label{bc:Ain-Shin}
\end{equation}
At the outlet of the computational domain, in $x=L$, we either impose the reflection coefficient $R_t=0$ or the cross-sectional area $A_{out}$, depending on the test case. Similarly to the inlet cross-sectional area $A_{in}$, we compute the outlet cross-sectional area as a function of $S_{h,in}$
\begin{equation}
A_{out} = A_0\left( x=L \right) \left[ 1 + S_{h,in} \right]^2.
\label{bc:Aout-Shin}
\end{equation}
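A minimal sketch of these inlet and outlet values, assuming the Moens-Korteweg speed takes the form $c = \sqrt{K/(2\rho)}\,A^{1/4}$ consistent with the pressure term appearing in the energy discharge (helper names are ours):

```python
import math

RHO = 1.0  # blood density in cgs

def wave_speed(A, K, rho=RHO):
    # Assumed Moens-Korteweg form c = sqrt(K / (2 rho)) * A**(1/4).
    return math.sqrt(K / (2.0 * rho)) * A ** 0.25

def boundary_values(Sh_in, A0_in, A0_out, K_in):
    """Inlet area and flow rate, and outlet area, from the inlet Shapiro number."""
    A_in = A0_in * (1.0 + Sh_in) ** 2
    A_out = A0_out * (1.0 + Sh_in) ** 2
    Q_in = Sh_in * A_in * wave_speed(A_in, K_in)
    return A_in, Q_in, A_out
```

For $S_{h,in}=0$ the artery is at rest: $A_{in}=A_0$ and $Q_{in}=0$.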
The values of the inlet Shapiro number $S_{h,in}$ and the wall deformation parameter $\Delta \mathcal{G}$ used in the following simulations are presented in table \ref{table:Sh-DG-Steady}. They cover a wide range of physiological configurations, allowing us to assess the behavior of the three numerical schemes under consideration in the limit of the low-Shapiro number flow regime. We recall that in arteries the average Shapiro number is of the order of $S_h = 1 \times 10^{-2}$.
\begin{table}[H]
\begin{center}
\def\arraystretch{1.2}
\begin{tabular}{|c||c|c|c|c|}
\hline
$S_{h,in}$ & $0$ & $ 1 \times 10^{-3}$ & $ 1 \times 10^{-2}$ & $ 1 \times 10^{-1}$ \\ \hline
\end{tabular}
\quad
\begin{tabular}{|c||c|c|c|}
\hline
$\Delta \mathcal{G}$ & $1 \%$ & $10 \%$ & $30 \%$ \\ \hline
\end{tabular}
\end{center}
\caption{Values of the inlet Shapiro number $S_{h,in}$ and the wall deformation parameter $ \Delta \mathcal{G} $ used in the single artery test-cases. These values are chosen to test the well-balanced methods in the limits of the low-Shapiro number flow regime.}
\label{table:Sh-DG-Steady}
\end{table}
\subsection{Steady solutions}
\label{subsec:Steady}
We evaluate the well-balanced properties of HR, HR-LS and HR-S by computing steady solutions of system \eqref{eq:BF-Sys} in the smooth stenosis \eqref{eq:Ex-Stenosis-Geom} and the decreasing step \eqref{eq:Ex-Step-Geom}. Steady flow configurations in arterial geometries similar to the stenosis \eqref{eq:Ex-Stenosis-Geom} have been studied by M\"uller \cite{Muller2013}, where only variations of the wall rigidity $K$ are taken into account. In \cite{Murillo2015}, the authors computed steady solutions in tapered tubes. In the context of the shallow water equations, steady flow solutions over a bump (analogue of the stenosis) or a step have been studied by many authors \cite{Castro2007,Noelle2007,Castro2013,Delestre2013}.\\
The steady numerical solutions are obtained for $t=200$ $s$. The time step $\Delta t$ is constant and chosen such that the CFL condition \eqref{eq:CFL-kin} is always satisfied. We impose the flow rate $Q_{in}$ \eqref{bc:Qin-Shin} at the inlet and the cross-sectional area $A_{out}$ \eqref{bc:Aout-Shin} at the outlet. We therefore select a specific steady state characterized by its associated flow rate $Q_{st}$ and energy discharge $E_{st}$. These values can be computed analytically and provide exact solutions to compare with our numerical results
\begin{equation}
\left\{
\begin{split}
&Q_{st} = Q_{in} \\
&E_{st} = \frac{1}{2} \frac{Q_{st}^2}{A_{out}^2} + \frac{K\left( x=L \right)}{\rho}\left( \sqrt{A_{out}} - \sqrt{A_0\left( x=L \right)} \right). \\
\end{split}
\right.
\label{eq:Ex-Steady-Solution}
\end{equation}
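The analytic targets \eqref{eq:Ex-Steady-Solution} are straightforward to evaluate; a sketch in Python (illustrative helper, "cgs" units):

```python
import math

def steady_targets(Q_in, A_out, K_out, A0_out, rho=1.0):
    """Analytic steady flow rate Q_st and energy discharge E_st,
    where K_out and A0_out stand for K(x=L) and A_0(x=L)."""
    Q_st = Q_in
    E_st = (0.5 * Q_st ** 2 / A_out ** 2
            + K_out / rho * (math.sqrt(A_out) - math.sqrt(A0_out)))
    return Q_st, E_st
```

For the steady state at rest ($Q_{in}=0$, $A_{out}=A_0(x=L)$), both targets vanish.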
In both configurations \eqref{eq:Ex-Stenosis-Geom} and \eqref{eq:Ex-Step-Geom}, we perform a series of 12 numerical computations for all combinations of the inlet Shapiro number $S_{h,in}$ and the wall deformation parameter $\Delta \mathcal{G}$ taken from table \ref{table:Sh-DG-Steady}. Table \ref{table:Steady-Err-Sh-DG} shows $L^1$ relative errors between the analytic solutions and the results obtained with HR, HR-LS and HR-S for a fixed number of cells $N=50$.
\begin{table}[!h]
\begin{footnotesize}
\begin{center}
\def\arraystretch{1.2}
{\setlength{\tabcolsep}{0.5em}
\begin{tabular}{c|c||c|c|c||c|c|c}
\multicolumn{2}{c||}{} & \multicolumn{3}{c||}{Stenosis} & \multicolumn{3}{c}{Step} \\ \hline \hline
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{6}{c}{0} \\ \cline{3-8}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ & $1\%$ & $10\%$ & $30\%$ \\
\hline
& HR &
$0$ & $0$ & $0$ &
$0$& $0$& $0$
\\
$L^1\left[Q\right]$ & HR-LS &
$0$ & $0$ & $0$ &
$0$ & $0$ & $0$
\\
& HR-S &
$0$ & $0$ & $0$ &
$0$ & $0$ & $0$
\\
\hline
& HR &
$0$ & $0$ & $0$ &
$0$ & $0$ & $0$
\\
$L^1\left[E\right]$ & HR-LS &
$0$ & $0$ & $0$ &
$0$ & $0$ & $0$
\\
& HR-S &
$0$ & $0$ & $0$ &
$0$ & $0$ & $0$
\\ \hline \hline
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{6}{c}{$1 \times 10^{-3}$} \\ \cline{3-8}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ & $1\%$ & $10\%$ & $30\%$ \\
\hline
& HR &
$4.0 \times 10^{-4}$& $4.2 \times 10^{-3}$& $1.4 \times 10^{-2}$&
$2.2 \times 10^{-4}$& $2.3 \times 10^{-3}$& $7.4 \times 10^{-3}$
\\
$L^1\left[Q\right]$ & HR-LS &
$3.6 \times 10^{-7}$& $4.1 \times 10^{-6}$& $1.9 \times 10^{-5}$ &
$1.8 \times 10^{-7}$& $2.1 \times 10^{-6}$& $9.4 \times 10^{-6}$
\\
& HR-S &
$5.4 \times 10^{-13}$& $5.3 \times 10^{-13}$& $4.2 \times 10^{-14}$&
$2.9 \times 10^{-13}$& $3.1 \times 10^{-13}$& $5.8 \times 10^{-13}$
\\
\hline
& HR &
$3.0 \times 10^{-4}$& $5.1 \times 10^{-3}$& $4.2 \times 10^{-2}$&
$2.1 \times 10^{-4}$& $9.4 \times 10^{-3}$& $1.3 \times 10^{-1}$
\\
$L^1\left[E\right]$ & HR-LS &
$2.1 \times 10^{-7}$& $2.6 \times 10^{-6}$& $1.5 \times 10^{-5}$&
$1.1 \times 10^{-7}$& $1.4 \times 10^{-6}$& $1.0 \times 10^{-5}$
\\
& HR-S &
$4.6 \times 10^{-13}$& $4.9 \times 10^{-13}$& $6.1 \times 10^{-13}$&
$6.7 \times 10^{-13}$& $6.5 \times 10^{-13}$& $1.4 \times 10^{-12}$ \\ \hline \hline
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{6}{c}{$1 \times 10^{-2}$} \\ \cline{3-8}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ & $1\%$ & $10\%$ & $30\%$ \\
\hline
& HR &
$4.0 \times 10^{-4}$& $4.2 \times 10^{-3}$& $1.4 \times 10^{-2}$&
$2.3 \times 10^{-4}$& $2.3 \times 10^{-3}$& $7.4 \times 10^{-3}$
\\
$L^1\left[Q\right]$ & HR-LS &
$3.6 \times 10^{-6}$& $4.1 \times 10^{-5}$& $1.9 \times 10^{-4}$ &
$1.8 \times 10^{-6}$& $2.1 \times 10^{-5}$& $9.4 \times 10^{-5}$
\\
& HR-S &
$2.6 \times 10^{-13}$& $2.7 \times 10^{-13}$& $9.6 \times 10^{-14}$&
$2.4 \times 10^{-13}$& $9.5 \times 10^{-14}$& $1.8 \times 10^{-13}$
\\
\hline
& HR &
$3.0 \times 10^{-4}$& $5.1 \times 10^{-3}$& $4.2 \times 10^{-2}$&
$2.1 \times 10^{-4}$& $9.4 \times 10^{-3}$& $1.2 \times 10^{-1}$
\\
$L^1\left[E\right]$ & HR-LS &
$2.1 \times 10^{-6}$& $2.6 \times 10^{-5}$& $1.5 \times 10^{-4}$&
$1.1 \times 10^{-6}$& $1.4 \times 10^{-5}$& $8.1 \times 10^{-5}$
\\
& HR-S &
$2.7 \times 10^{-13}$& $2.7 \times 10^{-13}$& $3.3 \times 10^{-13}$&
$2.7 \times 10^{-13}$& $3.4 \times 10^{-13}$& $5.9 \times 10^{-13}$ \\ \hline \hline
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{6}{c}{$1 \times 10^{-1}$} \\ \cline{3-8}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ & $1\%$ & $10\%$ & $30\%$ \\
\hline
& HR &
$4.0 \times 10^{-4}$& $4.2 \times 10^{-3}$& $1.4 \times 10^{-2}$&
$2.3 \times 10^{-4}$& $2.3 \times 10^{-3}$& $7.5 \times 10^{-3}$
\\
$L^1\left[Q\right]$ & HR-LS &
$3.6 \times 10^{-5}$& $4.1 \times 10^{-4}$& $1.8 \times 10^{-3}$ &
$1.8 \times 10^{-5}$& $2.1 \times 10^{-4}$& $9.0 \times 10^{-4}$
\\
& HR-S &
$2.6 \times 10^{-13}$& $3.4 \times 10^{-13}$& $2.0 \times 10^{-13}$&
$2.8 \times 10^{-13}$& $2.4 \times 10^{-13}$& $1.4 \times 10^{-13}$
\\
\hline
& HR &
$3.2 \times 10^{-4}$& $5.4 \times 10^{-3}$& $4.4 \times 10^{-2}$&
$2.2 \times 10^{-4}$& $9.9 \times 10^{-3}$& $1.2 \times 10^{-1}$
\\
$L^1\left[E\right]$ & HR-LS &
$2.2 \times 10^{-5}$& $2.8 \times 10^{-4}$& $1.8 \times 10^{-3}$&
$1.2 \times 10^{-5}$& $2.0 \times 10^{-4}$& $2.2 \times 10^{-3}$
\\
& HR-S &
$2.3 \times 10^{-13}$& $2.4 \times 10^{-13}$& $2.9 \times 10^{-13}$&
$2.3 \times 10^{-13}$& $2.9 \times 10^{-13}$& $3.4 \times 10^{-13}$
\end{tabular}
}
\end{center}
\end{footnotesize}
\caption{Steady solutions: Relative errors $L^1\left[Q\right]$ and $L^1\left[E\right]$ computed in the stenosis \eqref{eq:Ex-Stenosis-Geom} and the step \eqref{eq:Ex-Step-Geom} for $N=50$ cells for all combinations of values of the inlet Shapiro number $S_{h,in}$ and the wall deformation parameter $\Delta \mathcal{G}$ taken from table \ref{table:Sh-DG-Steady}. Only HR-S is exactly well-balanced, but HR-LS is more accurate than HR.}
\label{table:Steady-Err-Sh-DG}
\end{table}
In both the stenosis \eqref{eq:Ex-Stenosis-Geom} and the step \eqref{eq:Ex-Step-Geom} configurations, the results are similar and indicate that, as expected, each numerical method is exactly well-balanced for the steady states at rest ($S_{h,in}=0$). Only HR-S is exactly well-balanced for all considered subcritical steady states. For the low-Shapiro number steady states ($S_{h,in}=10^{-3},10^{-2},10^{-1}$), HR-LS is more accurate than HR. However, the accuracy of HR-LS diminishes when the values of $S_{h,in}$ and $\Delta \mathcal{G}$ increase, and for $S_{h,in}=1 \times 10^{-1}$ and $\Delta \mathcal{G} = 30 \%$, in the limit of the low-Shapiro number flow regime, HR-LS is only one order of magnitude more accurate than HR. Interestingly, the errors obtained with HR are independent of the inlet Shapiro number $S_{h,in}$, but increase significantly with the wall deformation parameter $\Delta \mathcal{G}$. \\
To test the consistency and the order of convergence of the different methods, we perform a convergence study for the average low-Shapiro steady configuration $S_{h,in} = 1 \times 10^{-2}$ and $\Delta \mathcal{G}=10 \%$ in both the stenosis and the step configurations. $L^1$ relative errors with analytic solutions are presented in table \ref{table:Steady-CV-Sh-DG} for the following number of cells $N \in \left\{ 50,100,200,400\right\}$.
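The orders reported below are the slopes $\log \left( e_{2N}/e_{N} \right) / \log 2$ between successive grids; a minimal sketch (hypothetical helper name):

```python
import math

def observed_orders(errors, ratio=2.0):
    """Observed convergence order between successive grids refined by `ratio`,
    with the (negative) sign convention used in the error tables."""
    return [math.log(e_fine / e_coarse, ratio)
            for e_coarse, e_fine in zip(errors, errors[1:])]
```

Applied to a sequence of $L^1$ errors halving with each refinement, this recovers orders close to $-1$, i.e. first-order convergence.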
\begin{table}[!h]
\begin{footnotesize}
\begin{center}
\def\arraystretch{1.2}
{\setlength{\tabcolsep}{0.5em}
\begin{tabular}{c || c|c|c|c|| c|c|c|c }
& \multicolumn{4}{c||}{Stenosis} & \multicolumn{4}{c}{Step} \\ \hline \hline
& \multicolumn{8}{c}{HR} \\ \cline{2-9}
$N$ & $L^1\left[Q\right]$ & Order & $L^1\left[E\right]$ & Order & $L^1\left[Q\right]$ & Order & $L^1\left[E\right]$ & Order \\
\hline
$50$& $4.22 \times 10^{-3}$& $-$ & $5.09 \times 10^{-3}$ & $-$ &
$2.34 \times 10^{-3}$& $-$ & $9.41 \times 10^{-3}$ & $-$
\\
$100$& $2.11 \times 10^{-3}$& $-1.01$ & $2.56 \times 10^{-3}$ & $-1.01$ &
$1.17 \times 10^{-3}$& $-1.01$ & $8.64 \times 10^{-3}$ & $-0.12$
\\
$200$& $1.05 \times 10^{-3}$& $-1.01$ & $1.28 \times 10^{-3}$ & $-1.01$ &
$5.86 \times 10^{-4}$& $-1.01$ & $8.26 \times 10^{-3}$ & $-0.07$
\\
$400$& $5.26 \times 10^{-4}$& $-1.00$ & $6.38 \times 10^{-4}$ & $-1.00$ &
$2.93 \times 10^{-4}$& $-1.00$ & $8.07 \times 10^{-3}$ & $-0.03$
\\
\hline \hline
& \multicolumn{8}{c}{HR-LS} \\ \cline{2-9}
$N$ & $L^1\left[Q\right]$ & Order & $L^1\left[E\right]$ & Order & $L^1\left[Q\right]$ & Order & $L^1\left[E\right]$ & Order \\
\hline
$50$& $4.14 \times 10^{-5}$& $-$ & $2.61 \times 10^{-5}$ & $-$ &
$2.08 \times 10^{-5}$& $-$ & $1.39 \times 10^{-5}$ & $-$
\\
$100$& $2.07 \times 10^{-5}$& $-1.01$ & $1.31 \times 10^{-5}$ & $-1.01$ &
$1.04 \times 10^{-5}$& $-1.01$ & $7.24 \times 10^{-6}$ & $-0.96$
\\
$200$& $1.04 \times 10^{-5}$& $-1.01$ & $6.58 \times 10^{-6}$ & $-1.00$ &
$5.19 \times 10^{-6}$& $-1.01$ & $3.91 \times 10^{-6}$ & $-0.90$
\\
$400$& $5.19 \times 10^{-6}$& $-1.00$ & $3.30 \times 10^{-6}$ & $-1.00$ &
$2.59 \times 10^{-6}$& $-1.00$ & $2.24 \times 10^{-6}$ & $-0.80$
\\
\hline \hline
& \multicolumn{8}{c}{HR-S} \\ \cline{2-9}
$N$ & $L^1\left[Q\right]$ & Order & $L^1\left[E\right]$ & Order & $L^1\left[Q\right]$ & Order & $L^1\left[E\right]$ & Order \\
\hline
$50$& $2.68 \times 10^{-13}$& $-$ & $2.73 \times 10^{-13}$ & $-$ &
$9.53 \times 10^{-14}$& $-$ & $3.43 \times 10^{-13}$ & $-$
\\
$100$& $1.40 \times 10^{-15}$& $-$ & $3.39 \times 10^{-13}$& $-$ &
$9.20 \times 10^{-14}$& $-$ & $3.97 \times 10^{-13}$ & $-$
\\
$200$& $1.94 \times 10^{-12}$& $-$ & $7.30 \times 10^{-13}$ & $-$ &
$2.26 \times 10^{-12}$& $-$ & $8.44 \times 10^{-13}$ & $-$
\\
$400$& $8.83 \times 10^{-12}$& $-$ & $1.45 \times 10^{-12}$ & $-$ &
$1.01 \times 10^{-11}$& $-$ & $1.63 \times 10^{-12}$ & $-$
\end{tabular}
}
\end{center}
\end{footnotesize}
\caption{Steady solutions: Relative errors $L^1\left[Q\right]$ and $L^1\left[E\right]$ computed in the stenosis \eqref{eq:Ex-Stenosis-Geom} and the step \eqref{eq:Ex-Step-Geom} for $S_{h,in}=1 \times 10^{-2}$ and $\Delta \mathcal{G} = 10 \%$ obtained for $N \in \left\{ 50,100,200,400\right\}$. HR and HR-LS converge with order 1 whereas HR-S is exactly well-balanced up to machine precision.}
\label{table:Steady-CV-Sh-DG}
\end{table}
In the stenosis configuration \eqref{eq:Ex-Stenosis-Geom}, both HR and HR-LS converge with order 1, whereas in the step configuration \eqref{eq:Ex-Step-Geom}, they do not achieve order 1 convergence. Indeed, in the stenosis configuration, the variations of the artery's geometrical and mechanical properties at each cell interface decrease when the number of cells $N$ increases, enabling the convergence of both methods. On the contrary, the geometrical and mechanical variations remain unchanged in the step configuration when the number of cells $N$ increases. These observations are illustrated by figures \ref{fig:Steady-Stenosis-Fr-10m2-dR-10} and \ref{fig:Steady-Step-Fr-10m2-dR-10}, where we plot, for increasing numbers of cells, the spatial evolution of the flow rate $Q$ and the energy discharge $E$ in the stenosis and step configurations respectively.
In both configurations, the values of the errors obtained in table \ref{table:Steady-CV-Sh-DG} with HR-S are of the order of machine precision, indicating that HR-S is exactly well-balanced for the considered low-Shapiro steady state. However, the errors increase slightly with the number of cells. Similar behaviors are observed in convergence studies presented in \cite{Castro2013} for an exactly well-balanced method. In our case, this phenomenon is due to a small error between the computed boundary conditions and those required to obtain the desired steady state, and is not caused by HR-S.\\
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure4a.pdf}\\
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure4b.pdf}\\
\end{minipage}
}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure4c.pdf}\\
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure4d.pdf}\\
\end{minipage}
}
\caption{Steady solutions: Spatial evolution of the flow rate $Q$ (top) and the energy discharge $E$ (bottom) in the stenosis configuration \eqref{eq:Ex-Stenosis-Geom}, at $t=200$ s for $S_{h,in}=1 \times 10^{-2}$ and $\Delta \mathcal{G} = 10 \%$ obtained with different numbers of cells $N = \left\{ 50 \mathrm{ (blue)},100 \mathrm{ (green)},200 \mathrm{ (red)},400 \mathrm{ (purple)} \right\}$ and compared to the analytic solution \eqref{eq:Ex-Steady-Solution} (black). \underline{\textit{Left}}: HR; \underline{\textit{Right}}: HR-LS. We observe that for both HR and HR-LS, the errors with the analytic solution decrease when the number of cells $N$ increases, indicating the convergence of the method. }
\label{fig:Steady-Stenosis-Fr-10m2-dR-10}
\end{figure}
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure5a.pdf}\\
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure5b.pdf}\\
\end{minipage}
}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure5c.pdf}\\
\end{minipage}
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure5d.pdf}\\
\end{minipage}
}
\caption{Steady solutions: Spatial evolution (zoom for $0.4 \leq \frac{x}{L} \leq 0.6$) of the flow rate $Q$ (top) and the energy discharge $E$ (bottom) in the step configuration \eqref{eq:Ex-Step-Geom}, at $t=200$ s for $S_{h,in}=1 \times 10^{-2}$ and $\Delta \mathcal{G} = 10 \%$ obtained with different numbers of cells $N = \left\{ 50 \mathrm{ (blue)},100 \mathrm{ (green)},200 \mathrm{ (red)},400 \mathrm{ (purple)} \right\}$ and compared to the analytic solution \eqref{eq:Ex-Steady-Solution} (black). \underline{\textit{Left}}: HR; \underline{\textit{Right}}: HR-LS. We observe that for both HR and HR-LS, the maximal amplitude of the errors with the analytic solution remains unchanged when the number of cells $N$ increases. However, the region of error is more localized when the number of cells increases, explaining why the error decreases. }
\label{fig:Steady-Step-Fr-10m2-dR-10}
\end{figure}
The results indicate that among the three well-balanced methods considered, HR is the least accurate when computing low-Shapiro number steady solutions in an artery presenting smooth or discontinuous variations of its cross-sectional area at rest $A_0$ and of its arterial wall rigidity $K$. On the contrary, HR-S is the only exactly well-balanced method for the considered low-Shapiro number steady states. Finally, even though HR-LS is not exactly well-balanced for these steady states, it computes steady solutions with satisfactory accuracy for both smooth and discontinuous variations of the artery's geometrical and mechanical properties. These results show that system \eqref{eq:WB-HRQ-sys-SW} (used by HR-LS) is a better approximation than system \eqref{eq:WB-HRU-sys-SW} (used by HR) of the steady state system \eqref{eq:BF-Steady-State-Equation-SW} (used by HR-S) in low-Shapiro number flow configurations.
\subsection{Single wave propagation}
The wave-capturing properties of HR, HR-LS and HR-S are now evaluated. We simulate the propagation of a single wave in the smooth stenosis \eqref{eq:Ex-Stenosis-Geom} and the decreasing step \eqref{eq:Ex-Step-Geom}. The step configuration was studied in \cite{Delestre2012,Delestre2016,Wang2016} for an artery with only variations of its cross-sectional area at rest $A_0$. \\
The results are obtained for $t=0.045$ s. The time step $\Delta t$ is constant and chosen such that the CFL condition \eqref{eq:CFL-kin} is always satisfied. We impose a single pulse of flow at the inlet of the computational domain and the unsteady inlet flow rate $Q_{in}\left( t \right)$ is defined as
\begin{equation}
\left.
\begin{split}
&Q_{in}\left( t \right) =
\left\{
\begin{split}
& Q_{pulse} \sin \left( 2 \pi \frac{t}{T_{pulse}} \right) & \: \: \: &\text{ if } t \leq \frac{T_{pulse}}{2}\\
& 0 & \: \: \: &\text{ otherwise}. \\
\end{split}
\right.
\end{split}
\right.
\label{bc:Inlet-Flow-Pulse}
\end{equation}
We choose $T_{pulse} = 0.04$ s to artificially reduce the wavelength of the pulse for visualization purposes, and the value of $Q_{pulse}$ depends on the inlet Shapiro number $S_{h,in}$ as in equation \eqref{bc:Qin-Shin}. Figure \ref{fig:Inlet-Flow-Rate-Wave} represents the function $Q_{in}$ for $S_{h,in} = 1 \times 10^{-2}$. At the outlet of the computational domain, we set the reflection coefficient $R_t=0$ to remove any terminal reflection.
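The inlet pulse \eqref{bc:Inlet-Flow-Pulse} can be sketched as follows (illustrative helper name):

```python
import math

def Q_inlet(t, Q_pulse, T_pulse=0.04):
    """Half-sine inlet flow pulse: a single positive lobe, then zero."""
    if t <= 0.5 * T_pulse:
        return Q_pulse * math.sin(2.0 * math.pi * t / T_pulse)
    return 0.0
```

The pulse peaks at $t = T_{pulse}/4$ and vanishes for $t > T_{pulse}/2$.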
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{0.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure6.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Time evolution of the inlet flow rate $Q_{in}$ for $S_{h,in} = 1 \times 10^{-2}$ and $T_{pulse} = 0.04$ s.}
\label{fig:Inlet-Flow-Rate-Wave}
\end{figure}
\subsubsection{The step configuration}
\label{subsec:Wave-Step}
We focus on the decreasing step configuration \eqref{eq:Ex-Step-Geom}. Given the inlet condition \eqref{bc:Inlet-Flow-Pulse}, the pulse wave propagates in the artery starting from the left-hand side of the domain until it reaches the step. The change of impedance of the vessel creates reflected and transmitted waves that need to be captured by the numerical scheme. A linear analytic solution was proposed in \cite{Raines1974} and validated in \cite{Delestre2012,Delestre2016,Wang2015}, and gives the expression of the reflection coefficient $R_t$ and the transmission coefficient $T_t$, based on the conservation properties \eqref{eq:bc-conj-Q-P-Low-Fr}
\begin{equation}
\left\{
\begin{split}
& R_t = \frac{Y_L - Y_R}{Y_L + Y_R}\\
& T_t = 1 + R_t,\\
\end{split}
\right.
\label{eq:Rt-Tt}
\end{equation}
where $Y = A / \left(\rho c\right)$ is the vessel admittance. Subscripts $L$ and $R$ respectively refer to the values at the left and right of the step. As the coefficients $R_t$ and $T_t$ do not depend on the frequency of the incoming wave, we can analytically predict the position, shape and amplitude of the linear reflected and transmitted waves. However, as the inlet Shapiro number $S_{h,in}$ is non-zero, the flow is nonlinear and the linear analytic solution \eqref{eq:Rt-Tt} is only valid in the asymptotic limit $S_{h,in} \rightarrow 0$. To evaluate the quality of the results obtained with HR, HR-LS and HR-S, we compute reference solutions, obtained with HR-S for $N=25600$ and values of $S_{h,in}$ and $\Delta \mathcal{G}$ taken from table \ref{table:Sh-DG-Steady}. To assess the validity of these reference solutions, we compare them to the linear analytic solutions \eqref{eq:Rt-Tt} in figure \ref{fig:Step-Q-Analytic}. We observe that for low values of the inlet Shapiro number $S_{h,in}$ (figure \ref{fig:Step-Q-Analytic} left), for which the linear approximation is valid, the analytic and reference solutions match. As expected, for higher values of $S_{h,in}$, the flow is no longer linear and the propagation speed as well as the amplitude of the reflected and transmitted waves change (figure \ref{fig:Step-Q-Analytic} center and right).\\
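The linear coefficients \eqref{eq:Rt-Tt} follow directly from the admittances on each side of the step; a short sketch (illustrative helper name):

```python
def reflection_transmission(A_L, c_L, A_R, c_R, rho=1.0):
    """Linear reflection and transmission coefficients at the step,
    from the admittances Y = A / (rho c) on each side."""
    Y_L = A_L / (rho * c_L)
    Y_R = A_R / (rho * c_R)
    R_t = (Y_L - Y_R) / (Y_L + Y_R)
    return R_t, 1.0 + R_t
```

A smaller, stiffer daughter vessel ($Y_R < Y_L$) yields $R_t > 0$, i.e. a partial positive reflection of the incoming pulse.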
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.33\textwidth}
\centering
\includegraphics[scale=0.25,angle=0]{Figure7a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.33\textwidth}
\centering
\includegraphics[scale=0.25,angle=0]{Figure7b.pdf}\\
\end{minipage}
\begin{minipage}[t]{.33\textwidth}
\centering
\includegraphics[scale=0.25,angle=0]{Figure7c.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Comparison between the linear solution (full black line) and the reference solution (dashed black line) for the step configuration \eqref{eq:Ex-Step-Geom}, obtained using HR-S for $N=25600$, for the flow rate $Q$ at $t=0.045$ s for $\Delta \mathcal{G}=10 \%$. \underline{\textit{Left}}: $S_h = 1 \times 10^{-3}$; \underline{\textit{Center}}: $S_h = 1 \times 10^{-2}$; \underline{\textit{Right}}: $S_h = 1 \times 10^{-1}$.}
\label{fig:Step-Q-Analytic}
\end{figure}
We present results only for the flow rate $Q$ to simplify the analysis. Similar conclusions to those presented hereafter would be drawn if we considered the pressure $P$ or the wall perturbation $R-R_0$.\\
We perform a series of 9 numerical computations with different combinations of the non-zero inlet Shapiro number $S_{h,in}$ and the wall deformation parameter $\Delta \mathcal{G}$ taken from table \ref{table:Sh-DG-Steady}. Table \ref{table:Step-Err-Sh-DG} shows $L^1\left[Q\right]$ relative errors between the reference solutions and the results obtained with HR, HR-LS and HR-S for a fixed number of cells $N=1600$. We choose a high value of $N$ to reduce the numerical dissipation and highlight the effects of the well-balanced methods.
\begin{table}[!h]
\begin{footnotesize}
\begin{center}
\def\arraystretch{1.2}
{\setlength{\tabcolsep}{0.5em}
\begin{tabular}{c | c|| c|c|c }
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{3}{c}{$1 \times 10^{-3}$} \\
\cline{3-5}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ \\
\hline
& HR &
$2.3 \times 10^{-2}$ & $5.5 \times 10^{-2}$ & $5.5 \times 10^{-1}$
\\
$L^1\left[Q\right]$ & HR-LS &
$2.3 \times 10^{-2}$ & $2.8 \times 10^{-2}$ & $6.6 \times 10^{-2}$
\\
& HR-S &
$2.3 \times 10^{-2}$ & $2.8 \times 10^{-2}$ & $6.6 \times 10^{-2}$
\\
\hline \hline
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{3}{c}{$1 \times 10^{-2}$} \\ \cline{3-5}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ \\
\cline{1-5}
& HR &
$2.3 \times 10^{-2}$ & $5.5 \times 10^{-2}$ & $5.5 \times 10^{-1}$
\\
$L^1\left[Q\right]$ & HR-LS &
$2.3 \times 10^{-2}$ & $2.8 \times 10^{-2}$ & $6.6 \times 10^{-2}$
\\
& HR-S &
$2.3 \times 10^{-2}$ & $2.8 \times 10^{-2}$ & $6.6 \times 10^{-2}$
\\
\hline \hline
\multicolumn{2}{c||}{$S_{h,in}$} & \multicolumn{3}{c}{$1 \times 10^{-1}$} \\ \cline{3-5}
\multicolumn{2}{c||}{$\Delta \mathcal{G}$} & $1\%$ & $10\%$ & $30\%$ \\
\cline{1-5}
& HR &
$2.9 \times 10^{-2}$& $6.1 \times 10^{-2}$& $5.1 \times 10^{-1}$
\\
$L^1\left[Q\right]$ & HR-LS &
$2.9 \times 10^{-2}$& $3.5 \times 10^{-2}$& $7.6 \times 10^{-2}$
\\
& HR-S &
$2.9 \times 10^{-2}$& $3.5 \times 10^{-2}$& $7.5 \times 10^{-2}$
\end{tabular}
}
\end{center}
\end{footnotesize}
\caption{Wave propagation: Relative error $L^1\left[Q\right]$ computed in the step \eqref{eq:Ex-Step-Geom} for values of $S_{h,in}$ and $\Delta \mathcal{G}$ taken from table \ref{table:Sh-DG-Steady} obtained for $N =1600$. HR, HR-LS and HR-S present similar results except for $\Delta \mathcal{G} = 30 \%$.}
\label{table:Step-Err-Sh-DG}
\end{table}
The results obtained with HR, HR-LS and HR-S are almost identical and indicate that each method is able to correctly compute the expected reflected and transmitted waves. For each method, the error $L^1\left[Q\right]$ is independent of the inlet Shapiro number $S_{h,in}$ but increases with the wall deformation parameter $\Delta \mathcal{G}$. However, the error obtained with HR increases faster with $\Delta \mathcal{G}$ than with the other methods. In particular, for $\Delta \mathcal{G} = 30 \%$, the value of $L^1\left[Q\right]$ obtained with HR is one order of magnitude higher than the one obtained with HR-LS or HR-S.\\
This last point is corroborated by figures \ref{fig:Step-Fr-10m2-dR-10}, \ref{fig:Step-Fr-10m2-dR-30} and \ref{fig:Step-Fr-10m2-dR-60}, where we represent the spatial evolution of the flow rate $Q$ at $t=0.045$ s, obtained using $N=100$ (left) and $N=1600$ (right) for $S_{h,in}=1\times 10^{-2}$ and $\Delta \mathcal{G}= \left\{ 10 \%,30 \%,60 \%\right\}$ respectively. In each figure, we compare the results obtained using HR, HR-LS and HR-S to the corresponding reference solution and assess whether increasing the number of cells allows the numerical solution to converge towards the reference solution. In figure \ref{fig:Step-Fr-10m2-dR-10}, the results obtained for $\Delta \mathcal{G}=10 \%$ with HR, HR-LS and HR-S are similar and indicate that each numerical solution converges towards the reference solution. On the contrary, in figure \ref{fig:Step-Fr-10m2-dR-30} for $\Delta \mathcal{G} = 30 \%$ and in figure \ref{fig:Step-Fr-10m2-dR-60} for $\Delta \mathcal{G} = 60 \%$, only the solutions obtained with HR-LS and HR-S converge towards the reference solution. HR is unable to compute the expected amplitudes: it overestimates the amplitude of the reflected wave and underestimates that of the transmitted wave. \\
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure8a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure8b.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Flow rate $Q\left( x \right)$ for the step \eqref{eq:Ex-Step-Geom} at $t=0.045$ s for the reference solution (black dashed line), HR (blue circle), HR-LS (green square) and HR-S (red triangle) for $S_h = 1 \times 10^{-2}$ and $\Delta \mathcal{G} = 10 \%$; \underline{\textit{Left}}: $N=100$; \underline{\textit{Right}}: $N=1600$. For $N=100$ and $N=1600$, all solutions are comparable, and for $N=1600$, HR, HR-LS and HR-S converge towards the reference solution.}
\label{fig:Step-Fr-10m2-dR-10}
\end{figure}
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure9a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure9b.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Flow rate $Q\left( x \right)$ for the step \eqref{eq:Ex-Step-Geom} at $t=0.045$ s for the reference solution (black dashed line), HR (blue circle), HR-LS (green square) and HR-S (red triangle) for $S_h = 1 \times 10^{-2}$ and $\Delta \mathcal{G} = 30 \%$; \underline{\textit{Left}}: $N=100$; \underline{\textit{Right}}: $N=1600$. HR-LS and HR-S converge towards the reference solution while HR does not.}
\label{fig:Step-Fr-10m2-dR-30}
\end{figure}
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure10a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure10b.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Flow rate $Q\left( x \right)$ for the step \eqref{eq:Ex-Step-Geom} at $t=0.045$ s for the reference solution (black dashed line), HR (blue circle), HR-LS (green square) and HR-S (red triangle) for $S_h = 1 \times 10^{-2}$ and $\Delta \mathcal{G} = 60 \%$; \underline{\textit{Left}}: $N=100$; \underline{\textit{Right}}: $N=1600$. HR-LS and HR-S converge towards the reference solution while HR does not.}
\label{fig:Step-Fr-10m2-dR-60}
\end{figure}
The results indicate that HR-LS and HR-S are able to compute wave reflections and transmissions in an artery presenting an arbitrarily large discontinuous variation of its cross-sectional area at rest $A_0$ and arterial wall rigidity $K$. On the contrary, HR is unable to compute the correct amplitude of the reflected and transmitted waves when the discontinuous variation of the artery's geometrical and mechanical properties is too large, independently of the number of cells $N$. Moreover, these results show that the system \eqref{eq:WB-HRQ-sys-SW} (used by HR-LS) has the appropriate conservation properties to compute wave reflections for arbitrarily large discontinuous geometrical and mechanical variations in low-Shapiro number flow regimes. On the contrary, HR, using the system \eqref{eq:WB-HRU-sys-SW}, is only able to compute wave reflections for small discontinuous variations of the artery's properties ($\Delta \mathcal{G}=10 \%$, see figure \ref{fig:Step-Fr-10m2-dR-10}). This last point can be problematic as large variations of the artery's geometrical and mechanical properties can be encountered when modeling arterial pathologies such as stenoses.
\subsubsection{The stenosis configuration}
\label{subsec:Wave-Stenosis}
In this subsection we focus on the stenosis configuration \eqref{eq:Ex-Stenosis-Geom}. To evaluate the quality of the results obtained with HR, HR-LS and HR-S, we compute reference solutions, obtained with HR-S for $N=25600$ and values of $S_{h,in}$ and $\Delta \mathcal{G}$ taken from table \ref{table:Sh-DG-Steady}. As the variation of geometrical and mechanical properties of the artery is smooth, the observed flow rate is constituted of a continuum of reflected and transmitted waves that are created at each cell interface, where the artery's geometrical and mechanical properties are discontinuous.\\
Similar results to those of subsection \ref{subsec:Wave-Step} are obtained, and therefore we do not repeat the previous analysis in full. In figures \ref{fig:Stenosis-Fr-10m2-dR-10}, \ref{fig:Stenosis-Fr-10m2-dR-30} and \ref{fig:Stenosis-Fr-10m2-dR-60}, we present the spatial evolution of the flow rate $Q$ at $t=0.045$ s, obtained using $N=100$ (left) and $N=1600$ (right) for $S_{h,in}=1\times 10^{-2}$ and $\Delta \mathcal{G}= \left\{ 10 \%,30 \%,60 \%\right\}$ respectively. In each figure, we compare the results obtained using HR, HR-LS and HR-S to the corresponding reference solution and observe whether increasing the number of cells allows the numerical solution to converge towards the reference solution. Contrary to the step configuration studied in subsection \ref{subsec:Wave-Step}, the results obtained with HR, HR-LS and HR-S are similar and indicate that each numerical solution converges towards the reference solution. However, for $\Delta \mathcal{G}=\left\{30\%,60\%\right\}$ and $N=100$, HR is less accurate than HR-LS and HR-S.\\
These results are coherent with those of subsection \ref{subsec:Wave-Step}. Indeed, when studying the step configuration, we showed that contrary to HR-LS and HR-S, HR overestimates the amplitude of the reflected wave and underestimates the amplitude of the transmitted wave when a large discontinuous variation of the artery's geometrical and mechanical properties is considered ($\Delta \mathcal{G} = \left\{ 30 \%, 60 \%\right\}$). As the stenosis is a smooth variation of the cross-sectional area at rest $A_0$ and of the arterial wall rigidity $K$, discontinuous variations of the arterial wall's geometrical and mechanical properties occur at each cell interface and the amplitude of these variations decreases with the number of cells $N$. Hence, for $\Delta \mathcal{G}=\left\{30\%,60\%\right\}$ and $N=100$, the local discontinuous variations of the artery's properties are large enough for HR to be inaccurate. On the contrary, for $N=1600$, the local discontinuous variation of the artery's geometrical and mechanical properties are sufficiently small for HR to be as accurate as HR-LS and HR-S. \\
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure11a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure11b.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Flow rate $Q\left( x \right)$ for the stenosis \eqref{eq:Ex-Stenosis-Geom} at $t=0.045$ s for the reference solution (black dashed line), HR (blue circle), HR-LS (green square) and HR-S (red triangle) for $S_h = 1 \times 10^{-2}$ and $\Delta \mathcal{G} = 10 \%$; \underline{\textit{Left}}: $N=100$; \underline{\textit{Right}}: $N=1600$. For $N=100$ and $N=1600$, all solutions are comparable, and for $N=1600$, HR, HR-LS and HR-S converge towards the reference solution.}
\label{fig:Stenosis-Fr-10m2-dR-10}
\end{figure}
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure12a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure12b.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Flow rate $Q\left( x \right)$ for the stenosis \eqref{eq:Ex-Stenosis-Geom} at $t=0.045$ s for the reference solution (black dashed line), HR (blue circle), HR-LS (green square) and HR-S (red triangle) for $S_h = 1 \times 10^{-2}$ and $\Delta \mathcal{G} = 30 \%$; \underline{\textit{Left}}: $N=100$; \underline{\textit{Right}}: $N=1600$. For $N=100$, HR is slightly less accurate than HR-LS and HR-S, and for $N=1600$, HR, HR-LS and HR-S converge towards the reference solution.}
\label{fig:Stenosis-Fr-10m2-dR-30}
\end{figure}
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure13a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure13b.pdf}\\
\end{minipage}
}
\caption{Wave propagation: Flow rate $Q\left( x \right)$ for the stenosis \eqref{eq:Ex-Stenosis-Geom} at $t=0.045$ s for the reference solution (black dashed line), HR (blue circle), HR-LS (green square) and HR-S (red triangle) for $S_h = 1 \times 10^{-2}$ and $\Delta \mathcal{G} = 60 \%$; \underline{\textit{Left}}: $N=100$; \underline{\textit{Right}}: $N=1600$. For $N=100$, HR is less accurate than HR-LS and HR-S, and for $N=1600$, HR, HR-LS and HR-S converge towards the reference solution.}
\label{fig:Stenosis-Fr-10m2-dR-60}
\end{figure}
We have studied the wave capturing behavior of HR, HR-LS and HR-S. We showed that for arbitrarily large smooth or discontinuous variations of the artery's cross-sectional area at rest $A_0$ and arterial wall rigidity $K$, both HR-LS and HR-S are able to compute the expected reflected and transmitted waves. On the contrary, HR is unable to correctly compute reflected and transmitted waves when large discontinuous variations of the artery's properties are considered. In particular, HR overestimates the reflected wave and underestimates the transmitted wave. Therefore, HR-LS and HR-S are good choices to compute wave reflections and transmissions in low-Shapiro flow regimes. In the following section, we will analyze the behavior of the different well-balanced methods in large network computations, where multiple effects come into play.
\section{A realistic example: stenosis of the iliac artery in a 55 arteries network}
\label{sec:Ex-55}
We study the response at the systemic level of a model network to the presence of a pathology. Indeed, the observed pressure and flow waveforms in the systemic network are the superposition of multiple reflected and transmitted waves, generated at each arterial bifurcation and dampened and diffused by the viscosity of the blood and the arterial wall. The presence of a pathology creates additional reflected and transmitted waves that change the reflection pattern and therefore the shape and amplitude of the observed waveforms. When such pathologies are modeled in the network, a well-balanced method is required to take into account the geometrical and mechanical source term induced by the local variations of the cross-sectional area and arterial wall rigidity representing the pathology. \\
For the purpose of performing large network blood flow simulations, we use the arterial network proposed by Sherwin in \cite{Sherwin2003-2}, which was adapted from Westerhof \cite{Westerhof1970} and describes 55 of the great arteries of the human systemic network. This model was more recently used by Wang \cite{Wang2015} to perform viscoelastic blood flow simulations using different numerical methods. The network under consideration is represented in figure \ref{fig:Network-55}. The parameters of the model were obtained using physiological data, and in each artery the geometrical and mechanical parameters do not vary with the axial position $x$. Therefore, in the absence of an arterial pathology, a well-balanced method is not required to compute blood flow in the considered network. The details of the parameters of the model are not recalled here and we refer the reader to the cited publications.
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.1\textwidth}
\centering
\includegraphics[scale=0.45,angle=0,trim={40cm 0 40cm 0}]{Figure14.pdf}\\
\end{minipage}%
}
\caption{ Scheme of the 55 arteries network proposed in \cite{Sherwin2003-2} and used in this article. The numbered segments represent arteries described in the model. Each of the 55 arteries is characterized by a constant cross-sectional area at rest $A_0$, a constant arterial wall rigidity $K$ and a length $L$. At the end of each terminal segment, a constant reflection coefficient is imposed to model the resistive behavior of the distal network that is not taken into account in the modeled network. Therefore, a pulse wave propagates in the network starting from the heart and is reflected at each arterial bifurcation and terminal artery. The stenosis is represented in red in artery 49.}
\label{fig:Network-55}
\end{figure}
The pathology considered is a stenosis of the right iliac artery (artery 49 in figure \ref{fig:Network-55}). We consider two possible shapes for the stenosis. The first corresponds to a succession of an increasing and a decreasing step and will be referred to as the square stenosis. It is defined by the following radius at rest $R_0$ and arterial wall rigidity $K$
\begin{equation}
\left\{
\begin{split}
& R_0\left( x \right) = & \left\{
\begin{split}
&R_{in} & \: \: \: \text{ if } & x < x_s \\
&R_{in} \left( 1 - \Delta \mathcal{G} \right) & \: \: \: \text{ if } & x_s < x < x_f\\
&R_{in} & \: \: \: \text{ if } & x \geq x_f\\
\end{split}
\right.\\
& K \left( x \right) = &\left\{
\begin{split}
&K_{in} & \: \: \: \text{ if } & x < x_s \\
&K_{in} \left( 1 + \Delta \mathcal{G} \right) & \: \: \: \text{ if } & x_s < x < x_f\\
&K_{in} & \: \: \: \text{ if } & x \geq x_f .\\
\end{split}
\right.\\
\end{split}
\right.
\label{eq:Ex-Stenosis-Square-Geom-55}
\end{equation}
We choose $x_s = 6.25$ cm and $x_f = 8.25$ cm. The second geometry is the stenosis \eqref{eq:Ex-Stenosis-Geom} presented in subsections \ref{subsec:Steady} and \ref{subsec:Wave-Stenosis}. Its radius at rest $R_0$ and arterial wall rigidity $K$ vary as
\begin{equation}
\left\{
\begin{split}
& R_0\left( x \right) = & \left\{
\begin{split}
&R_{in} & \: \: \: \text{ if } & x < x_s \text{ or } x > x_f \\
&R_{in} \left( 1 - \frac{\Delta \mathcal{G}}{2} \left[ 1 + \cos \left( \pi +2 \pi \frac{x-x_s}{x_f -x_s} \right) \right] \right) & \: \: \: \text{ if } & x_s \leq x \leq x_f \\
\end{split}
\right.\\
& K \left( x \right) = &\left\{
\begin{split}
&K_{in} & \: \: \: \text{ if } & x < x_s \text{ or } x > x_f \\
&K_{in} \left( 1 + \frac{\Delta \mathcal{G}}{2} \left[ 1 + \cos \left( \pi +2 \pi \frac{x-x_s}{x_f -x_s} \right) \right] \right) & \: \: \: \text{ if } & x_s \leq x \leq x_f . \\
\end{split}
\right.\\
\end{split}
\right.
\label{eq:Ex-Stenosis-Cos-Geom-55}
\end{equation}
We choose $x_s = 5.5$ cm and $x_f = 9.5$ cm. We will refer to this stenosis as the cos stenosis. The cos stenosis is twice as long as the square stenosis in order to match the deformation area of the square stenosis. However, the maximal amplitudes of both configurations are the same and are proportional to $\Delta \mathcal{G}$.\\
In \cite{Ghigo2015}, the authors studied a similar pathological network and showed that the presence of the stenosis has a noticeable impact on the global hemodynamics for large values of $\Delta \mathcal{G}$. To that effect, we choose $\Delta \mathcal{G} = 65 \%$.\\
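To fix ideas, here is a quick worked computation (ours, not the authors') of the severity implied by $\Delta \mathcal{G} = 65\%$ for the square stenosis \eqref{eq:Ex-Stenosis-Square-Geom-55}, assuming a circular cross-section $A_0 = \pi R_0^2$:

```latex
% Inside the stenosed region, for \Delta G = 0.65:
\begin{equation*}
R_0 = (1 - 0.65)\,R_{in} = 0.35\,R_{in},
\qquad
\frac{A_0}{\pi R_{in}^2} = 0.35^2 = 0.1225,
\qquad
K = 1.65\,K_{in},
\end{equation*}
% i.e. an 87.75 % reduction of the cross-sectional area at rest,
% together with a 65 % increase of the wall rigidity.
```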
The results presented in this section are obtained using a time step $\Delta t = 5 \times 10^{-5}$ s and a mesh size $\Delta x = 0.2$ cm, and are compared to results obtained with the 55 arteries network without the pathology. This network will be referred to as the ``Sane'' network and does not require the use of any well-balanced method. We focus on four measurement points corresponding to locations typically used by medical practitioners during surgery. These points are situated in the middle of the following arteries, where the numbers indicated correspond to the numbering of the arteries in figure \ref{fig:Network-55}: the Left Subclavian II (20), Left Femoral (45), Right Femoral (51) and Right External Iliac (49), before the stenosis. Furthermore, as the pressure $P$ is the most common and simple variable to measure in vivo, we present only pressure waveforms in the following.\\
In figure \ref{fig:55Arteries-Inviscid}, we compare the pressure waveforms obtained using HR, HR-LS and HR-S for the cos stenosis. The results obtained using HR-LS and HR-S are almost identical in arteries 20 and 45, situated far from the stenosis, as well as in artery 49, located before the stenosis. In artery 51, situated after the stenosis, small differences exist between the results obtained with HR-LS and HR-S, especially during diastole ($t=7.5$ s to $t=8$ s). Moreover, the results obtained with HR-LS and HR-S in arteries 20, 45 and 49 are very close to those obtained with the Sane network, indicating that in these arteries, the resistive behavior of the stenosis is negligible compared to the global resistance of the network. However, in artery 51, the results obtained with HR-LS and HR-S slightly differ from those obtained with the Sane network, indicating that the presence of the stenosis has a local effect, especially during diastole ($t=7.5$ s to $t=8$ s). HR produces significantly different results from HR-LS and HR-S in each artery considered. In arteries 20, 45 and 49, HR overestimates the amplitude of the pressure waveform, whereas in artery 51 it underestimates it. These results are in good agreement with the observations made in figures \ref{fig:Step-Fr-10m2-dR-60} and \ref{fig:Stenosis-Fr-10m2-dR-60} in subsections \ref{subsec:Wave-Step} and \ref{subsec:Wave-Stenosis}. \\
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure15a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure15b.pdf}\\
\end{minipage}%
}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure15c.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure15d.pdf}\\
\end{minipage}%
}
\caption{ Pressure $P$ in the middle of different arteries of the model network (20, 45, 49, 51) using an inviscid fluid and an elastic wall model. Comparison between the sane case (black line) and the cos stenosis using HR (blue circle), HR-LS (green square) and HR-S (red triangle). HR-LS and HR-S are different only in artery 51. HR is different from HR-LS and HR-S.}
\label{fig:55Arteries-Inviscid}
\end{figure}
In figure \ref{fig:55Arteries-Inviscid-Shape}, we compare the pressure waveforms obtained using HR-LS for the cos and the square stenosis. The results indicate that in arteries 20, 45 and 49, there are no significant differences when using either the cos or the square stenosis. Only in artery 51 is the influence of the shape noticeable.\\
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure16a.pdf}\\
\end{minipage}
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure16b.pdf}\\
\end{minipage}
}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure16c.pdf}\\
\end{minipage}
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure16d.pdf}\\
\end{minipage}
}
\caption{ Pressure $P$ in the middle of different arteries of the model network (20, 45, 49, 51) using an inviscid fluid and an elastic wall model. Comparison between the sane case (black line), the square stenosis (red square) and the cos stenosis (blue triangle) using HR-LS. Changes with the shape of the pathology are visible only in artery $51$.}
\label{fig:55Arteries-Inviscid-Shape}
\end{figure}
In figure \ref{fig:55Arteries-Inviscid}, the effects of the flow viscosity and the wall viscoelasticity are neglected. However, they play an important role in the global hemodynamics and need to be taken into account to obtain an accurate description of pressure and flow waves in a network. In figure \ref{fig:55Arteries-Viscoelastic}, we present results similar to those of figure \ref{fig:55Arteries-Inviscid}, but now viscous and viscoelastic effects are taken into account. For the implementation of the viscous and viscoelastic terms, we refer the reader to \cite{Wang2015}. The results indicate that viscosity and viscoelasticity have a dissipative and diffusive effect, and in the presence of such effects, the results obtained with HR-LS and HR-S overlap, even in artery 51. However, HR still overestimates the amplitude of the pressure waves in arteries 20, 45 and 49 and underestimates it in artery 51.\\
\begin{figure}[!h]
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure17a.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure17b.pdf}\\
\end{minipage}%
}
\makebox[1.\textwidth][c]{
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure17c.pdf}\\
\end{minipage}%
\begin{minipage}[t]{.5\textwidth}
\centering
\includegraphics[scale=0.40,angle=0]{Figure17d.pdf}\\
\end{minipage}%
}
\caption{Pressure $P$ in the middle of different arteries of the model network (20, 45, 49, 51) using a viscous fluid and a viscoelastic wall model. Comparison between the sane case (black line) and the cos stenosis using HR (blue circle), HR-LS (green square) and HR-S (red triangle). Viscosity and viscoelasticity erase the differences between HR-LS and HR-S, which are now identical even in artery 51. HR is still different from HR-LS and HR-S.}
\label{fig:55Arteries-Viscoelastic}
\end{figure}
The results presented in this section indicate that the pressure waveform is sensitive to the choice of the well-balanced method. Even though HR-LS and HR-S have comparable behaviors, the results obtained with these methods are very different from those obtained with HR. On the contrary, changing the shape of the pathology has little effect on the shape and amplitude of the pressure waveforms. The small differences between the results are erased when blood viscosity and wall viscoelasticity are taken into account, due to the damping and diffusive behavior of the fluid and wall viscosities. Overall, HR-LS and HR-S behave similarly and produce satisfying results.\\
\section{Conclusion}
We introduced two well-balanced hydrostatic reconstruction techniques for blood flow in large arteries with varying geometrical and mechanical properties. The low-Shapiro hydrostatic reconstruction (HR-LS) is a simple and efficient well-balanced reconstruction technique, inspired by the hydrostatic reconstruction technique (HR) proposed in \cite{Audusse2004,Delestre2012}. It accurately preserves low-Shapiro number steady states (the Shapiro number is the equivalent of the Froude number for the shallow water equations and of the Mach number for the compressible Euler equations) that may occur in large network simulations, and it conserves the appropriate quantities at discontinuities of the geometrical and mechanical properties of the artery. The subsonic hydrostatic reconstruction (HR-S), introduced in \cite{Bouchut2010} and adapted here to blood flow, exactly preserves all subcritical steady states. We performed a series of numerical computations to compare the properties of HR, HR-LS and HR-S. In all numerical computations, HR was the least accurate method and was unable to correctly compute wave reflection and transmission when large variations of the artery's geometrical and mechanical properties were considered. HR-S proved to be exactly well-balanced for all low-Shapiro number steady states and was the most accurate reconstruction technique. We showed that HR-LS is well-balanced only for steady states at rest, but provides satisfactory approximations of low-Shapiro steady states. HR-LS is also able to capture wave reflections and transmissions for arbitrarily large variations of the artery's geometrical and mechanical properties, which is an essential property to compute realistic flow and pressure waveforms. We have also evaluated the sensitivity of the model to the well-balanced methods and to the shape of the pathology in a 55 arteries network simulation. We showed that the model is not sensitive to the geometry of the pathology.
However, important differences were observed between HR and the other well-balanced methods, namely HR-LS and HR-S, due to the fact that HR is unable to capture wave reflection and transmission. Finally, we observed that the small differences between HR-LS and HR-S are erased when adding viscous and viscoelastic effects, which are required to obtain realistic pressure and flow waveforms. This analysis allows us to conclude that both HR-LS and HR-S are adequate well-balanced methods to compute blood flow in large arteries with varying cross-sectional area at rest and arterial wall rigidity. However, in large networks where many arteries present variations of their geometrical and mechanical properties, the extra iterations required by HR-S increase the computational cost compared to HR-LS. We therefore recommend using HR-LS in this case, as it is a good compromise between simplicity, numerical accuracy and efficiency. In future works, we will investigate further the properties of HR-LS and propose an extension of the method to higher order.\\
\section{Acknowledgments}
The authors are grateful to F. Bouchut and E. Audusse for their helpful remarks and comments.
\clearpage
\pagebreak
\newpage
\fancyfoot{}
\fancyhead{}
\renewcommand{\headrulewidth}{0pt}
\renewcommand{\footrulewidth}{0pt}
\section{Introduction and Notations}
A {\em body} is a compact set with nonempty interior. For a body $K$ which is star-shaped with respect to the origin its {\em radial function} is defined by
$$
\rho_K(u) = \max \{\lambda \ge 0 : \lambda u \in K\} \quad\mbox{for every } u\in S^{n-1}.
$$
A body $K$ is called a {\em star body} if it is star-shaped with respect to the origin and its radial function $\rho_K$ is positive and continuous.
The {\em Minkowski functional} $\|\cdot\|_K$ is defined by $\|x\|_K = \min \{\lambda \ge 0 : x \in \lambda K\}$. Clearly, $\rho_K(u)=\|u\|_K^{-1}$, for $u \in S^{n-1}$. Moreover, we can assume that this identity holds for all $u\in \mathbb R^n\setminus\{0\}$ by extending $\rho_K$ from the sphere to $\mathbb R^n\setminus\{0\}$ as a homogeneous function of degree $-1$.
In \cite{Lu1}, Lutwak introduced the notion of the {\em intersection body} $IK$ of a star body $K$. $IK$ is defined by its radial function $$\rho_{IK}(u)= |K\cap u^\bot| , \quad \mbox{ for } u \in S^{n-1}.$$
Here and throughout the paper, $u^\perp$ denotes the hyperplane perpendicular to $u$, i.e.
$ u^\perp=\{x\in \R^n: x\cdot u=0\}$. By $|A|_k$, or simply $|A|$ when there is no ambiguity, we denote the $k$-dimensional Lebesgue measure of a set $A$.
We refer the reader to the books \cite{Ga2}, \cite{Ko}, \cite{KoY} for more information on the definition and properties of intersection bodies, and their applications in convex geometry and geometric tomography.
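As a first example (a standard computation, included here for concreteness): every central hyperplane section of the Euclidean ball $B_2^n$ is a unit $(n-1)$-dimensional ball, so

```latex
\begin{equation*}
\rho_{IB_2^n}(u) = \left|B_2^n \cap u^\perp\right|_{n-1} = \left|B_2^{n-1}\right|
\quad \text{for every } u \in S^{n-1},
\end{equation*}
% hence I B_2^n is again a Euclidean ball, dilated by the factor |B_2^{n-1}|.
```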
In this paper we are interested in the properties of the operator $I$ that assigns to a star body $K$ its intersection body $IK$.
One of the well-known results is the classical Busemann theorem (\cite{Bu}, see also \cite{Ba1}, \cite{Ba2} and \cite{MP}).
\begin{theorem}\label{th:Busemann} {
Let $K$ be an origin-symmetric convex body in $\R^n$. Then its intersection body $IK$ is convex.
}
\end{theorem}
Recently, a new proof of Busemann's theorem
was established by Berck \cite{Be}, who also generalized the theorem to the case of $L_p$ intersection bodies (see \cite{GG}, \cite{HL}, \cite{YY} and \cite{Ko} for more information on the theory of $L_p$ intersection bodies).
The development of the theory of intersection bodies shows that it is not natural to restrict ourselves to the class of convex bodies, and in fact, in many situations one has to deal with bodies which are not necessarily convex. How does $I$ act on these bodies? In this paper we will answer this question for the class of $p$-convex bodies.
Let $p\in (0,1]$. We say that a body $K$ is
$p$-convex if, for all $x, y \in \R^n$,
$$
\|x+y\|^p_K \le \|x\|^p_K+\|y\|^p_K,
$$
or, equivalently $t^{1/p}x+(1-t)^{1/p}y \in K$ whenever $x$ and $y$
are in $K$ and $t\in (0,1)$. One can see that
$p$-convex sets with $p=1$ are just convex. Note also that a $p_1$-convex body is $p_2$-convex for all $0<p_2\le p_1$.
There is an extensive literature on $p$-convex bodies as well as the closely related concept of quasi-convex bodies, see for example \cite{A}, \cite{BBP1}, \cite{BBP2}, \cite{D}, \cite{GoK}, \cite{GoL}, \cite{GuL}, \cite{Ka}, \cite{KaT}, \cite{KaPR}, \cite{Li1}, \cite{Li2}, \cite{Li3}, \cite{LMP}, \cite{LMS}, \cite{M} and others.
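A canonical example to keep in mind (standard, not specific to this paper): the unit ball of $\ell_p^n$ is $p$-convex but, for $p<1$, not convex.

```latex
\begin{equation*}
B_p^n = \Big\{ x \in \R^n : \textstyle\sum_{i=1}^n |x_i|^p \le 1 \Big\},
\qquad 0 < p \le 1.
\end{equation*}
% The p-triangle inequality ||x+y||_p^p <= ||x||_p^p + ||y||_p^p follows
% from the elementary inequality (a+b)^p <= a^p + b^p for a, b >= 0,
% while for p < 1 the body B_p^n is star-shaped with "inward-bent"
% faces, so it is not convex.
```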
The first question that we consider is the following.
{\it Let $K$ be an origin-symmetric $p$-convex body. Is the intersection body $IK$ necessarily $q$-convex for some $q$?
}
In Section 2, we prove that if $K$ is $p$-convex and symmetric then its intersection body $IK$ is $q$-convex for all $q \le \left[
\big(\frac 1p-1\big)(n-1) + 1 \right]^{-1}$. Furthermore, we construct an example showing that this upper bound is asymptotically sharp.
Another important question about the operator $I$ comes from works of Lutwak \cite{Lu2} and Gardner \cite[Prob. 8.6, 8.7]{Ga2} (see also \cite{GrZ}). It is easy to see that the intersection body of a Euclidean ball is again a Euclidean ball. A natural question is whether there are other fixed points of $I$. In order to measure the distance between star bodies we will be using the Banach-Mazur distance
$d_{BM}(K,L)=\inf \{b/a: \exists T \in GL(n): aK \subset TL \subset bK \}$ (see \cite{MS}).
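For scale, recall a classical fact (John's theorem; stated here for orientation only):

```latex
\begin{equation*}
d_{BM}(K, B_2^n) \le \sqrt{n}
\quad \text{for every origin-symmetric convex body } K \subset \R^n,
\end{equation*}
% so a uniform (dimension-free) bound on d_BM(IK, B_2^n), as holds for
% intersection bodies of convex bodies, is considerably stronger.
```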
In \cite{FNRZ} it is shown that in a sufficiently small neighborhood of the ball (with respect to the Banach-Mazur distance) there are no other fixed points of the intersection body operator. However, in general this question is still open.
In view of this it is natural to ask the following question.
{\it Does $IK$ have to be closer to the ball than $K$? }
In Section 2 we show that the answer is ``No". There are $p$-convex bodies for which the intersection body is farther from the Euclidean ball.
It is worth noting that, in the convex situation, there exists an absolute constant $C>0$ such that
$d_{BM}(IK,B_2^n)<C$ for all origin-symmetric convex bodies $K$ (see \cite{He}, \cite{Ba1}, \cite{Ba2}, \cite{Bou}, \cite{MP} or Corollary 2.7 in \cite{Ko}). The example in Section 2 shows that this statement is wrong if we only assume $p$-convexity for $p<1$, and $d_{BM}(IK,B_2^n)$, for a $p$-convex body $K$, can be as large as $C_p^n$, where $C_p>1$ is a certain constant that depends only on $p$.
In recent years, much attention has been devoted to the study of log-concave measures. These are measures whose densities are log-concave functions. The interest in such measures stems from the Brunn-Minkowski inequality, and many results known for convex bodies have now been generalized to log-concave measures; see for example \cite{Ba1}, \cite{Ba2}, \cite{AKM}, \cite{KlM}, \cite{FM}, \cite{Pa} and the references cited therein.
In Section 3 we study intersection bodies in spaces with log-concave measures. Namely, let $\mu$ be a log-concave measure on $\mathbb R^n$ and $\mu_{n-1}$ its restrictions to $(n-1)$-dimensional subspaces. Define the {\em intersection body} $I_\mu K$ of a star body $K$ {\em with respect to $\mu$} by
$$
\rho_{I_\mu K}(u) = \mu_{n-1}(K\cap u^\bot),\quad u \in S^{n-1}.
$$
We show that if $K$ is an origin-symmetric $p$-convex body and $\mu$ is a symmetric and log-concave measure, then $I_\mu K$ is $q$-convex for all $q \le \left[
\big(\frac 1p-1\big)(n-1) + 1 \right]^{-1}$. The proof uses a version of Ball's theorem \cite{Ba1}, \cite{Ba2} for $p$-convex bodies. Namely, we show that if $f$ is an even non-negative log-concave function on $\mathbb R^n$, $k\ge 1$, and $K$ is a $p$-convex body in $\R^n$, $0<p\le 1$, then the body $L$ defined by the Minkowski functional
$
\|x\|_L = \left[\int_0^{\|x\|_K^{-1}} f(rx)r^{k-1}dr \right]^{-1/k}, $ $x\in\R^n, $
is $p$-convex.
If the measure $\mu$ is not symmetric, the situation is different, as explained at the end of Section 3. $L$ defined above does not have to be $q$-convex for any $q>0$. However, if we consider $s$-concave measures, $0<s<1/n$, that are not necessarily symmetric, then the above construction defines a body $L$ that is $q$-convex for all $q\le \left[\big(\frac 1p-1\big)\big(\frac 1s-n\big)\frac 1k + \frac 1p\right]^{-1}$. We also show that this bound is sharp. This is the content of Section 4.
\noindent {\bf Acknowledgment}. We are indebted to Alexander Litvak and Dmitry Ryabogin for
valuable discussions.
\section{$p$-convexity and related results}
Here we prove a version of Busemann's theorem for $p$-convex bodies.
\begin{theorem}\label{th:pbus}
Let $K$ be a symmetric $p$-convex body in $\R^n$, $p \in (0,1]$, and $E$ a $(k-1)$-dimensional subspace of $\R^n$ for $1\le k\le n$. Then the map
$$
u \longmapsto \frac{|u|}{\big|K \cap {\rm span}(u,E)\big|_k},\quad u\in E^\perp
$$
defines the Minkowski functional of a $q$-convex body in $E^\bot$ with $q =\left[
\left(1/p-1\right)k + 1 \right]^{-1}$.
\end{theorem}
\begin{proof} We follow the general idea of the proof from \cite{MP} (see also \cite[p.311]{Ga2}).
Let $u_1, u_2 \in E^\perp \setminus \{0\}$ be nonparallel vectors. Denote $u=u_1+u_2$, and
\begin{eqnarray*}
\rho (u_1) &=&\frac{ |K \cap \mbox{span}\{u_1,E\}|}{|u_1|} = \int_{-\infty}^{\infty}
\left|K \cap (r u_1+E)\right|dr, \\
\rho (u_2) &=&\frac{ |K \cap \mbox{span}\{u_2,E\}| }{|u_2|}= \int_{-\infty}^{\infty} \left|K
\cap (r u_2+E)\right|dr.
\end{eqnarray*}
Define the functions $r_1 = r_1(t)$ and $r_2 = r_2(t)$ by
\begin{eqnarray*}
t &=& \frac{1}{\rho (u_1)} \int_0^{r_1} \left|K \cap(r
u_1+E)\right|dr \\
&=& \frac{1}{\rho (u_2)} \int_0^{r_2} \left|K \cap(r
u_2+E)\right|dr ,\qquad t\in [0,1/2].
\end{eqnarray*}
Let $r = \left( r_1^{-p} + r_2^{-p} \right)^{-\frac{1}{p}}$, $\lambda_1
= \frac{r_1^{-p}}{r_1^{-p} + r_2^{-p}}$ and $\lambda_2 =
\frac{r_2^{-p}}{r_1^{-p} + r_2^{-p}}$. Then
\begin{eqnarray*}
\frac{dr}{dt} &=& -\frac{1}{p}\left(r_1^{-p} +
r_2^{-p}\right)^{-\frac{1}{p}-1} \left( -p\,
r_1^{-p-1}\frac{dr_1}{dt}
- p\, r_2^{-p-1}\frac{dr_2}{dt} \right) \\
&=& r \left[ \lambda_1 \left(\frac{1}{r_1}\frac{dr_1}{dt}\right) +
\lambda_2 \left(\frac{1}{r_2}\frac{dr_2}{dt}\right) \right] \\
&\ge& r \left(\frac{1}{r_1}\frac{dr_1}{dt}\right)^{\lambda_1}
\left(\frac{1}{r_2}\frac{dr_2}{dt}\right)^{\lambda_2} =
\left(\lambda_1^{1/p}\frac{dr_1}{dt}\right)^{\lambda_1}
\left(\lambda_2^{1/p}\frac{dr_2}{dt}\right)^{\lambda_2} \\
&=& \left(\lambda_1^{\lambda_1}\lambda_2^{\lambda_2}\right)^{\frac{1}{p}}
\left(\frac{\rho (u_1)}{|K \cap (r_1 u_1+E)|}\right)^{\lambda_1}
\left(\frac{\rho (u_2)}{|K \cap (r_2 u_2+E)|}\right)^{\lambda_2}.
\end{eqnarray*}
On the other hand, since $r\,u = (r_1^{-p}+r_2^{-p})^{-\frac{1}{p}}
(u_1+u_2) = \lambda_1^{\frac{1}{p}}r_1u_1+\lambda_2^{\frac{1}{p}}r_2u_2$, we
have
\begin{eqnarray*}
K \cap (r\,u+E) &\supset& \lambda_1^{\frac{1}{p}}\left(K
\cap (r_1\, u_1+E)\right) + \lambda_2^{\frac{1}{p}}\left(K \cap (r_2\,u_2+E)\right) \\
&=& \lambda_1\left(\lambda_1^{\frac{1}{p}-1}K \cap (r_1\, u_1+E)\right) +
\lambda_2\left(\lambda_2^{\frac{1}{p}-1}K \cap (r_2\,u_2+E)\right).
\end{eqnarray*}
Thus, by the Brunn-Minkowski inequality (see, for example, \cite{Ga3}), we get
\begin{eqnarray*}
|K \cap (ru+E)| &\ge& \left|\lambda_1^{\frac{1}{p}-1}K \cap (r_1
u_1+E)\right|^{\lambda_1} \left|\lambda_2^{\frac{1}{p}-1}K \cap (r_2
u_2+E)\right|^{\lambda_2} \\
&=& \left(\lambda_1^{\lambda_1}\lambda_2^{\lambda_2}\right)^{(\frac{1}{p}-1)(k-1)}
\left|K \cap (r_1 u_1+E)\right|^{\lambda_1} \left|K \cap (r_2
u_2+E)\right|^{\lambda_2}.
\end{eqnarray*}
Finally, we have the following:
\begin{eqnarray*}
\rho (u_1+u_2) &=& \int_{-\infty}^{\infty} \left|K \cap (r
u+E)\right|dr = 2\int_0^{1/2} \left|K \cap (r
u+E)\right|\frac{dr}{dt}dt \\
&\ge& 2\int_0^{1/2}
\left(\lambda_1^{\lambda_1}\lambda_2^{\lambda_2}\right)^{(\frac{1}{p}-1)(k-1)+\frac{1}{p}}
\rho (u_1)^{\lambda_1}\rho (u_2)^{\lambda_2} dt \\
&\ge& 2\int_0^{1/2}
\left[ \Big(\lambda_1
\left[\rho (u_1)\right]^q\Big)^{\lambda_1} \Big(\lambda_2
\left[\rho (u_2)\right]^q\Big)^{\lambda_2} \right]^{1/q} dt\\
&\ge& 2\int_0^{1/2} \left[ \frac{\lambda_1}{\lambda_1
\left[\rho (u_1)\right]^q} + \frac{\lambda_2}{\lambda_2
\left[\rho (u_2)\right]^q}
\right]^{-1/q} dt\\
&=& \left[ \left[\rho (u_1)\right]^{-q} +
\left[\rho (u_2)\right]^{-q} \right]^{-1/q}.
\end{eqnarray*}
Therefore $\rho(u)$ defines a $q$-convex body.
\end{proof}
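As a quick consistency check, note that for $p=1$ Theorem \ref{th:pbus} gives
$$
q =\left[ (1/1-1)\,k + 1 \right]^{-1}=1,
$$
so the map above defines the Minkowski functional of a convex body, recovering the convexity statement of the classical Busemann theorem.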
As an immediate corollary of the previous theorem we obtain the following.
\begin{theorem}\label{th:q-conv}
Let $K$ be a symmetric $p$-convex body in $\R^n$ for $p \in (0,1]$. Then the
intersection body $IK$ of $K$ is $q$-convex for $q = \left[
\left(1/p-1\right)(n-1) + 1 \right]^{-1}$.
\end{theorem}
\begin{proof}
Let $L=IK$ be the intersection body of $K$, and fix two nonparallel vectors $u_1,u_2\in\R^n$. Let $v_1,v_2\in \mbox{span}\{u_1,u_2\}$ be orthogonal to $u_1$ and $u_2$ respectively, and denote $E=\mbox{span}\{u_1,u_2\}^\perp$.
Then $$\rho_L(v_1) = |K \cap \mbox{span}\{u_1,E\}| = \rho (u_1),$$
and $$\rho_L(v_2) = |K \cap \mbox{span}\{u_2,E\}| = \rho (u_2).$$
Using the previous theorem with $k=n-1$, we see that $L$ is $q$-convex for $q = [
\left(1/p-1\right)(n-1) + 1 ]^{-1}$.
\end{proof}
\begin{remark}
Note that the previous theorem does not hold without the symmetry assumption. To see this, use the idea from \cite[Thm 8.1.8]{Ga2}, where it is shown that $IK$ is not necessarily convex if $K$ is not symmetric.
\end{remark}
A natural question is to see whether the value of $q$ in Theorem \ref{th:q-conv} is optimal. Unfortunately, we were unable to construct a body that gives exactly this value of $q$, but our next result shows that the bound is asymptotically correct.
\begin{theorem}
There exists a $p$-convex body $K\subset \mathbb R^n$ such that $IK$ is $q$-convex with $q\le \left[(1/p-1)(n-1)+1 +g_n(p)\right]^{-1}$, where $g_n(p)$ is a function that satisfies
1) $g_n(p)\ge - \log_2(n-1),$
2) $\displaystyle \lim_{p\to 1^-} g_n(p) =0 .$
\end{theorem}
\begin{proof}
Consider the following two $(n-1)$-dimensional cubes in $\mathbb R^{n}$:
$$C_1=\{|x_1|\le 1, ...,|x_{n-1}|\le 1, x_{n}=1\} \,\,\,\,\,\, \mbox{ and } \,\,\,\,\,\, C_{-1}=\{|x_1|\le 1, ...,|x_{n-1}|\le 1, x_{n}=-1\}.$$
For a fixed $0<p<1$, let us define a set $K\subset \mathbb R^{n}$ as follows:
$$
K = \{z\in \mathbb R^n : z= t^{1/p}x+ (1-t)^{1/p}y, \mbox{ for some } x\in C_{1}, y\in C_{-1}, 0\le t\le 1\}.
$$
We claim that $K$ is $p$-convex. To show this let us consider two arbitrary points $z_1, z_2\in K$,
$$z_1= t_1^{1/p}x_1+ (1-t_1)^{1/p}y_1,\qquad z_2= t_2^{1/p}x_2+ (1-t_2)^{1/p}y_2,$$ where $x_1, x_2\in C_{1}$, $y_1, y_2\in C_{-1}$, and $t_1, t_2 \in [0, 1]$.
We need to show that for all $s\in (0,1)$ the point $w= s^{1/p}z_1+ (1-s)^{1/p}z_2$ belongs to $K$.
Assume first that $t_1$ and $t_2$ are neither both equal to zero nor both equal to one.
Since $C_1$ and $C_{-1}$ are convex sets, it follows that the points
$${\bar x} = \frac{s^{1/p}t_1^{1/p}x_1+ (1-s)^{1/p}t_2^{1/p}x_2}{s^{1/p}t_1^{1/p} + (1-s)^{1/p}t_2^{1/p}}
\,\,\,\,\,\,\mbox{ and }\,\,\,\,\,\,
{\bar y} =\frac{s^{1/p} (1-t_1)^{1/p}y_1+ (1-s)^{1/p}(1-t_2)^{1/p}y_2}{s^{1/p} (1-t_1)^{1/p}+ (1-s)^{1/p}(1-t_2)^{1/p}}$$
belong to $C_1$ and $C_{-1}$ correspondingly. Then $w= \alpha {\bar x} + \beta {\bar y}$, where $$\alpha = s^{1/p}t_1^{1/p} + (1-s)^{1/p}t_2^{1/p} \,\,\,\,\,\, \mbox { and }\,\,\,\,\,\, \beta =s^{1/p} (1-t_1)^{1/p}+ (1-s)^{1/p}(1-t_2)^{1/p}.
$$
Note that $\alpha^p +\beta^p \le s t_1 + (1-s) t_2 + s (1-t_1) + (1-s) (1-t_2) = 1.$
Therefore, there exists $\mu\ge 0$ such that $(\alpha+\mu)^p+(\beta+\mu)^p =1$ and
$$w= (\alpha+\mu) {\bar x} + (\beta {\bar y}+\mu (-{\bar x})) = (\alpha+\mu) {\bar x} + (\beta+\mu)\frac{\beta {\bar y}+\mu (-{\bar x})}{\beta +\mu}.$$
Since ${\bar y}\in C_{-1}$ and $-{\bar x}\in C_{-1}$, it follows that $${\widetilde y} = \frac{\beta {\bar y}+\mu (-{\bar x})}{\beta +\mu}\in C_{-1}.$$
Therefore $w$ is a $p$-convex combination
of ${\bar x}\in C_{1}$ and ${\widetilde y}\in C_{-1}$.
If $t_1$ and $t_2$ are both equal to zero or both equal to one, then either $\alpha=0$ or $\beta=0$. Without loss of generality say $\alpha=0$, so that $w = \beta {\bar y}.$ Now choose $\bar x\in C_1$ arbitrarily, and apply the considerations above to the point $w= 0\cdot {\bar x} + \beta {\bar y}.$
The claim follows.
Note that $K$ can be written as
\begin{eqnarray} \label{lab:K}
K &=& \left\{ r^\frac1p x+ (1-r)^\frac1p y : x\in C_{1}, y\in C_{-1}, 0\le r\le 1 \right\} \nonumber \\
&=& \left\{ r^\frac1p v+ (1-r)^\frac1p w + \[r^\frac1p -(1-r)^\frac1p \]e_n: v, w\in B_\infty^{n-1}, 0\le r\le 1 \right\} \nonumber \\
&=& \left\{ \[r^\frac1p +(1-r)^\frac1p \] z + \[r^\frac1p -(1-r)^\frac1p \]e_n : z\in B_\infty^{n-1}, 0\le r\le 1\right\} \nonumber \\
&=& \Big\{ f(t) z + te_n : z \in B_\infty^{n-1}, -1\le t\le 1\Big\} ,
\end{eqnarray}
where $B^{n-1}_\infty=[-1,1]^{n-1}\subset \R^{n-1}$ and $f$ is a function on $[-1,1]$ defined as the solution $s=f(t)$ of $$\(\frac{s+t}{2}\)^p + \(\frac{s-t}{2}\)^p =1, \quad s\ge|t|,-1\le t\le 1.$$
Let $L=IK$ be the intersection body of $K$. Then
$$\rho_L(e_n)= |K\cap e_n^\perp| = \left(2f(0)\right)^{n-1}=\left(\frac{4}{2^{1/p}}\right)^{n-1} .$$
In order to compute the volume of the central section of $K$ orthogonal to $(e_1+e_n)/\sqrt{2}$, use (\ref{lab:K}) to notice that its projection onto $x_1=0$ coincides with $K\cap e_1^\perp$. Therefore
$$\rho_L((e_1+e_n)/\sqrt{2} ) = \sqrt{2}\rho_L( e_1 )=2 \sqrt{2}\int_0^1 \left[2f(t)\right]^{n-2} dt .$$
Let $L=IK$ be $q$-convex. In order to estimate $q$, we will use the inequality
$$\left\|\sqrt{2} e_n\right\|_L^q\le \left\|\frac{e_n+e_1}{\sqrt{2}}\right\|_L^q+ \left\|\frac{e_n-e_1}{\sqrt{2}}\right\|_L^q = 2 \left\|\frac{e_n+e_1}{\sqrt{2}}\right\|_L^q,$$ that is
$$\frac{\sqrt{2}}{\rho_L(e_n)} \le \frac{2^{1/q}}{\rho_L((e_1+e_n)/\sqrt{2} )} .$$
Thus, we have
$$2^{1/q}\ge \left(\frac{2^{1/p}}{2}\right)^{n-1} 2 { \int_0^1 f(t)^{n-2} dt } . $$
We now estimate the latter integral.
From the definition of $f$ it follows that $f(t)\ge \frac{2}{2^{1/p}}$ and $f(t)\ge t $ for $t\in [0,1]$. Taking the maximum of these two functions, we get $$\displaystyle f(t) \ge \left\{\begin{array}{ll}\frac{2}{2^{1/p}},& \mbox{ for } 0\le t \le \frac{2}{2^{1/p}},\\
t,& \mbox{ for } \frac{2}{2^{1/p}} \le t\le 1. \end{array} \right.$$
Therefore, \begin{align}\label{intf} \int_0^1 f(t)^{n-2} dt&\ge\int_0^{\frac{2}{2^{1/p}}}\left(\frac{2}{2^{1/p}}\right)^{n-2} dt + \int_{\frac{2}{2^{1/p}}}^1 t^{n-2} dt\\\notag &=\left(\frac{2}{2^{1/p}}\right)^{n-1}+\frac{1}{n-1}\left(1 - \left(\frac{2}{2^{1/p}}\right)^{n-1} \right).\end{align}
Hence,
$$2^{1/q}\ge \ 2 \left(1+\frac{1}{n-1}\left(\left(\frac{2^{1/p}}{2}\right)^{n-1} - 1\right) \right) , $$
which implies $$q\le \left[\left(\frac1p-1\right)(n-1)+1+\log_2\frac{(n-2)\left(\frac{2}{2^{1/p}}\right)^{n-1}+1}{n-1}\right]^{-1}.$$
Denoting $$g_n(p)=\log_2\frac{(n-2)\left(\frac{2}{2^{1/p}}\right)^{n-1}+1}{n-1},$$
we get the statement of the theorem.
\end{proof}
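Let us verify the two properties of $g_n(p)$. Since $\left(\frac{2}{2^{1/p}}\right)^{n-1}>0$, we have
$$
g_n(p) > \log_2\frac{1}{n-1} = -\log_2 (n-1),
$$
which gives 1). Moreover, as $p\to 1^-$ we have $\frac{2}{2^{1/p}}\to 1$, so the expression inside the logarithm tends to $\frac{(n-2)+1}{n-1}=1$ and $g_n(p)\to 0$, which gives 2).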
We will use the above example to show that in general the intersection body operator does not improve the Banach-Mazur distance to the Euclidean ball $B_2^n$.
\begin{theorem}
Let $p\in (0,1)$ and let $c$ be any constant satisfying $1<c< {2^{1/p-1}}$. Then for all large enough $n$, there exists a $p$-convex body $K\subset \mathbb R^n$ such that
$$c^n d_{BM}(K, B_2^n) < d_{BM} (IK, B_2^n).$$
\end{theorem}
\begin{proof} We will consider $K$ from the previous theorem. One can see that $K \subset B_\infty ^n \subset \sqrt{n}B_2^n$. Also note that for any $a \in B_\infty^n$, there exist $x \in C_1$, $y\in C_{-1}$ and $\lambda \in [0,1]$ such that $a=\lambda x + (1-\lambda)y$. Then we have
$
\|a\|^p_K \le \lambda^p+(1-\lambda)^p\le 2^{1-p}.
$
Therefore, $K \supset 2^{\frac{p-1}{p}}B_\infty^n \supset 2^{\frac{p-1}{p}}B_2^n$, and thus
\begin{equation}\label{KB}
d_{BM}(K, B_2^n) \le 2^{\frac{1-p}{p}} \sqrt{n}.
\end{equation}
Next we would like to provide a lower bound for $d_{BM}(IK, B_2^n)$. Let
$E$ be an ellipsoid such that
$
E \subset IK \subset d E,
$ for some $d$.
Then
$$
IK \subset \mathop{\textup{conv}}(IK) \subset d E \,\,\,\,\,\,\ \mbox{ and }\,\,\,\,\,\,\, \frac{1}{d}\mathop{\textup{conv}}(IK) \subset E \subset IK.
$$
Therefore, $1/d \le 1/r$, where $r=\min \{ t: \mathop{\textup{conv}}(IK) \subset t\, IK\}$. Thus, $$d_{BM}(IK, B_2^n) \ge r=\max\left\{\frac{\rho_{\mathop{\textup{conv}}(IK)}(\theta)}{\rho_{IK}(\theta)}: \theta \in S^{n-1}\right\} \ge \frac{\rho_{\mathop{\textup{conv}}(IK)}(e_n)}{\rho_{IK}(e_n)}.$$
The convexity of $\mathop{\textup{conv}}(IK)$ gives
$$
\rho_{conv(IK)}(e_n) \ge \left\| \frac{1}{2} \left (\rho_{IK}\left(\frac{e_n+e_1}{\sqrt{2}}\right) \frac{e_n+e_1}{\sqrt{2}}+ \rho_{IK}\left(\frac{e_n-e_1}{\sqrt{2}}\right)\frac{e_n-e_1}{\sqrt{2}}\right)\right\|_2$$
$$
= \frac{1}{\sqrt{2}} \rho_{IK}\left(\frac{e_n+e_1}{\sqrt{2}}\right) .
$$
Combining the above inequalities with inequality (\ref{intf}) from the previous theorem, we get
$$
d_{BM}(IK, B_2^n) \ge \frac{\rho_{IK}(\frac{e_n+e_1}{\sqrt{2}})} {\sqrt{2}\rho_{IK}(e_n)} \ge \left(\frac{2^{1/p}}{2}\right)^{n-1} \frac{1}{n-1}. $$ Comparing this with (\ref{KB}) we get the statement of the theorem.
\end{proof}
\section{Generalization to log-concave measures}
A measure $\mu$ on $\mathbb R^n$ is called log-concave if for any measurable $A,B \subset \mathbb R^n$
and $0 < \lambda < 1$, we have
$$
\mu(\lambda A + (1-\lambda)B) \ge \mu(A)^\lambda \mu(B)^{(1-\lambda)}
$$
whenever $\lambda A + (1-\lambda)B$ is measurable.
Borell \cite{Bor} has shown that a measure $\mu$ on $\mathbb R^n$ whose support is not contained
in any affine hyperplane is a log-concave measure if and only if it is
absolutely continuous with respect to the Lebesgue measure, and its
density is a log-concave function.
To extend Busemann's theorem to log-concave measures on $\R^n$, we need the following theorem of Ball \cite{Ba1}, \cite{Ba2}.
\begin{theorem} \label{th:ball}
Let $f:\R^n\rightarrow [0,\infty)$ be an even log-concave function satisfying $0<\int_{\mathbb R^n} f <\infty$ and let $k\ge 1$. Then the map
$$
x \longmapsto \[\int_0^\infty f(rx)r^{k-1}dr\]^{-\frac 1k}
$$
defines a norm on $\R^n$.
\end{theorem}
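For instance, taking $f = 1_K$ for an origin-symmetric convex body $K$, we get
$$
\int_0^\infty 1_K(rx) r^{k-1}dr = \int_0^{\|x\|_K^{-1}} r^{k-1}dr = \frac{1}{k}\,\|x\|_K^{-k},
$$
so the map in the theorem equals $k^{1/k}\|x\|_K$, which is indeed a norm.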
An immediate consequence of Ball's theorem is a generalization of the classical Busemann theorem to log-concave measures on $\R^n$.
Let $\mu$ be a measure on $\R^n$, absolutely continuous with respect to the Lebesgue measure $m$, and $f$ its density function.
If $f$ is locally integrable on $k$-dimensional affine subspaces of $\R^n$, then we denote by $\mu_k = f m_k$ the restriction of $\mu$ to $k$-dimensional subspaces, where $m_k$ is the $k$-dimensional Lebesgue measure.
Define the {\em intersection body} $I_\mu K$ of a star body $K$ {\em with respect to $\mu$} by
$$
\rho_{I_\mu K}(u) = \mu_{n-1}(K\cap u^\bot),\quad u \in S^{n-1}.
$$
Let $\mu$ be a symmetric log-concave measure on $\R^n$ and $K$ a symmetric convex body in $\R^n$.
Let $f$ be the density of the measure $\mu$. If we apply Theorem \ref{th:ball} to the log-concave function $1_K f$, we get a symmetric convex body $L$ whose Minkowski functional is given by $$\|x\|_L = \[(n-1)\int_0^\infty (1_K f)(rx)r^{n-2}dr\]^{-\frac{1}{n-1}}.$$ Then for every $u \in S^{n-1}$,
\begin{eqnarray*}
\mu_{n-1}(K\cap u^\perp) &=& \int_{S^{n-1}\cap u^\perp}\int_0^\infty (1_K f)(r\theta)r^{n-2} dr d\theta \\
&=& \frac{1}{n-1} \int_{S^{n-1}\cap u^\perp} \|\theta\|_L^{-n+1} d\theta = |L\cap u^\perp|.
\end{eqnarray*}
Using Theorem \ref{th:Busemann} for the convex body $L$, one immediately obtains the following version of Busemann's theorem for log-concave measures.
\begin{theorem}
Let $\mu$ be a symmetric log-concave measure on $\R^n$ and $K$ a symmetric convex body in $\R^n$. Then the intersection body $I_\mu K$ is convex.
\end{theorem}
In order to generalize Theorem \ref{th:q-conv} to log-concave measures, we will first prove a version of Ball's theorem (Thm \ref{th:ball}) for $p$-convex bodies.
\begin{theorem}\label{th:log-concave}
Let $f:\R^n\rightarrow [0,\infty)$ be an even log-concave function, $k\ge 1$, and $K$ a $p$-convex body in $\R^n$ for $0<p\le 1$. Then the body $L$ defined by the Minkowski functional
$$
\|x\|_L = \left[\int_0^{\|x\|_K^{-1}} f(rx)r^{k-1}dr \right]^{-\frac 1k}, \quad x\in\R^n,
$$
is $p$-convex.
\end{theorem}
\begin{proof}
Fix two non-parallel vectors $x_1,x_2\in\R^n$ and denote $x_3=x_1+x_2$. We claim that $\|x_3\|_L^p \le \|x_1\|_L^p + \|x_2\|_L^p$. Consider the following 2-dimensional bodies in the plane $E={\rm span}\{x_1,x_2\}$,
$$
\bar{K}=\left\{\frac{t_1 x_1}{\|x_1\|_K}+\frac{t_2 x_2}{\|x_2\|_K}: t_1,t_2\ge 0, t_1^p +t_2^p \le 1\right\}
$$
and
$$
\bar{L}=\left\{x\in\R^n: \|x\|_{\bar L} = \left[\int_0^{\|x\|_{\bar K}^{-1}} f(rx)r^{k-1}dr \right]^{-\frac 1k}\le 1 \right\}.
$$
One can see that the boundary of $\bar K$ consists of a $p$-arc connecting the points $\frac{x_1}{\|x_1\|_K}$ and $\frac{x_2}{\|x_2\|_K}$, and two straight line segments connecting the origin with these two points. Clearly $\bar{K}$ is $p$-convex and $\bar{K}\subset K$. Also note that $\|x_i\|_{\bar K}=\|x_i\|_K$ for $i=1,2$, since $\frac{x_1}{\|x_1\|_K}$ and $\frac{x_2}{\|x_2\|_K}$ are on the boundary of $\bar{K}$, and $\|x_3\|_{\bar K} \ge \|x_3\|_K$ since $\bar{K}\subset K$. It follows that
$\|x_i\|_{\bar L}=\|x_i\|_L \,\, (i=1,2) , $ and $\|x_3\|_{\bar L} \ge \|x_3\|_L$.
Consider the point $y = \frac{\|x_1\|_{\bar L}}{\|x_1\|_{\bar K}}x_1 + \frac{\|x_2\|_{\bar L}}{\|x_2\|_{\bar K}}x_2$ in the plane $E$. The point $\frac{y}{\|y\|_{\bar K}}$ lies on the $p$-arc connecting $\frac{x_1}{\|x_1\|_{\bar K}}$ and $\frac{x_2}{\|x_2\|_{\bar K}}$. Consider the tangent line to this arc at the point $\frac{y}{\|y\|_{\bar K}}$. This line intersects the segments $[0, {x_i}/{\|x_i\|_{\bar K}}]$, $i=1,2$, at some points
$\frac{t_ix_i}{\|x_i\|_{\bar K}}$ with $t_i\in(0,1)$.
Since $\frac{t_1x_1}{\|x_1\|_{\bar K}}$, $\frac{t_2x_2}{\|x_2\|_{\bar K}}$ and $\frac{y}{\|y\|_{\bar K}}$ are on the same line, it follows that the
coefficients of $\frac{t_1x_1}{\|x_1\|_{\bar K}}$ and $\frac{t_2x_2}{\|x_2\|_{\bar K}}$ in the equality
$$
\frac{y}{\|y\|_{\bar K}} = \frac{1}{\|y\|_{\bar K}} \( \frac{\|x_1\|_{\bar L}}{t_1}\cdot\frac{t_1x_1}{\|x_1\|_{\bar K}} + \frac{\|x_2\|_{\bar L}}{t_2}\cdot\frac{t_2x_2}{\|x_2\|_{\bar K}} \)
$$
have to add up to 1. Therefore,
$$
\|y\|_{\bar K} = \frac{\|x_1\|_{\bar L}}{t_1} + \frac{\|x_2\|_{\bar L}}{t_2}.
$$
Note also that the line between $\frac{t_1x_1}{\|x_1\|_{\bar K}}$ and $\frac{t_2x_2}{\|x_2\|_{\bar K}}$ separates $\frac{x_3}{\|x_3\|_{\bar K}}$ from the origin, which means that the three points $\frac{t_1x_1}{\|x_1\|_{\bar K}}$, $\frac{t_2x_2}{\|x_2\|_{\bar K}}$ and $\frac{x_3}{\|x_3\|_{\bar K}}$ are in the ``convex position''. Applying Ball's theorem on log-concave functions (Thm \ref{th:ball}) to these three points, we have
$$
\[\int_0^{\frac{1}{\|x_3\|_{\bar K}}} f(rx_3)r^{k-1}dr \]^{-\frac 1k}
\le \[\int_0^{\frac{t_1}{\|x_1\|_{\bar K}}} f(rx_1)r^{k-1}dr \]^{-\frac 1k}
+ \[\int_0^{\frac{t_2}{\|x_2\|_{\bar K}}} f(rx_2)r^{k-1}dr \]^{-\frac 1k}.
$$
If we let $s_i = \|x_i\|_{\bar L} \[\int_0^{\frac{t_i}{\|x_i\|_{\bar K}}} f(rx_i)r^{k-1}dr \]^{\frac 1k}$ for each $i=1,2$, the above inequality becomes $$\|x_3\|_{\bar L} \le \frac{\|x_1\|_{\bar L}}{s_1} + \frac{\|x_2\|_{\bar L}}{s_2}.$$ By a change of variables, we get
\begin{eqnarray*}
s_i = t_i \|x_i\|_{\bar L} \[\int_0^{\frac{1}{\|x_i\|_{\bar K}}} f(t_i rx_i)r^{k-1}dr \]^{\frac 1k}
\ge t_i \|x_i\|_{\bar L} \[\int_0^{\frac{1}{\|x_i\|_{\bar K}}} f(rx_i)r^{k-1}dr \]^{\frac 1k}
= t_i
\end{eqnarray*}
for each $i=1,2$. The above inequality comes from the fact that an even log-concave function has to be non-increasing on $[0,\infty)$. Indeed,
\begin{eqnarray*}
f(t_i rx_i) = f\(\frac{1+t_i}{2}\cdot rx_i - \frac{1-t_i}{2}\cdot rx_i\)
\ge f(rx_i)^{\frac{1+t_i}{2}}f(-rx_i)^{\frac{1-t_i}{2}} = f(rx_i).
\end{eqnarray*}
Putting everything together, we have
\begin{eqnarray*}
\|x_3\|_L \le \|x_3\|_{\bar L}\le \frac{\|x_1\|_{\bar L}}{s_1} + \frac{\|x_2\|_{\bar L}}{s_2}
\le \frac{\|x_1\|_{\bar L}}{t_1} + \frac{\|x_2\|_{\bar L}}{t_2} = \|y\|_{\bar K}.
\end{eqnarray*}
Using the $p$-convexity of $\bar{K}$, we have
\begin{eqnarray*}
\|y\|_{\bar K}^p \le \left\|\frac{\|x_1\|_{\bar L}}{\|x_1\|_{\bar K}}x_1\right\|_{\bar K}^p + \left\|\frac{\|x_2\|_{\bar L}}{\|x_2\|_{\bar K}}x_2\right\|_{\bar K}^p
=\|x_1\|_{\bar L}^p + \|x_2\|_{\bar L}^p = \|x_1\|_L^p + \|x_2\|_L^p,
\end{eqnarray*}
and therefore $\|x_3\|_L^p \le \|x_1\|_L^p + \|x_2\|_L^p$.
\end{proof}
\begin{corollary}
Let $\mu$ be a symmetric log-concave measure and $K$ a symmetric $p$-convex body in $\R^n$ for $p \in (0,1]$. Then the intersection body $I_\mu K$ of $K$ is $q$-convex with $q =\left[(1/p-1)(n-1) + 1 \right]^{-1}$.
\end{corollary}
\begin{proof}
Let $f$ be the density function of $\mu$. By Theorem \ref{th:log-concave}, the body $L$ with the Minkowski functional
$$
\|x\|_L = \left[(n-1)\int_0^{\|x\|_K^{-1}} f(rx)r^{n-2}dr\right]^{\frac{-1}{n-1}},\quad x\in\R^n,
$$
is $p$-convex.
On the other hand, the intersection body $I_\mu K$ of $K$ is given by the radial function
\begin{eqnarray*}
\rho_{I_\mu K}(u) &=& \mu_{n-1}(K\cap u^\bot) \\
&=& \int_{\R^n} 1_{K\cap u^\bot}(x)f(x) dx \quad=\quad \int_{S^{n-1}\cap u^\bot} \int_0^{\|u\|_K^{-1}}f(rv)r^{n-2} drdv \\
&=& \frac{1}{n-1}\int_{S^{n-1}\cap u^\bot} \|v\|_L^{-n+1}dv \quad=\quad |L\cap u^\bot|_{n-1} \\ &=& \rho_{IL}(u),
\end{eqnarray*}
which means $I_\mu K = IL$. By Theorem \ref{th:pbus}, $IL$ is $q$-convex with $q =\left[ (1/p-1)(n-1) +1 \right]^{-1}$, and therefore so is $I_\mu K$.
\end{proof}
We conclude this section with an example that shows that the condition on $f$ to be even in Theorem \ref{th:log-concave} cannot be dropped.
\noindent{\bf Example 1}.
Let $\mu$ be a log-concave measure on $\R^n$ with density
$$f(x_1,\ldots,x_n)=\left\{\begin{array}{ll} 1, &\text{if }x_1+x_2 \ge 2^{1-1/p},\\
0, &\text{otherwise.}\end{array}
\right.$$ Consider the $p$-convex body $K=B_p^n$ for $p\in(0,1)$, and let $L$ be the body defined in Theorem \ref{th:log-concave}. Along the direction $e_1+e_2$ the density $f$ vanishes for $r<2^{-1/p}$, while $\|e_1+e_2\|_K^{-1}=2^{-1/p}$, so the defining integral is zero and $\|e_1+e_2\|_L=\infty$. On the other hand, $\|e_1\|_L=\|e_2\|_L$ is finite, which means $L$ is not $q$-convex for any $q>0$.\\
\section{Non-symmetric cases and $s$-concave measures}
Note that Ball's theorem (Thm \ref{th:ball}) remains valid even if $f$ is not even, as was shown by Klartag \cite{Kl}. On the other hand,
as we explained above, Theorem \ref{th:log-concave} does not hold for non-symmetric log-concave measures. However, if we restrict ourselves to
the class of $s$-concave measures, $s>0$, then it is possible to give a version of Theorem \ref{th:log-concave} for non-symmetric measures.
Borell \cite{Bor} introduced the classes $\mathfrak{M}_s(\Omega)$, ($-\infty
\le s \le \infty$, $\Omega \subset \R^n$ open convex) of $s$-concave
measures, which are Radon measures $\mu$ on $\Omega$ satisfying the
following condition: the inequality
$$
\mu(\lambda A + (1-\lambda)B) \ge \left[\lambda \mu(A)^s + (1-\lambda)\mu(B)^s
\right]^{\frac 1s}
$$
holds for all nonempty compact $A, B \subset \Omega$ and all $\lambda \in
(0,1)$. In particular, $s=0$ gives the class of log-concave measures.
Let us consider the case $0<s<1/n$. According to Borell, $\mu$ is
$s$-concave if and only if the support of $\mu$ is $n$-dimensional
and $d\mu = fdm$ for some $f \in L^1_{loc}(\Omega)$ such that
$f^{\frac{s}{1-ns}}$ is a concave function on $\Omega$.
\begin{theorem}
Let $\mu$ be an $s$-concave measure on $\Omega\subset\R^n$ with density $f$, for $0<s<1/n$, and $K$ a $p$-convex body in $\Omega$, for $p\in (0,1]$. If $k\ge 1$, then the body $L$ whose Minkowski functional is given by
$$
\|x\|_L = \left[\int_0^\infty 1_K(rx)f(rx)r^{k-1}dr\right]^{-\frac 1k},\quad x\in\R^n
$$
is $q$-convex with $q= \[\big(\frac 1p-1\big)\big(\frac 1s-n\big)\frac 1k + \frac 1p\]^{-1}$.
\end{theorem}
\begin{proof}
Let $x_1,x_2 \in \R^{n}$ and $x_3=x_1+x_2$. Then, for $i=1,2$,
\begin{eqnarray*}
\|x_i\|_L^{-k} &=& \int_0^\infty 1_K(rx_i) f(rx_i)r^{k-1}dr =
\frac 1p\int_0^\infty 1_K(s^{-\frac 1p}x_i) f(s^{-\frac 1p}x_i) s^{-\frac kp-1}ds \\
&=& \frac 1p\int_0^\infty F_i(s)ds =
\frac 1p\int_0^\infty \Big|\{s \in (0,\infty) : F_i(s)>t\}\Big|\, dt,
\end{eqnarray*}
where $F_i(s) = 1_K(s^{-\frac 1p}x_i) f(s^{-\frac 1p}x_i)
s^{-\frac kp-1}$ for each $i=1,2,3$. We claim that
$$
2^{\frac kq+1} F_3(s_3) \ge F_1(s_1)^{\lambda_1}F_2(s_2)^{\lambda_2}
$$
whenever $s_3 = s_1 + s_2$ and $\lambda_i = \frac{s_i}{s_1+s_2}$ for
$i=1,2$. Indeed, since
\begin{eqnarray*}
s_3^{-\frac 1p}x_3 &=& \left(\frac{s_1}{s_1+s_2}\right)^{\frac
1p}s_1^{-\frac 1p}x_1 + \left(\frac{s_2}{s_1+s_2}\right)^{\frac
1p}s_2^{-\frac 1p}x_2 \\
&=& \lambda_1(\lambda_1^{\frac 1p -1}s_1^{-\frac 1p}x_1) +
\lambda_2(\lambda_2^{\frac 1p -1}s_2^{-\frac 1p}x_2),
\end{eqnarray*}
the concavity of $f^\gamma$, $\gamma=\frac{s}{1-ns}$, gives
\begin{eqnarray*}
f^\gamma(s_3^{-\frac 1p}x_3) &\ge& \lambda_1 f^\gamma(\lambda_1^{\frac 1p-1}s_1^{-\frac 1p}x_1)
+ \lambda_2 f^\gamma(\lambda_2^{\frac 1p -1}s_2^{-\frac 1p}x_2) \\
&\ge& \[ f^\gamma(\lambda_1^{\frac 1p -1}s_1^{-\frac 1p}x_1)\]^{\lambda_1}
\[f^\gamma(\lambda_2^{\frac 1p -1}s_2^{-\frac 1p}x_2) \]^{\lambda_2}\\
&\ge& \[\lambda_1^{\frac 1p -1} f^\gamma(s_1^{-\frac 1p}x_1)\]^{\lambda_1}
\[\lambda_2^{\frac 1p -1}f^\gamma(s_2^{-\frac 1p}x_2) \]^{\lambda_2}\\
&=& \( \[\(\frac{s_1}{s_3}\)^{\frac 1\gamma(\frac 1p-1)} f(s_1^{-\frac 1p}x_1) \]^{\lambda_1}
\[\(\frac{s_2}{s_3}\)^{\frac 1\gamma(\frac 1p -1)} f(s_2^{-\frac 1p}x_2) \]^{\lambda_2} \)^\gamma ,
\end{eqnarray*}
that is,
$$
s_3^{\frac 1\gamma(\frac 1p -1)} f(s_3^{-\frac 1p}x_3) \ge
\prod_{i=1}^2 \left[ s_i^{\frac 1\gamma(\frac 1p -1)} f(s_i^{-\frac 1p}x_i) \right]^{\lambda_i}.
$$
On the other hand, note that
$$
\frac{2}{s_3} = \frac{\lambda_1}{s_1} + \frac{\lambda_2}{s_2}
\ge \(\frac{1}{s_1}\)^{\lambda_1} \(\frac{1}{s_2}\)^{\lambda_2}
$$ and
$$
1_K(s_3^{-\frac 1p}x_3) \ge 1_K(s_1^{-\frac 1p}x_1) 1_K(s_2^{-\frac 1p}x_2),
$$
since $s_3^{-\frac 1p}x_3 = \lambda_1^{\frac 1p}(s_1^{-\frac 1p}x_1) + \lambda_2^{\frac 1p
}(s_2^{-\frac 1p}x_2)$. Thus
\begin{eqnarray*}
F_1(s_1)^{\lambda_1} F_2(s_2)^{\lambda_2}
&=& \prod_{i=1}^2 \[1_K(s_i^{-\frac 1p}x_i) f(s_i^{-\frac 1p}x_i)s_i^{-\frac kp-1} \]^{\lambda_i} \\
&\le& 1_K(s_3^{-\frac 1p}x_3) \prod_{i=1}^2 \[ s_i^{\frac 1\gamma(\frac 1p-1)}
f(s_i^{-\frac 1p}x_i) \cdot \(\frac{1}{s_i}\)^{\frac 1\gamma (\frac 1p-1) + \frac kp+1}\]^{\lambda_i} \\
&\le& 1_K(s_3^{-\frac 1p}x_3) f(s_3^{-\frac 1p}x_3)
2^{\frac 1\gamma (\frac 1p-1)+\frac kp+1} s_3^{-\frac kp-1} \\
&\le& 2^{\frac kq +1} F_3(s_3).
\end{eqnarray*}
It follows that for every $t>0$
$$
\{s_3 : 2^{\frac kq +1} F_3(s_3)>t\} \supset \{s_1 : F_1(s_1)>t\}+ \{s_2 : F_2(s_2)>t\}.
$$
Applying the Brunn-Minkowski inequality, we have
\begin{eqnarray*}
\|x_1+x_2\|_L^{-k} &=& \|x_3\|_L^{-k} = \frac 1p\int_0^\infty F_3(s)ds \\
&=& \frac{1}{2^{\frac kq +1}} \cdot \frac 1p\int_0^\infty \Big|\{s_3 \in
(0,\infty) : 2^{\frac kq +1}F_3(s_3)>t\}\Big|\, dt \\
&\ge& \frac{1}{2^{\frac kq+1}} \cdot \frac 1p\int_0^\infty
\(\Big|\{s_1 : F_1(s_1)>t\}\Big| + \Big|\{s_2 :F_2(s_2)>t\}\Big| \) dt \\
&=& \frac{1}{2^{\frac kq+1}} (\|x_1\|_L^{-k} + \|x_2\|_L^{-k}).
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\|x_1+x_2\|_L &\le& 2^{\frac 1q} \(\frac{\|x_1\|_L^{-k} + \|x_2\|_L^{-k}}{2}\)^{-\frac 1k}
= \[ \frac 12 \( \frac{(\|x_1\|_L^{-q})^{\frac kq} + (\|x_2\|_L^{-q})^{\frac kq}}{2} \)^{\frac qk}\]^{-\frac 1q} \\
&\le& \[ \frac 12 \( \frac{\|x_1\|_L^{-q} + \|x_2\|_L^{-q} }{2} \)
\]^{-\frac 1q}
\le \[ \frac 12 \( \frac{\|x_1\|_L^q + \|x_2\|_L^q}{2} \)^{-1} \]^{-\frac 1q}\\
&=& \( \|x_1\|_L^q + \|x_2\|_L^q \)^{\frac 1q},
\end{eqnarray*}
which means that $L$ is $q$-convex.
\end{proof}
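Note that for $p=1$ the theorem gives $q=1$ for every $0<s<1/n$, so convexity is preserved. On the other hand, for fixed $p<1$, letting $s\to 0^+$ (the log-concave limit) forces
$$
q \le \left[\Big(\frac 1p-1\Big)\Big(\frac 1s-n\Big)\frac 1k + \frac 1p\right]^{-1} \longrightarrow 0,
$$
which is consistent with Example 1, where no positive $q$ works for a non-symmetric log-concave density.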
The following example shows that the value of $q$ in the above theorem is sharp.\\
\noindent{\bf Example 2.}
Let $\mu$ be an $s$-concave measure on $\Omega=\{(x_1,\ldots,x_n)\in\R^n: x_1\ge 0\}$ for $s>0$ with density $$f(x_1,\ldots,x_n)=|x_1|^{1/s-n}$$
and let
$$
K=\left\{(x_1,\ldots,x_n): x_1 \ge 0, \left|\frac{x_1+x_2}{2}\right|^p+\left|\frac{x_1-x_2}{2}\right|^p\le 1, \,\, |x_i|\le 1 \,\,\forall i=3,\ldots,n\right\}.
$$
Note that $\|e_1\|_K = 2^{1/p-1}$ and $\|e_1+e_2\|_K=\|e_1-e_2\|_K=1$. If $L$ is the body defined by $K$ in the above theorem, then
$$
\|e_1\|_L = \[\int_0^{2^{1-1/p}}r^{1/s-n}r^{k-1}dr\]^{-\frac 1k}= \[\frac{2^{\(1-\frac 1p\)\(\frac 1s-n+k\)}}{\frac 1s-n+k}\]^{-\frac1k}
$$
and
$$
\|e_1+e_2\|_L = \[\int_0^1 r^{1/s-n}r^{k-1}dr\]^{-\frac 1k}= \[\frac 1s-n+k\]^{\frac 1k}.
$$
If $L$ is $q$-convex for some $q$, then the inequality $\|2e_1\|_L \le \(\|e_1+e_2\|_L^q +\|e_1-e_2\|_L^q\)^{1/q}$ implies
$$
2\[\frac{2^{\(1-\frac 1p\)\(\frac 1s-n+k\)}}{\frac 1s-n+k}\]^{-\frac 1k} \le 2^{\frac 1q}\[\frac 1s-n+k\]^{\frac 1k}
$$
that is,
$$
q\le \[\(\frac 1p-1\)\(\frac 1s-n\)\frac 1k+\frac 1p\]^{-1}.
$$
Note that in our construction $\Omega$ is not open, as opposed to what
we said in the beginning of Section 4. This is done for the sake of simplicity of the
presentation. To be more precise one would need to define
$\Omega=\{(x_1,\ldots,x_n)\in\R^n: x_1>-\epsilon\}$ and
$f(x_1,\ldots,x_n)=|x_1+\epsilon|^{1/s-n}$, for $\epsilon>0$, and then
send $\epsilon \to 0^+$.
Recently, a number of important advances have been made in the
physics of $J/\psi$ suppression as a signature of the quark gluon plasma,
which inevitably lead us to augment the original work by Matsui and
Satz\cite{Matsui86} with a more detailed study of heavy quark systems
at finite temperature, before confronting the recent RHIC
data\cite{Phenix1} and predicting results for the LHC. Among these
theoretical developments are the phenomenologically successful
statistical model for $J/\psi$
production\cite{Gorenstein99,PBM99,PBM06}, based on a coalescence
assumption near $T_c$\cite{PBM06}, the recombination of charm pairs
into $J/\psi$\cite{The06}, and the recent lattice calculations
showing strong evidence that heavy quarkonium will persist above
$T_c$\cite{Hatsuda03,Hatsuda04,Datta03,Datta05,Datta06}. While these
results may seem at odds with one another, this only suggests that one still
needs a more detailed understanding of the properties of heavy quark
systems in the quark gluon plasma, especially between the phase
transition and the dissolving temperatures, before a consistent
picture of quarkonium suppression in heavy ion collisions can be
achieved.
In this respect, an important quantity to investigate is the
effective thermal width, and/or the effective dissociation cross
section, of a heavy quarkonium in the quark gluon plasma. While the
present lattice results establish its existence, they are far from
making quantitative statements on the magnitude of the thermal width for
charmonium states above $T_c$. Hence, in this work, we will use
the perturbative QCD approach to calculate the thermal width. So
far, such calculations have been limited to dissociation processes
by gluons to the lowest order
(LO)\cite{BP79,Lee87,KS94,AGGA01,Wong04,Blaschke:2005jg}, because
the elementary $J/\psi$-parton dissociation cross section was
available only to that order\cite{Peskin79}, and to part of the
next-to-leading order (NLO) in the quasi-free
approximation\cite{Rapp}. Recently, two of us have performed the
dissociation cross section calculation to NLO in QCD\cite{SL05}.
Here, we will implement the NLO formula to calculate the
corresponding thermal width of charmonium\cite{Lee07}, and then
perform a similar calculation for the bottomonium case.
The NLO calculation of $J/\psi$-parton dissociation calculations
involves collinear divergence. When applying this elementary
cross section to dissociation by hadrons, the collinear divergence
is cured by mass factorization, which renormalizes the divergent
part of the cross section into the parton distribution function of
the hadron. Such complications disappear at finite temperatures,
as the thermal masses of the partons automatically renders the
divergence finite. The magnitude of the thermal mass as
a function of temperature has been obtained previously by examining
the equation of state \cite{Levai97}. In the region of $T_c$ to
2$T_c$, they are of the order of 300-400 MeV for quarks and 400-600 MeV for gluons.
In this work, instead of following the detailed temperature dependence, we shall
study results with a thermal mass of 400 and 600 MeV for both the quarks and gluons.
As we will see, with an effective thermal mass
of 400 MeV, we find that the effective thermal dissociation cross
section above 1.4 $T_c$ is larger than 250 MeV. The NLO calculation
is proportional to the derivative of the momentum space wave
function. We will use the Coulomb wave function, whose size has
been fitted to reproduce the result obtained by Wong\cite{Wong04},
using a potential extracted from lattice gauge thermodynamical
quantities.
In Section II, we will recapitulate the LO result. In Section III,
we will discuss the NLO for $J/\psi$. The LO and NLO results for
bottomonium are given in Section IV.
\section{LO result}
The LO invariant matrix element for $J/\psi$ dissociation
by a gluon, first obtained by Peskin\cite{Peskin79} and
rederived by one of us\cite{OKL02} using the Bethe-Salpeter
equation, is given as,
\begin{figure}
\centerline{
\includegraphics[width=4cm]{fig1.eps}}
\caption{LO diagram} \label{lo-feynman}
\end{figure}
\begin{eqnarray}
\mathcal{M}^{\mu\nu} &=& -g \sqrt{\frac{M_\Phi}{N_c}} \left\{ {\bf
k} \cdot \frac{\partial \psi({\bf p})}{\partial {\bf p}}
\delta^{\nu 0} + k_0 \frac{\partial \psi({\bf p})}{\partial p^j}
\delta^{\nu j} \right\} \delta^{\mu i} \nonumber \\ && \mbox{}
\times \bar{u}(p_1) \frac{1+\gamma_0}{2} \gamma^i
\frac{1-\gamma_0}{2}T^a v(p_2)
\label{eq:amp}
\end{eqnarray}
Here, $\mu$ and $\nu$ represent the polarization indices of the $J/\psi$
and the gluon, respectively, and $k, p_1, p_2$ are the four-momenta of
the gluon, the $c$, and the $\bar{c}$. The quantity ${\bf p}$ is the relative
three-momentum between the $c$ and the $\bar{c}$, and $\psi({\bf p})$ is the
charmonium wave function. $M_\Phi$ is the mass of the quarkonium, and
$N_c$ is the number of colors.
Current conservation, $k_\nu \mathcal{M}^{\mu \nu}=0$, is easily shown
to be satisfied.
The energy conservation in the non-relativistic limit implies,
\begin{eqnarray}
k_0+m_\Phi=2m_c+\frac{{|\vec{p_1}|}^2+{|\vec{p_2}|}^2}{2m_c}
,
\end{eqnarray}
from which the counting scheme for both
the LO\cite{Peskin79} and the NLO\cite{SL05} is given as
follows,
\begin{eqnarray}
|\vec{p_1}|\sim|\vec{p_2}|\sim|\vec{p}|\sim O(mg^2) \nonumber \\
k^0 \sim |\vec{k}| \sim O(mg^4).
\end{eqnarray}
The effective thermal width and cross section are obtained by folding the matrix
element with the thermal parton distribution, $n(k_0)$
\begin{eqnarray}
\Gamma^{eff} & = &
d_p\int \frac{d^3k}{(2\pi)^3}n(k_0)v_{rel}\sigma(k_0), \nonumber \\
\sigma^{eff} & = & \int \frac{d^3k}{(2\pi)^3}n(k_0) \sigma(k_0)
/ \int \frac{d^3k}{(2\pi)^3}n(k_0), \label{g-lo}
\end{eqnarray}
where $d_p$ is the parton degeneracy, which is taken to be 16 for
the gluon in the LO calculation. Here, we will perform the
calculation in the rest frame of $J/\psi$, so the relative velocity
of $J/\psi$ and initial parton is $v_{rel}=|\vec{k}/k_{0}|$. The
cross section is given as,
\begin{eqnarray}
\sigma=\int\frac{1}{128\pi^2 m_\Phi
|\vec{k}|}\sqrt{\frac{k_0-\epsilon_0}{m_c}}
\overline{|\mathcal{M}|}^2 d\Omega\nonumber\\
\overline{|\mathcal{M}|}^2=\frac{2g^2m_c^2m_\Phi
(2k_0^2+m_{k_1}^2)}{3N_c} {\Big|\frac{\partial \psi({\bf
p})}{\partial {\bf p}}\Big|}^2 \label{lo-sigma}
\end{eqnarray}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline $T/T_c$ & 1.13 & 1.18 & 1.25 & 1.40 & 1.60 & 1.65 \\[2pt]
\hline $\epsilon_0$(MeV)& 36.4 & 20.9 & 10.1 & 3.4 & 0.14 & 0.004 \\[2pt]
\hline $\sqrt{<r^2>}$(fm) & 0.97 & 1.19 & 1.54 & 2.30 & 4.54 & 5.17 \\[2pt]
\hline $a_0$(fm) & 0.56 & 0.69 & 0.89 & 1.33 & 2.62 & 2.99 \\[2pt]
\hline
\end{tabular}
\caption{The binding energy, the rms radius, and the corresponding
Bohr radius of the $J/\psi$ at finite temperature.} \label{jpsi-wave}
\end{table}
where $m_{k_1}$ is the thermal mass of a gluon. As can be seen from
Eq.(\ref{lo-sigma}), the cross section is proportional to the
absolute square of the derivative of the momentum space wave
function, which comes from the dipole nature of the interaction
between the gluon and the quark antiquark pair with opposite
charges. In the calculation of the effective thermal width or the
effective cross section in Eq.(\ref{g-lo}), the cross section is
integrated over the incoming energy, which effectively integrates
over the absolute square of the derivative of the momentum space
wave function. As a consequence, the results are sensitive to the
size of the wave function only and not so much on its detailed
functional form. Therefore, we will use a Coulomb wave function,
whose Bohr radius is fitted to reproduce the rms radius obtained by
one of us\cite{Wong04} by solving the bound states in a
temperature-dependent potential extracted from lattice gauge
thermodynamical quantities. For the binding energy, we use the
values obtained in ref. \cite{Wong04}. Table \ref{jpsi-wave}
summarizes the $J/\psi$ binding energy, its rms radius, and its
corresponding Bohr radius at finite temperature. The coupling
constant $g$ is set such that $\alpha_s$ is 0.5. As can be seen in
Fig. \ref{lo-diff-sig}, $\sigma(k_0)$ in the LO is dominant around
the $J/\psi$ binding energy\cite{Wong04}, which decreases as
temperature increases. On the other hand, $n(k_0)$ favors higher
$k_0$ as temperature increases. Hence the overlap in
Eq.(\ref{g-lo}) decreases at higher temperature, more so because the
overlap integral starts from the thermal mass of the gluon. As can
be seen in Fig. \ref{width-lo}, choosing the effective thermal mass
of the gluons to be 400 and 600 MeV, which is much larger than the
binding energy, we find that the effective thermal width decreases
as the temperature increases and becomes very small. With the lower
bound of the thermal mass (400 MeV), the width at LO is smaller than
3 MeV at 1.13 $T_c$ and less than 1 MeV at 1.4 $T_c$. At the NLO,
there are other thermal gluon and quark induced interaction
reactions representatively shown in Fig. \ref{NLO-graph}. The width
due to these contributions will be calculated in the next section.
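The suppression of the LO width by the thermal-mass cutoff can be illustrated numerically. Below is a minimal sketch of the folding in Eq.~(\ref{g-lo}); the Gaussian-shaped toy cross section, its peak position, and all numerical values are illustrative assumptions rather than the matrix elements used in the text.

```python
import numpy as np

# Sketch of the thermal folding: the effective width is the elementary
# cross section folded with the thermal parton distribution.
# The toy cross section below is illustrative only.

def n_bose(k0, T):
    """Bose-Einstein distribution for thermal gluons (energies in GeV)."""
    return 1.0 / np.expm1(k0 / T)

def effective_width(sigma, T, m_th, d_p=16, k_max=5.0, n=4000):
    """Gamma_eff = d_p * int d^3k/(2 pi)^3 n(k0) v_rel sigma(k0),
    with k0 = sqrt(k^2 + m_th^2) and v_rel = |k|/k0 (J/psi rest frame)."""
    k = np.linspace(1e-4, k_max, n)              # parton momentum, GeV
    dk = k[1] - k[0]
    k0 = np.sqrt(k**2 + m_th**2)                 # thermal dispersion relation
    integrand = k**2 / (2 * np.pi**2) * n_bose(k0, T) * (k / k0) * sigma(k0)
    return d_p * np.sum(integrand) * dk          # GeV

# toy cross section peaked near a 0.1 GeV "binding energy" (illustrative)
toy_sigma = lambda k0: 0.5 * np.exp(-((k0 - 0.1) / 0.05) ** 2)

gamma = effective_width(toy_sigma, T=1.4 * 0.170, m_th=0.4)
print(f"toy Gamma_eff = {gamma:.2e} GeV")
# the thermal-mass cutoff (k0 >= m_th = 0.4 GeV) sits far above the peak
# of sigma, so the overlap -- and hence the width -- is strongly
# suppressed, mirroring the LO discussion above
```

Because the integration starts at the thermal mass, the overlap with the narrow low-energy peak essentially vanishes, which is the mechanism behind the small LO widths.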
\begin{figure}[h]
\centering \epsfig{file=jpsLOsigma.eps, width=1.\hsize}
\caption{(Color online) $\sigma(E_{gluon})$ of $J/\psi$ at LO.}
\label{lo-diff-sig}
\end{figure}
\begin{figure}[h]
\centering \epsfig{file=jpslo.eps, width=.8\hsize} \caption{(Color
online) Effective thermal width of $J/\psi$ at LO. The dots and
circles show the results at temperatures given in Table
\ref{jpsi-wave} with $T_c=170$MeV. } \label{width-lo}
\end{figure}
\section{NLO result}
The NLO QCD calculation of the $J/\psi$ dissociation cross section by
partons was performed by two of us\cite{SL05}. The dissociation cross
section can be divided into two parts; the dissociation by quarks
and that by gluons (Fig. \ref{NLO-graph}).
\begin{figure}[b]
\centerline{
\includegraphics[width=3.5cm]{fig4a.eps}\hfill
\includegraphics[width=3.5cm]{fig4b.eps}} \caption{NLO diagrams induced by (a)quarks,
(b)gluons.}\label{NLO-graph}
\end{figure}
\begin{figure}[h]
\centering \epsfig{file=NLOq-jpsxsection.eps, width=1.\hsize}
\caption{(Color online) $\sigma(E_{quark})$ of $q-J/\psi$ at NLO.}
\label{nlo-q-jpsi-diff-sig}
\end{figure}
\begin{figure}[h]
\centering \epsfig{file=NLOg-jpsxsection.eps, width=1.\hsize}
\caption{(Color online) $\sigma(E_{gluon})$ of $g-J/\psi$ at NLO.}
\label{nlo-g-jpsi-diff-sig}
\end{figure}
\begin{figure}[h]
\centering \epsfig{file=jpsNLOwidth.eps, width=1.\hsize}
\caption{(Color online) Effective thermal width of $J/\psi$ at NLO
induced (a) by quarks and (b) by gluons} \label{jpsieffwidth}
\end{figure}
\begin{figure}[h]
\centering \epsfig{file=jpseffsigma.eps, width=1.\hsize}
\caption{(Color online) Effective cross-section of $J/\psi$ induced
(a) by quarks (NLO) and (b) by gluons (LO+NLO)} \label{jpsi-effx}
\end{figure}
In both cases, the cross sections are given as,
\begin{eqnarray}
\sigma(u) = \frac{1}{4 \sqrt{(q \cdot k_{1})^{2}-m_{k_1}^2 m_\Phi^2
}}\int d\sigma_3 \overline{|\mathcal{M}|}^2
\end{eqnarray}
where $u^2=(q+k_1)^2$, $m_{k_1}$ is the thermal mass of a parton, and $d\sigma_3$ is the 3-body phase space,
\begin{eqnarray}
d\sigma_3 &=& \frac{d^3 k_2}{(2\pi)^3 2 k_{20}}
\frac{d^3p_1}{(2\pi)^32 p_{10}}\frac{d^3 p_2}{(2\pi)^3 2 p_{20}}
\nonumber\\&& \times (2\pi)^4 \delta^{(4)}(q+k_1-k_2-p_1-p_2)
\end{eqnarray}
Here, $q$, $k_1$, $k_2$, $p_1$ and $p_2$ are respectively the
momentum of $J/\psi$, incoming parton, outgoing parton, charm quark,
and anti-charm quark.
In the $J/\psi$ rest frame, the cross section can be written as,
\begin{eqnarray}
\sigma &=& \frac{1}{4 \sqrt{(q \cdot k_1)^2-m_\Phi^2 m_{k_1}^2}}
\nonumber
\\&& \times\int^{\beta}_{\alpha} {dw}^2 \int^{\beta'}_{\alpha'}
dp_\Delta^2 \frac{\sqrt{1-4 m_c^2/w^2}}{16^2 \pi^3 m_\Phi
|\vec{k_1}|} \overline{|\mathcal{M}|}^2,
\label{phase-space}
\end{eqnarray}
where $p_\Delta^2=(k_1-k_2)^2, w^2=(q+p_\Delta)^2$. The
integration range is
\begin{eqnarray}
\alpha &=& (m_{p_2}+m_{p_1})^2=4m_c^2, \nonumber\\
\beta &=& (u-m_{k_1})^2, \nonumber\\
\alpha' &=& -b-\sqrt{b^2-ac}, \nonumber\\
\beta' &=& -b+\sqrt{b^2-ac},
\end{eqnarray}
where
\begin{eqnarray}
b &=& \frac{1}{2u^2}
\left\{u^2-(m_\Phi+m_{k_1})^2\right\}\left\{u^2-(m_\Phi-m_{k_1})^2\right\}\nonumber\\&&
-\frac{1}{2u^2}\left\{u^2-(m_\Phi^2-m_{k_1}^2)\right\}\left\{w^2-m_\Phi^2\right\},
\nonumber\\
b^2-ac &=&
\frac{1}{4u^4}\left\{(u^2-m_\Phi^2+m_{k_1}^2)^2-4u^2m_{k_1}^2\right\}\nonumber\\&&
\times\left\{w^2-(u+m_{k_1})^2\right\}\left\{w^2-(u-m_{k_1})^2\right\}.
\nonumber
\end{eqnarray}
The variables appearing in $\sigma$ can be expressed in terms of
$w^2, p_\Delta$ and $u^2$ as follows,
\begin{eqnarray}
q \cdot k_1 &=& (u^2-m_\Phi^2-m_{k_1}^2)/2, \nonumber\\
|\vec{k_1}| &=&
\sqrt{\left\{(u^2-m_{k_1}^2+m_\Phi^2)/(2m_\Phi)\right\}^2-u^2},
\nonumber\\
k_1 \cdot k_2 &=& (-p_\Delta^2+2m_{k_1}^2)/2, \nonumber\\
k_{10} &=& \sqrt{|\vec{k_1}|^2+m_{k_1}^2}, \nonumber\\
k_{20} &=& k_{10}-(w^2-p_\Delta^2-m_\Phi^2)/(2m_\Phi), \nonumber\\
|\vec{p}| &=& \sqrt{m_c(k_{10}-k_{20}+m_\Phi-2m_c)}.
\label{parameters}
\end{eqnarray}
We first consider the NLO effective thermal width induced by
quarks. Here, the quark degeneracy is 36, assuming 3 flavors. The
invariant matrix element is given as\cite{SL05},
\begin{eqnarray}
\overline{|\mathcal{M}|}^2=\frac{4}{3} g^4 m_c^2 m_\Phi
{\Big|\frac{\partial \psi({\bf p})}{\partial {\bf p}}\Big|}^2
\left(-\frac{1}{2}+\frac{k_{10}^2+k_{20}^2}{2 k_1 \cdot k_2}\right).
\label{m2-nloq}
\end{eqnarray}
Next, the gluon induced NLO calculation has the same degeneracy as
the LO case, and the invariant matrix element is given as
follows\cite{SL05},
\begin{eqnarray}
&&\overline{|\mathcal{M}|}^2=\frac{4}{3} g^4 m_c^2 m_\Phi
{\Big|\frac{\partial \psi({\bf p})}{\partial {\bf p}}\Big|}^2
\Bigg\{-4+\frac{k_1 \cdot k_2}{k_{10}k_{20}}\nonumber \\&&
+\frac{2k_{10}}{k_{20}}+\frac{2k_{20}}{k_{10}}
-\frac{k_{20}^2}{k_{10}^2}-\frac{k_{10}^2}{k_{20}^2} +\frac{2}{k_1
\cdot k_2}\nonumber\\&& \times\left(
\frac{(k_{10}^2+k_{20}^2)^2}{k_{10}k_{20}} -2 k_{10}^2-2
k_{20}^2+k_{10}k_{20}\right) \Bigg\}.
\label{m2-nlog}
\end{eqnarray}
In the hadronic phase, the $1/(k_1\cdot k_2)$ term, and the
$1/k_{20}^2$ term give rise to collinear divergence, and soft
divergence respectively. However, in QGP phase, the thermal mass of
a parton $m_{k_1}$ in Eq.(\ref{parameters}) plays the role of a
cutoff. In contrast to the LO calculation, as can be seen from Fig.
\ref{nlo-q-jpsi-diff-sig} for quark and from Fig.
\ref{nlo-g-jpsi-diff-sig} for gluon, $\sigma(k_0)$ at the NLO does
not vanish at large $k_0$. This is so because irrespective of how
large the energy of the incoming quark or gluon is, it can
always radiate a gluon with energy of the order of the binding energy,
which effectively dissociates the $J/\psi$ via the LO process. Hence, in Eq.
(\ref{g-lo}), $\sigma$ has a nontrivial overlap with the maximum of
$n(k_0)$, which increases with temperature, leading to the results shown
in Fig. \ref{jpsieffwidth}(a) and in Fig. \ref{jpsieffwidth}(b) for the
quark and gluon induced NLO widths, respectively. One thing to note
from Fig. \ref{nlo-q-jpsi-diff-sig} is that the elementary cross
section has a peak near threshold at 1.4 $T_c$. Such peak structure
only appears when the binding energy becomes very small and the
corresponding momentum space wave function becomes highly peaked
near zero momentum. When the incoming energy is small, this highly
peaked region gives important contributions to the two-dimensional
phase space integral in Eq. (\ref{phase-space}). But when the
incoming energy becomes large, the phase space for the peaked region
becomes smaller and so does the total cross section. Such singular
behavior as a function of the incoming energy disappears when the
binding energy becomes larger.
Here we take the thermal mass from 400 MeV to 600 MeV\cite{Levai97}
within the temperature region of a few $T_c$. With those masses we
obtained large thermal widths. Even with an upper limit thermal mass
of 600 MeV, the width exceeds 100 MeV above 1.4 $T_c$, where we have
taken $T_c$=170 MeV. For example, if the thermal mass of partons is
600 MeV, and the produced $J/\psi$ remains at 1.4 $T_c$ for 2 fm/c,
its survival rate will be less than 40 $\%$. As can be seen from
Fig. \ref{jpsi-effx}(a) and Fig. \ref{jpsi-effx}(b), with a 600 MeV
thermal mass at 1.4 $T_c$, the effective dissociation cross section
by a quark is about 1.0 mb and that by a gluon 1.5 mb.
Hence, even though the $J/\psi$
might start forming at 1.6 $T_c$, its effective width is very large,
and it will not accumulate until the system cools down further.
\section{$\Upsilon$ Dissociation}
\begin{figure}[b]
\centering \epsfig{file=upslo.eps, width=.8\hsize} \caption{(Color
online) Effective thermal width of $\Upsilon$ at LO}
\label{ups-width-lo}
\end{figure}
Here, we present the result for the $\Upsilon$ case. The $\Upsilon$
wave function is less sensitive to changes in the
temperature\cite{Wong04}. In the LO calculation, while the trend
in the temperature dependence of $\sigma$ is similar to that of the
$J/\psi$, its variation in magnitude is much smaller. Moreover, as
can be seen in Table \ref{upsilon-wave} the binding energy remains
large. As a consequence, the overlap of $\sigma$ and the thermal
distribution remains large even at large temperatures, and the
effective thermal width slowly decreases with temperature as can be
seen in Fig. \ref{ups-width-lo}.
\begin{figure}[t]
\centering \epsfig{file=NLOq-upsxsection.eps, width=1.\hsize}
\caption{(Color online) $\sigma(E_{quark})$ of $q-\Upsilon$ at NLO.}
\label{nlo-q-ups-diff-sig}
\end{figure}
\begin{figure}[t]
\centering \epsfig{file=NLOg-upsxsection.eps, width=1.\hsize}
\caption{(Color online) $\sigma(E_{gluon})$ of $g-\Upsilon$ at NLO.}
\label{nlo-g-ups-diff-sig}
\end{figure}
\begin{figure}
\centering \epsfig{file=upsNLOwidth.eps, width=1.\hsize}
\caption{(Color online) Effective thermal width of $\Upsilon$ at
NLO induced (a) by quarks and (b) by gluons.} \label{ups-width-nlo}
\end{figure}
\begin{figure}
\centering \epsfig{file=upseffsigma.eps, width=1.\hsize}
\caption{(Color online) Effective cross-section of $\Upsilon$ due to
(a) a quark and (b) a gluon.}\label{ups-effx}
\end{figure}
In the NLO calculation, as shown in Fig. \ref{nlo-q-ups-diff-sig},
and Fig. \ref{nlo-g-ups-diff-sig}, the form of the cross section is
similar to that of $J/\psi$. However, because the binding energy of
$\Upsilon$ is still large at high temperature, there is no peak
structure near threshold. Moreover, because $\Upsilon$ is more
tightly bound and has a smaller dipole size than $J/\psi$, its
corresponding $\sigma$ is also smaller. Therefore, as can be seen
from Fig. \ref{ups-width-nlo}(a) for the quark induced and Fig.
\ref{ups-width-nlo}(b) for the gluon induced width, the overall
value of the width for $\Upsilon$ is smaller than that of $J/\psi$.
With an upper limit of the thermal mass of 600 MeV and at a temperature
of 1.65 $T_c$, the sum of the LO and NLO thermal widths is less than 50
MeV. At this temperature, the effective dissociation cross section
by a quark is less than 0.2 mb and that by a gluon less than 0.6 mb.
Therefore, unlike the $J/\psi$ case, the $\Upsilon$ has a smaller
thermal width and effective dissociation cross section, and will
effectively start accumulating at higher temperatures. Fig.
\ref{ups-effx}(b) shows the sum of the LO and NLO effective cross
sections of the $\Upsilon$ due to gluons. Because the LO contribution is
not negligible, the shape is quite different from that of the effective
cross section due to quarks in Fig. \ref{ups-effx}(a).
\begin{table}[b]
\centering
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline $T/T_c$ & 1.13 & 1.18 & 1.25 & 1.4 & 1.65 & 1.95 \\[2pt]
\hline $\epsilon_0$(MeV)& 313 & 247 & 203 & 150 & 111 & 86 \\[2pt]
\hline $\sqrt{<r^2>}$(fm) & 0.294 & 0.331 & 0.366 & 0.425 & 0.494 & 0.562 \\[2pt]
\hline $a_0$(fm) & 0.17 & 0.19 & 0.21 & 0.246 & 0.285 & 0.324 \\[2pt]
\hline
\end{tabular}
\caption{The binding energy, the rms radius, and the corresponding Bohr radius of the $\Upsilon$ at finite temperature.}
\label{upsilon-wave}
\end{table}
\section{Summary}
In this work, we have calculated the thermal widths of $J/\psi$ and
$\Upsilon$ at finite temperature using the elementary
parton-quarkonium dissociation cross section at NLO in QCD and
assuming thermal partons with effective thermal mass. We find that
for $J/\psi$ at 1.4 $T_c$, the thermal width will be 100 to 250 MeV,
which translates into an effective thermal cross section of several
mb. However, the corresponding width and effective cross section for
the $\Upsilon$ are much smaller. Recently, Mocsy and Petreczky (MP)
\cite{Mocsy:2007jz} have also calculated the thermal widths of
charmonium and bottomonium in the QGP, assuming that the quarkonium and
its constituents are in thermal equilibrium with the surroundings.
The thermal width estimated by MP is similar to ours for the
bottomonium but several times larger for the charmonium. The result
for the charmonium by MP is obtained using a phenomenological
formula valid when the binding energy is much smaller than the
temperature; hence further work has to be performed to understand
the discrepancy.
\section{Acknowledgement}
The work was supported by Korea Research Foundation KRF-2006-C00011
and in part by the US National Science Foundation under contract
number NSF-INT-0327497 with the University of Tennessee. The work of
SHL was also supported by the Yonsei University Research Grant. We
would like to thank C. M. Ko, and R. Rapp for useful discussion.
\section{Introduction}
Nearly $40\%$ of the words in the English language are estimated to have ambiguous meanings, i.e., their meanings depend on their contexts. There are two documented reasons for such ambiguity: homonymy, which means that an ambiguous word shares its spelling or pronunciation with a word that has a different sense, and polysemy, which means that the ambiguous word can have more than one sense \cite{Nguyen13}.
For example, the word \textit{interest} can have six different meanings (see Table \ref{int}). If it is used as in \textit{interest in this business}, then the meaning is to give attention (see Sense 3 in Table \ref{int}), whereas the usage \textit{ten percent interest in this business} has another meaning, namely a share in a company (see Sense 5 in Table \ref{int}). There is a clear semantic link between these two senses, whereas homonymous words are semantically unrelated, such as the usage of the word \textit{bank} as in \textit{the savings bank} and as in \textit{a river bank} \cite{McCaughren09}.
Hence, the main problem is to predict the exact sense of the word, i.e., to choose the right class defined by the dictionary based on its context. The general name for resolving such problems is Word Sense Disambiguation (WSD), which is the task of computationally identifying the exact sense of an ambiguous word from its context \cite{Navigli09}. Compared to polysemy, homonymy is easier to resolve since the contexts of the words are in general very different. Polysemy, on the other hand, can be seen in very similar contexts, as in the example above, and it may become a very subtle problem.
Native speakers tend to understand these subtle changes in the meaning subconsciously, and catching the exact meaning seems effortless to them. On the other hand, for computers as well as language learners, solving the problem is a nontrivial task \cite{Turdakov10}.
WSD is handled extensively as a part of text classification studies since it potentially has significant effects on choosing the right class of a given document. Moreover, it is used as a part of machine translation algorithms \cite{Carpuat07}. There are two kinds of methods that are used in attacking a WSD problem: one of them is called knowledge-based, and the other one is called corpus-based \cite{Navigli09}. Knowledge-based methods take advantage of knowledge resources such as dictionaries, whereas corpus-based methods use manually sense-annotated datasets to train a model. For machine learning, the general approach is to use corpus-based methods since the performance of supervised methods is higher than that of the unsupervised ones \cite{Li16}.
Another approach for resolving the WSD problem is to use kernel methods \cite{Cortes95, Muller01, Shawe-Taylor04}. The basic idea of kernel methods is to get the non-linear similarities of the data without explicitly computing the feature maps, via the kernel trick \cite{Scholkopf02}. The problem of choosing the right kernel for the right task is called model selection \cite{Gonen11}. General usage favors Support Vector Machines (SVM) as the classifier \cite{Cortes95, Muller01, Shawe-Taylor04, Wang14, Wang17}. On the other hand, there are kernel studies that make use of Principal Component Analysis (PCA), which is called Kernel Principal Component Analysis (KPCA) \cite{Scholkopf98, Su04, Wu04}. For further reading on kernel methods, we refer to \cite{Hofman08}, and for further knowledge of the usage of kernels for WSD, we refer to \cite{Li16}.
In this study, we introduce a corpus-based kernel method, which we call 'Diffusion Kernel Principal Component Analysis' (DKPCA). Following the footsteps of \cite{Kandola03, Wang14, Wu04}, we merge the semantic diffusion kernel and non-linear PCA in order to construct DKPCA. In section \ref{SemKer}, we explain the Bag of Words (BoW) representation and its shortcomings for grasping the intrinsic semantic relations of terms. Such drawbacks of this representation lead to the demand for semantic similarity kernels, and we show the construction of such a kernel, i.e., the semantic diffusion kernel. We wrap up the mathematics and give an algorithm to compute the semantic diffusion kernel. In section \ref{KPCA}, we give a linear-algebra-based approach to the well-known KPCA algorithm \cite{Vidal16}. By using the results from section \ref{KPCA}, we introduce DKPCA.
We test our algorithm with SensEval data and compare it with the SVM and KPCA algorithms with several kernels. For evaluating our algorithm, we mainly use $F_1$ scores, since the label distribution of our datasets is imbalanced. Our macro-averaged $F_1$ results indicate that DKPCA outperforms all the competitive algorithms for $5\%$ and $10\%$ labeled data in the SensEval tasks (interest, serve, line, and hard) and achieves similar results for $30\%$ labeled data. For micro-averaged $F_1$ scores, we get results close to those of SVM with the semantic diffusion kernel for the hard dataset, and DKPCA still outperforms the other algorithms for $5\%$ and $10\%$ labeled data on the interest, serve, and line datasets. These results are highly encouraging concerning the scarcity of labeled data.
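On imbalanced data, the macro and micro averages of $F_1$ behave very differently, which motivates reporting both. The following sketch contrasts the two averaging schemes; the toy labels and the degenerate classifier are hypothetical and serve only to illustrate the difference.

```python
import numpy as np

# Macro vs micro F1 on a hypothetical imbalanced two-class problem.
def f1_per_class(y_true, y_pred, cls):
    tp = np.sum((y_pred == cls) & (y_true == cls))
    fp = np.sum((y_pred == cls) & (y_true != cls))
    fn = np.sum((y_pred != cls) & (y_true == cls))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

y_true = np.array([0] * 9 + [1])   # 9:1 label imbalance
y_pred = np.array([0] * 10)        # classifier that ignores the rare class

f1s = [f1_per_class(y_true, y_pred, c) for c in (0, 1)]
macro = np.mean(f1s)               # punishes ignoring the rare class
micro = np.sum(y_true == y_pred) / len(y_true)  # equals accuracy here
print(macro, micro)                # 0.47368... 0.9
```

For single-label classification, the micro average reduces to accuracy, so the macro average is the more informative score under class imbalance.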
\section{Semantic Diffusion Kernel for WSD}
\subsection{Problem Setup for WSD}\label{SemKer}
Let $\{t_0, t_1,\dots, t_N \}$ denote the terms (words) in our dictionary, which is the set of terms that appear in the corpus (the set of documents or contexts), and let $\{d_1,...,d_m\}$ denote the set of documents. Let $t_0$ be the word to be disambiguated, which appears in the document $d.$ We remove $t_0$ from the document $d$ and define the following map
\begin{eqnarray*}
\varphi: \mathcal{D} &\longrightarrow& \mathbb{R}^N\\
d_i &\mapsto &(tf(t_1,d_i),\dots, tf(t_N,d_i))=x_i
\end{eqnarray*}where $tf(t_j, d_i)$ represents the frequency of the term $t_j$ in document $d_i.$ Then the $document \times term$ data matrix $D$ is given as follows.
\begin{equation}
\label{datD}
D= \bordermatrix{& t_1 & \dots &t_N \cr
d_1 & tf(t_1, d_1) &\dots & tf(t_N, d_1) \cr
\vdots & \vdots & \ddots& \vdots\cr
d_m& tf (t_1, d_m) & \dots & tf(t_N, d_m) \cr}
\end{equation} This representation of the data is called the Bag of Words (BoW) representation. The matrix
\begin{equation}
\label{linker}
K=DD^T
\end{equation}
is the Gram matrix that represents the following inner product
\begin{eqnarray}
K: X\times X&\longrightarrow&\mathbb{R}\\
\, \, \, (x_i,x_j)&\mapsto & x_i x_j^T\nonumber
\label{innermap}
\end{eqnarray} where $D^T$ is a $term \times document$ matrix \cite{Shawe-Taylor04}, and $K$ is a $document \times document$ matrix.
Note that we use the same symbol $K$ for the Gram matrix and the inner product map since they are equivalent.
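As a concrete illustration of Eqs. (\ref{datD}) and (\ref{linker}), the BoW matrix and the linear kernel can be built from a toy corpus in a few lines; the three documents and their tokens below are hypothetical.

```python
import numpy as np

# Build the document x term matrix D from a hypothetical toy corpus,
# then form the linear kernel K = D D^T.
docs = [["the", "cat", "sat"], ["the", "dog"], ["cat", "dog", "dog"]]
vocab = sorted({t for d in docs for t in d})       # the terms t_1, ..., t_N

D = np.array([[d.count(t) for t in vocab] for d in docs], dtype=float)
K = D @ D.T                                        # document x document Gram matrix

print(vocab)   # ['cat', 'dog', 'sat', 'the']
print(K)       # K[i, j] = x_i x_j^T; symmetric, positive semi-definite
```

Each entry of $K$ counts shared term occurrences between two documents, which is exactly the inner product of their BoW vectors.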
The kernel map (\ref{innermap}) and the kernel matrix (\ref{linker}) defined above both represent the most basic kernel, namely the linear kernel, which satisfies the Mercer conditions. These conditions force the given matrix to be positive semi-definite and symmetric in order for it to be a valid kernel function. More sophisticated functions can be chosen as kernel candidates as long as the representation matrix satisfies the Mercer conditions. Some popular kernels are as follows.
\begin{eqnarray}
\label{kernels}
\text{Gaussian \, Kernel}&:& K_{rbf}(x_i, x_j)=e^{\frac{-||x_i-x_j||^2}{2\sigma^2}}\label{ilk}\\
\text{Polynomial \, Kernel}&:& K_{pol}(x_i, x_j)= (x_i^T x_j +1)^d \label{iki}\\
\text{Linear \, Kernel}&:& K_{lin}(x_i, x_j)=x_i ^T x_j \label{uc}
\end{eqnarray}
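These kernels can be sketched directly on toy vectors; note that the Gaussian exponent uses the squared norm, following the standard RBF definition, and the vectors and parameter values below are arbitrary illustrative choices.

```python
import numpy as np

# Minimal sketches of the Gaussian, polynomial, and linear kernels,
# evaluated on two toy vectors.
def k_lin(x, y):
    return float(x @ y)

def k_pol(x, y, d=2):
    return float((x @ y + 1) ** d)

def k_rbf(x, y, sigma=1.0):
    return float(np.exp(-np.linalg.norm(x - y) ** 2 / (2 * sigma ** 2)))

x, y = np.array([1.0, 0.0]), np.array([0.0, 1.0])
print(k_lin(x, y), k_pol(x, y), k_rbf(x, y))  # 0.0 1.0 e^{-1} ≈ 0.368
```

Orthogonal vectors give a zero linear kernel, while the polynomial and Gaussian kernels still report a non-zero similarity, which is the kind of non-linearity exploited later.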
\subsection{Semantic Diffusion Kernel}\label{SemKer1}
We can represent a data matrix by employing an undirected graph where each vertex indexes a term and each edge keeps the information of the co-occurrences between terms in the documents.
Let us explain what we mean by 'co-occurrences between terms' with an example. Consider three sentences: \textit{Today is very cold and dark} is the first one, \textit{Dark rooms generally have mold} is the second one, and \textit{Mold can cause sickness} is the third one. We call them $d_1, d_2,$ and $d_3$, respectively. We focus on the terms \textit{cold}, \textit{dark}, \textit{mold}, and \textit{sickness}, which we name $t_1, t_2, t_3,$ and $t_4$, respectively. The terms $t_1$ and $t_2$ have a first order co-occurrence since they are both in $d_1.$ Similarly, the terms $t_2$ and $t_3$, and the terms $t_3$ and $t_4$, have first order co-occurrences. This way, we can construct the representation matrix $D^T$ as given in \cite{Wang14, Wang17}.
\begin{equation}
(D^{T})_{ij} = \left\{
\begin{array}{rl}
1 & \, \, \text{if } \, \,t_i \in d_j,\\
0 &\, \, \text{otherwise.}
\end{array} \right.
\end{equation}
Wang et al. \cite{Wang14, Wang17} show that the matrix $G=D^TD$ captures the first order correlations since it encodes the first order co-occurrence paths between terms.
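The first, second, and third order correlations of the example above can be checked directly from powers of $G$; the following minimal sketch uses the three example sentences.

```python
import numpy as np

# Term co-occurrences for the three example sentences; rows of D are the
# documents d1, d2, d3 and columns are the terms cold, dark, mold, sickness.
D = np.array([[1, 1, 0, 0],    # d1: "Today is very cold and dark"
              [0, 1, 1, 0],    # d2: "Dark rooms generally have mold"
              [0, 0, 1, 1]])   # d3: "Mold can cause sickness"

G = D.T @ D          # term x term: first order co-occurrences
G2 = G @ G           # paths of length 2: second order correlations
G3 = G2 @ G          # paths of length 3: third order correlations

print(G[0, 2])       # 0 -> cold and mold never co-occur directly
print(G2[0, 2])      # 1 -> but they are linked through dark
print(G3[0, 3])      # 1 -> cold and sickness linked through dark and mold
```

The zero entry of $G$ for cold and mold is exactly the lost similarity discussed next, while $G^2$ and $G^3$ recover it through intermediate terms.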
Although the first order co-occurrences indicate a substantial similarity between the terms, the representations that depend on them, such as the BoW representation and the graph representation, are both sparse and carry only limited information. For example, there is a semantic similarity between \textit{cold} and \textit{mold}, which may be crucial in classifying $d_1$ and $d_2$, but $G$ fails to represent this phenomenon.
Hence, careful observation will reveal that the similarity information of semantically close yet distinct words is lost in these representations. A common approach to solve this problem is to enrich the BoW representation with the usage of semantic kernels. A semantic kernel $S$ is a $term\times term$ matrix which carries the global similarity scores among terms. It is used to diffuse this information into the kernel in use without harming the Mercer conditions (positive semi-definiteness and symmetry).
In Equation (\ref{semdif}) we diffuse a semantic kernel $S$ to the linear (BoW) kernel $K_{lin}.$
\begin{equation}
\label{semdif}
K_{new}(x_i, x_j)=x_i S S^Tx_j
\end{equation}
There are different ways to create a semantic kernel \cite{Altinel15, Cristianini02, Gliozzo05}, but how can we construct a semantic kernel $S$ that can be used as a vessel to carry this semantic information? Before answering this question, let us go back to our example. If there is a first order co-occurrence between two terms, then they are said to have a first order correlation, e.g., \textit{cold} and \textit{dark} have a first order correlation. The similarity between the term \textit{mold} and the term \textit{cold} is called a second order correlation through the term \textit{dark}. Moreover, there is a third order correlation between \textit{cold} and \textit{sickness} through \textit{dark} and \textit{mold}. Such correlations among terms are called higher order correlations and can be captured by $G^p$, where $p$ is the order. The answer to our question is the semantic diffusion kernel in Equation (\ref{Tay}) \cite{Kandola03}, which is a matrix designed to carry the higher order semantic correlations.
\begin{eqnarray}
\label{Tay}
S&=& e^{\lambda G/2}\\
&=& G^0+ \frac{\lambda}{2} G + \frac{1}{2!}\left(\frac{\lambda}{2}\right)^{2} G^2 +\dots +\frac{1}{p!}\left(\frac{\lambda}{2}\right)^{p} G^p+\dots\nonumber\\
&=&\sum_{p=0}^{\infty} \frac{1}{p!}\left(\frac{\lambda}{2}\right)^{p} G^p\nonumber
\end{eqnarray}
Intuitively, as the order of the correlation increases, the similarity score should decrease. That is why we use $\lambda$ as the decaying factor in Equation (\ref{Tay}), which is the Taylor expansion of $e^{\lambda G/2}.$ As the sum goes to infinity, this kernel approaches the Gaussian kernel. In essence, the diffusion kernel is the discretized Gaussian kernel, which is the solution of the time-dependent (continuous) diffusion equation \cite{Kondor04}. In practice, the semantic problem does not need a continuous construction at all. Our experiments show that even after the second or third step of the Taylor expansion, the contributions of the higher order terms fade away.
Now let us check whether the Mercer conditions hold. The matrix $S$ is a $term \times term$ symmetric, positive semi-definite matrix. It encodes the semantic relations among terms (features). This semantic kernel has to be diffused into the linear kernel (\ref{linker}) as follows.
\begin{equation}
K_{sd}=DS(DS)^T=DSS^TD^T
\end{equation} The matrix $K_{sd}$ is symmetric and positive semi-definite; hence it is a valid kernel.
Let us sum up the discussion above in the following algorithm.\RestyleAlgo{boxruled}
\LinesNumbered
\begin{algorithm}[ht]
\caption{Diffusion Kernel Algorithm\label{algDifKer}}
\KwData{ Matrix $D$, Parameter $\lambda,$ Step}
\KwResult{Semantic Diffusion Kernel K}
\textbf{initialization}\;
Calculate $G=D^TD$\;
Compute the Taylor expansion as follows\;
$S:$ Identity matrix of the same size as $G$\;
\For {$I =1\, \textbf{to} \, \, Step$}{
$S:= S+\frac{1}{I!}\left(\frac{\lambda}{2}\right)^{I} G^{I}$\;
}
Compute $K_{sd}=DSS^TD^T$\;
\end{algorithm}
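Algorithm \ref{algDifKer} admits a short sketch; the truncation follows the expansion of $S=e^{\lambda G/2}$ in Equation (\ref{Tay}), and the decay factor, number of steps, and toy matrix $D$ are illustrative assumptions.

```python
import numpy as np
from math import factorial

# Sketch of the semantic diffusion kernel: truncate the Taylor expansion
# of S = exp(lambda * G / 2) after `steps` terms, then form
# K_sd = D S S^T D^T.
def diffusion_kernel(D, lam=0.5, steps=3):
    G = D.T @ D                                  # term x term co-occurrences
    S = np.eye(G.shape[1])                       # p = 0 term of the expansion
    Gp = np.eye(G.shape[1])
    for p in range(1, steps + 1):
        Gp = Gp @ G                              # G^p, built incrementally
        S += (lam / 2) ** p / factorial(p) * Gp
    return D @ S @ S.T @ D.T                     # Mercer conditions hold

D = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], dtype=float)
K = diffusion_kernel(D)
print(np.allclose(K, K.T))                       # True: symmetric
print(np.all(np.linalg.eigvalsh(K) >= -1e-9))    # True: positive semi-definite
```

Since $K_{sd}=(DS)(DS)^T$, symmetry and positive semi-definiteness are guaranteed by construction, which the checks above confirm numerically.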
\section{Kernel Principal Component Analysis}\label{KPCA}
Principal Component Analysis (PCA) is a technique that captures the information in data by a linear transformation to a coordinate system which is mathematically designed to maximize the variance among data points \cite{Pearson01, Hotelling33}. If the data does not fit in a linear subspace of the space in which it lies, then PCA fails to perform this task. A general method to deal with such non-linearity is Kernel PCA (KPCA) \cite{Scholkopf98}. A kernel function or a kernel matrix is calculated directly from the data and represents a dot product in a feature space. It enables us to get the non-linear similarities of the data without explicitly computing the feature maps, via the kernel trick \cite{Hofman08}.
Mathematically, KPCA is a method to compute the non-linear principal components, which correspond to the eigenvectors of the kernel matrix \cite{Vidal16}.
If we focus on kernel methods, we observe that their basic idea is to map the data points from the input space to some feature space where the separation of the data becomes computationally more manageable; this simplicity stems from the so-called 'kernel trick.' The kernel trick enables us to skip the cumbersome mathematics of computing the images of $D$ in a feature space and projecting back the result of the inner product in that space. Instead, we can construct an inner product satisfying specific criteria, and we obtain all the similarities of the data in the feature space.
As discussed above, KPCA projects the data a non -linear coordinate system which describes the intrinsic relations among data points better than a linear coordinate system. The first step to achieve this goal is to construct the right kernel matrix. In this work, to capture the semantic similarities, best possible kernel among custom kernels is the semantic diffusion kernel.
For the problem setup, we follow the notation of \cite{Vidal16}, and we refer the reader to this excellent source for detailed information on the topic. We construct the semantic diffusion kernel $K_{sd}$ following Algorithm (\ref{algDifKer}). To obtain a meaningful variance, we center the kernel matrix $K_{sd}$ as
\begin{equation}
\label{centerJ}
\hat{K}_{sd}= JK_{sd} J^T \, \, \text{where} \, \, J_{ij} =
\left\{
\begin{array}{rl}
1 & \, \, \text{if } \, \,i=j \\
\frac{-1}{m}&\, \, \, \,\text{otherwise}.
\end{array} \right.
\end{equation}Matrix $J$ in Equation (\ref{centerJ}) is called the centering matrix; it is an $m\times m$ matrix with $1$ on the diagonal and $\frac{-1}{m}$ elsewhere. After centering, we compute the eigenvalues and the corresponding eigenvectors of $\hat{K}_{sd}.$
\begin{equation}
\label{eig}
\hat{K}_{sd} w_i= \lambda_i w_i
\end{equation}
The eigenvalues are $\lambda_i,$ and $w_i$ are the corresponding eigenvectors. We let $\Lambda$ denote the diagonal matrix that contains the eigenvalues and $W$ denote the matrix of eigenvectors. Then Equation (\ref{eig}) becomes
\begin{equation}
\label{EV}
\hat{K}_{sd} W= \Lambda W.
\end{equation}
However, our main purpose is to find the right transformation, one that enables us to separate the data as accurately as possible. Before the projection step, we can also apply dimension reduction, as in traditional PCA, in order to remove noise. This step is crucial when the number of features is very high. Let us choose the dimension $d$ and take the first $d$ eigenvalues $\Lambda_d$ and corresponding eigenvectors $W_d.$ This number can be chosen either by a grid search on the ratios of the eigenvalues or by inspecting a graph.
After ordering the eigenvalues in descending order, we pick the first $d$ eigenvalues and the corresponding eigenvectors. Then we normalize the eigenvectors so that for every $i\in\{1,\dots, d\}$ we have $$||w_i||=\lambda_i^{-1/2} \iff \hat{W}_d=\Lambda_d^{-1/2} W_d.$$
Then the low dimensional non-linear principal components for the entire data set can be calculated as follows.
\begin{eqnarray}
\label{NPC}
Y&=& \hat{W}_d^T K \nonumber \\
&=& \Lambda_d^{-1/2}W_d^T K\nonumber\\
&=& \Lambda_d^{-1/2}\Lambda_d W_d^T\nonumber\\
&=& \Lambda_d^{1/2} W_d^T
\end{eqnarray}
Equation (\ref{NPC}) implies that the low-dimensional coordinates can be computed using the eigenvalues and eigenvectors of the kernel matrix $K_{sd}.$
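The centering and projection steps above can be sketched in NumPy as follows (an illustrative helper under our own naming; the matrix $J$ is built literally as defined in Equation (\ref{centerJ}), with $1$ on the diagonal and $-1/m$ elsewhere, and tiny negative eigenvalues produced by roundoff are clipped to zero):

```python
import numpy as np

def kpca_coordinates(K, d):
    """Center the kernel matrix, diagonalize it, and return the top-d
    non-linear principal coordinates Y = Lambda_d^{1/2} W_d^T."""
    m = K.shape[0]
    J = np.full((m, m), -1.0 / m)            # -1/m off the diagonal...
    np.fill_diagonal(J, 1.0)                 # ...and 1 on the diagonal
    K_hat = J @ K @ J.T                      # centered kernel
    evals, evecs = np.linalg.eigh(K_hat)     # ascending eigenvalues
    order = np.argsort(evals)[::-1][:d]      # keep the d largest
    lam_d = np.clip(evals[order], 0.0, None) # guard tiny negative roundoff
    W_d = evecs[:, order]
    return np.sqrt(lam_d)[:, None] * W_d.T   # Y, shape (d, m): columns = points
```

With $d=m$ and a positive semi-definite centered kernel, $Y^TY$ recovers $\hat{K}_{sd}$, which serves as a quick sanity check.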
\subsection{Approach-Diffusion Kernel PCA Algorithm}\label{DKPCA}
The main goal of this work is to create a methodology that learns semantic similarities efficiently, transforms the data while applying a dimension reduction guided by these similarities, and classifies the processed data. In the previous sections, we established the basic theoretical tools necessary to construct such an algorithm. Now we combine these tools to create our algorithm, the Diffusion Kernel Principal Component Analysis.
We start by preprocessing the data, cleaning punctuation and stop words with NLTK\footnote{https://www.nltk.org/ }, and then obtain its BoW representation as given in Section \ref{SemKer}. Then we compute the diffusion kernel $K_{sd}$ as explained in Section \ref{SemKer1}. This kernel is a $document \times document$ matrix. We use this matrix as we would use the covariance matrix in plain PCA, i.e., we centralize $K_{sd}$ as in Equation (\ref{centerJ}) and compute the eigenvalues (\ref{eig}) and eigenvectors as in (\ref{EV}).
At this point, we decide whether dimension reduction should be applied. The main idea is to find the components that explain the most significant portion of the variance. First, we order the eigenvalues in descending order, and we perform a grid search on one fold of the ten train-test splits (this part is explained in detail in Section \ref{exp}) to find the most appropriate dimension while avoiding data leakage. Our search checks the ratios of the eigenvalues. Alternatively, a suitable dimension can be estimated by checking the ratio of each eigenvalue to the sum of all eigenvalues and discarding the smallest values. When all features are crucial for the model, this step can be skipped, but we believe that for long texts with a large vocabulary this step is a vital ingredient for a dimension reduction guided by the semantic similarities. Then, by applying Equality (\ref{NPC}), we get the non-linear transformation of the data.
These steps are carried out in an unsupervised manner, except for the search for the right dimension. We need a classifier to turn the algorithm into a supervised one. We choose the k-nearest neighbors (KNN) algorithm for three main reasons.
\begin{itemize}
\item[1)] We want to compare our results with \cite{Wu04}, which introduces the usage of (polynomial kernel) KPCA with KNN for WSD, and \cite{Wang14}, which presents the usage of the diffusion kernel with SVM.
\item[2)] As a consequence of Reason 1), SVM as a classifier is not an option.
\item[3)] KNN is an algorithm with high variance and low bias. Choosing KNN prevents the classifier from infusing its own bias into our model and lets us test the performance of DKPCA better.
\end{itemize}Let us wrap up the mathematical computation and our discussion with the following algorithm.
As explained above, any standard classification algorithm can be used in the last step.
\RestyleAlgo{boxruled}
\LinesNumbered
\begin{algorithm}[ht]
\caption{Diffused Kernel PCA\label{alg}}
\KwData{Matrix $X$, Parameter $\lambda,$ Step}
\KwResult{Non-linear Principal Components Y}
\textbf{initialization}\;
Calculate Matrix $D$ as given in (\ref{datD})\;
Call Algorithm (\ref{algDifKer})\;
Compute Matrix $J$ as in (\ref{centerJ})\;
Center $K_{sd}$ as $\hat{K}_{sd} =JK_{sd}J^{T}$\;
Calculate Matrices $\Lambda$ and $W$ as given in (\ref{EV})\;
Apply dimension reduction from $N$ to $d$ and get $\Lambda_d$ and $W_d$\;
Compute the transformed dataset $Y$ as in (\ref{NPC})\;
Compute KNN with input $Y$\;
\end{algorithm}
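To make the algorithm concrete, here is an end-to-end toy run (the data, the parameter values, and the minimal nearest-neighbor vote standing in for a library KNN are all our own illustrative choices):

```python
import numpy as np
from math import factorial

def dkpca_knn_predict(D, labels, train_idx, test_idx,
                      lam=0.01, steps=2, d=2, k=1):
    """Sketch of the DKPCA pipeline: diffusion kernel -> centering ->
    eigendecomposition -> top-d projection -> KNN vote over training docs."""
    G = D.T @ D                              # diffusion kernel (Algorithm 1)
    S, G_pow = np.eye(G.shape[0]), np.eye(G.shape[0])
    for i in range(1, steps + 1):
        G_pow = G_pow @ G
        S = S + (lam ** i / factorial(i)) * G_pow
    K = D @ S @ S.T @ D.T
    m = K.shape[0]                           # center the kernel
    J = np.full((m, m), -1.0 / m)
    np.fill_diagonal(J, 1.0)
    K_hat = J @ K @ J.T
    evals, evecs = np.linalg.eigh(K_hat)     # spectrum, reduce to d
    order = np.argsort(evals)[::-1][:d]
    Y = np.sqrt(np.clip(evals[order], 0, None))[:, None] * evecs[:, order].T
    Y = Y.T                                  # rows = documents
    preds = []                               # KNN classification step
    for t in test_idx:
        dist = np.linalg.norm(Y[train_idx] - Y[t], axis=1)
        nearest = np.asarray(labels)[train_idx][np.argsort(dist)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        preds.append(vals[np.argmax(counts)])
    return np.array(preds)
```

On a toy corpus where two senses use disjoint vocabularies, the two held-out documents land next to their own sense cluster in the projected space, and the nearest-neighbor vote labels them accordingly.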
\vspace{-0.5cm}
\section{Experiments, Evaluations and Results}\label{exp}
In order to evaluate the performance of DKPCA on the WSD task, we use the most commonly used datasets from SensEval.\footnote{http://senseval.org/} We implement SVM- and non-linear-PCA-based kernel methods with linear, polynomial, Gaussian, and semantic diffusion kernels.
\subsection{Datasets}
\begin{table}[h!]
\centering
\caption{Senses and Frequencies in Hard Dataset}
\label{hard}
\begin{tabular}{|l|l|l|}
\hline
\bf{No} & \bf{Sense} & \bf{Frequency} \\ \hline
1 & not easy (difficult) & 3455 \\ \hline
2 & not soft (metaphoric) & 502 \\ \hline
3 & not soft (physical) & 376 \\ \hline
\end{tabular}
\end{table}
\paragraph{Hard Data} consists of 4333 instances with three senses taken from WordNet, as given in Table \ref{hard}. The instances are taken from the San Jose Mercury News Corpus (SJMN) \cite{Leacock98}.
As seen in Table \ref{hard}, the data is not equally distributed: nearly $80\%$ of the instances are labeled with sense 1, which is why we include $F_1$ micro and macro scores.
\begin{table}[h!]
\centering
\caption{Senses and Frequencies in Line Dataset}
\label{line}
\begin{tabular}{|l|l|l|}
\hline
\bf{No} & \bf{Sense} & \bf{Frequency} \\ \hline
1 & Stand in line & 349 \\ \hline
2 & A nylon line & 502 \\ \hline
3 & A line between good and evil & 374 \\ \hline
4 & A line from Shakespeare & 404 \\ \hline
5 & The line went dead & 429 \\ \hline
6 & A new line of workstations & 2217 \\ \hline
\end{tabular}
\end{table}\paragraph{Line Data} is due to \cite{Leacock93}. It consists of 4147 instances gathered from the 1987-1989 Wall Street Journal (WSJ) corpus and the American Printing House for the Blind (APHB), with six different senses annotated from WordNet. As with the hard data, the distribution of the instances is not homogeneous.
\begin{table}[h!]
\centering
\caption{Senses and Frequencies in Serve Dataset}
\label{serve}
\begin{tabular}{|l|l|l|}
\hline
\bf{No} & \bf{Sense} & \bf{Frequency} \\ \hline
1 & Function as something & 853 \\ \hline
2 & Provide a service & 439 \\ \hline
3 & Supply with food & 1814 \\ \hline
4 & Hold an office & 1272 \\ \hline
\end{tabular}
\end{table} \paragraph{Serve Data} is, like the hard data, due to \cite{Leacock98}; like the line data, it is gathered from the 1987-1989 Wall Street Journal (WSJ) corpus and the American Printing House for the Blind (APHB). It is labeled with four distinct senses from WordNet.
\begin{table}[h!]
\centering
\caption{Senses and Frequencies in Interest Dataset}
\label{int}
\begin{tabular}{|l|l|l|}
\hline
\bf{No} & \bf{Sense} & \bf{Frequency} \\ \hline
1 & Readiness to give attention & 361 \\ \hline
2 & Quality of causing attention to be given & 11 \\ \hline
3 & Activity, etc. That one gives attention to & 66 \\ \hline
4 & Advantage, advancement or favor & 177 \\ \hline
5 & A share in a company or business & 500 \\ \hline
6 & Money paid for the use of money & 1252 \\ \hline
\end{tabular}
\end{table}\paragraph{Interest Data} is created with 2368 instances from the part-of-speech-tagged subset of the Penn Treebank Wall Street Journal Corpus (ACL/DCI version), annotated using the Longman Dictionary of Contemporary English \cite{Bruce94}.
\subsection{Experimental Setup}\label{Ex}
In this study, we compare the performances of seven different algorithms. Three of them are SVM based: SVM with a linear, Gaussian (radial basis function, RBF), or semantic diffusion kernel. The rest are PCA based: PCA, Kernel PCA with Gaussian (RBF) and polynomial kernels, and Diffusion Kernel PCA (DKPCA). All proposed algorithms are implemented in Python\footnote{https://www.python.org/downloads/} using the Python Data Analysis Library (pandas v0.22.0), NLTK, and scikit-learn 0.19.1. For the kernels (RBF, linear and polynomial) and the algorithms (PCA, KPCA and SVM) we use the implementations embedded in scikit-learn. We implement the diffusion kernel and Diffusion Kernel PCA following Algorithms (\ref{algDifKer}) and (\ref{alg}). Our implementation is available on GitHub\footnote{Github.com/dkpca/wsd-project}.
All the tests are performed on a computer with CPU: INTEL(R) CORE(TM)[email protected] and 16 GB RAM.
For preprocessing, the stop words and non-alphabetical words are removed, and we compute the vector representation of the data using tf (term frequency) as in Section \ref{SemKer}. We split the datasets into three different training set sizes with label percentages $5\%$, $10\%$ and $30\%.$ We run the algorithms ten times with these train-test splits. After testing on each fold, we take the arithmetic mean of the results as the final evaluation reported in Tables \ref{Acc}-\ref{F1-mac}.
In order to avoid data leakage, we optimize parameters on only one fold of these train-test splits. For the linear kernel, the only optimization is done for the parameter $C.$ After a grid search over the values $\{0.01, 0.05, 0.1, 0.5, 1, 10\}$, we find that the values $1$, $0.01$, $0.05$ and $10$ cause slight increases in the accuracy results, depending on the dataset. For the RBF kernel, we do not optimize the parameter $\gamma$, since our previous attempts show that the default value in scikit-learn gives the best accuracy values.
The decaying factor $\lambda$ is also chosen by grid search and is fixed at $0.0039.$ We are surprised to observe that the Taylor expansion should be truncated at either 2 or 3 steps. This makes sense once we consider the effect of the semantic similarity, which is expected to shrink as the order of the correlation grows \cite{Ganiz06}.
For the polynomial kernel, we use the default degree, which is 3.
We perform dimension reduction on the interest dataset by checking the ratios of the eigenvalues. The right dimension can also be estimated by calculating the ratio of each eigenvalue to the sum of all eigenvalues. The interest dataset has 2367 documents (contexts) and 6929 terms (features). We reduce the dimension from 6929 to 1710, as seen in Fig. \ref{fig1}. In Table \ref{IntDim}, we present the test results without the dimension reduction and compare the performance in Section \ref{Eva}.
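The eigenvalue-ratio heuristic used above can be sketched as follows (an illustrative helper of our own; the $95\%$ threshold is an example value, not the criterion behind the reported 6929-to-1710 reduction):

```python
import numpy as np

def choose_dimension(eigvals, threshold=0.95):
    """Smallest d such that the top-d eigenvalues account for at least
    `threshold` of the total spectrum mass."""
    w = np.clip(np.sort(np.asarray(eigvals, dtype=float))[::-1], 0.0, None)
    ratios = np.cumsum(w) / np.sum(w)        # cumulative explained fraction
    return int(np.searchsorted(ratios, threshold) + 1)
```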
Our main algorithm is based on KPCA, and in order to use the unsupervised KPCA as a supervised algorithm, we need a classifier other than SVM. We choose KNN, as explained in detail in Section \ref{DKPCA}. The number of neighbors for KNN is chosen by cross validation over the values $\{1,\dots,10\}$, and the tests indicate that it should be 6.
\subsection{Evaluation}\label{Eva}
The average test results over the ten train-test splits are summarized in Tables \ref{Acc}-\ref{F1-mac}. In the tables, Lin. indicates the linear kernel, RBF the Gaussian kernel, Dif. the SVM with the diffusion kernel, and Pol. the polynomial kernel. The datasets we use have skewed label distributions, as given in Tables \ref{hard}-\ref{int}. As a result, we employ the $F_1$ micro and macro scores as our primary evaluation metrics.
\begin{table}[!htp]
\centering
\caption{Accuracy scores on the datasets: if the difference between two competitive scores is less than $1\%$, both scores are emphasized.}
\label{Acc}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Dataset & Size \% & SVM Lin & SVM RBF & SVM Dif. & PCA & KPCA Pol. & KPCA RBF & DKPCA \\ \hline
\multirow{3}{*}{Interest} & 5\% & 57,36 & 53,00 & 63,86 & 64,17 & 64,37 & 64,32 & \textbf{72,94} \\ \cline{2-9}
& 10\% & 65,06 & 53,15 & 65,39 & 67,41 & 66,77 & 67,65 & \textbf{77,53} \\ \cline{2-9}
& 30\% & 79,48 & 53,59 & 79,04 & 73,08 & 72,99 & 72,80 & \textbf{81,68} \\ \hline
\multirow{3}{*}{Line} & 5\% & 53,74 & 53,46 & 58 & 60,45 & 62,76 & 62 & \textbf{67,92} \\ \cline{2-9}
& 10\% & 56,06 & 53,45 & 61,26 & 63,88 & 66,48 & 65,57 & \textbf{71,27} \\ \cline{2-9}
& 30\% & 73,27 & 53,46 & \textbf{75,51} & 69,21 & 70,58 & 71,71 & \textbf{75,75} \\ \hline
\multirow{3}{*}{Serve} & 5\% & 65,27 & 41,44 & 68,71 & 63,53 & 64,04 & 62,06 & \textbf{71,73} \\ \cline{2-9}
& 10\% & 72,26 & 41,45 & 70,69 & 66,56 & 66,94 & 67,05 & \textbf{74,48} \\ \cline{2-9}
& 30\% & \textbf{81,71} & 41,48 & 78,51 & 69,81 & 70,88 & 70,47 & 77,23 \\ \hline
\multirow{3}{*}{Hard} & 5\% & 79,80 & 79,76 & \textbf{81,81} & 80,19 & 79,76 & 79,80 & \textbf{81,15} \\ \cline{2-9}
& 10\% & 80,12 & 79,76 & \textbf{82,54} & 80,41 & 79,86 & 79,79 & \textbf{82,31} \\ \cline{2-9}
& 30\% & 81,79 & 79,85 & \textbf{84,07} & 80,86 & 80,08 & 80 & \textbf{83,86} \\ \hline
\end{tabular}
\end{table}
Tables \ref{Acc}-\ref{F1-mac} show the results after the dimension reduction and Tables \ref{IntDim} and \ref{NoDim} show the test performance of DKPCA without dimension reduction. \begin{table}[!htp]
\centering
\caption{DKPCA Scores Without Dimension Reduction for Interest Dataset}
\label{IntDim}
\begin{tabular}{|l|l|l|l|}
\hline
Size & $F_1$-macro & $F_1$-micro & Accuracy \\ \hline
5\% & 36,38 & 65,51 & 65,51 \\ \hline
10\% & 41,96 & 74,74 & 68,46 \\ \hline
30\% & 50,68 & 74,74 & 74,74 \\ \hline
\end{tabular}
\end{table}\vspace{-0.5cm}According to Tables \ref{IntDim} and \ref{Acc}, dimension reduction improves the accuracy and $F_1$ scores substantially; yet even without dimension reduction, DKPCA performs better than the other algorithms at $5\%$ and $10\%$ labeling on the interest and line datasets, and comparably on the hard and serve datasets.
Dimension reduction also improves the efficiency of KNN, since KNN performs poorly on high-dimensional datasets (the curse of dimensionality).
\begin{table}[!htp]
\centering
\caption{Micro $F_1$-scores on the datasets: if the difference between two competitive scores is less than $1\%$, both scores are emphasized.}
\label{F1-mic}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Dataset & Size \% & SVM Lin & SVM RBF & SVM Dif. & PCA & KPCA Pol. & KPCA RBF & DKPCA \\ \hline
\multirow{3}{*}{Interest} & 5\% & 57,36 & 53,00 & 63,86 & 64,17 & 64,37 & 64,32 & \textbf{72,94} \\ \cline{2-9}
& 10\% & 65,06 & 53,15 & 65,39 & 67,41 & 66,77 & 67,65 & \textbf{77,53} \\ \cline{2-9}
& 30\% & 79,48 & 53,59 & 79,04 & 73,08 & 72,99 & 72,80 & \textbf{81,68} \\ \hline
\multirow{3}{*}{Line} & 5\% & 53,74 & 53,46 & 58 & 60,45 & 62,76 & 62 & \textbf{67,92} \\ \cline{2-9}
& 10\% & 56,06 & 53,45 & 61,26 & 63,88 & 66,48 & 65,57 & \textbf{71,27} \\ \cline{2-9}
& 30\% & 73,27 & 53,46 & \textbf{75,51} & 69,21 & 70,58 & 71,71 & \textbf{75,75} \\ \hline
\multirow{3}{*}{Serve} & 5\% & 65,27 & 41,44 & 68,71 & 63,53 & 64,04 & 62,06 & \textbf{71,73} \\ \cline{2-9}
& 10\% & 72,26 & 41,45 & 70,69 & 66,56 & 66,94 & 67,05 & \textbf{74,48} \\ \cline{2-9}
& 30\% & \textbf{81,71} & 41,48 & 78,51 & 69,81 & 70,88 & 70,47 & 77,23 \\ \hline
\multirow{3}{*}{Hard} & 5\% & 79,80 & 79,76 & \textbf{81,81} & 80,19 & 79,76 & 79,80 & \textbf{81,15} \\ \cline{2-9}
& 10\% & 80,12 & 79,76 & \textbf{82,54} & 80,41 & 79,86 & 79,79 & \textbf{82,31} \\ \cline{2-9}
& 30\% & 81,79 & 79,85 & \textbf{84,07} & 80,86 & 80,08 & 80 & \textbf{83,86} \\ \hline
\end{tabular}
\end{table}For the hard (12055 terms), serve (17475 terms) and line (17002 terms) datasets, we simply set the dimension to 1710, to see how an arbitrary choice affects the algorithm. In Table \ref{NoDim} we present the scores DKPCA obtains when applied to these datasets without dimension reduction. Moreover, the skewness of the class distributions is significant, and it causes the micro-averaged scores to be larger than the macro-averaged scores, as seen in Tables \ref{F1-mac} and \ref{F1-mic}. \begin{figure}
\centering
\includegraphics[width=7cm]{Dim.eps}
\caption{For interest data set, we reduced the dimension from 6929 to 1710. } \label{fig1}
\end{figure}
Table \ref{Acc} shows that for $5\%$ and $10\%$ labeling, the accuracy scores of DKPCA are significantly larger than those of all the other algorithms on the line, serve and interest datasets. This is an important result given the scarcity of labeled data. It also indicates that the algorithm can be converted into a semi-supervised one, as done in \cite{Su04}. Moreover, carefully done dimension reduction on the interest dataset creates a larger positive margin, circa $10\%,$ for DKPCA; even random or no reduction on the other datasets yields a smaller margin, circa $2\%.$
For the hard dataset, the difference from the closest algorithm, the Diffusion Kernel SVM, is not statistically significant; careful parameter optimization for the hard data might change this. For $30\%$ labeled data, DKPCA remains competitive with SVM Dif. and SVM Lin., while on the serve dataset SVM Lin. gives a very high performance. The micro-averaged $F_1$ scores, as seen in Table \ref{F1-mic}, are very similar to the accuracy scores. On the other hand, the macro-averaged $F_1$ scores are very low compared to the other metrics in use. Despite this fact, DKPCA outperforms all the other algorithms for $5\%$ and $10\%$ labeled data and obtains statistically similar results for $30\%$ labeling, except on the serve dataset.
\begin{table}[!htp]
\centering
\caption{Macro $F_1$-scores on the datasets: if the difference between two competitive scores is less than $1\%$, both scores are emphasized.}
\label{F1-mac}
\begin{tabular}{|l|l|l|l|l|l|l|l|l|}
\hline
Dataset & Size \% & SVM Lin & SVM RBF & SVM Dif. & PCA & KPCA Pol. & KPCA RBF & DKPCA \\ \hline
\multirow{3}{*}{Interest} & 5\% & 17,92 & 11,76 & 29,18 & 32,19 & 32,77 & 32,01 & \textbf{45,26} \\ \cline{2-9}
& 10\% & 26,03 & 12,32 & 34,49 & 39,31 & 38,21 & 39,97 & \textbf{52,56} \\ \cline{2-9}
& 30\% & 51,69 & 13,40 & 54,87 & 47,92 & 48,19 & 47,60 & \textbf{57,62} \\ \hline
\multirow{3}{*}{Line} & 5\% & 12,55 & 11,61 & 21,97 & 31,63 & 41,76 & 40,31 & \textbf{49,08} \\ \cline{2-9}
& 10\% & 19,59 & 11,61 & 28,58 & 38,88 & 47,78 & 46,73 & \textbf{54,78} \\ \cline{2-9}
& 30\% & 60,24 & 11,61 & \textbf{61,21} & 48,68 & 53,48 & 54,64 & \textbf{61,76} \\ \hline
\multirow{3}{*}{Serve} & 5\% & 42,59 & 14,65 & 53,43 & 47,65 & 49,66 & 46,81 & \textbf{62,94} \\ \cline{2-9}
& 10\% & 56,79 & 14,68 & 55,76 & 51,37 & 53,00 & 53,10 & \textbf{67,02} \\ \cline{2-9}
& 30\% & \textbf{75,18} & 14,73 & 70,97 & 57,77 & 59,73 & 58,80 & 70,38 \\ \hline
\multirow{3}{*}{Hard} & 5\% & 30,05 & 29,74 & 43,60 & 35,52 & 30,30 & 30,64 & \textbf{48,30} \\ \cline{2-9}
& 10\% & 32,63 & 29,94 & 47,19 & 36,31 & 30,79 & 30,72 & \textbf{53,77} \\ \cline{2-9}
& 30\% & 43,75 & 43,75 & 54,79 & 38,17 & 32,37 & 32.32 & \textbf{58,40} \\ \hline
\end{tabular}
\end{table}
\begin{table}[!htp]
\centering
\caption{DKPCA Scores Without Dimension Reduction for Line, Hard and Serve Datasets}
\label{NoDim}
\begin{tabular}{|l|l|l|l|l|}
\hline
Dataset & Size & Accuracy & $F_1$-micro & $F_1$-macro \\ \hline
\multirow{3}{*}{Line} & 5\% & 65,17 & 65,17 & 42,95 \\ \cline{2-5}
& 10\% & 68,68 & 68,68 & 48,92 \\ \cline{2-5}
& 30\% & 72,77 & 72,77 & 55,79 \\ \hline
\multirow{3}{*}{Serve} & 5\% & 67,02 & 67,02 & 53,73 \\ \cline{2-5}
& 10\% & 69,56 & 69,56 & 57,93 \\ \cline{2-5}
& 30\% & 72,21 & 72,21 & 62,23 \\ \hline
\multirow{3}{*}{Hard} & 5\% & 80,10 & 80,10 & 34,33 \\ \cline{2-5}
& 10\% & 80,44 & 80,44 & 36,21 \\ \cline{2-5}
& 30\% & 81,36 & 81,36 & 34,33 \\ \hline
\end{tabular}
\end{table}
\section{Conclusion and Further Study}
Experimental results show that DKPCA outperforms all kernel-based SVM algorithms, including the semantic diffusion kernel based SVM, for the $5\%$ and $10\%$ train-test splits on all datasets except the hard dataset. Considering the scarcity of labeled data, our proposed algorithm, Diffusion Kernel Principal Component Analysis (DKPCA), gives promising results.
As stated in \cite{Su04}, there are some disadvantages to using KPCA-based algorithms on datasets that contain targets with dissimilar contexts. In the same paper, this issue is resolved by converting the supervised method into a semi-supervised one that includes unlabeled data when creating the kernel. As future work, we want to apply this semi-supervised technique with DKPCA and compare the results.
Instead of employing DKPCA only for WSD problems, it can be applied to other areas of machine learning; that is, DKPCA can be quite versatile. For example, DKPCA can be used for feature extraction and dimension reduction based on higher-order correlations in the data, and this approach may lead to hybrid models combining DKPCA with other machine learning algorithms or neural networks. We believe it is a good starting point for merging DKPCA with deep learning methods. We expect DKPCA to boost the performance of other algorithms when there are higher-order similarities among the data \cite{Ganiz06}.
The state of a quantum system in thermodynamic equilibrium with a heat reservoir
at temperature $T$ is given by the (un-normalized) Gibbs canonical density matrix
\begin{align}\label{GibsRhoDeffEq}
\hat{\rho} = e^{-\beta\hat{H}}, \qquad \beta = 1/(kT),
\end{align}
where $\hat{H}$ is the quantum Hamiltonian. This state plays a fundamental role in statistical mechanics.
In particular, a system's equilibrium thermodynamical properties
can be directly calculated from the corresponding
partition function
$
Z = {\rm Tr}\, \left[ \hat{\rho} \right].
$
Furthermore, studies of nonequilibrium dynamics driven by an external perturbation require
the knowledge of the state in Eq. (\ref{GibsRhoDeffEq}), usually serving as both initial and final condition.
Recognizing the mathematical intractability for obtaining a generic quantum Gibbs state,
Wigner attempted to derive a quantum correction to the corresponding classical ensemble by introducing
the quasi probability distribution now bearing his name \cite{Wigner1932}.
This discovery subsequently led to the development of the phase space representation
of quantum mechanics \cite{Hillery1984121, zachos2005quantum, bolivar2004quantum, polkovnikov2010phase, Curtright2013, caldeira2014introduction}, where an observable $O=O(x,p)$ is a real-valued function of the
coordinate $x$ and momentum $p$ [systems with one spatial dimension are considered; the scalability is discussed prior to Eq. (\ref{WignerOBMPSEq})],
while the system's state is represented by the Wigner function
\begin{align}\label{WignerFuncDeffEq}
W_{xp} = \frac{1}{2\pi} \int
\langle x - \smallfrac{\hbar}{2}\theta | \hat{\rho} | x + \smallfrac{\hbar}{2}\theta \rangle
e^{i p \theta} d \theta,
\end{align}
where $\langle x | \hat{\rho} | x' \rangle$ denotes a density matrix in the coordinate representation.
The Wigner function is a standard tool for studying the quantum-to-classical interface \cite{zurek2001sub,bolivar2004quantum, dragoman2004quantum, zachos2005quantum,Haroche2006, PhysRevD.58.025002, zachos2001, bolivar2004quantum, polkovnikov2010phase}, chaotic systems \cite{Heller84}, emergent classical dynamics \cite{Bhattacharya00, PhysRevLett.88.040402, Bhattacharya03, RevModPhys.75.715, Everitt09, jacobs2014quantum}, and open systems evolution \cite{Hillery1984121, Kapral2006, petruccione2002theory, Bolivar2012, caldeira2014introduction}. Moreover, the Wigner distribution has a broad range of applications in optics and signal processing \cite{Cohen1989, Dragoman2005, Schleich2001}, and quantum computing \cite{Miquel2002a, Galvao2005, Cormick2006, Ferrie2009a, Veitch2012a, Mari2012, Veitch2013}. Techniques for the experimental measurement of the Wigner function are also developed \cite{Kurtsiefer1997, Haroche2006, Ourjoumtsev2007, Deleglise2008, Manko2014}.
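As a quick numerical illustration of definition (\ref{WignerFuncDeffEq}), the Wigner function of a pure state $\hat{\rho}=|\psi\rangle\langle\psi|$ whose wavefunction is known in closed form can be evaluated pointwise by direct quadrature over $\theta$ (a sketch with illustrative grid choices; units with $\hbar=m=\omega=1$):

```python
import numpy as np

def wigner_point(psi, x, p, hbar=1.0, cutoff=20.0, n=4001):
    """Evaluate the Wigner transform at one phase-space point by a
    Riemann sum over theta, for rho(x, x') = psi(x) psi*(x')."""
    theta = np.linspace(-cutoff, cutoff, n)
    dtheta = theta[1] - theta[0]
    integrand = (psi(x - 0.5 * hbar * theta)
                 * np.conj(psi(x + 0.5 * hbar * theta))
                 * np.exp(1j * p * theta))
    return float(np.real(np.sum(integrand)) * dtheta / (2.0 * np.pi))

# Harmonic oscillator ground state: W(x, p) = exp(-x**2 - p**2) / pi
psi0 = lambda x: np.pi ** -0.25 * np.exp(-x ** 2 / 2.0)
```

For the oscillator ground state this reproduces the Gaussian $W_{xp}=e^{-x^2-p^2}/\pi$; as discussed below, however, such direct integration is not a viable route to the Gibbs state itself.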
The knowledge of the Gibbs state Wigner function is essential for some models of nonequilibrium dynamics
and transport phenomena (see, e.g., reviews \cite{coffey2007, Clearly2010, cleary_2011}). Despite numerous
attempts, no \emph{ab initio} and universally valid method to obtain the Gibbs state in the phase space exists.
Based on recent analytical and algorithmic advances in the phase space representation
of quantum mechanics \cite{PhysRevA.88.052108, Cabrera2015}, we finally deliver a numerically efficient
and accurate method for calculating the Gibbs canonical state within the Wigner formalism. Additionally,
a robust method to calculate ground and excited state Wigner functions is also designed. Thomas-Fermi and Bose-Einstein distributions in the Wigner representation are also computed. Since all the
simulations presented below require only the computational power of an average laptop, these algorithmic advancements
enable quantum phase space simulations previously thought to be prohibitively expensive.
The rest of the paper is organized as follows: The numerical method to calculate the Wigner function for the Gibbs canonical state is presented in Sec. \ref{Sec:NumMethodExplained}. Extensions of the algorithm to obtain the Wigner functions for pure stationary states as well as Thomas-Fermi and Bose-Einstein distributions are developed in Secs. \ref{Sec:GetPureStatesW} and \ref{Sec:TFBE}, respectively. Python implementations of all the algorithms are supplied. Finally, the conclusions are drawn in the last section.
\section{Gibbs state as a Bloch equation solution}\label{Sec:NumMethodExplained}
Given the definition (\ref{WignerFuncDeffEq}), the problem of finding the Gibbs state Wigner function might appear
to be trivial: substitute Eq. (\ref{GibsRhoDeffEq}) into Eq. (\ref{WignerFuncDeffEq}) and perform the numerical
integration. However, this route is not only computationally demanding, but also yields poor results. In particular,
the obtained function will not be a stationary state with respect to the dynamics generated by the Moyal equation
of motion \cite{moyal1949quantum, zachos2005quantum, curtright2012quantum, Cabrera2015}
\begin{align}\label{MoyalEq}
i \hbar \, \partial_t W_{xp} = H_{xp} \star W_{xp} - W_{xp} \star H_{xp},
\end{align}
where $H_{xp}$ and $\hat{H}$ are connected via the Wigner transform (\ref{WignerFuncDeffEq}) and $\star$ denotes
the Moyal product \cite{PhysRevD.58.025002, zachos2001, Curtright2013},
\begin{align}
H_{xp} \star W_{xp} \equiv H_{xp} \exp \! \left(
\smallfrac{i\hbar}{2} \, \overleftarrow{\partial_x} \,
\overrightarrow{\partial_p} -
\smallfrac{i\hbar}{2} \,
\overleftarrow{\partial_p} \, \overrightarrow{\partial_x}
\right) W_{xp},
\end{align}
which is a result of mapping the noncommutative operator product in the Hilbert space into the phase space. Note that
we follow the conventions of Ref. \cite{Cabrera2015} throughout. The Moyal equation (\ref{MoyalEq}) is obtained
by Wigner transforming (\ref{WignerFuncDeffEq}) the von Neumann equation for the density matrix,
\begin{align}\label{vonNeumannEq}
i \hbar \partial_t \hat{\rho} = [ \hat{H}, \hat{\rho} ].
\end{align}
Such a simple approach fails because interpolation is required
in Eq. (\ref{WignerFuncDeffEq}) to obtain values of the density matrix at half steps, as indicated
by the $\hbar\theta/2$ shifts. Therefore, a different route, avoiding the density matrix altogether, must be taken.
Following Refs. \cite{Hillery1984121, Clearly2010}, we note that the unnormalized Gibbs
state (\ref{GibsRhoDeffEq}) obeys the Bloch equation \cite{Bloch1932}
\begin{align}
\partial_{\beta} \hat{\rho} = - (\hat{H}\hat{\rho} + \hat{\rho}\hat{H})/2,
\qquad \hat{\rho}(\beta=0) = \hat{1}.
\end{align}
The latter could be written in the phase space as
\begin{align}\label{BlochPhaseSpaceEq}
\partial_{\beta} W_{xp} = -(H_{xp} \star W_{xp} + W_{xp} \star H_{xp})/2.
\end{align}
The Bloch equation in the Wigner representation is mathematically similar to the Moyal
equation (\ref{MoyalEq}). Thus, a recently developed numerical
propagator \cite{Cabrera2015} (as well as other methods \cite{thomann2016stability, machnes2016quantum,koda2015initial, koda2016mixed}) can be readily adapted to obtain the Gibbs state.
Assume that the Hamiltonian is of the form
\begin{align}
H_{xp} = K(p) + V(x).
\end{align}
To construct the numerical method, we first lift Eq. (\ref{BlochPhaseSpaceEq}) into the Hilbert
phase space, as prescribed by Refs. \cite{PhysRevA.88.052108, Cabrera2015},
\begin{align}
\frac{d}{ d\beta} | \rho \rangle &=
-\frac{1}{2} \left[ H\left( \hat{x} - \smallfrac{\hbar}{2} \hat{\theta}, \hat{p} + \smallfrac{\hbar}{2} \hat{\lambda} \right) \right. \notag\\
& \left. \qquad + H\left( \hat{x} + \smallfrac{\hbar}{2} \hat{\theta}, \hat{p} - \smallfrac{\hbar}{2} \hat{\lambda}\right)
\right] | \rho \rangle \notag\\
&= -\frac{1}{2} \left[ \hat{K}^{+} + \hat{K}^{-} + \hat{V}^{+} + \hat{V}^{-} \right] | \rho \rangle, \label{HilbertPhaseSpaceSchEq}\\
\hat{V}^{\pm} &= V\left(\hat{x} \pm \smallfrac{\hbar}{2} \hat{\theta} \right), \qquad
\hat{K}^{\pm} = K\left( \hat{p} \pm \smallfrac{\hbar}{2} \hat{\lambda}\right), \\
W_{xp} &= \smallfrac{1}{\sqrt{2 \pi \hbar}} \langle x p | \rho \rangle,
\end{align}
where the four-operator algebra of self-adjoint operators $\hat{x},\hat{p}, \hat{\theta}, \hat{\lambda}$ satisfies the
following commutator relations \cite{bondar2012operational, PhysRevA.88.052108, Cabrera2015}:
\begin{align} \label{commutation-rels}
{[} \hat{x} , \hat{p} {]} = 0, \quad
{[} \hat{x} , \hat{\lambda} {]} = i, \quad
{[} \hat{p} , \hat{\theta} {]} = i, \quad
{[} \hat{\lambda} , \hat{\theta} {]} = 0,
\end{align}
and $| x p \rangle$ denotes the common eigenvector of operators $\hat{x}$ and $\hat{p}$,
\begin{align}
\hat{x} | x p \rangle = x | x p \rangle, \qquad
\hat{p} | x p \rangle = p | x p \rangle.
\end{align}
The power of the Hilbert phase space formalism lies in the fact that the Bloch
equation (\ref{BlochPhaseSpaceEq}) is transformed into Eq. (\ref{HilbertPhaseSpaceSchEq}) resembling
an imaginary-time Schr\"odinger equation in two spatial dimensions, which could be efficiently
solved via the spectral split operator method \cite{feit1982solution}. The formal
solution of Eq. (\ref{HilbertPhaseSpaceSchEq}) reads
\begin{align}
| \rho(\beta) \rangle
= e^{-\frac{\beta}{2} \left( \hat{K}^{+} + \hat{K}^{-} + \hat{V}^{+} + \hat{V}^{-} \right)} | \rho(\beta=0) \rangle.
\end{align}
Using the Trotter product~\cite{Trotter1959}, the iterative first-order scheme is obtained
\begin{align}
| \rho(\beta + d\beta ) \rangle =&
e^{-\frac{d\beta}{2}\left(\hat{K}^{+} + \hat{K}^{-} \right)} \notag\\
& \times e^{-\frac{d\beta}{2}\left( \hat{V}^{+} + \hat{V}^{-} \right)} | \rho(\beta) \rangle
+ O\left( d\beta^2 \right).
\end{align}
Returning to the Wigner phase space representation, we finally arrive at the desired numerical scheme
\begin{align}\label{BlochWignerNumerMethEq}
W_{xp}(\beta + d\beta) &= \mathcal{F}^{\lambda \to x}
e^{-\frac{d\beta}{2}\left(K^{+} + K^{-} \right)}
\mathcal{F}^{x \to \lambda} \notag\\
&\times \mathcal{F}_{\theta \to p}
e^{-\frac{d\beta}{2}\left( V^{+} + V^{-} \right)} \mathcal{F}_{p \to \theta} W_{xp}(\beta) ,
\end{align}
where $\mathcal{F}_{p \to \theta}$ and $\mathcal{F}^{x \to \lambda}$ are direct Fourier transforms with respect to the variables $p$ and $x$, respectively,
\begin{align}
\mathcal{F}_{p \to \theta}[W_{xp}] = \int W_{xp} e^{-ip\theta} dp, \\
\mathcal{F}^{x \to \lambda}[W_{xp}] = \int W_{xp} e^{-ix\lambda} dx,
\end{align}
$\mathcal{F}_{\theta \to p}$ and $\mathcal{F}^{\lambda \to x}$ are the corresponding inverse transformations, and $V^\pm = V\left( x \pm \smallfrac{\hbar}{2} \theta \right)$, $K^{\pm} = K\left( p \pm \smallfrac{\hbar}{2} \lambda\right)$ have now become scalar functions.
Utilizing the fast Fourier transforms \cite{frigo2005design}, the complexity
of the algorithm (\ref{BlochWignerNumerMethEq}) is $O(N\log N)$, where $N$ is the total length of an array storing the Wigner function. Moreover, the Wigner function at every iteration corresponds to a
Gibbs state at a certain temperature, and Eq. (\ref{BlochWignerNumerMethEq}) physically models cooling.
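As an illustration, scheme (\ref{BlochWignerNumerMethEq}) can be sketched in a few lines of NumPy. The code below is a minimal, self-contained sketch and not the production implementation of Refs. \cite{CodeFig1, CodeFig2}; the grid extent, the step $d\beta$, and the number of iterations are illustrative choices. Note that $V^{+}+V^{-}$ is even in $\theta$ (and $K^{+}+K^{-}$ in $\lambda$), so the sign convention of the discrete Fourier transform is immaterial.

```python
import numpy as np

def gibbs_wigner(V, K, L=8.0, n=256, dbeta=0.01, nsteps=2000, hbar=1.0):
    """First-order split-operator propagation of the phase-space Bloch equation."""
    x = np.linspace(-L, L, n, endpoint=False)    # coordinate grid (axis 0)
    p = np.linspace(-L, L, n, endpoint=False)    # momentum grid (axis 1)
    dx, dp = x[1] - x[0], p[1] - p[0]
    lam = 2 * np.pi * np.fft.fftfreq(n, d=dx)    # lambda, conjugate to x
    theta = 2 * np.pi * np.fft.fftfreq(n, d=dp)  # theta, conjugate to p
    X, P = x[:, None], p[None, :]
    La, Th = lam[:, None], theta[None, :]
    # Half-step multipliers built from the scalar functions V and K
    expV = np.exp(-0.5 * dbeta * (V(X + 0.5 * hbar * Th) + V(X - 0.5 * hbar * Th)))
    expK = np.exp(-0.5 * dbeta * (K(P + 0.5 * hbar * La) + K(P - 0.5 * hbar * La)))
    W = np.ones((n, n))                          # infinite-temperature initial state
    for _ in range(nsteps):
        W = np.fft.ifft(expV * np.fft.fft(W, axis=1), axis=1)  # V step in (x, theta)
        W = np.fft.ifft(expK * np.fft.fft(W, axis=0), axis=0)  # K step in (lambda, p)
        W = W.real
        W /= W.sum() * dx * dp                   # keep unit normalization
    return x, p, W
```

Since every iteration advances the inverse temperature by $d\beta$, the intermediate states are Gibbs distributions at successively lower temperatures, as noted above.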
In the current work, we consider one-body systems. The \emph{ab initio} algorithm (\ref{BlochWignerNumerMethEq}) can be straightforwardly extended to the $D$-body case albeit at the price of the exponential scaling $O\left(DN^D \log N\right)$. In the subsequent work, we will present a polynomial algorithm by adapting the matrix product state formalism \cite{wall_out_equilibrium_2012, orus_practical_2014} to phase space dynamics. For example, deployment of the following matrix product state ansatz for a $D$-body Wigner function, $W^{(D)} = W\left(x_1, p_1; x_2, p_2; \ldots ; x_D, p_D\right)$,
\begin{align}\label{WignerOBMPSEq}
W^{(D)} = \prod_{k=1}^{D-1} W_k (x_k, p_k; x_{k+1}, p_{k+1})
\end{align}
should lead to the desired polynomial scaling.
In Fig. \ref{fig:MexicanHatGibbsState}, we employ Eq. (\ref{BlochWignerNumerMethEq}) to compute the Gibbs state Wigner
function for a Mexican hat potential. Atomic units (a.u.), where $\hbar=m=1$, are used throughout. To verify the consistency
of the obtained solution, we subsequently propagate it by the Moyal equation (\ref{MoyalEq}) using the method
in Ref. \cite{Cabrera2015}. Comparing the initial [Fig. \ref{fig:MexicanHatGibbsState}(a)]
and final [Fig. \ref{fig:MexicanHatGibbsState}(b)] states, one observes that the Gibbs state remains stationary
up to $O\left( 10^{-14} \right)$.
\begin{figure}
\includegraphics[width=1.\hsize]{MexicanHatGibbsStatesFig.pdf}
\caption{(Color online) Log plot of the Gibbs canonical state Wigner function ($\beta = 1$ a.u.) for the Mexican hat system with the Hamiltonian $H_{xp} = p^2/2 -0.05x^2 + 0.03x^4$ (a.u.). Since the Gibbs state is characterized by a positive Wigner function, we use the logarithmic scale to show that the Gibbs distribution obtained by Eq. (\ref{BlochWignerNumerMethEq}) [Fig. (a)] remains invariant under the time evolution of the Moyal equation (\ref{MoyalEq}) up to values of $10^{-14}$. See Ref. \cite{CodeFig1} regarding the Python code used to generate this figure.}\label{fig:MexicanHatGibbsState}
\end{figure}
\begin{figure}
\includegraphics[width=1.\hsize]{MexicanHatGroundExitedFig.pdf}
\caption{(Color online) Wigner functions of the ground (a) and first excited states (b) for the Mexican hat system with the Hamiltonian $H_{xp} = p^2/2 -0.05x^2 + 0.03x^4$ (a.u.). In Figs. (c) and (d), the red solid lines depict the marginal coordinate distributions of the ground and first excited states, respectively; whereas the dashed blue lines depict the coordinate distributions obtained after the propagation via the Moyal equation (\ref{MoyalEq}). Note that both lines overlap, indicating that the pure states in Figs. (a) and (b) are calculated with high accuracy. See Ref. \cite{CodeFig2} regarding a Python code used to generate this figure.}\label{fig:MexicanHatGroundExitedFig}
\end{figure}
\section{Wigner functions of pure stationary states}\label{Sec:GetPureStatesW}
The numerical scheme (\ref{BlochWignerNumerMethEq}) recovers the ground state as $\beta\to\infty$. To speed up the convergence
to the zero-temperature ground state, the following adaptive step algorithm can be employed.
Initially pick a large value of the inverse temperature step $d\beta \sim 1$ (a.u.). Using a constant Wigner
function [i.e., $W_{xp}(\beta=0)=1$] as an initial guess, obtain $W_{xp}(\beta + d\beta)$ via Eq. (\ref{BlochWignerNumerMethEq}).
Accept the updated Wigner function, if it lowers the energy [i.e.,
$
\int W_{xp}(\beta) H_{xp} dxdp > \int W_{xp}(\beta + d\beta) H_{xp} dxdp
$]
and $W_{xp}(\beta + d\beta)$ represents a physically valid state. If either condition is violated, reject the state $W_{xp}(\beta + d\beta)$,
halve the increment $d\beta$, and repeat the procedure.
Note that there is no computationally feasible criterion to verify that a Wigner function underlies
a positive density matrix (see, e.g., Ref. \cite{Ganguli1998}). Thus, we suggest employing the following
heuristic: verify that the purity,
$
\mathcal{P} = 2\pi\hbar \int W_{xp}^2 dxdp,
$
does not exceed unity (note that $\mathcal{P} =1$ holds for pure states only) and that the Heisenberg uncertainty principle is obeyed.
See Ref. \cite{CodeFig2} regarding a pythonic implementation of the full algorithm.
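A minimal NumPy version of these heuristic checks, for a Wigner function sampled on a uniform grid, could read as follows; the tolerance is an arbitrary illustrative choice:

```python
import numpy as np

def looks_physical(W, x, p, hbar=1.0, tol=1e-6):
    """Heuristic validity check: purity <= 1 and Heisenberg uncertainty relation."""
    dx, dp = x[1] - x[0], p[1] - p[0]
    W = W / (np.sum(W) * dx * dp)                 # normalize to unit trace
    X, P = x[:, None], p[None, :]
    purity = 2 * np.pi * hbar * np.sum(W**2) * dx * dp
    mean_x = np.sum(W * X) * dx * dp
    mean_p = np.sum(W * P) * dx * dp
    var_x = np.sum(W * (X - mean_x)**2) * dx * dp
    var_p = np.sum(W * (P - mean_p)**2) * dx * dp
    uncertainty_ok = np.sqrt(var_x * var_p) >= hbar / 2 - tol
    return (purity <= 1.0 + tol) and uncertainty_ok, purity
```

For a harmonic-oscillator ground state, the purity evaluates to unity and the uncertainty product saturates $\hbar/2$, so the check passes only marginally, as it should.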
Once the ground state is found, any excited state can be constructed in a similar fashion. For example,
an amended algorithm can be used to calculate the first excited state. After Eq. (\ref{BlochWignerNumerMethEq}), the
ground state Wigner function $W_{xp}^{(g)} \coloneqq W_{xp}(\beta=\infty)$ should be projected
out of $W_{xp}(\beta + d\beta)$. More specifically, the state $W_{xp}(\beta + d\beta)$ must be updated as
\begin{align}
W_{xp}(\beta + d\beta) &\coloneqq \frac{W_{xp}^{(1)}}{ \int W_{xp}^{(1)} dxdp},\\
W_{xp}^{(1)} &= W_{xp}(\beta + d\beta)
- c W_{xp}^{(g)} , \\
c &= 2\pi\hbar \int W_{xp}(\beta + d\beta) W_{xp}^{(g)} dx dp.
\end{align}
The physical meaning of $c$ is the fraction of the total population occupying the ground state. This algorithm, implemented in Ref. \cite{CodeFig2}, is more efficient and easier to maintain than the one in Refs. \cite{hug_1998a, hug_1998b}.
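The projection step above amounts to a Gram-Schmidt-like deflation, which can be sketched as follows for Wigner functions sampled on a common uniform grid:

```python
import numpy as np

def project_out_ground(W, Wg, dx, dp, hbar=1.0):
    """Deflate the ground-state component Wg out of W and renormalize to unit trace."""
    c = 2 * np.pi * hbar * np.sum(W * Wg) * dx * dp   # ground-state population fraction
    W1 = W - c * Wg
    return W1 / (np.sum(W1) * dx * dp)
```

Applied to a mixture of the analytic harmonic-oscillator ground and first excited state Wigner functions, the routine recovers the excited state exactly, since the two states are orthogonal under the phase-space inner product.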
Figures \ref{fig:MexicanHatGroundExitedFig}(a) and \ref{fig:MexicanHatGroundExitedFig}(b) show the Wigner functions
of the ground and first excited states, respectively, for the Mexican hat potential. Using merely $512 \times 512$ grids to store
Wigner functions, the purities of the computed states are found to be $1 - O\left( 10^{-14} \right)$ and $1- O\left( 10^{-7} \right)$,
respectively. This demonstrates numerical effectiveness of the developed algorithm. Furthermore, to verify that
the states are stationary, we propagated them via the Moyal equation (\ref{MoyalEq}) \cite{Cabrera2015}. The
comparison of the coordinate marginal distributions, defined as $\int W_{xp}dp$, before and after
Moyal propagation [Figs. \ref{fig:MexicanHatGroundExitedFig}(c) and \ref{fig:MexicanHatGroundExitedFig}(d)] confirm
that the ground [Fig. \ref{fig:MexicanHatGroundExitedFig}(a)] and excited [Fig. \ref{fig:MexicanHatGroundExitedFig}(b)]
states are stationary within the accuracy of $O\left( 10^{-14} \right)$.
It is noteworthy that the ground state Wigner function [Fig. \ref{fig:MexicanHatGroundExitedFig}(a)] exhibits small negative values.
This is in compliance with Hudson's theorem \cite{Hudson1974} stating that a pure state Wigner function is positive
if and only if the underlying wave function is a Gaussian (whereas, the ground state of the Mexican hat potential
is evidently non-Gaussian). The excited state Wigner function [Fig. \ref{fig:MexicanHatGroundExitedFig}(b)] contains
a central oval region with pronounced negative values encircled by a zero-valued oval followed by a positive-value region.
This is a hallmark structure of the Wigner distributions for first excited states.
The zero-valued oval and negative center emerge from the node at $x=0$ in the first excited state
wave function, which can also be seen in Fig. \ref{fig:MexicanHatGroundExitedFig}(d) visualizing the absolute
value square of the wave function.
Note that Wigner function's negativity is associated with the exponential speedup in quantum computation \cite{Ferrie2009a, Veitch2012a, Mari2012, Veitch2013}.
\section{Wigner functions for Thomas-Fermi and Bose-Einstein distributions}\label{Sec:TFBE}
The proposed method can be used to compute other steady states not directly describable by the Bloch equation. In particular,
to calculate the Thomas-Fermi ($s=+1$) or Bose-Einstein ($s=-1$) states, the following expansion can be utilized
\begin{align}\label{TFBEWignerEq}
& \frac{1}{ e^{\beta(\hat{H} - \mu)} + s }
= \frac{e^{-\beta(\hat{H} - \mu)}}{ 1 + s e^{-\beta(\hat{H} - \mu)} } \notag\\
& = \sum_{k=0}^{\infty} e^{(2k+1)\beta \mu} e^{-(2k+1)\beta \hat{H}}
-s \sum_{k=1}^{\infty} e^{(2k)\beta\mu} e^{-(2k)\beta\hat{H}},
\end{align}
where $\mu$ denotes the chemical potential. Equation (\ref{TFBEWignerEq}) consists of the (unnormalized) Gibbs states at different temperatures $\beta, 2\beta, 3\beta, \ldots$. Thus the Wigner function for the Thomas-Fermi and Bose-Einstein states could be easily found via the numerical method (\ref{BlochWignerNumerMethEq}) by adding or subtracting the corresponding Gibbs distributions, which are sequentially obtained during $\beta \to \infty$ propagation. In Fig. \ref{fig:MexicanHatBE_TF}, the Gibbs state [Fig. \ref{fig:MexicanHatBE_TF}(a)] has been compared with the Bose-Einstein [Fig. \ref{fig:MexicanHatBE_TF}(b)] and Thomas-Fermi [Fig. \ref{fig:MexicanHatBE_TF}(c)] distributions for $\beta = 1.5$ (a.u.) and vanishing chemical potential.
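Expansion (\ref{TFBEWignerEq}) is straightforward to verify numerically for a scalar energy $h > \mu$; the parameter values used below are illustrative:

```python
import math

def fermi_bose_series(h, mu, beta, s, nterms=200):
    """Partial sums of expansion (TFBEWignerEq) for a scalar energy h > mu."""
    # odd multiples of beta enter with a plus sign, even multiples with -s
    odd = sum(math.exp(-(2 * k + 1) * beta * (h - mu)) for k in range(nterms))
    even = sum(math.exp(-2 * k * beta * (h - mu)) for k in range(1, nterms))
    return odd - s * even

def fermi_bose_exact(h, mu, beta, s):
    """Closed form of the left-hand side of Eq. (TFBEWignerEq)."""
    return 1.0 / (math.exp(beta * (h - mu)) + s)
```

Since the series is geometric in $e^{-\beta(h-\mu)}$, a couple of hundred terms already reproduce the closed form to machine precision for the temperatures considered here.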
\begin{figure}
\includegraphics[width=1\hsize]{MexicanHatBE_TF.pdf}
\caption{(Color online) Wigner functions of the (a) Gibbs, (b) Bose-Einstein, and (c) Thomas-Fermi states for the Mexican hat system with the Hamiltonian $H_{xp} = p^2/2 -0.05x^2 + 0.03x^4$ (a.u.), $\beta = 1.5$ (a.u.), and $\mu = 0$. Distributions (b) and (c) were obtained using expansion (\ref{TFBEWignerEq}). See Ref. \cite{CodeFig3} regarding a python code used to generate this figure.
}\label{fig:MexicanHatBE_TF}
\end{figure}
\section{Outlook}
The Gibbs canonical state, a maximum entropy stationary solution of the von Neumann equation (\ref{vonNeumannEq}),
is a cornerstone of quantum statistical mechanics. The Wigner phase space representation of quantum dynamics
is currently undergoing renewed interest due to its promise for solving open problems in nonequilibrium thermodynamics. To simulate open system dynamics, a good-quality initial condition, usually the Gibbs
state Wigner function, needs to be supplied. We have developed a numerical algorithm yielding Gibbs states with
nearly machine precision. Moreover, an extension of this algorithm allows computing Wigner functions of pure
stationary states, corresponding to the eigensolutions of the Schr\"{o}dinger equation. Wigner functions for Thomas-Fermi and Bose-Einstein distributions are also calculated. Such states are essential
for studying nonequilibrium dynamics in atomic and molecular systems. As a result, the developed algorithmic techniques
finally make the Wigner quasiprobability phase space representation of quantum dynamics
a computationally advantageous formulation compared to the density matrix approach.
\emph{Acknowledgments}. The authors acknowledge financial support
from (H.A.R.) NSF CHE 1058644, (R.C.) DOE DE-FG02-02-ER-15344 and (D.I.B.) ARO-MURI W911-NF-11-1-2068. A.G.C. was supported by the Fulbright foundation. D.I.B. was also supported by 2016 AFOSR Young Investigator Research Program.
\section*{Introduction}
\begin{figure*}
\includegraphics[width=1\linewidth]{skin_setup.PNG}
\caption{Schematic (left) and picture (right) of the experimental set-up. Porcine skin is teared in front of an optical and an infrared camera, while both the applied force and displacement are measured. In the real set-up (e.g., by contrast to what is suggested by the schematic) the optical camera is placed above the infrared one.}
\label{fig:setup}
\end{figure*}
The toughness of matter is often characterised by its energy release rate\,\cite{Griffith1921}, that is, by the amount of mechanical energy dissipated by the rupture of a surface unit of a given solid matrix. Among other dissipation mechanisms (e.g.,\,\cite{rice_surface,crackwaves}), running cracks tend to emit some heat, a phenomenon which has long been reported and studied by the fracture physics community (e.g.,\,\cite{Irwin1957, RiceLevy, Fuller1975, Bouchaud2012}). In various materials, such as PMMA (polymethyl methacrylate)\,\cite{TVD2}, acrylic adhesives\,\cite{TVD2} or paper\,\cite{ToussaintSoft}, such heat actually accounts for a significant portion (i.e., from $10$ to almost $100$\%) of the total energy release rate, that is, for a significant portion of these materials' strength. More than a simple energetic loss, heat dissipation has been suspected to lead to potentially paramount secondary effects, which are still actively debated. These effects notably include the brittleness of matter\,\cite{Marshall_1974, carbonePersson, ThermalRunaway, ToussaintSoft, TVD1} or the instability of some seismic faults (e.g.,\,\cite{HeatWeak, pressur2005, SulemCarbo}) through various thermal weakening phenomena.\\
Recently, we have also suggested a theory\,\cite{TVDpain1} that the damage-induced heat in biological tissues could be responsible for a degree of mechanical pain in the human body. Indeed, should the heat dissipation be large enough, it may be detected by the thermo-sensitive nociceptors of sensory neurons, the so-called TRPs proteins, and thus may trigger action potentials (i.e., electrochemical signals) in the nervous system. TRPs (standing for Transient Receptor Potential cation channels - e.g.,\,\cite{PainTRP}) are proteins expressed in many cells of the cutaneous tissue\,\cite{skinTRP}, and in particular at the surface of sensory neurons. Their ability to gate ions through cells' membranes\,\cite{voltage_gating} is temperature dependent, and different TRP types react to different temperature ranges. In particular, the role of TRPV3 and TRPV1 was considered\,\cite{TVDpain1} in the detection of fracture induced heat. The former (TRPV3) is sensitive to temperatures in the normal biological range (i.e., $30$ to $40^\circ$C), making it likely responsible for the feeling of warmth, and the latter (TRPV1) starts to activate at more painful temperatures above about $43^\circ$C (e.g.,\,\,\cite{PainTRP,skinTRP}). Completing this range, another channel expressed in the skin, TRPV2, activates at noxious temperatures above $52^\circ$C. The role of TRPs is thus commonly admitted in thermal sensing, and some also react to particular chemicals, for instance contained in `hot' pepper or `cool' mint (e.g.,\,\cite{PainTRP}). The role of TRPs in the feeling of mechanical pain has also been previously suspected, notably as thermal and mechanical pain was shown to be coupled in human subjects\,\cite{coupled_pain}, with a threshold to feel mechanical pain that decreases at a higher ambient temperature. 
Incidentally, cooling is sometimes used for the anesthesia of cutaneous and non-cutaneous tissues prior to medical mechanical injections (e.g.,\,\cite{cool_anesthesia,cool_anesthesia2}). In rodents, the drug-induced inhibition of TRPV1 and TRPV3 has also proven to reduce mechanical hyperalgesia\,\cite{TRPV1_mechano,TRPV1_mechano2,TRPV3_mechano} (i.e., the decreased threshold to feel mechanical pain after a first stimulus).\\
In this work, we experimentally show that the rupture of skin can generate heat anomalies that are in the sensing ranges of TRPV1, TRPV2 and TRPV3, on time and space scales that are similar to those of these nociceptors' sensitivity\,\cite{neurite_density, TRP_response2, TRP_response}. We thus confirm the relevance of this novel thermal pathway for mechanical algesia.
We tear pork skin samples, assumed to be a reasonably good model for human skin (e.g.,\,\cite{PigHuman, PigHuman2, skin_mod_examp}), in front of an infrared camera and report temperature elevations of a few degrees to tens of degrees over hundreds of milliseconds, depending on the skin samples and the damage rate. With a normal skin temperature of about $35^\circ$C\,\cite{skin_surf_temp,skin_temp1}, such thermal anomalies should indeed open the TRP channels, and we here discuss both direct algesia and hyperalgesia scenarios. We characterise the relationship between damage velocity and local temperature elevation, suggesting that a minimal fracture velocity of about $1$\,cm\,s\textsuperscript{-1} may be needed for strong thermo-mechanical pain to actually be at play. We also provide the energy release rate of our samples, $\sim 135\,$kJ\,m\textsuperscript{-2} on average, and, with two different methods, we give a coarse estimation of the portion of it that is transformed into heat when fractures progress in skin ($\sim3$\% to $50$\%). We thus show that heat dissipation is responsible for a non-negligible part of the cutaneous strength, actually making temperature monitoring an efficient way to detect mechanical damage in biological tissues.
\section{Methods}
\subsection{Experimental set-up}
Let us start by describing the experimental set-up. Most of it is a standard mechanical test bench, a schematic and a picture of which are shown in Fig.\,\ref{fig:setup}. Porcine skin samples are placed in this horizontal test bench, where they are held at their extremities by two self-tightening wedge grips. This type of grip holds increasingly strongly as the sample it clamps is brought into tension. Mechanical tensile tests are performed on the skin up to its full rupture. To that end, one of the wedge grips can be displaced along a uniaxial slider with the help of a manual lever. The other grip is fixed. It is attached to a force sensor (Mark-10\textsuperscript{\textregistered} MR01-300) that allows force measurements with a $1$\,N accuracy. The displacement of the moving grip is monitored with a digital scale (Mitutoyo\textsuperscript{\textregistered} 572-311-10) having a $30\,\upmu$m precision. An optical camera also records the set-up and the deforming skin sample. A mono-channel infrared camera (Flir\textsuperscript{\textregistered} SC3000), measuring the radiation intensity in the $8$ to $9\,\upmu$m bandwidth and equipped with a macroscopic lens, is placed in front of the skin. This infrared camera, sensitive to temperature changes of fractions of a degree, monitors the sample as it is loaded up to rupture. It provides the main measurement of these experiments. Finally, behind the skin sample, a cold plastic plate, just out of a freezer, acts as a cold background for the infrared images. This ensures that any measured elevation of temperature (i.e., the signal of interest) does arise from the sample itself, and later simplifies the image analysis. This cold plate is not in direct contact with the skin and is placed $20$\,cm away from it, so it does not affect the samples' temperature.
\subsection{Skin samples and model limitations\label{sec:sample}}
Porcine skin was here chosen as a model for human skin, as they both display similar structural, thermal and mechanical characteristics\,\cite{PigHuman, PigHuman2, skin_speheat2, skin_thermal}. This comparison holds particularly well relative to the cutaneous tissues of other mammals. The skin that we tested was acquired at a local butcher shop and originates from the flank or the upper leg of various pig specimens. This is a standard product sold by this store and, thus, no pig was harmed specifically for the needs of our experiment. The studied skin included the epidermis (that is, the surface layer of the skin), the dermis and a thin layer of subcutaneous fat (hypodermis). The total skin thickness varied between $1.6$ and $2.7$\,mm, and was measured on each sample with a caliper.
\\The rupture of seven skin specimens (denoted I to VII in this manuscript) was studied, in order to grasp some of the diversity in behaviours of such a biological material.
With a scalpel, each skin sample was carved out to be rectangular with width $2.2$\,cm and length $10$ to $15$\,cm. The length of the samples simply allowed enough skin to be grabbed by the test bench's wedge grips (grabbing $3.5$\,cm on each side), to ensure the samples' stability during the experiments. The tearing force was applied parallel to this direction. The width of the samples matched both approximately the grips' width and the frame's size of the infrared camera. Figure\,\ref{fig:sample} shows an optical picture of a skin sample installed in the set-up. To ensure that the skin rupture does occur in the frame of the infrared camera, a notch of length $3$ to $6\,$mm was initially cut perpendicularly to the sample's length, to control the initiation of the fracture.\\
We do not present in vivo experiments, for obvious ethical and practical reasons, and we should report that the tested skin went through two initial processes before being acquired that could have slightly altered its mechanical properties. The first of these processes is dehairing (e.g.,\,\cite{meat_processing}), which sparsely left some cuts on the epidermis (i.e., the shallowest part of our samples).
This wear, if significant, should only have weakened the skin, as a perfectly intact skin would be overall tougher and, thus, would likely dissipate more heat upon rupture. In this regard, the results we here present are conservative. This being stated, we chose skin samples where the dehairing damage was the least pronounced.
The second potentially altering process is freezing, which can cause micro-damage in soft biological tissues (e.g.,\,\cite{freezing}). The skin samples were indeed bought frozen and were unfrozen in an ambient-temperature water bath for half an hour before running the rupture tests. After this bath, the sample surfaces were only gently dried with a paper towel. In the case of porcine skin, it was shown\,\cite{PigHuman2} that the fracture energy of the epidermis significantly increases with the freezing/unfreezing process, although by less than an order of magnitude. Conveniently, in the same study (focusing on micro-needle insertions in the epidermis rather than the full tearing of skin that we here consider) fresh human skin was actually shown to be as tough as unfrozen porcine skin\,\cite{PigHuman2} (i.e., fresh human skin tends to be slightly stronger than fresh pork skin).\\
Overall, it should be stated that there is no perfect model for fresh human skin, and we here assume that the various physical parameters, measured on our butcher acquired samples, are representative of our own cutaneous tissue.
\begin{figure}
\includegraphics[width=1\linewidth]{skin_sample.PNG}
\caption{A porcine skin sample installed in the set-up, with the epidermis facing the camera. A little stretch, $\sim5\%$, is here applied by the force $F$, so that the nominal sample length between the grips is $l_0=28.5\,$mm and the stretch is $\Delta l=1.5\,$mm. The initial unbroken width is denoted $d_0$. The dashed lines illustrate some barely visible (shallow) cuts on the epidermis, from the dehairing process. The picture's point of view corresponds approximately to that of the infrared camera during the experiments.}
\label{fig:sample}
\end{figure}
\subsection{Experimental versus biological resolutions}
The displacement's digital measurement was outputted with a $5$\,Hz rate. The optical camera, recording with a $50$\,fps (frames per second) rate, hence allowed a faster but less accurate monitoring of the displacement. Depending on the experiment, the force measurement was outputted with a $10$ to $50$\,Hz rate. The infrared camera - i.e., the core measurement of our experiments - recorded with a $50$\,fps frame rate. It had a resolution of $240\,\times\,320$\,pixels, with a pixel size of $80$ to $90\,\upmu$m depending on the exact camera position and focus of each experimental realisation. This actual pixel size, for given realisations, was calibrated by capturing a hot metallic rod of known diameter, in the same plane as the skin sample on which the camera was priorly focused.\\
To understand how significant any heat anomaly would be in terms of algesia (pain), it is important to compare the time and space resolutions of our (infrared) thermal measurements to the time and space sensitivity of the human nervous system. Based on microscopic observations\,\cite{neurite_density} of the density $D_n$ of neurites in the human epidermis (i.e., the body extensions of neurons), $D_n\sim2000$\,mm\textsuperscript{-2}, one can broadly estimate\,\cite{TVDpain1} the nominal distance between two of these neurites to be about $1/\sqrt{D_n}\sim20\,\upmu$m. At the surface of these neurites, the typical response time of the TRP nociceptors to temperature jumps has also been measured\,\cite{TRP_response2,TRP_response}, with patch clamp experiments, to range from a few milliseconds to a few tens of milliseconds. Thus, our experimental resolution in space ($\sim85\,\upmu$m) and time [$1/(50\,$Hz$)=20\,$ms] should be rather close to, yet slightly coarser than, that of neural sensing in the human skin. Any significant temperature change (i.e., as per the TRPs' sensitivity) recorded by our set-up should then be able to initiate action potentials in a live biological system. Oppositely, our infrared camera could miss some of the most accurate details available to the actual neural system on the smallest spatial and temporal scales.
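The order-of-magnitude comparison above can be laid out explicitly; the pixel size below is taken in the middle of the quoted $80$ to $90\,\upmu$m range, and the TRP response window summarises the patch clamp figures:

```python
import math

D_n = 2000.0                               # neurite density in the epidermis, mm^-2
neurite_spacing_um = 1e3 / math.sqrt(D_n)  # nominal neurite spacing, ~22 um
pixel_um = 85.0                            # infrared camera pixel size, um
frame_ms = 1e3 / 50.0                      # 50 fps -> 20 ms per frame
trp_response_ms = (5.0, 50.0)              # few ms to few tens of ms (patch clamp)

# The camera resolution is close to, but slightly coarser than, neural sensing
camera_is_coarser = (pixel_um > neurite_spacing_um) and (frame_ms > trp_response_ms[0])
```
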
\section{Results}
\begin{figure*}
\includegraphics[width=1\linewidth]{skin_result.PNG}
\caption{Infrared temperature maps and corresponding optical frames during the propagation of a tensile crack in porcine skin (skin specimen V). Seven different times are displayed with a constant increment in between. The maximum temperature elevation is $\Delta T=8^\circ$C in the first frame ($t_0$) and up to $\Delta T=17^\circ$C in the last one ($t_6$). Contrary to the infrared camera, the optical camera monitored the whole set-up and was not equipped with a macroscopic lens, hence the noticeable difference in image resolution. It also recorded the skin with a slightly different (non-orthogonal) line of sight, as per Fig.\,\ref{fig:setup}.}
\label{fig:result}
\end{figure*}
\subsection{Temperature profiles}
Infrared videos, for all experiments, are available to the reader as supplementary material. Figure\,\ref{fig:result} shows the map of measured temperature $T$ at seven successive times for skin specimen V. As expected, some heat is emitted by the fracture as it progresses through the cutaneous tissue.\\
In order to convert the infrared signal to temperatures, skin was assumed to be a black body, that is, to follow Planck's law (e.g.,\,\cite{blackbody}) and have an emissivity close to $1$. In practice, the emissivity of porcine skin may range\,\cite{pig_emissivity} between $0.94$ and $1$. Varying this parameter, it was found that our reported elevations of temperature $\Delta T$ may hold an error of less than $10$\%, and this uncertainty will be irrelevant to our final conclusions.\\
In Fig.\,\ref{fig:result}, one can for instance observe temperature elevations up to $15^\circ$C ($\pm 1^\circ$C). This magnitude is significant with regard to general mammal biology and, more specifically, with regard to the sensitivity of given neuronal thermal sensors. Indeed, assuming a normal inner temperature of about $35^\circ$C\,\cite{skin_surf_temp,skin_temp1} (i.e., for live subjects), elevations of $\sim8^\circ$C and more should be enough to trigger TRPV1, and elevations of $\sim17^\circ$C should trigger TRPV2\,\cite{PainTRP,skinTRP}. Additionally, fast thermal elevations of a few degrees Celsius could also excite TRPV3. It was shown\,\cite{TRVP3_grad2} that this sensor produces a higher bio-current intensity for faster rises in temperature, and, for temperature elevations of a few degrees, we have measured heating rates of up to $200^\circ$C s\textsuperscript{-1}. The thermal anomalies typically spread over a few millimetres across and along the crack trajectory, so that many neural receptors ($\sim10^{4}$) could likely sense them. As priorly discussed, the typical spacing between two neurites should indeed be in the tens-of-micrometres range\,\cite{neurite_density}. Appendix\,\ref{app:all} presents some of the temperature profiles observed in the rupture of the six other skin specimens. Similar orders of magnitude are observed, although a variety of patterns appears in the temperature maps. The maximal temperature elevation $\Delta T_\text{max}$, which was recorded during each test, is indicated in table\,\ref{table:summary}. In all occurrences, it exceeds the TRPV1 threshold and sometimes exceeds that of TRPV2.
\subsection{Mean stress at rupture and elastic modulus}
As we performed relatively standard tensile tests (the most exotic feature being to monitor them with an infrared camera), we here also provide some of the mechanical constants of our skin samples. From Fig.\,\ref{fig:force_disp}, showing for specimen V the measured force $F$ versus displacement $\Delta l$ plot, one can, in particular, estimate a mean stress $\overline{\sigma}_\text{f}$ at rupture by computing $\overline{\sigma}_\text{f}=F/(hd_0)$ at the onset of the fracture. Here, $h$ is the thickness of the sample and $d_0$ is its initial unbroken width. Note that this stress value does not account for any stress concentration, so that the actual stress shall reach, locally, higher values, around the initial crack tip or at the scale of the skin's collagen fibers (i.e., the main dry constituent of the dermis). The strength of our samples ranged from about $9$ to $19$\,MPa, which is rather logically comparable to the strength of individual collagen fibers (e.g.,\,\cite{single_fiber}). From the force versus displacement plots, an elastic (Young) modulus $E$ of the skin samples can also be estimated, from the approximately constant ratio $E = F l_0/(hd_0\Delta l)$ that holds as the sample is loaded elastically, and where $l_0$ is the initial sample length between the two grips. We derived $E$ in the range of $20$ to $110$\,MPa. For each skin specimen, appendix\,\ref{app:all} shows the force and displacement plots and table\,\ref{table:summary} summarises the samples initial geometry (e.g.,\,$l_0$, $d_0$ and $h$), the computed values for the mean stress at rupture $\overline{\sigma}_\text{f}$ and the cutaneous Young modulus\,$E$.\\
Although not the core interest of the present study, it is satisfying that both the values of $E$ and $\overline{\sigma}_\text{f}$ that we report are compatible with other studies of in vitro (e.g., post-mortem) skin samples (e.g.\,\cite{other_young}). It is however to be noted that significantly lower elastic moduli, in the range of $1$\,MPa, were also reported for human and porcine skins (e.g.\,\cite{PigHuman2}). Extensively discussed in the literature (e.g., see\,\cite{other_young}), the causes for such variability may include the inherent spread in the properties of biological materials, the differences in testing methods (in particular for in vitro versus in vivo samples), or anisotropy in the skin structure.
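For concreteness, the two estimates $\overline{\sigma}_\text{f}=F/(hd_0)$ and $E = F l_0/(hd_0\Delta l)$ can be written out as below. The force and displacement values in the test are hypothetical, chosen only to reproduce the specimen-V orders of magnitude ($h=2.7$\,mm, $d_0=17$\,mm, $l_0=25$\,mm):

```python
def mean_stress_and_modulus(F_N, dl_mm, h_mm, d0_mm, l0_mm):
    """Engineering stress at rupture and linear-elastic Young modulus estimate.

    Neither stress concentration at the notch nor fibre-scale effects are
    accounted for, as in the mean estimates of the text.
    """
    area_mm2 = h_mm * d0_mm
    sigma_MPa = F_N / area_mm2          # N / mm^2 is numerically MPa
    E_MPa = sigma_MPa * l0_mm / dl_mm   # E = F l0 / (h d0 dl)
    return sigma_MPa, E_MPa
```
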
\subsection{Energy release rate}
\begin{table*}
\begin{tabular}{c|ccc|ccccc}
\begin{tabular}[c]{@{}c@{}}Skin\\ specimen\end{tabular} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$h$\\ \,\,\,\,\,(mm)\,\,\,\,\,\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$l_0$\\ \,\,\,\,\,(mm)\,\,\,\,\,\end{tabular}} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}$d_0$\\ \,\,\,\,\,(mm)\,\,\,\,\,\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$\overline{\sigma}_\text{f}$\\ \,\,\,\,\,(MPa)\,\,\,\,\,\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$E$\\ \,\,\,\,\,(MPa)\,\,\,\,\,\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$\Delta T_\text{max}$\\ \,\,\,\,\,($^\circ$C)\,\,\,\,\,\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$G$\\ \,\,\,\,\,(kJ m\textsuperscript{-2})\,\,\,\,\,\end{tabular}} & \multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}$\phi$\\ \,\,\,\,\,(-)\,\,\,\,\,\end{tabular}} \\ \hline
I & $2.0$ & $65$ & $18$ & $15\pm2$ & $110\pm20$ & $24\pm1.5$ & $130\pm20$ & $\sim30$\%\\
II & $2.0$ & $71$ & $16$ & $13\pm2$ & $31\pm6$ & $22\pm1.5$ & $160\pm20$ & $\sim5$\%\\
III & $1.8$ & $29$ & $18$ & $10\pm2$ & $35\pm6$ & $22\pm1.5$ & $80\pm10$ & $\sim30$\%\\
IV & $1.6$ & $32$ & $17$ & $9\pm2$ & $48\pm8$ & $19\pm1$ & $80\pm10$ & $\sim15$\%\\
V & $2.7$ & $25$ & $17$ & $12\pm2$ & $20\pm4$ & $17\pm1$ & $150\pm20$ & $\sim30$\%\\
VI & $2.0$ & $32$ & $14$ & $19\pm3$ & $33\pm6$ & $12\pm1$ & $210\pm30$ & $\sim15$\%\\
VII & $2.4$ & $42$ & $19$ & $10\pm2$ & $35\pm6$ & $16\pm1$ & $135\pm20$ & $\sim50$\%
\end{tabular}
\caption{Summary of various physical quantities estimated on each skin specimen.
$\overline{\sigma}_\text{f}$ is the mean stress at rupture, $E$ is the skin Young modulus, $\Delta T_\text{max}$ the maximal temperature elevation recorded during a test, $G$ the mean energy release rate and $\phi$ the mean thermal efficiency in the dissipation of $G$. As explained in the text, $\phi$ should not be interpreted beyond its order of magnitude.
For reference, the initial sample geometry (i.e., as per Fig.\,\ref{fig:sample}) is also provided, with accuracy $\pm0.15$\,mm for $h$ and $\pm0.5$\,mm for $l_0$ and $d_0$.}
\label{table:summary}
\end{table*}
\begin{figure}
\includegraphics[width=1\linewidth]{skin_force_disp.PNG}
\caption{Force versus displacement plot for the experiment shown in Fig.\,\ref{fig:result} (skin specimen V). Labels $t_0$ to $t_6$ refer to the times of the frames in this other figure. The relatively linear relation at small displacements allows the skin's Young modulus $E$ to be inferred. The area below the plot is the total mechanical work $W_\text{tot}$ provided to the sample.}
\label{fig:force_disp}
\end{figure}
The rise in skin temperature, which we are here mainly interested in, accounts for a portion of the energy dissipated as the rupture progresses. The total mechanical work that was provided during a tensile test is given by
\begin{equation}
W_\text{tot} = \int_{\Delta l=0}^{+\infty} F \,\mathrm{d}\Delta l,
\label{eq:work}
\end{equation}
that is, the area below the measured force versus displacement curve. By definition, the mean energy release rate of skin $G$ can then be derived as
\begin{equation}
G \sim \frac{W_\text{tot}}{hd_0}.
\end{equation}
The estimated $G$ is shown for each skin specimen in table\,\ref{table:summary} and is in the $80$ to $210$\,kJ\,m\textsuperscript{-2} range ($135$\,kJ\,m\textsuperscript{-2} on average for all samples, with a significant standard deviation of $35$\,kJ\,m\textsuperscript{-2}).
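Numerically, this estimate amounts to a trapezoidal integration of the force versus displacement curve, normalised by the broken cross-section. A minimal sketch, using a synthetic (hypothetical) half-sine loading curve rather than our measured data:

```python
import math

def energy_release_rate(F, dl, h, d0):
    """G ~ W_tot/(h*d0), with W_tot the trapezoidal area under F(dl)."""
    W_tot = sum(0.5 * (F[i] + F[i + 1]) * (dl[i + 1] - dl[i])
                for i in range(len(F) - 1))
    return W_tot / (h * d0)  # J/m^2

N = 200
dl = [25e-3 * i / (N - 1) for i in range(N)]             # displacement, m
F = [500.0 * math.sin(math.pi * x / 25e-3) for x in dl]  # toy force curve, N
G = energy_release_rate(F, dl, h=2.7e-3, d0=17e-3)
print(round(G / 1e3))  # kJ/m^2, same order as the values of the table
```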
\\A first remark is that the magnitude of $G$, here reported for the tearing of skin, is significantly higher than that reported for the scissors cutting of skin\,\cite{skinG}, which is about $2\,$kJ\,m\textsuperscript{-2}. It is also higher than the likely energy release rate of individual polymeric fibers (e.g.,\,of collagen fibers, which compose most of the cutaneous tissue), which should be\,\cite{thin_silk} in the order of $1\,$kJ\,m\textsuperscript{-2}. This difference likely reflects the fact that, contrary to cutting, tearing is a process involving some fiber-to-fiber interactions rather than only processes below the fiber's scale, with the (heat-emitting) friction between these fibers known to account for most of the tissue toughness (e.g.,\,\cite{tearskin}).\\
Another remark is that Vincent-Dospital\,\textit{et al.}\,\cite{TVD2,TVD3} proposed the energy release rate of a material to be related to the core length scale $l$ at which most of the energy is dissipated:
\begin{equation}
l \sim \frac{a^3G}{2u},
\label{eq:Gl}
\end{equation}
where $a\sim2$ \AA\,\,is the typical size of a molecular link and $u\sim1\,$eV the typical magnitude of its energetic cohesion. Satisfyingly, in our case, this value is in the micrometer range ($l\sim3\,\upmu$m on average, although Eq.\,(\ref{eq:Gl}) only provides an order of magnitude). Such a value is a typical length scale for the diameter of collagen fibers\,\cite{collagen}, which tends to confirm the importance of this scale in the tearing of skin. Because this size is small compared to the extent of our measured thermal anomaly (i.e., Fig.\,\ref{fig:result}), it suggests that most of this anomaly results from heat diffusion at larger scales, and is not directly related to the intrinsic size of the heat sources $l$.
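Plugging the quoted orders of magnitude into Eq.\,(\ref{eq:Gl}) indeed returns a micrometric dissipation scale:

```python
a = 2e-10      # typical size of a molecular link (~2 Angstrom), m
u = 1.602e-19  # typical cohesion energy (~1 eV), J
G = 135e3      # mean energy release rate of our samples, J/m^2
l = a ** 3 * G / (2 * u)  # Eq. (l ~ a^3*G/(2u))
print(round(l * 1e6, 1))  # micrometers -> ~3.4
```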
\subsection{Approximate thermal energy budget\label{sec:budget}}
To estimate the portion $\phi$ of $W_\text{tot}$ that was dissipated as heat, we computed the rise in internal energy $U$ that could be captured by the infrared camera as
\begin{equation}
U(t) = \rho ch\iint_S \Delta T(x,y,t) \,\mathrm{d}s.
\label{eq:U}
\end{equation}
In this expression, $c$ is the heat capacity of porcine skin, $\sim3.2\,$kJ\,K\textsuperscript{-1}\,kg\textsuperscript{-1} \cite{skin_speheat, skin_speheat2}, $S$ the surface of the sample available to the camera, $x$ and $y$ are the 2D-coordinates of the infrared frames (i.e.,\,see Fig.\,\ref{fig:result}), and $\mathrm{d}s=\mathrm{d}x\times\mathrm{d}y$ is the elementary surface unit (i.e., the surface of one infrared pixel, in this case). We assessed, with a simple scale, the volumetric mass $\rho$ of samples of various measured volumes to be $1150\pm50$\,kg\,m\textsuperscript{-3}.\\
The time evolutions of the mechanical work $W(t)$ provided to the torn samples (defined the same way as $W_\text{tot}$, but for an ongoing rupture) and of the thermal energy $U(t)$ are shown in Fig.\,\ref{fig:ener}, for the same experiment (V) whose results are displayed in Figs.\,\ref{fig:result} and\,\ref{fig:force_disp}. They are also shown for the other skin specimens in appendix\,\ref{app:all}. The mean thermal efficiency $\phi$, for the complete rupture, can then be computed at the end of the experiment, when all of $W_\text{tot}$ has been dissipated, as the final $U/W_\text{tot}$ ratio, and ranges from $5$\% up to $50$\% depending on the skin sample.
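The evaluation of Eq.\,(\ref{eq:U}) is a straightforward sum over the camera pixels. A minimal sketch on a synthetic temperature frame (the anomaly size and amplitude below are hypothetical, for illustration only):

```python
def internal_energy(dT_frame, pixel, rho=1150.0, c=3200.0, h=2.7e-3):
    """Eq. (U): rho*c*h * (sum of DeltaT over pixels) * pixel area, in J."""
    total_dT = sum(sum(row) for row in dT_frame)  # kelvin, summed over pixels
    return rho * c * h * total_dT * pixel ** 2

pixel = 85e-6                           # infrared pixel size, m
dT = [[0.0] * 100 for _ in range(100)]  # synthetic 100x100 frame
for i in range(40, 60):
    for j in range(40, 60):
        dT[i][j] = 5.0                  # a 20x20-pixel, 5-K warm anomaly
U = internal_energy(dT, pixel)
print(round(U, 3))  # joules
```

A localized anomaly of a few kelvin thus stores a fraction of a joule, to be compared with the few joules of mechanical work visible in Fig.\,\ref{fig:force_disp}.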
\begin{figure}
\includegraphics[width=1\linewidth]{energy.PNG}
\caption{(a): Comparison of the time evolution of the provided mechanical work $W$, as per Fig.\,\ref{fig:force_disp}, and of the rise in internal (thermal) energy $U$ in the skin sample, as per Eq.\,(\ref{eq:U}). This plot is for skin specimen V. Labels $t_0$ to $t_6$ refer to the frames of Fig.\,\ref{fig:result}. The mean observable thermal efficiency, which is the ratio $\phi=U/W_\text{tot}$ at the end of the rupture, is about $30$\%.}
\label{fig:ener}
\end{figure}
\\Such a wide range for $\phi$ does not come as a surprise. In addition to the likely complexity in the rupture of skin, and to the intrinsic diversity in this biological material, the total dissipated energy and the thermal energy were only broadly estimated. Indeed, our definition of $W_\text{tot}$ (i.e.,\,Eq.\,(\ref{eq:work})) is not fully intrinsic to the studied material and may depend on the loading geometry and sample size. Note that such a dependency on the sample size does not however display in table\,\ref{table:summary}, where $l_0$ and $G$ are not particularly correlated. This likely indicates that size effects are small compared to our samples' variability in strength. There are also several hypotheses presupposed by Eq.\,(\ref{eq:U}) in the computation of $U$, and not all of them may be conservatively respected. First, it is supposed that all of the thermal energy is available to the infrared camera. In this regard, we made sure to compute $U$ only as long as the heat conduction through the skin did not obviously transfer this energy out of the camera frame (as the full length of the stretched sample is not monitored by the camera, which is rather framed around the crack tip). We also verified that the energy exchange with the air surrounding the sample was slow enough to be neglected over the time of observation. The typical time constant for such an air-skin exchange was indeed measured (see appendix\,\ref{app:air}) to be about $6$\,minutes, when the fracture of a skin sample typically took a few seconds. Likely, the strongest hypothesis behind Eq.\,(\ref{eq:U}) is that the temperature profile, which is only measured at the surface of the epidermis, holds over the full sample's thickness $h$. In practice, skin is a layered (heterogeneous) material, and significant temperature differences may hold between the epidermis, dermis and hypodermis.
Additionally, one can observe what are likely thin fiber bundles around the progressing crack (e.g., see Fig.\,\ref{fig:result}), indicating that the assumption of a homogeneous thickness $h$ has clear limitations. Finally, as the rupture progresses, the portions of the skin samples lying behind the crack tip gain some freedom to move outside of the focal plane of the infrared camera, so that the temperature measurement may there be less accurate.\\
Overall, $\phi$ should not be interpreted beyond its order of magnitude. Despite the listed limitations, we nonetheless compute, in the next section, the thermal efficiency $\phi$ with a different method and obtain results similar, in order of magnitude, to those reported here.
\subsection{Temperature elevation versus damage speed}
One can notice, in Fig.\,\ref{fig:result}, some correlation between the crack velocity $V$ and the magnitude of the temperature anomaly $\Delta T$. See for instance the relative crack advancement and the tip temperature between times $t_0$ and $t_1$ and times $t_5$ and $t_6$. Figure\,\ref{fig:vel} displays the relationship between the maximal recorded temperature elevation and the crack velocity, as observed during the experiments. To better compare the different experiments, $\Delta T$ is there rescaled by the ratio $\overline{G}/G$, where $\overline{G}=135\,$kJ\,m\textsuperscript{-2} is the mean energy release rate of all the samples. To avoid confusion, we remind here that $G$ itself is an average value over the rupture of a unique sample (i.e., derived from Eq.\,(\ref{eq:work})).
Note that the exact position of the crack tip, which is necessary to define an accurate velocity and which we have manually picked on each infrared frame, is subject to a large uncertainty, and the data in Fig.\,\ref{fig:vel} thus carry relatively large error bars. A similar trend is nonetheless observed for all skin specimens, with $\Delta T$ increasing with the fracture velocity.
\\Such a correlation of temperature elevation with velocity does not come as a surprise, and has been investigated for the rupture of other materials (e.g.,\,\cite{ToussaintSoft}). Fast cracks tend to be hotter, as less time is then allowed for thermal conduction to efficiently evacuate the excess heat away from the crack tip\,\cite{RiceLevy, ToussaintSoft, TVD2}, where the energy is dissipated. A model based on these considerations has notably shown (e.g.,\,\cite{ToussaintSoft, TVD2}) that, at low velocity, the tip temperature elevation (hereafter referred to as $\Delta T'_\text{slow}(V)$) should increase linearly with fracture velocity. For faster crack tips, the temperature should however increase more and more slowly with $V$ (a relation which is hereafter referred to as $\Delta T'_\text{transition}(V)$) and eventually reach a plateau $\Delta T'_\text{plateau}$ at the highest velocities. The notation $\Delta T'$ is here used to differentiate the predictions of this simple model from the actual camera-measured temperature elevation $\Delta T$. The three asymptotic regimes of the model\,\cite{ToussaintSoft, TVD2} are described by:
\begin{figure}
\includegraphics[width=1\linewidth]{vel.PNG}
\caption{Recorded elevation of temperature at the damage tip as a function of crack velocity (log scales), scaled by $\overline{G}/G$ for each skin sample. Data points for the seven experiments are shown with different symbols (and colours). For reference, the thresholds for the activation of TRPV1 and TRPV2 (thermal pain) are shown by the plain arrows, assuming an ambient temperature of $35^\circ$C for live inner skin. The dashed lines are the model described by Eq.\,(\ref{eq:camera}) using $\phi=3$\%, $10$\% and $30$\%, and with $\rho c =3.7$\,MJ\,K\textsuperscript{-1}\,m\textsuperscript{-3}, $\lambda=0.3$\,J\,s\,\textsuperscript{-1}\,m\,\textsuperscript{-1}K\,\textsuperscript{-1}, $\overline{G}=135$\,kJ\,m\,\textsuperscript{-2} and $l=3\,\upmu$m. The transition velocity $V_c$ predicted by the model (i.e.,\,Eq.\,(\ref{eq:vtrans})) is also indicated. The top-left inset illustrates the 1D diffusion hypothesis lying behind Eq.\,(\ref{eq:camera}), due to the time scale difference between the crack propagation and the heat diffusion at the camera pixel size.}
\label{fig:vel}
\end{figure}
\begin{equation}
\Delta T'_\text{slow}\sim \phi G \frac{V}{\lambda},
\label{eq:slow}
\end{equation}
\begin{equation}
\Delta T'_\text{transition}\sim \phi G \sqrt{\frac{V}{4\pi \rho c \lambda l}},
\label{eq:transition}
\end{equation}
\begin{equation}
\Delta T'_\text{plateau}\sim \frac{\phi G}{\pi \rho c l},
\label{eq:plateau}
\end{equation}
where $\lambda \sim 0.3\,$J\,s\,\textsuperscript{-1}\,m\,\textsuperscript{-1}K\,\textsuperscript{-1} is the heat conductivity of skin\,\cite{skin_thermal} and $l$ is the typical length scale over which energy is dissipated and partly transformed into heat.
\\Assuming that the different physical parameters are relatively independent of velocity, these equations describe a transition between $\Delta T' \propto V^{1}$ and $\Delta T' \propto V^{0}$, which is highly compatible with the experimental observation (see Fig.\,\ref{fig:vel}, where $\Delta T$ increases by $1.5$ orders of magnitude while $V$ increases by $3$ orders of magnitude). A prediction of the model is that such a transition occurs at velocities around $V=V_c$,
\begin{equation}
V_\text{c}\sim\frac{\lambda}{\pi \rho c l},
\label{eq:vtrans}
\end{equation}
which corresponds\,\cite{ToussaintSoft} to velocities for which the diffusion skin depth $\sim\sqrt{\lambda(l/V)/(\pi \rho c)}$ over the intrinsic warming time $l/V$ is similar to the size $l$ of the heat source. Because our experimental data seem to lie in the transition range, Eq.\,(\ref{eq:vtrans}) is another way of estimating $l$. Indeed, if one uses $V_c\sim1$\,cm\,s\textsuperscript{-1}, as the central order of magnitude of the velocity of our experimental cracks (i.e.,\,see Fig.\,\ref{fig:vel}), one obtains $l$ on the order of a few micrometers. This value is satisfyingly consistent with the prior estimation from Eq.\,(\ref{eq:Gl}) and with the diameter of collagen fibers\,\cite{collagen}.
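With the skin parameters used throughout the paper, Eq.\,(\ref{eq:vtrans}) can be evaluated directly:

```python
import math

lam = 0.3      # heat conductivity of skin, J s^-1 m^-1 K^-1
rho_c = 3.7e6  # volumetric heat capacity rho*c, J K^-1 m^-3
l = 3e-6       # heat-source (fiber) scale, m
V_c = lam / (math.pi * rho_c * l)  # Eq. (vtrans)
print(round(V_c * 100, 1))  # cm/s -> ~0.9
```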
\\Note that the temperature elevations $\Delta T'$, predicted by the model, only hold at the length scale for heat dissipation $l$. That is, they hold at a scale almost two orders of magnitude smaller than the camera pixel size $L\sim85\,\upmu$m. If one then considers that the heat deposited behind the crack tip diffuses perpendicularly to the crack direction up to the pixel size (i.e.,\,assuming 1D diffusion), the temperature elevation available to our infrared camera, once the extra heat has diffused enough, should be of the order of:
\begin{equation}
\Delta T_\text{camera}(V)\sim \frac{l}{L} \times
\left\{
\begin{matrix}
\Delta T'_\text{slow}(V), \hspace{0.8cm}\text{if } V\ll V_c\\
\Delta T'_\text{transition}(V), \hspace{0.1cm}\text{if } V\sim V_c\\
\Delta T'_\text{plateau}, \hspace{1.0cm}\text{if } V\gg V_c
\end{matrix}\right.
\label{eq:camera}
\end{equation}
This 1D simplification of heat diffusion is here approximately valid because, for the propagation velocities that we consider, the crack advances by one pixel in a time $L/V\sim0.001$ to $0.1$\,s, which is shorter than the typical time $L^2\rho c/\lambda\sim0.1\,$s for diffusion at the pixel scale. Therefore, at this scale, the heat transport along the crack direction can be approximately neglected (i.e., see the inset of Fig.\,\ref{fig:vel}), in particular for the upper part of our measured fracture velocities.\\
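For reference, the piecewise model of Eq.\,(\ref{eq:camera}) can be sketched as below. The switching bounds $0.1V_c$ and $10V_c$ between the asymptotic branches are an arbitrary illustrative choice (the actual model of Refs.\,\cite{ToussaintSoft, TVD2} crosses over smoothly):

```python
import math

def dT_camera(V, phi=0.1, G=135e3, lam=0.3, rho_c=3.7e6, l=3e-6, L=85e-6):
    """Pixel-scale temperature elevation (K) from the asymptotic regimes,
    rescaled by l/L as in Eq. (camera)."""
    Vc = lam / (math.pi * rho_c * l)
    if V < 0.1 * Vc:   # slow regime, dT' ~ phi*G*V/lam
        dTp = phi * G * V / lam
    elif V > 10 * Vc:  # plateau, dT' ~ phi*G/(pi*rho*c*l)
        dTp = phi * G / (math.pi * rho_c * l)
    else:              # transition, dT' ~ phi*G*sqrt(V/(4*pi*rho*c*lam*l))
        dTp = phi * G * math.sqrt(V / (4 * math.pi * rho_c * lam * l))
    return (l / L) * dTp

print(round(dT_camera(1e-2), 1))  # a few kelvin around V ~ 1 cm/s
```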
Fitting Eq.\,(\ref{eq:camera}) to the experimental data, as shown in Fig.\,\ref{fig:vel}, one gets a reasonable match and, hence, a new way of estimating $\phi$, that is, of providing an energy budget for the heat dissipation, as this is the only unknown physical quantity. We found $\phi\sim3$ to $30$\%, which is rather compatible with the coarse estimation of section\,\ref{sec:budget}. Again, one should consider this value as only an order of magnitude, as it is, in practice, highly dependent on a model bound to be only a broad representation of the actual tearing of skin (i.e.,\,it is highly dependent on Eqs.\,(\ref{eq:slow}) to\,(\ref{eq:plateau}) and on the re-scaling proposed by Eq.\,(\ref{eq:camera})). This model is indeed a continuous mesoscopic approach while skin is highly heterogeneous at the fiber's scale, and simple Fourier conduction (a basis for the model) is known to hold limitations in describing the cutaneous heat transport (e.g.,\,\cite{skin_nonfourier}). It nonetheless provides another indication that thermal dissipation accounts for a non-negligible part of the strength of the cutaneous tissue.
\section{Discussion and conclusion}
In this work, we have demonstrated that the tearing of skin generates temperature anomalies from a few degrees Celsius to tens of degrees Celsius. Unfrozen porcine skin was used as a model for human skin. The recorded heat bursts were observed on space and time scales similar (although slightly coarser) to the expected resolutions of the human neural system\,\cite{neurite_density,TRP_response2,TRP_response}, and in the sensitivity range of the thermal biosensor TRPV3 (often associated with the feeling of warmth\,\cite{PainTRP,skinTRP}) and those of TRPV1 and TRPV2 (which are associated with thermal pain\,\cite{PainTRP,skinTRP}). A novel thermal pain pathway has thus been shown to be likely involved in the reporting of mechanical damage to the nervous system, in particular when the damage rate is fast enough. Indeed, our infrared camera could spot temperature elevations that have the ability to trigger TRPV1 for crack velocities above $1\,$cm\,s\textsuperscript{-1} and to trigger TRPV2 for even faster cracks (see Fig.\,\ref{fig:vel}).\\
The pixel size of our measurement ($\sim85\,\upmu$m) was about half an order of magnitude bigger than the typical distance between two neurites ($\sim20\,\upmu$m\,\cite{neurite_density, TVDpain1}). It was also almost two orders of magnitude bigger than the length scale $l$ at which the heat is dissipated (here inverted, with two different methods, i.e., with Eqs.\,(\ref{eq:Gl}) or\,(\ref{eq:vtrans}), to be in the micrometer range, which is similar to the size of the skin fibers). We therefore suggest that cracks propagating at velocities slower than $1\,$cm\,s\textsuperscript{-1} may already be locally hot enough to trigger direct algesia. Indeed, and although this was not here measured, high local thermal anomalies (above the TRPV1 threshold) may exist at the neurites' or fibers' scale for these lower velocities. In the most extreme scenario, Eq.\,(\ref{eq:plateau}) predicts temperature bursts up to $\Delta T'\sim\phi G/(\pi \rho c l)\sim100$ to $1000^\circ$C at the microscopic scale. Of course, such temperatures may seem excessive in a biological context, but they were actually suspected in the rupture of various other materials\,\cite{ToussaintSoft, Bouchaud2012, Fuller1975, TVD2}. These high temperatures could themselves cause a rapid deterioration of the surrounding skin cells (e.g.,\,\cite{thermal_biodamage}), but they would only occur briefly and locally around an already failing tissue, so that such potential secondary thermal damage may not be of key significance.\\
What remains certain is that milder temperature anomalies, but still strong enough to trigger the TRP nociceptors, can be directly measured. Although porcine skin is only a model of human skin, we have discussed, in section\,\ref{sec:sample}, how the orders of magnitude we report should carry over to human biology, in particular as pig and human skin have relatively close mechanical strength and thermal properties\,\cite{PigHuman, PigHuman2, skin_speheat}.\\
From our observations, we then propose two different thermo-mechanical pain processes. For the fastest fractures, direct algesia may arise from the simple activation of TRPV1 or even TRPV2 at noxious heat levels. Slower fractures may not trigger such a direct mechanism, although some action potentials may already be sent through the nervous system by the warmed-up TRPV3. It is to be noted that this latter sensor has been shown\,\cite{TRPhysteresis} to present a strongly hysteretic sensitivity to thermal anomalies, responding far better to low-intensity temperature bursts after a first activation at a higher, noxious, level. After a first fast damage, very slow ruptures may thus actively be reported via the stronger activation of TRPV3, in a hyperalgesia process. The mitigation of mechanical hyperalgesia, that is, of the increased sensitivity to pain after a first stimulus, when suppressing TRPVs, has actually been one of the indications that first suggested that these proteins should play a role in mechanical pain\,\cite{TRPV1_mechano,TRPV1_mechano2,TRPV3_mechano}. The TRPV2 channel also holds a similar hysteresis\,\cite{TRP_usedep} and could then also play some role in hyperalgesia, while, oppositely, TRPV1 was shown to provide a consistent response to repeated thermal bursts\,\cite{TRP_usedep}. Another, not mutually exclusive, mechanism for hyperalgesia, in our framework, could be that inflamed and/or infected tissues around pre-existing wounds tend to exhibit a higher background temperature (by $1$\,to $5^\circ$C, e.g.,\,\cite{inflinfec}), and could then be sensitive to slower fractures, as the TRPV1 threshold is then easier to reach.\\
Incidentally, tissue cooling is already used for anesthesia prior to the mechanical injection of treatments\,\cite{cool_anesthesia, cool_anesthesia2}, and a principle behind such anesthesia methods may lie in preventing any damage-related thermal anomaly from exceeding the nociceptors' thresholds. Additionally, we suggest that using highly conductive materials for the design or the coating of needles and other invasive medical tools (i.e.,\,sharps) may help reduce pain, as the heat may then be efficiently transported away from the tissues through the conductive blades. Actually, most sharps are made of stainless or carbon steel, which are already relatively good thermal conductors, but less so than other metals or some other specific materials.\\
Finally, in this manuscript, we gave a broad estimation, with two different methods, of the percentage of mechanical work that is effectively transformed into heat during the propagation of cutaneous cracks ($\phi\sim3$ to $50$\%). This portion being non-negligible, it could make thermal monitoring a `natural' way for the detection of damage, and, as a very qualitative statement, it would not be surprising if evolution exploited the detection of the dissipated heat for the preservation of life. Note that, consistently, our estimation of $\phi$ is in line with what we elsewhere reported for the tearing of another fibrous tissue of biological origin, that is, paper\,\cite{ToussaintSoft}, where $\phi$ was about $10$\%.
\\The here measured thermal anomalies are however bigger than those that we recently theorised, when the here discussed pain pathway was first proposed\,\cite{TVDpain1}. In this former theoretical study, it was suggested that these anomalies should be on the edge of the TRPs' sensitivity and thus relevant to hyperalgesia only, rather than to direct algesia as well. By contrast, we have here shown that direct thermo-mechanical algesia is also likely at play for fast cracks. One of the differences between the previous theoretical work and the present one is that damages at the full skin scale have here been studied, while the rupture of a unique collagen fiber was previously considered. This being stated, the main difference lies in the value of the considered energy release rate, which was here computed to be in the $80-210$\,kJ\,m\textsuperscript{-2} range for the tearing of skin, but reported to be more than one order of magnitude smaller for the cutting of skin\,\cite{skinG}. Such a discrepancy likely derives from the different role of inter-fiber friction when tearing or cutting skin, whose contribution was proposed to account for most of the cutaneous strength in tearing\,\cite{tearskin}, but is likely negligible in cutting. Similar studies to the present one should then be performed for other types of damage, and in particular for cuts or punctures, which are both common injuries and common medical procedures, as thermo-mechanical pain may there be of different importance.\\
On a similar note, it is important to state that the pain pathway that we here propose is not meant to comprehensively account for every sense of pain. For instance, the pressure pain threshold in human subjects was measured (e.g.,\,\cite{PainPressThreh}) to be around $0.1$\,to $1$\,MPa, that is, at stress levels far lower than what is needed to initiate an actual skin rupture and hence strong thermal anomalies (i.e., $\overline{\sigma}_\text{f}\sim10\,$MPa, as here or elsewhere\,\cite{other_young} reported). It is actually comforting that pain shall occur before an actual rupture, but shall also increase, maybe through different mechanisms, when rupture is indeed reached. Other nociceptors exist at the membrane of neurons, for instance the Piezo channels\,\cite{piezo2_nox}, whose opening is believed to be related to the stretch of cell membranes. Such channel opening with stretch has, interestingly, also been proposed as another explanation for the involvement of TRPs in mechanical sensing\,\cite{forcing_trps}, without the consideration of any thermal anomaly. In practice, both effects could coexist, with the thermal sensitivity of TRPs possibly enhanced by their abnormal stretch, hence leading to a polymodal detection. Overall, mechanical algesia is likely a very convoluted phenomenon, involving many types of nociceptors and of biological processes\,\cite{mechano_pain}. The present study aimed to introduce and shed some (infrared) light on a novel, thermo-mechanical, pain process.\\
\begin{figure*}
\includegraphics[width=1\linewidth]{skin_result2.PNG}
\caption{Temperature maps examples (first column), force versus displacement curves (second column) and time evolution of $W$ and $U$ (plain and dashed plot, respectively, third column) from the onset of the rupture (arbitrarily at $t=0$\,s) for skin specimens I, II, III, IV, VI and VII. The arrows on the $F$ versus $\Delta l$ plots indicate the onsets of rupture.}
\label{fig:result2}
\end{figure*}
\newpage
\section*{Acknowledgements, conflicts of interest and ethics}
\noindent The authors acknowledge the support of the University of Oslo, of PoreLab, of the Njord centre and of the IRP France-Norway D-FFRACT. This work was also partly supported by the Research Council of Norway through its Centre of Excellence funding scheme, project number 262644. The authors declare no competing financial interests in the publishing of this manuscript.\\
We thank Zbigniew Rozynek from the Adam Mickiewicz University for his early experimental assistance and for insightful discussions. We also thank the Strøm-Larsen butcher shop for providing the porcine skin and answering rather unusual questions about it. The skin was a standard product of the shop, so that no animal was harmed specifically for the purpose of the present work.
\label{sec:introduction}
Recent results of heavy-ion collision experiments at RHIC \cite{Adams:2005dq} and LHC \cite{Aamodt:2008zz}
shed some light on the properties of the quark-gluon plasma and on the position of the transition line in the temperature--baryon density plane.
New experiments will be carried out at FAIR (GSI) and NICA (JINR).
To fully
explore the phase diagram theoretically it is necessary to make computations at finite temperature and
finite baryon chemical potential. At finite temperature, lattice QCD is the only ab initio method available,
and many results have been obtained. However, at finite baryon density, lattice QCD faces the so-called
complex action problem (or sign problem).
Various proposals exist to solve this problem (see, e.g., the reviews~\cite{Muroya:2003qs,Philipsen:2005mj,deForcrand:2010ys}), and yet it is still
very hard to get reliable results at $\mu_B/T>1$. Our work is devoted to developing the canonical partition function approach.
The fermion determinant at nonzero baryon chemical potential $\mu_B$, $\det\Delta(\mu_B)$, is in general not real.
This makes it impossible to apply standard Monte Carlo techniques to computations with the partition function
\beq
Z_{GC}(\mu_q,T,V) = \int \mathcal{D}U (\det\Delta(\mu_q))^{N_f} e^{-S_G},
\label{Eq:PathIntegral}
\eeq
where $S_G$ is the gauge field action, $\mu_q=\mu_B/3$ is the quark chemical potential, $T=1/(aN_t)$ is the temperature, $V=(aN_s)^3$ is the volume, $a$ is the lattice spacing, and $N_t$, $N_s$ are the numbers of lattice sites in the time and space directions.
The canonical approach was studied in a number of papers \cite{deForcrand:2006ec,Ejiri:2008xt,Li:2010qf,Li:2011ee,Danzer:2012vw,Gattringer:2014hra,Fukuda:2015mva,Nakamura:2015jra}.
It is based on the following relations. The relation between the
grand canonical partition function $Z_{GC}(\mu_q, T, V)$ and the canonical ones $Z_C(n, T, V)$,
called the fugacity expansion, reads:
\beq
Z_{GC}(\mu, T, V)=\sum_{n=-\infty}^\infty Z_C(n,T,V)\xi^n,
\label{ZG}
\eeq
where
$\xi=e^{\mu_q/T}$ is the fugacity.
The inverse of this equation can be presented in the following form
\cite{Hasenfratz:1991ax}
\beq
Z_C\left(n,T,V\right)=\int_0^{2\pi}\frac{d\theta}{2\pi}
e^{-in\theta}Z_{GC}(\theta,T,V).
\label{Fourier}
\eeq
$Z_{GC}(\theta,T,V)$ is the grand canonical partition function
for imaginary chemical potential $\mu_q=i\mu_{qI} \equiv iT\theta$. Standard Monte Carlo simulations are
possible for this partition function since the fermionic determinant is real for imaginary $\mu_q$.
The QCD partition function $Z_{GC}$ is a periodic function of $\theta$: $Z_{GC}(\theta) = Z_{GC}(\theta+2\pi/3)$.
This symmetry is called Roberge-Weiss symmetry \cite{Roberge:1986mm}. As a consequence of this periodicity
the canonical partition functions $Z_C(n,T,V)$ are nonzero only for $n=3k$.
QCD possesses a rich phase structure at nonzero $\theta$, which depends on the number of
flavors $N_f$ and the quark mass $m$. This phase structure is shown in
Fig.~\ref{RW_ph_d}. $T_c$ is the confinement/deconfinement crossover point at
zero chemical potential. The line $(T \ge T_{RW},\mu_I/T=\pi/3)$ indicates the
first order phase transition. On the curve between $T_c$
and $T_{RW}$, the transition is expected to change from a crossover to first order
for small and large quark masses, see e.g. \cite{Bonati:2014kpa}.
\begin{figure}[htb]
\centering
\includegraphics[width=0.35\textwidth,angle=0]{fig1.eps}
\vspace{0cm}
\caption{Schematic view of the Roberge-Weiss phase structure in the purely imaginary chemical
potential region.}
\label{RW_ph_d}
\end{figure}
The quark number density $n_q$ for $N_f$ degenerate quark flavors is defined by the following equation:
\beq
\frac{n_{q}}{T^{3}} = \frac{1}{VT^{2}}\frac{\partial}{\partial \mu_q}\ln
Z_{GC}
=\frac{N_{f}N_{t}^{3}}{N_s^3 Z_{GC}} \int \mathcal{D}U e^{-S_G} (\det\Delta(\mu_q))^{N_f}
\mathrm{tr}\left[\Delta^{-1}\frac{\partial \Delta}{\partial \mu_q/T}\right].
\label{density1}
\eeq
It can be computed numerically for imaginary chemical potential. Note that for imaginary chemical potential $n_q$ is also purely imaginary: $n_q = i n_{qI}$.
From eqs.~(\ref{ZG}) and (\ref{density1}) it follows that densities $n_{q}$ and $n_{qI}$ are related to $Z_C(n,T,V)$ (below we will use the notation $Z_n$ for the ratio
$Z_C(n,T,V) / Z_C(0,T,V)$) by equations
\beq
\label{density2}
n_{q}/T^3 = {\cal{N}}\frac{2\sum_{n>0} n Z_n \sinh(n\theta)}{1+2\sum_{n>0} Z_n \cosh(n\theta)},\,\,
n_{qI}/T^3 = {\cal{N}}\frac{2\sum_{n>0} n Z_n \sin(n\theta)}{1+2\sum_{n>0} Z_n \cos(n\theta)}\,,
\eeq
where ${\cal{N}}$ is a normalization constant, ${\cal{N}}=\frac{N_t^3 }{N_s^3}$.
Our suggestion is to compute $Z_n$ using equation (\ref{density2}) for $n_{qI}$.
One can compute $Z_{GC}(\theta,T,V)$ using numerical data for $n_{qI}/T^3$ via numerical integration
\beq
L_Z(\theta) \equiv \log\frac{Z_{GC}(\theta,T,V)}{Z_{GC}(0,T,V)} = - V \int_{0}^{\theta} d \tilde{\theta}~n_{qI}(\tilde{\theta})\,,
\label{integration_1}
\eeq
where we omitted $T$ and $V$ from the grand canonical partition function
notation.
Then $Z_n$ can be computed as
\beq
Z_n = \frac{\int_0^{2\pi}\frac{d\theta}{2\pi} e^{-in\theta} e^{L_Z(\theta)} }{ \int_0^{2\pi}\frac{d\theta}{2\pi}
e^{L_Z(\theta)} }
\label{Fourier_2}
\eeq
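The chain from the density through $L_Z(\theta)$ in (\ref{integration_1}) to the coefficients $Z_n$ in (\ref{Fourier_2}), and its consistency with eq.~(\ref{density2}), can be tested on a toy density of hadron-resonance-gas form; the values of $f$ and $V$ below are illustrative, and the normalization factors ($T$, ${\cal N}$) are set to unity:

```python
import numpy as np

# Toy imaginary density of hadron-resonance-gas form, n_qI(theta) = f*sin(3*theta);
# f and V are illustrative numbers, all other normalizations set to one.
f, V = 0.4, 8.0
theta = np.linspace(0.0, 2.0*np.pi, 8192, endpoint=False)
n_qI = f*np.sin(3.0*theta)

# L_Z(theta) = -V * int_0^theta n_qI, here in closed form:
L_Z = -V*f*(1.0 - np.cos(3.0*theta))/3.0

# Canonical ratios Z_n = Z_C(n)/Z_C(0) by quadrature:
w = np.exp(L_Z)
Z = {n: (np.mean(np.exp(-1j*n*theta)*w) / np.mean(w)).real for n in range(1, 40)}

# Only n = 3k survive (Roberge-Weiss triality):
assert max(abs(Z[n]) for n in Z if n % 3 != 0) < 1e-12

# Round trip: the Z_n reproduce the input density through the analogue of (density2)
num = 2.0*sum(n*Z[n]*np.sin(n*theta) for n in Z)
den = 1.0 + 2.0*sum(Z[n]*np.cos(n*theta) for n in Z)
assert np.max(np.abs(num/den/V - n_qI)) < 1e-9
```

The round trip checks that the inversion and the density formula are mutually consistent; applying the same quadrature directly to noisy lattice data is much harder, which motivates the fitting strategy described next.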
In our work we use a modified version of this approach \cite{Bornyakov:2016}.
Instead of performing the numerical integration in (\ref{integration_1}),
we fit $n_{qI}/T^3$ to theoretically motivated functions
of $\mu_{qI}$. It is known that the density of a noninteracting quark gas is described by
\beq
n_q/T^3 = N_f \Bigl ( 2\frac{\mu_q}{T} + \frac{2}{\pi^2} \Bigl (\frac{\mu_q}{T} \Bigr )^3 \Bigr ).
\eeq
We thus fit the data for $n_{qI}$ to an odd power polynomial of $\theta$
\beq
n_{qI}(\theta)/T^3 = \sum_{n=1}^{n_{max}} a_{2n-1} \theta^{2n-1}\,,
\label{eq_fit_polyn}
\eeq
in the deconfining phase. This type of fit was also used in Ref.~\cite{Takahashi:2014rta} and Ref.~\cite{Gunther:2016vcp}.
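A minimal sketch of this fitting step, on synthetic data generated from the free-gas form above with $N_f=2$ (so the exact coefficients $a_1=4$, $a_3=-4/\pi^2$, $a_5=0$ are known); the noise level is arbitrary:

```python
import numpy as np

# Synthetic deconfined-phase data from the free-gas form with N_f = 2,
# so a1 = 4, a3 = -4/pi^2, a5 = 0; the noise level is arbitrary.
rng = np.random.default_rng(1)
theta = np.linspace(0.05, 1.0, 40)
n_qI = 4.0*theta - (4.0/np.pi**2)*theta**3 + 1e-4*rng.standard_normal(theta.size)

# Least-squares fit to the odd polynomial with n_max = 3:
A = np.column_stack([theta, theta**3, theta**5])
a1, a3, a5 = np.linalg.lstsq(A, n_qI, rcond=None)[0]

assert abs(a1 - 4.0) < 1e-2
assert abs(a3 + 4.0/np.pi**2) < 5e-2
assert abs(a5) < 5e-2
```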
In the confining phase (below $T_c$) the hadron resonance gas model provides
a good description of the chemical potential dependence of thermodynamic observables
\cite{Karsch:2003zq}.
Thus it is reasonable to fit the density to a Fourier expansion
\beq
n_{qI}(\theta)/T^3 = \sum_{n=1}^{n_{max}} f_{3n} \sin(3n \theta)
\label{eq_fit_fourier}
\eeq
This type of fit was also used in Ref.~\cite{Takahashi:2014rta}, where it was concluded that it works well.
To demonstrate our method we performed simulations of lattice QCD with $N_f=2$ clover-improved Wilson quarks and the Iwasaki improved gauge field action; for a detailed definition of the lattice action see \cite{Bornyakov:2016}.
We simulate $16^3 \times 4$ lattices at temperatures $T/T_c=1.35, 1.20$ and 1.08 in the deconfinement phase and $0.99, 0.93, 0.84$ in the confinement phase along the line of constant physics with
$m_{\pi}/m_{\rho}=0.8$. All parameters of the action, including the $c_{SW}$ value, were borrowed from the WHOT-QCD collaboration paper~\cite{Ejiri:2009hq}. We compute the number density on samples of $N_{conf}=1800$ configurations, using every 10-th trajectory produced with the Hybrid Monte Carlo algorithm.
\section{Results}
\label{mainresults}
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=0.73\textwidth,angle=-90]{fig2.ps}
\caption{Imaginary density as function of $\theta$ in the deconfinement phase at temperatures $T/T_c=1.35, 1.20, 1.08$.
The curves show fits to function (\ref{eq_fit_polyn}). }
\label{density_deconf}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\vspace{-0mm}
\includegraphics[width=0.77\textwidth,angle=-90]{fig3.ps}
\vspace{-4mm}
\caption{Imaginary density as function of $\theta$ for temperatures in the confining phase. The curves show fits to (\ref{eq_fit_fourier}) with $n_{max} = 1$ for $T/T_c = 0.84, 0.93$ and $ n_{max}=2$ for $T/T_c = 0.99$.}
\label{density_conf}
\end{minipage}
\end{center}
\end{figure}
In Fig.~\ref{density_deconf} and Fig.~\ref{density_conf} we show numerical results for $n_{qI}$ as a function of $\theta$ in
deconfining and confining phases, respectively \cite{Bornyakov:2016}.
Note that the behavior of $n_{qI}$ at $T/T_c=1.08$ is different from that at higher temperatures. This temperature is below $T_{RW}$, so at $\theta=\pi/3$ there is no first-order
phase transition and $n_{qI}$ is continuous. Instead, there is a crossover to the confinement phase
at about $\theta=0.92(2)$.
It is not yet clear how to fit the data over the range of $\mu_{qI}$ covering both the deconfining and the confining phases.
Here we fit to the function (\ref{eq_fit_polyn}) with $n_{max}=3$ over the range $[0,0.8]$, i.e.\ including only the deconfining phase.
In this case the fit should be considered as a Taylor expansion.
We computed $Z_n$ using the new procedure described in the previous section and compared the results with those obtained using
the hopping parameter expansion. We found good agreement between the two methods, indicating that the new method works well \cite{Bornyakov:2016}.
This allows us to make analytical continuation to the real values of $\mu_q$ beyond the Taylor expansion validity range (for all temperatures
apart from $T/T_c=1.08$). Respective results are shown in Fig.~\ref{dens_anal_cont}.
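The analytic continuation of the polynomial fit amounts to a sign alternation of the odd coefficients, $a_{2n-1}\theta^{2n-1} \to (-1)^{n-1} a_{2n-1} (\mu_q/T)^{2n-1}$, which can be verified symbolically together with the pressure obtained by integrating the continued density:

```python
import sympy as sp

a1, a3, a5 = sp.symbols('a1 a3 a5', real=True)
theta, x, s = sp.symbols('theta x s', real=True)   # x stands for mu_q/T

# Odd-polynomial fit of the imaginary density with n_max = 3:
n_qI = a1*theta + a3*theta**3 + a5*theta**5

# Continuation theta -> -i*mu_q/T together with n_q = i*n_qI:
n_q = sp.expand(sp.I * n_qI.subs(theta, -sp.I*x))
assert sp.simplify(n_q - (a1*x - a3*x**3 + a5*x**5)) == 0

# Pressure difference from integrating the continued density over mu_q/T:
P = sp.integrate(n_q.subs(x, s), (s, 0, x))
assert sp.simplify(P - (a1/2*x**2 - a3/4*x**4 + a5/6*x**6)) == 0
```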
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=0.73\textwidth,angle=-90]{fig4.ps}
\caption{Analytical continuation for the number density vs. $\mu_q^2$.
The curves show respective fits eq.~(\ref{eq_fit_polyn}) or eq.~(\ref{eq_fit_fourier}). The width of the curves indicates the statistical error of extrapolation to $\mu_q^2 > 0$.}
\label{dens_anal_cont}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\vspace{-0mm}
\includegraphics[width=0.77\textwidth,angle=-90]{fig5.ps}
\vspace{-4mm}
\caption{Constants $a_i$ of eq.~(\ref{eq_fit_polyn}) for six values of temperature. The curves show polynomial fits over $T>T_c$ range.}
\label{constants}
\end{minipage}
\end{center}
\end{figure}
Using the results for the number density $n_q$ we can compute the temperature of the transition from the hadron phase to the quark-gluon plasma phase using the following procedure.
Our results for the number density at temperatures $T>T_c$ as function of the chemical potential $\mu_q$
are reliable even for large $\mu_q$ for $T>T_{RW}$, while for $T_c < T < T_{RW}$ they are reliable at the moment for small $\mu_q$
only. We can use these results to compute the pressure $\Delta P_{deconf}(T,\mu_q) = P(T,\mu_q) - P(T,0)$ as function of $\mu_q$ and then extrapolate pressure for fixed $\mu_q$ to
temperatures $T<T_c$. To get a good extrapolation we need the pressure computed at more temperatures than are available now,
but the three values we have in this work are enough to demonstrate the idea. We then find the transition temperature $T_c(\mu_q)$
by numerically solving the equation $\Delta P_{deconf}(T,\mu_q) = \Delta P_{conf}(T,\mu_q)$,
where $\Delta P_{conf}(T,\mu_q)$ is pressure computed from results for the number density $n_q$ we obtained in the confinement phase.
In this paper we use the extrapolation for the coefficients in (\ref{eq_fit_polyn}) rather than for the pressure itself.
This extrapolation is shown in Fig.~\ref{constants}. We fitted the data for $a_i, i=1,3,5$ by a polynomial $b_0+b_1\frac{T}{T_c}+b_2\left(\frac{T}{T_c}\right)^2$ and
then computed the extrapolated values of these parameters at $T/T_c=0.93$ and 0.84. The extrapolated values were used to compute
$\Delta P_{deconf}(T,\mu_q)/T^4$ as
\beq
\frac{\Delta P_{deconf}(T,\mu_q)}{T^4} = \frac{a_1}{2} \left(\frac{\mu_q}{T}\right)^2 - \frac{a_3}{4} \left(\frac{\mu_q}{T}\right)^4 + \frac{a_5}{6} \left(\frac{\mu_q}{T}\right)^6
\label{eq_pressure_deconf}
\eeq
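The crossing-point construction can be sketched as follows; the coefficients are hypothetical placeholders for the extrapolated lattice values, and only the root-finding logic is the point here. The confined pressure follows from continuing the Fourier fit, $n_q/T^3 = f_3\sinh(3\mu_q/T)$, and integrating once:

```python
import math

# Hypothetical numbers standing in for the extrapolated fit coefficients a_i
# (deconfined side) and the Fourier coefficient f3 (confined side) at fixed T:
a1, a3, a5 = 4.0, 0.8, 0.1
f3 = 0.5

def dP_deconf(x):
    """Deconfined pressure difference as a polynomial in x = mu_q/T."""
    return a1/2*x**2 - a3/4*x**4 + a5/6*x**6

def dP_conf(x):
    """Confined side: integrate n_q/T^3 = f3*sinh(3x) once in x."""
    return f3*(math.cosh(3.0*x) - 1.0)/3.0

def crossing(lo=0.1, hi=3.0, tol=1e-12):
    """Bisection for the crossing dP_deconf(x) = dP_conf(x)."""
    g = lambda y: dP_deconf(y) - dP_conf(y)
    assert g(lo)*g(hi) < 0.0, "no sign change in the bracket"
    while hi - lo > tol:
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) > 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5*(lo + hi)

x_c = crossing()
assert abs(dP_deconf(x_c) - dP_conf(x_c)) < 1e-9
```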
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.49\textwidth}
\includegraphics[width=0.73\textwidth,angle=-90]{fig6.ps}
\caption{Pressure computed via eq.~(\ref{eq_pressure_deconf}) (denoted by 'deconf', thin curves) and by integration of eq.~(\ref{eq_fit_fourier})
(denoted by 'conf', thick curves) for $T/T_c=0.93, 0.84$.
}
\label{fig_pressure}
\end{minipage}
\hfill
\begin{minipage}[t]{0.49\textwidth}
\vspace{-0mm}
\includegraphics[width=0.77\textwidth,angle=-90]{fig7.ps}
\vspace{-4mm}
\caption{Transition line in the temperature--chemical potential plane. The curve shows a quadratic fit.}
\label{fig_Tc}
\end{minipage}
\end{center}
\end{figure}
We show results in Fig.~\ref{fig_pressure} together with the pressure computed by integration of eq.~(\ref{eq_fit_fourier}) for $T/T_c=0.93, 0.84$. The crossing points determine the values of $\mu_q$ on the transition line at the respective temperatures.
We show the coordinates of these crossing points in Fig.~\ref{fig_Tc} together with a fit of the form
\beq
T_c(\mu_q)/T_c = 1 - C \left(\mu_q/T_c \right)^2
\eeq
The result for the fit parameter is $C=0.07$.
We have to emphasize that in Figs.~\ref{fig_pressure} and~\ref{fig_Tc} we do not show statistical or systematic errors
since we are not yet able to compute them. These figures are shown to illustrate our idea for the method to compute
the dependence of the transition temperature on the chemical potential. Still, it is encouraging that the value of the coefficient
$C$ obtained is of the correct order of magnitude. We can compare with Refs.~\cite{deForcrand:2002hgr,Wu:2006su}, where $C=0.051(3)$ and $C=0.065(7)$, respectively,
were found in lattice QCD with $N_f=2$. In both papers analytical continuation of $\frac{T_c(\mu_q)}{T_c}$ from imaginary $\mu_q$
was used.
Thus, we have presented a new method to compute the canonical partition functions $Z_n$.
It is based on fitting the imaginary number density over the full range of imaginary
chemical potential to theoretically motivated
functions: the polynomial fit (\ref{eq_fit_polyn}) in the deconfinement
phase for $T$ above $T_{RW}$ and the Fourier-type fit (\ref{eq_fit_fourier}) in the confinement phase.
We also explained how the transition line at real $\mu_q$ can be computed using our results for the number density $n_q$.
We are planning to increase the number of temperature values to improve the precision of this method.
It is worth noting that the method of direct analytical continuation of $\frac{T_c(\mu_q)}{T_c}$ from
imaginary chemical potential can be applied to our data. This will help us to improve the method presented here.
\noindent
{\bf Acknowledgments} \\
This work was completed due to support by
RSF grant under contract 15-12-20008. Computer simulations were performed on the FEFU GPU cluster Vostok-1 and MSU 'Lomonosov' supercomputer.
\section{Introduction}
The focusing nonlinear Schr\"{o}dinger (NLS) equation in the space of
one dimension is a fundamentally important model which brings together nonlinearity and dispersion of modulated waves in many physical systems \cite{NLS2,NLS1}. It has been used as the main testbed for rogue
waves in fluids and optics \cite{Rogue1,Rogue2}, where the rogue waves
appear from nowhere and disappear without any trace. One of the important
properties of the focusing cubic NLS equation is its integrability,
which allows one to construct the basic solutions for the rogue waves
in a closed analytical form. Although these solutions were constructed
long ago in the works of Akhmediev {\em et al.} \cite{Nail1}, Kuznetsov \cite{Kuznetsov}, Ma \cite{Ma}, Peregrine \cite{Peregrine},
and Tajiri \& Watanabe \cite{TW},
they have been studied extensively in the past few years in the physics
literature \cite{Kibler1,Kibler2}.
To explain the current state of the art in the mathematical studies of these
breather solutions, we set up the stage and take the NLS equation in the following dimensionless form:
\begin{equation}
i \psi_t + \frac{1}{2} \psi_{xx} + |\psi|^2 \psi = 0,
\label{nls}
\end{equation}
where the unknown $\psi$ is a complex-valued function depending on time $t\in\mathbb R$ and space $x\in\mathbb R$.
The NLS equation (\ref{nls}) is invariant under the scaling transformation
\begin{equation*}
\mbox{\rm if} \; \psi(x,t) \;\; \mbox{\rm is a solution, so is} \;
c \psi(c x,c^2 t),
\;\; \mbox{\rm for every} \;\; c \in \mathbb{R},
\end{equation*}
and under translations in $x$ and $t$.
Up to these symmetries, the NLS equation (\ref{nls}) admits the following exact solutions on the background of the constant-amplitude wave $\psi(x,t) = e^{it}$:
\begin{itemize}
\item Akhmediev breather (AB)
\begin{equation}
\label{AB}
\psi(x,t) = \left[ - 1 + \frac{2(1-\lambda^2) \cosh(\lambda k t) + i \lambda k \sinh(\lambda k t)}{\cosh(\lambda k t) - \lambda \cos(kx)} \right] e^{it},
\end{equation}
where $k = 2 \sqrt{1-\lambda^2}$, and $\lambda \in (0,1)$ is the only free parameter.
\item Kuznetsov--Ma breather (KMB)
\begin{equation}
\label{KM}
\psi(x,t) = \left[ - 1 + \frac{2(\lambda^2-1) \cos(\lambda \beta t) + i \lambda \beta \sin(\lambda \beta t)}{\lambda \cosh(\beta x) - \cos(\lambda \beta t)} \right] e^{it},
\end{equation}
where $\beta = 2 \sqrt{\lambda^2-1}$, and $\lambda \in (1,\infty)$ is the only free parameter.
\item Peregrine's rogue wave (PRW)
\begin{equation}
\label{P}
\psi(x,t) = \left[ - 1 + \frac{4(1+2it)}{1 + 4(x^2 + t^2)} \right] e^{it}.
\end{equation}
\end{itemize}
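These closed-form solutions are straightforward to verify symbolically. Writing $\psi = u\,e^{it}$ reduces (\ref{nls}) to $iu_t + \frac{1}{2}u_{xx} + (|u|^2-1)u = 0$, and for PRW the residual is a rational identity that SymPy checks exactly:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)

# PRW profile u with psi = u*exp(i*t); the NLS equation then reduces to
# the rational identity i*u_t + u_xx/2 + (|u|^2 - 1)*u = 0.
u = -1 + 4*(1 + 2*sp.I*t)/(1 + 4*(x**2 + t**2))
res = sp.I*sp.diff(u, t) + sp.diff(u, x, 2)/2 + (u*sp.conjugate(u) - 1)*u
assert sp.cancel(sp.together(res)) == 0
```

The same check applies to AB and KMB after fixing a numerical value of $\lambda$, at the cost of heavier trigonometric and hyperbolic simplifications.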
Note that PRW can be obtained in the limit $\lambda \to 1$ from either AB or KMB. Also note the formal transformation $k = i \beta$ between AB and KMB.
The main goal of this work is to study the linear instability of AB and KMB.
Stability of breathers is a challenging question that has been extensively studied in the mathematics literature. A major difficulty comes from the nontrivial dependence of the breathers on time $t$. Therefore, many of the analytical methods developed for the stability of stationary or traveling waves in nonlinear partial differential equations do not apply to breathers. For instance, spectral methods are out of reach for AB and PRW, which are localized in time $t$. Since KMB is periodic in time $t$, Floquet theory can be used, at least formally, to compute stable and unstable modes of KMB. This has been done numerically in \cite{Cuevas} after KMB was truncated to a spatially periodic domain in $x$. Further studies of KMB in the discrete setting of the NLS equation can be found in \cite{Kevr-Salerno}.
Very recently, the authors of \cite{Z} set up a basis for a rigorous investigation of stability of breathers which are periodic in time $t$ and localized in space $x$. Using tools from the theory of semigroups and Fredholm operators, they analyzed properties of the monodromy operator for the linearization of the cubic-quintic complex Ginzburg--Landau equation about such solutions, and computed its essential spectrum. These results being obtained in a dissipative setting do not directly apply to KMB due to the Hamiltonian nature of the NLS equation.
Most of the existing instability results for breathers of the NLS equation strongly rely upon the integrability properties of the NLS equation.
Instability of AB was established by using the variational characterization
of breathers in the energy space in \cite{AFM-2}. Perturbations
of the AB were considered in the periodic space $H^s_{\rm per}$ for $s > 1/2$. Similar techniques were applied to KMB and PRW in \cite{AFM} (see also the review in \cite{Alejo}). It was shown that both KMB and PRW are unstable with respect
to perturbations in $H^s(\mathbb{R})$ for $s > 1/2$.
Evolution of KMB and PRW under perturbations
was studied in \cite{Garn} and \cite{BK}, where inverse scattering transform
was applied to the NLS equation in the class of functions decaying
to the nonzero boundary conditions.
Instability of PRW was visualized numerically in \cite{KH} by using time-dependent simulations of the NLS equation.
Linear instability of PRW was also studied numerically in \cite{CalSch2019}.
By using perturbation theory for embedded eigenvalues of the Lax system,
it was shown in \cite{KPR} that the perturbed PRW is transformed
to either KMB or two counter-propagating breathers, the latter solutions
were later constructed explicitly in \cite{ZakGelash}.
Our approach to linear instability is closely related to the recent works
\cite{Bil2, CalSch2012,GS-2021}, where solutions of the linearized NLS equation are constructed from solutions of the associated Lax system. Eigenfunctions of the Lax system related to the Lax spectrum provide solutions of the linearized NLS equation relevant for the linear instability of breathers. The completeness of the resulting solution set is a particularly challenging question. In the class of spatially localized functions, it was shown in \cite[Section 3.4]{Bil2} how to obtain a complete set of solutions of the linearized NLS equation at PRW. Stability of AB
under periodic perturbations of the same period was stated in \cite{CalSch2012} without the proof of completeness. It was recently discovered in \cite{GS-2021}
that the set of eigenfunctions constructed in \cite{CalSch2012} is incomplete
and two unstable modes exist for AB. The spatially periodic unstable modes
for AB were constructed in \cite{GS-2021} by taking a suitable combination of unbounded solutions of the linearized NLS equation.
The purpose of this paper is twofold. Firstly, we give a full description of the Lax spectra for AB and KMB, including algebraic multiplicities of eigenvalues. Secondly, we obtain all solutions of the linearized NLS equations at AB and KMB generated by eigenfunctions and generalized eigenfunctions of the Lax systems. These solutions are spatially periodic for AB and spatially localized for KMB. The completeness question is outside the scope of this paper and will be the subject of subsequent studies.
Similar to \cite{Bil2,CalSch2012,GS-2021}, we use the Darboux transformation to obtain AB and KMB from the constant-amplitude wave and then to precisely determine the Lax spectra at AB and KMB from the Lax spectrum at the constant-amplitude wave. For AB we focus on solutions of the linearized NLS equation with the first three spatially periodic Fourier modes, whereas for KMB we focus on spatially localized solutions.
Aiming for a presentation accessible to readers who are not expert in integrable systems, we review some properties of the Lax system and the Darboux transformation in Section~\ref{s prelim}. In Section~\ref{s constant} we consider the constant-amplitude wave. We compute the Lax spectrum and establish the explicit relation between the solutions of the linearized NLS equation obtained by a standard Fourier analysis and the ones generated by the Lax system. We focus on spatially periodic and spatially localized solutions. Then using the Darboux transformation, we determine the Lax spectra and the resulting solutions of the linearized NLS equations for AB in Section~\ref{s AB} and for KMB in Section~\ref{s KMB}. The paper is concluded
in Section \ref{sec-conclusion} with a discussion of further directions.
\medskip
\noindent
{\bf Acknowledgments:}
M. Haragus was partially supported by the project Optimal (ANR-20-CE30-0004) and the EUR EIPHI program (ANR-17-EURE-0002). D. E. Pelinovsky was partially supported by the National Natural Science Foundation of China (No. 11971103).
\section{Preliminaries}\label{s prelim}
We recall the Lax system for the NLS equation (\ref{nls}), its connection with the linearized NLS equation, and the Darboux transformation for the NLS equation and its Lax system.
For our purpose, it is convenient to write $\psi(x,t) = u(x,t) e^{it}$, where $u$ satisfies the normalized NLS equation
\begin{equation}
i u_t + \frac{1}{2}u_{xx} + (|u|^2 - 1) u = 0.
\label{nls-u}
\end{equation}
The constant-amplitude wave $\psi(x,t) = e^{it}$ of the NLS equation \eqref{nls} becomes $u(x,t) =1$ and the breathers \eqref{AB}, \eqref{KM}, and \eqref{P} provide exact solutions of the normalized equation \eqref{nls-u} without the factor $e^{it}$ in these formulas.
\subsection{Lax system}\label{ss lin nls}
The normalized NLS equation \eqref{nls-u} for $u = u(x,t)$ is a compatibility condition $\varphi_{xt} = \varphi_{tx}$ for a $2$-vector $\varphi = \varphi(x,t)$ satisfying the Lax system
\begin{equation}
\label{lax-1}
\varphi_x = U(u,\lambda) \varphi, \quad
U(u,\lambda) = \left(\begin{array}{cc}
\lambda & u \\
- \bar{u} & -\lambda
\end{array}
\right)
\end{equation}
and
\begin{equation}
\label{lax-2}
\varphi_t = V(u,\lambda) \varphi, \quad
V(u,\lambda) = i \left(\begin{array}{cc}
\lambda^2 + \frac{1}{2} (|u|^2 - 1) & \lambda u + \frac{1}{2} u_x \\
-\lambda \bar{u} + \frac{1}{2} \bar{u}_x & -\lambda^2 - \frac{1}{2} (|u|^2-1)
\end{array}
\right),
\end{equation}
where $\lambda$ is a complex number. The $x$-derivative equation (\ref{lax-1}) is the Zakharov--Shabat (ZS) spectral problem, which is a particular case of the AKNS spectral problem; see the pioneering works \cite{ZS} and \cite{AKNS}.
The $t$-derivative equation (\ref{lax-2}) gives the time evolution
of the solution $\varphi(x,t)$ of the ZS spectral problem (\ref{lax-1}).
Spatially bounded solutions of the Lax system are referred to as {\em eigenfunctions} and the corresponding values $\lambda$ as {\em eigenvalues}. The set of eigenvalues $\lambda$ forms the {\em Lax spectrum} of the ZS spectral problem (\ref{lax-1}). Rigorously, this terminology corresponds to considering the ZS spectral problem in the space $C_b^0(\mathbb R)$ of $x$-dependent functions which are bounded and continuous on~$\mathbb R$. However, depending on the properties of the solution $u = u(x,t)$ to the NLS equation (\ref{nls-u}), other function spaces may be considered as, for instance, the space of $L$-periodic functions $L^2_{\rm per}(0,L)$, the space of $L$-antiperiodic functions $L^2_{\rm antiper}(0,L)$, or the space of localized functions $L^2(\mathbb{R})$. The choice of the function space affects the nature of the Lax spectrum, as is usual for spectra of differential operators. For the spaces mentioned above, the Lax spectrum is a purely point spectrum consisting of isolated eigenvalues for $L^2_{\rm per}(0,L)$ or $L^2_{\rm antiper}(0,L)$, whereas it is a purely continuous spectrum, up to possibly a finite number of eigenvalues, for $L^2(\mathbb{R})$.
The ZS spectral problem (\ref{lax-1}) can be rewritten as a classical eigenvalue problem
\begin{equation}
\label{eigen}
\left( \mathcal{L} - \lambda I \right) \varphi = 0, \quad
\mathcal{L}:= \left( \begin{array}{cc} \partial_x & -u \\
-\bar{u} & -\partial_x \end{array} \right).
\end{equation}
In particular, this allows one to define generalized eigenfunctions and algebraic multiplicities of eigenvalues in the usual way via the bounded solutions
of $(\mathcal{L}- \lambda I)^k \varphi = 0$ for $k \in \mathbb{N}$.
If $\lambda$ is a double eigenvalue with the only eigenfunction $\varphi$ satisfying (\ref{eigen}), then there exists a generalized eigenfunction $\varphi_g$ satisfying the nonhomogeneous linear equation
\begin{equation}
\label{eigen-generalized}
\left( \mathcal{L} - \lambda I \right) \varphi_g = \varphi.
\end{equation}
In this case, $\lambda$ has geometric multiplicity {\em one} and algebraic
multiplicity {\em two}.
\begin{remark}
\label{remark-symmetry}
Solutions of the Lax equations (\ref{lax-1}) and (\ref{lax-2}) satisfy the following symmetry. If $\varphi = (p,q)^T$ is a solution for $\lambda$,
then $\varphi = (-\bar{q},\bar{p})^T$ is a solution for $-\bar{\lambda}$.
\end{remark}
Taking a solution $u = u(x,t)$ to the normalized NLS equation (\ref{nls-u}), solutions $v = v(x,t)$ of the corresponding linearized NLS equation
\begin{equation}
i v_t + \frac{1}{2} v_{xx} + (2 |u|^2 - 1) v + u^2 \bar{v} = 0,
\label{nls-lin}
\end{equation}
can be constructed from solutions $\varphi = \varphi(x,t)$ of the Lax system (\ref{lax-1})--(\ref{lax-2}). The following well-known property is a result of a straightforward calculation.
\begin{proposition}
\label{prop-NLS-lin} Assume $u$ is a solution to the normalized NLS equation (\ref{nls-u}).
If $\varphi = (\varphi_1,\varphi_2)^T$ is a solution
to the Lax system (\ref{lax-1})--(\ref{lax-2}) for some $\lambda$, then
\begin{equation}
\label{v-relation}
v = \varphi_1^2 - \bar{\varphi}_2^2, \quad \bar{v} = -\varphi_2^2 + \bar{\varphi}_1^2
\end{equation}
and
\begin{equation}
\label{v-relation-another}
v = i(\varphi_1^2 + \bar{\varphi}_2^2), \quad \bar{v} = -i(\varphi_2^2 + \bar{\varphi}_1^2)
\end{equation}
are solutions to the linearized NLS equation (\ref{nls-lin}).
\end{proposition}
\begin{proof}
Due to the symmetry in Remark \ref{remark-symmetry} and the linear superposition principle, it is sufficient
to confirm the relations (\ref{v-relation}) and (\ref{v-relation-another}) by using $v = \varphi_1^2$ and $\bar{v} = -\varphi_2^2$. This is obtained directly:
\begin{eqnarray*}
&& i v_t + \frac{1}{2} v_{xx} + (2 |u|^2 - 1) v + u^2 \bar{v} \\
&& = \varphi_1 (2 i \varphi_{1t} + \varphi_{1xx}) + (\varphi_{1x})^2
+ (2|u|^2-1) \varphi_1^2 - u^2 \varphi_2^2 \\
&& = \varphi_1 ((1-|u|^2) \varphi_1 - 2 \lambda^2 \varphi_1
-2 \lambda u \varphi_2 - u_x \varphi_2 + \lambda (\lambda \varphi_1 + u \varphi_2) + u_x \varphi_2 + u (-\bar{u} \varphi_1 - \lambda \varphi_2)) \\
&& \phantom{t} + (\lambda \varphi_1 + u \varphi_2)^2 +
(2|u|^2 -1) \varphi_1^2 - u^2 \varphi_2^2 \\
&& = 0.
\end{eqnarray*}
Extending the solution by using (\ref{v-relation}) and (\ref{v-relation-another}) ensures that $\bar{v}$ is a complex conjugate of $v$.
\end{proof}
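As a sanity check of Proposition \ref{prop-NLS-lin}, take the constant background $u=1$ and a plane-wave solution of the Lax system (\ref{lax-1})--(\ref{lax-2}); any scalar multiple of an eigenvector works, and we pick the sample value $\lambda = 1/2$:

```python
import sympy as sp

x, t = sp.symbols('x t', real=True)
lam = sp.Rational(1, 2)              # sample value of lambda in (0, 1)
k = 2*sp.sqrt(1 - lam**2)            # here k = sqrt(3)

# Plane-wave solution of the Lax system for the constant background u = 1,
# normalized so that the first component is 1 (any scalar multiple works):
E = sp.exp(-sp.I*k*(x + sp.I*lam*t)/2)
p = E
q = -(lam + sp.I*k/2)*E

# phi_x = U phi and phi_t = V phi with u = 1:
assert sp.simplify(sp.diff(p, x) - (lam*p + q)) == 0
assert sp.simplify(sp.diff(q, x) - (-p - lam*q)) == 0
assert sp.simplify(sp.diff(p, t) - sp.I*(lam**2*p + lam*q)) == 0
assert sp.simplify(sp.diff(q, t) - sp.I*(-lam*p - lam**2*q)) == 0

# v = p^2 - conj(q)^2 solves the linearized NLS equation with u = 1:
v = p**2 - sp.conjugate(q)**2
res = sp.I*sp.diff(v, t) + sp.diff(v, x, 2)/2 + v + sp.conjugate(v)
assert sp.simplify(sp.expand(res)) == 0
```

Note that this particular $v$ grows in time like $e^{k\lambda t}$, which is the classical modulational instability of the constant background.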
\begin{remark}
\label{remark-completeness}
Solutions $\varphi = \varphi(x,t)$ to the Lax system (\ref{lax-1})--(\ref{lax-2}) which are bounded functions in $x$ generate bounded solutions $v = v(x,t)$ to the linearized NLS equation (\ref{nls-lin}) by means of the transformations (\ref{v-relation}) and (\ref{v-relation-another}). On the other hand, solutions $\varphi = \varphi(x,t)$ which are unbounded functions in $x$ generate unbounded solutions $v = v(x,t)$ but the linear superposition of unbounded solutions may become bounded \cite{GS-2021}. This latter property must be taken into account when constructing solutions to the linearized NLS equation (\ref{nls-lin}) either in $L^2_{\rm per}(0,L)$ or in $L^2(\mathbb{R})$ by using Proposition \ref{prop-NLS-lin}.
\end{remark}
The result in Proposition~\ref{prop-NLS-lin} can be extended by taking two linearly independent solutions $\varphi = (\varphi_1,\varphi_2)^T$ and $\phi = (\phi_1,\phi_2)^T$ to the Lax system (\ref{lax-1})--(\ref{lax-2}) for the same value of $\lambda$. Then from these two solutions we can construct the three pairs of solutions of the linearized NLS equation (\ref{nls-lin}) given in Table \ref{table-1}. The symmetry of the Lax system in Remark~\ref{remark-symmetry} implies that the solutions of the Lax system for $-\bar\lambda$ lead, up to sign, to the same solutions of the linearized NLS equation~(\ref{nls-lin}).
\begin{table}[hbtp!]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Pair I & Pair II & Pair III \\
\hline
& & \\
$v = \varphi_1^2 - \bar{\varphi}_2^2$ & $v = \varphi_1 \phi_1 - \bar{\varphi}_2 \bar{\phi}_2$ & $v = \phi_1^2 - \bar{\phi}_2^2$ \\
\hline
& & \\
$v = i \varphi_1^2 + i \bar{\varphi}_2^2$ & $v = i \varphi_1 \phi_1 + i \bar{\varphi}_2 \bar{\phi}_2$ & $v = i\phi_1^2 + i \bar{\phi}_2^2$ \\
\hline
\end{tabular}
\end{center}
\vspace*{1ex}
\caption{Table of possible solutions of the linearized NLS equation (\ref{nls-lin}) generated from two solutions $\varphi = (\varphi_1,\varphi_2)^T$ and $\phi = (\phi_1,\phi_2)^T$ to the Lax system (\ref{lax-1})--(\ref{lax-2}) for the same value of $\lambda$.}
\label{table-1}
\end{table}
\begin{remark}
\label{remark-double}
If $\lambda$ is a double eigenvalue with the only eigenfunction
$\varphi = (\varphi_1,\varphi_2)^T$ satisfying (\ref{eigen})
and the generalized eigenfunction
$\varphi_g = (\varphi_{g1},\varphi_{g2})^T$ satisfying (\ref{eigen-generalized}), then the linearized NLS equation
(\ref{nls-lin}) admits the solutions
\begin{equation}
\label{solutions-generalized}
v = 2 \varphi_1 \varphi_{g1} - 2 \bar{\varphi}_2 \bar{\varphi}_{g2}, \quad
v = 2 i \varphi_1 \varphi_{g1} + 2 i \bar{\varphi}_2 \bar{\varphi}_{g2},
\end{equation}
in addition to the two solutions in Pair I of Table \ref{table-1}.
\end{remark}
\subsection{Darboux transformation}
For the construction of breathers, we use the following version of the one-fold Darboux transformation from \cite[Propositions 2.2 and 3.1]{ContPel}.
\begin{proposition}
\label{prop-Darboux}
Assume that $u = u_0(x,t)$ is a solution to the normalized NLS equation (\ref{nls-u}) and pick $\lambda_0 \in \mathbb{C}$. If $\varphi = (p_0,q_0)^T$ is a particular solution of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = u_0$ and $\lambda = \lambda_0$, then
\begin{equation}
\label{DT-potential}
\hat{u}_0 = u_0 + \frac{2(\lambda_0 + \bar{\lambda}_0) p_0 \bar{q}_0}{|p_0|^2 + |q_0|^2}
\end{equation}
is a solution to the normalized NLS equation (\ref{nls-u}) and
$\varphi = (\hat{p}_0,\hat{q}_0)^T$ with
\begin{equation}
\label{DT-eigen}
\left[ \begin{array}{l} \hat{p}_0 \\ \hat{q}_0 \end{array} \right] = \frac{\lambda_0 + \bar{\lambda}_0}{|p_0|^2 + |q_0|^2} \left[ \begin{array}{l} -\bar{q}_0 \\ \bar{p}_0\end{array} \right]
\end{equation}
is a particular solution of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = \hat{u}_0$ and $\lambda = \lambda_0$. Furthermore, the following identity holds:
\begin{equation}
\label{DT-squared}
|\hat{u}_0|^2 = |u_0|^2 + \frac{\partial^2}{\partial x^2} \log(|p_0|^2 + |q_0|^2).
\end{equation}
\end{proposition}
\begin{remark}
By the symmetry in Remark \ref{remark-symmetry}, $\varphi = (-\bar{q}_0,\bar{p}_0)^T$ is a solution of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = u_0$ and $\lambda = -\bar{\lambda}_0$,
whereas $\varphi = (-\bar{\hat{q}}_0,\bar{\hat{p}}_0)^T$ is a solution of the Lax system for $u = \hat{u}_0$ and $\lambda = -\bar{\lambda}_0$.
\end{remark}
\begin{remark}
The result in Proposition~\ref{prop-Darboux} provides new solutions to the normalized NLS equation (\ref{nls-u}), and to the associated Lax system (\ref{lax-1})--(\ref{lax-2}), when $\lambda_0+\bar\lambda_0\not=0$, i.e., when $\lambda_0$ is not purely imaginary. When
$\lambda_0+\bar\lambda_0=0$, it gives the same solution $\hat u_0=u_0$ to the normalized NLS equation (\ref{nls-u}) and the trivial solution $\varphi = (0,0)^T$ to the Lax system (\ref{lax-1})--(\ref{lax-2}). Breathers are found by taking $u_0=1$ and positive values $\lambda_0$: $\lambda_0\in(0,1)$ for AB, $\lambda_0\in(1,\infty)$ for KMB, and $\lambda_0=1$ for PRW.
\end{remark}
In addition to the Darboux transformation $u_0 \mapsto \hat{u}_0$ in Proposition~\ref{prop-Darboux}, we have a Darboux transformation
$\Phi(\lambda) \mapsto \hat{\Phi}(\lambda)$ between solutions of the Lax system
(\ref{lax-1})--(\ref{lax-2}). More precisely, assuming that $\Phi(\lambda)$ is a $2\times2$ matrix solution to the Lax system with $u = u_0$, then
\begin{equation}
\label{fund-matrix}
\hat{\Phi}(\lambda) = D(\lambda) \Phi(\lambda)
\end{equation}
is a $2\times2$ matrix solution to the Lax system with $u = \hat{u}_0$
if $\lambda \neq \{ \lambda_0,-\bar{\lambda}_0\}$,
where the Darboux matrix $D(\lambda)$ is given by
\begin{equation}
\label{DT-matrix}
D(\lambda) := I + \frac{1}{\lambda - \lambda_0} \left[ \begin{array}{l} \hat{p}_0 \\ \hat{q}_0 \end{array} \right] \left[ -q_0 \;\; p_0 \right]
\end{equation}
and $I$ stands for the $2\times2$ identity matrix. Since
$$
\det D(\lambda) = \frac{\lambda+\bar\lambda_0}{\lambda-\lambda_0},
$$
the matrix $D(\lambda)$ is invertible, and the correspondence between the $2\times2$ matrix solutions $\Phi(\lambda)$ and $\hat\Phi(\lambda)$ is one-to-one, when $\lambda \neq \{ \lambda_0,-\bar{\lambda}_0\}$.
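The determinant formula follows from the rank-one identity $\det(I + wv^T) = 1 + v^T w$ together with $-q_0\hat{p}_0 + p_0\hat{q}_0 = \lambda_0 + \bar{\lambda}_0$, which is a consequence of (\ref{DT-eigen}); a symbolic check:

```python
import sympy as sp

lam, lam0, p0, q0 = sp.symbols('lam lam0 p0 q0', complex=True)

# Darboux matrix D(lam) built from a seed solution (p0, q0) at lam0, with
# (hat p0, hat q0) as in the one-fold Darboux transformation above:
N = p0*sp.conjugate(p0) + q0*sp.conjugate(q0)
c = lam0 + sp.conjugate(lam0)
hp0 = -c*sp.conjugate(q0)/N
hq0 = c*sp.conjugate(p0)/N
D = sp.eye(2) + sp.Matrix([[hp0], [hq0]])*sp.Matrix([[-q0, p0]])/(lam - lam0)

detD = sp.simplify(D.det())
assert sp.simplify(detD - (lam + sp.conjugate(lam0))/(lam - lam0)) == 0
```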
\section{Constant-amplitude background}\label{s constant}
Here we discuss the simple case of the constant solution $u=1$. We determine the Lax spectrum and compare the set of solutions of the linearized NLS equation
\begin{equation}
i v_t + \frac{1}{2} v_{xx} + v + \bar{v} = 0
\label{nls-lin-constant}
\end{equation}
obtained using standard tools of Fourier analysis with the one given by the solutions of the Lax system \eqref{lax-1}--\eqref{lax-2}. This comparison will be useful in the study of linear instability of AB and KMB in Sections \ref{s AB} and \ref{s KMB} respectively.
\subsection{Lax spectrum}
Since the solution $u=1$ is constant, the Lax system (\ref{lax-1})--(\ref{lax-2}) can be solved explicitly.
Two linearly independent solutions exist for every
$\lambda$ since the solution space of the Lax system (\ref{lax-1})--(\ref{lax-2}) is two-dimensional. We only consider real and purely imaginary values of $\lambda$ because the solutions found for other complex values of $\lambda$ are unbounded.
For $\lambda \in \mathbb{R}_+$, two solutions to the Lax equations (\ref{lax-1})--(\ref{lax-2}) are given by:
\begin{equation}
\label{eigenvectors-constant-background}
\varphi = \left[ \begin{array}{l} \sqrt{\lambda - \frac{i}{2} k(\lambda)} \\
- \sqrt{\lambda + \frac{i}{2} k(\lambda)} \end{array} \right] e^{-\frac{1}{2} i k(\lambda) (x + i\lambda t)}, \quad
\phi = \left[ \begin{array}{l} \sqrt{\lambda + \frac{i}{2} k(\lambda)} \\
- \sqrt{\lambda - \frac{i}{2} k(\lambda)} \end{array} \right] e^{+\frac{1}{2} ik(\lambda) (x + i\lambda t)},
\end{equation}
where $k(\lambda) := 2 \sqrt{1-\lambda^2}$.
These solutions are bounded for $\lambda \in (0,1]$ and are linearly independent for $\lambda \neq 1$, that is, for $k(\lambda)\neq 0$. For $\lambda=1$, two linearly independent solutions are given by
\begin{equation}
\label{eigenvectors-constant-background-zero}
\lambda = 1 : \quad
\varphi = \left[ \begin{array}{l} 1\\
-1 \end{array} \right], \quad
\phi = \left[ \begin{array}{l} x+it + 1 \\ -x-it \end{array} \right].
\end{equation}
Solutions for $\lambda\in\mathbb R_-$, and in particular, for $\lambda \in [-1,0)$, are found from the symmetry property of the Lax equations in Remark~\ref{remark-symmetry}. This implies that any $\lambda\in(-1,0)\cup(0,1)$ is a geometrically double eigenvalue, whereas $\lambda=\pm1$ are geometrically simple.
For $\lambda = i \gamma$ with $\gamma \in \mathbb{R}$, two solutions to the Lax equations (\ref{lax-1})--(\ref{lax-2}) are given by:
\begin{equation}
\label{eigenvectors-constant-background-gamma}
\varphi = \left[ \begin{array}{l} \sqrt{\frac{1}{2} k(\gamma) - \gamma} \\
- i \sqrt{\frac{1}{2} k(\gamma) + \gamma} \end{array} \right] e^{-\frac{1}{2} i k(\gamma) (x- \gamma t)}, \quad
\phi = \left[ \begin{array}{l} \sqrt{\frac{1}{2} k(\gamma) + \gamma} \\
i \sqrt{\frac{1}{2} k(\gamma) - \gamma} \end{array} \right] e^{+\frac{1}{2} ik(\gamma) (x- \gamma t)},
\end{equation}
where $k(\gamma) := 2 \sqrt{1 + \gamma^2}$.
These solutions are bounded and linearly independent for every $\gamma \in \mathbb R_+$. Solutions for $\gamma \in \mathbb R_-$ are found from the symmetry property of the Lax equations in Remark~\ref{remark-symmetry}. Consequently, any $\lambda = i \gamma$ with $\gamma \in \mathbb{R} \backslash \{0\}$ is a geometrically double eigenvalue.
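Although not part of the argument, the normalization underlying the eigenvectors above can be checked numerically: for both branches of the dispersion relation, the product of the two radicands appearing in the vector entries equals one. The following Python sketch (the script, sample values, and tolerances are ours, not from the text) verifies this together with the ranges $k(\lambda)\in(0,2)$ and $k(\gamma)\in(2,\infty)$.

```python
import cmath

def k_real(lam):
    # k(lambda) = 2*sqrt(1 - lambda^2) for lambda in (0, 1)
    return 2 * cmath.sqrt(1 - lam**2)

def k_imag(gam):
    # k(gamma) = 2*sqrt(1 + gamma^2) for lambda = i*gamma, gamma real
    return 2 * cmath.sqrt(1 + gam**2)

# (lambda - i k/2)(lambda + i k/2) = lambda^2 + 1 - lambda^2 = 1
for lam in [0.1, 0.5, 0.9]:
    k = k_real(lam)
    assert abs((lam - 0.5j * k) * (lam + 0.5j * k) - 1) < 1e-12
    assert 0 < k.real < 2

# (k/2 - gamma)(k/2 + gamma) = 1 + gamma^2 - gamma^2 = 1
for gam in [0.3, 1.0, 2.5]:
    k = k_imag(gam)
    assert abs((0.5 * k - gam) * (0.5 * k + gam) - 1) < 1e-12
    assert k.real > 2
```

These identities explain why products such as $\sqrt{\lambda - \frac{i}{2}k(\lambda)}\sqrt{\lambda + \frac{i}{2}k(\lambda)}$ simplify to $1$ in later computations.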
For $\lambda = 0$ ($\gamma = 0$), there are two linearly independent solutions,
\begin{equation}
\label{eigenvectors-constant-background-two}
\lambda = 0 : \quad \varphi = \left[ \begin{array}{c} 1\\
-i \end{array} \right] e^{-i x}, \quad
\phi = \left[ \begin{array}{c} 1 \\ i \end{array} \right] e^{+ix},
\end{equation}
implying that $\lambda=0$ is a geometrically double eigenvalue. In contrast to the eigenvalues above, the eigenvalue $\lambda = 0$ has algebraic multiplicity four because the bounded solutions of $\mathcal{L}^2 \varphi = 0$ are spanned by (\ref{eigenvectors-constant-background-two}) and two additional solutions
\begin{equation}
\label{eigenvectors-constant-background-two-generalized}
\lambda = 0 : \quad \varphi_{\rm g} = \left[ \begin{array}{c} t\\
-1 - i t \end{array} \right] e^{-i x}, \quad
\phi_{\rm g} = \left[ \begin{array}{c} -t \\
-1 - it \end{array} \right] e^{+i x}.
\end{equation}
These computations are summarized in the following lemma, where
we have also checked algebraic multiplicities of all eigenvalues.
\begin{lemma}
\label{lemma-spectrum}
The Lax spectrum of the spectral problem (\ref{lax-1}) with $u = 1$ in the space $C_b^0(\mathbb{R})$ of bounded continuous functions is the set
\begin{equation}
\label{spectrum-constant}
\Sigma_0 = i\mathbb{R} \cup [-1,1],
\end{equation}
and the following properties hold:
\begin{enumerate}
\item $\lambda = \pm 1$ are algebraically simple eigenvalues;
\item each $\lambda \in \Sigma_0 \backslash \{0,\pm 1\}$ is a geometrically and algebraically double eigenvalue;
\item $\lambda = 0$ is an eigenvalue with geometric multiplicity two and algebraic multiplicity four.
\end{enumerate}
\end{lemma}
\begin{proof}
Geometric multiplicity of all eigenvalues has been checked with direct computations resulting in (\ref{eigenvectors-constant-background}), (\ref{eigenvectors-constant-background-zero}), (\ref{eigenvectors-constant-background-gamma}), and (\ref{eigenvectors-constant-background-two}).
In order to check the algebraic multiplicity of eigenvalues,
we note that for each eigenvalue $\lambda$,
the bounded eigenfunctions $\varphi$ and $\phi$ in $C^0_b(\mathbb{R})$ are periodic in $x$ with some spatial period $L$. For
the algebraic multiplicity of $\lambda$, we need to solve $(\mathcal{L} - \lambda I) \varphi_g = \varphi$ and $(\mathcal{L} - \lambda I) \phi_g = \phi$ in the space of periodic functions with the same period $L$. Consequently, we can check the Fredholm condition in $L^2(0,L)$ equipped with the standard inner product $\langle \cdot, \cdot \rangle$.
Let $\varphi = (\varphi_1,\varphi_2)^T$ be the bounded eigenfunction of the eigenvalue problem $(\mathcal{L}- \lambda I) \varphi = 0$. By the symmetry, the adjoint problem $\left( \mathcal{L}^* - \bar{\lambda} I \right) \varphi^* = 0$ admits the eigenfunction $\varphi^* = (\bar{\varphi}_2,\bar{\varphi}_1)^T$. If $\lambda \in \Sigma_0 \backslash \{+1,-1\}$, there exists another linearly independent eigenfunction $\phi = (\phi_1,\phi_2)^T$, for which we similarly have $\phi^* = (\bar{\phi}_2,\bar{\phi}_1)^T$. Since
$\langle \phi^*,\varphi \rangle = \langle \varphi^*, \phi \rangle = 0$, the generalized eigenfunctions $\varphi_g$ and $\phi_g$
exist if and only if $\langle \varphi^*, \varphi \rangle = 0$ and $\langle \phi^*, \phi \rangle = 0$.
For $\lambda \in (0,1)$, we obtain
\begin{eqnarray*}
\langle \varphi^*, \varphi \rangle = -2 \lambda L e^{\lambda k(\lambda) t}, \quad
\langle \phi^*, \phi \rangle = -2 \lambda L e^{-\lambda k(\lambda) t},
\end{eqnarray*}
which are both nonzero for $\lambda \in (0,1)$. For $\lambda = 1$,
only one linearly independent eigenfunction $\varphi$ in (\ref{eigenvectors-constant-background-zero}) exists and we
check that $\langle \varphi^*,\varphi \rangle = -2 L \neq 0$.
For $\lambda = i \gamma$ with $\gamma \in \mathbb{R}$, we obtain
\begin{eqnarray*}
\langle \varphi^*, \varphi \rangle = -2 i \gamma L e^{i \gamma k(\gamma) t}, \quad
\langle \phi^*, \phi \rangle = -2 i \gamma L e^{-i \gamma k(\gamma) t},
\end{eqnarray*}
which are both nonzero for $\gamma \neq 0$.
Hence, the algebraic multiplicity of all nonzero eigenvalues is equal to their geometric multiplicity.
For the eigenvalue $\lambda=0$ with the eigenfunctions (\ref{eigenvectors-constant-background-two}), we obtain $\langle \varphi^*, \varphi \rangle = \langle \phi^*, \phi \rangle = 0$, in agreement with the existence of the generalized eigenfunctions (\ref{eigenvectors-constant-background-two-generalized}). On the other hand,
we also have
$$
\langle \varphi^*, \varphi_g \rangle = -L, \quad
\langle \phi^*, \phi_g \rangle = -L,
$$
which implies that no new generalized eigenfunctions satisfying $\mathcal{L}^3 \varphi = 0$ exist. Hence, the zero eigenvalue has algebraic multiplicity equal to four.
\end{proof}
If the space $C_b^0(\mathbb{R})$ is replaced by $L^2(\mathbb R)$ in Lemma~\ref{lemma-spectrum}, the Lax spectrum does not change, the difference being that $\Sigma_0$ becomes a purely continuous spectrum in $L^2(\mathbb R)$. In the space $L^2_{\rm per}(0,L)$ of $L$-periodic functions, the Lax spectrum only contains the eigenvalues $\lambda\in\Sigma_0$ with $L$-periodic associated eigenfunctions, hence the purely point spectrum is located at
\begin{equation}
\label{spectrum-constant-periodic}
\Sigma_0^{(P)}= \{ \pm \lambda^{(P)}_m, \;\; m \in \{0\} \cup \mathbb{N}_{\rm even}\}, \quad
\lambda^{(P)}_m := \sqrt{1-\frac{\pi^2}{L^2} m^2}.
\end{equation}
Similarly, in the space $L^2_{\rm antiper}(0,L)$ of $L$-antiperiodic functions, the Lax spectrum only contains the eigenvalues $\lambda\in\Sigma_0$ with $L$-antiperiodic associated eigenfunctions, hence the purely point spectrum is located at
\begin{equation}
\label{spectrum-constant-antiperiodic}
\Sigma_0^{(A)}= \{ \pm \lambda^{(A)}_m, \;\; m \in \mathbb{N}_{\rm odd}\}, \quad
\lambda^{(A)}_m := \sqrt{1-\frac{\pi^2}{L^2} m^2}.
\end{equation}
The algebraic and geometric multiplicities of these eigenvalues remain the same, as given by Lemma~\ref{lemma-spectrum}. Notice that $\lambda=0$ is an eigenvalue only for particular periods $L\in\pi\mathbb N$.
Figure \ref{fig-Lax} illustrates these results. The left panel shows the purely continuous spectrum of $\Sigma_0$ in $L^2(\mathbb{R})$ given by (\ref{spectrum-constant}). The right panel shows the union $\Sigma_0^{(P)} \cup \Sigma_0^{(A)}$ of the purely point spectra in $L^2_{\rm per}(0,L)$ and $L^2_{\rm antiper}(0,L)$ given by (\ref{spectrum-constant-periodic}) and (\ref{spectrum-constant-antiperiodic}), respectively.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=6cm,height=4cm]{SpectrumLax1}
\includegraphics[width=6cm,height=4cm]{SpectrumLax2}
\end{center}
\caption{Left: The Lax spectrum $\Sigma_0$ in $L^2(\mathbb{R})$.
Right: The union $\Sigma_0^{(P)} \cup \Sigma_0^{(A)}$ of the Lax spectra in $L^2_{\rm per}(0,L)$ and $L^2_{\rm antiper}(0,L)$ for some positive $L\notin\pi\mathbb N$.}
\label{fig-Lax}
\end{figure}
\subsection{Localized solutions}
Since the linearized NLS equation \eqref{nls-lin-constant} has constant coefficients, the Fourier transform provides a basis of bounded solutions in $x$ which can be used to represent a general solution in $L^2(\mathbb{R})$.
The following proposition gives the result.
\begin{proposition}
\label{theorem-localized solutions}
For every $v_0 \in L^2(\mathbb{R})$, there exists a unique solution
$v \in C^0(\mathbb{R},L^2(\mathbb{R}))$ to the linearized NLS equation (\ref{nls-lin-constant}) satisfying $v(x,0) = v_0(x)$ in the form of a linear superposition
\begin{equation}\label{basis Fourier}
v(x,t)=\int_0^\infty \left[c_k^+ \widetilde v_k^+(t) + c_k^- \widetilde v_k^-(t)\right] \cos(kx)dk +
\int_0^\infty\left[ d_k^+ \widetilde v_k^+(t) + d_k^- \widetilde v_k^-(t)\right]\sin(kx)dk,
\end{equation}
where coefficients $c_k^\pm$ and $d_k^\pm$ are uniquely found from $v_0 \in L^2(\mathbb{R})$, and the functions ${\widetilde v}_k^{\pm}(t)$ for $k \geq 0$ are given as follows:
\begin{eqnarray}
\label{vk0}
k = 0 : & \quad & \left\{ \begin{array}{l} \widetilde v_0^+(t)= 2i,\\ \widetilde v_0^-(t)= 1+2it,\end{array} \right. \\
\label{vkpm}
k \in (0,2) : & \quad & \left\{ \begin{array}{l} \widetilde v_k^+(t)=(2i\lambda+k)e^{k\lambda t},\\
\widetilde v_k^-(t)=(2i\lambda-k)e^{-k\lambda t},\end{array} \right.
\quad \lambda=\lambda(k)=\frac{1}{2} \sqrt{4 - k^2}, \\
\label{vk2}
k = 2 : & \quad & \left\{ \begin{array}{l} \widetilde v_2^+(t)= 2,\\
\widetilde v_2^-(t)= i+2t, \end{array} \right. \\
\label{vkpm2}
k \in (2,\infty) : & \quad & \left\{ \begin{array}{l} \widetilde v_k^+(t)=k\cos(k\gamma t)-2i\gamma\sin(k\gamma t), \\
\widetilde v_k^-(t)=2i\gamma\cos(k\gamma t)+k\sin(k\gamma t), \end{array} \right.
\quad \gamma=\gamma(k)=\frac{1}{2} \sqrt{k^2 - 4}.
\end{eqnarray}
\end{proposition}
\begin{proof}
The proof is based on separation of variables and straightforward computations. Indeed, substituting $v(x,t) = \widetilde v_k(t) e^{ikx}$ into (\ref{nls-lin-constant}) yields the linear differential equation
\[
i \frac d{dt} \widetilde v_k +\left(1-\frac{k^2}2\right) \widetilde v_k +\overline{\widetilde v}_k = 0,
\]
with two linearly independent solutions $\widetilde v_k^+(t)$ and $\widetilde v_k^-(t)$
given by (\ref{vk0}), (\ref{vkpm}), (\ref{vk2}), and (\ref{vkpm2}) for different values of $k \geq 0$. Completeness of the basis of bounded
functions in $L^2(\mathbb{R})$ follows from Fourier theory.
\end{proof}
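As a quick numerical sanity check (our own, not part of the proof), the functions $\widetilde v_k^{\pm}(t)$ from (\ref{vk0})--(\ref{vkpm2}) can be verified to solve the reduced equation $i \frac{d}{dt}\widetilde v_k + (1-\frac{k^2}{2})\widetilde v_k + \overline{\widetilde v}_k = 0$, using a central finite difference for the time derivative:

```python
import cmath

def vk(k, t, sign):
    """Solutions tilde v_k^{+} (sign=+1) and tilde v_k^{-} (sign=-1)."""
    if k == 0:
        return 2j if sign > 0 else 1 + 2j * t
    if 0 < k < 2:
        lam = 0.5 * (4 - k**2) ** 0.5          # lambda(k)
        return (2j * lam + sign * k) * cmath.exp(sign * k * lam * t)
    if k == 2:
        return 2 if sign > 0 else 1j + 2 * t
    gam = 0.5 * (k**2 - 4) ** 0.5              # gamma(k)
    if sign > 0:
        return k * cmath.cos(k * gam * t) - 2j * gam * cmath.sin(k * gam * t)
    return 2j * gam * cmath.cos(k * gam * t) + k * cmath.sin(k * gam * t)

def residual(k, t, sign, h=1e-6):
    # i v' + (1 - k^2/2) v + conj(v), with v' from a central difference
    v = vk(k, t, sign)
    dv = (vk(k, t + h, sign) - vk(k, t - h, sign)) / (2 * h)
    return 1j * dv + (1 - k**2 / 2) * v + v.conjugate()

for k in [0.0, 0.7, 1.5, 2.0, 3.0, 5.0]:
    for sign in (+1, -1):
        assert abs(residual(k, 0.4, sign)) < 1e-5
```

The check covers all four regimes $k=0$, $k\in(0,2)$, $k=2$, and $k\in(2,\infty)$, including the linearly growing solutions at $k=0$ and $k=2$.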
\begin{remark}\label{spectrum bounded}
Since the points $k = 0$ and $k = 2$ are of measure zero in the integral (\ref{basis Fourier}), we actually do not need the solutions for $k=0$ and $k=2$. However, these solutions play a role when the space $L^2(\mathbb R)$ is replaced by the space $L^2_{\rm per}(0,L)$ of $L$-periodic functions.
\end{remark}
\begin{remark}
From the Fourier decomposition \eqref{basis Fourier} we can determine the spectrum of the linear operator from the linearized NLS equation (\ref{nls-lin-constant}), when acting in the space $L^2(\mathbb{R})$. We find
that the purely continuous spectrum is located at
$\{\pm k\lambda(k) \;;\; k\geq0\} = i\mathbb R \cup[-1,1]$. This implies the spectral instability of $u=1$ in the linearized NLS equation (\ref{nls-lin-constant}).
\end{remark}
It follows from Proposition \ref{prop-NLS-lin} that solutions of the linearized NLS equation \eqref{nls-lin-constant} can be constructed from solutions of the Lax equations (\ref{lax-1})--(\ref{lax-2}) with $u = 1$ and $\lambda\in\mathbb C$.
We show below how to recover the Fourier basis in the decomposition \eqref{basis Fourier} from the eigenfunctions associated to the Lax spectrum in Lemma~\ref{lemma-spectrum}. We use the three pairs of solutions given in Table~\ref{table-1}.
\vspace*{1.5ex}
{\bf Pair~II of Table~\ref{table-1}.} Using $\varphi$ and $\phi$ in either \eqref{eigenvectors-constant-background} or \eqref{eigenvectors-constant-background-gamma},
we obtain the same constant solutions
\begin{equation}\label{v0+}
v=0,\quad \widetilde v_0^+(t)=2i,
\end{equation}
where $\widetilde v_0^+$ is the same as in (\ref{vk0}). Using $\varphi$ and $\phi$ from \eqref{eigenvectors-constant-background-zero}, we find the solutions
\begin{equation}\label{v0-}
{\widetilde v}_0^-(t) = 1 + 2it,\quad v(x) = i(2x+1),
\end{equation}
where $\widetilde v_0^-$ is the same as in (\ref{vk0}). The two bounded solutions in the Fourier decomposition \eqref{basis Fourier} with $k=0$ are recovered.
\vspace*{1.5ex}
{\bf Pairs~I and III of Table~\ref{table-1}.}
Using the eigenfunction $\varphi$ from \eqref{eigenvectors-constant-background-zero} associated to the simple eigenvalue $\lambda=1$, we again obtain the solutions \eqref{v0+}. By the symmetry of the Lax system in Remark~\ref{remark-symmetry}, the solutions obtained for $\lambda=-1$ are, up to sign, the same.
Next, using $\varphi$ and $\phi$ in \eqref{eigenvectors-constant-background} for $\lambda\in(0,1)$, we find the following four linearly independent bounded solutions:
\begin{equation}
\label{Fourier-basis-I}
v_\lambda^{+}(x,t) = -(2i \lambda + k) e^{\lambda k t} \sin(kx) ,
\quad
v_{-\lambda}^{+}(x,t) = (2i \lambda + k) e^{\lambda k t} \cos(k x),
\end{equation}
and
\begin{equation}
\label{Fourier-basis-III}
v_\lambda^{-}(x,t) = (2i \lambda - k) e^{-\lambda k t} \sin(k x), \quad
v_{-\lambda}^{-}(x,t) = (2i \lambda - k) e^{-\lambda k t} \cos(k x),
\end{equation}
in which $k=k(\lambda)\in(0,2)$. These are, up to sign, equal to the four solutions in the Fourier decomposition \eqref{basis Fourier} given by \eqref{vkpm} so that we have a one-to-one correspondence between the solutions provided by the Lax system with $\lambda\in(0,1)$ and the solutions in \eqref{basis Fourier} with $k\in(0,2)$ through the equalities $k=k(\lambda)$ and $\lambda=\lambda(k)$. By the symmetry of the Lax system in Remark~\ref{remark-symmetry}, the solutions obtained for $\lambda\in(-1,0)$ are, up to sign, the same.
Using $\varphi$ and $\phi$ in \eqref{eigenvectors-constant-background-gamma} for $\lambda=i\gamma$ with $\gamma\in\mathbb R_+$, we only find two linearly independent solutions
\begin{equation}
\label{Fourier-basis-1-new}
\begin{array}{l}
v_{\lambda}^+(x,t) = k\cos( k x- k\gamma t) + 2 i \gamma \sin( kx - k\gamma t),\\
v_{-\lambda}^+(x,t) = -2 i \gamma \cos(kx -k \gamma t) + k\sin( kx-k\gamma t),
\end{array}
\end{equation}
in which $k=k(\gamma) \in (2,\infty)$. However, using $\varphi$ and $\phi$ in \eqref{eigenvectors-constant-background-gamma} with $-\gamma$ instead of $\gamma$,
we obtain two other linearly independent solutions,
\begin{equation}
\label{Fourier-basis-2-new}
\begin{array}{l}
v_{\lambda}^-(x,t) = k\cos (kx+ k\gamma t) - 2 i \gamma \sin (kx + k\gamma t), \\
v_{-\lambda}^-(x,t) = 2 i \gamma \cos (kx +k \gamma t) + k \sin (kx +k \gamma t).
\end{array}
\end{equation}
Solutions (\ref{Fourier-basis-1-new}) and (\ref{Fourier-basis-2-new}) are linear combinations of the four solutions in the Fourier decomposition \eqref{basis Fourier} with $k\in(2,\infty)$ given by \eqref{vkpm2}, and we have a one-to-one correspondence between these solutions through the equalities $k = k(\gamma)$ and $\gamma = \gamma(k)$.
Finally, using $\varphi$ and $\phi$ in \eqref{eigenvectors-constant-background-two} for $\lambda=0$, we obtain two linearly independent solutions
\begin{equation}\label{Fourier 0}
v_0^+(x,t) = -2 \sin(2x),\quad
v_{-0}^+(x,t) = 2 \cos(2x).
\end{equation}
These recover the two solutions with $k=2$ in the Fourier decomposition \eqref{basis Fourier} corresponding to $\widetilde v_2^+$ in \eqref{vk2}.
In order to recover the two solutions given by $\widetilde v_2^-$ in (\ref{vk2}), we use (\ref{solutions-generalized}) with the eigenfunctions (\ref{eigenvectors-constant-background-two}) and the generalized eigenfunctions (\ref{eigenvectors-constant-background-two-generalized}) to obtain
\begin{equation}\label{Fourier 0 gen}
v_0^-(x,t) = 2 (i + 2t) \cos(2x) - 2 \sin(2x),\quad
v_{-0}^-(x,t) = 2 (i + 2t) \sin(2x) + 2 \cos(2x).
\end{equation}
Using (\ref{solutions-generalized}) with $\phi$ and $\phi_g$ produces
the same solutions as (\ref{Fourier 0 gen}) up to the change of signs.
Solutions (\ref{Fourier 0}) and (\ref{Fourier 0 gen}) for $\lambda = 0$ recover the four solutions in the Fourier decomposition \eqref{basis Fourier} given by (\ref{vk2}) for $k=2$.
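The solutions at $\lambda = 0$ can also be checked directly against the linearized NLS equation (\ref{nls-lin-constant}). The following Python sketch (the finite-difference check and sample points are ours) verifies that $v_0^+$ from (\ref{Fourier 0}) and the linearly growing $v_0^-$ from (\ref{Fourier 0 gen}) both satisfy $i v_t + \frac{1}{2} v_{xx} + v + \bar v = 0$:

```python
import math

def v0plus(x, t):
    # v_0^+(x,t) = -2 sin(2x), from the eigenfunctions at lambda = 0
    return -2 * math.sin(2 * x) + 0j

def v0minus(x, t):
    # v_0^-(x,t) = 2(i + 2t) cos(2x) - 2 sin(2x), from the generalized
    # eigenfunctions at lambda = 0
    return 2 * (1j + 2 * t) * math.cos(2 * x) - 2 * math.sin(2 * x)

def lin_nls_residual(v, x, t, h=1e-4):
    # i v_t + (1/2) v_xx + v + conj(v), via central differences
    vt = (v(x, t + h) - v(x, t - h)) / (2 * h)
    vxx = (v(x + h, t) - 2 * v(x, t) + v(x - h, t)) / h**2
    return 1j * vt + 0.5 * vxx + v(x, t) + v(x, t).conjugate()

for x in [0.0, 0.3, 1.1]:
    for t in [0.0, 0.5]:
        assert abs(lin_nls_residual(v0plus, x, t)) < 1e-5
        assert abs(lin_nls_residual(v0minus, x, t)) < 1e-5
```

In particular, the secular factor $2t$ in $v_0^-$ is exactly compensated in the combination $v + \bar v$, which is why these solutions remain admissible despite their linear growth.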
Summarizing, the set of eigenfunctions of the Lax equations (\ref{lax-1})--(\ref{lax-2}) with $u = 1$ and $\lambda \in \Sigma_0$ allows us to recover the Fourier basis in the decomposition \eqref{basis Fourier}, except for the two functions $\widetilde v_2^-(t)\cos(2x)$ and $\widetilde v_2^-(t)\sin(2x)$ with $k=2$. The entire basis is recovered when also using the generalized eigenfunctions (\ref{eigenvectors-constant-background-two-generalized})
associated to the eigenvalue $\lambda=0$. This leads to an alternative expansion for solutions $v \in C^0(\mathbb{R},L^2(\mathbb{R}))$ to the linearized NLS equation (\ref{nls-lin-constant}),
\begin{equation}\label{basis Fourier 2}
v(x,t)=\int_0^\infty \left[c_k^+ v_{\lambda(k)}^+(x,t) + c_k^-v_{\lambda(k)}^-(x,t) + c_{-k}^+ v_{-\lambda(k)}^+(x,t) + c_{-k}^-v_{-\lambda(k)}^-(x,t)\right] dk,
\end{equation}
where coefficients $c_{\pm k}^\pm$ are uniquely defined from the initial condition $v(\cdot,0)=v_0 \in L^2(\mathbb R)$, and $v_{\pm\lambda(k)}^\pm(x,t)$ are given by \eqref{Fourier-basis-I}--\eqref{Fourier-basis-III} if $k\in(0, 2)$ and by \eqref{Fourier-basis-1-new}--\eqref{Fourier-basis-2-new} if $k\in(2,\infty)$. Since the points $k = 0$ and $k = 2$ are of measure zero in the integral (\ref{basis Fourier 2}), we do not need
solutions (\ref{v0+}), (\ref{v0-}), (\ref{Fourier 0}), and (\ref{Fourier 0 gen}).
\begin{remark}\label{localized solutions}
Since the solutions for $k = 0$ and $k = 2$ are not used in the expansion (\ref{basis Fourier 2}), the solutions found from Pair~II of Table~\ref{table-1}, and from the eigenvalues $\lambda=0$ and $\lambda=\pm1$, play no role in the dynamics of localized perturbations on the background of $u = 1$. In particular, solutions growing linearly in $t$ play no role in this dynamics.
All relevant solutions are obtained using the eigenfunctions of the Lax system for $\lambda\in\Sigma_0\setminus\{0,\pm1\}$ in Pairs I and III of Table \ref{table-1}.
\end{remark}
\subsection{Periodic solutions}\label{ss periodic1}
Solutions of the linearized NLS equation \eqref{nls-lin-constant} in the space $L^2_{\rm per}(0,L)$ of periodic functions with the fundamental period $L>0$ are found by restricting the continuous Fourier decomposition (\ref{basis Fourier}) to the discrete values
\begin{equation}
\label{k-m}
k_m := \frac{2\pi m}{L}, \quad
m \in \mathbb{N}_0 := \{0\} \cup \mathbb{N}.
\end{equation}
This leads to a decomposition in Fourier series
\begin{equation}\label{basis Fourier per}
v(x,t)=\sum_{m\in\mathbb N_0} \left[c_{k_m}^+ \widetilde v_{k_m}^+(t) + c_{k_m}^- \widetilde v_{k_m}^-(t)\right] \cos({k_m}x) +
\sum_{m\in\mathbb N}\left[ d_{k_m}^+ \widetilde v_{k_m}^+(t) + d_{k_m}^- \widetilde v_{k_m}^-(t)\right]\sin({k_m}x),
\end{equation}
where coefficients $c_{k_m}^\pm$ and $d_{k_m}^\pm$ are uniquely found from the initial condition $v(\cdot,0)=v_0 \in L^2_{\rm per}(0,L)$, and the functions ${\widetilde v}_{k_m}^{\pm}(t)$ are given by \eqref{vk0}--\eqref{vkpm2}.
We obtain an equivalent decomposition using the eigenfunctions of the Lax system. For the Lax system we have to consider both $L$-periodic and $L$-antiperiodic solutions, because the solutions of the linearized NLS equation \eqref{nls-lin-constant} are constructed using squares of solutions of the Lax system.
The Lax spectra $\Sigma_0^{(P)}$ in $L^2_{\rm per}(0,L)$ and $ \Sigma_0^{(A)}$ in $L^2_{\rm antiper}(0,L)$ are given in \eqref{spectrum-constant-periodic} and \eqref{spectrum-constant-antiperiodic}, respectively.
For notational simplicity we set $\lambda(k_m)=\lambda^{(P)}_m$, if $m$ is even, and $\lambda(k_m)=\lambda^{(A)}_m$, if $m$ is odd, so that $\Sigma_0^{(P)} \cup \Sigma_0^{(A)} = \{\pm\lambda(k_m), \; m\in\mathbb N_0\}$.
The arguments above show that all functions in the Fourier series \eqref{basis Fourier per} are recovered from the eigenfunctions of the Lax system associated to the eigenvalues $\lambda\in\Sigma_0^{(P)} \cup \Sigma_0^{(A)}$. Indeed,
for $m=0$ we have the eigenvalues $\pm\lambda(0)=\pm1\in\Sigma_0^{(P)}$ leading to the solutions $\widetilde v_0^+$ and $\widetilde v_0^-$ given by \eqref{v0+} and \eqref{v0-}, respectively, which are constant in $x$. If $0<\pi m < L$, then $\lambda(k_m)\in(0,1)$, and we have the four linearly independent solutions in \eqref{Fourier-basis-I}--\eqref{Fourier-basis-III} with $\lambda= \lambda(k_m)$. If $\pi m > L$, then $\lambda(k_m) = i \gamma(k_m)$ is purely imaginary, and we have the four linearly independent solutions in \eqref{Fourier-basis-1-new}--\eqref{Fourier-basis-2-new} with $\gamma= \gamma(k_m)$. In the particular case $L=\pi m$, for some $m\in\mathbb N$, we have $\lambda(k_m)=0$ and four linearly independent solutions are given in \eqref{Fourier 0} and \eqref{Fourier 0 gen}.
As a consequence, an arbitrary solution of the linearized NLS equation (\ref{nls-lin-constant}) in $L^2_{\rm per}(0,L)$ can be written in the series form:
\begin{eqnarray}
\nonumber
v(x,t)&=& c_0^+ \widetilde v_0^+(t) + c_0^- \widetilde v_0^-(t) \\
&& + \sum_{m \in \mathbb{N}}\left[ c_m^+ v_{\lambda(k_m)}^+(x,t) + c_m^- v_{\lambda(k_m)}^-(x,t) + c_{-m}^+ v_{-\lambda(k_m)}^+(x,t) + c_{-m}^- v_{-\lambda(k_m)}^-(x,t)\right]\!, \qquad\
\label{v-arbitrary-constant}
\end{eqnarray}
where coefficients $c_{\pm m}^\pm$ are uniquely defined from the initial condition $v(\cdot,0)=v_0 \in L^2_{\rm per}(0,L)$, and $v_{\pm\lambda(k_m)}^\pm(x,t)$ are given by \eqref{Fourier-basis-I}--\eqref{Fourier-basis-III} if $0<\pi m < L$, by \eqref{Fourier 0}--\eqref{Fourier 0 gen} if $\pi m =L$, and by \eqref{Fourier-basis-1-new}--\eqref{Fourier-basis-2-new} if $\pi m > L$.
\begin{remark}\label{r periodic}
When $L\notin \pi\mathbb N$, the functions $v_{\pm\lambda(k_m)}^\pm(x,t)$ in the decomposition \eqref{v-arbitrary-constant} are all obtained from the eigenfunctions associated to nonzero eigenvalues $\pm \lambda(k_m)$. When $L=\pi m$, the eigenvalues $\pm \lambda(k_m)$ vanish and the associated eigenfunctions only provide the two linearly independent solutions \eqref{Fourier 0}. The generalized eigenfunctions associated to the eigenvalue $\lambda(k_m)=0$ must be used in this case to obtain the other two solutions in \eqref{Fourier 0 gen}.
\end{remark}
\section{Akhmediev breather (AB)}
\label{s AB}
By using the Darboux transformation in Proposition~\ref{prop-Darboux},
we obtain AB from the constant solution $u = 1$. We describe
the associated Lax spectrum in Section~\ref{ss AB Lax} and construct periodic solutions of the linearized NLS equation in Section~\ref{ss AB LNLS}.
Let $\lambda_0 \in (0,1)$ and define
the particular solution $\varphi = (p_0,q_0)^T$
of the Lax system (\ref{lax-1})--(\ref{lax-2})
with $u = 1$ and $\lambda = \lambda_0$:
\begin{equation}
\label{AB-eigen}
\left\{ \begin{array}{l}
\displaystyle
p_0(x,t) = \sqrt{\lambda_0 - \frac{i}{2} k_0} \; e^{\frac{1}{2} (-ik_0 x + \sigma_0 t)} - \sqrt{\lambda_0 + \frac{i}{2} k_0} \;
e^{\frac{1}{2} (ik_0 x - \sigma_0 t)}, \\
\displaystyle
q_0(x,t) = -\sqrt{\lambda_0 + \frac{i}{2} k_0} \; e^{\frac{1}{2} (-i k_0 x + \sigma_0 t)} + \sqrt{\lambda_0 - \frac{i}{2} k_0} \;
e^{\frac{1}{2} (i k_0 x - \sigma_0 t)},
\end{array}
\right.
\end{equation}
where $k_0 = 2 \sqrt{1-\lambda_0^2} \,\in (0,2)$ and $\sigma_0 = \lambda_0 k_0$.
Elementary computations give
\begin{eqnarray*}
&&|p_0|^2 + |q_0|^2 = 4 \left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x) \right], \\
&&|p_0|^2 - |q_0|^2 = 2 k_0 \sin(k_0 x), \\
&& p_0 \bar{q}_0 = 2 \cos(k_0 x) -2 \lambda_0 \cosh(\sigma_0 t) + i k_0 \sinh(\sigma_0 t),
\end{eqnarray*}
and the one-fold Darboux transformation (\ref{DT-potential}) yields the formula for AB:
\begin{equation}
\label{AB-u}
\hat{u}_0(x,t) = - 1 + \frac{2(1-\lambda_0^2) \cosh(\sigma_0 t) + i \sigma_0 \sinh(\sigma_0 t)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)}.
\end{equation}
The AB solution $\hat u_0$ is $L$-periodic in $x$ with $L = 2\pi/k_0 \,>\pi$ and
\[
\lim_{t \to \pm \infty} \hat{u}_0(x,t) = 1 - 2 \lambda_0^2 \pm i k_0 \lambda_0
=
\left(\sqrt{1-\lambda_0^2} \pm i \lambda_0 \right)^2,
\]
from which it follows that $\lim\limits_{t \to \pm \infty} |\hat{u}_0(x,t)| = 1$.
The complementary transformation (\ref{DT-squared}) gives
\begin{equation*}
| \hat{u}_0(x,t)|^2 = 1 + \lambda_0 k_0^2 \frac{\cosh(\sigma_0 t) \cos(k_0 x) - \lambda_0}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2},
\end{equation*}
which is consistent with the exact solution (\ref{AB-u}).
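These closed-form identities are easy to confirm numerically. The following Python sketch (parameter value $\lambda_0 = 0.6$, sample points, and tolerances are ours) checks the elementary computations for $|p_0|^2 \pm |q_0|^2$, the consistency of $|\hat u_0|^2$ with (\ref{AB-u}), and the unit modulus of the limits of $\hat u_0$ as $t \to \pm\infty$:

```python
import math, cmath

lam0 = 0.6                        # lambda_0 in (0, 1), chosen arbitrarily
k0 = 2 * math.sqrt(1 - lam0**2)   # k_0 = 1.6 in (0, 2)
sig0 = lam0 * k0                  # sigma_0 = lambda_0 * k_0

def p0q0(x, t):
    # The particular solution (p_0, q_0) from (AB-eigen)
    a = cmath.sqrt(lam0 - 0.5j * k0)
    b = cmath.sqrt(lam0 + 0.5j * k0)
    ep = cmath.exp(0.5 * (-1j * k0 * x + sig0 * t))
    em = cmath.exp(0.5 * (1j * k0 * x - sig0 * t))
    return a * ep - b * em, -b * ep + a * em

def u_ab(x, t):
    # The AB solution (AB-u)
    num = 2 * (1 - lam0**2) * math.cosh(sig0 * t) + 1j * sig0 * math.sinh(sig0 * t)
    den = math.cosh(sig0 * t) - lam0 * math.cos(k0 * x)
    return -1 + num / den

for x in [0.0, 0.4, 1.3]:
    for t in [-0.7, 0.0, 0.5]:
        p, q = p0q0(x, t)
        D = math.cosh(sig0 * t) - lam0 * math.cos(k0 * x)
        assert abs(abs(p)**2 + abs(q)**2 - 4 * D) < 1e-10
        assert abs(abs(p)**2 - abs(q)**2 - 2 * k0 * math.sin(k0 * x)) < 1e-10
        # |u|^2 from (AB-u) agrees with the displayed formula for |u|^2
        rhs = 1 + lam0 * k0**2 * (math.cosh(sig0 * t) * math.cos(k0 * x) - lam0) / D**2
        assert abs(abs(u_ab(x, t))**2 - rhs) < 1e-10

# |1 - 2*lam0^2 + i*k0*lam0| = 1, so |u| -> 1 as t -> +oo
assert abs(abs(complex(1 - 2 * lam0**2, k0 * lam0)) - 1) < 1e-12
```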
\subsection{Lax spectrum at AB}
\label{ss AB Lax}
For the Lax system (\ref{lax-1})--(\ref{lax-2}), we consider both $L$-periodic and $L$-antiperiodic eigenfunctions $\varphi = \varphi(x,t)$ in $x$. We use the Darboux transformation (\ref{fund-matrix}) and the result of Lemma \ref{lemma-spectrum} to determine the Lax spectrum for AB, which is illustrated in Figure~\ref{fig-Lax-2}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=6cm]{SpectrumLaxAB}
\end{center}
\caption{The union $\Sigma_{AB}^{(P)} \cup \Sigma_{AB}^{(A)}$ of the Lax spectra in $L^2_{\rm per}(0,L)$ and $L^2_{\rm antiper}(0,L)$ for AB. The red dots represent the eigenvalues $\{ +\lambda_0, -\lambda_0 \}$.}
\label{fig-Lax-2}
\end{figure}
In the space of $L$-antiperiodic functions, we show that the Lax spectrum of AB consists of the same eigenvalues (\ref{spectrum-constant-antiperiodic}) as for the constant-amplitude solution $u = 1$. The only difference between the two spectra is that the eigenvalues $\{ +\lambda_0, -\lambda_0 \}$ are geometrically double for $u = 1$, while they are geometrically simple and algebraically double for $u = \hat{u}_0$.
\begin{lemma}\label{lax antiper}
Consider AB given by (\ref{AB-u}) and assume $L \notin \pi \mathbb{N}_{\rm odd}$. The spectrum of the ZS spectral problem (\ref{lax-1}) with $u = \hat{u}_0$
in $L^2_{\rm antiper}(0,L)$ consists of isolated eigenvalues
\[
\Sigma_{\rm AB}^{(A)}= \{ \pm \lambda^{(A)}_m, \;\; m \in \mathbb{N}_{\rm odd}\}, \quad
\lambda^{(A)}_m := \sqrt{1-\frac{\pi^2}{L^2} m^2},
\]
with the following properties:
\begin{enumerate}
\item For each $m\in\mathbb{N}_{\rm odd}$, $m\not=1$, the eigenvalues $\pm\lambda^{(A)}_m$ are geometrically and algebraically double.
\item The eigenvalues $\lambda^{(A)}_1 = \lambda_0$ and $-\lambda^{(A)}_1 = -\lambda_0$ are geometrically simple and algebraically double with associated eigenfunctions $\varphi = (\hat{p}_0,\hat{q}_0)^T$ and $\varphi = (-\bar{\hat{q}}_0,\bar{\hat{p}}_0)^T$ and generalized eigenfunctions $\varphi_g = (\varphi_{1,1},\varphi_{1,2})^T$ and $\varphi_g = (-\bar\varphi_{1,2},\bar\varphi_{1,1})^T$, where $\varphi_0 = (\hat{p}_0,\hat{q}_0)^T$ and $\varphi_1 =(\varphi_{1,1},\varphi_{1,2})^T$ are given by \eqref{norming-factor} and \eqref{gen-eigenvector} in Appendix \ref{a hatphi}.
\end{enumerate}
\end{lemma}
\begin{proof}
The Darboux matrix $D(\lambda)$ given by (\ref{DT-matrix}) is $L$-periodic in $x$ and invertible for every $\lambda \neq \pm \lambda_0$. It follows
from the relation (\ref{fund-matrix}) that there is a one-to-one correspondence between the $L$-antiperiodic solutions of the Lax systems with $u=1$ and $u= \hat{u}_0$ when $\lambda \neq \pm \lambda_0$. Consequently,
with the exception of $m = 1$, the $L$-antiperiodic Lax spectrum for $u=\hat u_0$ is the same as the $L$-antiperiodic Lax spectrum for $u=1$ in (\ref{spectrum-constant-antiperiodic}) and the property (1) holds.
The linearly independent eigenfunctions for the eigenvalues $\lambda=\pm \lambda^{(A)}_m$ are given in the form
\begin{equation}
\label{mode-continuous}
\hat{\varphi} := \varphi + \frac{1}{\lambda - \lambda_0} \left[ \begin{array}{l} \hat{p}_0 \\ \hat{q}_0 \end{array} \right] \left[ -q_0 \;\; p_0 \right] \varphi, \quad
\hat{\phi} := \phi + \frac{1}{\lambda - \lambda_0} \left[ \begin{array}{l} \hat{p}_0 \\ \hat{q}_0 \end{array} \right] \left[ -q_0 \;\; p_0 \right] \phi,
\end{equation}
where the two linearly independent eigenfunctions $\varphi$ and $\phi$ are
given by (\ref{eigenvectors-constant-background}) if $0<L<\pi m$
and by (\ref{eigenvectors-constant-background-gamma}) if $L>\pi m$.
The marginal case $L=\pi m$ is excluded by the assumption.
For $\lambda=\lambda_0$, transformation (\ref{DT-eigen}) gives the eigenfunction $\varphi_0 = (\hat{p}_0,\hat{q}_0)^T$ of the Lax system with $u = \hat{u}_0$ and it is easy to check that $\varphi_0$ is $L$-antiperiodic in $x$. For $\lambda = -\lambda_0$ we have the eigenfunction $\varphi = (-\bar{\hat{q}}_0,\bar{\hat{p}}_0)^T$ due to the symmetry in Remark \ref{remark-symmetry}. Hence $\{ +\lambda_0, -\lambda_0 \}$ belong to the $L$-antiperiodic Lax spectrum for $u=\hat u_0$. It remains to show that $\lambda_0$ is geometrically simple and algebraically double; the result for $-\lambda_0$ then follows by the symmetry of the Lax system.
For this part of the proof, we rely on the explicit computation of the expansion into Laurent series of the $2\times2$ matrix solution $\hat{\Phi}(\lambda)$ to the Lax system with $u=\hat u_0$ given in Appendix~\ref{a hatphi}.
The vector $\phi_0$ given by \eqref{non-periodic-vector} is a second linearly independent solution to the Lax system (\ref{lax-1})--(\ref{lax-2}) for $u = \hat{u}_0$ and $\lambda = \lambda_0$. Since it is not $L$-antiperiodic in $x$, we deduce that $\lambda_0$ is geometrically simple. Next, $\varphi_1$ given by \eqref{gen-eigenvector} is $L$-antiperiodic and satisfies $(\mathcal L-\lambda_0 I)\varphi_1=\varphi_0$, whereas
$\varphi_2$ given by \eqref{second-generalized-eigenvector} satisfies $(\mathcal L-\lambda_0 I)\varphi_2=\varphi_1$, but it is not $L$-antiperiodic. This implies that $\lambda_0$ is algebraically double and completes the proof.
\end{proof}
\begin{remark}
For an alternative proof that $\lambda_0$ is algebraically double,
we can check the Fredholm condition for the eigenfunction $\varphi_0$ and the first generalized eigenfunction $\varphi_1$. Taking the eigenfunction $\varphi_0^* = (\bar{\hat{q}}_0,\bar{\hat{p}}_0)^T$ of the adjoint problem $\left( \mathcal{L}^* - \lambda_0 I \right) \varphi_0^* = 0$ and the inner product $\langle \cdot, \cdot \rangle$ in $L^2(0,L)$, we find that
\begin{eqnarray*}
\langle \varphi_0^*, \varphi_0 \rangle =
\lambda_0^2 \int_{0}^L \frac{\cosh(\sigma_0 t - ik_0 x) - \lambda_0}{(\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x))^2} dx = 0
\end{eqnarray*}
and
\begin{eqnarray*}
\langle \varphi_0^*, \varphi_1 \rangle
&=& \frac{\lambda_0}{2} \int_{0}^L \frac{\cos(k_0 x)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)} dx \\
&&
+ \frac{2\lambda_0^2}{k_0^2} \int_{0}^L \frac{(\cosh(\sigma_0 t + i k_0 x) - \lambda_0) (\cosh(\sigma_0 t - i k_0 x) - \lambda_0)}{(\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x))^2} dx\\
&=&
\frac{2 \lambda_0^2}{k_0^2} \int_{0}^L \frac{\cosh^2(\sigma_0 t) + \lambda_0^2 \cos^2(k_0 x) - 2\lambda_0^2}{(\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x))^2} dx
=\frac{2 \lambda_0^2}{k_0^2} L \not=0.
\end{eqnarray*}
Since $\langle \varphi_0^*, \varphi_0 \rangle=0$, there exists a generalized eigenfunction $\varphi_1$ satisfying $(\mathcal{L} - \lambda_0 I) \varphi_1 = \varphi_0$ in $L^2_{\rm antiper}(0,L)$. Since $\langle \varphi_0^*, \varphi_1 \rangle \neq 0$, there is no second generalized eigenfunction
$\varphi_2$ satisfying $(\mathcal{L} - \lambda_0 I) \varphi_2 = \varphi_1$ in $L^2_{\rm antiper}(0,L)$. This implies that $\lambda_0$ is algebraically double.
\end{remark}
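The two integral identities in this remark can be verified numerically. The following Python sketch (illustrative only, not part of the argument) assumes the AB parameterization $k_0 = 2\sqrt{1-\lambda_0^2}$, consistent with the relations $k_0 = i\beta_0$, $\beta_0 = 2\sqrt{\lambda_0^2-1}$ used for KMB below, and treats $s = \sigma_0 t$ as a free real parameter; the sample values of $\lambda_0$ and $s$ are arbitrary.

```python
# Numerical check of the two inner products in the Remark (a sketch):
# <phi0*, phi0> = 0 and <phi0*, phi1> = 2*lambda0^2*L/k0^2.
import numpy as np
from scipy.integrate import quad

lam, s = 0.8, 0.7                   # lambda0 in (0,1), s = sigma0*t free
k0 = 2.0 * np.sqrt(1.0 - lam**2)    # assumed AB parameterization
L = 2.0 * np.pi / k0                # one spatial period

D = lambda x: np.cosh(s) - lam * np.cos(k0 * x)

# <phi0*, phi0> = lambda0^2 * int (cosh(s - i*k0*x) - lambda0)/D^2 dx,
# using cosh(s - i*k0*x) = cosh(s)*cos(k0*x) - i*sinh(s)*sin(k0*x)
re1 = quad(lambda x: (np.cosh(s) * np.cos(k0 * x) - lam) / D(x)**2, 0, L)[0]
im1 = quad(lambda x: -np.sinh(s) * np.sin(k0 * x) / D(x)**2, 0, L)[0]
I1 = lam**2 * (re1 + 1j * im1)

# last line of the Remark: (2*lambda0^2/k0^2) * int of the combined integrand
I2 = (2 * lam**2 / k0**2) * quad(
    lambda x: (np.cosh(s)**2 + lam**2 * np.cos(k0 * x)**2 - 2 * lam**2)
    / D(x)**2, 0, L)[0]

print(abs(I1))                        # vanishes to quadrature accuracy
print(I2, 2 * lam**2 * L / k0**2)     # the two values agree
```

The first integral vanishes because the $\sin$ part is odd over the period and the $\cos$ part integrates to zero against $D^{-2}$; the second reproduces the nonzero value $2\lambda_0^2 L/k_0^2$ stated above.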
For $L$-periodic solutions, we show that the Lax spectrum of AB consists of the same eigenvalues (\ref{spectrum-constant-periodic}) as for the constant-amplitude solution $u = 1$. Moreover, the algebraic multiplicities of the eigenvalues coincide.
\begin{lemma}\label{lax per}
Consider AB given by (\ref{AB-u}) and assume $L \notin \pi \mathbb{N}_{\rm even}$. The spectrum of the ZS spectral problem (\ref{lax-1}) with $u = \hat{u}_0$ in $L^2_{\rm per}(0,L)$ consists of isolated eigenvalues
\[
\Sigma_{\rm AB}^{(P)}= \{ \pm \lambda^{(P)}_m, \;\; m \in \{0\} \cup \mathbb{N}_{\rm even}\}, \quad
\lambda^{(P)}_m := \sqrt{1-\frac{\pi^2}{L^2} m^2},
\]
with the following properties:
\begin{enumerate}
\item For each $m\in\mathbb{N}_{\rm even}$, the eigenvalues $\pm\lambda^{(P)}_m$ are geometrically and algebraically double.
\item The eigenvalues $\lambda^{(P)}_0 = 1$ and $-\lambda^{(P)}_0 = -1$ are algebraically simple with associated eigenfunctions $\varphi = (\hat{\varphi}_1,\hat{\varphi}_2)^T$ and $\varphi = (-\bar{\hat{\varphi}}_2,\bar{\hat{\varphi}}_1)^T$ respectively, where
$\hat{\varphi} = (\hat{\varphi}_1,\hat{\varphi}_2)^T$ is given~by
\begin{equation}
\label{varphi-1}
\hat{\varphi} =
\left[ \begin{array}{c} 1\\ -1\end{array} \right] -
\frac{p_0 + q_0}{1-\lambda_0} \left[ \begin{array}{c} \hat{p}_0 \\ \hat{q}_0 \end{array} \right].
\end{equation}
\end{enumerate}
\end{lemma}
\begin{proof}
As in the proof of Lemma~\ref{lax antiper}, the set of eigenvalues and their geometric and algebraic multiplicities are found from the Darboux matrix $D(\lambda)$ in (\ref{DT-matrix}) and the transformation (\ref{fund-matrix}).
Since $D(\lambda)$ is $L$-periodic in $x$ and invertible for every $\lambda = \pm \lambda_m^{(P)}$, there is a one-to-one correspondence between the $L$-periodic solutions of the Lax systems with $u = 1$ and $u = \hat{u}_0$. Moreover, the explicit expressions (\ref{mode-continuous}) for
eigenfunctions $\hat{\varphi}$ and $\hat{\phi}$ hold for every $\lambda = \pm \lambda_m^{(P)}$.
For the eigenvalue $\lambda = \lambda^{(P)}_0 = 1$, only one linearly independent eigenfunction
$\varphi = \hat{\varphi} = (\hat{\varphi}_1,\hat{\varphi}_2)^T$ in $L^2_{\rm per}(0,L)$ exists in the form (\ref{varphi-1}). In order to check the algebraic multiplicity of $\lambda=1$, we take the eigenfunction $\varphi^* = (\bar{\hat{\varphi}}_2,\bar{\hat{\varphi}}_1)^T$ of the adjoint problem
$(\mathcal{L}^* - I) \varphi^* = 0$ and compute the scalar product
\begin{eqnarray*}
\langle \varphi^*, \varphi \rangle
&=& -2 \int_{0}^L \left[ 1 + \frac{2\lambda_0}{1-\lambda_0}
\frac{p_0 \bar{q}_0 + \bar{p}_0 q_0 + |p_0|^2 + |q_0|^2}{|p_0|^2 + |q_0|^2} + \frac{4 \lambda_0^2}{(1-\lambda_0)^2} \frac{p_0 q_0 (\bar{p}_0 + \bar{q}_0)^2}{(|p_0|^2 + |q_0|^2)^2} \right] dx \\
&=& -2 \frac{1+\lambda_0}{1-\lambda_0} \int_{0}^L \left[ 1 + 2 \lambda_0
\frac{\cosh(\sigma_0 t) \cos(k_0 x) - \lambda_0 - i \lambda_0 \sinh(\sigma_0 t) \sin(k_0 x)}{\left[\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} \right] dx \\
&=& -2 \frac{1+\lambda_0}{1-\lambda_0} L.
\end{eqnarray*}
Since $\langle \varphi^*, \varphi \rangle \neq 0$, there exists no generalized eigenfunction satisfying $(\mathcal{L} - I) \hat{\varphi}_g = \hat{\varphi}$
in $L^2_{\rm per}(0,L)$ so that the eigenvalue $\lambda = 1$ is algebraically simple. The result for $\lambda=-1$ is a consequence of the symmetry of the Lax system in Remark~\ref{remark-symmetry}.
\end{proof}
\subsection{Linearized NLS equation at AB}\label{ss AB LNLS}
As in the case of the constant-amplitude solution $u=1$, we construct $L$-periodic solutions of the linearized NLS equation at AB from the $L$-periodic and $L$-antiperiodic solutions of the Lax equations. These solutions are generated by the eigenvalues $\lambda^{(A)}_m$ and $\lambda^{(P)}_m$ in Lemmas~\ref{lax antiper} and~\ref{lax per}, respectively.
We focus on the solutions related to the positive eigenvalues $\lambda^{(A)}_1 = \lambda_0$ and $\lambda^{(P)}_0 = 1$. By the symmetry of the Lax system, the negative eigenvalues $-\lambda^{(A)}_1 = -\lambda_0$ and $-\lambda^{(P)}_0 = -1$ provide the same solutions up to the sign change. The particular goal is to identify six linearly independent solutions of the linearized NLS equation at AB which correspond to the six linearly independent solutions in the decomposition (\ref{v-arbitrary-constant}) with $m=0$ and $m=1$ for the solutions of the linearized NLS equation at $u=1$. The correspondence is established by showing that the solutions constructed for AB become identical to the ones for $u=1$ asymptotically as $t\to\pm\infty$.
The following theorem presents the main result of these computations.
\begin{theorem}\label{six solutions}
Consider AB given by (\ref{AB-u}). Solutions of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u=\hat u_0$ for the eigenvalues $\lambda= 1$ and $\lambda=\lambda_0$ generate the following six linearly independent $L$-periodic solutions of the linearized NLS equation (\ref{nls-lin}) at AB:
\begin{enumerate}
\item the solutions $v_1$ in (\ref{v1}) and $v_2$ in (\ref{v2}), which are asymptotically equivalent to the solutions $v_{\lambda(k_1)}^-$ and $\widetilde v_0^+$, respectively, in the decomposition (\ref{v-arbitrary-constant});
\item the solution $w_2$ in (\ref{w2}), which is asymptotically equivalent to the solution $v_{-\lambda(k_1)}^-$ in the decomposition (\ref{v-arbitrary-constant});
\item the solution $v$ in (\ref{element-basis}), which is asymptotically equivalent to the solution $\widetilde v_{0}^-$ in the decomposition (\ref{v-arbitrary-constant});
\item the solutions $v$ in (\ref{new+}) and (\ref{new-}), which are asymptotically equivalent to the solutions $v_{-\lambda(k_1)}^+$ and $v_{\lambda(k_1)}^+$, respectively, in the decomposition (\ref{v-arbitrary-constant}).
\end{enumerate}
\end{theorem}
The six solutions in this theorem are computed explicitly in the next three subsections.
\begin{remark}
Due to the two exponentially growing solutions in item (4) of Theorem \ref{six solutions}, AB is linearly unstable. This agrees with the main conclusion of \cite{GS-2021} based on symbolic computations.
For periods $L\in(\pi,2\pi)$ the eigenvalues $\lambda_0$ and $1$ are the only positive eigenvalues of the Lax system, see Figure \ref{fig-Lax-2}. For larger periods $L>2\pi$, there are additional positive eigenvalues which lead to exponentially growing solutions for the linearized NLS equation at~AB.
\end{remark}
\subsubsection{Solutions related to $\lambda = 1$}
Recall from Lemma \ref{lax per} that $\lambda=1$ is an algebraically simple eigenvalue in $\Sigma_{\rm AB}^{(P)}$ associated with eigenfunction $\hat\varphi$ given by \eqref{varphi-1}. The second linearly independent solution of the Lax system (\ref{lax-1})--(\ref{lax-2}) for $u = \hat{u}_0$ and $\lambda = 1$
is obtained from the second vector in (\ref{eigenvectors-constant-background-zero}) by using the transformation formula (\ref{fund-matrix})--(\ref{DT-matrix}) with $\lambda = 1$:
\begin{equation}
\label{varphi-2}
\hat\phi = \left[ \begin{array}{c} x+it+1 \\ -x-it\end{array} \right] -
\frac{(x+it)(p_0 + q_0)+q_0}{1 - \lambda_0} \left[ \begin{array}{c} \hat{p}_0 \\ \hat{q}_0 \end{array} \right].
\end{equation}
By using the $L$-periodic eigenfunction $\hat\varphi$ in (\ref{varphi-1}) in
Pair I in Table \ref{table-1}, we obtain
the following two $L$-periodic solutions of the linearized NLS equation (\ref{nls-lin}):
\begin{equation}\label{v1}
v_1(x,t) = -\frac{2\lambda_0 (1+\lambda_0)}{1-\lambda_0} \;
\frac{\sin(k_0 x) \left[ k_0 \cosh(\sigma_0 t) + 2 i \lambda_0 \sinh(\sigma_0 t)\right]}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2}
\end{equation}
and
\begin{eqnarray}
\nonumber
v_2(x,t) &=&
\frac{2i (1+\lambda_0)}{1-\lambda_0} \;
\left[ \frac{i k_0 \lambda_0 \sinh(\sigma_0 t) \cosh(\sigma_0 t)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} \right. \\
&& \left. + \frac{(1-2\lambda_0^2) \cosh^2(\sigma_0 t) - \lambda_0^2 \cos^2(k_0 x) +2 \lambda_0^2}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} \right].
\label{v2}
\end{eqnarray}
As $t \to \pm \infty$, the periodic solution $v_1$ decays to $0$, whereas the periodic solution $v_2$ approaches a nonzero constant. These two solutions
are asymptotically equivalent to the solutions $v_{\lambda(k_1)}^-$ and $\widetilde v_0^+$ in the decomposition (\ref{v-arbitrary-constant}).
By using both the $L$-periodic eigenfunction $\hat\varphi$ in (\ref{varphi-1}) and the non-periodic solution $\hat\phi$ in (\ref{varphi-2}) in
Pair II in Table \ref{table-1}, we obtain the following two
non-periodic solutions of the linearized NLS equation (\ref{nls-lin}):
\begin{eqnarray}
\label{v3}
&& v_3(x,t) = x v_1(x,t) + t v_2(x,t) + f_1(x,t), \\
\label{v4}
&& v_4(x,t) = x v_2(x,t) - t v_1(x,t) + f_2(x,t),
\end{eqnarray}
where the periodic parts $f_1$ and $f_2$ are given by
\begin{eqnarray*}
f_1(x,t)& = & \frac{1+\lambda_0}{1-\lambda_0} \;
\left[ 1 + \frac{4 i \lambda_0^2}{k_0} \sinh(\sigma_0 t) \frac{\lambda_0 \cosh(\sigma_0 t) - \cos(k_0 x)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x) \right]^2} \right. \\
&&
\left. +
2 \lambda_0^2 \frac{\cosh^2(\sigma_0 t) - \cos^2(k_0 x) - i \sin(k_0 x) \sinh(\sigma_0 t)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2}
\right. \\
&&
\left. - \lambda_0 k_0 \frac{\cosh(\sigma_0 t) \sin(k_0x)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} \right]
\end{eqnarray*}
and
\begin{eqnarray*}
f_2(x,t) &= & i \frac{1+\lambda_0}{1-\lambda_0} \;
\left[ 1 - \frac{k_0 \lambda_0}{1-\lambda_0^2} \frac{\sin(k_0 x)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)} + \frac{i \lambda_0 k_0 \cosh(\sigma_0 t) \sinh(\sigma_0 t)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} \right. \\
&&
\left. +
\frac{4 \lambda_0 (1-\lambda_0) \cos(k_0 x)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)} -
\lambda_0^2 \frac{\cosh(2 \sigma_0 t) + \cos(2k_0 x)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} \right.\\
&&
\left. +
\frac{\lambda_0 (1+\lambda_0) (1-\lambda_0^2) \cos^2(k_0 x)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2}
- \frac{2 \lambda_0^3 \cosh(\sigma_0 t) \cos(k_0 x)}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2}\right].
\end{eqnarray*}
Both solutions grow linearly in $x$ and are not $L$-periodic.
As $t \to \pm \infty$, the non-periodic solution $v_3$ becomes asymptotically periodic, because $v_1$ decays to $0$, and could represent $\widetilde{v}_0^-$ in the decomposition (\ref{v-arbitrary-constant}). However, one needs to cancel the polynomial term in $x$ by using a linear combination with other solutions of the linearized NLS equation (\ref{nls-lin}).
Finally, by using the non-periodic solution (\ref{varphi-2}) in
Pair III in Table \ref{table-1}, we obtain two other non-periodic solutions of the linearized NLS equation (\ref{nls-lin}), which are quadratic with respect to $x$. As is described in the recent symbolic computations in \cite{GS-2021},
such quadratic solutions in $x$ play no role in the proof of Theorem \ref{six solutions}.
\subsubsection{Solutions related to $\lambda = \lambda_0$}
By Lemma~\ref{lax antiper}, the eigenvalue $\lambda_0$ is geometrically simple with $L$-antiperiodic eigenfunction $\varphi_0 = (\hat{p}_0,\hat{q}_0)^T$ given by (\ref{norming-factor}). A second linearly independent solution of the Lax system is the non-periodic solution $\phi_0$ given by (\ref{non-periodic-vector}) in Appendix~\ref{a hatphi}.
By using Pair I in Table \ref{table-1} with $\varphi_0$, we obtain the following two $L$-periodic solutions of the linearized NLS equation (\ref{nls-lin}):
\begin{equation}
w_1 = \hat{p}_0^2 - \bar{\hat{q}}_0^2 = \frac{\lambda_0^2 \sin(k_0 x) \left[
k_0 \cosh(\sigma_0 t) + 2 i \lambda_0 \sinh(\sigma_0 t)\right]}{2 \left[\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2}
= -\lambda_0 k_0^{-2} \frac{\partial \hat{u}_0}{\partial x}
\label{w1}
\end{equation}
and
\begin{equation}
w_2 = i(\hat{p}_0^2 + \bar{\hat{q}}_0^2) = \frac{\lambda_0^2 \left[
k_0 \sinh(\sigma_0 t) \cos(k_0 x) + 2 i \lambda_0 \cosh(\sigma_0 t) \cos(k_0 x) - 2i \right]}{2 \left[\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)\right]^2} = -k_0^{-2} \frac{\partial \hat{u}_0}{\partial t}.
\label{w2}
\end{equation}
These are neutral modes generated by the translational symmetries of the NLS equation (\ref{nls-u}) in $x$ and $t$. Note that $w_1$ is proportional to the solution $v_1$ in \eqref{v1},
\begin{equation}
\label{v1-w1}
v_1 = -\frac{4(1+\lambda_0)}{\lambda_0 (1-\lambda_0)} w_1.
\end{equation}
As $t \to \pm \infty$, the two periodic solutions $w_1$ and $w_2$ decay to $0$. These two solutions are asymptotically equivalent to the solutions
$v_{\lambda(k_1)}^-$ and $v_{-\lambda(k_1)}^-$ in the decomposition (\ref{v-arbitrary-constant}).
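The identifications $w_1 = -\lambda_0 k_0^{-2} \partial_x \hat{u}_0$ and $w_2 = -k_0^{-2} \partial_t \hat{u}_0$ in (\ref{w1})--(\ref{w2}) can be checked numerically with finite differences. The Python sketch below is illustrative only: the AB profile $\hat u_0$ used in it is an assumption, obtained from the KMB formula (\ref{KM-u}) of the next section through the substitutions $k_0 = i\beta_0$, $\sigma_0 = i\alpha_0$, together with $k_0 = 2\sqrt{1-\lambda_0^2}$ and $\sigma_0 = \lambda_0 k_0$.

```python
# Finite-difference check of w1 = -lambda0/k0^2 * du/dx and
# w2 = -1/k0^2 * du/dt for the assumed AB profile u0hat.
import numpy as np

lam = 0.6
k0 = 2.0 * np.sqrt(1.0 - lam**2)   # assumed: k0 = 2*sqrt(1 - lambda0^2)
s0 = lam * k0                      # assumed: sigma0 = lambda0*k0

def u0hat(x, t):
    # AB profile, obtained from (KM-u) by k0 = i*beta0, sigma0 = i*alpha0
    D = np.cosh(s0 * t) - lam * np.cos(k0 * x)
    return -1.0 + (0.5 * k0**2 * np.cosh(s0 * t)
                   + 1j * s0 * np.sinh(s0 * t)) / D

def w1(x, t):   # formula (w1)
    D = np.cosh(s0 * t) - lam * np.cos(k0 * x)
    return lam**2 * np.sin(k0 * x) * (k0 * np.cosh(s0 * t)
        + 2j * lam * np.sinh(s0 * t)) / (2 * D**2)

def w2(x, t):   # formula (w2)
    D = np.cosh(s0 * t) - lam * np.cos(k0 * x)
    return lam**2 * (k0 * np.sinh(s0 * t) * np.cos(k0 * x)
        + 2j * lam * np.cosh(s0 * t) * np.cos(k0 * x) - 2j) / (2 * D**2)

x, t, h = 0.37, 0.52, 1e-6
dudx = (u0hat(x + h, t) - u0hat(x - h, t)) / (2 * h)   # central differences
dudt = (u0hat(x, t + h) - u0hat(x, t - h)) / (2 * h)

print(abs(w1(x, t) + lam / k0**2 * dudx))   # both residuals are ~0
print(abs(w2(x, t) + 1.0 / k0**2 * dudt))
```

The proportionality (\ref{v1-w1}) between $v_1$ and $w_1$ follows directly by comparing the common factor in (\ref{v1}) and (\ref{w1}).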
Next, we record the following algebraic computations:
\begin{eqnarray*}
&&-\bar{q}_0 p_+(\lambda_0) + p_0 \bar{q}_+(\lambda_0) = 4i \sin(k_0 x), \\
&&-\bar{q}_0 p_-(\lambda_0) + p_0 \bar{q}_-(\lambda_0) = 0, \\
&&-\bar{q}_0 p_+(\lambda_0) - p_0 \bar{q}_+(\lambda_0) = 4 \lambda_0 \sinh(\sigma_0 t) - 2 i k_0 \cosh(\sigma_0 t), \\
&&-\bar{q}_0 p_-(\lambda_0) - p_0 \bar{q}_-(\lambda_0) = 4 \left[ \lambda_0 \cosh(\sigma_0 t) - \cos(k_0x) \right] - 2i k_0 \sinh(\sigma_0 t).
\end{eqnarray*}
Then, by using Pair II in Table \ref{table-1} with the $L$-antiperiodic eigenfunction $\varphi_0$ in (\ref{norming-factor}) and the non-periodic solution $\phi_0$ in (\ref{non-periodic-vector}), we obtain
the following two non-periodic solutions of the linearized NLS equation (\ref{nls-lin}):
\begin{eqnarray}
\label{w3}
&& w_3(x,t) = -4 \lambda_0 x w_1(x,t) + 4(1-2\lambda_0^2) t w_2(x,t) + g_1(x,t), \\
\label{w4}
&& w_4(x,t) = -4 \lambda_0 x w_2(x,t) - 4(1-2\lambda_0^2) t w_1(x,t) + g_2(x,t),
\end{eqnarray}
where the periodic parts $g_1$ and $g_2$ are given by
\begin{eqnarray*}
g_1(x,t)& = &
4 k_0^{-1} \left[ \cosh(\sigma_0 t) \sin(k_0 x) w_1(x,t) + \sinh(\sigma_0 t) \cos(k_0 x) w_2(x,t) \right] \\
&& + \lambda_0
\frac{2 \lambda_0 \cosh(\sigma_0 t) - 2 \cos(k_0x) - i k_0 \sinh(\sigma_0 t)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0x)}
\end{eqnarray*}
and
\begin{eqnarray*}
g_2(x,t) =
4 k_0^{-1} \left[ \cosh(\sigma_0 t) \sin(k_0 x) w_2(x,t) - \sinh(\sigma_0 t) \cos(k_0 x) w_1(x,t) \right].
\end{eqnarray*}
Note that the components $g_1$ and $g_2$ are bounded as $t \to \pm \infty$. In view of (\ref{v1-w1}), we form the linear combination
\begin{eqnarray}
\label{element-basis}
v(x,t) = \lambda_0^2 \frac{1-\lambda_0}{1+\lambda_0} v_3(x,t) - w_3(x,t) = t s_0(x,t) + f_0(x,t),
\end{eqnarray}
where $s_0$ and $f_0$ are $L$-periodic in $x$ and bounded as $t \to \pm \infty$; in particular,
\begin{eqnarray*}
s_0 &=& 2i \lambda_0^2 \left[ 1 - 2 \lambda_0^2 + \frac{ik_0 \lambda_0 \sinh(\sigma_0 t)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)} \right. \\
&& \left. + (1-\lambda_0^2) \frac{ik_0 \sinh(\sigma_0 t) \cos(k_0 x) + 2(1 - \lambda_0^2 \cos^2(k_0x))}{\left[ \cosh(\sigma_0 t) - \lambda_0 \cos(k_0x) \right]^2} \right].
\end{eqnarray*}
As $t \to \pm \infty$, the solution $v$ in (\ref{element-basis}) is asymptotically equivalent to the solution $\widetilde{v}_0^-$ in the decomposition (\ref{v-arbitrary-constant}).
\begin{remark}
Since $v_2$ and $w_2$ are linearly independent, no linear combination of $v_4$ and $w_4$ is $L$-periodic in $x$.
\end{remark}
Finally, by using Pair III in Table \ref{table-1} with the non-periodic solution $\phi_0$ in (\ref{non-periodic-vector}), we obtain two non-periodic solutions which are quadratic in $x$. Again, such quadratic solutions in $x$ play no role in the proof of Theorem \ref{six solutions}.
\subsubsection{Solutions related to the generalized eigenfunction at $\lambda = \lambda_0$}
With the account of $v_1$, $v_2$, $w_1$, $w_2$, the relation
(\ref{v1-w1}), and the linear combination (\ref{element-basis}), it remains to obtain two $L$-periodic solutions
of the linearized NLS equation (\ref{nls-lin}) at AB,
which would be asymptotically equivalent to the remaining solutions $v_{\lambda(k_1)}^+$ and $v_{-\lambda(k_1)}^+$ in the decomposition (\ref{v-arbitrary-constant}). These solutions will be constructed from linear combinations of non-periodic solutions that grow linearly in $x$, just as the solution $v$ in (\ref{element-basis}).
By Lemma~\ref{lax antiper}, the eigenvalue $\lambda_0$ is algebraically double with the generalized eigenfunction $\varphi_1$ given by
(\ref{gen-eigenvector}) in addition to the eigenfunction $\varphi_0$ given by (\ref{norming-factor}). It is natural to expect additional solutions for the linearized NLS equation to be obtained from the eigenfunction $\varphi_0$ and the generalized eigenfunction $\varphi_1$. It is surprising, however, that this is not the case. As a result, we have to use the eigenfunction $\varphi_0$ and the generalized eigenfunction $\varphi_1$ together with the non-periodic solutions $\phi_0$ and $\phi_1$ given by (\ref{non-periodic-vector})
and (\ref{second-non-periodic-vector}).
By using the expansion of the $2\times2$ matrix solution $\hat{\Phi}(\lambda)$ computed in Appendix~\ref{a hatphi}, we write
\[
\hat\Phi(\lambda) = \left[ 2ik_0\varphi(\lambda), \phi(\lambda)\right],
\]
where
\begin{eqnarray}
\label{varphilambda}
\varphi(\lambda) &=& \frac{\varphi_0}{\lambda - \lambda_0}
+ \varphi_1 + \varphi_2 (\lambda - \lambda_0) + \mathcal{O}((\lambda - \lambda_0)^2) ,
\\
\label{philambda}
\phi(\lambda) &=& \phi_0 + \phi_1 (\lambda - \lambda_0) + \mathcal{O}((\lambda - \lambda_0)^2).
\end{eqnarray}
Here $\varphi_0$, $\varphi_1$, and $\varphi_2$ are given by (\ref{norming-factor}), (\ref{gen-eigenvector}), and \eqref{second-generalized-eigenvector},
whereas $\phi_0$ and $\phi_1$ are given by (\ref{non-periodic-vector}) and \eqref{second-non-periodic-vector}.
Both columns of $\hat\Phi(\lambda)$ being solutions of the Lax system (\ref{lax-1})--(\ref{lax-2}), the three pairs in Table~\ref{table-1} give solutions $v(\lambda)$ of the linearized NLS equation (\ref{nls-lin}) at AB. Expanding $v(\lambda)$ at $\lambda=\lambda_0$ generates a set of possible solutions to the linearized NLS equation (\ref{nls-lin}) at AB. It turns out that the $L$-periodic solutions and the solutions growing linearly in $x$ obtained from Pair I in Table \ref{table-1} are all linear combinations of the previously obtained solutions, and that the solutions obtained from Pair III in Table \ref{table-1} are all at least quadratic in $x$. As a result, the new suitable solutions to the linearized NLS equation (\ref{nls-lin}) must be obtained by using Pair II in Table \ref{table-1}.
Using Pair II in Table \ref{table-1} with the two columns in the matrix $\hat{\Phi}(\lambda)$ expanded at $\lambda=\lambda_0$, we obtain the following expansions
\begin{equation}
\label{expansion-v}
v = (2ik_0) \left[ \frac{w_{\pm}}{\lambda - \lambda_0} + v_{\pm} + \mathcal{O}(\lambda - \lambda_0) \right],
\end{equation}
where each term of the expansion gives a solution $v$ to the linearized NLS equation (\ref{nls-lin}) at AB. The leading-order terms coincide with the solutions previously obtained in (\ref{w3}) and (\ref{w4}): $w_+ = w_3$ and $w_- = w_4$. The next corrections in (\ref{expansion-v}) give two new solutions:
\begin{eqnarray*}
v_+ &=& \varphi_{0,1} \phi_{1,1} - \bar{\varphi}_{0,2} \bar{\phi}_{1,2} + \varphi_{1,1} \phi_{0,1} - \bar{\varphi}_{1,2} \bar{\phi}_{0,2}, \\
v_- &=& i \varphi_{0,1} \phi_{1,1} + i \bar{\varphi}_{0,2} \bar{\phi}_{1,2} + i \varphi_{1,1} \phi_{0,1} + i \bar{\varphi}_{1,2} \bar{\phi}_{0,2},
\end{eqnarray*}
where the first subscript stands for $\varphi_0$, $\varphi_1$, $\phi_0$, and $\phi_1$ and the second subscript stands for the first and second components of the $2$-vectors. For further computations of $v_\pm$ we obtain
\begin{eqnarray*}
p_0 p_+(\lambda_0) + \bar{q}_0 \bar{q}_+(\lambda_0) &= &2 \cos(k_0x) \left[ 2 \lambda_0 \sinh(\sigma_0 t) - i k_0 \cosh(\sigma_0 t) \right], \\
p_0 p_+(\lambda_0) - \bar{q}_0 \bar{q}_+(\lambda_0)& =& -2 \sin(k_0x) \left[ k_0 \sinh(\sigma_0 t) + 2 i \lambda_0 \cosh(\sigma_0 t) \right].
\end{eqnarray*}
After substitution of (\ref{norming-factor}), (\ref{gen-eigenvector}),
(\ref{non-periodic-vector}), and (\ref{second-non-periodic-vector}) into $v_{\pm}$, we obtain
\begin{eqnarray*}
v_{\pm}(x,t) = x r_{\pm}(x,t) + t s_{\pm}(x,t) + f_{\pm}(x,t),
\end{eqnarray*}
where the $L$-periodic parts are computed explicitly:
\begin{eqnarray*}
r_+ &=& -\frac{8}{k_0^2} (3 - 2 \lambda_0^2) w_1, \\
r_- &=& -\frac{8}{k_0^2} (1 - 4 \lambda_0^2) w_2 +
\frac{2\lambda_0^2}{(1+\lambda_0)^2}v_2,
\end{eqnarray*}
\begin{eqnarray*}
s_+ &=& \frac{4}{k_0} (1-2\lambda_0^2) (\hat{p}_0 p_+(\lambda_0) - \bar{\hat{q}}_0 \bar{q}_+(\lambda_0)) + \frac{16}{k_0^2} (1-2\lambda_0^2) \sinh(\sigma_0t) \sin(k_0 x) w_1 \\
&& +\frac{8}{k_0^2} (1-2\lambda_0^2) \left( 2 \cosh(\sigma_0t) \cos(k_0 x) - \lambda_0 \right) w_2 - 8 \lambda_0 w_2, \\[1ex]
s_- &=& \frac{4i}{k_0} (1-2\lambda_0^2) (\hat{p}_0 p_+(\lambda_0) + \bar{\hat{q}}_0 \bar{q}_+(\lambda_0)) + \frac{16}{k_0^2} (1-2\lambda_0^2) \sinh(\sigma_0t) \sin(k_0 x) w_2 \\
&& -\frac{8}{k_0^2} (1-2\lambda_0^2) \left( 2 \cosh(\sigma_0t) \cos(k_0 x) - \lambda_0 \right) w_1 + 8 \lambda_0 w_1,
\end{eqnarray*}
and
\begin{eqnarray*}
f_+ &=& \frac{1}{2i k_0} \left( p_0 p_+(\lambda_0) + \bar{q}_0 \bar{q}_+(\lambda_0)\right) +
\frac{i}{k_0} (\hat{p}_0 p_+(\lambda_0) - \bar{\hat{q}}_0 \bar{q}_+(\lambda_0)) \\
&& + \frac{2}{k_0^2} \left( \cosh(\sigma_0 t) \cos(k_0 x) - \lambda_0 \right) (p_0 \hat{p}_0 - \bar{q}_0 \bar{\hat{q}}_0) - \frac{2i}{k_0^2} \sinh(\sigma_0 t) \sin(k_0 x) (p_0 \hat{p}_0 + \bar{q}_0 \bar{\hat{q}}_0)\\
&& + \frac{2}{k_0^2} \sinh(\sigma_0 t) \cos(k_0 x) (\hat{p}_0 p_+(\lambda_0) - \bar{\hat{q}}_0 \bar{q}_+(\lambda_0))
\\ && - \frac{2i}{k_0^2} \cosh(\sigma_0 t) \sin(k_0 x) (\hat{p}_0 p_+(\lambda_0) + \bar{\hat{q}}_0 \bar{q}_+(\lambda_0)) \\
&& + \frac{4}{k_0^3}
\cosh(2 \sigma_0t) \sin(2 k_0 x) w_1 + \frac{4}{k_0^3} \sinh(2 \sigma_0t) \cos(2 k_0 x) w_2, \\[1ex]
f_- &=& \frac{1}{2 k_0} \left( p_0 p_+(\lambda_0) - \bar{q}_0 \bar{q}_+(\lambda_0)\right) -
\frac{1}{k_0} (\hat{p}_0 p_+(\lambda_0) + \bar{\hat{q}}_0 \bar{q}_+(\lambda_0)) \\
&& + \frac{2i}{k_0^2} \left( \cosh(\sigma_0 t) \cos(k_0 x) - \lambda_0 \right) (p_0 \hat{p}_0 + \bar{q}_0 \bar{\hat{q}}_0)
+ \frac{2}{k_0^2} \sinh(\sigma_0 t) \sin(k_0 x) (p_0 \hat{p}_0 - \bar{q}_0 \bar{\hat{q}}_0)\\
&& + \frac{2i}{k_0^2} \sinh(\sigma_0 t) \cos(k_0 x) (\hat{p}_0 p_+(\lambda_0) + \bar{\hat{q}}_0 \bar{q}_+(\lambda_0))
\\ && + \frac{2}{k_0^2} \cosh(\sigma_0 t) \sin(k_0 x) (\hat{p}_0 p_+(\lambda_0) - \bar{\hat{q}}_0 \bar{q}_+(\lambda_0)) \\
&& + \frac{4}{k_0^3} \cosh(2 \sigma_0t) \sin(2 k_0 x) w_2
- \frac{4}{k_0^3} \sinh(2 \sigma_0t) \cos(2 k_0 x) w_1.
\end{eqnarray*}
The $x$-growing part of $v_+$ is cancelled in the linear combination
\begin{eqnarray}
\nonumber
v(x,t) &=& k_0^2 v_+(x,t) - \frac{2 \lambda_0 (3-2\lambda_0^2)(1-\lambda_0)}{1+\lambda_0} v_3(x,t) \\
\label{new+}
&=& t s_1(x,t) + k_0^2 f_+(x,t) - \frac{2 \lambda_0 (3-2\lambda_0^2)(1-\lambda_0)}{1+\lambda_0} f_1(x,t),
\end{eqnarray}
where
\begin{eqnarray*}
s_1(x,t) &=& 4i \lambda_0 (7-10\lambda_0^2)
\frac{(2 \lambda_0^2-1) \cosh(\sigma_0 t)-i\lambda_0k_0 \sinh(\sigma_0 t) - \lambda_0 \cos(k_0 x)}{\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)},
\end{eqnarray*}
and $f_+(x,t)$, $f_1(x,t)$ are $L$-periodic in $x$.
Note that $t s_1(x,t)$ is irreducible in the sense that there are no other solutions of the linearized NLS equation (\ref{nls-lin}) with the same behavior as $t s_1(x,t)$. On the other hand, $s_1(x,t)$ and $f_1(x,t)$ are bounded as $t \to \pm \infty$, whereas $k_0^2 f_+(x,t)$ is unbounded. As $t \to \pm \infty$, we deduce that the exponentially growing component of $v(x,t)$ is given by
\[
v(x,t) \sim -4 k_0^2 (1 - 4 \lambda_0^2) \cosh(\sigma_0 t) \cos(k_0 x)
- 8 i \lambda_0 k_0 (3 - 4 \lambda_0^2) \sinh(\sigma_0 t) \cos(k_0x).
\]
We conclude that the solution $v$ in \eqref{new+} is $L$-periodic and asymptotically equivalent to the mode $v_{-\lambda(k_1)}^+$ in the decomposition (\ref{v-arbitrary-constant}) as $t \to \pm \infty$.
The $x$-growing part of $v_-$ is cancelled in the linear combination
\begin{eqnarray}
\nonumber
v(x,t) &=& k_0^2 v_-(x,t) - \frac{2 (1-4\lambda_0^2)}{\lambda_0} w_4(x,t)
-\frac{8\lambda_0^2 (1-\lambda_0)}{1+\lambda_0} v_4(x,t) \\
\label{new-}
&=& t s_2(x,t) + k_0^2 f_-(x,t)
- \frac{2 (1-4\lambda_0^2)}{\lambda_0} g_2(x,t) -\frac{8\lambda_0^2 (1-\lambda_0)}{1+\lambda_0} f_2(x,t),
\end{eqnarray}
where
\begin{eqnarray*}
s_2(x,t) = -\lambda_0 k_0^2 (2\lambda_0^2+1)
\frac{\sin(k_0 x)[k_0 \cosh(\sigma_0 t)+2i\lambda_0 \sinh(\sigma_0 t)]}{[\cosh(\sigma_0 t) - \lambda_0 \cos(k_0 x)]^2},
\end{eqnarray*}
and $f_-(x,t)$, $f_2(x,t)$, $g_2(x,t)$ are $L$-periodic in $x$.
Again, the solution $v(x,t)$ grows exponentially in time as $t \to \pm \infty$ due to the unbounded component $k_0^2 f_-(x,t)$, according to
$$
v(x,t) \sim -4 k_0^2 (1 - 4 \lambda_0^2) \sinh(\sigma_0 t) \sin(k_0 x)
- 8 i \lambda_0 k_0 (3 - 4 \lambda_0^2) \cosh(\sigma_0 t) \sin(k_0x).
$$
We conclude that the solution $v$ in \eqref{new-} is $L$-periodic and asymptotically equivalent to the mode $v_{\lambda(k_1)}^+$ in the decomposition (\ref{v-arbitrary-constant}) as $t \to \pm \infty$. This completes the proof of the theorem.
\subsubsection{Solutions related to other eigenvalues}
We conclude this section with some comments on the solutions generated by the remaining eigenvalues in the Lax spectrum. These are the geometrically double eigenvalues $\{ \lambda_m^{(P)} \}_{m \in \mathbb{N}_{\rm even}}$ for $L$-periodic solutions and $\{ \lambda_m^{(A)}\}_{m \in \mathbb{N}_{\rm odd} \backslash \{1\}}$ for $L$-antiperiodic solutions. We exclude the case $L \in \pi\mathbb N$, for which $0$ is an eigenvalue of higher algebraic multiplicity and the two eigenfunctions alone were not enough to obtain the decomposition (\ref{v-arbitrary-constant}) for the constant solution~$u=1$. Then, using Pairs I and III in Table \ref{table-1} with the associated eigenfunctions, we obtain $L$-periodic solutions of the linearized NLS equation (\ref{nls-lin}) at AB which are asymptotically equivalent to the solutions $\{ v_{\pm \lambda(k_m)}^\pm \}_{m\in\mathbb N\setminus\{0,1\}}$ in the decomposition (\ref{v-arbitrary-constant}). Pair II in Table \ref{table-1} generates two $L$-periodic solutions which are linear combinations of the solutions $v_1$, $v_2$, and $w_2$ from Theorem~\ref{six solutions}. Together with the other three solutions from Theorem~\ref{six solutions}, the resulting set of solutions is asymptotically equivalent to the one in the decomposition (\ref{v-arbitrary-constant}). While we do not attempt to prove completeness of this set, we refer to \cite[Section 4]{GS-2021} for a recent discussion of this question.
\section{Kuznetsov--Ma breather (KMB)} \label{s KMB}
Here we apply the procedure of Section \ref{s AB} to KMB. Since KMB is localized in $x$, we have to consider the Lax spectrum and bounded solutions of the linearized NLS equation in the function space $L^2(\mathbb R)$.
Let $\lambda_0 \in (1,\infty)$ and define
the particular solution $\varphi = (p_0,q_0)^T$ of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = 1$ and $\lambda = \lambda_0$:
\begin{equation}
\label{KMB-eigen}
\left\{ \begin{array}{l}
\displaystyle
p_0(x,t) = \sqrt{\lambda_0 + \frac{1}{2} \beta_0} \; e^{\frac{1}{2} (\beta_0 x + i \alpha_0 t)} - \sqrt{\lambda_0 - \frac{1}{2} \beta_0} \;
e^{-\frac{1}{2} (\beta_0 x + i \alpha_0 t)}, \\
\displaystyle
q_0(x,t) = -\sqrt{\lambda_0 - \frac{1}{2} \beta_0} \; e^{\frac{1}{2} (\beta_0 x + i \alpha_0 t)} + \sqrt{\lambda_0 + \frac{1}{2} \beta_0} \;
e^{-\frac{1}{2} (\beta_0 x + i \alpha_0 t)},
\end{array}
\right.
\end{equation}
where $\beta_0 = 2 \sqrt{\lambda_0^2-1}$ and $\alpha_0 = \lambda_0 \beta_0$.
Notice that $p_0$ and $q_0$ in (\ref{KMB-eigen}) are related symbolically to the ones for AB in \eqref{AB-eigen} through the equalities $k_0=i\beta_0$ and $\sigma_0=i\alpha_0$. Elementary computations give
\begin{eqnarray} \nonumber
&&|p_0|^2 + |q_0|^2 = 4 \left[ \lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t) \right]\\
\label{modKMB}
&&|p_0|^2 - |q_0|^2 = 2\beta_0\sinh(\beta_0 x)\\
\nonumber
&&p_0 \bar{q}_0 = -2 \cosh(\beta_0 x) + 2 \lambda_0 \cos(\alpha_0 t) + i \beta_0 \sin(\alpha_0 t),
\end{eqnarray}
so that the one-fold Darboux transformation (\ref{DT-potential}) yields the formula for KMB:
\begin{equation}
\label{KM-u}
\hat{u}_0(x,t) = - 1 + \frac{2(\lambda_0^2-1) \cos(\alpha_0 t) + i \alpha_0 \sin(\alpha_0 t)}{\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)}.
\end{equation}
The complementary transformation (\ref{DT-squared}) gives a consistent relation
\begin{equation*}
|\hat{u}_0(x,t)|^2 = 1 + \alpha_0 \beta_0 \frac{\lambda_0 - \cosh(\beta_0 x) \cos(\alpha_0 t)}{(\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t))^2},
\end{equation*}
which can also be derived from (\ref{KM-u}). KMB is periodic in $t$ with period $T = 2\pi/\alpha_0$ and localized in $x$ with $\lim\limits_{x \to \pm \infty} \hat{u}_0(x,t) =-1$.
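The algebraic identities (\ref{modKMB}) and the modulus relation above are elementary but error-prone; they can be verified at sampled points with the following Python sketch (illustrative only; the value of $\lambda_0 > 1$ and the sampling ranges are arbitrary).

```python
# Numerical check of (modKMB) and of |u0hat|^2 for KMB at random points.
import numpy as np

lam = 1.5
b0 = 2.0 * np.sqrt(lam**2 - 1.0)       # beta0
a0 = lam * b0                          # alpha0
rng = np.random.default_rng(1)
x, t = rng.uniform(-1, 1, 100), rng.uniform(-1, 1, 100)

# p0, q0 from (KMB-eigen)
th = 0.5 * (b0 * x + 1j * a0 * t)
p0 = np.sqrt(lam + b0 / 2) * np.exp(th) - np.sqrt(lam - b0 / 2) * np.exp(-th)
q0 = -np.sqrt(lam - b0 / 2) * np.exp(th) + np.sqrt(lam + b0 / 2) * np.exp(-th)

D = lam * np.cosh(b0 * x) - np.cos(a0 * t)
assert np.allclose(abs(p0)**2 + abs(q0)**2, 4 * D)
assert np.allclose(abs(p0)**2 - abs(q0)**2, 2 * b0 * np.sinh(b0 * x))
assert np.allclose(p0 * np.conj(q0),
                   -2 * np.cosh(b0 * x) + 2 * lam * np.cos(a0 * t)
                   + 1j * b0 * np.sin(a0 * t))

# KMB profile (KM-u) and its modulus relation
u0hat = -1.0 + (2 * (lam**2 - 1) * np.cos(a0 * t)
                + 1j * a0 * np.sin(a0 * t)) / D
assert np.allclose(abs(u0hat)**2,
                   1 + a0 * b0 * (lam - np.cosh(b0 * x) * np.cos(a0 * t))
                   / D**2)
```

All four assertions pass for any $\lambda_0 > 1$, since $\lambda_0 \pm \beta_0/2 > 0$ keeps the square roots real.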
\subsection{Lax spectrum at KMB} \label{ss KMB Lax}
As for AB, we use the Darboux matrix (\ref{DT-matrix}) to construct the bounded solutions of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = \hat{u}_0$ from the bounded solutions of the Lax system with $u=1$ and then determine the Lax spectrum at KMB in~$L^2(\mathbb R)$. The Lax spectrum $\Sigma_{\rm KMB}$ is shown in Figure~\ref{fig-Lax-3} where the red dots show isolated eigenvalues $\{+\lambda_0,-\lambda_0\}$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=8cm,height=6cm]{SpectrumLaxKM}
\end{center}
\caption{The Lax spectrum $\Sigma_{\rm KMB}$ in $L^2(\mathbb{R})$ for KMB. }
\label{fig-Lax-3}
\end{figure}
The following lemma gives precisely the spectrum $\Sigma_{\rm KMB}$.
\begin{lemma}\label{LaxKMB}
Consider KMB given by (\ref{KM-u}). The spectrum of the ZS spectral problem (\ref{lax-1}) with $u = \hat{u}_0$ in $L^2(\mathbb R)$ is the set
\begin{equation}
\label{spectrum-KMB}
\Sigma_{\rm KMB} = i \mathbb{R} \cup [-1,1] \cup \{ \lambda_0, -\lambda_0\},
\end{equation}
with the following properties:
\begin{enumerate}
\item For each $\lambda \in i \mathbb{R} \cup (-1,1)$, there exist two linearly independent bounded solutions.
\item For $\lambda = 1$ and $\lambda = -1$, there exists only one bounded solution.
\item The eigenvalues $\lambda = \lambda_0$ and $\lambda = -\lambda_0$ are algebraically simple with associated eigenfunctions $\varphi = (\hat{p}_0,\hat{q}_0)^T$ and $\varphi = (\bar{\hat{q}}_0,-\bar{\hat{p}}_0)^T$ respectively.
\end{enumerate}
\end{lemma}
\begin{proof}
The Darboux matrix $D(\lambda)$ given by (\ref{DT-matrix}) is invertible for $\lambda\not=\pm\lambda_0$. Moreover, both $D(\lambda)$ and its inverse are bounded in $x$ for $\lambda\not=\pm\lambda_0$. As a result, we have a one-to-one correspondence between the bounded solutions of the Lax systems with $u = 1$ and $u = \hat{u}_0$ for $\lambda\not=\pm\lambda_0$. This implies that, up to the values $\pm\lambda_0$, the ZS spectral problems
(\ref{lax-1}) with $u = 1$ and $u = \hat{u}_0$ have the same continuous spectrum $\Sigma_0$ given by \eqref{spectrum-constant}, so that properties (1) and (2) hold.
It remains to prove (3). Due to the symmetry property of the Lax system, it is enough to show the result for $\lambda_0$.
The vector $\varphi = (p_0,q_0)^T$ given by \eqref{KMB-eigen} is a solution of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = 1$ and $\lambda = \lambda_0$. The Darboux transformation (\ref{DT-eigen}) gives the solution $\varphi = (\hat{p}_0,\hat{q}_0)^T$ of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $u = \hat{u}_0$ and $\lambda = \lambda_0$. From the formulas \eqref{KMB-eigen} and \eqref{modKMB} we find that $\hat{p}_0(x,t),\hat{q}_0(x,t) \to 0$ as $|x| \to \infty$ exponentially fast, hence $\varphi = (\hat{p}_0,\hat{q}_0)^T$ is an eigenfunction in $L^2(\mathbb{R})$ associated with $\lambda_0$.
Furthermore, since the Lax system (\ref{lax-1})--(\ref{lax-2}) has zero trace, the Wronskian of any two solutions is constant both in $x$ and $t$. Since one solution decays to zero as $|x| \to \infty$, any other linearly independent solution necessarily grows at infinity. Consequently, $\lambda_0$ is geometrically simple.
For the algebraic multiplicity, we use the form $\left( \mathcal{L} - \lambda_0 I \right) \varphi_0 = 0$ with the eigenfunction $\varphi_0 = (\hat{p}_0,\hat{q}_0)^T \in L^2(\mathbb{R})$ and show that the linear nonhomogeneous equation
\begin{equation}
\label{eigen-gen}
\left( \mathcal{L} - \lambda_0 I \right) \psi_0 = \varphi_0,
\end{equation}
does not admit a generalized eigenfunction $\psi_0\in L^2(\mathbb R)$. The solvability condition for this equation is given by the Fredholm condition
$\langle \varphi_0^*, \varphi_0 \rangle = 0$, where
$\langle \cdot, \cdot \rangle$ is the inner product in $L^2(\mathbb{R})$ and $\varphi_0^*$ is an eigenfunction of the adjoint problem $\left( \mathcal{L}^* - \lambda_0 I \right) \varphi_0^* = 0$. A direct calculation shows that
$\varphi_0^* = (\bar{\hat{q}}_0,\bar{\hat{p}}_0)^T$, and then we compute
\begin{eqnarray*}
\langle \varphi_0^*, \varphi_0 \rangle &=& -\int_{\mathbb{R}} \frac{8 \lambda_0^2 p_0 q_0}{(|p_0|^2 + |q_0|^2)^2} dx \\
&=& \lambda_0^2 \int_{\mathbb{R}} \frac{\cosh(\beta_0 x + i \alpha_0 t) - \lambda_0}{(\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t))^2} dx \\
&=& -\lambda_0 \beta_0^{-1} \frac{\lambda_0 \sinh(\beta_0 x) + i \sin(\alpha_0 t)}{\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)} \biggr|_{x \to -\infty}^{x \to +\infty} =-2 \lambda_0 \beta_0^{-1}.
\end{eqnarray*}
Consequently, $\langle \varphi_0^*, \varphi_0 \rangle\not=0$ so that $\lambda_0$ is a simple eigenvalue and property (3) holds.
\end{proof}
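The final step of the proof, the value of $\langle \varphi_0^*, \varphi_0 \rangle$, can be checked numerically by quadrature of the integrand in the middle line; the sample values $\lambda_0 = 1.2$ and $t = 0.3$ are arbitrary (the value of the integral is independent of $t$).

```python
import math, cmath

# Arbitrary sample values with lam > 1; the result should not depend on t.
lam = 1.2
beta = 2.0 * math.sqrt(lam**2 - 1.0)
alpha = lam * beta
t = 0.3

def integrand(x):
    D = lam * math.cosh(beta * x) - math.cos(alpha * t)
    return lam**2 * (cmath.cosh(beta * x + 1j * alpha * t) - lam) / D**2

# Composite Simpson rule on [-L, L]; the integrand decays like exp(-beta*|x|),
# so the truncation at L = 40 is negligible.
L, n = 40.0, 16000
h = 2.0 * L / n
acc = integrand(-L) + integrand(L)
for k in range(1, n):
    acc += (4.0 if k % 2 else 2.0) * integrand(-L + k * h)
inner = acc * h / 3.0

expected = -2.0 * lam / beta   # the value of <phi0*, phi0> derived in the proof
```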
\subsection{Linearized NLS equation at KMB} \label{ss KMB LNLS}
As in the case of the constant-amplitude solution $u=1$, we construct bounded solutions of the linearized NLS equation \eqref{nls-lin} at KMB from the bounded solutions of the Lax equations for $\lambda\in\Sigma_{\rm KMB}$ in Lemma~\ref{LaxKMB}. Recall that the solutions are unbounded in $x$ when $\lambda\notin\Sigma_{\rm KMB}$.
Here we focus on solutions generated by the eigenvalue $\lambda_0$, and in particular on those which are decaying to zero as $|x| \to \infty$. The eigenvalue $\lambda = -\lambda_0$ produces the same solutions of the linearized NLS equation, due to the symmetry in Remark \ref{remark-symmetry}.
The following theorem provides the main result of these computations.
\begin{theorem}\label{t kmb}
Consider KMB given by (\ref{KM-u}). The eigenvalue $\lambda_0$ of the Lax system generates three linearly independent exponentially decaying solutions of the linearized NLS equation (\ref{nls-lin}). These solutions are proportional to the three derivatives of KMB with respect to $x$, $t$, and $\lambda_0$.
\end{theorem}
\begin{remark}
Besides the three exponentially decaying solutions in Theorem \ref{t kmb}, we also find three linearly independent bounded solutions which are asymptotically constant as $x\to\pm\infty$. These six solutions are the analogue for KMB of the six solutions given in Theorem~\ref{six solutions} for~AB.
\end{remark}
\subsubsection{Solutions related to $\lambda= \lambda_0$}
Since $\lambda_0$ is a simple eigenvalue in $\Sigma_{\rm KMB}$, the eigenfunction $\varphi_0 = (\hat{p}_0,\hat{q}_0)^T$ provides an exponentially decaying solution of the Lax system (\ref{lax-1})--(\ref{lax-2}) with $\lambda=\lambda_0$. The second linearly independent solution is exponentially growing in $x$. According to Remark~\ref{hatphi kmb} in Appendix~\ref{a hatphi} this second solution is given by:
\begin{equation}
\label{growing-vector}
\phi_0 = \left[ \begin{array}{c} p_0 \\ q_0 \end{array} \right] +
4 \left[-\lambda_0 x + i (1-2\lambda_0^2) t + \beta_0^{-1} \sinh(\beta_0 x + i \alpha_0 t) \right] \left[ \begin{array}{c} \hat{p}_0 \\ \hat{q}_0 \end{array} \right].
\end{equation}
By using Pair I in Table \ref{table-1} with $\varphi_0$, we obtain the solutions
\begin{eqnarray}
w_1(x,t) = \hat{p}_0^2 - \bar{\hat{q}}_0^2 = -\frac{\lambda_0^2 \sinh(\beta_0 x) (\beta_0 \cos(\alpha_0 t) + 2 i \lambda_0 \sin(\alpha_0 t))}{2 (\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t))^2}
\label{mode-1}
\end{eqnarray}
and
\begin{eqnarray}
w_2(x,t) = i\hat{p}_0^2 +i\bar{\hat{q}}_0^2 =\frac{i\lambda_0^2 (2 \lambda_0 \cosh(\beta_0 x) \cos(\alpha_0 t) - 2 + i \beta_0 \cosh(\beta_0 x) \sin(\alpha_0 t))}{2 (\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t))^2},
\label{mode-2}
\end{eqnarray}
which are periodic in $t$ and exponentially decaying in $x$. It turns out that these solutions are proportional to the derivatives of $\hat{u}_0$ with respect to $x$ and $t$,
\[
w_1= \lambda_0 \beta_0^{-2} \frac{\partial \hat{u}_0}{\partial x},\quad
w_2 = \beta_0^{-2} \frac{\partial \hat{u}_0}{\partial t}.
\]
Hence these solutions are generated by the symmetries of the NLS equation (\ref{nls-u}) with respect to translation in $x$ and $t$.
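These proportionality relations can be confirmed numerically by comparing the explicit formulas (\ref{mode-1})--(\ref{mode-2}) with centered finite differences of (\ref{KM-u}); the parameter $\lambda_0 = 1.3$ and the evaluation points are arbitrary illustrative choices.

```python
import math

# Arbitrary sample parameter with lam > 1.
lam = 1.3
beta = 2.0 * math.sqrt(lam**2 - 1.0)
alpha = lam * beta

def u0(x, t):
    D = lam * math.cosh(beta * x) - math.cos(alpha * t)
    return -1.0 + (2.0 * (lam**2 - 1.0) * math.cos(alpha * t)
                   + 1j * alpha * math.sin(alpha * t)) / D

def w1(x, t):
    # formula (mode-1)
    D = lam * math.cosh(beta * x) - math.cos(alpha * t)
    return (-lam**2 * math.sinh(beta * x)
            * (beta * math.cos(alpha * t) + 2j * lam * math.sin(alpha * t))
            / (2.0 * D**2))

def w2(x, t):
    # formula (mode-2)
    D = lam * math.cosh(beta * x) - math.cos(alpha * t)
    return (1j * lam**2 * (2.0 * lam * math.cosh(beta * x) * math.cos(alpha * t)
            - 2.0 + 1j * beta * math.cosh(beta * x) * math.sin(alpha * t))
            / (2.0 * D**2))

# centered finite differences: w1 = lam/beta^2 * du/dx, w2 = 1/beta^2 * du/dt
h = 1e-5
pts = [(-1.3, 0.2), (0.4, 1.1), (2.0, 2.7)]
err_x = max(abs(w1(x, t) - lam / beta**2
                * (u0(x + h, t) - u0(x - h, t)) / (2.0 * h)) for x, t in pts)
err_t = max(abs(w2(x, t) - 1.0 / beta**2
                * (u0(x, t + h) - u0(x, t - h)) / (2.0 * h)) for x, t in pts)
```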
While Pair III in Table \ref{table-1} with $\phi_0$ gives exponentially growing solutions, Pair II with $\varphi_0$ and $\phi_0$ gives two bounded solutions:
\begin{eqnarray}
\label{mode-3}
w_3(x,t) &=& -4 \lambda_0 x w_1(x,t) + 4 (1-2\lambda_0^2) t w_2(x,t) + f_1(x,t),\\
\label{mode-4}
w_4(x,t) &=& -4 \lambda_0 x w_2(x,t) - 4 (1-2\lambda_0^2) t w_1(x,t) + f_2(x,t),
\end{eqnarray}
where $w_1$ and $w_2$ are given by \eqref{mode-1} and \eqref{mode-2}, respectively, and
\begin{eqnarray*}
f_1(x,t) &=& 2 \lambda_0 \cos(\alpha_0 t) \frac{2 \lambda_0 \cos(\alpha_0 t) - (1+\lambda_0^2) \cosh(\beta_0 x)}{\left[ \lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t) \right]^2} \\
&& + 4 i \lambda_0 \beta_0^{-1} \sin(\alpha_0 t) \frac{(2\lambda_0^2 -1) \cos(\alpha_0 t) - \lambda_0^3 \cosh(\beta_0 x)}{\left[ \lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t) \right]^2},
\\
f_2(x,t)& =& 4i \lambda_0^2 \beta_0^{-1} \frac{\sinh(\beta_0 x)}{\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)}.
\end{eqnarray*}
Here $f_1$ is exponentially decreasing as $|x| \to \infty$, whereas $f_2$ is bounded but not decaying as $|x| \to \infty$, and both $f_1$ and $f_2$ are periodic in $t$. Consequently, $w_3$ is also exponentially decaying in $x$, and a direct computation shows that it is proportional to the derivative of $\hat{u}_0$ with respect to $\lambda_0$,
\[
\frac{d \hat{u}_0}{d \lambda_0} = x \beta_0^{-1} \beta_0'(\lambda_0) \frac{\partial \hat{u}_0}{\partial x} + t \alpha_0^{-1} \alpha_0'(\lambda_0) \frac{\partial \hat{u}_0}{\partial t} + \frac{\partial \hat{u}_0}{\partial \lambda_0} = -\lambda_0^{-1} w_3.
\]
In this computation, $\lambda_0$ is an arbitrary parameter and we write $\beta_0 = \beta_0(\lambda_0)= 2 \sqrt{\lambda_0^2 - 1}$ and $\alpha_0=\alpha_0(\lambda_0) = \lambda_0 \beta_0(\lambda_0)$.
The solution $w_4$ is asymptotically constant, with
\begin{equation}\label{limv4}
\lim_{x \to \pm \infty} w_4(x,t) =\pm 4i \lambda_0 \beta_0^{-1}.
\end{equation}
The solutions $w_1$, $w_2$, and $w_3$ are the three linearly independent exponentially decaying solutions in Theorem~\ref{t kmb}.
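The limit (\ref{limv4}) can likewise be checked numerically from the explicit formulas for $w_1$, $w_2$, and $f_2$; again $\lambda_0 = 1.2$ and the sample times are arbitrary choices.

```python
import math

# Arbitrary sample parameter with lam > 1.
lam = 1.2
beta = 2.0 * math.sqrt(lam**2 - 1.0)
alpha = lam * beta

def w4(x, t):
    # assembled from (mode-1), (mode-2), (mode-4) and the formula for f2
    Ch, Sh = math.cosh(beta * x), math.sinh(beta * x)
    c, s = math.cos(alpha * t), math.sin(alpha * t)
    D = lam * Ch - c
    w1 = -lam**2 * Sh * (beta * c + 2j * lam * s) / (2.0 * D**2)
    w2 = 1j * lam**2 * (2.0 * lam * Ch * c - 2.0 + 1j * beta * Ch * s) / (2.0 * D**2)
    f2 = 4j * lam**2 * Sh / (beta * D)
    return -4.0 * lam * x * w2 - 4.0 * (1.0 - 2.0 * lam**2) * t * w1 + f2

# compare with the claimed limits +-4i*lam/beta at x -> +-infinity
limit = 4j * lam / beta
dev = max(abs(w4(sgn * 30.0, t) - sgn * limit)
          for sgn in (-1.0, 1.0) for t in (0.0, 0.8, 2.1))
```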
\subsubsection{Solutions related to $\lambda \in \Sigma_0$}
First, we consider the solutions of the linearized NLS equation which are asymptotically constant, but not decaying to $0$. These solutions are obtained from Pairs I and II in Table \ref{table-1} for $\lambda=1$ and from Pair II for any $\lambda \in \Sigma_0$. We are looking for a suitably chosen linear combination of these solutions with $w_4$ which might lead to a fourth exponentially decaying solution of the linearized NLS equation (\ref{nls-lin}). We show below that this is not the case.
For $\lambda=1$, the Lax system has the bounded solution $\hat\varphi$ in \eqref{varphi-1} and the unbounded solution $\hat\phi$ in \eqref{varphi-2} in which $(p_0,q_0)$ is given by (\ref{KMB-eigen}).
Using Pair~I of Table~\ref{table-1} with $\hat\varphi$ we find the solutions
\begin{equation}\label{e kmb v1}
v_1(x,t)= \frac{2\lambda_0(1+\lambda_0)}{1-\lambda_0}\frac{ \sinh(\beta_0 x) (\beta_0 \cos(\alpha_0 t) + 2 i \lambda_0 \sin(\alpha_0 t))}{[\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)]^2}
\end{equation}
and
\begin{eqnarray}
v_2(x,t)&=& \frac{2i}{(1-\lambda_0)^2}\left[\frac{-\lambda_0 \cosh(\beta_0 x) +(2\lambda_0^2-1) \cos(\alpha_0 t) + 2 i \lambda_0\beta_0 \sin(\alpha_0 t)}{ \lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)}\right. \nonumber \\
&&\left.+\lambda_0^2\frac{\beta_0^2 \sinh^2(\beta_0 x)+
(-2 \cosh(\beta_0 x) + 2 \lambda_0 \cos(\alpha_0 t) + i \beta_0 \sin(\alpha_0 t))^2}{4 \left[\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)\right]^2}\right].
\label{e kmb v2}
\end{eqnarray}
The solution $v_1$ is proportional to $w_1$ given in \eqref{mode-1},
\[
v_1 = -\frac{4(1+\lambda_0)}{\lambda_0 (1-\lambda_0)} w_1,
\]
whereas the solution $v_2$ is asymptotically constant with
\[
\lim_{x \to \pm \infty} v_{2}(x,t) =-\frac{2i(1+\lambda_0)}{1-\lambda_0} .
\]
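Both the proportionality $v_1 \propto w_1$ and the limit of $v_2$ can be verified numerically from the explicit formulas (\ref{e kmb v1})--(\ref{e kmb v2}); the value $\lambda_0 = 1.4$ and the sample points are arbitrary illustrative choices.

```python
import math

# Arbitrary sample parameter with lam > 1.
lam = 1.4
beta = 2.0 * math.sqrt(lam**2 - 1.0)
alpha = lam * beta

def helpers(x, t):
    Ch, Sh = math.cosh(beta * x), math.sinh(beta * x)
    c, s = math.cos(alpha * t), math.sin(alpha * t)
    return Ch, Sh, c, s, lam * Ch - c

def w1(x, t):
    Ch, Sh, c, s, D = helpers(x, t)
    return -lam**2 * Sh * (beta * c + 2j * lam * s) / (2.0 * D**2)

def v1(x, t):
    Ch, Sh, c, s, D = helpers(x, t)
    return 2.0 * lam * (1.0 + lam) / (1.0 - lam) * Sh * (beta * c + 2j * lam * s) / D**2

def v2(x, t):
    Ch, Sh, c, s, D = helpers(x, t)
    term1 = (-lam * Ch + (2.0 * lam**2 - 1.0) * c + 2j * lam * beta * s) / D
    term2 = lam**2 * (beta**2 * Sh**2
                      + (-2.0 * Ch + 2.0 * lam * c + 1j * beta * s)**2) / (4.0 * D**2)
    return 2j / (1.0 - lam)**2 * (term1 + term2)

# v1 = -4(1+lam)/(lam(1-lam)) * w1
prop = max(abs(v1(x, t) + 4.0 * (1.0 + lam) / (lam * (1.0 - lam)) * w1(x, t))
           for x, t in [(-0.7, 0.3), (0.5, 1.4), (1.8, 2.2)])

# v2 -> -2i(1+lam)/(1-lam) as x -> +-infinity
v2_inf = -2j * (1.0 + lam) / (1.0 - lam)
dev = max(abs(v2(sgn * 25.0, t) - v2_inf) for sgn in (-1.0, 1.0) for t in (0.0, 0.9))
```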
By using Pair~II with $\hat{\varphi}$ and $\hat{\phi}$,
we find the bounded solution
\begin{equation}\label{e kmb v3}
v_3(x,t)= \left(x+\frac12\right)v_{1}(x,t) + t v_{2}(x,t) + g_1(x,t) ,
\end{equation}
where
\begin{eqnarray*}
g_1(x,t)= \frac1{(1-\lambda_0)^2}\left[1+
\lambda_0^2\frac{\beta_0^2 \sinh^2(\beta_0 x)+
(-2 \cosh(\beta_0 x) + 2 \lambda_0 \cos(\alpha_0 t) + i \beta_0 \sin(\alpha_0 t))^2}{4 \left[\lambda_0 \cosh(\beta_0 x) - \cos(\alpha_0 t)\right]^2}\right] ,
\end{eqnarray*}
and a solution $v_4(x,t)$ which is unbounded in $x$. The solution $v_3$ grows linearly in $t$ and
\[
\lim_{x \to \pm \infty} v_{3}(x,t) = -t\,\frac{2i(1+\lambda_0)}{1-\lambda_0}+\frac{\lambda_0^2}{(1-\lambda_0)^2}.
\]
Pair~III gives two unbounded solutions. The three bounded, but not decaying to $0$, solutions $v_2$, $v_3$, and $w_4$ are linearly independent. Comparing their limits at $x=\pm\infty$ we conclude that there is no linear combination of these solutions which could lead to a localized solution.
Using Pair~II in Table \ref{table-1} with the two linearly independent solutions of the Lax system for $\lambda\in i\mathbb R \cup (-1,1)$ we do not find any new solutions. After some computations, we obtain that all these solutions are linear combinations of the exponentially decaying solutions $w_1$, $w_2$, obtained from the eigenvalue $\lambda=\lambda_0$, and the asymptotically constant solution $v_2$ obtained from $\lambda=1$.
The remaining solutions of the linearized NLS equation (\ref{nls-lin}) are obtained using Pairs I and III in Table \ref{table-1} for $\lambda\in i\mathbb R \cup (-1,1)$. Since the Darboux matrix $D(\lambda)$ in (\ref{DT-matrix}) is invertible with constant limits as $x\to\pm\infty$, these solutions are asymptotically the linear combinations of the solutions found for $u = 1$. For $\lambda\in i\mathbb R \cup (-1,1)$ the latter solutions are asymptotically periodic in $x$ with wavenumber $k=k(\lambda)=2\sqrt{1-\lambda^2}>0$. By analogy with the case $u=1$, we denote by $\hat{v}_{\pm\lambda(k)}^\pm(x,t)$ the four solutions of the linearized NLS equation at KMB for $k \in (0,\infty) \backslash \{2\}$. Although only two linearly independent solutions are obtained for $k = 2$, the point $k = 2$ is of measure zero in the continuous spectrum $\Sigma_{\rm KMB}$.
\subsubsection{Localized solutions}
Based on these explicit computations, we expect that a solution $v \in C^0(\mathbb{R},L^2(\mathbb{R}))$ of the linearized NLS equation (\ref{nls-lin}) at KMB can be uniquely expressed in the linear superposition form
\begin{eqnarray}
v(x,t)&=& c_1 v_1(x,t) + c_2 v_2(x,t) + c_3 v_3(x,t) \nonumber \\
&&+\int_0^\infty \left[\hat c_k^+ \hat v_{\lambda(k)}^+(x,t) + \hat c_k^-\hat v_{\lambda(k)}^-(x,t) + \hat c_{-k}^+ \hat v_{-\lambda(k)}^+(x,t) + \hat c_{-k}^-\hat v_{-\lambda(k)}^-(x,t)\right] dk,\quad
\label{v-KMB}
\end{eqnarray}
where the coefficients $c_1$, $c_2$, $c_3$, and $\hat{c}_{\pm k}^\pm$ are uniquely determined by the initial condition $v(\cdot,0)=v_0\in L^2(\mathbb R)$. A rigorous justification of this formula requires an additional completeness proof which is outside the scope of this paper.
\begin{remark}
The decomposition (\ref{v-KMB}) precisely shows how many linearly independent solutions of the linearized NLS equation (\ref{nls-lin}) at KMB correspond to the point and continuous parts of the Lax spectrum $\Sigma_{\rm KMB}$. Interestingly, this decomposition is different from the complete set of solutions of the linearized NLS
equation at the NLS soliton \cite{Kaup1,Kaup2} where perturbations to a single NLS soliton are decomposed over four exponentially decaying solutions which correspond to translations of the NLS soliton over four parameters and the four continuous families of eigenfunctions of the continuous spectrum. Here, we only found three exponentially decaying solutions.
\end{remark}
\begin{remark}
It follows from (\ref{v-KMB}) that the linear instability of KMB is related to the continuous spectrum $\Sigma_0$ in $\Sigma_{\rm KMB}$ with exactly the same growth rate as the one of the constant-amplitude background $u = 1$. This is in agreement with the numerical computation of unstable modes for KMB in \cite{Cuevas}, where KMB was truncated on a spatially periodic domain $[-L,L]$. According to Figs.~1, 2, and 3 in \cite{Cuevas}, the number of unstable modes of KMB depends on the period $T$ for every fixed $L$. In the limit $T \to 0$ $(\lambda_0 \to \infty)$, the unstable modes correspond to those of the constant-amplitude background $u = 1$. However, for each fixed $L$, the number of unstable modes decreases as $T$ decreases. Our analysis corresponds to the case $L = \infty$, when the unstable modes form a continuous spectrum which is independent of period~$T$. Indeed, the results in \cite{Cuevas} showed that the number of unstable modes increases when $L$ increases.
\end{remark}
\section{Conclusion}
\label{sec-conclusion}
We have classified solutions of the linearized NLS equation (\ref{nls-lin})
at two breather solutions of the NLS equation (\ref{nls-u}) given by
AB and KMB. In the case of AB, our results agree with the symbolic computations in \cite{GS-2021} where exponentially growing in time and spatially periodic
solutions of the linearized NLS equation were discovered. In the case of KMB,
we provide the set of solutions for characterizing the linear instability of breathers, which was not achieved in the previous work \cite{Z} due to the lack of spectral mapping properties. In both cases, the question of completeness was left open and remains the open problem of highest priority for future work.
Among further directions, it is worth mentioning that AB and KMB are particular solutions of the third-order Lax--Novikov equation
\begin{equation}
\label{LN-3}
u''' + 6 |u|^2 u' + 2i c (u'' + 2|u|^2 u) + 4 b u' + 8i a u = 0,
\end{equation}
for $a = c = 0$. More general solutions of the third-order Lax--Novikov
equation with $a = c = 0$ are represented by the double-periodic solutions which are periodic both in $x$ and $t$ \cite{Nail1,CPW}. Linear instabilities of the double-periodic solutions were recently explored in \cite{Pel-Double} by utilizing the Floquet theory both in $x$ and $t$. The linear unstable bands
of the double-periodic solutions should correspond to the linear unstable
modes of AB and KMB when the double-periodic solutions degenerate into these breathers; this limiting procedure is still to be studied in the future.
Overall, characterizing the instability of breathers on a constant-amplitude background is a more difficult problem than characterizing the modulation instability of travelling periodic waves in the NLS equation \cite{CPW-2020,DecSegal}. Further understanding of the linear and nonlinear instability of breathers will bring better clarity to the formation of complex rogue wave patterns and integrable turbulence in the framework of the NLS equation (\ref{nls}).
This work is part of a large cycle 7 HST project (PID 7307) to study
the formation and evolution of rich star clusters in the LMC. Details
and first results can be found in Beaulieu (1998),
Elson (1998c) and Johnson (1998).
We have obtained observations (with WFPC2, NICMOS and
STIS) of 8 rich LMC star clusters. These clusters have masses
$\approx$ 10$^{4}$M$_{\sun}$\ and ages from 20 Myr to 12 Gyr. They are
grouped into four age pairs and we concentrate here on the
youngest cluster pair of NGC1818 and NGC1805. NGC1818 is quite well
studied (Will 1995, Hunter 1997, Elson 1998a, Elson 1998b) and has a
metallicity [Fe/H] $\approx$ -0.8, a dynamical time of 1.6-10 Myr and
a relaxation time of 150-600 Myr (Elson 1987). Little previous work exists
for NGC1805.
For the young clusters the main aims of the project are: to search for
primordial binaries (Elson 1998a, Santiago 1999) and pre-main sequence
stars (Beaulieu 1999), and to place limits on the age spreads
amongst the massive stars (Johnson 1999). From the timescales of low- and high-mass star formation that we derive, we will investigate the sequence of star formation.
In this paper we focus on searching for age spreads amongst the
massive stars. The amount of age spread depends on the timescale for
star formation, which has important implications for the mechanism
that triggered the star formation and for the early evolution of the
cluster. A short timescale (compared to the dynamical time) requires a
strong perturbation to initiate it, whereas self-propagating star
formation proceeds on the order of the dynamical time. The combination
of the star formation timescale and the efficiency of the star
formation influences the cluster evolution and possible disruption.
The various scenarios are discussed in Elson (1987).
The amount of age spread also has implications for the process of self-enrichment in these clusters, which may be expected to occur on the order of a massive star lifetime, $\approx$10 Myr.
\section{Results}
To derive the amount of age spread amongst the massive stars we use
colour magnitude diagrams produced from the WFPC2 data. Isochrones
with the same metallicity but different ages diverge for the massive
stars (V$<$18.5), producing a colour spread. For example, at V=16,
stars that are 25 and 40 Myr old differ by $\approx$0.04 mags in V-I
which is similar to our photometric errors.
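A back-of-envelope estimate (not from the paper itself) of the implied age resolution follows from these numbers if the isochrone divergence is taken to be locally linear in age; the per-star error value is the $\approx$0.04 mag figure quoted above, and the $\sqrt{n}$ improvement assumes independent photometric errors.

```python
import math

# Numbers quoted in the text (at V = 16): the 25 and 40 Myr isochrones
# differ by about 0.04 mag in V-I, comparable to the photometric error.
dcolor = 0.04          # V-I separation between the two isochrones, mag
dage = 40.0 - 25.0     # age separation, Myr
sigma = 0.04           # assumed per-star photometric error in V-I, mag

slope = dcolor / dage  # mag/Myr, assuming locally linear divergence

# 1-sigma age resolution from a single star, and from averaging n stars
single_star = sigma / slope
def resolution(n):
    return sigma / math.sqrt(n) / slope
```

Under these assumptions a single star constrains the age only to $\approx$15 Myr, while averaging 25 stars at a given magnitude would tighten this to $\approx$3 Myr.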
The WFPC2 images of these clusters (see the project web page) were
oriented with the cluster core (and hence the majority of the massive
stars) falling on the planetary camera (PC) chip. Here we present only results from the PC; results from all the chips will be presented in Johnson (1999).
Figure~\ref{fig:colmag} shows V vs V-I colour magnitude diagrams for
NGC1818 and NGC1805.
\begin{figure}
\plotfiddle{johnsonr1_rev.eps}{8.5cm}{-90}{54}{54}{-240}{330}
\plotfiddle{johnsonr2_rev.eps}{8.5cm}{-90}{54}{54}{-240}{305}
\caption{V vs V-I colour magnitude diagrams for the PC data of the centre of NGC1818 (top) and NGC1805(bottom)}
\label{fig:colmag}
\end{figure}
Overlaid on the data (solid lines) are four isochrones (Bertelli 1994), all
with [Fe/H]=-0.7, and ages of 25, 40, 63 and 100 Myr. NGC1818 is known
to contain binaries (Elson 1998a). The dashed lines show the position
of an equal mass binary sequence for the 25 and 40 Myr isochrones.
It can be seen from these figures that the massive stars do exhibit a
large colour spread. However, before attributing this to age, other
possibilities have to be ruled out. Possible contributors to the
colour spread are binaries, differential stellar rotation and Be
stars. Further complications are provided by interacting binaries
which make young stellar populations appear younger (in colour
magnitude diagrams) than they really are (e.g. Van Bever 1998).
We aim to simulate colour magnitude diagrams including these effects
to try to disentangle the various possibilities (Johnson 1999).
However, it is clear from previous observations (Grebel 1997) that Be
stars make a large contribution to the colour spread. Be stars are
stars that have at some time shown H$\alpha$ emission. It is thought
that this emission comes from a circumstellar disk which also reddens
the star colours (e.g. Fabregat et al. 1996).
Figure~\ref{fig:bestars} shows the positions on the NGC1818 colour
magnitude diagram of the Be stars identified by Grebel (1997). It can
be seen that these stars show a wide spread in colour, from normal B
star colours to some 0.5 mags redder in V-I. There are still several
very red stars in the colour magnitude diagram that are not identified
as Be stars by Grebel. Unfortunately the Be star phenomenon is known
to vary on short timescales; therefore, some stars that are Be stars
(and therefore redder) at the time of our HST observations may have
appeared as normal B stars at the time of Grebel's observations. To
eliminate this problem we have obtained time on the ESO NTT to make
our own Be star identifications.
\begin{figure}
\plotfiddle{johnsonr3_rev.eps}{7.7cm}{-90}{51}{51}{-230}{280}
\caption{NGC1818 V vs V-I colour magnitude diagrams with Be stars identified
by Grebel (1997) marked with filled circles}
\label{fig:bestars}
\end{figure}
\label{sec:int}
Modeling the dynamic behavior of large systems of interacting agents remains a challenging problem in complex systems analysis.
Due to the large state space dimension of such systems, it has long been a central research goal to construct useful reduced-order models with which to collectively
describe the coarse-grained dynamics of agent ensembles.
Such coarse-grained, collective descriptions arise in many contexts, e.g. in thermodynamics, where interacting particles may effectively be described at the macroscopic level by temperature, pressure and density; or in kinetic theory, where collisions in the Boltzmann equation can lead to
continuum descriptions, such as the Navier-Stokes equations - but also in
contexts such as chemotaxis or granular flows.
One important issue in this coarse-graining is to find coarse-grained observables (density fields, momentum fields, concentration fields, void fraction fields) that describe the collective behavior in physical space.
Macroscopic, effective models are then often approximated as partial differential equations (PDEs) for these fields: their time derivatives are expressed locally in terms of the local spatial derivatives of the field(s) at each point.
The closures required to derive predictive models can be obtained either mathematically (with appropriate assumptions) and/or semi-empirically through experimental or computational observations.
When the interacting agents are coupled oscillator systems, their observed low-dimensional dynamics can sometimes be described as a ``lumped'' system of a few ordinary differential equations (ODEs) in terms of
so-called {\em order parameters}~\cite{kuramoto1984_chemical_osci, STROGATZ20001, Ott-Antonsen}.
For large heterogeneous systems of interacting oscillators we observe, at any given moment, a distribution of oscillator states; being able to usefully describe this evolution by a few ODEs for appropriate order parameters corresponds, conceptually, to describing the distribution evolution through a finite, closed set of a few moment equations for the distribution. The ``few good'' order parameters are here provided by the few leading moments in terms of which a closed set of model ODEs (or even stochastic differential equations) can be written.
And while in some cases such a reduced description can be quite successful, there are other cases where a few ODEs will not suffice, and where one needs to write evolution equations (e.g. PDEs) for evolving {\em field(s)} of instantaneous oscillator behavior(s).
The question then naturally arises: What is a good way of parametrizing the spatial support of this evolving distribution of behaviors? Which (and how many) are the few {\em independent}, ``spatial'' variables, in the space of which we will attempt to derive evolutionary PDE models for the collective behavior evolution?
In other words, when the problem does not evolve in physical space (e.g. when the oscillators are nodes in an interacting network) {\em does there exist} a useful continuum embedding space in which we can observe the behavior evolving as a spatiotemporal field? And if so, how can we detect this {\em emergent space} and its parametrizing independent coordinates in a data-driven way, based on observations of the collection of individual coupled agent dynamics?
Our task thus has two components, both accomplished here in a data-driven fashion:
(a) find emergent ``spatial'' coordinates in which the oscillator behavior can be (embedded and) observed as smooth
spatiotemporal field evolution; and
(b) once these emergent coordinates have been obtained, learn a model of the evolving dynamics,
if possible in the form of a partial differential equation governing this field; that is, approximate
the (pointwise) time derivative(s) of the field(s) in terms of a few local spatial derivatives of the
field {\em in the emergent independent variables}.
The data-driven approximation of such evolution operators for spatiotemporal dynamics using machine learning tools (neural networks, Gaussian processes, manifold learning, ...) is a long-standing research endeavor - we, among others, have worked on neural network-based identification of nonlinear distributed systems~\cite{krischer93_model_ident_spatiot_varyin_catal_react,rico-martinez92_discr_vs, gonzalez-garcia98_ident_distr_param_system}; the subject is currently exploding in the machine learning literature, e.g.~\cite{brunton20_machin_learn_fluid_mechan, lu2020deeponet}.
The ``twist'' in our work here is that the space in which the evolution operator
(that is, the PDE) will be learned (the independent variables in which the ``spatial derivatives'' will be estimated) is not known {\em a priori} but will be rather identified, in a first step, through data mining/manifold learning~\cite{kemeth18_emerg_space_distr_data_with, arbabi2020coarsegrained}.
If/when such an approach is successful, it can lead to a dramatic reduction of the computational cost
of simulation/prediction of the collective, coarse-grained dynamics (compared to the individual evolution of every oscillator/agent in the ensemble).
This reduced description also enables tasks (effective stability and bifurcation analysis, even control and optimization) that would be difficult or impossible to perform with the fine-scale model.
More importantly, if successful and generalizable enough, this alternative description in terms of field PDEs in emergent variables, (assisted by computationally mapping back-and-forth between fine and coarse descriptions) may guide a new, coarse-grained interpretation and even understanding of the system dynamics.
There may appear to be a contradiction between having fine-scale dynamics we know to involve long-range interactions (here, all-to-all coupling), and then learning a model based on local interactions (here, coupling with oscillators that have nearby behavior, through local ``behavior derivatives'' in our emergent space). We will return to this issue repeatedly in the discussion below, but we mention that the learned operators are not themselves ``the true physics''; they are but a particular, parsimonious parametrization of the long-term dynamics (after initial transients) on a much lower-dimensional slow manifold on which the collective behavior evolves. It is the low dimensionality of this manifold, and the power of embedding theorems like those of Whitney~\cite{Whitney} and Takens~\cite{Takens1981} that enable data-driven {\em parameterizations} (as opposed to physically meaningful mechanistic interpretations) of the long-term dynamics. The many coupled local grid points underpinning a finite-difference discretization of a PDE will here play the role of the many ``generic observers'' parametrizing the relatively low-dimensional manifold on which the coarse-grained long-term dynamics and the attractors of the system are expected to live.
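The role of such generic observers can be illustrated with a minimal delay-coordinate (Takens) embedding of a scalar time series; the sketch below is purely illustrative and uses an arbitrary signal and lag, not data from the oscillator ensemble.

```python
import math

# Scalar observations of a periodic state: s_n = cos(omega * n).
omega = 0.1
series = [math.cos(omega * n) for n in range(2000)]

# Takens delay embedding into R^2; a lag near a quarter period
# ((pi/2)/omega ~ 16 steps) unfolds the oscillation into a closed curve.
lag = 16
embedded = [(series[n], series[n - lag]) for n in range(lag, len(series))]

# The embedded points trace out (approximately) the unit circle:
# the scalar observer plus its delayed copy reconstructs the limit cycle.
radii = [math.hypot(x, y) for x, y in embedded]
spread = max(radii) - min(radii)
```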
This approach is fundamentally different from recent approaches where the dynamics are learned in a latent space of {\em dependent variables}, typically as systems of ODEs (but also PDEs with {\em known} independent variables).
Examples of these ``dependent variable latent spaces'' include learning the dynamics of spatial principal component coefficients on an inertial
manifold~\cite{linot20_deep_learn_to_discov_predic}
or learning an ODE in a latent space of an autoencoder using dictionaries
and sparsity promoting regularization~\cite{champion19_data_driven_discov_coord_gover_equat}.
Since early works (e.g. see~\cite{lapedes} on the Mackey-Glass equation, also Refs.~\cite{hudson,rico-martinez92_discr_vs,gonzalez-garcia98_ident_distr_param_system}),
learning dynamical systems from data has regained increased attention in recent years.
Popular examples include (in a vast literature) sparse identification of nonlinear dynamical systems using dictionaries~\cite{brunton16_discov_gover_equat_from_data}, DeepXDE\cite{lu19_deepx}, neural ODEs~\cite{duvaneau},
LSTM neural networks~\cite{vlachas18_data_driven_forec_high_dimen} and PDE-net~\cite{long17_pde_net}.
As in the latter, the emergent PDE will be learned here from discrete time data using an explicit forward Euler time integration step (in effect, training a ResNet); many other approaches are also possible (for a ResNet-like Runge-Kutta recurrent network, see Ref.~\cite{gonzalez-garcia98_ident_distr_param_system}).
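The idea of identifying a local evolution law from snapshot data can be sketched in a few lines. The example below is deliberately much simpler than the cited neural-network approaches: data are generated for a heat equation with the same explicit forward Euler step, and the discrete time derivative is regressed onto local spatial derivatives by plain least squares rather than a network; all parameter values are arbitrary illustrative choices.

```python
import math

# Simulate u_t = nu * u_xx on a periodic grid with explicit forward Euler.
nu, n, dt, steps = 0.5, 64, 1e-4, 200
dx = 2.0 * math.pi / n
u = [math.sin(i * dx) + 0.5 * math.sin(3 * i * dx) for i in range(n)]

def dx1(v, i):   # centered first derivative
    return (v[(i + 1) % n] - v[(i - 1) % n]) / (2.0 * dx)

def dx2(v, i):   # centered second derivative
    return (v[(i + 1) % n] - 2.0 * v[i] + v[(i - 1) % n]) / dx**2

rows, targets = [], []
for _ in range(steps):
    unew = [u[i] + dt * nu * dx2(u, i) for i in range(n)]
    for i in range(0, n, 4):                      # subsample grid points
        rows.append([u[i], dx1(u, i), dx2(u, i)])
        targets.append((unew[i] - u[i]) / dt)     # discrete time derivative
    u = unew

# Least squares via the 3x3 normal equations (Gaussian elimination).
A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
rhs = [sum(r[i] * y for r, y in zip(rows, targets)) for i in range(3)]
for col in range(3):
    piv = max(range(col, 3), key=lambda r: abs(A[r][col]))
    A[col], A[piv] = A[piv], A[col]
    rhs[col], rhs[piv] = rhs[piv], rhs[col]
    for row in range(col + 1, 3):
        f = A[row][col] / A[col][col]
        for c in range(col, 3):
            A[row][c] -= f * A[col][c]
        rhs[row] -= f * rhs[col]
coef = [0.0, 0.0, 0.0]
for i in (2, 1, 0):
    coef[i] = (rhs[i] - sum(A[i][j] * coef[j] for j in range(i + 1, 3))) / A[i][i]
# coef ~ [0, 0, nu]: the regression recovers the local heat-equation law.
```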
To find coordinates in which to learn the PDE description, we follow the recent work~\cite{kemeth18_emerg_space_distr_data_with,thiem20_emergent_spaces_for_coupled_oscillators} and use diffusion maps~\cite{Nadler2006,Coifman2006},
a nonlinear manifold learning technique.
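A minimal sketch of the diffusion-map construction (Gaussian kernel, degree normalization, leading nontrivial eigenvector) is given below. It is not the implementation of the cited works; the data set, the median-distance kernel-scale heuristic, and the power-iteration scheme are illustrative choices.

```python
import math

# Data: points sampled along a one-dimensional curve in the plane.
m = 60
pts = [(t, t * t) for t in (i / (m - 1) for i in range(m))]

# Gaussian kernel with a heuristic scale (median squared distance).
d2 = [[(p[0] - q[0])**2 + (p[1] - q[1])**2 for q in pts] for p in pts]
flat = sorted(v for row in d2 for v in row if v > 0.0)
eps = flat[len(flat) // 2]
K = [[math.exp(-v / eps) for v in row] for row in d2]

# Symmetric normalization A = D^{-1/2} K D^{-1/2}; its top eigenvector is
# D^{1/2} 1 (the trivial constant mode), so deflate it and power-iterate
# for the next eigenvector.
deg = [sum(row) for row in K]
A = [[K[i][j] / math.sqrt(deg[i] * deg[j]) for j in range(m)] for i in range(m)]
v0 = [math.sqrt(d) for d in deg]
n0 = math.sqrt(sum(x * x for x in v0))
v0 = [x / n0 for x in v0]

v = [math.sin(i) for i in range(m)]            # arbitrary start vector
for _ in range(500):
    proj = sum(a * b for a, b in zip(v, v0))
    v = [x - proj * y for x, y in zip(v, v0)]  # remove the trivial mode
    v = [sum(A[i][j] * v[j] for j in range(m)) for i in range(m)]
    nrm = math.sqrt(sum(x * x for x in v))
    v = [x / nrm for x in v]

# First diffusion-map coordinate; it should order points along the curve.
phi = [v[i] / math.sqrt(deg[i]) for i in range(m)]
if phi[0] > phi[-1]:
    phi = [-x for x in phi]
ts = [i / (m - 1) for i in range(m)]
mp, mt = sum(phi) / m, sum(ts) / m
corr = (sum((p - mp) * (s - mt) for p, s in zip(phi, ts))
        / math.sqrt(sum((p - mp)**2 for p in phi)
                    * sum((s - mt)**2 for s in ts)))
```

The recovered coordinate plays the role of the emergent variable $\phi_i$ discussed below: a one-dimensional ordering extracted from pairwise similarities alone.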
As our agent-based example, we use coupled Stuart-Landau oscillators,
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} W_k = \left(1+i \omega_k\right) W_k - \left|W_k\right|^2W_k + \frac{K}{N}\sum_{j=1}^N\left( W_j - W_k\right)\label{eq:sle};
\end{equation}
each oscillator $k=1,\dots, N$ is represented by a complex variable
$W_k$ and coupled to all other oscillators through the ensemble average.
The long-range interaction is in fact global, since the coupling is all-to-all.
Each agent, when uncoupled, undergoes periodic motion with its own intrinsic frequency $\omega_k$, different across agents, making the ensemble heterogeneous.
Suppose we initialize an ensemble of $N=256$ oscillators with values $W_k$ on a regular grid,
as shown in Fig.~\ref{fig:1}(a).
The color coding thereby correlates with the imaginary part of $W_k$.
Integrating this initial condition using Eq.~\eqref{eq:sle} with coupling constant $K=1.2$ and intrinsic frequencies $\omega_k$ distributed equally spaced within the interval $\left[-1.5, 1.9\right]$ yields
the dynamics in Fig.~\ref{fig:1}(b):
although the behavior appears quite irregular at the beginning, it quickly settles onto a cylinder-like structure.
Note that the color coding is still the same.
After the transients decay, the agents appear arranged on
this structure in an irregular manner when colored by their initialization; see the zoom-in of the upper part in Fig.~\ref{fig:1}(c).
Using manifold learning, we will show that it is possible to find a
parametrization of the agents (a ``different coloring'') in which the dynamics appears more ordered and regular.
This is shown by the new color coding of the last snapshot in Fig.~\ref{fig:1}(c),
and the recolored attractor in Fig.~\ref{fig:1}(d).
\begin{figure}[ht!]
\centering
\includegraphics[width=0.9\textwidth]{plot_1_v2.png}
\caption{(a) Initial condition of the Stuart-Landau ensemble, Eq.~\eqref{eq:sle},
colored with ascending imaginary part of $W_k$.
(b) Trajectories obtained from integrating the initial conditions of (a) with the same
color coding as in (a). The last snapshot is marked by black dots.
(c) Zoom in to the upper part of (b), with the last snapshot marked by black dots.
Above it, the last snapshot is color coded {\em based on the ordering of the oscillators along the curve at that moment}.
(d)~Zoom in on the top part of (b), but now with the new color coding.
(e) Trajectories of the real part of the $W_k$,
arranged by their initial $\mbox{Im}\, W_k$ values.
(f) Trajectories of the real part of the $W_k$,
arranged by the new color coding $\phi_i$ as in (d). (Finding $\phi_i$ is discussed in the text).}
\label{fig:1}
\end{figure}
Indeed, when contrasting the time series of the agents in the original color coding $i$
(Fig.~\ref{fig:1}(e)) and the new color coding $\phi_i$ (Fig.~\ref{fig:1}(f)),
we argue that the dynamics appear more regular in a space parametrized by $\phi_i$,
suggesting the possibility that the solution can be described by a PDE with $\phi_i$ and time as the independent variables.
The remainder of this article is organized as follows: First,
we illustrate our approach through a caricature, where we start with a known
PDE in a predefined spatial variable. We observe the dynamics at a number of mesh points in this
known space, but then we ``scramble'' the time series ourselves, on purpose,
concealing the spatial coordinates of where the behavior was observed.
We obtain a predictive PDE description in a learned emergent ``spatial'' or ``heterogeneity'' coordinate $\phi_1$,
discovered through data mining these scrambled behaviors.
We then confirm that this emergent coordinate
is one-to-one with the (discarded) physical location $x$ of the original mesh points.
Returning to our
globally-coupled oscillator ensemble,
we show how to extract an intrinsic space coordinate,
and learn a PDE description in this parametrization and time.
We then study parametric dependencies of this PDE: we sample dynamics at parameter values bracketing a (collective) Hopf bifurcation.
Using this data, we show that learning a PDE with an additional input for a parameter can capture
the location and nature of bifurcations in this parameter.
We then go beyond a single ``emergent space'' dimension:
We vary the nature of the oscillator ensemble by adding a second parameter, beyond
the oscillator frequency.
Data mining discovers that the description of the agent behaviors is now two-dimensional.
We again learn a PDE describing the agent dynamics - now in two ``emergent space coordinates'' and time.
We conclude with a discussion of the approach and its shortcomings, and what we perceive as open questions and directions for future research. We also discuss the explainability of the learned emergent coordinate(s) for such agent-based systems.
Details on the algorithms and numerical methods are summarized in the Methods section.
The code to reproduce the results will be made available under \href{https://github.com/fkemeth/emergent_pdes}{{https://github.com/fkemeth/emergent\_pdes}} upon publication.
\section{Results}
{\bf Learning PDEs in Emergent Coordinates}.
For an illustrative caricature, we use a PDE with a known independent space variable,
before returning to our coupled agent example.
Consider the 1D complex Ginzburg-Landau equation,
a PDE for the evolution of a complex field $W(x, t)$ in one spatial dimension $x\in\left[0, L\right]$, defined by
\begin{equation}
\frac{\partial}{\partial t} W(x, t) = W(x, t) + \left(1+ic_1\right)\frac{\partial^2}{\partial x^2} W(x, t) - \left(1-ic_2\right)|W(x, t)|^2 W(x, t)
\label{eq:cgle}
\end{equation}
with real parameters $c_1=1$, $c_2=2$, $L=200$ and, here, no-flux (Neumann) boundary conditions.
We integrate starting with initial condition
\begin{equation*}
W(x, 0) = \frac{1+\cos{\frac{x\pi}{L}}}{2}
\end{equation*}
using a finite-difference method in space and an implicit Adams method for integration, and sample data after initial transients have decayed, i.e. after 4000 dimensionless time units.
This spatiotemporal evolution is depicted in Fig.~\ref{fig:3}(a).
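For reproducibility, a compact discretization sketch follows: second-order central differences with mirrored ghost points enforcing the no-flux boundaries. We step with explicit Euler over a shorter horizon for brevity; the results in Fig.~\ref{fig:3} use the implicit Adams method and the longer transient described above.

```python
import numpy as np

# 1D complex Ginzburg-Landau equation, Eq. (2), with no-flux boundaries.
c1, c2, L, N = 1.0, 2.0, 200.0, 128
x = np.linspace(0, L, N)
dx = x[1] - x[0]

def laplacian_neumann(W):
    # Central second differences; ghost points mirror the interior
    # neighbors, enforcing zero derivative at both boundaries.
    Wg = np.concatenate(([W[1]], W, [W[-2]]))
    return (Wg[2:] - 2 * Wg[1:-1] + Wg[:-2]) / dx ** 2

def cgle_rhs(W):
    return (W + (1 + 1j * c1) * laplacian_neumann(W)
            - (1 - 1j * c2) * np.abs(W) ** 2 * W)

W = ((1 + np.cos(x * np.pi / L)) / 2).astype(complex)   # initial condition
dt, steps = 0.02, 25000   # explicit Euler here; the text uses implicit Adams
for _ in range(steps):
    W = W + dt * cgle_rhs(W)
```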
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{plot_34.pdf}
\caption{(a) The real part of the complex field $W(x, t)$
obtained from simulating Eq.~\eqref{eq:cgle} with $N=128$ mesh points
after initial transients have decayed.
(b) Removing the spatial label yields a collection of $N$ time series plotted here
in random sequence.
(c) Using manifold learning (here diffusion maps), one finds that there exists a one-dimensional
parametrization $\phi_1$ of these time series. Each point corresponds to one of the $N$
time series, and is colored by its actual spatial location $x$.
(d) The real parts of the time series parametrized by $\phi_1$.
(e) Real part of simulation predictions for the complex variable $W$ starting from an initial condition in our test set, using the partial differential equation model learned with $\phi_1$ as the spatial variable.
%
Since no analytical boundary conditions are available, we provide the true values near the boundaries during integration, within a ``corridor'' indicated by white vertical lines.
%
(f) Smallest Euclidean distance in $\mathbb{C}^N$ between the transients and the true attractor at each time step: true PDE (blue), learned PDE (orange).}
\label{fig:3}
\end{figure}
The spatial coordinate $x$ is discretized into
$N=128$ equidistant points $x_k$.
Eq.~\eqref{eq:cgle} thus yields $N$ (here complex) time series $W_k(t)$ at each mesh point $x_k$.
We can think of the behavior at each mesh point as the behavior of an agent in an ensemble of interacting agents.
Assuming the $x_k$ label of each agent is not available
(cf. Fig.~\ref{fig:3}(b), where the agents are parametrized by a random index $i$):
is it possible to find a collective description of the
dynamics in these time series based on a data-driven, ``emergent'' spatial variable,
and in the form of a partial differential equation, involving partial derivatives in this variable?
We accomplish this by extracting an intrinsic independent coordinate from the time series data.
As proposed in
Ref.~\cite{kemeth18_emerg_space_distr_data_with}
we use diffusion maps (each of the scrambled time series is a data point) to extract a coordinate parametrizing
the ensemble of time series: the leading diffusion map component of each time series (of each data point); see Methods.
It may be qualitatively helpful (even though we use a nonlinear manifold learning algorithm) to think of this as performing principal component analysis (PCA) on the ensemble of time series (each of them is a data point) and then keeping the leading PCA component as an emergent spatial coordinate.
This emergent coordinate is used to parametrize a useful embedding space in which to learn a PDE.
For the time series data in Fig.~\ref{fig:3}(b),
one indeed finds a one-dimensional parametrization of the $W_k$,
which is shown in Fig.~\ref{fig:3}(c).
This coordinate is one-to-one with the (original, ``forgotten'') spatial coordinate $x$
(see color coding in Fig.~\ref{fig:3}(c)).
Even without knowing the spatial location of the mesh points,
we can still extract a data-driven parametrization $\phi_1$ and set out to learn a
PDE with this coordinate as the spatial dimension.
The data parametrized this way is depicted in Fig.~\ref{fig:3}(d).
Note that $\phi_1$ is one-to-one with, {\em but not identical to}, $x$.
In particular, it is also flipped (see the mirrored Figs.~\ref{fig:3}(a) and~\ref{fig:3}(d))
and slightly ``compressed'' due to edge effects at small and large~$x$.
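Because of this deformation, the data are not equidistant in $\phi_1$ and must be resampled onto a uniform grid before finite differences can be taken. A minimal sketch, with linear interpolation via \texttt{np.interp} standing in for the cubic splines used in the text:

```python
import numpy as np

def resample_equidistant(phi1, W, n=128):
    # Sort the agents by phi1, rescale the coordinate affinely to [-1, 1],
    # and interpolate onto n equidistant points. np.interp (linear) stands
    # in here for the cubic splines used in the text.
    order = np.argsort(phi1)
    s = (phi1[order] - phi1.min()) / (phi1.max() - phi1.min()) * 2 - 1
    grid = np.linspace(-1, 1, n)
    return grid, (np.interp(grid, s, W.real[order])
                  + 1j * np.interp(grid, s, W.imag[order]))
```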
We now set out to learn a PDE description based on partial derivatives in $\phi_1$,
\begin{equation}
\frac{\partial}{\partial t} W(\phi_1, t) = f\left(W, \frac{\partial W}{\partial \phi_1},
\frac{\partial^2W}{\partial \phi_1^2}, \frac{\partial^3W}{\partial \phi_1^3}\right)
\label{eq:pde}
\end{equation}
where $f$ is represented by a fully connected neural network.
See Methods for details on the neural network architecture and the data sampling.
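To make the structure of Eq.~\eqref{eq:pde} concrete, the sketch below builds the finite-difference features and regresses the forward-Euler time derivative on them. Two stand-ins of ours keep it self-contained: a linear least-squares fit replaces the fully connected network, and the training data come from the heat equation, where the correct right-hand side ($\partial_t W = \partial^2_{\phi_1} W$) is known.

```python
import numpy as np

# Features (W, W_phi, W_phiphi, W_phiphiphi) from snapshots, with the time
# derivative estimated by a forward difference; linear least squares stands
# in for the fully connected network.
N, steps, dt = 128, 400, 1e-4
phi = np.linspace(-1, 1, N)
dphi = phi[1] - phi[0]

# Synthetic training data: heat equation dW/dt = W_phiphi, explicit Euler.
W = np.exp(-10 * phi ** 2)
snaps = [W.copy()]
for _ in range(steps):
    W[1:-1] += dt / dphi ** 2 * (W[2:] - 2 * W[1:-1] + W[:-2])
    snaps.append(W.copy())
U = np.array(snaps)

d1 = np.gradient(U, dphi, axis=1)
d2 = np.gradient(d1, dphi, axis=1)
d3 = np.gradient(d2, dphi, axis=1)
dUdt = (U[1:] - U[:-1]) / dt                        # forward-Euler targets
X = np.stack([U, d1, d2, d3], axis=-1)[:-1, 3:-3]   # drop a boundary corridor
y = dUdt[:, 3:-3]
coef, *_ = np.linalg.lstsq(X.reshape(-1, 4), y.ravel(), rcond=None)
# coef should be close to (0, 0, 1, 0), recovering dW/dt = W_phiphi
```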
A number of issues arise in learning such a PDE in $\phi_1$:
\begin{itemize}
\item Since $\phi_1$ is not identical to $x$, trajectories $W$ are not equally spaced.
To calculate a finite difference approximation of $\partial^n W/\partial \phi_1^n$,
we interpolate the $\phi_1$-parametrized data using cubic splines and sample $W$ at
$N=128$ equidistant points on the interval $\left[-1, 1\right]$.
\item Due to the deformation of the space coordinate, the boundary conditions in the transformed variable may no longer be obvious.
We therefore learn $f$ only in the interior of the $\phi_1$ domain.
When we simulate the learned PDE, we provide (as boundary conditions) a narrow space-time data corridor
as needed. The imposition of such ``{\em finite corridor} boundary conditions'' will be particularly important for agent-based systems,
where the form of effective boundary condition formulas (like Dirichlet, Neumann or Robin) in the emergent space is not known {\em a priori}.
\item PDEs are infinite dimensional; we cannot sample the full state space, and so
our learned surrogate PDE will ``not know'' the dynamics in all state space directions.
Various techniques proposed in recent years (especially in imitation learning) attempt to regularize surrogate dynamical systems.
These include contraction theory~\cite{lohmiller98_contr_analy_non_linear_system,singh19_learn_stabil_nonlin_dynam_with,blocher17_learn,sindhwani18_learn_contr_vector_field_stabl_imitat_learn},
and convex neural networks~\cite{brandon17_input_convex_neural_networks,manek20_learn_stabl_deep_dynam_model}. They rely on the existence of a Lyapunov function;
other approaches include Jacobian
regularization~\cite{hoffman19_robus_learn_with_jacob_regul,pan18_long_time_predic_model_nonlin}.
However, they usually involve additional loss terms
or are computationally expensive.\\
Here, we instead regularize the output of the learned PDE as follows:
First, we sample several transients close to the limit cycle solution of
the complex Ginzburg-Landau equation (a ``tube'' in phase space).
Then, we create a truncated singular value decomposition (SVD) based on all the sampled transients.
During inference, we filter the state obtained by integration of the neural network output by projecting it back onto this truncated SVD subspace, thus keeping the predicted trajectories there.
\end{itemize}
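The truncated-SVD filter of the last item can be sketched as follows; the training matrix here is synthetic stand-in data with five dominant directions, whereas in practice it holds the sampled transients (real and imaginary parts stacked).

```python
import numpy as np

# Build a basis from training snapshots, then project each state produced
# during inference back onto it.
rng = np.random.default_rng(1)
train = rng.normal(size=(200, 128)) @ np.diag(np.r_[np.ones(5),
                                                    1e-3 * np.ones(123)])

_, _, Vt = np.linalg.svd(train, full_matrices=False)
V = Vt[:5].T                      # keep the 5 leading right-singular vectors

def svd_filter(state):
    # Orthogonal projection onto the truncated SVD subspace.
    return V @ (V.T @ state)

state = rng.normal(size=128)
filtered = svd_filter(state)
```

Since the projection is orthogonal, it is idempotent and never increases the norm of the state; the predicted dynamics cannot leave the retained subspace.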
Integrating from an initial snapshot using the learned PDE $f$ in the emergent variable $\phi_1$ is shown in
Fig.~\ref{fig:3}(e).
Notice the close correspondence between predicted and actual dynamics,
cf. Fig.~\ref{fig:3}(d).
We also investigate whether nearby transients approaching the attractor are captured accurately
by the learned PDE.
To test this we integrate starting from an off-attractor snapshot using both the original PDE {\em and} the learned PDE,
and plot the smallest Euclidean distance in $\mathbb{C}^N$ between the transients obtained this way and the true attractor over time.
See Fig.~\ref{fig:3}(f) for a measure of the true distance (blue) and the distance when integrating with the learned model (orange).
There is good correspondence between the two curves, rendering the blue trajectory
barely visible.\\
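The distance measure used in Fig.~\ref{fig:3}(f) can be written compactly, with the attractor represented by a dense sampling of snapshots along the limit cycle:

```python
import numpy as np

def dist_to_attractor(transient, attractor):
    # transient: (T, N) complex snapshots; attractor: (M, N) complex
    # snapshots densely sampled along the limit cycle. Returns, for each
    # transient snapshot, the smallest Euclidean distance in C^N to the
    # attractor.
    d = np.linalg.norm(transient[:, None, :] - attractor[None, :, :],
                       axis=-1)
    return d.min(axis=1)
```

For long trajectories, the $(T, M, N)$ broadcast can be chunked over $T$ to bound memory.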
In the next Section, we will follow the same approach, but now for a system where {\em there
is no original space coordinate}.
{\bf Learning Partial Differential Equations for Coupled Stuart-Landau Oscillator Dynamics}.
Recall the original problem, Eq.~\eqref{eq:sle}, of an
ensemble of mean-coupled Stuart-Landau oscillators,
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} W_k = \left(1+i \omega_k\right) W_k - \left|W_k\right|^2W_k + \frac{K}{N}\sum_j\left( W_j - W_k\right)
\end{equation}
with $k=1,\dots,N$ and the real coupling constant $K$.
The intrinsic frequencies $\omega_k$ are taken linearly spaced in the interval $\left[-\gamma+\omega_0, \gamma+\omega_0\right]$.
Depending on the parameters $K$ and $\gamma$, a plethora of different dynamical phenomena are known to arise.
Examples range from frequency locked oscillations and quasiperiodic dynamics to chaos and oscillator death.
See Ref.~\cite{matthews90_phase_diagr_collec_behav_limit_cycle_oscil} for a more detailed discussion.
Here, we fix $K=1.2$, $\gamma=1.7$ and $\omega_0=0.2$ - resulting in periodic, synchronized oscillations:
the oscillators in the ensemble oscillate with a common frequency and
maintain a constant mutual phase difference.
The real part of such dynamics is depicted in Fig.~\ref{fig:6}(a),
parametrized by $\phi_1$, the first independent diffusion map mode.
As for the complex Ginzburg-Landau equation, we sample data not only on the attractor,
but also on transients in its neighborhood approaching it.
These long-term dynamics can be thought of as lying on an attracting slow manifold; see Methods.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{plot_6.pdf}
\caption{(a)~Real part of the complex variable $W$ for a system of $N=512$ oscillators,
parametrized by the first emergent diffusion mode $\phi_1$.
(b) Dynamics obtained from the learned model by integrating starting from the same initial snapshot as in (a).
(c) Smallest Euclidean distance in $\mathbb{C}^N$ at each time step between the transients and the true attractor for the true PDE (blue) and the learned PDE (orange).
(d) The first diffusion mode $\phi_1$ as a function of the intrinsic frequencies $\omega$ of the
oscillator ensemble.}
\label{fig:6}
\end{figure}
The predictions from an initial condition on the limit cycle using the learned PDE model are depicted in Fig.~\ref{fig:6}(b),
and closely resemble the actual dynamics,
as depicted in Fig.~\ref{fig:6}(a).
The model also captures the dynamics approaching the limit cycle.
This can be visualized by integrating from initial conditions on the slow manifold
but off the attracting limit cycle.
We integrated such an initial condition from our test set using forward Euler and both the full ODE system,
Eq.~\eqref{eq:sle}, as well as the learned emergent PDE model.
The smallest Euclidean distance in $\mathbb{C}^N$ between these transients and the true attractor at each time step is depicted in Fig.~\ref{fig:6}(c).
Note that both the true and learned transients converge to the limit cycle at a similar rate,
and the learned PDE trajectory approximates the behavior of the full ODE system well.
In an attempt to obtain a physical meaning of the emergent coordinate $\phi_1$,
we plot it as a function of the intrinsic
frequency $\omega$ of the oscillators in Fig.~\ref{fig:6}(d).
It becomes obvious that the two quantities are one-to-one, analogous to the ($\phi_1$, $x$) pair in
the complex Ginzburg-Landau example above: our data mining has ``discovered'' the heterogeneity of the ensemble, and uses it to parametrize the dynamics.
Knowing the equations and how $\omega_k$ enters in them, one could analytically attempt to derive Ott-Antonsen-type equations (for phase oscillators) in $\omega$ space~\cite{Ott-Antonsen}. We know neither the equations, nor the $\omega_k$ (and the oscillators are not phase oscillators to boot); everything here is data-driven.
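Such a one-to-one relation is easy to test directly on data: sort the agents by the candidate explanatory variable and check that the emergent coordinate is strictly monotone. The $\tanh$ embedding below is a hypothetical stand-in for an actual $\phi_1$.

```python
import numpy as np

def is_one_to_one(explanatory, emergent):
    # The data-driven coordinate is one-to-one with a candidate explanatory
    # variable if, after sorting the agents by that variable, the emergent
    # coordinate is strictly monotone.
    d = np.diff(emergent[np.argsort(explanatory)])
    return bool(np.all(d > 0) or np.all(d < 0))

omega = np.linspace(-1.5, 1.9, 64)
phi1 = np.tanh(omega)      # hypothetical monotone embedding, for illustration
```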
Having been successful in capturing the attractor and its nearby dynamics for a single
parameter value, it becomes natural to explore whether the learned PDE can also capture bifurcations: qualitative changes in the dynamics when changing system parameters.
In particular, for $\gamma=\gamma_H\approx 1.75$,
the Stuart-Landau ensemble undergoes a collective Hopf bifurcation, at which the amplitude of the
oscillations shown in Fig.~\ref{fig:6} vanishes.
For $\gamma>\gamma_H$, a stable fixed point ensues, in which
all individual amplitudes of the respective oscillators are zero, also called oscillator death~\cite{aronson90_amplit_respon_coupl_oscil}.
We now collect data for training at several $\gamma$ values, linearly spaced in the interval $\left[1.7, 1.8\right]$,
on both sides of the Hopf bifurcation; the $\gamma$ value was provided
as additional input to the model.
We again perturbed along the slow stable eigendirections of each attractor, see Methods, collecting transients that inform the model about nearby dynamics.
We then learned a PDE of the form
\begin{equation}
\frac{\partial}{\partial t} W(\phi_1, t) = f\left(W, \frac{\partial W}{\partial \phi_1},
\frac{\partial^2W}{\partial \phi_1^2}, \frac{\partial^3W}{\partial \phi_1^3}; \gamma\right).
\label{eq:pde_alpha}
\end{equation}
The learned dynamics, starting from an initial oscillator ensemble profile, and integrated using the learned model
are shown in Fig.~\ref{fig:7} for $\gamma < \gamma_H$ (left inset) and for $\gamma > \gamma_H$ (right inset).
We observe the transient dynamics approaching the fixed point $W=0 \, \forall \omega$
for $\gamma = 1.8$, as the true dynamics (not shown here) also do.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{plot_7b.pdf}
\caption{
Computational bifurcation diagram obtained by plotting the mean amplitude $\langle | W_{\mbox{limit}}| \rangle$,
averaged over the ensemble at the limit set.
In particular, we integrate from random initial conditions close to the limit set for $T=10000$
dimensionless time units for the Stuart-Landau ensemble (blue circles) and the learned PDE
(orange crosses). A vanishing mean amplitude indicates convergence to the fixed point $W=0$ $\forall \omega$,
whereas a non-zero $\langle | W_{\mbox{limit}}| \rangle$ indicates oscillations with
finite amplitude.
The color codings of the insets show the real part of the complex variable $W$ obtained by
integrating, with the learned model and explicit forward Euler, an initial condition close to the
fixed point $W_k=0$ for $\gamma=1.8 > \gamma_H$ (right inset) and one close to the limit cycle for $\gamma=1.7$ (left inset).}
\label{fig:7}
\end{figure}
Validating the approach further, we start at random initial conditions in the slow eigenspace of the attractor at different $\gamma$ values
using the Stuart-Landau system, Eq.~\eqref{eq:sle}, as well as the learned PDE model.
For both models, we record a snapshot after $T=10000$ dimensionless time units
and calculate its average amplitude $\langle | W_{\mbox{limit}}| \rangle$.
An average amplitude equal to zero then indicates that the initial condition converged to the
fixed point $W=0 \, \forall \omega$ under the respective model, whereas a nonzero amplitude
indicates convergence to the (collective/spatiotemporal) limit cycle.\\
The resulting $\langle | W_{\mbox{limit}}| \rangle$ values for different $\gamma$ are
shown in Fig.~\ref{fig:7}, with blue circles for the original dynamics and orange crosses for
the learned dynamics.
The Hopf bifurcation manifests itself in the sudden increase in amplitude
when $\gamma$ is varied.
Note the close correspondence between the learned model and the original oscillator
system: both converge to a fixed point for $\gamma > \gamma_H \approx 1.75$, and to the
limit cycle for $\gamma < \gamma_H \approx 1.75$.
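This amplitude-based classification is straightforward to reproduce for the original ODE ensemble. The sketch below uses explicit Euler, a smaller ensemble and a shorter horizon than $T=10000$, and probes $\gamma$ values well away from $\gamma_H$ on either side; all of these are simplifications of ours.

```python
import numpy as np

def mean_limit_amplitude(gamma, K=1.2, omega0=0.2, N=64,
                         dt=0.01, steps=40000, seed=2):
    # Integrate the ensemble of Eq. (1) with explicit Euler from a small
    # random initial condition and return the ensemble-averaged amplitude
    # of the final snapshot: ~0 signals oscillator death, a finite value
    # signals collective oscillations.
    omega = np.linspace(-gamma + omega0, gamma + omega0, N)
    rng = np.random.default_rng(seed)
    W = 0.01 * (rng.normal(size=N) + 1j * rng.normal(size=N))
    for _ in range(steps):
        W = W + dt * ((1 + 1j * omega) * W - np.abs(W) ** 2 * W
                      + K * (W.mean() - W))
    return np.abs(W).mean()
```

With $\gamma_H \approx 1.75$, `mean_limit_amplitude(1.5)` should return a finite amplitude and `mean_limit_amplitude(1.9)` a value near zero.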
\begin{figure}[ht]
\centering
\includegraphics[width=0.9\textwidth]{plot_789.pdf}
\caption{(a) Diffusion maps embedding of an ensemble of Stuart-Landau oscillators with
two heterogeneous parameters. The color thereby encodes one of the two parameters, the intrinsic frequencies $\omega$. The grid indicates the space in which we resample the data and learn a PDE description. (b) Data points and fit of the real parts of $W$ in the grid shown in (a).
(c) Snapshots of the real parts of $W$ obtained using the learned model at different time points
$\Delta t = 4$ apart. The black lines in the last snapshot indicate the ``boundary corridors'' provided to the model
during integration.
(d) Smallest Euclidean distance in $\mathbb{C}^N$ between the transients and the true attractor, obtained by using the true PDE (blue, Eq.~\eqref{eq:sle_2d}) and the learned PDE (orange, Eq.~\eqref{eq:pde_2d}) for integration.}
\label{fig:8}
\end{figure}
{\bf Two emergent spatial coordinates}.
The approach can easily be extended to situations with more than one emergent spatial dimension,
that is, to problems in which more than one diffusion map component becomes necessary to parametrize the inherent heterogeneity of agent behaviors.
As an example, consider our Stuart-Landau ensemble above, but now with {\em two heterogeneous parameters}, $\omega_k$ and $\lambda_k$,
\begin{equation}
\frac{\mathrm{d}}{\mathrm{d}t} W_k = \left(\lambda_k+i \omega_k\right) W_k - \left|W_k\right|^2W_k + \frac{K}{N}\sum_{j=1}^N\left( W_j - W_k\right).
\label{eq:sle_2d}
\end{equation}
The $\omega_k$ are taken linearly spaced in the interval $\left[-\gamma+\omega_0, \gamma+\omega_0\right]$,
while the $\lambda_k$ are drawn uniformly from the interval $\left[0.9, 1.1\right]$.
Using diffusion maps, one confirms that there is a two-parameter family of oscillator behaviors.
The two independent modes $\phi_1$ and $\phi_2$ are shown in Fig.~\ref{fig:8}(a),
color-coded by the $\omega_k$.
In the following, we set out to learn a PDE in this two-dimensional emergent space spanned
by $\phi_1$ and $\phi_2$.
In order to do so, we interpolate the available data on the rectangular grid shown in Figs.~\ref{fig:8}(a-c);
both the real and the imaginary components of the $W_k$ are interpolated at equidistant points.
A snapshot of this is shown in Fig.~\ref{fig:8}(b).
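Any scattered-data interpolant can perform this resampling; below is a simple inverse-distance-weighted version (our choice for illustration, not necessarily the interpolant used for Fig.~\ref{fig:8}), applied per real and imaginary component:

```python
import numpy as np

def idw_resample(points, values, grid_xy, p=2.0, eps=1e-12):
    # Inverse-distance-weighted resampling of scattered (phi_1, phi_2) data
    # onto query locations grid_xy; applied separately to the real and
    # imaginary parts of W. points: (P, 2), values: (P,), grid_xy: (G, 2).
    d2 = ((grid_xy[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    w = 1.0 / (d2 ** (p / 2) + eps)
    return (w * values[None, :]).sum(-1) / w.sum(-1)
```

At a query point coinciding with a data point, the weight $1/\epsilon$ dominates and the data value is reproduced.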
As earlier, we sample transients initialized on the attractor and in the slow manifold, but now learn a PDE $f$ of the form
\begin{equation}
\frac{\partial}{\partial t} W(\phi_1, \phi_2, t) = f\left(W, \frac{\partial W}{\partial \phi_1}, \frac{\partial W}{\partial \phi_2},
\frac{\partial^2W}{\partial \phi_1^2},\frac{\partial^2W}{\partial \phi_2^2}\right).
\label{eq:pde_2d}
\end{equation}
To evaluate the quality of the learned model, we integrate starting with initial snapshots (approximately) on the slow manifold but off the attractor, both using the learned model and the original system, and plot the closest distance between the two transients and the true attractor as a function of time.
This is shown in Fig.~\ref{fig:8}(d). Again, the learned model captures the transient dynamics approaching, as well as along, the attractor.
\section{Discussion}
We have seen that it is possible to learn a predictive model for the dynamics of coupled agents based on
local partial derivatives with respect to one (or more) emergent, data-driven ``spatial variable(s)'' and time, that is, in the form of a partial differential equation.
As an example, we investigated an ensemble of mean-coupled Stuart-Landau oscillators, where each oscillator has an intrinsic frequency $\omega_k$.
Using manifold learning (here diffusion maps), we were able to extract an intrinsic coordinate $\phi_1$ from time
series segments of these oscillators.
Starting with just a single parameter value $\gamma=1.7<\gamma_H$,
our results indicate that a model based on a few partial derivatives with respect to $\phi_1$
is able to accurately capture the collective dynamics in the slow manifold and on the final attracting limit cycle.
These results extend to the case in which data is sampled for different $\gamma$ values on both sides
of the Hopf bifurcation point $\gamma_H$.
The learned PDE then successfully modeled the slow transients towards
either the stable limit cycle or the stable fixed point, depending on the parameter.
We then extended our analysis to the case where the oscillators depend
on {\em two heterogeneity parameters},
and the corresponding diffusion maps embedding is two-dimensional.
This then results in a PDE in a two-dimensional emergent space.
For a successful implementation of our approach we employed a systematic way of sampling
training data:
From a given limit set, we perturb along the slow stable manifold, and sample transients approaching the attractor.
This sampling strategy is assisted by estimates of the slow stable directions (and their time scales) obtained from the linearized system Jacobian, which help produce informative initial conditions.
Because of the ``fast-slow'' nature of the dynamics, we found that
starting practically anywhere and integrating for a short time
will bring the dynamics close to this slow manifold.\\
This ought to also be the case when collecting experimental data (discarding short initial transients to the
slow manifold).
Clearly, the model cannot be expected to learn the right asymptotic behavior in dimensions in which it has seen no data.
This can lead to instabilities when attempting to predict the long term dynamics
of the system.
We addressed this problem through filtering, in particular through a truncated SVD regularization.
An SVD basis was constructed from the training data, and, during inference,
we filtered by projecting the predictions onto this basis; the predicted dynamics cannot leave the space spanned by the truncated SVD.
This introduces an additional hyperparameter to the model: the dimension after which to truncate the SVD used for filtering.
Too many dimensions may allow for instability in the predictions (lack of training data);
too few leads to poor representations and distorted dynamics.
Our threshold was empirically chosen by trial and error; the subject is worthy of a more detailed study.
An important question in deciding which PDE model to learn is how many ``emergent spatial'' derivatives one has to include in the PDE right-hand side.
In other words, how can one decide when $\partial W/\partial t$ is well approximated by a function of
$W$ and its derivatives with respect to $\phi_1$?
For Gaussian process regression, recent work using Automatic
Relevance Determination helps tackle this problem~\cite{lee20_coars_scale_pdes_from_fine}.
In our case we again decided empirically, by trial and error; a more thorough study must clearly follow.
In addition, the issue of boundary conditions in emergent space (here we used narrow ``boundary corridors''), as well as what
constitutes a well posed problem for an operator identified in a data-driven way constitute important (and challenging) questions to pursue; we mention here the possibility of using the approach of the ``baby-bathwater'' scheme in \cite{li07_decid_natur_coars_equat_throug_micros_simul}.
Fig.~\ref{fig:7} indicates that the learned model
captures qualitative changes in the dynamics when changing a system parameter,
here a Hopf bifurcation from a fixed point for $\gamma>\gamma_H$ to collective oscillations
for $\gamma < \gamma_H$.
More quantitatively, we reported
the leading spectrum of the linearization of the model evaluated at the fixed point.
This was obtained using automatic differentiation of the neural network model with respect to
its inputs.
Such computations can shed more light on the similarities and differences of
agent-based simulations and their emergent PDE descriptions.
In this paper we focused on a particular regime in parameter space.
However, our approach can easily be extended to more intricate dynamics that are known
in such a Stuart-Landau ensemble; informative examples are included in the videos \href{https://github.com/fkemeth/Emergent_PDE_Videos/blob/master/emergent_pde1.avi}{SI1} and \href{https://github.com/fkemeth/Emergent_PDE_Videos/blob/master/emergent_pde1.avi}{SI2}.
Historically, it is known that physical phenomena modeled at the fine scale through
atomistic/stochastic/agent-based simulations are often
well approximated using closed partial differential equations in terms of a few of
their collective observables (e.g. moments of the particle distribution, such as the agent density).
Our approach will be useful when we believe that such effective, collective PDE models in principle exist, but the closures required to write them down are not known.
It can also provide useful results in regimes where the strong mathematical assumptions required to provably obtain explicit closures can be relaxed.
This is an area where equation-free multiscale numerics has been used to solve the equations without writing them down, and where manifold learning has been used to even perform this solution ``(dependent) variable free'', that is, in terms of dependent variables not known a priori, but revealed through data mining of detailed simulations (see, for example, the discussion in~\cite{erban07_variab_free_explor_stoch_model}).
All scientific computation in latent space (e.g. see~\cite{chiavazzo14_reduc_model_chemic_kinet_via} and~\cite{lee20_model_reduc_dynam_system_nonlin}) falls in this class.
What is different and exciting (to us at least) in the present study, is the extension of this
approach to problems where there are no obvious {\em independent spatial variables} - dynamics of coupled oscillators, dynamics on and of networks, dynamics of ``systems of interacting systems'', where ``the right space'' for modeling the problem is not known {\em a priori}.
Writing models in such an emergent ``activity space'', with emergent space {\em and even emergent time(!)}~\cite{kemeth18_emerg_space_distr_data_with}
coordinates may become a useful method for the modeler: a tool that extends the toolkit for linking domain science knowledge at the detailed level with machine/manifold learning to build useful, predictive models.
Here, we chose a model based on local descriptors, {\em local in the emergent space}.
One can speculate about contexts in which such a local description might be beneficial. It certainly is more humanly parsimonious/compact to write down than the detailed list of all units and all interactions. It may also be convenient if one needs to make predictions with limited memory
(limited ``fast cpu memory'' so to speak). We do not need to know what every unit is doing - we look at the activity of similar units (that are already embedded nearby in emergent space) and make predictions based on smoothness (mathematically expressed through Taylor series) and the behavior of the neighbors.
Our emergent space can then be thought of as {\em a space where nearby (observations of) behaviors come already usefully clustered}. Alternatively, we can think of this space as embodying a useful ``attention geometry'' - the behaviors we need to pay attention to (because of their similarity) in order to make a prediction, are already our neighbors in this space. Geometric proximity in the emergent space saves us then from having to search for comparable behavior histories across all interacting units in physical space-time. This enables us to exploit smoothness across behavior histories in order to make local predictions with only a few nearby data.
We touched briefly upon the explainability of our emergent spatial coordinates by showing that our $\phi_1$ was one-to-one with, and thus calibratable to, the oscillator intrinsic frequencies - the agent heterogeneity.
The suggested approach then is to (a) decide how many emergent independent variables are necessary; (b) ask a domain scientist for physical quantities that may ``explain them''; and then (c) test whether the explainable and the data-driven parametrizations
are one-to-one on the data (the transformation is bi-Lipschitz: the determinant of its Jacobian is bounded away from zero and from infinity on the data, e.g.~\cite{sonday09_coars_grain_dynam_driven_inter,frewen10_coars_collec_dynam_animal_group,meila2018regression}).
Clearly, the explainability of predictive, generative equations in terms of data-driven dependent and independent variables, and operators approximated through machine learning is a crucial endeavor - when and why will we decide we trust results when we ``understand'' the algorithms, but do not ``understand'' the mechanistic, physical steps underlying the observations of what we model?
Will a ``different understanding" arise in latent/emergent space -analogous, say, to describing operators
in Fourier space rather than physical space, or studying control in Laplace space rather than state space?
From flocking starlings to interacting UAV swarms, this promises to be an exciting playing field for contemporary modelers.
\section{Methods}
{\bf Diffusion Maps}
Diffusion maps use a kernel function to weigh pairwise distances between data points
~\cite{Coifman2006,Nadler2006}, typically the Gaussian kernel
\begin{equation*}
k(x, y) = \exp\left(- \frac{\lVert x - y \rVert^2}{\epsilon}\right)
\end{equation*}
with a predefined kernel scale $\epsilon$ and a Euclidean distance metric, which we adopt here.
The data points $x, y$ are, in our case, the $N$ time series (each of length $8\cdot 10^5$; see below), resulting in a
$K \in \mathbb{R}^{N\times N}$ kernel matrix.
Row-normalizing this kernel matrix yields a Markov transition matrix,
also called diffusion matrix, and its leading independent eigenvectors
corresponding to the largest eigenvalues can be used to parametrize the data~\cite{dsilva18_parsim_repres_nonlin_dynam_system}.
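As a concrete illustration of the procedure just described, here is a minimal NumPy sketch of diffusion maps: Gaussian kernel on pairwise distances, row normalization into a Markov matrix, and the leading nontrivial eigenvectors as coordinates. The toy data set, the median kernel-scale heuristic, and all variable names are our own illustration, not the pipeline used for the experiments in the paper.

```python
import numpy as np

def diffusion_maps(X, n_coords=2, eps=None):
    """Minimal diffusion-map embedding of the rows of X (one data point per row)."""
    # Pairwise squared Euclidean distances between data points.
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    if eps is None:
        eps = np.median(sq)               # kernel scale: median of squared distances
    K = np.exp(-sq / eps)                 # Gaussian kernel matrix
    P = K / K.sum(axis=1, keepdims=True)  # row-normalize -> Markov (diffusion) matrix
    evals, evecs = np.linalg.eig(P)
    order = np.argsort(-evals.real)       # sort by decreasing eigenvalue
    # Skip the trivial constant eigenvector (eigenvalue 1).
    return evecs.real[:, order[1:n_coords + 1]]

# Toy usage: points on a noisy one-dimensional arc should be parametrized
# by the first nontrivial coordinate phi_1.
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0, 1, 200))
X = np.stack([np.cos(2 * t), np.sin(2 * t)], axis=1) + 0.01 * rng.normal(size=(200, 2))
phi = diffusion_maps(X, n_coords=1)
```

In the toy example, $\phi_1$ comes out (approximately) monotone in the arclength parameter $t$, which is exactly the sense in which diffusion maps "parametrize" the data.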
{\bf Example: Complex Ginzburg-Landau Equation}.
Consider the complex Ginzburg-Landau equation
\[\frac{\partial}{\partial t} W(x, t) = W(x, t) + \left(1+ic_1\right)\frac{\partial^2}{\partial x^2} W(x, t) -
\left(1-ic_2\right)|W(x, t)|^2 W(x, t)\]
in one spatial dimension \(x\), in a domain of length \(L\).
We solve this equation using the initial condition
\[W(x, 0) = \left(1+\cos \frac{\pi x}{L}\right)/2,\]
with zero-flux boundary conditions and parameter values \(c_1=1\), \(c_2=2\) and \(L=200\).
Numerically, we integrate using a three point stencil for the finite difference approximation
of the second derivative \(\partial^2/\partial x^2\) with \(N_{\mbox{int}}=256\)
discretization points and an implicit Adams method with \(dt=10^{-3}\) for the temporal evolution.
The resulting behavior is depicted in Fig.~\ref{fig:3}(a).
Data for training our model is sampled as described in the following:
For the number of training examples, we set \(n_{\mbox{train}}=20\) and for the number of test
examples \(n_{\mbox{test}}=1\), yielding \(n_{\mbox{total}}=21\).
At \(n_{\mbox{total}}=21\) points along the limit cycle shown in Fig.~\ref{fig:3}(a),
we sample data as follows:
At \(t_i = t_{\mbox{min}} + i\, d\tau\), with \(t_{\mbox{min}}=2000\) and \(i\in\{0, \dots, n_{\mbox{total}}-1\}\), we perturb
the limit cycle by scaling the respective snapshot at \(t_i\) as \(0.9 \cdot W(x, t_i)\) and
\(1.1 \cdot W(x, t_i)\). We integrate both of these snapshots forward in time for \(T=20\) time units,
and sample data after each \(dt=10^{-3}\).
This results in two transients, each comprised of \(20001\) snapshots at each \(t_i\).
This means, in total there are \(2 \times 20000 \times 20 = 8\cdot 10^5\) snapshot data pairs for training,
and \(2 \times 20000\) for validation.
We subsequently downsample the data to $N=128$ points per snapshot.
In order to find a parametrization for the discretization points of the PDE,
we concatenate the training time series of the \(N=128\) points,
resulting in \(N\) trajectories of length \(2\times 20000 \times 20 = 8\cdot 10^5\).
Then, we use diffusion maps with a Euclidean distance and a Gaussian kernel, and take the kernel scale \(\epsilon\) as the median of all squared distances.
This results in the one-dimensional parametrization $\phi_1$, as shown in Fig.~\ref{fig:3}(c).
We resample data on a regular grid in the interval $\left[-1, 1\right]$ using a cubic spline.
We estimate the time derivative at each point using finite differences in time,
\begin{equation}
\frac{\partial}{\partial t} W(x, t_j) = \partial_t W(x, t_j) \approx (W(x, t_j+dt)-W(x, t_j))/dt,
\label{eq:fd}
\end{equation}
yielding 20000 $(W(x, t_j)$, $\partial_t W(x, t_j))$ pairs per transient and $t_i$.\\
Using the $(W(x, t_j)$, $\partial_t W(x, t_j))$ pairs, we train a neural network \(f\) such that
\[ \partial_t W(x, t_j) \approx f(W(x, t_j))\]
in a supervised manner as follows:
We take \(N=128\) discretization points on each snapshot.
At these points we calculate the first \(n_{\mbox{derivs}}=3\) spatial derivatives
using a finite difference stencil of length \(l=9\) and the respective finite difference kernel for
each spatial derivative of the highest accuracy order that fits into \(l=9\).
The model thus takes the form
\[ \partial_t W(x_i, t_j) \approx f(W(x_i, t_j), \partial_x W(x_i, t_j), \partial_{xx} W(x_i, t_j),
\partial_{xxx} W(x_i, t_j))\]
with the derivatives calculated as described above.
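For concreteness, such a stencil computation can be sketched in NumPy as follows. The coefficients below are the standard 8th-order central-difference weights for the first derivative on a stencil of length \(l=9\); the actual implementation may treat boundaries and the higher derivatives differently, and the test function is our own.

```python
import numpy as np

# 8th-order central finite-difference coefficients for the first derivative
# on a stencil of length l = 9 (offsets -4 ... +4).
c1 = np.array([1/280, -4/105, 1/5, -4/5, 0, 4/5, -1/5, 4/105, -1/280])

def d_dx(f, dx):
    """First spatial derivative at interior points (needs 4 neighbours per side)."""
    # np.convolve flips the kernel, so pass the reversed coefficients.
    return np.convolve(f, c1[::-1], mode="valid") / dx

x = np.linspace(0, 2 * np.pi, 257)[:-1]
dx = x[1] - x[0]
df = d_dx(np.sin(x), dx)                    # should approximate cos(x) on the interior
err = np.max(np.abs(df - np.cos(x[4:-4])))  # 8th-order accurate: tiny error
```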
Note that \(W(x, t)\) is complex, which means at each \((x_i, t_j)\) the input to the neural network
is $8$-dimensional for \(n_{\mbox{derivs}}=3\).
The network itself is composed of \(4\) fully connected hidden layers
with \(96\) neurons each and \(\mbox{tanh}\)
activation function (resulting in $\approx 28\cdot 10^3$ trainable parameters). The output layer contains two neurons with no activation function,
one for the real and one for the imaginary part of \(\partial_t W\).
The network weights are initialized uniformly using PyTorch's default weight initialization~\cite{PyTorch},
and are optimized using the Adam optimizer~\cite{kingma2017adam} with initial learning rate of \(10^{-3}\) and
batch size of 1024.
Mean-squared error between the predicted and actual $\partial_t W(x_i, t_j)$, Eq.~\eqref{eq:fd}, is taken as the loss.
The model is trained for 60 epochs, and the learning rate is reduced by a factor of 2 if the
validation loss does not decrease for 7 epochs.
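The architecture can be made concrete with the following plain-NumPy stand-in (random, untrained weights) for the PyTorch model described above; it is included only to pin down the layer sizes and parameter count, and all names are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(sizes):
    """Random weights and biases for a tanh MLP; sizes = [in, hidden..., out]."""
    return [(rng.normal(scale=1 / np.sqrt(m), size=(m, n)), np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def forward(params, z):
    # tanh on all hidden layers, linear (no activation) output layer
    for W, b in params[:-1]:
        z = np.tanh(z @ W + b)
    W, b = params[-1]
    return z @ W + b

# Architecture from the text: 8 inputs (Re/Im of W and its first 3 spatial
# derivatives), 4 hidden layers of 96 neurons, 2 outputs (Re/Im of dW/dt).
params = make_mlp([8, 96, 96, 96, 96, 2])
n_params = sum(W.size + b.size for W, b in params)  # 28994 for these layer sizes
out = forward(params, np.ones((5, 8)))              # batch of 5 inputs -> shape (5, 2)
```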
Needless to say, other general purpose approaches to learning the right-hand-side of the operator (Gaussian Processes~\cite{lee20_coars_scale_pdes_from_fine}, Geometric Harmonics~\cite{coifman06_geomet_harmon}, etc.) can also be used.
Inference is done by taking an initial snapshot of the validation data or on the limit cycle and integrating it forward in time using the learned model and an integration scheme such as forward Euler.
At each time step, the boundary conditions (in the form of narrow boundary corridors) are taken from the ground-truth data.
This raises the issue of the right width for these corridors and, more generally, of prescribing boundary/initial/internal conditions appropriate for the well-posedness of the overall problem,
especially since the operator (the right-hand side of the PDE) comes in the form of a ``black box''. This is already the subject of extensive research that we, among others, are pursuing~\cite{wellposedness}.
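The inference loop can be sketched schematically as follows; the function and variable names are our own, and for a self-contained check we use a known right-hand side instead of a trained network.

```python
import numpy as np

def rollout(f, w0, ground_truth, n_steps, dt, corridor=4):
    """Forward-Euler rollout of a learned right-hand side f, overwriting
    narrow boundary corridors with ground-truth values after every step."""
    w = w0.copy()
    traj = [w.copy()]
    for step in range(n_steps):
        w = w + dt * f(w)                                   # Euler step with the model
        w[:corridor] = ground_truth[step + 1, :corridor]    # left boundary corridor
        w[-corridor:] = ground_truth[step + 1, -corridor:]  # right boundary corridor
        traj.append(w.copy())
    return np.array(traj)

# Toy check with the known right-hand side f(w) = -w, whose exact solution
# is w0 * exp(-t); the rollout should stay close to it.
n_steps, dt = 100, 0.01
w0 = np.ones(16)
t_grid = dt * np.arange(n_steps + 1)
exact = np.exp(-t_grid)[:, None] * w0[None, :]
traj = rollout(lambda w: -w, w0, exact, n_steps, dt)
```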
In addition, each predicted snapshot from the model is filtered as described in the following.
On the whole training data set, an SVD is performed.
Using the obtained \(U\) and \(V\) matrices, we can decompose each predicted snapshot during inference.
In doing so, we truncate the SVD decomposition after two dimensions, and reconstruct the snapshot.
This means that each snapshot is projected onto the two-dimensional subspace in which the training
data lives, and thus prevents directions that have not been sampled from growing during inference.
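This filtering step can be sketched in NumPy as follows. Here we store snapshots as rows, so the leading right-singular vectors span the snapshot subspace; the toy data and dimensions are our own illustration.

```python
import numpy as np

def make_svd_filter(train_snapshots, n_keep=2):
    """Project snapshots onto the leading n_keep singular directions
    of the training data (rows = snapshots)."""
    _, _, Vt = np.linalg.svd(train_snapshots, full_matrices=False)
    basis = Vt[:n_keep]                       # (n_keep, N) orthonormal rows
    def filt(snapshot):
        return basis.T @ (basis @ snapshot)   # project, then reconstruct
    return filt

# Toy data living in a 2-d subspace of R^32, plus a spurious direction u3.
rng = np.random.default_rng(1)
u1, u2, u3 = np.linalg.qr(rng.normal(size=(32, 3)))[0].T
train = np.outer(rng.normal(size=200), u1) + np.outer(rng.normal(size=200), u2)
filt = make_svd_filter(train, n_keep=2)
cleaned = filt(u1 + 0.5 * u3)                 # the u3 component is removed
```

Applying the filter after every prediction keeps the integrated trajectory inside the subspace sampled during training.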
The resulting dynamics obtained from the learned model and using an initial snapshot on
the limit cycle is depicted in Fig.~\ref{fig:3}(e).
$4$-point wide boundaries are provided on both sides of the domain.
The learned dynamics can be investigated more clearly by comparing the true and the learned
transient dynamics towards the limit cycle.
To do so, we integrate a snapshot perturbed away from the limit cycle using the
complex Ginzburg-Landau equation and the learned model,
and calculate the smallest Euclidean distance in $\mathbb{C}^N$ at each time step of the
obtained trajectories to the limit cycle.
The results are shown in Fig. \ref{fig:3}(f).\\
We also carefully checked that the learned model is converged with respect to the number of discretization points $N$.
{\bf Example: Stuart-Landau Ensemble}
The dynamics as depicted in Figs.~\ref{fig:1} and~\ref{fig:6} are globally stable for the parameters considered here~\cite{matthews90_phase_diagr_collec_behav_limit_cycle_oscil}.
In fact, arbitrary initial conditions decay to the limit cycle exponentially.
Such behavior can be investigated in more detail using Floquet theory:
the convergence to the limit cycle can then be described by Floquet multipliers with their
associate eigendirections.
Since the limit cycle described above is stable, the absolute values of the Floquet multipliers
are less than one, except for one of them which equals one.
In particular, multipliers with large magnitude indicate slow attracting directions,
whereas multipliers with absolute values close to zero indicate fast decaying directions.
If both small and large Floquet multipliers are present,
then there exist transients with multiple time scales.
\begin{figure}[ht]
\centering
\includegraphics{appendix_1.pdf}
\caption{(a)~Floquet multipliers obtained from the monodromy matrix
for the dynamics shown in Fig.~\ref{fig:6}.
(b)~Eigendirection corresponding to the pair of complex conjugate multipliers
$\lambda_2$ and $\lambda_3$ (marked in orange), indicating a slow attracting direction.
(c, d) Eigendirections corresponding to the pairs of complex conjugate multipliers marked
as green and red, indicating fast contracting directions.
Note that since the $W_k$ are complex, the directions $\vec{v}_i$ are complex,
with the real parts indicated as solid curves,
and the imaginary parts indicated as shaded curves.}
\label{fig:a1}
\end{figure}
Following Ref.~\cite{taylor92_coupled_double_toil},
we calculate the Floquet multipliers by computing the monodromy matrix $\mathbf{V}$ along the limit cycle.
In particular, we obtain $\mathbf{V}(T)$ by integrating the variational equation
\begin{equation}
\frac{\mathrm{d}\mathbf{V}}{\mathrm{d}t} = \left.\frac{\partial F}{\partial x}\right|_{x(t)} \cdot \mathbf{V}
\label{eq:monodromy}
\end{equation}
from $t=0$ to $t=T$,
with $\mathbf{V}(0) = \mathbf{I}_{2N\times 2N}$, $\mathbf{I}$ being the identity matrix,
and $T$ being the period of one oscillation.
The matrix $\frac{\partial F}{\partial x}$ represents the Jacobian of Eq.~\eqref{eq:sle} obtained analytically through differentiation and evaluated along the limit cycle.
The eigenvalues of $\mathbf{V}(T)$ then correspond to the Floquet multipliers,
with the corresponding eigenvectors being their respective directions.\\
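As a minimal, self-contained illustration of this computation (our own toy example, not the paper's ensemble), consider the Hopf normal form in Cartesian coordinates, i.e. a single Stuart-Landau-type oscillator whose limit cycle is the unit circle with period $T=2\pi$; its Floquet multipliers are known analytically to be $1$ and $e^{-4\pi}$.

```python
import numpy as np

def F(p):
    """Hopf normal form: the unit circle is a stable limit cycle of period 2*pi."""
    x, y = p
    r2 = x * x + y * y
    return np.array([x - y - x * r2, x + y - y * r2])

def J(p):
    """Analytical Jacobian of F."""
    x, y = p
    return np.array([[1 - 3 * x * x - y * y, -1 - 2 * x * y],
                     [1 - 2 * x * y, 1 - x * x - 3 * y * y]])

# Integrate the variational equation dV/dt = J(x(t)) V along the cycle,
# with V(0) = I, starting on the limit cycle at (1, 0).
T, dt = 2 * np.pi, 1e-4
p, V = np.array([1.0, 0.0]), np.eye(2)
for _ in range(int(T / dt)):
    V = V + dt * (J(p) @ V)   # forward-Euler step of the variational equation
    p = p + dt * F(p)         # forward-Euler step of the trajectory itself
multipliers = np.linalg.eigvals(V)
# Expected Floquet multipliers: 1 (neutral direction along the cycle)
# and exp(-4*pi) ~ 3.5e-6 (fast radial contraction).
```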
The largest multipliers obtained this way, together with the three slowest eigendirections,
are depicted in Fig.~\ref{fig:a1}.
Notice that the single multiplier equal to one represents the neutral direction along the limit cycle.
In addition, there is a pair of complex conjugate eigenvalues $\lambda_{2,3}\approx-0.4\pm 0.4i$ (orange in Fig.~\ref{fig:a1}).
Due to the magnitude of their real parts, the dynamics in this eigenspace is slow compared to the
subsequent eigendirections.
These eigendirections are, as apparent from Fig.~\ref{fig:a1}(b), smooth functions
of the frequencies $\omega_k$.
In addition, perturbations in this two-dimensional eigenspace spiral towards the stable limit cycle.\\
The directions of the subsequent multipliers affect only isolated oscillators.
In particular, the subsequent direction (green in Fig.~\ref{fig:a1})
following the slow eigenspace affects only the fastest oscillator,
that is, the oscillator with the largest intrinsic frequency $\omega_k$.
The next direction then perturbs the second fastest oscillator (red in Fig.~\ref{fig:a1}),
and so on.
The step-like structure of the Floquet multipliers highlights the multi-scale behavior of the coupled
oscillator system: the oscillation and the inward-spiraling slow dynamics on one scale,
and the dynamics of single oscillators towards the limit cycle on the other, fast scale.
These eigendirections with support on the ``most different'' oscillator are indicative of the
SNIPER bifurcation marking the ``edge of synchronization''.
We sample data by integrating system Eq.~\eqref{eq:sle} from random initial conditions, until the
dynamics are settled on the limit cycle. For $n_{lc}$ different points along the limit cycle,
we calculate the monodromy matrix from Eq.~\eqref{eq:monodromy} and estimate the least stable eigendirection $\vec{v}_1$ transverse to the limit cycle, presumably lying on the slow stable manifold.
Then, we perturb in this direction by perturbing each point $W_{lc}$ on the limit cycle as $W_{lc}\pm \epsilon \vec{v}_1$, with $\epsilon=0.1$. This yields three initial points; integrating these points for a fixed amount of time then returns two transients towards the limit cycle and one trajectory on the attractor.
Here, we choose $n_{lc}=20$ for the training data, and $n_{lc}=5$ for the test data, and a time window of $T=200$ dimensionless time units with a sampling rate of $dt=0.05$, yielding $4000$ data points per trajectory, or $3 \cdot n_{lc}\cdot T/dt=240,000$ training data points and $60,000$ test data points.
The concatenated time series of length $3 \cdot n_{lc} \cdot T/dt$ then serve as input data points for diffusion maps; the possibility of using time series ``snippets" of different durations is explored in~\cite{kemeth18_emerg_space_distr_data_with}.
The temporal derivative $\partial_t W$ is then estimated using finite differences, cf. Eq.~\eqref{eq:fd}.
When also changing the system parameter $\gamma$ we provide for each data point the corresponding $\gamma$ value as additional input to the network.
In addition, the training data consists of uniform $\gamma$ values in $\left[1.7, 1.8\right]$, and the test data consists of randomly sampled $\gamma$ values different from those in the training data.
In addition, we estimate an SVD basis from the complete training data. During inference,
the predictions of $f$ are reconstructed using this basis and a truncation with $n_s=3$
dimensions.\\
For the extraction of diffusion modes, we use a kernel scale of $\epsilon=20$ for the case when $\gamma$ is fixed and $\epsilon=10$ when we sample data with different $\gamma$ values.
Other hyperparameters and the model architecture are as described in the previous section.\\
For the case with two heterogeneous parameters, we simulate an ensemble of
$N=2048$ oscillators and use \(n_{\mbox{train}}=5\) and \(n_{\mbox{test}}=1\).
We resample the data on a rectangular $32\times 32$ grid, as shown in Fig.~\ref{fig:8}(a) in diffusion maps space. Here, we use up to 2 derivatives in each dimension, and a finite difference kernel size of $l=5$. We thus provide boundaries of width 2 along the edges.
The model has 3 layers with 64 neurons each (resulting in $\approx 9.3\cdot 10^3$ trainable parameters), and is trained for 20 epochs.
\textbf{Code Availability}
The code to reproduce the results will be made available under \href{https://github.com/fkemeth/emergent_pdes}{{https://github.com/fkemeth/emergent\_pdes}} upon publication.
\textbf{Acknowledgements} This work was partially supported by U.S. Army Research Office (through a MURI program), DARPA, and the U.S. Department of Energy.
\textbf{Author Contributions} IGK conceived the research, which was planned jointly with all the authors. FPK performed a large part of the research, with contributions from TB, TT, FD, SJM and CRL. FPK and IGK initially wrote the manuscript, which was edited in final form with contributions from all the authors.
\textbf{Competing interests}
The authors declare no competing interests.
\section{Introduction}
Let $M$ be a compact 3-manifold, and let $K$ be a knot inside $M$. Since the work of Dehn in 1910 \cite{dehn1910topologie}, deciding whether $K$ can be unknotted has been a major question in low-dimensional topology. Dehn formulated the word and the isomorphism problems for groups in an attempt to solve this question. (The isomorphism problem was also stated by Tietze \cite{tietze1908topologischen} in 1908.) This in turn led to Novikov's discovery of the undecidability of the word problem for finitely presented groups \cite{novikov1955algorithmic} and the undecidability of the isomorphism problem for finitely presented groups by Adian \cite{Adian1957unsolvability} and Rabin \cite{rabin1958unsolvability}. Haken was the first to prove that the unknot recognition problem is decidable, using the theory of normal surfaces introduced previously by Kneser \cite{kneser1929geschlossene}.
Seifert defined the \emph{genus} of a knot in the 3-sphere \cite{seifert1935geschlecht}. Consider all connected, compact, embedded, orientable surfaces in $M$ whose boundary coincides with $K$, and let the genus, $g(K)$, be the minimum genus of the surfaces in this family. If there is no such surface, then we define $g(K)= \infty$. An easy observation is that $g(K) < \infty$ if and only if $K$ represents the trivial element in the first homology group $H_1(M; \mathbb{Z})$. Furthermore, $g(K) = 0$ if and only if $K$ is the unknot.
Thus, one of the most basic decision problems in low-dimensional topology is as follows: given a knot $K$ in a 3-manifold $M$ and a non-negative integer $g$, is the genus of $K$ equal to $g$? The manifold $M$ is provided via a triangulation in which $K$ is a specified subcomplex. We term this problem \textsc{3-manifold knot genus}. There are good reasons for believing this to be hard. Agol, Hass and Thurston \cite{agol2006computational} considered the related problem \textsc{Upper bound on 3-manifold knot genus}, which asks whether $g(K) \leq g$, and they proved that this problem is \textbf{NP}-complete. A consequence is that if \textsc{3-manifold knot genus} were to be in \textbf{NP}, then \textbf{NP} $=$ \textbf{co-NP}, contradicting a basic conjecture in complexity theory. (See Theorem 1.4 of \cite{lackenby2016efficient} for this deduction.)
It is natural to ask whether the difficulty of \textsc{3-manifold knot genus} is a consequence of the fact that $K$ and $M$ can both vary. What if we fix the manifold $M$, and only allow $K$ to vary? In \cite{agol2006computational}, Agol, Hass and Thurston asked about the computational complexity of this problem. The specific case where $M$ is the 3-sphere was addressed by the first author. He showed \cite{lackenby2016efficient} that, in this restricted setting, deciding whether a knot has genus $g$ \emph{is} in \textbf{NP}. More generally, if we are given a triangulation of a rational homology 3-sphere $M$, a knot $K$ as a subcomplex and an integer $g$, then the question `does $g(K)$ equal $g$?' lies in \textbf{NP}.
Let $N(K)$ be a tubular neighbourhood of $K$ with interior $N^\circ(K)$. The reason why knots in rational homology 3-spheres seem to be so much more tractable than in general 3-manifolds is that, in this situation, there can be only one possible homology class in $H_2(M - N^\circ(K), \partial N(K))$, up to sign, for a compact oriented spanning surface. This suggests that knots in more complicated 3-manifolds $M$ might be difficult to analyse, since as soon as $b_1(M) \geq 1$, there may be infinitely many possibilities for the homology class of a spanning surface. However, the main result of this paper is that, provided we consider knots in a fixed 3-manifold $M$, then the problem of determining their genus lies in \textbf{NP}.
In order to state this result more precisely, we need to explain how the knots $K$ in $M$ are presented. Any closed orientable 3-manifold is obtained by integral surgery on a framed link $L$ in the 3-sphere \cite{lickorish1962representation, wallace1960modifications}. When $M$ is closed, we fix such a surgery description of $M$ by fixing a diagram $D$ for $L$ in which the framing of $L$ is the diagrammatic framing; this specifies the surgery slopes. We specify knots $K$ in $M$ by giving a diagram for $K \cup L$ that contains $D$ as a sub-diagram. The \emph{total number of crossings} of $K$ is defined as the number of crossings in this diagram between $K$ and itself and between $K$ and $L$.
\medskip
\noindent \textbf{Problem}: \textsc{Determining knot genus in the fixed closed 3-manifold $M$}.\\
\emph{Input}: A diagram of $K \cup L$ that contains $D$ as a subdiagram, and an integer $g \geq 0$ in binary.\\
\emph{Question}: Is the genus of $K$ equal to $g$?\\
The size of the input is equal to the sum of the number of digits of $g$ in binary and the total number of crossings of $K$. Strictly speaking, there are infinitely many decision problems here, one for each 3-manifold $M$ and surgery diagram $D$.
\begin{thm}
Let $M$ be a closed, orientable 3-manifold given by integral surgery on a framed link in the 3-sphere. The problem \textsc{Determining knot genus in the fixed closed 3-manifold $M$} lies in \textbf{NP}.
\label{main}
\end{thm}
This can be generalised to compact orientable 3-manifolds with non-empty boundary, as follows. Any compact orientable 3-manifold $M$ can be specified by means of the disjoint union of a graph $\Gamma$ and a framed link $L$ in the 3-sphere. The manifold $M$ is obtained from $S^3$ by removing an open regular neighbourhood of $\Gamma$ and performing surgery along $L$. We fix a diagram $D$ for $\Gamma \cup L$, where again the surgery slopes on $L$ agree with the diagrammatic framing. We can then specify a knot $K$ in $M$ by giving a diagram for $K \cup \Gamma \cup L$ that contains $D$ as a sub-diagram. Again, the \emph{total crossing number} of $K$ is the number of crossings in this diagram between $K$ and itself and between $K$ and $\Gamma \cup L$. We say that \textsc{Determining knot genus in a fixed 3-manifold $M$} is the decision problem asking whether the genus of $K$ is equal to a given non-negative integer.
\begin{thm}
Let $M$ be a compact, orientable 3-manifold given as above. The problem \textsc{Determining knot genus in the fixed 3-manifold $M$} lies in \textbf{NP}.
\label{main:boundary}
\end{thm}
\subsection{Ingredients of the proof}
\medskip
(1) One of the key technical tools in the paper is the use of non-standard measures of complexity for various objects. We introduce the relevant terminology now.
For an integer $n$, let $C_{\mathrm{nat}}(n) = |n|$ and let $C_{\mathrm{dig}}(n)$ be the number of digits of $n$ when expressed in binary. In the case of negative $n$, we view the minus sign at the front as an extra digit. For a list of integers $(n_1, \cdots, n_k)$, let $C_{\mathrm{nat}}(n_1, \cdots, n_k)$ be $\sum_i C_{\mathrm{nat}}(n_i)$. Similarly, let $C_{\mathrm{dig}}(n_1, \cdots, n_k)$ be $\sum_i C_{\mathrm{dig}}(n_i)$.
For a matrix $A$ with integer entries $A_{ij}$, let $C_{\mathrm{nat}}(A)$ be $\sum_{ij} C_{\mathrm{nat}}(A_{ij})$ and let $C_{\mathrm{dig}}(A)$ be $\sum_{ij} C_{\mathrm{dig}}(A_{ij})$.
For a rational number $p/q$, with $p$ and $q$ in their lowest terms, let $C_{\mathrm{nat}}(p/q)= C_{\mathrm{nat}}(p) + C_{\mathrm{nat}}(q)$ and let $C_{\mathrm{dig}}(p/q)= C_{\mathrm{dig}}(p) + C_{\mathrm{dig}}(q)$.
The $C_{\mathrm{dig}}$ notions of size are the most natural ones and the ones that are most widely used in complexity theory, because they reflect the actual amount of memory required to store the number, list or matrix. However, we will also find the $C_{\mathrm{nat}}$ versions useful.
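To make the two measures concrete, the following short Python sketch (ours, purely illustrative, not part of the paper's algorithms) computes $C_{\mathrm{nat}}$ and $C_{\mathrm{dig}}$ for integers and lists of integers.

```python
def c_nat(n: int) -> int:
    """C_nat(n) = |n|."""
    return abs(n)

def c_dig(n: int) -> int:
    """C_dig(n) = number of binary digits of n, counting a minus sign as one extra digit."""
    return (1 if n < 0 else 0) + max(1, abs(n).bit_length())

def c_nat_list(ns):
    return sum(c_nat(n) for n in ns)

def c_dig_list(ns):
    return sum(c_dig(n) for n in ns)

# C_dig is exponentially smaller than C_nat for large entries:
# c_nat(1024) == 1024, while c_dig(1024) == 11 (binary 10000000000).
# c_dig(-5) == 4: three binary digits of 5, plus one for the minus sign.
```

For a matrix, one sums the per-entry values over all entries, matching the definitions above.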
\medskip
(2) One of the main ingredients is the following result, proved by the first author in \cite{lackenby2016efficient}.
\begin{thm}[Lackenby]
\textsc{Thurston norm of a homology class} is in \textbf{NP}.
\label{lackenby}
\end{thm}
The decision problem \textsc{Thurston norm of a homology class} takes as its input a triangulation $\mathcal{T}$ for a compact orientable 3-manifold $M$, a simplicial 1-cocycle $c$ and an integer $g$, and it asks whether the Thurston norm of the dual of $c$ is equal to $g$. (For the definition of the Thurston norm, see Section \ref{Sec:ThurstonNorm}.) The measure of complexity of $\mathcal{T}$ is its number of tetrahedra, denoted $|\mathcal{T}|$. The measure of complexity of $c$ is $C_{\mathrm{dig}}(c)$, where we view $c$ as a list of integers, by evaluating it against all the edges of $\mathcal{T}$ (when they are oriented in some way). The measure of complexity of $g$ is $C_{\mathrm{dig}}(g)$.
\medskip
(3) Thus, one can efficiently certify the Thurston norm of the dual of a \emph{single} cohomology class. However, in principle, a minimal genus Seifert surface for the knot $K$ could be represented by one of infinitely many classes. To examine all possible classes simultaneously, one needs a good picture of the Thurston norm ball.
\begin{thm}
Fix an integer $B \geq 0$. The problem \textsc{Thurston norm ball for $b_1 \leq B$} lies in \textbf{FNP}, where $b_1$ denotes the first Betti number.
\label{thurston ball}
\end{thm}
Recall that \textbf{FNP} is the generalisation of \textbf{NP} from decision problems (where a yes/no answer is required) to function problems (where more complicated outputs might be required). A formal definition is given in Section \ref{Sec:Complexity}.
The problem \textsc{Thurston norm ball for $b_1 \leq B$} is as follows. It takes, as its input, a triangulation $\mathcal{T}$ of a compact orientable 3-manifold $X$ with $b_1(X) \leq B$, and a list of simplicial integral cocycles $\phi_1, \cdots, \phi_b$ that form a basis for $H^1(X; \mathbb{R})$. The output is all the information that one needs to compute the Thurston norm ball:
\begin{enumerate}
\item a collection of integral cocycles that forms a basis for the subspace $W$ of $H^1(X; \mathbb{R})$ with Thurston norm zero;
\item a list $V \subset H^1(X ; \mathbb{Q})$ of points that project to the vertices of the unit ball of $H^1(X; \mathbb{R})/W$, together with a list of subsets of these vertices that form faces. These are given as rational linear combinations of $\phi_1, \cdots, \phi_b$.
\end{enumerate}
At first sight, this theorem seems to lead easily to the proof of Theorem \ref{main}. However, its power is blunted by the non-standard notion of complexity that it uses. This is defined to be $|\mathcal{T}| + \sum_i C_{\mathrm{nat}}(\phi_i)$. Thus, it only works well when $C_{\mathrm{nat}}(\phi_i)$ is `small' for each $i$. That such a collection of simplicial cocycles exists in our setting is a consequence of the following surprising result.
\medskip
(4) Constructing an efficient basis for the second homology of a knot complement, for a fixed ambient manifold.
\begin{thm}
Let $M$ be a compact orientable 3-manifold given by removing an open regular neighbourhood of a graph $\Gamma$ in $S^3$ and performing integral surgery on a framed link $L$ in the complement of $\Gamma$. Let $D$ be a fixed diagram for $\Gamma \cup L$ where the surgery slopes on $L$ coincide with the diagrammatic framing. Let $K$ be a homologically trivial knot in $M$, given by a diagram of $K \cup \Gamma \cup L$ that contains $D$ as a sub-diagram. Let $c$ be the total crossing number of $K$. Set $X = M - N^\circ(K)$ as the exterior of $K$ in $M $. There is an algorithm that builds a triangulation of $X$ with $O(c)$ tetrahedra, together with simplicial $1$-cocycles $\phi_1 , \cdots , \phi_b$ that form an integral basis for $H^1(X ; \mathbb{Z})$ with $\sum_i C_{\mathrm{nat}}(\phi_i)$ at most $O(c^2)$. The algorithm runs in time polynomial in $c$. All the above implicit constants depend on the manifold $M$ and not the knot $K$.
\label{basis}
\end{thm}
\medskip
(5) Controlling the number of faces and vertices of the Thurston norm ball polyhedron, in the presence of an efficient basis for the second homology.
A crucial step in the proof of Theorem \ref{thurston ball} is to bound the number of vertices and faces of the Thurston norm ball of the manifold $X$. The following result gives this, assuming that we have a good bound on the Thurston norm of a collection of surfaces that form a basis for $H_2(X, \partial X; \mathbb{R})$.
\begin{thm}
Let $X$ be a compact orientable 3-manifold, and let $m$ be a natural number. Assume that there exist properly immersed oriented surfaces $S_1 , \cdots, S_b$ in $X$ such that their homology classes form a basis for $H_2(X, \partial X; \mathbb{R}) $, and for each $1 \leq i \leq b$ we have $|\chi_-(S_i)| \leq m$. Denote by $W$ the subspace of $H^1(X ; \mathbb{R})$ with trivial Thurston norm.
The number of facets of the unit ball for the induced Thurston norm on $H^1(X ; \mathbb{R})/W$ is at most $(2m+1)^b$, where $b = b_1(X)$ is the first Betti number of $X$.
Hence, the number of vertices is at most $(2m+1)^{b^2}$ and the number of faces is at most $b(2m+1)^{b^2}$.
\label{number of faces}
\end{thm}
For the definition of $\chi_-(S)$, see Section \ref{Sec:ThurstonNorm}. The proof of Theorem \ref{number of faces} uses the fact, due to Thurston \cite{thurston1986norm}, that the vertices of the dual unit ball of the Thurston norm are integral. See Theorem \ref{lattice points} for a result that gives an upper bound on the number of these integral points.
\medskip
(6) Constructing a basis for the subspace of the second homology with trivial Thurston norm.
\begin{thm}
\label{Thm:BasisForW}
Let $\mathcal{T}$ be a triangulation of a compact orientable irreducible 3-manifold $X$. If $X$ has any compressible boundary components, suppose that these are tori. Then there is a collection $w_1, \cdots, w_r$ of integral cocycles that forms a basis for the subspace $W$ of $H^1(X; \mathbb{R})$ consisting of classes with Thurston norm zero, with $\sum_i C_{\mathrm{dig}}(w_i)$ at most $O(|\mathcal{T}|^2)$.
\end{thm}
This is proved by showing that $W\cap H^1(X ; \mathbb{Z})$ is spanned by fundamental normal surfaces, which is a consequence of work of Tollefson and Wang \cite{TollefsonWang}.
\medskip
(7) In Theorem \ref{Thm:BasisForW}, it is assumed that $X$ is irreducible and that every component of $\partial X$ is toroidal or incompressible. In Section \ref{Sec:SpheresDiscs}, we explain how we may ensure this. We cut along a maximal collection of compression discs and essential normal spheres to decompose $X$ into pieces, and we construct a new simplicial basis for the cohomology of the pieces. We also use the following result from \cite{lackenby2016efficient}.
\begin{thm}[Lackenby]
\label{Thm:IrreducibleNP}
The following decision problem lies in \textbf{NP}. The input is a triangulation of a compact orientable 3-manifold $M$ with (possibly empty) toroidal boundary and $b_1(M) > 0$, and the problem asks whether $M$ is irreducible.
\end{thm}
\begin{cor}
\label{Cor:IrredIncompNP}
The following decision problem lies in \textbf{NP}. The input is a triangulation of a compact orientable 3-manifold $M$ with $b_1(M) > 0$, and the problem asks whether $M$ is irreducible and has incompressible boundary.
\end{cor}
This is an immediate consequence of Theorem \ref{Thm:IrreducibleNP}. This is because a compact orientable 3-manifold $M$ is irreducible with incompressible boundary if and only if its double $DM$ is irreducible. This follows from the equivariant sphere theorem \cite{MeeksSimonYau}. Moreover, $b_1(DM)> 0$ if and only if $b_1(M)>0$.
\subsection{Varying $M$ and $K$}
As mentioned above, it seems very unlikely that Theorems \ref{main} and \ref{main:boundary} remain true if $M$ and $K$ are allowed to vary, because of the following result of Agol, Hass and Thurston \cite{agol2006computational}.
\begin{thm}
The following problem is \textbf{NP}-complete. The input is a triangulation of a closed orientable 3-manifold $M$, a knot $K$ in its 1-skeleton and an integer $g$, and the problem asks whether the genus of $K$ is at most $g$.
\end{thm}
However, what if we allow $M$ to vary but fix $b_1(M)$ in advance? It is unclear to the authors whether the problem of determining knot genus in such manifolds $M$ is likely to lie in \textbf{NP}.
We believe that in this more general setting, Theorem \ref{basis} does not hold. Certainly, the proof of Theorem \ref{basis} required $M$ to be fixed: the resulting bound on $\sum_i C_{\mathrm{nat}}(\phi_i)$ was used to bound $\chi_-(S_i)$, where $S_i$ is a representative surface for the Poincar\'e dual of $c_i$. In the absence of such a bound, it is not clear that one can find a good upper bound on the number of faces and vertices of the Thurston norm ball for $H^1(X; \mathbb{R})$. In particular, it is an interesting question whether there is a sequence of 3-manifolds $X$ with bounded first Betti number and triangulations $\mathcal{T}$, where the number of vertices of the Thurston norm ball of $X$ grows faster than any polynomial function of $|\mathcal{T}|$.
\section{Preliminaries}
\begin{notation}
For a metric space $X$ and $A \subset X$, denote the metric completion of $X - A$ with the induced path metric by $X \setminus \setminus A$.
For a subset $A$ of a topological space $Y$, the interior of $A$ is denoted by $A^\circ$.
The first Betti number of a manifold $M$ is denoted by $b_1(M)$.
\end{notation}
\subsection{Complexity Theory}
\label{Sec:Complexity}
The material in this section is borrowed from \cite{arora2009computational,rich2008automata}, and we refer the reader to them for a more thorough discussion.
Let $\{ 0 ,1 \}^*$ be the set of all finite strings in the alphabet $\{ 0 , 1 \}$. A \emph{problem} $P$ is defined as a function from $\{ 0 , 1 \} ^*$ to $\{ 0 ,1 \}^*$. Here the domain is identified with the \emph{inputs} or \emph{instances}, and the range is identified with the \emph{solutions}. A \emph{decision problem} is a problem whose range can be taken to be $\{ 0 ,1 \} \subset \{ 0 ,1 \}^*$. Intuitively, a decision problem is a problem with a yes or no answer.
A (deterministic) \emph{Turing machine} is a basic computational device that can be used as a model of computation. We refer the reader to Page 12 of \cite{arora2009computational} for a precise definition. By an \emph{algorithm} for the problem $P$, we mean a Turing machine $M$ that, given any instance $I$ of the problem on its tape, computes and halts exactly with the solution $P(I)$. We say $M$ runs in time $T \colon \mathbb{N} \longrightarrow \mathbb{N}$ if, for any instance $I$ of binary length $|I|$, starting the Turing machine $M$ with $I$ on its tape causes it to halt after at most $T(|I|)$ steps.
The \emph{complexity class} \textbf{P} consists of all decision problems $P$ for which there exist a Turing machine $M$ and positive constants $c,d$ such that $M$ answers the problem in time $c n^d$ on inputs of length $n$.
The complexity class \textbf{NP} consists of decision problems such that their yes solutions can be efficiently \emph{verified}. By this we mean that there is a Turing machine that can verify the yes solutions in polynomial time. This is possibly a larger complexity class than the class \textbf{P}, which was described as the set of decision problems that can be efficiently \emph{solved}. In other words, \textbf{P} $\subseteq$ \textbf{NP}. The precise definition is as follows. By a \emph{language}, we mean a subset of $\{ 0 ,1 \}^*$. In our context, we have a decision problem $P \colon \{ 0 ,1 \}^* \longrightarrow \{ 0 ,1 \}$ and we take $L$ as the set of instances whose solutions are equal to $1$ (yes answer).
\begin{definition}
A language $L \subset \{ 0 ,1 \}^*$ is in \textbf{NP}, if there exists a polynomial $p \colon \mathbb{N} \longrightarrow \mathbb{N}$ and a Turing machine $M$ that runs in polynomial time (called the \emph{verifier} or \emph{witness} for $L$) such that for every instance $x \in \{ 0 ,1 \}^*$
\[ x \in L \iff \exists u \in \{ 0,1 \}^{p(|x|)} \hspace{2mm} \text{such that} \hspace{2mm} M(x,u)=1. \]
If $x \in L$ and $u \in \{ 0 ,1 \}^{p(|x|)}$ and $M(x,u)=1$, we call $u$ a \emph{certificate} for $x$.
\end{definition}
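To make the definition concrete, here is a minimal polynomial-time verifier in the spirit of the definition above, for the classical \textbf{NP} problem \textsc{Subset-Sum}. This is a hypothetical illustration, not a problem treated in this paper: the instance is a list of integers together with a target, and the certificate is a 0/1 vector selecting a subset.

```python
# Illustrative certificate verification for Subset-Sum, a classical
# NP problem (hypothetical example, not from the paper).
# Instance: a list of integers plus a target.  Certificate: a 0/1
# vector selecting a subset.  The verifier runs in polynomial time
# in the length of the instance.

def verify(instance, certificate):
    numbers, target = instance
    if len(certificate) != len(numbers):
        return False
    chosen = sum(n for n, bit in zip(numbers, certificate) if bit == 1)
    return chosen == target

# A yes-instance together with a valid certificate:
instance = ([3, 7, 1, 8, 4], 12)   # the subset {3, 1, 8} sums to 12
certificate = [1, 0, 1, 1, 0]
print(verify(instance, certificate))  # True
```

Finding a valid certificate may require exponential search, but checking a proposed one is fast; this asymmetry is exactly what membership in \textbf{NP} captures.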
A decision problem is called \textbf{NP}-\emph{hard} if it is at least as hard as any other problem in \textbf{NP}. More specifically, every problem in \textbf{NP} is \emph{Karp-reducible} to any \textbf{NP}-hard problem. (See Page 42 of \cite{arora2009computational} for a definition of Karp-reducibility.) In particular, if any \textbf{NP}-hard problem is solvable in polynomial time, then \textbf{P} $=$ \textbf{NP}.
Now instead of restricting our attention to decision problems, we consider the computational complexity of more general problems. Recall that a \emph{problem} $P$ is just a function $P \colon \{ 0 ,1 \}^\ast \rightarrow \{ 0 ,1 \}^\ast$. We say that $P$ is in \textbf{FNP} if there is a deterministic polynomial time verifier that, given an arbitrary input pair $(x, y)$ where
$x,y \in \{ 0 ,1 \}^\ast$, determines whether $P(x) = y$.
\subsection{Thurston norm}
\label{Sec:ThurstonNorm}
Let $M$ be any compact orientable 3-manifold. Thurston \cite{thurston1986norm} defined a semi-norm on the second homology group $H_2(M , \partial M ; \mathbb{R})$. This norm generalises the notion of knot genus, and for any homology class it measures the minimum `complexity' over all properly embedded orientable surfaces representing that homology class. More precisely, for any properly embedded connected orientable surface $S$, define
\[\chi_-(S): = \max \{ 0 , -\chi(S) \}. \]
If $S$ has multiple components, define $\chi_-(S)$ as the sum of the corresponding values for the components of $S$. Now for any integral homology class $a \in H_2(M , \partial M ; \mathbb{R})$ define the \emph{Thurston norm} of $a$, $x(a)$, as
\[ x(a) = \min \{ \chi_-(S) \hspace{1mm} | \hspace{1mm} [S]=a, \hspace{3mm} S \text{ is compact, oriented and properly embedded} \}. \]
This defines the norm for integral homology classes. One can extend this linearly to rational homology classes, and then extend it continuously to all real homology classes.
Consider the special case that $K$ is a knot of genus $g$ in $S^3$, and $M: = S^3 - N^\circ(K)$, where $N(K)$ is a tubular neighbourhood of $K$. The second homology group $H_2(M , \partial M; \mathbb{R})$ is isomorphic to $\mathbb{R}$ and the Thurston norm of a generator for the integral lattice
\[ H_2(M , \partial M ; \mathbb{Z}) \subset H_2(M , \partial M; \mathbb{R}) \]
is equal to $2g-1$ if $g \geq 1$, and $0$ otherwise.
In general this might be a semi-norm as opposed to a norm, since one might be able to represent some non-trivial homology classes by a collection of spheres, discs, tori or annuli. However, if $W$ denotes the subspace of $H_2(M , \partial M ; \mathbb{R})$ with trivial Thurston norm, then one gets an induced norm on the quotient vector space $H_2(M , \partial M ; \mathbb{R})/W$.
Thurston proved that the unit ball of this norm is a convex polyhedron. Given any norm on a vector space $V$, there is a corresponding dual norm on the dual vector space, that is the space of functionals on $V$. In our case, the dual space to $H_2(M , \partial M ; \mathbb{R})$ is $H^2(M , \partial M ; \mathbb{R})$. Thurston showed that the unit ball for the corresponding dual norm $x^\ast$ is a convex polyhedron with \emph{integral} vertices. For a thorough exposition of Thurston norm and examples see \cite{thurston1986norm, candel2003foliations}.
Finally, it is possible to define a norm $x_s$ using singular surfaces and allowing real coefficients. Thus, if $S_1 , \cdots, S_k$ are oriented singular surfaces in a 3-manifold $M$, and if $S = \sum a_i S_i$ is a real linear combination, representing a homology class $a$, we may define
\[ \chi_-(S) = \sum |a_i| \chi_-(S_i). \]
The singular norm $x_s$ is defined as
\[ x_s(a) = \inf \{ \chi_-(S) \hspace{1mm} | \hspace{1mm} [S]=a \}. \]
Gabai \cite{gabai1983foliations} proved the equivalence of the two norms $x$ and $x_s$, previously conjectured by Thurston \cite{thurston1986norm}.
\begin{thm}[Gabai]
Let $M$ be a compact oriented 3-manifold. Then on $H_2(M)$ or $H_2(M, \partial M)$, $x_s = x$ where $x_s$ denotes the norm on homology based on singular surfaces.
\label{singular-norm}
\end{thm}
\subsection{Bareiss algorithm for solving linear equations}
\label{SubSec:Bareiss}
Gaussian elimination is a useful method for solving systems of linear equations with integral coefficients, computing determinants and calculating echelon forms. The algorithm uses $O(n^3)$ arithmetic operations, where $n$ is the maximum of the number of variables and the number of equations. One caveat is that the intermediate values of the entries can grow exponentially in bit-length during the process. An algorithm due to Bareiss resolves this issue. If the maximum number of bits for entries of the input is $L$, then the running time of the algorithm is at most a polynomial function of $n+L$. Moreover, no intermediate value (including the final answer) needs more than $O(n \log(n) + n L)$ bits \cite{bareiss1968sylvester}.
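The following is a minimal sketch of Bareiss' fraction-free elimination, specialised to computing the determinant of an integer matrix. Every division performed below is exact, which is what keeps all intermediate entries integral and of controlled bit-length. (This is an illustration of the method, not the exact variant used in the algorithms of this paper.)

```python
# Bareiss' fraction-free elimination for the determinant of a square
# integer matrix.  Each update divides by the previous pivot; this
# division is always exact, so no rational arithmetic is needed.

def bareiss_det(a):
    m = [row[:] for row in a]          # work on a copy
    n = len(m)
    prev = 1                           # pivot of the previous step
    sign = 1
    for k in range(n - 1):
        if m[k][k] == 0:               # find a nonzero pivot by row swap
            for r in range(k + 1, n):
                if m[r][k] != 0:
                    m[k], m[r] = m[r], m[k]
                    sign = -sign
                    break
            else:
                return 0               # whole column is zero
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                m[i][j] = (m[i][j] * m[k][k] - m[i][k] * m[k][j]) // prev
        prev = m[k][k]
    return sign * m[n - 1][n - 1]

print(bareiss_det([[2, 1, 1], [1, 3, 2], [1, 0, 0]]))  # -1
```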
\subsection{Mixed integer programming}
\label{SubSec:MixedIntegerProg}
This refers to the following decision problem. Let $n \geq 0$ and $m > 0$ be integers, and let $k$ be a positive integer satisfying $k \geq n$. Let $A$ be an $m \times k$ matrix with integer coefficients, and let $b \in \mathbb{Z}^m$. Then the problem asks whether there is an $x = (x_1, \dots, x_k)^T \in \mathbb{R}^k$ such that
\[ Ax \leq b\]
\[ x_i \in \mathbb{Z} \textrm{ for all } i \textrm{ satisfying } 1 \leq i \leq n.\]
The size of the input is given by $k + m + C_{\mathrm{dig}}(A) + C_{\mathrm{dig}}(b)$.
Lenstra \cite{lenstra1983integer} provided an algorithm to solve this problem that runs in polynomial time for any fixed value of $n$.
It is also shown in \cite{lenstra1983integer}, using estimates of von zur Gathen and Sieveking \cite{VzGSieveking}, that if the above instance of Mixed Integer Programming does have a positive solution $x$, then it has one for which $C_{\mathrm{dig}}(x)$ is bounded above by a polynomial function of the size of the input.
Figure \ref{linear programming} shows an example of Mixed Integer Programming where
\begin{enumerate}
\item the shaded region is the \emph{feasible region} namely $A x \leq b$ where $x \in \mathbb{R}^2$;
\item the dots indicate integral points inside the feasible region.
\end{enumerate}
In this example, there is at least one integral point inside the feasible region and hence the answer is yes.
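For illustration only, integral feasibility of $Ax \leq b$ can be checked naively by enumerating integer points in a bounding box. This brute force is exponential in general; the point of Lenstra's result is that polynomial time is achievable for any fixed number of integer variables.

```python
# Naive integral-feasibility check for A x <= b over a given box of
# candidate integer points -- purely illustrative; Lenstra's algorithm
# achieves polynomial time for any fixed number of integer variables.
from itertools import product

def has_integral_point(A, b, box):
    """box = [(lo_1, hi_1), ...] bounds each coordinate of x."""
    for x in product(*(range(lo, hi + 1) for lo, hi in box)):
        if all(sum(a_ij * x_j for a_ij, x_j in zip(row, x)) <= b_i
               for row, b_i in zip(A, b)):
            return True
    return False

# Feasible region: x >= 1, y >= 1, x + y <= 3 (a triangle).
A = [[-1, 0], [0, -1], [1, 1]]
b = [-1, -1, 3]
print(has_integral_point(A, b, [(0, 3), (0, 3)]))  # True, e.g. (1, 1)
```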
\begin{figure}
\centering
\includegraphics[width= 2 in]{linear-programming}
\caption{Integer Linear Programming}
\label{linear programming}
\end{figure}
\subsection{Polyhedra and their duals}
Our exposition is from \cite{brondsted2012introduction} and we refer the reader to that for more details and proofs. A set of points $\{ y_0 , y_1 , \cdots, y_m \} \subset \mathbb{R}^d$ is \emph{affinely independent} if the vectors $y_1 -y_0 , \cdots, y_m- y_0$ are linearly independent. A \emph{polytope} $P$ is the convex hull of a non-empty finite set $\{ x_1 , \cdots, x_n \}$ in $\mathbb{R}^d$. We say $P$ is $k$-dimensional if some $(k+1)$-subfamily of $\{ x_1 , \cdots, x_n \}$ is affinely independent, and $k$ is maximal with respect to this property. A convex subset $F$ of $P$ is called a \emph{face} of $P$ if for any two distinct points $y,z \in P$ such that $]y,z[ \hspace{1mm} \cap F$ is non-empty, we have $[y,z] \subset F$. Here $]y,z[$ and $[y,z]$ denote the open and closed segments connecting $y$ and $z$ respectively. A face $F$ is \emph{proper} if $F \neq \emptyset, P$. A point $x \in P$ is a \emph{vertex} if $\{ x \}$ is a face. A \emph{facet} $F$ of $P$ is a face of $P$ with $\dim(F) = \dim(P)-1$. Every face $F$ of $P$ is itself a polytope, and coincides with the convex hull of the set of vertices of $P$ that lie in $F$. Every proper face $F$ of $P$ is the intersection of facets of $P$ containing $F$ (see Theorem 10.4 of \cite{brondsted2012introduction}).
The intersection of any family of faces of $P$ is again a face. For any family $\mathcal{A}$ of faces of $P$, there is a largest face contained in all members of $\mathcal{A}$ denoted by $\inf \mathcal{A}$, and there is a smallest face that contains all members of $\mathcal{A}$ denoted by $\sup \mathcal{A}$. Denote the set of faces of $P$ by $\mathcal{F}(P)$ and let $\subset$ denote inclusion. Therefore, the partially ordered set $(\mathcal{F}(P), \subset)$ is a \emph{complete lattice} with the lattice operations $\inf \mathcal{A}$ and $\sup \mathcal{A}$. The pair $(\mathcal{F}(P), \subset)$ is called the \emph{face-lattice} of $P$.
Let $P$ be a $d$-dimensional polytope in $\mathbb{R}^d$ containing the origin. Define the \emph{dual} of $P$ as
\[ P^\ast:= \{ y \in \mathbb{R}^d \hspace{1mm}|\hspace{1mm} \sup_{x \in P} \langle x, y \rangle \leq 1 \}. \]
For any face $F$ of $P$, define the dual face $F^{\triangle}$ as
\[ F^\triangle:= \{ y \in P^\ast \hspace{1mm}|\hspace{1mm} \sup_{x \in F} \langle x, y \rangle = 1 \}.\]
We have $(P^\ast)^\ast = P$, and $(F^\triangle)^\triangle = F$. There is a one-to-one correspondence between faces $F $ of $P$ and faces $F^\triangle$ of $P^\ast$, and
\[ \dim(F) + \dim(F^\triangle) =d-1. \]
Moreover, the mapping $F \mapsto F^\triangle$ defines an \emph{anti-isomorphism} of face-lattices (see Corollary 6.8 of \cite{brondsted2012introduction})
\[ (\mathcal{F}(P), \subset) \rightarrow (\mathcal{F}(P^\ast), \subset). \]
A subset $Q$ of $\mathbb{R}^d$ is called a \emph{polyhedral set} if $Q$ is the intersection of a finite number of closed half-spaces or $Q = \mathbb{R}^d$. Polytopes are precisely the non-empty bounded polyhedral sets.
\subsection{Pseudo-manifolds, orientability and degree of mappings }
At some point in this article, we need to talk about the degree of a mapping between two topological spaces that a priori are not manifolds. They are similar to manifolds, but with particular types of singularities. The following discussion is from \cite{birman1980seifert}. `A \emph{closed pseudo-manifold} is defined as follows:
PM1) It is a \emph{pure}, finite $n$-dimensional simplicial complex ($n \geq 1$); by pure we mean that each $k$-simplex is a face of at least one $n$-simplex (\emph{purity condition}).
PM2) Each $(n - 1)$-simplex is a face of exactly two $n$-simplices (\emph{non-branching condition}).
PM3) Every two $n$-simplexes can be connected by means of a series of alternating $n$- and $(n - 1)$-simplexes, each of which is incident with its successor (\emph{connectivity condition}).
A closed pseudo-manifold is said to be \emph{orientable} if each of its $n$-simplices can be oriented coherently, that is, oriented so that opposite orientations are induced in each $(n - 1)$-simplex by the two adjoining $n$-simplices.
A closed $n$-chain on an orientable and coherently oriented closed pseudo-manifold is completely determined whenever one knows how often a single, arbitrarily chosen, oriented $n$-simplex appears in the chain. This is so
because each of the $n$-simplices adjoining this simplex must appear equally often, from (PM3). One can reach each $n$-simplex by moving successively through adjoining simplices; hence all $n$-simplices must appear equally often. Consequently the $n$-th homology group is the free cyclic group. In other words, the $n$-th Betti number is equal to 1. A basis for this group is one of the two chains which arise by virtue of the coherent orientation of the pseudo-manifold.' \\
A choice for one of these chains is an \emph{orientation} of the pseudo-manifold and its homology class is then called the \emph{fundamental class}.
Let $D$ and $R$ be oriented pseudo-manifolds of dimension $n$, and $f \colon D \longrightarrow R$ be a continuous map. The $n$-th homology groups of $D$ and $R$ are both isomorphic to a free cyclic group. Denote by $[D]$ and $[R]$ the fundamental homology classes of $D$ and $R$ respectively. Then there is an integral number $d$ such that $f_\ast([D])$ is homologous to $d [R]$. This number $d$ is defined as the \emph{degree} of $f$. Similar to the case of manifolds, one can compute the degree by counting signed preimages of a generic point, where the sign depends on whether the map is locally orientation-preserving or not.
\subsection{Normal surfaces}
\label{Sec:Normal}
The theory of normal surfaces was introduced by Kneser in \cite{kneser1929geschlossene} where he proved a prime decomposition theorem for compact 3-manifolds, and was extended by Haken in his work on algorithmic recognition of the unknot \cite{haken1961theorie}. Let $\mathcal{T}$ be a triangulation of a compact 3-manifold $M$. A surface $S$ properly embedded in $M$ is said to be \emph{normal} if it intersects each tetrahedron in a collection of disjoint triangles and squares, as shown in Figure \ref{Fig:Normal}.
\begin{figure}[h]
\centering
\includegraphics[width= 3 in]{normsur3.pdf}
\caption{A triangle and a square}
\label{Fig:Normal}
\end{figure}
In each tetrahedron, there are 4 types of triangle and 3 types of square. Thus, in total, there are $7t$ types of triangles and squares in $\mathcal{T}$, where $t$ is the number of tetrahedra in $\mathcal{T}$. A normal surface $S$ determines a list of $7t$ non-negative integers, which count the number of triangles and squares of each type in $S$. This list is called the \emph{vector} for $S$ and is denoted by $(S)$.
The normal surface $S$ is said to be \emph{fundamental} if $(S)$ cannot be written as $(S_1) + (S_2)$ for non-empty properly embedded normal surfaces $S_1$ and $S_2$.
It is said to be a \emph{vertex surface} if no non-zero multiple of $(S)$ can be written as $(S_1) + (S_2)$ for non-empty properly embedded normal surfaces $S_1$ and $S_2$. This has an alternative interpretation in terms of the normal solution space, as follows.
The \emph{normal solution space} $\mathcal{N}(\mathcal{T})$ is a subset of $\mathbb{R}^{7t}$. The co-ordinates of $\mathbb{R}^{7t}$ correspond to the $7t$ types of triangles and squares in $\mathcal{T}$. The subset $\mathcal{N}(\mathcal{T})$ consists of those points in $\mathbb{R}^{7t}$ where every co-ordinate is non-negative and that satisfy the normal \emph{matching equations} and \emph{compatibility conditions}. There is one matching equation for each type of normal arc in each face of $\mathcal{T}$ not lying in $\partial M$. The equation asserts that in each of the two tetrahedra adjacent to that face, the total number of triangles and squares that intersect the given face in the given arc type are equal. The compatibility conditions assert that for different types of square within a tetrahedron, at least one of the corresponding co-ordinates is zero. For any properly embedded normal surface $S$, its vector $(S)$ lies in $\mathcal{N}(\mathcal{T})$. Indeed, the set of points in $\mathcal{N}(\mathcal{T})$ that are a vector of a properly embedded normal surface is precisely $\mathcal{N}(\mathcal{T}) \cap \mathbb{Z}^{7t}$.
The \emph{projective solution space} $\mathcal{P}(\mathcal{T})$ is the intersection
$$\mathcal{N}(\mathcal{T}) \cap\{ (x_1, \cdots, x_{7t}) : x_1 + \cdots + x_{7t} = 1 \}.$$
It is shown in \cite{matveev2007algorithmic} that $\mathcal{P}(\mathcal{T})$ is a union of convex polyhedra. A normal surface $S$ is \emph{carried} by a face of $\mathcal{P}(\mathcal{T})$ if its vector $(S)$ lies on a ray through the origin of $\mathbb{R}^{7t}$ that goes through that face. When a normal surface $S$ is carried by a face $C$, and $(S) = (S_1) + (S_2)$ for normal surfaces $S_1$ and $S_2$, then $S_1$ and $S_2$ are also carried by $C$. The reason for this is that $C$ is the intersection between $\mathcal{P}(\mathcal{T})$ and some hyperplanes of the form $\{ x_i = 0 \}$. Since $(S) = (S_1) + (S_2)$, then $(S_1)$ and $(S_2)$ also lie in these hyperplanes and hence also are carried by $C$.
A normal surface $S$ is a vertex surface exactly when some non-zero multiple of $(S)$ is a vertex of one of the polyhedra of $\mathcal{P}(\mathcal{T})$. Using this observation, it was shown by Hass and Lagarias (Lemma 3.2 in \cite{HassLagarias}) that each co-ordinate of the vector of a vertex normal surface is at most $2^{7t-1}$. Hence, the number of points of intersection between a vertex normal surface and the 1-skeleton of $\mathcal{T}$ is at most $28t \cdot 2^{7t-1}$. They also showed that each co-ordinate of a fundamental normal surface in $\mathcal{T}$ has modulus at most $t \cdot 2^{7t+2}$.
A common measure of complexity for a normal surface $S$ is its \emph{weight} $w(S)$ which is its number of intersections with the 1-skeleton of $\mathcal{T}$.
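As a small sketch of the compatibility conditions described above, the following checks a candidate vector in $\mathbb{R}^{7t}$, under the assumed (not canonical) convention that each tetrahedron contributes 4 triangle co-ordinates followed by 3 square co-ordinates:

```python
# Check the quadrilateral compatibility condition on a candidate
# normal-surface vector: within each tetrahedron, at most one of the
# 3 square types may have a nonzero co-ordinate.  The grouping of the
# 7t co-ordinates (4 triangles, then 3 squares, per tetrahedron) is an
# assumed convention for this sketch.

def compatible(vec):
    assert len(vec) % 7 == 0, "expected 7 co-ordinates per tetrahedron"
    for k in range(0, len(vec), 7):
        squares = vec[k + 4 : k + 7]
        if sum(1 for q in squares if q != 0) > 1:
            return False
    return True

# Two tetrahedra: the first carries one square type, the second none.
print(compatible([1, 0, 2, 0, 3, 0, 0,   1, 1, 0, 0, 0, 0, 0]))  # True
# Two distinct square types in one tetrahedron: not realisable by an
# embedded surface.
print(compatible([0, 0, 0, 0, 1, 1, 0]))                          # False
```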
\section{Main Theorem}
In this section, we give a proof of the main theorem, assuming various ingredients that will be proved in later sections.
\theoremstyle{theorem}
\newtheorem*{main}{Theorem \ref{main:boundary}}
\begin{main}
Let $M$ be a compact, orientable 3-manifold given by a fixed diagram $D$ for $\Gamma \cup L$, where $M$ is obtained from $S^3$ by removing an open regular neighbourhood of the graph $\Gamma$ and performing surgery on the framed link $L$. The problem \textsc{Determining knot genus in the fixed manifold $M$} lies in \textbf{NP}.
\end{main}
\begin{proof}
We are given a diagram for $K \cup \Gamma \cup L$, which contains $D$ as a sub-diagram, where $K$ is our given knot. Let $c$ be the total crossing number of this diagram of $K$. Recall that this is the number of crossings between $K$ and itself, and between $K$ and $\Gamma \cup L$. Set $X:= M - N^{\circ}(K)$ to be the exterior of a tubular neighbourhood of $K$ in $M$.\\
\textbf{Step 1}: By Theorem \ref{basis}, we can construct a triangulation $\mathcal{T}$ of $X = M - N^\circ(K)$ and simplicial 1-cocycles $\phi_1 , \cdots , \phi_b$ such that
\begin{enumerate}
\item the number of tetrahedra $|\mathcal{T}|$ of $\mathcal{T}$ is at most a linear function of $c$;
\item the cocycles $\phi_1, \cdots , \phi_b$ form an integral basis for $H^1(X ; \mathbb{Z})$;
\item the complexity $\sum_i C_{\mathrm{nat}}(\phi_i)$ is at most a polynomial function of $c$.
\end{enumerate}
Moreover, the construction of $\mathcal{T}$ and $\phi_1, \cdots, \phi_b$ can be done in polynomial time in $c$. \\
\textbf{Step 2:} We check whether $K$ is homologically trivial in $M$, as otherwise there is no Seifert surface for $K$ and the genus of $K$ is $\infty$. We do this by attaching a solid torus to $\mathcal{T}$ to form a triangulation of $M$ in which $K$ is simplicial and then determining whether $K$ is the boundary of a simplicial 2-chain. This can be done in time that is polynomial in $c$, using the Bareiss algorithm for solving linear equations.
If $K$ is homologically trivial, it has a \emph{longitude}, which is defined as the boundary of any Seifert surface $S$ for $K$ in $X$. The longitude is unique up to sign, for the following reason. If $\ell'$ is any other longitude, the intersection number $[\ell'].[\ell]$ on $\partial X$ equals the intersection number $[\ell'].[S]$ in $X$, but this is zero because $\ell'$ is homologically trivial in $X$.
Again using the Bareiss algorithm, the longitude $\ell$ on $\partial N(K)$ can be determined in time that is polynomial in $c$ and, when it is represented as a simplicial 1-cycle, its complexity $C_{\mathrm{dig}}(\ell)$ is at most a polynomial function of $c$.
\\
\textbf{Step 3}: Note the first Betti number of $X$ is bounded above by the constant $B = b_1(M)+1$, since we are drilling a knot from $M$. Therefore, we can use Theorem \ref{thurston ball} to compute the unit ball of the Thurston norm on $H^1(X ; \mathbb{R})$, using a non-deterministic Turing machine. Here $H^1(X ; \mathbb{R})$ has been identified with $H_2(X , \partial X ; \mathbb{R})$ using Poincar\'{e} duality.
Note that the size of the input, that is the sum of the number of tetrahedra and $\sum_i C_{\mathrm{nat}}(\phi_i)$, is at most a polynomial function of $c$. Therefore, this can be done in time that is at most polynomially large in $c$. Hence, we can construct the following:
\begin{enumerate}
\item A basis $\{ w_1 , \cdots , w_r \}$ for the subspace $W$ of $H^1(X ; \mathbb{R})$ with trivial Thurston norm. Each $w_i$ is an integral cocycle and is written as a linear combination of the given cocycles $\{ \phi_1 , \cdots , \phi_b \}$. Denote by $p$ the projection map from $H^1(X ; \mathbb{R})$ to $H^1(X ; \mathbb{R})/W$.
\item A set of points $V \subset H^1(X ; \mathbb{Q})$ such that $p(V)$ is the set of vertices of the unit ball of $H^1(X ; \mathbb{R})/W$, together with a list $\mathcal{F}$ of subsets of $V$. Each element of $V$ is written in terms of the basis $\{ \phi_1 , \cdots , \phi_b \}$. We think of $\mathcal{F}$ as the list of faces of the unit ball for $H^1(X ; \mathbb{R})/W$, in the sense that for $F \in \mathcal{F}$ the set
\[ \{ p(v) \hspace{2mm} | \hspace{2mm} v \in F \}, \]
forms the set of vertices of some face of the unit ball for $H^1(X ; \mathbb{R})/W$. Moreover, this covers all the faces as we go over all the elements $F$ of $\mathcal{F}$.
\end{enumerate}
Because the problem \textsc{Thurston norm ball for $b_1 \leq B$} lies in \textbf{FNP}, the number of digits of the output is at most a polynomial function of the complexity of the input. Hence, $\sum_i C_{\mathrm{dig}}(w_i)$, $\sum_{v \in V} C_{\mathrm{dig}}(v)$ and $|\mathcal{F}|$ are all bounded above by polynomial functions of $c$.\\
\textbf{Step 4}:
There is an identification between $H^1(X; \mathbb{Z})$ and $H_2(X, \partial X ; \mathbb{Z})$ using Poincar\'{e} duality, and there is a boundary map
\[ H_2(X , \partial X; \mathbb{Z}) \longrightarrow H_1(\partial X ; \mathbb{Z}). \]
This in turn induces a boundary map
\[ \partial \colon H^1(X ; \mathbb{Z}) \longrightarrow H_1(\partial X ; \mathbb{Z}). \]
For each facet $F = \{ u_1, \cdots, u_s \} $ of the unit ball of the Thurston norm on $H^1(X ; \mathbb{R} ) / W$, denote by $\text{Cone}(F)$ the cone over the face $F$:
\[ \text{Cone}(F) : = \{ r_1 u_1 + \cdots + r_s u_s \hspace{1mm}| \hspace{1mm} r_1, \cdots, r_s \in \mathbb{R}_{\geq 0} \}. \]
For each facet $F$, consider the following minimum:
\[ m_F: = \min \hspace{1mm} \{ x(h) \hspace{2mm} | \hspace{2mm} h \in (W + \text{Cone}(F)) \cap H^1(X; \mathbb{Z}) \hspace{2mm} \text{and} \hspace{2mm} \partial (h) = \pm [\ell] \} . \]
This integer $m_F$ is given to us non-deterministically. We will show below that the number of digits of $m_F$ is bounded above by a polynomial function of $c$. We verify that $m_F$ is indeed the above minimum, in time that is at most polynomially large in $c$, by the following argument.
Let $h$ be any element of $W + \text{Cone}(F)$. We may write
\begin{equation}
h = \beta_1 w_1 + \cdots + \beta_r w_r + \alpha_1 u_1 + \cdots + \alpha_s u_s,
\end{equation}
for some $\beta_1, \cdots, \beta_r \in \mathbb{R}$ and $\alpha_1, \cdots, \alpha_s \in \mathbb{R}_{\geq 0}$.
We also require that $h$ lies in $H^1(X; \mathbb{Z})$ which is the condition
\begin{equation}
h = \gamma_1 \phi_1 + \cdots + \gamma_b \phi_b,
\end{equation}
for some $\gamma_1, \cdots, \gamma_b \in \mathbb{Z}$. The condition $\partial (h) = \pm [\ell]$ translates into
\begin{equation}
\beta_1 \partial(w_1) + \cdots + \beta_r \partial (w_r) + \alpha_1 \partial(u_1)+ \cdots + \alpha_s \partial(u_s) = \pm [\ell].
\end{equation}
Since each of the vertices $\{ u_1 , \cdots , u_s \}$ has Thurston norm equal to one and they all lie on the same face, we have
\[ x(h) = \alpha_1 + \cdots + \alpha_s. \]
Therefore, we are looking to find the minimum $m_F$ of the linear functional $\alpha_1 + \cdots + \alpha_s$ under the conditions
i) $\alpha_i \in \mathbb{R}_{\geq 0}$, $\beta_i \in \mathbb{R}$, $\gamma_i \in \mathbb{Z}$;
ii) $(1)=(2)$ (by which we mean setting the right-hand sides of the two equations equal) and $(3)$.
Since $K$ is homologically trivial, there is a solution to this system of constraints. Hence, as explained in Section \ref{SubSec:MixedIntegerProg}, there is a solution where $\sum_i C_{\mathrm{dig}}(\alpha_i)$ is bounded above by a polynomial function of the number of real variables, the number of equations and the number of bits encoding the coefficients of the linear constraints. Therefore, the number of digits of $m_F$ is bounded above by a polynomial function of $c$.
Our certificate includes the value of $m_F$. Thus, we must verify that the condition
\[ x(h) = m_F \]
is satisfied for some $h$, whereas the
condition
\[x(h) \leq m_F - 1\]
has no solution. (Note that the Thurston norm takes only integer values on elements of $H^1(X; \mathbb{Z})$ and so we do not need to consider the possibility that $x(h)$ might lie strictly between $m_F-1$ and $m_F$.) These are instances of Mixed Integer Programming. Since the number $b$ of integer variables is fixed, Lenstra's algorithm provides a solution in polynomial time, as a function of the number of real variables, the number of equations and the number of bits encoding the coefficients of the linear constraints. By hypothesis, these are at most polynomially large in $c$. See Figure \ref{knot genus} for an example, with the following properties.
\begin{enumerate}
\item The octahedron is the unit ball of the Thurston norm ($W = \{0\}$).
\item The affine plane $P$ is the location of points with boundary equal to $[\ell]$. In this example, $P$ is disjoint from the unit ball.
\item The shaded region on $P$ is the intersection of $P$ with the cone over the shaded face $F$ of the unit ball. Here the shaded face $F$ is a triangle, and its projection to $P$ is a degenerate (non-compact) triangle since one edge of $F$ happened to be parallel to $P$.
\item The dots on $P$ indicate the integral points on $P$.
\item The face $F$ determines the equation for the linear functional $x(h)$. The Mixed Integer Programming problem asks whether there is an integral point $h$ on $P$ that satisfies $x(h) \leq m_F$ (respectively $m_F -1$) and the constraints i) and ii) above.
\end{enumerate}
\begin{figure}
\labellist
\pinlabel $P$ at 200 50
\pinlabel $F$ at 100 100
\endlabellist
\centering
\includegraphics[width=3 in]{knot-genus-np}
\caption{Finding the minimum genus between Seifert surfaces coming from a single face of the Thurston norm ball}
\label{knot genus}
\end{figure}
\textbf{Step 5}: The minimum of $m_F$ over all facets $F$ of the unit ball of the Thurston norm for $H^1(X ; \mathbb{R})/W$ is equal to the Thurston complexity of $K$. Moreover, the genus can be easily read from the Thurston complexity. Therefore, we can check if this minimum is equal to the given integer $g$ or not. On the other hand, the number of facets is at most polynomially large in $c$ by the combination of Theorems \ref{basis} and \ref{number of faces}. Hence, the algorithm runs in time that is at most polynomially large in $c$.\\
This finishes the non-deterministic algorithm for finding the genus of a knot in a fixed 3-manifold.
\end{proof}
\section{The number of faces and vertices of the Thurston norm ball }
In this section, we prove Theorem \ref{number of faces}, which provides an upper bound on the number of faces and vertices of the Thurston norm ball. The key to this is the following result, which controls the number of integral points in the dual norm ball.
\begin{thm}
Let $X$ be a compact orientable 3-manifold, and let $m$ be a natural number. Assume that there exist properly immersed surfaces $S_1 , \cdots, S_b$ in $X$ such that their homology classes form a basis for $H_2(X, \partial X; \mathbb{R}) $, and for each $1 \leq i \leq b$ we have $|\chi_-(S_i)| \leq m$. Define the set $\mathcal{A}$ as the set of integral points inside $H^2(X, \partial X; \mathbb{Z}) \otimes \mathbb{Q}$ whose dual norm is at most one. The size of $\mathcal{A}$ is at most $(2m+1)^b$, where $b = b_1(X)$ is the first Betti number of $X$.
\label{lattice points}
\end{thm}
\begin{proof}
Let $\langle \cdot , \cdot \rangle$ be the pairing between cohomology and homology. Define dual elements $e^1 , \cdots , e^b \in H^2(X, \partial X; \mathbb{Z}) \otimes \mathbb{Q}$ as
\[ \langle e^i , [S_j] \rangle = \delta_{ij}, \]
where $1 \leq i, j \leq b$, and $\delta_{ij}$ is the Kronecker function. Every integral point $u \in H^2(X, \partial X; \mathbb{Z}) \otimes \mathbb{Q}$ can be written as
\[ u = \alpha_1 e^1 + \cdots + \alpha_b e^b, \]
where $\alpha_i$ are integers. This is because $u$ being integral means that its evaluation against each element of $H_2(X, \partial X; \mathbb{Z})$ is an integer. In particular,
$\alpha_i = \langle u, [S_i] \rangle$ is an integer. Assume that the dual norm of $u$ is at most one. By definition of the dual norm, for each $1 \leq i \leq b$ we have:
\[ |\langle u , [S_i] \rangle| \leq x([S_i]) = x_s([S_i]), \]
where $x([S_i])$ and $x_s([S_i])$ are the Thurston norm and the singular Thurston norm of $[S_i]$, and the last equality is by Theorem \ref{singular-norm}. Since $|\chi_-(S_i)| \leq m$, we have
\[ x_s([S_i]) \leq m. \]
Combining the two inequalities implies that
\[ |\alpha_i| = |\langle u , [S_i] \rangle| \leq x([S_i]) = x_s([S_i]) \leq m. \]
Since $-m \leq \alpha_i \leq m$ is an integer, there are at most $2m+1$ possibilities for each coordinate of the tuple $(\alpha_1, \cdots , \alpha_b)$. Therefore the number of possibilities for $u$ is at most $(2m+1)^b$.
\end{proof}
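As a toy illustration of the counting step (not part of the proof), one can enumerate the integer tuples $(\alpha_1 , \cdots , \alpha_b)$ with $|\alpha_i| \leq m$ directly; the values of $m$ and $b$ below are arbitrary:

```python
from itertools import product

def bounded_integral_points(m, b):
    """All integer tuples (a_1, ..., a_b) with |a_i| <= m; in the proof,
    the coordinates a_i = <u, [S_i]> of an integral point u of dual norm
    at most one lie in this box."""
    return list(product(range(-m, m + 1), repeat=b))

# the box contains exactly (2m+1)^b points, matching the stated bound
assert len(bounded_integral_points(2, 3)) == (2 * 2 + 1) ** 3
```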
\theoremstyle{theorem}
\newtheorem*{number of faces}{Theorem \ref{number of faces}}
\begin{number of faces}
Let $X$ be a compact orientable 3-manifold, and let $m$ be a natural number. Assume that there exist properly immersed oriented surfaces $S_1 , \cdots, S_b$ in $X$ such that their homology classes form a basis for $H_2(X, \partial X; \mathbb{R}) $, and for each $1 \leq i \leq b$ we have $|\chi_-(S_i)| \leq m$. Denote by $W$ the subspace of $H^1(X ; \mathbb{R})$ with trivial Thurston norm.
The number of facets of the unit ball for the induced Thurston norm on $H^1(X ; \mathbb{R})/W$ is at most $(2m+1)^b$, where $b = b_1(X)$ is the first Betti number of $X$.
Hence, the number of vertices is at most $(2m+1)^{b^2}$ and the number of faces is at most $b(2m+1)^{b^2}$.
\end{number of faces}
\begin{proof}
Note we have identified $H_2(X, \partial X ; \mathbb{Z})$ with $H^1(X ; \mathbb{Z})$ using Poincar\'{e} duality. Facets of the unit ball for $H^1(X; \mathbb{R})/W$ correspond to the vertices of the dual ball. As the vertices of the dual ball are integral and have dual norm equal to one, the number of them is at most $(2m+1)^b$ by Theorem \ref{lattice points}. This proves the first part of the theorem.
Let $d$ be the dimension of $H^1(X; \mathbb{R})/W$; hence $d \leq b$. Every $j$-dimensional face of the unit ball is the intersection of $(d-j)$ facets. Hence, the number of $j$-dimensional faces is at most
\[{(2m+1)^b \choose d-j}.\]
As a result, the total number of faces is at most
\[ { (2m+1)^b \choose 1}+ {(2m+1)^b \choose 2}+ \cdots + {(2m+1)^b \choose b}, \]
which is bounded above by $b (2m+1)^{b^2}$. In particular, the number of vertices is at most $(2m+1)^{b^2}$.
\end{proof}
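The final numerical estimate, bounding the binomial sum by $b(2m+1)^{b^2}$, can be sanity-checked for small parameters; the following sketch uses arbitrary illustrative values of $m$ and $b$:

```python
from math import comb

def face_count_bound_holds(m, b):
    """Check that C(N,1) + ... + C(N,b) <= b * (2m+1)^(b^2),
    where N = (2m+1)^b, as asserted at the end of the proof."""
    N = (2 * m + 1) ** b
    return sum(comb(N, j) for j in range(1, b + 1)) <= b * N ** b

# N ** b equals (2m+1)^(b^2), and each term C(N, j) <= N^j <= N^b
assert all(face_count_bound_holds(m, b)
           for m in range(1, 5) for b in range(1, 5))
```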
\section{A basis for the homology of a knot complement with small Thurston complexity}
Let $\Gamma$ be a graph embedded in $S^3$ and let $L$ be a framed link in the complement of $\Gamma$. Let $M$ be the compact orientable 3-manifold given by removing an open regular neighbourhood of $\Gamma$ and performing surgery along $L$. We are considering a knot $K$ in $M$ given by a diagram for $K \cup \Gamma \cup L$. In this section, we show how to compute bases for $H^1(M)$ and $H^1(M - N^\circ(K))$. From these, we will be able to construct a basis for $H_2(M - N^\circ(K), \partial M \cup \partial N(K))$ with relatively small Thurston complexity.
Our first step is to construct bases for $H^1(S^3 - N^\circ(\Gamma \cup L))$ and $H^1(S^3 - N^\circ(K \cup \Gamma \cup L))$ using the following lemma.
\begin{lem}
\label{Lem:GraphComplement}
Let $G$ be a graph in $S^3$, possibly with multiple edges between vertices and edge loops. Then $H^1(S^3 - N^\circ(G))$ has the following basis. Pick a maximal forest $F$ in $G$. For each edge $e\in G - F$, orient it in some way and let $L_e$ be the knot that starts at the initial vertex of $e$, runs along $e$ and then back to the start of $e$ through an embedded path in $F$. Let $\psi_e$ be the homomorphism $\pi_1(S^3 - N^\circ(G)) \rightarrow \mathbb{Z}$ that sends a loop in $S^3 - N^\circ(G)$ to its linking number with $L_e$. Then $\{ \psi_e : e \textrm{ is an edge of } G - F \}$ forms an integral basis for $H^1(S^3 - N^\circ(G))$.
\end{lem}
\begin{proof}
Note first that $\psi_e$ really is a homomorphism $\pi_1(S^3 - N^\circ(G)) \rightarrow \mathbb{Z}$ since any homotopically trivial loop is sent to zero. These homomorphisms form linearly independent elements of $H^1(S^3 - N^\circ(G))$ because $\psi_e$ evaluates to $1$ on the meridian of $e$, but evaluates to $0$ on the meridian of any other edge of $G - F$. By Alexander duality, $b_1(S^3 - N^\circ(G)) = b_1(G)$, which is equal to the number of edges in $G - F$. So, $\{ \psi_e : e \textrm{ is an edge of } G - F \}$ forms a rational basis for $H^1(S^3 - N^\circ(G); \mathbb{Q})$. In fact, it forms an integral basis for $H^1(S^3 - N^\circ(G))$, for the following reason. Any element $\psi \in H^1(S^3 - N^\circ(G))$ is a linear combination $\sum_e \lambda_e \psi_e$, where each $\lambda_e \in \mathbb{Q}$. Since $\psi$ is integral, its evaluation on the meridian of an edge $e$ of $G - F$ is integral. But this number is $\lambda_e$.
\end{proof}
We now consider how to compute $H^1(M)$. We obtain $M$ from $S^3 - N^\circ(\Gamma \cup L)$ by attaching solid tori, each of which can be viewed as a 2-cell and a 3-cell. So, $H^1(M)$ can be viewed as the subgroup of $H^1(S^3 - N^\circ(\Gamma \cup L))$ consisting of those classes that evaluate to zero on each of the surgery slopes
of the framed link $L$. This subgroup can be expressed in terms of the \emph{generalised linking matrix} of $\Gamma \cup L$.
Recall that the \emph{linking matrix} for the oriented framed link $L = L_1 \cup \cdots \cup L_{|L|}$ is defined as the $|L| \times |L|$ symmetric matrix whose $(i,j)$ entry is equal to $\ell k (L_i , L_j)$ when $i \neq j$, and is equal to the framing of $L_i$ when $i = j$. Here $\ell k (L_i , L_j)$ is the linking number of $L_i$ and $L_j$, where $L_i$ and $L_j$ are considered as disjoint knots in $S^3$. More generally, we define the \emph{generalised linking matrix} $A$ of $\Gamma \cup L$ to have rows given by the components of $L$ and columns given by the components of $L$ and also by the edges of $\Gamma - F$, where $F$ is a maximal forest in $\Gamma$. For a component $L_i$ of $L$ and an edge $e$ of $\Gamma - F$, the corresponding entry of $A$ is $\ell k (L_i , L_e)$, where $L_e$ is the knot defined as in Lemma \ref{Lem:GraphComplement}. Similarly, for each component $L_i$ of $L$ and each component $L_j$ of $L$ with $i \not= j$, the corresponding entry of $A$ is $\ell k (L_i , L_j)$. Finally, when $L_i = L_j$, the corresponding entry of $A$ is the framing of $L_i$.
Let $k$ be the number of columns of the generalised linking matrix. Thus, $k$ is the sum of the number of components of $L$ and the number of edges of $\Gamma - F$. In other words, $k = b_1( \Gamma \cup L)$. To make the notation more uniform, we identify the edges in $\Gamma - F$ by numbers $|L|+1 \leq j \leq k$ and refer to $L_e$ for $e \in \Gamma - F$ by $L_j$.
\begin{lem}
The cohomology group $H^1(M)$ is isomorphic to the subgroup of $H^1( S^3 - N^\circ(\Gamma \cup L)) = \mathbb{Z}^k$ given by the kernel of the generalised linking matrix.
\end{lem}
\begin{proof} We have already identified $H^1( S^3 - N^\circ(\Gamma \cup L))$ by specifying an integral basis for it in Lemma \ref{Lem:GraphComplement}. The subgroup $H^1(M)$ consists of those classes that evaluate to zero on each of the surgery slopes of the framed link $L$. When we write the surgery slope of $L_i$ as a linear combination of the basis elements, the coefficients are precisely the entries of the $i$th row of the generalised linking matrix.
\end{proof}
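To illustrate the lemma computationally, the kernel of a generalised linking matrix can be found by Gaussian elimination over the rationals. The matrix below is an invented example, not derived from any particular diagram:

```python
from fractions import Fraction

def rational_kernel(A):
    """Return a basis for the kernel of an integer matrix,
    via Gaussian elimination over the rationals (RREF)."""
    rows = [[Fraction(x) for x in row] for row in A]
    nrows, ncols = len(rows), len(rows[0])
    pivot_cols, r = [], 0
    for c in range(ncols):
        pr = next((i for i in range(r, nrows) if rows[i][c] != 0), None)
        if pr is None:
            continue
        rows[r], rows[pr] = rows[pr], rows[r]
        piv = rows[r][c]
        rows[r] = [x / piv for x in rows[r]]
        for i in range(nrows):
            if i != r and rows[i][c] != 0:
                f = rows[i][c]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        pivot_cols.append(c)
        r += 1
        if r == nrows:
            break
    basis = []
    for fc in (c for c in range(ncols) if c not in pivot_cols):
        v = [Fraction(0)] * ncols
        v[fc] = Fraction(1)
        for i, pc in enumerate(pivot_cols):
            v[pc] = -rows[i][fc]
        basis.append(v)
    return basis

# Hypothetical generalised linking matrix: one row per component of L,
# columns for L_1, L_2 and a single edge of Gamma - F (invented numbers).
A = [[1, 2, 1],
     [2, 4, 2]]
kernel = rational_kernel(A)
assert len(kernel) == 2  # rank of A is 1, so the kernel has dimension 2
for v in kernel:
    assert all(sum(A[i][j] * v[j] for j in range(3)) == 0 for i in range(2))
```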
We now wish to compute $H^1(M - N^\circ(K))$. This can be viewed as containing $H^1(M)$ as a subgroup, using the long exact sequence of the pair $(M, M - N^\circ(K))$:
$$0 = H^1(M, M - N^\circ(K)) \rightarrow H^1(M) \rightarrow H^1(M - N^\circ(K)) \rightarrow H^2(M, M - N^\circ(K)) \rightarrow H^2(M).$$
Now, using excision and Poincar\'e duality,
$$H^2(M, M - N^\circ(K)) \cong H^2(N(K), \partial N(K)) \cong H_1(N(K))$$
and
$H^2(M) \cong H_1(M,\partial M)$. So, under our assumption that $K$ is homologically trivial, the map $H^2(M, M - N^\circ(K)) \rightarrow H^2(M)$ is the trivial map. Therefore, $H^1(M - N^\circ(K))$ can be viewed as adding on a $\mathbb{Z}$ summand to $H^1(M)$. This summand is isomorphic to $H^2(M, M - N^\circ(K))$.
We now wish to construct an integral basis for $H^1(M - N^\circ(K))$. Such a basis can be found by starting with an integral basis for $H^1(M)$ and taking its image in $H^1(M - N^\circ(K))$, and then adding one more element. This extra element must map to a generator for $H^2(M, M - N^\circ(K))$. We will build explicit cocycles representing this basis.
\theoremstyle{theorem}
\newtheorem*{basis}{Theorem \ref{basis}}
\begin{basis}
Let $M$ be a compact orientable 3-manifold given by removing an open regular neighbourhood of a graph $\Gamma$ in $S^3$ and performing integral surgery on a framed link $L$ in the complement of $\Gamma$. Let $D$ be a fixed diagram for $\Gamma \cup L$ where the surgery slopes on $L$ coincide with the diagrammatic framing. Let $K$ be a homologically trivial knot in $M$, given by a diagram of $K \cup \Gamma \cup L$ that contains $D$ as a sub-diagram. Let $c$ be the total crossing number of $K$. Set $X = M - N^\circ(K)$ as the exterior of $K$ in $M $. There is an algorithm that builds a triangulation of $X$ with $O(c)$ tetrahedra, together with simplicial $1$-cocycles $\phi_1 , \cdots , \phi_b$ that form an integral basis for $H^1(X ; \mathbb{Z})$ with $\sum_i C_{\mathrm{nat}}(\phi_i)$ at most $O(c^2)$. The algorithm runs in time polynomial in $c$. All the above implicit constants depend on the manifold $M$ and not the knot $K$.
\end{basis}
\begin{proof}
\textbf{Step 1:} Building a triangulation of $S^3 - N^\circ(K \cup \Gamma \cup L)$. \\
We view $S^3$ as the union of $\mathbb{R}^3$ and a point at infinity. We will arrange for $K \cup \Gamma \cup L$ to sit inside $\mathbb{R}^3$ as specified by the diagram. Thus, the vertical projection map $\mathbb{R}^3 \rightarrow \mathbb{R}^2$ onto the first two co-ordinates will project $K \cup \Gamma \cup L$ onto the underlying planar graph specified by the diagram. Our triangulation will have the following properties:
\begin{enumerate}
\item The number of tetrahedra is bounded above by a linear function of $c$.
\item Each edge of the triangulation is straight in $\mathbb{R}^3$.
\item The meridian of each component of $K \cup L$ and of each edge of $\Gamma$ is simplicial, as is the surgery slope of each component of $L$.
\end{enumerate}
There are many possible ways to build this triangulation. We will follow the recipe given by Coward and the first author in Section 4 of \cite{CowardLackenby}. The triangulation provided by Theorem 4.3 of \cite{CowardLackenby} has all the required properties when $\Gamma = \emptyset$. We only need to generalise to the situation where $\Gamma \not= \emptyset$ and show that the triangulation can be constructed algorithmically in polynomial time, as a function of $c$. We briefly review the steps in Section 4 of \cite{CowardLackenby}.
Step 1 is to embed the underlying planar graph $G$ of the diagram into $\mathbb{R}^2$ as a union of straight arcs, as follows. We first modify $\Gamma$ by expanding each vertex of $\Gamma$ with valence more than $3$ into a tree, so that each of the new vertices has valence exactly $3$. This does not change the exterior of $K \cup \Gamma \cup L$. Let $\overline{G}$ be the graph obtained from $G$ by collapsing parallel edges to a single edge and removing edge loops. F\'ary's theorem says that $\overline{G}$ has an embedding in $\mathbb{R}^2$ where each edge is straight \cite{fary1948straight}; this was proved independently by Wagner \cite{wagner1936bemerkungen} and Stein \cite{stein1951convex} as well. Such an embedding can be found in polynomial time using, for example, the algorithms of \cite{deFPachPollack} or \cite{Schnyder}. We place this embedded graph into the interior of a square $Q$.
Now reinstate the parallel edges of $G$ with 2 straight arcs each
and the edge loops of $G$ with 3 straight arcs. Step 2 is to replace each edge of $G$ by 4 parallel edges, replace each 2-valent vertex of $G$ by 4 vertices joined by 3 edges and replace each 4-valent vertex of $G$ by $16$ vertices arranged in a grid. Furthermore, each 3-valent vertex of $G$ is replaced by a triangle. This is triangulated by placing a vertex in its interior and coning from that point. The result is a graph $G_+$ where each edge is still straight (see Figure \ref{Fig:Gplus}).
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{g-to-gplus}
\caption{Forming $G_+$ from $G$} \label{Fig:Gplus}
\end{figure}
In Step 3, we triangulate the complementary regions of $G_+ \cup \partial Q$ by adding new edges. Let $E$ be the 1-skeleton of the resulting triangulation of $Q$. In Step 4, we insert 4 copies of $Q$ into the cube $Q \times I$, one being the top face, one the bottom face, and two parallel copies between them. Insert $E \times I$ into the cube $Q \times I$. This divides the cube into convex balls. We triangulate each face of each ball by adding a vertex to its interior and then coning off, and we then triangulate each ball by adding a vertex to its interior and then coning. We now modify this triangulation as follows. Near each crossing, there are $27$ little cubes. We remove these and insert a new triangulation of the cube. A portion of this is shown in Figure \ref{fig:crosstri}. It is clear that if these are inserted in the correct way, we obtain a regular neighbourhood of $K \cup \Gamma \cup L$ as a simplicial subset of this triangulation. Removing the interior of this gives the required triangulation of the exterior of $K \cup \Gamma \cup L$. It is clearly constructible in polynomial time and has the required properties. \\
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{crosstri-eps-converted-to.pdf}
\caption{The triangulation near each crossing} \label{fig:crosstri}
\end{figure}
\textbf{Step 2}: Building the triangulation of $M - N^\circ(K)$.\\
The above triangulation of $S^3 - N^\circ(K \cup \Gamma \cup L)$ extends to a triangulation of $M - N^\circ(K)$ in an obvious way. We need to attach a solid torus to each component of $\partial N(L)$. We do this by attaching a triangulated meridian disc along the surgery slope. We then attach on a 3-ball, which is triangulated as a cone on its boundary. This process is again completed in polynomial time, and the number of tetrahedra remains bounded above by a linear function of $c$.\\
As described above, we can form a basis for $H^1(M - N^\circ(K))$ by
\begin{enumerate}
\item picking a basis for $H^1(M)$ and taking its image under the homomorphism $H^1(M) \rightarrow H^1(M - N^\circ(K))$ induced by inclusion;
\item adding one extra element that maps to a generator for $H^2(M, M - N^\circ(K))$.
\end{enumerate}
\vspace{0.5cm}
\textbf{Step 3:} Defining $b_1(M)$ simplicial 1-cocycles on $S^3 - N^\circ(K \cup \Gamma \cup L)$. \\
We have already identified elements of $H^1(M)$ with integral solutions to the equation $A \beta = 0$, where $A$ is the generalised linking matrix for $\Gamma \cup L$. Therefore, consider an integral solution $\beta = (\beta_1 , \cdots , \beta_k)^T$ to the equation $A \beta =0$. The corresponding class $\sum_{i=1}^k \beta_i \psi_i$ is an element of $H^1(S^3 - N^\circ(\Gamma \cup L))$ that evaluates to zero on each surgery slope of the framed link. We can restrict this to a cocycle on $S^3 - N^\circ(K \cup \Gamma \cup L)$, which represents an element of $H^1(M - N^\circ(K))$.
More specifically, define the 1-cocycle $c_{\beta}$ on $S^3 - N^\circ(K \cup \Gamma \cup L)$ as follows. Let $T$ be a maximal tree in the 1-skeleton of the triangulation. For every edge $e \in T$ define $\langle c_{\beta} , e \rangle = 0$. For any oriented edge $e \notin T$, construct a loop $\ell_e$ that starts at the initial vertex of $e$, runs along $e$ and then back to the start of $e$ through an embedded path in $T$. Since we are assigning $0$ to every edge contained in $T$, it should be clear that the numbers assigned to $e$ and $\ell_e$ are the same. Define
\[ \langle c_{\beta} , e \rangle := \sum_{i=1}^{k} \hspace{1mm} \beta_i \hspace{1mm} \ell k(\ell_e , L_i), \]
where $\beta_i$ are integers. It is clear that this forms a 1-cocycle since each term $\ell k(\ell_e , L_i)$ is a 1-cocycle. \\
\textbf{Step 4:} Extending the simplicial $1$-cocycles $c_\beta$ to the triangulation of $M - N^\circ(K)$.\\
The triangulation of $M - N^\circ(K)$ is obtained by gluing triangulated solid tori to the triangulation of $S^3 - N^\circ(K \cup \Gamma \cup L)$, such that the restrictions of both triangulations to their common boundary, $\partial N(L)$, agree with each other. The manifold $X$ is obtained by Dehn filling along $L_i$ for $1 \leq i \leq |L|$.
We can extend the cocycles over the attached solid tori since we started with $\beta$ satisfying $A\beta=0$. This can be achieved with linear size control over the values of newly added edges. It is also easy to see that $\langle c_\beta , m_K \rangle = 0$, where $m_K$ is the meridian of $K$. \\
\textbf{Step 5}: Constructing the extra cocycle.\\
We construct an extra 1-cocycle on $M - N^\circ(K)$ that will form a generator for the summand of $H^1(M - N^\circ(K))$ corresponding to $H^2(M, M - N^\circ(K))$. This extra element, together with the cocycles constructed from a basis for $H^1(M)$, will provide the required basis for $H^1(M - N^\circ(K))$.
Denote by $\kappa = (\kappa_1 , \cdots , \kappa_k)^T$ with $\kappa_i := \ell k (K , L_i)$ the vector encoding the linking numbers of $K$ with $L_i$. We claim that the condition that $K$ is homologically trivial in $M$ is equivalent to the linear equation $A \beta = - \kappa$ having an integral solution. The homology group $H_1(S^3 - N^\circ(\Gamma \cup L) ; \mathbb{Z})$ is freely generated by the meridians $\mu_1 , \cdots , \mu_k$ encircling $L_1, \cdots, L_k$. For $1 \leq i \leq |L|$, denote by $\lambda_i$ the longitude of $L_i$ that has zero linking number with $L_i$. Then $H_1(M; \mathbb{Z})$ is obtained by adding the relations $a_{ii} \hspace{1mm} \mu_i + \lambda_i =0$, one for each component $L_i$ of $L$. The latter relation is equivalent to
\[ a_{ii} \hspace{1mm} \mu_i + \sum_{j \neq i} \ell k (L_i , L_j) \mu_j = \sum_j a_{ij} \hspace{1mm} \mu_j = 0. \]
In other words, if we set $\mu = (\mu_1 , \cdots , \mu_k )$, then the relations are obtained by setting the entries of the vector $A \mu$ equal to $0$. Therefore, $K$ being trivial in $H_1(M ; \mathbb{Z})$ is exactly the condition that $A \beta = -\kappa$ has an integral solution for $\beta$.
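As a toy check of this solvability criterion, one can search by brute force for an integral $\beta$ with $A\beta = -\kappa$ in a bounded box. This is only for illustration, with an invented matrix and linking vector; in practice one would decide integral solvability via the Smith normal form:

```python
from itertools import product

def has_integral_solution(A, kappa, bound=5):
    """Brute-force search for an integral beta with A beta = -kappa,
    coordinates restricted to the box [-bound, bound]^k."""
    k = len(A[0])
    for beta in product(range(-bound, bound + 1), repeat=k):
        if all(sum(A[i][j] * beta[j] for j in range(k)) == -kappa[i]
               for i in range(len(A))):
            return True
    return False

# Invented example: K links L_1 twice and L_2 not at all.
A = [[1, 0], [0, 2]]
kappa = [2, 0]
assert has_integral_solution(A, kappa)        # e.g. beta = (-2, 0)
assert not has_integral_solution([[2]], [1])  # 2*beta_1 = -1 fails over Z
```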
Let $\theta$ be any integral solution to the linear equation $A \theta = - \kappa$. Define the 1-cocycle $c_\theta$ similar to $c_\beta$ but with slight modification to make the evaluation on the meridian of $K$ non-zero. More precisely
\[ \langle c_\theta , e \rangle := \ell k (\ell_e , K) + \sum_{i=1}^{k} \hspace{1mm} \theta_i \hspace{1mm} \ell k(\ell_e , L_i).\]
The evaluation of $c_\theta$ on each surgery curve is zero, and the evaluation on the meridian of $K$ is equal to 1. It therefore is sent to the generator of $H^2(M, M - N^\circ(K))$, under the map $H^1(M - N^\circ(K)) \rightarrow H^2(M, M - N^\circ(K))$.\\
\textbf{Step 6:} Analysing the computational cost of the algorithm. \\
The number of edges of the triangulation of $S^3 - N^\circ(K \cup \Gamma \cup L)$ is $O(c)$. A spanning tree $T$ and the loops $\ell_e$ for $e \notin T$ can be found in polynomial time in the number of edges. The numbers $\ell k (\ell_e , L_i)$ can be computed as follows. We can construct the diagram $L \cup \Gamma \cup \ell_e$, since $\ell_e$ is a union of edges of the triangulation. Each edge of the triangulation is straight, and so when it is projected to the plane of the diagram, the image of $\ell_e$ is a union of straight arcs. We compute the linking number $\ell k (\ell_e , L_i)$ using the usual signed count over the crossings of $\ell_e$ with $L_i$. Each of the linking numbers is at most linear in the number of crossings $c$. This is because the triangulation of $S^3 - N^\circ(K \cup \Gamma \cup L)$ has $O(c)$ edges and each edge can contribute at most a constant number of crossings. Moreover, the coordinates $\theta_i$ are at most linear in $c$ and can be computed in polynomial time, as $A$ is a fixed matrix. Therefore the constructed $1$-cocycles on $S^3 - N^\circ(K \cup \Gamma \cup L)$ have $C_{\mathrm{nat}}$ at most $O(c^2)$ and can be computed in polynomial time. Extending these $1$-cocycles over the attached triangulated solid tori can be done in polynomial time. Moreover, the extension keeps the total number of tetrahedra linear in $c$, and the total $C_{\mathrm{nat}}$ at most quadratic in $c$.
\end{proof}
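The signed count over crossings used in Step 6 of the proof can be sketched as follows; the crossing lists are made-up inputs, since extracting the signed crossings from a straight-line projection is a routine computational-geometry step:

```python
def linking_number(signed_crossings):
    """Linking number of two disjoint closed curves from the signs of
    their inter-component crossings in a diagram: half the signed sum."""
    s = sum(signed_crossings)
    # for two closed curves the signed sum is always even
    assert s % 2 == 0
    return s // 2

assert linking_number([+1, +1]) == 1   # Hopf link diagram
assert linking_number([+1, -1]) == 0   # two cancelling crossings
```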
\section{Surfaces with trivial Thurston norm}
Recall from Section \ref{Sec:Normal} the definition of a fundamental normal surface. In this section, we will prove the following result.
\begin{thm}
\label{Thm:FundamentalNormZero}
Let $\mathcal{T}$ be a triangulation of a compact orientable irreducible 3-manifold $X$. If $X$ has any compressible boundary components, suppose that these are tori. The subspace of $H_2(X, \partial X; \mathbb{Z})$ with trivial Thurston norm is spanned by a collection of fundamental normal tori, annuli and discs.
\end{thm}
As an immediate consequence, we obtain the following.
\theoremstyle{theorem}
\newtheorem*{Thm:BasisForW}{Theorem \ref{Thm:BasisForW}}
\begin{Thm:BasisForW}
Let $\mathcal{T}$ be a triangulation of a compact orientable irreducible 3-manifold $X$. If $X$ has any compressible boundary components, suppose that these are tori. Then there is a collection $w_1, \cdots, w_r$ of integral cocycles that forms a basis for the subspace $W$ of $H^1(X; \mathbb{R})$ consisting of classes with Thurston norm zero with $\sum_i C_{\mathrm{dig}}(w_i)$ at most $O(|\mathcal{T}|^2)$.
\end{Thm:BasisForW}
\begin{proof}
By Theorem \ref{Thm:FundamentalNormZero}, there is a collection of fundamental normal surfaces that forms a generating set for $W \cap H^1(X; \mathbb{Z})$. Some subset of this collection therefore forms a basis for $W$. Any fundamental surface $S$ intersects each edge of $\mathcal{T}$ at most $t2^{7t+2}$ times, where $t = |\mathcal{T}|$, the number of tetrahedra of $\mathcal{T}$.
Hence, when $S$ is oriented, the cocycle $w$ dual to $S$ has evaluation at most $t2^{7t+2}$ on each edge. So $C_{\mathrm{dig}}(w)$ is at most $O(t^2)$.
\end{proof}
We will prove Theorem \ref{Thm:FundamentalNormZero} using results of Tollefson and Wang \cite{TollefsonWang}. In that paper, $X$ was required to be irreducible and its compressible boundary components were required to be tori. It is for this reason that these are also hypotheses of Theorem \ref{Thm:FundamentalNormZero}.
\begin{definition}
Let $X$ be a compact orientable 3-manifold with a triangulation $\mathcal{T}$.
A compact oriented normal surface $F$ properly embedded in $X$ is \emph{lw-taut} if
\begin{enumerate}
\item its homology class $[F]$ is non-trivial in $H_2(X, \partial X)$;
\item it is $\chi_-$ minimising;
\item there is no union of components of $F$ that is homologically trivial;
\end{enumerate}
and furthermore it has smallest weight among all normal surfaces in its homology class satisfying the above conditions.
\end{definition}
Recall from Section \ref{Sec:Normal} the definition of the projective solution space ${\mathcal P}(\mathcal{T})$. Recall also the notion of a normal surface being carried by a face of $\mathcal{P}(\mathcal{T})$.
\begin{definition} A face of $\mathcal{P}(\mathcal{T})$ is \emph{lw-taut} if every surface carried by the face is lw-taut.
\end{definition}
The following result was proved by Tollefson and Wang (Theorem 3.3 and Corollary 3.4 in \cite{TollefsonWang}).
\begin{thm}
\label{Thm:lwtaut}
Let $\mathcal{T}$ be a triangulation of a compact orientable irreducible 3-manifold $X$. If $X$ has any compressible boundary components, suppose that these are tori. Let $F$ be an lw-taut surface and let $C$ be the minimal face of $\mathcal{P}(\mathcal{T})$ that carries $F$. Then $C$ is lw-taut. Furthermore, there are unique orientations assigned to the surfaces carried by $C$ such that if $G$ and $H$ are carried by $C$, then the normal sum $G+H$ satisfies $[G+H] = [G] + [H] \in H_2(X, \partial X)$ and $x([G+H]) = x([G]) + x([H])$.
\end{thm}
\begin{proof}[Proof of Theorem \ref{Thm:FundamentalNormZero}] Let $T$ consist of those fundamental annuli, tori and discs that lie in some lw-taut face. Consider any element of $H_2(X, \partial X; \mathbb{Z})$ with trivial Thurston norm. This is represented by an lw-taut surface $F$. Let $C$ be the minimal face of $\mathcal{P}(\mathcal{T})$ that carries $F$. By Theorem \ref{Thm:lwtaut}, $C$ is an lw-taut face. Now, $F$ is a normal sum of fundamental surfaces $G_1, \cdots, G_n$ that are also carried by $C$. By Theorem \ref{Thm:lwtaut}, they are all oriented surfaces. Since they are lw-taut and $X$ is irreducible, no $G_i$ is a sphere. By Theorem \ref{Thm:lwtaut}, $[F] = [G_1] + \cdots + [G_n]$ in $H_2(X, \partial X)$ and $0 = x([F]) = x([G_1]) + \cdots + x([G_n])$. Since the Thurston norm is always non-negative, this implies that $x([G_i]) = 0$ for each $i$. As the $G_i$ are oriented and lw-taut, they are discs, annuli and tori, and hence they are elements of $T$.
\end{proof}
\section{Computational complexity of \textsc{Thurston norm ball}}
In this section, we analyse the decision problem \textsc{Thurston norm ball for $b_1 \leq B$} that was mentioned in the Introduction. We now define it precisely. The input is a triangulation $\mathcal{T}$ for a compact orientable 3-manifold $X$ with first Betti number $b_1(X) \leq B$, and a collection of integral simplicial 1-cocycles $\{ \phi_1 , \cdots, \phi_b \}$ that forms a basis for $H^1(X ; \mathbb{R})$. The problem asks us to compute the unit ball of the Thurston semi-norm. Here we have identified $H^1(X ; \mathbb{R})$ with $H_2(X , \partial X ; \mathbb{R})$ using Poincar\'{e} duality. The output consists of the following two sets of data:\\
1) A collection of integral cocycles that forms a basis for the subspace $W$ of $H^1(X; \mathbb{R})$ with Thurston norm zero. These are written as rational linear combinations of the given cocycles $\{ \phi_1 , \cdots , \phi_b \}$. Denote by $p$ the projection map from $H^1(X ; \mathbb{R})$ to $H^1(X ; \mathbb{R})/W$.\\
2) A finite set of points $V \subset H^1(X ; \mathbb{Q})$ such that $p(V)$ is the set of vertices of the unit ball of $H^1(X ; \mathbb{R})/W$, together with a list $\mathcal{F}$ of subsets of $V$. The set $\mathcal{F}$ is the list of faces of the unit ball for $H^1(X ; \mathbb{R})/W$. In other words, for $F\in \mathcal{F}$ the set
\[ \{ p(v) \hspace{2mm} | \hspace{2mm} v \in F \}, \]
forms the set of vertices of some face of the unit ball for $H^1(X ; \mathbb{R})/W$. Moreover, this covers all the faces as we go over all the elements of $\mathcal{F}$. Thus, the unit ball of $H^1(X; \mathbb{R})$ is the inverse image of the unit ball of $H^1(X ; \mathbb{R})/W$ under the projection map $p$.
The complexity of the input is defined to be $|\mathcal{T}| + \sum_i C_{\mathrm{nat}}(\phi_i)$. Recall that $|\mathcal{T}|$ is the number of tetrahedra of $\mathcal{T}$. As discussed in the Introduction, the fact that the complexity of $\phi_i$ is measured using $C_{\mathrm{nat}}$ rather than $C_{\mathrm{dig}}$ is definitely not standard. In order to simplify the notation a little, we let $\Phi$ be the matrix with columns $\phi_1, \cdots, \phi_b$. More specifically, it has $b$ columns and has a row for each oriented edge of $\mathcal{T}$, and its $(i,j)$ entry is the evaluation of $\phi_j$ on the $i$th edge. So $C_{\mathrm{nat}}(\Phi) = \sum_i C_{\mathrm{nat}}(\phi_i)$.
\theoremstyle{theorem}
\newtheorem*{thurston ball}{Theorem \ref{thurston ball}}
\begin{thurston ball}
Fix an integer $B \geq 0$. The problem \textsc{Thurston norm ball for $b_1 \leq B$} lies in \textbf{FNP}, where $b_1$ denotes the first Betti number.
\end{thurston ball}
We will prove this over the next two sections. In this section, we will consider the following restricted version of the problem.
In \textsc{Thurston norm ball for irreducible boundary-irreducible 3-manifolds with $b_1 \leq B$}, we consider compact, orientable, irreducible, boundary-irreducible 3-manifolds $X$. We allow $X$ to be disconnected. Thus, the input is a triangulation $\mathcal{T}$ for $X$ with first Betti number $b_1(X) \leq B$, and a collection of simplicial integral 1-cocycles $\{ \phi_1 , \cdots, \phi_b \}$ that forms a basis for $H^1(X; \mathbb{R})$. The output is the data in (1) and (2) above.
\begin{thm}
\label{Thm:MainThmIrred}
\textsc{Thurston norm ball for irreducible boundary-irreducible 3-manifolds with $b_1 \leq B$} is in \textbf{FNP}.
\end{thm}
\begin{proof}
Let $d = \dim (H^1(X; \mathbb{R})/W)$, and denote by $B_{\bar{x}}$ the unit ball of the induced Thurston norm $\overline{x}$ on $H^1(X; \mathbb{R})/W$:
\[B_{\bar{x}} = \{ v \in H^1(X ; \mathbb{R})/W \hspace{3mm} | \hspace{3mm} \overline{x}(v)\leq 1 \}. \]
Then $B_{\bar{x}}$ is a convex polyhedron. The boundary, $\partial B_{\bar{x}}$, inherits a facial structure from $B_{\bar{x}}$, where the faces of $\partial B_{\bar{x}}$ correspond to faces of $B_{\bar{x}}$ except for the face $B_{\bar{x}}$ itself. In particular, top-dimensional faces of $\partial B_{\bar{x}}$ correspond to facets of $B_{\bar{x}}$, and from now on a top-dimensional face refers to a top-dimensional face of $\partial B_{\bar{x}}$. The plan of the proof is as follows:
\begin{enumerate}
\item A basis for the subspace $W$ consisting of classes with Thurston norm zero is given to us non-deterministically.
\item The list of vertices $V$ and faces $\mathcal{F}$ is given to us non-deterministically.
\item We verify that for each face $F \in \mathcal{F}$, the vertices of $F$ actually lie on the same face.
\item Let $P$ be the space obtained by patching together geometric realisations of given top-dimensional faces of $\partial B_{\bar{x}}$ along their common boundaries. We have the maps
\[ P \xrightarrow{i} \partial B_{\bar{x}} \xrightarrow{\pi}S^{d-1}, \]
where $i$ is the inclusion (well-defined by (3)) and $\pi$ is the radial projection onto the $(d-1)$-dimensional sphere $S^{d-1}$. We verify that the composition $\pi \circ i$ is a homeomorphism.
\item We verify that the list of faces of $\partial B_{\bar{x}}$ is complete.
\end{enumerate}
\textbf{Step 1: A basis for $W$}\\
By Theorem \ref{Thm:BasisForW}, there is a collection $w_1, \cdots, w_r$ of integral cocycles that forms a basis for the subspace $W$ of $H^1(X; \mathbb{R})$ consisting of classes with Thurston norm zero and that satisfies $\sum_i C_{\mathrm{dig}}(w_i) \leq O(|\mathcal{T}|^2)$. We assume that these simplicial cocycles are given to us non-deterministically. We can certify that the elements $w_1, \cdots, w_r$ have Thurston norm zero, using Theorem \ref{lackenby}.
We express each $w_i$ as a linear combination of the given cocycles $\phi_1, \cdots, \phi_b$, as follows. There is a coboundary map $\partial^\ast \colon C^0(\mathcal{T}) \rightarrow C^1(\mathcal{T})$ from 0-cochains to 1-cochains. There is a natural basis $x_1, \cdots, x_m$ for $C^0(\mathcal{T})$ where $x_i$ is the 0-cocycle that evaluates to $1$ on the $i$th vertex of $\mathcal{T}$ and evaluates to zero on the other vertices. We wish to solve
$$\alpha_1 \phi_1 + \cdots + \alpha_b \phi_b + \beta_1 \partial^\ast (x_1) + \cdots + \beta_m \partial^\ast (x_m) = w_i.$$
Using the Bareiss algorithm, this can be done in polynomial time as a function of $C_{\mathrm{dig}}(\Phi)$ and $|\mathcal{T}|$. The resulting coefficients $\alpha_1, \cdots, \alpha_b$ have $C_{\mathrm{dig}}(\alpha_i)$ at most a polynomial function of $C_{\mathrm{dig}}(\Phi)$ and $|\mathcal{T}|$.
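The Bareiss algorithm mentioned here is fraction-free Gaussian elimination: every intermediate entry stays integral and each division is exact, which keeps bit sizes polynomially bounded. A minimal determinant version (a linear-system solver follows the same update rule) might look like this sketch:

```python
def bareiss_det(M):
    """Determinant of an integer matrix by Bareiss fraction-free
    elimination; every division below is exact, so all intermediate
    values stay integral."""
    A = [row[:] for row in M]
    n = len(A)
    sign, prev = 1, 1
    for k in range(n - 1):
        if A[k][k] == 0:
            # find a nonzero pivot below and swap rows
            for i in range(k + 1, n):
                if A[i][k] != 0:
                    A[k], A[i] = A[i], A[k]
                    sign = -sign
                    break
            else:
                return 0
        for i in range(k + 1, n):
            for j in range(k + 1, n):
                # the Bareiss update rule; the division by prev is exact
                A[i][j] = (A[i][j] * A[k][k] - A[i][k] * A[k][j]) // prev
        prev = A[k][k]
    return sign * A[n - 1][n - 1]

assert bareiss_det([[1, 2, 3], [4, 5, 6], [7, 8, 10]]) == -3
```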
We can also verify whether the cocycles $w_1, \cdots, w_r$ are linearly independent in $H^1(X; \mathbb{R})$.
In the remaining steps, we will certify that the induced Thurston semi-norm on $H^1(X; \mathbb{R})/W$ is indeed a norm, hence the basis elements actually generate $W$.\\
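The exact linear solve above can be sketched concretely. The following is a minimal stand-in, with plain Gaussian elimination over exact rationals instead of the fraction-free Bareiss scheme mentioned in the text; both avoid floating-point error, though Bareiss gives better control on the bit-size of intermediate entries. The function name is ours.

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b over the rationals by Gaussian elimination.

    Returns one solution as a list of Fractions, or None if the system
    is inconsistent.  A stand-in for the fraction-free Bareiss scheme."""
    m, n = len(A), len(A[0])
    M = [[Fraction(x) for x in row] + [Fraction(v)] for row, v in zip(A, b)]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        M[r] = [x / M[r][c] for x in M[r]]          # normalise the pivot row
        for i in range(m):
            if i != r and M[i][c] != 0:             # clear the rest of column c
                f = M[i][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        pivots.append(c)
        r += 1
        if r == m:
            break
    if any(M[i][n] != 0 for i in range(r, m)):      # zero row, non-zero rhs
        return None
    x = [Fraction(0)] * n                           # free variables set to 0
    for i, c in enumerate(pivots):
        x[c] = M[i][n]
    return x
```

Here the columns of $A$ would be the values of $\phi_1, \cdots, \phi_b, \partial^\ast(x_1), \cdots, \partial^\ast(x_m)$ on the edges of $\mathcal{T}$, and the right-hand side the values of $w_i$.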
\textbf{Step 2A: Bounding the number of faces and vertices of the Thurston unit ball}\\
We are given the simplicial integral cocycles $\{ \phi_1 , \cdots, \phi_b \}$. From these, we can construct properly embedded oriented surfaces $S_1 , \cdots , S_b$ that are Poincar\'{e} dual to $\phi_1 , \cdots , \phi_b$, and whose total complexity, $\sum_i \chi_-(S_i)$, is at most $O(C_{\mathrm{nat}}(\Phi))$. To see this geometrically, fix a 1-cocycle $\phi_i$ and consider an arbitrary simplicial triangle $\Delta$ in the triangulation. Assume that the numbers that $\phi_i$ associates to the edges of $\Delta$ are $a, b , c \geq 0$ such that $a = b+c$. We can draw $a= b+c$ normal arcs in $\Delta$ that intersect the edges of $\Delta$ in respectively $a, b$ and $c$ points. Given any tetrahedron, we can look at the drawn normal curves on its boundary triangles and place normal disks (triangles or squares) inside the tetrahedron with the given boundary curves. Construct an embedded surface $S_i$ by gluing the normal disks together along the common boundaries of the tetrahedra. The constructed surface is Poincar\'{e} dual to the starting 1-cocycle, and $\chi_-(S_i)$ is at most a linear multiple of $C_{\mathrm{nat}}(\phi_i)$.
By Theorem \ref{number of faces}, the total number of faces and vertices of the Thurston unit ball for $H^1(X ; \mathbb{R})/W$ are at most polynomial functions of $C_{\mathrm{nat}}(\Phi)$. Note that the degrees of these polynomials are bounded above by $B^2$, which is a fixed constant by our assumption. \\
\textbf{Step 2B: Bounding the number of bits encoding the coefficients of the vertices of the Thurston unit ball}
\begin{lem}
There is a set of points $V \subset H^1(X ; \mathbb{Q})$ such that
\[ \{ p(v) \hspace{2mm} | \hspace{2mm} v \in V \}, \]
is the set of vertices of the unit ball for the Thurston norm on $H^1(X ; \mathbb{R})/W$ with the following properties:
\begin{enumerate}
\item $|V|$ is at most a polynomial function of $C_{\mathrm{nat}}(\Phi)$;
\item each element of $V$ is $\gamma_1\phi_1 + \cdots + \gamma_b \phi_b$, for rational numbers $\gamma_1, \cdots, \gamma_b$ such that $\sum_i C_{\mathrm{dig}}(\gamma_i)$ is at most a polynomial function of $\log(C_{\mathrm{nat}}(\Phi))$.
\end{enumerate}
\label{vertex coefficients}
\end{lem}
\begin{proof}
Define $\mathcal{A}$ as the set of integral points in $H^2(X , \partial X ; \mathbb{Z}) \otimes \mathbb{Q}$ with dual norm at most one. By the previous step, we can construct surfaces $S_1 , \cdots , S_b$ Poincar\'{e} dual to $\phi_1 , \cdots , \phi_b$ whose total complexity, $\sum \chi_-(S_i)$, is at most $O(C_{\mathrm{nat}}(\Phi))$. By Theorem \ref{lattice points}, the size of $\mathcal{A}$ is at most a polynomial function of $O(C_{\mathrm{nat}}(\Phi))$. Let $v \in H^1(X ; \mathbb{Q})$ be such that $p(v)$ is a vertex of the unit ball for $H^1(X ; \mathbb{R})/W$. Then there are points $a_1 , \cdots , a_r \in \mathcal{A}$ such that the set of points $z \in H^1(X ; \mathbb{R})$ satisfying the equations
\[ \langle a_1 , PD(z) \rangle = 1 , \]
\[ \vdots \]
\[ \langle a_r , PD(z) \rangle =1, \]
coincides with the affine space $v +W$. Here $PD(z)$ is the Poincar\'{e} dual to $z$, and $a_1 , \cdots, a_r$ can be chosen to be the set of vertices spanning the face of the dual unit ball that is dual to the vertex $p(v)$. Moreover, since $z \in H^1(X ; \mathbb{R})$ lies inside a $b$-dimensional space, at most $b$ of the above equations can be linearly independent; hence we may assume that $r \leq b$ by choosing a suitable subset of $\{ a_1 , \cdots, a_r \}$. Recall that the dual basis $\{ e^1 , \cdots , e^b \}$ for $H^2(X , \partial X ; \mathbb{Z}) \otimes \mathbb{Q}$ is defined as
\[ \langle e^i , [S_j] \rangle = \delta_{ij}, \]
where $\delta_{ij}$ is the Kronecker function. From the proof of Theorem \ref{lattice points} we know that, if we write $a_i$ in the basis $\{ e^1 , \cdots , e^b \}$ then the coefficients are integral, and their absolute values are bounded above by $O(C_{\mathrm{nat}}(\Phi))$. Hence for each $1 \leq i \leq r$ we can write
\[ a_i = \sum_j \eta_j^i \hspace{1mm} e^j,\]
with $|\eta_j^i| \leq O(C_{\mathrm{nat}}(\Phi))$. Since $ \{ \phi_1 , \cdots , \phi_b \}$ is a basis for $H^1(X ; \mathbb{R})$ we can write
\[ z = \gamma_1 \phi_1 + \cdots + \gamma_b \phi_b , \]
for real numbers $\gamma_j$. Now for $1 \leq i \leq r$ we have
\[ 1 = \langle a_i , PD(z) \rangle = \langle \sum_j \eta_j^i \hspace{1mm} e^j , \sum_s \gamma_s [S_s] \rangle = \eta_1^i \gamma_1 + \cdots + \eta_b^i \gamma_b. \]
This gives a set of $r$ linear equations for $\gamma_1 , \cdots , \gamma_b$. The number of variables and the number of equations are bounded above by the constant $B$, and the total of the absolute values of the coefficients $\eta_j^i$ is bounded above by a polynomial function of $C_{\mathrm{nat}}(\Phi)$. Therefore, there exists a rational solution, $z = \gamma_1 \phi_1 + \cdots + \gamma_b \phi_b$, where the total number of bits for $(\gamma_1 , \cdots , \gamma_b)$ is at most a polynomial function of $\log(C_{\mathrm{nat}}(\Phi))$, for example by the Bareiss algorithm.
\end{proof}
\textbf{Step 2C: The list of vertices, $V$, and faces $\mathcal{F}$}\\
By Lemma \ref{vertex coefficients}, there is a set of points $V \subset H^1(X ; \mathbb{Q})$ such that
\[ \{ p(v) \hspace{2mm} | \hspace{2mm} v \in V \}, \]
is the set of vertices of the unit ball for the Thurston norm on $H^1(X ; \mathbb{R})/W$, $|V|$ is at most a polynomial function of $C_{\mathrm{nat}}(\Phi)$, and the total number of bits for writing each element of $V$ in terms of the basis $\{ \phi_1 , \cdots , \phi_b \}$ is at most a polynomial function of $\log(C_{\mathrm{nat}}(\Phi))$. Likewise, the number of faces of the unit ball for $H^1(X ; \mathbb{R})/W$, that is $|\mathcal{F}|$, is bounded above by a polynomial function of $C_{\mathrm{nat}}(\Phi)$. The sets $V$ and $\mathcal{F}$ are part of the certificate, and are given to us non-deterministically. We use Theorem \ref{lackenby} to certify that each element of $V$ has Thurston norm one.\\
\textbf{Step 3: Certifying that the vertices of each face in $\mathcal{F}$ actually lie on the same face}\\
Recall that the number of elements of $\mathcal{F}$ is at most polynomially large in $C_{\mathrm{nat}}(\Phi)$. For any element $F \in \mathcal{F}$ with $F = \{ u_1 , \cdots , u_s \} $, we check that
\[ x(K_1u_1 + \cdots + K_s u_s) = K_1 \hspace{1mm}x(u_1)+ \cdots + K_s \hspace{1mm} x(u_s) = K_1 + \cdots + K_s, \]
for some positive integral choices of $K_1 , \cdots, K_s$. Here $x$ represents the Thurston norm on $H^1(X ; \mathbb{R})$. Once verified, this equality implies that $\{u_1 , \cdots , u_s \}$ lie on the same face.
We would like to choose each $K_i$ such that the $1$-cocycle $K_i u_i$ is integral. First we need to check that there is a choice of $K_i$ that is not too large. To see this, we can write $u_i$ in the integral basis $\phi_1 , \cdots, \phi_b$ as
\[ u_i = \alpha_1^i \hspace{1mm} \phi_1 + \cdots + \alpha_b^i \hspace{1mm} \phi_b, \]
and take $K_i$ to be the product of denominators of $\alpha_j^i$ for $1 \leq j \leq b$. From Step 2 we know that the total number of digits of $K_i$ is bounded above by a polynomial function of $C_{\mathrm{nat}}(\Phi)$. The numbers $K_i$ are part of the certificate, and are given to us non-deterministically.
Therefore, we can use Theorem \ref{lackenby} to certify that the Thurston norm of
\[ K_1 \hspace{1mm} u_1 + \cdots + K_s \hspace{1mm} u_s, \]
is $K_1 + \cdots + K_s$. This finishes the certification for each face $F$. Since the total number of faces is bounded above by a polynomial function of $C_{\mathrm{nat}}(\Phi)$, we are done.\\
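The choice of the clearing constants $K_i$ is elementary; a minimal sketch follows (the least common multiple of the denominators would also work and is never larger). The sample coefficients are hypothetical.

```python
from fractions import Fraction
from math import prod

def clearing_constant(coeffs):
    """Product of the denominators of rational coefficients, as in Step 3.

    Multiplying the cocycle by this constant clears every denominator,
    so the resulting cocycle is integral."""
    return prod(f.denominator for f in coeffs)

# Hypothetical coefficients of some u_i in the basis phi_1, ..., phi_b:
u = [Fraction(2, 3), Fraction(-1, 4), Fraction(5)]
K = clearing_constant(u)
assert K == 12 and all((K * f).denominator == 1 for f in u)
```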
\textbf{Step 4A: Decomposing each top-dimensional face in $\mathcal{F}$ into simplices.}\\
The dimension of a face $F$ is the maximum number $m$ such that $F$ has $m+1$ affinely independent vertices. Hence, we can compute the dimension of each face from the list of its vertices, in time that is bounded above by a polynomial function of $C_{\mathrm{nat}}(\Phi)$. The boundary of the polytope can be subdivided into a triangulation without adding any new vertices. This can easily be proved by induction on the dimension of the faces. In particular, each top-dimensional face in $\mathcal{F}$ can be subdivided in this way so that along incident faces, their triangulations agree. Such a subdivision will be provided to us non-deterministically. Thus, for each top-dimensional face $F$ in $\mathcal{F}$, we are provided with a collection of subsets of $F$, each consisting of $d$ vertices, where $d$ is the dimension of $H^1(X ; \mathbb{R}) / W$. The number of such subsets is at most $|F|^d$, which is at most $|V|^b$. Let $\Sigma$ denote the collection of all these subsets, as we run over all top-dimensional faces. Then the number of elements of $\Sigma$ is at most a polynomial function of $C_{\mathrm{nat}}(\Phi)$. \\
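The dimension computation described above reduces to an exact rank computation on the differences of the vertex coordinates; a minimal sketch, with function names of our choosing:

```python
from fractions import Fraction

def rank_exact(rows):
    """Rank of a rational matrix, by Gaussian elimination over Fractions."""
    M = [[Fraction(x) for x in row] for row in rows]
    rank, cols = 0, (len(M[0]) if M else 0)
    for c in range(cols):
        piv = next((i for i in range(rank, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[rank], M[piv] = M[piv], M[rank]
        for i in range(rank + 1, len(M)):
            if M[i][c] != 0:
                f = M[i][c] / M[rank][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[rank])]
        rank += 1
    return rank

def face_dimension(vertices):
    """Dimension of the affine hull of a face, given its vertex list:
    the rank of the differences v_i - v_0."""
    v0 = vertices[0]
    diffs = [[a - b for a, b in zip(v, v0)] for v in vertices[1:]]
    return rank_exact(diffs) if diffs else 0
```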
\textbf{Step 4B: Certifying that the composition $\pi \circ i$ is injective.}\\
It is enough to verify that any pair of simplices $\sigma_1, \sigma_2 \in \Sigma$ have disjoint interiors.
Here and afterwards, we slightly abuse the notation by denoting the geometric realisation of $\sigma_i$ by $\sigma_i$ again. The condition $\sigma_1^\circ \cap \sigma_2^\circ = \emptyset$ is equivalent to $\text{Cone}(\sigma_1^\circ) \cap \text{Cone}(\sigma_2^\circ) =\emptyset$, since the restriction of the Thurston norm to both $\sigma_1$ and $\sigma_2$ is equal to $1$. We now show how to verify the condition $\text{Cone}(\sigma_1^\circ) \cap \text{Cone}(\sigma_2^\circ) =\emptyset$ using Linear Programming. Assume that $\{ u_1 , \cdots , u_d \}$ forms the list of vertices of $\sigma_1$, and $\{ y_1 , \cdots , y_d \}$ forms the list of vertices of $\sigma_2$. We would like to check that
\begin{eqnarray}
\alpha_1 u_1 + \cdots + \alpha_d u_d = \beta_1 y_1 + \cdots + \beta_d y_d
\label{disjoint-cones}
\end{eqnarray}
has no solution for $\alpha_i > 0$ and $\beta_j > 0$. However, Equation (\ref{disjoint-cones}) has a solution with $\alpha_i >0$ and $\beta_j>0$ if and only if it has a solution with $\alpha_i\geq 1$ and $\beta_j \geq 1$, essentially by scaling. This is an instance of Linear Programming. Since no variables are required to be integers, it can be solved in polynomial time as a function of $C_{\mathrm{nat}}(\Phi)$.\\
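As an illustration only, the LP feasibility check above can be phrased directly in code. The sketch below assumes \texttt{scipy} is available and uses floating-point LP (the HiGHS backend), whereas a genuine polynomial-time certificate would use an exact rational LP solver; the function name and interface are ours.

```python
import numpy as np
from scipy.optimize import linprog

def cones_intersect(U, Y):
    """Decide whether the open cones over two simplices meet.

    U and Y list the generators (vertices of sigma_1 and sigma_2) as
    coordinate rows.  By the scaling argument in the text, the cones
    meet iff  a_1 u_1 + ... = b_1 y_1 + ...  has a solution with all
    coefficients >= 1, a pure LP feasibility problem."""
    U, Y = np.asarray(U, dtype=float), np.asarray(Y, dtype=float)
    A_eq = np.hstack([U.T, -Y.T])          # columns are the generators
    n = U.shape[0] + Y.shape[0]
    res = linprog(np.zeros(n), A_eq=A_eq, b_eq=np.zeros(U.shape[1]),
                  bounds=[(1, None)] * n, method="highs")
    return res.status == 0                 # 0 = a feasible optimum exists
```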
\textbf{Step 4C: Certifying that the composition $\pi \circ i$ is a homeomorphism.}\\
The certificate provides the vertices and faces of the boundary of the unit norm ball, and it provides a collection $\Sigma$ of $(d-1)$-dimensional simplices. We have checked that the interiors of these simplices are disjoint. But we need to check that their union is the entire boundary of the unit norm ball. We check that $P$ is a closed, oriented, pseudo-manifold as follows. For this purpose, we need to check the conditions of \emph{purity}, \emph{non-branching}, \emph{connectivity} and \emph{orientability}.
\emph{Purity condition}: For each element $\sigma$ of $\Sigma$, we check that $p(\sigma)$ actually forms the vertices of a $(d-1)$-dimensional simplex. To do this, we verify that its vertices form a linearly independent set in $H^1(X ; \mathbb{R}) / W$. This can be done in polynomial time in $|\mathcal{T}|$ and $C_{\mathrm{nat}}(\Phi)$.
\emph{Non-branching condition}: We check that every $(d-2)$-dimensional simplex appears in exactly two $(d-1)$-dimensional simplices. In other words, for each $(d-2)$-dimensional face of $\Sigma$, we check that it lies in exactly one other $(d-1)$-dimensional simplex in $\Sigma$. Since $|\Sigma|$ is bounded by a polynomial in $C_{\mathrm{nat}}(\Phi)$, this can be checked in time that is polynomially bounded in $C_{\mathrm{nat}}(\Phi)$.
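The non-branching condition is a simple counting check; a minimal sketch (labels and interface are ours):

```python
from collections import Counter
from itertools import combinations

def is_non_branching(simplices):
    """Non-branching test for a pure (d-1)-dimensional complex.

    `simplices` is a list of frozensets of vertex labels, one per
    top-dimensional simplex; the condition is that every
    (d-2)-dimensional face lies in exactly two of them."""
    d = len(next(iter(simplices)))                  # vertices per simplex
    codim1 = Counter(frozenset(f)
                     for s in simplices
                     for f in combinations(sorted(s), d - 1))
    return all(count == 2 for count in codim1.values())
```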
\emph{Connectivity condition}: For every pair of simplices $\sigma_1$ and $\sigma_2$ in $\Sigma$, we check that $\sigma_1$ and $\sigma_2$ can be connected by a path consisting of $(d-2)$- and $(d-1)$-dimensional simplices. We may assume that such a (minimal) path is given to us non-deterministically. Since $|\Sigma|$ is bounded by polynomials in $C_{\mathrm{nat}}(\Phi)$, this can be checked in time that is polynomially bounded in $C_{\mathrm{nat}}(\Phi)$.
\emph{Orientability}: We specify an orientation of each simplex in $\Sigma$ by specifying an ordering of its vertices. We check that this orientation is compatible with its orientation from $H^1(X ; \mathbb{R}) / W$, by checking that the matrix whose columns are the vertices of the simplex, taken in the specified order, has positive determinant. For every two top-dimensional faces that share a $(d-2)$-dimensional face, we check that they are glued by an orientation-reversing map along their intersection.
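The orientation test reduces to the sign of a determinant, which can be computed exactly over the rationals; a minimal sketch:

```python
from fractions import Fraction

def det_sign(rows):
    """Sign of the determinant of a square rational matrix (+1, -1, or 0),
    computed exactly by Gaussian elimination over Fractions."""
    M = [[Fraction(x) for x in row] for row in rows]
    n, sign = len(M), 1
    for c in range(n):
        piv = next((i for i in range(c, n) if M[i][c] != 0), None)
        if piv is None:
            return 0                        # singular matrix
        if piv != c:
            M[c], M[piv] = M[piv], M[c]     # row swap flips the sign
            sign = -sign
        if M[c][c] < 0:                     # record the pivot's sign
            sign = -sign
        for i in range(c + 1, n):
            if M[i][c] != 0:
                f = M[i][c] / M[c][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[c])]
    return sign
```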
We have now established that $P$ is a closed, oriented, pseudo-manifold. We check that the map $\pi \circ i$ is injective and surjective, and hence a homeomorphism. Injectivity was established in Step 4B. To prove the surjectivity, it is enough to show that the degree of the map $\pi \circ i$ is non-zero. Here we are using the fact that the degree is well-defined between compact, oriented pseudo-manifolds of the same dimension. Moreover, any such map that is not surjective has degree $0$, since $S^{d-1} - \{ \text{point} \}$ is contractible and the degree is invariant under homotopy. To check that the degree is non-zero in our case, note that the degree can be computed as the signed count of the points in $P$ that map to a generic but fixed element in $S^{d-1}$. Since all of the signs agree by our construction, this signed count is always non-zero. This finishes Step 4 of the certification. \\
\textbf{Step 5: Certifying that the list of faces of $\partial B_{\bar{x}}$ is complete.}\\
The maps $\pi \circ i$ and $\pi$ are both homeomorphisms, hence so is the map $i \colon P \rightarrow \partial B_{\bar{x}}$. This implies that the list of top-dimensional faces used to construct the space $P$ is the complete list of top-dimensional faces of $\partial B_{\bar{x}}$, otherwise the inclusion map $i$ would not have been surjective.
For every face $F \in \mathcal{F}$ there are top-dimensional faces $F_1, \cdots, F_r \in \mathcal{F}$ with $r \leq d$ such that
\[ F = \bigcap_{i=1}^{r} F_i, \]
where we have considered faces as subsets of the vertices $V$. Moreover, any intersection as above determines a face. Hence, we may go over all subsets of size at most $d$ of the set of top-dimensional faces, and verify that our list of faces is complete.
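The enumeration described above can be sketched directly, with faces encoded as vertex sets; names and interface are ours:

```python
from itertools import combinations

def faces_from_top_faces(top_faces, d):
    """Recover all faces of the polytope boundary as non-empty
    intersections of at most d top-dimensional faces, each face being
    given as a frozenset of its vertices (as in Step 5)."""
    found = set()
    for r in range(1, d + 1):
        for combo in combinations(top_faces, r):
            inter = frozenset.intersection(*combo)
            if inter:                       # empty intersections are not faces
                found.add(inter)
    return found
```

For example, for a square (with $d = 2$) the four edges recover themselves and the four vertices, giving eight faces in total.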
\end{proof}
\begin{remark}
Note that in defining the complexity of the basis for cohomology, we used $C_{\mathrm{nat}}$ rather than $C_{\mathrm{dig}}$. Although this was enough for the current application, it would be interesting to know if \textsc{Thurston norm ball for $b_1 \leq B$} still lies in \textbf{NP} if we change the definition of the complexity of the cohomology basis to $C_{\mathrm{dig}}$. \end{remark}
\section{Decomposing a triangulated manifold along spheres and discs}
\label{Sec:SpheresDiscs}
In the previous section, we proved Theorem \ref{Thm:MainThmIrred}. This established the main theorem in the case of irreducible boundary-irreducible 3-manifolds. In this section, we start to tackle the general case, by decomposing our given 3-manifold along spheres and discs.
The following is Theorem 11.4 and Addendum 11.5 of \cite{lackenby2016efficient}. It provides a method for building a triangulation of the connected summands of a 3-manifold $X$. The input is a triangulation $\mathcal{T}$ of $X$, together with normal spheres $S$ that specify the connected sum. The running time of the algorithm is bounded above in terms of the weight $w(S)$. Recall that this is the number of intersection points between $S$ and the 1-skeleton of $\mathcal{T}$.
\begin{thm}
\label{Thm:DecomposeAlongSpheres}
There is an algorithm that takes, as its input, the following data:
\begin{enumerate}
\item a triangulation $\mathcal{T}$ with $t$ tetrahedra of a compact orientable 3-manifold $X$;
\item a vector $(S)$ for a normal surface $S$ in $\mathcal{T}$ that is a union of disjoint spheres;
\item a simplicial $1$-cocycle $c$ on $\mathcal{T}$.
\end{enumerate}
The output is a triangulation $\mathcal{T}'$ of $X \backslash\backslash S$ and a simplicial 1-cocycle $c'$ on $\mathcal{T}'$ with the following properties:
\begin{enumerate}
\item the number of tetrahedra in $\mathcal{T}'$ is at most $200t$;
\item the classes $[c']$ and $i^\ast([c])$ in $H^1(X \backslash\backslash S)$ are equal, where $i \colon X \backslash\backslash S \rightarrow X$ is the inclusion map;
\item $C_{\mathrm{nat}}(c') \leq 1200 t \, C_{\mathrm{nat}}(c).$
\end{enumerate}
The algorithm runs in time that is bounded above by a polynomial function of
$$t (\log w(S)) (\log (C_{\mathrm{nat}}(c) + 1)).$$
\end{thm}
We need a small extension of this result.
\begin{thm}
\label{Thm:SpheresAndDiscs}
Theorem \ref{Thm:DecomposeAlongSpheres} remains true if $S$ is a union of disjoint spheres and discs.
\end{thm}
The proof is essentially identical, and we therefore only sketch it.
When $S$ is a normal surface properly embedded in a compact orientable 3-manifold $X$ with a triangulation $\mathcal{T}$, then $X \backslash\backslash S$ inherits a handle structure, as follows. One first dualises $\mathcal{T}$ to form a handle structure $\mathcal{H}$ for $X$. The normal surface $S$ then determines a surface that is standard in $\mathcal{H}$, which means that it is disjoint from the 3-handles, it intersects each handle in discs and, in the case of a 1-handle or 2-handle, these discs respect the handle's product structure. Then, by cutting along this surface, each $i$-handle of $\mathcal{H}$ is decomposed into $i$-handles in the required handle structure. We call this the \emph{induced} handle structure on $X \backslash\backslash S$.
We do not actually construct this handle structure in the proof of Theorem \ref{Thm:DecomposeAlongSpheres}. The reason is that the number of handles (when $S$ is closed) is at least $w(S)$. So it is not possible to build this handle structure in time that is bounded above by a polynomial function of $\log w(S)$.
In the next definition, it is useful to think about $\mathcal{H}'$ as the induced handle structure on $X' = X \backslash\backslash S$ and where $S'$ is the copies of $S$ in $\partial X'$.
\begin{definition}
Let $\mathcal{H}'$ be a handle structure for a compact 3-manifold $X'$. Let $S'$ be a compact subsurface of $\partial X'$ such that $\partial S'$ is disjoint from the 2-handles and respects the product structure on the 1-handles. A handle $H$ of $\mathcal{H}'$ is a \emph{parallelity handle} for $(X',S')$ if
it admits a product structure $D^2 \times I$ such that
\begin{enumerate}
\item $D^2 \times \partial I = H \cap S'$;
\item each component of intersection between $H$ and another handle of $\mathcal{H}'$ is $\beta \times I$, for an arc $\beta$ in $\partial D^2$.
\end{enumerate}
The union of the parallelity handles is the \emph{parallelity bundle}.
\end{definition}
We will typically view the product structure $D^2 \times I$ on a parallelity handle as an $I$-bundle over $D^2$. It is shown in Lemma 5.3 of \cite{lackenby2009crossing} that these $I$-bundle structures patch together to form an $I$-bundle structure on the parallelity bundle.
\begin{definition}
\label{Def:BundleDefinitions}
Let $\mathcal{B}$ be an $I$-bundle over a compact surface $F$. Its \emph{horizontal boundary} $\partial_h \mathcal{B}$ is the $(\partial I)$-bundle over $F$. Its \emph{vertical boundary} $\partial_v \mathcal{B}$ is the $I$-bundle over $\partial F$. We say that a subset of $\mathcal{B}$ is \emph{vertical} if it is a union of fibres, and that it is \emph{horizontal} if it is a surface transverse to the fibres.
\end{definition}
The main step in the proof of Theorem 11.4 in \cite{lackenby2016efficient} was an application of the following result (Theorem 9.3 in \cite{lackenby2016efficient}).
\begin{thm}
There is an algorithm that takes, as its input,
\begin{enumerate}
\item a triangulation $\mathcal{T}$, with $t$ tetrahedra, for a compact orientable 3-manifold $X$;
\item a vector $(S)$ for an orientable normal surface $S$;
\end{enumerate}
and provides as its output, the following data. If $S'$ is the two copies of $S$ in $\partial (X \backslash\backslash S)$,
and $\mathcal{B}$ is the parallelity bundle for the pair $(X \backslash\backslash S,S')$ with its induced handle structure, then
the algorithm produces a handle structure for $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ and, for each
component $B$ of $\mathcal{B}$, it determines:
\begin{enumerate}
\item the genus and number of boundary components of its base surface;
\item whether $B$ is a product or twisted $I$-bundle; and
\item for each component $A$ of $\partial_vB$, the location of $A$ in $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$.
\end{enumerate}
It runs in time that is bounded by a polynomial in $t \log(w(S))$.
\end{thm}
In the above, the meaning of the \emph{location} of $A$ is as follows. The intersection between $A$ and each handle of $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ is a union of fibres in the $I$-bundle structure on $A$, and hence is a square. In the case when $A$ lies entirely in $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$, then $A$ is a union of these squares, and in this case, the algorithm provides these squares in the order they appear as one travels around $A$. However, $A$ need not lie entirely in $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$. This arises in the situation where $S$ has boundary. For example, if $D$ and $D'$ are normally parallel discs of $S$ that are incident to the boundary of $X$, then the space between them becomes a parallelity handle $D^2 \times I$ such that $\partial D^2 \times I$ intersects $\partial X$. Thus, in this situation, $A$ is decomposed into a union of squares, which are the components of intersection between $A$ and the handles of $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ and also components of intersection between $A$ and $\partial X$. The algorithm provides the squares lying in $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ in the order they appear as one travels around $A$.
Thus, the triangulation $\mathcal{T}'$ is constructed by decomposing each of the handles of $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ into tetrahedra and by giving a compatible triangulation of $\mathcal{B}$. The number of handles of $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ is bounded above by a linear function of $t$ and each of these handles can intersect its neighbours in a very limited number of possibilities. Thus, it is not hard to triangulate $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$ using at most $100t$ tetrahedra. In addition, we may ensure that the intersection with $\partial_v \mathcal{B}$ is simplicial. The horizontal boundary of each component $B$ of $\mathcal{B}$ is a planar surface, since $S$ is a union of spheres and discs. Thus, the topology of $B$ is determined entirely by the number of boundary components of its base surface and whether it is a twisted $I$-bundle or a product. It is shown that the total number of boundary components of the base surface of $\mathcal{B}$ is at most $10t$. Hence, it is not hard to construct the triangulation on $\mathcal{B}$ with at most $100t$ tetrahedra.
We now explain briefly how the cocycle $c'$ is constructed. This is explained in Addendum 11.5 in \cite{lackenby2016efficient}.
For each oriented edge $e$ in $\mathcal{T}'$, we need to define $c'(e)$. It is convenient to dualise $c$ to form an oriented surface $F$ properly embedded in $X$. We may assume that $F$ is transverse to $S$ and that the intersection between $F$ and $\mathcal{B}$ is vertical in $\mathcal{B}$. If $e$ lies in $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$, then we define $c'(e)$ to be the algebraic intersection number between $e$ and $F \backslash\backslash S$. This therefore defines the restriction of $c'$ to $\partial_v \mathcal{B}$. In the proof of Addendum 11.5 in \cite{lackenby2016efficient}, we replace $F$ by any compact oriented surface $F'$ that equals $F$ in $(X \backslash\backslash S) \backslash\backslash \mathcal{B}$, that is vertical in $\mathcal{B}$ and that satisfies $\partial_v \mathcal{B} \cap F' = \partial_v \mathcal{B} \cap F$. It is shown how to do this while maintaining control over the number of intersections with the edges of $\mathcal{T}'$. In particular, the cocycle $c'$ dual to $F'$ satisfies $C_{\mathrm{nat}}(c') \leq 1200 t \, C_{\mathrm{nat}}(c)$. Now, $F'$ and $F$ differ by a class that is represented by a vertical surface in $\mathcal{B}$ disjoint from $\partial_v \mathcal{B}$. In our situation, any such surface is dual to the trivial class in $H^1(X \backslash\backslash S)$, since $S$ is a union of spheres and discs. Thus, in fact, $[c']$ and $i^\ast([c])$ are equal.
This completes the outline of the proof of Theorem \ref{Thm:SpheresAndDiscs}. We will first apply it to essential spheres in $X$ with the following property.
\begin{definition}
A collection of disjoint essential spheres $S$ properly embedded in a 3-manifold $X$ is \emph{complete} if the manifold obtained from $X \backslash\backslash S$ by attaching a 3-ball to each spherical boundary component is irreducible.
\end{definition}
The following was proved by King (Lemma 4 in \cite{King}). King's result is stated for closed orientable 3-manifolds, but his argument extends immediately to compact orientable 3-manifolds with boundary. (See also Lemma 2.6 in \cite{mijatovic}).
\begin{thm}
Let $\mathcal{T}$ be a triangulation of a compact orientable 3-manifold $X$ with $t$ tetrahedra. Then there is a complete collection of disjoint essential normal spheres in $\mathcal{T}$ with weight at most $2^{185t^2}$.
\label{king}
\end{thm}
It might be possible to improve this estimate. It was shown by Jaco and Tollefson (Theorem 5.2 in \cite{JacoTollefson}) that, when $X$ is closed, it contains a complete collection of essential spheres, each of which is a vertex normal surface. (See Section \ref{Sec:Normal} for the definition of a vertex normal surface.) By Lemma 3.2 in \cite{HassLagarias} a vertex normal surface has weight at most $28t \cdot 2^{7t-1}$. However, the generalisation of Jaco and Tollefson's argument to manifolds with non-empty boundary does not seem so straightforward. In any case, Theorem \ref{king} is sufficient for our purposes.
Jaco and Tollefson also proved the following result dealing with compression discs for the boundary (Theorem 6.2 in \cite{JacoTollefson}). It refers to a \emph{complete} collection of compressing discs, which means that the manifold obtained by compressing along these discs has incompressible boundary.
\begin{thm}
\label{Thm:JacoTollefsonDiscs}
Let $\mathcal{T}$ be a triangulation, with $t$ tetrahedra, of a compact orientable irreducible 3-manifold $X$. Then $X$ has a complete collection of disjoint compressing discs, each of which is a vertex normal surface. Hence, each such disc has weight at most $28t \cdot 2^{7t-1}$, and their total weight is at most $280t^2 \cdot 2^{7t-1}$.
\end{thm}
The final estimate is a consequence of the well known result, essentially due to Kneser \cite{kneser1929geschlossene}, that in any collection of more than $10t$ disjoint normal surfaces, at least two of the surfaces are parallel.
\begin{proof}[Proof of Theorem \ref{thurston ball}]
We are given a triangulation $\mathcal{T}$ of the compact orientable 3-manifold $X$ and a collection of integral simplicial cocycles $\phi_1, \cdots, \phi_b$ that forms a basis for $H^1(X; \mathbb{R})$. Our goal is to compute the Thurston norm ball. Recall that the required output is:
\begin{enumerate}
\item A collection of elements that are integral linear combinations of $\phi_1, \cdots, \phi_b$. These will form a basis $\mathcal{B}$ for the subspace $W$ of $H^1(X;\mathbb{R})$ with Thurston norm zero.
\item A collection $V$ of rational linear combinations of $\phi_1, \cdots, \phi_b$ that project to the vertices of the norm ball in $H^1(X; \mathbb{R}) / W$.
\item A collection $\mathcal{F}$ of subsets of $V$ that form the faces.
\end{enumerate}
These will all be part of our certificate. In addition, the following will also form our certificate:
\begin{enumerate}
\item A normal surface $S$ in $\mathcal{T}$, given via its vector $(S)$, that is in fact a complete collection of disjoint essential spheres. It has weight at most $2^{185t^2}$ where $t = |\mathcal{T}|$.
\item A triangulation $\mathcal{T}'$ for the manifold $X'$ obtained by cutting along $S$ and then attaching a 3-ball to each spherical boundary component.
\item A collection of simplicial 1-cocycles $\phi'_1, \cdots, \phi'_b$ that are the images of $\phi_1, \cdots, \phi_b$ in $H^1(X')$ under the map $H^1(X) \rightarrow H^1(X \backslash\backslash S) \cong H^1(X')$.
\item A normal surface $D$ in $\mathcal{T}'$, given via its vector $(D)$, that is in fact a complete collection of disjoint compressing discs for $\partial X'$. It has weight at most $280 |\mathcal{T}'|^2 2^{7|\mathcal{T}'|-1}$.
\item A triangulation $\mathcal{T}''$ for $X'' = X' \backslash\backslash D$.
\item A collection of simplicial 1-cocycles $\phi''_1, \cdots, \phi''_b$ that are the images of $\phi'_1, \cdots, \phi'_b$ in $H^1(X'')$.
\item A certificate for the decision problem \textsc{Thurston norm ball for irreducible boundary-irreducible 3-manifolds with $b_1 \leq B$}, which provides the data for the Thurston norm ball of $H^1(X'')$. This data is a basis for the subspace $W''$ of $H^1(X''; \mathbb{R})$ with Thurston norm zero, together with the vertices $V''$ and faces $\mathcal{F}''$ for the norm ball in $H^1(X''; \mathbb{R}) / W''$.
\end{enumerate}
The certificate is verified as follows:
\begin{enumerate}
\item Verification that $S$ is a collection of spheres using the algorithm in \cite{agol2006computational}.
\item Verification that $\mathcal{T}'$ is a triangulation of $X'$ and that $\phi'_1, \cdots, \phi'_b$ are the images of $\phi_1, \cdots, \phi_b$ in $H^1(X')$, using Theorem \ref{Thm:DecomposeAlongSpheres}.
\item Verification that $D$ is a collection of discs using \cite{agol2006computational}.
\item Verification that $\mathcal{T}''$ is a triangulation of $X''$ and that $\phi''_1, \cdots, \phi''_b$ are the images of $\phi'_1, \cdots, \phi'_b$ in $H^1(X'')$ using Theorem \ref{Thm:DecomposeAlongSpheres}.
\item Verification that each component of $X''$ either is irreducible and boundary-irreducible or is a rational homology 3-sphere, using Corollary \ref{Cor:IrredIncompNP}.
\item Verification of the certificate for \textsc{Thurston norm ball for irreducible boundary-irreducible 3-manifolds with $b_1 \leq B$} for the manifold $X''$. The components of
$X''$ that are (possibly reducible) rational homology 3-spheres play no role here.
\item Verification that we may write $\mathcal{B} = \mathcal{B}_1 \cup \mathcal{B}_2 \cup \mathcal{B}_3$ such that \\
i) the elements of $\mathcal{B}_1$ form a basis for the kernel of the map
\[H^1(X) \rightarrow H^1(X - N^\circ(S)) \cong H^1(X'),\]
where $N(S)$ is a tubular neighbourhood of $S$;\\
ii) the elements of $\mathcal{B}_2$ project to a basis for the kernel of the map
\[H^1(X') \rightarrow H^1(X' \backslash\backslash D) = H^1(X''),\]
and this projection is one-to-one;\\
iii) the elements of $\mathcal{B}_3$ project to a basis of $W''$ and this projection is one-to-one.
\item Verification that the map $H^1(X) \rightarrow H^1(X'')$ sets up a bijection $V \rightarrow V''$ and a bijection $\mathcal{F} \rightarrow \mathcal{F}''$.
\end{enumerate}
The input to \textsc{Thurston norm ball for irreducible boundary-irreducible 3-manifolds with $b_1 \leq B$} requires a collection of integral cocycles that forms a basis for $H^1(X''; \mathbb{R})$. Although $\phi''_1, \cdots, \phi''_b$ might not form a basis, they do form a spanning set, and therefore some subset of them (which can easily be found) forms a basis.
The output provides integral cocycles that form a basis for the subspace $W''$ of norm zero. It also consists of a set of points $V''$ in $H^1(X''; \mathbb{Q})$ that give the vertices of the norm ball and a collection $\mathcal{F}''$ of subsets of $V''$ that give the faces. Looking at the long exact sequence of the pair $(X, X - N^\circ(S))$ we have
\begin{eqnarray*}
H^1(X, X- N^\circ(S)) \rightarrow H^1(X) \rightarrow H^1(X - N ^\circ(S)) \rightarrow H^2(X, X - N^\circ(S)) \rightarrow \cdots
\end{eqnarray*}
By excision and Poincar\'{e} duality we have
\[ H^1(X , X - N^\circ(S)) \cong H^1(N(S), \partial N(S)) \cong H_2(N(S)) \cong H_2(S). \]
Similarly $H^2(X , X - N^\circ(S)) \cong H_1(S) \cong 0$. Therefore, the above long exact sequence takes the form
\[ H_2(S) \rightarrow H^1(X) \xrightarrow{p} H^1(X - N ^\circ(S)) \rightarrow 0. \]
Thus, $p$ is surjective. It is Thurston norm-preserving and its kernel is generated by spheres in $S$. Similarly, the map $H^1(X') \rightarrow H^1(X'')$ is surjective, norm-preserving and its kernel is generated by discs in $D$. Thus, we let $\mathcal{B}_1$ be a basis for the kernel of $p$. We let $\mathcal{B}_2$ be a collection of elements that are sent by $p$ to a basis for the kernel of $H^1(X') \rightarrow H^1(X'')$. Finally, assume that $\mathcal{B}_3$ is a subset of $H^1(X)$ that projects to a basis for the subspace $W''$ of $H^1(X''; \mathbb{R})$ with Thurston norm zero. Then $\mathcal{B} = \mathcal{B}_1 \cup \mathcal{B}_2 \cup \mathcal{B}_3$ is a basis for the subspace $W$ of $H^1(X;\mathbb{R})$ with Thurston norm zero.
Now, there is an induced isomorphism from $H^1(X; \mathbb{R}) /W$ to $H^1(X'' ;\mathbb{R}) / W''$ which is norm-preserving. Thus, we may obtain the points $V$ in $H^1(X ; \mathbb{Q})$ by running through each element of $V''$ in $H^1(X''; \mathbb{Q})$ and picking a point in its inverse image. A set of points in $V$ spans a face if and only if the corresponding points in $V''$ do. Thus, we obtain the required output for \textsc{Thurston norm ball for $b_1 \leq B$}.
We need to show that the certificate exists and can be verified in polynomial time.
By Theorem \ref{king}, there is a complete collection of disjoint essential normal spheres, $S$, in $\mathcal{T}$ with weight at most $2^{185t^2}$ where $t = |\mathcal{T}|$, the number of tetrahedra in $\mathcal{T}$. The normal coordinates of elements of $S$ are part of the certificate, and are given to us non-deterministically. Now we may decompose the manifold along $S$ and then attach balls to any resulting spherical boundary components. Let $X'$ be the resulting irreducible 3-manifold. Theorem \ref{Thm:DecomposeAlongSpheres} guarantees that we may build a triangulation $\mathcal{T}'$ of $X'$ with no more than $O(|\mathcal{T}|)$ tetrahedra, and simplicial 1-cocycles $\phi_j' \in H^1(X' ; \mathbb{Z})$ such that the cohomology classes $i^*([\phi_j])$ and $[\phi_j']$ are equal and $C_{\mathrm{nat}}(\phi_j ')$ is bounded above by a polynomial function of $|\mathcal{T}|$ and $C_{\mathrm{nat}}(\phi_j)$. Moreover, this procedure can be done in time that is a polynomial function of $bt(\log w(S))(\log(C_{\mathrm{nat}}(\phi_j))+1)$, which is bounded above by a polynomial function of $|\mathcal{T}|$ and $C_{\mathrm{nat}}(\Phi)$ by our assumption on the weight of $S$ and the complexity of the homology basis.
By Theorem \ref{Thm:JacoTollefsonDiscs}, there is a complete collection of compression discs for $X'$ that are normal in $\mathcal{T}'$ and with weight at most $280 |\mathcal{T}'|^2 2^{7|\mathcal{T}'|-1}$. Applying Theorem \ref{Thm:SpheresAndDiscs}, we may cut along these discs, forming a 3-manifold $X''$ and obtain a triangulation $\mathcal{T}''$ and cocycles $\phi''_1, \cdots, \phi''_b$. As above, the number of tetrahedra is $O(|\mathcal{T}'|)$ and therefore $O(|\mathcal{T}|)$. The cocycles $\phi_j''$ have $C_{\mathrm{nat}}$ that is bounded above by a polynomial function of $|\mathcal{T}|$ and $C_{\mathrm{nat}}(\Phi)$. The procedure may be completed in polynomial time.
Finally, the certificate for \textsc{Thurston norm ball for irreducible boundary-irreducible 3-manifolds with $b_1 \leq B$} is verified in polynomial time.
\end{proof}
\section{Other representations of the manifold and knot}
In the decision problem \textsc{Determining knot genus in the fixed 3-manifold $M$}, the manifold $M$ is given to us by means of a diagram $D$ for $\Gamma \cup L$, where $\Gamma$ is a graph in $S^3$ and $L$ is a framed link, and $K$ is specified by giving a diagram for $K \cup \Gamma \cup L$ that contains $D$ as a subdiagram. This method of representing $M$ and $K$ is a natural one. However, it also played a critical role in the proof of Theorem \ref{main:boundary}, as the construction of an efficient basis for $H_2(M - N^\circ(K), \partial M \cup \partial N(K))$ relied on this presentation of $M$ and $K$. So it is reasonable to consider other methods for representing $M$ and $K$, and to ask whether the resulting decision problems still lie in \textbf{NP}.
For simplicity, we will focus on closed orientable 3-manifolds $M$, although much of our discussion does generalise to the case of non-empty boundary.
One way of specifying a closed orientable 3-manifold is by giving a Heegaard splitting for it. Here, we are given a closed orientable surface $S$, a union $\alpha$ of disjoint simple closed curves $\alpha_1, \cdots, \alpha_g$ in $S$ and another collection $\beta$ of disjoint simple closed curves $\beta_1, \cdots, \beta_g$ in $S$, with the property that $S - N^\circ(\alpha)$ and $S - N^\circ (\beta)$ are both planar and connected. We also assume that each component of $S - N^\circ(\alpha \cup \beta)$ is a disc. We suppose that $M$ is obtained by attaching two handlebodies to $S$ so that the curves $\alpha$ bound discs in one handlebody and the curves $\beta$ bound discs in the other handlebody. We think of this presentation of $M$ as fixed and given to us in some way, for example by specifying a triangulation of $S$ in which the curves $\alpha$ and $\beta$ are all simplicial.
We now wish to add $K$ to the picture. We do this by specifying a diagram for $K$ in $S$, in other words an immersed curve with generic double points at which under/over crossing information is specified. We also assume that this immersed curve intersects the $\alpha$ and $\beta$ curves transversely. We call this a \emph{diagram} for $K$. This specifies an embedding of $K$ into $S \times [-1,1]$ and hence into $M$, once we have agreed on the convention that the handlebody with discs attached to the $\alpha$ curves lies on the $S \times \{ - 1\}$ side. We say that the \emph{total crossing number} of $K$ is the sum of the number of crossings of $K$ with itself and its number of intersections with the $\alpha$ and $\beta$ curves. This is our measure of complexity for $K$.
Note that every knot $K$ in $M$ is specified by such a diagram, as follows. Each handlebody is a regular neighbourhood of a graph. We can isotope $K$ off a small open regular neighbourhood of these two graphs. It then lies in the complement of this open neighbourhood, which is a copy of $S \times [-1,1]$. The projection $S \times [-1,1] \rightarrow S$ onto the first factor specifies the diagrammatic projection map. After a small isotopy, the image of $K$ has only generic double point singularities, which form the crossings of $K$ with itself.
Thus, we can phrase the following decision problem. We fix a Heegaard diagram for $M$ in a closed orientable surface $S$, as above.
\medskip
\noindent \textbf{Problem}: \textsc{Determining knot genus in the fixed closed orientable 3-manifold $M$ via a Heegaard diagram}.\\
\emph{Input}: A diagram of $K$ in $S$, as above, and an integer $g \geq 0$ in binary.\\
\emph{Question}: Is the genus of $K$ equal to $g$?\\
\begin{thm}
\label{Thm:HeegaardNP}
\textsc{Determining knot genus in the fixed closed orientable 3-manifold $M$ via a Heegaard diagram} lies in \textbf{NP}.
\end{thm}
\begin{remark}
\label{Rem:NonDisc}
We briefly discuss the above requirement that each component of $S - N^\circ(\alpha \cup \beta)$ is a disc. This almost always happens automatically anyway. Indeed, if some component of $S - N^\circ(\alpha \cup \beta)$ is not a disc, then it contains an essential simple closed curve that bounds a disc in both handlebodies. The Heegaard splitting is then reducible. However, we can always ensure that each component of $S - N^\circ(\alpha \cup \beta)$ is a disc, by performing an isotopy to $\beta$. For if $S - N^\circ(\alpha \cup \beta)$ is not a union of discs, then we can pick a properly embedded essential arc in some component joining the $\beta$ curves to the $\alpha$ curves, and then isotope the relevant $\beta$ curve along it, to introduce two new intersection points between the $\alpha$ curves and the $\beta$ curves. We call this a \emph{finger move}. Repeating this process if necessary, we end with the required collection of $\alpha$ and $\beta$ curves.
The reason for making this requirement is that it avoids the following scenario. Suppose that some component $P$ of $S - N^\circ(\alpha \cup \beta)$ is not a disc. Then we could choose a diagram of some knot $K$ to wind many times around $P$, plus possibly intersect $\partial P$. In this way, we would get infinitely many distinct diagrams, all with the same total crossing number. Thus, in this case, the total crossing number would not be a reasonable measure for the complexity of the diagram.
\end{remark}
We will prove Theorem \ref{Thm:HeegaardNP} by reducing \textsc{Determining knot genus in the fixed closed orientable 3-manifold $M$ via a Heegaard diagram} to \textsc{Determining knot genus in the fixed closed orientable 3-manifold $M$}. In order to do this, we need an algorithm to translate a diagram for a knot $K$ in a Heegaard surface to a planar diagram for $K$ lying in the complement of some surgery curves. This is provided by the following result.
\begin{thm}
\label{Thm:HeegaardToPlanarDiagram}
Let $S$ be a closed orientable surface with curves $\alpha = \alpha_1 \cup \cdots \cup \alpha_g$ and $\beta = \beta_1 \cup \cdots \cup \beta_g$ specifying a Heegaard splitting of $M$. Suppose that $S - N^\circ(\alpha \cup \beta)$ is a union of discs. Then there is a diagram $D$ of a framed link $L$ in $S^3$ that specifies a surgery description of $M$ and that has the following property. Let $K$ be a knot in $M$ given via a diagram of $K$ in $S$ with total crossing number $c$. Then there is a diagram of a knot in the complement of $L$ that is isotopic to $K$, that contains $D$ as a subdiagram and that has total crossing number $O(c^2)$. This may be constructed in polynomial time as a function of $c$. Here, the implied constant depends only on $M$ and the Heegaard splitting, and not on $K$.
\end{thm}
We start with the case of the \emph{standard} Heegaard splitting for $S^3$. This has curves $\alpha_1, \cdots, \alpha_g$ and $\beta_1, \cdots, \beta_g$ satisfying $|\alpha_i \cap \beta_j| = \delta_{ij}$.
\begin{lem}
\label{Lem:StandardHeegaardToPlanarDiagram}
Let $S$ be a closed orientable surface with genus $g$, equipped with curves that give the standard genus $g$ Heegaard splitting for the 3-sphere. Let $K$ be a knot given by a diagram in $S$ with total crossing number $c$. Then there is a diagram for $K$ in the plane with crossing number at most $c^2$. This may be constructed in polynomial time as a function of $c$. This remains true if $K$ is a link with several components. Furthermore, some of its components may be framed via surface framing in $S$, in which case we can also require that the resulting planar diagram specifies the same framing on these components.
\end{lem}
\begin{proof}
Let $c_K$ be the number of crossings in $S$ between $K$ and itself. Then the total crossing number $c$ of $K$ is
\[ c_K + \sum_i |K \cap \alpha_i| + \sum_i |K \cap \beta_i|. \]
We will modify the given diagram of $K$ in $S$ so that it becomes disjoint from the $\alpha$ curves. So consider a curve $\alpha_i$. We may isotope its intersection points with $K$ so that they all lie in a small neighbourhood of the point $\alpha_i \cap \beta_i$. We may then isotope $K$ across the disc bounded by $\beta_i$. This has the effect of removing these points of $\alpha_i \cap K$, but possibly introducing new crossings of $K$. Near each point of $K \cap \beta_i$, we get $|\alpha_i \cap K|$ new crossings of $K$. Thus, after these modifications, the total crossing number of $K$ is
\[ c_K + \sum_i |K \cap \alpha_i|(|K \cap \beta_i| -1) \]
which is clearly at most $c^2$.
We now use this to create a diagram for $K$ in the plane. We compress $S$ along the curves $\alpha_1, \cdots, \alpha_g$. Since the diagram for $K$ is now disjoint from these curves, the result is a diagram for $K$ in the 2-sphere, and hence the plane.
\end{proof}
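The arithmetic claim at the end of the proof above can be checked numerically. The sketch below (function name and the encoding of crossing counts as plain integers are our own) exhaustively verifies that $c_K + \sum_i |K \cap \alpha_i|(|K \cap \beta_i| - 1) \leq c^2$ on small genus-2 configurations:

```python
from itertools import product

def bound_holds(c_K, a, b):
    """Check the stated inequality: the modified diagram's crossing
    count c_K + sum a_i*(b_i - 1) is at most c^2, where
    c = c_K + sum a_i + sum b_i is the original total crossing number."""
    c = c_K + sum(a) + sum(b)
    modified = c_K + sum(ai * (bi - 1) for ai, bi in zip(a, b))
    return modified <= c * c

# Exhaustively check all small genus-2 configurations.
assert all(
    bound_holds(c_K, (a1, a2), (b1, b2))
    for c_K, a1, a2, b1, b2 in product(range(5), repeat=5)
)
```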
We now extend this to slightly more general Heegaard splittings for $S^3$.
\begin{lem}
\label{Lem:Isotopy}
Let $S$ be a closed orientable surface with genus $g$. Let $\alpha = \alpha_1 \cup \cdots \cup \alpha_g$ be a union of disjoint simple closed curves that cut $S$ into a planar connected surface. Let $\beta = \beta_1 \cup \cdots \cup \beta_g$ be another collection of disjoint simple closed curves with the same property. Suppose that there is an isotopy taking $\beta$ to curves that, with $\alpha$, form the standard Heegaard splitting for $S^3$. Let $K$ be a knot given by a diagram in $S$ with total crossing number $c$. Then there is a planar diagram for $K$ in $S^3$ with crossing number at most $c^2$. This diagram may be constructed in polynomial time as a function of $c$. Here, the implied constants depend on the curves $\alpha$ and $\beta$ but not $K$. This remains true if $K$ is a link with several components, some of which may be framed.
\end{lem}
\begin{proof}
We are assuming that there is an isotopy taking $\beta_1, \cdots, \beta_g$ to curves $\beta'_1, \cdots, \beta'_g$ satisfying $|\alpha_i \cap \beta'_j| = \delta_{ij}$. This isotopy may be achieved by performing a sequence of \emph{bigon} moves. Here, one has a disc $D$ in $S$ with the interior of $D$ disjoint from $\alpha$ and $\beta$, and with $\partial D$ consisting of a sub-arc of an $\alpha$ curve and a sub-arc of a $\beta$ curve. The isotopy slides this $\beta$ arc across $D$. We shall show how to create a new diagram for $K$ in $S$ when such a move is performed. This will have the property that the total crossing number of the new diagram is at most the total crossing number of the old diagram. Hence, after these moves are performed, we may construct a diagram for $K$ in the plane with crossing number at most $c^2$, using Lemma \ref{Lem:StandardHeegaardToPlanarDiagram}.
Within the disc $D$, there is a portion of the diagram for $K$. We will pull this portion of the diagram entirely through $\alpha$ or through $\beta$, so that after this, the arcs of $K$ within $D$ run directly from $\alpha$ to $\beta$ without any crossings. The choice of whether to slide this portion of the diagram through $\alpha$ or $\beta$ is made so that it does not increase the number of crossings. Thus, if there are $c_\alpha$ crossings between $K$ and $\alpha$ along $\partial D$, and $c_\beta$ crossings between $K$ and $\beta$ along $\partial D$, then after this operation, the number of crossings between $K$ and $\partial D$ is $2 \min \{ c_\alpha, c_\beta \}$. Thus, the total crossing number of $K$ has not gone up. After this, we may isotope $\beta$ across $D$ without changing the number of crossings.
\end{proof}
\begin{lem}
\label{Lem:DehnTwist}
Let $S$ be a closed orientable surface with genus $g$. Let $\alpha$ be disjoint simple closed curves that cut $S$ into a planar connected surface. Let $\beta$ be another collection of disjoint simple closed curves with the same property. Suppose that each component of $S - N^\circ(\alpha \cup \beta)$ is a disc. Let $C$ be an essential simple closed curve in $S$. Then there is a constant $\lambda \geq 1$ with the following property. Let $K$ be a link, some components of which may be framed, given by a diagram in $S$ with total crossing number $c$. Let $K'$ be obtained from $K$ by Dehn twisting about $C$, and let $\beta'$ also be obtained from $\beta$ by Dehn twisting about $C$. Then the total crossing number of the diagram on $S$ given by $K' \cup C$ with respect to the curves $\alpha$ and $\beta'$ is at most $\lambda c + \lambda$. Moreover, this diagram may be constructed in polynomial time as a function of $c$.
\end{lem}
\begin{proof}
By assumption, each component of $S - N^\circ(\alpha \cup \beta)$ is a disc. We realise this as a convex Euclidean polygon with straight sides, where each side is parallel to an arc of intersection with $\alpha$ or $\beta$. We may assume that $C$ intersects $\alpha \cup \beta$ minimally, and hence that its intersection with this disc consists of straight arcs. We isotope the diagram of $K$ within this disc so that most of it lies very close to one of the edges of the polygon and is distant from $C$. We also ensure that the remainder of the diagram consists of straight arcs. Each intersection point between $K$ and $C$ lies in a straight arc of $K$, and this straight arc has an endpoint on $\alpha \cup \beta$. Thus, there is a constant $\lambda_1>0$, depending on $\alpha$, $\beta$ and $C$, such that the number of crossings between $K$ and $C$ is at most $\lambda_1 c$. We now perform the Dehn twist about $C$, giving the link $K'$ and the curves $\beta'$. The intersection points between $K'$ and $\beta'$ correspond to the intersection points between $K$ and $\beta$. The crossings of $K'$ with itself correspond to the crossings of $K$ with itself. Each crossing between $K$ and $C$ gives $|C \cap \alpha|$ extra crossings between $K'$ and $\alpha$. Thus, the total crossing number of $K$ goes up by a factor of at most $1+ \lambda_1 |C \cap \alpha|$. We also need to consider the crossings involving $C$. There are at most $\lambda_1 c$ of these with $K'$, and at most a constant number with $\alpha \cup \beta'$. The required bound then follows.
\end{proof}
\begin{remark}
\label{Rem:ExtraFingerMoves}
In the above lemma, we made the hypothesis that each component of $S - N^\circ(\alpha \cup \beta)$ is a disc. We would like to ensure that $\alpha$ and $\beta'$ have the same property, in other words that each component of $S - N^\circ(\alpha \cup \beta')$ is a disc. However, this might not be the case. Near $C$, there are various components of $S - N^\circ(\alpha \cup \beta)$. The components of $S - N^\circ(\alpha \cup \beta')$ are obtained by cutting along $C$ and then possibly gluing some of these together in a different way. An example is shown in Figure \ref{fig:dehn}, where a component of $S - N^\circ(\alpha \cup \beta')$ is obtained from two components of $S - N^\circ(\alpha \cup \beta \cup C)$ glued together. However, if this process does create some components of $S - N^\circ(\alpha \cup \beta')$ that are not discs, they may be cut into discs using finger moves, as in Remark \ref{Rem:NonDisc}. The number of finger moves that are needed is at most $|C \cap (\alpha \cup \beta)|$. This has the effect of increasing the total crossing number of $K' \cup C$ by at most $2|K \cap C|$, which is at most a constant times $c$.
\end{remark}
\begin{figure}[h]
\centering
\includegraphics[width=0.7\textwidth]{dehn-twist-complement.pdf}
\caption{Dehn twisting $\beta$ along $C$} \label{fig:dehn}
\end{figure}
\begin{proof}[Proof of Theorem \ref{Thm:HeegaardToPlanarDiagram}]
We are given a closed orientable surface $S$ with curves $\alpha$ and $\beta$ specifying a fixed Heegaard splitting of $M$. We are also given a diagram in $S$ of a knot $K$ with total crossing number $c$. We will change the diagram and the Heegaard splitting in a sequence of modifications. There is an orientation-preserving homeomorphism of $S$ taking the curves $\beta = \beta_1 \cup \cdots \cup \beta_g$ to curves $\beta'' = \beta''_1 \cup \cdots \cup \beta''_g$ that satisfy $|\alpha_i \cap \beta''_j| = \delta_{ij}$. This homeomorphism is obtained by a product of Dehn twists about simple closed curves in $S$, followed by an isotopy. We can apply such a Dehn twist if we also add a surgery curve $C$ that undoes it. Thus, we can replace the knot $K$ and curves $\beta$ by a knot $K'$ together with the framed surgery curve $C$, and the curves $\beta'$ obtained by Dehn twisting along $C$. By Lemma \ref{Lem:DehnTwist}, the new knot $K'$ and surgery curve $C$ have total crossing number bounded above by a constant times $c$. (The additive constant in the lemma can be subsumed into the multiplicative constant since we can assume that the total crossing number is non-zero.) By Remark \ref{Rem:ExtraFingerMoves}, we can also ensure that each component of $S - N^\circ(\alpha \cup \beta')$ is a disc, at a cost of increasing the total crossing number of $K' \cup C$ by at most a constant factor. Repeating this for each Dehn twist in the sequence, we end with curves $\beta'$, a diagram for $K$ and the framed link $L$ specifying the surgery. This has total crossing number that is at most $O(c)$. The curves $\beta'$ are isotopic to $\beta''$, and so by Lemma \ref{Lem:Isotopy}, we obtain a planar diagram for $K \cup L$ with total crossing number that is at most $O(c^2)$.
\end{proof}
This completes the proof of Theorem \ref{Thm:HeegaardNP}.
\begin{remark} There is another possible way of representing $M$ and $K$ using triangulations. We could be simply given a triangulation for $M$ containing $K$ as a subcomplex. We would be told that this was indeed a triangulation of $M$. But in the absence of an efficient method converting this to a \emph{fixed} triangulation of $M$, it is hard to see how this could be useful.
\end{remark}
\bibliographystyle{plain}
\chapter{Synergy between neuroscience and machine learning}
\label{ch:ml-ns-synergy}
Both neuroscience and artificial intelligence share, as one of their goals, the purpose of uncovering and understanding the mechanisms of intelligence\footnote{Here and throughout this work we adhere to using this loosely defined term to denote the collection of properties and behavior patterns that we attribute to systems that have analytic capabilities, can operate using abstract notions and carry out high-level planning. The search for mechanisms of intelligence is congruent with the search for a precise definition of what intelligence is; until that search is over, we need a term we can use, and we use \emph{intelligence}.}. Neuroscience analyzes the existing examples of intelligent systems that animals and humans have, and tries to figure out how these systems work. Artificial intelligence approaches the task by searching through the space of possible solutions, implementing them one by one and using incremental improvements in performance as the guiding light. Sharing a common goal makes it inevitable that the paths of these two fields of scientific inquiry will cross.
\section{Neuroscience-inspired machine learning}
\label{sec:ns-to-ml}
Before exploring the ways machine learning can contribute to neuroscientific research, we first review the role neuroscience has played in establishing one of the most important machine learning methods of the present day. Since both fields contribute to the quest of solving intelligence, we find that it is important to explore the symbiosis between the fields, establish the benefit it had and highlight the importance of maintaining that symbiotic relationship going forward. This section provides the context for our work and helps to advocate in favor of interdisciplinary scientific inquiry, by which the results and methods of one field can greatly benefit the progress in another.
\subsection{Historical influence of neuroscience}
The first contribution from the field of neuroscience to the field of logical calculus, and thus to the early stages of AI research, can be traced to \cite{mcculloch1943logical}, where the authors describe the nervous system as \emph{``a net of neurons, each having a soma and an axon. Their adjunctions, or synapses, are always between the axon of one neuron and the soma of another. At any instant a neuron has some threshold, which excitation must exceed to initiate an impulse''}. They then show that \emph{``to each reaction of any neuron there is a corresponding assertion of a simple proposition''}, propose a mathematical model of an artificial neuron that is capable of the same behaviour as the simplified biological neuron in the description, and postulate \emph{``Theorem II: Every temporal propositional expression is realizable by a net of order zero.''}, allowing parallels to be drawn between mathematical logic and the inner workings of the human brain.
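A threshold unit of the kind McCulloch and Pitts describe can be sketched in a few lines; the weights and thresholds below are illustrative choices showing how simple propositions are realized by single units, not values taken from the original paper:

```python
def mp_neuron(inputs, weights, threshold):
    """A McCulloch-Pitts unit: fires (returns 1) iff the weighted
    sum of its binary inputs reaches the threshold."""
    return int(sum(w * x for w, x in zip(weights, inputs)) >= threshold)

# Simple propositions realized by single units (illustrative parameters).
def AND(x, y):
    return mp_neuron((x, y), (1, 1), threshold=2)

def OR(x, y):
    return mp_neuron((x, y), (1, 1), threshold=1)

def NOT(x):
    return mp_neuron((x,), (-1,), threshold=0)
```

Networks of such units can then realize arbitrary propositional expressions, which is the content of the theorem quoted above.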
Growing attention towards the \emph{``feasibility of constructing a device possessing human-like functions as perception, recognition, concept formation, and the ability to generalize from experience''}~\citep{rosenblatt1957perceptron} led to the first mechanism that was able to modify its behavior by learning from examples -- the perceptron~\citep{rosenblatt1958perceptron}, a physical system built from artificial neurons that were able to adjust their weights (a simplistic artificial analog of synaptic connections).
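The perceptron's weight-adjustment procedure can be illustrated with a minimal sketch of the classic learning rule (the toy dataset, function names and hyperparameters are our own):

```python
def train_perceptron(samples, epochs=20, lr=1.0):
    """Learn weights and a bias with the classic perceptron rule:
    nudge the weights toward each misclassified example."""
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, target in samples:
            pred = int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)
            err = target - pred                 # -1, 0 or +1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return int(sum(wi * xi for wi, xi in zip(w, x)) + b > 0)

# Learn the linearly separable OR function from examples.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
```

For linearly separable data such as this, the rule is guaranteed to converge to a separating set of weights.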
According to \cite{schmidhuber2015deep}, early works on the animal visual cortex such as the ones by \cite{hubel1959receptive, hubel1962receptive} inspired layered architectures in artificial neural networks that became known as \emph{multilayer perceptrons} \citep{rosenblatt1961principles}, which, paired with the power of the backpropagation algorithm \citep{werbos1974beyond, rumelhart1985learning}, are the backbone of modern deep learning~\citep{lecun2015deep}. The concept of the receptive field from the same work has contributed to the notion and success of convolutional neural networks in computer vision \citep{fukushima1980neocognitron, lecun1998gradient, krizhevsky2012imagenet} by suggesting how visual information is processed in the animal brain.
A second pillar of contemporary AI \citep{hassabis2017neuroscience} is the field of reinforcement learning (RL). Dating back to the work on animals done by \cite{pavlov1903experimental}, which later became known as \emph{classical conditioning} \citep{rescorla1972theory}, the principles of reinforcement learning made their way into computer science and machine learning with the works of \cite{sutton1990time, sutton1998introduction}. Paired with deep learning, reinforcement learning was instrumental in achieving such results as computers learning to play computer games with no prior knowledge \citep{mnih2015human, alphastarblog, OpenAI_dota}, defeating the world champion at Go \citep{silver2017mastering}, and others.
The historical lens that we have presented here allows us to appreciate the enormous impact neuroscience had on the development of the fields of machine learning and artificial intelligence.
\subsection{Examples of modern machine learning techniques\\inspired by neuroscientific insights}
There exists a certain difference in opinion when it comes to the question of how brain-like \emph{modern} artificial learning systems are. In its most popular form the question is ill-posed and does not look into the matter deeply enough to make the debate useful. We would like to attempt to rectify that by highlighting that it is important to keep the discussion separate for different levels of analysis \citep{marr1976understanding}: the level of implementation, the level of algorithm and representation, and the most abstract -- the computational level.
On the level of \emph{implementation} (following Marr's tri-level taxonomy), while there is a superficial similarity between biological neural networks and modern machine learning architectures, the engineering specifics differ a lot. At this lowest level of analysis we would side with the claim that apart from the superficial similarity between a biological neuron and an artificial neuron, the systems are fundamentally different. However, as we move to a higher level of abstraction at the \emph{level of algorithm and representation}, the design principles, representations and strategies of information processing of biological systems sometimes start to resemble the architectural principles that the best artificial systems rely on. We will show several such examples later in this chapter. On the \emph{computational} level, which reflects the goal and purpose of the computation, biological and artificial systems are often identical: object and speech recognition, speech synthesis, decision-making based on observations, spatial orientation -- these are some of the examples of computational goals that biological and artificial systems share.
In this section we will demonstrate several examples where similarity on the computational level (the goal of the computation) gives rise to similarity on the level of algorithm and representation. In other words, when the goal of an artificial system coincides with the goal of the corresponding biological system, the algorithmic mechanism of achieving that goal in the artificial system follows the mechanism we know to exist in its biological counterpart. These examples extend the discussion of the similarities between artificial and biological systems and demonstrate that there is more to this question than the simplistic comparison between neurons and units in an artificial neural network.
\ \\
\textsc{Working memory.} The mechanism of \emph{working memory} is an important cognitive system that allows us to hold and use the information that is immediately relevant to the task at hand. It can contain context, recent occurrences, and bits of information that preceded the current moment. It also allows us to hold pieces of information back while running another cognitive process and then recall the held-back information. This ability is crucial for reasoning and decision-making, where the next logical step might depend on the results of another, intermediate, process. A very similar challenge exists in artificial learning systems: an algorithm might need to remember some information to use it later, to match information across time and make decisions based on temporally disjoint inputs. Recurrent Neural Networks (RNNs) \citep{hopfield1982neural} and later Long Short-Term Memory (LSTM) networks \citep{hochreiter1997long} were proposed to address that challenge. An LSTM network consists of extended artificial neurons that have a memory cell to hold certain values and a set of gates that regulate under which conditions the content of the memory cell can be modified or released back into the network. Since we do not know how biological working memory works, we cannot claim similarity on the algorithmic level, but the similarity on the computational level is clearly present.
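A single step of the LSTM mechanism described above can be sketched as follows (a minimal numpy implementation of the standard gating equations; the layer sizes and random initialization are illustrative assumptions):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h, c, W, U, b):
    """One step of an LSTM cell: the input, forget and output gates
    decide what enters, stays in and leaves the memory cell c."""
    z = W @ x + U @ h + b              # all four gate pre-activations
    n = len(c)
    i = sigmoid(z[0:n])                # input gate
    f = sigmoid(z[n:2 * n])            # forget gate
    o = sigmoid(z[2 * n:3 * n])        # output gate
    g = np.tanh(z[3 * n:4 * n])        # candidate cell content
    c_new = f * c + i * g              # update the memory cell
    h_new = o * np.tanh(c_new)         # expose gated memory to the network
    return h_new, c_new

# Illustrative sizes: 3 inputs, 2 hidden units.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 3))
U = rng.standard_normal((8, 2))
b = np.zeros(8)
h, c = np.zeros(2), np.zeros(2)
for x in rng.standard_normal((5, 3)):  # run over a short input sequence
    h, c = lstm_step(x, h, c, W, U, b)
```

The forget gate is what lets the cell hold a value back across many steps and release it only when the gating conditions are met.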
\ \\
\textsc{Associative memory.} It has been conjectured that there are multiple memory types in a human brain \citep{tulving1985many}. Other types of biological memory gave rise to various ideas in machine learning and reinforcement learning. \emph{Associative memory}, characterized by the ability to recall a certain piece of information by triggering a certain stimulus, found its reflection in an artificial memory model called a \emph{Hopfield network}~\citep{hopfield1982neural} -- a neural network that can store different patterns and, given a partial pattern, recall the whole. According to \cite{hassabis2017neuroscience}, \emph{experience replay}, a critical component of Deep Q-Network (DQN) \citep{mnih2013playing}, was \emph{``directly inspired by theories that seek to understand how the multiple memory systems in the mammalian brain might interact''}; they draw the parallel between the role of the hippocampus and the experience replay buffer: \emph{``the replay buffer in DQN might thus be thought of as a very primitive hippocampus, permitting complementary learning in silico much as is proposed for biological brains''}. Persistent, \emph{long-term memory} is also a crucial part of a biological intelligent system, and, although its biological mechanisms have not yet found direct reflection in artificial intelligence systems, the conceptual necessity for this type of memory is widely acknowledged and was implemented in Neural Turing Machines~\citep{graves2014neural} and later in an architecture called the Differentiable Neural Computer~\citep{graves2016hybrid}.
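A Hopfield network of the kind referenced above can be sketched in a few lines of numpy: Hebbian storage of $\pm 1$ patterns followed by iterated threshold updates (the stored pattern below is an arbitrary illustration):

```python
import numpy as np

def store(patterns):
    """Hebbian learning: the weight matrix is the sum of outer
    products of the stored +/-1 patterns, with zero diagonal."""
    n = patterns.shape[1]
    W = sum(np.outer(p, p) for p in patterns).astype(float)
    np.fill_diagonal(W, 0.0)
    return W / n

def recall(W, probe, steps=10):
    """Repeatedly update every unit by the sign of its input field
    until the state settles into a stored attractor."""
    s = probe.copy()
    for _ in range(steps):
        s = np.where(W @ s >= 0, 1, -1)
    return s

pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = store(pattern[None, :])
corrupted = pattern.copy()
corrupted[0] = -corrupted[0]          # flip one bit of the stimulus
recovered = recall(W, corrupted)
```

Presenting the corrupted probe triggers recall of the complete stored pattern, which is exactly the associative behaviour described above.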
\ \\
\textsc{Predictive coding.} The theory of \emph{predictive coding} \citep{rao1999predictive} proposes that the brain learns a statistical model of the sensory input and uses that model to predict neural responses to sensory stimuli. Only when the prediction does not match the actual response does the brain propagate the mismatch to the next level of the processing hierarchy. By building and memorizing an internal model of the sensory input, such a mechanism would reduce the redundancy of fully processing each sensory input anew at all levels and thus greatly reduce the processing load on the sensory system. A recently proposed AI agent architecture called MERLIN \citep{wayne2018unsupervised} achieves a marked improvement on tasks \emph{``involving long delays between relevant stimuli and later decisions: <...> navigation back to previously visited goals, rapid reward valuation, where an agent must understand the value of different objects after few exposures, and latent learning, where an agent acquires unexpressed knowledge of the environment before being probed with a specific task''} by introducing a similar principle into the architecture of the system. The authors point out that using reinforcement learning to learn the entire system at once, including the representations of the input, recurrent computation, rules for accessing the memory, and the action-making policy, is indirect and inefficient. They propose to decouple the learning of the sensory data from the learning of the behavior policy that drives the decision-making by creating a subsystem that learns to compress sensory observations into an efficient representation in an unsupervised manner. The decision-making policy is a recipient of already encoded information and thus does not have to learn the encoding through trial and error. The authors acknowledge the theory of predictive coding as one of the inspirations for the architecture.
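A toy linear version of the predictive coding idea can be sketched as follows: an internal cause $z$ is adjusted until the top-down prediction $Wz$ matches the input, and only the residual prediction error would be passed up the hierarchy. The weight matrix and the latent cause below are arbitrary illustrative values, not a model of any real circuit.

```python
import numpy as np

def infer_cause(x, W, iters=50, lr=0.2):
    """Adjust latent cause z until the top-down prediction W @ z matches x.

    Only the mismatch (prediction error) drives the updates: a perfectly
    predicted input generates no error signal to propagate further.
    """
    z = np.zeros(W.shape[1])
    for _ in range(iters):
        error = x - W @ z          # prediction error at the input level
        z += lr * W.T @ error      # update the internal model to reduce it
    return z, x - W @ z

# illustrative generative weights (columns orthogonal by construction)
W = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0],
              [1.0, -1.0], [0.5, 0.5], [-0.5, 0.5]])
z_true = np.array([1.5, -0.5])
x = W @ z_true                     # an input the model can fully explain
z_hat, residual = infer_cause(x, W)
```

Once the input is fully explained, the residual is (numerically) zero: a familiar stimulus produces no error to process at higher levels, which is the redundancy reduction the theory describes.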
\ \\
\textsc{Successor representations.} The trade-off between \emph{model-based} and \emph{model-free} methods is a long-standing question in the field of RL. As the name suggests, agents in model-based methods have to learn (or have access to) a model of the environment, while model-free agents try to map observations directly onto actions or value estimates. While having a model would allow the agent to plan ahead and be more \emph{sample efficient} during learning, it also poses significant challenges, as learning a model of the environment, especially a complex one, is a very hard task. Many successful results were achieved with model-free methods, as those are easier to implement, and learning the mapping between observations and actions is in most cases sufficient and easier than properly learning a model of the environment. The idea of \emph{successor representations} \citep{dayan1993improving} lies in between those two approaches. During learning, the agent counts how often a transition between a state $s_a$ and a state $s_b$ has occurred. After interacting with the environment for some time, the agent forms what is called an \emph{occupancy matrix} $M$, which holds empirical evidence of transitioning between the states. This matrix is much easier to obtain than a full model of the environment, and at the same time it provides some of the benefits of the model-based approach by allowing the agent to estimate which transition is likely to occur next. The hypothesis that the brain uses successor representations proposes that the brain stores in some form the occupancy probabilities of future states, and is supported by behavioral \citep{tolman1948cognitive, russek2017predictive, momennejad2017successor} and neural evidence \citep{alvernhe2011local, stachenfeld2017hippocampus}. Using these statistics the brain can estimate which states are likely to occur next, serving as a computationally efficient approximation of a full-fledged environment model. The revival of the original concept in the context of RL~\citep{momennejad2017successor} proposes a way to introduce some of the benefits of model-based methods without sacrificing the efficiency and ease of implementation of model-free methods.
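For a small discrete environment the successor matrix can be computed directly from empirical transition counts; the counts below are made up for a 4-state chain. With a discount factor $\gamma$, the matrix $M = (I - \gamma T)^{-1}$ accumulates the discounted expected future occupancy of each state, and state values follow as $V = Mr$.

```python
import numpy as np

# illustrative transition counts on a 4-state chain: mostly 0 -> 1 -> 2 -> 3 -> 0
counts = np.array([[ 0, 9, 1, 0],
                   [ 0, 0, 10, 0],
                   [ 1, 0, 0, 9],
                   [10, 0, 0, 0]], dtype=float)
T = counts / counts.sum(axis=1, keepdims=True)   # empirical transition matrix

gamma = 0.9
# successor matrix: discounted expected occupancy of future states
M = np.linalg.inv(np.eye(4) - gamma * T)

# given a reward vector, state values come almost for free: V = M r
r = np.array([0.0, 0.0, 0.0, 1.0])               # reward only in state 3
V = M @ r
```

Note that the agent never learned environment dynamics beyond transition counts, yet `V` correctly ranks states by their proximity to the rewarded state, which is precisely the middle ground between model-free and model-based computation.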
\ \\
\textsc{Grid cells.} In 2014 the Nobel Prize in Physiology or Medicine was awarded for the discovery of the cells that constitute the positioning system in the brain \citep{o1976place, sargolini2006conjunctive}. In the recent work by \cite{banino2018vector} it was demonstrated that an artificial agent trained with reinforcement learning to navigate a maze starts to form a periodic representation of space similar to that provided by grid cells. This representation \emph{``provided an effective basis for an agent to locate goals in challenging, unfamiliar, and changeable environments''}.
\ \\
\textsc{Attention.} After providing the initial motivation for convolutional neural networks (CNNs) via the ideas of hierarchical organization and the concept of a receptive field, neuroscience served as a source of ideas for further improvement through the concept of \emph{attention} \citep{desimone1995neural, posner1990attention, olshausen1993neurobiological}. Adding similar functionality to CNNs \citep{mnih2014recurrent, ba2014multiple} helped to further improve the performance of visual object recognition. The same concept was found to be useful in artificial neural networks designed for natural language processing tasks \citep{bahdanau2014neural, vaswani2017attention} and as a component of the memory module of differentiable neural computers \citep{graves2016hybrid}.
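The core computation behind the attention mechanism of \cite{vaswani2017attention} is compact enough to sketch directly; the dimensions below are arbitrary.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Scaled dot-product attention.

    Each query attends over all keys; the softmax weights act as a
    soft spotlight deciding which values are passed through.
    """
    d = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d))
    return weights @ V, weights

rng = np.random.default_rng(1)
Q = rng.normal(size=(2, 4))   # 2 queries
K = rng.normal(size=(5, 4))   # 5 keys
V = rng.normal(size=(5, 3))   # 5 values
out, w = attention(Q, K, V)
```

Each row of `w` sums to one, so the output for a query is a convex combination of the values, weighted by how relevant each key is to that query.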
\ \\
\textsc{Memory consolidation.} The standard model of \emph{systems memory consolidation} \citep{squire1995retrograde} suggests that a novel memory is first retained in the hippocampus and then, with each new recollection of that memory, its engram is strengthened in the neocortex, making the memory permanent \citep{dudai2004neurobiology}. On the one hand, this mechanism ensures that important memories, those that are recalled often, become permanent; on the other, it keeps the neocortex free of clutter and thus makes it more stable. If every new memory were immediately consolidated, we would remember too much ``noise''. A similar principle is used in the Double DQN architecture \citep{van2016deep} to make the deep reinforcement learning process more stable: two networks instead of one are maintained at the same time; the \emph{online} network is used to pick actions and its weights are updated immediately, while the second, \emph{target} network, is used to evaluate the selected actions and is updated periodically. Periodic updating of the network, in contrast to updating it immediately, provides a more stable evaluation of actions -- within the period between the updates the actions are evaluated by the same network, allowing those evaluations to have a common reference point and thus serve as a better relative measure of the quality of an action.
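The stabilizing effect of periodic consolidation can be sketched with a toy version of the online/target split; the ``networks'' here are plain weight vectors, and the random gradient stands in for a real DQN training step.

```python
import numpy as np

class TwoNetworkSketch:
    """Toy illustration of the online/target update schedule in Double DQN.

    The point is the schedule, not the function approximator: the online
    weights change every step, the target weights only periodically.
    """
    def __init__(self, n_weights, sync_every=100):
        self.online = np.zeros(n_weights)
        self.target = np.zeros(n_weights)
        self.sync_every = sync_every
        self.step = 0

    def update(self, gradient, lr=0.01):
        self.online -= lr * gradient          # online net learns every step
        self.step += 1
        if self.step % self.sync_every == 0:  # target consolidated only
            self.target = self.online.copy()  # periodically: stable reference

rng = np.random.default_rng(2)
agent = TwoNetworkSketch(n_weights=4, sync_every=100)
for _ in range(250):
    agent.update(rng.normal(size=4))
```

Between consolidation points the target weights do not move, so all action evaluations within that window share a common reference, mirroring how the neocortical store stays stable while the hippocampal one changes quickly.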
\ \\
The examples we discussed in this section demonstrate that, on the algorithmic level, biological and artificial systems sometimes share curious similarities. This observation holds a very promising message in the context of our work: since the systems share some of their properties, it can be informative to analyze one in order to gain knowledge about the other. In our case -- to analyze artificial learning systems, explore their mechanisms, and hypothesize about the similarities between those mechanisms and the cognitive processes of biological systems.
\section{The role of machine learning in neuroscience}
\label{sec:ml-to-ns}
Approximately 20 years after \cite{hodgkin1952quantitative} published their fundamental single neuron model that inspired multiple works in mathematical modeling of neural dynamics, the field had accumulated enough methodology and data to start looking into models of neuronal populations~\citep{wilson1972excitatory, wilson1973mathematical, nunez1974brain}. Due to the volume of that data and the complexity of the systems being modeled, the community turned to statistical methods to provide approximations of aggregate behavior~\citep{lopes1974model, buice2009statistical, pillow2005neural, rolls2010noisy}. See \cite{van2013modeling} for more examples. The adoption of statistical modeling, a precursor of modern machine learning, established a link between neuroscience and statistical learning. The advancement of computational power and the growing amount of digital data fueled the development of data processing tools, pattern recognition algorithms, and data analysis methods. These tools found multiple applications in various fields, including, of course, the field of neuroscience~\citep{vu2018shared, hassabis2017neuroscience, paninski2017neural, hinton2011machine, glaser2019roles}. According to Semantic Scholar\footnote{https://www.semanticscholar.org}, the percentage of neuroscience papers that mention machine learning has risen from 1.3\% to 7.6\% over the last two decades. In this section we give an overview of the major roles machine learning methods play in neuroscientific inquiry. We then suggest that there is a methodological component that is readily available, would benefit the study of neural systems, and would extend the role of machine learning in neuroscience even further, but that, in our observation, lacks mass adoption.
\ \\
\emph{Neural decoding} represents the most direct application of machine learning methods to neurological data. A dataset of neural responses to predetermined stimuli is collected, and a machine learning method is tasked with building a model that can reverse the mapping -- given a neural signal, it has to learn to identify which stimulus caused that neural response. It does so by inferring a set of rules or statistical associations that map neural responses to the corresponding stimuli in the dataset. One of the earliest applications of data analysis to characterize stimulus-specific cortical activity can be traced back to \cite{mountcastle1969cortical} and displays a case of manual data analysis. With the rise of machine learning techniques the process of looking for patterns in vast arrays of data became automated, and now it is safe to say that machine learning is the default approach to neural decoding. While the studies that employ the approach are too numerous to list here, we would like to mention a few. The algorithm proposed by \cite{bialek1991reading} is one of the first direct attempts to read the neural code to identify movement detection in the blowfly's visual system. Already in the work by \cite{seung1993simple}, statistical modeling based on maximum likelihood estimation was applied to decode direction from the activity of sensory neurons. \cite{zhang1998interpreting} successfully applied decoding methods to identify an animal's location based on the activity of its place cells. \cite{haxby2001distributed} demonstrated that it is possible to decode fMRI recordings of responses to 8 different visual categories with an average accuracy of 96\%. Decoding the prefrontal activity of rats learning an alternating goal task made it possible to predict a rat's decisions, effectively reading the rat's future intentions directly from brain activity \citep{baeg2003dynamics}.
In the works of \cite{nishimoto2011reconstructing, shen2019deep} it was demonstrated that it is possible to train a decoder that can, albeit with limited quality, reconstruct visual information such as images or movies directly from fMRI recordings of the occipitotemporal visual cortex of human subjects who watched natural movies or images. In their extensive fMRI study, \cite{huth2012continuous} mapped 1705 object and action categories to the changes they evoke in human test subjects watching natural movies, allowing them to map the semantic space of those categories onto cortical maps of the human brain. Applying the decoding toolbox to the responses of cells to facial stimuli allowed \cite{chang2017code} to identify the code for facial identity in the primate brain. The uncovered code allowed the authors both to predict the neural responses that a particular facial stimulus will elicit and to decode facial identity from the neural activity. Using recurrent neural networks to decode articulatory movements from cortical activity allowed \cite{anumanchipalli2019speech} to decode intended utterances and synthesize audible speech.
Neural decoding of the activity of the motor cortex into the intended movement of a subject has branched into its own field, called \emph{brain-computer interfaces}~\citep{wolpaw2002brain}. \cite{fetz1969operant} first demonstrated that a monkey can learn to operate a robotic hand controlled by the activity of single cells in the motor cortex, effectively learning to operate an artificial limb. With more advanced multi-site neural ensemble recording capabilities, \cite{wessberg2000real} were able to make accurate real-time predictions of the trajectories of arm movements of a non-human primate and successfully use those predictions for the control of a robotic arm. Similar work by \cite{serruya2002brain} demonstrated even wider applicability of the method by showing that the same approach allows a monkey to move a computer cursor to any location on the computer screen. Finally, in the work by \cite{hochberg2006neuronal} the technology was successfully applied to a human subject, allowing them to operate a robotic limb with nothing other than the mental intention to do so.
The performance of a decoding model can be used as a way to quantify a lower bound on the \emph{amount of information or selectivity} of a certain brain region~\citep{hung2005fast, raposo2014category, rich2016decoding}. Providing a learning algorithm with the data from the region of interest and tasking it with decoding forces the algorithm to uncover the information that is pertinent to the process of decoding. The level of performance of the final model informs the investigator of the existence and quality of relevant information in that region.
The difference in performance of decoders trained under different experimental conditions provides a way to \emph{quantify that difference and allow for quantitative comparison}. For example, \cite{hernandez2010decoding} recorded the neuronal activity of diverse cortical areas while monkeys performed a certain task. The level of performance of the decoding models trained on the activity from different cortical areas was used as an indicator of the involvement of each particular area in that particular task. A similar approach is used by \cite{van2010triple} to analyze the contributions of the hippocampus, ventral striatum, and dorsal striatum to information processing during a spatial decision task. By comparing the results of decoding the activity of the posterior parietal cortex (PPC) under two different tasks, \cite{quiroga2006movement} were able to establish that activity in PPC predicts the location of targets significantly worse than it predicts the intended movements, providing insight into the functional role of that area.
In Section \ref{sec:ns-to-ml} we described multiple models that the field of artificial intelligence has produced in attempts to solve various perceptual and behavioral tasks. Most of the problems we challenge artificial intelligence systems with are ones that we, humans, are already capable of solving. This fact naturally leads to the idea that it could be interesting to compare the biological mechanisms of solving these problems with the mechanisms that are employed and learned by artificial systems. The modeling branch of computational neuroscience approaches this question by proposing models of biological systems and comparing the behavior of the proposed models with biological data. We find, and this is one of the main arguments we would like to put forward in this thesis (see Chapter \ref{ch:ml-for-modeling}), that the rise of the fields of artificial intelligence and machine learning has afforded us an alternative way to investigate that question. For example, to quantify the similarity between the hierarchies of a convolutional neural network (CNN) and the human ventral stream, \cite{yamins2013hierarchical} employed representational similarity analysis (RSA) \citep{kriegeskorte2008representational} and found that the representations that are formed in a CNN were similar to the representations in the ventral stream. Similar and more detailed findings were reported by \cite{cadieu2014deep, yamins2014performance, gucclu2015deep, seeliger2017cnn, kuzovkin2018activations}, confirming the evidence in favor of similarities in the hierarchical organization of both biological and artificial systems of vision. \cite{khaligh2014deep} compared the representational dissimilarity matrices (RDM) of 37 computational models of vision, reaching the same conclusion: that deep convolutional neural networks explain the activations of the inferior temporal cortex during a visual object recognition task.
Analogously to visual perception, there are comparisons between the hierarchical structure of the human auditory cortex and the hierarchy of artificial neural networks trained to process auditory data \citep{kell2018task, huang2018connecting}.
A new potential role of machine learning in neuroscience was alluded to in the works on the Neurochip~\citep{jackson2006long, zanos2011neurochip, nishimura2013spike, zanos2018phase}. Being an example of a bidirectional brain-computer interface, the Neurochip both reads the inputs from biological neurons and, after running on-chip computations on those inputs, stimulates the cortex with its output connections. Seeing the similarities between some computational mechanisms of biological and artificial systems, we are very curious to see the development of that idea and the creation of a computational system that is a hybrid of biological and artificial circuits.
\emph{Biologically plausible deep learning} is a direction of research that develops artificial learning architectures under the restrictions that biological systems are subject to. \cite{hinton2007backpropagation} outlined a list of reasons why the processes employed by modern deep learning methods cannot be running in the brain. The question was further explored and reviewed by~\cite{bengio2015towards}. This sparked multiple works~\citep{urbanczik2014learning, lillicrap2016random, liao2016important, scellier2017equilibrium} in which those limitations were addressed to demonstrate that it is still possible to achieve learning in an artificial system while respecting some of the biological constraints. This line of research creates yet another way for machine learning to play a role in creating plausible computational models of neuronal processing, thus advancing our understanding of the brain.
\ \\
This overview of the major ways in which machine learning benefits the advancement of neuroscience highlights the fact that, for various reasons, numerous machine learning models are being trained on neurological data. While all of those models serve their purpose in the above-mentioned studies, many of them are treated as ``black box'' tools, where the input is provided and the output is tested and accepted for further analysis, interpretation, and confirmation of the experimental findings. In the next chapter we will argue that some of the models created in these and other scientific studies have inadvertently captured some of the key computational mechanisms of the phenomena the models were trained on. The analysis of how exactly these models achieve their results and reach their predictions could lead to unearthing those captured computational mechanisms. We find that, while many research groups are working in this direction, a more rigorous and widespread adoption of the tools that facilitate interpretation of machine learning models would require little effort and could lead to new and unexpected byproducts of the main investigation.
\chapter{Machine learning as automatic builder of computational models}
\label{ch:ml-for-modeling}
\begin{flushright}
\emph{``All models are wrong,\\but some are useful.''}\\
-- George E. P. Box
\end{flushright}
\ \\
Building a model of a complex phenomenon is an ancient way for humans to gain knowledge and understanding of that phenomenon. Models of planetary motion~\citep{kepler1621epitome}, gravity~\citep{newton1687philosophiae, einstein1915feldgleichungen}, standard model of particle physics~\citep{wilczek1975weak} are prominent examples of this approach. By comparing the predictions made by a model to observations in the real world, we theorize that the mechanism driving the model could be the same as the one driving the phenomenon. By building more and more accurate models, we approach the true mechanism closer and closer, hoping to get to the point of being able to artificially replicate the phenomenon in full.
This line of scientific inquiry is being widely applied to brain studies as well. The method of mathematical modeling spans the whole field of computational neuroscience and includes single neuron models~\citep{lapicque1907recherches, hodgkin1952quantitative, koch2004biophysics, herz2006modeling}, network models~\citep{white1986structure, hagmann2008mapping, bullmore2009complex, sporns2010networks, bassett2017network, bassett2018nature}, models of memory~\citep{durstewitz2000neurocomputational, frank2001interactions, chaudhuri2016computational}, cognition~\citep{smith2004psychology, oaksford2009precis, tenenbaum2011grow, palmeri2017model, kriegeskorte2018cognitive} and learning~\citep{hebb1949organization, raisman1969neuronal, zilles1992neuronal, fuchs2014adult}, sensory processing~\citep{barlow1959sensory, barlow1967neural, ernst2002humans, weiss2002motion, olshausen2004sparse, kording2004bayesian, kriegeskorte2018cognitive}, and other neural phenomena. Both computational and structural modeling have led to numerous discoveries and important contributions to our understanding of the nervous system.
The most prized property of a model is our ability to understand its mechanism and thus understand the phenomenon being modeled. Coming up with a theory of how a particular phenomenon works and proposing a model that describes it has always required careful and extensive observation, a good intuitive understanding of the process, and an almost artful ability to consolidate the intuition with the observation into a formal description that generalizes well across all instances of the phenomenon. A good sign of a successful model is its ability to make predictions about future observations and results of interactions, making predictability the first litmus test of any model or theory. Models and theories that do not pass that test are usually discarded from the pool of scientific knowledge.
A typical machine learning pipeline involves such consecutive steps as data acquisition, data preprocessing, training a model that employs statistical tools to describe the data, and testing the resulting model on a held-out subset of the data~\citep{murphy2012machine}. This latter step is of particular interest to us in the context of the argument we put forward in this chapter. Statistical learning theory~\citep{vapnik2013nature} addresses the problem of selecting a model that minimizes its error on the data while keeping the bias and variance of the model as low as possible. A further set of techniques, such as the \emph{training/test split}, \emph{cross-validation}, and others, is then applied to estimate the model's performance and its generalization ability. All this theoretical machinery serves one purpose -- the resulting model should accurately describe the data at hand and make correct predictions on previously unseen data samples. A model that does not sufficiently satisfy this requirement is discarded the same way as the non-predictive models and theories we discussed in the previous paragraph.
The consequence of the machine learning approach being set up in this way is that all of the successful models that were ever built on neural data, including the ones we have discussed in Section~\ref{sec:ml-to-ns}, do, by design, satisfy the primary requirement of a good model and pass the litmus test of generalizability. In this chapter we put forward the argument that, in addition to solving the primary problem those models were created to address (be it neural decoding, comparison of experimental conditions, quantification of information, or else), they also are models of (or reflect the dynamics of) the computational mechanisms that gave rise to that neural data. Our predecessors had to analyze such data manually and use their insight to come up with a good model they understood in great detail. In the era of big data and high performance computing we are facing the opposite -- the analysis of the data and the building of a model that satisfies that data is done automatically, but what we sacrifice is the understanding of the resulting model. Thankfully, the modern machine learning toolbox does include various methods to achieve model interpretability, which, combined with the abundance of data and computing power, leaves us with the best of both worlds -- we can build models on a grand scale and at a fast pace, and interpret those models to read out the formalisms they develop, informing us about the underlying neural mechanisms.
\section{Gaining an intuitive understanding of computation carried out by machine-learned models}
The definition of a \emph{mathematical model} is a broad one and includes statistical models, differential equations, computational models, dynamical systems, and more. The precise nature of a model produced by a machine learning approach depends on the particular machine learning algorithm that generated the model. In this section we will describe the general mechanics of the machine learning process, provide an example based on the decision tree algorithm that demonstrates how a computational model is born from local statistical decisions, and describe the major families of machine learning methods in order to understand what kind of model is created by each of those families when applied to a set of data.
To illustrate the necessity of and motivation for the following material, let us introduce a hypothetical situation. Assume that during a study a group of researchers has obtained vast volumes of data, preprocessed it, and successfully trained a machine learning model that accurately differentiates between the experimental conditions and generalizes well to previously unseen data. We are now in a peculiar situation where the researchers, given the same data, would not be able to decode it themselves, yet in their hands they have a model that does ``know'' how to do it. The content of this section explores the feasibility of transferring that knowledge from within the model into the researchers.
\subsection{General mechanics of the machine learning approach}
\label{sec:general-ml}
The process starts with a \emph{dataset} of observations, where each particular observation is called a \emph{sample} and is described by a set of values called \emph{features}. A sample can also have a \emph{label} associated with it that, depending on the type of learning problem, can represent the category of the sample (\emph{supervised} learning, \emph{classification} problem), a numerical outcome (supervised learning, \emph{regression} problem), a reward from the environment (\emph{reinforcement} learning), or not be present at all (\emph{unsupervised} learning). An example of a neural dataset could be a set of observations in the frequency domain, where the features are particular frequency bands, a sample is described by the powers of those frequencies, and each sample has a label indicating whether the test subject's eyes were open or closed when that sample was recorded. A straightforward application of machine learning to such data would be to train a decoder (a model) that can identify whether the test subject's eyes are open or closed based on the neurological data alone.
Once the initial set of observations is collected, the next steps are \emph{feature engineering} and \emph{feature selection}. During feature engineering one has to come up with the best way to represent the data from the perspective of a machine learning algorithm. In the example above we took the powers of the frequency bands as our features, but that was not the only choice available, and we made it only because we know that the information about whether the test subject's eyes are closed or open is readily available in the alpha frequencies. We decided to represent our data in this particular form because we know that this representation will make it easy for the learning algorithm to identify the pattern that separates closed-eyes recordings from open-eyes recordings. Feature engineering is often a creative process that requires both domain knowledge and an understanding of the machine learning method that will subsequently be applied. One of the reasons for the popularity of deep learning methods is the ability of deep artificial neural networks to automate feature engineering and learn good features directly from data. This methodology has revolutionized the fields of computer vision~\citep{krizhevsky2012imagenet} and speech recognition~\citep{hinton2012deep}, and has proved to be applicable in other areas as well~\citep{lecun2015deep}.
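To make the eyes open/closed example concrete, the alpha-band feature could be computed from a raw signal along these lines; the sampling rate, duration, and band edges are illustrative, and the two ``recordings'' are synthetic stand-ins for real EEG.

```python
import numpy as np

def band_power(signal, fs, band):
    """Average spectral power of `signal` within a frequency band (Hz)."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= band[0]) & (freqs < band[1])
    return power[mask].mean()

fs = 250                                    # sampling rate, Hz
t = np.arange(0, 2, 1.0 / fs)               # two seconds of signal
rng = np.random.default_rng(4)
noise = rng.normal(scale=0.1, size=t.size)

# eyes-closed-like signal: strong 10 Hz (alpha) oscillation plus noise
eyes_closed = np.sin(2 * np.pi * 10 * t) + noise
eyes_open = noise                           # alpha suppressed

alpha = (8, 12)
feat_closed = band_power(eyes_closed, fs, alpha)
feat_open = band_power(eyes_open, fs, alpha)
```

A single number per recording (the alpha power) already separates the two conditions cleanly, which is exactly why this representation makes the downstream learning problem easy.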
The subsequent (or alternative) step of \emph{feature selection} is a related but conceptually different process, in which we seek to identify the most representative features and remove the rest to make the learning problem easier. This can be done manually, by employing human domain knowledge, or with the help of statistical techniques~\citep{blum1997selection, hall1999correlation}.
The next, central, step is running a machine learning algorithm on the preprocessed data. The choice of the algorithm will depend on the type of the learning problem (supervised, unsupervised, or reinforcement; classification or regression) and on the types of features we use to describe the data (numerical or categorical, continuous or discrete, etc.). The exact learning mechanism can be quite different depending on the chosen algorithm, but the underlying framework of \emph{mathematical optimization}~\citep{snyman2005practical} is common to all of them. Every machine learning algorithm has two essential components: an \emph{objective function} (also called the \emph{loss function}) that the algorithm has to optimize, and a set of \emph{parameters} that it can change in order to optimize the objective. Depending on the algorithm, the parameters can be numerical weight coefficients (examples: linear and logistic regression as presented in~\cite{murphy2012machine}; neural networks), categorical variables and numerical thresholds (decision trees by~\cite{breiman1984classification}; random forests by~\cite{breiman2001random}), points in the feature space (K-means clustering by~\cite{hartigan1979algorithm}; support vector machines by~\cite{cortes1995support}; linear discriminant analysis by~\cite{fisher1936use}), or have one of multiple other possible representations. The final configuration of parameters, in conjunction with the computational process that the algorithm runs, is the final model that the algorithm will output. Changing the parameters affects the value of the objective function, so all the algorithm has to do is find the parameters that work best. To give an example of how this can be achieved, we consider the case where the objective function is differentiable and the parameters are continuous, which holds for such algorithms as artificial neural networks, linear and logistic regression, and many others.
In such a case, gradient-based optimization methods can be applied to iteratively approach better and better model parameters. Each configuration of parameters is a point in the parameter space on which the objective function is defined. Since the function is differentiable, we can compute the gradient (derivative) of the function at every possible configuration point. That gradient is a vector in the parameter space that tells us which way we should move the point in order to increase the value of the objective function. Depending on whether we want to maximize or minimize the objective function, we move in the direction of the gradient or in the direction opposite to it, respectively. This optimization technique is called \emph{gradient ascent} (or \emph{gradient descent} when minimizing). For a more detailed and formal description of this and other optimization methods see~\cite{vanderplaats2001numerical, snyman2005practical}. Once the optimization process has approached the global or a local optimum within a predefined tolerance threshold, or is unable to improve the result any further, the learning algorithm stops and outputs the configuration of parameters that has achieved the best result so far.
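The iterative scheme just described fits in a few lines; the quadratic objective below is a stand-in for a real loss function, chosen so that the true minimum is known.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, steps=200):
    """Iteratively step against the gradient to minimize an objective."""
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * grad(x)   # move opposite the gradient (minimization)
    return x

# objective: f(x, y) = (x - 3)^2 + (y + 1)^2, minimized at (3, -1)
grad_f = lambda p: np.array([2 * (p[0] - 3), 2 * (p[1] + 1)])
x_min = gradient_descent(grad_f, x0=[0.0, 0.0])
```

Each step shrinks the distance to the optimum by a constant factor here, so the iterate converges to the known minimizer $(3, -1)$; for non-convex losses such as those of neural networks, the same procedure only guarantees a local optimum.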
The final step of the process is the evaluation of the model's performance and generalization ability. When a human is designing a model, they take particular care to make the model general, so that it will not only describe the data at hand, but also work correctly on future data samples. A machine learning algorithm has no such natural inclination and, if it has sufficient expressive power, tends to memorize the whole dataset, as such a representation will be, in most cases, the most accurate one from the optimization perspective. This phenomenon is called \emph{overfitting} and has to be avoided if we want the resulting model to capture the underlying dynamics or patterns in the data. The ability of a model to do so is called \emph{generalization ability} and is as important as the accuracy of the representation of the training data. A common approach to estimate the generalization ability of a model is to reserve a portion of the data, a \emph{test set}, run the learning procedure on the remaining \emph{training set}, and use the performance of the final model on the test set as the estimate of generalization ability. In most cases the very first algorithm we try will not be successful at finding a good model, and we will try many different ones before the one that works is found. In the process of doing so we can overfit to the test set as well. To avoid that, the training set is further split into two parts: a smaller training set and a \emph{validation set}. The complete training procedure then looks as follows: learning algorithms are trained only on the smaller version of the training set, their performance is estimated on the validation set and, if desired, the process is repeated until a good model is found. Only then is the test set used once to gauge the model's true performance.
There are variants of this procedure, such as \emph{cross-validation}, \emph{leave-one-out} and a few others, all of which were developed to ensure that the model built by an artificial learning system is able to generalize and make accurate predictions on previously unseen data. This process is set in place to emulate a human modeler's natural striving for general and elegant models.
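The splitting protocol itself is only a few lines of code. The sketch below (made-up sample identifiers and labels) reserves 20\% of the data for the test set and 20\% for validation, leaving 60\% for training:

```python
import random

# Train/validation/test split on a toy dataset of (sample_id, label) pairs.
random.seed(0)
dataset = [(i, i % 2) for i in range(100)]
random.shuffle(dataset)            # shuffle before splitting

test_set = dataset[:20]            # touched only once, for the final estimate
validation_set = dataset[20:40]    # used repeatedly to compare candidate models
training_set = dataset[40:]        # used to fit each candidate model
```

Candidate models are fit on `training_set`, compared on `validation_set`, and only the final chosen model is evaluated once on `test_set`.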
The process we have outlined above is being applied across multiple branches of neuroscientific research. Often, in the context of a particular scientific study that employs a machine learning approach, the question of \emph{how} the resulting model achieves its result is not in the spotlight, because the focus is on the result itself. However, behind each successful model lies, encoded in the values of its parameters, the computational principle that allowed the model to succeed. Often trivial, sometimes revelatory -- we will only know once we have interpreted the parameters and unearthed the principle.
\subsection{An example of intuitive understanding emerging from a machine-built decision tree}
The decisions a machine learning algorithm makes during the process of fitting the model to the data are driven by local statistical, information-theoretic, probabilistic or combinatorial rules. The question of whether a combination of such decisions can amount to a comprehensive mathematical model is a valid one to ask. In this section we argue in favor of the positive answer to that question and illustrate our reasoning using one particular learning algorithm -- a decision tree~\citep{breiman1984classification}.
Consider the task, introduced above, of decoding a neural recording to determine whether a test subject's eyes are open or closed. Assume that the data for that task was recorded using an EEG device, the raw signal was cleaned and transformed to the frequency domain, and the power spectral densities of 30 frequencies (1 Hz to 30 Hz) constitute the feature space. Building a decision tree using the ID3 algorithm~\citep{quinlan1986induction} would proceed as follows:
\ \\
\begin{enumerate}[label=(\alph*)]
\item Given the dataset $S$, for each feature $f$ compute, using entropy $\text{H}$, the \emph{information gain} $\text{IG}(S, f) = \text{H}(S) - \text{H}(S|f)$. That number shows the amount of additional information that will be obtained if the dataset $S$ is split into two disjoint subsets $S_\text{right}$ and $S_\text{left}$ using the value $v_f$ of the feature $f$ as the splitting criterion. Maximal information gain will be achieved by splitting at the optimal (in terms of information gain) value $v^\ast$. The data samples that have $v_f \geq v^\ast$ are assigned to $S_\text{right}$ and the rest to $S_\text{left}$.
\item If all of the samples in $S_\text{right}$ belong to the same class (eyes closed, for example) the branching process stops and this subset becomes a \emph{leaf} that contains samples of the ``eyes closed'' category. The same is done with $S_\text{left}$.
\item If a subset contains samples from both classes, the algorithm goes recursively into this subset and repeats the procedure starting from step (a).
\end{enumerate}
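Step (a) hinges on computing the information gain of a candidate split. A self-contained sketch of that computation, on a made-up six-sample dataset for a single frequency feature, could look as follows:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy H(S) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * log2(c / n) for c in Counter(labels).values())

def information_gain(values, labels, threshold):
    """IG(S, f) = H(S) - H(S|f) for splitting feature values at `threshold`."""
    right = [l for v, l in zip(values, labels) if v >= threshold]
    left = [l for v, l in zip(values, labels) if v < threshold]
    n = len(labels)
    remainder = (len(right) / n) * entropy(right) + (len(left) / n) * entropy(left)
    return entropy(labels) - remainder

# Toy power values of one frequency feature and the condition labels
values = [8.5, 8.9, 9.1, 3.2, 2.8, 3.5]
labels = ["closed", "closed", "closed", "open", "open", "open"]

# ID3 picks the threshold v* with the highest information gain
best = max(set(values), key=lambda t: information_gain(values, labels, t))
```

Here the split at 8.5 separates the two conditions perfectly, so its information gain equals the full entropy of the dataset (1 bit for a balanced two-class set).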
Assume that we have completed the training process, tested the resulting model on a test set and found that the model is very accurate and can reliably identify whether the test subject's eyes are open or closed. If the purpose of our study was to prove that such decoding is possible, or if it was an engineering project for clinical purposes (for example, to automatically detect whether a patient is asleep), then we have successfully achieved the goal of our study. Many real-world studies do stop at this stage.
We would like to note that at this point we do have a model that works, but we do not know \emph{why} or \emph{how} it works. An additional step of interpreting the model has to be taken in order to answer those questions. In the case of a decision tree the analysis is very simple -- we can visualize the tree that constitutes the final model. Figure \ref{fig:decision-tree} illustrates a made-up example of what such a tree might look like.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\linewidth]{images/decision-tree-for-eeg-eyes.pdf}
\caption{A made-up example of a decision tree built on a dataset of recordings of power spectral density features under two experimental conditions: eyes open and eyes closed. Traversal of this tree provides us with a rule-based computational model and supplies knowledge about neuronal dynamics -- it indicates which frequencies are relevant to the task and which thresholds are the best discriminators of the two experimental conditions.}
\label{fig:decision-tree}
\end{figure}
This analysis reveals that the model has 8 parameters -- the four features placed in the branching points and the four threshold values of these features used for the branching decisions. Of the whole set of frequencies from 1 Hz to 30 Hz the model deemed important only 11 Hz, 10 Hz, 9 Hz and 12 Hz. This informs us that these are the frequencies indicative of the ``eyes closed'' experimental condition. Furthermore, we learn that the power spectral density values those frequencies need to reach in order to indicate the ``eyes closed'' condition are, respectively, 8.3, 7.7, 6.5 and 7.2 $\tfrac{\mu\text{V}^2}{\text{Hz}}$. We also find out that the 11 Hz feature provides the highest information gain (since it was selected first and was placed at the root of the tree), followed by 10 Hz, and then by 9 Hz and 12 Hz. We can also see that even when 9 Hz reaches the threshold of 6.5 $\tfrac{\mu\text{V}^2}{\text{Hz}}$, this could still happen under the ``eyes open'' condition, so the further check of whether 12 Hz is higher than 7.2 $\tfrac{\mu\text{V}^2}{\text{Hz}}$ is required -- indicating that only in conjunction can those two features reliably signal the ``eyes closed'' condition. All these observations carry information about the neurological correlates of our experimental conditions, and all those details would have been missed had we not pursued the analysis and stopped as soon as the primary goal of the project was achieved. Pursuing the analysis, however, allowed us to postulate an intuitive rule-based computational model of the neural conditions characteristic of the ``eyes closed'' state.
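One plausible reading of such a tree (using the thresholds from the figure, in $\tfrac{\mu\text{V}^2}{\text{Hz}}$; the exact branching structure here is our assumption, since the tree itself is made up) translates directly into deterministic decision rules:

```python
# Hypothetical rule set read off the made-up tree in the figure.
# psd maps a frequency in Hz to its power spectral density value.
def eyes_closed(psd):
    if psd[11] >= 8.3:                      # root: highest information gain
        return True
    if psd[10] >= 7.7:                      # second most informative feature
        return True
    # 9 Hz alone is not reliable: it must co-occur with a high 12 Hz value
    return psd[9] >= 6.5 and psd[12] >= 7.2
```

Traversing the tree is equivalent to evaluating this short function, which is exactly why decision trees are regarded as directly interpretable.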
Although this example is trivial, its simplicity allows us to describe the process in full detail. In Chapter \ref{ch:spectral-signatures-based} we provide the details and the findings of a study that employed a similar approach to analyze the contributions of spectral components to the process of visual categorization, based on a dataset of 11,000 local field potential (LFP) recordings from intracerebral electrodes across 100 human subjects.
\ \\
\subsection{Understanding the models built by different machine learning algorithms}
\label{sec:representation-taxonomy}
The example in the previous section has demonstrated that the way to understand a particular machine learning model, and the way to interpret it, depends heavily on the algorithm and the architecture that generated the model. The architecture of a decision tree enabled us to readily convert the output of the algorithm into a set of intuitive rules that provide neurological information to a domain expert. Applying other machine learning methods would result in very different representations of the computation that is required to solve the task. The core challenge in gaining an intuitive understanding from observing model parameters lies in the requirement to know the details of the inner mechanism in order to see what \emph{it} is that the model has learned that allows it to make its decisions.
\ \\
\emph{Interpretability} is becoming an increasingly important topic in the machine learning community, across the scientific communities that employ machine learning methods, and even in the global community as machine learning models become embedded in our everyday lives~\citep{doshi2017towards}. \emph{``Interpret means to explain or to present in understandable terms. In the context of ML systems, we define interpretability as the ability to explain or to present in understandable terms to a human''} (ibid). Multiple general-purpose methodologies on how interpretability could be achieved have been suggested over the years~\citep{vellido2012making, ribeiro2016model, ribeiro2016should}, along with numerous domain-specific approaches. Since the notion of an understandable explanation is ambiguous, it is hard to come up with a rigorous method to quantify and measure the interpretability of a machine learning model. As a result of this ambiguity, multiple review articles~\citep{lipton2016mythos, bibal2016interpretability, doshi2017towards, guidotti2018survey, gilpin2018explaining, murdoch2019interpretable} have proposed different taxonomies to help systematize the way we think about interpretability. Surveys like the one by \cite{narayanan2018humans} are being conducted to empirically estimate interpretability via user studies. \cite{bibal2016interpretability} systematically explore the various terms that are used in the machine learning literature to denote interpretability and make suggestions on how to bring the terminology in order. The same motivation drives~\cite{lipton2016mythos} and leads to suggesting desiderata for interpretability: trust, causality, transferability, informativeness and ethics, followed by a taxonomy of the properties of interpretable models. Another study by~\cite{doshi2017towards} argues for the need for a rigorous approach and introduces the notion of \emph{incompleteness} of problem formalization.
Incompleteness encompasses unquantifiable gaps in knowledge that a model might have and has to be addressed in order to reach the desiderata of a comprehensive model. The outstanding survey by~\cite{guidotti2018survey} proposes a classification of approaches to model interpretability based on the type of the problem, the type of the \emph{explanator} adopted, the type of the model and the type of data. The most recent review~\citep{gilpin2018explaining} provides a good summary of the taxonomies proposed in the previous studies and puts forward a distinction between interpretability and \emph{explainability} -- the ability of a model to summarize the reasons for its behavior.
Exploring the question of interpretability in the context of neuroscience allows us to narrow down the scope of applicable desiderata, properties and methods and to focus on the ways to uncover knowledge from the models that are the products of automatic scientific discovery. In our case the question we want to answer is \emph{``when translated back from the model's representation into the neuroscientific domain, what is it that allows the model to make accurate predictions?''}. In this form the question of interpretability is perhaps best covered by \emph{multivariate pattern analysis}~\citep{ritchie2017decoding} on fMRI data~\citep{haxby2012multivariate}, where simple linear methods allowed researchers to decode mental states and analyze their representations~\citep{haynes2006neuroimaging, norman2006beyond, o2007theoretical, kriegeskorte2013representational, haxby2014decoding}. Applying other machine learning methods with the direct goal of extracting neuroscientific knowledge has also been attempted by interpreting SVMs~\citep{grosenick2008interpretable, hardoon2010decomposing, haufe2014interpretation}, decision trees and random forests~\citep{richiardi2010brain, oh2003estimating}, artificial neural networks~\citep{sturm2016interpretable, samek2017explainable}, probabilistic models~\citep{ma2006bayesian, doya2007bayesian, wolpert2007probabilistic, griffiths2010probabilistic}, dimensionality reduction techniques~\citep{freeman2014mapping, cunningham2014dimensionality}, graphical models~\citep{bullmore2011brain}, and other methods.
The choice of the algorithm for building an interpretable model on neural data is guided by the nature of the knowledge representation the authors of the above-mentioned studies were aiming to extract. Such reasoning for the choice of the algorithm leads to yet another basis for a taxonomy of interpretable machine learning models. Given the abundance of different methods and the freedom to choose any of them for a particular neurophysiological study, the obvious choice would be in favor of the method that will uncover the representation that is most interpretable in the context of that particular study. If an investigator is interested in which neural features are the most informative for a given task, they should choose a method that is naturally suited for feature importance analysis (e.g. Random Forest). If the aim of the investigation is to identify the data samples that are crucial for correct performance, they should choose a method that identifies such samples during the learning process (e.g. SVM). Here we propose a preliminary taxonomy (Table \ref{tab:representation-taxonomy}) of machine learning methods that forgoes classical distinctions such as supervised or unsupervised, predictive or generative, and instead organizes the methods into groups based on the representation of the core knowledge that the model learns in order to make its decisions.
\ \\
\begin{table}[ht]
\centering
\begin{tabular}{|p{7.2em}|p{23.1em}|}
\hline
\thead{\textbf{Knowledge}\\\textbf{representation}} & \thead{\textbf{Learning algorithms} } \\
\hline
\thead{Linear coefficients} &
\thead{Linear regression\\
Logistic regression} \\
\hline
\thead{Points in feature\\space} &
\thead{Linear discriminant analysis~\citep{fisher1936use}\\
Support vector machines~\citep{cortes1995support}\\
K-means~\citep{macqueen1967some}\\
Self-organizing maps~\citep{kohonen1990self}\\
Density-based spatial clustering of applications with noise\\\hspace*{\fill}(DBSCAN,~\cite{ester1996density})} \\
\hline
\thead{Distance between\\samples} &
\thead{Hierarchical clustering~\citep{ward1963hierarchical}\\
K-nearest neighbors (kNN, \cite{cover1967nearest})\\
Representational similarity analysis\\\hspace*{\fill}(RSA, \cite{kriegeskorte2008representational})} \\
\hline
\thead{Distribution in\\feature space} &
\thead{Bayesian learning~\citep{murphy2012machine}, e.g. Na\"ive Bayes classifier\\
Gaussian and non-gaussian mixture models\\\hspace*{\fill}\citep{mclachlan2019finite}} \\
\hline
\thead{States and\\transitions} &
\thead{Probabilistic graphical models~\citep{koller2009probabilistic},\\\hspace*{\fill}e.g. HMM~\citep{rabiner1989tutorial}\\
Reinforcement learning~\citep{sutton2018reinforcement}} \\
\hline
\thead{Tree structure,\\important features,\\thresholds} &
\thead{Decision trees~\citep{breiman1984classification}\\
Random forest~\citep{breiman2001random}} \\
\hline
\thead{Distributed\\representations\\over inputs or\\latent variables} &
\thead{Deep learning (\cite{lecun2015deep}):\\
\phantom{----}Feed-forward neural network (ANN)\\
\phantom{----}Convolutional neural networks (CNN, \cite{fukushima1980neocognitron})\\
\phantom{----}Recurrent neural networks (RNN, \cite{hopfield1982neural})\\
\phantom{----}Long short-term memory\\\hspace*{\fill}(LSTM, \cite{hochreiter1997long})} \\
\hline
\thead{Compressed\\feature space} &
\thead{Autoencoder~\citep{vincent2008extracting}\\
Restricted Boltzmann Machines\\\hspace*{\fill}(RBM, \cite{hinton2006reducing})\\
Multidimensional scaling (MDS, \cite{mead1992review})\\
Principal component analysis (PCA, \cite{pearson1901liii})\\
Independent component analysis (ICA, \cite{comon1994independent})} \\
\hline
\thead{Embeddings} &
\thead{Deep learning methods, such as:\\
\phantom{----}word2vec~\citep{mikolov2013efficient}\\
\phantom{----}Convolutional neural networks\\
\phantom{----}Graph convolutional networks~\citep{kipf2016semi}\\
\phantom{----}node2vec~\citep{grover2016node2vec}}\\
\hline
\thead{Functions} &
\thead{Gaussian processes~\citep{rasmussen2003gaussian}} \\
\hline
\end{tabular}
\caption{Taxonomy of machine learning algorithms based on the way they represent the knowledge they have gathered during inference (with some examples).}
\label{tab:representation-taxonomy}
\end{table}
\textsc{Linear coefficients.} Among the most straightforwardly interpretable, but also the least expressive in terms of encoded knowledge, are algorithms like logistic and linear regression, which encode the learned inferences in \emph{linear coefficients} of features. The learning process directly optimizes the coefficients to minimize (in the most common formulation) the \emph{cross-entropy} or, in the case of linear regression, the \emph{mean squared error}. The final output of, for example, a logistic classifier can be represented as a separating hyperplane in the feature space. Please note that while the final decision rule of many classification algorithms can be represented by a separating hyperplane, the underlying principles and knowledge on which the separating plane is built are very different across different learning algorithms. From the interpretability perspective, linear coefficients indicate each feature's contribution to the final decision and, especially if the features were normalized before training, allow for comparison between feature importances. Whether a coefficient is positive or negative provides an additional dimension for interpretation.
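A small synthetic sketch of this idea: fit linear coefficients by least squares on features of comparable scale and read the feature importances off the coefficient magnitudes (the data and the true coefficients are made up):

```python
import numpy as np

# Synthetic data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2. All features have roughly unit scale,
# so coefficient magnitudes are directly comparable.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=200)

coef, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares coefficients
importance_order = np.argsort(-np.abs(coef))  # most important feature first
```

The fitted coefficients recover the generating weights, and their magnitudes rank the features correctly: feature 0, then feature 1, then the irrelevant feature 2.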
\ \\
\textsc{Points in the feature space.} Many popular classification and clustering algorithms encode their findings in particular \emph{points in the feature space}. Support Vector Machines find the samples in the training dataset that are next to the decision boundary, thus indicating which samples either have particular significance or are the fringe members of class categories. Linear Discriminant Analysis (also known as Fisher's discriminant) locates the \emph{centroids} of the samples in each category and then devises the separating boundary that is perpendicular to the straight line connecting the centroids. The centroids are thus characteristic of the groups of samples they represent. A similar knowledge representation can be found in one of the most popular clustering algorithms -- K-Means. The algorithm finds a predetermined number of cluster centers and places them at the locations in the feature space that optimize the objective function. A similar, but extended, concept is employed by self-organizing maps, where each unit of a map is assigned to a centroid in the feature space that represents the center of a cluster. In the context of interpretability, centroids can obtain special meaning when interpreted by a domain expert who has an intuitive understanding of the feature space.
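The centroid representation can be illustrated with a few lines of Lloyd's algorithm for K-means on two synthetic blobs; the entire learned knowledge ends up in the two centroid points:

```python
import numpy as np

# K-means (Lloyd's algorithm) on two synthetic 2-D blobs centered at
# (0, 0) and (5, 5); the learned knowledge is the two centroids.
rng = np.random.default_rng(1)
cluster_a = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(50, 2))
cluster_b = rng.normal(loc=[5.0, 5.0], scale=0.3, size=(50, 2))
X = np.vstack([cluster_a, cluster_b])

centroids = X[[0, 99]].copy()  # one initial guess inside each blob
for _ in range(10):
    # assign every sample to its nearest centroid ...
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    assignment = d.argmin(axis=1)
    # ... then move each centroid to the mean of its assigned samples
    centroids = np.array([X[assignment == k].mean(axis=0) for k in range(2)])

centroids = centroids[centroids[:, 0].argsort()]  # deterministic ordering
```

After convergence the two centroids sit at the blob centers, which is exactly the kind of summary a domain expert can interpret directly.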
\ \\
\textsc{Distance between samples.} Multiple methods draw conclusions about the similarity of data samples based on pairwise distances between them. The family of hierarchical clustering algorithms and distance-based classification algorithms such as K-Nearest Neighbors are good examples of this knowledge representation. In the neuroscientific domain the method of Representational Similarity Analysis has enabled scientists to compare the representational geometry of samples that have different representations. The interpretability of the findings that those methods make is straightforward, as the intuitive meaning of a distance between samples is directly applicable in almost any domain of knowledge.
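A K-nearest-neighbors sketch makes the point explicit: the ``model'' is nothing but the stored samples and a distance function (toy one-dimensional data):

```python
import numpy as np

# K-nearest neighbors: the model is the stored samples plus a distance
# function; prediction is a majority vote among the k nearest samples.
train_X = np.array([[0.0], [0.1], [0.2], [5.0], [5.1], [5.2]])
train_y = np.array([0, 0, 0, 1, 1, 1])

def knn_predict(x, k=3):
    dists = np.linalg.norm(train_X - x, axis=1)  # distances to all samples
    nearest = train_y[np.argsort(dists)[:k]]     # labels of the k nearest
    return int(np.bincount(nearest).argmax())    # majority vote
```

A query near 0 is classified with the first group, a query near 5 with the second, purely on the basis of pairwise distances.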
\ \\
\textsc{Distributions in the feature space.} Extending the idea of important points in the feature space, some methods store distributions in that same space, and each distribution is assigned to a category, for example to an experimental condition. Mixture models describe data centroids as distributions, providing much more information than a point-based centroid would. The parameters of a distribution capture the statistical properties of each group of samples, modeling the values of the features that describe the samples, their orientation and their extent in the feature space. Bayesian learning methods, such as Na\"ive Bayes, learn, before applying Bayes' rule, the distribution of observations conditioned on the category these observations belong to. The interpretative value of this knowledge representation is similar to that provided by points in the feature space, but carries considerably more information.
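A Gaussian Na\"ive Bayes sketch on synthetic one-dimensional data shows this representation directly: the entire model is a per-class mean and variance, and prediction compares the class-conditional likelihoods:

```python
import numpy as np

# Gaussian Naive Bayes on synthetic 1-D data: the model stores, per
# class, the mean and variance of each feature, i.e. a distribution
# per category.
rng = np.random.default_rng(2)
X0 = rng.normal(loc=2.0, scale=1.0, size=(100, 1))  # samples of class 0
X1 = rng.normal(loc=8.0, scale=1.0, size=(100, 1))  # samples of class 1

params = {0: (X0.mean(axis=0), X0.var(axis=0)),
          1: (X1.mean(axis=0), X1.var(axis=0))}

def log_likelihood(x, cls):
    mu, var = params[cls]
    return float(np.sum(-0.5 * np.log(2 * np.pi * var)
                        - (x - mu) ** 2 / (2 * var)))

def predict(x):
    # equal class priors, so the larger class likelihood wins
    return max(params, key=lambda c: log_likelihood(x, c))
```

The learned means and variances are directly readable by a domain expert: they describe where in the feature space each experimental condition lives and how spread out it is.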
\ \\
\textsc{States and transitions.} A graph-like representation of the possible states of a system and the transitions between them is a very flexible way to capture the rules inferred by a learning model. Probabilistic graphical models, such as Bayesian networks or Hidden Markov Models, condense the dynamics they observe in the data into probabilistic state machines and are interpretable by a human investigator as a set of rules that governs the underlying data generation process.
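The transition part of this representation can be estimated by simple counting; the sketch below infers the transition probabilities of a toy two-state sequence:

```python
from collections import Counter

# Estimating a Markov transition matrix by counting observed transitions
# in a made-up two-state sequence.
sequence = list("AAABBBAAABBBAAABBB")
counts = Counter(zip(sequence, sequence[1:]))   # (from, to) pair counts
states = sorted(set(sequence))

# transition[s][t]: estimated probability of moving from state s to t
transition = {
    s: {t: counts[(s, t)] / sum(counts[(s, u)] for u in states)
        for t in states}
    for s in states
}
```

The resulting table of probabilities is the probabilistic state machine itself, readable as a set of rules such as ``from state A, stay in A with probability 2/3''.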
\ \\
\textsc{Tree structure.} Decision trees and their ensembles, such as Random Forest, represent the inferred decision making in the form of hierarchically organized sequences of threshold-based rules, where at each step of the process a certain rule is applied to the value of a feature and, depending on the outcome, the sample is assigned to a certain category. Decision trees are considered to be among the most interpretable algorithms: they store the whole decision process in an intuitive form, are directly translatable to deterministic decision rules, rank the features by their importance (the more important a feature is, the closer it will be to the root of the tree) and find the threshold values of those features that are meaningful in the context of the decision-making process. All this information can be easily accessed by the investigator and provide domain-specific insight.
\ \\
\textsc{Distributed representations.} The enormous capacity~\citep{vapnik2015uniform} of modern deep learning models has led, on the one hand, to their success in the last decade, and, on the other, to obscuring the reasons for a model's decisions. Both the intermediate and the final knowledge is stored in the model as a set of weights, often measured in millions. However, the final decision of a deep learning model can often be decomposed into (a) a hierarchical organization and (b) a set of local decisions within each layer of the hierarchy. This allows us to portray the knowledge stored by the model as a collection of distributed representations, each of which describes one of the decision rules that the model applies during computation. Direct interpretation of this knowledge is, however, impossible, and an investigator has to use additional tools to extract it. We review these tools in Section \ref{sec:interpretability-tools}.
\ \\
\textsc{Compressed feature space.} This group of algorithms employs various methods to reach the same end goal -- find a feature space of a smaller dimensionality than the original, but preserve as much of the information as possible when the data samples are transformed from the original space to the new one. These methods are often referred to as \emph{dimensionality reduction techniques}. An \emph{autoencoder} is an artificial neural network that receives a sample as an input, performs a series of transformations to encode that sample using a smaller number of artificial neurons (the \emph{bottleneck} layer), and then decodes it back from the bottleneck layer to the original representation. The difference between the original sample and its reconstruction serves as the objective function that the algorithm has to minimize. \emph{Principal component analysis} finds an orthogonal transformation of the feature space such that the axes of the new space correspond to the data dimensions with the largest variance. The new axes are called the \emph{principal components}. The first component explains the largest amount of variance, the second represents the dimension with the second-largest variance, etc. After the transformation the user can estimate how many components are to be kept in order to preserve a predetermined percentage of the variance (usually 90\%, 95\% or 99\%) and discard the rest. \emph{Multidimensional scaling} takes another approach: the new feature space, while being reduced in size, preserves the pairwise distances between the samples as well as possible, rather than the variance. There are other examples, but the important common property that allows us to group these algorithms together is that, in order to preserve the information given limited expressive power, they are forced to detect an underlying pattern, or principle, following which the data can be best reconstructed.
Capturing that principle allows these algorithms to reduce the size of the feature space, but also, importantly for the interpretability context, to distill the underlying patterns from the less important ones and from the noise.
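A principal component analysis sketch on synthetic data (three independent features with standard deviations 10, 1 and 0.1) shows how the variance criterion selects the number of components to keep:

```python
import numpy as np

# PCA via SVD on synthetic data whose three independent features have
# standard deviations 10, 1 and 0.1: almost all variance lies along
# a single direction.
rng = np.random.default_rng(3)
X = np.column_stack([10.0 * rng.normal(size=300),
                     1.0 * rng.normal(size=300),
                     0.1 * rng.normal(size=300)])

Xc = X - X.mean(axis=0)                 # center the data before the SVD
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
explained = s ** 2 / np.sum(s ** 2)     # variance ratio per component

# smallest number of components whose cumulative ratio reaches 95%
n_components = int(np.searchsorted(np.cumsum(explained), 0.95) + 1)
```

Here a single component already explains about 99\% of the variance, so the 95\% criterion compresses the three-dimensional data to one dimension while discarding mostly noise.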
\ \\
\textsc{Learned embeddings.} While similar in form to the previous group, \emph{embeddings} are numerical feature vectors that are built, differently from dimensionality reduction techniques, by following a rule that captures a specific property, or even the semantics, of the original data. Modern word embeddings~\citep{mikolov2013efficient}, for example, are built by teaching a neural network to predict the word that appears in the context of other words in a language corpus. Words that appear in similar contexts will have similar internal representations in the embedding (feature) space. Since appearing in a similar context carries semantic meaning in natural language, this representation even allows for arithmetic operations on words, such as the one demonstrated by the famous example $\text{E}(king) - \text{E}(man) + \text{E}(woman) = \text{E}(queen)$, where $\text{E}$ is the function that maps a word to its numerical vector in the embedding space. Subtracting $man$ from $king$ leaves us with an embedding that represents the notion of ``kingness''; adding $woman$ to that leads to an embedding that combines the notion of ``femininity'' with ``kingness'', leading to the word $queen$. The ability of embeddings to capture semantic similarity has also been demonstrated in the visual domain~\citep{deselaers2011visual, frome2013devise}. When applied to graphs, an embedding can reflect topological properties of graph nodes~\citep{grover2016node2vec}, or combine topological data with node attributes~\citep{kipf2016semi}. The key property of embeddings for interpretability efforts is their ability to capture semantic similarity between data samples.
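The analogy can be reproduced on made-up three-dimensional embeddings (the dimensions loosely standing for ``royalty'', ``maleness'' and ``femaleness''; real word2vec vectors are learned from a corpus and have hundreds of dimensions):

```python
import numpy as np

# Hand-set toy embeddings; real embeddings are learned, not assigned.
E = {"king":  np.array([0.9, 0.8, 0.1]),
     "queen": np.array([0.9, 0.1, 0.8]),
     "man":   np.array([0.1, 0.9, 0.1]),
     "woman": np.array([0.1, 0.1, 0.9])}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

target = E["king"] - E["man"] + E["woman"]
# the nearest embedding (excluding the query word) answers the analogy
nearest = max((w for w in E if w != "king"),
              key=lambda w: cosine(E[w], target))
```

With these toy vectors the arithmetic $\text{E}(king) - \text{E}(man) + \text{E}(woman)$ lands closest to $\text{E}(queen)$, which is the mechanism behind the famous example.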
\ \\
\textsc{Functions.} \emph{Gaussian processes} are an example of a learning method that models a distribution over the possible functions that are consistent with the observed data. After applying initial constraints to reduce the set of possible functions, the learning process narrows the distribution by eliminating the functions that are not consistent with the data. The final output is a set of possible functions that captures the learnt knowledge.
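A minimal Gaussian process regression sketch (RBF kernel, noiseless toy observations of a sine function) shows both halves of this description: the posterior mean passes through the observed points, and the posterior variance collapses near them while staying wide where no data constrain the functions:

```python
import numpy as np

# Noiseless GP regression with an RBF kernel: three observations of
# sin(x) at x = -2, 0, 2.
def rbf(a, b, length=1.0):
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

X_train = np.array([-2.0, 0.0, 2.0])
y_train = np.sin(X_train)

K = rbf(X_train, X_train) + 1e-8 * np.eye(3)  # small jitter for stability
X_test = np.array([0.0, 10.0])                # one point on the data, one far away
K_s = rbf(X_test, X_train)

mean = K_s @ np.linalg.solve(K, y_train)      # posterior mean
# posterior variance: k(x,x) - k(x,X) K^{-1} k(X,x), with k(x,x) = 1
var = 1.0 - np.sum(K_s * np.linalg.solve(K, K_s.T).T, axis=1)
```

At the observed input the variance is essentially zero (only functions through that point remain), while far from the data the variance stays near the prior value of 1, reflecting that the distribution over functions has not been narrowed there.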
\ \\
This brief overview of the basis of the proposed taxonomy of machine learning algorithms provides a useful guide for selecting the method that is appropriate for a learning problem at hand when a certain interpretation of the learnt knowledge is desired. While some of the representations above are straightforward to interpret, others require additional tools, such as visualization, rule extraction or simplification, in order to extract intuitive understanding of the knowledge represented by the model. In the next section we provide an overview of such tools.
\subsection{Techniques to analyze machine learning models and extract knowledge from representations}
\label{sec:interpretability-tools}
At this point we have completed three out of four steps along the path towards insightful automatically built computational models of neural computation: (1) selected the learning algorithm in accordance with the learning problem and the desired end-knowledge representation (Section \ref{sec:representation-taxonomy}), (2) trained the model to fit the data, and (3) evaluated the model's performance and generalization ability (Section \ref{sec:general-ml}). The last step is gaining an intuitive understanding of the model's knowledge. Depending on the algorithm we used to train the model, this knowledge may already be in a human-tractable form, or it might require some additional steps.
Linear models, single decision trees and rules are recognized by the machine learning community as the most interpretable models \citep{guidotti2018survey}, and empirical experiments have been conducted to estimate the comprehensibility of these models by humans~\citep{huysmans2011empirical}. Following the nomenclature presented in Table \ref{tab:representation-taxonomy}, we extend the list of straightforwardly interpretable knowledge representations to include linear coefficients, points in the feature space, distances between samples, and distributions in the feature space. For a domain expert who understands the data the model was trained on, these representations have direct meaning, and no further steps are required to spot an intuitive meaning if the model has uncovered one.
\emph{Feature importance analysis} is applicable to any model that has a quantifiable way to estimate each feature's contribution to the final decision. Linear models, provided that the data was normalized across the dataset prior to training, provide this information in the values of the linear coefficients. Decision trees and random forests make branching decisions based on the information gain, which acts as a measure of how big a role a feature plays in the decision-making process. In Chapter \ref{ch:spectral-signatures-based} we use this method to identify time-frequency patterns of neural activity that are important for perceptual categorization in the human brain. If a model does not provide quantitative information about each feature's contribution, \emph{sensitivity analysis}~\citep{saltelli2002sensitivity} gives us the means to obtain the same information by measuring how the output of a model is affected by altering the input. If altering the input values of a certain feature (or a set of features) does not change the model's behavior, one can conclude that this feature (or features) is not important for the model. If the features of the dataset represent specific domain knowledge, their importance is directly interpretable by a domain expert.
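Sensitivity analysis in its simplest, permutation-based form can be sketched as follows (the ``model'' here is a fixed synthetic rule standing in for any trained black box):

```python
import numpy as np

# Permutation-based sensitivity analysis: shuffle one feature at a
# time and measure how much the model's error grows.
rng = np.random.default_rng(4)
X = rng.normal(size=(500, 3))
y = 2.0 * X[:, 0] + 0.2 * X[:, 2]      # feature 1 is irrelevant by design

def model(Z):                           # stand-in for a trained black box
    return 2.0 * Z[:, 0] + 0.2 * Z[:, 2]

baseline = np.mean((model(X) - y) ** 2)  # zero error by construction

importance = []
for j in range(3):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])  # destroy feature j's information
    importance.append(float(np.mean((model(Xp) - y) ** 2) - baseline))
```

Permuting the irrelevant feature leaves the error unchanged, while permuting the important features increases it in proportion to their contribution, yielding an importance ranking without any access to the model's internals.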
\emph{Visualization} of model parameters and of the internal data representation is one of the main tools for achieving interpretability. The simplest methods include plotting, if dimensionality permits, the data points, the decision boundary and the knowledge representation (support vectors, centroids, etc.). In most cases, however, the dimension of the feature space is too high to be visualized directly and investigators resort to visualizing aggregated statistics, such as histograms of value distributions. Dimensionality reduction techniques, which are learning algorithms in their own right as we noted in the previous section, can be essential for visualization efforts. Reducing the number of dimensions down to 2 or 3 makes it possible to plot the data, decision boundaries and internal data representations in a human-readable form. Different dimensionality reduction techniques focus on preserving different properties of the object that undergoes the reduction, allowing the investigator to choose the right one depending on which property should be preserved. Topology-preserving algorithms such as t-distributed stochastic neighbor embedding (t-SNE, \cite{maaten2008visualizing}), self-organizing maps~\citep{kohonen1990self}, and multi-dimensional scaling keep objects that were close in the original space close in the new space as well. Principal component analysis identifies the data dimensions that have the highest variance and keeps the transformation, which allows one to see the contribution of the dimensions of the original space to the dimensions of the new space. Please refer to \cite{guidotti2018survey} for a comprehensive review of visualization methods in the context of interpretability.
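As a minimal illustration of dimensionality reduction for visualization, the sketch below projects the four-dimensional iris dataset onto its first two principal components; the resulting 2D coordinates are what one would pass to a scatter plot, and the retained variance ratio quantifies how faithful the 2D view is.

```python
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, y = load_iris(return_X_y=True)

# Project the 4-dimensional feature space onto 2 dimensions for plotting.
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)

# Fraction of the original variance retained by the 2D projection.
retained = pca.explained_variance_ratio_.sum()
print(X_2d.shape, round(float(retained), 3))
```

For t-SNE or MDS one would substitute the corresponding scikit-learn transformer, keeping in mind that those methods preserve neighborhood structure rather than variance.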
\emph{Automatic rule extraction} is another way to achieve interpretability by converting a complex model into a set of rules. In addition to trees and tree ensembles, which can be naturally converted to this representation, the approach has been proposed for SVMs~\citep{nunez2002rule} and neural networks~\citep{andrews1995survey, tsukimoto2000extracting, zilke2015extracting}.
The enormous number of trainable parameters and the high capacity of deep learning models are among the reasons for the success of these methods in the last decade. They are also the reason why the decisions made by these methods are even less transparent than the ones made by other machine learning methods. This predicament has led to an explosion in the number of studies proposing, in addition to model-agnostic, also neural network-specific interpretability methods. \emph{Activation maximization} methods identify input patterns that maximize the activation values of a particular neuron; visual inspection of those patterns can then be very insightful, especially in the visual input domain~\citep{zeiler2014visualizing}. The \emph{attention} mechanism~\citep{mnih2014recurrent} allows investigators to analyze the areas of the input that the model deemed worthy of its attention and to understand which part of the feature space, or what data content, was most relevant for the model. \emph{Proxy models} are trained to approximate the behavior of a neural network on a full, or partial, set of data and build a simpler representation of the network's behavior. Due to their ease of interpretation, linear~\citep{ribeiro2016should} or decision tree~\citep{craven1996extracting, schmitz1999ann, zilke2016deepred} models are most commonly used as proxy models. The \emph{readout} technique trains linear models that take the activations of a subset of neurons as input and predict the final outcome (for example the category of a sample); the method draws inspiration from \emph{reservoir computing}~\citep{schrauwen2007overview}. The ability or inability of a readout model to perform well is an indicator of the involvement of the chosen subset of neurons in the decision-making process of the model. \cite{koul2018learning} demonstrate how to extract a \emph{finite state representation} of a recurrent neural network.
Please see \cite{gilpin2018explaining} for an extensive survey of interpretability techniques developed specifically for deep learning methods.
\section{Interpretation of machine-learned models for neuroscientific inquiry at different levels of organization}
In the next three chapters of this thesis we present a detailed description of three studies that demonstrate how the approach we have described in this chapter can be realized in the context of neuroscientific inquiry to analyze, interpret and understand neural processes at three different levels of organization. Each study shows how an in-depth analysis of machine learning models can lead to insights or confirmations of conjectures about the inner workings of the human brain.
Chapter \ref{ch:spectral-signatures-based} demonstrates the analysis on the level of local field potentials. We first train a decoder to predict the visual category based on the spectro-temporal activity of single iEEG probes using a random forest classifier. We then perform feature importance analysis to understand which locations are relevant for the task of visual decoding and which, while being active and responsive compared to the baseline, do not carry relevant information. Further analysis of the important parts of the TF spectrum reveals differences in the roles of different neuronal locations and uncovers category-specific patterns we call \emph{spectral signatures} of visual perceptual categorization.
Chapter \ref{ch:intracranial-dcnn-based} shows how comparing the activations of an artificial model of vision (a convolutional neural network) with the activations in the human visual cortex makes it possible to draw an analogy between the hierarchical structures of those two systems. This study serves as an example of how machine learning methods can be used to analyze the brain on the level of functional organization.
The third example, demonstrated in Chapter \ref{ch:mental-space-visualization-based}, shows how using dimensionality reduction for clustering and visualization of a high-dimensional EEG feature space helps to gain a high-level understanding of the relative properties of the mental concepts encoded in that space. By visualizing the mental state space we can browse the signals generated by the human brain under different conditions and visually assess which ones are close to each other and which ones are further apart. The work applies this concept to the field of brain-computer interfaces and serves as an example of the applicability of the interpretable machine learning approach on the level of mental concepts.
\chapter{Feature importances of random forest models inform on localized cortical activity}
\label{ch:spectral-signatures-based}
The human brain has developed mechanisms to efficiently decode sensory information according to perceptual categories of high prevalence in the environment, such as faces, symbols and objects. Neural activity produced within localized brain networks has been associated with the process that integrates both sensory bottom-up and cognitive top-down information processing. Yet, how specifically the different types and components of neural responses reflect the local networks' selectivity for categorical information processing is still unknown. In this work we train Random Forest classification models to decode eight perceptual categories from a broad spectrum of human intracranial signals ($4-150$ Hz, 100 subjects) obtained during a visual perception task. We then analyze which of the spectral features the algorithm deemed relevant to the perceptual decoding and gain insight into which parts of the recorded activity are actually characteristic of the visual categorization process in the human brain. We show that network selectivity for a single category or for multiple categories in sensory and non-sensory cortices is related to specific patterns of power increases and decreases in both low ($4-50$ Hz) and high ($50-150$ Hz) frequency bands. By focusing on task-relevant neural activity and separating it into dissociated anatomical and spectrotemporal groups we uncover spectral signatures describing neural mechanisms of visual category perception in the human brain that have not yet been reported in the literature.
Previous works have shown where and when perceptual category information can be decoded from the human brain; our study adds to that line of research by identifying the spectrotemporal patterns that contribute to category decoding without the need to formulate an a priori hypothesis on which spectral components, and at which times, are worth investigating. Application of this method to an extensive dataset of human intracerebral recordings delineates the locations that are predictive of several perceptual categories from the locations that have a narrow specialization, identifies the spectral signatures characteristic of each of the 8 perceptual categories and makes it possible to observe global and category-specific patterns of neural activity pertinent to visual perception and cognition.
\section{Spectral and temporal signatures\\of human brain activity}
\label{sec:signatures-intro}
Our capacity to categorize sensory information allows us to quickly process and recognize complex elements in our environment. Early studies revealed strong relations between the brain activity within certain localized networks and the neural representations of certain stimulus categories, such as faces, bodies, houses, cars, objects and words \citep{kanwisher1997fusiform, epstein1999parahippocampal, peelen2009neural, malach1995object, haxby2001distributed, ishai1999distributed, cohen2000visual}. These early assessments also revealed the brain networks' capability to rapidly extract categorical information from short exposures to natural scenes \citep{potter1975time, thorpe1996speed, li2002rapid} based on models of parallel processing across neural networks \citep{rousselet2002parallel, peelen2009neural}. In both animal and human studies, the visual cortices and particularly the inferior temporal cortex (ITC) appear as key regions for integrating information at the object level \citep{grill2014functional, tanaka1996inferotemporal, dicarlo2012does}. In humans, a great number of observations of cortical response selectivity have been made using fMRI, but measurements of direct neuronal activity \citep{quiroga2005invariant, kreiman2000category} have also revealed similar patterns. To further understand how stimulus features and perceptual experience are processed in neural networks, brain activity, especially in sensory cortices, has been decoded using a variety of methods and signals \citep{haynes2006neuroimaging, kriegeskorte2006information, kamitani2006decoding}. This decoding often relies on machine learning to avoid a priori selection of partial aspects of the data by the human observer, but unless additional analysis is performed on the model itself it does not shed light on the mechanisms of neuronal communication within and between the neural networks involved in this processing.
Spectral fingerprints are a pervasive feature of electrophysiological neural activity. Neural oscillations have been proposed to reflect functional communication processes between neural networks \citep{fries2009neuronal, buzsaki2006rhythms, siegel2012spectral, michalareas2016alpha}. Certain frequency bands are selectively associated with the operation of different cognitive processes in the human and animal brain \citep{vidal2006visual, wyart2008neural, jensen2010shaping, vanrullen2016perceptual, engel2010beta, dalal2011spanning}, and lately, direct recordings from the human cortex have revealed the remarkable representation selectivity of broadband high-gamma activity ($50-150$ Hz) \citep{lachaux2012high, parvizi2018promises, fox2018intracranial}. Human intracranial recordings have previously shown evidence of functional processing in neural networks related to perceptual category representation \citep{mccarthy1997face} and, lately, the prominence of broadband high-gamma activity in selective category responses in visual areas \citep{vidal2010category, davidesco2013spatial, hamame2014functional, privman2007enhanced, fisch2009neural}. Yet, very little is known about the specific relation between the different components of the full power spectrum, including high-gamma activity, and their level of selectivity in processing perceptual categories. Previous works have shown where and when perceptual category information can be decoded from the human brain; the approach introduced in this work adds to that line of research by making it possible to identify the spectrotemporal patterns that contribute to category decoding without the need to formulate an a priori hypothesis on which spectrotemporal regions of interest are worth investigating.
In this work we capitalize on an extensive dataset of deep intracranial electrical recordings of 100 human subjects to decode the neural activity produced by 8 different stimulus categories. We analyzed the decoding models built by a random forest classifier to disentangle the most informative components of the time-frequency spectrum related to the simultaneous classification of 8 different perceptual categories. Via \emph{feature importance} analysis we quantified the contribution of each TF component to the decoding decision, which allowed us to identify the activity patterns that were either characteristic of the processing of a specific visual category or were shared by several categories. In addition to feature importance we analyzed the predictive power of each activity pattern and identified how informative their spectral signatures were for the classification of visual categories. We tested the predictive power of broadband high-gamma activity in comparison to lower frequency activity, as they reflect different communication mechanisms elicited by networks seemingly involved in distinct temporal windows of functional neuronal processing. Through the analysis of feature importance we show the specific neuronal spectral fingerprints from highly distributed human cortical networks that were elicited during automatic perceptual categorization. The uncovered spectral signatures provide insight into the neural mechanisms of visual category perception in the human brain.
\section{Large-scale intracortical recordings during\\visual object recognition task}
\label{sec:raw-intracranial-data}
One of the important steps leading up to being able to interpret the results and representations of a machine learning model is the correct choice of representation of the input data. In this section we explain the origin of our dataset and the preprocessing choices that were made in order to present the data in a form that is both informative for the algorithm and directly interpretable by a human domain expert.
\subsection{Patients and recordings}
100 patients of either gender with drug-resistant partial epilepsy, all candidates for surgery, were considered in this study and recruited from the Neurological Hospitals in Grenoble and Lyon (France). All patients were stereotactically implanted with multi-lead EEG depth electrodes (DIXI Medical, Besan\c con, France). All participants provided written informed consent, and the experimental procedures were approved by the local ethical committee of the Grenoble hospital (CPP Sud-Est V 09-CHU-12). Recording sites were selected solely according to clinical indications, with no reference to the current experiment. All patients had normal or corrected-to-normal vision.
\subsubsection{Electrode implantation}
11 to 15 semi-rigid electrodes were implanted per patient. Each electrode had a diameter of 0.8 mm and comprised 10 or 15 contacts, depending on the target region, each 2 mm long and 1.5 mm apart. The coordinates of each electrode contact, together with their stereotactic scheme, were used to anatomically localize the contacts using the proportional atlas of Talairach and Tournoux \citep{talairach1993referentially}, after a linear scale adjustment to correct for size differences between the patient's brain and the Talairach model. These locations were further confirmed by overlaying a post-implantation MRI scan (showing contact sites) with a pre-implantation structural MRI with VOXIM$^\text{\textregistered}$ (IVS Solutions, Chemnitz, Germany), allowing direct visualization of the contact sites relative to brain anatomy.
All patients voluntarily participated in a series of short experiments to identify local functional responses at the recorded sites \citep{vidal2010category}. The results presented here were obtained from a test exploring visual recognition. All data were recorded from approximately $120$ implanted depth electrode contacts per patient with the SD LTM Express (Micromed) signal acquisition system at a sampling rate of $512$ Hz, with a 0.15 Hz high-pass and a 500 Hz low-pass filter. Data were obtained from a total of $11321$ recording sites.
\subsubsection{Stimuli and task}
\label{sec:stimuli-and-task}
The visual recognition task lasted about $15$ minutes. Patients were instructed to press a button each time a picture of a fruit appeared on screen (visual oddball paradigm). Non-target stimuli consisted of pictures of objects of eight possible categories: houses, faces, animals, scenes, tools, pseudo words, consonant strings, and scrambled images. All the included stimuli had the same average luminance. All categories were presented within an oval aperture of $2\degree \times 3\degree$ in visual angle (illustrated on Figure~\ref{fig:methods-pipeline}a) at a distance of $70-90$ cm using NeuroBehavioral Systems (NBS) Presentation$^\text{\textregistered}$ software. Stimuli were presented for a duration of $200$ ms every $1000-1200$ ms in series of 5 pictures interleaved with 3-second pause periods during which patients could freely blink. Patients reported the detection of a target through a right-hand button press and were given feedback on their performance after each report. A 2-second delay was placed after each button press before presenting the follow-up stimulus in order to avoid mixing signals related to the motor action with signals from the stimulus presentation. Altogether, responses to 400 unique natural images were measured per subject, 50 from each category.
\subsection{Processing of neural data}
\label{sec:signatures-ns-processing}
The analyzed dataset consisted of $4528400$ local field potential (LFP) recordings -- responses from $11321$ recording sites to $400$ stimuli. To remove artifacts, the signals were linearly detrended, and the recordings that contained values $\ge10\sigma_{images}$, where $\sigma_{images}$ is the standard deviation of the voltage values (in the time window from $-500$ ms to $1000$ ms) of that particular probe over all stimuli, were excluded from the data. All electrodes were re-referenced to a bipolar reference and the reference electrodes were excluded from the analysis. The signal was segmented in the range from $-500$ ms to $1000$ ms, where $0$ marks the moment when the stimulus was shown. The $-500$ to $-100$ ms time window served as the baseline.
To quantify the power modulation of the neural signals across time and frequency we used standard time-frequency (TF) wavelet decomposition \citep{daubechies1990wavelet}. The signal $s(t)$ was convolved with a complex Morlet wavelet $w(t, f_0)$, which has a Gaussian shape in time $(\sigma_t)$ and frequency $(\sigma_f)$ around a central frequency $f_0$ and is defined by $\sigma_f = 1/(2 \pi \sigma_t)$ and a normalization factor. To achieve good time and frequency resolution over all frequencies we slowly increased the number of wavelet cycles with frequency: $\frac{f_0}{\sigma_f}$ was set to 6 for the high ($61 - 150$ Hz) and low ($31 - 60$ Hz) gamma, 5 for the beta ($15 - 30$ Hz), 4 for the alpha ($9 - 14$ Hz) and 3 for the theta ($4 - 8$ Hz) frequency ranges. This method allowed us to obtain better frequency resolution than applying a constant cycle length \citep{delorme2004eeglab}. The squared norm of the convolution results in a time-varying representation of spectral power, given by $P(t, f_0) = |w(t, f_0) \ast s(t)|^2$. Baseline normalization was performed by dividing the average power after stimulus onset ($0$ to $1000$ ms) in each frequency by the average power of that frequency in the baseline window ($-500$ to $-100$ ms). Each LFP recording was transformed from 768 data points (1.5 seconds of voltage readings at a 512 Hz sampling rate) into a matrix of size $146 \times 48$, where each row represents a $1$ Hz frequency band from $4$ Hz to $150$ Hz and columns represent $31.25$ ms time bins. The value in each cell of that matrix is the power of that specific frequency averaged over $16$ time points.
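The wavelet power computation can be sketched for a single frequency as follows. The toy signal, the wavelet normalization and the helper name are illustrative assumptions; only the relative power between frequencies matters here.

```python
import numpy as np

fs = 512                                  # sampling rate used in the study
t = np.arange(-0.5, 1.0, 1 / fs)
toy_signal = np.sin(2 * np.pi * 40 * t)   # toy 40 Hz oscillation

def morlet_power(s, f0, n_cycles, fs):
    """Power of s at frequency f0 via complex Morlet wavelet convolution."""
    sigma_t = n_cycles / (2 * np.pi * f0)           # f0 / sigma_f = n_cycles
    wt = np.arange(-4 * sigma_t, 4 * sigma_t, 1 / fs)
    wavelet = np.exp(2j * np.pi * f0 * wt) * np.exp(-wt**2 / (2 * sigma_t**2))
    wavelet /= np.sqrt(np.pi * sigma_t) * fs        # illustrative normalization
    return np.abs(np.convolve(s, wavelet, mode="same")) ** 2

# Power at the signal's own frequency should dominate a distant frequency.
p40 = morlet_power(toy_signal, 40, 6, fs).mean()
p10 = morlet_power(toy_signal, 10, 4, fs).mean()
print(p40 > p10)
```

In the real pipeline this computation is repeated for each of the 146 frequency bands and the result is averaged into the 48 time bins of the TF matrix.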
Further analysis was carried out only on the electrodes that were responsive to the visual task. In each frequency band we compared each electrode's average post-stimulus band power to the average baseline power with a Wilcoxon signed-rank test for matched pairs. Only the probes that showed a post-stimulus response statistically significantly different (p-value $\leq 0.005$, corrected for multiple comparisons with the false discovery rate (FDR) procedure \citep{genovese2002thresholding}) from the baseline response in at least two frequencies were preserved for further analysis. Please note that eliciting a significant response in at least 2 out of 146 frequencies is a relaxed requirement. The use of such a relaxed criterion allowed us to include in the analysis not only the areas that had a strong response in the visual areas, but also the responses from other brain areas that might reflect downstream processes related to automatic perceptual categorization. This was possible because the proposed method, given a sufficiently large dataset, is not hindered by the additional volume of irrelevant data and is able to detect narrow phenomena even in a large corpus of data.
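A toy version of this selection step could look as follows; the synthetic power values and the hand-rolled Benjamini-Hochberg correction stand in for the real data and the FDR procedure cited above.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(1)
n_trials, n_freqs = 50, 20

# Toy data: baseline vs post-stimulus power; only the first 3 bands respond.
baseline = rng.normal(1.0, 0.2, size=(n_trials, n_freqs))
post = baseline + rng.normal(0, 0.05, size=(n_trials, n_freqs))
post[:, :3] += 0.5

# Matched-pairs Wilcoxon signed-rank test, one test per frequency band.
pvals = np.array([wilcoxon(post[:, f], baseline[:, f]).pvalue
                  for f in range(n_freqs)])

def fdr_reject(p, alpha=0.005):
    """Benjamini-Hochberg step-up procedure (alpha = 0.005 as in the study)."""
    order = np.argsort(p)
    thresh = alpha * np.arange(1, len(p) + 1) / len(p)
    passed = p[order] <= thresh
    k = passed.nonzero()[0].max() + 1 if passed.any() else 0
    reject = np.zeros(len(p), dtype=bool)
    reject[order[:k]] = True
    return reject

significant = fdr_reject(pvals)
# A probe is kept if at least two frequencies respond significantly.
keep_probe = significant.sum() >= 2
print(bool(keep_probe))
```

In the study this test was applied per electrode across all 146 frequency bands; the sketch shrinks both dimensions for speed.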
To anatomically localize the source of each signal in a subject's brain, each electrode's MNI coordinates were mapped to the corresponding Brodmann brain area~\citep{brodmann1909vergleichende} using the Brodmann area atlas from the MRICron~\citep{rorden2007mricron} software.
To confirm that a probe's predictiveness of a certain category implies that the probe belongs to the network selective for that category, we ran a set of experiments on three well-known functional areas: the Fusiform Face Area (FFA) \citep{kanwisher1997fusiform}, the Visual Word Form Area (VWFA) \citep{cohen2000visual} and the Parahippocampal Place Area (PPA). Following the Montreal Neurological Institute (MNI) coordinates of the FFA reported in \citep{harris2012morphing} and \citep{axelrod2015successful} we defined the FFA bounding box as $x \in [-44,-38]$, $y \in [-61, -50]$, $z \in [-24, -15]$ in the left hemisphere and $x \in [36, 43]$, $y \in [-55, -49]$, $z \in [-25, -13]$ in the right hemisphere. Based on Table 1 from \citep{price2003myth} we defined the VWFA bounding box as $x \in [-50,-38]$, $y \in [-61, -50]$, $z \in [-30, -16]$ in the left hemisphere. From the MNI coordinates reported in \citep{bastin2013temporal} and \citep{park2009different, hamame2013dejerine} we defined the PPA bounding box as $x \in [-31,-22]$, $y \in [-55, -49]$, $z \in [-12, -6]$ in the left hemisphere and $x \in [24, 32]$, $y \in [-54, -45]$, $z \in [-12, -6]$ in the right hemisphere.
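Assigning probes to these functional areas amounts to a bounding-box membership test on MNI coordinates. The sketch below uses the left-hemisphere FFA values quoted above; the function and dictionary names are illustrative.

```python
# Left-hemisphere FFA bounding box in MNI coordinates (values from the text).
FFA_LEFT = {"x": (-44, -38), "y": (-61, -50), "z": (-24, -15)}

def in_box(coord, box):
    """True if an (x, y, z) MNI coordinate falls inside a bounding box."""
    return all(box[axis][0] <= c <= box[axis][1]
               for axis, c in zip(("x", "y", "z"), coord))

inside = in_box((-40, -55, -20), FFA_LEFT)    # a coordinate inside the box
outside = in_box((10, -55, -20), FFA_LEFT)    # wrong hemisphere
print(inside, outside)
```

The same check, with the corresponding boxes, applies to the VWFA and PPA coordinates.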
\section{Feature importances of a decoder are indicative of task-relevant brain activity}
In the taxonomy of knowledge representations (Section \ref{sec:representation-taxonomy}), decision trees were brought forward as the representation best suited for feature importance analysis (see Section \ref{sec:interpretability-tools}). In this work we use spectral power readings in the time-frequency domain as our input and look to identify which time-frequency features contribute most to the task at hand. Since feature importance is our desired interpretation, we have chosen the Random Forest learning algorithm to build the decoding model. In this section we explain the inner workings of this algorithm and present our feature analysis approach in detail.
\subsection{Random Forest as a decoding model}
\label{sec:signatures-rf-decoder}
A Random Forest~\citep{breiman2001random} is a collection of decision trees, where each tree operates on a subset of features. Each tree is assigned a random set of features and has to find the decision boundaries on those features that lead to the best classification performance. At each branching point the algorithm must decide which feature will be most efficient in terms of reducing the entropy of class assignations for the data points in the current branch of the decision tree. To achieve that, the most useful feature, responsible for the largest information gain, is selected first. For example, if the activity of a probe at 52 Hz at 340 ms is high when a subject is presented with a face and low for all other categories, a decision tree will use that fact and rely on the ``$52$ Hz at $340$ ms'' feature, thus assigning it some importance. How high the importance of a feature is depends on how well this feature distinguishes faces from all other categories. As a Random Forest is a collection of trees, the same feature will end up being included in several different trees, and being important in many trees contributes to the overall importance of the feature (for the exact computation see the section on feature importance below).
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{images/methods_pipeline.pdf}
\caption{Major steps of the data processing pipeline. \textbf{a}: Image stimuli from 8 categories were presented to test subjects. \textbf{b}: Human brain responses to images were recorded with deep intracranial electrodes. \textbf{c}: LFP signals were preprocessed and transformed into the time-frequency domain. \textbf{d}: Random Forest models were trained to decode the image category from each electrode's activity. \textbf{e}: Feature importances of each model were calculated to identify the region of each electrode's activity map that was relevant to visual object recognition. Notice how the final results on panel \textbf{e} tell us that the high gamma activity in the $90-120$ ms window and the subsequent activity in the low gamma range in the $120-250$ ms window are the only bands and time windows in that particular electrode's activity that are relevant for the classification task, while the spectrogram on panel \textbf{c} also shows activity in the early theta, beta and low gamma bands. Our analysis revealed that not all activity was relevant (or useful) for the classification of an object and showed which parts of the activity actually play a role in the process.}
\label{fig:methods-pipeline}
\end{figure}
We treated each electrode's responses as a separate dataset consisting of 400 data points (one per stimulus image) and 7008 features -- the time-frequency transformation of the LFP response into 146 frequencies and 48 time bins. For each electrode we trained a Random Forest with 3000 trees and used 5-fold cross-validation to measure the predictive power of the neural activity recorded by each of the electrodes. The per-class F\textsubscript{1} score, the harmonic mean of the precision and recall of a statistical model, provides us with a metric of classification success. The parameters were selected by performing an informal parameter search. Random Forest was the algorithm of choice for our analysis due to the interpretability of the resulting models, which allowed us to track the process that led each particular model to a decoding decision, and due to its previous application to spectrotemporal features \citep{westner2018across}. We used the \texttt{scikit-learn}~\citep{scikit-learn} implementation of the above-mentioned methods with default parameters unless indicated otherwise.
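The per-electrode decoding step can be sketched as follows. The synthetic dataset stands in for one electrode's 400 TF maps, with the feature count and the number of trees reduced from the study's $7008$ and $3000$ to keep the example fast; the way the class signal is injected is purely illustrative.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Toy stand-in for one electrode: 400 trials, 200 features, 8 categories.
n_trials, n_features, n_classes = 400, 200, 8
X = rng.normal(size=(n_trials, n_features))
y = np.repeat(np.arange(n_classes), n_trials // n_classes)
X[np.arange(n_trials), y] += 3.0      # feature j is boosted for class j trials

clf = RandomForestClassifier(n_estimators=300, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)       # 5-fold CV as in the study

per_class_f1 = f1_score(y, y_pred, average=None)  # one F1 score per category
print(per_class_f1.shape)
```

In the real pipeline the 7008-dimensional feature vector is the flattened $146 \times 48$ TF matrix, so each importance value maps back to one time-frequency pixel.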
As the first step of the decoding analysis we estimated which of the $11321$ electrodes had predictive power. For that we split each electrode's 400-sample dataset into 320 samples for training and 80 for prediction estimation. Repeating this procedure 5 times provided us with 400 predictions that we could compare to the true categories. By running a permutation test 100000 times on electrodes with randomly permuted class labels we estimated that the 99.999th percentile (equivalent to a significance threshold of $p \leq 0.00001$) of the \emph{F\textsubscript{1} score} is $0.390278$. The F\textsubscript{1} score is an aggregated metric of the performance of a classifier that combines both the \emph{precision} (the ratio of the data samples that truly belong to a category among the ones that were assigned to that category by the model) and the \emph{recall} (the ratio of the data samples that were correctly identified as belonging to a category to the total number of samples of that category in the dataset) into one number: $\text{F}_1 = 2\cdot\frac{\text{precision}\cdot\text{recall}}{\text{precision}+\text{recall}}$. In total, $787$ electrodes had a predictive power of F\textsubscript{1} $> 0.390278$ in at least one of the categories. For each of those electrodes a Random Forest model was retrained once more on the whole data (400 samples instead of 320), and that model was used for calculating feature importances and, ultimately, for understanding which parts of the recorded activity were relevant for visual object recognition in the human brain.
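The chance level of the F\textsubscript{1} score can be estimated with a label-permutation null distribution. The simplified sketch below scores shuffled labels directly rather than retraining a classifier per permutation as was done in the study, and uses far fewer permutations; the resulting threshold is therefore only indicative.

```python
import numpy as np
from sklearn.metrics import f1_score

rng = np.random.default_rng(3)
n_trials, n_classes = 400, 8
y_true = np.repeat(np.arange(n_classes), n_trials // n_classes)

# Null distribution: best per-class F1 of chance-level (shuffled) predictions.
null_f1 = []
for _ in range(2000):                     # 100000 permutations in the study
    y_perm = rng.permutation(y_true)
    null_f1.append(f1_score(y_true, y_perm, average=None).max())

threshold = np.percentile(null_f1, 99.9)  # 99.999th percentile in the study
print(float(threshold))
```

With 8 balanced classes the chance-level F\textsubscript{1} sits around $0.125$, so even the upper tail of this null distribution stays well below the study's $0.390278$ cutoff.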
\subsection{Feature importance for the analysis of task-relevant neural activity}
During the process of constructing the decision trees, Random Forest relies on some features more than on others. We chose \emph{Gini impurity} \citep{breiman2017classification} as the measure of which features should be used to make the branching decisions in the nodes of a tree. This score, along with the number of times each particular feature was used across trees, informed us about the relative importance of each particular feature with respect to the other features. Gini impurity $G$ is calculated as
\begin{equation}
G = \displaystyle\sum_{i=1}^{n_c}p_i(1 - p_i),
\end{equation}
where $n_c$ is the number of categories and $p_i$ is the proportion of class $i$ in a node. To pick a feature for a parent node, the Gini impurities of both child nodes of that parent are calculated and used to estimate the \emph{reduction in impurity} that would be achieved by picking that particular feature as the branching factor for the node. The feature that decreases impurity the most is selected to be the branching factor of that parent node. The reduction in impurity is calculated as
\begin{equation}
I = G_{\text{parent}} - G_{\text{left child}} - G_{\text{right child}}
\end{equation}
and is called \emph{node importance}. The \emph{feature importance} of a feature $f$ is estimated by summing the Gini impurity reductions over all samples in the dataset that were achieved with the use of the feature $f$ and normalizing the sum by the total number of samples. Figure~\ref{fig:methods-pipeline}e is a visual representation of relative feature importance: color intensity shows the importance of each of the 7008 (146 frequencies $\times 48$ time bins) spectrotemporal features from one probe. In total our analysis produced $787 \times 8$ such images -- one for each probe-class pair.
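In scikit-learn the Gini-based importances described above are exposed directly on the fitted model. A minimal sketch with one artificially informative feature, using toy data in place of an electrode's TF maps:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)

# Toy electrode dataset: only feature 0 carries the class signal.
X = rng.normal(size=(400, 50))
y = rng.integers(0, 2, size=400)
X[:, 0] += 2.0 * y

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Gini-based importances; scikit-learn normalizes them to sum to 1.
importances = clf.feature_importances_
print(int(np.argmax(importances)))
# In the real pipeline one would reshape them back to a TF map:
# importance_map = importances.reshape(146, 48)
```

Coloring the reshaped map by importance value reproduces images like Figure~\ref{fig:methods-pipeline}e.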
The importance map computed as depicted on Figure~\ref{fig:methods-pipeline} is an example of a global map for all 8 categories. The regions that are highlighted on the map are important for distinguishing between all 8 categories. There is, however, a way to look at category-specific importances as well. The terminal nodes of a decision tree, called \emph{leaves}, are the end points of the classification process and each leaf is associated with a certain category. If we take one TF activity map (the TF components are the features) and start traversing a decision tree following the rules set by the nodes of the tree, we will end up in a certain leaf. That leaf will be associated with a certain category, for example with faces. The fact that we followed the rules and ended up in that leaf indicates that the TF map we used as the input to the tree probably comes from a trial where a \texttt{face} stimulus was shown to the subject. In order to get a category-specific feature importance map we took all the leaves associated with a category, traversed the tree backwards and tracked all the features that were used on the path from the leaf to the root of the tree. In this way we obtained a list of features (TF components) that were used to identify a neural response as belonging to a certain category. Random Forest feature importance allowed us to identify which sub-regions of the neural activity (TF maps) are relevant for decoding. It showed that only a small portion of the activity is actually crucial for identifying the categories (see Figure \ref{fig:filter-by-importance}).
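The backward traversal from class-specific leaves can be implemented against the internal tree arrays that scikit-learn exposes. The sketch below uses a single shallow tree on toy data where class 1 is determined entirely by feature 2, so that feature must appear on every path to a class-1 leaf.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(5)
X = rng.normal(size=(300, 6))
y = (X[:, 2] > 0).astype(int)          # class 1 is determined by feature 2
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

t = tree.tree_
parent = {}                            # child node id -> (parent id, split feature)
for node in range(t.node_count):
    for child in (t.children_left[node], t.children_right[node]):
        if child != -1:                # -1 marks a leaf (no children)
            parent[child] = (node, t.feature[node])

# Walk backwards from every leaf predicting class 1, collecting split features.
class1_features = set()
for node in range(t.node_count):
    is_leaf = t.children_left[node] == -1
    if is_leaf and np.argmax(t.value[node]) == 1:
        n = node
        while n in parent:
            n, feat = parent[n]
            class1_features.add(feat)

print(sorted(class1_features))
```

Repeating this over all trees in the forest and accumulating the collected features per category yields the category-specific importance maps described above.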
\begin{figure}
\centering
\includegraphics[width=1.0\linewidth]{images/filter_by_importance.pdf}
\caption{Using the importance map to filter out irrelevant activity. The three rows show three different examples of how filtering the activity by importance is beneficial: in patient 23, probe 74 we see that only the later portion of the broadband gamma activity increase was useful for identifying this activity as a response to the \texttt{animal} stimulus; patient 27, probe 97 shows that although there is an increase in broadband activity, the actually useful information was contained in the decrease in the lower frequency bands; patient 87, probe 52 demonstrates that for decoding this particular probe's activity one must focus on the activity in lower frequencies at specific times and, despite its prominent presence, ignore the increase in broadband gamma. \textbf{a.} Probe's importance map; color codes the relative importance of each spectrotemporal feature within the map. \textbf{b.} Full spectrotemporal activity of the probe; features with importances one standard deviation higher than the average (in contour) mark the regions of activity that were useful for the decoding model. \textbf{c.} Activity of the probes filtered by the importance mask; only the relevant activity is preserved.}
\label{fig:filter-by-importance}
\end{figure}
To compare importance maps with each other we fit a normal distribution to the difference between two maps and considered statistically significant those differences larger than $\mu + 4\sigma$. One spectrotemporal importance map consists of 7008 values. To filter out false positives we stipulated that only 1 false positive out of the 7008 pixels can be tolerated and tuned the threshold accordingly. That requirement resulted in a p-value of $0.0001427$ and a confidence level of $99.99\%$, corresponding to $3.89\sigma$, which we rounded up to $4\sigma$.
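The threshold derivation above can be reproduced with a short calculation (using `scipy.stats.norm`; treating the quoted $99.99\%$ level as a two-sided confidence level is our assumption):

```python
from scipy.stats import norm

# One importance map has 146 x 48 = 7008 spectrotemporal features.
n_features = 146 * 48
p_value = 1 / n_features  # tolerate ~1 false positive per map, ~0.0001427

# Two-sided z threshold at the 99.99% confidence level,
# ~3.89 standard deviations, rounded up to 4 sigma in the analysis.
z = norm.ppf(1 - 0.0001 / 2)
```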
\subsection{Hierarchical clustering to reveal types of activity patterns}
\label{sec:signatures-clustering}
To further analyze the spectrotemporal signatures elicited by different visual categories in different parts of human brain we clustered filtered activity patterns and identified the most prominent groups. The result of this analysis is shown in the second column of Figure~\ref{fig:importances-clusters-mnis}. For each category, the four most populated (in terms of the number of probes) clusters of activity patterns elicited by this category are shown.
To do the clustering we first took each probe's category-specific activity separately by averaging the probe's responses to the 50 images of each particular category in the time-frequency domain. We then masked the activity with the category importance map (as shown in Figure~\ref{fig:filter-by-importance}), leaving only those features out of $146 \times 48$ that have an importance score larger than $\mu + \sigma$, where $\mu$ is the average importance score for that category and $\sigma$ is one standard deviation of the score distribution.
Masked activity patterns were hierarchically clustered using Eq~\ref{eq:clustering-distance} to calculate the distance between a pair of clusters $U$ and $V$ as the maximal cosine distance between all of the clusters' member observations (complete linkage clustering):
\begin{equation}
\label{eq:clustering-distance}
d(U, V) = \max_{\mathbf{u} \in U,\ \mathbf{v} \in V} \Big(1 - \displaystyle\frac{\mathbf{u} \cdot \mathbf{v}}{\|\mathbf{u}\| \|\mathbf{v}\|}\Big)
\end{equation}
The \texttt{SciPy}~\citep{scipy} implementation of hierarchical clustering was used in this work. The resulting cluster assignments were visually inspected and corrected.
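A minimal sketch of this clustering step with `scipy.cluster.hierarchy`; the two synthetic groups of patterns below stand in for the masked probe activities:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Two synthetic groups of "activity patterns" with distinct orientations,
# playing the role of masked TF activity vectors of individual probes.
rng = np.random.default_rng(0)
pattern_a = np.r_[np.ones(25) * 5.0, np.zeros(25)]
pattern_b = np.r_[np.zeros(25), np.ones(25) * 5.0]
X = np.vstack([rng.standard_normal((10, 50)) + pattern_a,
               rng.standard_normal((10, 50)) + pattern_b])

# Complete-linkage hierarchical clustering with cosine distance.
Z = linkage(X, method='complete', metric='cosine')
labels = fcluster(Z, t=2, criterion='maxclust')
```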
\section{The role and diversity of time-frequency patterns of individual locations and area networks in perceptual categorization}
By choosing machine learning algorithms with subsequent interpretability in mind (random forest, hierarchical clustering) and applying interpretability techniques (feature importance analysis, visualization), we were able to extract the knowledge that the model had obtained, present it in a way that is understandable to a neuroscientist, and articulate the neurological insights the model had found. The three main observations we made as a result of this analysis were: (a) the difference between the responsiveness of a neural location and its ability to predict an experimental condition (visual category), (b) the existence of monopredictive and polypredictive neural locations, where the former are specialized and only relevant for processing specific visual categories, while the latter carry information relevant to the decoding of several categories, and (c) an extensive mapping and description of the time-frequency patterns that are characteristic of cognitive processing of each particular visual category. In this section we present these findings in full detail.
\subsection{Feature importance allows us to separate the neural signals that are predictive of perceptual categorization from the mixture of stimulus-induced responses}
To identify spectrotemporal features that are characteristic of automatic perceptual categorization of a particular category we relied on time-frequency (TF) maps of the neural responses of intracranially implanted electrodes. Out of the total set of 11321 probes, 11094 (98\%) were responsive (see section \ref{sec:signatures-ns-processing} on the processing of neural data for details) to the stimuli from at least one of the categories. On one hand, this provides us with an abundance of data; on the other, it raises the question of whether all of that activity was relevant to the processes that encode and process visual input.
Training a decoding model (see the section \ref{sec:signatures-rf-decoder} on Random Forest as decoding model) for each of the probes allowed us to dissociate the \emph{predictive probes} that exhibited activity that was useful for decoding from the rest of the \emph{responsive probes} that did not carry such activity.
Green markers in Figure~\ref{fig:responsive-vs-predictive}a show the set of probes that are responsive to the \texttt{house} category, while the blue markers are the probes that are predictive of that category (4.8\%, 535 probes). Decoding models built on the neural responses of the predictive probes were successful at classifying at least one perceptual category ($\text{F}_1 > 0.39$ for one or more classes); focusing on them in our further analysis allowed us to work only with the locations that carry information relevant to the task of perceptual categorization.
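A sketch of such a per-probe decoding model and the $\text{F}_1 > 0.39$ criterion (scikit-learn assumed; the data are synthetic, with reduced dimensionality and one artificially separable category standing in for a predictive probe's activity):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import f1_score

# Synthetic per-probe data: flattened TF features, 8 category labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((400, 500))
y = rng.integers(0, 8, size=400)
X[y == 3, :20] += 4.0  # make category 3 decodable from this "probe"

# Cross-validated predictions from a Random Forest decoder.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
y_pred = cross_val_predict(clf, X, y, cv=5)

per_class_f1 = f1_score(y, y_pred, average=None)  # one score per category
predictive_of = np.where(per_class_f1 > 0.39)[0]  # "predictive" categories
```

A probe whose `predictive_of` is non-empty would count as a predictive probe in the sense used above.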
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{images/responsive-vs-predictive.pdf}
\caption{Distribution of predictive probes. \textbf{a}. Green markers indicate all of the probes that were responsive to stimuli from the \texttt{house} category. Blue markers indicate only the predictive probes that carry information that is relevant to decoding the neural response as reaction to \texttt{house} stimulus. \textbf{b}. Distribution of predictive probes over areas within each category (each row sums up to 100\%). Color shows the percentage across categories. \textbf{c}. Distribution of predictive probes over a category within each area (each column sums up to 100\%). Color shows the percentage across areas.}
\label{fig:responsive-vs-predictive}
\end{figure}
Predictive probes had a heterogeneous distribution in the brain, yet remained mostly concentrated in visual cortices and inferior temporal regions (76\%), from BA17 to BA20, including early visual areas (BA 18, 19), fusiform gyrus (BA 37) and inferior temporal cortex (BA 20). A majority of the predictive probes were in fusiform cortex (average of 52\% over all categories, Figure~\ref{fig:responsive-vs-predictive}b), followed by BA 19 (27\%), across all category networks.
Within the primary visual cortex, BA 17 and 18, the \texttt{scrambled} category elicited the most predictive probes (28) among all stimulus categories (Figure~\ref{fig:responsive-vs-predictive}c), followed by \texttt{pseudowords} (13). Probes predictive of \texttt{faces} were mostly concentrated in BA19, BA37 and BA20 (72\%, 108 out of 150). The low number of predictive probes in area 17 is explained by the fact that less than 1\% of the implantation sites in the original dataset were located in primary visual cortex.
Previous studies have shown that perceptual category-selective networks are located in occipito-temporal cortex \citep{grill2014functional, ishai1999distributed, malach1995object}. To test whether the predictive power of the Random Forest model trained to decode probe activity is coherent with known functional processing by cortical networks, we evaluated the selectivity of the predictive power in three known functional networks: the Fusiform Face Area (FFA) \citep{kanwisher1997fusiform}, the Visual Word Form Area (VWFA) \citep{cohen2000visual} and the Parahippocampal Place Area (PPA) \citep{epstein1998cortical}. We checked whether a Random Forest model trained on the activity of the probes located in each of these areas to discriminate between the 8 categories produces the highest predictive power for the category for which that area is known to be selective. Probes in FFA are associated with facial recognition and encoding facial information \citep{parvizi2012electrical, ghuman2014dynamic, kadipasaoglu2016category, jonas2016face, jonas2015beyond} and thus we expect their activity to be predictive of the \texttt{face} category; probes in VWFA should be predictive of the \texttt{characters} and \texttt{pseudowords} categories \citep{kadipasaoglu2016category, lochy2016left, hirshorn2016decoding}; and probes in PPA should be responsive to \texttt{scenes} and \texttt{houses} \citep{aguirre1998area, megevand2014seeing, epstein1998cortical, bastin2013timing}.
There were 12 probes in the FFA that were significantly (permutation test $p < 1\mathrm{e}{-4}$) predictive (classification score $\text{F}_1 > 0.39$) of a category: 5 were predictive of \texttt{faces}, 4 of \texttt{animals} (which mostly have faces in the image), 2 of \texttt{pseudowords} and 1 of \texttt{scrambled} images. Most probes that were in FFA and were predictive carried information about the categories containing facial features.
There were 8 probes in the VWFA that were predictive of a category: 5 were predictive of \texttt{pseudowords}, 2 of \texttt{characters} and 1 of \texttt{faces}. This indicates that the predictive probes in VWFA are predictive of stimuli with written characters on them. These results confirm that the predictive power of a Random Forest model trained on probe activity in VWFA reflects the functional role known to be carried by this area.
For probes in the PPA, the results were less selective. There were 23 probes inside that area that were predictive of a category: 5 were predictive of \texttt{houses}, 4 of \texttt{scenes}, 5 of \texttt{characters}, 5 of \texttt{scrambled} images, 2 of \texttt{tools} and 2 of \texttt{pseudowords}. The probes from PPA predicted not only \texttt{houses} and \texttt{scenes}, but also other categories. However, \texttt{houses} and \texttt{scenes} were among the categories that the probes from PPA were able to identify successfully in the highest proportion compared to the other categories.
These confirmatory findings give credibility to the methodology by which the probes that are identified as predictive of a certain category are involved in the processing of the stimuli that belong to that category.
Training per-probe decoding models not only allowed us to identify the predictive locations, but also to apply feature importance analysis to decoding models trained on local activity. Computing the feature importance across the time-frequency map ($4 - 150$ Hz and $-500$ to $1000$ ms) allowed us to see which parts of neural activity are crucial for the decoding. Overlaying the importance over the time-frequency map showed at which frequencies and at what times the activity that was important for the algorithm occurred. This can be applied both at the aggregate level, where the importance map is averaged over probes, and at the individual probe level. Figure~\ref{fig:filter-by-importance} illustrates the application of a probe importance map to filter out irrelevant activity and obtain the spectrotemporal signature of a particular category on a particular probe. We can then use the feature importance map as a mask and analyze the activity itself, focusing only on its relevant parts. When applicable, this methodology filters out irrelevant activity and allows us to focus on the activity that is important to the scientific question under investigation.
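The masking step can be sketched in a few lines of NumPy; the arrays here are random placeholders for a probe's TF activity and its importance map:

```python
import numpy as np

# Placeholder probe-level arrays: baseline-normalized TF activity and
# the importance map produced by the decoding model.
rng = np.random.default_rng(0)
activity = rng.standard_normal((146, 48))
importance = rng.random((146, 48))

# Keep only features whose importance is more than one standard
# deviation above the mean importance; zero out the rest.
mask = importance > importance.mean() + importance.std()
filtered = np.where(mask, activity, 0.0)
```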
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\linewidth]{images/eight-importance-maps.pdf}
\caption{Average importance map of each of eight categories over probes predictive of that category. The color shows the relative importance of each spectrotemporal feature, indicating how informative that particular feature was for the task of decoding.}
\label{fig:eight-importance-maps}
\end{figure}
We took an average over importance maps of all individual probes within each category to obtain the global picture of where the category-specific activity lies in time and frequency space. Figure \ref{fig:eight-importance-maps} summarizes such analysis and singles out the spectrotemporal signatures that are unique to specific categories and those that are less selective. From these importance maps we notice that certain TF components are distinctly present per category, as for example high (significantly higher than 81 out of 112 regions of interest, Mann-Whitney U $p < 8.9\mathrm{e}{-7}$, corrected) importance of the transient theta activity in all categories, or the almost absence of importance of broadband gamma (significantly lower than 10 out of 12 other regions of interest, Mann-Whitney U $p < 8.3\mathrm{e}{-5}$, corrected) in the control scrambled condition.
In the following sections we expand our analysis to the comparison of the feature maps and analyzing the activity in the regions that we have identified as important.
\subsection{Polypredictive and monopredictive probes}
The analysis revealed two types of neural locations: \emph{polypredictive} probes are predictive of multiple visual categories, while \emph{monopredictive} probes are useful for decoding only one out of the 8 different types of stimuli, revealing a high degree of specialization (Figure~\ref{fig:poly-mono-location}b). We considered a probe to be predictive of a category if the cross-validation F\textsubscript{1} score for that category was higher than $0.39$ (see section \ref{sec:signatures-rf-decoder} for the details on the threshold selection), which is a stricter condition than the above-chance criterion ($\text{F}_1 > 0.125$). Figure~\ref{fig:poly-mono-location}a shows that polypredictive probes reside mainly (94\%, 136 out of 145) in posterior occipital and posterior temporal cortex, while the monopredictive probes extend, in addition to occupying similar posterior occipital and temporal locations, to frontal cortex (92\%, 45 out of 49 probes in this area are monopredictive) and anterior temporal cortex (88\%, 51 out of 58 probes). Both mono- and polypredictive probes are also observed in parietal cortex. Monopredictive probes that extend beyond the ventral stream and temporal cortex pertain to the following perceptual categories: \texttt{faces} (orbitofrontal cortex), \texttt{animals} and \texttt{pseudowords} (dorsofrontal cortex, inferior frontolateral cortex, premotor cortex), and, to a smaller extent, \texttt{scrambled} images (prefrontal cortex).
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\linewidth]{images/poly-mono-location.pdf}
\caption{Anatomical distribution of mono- and polypredictive locations. \textbf{a:} Red markers are the locations of monopredictive probes, blue markers are the locations of polypredictive ones. Polypredictive probes (145 unique locations) are mostly confined to visual areas and temporal lobe (both parts of the ventral stream), while monopredictive (specialized, 401 unique locations) probes are, in addition to visual areas, also found in frontal and parietal cortical structures. \textbf{b:} The histogram shows how many categories are predictable by how many probes.}
\label{fig:poly-mono-location}
\end{figure}
The unique association of specific TF feature importance components with either polypredictive or monopredictive probes was category specific, as shown in Figures \ref{fig:poly-mono-fitfs}a to \ref{fig:poly-mono-fitfs}h.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{images/poly-mono-fitfs.png}
\caption{Statistically significant differences between the importance of monopredictive and polypredictive probes' activity. Gray regions indicate the areas of the TF spectrum where both monopredictive and polypredictive probes exhibit important ($2\sigma$ from the mean) activity. On top of that, if one of the groups is statistically significantly ($4\sigma$ difference) more important than the other, the region is colored blue (polypredictive) or red (monopredictive) to show which of the two kinds of neural specialization dominates this TF region in terms of importance. For example, decoding of \texttt{scenes} (d) involves early theta activity of polypredictive (blue) probes, followed by broadband gamma activity that is significantly important (gray) and slightly dominated by monopredictive (red) probes, followed in turn by late alpha activity produced predominantly by monopredictive neural locations. \textbf{a:} \texttt{house}, \textbf{b:} \texttt{face}, \textbf{c:} \texttt{animal}, \textbf{d:} \texttt{scene}, \textbf{e:} \texttt{tool}, \textbf{f:} \texttt{pseudoword}, \textbf{g:} \texttt{characters}, \textbf{h:} \texttt{scrambled}. }
\label{fig:poly-mono-fitfs}
\end{figure}
While all of the data presented in these figures shows statistically significant differences between monopredictive and polypredictive neural locations, we will focus only on a few that were supported by the strongest signal in the data. For \texttt{face} stimuli, most of the feature importance in the early broadband gamma response was significantly ($4\sigma$) higher in polypredictive probes as compared to monopredictive probes, indicating that the most useful information for distinguishing \texttt{faces} from other visual categories is coded in that region of time-frequency space and is carried by polypredictive probes (Figure~\ref{fig:poly-mono-fitfs}b). Decoding of \texttt{animals} and \texttt{tools} relied on the activity patterns produced by monopredictive neural locations in the late broadband gamma range ($> 300$ ms) and in the even later ($350 - 600$ ms) alpha/beta range, with very little involvement of the activity of polypredictive probes. \texttt{Scenes} and \texttt{houses} also show strong feature importance in late alpha and beta band responses of monopredictive probes ($4\sigma$ higher). Interestingly, for \texttt{characters} (Figure~\ref{fig:poly-mono-fitfs}g), feature importance in the early broadband gamma range was dominant for polypredictive probes ($4\sigma$ higher than monopredictive), while the opposite was true for \texttt{pseudowords} (Figure~\ref{fig:poly-mono-fitfs}f): the late broadband gamma turned out to be dominant for monopredictive probes. Note also the difference in the anatomical locations that were most useful for decoding \texttt{pseudowords} compared to those useful for decoding \texttt{characters}. \texttt{Pseudowords} also elicited a significantly stronger TF feature importance in monopredictive probes in the late ($350 - 750$ ms) low-frequency ($4 - 12$ Hz) range, similar to the \texttt{animal} and \texttt{tool} stimulus categories.
Finally, an interesting observation was that \texttt{animals} and \texttt{faces} share most of their polypredictive probes (51\%) indicating a large overlap of categorization networks of these two categories.
\subsection{Further decomposition of important activity reveals clusters of distinct time-frequency patterns}
We ran a clustering analysis of the probes predictive of a category based on their activity to see which probes in the category-network behave in a similar way. The left column of Figure~\ref{fig:importances-clusters-mnis} shows an averaged feature importance map for a given category. We look into the regions of the time-frequency map that are indicated as important by the feature importance map, extract baseline-normalized activity in those regions, and cluster the probes according to that activity using hierarchical complete-linkage clustering with cosine distance (see section \ref{sec:signatures-clustering} on hierarchical clustering for details). The second column of Figure~\ref{fig:importances-clusters-mnis} shows the activity of the four most populated clusters for each category. Each cluster represents the activity pattern exhibited by the probes in that cluster. Only the probes whose activity had predictive power ($\text{F}_1 > 0.39$) are included in this analysis. As the final step we identified the anatomical locations of the probes from each cluster to see whether differences in activity patterns could be attributed to functional regions of the brain. This step is visualized in the last two columns of Figure~\ref{fig:importances-clusters-mnis}.
This analysis allowed us to make a number of \emph{global} and \emph{category-specific} observations. The set of visual categories presented in our data is diverse enough to consider the category-specific findings general, expected to emerge under any comparable set of visual stimuli.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{images/importances-clusters-mnis-top4.png}
\caption{Detailed analysis of spectral activity of \textbf{(a)} \texttt{animals}, \textbf{(b)} \texttt{faces}, \textbf{(c)} \texttt{pseudowords} and \textbf{(d)} \texttt{scrambled} images. The leftmost column contains the importance maps extracted from Random Forest models and shows where in time and frequency the important activity is. The second column visualizes the four largest (by the number of recording sites) clusters of activity patterns inside those spectrotemporal regions that are deemed important. The numbers in the top right corner of each cluster's activity pattern show the average predictive power (F\textsubscript{1} score) of the probes in that cluster and the proportion of polypredictive locations that exhibited this particular pattern of activity. Note that every cluster has a designated color: green, blue, red or black. The color of the cluster matches the color of the MNI location markers in the last two columns, which show sagittal and dorsal views of the brain. White markers show the probes that have predictive power, but whose activity pattern does not belong to any of the four major clusters.}
\label{fig:importances-clusters-mnis}
\end{figure}
The first global observation was that not only broadband gamma activity was useful for the decoder's (Random Forest) performance: low-frequency activity also contributed significantly (41\% of predictive probes exhibited only low-frequency activity in the regions of importance), sometimes overshadowing the activity of higher frequency bands altogether (for \texttt{face} and \texttt{scrambled} stimuli low-frequency activity was significantly more important than broadband gamma activity, Mann-Whitney U test $p < 1\mathrm{e}{-7}$, corrected). Most clusters were composed of a combination of low and high-frequency components (Figure~\ref{fig:importances-clusters-mnis}, second column) and were mostly (87\%) located in occipito-temporal cortices, though some electrodes in parietal and frontal cortex (7\%) also contributed predictive responses to the decoding process (Figure~\ref{fig:importances-clusters-mnis}, two right columns), especially for such stimulus categories as \texttt{animal} and \texttt{pseudoword}.
The second observation spanning across all categories was that the classifier used not only the increases in power to perform the classification, but also relied on power decreases in different brain networks (7 out of 32 dominant activity clusters consisted solely from the activity patterns characterized by power decrease). The most prominent examples are the clusters \texttt{faces-2} (Figure~\ref{fig:importances-clusters-mnis}b), \texttt{animals-2} (Figure~\ref{fig:importances-clusters-mnis}a), \texttt{tools-2}, \texttt{pseudowords-1}, \texttt{pseudowords-2} (Figure~\ref{fig:importances-clusters-mnis}c), \texttt{scrambled-1} and \texttt{scrambled-2} (Figure~\ref{fig:importances-clusters-mnis}d). For example, to decode \texttt{face} or \texttt{pseudowords} from the activity of the blue cluster network, the RF classifier used broadband gamma power decreases located in posterior inferior temporal cortex and inferior occipital gyrus. None of the probes for which the decrease in activity was identified as important for decoding were located in classically defined Default Mode Network \citep{buckner2008brain, raichle2015brain}.
Across all categories, the earliest component that often appeared in clusters was a brief power increase (mean non-zero power increase was $2.8$ times the baseline in the region of interest) in the low-frequency interval (4-25 Hz), which for one group of probes is associated with an almost instantaneous broadband gamma power increase (Figure~\ref{fig:importances-clusters-mnis}b, cluster 3, mean broadband gamma increase of 1.9 times the baseline), but remains the only source of important activity for another group of probes (Figure~\ref{fig:importances-clusters-mnis}b, cluster 1).
Studying the anatomical locations of the probes belonging to different clusters of activity revealed interesting observations. Figure~\ref{fig:importances-clusters-mnis}c, \texttt{pseudowords}, clusters 1 and 3, is a clear example of how clustering by activity patterns assigns probes to functionally different anatomical areas. The gamma-band increase signature captured by cluster 3 occurs only in the left hemisphere (red markers in Figure~\ref{fig:importances-clusters-mnis}c); the late theta-alpha power decrease captured by cluster 1 also occurs only in the left hemisphere (green markers) and is spatially clearly distinct from the probes in cluster 3. Because it is known that \texttt{pseudoword} stimuli elicit top-down language-related (orthographic, phonological and semantic) analysis, which engages highly left-lateralized networks identifiable in iEEG recordings \citep{juphard2011direct, mainy2008cortical}, we know that this observation reflects a functional brain process. This dissociation in both the spectrotemporal and anatomical domains provides us with valuable data on the locations and associated activity patterns emerging during automatic perceptual categorization and highlights the benefit of disentangling the activity into functionally and anatomically dissociated clusters.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{images/predictiveness_bbgamma_vs_low.png}
\caption{Comparison of predictive power of the electrodes from three different sets of features: full spectrum ($4 - 150$ Hz), broadband gamma alone ($50 - 150$ Hz) and lower frequencies alone ($4 - 50$ Hz) across categories. The bracket with the p-value indicates a significant difference according to Mann--Whitney U test.}
\label{fig:predictiveness-bbgamma-vs-low}
\end{figure}
Finally, the relevance of the different components in the TF domain for the Random Forest classification process was assessed. Specifically, we tested whether the activity in the broadband gamma range, commonly present in most clusters across categories, is in general the most valuable neural signature for category networks as compared to the low-frequency parts of the spectrum. To this end we statistically compared the predictive power of three intervals: broadband gamma ($50 - 150$ Hz), low-frequency ($4 - 50$ Hz) and the full spectrum ($4 - 150$ Hz). Overall, across 7 perceptual categories out of 8 (except for \texttt{scenes}), using the full spectrum was more informative than using the broadband gamma interval or the low-frequency interval alone (Mann--Whitney U test, $p < 0.001563$, corrected for the number of clusters compared, see Figure~\ref{fig:predictiveness-bbgamma-vs-low}), which is in line with the results reported by \cite{miller2016spontaneous}. Importantly, for \texttt{scrambled} images and \texttt{faces} the broadband gamma carried \emph{less} (Mann-Whitney U $p < 1\mathrm{e}{-7}$, corrected) decoding-relevant information than the lower frequencies.
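The band comparison can be sketched with `scipy.stats.mannwhitneyu`; the two score samples below are synthetic stand-ins for per-probe $\text{F}_1$ scores of the full-spectrum and gamma-only models:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Synthetic per-probe F1 scores for two feature sets (illustrative only).
rng = np.random.default_rng(0)
f1_full = 0.55 + 0.05 * rng.standard_normal(200)
f1_gamma = 0.45 + 0.05 * rng.standard_normal(200)

# One-sided test: is the full spectrum more informative than gamma alone?
u_stat, p = mannwhitneyu(f1_full, f1_gamma, alternative='greater')
significant = p < 0.001563  # corrected threshold used in the analysis
```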
\section{Significance of bottom-up approach to the\\analysis of human intracerebral neural activity}
In this chapter we explored the bottom-up approach to the analysis of human intracerebral neural activity. Thanks to a rich dataset and powerful methodology we were able to uncover facts about the neural processing of automatic visual categorization that we would not necessarily have addressed in a hypothesis-driven study. Previous works have shown where and when perceptual category information can be decoded from the human brain; our study adds to that line of research by identifying the spectrotemporal patterns that contribute to category decoding without the need to formulate an a priori hypothesis about which spectral components, at which times, are worth investigating.
The classifier model first allowed us to globally identify two types of neural responses: those that were predictive of a certain category and those that did not predict any category despite eliciting strong amplitude modulation across multiple frequency bands. Surprisingly, when comparing the level of predictability of probe responses we found that only 4.8\% of the responsive probes were predictive of a category. This very low percentage highlights an important fact regarding the level of ``selectivity'' of neural responses. In this decoding approach, the level of single-probe neural response selectivity depends on the diversity and overall quantity of the comparison/reference group to which it is compared. Stimulus-induced neural signal selectivity is thus a graded quality that can be assessed through multiple comparisons with a broad variety of stimulation conditions. This result also implies that although any stimulus can elicit a local neural response throughout the cerebral cortex, in the light of our results there is a high probability of it being non-predictive of any of the categories or being polypredictive of several categories at once.
In line with a vast literature on the localization of category related networks \citep{kanwisher1997fusiform, epstein1999parahippocampal, malach1995object, haxby2001distributed, ishai1999distributed, cohen2000visual, peelen2009neural, grill2014functional, tanaka1996inferotemporal, dicarlo2012does} predictive probes concentrated mostly in the inferior temporal cortex, namely the fusiform gyrus (BA 37), yet surprisingly for some categories, probes in primary visual cortex were also predictive of these categories. This effect is probably related to the specifics of the physical content of certain images that uniquely characterize certain categories amongst all others, as for example the content in high-contrast edge information in \texttt{scrambled} and written text stimuli.
Predictive probes were subsequently classified according to their level of selectivity towards a single or multiple visual categories. Polypredictive probes (36\%) clustered in visual cortices and inferior temporal cortex and were associated with early spectral components ($< 300$ ms) such as broadband gamma power increases and a transient theta burst shortly after stimulus presentation. Monopredictive probes (64\%) were abundant in these same regions, but extended uniquely into frontal, parietal, superior temporal and anterior limbic cortex. Their activity was strongly associated with later times ($> 300$ ms) and with power suppression, versus baseline, of spectral importance features in the theta ($4-7$ Hz), alpha ($8-15$ Hz) and beta ($16-40$ Hz) bands. In a subgroup of probes the associated power suppression of the feature importances extended into the broad gamma band ($50-150$ Hz).
Importantly, the capacity to ascribe category selectivity to predictive probes (mono vs polypredictive probes) arises from the fact that the decoding model was trained to discriminate between all 8 categories simultaneously. The separation between mono and polypredictive probes revealed specific effects in terms of network localization and time-frequency components. The high concentration of polypredictive probes (and local networks) in early visual cortices, from primary visual cortex up to inferior temporal cortex, is coherent with the idea that networks in the ventral visual stream progressively integrate more complex features into object representations, thus becoming progressively more selective, and converge within the medial temporal lobe to more stimulus-invariant representations \citep{quiroga2005invariant}. This progressive information integration by spectral features of neuronal responses across the visual hierarchy has recently been connected with the computations carried out by deep convolutional neural networks trained to solve the task of visual recognition \citep{kuzovkin2018activations}.
Globally, the random forest data classification provided results that are coherent with current knowledge on 1) the implication of networks located in visual cortex and inferior temporal cortex in processing visual categories, 2) the timing of object categorization in the human brain and 3) the role of broadband gamma responses in processing category-selective information within these networks. Previous studies have shown that certain stimulus categories elicit clustered cortical responses of highly localized networks in the occipito-temporal ventral stream such as the fusiform-face-area (FFA) and the visual-word-form area (VWFA) \citep{kanwisher1997fusiform, cohen2000visual}. Yet, other studies have broadened this scope by showing that certain categories, as for example faces, rely on the involvement of a larger brain-wide distributed network \citep{ishai2005face, vidal2010category}. Our classification analysis shows that the spatial extent of this network distribution is category specific, certain stimuli eliciting larger network responses, such as for \texttt{faces}, \texttt{animals} and \texttt{pseudowords}, as compared to \texttt{scenes}, \texttt{houses} and \texttt{scrambled} images which concentrate in the fusiform cortex, the parahippocampal cortex and primary visual cortex respectively.
Our results largely agree with previous works trying to decode visual object categories over time with magnetoencephalography (MEG) \citep{carlson2013representational, cichy2014resolving} or intracranial recordings \citep{liu2009timing}. All these studies converge on the result that perceptual categories can be decoded from human brain signals as early as 100 ms. Our current work goes a step beyond these previous investigations by demonstrating which spectral components underlie this fast decoding. Previous intracranial studies have also shown that broadband gamma is modulated by information about object categories \citep{vidal2010category, privman2007enhanced, fisch2009neural}. Moreover, broadband gamma has been suggested as a proxy to population spiking output activity \citep{manning2009broadband, ray2011different, lachaux2012high, ray2008neural}. It has since then been considered as a hallmark of local population processing \citep{parvizi2018promises}. Our classification results however show that broadband gamma is not the sole selectivity marker of functional neural processing, and that higher decoding accuracy can be achieved by including low-frequency components of the spectrum. For certain stimulus categories, as scrambled images, the broadband gamma range is even outperformed by the predictive power of the low-frequency range.
To understand which spectral components play a specific role in stimulus categorization we analyzed the decision process that drives the decoding model and identified the combined spectrotemporal regions that are informative for the output of the random forest classification procedure. This allowed us 1) to identify the category-selective spectral components of high importance for the automatic visual categorization process, and 2) to characterize the functional involvement of positive as well as negative power modulations (increases and decreases versus baseline) in early and late time windows of neural processing involved in visual categorization.
While the distinctive activity of polypredictive neural locations is mostly reflected by early TF components (i.e. broadband gamma and theta burst in \texttt{faces}), the sustained decrease in power in the alpha/beta band was extended in space and time. This process is probably dependent on the degree of difficulty the networks face in reaching a perceptual decision, which calls for the involvement of top-down processing required to resolve the perceptual ambiguity elicited by the different stimulus categories. For example, \texttt{animal} and \texttt{tool} stimuli are highly diverse in their physical image structure, as compared to \texttt{face} stimuli. This affects the efficiency of bottom-up processes in extracting category information, often associated with increase in gamma activity, and probably in parallel triggers top-down processes through selective activity modulation in low-frequency channels \citep{bastos2015visual}. In our data, this latter phenomenon could be mirrored by a decrease of predictive power in the low-frequency range. Studies have shown that power modulations reflect changes in network connectivity \citep{tewarie2018relationships} and that top-down processes, eliciting a decrease in power in the alpha-beta band, are accompanied by an increase in distant network connectivity \citep{gaillard2009converging}.
Finally, we also show that certain probes elicit decreased broadband gamma responses (versus baseline) while representing a significant feature importance for the classification model. It has been shown that neural activity in the Default Mode Network can be negatively modulated by attending sensory stimulation \citep{buckner2008brain}, and intracranial studies have found that this was reflected by decreases (versus baseline) in the broad gamma range \citep{ossandon2011transient, jerbi2010exploring, dastjerdi2011differential}. Here we found no evidence of such power decreases in probes located in the DMN \citep{buckner2008brain}. However, the random forest classifier singled-out broad spectral patterns of power decreases at probes located in visual regions and beyond for categories \texttt{faces}, \texttt{pseudowords} and \texttt{characters}. This is the first time, to our knowledge, that power decreases in the broadband gamma range outside the DMN have been associated with highly functional neural signal classification of perceptual categories. Their functional significance should be studied in the future as they could reflect an important phenomenon of communication regulation between networks during perceptual decision making of visual categories.
Expanding on this work and established methodology by including more subject data in the future might allow us to make a transition from the observations of local activity and the analysis of its role to being able to detect signatures of global decision-making processes. It is possible that these signatures would be reflected in specific spectral fingerprints as many classic theories would suggest \citep{rodriguez1999perception, varela2001brainweb, engel2001dynamic, siegel2012spectral}. The methodology proposed in this study can facilitate the search of those fingerprints without the need to formulate a priori hypotheses about which spectrotemporal components are worth investigating.
\chapter{Representational similarities between biological visual processing and artificial neural networks inform on structural organization of human visual cortex}
\label{ch:intracranial-dcnn-based}
In the previous chapter we have demonstrated a way to interpret an automatically built model to gain knowledge about neurological mechanisms on the level of local field potentials. In this chapter we move to a higher level of abstraction and employ machine learning and interpretability to peek into functional organization of human visual cortex. Using a metric based on representational similarity analysis we compare the activations of biological neurons in the layers of human visual cortex with the activations of artificial neurons in the layers of a deep convolutional neural network. This comparison allows us to find an alignment between those two hierarchical structures and to investigate which spectrotemporal regions of human brain activity are aligned the best, confirming the role of high gamma activity in visual processing. Using \emph{deconvolution} technique to interpret the behavior of the DCNN we were able to visualize visual inputs that the artificial neurons are tuned to and observe the similarities with the reported tuning of biological neurons when processing visual inputs.
\section{The search for the model of\\human visual system}
Biological visual object recognition is mediated by a hierarchy of increasingly complex feature representations along the ventral visual stream \citep{dicarlo2012does}. Intriguingly, these transformations are matched by the hierarchy of transformations learned by deep convolutional neural networks (DCNN) trained on natural images~\citep{gucclu2015deep}. It has been shown that DCNN provides the best model out of a wide range of neuroscientific and computer vision models for the neural representation of visual images in high-level visual cortex of monkeys \citep{yamins2014performance} and humans \citep{khaligh2014deep}. Other studies with functional magnetic resonance imaging (fMRI) data have demonstrated a direct correspondence between the hierarchy of the human visual areas and layers of the DCNN \citep{gucclu2015deep, eickenberg2016seeing, seibert2016performance, cichy2016comparison}. In sum, the increasing feature complexity of the DCNN corresponds to the increasing feature complexity occurring in visual object recognition in the primate brain \citep{kriegeskorte2015deep, yamins2016using}.
However, fMRI based studies only allow one to localize object recognition in space, but neural processes also unfold in time and have characteristic spectral fingerprints (i.e. frequencies). With time-resolved magnetoencephalographic recordings it has been demonstrated that the correspondence between the DCNN and neural signals peaks in the first 200 ms \citep{cichy2016comparison, seeliger2017cnn}. Here we test the remaining dimension: that biological visual object recognition is also specific to certain frequencies. In particular, there is a long-standing hypothesis that especially gamma band ($30 - 150$ Hz) signals are crucial for object recognition \citep{singer1995visual, singer1999neuronal, fisch2009neural, tallon1997oscillatory, tallon1999oscillatory, lachaux1999measuring, wyart2008neural, lachaux2005many, vidal2006visual, herrmann2004cognitive, srinivasan1999increased, levy2015selective}. More modern views on gamma activity emphasize the role of the gamma rhythm in establishing a communication channel between areas \citep{fries2005mechanism, fries2015rhythms}. Further research has demonstrated that especially feedforward communication from lower to higher visual areas is carried by the gamma frequencies \citep{van2014alpha, bastos2015visual, michalareas2016alpha}. As the DCNN is a feedforward network one could expect that the DCNN will correspond best with the gamma band activity. In this work we used the DCNN as a computational model to assess whether signals in the gamma frequency are more relevant for object recognition than other frequencies.
To empirically evaluate whether gamma frequency has a specific role in visual object recognition we assessed the alignment between the responses of layers of a commonly used DCNN and the neural signals in five distinct frequency bands and three time windows along the areas constituting the ventral visual pathway. Based on the previous findings we expected that: mainly gamma frequencies should be aligned with the layers of the DCNN; the correspondence between the DCNN and gamma should be confined to early time windows; the correspondence between gamma and the DCNN layers should be restricted to visual areas. In order to test these predictions we capitalized on direct intracranial depth recordings from $100$ patients with epilepsy and a total of $11293$ electrodes implanted throughout the cerebral cortex.
We observe that activity in the gamma range along the ventral pathway is statistically significantly aligned with the activity along the layers of DCNN: gamma ($31 - 150$ Hz) activity in the early visual areas correlates with the activity of early layers of DCNN, while the gamma activity of higher visual areas is better captured by the higher layers of the DCNN. We also find that while the neural activity in the theta range ($5 - 8$ Hz) is not aligned with the DCNN hierarchy, the representational geometry of theta activity is correlated with the representational geometry of higher layers of DCNN.
\section{Simultaneous recordings of human intracortical responses and of responses of an artificial neural network to the same visual stimuli}
The dataset that was created for this study consists of two components: recordings of local field potentials in human visual cortex and activations of artificial neurons of a deep convolutional neural network trained on a visual recognition task. The raw neurological data was the same as the one used in Chapter \ref{ch:spectral-signatures-based}, please refer to section \ref{sec:raw-intracranial-data} for the technical details on the subjects and data acquisition parameters. The preprocessing pipeline was mostly similar to the one used in the previous study; the few differences are described in the section below. Further in this section we present the protocol we used to obtain activations of artificial neurons once the artificial neural network was presented with the same visual stimuli as the human subjects.
\subsection{Processing of neural data}
The final dataset consists of $2823250$ local field potential (LFP) recordings -- $11293$ electrode responses to $250$ stimuli. To remove the artifacts the signals were linearly detrended and the recordings that contained values $\ge10\sigma_{\text{images}}$, where $\sigma_{\text{images}}$ is the standard deviation of responses (in the time window from $-500$ ms to $1000$ ms) of that particular probe over all stimuli, were excluded from data. All electrodes were re-referenced to a bipolar reference. For every electrode the reference was the next electrode on the same rod following the inward direction. The electrode on the deepest end of each rod was excluded from the analysis. The signal was segmented in the range from $-500$ ms to $1000$ ms, where $0$ marks the moment when the stimulus was shown. The $-500$ to $-100$ ms time window served as the baseline. There were three time windows in which the responses were measured: $50 - 250$ ms, $150 - 350$ ms and $250 - 450$ ms.
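These cleaning steps can be sketched as follows. This is a minimal illustration on synthetic data; the array shapes and function names are ours, and only the $10\sigma_{\text{images}}$ rejection rule and the inward bipolar reference follow the description above:

```python
import numpy as np
from scipy.signal import detrend

def preprocess_probe(trials, sigma_threshold=10.0):
    """Linearly detrend each trial and reject trials whose amplitude exceeds
    sigma_threshold times the std of this probe's responses over all stimuli."""
    trials = detrend(trials, axis=-1)                 # remove linear trend per trial
    sigma = trials.std()                              # std over all stimuli and time points
    keep = np.abs(trials).max(axis=-1) < sigma_threshold * sigma
    return trials[keep], keep

def bipolar_reference(rod):
    """Re-reference electrodes on one rod: each electrode minus the next
    (inward) electrode; the deepest electrode is dropped."""
    return rod[:-1] - rod[1:]

rng = np.random.default_rng(0)
trials = rng.standard_normal((250, 1500))             # 250 stimuli, -500..1000 ms at 1 kHz
trials[3, 700] += 500.0                               # simulate an artifact on one trial
clean, keep = preprocess_probe(trials)
```

The rejection mask can then be used to exclude the corresponding stimuli from the RDM analysis downstream.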
We analyzed five distinct frequency bands: $\theta$ ($5 - 8$ Hz), $\alpha$ ($9 - 14$ Hz), $\beta$ ($15 - 30$ Hz), $\gamma$ ($31 - 70$ Hz) and $\Gamma$ ($71 - 150$ Hz). To quantify signal power modulations across time and frequency we used standard time-frequency (TF) wavelet decomposition \citep{daubechies1990wavelet}. The signal $s(t)$ is convolved with a complex Morlet wavelet $w(t, f_0)$, which has Gaussian shape in time $(\sigma_t)$ and frequency $(\sigma_f)$ around a central frequency $f_0$ and is defined by $\sigma_f = 1/2 \pi \sigma_t$ and a normalization factor. In order to achieve good time and frequency resolution over all frequencies we slowly increased the number of wavelet cycles with frequency ($\frac{f_0}{\sigma_f}$ was set to 6 for high and low gamma, 5 for beta, 4 for alpha and 3 for theta). This method allows obtaining better frequency resolution than applying a constant cycle length \citep{delorme2004eeglab}. The square norm of the convolution results in a time-varying representation of spectral power, given by: $P(t, f_0) = |w(t, f_0) \ast s(t)|^2$.
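As a sketch, the Morlet convolution above can be implemented directly in a few lines; this is a minimal illustration in which the sampling rate, test signal and unit-energy normalization are our assumptions:

```python
import numpy as np

def morlet_power(signal, f0, n_cycles, fs):
    """Spectral power at frequency f0 via convolution with a complex
    Morlet wavelet: P(t, f0) = |w(t, f0) * s(t)|^2."""
    sigma_t = n_cycles / (2 * np.pi * f0)     # since f0 / sigma_f = n_cycles, sigma_f = 1/(2 pi sigma_t)
    t = np.arange(-5 * sigma_t, 5 * sigma_t, 1 / fs)
    wavelet = np.exp(-t**2 / (2 * sigma_t**2)) * np.exp(2j * np.pi * f0 * t)
    wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))   # normalization factor
    analytic = np.convolve(signal, wavelet, mode="same")
    return np.abs(analytic) ** 2

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
s = np.sin(2 * np.pi * 40 * t)                # a pure 40 Hz test oscillation
p40 = morlet_power(s, f0=40.0, n_cycles=6, fs=fs)   # gamma-range cycle count
p10 = morlet_power(s, f0=10.0, n_cycles=4, fs=fs)   # alpha-range cycle count
```

For a 40 Hz oscillation the power estimate at $f_0 = 40$ Hz dominates the estimate at $f_0 = 10$ Hz, reflecting the frequency selectivity set by $\sigma_f$.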
Further analysis was done on the electrodes that were responsive to the visual task. We assessed neural responsiveness of an electrode separately for each region of interest -- for each frequency band and time window we compared the average post-stimulus band power to the average baseline power with a Wilcoxon signed-rank test for matched-pairs. All p-values from this test were corrected for multiple comparisons across all electrodes with the false discovery rate procedure \citep{genovese2002thresholding}. In the current study we deliberately kept only positively responsive electrodes, leaving the electrodes where the post-stimulus band power was lower than the average baseline power for future work. Table \ref{tab:responsive-counts} contains the numbers of electrodes that were used in the final analysis in each of $15$ regions of interest across the time and frequency domains.
\begin{table}[h!]
\centering
\begin{tabular}{r|ccccc}
& $\theta$ & $\alpha$ & $\beta$ & $\gamma$ & $\Gamma$ \\ \hline
$50 - 250$ ms & 1299 & 709 & 269 & 348 & 504 \\
$150 - 350$ ms & 1689 & 783 & 260 & 515 & 745 \\
$250 - 450$ ms & 1687 & 802 & 304 & 555 & 775
\end{tabular}
\caption{Number of positively responsive electrodes in each of the $15$ regions of interest in a time-resolved spectrogram.}
\label{tab:responsive-counts}
\end{table}
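The responsiveness selection described above can be sketched as follows. This is a hedged illustration with synthetic band-power values in which the Benjamini-Hochberg false discovery rate step is implemented by hand:

```python
import numpy as np
from scipy.stats import wilcoxon

def responsive_probes(post, base, alpha=0.05):
    """Wilcoxon signed-rank test of post-stimulus vs baseline band power per
    probe, FDR-corrected (Benjamini-Hochberg); keep only probes whose
    post-stimulus power exceeds baseline (positively responsive)."""
    pvals = np.array([wilcoxon(p, b).pvalue for p, b in zip(post, base)])
    m = len(pvals)
    order = np.argsort(pvals)
    passed = pvals[order] <= alpha * np.arange(1, m + 1) / m
    k = np.max(np.nonzero(passed)[0]) + 1 if passed.any() else 0
    significant = np.zeros(m, dtype=bool)
    significant[order[:k]] = True                  # all probes up to the largest passing rank
    positive = post.mean(axis=1) > base.mean(axis=1)
    return significant & positive

rng = np.random.default_rng(1)
base = rng.normal(0, 1, size=(20, 250))            # 20 probes, 250 stimuli
post = base + rng.normal(0, 1, size=(20, 250))
post[:5] += 1.0                                    # 5 probes with a real power increase
mask = responsive_probes(post, base)
```

The returned boolean mask selects the probes that enter a given region-of-interest count in Table \ref{tab:responsive-counts}.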
Each electrode's Montreal Neurological Institute coordinate system (MNI) coordinates were mapped to a corresponding Brodmann brain area~\citep{brodmann1909vergleichende} using Brodmann area atlas contained in MRICron~\citep{rorden2007mricron} software.
To summarize, once the neural signal processing pipeline is complete, each electrode's response to each of the stimuli is represented by one number -- the average band power in a given time window normalized by the baseline. The process is repeated independently for each time-frequency region of interest.
\subsection{Processing of DCNN data}
We feed the same images that were shown to the test subjects to a deep convolutional neural network (DCNN) and obtain activations of artificial neurons (nodes) of that network. We use \texttt{Caffe} \citep{jia2014caffe} implementation of \texttt{AlexNet} \citep{krizhevsky2012imagenet} architecture (see Figure \ref{fig:layer_specificity_and_volume}) trained on \texttt{ImageNet} \citep{ILSVRC15} dataset to categorize images into 1000 classes. Although the image categories used in our experiment are not exactly the same as the ones in the \texttt{ImageNet} dataset, they are a close match and the DCNN is successful in labelling them.
The architecture of the \texttt{AlexNet} artificial network can be seen on Figure \ref{fig:layer_specificity_and_volume}. It consists of 9 layers. The first is the input layer, where one neuron corresponds to one pixel of an image and activation of that neuron on a scale from 0 to 1 reflects the color of that pixel: if a pixel is black, the corresponding node in the network is not activated at all (value is 0), while a white pixel causes the node to be maximally activated (value 1). After the input layer the network has 5 \emph{convolutional layers} referred to as \texttt{conv1-5}. A convolutional layer is a collection of filters that are applied to an image. Each filter is a 2D arrangement of weights that represent a particular visual pattern. A filter is convolved with the input from the previous layer to produce the activations that form the next layer. For an example of a visual pattern that a filter of each layer is responsive to, please see Figure \ref{fig:layer_specificity_and_volume}b. Each layer consists of multiple filters and we visualize only one per layer for illustrative purposes. A filter is applied to every possible position on an input image and if the underlying patch of an image coincides with the pattern that the filter represents, the filter becomes activated and translates this activation to the artificial neuron in the next layer. That way, nodes of \texttt{conv1} tell us where on the input image each particular visual pattern occurred. Figure \ref{fig:layer_specificity_and_volume}b shows an example output feature map produced by a filter being applied to the input image. Hierarchical structure of convolutional layers gives rise to the phenomenon we are investigating in this work -- increase of complexity of visual representations in each subsequent layer of the visual hierarchy: in both the biological and artificial systems. Convolutional layers are followed by 3 \emph{fully-connected} layers (\texttt{fc6-8}). 
Each node in a fully-connected layer is, as the name suggests, connected to every node of the previous layer allowing the network to decide which of those connections are to be preserved and which are to be ignored. For both convolutional and fully-connected layers we can apply \emph{deconvolution} \citep{zeiler2014visualizing} technique to map activations of neurons in those layers back to the input space. This visualization gives a better understanding of the inner workings of a neural network. Examples of deconvolution reconstruction for each layer are given in Figure \ref{fig:layer_specificity_and_volume}b.
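The filter mechanism described above can be illustrated with a tiny numpy sketch; the toy image and the vertical-edge filter are ours for illustration, not taken from \texttt{AlexNet}:

```python
import numpy as np

def conv2d_valid(image, filt):
    """Slide a filter over every valid position of the image and record the
    response -- the resulting feature map says where the pattern occurs."""
    h, w = filt.shape
    H, W = image.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + h, j:j + w] * filt)
    return out

# Toy image: black left half (0), white right half (1) -> one vertical edge.
image = np.zeros((8, 8))
image[:, 4:] = 1.0
edge_filter = np.array([[-1.0, 1.0]])   # responds to dark-to-light transitions
fmap = conv2d_valid(image, edge_filter)
```

The feature map is non-zero only at the column where the dark-to-light edge occurs, which is exactly the "where did this pattern occur" information that \texttt{conv1} nodes carry.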
For each of the images we store the activations of all nodes of DCNN. As the network has 9 layers we obtain 9 representations of each image: the image itself (referred to as layer 0) in the pixel space and the activation values of each of the layers of DCNN. See the step 2 of the analysis pipeline on Figure \ref{fig:methods_pipeline} for the cardinalities of those feature spaces.
\section{The mapping between the Brodmann areas and layers of a Deep Convolutional Neural Network}
As a result of the preprocessing steps we were left with two sets of responses to the same set of stimuli: one from a biological system, one from an artificial one. Our ultimate goal was to compare those responses, but since the representations were very different a direct comparison was not possible. To overcome this we used representational similarity analysis -- a technique that relies on the distance measure between the data samples (see taxonomy in Table \ref{tab:representation-taxonomy} of Section \ref{sec:representation-taxonomy}) to provide a way to compare behaviors of two systems under the same set of stimuli while having different data representations.
\subsection{Mapping neural activity to the layers of DCNN}
\label{sec:mapping-brain-to-dcnn}
Once we extracted the features from both neural and DCNN responses our next goal was to compare the two and use a similarity score to map the brain area where a probe was located to a layer of DCNN. By doing that for every probe in the dataset we obtained cross-subject alignment between visual areas of human brain and layers of DCNN. There are multiple deep neural network architectures trained to classify natural images. Our choice of \texttt{AlexNet} does not imply that this particular architecture corresponds best to the hierarchy of visual layers of human brain. It does, however, provide a point of comparison for the hierarchical structure of the human visual system and was selected among other architectures due to its relatively small size and thus easier interpretability.
Recent studies comparing the responses of visual cortex with the activity of DCNN have used two types of mapping methods. The first type is based on linear regression models that predict neural responses from DCNN activations~\citep{yamins2014performance, gucclu2015deep}. The second is based on representational similarity analysis (RSA)~\citep{kriegeskorte2008representational}. We used RSA to compare distances between stimuli in the neural response space and in the DCNN activation space~\citep{cichy2016deep}.
\begin{figure}[h!]
\centering
\includegraphics[width=1.0\linewidth]{images/fig_methods_pipeline.pdf}
\caption{Overview of the analysis pipeline where $250$ natural images are presented to human subjects and to an artificial vision system. The activities elicited in these two systems are compared to map regions of human visual cortex to layers of deep convolutional neural network. \textbf{Step 1:} LFP response of each of $11293$ electrodes to each of the images is converted into the frequency domain. Activity evoked by each image is compared to the activity evoked by every other image and results of this comparison are presented as a representational dissimilarity matrix (RDM). \textbf{Step 2:} Each of the images is shown to a pre-trained DCNN and activations of each of the layers are extracted. Each layer's activations form a representation space, in which stimuli (images) can be compared to each other. Results of this comparison are summarized as a RDM for each DCNN layer. \textbf{Step 3:} Subject's intracranial responses to stimuli are randomly reshuffled and the analysis performed in step 1 is repeated $10000$ times to obtain $10000$ random RDMs for each electrode. \textbf{Step 4:} Each electrode's MNI coordinates are used to map the electrode to a Brodmann area. The figure also gives an example of electrode implantation locations in one of the subjects (blue circles are the electrodes). \textbf{Step 5:} Spearman's rank correlation is computed between the true (non-permuted) RDM of neural responses and RDMs of each layer of DCNN. Also $10000$ scores are computed with the random RDM for each electrode-layer pair to assess the significance of the true correlation score. If the score obtained with the true RDM is significant (the value of $p < 0.001$ is estimated by selecting a threshold such that none of the probes would pass it on the permuted data), then the score is added to the mapping matrix. The procedure is repeated for each electrode and the correlation scores are summed and normalized by the number of electrodes per Brodmann area. 
The resulting mapping matrix shows the alignment between the consecutive areas of the ventral stream and layers of DCNN.}
\label{fig:methods_pipeline}
\end{figure}
We built a representation dissimilarity matrix (RDM) of size \emph{number of stimuli} $\times$ \emph{number of stimuli} (in our case $250 \times 250$) for each of the probes and each of the layers of DCNN. Note that this is a non-standard approach: usually the RDM is computed over a population (of voxels, for example), while we do it for each probe separately. We use the non-standard approach because often we only had 1 electrode per patient per brain area. Given a matrix $\text{RDM}^\text{feature space}$ a value $\text{RDM}_{ij}^\text{feature space}$ in the $i$th row and $j$th column of the matrix shows the Euclidean distance between the vectors $\mathbf{v}_i$ and $\mathbf{v}_j$ that represent images $i$ and $j$ respectively in that particular feature space. Note that the preprocessed neural response to an image in a given frequency band and time window is a scalar, and hence correlation distance is not applicable. Also, given that DCNNs are not invariant to the scaling of the activations or weights in any of its layers, we preferred to use closeness in Euclidean distance as a more strict measure of similarity. In our case there are 10 different feature spaces in which an image can be represented: the original pixel space, 8 feature spaces for each of the layers of the DCNN and one space where an image is represented by the preprocessed neural response of probe $p$. For example, to analyze region of interest of high gamma in $50 - 250$ ms time window we computed $504$ RDM matrices on the neural responses -- one for each positively responsive electrode in that region of interest (see Table \ref{tab:responsive-counts}), and $9$ RDM matrices on the activations of the layers of DCNN. A pair of a frequency band and a time window, such as ``high gamma in 50-250 ms window'' is referred to as \emph{region of interest} in this work.
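Computing an RDM amounts to taking pairwise Euclidean distances between stimulus representations; a minimal sketch, where the feature dimensionalities are illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def compute_rdm(features):
    """Representational dissimilarity matrix: pairwise Euclidean distances
    between stimulus representations (one row of `features` per stimulus)."""
    return squareform(pdist(features, metric="euclidean"))

rng = np.random.default_rng(2)
layer_activations = rng.standard_normal((250, 4096))  # e.g. fc6 responses to 250 images
probe_responses = rng.standard_normal((250, 1))       # one scalar band power per image
rdm_layer = compute_rdm(layer_activations)
rdm_probe = compute_rdm(probe_responses)
```

For the scalar probe responses the Euclidean distance reduces to the absolute difference of band powers, which is why correlation distance is not applicable at the single-probe level.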
The second step was to compare the $\text{RDM}^{\text{probe}\ p}$ of each probe $p$ to RDMs of layers of DCNN. We used Spearman's rank correlation as measure of similarity between the matrices:
\begin{equation}
\label{eq:rsa_score}
\rho^{\text{probe}\ p}_{\text{layer}\ l}= \text{Spearman}(\text{RDM}^{\text{probe}\ p}, \text{RDM}^{\text{layer}\ l}).
\end{equation}
As a result of comparing $\text{RDM}^{\text{probe}\ p}$ with every $\text{RDM}^{\text{layer}\ l}$ we obtain a vector with 9 scores: $(\rho_\text{pixels},\rho_\text{conv1},\ldots,\rho_\texttt{fc8})$ that serves as a distributed mapping of probe $p$ to the layers of DCNN (see step 5 of the analysis pipeline on Figure \ref{fig:methods_pipeline}). The procedure is repeated independently for each probe in each region of interest.
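The comparison in Equation \ref{eq:rsa_score} can be sketched as follows; since RDMs are symmetric with a zero diagonal, the correlation is computed over the upper triangles (the synthetic feature matrices are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

def rsa_score(rdm_probe, rdm_layer):
    """Spearman's rank correlation between the upper triangles of two RDMs."""
    iu = np.triu_indices_from(rdm_probe, k=1)
    rho, _ = spearmanr(rdm_probe[iu], rdm_layer[iu])
    return rho

rng = np.random.default_rng(3)
features = rng.standard_normal((50, 10))
other = rng.standard_normal((50, 10))
rdm_a = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
rdm_b = rdm_a + rng.normal(0, 0.05, rdm_a.shape)     # noisy copy: similar geometry
rdm_c = np.linalg.norm(other[:, None] - other[None, :], axis=-1)  # unrelated geometry
```

A probe whose representational geometry resembles a layer's geometry scores high; unrelated geometries score near zero.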
To obtain an aggregate score of the correlation between an area and a layer the $\rho$ scores of all individual probes from that area are summed and divided by the number of $\rho$ values that have passed the significance criterion. The data for the Figures \ref{fig:all_areas_in_gammas} and \ref{fig:15_plates} are obtained in such manner.
Figure \ref{fig:rdm_corrs_within_dnn} presents the results of applying RSA within the DCNN to compare the similarity of representational geometry between the layers.
To assess the statistical significance of the correlations between the RDM matrices we ran a permutation test. In particular, we reshuffled the vector of brain responses to images $10000$ times, each time obtaining a dataset where the causal relation between the stimulus and the response is destroyed. On each of those datasets we ran the analysis and obtained Spearman's rank correlation scores. To determine a score's significance we compared the score obtained on the original (unshuffled) data with the distribution of scores obtained with the surrogate data. If the score obtained on the original data exceeded the scores obtained on the surrogate sets at the $p < 0.001$ significance level, we considered the score significant. The threshold of $p = 0.001$ was estimated by selecting a threshold such that on the surrogate data none of the probes would pass it.
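The permutation scheme can be sketched as follows; this simplified illustration uses scalar probe responses and far fewer permutations than the $10000$ used in the actual analysis:

```python
import numpy as np
from scipy.stats import spearmanr

def rsa(rdm_a, rdm_b):
    iu = np.triu_indices_from(rdm_a, k=1)
    rho, _ = spearmanr(rdm_a[iu], rdm_b[iu])
    return rho

def scalar_rdm(responses):
    # For scalar responses the Euclidean distance is simply |r_i - r_j|.
    return np.abs(responses[:, None] - responses[None, :])

rng = np.random.default_rng(4)
responses = rng.standard_normal(60)
layer_rdm = scalar_rdm(responses)           # a layer with perfectly matched geometry
true_score = rsa(scalar_rdm(responses), layer_rdm)

# Null distribution: destroy the stimulus-response pairing by reshuffling.
null = np.array([rsa(scalar_rdm(rng.permutation(responses)), layer_rdm)
                 for _ in range(200)])
threshold = np.max(null)                    # a threshold no permutation passes
```

A probe-layer score is retained only when it exceeds the surrogate distribution, mirroring the threshold selection described above.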
To size the effect caused by training artificial neural network on natural images we performed a control where the whole analysis pipeline depicted on Figure \ref{fig:methods_pipeline} is repeated using activations of a network that was not trained -- its weights are randomly sampled from a Gaussian distribution $\mathcal{N}(0, 0.01)$.
For the relative comparison of alignments between the bands and the noise level estimation we took 1,000 random subsets, each half the size of the dataset. Each region of interest was analyzed separately. The alignment score was calculated for each subset, resulting in 1,000 alignment estimates per region of interest. This allowed us to run a statistical test between each pair of regions of interest to test the hypothesis that the DCNN alignment with the probe responses in one band is higher than the alignment with the responses in another band. We used Mann-Whitney~U~test~\citep{mann1947test} to test that hypothesis and accepted the difference as significant at p-value threshold of $0.005$ Bonferroni corrected~\citep{dunn1961multiple} to $2.22\mathrm{e}{-5}$.
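This subsampling comparison can be sketched as follows; the per-probe alignment scores are synthetic and fewer subsets are drawn for brevity:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def subset_scores(scores, n_subsets=1000, seed=0):
    """Alignment estimates from random half-size subsets of per-probe scores."""
    rng = np.random.default_rng(seed)
    half = len(scores) // 2
    return np.array([rng.choice(scores, size=half, replace=False).mean()
                     for _ in range(n_subsets)])

rng = np.random.default_rng(5)
gamma_scores = rng.normal(0.15, 0.05, size=200)   # hypothetical per-probe alignment
theta_scores = rng.normal(0.05, 0.05, size=200)
a = subset_scores(gamma_scores, n_subsets=300)
b = subset_scores(theta_scores, n_subsets=300)
u, p = mannwhitneyu(a, b, alternative="greater")  # is gamma alignment higher?
```

The one-sided p-value is then compared against the Bonferroni-corrected threshold of $2.22\mathrm{e}{-5}$.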
\subsection{Quantifying properties of the mapping}
\label{sec:quantifying-mapping}
To evaluate the results quantitatively we devised a set of measures specific to our analysis. \emph{Volume} is the total sum of significant correlations (see Equation \ref{eq:rsa_score}) between the RDMs of the subset of layers $\mathbf{L}$ and the RDMs of the probes in the subset of brain areas $\mathbf{A}$:
\begin{equation}
\label{eq:volume}
V^{\text{areas}\ \mathbf{A}}_{\text{layers}\ \mathbf{L}} = \sum_{a \in \mathbf{A}}\sum_{l \in \mathbf{L}}\sum_{p \in \mathbf{S}^a_l} \rho^{\text{probe} \ p}_{\text{layer}\ l},
\end{equation}
where $\mathbf{A}$ is a subset of brain areas, $\mathbf{L}$ is a subset of layers, and $\mathbf{S}^a_l$ is the set of all probes in area $a$ that significantly correlate with layer $l$.
We express \emph{volume of visual activity} as
\begin{equation}
\label{eq:visual-volume}
V^{\mathbf{A} = \{17,18,19,37,20\}}_{\mathbf{L} = \text{all layers}},
\end{equation}
which shows the total sum of correlation scores between all layers of the network and the Brodmann areas that are located in the ventral stream: $17, 18, 19, 37$, and $20$.
\emph{Visual specificity} of activity is the ratio of the volume in visual areas to the volume in all areas together. For example, the visual specificity of all of the activity in the ventral stream that significantly correlates with any of the layers of the DCNN is
\begin{equation}
\label{eq:visual-specificity}
S^{\mathbf{A} = \{17,18,19,37,20\}}_{\mathbf{L} = \text{all layers}} = \displaystyle\frac{V^{\mathbf{A} = \{17,18,19,37,20\}}_{\mathbf{L} = \text{all layers}}}{V^{\mathbf{A} = \text{all areas}}_{\mathbf{L} = \text{all layers}}}
\end{equation}
The measures introduced so far take into account neither the hierarchy of the ventral stream nor the hierarchy of the DCNN. The following two measures, which are the main quantifiers we rely on in presenting our results, do take the hierarchical structure into account.
The \emph{ratio of complex visual features to all visual features} is defined as the total volume mapped to layers \texttt{conv5}, \texttt{fc6}, \texttt{fc7} divided by the total volume mapped to layers \texttt{conv1}, \texttt{conv2}, \texttt{conv3}, \texttt{conv5}, \texttt{fc6}, \texttt{fc7}:
\begin{equation}
\label{eq:hhl}
C^{\mathbf{A}} = \displaystyle\frac{V^{\mathbf{A}}_{\mathbf{L} = \{\texttt{conv5}, \texttt{fc6}, \texttt{fc7}\}}}{V^{\mathbf{A}}_{\mathbf{L} = \{\texttt{conv1}, \texttt{conv2}, \texttt{conv3}, \texttt{conv5}, \texttt{fc6}, \texttt{fc7}\}}}.
\end{equation}
Note that for this measure layers \texttt{conv4} and \texttt{fc8} are omitted: layer \texttt{conv4} is considered to be the transition between the layers with low- and high-complexity features, while layer \texttt{fc8} directly represents class probabilities and carries visual representations of the stimuli only on a very abstract level.
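A minimal sketch of the three measures; the significant-correlation table \texttt{rho} below is hypothetical and only illustrates the bookkeeping implied by Equations \ref{eq:volume}, \ref{eq:visual-specificity} and \ref{eq:hhl}:

```python
# rho[(area, layer)] -> list of significant probe correlations (hypothetical)
rho = {(17, "conv1"): [0.4, 0.3], (18, "conv2"): [0.2],
       (19, "conv5"): [0.3], (37, "fc6"): [0.5],
       (20, "fc7"): [0.6], (42, "conv3"): [0.1]}

VISUAL = {17, 18, 19, 37, 20}
ALL_LAYERS = {"conv1", "conv2", "conv3", "conv4", "conv5", "fc6", "fc7", "fc8"}
HIGH = {"conv5", "fc6", "fc7"}
LOW_AND_HIGH = {"conv1", "conv2", "conv3"} | HIGH   # conv4 and fc8 omitted

def volume(areas, layers):
    """Sum of significant correlations over subsets of areas and layers."""
    return sum(sum(s) for (a, l), s in rho.items()
               if a in areas and l in layers)

visual_volume = volume(VISUAL, ALL_LAYERS)                       # 2.3
specificity = visual_volume / volume(VISUAL | {42}, ALL_LAYERS)  # ~0.96
complexity = volume(VISUAL, HIGH) / volume(VISUAL, LOW_AND_HIGH) # ~0.61
```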
Finally, the \emph{alignment} between the activity in the visual areas and the activity in the DCNN is estimated as Spearman's rank correlation between two vectors, each of length equal to the number of probes whose RDMs significantly correlate with an RDM of any of the DCNN layers. The first vector is a list of the Brodmann areas $\mathbf{BA}^p$ to which each probe $p$ belongs, provided its activity representation significantly correlates with the activity representation of a layer $l$:
{\small
\begin{equation}
\label{eq:alignment-areas}
\mathbf{A}_\text{align} = \Big\{\mathbf{BA}^p\ |\ \forall p\ \exists\ l : \rho(\text{RDM}^p, \text{RDM}^l)\ \parbox{5.5em}{\tiny{is significant according to the permutation test}}\ \Big\}.
\end{equation}
}$\mathbf{A}_\text{align}$ is ordered by the hierarchy of the ventral stream: BA17, BA18, BA19, BA37, BA20. Areas are coded by integers from 0 to 4. The second vector lists the DCNN layers $\mathbf{L}^p$ to which the very same probes $p$ were assigned:
{\small
\begin{equation}
\label{eq:alignment-layers}
\mathbf{L}_\text{align} = \Big\{\mathbf{L}^p\ |\ \forall p\ \exists\ l : \rho(\text{RDM}^p, \text{RDM}^l)\ \parbox{5.5em}{\tiny{is significant according to the permutation test}}\ \Big\}.
\end{equation}
}Layers of the DCNN are coded by integers from 0 to 7. We denote the Spearman rank correlation of these two vectors as the \emph{alignment}
\begin{equation}
\label{eq:alignement}
\rho_\text{align} = \text{Spearman}(\mathbf{A}_\text{align}, \mathbf{L}_\text{align}).
\end{equation}
We note that although the hierarchy of the ventral stream is usually not defined through the progression of Brodmann areas, such ordering nevertheless provides a reasonable approximation of the real hierarchy \citep{lerner2001hierarchical, grill2004human}. As both the ventral stream and the hierarchy of layers in DCNN have an increasing complexity of visual representations, the relative ranking within the biological system should coincide with the ranking within the artificial system. Based on the recent suggestion that significance levels should be shifted to 0.005~\citep{dienes2017redefine} and after Bonferroni-correcting for 15 time-frequency windows we accepted alignment as significant when it passed $p < 0.0003(3)$.
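Once each significant probe is assigned to a Brodmann area and to a DCNN layer, the alignment score itself reduces to a single rank correlation. A sketch with a hypothetical list of significant probe-layer pairs (the integer codings are assumptions of this example):

```python
from scipy.stats import spearmanr

# Order of Brodmann areas along the ventral stream and of DCNN layers
AREA_ORDER = {17: 0, 18: 1, 19: 2, 37: 3, 20: 4}
LAYER_ORDER = {f"conv{i}": i - 1 for i in range(1, 6)}
LAYER_ORDER.update({"fc6": 5, "fc7": 6, "fc8": 7})

def alignment(significant_pairs):
    """Spearman rank correlation between the Brodmann area of each
    significantly correlating probe and the DCNN layer it was assigned to.
    `significant_pairs` is a list of (brodmann_area, layer_name) tuples,
    one per significant probe-layer correlation."""
    areas = [AREA_ORDER[a] for a, l in significant_pairs]
    layers = [LAYER_ORDER[l] for a, l in significant_pairs]
    rho, p = spearmanr(areas, layers)
    return rho, p

# Hypothetical probe assignments, roughly following the diagonal trend
pairs = [(17, "conv1"), (18, "conv2"), (19, "conv4"),
         (37, "fc6"), (20, "fc7"), (18, "conv1"), (37, "fc7")]
rho, p = alignment(pairs)
```

Because both vectors take only a handful of discrete values, ties are abundant; Spearman's rank correlation handles them via midranks, which makes it an appropriate choice here.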
\section{Alignment between the layers of the DCNN and layers of human visual cortex}
This section presents the results and observations obtained by comparing the two systems of vision. Briefly, our findings are as follows: activity in the gamma band is aligned with the hierarchical structure of a deep convolutional neural network better than activity in other frequencies, and this alignment is mostly attributable to the DCNN having two types of layers: convolutional layers, which are representationally more similar to the activity of early visual areas, and fully connected layers, which are more similar to later visual and temporal areas of the ventral stream. The section describes the evidence in favor of these conclusions and presents a more granular analysis focusing on specific areas of the visual cortex and layers of the DCNN.
\subsection{Activity in gamma band is aligned with the DCNN}
\begin{figure}[h!]
\centering
\includegraphics[width=0.85\linewidth]{images/fig_all_areas_in_gammas.pdf}
\caption{Mapping of the activity in Brodmann areas to DCNN layers. Underlying data comes from the activity in low gamma (31-70 Hz, panel a) and high gamma (71-150 Hz, panel b) bands in 150-350 ms time window. On the vertical axis there are Brodmann areas and the number of significantly correlating probes in each area out of the total number of responsive probes in that area. Horizontal axis represents succession of layers of DCNN. Number in each cell of the matrix is the total sum of correlations (between RDMs of probes in that particular area and the RDM of that layer) normalized by the number of significantly correlating probes in an area.}
\label{fig:all_areas_in_gammas}
\end{figure}
We tested the hypothesis that gamma activity has a specific role in visual object recognition compared to other frequencies. To that end we assessed the alignment of neural activity in different frequency bands and time windows to the activity of layers of a deep convolutional neural network (DCNN) trained for object recognition. In particular, we used RSA to compare the representational geometry of different DCNN layers and the activity patterns of different frequency bands of single electrodes (see Figure \ref{fig:methods_pipeline}). We consistently found that signals in low gamma ($31 - 70$ Hz) frequencies across all time windows and high gamma ($71 - 150$ Hz) frequencies in the $150-350$ ms window are aligned with the DCNN in a specific way: the increase in the complexity of features along the layers of the DCNN was roughly matched by the transformation of the representational geometry of responses to the stimuli along the ventral stream. In other words, the lower and higher layers of the DCNN explained gamma band signals from earlier and later visual areas, respectively. Figure \ref{fig:all_areas_in_gammas}a illustrates the assignment of neural activity in the low gamma band, and Figure \ref{fig:all_areas_in_gammas}b in the high gamma band, to Brodmann areas and layers of the DCNN. Most of the activity was assigned to visual areas (areas 17, 18, 19, 37, 20). Focusing on visual areas revealed a diagonal trend that illustrates the alignment between the ventral stream and the layers of the DCNN (see Figure \ref{fig:15_plates}).
\subsection{Activity in other frequency bands}
To test the specificity of the gamma frequency for visual object recognition, we assessed the alignment between the DCNN and the other frequencies. Our findings across all subjects, time windows and frequency bands are summarized in Figure \ref{fig:diagonality_specificity_and_volume}a. We note that the alignment in the gamma bands is also present at the single-subject level (see supplementary Figure \ref{fig:single_plates} and supplementary materials \ref{sup:dcnn}). Apart from the alignment we looked at the total amount of correlation and its specificity to visual areas. Figure \ref{fig:diagonality_specificity_and_volume}b shows that the volume of significantly correlating activity was highest in the high gamma range. Remarkably, 97\% of that activity was located in visual areas, which is confirmed by Figure \ref{fig:all_areas_in_gammas}, where we see that in the gamma range only a few electrodes were assigned to Brodmann areas that are not part of the ventral stream. The detailed mapping results for all frequency bands and time windows are presented in layer-to-area fashion in Figure \ref{fig:15_plates}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{images/fig_diagonality_and_significance_specificity_and_volume.pdf}
\caption{Overall relative statistics of brain responses across frequency bands and time windows. Panel \textbf{a} shows the alignment between visual brain areas and DCNN layers (see Equation \ref{eq:alignement}). The color indicates the correlation value ($\rho$) while the size of the marker shows the logarithm (so that not significant results are still visible on the plot) of inverse of the statistical significance of the correlation, dotted circle indicates $p = 0.0003(3)$ -- the Bonferroni-corrected significance threshold level of $0.005$. Panel \textbf{b} shows whether activity in a region of interest is specific to visual areas (see Equation \ref{eq:visual-specificity}): intense red means that most of the activity in that band and time window happened in visual areas, size of the marker indicates total volume (Equation \ref{eq:volume}) of activity in all areas. The maximal size of a marker is defined by the biggest marker on the figure.}
\label{fig:diagonality_specificity_and_volume}
\end{figure}
The results in the right column of Table \ref{tab:alignment-significance} show the alignment values and significance levels for a DCNN trained for object recognition on natural images. In the left part of Table \ref{tab:alignment-significance} the alignment between the brain areas and a DCNN that has not been trained on object recognition (i.e., has random weights) is given for comparison. Training a network to classify natural images drastically increases the alignment score $\rho$ and its significance. A weaker alignment (one that does not survive the Bonferroni correction) is present in the early time window in the theta and alpha frequency ranges. No significant alignment is observed in the beta band.
\begin{table}[h!]
\vspace{0.5em}
\centering
{\small
\begin{tabular}{cr|rr|rr|l}
& & \multicolumn{2}{p{1.9cm}|}{\small{\parbox{2.5cm}{Alignment with layers of randomly initialized \texttt{AlexNet}\vspace{0.4em}}}} & \multicolumn{2}{p{1.9cm}|}{\small{\parbox{2.6cm}{Alignment with layers of \texttt{AlexNet} trained on \texttt{ImageNet}}}} & \\ \hline
\textbf{Band} & \textbf{Window} & \textbf{$\rho_\text{align}$} & \textbf{p-value} & \textbf{$\rho_\text{align}$} & \textbf{p-value} & \\[1ex]
$\theta$ & 50-250 ms & 0.0632 & 0.71 & 0.2257 & 0.00231575 & \textbf{*} \\
$\theta$ & 150-350 ms & -0.1013 & 0.59 & 0.1396 & 0.08848501 & \\
$\theta$ & 250-450 ms & 0.1396 & 0.59 & 0.0695 & 0.78400416 & \\[1.5ex]
$\alpha$ & 50-250 ms & -0.2411 & 0.32 & 0.3366 & 0.00103551 & \textbf{*} \\
$\alpha$ & 150-350 ms & 0.0000 & 1.00 & 0.2720 & 0.13199463 & \\
$\alpha$ & 250-450 ms & -- & -- & -- & -- & \\[1.5ex]
$\beta$ & 50-250 ms & -- & -- & 0.4166 & 0.00397929 & \\
$\beta$ & 150-350 ms & -- & -- & 0.3808 & 0.16141286 & \\
$\beta$ & 250-450 ms & -- & -- & -- & -- & \\[1.5ex]
$\gamma$ & 50-250 ms & 0.1594 & 0.62 & 0.5979 & 0.00004623 & \textbf{***} \\
$\gamma$ & 150-350 ms & -0.1688 & 0.34 & 0.5332 & 0.00000059 & \textbf{***} \\
$\gamma$ & 250-450 ms & -0.1132 & 0.56 & 0.5217 & 0.00001624 & \textbf{***} \\[1.5ex]
$\Gamma$ & 50-250 ms & 0.0869 & 0.42 & 0.2259 & 0.00222940 & \textbf{*} \\
$\Gamma$ & 150-350 ms & -0.0053 & 0.96 & 0.3200 & 0.00000051 & \textbf{***} \\
$\Gamma$ & 250-450 ms & -0.1361 & 0.33 & 0.2688 & 0.00047999 & \textbf{*} \\[1ex]
\end{tabular}}
\caption{Alignment score $\rho_\text{align}$ and the significance levels for all 15 regions of interest. * indicates the alignments that pass the p-value threshold of 0.05 Bonferroni-corrected to $< 0.003(3)$ and *** the ones that pass 0.005~\citep{dienes2017redefine} Bonferroni-corrected to $< 0.0003(3)$. Note how the values differ between the random (control) network and the network trained on natural images. A visual representation of alignment and significance is given in Figure \ref{fig:diagonality_specificity_and_volume}a.}
\label{tab:alignment-significance}
\end{table}
\setcounter{figure}{12}
\begin{sidewaysfigure}
\centering
\includegraphics[width=1.0\linewidth]{images/fig_15_plates_MD.pdf}
\caption{Mapping of activity in visual areas to activations of layers of DCNN across five frequency bands and three time windows. Vertical axis holds Brodmann areas in the order of the ventral stream (top to bottom), horizontal axis represents the succession of layers of DCNN. Number in each cell of a matrix is the total sum of correlations (between RDMs of probes in that particular area and the RDM of that layer) normalized by the number of significantly correlating probes in an area. The alignment score is computed as Spearman's rank correlation between electrode assignment to Brodmann areas and electrode assignment to DCNN layers (Equation \ref{eq:alignement}). The numbers on the left of each subplot show the number of significantly correlating probes in each area out of the total number of responsive probes in that area.}
\label{fig:15_plates}
\end{sidewaysfigure}
In order to take into account the intrinsic variability when comparing the alignments of different bands, we performed a set of tests to determine which bands have a statistically significantly higher alignment with the DCNN than others (see section \ref{sec:mapping-brain-to-dcnn} for details). The results of these tests are presented in Table \ref{tab:pairwise-pvalues}. Based on them we draw a set of statistically significant conclusions on how the alignment of neural responses with the activations of the DCNN differs between frequency bands and time windows. In the low gamma range ($31-70$ Hz) the alignment is larger than for any other band, and within low gamma the activity in the early time window ($50-250$ ms) is aligned more strongly than in the later windows. Alignment in the high gamma range ($71 - 150$ Hz) is higher than the alignment of $\theta$, but not higher than the alignment of $\alpha$. Within the high gamma band the activity in the middle time window ($150-350$ ms) has the highest alignment, followed by the late ($250-450$ ms) window and then by the early activity in the $50-250$ ms window. Outside the gamma range we conclude that the theta band has the weakest alignment across all bands and that the alignment of early alpha activity is higher than that of early and late high gamma.
\begin{table}[h!]
\vspace{0.3em}
\centering
\begin{tabular}{p{2.5cm}p{0.5cm}cp{6.8cm}}
\normalsize$0.2079$ \tiny$\pm0.1381$\normalsize & $\theta^{50}$ & $>$ & - \vspace{0.2em}\\
\normalsize$0.3352$ \tiny$\pm0.0989$\normalsize & $\alpha^{50}$ & $>$ & $\theta^{50}$, $\Gamma^{50}$, $\Gamma^{250}$ \vspace{0.2em}\\
\normalsize$0.5652$ \tiny$\pm0.1953$\normalsize & $\gamma^{50}$ & $>$ & $\theta^{50}$, $\alpha^{50}$, $\gamma^{150}$, $\gamma^{250}$, $\Gamma^{50}$, $\Gamma^{150}$, $\Gamma^{250}$ \vspace{0.2em}\\
\normalsize$0.4880$ \tiny$\pm0.1650$\normalsize & $\gamma^{150}$ & $>$ & $\theta^{50}$, $\alpha^{50}$, $\Gamma^{50}$, $\Gamma^{150}$, $\Gamma^{250}$ \vspace{0.2em}\\
\normalsize$0.4656$ \tiny$\pm0.2185$\normalsize & $\gamma^{250}$ & $>$ & $\theta^{50}$, $\alpha^{50}$, $\Gamma^{50}$, $\Gamma^{150}$, $\Gamma^{250}$ \vspace{0.2em}\\
\normalsize$0.2172$ \tiny$\pm0.1179$\normalsize & $\Gamma^{50}$ & $>$ & - \vspace{0.2em}\\
\normalsize$0.3116$ \tiny$\pm0.1115$\normalsize & $\Gamma^{150}$ & $>$ & $\theta^{50}$, $\Gamma^{50}$, $\Gamma^{250}$ \vspace{0.2em}\\
\normalsize$0.2494$ \tiny$\pm0.1381$\normalsize & $\Gamma^{250}$ & $>$ & $\theta^{50}$, $\Gamma^{50}$ \vspace{0.2em}\\
\end{tabular}
\caption{Comparison of the alignment across regions of interest. The alignment of the region of interest on the left is statistically significantly larger than the alignments of the regions of interest on the right. To obtain these results a pairwise comparison of the magnitude of alignment between the regions of interest was made. The first column lists the significantly aligned regions, their average alignment score $\rho$ estimated on 1000 random subsets of the data (each half the size of the dataset), and the standard deviation of the alignment. On the right side of the table we list the regions of interest whose alignment the ROI on the left exceeds. The hypothesis was tested using the Mann-Whitney U test and only results with p-values that passed the threshold of $2.2\mathrm{e}{-}5$ ($0.005$ Bonferroni-corrected to take multiple comparisons into account) are presented in the table.}
\label{tab:pairwise-pvalues}
\end{table}
\subsection{Alignment is dependent on having two types of layers in DCNN}
\setcounter{figure}{13}
\begin{sidewaysfigure}
\centering
\includegraphics[width=1.0\linewidth]{images/fig_layer_specificity_and_volume.pdf}
\caption{Specificity of neural responses to layers of DCNN across frequency bands and time windows. \textbf{a.} The architecture of the DCNN. Convolutional layer 1 consists of $96$ feature detectors of size $11 \times 11$; they take as input the pixels of the image and their activations create 96 feature maps of size $55 \times 55$; the architecture of all consecutive convolutional layers is analogous. Five convolutional layers are followed by 3 fully connected layers of sizes $4096, 4096$ and $1000$ respectively. \textbf{b.} The leftmost image is an example input image. For each layer we have selected one interesting filter that depicts what is happening inside of the neural network and plotted: (a) a reconstruction of the original image from the activity of that neuron using the deconvolution \citep{zeiler2014visualizing} technique (upper larger image), (b) activations on the feature map generated by that neuron (left sub-image) and (c) a synthetic image that shows what input the neuron would be most responsive to (right sub-image). Visualizations were made with the Deep Visualization Toolbox \citep{yosinski2015understanding}. All filters are canonical to AlexNet trained on ImageNet and can be explored using the above-mentioned visualization tool or visualized directly from the publicly available weights of the network. \textbf{c.} Specificity of neural responses across frequency bands and time windows for each layer of DCNN. The size of a marker is the total activity mapped to this layer and the intensity of the color is the specificity of the activity to the Brodmann areas constituting the ventral stream: BA17-18-19-37-20.}
\label{fig:layer_specificity_and_volume}
\end{sidewaysfigure}
In Figures \ref{fig:all_areas_in_gammas} and \ref{fig:15_plates} one can observe that sites in the lower visual areas (17, 18) are mapped to DCNN layers 1 to 5 without a clear trend, but are not mapped to layers 6-8. Similarly, areas 37 and 20 are mapped to layers 6-8 but not to layers 1-5. Hence we next asked whether the observed alignment depends on having two different groups of visual areas related to two groups of DCNN layers. We tested this by computing the alignment within the subgroups: only between the lower visual areas (17, 18, 19) and the convolutional layers 1-5, and separately between the higher visual areas (37, 20) and the fully connected layers of the DCNN (6-8). We observed no significant alignment within either subgroup. We therefore conclude that the alignment mainly comes from having different groups of areas related more or less equally to two groups of layers. The underlying reason for having these two groups of layers comes from the structure of the DCNN: it has two different types of layers, convolutional (layers 1-5) and fully connected (layers 6-8); see Figures \ref{fig:layer_specificity_and_volume}a and \ref{fig:layer_specificity_and_volume}b for a visualization of the different layers and their learned features, and Section \ref{sec:dcnn-discussion} for a longer explanation of the differences between the layers. As can be seen in Figure \ref{fig:rdm_corrs_within_dnn}, the layers 1-5 and 6-8 of the DCNN indeed cluster into two groups. Taken together, we observed that early visual areas are mapped to the convolutional layers of the DCNN whereas higher visual areas match the activity profiles of the fully connected layers of the DCNN.
\setcounter{figure}{14}
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{images/fig_rdm_corrs_within_dnn.pdf}
\caption{Correlations between the representation dissimilarity matrices of the layers of the deep convolutional neural network. All scores are significant.}
\label{fig:rdm_corrs_within_dnn}
\end{figure}
\subsection{Visual complexity varies across areas and frequencies}
\setcounter{figure}{15}
\begin{sidewaysfigure}
\centering
\includegraphics[width=1.0\linewidth]{images/fig_hhl_and_volume.pdf}
\caption{Area-specific analysis of the volume of neural activity and the complexity of the visual features represented by that activity. The size of the marker shows the sum of correlation coefficients between the area and the DCNN for each particular band and time window. Color codes the ratio of complex visual features to simple visual features, i.e.\ the comparison of the activity that correlates with the higher layers (\texttt{conv5}, \texttt{fc6}, \texttt{fc7}) of the DCNN to the activity that correlates with the lower layers (\texttt{conv1}, \texttt{conv2}, \texttt{conv3}). Intense red means that the activity correlated more with the activity of the higher layers of the DCNN, while intense blue indicates the dominance of correlations with the lower layers. If the color is close to white then the activations of both lower and higher layers of the DCNN correlated with the brain responses in approximately equal proportion.}
\label{fig:hhl_and_volume}
\end{sidewaysfigure}
\setcounter{figure}{16}
To investigate the involvement of each frequency band more closely we analyzed each visual area separately. Figure \ref{fig:hhl_and_volume} shows the volume of activity in each area (size of the marker on the figure) and whether that activity was correlated more with complex visual features (red color) or simple features (blue color). In our findings the role of the earliest area (17) was minimal, but that might be explained by the very low number of electrodes in that area in our dataset (less than 1\%). One can see in Figure \ref{fig:hhl_and_volume} that activity in the theta frequency in the time windows $50-250$ ms and $150-350$ ms had a large volume and was correlated with the higher layers of the DCNN in the higher visual areas (19, 37, 20) of the ventral stream. This hints at the role of the activity reflected by the theta band in visual object recognition. In general, in areas 37 and 20 all frequency bands reflected information about high-level features in the early time windows. This implies that information about complex features was present in those areas already at early stages of processing.
\subsection{Gamma activity is more specific to convolutional layers}
We analyzed the volume and specificity of the brain activity that correlates with each layer of the DCNN separately, to see whether any bands or time windows are specific to a particular level of the hierarchy of visual processing in the DCNN. Figure \ref{fig:layer_specificity_and_volume} presents a visual summary of this analysis. In section \ref{sec:quantifying-mapping} we defined the total volume of visual activity in layers $\mathbf{L}$ as $V_\mathbf{L}$. We used the average of this measure over frequency band intervals to quantify the activity in the low and high gamma bands. We noticed that while the fraction of gamma activity that is mapped to convolutional layers is high ($\frac{\bar{V}_{\mathbf{L} = \{\texttt{conv1} \ldots \texttt{conv5}\}}^{\gamma, \Gamma}}{\bar{V}_{\mathbf{L} = \{\texttt{conv1} \ldots \texttt{conv5}\}}^{\text{all bands}}} = 0.71$), this fraction diminishes in the fully connected layers \texttt{fc6} and \texttt{fc7} ($\frac{\bar{V}_{\mathbf{L} = \{\texttt{fc6}, \texttt{fc7}\}}^{\gamma, \Gamma}}{\bar{V}_{\mathbf{L} = \{\texttt{fc6}, \texttt{fc7}\}}^{\text{all bands}}} = 0.39$). Note that \texttt{fc8} was excluded as it represents class label probabilities and does not carry information about the visual features of the objects. The activity in the lower frequency bands (theta, alpha, beta) showed the opposite trend: the fraction of volume in the convolutional layers was $0.29$, while in the fully connected layers it grew to $0.61$. This observation highlights the fact that the visual features extracted by the convolutional filters of the DCNN are more similar to gamma frequency activity, while the fully connected layers, which do not directly correspond to intuitive visual features, carry information that has more in common with the activity in the lower frequency bands.
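The two reported fractions can be recomputed mechanically from per-band volumes. The per-band numbers below are toy values, chosen only so that the resulting ratios mirror the $0.71$ and $0.39$ reported above; they are not the measured volumes:

```python
# Hypothetical per-band volumes mapped to each layer group, averaged over
# time windows: vol[band] = (volume in conv1..conv5, volume in fc6..fc7)
vol = {"theta": (2.0, 2.5), "alpha": (1.7, 2.0), "beta": (1.2, 1.6),
       "low_gamma": (7.0, 2.2), "high_gamma": (5.0, 1.7)}

def band_fraction(bands, group):
    """Fraction of the total volume in a layer group ('conv' or 'fc')
    that falls into the given frequency bands."""
    g = 0 if group == "conv" else 1
    total = sum(v[g] for v in vol.values())
    return sum(vol[b][g] for b in bands) / total

gamma_conv = band_fraction(["low_gamma", "high_gamma"], "conv")  # ~0.71
gamma_fc = band_fraction(["low_gamma", "high_gamma"], "fc")      # ~0.39
```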
\section{Extending the methodology beyond the visual system}
\label{sec:dcnn-discussion}
The recent advances in artificial intelligence research have demonstrated a rapid increase in the ability of artificial systems to solve various tasks that are associated with the higher cognitive functions of the human brain. One such task is visual object recognition. Not only do deep neural networks match human performance in visual object recognition, they also provide the best model for how biological object recognition happens \citep{kriegeskorte2015deep, yamins2013hierarchical, yamins2014performance, yamins2016using}. Previous work has established a correspondence between the hierarchy of the DCNN and the fMRI responses measured across the human visual areas \citep{gucclu2015deep, eickenberg2016seeing, seibert2016performance, cichy2016comparison}. Further research has shown that the activity of the DCNN matches the biological neural hierarchy in time as well \citep{cichy2016comparison, seeliger2017cnn}. Studying intracranial recordings allowed us to extend previous findings by assessing the alignment between the DCNN and cortical signals in different frequency bands. We observed that the lower layers of the DCNN explained gamma band signals from earlier visual areas, while the higher layers of the DCNN, responsible for more complex features, matched the gamma band signals from higher visual areas. This finding confirms previous work that has given a central role to gamma band activity in visual object recognition \citep{singer1995visual, singer1999neuronal, fisch2009neural} and feedforward communication \citep{van2014alpha, bastos2015visual, michalareas2016alpha}. Our work also demonstrates that the correlation between the DCNN and its biological counterpart is specific not only in space and time, but also in frequency.
The research into gamma oscillations started with the idea that gamma band activity signals the emergence of coherent object representations \citep{gray1989stimulus, singer1995visual, singer1999neuronal}. However, this view has evolved into the understanding that activity in the gamma frequencies reflects neural processes more generally. One particular view \citep{fries2005mechanism, fries2015rhythms} suggests that gamma oscillations provide time windows for communication between different brain regions. Further research has shown that especially feedforward activity from lower to higher visual areas is carried by the gamma frequencies \citep{van2014alpha, bastos2015visual, michalareas2016alpha}. As the DCNN is a feedforward network our current findings support the idea that gamma rhythms provide a channel for feedforward communication. However, our results by no means imply that gamma rhythms are only used for feedforward visual object recognition. There might be various other roles for gamma rhythms \citep{buzsaki2012mechanisms, fries2015rhythms}.
We observed significant alignment to the DCNN in both the low and high gamma bands. However, when directly contrasted, the alignment was stronger for low gamma signals. Furthermore, for high gamma this alignment was more restricted in time, surviving correction only in the middle time window. Previous studies have shown that low and high gamma frequencies are functionally different: while low gamma is more related to classic narrow-band gamma oscillations, high gamma frequencies seem to reflect local spiking activity rather than oscillations \citep{manning2009broadband, ray2011different}. The distinction between low and high gamma activity also has implications from the perspective of cognitive processing \citep{vidal2006visual, wyart2008neural}. In the current work we approached the data analysis from the machine learning point of view and remained agnostic with respect to the oscillatory nature of the underlying signals. Importantly, we found that numerically the alignment to the DCNN was stronger and persisted for longer in the low gamma frequencies. However, high gamma was more prominent when considering volume and specificity to visual areas. These results match well with the idea that whereas high gamma signals reflect local spiking activity, low gamma signals are better suited for adjusting communication between brain areas \citep{fries2005mechanism, fries2015rhythms}.
In our work we observed that the significant alignment depended on the fact that there are two groups of layers in the DCNN: the convolutional and fully connected layers. We found that these two types of layers have similar activity patterns (i.e. representational geometry) within the group but the patterns are less correlated between the groups (Figure \ref{fig:rdm_corrs_within_dnn}). As evidenced in the data, in the lower visual areas (17, 18) the gamma band activity patterns resembled those of convolutional layers whereas in the higher areas (37 and 20) gamma band activity patterns matched the activity of fully connected layers. Area 19 showed similarities to both types of DCNN layers.
Convolutional layers impose a certain structure on the network’s connectivity -- each layer consists of a number of visual feature detectors, each dedicated to finding a certain pattern on the source image. Each neuron of the subsequent layer in the convolutional part of the network indicates whether the feature detector associated with that neuron was able to find its specific visual pattern (neuron is highly activated) on the image or not (neuron is not activated). Fully connected layers on the other hand, as the name suggests, connect every neuron of a layer to every neuron in the subsequent layer, allowing for more flexibility in terms of connectedness between the neurons. The training process determines which connections remain and which ones die off. In simplified terms, convolutional layers can be thought of as feature detectors, whereas fully connected layers are more flexible: they do whatever needs to be done to satisfy the learning objective. It is tempting to draw parallels to the roles of lower and higher visual areas in the brain: whereas neurons in lower visual areas (17 and 18) have smaller receptive fields and code for simpler features, neurons in higher visual areas (like 37 and parts of area 20) have larger receptive fields and their activity explicitly represents objects \citep{grill2004human, dicarlo2012does}. On the other hand, while in neuroscience one makes the broad differences between lower and higher visual cortex \citep{grill2004human} and sensory and association cortices \citep{zeki1993visual}, this distinction is not so sharply defined as the one between convolutional and fully connected layers. Our hope is that the present work contributes to understanding the functional differences between lower and higher visual areas.
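The structural contrast between the two layer types is apparent already from a back-of-the-envelope parameter count; the dimensions below are the standard AlexNet values (the fc6 input size of $9216 = 256 \times 6 \times 6$ is the canonical figure, not stated explicitly in the text):

```python
# conv1: 96 filters, each looking at an 11x11 patch over 3 color channels;
# the same filter weights are reused at every position of the image.
conv1_params = 96 * (11 * 11 * 3) + 96      # weights + one bias per filter

# fc6: every one of its 4096 neurons connects to all 9216 inputs.
fc6_params = 9216 * 4096 + 4096             # weights + one bias per neuron

ratio = fc6_params / conv1_params           # fc6 has ~1000x more parameters
```

Despite processing a much larger input, conv1 needs three orders of magnitude fewer parameters than fc6, precisely because its weights are shared across image positions; this weight sharing is what makes convolutional layers behave as dedicated feature detectors, while the all-to-all connectivity of fully connected layers leaves them free to encode whatever the learning objective demands.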
Visual object recognition in the brain involves both feedforward and feedback computations \citep{dicarlo2012does, kriegeskorte2015deep}. What do our results reveal about the nature of the feedforward and feedback components in visual object recognition? We observed that the DCNN corresponds to the biological processing hierarchy even in the latest analysed time-window (Figure \ref{fig:diagonality_specificity_and_volume}). In a directly relevant previous work, Cichy and colleagues compared DCNN representations to millisecond-resolved magnetoencephalographic data from humans \citep{cichy2016comparison}. There was a positive correlation between the layer number of the DCNN and the peak latency of the correlation time course between the respective DCNN layer and the magnetoencephalography signals. In other words, deeper layers of the DCNN predicted later brain signals. As evidenced in Figure 3 of \citep{cichy2016comparison}, the correlation between DCNN and magnetoencephalographic activity peaked between ca.\ 100 and 160 ms for all layers, but significant correlation persisted well beyond that time-window. In our work, too, the alignment in low gamma was strong and significant even in the latest time-window (250--450 ms), but it was significantly smaller than in the earliest time-window (50--250 ms). In particular, the alignment was strongest for low gamma signals in the earliest time-window compared to all other frequency-and-time combinations.
The present work relies on data pooled over the recordings from $100$ subjects. Hence, the correspondence we found between responses at different frequency bands and layers of the DCNN is distributed over many subjects. While it is expected that single subjects show similar mappings (see also Supplementary Figure \ref{fig:single_plates}), the variability in the number and location of recording electrodes in individual subjects makes a full single-subject analysis difficult with this type of data. We also note that the mapping between electrode locations and Brodmann areas is approximate, and an exact mapping would require individual anatomical reconstructions and more refined atlases. Also, it is known that some spectral components are affected by visual evoked potentials (VEPs). In the present experiment we could not disentangle the effect of VEPs from the other spectral responses, as we only had one repetition per image. However, we consider the effect of VEPs to be of little concern for the present results, as it is known that VEPs have a bigger effect on low-frequency components, whereas our main results were in the low gamma band.
It must also be noted that the DCNN still explains only a part of the variability of the neural responses. Part of this unexplained variance could be noise \citep{gucclu2015deep,khaligh2014deep}. Previous works that have used RSA across brain regions have in general found DCNNs to explain a similar proportion of variance as in our results \citep{cichy2016comparison,seibert2016performance}. The main contribution of the DCNN has been its ability to explain the gradually emerging complexity of visual responses along the ventral pathway, including the highest visual areas, where the typical models (e.g. HMAX) were not as successful \citep{yamins2014performance, khaligh2014deep}. It has also recently been demonstrated that the DCNN provides the best model for explaining responses to natural images in primate V1 as well \citep{cadena2017deep}. Nevertheless, DCNNs cannot be seen as the ultimate model explaining all biological visual processing \citep{kriegeskorte2015deep, rajalingham2018large}. Most likely, over the next years deep recurrent neural networks will surpass DCNNs in the ability to predict cortical responses \citep{kriegeskorte2015deep, shi2017deep}.
Intracranial recordings are precisely localized in both space and time, thus allowing us to explore phenomena not observable with fMRI. In this work we investigated the correlation of DCNN activity with five broad frequency bands and three time windows. Our next steps will include the analysis of the activity on a more granular temporal and spectral scale. Replacing representational similarity analysis with a predictive model (such as regularized linear regression) will allow us to explore which visual features elicited the strongest responses in the visual cortex. In this study we investigated the alignment of visual areas with one of the most widely used DCNN architectures -- AlexNet. An important step forward would be to compare the alignment with other networks trained on a visual recognition task and to investigate which architectures preserve the alignment and which do not. That would provide insight into which functional properties of a DCNN architecture are compatible with the functional properties of the human visual system.
To sum up, in the present work we studied which frequency components match the increasing complexity of representations in an artificial neural network. In line with previous work in neuroscience, we observed that gamma frequencies, especially low gamma signals, are aligned with the layers of the DCNN. Previous research has shown that in terms of anatomical location the activity of the DCNN maps best onto the activity of the visual cortex, and that this mapping follows the propagation of activity along the ventral stream in time. With this work we have confirmed these findings and have additionally established at which frequency ranges the activity of the human visual cortex correlates most with the activity of the DCNN, providing the full picture of the alignment between these two systems in the spatial, temporal and spectral domains.
\chapter{State space visualization informs on representation of mental concepts in human brain}
\label{ch:mental-space-visualization-based}
Numerous studies in the area of BCI are focused on the search for a better experimental paradigm -- a set of mental actions that a user can evoke consistently and a machine can discriminate reliably. Examples of such mental activities are motor imagery, mental computations, etc. We propose a technique that instead allows the user to try different mental actions in search of the ones that will work best. The system is based on a modification of the self-organizing map (SOM) algorithm and enables interactive communication between the user and the learning system through a visualization of the user's mental state space. During the interaction with the system the user converges on the paradigm that is most efficient and intuitive for that particular user. The results of two experiments, one allowing muscular activity and another permitting mental activity only, demonstrate the soundness of the proposed method and empirically validate the performance improvement over the traditional closed-loop feedback approach.
\section{The search for distinguishable mental patterns}
In many BCI experiments, participants are asked to perform certain mental actions. Consider an experiment where a person is asked to control a point on a screen and have it move to the left. In essence, the subject is requested to focus on the thought of ``moving the point leftwards''. This request is quite ambiguous -- should the user concentrate on the abstract notion of ``left'', engage in motor imagery, or think about an unrelated concept?
The problem of choosing the best kind of mental activity for BCI has been studied by \cite{curran2003learning, friedrich2012effect}. Most experiments first propose a particular paradigm and then evaluate its average effectiveness on a sample of users. Many paradigms have been evaluated this way~\citep{anderson1996classification, babiloni2000linear, alivisatos1997functional, allison2010toward, bacsar2007brain, cabrera2008auditory, chochon1999differential, curran2004cognitive}. As brain activity for a particular mental action differs across subjects \citep{miller2012individual, ganis2005understanding, tavor2016task}, any general paradigm will be suboptimal compared to a user-specific one. In this work we propose a method that facilitates a self-guided interactive search for a user-specific paradigm through communication between the user and the learning system. We demonstrate the feasibility of the approach on EEG recordings from two separate experiments on muscular and mental activity. The approach is general and does not depend on the neuroimaging method.
To achieve our goal we replace the traditional feedback~\citep{pfurtscheller2001motor} with a visualization of the feature space within which the underlying machine learning algorithm is operating. This visualization facilitates a `dialogue' between the learning algorithm and the user by visually explaining to the user why his current set of mental actions is suboptimal, which ones are being recognized well by the system and which ones should be changed. By exploring how their mental actions affect the visualization, a user can find a suitable mental action for each of the stimuli. The exploration of the mental state space can go for as long as needed to find mental actions that the user can produce consistently over time and that are distinguishable by the algorithm.
\section{BCI via topology-preserving visualization of the feature space}
At the core of almost any BCI system lies a machine learning algorithm that classifies the user's brain signal into desired actions~\citep{lotte2007review}. The algorithm sees the incoming data in a high-dimensional space and operates in that space. If an algorithm is unable to correctly discern the desired actions from the signal, one can rely on a visualization of the data and the state space to figure out why that is the case. Visualization allows one to see particular data points in the context of other data, and to detect such issues as homogeneous data representation, failure to represent critical features of the data, biases in the data, insufficient flexibility of the feature space to represent different data points differently, too high a variance among the data points that should belong to the same group, and others. In the case of classification of mental actions, we find that the two most important aspects a visualization can help evaluate are the cases where data points from different classes look too much alike (one mental pattern is too similar to another) for the algorithm to differentiate between them, and the variance of the data within a class -- the mental patterns that a user produces for the same action are not consistent enough for the algorithm to group them together. With enough simplification we were able to present such a visualization to the user directly, allowing for a real-time evaluation of the above-mentioned issues during the training process. This allows the user to modify his mental activity in accordance with the feedback and to try to produce more consistent and more distinguishable mental patterns. Figure~\ref{fig:interaction-process} depicts the interaction process between the user and the proposed feedback system.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\linewidth]{images/interaction-process.pdf}
\caption{Real-time interaction process between the system and the user, during which the user realizes that he must modify his mental activity for one of the actions to increase the system's performance.}
\label{fig:interaction-process}
\end{figure}
Direct visualization of the space of mental signals provides more information to the user and allows him to make more informed decisions than would be possible with the traditional approach \citep{pfurtscheller2001motor}. Whereas in the traditional system the subject has no information about why the system cannot distinguish his mental states, in the \emph{adaptive} paradigm proposed in this work the subject can see which mental actions are not being recognized or are colliding with other, previously attempted, mental states. The proposed framework naturally addresses a few limitations of the traditional approach, such as the limited number of actions that can be trained simultaneously, and makes more efficient use of the training time by shifting the training data class distribution towards the more complicated cases.
The concept described above poses several technological constraints on the choice of the underlying learning algorithm. To facilitate the real-time feedback loop the algorithm should work in an online setting and be fast enough to support real-time operation. In order to present the projection of the feature space to the user the algorithm must be compatible with topology-preserving dimensionality reduction techniques \citep{gisbrecht2015data}. In this section we describe a method that satisfies those requirements.
\subsection{Self-organizing map}
\label{sec:som}
Self-organizing map (SOM) \citep{kohonen1990self} is one of the topology-preserving dimensionality reduction techniques. These techniques try to preserve relative distances through the transformation, such that data points that were close in the original space remain close in the target space, and those that were apart remain apart. SOM projects the data from the original space onto a \emph{map}, which is a collection of $m$ units organized into a multidimensional rectangular grid. Most commonly (and also in this work) a two-dimensional grid is used. Each SOM unit $u$ corresponds to a vector $\mathbf{w}_u \in \mathbb{R}^d$ in the space of input data points (signal samples from the EEG device, in our case). This way each unit effectively covers a region in the signal space. In this work the map has $625$ units ($25 \times 25$ square grid) with 630-dimensional vectors $\mathbf{w}$ initialized from the uniform distribution $\mathcal{U}(0, 0.01)$.
The learning phase consists of updating vectors $\mathbf{w}$ with each new training sample $\mathbf{x}$. Once a new sample is obtained from a neuroimaging device the \emph{best matching unit} (BMU) for that sample is found according to Equation \ref{eq:bmu} with Euclidean distance used as the distance measure.
\begin{equation}
\label{eq:bmu}
BMU(\mathbf{x}) = \underset{u\in\{1\ldots m\}}{\mathrm{argmin}}\ \mathrm{distance}(\mathbf{w}_u, \mathbf{x})
\end{equation}
Once BMU is found the weights $\mathbf{w}$ of unit $u$ and its neighbors are updated as shown in Equation \ref{eq:som-update}, where $s$ is the number of the current iteration.
\begin{equation}
\label{eq:som-update}
\mathbf{w}_u^{s + 1} = \mathbf{w}_u^s + \Theta(BMU, u, s) \alpha(s)(\mathbf{x} - \mathbf{w}_u^s)
\end{equation}
The default SOM is an offline learning algorithm that performs several passes over the training data. The update is thus repeated for each iteration $s \in \{1, \dots, S\}$, for each input data vector ($\mathbf{x}_1, \ldots, \mathbf{x}_n$) in the training set, and for each unit in the map ($u_1, \ldots, u_m$). In total this procedure is repeated up to $S \times n \times m$ times, where $S$ is the iteration limit, $n$ is the number of samples in the training data and $m$ is the size of the map. Not all units are updated with each new input vector and, furthermore, not all units among the updated ones are updated equally. Two functions in Equation \ref{eq:som-update} are responsible for deciding which units will be updated and by how much. $\Theta(b, u, s)$ is called the \emph{neighborhood function}; it determines to what extent unit $u$ is a neighbor of unit $b$: for $b$ itself $\Theta(b, b, s) = 1$, and for a unit $u$ that is too far away to be considered a neighbor of $b$, $\Theta(b, u, s) = 0$. The parameter $s$ is used to decrease the number of neighbors on later iterations. The function $\alpha(s)$ outputs the \emph{learning rate}, which decreases over iterations, allowing the learning process to converge.
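The update rule of Equation \ref{eq:som-update} can be sketched in a few lines of Python. This is a minimal illustration with NumPy: the Gaussian neighborhood, the exponential decay schedules and all numeric constants are illustrative choices, not the exact ones used in our system.

```python
import numpy as np

def bmu(weights, x):
    """Best matching unit: index of the unit whose vector w_u is
    closest (Euclidean distance) to sample x."""
    return int(np.argmin(np.linalg.norm(weights - x, axis=1)))

def som_update(weights, grid, x, s, sigma0=3.0, alpha0=0.5, tau=100.0):
    """One online SOM update: pull the BMU and its grid neighbours
    towards sample x; the neighbourhood radius and the learning rate
    both decay with the iteration number s."""
    b = bmu(weights, x)
    sigma = sigma0 * np.exp(-s / tau)            # shrinking neighbourhood radius
    alpha = alpha0 * np.exp(-s / tau)            # decaying learning rate alpha(s)
    d2 = np.sum((grid - grid[b]) ** 2, axis=1)   # squared grid distances to BMU
    theta = np.exp(-d2 / (2.0 * sigma ** 2))     # Gaussian neighbourhood Theta
    return weights + (theta * alpha)[:, None] * (x - weights)

# Toy example: a 5x5 map over 4-dimensional samples.
rng = np.random.default_rng(0)
grid = np.array([(i, j) for i in range(5) for j in range(5)], dtype=float)
w0 = rng.uniform(0.0, 0.01, size=(25, 4))
x = rng.normal(size=4)
w1 = som_update(w0, grid, x, s=0)
```

Note that $\Theta(b, b, s) = 1$ for the BMU itself, so the BMU's weight vector moves a fraction $\alpha(s)$ of the way towards the sample, while more distant units move progressively less.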
At the end of the learning process the units of the map represent centers of signal clusters in the training data. Each new data sample can be assigned to one of the clusters and this cluster will hold samples that are similar. The samples that are close in the original space will be assigned to map units that are close to each other on the map.
\subsection{Predictive online SOM}
We extend SOM to work in an online setting \citep{deng2000esom, somervuo2004online}, where the map is updated only once with each new data sample. We also assign a vector of probabilities $\mathbf{p}_u \in \mathbb{R}^C$ to each unit $u$ and use that vector to classify each new incoming data sample into one of $C$ classes. The class probability vector $\mathbf{p}_u$ of unit $u$ of the Predictive Online SOM (POSOM) is initialized to a random vector of length $C$ with values sampled from the uniform distribution $\mathcal{U}(0.0, 0.2)$. This vector holds the action probability distribution for unit $u$: it gives the probability that a signal $\mathbf{x}$ assigned to unit $u$ was produced in response to action $a$.
\begin{figure}[htb]
\centering
\includegraphics[width=0.47\linewidth]{images/SOM.pdf}
\caption{SOM extended with class probability vectors. Signal representation $\mathbf{w}_u$ in the original feature space is mapped to a unit $u$ on a two dimensional map. This unit represents a cluster of signal samples similar to $\mathbf{w}_u$, such as sample $\mathbf{x}$. Unit $u$ holds a vector of class probabilities $\mathbf{p}_u$ that shows to which class a sample assigned to the cluster with centroid $u$ most probably belongs.}
\label{fig:som-predictive}
\end{figure}
Class probability vectors are updated after each sample according to Equation \ref{eq:pred-som-update},
\begin{equation}
\label{eq:pred-som-update}
\mathbf{p}^{s+1}(u) = \mathbf{p}^s(u)(1 - \alpha) + \mathbf{c}\alpha
\end{equation}
where $s$ is the iteration number, $\alpha\in(0,1)$ is a parameter that specifies how fast the contribution of older data samples deteriorates, and $\mathbf{c}$ is a bit vector with one element per class. Exactly one element of $\mathbf{c}$ is non-zero, and its position indicates the true class of the sample.
The probability vector $\mathbf{p}_u$ is used for classification as follows: for each new sample $\mathbf{x}$ we first identify POSOM's BMU $u$ for this sample, and predict the class of this sample by choosing the most probable class in the vector $\mathbf{p}_u$.
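Together, the update of Equation \ref{eq:pred-som-update} and the argmax prediction can be sketched as follows (an illustrative Python fragment; the value of $\alpha$ and the toy sequence of samples are placeholders, not those of the actual system):

```python
import numpy as np

def update_probs(p, true_class, alpha=0.1):
    """Exponentially decaying class-probability update: mix in the
    one-hot vector c of the true class, phasing out older samples
    at rate alpha (Equation for p^{s+1})."""
    c = np.zeros_like(p)
    c[true_class] = 1.0
    return p * (1.0 - alpha) + c * alpha

def predict(p):
    """Classify a sample into the most probable class of its BMU."""
    return int(np.argmax(p))

# Toy example: one unit repeatedly hit by samples of class 1.
p = np.full(3, 1.0 / 3.0)          # uniform prior over C = 3 classes
for _ in range(30):
    p = update_probs(p, true_class=1)
```

Because the update is a convex combination, a probability vector that sums to one stays normalized, and classes that stop appearing decay geometrically rather than being forgotten abruptly.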
\subsection{POSOM-based BCI training system}
The learning method defined in the previous section satisfies all the requirements, outlined at the beginning of the chapter, for a system with interactive feedback based on a visualization of the user's mental state space.
The training process begins by presenting an empty SOM map to the user (Figure \ref{fig:posom-for-bci}a). A stimulus cue is displayed for a brief period of time and the system starts receiving samples from the neuroimaging device. It finds the best matching unit $u$ for each of the samples and updates the $\mathbf{w}_u$ and $\mathbf{p}_u$ vectors of unit $u$ and its neighbours. Some of the randomly initialized SOM units now represent certain mental patterns and are mapped to the corresponding actions; the action each unit is associated with is shown with a pictogram on the map (Figure \ref{fig:posom-for-bci}b).
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\linewidth]{images/posom-for-bci.pdf}
\caption{The process of forming the map. \textbf{a:} Visualization of an empty SOM and the very first stimulus cue. \textbf{b:} The first few samples are collected and assigned to the units on the map. \textbf{c:} Repeating steps (a) and (b) for all stimuli multiple times results in a map, where units are assigned mental state representations and corresponding actions.}
\label{fig:posom-for-bci}
\end{figure}
The process continues until the user is satisfied with his ability to produce the required set of patterns consistently and with the system's ability to assign these patterns to the correct units on the map (Figure \ref{fig:posom-for-bci}c). The user can see on the map which of the mental patterns are always assigned correctly and which ones are `jumping' across the map. This informs the user about the variance of a mental pattern; if the variance is too high, it might be best to switch to another mental pattern instead. If the user can see that the patterns of two or more different actions tend to occupy the same region of the map, he can conclude that the mental patterns he is producing for these actions are not different enough to be distinguishable, and he should consider replacing one or all of them.
\section{Experimental validation on brain-computer interface for control}
Each of the 5 test subjects completed a set of four experimental runs to compare the maximal achievable classification accuracy of the \emph{adaptive} (proposed) versus the \emph{control} approach under two different conditions. In the first pair of experiments the subjects were allowed to engage facial muscles to achieve control of the system more easily. In the second pair only mental activity was allowed, and the subjects were instructed to rely on mental imagery to control the system.
Subjects were seated in front of a computer screen in a quiet room with no distractions. All subjects had normal or corrected-to-normal vision. In all experiments subjects were presented with 3 different stimuli (\texttt{left}, \texttt{right} and \texttt{none}) and were asked to engage in different mental (or, in the case of the experiment where facial expressions were allowed, muscular) activity for each stimulus. A stimulus was shown for 7 seconds. Subjects were briefed on the usual mental paradigms, including motor imagery~\citep{pfurtscheller2001motor, hwang2009neurofeedback}, vivid visual imagery~\citep{marks1973visual, neuper2005imagery}, mental computations~\citep{chochon1999differential} and auditory recollections~\citep{cabrera2010comparison}.
\begin{figure}[htb]
\centering
\includegraphics[width=1.0\linewidth]{images/interface.pdf}
\caption{Interface of the experiment. \textbf{a:} In the adaptive experiment the user is presented with a grid that visualizes a 2D projection of the decoder's feature space. The grid is updated with each new data sample received from the neuroimaging device, enabling the user to see how his mental actions affect the mental state space representation in real time. The cue for the next stimulus is shown in the center of the screen and disappears after 1 second, allowing the user to see the full grid. \textbf{b:} The control experiment provides feedback by raising or lowering a per-class performance bar, indicating which stimuli are being recognized well by the system. \textbf{c:} The decoding models resulting from both the adaptive and control experiments are tested with the same interface, where a user is presented with a description of the mental activity he must engage in. We do not use the same cues as during training, in order to measure the ability of the user to engage in the mental activity associated with the action and to avoid measuring an involuntary reaction to the cue image.}
\label{fig:interface}
\end{figure}
\ \\
The sequence of the experimental runs each test subject has completed was as follows:
\begin{enumerate}
\item Training of the classification model in the traditional way. Stimuli were presented in a random order for 7 seconds each, for a total time of 7 minutes, keeping the number of times each stimulus is shown balanced. The test subject received real-time feedback by observing the height of the performance indicator, which changed with each new data sample. The currently highlighted bar is the current action; the height of the bar indicates the performance (Figure \ref{fig:interface}b).
\item Testing the traditional model. To avoid measuring an involuntary reaction to the cue image, the user interface of the testing stage was different from that of the training stage and is shown in Figure \ref{fig:interface}c. The currently highlighted stimulus is the one the user should engage in. Stimuli were shown for 7 seconds each, in random order, for a total length of 3 minutes.
\item Training of the classification model in the adaptive way. The user was presented with a visualization of the projection of the feature space onto a 2D grid (Figure \ref{fig:interface}a). Each stimulus was shown for 7 seconds; the duration of the experiment was not fixed, to allow the subject to test different mental activities for the same action until one that works is found. The stimuli were presented in the order of their performance rate: the actions with the lowest score were shown more frequently.
\item Testing of the adaptively trained model. The procedure repeats the steps outlined in (2) exactly, making the testing runs comparable.
\end{enumerate}
\ \\
Upon finishing the trials the test subjects were asked for their subjective evaluation of the adaptive system in comparison with the traditional one, and whether they were able to feel the interaction with the system and its efforts to adapt to their mental efforts.
\subsection{Preprocessing of EEG data}
The data was recorded using the Emotiv EPOC \citep{stytsenko2011evaluation} consumer-grade EEG device. The signal from all 14 EEG channels was split into 1000 ms windows with a step size of 250 ms. Each 1000 ms window was linearly detrended and converted to the frequency domain using the fast Fourier transform \citep{welch1967use}. Frequencies outside the 1 to 45 Hz range were excluded from further analysis.
A 1000 ms signal from one channel was thus represented by 45 frequency power measurements. By concatenating the representations of all 14 channels we obtained a 630-dimensional feature representation of the signal. In machine learning terms, a \emph{sample} $\mathbf{x}$ that represents 1000 ms of EEG recording has 630 \emph{features} and a categorical class label.
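The preprocessing pipeline described above can be sketched as follows (an illustrative NumPy/SciPy fragment; the 128 Hz value is the EPOC's sampling rate, which gives 1 Hz frequency resolution for a 1000 ms window and hence exactly 45 bins in the 1--45 Hz band):

```python
import numpy as np
from scipy.signal import detrend

FS = 128  # Emotiv EPOC sampling rate, Hz

def eeg_features(window, fmin=1.0, fmax=45.0):
    """Turn one 1000 ms window (channels x samples) into the flat
    feature vector described above: linear detrend, per-channel FFT
    power, keep the 1-45 Hz band, concatenate channels."""
    window = detrend(window, axis=1)                      # remove linear trend
    power = np.abs(np.fft.rfft(window, axis=1)) ** 2      # power spectrum
    freqs = np.fft.rfftfreq(window.shape[1], d=1.0 / FS)  # bin frequencies
    band = (freqs >= fmin) & (freqs <= fmax)              # 45 bins per channel
    return power[:, band].ravel()                         # 14 * 45 = 630 features

# One window: 14 channels, 128 samples (1000 ms at 128 Hz).
rng = np.random.default_rng(0)
features = eeg_features(rng.normal(size=(14, FS)))
```

With a 250 ms step between consecutive windows, each second of recording yields four partially overlapping samples of this form.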
\section{Feedback based on mental state space visualization leads to higher decoding accuracy}
We conducted two types of experiments to empirically validate the benefits of the adaptive search for a mental BCI paradigm via visual exploration of a projection of the subject's mental state space. During the first experiment the test subjects were allowed to engage in facial muscle activity in response to the stimuli \citep{huang2006application, heger2011online}. The second experiment was aimed at controlling the system via mental efforts only. In both experiments the proposed approach demonstrated a statistically significant improvement in performance over the traditional method. The average performance of the model trained on facial expressions with the adaptive approach was 23\% higher (Mann-Whitney U test $p = 0.006$) than that of the traditional approach. For the mental actions, the adaptive approach resulted in a model that performed significantly and consistently above chance level (F1 score = $0.422$), while the traditional approach failed to deliver a non-trivial model (F1 score = $0.354$). Comparatively, the adaptive approach yielded 19\% higher performance (Mann-Whitney U test $p = 0.018$).
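The group comparisons above use the Mann-Whitney U test on per-subject scores. A minimal sketch of such a comparison (the F1 values below are made-up placeholders for illustration, not our measured results):

```python
from scipy.stats import mannwhitneyu

# Hypothetical per-subject test-run F1 scores for the two approaches.
adaptive = [0.61, 0.55, 0.58, 0.64, 0.52]
traditional = [0.45, 0.41, 0.50, 0.47, 0.39]

# One-sided test: does the adaptive approach yield higher scores?
stat, p = mannwhitneyu(adaptive, traditional, alternative="greater")
```

The test is rank-based, so it makes no normality assumption, which is appropriate for small samples of bounded scores such as F1.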
\begin{figure}[h!]
\centering
\includegraphics[width=0.25\linewidth]{images/facial_3.png}
\includegraphics[width=0.25\linewidth]{images/mental_3.png}
\caption{Average result of the experiments on facial expressions (left) and mental activity (right). In both cases adaptive approach demonstrates statistically significant improvement over the traditional approach.}
\label{fig:results}
\end{figure}
Figure \ref{fig:real-results-facial-mental} (left) presents a detailed analysis of the results of the experiments involving facial expressions. Facial muscle activity strongly affects the EEG readings \citep{d1974contamination} and can be observed with the naked eye even in the raw signal. The primary goal of this series of experiments was to demonstrate the benefit of the adaptive approach in a high signal-to-noise ratio (SNR) scenario. Compared to the facial expressions experiment, the task of distinguishing mental states was much harder \citep{haynes2006neuroimaging}. Since the effect of changing the activity was not immediately evident, the test subjects needed more time to understand how their efforts affect the learning system. Figure \ref{fig:real-results-facial-mental} (right) shows the detailed results of the experiment.
\begin{figure}[h!]
\centering
\includegraphics[width=0.49\linewidth]{images/real_results_facial-eps-converted-to.pdf}
\includegraphics[width=0.49\linewidth]{images/real_results_mental-eps-converted-to.pdf}
\caption{Details of performance on the 3-class control problem. \textbf{Left:} 3-class training results using facial expressions. Circular markers denote the results achieved using the traditional approach and triangular ones denote the adaptive approach. Each test subject is marked with a different color. The $x$-axis shows the number of samples the algorithm needed to reach the F1 score displayed on the $y$-axis. Traditional experiments were run for 240 samples or, if a subject felt that he would benefit from longer interaction with the system, extended to 420 samples. \textbf{Right:} 3-class training results using the power of thought via the traditional (circle) and interactive (triangle) approach. The horizontal axis shows the number of samples it took to train the model, while the vertical axis indicates the performance of the final model on the test run. The experiment continued for as long as the test subject felt necessary.}
\label{fig:real-results-facial-mental}
\end{figure}
\section{The general quest of navigating human mental state space}
In this work we have proposed, implemented and empirically tested an alternative approach to closed-loop feedback BCI training that relies on a real-time visualization of the test subject's mental state space, facilitating the interaction with the learning system. In the traditional approach the feedback serves only one purpose -- to inform the test subject of his current performance. We expand the information available to the subject by allowing him to see not only that his current actions work poorly (or well), but also why a particular set of mental actions might not be a suitable one. We then provide him with an interactive way to experiment with other mental actions until he finds the ones that he can engage in consistently and that are distinguishable by the learning system. By the sheer fact of sharing more information with the user we expect our system to achieve better performance. By facilitating the interactive training process we enable the test subject, given enough effort, to reach the desired level of performance.
In addition to the primary benefit described above, we find that a few other properties of our approach are beneficial for training BCI systems: the resulting paradigm is personalized to each particular test subject and thus can be tuned better than a one-size-fits-all paradigm such as motor imagery; the system automatically takes care of failed trials and mistakes on the test subject's side -- a subject can rectify a mistake via further interaction with the system, so the failed record does not taint the dataset forever but is gradually phased out by further training; and the flexibility in training time allows deviation from a strict stimulation schedule, letting the test subject focus on the most problematic actions and give them more attention where it is needed.
We would like to highlight the choice of the testing paradigm employed in our work. We find that the testing of a general-purpose BCI system must be decoupled from the training in terms of visual cues and protocol. This is necessary to avoid training the subject to simply react to the visual cues without engaging in the corresponding mental activity. By changing the cues we ensure that at test time the subject has to invoke the mental activity corresponding to each particular action. Such an approach makes the resulting model more robust in the context of real-world applications.
We acknowledge the shortcomings of this study, such as the low number of test subjects and the low-end EEG device. This work serves the purpose of an initial validation of the concept and allows us to plan a larger study, ideally involving intracranial neuroimaging techniques that would have sufficient SNR to make the approach applicable in real-world settings. Rectifying the above-mentioned issues and further exploring topology-preserving dimensionality reduction techniques such as t-SNE \citep{maaten2008visualizing} and neural network-based solutions are the primary directions for our future work.
The proposed methodology of visualizing a test subject's mental state space in real time has wider applicability than BCIs. It gives a user the opportunity to roam a visualization space that is topologically consistent with the space of representations of the user's mental signals. This means that if two mental states are close in terms of the brain activity they generate, they will also be close visually, which allows the user to see which thoughts, emotions, motor actions, and other activities that involve brain activity are close together and which are further apart. The basis for this approach lies in the ability of machine learning interpretability tools to explain to a human observer what an artificial model is seeing. Supplying the proposed system with a high-quality neuroimaging device (such as an intracortical electrode system) would allow a researcher to gain a better understanding of the space of neural signals, or a general user to explore their own mind. Such use of this methodology could lead to new realizations about how the human brain works and to new applications of neural technology.
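To make the idea of a topology-preserving projection concrete, the following minimal sketch embeds high-dimensional feature vectors into two dimensions by gradient descent on the classical MDS stress function. It is a batch illustration of the principle only, not the specific technique used in our system, and all data in it are invented:

```python
import math
import random

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def mds_2d(X, iters=3000, lr=0.01, seed=0):
    """Project points to 2-D by gradient descent on the stress
    sum_{i<j} (d_ij - ||y_i - y_j||)^2, where d_ij are the original
    pairwise distances (normalized so the largest equals 1)."""
    n = len(X)
    D = [[euclidean(X[i], X[j]) for j in range(n)] for i in range(n)]
    dmax = max(max(row) for row in D) or 1.0
    D = [[d / dmax for d in row] for row in D]
    rng = random.Random(seed)
    Y = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n)]
    for _ in range(iters):
        for i in range(n):
            gx = gy = 0.0
            for j in range(n):
                if i == j:
                    continue
                dx = Y[i][0] - Y[j][0]
                dy = Y[i][1] - Y[j][1]
                dij = math.hypot(dx, dy) or 1e-9
                # Pull y_i toward y_j if the 2-D distance exceeds the
                # target distance, push it away otherwise.
                coef = (dij - D[i][j]) / dij
                gx += coef * dx
                gy += coef * dy
            Y[i][0] -= lr * gx
            Y[i][1] -= lr * gy
    return Y
```

In a real-time setting each incoming window of neural features would be projected and drawn as a point on screen, so mental states that produce similar activity land close together and the user can see the relative layout of their own mental actions.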
\chapter*{Introduction}
It has been a very long time since humans began to use their reasoning machinery, the brain, to reason, among other things, about that same reasoning machinery itself. Some claim that such self-referential understanding is impossible to attain in full, but others are still trying and call it Neuroscience. The approach we take is the very same we use to understand almost any other phenomenon: observe, collect data, infer knowledge from the data and formalize the knowledge into elegant descriptions of reality. In neuroscience we have come to refer to this last component as \emph{modeling}. Many aspects of the phenomenon in question have been addressed and explained using this approach by neuroscientists over the years. Some aspects remain unexplained, others even unaddressed.
Entering the era of digital computing allowed us to observe and collect data at an ever-growing rate. The amount of data gave rise to the need, and the increase in computational power provided the means, to develop automatic ways of inferring knowledge from data, and the field of Machine Learning was born. In its essence it is the very same process of knowledge discovery that we have been using for years: a phenomenon is observed, data is collected, knowledge is inferred, and a formal model of that knowledge is created. The main difference is that now a large portion of this process is done automatically.
Neuroscience is traditionally a hypothesis-driven discipline: a hypothesis has to be put forward first, before collecting and analyzing the data that will support or invalidate it. Given the amount of work required to complete a study, there is solid ground for setting up the process in this way. In a setting where collecting data and extracting knowledge takes a long time, exploratory analysis would indeed have a low yield in terms of solid and actionable knowledge, as it can often result in finding nothing of value. However, with the new ways of automatic knowledge discovery, the time required to complete the process has decreased, and the balance between the hypothesis-driven and the exploratory, data-driven approach is starting to change. In this work we put forward the argument that machine learning algorithms can act as automatic builders of insightful computational models of neurological processes. These methods can build models that rely on much larger arrays of data and explore much more complex relationships than a human modeler could. The tools that exist to estimate a model's generalization ability can act as a test of the model's elegance and applicability to the general case. Human effort can thus be shifted from manually inferring knowledge from data to interpreting the models that were produced automatically and articulating their mechanisms into intuitive explanations of reality.
In Chapter \ref{ch:ml-ns-synergy} we explore the history of the symbiosis between the fields of neuroscience and machine learning, evidencing the fact that these areas of scientific discovery have a lot in common and that discoveries in one often lead to progress in the other. Chapter \ref{ch:ml-for-modeling} explores more formally what it would take to create an intuitive description of a neurological process from a machine-learned model. We present the subfield called \emph{interpretable} machine learning, which provides the tools for in-depth analysis of machine learning models. When applied to neural data, it makes those models a source of insights about the inner workings of the brain. We propose a taxonomy of machine learning algorithms based on the internal knowledge representation a model relies on to make its predictions. In the following Chapters \ref{ch:spectral-signatures-based}, \ref{ch:intracranial-dcnn-based} and \ref{ch:mental-space-visualization-based} we provide examples of scientific studies that gained knowledge about the human brain by interpreting machine learning models trained on neurological data. The studies demonstrate the applicability of this approach on three different levels of organization: Chapter \ref{ch:spectral-signatures-based} shows how the analysis of a decoder trained on human intracerebral recordings leads to a better understanding of category-specific patterns of activity in the human visual cortex. Chapter \ref{ch:intracranial-dcnn-based} compares the structure of the human visual system with the structure of an artificial system of vision by quantifying the similarities between the knowledge representations these two systems use. The final chapter takes a step to an even higher level of abstraction and employs a topology-preserving dimensionality reduction technique in conjunction with real-time visualization to explore relative distances between a human subject's mental states.
With this work we aim to demonstrate that machine learning provides a set of readily available tools to facilitate automatic knowledge discovery in neuroscience, make a step forward in our ways of creating computational models, and highlight the importance and unique areas of applicability of exploratory data-driven approach to neuroscientific inquiry.
\chapter*{Abstract}
\vspace{-2.0em}
\small
The thesis explores the role machine learning methods play in creating intuitive computational models of neural processing. We take the perspective that, combined with interpretability techniques, machine learning could replace the human modeler and shift the focus of human effort from creating the models to extracting knowledge from the already-made models and articulating that knowledge into intuitive representations. Automatic model-building methods can process larger volumes of data and explore more computationally complex relationships than a human modeler could. This perspective makes the case for the larger role that an exploratory, data-driven approach to computational neuroscience could play while coexisting alongside the traditional hypothesis-driven approach. We provide an example of how an intuitive model can be extracted from machine-learned knowledge, explore the major machine learning algorithms in the context of the knowledge representation they employ, and propose a taxonomy of machine learning algorithms based on the knowledge representation that drives their decision-making process.
We exemplify the illustrated approach in the context of the knowledge representation taxonomy with three research projects that employ interpretability techniques on top of machine learning methods at three different levels of neural organization. In each case we demonstrate the applicability of the approach and present the neuroscientific knowledge it allowed us to extract. The first study (Chapter \ref{ch:spectral-signatures-based}) explores feature importance analysis of a random forest decoder trained on intracerebral recordings from 100 human subjects to identify spectrotemporal signatures that characterize local neural activity during the task of visual categorization. The second study (Chapter \ref{ch:intracranial-dcnn-based}) employs representational similarity analysis to compare the neural responses of the areas along the ventral stream with the activations of the layers of a deep convolutional neural network. The analysis allowed us to draw conclusions and make observations about the hierarchical organization of the human visual cortex and the similarities between the biological and an artificial system of vision. The third study (Chapter \ref{ch:mental-space-visualization-based}) proposes a method that allows test subjects to visually explore the state representation of their neural signal in real time. This is achieved by using a topology-preserving dimensionality reduction technique that transforms the neural data from the multidimensional representation used by the computer into a two-dimensional representation a human can grasp.
Taken together, the approach, the taxonomy, and the examples present a strong case for the applicability of machine learning methods in conjunction with interpretability techniques to automatic knowledge discovery in neuroscience. Seen from this perspective, machine learning models cease to be mere statistical black boxes and, by capturing the underlying dynamics of real-life processes, reintroduce themselves as candidate models of reality.
\normalsize
\tableofcontents
\addcontentsline{toc}{chapter}{Introduction}
\include{introduction}
\include{chapter-1}
\include{chapter-2}
\include{chapter-3}
\include{chapter-4}
\include{chapter-5}
\chapter*{Conclusion}\addcontentsline{toc}{chapter}{Conclusion}
Traditionally, neuroscience was, and to a large extent still is, a hypothesis-driven science. The growing amount of data generated by modern neuroimaging techniques merits, however, an increase in the role that the exploratory, data-driven approach plays in modern neuroscience. In this thesis we make the case for the importance of adopting methods of interpretability of machine learning models in neuroscientific research. We discuss the benefit that machine learning brings by augmenting the ways of neuroscientific inquiry with an additional path of automatic hypothesis generation and validation.
The ultimate purpose of proposing new hypotheses and models of brain function is to discover the ones that describe the phenomenon well. In the hypothesis-driven approach most of the process relies on a human investigator, who first observes a certain phenomenon, then comes up with a model or a hypothesis to describe it, collects the data, validates the model based on the data, and, finally, rules the model to be true or insightful or discards it. In this scenario the role of automated data processing is confined to obtaining and processing the data to provide measures on certain narrowly defined experimental metrics that the investigator needs to reach a conclusion. This approach allows the human investigator to be in full control of the meaning of the model that is being created, but scales up only by increasing the number of investigators, naturally limiting the space of possible hypotheses that it is humanly possible to test against the data.

In the data-driven approach the process starts with a dataset that contains the observations of a phenomenon. Machine learning methods then automatically generate models (hypotheses) that attempt to explain the dynamics captured by the data. The model validation step automatically discards most of the models that merely capture shallow statistical dependencies in the particular data instance that was recorded and do not generalize to capture the underlying process. Some of the models, however, do, and, when validated, are shown to generalize well and to correctly describe previously unseen data from the same phenomenon. When this happens we know that the process of automatic modeling has captured a description of the process that governs the phenomenon. All of the steps leading to this stage can be done with minimal human involvement and scale up with the amount of computational resources.
This allows the space of models that are proposed and tested to be considerably larger than if we only used humans to sift through the possible explanations. Human effort can now be concentrated on the models and hypotheses that were identified by the automatic process as descriptive and general. Interpreting those models will show which ones are true and insightful and which ones are trivial.
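The contrast between models that merely fit noise and models that capture the underlying process can be made concrete with a toy sketch. It is not taken from any of the studies in this thesis; the data and the "hypothesis space" of simple threshold rules are invented for illustration:

```python
import random

def make_data(n, seed):
    """Synthetic 'phenomenon': the label truly depends only on
    feature 0; features 1-4 are pure noise."""
    rng = random.Random(seed)
    X = [[rng.uniform(-1, 1) for _ in range(5)] for _ in range(n)]
    y = [1 if x[0] > 0.2 else 0 for x in X]
    return X, y

def accuracy(rule, X, y):
    feat, thr = rule
    return sum((1 if x[feat] > thr else 0) == t
               for x, t in zip(X, y)) / len(y)

def generate_rules(X):
    """Automatic 'hypothesis generation': every simple threshold rule
    (feature f, threshold taken from an observed value) is a candidate
    model of the phenomenon."""
    return [(f, x[f]) for f in range(len(X[0])) for x in X]

X_train, y_train = make_data(100, seed=1)
X_test, y_test = make_data(400, seed=2)  # unseen data, same phenomenon

# Automatic validation: select the rule by fit, then judge it on data
# it has never seen. Rules built on the noise features fail here, while
# the rule pointing at the feature that drives the phenomenon survives.
best = max(generate_rules(X_train),
           key=lambda r: accuracy(r, X_train, y_train))
print("selected feature:", best[0],
      "held-out accuracy:", accuracy(best, X_test, y_test))
```

The human effort in this loop is reserved for the last step: looking at the surviving rule and recognizing that it names the feature that actually governs the data.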
The hypothesis-driven and data-driven approaches to hypothesis generation cover different parts of the conceptual space of unknown hypotheses and should both be exploited to advance our knowledge of the brain. The data-driven approach is designed to excel in exploratory analysis, and, given the above-mentioned volumes of data, such exploration has an ever-growing chance of making a discovery. To properly facilitate this process, the interpretability techniques that have established their role in the general machine learning community have to find their way into neuroscience research on a wider scale. We hope this thesis contributes to this process.
The exploration of the symbiosis between the fields of neuroscience and machine learning in Chapter \ref{ch:ml-ns-synergy} establishes the already existing, and also the emerging, track record of mutual benefit those two fields have provided each other. We find that one of the ways this benefit can prosper further is by adopting the view presented in Chapter \ref{ch:ml-for-modeling} of machine learning playing the role of a builder of computational neurophysiological models. The need to interpret the knowledge a model has acquired and to articulate it in an intuitive manner leads us to the interpretability techniques. The need for a better understanding of the knowledge representations that artificial learning algorithms create requires a structured approach to navigating the space of those representations. In Section \ref{sec:representation-taxonomy} we propose a taxonomy of machine learning methods based on knowledge representation and hope that this angle of view proves useful when designing the next neuroscientific study that involves machine learning methods.
The work that became the basis of this thesis serves as an example of adopting the proposed perspective and methodology and demonstrates its applicability on three different levels of organization. In Chapter \ref{ch:spectral-signatures-based} an interpretable machine learning model is used to analyze neural dynamics at the level of localized activity across the human brain. This analysis allowed us to characterize neural locations and their activity during the task of visual perceptual categorization. The uncovered signatures of visual processing in the human brain provided a multifaceted view of the spectral, temporal and anatomical characteristics of this process. The comparison between biological and artificial systems of vision in Chapter \ref{ch:intracranial-dcnn-based} gives an example of the role machine learning models can play at a more abstract level, where the aim is to understand the functional organization of the human brain. In the last study, described in Chapter \ref{ch:mental-space-visualization-based}, dimensionality reduction and visualization techniques provide an actionable insight into the relative organization of mental concepts within a subject's mental state space. Visualizing the mental state space allows us to analyze the behavior of our brain at the highest level of abstraction.
Taken together, the ideas and the results of this thesis highlight one of the roles machine learning could play in advancing our understanding of the human brain. The ability to uncover patterns and extract knowledge from data makes machine learning a suitable tool for augmenting our capacity to create explanations of the natural phenomena around us. Neuroscience is a particularly fitting area for the application of this methodology due to its symbiosis with the area of artificial intelligence and machine learning. The shared goal of uncovering the mechanism of intelligence has made the field of artificial intelligence follow and reapply the discoveries made in neuroscience. In some cases this has led to the realization that both systems, the biological and the artificial one, if presented with the same functional goal, sometimes develop similar mechanisms for achieving it. The similarities between the mechanisms employed by biological and artificial systems that have been discovered to date, such as the hierarchy of the visual system, the mechanism of periodic memory consolidation, and the grid-like representation of space for navigation, endorse the fact that an artificial learning system can develop a mechanism similar to the one used by our brain. In this thesis we stress the importance of continued analysis of the ways in which machine learning algorithms achieve their results, as understanding these mechanisms can shed light on the mechanisms employed by our brain.
\ \\
\emph{We hope you have found the perspective curious and the examples convincing enough to let the proposed approach occupy a part of your mental state space.}
\addcontentsline{toc}{chapter}{Bibliography}
\printbibliography